Generating predictions with a back-propagation neural network model in R returns the same value for all observations


Question


I'm trying to generate predictions on a new data set from a back-propagation neural network trained with the neuralnet package. I used the compute function, but I end up with the same value for every observation. What did I do wrong?

# load the package
library(neuralnet)

# the data
Var1 <- runif(50, 0, 100)
sqrt.data <- data.frame(Var1, Sqrt=sqrt(Var1))

# training the model
backnet = neuralnet(Sqrt~Var1, sqrt.data, hidden=2, err.fct="sse",
                    linear.output=FALSE, algorithm="backprop",
                    learningrate=0.01)

print (backnet)

Call: neuralnet(formula = Sqrt ~ Var1, data = sqrt.data, hidden = 2,     learningrate = 0.01, algorithm = "backprop", err.fct = "sse",     linear.output = FALSE)

1 repetition was calculated.

        Error Reached Threshold Steps
1 883.0038185    0.009998448226  5001

valnet = compute(backnet, (1:10)^2)

summary (valnet$net.result)

      V1           
Min.   :0.9998572  
1st Qu.:0.9999620  
Median :0.9999626  
Mean   :0.9999505  
3rd Qu.:0.9999626  
Max.   :0.9999626  

print (valnet$net.result)

         [,1]
[1,] 0.9998572272
[2,] 0.9999477241
[3,] 0.9999617930
[4,] 0.9999625684
[5,] 0.9999625831
[6,] 0.9999625831
[7,] 0.9999625831
[8,] 0.9999625831
[9,] 0.9999625831
[10,] 0.9999625831

Answer 1:


I was able to get the following to work:

library(neuralnet)

# the data
Var1 <- runif(50, 0, 100)
sqrt.data <- data.frame(Var1, Sqrt=sqrt(Var1))

# training the model: without algorithm="backprop" and linear.output=FALSE,
# neuralnet falls back to its defaults (rprop+ and a linear output neuron)
backnet = neuralnet(Sqrt~Var1, sqrt.data, hidden=10, learningrate=0.01)

print (backnet)


Var2<-c(1:10)^2

valnet = compute(backnet, Var2)

print (valnet$net.result)

Returns:

     [,1]
[1,] 0.9341689395
[2,] 1.9992711472
[3,] 3.0012823496
[4,] 3.9968226732
[5,] 5.0038316976
[6,] 5.9992936957
[7,] 6.9991576925
[8,] 7.9996871591
[9,] 9.0000849977
[10,] 9.9891334545

According to the neuralnet reference manual, the package's default training algorithm, resilient backpropagation (rprop+), is itself a backpropagation variant, so you do not need to request algorithm="backprop" explicitly:

neuralnet is used to train neural networks using backpropagation, resilient backpropagation (RPROP) with (Riedmiller, 1994) or without weight backtracking (Riedmiller and Braun, 1993) or the modified globally convergent version (GRPROP) by Anastasiadis et al. (2005). The function allows flexible settings through custom-choice of error and activation function. Furthermore the calculation of generalized weights (Intrator O. and Intrator N., 1993) is implemented.
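For completeness: the saturation in the question comes from linear.output=FALSE, which passes the output neuron through the logistic activation function, so every prediction is squashed into (0, 1). Since the targets range up to 10, the network can do no better than the ceiling near 0.9999. If you do want to keep algorithm="backprop" and a logistic output, one workaround is to rescale both input and target into [0, 1] and invert the scaling after prediction. Here is a minimal sketch of that idea; the scaled columns (Var1Scaled, SqrtScaled) and the helper values (max.in, max.out) are my own illustrative names, not from the original answer, and plain backprop may still need tuning of learningrate or stepmax to converge:

library(neuralnet)

Var1 <- runif(50, 0, 100)
sqrt.data <- data.frame(Var1, Sqrt=sqrt(Var1))

# rescale input and target into [0, 1] so the logistic output can reach them
max.in  <- max(sqrt.data$Var1)
max.out <- max(sqrt.data$Sqrt)
sqrt.data$Var1Scaled <- sqrt.data$Var1 / max.in
sqrt.data$SqrtScaled <- sqrt.data$Sqrt / max.out

# same settings as the question, but on the scaled columns
backnet = neuralnet(SqrtScaled~Var1Scaled, sqrt.data, hidden=2,
                    err.fct="sse", linear.output=FALSE,
                    algorithm="backprop", learningrate=0.01)

# scale new observations the same way before predicting
newdata <- data.frame(Var1Scaled=(1:10)^2 / max.in)
valnet = compute(backnet, newdata)

# undo the output scaling to recover predictions on the sqrt scale
print(valnet$net.result * max.out)

That said, plain backprop with a fixed learning rate is fragile on unscaled problems like this one; the default rprop+ used in the working code above is usually the more robust choice.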



Source: https://stackoverflow.com/questions/19206052/generating-prediction-using-a-back-propagation-neural-network-model-on-r-returns
