Question
Dear All,
I am trying to implement a neural network that uses backpropagation. So far I have gotten to the stage where each neuron receives weighted inputs from all neurons in the previous layer, applies the sigmoid function to their sum, and passes the result on to the following layer. Finally, the entire network produces an output O. I then calculate the error as E = 1/2(D-O)^2, where D is the desired value. At this point, with every neuron in the network having its individual output and the overall error of the net known, how can I backpropagate the error to adjust the weights?
Cheers :)
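For reference, a minimal NumPy sketch of the forward pass and error computation described above might look like the following; the layer sizes, weights, and input values are purely illustrative placeholders, not anything from the original post:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One fully connected layer: each neuron takes the weighted sum of all
# outputs from the previous layer and applies the sigmoid to it.
def forward_layer(inputs, weights, biases):
    return sigmoid(weights @ inputs + biases)

# Hypothetical example: 3 inputs -> 4 hidden neurons -> 1 output neuron
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

x = np.array([0.5, -0.2, 0.8])       # example input
d = np.array([1.0])                  # desired output D

h = forward_layer(x, W1, b1)         # hidden layer activations
o = forward_layer(h, W2, b2)         # network output O
error = 0.5 * np.sum((d - o) ** 2)   # E = 1/2 (D - O)^2
```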
Answer 1:
I would highly suggest looking at this website, this is what I've used in the past:
http://www.codeproject.com/Articles/14342/Designing-And-Implementing-A-Neural-Network-Librar
Answer 2:
You must apply the next step of the backpropagation algorithm in training mode, the delta rule; it tells you how much each weight should change at every training step. See the sketch after the links below.
http://en.wikipedia.org/wiki/Delta_rule
http://en.wikipedia.org/wiki/Backpropagation
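As a rough illustration (not code from either link), the delta rule for a sigmoid network amounts to computing a delta for each layer via the chain rule and updating each weight by learning_rate * delta * input. A minimal sketch, assuming the same hypothetical two-layer setup as in the question, could be:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(a):
    # derivative of the sigmoid expressed through its output a = sigmoid(x)
    return a * (1.0 - a)

def backprop_step(x, d, W1, b1, W2, b2, lr=0.1):
    # forward pass
    h = sigmoid(W1 @ x + b1)              # hidden activations
    o = sigmoid(W2 @ h + b2)              # network output O

    # output-layer delta: dE/dO * dO/dnet = -(D - O) * O * (1 - O)
    delta_o = -(d - o) * sigmoid_prime(o)
    # hidden-layer delta: propagate the output delta back through W2
    delta_h = (W2.T @ delta_o) * sigmoid_prime(h)

    # delta-rule / gradient-descent updates: w <- w - lr * delta * input
    W2 -= lr * np.outer(delta_o, h)
    b2 -= lr * delta_o
    W1 -= lr * np.outer(delta_h, x)
    b1 -= lr * delta_h
    return W1, b1, W2, b2
```

Repeating this step over the training examples until the error E stops decreasing is the training loop the links above describe in detail.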
Hope this helps
Source: https://stackoverflow.com/questions/17113497/backpropagation-algorithm-implementation