How does a back-propagation training algorithm work?

囚心锁ツ  2020-12-08 05:36

I've been trying to learn how back-propagation works with neural networks, but I have yet to find a good explanation from a less technical perspective.

How does back-propagation work?

4 Answers
  •  一向 (OP)
     2020-12-08 06:42

    I'll try to explain without delving too much into code or math.

    Basically, you compute the classification from the neural network and compare it to the known value. This gives you an error at the output node.

    Now, the output node has N incoming links from other nodes. We propagate the error back to the last layer before the output node, then propagate it down to the layer before that (when a node feeds more than one outgoing link, you sum the errors coming back along them), and then recursively propagate back to the first layer.
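    The "push the error back one layer" step above can be sketched with a couple of made-up numbers (the weights, target, and output here are hypothetical, purely for illustration):

    ```python
    # Minimal sketch: each hidden node's error is the output error
    # weighted by the link connecting it to the output node. With
    # several outgoing links you would sum one such term per link.

    target = 1.0
    output = 0.73                   # what the network actually produced
    output_error = target - output  # error at the output node

    # weights from two hidden nodes into the single output node
    weights_to_output = [0.4, -0.2]

    # each hidden node receives its share of the error through its link
    hidden_errors = [w * output_error for w in weights_to_output]
    print(hidden_errors)
    ```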

    To adjust the weights for training, for each node you basically do the following:

    for each link in node.uplinks
      error = link.destination.error
      main = learningRate * error * node.output  // The raw change is based on error, output, and the learning rate
    
      link.weight += main + alpha * link.momentum // adjust the weight by the current change plus a fraction (alpha) of the previous change -- the "momentum"
    
      link.momentum = main // Momentum is based on the last change. 
    

    learningRate and alpha are parameters you can tune: learningRate scales each raw change, and alpha controls how much of the previous change carries over. Together they trade off how quickly the network homes in on a solution against how (hopefully) accurately it settles on one in the end.
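    One way to make that per-link update runnable (the `Link` class and parameter values are my own assumptions for illustration, and I read alpha as a momentum coefficient that is added, not multiplied):

    ```python
    # A sketch of the per-link weight update with momentum.

    class Link:
        def __init__(self, weight):
            self.weight = weight
            self.momentum = 0.0  # previous raw change, smooths later updates

    def update_link(link, error, node_output, learningRate=0.5, alpha=0.9):
        main = learningRate * error * node_output  # raw change from error and output
        link.weight += main + alpha * link.momentum  # current change plus a fraction of the last one
        link.momentum = main                         # remember this change for next time

    link = Link(weight=0.4)
    update_link(link, error=0.27, node_output=0.73)  # first step: no momentum yet
    update_link(link, error=0.20, node_output=0.73)  # second step: momentum kicks in
    print(link.weight)
    ```

    Because the momentum term reuses the previous change, consecutive updates in the same direction compound, which tends to speed up convergence along consistent error gradients.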
