How does a back-propagation training algorithm work?

囚心锁ツ 2020-12-08 05:36

I've been trying to learn how back-propagation works with neural networks, but I have yet to find a good explanation from a less technical perspective.

How does back-propagation work?

4 Answers
  •  不思量自难忘°
    2020-12-08 06:37

    Back-propagation follows a logic very similar to that of feed-forward; the difference is the direction of data flow. In the feed-forward step, you have the inputs and you observe the output they produce: you propagate the values forward, layer by layer, through the neurons ahead.
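    The feed-forward step can be sketched as follows. This is a minimal illustration, not code from the answer: the network shape (2 inputs, 3 hidden neurons, 1 output) and the random weights are assumptions chosen for the example.

    ```python
    import numpy as np

    def sigmoid(z):
        # standard logistic activation
        return 1.0 / (1.0 + np.exp(-z))

    # Hypothetical 2-3-1 network; weights are arbitrary for illustration
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)   # hidden layer
    W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # output layer

    def forward(x):
        """Propagate an input vector forward through both layers."""
        h = sigmoid(W1 @ x + b1)   # hidden activations
        y = sigmoid(W2 @ h + b2)   # network output
        return h, y
    ```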

    In the back-propagation step, you cannot directly know the error of every neuron, only the errors of those in the output layer. Calculating the errors of the output nodes is straightforward: take the difference between a neuron's output and the target output for that training instance. The neurons in the hidden layers, however, must infer their errors from these, so you pass the error values backward. Each hidden neuron then computes its own error as the weighted sum of the errors from the layer ahead, and uses it to update its weights and other parameters.
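    A sketch of one such backward pass, assuming the same hypothetical 2-3-1 sigmoid network and a squared-error loss (both assumptions, not stated in the answer). Note how the hidden-layer error is exactly the weighted sum of the output-layer errors, passed back through the weights:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_step(params, x, target, lr=0.5):
        """One feed-forward plus back-propagation pass on a single example.

        params is (W1, b1, W2, b2) for an illustrative 2-3-1 network.
        """
        W1, b1, W2, b2 = params

        # feed-forward: compute all activations
        h = sigmoid(W1 @ x + b1)           # hidden layer
        y = sigmoid(W2 @ h + b2)           # output layer

        # output-layer error: (prediction - target), scaled by the
        # sigmoid derivative y * (1 - y)
        delta_out = (y - target) * y * (1.0 - y)

        # hidden-layer error: weighted sum of errors from the layer ahead
        delta_hid = (W2.T @ delta_out) * h * (1.0 - h)

        # gradient-descent updates from the local errors
        return (W1 - lr * np.outer(delta_hid, x),
                b1 - lr * delta_hid,
                W2 - lr * np.outer(delta_out, h),
                b2 - lr * delta_out)
    ```

    Repeating this step on a training example drives the squared error of the network's output downward.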

    A step-by-step demo of feed-forward and back-propagation steps can be found here.


    Edit

    If you're a beginner to neural networks, you can start by learning the Perceptron, then advance to neural networks, which are in fact multilayer perceptrons.
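    As a starting point, here is a minimal single perceptron trained with the classic perceptron learning rule. The task (learning the AND function), the learning rate, and the epoch count are all arbitrary choices for the illustration:

    ```python
    import numpy as np

    # Training data for logical AND: linearly separable, so the
    # perceptron rule is guaranteed to converge
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    t = np.array([0, 0, 0, 1])

    w, b, lr = np.zeros(2), 0.0, 0.1
    for _ in range(20):                     # a few passes over the data
        for xi, ti in zip(X, t):
            pred = 1 if xi @ w + b > 0 else 0
            # update weights only when the prediction is wrong
            w += lr * (ti - pred) * xi
            b += lr * (ti - pred)

    preds = [1 if xi @ w + b > 0 else 0 for xi in X]
    ```

    Once this single-neuron case is clear, a multilayer perceptron is just several of these layers stacked, with back-propagation supplying the error signal that the hidden layers cannot observe directly.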
