Calculate the error using a sigmoid function in backpropagation

Submitted by 旧城冷巷雨未停 on 2019-12-01 18:17:52
mikera

The reason you need this is that you are calculating the derivative of the error function with respect to the neuron's inputs.

When you take the derivative via the chain rule, you need to multiply by the derivative of the neuron's activation function (which in this case is the sigmoid).

Here's the important math.

Calculate the derivative of the error on the neuron's inputs via the chain rule:

E = -(target - output)^2

dE/dinput = dE/doutput * doutput/dinput

Work out doutput/dinput:

output = sigmoid(input)

doutput/dinput = output * (1 - output)    (derivative of sigmoid function)
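This identity is easy to check numerically. Below is a small sketch (function names are my own, not from the original answer) that compares the closed-form derivative output * (1 - output) against a finite-difference approximation:

```python
import math

def sigmoid(x):
    """Logistic sigmoid: 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(x):
    """Derivative of the sigmoid, expressed via its output: s * (1 - s)."""
    s = sigmoid(x)
    return s * (1.0 - s)

# Compare with a central finite-difference approximation at x = 0.7
x, h = 0.7, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print(abs(sigmoid_deriv(x) - numeric) < 1e-8)  # → True
```

Expressing the derivative in terms of the output is what makes backpropagation cheap: the forward pass has already computed `output`, so no extra exponentials are needed.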

and, since E = -(target - output)^2 gives dE/doutput = 2 * (target - output), therefore:

dE/dinput = 2 * (target - output) * output * (1 - output)
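The whole derivation for a single output neuron can be sketched as below. This follows the sign convention used above (E = -(target - output)^2); the function name is illustrative, not from any particular library:

```python
import math

def sigmoid(x):
    """Logistic sigmoid: 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def error_gradient(target, net_input):
    """dE/dinput for one output neuron, per the derivation above:
    dE/dinput = 2 * (target - output) * output * (1 - output)."""
    output = sigmoid(net_input)
    return 2.0 * (target - output) * output * (1.0 - output)

# Example: target 1.0, net input to the neuron 0.5
print(round(error_gradient(1.0, 0.5), 4))  # → 0.1774
```

In a weight update, this quantity (often called the neuron's "delta") is then multiplied by the input along each incoming weight to get dE/dweight.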

The choice of the sigmoid function is by no means arbitrary. Basically you are trying to estimate the conditional probability of a class label given some sample. If you take the absolute value, you are doing something different, and you will get different results.

For a practical introduction to the topic, I would recommend the online Machine Learning course by Prof. Andrew Ng:

https://www.coursera.org/course/ml

and the book by Prof. Christopher Bishop for an in-depth study of the topic:

http://www.amazon.com/Neural-Networks-Pattern-Recognition-Christopher/dp/0198538642/ref=sr_1_1?ie=UTF8&qid=1343123246&sr=8-1&keywords=christopher+bishop+neural+networks
