Gradient checking in backpropagation


Question


I'm trying to implement gradient checking for a simple feedforward neural network with a 2-unit input layer, a 2-unit hidden layer, and a 1-unit output layer. What I do is the following:

  1. For each weight w of the network (between all layers), perform forward propagation using w + EPSILON and then w - EPSILON.
  2. Compute the numerical gradient using the results of the two feedforward propagations.

What I don't understand is how exactly to perform the backpropagation. Normally, I compare the output of the network to the target data (in the case of classification) and then backpropagate the error derivative through the network. However, I think in this case some other value has to be backpropagated, since the results of the numerical gradient computation do not depend on the target data (but only on the input), while the error backpropagation depends on the target data. So, what is the value that should be used in the backpropagation part of gradient checking?


Answer 1:


Backpropagation uses gradients that have been derived analytically; those formulas are then applied while training. A neural network is essentially a multivariate function whose coefficients, or parameters, need to be found, i.e. trained.

The gradient with respect to a specific variable is the rate of change of the function value as that variable changes. Therefore, as you mentioned, from the definition of the first derivative we can numerically approximate the gradient of any function, including a neural network's cost.
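In symbols, with J(w) denoting the cost viewed as a function of a single weight w and eps a small step (for example 1e-4, an illustrative choice), the approximation used below is:

    dJ/dw ≈ (J(w + eps) - J(w - eps)) / (2 * eps)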

To check whether the analytical gradient of your neural network is correct, it is good practice to compare it against this numerical approximation:

For each weight matrix w_l in all layers W = [w_0, w_1, ..., w_l, ..., w_k]
    For i in 0 to number of rows in w_l
        For j in 0 to number of columns in w_l
            w_l_minus = copy of w_l;                  # Copy all the weights
            w_l_minus[i,j] = w_l_minus[i,j] - eps;    # Perturb only this parameter

            w_l_plus = copy of w_l;                   # Copy all the weights
            w_l_plus[i,j] = w_l_plus[i,j] + eps;      # Perturb only this parameter

            cost_minus = cost of the neural net with w_l replaced by w_l_minus
            cost_plus  = cost of the neural net with w_l replaced by w_l_plus

            w_l_grad[i,j] = (cost_plus - cost_minus) / (2*eps)

This process changes only one parameter at a time and computes the corresponding numerical gradient entry. In this case I have used the central difference (f(x+h) - f(x-h)) / (2h), which seems to work better for me than a one-sided difference.
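For concreteness, here is a minimal runnable sketch of the loop above in Python/NumPy. It assumes a 2-2-1 network with sigmoid activations and a mean-squared-error cost; the architecture, the cost function, and the names (forward_cost, numerical_gradient) are illustrative assumptions, not the asker's actual code.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward_cost(weights, X, y):
        # MSE cost of an assumed 2-2-1 network; weights is a list [W1, W2].
        W1, W2 = weights
        hidden = sigmoid(X @ W1)       # shape (n, 2)
        output = sigmoid(hidden @ W2)  # shape (n, 1)
        return 0.5 * np.mean((output - y) ** 2)

    def numerical_gradient(weights, X, y, eps=1e-4):
        # Central-difference gradient of the cost w.r.t. every weight entry.
        grads = []
        for l in range(len(weights)):
            g = np.zeros_like(weights[l])
            for i in range(weights[l].shape[0]):
                for j in range(weights[l].shape[1]):
                    w_plus = [w.copy() for w in weights]   # Copy all the weights
                    w_minus = [w.copy() for w in weights]
                    w_plus[l][i, j] += eps                 # Perturb only this parameter
                    w_minus[l][i, j] -= eps
                    g[i, j] = (forward_cost(w_plus, X, y) -
                               forward_cost(w_minus, X, y)) / (2 * eps)
            grads.append(g)
        return grads

Calling numerical_gradient on the same data that is fed to backpropagation produces matrices that can be compared entry by entry with the analytical gradients.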

Note that you mentioned: "the results of the numerical gradient computation are not dependent on the target data". This is not true: when you compute cost_minus and cost_plus above, the cost is computed on the basis of

  1. The weights
  2. The target classes

Therefore, the process of backpropagation should be kept separate from the gradient checking. Compute the numerical gradients before the backpropagation update, compute the gradients using backpropagation on the same data for one epoch (using something similar to the above), and then compare the corresponding components of the resulting vectors/matrices and check whether they are close enough.
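One common way to do that comparison is the relative error between the flattened gradient vectors. The sketch below is an assumption about how the check might be coded, not part of the original answer; the threshold 1e-7 is only a typical rule of thumb.

    import numpy as np

    def gradient_check(analytical_grads, numerical_grads, tol=1e-7):
        # Relative error between flattened backprop and numerical gradients.
        a = np.concatenate([g.ravel() for g in analytical_grads])
        n = np.concatenate([g.ravel() for g in numerical_grads])
        rel_error = np.linalg.norm(a - n) / (np.linalg.norm(a) + np.linalg.norm(n) + 1e-12)
        return rel_error < tol, rel_error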




Answer 2:


Whether you want to do some classification or have your network calculate a certain numerical function, you always have some target data. For example, let's say you wanted to train a network to calculate the function f(a, b) = a + b. In that case, this is the input and target data you want to train your network on:

 a       b     Target

 1       1        2
 3       4        7
21       0        21
 5       2        7 

        ...

Just as with "normal" classification problems, the more input-target pairs, the better.
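As a small illustration (not from the original answer) of how such targets enter gradient checking, the sketch below uses a hypothetical predict function standing in for the network's forward pass and a mean-squared-error cost; the cost, and therefore the numerical gradient, depends on the targets as well as the inputs.

    import numpy as np

    # Hypothetical stand-in for the network's forward pass on one (a, b) pair.
    def predict(weights, a, b):
        return weights[0] * a + weights[1] * b

    def mse_cost(weights, inputs, targets):
        # The cost that both backpropagation and the numerical check differentiate.
        preds = np.array([predict(weights, a, b) for a, b in inputs])
        return 0.5 * np.mean((preds - np.array(targets)) ** 2)

    inputs  = [(1, 1), (3, 4), (21, 0), (5, 2)]
    targets = [2, 7, 21, 7]
    cost = mse_cost(np.array([1.0, 1.0]), inputs, targets)  # changes if the targets change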



Source: https://stackoverflow.com/questions/26192592/gradient-checking-in-backpropagation
