Forward vs reverse mode differentiation - PyTorch


Question


In the first example of Learning PyTorch with Examples, the author demonstrates how to create a neural network with numpy. Their code is pasted below for convenience:

# from: https://pytorch.org/tutorials/beginner/pytorch_with_examples.html
# -*- coding: utf-8 -*-
import numpy as np

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)

# Randomly initialize weights
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)

    # Compute and print loss
    loss = np.square(y_pred - y).sum()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0
    grad_w1 = x.T.dot(grad_h)

    # Update weights
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

What is confusing to me is why the gradients of w1 and w2 are computed with respect to the loss (second-to-last code block).

Normally the opposite is stated: the gradient of the loss is computed with respect to the weights, as quoted here:

  • "When training neural networks, we think of the cost (a value describing how bad a neural network performs) as a function of the parameters (numbers describing how the network behaves). We want to calculate the derivatives of the cost with respect to all the parameters, for use in gradient descent. Now, there’s often millions, or even tens of millions of parameters in a neural network. So, reverse-mode differentiation, called backpropagation in the context of neural networks, gives us a massive speed up!" (Colah's blog).

So my question is: why is the gradient computation in the example above described in the reverse order compared to normal backpropagation?
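
For comparison, this is roughly what the same loop looks like when PyTorch's reverse-mode autograd computes the gradients of the loss with respect to w1 and w2 for you (a minimal sketch assuming the same shapes as above; it is not part of the tutorial code I pasted):

# Minimal sketch: same two-layer network, but the gradients of the loss
# w.r.t. w1 and w2 come from reverse-mode autograd instead of hand-coded backprop.
import torch

N, D_in, H, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# requires_grad=True marks w1 and w2 as tensors to differentiate with respect to
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    y_pred = x.mm(w1).clamp(min=0).mm(w2)   # forward pass
    loss = (y_pred - y).pow(2).sum()        # scalar loss
    loss.backward()                         # one reverse-mode pass fills w1.grad and w2.grad
    with torch.no_grad():
        w1 -= learning_rate * w1.grad       # gradient of loss w.r.t. w1
        w2 -= learning_rate * w2.grad       # gradient of loss w.r.t. w2
        w1.grad.zero_()
        w2.grad.zero_()

The point of the quote, as I understand it, is that a single backward pass like this yields the derivatives of the one scalar loss with respect to every parameter at once.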


Answer 1:


It seems to be a typo in the comment: they are actually computing the gradient of the loss w.r.t. w2 and w1.

Let's quickly derive the gradient of the loss w.r.t. w2 just to be sure. By inspection of your code we have

$$L = \sum \left(y_{\mathrm{pred}} - y\right)^2, \qquad y_{\mathrm{pred}} = h_{\mathrm{relu}} \, w_2 .$$

Using the chain rule from calculus,

$$\frac{\partial L}{\partial w_2} = \frac{\partial L}{\partial y_{\mathrm{pred}}} \cdot \frac{\partial y_{\mathrm{pred}}}{\partial w_2} .$$

Each term can be represented using the basic rules of matrix calculus. These turn out to be

$$\frac{\partial L}{\partial y_{\mathrm{pred}}} = 2\,(y_{\mathrm{pred}} - y)$$

and, since $y_{\mathrm{pred}} = h_{\mathrm{relu}}\, w_2$ is linear in $w_2$, the second factor contributes a left-multiplication by $h_{\mathrm{relu}}^T$.

Plugging these terms back into the initial equation we get

$$\frac{\partial L}{\partial w_2} = h_{\mathrm{relu}}^T \cdot 2\,(y_{\mathrm{pred}} - y) ,$$

which perfectly matches the expressions described by

grad_y_pred = 2.0 * (y_pred - y)       # gradient of loss w.r.t. y_pred
grad_w2 = h_relu.T.dot(grad_y_pred)    # gradient of loss w.r.t. w2

in the back-propagation code you provided.
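
If you want to check this numerically, you can compare the hand-coded numpy gradients against what reverse-mode autodiff produces for the same values. A minimal sketch (the small sizes and the torch.autograd check are my own illustration, not part of the question's code):

# Sketch: confirm that grad_w1 and grad_w2 really are d(loss)/d(w1) and d(loss)/d(w2)
import numpy as np
import torch

N, D_in, H, D_out = 4, 5, 3, 2              # small illustrative sizes
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

# Hand-coded backprop, same as in the question
h = x.dot(w1)
h_relu = np.maximum(h, 0)
y_pred = h_relu.dot(w2)
grad_y_pred = 2.0 * (y_pred - y)
grad_w2 = h_relu.T.dot(grad_y_pred)
grad_h = grad_y_pred.dot(w2.T)
grad_h[h < 0] = 0
grad_w1 = x.T.dot(grad_h)

# Reverse-mode autodiff on the same values
tw1 = torch.tensor(w1, requires_grad=True)
tw2 = torch.tensor(w2, requires_grad=True)
loss = ((torch.tensor(x).mm(tw1).clamp(min=0).mm(tw2) - torch.tensor(y)) ** 2).sum()
loss.backward()

print(np.allclose(grad_w1, tw1.grad.numpy()))   # True: gradient of loss w.r.t. w1
print(np.allclose(grad_w2, tw2.grad.numpy()))   # True: gradient of loss w.r.t. w2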



Source: https://stackoverflow.com/questions/58886606/forward-vs-reverse-mode-differentiation-pytorch
