I am reading through the documentation of PyTorch and found an example where they write

```python
gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients)
```
Typically, your computational graph has one scalar output, say `loss`. Then you can compute the gradient of `loss` w.r.t. the weights (`w`) by calling `loss.backward()`. For a scalar output, the gradient argument of `backward()` defaults to `1.0`, so you don't need to pass anything.
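For example, here is a minimal sketch of the scalar case (the tensors `w` and `x` are made up for illustration):

```python
import torch

w = torch.randn(3, requires_grad=True)   # weights we want gradients for
x = torch.tensor([1.0, 2.0, 3.0])

loss = (w * x).sum()   # a single scalar output
loss.backward()        # same as loss.backward(torch.tensor(1.0))

print(w.grad)          # d(loss)/dw, which here equals x
```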
If your output has multiple values (e.g. `loss = [loss1, loss2, loss3]`), you can compute the gradients of the losses w.r.t. the weights by `loss.backward(torch.FloatTensor([1.0, 1.0, 1.0]))`.
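A small sketch of the vector case (again with a made-up loss; in current PyTorch you would usually pass `torch.ones_like(loss)` rather than construct a `FloatTensor` by hand):

```python
import torch

w = torch.randn(3, requires_grad=True)
loss = w ** 2            # vector output: [loss1, loss2, loss3]

# a non-scalar output requires an explicit gradient argument
loss.backward(torch.FloatTensor([1.0, 1.0, 1.0]))

print(w.grad)            # accumulated gradient: 2 * w
```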
Furthermore, if you want to give different weights or importances to the individual losses, you can use `loss.backward(torch.FloatTensor([-0.1, 1.0, 0.0001]))`.
This computes `-0.1*d(loss1)/dw + 1.0*d(loss2)/dw + 0.0001*d(loss3)/dw` in a single backward pass and accumulates the result into `w.grad`; it is equivalent to calling `backward()` on the scalar `-0.1*loss1 + 1.0*loss2 + 0.0001*loss3`.
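To make that equivalence concrete, a sketch (the loss construction is illustrative):

```python
import torch

w = torch.randn(3, requires_grad=True)
loss = w ** 2            # [loss1, loss2, loss3]

weights = torch.FloatTensor([-0.1, 1.0, 0.0001])
loss.backward(weights)

# w.grad holds the weighted sum of the per-loss gradients,
# the same result as (weights * loss).sum().backward()
print(w.grad)            # equals weights * 2 * w
```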