PyTorch: Can autograd be used when the final tensor has more than a single value in it?

Submitted by 雨燕双飞 on 2019-12-20 03:43:16

Question


Can autograd be used when the final tensor has more than a single value in it?

I tried the following.

import torch

x = torch.tensor([4.0, 5.0], requires_grad=True)
y = x ** 2

print(y)

y.backward()

This throws an error:

RuntimeError: grad can be implicitly created only for scalar outputs

The following, however, works.

import torch

x = torch.tensor([4.0, 5.0], requires_grad=True)
y = x ** 2
y = torch.sum(y)
print(y)

y.backward()
print(x.grad)

The output is:

tensor(41., grad_fn=<SumBackward0>)
tensor([ 8., 10.])

Am I missing something here, or can I proceed with the assumption that autograd only works when the final tensor has a single value in it?


Answer 1:


See https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#gradients

y.backward() is the same as y.backward(torch.tensor(1.0)).

Usually the final output is a scalar, so a gradient of 1.0 is passed to backward by default. However, your output y is a one-dimensional tensor with two elements, not a scalar, so you must pass backward an explicit gradient tensor of the same shape as y: y.backward(torch.tensor([1.0, 1.0]))

This gives the expected result, with x.grad being tensor([ 8., 10.]).
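For reference, here is a minimal self-contained sketch of the fix; the gradient values follow from dy/dx = 2x for y = x ** 2:

import torch

x = torch.tensor([4.0, 5.0], requires_grad=True)
y = x ** 2

# y is not a scalar, so backward needs an explicit gradient argument:
# the vector v in the vector-Jacobian product v^T * J, same shape as y.
y.backward(torch.tensor([1.0, 1.0]))

print(x.grad)  # tensor([ 8., 10.]), since dy/dx = 2x

Passing a different vector weights the rows of the Jacobian; for example, torch.tensor([1.0, 0.0]) would give x.grad = tensor([8., 0.]), isolating the gradient of y[0] alone.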



Source: https://stackoverflow.com/questions/53273662/pytorch-can-autograd-be-used-when-the-final-tensor-has-more-than-a-single-value
