PyTorch: can autograd be used when the final tensor has more than a single value in it?

Submitted by 孤街浪徒 on 2019-12-02 00:58:53

See https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#gradients

y.backward() is the same as y.backward(torch.tensor(1.0))

Usually the output is a scalar, so a scalar gradient of 1.0 is passed to backward by default. However, since your output is a vector with two elements (not a scalar), you should call y.backward(torch.tensor([1.0, 1.0]))

This will give the expected result, with x.grad being tensor([ 8., 10.])
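As a minimal sketch of the idea: the original question's code is not shown, so the input values below are a hypothetical choice (x = [4., 5.] with y = x ** 2) picked so that the gradient 2*x matches the tensor([ 8., 10.]) mentioned above.

```python
import torch

# Hypothetical inputs; chosen so dy/dx = 2*x = [8., 10.]
x = torch.tensor([4.0, 5.0], requires_grad=True)
y = x ** 2  # y is a vector with two elements, not a scalar

# y.backward() alone would raise an error here, because a gradient
# can only be created implicitly for scalar outputs. Passing an
# explicit vector of ones computes the vector-Jacobian product:
y.backward(torch.tensor([1.0, 1.0]))

print(x.grad)  # tensor([ 8., 10.])
```

Passing torch.ones_like(y) instead of writing the ones out by hand works the same way and scales to any output shape.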
