PyTorch autograd — grad can be implicitly created only for scalar outputs


Question


I am using the autograd tool in PyTorch, and have found myself in a situation where I need to access the values in a 1D tensor by means of an integer index. Something like this:

def basic_fun(x_cloned):
    res = []
    for i in range(len(x_cloned)):
        res.append(x_cloned[i] * x_cloned[i])
    print(res)
    return Variable(torch.FloatTensor(res))


def get_grad(inp, grad_var):
    A = basic_fun(inp)
    A.backward()
    return grad_var.grad


x = Variable(torch.FloatTensor([1, 2, 3, 4, 5]), requires_grad=True)
x_cloned = x.clone()
print(get_grad(x_cloned, x))

I am getting the following error message:

[tensor(1., grad_fn=<ThMulBackward>), tensor(4., grad_fn=<ThMulBackward>), tensor(9., grad_fn=<ThMulBackward>), tensor(16., grad_fn=<ThMulBackward>), tensor(25., grad_fn=<ThMulBackward>)]
Traceback (most recent call last):
  File "/home/mhy/projects/pytorch-optim/predict.py", line 74, in <module>
    print(get_grad(x_cloned, x))
  File "/home/mhy/projects/pytorch-optim/predict.py", line 68, in get_grad
    A.backward()
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

In general, I am a bit skeptical about how using the cloned version of a variable is supposed to keep that variable in the gradient computation. The variable itself is effectively not used in the computation of A, so when you call A.backward(), it should not be part of that operation.

I would appreciate your help with this approach, or with a better way to index through a 1D tensor with requires_grad=True without losing the gradient history!
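For reference, a clone does in fact stay in the autograd graph: gradients flow through the clone back to the original tensor. A minimal self-contained sketch of that behavior (using the plain tensor API rather than the deprecated Variable wrapper):

import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x.clone()             # the clone participates in the graph
(y * y).sum().backward()  # gradients flow through y back to x
print(x.grad)             # tensor([2., 4., 6.]) == 2 * x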

**Edit (September 15):**

res is a list of zero-dimensional tensors containing the squared values of 1 to 5. To concatenate them into a single tensor containing [1.0, 4.0, ..., 25.0], I changed return Variable(torch.FloatTensor(res)) to return torch.stack(res, dim=0), which produces tensor([ 1., 4., 9., 16., 25.], grad_fn=<StackBackward>).

However, I am getting this new error, caused by the A.backward() line.

Traceback (most recent call last):
  File "<project_path>/playground.py", line 22, in <module>
    print(get_grad(x_cloned, x))
  File "<project_path>/playground.py", line 16, in get_grad
    A.backward()
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 84, in backward
    grad_tensors = _make_grads(tensors, grad_tensors)
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 28, in _make_grads
    raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs
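This error is expected behavior: backward() with no arguments can only be called on a scalar (single-element) output, because autograd computes vector-Jacobian products and needs a seed gradient; for a scalar output it can implicitly use 1.0. For a vector output you must either pass an explicit gradient of the same shape or reduce to a scalar first. A minimal sketch of both options (standard PyTorch, not part of the original post):

import torch

x = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0], requires_grad=True)
A = torch.stack([xi * xi for xi in x])  # 1-D output, like the stacked res

# Option 1: pass an explicit gradient of the same shape as A.
A.backward(torch.ones_like(A))
print(x.grad)  # tensor([ 2.,  4.,  6.,  8., 10.])

# Option 2: reduce to a scalar first; backward() then needs no argument.
x.grad.zero_()
B = torch.stack([xi * xi for xi in x])
B.sum().backward()
print(x.grad)  # tensor([ 2.,  4.,  6.,  8., 10.])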

Answer 1:


In the basic_fun function, the elements of res are already torch autograd Variables, so you don't need to convert them again. IMHO:

def basic_fun(x_cloned):
    res = []
    for i in range(len(x_cloned)):
        res.append(x_cloned[i] * x_cloned[i])
    print(res)
    #return Variable(torch.FloatTensor(res))
    return res[0]

def get_grad(inp, grad_var):
    A = basic_fun(inp)
    A.backward()
    return grad_var.grad


x = Variable(torch.FloatTensor([1, 2, 3, 4, 5]), requires_grad=True)
x_cloned = x.clone()
print(get_grad(x_cloned, x))
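Note that this sidesteps rather than solves the original problem: basic_fun now returns only res[0] = x_cloned[0] ** 2, so the printed gradient is tensor([2., 0., 0., 0., 0.]), with only the first element of x receiving a gradient.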



Answer 2:


I changed my basic_fun to the following, which resolved my problem:

def basic_fun(x_cloned):
    res = torch.FloatTensor([0])
    for i in range(len(x_cloned)):
        res += x_cloned[i] * x_cloned[i]
    return res

This version returns a single-element tensor, so A.backward() can create the initial gradient implicitly.
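A quick sanity check of this fix, reusing the basic_fun above and the setup from the question; since A = sum(x_i ** 2), the analytic gradient is dA/dx_i = 2 * x_i:

x = Variable(torch.FloatTensor([1, 2, 3, 4, 5]), requires_grad=True)
x_cloned = x.clone()
A = basic_fun(x_cloned)  # tensor([55.]), the sum of squares
A.backward()             # works: A has a single element
print(x.grad)            # tensor([ 2.,  4.,  6.,  8., 10.]) == 2 * x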



Source: https://stackoverflow.com/questions/52317407/pytorch-autograd-grad-can-be-implicitly-created-only-for-scalar-outputs
