PyTorch autograd — grad can be implicitly created only for scalar outputs


In the basic_fun function, the entries of res are already torch autograd Variables (they are the results of operations on x_cloned), so you don't need to convert them again; rewrapping them with Variable(torch.FloatTensor(res)) would break the autograd graph. IMHO

import torch
from torch.autograd import Variable

def basic_fun(x_cloned):
    res = []
    for i in range(len(x_cloned)):       # iterate over the cloned input, not the global x
        res.append(x_cloned[i] * x_cloned[i])
    print(res)
    # no need to re-wrap: the entries are already autograd Variables
    # return Variable(torch.FloatTensor(res))
    return res[0]

def get_grad(inp, grad_var):
    A = basic_fun(inp)
    A.backward()                         # scalar-like output, so no gradient argument is needed
    return grad_var.grad


x = Variable(torch.FloatTensor([1, 2, 3, 4, 5]), requires_grad=True)
x_cloned = x.clone()
print(get_grad(x_cloned, x))
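
As a quick check (my own addition, not part of the original answer): since basic_fun now returns res[0], i.e. x[0]**2, only the first element of x receives a gradient, so the last print should show something like the following (exact formatting depends on the PyTorch version):

# expected output of print(get_grad(x_cloned, x))
# tensor([2., 0., 0., 0., 0.])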

I changed my basic_fun to the following, which resolved my problem:

def basic_fun(x_cloned):
    res = torch.FloatTensor([0])
    for i in range(len(x_cloned)):       # accumulate into a single 1-element tensor
        res += x_cloned[i] * x_cloned[i]
    return res

This version returns a single-element tensor, so backward() can be called on it without passing an explicit gradient argument, which resolves the "grad can be implicitly created only for scalar outputs" error.
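
Alternatively, if you want to keep a vector-valued output, you can pass an explicit gradient to backward() instead of reducing to a scalar first. This is just a sketch of that option (my addition, written against a recent PyTorch where Variable has been merged into Tensor):

import torch

x = torch.tensor([1., 2., 3., 4., 5.], requires_grad=True)
res = x * x                            # non-scalar output
res.backward(torch.ones_like(res))     # explicit gradient argument
print(x.grad)                          # tensor([ 2.,  4.,  6.,  8., 10.])

Passing torch.ones_like(res) as the gradient is equivalent to calling res.sum().backward().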
