PyTorch: preferred way to copy a tensor

Posted by 北城以北 on 2020-04-07 12:16:12

Question


There seem to be several ways to create a copy of a tensor in PyTorch, including:

y = tensor.new_tensor(x) #a

y = x.clone().detach() #b

y = torch.empty_like(x).copy_(x) #c

y = torch.tensor(x) #d

b is explicitly preferred over a and d, according to a UserWarning I get when I execute either a or d. Why is it preferred? Performance? I'd argue it's less readable.

Any reasons for/against using c?


Answer 1:


According to the PyTorch documentation, #a and #b are equivalent. It also says that

The equivalents using clone() and detach() are recommended.

So if you want to copy a tensor and detach it from the computation graph, you should use

y = x.clone().detach()

since it is the cleanest and most readable approach. With all the other versions there is some hidden logic, and it is also not 100% clear what happens to the computation graph and gradient propagation.
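A quick check (a minimal sketch) shows that the copy shares neither values nor gradient history with the original:

import torch

x = torch.ones(3, requires_grad=True)
y = x.clone().detach()

y += 1                    # modifying the copy in place...
print(x)                  # ...leaves the original unchanged: tensor([1., 1., 1.], requires_grad=True)
print(y.requires_grad)    # False -- the copy is detached from the computation graph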

Regarding #c: It seems a bit too complicated for what it actually does, and it could also introduce some overhead, but I am not sure about that.
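If in doubt, you can measure it yourself. Here is a rough timing sketch using timeit (the numbers will vary with hardware, tensor size, and PyTorch version):

import timeit
import torch

x = torch.randn(1000, 1000)

# compare option b against option c; a consistently higher time suggests extra overhead
print(timeit.timeit(lambda: x.clone().detach(), number=1000))             # b
print(timeit.timeit(lambda: torch.empty_like(x).copy_(x), number=1000))   # c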




Answer 2:


PyTorch '1.1.0' now recommends #b and shows a warning for #d.
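For illustration, the warning that #d triggers looks roughly like this (the exact wording may differ between versions):

import torch

x = torch.ones(3)
y = torch.tensor(x)  # option d
# UserWarning: To copy construct from a tensor, it is recommended to use
# sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True),
# rather than torch.tensor(sourceTensor).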



Source: https://stackoverflow.com/questions/55266154/pytorch-preferred-way-to-copy-a-tensor
