Question
There seem to be several ways to create a copy of a tensor in PyTorch, including
y = tensor.new_tensor(x) #a
y = x.clone().detach() #b
y = torch.empty_like(x).copy_(x) #c
y = torch.tensor(x) #d
#b is explicitly preferred over #a and #d according to a UserWarning I get if I execute either #a or #d. Why is it preferred? Performance? I'd argue it's less readable.
Any reasons for/against using #c?
Answer 1:
According to the PyTorch documentation, #a and #b are equivalent. It also says that
The equivalents using clone() and detach() are recommended.
So if you want to copy a tensor and detach it from the computation graph, you should use
y = x.clone().detach()
since it is the cleanest and most readable way. With all the other versions there is some hidden logic, and it is also not 100% clear what happens to the computation graph and gradient propagation.
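To make that concrete, here is a minimal sketch (my own example, not from the original answer) showing that the copy is cut off from the graph while the source tensor still receives gradients:
import torch
x = torch.ones(3, requires_grad=True)
y = x.clone().detach()    # independent copy, outside the computation graph
y += 1                    # in-place change to the copy; x is untouched
(x * 2).sum().backward()  # gradients still flow to x as usual
print(x.grad)             # tensor([2., 2., 2.])
print(y.requires_grad)    # False: y carries no gradient history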
Regarding #c: it seems a bit too complicated for what it actually does and could also introduce some overhead, but I am not sure about that.
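If you care about the overhead, you can measure it yourself; here is a rough micro-benchmark sketch of my own (timings vary by machine and tensor size, so treat the pattern, not any particular numbers, as the takeaway):
import timeit
import torch
x = torch.randn(1000, 1000)
for label, stmt in [("clone().detach()", "x.clone().detach()"),
                    ("empty_like().copy_()", "torch.empty_like(x).copy_(x)")]:
    t = timeit.timeit(stmt, globals={"x": x, "torch": torch}, number=1000)
    print(f"{label}: {t:.3f} s for 1000 copies")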
Answer 2:
PyTorch 1.1.0 now recommends #b and shows a warning for #d.
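For reference, a quick snippet of my own that triggers it; the warning text in the comment is paraphrased and may differ slightly between releases:
import torch
x = torch.ones(3)
y = torch.tensor(x)     # emits a UserWarning on PyTorch >= 1.1, roughly:
# "To copy construct from a tensor, it is recommended to use
#  sourceTensor.clone().detach() rather than torch.tensor(sourceTensor)"
y = x.clone().detach()  # the recommended form, no warning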
Source: https://stackoverflow.com/questions/55266154/pytorch-preferred-way-to-copy-a-tensor