Why is PyTorch “destroying” my numpy arrays?

Submitted by 余生长醉 on 2020-05-24 05:46:32

Question


I am working with numpy arrays of shape (N, 2, 128, 128).

When I try to visualize these as images (reconstructing them via ifft2), numpy and PyTorch seem to mix things up in a crazy manner ...

I have checked with small dummy arrays: when I pass a numpy ndarray to torch.FloatTensor, the values are exactly the same at the same positions (same shape!). But when I run ifft2 on the torch tensor, the result is different from the result on the plain numpy array! Can someone help me make sense of this?

A small reproducible example is:

import numpy as np
import torch
import matplotlib.pyplot as plt

x = np.random.rand(3, 2, 2, 2)
xTorch = torch.FloatTensor(x)

# Inspect both in the interpreter: the values are identical!

# Now show the magnitude of an inverse Fourier transform,
# first on the torch tensor ...
plt.imshow(np.abs(np.fft.ifft2(xTorch[0, 0, :, :] + 1j * xTorch[0, 1, :, :])))
plt.show()

# ... then on the original numpy array.
plt.imshow(np.abs(np.fft.ifft2(x[0, 0, :, :] + 1j * x[0, 1, :, :])))
plt.show()

# The two images are not the same! What is the problem!?
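One aside that may or may not matter here: torch.FloatTensor always stores float32, while np.random.rand produces float64, so the cast alone already makes the two inputs slightly different; torch.from_numpy would preserve the dtype. A minimal sketch to check this (the names xFloat and xKept are just for illustration):

import numpy as np
import torch

x = np.random.rand(3, 2, 2, 2)   # numpy's default dtype is float64
xFloat = torch.FloatTensor(x)    # FloatTensor always stores float32
xKept = torch.from_numpy(x)      # from_numpy preserves float64 (and shares memory)

print(xFloat.dtype)  # torch.float32
print(xKept.dtype)   # torch.float64

That said, a float32 cast alone should only cause tiny numerical differences, not a completely different image.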

I found out that if I use torch.Tensor.cpu(xTorch).detach().numpy() (i.e. convert the tensor back to a numpy ndarray first), I get the same result, but what does that mean?!
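Spelled out, that workaround looks like this: convert back to a plain ndarray before any numpy math touches the data, then do the complex reconstruction and the FFT entirely in numpy.

import numpy as np
import torch

x = np.random.rand(3, 2, 2, 2)
xTorch = torch.FloatTensor(x)

# Convert the tensor back to a plain ndarray first ...
xBack = xTorch.cpu().detach().numpy()

# ... then build the complex array and take the inverse FFT purely in numpy.
img = np.abs(np.fft.ifft2(xBack[0, 0, :, :] + 1j * xBack[0, 1, :, :]))

With this version the output matches the pure-numpy result (up to the float32 cast).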

P.S. Also, note that I know the correct visualization is the one from x, not from xTorch, so it seems that torch changes something when I do the ifft2, or when I combine the two channels into a complex array, or maybe there is a problem/bug with complex numbers. If you look inside np.abs(np.fft.ifft2(x[0,0,:,:]+1j*x[0,1,:,:])) and the xTorch version, the values are so different that it cannot just be floating-point error; something more serious is going on, and I can't figure out what. It's driving me crazy.
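To make the P.S. concrete, a small diagnostic along these lines might show what the intermediate expression actually is before numpy's FFT receives it (the printed types and dtypes will depend on the installed torch/numpy versions):

import numpy as np
import torch

x = np.random.rand(3, 2, 2, 2)
xTorch = torch.FloatTensor(x)

# What does multiplying a float tensor by a Python complex give?
# (On older torch builds without complex support this line may
# behave very differently, or even raise.)
inner = 1j * xTorch[0, 1, :, :]
print(type(inner), getattr(inner, "dtype", None))

# The full expression that np.fft.ifft2 receives:
arg = xTorch[0, 0, :, :] + 1j * xTorch[0, 1, :, :]
print(type(arg), getattr(arg, "dtype", None))

# The pure-numpy counterpart for comparison:
argNp = x[0, 0, :, :] + 1j * x[0, 1, :, :]
print(type(argNp), argNp.dtype)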

Source: https://stackoverflow.com/questions/61158860/why-is-pytorch-destroying-my-numpy-arrays
