I'm trying to gain an in-depth understanding of how the PyTorch Tensor memory model works.
# input numpy array
In [91]: arr = np.arange(10, dtype=np.float32).reshape(5, 2)
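One way to probe the memory model with this array (a sketch, assuming import numpy as np and import torch; the continued prompt numbers are illustrative) is to build a tensor from it via both constructors and then mutate the array:

In [92]: t1, t2 = torch.Tensor(arr), torch.from_numpy(arr)

# write through the numpy array
In [93]: arr[0, 0] = 23.0

# both tensors observe the write: with a float32 input,
# neither constructor made a copy
In [94]: t1[0, 0].item(), t2[0, 0].item()
Out[94]: (23.0, 23.0)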
The following definition of from_numpy comes from _torch_docs.py; there is also a possible discussion on the "why" here.
def from_numpy(ndarray): # real signature unknown; restored from __doc__
    """
    from_numpy(ndarray) -> Tensor

    Creates a :class:`Tensor` from a :class:`numpy.ndarray`.

    The returned tensor and `ndarray` share the same memory.
    Modifications to the tensor will be reflected in the `ndarray`
    and vice versa. The returned tensor is not resizable.

    Example::

        >>> a = numpy.array([1, 2, 3])
        >>> t = torch.from_numpy(a)
        >>> t
        torch.LongTensor([1, 2, 3])
        >>> t[0] = -1
        >>> a
        array([-1, 2, 3])
    """
    pass
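The sharing that the docstring describes can be verified by comparing buffer addresses (a quick check, valid for CPU tensors; Tensor.data_ptr() and ndarray.__array_interface__ are the standard accessors):

import numpy as np
import torch

a = np.array([1, 2, 3])
t = torch.from_numpy(a)

# the tensor and the array point at the same buffer, so no copy was made
print(t.data_ptr() == a.__array_interface__['data'][0])  # True

t[0] = -1
print(a)  # array([-1,  2,  3]) -- the write through the tensor is visible in numpy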
Taken from the numpy docs:

Different ndarrays can share the same data, so that changes made in one ndarray may be visible in another. That is, an ndarray can be a “view” to another ndarray, and the data it is referring to is taken care of by the “base” ndarray.
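A small self-contained illustration of that view/base relationship:

import numpy as np

base = np.arange(10)
view = base[2:5]          # basic slicing returns a view, not a copy

print(view.base is base)  # True: the view's data is owned by `base`
view[:] = 0
print(base)               # [0 1 0 0 0 5 6 7 8 9] -- the writes show through the base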
From the PyTorch docs:

If a numpy.ndarray, torch.Tensor, or torch.Storage is given, a new tensor that shares the same data is returned. If a Python sequence is given, a new tensor is created from a copy of the sequence.
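Both clauses can be observed directly. One caveat worth noting: the legacy torch.Tensor constructor defaults to torch.float32, so the no-copy path for an ndarray applies only when the dtypes already match; a float64 or int64 array would be copied and converted instead. A sketch under that assumption:

import numpy as np
import torch

arr = np.ones(3, dtype=np.float32)          # dtype matches torch.Tensor's default

t_from_arr = torch.Tensor(arr)              # ndarray given: storage is shared
t_from_seq = torch.Tensor([1.0, 1.0, 1.0])  # Python sequence given: data is copied

arr[0] = 42.0
print(t_from_arr)  # tensor([42.,  1.,  1.]) -- shared storage
print(t_from_seq)  # tensor([1., 1., 1.])    -- independent copy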