PyTorch: Why is the memory occupied by the `tensor` variable so small?


The answer comes in two parts. From the documentation of `sys.getsizeof`: firstly,

> All built-in objects will return correct results, but this does not have to hold true for third-party extensions as it is implementation specific.

so it could be that for tensors `__sizeof__` is undefined, or defined differently than you would expect; this function is not something you can rely on.
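To see why this matters, note that `__sizeof__` is just a method, so a third-party class is free to report anything at all; the number is only as trustworthy as its implementation. A contrived sketch (the class is made up for illustration, and the exact output varies, since CPython adds a small GC header on top of what `__sizeof__` returns):

import sys

class Pretender:
    def __sizeof__(self):
        return 1  # claim to occupy a single byte

sys.getsizeof(Pretender())  # 1 plus CPython's GC header
>> 17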

Secondly, the documentation also notes:

> Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to.

which means that if the `torch.Tensor` object merely holds a reference to the actual memory, that memory won't show up in `sys.getsizeof`. This is indeed the case: if you check the size of the underlying storage instead, you will see the expected number:

import sys
import torch

b = torch.randn(1, 1, 128, 256, dtype=torch.float64)
sys.getsizeof(b)            # only the small wrapper object is counted
>> 72
sys.getsizeof(b.storage())  # the storage that actually holds the data
>> 262208
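As a cross-check, you can compute the raw data size from the tensor itself using the standard `element_size()` and `nelement()` methods. The storage figure above is slightly larger, presumably because `sys.getsizeof` also counts the storage object's own Python overhead:

import torch

b = torch.randn(1, 1, 128, 256, dtype=torch.float64)
b.element_size() * b.nelement()  # 8 bytes per float64 element * 32768 elements
>> 262144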

Note: I am setting `dtype` to `float64` explicitly because that is the default dtype in NumPy, whereas torch defaults to `float32`.
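A quick way to confirm those defaults (assuming NumPy is installed):

import numpy
import torch

numpy.ones(3).dtype   # NumPy allocates float64 unless told otherwise
>> dtype('float64')
torch.ones(3).dtype   # torch allocates float32 unless told otherwise
>> torch.float32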
