PyTorch: Why is the memory occupied by the `tensor` variable so small?
Question:

In PyTorch 1.0.0, I found that a tensor variable occupies very little memory. I wonder how it stores so much data. Here's the code:

```python
import sys

import numpy as np
import torch

a = np.random.randn(1, 1, 128, 256)
b = torch.tensor(a, device=torch.device('cpu'))

a_size = sys.getsizeof(a)
b_size = sys.getsizeof(b)
```

`a_size` is 262288. `b_size` is 72.

Answer 1:

The answer is in two parts. From the documentation of `sys.getsizeof`, firstly:

> All built-in objects will return correct results, but this does not have to hold true for third-party extensions as it is implementation specific.
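In other words, `sys.getsizeof` only reports the size of the Python wrapper object for a `torch.Tensor`, not the underlying data buffer. A minimal sketch of how one could measure the actual data footprint instead, using the standard tensor methods `element_size()` and `nelement()` (the exact byte counts assume float64 data, as in the question; the getsizeof overheads may vary by Python version and platform):

```python
import sys

import torch

# Same shape and dtype as the question: 1 * 1 * 128 * 256 = 32768 float64 elements.
b = torch.randn(1, 1, 128, 256, dtype=torch.float64)

# getsizeof sees only the small Python-level wrapper object.
print(sys.getsizeof(b))

# The real data buffer: bytes per element times number of elements.
data_bytes = b.element_size() * b.nelement()
print(data_bytes)  # 8 * 32768 = 262144 bytes
```

This also explains the NumPy number in the question: `ndarray` reports its data buffer through `__sizeof__`, so `sys.getsizeof(a)` returns roughly 262144 bytes of data plus a small header, giving the 262288 observed.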