Stacking copies of an array / a torch tensor efficiently?


Question


I'm a Python/PyTorch user. First, in NumPy: say I have an array M of size L x L, and I want the array A = (M, ..., M) of size, say, N x L x L. Is there a more elegant/memory-efficient way of doing it than:

A=np.array([M]*N) ?

Same question for a torch tensor! Because now, if M is a Variable(torch.tensor), I have to do:

A=torch.autograd.Variable(torch.tensor(np.array([M]*N))) 

which is ugly!


Answer 1:


Note that you need to decide whether you want to allocate new memory for your expanded array or whether you simply need a new view of the original array's existing memory.

In PyTorch, this distinction gives rise to the two methods expand() and repeat(). The former only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory. In contrast, the latter copies the original data and allocates new memory.

In PyTorch, you can use expand() and repeat() as follows for your purposes:

import torch

L = 10
N = 20
A = torch.randn(L,L)
A.expand(N, L, L) # specifies new size
A.repeat(N,1,1) # specifies number of copies
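
To see the difference concretely, here is a small illustrative check (a minimal sketch, not part of the original answer) comparing strides and storage pointers: the expanded tensor re-uses A's memory with a stride of 0 along the new leading dimension, while the repeated tensor is a fresh allocation.

import torch

L, N = 10, 20
A = torch.randn(L, L)

expanded = A.expand(N, L, L)  # view: stride 0 along the new leading dimension
repeated = A.repeat(N, 1, 1)  # copy: allocates N*L*L new elements

print(expanded.stride())      # (0, 10, 1) -- no extra memory for the first dim
print(repeated.stride())      # (100, 10, 1)
print(expanded.data_ptr() == A.data_ptr())  # True: shares A's storage
print(repeated.data_ptr() == A.data_ptr())  # False: new allocation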

In NumPy, there are many ways to achieve the same result more elegantly and efficiently. For your particular purpose, I would recommend np.tile() over np.repeat(), since np.repeat() is designed to operate on the individual elements of an array, while np.tile() is designed to operate on the entire array. Hence:

import numpy as np

L = 10
N = 20
A = np.random.rand(L,L)
np.tile(A,(N, 1, 1))
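
If a read-only view (rather than a copy) is enough on the NumPy side as well, np.broadcast_to plays roughly the same role as PyTorch's expand(). A minimal sketch (not from the original answer):

import numpy as np

L, N = 10, 20
A = np.random.rand(L, L)

tiled = np.tile(A, (N, 1, 1))         # copy: allocates N*L*L elements
view = np.broadcast_to(A, (N, L, L))  # read-only view: no data copied

print(tiled.shape, view.shape)        # (20, 10, 10) (20, 10, 10)
print(view.strides)                   # (0, 80, 8): 0-byte stride on the new axis
print(np.shares_memory(A, view))      # True
print(np.shares_memory(A, tiled))     # False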



Answer 2:


In NumPy, repeat is faster:

np.repeat(M[None,...], N,0)

I expand the dimensions of M, and then repeat along that new dimension.
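
A quick sanity check of the shapes (a minimal sketch with small example values, not part of the original answer):

import numpy as np

L, N = 4, 3
M = np.arange(L * L).reshape(L, L)

A = np.repeat(M[None, ...], N, axis=0)  # add a leading axis, then repeat N times along it
print(A.shape)                          # (3, 4, 4)
print(np.array_equal(A[0], M))          # True: each slice is a copy of M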



Source: https://stackoverflow.com/questions/44593141/stacking-copies-of-an-array-a-torch-tensor-efficiently
