Copying 2D arrays to GPU of known variable width


Question


I am looking into how to copy a 2D array, where each row has a different width, to the GPU.

int rows = 1000;
int cols;
int **host_matrix = malloc(sizeof(int*) * rows);  // one pointer per row
int **d_array;                                    // device-side array of row pointers
int *length;                                      // length[i] holds the width of row i

...

Each host_matrix[i] might have a different length, which I know from length[i], and that is where the problem starts. I would like to avoid copying dummy padding data. Is there a better way of doing it?

According to this thread, this would not be a clever way of doing it:

cudaMalloc((void **)&d_array, rows * sizeof(int*));
for (int i = 0; i < rows; i++) {
    // not valid: d_array is device memory, so d_array[i] cannot be
    // dereferenced on the host to receive the per-row allocation
    cudaMalloc((void **)&d_array[i], length[i] * sizeof(int));
}

But I cannot think of any other method. Is there a smarter way of doing it? Can it be improved using cudaMallocPitch and cudaMemcpy2D?


Answer 1:


The correct way to allocate an array of pointers for the GPU in CUDA is something like this:

int **hd_array, **d_array;
hd_array = (int **)malloc(nrows * sizeof(int*));                // host copy of the pointer array
cudaMalloc((void **)&d_array, nrows * sizeof(int*));            // device pointer array
for (int i = 0; i < nrows; i++) {
    cudaMalloc((void **)&hd_array[i], length[i] * sizeof(int)); // one allocation per row
}
cudaMemcpy(d_array, hd_array, nrows * sizeof(int*), cudaMemcpyHostToDevice);

(disclaimer: written in browser, never compiled, never tested, use at own risk)

The idea is that you first assemble a copy of the array of device pointers in host memory, then copy that to the device. For your hypothetical case with 1000 rows, that means 1001 calls to cudaMalloc and then 1001 calls to cudaMemcpy just to set up the device allocations and copy the data across. That is an enormous overhead penalty, and I would counsel against trying it; the performance will be truly terrible.
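For completeness, the other 1000 cudaMemcpy calls are the per-row copies, which would look roughly like this (reusing host_matrix and length from the question and hd_array from the snippet above; same disclaimer applies, untested sketch):

for (int i = 0; i < nrows; i++) {
    // one API call per row: copy row i into the buffer allocated for it above
    cudaMemcpy(hd_array[i], host_matrix[i], length[i] * sizeof(int),
               cudaMemcpyHostToDevice);
}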

If you have very jagged data and need to store it on the device, might I suggest taking a cue from the mother of all jagged data problems - large, unstructured sparse matrices - and adopting one of the sparse matrix formats for your data instead. Using the classic compressed sparse row (CSR) format as a model, you could do something like this:

int *data, *rows, *lengths;

cudaMalloc((void **)&rows, nrows * sizeof(int));     // starting offset of each row
cudaMalloc((void **)&lengths, nrows * sizeof(int));  // width of each row
cudaMalloc((void **)&data, N * sizeof(int));         // all elements in one flat buffer (N = total element count)

In this scheme, all the data is stored in a single, linear allocation, data. The ith row of the jagged array starts at data[rows[i]] and has a length of lengths[i]. This means you only need three memory allocation and copy operations to transfer any amount of data to the device, rather than nrows in your current scheme, i.e. it reduces the overhead from O(nrows) to O(1).
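The host-side bookkeeping this implies might look roughly like the following (the h_rows and h_data names are mine, N is the total element count that the cudaMalloc call above assumed, and the whole thing is an untested sketch):

int *h_rows = (int *)malloc(nrows * sizeof(int));
int N = 0;                                 // total number of elements
for (int i = 0; i < nrows; i++) {
    h_rows[i] = N;                         // offset of row i in the flat buffer
    N += length[i];
}

int *h_data = (int *)malloc(N * sizeof(int));
for (int i = 0; i < nrows; i++)            // flatten the jagged host matrix
    memcpy(h_data + h_rows[i], host_matrix[i], length[i] * sizeof(int));

// exactly three copies, regardless of nrows
cudaMemcpy(data, h_data, N * sizeof(int), cudaMemcpyHostToDevice);
cudaMemcpy(rows, h_rows, nrows * sizeof(int), cudaMemcpyHostToDevice);
cudaMemcpy(lengths, length, nrows * sizeof(int), cudaMemcpyHostToDevice);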




Answer 2:


I would put all the data into one array. Then compose another array with the row lengths, so that A[0] is the length of row 0 and so on, i.e. A[i] = length[i]. Then you only need to allocate two arrays on the card and call cudaMemcpy twice.
Of course it is a little bit of extra work, but I think performance-wise it will be an improvement (depending, of course, on how you use the data on the card).
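As a rough device-side illustration of how a kernel might walk that flattened layout (the kernel name, the one-block-per-row mapping, and the on-the-fly offset computation are my assumptions, not part of the answer):

__global__ void process_rows(const int *data, const int *A, int nrows)
{
    int row = blockIdx.x;                  // one block per row
    if (row >= nrows) return;

    int start = 0;                         // offset of this row within data
    for (int i = 0; i < row; i++)
        start += A[i];

    for (int j = threadIdx.x; j < A[row]; j += blockDim.x) {
        int value = data[start + j];       // element j of this row
        // ... process value ...
        (void)value;
    }
}

In practice you would precompute the row offsets once (as in the CSR-style layout from the first answer) rather than re-summing the lengths inside the kernel.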



Source: https://stackoverflow.com/questions/12480182/copying-2d-arrays-to-gpu-of-known-variable-width
