Allocate 2D Array on Device Memory in CUDA

Submitted by 人走茶凉 on 2019-12-17 15:36:36

Question


How do I allocate 2D arrays in device memory in CUDA and transfer them to and from the host?


Answer 1:


I found a solution to this problem; I didn't have to flatten the array.

The built-in cudaMallocPitch() function did the job, and I could transfer the array to and from the device with cudaMemcpy2D().

For example

cudaMallocPitch((void**) &array, &pitch, a*sizeof(float), b);

This allocates a 2D array of a*b elements; the row pitch chosen by the driver is returned through the pitch parameter. The data can then be copied with cudaMemcpy2D(), as in the sketch below.
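Since the answer only shows the allocation call, here is a minimal sketch of the matching cudaMemcpy2D() transfers; the host buffer hostArray is an assumption, not part of the original answer, and a/b are the same dimensions as above:

// Hypothetical host buffer: b rows of a floats, tightly packed
float* hostArray = (float*)malloc(a * b * sizeof(float));

// Host -> Device: source pitch is the packed row size in bytes,
// destination pitch is the value returned by cudaMallocPitch()
cudaMemcpy2D(array, pitch, hostArray, a * sizeof(float),
             a * sizeof(float), b, cudaMemcpyHostToDevice);

// Device -> Host: the pitch arguments swap roles
cudaMemcpy2D(hostArray, a * sizeof(float), array, pitch,
             a * sizeof(float), b, cudaMemcpyDeviceToHost);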

The following code creates a 2D array and loops over its elements. It compiles cleanly, so you can use it as-is.

#include <stdio.h>
#include <cuda.h>
#define height 50
#define width 50

// Device code: walk the pitched 2D array row by row
__global__ void kernel(float* devPtr, size_t pitch)
{
    for (int r = 0; r < height; ++r) {
        // Rows are pitch bytes apart, so step through the buffer in bytes
        float* row = (float*)((char*)devPtr + r * pitch);
        for (int c = 0; c < width; ++c) {
            float element = row[c];
        }
    }
}

// Host code
int main()
{
    float* devPtr;
    size_t pitch;

    // Allocate width*height floats; the driver returns the row pitch in bytes
    cudaMallocPitch((void**)&devPtr, &pitch, width * sizeof(float), height);
    kernel<<<100, 512>>>(devPtr, pitch);

    cudaDeviceSynchronize();
    cudaFree(devPtr);
    return 0;
}



Answer 2:


Flatten it: make it one-dimensional. See how it's done here; a sketch of the flattened approach follows below.
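For reference, a minimal sketch of the flattened approach, reusing the width/height macros from Answer 1; indexing is done by hand as devPtr[r * width + c]:

#include <cuda.h>
#define height 50
#define width 50

// Each element lives at devPtr[r * width + c]; no pitch is needed
__global__ void kernel(float* devPtr)
{
    for (int r = 0; r < height; ++r) {
        for (int c = 0; c < width; ++c) {
            float element = devPtr[r * width + c];
        }
    }
}

int main()
{
    float hostArr[height * width];   // flattened 2D array on the host
    float* devPtr;
    cudaMalloc((void**)&devPtr, height * width * sizeof(float));

    // A plain cudaMemcpy suffices because the layout is tightly packed
    cudaMemcpy(devPtr, hostArr, height * width * sizeof(float), cudaMemcpyHostToDevice);
    kernel<<<1, 1>>>(devPtr);
    cudaMemcpy(hostArr, devPtr, height * width * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(devPtr);
    return 0;
}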




Answer 3:


Your device code could be faster. Try making better use of the available threads.

__global__ void kernel(float* devPtr, int pitch)
{
    int r = threadIdx.x;

    float* row = (float*)((char*)devPtr + r * pitch);
    for (int c = 0; c < width; ++c) {
         float element = row[c];
    }
}

Then calculate a block and thread configuration such that each thread handles a single element, as in the sketch below.
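A minimal sketch of that idea, again assuming the width/height macros from Answer 1: a 2D block and grid map one thread to one element, with a bounds check for partial blocks.

__global__ void kernel(float* devPtr, size_t pitch)
{
    int c = blockIdx.x * blockDim.x + threadIdx.x;   // column index
    int r = blockIdx.y * blockDim.y + threadIdx.y;   // row index

    if (r < height && c < width) {
        float* row = (float*)((char*)devPtr + r * pitch);
        float element = row[c];
    }
}

// Launch: 16x16 threads per block, enough blocks to cover the whole array
dim3 block(16, 16);
dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
kernel<<<grid, block>>>(devPtr, pitch);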



Source: https://stackoverflow.com/questions/1047369/allocate-2d-array-on-device-memory-in-cuda
