Efficiently Initializing Shared Memory Array in CUDA

Submitted 2019-12-12 06:18:55

Question


Note that this shared memory array is never written to after initialization, only read from.

As I have it, my shared memory gets initialized like:

__shared__ float TMshared[2592];
for (int i = 0; i < 2592; i++)
{
    TMshared[i] = TM[i];  // every thread redundantly copies the entire array
}
__syncthreads();

(TM is passed to the kernel at launch, so it is visible to all threads.)

You might have noticed that this is highly inefficient: there is no parallelization going on, and every thread within the block writes to the same locations.

Can someone please recommend a more efficient approach, or comment on whether this really needs optimizing, given that the shared array in question is relatively small?

Thanks!


Answer 1:


Use all threads to write independent locations; it will probably be quicker.

The example below assumes a 1D threadblock/grid:

#define SSIZE 2592

__shared__ float TMshared[SSIZE];

int lidx = threadIdx.x;
// each thread copies a strided subset of the array, advancing by blockDim.x per iteration
while (lidx < SSIZE){
    TMshared[lidx] = TM[lidx];
    lidx += blockDim.x;}

__syncthreads();
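For context, a minimal self-contained kernel built around this pattern might look like the sketch below. The kernel name, launch configuration, and the final output computation are hypothetical placeholders; only the strided copy into shared memory reflects the answer above.

__global__ void useSharedTM(const float *TM, float *out, int n)
{
    __shared__ float TMshared[SSIZE];

    // all threads in the block copy independent, strided elements
    for (int lidx = threadIdx.x; lidx < SSIZE; lidx += blockDim.x)
        TMshared[lidx] = TM[lidx];

    __syncthreads();  // make the staged data visible to every thread in the block

    int gidx = blockIdx.x * blockDim.x + threadIdx.x;
    if (gidx < n)
        out[gidx] = TMshared[gidx % SSIZE] * 2.0f;  // placeholder read-only use
}

int main()
{
    const int n = 1 << 16;
    float *TM, *out;
    cudaMallocManaged(&TM, SSIZE * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < SSIZE; i++) TM[i] = (float)i;

    useSharedTM<<<(n + 255) / 256, 256>>>(TM, out, n);
    cudaDeviceSynchronize();

    printf("out[100] = %f\n", out[100]);  // expect 200.0 with the placeholder math above
    cudaFree(TM); cudaFree(out);
    return 0;
}

Written as a for loop, the strided copy is equivalent to the while loop in the answer; either way, consecutive threads touch consecutive elements, so the global loads are coalesced.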


Source: https://stackoverflow.com/questions/24419822/efficiently-initializing-shared-memory-array-in-cuda
