Dynamically Allocating Memory on the GPU

Posted by 泪湿孤枕 on 2021-02-07 03:28:41

Question


Is it possible to dynamically allocate memory in a GPU's global memory from inside a kernel?
I don't know in advance how large my result will be, so I need a way to allocate memory for each part of it as I go. CUDA 4.0 also lets us use host RAM... is that a good idea, or will it hurt performance?


Answer 1:


It is possible to use malloc inside a kernel. Check the following example, which is taken from the NVIDIA CUDA Programming Guide:

#include <cstdio>

__global__ void mallocTest()
{
  // Each thread allocates 123 bytes from the device heap.
  char* ptr = (char*)malloc(123);
  printf("Thread %d got pointer: %p\n", threadIdx.x, ptr);
  free(ptr);
}

int main()
{
  // The device heap size must be set before launching any kernel that calls
  // malloc (cudaDeviceSetLimit replaces the deprecated cudaThreadSetLimit).
  cudaDeviceSetLimit(cudaLimitMallocHeapSize, 128 * 1024 * 1024);
  mallocTest<<<1, 5>>>();
  cudaDeviceSynchronize();
  return 0;
}

This will output:
Thread 0 got pointer: 00057020 
Thread 1 got pointer: 0005708c 
Thread 2 got pointer: 000570f8 
Thread 3 got pointer: 00057164 



Answer 2:


As of CUDA 4.0 you can also use the C++ new and delete operators instead of C's malloc and free.
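For illustration, here is a minimal sketch of the same pattern using new and delete[]; the kernel name, array size, and heap limit are illustrative choices, not taken from the original answer. Device-side new, like device-side malloc, requires compute capability 2.0 or higher and allocates from the same device heap:

#include <cstdio>

__global__ void newDeleteTest()
{
  // Allocate a per-thread array on the device heap with operator new[].
  // Device-side new returns nullptr on failure instead of throwing.
  int* data = new int[64];
  if (data != nullptr)
  {
    data[0] = threadIdx.x;
    printf("Thread %d wrote %d\n", threadIdx.x, data[0]);
    // Release with the matching operator delete[].
    delete[] data;
  }
}

int main()
{
  // The same cudaLimitMallocHeapSize limit governs device-side new.
  cudaDeviceSetLimit(cudaLimitMallocHeapSize, 128 * 1024 * 1024);
  newDeleteTest<<<1, 5>>>();
  cudaDeviceSynchronize();
  return 0;
}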



Source: https://stackoverflow.com/questions/5248726/dynamic-allocating-memory-on-gpu
