Can two processes share the same GPU memory? (CUDA)

Submitted by 让人想犯罪 on 2020-01-01 09:28:13

Question


In the CPU world this can be done via memory mapping. Can something similar be done on the GPU?

If two processes could share the same CUDA context, I think it would be trivial: just pass GPU memory pointers around. Is it possible to share the same CUDA context between two processes?

Another possibility I could think of is to map device memory to memory-mapped host memory. Since it is memory-mapped, it could be shared between two processes. Does this make sense / is it possible, and is there any overhead?


Answer 1:


CUDA MPS effectively allows CUDA activity emanating from 2 or more processes to share the same context on the GPU. However, this won't provide what you are asking for:

can two processes share the same GPU memory?

One method to achieve this is via CUDA IPC (interprocess communication) API.

This will allow you to share an allocated device memory region (i.e. a memory region allocated via cudaMalloc) between multiple processes. This answer contains additional resources to learn about CUDA IPC.
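To make the IPC flow concrete, here is a minimal two-process sketch. It uses `cudaIpcGetMemHandle` in the exporting process and `cudaIpcOpenMemHandle` in the importing process; the handle is passed through a plain file here (the filename `ipc_handle.bin` is an arbitrary choice for this example), but any IPC channel such as a pipe or socket works. Error checking is abbreviated.

```cuda
// Sketch: share a cudaMalloc'd region between two processes via CUDA IPC.
// Run "./a.out producer" in one process, then "./a.out consumer" in another.
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    if (argc < 2) { printf("usage: %s producer|consumer\n", argv[0]); return 1; }

    if (strcmp(argv[1], "producer") == 0) {
        int *d_buf;
        cudaMalloc(&d_buf, 256 * sizeof(int));          // device allocation to share
        int val = 42;
        cudaMemcpy(d_buf, &val, sizeof(int), cudaMemcpyHostToDevice);

        cudaIpcMemHandle_t handle;
        cudaIpcGetMemHandle(&handle, d_buf);            // export the allocation
        FILE *f = fopen("ipc_handle.bin", "wb");        // hand the handle to the peer
        fwrite(&handle, sizeof(handle), 1, f);
        fclose(f);

        getchar();                                      // keep allocation alive while consumer runs
        cudaFree(d_buf);
    } else {
        cudaIpcMemHandle_t handle;
        FILE *f = fopen("ipc_handle.bin", "rb");
        fread(&handle, sizeof(handle), 1, f);
        fclose(f);

        int *d_buf;
        cudaIpcOpenMemHandle((void **)&d_buf, handle,
                             cudaIpcMemLazyEnablePeerAccess);  // map into this process
        int val;
        cudaMemcpy(&val, d_buf, sizeof(int), cudaMemcpyDeviceToHost);
        printf("read %d from shared device memory\n", val);
        cudaIpcCloseMemHandle(d_buf);                   // unmap before exit
    }
    return 0;
}
```

Note that the importing process must call `cudaIpcCloseMemHandle` before the exporting process frees the allocation, and the exporting process remains the owner of the memory.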

However, according to my testing, this does not enable sharing of host pinned memory regions (e.g. a region allocated via cudaHostAlloc) between multiple processes. The memory region itself can be shared using ordinary IPC mechanisms available for your particular OS, but it cannot be made to appear as "pinned" memory in 2 or more processes (according to my testing).
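The attempt described above can be sketched as follows: each process maps the same POSIX shared-memory region and then tries to page-lock it with `cudaHostRegister`. Per the testing described in the answer, both processes can read and write the region, but it cannot be made to behave as pinned memory in more than one process. The shared-memory name `/gpu_shm` is an arbitrary choice for this example.

```cuda
// Sketch: share a host buffer between processes via POSIX shared memory,
// then attempt to register it as pinned memory. Run in each process and
// compare the reported status. Error checking is abbreviated.
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 1 << 20;
    int fd = shm_open("/gpu_shm", O_CREAT | O_RDWR, 0600);  // OS-level shared region
    ftruncate(fd, bytes);
    void *p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    // Attempt to page-lock the shared region so the CUDA runtime treats
    // it as pinned host memory; inspect the result in each process.
    cudaError_t err = cudaHostRegister(p, bytes, cudaHostRegisterDefault);
    printf("cudaHostRegister: %s\n", cudaGetErrorString(err));

    if (err == cudaSuccess) cudaHostUnregister(p);
    munmap(p, bytes);
    close(fd);
    shm_unlink("/gpu_shm");
    return 0;
}
```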



Source: https://stackoverflow.com/questions/42032331/can-two-process-shared-same-gpu-memory-cuda
