Read/Write OpenCL memory buffers on multiple GPU in a single context

Submitted by [亡魂溺海] on 2019-12-24 06:35:16

Question


Assume a system with two distinct GPUs from the same vendor, so both can be accessed through a single OpenCL platform. Given the following simplified OpenCL code:

float* someRawData;

cl_device_id gpu1 = clGetDeviceIDs(0,...);
cl_device_id gpu2 = clGetDeviceIDs(1,...);
cl_context ctx = clCreateContext(gpu1,gpu2,...);

cl_command_queue queue1 = clCreateCommandQueue(ctx,gpu1,...);
cl_command_queue queue2 = clCreateCommandQueue(ctx,gpu2,...);

cl_mem gpuMem = clCreateBuffer(ctx, CL_MEM_READ_WRITE, ...);
clEnqueueWriteBuffer(queue1,gpuMem,...,someRawData,...);
clFinish(queue1);

At the end of the execution, will someRawData be in the memory of both GPUs, or only in the memory of gpu1?


Answer 1:


It is up to the implementation where the data will reside after calling clFinish(), but most likely it will be on the GPU referenced by the queue. This abstraction also makes it possible to access gpuMem from a kernel launched on queue2 without an explicit data transfer, as the sketch below illustrates.
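
For reference, a minimal, self-contained version of the host-side setup might look like this (a sketch only: the platform choice, buffer size, blocking write, and in-order queues are assumptions, and error checking is omitted):

#include <CL/cl.h>

int main(void) {
    float someRawData[1024] = {0};

    /* Pick the first platform and two GPU devices from it (assumption). */
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    cl_device_id gpus[2];
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 2, gpus, NULL);

    /* One context shared by both GPUs, one command queue per device. */
    cl_context ctx = clCreateContext(NULL, 2, gpus, NULL, NULL, NULL);
    cl_command_queue queue1 = clCreateCommandQueue(ctx, gpus[0], 0, NULL);
    cl_command_queue queue2 = clCreateCommandQueue(ctx, gpus[1], 0, NULL);

    /* The buffer belongs to the context, not to a particular device. */
    cl_mem gpuMem = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                   sizeof(someRawData), NULL, NULL);

    /* Write through queue1; the implementation decides where the bytes
       physically end up, typically on the device behind queue1. */
    clEnqueueWriteBuffer(queue1, gpuMem, CL_TRUE, 0,
                         sizeof(someRawData), someRawData, 0, NULL, NULL);
    clFinish(queue1);

    /* A kernel enqueued on queue2 could now read gpuMem directly; the
       runtime migrates the buffer to the second GPU as needed, without an
       explicit copy on queue2. */

    clReleaseMemObject(gpuMem);
    clReleaseCommandQueue(queue2);
    clReleaseCommandQueue(queue1);
    clReleaseContext(ctx);
    return 0;
}

Because the buffer is a context-level object, the same gpuMem handle can be passed as a kernel argument on either queue; where and when the data actually moves between the two GPUs is left to the OpenCL runtime.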



Source: https://stackoverflow.com/questions/11093826/read-write-opencl-memory-buffers-on-multiple-gpu-in-a-single-context
