Overlapping transfers and device computation in OpenCL

Submitted by 喜你入骨 on 2019-12-04 11:04:19

Question


I am a beginner with OpenCL and I am having difficulty understanding something. I want to improve the transfer of an image between host and device, and I made a diagram to explain myself better.

Top: what I have now | Bottom: what I want. HtD (Host to Device) and DtH (Device to Host) are memory transfers; K1 and K2 are kernels.

I thought about using mapped memory, but isn't the first transfer (host to device) done with the clSetKernelArg() command? Or do I have to cut my input image into sub-images and use mapping to get the output image?

Thanks.

Edit: More information

K1 processes the input image (MemInput). K2 processes the output image produced by K1.

So I want to transfer MemInput in several pieces for K1, and read back and save on the host the MemOutput processed by K2.


Answer 1:


As you may have already seen, you transfer data from host to device using clEnqueueWriteBuffer and similar functions.

All the commands containing the keyword 'enqueue' share a special property: they are not executed immediately, but only when you trigger them, e.g. with clFinish, clFlush, clEnqueueWaitForEvents, by calling clEnqueueWriteBuffer in blocking mode, and so on.

This means that everything happens asynchronously, and you have to synchronise it using event objects. Since everything (may) happen at once, you could do something like this (the operations on each line happen at the same time):

  1. Transfer Data A
  2. Process Data A & Transfer Data B
  3. Process Data B & Transfer Data C & Retrieve Data A'
  4. Process Data C & Retrieve Data B'
  5. Retrieve Data C'

Remember: Enqueueing Tasks without Event-Objects may result in a simultaneous execution of all enqueued elements!

To make sure that "Process Data B" doesn't happen before "Transfer Data B", you have to retrieve an event object from clEnqueueWriteBuffer and supply it as an object to wait for, e.g. to clEnqueueNDRangeKernel:

cl_event evt;
// Non-blocking write; evt will signal completion of the transfer.
// (queue, size, globalSize etc. stand in for the arguments elided above.)
clEnqueueWriteBuffer(queue, bufferB, CL_FALSE, 0, size, bufferBdata, 0, NULL, &evt);
// The kernel waits for evt before it starts executing.
clEnqueueNDRangeKernel(queue, kernelB, 1, NULL, &globalSize, NULL, 1, &evt, NULL);

Instead of supplying NULL, each command can of course wait on certain event objects AND generate a new event object of its own. The second-to-last parameter is an array, so you can even wait for several events!


EDIT: To summarise the comments below: transferring data - which command acts where?

          CPU                               GPU
                                      BufA       BufB
array[] = {...}
clCreateBuffer()         ----->      [     ]              // Create (empty) buffer in GPU memory *
clCreateBuffer()         ----->      [     ]    [     ]   // Create (empty) buffer in GPU memory *
clEnqueueWriteBuffer()   -arr->      [array]    [     ]   // Copy from CPU to GPU
clEnqueueCopyBuffer()                [array] -> [array]   // Copy from GPU to GPU
clEnqueueReadBuffer()    <-arr-      [array]    [array]   // Copy from GPU to CPU

* You may initialise a buffer directly at creation time by providing data through the host_ptr parameter (together with CL_MEM_COPY_HOST_PTR).




Answer 2:


Many OpenCL platforms don't support out-of-order command queues; the way most vendors say to do overlapped DMA and compute is to use multiple (in-order) command queues. You can use events to ensure dependencies are done in the right order. NVIDIA has example code that demonstrates overlapped DMA and compute done this way (although it is suboptimal; it can go slightly faster than they claim).




Answer 3:


When you create your command queue, you need to enable out-of-order execution via the properties argument; see CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE in the clCreateCommandQueue documentation.

This will let you set up your smaller chains of tasks and link them to each other. This is all done on the host.

host pseudo code:

for i in taskChainList
  enqueueWriteDataFromHost
  enqueueKernel(K1)
  enqueueKernel(K2)
  enqueueReadFromDevice
clFinish

When you are queueing the tasks, put the previous task's cl_event into each task's event_wait_list. The 'enqueueWriteDataFromHost' above wouldn't have to wait for any other event to begin.

Alternately,

cl_event prevWriteEvent;
cl_event newWriteEvent;
for i in taskChainList
  // Pass &prevWriteEvent as the event_wait_list, and let the enqueue call
  // fill in newWriteEvent. Each write then waits on the one before it.
  enqueueWriteDataFromHost
  enqueueKernel(K1)
  enqueueKernel(K2)
  // The reads shouldn't come back out of order, but they could (if the last
  // block of processing were much faster than the second-to-last, for example).
  enqueueReadFromDevice
clFinish



Answer 4:


The proper way (as I do it, and it works perfectly) is to create two command queues, one for I/O and another for processing. Both must be in the same context.

You can use events to control the scheduling of both queues, and the operations will execute in parallel (if they can). This works even if the device does not support out-of-order queues.

For example, you can enqueue all 100 image uploads in the I/O queue and collect their events. Then set these events as triggers for the kernels, and let the kernel events in turn trigger the DtH transfers. Even if you enqueue all these jobs AT ONCE, they will be processed in order and with parallel I/O.



Source: https://stackoverflow.com/questions/12389321/overlapping-transfers-and-device-computation-in-opencl
