How can a __global__ function RETURN a value or BREAK out the way C/C++ does?

夕颜 2020-12-28 20:37

Recently I've been doing string-comparison jobs in CUDA, and I wonder how a __global__ function can return a value when it finds the exact string that I'm looking for.

3 Answers
  •  甜味超标
    2020-12-28 21:19

    The global function doesn't really contain a great number of threads the way you might think. It is simply a kernel, a function that runs on the device, which is called with launch parameters that specify the thread model. The model CUDA employs is a 2D grid of blocks, with a 3D arrangement of threads inside each block of the grid.

    For the type of problem you have, nothing beyond a 1D grid of 1D thread blocks is really necessary, because a string pool doesn't naturally split into 2D the way other problems do (e.g. matrix multiplication).

    I'll walk through a simple example: say there are 100 strings in the pool, and you want them all checked in parallel rather than sequentially.

    //main
    //Should cudaMalloc and cudaMemcpy the string pool, the target string,
    //and answerIdx over to the device before this code
    dim3 dimGrid(10, 1);   // 1D grid with 10 blocks
    dim3 dimBlocks(10, 1); // 1D blocks with 10 threads each
    fun<<<dimGrid, dimBlocks>>>(d_strings, d_stringToMatch, d_answerIdx); // device pointers set up above
    //cudaMemcpy answerIdx back to an integer on the host

    //kernel (not positive on these types as my CUDA is very rusty)
    __global__ void fun(char *strings[], char *stringToMatch, int *answerIdx)
    {
        int idx = blockIdx.x * 10 + threadIdx.x; // 10 threads per block

        //Obviously use whatever function you've been using for string comparison;
        //I'm just using == for example's sake
        if(strings[idx] == stringToMatch)
        {
           *answerIdx = idx;
        }
    }
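
    The snippet above leaves out the host-side setup. As a rough sketch of what that setup could look like (names such as h_strings, d_strings, d_target and d_answerIdx are illustrative assumptions, not from the original answer; and since the kernel compares pointers with ==, a real device-side comparison would still be needed for it to actually report a match):

    //Hedged host-side sketch: allocate the string pool on the device,
    //launch the kernel above, and copy the answer index back.
    #include <cstdio>
    #include <cstring>
    #include <cuda_runtime.h>

    __global__ void fun(char *strings[], char *stringToMatch, int *answerIdx); // kernel above

    int main()
    {
        const int numStrings = 100;
        const char *h_strings[numStrings];
        for (int i = 0; i < numStrings; ++i) h_strings[i] = "dummy";
        h_strings[42] = "needle";                 // plant the string we will search for

        // Copy each string to the device and remember its device pointer
        char *h_devPtrs[numStrings];
        for (int i = 0; i < numStrings; ++i) {
            size_t len = strlen(h_strings[i]) + 1;
            cudaMalloc((void **)&h_devPtrs[i], len);
            cudaMemcpy(h_devPtrs[i], h_strings[i], len, cudaMemcpyHostToDevice);
        }

        // Device array holding those pointers (this is what char *strings[] receives)
        char **d_strings;
        cudaMalloc((void **)&d_strings, numStrings * sizeof(char *));
        cudaMemcpy(d_strings, h_devPtrs, numStrings * sizeof(char *), cudaMemcpyHostToDevice);

        // Target string and result slot (-1 means "not found")
        const char *target = "needle";
        char *d_target;
        cudaMalloc((void **)&d_target, strlen(target) + 1);
        cudaMemcpy(d_target, target, strlen(target) + 1, cudaMemcpyHostToDevice);

        int h_answerIdx = -1;
        int *d_answerIdx;
        cudaMalloc((void **)&d_answerIdx, sizeof(int));
        cudaMemcpy(d_answerIdx, &h_answerIdx, sizeof(int), cudaMemcpyHostToDevice);

        dim3 dimGrid(10, 1);   // 10 blocks
        dim3 dimBlocks(10, 1); // 10 threads per block -> 100 threads, one per string
        fun<<<dimGrid, dimBlocks>>>(d_strings, d_target, d_answerIdx);

        // "Returning" a value: copy the answer index back to the host
        cudaMemcpy(&h_answerIdx, d_answerIdx, sizeof(int), cudaMemcpyDeviceToHost);
        printf("match at index %d\n", h_answerIdx);
        return 0;
    }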
    

    The kernel above is obviously not the most efficient, and it is probably not the exact way to pass parameters and work with memory in CUDA, but I hope it gets the point across of splitting the workload: 'global' functions get executed on many different cores, so you can't really tell them all to stop. There may be a way I'm not familiar with, but the speed-up you get just by dividing the workload onto the device (in a sensible fashion, of course) will already give you tremendous performance improvements. To get a sense of the thread model, I highly recommend reading the CUDA documentation on Nvidia's site; it will help tremendously and teach you the best way to set up the grid and blocks for optimal performance.
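
    On the original question of returning a value or breaking out: a kernel itself cannot return a value or break out of the whole launch the way a C/C++ function can, but a common pattern is to have each thread write its result through a pointer argument (as above) and, if only the first match matters, record it atomically and let remaining threads skip their work once it is set. A rough sketch under those assumptions (devStrEqual, matchKernel and the launch shape below are illustrative names, not from the answer above):

    //Hedged sketch: record the first match atomically; later threads bail out early.
    __device__ bool devStrEqual(const char *a, const char *b)
    {
        // simple device-side string comparison (replaces the == placeholder)
        while (*a && *a == *b) { ++a; ++b; }
        return *a == *b;
    }

    __global__ void matchKernel(char *strings[], char *stringToMatch,
                                int numStrings, int *answerIdx)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx >= numStrings) return;   // "breaking out" for one thread is just returning

        // Best-effort early exit: if some thread already stored a match, skip the work.
        // This is only an optimisation hint, not a guarantee of ordering.
        if (*answerIdx != -1) return;

        if (devStrEqual(strings[idx], stringToMatch)) {
            // Only a thread that still sees -1 wins, so exactly one index is recorded
            atomicCAS(answerIdx, -1, idx);
        }
    }

    It could be launched, for example, as matchKernel<<<(numStrings + 255) / 256, 256>>>(d_strings, d_target, numStrings, d_answerIdx); with d_answerIdx initialised to -1 on the host, and the result read back with cudaMemcpy as before.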
