Can/Should I run this code of a statistical application on a GPU?

猫巷女王i  2021-01-30 04:24

I'm working on a statistical application containing approximately 10 to 30 million floating-point values in an array.

Several methods performing different, but independent, …

5 Answers
  •  情深已故
    2021-01-30 04:42

    UPDATE GPU Version

    __global__ void hash(float *largeFloatingPointArray, int largeFloatingPointArraySize,
                         int *dictionary, int size, int num_blocks)
    {
        int x = threadIdx.x + blockIdx.x * blockDim.x; // Each thread handles one (or more)
        float y;                                       // dictionary positions
        int noOfOccurrences = 0;
        int a;
        
        while( x < size )            // While there is work to do, each thread will:
        {
            dictionary[x] = 0;       // Initialize the dictionary position it works on
            noOfOccurrences = 0;    
    
            for(int j = 0; j < largeFloatingPointArraySize; j++) // Scan the float array,
            {                                                    // counting the values that
                                                                 // map to this position
               y = largeFloatingPointArray[j];  // Take a candidate from the float array
               y *= 10000;                      // e.g. if y = 0.0001f, y becomes 1.0f
               a = y + 0.5;                     // round to the nearest int, here a = 1
               if (a == x) noOfOccurrences++;    
            }                                      
                                                        
            dictionary[x] += noOfOccurrences; // Store how many times this "float" appears
    
            x += blockDim.x * gridDim.x;  // Grid-stride: move to the next dictionary position
        }
    }
    

    I have only tested this version with smaller inputs, because I am testing on my laptop. Nevertheless, it is working; more tests are still needed.
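
    For reference, here is a minimal host-side sketch of how this kernel could be allocated and launched. The block/thread counts are arbitrary, hostData and hostDict are assumed host arrays, and the 1,000,000-entry dictionary assumes the 0.0001 step over [0, 100) discussed further down:

    int   N = 30000000;              // number of floats
    int   DICT_SIZE = 1000000;       // one bucket per 0.0001 step in [0, 100)
    float *d_data;
    int   *d_dict;
    
    cudaMalloc(&d_data, N * sizeof(float));
    cudaMalloc(&d_dict, DICT_SIZE * sizeof(int));
    cudaMemcpy(d_data, hostData, N * sizeof(float), cudaMemcpyHostToDevice);
    
    // The kernel zeroes each dictionary entry itself, so no cudaMemset is needed
    hash<<<128, 256>>>(d_data, N, d_dict, DICT_SIZE, 128);
    cudaDeviceSynchronize();
    
    cudaMemcpy(hostDict, d_dict, DICT_SIZE * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d_data);
    cudaFree(d_dict);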

    UPDATE Sequential Version

    I just wrote this naive version that executes your algorithm for an array with 30,000,000 elements in less than 20 seconds (including the time taken by the function that generates the data).

    This naive version first sorts your array of floats. Afterwards, it walks through the sorted array, checks how many times a given value appears, and then puts that value in a dictionary along with the number of times it appeared.

    You can use a sorted map (std::map) instead of the unordered_map that I used; see the sketch below.
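
    Roughly, the only change would be this typedef (plus using Mymap consistently in the function signatures below):

    #include <map>
    
    typedef std::map<float, int> Mymap; // keys iterate in ascending (sorted) order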

    Here's the code:

    #include <stdio.h>
    #include <stdlib.h>
    #include "cuda.h"
    #include <algorithm>
    #include <iostream>
    #include <tr1/unordered_map>
    
    
    typedef std::tr1::unordered_map<float, int> Mymap;
    
    
    void generator(float *data, long int size)
    {
        float LO = 0.0;
        float HI = 100.0;
        
        for(long int i = 0; i < size; i++)
            data[i] = LO + (float)rand()/((float)RAND_MAX/(HI-LO));
    }
    
    void print_array(float *data, long int size)
    {
    
        for(long int i = 0; i < size; i++)
            printf("%f\n",data[i]);
        
    }
    
    std::tr1::unordered_map<float, int> fill_dict(float *data, int size)
    {
        float previous = data[0];
        int count = 1;
        std::tr1::unordered_map<float, int> dict;
        
        for(long int i = 1; i < size; i++)
        {
            if(previous == data[i])
                count++;
            else
            {
              dict.insert(Mymap::value_type(previous,count));
              previous = data[i];
              count = 1;         
            }
            
        }
        dict.insert(Mymap::value_type(previous,count)); // add the last member
        return dict;
        
    }
    
    void printMAP(std::tr1::unordered_map<float, int> dict)
    {
       for(std::tr1::unordered_map<float, int>::iterator i = dict.begin(); i != dict.end(); i++)
       {
          std::cout << "key(float): " << i->first << ", value(int): " << i->second << std::endl;
       }
    }
    
    
    int main(int argc, char** argv)
    {
      int size = 1000000; 
      if(argc > 1) size = atoi(argv[1]);
      printf("Size = %d",size);
      
      float data[size];
      using namespace __gnu_cxx;
      
      std::tr1::unordered_map<float, int> dict;
      
      generator(data,size);
      
      std::sort(data, data + size);
      dict = fill_dict(data,size);
      
      delete[] data;
      return 0;
    }
    

    If you have the Thrust library installed on your machine, you should use this:

    #include <thrust/sort.h>
    thrust::sort(data, data + size);
    

    instead of this

    std::sort(data, data + size);
    

    It should be noticeably faster.
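
    Note that, as far as I know, calling thrust::sort on plain host pointers like this uses Thrust's host backend; to actually sort on the GPU you can copy the data into a thrust::device_vector first. A rough sketch (the helper name gpu_sort is just illustrative):

    #include <thrust/device_vector.h>
    #include <thrust/sort.h>
    #include <thrust/copy.h>
    
    void gpu_sort(float *data, long int size)
    {
        thrust::device_vector<float> d_data(data, data + size); // host -> device copy
        thrust::sort(d_data.begin(), d_data.end());             // sorts on the GPU
        thrust::copy(d_data.begin(), d_data.end(), data);       // device -> host copy
    }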

    Original Post

    I'm working on a statistical application which has a large array containing 10 to 30 million floating-point values.

    Is it possible (and does it make sense) to utilize a GPU to speed up such calculations?

    Yes, it is. A month ago, I ran an entire molecular dynamics (MD) simulation on a GPU. One of the kernels, which calculated the force between pairs of particles, received as parameters six arrays, each with 500,000 doubles, for a total of 3 million doubles (~22 MB).

    So if you are planning to keep 30 million floats there (30 million × 4 bytes ≈ 114 MB of global memory), it will not be a problem.

    Can the number of calculations be an issue in your case? Based on my experience with MD, I would say no. The sequential MD version takes about 25 hours to complete, while the GPU version takes 45 minutes. You said your application takes a couple of hours, and based on your code example it looks lighter than the MD.

    Here's the force calculation example:

    __global__ void add(double *fx, double *fy, double *fz,
                        double *x, double *y, double *z, ...)
    {
        int pos = (threadIdx.x + blockIdx.x * blockDim.x);
    
        ...
    
        while(pos < particles)
        {
            for(i = 0; i < particles; i++)
            {
                if( /* inside the same cutoff radius */ )
                {
                    // calculate force
                }
            }
            pos += blockDim.x * gridDim.x;
        }
    }
    

    A simple example of CUDA code is the element-wise sum of two arrays:

    In C:

    for(int i = 0; i < N; i++)
        c[i] = a[i] + b[i]; 
    

    In CUDA:

    __global__ void add(int *c, int *a, int *b, int N)
    {
        int pos = threadIdx.x + blockIdx.x * blockDim.x;
        for(; pos < N; pos += blockDim.x * gridDim.x)
            c[pos] = a[pos] + b[pos];
    }
    

    In CUDA you essentially take each iteration of the for loop and assign it to a thread:

    threadIdx.x + blockIdx.x * blockDim.x;
    

    Each block has an ID from 0 to N-1 (where N is the number of blocks), and each block has X threads, with IDs from 0 to X-1.

    This expression gives you the for-loop iteration each thread will compute, based on its own ID and the ID of the block it belongs to; blockDim.x is the number of threads per block.

    So if you have 2 blocks, each with 10 threads, and N = 40, then:

    Thread 0 Block 0 will execute pos 0
    Thread 1 Block 0 will execute pos 1
    ...
    Thread 9 Block 0 will execute pos 9
    Thread 0 Block 1 will execute pos 10
    ....
    Thread 9 Block 1 will execute pos 19
    Thread 0 Block 0 will execute pos 20
    ...
    Thread 0 Block 1 will execute pos 30
    Thread 9 Block 1 will execute pos 39
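
    To make this concrete, a host-side launch that produces this mapping would look something like the sketch below (error checking and the copies of a and b are omitted):

    int N = 40;
    int *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, N * sizeof(int));
    cudaMalloc(&d_b, N * sizeof(int));
    cudaMalloc(&d_c, N * sizeof(int));
    // ... cudaMemcpy the contents of a and b into d_a and d_b ...
    
    add<<<2, 10>>>(d_c, d_a, d_b, N); // <<<number of blocks, threads per block>>>
    cudaDeviceSynchronize();
    
    // ... cudaMemcpy d_c back to the host, then cudaFree the three buffers ...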
    

    Looking at your current code, I have made this draft of what your code could look like in CUDA:

    __global__ void hash(float *largeFloatingPointArray,
                         int largeFloatingPointArraySize, int *dictionary)
    {
        // You can turn the dictionary into one array of int,
        // where each position represents one float value.
        // Since x = 0f; x < 100f; x += 0.0001f,
        // you can associate each x with a different position
        // in the dictionary:
    
        // pos 0 has the same meaning as 0f,
        // pos 1 means the float 0.0001f,
        // pos 2 means the float 0.0002f, etc.
        // Then you use the int at each position
        // to count how many times that "float" appeared.
    
        int x = blockIdx.x;  // Each block takes a different x to work on
        float y;
    
        while( x < 1000000 ) // x < 100f (for an incremental step of 0.0001f)
        {
            int noOfOccurrences = 0;
            float z = x * 0.0001f; // Convert the position x back to the float it represents
    
            // Each thread of the block scans a strided slice of
            // largeFloatingPointArray, looking for values equal to z
            for(int j = threadIdx.x; j < largeFloatingPointArraySize; j += blockDim.x)
            {
                y = largeFloatingPointArray[j];
                if (z == y)
                {
                    noOfOccurrences++;
                }
            }
    
            atomicAdd(&dictionary[x], noOfOccurrences); // Each thread adds its own count
    
            x += gridDim.x; // Move this block to the next x
        }
    }
    

    You have to use atomicAdd because many threads may try to update the same dictionary[x] entry concurrently, so you have to ensure mutual exclusion.

    This is just one approach; you can even assign the iterations of the outer loop to the threads instead of the blocks.
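
    If the atomic updates ever become a bottleneck, yet another option is to reduce the per-thread counts in shared memory first, so that only one atomicAdd per block reaches dictionary[x]. A rough, untested sketch of what the end of the while-loop body could look like, assuming a power-of-two block size of 256:

    __shared__ int partial[256];            // one slot per thread of the block
    
    partial[threadIdx.x] = noOfOccurrences; // each thread stores its own count
    __syncthreads();
    
    for(int stride = blockDim.x / 2; stride > 0; stride >>= 1)
    {
        if (threadIdx.x < stride)
            partial[threadIdx.x] += partial[threadIdx.x + stride];
        __syncthreads();                    // wait for this reduction step
    }
    
    if (threadIdx.x == 0)                   // a single update per block and per x
        atomicAdd(&dictionary[x], partial[0]);
    __syncthreads();                        // make partial[] safe to reuse next iteration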

    Tutorials

    The Dr. Dobb's Journal series CUDA: Supercomputing for the masses by Rob Farber is excellent and covers just about everything in its fourteen installments. It also starts rather gently and is therefore fairly beginner-friendly.

    And others:

    • Volume I: Introduction to CUDA Programming
    • Getting started with CUDA
    • CUDA Resources List

    Take a look at the last item; you will find many links for learning CUDA.

    OpenCL: OpenCL Tutorials | MacResearch
