CUDA - CUBLAS: issues solving many (3x3) dense linear systems

Submitted by [亡魂溺海] on 2021-02-10 15:44:28

Question


I am trying to solve about 1,200,000 linear systems (3x3, Ax=b) with CUDA 10.1, in particular using the cuBLAS library. I took a cue from this (useful!) post and rewrote the suggested code in a Unified Memory version. The algorithm first performs an LU factorization using cublas<t>getrfBatched(), followed by two consecutive invocations of cublas<t>trsmBatched() which solve the lower and then the upper triangular systems. The code is attached below. It works correctly up to about 10,000 matrices; in that case it takes ~570 ms to perform the LU factorization (on an NVIDIA GeForce 930MX) and ~311 ms to solve the systems.

My issues/questions are:

  1. Overload issue: it crashes when allocating memory for more than 10k matrices. Why? How can I improve my code so that it solves the whole batch of 1.2 million matrices?

  2. Time issue: my goal would be to solve all of these systems in less than 1 second. Am I currently following the correct approach? Any suggestions otherwise?

  3. Would it be possible and/or useful, and if so, how, to use 'streams' of batches of 10k matrices?

Code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>
#include <ctime>
#include <ratio>
#include <chrono>
#include <random>
#include <time.h>
#include <math.h>

// CUDA
#include <cuda.h>
#include <cuda_runtime.h>
#include "device_launch_parameters.h"
#include <cusolverDn.h>

//#include "Utilities.cuh"

using namespace std;
using namespace std::chrono;

/************************************/
/* COEFFICIENT REARRANGING FUNCTION */
/************************************/
// Applies the row interchanges recorded by getrfBatched to each RHS vector.
// cuBLAS stores the pivot sequence of matrix nm starting at pivotArray[nm * N],
// using 1-based row indices (LAPACK convention).
void rearrange(double** vec, int* pivotArray, int N, int numMatrices) {
  for (int nm = 0; nm < numMatrices; nm++) {
    for (int i = 0; i < N; i++) {
      double temp = vec[nm][i];
      vec[nm][i] = vec[nm][pivotArray[nm * N + i] - 1];
      vec[nm][pivotArray[nm * N + i] - 1] = temp;
    }
  }
}


/************************************/
/* MAIN  */
/************************************/
int main() {

  const int N = 3; 
  const int numMatrices = 10000; // I want 1200000

  // random generator to fill matrices and coefficients
  random_device device;
  mt19937 generator(device());
  uniform_real_distribution<double> distribution(1., 5.);

  //ALLOCATE MEMORY - using unified memory
  double** h_A;
  cudaMallocManaged(&h_A, sizeof(double*) * numMatrices);
  for (int nm = 0; nm < numMatrices; nm++) {
    cudaMallocManaged(&(h_A[nm]), sizeof(double) * N * N);
  }

  double** h_b;
  cudaMallocManaged(&h_b, sizeof(double*) * numMatrices);
  for (int nm = 0; nm < numMatrices; nm++) {
    cudaMallocManaged(&(h_b[nm]), sizeof(double) * N );
  }
  cout << " memory allocated" << endl;

  // FILL MATRICES
  for (int nm = 0; nm < numMatrices; nm++) {
    for (int i = 0; i < N; i++) {
      for (int j = 0; j < N; j++) {
        h_A[nm][j * N + i] = distribution(generator);
      }
    }
  }
  cout << " Matrix filled " << endl;

  // FILL COEFFICIENTS
  for (int nm = 0; nm < numMatrices; nm++) {
    for (int i = 0; i < N; i++) {
      h_b[nm][i] = distribution(generator);
    }
  }
  cout << " Coeff. vector filled " << endl;
  cout << endl;

  // --- CUDA solver initialization
  cublasHandle_t cublas_handle;
  cublasCreate_v2(&cublas_handle);
  int* PivotArray;
  cudaMallocManaged(&PivotArray, N * numMatrices * sizeof(int));
  int* infoArray;
  cudaMallocManaged(&infoArray, numMatrices * sizeof(int));

  //CUBLAS LU SOLVER
  high_resolution_clock::time_point t1 = high_resolution_clock::now();
  cublasDgetrfBatched(cublas_handle, N, h_A, N, PivotArray, infoArray, numMatrices);
  cudaDeviceSynchronize();
  high_resolution_clock::time_point t2 = high_resolution_clock::now();
  duration<double> time_span = duration_cast<duration<double>>(t2 - t1);
  cout << "It took " << time_span.count() * 1000. << " milliseconds." << endl;


  for (int i = 0; i < numMatrices; i++)
    if (infoArray[i] != 0) {
      fprintf(stderr, "Factorization of matrix %d Failed: Matrix may be singular\n", i);
    }

 // rearrange coefficients according to the pivot sequence
 // (temporarily on the CPU; this step will become a GPU kernel as well)
  high_resolution_clock::time_point tA = high_resolution_clock::now();
  rearrange(h_b, PivotArray, N, numMatrices);
  high_resolution_clock::time_point tB = high_resolution_clock::now();
  duration<double> time_spanA = duration_cast<duration<double>>(tB - tA);
  cout << "rearrangement took " << time_spanA.count() * 1000. << " milliseconds." << endl;

//SOLVE LOWER AND UPPER TRIANGULAR SYSTEMS
  // --- Each call solves one triangular system per matrix in the batch
  // --- The result overwrites b
  const double alpha = 1.0;
  high_resolution_clock::time_point t3 = high_resolution_clock::now();
  cublasDtrsmBatched(cublas_handle, CUBLAS_SIDE_LEFT, CUBLAS_FILL_MODE_LOWER,
                     CUBLAS_OP_N, CUBLAS_DIAG_UNIT, N, 1, &alpha,
                     h_A, N, h_b, N, numMatrices);
  cublasDtrsmBatched(cublas_handle, CUBLAS_SIDE_LEFT, CUBLAS_FILL_MODE_UPPER,
                     CUBLAS_OP_N, CUBLAS_DIAG_NON_UNIT, N, 1, &alpha,
                     h_A, N, h_b, N, numMatrices);
  cudaDeviceSynchronize();
  high_resolution_clock::time_point t4 = high_resolution_clock::now();
  duration<double> time_span2 = duration_cast<duration<double>>(t4 - t3);
  cout << "second step took " << time_span2.count() * 1000. << " milliseconds." << endl;
  
  // --- Free resources (each cudaMallocManaged needs a matching cudaFree)
  for (int nm = 0; nm < numMatrices; nm++) { cudaFree(h_A[nm]); cudaFree(h_b[nm]); }
  cudaFree(h_A);
  cudaFree(h_b);
  cudaFree(PivotArray);
  cudaFree(infoArray);
  cublasDestroy_v2(cublas_handle);

  cudaDeviceReset();

  return 0;
}


Answer 1:


Overload issue: it crashes when allocating memory for more than 10k matrices. Why? How can I improve my code so that it solves the whole batch of 1.2 million matrices?

In my opinion, the biggest problem in your code is that you are making horribly inefficient use of managed memory in these key allocation loops:

  //ALLOCATE MEMORY - using unified memory
  double** h_A;
  cudaMallocManaged(&h_A, sizeof(double*) * numMatrices);
  for (int nm = 0; nm < numMatrices; nm++) {
    cudaMallocManaged(&(h_A[nm]), sizeof(double) * N * N);
  }

  double** h_b;
  cudaMallocManaged(&h_b, sizeof(double*) * numMatrices);
  for (int nm = 0; nm < numMatrices; nm++) {
    cudaMallocManaged(&(h_b[nm]), sizeof(double) * N );
  }

The problem is that each call to cudaMallocManaged has a minimum granularity. That means that if you request 1 byte, it will actually use up something like 4 KB of memory (I believe that is the Linux allocation granularity; it looks like you are on Windows, and the Windows allocation granularity may be larger). In addition, this creates a hugely inefficient data-transfer load on the managed-memory subsystem when you launch a kernel (and kernels will be launched by your cublas calls).

A much better way is to make a single large allocation rather than allocating in a loop, and then subdivide that allocation with pointer arithmetic. The code could look like this:

  //ALLOCATE MEMORY - using unified memory
  double** h_A;
  cudaMallocManaged(&h_A, sizeof(double*) * numMatrices);
  cudaMallocManaged(&(h_A[0]), sizeof(double)*numMatrices*N*N);
  for (int nm = 1; nm < numMatrices; nm++) {
    h_A[nm] = h_A[nm-1]+ N * N;
  }

  double** h_b;
  cudaMallocManaged(&h_b, sizeof(double*) * numMatrices);
  cudaMallocManaged(&(h_b[0]), sizeof(double) * numMatrices * N);
  for (int nm = 1; nm < numMatrices; nm++) {
    h_b[nm] = h_b[nm-1] + N;
  }

Another benefit of this is that the allocation process runs quite a bit faster.
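A side benefit: cleanup becomes just as simple, since h_A[0] and h_b[0] now own all of the matrix and vector storage. A minimal sketch of the matching cleanup:

  // --- Free resources under the single-allocation scheme:
  // h_A[0]/h_b[0] own all the element storage, so one cudaFree each
  // releases it, plus one for each array of pointers.
  cudaFree(h_A[0]);
  cudaFree(h_A);
  cudaFree(h_b[0]);
  cudaFree(h_b);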

Time issue: my goal would be to solve all of these systems in less than 1 second. Am I currently following the correct approach? Any suggestions otherwise?

With that change to your code, I am able to run successfully on a 1 GB GPU (GeForce GT 640), with:

const int numMatrices = 1200000;

with output like this:

$ ./t81
 memory allocated
 Matrix filled
 Coeff. vector filled

It took 70.3032 milliseconds.
rearrangement took 60.02 milliseconds.
second step took 156.067 milliseconds.

Your GPU may be somewhat slower, but I think the overall timing should easily come in at less than 1 second.

Would it be possible and/or useful, and if so, how, to use 'streams' of batches of 10k matrices?

With the above change, I don't think you need to worry about this. Streams won't help here with overlap of compute operations. They could help with copy/compute overlap (although maybe not much on your GPU), but this would be hard to architect on Windows with managed memory. For Windows usage, I would probably suggest switching to the ordinary CUDA separation of host and device memory if you want to explore copy/compute overlap.
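If you do go down that road, the general shape would be something like the sketch below. It is an illustration of the chunking idea only, not tested code: it assumes the inputs live in flat pinned host arrays (h_Aflat, h_bflat, allocated with cudaMallocHost), that CHUNK divides numMatrices evenly, and it omits the pivot rearrangement of b, which would have to run as a GPU kernel on the same stream for the overlap to work.

// Sketch: chunked copy/compute overlap with explicit host/device memory.
const int CHUNK = 10000, NSTREAMS = 2;          // double-buffering
cudaStream_t stream[NSTREAMS];
double *d_Aflat[NSTREAMS], *d_bflat[NSTREAMS];  // per-stream device storage
double **d_Aptrs[NSTREAMS], **d_bptrs[NSTREAMS];
int *d_pivot[NSTREAMS], *d_info[NSTREAMS];

for (int s = 0; s < NSTREAMS; s++) {
  cudaStreamCreate(&stream[s]);
  cudaMalloc(&d_Aflat[s], sizeof(double) * CHUNK * N * N);
  cudaMalloc(&d_bflat[s], sizeof(double) * CHUNK * N);
  cudaMalloc(&d_pivot[s], sizeof(int) * CHUNK * N);
  cudaMalloc(&d_info[s],  sizeof(int) * CHUNK);
  // build the pointer arrays once per buffer, then copy them to the device
  std::vector<double*> hA(CHUNK), hb(CHUNK);
  for (int i = 0; i < CHUNK; i++) {
    hA[i] = d_Aflat[s] + i * N * N;
    hb[i] = d_bflat[s] + i * N;
  }
  cudaMalloc(&d_Aptrs[s], sizeof(double*) * CHUNK);
  cudaMalloc(&d_bptrs[s], sizeof(double*) * CHUNK);
  cudaMemcpy(d_Aptrs[s], hA.data(), sizeof(double*) * CHUNK, cudaMemcpyHostToDevice);
  cudaMemcpy(d_bptrs[s], hb.data(), sizeof(double*) * CHUNK, cudaMemcpyHostToDevice);
}

const double alpha = 1.0;
for (int c = 0; c < numMatrices / CHUNK; c++) {
  int s = c % NSTREAMS;  // reuse within a stream serializes safely
  cudaMemcpyAsync(d_Aflat[s], h_Aflat + (size_t)c * CHUNK * N * N,
                  sizeof(double) * CHUNK * N * N, cudaMemcpyHostToDevice, stream[s]);
  cudaMemcpyAsync(d_bflat[s], h_bflat + (size_t)c * CHUNK * N,
                  sizeof(double) * CHUNK * N, cudaMemcpyHostToDevice, stream[s]);
  cublasSetStream(cublas_handle, stream[s]);
  cublasDgetrfBatched(cublas_handle, N, d_Aptrs[s], N, d_pivot[s], d_info[s], CHUNK);
  // NOTE: the pivot rearrangement of b belongs here, as a kernel on stream[s]
  cublasDtrsmBatched(cublas_handle, CUBLAS_SIDE_LEFT, CUBLAS_FILL_MODE_LOWER,
                     CUBLAS_OP_N, CUBLAS_DIAG_UNIT, N, 1, &alpha,
                     d_Aptrs[s], N, d_bptrs[s], N, CHUNK);
  cublasDtrsmBatched(cublas_handle, CUBLAS_SIDE_LEFT, CUBLAS_FILL_MODE_UPPER,
                     CUBLAS_OP_N, CUBLAS_DIAG_NON_UNIT, N, 1, &alpha,
                     d_Aptrs[s], N, d_bptrs[s], N, CHUNK);
  cudaMemcpyAsync(h_bflat + (size_t)c * CHUNK * N, d_bflat[s],
                  sizeof(double) * CHUNK * N, cudaMemcpyDeviceToHost, stream[s]);
}
cudaDeviceSynchronize();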

As an aside, you may be able to get a set of cublas calls that will do the work even more quickly by using direct inversion. CUBLAS has a batched direct-inversion method (cublas<t>matinvBatched, limited to matrix dimensions up to 32). I normally wouldn't suggest this for the solution of linear equations, but it may be something to consider for a set of 3x3 or 4x4 inversions, where you could easily check for singularity with the determinant method. Here is an example.
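For the 3x3 case, that route would look roughly as follows. This is a sketch only: it assumes pointer arrays d_Aptrs, d_Ainvptrs, d_bptrs, and d_xptrs set up the same way as h_A above (device-accessible, one pointer per matrix/vector), and it uses a batched GEMM with a single column in place of a batched matrix-vector product:

// Sketch: invert all 3x3 matrices at once, then apply each inverse to its RHS.
// cublasDmatinvBatched computes Ainv[i] = inv(A[i]) directly (n <= 32 only).
int* d_info;  // one status entry per matrix; nonzero flags a singular input
cudaMallocManaged(&d_info, sizeof(int) * numMatrices);
cublasDmatinvBatched(cublas_handle, N, d_Aptrs, N, d_Ainvptrs, N, d_info, numMatrices);

// x[i] = Ainv[i] * b[i], expressed as a batched GEMM with one column (n = 1)
const double one = 1.0, zero = 0.0;
cublasDgemmBatched(cublas_handle, CUBLAS_OP_N, CUBLAS_OP_N,
                   N, 1, N, &one,
                   d_Ainvptrs, N, d_bptrs, N, &zero,
                   d_xptrs, N, numMatrices);
cudaDeviceSynchronize();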



Source: https://stackoverflow.com/questions/64664431/cuda-cublas-issues-solving-many-3x3-dense-linear-systems
