cuda

invalid argument error in cudaMemcpy from device to host

橙三吉。 submitted on 2021-02-05 12:20:55
Question: I am new to CUDA/GPU programming and am having trouble copying data from the device back to the host. I am developing for a Jetson TK1 with CUDA Toolkit 6.5. The project builds successfully but fails at runtime. My code is below:

    // main.cu
    void allocate(double* const d_inputCurrent, double* signal,
                  double* const d_outputCurrent, const size_t size);

    int main() {
        int data_length = 1024000;
        const int length = 512;
        const size_t size = length;
        double signalA[length], signalB[length], signalC[length];
        for
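The excerpt cuts off before the failing call, but the usual causes of an "invalid argument" from a device-to-host cudaMemcpy are a byte count that does not match the allocation, swapped source/destination pointers, or a wrong cudaMemcpyKind. A minimal sketch (not the asker's code) showing an error-checked copy:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Checking every CUDA runtime call surfaces an invalid-argument
// error at the call that caused it instead of later.
#define CUDA_CHECK(call)                                           \
    do {                                                           \
        cudaError_t err = (call);                                  \
        if (err != cudaSuccess) {                                  \
            fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,     \
                    cudaGetErrorString(err));                      \
            return 1;                                              \
        }                                                          \
    } while (0)

int main() {
    const size_t length = 512;
    const size_t bytes = length * sizeof(double);  // size in BYTES, not elements

    double host[length];
    double* dev = nullptr;
    CUDA_CHECK(cudaMalloc(&dev, bytes));
    CUDA_CHECK(cudaMemset(dev, 0, bytes));
    // Device-to-host: destination first, then source, then the kind.
    CUDA_CHECK(cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost));
    CUDA_CHECK(cudaFree(dev));
    return 0;
}
```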

Is it possible to manually set the SMs used for one CUDA stream?

强颜欢笑 submitted on 2021-02-05 10:51:14
Question: By default, a kernel will use all available SMs on the device (given enough blocks). However, I now have two streams, one compute-intensive and one memory-intensive, and I want to cap the number of SMs each stream may use (after setting the cap, a kernel in that stream would use at most that many SMs, e.g. 20 SMs for the compute-intensive stream and 4 SMs for the memory-intensive one). Is it possible to do this, and if so, which API should I use? Answer 1: In short, no, there is no way to do what
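While there is no API to pin a stream to a subset of SMs, stream priorities are the closest documented knob: blocks from a higher-priority stream are preferred by the scheduler when both streams have work pending. This only biases scheduling, it does not partition SMs — a sketch:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Numerically lower value = higher priority.
    int least, greatest;
    cudaDeviceGetStreamPriorityRange(&least, &greatest);

    // Favor the compute-intensive stream; the memory-intensive one
    // fills in when the other has no blocks ready.
    cudaStream_t compute, memory;
    cudaStreamCreateWithPriority(&compute, cudaStreamNonBlocking, greatest);
    cudaStreamCreateWithPriority(&memory, cudaStreamNonBlocking, least);

    printf("priority range: least=%d greatest=%d\n", least, greatest);
    cudaStreamDestroy(compute);
    cudaStreamDestroy(memory);
    return 0;
}
```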

Atomic Operation failed in CUDA

谁说我不能喝 submitted on 2021-02-05 10:46:06
Question: Since my device's compute capability is 2.1, the atomicAdd and atomicMax operations do not support double precision, so I defined both functions myself based on some answers on Stack Overflow. Strangely, the atomicAdd function works well but atomicMax does not; my code is below. The test generates random numbers on each block, sums the random numbers within each block to get a per-block sum, and then applies atomicAdd and atomicMax to the block sums.

    #include <iostream>
    #include
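For reference, a sketch of the usual CAS-based double-precision atomicMax (the well-known Stack Overflow pattern the asker mentions). A frequent bug in hand-rolled versions is exiting the loop before the CAS actually succeeds, or comparing raw bit patterns instead of the doubles:

```cuda
#include <cuda_runtime.h>

__device__ double atomicMaxDouble(double* addr, double val) {
    unsigned long long* addr_ull = reinterpret_cast<unsigned long long*>(addr);
    unsigned long long old = *addr_ull, assumed;
    do {
        assumed = old;
        double cur = __longlong_as_double(assumed);
        if (cur >= val) break;  // already at least val: nothing to do
        old = atomicCAS(addr_ull, assumed, __double_as_longlong(val));
    } while (assumed != old);   // retry if another thread intervened
    return __longlong_as_double(old);
}

// Illustrative use: reduce per-element values into a global maximum.
__global__ void maxKernel(double* result, const double* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicMaxDouble(result, data[i]);
}
```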

cuda 11 kernel doesn't run

僤鯓⒐⒋嵵緔 submitted on 2021-02-05 09:10:30
Question: Here is a demo.cu that aims to printf from the GPU device:

    #include "cuda_runtime.h"
    #include "device_launch_parameters.h"
    #include <stdio.h>

    __global__ void hello_cuda() {
        printf("hello from GPU\n");
    }

    int main() {
        printf("hello from CPU\n");
        hello_cuda<<<1, 1>>>();
        cudaDeviceSynchronize();
        cudaDeviceReset();
        printf("bye bye from CPU\n");
        return 0;
    }

It compiles and runs:

    $ nvcc demo.cu
    $ ./a.out

This is the output I get:

    hello from CPU
    bye bye from CPU

Q: why is there no printed result
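A kernel that "does nothing" with no error message is typically a silent launch failure, often an architecture mismatch (nvcc under CUDA 11 targets sm_52 by default, so an unsupported GPU gets no usable code). Checking both the launch and the sync makes the failure visible — a sketch of the same demo with checks added:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void hello_cuda() { printf("hello from GPU\n"); }

int main() {
    printf("hello from CPU\n");
    hello_cuda<<<1, 1>>>();
    cudaError_t launchErr = cudaGetLastError();       // launch-time errors
    cudaError_t syncErr   = cudaDeviceSynchronize();  // execution-time errors
    if (launchErr != cudaSuccess)
        fprintf(stderr, "launch: %s\n", cudaGetErrorString(launchErr));
    if (syncErr != cudaSuccess)
        fprintf(stderr, "sync: %s\n", cudaGetErrorString(syncErr));
    printf("bye bye from CPU\n");
    return 0;
}
```

If the error reported is about an invalid device function or no kernel image, recompiling with an architecture flag matching the GPU (e.g. `nvcc -arch=sm_35 demo.cu` — the exact value depends on the card) is the usual fix.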

Is There Any Way To Copy vtable From Host To Device (CUDA & C++)

雨燕双飞 submitted on 2021-02-05 08:41:53
Question: It seems that CUDA does not allow me to pass an object of a class derived from virtual base classes to a __global__ function, apparently for a reason related to the virtual table or virtual pointer. I wonder whether there is some way to set up the virtual pointer manually, so that I can use polymorphism? Answer 1: You wouldn't want to copy the vtable from host to device. The vtable on the host (i.e., in an object created on the host) has a set of host
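Rather than copying a host vtable, the usual workaround is to construct the polymorphic object on the device (placement-new in a kernel), so its virtual pointer refers to the device-side vtable. A sketch with illustrative class names (not from the question):

```cuda
#include <cstdio>
#include <new>
#include <cuda_runtime.h>

struct Base {
    __device__ virtual int value() const { return 1; }
    __device__ virtual ~Base() {}
};
struct Derived : Base {
    __device__ int value() const override { return 42; }
};

// Constructing on the device installs a device-valid vtable pointer.
__global__ void construct(Base* slot) { new (slot) Derived(); }
__global__ void use(Base* obj, int* out) { *out = obj->value(); } // virtual call
__global__ void destroy(Base* obj) { obj->~Base(); }

int main() {
    Base* obj; int* out; int host_out = 0;
    cudaMalloc(&obj, sizeof(Derived));
    cudaMalloc(&out, sizeof(int));
    construct<<<1, 1>>>(obj);
    use<<<1, 1>>>(obj, out);
    destroy<<<1, 1>>>(obj);
    cudaMemcpy(&host_out, out, sizeof(int), cudaMemcpyDeviceToHost);
    printf("virtual call on device returned %d\n", host_out);
    cudaFree(out); cudaFree(obj);
    return 0;
}
```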

How can I specify a minimum compute capability to the mexcuda compiler to compile a mexfunction?

匆匆过客 submitted on 2021-02-05 08:19:29
Question: I have a CUDA project in a .cu file that I would like to compile into a .mex file using mexcuda. Because my code uses the 64-bit floating-point atomic operation atomicAdd(double *, double), which is only supported on GPU devices of compute capability 6.0 or higher, I need to specify this as a flag when compiling. In my standard IDE this works fine, but when compiling with mexcuda it does not work as I would like. In this post on MathWorks, it was suggested to use the
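Independent of how the flag is passed to mexcuda, the device code itself can be made to build for any target by guarding on __CUDA_ARCH__: the native atomicAdd(double*, double) is used on compute capability 6.0 and above, and the classic CAS-based fallback below it. This is the standard pattern from the CUDA C++ Programming Guide, not the asker's code:

```cuda
#include <cuda_runtime.h>

// Only define the fallback when compiling device code for an
// architecture below sm_60; on sm_60+ the built-in is used.
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ < 600
__device__ double atomicAdd(double* address, double val) {
    unsigned long long* address_as_ull = (unsigned long long*)address;
    unsigned long long old = *address_as_ull, assumed;
    do {
        assumed = old;
        old = atomicCAS(address_as_ull, assumed,
                        __double_as_longlong(val + __longlong_as_double(assumed)));
    } while (assumed != old);  // loop until no other thread interfered
    return __longlong_as_double(old);
}
#endif

// Illustrative use: accumulate all elements into a single sum.
__global__ void accumulate(double* sum, const double* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(sum, data[i]);
}
```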

How to make parallel cudaMalloc fast?

删除回忆录丶 submitted on 2021-02-05 08:18:09
Question: When allocating a lot of memory on 4 distinct NVIDIA V100 GPUs, I observe the following behavior with regard to parallelization via OpenMP: using the #pragma omp parallel for directive, and therefore issuing the cudaMalloc calls for each GPU in parallel, results in the same performance as doing it entirely serially. This was tested, and the same effect validated, on two HPC systems: an IBM Power AC922 and an AWS EC2 p3dn.24xlarge. (The numbers are from the Power machine.)

    ./test 4000000000
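A sketch of this kind of measurement (our reconstruction, not the asker's ./test). cudaMalloc is device-synchronizing and parts of the allocation path take driver-level locks, so issuing it from OpenMP threads often yields little or no speedup; forcing context creation first with cudaFree(0) at least keeps lazy initialization out of the timed region:

```cuda
#include <cstdio>
#include <omp.h>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 1UL << 30;  // 1 GiB per GPU (illustrative)
    int ngpus = 0;
    cudaGetDeviceCount(&ngpus);

    double t0 = omp_get_wtime();
    #pragma omp parallel for num_threads(ngpus)
    for (int d = 0; d < ngpus; ++d) {
        cudaSetDevice(d);
        cudaFree(0);  // force context creation before the allocation
        void* p = nullptr;
        cudaMalloc(&p, bytes);
        cudaFree(p);
    }
    printf("parallel alloc on %d GPUs: %.3f s\n", ngpus, omp_get_wtime() - t0);
    return 0;
}
```

Build with OpenMP enabled, e.g. `nvcc -Xcompiler -fopenmp test.cu -o test`. On CUDA 11.2+, the stream-ordered allocator (cudaMallocAsync) is worth comparing, as it avoids the device-wide synchronization of cudaMalloc.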
