Question
The Situation
I have a 2-GPU server (Ubuntu 12.04) where I replaced a Tesla C1060 with a GTX 670. Then I installed CUDA 5.0 over the 4.2 installation. Afterwards I compiled all of the examples except simpleMPI without error. But when I run ./deviceQuery
I get the following error message:
foo@bar-serv2:~/NVIDIA_CUDA-5.0_Samples/bin/linux/release$ ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
cudaGetDeviceCount returned 38
-> no CUDA-capable device is detected
What I have tried
To solve this I tried all of the things recommended in the CUDA-capable device question, but to no avail:
/dev/nvidia* is there, the permissions are 666 (crw-rw-rw-), and the owner is root:root:
foo@bar-serv2:/dev$ ls -l nvidia*
crw-rw-rw- 1 root root 195,   0 Oct 24 18:51 nvidia0
crw-rw-rw- 1 root root 195,   1 Oct 24 18:51 nvidia1
crw-rw-rw- 1 root root 195, 255 Oct 24 18:50 nvidiactl
I tried executing the code with sudo
CUDA 5.0 installs driver and libraries at the same time
PS: here is the output of lspci | grep -i nvidia:
foo@bar-serv2:/dev$ lspci | grep -i nvidia
03:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 670] (rev a1)
03:00.1 Audio device: NVIDIA Corporation GK104 HDMI Audio Controller (rev a1)
04:00.0 VGA compatible controller: NVIDIA Corporation G94 [Quadro FX 1800] (rev a1)
[update]
foo@bar-serv2:~/NVIDIA_CUDA-5.0_Samples/bin/linux/release$ nvidia-smi -a
NVIDIA: API mismatch: the NVIDIA kernel module has version 295.59,
but this NVIDIA driver component has version 304.54. Please make
sure that the kernel module and all NVIDIA driver components
have the same version.
Failed to initialize NVML: Unknown Error
How can that be, if I used the CUDA 5.0 installer, which installs the driver and the libraries at the same time? Could the old 4.2 version that is still lying around mess things up?
Answer 1:
I came across this issue, and running nvidia-smi informed me of an API mismatch. The problem was that my Linux distro had installed updates that required a system restart, so restarting resolved the issue.
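A quick way to confirm this kind of mismatch is to compare the loaded kernel module with the freshly installed user-space library. The library path below is just the usual Ubuntu location and is an assumption; adjust it to your system:
$ cat /proc/driver/nvidia/version    # version of the nvidia module currently loaded in the kernel
$ ls -l /usr/lib/libcuda.so*         # the symlink target ends in the newly installed driver version
# If the two versions differ, reboot, or, if nothing is using the GPU, reload the module:
$ sudo rmmod nvidia && sudo modprobe nvidia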
Answer 2:
See this Stack Overflow question: Installing cuda 5 samples in Ubuntu 12.10.
Ubuntu 12 is not a supported Linux distro (yet). For reference, see the CUDA 5.0 Toolkit Release Notes and Errata:
** Distributions Currently Supported
Distribution        32  64  Kernel                 GCC    GLIBC
------------------  --  --  ---------------------  -----  -------
Fedora 16           X   X   3.1.0-7.fc16           4.6.2  2.14.90
ICC Compiler 12.1       X
OpenSUSE 12.1           X   3.1.0-1.2-desktop      4.6.2  2.14.1
Red Hat RHEL 6.x        X   2.6.32-131.0.15.el6    4.4.5  2.12
Red Hat RHEL 5.5+       X   2.6.18-238.el5         4.1.2  2.5
SUSE SLES 11 SP2        X   3.0.13-0.27-pae        4.3.4  2.11.3
SUSE SLES 11.1      X   X   2.6.32.12-0.7-pae      4.3.4  2.11.1
Ubuntu 11.10        X   X   3.0.0-19-generic-pae   4.6.1  2.13
Ubuntu 10.04        X   X   2.6.35-23-generic      4.4.5  2.12.1
If you want to make it run on Ubuntu 12 anyway, then see the answer by rpardo. It looks like this distro installs the 64-bit libraries to /usr/lib/x86_64-linux-gnu/ instead of /usr/lib64.
I'd suggest searching for all instances of libcuda.so and libnvidia-ml.so on the system. Since the driver doesn't support this distro, it might have installed the libraries to a path that LD_LIBRARY_PATH does not point to. Then move the libraries around and/or change LD_LIBRARY_PATH to point to that location (it should be the first path on the left), and retry nvidia-smi or deviceQuery, for example as sketched below.
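A rough sketch of those steps in bash (the export path is only an example; use whatever directory find actually reports):
$ sudo find / \( -name 'libcuda.so*' -o -name 'libnvidia-ml.so*' \) 2>/dev/null
# Suppose the copies installed by the driver turned up under /usr/lib/x86_64-linux-gnu;
# put that directory first on the search path and retry:
$ export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
$ nvidia-smi
$ ./deviceQuery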
Good luck
Answer 3:
I got error 38 from cudaGetDeviceCount on a Windows machine with a GTX 980 GPU. After I downloaded the latest driver for the GTX 980 from the NVIDIA site, installed it, and restarted, everything was fine. It looks like the CUDA installer does not install the latest driver.
Answer 4:
Try running the sample using sudo (or do a sudo su, set LD_LIBRARY_PATH to the path of the CUDA libraries, and run the sample as root). Apparently, since you've probably installed CUDA 5.0 using sudo, the samples don't run as a normal user. However, if you run a sample as root once, you'll be able to run the samples as a regular user too! I haven't yet restarted the system to see whether the samples keep working for a normal user after a reboot, or whether you have to run at least one CUDA application as root each time.
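A minimal sketch of that workaround, assuming the default toolkit location /usr/local/cuda-5.0 (adjust the path to your install; the # prompt marks the root shell):
$ sudo su
# export LD_LIBRARY_PATH=/usr/local/cuda-5.0/lib64:$LD_LIBRARY_PATH
# ./deviceQuery
# exit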
The problem might disappear completely if you install the CUDA Toolkit without using sudo.
Answer 5:
I had a very similar problem on Debian, and it turned out that the loaded nvidia module had a different version than libcuda1.
To check the installed nvidia module you should do:
$ sudo modinfo nvidia-current | grep version
version: 319.82
If it doesn't match the version of libcuda1, this is the root of your problems.
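For comparison, the version of the user-space side can be checked like this (assuming the Debian-packaged driver; the package name and library path may differ on other distros):
$ dpkg -s libcuda1 | grep ^Version              # packaged user-space CUDA driver library
$ ls -l /usr/lib/x86_64-linux-gnu/libcuda.so*   # the symlink target also ends in the driver version
If the number disagrees with the modinfo output, upgrade or reinstall one side until they match.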
Source: https://stackoverflow.com/questions/13054262/cuda-runtime-api-error-38-no-cuda-capable-device-is-detected