I've installed the latest NVIDIA drivers (375.26) manually, and installed CUDA using cuda_8.0.44_linux.run (skipping the driver install there, since the bundled drivers are
First, check the "CUDA Toolkit and Compatible Driver Versions" table from here, and make sure that your CUDA toolkit version is compatible with your CUDA driver version; e.g. if your driver version is nvidia-390, your CUDA version must be no higher than CUDA 9.1.
Then, back to this issue. The issue is caused by a mismatch between your CUDA driver version and your CUDA version; in addition, your local CUDA version may differ from the CUDA runtime version (the CUDA version inside a particular virtual environment).
I met the same issue when I tried to run tensorflow-gpu inside a conda environment named "tensorflow_gpuenv" and tested whether the "gpu:0" device worked (a minimal version of that test is shown after the steps below). My driver version is nvidia-390 and I had already installed CUDA 9.0, so it made no sense that this weird error was raised. I finally found that the CUDA version in the conda virtual environment was CUDA 9.2, which isn't compatible with nvidia-390. I solved the issue with the following steps on Ubuntu 18.04:
check local driver version:
~$ nvidia-smi
or
~$ cat /proc/driver/nvidia/version

check local cuda version:
~$ nvcc --version
or
~$ cat /usr/local/cuda/version.txt

check local cudnn version:
~$ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
check cuda version in the virtual environment:
~$ conda list

You will see something like this:
cudatoolkit 9.2 0
cudnn 7.3.1 cuda9.2_0
You may find that the CUDA version in the virtual environment is different from the local CUDA version and isn't compatible with the nvidia-390 driver.
So reinstall the CUDA toolkit in the virtual environment, picking a version your driver supports:
~$ conda install cudatoolkit=8.0
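To confirm the fix (or to reproduce the original failure), you can run the kind of "gpu:0" test mentioned above. This is only a minimal sketch, assuming the TF 1.x-generation tensorflow-gpu that pairs with CUDA 8.0/9.0; the gpu_check.py filename is just a placeholder, and TensorFlow 2.x uses a different API:

# gpu_check.py -- minimal sketch of the "gpu:0" test referred to above,
# assuming a TF 1.x-style tensorflow-gpu build (the generation that
# pairs with CUDA 8.0/9.0); the API differs in TensorFlow 2.x.
import tensorflow as tf

# Pin a tiny computation to the first GPU; if the driver and toolkit
# versions are mismatched, this is where the CUDA error shows up.
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0], name='a')
    b = tf.constant([4.0, 5.0, 6.0], name='b')
    c = a + b

# log_device_placement prints which device each op actually ran on,
# so you can confirm the GPU (and not the CPU) did the work.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))

If the toolkit in the environment now matches the driver, the device-placement log should show the ops on the GPU and print the result instead of raising a CUDA error.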