Runtime error 999 when trying to use CUDA with PyTorch

Submitted by 泪湿孤枕 on 2020-12-03 07:30:33

Question


I installed CUDA 10.1 and the latest NVIDIA driver for my GeForce 2080 Ti. When I run a basic script to test whether PyTorch is working, I get the following error:

RuntimeError: cuda runtime error (999) : unknown error at ..\aten\src\THC\THCGeneral.cpp:50

Below is the code I'm trying to run:

import torch
torch.cuda.current_device()    # triggers CUDA initialization; this is the call that raises the error
torch.cuda.is_available()      # reports whether a CUDA-capable GPU can be used
torch.cuda.get_device_name(0)  # returns the name of the first GPU

Answer 1:


Restarting my computer fixed this for me.

But for a less invasive fix, you can also try this solution (from a TensorFlow issue thread):

sudo rmmod nvidia_uvm      # unload the Unified Memory module first (it depends on the nvidia module)
sudo rmmod nvidia          # unload the main driver module
sudo modprobe nvidia       # reload the driver module
sudo modprobe nvidia_uvm   # reload the Unified Memory module
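
If rmmod complains that a module is still in use, you can check what is holding it before retrying (standard Linux commands; this troubleshooting step is an addition, not part of the original answer):

lsmod | grep nvidia      # show the loaded NVIDIA modules and their use counts
sudo lsof /dev/nvidia*   # list processes that still have the NVIDIA device files open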



Answer 2:


In my case, I solved the 999 error with: nvidia-modprobe -u
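
The -u (--unified-memory) option tells nvidia-modprobe to load the nvidia-uvm kernel module and create its device files, so this amounts to a one-command way of bringing nvidia_uvm back (this explanation is an addition, not part of the original answer):

nvidia-modprobe -u   # load nvidia-uvm and (re)create the /dev/nvidia-uvm* device nodes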




Answer 3:


Reinstalling the NVIDIA driver solved this problem in my case. (You don't need to restart the computer.)
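
On Linux this can usually be done through the package manager; the package name below is only an illustration and should be replaced with the driver version actually installed on your system. On Windows, re-running the GeForce driver installer accomplishes the same thing.

sudo apt-get install --reinstall nvidia-driver-440   # Ubuntu example; substitute your installed driver package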




Answer 4:


In the case of PyTorch, it seems to be sufficient to restart only nvidia-uvm (Unified Virtual Memory) with:

sudo modprobe --remove nvidia-uvm  # same as `rmmod`
sudo modprobe nvidia-uvm

If that doesn't work, go ahead and restart the whole driver stack with an additional modprobe (--remove) nvidia, as mentioned in @matwilso's answer.
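
Once the module is back, re-running the checks from the question should succeed instead of raising error 999 (same calls as in the question, with the expected outcome noted):

import torch
torch.cuda.is_available()        # expected: True
torch.cuda.get_device_name(0)    # expected: your GPU's name, e.g. the GeForce RTX 2080 Ti from the question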



Source: https://stackoverflow.com/questions/58595291/runtime-error-999-when-trying-to-use-cuda-with-pytorch
