How to choose a designated GPU to run a CUDA program?

Submitted by 流过昼夜 on 2019-12-11 03:59:59

Question


My PC (Ubuntu 12.04 x86 with CUDA 6.0) has 2 GPUs. I have several CUDA programs, and a program written in Python that manages them.

For example, I want to run some of the CUDA programs on one GPU and the rest on the other. But the GPU selection has to happen outside the CUDA code, so I cannot call the "cudaSetDevice" API inside the CUDA programs. That is, the CUDA programs cannot be modified; I can only select the GPU from outside them.

Is it possible to do that?


Answer 1:


One option is to set the CUDA_VISIBLE_DEVICES environment variable in the program's environment to restrict which devices it sees:

$ deviceQuery |& grep ^Device
Device 0: "Tesla M2090"
Device 1: "Tesla M2090"
$ CUDA_VISIBLE_DEVICES=0 deviceQuery |& grep ^Device
Device 0: "Tesla M2090"
$

See more information on the CUDA developer zone website.
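Since the question mentions a Python management program, here is a minimal sketch of how that manager could pin each unmodified CUDA program to a specific GPU by setting CUDA_VISIBLE_DEVICES in the child process's environment. The paths ./prog_a and ./prog_b are placeholders for your actual CUDA binaries:

# Minimal sketch: launch unmodified CUDA programs, each restricted to one GPU
# via CUDA_VISIBLE_DEVICES (./prog_a and ./prog_b are hypothetical binaries).
import os
import subprocess

def launch_on_gpu(command, gpu_id):
    env = os.environ.copy()
    # The child process will only see this GPU, and it appears as device 0.
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    return subprocess.Popen(command, env=env)

p0 = launch_on_gpu(["./prog_a"], 0)   # runs on physical GPU 0
p1 = launch_on_gpu(["./prog_b"], 1)   # runs on physical GPU 1
p0.wait()
p1.wait()

Inside each child process the single visible GPU is enumerated as device 0, so CUDA programs that simply use the default device end up on whichever physical GPU the manager selected, with no code changes.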



Source: https://stackoverflow.com/questions/25564574/how-to-choose-designated-gpu-to-run-cuda-program
