Tensorflow set CUDA_VISIBLE_DEVICES within jupyter

Submitted by 徘徊边缘 on 2019-12-27 16:24:03

Question


I have two GPUs and would like to run two different networks via ipynb simultaneously; however, the first notebook always allocates both GPUs.

Using CUDA_VISIBLE_DEVICES, I can hide devices from Python scripts, but I am unsure how to do so within a notebook.

Is there any way to hide different GPUs from notebooks running on the same server?


Answer 1:


You can set environment variables in the notebook using os.environ. Do the following before initializing TensorFlow to limit TensorFlow to the first GPU:

import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"   # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="0"

You can double-check that the correct devices are visible to TensorFlow:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
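
On newer TensorFlow 2.x installs, the same check can also be done through the public tf.config API instead of the internal device_lib module; a minimal equivalent:

import tensorflow as tf
# With CUDA_VISIBLE_DEVICES="0" set above, this should list a single GPU.
print(tf.config.list_physical_devices('GPU'))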

I tend to use it from a utility module such as notebook_util:

import notebook_util
notebook_util.pick_gpu_lowest_memory()
import tensorflow as tf
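
The notebook_util module itself is not shown in the answer. A minimal sketch of what pick_gpu_lowest_memory() might look like (an assumption: it selects the GPU with the least memory currently in use, as reported by nvidia-smi, and exposes only that one):

import os
import subprocess

def pick_gpu_lowest_memory():
    # Hypothetical helper: query nvidia-smi for per-GPU memory usage (MiB)
    # and expose only the least-loaded GPU. It must be called before
    # importing tensorflow, which reads CUDA_VISIBLE_DEVICES at startup.
    output = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"]).decode()
    memory_used = [int(line) for line in output.strip().split("\n")]
    best_gpu = min(range(len(memory_used)), key=lambda i: memory_used[i])
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
    os.environ["CUDA_VISIBLE_DEVICES"] = str(best_gpu)
    return best_gpu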



Answer 2:


You can do this more quickly, without any imports, by using IPython magics:

%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=0

Note that all environment variables are strings, so there is no need to quote the values. You can verify that an environment variable is set by running %env <name_of_var>, or check all of them with %env.
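
For example, assuming the two magics above have already been run, a quick verification cell would be the following, which should return '0':

%env CUDA_VISIBLE_DEVICES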



Source: https://stackoverflow.com/questions/37893755/tensorflow-set-cuda-visible-devices-within-jupyter
