Question
I want to programmatically find out the available GPUs and their current memory usage and use one of the GPUs based on their memory availability. I want to do this in PyTorch.
I have seen the following solution in this post:
import torch.cuda as cutorch

for i in range(cutorch.device_count()):
    if cutorch.getMemoryUsage(i) > MEM:
        opts.gpuID = i
        break
but it does not work in PyTorch 0.3.1 (there is no function called getMemoryUsage). I am interested in a solution based on PyTorch library functions. Any help would be appreciated.
Answer 1:
The page you link to already contains an answer:
#!/usr/bin/env python
# encoding: utf-8

import subprocess

def get_gpu_memory_map():
    """Get the current GPU usage.

    Returns
    -------
    usage: dict
        Keys are device ids as integers.
        Values are memory usage as integers in MB.
    """
    # Note: in Python 3, check_output returns bytes unless an encoding
    # is given, so decode the output before parsing it.
    result = subprocess.check_output(
        [
            'nvidia-smi', '--query-gpu=memory.used',
            '--format=csv,nounits,noheader'
        ], encoding='utf-8')
    # Convert one line per GPU into a {device_id: used_MB} dictionary
    gpu_memory = [int(x) for x in result.strip().split('\n')]
    gpu_memory_map = dict(zip(range(len(gpu_memory)), gpu_memory))
    return gpu_memory_map

print(get_gpu_memory_map())
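The map above only reports usage; it does not pick a device. A minimal sketch of the selection step, assuming you simply want the GPU with the lowest reported usage (pick_least_used_gpu is a name introduced here for illustration, not part of any library):

```python
def pick_least_used_gpu(gpu_memory_map):
    """Return the device id whose reported memory usage (in MB) is lowest.

    gpu_memory_map: dict mapping device id -> used memory in MB,
    e.g. the dict returned by get_gpu_memory_map() above.
    """
    return min(gpu_memory_map, key=gpu_memory_map.get)

# Example with a hard-coded map: GPU 1 uses the least memory.
usage = {0: 4000, 1: 120, 2: 9000}
print(pick_least_used_gpu(usage))  # -> 1
```

You could then select the device with something like torch.cuda.set_device(pick_least_used_gpu(get_gpu_memory_map())). Note also that recent PyTorch releases expose torch.cuda.mem_get_info(device), which returns (free, total) memory in bytes without shelling out to nvidia-smi; it was not available in PyTorch 0.3.1.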
Source: https://stackoverflow.com/questions/49595663/find-a-gpu-with-enough-memory