Find a GPU with enough memory

Submitted by ≯℡__Kan透↙ on 2020-01-04 05:12:38

Question


I want to programmatically find out the available GPUs and their current memory usage and use one of the GPUs based on their memory availability. I want to do this in PyTorch.

I have seen the following solution in this post:

import torch.cuda as cutorch

for i in range(cutorch.device_count()):
    if cutorch.getMemoryUsage(i) > MEM: 
        opts.gpuID = i
        break

but it does not work in PyTorch 0.3.1 (there is no function called getMemoryUsage). I am interested in a PyTorch-based solution (using library functions). Any help would be appreciated.
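(Note: later PyTorch releases added torch.cuda.mem_get_info, which reports a device's free and total memory in bytes; it is not available in 0.3.1. A minimal sketch of the loop you describe, assuming PyTorch 1.8 or newer and a threshold you choose yourself:)

```python
import torch

def pick_gpu(min_free_bytes):
    """Return the first visible GPU with at least min_free_bytes free, else None."""
    for i in range(torch.cuda.device_count()):
        # mem_get_info returns a (free, total) tuple in bytes (PyTorch >= 1.8)
        free, total = torch.cuda.mem_get_info(i)
        if free >= min_free_bytes:
            return i
    return None

device_id = pick_gpu(4 * 1024 ** 3)  # look for a GPU with 4 GB free
```

If no GPU qualifies (or none is visible), the function returns None, so the caller can fall back to the CPU.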


Answer 1:


The post you linked to already contains an answer:

#!/usr/bin/env python
# encoding: utf-8

import subprocess

def get_gpu_memory_map():
    """Get the current gpu usage.

    Returns
    -------
    usage: dict
        Keys are device ids as integers.
        Values are memory usage as integers in MB.
    """
    result = subprocess.check_output(
        [
            'nvidia-smi', '--query-gpu=memory.used',
            '--format=csv,nounits,noheader'
        ])
    # check_output returns bytes in Python 3, so decode before splitting
    gpu_memory = [int(x) for x in result.decode('utf-8').strip().split('\n')]
    gpu_memory_map = dict(zip(range(len(gpu_memory)), gpu_memory))
    return gpu_memory_map

print(get_gpu_memory_map())
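As a small follow-up sketch (the helper name is my own, not from the answer): given a usage map like the one above, you can pick the device with the least memory currently in use:

```python
def pick_least_used_gpu(gpu_memory_map):
    """Return the device id with the smallest used-memory value (in MB)."""
    if not gpu_memory_map:
        return None
    return min(gpu_memory_map, key=gpu_memory_map.get)

# Example with a hypothetical map: GPU 1 has the least memory in use
print(pick_least_used_gpu({0: 3400, 1: 120, 2: 2500}))  # → 1
```

Note that nvidia-smi device indices can differ from PyTorch's ordering when CUDA_VISIBLE_DEVICES or CUDA_DEVICE_ORDER is set, so be careful mapping one to the other.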


Source: https://stackoverflow.com/questions/49595663/find-a-gpu-with-enough-memory
