Error using Tensorflow with GPU

既然无缘 2020-12-08 11:49

I've tried a bunch of different TensorFlow examples, which work fine on the CPU but generate the same error when I try to run them on the GPU. One small example is:

2 Answers
  • 2020-12-08 12:38

    This can happen when your TensorFlow session cannot get a sufficient amount of GPU memory. Perhaps other processes are leaving little free memory, or another TensorFlow session is already running on your system, so you have to configure how much memory the TensorFlow session will use.

    If you are using TensorFlow 1.x:

    import tensorflow as tf

    # Limit this session to roughly one third of each GPU's memory.
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
    

    TensorFlow 2.x has undergone major changes from 1.x. If you want to use a TensorFlow 1.x method/function, there is a compatibility module kept in TensorFlow 2.x, so TensorFlow 2.x users can use this piece of code:

    import tensorflow as tf

    # The same setting via the TF1 compatibility module in TensorFlow 2.x.
    gpu_options = tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.333)
    sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(gpu_options=gpu_options))
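
    TensorFlow 2.x also has a native way to cap GPU memory without creating a Session. This is a minimal sketch (the 4096 MB limit is an arbitrary example value), using the tf.config.experimental virtual-device API:

    import tensorflow as tf

    gpus = tf.config.experimental.list_physical_devices('GPU')
    if gpus:
        # Cap the first GPU at ~4 GB instead of letting TF take almost all of it.
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4096)])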
    
  • 2020-12-08 12:39

    There appear to be two issues here:

    1. By default, TensorFlow allocates a large fraction (95%) of the available GPU memory (on each GPU device) when you create a tf.Session. It uses a heuristic that reserves 200MB of GPU memory for "system" uses, but doesn't set this aside if the amount of free memory is smaller than that.

    2. It looks like you have very little free GPU memory on either of your GPU devices (105.73MiB and 133.48MiB). This means that TensorFlow will attempt to allocate memory that should probably be reserved for the system, and hence the allocation fails.

    Is it possible that you have another TensorFlow process (or some other GPU-hungry code) running while you attempt to run this program? For example, a Python interpreter with an open session—even if it is not using the GPU—will attempt to allocate almost the entire GPU memory.
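
    As a quick check, nvidia-smi shows both the free memory per GPU and which processes are currently holding it. A minimal sketch of running it from Python (assuming nvidia-smi is on your PATH):

    import subprocess

    # Print free/used memory per GPU; plain `nvidia-smi` output also lists
    # the processes that are holding GPU memory.
    print(subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free,memory.used", "--format=csv"]
    ).decode())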

    Currently, the only way to restrict the amount of GPU memory that TensorFlow uses is the following configuration option (from this question):

    # Assume that you have 12GB of GPU memory and want to allocate ~4GB:
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
    
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
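
    If you would rather not hard-code a fraction, a commonly used alternative in TF 1.x is allow_growth, which starts with a small allocation and grows it on demand instead of reserving most of the GPU up front. A minimal sketch:

    import tensorflow as tf

    # Allocate GPU memory on demand rather than ~95% of the device up front.
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.Session(config=config)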
    