How can I solve 'ran out of gpu memory' in TensorFlow

Asked by 你的背包 on 2020-12-04 10:46

I ran the MNIST demo in TensorFlow with 2 conv layers and a fully-connected layer, and I got a message that 'ran out of memory trying to allocate 2.59GiB', but the log shows that total memory is 4.69GiB and free memory is 3.22GiB. How can it fail to allocate 2.59GiB?

8 Answers
  •  一生所求
    2020-12-04 11:45

    I was encountering out of memory errors when training a small CNN on a GTX 970. Through somewhat of a fluke, I discovered that telling TensorFlow to allocate memory on the GPU as needed (instead of up front) resolved all my issues. This can be accomplished using the following Python code:

        import tensorflow as tf

        # Allocate GPU memory on demand instead of grabbing ~90% up front.
        config = tf.ConfigProto()
        config.gpu_options.allow_growth = True
        sess = tf.Session(config=config)
    

    Previously, TensorFlow would pre-allocate ~90% of GPU memory. For some unknown reason, this would later result in out-of-memory errors even though the model could fit entirely in GPU memory. By using the above code, I no longer have OOM errors.
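
    If you would rather keep up-front allocation but with a smaller cap than the default ~90%, the same TF 1.x ConfigProto also exposes a fixed fraction. A minimal sketch of that alternative (the 0.4 value is an arbitrary example, not something from the original answer; tune it to your model):

        import tensorflow as tf

        # Pre-allocate only ~40% of total GPU memory instead of the default ~90%.
        config = tf.ConfigProto()
        config.gpu_options.per_process_gpu_memory_fraction = 0.4
        sess = tf.Session(config=config)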

    Note: If the model is too big to fit in GPU memory, this probably won't help!
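
    For anyone on TensorFlow 2.x, where ConfigProto and Session no longer exist, the equivalent on-demand behavior is enabled per physical device. A sketch assuming at least one visible GPU; it must run before any GPU tensors are created:

        import tensorflow as tf

        # TF 2.x equivalent of allow_growth: allocate GPU memory as needed.
        # Must be called at program start, before the GPU is initialized.
        for gpu in tf.config.list_physical_devices('GPU'):
            tf.config.experimental.set_memory_growth(gpu, True)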
