GPU PoolAllocator explodes the CPU memory

Backend · Unresolved · 1 answer · 1630 views
小蘑菇
小蘑菇 2021-02-20 00:24

I made a TensorFlow model with relatively common operations (apart from a couple of tf.where and some indices handling), but I call it with widely varying input shapes.

1 Answer
  • 2021-02-20 00:47

    This specific problem was solved some time ago by the TF team when they changed their memory allocator (see the corresponding issue on GitHub).

    If you see memory grow during training, a common mistake is that nodes are being added to the graph inside the training loop (TF is not NumPy, unless you use eager execution). Call graph.finalize() before your training loop to ensure no nodes can be added during training; this catches many memory-growth issues.
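A minimal sketch of that advice in TF1-style graph mode (assuming the `tensorflow.compat.v1` compatibility module): all ops are built once, the graph is finalized, and only `sess.run` happens inside the loop. Any accidental op creation after `finalize()` raises a `RuntimeError` instead of silently growing the graph.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # Build all ops ONCE, before the training loop.
    x = tf.placeholder(tf.float32, shape=[None])
    y = x * 2.0

# Lock the graph: any later attempt to add a node raises RuntimeError.
graph.finalize()

with tf.Session(graph=graph) as sess:
    for step in range(3):
        # Only run existing ops here; no new nodes are created.
        result = sess.run(y, feed_dict={x: [1.0, 2.0]})

# A buggy loop body like `y = x * 2.0` placed here, inside
# `graph.as_default()`, would now fail fast instead of leaking memory.
```

The same pattern applies to any graph-mode construct: if a `tf.where`, slicing op, or loss term is created per iteration, `finalize()` surfaces it immediately.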
