Tensorflow runs out of memory while computing: how to find memory leaks?

Asked by 臣服心动 on 2020-12-11 13:48

I'm iteratively deepdreaming images in a directory using Google's TensorFlow DeepDream implementation (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/

1 Answer
  • Answered 2020-12-11 14:08

    99% of the time, when using TensorFlow, "memory leaks" are actually due to operations that are continuously added to the graph while iterating, instead of building the graph first and then only running it inside the loop.
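
    The difference is easy to see in a toy loop. Below is a minimal sketch (assuming the TF 1.x graph-mode API, i.e. tf.Session / tf.placeholder) that contrasts the leaky pattern with the fixed one; the ops are dummy stand-ins, not the actual deepdream code.

    ```python
    import numpy as np
    import tensorflow as tf  # assumes the TF 1.x graph-mode API

    sess = tf.Session()
    x = tf.placeholder(tf.float32, shape=[None])

    # Leaky pattern: each iteration adds a brand-new op to the default graph,
    # so the graph (and its memory footprint) grows without bound.
    for _ in range(3):
        y = tf.square(x)  # new node created on every pass
        sess.run(y, feed_dict={x: np.arange(5, dtype=np.float32)})

    # Fixed pattern: define the op once, then only run it inside the loop.
    y = tf.square(x)
    for _ in range(3):
        sess.run(y, feed_dict={x: np.arange(5, dtype=np.float32)})

    sess.close()
    ```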

    The fact that you specify a device (with tf.device('/gpu:0')) for your loop is a hint that this is the case: you typically specify a device for new nodes, since it has no effect on nodes that are already defined.

    Fortunately, TensorFlow has a convenient tool to spot those errors: tf.Graph.finalize. When called, this function prevents further nodes from being added to your graph. It is good practice to call it before iterating.

    So in your case I would call tf.get_default_graph().finalize() before your loop and look for any error it may throw.
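
    For illustration, here is a minimal, self-contained sketch of that workflow (again assuming the TF 1.x graph-mode API; the `step` op is a hypothetical stand-in for the real deepdream update): build everything first, finalize, then only run ops inside the loop.

    ```python
    import numpy as np
    import tensorflow as tf  # assumes the TF 1.x graph-mode API

    img_in = tf.placeholder(tf.float32, shape=[None, None, 3])
    step = img_in + 1.0  # hypothetical stand-in for the real deepdream update op

    tf.get_default_graph().finalize()  # from here on, the graph is read-only

    with tf.Session() as sess:
        image = np.zeros((224, 224, 3), dtype=np.float32)
        for _ in range(10):
            image = sess.run(step, feed_dict={img_in: image})
        # Any attempt to add a node here, e.g. tf.square(img_in), would raise
        # "RuntimeError: Graph is finalized and cannot be modified."
    ```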
