I'm iteratively deepdreaming images in a directory using Google's TensorFlow DeepDream implementation (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/
99% of the time, when using TensorFlow, "memory leaks" are actually caused by operations being continuously added to the graph while iterating, instead of building the graph once and then running it in a loop.
The fact that you specify a device (with tf.device('/gpu:0')) for your loop is a hint that this is the case: you typically specify a device for new nodes, since it does not affect nodes that are already defined.
Fortunately, TensorFlow has a convenient tool to spot these errors: tf.Graph.finalize. When called, this function prevents further nodes from being added to your graph. It is good practice to call it before iterating.
So in your case, I would call tf.get_default_graph().finalize() before your loop and look for any errors it throws.
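To illustrate, here is a minimal sketch of the build-once-then-run pattern with finalize() as a guard. The placeholder and the doubling op are stand-ins for the real DeepDream graph, not part of the original code; the example uses the TF 1.x graph-mode API (available as tf.compat.v1 in TF 2.x):

```python
import tensorflow.compat.v1 as tf  # in TF 1.x, just `import tensorflow as tf`
tf.disable_eager_execution()

# Build the graph ONCE, outside the loop (stand-ins for the real DeepDream ops).
x = tf.placeholder(tf.float32, shape=[None])
y = x * 2.0

# After this call, any attempt to add a node raises a RuntimeError,
# which turns a silent memory leak into a loud, debuggable error.
tf.get_default_graph().finalize()

with tf.Session() as sess:
    for batch in ([1.0, 2.0], [3.0, 4.0]):
        sess.run(y, feed_dict={x: batch})  # fine: runs existing nodes only

# By contrast, a line like `z = x + 1.0` inside the loop would now fail with
# "RuntimeError: Graph is finalized and cannot be modified."
```

If the loop was accidentally creating ops (for example, calling tf.gradients or tf.constant on each iteration), the very first iteration will now point you at the offending line.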