Question
I would like to perform pretraining of a neural network using autoencoders implemented in TensorFlow.
- I am able to run the whole network (using TF or Keras); the whole graph fits into GPU memory, so that's fine.
- The problem occurs when I create more graphs (autoencoders): the GPU runs out of memory very quickly. Right now I have an example where building the second-level autoencoder causes a GPU out-of-memory exception.
So here is what is happening:
I have an autoencoder implementation that holds its session as an attribute:

    self.session = tf.Session()

and implements a destroy() method in which

    self.session.close()

is called.
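To make the setup concrete, here is a simplified sketch of the class (the actual encoder/decoder and training ops are omitted, and the names and layer shapes are just illustrative):

    import tensorflow as tf

    class Autoencoder:
        def __init__(self, n_input, n_hidden):
            # Each autoencoder builds its own graph...
            self.graph = tf.Graph()
            with self.graph.as_default():
                self.x = tf.placeholder(tf.float32, [None, n_input])
                weights = tf.Variable(tf.random_normal([n_input, n_hidden]))
                self.encoded = tf.nn.sigmoid(tf.matmul(self.x, weights))
                # ... decoder and training ops go here ...
                self.init_op = tf.global_variables_initializer()
            # ...and owns its own session.
            self.session = tf.Session(graph=self.graph)
            self.session.run(self.init_op)

        def destroy(self):
            # I expected this to release the GPU resources held by the session.
            self.session.close()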
When stacking autoencoders, several Session instances are needed, and that's when I hit the problem: even after calling destroy() on the previous level, the GPU memory does not seem to be freed.
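Roughly, the greedy pretraining loop looks like this (simplified; the training steps are omitted and the data and layer sizes are made up for the example). Building the second autoencoder is what triggers the out-of-memory exception:

    import numpy as np

    # Uses the Autoencoder class sketched above.
    data = np.random.rand(256, 784).astype(np.float32)  # stand-in for real inputs
    layer_sizes = [784, 256, 64]

    for n_input, n_hidden in zip(layer_sizes, layer_sizes[1:]):
        ae = Autoencoder(n_input, n_hidden)
        # ... training steps via ae.session.run(...) would go here ...
        data = ae.session.run(ae.encoded, feed_dict={ae.x: data})
        ae.destroy()  # .close() is called, yet the GPU memory stays allocated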
What am I missing? Isn't .close() enough?
Thanks
Source: https://stackoverflow.com/questions/37470667/session-close-doesnt-free-resources-on-gpu-using-tensorflow