TensorFlow tends to preallocate the entire available memory on its GPUs. For debugging purposes, is there a way of telling how much of that memory is actually in use?
The TensorFlow profiler has an improved memory timeline that is based on real GPU memory allocator information:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler#visualize-time-and-memory