I installed TensorFlow 1.0.1 (GPU version) on my MacBook Pro with a GeForce GT 750M, together with CUDA 8.0.71 and cuDNN 5.1. I am running TF code that works fine with the CPU-only build of TensorFlow, but fails when I run it with the GPU version.
In my case, the 4th approach below solved the problem nicely. The four approaches are adapted from this post (in Chinese): https://blog.csdn.net/comway_Li/article/details/102953634?utm_medium=distribute.pc_relevant.none-task-blog-baidujs-2
1. Cap the per-process GPU memory fraction (TF 1.x API; a TF 2.x equivalent is sketched after the list):

import tensorflow as tf

config = tf.ConfigProto()
# 1.0 lets the process claim all of the GPU's memory; lower it
# (e.g. 0.7) to leave room for other processes.
config.gpu_options.per_process_gpu_memory_fraction = 1.0
session = tf.Session(config=config)
2. Let GPU memory allocation grow on demand instead of reserving it all at startup:

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
3. Delete NVIDIA's compute cache, which can become corrupted and trigger cuDNN initialization errors (~/.nv is a directory, so -r is required):

sudo rm -rf ~/.nv
4. Use the TF1 compatibility imports (this is what worked for me; on TensorFlow 1.x, use the commented-out imports instead):

from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
# On TensorFlow 1.x, import directly from tensorflow:
# from tensorflow import ConfigProto
# from tensorflow import InteractiveSession

config = ConfigProto()
# Grow GPU memory allocation on demand (same effect as approach 2).
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

Create the session before building any model so the option takes effect.
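If you are on TensorFlow 2.x without the compat layer, ConfigProto and sessions no longer apply; the native counterpart of approaches 1 and 2 is the tf.config API. A minimal sketch, assuming a single GPU at index 0:

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Counterpart of allow_growth=True (approach 2): allocate GPU
    # memory on demand. Must run before any op touches the GPU.
    tf.config.experimental.set_memory_growth(gpus[0], True)

    # Rough counterpart of per_process_gpu_memory_fraction (approach 1):
    # cap the process at a fixed budget instead. Mutually exclusive with
    # memory growth on the same device; 1024 MB is an arbitrary example.
    # tf.config.set_logical_device_configuration(
    #     gpus[0],
    #     [tf.config.LogicalDeviceConfiguration(memory_limit=1024)])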
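To verify that a setting took effect, TensorFlow 2.5+ can report the process's current and peak GPU memory use (the device name 'GPU:0' is an assumption about your setup):

import tensorflow as tf

# Returns a dict with 'current' and 'peak' allocation sizes in bytes.
info = tf.config.experimental.get_memory_info('GPU:0')
print(info['current'], info['peak'])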