I installed TensorFlow 1.0.1 (GPU version) on my MacBook Pro with a GeForce GT 750M, and also installed CUDA 8.0.71 and cuDNN 5.1. I am running TF code that works fine with non C
In my case, after checking the cuDNN and CUDA versions, I found that my GPU was running out of memory. Watching `watch -n 0.1 nvidia-smi` in another bash terminal showed that the moment the error `2019-07-16 19:54:05.122224: E tensorflow/stream_executor/cuda/cuda_dnn.cc:334] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR` appeared was exactly the moment GPU memory became nearly full.
(Screenshot of the nvidia-smi output showing GPU memory nearly full.)
So I configured a limit on how much GPU memory TensorFlow may use. Since I use the tf.keras module, I added the following code at the beginning of my program:
import tensorflow as tf

# Allow TensorFlow to allocate at most 90% of GPU memory
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.9
tf.keras.backend.set_session(tf.Session(config=config))
Then, problem solved!
You can also reduce your batch_size or use smarter ways to feed your training data, such as tf.data.Dataset with cache. I hope my answer helps someone else.
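As a minimal sketch of that tf.data.Dataset suggestion (the `features` and `labels` arrays here are made-up placeholder data, and the exact chaining style assumes a reasonably recent TensorFlow release):

```python
import numpy as np
import tensorflow as tf

# Placeholder in-memory training data, for illustration only
features = np.random.rand(1000, 32).astype("float32")
labels = np.random.randint(0, 10, size=(1000,))

dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .cache()        # keep elements in memory after the first epoch
    .shuffle(1000)  # reshuffle the full dataset each epoch
    .batch(32)      # a smaller batch_size lowers peak GPU memory use
    .prefetch(1)    # overlap input preparation with training
)
```

Feeding batches this way, instead of pushing the whole dataset to the GPU at once, together with a smaller batch size, is often enough to keep memory below the limit that triggers the error.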