could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR

故里飘歌 2020-12-01 16:01

I installed tensorflow 1.0.1 GPU version on my Macbook Pro with GeForce GT 750M. Also installed CUDA 8.0.71 and cuDNN 5.1. I am running a tf code that works fine with non C

19 answers
  •  予麋鹿 (OP)
    2020-12-01 16:20

    In my case, after checking the cuDNN and CUDA versions, I found my GPU was running out of memory. Watching watch -n 0.1 nvidia-smi in another bash terminal, I could see that the moment 2019-07-16 19:54:05.122224: E tensorflow/stream_executor/cuda/cuda_dnn.cc:334] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR appears is exactly the moment the GPU memory is nearly full. (Screenshot of nvidia-smi omitted.)

    So I configured a limit on how much GPU memory TensorFlow may use. Since I use the tf.keras module, I added the following code at the beginning of my program:

    import tensorflow as tf

    # Cap TensorFlow at 90% of the GPU's memory so cuDNN has room to initialize.
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.9
    tf.keras.backend.set_session(tf.Session(config=config))
    

    Then, problem solved!
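
    If you would rather not hard-code a fraction, a minimal alternative sketch (same assumption of a TF 1.x tf.keras setup, not what I used myself) is to let the allocator claim GPU memory on demand:

    import tensorflow as tf

    config = tf.ConfigProto()
    # Grow allocations as needed instead of reserving a fixed share up front.
    config.gpu_options.allow_growth = True
    tf.keras.backend.set_session(tf.Session(config=config))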

    You can also reduce your batch_size or feed your training data in smarter ways (such as a tf.data.Dataset with caching); a rough sketch follows below. I hope my answer can help someone else.
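
    For illustration, here is a minimal tf.data.Dataset pipeline with caching; the features/labels arrays are placeholders, not from my actual code, and this assumes a TF 1.x release recent enough to ship tf.data:

    import numpy as np
    import tensorflow as tf

    # Placeholder in-memory training data.
    features = np.random.rand(1000, 32).astype(np.float32)
    labels = np.random.randint(0, 2, size=(1000,)).astype(np.int32)

    dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
               .cache()                    # reuse preprocessed examples after the first pass
               .shuffle(buffer_size=1000)
               .batch(16)                  # a smaller batch also lowers peak GPU memory
               .prefetch(1))               # overlap input preparation with training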
