TensorFlow: InternalError: Blas SGEMM launch failed

Submitted by 那年仲夏 on 2019-11-27 17:47:34

Old question, but it may help others.
Try closing interactive sessions that are active in other processes (if you use IPython Notebook, just restart the kernels). This helped me!

Additionally, I use this code to close local sessions in this kernel during experiments:

if 'session' in locals() and session is not None:
    print('Close interactive session')
    session.close()
Doreen

I encountered this problem and solved it by setting allow_soft_placement=True and gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3), which specifically limits the fraction of GPU memory the process may use. I suspect this helped avoid two TensorFlow processes competing for the GPU memory.

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3)
sess = tf.Session(config=tf.ConfigProto(
    gpu_options=gpu_options,  # pass the GPU options into the session config
    allow_soft_placement=True, log_device_placement=True))

I got this error when running distributed TensorFlow. Did you check whether any of the workers were reporting CUDA_OUT_OF_MEMORY errors? If so, it may have to do with where you place your weight and bias variables, e.g.

# Pin the weight and bias variables to the parameter server so that workers
# do not allocate them on their own GPUs
with tf.device("/job:paramserver/task:0/cpu:0"):
   W = weight_variable([input_units, num_hidden_units])
   b = bias_variable([num_hidden_units])
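
The helpers weight_variable and bias_variable are not defined in that snippet; a minimal sketch of what they might look like, assuming the usual TensorFlow-tutorial-style initializers:

import tensorflow as tf

def weight_variable(shape):
    # Small random initial weights to break symmetry (assumed definition, tutorial style)
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    # Small constant positive bias (assumed definition, tutorial style)
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)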
David

My environment is Python 3.5, Tensorflow 0.12 and Windows 10 (no Docker). I am training neural networks on both CPU and GPU. I came across the same error, InternalError: Blas SGEMM launch failed, whenever training on the GPU.

I could not find the reason why this error happens, but I managed to run my code on the GPU by avoiding the TensorFlow function tensorflow.contrib.slim.one_hot_encoding(). Instead, I do the one-hot encoding in numpy (for both the input and output variables).

The following code reproduces the error and the fix. It is a minimal setup to learn the y = x ** 2 function using gradient descent.

import numpy as np
import tensorflow as tf
import tensorflow.contrib.slim as slim

def test_one_hot_encoding_using_tf():

    # This function raises the "InternalError: Blas SGEMM launch failed" when run in the GPU

    # Initialize
    tf.reset_default_graph()
    input_size = 10
    output_size = 100
    input_holder = tf.placeholder(shape=[1], dtype=tf.int32, name='input')
    output_holder = tf.placeholder(shape=[1], dtype=tf.int32, name='output')

    # Define network
    input_oh = slim.one_hot_encoding(input_holder, input_size)
    output_oh = slim.one_hot_encoding(output_holder, output_size)
    W1 = tf.Variable(tf.random_uniform([input_size, output_size], 0, 0.01))
    output_v = tf.matmul(input_oh, W1)
    output_v = tf.reshape(output_v, [-1])

    # Define updates
    loss = tf.reduce_sum(tf.square(output_oh - output_v))
    trainer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
    update_model = trainer.minimize(loss)

    # Optimize
    init = tf.initialize_all_variables()
    steps = 1000

    # Force CPU/GPU
    config = tf.ConfigProto(
        # device_count={'GPU': 0}  # uncomment this line to force CPU
    )

    # Launch the tensorflow graph
    with tf.Session(config=config) as sess:
        sess.run(init)

        for step_i in range(steps):

            # Get sample
            x = np.random.randint(0, 10)
            y = np.power(x, 2).astype('int32')

            # Update
            _, l = sess.run([update_model, loss], feed_dict={input_holder: [x], output_holder: [y]})

        # Check model
        print('Final loss: %f' % l)

def test_one_hot_encoding_no_tf():

    # This function does not raise the "InternalError: Blas SGEMM launch failed" when run in the GPU

    def oh_encoding(label, num_classes):
        return np.identity(num_classes)[label:label + 1].astype('int32')

    # Initialize
    tf.reset_default_graph()
    input_size = 10
    output_size = 100
    input_holder = tf.placeholder(shape=[1, input_size], dtype=tf.float32, name='input')
    output_holder = tf.placeholder(shape=[1, output_size], dtype=tf.float32, name='output')

    # Define network
    W1 = tf.Variable(tf.random_uniform([input_size, output_size], 0, 0.01))
    output_v = tf.matmul(input_holder, W1)
    output_v = tf.reshape(output_v, [-1])

    # Define updates
    loss = tf.reduce_sum(tf.square(output_holder - output_v))
    trainer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
    update_model = trainer.minimize(loss)

    # Optimize
    init = tf.initialize_all_variables()
    steps = 1000

    # Force CPU/GPU
    config = tf.ConfigProto(
        # device_count={'GPU': 0}  # uncomment this line to force CPU
    )

    # Launch the tensorflow graph
    with tf.Session(config=config) as sess:
        sess.run(init)

        for step_i in range(steps):

            # Get sample
            x = np.random.randint(0, 10)
            y = np.power(x, 2).astype('int32')

            # One hot encoding
            x = oh_encoding(x, 10)
            y = oh_encoding(y, 100)

            # Update
            _, l = sess.run([update_model, loss], feed_dict={input_holder: x, output_holder: y})

        # Check model
        print('Final loss: %f' % l)

Maybe you did not free your GPU properly. If you are using Linux, try "ps -ef | grep python" to see which jobs are using the GPU, then kill them.

In my case, I had 2 Python consoles open, both using Keras/TensorFlow. Once I closed the old console (forgotten from the previous day), everything started to work correctly.

So it is worth checking that you do not have multiple consoles/processes occupying the GPU.

I closed all other running Jupyter sessions and this solved the problem. I think it was a GPU memory issue.
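
If several notebooks really do need to share one GPU, a possible workaround (a sketch using the TF 1.x session config, not from the original answer) is to stop TensorFlow from reserving all GPU memory up front:

import tensorflow as tf

# Allocate GPU memory on demand instead of grabbing nearly all of it at session
# creation, so several notebooks/processes can coexist on one GPU
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)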

In my case:

First, I ran

conda clean --all

to clean up tarballs and unused packages.

Then I restarted the IDE (PyCharm in this case) and it worked. Environment: Anaconda Python 3.6, Windows 10 64-bit. I installed tensorflow-gpu with the command provided on the Anaconda website.

For me, I got this problem when I tried to run multiple TensorFlow processes (e.g. 2), both of which required access to GPU resources.

A simple solution is to make sure only one TensorFlow process is running at a time.

For more details, you can see here.
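
If you have more than one GPU, an alternative (my own suggestion, not part of the original answer) is to pin each process to its own device via CUDA_VISIBLE_DEVICES before importing TensorFlow; the device index below is just an example:

import os

# Make only this one GPU visible to the process; use a different index in each process
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

import tensorflow as tf  # import TensorFlow only after setting the variable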

To be clear, tensorflow will try (by default) to consume all available GPUs. It cannot be run with other programs also active. Closing. Feel free to reopen if this is actually another problem.

I encountered this error when running Keras CuDNN tests in parallel with pytest-xdist. The solution was to run them serially.

For me, I got this error when using Keras with TensorFlow as the backend. It was because the deep learning environment in Anaconda was not activated properly; as a result, TensorFlow didn't kick in properly either. I noticed this because the last time I activated my deep learning environment (which is called dl), the prompt in my Anaconda Prompt changed to this:

(dl) C:\Users\georg\Anaconda3\envs\dl\etc\conda\activate.d>set "KERAS_BACKEND=tensorflow"

whereas before it only showed dl. Therefore, what I did to get rid of the above error was to close my Jupyter Notebook and Anaconda Prompt and relaunch them, several times.

I encountered this error after recently changing my OS to Windows 10; I never encountered it before when using Windows 7.

The error occurs if I load my GPU TensorFlow model while another GPU program is running; in my case a JCuda model loaded as a socket server, which is not large. If I close my other GPU program(s), this TensorFlow model loads successfully.

This JCuda program is not large at all, just around 70 MB, while the TensorFlow model is more than 500 MB and much larger. But I am using a 1080 Ti, which has plenty of memory, so it is probably not an out-of-memory problem; it is perhaps some tricky internal issue of TensorFlow regarding the OS or CUDA. (PS: I am using CUDA version 8.0.44 and haven't downloaded a newer version.)

Restarting my Jupyter processes wasn't enough; I had to reboot my computer.

In my case, it is enough to open the Jupyter Notebooks on separate servers.

This error only occurs for me if I try to use more than one TensorFlow/Keras model on the same server. It doesn't matter if I open one notebook, execute it, then close it and try opening another; if they are loaded in the same Jupyter server, the error always happens.
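
One thing that may help in this situation (an assumption on my part, not something the answer above tried) is to explicitly release the previous model's graph and session before building the next one when using Keras with the TensorFlow backend:

from keras import backend as K

# Destroy the current TF graph and session so the next model starts from a clean
# state and old GPU allocations do not linger in the same Jupyter kernel
K.clear_session()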

user3098688

In my case, the network filesystem under which libcublas.so was located simply died. The node was rebooted and everything was fine. Just to add another point to the dataset.
