I ran the MNIST demo in TensorFlow with 2 conv layers and a fully-connected layer, and I got a message that it 'ran out of memory trying to allocate 2.59GiB', but it shows that to…
Before delving into other possible explanations like the ones mentioned above, please check that there is no hung process reserving GPU memory. This just happened to me: my TensorFlow script hung on an error, but I did not notice it because I was monitoring running processes with nvidia-smi, and the hung script did not show up in nvidia-smi's output even though it was still reserving GPU memory. Killing the hung scripts (TensorFlow typically spawns as many as there are GPUs in the system) completely solved a similar problem, after I had exhausted all the TF wizardry.
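
If you are on Linux, one way to spot such stale processes is to check which PIDs still hold the NVIDIA device files open; the kernel still knows who has the device open even when nvidia-smi no longer lists the process. Here is a minimal sketch, assuming the `fuser` utility is installed and the usual device paths like /dev/nvidia0 (the helper name `pids_holding_gpu` is just mine):

```python
# Minimal sketch (Linux only, assumes the `fuser` utility is installed):
# list the PIDs that still hold an NVIDIA device file open, even if they
# no longer appear in nvidia-smi's process table.
import glob
import subprocess

def pids_holding_gpu():
    """Return the set of PIDs that currently have /dev/nvidia* open."""
    devices = glob.glob("/dev/nvidia*")  # e.g. /dev/nvidia0, /dev/nvidiactl
    if not devices:
        return set()
    # fuser prints the PIDs to stdout (file names and access flags go to
    # stderr); run it with sudo to also see processes owned by other users.
    result = subprocess.run(["fuser"] + devices,
                            capture_output=True, text=True)
    return {int(tok) for tok in result.stdout.split() if tok.isdigit()}

if __name__ == "__main__":
    pids = sorted(pids_holding_gpu())
    print("PIDs holding the GPU:", pids if pids else "none")
```

Once you have the PIDs, check what each one is with `ps -p <pid> -o cmd=` and kill the stale ones with `kill -9 <pid>`; after that the GPU memory should show up as free again in nvidia-smi.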