This is the sample MNIST code I am running:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
Complementing Abhijay's answer, you can easily get the mean accuracy across the test minibatches:
accuracy_sum = tf.reduce_sum(tf.cast(correct_prediction, tf.float32))
good = 0
total = 0
for i in xrange(10):
    testSet = mnist.test.next_batch(50)
    good += accuracy_sum.eval(feed_dict={x: testSet[0], y_: testSet[1], keep_prob: 1.0})
    total += testSet[0].shape[0]
print("test accuracy %g" % (good / total))
Here is how I solved this problem: the error means that the GPU runs out of memory during accuracy evaluation, so the evaluation has to be done on smaller chunks of data, i.e. in batches. Instead of running the evaluation on the whole test dataset at once, run it batch by batch, as described in this post: How to read data in batches when using TensorFlow
Hence, for accuracy evaluation on the test dataset, instead of this line of code:
print("test accuracy %g"%accuracy.eval(feed_dict={ x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
this can be used:
for i in xrange(10):
    testSet = mnist.test.next_batch(50)
    print("test accuracy %g" % accuracy.eval(feed_dict={x: testSet[0], y_: testSet[1], keep_prob: 1.0}))
When I ran 1000 training steps and used 10 batches of batch_size = 50 for accuracy evaluation, I got the following results:
step 0, training accuracy 0.04
step 100, training accuracy 0.88
step 200, training accuracy 0.9
step 300, training accuracy 0.88
step 400, training accuracy 0.94
step 500, training accuracy 0.96
step 600, training accuracy 0.94
step 700, training accuracy 0.96
step 800, training accuracy 0.9
step 900, training accuracy 1
test accuracy 1
test accuracy 0.92
test accuracy 1
test accuracy 1
test accuracy 0.94
test accuracy 0.96
test accuracy 0.92
test accuracy 0.96
test accuracy 0.92
test accuracy 0.94
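The ten test accuracy values above come from disjoint batches of equal size (50 images each), so their plain mean, 0.956 here, is the overall accuracy on those 500 test images. If you want the script to print that single number directly instead of ten separate lines, a small variation of the evaluation loop (using the same accuracy op and placeholders) does it:
batch_acc = []
for i in xrange(10):
    testSet = mnist.test.next_batch(50)
    batch_acc.append(accuracy.eval(feed_dict={x: testSet[0], y_: testSet[1], keep_prob: 1.0}))
# every batch holds 50 images, so the plain mean equals the overall accuracy
print("mean test accuracy %g" % (sum(batch_acc) / len(batch_acc)))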