Training in batches but testing on individual data items in TensorFlow?

Submitted by 半世苍凉 on 2020-01-03 16:46:59

Question


I have trained a convolutional neural network with a batch size of 10. However, when testing, I want to predict the classification for each data item separately, not in batches, and this gives the error:

Assign requires shapes of both tensors to match. lhs shape= [1,3] rhs shape= [10,3]

I understand 10 refers to batch_size and 3 is the number of classes that I am classifying into.

Can we not train using batches and test individually?

Update:

Training Phase:

batch_size = 10
classes = 3
# vlimit is some constant: same for the training and testing phases
X = tf.placeholder(tf.float32, [batch_size, vlimit], name='X_placeholder')
Y = tf.placeholder(tf.int32, [batch_size, classes], name='Y_placeholder')
w = tf.Variable(tf.random_normal(shape=[vlimit, classes], stddev=0.01), name='weights')
b = tf.Variable(tf.ones([batch_size, classes]), name="bias")
logits = tf.matmul(X, w) + b
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y, name='loss')
loss = tf.reduce_mean(entropy)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)

Testing Phase:

batch_size = 1
classes = 3
X = tf.placeholder(tf.float32, [batch_size, vlimit], name='X_placeholder')
Y = tf.placeholder(tf.int32, [batch_size, classes], name='Y_placeholder')
w = tf.Variable(tf.random_normal(shape=[vlimit, classes], stddev=0.01), name='weights')
b = tf.Variable(tf.ones([batch_size, classes]), name="bias")
logits = tf.matmul(X, w) + b
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y, name='loss')
loss = tf.reduce_mean(entropy)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)

Answer 1:


When you define your placeholder, use:

X = tf.placeholder(tf.float32, [None, vlimit], name='X_placeholder')
Y = tf.placeholder(tf.int32, [None, classes], name='Y_placeholder')
...

instead, for both your training and testing phases (in fact, you shouldn't need to re-define these for the testing phase at all). Also, define your bias as:

b = tf.Variable(tf.ones([classes]), name="bias")

Otherwise you are training a separate bias for each sample in your batch, which is not what you want.

TensorFlow treats the first dimension of your input as the batch size, so for training you can feed it batches of 10, and for testing you can feed it individual samples (or batches of 100, or whatever).
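Putting both suggestions together, here is a minimal sketch of the fixed graph (the vlimit and learning_rate values below are hypothetical; the question treats them as given constants):

import tensorflow as tf

classes = 3
vlimit = 128           # hypothetical value; same constant for training and testing
learning_rate = 0.001  # hypothetical value, not specified in the question

# None lets the same graph accept any batch size: 10 in training, 1 at test time
X = tf.placeholder(tf.float32, [None, vlimit], name='X_placeholder')
Y = tf.placeholder(tf.int32, [None, classes], name='Y_placeholder')
w = tf.Variable(tf.random_normal(shape=[vlimit, classes], stddev=0.01), name='weights')
b = tf.Variable(tf.ones([classes]), name='bias')  # one bias per class, shared across the batch
logits = tf.matmul(X, w) + b  # b broadcasts along the batch dimension
entropy = tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=tf.cast(Y, tf.float32), name='loss')  # cast one-hot int labels to float
loss = tf.reduce_mean(entropy)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)

The same session can now run optimizer on a [10, vlimit] batch during training and evaluate logits on a [1, vlimit] sample at test time, without rebuilding the graph.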




Answer 2:


Absolutely. Placeholders are 'buckets' that get fed data from your inputs; all they do is direct data into your model. They can act like 'infinite buckets' via the None trick: you can put as much (or as little) data into them as you want (depending on available resources, obviously).

In training, try replacing batch_size with None in the training placeholders:

X = tf.placeholder(tf.float32, [None, vlimit], name='X_placeholder')
Y = tf.placeholder(tf.int32, [None, classes], name='Y_placeholder')

Then define everything else you have as before.

Then run some training ops, for example:

_, Tr_loss, Tr_acc = sess.run([optimizer, loss, accuracy], feed_dict={X: btc_x, Y: btc_y})
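(The accuracy op used here is not defined anywhere in the question; a common definition, shown purely as an assumption, compares predicted and true class indices:)

# hypothetical accuracy op: fraction of samples whose predicted class matches the label
correct = tf.equal(tf.argmax(logits, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))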

For testing, re-use these same placeholders (X, Y) and don't bother redefining the other variables.

All TensorFlow variables are static within a single TensorFlow graph definition. If you're restoring the model, the placeholders still exist from when it was trained, as do the other variables, e.g. w, b, logits, entropy, and optimizer.
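If the model is being restored into a fresh session, tf.train.import_meta_graph brings back the whole graph, placeholders included (a minimal sketch; 'model.ckpt' is a hypothetical checkpoint prefix):

saver = tf.train.import_meta_graph('model.ckpt.meta')
with tf.Session() as sess:
    saver.restore(sess, 'model.ckpt')
    graph = tf.get_default_graph()
    X = graph.get_tensor_by_name('X_placeholder:0')  # placeholders survive save/restore
    Y = graph.get_tensor_by_name('Y_placeholder:0')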

Then run the testing ops, for example:

Ts_loss, Ts_acc = sess.run([loss, accuracy], feed_dict={X: test_x, Y: test_y})
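For the single-item case the question asks about, slicing with a range keeps the leading batch dimension at 1 (a sketch, assuming test_x is a NumPy array):

prediction = tf.argmax(logits, 1)  # class index per sample; define once, outside any loop
single_pred = sess.run(prediction, feed_dict={X: test_x[0:1]})  # test_x[0:1] has shape [1, vlimit]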


Source: https://stackoverflow.com/questions/46430351/training-in-batches-but-testing-individual-data-item-in-tensorflow
