I have a TensorFlow model, and one part of this model evaluates the accuracy. The accuracy is just another node in the TensorFlow graph that takes in logits and labels.
You can reuse the accuracy node, but you need two different SummaryWriters: one for the training runs and one for the test data. You also have to assign the scalar summary for accuracy to a variable so you can run it on its own later.
# TensorFlow 0.x summary API (tf.summary.* in later releases)
accuracy_summary = tf.scalar_summary("Training Accuracy", accuracy)
tf.scalar_summary("SomethingElse", foo)
summary_op = tf.merge_all_summaries()  # merges every summary defined so far

summaries_dir = '/me/mydir'
train_writer = tf.train.SummaryWriter(summaries_dir + '/train', sess.graph)
test_writer = tf.train.SummaryWriter(summaries_dir + '/test')
Then in your training loop you do the normal training and record the merged summaries with the train_writer. In addition, every 100th iteration you run the graph on the test set and record only the accuracy summary with the test_writer.
# inside the training loop; n is the current step
# Record train-set summaries, and train
summary, _ = sess.run([summary_op, train_step], feed_dict=...)
train_writer.add_summary(summary, n)
if n % 100 == 0:  # Record test-set accuracy summary
    summary, acc = sess.run([accuracy_summary, accuracy], feed_dict=...)
    test_writer.add_summary(summary, n)
    print('Accuracy at step %s: %s' % (n, acc))
You can then point TensorBoard to the parent directory (summaries_dir) and it will load both sets of data as separate runs.
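For example, running tensorboard --logdir=/me/mydir (the parent directory used above) is enough; TensorBoard picks up the /train and /test subdirectories as two runs and, since they share the same summary tag, overlays their curves on the same chart.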
This is also covered in the TensorFlow How-Tos: https://www.tensorflow.org/versions/r0.11/how_tos/summaries_and_tensorboard/index.html
To run the same operation but get summaries for different feed_dict data, simply attach two summary ops to that op. Say you want to run the accuracy op on both validation and test data and get summaries for both:
validation_acc_summary = tf.summary.scalar('validation_accuracy', accuracy)  # intended to run on validation set
test_acc_summary = tf.summary.scalar('test_accuracy', accuracy)              # intended to run on test set

with tf.Session() as sess:
    # do your thing
    # ...
    # the accuracy op just needs labels y_ and input x to compute logits
    validation_summary_str = sess.run(validation_acc_summary, feed_dict={x: mnist.validation.images, y_: mnist.validation.labels})
    test_summary_str = sess.run(test_acc_summary, feed_dict={x: mnist.test.images, y_: mnist.test.labels})

    # assuming you have a tf.summary.FileWriter set up
    # (pass a global step as the second argument if you want points ordered by step)
    file_writer.add_summary(validation_summary_str)
    file_writer.add_summary(test_summary_str)
Also remember that you can always pull the raw scalar values out of the protobuf summary_str and do your own logging.
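For instance, a minimal sketch of that kind of manual extraction, assuming the validation_summary_str from above (tf.Summary is the protobuf class behind the serialized string):
summary_proto = tf.Summary()                            # empty Summary protobuf
summary_proto.ParseFromString(validation_summary_str)   # fill it from the serialized bytes
for value in summary_proto.value:
    print(value.tag, value.simple_value)                # e.g. 'validation_accuracy' and its float value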