I've run several training sessions with different graphs in TensorFlow. The summaries I set up show interesting results in the training and validation. Now, I'd like to take the data I've saved in the summary logs and perform some statistical analysis, and in general plot and look at the summary data in different ways.
Here is a complete example for obtaining values from a scalar. You can see the message specification for the Event protobuf message in event.proto in the TensorFlow source.
import tensorflow as tf

for event in tf.train.summary_iterator('runs/easy_name/events.out.tfevents.1521590363.DESKTOP-43A62TM'):
    for value in event.summary.value:
        print(value.tag)
        if value.HasField('simple_value'):
            print(value.simple_value)
I've been using this. It assumes that you only want to see tags you've logged more than once whose values are floats, and it returns the results as a pd.DataFrame. Just call metrics_df = parse_events_file(path).
from collections import defaultdict

import pandas as pd
import tensorflow as tf


def is_interesting_tag(tag):
    return 'val' in tag or 'train' in tag


def parse_events_file(path: str) -> pd.DataFrame:
    metrics = defaultdict(list)
    for e in tf.train.summary_iterator(path):
        for v in e.summary.value:
            if isinstance(v.simple_value, float) and is_interesting_tag(v.tag):
                metrics[v.tag].append(v.simple_value)
            if v.tag == 'loss' or v.tag == 'accuracy':
                print(v.simple_value)
    metrics_df = pd.DataFrame({k: v for k, v in metrics.items() if len(v) > 1})
    return metrics_df
You can simply use:
tensorboard --inspect --event_file=myevents.out
or if you want to filter a specific subset of events of the graph:
tensorboard --inspect --event_file=myevents.out --tag=loss
If you want to create something more custom, you can dig into /tensorflow/python/summary/event_file_inspector.py to understand how to parse the event files.
You can use the script serialize_tensorboard, which will take in a logdir and write out all the data in json format.
You can also use an EventAccumulator for a convenient Python API (this is the same API that TensorBoard uses).
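A minimal sketch of the EventAccumulator approach. The import path below is the one used by the standalone `tensorboard` package (in older TensorFlow versions the class lived under `tensorflow.python.summary`); the file path and tag name are placeholders:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator


def load_scalars(event_file, tag):
    """Return (step, value) pairs for `tag` from one event file."""
    acc = EventAccumulator(event_file)
    acc.Reload()  # read the file contents into memory
    # Tags() maps plugin kinds ('scalars', 'images', ...) to tag lists.
    if tag not in acc.Tags().get('scalars', []):
        return []
    return [(s.step, s.value) for s in acc.Scalars(tag)]


# Example usage (placeholder path and tag):
# print(load_scalars('runs/my_run/events.out.tfevents.12345.host', 'loss'))
```

Unlike iterating the raw events yourself, EventAccumulator also applies TensorBoard's own downsampling/reservoir logic, which is usually what you want for large runs.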
As Fabrizio says, TensorBoard is a great tool for visualizing the contents of your summary logs. However, if you want to perform a custom analysis, you can use the tf.train.summary_iterator() function to loop over all of the tf.Event and tf.Summary protocol buffers in the log:
for summary in tf.train.summary_iterator("/path/to/log/file"):
    # Perform custom processing in here.
UPDATE for TF 2.x:
from tensorflow.python.summary.summary_iterator import summary_iterator
You need to import it explicitly; as of 2.0.0-rc2 that module is not imported by default.
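For completeness, a sketch of using that explicit import in TF 2.x. Note that TF2 summaries store scalars as tensors rather than in simple_value, so the value has to be decoded with tf.make_ndarray; the log file path is a placeholder:

```python
import tensorflow as tf
from tensorflow.python.summary.summary_iterator import summary_iterator


def print_scalar_events(logfile):
    """Print tag, step and decoded value for tensor-valued summaries."""
    for event in summary_iterator(logfile):
        for value in event.summary.value:
            if value.HasField('tensor'):
                # tf.make_ndarray converts the TensorProto to a numpy array.
                print(value.tag, event.step, tf.make_ndarray(value.tensor))


# Example usage (placeholder path):
# print_scalar_events('tmp/summaries/events.out.tfevents.12345.host')
```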
The following works as of tensorflow version 2.0.0-beta1:
import os

import tensorflow as tf
from tensorflow.python.framework import tensor_util

summary_dir = 'tmp/summaries'
summary_writer = tf.summary.create_file_writer(summary_dir)

with summary_writer.as_default():
    tf.summary.scalar('loss', 0.1, step=42)
    tf.summary.scalar('loss', 0.2, step=43)
    tf.summary.scalar('loss', 0.3, step=44)
    tf.summary.scalar('loss', 0.4, step=45)
summary_writer.flush()  # make sure the events are on disk before reading

from tensorflow.core.util import event_pb2
from tensorflow.python.lib.io import tf_record


def my_summary_iterator(path):
    for r in tf_record.tf_record_iterator(path):
        yield event_pb2.Event.FromString(r)


for filename in os.listdir(summary_dir):
    path = os.path.join(summary_dir, filename)
    for event in my_summary_iterator(path):
        for value in event.summary.value:
            t = tensor_util.MakeNdarray(value.tensor)
            print(value.tag, event.step, t, type(t))
The code for my_summary_iterator is copied from tensorflow/python/summary/summary_iterator.py - there was no way to import it at runtime.