ML Engine Experiment eval tf.summary.scalar not displaying in tensorboard


Because the summary node in the graph is just a node: it still needs to be evaluated (producing a protobuf string), and that string still needs to be written to a file. It's not evaluated in training mode because it's not upstream of the train_op in your graph, and even if it were evaluated, it wouldn't be written to a file unless you specified a tf.train.SummarySaverHook as one of your training_chief_hooks in your EstimatorSpec. Because the Estimator class doesn't assume you want any extra evaluation during training, evaluation is normally done only during the EVAL phase, and you just increase min_eval_frequency or checkpoint_frequency to get more evaluation datapoints.
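For example, since evaluation only happens when a new checkpoint exists, saving checkpoints more often gives you more eval datapoints. A minimal sketch, assuming the tf.contrib.learn.Experiment setup from the question (my_model_fn, train_input_fn, and eval_input_fn are placeholders for your own code):

import tensorflow as tf

# Save a checkpoint every 500 steps instead of the time-based default.
run_config = tf.contrib.learn.RunConfig(save_checkpoints_steps=500)
estimator = tf.estimator.Estimator(model_fn=my_model_fn, config=run_config)
experiment = tf.contrib.learn.Experiment(
    estimator,
    train_input_fn=train_input_fn,
    eval_input_fn=eval_input_fn,
    min_eval_frequency=1)  # evaluate whenever a new checkpoint appears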

If you really, really want to log a summary during training, here's how you'd do it:

def model_fn(features, labels, mode, params):
  ...
  if mode == tf.estimator.ModeKeys.TRAIN:
    # loss is already summarized automatically during training, so don't
    # duplicate that summary op; only add one for sequence accuracy.
    loss = tf.contrib.legacy_seq2seq.sequence_loss(logits, outputs, weights)
    # sequence_accuracy is your own metric fn from the elided code above.
    accuracy = sequence_accuracy(targets, predictions, weights)
    seq_sum_op = tf.summary.scalar('sequence_accuracy', accuracy)
    # Force the summary op to be evaluated as part of each training step.
    with tf.control_dependencies([seq_sum_op]):
      train_op = optimizer.minimize(loss)

    return tf.estimator.EstimatorSpec(
      loss=loss,
      mode=mode,
      train_op=train_op,
      training_chief_hooks=[tf.train.SummarySaverHook(
          save_steps=100,
          output_dir='./summaries',
          summary_op=seq_sum_op
      )]
    )

But it's better to just increase your eval frequency and add an accuracy entry to eval_metric_ops using tf.metrics.accuracy (the core-API streaming accuracy metric, formerly tf.contrib.metrics.streaming_accuracy).
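A minimal sketch of what that EVAL branch could look like (targets, predictions, weights, and loss come from the elided model code above; the metric name 'sequence_accuracy' is arbitrary):

if mode == tf.estimator.ModeKeys.EVAL:
    # tf.metrics.accuracy returns a (value, update_op) pair; the Estimator
    # runs update_op over every eval batch and writes the final value to the
    # events file, so TensorBoard picks it up automatically.
    accuracy = tf.metrics.accuracy(
        labels=targets, predictions=predictions, weights=weights)
    return tf.estimator.EstimatorSpec(
        mode=mode,
        loss=loss,
        eval_metric_ops={'sequence_accuracy': accuracy})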
