TensorFlow: Should Loss and Metric be identical?

Submitted by こ雲淡風輕ζ on 2021-02-08 07:55:39

Question


I am using binary cross entropy as my loss function and also as my metric.

However, I see different values for the loss and metric. They are very similar, however they are different.

Why is this the case? I am using tf.keras.losses.binary_crossentropy(y_true, y_pred) for both.

For example, I get a loss of 0.1506 while the metric value is 0.1525.


Answer 1:


If you use the same function as both the loss and a metric, you will usually see slightly different values in deep networks. This is generally due to floating-point precision: even though the mathematical equations are equivalent, the operations are not run in the same order, which can lead to small differences.
That is exactly what is happening in your case.
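The reordering effect is easy to demonstrate in plain Python: floating-point addition is not associative, so reducing the same per-example losses in a different order can change the last few bits of the result.

```python
# Floating-point addition is not associative, so summing identical
# values in a different order can give slightly different results --
# this alone accounts for tiny loss/metric discrepancies.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c   # one reduction order
right = a + (b + c)  # another reduction order
print(left == right)  # False: the two orders disagree in the last bits
```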

But if you are also using any regularizer, the difference between the loss and the metric will be larger, since regularizers add a penalty term to the loss function to discourage overfitting; the metric does not include this penalty.
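A minimal sketch of why regularization widens the gap, using plain Python rather than Keras internals (the weights and the L2 factor here are hypothetical, chosen just for illustration):

```python
import math

def bce(y_true, y_pred):
    # mean binary cross-entropy over all elements:
    # -(y*log(p) + (1-y)*log(1-p))
    terms = [-(t * math.log(p) + (1 - t) * math.log(1 - p))
             for t, p in zip(y_true, y_pred)]
    return sum(terms) / len(terms)

y_true = [0.0, 1.0, 0.0, 0.0]
y_pred = [0.6, 0.4, 0.4, 0.6]

weights = [0.5, -0.3]                     # hypothetical kernel weights
l2_penalty = 0.01 * sum(w * w for w in weights)  # hypothetical L2 term

metric_value = bce(y_true, y_pred)             # metric: plain BCE only
loss_value = bce(y_true, y_pred) + l2_penalty  # loss: BCE + penalty
# loss_value exceeds metric_value by exactly the regularization penalty
```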

Ideally, the metric and the loss compute the same quantity. Let's compare the examples from the documentation.

import tensorflow as tf

BinaryCrossentropy as a Metric:

m = tf.keras.metrics.BinaryCrossentropy()
m.update_state([[0., 1.], [0., 0.]], [[0.6, 0.4], [0.4, 0.6]])
m.result().numpy()

0.81492424

BinaryCrossentropy as a Loss:

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
bce = tf.keras.losses.BinaryCrossentropy()
bce(y_true, y_pred).numpy() 

0.81492424
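As a sanity check, computing the binary cross-entropy formula by hand (ignoring the small epsilon clipping Keras applies internally) reproduces essentially the same number as both snippets above:

```python
import math

y_true = [0.0, 1.0, 0.0, 0.0]
y_pred = [0.6, 0.4, 0.4, 0.6]

# per-element binary cross-entropy: -(y*log(p) + (1-y)*log(1-p))
terms = [-(t * math.log(p) + (1 - t) * math.log(1 - p))
         for t, p in zip(y_true, y_pred)]
print(sum(terms) / len(terms))  # ~0.814925, matching both results above
```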

Hope this answers your question, Happy Learning!



Source: https://stackoverflow.com/questions/58876233/tensorflow-should-loss-and-metric-be-identical
