Question
I am using binary cross entropy as both my loss function and my metric, with tf.keras.losses.binary_crossentropy(y_true, y_pred) for both.
However, I see slightly different values for the loss and the metric: the loss is 0.1506 while the metric value is 0.1525.
Why is this the case?
Answer 1:
If you use the same function as both the loss and a metric, you will usually see slightly different results in deep networks. This is generally due to floating-point precision: even though the mathematical formulas are equivalent, the operations are not run in the same order, which can lead to small differences.
That is exactly what is happening in your case.
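A minimal sketch of why the order of operations matters: floating-point addition is not associative, so summing the same per-example losses in a different order can yield a slightly different mean.

```python
# Float addition is not associative: grouping the same numbers
# differently produces different rounding, hence different results.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a)       # 0.6000000000000001
print(b)       # 0.6
print(a == b)  # False
```

The discrepancy here is tiny, just as the 0.1506 vs 0.1525 gap in the question is small.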
If you are using any regularizer, the difference between the loss and the metric will be larger, since regularizers add a penalty term to the loss function to avoid overfitting, while the metric reports only the unpenalized cross-entropy.
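A hedged NumPy sketch of that effect (not Keras internals; the weights and the regularization factor below are hypothetical): the reported loss is the data loss plus the penalty, while the metric is the data loss alone.

```python
import numpy as np

def bce(y_true, y_pred):
    # Mean binary cross-entropy: -(y*log(p) + (1-y)*log(1-p)), averaged.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(-(y_true * np.log(y_pred)
                           + (1 - y_true) * np.log(1 - y_pred))))

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
weights = np.array([0.5, -0.3])  # hypothetical kernel weights
l2_lambda = 0.01                 # hypothetical L2 regularization factor

metric_value = bce(y_true, y_pred)  # what the metric would report
# The loss additionally carries the L2 penalty on the weights.
loss_value = metric_value + l2_lambda * float(np.sum(weights ** 2))

print(metric_value)  # data loss only
print(loss_value)    # strictly larger than the metric
```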
Ideally, the metric and the loss work the same way. Let's look at the example from the documentation and compare.
BinaryCrossentropy as a metric:
import tensorflow as tf
m = tf.keras.metrics.BinaryCrossentropy()
m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]])
m.result().numpy()
0.81492424
BinaryCrossentropy as a loss:
y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
bce = tf.keras.losses.BinaryCrossentropy()
bce(y_true, y_pred).numpy()
0.81492424
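As a cross-check on the underlying formula, the same value can be reproduced by hand with NumPy; both the loss class and the metric class compute the mean binary cross-entropy, which is why they agree here (TensorFlow's tiny epsilon clipping accounts for digits beyond the fourth decimal).

```python
import numpy as np

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])

# Element-wise binary cross-entropy, then the mean over all entries.
per_element = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
mean_bce = per_element.mean()
print(mean_bce)  # approximately 0.8149, matching both outputs above
```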
Hope this answers your question. Happy learning!
Source: https://stackoverflow.com/questions/58876233/tensorflow-should-loss-and-metric-be-identical