Keras Custom Binary Cross Entropy Loss Function. Get NaN as output for loss

Submitted by 帅比萌擦擦* on 2019-12-11 04:35:19

Question


I am trying to write a custom binary cross-entropy loss function. This is my script:

from keras import backend as K

def my_custom_loss(y_true, y_pred):
    # Naive binary cross-entropy: -(y * log(p) + (1 - y) * log(1 - p))
    t_loss = (-1) * (y_true * K.log(y_pred) + (1 - y_true) * K.log(1 - y_pred))
    return K.mean(t_loss)

When I run my script with this loss function, after a few iterations I get NaN as the output of the loss function.

Then I looked at the TensorFlow documentation and modified the loss function to the following:

t_loss = K.max(y_pred, 0) - y_pred * y_true + K.log(1 + K.exp((-1) * K.abs(y_pred)))

This code runs without any issue. I would like to know if someone could explain why my first loss function gives NaN as output.

Binary cross-entropy: -(y * log(p) + (1 - y) * log(1 - p))

I have a sigmoid activation on my last layer, so the value of 'p' should be between 0 and 1, and the log should be defined on that range.

Thank you.


Answer 1:


A naive implementation of binary cross-entropy runs into numerical problems as soon as the predicted probability reaches exactly 0 or 1 (or falls outside that range): log(0) is -inf, and 0 * log(0) produces NaN. The formula you posted is reformulated to ensure numerical stability and avoid overflow; the derivation below is the one given for tf.nn.sigmoid_cross_entropy_with_logits, with z the label and x the logit.
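To see the failure mode concretely, here is a minimal NumPy sketch (all values are illustrative; a logit of 20 is simply large enough to saturate float32):

import numpy as np

# Minimal float32 reproduction of the NaN: a moderately large logit saturates
# the sigmoid to exactly 1.0, so log(1 - p) becomes log(0).
x = np.float32(20.0)                                    # logit (illustrative value)
p = np.float32(1.0) / (np.float32(1.0) + np.exp(-x))    # sigmoid(x) == 1.0 in float32
y = np.float32(1.0)                                     # true label

# Naive formula: the (1 - y) * log(1 - p) term evaluates 0 * log(0) = 0 * (-inf) = NaN.
loss = -(y * np.log(p) + (np.float32(1.0) - y) * np.log(np.float32(1.0) - p))
print(p, loss)                                          # prints: 1.0 nan

The derivation of the stable form: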

z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + log(1 + exp(-x))
= x - x * z + log(1 + exp(-x))

For x < 0, to avoid overflow in exp(-x), we reformulate the above

x - x * z + log(1 + exp(-x))
= log(exp(x)) - x * z + log(1 + exp(-x))
= - x * z + log(1 + exp(x))

And the implementation uses the equivalent form:

max(x, 0) - x * z + log(1 + exp(-abs(x)))
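As a sketch, that form as a custom Keras loss could look like the following (an assumption here: y_pred must be raw logits, i.e. the last layer has no sigmoid activation, because the sigmoid is folded into the loss; also note that the element-wise max is K.maximum, whereas K.max reduces over an axis):

from keras import backend as K

def stable_binary_crossentropy(y_true, y_pred):
    # y_pred are raw logits (x in the derivation above); y_true is the label z.
    # max(x, 0) - x * z + log(1 + exp(-|x|))
    x, z = y_pred, y_true
    t_loss = K.maximum(x, 0.) - x * z + K.log(1. + K.exp(-K.abs(x)))
    return K.mean(t_loss)

In practice you rarely need to write this yourself: tf.nn.sigmoid_cross_entropy_with_logits computes exactly this, and the Keras backend's binary_crossentropy has a from_logits argument for the same purpose.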


Source: https://stackoverflow.com/questions/48951109/keras-custom-binary-cross-entropy-loss-function-get-nan-as-output-for-loss
