Why does softmax_cross_entropy_with_logits_v2 return a cost even for identical values?


Question


I have tested "softmax_cross_entropy_with_logits_v2" with some random values:

import tensorflow as tf

# placeholders for a batch of 5-dimensional logits (x) and one-hot labels (y)
x = tf.placeholder(tf.float32, shape=[None, 5])
y = tf.placeholder(tf.float32, shape=[None, 5])
# returns one cross-entropy value per row
softmax = tf.nn.softmax_cross_entropy_with_logits_v2(logits=x, labels=y)

with tf.Session() as sess:
    feedx = [[0.1, 0.2, 0.3, 0.4, 0.5], [0., 0., 0., 0., 1.]]
    feedy = [[1., 0., 0., 0., 0.], [0., 0., 0., 0., 1.]]
    softmax = sess.run(softmax, feed_dict={x: feedx, y: feedy})
    print("softmax", softmax)

Console output: softmax [1.8194163 0.9048325]

My understanding of this function was that it only returns a cost when the logits and labels are different.

Then why does it return 0.9048325 even though the values are the same?


Answer 1:


The way tf.nn.softmax_cross_entropy_with_logits_v2 works is that it first applies softmax to your x array to turn it into probabilities:

    p_i = exp(x_i) / sum_j exp(x_j)

where i is the index into your array. Then the output of tf.nn.softmax_cross_entropy_with_logits_v2 is the dot product between -log(p) and the labels:

    output = -sum_i labels_i * log(p_i)

Since the labels are either 0 or 1, only the term where the label equals one contributes. So in your first sample, the softmax probability of the first index is

    p_0 = exp(0.1) / (exp(0.1) + exp(0.2) + exp(0.3) + exp(0.4) + exp(0.5)) ≈ 0.1621

and the output will be

    -log(0.1621) ≈ 1.8194

Your second sample gives a different value simply because x[0] differs from x[1], not because logits and labels "match".
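
To make the arithmetic above concrete, here is a minimal NumPy sketch (an illustration added here, not part of the original answer) that reproduces the two printed values by applying the softmax and cross-entropy formulas by hand:

import numpy as np

def softmax(x):
    # exponentiate and normalize; subtracting the max is only for numerical stability
    e = np.exp(x - np.max(x))
    return e / e.sum()

feedx = np.array([[0.1, 0.2, 0.3, 0.4, 0.5], [0., 0., 0., 0., 1.]])
feedy = np.array([[1., 0., 0., 0., 0.], [0., 0., 0., 0., 1.]])

for x_row, y_row in zip(feedx, feedy):
    p = softmax(x_row)                 # probabilities p_i
    loss = -np.sum(y_row * np.log(p))  # dot product of the labels with -log(p)
    print(loss)                        # ~1.8194163 and ~0.9048325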




Answer 2:


tf.nn.softmax_cross_entropy_with_logits_v2, as per the documentation, expects unscaled logits, because it performs a softmax operation on the logits internally. Your second input [0, 0, 0, 0, 1] is therefore internally softmaxed to roughly [0.15, 0.15, 0.15, 0.15, 0.4], and the cross entropy between this distribution and the true label [0, 0, 0, 0, 1] works out to the value you get.
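
For illustration (this snippet is an addition, not from the original answer; it assumes the same TF 1.x setup as the question), you can print tf.nn.softmax of the second input to see the internally computed probabilities and confirm that the loss is just -log of the probability at the true label's index:

import tensorflow as tf

logits = tf.constant([[0., 0., 0., 0., 1.]])
labels = tf.constant([[0., 0., 0., 0., 1.]])
probs = tf.nn.softmax(logits)  # what the op computes internally from the logits
loss = tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=labels)

with tf.Session() as sess:
    print(sess.run(probs))  # ~[[0.1488 0.1488 0.1488 0.1488 0.4046]]
    print(sess.run(loss))   # ~[0.9048], i.e. -log(0.4046)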



Source: https://stackoverflow.com/questions/52134869/why-softmax-cross-entropy-with-logits-v2-return-cost-even-same-value
