cross-entropy

How can I implement a weighted cross entropy loss in tensorflow using sparse_softmax_cross_entropy_with_logits

Submitted by 风格不统一 on 2019-11-27 05:30:09
Question: I am starting to use TensorFlow (coming from Caffe), and I am using the loss sparse_softmax_cross_entropy_with_logits. The function accepts labels as integers 0, 1, ..., C-1 instead of one-hot encodings. Now I want to apply a weighting that depends on the class label. I know this could perhaps be done with a matrix multiplication if I used softmax_cross_entropy_with_logits (one-hot encoding); is there any way to do the same with sparse_softmax_cross_entropy_with_logits? Answer 1: import tensorflow as tf
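A minimal sketch of one common approach (the weight values, shapes, and variable names below are assumptions for illustration, not from the excerpt): compute the unweighted per-example loss, look up each example's class weight with tf.gather, and rescale before reducing.

import tensorflow as tf

# Hypothetical per-class weights for C = 3 classes (values chosen for illustration).
class_weights = tf.constant([1.0, 2.0, 0.5])
labels = tf.constant([0, 2, 1])        # sparse integer labels, shape [batch]
logits = tf.random.normal([3, 3])      # raw scores, shape [batch, C]

# Unweighted per-example cross-entropy, shape [batch].
per_example_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)

# Weight of each example's true class, then a weighted mean.
weights = tf.gather(class_weights, labels)
weighted_loss = tf.reduce_mean(per_example_loss * weights)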

Why is the Cross Entropy method preferred over Mean Squared Error? In what cases does this not hold up? [closed]

Submitted by 梦想与她 on 2019-11-26 22:37:30
Question: [Closed. This question needs to be more focused and is not currently accepting answers.] Although both methods give a better score the closer the prediction is to the target, cross-entropy is still preferred. Is that true in every case, or are there particular scenarios where we prefer cross-entropy over MSE? Answer 1: Cross-entropy is preferred for classification, while
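An illustrative sketch of the usual argument for classification (the numbers are made up): with a sigmoid output unit, the gradient of MSE with respect to the logit nearly vanishes when the prediction is confidently wrong, whereas the cross-entropy gradient stays large, so learning recovers faster.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y_true, z = 1.0, -5.0        # true label is 1, but the logit is very negative
p = sigmoid(z)               # ~0.0067: a confident, wrong prediction

# Gradient of each loss with respect to the logit z:
grad_mse = 2 * (p - y_true) * p * (1 - p)   # ~ -0.013  (tiny signal, slow learning)
grad_ce = p - y_true                        # ~ -0.993  (strong signal)
print(grad_mse, grad_ce)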

What's the difference between sparse_softmax_cross_entropy_with_logits and softmax_cross_entropy_with_logits?

Submitted by ╄→гoц情女王★ on 2019-11-26 15:41:23
I recently came across tf.nn.sparse_softmax_cross_entropy_with_logits and I cannot figure out what the difference is compared to tf.nn.softmax_cross_entropy_with_logits. Is the only difference that the training vectors y have to be one-hot encoded when using sparse_softmax_cross_entropy_with_logits? Reading the API, I was unable to find any other difference compared to softmax_cross_entropy_with_logits. But why do we need the extra function then? Shouldn't softmax_cross_entropy_with_logits produce the same results as sparse_softmax_cross_entropy_with_logits, if it is supplied with one-hot
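A small sketch of the equivalence (the logits and labels below are made-up examples): the two ops compute the same per-example loss and differ only in how the labels are encoded, integer indices for the sparse variant versus one-hot rows for the dense one.

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.3]])            # shape [batch, C]
sparse_labels = tf.constant([0, 1])                # class indices
onehot_labels = tf.one_hot(sparse_labels, depth=3) # one-hot rows

loss_sparse = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=sparse_labels, logits=logits)
loss_dense = tf.nn.softmax_cross_entropy_with_logits(
    labels=onehot_labels, logits=logits)
# Both tensors hold the same per-example values (up to floating-point error).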

What is the meaning of the word logits in TensorFlow?

Submitted by 别等时光非礼了梦想. on 2019-11-26 11:44:24
Question: In the following TensorFlow function, we must feed in the activations of the artificial neurons in the final layer. That I understand. But I don't understand why it is called logits? Isn't that a mathematical function? loss_function = tf.nn.softmax_cross_entropy_with_logits(logits=last_layer, labels=target_output) Answer 1: Logits is an overloaded term which can mean many different things: In math, logit is a function that maps probabilities ([0, 1]) to the real line ((-inf, inf)). A probability of 0.5
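A short sketch of the math behind the term (plain NumPy; the function names are mine): the logit function maps a probability to the real line, and softmax maps a vector of raw scores ("logits") back to a probability distribution, which is why the raw final-layer activations are referred to as logits here.

import numpy as np

def logit(p):
    # Maps a probability in [0, 1] to (-inf, inf).
    return np.log(p / (1.0 - p))

def softmax(z):
    # Maps raw scores back to a probability distribution; subtract the max for stability.
    e = np.exp(z - np.max(z))
    return e / e.sum()

print(logit(0.5))                          # 0.0
print(softmax(np.array([2.0, 1.0, 0.1])))  # non-negative, sums to 1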

Tensorflow sigmoid and cross entropy vs sigmoid_cross_entropy_with_logits

Submitted by 限于喜欢 on 2019-11-26 07:57:13
Question: When trying to compute cross entropy with a sigmoid activation function, there is a difference between loss1 = -tf.reduce_sum(p*tf.log(q), 1) and loss2 = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q), 1). But they are the same with a softmax activation function. The following is the sample code: import tensorflow as tf sess2 = tf.InteractiveSession() p = tf.placeholder(tf.float32, shape=[None, 5]) logit_q = tf.placeholder(tf.float32, shape=[None, 5]) q = tf.nn.sigmoid(logit_q)
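A sketch of why the two differ (reusing the TF 1.x placeholders from the excerpt; the fix shown is an assumption about the asker's intent, not their code): with sigmoid outputs each of the 5 dimensions is an independent binary cross-entropy, so the manual formula needs both the p*log(q) term and the (1-p)*log(1-q) term.

import tensorflow as tf

p = tf.placeholder(tf.float32, shape=[None, 5])        # targets in [0, 1]
logit_q = tf.placeholder(tf.float32, shape=[None, 5])  # raw scores
q = tf.nn.sigmoid(logit_q)

# Full binary cross-entropy, summed over the 5 independent outputs.
loss_manual = -tf.reduce_sum(p * tf.log(q) + (1 - p) * tf.log(1 - q), 1)
loss_builtin = tf.reduce_sum(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q), 1)
# With both terms included, loss_manual matches loss_builtin (up to numerical precision).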

How to choose cross-entropy loss in tensorflow?

Submitted by 余生长醉 on 2019-11-26 02:48:33
Classification problems, such as logistic regression or multinomial logistic regression, optimize a cross-entropy loss. Normally, the cross-entropy layer follows the softmax layer, which produces a probability distribution. In TensorFlow, there are at least a dozen different cross-entropy loss functions: tf.losses.softmax_cross_entropy, tf.losses.sparse_softmax_cross_entropy, tf.losses.sigmoid_cross_entropy, tf.contrib.losses.softmax_cross_entropy, tf.contrib.losses.sigmoid_cross_entropy, tf.nn.softmax_cross_entropy_with_logits, tf.nn.sigmoid_cross_entropy_with_logits ... Which work only for
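A rough sketch of the usual mapping onto the tf.nn variants listed above (shapes and values are made up): the softmax family is for multi-class problems with exactly one true class per example, sparse when labels are integer indices and dense when they are one-hot or soft, while the sigmoid family treats each output as an independent binary problem (binary or multi-label classification).

import tensorflow as tf

logits = tf.random.normal([4, 3])                 # [batch, num_classes]

# Multi-class, integer labels.
sparse_labels = tf.constant([0, 2, 1, 0])
loss_sparse = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=sparse_labels, logits=logits)

# Multi-class, one-hot (or soft) labels.
loss_softmax = tf.nn.softmax_cross_entropy_with_logits(
    labels=tf.one_hot(sparse_labels, 3), logits=logits)

# Binary / multi-label, one independent sigmoid per output.
binary_targets = tf.constant([[1., 0., 1.],
                              [0., 1., 0.],
                              [1., 1., 0.],
                              [0., 0., 1.]])
loss_sigmoid = tf.nn.sigmoid_cross_entropy_with_logits(
    labels=binary_targets, logits=logits)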
