cross-entropy

binary classification, xentropy mismatch, invalid argument (Received a label value of 1 which is outside the valid range of [0, 1))

Submitted by 邮差的信 on 2019-12-11 12:11:37
Question: I'm working on a deep neural network for text classification, but I have a problem with my xentropy. I'm following a course on multiclass classification and trying to adapt it to my binary classification problem. The course used softmax for the multiclass case as: xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits) but doing so, I got this error: InvalidArgumentError (see above for traceback): Received a label value of 1 which is outside the valid range of [0, 1).
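That error typically means the logits tensor has only one column, so the sparse softmax op accepts only label 0. A minimal TF 1.x sketch of one way around it, assuming a single output logit and 0/1 labels (the placeholder shapes and names here are illustrative, not taken from the original post):

    import tensorflow as tf

    logits = tf.placeholder(tf.float32, shape=[None, 1])   # one raw score per example
    y = tf.placeholder(tf.float32, shape=[None, 1])         # 0/1 labels, as floats

    # With a single output logit, the sigmoid form of cross-entropy applies; the
    # sparse softmax form would instead need logits of shape [None, 2] and integer
    # class labels in {0, 1}.
    xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(xentropy)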

Need help understanding the Caffe code for SigmoidCrossEntropyLossLayer for multi-label loss

Submitted by 偶尔善良 on 2019-12-11 07:54:46
Question: I need help understanding the Caffe function SigmoidCrossEntropyLossLayer, which is the cross-entropy error with a logistic activation. Basically, the cross-entropy error for a single example with N independent targets is: -sum over the N targets of ( t[i] * log(x[i]) + (1 - t[i]) * log(1 - x[i]) ), where t is the target (0 or 1) and x is the output, indexed by i. x, of course, goes through a logistic activation. An algebraic trick for quicker cross-entropy calculation reduces the computation …
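For reference, the usual trick of this kind is to rewrite the loss directly in terms of the logit so that no large exponential or log(0) is ever evaluated. A NumPy sketch under that assumption (variable names are illustrative, not Caffe's):

    import numpy as np

    def xent_naive(z, t):
        # literal formula: -( t*log(x) + (1 - t)*log(1 - x) ) with x = sigmoid(z)
        x = 1.0 / (1.0 + np.exp(-z))
        return -(t * np.log(x) + (1 - t) * np.log(1 - x))

    def xent_stable(z, t):
        # algebraically identical, but never overflows or takes log(0)
        return np.maximum(z, 0) - z * t + np.log1p(np.exp(-np.abs(z)))

    z = np.array([-2.0, 0.0, 3.0, 40.0])   # logits
    t = np.array([1.0, 0.0, 1.0, 0.0])     # targets
    print(xent_naive(z, t))    # last entry is inf: 1 - sigmoid(40) underflows to 0
    print(xent_stable(z, t))   # last entry is simply 40.0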

Keras: Weighted Binary Crossentropy Implementation

Submitted by 早过忘川 on 2019-12-10 04:24:45
Question: I'm new to Keras (and ML in general) and I'm trying to train a binary classifier. I'm using weighted binary cross-entropy as a loss function, but I'm unsure how I can test whether my implementation is correct. Is this an accurate implementation of weighted binary cross-entropy? How could I test it?

    def weighted_binary_crossentropy(self, y_true, y_pred):
        logloss = -(y_true * K.log(y_pred) * self.weights[0] +
                    (1 - y_true) * K.log(1 - y_pred) * self.weights[1])
        return K.mean(logloss, axis=-1)

On top of the true-vs-predicted loss, the training and validation losses Keras reports also include any regularization losses. A simple testing …
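One way to sanity-check such a loss is to compare it against a plain NumPy computation on a tiny batch, with the model's regularizers switched off so the reported loss is only the data term. A sketch assuming the tf.keras backend and standalone weights in place of self.weights (the weight and batch values below are made up for the test):

    import numpy as np
    from tensorflow.keras import backend as K

    w = [2.0, 0.5]   # hypothetical weights: [positive-term weight, negative-term weight]

    def weighted_binary_crossentropy(y_true, y_pred):
        logloss = -(y_true * K.log(y_pred) * w[0] +
                    (1 - y_true) * K.log(1 - y_pred) * w[1])
        return K.mean(logloss, axis=-1)

    y_true = np.array([[1.0, 0.0, 1.0]])
    y_pred = np.array([[0.9, 0.2, 0.4]])

    # the same formula written directly in NumPy
    expected = np.mean(-(y_true * np.log(y_pred) * w[0] +
                         (1 - y_true) * np.log(1 - y_pred) * w[1]), axis=-1)

    got = K.eval(weighted_binary_crossentropy(K.constant(y_true), K.constant(y_pred)))
    print(expected, got)   # the two values should agree to float precision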

Tensorflow ValueError: Only call `sparse_softmax_cross_entropy_with_logits` with named arguments

Submitted by 我与影子孤独终老i on 2019-12-08 03:54:44
Question: When calling the following method: losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels) for logits, labels in zip(logits_series, labels_series)] I receive the following ValueError: ValueError: Only call `sparse_softmax_cross_entropy_with_logits` with named arguments (labels=..., logits=..., ...) raised against this call: tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels). According to the documentation for nn_ops.py I need to ensure that the logits and labels are …
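The error is literal: this op refuses positional arguments, so passing the tensors by keyword resolves it. A small runnable sketch of the corrected call; the toy tensors below merely stand in for the question's logits_series and labels_series (shapes assumed):

    import tensorflow as tf

    # toy stand-ins: each logits entry is [batch, num_classes] floats,
    # each labels entry is integer class indices of shape [batch]
    logits_series = [tf.constant([[2.0, 0.5], [0.1, 1.0]]) for _ in range(3)]
    labels_series = [tf.constant([0, 1]) for _ in range(3)]

    # keyword arguments are mandatory for this op
    losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
              for logits, labels in zip(logits_series, labels_series)]

    with tf.Session() as sess:
        print(sess.run(losses))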

what's the difference between softmax_cross_entropy_with_logits and losses.log_loss?

Submitted by 自古美人都是妖i on 2019-12-07 13:11:41
Question: What's the primary difference between tf.nn.softmax_cross_entropy_with_logits and tf.losses.log_loss? Both methods accept one-hot labels and logits to calculate the cross-entropy loss for classification tasks. Answer 1: Those methods are not so different in theory, but they have a number of differences in implementation: 1) tf.nn.softmax_cross_entropy_with_logits is designed for single-class labels, while tf.losses.log_loss can be used for multi-class classification. tf.nn.softmax_cross_entropy_with_logits won't throw an error if you feed multi-class labels, but your gradients won't be calculated correctly.
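A practical way to see the difference is that the softmax op takes raw logits and normalizes them internally, while tf.losses.log_loss expects already-normalized probabilities and applies the elementwise binary log-loss formula. A TF 1.x sketch (the example values are made up):

    import tensorflow as tf

    logits = tf.constant([[2.0, 1.0, 0.1]])
    labels = tf.constant([[1.0, 0.0, 0.0]])   # one-hot

    # the softmax op takes raw scores and normalizes them internally
    ce = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

    # log_loss expects probabilities, so softmax has to be applied by hand; it then
    # averages -y*log(p) - (1-y)*log(1-p) over every element, which is a different
    # quantity from the softmax cross-entropy above
    ll = tf.losses.log_loss(labels=labels, predictions=tf.nn.softmax(logits))

    with tf.Session() as sess:
        print(sess.run([ce, ll]))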

TensorFlow: Are my logits in the right format for the cross-entropy function?

Submitted by 余生颓废 on 2019-12-06 01:27:15
Question: Alright, so I'm getting ready to run the tf.nn.softmax_cross_entropy_with_logits() function in TensorFlow. It's my understanding that the 'logits' should be a tensor of probabilities, each one corresponding to a certain pixel's probability that it is part of an image that will ultimately be a "dog" or a "truck" or whatever... a finite number of things. These logits will get plugged into this cross-entropy equation (the standard form H(p, q) = -sum over x of p(x) * log(q(x))). As I understand it, the logits are plugged into the right side of the equation …
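For what it's worth, the 'logits' in this op are not probabilities: they are the raw, unnormalized scores of the last layer, and the op applies softmax to them internally before taking the cross-entropy. A short TF 1.x sketch of that relationship (the numbers are arbitrary):

    import tensorflow as tf

    logits = tf.constant([[0.5, 1.5, -1.0]])   # raw scores, not limited to [0, 1]
    labels = tf.constant([[0.0, 1.0, 0.0]])
    probs = tf.nn.softmax(logits)              # what the op computes internally

    ce_from_logits = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
    ce_by_hand = -tf.reduce_sum(labels * tf.log(probs), axis=-1)

    with tf.Session() as sess:
        print(sess.run([ce_from_logits, ce_by_hand]))   # the two values match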

What is the difference between cross-entropy and log loss error?

Submitted by 南笙酒味 on 2019-12-04 19:31:17
Question: What is the difference between cross-entropy and log loss error? The formulae for both seem to be very similar. Answer 1: They are essentially the same; usually, we use the term log loss for binary classification problems and the more general cross-entropy (loss) for the general case of multi-class classification, but even this distinction is not consistent, and you'll often find the terms used interchangeably as synonyms. From the Wikipedia entry for cross-entropy: The logistic loss is sometimes …
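To make "essentially the same" concrete: the binary log loss is exactly the cross-entropy between the two-class distributions (y, 1 - y) and (p, 1 - p). A NumPy check with made-up numbers:

    import numpy as np

    def cross_entropy(p, q):
        # general definition: -sum over x of p(x) * log(q(x))
        return -np.sum(p * np.log(q))

    def log_loss(y, p):
        # binary form: -( y*log(p) + (1 - y)*log(1 - p) )
        return -(y * np.log(p) + (1 - y) * np.log(1 - p))

    y, p = 1.0, 0.8
    print(cross_entropy(np.array([y, 1 - y]), np.array([p, 1 - p])))   # 0.2231...
    print(log_loss(y, p))                                              # same value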

why does softmax_cross_entropy_with_logits_v2 return a cost even when logits equal labels?

Submitted by 烈酒焚心 on 2019-12-04 11:10:22
I have tested "softmax_cross_entropy_with_logits_v2" with random numbers:

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None, 5])
    y = tf.placeholder(tf.float32, shape=[None, 5])
    softmax = tf.nn.softmax_cross_entropy_with_logits_v2(logits=x, labels=y)

    with tf.Session() as sess:
        feedx = [[0.1, 0.2, 0.3, 0.4, 0.5], [0., 0., 0., 0., 1.]]
        feedy = [[1., 0., 0., 0., 0.], [0., 0., 0., 0., 1.]]
        softmax = sess.run(softmax, feed_dict={x: feedx, y: feedy})
        print("softmax", softmax)

Console output: softmax [1.8194163 0.9048325]

What I understood about this function was that it only returns a cost when logits and labels …
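The second printed value shows why the cost is not zero even though the logits and labels are identical: the op first pushes the logits through softmax, and softmax([0, 0, 0, 0, 1]) is roughly [0.149, 0.149, 0.149, 0.149, 0.405], nowhere near one-hot. A NumPy sketch that reproduces the two printed numbers (the helper names are illustrative):

    import numpy as np

    def softmax(z):
        e = np.exp(z - np.max(z))   # shift by the max for numerical stability
        return e / e.sum()

    def xent(labels, logits):
        labels, logits = np.asarray(labels), np.asarray(logits)
        return -np.sum(labels * np.log(softmax(logits)))

    print(xent([1., 0., 0., 0., 0.], [0.1, 0.2, 0.3, 0.4, 0.5]))   # ~1.8194
    print(xent([0., 0., 0., 0., 1.], [0., 0., 0., 0., 1.]))        # ~0.9048
    print(softmax(np.array([0., 0., 0., 0., 1.])))                 # ~[0.149 0.149 0.149 0.149 0.405]

A zero loss would require the softmax of the logits to be exactly one-hot, which only happens in the limit where one logit is infinitely larger than the others.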