cross-entropy

Tensorflow tf.nn.in_top_k: targets out of range error?

∥☆過路亽.° submitted on 2020-01-03 04:25:37
Question: I have figured out what was causing this error: it was due to a mismatch between labels and outputs. I'm doing 8-class sentiment classification and my labels are (1,2,3,4,7,8,9,10), so they could not be matched against the predictions (1,2,3,4,5,6,7,8), which is why it was giving the out-of-range error. My question is, why didn't it give me an error on the line c_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, Y)? How does it match labels with predictions in this case as opposed to
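A minimal sketch of one way to avoid this kind of mismatch, assuming the non-contiguous label values (1,2,3,4,7,8,9,10) are available as an integer array (variable names below are illustrative): remap them to contiguous class indices 0..7 before handing them to tf.nn.in_top_k or tf.nn.sparse_softmax_cross_entropy_with_logits, both of which expect targets in the range [0, num_classes).

    import numpy as np

    # Hypothetical labels using non-contiguous values, as in the question
    raw_labels = np.array([1, 2, 3, 4, 7, 8, 9, 10])

    # Build a mapping from each distinct label value to a contiguous index
    label_values = np.unique(raw_labels)
    value_to_index = {v: i for i, v in enumerate(label_values)}

    # Remap every label to an index in [0, num_classes)
    mapped_labels = np.array([value_to_index[v] for v in raw_labels])
    print(mapped_labels)  # [0 1 2 3 4 5 6 7]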

How does binary cross entropy loss work on autoencoders?

≡放荡痞女 submitted on 2019-12-31 10:45:15
Question: I wrote a vanilla autoencoder using only Dense layers. Below is my code: iLayer = Input((784,)) layer1 = Dense(128, activation='relu')(iLayer) layer2 = Dense(64, activation='relu')(layer1) layer3 = Dense(28, activation='relu')(layer2) layer4 = Dense(64, activation='relu')(layer3) layer5 = Dense(128, activation='relu')(layer4) layer6 = Dense(784, activation='softmax')(layer5) model = Model(iLayer, layer6) model.compile(loss='binary_crossentropy', optimizer='adam') (trainX, trainY),
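A common observation for this setup is that binary cross-entropy on an autoencoder is normally paired with a sigmoid output, so each of the 784 pixels is treated as an independent probability, rather than a softmax across all output units. A minimal sketch under the assumptions that the inputs are scaled to [0, 1] (e.g., MNIST pixels) and that the tensorflow.keras import path is available:

    from tensorflow.keras.layers import Input, Dense
    from tensorflow.keras.models import Model

    iLayer = Input((784,))
    h = Dense(128, activation='relu')(iLayer)
    h = Dense(64, activation='relu')(h)
    h = Dense(28, activation='relu')(h)   # bottleneck, as in the question
    h = Dense(64, activation='relu')(h)
    h = Dense(128, activation='relu')(h)
    # sigmoid gives each of the 784 pixels an independent value in [0, 1],
    # which is what binary_crossentropy expects, unlike a softmax over all pixels
    out = Dense(784, activation='sigmoid')(h)

    model = Model(iLayer, out)
    model.compile(loss='binary_crossentropy', optimizer='adam')
    # model.fit(trainX, trainX, ...)  # an autoencoder reconstructs its input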

How to do point-wise categorical crossentropy loss in Keras?

风流意气都作罢 submitted on 2019-12-30 18:27:04
Question: I have a network that produces a 4D output tensor where the value at each position in the spatial dimensions (~pixel) is to be interpreted as the class probabilities for that position. In other words, the output is (num_batches, height, width, num_classes). I have labels of the same size where the real class is coded as one-hot. I would like to calculate the categorical cross-entropy loss using this. Problem #1: The K.softmax function expects a 2D tensor (num_batches, num_classes). Problem #2: I
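One common workaround, sketched below, is to flatten the spatial dimensions so every pixel becomes its own sample, apply K.softmax on the resulting 2D tensor, and average the per-pixel categorical cross-entropy. NUM_CLASSES and the loss name are illustrative, and the class count is assumed to be known at model-construction time.

    from keras import backend as K

    NUM_CLASSES = 10  # hypothetical; set to the real number of classes

    def pointwise_categorical_crossentropy(y_true, y_pred):
        # Both tensors arrive as (num_batches, height, width, num_classes);
        # flatten them so each pixel is one row of a 2D tensor.
        y_true_flat = K.reshape(y_true, (-1, NUM_CLASSES))
        y_pred_flat = K.reshape(y_pred, (-1, NUM_CLASSES))
        # After the reshape, K.softmax receives the 2D tensor it expects.
        y_pred_flat = K.softmax(y_pred_flat)
        y_pred_flat = K.clip(y_pred_flat, K.epsilon(), 1.0 - K.epsilon())
        # Average the per-pixel categorical cross-entropy over all pixels.
        return K.mean(-K.sum(y_true_flat * K.log(y_pred_flat), axis=-1))

    # model.compile(optimizer='adam', loss=pointwise_categorical_crossentropy)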

Comparing MSE loss and cross-entropy loss in terms of convergence

半世苍凉 submitted on 2019-12-24 10:38:40
Question: For a very simple classification problem where I have a target vector [0,0,0,....0] and a prediction vector [0,0.1,0.2,....1], would cross-entropy loss converge better/faster, or would MSE loss? When I plot them, it seems to me that MSE loss has a lower error margin. Why would that be? Or, for example, when I have the target as [1,1,1,1....1] I get the following: Answer 1: You sound a little confused... Comparing the values of MSE & cross-entropy loss and saying that one is lower than the other is like
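To make the point about raw values concrete, here is a small illustrative computation, assuming the prediction is compared element-wise against the target with per-element binary cross-entropy (predictions clipped to keep the log finite); the exact numbers do not matter, only that the two losses live on different scales:

    import numpy as np

    target = np.zeros(11)                    # [0, 0, ..., 0]
    prediction = np.linspace(0.0, 1.0, 11)   # [0, 0.1, 0.2, ..., 1.0]

    mse = np.mean((prediction - target) ** 2)

    p = np.clip(prediction, 1e-7, 1 - 1e-7)  # avoid log(0)
    bce = np.mean(-(target * np.log(p) + (1 - target) * np.log(1 - p)))

    # The two numbers are on different scales, so a direct comparison of
    # their magnitudes says nothing about which loss converges better.
    print("MSE:", mse, "BCE:", bce)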

Pytorch LSTM: Target Dimension in Calculating Cross Entropy Loss

时光怂恿深爱的人放手 submitted on 2019-12-23 13:01:12
Question: I've been trying to get an LSTM (LSTM followed by a linear layer in a custom model) working in Pytorch, but was getting the following error when calculating the loss: Assertion cur_target >= 0 && cur_target < n_classes' failed. I defined the loss function with: criterion = nn.CrossEntropyLoss() and then called it with loss += criterion(output, target). I was giving the target with dimensions [sequence_length, number_of_classes], and the output has dimensions [sequence_length, 1, number_of_classes].
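For context, a minimal sketch of the shapes nn.CrossEntropyLoss expects: logits of shape (N, C) and a target of shape (N,) holding class indices in [0, C), not one-hot rows. Assuming the target described in the question is one-hot, it can be converted with argmax and the singleton dimension of the output squeezed away (the sizes below are made up):

    import torch
    import torch.nn as nn

    sequence_length, number_of_classes = 5, 10   # hypothetical sizes
    criterion = nn.CrossEntropyLoss()

    output = torch.randn(sequence_length, 1, number_of_classes)  # raw logits from the model
    target_one_hot = torch.eye(number_of_classes)[
        torch.randint(0, number_of_classes, (sequence_length,))
    ]                                              # (sequence_length, number_of_classes)

    logits = output.squeeze(1)                     # (sequence_length, number_of_classes)
    target_indices = target_one_hot.argmax(dim=1)  # (sequence_length,) class indices

    loss = criterion(logits, target_indices)
    print(loss.item())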

Why does softmax_cross_entropy_with_logits_v2 return a cost even when logits and labels are identical?

 ̄綄美尐妖づ submitted on 2019-12-21 20:07:12
Question: I have tested "softmax_cross_entropy_with_logits_v2" with random numbers: import tensorflow as tf x = tf.placeholder(tf.float32, shape=[None,5]) y = tf.placeholder(tf.float32, shape=[None,5]) softmax = tf.nn.softmax_cross_entropy_with_logits_v2(logits=x, labels=y) with tf.Session() as sess: feedx=[[0.1,0.2,0.3,0.4,0.5],[0.,0.,0.,0.,1.]] feedy=[[1.,0.,0.,0.,0.],[0.,0.,0.,0.,1.]] softmax = sess.run(softmax, feed_dict={x:feedx, y:feedy}) print("softmax", softmax) The console prints: "softmax [1.8194163 0
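The likely source of the confusion is that the logits argument is passed through a softmax internally, and softmax([0, 0, 0, 0, 1]) is not [0, 0, 0, 0, 1], so the cross-entropy against the identical label row is still non-zero. A small numpy sketch of that computation (values mirror the second row of the feed above):

    import numpy as np

    def softmax(x):
        e = np.exp(x - np.max(x))   # shift by the max for numerical stability
        return e / e.sum()

    logits = np.array([0., 0., 0., 0., 1.])
    labels = np.array([0., 0., 0., 0., 1.])

    probs = softmax(logits)                  # roughly [0.149, 0.149, 0.149, 0.149, 0.405]
    cross_entropy = -np.sum(labels * np.log(probs))
    print(probs, cross_entropy)              # non-zero (~0.9) even though logits == labels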

How to implement Weighted Binary CrossEntropy on theano?

我与影子孤独终老i submitted on 2019-12-21 12:06:14
Question: How do I implement a weighted binary cross-entropy on theano? My convolutional neural network only predicts values between 0 and 1 (sigmoid). I want to penalize my predictions in this way: basically, I want to penalize MORE when the model predicts 0 but the truth was 1. Question: How can I create this weighted binary cross-entropy function using theano and lasagne? I tried this below: prediction = lasagne.layers.get_output(model) import theano.tensor as T def weighted_crossentropy(predictions, targets): # Copy
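A minimal sketch of one way such a loss could look in Theano, assuming illustrative weights w_pos > w_neg so that a false negative (truth 1, prediction near 0) costs more than a false positive; the function name and weight values are hypothetical, not the asker's final code:

    import theano.tensor as T

    def weighted_binary_crossentropy(predictions, targets, w_pos=2.0, w_neg=1.0):
        # Keep log() finite
        predictions = T.clip(predictions, 1e-7, 1.0 - 1e-7)
        # Weight the "truth was 1" term more heavily than the "truth was 0" term
        loss = -(w_pos * targets * T.log(predictions) +
                 w_neg * (1.0 - targets) * T.log(1.0 - predictions))
        return loss.mean()

    # Sketched usage with lasagne:
    # prediction = lasagne.layers.get_output(model)
    # loss = weighted_binary_crossentropy(prediction, target_var)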

Where is the original source code of the sparse_softmax_cross_entropy_with_logits function in TensorFlow?

孤人 submitted on 2019-12-13 02:27:52
Question: I want to know what the TensorFlow function sparse_softmax_cross_entropy_with_logits is doing mathematically, exactly, but I can't find its original source code. Can you help me? Answer 1: sparse_softmax_cross_entropy_with_logits is equivalent to a numerically stable version of the following: -1. * tf.gather(tf.log(tf.nn.softmax(logits)), target) or, in more "readable" numpy code: -1. * np.log(softmax(logits))[target] where softmax(x) = np.exp(x)/np.sum(np.exp(x)). That is, it computes the softmax
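For reference, a small runnable numpy version of the expression quoted in the answer, with a max-shift added for numerical stability (the logits and target index are made-up values):

    import numpy as np

    def softmax(x):
        e = np.exp(x - np.max(x))   # subtracting the max does not change the result
        return e / np.sum(e)

    logits = np.array([2.0, 1.0, 0.1])   # hypothetical logits for 3 classes
    target = 0                           # hypothetical true class index

    loss = -1. * np.log(softmax(logits))[target]
    print(loss)   # negative log of the softmax probability of the target class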

Why not use mean squared error for classification problems?

半城伤御伤魂 submitted on 2019-12-11 17:45:49
Question: I am trying to implement a simple binary classification problem using an RNN LSTM and still have not been able to figure out the correct loss function for the network. The issue is that when I use binary cross-entropy as the loss function, the loss value for training and testing is relatively high compared to using a mean squared error function. Upon research, I came across justifications that binary cross-entropy should be used for classification problems and MSE for regression problems. However,
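For what a binary-classification setup typically looks like in Keras, here is a minimal sketch with a single sigmoid output unit and binary cross-entropy; the shapes and import path are assumptions, and the larger point is that raw BCE and MSE values are on different scales, so accuracy is the more meaningful quantity to compare.

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    timesteps, features = 20, 8   # hypothetical input shape

    model = Sequential([
        LSTM(32, input_shape=(timesteps, features)),
        # One sigmoid unit: the output is the probability of the positive class
        Dense(1, activation='sigmoid'),
    ])

    # binary_crossentropy matches the sigmoid output; track accuracy to compare
    # against an MSE-trained model, since the raw loss values are not comparable.
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])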