loss-function

Why normalizing labels in MxNet makes accuracy close to 100%?

泄露秘密 submitted on 2019-12-08 11:37:50
Question: I am training a model using multi-label logistic regression on MxNet (gluon API), as described here: multi-label logit in gluon. My custom dataset has 13 features and one label of shape [,6]. My features are normalized from their original values to [0, 1]. I use a simple dense neural net with 2 hidden layers. I noticed that when I don't normalize the labels (which take the discrete values 1, 2, 3, 4, 5, 6 and are purely my choice of mapping from categorical values to these numbers), my training process slowly converges to some…
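A common cause of this symptom, offered here as an assumption since the question is cut off: if those six discrete values are really categories, they should be treated as class indices with a softmax cross-entropy loss, not scaled into [0, 1] and regressed against. A minimal gluon sketch for one such label column, with hypothetical layer sizes:

    import mxnet as mx
    from mxnet import gluon, nd

    # Labels originally take the discrete values 1..6; shift them to 0..5
    # so they can be used as class indices.
    raw_labels = nd.array([1, 4, 6, 2])          # hypothetical mini-batch of labels
    class_labels = raw_labels - 1                # now in 0..5

    # Treat the problem as 6-way classification instead of regression:
    net = gluon.nn.Sequential()
    net.add(gluon.nn.Dense(64, activation='relu'),
            gluon.nn.Dense(32, activation='relu'),
            gluon.nn.Dense(6))                   # one logit per class, no activation

    net.initialize()
    loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()

    features = nd.random.uniform(shape=(4, 13))  # 13 features, as in the question
    with mx.autograd.record():
        logits = net(features)
        loss = loss_fn(logits, class_labels)
    loss.backward()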

Keras custom loss function: variable with shape of batch_size (y_true)

拥有回忆 submitted on 2019-12-08 03:00:15
Question: When implementing a custom loss function in Keras, I require a tf.Variable with the shape of the batch size of my input data (y_true, y_pred).

    def custom_loss(y_true, y_pred):
        counter = tf.Variable(tf.zeros(K.shape(y_true)[0], dtype=tf.float32))
        ...

However, this produces the error:

    You must feed a value for placeholder tensor 'dense_17_target' with dtype float and shape [?,?]

If I fix the batch_size to a value:

    def custom_loss(y_true, y_pred):
        counter = tf.Variable(tf.zeros(batch_size,…
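The error arises because tf.Variable needs a static shape at graph-construction time, while the batch dimension of y_true is only known at run time. One workaround (an assumption about what the counter is for, since the question is truncated): build an ordinary tensor from the dynamic shape instead of a Variable. A minimal sketch:

    import tensorflow as tf
    from tensorflow.keras import backend as K

    def custom_loss(y_true, y_pred):
        # Dynamic batch size, resolved at run time rather than graph-build time.
        batch_size = tf.shape(y_true)[0]

        # A plain tensor (not a tf.Variable) with one entry per sample;
        # tensors may have runtime-dependent shapes, Variables may not.
        counter = tf.zeros(batch_size, dtype=tf.float32)

        # ... update `counter` functionally (e.g. counter = counter + x) ...

        # Placeholder loss so the sketch is self-contained; the real
        # computation from the truncated question would go here.
        return K.mean(K.square(y_pred - y_true), axis=-1) + 0.0 * K.mean(counter)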

Optimizing for accuracy instead of loss in Keras model

青春壹個敷衍的年華 submitted on 2019-12-06 14:19:08
If I have correctly understood the significance of the loss function, it directs the model to be trained by minimizing the loss value. So, for example, if I want my model to be trained to have the least mean absolute error, I should use MAE as the loss function. Why is it, then, that you sometimes see someone who wants to achieve the best possible accuracy, yet builds the model to minimize a completely different function? For example:

    model.compile(loss='mean_squared_error', optimizer='sgd', metrics='acc')

How come the model above is trained to give us the…
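For context, the usual resolution of this question: accuracy is a step function of the predictions, with zero gradient almost everywhere, so gradient descent cannot minimize it directly; instead a differentiable surrogate such as cross-entropy or MSE is optimized while accuracy is merely monitored. A minimal Keras sketch of that split, with hypothetical layer sizes:

    from tensorflow import keras

    model = keras.Sequential([
        keras.layers.Dense(32, activation='relu', input_shape=(10,)),
        keras.layers.Dense(1, activation='sigmoid'),
    ])

    # The loss is what gradient descent minimizes; it must be differentiable.
    # Accuracy is only a monitoring metric: it is reported each epoch but
    # contributes no gradients to training.
    model.compile(loss='binary_crossentropy',
                  optimizer='sgd',
                  metrics=['accuracy'])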

Which loss-function is better than MSE in temperature prediction?

混江龙づ霸主 submitted on 2019-12-06 11:03:30
I have feature vectors of size 1x4098, each corresponding to a float number (a temperature). For training I have 10,000 samples, so my training set is 10000x4098 and the labels are 10000x1. I want to use a regression model to predict temperature from the training data. I am using 3 hidden layers (512, 128, 32) with MSE loss. However, I only got 80% accuracy using TensorFlow. Could you suggest other loss functions that might give better performance?

Let me give a rather theoretical explanation of the choice of loss function. As you may guess, it all depends on the data. MSE…
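Where the truncated answer is presumably headed: MSE squares the errors, so it is sensitive to outliers; for noisy regression targets, robust alternatives such as MAE, Huber, or log-cosh are commonly suggested. (Note also that "accuracy" is a classification metric and is not meaningful for regression as-is.) A sketch of swapping these in; the layer sizes below just mirror the question:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation='relu', input_shape=(4098,)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(1),                 # linear output for regression
    ])

    # Huber behaves like MSE for small errors and like MAE for large ones,
    # making it less outlier-sensitive; delta sets the crossover point.
    model.compile(optimizer='adam', loss=tf.keras.losses.Huber(delta=1.0))

    # Alternatives: loss='mae' or loss='logcosh'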

Keras - Making two predictions from one neural network

ⅰ亾dé卋堺 submitted on 2019-12-06 07:50:32
I'm trying to combine two outputs produced by the same network, which makes predictions on a 4-class task and a 10-class task. I then combine these outputs into a length-14 array that I use as my end target. While this appears to work, the predictions are always for a single class, so it produces a probability distribution concerned only with selecting 1 of the 14 options instead of 2. What I actually need is 2 predictions, one per task, all produced by the same model.

    input = Input(shape=(100, 100), name='input')
    lstm = LSTM…
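The usual fix for this symptom, offered as an assumption since the question is cut off: a single 14-way softmax forces all 14 probabilities to compete, whereas two separate softmax heads give one distribution per task. A functional-API sketch reusing the input shape from the question, with otherwise hypothetical sizes and names:

    from tensorflow.keras.layers import Input, LSTM, Dense
    from tensorflow.keras.models import Model

    inputs = Input(shape=(100, 100), name='input')
    lstm = LSTM(64)(inputs)                      # hypothetical width

    # Two heads, each with its own softmax, so each task gets its own
    # probability distribution instead of sharing one 14-way softmax.
    out4 = Dense(4, activation='softmax', name='task4')(lstm)
    out10 = Dense(10, activation='softmax', name='task10')(lstm)

    model = Model(inputs=inputs, outputs=[out4, out10])
    model.compile(optimizer='adam',
                  loss={'task4': 'categorical_crossentropy',
                        'task10': 'categorical_crossentropy'})

    # Training then takes two label arrays instead of one length-14 target:
    # model.fit(x, {'task4': y4_onehot, 'task10': y10_onehot})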

How can I sort the values in a custom Keras / Tensorflow Loss Function?

余生颓废 submitted on 2019-12-05 11:52:08
Introduction: I would like to implement a custom loss function in Keras. I want to do this because I am not happy with the current results on my dataset, and I think the reason is that the built-in loss functions focus on the whole dataset, whereas I only want to focus on the top values. That is why I came up with the following idea for a custom loss function.

Custom loss function idea: the loss should take the 4 predictions with the highest values, subtract the corresponding true values from them, and then take the absolute value of this…
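A sketch of that idea (my own reading of the truncated description, not the asker's final code): tf.nn.top_k selects the 4 largest predictions per sample, tf.gather with batch_dims pulls out the matching true values, and the loss is the mean absolute difference between the two:

    import tensorflow as tf
    from tensorflow.keras import backend as K

    def top4_abs_loss(y_true, y_pred):
        # Values and indices of the 4 highest predictions in each sample.
        top_values, top_indices = tf.nn.top_k(y_pred, k=4)

        # The true values sitting at those same positions
        # (batch_dims=1 gathers per sample; requires TF >= 1.14).
        top_true = tf.gather(y_true, top_indices, batch_dims=1)

        # Mean absolute difference over the selected positions only.
        # Gradients flow through the selected values, not the selection itself.
        return K.mean(K.abs(top_values - top_true), axis=-1)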

Keras: Weighted Binary Crossentropy Implementation

旧巷老猫 submitted on 2019-12-05 07:49:01
I'm new to Keras (and ML in general) and I'm trying to train a binary classifier. I'm using weighted binary cross-entropy as a loss function, but I am unsure how I can test whether my implementation is correct. Is this an accurate implementation of weighted binary cross-entropy? How could I test it?

    def weighted_binary_crossentropy(self, y_true, y_pred):
        logloss = -(y_true * K.log(y_pred) * self.weights[0] +
                    (1 - y_true) * K.log(1 - y_pred) * self.weights[1])
        return K.mean(logloss, axis=-1)

On top of the true-vs-predicted loss, the Keras training and validation loss also includes regularization losses. A simple testing…
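Where that answer seems to be going, one way to test the implementation (a sketch, dropping the self argument for a standalone check and using hypothetical weights): evaluate the function on small constant tensors and compare against the same formula computed by hand in NumPy:

    import numpy as np
    from tensorflow.keras import backend as K

    weights = [2.0, 1.0]                 # hypothetical class weights

    def weighted_binary_crossentropy(y_true, y_pred):
        logloss = -(y_true * K.log(y_pred) * weights[0] +
                    (1 - y_true) * K.log(1 - y_pred) * weights[1])
        return K.mean(logloss, axis=-1)

    y_true = np.array([[1.0, 0.0, 1.0]])
    y_pred = np.array([[0.9, 0.2, 0.6]])

    keras_value = K.eval(weighted_binary_crossentropy(
        K.constant(y_true), K.constant(y_pred)))

    # The same formula, computed directly with NumPy.
    numpy_value = np.mean(-(y_true * np.log(y_pred) * weights[0] +
                            (1 - y_true) * np.log(1 - y_pred) * weights[1]),
                          axis=-1)

    assert np.allclose(keras_value, numpy_value)

In practice you would also clip y_pred to [K.epsilon(), 1 - K.epsilon()] before taking logs, to avoid log(0).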

Custom loss function in Keras, how to deal with placeholders

痴心易碎 submitted on 2019-12-05 02:59:29
Question: I am trying to write a custom loss function in TF/Keras. The loss function works if it is run in a session and passed constants; however, it stops working when compiled into a Keras model. The cost function (thanks to Lior for converting it to TF):

    def ginicTF(actual, pred):
        n = int(actual.get_shape()[-1])
        inds = K.reverse(tf.nn.top_k(pred, n)[1], axes=[0])
        a_s = K.gather(actual, inds)
        a_c = K.cumsum(a_s)
        giniSum = K.sum(a_c) / K.sum(a_s) - (n + 1) / 2.0
        return giniSum / n

    def gini_normalizedTF(a, p):
        return…
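The usual culprit with code like this (an assumption, since the question is truncated): int(actual.get_shape()[-1]) needs a statically known dimension, but when Keras compiles the loss, y_true arrives as a placeholder whose shape is only resolved at run time. tf.nn.top_k accepts a scalar tensor for k, so the size can be read dynamically instead. A sketch keeping the original logic but with dynamic shapes:

    import tensorflow as tf
    from tensorflow.keras import backend as K

    def ginicTF(actual, pred):
        # Read the last dimension at run time instead of graph-build time,
        # so the function also works on Keras placeholders of shape [?, ?].
        n = tf.shape(actual)[-1]                  # scalar int32 tensor
        inds = K.reverse(tf.nn.top_k(pred, k=n)[1], axes=[0])
        a_s = K.gather(actual, inds)
        a_c = K.cumsum(a_s)
        n_float = tf.cast(n, tf.float32)
        giniSum = K.sum(a_c) / K.sum(a_s) - (n_float + 1.0) / 2.0
        return giniSum / n_float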