loss-function

Custom loss function in Keras, how to deal with placeholders

不羁的心 submitted on 2019-12-03 17:28:17
I am trying to write a custom loss function in TF/Keras. The loss function works when it is run in a session and passed constants, but it stops working when compiled into a Keras model. The cost function (thanks to Lior for converting it to TF):

```python
def ginicTF(actual, pred):
    n = int(actual.get_shape()[-1])
    inds = K.reverse(tf.nn.top_k(pred, n)[1], axes=[0])
    a_s = K.gather(actual, inds)
    a_c = K.cumsum(a_s)
    giniSum = K.sum(a_c) / K.sum(a_s) - (n + 1) / 2.0
    return giniSum / n

def gini_normalizedTF(a, p):
    return -ginicTF(a, p) / ginicTF(a, a)

# Test the cost function
sess = tf.InteractiveSession()
p = [0.9, 0.3, 0
```
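The excerpt cuts off above, but the reported symptom (works on constants in a session, fails once compiled into a model) is often caused by `int(actual.get_shape()[-1])` failing because the y_true placeholder has no fully defined static shape inside `model.compile`. A minimal sketch of one workaround, assuming the output length `n` is known ahead of time (the value 100 below is hypothetical):

```python
import tensorflow as tf
import keras.backend as K

def make_gini_loss(n):
    """Build the Gini loss with n fixed, instead of reading it from y_true's shape."""
    def ginicTF(actual, pred):
        inds = K.reverse(tf.nn.top_k(pred, n)[1], axes=[0])
        a_s = K.gather(actual, inds)
        a_c = K.cumsum(a_s)
        return (K.sum(a_c) / K.sum(a_s) - (n + 1) / 2.0) / n

    def loss(y_true, y_pred):
        return -ginicTF(y_true, y_pred) / ginicTF(y_true, y_true)

    return loss

# model.compile(optimizer='adam', loss=make_gini_loss(n=100))
```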

Weighted mse custom loss function in keras

♀尐吖头ヾ submitted on 2019-12-02 22:52:41
I'm working with time series data, outputting 60 predicted days ahead. I'm currently using mean squared error as my loss function and the results are bad. I want to implement a weighted mean squared error such that the early outputs are weighted much more heavily than later ones. Weighted mean squared error formula:

loss = (1/n) * Σ_i w_i * (y_true_i − y_pred_i)²

So I need some way to iterate over a tensor's elements, with an index (since I need to iterate over both the predicted and the true values at the same time), then write the results to a tensor with only one element. They're both (?, 60) but really (1, 60) lists. And nothing I'm trying is
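No explicit loop is needed: the backend's element-wise operations broadcast across the (?, 60) outputs and reduce to one value per sample. A minimal sketch, assuming a linearly decaying weight vector (the 1.0 → 0.1 schedule is an arbitrary choice for illustration):

```python
import numpy as np
import keras.backend as K

# earlier of the 60 predicted days get larger weights
day_weights = K.constant(np.linspace(1.0, 0.1, 60), dtype='float32')

def weighted_mse(y_true, y_pred):
    # element-wise squared error, scaled per time step, averaged over the 60 days
    return K.mean(day_weights * K.square(y_true - y_pred), axis=-1)

# model.compile(optimizer='adam', loss=weighted_mse)
```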

Custom weighted loss function in Keras for weighing each element

谁说我不能喝 submitted on 2019-12-02 16:25:16
I'm trying to create a simple weighted loss function. Say I have input dimensions 100 × 5, and output dimensions also 100 × 5. I also have a weight matrix of the same dimension. Something like the following:

```python
import numpy as np
train_X = np.random.randn(100, 5)
train_Y = np.random.randn(100, 5) * 0.01 + train_X
weights = np.random.randn(*train_X.shape)
```

Defining the custom loss function:

```python
def custom_loss_1(y_true, y_pred):
    return K.mean(K.abs(y_true - y_pred) * weights)
```

Defining the model:

```python
from keras.layers import Dense, Input
from keras import Model
import keras.backend as K

input_layer = Input(shape=
```
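Because `weights` in `custom_loss_1` is a plain NumPy array indexed over the whole training set, it will not stay aligned with the shuffled batches Keras feeds to the loss. One common pattern (a sketch, not the asker's final code) is to feed the targets and the per-element weights as extra inputs and attach the loss with `add_loss`:

```python
import numpy as np
import keras.backend as K
from keras.layers import Input, Dense
from keras.models import Model

train_X = np.random.randn(100, 5)
train_Y = np.random.randn(100, 5) * 0.01 + train_X
weights = np.random.randn(*train_X.shape)

x_in = Input(shape=(5,))
y_in = Input(shape=(5,))   # targets fed as an input
w_in = Input(shape=(5,))   # per-element weights fed as an input
y_hat = Dense(5)(x_in)

model = Model([x_in, y_in, w_in], y_hat)
# weighted mean absolute error, with weights aligned sample-by-sample
model.add_loss(K.mean(K.abs(y_in - y_hat) * w_in))
model.compile(optimizer='adam', loss=None)
model.fit([train_X, train_Y, weights], None, epochs=2, batch_size=10)
```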

Custom loss function implementation issue in keras

笑着哭i submitted on 2019-12-02 13:15:57
I am implementing a custom loss function in Keras. The output of the model is a 10-dimensional softmax layer. To calculate the loss, I first need to find the index of the neuron firing 1 and then subtract it from the true index. I'm doing the following:

```python
from keras import backend as K

def diff_loss(y_true, y_pred):
    # find the indices of the neuron firing 1
    true_ind = K.tf.argmax(y_true, axis=0)
    pred_ind = K.tf.argmax(y_pred, axis=0)
    # cast them to float32
    x = K.tf.cast(true_ind, K.tf.float32)
    y = K.tf.cast(pred_ind, K.tf.float32)
    return K.abs(x - y)
```

but it gives the error: raise ValueError("None values not supported.") ValueError
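The "None values not supported" error typically comes from `argmax` having no gradient, so the optimizer receives None when it tries to back-propagate through this loss. A differentiable alternative (an assumption on my part, not the asker's requirement) is to compare the expected class index under each distribution instead of the hard argmax:

```python
import keras.backend as K

def soft_index_diff(y_true, y_pred):
    idx = K.arange(0, 10, dtype='float32')      # class indices 0..9
    true_idx = K.sum(y_true * idx, axis=-1)     # exact index for a one-hot y_true
    pred_idx = K.sum(y_pred * idx, axis=-1)     # expected index under the softmax
    return K.mean(K.abs(true_idx - pred_idx))
```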

Custom loss function without using keras backend library

喜欢而已 submitted on 2019-12-02 07:09:19
Question: I am applying an ML model to an experimental setup to optimise a driving signal. The driving signal itself is the thing being optimised, but its quality is evaluated indirectly (it is applied to an experimental setup to produce a different signal). I am able to run the experiment and collect data from it via functions in Python. I would like to set up an ML model with a custom loss function that invokes the experiment driver functions with the optimised signal to get the error used for back-prop

Custom loss function without using keras backend library

。_饼干妹妹 submitted on 2019-12-02 00:21:50
I am applying an ML model to an experimental setup to optimise a driving signal. The driving signal itself is the thing being optimised, but its quality is evaluated indirectly (it is applied to an experimental setup to produce a different signal). I am able to run the experiment and collect data from it via functions in Python. I would like to set up an ML model with a custom loss function that invokes the experiment driver functions with the optimised signal to get the error used for back-prop. I have looked into using Keras, however the restriction of having to use the Keras backend functions
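If the goal is only to call the Python driver functions from inside the graph, `tf.py_func` (TF 1.x) can wrap them, but TensorFlow still cannot differentiate through the experiment, so this alone does not provide gradients for back-prop. A minimal sketch; `run_experiment` is a hypothetical stand-in for the real driver functions:

```python
import numpy as np
import tensorflow as tf

def run_experiment(signal):
    # placeholder for the real experiment: takes the driving signal,
    # returns a scalar error measured from the produced signal
    return np.float32(np.sum(signal ** 2))

def experiment_loss(y_true, y_pred):
    err = tf.py_func(run_experiment, [y_pred], tf.float32)
    err.set_shape([])   # py_func loses static shape information
    return err
```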

Loss in Keras Model evaluation

半城伤御伤魂 submitted on 2019-12-01 09:06:19
I am doing binary classification with Keras: loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam, and the final layer is keras.layers.Dense(1, activation=tf.nn.sigmoid). As far as I know, the loss value is used to evaluate the model during the training phase. However, when I use Keras model evaluation on my testing dataset (e.g. m_recall.evaluate(testData, testLabel)), there are also loss values, accompanied by accuracy values, like the output below:

```
test size: (1889, 18525)
1889/1889 [==============================] - 1s 345us/step
m_acc: [0.5690245978371045, 0.9523557437797776]
1889/1889 [============
```
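`evaluate` returns the loss computed on the data you pass it, followed by every metric the model was compiled with, in the same order as `model.metrics_names`. A quick way to label the numbers, using the asker's model and data names:

```python
results = m_recall.evaluate(testData, testLabel, verbose=0)
for name, value in zip(m_recall.metrics_names, results):
    print(name, value)   # e.g. loss 0.569..., acc 0.952...
```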

Multiple losses for imbalanced dataset with Keras

筅森魡賤 submitted on 2019-12-01 06:14:45
Question: My model: I built a siamese network that takes two inputs and has three outputs, so my loss function is:

total_loss = alpha * loss1 + alpha * loss2 + (1 - alpha) * loss3

loss1 and loss2 are categorical cross-entropy loss functions, to classify the class identity out of a total of 8 classes. loss3 is a similarity loss function (Euclidean distance loss), to verify whether the two inputs come from the same class or from different classes. My questions are as follows: if I have different losses, and I want to weight them
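With a functional model that has three outputs, the per-output losses and the alpha weighting can be expressed directly in `compile` via `loss_weights`. A minimal sketch; `siamese_model`, the Euclidean loss below, and the alpha value are placeholders for the asker's actual model and settings:

```python
from keras import backend as K

def euclidean_loss(y_true, y_pred):
    # hypothetical similarity loss for the third output
    return K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1))

alpha = 0.3  # hypothetical value for the loss weighting

siamese_model.compile(
    optimizer='adam',
    loss=['categorical_crossentropy', 'categorical_crossentropy', euclidean_loss],
    loss_weights=[alpha, alpha, 1 - alpha])
```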

keras combining two losses with adjustable weights

二次信任 submitted on 2019-11-30 09:48:37
So here is the detailed description. I have a Keras functional model with two layers whose outputs are x1 and x2:

```python
x1 = Dense(1, activation='relu')(prev_inp1)
x2 = Dense(2, activation='relu')(prev_inp2)
```

I need to use these x1 and x2, merge/add them, and come up with a weighted loss function like in the attached image, propagating the same loss into both branches. Alpha is flexible to vary with iterations. It seems that propagating the "same loss" into both branches will not take effect unless alpha depends on both branches. If alpha is not variable depending on both branches, then part of the loss
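One way to make alpha adjustable between iterations (a sketch under the assumption that both branch losses share a single scalar weight) is to keep it in a backend variable and update it from a callback:

```python
import keras.backend as K
from keras.callbacks import Callback

alpha = K.variable(0.5)   # shared, mutable loss weight

def loss_branch1(y_true, y_pred):
    return alpha * K.mean(K.square(y_true - y_pred))

def loss_branch2(y_true, y_pred):
    return (1.0 - alpha) * K.mean(K.square(y_true - y_pred))

class AlphaScheduler(Callback):
    def on_epoch_end(self, epoch, logs=None):
        # hypothetical schedule: gradually shift weight toward the second branch
        K.set_value(alpha, max(0.1, 0.5 - 0.01 * epoch))

# model.compile(optimizer='adam', loss=[loss_branch1, loss_branch2])
# model.fit(..., callbacks=[AlphaScheduler()])
```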

Keras using Tensorflow backend— masking on loss function

£可爱£侵袭症+ submitted on 2019-11-30 03:07:59
I am trying to implement a sequence-to-sequence task using an LSTM in Keras with the TensorFlow backend. The inputs are English sentences of variable lengths. To construct a dataset with the 2-D shape [batch_number, max_sentence_length], I add EOF at the end of each line and pad each sentence with enough placeholder characters, e.g. "#". Each character in a sentence is then transformed into a one-hot vector, so the dataset has the 3-D shape [batch_number, max_sentence_length, character_number]. After the LSTM encoder and decoder layers, the softmax cross-entropy between the output and the target is computed. To eliminate the padding
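To keep the padded positions from contributing to the loss, one common approach (a sketch, assuming the "#" padding character is one-hot encoded at a known index, here the hypothetical PAD_IDX) is to mask the per-timestep cross-entropy before averaging:

```python
import keras.backend as K

PAD_IDX = 0   # hypothetical index of the "#" padding character in the one-hot vocabulary

def masked_categorical_crossentropy(y_true, y_pred):
    # 1.0 for real characters, 0.0 where the target is the padding symbol
    mask = 1.0 - y_true[:, :, PAD_IDX]
    loss = K.categorical_crossentropy(y_true, y_pred)   # shape: (batch, time)
    return K.sum(loss * mask) / K.maximum(K.sum(mask), 1.0)
```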