loss-function

Custom loss function implementation issue in Keras

Submitted by 送分小仙女 on 2019-12-20 07:46:18
Question: I am implementing a custom loss function in Keras. The output of the model is a 10-dimensional softmax layer. To calculate the loss, I first need to find the index of the neuron in y firing 1 and then subtract that value from the true value. I'm doing the following:

```python
from keras import backend as K

def diff_loss(y_true, y_pred):
    # find the indices of the neuron firing 1
    true_ind = K.tf.argmax(y_true, axis=0)
    pred_ind = K.tf.argmax(y_pred, axis=0)
    # cast to float32
    x = K.tf.cast(true_ind, K.tf.float32)
    y = K.tf.cast(pred_ind, K.tf…
```
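A NumPy sketch of what this loss computes may clarify the excerpt. Note two things: per-sample class indices live on the last axis, so `axis=-1` is usually wanted rather than `axis=0` (which reduces over the batch), and `argmax` has no gradient, so a loss built on it cannot be trained by backpropagation. The function name and test values below are illustrative only.

```python
import numpy as np

def diff_loss_np(y_true, y_pred):
    # index of the neuron firing 1, one index per sample (class axis is last)
    true_ind = np.argmax(y_true, axis=-1).astype(np.float32)
    pred_ind = np.argmax(y_pred, axis=-1).astype(np.float32)
    # mean absolute difference between predicted and true class indices
    return np.mean(np.abs(true_ind - pred_ind))
```

A Keras version would need a differentiable surrogate (e.g. an expected index, `sum(i * p_i)`) for gradient-based training.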

keras combining two losses with adjustable weights

Submitted by 喜你入骨 on 2019-12-18 13:26:22
Question: Here is the detailed description. I have a Keras functional model with two layers with outputs x1 and x2:

```python
x1 = Dense(1, activation='relu')(prev_inp1)
x2 = Dense(2, activation='relu')(prev_inp2)
```

I need to use these x1 and x2, merge/add them, and come up with a weighted loss function like in the attached image, propagating the same loss into both branches. Alpha is flexible to vary with iterations.

Answer 1: It seems that propagating the "same loss" into both branches will not take effect, unless alpha…
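The blending itself can be sketched numerically. Since the image defining the exact loss is not available here, MSE is an assumed stand-in for each branch's loss; in Keras, `alpha` would typically be a backend variable updated from a callback between iterations.

```python
import numpy as np

def weighted_joint_loss(y1_true, y1_pred, y2_true, y2_pred, alpha):
    # per-branch losses (MSE assumed), blended by a single adjustable weight
    loss1 = np.mean((y1_true - y1_pred) ** 2)
    loss2 = np.mean((y2_true - y2_pred) ** 2)
    return alpha * loss1 + (1.0 - alpha) * loss2
```

Because both branch losses feed one scalar, gradients from that scalar reach both branches automatically; the weighting only rescales each branch's share.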

How can I use TensorFlow's sampled softmax loss function in a Keras model?

Submitted by ∥☆過路亽.° on 2019-12-18 06:12:21
Question: I'm training a language model in Keras and would like to speed up training by using sampled softmax as the final activation function in my network. From the TF docs, it looks like I need to supply arguments for weights and biases, but I'm unsure what is expected as input for these. It seems like I could write a custom function in Keras as follows:

```python
import keras.backend as K

def sampled_softmax(weights, biases, y_true, y_pred, num_sampled, num_classes):
    return K.sampled_softmax(weights,…
```
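A toy NumPy version of the idea behind `tf.nn.sampled_softmax_loss` may show what the `weights` and `biases` arguments are for: they are the output projection (one row and one bias per vocabulary class), which the op needs so it can score only the true class plus a few sampled negatives instead of the whole vocabulary. This is a simplified sketch (uniform negative sampling, no sampled-logit correction), not the exact TF algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_softmax_loss_np(weights, biases, labels, inputs, num_sampled, num_classes):
    # weights: (num_classes, dim) output embedding; biases: (num_classes,)
    losses = []
    for x, label in zip(inputs, labels):
        # draw negatives uniformly from the classes other than the true one
        negatives = rng.choice([c for c in range(num_classes) if c != label],
                               size=num_sampled, replace=False)
        classes = np.concatenate(([label], negatives))
        logits = weights[classes] @ x + biases[classes]
        # softmax cross-entropy over the reduced class set (true class at index 0)
        losses.append(np.log(np.exp(logits).sum()) - logits[0])
    return np.mean(losses)
```

In TF, `inputs` would be the pre-softmax hidden activations, so the sampled loss replaces the softmax layer during training rather than wrapping its output.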

How does model.fit work in Keras?

Submitted by 安稳与你 on 2019-12-13 17:03:59
Question: My previous post about this error is this one. I found a different way of writing the function so that it is TensorFlow compatible; I tested it and it works fine. However, when I tried to integrate it into Keras, I couldn't. This is the solution from my previous post:

```python
graph = tf.Graph()
with graph.as_default():
    i = tf.Variable(0)
    error = tf.Variable(initial_value=0, dtype=tf.float64)
    sol = tf.random_uniform(shape=[10, 36], dtype=tf.float64, maxval=1)
    error_1 = tf.Variable(initial_value=0…
```
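The underlying obstacle is that `model.fit` owns the training loop: it slices batches, evaluates the loss inside Keras's own graph, and applies gradient updates, so a custom loss must be expressed as tensor ops on `y_true`/`y_pred` rather than building its own `tf.Graph`, `Session`, or `Variable`s. A rough pure-NumPy caricature of that loop, for a one-parameter linear model (all names and values illustrative):

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def toy_fit(x, y, w, lr=0.02, epochs=50, batch_size=2):
    # what fit does each step: take a batch, run the model, evaluate the
    # loss, update the parameter by the gradient of the loss
    for _ in range(epochs):
        for i in range(0, len(x), batch_size):
            xb, yb = x[i:i + batch_size], y[i:i + batch_size]
            batch_loss = mse(yb, w * xb)       # the value fit reports per step
            grad = np.mean(2.0 * (w * xb - yb) * xb)  # d(mse)/dw
            w -= lr * grad
    return w
```

Anything the loss needs beyond `y_true` and `y_pred` has to be captured as constants or tensors that Keras can evaluate inside this loop.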

Keras custom loss giving a batch-size error between y_true and y_pred

Submitted by 断了今生、忘了曾经 on 2019-12-13 04:24:50
Question: I want to create a custom loss function in which I use my precalculated y_true, but unfortunately, when I do, I get an error about the batch size not matching y_pred. I would like to know a workaround. Here is the example code of my custom loss:

```python
def avg_vect_loss(y_true, y_pred):
    # y_true need not be used; instead, pos_avg_vects and neg_avg_vects
    # are considered as my precalculated y_trues
    pos_avg_vects, neg_avg_vects = pos_neg_avg_vects()
    l2_dist_to_pos_avg = l2…
```

Combining two loss functions in Keras in a Sequential model with ndarray output

Submitted by 痞子三分冷 on 2019-12-13 03:18:07
Question: I am training a CNN model in Keras (object detection in images and LiDAR, for the Kaggle Lyft Competition). As output I have a 34-channel grid, so the output dimension is LENGTH x WIDTH x 34. The first 10 channels are for different categories of objects (ideally a one-hot vector) and the remaining 24 channels are the coordinates of bounding boxes in 3D. For the first 10 channels I want to use keras.losses.categorical_crossentropy, and for the remaining 24, keras.losses.mean_squared_error. Also, since the numbers of objects…
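Splitting one output tensor into two loss terms amounts to slicing the channel axis inside a single loss function. A NumPy sketch (the 1:1 weighting of the two terms is an assumption; in practice the terms are usually rebalanced):

```python
import numpy as np

def combined_loss_np(y_true, y_pred, eps=1e-7):
    # categorical cross-entropy over the first 10 channels
    ce = -np.mean(np.sum(y_true[..., :10] * np.log(y_pred[..., :10] + eps), axis=-1))
    # mean squared error over the remaining 24 channels
    mse = np.mean((y_true[..., 10:] - y_pred[..., 10:]) ** 2)
    return ce + mse
```

The same slicing works with backend ops on `(y_true, y_pred)` in a Keras loss; the alternative is giving the model two named outputs and passing a dict of standard losses to `model.compile`.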

How to assign values to specified location in Tensorflow?

Submitted by 浪子不回头ぞ on 2019-12-13 02:59:08
Question: I would like to implement an SSIM loss function. Since the borders are cut off by the convolution, I would like to preserve the borders and compute an L1 loss for the border pixels. The code is adapted from here: SSIM / MS-SSIM for TensorFlow. For example, if we have img1 and img2 of size [batch, 32, 32, 32, 1] and a Gaussian window_size of 11, the resulting SSIM map will be [batch, 22, 22, 22, 1] and the L1 map [batch, 32, 32, 32, 1]. How can I assign the SSIM map to the center of the L1 map? I receive an error like this:…
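Tensors don't support item assignment, so instead of writing the SSIM map into the center, one can pad the smaller map up to the full size and blend with a 0/1 mask; the same recipe carries over to TF with `tf.pad` and plain tensor arithmetic. A 2-D NumPy sketch (the 3-D case just adds axes):

```python
import numpy as np

def paste_center(l1_map, ssim_map):
    # symmetric padding amounts per axis to center the smaller map
    pad = [((f - s) // 2, (f - s) - (f - s) // 2)
           for f, s in zip(l1_map.shape, ssim_map.shape)]
    ssim_padded = np.pad(ssim_map, pad)
    mask = np.pad(np.ones_like(ssim_map), pad)  # 1 where SSIM is valid
    # select SSIM in the center, L1 on the border, with no item assignment
    return mask * ssim_padded + (1.0 - mask) * l1_map
```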

Implement custom loss function in Tensorflow 2.0

Submitted by 拟墨画扇 on 2019-12-12 23:57:02
Question: I'm building a model for time-series classification. The data is very unbalanced, so I've decided to use a weighted cross-entropy function as my loss. TensorFlow provides tf.nn.weighted_cross_entropy_with_logits, but I'm not sure how to use it in TF 2.0. Because my model is built using the tf.keras API, I was thinking about creating my custom loss function like this:

```python
pos_weight = 10

def weighted_cross_entropy_with_logits(y_true, y_pred):
    return tf.nn.weighted_cross_entropy_with_logits(y_true, y_pred, pos…
```
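The math behind the op is ordinary sigmoid cross-entropy with the positive term scaled by `pos_weight`, which a NumPy sketch makes concrete (elementwise, like the TF op; this naive form is not numerically stable for large-magnitude logits, which is what the TF implementation handles):

```python
import numpy as np

def weighted_ce_with_logits_np(labels, logits, pos_weight):
    p = 1.0 / (1.0 + np.exp(-logits))  # sigmoid
    # positive-class term scaled by pos_weight, negative-class term unscaled
    return -pos_weight * labels * np.log(p) - (1.0 - labels) * np.log(1.0 - p)
```

In TF 2 the argument order is `labels, logits, pos_weight`, so calling with keywords, `tf.nn.weighted_cross_entropy_with_logits(labels=y_true, logits=y_pred, pos_weight=pos_weight)`, avoids silently swapping them; note the op expects raw logits, so the model's final layer should have no sigmoid activation.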

How to correct this custom loss function for keras with tensorflow?

Submitted by 早过忘川 on 2019-12-12 17:20:01
Question: I want to write a custom loss function that penalizes underestimation of positive target values with weights. It would work like mean squared error, with the only difference being that squared errors in that case would be multiplied by a weight greater than 1. I wrote it like this:

```python
def wmse(ground_truth, predictions):
    square_errors = np.square(np.subtract(ground_truth, predictions))
    weights = np.ones_like(square_errors)
    weights[np.logical_and(predictions < ground_truth, np.sign(ground_truth)…
```
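A variant using `np.where` instead of boolean item assignment computes the same weighting, and matters here because `np.where` is also the form (`tf.where` / `K.switch`) that a Keras backend version of this loss needs, since tensors do not support item assignment. The weight value and the "positive target" condition are assumptions filling in the truncated excerpt:

```python
import numpy as np

def wmse_np(ground_truth, predictions, weight=2.0):
    square_errors = (ground_truth - predictions) ** 2
    # weight > 1 exactly where a positive target is underestimated
    weights = np.where((predictions < ground_truth) & (ground_truth > 0),
                       weight, 1.0)
    return np.mean(weights * square_errors)
```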

How to conditionally assign values to tensor [masking for loss function]?

Submitted by 故事扮演 on 2019-12-12 12:32:08
Question: I want to create an L2 loss function that ignores values (i.e. pixels) where the label has the value 0. The tensor batch[1] contains the labels, while output is a tensor of the net's output; both have shape (None, 300, 300, 1).

```python
labels_mask = tf.identity(batch[1])
labels_mask[labels_mask > 0] = 1
loss = tf.reduce_sum(tf.square((output - batch[1]) * labels_mask)) / tf.reduce_sum(labels_mask)
```

My current code yields TypeError: 'Tensor' object does not support item assignment (on the second line). What…
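The usual fix is to build the 0/1 mask by comparison and cast instead of item assignment; in TF the failing line becomes `labels_mask = tf.cast(batch[1] > 0, tf.float32)` and the rest of the loss stays unchanged. A NumPy sketch of the masked L2 with illustrative values:

```python
import numpy as np

def masked_l2_np(labels, output):
    # comparison-and-cast replaces the unsupported boolean item assignment
    mask = (labels > 0).astype(np.float64)
    # sum of squared errors over labeled pixels, averaged by mask count
    return np.sum(((output - labels) * mask) ** 2) / np.sum(mask)
```

One caveat worth hedging: if a batch contains no nonzero labels, the mask sum is 0 and the loss divides by zero, so a small epsilon in the denominator is a common guard.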