loss

Different loss function for validation set in Keras

淺唱寂寞╮ submitted on 2019-11-30 18:00:59
Question: I have an unbalanced training dataset, which is why I built a custom weighted categorical cross-entropy loss function. But my validation set is balanced, and there I want to use the regular categorical cross-entropy loss. So can I pass a different loss function for the validation set in Keras? I mean, the weighted one for training and the regular one for the validation set?

def weighted_loss(y_true, y_pred):
    '''
    ...
    '''
    return loss

model.compile(loss=weighted_loss, metrics=['accuracy'])

Answer 1: You can try the
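The answer above is cut off; a common workaround is to keep the weighted loss for training but also register the plain loss as a metric, since Keras evaluates metrics on the validation set too. Below is a minimal NumPy sketch of the two losses (the function names and the class weights are illustrative, not from the original question):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred):
    # mean over the batch of -sum(y_true * log(y_pred)) per sample
    return float(np.mean(-np.sum(y_true * np.log(y_pred), axis=-1)))

def weighted_categorical_crossentropy(y_true, y_pred, class_weights):
    # each sample's loss is scaled by the weight of its true class
    per_sample = -np.sum(y_true * np.log(y_pred) * class_weights, axis=-1)
    return float(np.mean(per_sample))

y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
y_pred = np.array([[0.9, 0.1], [0.2, 0.8]])

plain = categorical_crossentropy(y_true, y_pred)
weighted = weighted_categorical_crossentropy(y_true, y_pred, np.array([2.0, 1.0]))

# In Keras itself, the usual trick is (sketch, assuming `weighted_loss` exists):
#   model.compile(optimizer='adam', loss=weighted_loss,
#                 metrics=['accuracy', 'categorical_crossentropy'])
# The unweighted loss then shows up as val_categorical_crossentropy during fit().
```

With uniform class weights, the weighted loss reduces to the plain one, which is a quick sanity check for this kind of implementation.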

loss calculation over different batch sizes in keras

自闭症网瘾萝莉.ら submitted on 2019-11-29 15:49:09
I know that in theory, the loss of a network over a batch is just the sum of all the individual losses. This is reflected in the Keras code for calculating total loss. Relevantly:

for i in range(len(self.outputs)):
    if i in skip_target_indices:
        continue
    y_true = self.targets[i]
    y_pred = self.outputs[i]
    weighted_loss = weighted_losses[i]
    sample_weight = sample_weights[i]
    mask = masks[i]
    loss_weight = loss_weights_list[i]
    with K.name_scope(self.output_names[i] + '_loss'):
        output_loss = weighted_loss(y_true, y_pred, sample_weight, mask)
    if len(self.outputs) > 1:
        self.metrics_tensors.append(output_loss)
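For intuition, note that the loss Keras reports for a batch is the *mean* of the per-sample losses, not the raw sum, and the running epoch loss averages batches weighted by their size. A small NumPy sketch with illustrative numbers:

```python
import numpy as np

per_sample = np.array([0.5, 1.0, 1.5, 2.0])  # hypothetical per-sample losses

# The loss reported for a batch is the mean of the individual sample losses.
batch_loss = per_sample.mean()

# With unequal batch sizes, the epoch loss is the batch means weighted by
# batch size, which equals the mean over all samples seen so far.
b1, b2 = per_sample[:3], per_sample[3:]
epoch_loss = (b1.mean() * len(b1) + b2.mean() * len(b2)) / len(per_sample)
```

This is why changing the batch size does not change the scale of the reported loss, only its variance from batch to batch.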

Keras multiple output: Custom loss function

情到浓时终转凉″ submitted on 2019-11-28 22:43:29
Question: I am using a multiple-output model in Keras:

model1 = Model(input=x, output=[y2, y3])
model1.compile(optimizer='sgd', loss=custom_loss_function)

My custom_loss_function is:

def custom_loss(y_true, y_pred):
    y2_pred = y_pred[0]
    y2_true = y_true[0]
    loss = K.mean(K.square(y2_true - y2_pred), axis=-1)
    return loss

I only want to train the network on output y2. What is the shape/structure of the y_pred and y_true arguments in the loss function when multiple outputs are used? Can I access them as above? Is
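Worth noting: in a multi-output model, Keras calls the loss function once per output, so inside the loss y_pred is that single output's tensor, not a list of all outputs. To train only on y2, one common approach (a sketch, not the asker's code) is to give each output its own loss and zero out y3's contribution with loss_weights; the total loss Keras minimizes is just the weighted sum:

```python
# Keras pattern (sketch, with model1, y2, y3 as in the question):
#   model1.compile(optimizer='sgd', loss=['mse', 'mse'],
#                  loss_weights=[1.0, 0.0])
# The combined loss is the weighted sum of the per-output losses:
loss_y2, loss_y3 = 0.8, 2.3          # illustrative per-output loss values
loss_weights = [1.0, 0.0]
total_loss = loss_weights[0] * loss_y2 + loss_weights[1] * loss_y3
```

With a zero weight, y3's loss contributes nothing to the gradients, so effectively only y2 drives training.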

Lost code lines when Notepad++ crashed

一曲冷凌霜 submitted on 2019-11-27 17:25:50
I was working on a .js file this morning in Notepad++, as usual, when the program just crashed. So I ended it and re-opened it, only to see that all the code lines in my .js file had disappeared, and now all I have left is the file with a size of 0 KB because there's nothing left in it. How the hell is that even possible? It erased everything I typed and saved the file as if there were nothing in it. Do you know a way to get my code back? Or has something like this ever happened to anyone else? :/ I'm kinda worried because there was a lot of work there and I don't feel like re-typing it all... Indrajit

Keras custom loss function: Accessing current input pattern

删除回忆录丶 submitted on 2019-11-27 07:53:58
In Keras (with the Tensorflow backend), is the current input pattern available to my custom loss function? The current input pattern is defined as the input vector used to produce the prediction. For example, consider the following: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42, shuffle=False). Then the current input pattern is the current X_train vector associated with the y_train (which is termed y_true in the loss function). When designing a custom loss function, I intend to optimize/minimize a value that requires access to the current input pattern,
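One well-known pattern for this (a sketch with hypothetical names) is to wrap the loss in a closure that captures the input, then pass the returned function to compile. Shown here with NumPy arrays so the mechanics are visible; in Keras the captured object would be the model's input tensor and the math would use backend ops:

```python
import numpy as np

def make_input_aware_loss(input_batch):
    # close over the current input batch so the inner loss can read it
    def loss(y_true, y_pred):
        # hypothetical input-dependent penalty term for illustration
        penalty = np.mean(np.abs(input_batch))
        return float(np.mean((y_true - y_pred) ** 2) + penalty)
    return loss

loss_fn = make_input_aware_loss(np.array([1.0, -1.0]))
value = loss_fn(np.array([1.0, 2.0]), np.array([1.0, 2.0]))

# Keras equivalent (sketch):
#   model.compile(optimizer='adam', loss=make_input_aware_loss(model.input))
```

The key point is that the inner function still has the (y_true, y_pred) signature Keras expects, while the extra data rides along in the closure.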



Loss & accuracy - Are these reasonable learning curves?

折月煮酒 submitted on 2019-11-26 11:53:10
I am learning neural networks, and I built a simple one in Keras for the iris dataset classification task from the UCI Machine Learning Repository. I used a network with one hidden layer of 8 nodes. The Adam optimizer is used with a learning rate of 0.0005, run for 200 epochs. Softmax is used at the output, with categorical cross-entropy as the loss. I am getting the following learning curves. As you can see, the learning curve for the accuracy has a lot of flat regions, and I don't understand why. The error seems to be decreasing constantly, but the accuracy doesn't seem to be increasing in the
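Flat accuracy alongside a falling loss is expected: cross-entropy is continuous in the predicted probabilities, while accuracy only changes when an argmax flips. A tiny NumPy illustration with made-up predictions:

```python
import numpy as np

def crossentropy(labels, probs):
    # mean negative log-probability assigned to the true class
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels])))

def accuracy(labels, probs):
    return float(np.mean(probs.argmax(axis=1) == labels))

labels = np.array([0, 1])
early = np.array([[0.55, 0.45], [0.45, 0.55]])  # barely correct predictions
later = np.array([[0.90, 0.10], [0.10, 0.90]])  # confidently correct

# Accuracy is already 1.0 in both cases, yet the loss keeps dropping as the
# model grows more confident -- hence the flat regions in the accuracy curve.
```

So plateaus in the accuracy curve while the loss declines usually just mean the model is sharpening probabilities on examples it already classifies correctly.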
