loss-function

Use of Keras Sparse Categorical Crossentropy for pixel-wise multi-class classification

Submitted by 主宰稳场 on 2021-02-07 08:45:29

Question: I'll start by disclosing that I'm a machine learning and Keras novice and don't know much beyond general CNN binary classifiers. I'm trying to perform pixel-wise multi-class classification using a U-Net architecture (TF backend) on many 256x256 images. In other words, I input a 256x256 image, and I want it to output a 256x256 "mask" (or label image) where the values are integers from 0-30 (each integer represents a unique class). I'm training on 2 NVIDIA 1080Ti GPUs. When I attempt to perform
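A minimal sketch of the setup this question leads to (not the asker's full U-Net; the single conv layer stands in for the whole encoder/decoder): the final layer emits a per-pixel softmax over the 31 classes, and sparse_categorical_crossentropy accepts the integer mask directly, with no one-hot encoding.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 31  # integer labels 0-30, one per class

    inputs = layers.Input(shape=(256, 256, 3))
    x = layers.Conv2D(64, 3, padding='same', activation='relu')(inputs)
    # ... the real U-Net encoder/decoder blocks would go here ...
    outputs = layers.Conv2D(NUM_CLASSES, 1, activation='softmax')(x)  # (256, 256, 31)

    model = models.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

    # Integer masks of shape (batch, 256, 256) or (batch, 256, 256, 1) can be
    # fed as labels directly; one-hot encoding the 31 classes is not required.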

Derivative in loss function in Keras

Submitted by 廉价感情. on 2021-02-04 19:38:48

Question: I want to build the following loss function in Keras: Loss = mse + double_derivative(y_pred, x_train). I am not able to incorporate the derivative term. I have tried K.gradients(K.gradients(y_pred, x_train), x_train) but it does not help. I am getting the error message: AttributeError: 'NoneType' object has no attribute 'op'

    def _loss_tensor(y_true, y_pred, x_train):
        l1 = K.mean(K.square(y_true - y_pred), axis=-1)
        sigma = 0.01
        lamda = 3
        term = K.square(sigma) * K.gradients(K.gradients(y_pred, x_train), x_train
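The usual cause of that NoneType error is that the graph cannot connect y_pred back to x_train, typically because x_train is a numpy array rather than the tensor the model was called on. A hedged sketch of one way to express the idea in TF 2.x with nested tf.GradientTape (the names model and x, and the sigma weighting, mirror the question; the combination is illustrative, not the asker's final formula):

    import tensorflow as tf

    sigma = 0.01

    def loss_with_second_derivative(model, x, y_true):
        # x must be a tf.Tensor the tapes can watch, not a numpy array, and
        # the model should use smooth activations so the second derivative exists
        with tf.GradientTape() as outer:
            outer.watch(x)
            with tf.GradientTape() as inner:
                inner.watch(x)
                y_pred = model(x)
            dy_dx = inner.gradient(y_pred, x)   # first derivative w.r.t. input
        d2y_dx2 = outer.gradient(dy_dx, x)      # second derivative w.r.t. input
        mse = tf.reduce_mean(tf.square(y_true - y_pred))
        return mse + tf.reduce_mean(tf.square(sigma) * d2y_dx2)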

What to do next when a Deep Learning neural network stops improving in terms of validation accuracy?

Submitted by 余生长醉 on 2021-01-29 18:31:31

Question: I was running into this issue where my model converges very fast, after only about 20 or 30 epochs. My data set contains 7000 samples, and my neural network has 3 hidden layers, each with 18 neurons, using batch normalization and dropout of 0.2. My task is multi-label classification, where my labels are [0 0 1], [0 1 0], [1 0 0] and [0 0 0].

    num_neuron = 18
    model = Sequential()
    model.add(Dense(num_neuron, input_shape=(input_size,), activation='elu'))
    model.add(Dropout(0.2))
    model.add(keras.layers
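Two standard first responses when validation accuracy plateaus are to shrink the learning rate and to stop before overfitting sets in; a minimal sketch with Keras callbacks (the patience values are illustrative, and monitoring 'val_accuracy' assumes the model was compiled with an accuracy metric):

    from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping

    callbacks = [
        # halve the learning rate whenever validation loss stalls for 5 epochs
        ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=1e-6),
        # stop and keep the best weights after 15 epochs without improvement
        EarlyStopping(monitor='val_accuracy', patience=15,
                      restore_best_weights=True),
    ]

    # model.fit(X_train, y_train, validation_split=0.2, epochs=200,
    #           callbacks=callbacks)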

How to produce a variable size distance matrix in keras?

Submitted by 旧时模样 on 2021-01-29 14:13:11

Question: What I am trying to achieve now is to create a custom loss function in Keras that takes in two tensors (y_true, y_pred) with shapes (None, None, None) and (None, None, 3), respectively. However, the Nones are such that the two shapes are always equal for every (y_true, y_pred) pair. From these tensors I want to produce two distance matrices that contain the squared distances between every possible point pair (the third, length-3 dimension contains x, y, and z spatial values) inside them and
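A sketch of the shape-agnostic way to build such a matrix: with the identity ||a - b||^2 = ||a||^2 - 2 a·b + ||b||^2, the whole (batch, n, n) matrix comes out of broadcasted tensor ops, so n can stay None. The final loss comparing the two matrices is an illustrative guess at what the asker wants:

    import tensorflow as tf

    def pairwise_sq_dists(points):                            # (batch, n, 3)
        sq_norms = tf.reduce_sum(tf.square(points), axis=-1)  # (batch, n)
        dots = tf.matmul(points, points, transpose_b=True)    # (batch, n, n)
        d2 = sq_norms[:, :, None] - 2.0 * dots + sq_norms[:, None, :]
        return tf.maximum(d2, 0.0)  # clamp tiny negatives from round-off

    def distance_matrix_loss(y_true, y_pred):
        return tf.reduce_mean(tf.square(pairwise_sq_dists(y_true)
                                        - pairwise_sq_dists(y_pred)))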

How to build a Neural Network to multiply two numbers

Submitted by 时光总嘲笑我的痴心妄想 on 2021-01-29 05:12:33

Question: I am trying to build a neural network that multiplies 2 numbers. To do so, I used scikit-learn. I am going for a neural network with 2 hidden layers of sizes (5, 3) and ReLU as my activation function. I have defined my MLPRegressor as follows:

    X = data.drop('Product', axis=1)
    y = data['Product']
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    scaler = StandardScaler()
    scaler.fit(X_train)
    X_train = scaler.transform(X_train)
    X_test = scaler.transform(X_test)
    mlp =
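A hedged completion of the truncated pipeline: hidden_layer_sizes and activation follow the question, while the synthetic data, solver, and max_iter are illustrative assumptions.

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPRegressor

    # synthetic stand-in for the asker's data: two factors and their product
    rng = np.random.default_rng(0)
    data = pd.DataFrame({'A': rng.uniform(0, 100, 10000),
                         'B': rng.uniform(0, 100, 10000)})
    data['Product'] = data['A'] * data['B']

    X = data.drop('Product', axis=1)
    y = data['Product']
    X_train, X_test, y_train, y_test = train_test_split(X, y)

    scaler = StandardScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    mlp = MLPRegressor(hidden_layer_sizes=(5, 3), activation='relu',
                       solver='adam', max_iter=2000)
    mlp.fit(X_train, y_train)
    print(mlp.score(X_test, y_test))

A network this small tends to approximate multiplication poorly; one common workaround is to train on log(A) and log(B) and predict log(Product), which turns the multiplication into an addition.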

Implementing custom loss function in keras with condition

Submitted by 杀马特。学长 韩版系。学妹 on 2020-12-29 10:45:44

Question: I need some help with a Keras loss function. I have been implementing a custom loss function in Keras with the TensorFlow backend. I have implemented the custom loss function in numpy, but it would be great if it could be translated into a Keras loss function. The loss function takes a dataframe and a series of user ids. The Euclidean distance is positive for the same user_id and negative if the user_ids are different. The function returns the summed-up scalar distance over the dataframe. def custom_loss_numpy
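A sketch of that numpy idea expressed in TensorFlow ops so it can live inside a Keras loss (embeddings and user_ids are illustrative names, and the pairwise reading of the question is a guess; the conditional sign uses tf.where rather than a Python if, keeping everything in the graph):

    import tensorflow as tf

    def signed_distance_sum(embeddings, user_ids):
        # pairwise Euclidean distances between all rows: shape (n, n)
        diffs = embeddings[:, None, :] - embeddings[None, :, :]
        dists = tf.sqrt(tf.reduce_sum(tf.square(diffs), axis=-1) + 1e-12)
        # same user_id -> keep the distance positive, different -> negate it
        same = tf.equal(user_ids[:, None], user_ids[None, :])
        signed = tf.where(same, dists, -dists)
        return tf.reduce_sum(signed)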

Should the custom loss function in Keras return a single loss value for the batch or an array of losses for every sample in the training batch?

Submitted by 不想你离开。 on 2020-12-23 09:40:26

Question: I'm learning the Keras API in TensorFlow (2.3). In this guide on the TensorFlow website, I found an example of a custom loss function:

    def custom_mean_squared_error(y_true, y_pred):
        return tf.math.reduce_mean(tf.square(y_true - y_pred))

The reduce_mean function in this custom loss function returns a scalar. Is it right to define a loss function like this? As far as I know, the first dimension of the shapes of y_true and y_pred is the batch size. I think the loss function should return loss values for
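The Keras convention is that a loss function returns one value per sample: reduce over the feature axes only, keep the batch axis, and let Keras apply sample weights and its own final reduction. A minimal sketch of the per-sample form of the question's example:

    import tensorflow as tf

    def custom_mean_squared_error(y_true, y_pred):
        # reduce over the last axis only -> shape (batch_size,)
        return tf.math.reduce_mean(tf.square(y_true - y_pred), axis=-1)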