cross-entropy

Softmax cross-entropy implementation in the TensorFlow GitHub source code

我的梦境 submitted on 2021-01-29 22:23:59
Question: I am trying to implement a softmax cross-entropy loss in Python, so I was looking at the implementation of the softmax cross-entropy loss in the TensorFlow GitHub repository. I am trying to understand it, but I run into a loop of three functions and cannot tell which line of code in them actually computes the loss. The function softmax_cross_entropy_with_logits_v2(labels, logits, axis=-1, name=None) returns the function softmax_cross_entropy_with_logits_v2_helper(labels=labels, logits
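For orientation while reading that source, the quantity those functions ultimately compute is the standard log-sum-exp form of softmax cross-entropy. Below is a minimal NumPy sketch of that math only; it is not the TensorFlow code itself, and the function name is mine:

import numpy as np

def softmax_cross_entropy(labels, logits):
    # labels: one-hot array of shape (batch, num_classes)
    # logits: raw scores of shape (batch, num_classes)
    # Subtract the row-wise max so exp() cannot overflow.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    # log(softmax) = shifted - log(sum(exp(shifted)))
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # Per-sample loss: -sum over classes of label * log(probability).
    return -(labels * log_probs).sum(axis=-1)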

Why do the sigmoid & cross-entropy of Keras/TensorFlow have low precision?

不羁岁月 submitted on 2020-08-27 21:54:10
Question: I have the following simple neural network (with only 1 neuron) to test the computational precision of the sigmoid activation and binary_crossentropy in Keras: model = Sequential() model.add(Dense(1, input_dim=1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) To simplify the test, I manually set the only weight to 1 and the bias to 0, then evaluate the model on the 2-point training set {(-a, 0), (a, 1)}, i.e. y = numpy.array([0, 1]) for a in
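One likely cause the question is circling: in float32, sigmoid(a) rounds to exactly 1.0 once a is large enough, so the -log(p) term of the cross-entropy collapses to zero. A small NumPy sketch of the effect under the same weight-1/bias-0 setup (my own reconstruction, not the asker's full script):

import numpy as np

one = np.float32(1.0)
for a in [5.0, 10.0, 20.0]:
    p = one / (one + np.exp(np.float32(-a)))  # sigmoid evaluated in float32
    # For the pair {(-a, 0), (a, 1)} both cross-entropy terms reduce
    # to -log(p) via the symmetry sigmoid(-a) = 1 - sigmoid(a).
    loss = -np.log(p)
    print(a, p, loss)  # at a = 20, p == 1.0 exactly and the loss vanishes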

Channel-wise CrossEntropyLoss for image segmentation in PyTorch

倾然丶 夕夏残阳落幕 submitted on 2020-07-05 12:11:32
Question: I am doing an image segmentation task. There are 7 classes in total, so the final output is a tensor like [batch, 7, height, width], which is a softmax output. Intuitively I wanted to use the CrossEntropy loss, but the PyTorch implementation doesn't work on a channel-wise one-hot encoded vector, so I was planning to write a function of my own. With help from some Stack Overflow answers, my code so far looks like this: from torch.autograd import Variable import torch import torch.nn.functional as F def cross
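For context, the usual way around this needs no custom function: PyTorch's nn.CrossEntropyLoss already handles [batch, C, H, W] inputs when the target holds class indices of shape [batch, H, W], and it expects raw logits rather than softmax output, since it applies log-softmax internally. A sketch with the question's shapes but dummy data:

import torch
import torch.nn as nn

batch, num_classes, height, width = 4, 7, 32, 32
logits = torch.randn(batch, num_classes, height, width)  # pre-softmax scores
one_hot = torch.zeros(batch, num_classes, height, width)
one_hot[:, 0] = 1.0                                      # dummy one-hot target

# Collapse the one-hot channel dimension back to class indices [batch, H, W].
target = one_hot.argmax(dim=1)

# CrossEntropyLoss applies log-softmax itself, so pass logits, not probabilities.
loss = nn.CrossEntropyLoss()(logits, target)
print(loss)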

Using categorical_crossentropy for only two classes

纵饮孤独 submitted on 2020-01-25 07:03:22
Question: Computer vision and deep learning literature usually say one should use binary_crossentropy for a binary (two-class) problem and categorical_crossentropy for more than two classes. Now I am wondering: is there any reason not to use the latter for a two-class problem as well? Answer 1: categorical_crossentropy accepts only one correct class per sample and will take "only" the true neuron into the cross-entropy calculation; binary_crossentropy accepts many correct classes per
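For concreteness, the two head/loss pairings being compared look like this in Keras (a sketch; the input size and hidden layer are placeholders of mine):

from tensorflow import keras
from tensorflow.keras import layers

# Option 1: one sigmoid output, labels are 0/1 scalars.
binary_model = keras.Sequential([
    layers.Dense(16, activation='relu', input_shape=(10,)),
    layers.Dense(1, activation='sigmoid'),
])
binary_model.compile(loss='binary_crossentropy', optimizer='adam')

# Option 2: two softmax outputs, labels are one-hot pairs such as [1, 0].
categorical_model = keras.Sequential([
    layers.Dense(16, activation='relu', input_shape=(10,)),
    layers.Dense(2, activation='softmax'),
])
categorical_model.compile(loss='categorical_crossentropy', optimizer='adam')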

What is the problem with my implementation of the cross-entropy function?

怎甘沉沦 submitted on 2020-01-22 09:03:36
Question: I am learning about neural networks and I want to write a function cross_entropy in Python, where the loss is defined as

-\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{k} t_{i,j} \log(p_{i,j})

where N is the number of samples, k is the number of classes, log is the natural logarithm, t_{i,j} is 1 if sample i is in class j and 0 otherwise, and p_{i,j} is the predicted probability that sample i is in class j. To avoid numerical issues with the logarithm, clip the predictions to the [10^{-12}, 1 - 10^{-12}] range. Following the above description, I wrote the code by clipping
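A direct NumPy implementation of that definition, including the clipping step (a sketch of the formula as stated, not necessarily what the asker wrote):

import numpy as np

def cross_entropy(predictions, targets, epsilon=1e-12):
    # predictions: (N, k) predicted probabilities p_{i,j}
    # targets:     (N, k) one-hot indicators t_{i,j}
    # Clip to [1e-12, 1 - 1e-12] so log() never receives 0 or 1 exactly.
    predictions = np.clip(predictions, epsilon, 1.0 - epsilon)
    n = predictions.shape[0]
    return -np.sum(targets * np.log(predictions)) / n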

Binary cross-entropy vs. categorical cross-entropy with 2 classes

老子叫甜甜 submitted on 2020-01-21 19:44:22
Question: When considering the problem of classifying an input into one of 2 classes, 99% of the examples I saw used an NN with a single output and sigmoid as its activation, followed by a binary cross-entropy loss. Another option I thought of is having the last layer produce 2 outputs and using a categorical cross-entropy with C=2 classes, but I never saw it in any example. Is there any reason for that? Thanks. Answer 1: If you are using softmax on top of the two-output network you get an output that is
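The equivalence behind that answer can be checked numerically: a softmax over two logits is exactly the sigmoid of their difference, so the two heads represent the same family of models. A minimal sketch with arbitrary numbers:

import numpy as np

z0, z1 = 0.3, 1.7  # two arbitrary logits
softmax_p1 = np.exp(z1) / (np.exp(z0) + np.exp(z1))
sigmoid_p1 = 1.0 / (1.0 + np.exp(-(z1 - z0)))
print(softmax_p1, sigmoid_p1)  # identical up to floating-point rounding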

Keras custom loss function dtype error

青春壹個敷衍的年華 submitted on 2020-01-06 06:31:26
Question: I have an NN that has two identical CNNs (similar to a Siamese network), then merges their outputs, and intends to apply a custom loss function to the merged output, something like this:

-----------------        -----------------
|    input_a    |        |    input_b    |
-----------------        -----------------
| base_network  |        | base_network  |
------------------------------------------
|             processed_a_b              |
------------------------------------------

In my custom loss function, I need to break y vertically into two pieces, and then
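In Keras a custom loss receives (y_true, y_pred) tensors, so the vertical split is a matter of tensor slicing. A sketch of the general pattern (the split point and the placeholder objective are my assumptions, not the asker's actual loss):

import tensorflow as tf

def custom_loss(y_true, y_pred):
    # dtype errors of the kind in the title are typically fixed by
    # casting y_true to match y_pred before any arithmetic.
    y_true = tf.cast(y_true, y_pred.dtype)
    # Split both tensors vertically (along the last axis) into halves.
    half = tf.shape(y_pred)[-1] // 2
    pred_a, pred_b = y_pred[:, :half], y_pred[:, half:]
    true_a, true_b = y_true[:, :half], y_true[:, half:]
    # Placeholder objective: per-branch mean squared error, averaged.
    loss_a = tf.reduce_mean(tf.square(true_a - pred_a))
    loss_b = tf.reduce_mean(tf.square(true_b - pred_b))
    return (loss_a + loss_b) / 2.0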