Is it practically possible to have decreasing loss and decreasing accuracy at each epoch when training a CNN model? I am getting the below result while training.
Yes, this is possible.
To see intuitively why this can happen, suppose your classifier assigns roughly equal probabilities to classes A and B on many examples, with the true class A only slightly ahead. A minimal change in the model's parameters can then push B just above A. This barely moves the cross-entropy loss, which depends smoothly on the predicted probabilities, but it changes the accuracy abruptly, because accuracy depends only on the argmax of the output distribution.
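Here is a minimal NumPy sketch of that effect. The probability values are made up purely for illustration: one borderline example flips from correct to wrong while the model grows more confident on the others, so the mean loss goes down while accuracy also goes down.

```python
import numpy as np

def mean_cross_entropy(p_true):
    """Mean cross-entropy over a batch, given the probability
    the model assigns to the true class of each example."""
    return float(np.mean(-np.log(p_true)))

# Hypothetical binary-classification batch of 3 examples
# (assumed numbers, not from any real training run).
p_before = np.array([0.51, 0.60, 0.60])  # all three argmax-correct
p_after  = np.array([0.49, 0.95, 0.95])  # example 1 flips to wrong;
                                         # the other two become much more confident

# In the binary case, the prediction is correct iff p(true class) > 0.5.
acc_before = np.mean(p_before > 0.5)     # 1.00
acc_after  = np.mean(p_after > 0.5)      # ~0.67

print(mean_cross_entropy(p_before))      # ~0.565
print(mean_cross_entropy(p_after))       # ~0.272 -> loss decreased
print(acc_before, acc_after)             # 1.0 0.667 -> accuracy decreased
```

The confident examples pull the mean loss down more than the flipped example pushes it up, yet the flip is all that accuracy sees.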
In conclusion, minimizing the cross-entropy loss does not always improve accuracy: cross-entropy is a smooth function of the model's outputs, while accuracy is a non-smooth (piecewise-constant) function of them.