Model with BatchNormalization: stagnant test loss
Question

I wrote a neural network using Keras that contains BatchNormalization layers. When I train it with model.fit, everything is fine. When I train it with raw TensorFlow as explained here, training goes fine, but the validation step always gives very poor performance, and it quickly saturates (the accuracy goes 5%, 10%, 40%, 40%, 40%, ...; the loss is stagnant too). I need to use TensorFlow because it allows more flexibility in monitoring the training. I strongly suspect it has
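To illustrate the behavior I suspect, here is a simplified NumPy sketch of BatchNormalization's two modes (no learned scale/offset; the momentum and epsilon values mirror the Keras defaults, the rest is illustrative). In training mode the layer normalizes with the current batch's statistics and updates its moving averages; in inference mode it uses the stored moving averages, which early in training are still close to their initial values:

```python
import numpy as np

def batch_norm(x, moving_mean, moving_var, training, momentum=0.99, eps=1e-3):
    """Simplified BatchNormalization (no learned gamma/beta).

    training=True: normalize with the batch's own statistics and
    update the moving averages (what happens inside model.fit).
    training=False: normalize with the stored moving averages.
    """
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)
        moving_mean = momentum * moving_mean + (1 - momentum) * mean
        moving_var = momentum * moving_var + (1 - momentum) * var
    else:
        mean, var = moving_mean, moving_var
    return (x - mean) / np.sqrt(var + eps), moving_mean, moving_var

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=(32, 4))  # activations far from N(0, 1)

mm, mv = np.zeros(4), np.ones(4)  # freshly initialized moving statistics

# Training mode: the batch is properly standardized.
y_train, mm, mv = batch_norm(x, mm, mv, training=True)
print(np.abs(y_train.mean(axis=0)).max())  # ~0: batch statistics were used

# Inference mode right after one step: the moving averages have barely
# moved, so the output is far from standardized; the downstream layers
# see activations they never trained on, and validation metrics collapse.
y_val, _, _ = batch_norm(x, mm, mv, training=False)
print(np.abs(y_val.mean(axis=0)).max())  # large: moving statistics are stale
```

If this is the cause, the validation outputs only become reasonable once the moving averages have had many updates, which matches the saturation I observe.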