Question:
I am using keras to build a simple neural network for a regression task, but the output always tends toward the 'mean value' of the ground-truth y data. See the first figure: blue is the ground truth, red is the predicted value (very close to the constant mean of the ground truth).
The model also stops learning very early, even though I set the number of training epochs to 100.
Does anyone have ideas about the conditions under which a neural network stops learning early, and why the regression output tends toward 'the mean' of the ground truth?
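For context, here is a minimal sketch of the kind of setup I mean (the placeholders x_train and y_train stand in for my real data, which isn't shown here):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical placeholders standing in for the real data set
# (N samples, 10 features; continuous regression targets).
x_train = np.random.rand(1000, 10).astype("float32")
y_train = np.random.rand(1000, 1).astype("float32")

# A simple fully connected network for regression.
model = keras.Sequential([
    keras.Input(shape=(10,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),  # linear output for regression
])

model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=100, batch_size=32, verbose=0)
```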
Thanks!
Answer 1:
Possibly because the data are unpredictable...? Do you know for certain that the data set has some kind of Nth-order predictability?
Just eyeballing your data set, it lacks periodicity, lacks homoscedasticity, and lacks any slope, skew, trend, or pattern... I can't really tell whether anything is wrong with your net. In the absence of any pattern, the mean is always the best prediction... and it is entirely possible (although not certain) that the neural net is doing its job.
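A quick numerical check of that last claim: under squared-error loss, the constant prediction that minimizes the loss is exactly the mean of the targets. A small NumPy sketch with made-up noise data (no pattern at all):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=2.0, size=10_000)  # pure noise, nothing to learn

# Evaluate the MSE of every constant prediction c on a grid of candidates.
candidates = np.linspace(0.0, 10.0, 1001)
mse = [np.mean((y - c) ** 2) for c in candidates]
best = candidates[int(np.argmin(mse))]

print(f"mean of y:     {y.mean():.3f}")
print(f"best constant: {best:.3f}")  # matches the mean up to grid resolution
```

So if there is genuinely no learnable structure, converging to the mean is exactly what an MSE-trained network should do.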
I suggest you find an easier data set, and see if you can tackle that first.
Answer 2:
The model is not learning from the data. Think of basic linear regression: the 'null' prediction, i.e. the prediction you would make with no predictors at all, is just the expected value, which is the mean. Many different issues could cause this, but initialization comes to mind: bad initialization leads to no learning. This blog post has good practical advice that may help.
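As one concrete knob to try (a sketch, not a guaranteed fix), Keras lets you set the weight initializers explicitly; He initialization is a common choice for ReLU layers. The input dimension below is a placeholder:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Explicit initializers; he_normal pairs well with ReLU activations.
model = keras.Sequential([
    keras.Input(shape=(10,)),  # placeholder input dimension
    layers.Dense(64, activation="relu",
                 kernel_initializer="he_normal",
                 bias_initializer="zeros"),
    layers.Dense(1, kernel_initializer="glorot_uniform"),
])
model.compile(optimizer="adam", loss="mse")
```

If the loss moves under a sensible initialization but not under your current one, that points to initialization (or learning rate) rather than the data.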
Source: https://stackoverflow.com/questions/39863606/why-neural-network-tends-to-output-mean-value