Decreasing training loss, stable validation loss - is the model overfitting?

Submitted by 假装没事ソ on 2021-01-29 07:40:20

Question


Is my model overfitting? I would be sure it was overfitting if the validation loss rose sharply while the training loss decreased. However, the validation loss is nearly stable, so I am not sure. Can you please help?


Answer 1:


  • I assume you're trying different hyperparameters? Perhaps save the
    model parameters and resume training with a different set of
    hyperparameters. This advice really depends on how you're doing
    hyperparameter optimization.

  • Try different training/test splits. The behavior might be
    idiosyncratic to one particular split, especially with so few epochs
    (see the first sketch after this list).

  • Depending on how costly it is to train and evaluate the model,
    consider bagging your models, akin to how a random forest operates:
    fit the model on many different train/test splits and average the
    outputs, either as a majority classification vote or as an average of
    the predicted probabilities (see the second sketch after this list).
    In that case I'd err on the side of a slightly overfit model, because
    averaging can mitigate overfitting. But I wouldn't train to death
    either, unless you're going to fit very many neural nets and somehow
    ensure they're decorrelated, akin to the random-subspace method used
    by random forests.
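
To make the second point concrete, here is a minimal sketch of re-running
the same model on several random splits. It assumes scikit-learn, with
MLPClassifier standing in for whatever network you're actually training and
make_classification standing in for your data:

    # Minimal sketch: re-evaluate the same architecture on several random
    # train/validation splits to see whether the loss gap is split-specific.
    # MLPClassifier and make_classification are stand-ins, not the asker's setup.
    from sklearn.datasets import make_classification
    from sklearn.metrics import log_loss
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    for seed in range(5):
        X_tr, X_val, y_tr, y_val = train_test_split(
            X, y, test_size=0.2, random_state=seed)
        model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                              random_state=seed)
        model.fit(X_tr, y_tr)
        train = log_loss(y_tr, model.predict_proba(X_tr))
        val = log_loss(y_val, model.predict_proba(X_val))
        print(f"split {seed}: train loss {train:.3f}, val loss {val:.3f}")

If the train/validation gap is roughly the same across seeds, it is a
property of the model rather than of one lucky or unlucky split.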
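
And a sketch of the bagging idea from the last point, under the same
stand-in assumptions: each ensemble member is fit on a bootstrap resample
of a training pool, and the ensemble averages the predicted probabilities
(a soft vote):

    # Minimal sketch of bagging: fit one copy of the model per bootstrap
    # resample, then average predicted probabilities over the ensemble.
    # MLPClassifier and make_classification are again placeholder stand-ins.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_pool, X_test, y_pool, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    rng = np.random.default_rng(0)
    probas = []
    for i in range(10):
        # bootstrap sample: draw the training pool with replacement
        idx = rng.integers(0, len(X_pool), size=len(X_pool))
        model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                              random_state=i)
        model.fit(X_pool[idx], y_pool[idx])
        probas.append(model.predict_proba(X_test))

    # averaging probabilities and taking the argmax is the soft-vote version
    avg_proba = np.mean(probas, axis=0)
    accuracy = (avg_proba.argmax(axis=1) == y_test).mean()
    print(f"ensemble accuracy: {accuracy:.3f}")

Since every member here shares the same architecture and training pool, the
members stay fairly correlated; subsampling features per member, as in the
random-subspace method, is one way to decorrelate them.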



Source: https://stackoverflow.com/questions/57741135/decreasing-training-loss-stable-validation-loss-is-the-model-overfitting
