Setting Tol for XGBoost Early Stopping

Submitted by 断了今生、忘了曾经 on 2019-12-12 18:03:37

Question


I am using XGBoost with early stopping. After about 1000 boosting rounds the model is still improving, but the magnitude of each improvement is very small. My training call looks like this:

    # stop when the eval metric has not improved for 10 consecutive rounds
    clf = xgb.train(params, dtrain, num_boost_round=num_rounds, evals=watchlist, early_stopping_rounds=10)

Is it possible to set a "tol" for early stopping, i.e. the minimum improvement required to avoid triggering early stopping?

tol is a common parameter in scikit-learn models, such as MLPClassifier and QuadraticDiscriminantAnalysis.
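For example, this is roughly how scikit-learn exposes it (tol and n_iter_no_change are real MLPClassifier parameters; the values shown are only illustrative):

    from sklearn.neural_network import MLPClassifier

    # Training stops once the loss has improved by less than tol for
    # n_iter_no_change consecutive iterations.
    clf = MLPClassifier(tol=1e-4, n_iter_no_change=10, max_iter=1000)

Thank you.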


Answer 1:


I do not think xgboost has a tol parameter, but you can set early_stopping_rounds higher. That parameter means training stops if performance on the evaluation set does not improve for early_stopping_rounds consecutive rounds. If you know that after 1000 rounds your model is still improving, just very slowly, set early_stopping_rounds to 50, for example, so that training is more "tolerant" of small changes in performance.
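A minimal, self-contained sketch of that suggestion (the synthetic data and parameter values are only illustrative; early stopping watches the last entry in evals):

    import xgboost as xgb
    from sklearn.datasets import make_regression
    from sklearn.model_selection import train_test_split

    # Illustrative data; any train/validation DMatrix pair works the same way.
    X, y = make_regression(n_samples=1000, n_features=20, random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
    dtrain = xgb.DMatrix(X_tr, label=y_tr)
    dval = xgb.DMatrix(X_val, label=y_val)

    params = {"objective": "reg:squarederror", "eta": 0.1}
    watchlist = [(dtrain, "train"), (dval, "eval")]

    # early_stopping_rounds=50 instead of 10: training stops only after 50
    # consecutive rounds with no improvement on the "eval" set, so slow but
    # steady gains keep the run alive longer.
    clf = xgb.train(
        params,
        dtrain,
        num_boost_round=5000,
        evals=watchlist,
        early_stopping_rounds=50,
    )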



Source: https://stackoverflow.com/questions/43772623/setting-tol-for-xgboost-early-stopping
