I want to do cross-validation for a LightGBM model with lgb.Dataset and use early_stopping_rounds. The analogous approach works without a problem with XGBoost's xgboost.cv. I prefer not to use scikit-learn's approach with GridSearchCV, because it doesn't support early stopping or lgb.Dataset.
    import lightgbm as lgb
    from sklearn.metrics import mean_absolute_error

    dftrainLGB = lgb.Dataset(data=dftrain, label=ytrain, feature_name=list(dftrain))

    params = {'objective': 'regression'}

    cv_results = lgb.cv(
        params,
        dftrainLGB,
        num_boost_round=100,
        nfold=3,
        metrics='mae',
        early_stopping_rounds=10
    )
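
For comparison, the XGBoost version that runs without errors looks roughly like this (the variable names and the objective string are illustrative, not the exact code from my script):

    # Rough XGBoost equivalent of the call above (illustrative names).
    import xgboost as xgb

    # Same training data and labels as in the LightGBM snippet.
    dtrain = xgb.DMatrix(data=dftrain, label=ytrain)

    # 'reg:squarederror' on recent versions; older versions use 'reg:linear'.
    xgb_params = {'objective': 'reg:squarederror'}

    xgb_cv_results = xgb.cv(
        xgb_params,
        dtrain,
        num_boost_round=100,
        nfold=3,
        metrics='mae',
        early_stopping_rounds=10
    )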
The task is regression, but the code above throws this error: Supported target types are: ('binary', 'multiclass'). Got 'continuous' instead.
Does LightGBM support regression, or did I supply the wrong parameters?