I want to train a LightGBM model with a custom metric: f1_score with weighted average.

I went through the advanced examples of LightGBM here and found the implementation of a custom binary error function. I implemented a similar function to return the f1_score, as shown below.
```python
from sklearn.metrics import f1_score

# custom eval: returns (metric_name, value, is_higher_better)
def f1_metric(preds, train_data):
    labels = train_data.get_label()
    return 'f1', f1_score(labels, preds, average='weighted'), True
```

I then tried to train the model by passing the feval parameter as f1_metric, as shown below.
```python
evals_results = {}
bst = lgb.train(params,
                dtrain,
                valid_sets=[dvalid],
                valid_names=['valid'],
                evals_result=evals_results,
                num_boost_round=num_boost_round,
                early_stopping_rounds=early_stopping_rounds,
                verbose_eval=25,
                feval=f1_metric)
```

Then I get this error:

```
ValueError: Found input variables with inconsistent numbers of samples:
```
The training set is being passed to the function rather than the validation set.
How can I configure this so that the validation set is passed and the f1_score is returned?
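For reference, here is a self-contained sketch of my setup, with synthetic placeholder data and a binary objective standing in for my real data and parameters. The only additions compared to my actual code are a print inside the metric, so I can check which dataset is handed to feval, and a thresholding of preds to hard labels, since f1_score expects class labels rather than the continuous scores feval appears to receive:

```python
import lightgbm as lgb
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score

# Synthetic placeholder data standing in for my real dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, y_train = X[:800], y[:800]
X_valid, y_valid = X[800:], y[800:]

dtrain = lgb.Dataset(X_train, label=y_train)
dvalid = lgb.Dataset(X_valid, label=y_valid, reference=dtrain)

def f1_metric(preds, eval_data):
    labels = eval_data.get_label()
    # Diagnostic print: compare the lengths LightGBM passes in
    # against the number of rows in the evaluated Dataset
    print('len(preds) =', len(preds),
          'len(labels) =', len(labels),
          'rows in eval_data =', eval_data.num_data())
    # Threshold to hard labels so f1_score accepts the predictions
    # (only valid here because this sketch uses a binary objective)
    hard_preds = (np.asarray(preds) > 0.5).astype(int)
    return 'f1', f1_score(labels, hard_preds, average='weighted'), True

# Placeholder parameters; 'metric': 'None' disables the built-in metrics
params = {'objective': 'binary', 'metric': 'None', 'verbose': -1}

bst = lgb.train(params,
                dtrain,
                num_boost_round=10,
                valid_sets=[dvalid],
                valid_names=['valid'],
                feval=f1_metric)
```

The print is only there so I can compare len(preds) and len(labels) against the sizes of dtrain and dvalid each time the metric is evaluated.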