Scikit F-score metric error

Posted by 故事扮演 on 2019-12-05 05:32:40

It seems this is a known bug that has already been fixed; you should try updating sklearn.

However, can anybody explain the meaning of the "UndefinedMetricWarning" warning that I am seeing? What is actually happening behind the scenes?

This is well-described at https://stackoverflow.com/a/34758800/1587329:

https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/classification.py

F1 = 2 * (precision * recall) / (precision + recall)

precision = TP/(TP+FP). As you said, if the predictor doesn't predict the positive class at all, precision is 0.

recall = TP/(TP+FN). If the predictor doesn't predict the positive class, TP is 0, so recall is 0.

So when computing F1 you end up dividing 0/0, which is undefined; scikit-learn sets the score to 0 and emits the UndefinedMetricWarning instead of raising an error.
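As a minimal sketch (the toy arrays below are assumptions, not data from the question), this is how the warning shows up when the positive class is never predicted:

import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0, 0, 0, 1, 1])   # positives exist in the labels...
y_pred = np.array([0, 0, 0, 0, 0])   # ...but are never predicted

# TP = 0 and FP = 0, so precision is 0/0; scikit-learn sets the score
# to 0.0 and emits UndefinedMetricWarning rather than raising an error.
print(f1_score(y_true, y_pred))      # 0.0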

To fix the class-imbalance problem (it's easy for the classifier to (almost) always predict the more prevalent class), you can use class_weight="balanced":

from sklearn.linear_model import LogisticRegressionCV

# tune C by 4-fold CV, optimizing F1 and re-weighting classes by frequency
logistic = LogisticRegressionCV(
    Cs=50,
    cv=4,
    penalty='l2',
    fit_intercept=True,
    scoring='f1',
    class_weight="balanced"
)
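For completeness, a hedged usage sketch on an imbalanced toy dataset (make_classification, the train/test split, and the variable names are my assumptions, not part of the original question):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# roughly 95% negatives vs 5% positives
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

logistic.fit(X_train, y_train)
print(f1_score(y_test, logistic.predict(X_test)))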

The LogisticRegressionCV documentation says:

The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).
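As a quick illustration of what that formula produces (the toy y below is an assumption):

import numpy as np

y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])   # 8 negatives, 2 positives
weights = len(y) / (2 * np.bincount(y))        # n_samples / (n_classes * bincount)
print(weights)                                 # [0.625 2.5] -- the rare class gets the larger weight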
