Scikit learn Error Message 'Precision and F-score are ill-defined and being set to 0.0 in labels'

Question


I'm working on a binary classification model; the classifier is naive Bayes. I have an almost balanced dataset, yet I get the following error message when I predict:

UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples.
  'precision', 'predicted', average, warn_for)

I'm using grid search with 10-fold CV. The test set and the predictions contain both classes, so I don't understand the message. I'm using the same dataset, train/test split, CV and random seed for 6 other models and those work perfectly. The data is ingested externally into a dataframe, randomized, and the seed is fixed. The naive Bayes model class is imported at the beginning of the file, before this code snippet.

# imports assumed by this snippet (the StratifiedKFold(labels, n_folds=...) call
# below uses the pre-0.18 scikit-learn API)
from sklearn.cross_validation import train_test_split, StratifiedKFold
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix, classification_report

X_train, X_test, y_train, y_test, len_train, len_test = \
    train_test_split(data['X'], data['y'], data['len'], test_size=0.4)

pipeline = Pipeline([
    ('classifier', MultinomialNB())
])

cv = StratifiedKFold(len_train, n_folds=10)

len_train = len_train.reshape(-1, 1)
len_test = len_test.reshape(-1, 1)

params = [
    {'classifier__alpha': [0, 0.0001, 0.001, 0.01]}
]

grid = GridSearchCV(
    pipeline,
    param_grid=params,
    refit=True,
    n_jobs=-1,
    scoring='accuracy',
    cv=cv,
)

nb_fit = grid.fit(len_train, y_train)
preds = nb_fit.predict(len_test)

print(confusion_matrix(y_test, preds, labels=['1', '0']))
print(classification_report(y_test, preds))

I was 'forced' by Python to alter the shape of the series; maybe that is the culprit?


Answer 1:


The meaning of the warning

As the other answers here suggest, you have run into a situation where precision and the F-score cannot be computed from their definitions (a division by zero occurs when there are no predicted samples for a label). In such cases, the metric is set to 0.0.

The test data contains all labels, so why does this still happen?

Well, you are using K-Fold cross-validation (in your case k=10), which means that one particular split might contain 0 samples of one of the classes.

It still happens, even when using Stratified K-Fold

This is a little tricky. Stratified K-Fold ensures that each split contains the same proportion of each class. However, the metrics do not depend only on the true classes. For example, precision is computed as TP / (number of predicted positives). If, for some reason, every sample is predicted as negative, the number of predicted positives is 0, which results in an undefined precision (and, in turn, an undefined F-score).

This sounds like an edge case, but consider that in a grid search you are probably trying a whole lot of different parameter combinations, some of which may be totally off and lead to exactly this scenario.
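
To make this concrete, here is a minimal sketch (with made-up labels, not the data from the question) showing the warning fire when every sample is predicted as the negative class:

from sklearn.metrics import precision_score, classification_report

y_true = [0, 1, 1, 0, 1]   # the test fold contains both classes
y_pred = [0, 0, 0, 0, 0]   # the model predicted only the negative class

# precision for label 1 is TP / (TP + FP) = 0 / 0, so scikit-learn emits
# UndefinedMetricWarning and reports the score as 0.0
print(precision_score(y_true, y_pred, pos_label=1))
print(classification_report(y_true, y_pred))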

I hope this answers your question!




Answer 2:


As aadel has commented, when no data points are classified as positive, the precision involves a division by zero, since it is defined as TP / (TP + FP) (i.e., true positives / (true positives + false positives)). The library then sets precision to 0, but issues a warning because the value is actually undefined. F1 depends on precision and hence is not defined either.

Once you are aware of this, you can choose to disable the warning with:

import warnings
import sklearn.exceptions
warnings.filterwarnings("ignore", category=sklearn.exceptions.UndefinedMetricWarning)
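
As a quick illustration (again with made-up labels, not the question's data), running the metrics after installing this filter produces the report without emitting the warning; the ill-defined scores are still shown as 0.0:

import warnings
import sklearn.exceptions
from sklearn.metrics import classification_report

warnings.filterwarnings("ignore", category=sklearn.exceptions.UndefinedMetricWarning)

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 0, 0, 0, 0]   # no positive predictions, so precision/F1 for label 1 are ill-defined

# the report is printed with those scores shown as 0.0, and no warning is raised
print(classification_report(y_true, y_pred))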


Source: https://stackoverflow.com/questions/35225369/scikit-learn-error-message-precision-and-f-score-are-ill-defined-and-being-set
