Question
I am getting quite different results when classifying text (into only two categories) with the Bernoulli Naive Bayes algorithm in NLTK and the one in the scikit-learn module. Although the overall accuracy is comparable between the two (though far from identical), the difference in Type I and Type II errors is significant. In particular, the NLTK Naive Bayes classifier gives more Type I than Type II errors, while the scikit-learn one gives the opposite. This 'anomaly' seems to be consistent across different features and different training samples. Is there a reason for this? Which of the two is more trustworthy?
Answer 1:
NLTK does not implement Bernoulli Naive Bayes. It implements multinomial Naive Bayes but only allows binary features.
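The practical consequence is that the two models treat feature *absence* differently: Bernoulli NB explicitly scores the non-occurrence of every feature, while multinomial NB only scores features that are present. This shifts the decision boundary, which is a plausible source of the asymmetric error types described in the question. Below is a minimal sketch (using made-up binary data, not the poster's features) comparing the two scikit-learn estimators directly; `MultinomialNB` fit on 0/1 features stands in for what NLTK effectively computes, per the answer above:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

# Synthetic toy data for illustration only: 200 documents,
# 10 binary term-presence features, 2 classes.
rng = np.random.RandomState(0)
X = rng.randint(2, size=(200, 10))
y = rng.randint(2, size=200)

bnb = BernoulliNB().fit(X, y)
mnb = MultinomialNB().fit(X, y)   # multinomial NB restricted to binary input

# A document with no features set makes the difference visible:
x = np.zeros((1, 10), dtype=int)
print(bnb.predict_proba(x))   # absent features still carry per-class weight
print(mnb.predict_proba(x))   # reduces exactly to the class priors
```

On an all-zeros document the multinomial log-likelihood is zero for every class, so its posterior equals the class priors, whereas the Bernoulli posterior still depends on the per-class probabilities of each feature being absent. It is this extra term that can tilt one classifier toward Type I errors and the other toward Type II.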
Source: https://stackoverflow.com/questions/15732769/different-results-between-the-bernoulli-naive-bayes-in-nltk-and-in-scikit-learn