I have a dataset of reviews with a positive/negative class label. I am applying Naive Bayes to this reviews dataset. First, I convert the reviews into a bag-of-words representation.
You can get the importance of each word from the fitted model by using the `coef_` or `feature_log_prob_` attributes. For example:
# argsort returns indices in ascending order of log-probability
neg_class_prob_sorted = NB_optimal.feature_log_prob_[0, :].argsort()
pos_class_prob_sorted = NB_optimal.feature_log_prob_[1, :].argsort()

print(np.take(count_vect.get_feature_names(), neg_class_prob_sorted[:10]))
print(np.take(count_vect.get_feature_names(), pos_class_prob_sorted[:10]))
Prints the top ten most predictive words for each of your classes.
Edit
As @yuri-malheiros noted in the comments, these are actually the least important features, because argsort sorts in ascending order. Take the last ten instead:
print(np.take(count_vect.get_feature_names(), neg_class_prob_sorted[-10:]))
print(np.take(count_vect.get_feature_names(), pos_class_prob_sorted[-10:]))
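To see why the most indicative words end up at the end of the argsort result, here is a minimal numpy-only sketch that mimics how `MultinomialNB` computes `feature_log_prob_` (Laplace smoothing with `alpha=1.0`); the vocabulary and per-class counts are made up for illustration:

```python
import numpy as np

vocab = np.array(["bad", "boring", "fine", "good", "great"])
# hypothetical per-class word counts (row 0: negative, row 1: positive)
counts = np.array([
    [30, 20, 5, 3, 2],   # negative reviews
    [2, 1, 6, 25, 40],   # positive reviews
])

alpha = 1.0  # Laplace smoothing, as in MultinomialNB(alpha=1.0)
smoothed = counts + alpha
# log P(word | class): smoothed count over the class's smoothed total
feature_log_prob = np.log(smoothed / smoothed.sum(axis=1, keepdims=True))

neg_sorted = feature_log_prob[0, :].argsort()  # ascending log-probability
print("least indicative of negative:", vocab[neg_sorted[:2]])
print("most indicative of negative:", vocab[neg_sorted[-2:]])
```

The highest-probability words for the class sit at the end of the sorted index array, which is why the answer slices with `[-10:]`. Note also that in recent scikit-learn versions `CountVectorizer.get_feature_names()` has been replaced by `get_feature_names_out()`.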