I have a dataset of reviews with a positive/negative class label. I am applying Naive Bayes to this dataset, first converting the reviews into a bag of words. How can I find the most important words for each class?
I had the same trouble. Maybe this belongs on the Data Science Stack Exchange, but I want to post it here since I achieved a very good result.
First, some notation: + stands for the positive class, - stands for the negative class, and P() stands for probability.
We are going to build an odds ratio, which can be shown to equal P(word_i, +) / P(word_i, -); by the definition of conditional probability this factors as P(word_i | +) * P(+) / (P(word_i | -) * P(-)), which is exactly what the code below computes (let me know if you need the full demonstration). If this ratio is greater than 1, word_i is more likely to occur in positive texts than in negative texts.
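For the snippets below I assume the usual scikit-learn setup; the names count_vect, nb, df_train and the 'text' column are mine and may differ from yours:

# Assumed setup (adjust names to your own code): df_train has a 'text'
# column with the raw reviews and a 'y' column with 0 = positive, 1 = negative.
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

count_vect = CountVectorizer()
X_train = count_vect.fit_transform(df_train['text'])
nb = MultinomialNB()
nb.fit(X_train, df_train['y'])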
These are the priors in the Naive Bayes model:

prob_pos = df_train['y'].value_counts()[0] / len(df_train)  # P(+), assuming label 0 = positive
prob_neg = df_train['y'].value_counts()[1] / len(df_train)  # P(-), assuming label 1 = negative
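Equivalently (assuming you fit with the default fit_prior=True), you can read the priors straight from the fitted model:

# Same priors, taken from the model instead of recomputed by hand.
# nb.class_log_prior_[i] is log P(class nb.classes_[i]).
prob_pos, prob_neg = np.exp(nb.class_log_prior_)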
Create a DataFrame indexed by the vocabulary:

df_nbf = pd.DataFrame(index=count_vect.get_feature_names_out())
# On scikit-learn < 1.0 use count_vect.get_feature_names() instead.
# Convert log probabilities back to probabilities.
# Row i of feature_log_prob_ corresponds to class nb.classes_[i].
df_nbf['pos'] = np.exp(nb.feature_log_prob_[0, :])  # P(word | +)
df_nbf['neg'] = np.exp(nb.feature_log_prob_[1, :])  # P(word | -)
# Ratio of the probabilities themselves (not of their logs), scaled by the priors.
df_nbf['odds_positive'] = (df_nbf['pos'] / df_nbf['neg']) * (prob_pos / prob_neg)
df_nbf['odds_negative'] = (df_nbf['neg'] / df_nbf['pos']) * (prob_neg / prob_pos)
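If you prefer to stay in log space for numerical stability (my suggestion, not part of the original recipe), the same ratios can be computed as a difference of logs:

# Equivalent computation via log differences, which avoids
# dividing very small probabilities.
log_odds_pos = (nb.feature_log_prob_[0, :] - nb.feature_log_prob_[1, :]
                + np.log(prob_pos) - np.log(prob_neg))
df_nbf['odds_positive'] = np.exp(log_odds_pos)
df_nbf['odds_negative'] = np.exp(-log_odds_pos)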
Most important words. This will give you a ratio greater than 1. For example, an odds_negative of 2 for the word "damn" means that this word is twice as likely to occur when the comment (or your class) is negative compared with your positive class.
# Here are the top 5 most important words of your positive class:
odds_pos_top5 = df_nbf['odds_positive'].nlargest(5)
# Here are the top 5 most important words of your negative class:
odds_neg_top5 = df_nbf['odds_negative'].nlargest(5)
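One caveat worth checking (my addition): the row order of feature_log_prob_ follows nb.classes_, so verify that row 0 really is your positive label before trusting the ratios:

# Row i of feature_log_prob_ corresponds to label nb.classes_[i];
# with 0 = positive this should print array([0, 1]).
print(nb.classes_)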