How to get feature importance in Naive Bayes?

萌比男神i 2020-12-28 18:41

I have a dataset of reviews with a positive/negative class label. I am applying Naive Bayes to this reviews dataset. First, I am converting it into a bag of words. Here …
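The answers below all refer to a fitted CountVectorizer and Naive Bayes model. A minimal sketch of that setup, assuming the variable names used in the answers (count_vect, NB_optimal, X_test) and hypothetical train_reviews / test_reviews / train_labels inputs that are not part of the original question:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # train_reviews / test_reviews: lists of review strings (assumed names)
    # train_labels: 0/1 class labels for the training reviews (assumed)
    count_vect = CountVectorizer()
    X_train = count_vect.fit_transform(train_reviews)   # bag-of-words count matrix
    X_test = count_vect.transform(test_reviews)

    NB_optimal = MultinomialNB(alpha=1.0)
    NB_optimal.fit(X_train, train_labels)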

4 Answers
  • 2020-12-28 19:01

    Try this:

    import numpy as np

    pred_proba = NB_optimal.predict_proba(X_test)  # class probabilities for each test review
    words = np.take(count_vect.get_feature_names(), pred_proba.argmax(axis=1))
    
  • 2020-12-28 19:08

    You can get the importance of each word out of the fitted model by using the coef_ or feature_log_prob_ attributes. For example:

    # argsort gives vocabulary indices ordered by log probability (ascending)
    neg_class_prob_sorted = NB_optimal.feature_log_prob_[0, :].argsort()
    pos_class_prob_sorted = NB_optimal.feature_log_prob_[1, :].argsort()

    print(np.take(count_vect.get_feature_names(), neg_class_prob_sorted[:10]))
    print(np.take(count_vect.get_feature_names(), pos_class_prob_sorted[:10]))
    

    This prints the ten most predictive words for each of your classes.

    Edit

    As noted in the comments by @yuri-malheiros, this actually gives the least important features. Take the last ten instead:

    print(np.take(count_vect.get_feature_names(), neg_class_prob_sorted[-10:]))
    print(np.take(count_vect.get_feature_names(), pos_class_prob_sorted[-10:]))
    
  • 2020-12-28 19:23
    def get_salient_words(nb_clf, vect, class_ind):
        """Return salient words for given class
        Parameters
        ----------
        nb_clf : a Naive Bayes classifier (e.g. MultinomialNB, BernoulliNB)
        vect : CountVectorizer
        class_ind : int
        Returns
        -------
        list
            a sorted list of (word, log prob) sorted by log probability in descending order.
        """
    
        words = vect.get_feature_names()
        zipped = list(zip(words, nb_clf.feature_log_prob_[class_ind]))
        sorted_zip = sorted(zipped, key=lambda t: t[1], reverse=True)
    
        return sorted_zip
    
    neg_salient_top_20 = get_salient_words(NB_optimal, count_vect, 0)[:20]
    pos_salient_top_20 = get_salient_words(NB_optimal, count_vect, 1)[:20]
    
  • 2020-12-28 19:24

    I had the same trouble. Maybe this belongs on the Data Science Stack Exchange, but I want to post it here since I achieved a very good result.

    First: + stands for the positive class, - stands for the negative class, and P() stands for probability.

    We are going to build an odds ratio, which can be shown to equal P(word_i, +) / P(word_i, -) (let me know if you need the demonstration). If this ratio is greater than 1, it means that word_i is more likely to occur in positive texts than in negative texts.
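    As a quick sketch of the demonstration (just the product rule P(word, class) = P(word | class) * P(class)), which is what the code below computes:

        P(word_i, +) / P(word_i, -) = [ P(word_i | +) / P(word_i | -) ] * [ P(+) / P(-) ]

    The first factor comes from exponentiating feature_log_prob_, and the second is the ratio of class priors computed next.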

    These are the priors in the Naive Bayes model:

    # Class priors; assumes label 0 = positive and 1 = negative, matching the row order used below
    prob_pos = df_train['y'].value_counts()[0]/len(df_train)
    prob_neg = df_train['y'].value_counts()[1]/len(df_train)
    

    Create a DataFrame, indexed by the words, to store the per-class word probabilities:

    df_nbf = pd.DataFrame()
    df_nbf.index = count_vect.get_feature_names()
    # Convert log probabilities to probabilities. 
    df_nbf['pos'] = np.e**(nb.feature_log_prob_[0, :])
    df_nbf['neg'] = np.e**(nb.feature_log_prob_[1, :])
    
    
    # Odds ratios: ratio of the per-class word probabilities (the columns above)
    # multiplied by the ratio of class priors.
    df_nbf['odds_positive'] = (df_nbf['pos']/df_nbf['neg'])*(prob_pos/prob_neg)

    df_nbf['odds_negative'] = (df_nbf['neg']/df_nbf['pos'])*(prob_neg/prob_pos)
    
    

    Most important words: this will give you a ratio greater than 1. For example, an odds_negative of 2 for the word "damn" means that this word is twice as likely to occur when the comment (or your class) is negative compared with the positive class.

    # Here are the top5 most important words of your positive class:
    odds_pos_top5 = df_nbf.sort_values('odds_positive',ascending=False)['odds_positive'][:5]
    # Here are the top5 most important words of your negative class:
    odds_neg_top5 = df_nbf.sort_values('odds_negative',ascending=False)['odds_negative'][:5]
    
    