The easiest way for getting feature names after running SelectKBest in Scikit Learn

Submitted by 别来无恙 on 2019-11-27 01:45:36

Question


I would like to do supervised learning.

So far I know how to run supervised learning on all of the features.

However, I would also like to run an experiment with only the K best features.

I read the documentation and found that scikit-learn provides the SelectKBest method.

Unfortunately, I am not sure how to create a new dataframe after finding those best features.

Let's assume I would like to run an experiment with the 5 best features:

from sklearn.feature_selection import SelectKBest, f_classif
select_k_best_classifier = SelectKBest(score_func=f_classif, k=5).fit_transform(features_dataframe, targeted_class)

Now if I add the next line:

dataframe = pd.DataFrame(select_k_best_classifier)

I will receive a new dataframe without feature names (only column indices from 0 to 4).

I should instead write:

dataframe = pd.DataFrame(fit_transformed_features, columns=features_names)

My question is: how do I create the features_names list?

I know that I should use select_k_best_classifier.get_support(), which returns an array of boolean values.

A True value in the array means the corresponding column is selected.

How should I use this boolean array together with the array of all feature names, which I can get via:

feature_names = list(features_dataframe.columns.values)
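For reference, get_support() is a method of the fitted selector object, not of the array returned by fit_transform(), so the selector itself has to be kept in a variable. A minimal sketch with synthetic stand-in data (the dataset is made up; the names features_dataframe and targeted_class follow the question):

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-ins for the question's features_dataframe / targeted_class
X, targeted_class = make_classification(n_samples=100, n_features=8, random_state=0)
features_dataframe = pd.DataFrame(X, columns=[f"feat_{i}" for i in range(8)])

# Keep the fitted selector so get_support() is available afterwards
select_k_best_classifier = SelectKBest(score_func=f_classif, k=5)
transformed = select_k_best_classifier.fit_transform(features_dataframe, targeted_class)

# Boolean mask over the original columns -> names of the kept features
kept_names = features_dataframe.columns[select_k_best_classifier.get_support()]
dataframe = pd.DataFrame(transformed, columns=kept_names)
```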

Answer 1:


You can do the following:

mask = select_k_best_classifier.get_support()  # array of booleans
new_features = []  # The list of your K best features

for selected, feature in zip(mask, feature_names):
    if selected:
        new_features.append(feature)

Then build the dataframe with those feature names:

dataframe = pd.DataFrame(fit_transformed_features, columns=new_features)



Answer 2:


This worked for me and doesn't require loops.

# Create and fit the selector
selector = SelectKBest(f_classif, k=5)
selector.fit(features_df, target)
# Get the integer positions of the columns to keep
cols = selector.get_support(indices=True)
# Create a new dataframe with only the desired columns
# (use iloc, since these are integer positions, not labels)
features_df_new = features_df.iloc[:, cols]



Answer 3:


For me this code works fine and is more 'pythonic':

mask = select_k_best_classifier.get_support()
new_features = features_dataframe.columns[mask]
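As a self-contained sketch of this approach (the tiny dataframe, its column names, and the labels below are all made up for illustration):

```python
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif

# Illustrative data: "a" tracks the labels closely, "b" weakly, "c" not at all
features_dataframe = pd.DataFrame({
    "a": [0.1, 0.9, 0.2, 0.8, 0.0, 1.0],
    "b": [5, 4, 3, 2, 1, 0],
    "c": [1, 1, 0, 0, 1, 1],
})
targeted_class = [0, 1, 0, 1, 0, 1]

# Fit the selector, then index the columns with the boolean support mask
select_k_best_classifier = SelectKBest(f_classif, k=2).fit(features_dataframe, targeted_class)
mask = select_k_best_classifier.get_support()      # boolean array, one entry per column
new_features = features_dataframe.columns[mask]    # pandas Index of the kept column names
print(list(new_features))
```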



Answer 4:


The following code will help you find the top K features along with their F-scores. Let X be the pandas dataframe whose columns are all the features, and let y be the list of class labels.

import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif

# Suppose we select the 5 features with the top 5 F-scores
selector = SelectKBest(f_classif, k=5)
# New dataframe with the selected features, for later use in the classifier.
# fit() alone works too, if you want only the feature names and their scores.
X_new = selector.fit_transform(X, y)
names = X.columns.values[selector.get_support()]
scores = selector.scores_[selector.get_support()]
names_scores = list(zip(names, scores))
ns_df = pd.DataFrame(data=names_scores, columns=['Feat_names', 'F_Scores'])
# Sort the dataframe for better readability
ns_df_sorted = ns_df.sort_values(['F_Scores', 'Feat_names'], ascending=[False, True])
print(ns_df_sorted)


Source: https://stackoverflow.com/questions/39839112/the-easiest-way-for-getting-feature-names-after-running-selectkbest-in-scikit-le
