Recursive feature elimination on Random Forest using scikit-learn

一向 2020-12-28 18:06

I'm trying to perform recursive feature elimination using scikit-learn and a random forest classifier, with OOB ROC as the method of scoring each subset created during the recursive process.
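
Note that scikit-learn also ships a built-in recursive eliminator, RFECV, which accepts any estimator exposing feature_importances_ (RandomForestClassifier included); it scores each subset with cross-validated ROC AUC rather than OOB ROC. A minimal sketch on synthetic stand-in data:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import RFECV

    # synthetic stand-in for the real dataset
    X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

    # eliminate one feature per iteration, scoring each subset by cross-validated ROC AUC
    selector = RFECV(
        RandomForestClassifier(n_estimators=100, random_state=0, n_jobs=-1),
        step=1,
        scoring='roc_auc',
        cv=5,
    )
    selector.fit(X, y)
    print('optimal number of features: %d' % selector.n_features_)
    print('selected feature indices: %s' % np.flatnonzero(selector.support_))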

4 Answers
  •  盖世英雄少女心
    2020-12-28 18:26

    This is my code; I've tidied it up a bit to make it relevant to your task:

        import numpy as np
        import pandas as pd
        from sklearn import metrics
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import classification_report, log_loss

        features_to_use = fea_cols  # this is a list of feature names
        # empty dataframe to record each run's performance
        trim_5_df = pd.DataFrame(columns=['num_features', 'features_used', 'logloss'])
        run = 1
        # this removes the 5 worst features, as ranked by the feature
        # importances computed by the RF classifier, on each pass
        while len(features_to_use) > 6:
            print('number of features: %d' % len(features_to_use))
            # build the classifier
            clf = RandomForestClassifier(n_estimators=1000, random_state=0, n_jobs=-1)
            # train the classifier
            clf.fit(train[features_to_use], train['OpenStatusMod'].values)
            print('classifier score: %f\n' % clf.score(train[features_to_use], train['OpenStatusMod'].values))
            # predict the class and print the classification report, f1 micro, f1 macro score
            pred = clf.predict(test[features_to_use])
            print(classification_report(test['OpenStatusMod'].values, pred, target_names=status_labels))
            print('micro score: ')
            print(metrics.precision_recall_fscore_support(test['OpenStatusMod'].values, pred, average='micro'))
            print('macro score:\n')
            print(metrics.precision_recall_fscore_support(test['OpenStatusMod'].values, pred, average='macro'))
            # predict the class probabilities
            probs = clf.predict_proba(test[features_to_use])
            # rescale the priors (cap_and_update_priors is a helper from my own kf module)
            new_probs = kf.cap_and_update_priors(priors, probs, private_priors, 0.001)
            # calculate log loss with the rescaled probabilities
            logloss = log_loss(test['OpenStatusMod'].values, new_probs)
            print('log loss: %f\n' % logloss)
            row = {}
            if hasattr(clf, "feature_importances_"):
                # sort the features by importance, descending
                sorted_idx = np.argsort(clf.feature_importances_)[::-1]
                # record this run
                row['num_features'] = len(features_to_use)
                row['features_used'] = ','.join(features_to_use)
                # drop the 5 least important features
                sorted_idx = sorted_idx[:-5]
                # swap the features list for the trimmed one
                features_to_use = [features_to_use[i] for i in sorted_idx]
                # add the log-loss performance
                row['logloss'] = [logloss]
            print('')
            # add the row to the results dataframe
            trim_5_df = pd.concat([trim_5_df, pd.DataFrame(row)], ignore_index=True)
            run += 1
    

    So what I'm doing here is: I have a list of features I want to train on and then predict against; using the feature importances, I trim the worst 5 and repeat. During each run I add a row recording the prediction performance, so that I can do some analysis later.

    The original code was much bigger, and I had different classifiers and datasets I was analysing, but I hope you get the picture from the above. The thing I noticed was that, for random forest, the number of features removed on each run affected the performance: trimming 1, 3, or 5 features at a time resulted in a different set of best features.
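
    A quick way to see this step-size sensitivity, sketched on synthetic stand-in data rather than my real dataset: run scikit-learn's RFE with different step values and compare which features survive:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import RFE

        # synthetic stand-in data
        X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

        # compare which features survive for different elimination step sizes
        for step in (1, 3, 5):
            rfe = RFE(
                RandomForestClassifier(n_estimators=100, random_state=0, n_jobs=-1),
                n_features_to_select=5,
                step=step,
            )
            rfe.fit(X, y)
            print('step=%d -> selected feature indices: %s' % (step, np.flatnonzero(rfe.support_)))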

    I found that using a GradientBoostingClassifier was more predictable and repeatable, in the sense that the final set of best features agreed whether I trimmed 1 feature at a time, or 3, or 5.
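
    Since GradientBoostingClassifier also exposes feature_importances_, swapping it in is a one-line change, whether in the loop above or in RFE; a minimal sketch, again on stand-in data:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.feature_selection import RFE

        X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

        # same elimination, but ranked by gradient-boosting feature importances
        rfe = RFE(GradientBoostingClassifier(n_estimators=100, random_state=0),
                  n_features_to_select=5, step=1)
        rfe.fit(X, y)
        print('selected feature indices: %s' % np.flatnonzero(rfe.support_))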

    I hope I'm not teaching you to suck eggs here, you probably know more than me, but my approach to ablative analysis was to use a fast classifier to get a rough idea of the best sets of features, then switch to a better-performing classifier, and then start hyperparameter tuning, doing coarse-grained comparisons first and fine-grained ones once I had a feel for the best params.
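
    To sketch that coarse-then-fine idea (the grids below are just illustrative values):

        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import GridSearchCV

        X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

        # coarse pass: widely spaced values to find the right neighbourhood
        coarse = GridSearchCV(
            GradientBoostingClassifier(random_state=0),
            {'n_estimators': [50, 200, 800], 'learning_rate': [0.01, 0.1, 0.5]},
            scoring='roc_auc', cv=3,
        )
        coarse.fit(X, y)
        print('coarse best: %s' % coarse.best_params_)

        # fine pass: a tighter grid around the coarse winner
        # (hard-coded here for illustration; in practice read it off coarse.best_params_)
        fine = GridSearchCV(
            GradientBoostingClassifier(random_state=0),
            {'n_estimators': [150, 200, 300], 'learning_rate': [0.05, 0.1, 0.2]},
            scoring='roc_auc', cv=3,
        )
        fine.fit(X, y)
        print('fine best: %s' % fine.best_params_)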
