How to extract feature importances from an Sklearn pipeline

情书的邮戳 2020-12-31 10:32

I've built a pipeline in scikit-learn with two steps: the first constructs features, and the second is a RandomForestClassifier.

While I can save that pipeline, look at the various steps, and inspect the parameters set in each step, I'd like to be able to examine the feature importances of the resulting model. Is that possible?
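
Roughly speaking, the pipeline looks like the sketch below (the feature-construction step, data, and parameters here are placeholders for illustration, not the real code):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import PolynomialFeatures

    # Placeholder data and feature-construction step for illustration.
    X, y = load_iris(return_X_y=True)

    pipeline = Pipeline([
        ('features', PolynomialFeatures(degree=2)),  # builds new features
        ('predictor', RandomForestClassifier(n_estimators=50, n_jobs=2)),
    ])
    pipeline.fit(X, y)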

2 Answers
  •  轻奢々  2020-12-31 11:09

    Ah, yes it is.

    You just identify the step that holds the estimator you want to inspect:

    For instance:

    pipeline.steps[1]
    

    Which returns:

    ('predictor',
     RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
                 max_depth=None, max_features='auto', max_leaf_nodes=None,
                 min_samples_leaf=1, min_samples_split=2,
                 min_weight_fraction_leaf=0.0, n_estimators=50, n_jobs=2,
                 oob_score=False, random_state=None, verbose=0,
                 warm_start=False))
    

    You can then access the model step directly:

    pipeline.steps[1][1].feature_importances_
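
    As a usage sketch, assuming a fitted pipeline like the one in the question (the step names 'features' and 'predictor', and a feature step that implements get_feature_names_out, which recent scikit-learn transformers generally do, are assumptions here), you can also reach the step by name and pair the importances with feature names:

    # Look the estimator up by step name instead of position;
    # this does not break if steps are added or reordered.
    rf = pipeline.named_steps['predictor']
    importances = rf.feature_importances_

    # If the feature step can report its output feature names,
    # print the importances in a readable, sorted form.
    names = pipeline.named_steps['features'].get_feature_names_out()
    for name, score in sorted(zip(names, importances),
                              key=lambda pair: pair[1], reverse=True):
        print(f"{name}: {score:.4f}")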
