Pyspark random forest feature importance mapping after column transformations

Submitted by 偶尔善良 on 2019-11-27 14:54:46

Extract metadata as shown here by user6910411

from itertools import chain

# Each attribute carries the index ("idx") and name assigned by the transformers
# (e.g. VectorAssembler) that built the "features" vector column.
attrs = sorted(
    (attr["idx"], attr["name"])
    for attr in chain(*dataset.schema["features"].metadata["ml_attr"]["attrs"].values()))

and combine with feature importance:

# Keep only features with non-zero importance; dtModel_1 is the fitted model.
[(name, dtModel_1.featureImportances[idx])
 for idx, name in attrs
 if dtModel_1.featureImportances[idx]]
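
For context, here is a minimal end-to-end sketch of how these pieces fit together; the input DataFrame df, its column names, and the variable rfModel are assumptions for illustration, not part of the original answer:

from itertools import chain

from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.feature import StringIndexer, VectorAssembler

# df is a hypothetical input DataFrame with columns "cat", "num1", "num2", "label".
indexer = StringIndexer(inputCol="cat", outputCol="cat_idx")
assembler = VectorAssembler(inputCols=["cat_idx", "num1", "num2"], outputCol="features")
rf = RandomForestClassifier(featuresCol="features", labelCol="label")

model = Pipeline(stages=[indexer, assembler, rf]).fit(df)
dataset = model.transform(df)      # "features" now carries the "ml_attr" metadata
rfModel = model.stages[-1]         # the fitted RandomForestClassificationModel

attrs = sorted(
    (attr["idx"], attr["name"])
    for attr in chain(*dataset.schema["features"].metadata["ml_attr"]["attrs"].values()))

named_importances = [(name, rfModel.featureImportances[idx]) for idx, name in attrs]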

The transformed dataset's metadata has the required attributes. Here is an easy way to do this:

  1. Create a pandas DataFrame (the feature list is generally not huge, so there are no memory issues in storing it as a pandas DataFrame):

    import pandas as pd

    ml_attrs = dataset.schema["features"].metadata["ml_attr"]["attrs"]
    pandasDF = pd.DataFrame(ml_attrs["binary"] + ml_attrs["numeric"]).sort_values("idx")
    
  2. Then create a broadcast dictionary to do the mapping; broadcasting is necessary in a distributed environment (see the sketch after this list).

    feature_dict = dict(zip(pandasDF["idx"],pandasDF["name"])) 
    
    feature_dict_broad = sc.broadcast(feature_dict)
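
For illustration, a hedged sketch of using the plain and broadcast dictionaries; rfModel and the column "feature_idx" are assumptions carried over from the sketch above, not part of the original answer:

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

# Driver side: pair each importance value with its feature name.
# rfModel is assumed to be the fitted forest model from the first answer.
importance_by_name = sorted(
    ((name, float(rfModel.featureImportances[int(idx)]))
     for idx, name in feature_dict.items()),
    key=lambda kv: kv[1],
    reverse=True)

# Executor side: tasks read the broadcast copy instead of capturing the dict
# in every closure. "feature_idx" would be a hypothetical integer column
# holding feature indices to resolve to names.
idx_to_name = udf(lambda i: feature_dict_broad.value.get(i), StringType())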
    