How to handle categorical features for Decision Tree, Random Forest in spark ml?

流过昼夜 submitted on 2019-12-04 10:38:34

One-Hot Encoding should be done for categorical variables with more than two categories.

To understand why, you should know the difference between the sub categories of categorical data: Ordinal data and Nominal data.

Ordinal Data: The values have some sort of ordering between them. Example: customer feedback (excellent, good, neutral, bad, very bad). As you can see, there is a clear ordering between them (excellent > good > neutral > bad > very bad). In this case StringIndexer alone is sufficient for modelling purposes.

Nominal Data: The values have no defined ordering between them. Example: colours (black, blue, white, ...). In this case StringIndexer alone is NOT sufficient, and One-Hot Encoding is required after String Indexing.

After String Indexing, let's assume the output is:

 id | colour   | categoryIndex
----|----------|---------------
 0  | black    | 0.0
 1  | white    | 1.0
 2  | yellow   | 2.0
 3  | red      | 3.0

Then without One-Hot Encoding, the machine learning algorithm will assume: red > yellow > white > black, which we know is not true. OneHotEncoder() will help us avoid this situation.
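The idea behind one-hot encoding can be sketched in plain Python (this mirrors what Spark's OneHotEncoder does, with the caveat that Spark emits sparse vectors and by default drops the last category, encoding it as the all-zeros vector):

```python
# A minimal sketch of the one-hot idea (plain Python, not Spark):
# each category index becomes a binary vector, so no ordering
# between categories is implied.

def one_hot(index, num_categories, drop_last=True):
    """Map a category index to a one-hot list.

    drop_last=True mirrors Spark's OneHotEncoder default, which
    encodes the last category as the all-zeros vector.
    """
    size = num_categories - 1 if drop_last else num_categories
    vec = [0.0] * size
    if index < size:
        vec[index] = 1.0
    return vec

# The indexed colours from the table above (4 categories):
for idx, colour in enumerate(["black", "white", "yellow", "red"]):
    print(colour, one_hot(idx, 4))
# black [1.0, 0.0, 0.0] ... red [0.0, 0.0, 0.0]
```

Because each colour now lives on its own axis, no colour is "greater than" another.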

So to answer your question,

Will indexed feature be considered as continuous in the algorithm?

It will be considered as a continuous variable.

Is it the right approach? Or should I go ahead with One-Hot-Encoding for categorical features

It depends on your understanding of the data. Although Random Forest and some boosting methods don't require One-Hot Encoding, most ML algorithms need it.

Refer: https://spark.apache.org/docs/latest/ml-features.html#onehotencoder

In short, Spark's RandomForest does NOT require OneHotEncoder for categorical features created by StringIndexer or VectorIndexer.

Longer Explanation. In general DecisionTrees can handle both Ordinal and Nominal types of data. However, when it comes to the implementation, it could be that OneHotEncoder is required (as it is in Python's scikit-learn).
Luckily, Spark's implementation of RandomForest honors categorical features if they are properly handled, and OneHotEncoder is NOT required! Proper handling means that the categorical features carry the corresponding metadata, so that RF knows what it is working on. Features created by StringIndexer or VectorIndexer carry metadata in the DataFrame marking them as generated by the Indexer and as categorical.

According to vdep's answer, StringIndexer is enough for Ordinal Data. However, StringIndexer orders labels by frequency by default (stringOrderType = "frequencyDesc"), so for example "excellent > good > neutral > bad > very bad" may become "good, excellent, neutral, ...". So for Ordinal data, the default StringIndexer ordering does not preserve the intended order.
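The default frequency-based ordering can be sketched in plain Python; note how the resulting indices reflect label counts, not the ordinal ranking of the feedback values:

```python
from collections import Counter

# A sketch of StringIndexer's default behaviour (stringOrderType =
# "frequencyDesc"): labels are indexed by descending frequency,
# not by their ordinal meaning.
feedback = ["good", "good", "good", "excellent", "excellent", "neutral"]

counts = Counter(feedback)
# Ties are broken alphabetically here just to make the sketch deterministic.
ordered = sorted(counts, key=lambda lbl: (-counts[lbl], lbl))
index = {lbl: float(i) for i, lbl in enumerate(ordered)}
print(index)  # {'good': 0.0, 'excellent': 1.0, 'neutral': 2.0}
```

Here "good" gets index 0.0 ahead of "excellent" purely because it is more frequent, so the ordinal order is lost; to preserve it you would need an explicit mapping of your own (or a non-default stringOrderType such as alphabetical, if that happens to match your ordering).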

Secondly, for Nominal Data, the Spark documentation tells us that:

for a binary classification problem with one categorical feature with three categories A, B and C whose corresponding proportions of label 1 are 0.2, 0.6 and 0.4, the categorical features are ordered as A, C, B. The two split candidates are A | C, B and A, C | B where | denotes the split.
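The ordering described in that quote can be reproduced with a short sketch: categories are sorted by their proportion of label 1, and only the prefix splits of that ordering are considered:

```python
# Sketch of how a binary-classification tree orders a categorical
# feature: sort categories by P(label = 1 | category), then consider
# only the prefix splits of that ordering.
proportions = {"A": 0.2, "B": 0.6, "C": 0.4}

ordered = sorted(proportions, key=proportions.get)  # ['A', 'C', 'B']
splits = [(ordered[:i], ordered[i:]) for i in range(1, len(ordered))]
print(splits)  # [(['A'], ['C', 'B']), (['A', 'C'], ['B'])]
```

This also shows that the proportion of label 1 is a property of the labels within each category, which is not the same thing as the category's overall frequency.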

Is the "corresponding proportion of label 1" the same as the label frequency? So I am confused about the feasibility of using StringIndexer with DecisionTree in Spark.
