How to use dummy variable to represent categorical data in python scikit-learn random forest

Submitted by 情到浓时终转凉″ on 2019-11-29 08:22:36

Using boolean features encoded as 0 and 1 should work. If predictive accuracy is poor even with a large number of decision trees in your forest, it may be that your data is too noisy for the learning algorithm to pick up anything interesting.
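As a sketch of what "boolean features encoded as 0 and 1" looks like in practice, here is one way to dummify a string column with scikit-learn's OneHotEncoder (the color values are made up for illustration):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# A single categorical column with 3 distinct values (illustrative data).
X = np.array([['red'], ['green'], ['green'], ['blue']])

# OneHotEncoder turns each category into its own 0/1 column;
# .toarray() converts the default sparse result to a dense matrix.
X_dummies = OneHotEncoder().fit_transform(X).toarray()

# 4 rows, 3 dummy columns, exactly one 1 per row.
```

The resulting 0/1 matrix can be fed to any scikit-learn estimator, including a random forest.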

Have you tried to fit a linear model (e.g. Logistic Regression) as a baseline on this data?
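A baseline comparison along those lines might look like the following sketch; the synthetic dataset from make_classification stands in for your own data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data; replace with your own X, y.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Mean cross-validated accuracy of the linear baseline...
baseline = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

# ...versus the random forest.
forest = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0), X, y, cv=5
).mean()

# If the forest cannot clearly beat the linear baseline, the signal in the
# data may be weak (or mostly linear) rather than the encoding being at fault.
```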

Edit: in practice, integer coding for categorical variables tends to work very well with many randomized decision-tree models (such as RandomForest and ExtraTrees in scikit-learn).

Scikit-learn's random forest classifier can work with dummified variables, but it can also use categorical variables directly, which is the preferred approach: just map your strings to integers. Assume your feature vector is ['a', 'b', 'b', 'c']:

vals = ['a', 'b', 'b', 'c']
# Map each distinct value to a unique integer (sorted, so the mapping is reproducible;
# iterating over a plain set gives an arbitrary order):
intmap = {val: i for i, val in enumerate(sorted(set(vals)))}
# Replace each string with its corresponding integer:
new_vals = [intmap[val] for val in vals]

new_vals now holds [0, 1, 1, 2], and you can give it to the RF directly, without doing the dummification.
