How to perform random forest/cross validation in R


Question


I'm unable to find a way of performing cross validation on a regression random forest model that I'm trying to produce.

So I have a dataset containing 1664 explanatory variables (different chemical properties) and one response variable (retention time). I'm trying to produce a regression random forest model so that I can predict the retention time of a chemical given its properties.

ID      RT (seconds)    1_MW      2_AMW    3_Sv     4_Se
4281    38              145.29    5.01     14.76    28.37
4952    40              132.19    6.29     11.00    21.28
4823    41              176.21    7.34     12.90    24.92
3840    41              174.24    6.70     13.99    26.48
3665    42              240.34    9.24     15.20    27.08
3591    42              161.23    6.20     13.71    26.27
3659    42              146.22    6.09     12.60    24.16

This is an example of the table I have. Essentially I want to model RT as a function of 1_MW and so on (up to all 1664 variables), so I can find which of these variables are important and which aren't.

I do:-

library(randomForest)

r = randomForest(RT..seconds.~., data = cadets, importance = TRUE, do.trace = 100)
varImpPlot(r)   # plot variable importance

which tells me which variables are important and which are not, which is great. However, I want to be able to partition my dataset so that I can perform cross-validation on it. I found an online tutorial explaining how to do this, but for a classification model rather than a regression one.

I understand you do:-

k = 10                      # number of folds
n = floor(nrow(cadets)/k)   # number of rows in each fold
i = 1                       # index of the current fold
s1 = ((i-1) * n+1)          # first row of the fold
s2 = (i * n)                # last row of the fold
subset = s1:s2              # row indices making up the fold

to define how many folds you want, the size of each fold, and the start and end indices of the subset. However, I don't know what to do after this point. I was told to loop through the folds, but I honestly have no idea how to do that. Nor do I know how to then plot the validation set and the test set on the same graph to show the level of accuracy/error.
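From the classification tutorial, my best guess is that the loop should train on everything outside subset and predict on subset, something like the sketch below (mean squared error is just my stand-in for an accuracy measure), but I can't tell if this is right for regression:

library(randomForest)

k = 10
n = floor(nrow(cadets)/k)
mse = numeric(k)                 # one error value per fold
for (i in 1:k) {
    s1 = ((i-1) * n+1)
    s2 = (i * n)
    subset = s1:s2
    fit = randomForest(RT..seconds.~., data = cadets[-subset, ])   # train without the fold
    pred = predict(fit, cadets[subset, ])                          # predict on the held-out fold
    mse[i] = mean((pred - cadets$RT..seconds.[subset])^2)
}
mean(mse)                        # average held-out error over the k folds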

If you could please help me with this I'd be ever so grateful, thanks!


Answer 1:


This is actually faster, as well as quite easy to do, in Python using the scikit-learn library (http://scikit-learn.org/stable/modules/cross_validation.html). You can do K-fold cross-validation, stratified K-fold (which ensures that the classes are equally distributed across the folds), leave-one-out, and others.

It's also very easy to generate the ROC curve, feature importances, and other evaluation metrics.

Here's a quick example:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, r2_score, roc_auc_score,
                             matthews_corrcoef, confusion_matrix)

# `data`: a NumPy array with labels in the first column (as in the original snippet);
# row 0 is assumed to be a header and is skipped
y = data[1:, 0].astype(np.float64)
X = data[1:, 1:].astype(np.float64)

rf = RandomForestClassifier(n_estimators=100)
cv = StratifiedKFold(n_splits=5)

precision, accuracy, sensitivity = [], [], []
matthews, r2, f1, auroc = [], [], [], []
cma = np.zeros((2, 2), dtype=int)    # running confusion matrix over all folds

for train, test in cv.split(X, y):
    rf.fit(X[train], y[train])       # fit once per fold, then reuse for both outputs
    probas_ = rf.predict_proba(X[test])
    classes = rf.predict(X[test])
    r2.append(r2_score(y[test], probas_[:, 1]))  # R^2 of the predicted probabilities
    precision.append(precision_score(y[test], classes))
    auroc.append(roc_auc_score(y[test], classes))
    accuracy.append(accuracy_score(y[test], classes))
    sensitivity.append(recall_score(y[test], classes))
    f1.append(f1_score(y[test], classes))
    matthews.append(matthews_corrcoef(y[test], classes))
    cma += confusion_matrix(y[test], classes)

r2, precision, accuracy = np.array(r2), np.array(precision), np.array(accuracy)
sensitivity, f1 = np.array(sensitivity), np.array(f1)
auroc, matthews = np.array(auroc), np.array(matthews)

print("KF Accuracy: %0.2f (+/- %0.2f)" % (accuracy.mean(), accuracy.std() * 2))
print("KF Precision: %0.2f (+/- %0.2f)" % (precision.mean(), precision.std() * 2))
print("KF Sensitivity: %0.2f (+/- %0.2f)" % (sensitivity.mean(), sensitivity.std() * 2))
print("KF R^2: %0.2f (+/- %0.2f)" % (r2.mean(), r2.std() * 2))
print("KF F1: %0.2f (+/- %0.2f)" % (f1.mean(), f1.std() * 2))
print("KF AUROC: %0.2f (+/- %0.2f)" % (auroc.mean(), auroc.std() * 2))
print("KF Matthews: %0.2f (+/- %0.2f)" % (matthews.mean(), matthews.std() * 2))
print("Confusion Matrix:", cma)



Answer 2:


From the source (Breiman and Cutler's random forests documentation):

The out-of-bag (oob) error estimate

In random forests, there is no need for cross-validation or a separate test set to get an unbiased estimate of the test set error. It is estimated internally, during the run...

In particular, predict.randomForest returns the out-of-bag prediction if newdata is not given.
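For example, with the question's cadets data, a minimal sketch (the error computation is my addition, not from the source):

library(randomForest)
r <- randomForest(RT..seconds.~., data = cadets)
oob <- predict(r)                         # no newdata, so these are out-of-bag predictions
mean((oob - cadets$RT..seconds.)^2)       # out-of-bag mean squared error
print(r)                                  # reports the OOB error as "Mean of squared residuals"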




Answer 3:


As topchef pointed out, cross-validation isn't necessary as a guard against over-fitting. This is a nice feature of the random forest algorithm.

It sounds like your goal is feature selection; cross-validation is still useful for that. Take a look at the rfcv() function within the randomForest package. The documentation specifies a data frame and a vector as input, so I'll start by creating those from your data.

library(randomForest)

set.seed(42)
x <- cadets
x$RT..seconds. <- NULL              # drop the response, keeping only predictors
y <- cadets$RT..seconds.            # the response vector

rf.cv <- rfcv(x, y, cv.fold=10)     # cross-validated feature selection with 10 folds

with(rf.cv, plot(n.var, error.cv))  # CV error vs. number of variables retained
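Since rf.cv is a list containing n.var (the number of variables used at each step) and error.cv (the corresponding cross-validated error), you can also read off the variable count with the lowest error directly, e.g.:

rf.cv$n.var[which.min(rf.cv$error.cv)]   # number of variables at the minimum CV error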


Source: https://stackoverflow.com/questions/19760169/how-to-perform-random-forest-cross-validation-in-r
