I'm running out of memory on a normal 8GB server while working with a fairly small dataset in a machine learning context:
    > dim(basetrainf) # this is a dataframe
    [1] 5
With that much data, the resampled error estimates and the random forest OOB error estimates should be pretty close. Try trainControl(method = "oob") (note the lowercase value that caret expects), and train() will not fit the extra models on resampled data sets.
Also, avoid the formula interface like the plague: it builds a model matrix, expanding factors into dummy variables and making extra copies of the data, which drives up the memory footprint.
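A minimal sketch of how those two suggestions fit together, assuming your data is already split into a predictor data frame and an outcome vector (the names predictors and outcome here are hypothetical):

    library(caret)

    # OOB error estimates: train() fits one forest per candidate mtry
    # instead of one per resample, which saves both time and memory
    ctrl <- trainControl(method = "oob")

    # non-formula interface: pass x and y directly so train() does not
    # build a model matrix copy with factors expanded to dummy variables
    fit <- train(x = predictors, y = outcome,
                 method = "rf",
                 trControl = ctrl)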
You might also try bagging instead. Since there is no random selection of predictors at each split, you can get good results with 50-100 resamples (instead of the many more needed by random forests to be effective).
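One way to do this is to go straight to the ipred package (the same code that caret's "treebag" method wraps), again using the x/y interface; the object and column names are hypothetical, and coob = TRUE asks for an out-of-bag error estimate:

    library(ipred)

    # bagged CART with ~50 bootstrap trees and an OOB error estimate
    bag_fit <- ipredbagg(outcome, predictors, nbagg = 50, coob = TRUE)
    print(bag_fit)   # prints the out-of-bag estimate of error

If you would rather stay inside train(), method = "treebag" should give you the same kind of model.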
Others may disagree, but I also think that modeling all the data you have is not always the best approach. Unless the predictor space is large, many of the data points will be very similar to others and don't contribute much to the model fit (beyond the additional computational complexity and the footprint of the resulting object). caret has a function called maxDissim that might be helpful for thinning the data (although it is not terribly efficient either).
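A rough sketch of that kind of thinning, assuming the predictors are numeric (maxDissim works on dissimilarities) and using the same hypothetical predictors/outcome names and an arbitrary target of 500 extra rows:

    library(caret)

    set.seed(1)
    n_rows <- nrow(predictors)
    start  <- sample(seq_len(n_rows), 20)     # small random starting set
    pool   <- seq_len(n_rows)[-start]         # candidate row indices

    # pick the 500 pool rows most dissimilar to the rows we already have;
    # maxDissim() returns indices into its second argument
    picked <- maxDissim(predictors[start, ], predictors[pool, ], n = 500)

    keep      <- c(start, pool[picked])
    thinned_x <- predictors[keep, ]
    thinned_y <- outcome[keep]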