Build a random forest model
Here you will use the same cross-validation data to build (using train) and evaluate (using validate) random forests for each partition. Since you are using the same cross-validation partitions as your regression models, you are able to directly compare the performance of the two models.
Note: We will limit our random forests to 100 trees to ensure they finish fitting in a reasonable time. The default number of trees for ranger() is 500.
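The sample code below assumes cv_data already contains train and validate list columns of data frames, created in the earlier regression exercises. As a rough, illustrative sketch only (the gap_train data frame, the number of folds, and the use of modelr::crossv_kfold() are assumptions, not part of this exercise), that object might be prepared like this:

library(dplyr)
library(purrr)
library(modelr)

# Assumed setup: split gap_train into 5 cross-validation folds.
# crossv_kfold() returns resample objects, so each fold is coerced
# to a data frame before modeling.
cv_split <- crossv_kfold(gap_train, k = 5)

cv_data <- cv_split %>%
  mutate(
    train    = map(train, ~as.data.frame(.x)),
    validate = map(validate, ~as.data.frame(.x))
  )

Coercing each resample to a data frame up front lets the same train and validate columns feed both the regression models and the random forests.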
This exercise is part of the course Machine Learning in the Tidyverse.
Exercise instructions
- Use ranger() to build a random forest predicting life_expectancy using all features in train for each cross-validation partition.
- Add a new column validate_predicted predicting the life_expectancy for the observations in validate using the random forest models you just created.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
library(ranger)

# Build a random forest model for each fold
cv_models_rf <- cv_data %>%
  mutate(model = map(___, ~ranger(formula = ___, data = ___,
                                  num.trees = 100, seed = 42)))

# Generate predictions using the random forest model
cv_prep_rf <- cv_models_rf %>%
  mutate(validate_predicted = map2(.x = ___, .y = ___, ~predict(.x, .y)$predictions))
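For reference, here is a possible completed version of the sample code, assuming cv_data holds the train and validate list columns described above. The formula life_expectancy ~ . and the mean-absolute-error comparison at the end are illustrative choices, not the official solution:

library(ranger)
library(dplyr)
library(purrr)

# Build a random forest for each fold, predicting life_expectancy
# from all other features in that fold's training data
cv_models_rf <- cv_data %>%
  mutate(model = map(train, ~ranger(formula = life_expectancy ~ .,
                                    data = .x,
                                    num.trees = 100, seed = 42)))

# Predict life_expectancy for the observations in each validate data frame
cv_prep_rf <- cv_models_rf %>%
  mutate(validate_predicted = map2(.x = model, .y = validate,
                                   ~predict(.x, .y)$predictions))

# One way to evaluate each fold: mean absolute error on the validate set
cv_eval_rf <- cv_prep_rf %>%
  mutate(validate_actual = map(validate, ~.x$life_expectancy),
         validate_mae = map2_dbl(validate_actual, validate_predicted,
                                 ~mean(abs(.x - .y))))

Because the folds match the ones used for the regression models, per-fold error metrics such as validate_mae can be compared directly between the two model types.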