Using PCA as an alternative to nearZeroVar()
An alternative to removing low-variance predictors is to run PCA on your dataset. This is sometimes preferable because it does not discard any of your data: many different low-variance predictors may end up combined into one high-variance principal component, which can have a positive impact on your model's accuracy.
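For comparison, here is a minimal sketch (not part of the exercise) of the nearZeroVar() approach being contrasted here. The toy data frame toy_df is made up purely for illustration; it is not the blood-brain data.

library(caret)

# Toy data: one informative predictor and one near-constant predictor
toy_df <- data.frame(
  useful   = rnorm(100),
  constant = c(rep(0, 99), 1)
)

# nearZeroVar() returns the column positions of near-zero-variance predictors
low_var_cols <- nearZeroVar(toy_df)

# Dropping those columns removes that information from the dataset entirely
toy_trimmed <- toy_df[, -low_var_cols, drop = FALSE]

PCA, by contrast, keeps a (rotated) combination of every original column rather than deleting any of them.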
This is an especially good trick for linear models: the "pca" option in the preProcess argument will center and scale your data, combine low-variance variables, and ensure that all of your predictors are orthogonal. This creates an ideal dataset for linear regression modeling, and can often improve the accuracy of your models.
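As a rough sketch of what that pre-processing does, here is caret's preProcess() applied on its own, using the built-in mtcars data as stand-in predictors rather than the exercise data. With method = "pca", centering and scaling are applied before the predictors are projected onto orthogonal principal components.

library(caret)

# Stand-in predictors: all mtcars columns except the outcome mpg
x <- mtcars[, -1]

# "pca" implies centering and scaling; thresh controls the variance retained
pp <- preProcess(x, method = "pca", thresh = 0.95)

# Apply the transformation: the result has orthogonal PC1, PC2, ... columns
x_pca <- predict(pp, x)
head(x_pca)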
This exercise is part of the course Machine Learning with caret in R.
Exercise instructions
bloodbrain_x and bloodbrain_y are loaded in your workspace.
- Fit a glm model to the full blood-brain dataset using the "pca" option to preProcess.
- Print the model to the console and inspect the result.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Fit glm model using PCA: model
model <- train(
x = ___,
y = ___,
method = ___,
preProcess = ___
)
# Print model to console
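For reference, one way to fill in the blanks above might look like the following. This is a sketch, not the official solution, and it assumes bloodbrain_x and bloodbrain_y are loaded as stated.

# Fit glm model using PCA: model
model <- train(
  x = bloodbrain_x,
  y = bloodbrain_y,
  method = "glm",
  preProcess = "pca"
)

# Print model to console
model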