
Tune the penalty hyperparameter

Now that you've seen how the penalty parameter affects lasso regression's selection of features, you might be wondering, "What's the best value for penalty?" tidymodels provides functions for finding the best value of hyperparameters such as penalty.

In this exercise, you will find the best value of penalty based on the model's RMSE, then fit a final model with that penalty value. This tunes lasso regression's feature selection for model performance.

lasso_recipe has been created for you, and the train data frame is also available. The tidyverse and tidymodels packages have been loaded for you.

This exercise is part of the course Dimensionality Reduction in R.

Exercise instructions

  • Define a linear_reg() workflow that will tune penalty.
  • Create a 3-fold cross-validation sample from train and a sequence of 20 penalty values ranging from 0.001 to 0.1.
  • Create lasso models with the different penalty values.
  • Plot the model performance (RMSE) based on the penalty value.

Hands-on interactive exercise

Try this exercise and complete the sample code below.

# Create tune-able model
lasso_model <- ___(___ = ___(), mixture = ___, engine = "glmnet")
lasso_workflow <- workflow(preprocessor = lasso_recipe, ___ = ___)

# Create a cross validation sample and sequence of penalty values
train_cv <- ___(___, v = ___)
penalty_grid <- grid_regular(penalty(range = c(___, ___)), levels = ___)

# Create lasso models with different penalty values
lasso_grid <- tune_grid(
  ___,
  resamples = ___,
  grid = ___)

# Plot RMSE vs. penalty values
___(___, metric = "rmse")
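For reference, here is one possible completion of the sample code. It is a sketch, not the only valid answer, and it assumes lasso_recipe and train exist exactly as described above. Note that dials::penalty() works on a log10 scale by default, so range = c(-3, -1) corresponds to penalty values from 0.001 to 0.1; the set.seed() call is added only so the resampling is reproducible.

# One possible completion of the scaffold above
lasso_model <- linear_reg(penalty = tune(), mixture = 1, engine = "glmnet")
lasso_workflow <- workflow(preprocessor = lasso_recipe, spec = lasso_model)

# 3-fold cross-validation sample and 20 penalty values from 0.001 to 0.1
# (penalty() is on a log10 scale, so c(-3, -1) maps to 10^-3 through 10^-1)
set.seed(1234)  # added only for reproducible resampling
train_cv <- vfold_cv(train, v = 3)
penalty_grid <- grid_regular(penalty(range = c(-3, -1)), levels = 20)

# Fit a lasso model for each penalty value on each resample
lasso_grid <- tune_grid(
  lasso_workflow,
  resamples = train_cv,
  grid = penalty_grid)

# Plot cross-validated RMSE against the penalty values
autoplot(lasso_grid, metric = "rmse")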
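The exercise description also asks you to fit a final model with the chosen penalty value. That step is not part of the sample code above, but a minimal sketch, assuming the lasso_grid and lasso_workflow objects from the completed code, could look like this:

# Pick the penalty with the lowest cross-validated RMSE
best_penalty <- select_best(lasso_grid, metric = "rmse")

# Plug that penalty into the workflow and refit on the full training data
final_lasso <- lasso_workflow %>%
  finalize_workflow(best_penalty) %>%
  fit(data = train)

# Optional check: which predictors did the lasso keep (non-zero coefficients)?
tidy(final_lasso) %>% filter(estimate != 0)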