Adjusting the regularization strength
Your current Lasso model has an \(R^2\) score of 84.7%. When regularization is too strong, a model can suffer from high bias, which hurts its predictive power. Let's improve the balance between predictive power and model simplicity by tweaking the alpha parameter.
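For reference, scikit-learn's Lasso fits coefficients \(w\) by minimizing \(\frac{1}{2n}\lVert y - Xw\rVert_2^2 + \alpha\lVert w\rVert_1\), where \(n\) is the number of training samples. A larger alpha puts more weight on the \(\ell_1\) penalty, driving more coefficients to exactly zero (a simpler but potentially more biased model), while a smaller alpha keeps more features in play.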
Exercise instructions
- Find the highest value for alpha that gives an \(R^2\) value above 98% from the options: 1, 0.5, 0.1, and 0.01.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
from sklearn.linear_model import Lasso

# Find the highest alpha value with R-squared above 98%
la = Lasso(alpha=____, random_state=0)
# Fit the model and calculate performance stats
la.fit(X_train_std, y_train)
r_squared = la.score(X_test_std, y_test)
n_ignored_features = sum(la.coef_ == 0)
# Print performance stats
print(f"The model can predict {r_squared:.1%} of the variance in the test set.")
print(f"{n_ignored_features} out of {len(la.coef_)} features were ignored.")