Back to regression with stacking
In Chapter 1, we treated the app ratings as a regression problem, predicting the rating on the interval from 1 to 5. So far in this chapter, we have dealt with it as a classification problem, rounding the rating to the nearest integer. To practice using the StackingRegressor, we'll go back to the regression approach.
As usual, the input features have been standardized for you with a StandardScaler().
The MAE (mean absolute error) is the evaluation metric. In Chapter 1, the MAE was around 0.61. Let's see if the stacking ensemble method can reduce that error.
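For context, the setup described above could be reproduced roughly as follows. This is only a sketch: the raw split names X_train_raw and X_test_raw, and the use of a plain LinearRegression as a stand-in for the Chapter 1 baseline, are assumptions rather than part of the exercise.
# Sketch of the preprocessing and baseline MAE (assumed names: X_train_raw, X_test_raw)
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train_raw)  # fit the scaler on the training features only
X_test = scaler.transform(X_test_raw)        # reuse the same scaling for the test features

baseline = LinearRegression().fit(X_train, y_train)  # stand-in for the Chapter 1 model
print('Baseline MAE: {:.3f}'.format(mean_absolute_error(y_test, baseline.predict(X_test))))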
This exercise is part of the course
Ensemble Methods in Python
Instructions
- Instantiate a decision tree regressor with min_samples_leaf = 11 and min_samples_split = 33.
- Instantiate the default linear regression.
- Instantiate a Ridge regression model with random_state = 500.
- Build and fit a StackingRegressor, passing the regressors and the meta_regressor.
Hands-on interactive exercise
Try this exercise by completing this sample code.
# Instantiate the 1st-layer regressors
reg_dt = ____(____, ____, random_state=500)
reg_lr = ____
reg_ridge = ____
# Instantiate the 2nd-layer regressor
reg_meta = LinearRegression()
# Build the Stacking regressor
reg_stack = ____
reg_stack.____
# Evaluate the performance on the test set using the MAE metric
pred = reg_stack.predict(X_test)
print('MAE: {:.3f}'.format(mean_absolute_error(y_test, pred)))
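For reference, here is one way the completed template might look. This is a sketch, not the official solution: it assumes the first-layer models come from scikit-learn, that StackingRegressor comes from mlxtend (whose API uses the regressors and meta_regressor arguments named in the instructions), and that X_train, y_train, X_test, and y_test are preloaded by the exercise.
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_absolute_error
from mlxtend.regressor import StackingRegressor  # assumption: mlxtend's implementation

# Instantiate the 1st-layer regressors
reg_dt = DecisionTreeRegressor(min_samples_leaf=11, min_samples_split=33, random_state=500)
reg_lr = LinearRegression()
reg_ridge = Ridge(random_state=500)

# Instantiate the 2nd-layer regressor
reg_meta = LinearRegression()

# Build and fit the Stacking regressor on the training set
reg_stack = StackingRegressor(regressors=[reg_dt, reg_lr, reg_ridge], meta_regressor=reg_meta)
reg_stack.fit(X_train, y_train)

# Evaluate the performance on the test set using the MAE metric
pred = reg_stack.predict(X_test)
print('MAE: {:.3f}'.format(mean_absolute_error(y_test, pred)))
Because the meta-regressor is a plain linear regression, the stack's prediction is effectively a learned weighted combination of the three first-layer predictions; a result below the 0.61 MAE from Chapter 1 would indicate the ensemble is adding value.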