
Back to regression with stacking

In Chapter 1, we treated the app ratings as a regression problem, predicting the rating on the interval from 1 to 5. So far in this chapter, we have dealt with it as a classification problem, rounding the rating to the nearest integer. To practice using the StackingRegressor, we'll go back to the regression approach. As usual, the input features have been standardized for you with a StandardScaler().
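In the exercise environment the scaled features are already loaded for you. If you are recreating the setup locally, the scaling step looks roughly like the sketch below, where X_train and X_test are assumed names for the raw feature splits.

from sklearn.preprocessing import StandardScaler

# Fit the scaler on the training features only, then apply it to both splits
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)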

The MAE (mean absolute error) is the evaluation metric. In Chapter 1, the MAE was around 0.61. Let's see if the stacking ensemble method can reduce that error.
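As a reminder, the MAE is the average absolute difference between predicted and true ratings, MAE = (1/n) * Σ|y_i − ŷ_i|, so an error of 0.61 means the predicted rating is off by roughly 0.61 rating points on average.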

This exercise is part of the course Ensemble Methods in Python.


Exercise instructions

  • Instantiate a decision tree regressor with min_samples_leaf = 11 and min_samples_split = 33.
  • Instantiate the default linear regression.
  • Instantiate a Ridge regression model with random_state = 500.
  • Build and fit a StackingRegressor, passing the regressors and the meta_regressor.

Hands-on interactive exercise

Try this exercise by completing this sample code. A possible completed version is sketched after it.

# Instantiate the 1st-layer regressors
reg_dt = ____(____, ____, random_state=500)
reg_lr = ____
reg_ridge = ____

# Instantiate the 2nd-layer regressor
reg_meta = LinearRegression()

# Build the Stacking regressor
reg_stack = ____
reg_stack.____

# Evaluate the performance on the test set using the MAE metric
pred = reg_stack.predict(X_test)
print('MAE: {:.3f}'.format(mean_absolute_error(y_test, pred)))
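
For reference, one possible completed version is sketched below. The regressors and meta_regressor argument names in the instructions match the mlxtend StackingRegressor, so the sketch uses that class and assumes the train/test splits (X_train, y_train, X_test, y_test) are already loaded.

from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_absolute_error
from mlxtend.regressor import StackingRegressor

# Instantiate the 1st-layer regressors
reg_dt = DecisionTreeRegressor(min_samples_leaf=11, min_samples_split=33, random_state=500)
reg_lr = LinearRegression()
reg_ridge = Ridge(random_state=500)

# Instantiate the 2nd-layer regressor
reg_meta = LinearRegression()

# Build and fit the Stacking regressor on the training data
reg_stack = StackingRegressor(regressors=[reg_dt, reg_lr, reg_ridge], meta_regressor=reg_meta)
reg_stack.fit(X_train, y_train)

# Evaluate the performance on the test set using the MAE metric
pred = reg_stack.predict(X_test)
print('MAE: {:.3f}'.format(mean_absolute_error(y_test, pred)))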