
Making the most of AdaBoost

As you have seen, for predicting movie revenue, AdaBoost gives the best results with decision trees as the base estimator.

In this exercise, you'll set a few parameters to extract even more performance. In particular, you'll use a lower learning rate, which shrinks the contribution of each estimator and so updates the ensemble more gradually; to compensate, the number of estimators should be increased (the sketch below illustrates this trade-off). Additionally, the following features have been added to the data: 'runtime', 'vote_average', and 'vote_count'.
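As a rough illustration of that trade-off, the sketch below compares a higher learning rate with fewer trees against a lower learning rate with more trees. It does not use the course's movie dataset: the make_regression data, the number of features, and the two configurations are stand-ins chosen only for demonstration. AdaBoostRegressor's default base estimator is a depth-3 decision tree, so no base estimator is passed explicitly.

# Minimal sketch on synthetic data (not the course's movie data)
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the movie features
X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=500)

# A lower learning rate shrinks each tree's contribution,
# so more estimators are needed to reach comparable performance
for n_est, lr in [(50, 1.0), (100, 0.01)]:
    reg = AdaBoostRegressor(n_estimators=n_est, learning_rate=lr, random_state=500)
    reg.fit(X_train, y_train)
    rmse = np.sqrt(mean_squared_error(y_test, reg.predict(X_test)))
    print('n_estimators={}, learning_rate={}: RMSE = {:.3f}'.format(n_est, lr, rmse))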

This exercise is part of the course

Ensemble Methods in Python


Instructions

  • Build an AdaBoostRegressor using 100 estimators and a learning rate of 0.01.
  • Fit reg_ada to the training set and calculate the predictions on the test set.

Hands-on interactive exercise

Try this exercise by completing this sample code.

# Build and fit an AdaBoost regressor
reg_ada = ____(____, ____, random_state=500)
reg_ada.fit(X_train, y_train)

# Calculate the predictions on the test set
pred = ____

# Evaluate the performance using the RMSE
rmse = np.sqrt(mean_squared_error(y_test, pred))
print('RMSE: {:.3f}'.format(rmse))
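For reference, one way the completed scaffold might look, filling the blanks as the instructions describe (100 estimators, learning rate 0.01). It assumes the exercise environment has already loaded AdaBoostRegressor, mean_squared_error, numpy as np, and the train/test splits X_train, X_test, y_train, y_test, as the course's exercises typically do.

# Build and fit an AdaBoost regressor (100 estimators, learning rate 0.01)
# Assumes AdaBoostRegressor, mean_squared_error, np, and the splits are preloaded
reg_ada = AdaBoostRegressor(n_estimators=100, learning_rate=0.01, random_state=500)
reg_ada.fit(X_train, y_train)

# Calculate the predictions on the test set
pred = reg_ada.predict(X_test)

# Evaluate the performance using the RMSE
rmse = np.sqrt(mean_squared_error(y_test, pred))
print('RMSE: {:.3f}'.format(rmse))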