
Making the most of AdaBoost

As you have seen, for predicting movie revenue, AdaBoost gives the best results with decision trees as the base estimator.

In this exercise, you'll specify some parameters to extract even more performance. In particular, you'll use a lower learning rate, which shrinks the contribution of each individual estimator to the ensemble; to compensate, the number of estimators should increase. Additionally, the following features have been added to the data: 'runtime', 'vote_average', and 'vote_count'.
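To get a feel for this trade-off between the learning rate and the number of estimators, you can compare a few pairs of values. This is a minimal sketch, not part of the exercise: it assumes the movie data has already been split into X_train, X_test, y_train, and y_test, and the specific learning-rate/estimator pairs are only illustrative.

from sklearn.ensemble import AdaBoostRegressor
from sklearn.metrics import mean_squared_error
import numpy as np

# A lower learning rate shrinks each estimator's contribution,
# so more estimators are needed to reach comparable performance
for lr, n_est in [(1.0, 50), (0.1, 100), (0.01, 200)]:
    model = AdaBoostRegressor(n_estimators=n_est, learning_rate=lr, random_state=500)
    model.fit(X_train, y_train)
    rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
    print('learning_rate={}, n_estimators={}: RMSE={:.3f}'.format(lr, n_est, rmse))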

This exercise is part of the course Ensemble Methods in Python.


Exercise instructions

  • Build an AdaBoostRegressor using 100 estimators and a learning rate of 0.01.
  • Fit reg_ada to the training set and calculate the predictions on the test set.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Build and fit an AdaBoost regressor
reg_ada = ____(____, ____, random_state=500)
reg_ada.fit(X_train, y_train)

# Calculate the predictions on the test set
pred = ____

# Evaluate the performance using the RMSE
rmse = np.sqrt(mean_squared_error(y_test, pred))
print('RMSE: {:.3f}'.format(rmse))
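Once you have attempted the exercise yourself, you can compare your work against the sketch below. It is one possible completion of the sample code, using the parameters named in the instructions (100 estimators, a learning rate of 0.01, and random_state=500). The imports and the train/test split are assumed to be pre-loaded in the exercise environment; they are written out here only to make the snippet self-contained.

from sklearn.ensemble import AdaBoostRegressor
from sklearn.metrics import mean_squared_error
import numpy as np

# Build and fit an AdaBoost regressor
# (decision trees are the default base estimator)
reg_ada = AdaBoostRegressor(n_estimators=100, learning_rate=0.01, random_state=500)
reg_ada.fit(X_train, y_train)

# Calculate the predictions on the test set
pred = reg_ada.predict(X_test)

# Evaluate the performance using the RMSE
rmse = np.sqrt(mean_squared_error(y_test, pred))
print('RMSE: {:.3f}'.format(rmse))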