
Gradient boosting ensemble

Boosting is a technique in which the errors of one predictor are passed as input to the next predictor in a sequential manner. Gradient Boosting for classification uses a gradient descent procedure to minimize a loss such as log loss: shallow classification trees, each a weak learner on its own, are added one at a time, with each new tree fit to the gradient of the loss of the current ensemble. Gradient Boosting for regression works the same way, but applies gradient descent to a loss function such as mean squared error.
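To make the sequential idea concrete, here is a minimal hand-rolled sketch of gradient boosting for regression with squared error, where each weak tree is fit to the residuals (the negative gradient of the loss) of the current prediction. The dataset, tree depth, learning rate, and number of rounds are illustrative choices, not values from the exercise.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

# Toy regression data (illustrative, not the exercise's loan_data)
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=123)

# Start from a constant prediction: the mean of y
pred = np.full(y.shape, y.mean(), dtype=float)
learning_rate = 0.1
trees = []

for _ in range(50):
    # For squared error, the negative gradient is just the residual
    residuals = y - pred
    tree = DecisionTreeRegressor(max_depth=2, random_state=123)
    tree.fit(X, residuals)
    # Take a small gradient-descent step toward the residuals
    pred += learning_rate * tree.predict(X)
    trees.append(tree)

# Training MSE after boosting vs. the constant baseline
mse = np.mean((y - pred) ** 2)
baseline_mse = np.mean((y - y.mean()) ** 2)
```

Each round shrinks the training error a little; the `learning_rate` controls how large each step is, trading off speed of fitting against overfitting.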

In this exercise, you will create a Gradient Boosting Classifier model and compare its performance to the Random Forest from the previous exercise, which had an accuracy score of 72.5%.

The loan_data DataFrame has already been split and is available in your workspace as X_train, X_test, y_train, and y_test.

This exercise is part of the course

Practicing Machine Learning Interview Questions in Python


Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Import
from sklearn.____ import ____
from sklearn.____ import ____, ____, ____, ____, ____

# Instantiate
gb_model = ____(____=____, learning_rate=____, random_state=123)
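One possible completed version of the scaffold is sketched below. Since loan_data is not available here, a synthetic binary classification split stands in for X_train, X_test, y_train, and y_test, and the hyperparameter values (`n_estimators=100`, `learning_rate=0.1`) and the particular metrics imported are reasonable assumptions, not the exercise's official solution.

```python
# Import
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Stand-in for the pre-split loan_data (assumption: binary target)
X, y = make_classification(n_samples=500, n_features=10, random_state=123)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=123)

# Instantiate
gb_model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                      random_state=123)
gb_model.fit(X_train, y_train)
y_pred = gb_model.predict(X_test)

# Evaluate, to compare against the Random Forest's 72.5% accuracy
acc = accuracy_score(y_test, y_pred)
prec = precision_score(y_test, y_pred)
rec = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
print(confusion_matrix(y_test, y_pred))
```

With the real loan_data split in your workspace, you would skip the synthetic-data lines and fit directly on the provided X_train and y_train.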