
Bootstrap aggregation (bagging)

In the last lesson, you got a small taste of classification models by applying logistic regression to data with engineered features. In machine learning interviews, it is often worth knowing about ensemble models, which combine many weak learners into a single strong learner to improve predictive accuracy.

In this exercise, you will start off by applying a bagging classifier, which trains each base estimator on a bootstrap sample drawn with replacement from the training data; this injected randomness helps reduce overfitting. You will be using functions from the sklearn.ensemble module, which you saw in the video exercise.
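
To make the sampling idea concrete, here is a minimal, self-contained sketch (not part of the course code) that draws a bootstrap sample of row indices with NumPy; the ten indices are purely illustrative.

# Illustration only: a bootstrap sample drawn with replacement, as each
# base learner in bagging would see. Some row indices repeat and others
# are left out entirely, which is where the randomness comes from.
import numpy as np

rng = np.random.default_rng(123)
row_indices = np.arange(10)                      # pretend these index 10 training rows
bootstrap_sample = rng.choice(row_indices, size=row_indices.size, replace=True)
print(bootstrap_sample)                          # duplicates and omissions are expected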

All relevant packages have been imported for you: pandas as pd; train_test_split from sklearn.model_selection; accuracy_score from sklearn.metrics; LogisticRegression from sklearn.linear_model; and BaggingClassifier and AdaBoostClassifier from sklearn.ensemble.
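
Written out as code, those imports would look like this (standard scikit-learn import paths):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier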

The loan_data DataFrame is already split into X_train, X_test, y_train and y_test.

This exercise is part of the course

Practicing Machine Learning Interview Questions in Python


Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Instantiate bootstrap aggregation model
bagged_model = ____(____=____, random_state=123)
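
One plausible way to complete the exercise is sketched below. It assumes scikit-learn 1.2 or later, where the base learner is passed as estimator (older releases use base_estimator), and it generates synthetic data with make_classification so the snippet runs on its own; in the course environment you would use the pre-split loan data instead.

# Minimal, self-contained sketch of a possible solution (synthetic data stands
# in for loan_data; the actual course solution may differ).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score

# Stand-in for the pre-split loan data
X, y = make_classification(n_samples=1000, n_features=10, random_state=123)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=123)

# Instantiate bootstrap aggregation model with logistic regression as the base learner
bagged_model = BaggingClassifier(estimator=LogisticRegression(max_iter=1000),
                                 random_state=123)

# Fit on the training data and score on the held-out test set
bagged_model.fit(X_train, y_train)
print(accuracy_score(y_test, bagged_model.predict(X_test)))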