
Checking the out-of-bag score

Let's now check the out-of-bag score for the model from the previous exercise.

So far you've used the F1 score to measure performance. In this exercise, however, use the accuracy score instead, so that you can compare it directly to the out-of-bag score (which is itself an accuracy).

The pokemon dataset is already loaded for you and split into train and test sets (X_train, X_test, y_train, y_test). The decision tree classifier from the previous exercise has already been fit and is available in your workspace as clf_dt, ready to serve as the base estimator; a sketch of what this preloaded setup might look like is shown below.
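
Before building the bagging model, it helps to picture what that workspace already contains. The snippet below is only a minimal sketch of the assumed setup: the file name pokemon.csv, the Legendary target column, the numeric-features-only selection, and the tree's max_depth are illustrative assumptions, not the course's exact preprocessing.

# Minimal sketch of the assumed preloaded workspace (illustrative only)
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

pokemon = pd.read_csv('pokemon.csv')                      # hypothetical file name
y = pokemon['Legendary']                                  # assumed target column
X = pokemon.select_dtypes('number').drop(columns=['Legendary'], errors='ignore')

# Hold out a test set; the split parameters here are assumptions
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=500)

# Decision tree from the previous exercise, later used as the base estimator
clf_dt = DecisionTreeClassifier(max_depth=4, random_state=500)   # assumed depth
clf_dt.fit(X_train, y_train)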

This exercise is part of the course

Ensemble Methods in Python


Exercise instructions

  • Build the bagging classifier using the decision tree clf_dt as the base estimator and 21 estimators. This time, enable the out-of-bag score by passing an argument for the oob_score parameter.
  • Print the classifier's out-of-bag score.

Hands-on interactive exercise

Try this exercise by completing the sample code.

# Build and train the bagging classifier
clf_bag = ____(
  ____,
  ____,
  ____,
  random_state=500)
clf_bag.fit(X_train, y_train)

# Print the out-of-bag score
print('OOB-Score: {:.3f}'.format(____))

# Evaluate the performance on the test set to compare
pred = clf_bag.predict(X_test)
print('Accuracy: {:.3f}'.format(accuracy_score(y_test, pred)))
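
For reference, one possible completion of the scaffold above is sketched below. It assumes scikit-learn's BaggingClassifier and accuracy_score (preloaded in the exercise) together with the clf_dt, X_train, y_train, X_test and y_test objects described earlier. Setting oob_score=True makes the fitted model expose oob_score_, the accuracy measured on the samples each tree never saw in its bootstrap draw.

from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score

# Build and train the bagging classifier: 21 trees, OOB scoring enabled
clf_bag = BaggingClassifier(
    clf_dt,
    n_estimators=21,
    oob_score=True,
    random_state=500)
clf_bag.fit(X_train, y_train)

# Print the out-of-bag score (accuracy on each tree's held-out bootstrap samples)
print('OOB-Score: {:.3f}'.format(clf_bag.oob_score_))

# Evaluate the performance on the test set to compare
pred = clf_bag.predict(X_test)
print('Accuracy: {:.3f}'.format(accuracy_score(y_test, pred)))

Passing the base estimator positionally keeps the sketch compatible with both older scikit-learn releases (base_estimator) and 1.2+ (estimator). If the two printed scores are close, the out-of-bag estimate is doing its job as a free, built-in validation score that needs no separate hold-out set.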