Exercise

Evaluating model performance

Now that you have addressed the class imbalance in your data by applying subsampling methods to each fold (resample) of your cross-validated training data, it is time to evaluate these models and see whether your efforts paid off.

A custom function named get_auc() has been written for you. It calculates the Area Under the ROC Curve (AUC) for a given model tested on the pulsar star dataset. Type get_auc in the console to inspect it.

We will use this function to assess the AUC each of the four models produces on the testing data.

The four models mdl_orig, mdl_down, mdl_up, and mdl_smote have been loaded for you, along with the caret package.

Instructions 1/3
  • Gather the four preloaded decision tree models into a list called mdl_list.
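
The step above can be sketched in R as follows. This is a minimal sketch: it assumes the four model objects and get_auc() are already in your environment (as the exercise states), and the exact signature of get_auc() is an assumption.

```r
# Gather the four preloaded models into a named list
# (mdl_orig, mdl_down, mdl_up, mdl_smote are preloaded in this exercise).
mdl_list <- list(
  orig  = mdl_orig,
  down  = mdl_down,
  up    = mdl_up,
  smote = mdl_smote
)

# Apply the provided get_auc() helper to each model; calling it with a
# single model argument is an assumption about its signature.
lapply(mdl_list, get_auc)
```

Using a named list makes the AUC output self-documenting, so each score is labeled with the subsampling method that produced it.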