Exercise

# Plot an ROC curve

As you saw in the video, an ROC curve is a really useful shortcut for summarizing the performance of a classifier over all possible thresholds. This saves you a lot of tedious work computing class predictions for many different thresholds and examining the confusion matrix for each.

My favorite package for computing ROC curves is `caTools`, which contains a function called `colAUC()`. This function is very user-friendly and can actually calculate ROC curves for multiple predictors at once. In this case, you only need to calculate the ROC curve for one predictor, e.g.:

```
colAUC(predicted_probabilities, actual, plotROC = TRUE)
```

The function will return a score called AUC (more on that later), and the `plotROC = TRUE` argument will also display the plot of the ROC curve for visual inspection.
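For example, on a small toy dataset you can see both behaviors at once: `colAUC()` draws the ROC curve and returns the AUC as a matrix (one row per predictor). The data below is made up purely for illustration:

```r
library(caTools)

# Toy example: predicted probabilities and true binary labels
probs  <- c(0.9, 0.8, 0.7, 0.3, 0.2, 0.1)
labels <- c(1, 1, 0, 1, 0, 0)

# colAUC() returns a matrix of AUC values;
# plotROC = TRUE also draws the ROC curve
auc <- colAUC(probs, labels, plotROC = TRUE)
print(auc)
```

Since there are no tied probabilities, the AUC here is the fraction of positive/negative pairs ranked correctly: 8 of 9, or about 0.889.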

Instructions

**100 XP**

The objects `model`, `test`, and `train` from the last exercise using the sonar data are loaded in your workspace.

- Predict probabilities (i.e. `type = "response"`) on the test set, then store the result as `p`.
- Make an ROC curve using the predicted test set probabilities.
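The two steps above can be sketched as follows. This assumes `model` is a `glm()` logistic regression fit on `train`, and that the true labels live in a `Class` column of `test` (as in the Sonar data); adjust the column name if your workspace differs:

```r
library(caTools)

# Step 1: predict class probabilities on the test set
# (type = "response" gives probabilities for a glm model)
p <- predict(model, newdata = test, type = "response")

# Step 2: compute the AUC and plot the ROC curve
# (assumes the true labels are in test$Class)
colAUC(p, test$Class, plotROC = TRUE)
```

Because `colAUC()` accepts the probabilities directly, you never have to pick a threshold or build a confusion matrix by hand.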