Exercise

# Predict on a test set and compute AUC

In binary classification problems, we can predict numeric values instead of class labels. In fact, class labels are generated only after the model produces a raw, numeric *predicted value* for a test point.

The *predicted label* is generated by applying a threshold to the *predicted value*: all test points with a predicted value greater than the threshold get a predicted label of "1", and points below the threshold get a predicted label of "0".
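For example, thresholding can be sketched as follows (the vector `pred` and the 0.5 cutoff are illustrative, not part of the exercise):

```r
# Hypothetical predicted values for four test points
pred <- c(0.87, 0.12, 0.50, 0.63)

# Apply a threshold of 0.5: values above it become "1", the rest "0"
labels <- ifelse(pred > 0.5, "1", "0")
labels  # "1" "0" "0" "1"
```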

In this exercise, generate predicted values (rather than class labels) on the test set and evaluate performance based on AUC (Area Under the ROC Curve). The AUC is a common metric for evaluating the discriminatory ability of a binary classification model.

Instructions


- Use the `predict()` function with `type = "prob"` to generate numeric predictions on the `credit_test` dataset.
- Compute the AUC using the `auc()` function from the **Metrics** package.
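A minimal sketch of these two steps, assuming a classification tree fit with **rpart**. The data frame `credit` here is synthetic stand-in data, and the names `credit_model`, `credit_train`, `credit_test`, and the `default` column are assumptions standing in for the lesson's objects:

```r
library(rpart)    # for fitting a classification tree
library(Metrics)  # for auc()

# Synthetic stand-in for the lesson's credit data (assumption)
set.seed(1)
credit <- data.frame(
  default = factor(sample(c("0", "1"), 200, replace = TRUE)),
  age     = rnorm(200, 35, 10),
  amount  = rnorm(200, 3000, 1000)
)
credit_train <- credit[1:150, ]
credit_test  <- credit[151:200, ]
credit_model <- rpart(default ~ ., data = credit_train, method = "class")

# Step 1: generate predicted probabilities (not class labels) on the
# test set. For an rpart tree, type = "prob" returns a matrix with one
# column per class; column 2 holds the probability of class "1".
pred <- predict(credit_model, credit_test, type = "prob")

# Step 2: compute AUC from the true labels and predicted probabilities
auc_value <- auc(actual = credit_test$default == "1",
                 predicted = pred[, 2])
auc_value
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect discrimination, so higher values indicate a model that better separates the two classes.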