Exercise

# Plot & compare ROC curves

We conclude this course by plotting the ROC curves for all the models (one from each chapter) on the same graph. The **ROCR** package provides the `prediction()` and `performance()` functions, which generate the data required for plotting an ROC curve, given a set of predictions and actual (true) values.

The more "up and to the left" the ROC curve of a model is, the better the model. The AUC performance metric is literally the "Area Under the ROC Curve", so the greater the area under this curve, the higher the AUC, and the better-performing the model is.

## Instructions


The **ROCR** package can plot multiple ROC curves on the same plot if you plot several sets of predictions as a list.

- The `prediction()` function takes as input a list of prediction vectors (one per model) and a corresponding list of true values (one per model; in our case the models were all evaluated on the same test set, so they all share the same set of true values). The `prediction()` function returns a "prediction" object, which is then passed to the `performance()` function.
- The `performance()` function generates the data necessary to plot the curve from the "prediction" object. For the ROC curve, you will also pass along two measures, `"tpr"` and `"fpr"`.
- Once you have the "performance" object, you can plot the ROC curves using the `plot()` method. We will add some color to the curves and a legend so we can tell which curve belongs to which algorithm.
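The three steps above can be sketched as follows. This is a minimal illustration, not the course's exact solution: the model names and the randomly generated prediction vectors (`preds_list`, `actuals`) are placeholders standing in for the real predictions from each chapter's model, which all share one test set.

```r
library(ROCR)

set.seed(1)
n <- 100

# Hypothetical predicted probabilities from three models on the same test set
preds_list <- list(model_1 = runif(n), model_2 = runif(n), model_3 = runif(n))

# One vector of true 0/1 labels, repeated once per model
actuals <- sample(0:1, n, replace = TRUE)
actuals_list <- rep(list(actuals), length(preds_list))

# Step 1: build a single "prediction" object covering all models
pred <- prediction(preds_list, actuals_list)

# Step 2: compute true-positive rate vs. false-positive rate for the ROC curve
perf <- performance(pred, "tpr", "fpr")

# Step 3: plot all curves on one graph, one color per model, with a legend
plot(perf, col = as.list(1:length(preds_list)))
legend("bottomright",
       legend = names(preds_list),
       fill = 1:length(preds_list))
```

Because `prediction()` receives lists rather than single vectors, the resulting "performance" object holds one curve per model, and a single `plot()` call draws them all on the same axes.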