Visualizing model performance
1. Visualizing model performance
In this section we will explore some common ways to visualize the results of a classification model.
2. Plotting the confusion matrix
Confusion matrices generated by the conf_mat() function can be plotted with the autoplot() function. Simply pass the confusion matrix object into autoplot() and set the type argument to 'heatmap'. This creates a heat map of the counts in the confusion matrix and highlights the combinations with the largest frequencies.
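A minimal sketch of this step, assuming leads_results is a tibble of model results with the true outcome in purchased and class predictions in .pred_class, as in this section's examples:

```r
library(yardstick)  # conf_mat() and its autoplot() methods
library(ggplot2)    # supplies the autoplot() generic

# Build the confusion matrix from the results tibble
conf <- conf_mat(leads_results,
                 truth = purchased,
                 estimate = .pred_class)

# Heat map of the counts; larger counts are shaded more heavily
autoplot(conf, type = "heatmap")
```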
3. Mosaic plot
Setting the type to 'mosaic' within autoplot() will create a mosaic plot of the confusion matrix, which visualizes sensitivity and specificity. Each column in this plot represents 100 percent of the actual outcome values in that column. With the leads_results confusion matrix, the height of the yes-yes combination represents the sensitivity,
4. Mosaic plot
while the height of the no-no combination represents the specificity.
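Continuing the sketch above, only the type argument changes:

```r
# Mosaic plot: the yes column height shows sensitivity,
# the no column height shows specificity
autoplot(conf, type = "mosaic")
```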
5. Probability thresholds
In binary classification, the default probability threshold is 0.5. This means that if the estimated probability of the positive class is greater than or equal to 0.5, the positive class is predicted. With our leads_results tibble, if the .pred_yes column is greater than or equal to 0.5, then our predicted outcome, .pred_class, is set to 'yes' by the predict() function.
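As an illustration, the default rule can be reproduced by hand. This sketch assumes 'yes' is the first factor level (yardstick's default event level) and adds a hypothetical .pred_class_manual column for comparison:

```r
library(dplyr)

# Manually apply the default 0.5 threshold to the estimated
# probabilities of the positive class
leads_results %>%
  mutate(.pred_class_manual = factor(
    ifelse(.pred_yes >= 0.5, "yes", "no"),
    levels = c("yes", "no")
  ))
```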
6. Exploring performance across thresholds
It's important to explore the performance of a classification model across a range of probability thresholds to see if the model is able to predict well consistently. One way to do this is by using the unique values in the .pred_yes column of our leads_results tibble as probability thresholds and calculating the specificity and sensitivity for each one, as sketched below.
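A hedged sketch of that idea, using a few illustrative thresholds rather than every unique probability (the dedicated roc_curve() function shown later handles the full set); the threshold values and helper code here are illustrative, not from the original:

```r
library(dplyr)
library(purrr)
library(yardstick)

# Illustrative thresholds; roc_curve() would use every unique .pred_yes value
thresholds <- c(0.25, 0.5, 0.75)

map_dfr(thresholds, function(t) {
  # Re-derive class predictions at threshold t
  preds <- leads_results %>%
    mutate(.pred_class = factor(ifelse(.pred_yes >= t, "yes", "no"),
                                levels = levels(purchased)))

  tibble(
    .threshold  = t,
    sensitivity = sens_vec(preds$purchased, preds$.pred_class),
    specificity = spec_vec(preds$purchased, preds$.pred_class)
  )
})
```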
7. Visualizing performance across thresholds
The receiver operating characteristic curve, or ROC curve, visualizes the performance of a classification model across a range of probability thresholds. For each unique threshold in the previous table, a point representing the sensitivity and one minus the specificity is added to the plot on the y and x axes, respectively.
8. Visualizing performance across thresholds
In essence, this plot displays the proportion correct among actual positives versus the proportion incorrect among actual negatives across probability thresholds as a step function.
9. ROC curves
The optimal point on this graph is (0, 1), and a classification model that produces points close to the upper left corner across all thresholds is ideal.
10. ROC curves
A classification model that produces points along the diagonal line, where sensitivity is equal to one minus the specificity, indicates poor performance. This is equivalent to a classification model that predicts outcomes by randomly flipping a fair coin.
11. Summarizing the ROC curve
One way to summarize an ROC curve is to calculate the area under the curve, known as ROC AUC in tidymodels. This metric has a useful interpretation as a letter grade of classification performance, where values from 0.9 to 1 represent an "A" and so forth.
12. Calculating performance across thresholds
To plot an ROC curve, we first need to create a tibble with sensitivity and specificity calculations for various thresholds. To do this, we pass our leads_results tibble into the roc_curve() function, set the truth argument to purchased, and pass the .pred_yes column as the third argument. This returns a tibble with the specificity and sensitivity for all unique thresholds in the .pred_yes column.
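In code, following the description above:

```r
library(dplyr)
library(yardstick)

# One row per unique threshold, with specificity and sensitivity
leads_results %>%
  roc_curve(truth = purchased, .pred_yes)
```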
13. Plotting the ROC curve
Then we pass the results of roc_curve() to the autoplot() function to display the ROC curve for our classification model.
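Continuing the sketch:

```r
library(ggplot2)  # supplies the autoplot() generic

# Plot the ROC curve from the threshold tibble
leads_results %>%
  roc_curve(truth = purchased, .pred_yes) %>%
  autoplot()
```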
14. Calculating ROC AUC
To calculate the ROC AUC, we use the roc_auc() function from yardstick. This function takes a tibble of model results, the column with the true outcome values, and the column with estimated probabilities of the positive class. Our logistic regression model has an ROC AUC of 0.763, giving us a C in terms of model performance.
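And the corresponding sketch, using the same leads_results columns:

```r
library(dplyr)
library(yardstick)

# Area under the ROC curve; the AUC value is in the .estimate column
leads_results %>%
  roc_auc(truth = purchased, .pred_yes)
```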
15. Let's practice!
Let's practice visualizing model performance!