Visually scoring credit models
Now, you want to visualize the performance of the model. In ROC charts, the X and Y axes are two metrics you've already looked at: the false positive rate (fall-out), and the true positive rate (sensitivity).
You can create a ROC chart of its performance with the following code:
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
fallout, sensitivity, thresholds = roc_curve(y_test, prob_default)
plt.plot(fallout, sensitivity)
To calculate the AUC score, you use roc_auc_score().
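As a minimal sketch, assuming y_test and prob_default are defined as in the snippet above:

from sklearn.metrics import roc_auc_score
# AUC summarizes the ROC curve as a single number between 0 and 1
auc = roc_auc_score(y_test, prob_default)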
The credit data cr_loan_prep, along with the data sets X_test and y_test, have all been loaded into the workspace. A trained LogisticRegression() model named clf_logistic has also been loaded into the workspace.
Exercise instructions
- Create a set of predictions for probability of default and store them in preds.
- Print the accuracy score of the model on the X and y test sets.
- Use roc_curve() on the test data and probabilities of default to create fallout and sensitivity. Then, create a ROC curve plot with fallout on the x-axis.
- Compute the AUC of the model using test data and probabilities of default and store it in auc.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Create predictions and store them in a variable
____ = clf_logistic.____(____)
# Print the accuracy score of the model
print(clf_logistic.____(____, ____))
# Plot the ROC curve of the probabilities of default
prob_default = preds[:, 1]
fallout, sensitivity, thresholds = ____(____, ____)
plt.plot(fallout, sensitivity, color = 'darkorange')
plt.plot([0, 1], [0, 1], linestyle='--')
plt.____()
# Compute the AUC and store it in a variable
____ = ____(____, ____)
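One way the blanks might be completed, assuming roc_curve and roc_auc_score have been imported from sklearn.metrics and matplotlib.pyplot as plt:

# Create predictions and store them in a variable
preds = clf_logistic.predict_proba(X_test)

# Print the accuracy score of the model
print(clf_logistic.score(X_test, y_test))

# Plot the ROC curve of the probabilities of default
prob_default = preds[:, 1]
fallout, sensitivity, thresholds = roc_curve(y_test, prob_default)
plt.plot(fallout, sensitivity, color = 'darkorange')
plt.plot([0, 1], [0, 1], linestyle='--')
plt.show()

# Compute the AUC and store it in a variable
auc = roc_auc_score(y_test, prob_default)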