
Calibration curves

You now know that the gradient boosted tree clf_gbt has the best overall performance. You need to check the calibration of the two models to see how stable the default prediction performance is across probabilities. You can use a chart of each model's calibration to check this by calling the calibration_curve() function.

Calibration curves can require many lines of code in Python, so you will go through each step slowly to add the different components.

The two sets of predictions clf_logistic_preds and clf_gbt_preds have already been loaded into the workspace. Also, the output from calibration_curve() for each model has been loaded as: frac_of_pos_lr, mean_pred_val_lr, frac_of_pos_gbt, and mean_pred_val_gbt.
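For reference, these arrays are the kind of output calibration_curve() produces. Below is a minimal sketch, not the course's exact code, assuming y_test holds the true default labels, each *_preds array contains predicted probabilities of default, and n_bins=10 is an illustrative choice.

# Sketch only: how the loaded arrays could be computed with scikit-learn
from sklearn.calibration import calibration_curve

# Returns the fraction of positives and the mean predicted probability per bin
frac_of_pos_lr, mean_pred_val_lr = calibration_curve(y_test, clf_logistic_preds, n_bins=10)
frac_of_pos_gbt, mean_pred_val_gbt = calibration_curve(y_test, clf_gbt_preds, n_bins=10)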

This exercise is part of the course Credit Risk Modeling in Python.

Hands-on interactive exercise

Try this exercise by completing this sample code.

# Create the calibration curve plot with the guideline
import matplotlib.pyplot as plt

# Dotted diagonal guideline representing a perfectly calibrated model
# (the label text is one reasonable choice)
plt.plot([0, 1], [0, 1], 'k:', label='Perfectly calibrated')
plt.ylabel('Fraction of positives')
plt.xlabel('Average Predicted Probability')
plt.legend()
plt.title('Calibration Curve')
plt.show()
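The guideline alone does not compare the two models. A possible next step, sketched here rather than the exercise's official solution, is to draw each model's calibration curve on the same axes before calling plt.legend() and plt.show(); the 's-' marker style and label text are illustrative choices.

import matplotlib.pyplot as plt

# Overlay each model's calibration curve on the guideline
plt.plot(mean_pred_val_lr, frac_of_pos_lr, 's-', label='Logistic Regression')
plt.plot(mean_pred_val_gbt, frac_of_pos_gbt, 's-', label='Gradient Boosted Tree')

When reading the finished chart, a curve close to the dotted diagonal indicates well-calibrated probabilities; points above the diagonal mean the model under-predicts the default probability in that bin, and points below it mean it over-predicts.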