Measuring AUC
Now that you've used cross-validation to compute average out-of-sample accuracy (after converting it from an error), it's very easy to compute any other metric you might be interested in. All you have to do is pass it (or a list of metrics) in as an argument to the metrics parameter of xgb.cv().
Your job in this exercise is to compute another common metric used in binary classification - the area under the curve ("auc"). As before, churn_data is available in your workspace, along with the DMatrix churn_dmatrix and parameter dictionary params.
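If you are working outside the course workspace, the sketch below shows how these objects might be built. The file name, column layout, and parameter values are assumptions for illustration, not the course's exact setup.

import pandas as pd
import xgboost as xgb

# Hypothetical: load the churn data; the course workspace provides this DataFrame already
churn_data = pd.read_csv("classification_data.csv")  # assumed file name

# Assume the last column is the binary churn label and the rest are features
X = churn_data.iloc[:, :-1]
y = churn_data.iloc[:, -1]

# DMatrix is XGBoost's optimized internal data structure
churn_dmatrix = xgb.DMatrix(data=X, label=y)

# Minimal parameter dictionary for a binary classification objective
params = {"objective": "binary:logistic", "max_depth": 3}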
This exercise is part of the course
Extreme Gradient Boosting with XGBoost
Exercise instructions
- Perform 3-fold cross-validation with 5 boosting rounds and "auc" as your metric.
- Print the "test-auc-mean" column of cv_results.
Hands-on interactive exercise
Try this exercise by completing the sample code.
# Perform cross-validation: cv_results
cv_results = ____(dtrain=____, params=____,
nfold=____, num_boost_round=____,
metrics="____", as_pandas=True, seed=123)
# Print cv_results
print(cv_results)
# Print the AUC
print((cv_results["____"]).iloc[-1])
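For reference, here is one possible completion of the template, a sketch assuming the churn_dmatrix and params objects described above:

import xgboost as xgb

# Perform 3-fold cross-validation with 5 boosting rounds and AUC as the metric
cv_results = xgb.cv(dtrain=churn_dmatrix, params=params,
                    nfold=3, num_boost_round=5,
                    metrics="auc", as_pandas=True, seed=123)

# Print the full cross-validation results DataFrame
print(cv_results)

# Print the test AUC, averaged across folds, for the final boosting round
print((cv_results["test-auc-mean"]).iloc[-1])

Note that metrics also accepts a list (for example metrics=["auc", "error"]) if you want to track several metrics in the same cross-validation run; each metric then gets its own mean and standard deviation columns in the results.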