Measuring accuracy
You'll now practice using XGBoost's learning API through its built-in cross-validation capabilities. As Sergey discussed in the previous video, XGBoost gets its lauded performance and efficiency gains by using its own optimized data structure for datasets, called a DMatrix.
In the previous exercise, the input datasets were converted into DMatrix data on the fly, but when you use the xgb.cv() function, you have to first explicitly convert your data into a DMatrix. So, that's what you will do here before running cross-validation on churn_data.
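For instance, here is a minimal sketch of that conversion, using a small synthetic array as a stand-in for churn_data (the toy data and the names X_toy, y_toy, and dmatrix are illustrative, not part of the exercise):

import numpy as np
import xgboost as xgb

# Toy stand-in for churn_data: 20 samples, 2 features, binary labels.
rng = np.random.RandomState(123)
X_toy = rng.rand(20, 2)
y_toy = rng.randint(0, 2, size=20)

# Wrap the features and labels in XGBoost's optimized DMatrix structure.
dmatrix = xgb.DMatrix(data=X_toy, label=y_toy)
print(dmatrix.num_row(), dmatrix.num_col())  # 20 2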
This exercise is part of the course
Extreme Gradient Boosting with XGBoost
Exercise instructions
- Create a DMatrix called churn_dmatrix from churn_data using xgb.DMatrix(). The features are available in X and the labels in y.
- Perform 3-fold cross-validation by calling xgb.cv(). dtrain is your churn_dmatrix, params is your parameter dictionary, nfold is the number of cross-validation folds (3), num_boost_round is the number of trees we want to build (5), and metrics is the metric you want to compute (this will be "error", which we will convert to an accuracy); see the sketch after this list.
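To make the call concrete, here is a sketch of running it on the toy DMatrix built above, with parameter values mirroring the instructions (toy_params and toy_results are illustrative names, and as_pandas=True is assumed so the result comes back as a pandas DataFrame):

# 3-fold cross-validation with 5 boosting rounds on the toy data.
toy_params = {"objective": "reg:logistic", "max_depth": 3}
toy_results = xgb.cv(dtrain=dmatrix, params=toy_params,
                     nfold=3, num_boost_round=5,
                     metrics="error", as_pandas=True, seed=123)

# "error" is the misclassification rate, so accuracy is 1 - error.
print(1 - toy_results["test-error-mean"].iloc[-1])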
Hands-on interactive exercise
Try this exercise and complete the sample code.
# Create arrays for the features and the target: X, y
X, y = churn_data.iloc[:,:-1], churn_data.iloc[:,-1]
# Create the DMatrix from X and y: churn_dmatrix
churn_dmatrix = ____(data=____, label=____)
# Create the parameter dictionary: params
params = {"objective":"reg:logistic", "max_depth":3}
# Perform cross-validation: cv_results
cv_results = ____(dtrain=____, params=____, 
                  nfold=____, num_boost_round=____, 
                  metrics="____", as_pandas=____, seed=123)
# Print cv_results
print(cv_results)
# Print the accuracy
print((1 - cv_results["test-error-mean"]).iloc[-1])
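For reference, here is one possible completion of the blanks, following the instructions above (this assumes churn_data is a pandas DataFrame whose last column holds the labels, that xgboost is imported as xgb, and that as_pandas=True so cv_results is returned as a DataFrame):

# One way to fill in the blanks: check it against your own attempt.
churn_dmatrix = xgb.DMatrix(data=X, label=y)

cv_results = xgb.cv(dtrain=churn_dmatrix, params=params,
                    nfold=3, num_boost_round=5,
                    metrics="error", as_pandas=True, seed=123)

# "error" is the fraction of misclassified examples, so 1 - error is the
# accuracy; .iloc[-1] takes the value after the final (5th) boosting round.
print((1 - cv_results["test-error-mean"]).iloc[-1])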