Bringing it all together
Alright, it's time to bring together everything you've learned so far! In this final exercise of the course, you will combine your work from the previous exercises into one end-to-end XGBoost pipeline to really cement your understanding of preprocessing and pipelines in XGBoost.
Your work from the previous 3 exercises, where you preprocessed the data and set up your pipeline, has been pre-loaded. Your job is to perform a randomized search and identify the best hyperparameters.
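As a reminder of what is assumed to be pre-loaded: the grid keys in this exercise use scikit-learn's step-name prefix convention, so 'clf__learning_rate' sets learning_rate on the pipeline step named "clf". Below is a minimal, hypothetical sketch of that setup; the real preprocessing steps come from the earlier exercises, and DictVectorizer here is just a plausible stand-in.

import xgboost as xgb
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction import DictVectorizer

# Hypothetical stand-in for the pre-loaded pipeline; only the step name
# "clf" is load-bearing, since "clf__<param>" in a search grid addresses
# parameters of that step
pipeline = Pipeline([
    ("featurizer", DictVectorizer(sort=False)),  # placeholder preprocessing step
    ("clf", xgb.XGBClassifier())                 # XGBoost classifier step
])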
This exercise is part of the course
Extreme Gradient Boosting with XGBoost
Exercise instructions
- Set up the parameter grid to tune 'clf__learning_rate' (from 0.05 to 1 in increments of 0.05), 'clf__max_depth' (from 3 to 10 in increments of 1), and 'clf__n_estimators' (from 50 to 200 in increments of 50).
- Using your pipeline as the estimator, perform 2-fold RandomizedSearchCV with an n_iter of 2. Use "roc_auc" as the metric, and set verbose to 1 so the output is more detailed. Store the result in randomized_roc_auc.
- Fit randomized_roc_auc to X and y.
- Compute the best score and best estimator of randomized_roc_auc.
Hands-on interactive exercise
Try this exercise by completing the sample code below.
# Create the parameter grid
gbm_param_grid = {
    '____': ____(____, ____, ____),
    '____': ____(____, ____, ____),
    '____': ____(____, ____, ____)
}
# Perform RandomizedSearchCV
randomized_roc_auc = ____
# Fit the estimator
____
# Compute metrics
print(____)
print(____)
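For reference, a completed version might look like the following. This is a sketch rather than the graded solution: it assumes the pre-loaded objects are named pipeline, X, and y (as the instructions state) and that the pipeline's classifier step is named "clf". Note that np.arange excludes its stop value, so these ranges end at 0.95, 9, and 150 respectively.

import numpy as np
from sklearn.model_selection import RandomizedSearchCV

# Create the parameter grid; the "clf__" prefix targets the pipeline's
# classifier step (np.arange excludes the stop value)
gbm_param_grid = {
    'clf__learning_rate': np.arange(0.05, 1, 0.05),
    'clf__max_depth': np.arange(3, 10, 1),
    'clf__n_estimators': np.arange(50, 200, 50)
}

# Perform RandomizedSearchCV: 2 sampled candidates, 2-fold CV, AUC scoring
randomized_roc_auc = RandomizedSearchCV(
    estimator=pipeline,
    param_distributions=gbm_param_grid,
    n_iter=2,
    scoring="roc_auc",
    cv=2,
    verbose=1
)

# Fit the estimator on the pre-loaded features and labels
randomized_roc_auc.fit(X, y)

# Compute metrics: best cross-validated AUC and the winning pipeline
print(randomized_roc_auc.best_score_)
print(randomized_roc_auc.best_estimator_)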