1. Limits of grid search and random search
Now that you've done both GridSearch and RandomSearch for hyperparameter tuning on the Ames housing data, let's briefly go over the limits of both of these approaches for hyperparameter tuning.
2. Grid search and random search limitations
It should be clear to you that grid search and random search each suffer from distinct limitations.
As long as the number of hyperparameters and the number of distinct values per hyperparameter you search over are kept small, grid search will give you an answer in a reasonable amount of time. However, as the number of hyperparameters grows, the number of configurations a full grid search must evaluate, and so the time it takes to complete, increases exponentially. A rough sketch of this is shown below.
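The following is a minimal sketch, not the course's exact code: it uses a synthetic regression dataset as a stand-in for the Ames housing features and target, and shows how each added hyperparameter multiplies the number of models a grid search has to fit.

```python
# Minimal sketch: how grid search fits multiply as hyperparameters are added.
# X, y are a synthetic stand-in for the Ames housing data used in the course.
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=10, random_state=123)

# 3 values for each of 3 hyperparameters -> 3 * 3 * 3 = 27 configurations,
# each fit cv=4 times: 108 model fits. Every extra hyperparameter (or extra
# value per hyperparameter) multiplies this count.
gbm_param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [2, 4, 6],
    "learning_rate": [0.01, 0.1, 0.3],
}

grid_search = GridSearchCV(
    estimator=xgb.XGBRegressor(objective="reg:squarederror"),
    param_grid=gbm_param_grid,
    scoring="neg_mean_squared_error",
    cv=4,
    verbose=1,
)
grid_search.fit(X, y)
print("Best parameters:", grid_search.best_params_)
print("Lowest RMSE:", np.sqrt(np.abs(grid_search.best_score_)))
```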
For random search, the problem is a bit different. Since you specify how many iterations a random search should run, the time it takes to finish won't explode as you add more and more hyperparameters to search through. The problem is that as you add new hyperparameters to search over, the size of the hyperparameter space explodes just as it did in the grid search case, and so you are left hoping that one of the random parameter configurations the search chooses is a good one! You can always increase the number of iterations you want the random search to run, but then finding an optimal configuration becomes a combination of waiting longer and getting lucky enough to randomly hit a good set of hyperparameters. The sketch below illustrates this trade-off.
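Again as a minimal sketch with the same synthetic stand-in for the Ames data: a randomized search caps the number of model fits at n_iter regardless of how large the hyperparameter space is, which is exactly why it only ever samples a small fraction of a big grid.

```python
# Minimal sketch: random search keeps the number of fits fixed at n_iter,
# no matter how large the hyperparameter space grows.
# X, y are a synthetic stand-in for the Ames housing data used in the course.
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.model_selection import RandomizedSearchCV

X, y = make_regression(n_samples=200, n_features=10, random_state=123)

# 20 * 15 * 20 = 6000 possible configurations in this space...
gbm_param_grid = {
    "n_estimators": np.arange(50, 1050, 50),      # 20 values
    "max_depth": np.arange(2, 17),                # 15 values
    "learning_rate": np.linspace(0.01, 0.5, 20),  # 20 values
}

# ...but only 25 of them are ever tried, chosen at random.
randomized_search = RandomizedSearchCV(
    estimator=xgb.XGBRegressor(objective="reg:squarederror"),
    param_distributions=gbm_param_grid,
    n_iter=25,
    scoring="neg_mean_squared_error",
    cv=4,
    random_state=123,
    verbose=1,
)
randomized_search.fit(X, y)
print("Best parameters:", randomized_search.best_params_)
print("Lowest RMSE:", np.sqrt(np.abs(randomized_search.best_score_)))
```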
In any case, both approaches have significant limitations.
3. Let's practice!
Great, now that you've learned how to tune the most important hyperparameters found in XGBoost, let's move on to the final chapter, where we work through two end-to-end processing pipelines utilizing XGBoost and scikit-learn.