1. Wrap up
Well done on reaching the end of the course! Let's summarize all your learnings.
2. Hyperparameters vs Parameters
You first learned the difference between hyperparameters and parameters.
Remember that hyperparameters are inputs to the estimator or algorithm that you set; they are not learned by the algorithm during the modeling process.
Parameters are learned by the algorithm and returned to you. You do not set these.
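As a quick illustration, here is a minimal sketch (the data and values are made up, not from the course) contrasting a hyperparameter you set with the parameters the algorithm learns:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, random_state=42)

    # C is a hyperparameter: we choose it before fitting
    log_reg = LogisticRegression(C=0.1)
    log_reg.fit(X, y)

    # coef_ and intercept_ are parameters: learned from the data, not set by us
    print(log_reg.coef_)
    print(log_reg.intercept_)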
3. Which hyperparameters & values?
A few times during the course you explored best practices for setting hyperparameters and choosing the values to search over. You learned some top tips, like:
Which hyperparameters are better to start with than others, for example in a Random Forest use case.
That there are some silly values you can set for hyperparameters that will only waste your effort.
That you need to beware of conflicting hyperparameter values, especially when the error may not be obvious,
and finally, that these best practices are specific to each algorithm and hyperparameter, so you have some work to do researching and learning them! A short sketch of one conflicting combination follows below.
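Here is a hedged sketch of one such conflict (an illustrative example, not necessarily the one used in the course): scikit-learn's 'lbfgs' solver does not support an L1 penalty, and the error only appears when you fit.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, random_state=42)

    # 'lbfgs' only supports an L2 (or no) penalty, so this combination conflicts
    clashing_model = LogisticRegression(solver='lbfgs', penalty='l1')
    try:
        clashing_model.fit(X, y)
    except ValueError as error:
        print(error)  # the conflict only surfaces at fit time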
4. Remembering Grid Search
We then learned about grid search.
This is where we construct a grid of all the values we wish to test for all the different hyperparameters,
then undertake an exhaustive search of all the different combinations,
and finally pick the best model.
You learned that this is a computationally expensive method, but it is guaranteed to find the best model in the grid you specify. You will remember being reminded often of the importance of setting good grid values!
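A minimal grid search sketch, assuming a Random Forest classifier and purely illustrative grid values:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV

    X, y = make_classification(n_samples=300, random_state=42)

    # Illustrative grid values - in practice you would research sensible ranges
    param_grid = {
        'n_estimators': [100, 300, 500],
        'max_depth': [4, 8, 12],
    }

    grid_search = GridSearchCV(
        estimator=RandomForestClassifier(random_state=42),
        param_grid=param_grid,
        cv=3,                  # every one of the 9 combinations gets 3-fold CV
        scoring='accuracy',
    )
    grid_search.fit(X, y)

    print(grid_search.best_params_)  # the best combination found in the grid
    print(grid_search.best_score_)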
5. Remembering Random Search
Our next method was random search.
This was very similar to grid search. The main difference was that instead of trying every square (or hyperparameter combination) on the grid, we randomly selected a certain number of them.
This method finds a reasonably good model faster and more efficiently, but it is not guaranteed to find the best model on your grid.
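A matching random search sketch over an illustrative grid; n_iter controls how many combinations are sampled rather than trying them all:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RandomizedSearchCV

    X, y = make_classification(n_samples=300, random_state=42)

    random_search = RandomizedSearchCV(
        estimator=RandomForestClassifier(random_state=42),
        param_distributions={
            'n_estimators': [100, 200, 300, 400, 500],
            'max_depth': [2, 4, 6, 8, 10, 12],
        },
        n_iter=10,             # sample 10 of the 30 combinations at random
        cv=3,
        scoring='accuracy',
        random_state=42,
    )
    random_search.fit(X, y)

    print(random_search.best_params_)  # best of the sampled combinations only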
6. From uninformed to informed search
Finally we looked at some advanced methods that are known as 'informed search'.
This is where each iteration learns from the last, as opposed to grid and random search, where you do all your modeling at once and then pick the best.
These methods were:
'Coarse to Fine', where you iteratively run random searches to narrow your search space before a final grid search.
An introduction to Bayesian hyperparameter tuning, where you update your prior beliefs as new evidence about model performance arrives (a small worked example follows this list).
And finally, genetic algorithms, which draw from nature and how evolution selects the fittest, just as you select your best models over the generations.
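To make the Bayesian idea concrete, here is a toy Bayes' rule update with made-up numbers: a prior belief that a region of hyperparameter space is 'good', updated after seeing a strong cross-validation score from that region.

    # All numbers are made up for illustration
    p_good = 0.30                    # prior: P(region of the space is good)
    p_high_given_good = 0.80         # likelihood: P(high CV score | good)
    p_high_given_bad = 0.20          # likelihood: P(high CV score | not good)

    # total probability of seeing a high score
    p_high = p_high_given_good * p_good + p_high_given_bad * (1 - p_good)

    # posterior: P(region is good | high CV score) - Bayes' rule
    p_good_given_high = p_high_given_good * p_good / p_high
    print(round(p_good_given_high, 3))  # roughly 0.632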
7. Thank you!
Thanks for taking this course. I hope you learned some useful methodologies for your future work undertaking hyperparameter tuning in Python!