1. Model complexity and overfitting
Congratulations on deciding to continue with this course! Deciding how complex a model should be is one of the most critical skills a data scientist must have, and is the subject of this lesson.
2. What is model complexity?
Often classifiers have extra parameters that control their flexibility, or complexity. For example, inspecting the documentation of the random forest classifier, you will notice an entry for max_depth, which stands for maximum depth. A random forest classifier combines the predictions from a large number of decision trees. Using deeper trees makes the classifier more complex.
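For instance, here is a minimal sketch of setting this parameter, assuming scikit-learn is installed; the variable names are illustrative:

```python
from sklearn.ensemble import RandomForestClassifier

# A shallow forest: each tree contains at most 2 levels of decision rules.
shallow_forest = RandomForestClassifier(max_depth=2)

# A deeper forest: trees may grow up to 4 levels, making the model more complex.
deep_forest = RandomForestClassifier(max_depth=4)
```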
3. What is model complexity?
Let's start by fitting one classifier with depth 2 and one with depth 4 to the credit scoring dataset from the previous lesson. What would a typical tree from each classifier look like? We can access the individual trees through the fitted estimators_ attribute. Trees of depth 2 contain at most two nested decision rules, whereas depth 4 produces much deeper rules. Although these trees come from the same classifier family and the same data, they look very different.
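A sketch of how the trees could be inspected, using the two forests above; X_train and y_train are placeholders for the credit training data:

```python
# Fit both forests to the same training data (X_train, y_train are placeholders).
shallow_forest.fit(X_train, y_train)
deep_forest.fit(X_train, y_train)

# estimators_ holds the individual fitted decision trees of each forest.
first_shallow_tree = shallow_forest.estimators_[0]
first_deep_tree = deep_forest.estimators_[0]

print(first_shallow_tree.get_depth())  # at most 2
print(first_deep_tree.get_depth())     # at most 4
```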
4. Train/test/validation
Tuning a complexity parameter is treated in the same way as model selection. You need to split your data into training and test sets, fit several classifiers of different depths to the training data, and pick the one with the best test performance. You can also keep a separate hold-out dataset in order to get a fresh, final estimate of the accuracy of the winning classifier.
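One way this procedure could look in code, assuming the credit data is already loaded as X and y; the candidate depths and random_state are illustrative:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hold out part of the data for testing (X, y are assumed to be loaded).
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Fit classifiers of increasing depth and keep the best test performer.
best_depth, best_score = None, 0.0
for depth in [2, 4, 6, 8]:
    clf = RandomForestClassifier(max_depth=depth, random_state=42)
    clf.fit(X_train, y_train)
    score = clf.score(X_test, y_test)
    if score > best_score:
        best_depth, best_score = depth, score
```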
5. Cross-validation
An alternative approach is cross-validation, which splits the data into several chunks and repeats the train/test step, picking a different chunk in each round to use as test data, shown here in yellow, while using the remaining data for training, shown in blue. Accuracy is averaged over all rounds, which makes this technique more stable.
6. Cross-validation
Cross-validation is implemented as cross_val_score in the scikit-learn model_selection module. The function takes as input a classifier instance and the full data X and y, which it then proceeds to split several times - three times, by default. The result is three estimates of accuracy, one for each run, which can be averaged using mean() from NumPy.
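A minimal sketch, again assuming X and y hold the credit data; the number of folds is spelled out explicitly here, since the default varies across scikit-learn versions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

clf = RandomForestClassifier(max_depth=4)

# Three train/test rounds; each chunk serves exactly once as test data.
scores = cross_val_score(clf, X, y, cv=3)

# Average the per-round accuracies for a more stable estimate.
print(np.mean(scores))
```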
7. Tuning model complexity
To easily optimize a hyperparameter like tree depth using cross-validation, you can use the function GridSearchCV(), which takes as input a dictionary of parameters and values to try out, and a classifier instance. The resulting object is fitted to the entire dataset and stores the best-performing values in an attribute called best_params_.
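A sketch of such a depth search, with an illustrative grid of candidate values:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Dictionary mapping the parameter name to the values to try out.
param_grid = {'max_depth': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]}

grid = GridSearchCV(RandomForestClassifier(), param_grid)
grid.fit(X, y)  # cross-validates one classifier per candidate depth

# The best-performing depth found during the search.
print(grid.best_params_)
```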
8. Optimal complexity
Let's now review the accuracy of our random forest as the depth ranges from 1 to 10. Accuracy on the same data used for training, known as in-sample accuracy, is shown here in blue. As the trees become deeper, the classifier becomes so complex that it can now almost memorize the training data. This way, it can reach 100% in-sample accuracy.
9. Optimal complexity
Performance using cross-validation, also known as out-of-sample accuracy, is much lower than in-sample performance, and a much more realistic estimate of future performance.
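A sketch of how both curves could be computed, with plotting omitted; X and y are again assumed to hold the credit data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

for depth in range(1, 11):
    clf = RandomForestClassifier(max_depth=depth)
    # In-sample accuracy: score on the same data used for fitting.
    in_sample = clf.fit(X, y).score(X, y)
    # Out-of-sample accuracy: averaged over cross-validation rounds.
    out_of_sample = np.mean(cross_val_score(clf, X, y))
    print(depth, in_sample, out_of_sample)
```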
10. Optimal complexity
The most important observation is that out-of-sample performance actually drops past a certain depth, due to overfitting: trying too hard to memorize the training data leads to worse performance on the test data. This also happens in real life! If you memorize the answers to past exam questions, you will only do well on the exam if the same questions appear in exactly the same wording.
11. More complex is not always better!
You are already wiser than the average data scientist, because you know that complex models are not always better than simple ones. The exercises that follow will let you develop more intuition about this insight.