Model Drift
Once a forecasting model is deployed to production, it's critical to monitor its performance continuously.
The key challenge in this phase is model drift: the degradation of a model's performance over time. Model drift is often driven by concept drift, which occurs when the underlying patterns in the data change. As the statistical properties of the series evolve, the model, trained on historical patterns, becomes misaligned with reality, and performance drops.
The main types of concept drift are sudden drift, gradual drift, and recurring drift. Sudden drift is an unexpected, rapid shift in the series, typically caused by external events introducing structural breaks. A classic example is COVID-19's impact on metrics like unemployment rates or airline passenger volumes: overnight, historical patterns became obsolete. Gradual drift occurs when the data distribution changes slowly over time, with new behaviors progressively replacing old ones. For instance, electricity consumption may rise steadily due to electric vehicle adoption. Recurring drift happens when seasonal patterns change or evolve over time, altering the series' cyclical behavior.
Other potential causes of model drift include data integrity and feature engineering issues. Data integrity issues occur when corrupted input data severely impacts model performance. As discussed previously, monitoring the data pipeline is essential for early identification of these cases. Feature engineering issues arise when features or labels are misaligned or incorrectly defined. For example, labeling an event with the wrong timestamp or mismapping categorical variables leads to faulty model behavior. Reviewing forecast residuals can help identify feature-label mismatches.
In the model lifecycle, monitoring plays a critical role in early drift detection, signaling when to return the model to experimentation for retraining or replacement.
Two straightforward methods can detect model drift: tracking forecast accuracy over time and residual analysis. For tracking forecast accuracy, use a moving average window to monitor performance metrics like MAPE or RMSE. Moving averages smooth out short-term fluctuations and help reveal gradual performance declines. Residual analysis detects unusual variance, signaling that the model is missing important features or that the data distribution has changed. Let's stick to the first approach.
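Although we won't develop it further here, a minimal sketch of the residual-variance check, assuming the forecast log is a CSV with `date`, `actual`, and `forecast` columns (the file and column names are assumptions), could look like this:

```python
import pandas as pd

# Assumed log of daily actuals and forecasts; file and column names are placeholders
logs = pd.read_csv("forecast_log.csv", parse_dates=["date"], index_col="date")

# Residuals are actuals minus forecasts; a rolling variance that climbs well above
# its backtesting baseline suggests the model is missing signal or the data has shifted
residuals = logs["actual"] - logs["forecast"]
rolling_residual_var = residuals.rolling(window=14).var()
```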
We'll use the forecast logs, which contain the model's performance scores.
Let's start by importing the required libraries, loading the forecast log, and calculating the MAPE moving average using 7-day and 14-day rolling windows.
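A minimal sketch of this step, assuming the forecast log is a CSV named `forecast_log.csv` with a `date` column and a daily `mape` score (file and column names are assumptions):

```python
import pandas as pd

# Load the forecast log with the model's daily performance scores
logs = pd.read_csv("forecast_log.csv", parse_dates=["date"], index_col="date")

# MAPE moving averages over 7-day and 14-day trailing windows
logs["mape_7d"] = logs["mape"].rolling(window=7).mean()
logs["mape_14d"] = logs["mape"].rolling(window=14).mean()
```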
Let's review the rolling windows.
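Continuing the sketch, a quick inspection of the new columns might look like:

```python
# Peek at the most recent daily MAPE values and their rolling averages
print(logs[["mape", "mape_7d", "mape_14d"]].tail())
```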
We leverage backtesting performance to define threshold levels for triggering drift alerts. To reduce false positives, we add two standard deviations to the backtesting MAPE mean, which gives a threshold of approximately three percent.
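A sketch of that threshold calculation, assuming the backtesting MAPE scores are available as a pandas Series (the name `backtest_mape` and its values are illustrative placeholders):

```python
import pandas as pd

# Illustrative placeholder MAPE scores from backtesting runs
backtest_mape = pd.Series([0.020, 0.024, 0.022, 0.026, 0.028])

# Alert threshold: mean backtest MAPE plus two standard deviations, to reduce false positives
threshold = backtest_mape.mean() + 2 * backtest_mape.std()
print(f"Drift alert threshold: {threshold:.3f}")  # roughly 0.03 (three percent) here
```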
Let's plot the model score over time, adding the threshold line in red and the MAPE rolling windows.
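A plotting sketch, continuing from the `logs` DataFrame and `threshold` defined above:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(logs.index, logs["mape"], alpha=0.4, label="Daily MAPE")
ax.plot(logs.index, logs["mape_7d"], label="7-day rolling MAPE")
ax.plot(logs.index, logs["mape_14d"], label="14-day rolling MAPE")
ax.axhline(threshold, color="red", linestyle="--", label="Drift alert threshold")
ax.set_xlabel("Date")
ax.set_ylabel("MAPE")
ax.legend()
plt.show()
```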
Both the seven-day and fourteen-day trailing error lines show MAPE increasing over time.
The model goes out of tune during mid-June.
The seven-day moving average provides an early indication thirty days earlier, signaling a good point to return the model to experimentation for retuning or replacement with a better-suited alternative.
It took additional days for the 14-day moving average to reach the threshold. When selecting a window size, remember that shorter windows react quickly but are more affected by outliers, while longer windows handle outliers better but respond more slowly to changes.
Let's tackle model drift with the following exercises.