
Forecast evaluation & experimentation

In this exercise, you'll evaluate the performance of your forecast models and explore the use cases of experimentation.

The merged forecast (fc), combining predictions with actual test results, is preloaded. Evaluation functions (mape, rmse, coverage) and pandas (as pd) are also ready for use. Here's a quick reference for the functions:

import numpy as np

def mape(y, yhat):
    # Mean absolute percentage error: average absolute error relative to the actual value
    return np.mean(np.abs(y - yhat) / y)

def rmse(y, yhat):
    # Root mean squared error: square root of the mean squared error
    return np.sqrt(np.mean((y - yhat) ** 2))

def coverage(y, lower, upper):
    # Fraction of actual values that fall inside the prediction interval
    return np.mean((y >= lower) & (y <= upper))
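For a quick sanity check, here's how these helpers behave on a toy series (the numbers below are illustrative only, not from the course data):

import numpy as np

# Illustrative actuals and predictions
y = np.array([100.0, 200.0, 300.0])
yhat = np.array([110.0, 190.0, 330.0])

print(mape(y, yhat))                      # ~0.083, i.e. about 8.3% average error
print(rmse(y, yhat))                      # ~19.15, in the units of y
print(coverage(y, yhat - 25, yhat + 25))  # ~0.667: 2 of the 3 actuals fall inside the band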

First, compute performance metrics for each model. Then, answer a question about the goals of experimentation in forecasting.

This exercise is part of the course Designing Forecasting Pipelines for Production.

Hands-on interactive exercise

Have a go at this exercise by completing the sample code below, filling in each blank (____).

# Initialize a list for performance metrics
performance_metrics = []

# Loop through models and calculate metrics
for model in ["LGBMRegressor", "XGBRegressor", "LinearRegression"]:
    performance_metrics.append({
        "model": model,
        "mape": ____(fc["y"], fc[model]),
        "rmse": ____(fc["y"], fc[____]),
        "coverage": ____(fc["y"], fc[f"{model}-lo-95"], fc[f"{model}-hi-95"])
    })

# Create DataFrame and sort by RMSE
fc_performance = pd.DataFrame(performance_metrics).sort_values("____")

print(fc_performance)
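If you want to check your work, one way to fill in the blanks is sketched below. It assumes the preloaded fc frame has the y, per-model prediction, and interval columns named as above, and that you sort ascending by the "rmse" column so the best model appears first:

# Initialize a list for performance metrics
performance_metrics = []

# Loop through models and calculate metrics
for model in ["LGBMRegressor", "XGBRegressor", "LinearRegression"]:
    performance_metrics.append({
        "model": model,
        "mape": mape(fc["y"], fc[model]),
        "rmse": rmse(fc["y"], fc[model]),
        "coverage": coverage(fc["y"], fc[f"{model}-lo-95"], fc[f"{model}-hi-95"])
    })

# Create DataFrame and sort by RMSE (ascending: lower error is better)
fc_performance = pd.DataFrame(performance_metrics).sort_values("rmse")

print(fc_performance)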