Evaluating forecast performance
In this exercise, you'll evaluate and visualize the performance of the forecast model you built in the previous exercise.
The test dataset, the ml_forecast results, and plot_series are preloaded, along with the evaluation functions (mape, rmse, and coverage, shown below) and pandas as pd.
from numpy import mean  # assumed import; mape and rmse below rely on mean

def mape(y, yhat):
    # Mean absolute percentage error: average of |error| relative to the actual value
    mape = mean(abs(y - yhat) / y)
    return mape

def rmse(y, yhat):
    # Root mean squared error
    rmse = (mean((y - yhat) ** 2)) ** 0.5
    return rmse

def coverage(y, lower, upper):
    # Share of actual values that fall inside the prediction interval
    coverage = sum((y <= upper) & (y >= lower)) / len(y)
    return coverage
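For intuition, here is a small sketch of how these metrics behave on a handful of made-up points (the numbers are illustrative only, not the exercise data):

y = pd.Series([100, 120, 110])        # actual values
yhat = pd.Series([90, 125, 115])      # point forecasts
lo = pd.Series([80, 110, 100])        # lower 95% bounds
hi = pd.Series([105, 135, 125])       # upper 95% bounds

print(mape(y, yhat))        # mean of |error| / actual, roughly 0.062
print(rmse(y, yhat))        # square root of the mean squared error, roughly 7.1
print(coverage(y, lo, hi))  # share of actuals inside the interval, 1.0 here

Lower mape and rmse are better, while coverage should sit close to the nominal level of the interval (0.95 for a 95% interval).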
Let’s first assess the model’s performance and then visualize the forecast.
This exercise is part of the course Forecasting in Practice.
Exercise instructions
- Create fc by merging the ml_forecast and test datasets.
- Calculate rmse, storing it as a variable called r within the for loop.
- Complete the evaluation by sorting fc_performance by rmse.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Combine the data
fc = ml_forecast.____(____, how="left", on="ds")
fc_performance = None
for model in ["LGBMRegressor", "XGBRegressor", "LinearRegression"]:
    m = mape(y=fc["y"], yhat=fc[model])
    # Calculate rmse
    r = ____(y=fc["____"], yhat=fc[____])
    c = coverage(y=fc["y"], lower=fc[model + "-lo-95"], upper=fc[model + "-hi-95"])
    perf = {"model": model, "mape": m, "rmse": r, "coverage": c}
    if fc_performance is None:
        fc_performance = pd.DataFrame([perf])
    else:
        fc_performance = pd.concat([fc_performance, pd.DataFrame([perf])])
# Sort the performance metrics by rmse
print(fc_performance.____("____"))
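For reference, here is one way the blanks could be filled in, assuming test contains the ds and y columns and ml_forecast holds the per-model predictions and 95% interval columns used above:

# Combine the data
fc = ml_forecast.merge(test, how="left", on="ds")

fc_performance = None
for model in ["LGBMRegressor", "XGBRegressor", "LinearRegression"]:
    m = mape(y=fc["y"], yhat=fc[model])
    # Calculate rmse
    r = rmse(y=fc["y"], yhat=fc[model])
    c = coverage(y=fc["y"], lower=fc[model + "-lo-95"], upper=fc[model + "-hi-95"])
    perf = {"model": model, "mape": m, "rmse": r, "coverage": c}
    if fc_performance is None:
        fc_performance = pd.DataFrame([perf])
    else:
        fc_performance = pd.concat([fc_performance, pd.DataFrame([perf])])

# Sort the performance metrics by rmse
print(fc_performance.sort_values("rmse"))

An equivalent, slightly more idiomatic pattern is to collect the perf dicts in a list inside the loop and call pd.DataFrame on that list once at the end, which avoids repeated pd.concat calls.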