
In-sample performance

It's important to know whether your regression model is useful. A useful model is one that captures the structure of the training set well. One way to assess this in-sample performance is to predict on the training data and calculate the mean absolute error of the predictions.

In this exercise, you will evaluate your in-sample predictions using MAE (mean absolute error). MAE tells you, on average, how far the predictions are from the true values.

It is calculated using the following formula, where \(n\) is the number of predictions, \(y_i\) is the \(i\)th true value, and \(\hat{y}_i\) is the \(i\)th prediction:

$$\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$$
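As a quick sanity check of the formula, here is a tiny example in R with made-up numbers (the values are illustrative only):

# Made-up true values and predictions
truth       <- c(3.0, 3.5, 4.0)
predictions <- c(2.5, 3.5, 5.0)

# MAE: mean of the absolute errors, (0.5 + 0 + 1) / 3 = 0.5
mean(abs(truth - predictions))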

Available in your workspace is your model, the regression tree that you built in the previous exercises.

This exercise is part of the course Machine Learning with Tree-Based Models in R.

Exercise instructions

  • Create in_sample_predictions by using model to predict on the chocolate_train tibble.
  • Calculate a vector abs_diffs that contains the absolute differences between the in-sample predictions and the true grades.
  • Calculate the mean absolute error according to the formula above.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Predict using the training set
in_sample_predictions <- predict(model,
                                 ___)

# Calculate the vector of absolute differences
abs_diffs <- ___(___$___ - ___$___)

# Calculate the mean absolute error
1 / ___ * ___
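One possible way to fill in the blanks is sketched below. It assumes the model is a tidymodels/parsnip fit, so predict() takes a new_data argument and returns a tibble with a .pred column, and it assumes the true grades live in a chocolate_train column named final_grade (that column name is an assumption, not given in the exercise text):

# Predict using the training set
in_sample_predictions <- predict(model,
                                 new_data = chocolate_train)

# Calculate the vector of absolute differences
# (.pred holds the predictions; final_grade is assumed to hold the true grades)
abs_diffs <- abs(in_sample_predictions$.pred - chocolate_train$final_grade)

# Calculate the mean absolute error
1 / length(abs_diffs) * sum(abs_diffs)

The last line mirrors the formula directly; mean(abs_diffs) would give the same result.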