Extreme Gradient Boosting with XGBoost


Exercise

Using regularization in XGBoost

Having seen an example of L1 regularization in the video, you'll now vary the L2 regularization penalty, also known as "lambda", and see its effect on overall model performance on the Ames housing dataset.

Instructions

  • Create your DMatrix from X and y as before.
  • Create an initial parameter dictionary specifying an "objective" of "reg:squarederror" and "max_depth" of 3.
  • Use xgb.cv() inside a for loop and systematically vary the "lambda" value by passing in the current L2 value (reg).
  • Append the "test-rmse-mean" from the last boosting round for each cross-validated xgboost model.
  • Hit 'Submit Answer' to view the results. What do you notice?