Boosting contest: Light vs Extreme
While the performance of the CatBoost model is relatively good, let's try two other flavors of boosting and see which performs better: the "Light" or the "Extreme" approach.
CatBoost is highly recommended when there are categorical features. In this case, all features are numeric, so one of the other approaches might produce better results.
As we are building regressors, we'll use an additional parameter, objective, which specifies the loss function the model optimizes. To use a squared error, we'll set objective to 'reg:squarederror' for XGBoost and 'mean_squared_error' for LightGBM.
In addition, we'll specify the n_jobs parameter for XGBoost so that training runs on multiple threads, reducing computation time.
Note: be careful not to use classifiers, or your session might expire!
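As a quick illustration, here is a minimal sketch of how these arguments are passed to each constructor (assuming xgboost and lightgbm are imported as xgb and lgb; the other hyperparameters are omitted here and set in the exercise below):

# Sketch only: objective (and n_jobs for XGBoost) are ordinary constructor arguments
reg_xgb = xgb.XGBRegressor(objective='reg:squarederror', n_jobs=2)
reg_lgb = lgb.LGBMRegressor(objective='mean_squared_error')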
This exercise is part of the course
Ensemble Methods in Python
Exercise instructions
- Build an XGBRegressor using the parameters: max_depth = 3, learning_rate = 0.1, n_estimators = 100, and n_jobs = 2.
- Build an LGBMRegressor using the parameters: max_depth = 3, learning_rate = 0.1, and n_estimators = 100.
Hands-on interactive exercise
Try this exercise by completing this sample code.
# Build and fit an XGBoost regressor
reg_xgb = ____.____(____, ____, ____, ____, objective='reg:squarederror', random_state=500)
reg_xgb.fit(X_train, y_train)
# Build and fit a LightGBM regressor
reg_lgb = ____.____(____, ____, ____, objective='mean_squared_error', seed=500)
reg_lgb.fit(X_train, y_train)
# Calculate the predictions and evaluate both regressors
pred_xgb = reg_xgb.predict(X_test)
rmse_xgb = np.sqrt(mean_squared_error(y_test, pred_xgb))
pred_lgb = reg_lgb.predict(X_test)
rmse_lgb = np.sqrt(mean_squared_error(y_test, pred_lgb))
print('Extreme: {:.3f}, Light: {:.3f}'.format(rmse_xgb, rmse_lgb))
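For reference, one possible completed version of the template above, written as a self-contained sketch. It assumes X_train, X_test, y_train, and y_test are already defined (as they typically are in the exercise environment) and spells out the imports explicitly:

import numpy as np
import xgboost as xgb
import lightgbm as lgb
from sklearn.metrics import mean_squared_error

# Build and fit an XGBoost regressor with a squared-error objective
reg_xgb = xgb.XGBRegressor(max_depth=3, learning_rate=0.1, n_estimators=100,
                           n_jobs=2, objective='reg:squarederror', random_state=500)
reg_xgb.fit(X_train, y_train)

# Build and fit a LightGBM regressor with a mean-squared-error objective
reg_lgb = lgb.LGBMRegressor(max_depth=3, learning_rate=0.1, n_estimators=100,
                            objective='mean_squared_error', seed=500)
reg_lgb.fit(X_train, y_train)

# Calculate the predictions and evaluate both regressors with RMSE
pred_xgb = reg_xgb.predict(X_test)
rmse_xgb = np.sqrt(mean_squared_error(y_test, pred_xgb))

pred_lgb = reg_lgb.predict(X_test)
rmse_lgb = np.sqrt(mean_squared_error(y_test, pred_lgb))

print('Extreme: {:.3f}, Light: {:.3f}'.format(rmse_xgb, rmse_lgb))

Whichever regressor prints the lower RMSE performs better on this test set.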