
Boosting contest: Light vs Extreme

While the performance of the CatBoost model is relatively good, let's try two other flavors of boosting and see which performs better: the "Light" or the "Extreme" approach.

CatBoost is highly recommended when there are categorical features. In this case, all features are numeric, so one of the other approaches might produce better results.
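
For contrast, here is a minimal sketch of how a CatBoostRegressor takes categorical columns directly; the column names below are hypothetical, not features of this exercise's dataset:

# Illustrative only: CatBoost handles categorical features natively.
# 'city' and 'product_type' are hypothetical column names.
from catboost import CatBoostRegressor

reg_cat = CatBoostRegressor(max_depth=3, learning_rate=0.1, n_estimators=100,
                            cat_features=['city', 'product_type'],
                            random_state=500, verbose=False)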

As we are building regressors, we'll use an additional parameter, objective, which specifies the loss function to optimize. To use the squared error, we'll set objective to 'reg:squarederror' for XGBoost and 'mean_squared_error' for LightGBM.

In addition, we'll set the n_jobs parameter for XGBoost, which controls the number of parallel threads and can reduce training time.
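
To gauge the effect on your own machine, here is a rough timing sketch (assuming X_train and y_train are already loaded, as in this exercise; timings vary by hardware):

import time
import xgboost as xgb

# Compare training time with one vs. two threads
for jobs in (1, 2):
    reg = xgb.XGBRegressor(max_depth=3, learning_rate=0.1, n_estimators=100,
                           n_jobs=jobs, objective='reg:squarederror', random_state=500)
    start = time.perf_counter()
    reg.fit(X_train, y_train)
    print('n_jobs={}: {:.2f}s'.format(jobs, time.perf_counter() - start))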

Note: be careful not to use classifiers, or your session might expire!

This exercise is part of the course

Ensemble Methods in Python


Instructions

  • Build an XGBRegressor using the parameters: max_depth=3, learning_rate=0.1, n_estimators=100, and n_jobs=2.
  • Build an LGBMRegressor using the parameters: max_depth=3, learning_rate=0.1, and n_estimators=100.

Hands-on interactive exercise

Try this exercise by completing this sample code.

# Imports (pre-loaded in the exercise environment)
import numpy as np
import xgboost as xgb
import lightgbm as lgb
from sklearn.metrics import mean_squared_error

# Build and fit an XGBoost regressor
reg_xgb = xgb.XGBRegressor(max_depth=3, learning_rate=0.1, n_estimators=100,
                           n_jobs=2, objective='reg:squarederror', random_state=500)
reg_xgb.fit(X_train, y_train)

# Build and fit a LightGBM regressor
reg_lgb = lgb.LGBMRegressor(max_depth=3, learning_rate=0.1, n_estimators=100,
                            objective='mean_squared_error', seed=500)
reg_lgb.fit(X_train, y_train)

# Calculate the predictions and evaluate both regressors
pred_xgb = reg_xgb.predict(X_test)
rmse_xgb = np.sqrt(mean_squared_error(y_test, pred_xgb))
pred_lgb = reg_lgb.predict(X_test)
rmse_lgb = np.sqrt(mean_squared_error(y_test, pred_lgb))

print('Extreme: {:.3f}, Light: {:.3f}'.format(rmse_xgb, rmse_lgb))