
A gradient boosting model

Now we'll fit a gradient boosting (GB) model. It's been said that a linear model is like a Toyota Camry, while GB is like a Black Hawk helicopter. GB has the potential to outperform random forests, but it doesn't always do so. This is an instance of the no free lunch theorem: no single model works best on every problem, so we should always try several different models on each one.

GB is similar to random forests, but the trees are built sequentially rather than independently. At each iteration, the next tree is fit to the residual errors of the model so far, so each round of boosting improves the fit.
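To make the residual-fitting idea concrete, here is a minimal hand-rolled sketch of boosting with shallow scikit-learn decision trees on synthetic data; everything in it is illustrative and not part of the exercise.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic 1-D regression problem (illustrative only)
rng = np.random.RandomState(42)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

learning_rate = 0.1
prediction = np.zeros_like(y)  # the ensemble starts from a zero prediction
trees = []

for _ in range(100):
    residuals = y - prediction           # errors of the ensemble so far
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, residuals)               # the next tree fits those residuals
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)

print('MSE after boosting:', np.mean((y - prediction) ** 2))

Each tree only has to correct what the ensemble still gets wrong, which is why boosting keeps improving the fit round after round.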

For now we won't search for the best hyperparameters -- they've already been searched for you.
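We won't run a search here, but as a rough illustration, here is one hedged sketch of how such a search could be done with scikit-learn's ParameterGrid. The grid values are assumptions for illustration, not the values actually searched, and train_features, train_targets, test_features, and test_targets are the exercise's pre-loaded splits.

from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import ParameterGrid

# Illustrative grid -- not the values actually searched for this exercise
grid = {'max_features': [4, 8],
        'learning_rate': [0.01, 0.1],
        'n_estimators': [200, 400],
        'subsample': [0.6, 1.0]}

best_score, best_params = float('-inf'), None

for params in ParameterGrid(grid):
    model = GradientBoostingRegressor(random_state=42, **params)
    model.fit(train_features, train_targets)  # splits assumed pre-loaded
    score = model.score(test_features, test_targets)
    if score > best_score:
        best_score, best_params = score, params

print(best_score, best_params)

In practice you would score candidates on a separate validation set, or use time-aware cross-validation for financial data, rather than the test set, to avoid tuning to the test data.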

This exercise is part of the course Machine Learning for Finance in Python.

Exercise instructions

  • Create a GradientBoostingRegressor object with the hyperparameters that have already been set for you.
  • Fit the gbr model to the train_features and train_targets.
  • Print the scores for the training and test features and targets.

Hands-on interactive exercise

Finish this exercise by completing this sample code.

from sklearn.ensemble import GradientBoostingRegressor

# Create GB model -- hyperparameters have already been searched for you
gbr = GradientBoostingRegressor(max_features=4,
                                learning_rate=0.01,
                                n_estimators=200,
                                subsample=0.6,
                                random_state=42)
# Fit the model to the training features and targets
gbr.fit(train_features, train_targets)

# Print scores on the train and test sets
print(gbr.score(train_features, train_targets))
print(gbr.score(test_features, test_targets))
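Note that gbr.score returns the coefficient of determination (R^2) on each set; a training score far above the test score is a sign of overfitting. If you want to inspect the predictions themselves, a minimal sketch using the fitted model would be:

# Predictions from the fitted model; the train/test splits are the
# exercise's pre-loaded variables
train_predictions = gbr.predict(train_features)
test_predictions = gbr.predict(test_features)
print(train_predictions[:5])
print(test_predictions[:5])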