Ensembling models
One approach to improving predictions from machine learning models is ensembling. The simplest approach is to average the predictions of multiple models. A more complex approach is to feed the predictions of several models into another model, which makes the final predictions (this is known as stacking). Both approaches usually improve overall performance, as long as the individual models are reasonably good. Recall that random forests also use ensembling: they combine the predictions of many decision trees.
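As a rough sketch of both approaches -- using scikit-learn and synthetic data rather than the models from this course -- averaging and a simple meta-model might look like this:

# Sketch of two ensembling approaches with scikit-learn (synthetic data)
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestRegressor(random_state=0).fit(X_train, y_train)
gb = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Approach 1: average the two models' predictions
avg_preds = (rf.predict(X_test) + gb.predict(X_test)) / 2

# Approach 2 (stacking): feed the models' predictions into another model
meta_train = np.hstack((rf.predict(X_train).reshape(-1, 1),
                        gb.predict(X_train).reshape(-1, 1)))
meta_model = LinearRegression().fit(meta_train, y_train)
meta_test = np.hstack((rf.predict(X_test).reshape(-1, 1),
                       gb.predict(X_test).reshape(-1, 1)))
stacked_preds = meta_model.predict(meta_test)

In practice, the meta-model should be fit on out-of-fold predictions to avoid leaking the training targets; this sketch skips that step for brevity.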
To ensemble our neural net predictions, we'll make predictions with the 3 models we just created -- the basic model, the model with the custom loss function, and the model with dropout. Then we'll combine the predictions with numpy's np.hstack() function, and average them across rows with np.mean(predictions, axis=1).
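As a quick illustration of those two functions (with made-up prediction arrays, assuming each model returns one column of predictions):

import numpy as np

# Two models' predictions, one column vector each (shape (3, 1))
p1 = np.array([[1.0], [2.0], [3.0]])
p2 = np.array([[3.0], [4.0], [5.0]])

stacked = np.hstack((p1, p2))    # shape (3, 2): one column per model
avg = np.mean(stacked, axis=1)   # row-wise average: [2., 3., 4.]
print(avg)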
This exercise is part of the course
Machine Learning for Finance in Python
Instructions
- Create predictions on the scaled_train_features and scaled_test_features for the 3 models we fit (model_1, model_2, model_3) using the .predict() method.
- Horizontally stack (np.hstack()) the predictions into a matrix, and take the row-wise averages to get average predictions for the train and test sets.
Hands-on interactive exercise
Try this exercise by completing this sample code.
# Make predictions from the 3 neural net models
train_pred1 = model_1.predict(____)
test_pred1 = model_1.predict(____)
train_pred2 = ____
test_pred2 = ____
train_pred3 = model_3.predict(scaled_train_features)
test_pred3 = model_3.predict(scaled_test_features)
# Horizontally stack predictions and take the average across rows
train_preds = np.mean(np.hstack((train_pred1, train_pred2, train_pred3)), axis=1)
test_preds = np.mean(____, ____)
print(test_preds[-5:])
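For reference, one way to complete the exercise -- assuming model_1, model_2, and model_3 are the fitted Keras models from earlier in the course, the scaled feature sets are in memory, and numpy is imported as np:

import numpy as np

# Make predictions from the 3 neural net models
train_pred1 = model_1.predict(scaled_train_features)
test_pred1 = model_1.predict(scaled_test_features)
train_pred2 = model_2.predict(scaled_train_features)
test_pred2 = model_2.predict(scaled_test_features)
train_pred3 = model_3.predict(scaled_train_features)
test_pred3 = model_3.predict(scaled_test_features)

# Horizontally stack predictions and take the average across rows
train_preds = np.mean(np.hstack((train_pred1, train_pred2, train_pred3)), axis=1)
test_preds = np.mean(np.hstack((test_pred1, test_pred2, test_pred3)), axis=1)
print(test_preds[-5:])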