
Combatting overfitting with dropout

A common problem with neural networks is that they tend to overfit the training data: the scoring metric, such as R\(^2\) or accuracy, is high on the training set but low on the test and validation sets, because the model is fitting noise in the training data.

We can work towards preventing overfitting by using dropout, which randomly drops some neurons during the training phase and so helps keep the net from fitting noise in the training data. Keras has a Dropout layer we can use to accomplish this. We need to set the dropout rate, the fraction of units dropped during training, as a decimal between 0 and 1 in the Dropout() layer.
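Before filling in the exercise, it can help to see the layer on its own. The short sketch below is not part of the exercise; it assumes a TensorFlow-backed Keras (where a layer can be called directly on an array) and applies a standalone Dropout layer to an array of ones, to show that the rate controls the fraction of values zeroed during training and that dropout is inactive outside of training.

import numpy as np
from keras.layers import Dropout

# With rate=0.2, roughly 20% of incoming values are zeroed on each training
# step, and the surviving values are scaled by 1 / (1 - 0.2) so the expected
# sum is unchanged. Outside of training, dropout does nothing.
dropout = Dropout(0.2)
x = np.ones((1, 10), dtype='float32')

print(dropout(x, training=True))   # some entries zeroed, the rest scaled to 1.25
print(dropout(x, training=False))  # unchanged: dropout only applies while training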

We're going to go back to the mean squared error loss function for this model.

This exercise is part of the course

Machine Learning for Finance in Python


Instructions

  • Add a dropout layer (Dropout()) after the first Dense layer in the model, and use 20% (0.2) as the dropout rate.
  • Use the adam optimizer and the mse loss function when compiling the model in .compile().
  • Fit the model to the scaled_train_features and train_targets using 25 epochs.

Hands-on interactive exercise

Try this exercise by completing this sample code.

from keras.layers import Dropout

# Create model with dropout
model_3 = Sequential()
model_3.add(Dense(100, input_dim=scaled_train_features.shape[1], activation='relu'))
model_3.add(____)
model_3.add(Dense(20, activation='relu'))
model_3.add(Dense(1, activation='linear'))

# Fit model with mean squared error loss function
model_3.compile(optimizer=____, loss=____)
history = model_3.fit(____, ____, epochs=____)
plt.plot(history.history['loss'])
plt.title('loss:' + str(round(history.history['loss'][-1], 6)))
plt.show()
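For reference, a completed version of the exercise might look like the sketch below. It assumes scaled_train_features and train_targets are already defined from earlier exercises in the course and that matplotlib is available as plt; the choices of Dropout(0.2), the adam optimizer, the mse loss, and 25 epochs come straight from the instructions above.

from keras.models import Sequential
from keras.layers import Dense, Dropout
import matplotlib.pyplot as plt

# Create model with dropout after the first Dense layer
model_3 = Sequential()
model_3.add(Dense(100, input_dim=scaled_train_features.shape[1], activation='relu'))
model_3.add(Dropout(0.2))  # drop 20% of this layer's outputs during training
model_3.add(Dense(20, activation='relu'))
model_3.add(Dense(1, activation='linear'))

# Compile with the adam optimizer and mean squared error loss, then fit
model_3.compile(optimizer='adam', loss='mse')
history = model_3.fit(scaled_train_features, train_targets, epochs=25)

# Plot the training loss and report its final value in the title
plt.plot(history.history['loss'])
plt.title('loss:' + str(round(history.history['loss'][-1], 6)))
plt.show()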