Early stopping: Optimizing the optimization
Now that you know how to monitor your model's performance throughout optimization, you can use early stopping to halt optimization once it stops helping. Because training then ends automatically when it isn't helping, you can safely set a high value for `epochs` in your call to `.fit()`, as Dan showed in the video.
The model you'll optimize has been specified as `model`. As before, the data is pre-loaded as `predictors` and `target`.
This exercise is part of the course Introduction to Deep Learning in Python.
Exercise instructions
- Import `EarlyStopping` from `tensorflow.keras.callbacks`.
- Compile the model, once again using `'adam'` as the optimizer, `'categorical_crossentropy'` as the loss function, and `metrics=['accuracy']` to see the accuracy at each epoch.
- Create an `EarlyStopping` object called `early_stopping_monitor`. Stop optimization when the validation loss hasn't improved for 2 epochs by setting the `patience` parameter of `EarlyStopping()` to `2`.
- Fit the model using the `predictors` and `target`. Specify the number of `epochs` to be `30` and use a validation split of `0.3`. In addition, pass `[early_stopping_monitor]` to the `callbacks` parameter.
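The callback itself is the only new piece here. A minimal sketch of creating it (by default, `EarlyStopping` monitors `val_loss`, so `patience=2` means training stops after two consecutive epochs with no improvement in validation loss):

```python
from tensorflow.keras.callbacks import EarlyStopping

# patience=2: tolerate up to 2 epochs with no improvement in the
# monitored quantity before stopping. EarlyStopping monitors
# 'val_loss' by default, so a validation split (or validation data)
# must be provided to .fit() for it to have anything to watch.
early_stopping_monitor = EarlyStopping(patience=2)

print(early_stopping_monitor.monitor)   # the quantity being watched
print(early_stopping_monitor.patience)  # epochs to wait: 2
```

Because the monitored quantity defaults to `'val_loss'`, this callback only makes sense together with `validation_split` or `validation_data` in `.fit()`.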
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
```python
# Import EarlyStopping
____

# Save the number of columns in predictors: n_cols
n_cols = predictors.shape[1]
input_shape = (n_cols,)

# Specify the model
model = Sequential()
model.add(Dense(100, activation='relu', input_shape=input_shape))
model.add(Dense(100, activation='relu'))
model.add(Dense(2, activation='softmax'))

# Compile the model
____

# Define early_stopping_monitor
early_stopping_monitor = ____

# Fit the model
____
```
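One possible completion of the scaffold is sketched below. In the exercise environment `predictors` and `target` are pre-loaded and `Sequential`/`Dense` are pre-imported; here, synthetic stand-in data and the extra imports are added so the script runs on its own. The synthetic data is illustrative only, not the course dataset.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.utils import to_categorical

# Synthetic stand-ins for the pre-loaded exercise data:
# 500 samples, 10 features, binary target one-hot encoded to 2 columns.
rng = np.random.default_rng(0)
predictors = rng.normal(size=(500, 10)).astype('float32')
target = to_categorical(rng.integers(0, 2, size=500), num_classes=2)

# Save the number of columns in predictors: n_cols
n_cols = predictors.shape[1]
input_shape = (n_cols,)

# Specify the model
model = Sequential()
model.add(Dense(100, activation='relu', input_shape=input_shape))
model.add(Dense(100, activation='relu'))
model.add(Dense(2, activation='softmax'))

# Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Define early_stopping_monitor: stop after 2 epochs without
# improvement in validation loss
early_stopping_monitor = EarlyStopping(patience=2)

# Fit the model: epochs=30 is a generous ceiling; early stopping
# may end training well before that
history = model.fit(predictors, target,
                    epochs=30,
                    validation_split=0.3,
                    callbacks=[early_stopping_monitor],
                    verbose=0)

print(len(history.history['loss']))  # number of epochs actually run
```

Note that `epochs=30` is an upper bound, not a target: the callback ends training as soon as the validation loss plateaus for 2 epochs, which is what lets you set a high epoch count without worrying about wasted computation.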