Changing batch sizes
You've seen that models are usually trained in batches of a fixed size. The smaller the batch size, the more weight updates per epoch, but at the cost of a less stable gradient descent, especially if the batch is so small that it is not representative of the entire training set.
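To make this trade-off concrete, here is a minimal sketch (not part of the exercise) that counts the weight updates one epoch performs for a hypothetical training set of 1,000 samples:

import math

n_samples = 1000  # hypothetical training set size
# One weight update happens per batch, so updates per epoch = number of batches
for batch_size in (1, 32, n_samples):
    updates = math.ceil(n_samples / batch_size)
    print(f"batch_size={batch_size:>4} -> {updates} weight updates per epoch")

With these numbers, a batch size of 1 yields 1,000 updates per epoch, while using the full dataset as one batch yields just a single update.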
Let's see how different batch sizes affect the accuracy of a simple binary classification model that separates red from blue dots.
First, you'll use a batch size of one, updating the weights once per sample in the training set for each epoch. Then you'll use the entire dataset as a single batch, updating the weights only once per epoch.
This exercise is part of the course
Introduction to Deep Learning with Keras
Interactive exercise
Try this exercise by completing this sample code.
# Get a fresh new model with get_model
model = ____
# Train your model for 5 epochs with a batch size of 1
model.fit(X_train, y_train, epochs=____, ____=____)
print("\n The accuracy when using a batch of size 1 is: ",
model.evaluate(X_test, y_test)[1])
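For reference, one possible completion is sketched below; it assumes the exercise environment provides get_model() together with the X_train, y_train, X_test, and y_test splits, and that the model was compiled with accuracy as a metric (so model.evaluate returns [loss, accuracy]):

# Get a fresh new model with get_model
model = get_model()

# Train your model for 5 epochs with a batch size of 1
model.fit(X_train, y_train, epochs=5, batch_size=1)
print("\n The accuracy when using a batch of size 1 is: ",
      model.evaluate(X_test, y_test)[1])

# Get another fresh model and train it using the entire dataset as one batch
model = get_model()
model.fit(X_train, y_train, epochs=5, batch_size=len(X_train))
print("\n The accuracy when using the whole training set as a batch is: ",
      model.evaluate(X_test, y_test)[1])

Comparing the two printed accuracies shows how the update frequency alone can change what the model learns in a fixed number of epochs.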