
Changing batch sizes

You've seen that models are usually trained in batches of a fixed size. The smaller the batch size, the more weight updates per epoch, but at the cost of a less stable gradient descent, especially when the batch is too small to be representative of the entire training set.
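To see how quickly this scales, here is a quick sketch of the number of weight updates per epoch for a few batch sizes; the training set size of 1,000 is a hypothetical value chosen for illustration:

import math

n_samples = 1000  # hypothetical training set size, for illustration

# One weight update happens per batch,
# so updates per epoch = ceil(n_samples / batch_size)
for batch_size in (1, 32, n_samples):
    updates = math.ceil(n_samples / batch_size)
    print(f"batch_size={batch_size}: {updates} weight updates per epoch")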

Let's see how different batch sizes affect the accuracy of a simple binary classification model that separates red from blue dots.

You'll first use a batch size of one, updating the weights once per sample in your training set for each epoch. Then you'll use the entire dataset as a single batch, updating the weights only once per epoch.

This exercise is part of the course Introduction to Deep Learning with Keras.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Get a fresh new model with get_model
model = get_model()

# Train your model for 5 epochs with a batch size of 1
model.fit(X_train, y_train, epochs=5, batch_size=1)
print("\n The accuracy when using a batch of size 1 is: ",
      model.evaluate(X_test, y_test)[1])
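For the second part of the comparison, here is a minimal sketch of the full-batch variant, assuming the same get_model() helper and the X_train, y_train, X_test and y_test arrays provided by the exercise environment:

# Get a fresh new model with get_model
model = get_model()

# Train for 5 epochs, using the whole training set as a single batch,
# so the weights update only once per epoch
model.fit(X_train, y_train, epochs=5, batch_size=len(X_train))
print("\n The accuracy when using the whole training set as batch size was: ",
      model.evaluate(X_test, y_test)[1])

With only one update per epoch (5 updates in total), this run will typically reach a lower accuracy than the batch-size-1 run above, which performs one update per sample per epoch.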