Compiling a sequential model
In this exercise, you will work towards classifying letters from the Sign Language MNIST dataset; however, you will adopt a different network architecture than what you used in the previous exercise. There will be fewer layers, but more nodes. You will also apply dropout to prevent overfitting. Finally, you will compile the model to use the adam optimizer and the categorical_crossentropy loss. You will also use a method in keras to summarize your model's architecture. Note that keras has been imported from tensorflow for you and a sequential keras model has been defined as model.
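For reference, the setup the exercise assumes (keras imported from tensorflow and an empty sequential model named model) would look roughly like this minimal sketch; you do not need to write it here.

# Minimal sketch of the setup assumed by the exercise (not part of the solution):
# keras imported from tensorflow and an empty sequential model named `model`
from tensorflow import keras

model = keras.Sequential()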
This exercise is part of the course Introduction to TensorFlow in Python.
Exercise instructions
- In the first dense layer, set the number of nodes to 16, the activation to sigmoid, and the input_shape to (784,).
- Apply dropout at a rate of 25% to the first layer's output.
- Set the output layer to be dense, have 4 nodes, and use a softmax activation function.
- Compile the model using an adam optimizer and the categorical_crossentropy loss function.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Define the first dense layer
model.add(keras.layers.Dense(____, ____, ____))
# Apply dropout to the first layer's output
model.add(keras.layers.____(0.25))
# Define the output layer
____
# Compile the model
model.compile('____', loss='____')
# Print a model summary
print(model.summary())
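For reference, one possible completion of the sample code is sketched below. It simply fills the blanks with the values given in the instructions (16 sigmoid nodes with input_shape (784,), 25% dropout, a 4-node softmax output, and the adam optimizer with the categorical_crossentropy loss); it is a sketch, not the course's official solution code, and it repeats the assumed setup so it runs on its own.

# Assumed setup, as described above
from tensorflow import keras

model = keras.Sequential()

# First dense layer: 16 sigmoid nodes taking flattened 28x28 images
model.add(keras.layers.Dense(16, activation='sigmoid', input_shape=(784,)))

# Drop 25% of the first layer's outputs during training
model.add(keras.layers.Dropout(0.25))

# Output layer: 4 nodes with a softmax activation
model.add(keras.layers.Dense(4, activation='softmax'))

# Compile with the adam optimizer and categorical crossentropy loss
model.compile('adam', loss='categorical_crossentropy')

# Print a model summary
model.summary()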