1. Deeper networks
The difference between modern deep learning and the historical neural networks that didn't deliver these amazing results is the use of models with not just one hidden layer, but with many successive hidden layers. We forward propagate through these successive layers in a similar way to what you saw for a single hidden layer.
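If you want to see that process in code, here is a minimal NumPy sketch of forward propagation through several hidden layers. The layer sizes and weight values are placeholders (not the ones from the upcoming example), and we assume ReLU activations in the hidden layers with no activation on the output.

```python
import numpy as np

def relu(x):
    # ReLU keeps positive values and replaces negative values with 0
    return np.maximum(0, x)

def forward_propagate(input_data, weight_matrices):
    # Each hidden layer is a weighted sum of the previous layer's values,
    # followed by the activation function; the last matrix gives the output.
    values = input_data
    for w in weight_matrices[:-1]:
        values = relu(w @ values)
    return weight_matrices[-1] @ values

# Placeholder network: 2 inputs, two hidden layers of 2 nodes each, 1 output
rng = np.random.default_rng(0)
weights = [rng.standard_normal((2, 2)),   # inputs -> hidden layer 1
           rng.standard_normal((2, 2)),   # hidden layer 1 -> hidden layer 2
           rng.standard_normal((1, 2))]   # hidden layer 2 -> output
print(forward_propagate(np.array([3.0, 5.0]), weights))
```

Adding more hidden layers just means adding more weight matrices to the list; the loop itself doesn't change.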
2. Multiple hidden layers
Here is a network with two hidden layers. We first fill in the values
3. Multiple hidden layers
for hidden layer one as a function of the inputs. Then we apply the activation function to fill in the values in these nodes. Then we use the values from the first hidden layer to fill in
4. Multiple hidden layers
the second hidden layer. Then we make a prediction based on
5. Multiple hidden layers
the outputs of hidden layer two. In practice, it's becoming common to have neural networks with many, many layers: five layers, ten layers. A few years ago, 15 layers was state of the art, but this can scale quite naturally to even a thousand layers.
6. Multiple hidden layers
You use the same forward propagation process, but you apply that iterative process more times. Let's walk through the first steps of that. Assume all layers here use the ReLU activation function. We'll start by filling in the top node
7. Multiple hidden layers
of the first hidden layer. That will use these two weights. The top weight contributes
8. Multiple hidden layers
3 times 2, or 6. The bottom weight
9. Multiple hidden layers
contributes 20. The ReLU activation function on a positive number just returns that number. So
10. Multiple hidden layers
we get 26. Now let's do
11. Multiple hidden layers
the bottom node of that first hidden layer. We use these two weights. Using the same process, we get
12. Multiple hidden layers
4 times 3, or 12 from this weight.
13. Multiple hidden layers
And -25 from the bottom weight. So the input to this node is 12 minus 25. Recall that, when we apply ReLU to a negative number, we get 0.
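As a quick check, here is a one-line ReLU in Python applied to the two sums from this walkthrough:

```python
def relu(x):
    # ReLU: positive inputs pass through unchanged; negative inputs become 0
    return max(0, x)

print(relu(26))       # 26: the top node's positive sum is kept as-is
print(relu(12 - 25))  # 0: the bottom node's negative sum, -13, is clipped to 0
```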
14. Multiple hidden layers
So this node is 0. We've shown the values for the subsequent layers
15. Multiple hidden layers
here. Pause this video, and verify you can calculate the same values at each node.
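If you'd like to check your arithmetic in code, here is a minimal NumPy sketch of the first hidden layer from this example. The input values (3 and 5) and the first-layer weights (2 and 4 for the top node, 4 and -5 for the bottom node) are one reading consistent with the products quoted above; the weights for the second hidden layer and the output are shown on the slide and aren't reproduced here.

```python
import numpy as np

def relu(x):
    # ReLU keeps positive values and turns negative values into 0
    return np.maximum(0, x)

# Values inferred from the walkthrough (treat the exact weights as assumptions):
# top node:    3 * 2 + 5 * 4    = 26  -> relu(26)  = 26
# bottom node: 3 * 4 + 5 * (-5) = -13 -> relu(-13) = 0
input_data = np.array([3, 5])
weights_hidden_1 = {"top": np.array([2, 4]),
                    "bottom": np.array([4, -5])}

hidden_1 = np.array([relu(weights_hidden_1["top"] @ input_data),
                     relu(weights_hidden_1["bottom"] @ input_data)])
print(hidden_1)  # [26  0]
```

The second hidden layer and the output are computed the same way, using 26 and 0 as their inputs.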
16. Representation learning
At this point, you understand the mechanics for how neural networks make predictions. Let's close this chapter with an interesting and important fact about these deep networks: they internally build up representations of the patterns in the data that are useful for making predictions. And they find increasingly complex patterns as we go through successive hidden layers of the network. In this way, neural networks partially replace the need for feature engineering, or manually creating better predictive features. Deep learning is also sometimes called representation learning, because subsequent layers build increasingly sophisticated representations of the raw data, until we get to a stage where we can make predictions. This is easiest to understand from an application to images, which you will see later in this course. Even if you haven't worked with images, you may find it useful to think through
17. Representation learning
this example heuristically. When a neural network tries to classify an image, the first hidden layers build up patterns or interactions that are conceptually simple. A simple interaction would look at groups of nearby pixels and find patterns like diagonal lines, horizontal lines, vertical lines, blurry areas, etc. Once the network has identified where there are diagonal lines and horizontal lines and vertical lines, subsequent layers combine that information to find larger patterns, like big squares. A later layer might put together the location of squares and other geometric shapes to identify a checkerboard pattern, a face, a car, or whatever is in the image. The cool thing about deep learning is that
18. Deep learning
the modeler doesn't need to specify those interactions. We never tell the model to look for diagonal lines. Instead, when you train the model, which you'll learn to do in the next chapter, the network learns weights that find the relevant patterns to make better predictions. Working with images may still seem abstract, but this idea of finding increasingly complex or abstract patterns is a recurring theme when people talk about deep learning, and it will feel more concrete as you work with these networks more.
19. Let's practice!