
Exercise

Batch Normalization

As a final improvement to the model architecture, let's add a batch normalization layer after each of the two linear layers. Batch normalization tends to accelerate training convergence and protects the model from vanishing and exploding gradients.

Both torch.nn and torch.nn.init have already been imported for you as nn and init, respectively. Once you implement the change in the model architecture, be ready to answer a short question on how batch normalization works!
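If you want a quick intuition before that question, here is a minimal sketch of what BatchNorm1d computes during training: it standardizes each feature across the batch to zero mean and unit variance, then applies a learnable scale and shift. The tensor values below are made up purely for illustration:

    import torch
    import torch.nn as nn

    # Toy batch: 4 samples, 2 features (values invented for illustration).
    x = torch.tensor([[1.0, 10.0],
                      [2.0, 20.0],
                      [3.0, 30.0],
                      [4.0, 40.0]])

    bn = nn.BatchNorm1d(2)  # one learnable scale/shift pair per feature
    bn.train()              # training mode: normalize with batch statistics

    y = bn(x)
    print(y.mean(dim=0))                 # ~0 for each feature
    print(y.std(dim=0, unbiased=False))  # ~1 for each feature

With the default initialization (scale of 1, shift of 0), the output is simply the standardized batch.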

Instructions 1/3

  • Add two BatchNorm1d layers, assigning them to self.bn1 and self.bn2 (see the sketch after this instruction).
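As a rough sketch of the resulting architecture, assuming a three-layer binary classifier with ELU activations and a sigmoid output (the layer sizes and activations here are placeholders; the exercise's actual model may differ):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            # Layer sizes are hypothetical; use the dimensions from the exercise.
            self.fc1 = nn.Linear(9, 16)
            self.bn1 = nn.BatchNorm1d(16)  # must match fc1's output size
            self.fc2 = nn.Linear(16, 8)
            self.bn2 = nn.BatchNorm1d(8)   # must match fc2's output size
            self.fc3 = nn.Linear(8, 1)

        def forward(self, x):
            # Pattern for each hidden block: linear -> batch norm -> activation
            x = F.elu(self.bn1(self.fc1(x)))
            x = F.elu(self.bn2(self.fc2(x)))
            return torch.sigmoid(self.fc3(x))

Note that each BatchNorm1d layer's size matches the output size of the linear layer it follows, and that normalization is applied before the activation.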