
Exercise

Implementing leaky ReLU

While ReLU is widely used, it sets negative inputs to 0, which produces zero gradients for those values and can prevent parts of the model from learning.

Leaky ReLU overcomes this by allowing a small gradient for negative inputs, controlled by the negative_slope parameter. Instead of being set to 0, negative inputs are scaled by this small value, so the corresponding parts of the model keep learning.
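
To make the difference concrete, here is a minimal sketch comparing the gradients of ReLU and leaky ReLU for a negative input. The negative_slope of 0.1 is chosen only for illustration and is not the value used in the exercise.

```python
import torch
import torch.nn as nn

# Compare the gradients of ReLU and leaky ReLU for a negative input.
# The negative_slope of 0.1 is chosen only for illustration.
x_relu = torch.tensor(-2.0, requires_grad=True)
x_leaky = torch.tensor(-2.0, requires_grad=True)

nn.ReLU()(x_relu).backward()
nn.LeakyReLU(negative_slope=0.1)(x_leaky).backward()

print(x_relu.grad)   # tensor(0.)     -> no learning signal for negative inputs
print(x_leaky.grad)  # tensor(0.1000) -> small but nonzero gradient
```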

In this exercise, you will implement the leaky ReLU function in PyTorch and practice using it. The torch package and torch.nn (as nn) have already been imported for you.

Instructions 1/2

  • Create a leaky ReLU function in PyTorch with a negative slope of 0.05.
  • Call the function on the tensor x, which has already been defined for you (a sketch of one possible approach follows these instructions).
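
A minimal sketch of the steps above, assuming a placeholder tensor for x (in the exercise environment x is already defined, and the variable name for the activation is your choice):

```python
import torch
import torch.nn as nn

# Placeholder tensor; in the exercise, x is already defined for you.
x = torch.tensor([-3.1, -1.0, 0.0, 2.0])

# Create a leaky ReLU function with a negative slope of 0.05
leaky_relu = nn.LeakyReLU(negative_slope=0.05)

# Call the function on the tensor x
output = leaky_relu(x)
print(output)  # negative values are scaled by 0.05: tensor([-0.1550, -0.0500, 0.0000, 2.0000])
```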