
Implementing leaky ReLU

While ReLU is widely used, it sets negative inputs to 0, which means the gradient for those inputs is also zero. This can prevent parts of the model from learning.

Leaky ReLU overcomes this by allowing small gradients for negative inputs, controlled by the negative_slope parameter. Instead of 0, negative inputs are scaled by this small value, keeping the model's learning active.
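For example, here is a minimal sketch of the difference (the negative_slope value of 0.1 is illustrative, not the exercise's default):

import torch
import torch.nn as nn

# ReLU maps a negative input to 0
relu = nn.ReLU()
print(relu(torch.tensor(-2.0)))        # tensor(0.)

# Leaky ReLU scales the negative input by negative_slope instead
leaky_relu = nn.LeakyReLU(negative_slope=0.1)
print(leaky_relu(torch.tensor(-2.0)))  # tensor(-0.2000)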

In this exercise, you will implement the leaky ReLU function in PyTorch and practice using it. The torch package, as well as torch.nn as nn, have already been imported.

This exercise is part of the course

Introduction to Deep Learning with PyTorch


Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Create a leaky ReLU function in PyTorch
leaky_relu_pytorch = ____

x = torch.tensor(-2.0)
# Call the above function on the tensor x
output = ____
print(output)