
Implementing leaky ReLU

While ReLU is widely used, it sets negative inputs to 0, resulting in zero gradients for those values. This can prevent parts of the model from learning.

Leaky ReLU overcomes this by allowing a small, non-zero gradient for negative inputs, controlled by the negative_slope parameter. Instead of being set to 0, negative inputs are scaled by this small value, so those parts of the model keep learning.
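
As a quick illustration (not part of the exercise), the sketch below compares the gradients of the two activations at a negative input; the negative_slope value of 0.05 is an arbitrary illustrative choice:

import torch
import torch.nn as nn

# A negative input, with gradient tracking enabled
x = torch.tensor(-2.0, requires_grad=True)

# ReLU: output is 0 and the gradient is 0, so no learning signal flows back
relu_output = nn.ReLU()(x)
relu_output.backward()
print(relu_output, x.grad)  # tensor(0., ...) tensor(0.)

# Leaky ReLU: output is -2.0 * 0.05 = -0.1 and the gradient is 0.05
x.grad = None
leaky_output = nn.LeakyReLU(negative_slope=0.05)(x)
leaky_output.backward()
print(leaky_output, x.grad)  # tensor(-0.1000, ...) tensor(0.0500)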

In this exercise, you will implement the leaky ReLU function in PyTorch and practice using it. The torch package, as well as torch.nn as nn, has already been imported.

This exercise is part of the course

Introduction to Deep Learning with PyTorch


Hands-on interactive exercise

Try this exercise by completing the sample code below.

# Create a leaky relu function in PyTorch
leaky_relu_pytorch = ____

x = torch.tensor(-2.0)
# Call the above function on the tensor x
output = ____
print(output)
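
For reference, one possible way to fill in the blanks is sketched below; using nn.LeakyReLU with negative_slope=0.05 is an assumption here, and the exercise may call for a different slope value:

import torch
import torch.nn as nn

# Create a leaky relu function in PyTorch (0.05 is an assumed slope)
leaky_relu_pytorch = nn.LeakyReLU(negative_slope=0.05)

x = torch.tensor(-2.0)
# Call the above function on the tensor x
output = leaky_relu_pytorch(x)
print(output)  # tensor(-0.1000)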