Implementing leaky ReLU
While ReLU is widely used, it sets negative inputs to 0, which makes the gradient zero for those values. This can prevent parts of the model from learning.
Leaky ReLU overcomes this by allowing a small gradient for negative inputs, controlled by the negative_slope parameter. Instead of being set to 0, negative inputs are scaled by this small value, so those parts of the model keep learning.
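For intuition, leaky ReLU computes f(x) = x for x >= 0 and f(x) = negative_slope * x for x < 0. The snippet below is a minimal sketch comparing ReLU and leaky ReLU on a negative input; the slope value of 0.05 is an illustrative assumption (PyTorch's default is 0.01).

import torch
import torch.nn as nn

x = torch.tensor(-2.0)
relu = nn.ReLU()
leaky_relu = nn.LeakyReLU(negative_slope=0.05)  # illustrative slope; PyTorch's default is 0.01
print(relu(x))        # tensor(0.) -- negative input is clipped, so the gradient is zero here
print(leaky_relu(x))  # tensor(-0.1000) -- input scaled by 0.05, so the gradient stays non-zero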
In this exercise, you will implement the leaky ReLU function in PyTorch and practice using it. The torch package, as well as torch.nn as nn, have already been imported.
Finish this exercise by completing the sample code below.
# Create a leaky relu function in PyTorch
leaky_relu_pytorch = ____
x = torch.tensor(-2.0)
# Call the above function on the tensor x
output = ____
print(output)
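For reference, here is one way the blanks could be filled in, shown as a sketch rather than the official solution; the negative_slope value of 0.05 is an assumption (PyTorch defaults to 0.01).

# Create a leaky relu function in PyTorch (0.05 is an assumed slope value)
leaky_relu_pytorch = nn.LeakyReLU(negative_slope=0.05)
x = torch.tensor(-2.0)
# Call the above function on the tensor x
output = leaky_relu_pytorch(x)
print(output)  # tensor(-0.1000): -2.0 scaled by the negative slope of 0.05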