The Rectified Linear Activation Function

As Dan explained to you in the video, an "activation function" is a function applied at each node. It converts the node's input into some output.

The rectified linear activation function (called ReLU) has been shown to lead to very high-performance networks. This function takes a single number as input, returning 0 if the input is negative and the input itself otherwise.

Here are some examples:
relu(3) = 3
relu(-3) = 0
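
In code, relu() is just the maximum of 0 and its input. As a minimal sketch (the exercise below asks you to write this yourself):

def relu(x):
    '''Return x if it is positive, 0 otherwise'''
    return max(0, x)

print(relu(3))   # prints 3
print(relu(-3))  # prints 0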

This exercise is part of the course Introduction to Deep Learning in Python.

Exercise instructions

  • Fill in the definition of the relu() function:
    • Use the max() function to calculate the value for the output of relu().
  • Apply the relu() function to node_0_input to calculate node_0_output.
  • Apply the relu() function to node_1_input to calculate node_1_output.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

def relu(input):
    '''Define your relu activation function here'''
    # Calculate the value for the output of the relu function: output
    output = max(____, ____)
    
    # Return the value just calculated
    return(output)

# Calculate node 0 value: node_0_output
node_0_input = (input_data * weights['node_0']).sum()
node_0_output = ____

# Calculate node 1 value: node_1_output
node_1_input = (input_data * weights['node_1']).sum()
node_1_output = ____

# Put node values into array: hidden_layer_outputs
hidden_layer_outputs = np.array([node_0_output, node_1_output])

# Calculate model output (do not apply relu)
model_output = (hidden_layer_outputs * weights['output']).sum()

# Print model output
print(model_output)
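
For reference, here is one way the completed forward pass could look as a self-contained script. The values of input_data and weights below are made up for illustration; the interactive exercise pre-loads its own data, so treat this only as a sketch of the idea.

import numpy as np

def relu(input):
    '''Rectified linear activation: max of 0 and the input'''
    output = max(0, input)
    return output

# Illustrative data (not the exercise's actual values)
input_data = np.array([3, 5])
weights = {'node_0': np.array([2, 4]),
           'node_1': np.array([4, -5]),
           'output': np.array([2, 7])}

# Hidden layer node 0: weighted sum of inputs, then relu
node_0_input = (input_data * weights['node_0']).sum()
node_0_output = relu(node_0_input)

# Hidden layer node 1: weighted sum of inputs, then relu
node_1_input = (input_data * weights['node_1']).sum()
node_1_output = relu(node_1_input)

# Collect hidden layer outputs into an array
hidden_layer_outputs = np.array([node_0_output, node_1_output])

# Model output: weighted sum of hidden layer outputs (no relu here)
model_output = (hidden_layer_outputs * weights['output']).sum()

print(model_output)  # 52 with the illustrative values above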