
The Q-Network architecture

You are almost ready to train your first Deep Reinforcement Learning agent! Before you can go ahead with your first complete training loop, you need a neural network architecture to drive the agent's decisions and its ability to learn.

You will modify the generic architecture you defined in an earlier exercise. torch and torch.nn are already imported for you.

This exercise is part of the course Deep Reinforcement Learning in Python.

Exercise instructions

  • Instantiate the first hidden layer; its input will be the environment state, with dimension state_size.
  • Instantiate the output layer; it provides the Q-values for each action, with dimension action_size.
  • Complete the forward() method; use the torch.relu activation function for this example.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code; one possible completion is sketched below the exercise for reference.

class QNetwork(nn.Module):
    def __init__(self, state_size, action_size):
        super(QNetwork, self).__init__()
        # Instantiate the first hidden layer
        self.fc1 = nn.Linear(____, ____)
        self.fc2 = nn.Linear(64, 64)
        # Instantiate the output layer
        self.fc3 = nn.Linear(____, ____)

    def forward(self, state):
        # Ensure the ReLU activation function is used
        x = ____(self.fc1(torch.tensor(state)))
        x = ____(self.fc2(x))
        return self.fc3(x)
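For reference, here is one possible completion of the exercise, together with a short standalone usage snippet. The dimensions state_size=8 and action_size=4 are arbitrary illustrative values, not taken from the course environment, and the two 64-unit hidden layers simply match the sizes already fixed in the starter code.

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_size, action_size):
        super(QNetwork, self).__init__()
        # Hidden layer 1: maps the state vector to 64 hidden units
        self.fc1 = nn.Linear(state_size, 64)
        self.fc2 = nn.Linear(64, 64)
        # Output layer: one Q-value per available action
        self.fc3 = nn.Linear(64, action_size)

    def forward(self, state):
        # ReLU activations on both hidden layers; raw (linear) Q-values out
        x = torch.relu(self.fc1(torch.tensor(state)))
        x = torch.relu(self.fc2(x))
        return self.fc3(x)

# Illustrative usage with made-up dimensions: an 8-dimensional state, 4 actions
q_network = QNetwork(state_size=8, action_size=4)
state = [0.1, -0.2, 0.0, 0.5, 0.3, -0.1, 0.0, 0.2]  # stand-in for an observation
q_values = q_network(state)  # tensor of shape (4,): one Q-value per action
print(q_values)

The greedy action for this state would then be q_values.argmax().item(). Note that no softmax or other activation is applied to the final layer: the network outputs raw Q-value estimates, not a probability distribution over actions.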