The Q-Network architecture
You are almost ready to train your first Deep Reinforcement Learning agent! Before you can go ahead with your first complete training loop, you need a neural network architecture to drive the agent's decisions and its ability to learn.
You will modify the generic architecture you defined in an earlier exercise.
torch and torch.nn are imported into your exercises.
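For reference, that setup is equivalent to the imports sketched below; the nn alias is an assumption, chosen because the skeleton further down refers to nn.Module and nn.Linear.

import torch
import torch.nn as nn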
This exercise is part of the course Deep Reinforcement Learning in Python.
Instructions
- Instantiate the first hidden layer; its input will be the environment state, with dimension state_size.
- Instantiate the output layer; it provides the Q-values for each action, with dimension action_size.
- Complete the forward() method; use the torch.relu activation function for this example. A completed reference sketch follows the code skeleton below.
Hands-on interactive exercise
Try this exercise by completing the sample code.
class QNetwork(nn.Module):
    def __init__(self, state_size, action_size):
        super(QNetwork, self).__init__()
        # Instantiate the first hidden layer
        self.fc1 = nn.Linear(____, ____)
        self.fc2 = nn.Linear(64, 64)
        # Instantiate the output layer
        self.fc3 = nn.Linear(____, ____)

    def forward(self, state):
        # Ensure the ReLU activation function is used
        x = ____(self.fc1(torch.tensor(state)))
        x = ____(self.fc2(x))
        return self.fc3(x)
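One way to fill in the blanks, following the instructions above, is sketched below: the first hidden layer maps the state (state_size inputs) to 64 units, the output layer maps the final 64 hidden units to action_size Q-values, and torch.relu is applied after each hidden layer. The sizes used in the usage example (an 8-dimensional state and 4 actions) are hypothetical placeholders, not values from the exercise.

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_size, action_size):
        super(QNetwork, self).__init__()
        # First hidden layer: takes the environment state as input
        self.fc1 = nn.Linear(state_size, 64)
        self.fc2 = nn.Linear(64, 64)
        # Output layer: one Q-value per available action
        self.fc3 = nn.Linear(64, action_size)

    def forward(self, state):
        # Apply the ReLU activation after each hidden layer
        x = torch.relu(self.fc1(torch.tensor(state)))
        x = torch.relu(self.fc2(x))
        return self.fc3(x)

# Hypothetical usage: an 8-dimensional state space with 4 discrete actions
q_network = QNetwork(state_size=8, action_size=4)
state = [0.1] * 8                 # dummy observation from the environment
q_values = q_network(state)       # tensor of 4 Q-values, one per action
print(q_values)

Converting the state with torch.tensor() inside forward() lets the network accept plain Python lists or NumPy arrays directly; in a full training loop you would typically convert observations to tensors outside the network, but it keeps this first example self-contained.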