Instantiating the Q-Network
Now that you have defined its architecture, you are ready to instantiate the actual network that your agent will use, along with its optimizer. The Lunar Lander environment has a state space of dimension 8 and an action space of dimension 4 (corresponding to 0: do nothing, 1: left thruster, 2: main engine, 3: right thruster).
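If you want to double-check these dimensions yourself, you can query the environment's spaces directly. The snippet below is a quick, optional check; it assumes the gymnasium package and the "LunarLander-v3" environment id, neither of which is part of this exercise and both of which may differ in your setup.

import gymnasium as gym

env = gym.make("LunarLander-v3")
print(env.observation_space.shape)  # (8,)  -> state_size = 8
print(env.action_space.n)           # 4     -> action_size = 4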
The QNetwork class from the previous exercise is available to you.
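For context, such a class typically maps a state vector to one Q-value per action. The sketch below is only an illustration of what QNetwork might look like; the hidden layer sizes are assumptions for this sketch, not necessarily the architecture you built in the previous exercise.

import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a state vector to one Q-value per action (illustrative sketch)."""
    def __init__(self, state_size, action_size):
        super().__init__()
        # A hidden size of 64 is an assumption made for this sketch
        self.net = nn.Sequential(
            nn.Linear(state_size, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, action_size),
        )

    def forward(self, state):
        return self.net(state)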
This exercise is part of the course Deep Reinforcement Learning in Python.
Instructions
- Instantiate a Q Network for the Lunar Lander environment.
- Define the Adam optimizer for the neural network, specifying a learning rate of 0.0001.
Hands-on interactive exercise
Try this exercise by completing the sample code below.
state_size = 8
action_size = 4
# Instantiate the Q Network
q_network = QNetwork(____, ____)
# Specify the optimizer learning rate
optimizer = optim.Adam(q_network.parameters(), ____)
print("Q-Network initialized as:\n", q_network)