
Training the double DQN

You will now modify your code for DQN to implement double DQN.

Double DQN requires only a minimal adjustment to the DQN algorithm, but it goes a long way towards mitigating the Q-value overestimation problem and often outperforms DQN.
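To see exactly what changes, here is a minimal sketch contrasting the two target calculations, assuming the same PyTorch networks and batch tensors used in the exercise code below. Standard DQN lets the target network both select and evaluate the greedy next action; double DQN decouples the two, selecting with the online network and evaluating with the target network:

# Standard DQN target: the target network both selects and evaluates
next_q_values = target_network(next_states).max(dim=1).values
target_q_values = rewards + gamma * next_q_values * (1 - dones)

# Double DQN target: the online network selects, the target network evaluates
next_actions = online_network(next_states).argmax(dim=1, keepdim=True)
next_q_values = target_network(next_states).gather(1, next_actions).squeeze(1)
target_q_values = rewards + gamma * next_q_values * (1 - dones)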

This exercise is part of the course Deep Reinforcement Learning in Python.

Exercise instructions

  • Calculate the next actions for the Q-target calculation using online_network(), making sure to obtain the right action indices and shape.
  • Estimate the Q-values for these actions with target_network(), again making sure to obtain the correct values and shape.

Hands-on interactive exercise

Have a go at this exercise by working through the sample code below.

import torch
import torch.nn as nn

# env, online_network, target_network, replay_buffer, optimizer, gamma,
# batch_size, and total_steps are assumed pre-defined by the exercise.
for episode in range(10):
    state, info = env.reset()
    done = False
    step = 0
    episode_reward = 0
    while not done:
        step += 1
        total_steps += 1
        q_values = online_network(state)
        action = select_action(q_values, total_steps, start=.9, end=.05, decay=1000)
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        replay_buffer.push(state, action, reward, next_state, done)        
        if len(replay_buffer) >= batch_size:
            states, actions, rewards, next_states, dones = replay_buffer.sample(batch_size)
            q_values = online_network(states).gather(1, actions).squeeze(1)
            with torch.no_grad():
                # Double DQN: select the next actions with the online network
                next_actions = online_network(next_states).argmax(dim=1, keepdim=True)
                # Evaluate those actions with the target network
                next_q_values = target_network(next_states).gather(1, next_actions).squeeze(1)
                target_q_values = rewards + gamma * next_q_values * (1-dones)
            loss = nn.MSELoss()(q_values, target_q_values)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            update_target_network(target_network, online_network, tau=.005)
        state = next_state
        episode_reward += reward    
    describe_episode(episode, reward, episode_reward, step)
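The update_target_network() helper is supplied by the exercise environment. Given its tau argument, a reasonable assumption is that it performs a Polyak (soft) update of the target weights; a minimal sketch of such a helper:

def update_target_network(target_network, online_network, tau=0.005):
    # Soft (Polyak) update: target <- tau * online + (1 - tau) * target
    for target_param, online_param in zip(target_network.parameters(),
                                          online_network.parameters()):
        target_param.data.copy_(tau * online_param.data + (1 - tau) * target_param.data)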