Implementing value iteration
Value iteration is a key method in RL for finding the optimal policy. It repeatedly improves the value function for every state until the values converge, at which point the greedy policy it induces is optimal. You'll start with an initialized value function V and policy, both preloaded for you. Then you'll update them in a loop until the value function converges and see the resulting policy in action.
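Concretely, each sweep applies the Bellman optimality backup to every state; in standard notation (not shown in the exercise itself),

V_{k+1}(s) = \max_a \sum_{s'} P(s' \mid s, a) \left[ R(s, a, s') + \gamma V_k(s') \right],

and the policy records the maximizing action for each state.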
The get_max_action_and_value(state, V) function has been pre-loaded for you.
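Its implementation isn't shown in the exercise. Below is a minimal sketch of what such a helper could look like for a FrozenLake-style toy-text environment, used here only as a stand-in: names like env, gamma, and num_actions are assumptions, not part of the exercise.

import gymnasium as gym

# Assumed setup: the course pre-loads its own environment and constants,
# so env, gamma, and num_actions here are illustrative stand-ins.
env = gym.make("FrozenLake-v1", is_slippery=False)
num_actions = env.action_space.n
gamma = 0.9  # assumed discount factor

def get_max_action_and_value(state, V):
    # Compute Q(state, a) for every action from the transition table,
    # then return the best action together with its Q-value.
    q_values = []
    for action in range(num_actions):
        q = sum(prob * (reward + gamma * V.get(next_state, 0.0))
                for prob, next_state, reward, _ in env.unwrapped.P[state][action])
        q_values.append(q)
    max_action = max(range(num_actions), key=lambda a: q_values[a])
    return max_action, q_values[max_action]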
Exercise instructions
- For each state, find the action with the maximum Q-value (max_action) and its corresponding value (max_q_value).
- Update the new_V dictionary and the policy based on max_action and max_q_value.
- Check for convergence by testing whether the difference between new_V and V for every state is less than threshold.

A possible completion is sketched after the sample code below.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
threshold = 0.001
while True:
    new_V = {}
    for state in range(num_states-1):
        # Get action with maximum Q-value and its value
        max_action, max_q_value = ____
        # Update the value function and policy
        new_V[state] = ____
        policy[state] = ____
    # Test if change in state values is negligible
    if all(abs(____ - ____) < ____ for state in ____):
        break
    V = new_V
render_policy(policy)
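For reference, here is one plausible way to fill in the blanks. It is a sketch, not necessarily the course's official solution, and it assumes the pre-loaded V, policy, num_states, and render_policy, plus a get_max_action_and_value like the one sketched earlier.

threshold = 0.001
while True:
    new_V = {}
    for state in range(num_states-1):
        # Greedy backup: best action in this state and its Q-value under V
        max_action, max_q_value = get_max_action_and_value(state, V)
        # The new state value is that best Q-value; the policy takes that action
        new_V[state] = max_q_value
        policy[state] = max_action
    # Stop once no state value changes by more than the threshold
    if all(abs(new_V[state] - V[state]) < threshold for state in range(num_states-1)):
        break
    V = new_V
render_policy(policy)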