
Assessing convergence in a multi-armed bandit

Evaluating how well and how quickly a strategy converges in a multi-armed bandit problem is key to understanding its effectiveness. By analyzing how frequently each arm is selected over time, we can follow the learning process and judge the strategy's ability to identify and exploit the best arm. In this exercise, you will visualize the selection percentage of each arm over iterations to assess the convergence of an epsilon-greedy strategy.

The selected_arms array, which records the arm pulled at each iteration, has been pre-loaded for you.
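In the exercise, selected_arms was produced by the epsilon-greedy strategy whose convergence you are assessing. If you want to reproduce something similar outside the exercise environment, the sketch below shows one way such a history could be generated; the bandit probabilities, epsilon value, iteration count, and random seed here are illustrative assumptions, not values from the course.

import numpy as np

# Illustrative assumptions: three Bernoulli bandits, epsilon = 0.1, 1000 iterations
true_bandit_probs = np.array([0.3, 0.5, 0.8])
n_bandits, n_iterations, epsilon = len(true_bandit_probs), 1000, 0.1

estimated_values = np.zeros(n_bandits)             # running mean reward per arm
pull_counts = np.zeros(n_bandits)                  # number of pulls per arm
selected_arms = np.zeros(n_iterations, dtype=int)  # arm chosen at each iteration

rng = np.random.default_rng(42)
for i in range(n_iterations):
    # Explore with probability epsilon, otherwise exploit the current best estimate
    if rng.random() < epsilon:
        arm = int(rng.integers(n_bandits))
    else:
        arm = int(np.argmax(estimated_values))
    reward = float(rng.random() < true_bandit_probs[arm])
    pull_counts[arm] += 1
    # Incremental update of the running mean reward for the pulled arm
    estimated_values[arm] += (reward - estimated_values[arm]) / pull_counts[arm]
    selected_arms[i] = arm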

This exercise is part of the course

Reinforcement Learning with Gymnasium in Python


Instructions

  • Initialize an array selections_percentage with zeros, with dimensions to track the selection percentage of each bandit over time.
  • Get the selections_percentage over time by calculating the cumulative sum of selections for each bandit over iterations and dividing by the iteration number (see the small worked example after this list).
  • Plot the cumulative selection percentages for each bandit, to visualize how often each bandit is chosen over iterations.
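To make the second step concrete, here is a tiny standalone example of the cumulative-sum trick; the five-iteration, three-arm history is made up purely for illustration.

import numpy as np

# Made-up history: 5 iterations, 3 arms; entries are the arm index pulled at each step
selected_arms = np.array([2, 2, 0, 2, 2])
one_hot = np.zeros((5, 3))
one_hot[np.arange(5), selected_arms] = 1
# Cumulative selections per arm, divided by the iteration number 1..5
percentages = np.cumsum(one_hot, axis=0) / np.arange(1, 6).reshape(-1, 1)
print(percentages[-1])  # [0.2 0.  0.8] -> arm 3 was chosen 80% of the time

Each row of percentages gives the fraction of pulls that has gone to each arm up to that iteration, which is exactly what the plot in the exercise tracks.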

Hands-on interactive exercise

Try this exercise by completing the sample code below.

import numpy as np
import matplotlib.pyplot as plt

# selected_arms, n_iterations, n_bandits, and true_bandit_probs are pre-loaded in the exercise
# Initialize the selection percentages with zeros
selections_percentage = np.zeros((n_iterations, n_bandits))
# Mark which arm was selected at each iteration
for i in range(n_iterations):
    selections_percentage[i, selected_arms[i]] = 1
# Compute the cumulative selection percentages
selections_percentage = np.cumsum(selections_percentage, axis=0) / np.arange(1, n_iterations + 1).reshape(-1, 1)
for arm in range(n_bandits):
    # Plot the cumulative selection percentage for each arm
    plt.plot(selections_percentage[:, arm], label=f'Bandit #{arm+1}')
plt.xlabel('Iteration Number')
plt.ylabel('Percentage of Bandit Selections (%)')
plt.legend()
plt.show()

# Print the true probability of each bandit for comparison
for i, prob in enumerate(true_bandit_probs, 1):
    print(f"Bandit #{i} -> {prob:.2f}")