This exercise is part of the course
Dive into the exciting world of Reinforcement Learning (RL) by exploring its foundational concepts, roles, and applications. Navigate through the RL framework, uncovering the agent-environment interaction. You'll also learn how to use the Gymnasium library to create environments, visualize states, and perform actions, building a practical foundation for the chapters ahead.
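The agent-environment interaction described above can be sketched with a minimal loop. Gymnasium may not be available in every setting, so the `CoinFlipEnv` class below is an invented toy that only mirrors the shape of the Gymnasium API (`reset()` returning `(observation, info)`, `step()` returning `(observation, reward, terminated, truncated, info)`); its states, rewards, and episode length are assumptions for illustration, not part of the course material.

```python
import random

class CoinFlipEnv:
    """Toy environment mimicking the Gymnasium API shape (invented for illustration)."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.steps = 0

    def reset(self):
        # Gymnasium-style reset: return (initial observation, info dict)
        self.steps = 0
        return 0, {}

    def step(self, action):
        # Reward 1 if the agent's guess (0 or 1) matches a coin flip
        flip = self.rng.randint(0, 1)
        reward = 1.0 if action == flip else 0.0
        self.steps += 1
        terminated = self.steps >= 10  # episode ends after 10 steps
        return flip, reward, terminated, False, {}

# Agent-environment interaction loop
env = CoinFlipEnv(seed=42)
obs, info = env.reset()
total_reward = 0.0
done = False
while not done:
    action = obs  # naive policy: guess whatever the last flip was
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```

With the real library, the loop body is identical; only the environment construction changes (e.g. `gymnasium.make("FrozenLake-v1")`).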
Delve deeper into the world of RL focusing on model-based learning. Unravel the complexities of Markov Decision Processes (MDPs), understanding their essential components. Enhance your skill set by learning about policies and value functions. Gain expertise in policy optimization with policy iteration and value iteration techniques.
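A minimal sketch of value iteration on a hand-built MDP may help make the chapter concrete. The two-state MDP, its transition probabilities, rewards, and the discount factor below are all invented for illustration; only the algorithm (the Bellman optimality backup applied until convergence, then greedy policy extraction) is the technique the chapter covers.

```python
# Value iteration on a tiny 2-state MDP (states/rewards invented for illustration).
# mdp[s][a] = list of (probability, next_state, reward) transitions.
mdp = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor

# Repeated Bellman optimality backups: V(s) <- max_a sum_s' p(s'|s,a)[r + gamma V(s')]
V = {s: 0.0 for s in mdp}
for _ in range(200):
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in transitions)
            for transitions in actions.values()
        )
        for s, actions in mdp.items()
    }

# Extract the greedy policy from the converged value function
policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in actions[a]))
    for s, actions in mdp.items()
}
print(policy, {s: round(v, 2) for s, v in V.items()})
```

Here staying in state 1 earns reward 2 forever, so V(1) converges to 2/(1 - 0.9) = 20 and the greedy policy is to reach state 1 and stay.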
Current exercise
Embark on a journey through the dynamic realm of Model-Free Learning in RL. Get introduced to the foundational Monte Carlo methods, and apply first-visit and every-visit Monte Carlo prediction algorithms. Transition into the world of Temporal Difference Learning, exploring the SARSA algorithm. Finally, dive into the depths of Q-Learning, and analyze its convergence in challenging environments.
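Of the methods named above, first-visit Monte Carlo prediction is the simplest to sketch. The deterministic 4-state chain, its rewards, and the discount factor below are assumptions invented for illustration; the algorithm itself (average the return observed after the first visit to each state) is the standard one.

```python
from collections import defaultdict

# First-visit Monte Carlo prediction on a toy 4-state chain
# (environment invented for illustration). A fixed policy always moves right;
# reaching terminal state 3 gives reward 1, every other step gives reward 0.
def generate_episode():
    state, episode = 0, []
    while state != 3:
        next_state = state + 1
        reward = 1.0 if next_state == 3 else 0.0
        episode.append((state, reward))
        state = next_state
    return episode

gamma = 0.5
returns = defaultdict(list)
for _ in range(100):
    episode = generate_episode()
    G = 0.0
    # Walk backwards, accumulating the discounted return G
    for t in reversed(range(len(episode))):
        state, reward = episode[t]
        G = gamma * G + reward
        # First-visit rule: record G only if this is the state's first occurrence
        if state not in [s for s, _ in episode[:t]]:
            returns[state].append(G)

V = {s: sum(gs) / len(gs) for s, gs in returns.items()}
print(V)
```

Because the chain is deterministic, the estimates match the true discounted values exactly: V(2) = 1, V(1) = 0.5, V(0) = 0.25. Every-visit prediction differs only by dropping the first-occurrence check.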
Dive into advanced strategies in Model-Free RL, focusing on enhancing decision-making algorithms. Learn about Expected SARSA for more accurate policy updates and Double Q-learning to mitigate overestimation bias. Explore the Exploration-Exploitation Tradeoff, mastering epsilon-greedy and epsilon-decay strategies for optimal action selection. Tackle the Multi-Armed Bandit Problem, applying strategies to solve decision-making challenges under uncertainty.
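The epsilon-greedy and epsilon-decay strategies mentioned above can be sketched on a small multi-armed bandit. The three arm probabilities, the decay rate, the floor of 0.01, and the number of steps are all invented hyperparameters for illustration, not values from the course.

```python
import random

# Epsilon-greedy with epsilon decay on a 3-armed Bernoulli bandit
# (arm means and hyperparameters invented for illustration).
true_means = [0.2, 0.5, 0.8]   # arm 2 is the best arm
rng = random.Random(0)

counts = [0, 0, 0]
q_values = [0.0, 0.0, 0.0]     # running estimates of each arm's mean reward
epsilon = 1.0

for step in range(5000):
    # Explore with probability epsilon, otherwise exploit the current best estimate
    if rng.random() < epsilon:
        arm = rng.randrange(3)
    else:
        arm = q_values.index(max(q_values))
    reward = 1.0 if rng.random() < true_means[arm] else 0.0
    counts[arm] += 1
    # Incremental sample-average update: Q <- Q + (r - Q) / n
    q_values[arm] += (reward - q_values[arm]) / counts[arm]
    # Epsilon decay: shift from exploration toward exploitation over time
    epsilon = max(0.01, epsilon * 0.995)

print([round(q, 2) for q in q_values])
```

Decaying epsilon spends the early steps sampling all arms, then concentrates pulls on the arm with the highest estimated value while keeping a small exploration floor.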