Exercise

Experience replay buffer

You will now create the data structure to support Experience Replay, which will enable your agent to learn much more efficiently.

This replay buffer should support two operations:

  • Storing experiences in its memory for future sampling.
  • "Replaying" a randomly sampled batch of past experiences from its memory.

Because the data sampled from the replay buffer will be fed into a neural network, the buffer should return torch tensors for convenience.

The torch and random modules and the deque class have been imported into your exercise environment.

Instructions

  • Complete the push() method of ReplayBuffer by appending experience_tuple to the buffer memory.
  • In the sample() method, draw a random sample of size batch_size from self.memory.
  • Again in sample(), the sample is initially drawn as a list of tuples; ensure that it is transformed into a tuple of lists.
  • Reshape actions_tensor from shape (batch_size,) to (batch_size, 1).
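The steps above can be sketched as follows. This is a minimal reference implementation, assuming a five-element experience tuple (state, action, reward, next_state, done) and a capacity argument for the constructor; the exact signatures in the exercise environment may differ.

```python
import random
from collections import deque

import torch


class ReplayBuffer:
    """Fixed-size memory of past experiences for off-policy learning."""

    def __init__(self, capacity):
        # deque with maxlen discards the oldest experience once full
        self.memory = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # Store one experience tuple for future sampling
        experience_tuple = (state, action, reward, next_state, done)
        self.memory.append(experience_tuple)

    def sample(self, batch_size):
        # Draw a random batch: a list of experience tuples
        batch = random.sample(self.memory, batch_size)
        # Transpose the list of tuples into a tuple of lists
        states, actions, rewards, next_states, dones = zip(*batch)
        states_tensor = torch.tensor(states, dtype=torch.float32)
        # unsqueeze(1) reshapes actions from (batch_size,) to (batch_size, 1)
        actions_tensor = torch.tensor(actions, dtype=torch.long).unsqueeze(1)
        rewards_tensor = torch.tensor(rewards, dtype=torch.float32)
        next_states_tensor = torch.tensor(next_states, dtype=torch.float32)
        dones_tensor = torch.tensor(dones, dtype=torch.float32)
        return (states_tensor, actions_tensor, rewards_tensor,
                next_states_tensor, dones_tensor)

    def __len__(self):
        return len(self.memory)
```

The (batch_size, 1) shape for actions is convenient because torch.gather, typically used to pick the Q-value of the taken action from a network's output, expects its index tensor to have the same number of dimensions as the Q-value tensor.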