

Exercise

Gradient checkpointing with Trainer

You want to use gradient checkpointing to reduce the memory footprint of your model. You've seen how to write an explicit training loop with Accelerator; now you'd like to use Trainer, a simplified interface that doesn't require writing the loop yourself. The exercise will take some time to run because of the call to trainer.train().

Set up the arguments for Trainer to use gradient checkpointing.

Instructions

100 XP
  • Use four gradient accumulation steps in TrainingArguments.
  • Enable gradient checkpointing in TrainingArguments.
  • Pass the training arguments to Trainer (a sketch of the full setup follows below).
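
A minimal sketch of what the completed setup might look like, assuming a model and train_dataset are defined in earlier steps of the exercise (those names, along with output_dir and the batch size, are placeholders):

from transformers import Trainer, TrainingArguments

# Placeholder training arguments; gradient_accumulation_steps and
# gradient_checkpointing are the settings this exercise asks for.
training_args = TrainingArguments(
    output_dir="./results",          # hypothetical output directory
    num_train_epochs=1,
    per_device_train_batch_size=8,   # illustrative batch size
    gradient_accumulation_steps=4,   # accumulate gradients over four steps
    gradient_checkpointing=True,     # recompute activations to save memory
)

# Pass the training arguments to Trainer along with the model and dataset.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)

trainer.train()

Gradient checkpointing trades extra compute for memory by recomputing intermediate activations during the backward pass, while gradient accumulation lets the effective batch size stay large even when each per-device batch is small.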