Gradient checkpointing with Trainer
You want to use gradient checkpointing to reduce the memory footprint of your model. You've seen how to write an explicit training loop with Accelerator; now you'd like to use Trainer, a simplified interface that doesn't require writing the training loop yourself. Note that the call to trainer.train() will take some time to run.
Set up the arguments for Trainer to use gradient checkpointing.
This exercise is part of the course
Efficient AI Model Training with PyTorch
Exercise instructions
- Use four gradient accumulation steps in TrainingArguments.
- Enable gradient checkpointing in TrainingArguments.
- Pass in the training arguments to Trainer.
Hands-on interactive exercise
Try this exercise by completing the sample code below.
training_args = TrainingArguments(output_dir="./results",
evaluation_strategy="epoch",
# Use four gradient accumulation steps
gradient_accumulation_steps=____,
# Enable gradient checkpointing
____=____)
trainer = Trainer(model=model,
# Pass in the training arguments
____=____,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
compute_metrics=compute_metrics)
trainer.train()
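
For reference, a minimal completed sketch is shown below, filled in according to the exercise instructions above. It assumes that model, dataset, and compute_metrics have already been defined earlier in the exercise, and that TrainingArguments and Trainer come from the transformers library.

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(output_dir="./results",
                                  evaluation_strategy="epoch",
                                  # Accumulate gradients over four batches before each optimizer step
                                  gradient_accumulation_steps=4,
                                  # Trade extra compute for lower memory by recomputing activations in the backward pass
                                  gradient_checkpointing=True)

trainer = Trainer(model=model,
                  # Pass in the training arguments
                  args=training_args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"],
                  compute_metrics=compute_metrics)

trainer.train()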