Gradient checkpointing with Trainer
You want to use gradient checkpointing to reduce the memory footprint of your model. You've seen how to write an explicit training loop with Accelerator; now you'd like to use Trainer, a simplified interface that avoids writing the training loop yourself. Note that this exercise takes some time to run because of the call to trainer.train().
Set up the arguments for Trainer to use gradient checkpointing.
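As a reminder of what the flag does: gradient checkpointing discards most intermediate activations during the forward pass and recomputes them during backpropagation, trading extra compute for a smaller memory footprint. Outside of Trainer, the same behavior can be switched on directly on the model; a minimal sketch, assuming model is any Hugging Face PreTrainedModel:

# Enable activation checkpointing on the model itself,
# equivalent in effect to the Trainer flag used in this exercise.
model.gradient_checkpointing_enable()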
This exercise is part of the course
Efficient AI Model Training with PyTorch
Exercise instructions
- Use four gradient accumulation steps in TrainingArguments.
- Enable gradient checkpointing in TrainingArguments.
- Pass in the training arguments to Trainer.
Hands-on interactive exercise
Try this exercise by completing the following sample code.
training_args = TrainingArguments(output_dir="./results",
                                  evaluation_strategy="epoch",
                                  # Use four gradient accumulation steps
                                  gradient_accumulation_steps=____,
                                  # Enable gradient checkpointing
                                  ____=____)

trainer = Trainer(model=model,
                  # Pass in the training arguments
                  ____=____,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"],
                  compute_metrics=compute_metrics)

trainer.train()
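If you get stuck, here is one way the completed code could look. This is a sketch rather than the official solution, but it follows the exercise instructions directly: the blanks map to gradient_accumulation_steps=4, gradient_checkpointing=True, and Trainer's args parameter, all of which exist in the Hugging Face transformers API.

training_args = TrainingArguments(output_dir="./results",
                                  evaluation_strategy="epoch",
                                  # Accumulate gradients over four batches
                                  gradient_accumulation_steps=4,
                                  # Recompute activations in the backward pass to save memory
                                  gradient_checkpointing=True)

trainer = Trainer(model=model,
                  # Hand the configured training arguments to Trainer
                  args=training_args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"],
                  compute_metrics=compute_metrics)

trainer.train()

One consequence of the accumulation setting: the optimizer applies gradients only once every four forward/backward passes, so the effective batch size is four times per_device_train_batch_size.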