Gradient accumulation with Accelerator
You're training a language model to simplify translations by paraphrasing complex sentences, but your GPU is running out of memory. Gradient accumulation lets you train on an effectively larger batch by splitting it into small batches that fit into memory and accumulating their gradients before each weight update. You prefer to write the training loop explicitly to see its structure, so you're using Accelerator. Note that this exercise actually runs on the CPU, but the code remains the same for a GPU.
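To see what Accelerator automates for you, here is a minimal sketch of manual gradient accumulation in plain PyTorch. The model, optimizer, and dataloader names below are generic placeholders for illustration, not the objects pre-defined in this exercise.

# Minimal sketch of manual gradient accumulation in plain PyTorch.
# `model`, `optimizer`, and `dataloader` are illustrative placeholders.
accumulation_steps = 2  # effective batch size = loader batch size * 2

for step, batch in enumerate(dataloader):
    outputs = model(batch["input_ids"], labels=batch["labels"])
    # Scale the loss so the accumulated gradient averages over the
    # effective batch instead of summing it
    loss = outputs.loss / accumulation_steps
    loss.backward()  # gradients accumulate in .grad across iterations
    # Update the weights only once every `accumulation_steps` batches
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()

Accelerator wraps this same pattern behind a cleaner API, which is what you'll use below.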
The model, train_dataloader, optimizer, and lr_scheduler have been pre-defined.
This exercise is part of the course Efficient AI Model Training with PyTorch.
Exercise instructions
- Configure Accelerator() to use gradient accumulation with two steps.
- Set up an Accelerator context manager to enable gradient accumulation for the model.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Configure Accelerator
accelerator = ____(____=____)
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(model, optimizer, train_dataloader, lr_scheduler)
for batch in train_dataloader:
    # Set up an Accelerator context manager
    with ____.____(____):
        inputs, targets = batch["input_ids"], batch["labels"]
        outputs = model(inputs, labels=targets)
        loss = outputs.loss
        accelerator.backward(loss)
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        print(f"Loss = {loss}")