Building a training loop with Accelerator
You're ready to implement a training loop for your language translation service. Now that you've seen how `Accelerator` modifies a PyTorch loop for distributed training, you can leverage the `Accelerator` class in your training loop!
Some data has been pre-loaded:

- `accelerator` is an instance of `Accelerator`
- `train_dataloader`, `optimizer`, `model`, and `lr_scheduler` have been defined and prepared with `Accelerator`
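If you're curious how that preparation step works behind the scenes, here is a minimal sketch. The toy model, optimizer, dataloader, and scheduler below are hypothetical stand-ins, not the course's translation model; the key call is `accelerator.prepare()`.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# Hypothetical stand-ins for the pre-loaded objects
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
train_dataloader = DataLoader(TensorDataset(torch.randn(32, 10)), batch_size=8)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)

# prepare() wraps each object for the current device and distributed setup;
# the outputs must be unpacked in the same order as the inputs
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
    model, optimizer, train_dataloader, lr_scheduler
)
```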
Exercise instructions
- Call the `optimizer` to zero the gradients.
- Update the model's parameters.
- Update the learning rate of the `optimizer`.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
```python
for batch in train_dataloader:
    # Call the optimizer to zero the gradients
    ____.____()
    inputs, targets = batch["input_ids"], batch["labels"]
    outputs = model(inputs, labels=targets)
    loss = outputs.loss
    accelerator.backward(loss)
    # Update the model's parameters
    ____.____()
    # Update the learning rate of the optimizer
    ____.____()
```
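For reference, here is one way the completed loop could look, assuming the pre-loaded names listed above. This is a sketch of the standard pattern, not necessarily the course's official solution; note that `accelerator.backward(loss)` replaces the usual `loss.backward()` so that `Accelerator` can handle gradient scaling and distributed synchronization.

```python
for batch in train_dataloader:
    # Zero the gradients accumulated from the previous step
    optimizer.zero_grad()
    inputs, targets = batch["input_ids"], batch["labels"]
    outputs = model(inputs, labels=targets)
    loss = outputs.loss
    # Backward pass goes through Accelerator instead of loss.backward()
    accelerator.backward(loss)
    # Update the model's parameters
    optimizer.step()
    # Update the learning rate of the optimizer
    lr_scheduler.step()
```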