Prepare a model for distributed training
You've decided to use the Hugging Face Accelerate library to train your language translation model. Now it's time to prepare your model for distributed training!
Some data has been pre-loaded:

- `accelerator` is an instance of `Accelerator`
- `model`, `optimizer`, `train_dataloader`, and `lr_scheduler` have been defined (a setup sketch follows below)
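For context, here is a minimal sketch of how such objects might be created. The stand-in model, data, and hyperparameters are assumptions for illustration, not the course's actual pre-loaded values:

```python
# Illustrative setup only -- the real exercise pre-loads a translation
# model and dataset; the objects below are simple stand-ins.
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

model = torch.nn.Linear(128, 128)  # stand-in for a translation model
train_dataset = TensorDataset(torch.randn(64, 128), torch.randn(64, 128))
train_dataloader = DataLoader(train_dataset, batch_size=16, shuffle=True)
optimizer = AdamW(model.parameters(), lr=3e-5)
lr_scheduler = StepLR(optimizer, step_size=1, gamma=0.9)
```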
Exercise instructions
- Call a method to prepare objects for distributed training.
- Pass in the training objects to the method, matching the order of the output.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
```python
# Prepare objects for distributed training
model, optimizer, train_dataloader, lr_scheduler = ____.____(
    # Pass in the training objects matching the order of the output
    ____,
    ____,
    ____,
    ____)
```
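For reference, a completed version of this call might look like the following. `accelerator.prepare()` is the Accelerate method that wraps each training object for the current device setup and returns the wrapped objects in the same order they were passed in:

```python
# Prepare objects for distributed training
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
    # Pass in the training objects matching the order of the output
    model,
    optimizer,
    train_dataloader,
    lr_scheduler)
```

Under a multi-process launch (for example via `accelerate launch`), `prepare()` typically wraps the model in `DistributedDataParallel` and gives each process its own shard of every dataloader batch, so the training loop itself needs no further changes.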