Automatic device placement with Accelerator
Your conversational AI model needs to train on a massive dataset, so you've decided to move the model to a GPU. You're leveraging Accelerator for automatic device placement. Note that this exercise actually runs on the CPU, but the code remains the same for running on a GPU.
A BERT-based model has been preloaded as model.
This exercise is part of the course Efficient AI Model Training with PyTorch.
Exercise instructions
- Declare an accelerator object by instantiating the appropriate class.
- Use the accelerator object to prepare the model for distributed training on the GPU.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
from accelerate import Accelerator
# Declare an accelerator object
accelerator = ____()
# Prepare the model for distributed training
model = accelerator.____(model)
print(accelerator.device)
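For reference, a minimal completed sketch of the scaffold above might look like the following (it assumes model is the preloaded BERT-based model; Accelerator() and prepare() come from the Hugging Face accelerate library):

from accelerate import Accelerator

# Declare an accelerator object; it automatically detects the available device (GPU or CPU)
accelerator = Accelerator()

# Prepare the model for distributed training; prepare() moves it to accelerator.device
model = accelerator.prepare(model)

# Shows the device selected, e.g. "cuda" on a GPU machine or "cpu" here
print(accelerator.device)

Note that accelerator.prepare() can also accept optimizers and dataloaders in the same call, returning each of them wrapped for the current device setup.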