Training loops before and after Accelerator
You want to modify a PyTorch training loop to use Accelerator for your language model, which is trained on the MRPC dataset of sentence paraphrases. Update the training loop to prepare your model for distributed training.
Some data has been pre-loaded:
- accelerator is an instance of Accelerator
- train_dataloader, optimizer, model, and lr_scheduler have been defined and prepared with Accelerator
This exercise is part of the course Efficient AI Model Training with PyTorch.

Exercise instructions
- Update the .to(device) lines so that Accelerator handles device placement.
- Modify the gradient computation to use Accelerator.
Hands-on interactive exercise

Complete this sample code to finish the exercise.
for batch in train_dataloader:
    optimizer.zero_grad()
    inputs, targets = batch["input_ids"], batch["labels"]
    # Accelerator handles device placement, so the .to(device) calls are removed
    outputs = model(inputs, labels=targets)
    loss = outputs.loss
    # accelerator.backward() replaces loss.backward(), handling scaling for
    # mixed-precision and distributed runs
    accelerator.backward(loss)
    optimizer.step()
    lr_scheduler.step()
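For contrast with the "before Accelerator" half of this exercise's title, here is what the same loop looks like in plain PyTorch, with manual device placement and a direct loss.backward() call. This is a self-contained sketch: the toy model that returns an object with a .loss attribute (mimicking Hugging Face model outputs), the dataloader, and all names are illustrative assumptions, not the exercise's real objects.

```python
import torch
from torch import nn

# Toy stand-ins (assumed names/shapes), mimicking a Hugging Face-style model
# whose forward pass returns an object carrying a .loss attribute
class ToyOutput:
    def __init__(self, loss):
        self.loss = loss

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, input_ids, labels=None):
        logits = self.linear(input_ids)
        loss = nn.functional.cross_entropy(logits, labels) if labels is not None else None
        return ToyOutput(loss)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = ToyModel().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
samples = [{"input_ids": torch.randn(4), "labels": torch.randint(0, 2, ())} for _ in range(16)]
train_dataloader = torch.utils.data.DataLoader(samples, batch_size=4)

for batch in train_dataloader:
    optimizer.zero_grad()
    inputs, targets = batch["input_ids"], batch["labels"]
    # Without Accelerator, every batch must be moved to the device by hand
    inputs = inputs.to(device)
    targets = targets.to(device)
    outputs = model(inputs, labels=targets)
    loss = outputs.loss
    # ...and gradients are computed directly on the loss tensor
    loss.backward()
    optimizer.step()
    lr_scheduler.step()
```

The structural difference is small but important: Accelerator removes the two .to(device) lines (prepare() already placed the dataloader's batches) and swaps loss.backward() for accelerator.backward(loss), which is what makes the same loop work unchanged across single-GPU, multi-GPU, and mixed-precision setups.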