
Mixed precision training with Accelerator

You want to simplify your PyTorch loop for mixed precision training of your language translation model by using Accelerator. Build the new training loop to leverage Accelerator!

Some objects have been preloaded: dataset, model, dataloader, and optimizer.

This exercise is part of the course Efficient AI Model Training with PyTorch.

Exercise instructions

  • Enable mixed precision training using FP16 in Accelerator.
  • Prepare training objects for mixed precision training before the loop.
  • Compute the gradients of the loss for mixed precision training.

Hands-on interactive exercise

Finish this exercise by completing the sample code below.

# Enable mixed precision training using FP16
accelerator = Accelerator(____="____")

# Prepare training objects for mixed precision training
model, optimizer, train_dataloader, lr_scheduler = ____.____(____, ____, ____, ____)

for batch in train_dataloader:
    inputs, targets = batch["input_ids"], batch["labels"]
    outputs = model(inputs, labels=targets)
    loss = outputs.loss
    # Compute the gradients of the loss
    ____.____(loss)
    optimizer.step()
    optimizer.zero_grad()
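
A minimal sketch of one possible completion is shown below. It assumes the Hugging Face Accelerate API (Accelerator(mixed_precision="fp16"), accelerator.prepare(), accelerator.backward()) and that model, optimizer, train_dataloader, and lr_scheduler are preloaded as in the scaffold above; treat it as an illustration rather than the official solution.

from accelerate import Accelerator

# Enable mixed precision training using FP16
accelerator = Accelerator(mixed_precision="fp16")

# Wrap the training objects so Accelerate handles device placement and gradient scaling
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
    model, optimizer, train_dataloader, lr_scheduler
)

for batch in train_dataloader:
    inputs, targets = batch["input_ids"], batch["labels"]
    outputs = model(inputs, labels=targets)
    loss = outputs.loss
    # Compute the gradients of the loss, with FP16 loss scaling handled by Accelerate
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()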