
8-bit Adam with Accelerator

You would like to customize your training loop with 8-bit Adam to reduce the memory requirements of your model. Prepare the training loop with 8-bit Adam.

Assume that an 8-bit Adam optimizer has been defined as adam_bnb_optim. The other training objects have already been defined: model, train_dataloader, lr_scheduler, and accelerator.
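For context, such an optimizer is commonly created with the bitsandbytes library. The snippet below is only an illustrative assumption (the learning rate is arbitrary); the exercise itself simply states that adam_bnb_optim already exists.

import bitsandbytes as bnb

# Illustrative assumption: an 8-bit Adam optimizer built with bitsandbytes
adam_bnb_optim = bnb.optim.Adam8bit(model.parameters(), lr=1e-5)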

This exercise is part of the course

Efficient AI Model Training with PyTorch


Exercise instructions

  • Prepare the 8-bit Adam optimizer for distributed training.
  • Update the model parameters with the optimizer.
  • Zero the gradients with the optimizer.

Interactive hands-on exercise

Try to solve this exercise by completing the sample code.

# Prepare the 8-bit Adam optimizer for distributed training
model, ____, train_dataloader, lr_scheduler = accelerator.prepare(model, ____, train_dataloader, lr_scheduler)

for batch in train_dataloader:
    inputs, targets = batch["input_ids"], batch["labels"]
    outputs = model(inputs, labels=targets)
    loss = outputs.loss
    accelerator.backward(loss)
    # Update the model parameters
    ____.____()
    lr_scheduler.step()
    # Zero the gradients
    ____.____()
    print(f"Loss = {loss}")