
Prepare a model for distributed training

You've decided to use the Hugging Face Accelerate library to train your language translation model. Now it's time to prepare your model for distributed training!

Some data has been pre-loaded:

  • accelerator is an instance of Accelerator
  • model, optimizer, train_dataloader, and lr_scheduler have been defined

This exercise is part of the course

Efficient AI Model Training with PyTorch

Exercise instructions

  • Call a method to prepare objects for distributed training.
  • Pass in the training objects to the method, matching the order of the output.

Hands-on interactive exercise

Try this exercise by completing the sample code below.

# Prepare objects for distributed training
model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
    # Pass in the training objects, matching the order of the output
    model,
    optimizer,
    train_dataloader,
    lr_scheduler)
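The key contract here is that `Accelerator.prepare` returns the wrapped objects in the same order you pass them in, so the left-hand unpacking must list them in matching order. A minimal stand-in sketch of that contract, using a hypothetical `prepare` function (not the real Accelerate implementation, which wraps each object for the current distributed setup):

```python
# Hypothetical stand-in for Accelerator.prepare: it "wraps" each
# training object and returns them in the order they were passed,
# which is why the unpacking order on the left must match.
def prepare(*objects):
    return tuple(f"prepared:{name}" for name in objects)

model, optimizer, train_dataloader, lr_scheduler = prepare(
    "model", "optimizer", "train_dataloader", "lr_scheduler")

print(model)         # prepared:model
print(lr_scheduler)  # prepared:lr_scheduler
```

Swapping the order on either side would silently assign, say, the optimizer wrapper to the `model` name, so keeping the two orders aligned is the whole exercise.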