
Prepare datasets for distributed training

You've preprocessed a dataset for a precision agriculture system that helps farmers monitor crop health. Now you'll load the data by creating a DataLoader and placing it on GPUs for distributed training, if GPUs are available. Note that the exercise actually runs on a CPU, but the code is the same for CPUs and GPUs.

Some data has been pre-loaded:

  • A sample dataset with agricultural imagery
  • The Accelerator class from the accelerate library
  • The DataLoader class

This exercise is part of the course Efficient AI Model Training with PyTorch.

Exercise instructions

  • Create a dataloader for the pre-defined dataset.
  • Place the dataloader on available devices using the accelerator object.

Interactive hands-on exercise

Try to solve this exercise by completing the sample code.

accelerator = Accelerator()

# Create a dataloader for the pre-defined dataset
dataloader = ____(____, batch_size=32, shuffle=True)

# Place the dataloader on available devices
dataloader = accelerator.____(____)

print(accelerator.device)
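
A possible completion of the scaffold is sketched below. It assumes the pre-loaded dataset is available under the name dataset (the actual variable name in the exercise may differ). The DataLoader batches the samples, and accelerator.prepare() places the dataloader's batches on whatever device Accelerate detects.

from accelerate import Accelerator
from torch.utils.data import DataLoader

accelerator = Accelerator()

# Wrap the pre-loaded dataset (assumed to be named `dataset`) in a DataLoader
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

# prepare() adapts the dataloader so its batches land on the detected device
dataloader = accelerator.prepare(dataloader)

# On a CPU-only machine this prints "cpu"; with GPUs it reports the assigned device
print(accelerator.device)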