
Logging evaluation metrics

Tracking performance metrics lets you monitor degradation over time and decide when to update your model to maintain a high level of accuracy. You decide to log metrics after your model finishes an evaluation loop.

Some data has been pre-loaded:

  • accelerator is an instance of Accelerator
  • eval_metric is a dictionary of evaluation metrics, such as accuracy and F1
  • num_epochs is the number of epochs
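
For reference, the pre-loaded objects might look roughly like the following sketch; the values are illustrative assumptions, not the exercise's actual data.

from accelerate import Accelerator

# Hypothetical stand-ins for the pre-loaded objects
accelerator = Accelerator()                     # an Accelerator instance
eval_metric = {"accuracy": 0.91, "f1": 0.89}    # example evaluation results (made up)
num_epochs = 3                                  # example epoch count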

This exercise is part of the course Efficient AI Model Training with PyTorch.

Exercise instructions

  • Call a method to log evaluation metrics of the model.
  • Log "accuracy" and "f1" score as evaluation metrics.
  • Track the epoch number using epoch of the training loop.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

from accelerate import Accelerator

accelerator = Accelerator(project_dir=".", log_with="all")
accelerator.init_trackers("my_project")

for epoch in range(num_epochs):
    # Training loop is here
    # Evaluation loop is here
    # Call a method to log metrics
    ____.____({
        # Log accuracy and F1 score as metrics
        "accuracy": ____["accuracy"],
        "f1": ____["f1"],
    # Track the epoch number
    }, ____=____)

accelerator.end_training()
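
One possible completion of the blanks is sketched below; treat it as a sketch rather than the official solution. It assumes eval_metric and num_epochs are pre-loaded as described above, and it uses Accelerator.log(), which takes a dictionary of values plus an optional step keyword argument.

from accelerate import Accelerator

accelerator = Accelerator(project_dir=".", log_with="all")
accelerator.init_trackers("my_project")

for epoch in range(num_epochs):
    # Training loop is here
    # Evaluation loop is here
    # Log the evaluation metrics for this epoch
    accelerator.log({
        "accuracy": eval_metric["accuracy"],
        "f1": eval_metric["f1"],
    }, step=epoch)  # step ties the logged values to the epoch number

accelerator.end_training()

Passing step=epoch lets the configured trackers (for example, TensorBoard or Weights & Biases) plot each metric against the epoch axis; end_training() then cleanly closes all initialized trackers.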