Logging evaluation metrics
Tracking performance metrics lets you monitor degradation and decide when to update your model to maintain a high level of accuracy. In this exercise, you will log metrics after your model finishes an evaluation loop.
Some data has been pre-loaded:
- `accelerator` is an instance of `Accelerator`
- `eval_metric` is a dictionary of metrics like `accuracy` and `f1`
- `num_epochs` is the number of epochs
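To make the preloaded objects concrete, here is a sketch of the shapes they might have; the specific values below are illustrative assumptions, not data from the course environment:

```python
# Illustrative shapes of the preloaded objects (values are made up):
eval_metric = {"accuracy": 0.91, "f1": 0.88}  # metrics from the evaluation loop
num_epochs = 3                                # total number of training epochs
```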
This exercise is part of the course
Efficient AI Model Training with PyTorch
Exercise instructions
- Call a method to log evaluation metrics of the model.
- Log `"accuracy"` and `"f1"` scores as evaluation metrics.
- Track the epoch number using `epoch` of the training loop.
Interactive hands-on exercise
Try to solve this exercise by completing the sample code.
accelerator = Accelerator(project_dir=".", log_with="all")
accelerator.init_trackers("my_project")
for epoch in range(num_epochs):
    # Training loop is here
    # Evaluation loop is here
    # Call a method to log metrics
    ____.____({
        # Log accuracy and F1 score as metrics
        "accuracy": ____["accuracy"],
        "f1": ____["f1"],
        # Track the epoch number
    }, ____=____)
accelerator.end_training()
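For reference, the pattern being practiced is the `accelerator.log(values, step=...)` call from Hugging Face Accelerate, which forwards a dictionary of metrics to every tracker set up by `init_trackers`. The sketch below reproduces that call pattern with a minimal stand-in class so it runs without the library installed; `StubAccelerator` and the hard-coded metric values are illustrative assumptions, not the real Accelerate implementation:

```python
# Minimal stand-in that mimics the accelerator.log(values, step=...) call
# pattern; a hypothetical stub, not the real Hugging Face Accelerate class.
class StubAccelerator:
    def __init__(self):
        self.history = []  # (step, values) pairs recorded by log()

    def log(self, values, step=None):
        # Accelerate forwards `values` to every initialized tracker,
        # tagging the entry with the given step (here, the epoch number).
        self.history.append((step, dict(values)))

    def end_training(self):
        pass  # the real Accelerate flushes and closes trackers here


accelerator = StubAccelerator()
num_epochs = 2
eval_metric = {"accuracy": 0.91, "f1": 0.88}  # placeholder evaluation results

for epoch in range(num_epochs):
    # ...training and evaluation loops would run here...
    accelerator.log(
        {"accuracy": eval_metric["accuracy"], "f1": eval_metric["f1"]},
        step=epoch,  # track the epoch number
    )
accelerator.end_training()

print(accelerator.history[-1])  # → (1, {'accuracy': 0.91, 'f1': 0.88})
```

With the real library, the same loop works unchanged once `accelerator = Accelerator(project_dir=".", log_with="all")` and `accelerator.init_trackers("my_project")` have run, since logging goes through the same `log(values, step=...)` interface.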