Analyzing metrics per class
While aggregated metrics are useful indicators of the model's performance, it is often informative to look at the metrics per class. This could reveal classes for which the model underperforms.
In this exercise, you will run the evaluation loop again to get the cloud classifier's precision, but this time per class. Then, you will map these scores to the class names to interpret them. As usual, Precision has already been imported for you. Good luck!
Exercise instructions
- Define a precision metric appropriate for per-class results.
- Calculate the precision per class by finishing the dict comprehension, iterating over the .items() of the .class_to_idx attribute of dataset_test.
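The averaging mode is what makes the metric per-class. Assuming Precision is the torchmetrics metric (it is pre-imported for this exercise), setting average=None keeps one score per class instead of collapsing them into a single number. A minimal sketch of such a definition, taking the class count from the test dataset rather than hard-coding it:

metric_precision = Precision(
    task="multiclass",
    num_classes=len(dataset_test.class_to_idx),  # one entry per cloud class
    average=None,  # report a separate precision score for each class
)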
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Define precision metric
metric_precision = Precision(
    ____, ____, ____
)

net.eval()
with torch.no_grad():
    for images, labels in dataloader_test:
        outputs = net(images)
        _, preds = torch.max(outputs, 1)
        metric_precision(preds, labels)

precision = metric_precision.compute()

# Get precision per class
precision_per_class = {
    k: ____[____].____
    for k, v in ____
}
print(precision_per_class)
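For reference, here is a possible completed version of the scaffold. It is a sketch under the assumptions that Precision comes from torchmetrics and that net, dataloader_test, and dataset_test are the pre-loaded model, test DataLoader, and test dataset from the exercise environment. With average=None, compute() returns a tensor holding one precision value per class index, so class_to_idx can map each class name to the right entry.

import torch
from torchmetrics import Precision

# Per-class precision: average=None keeps a separate score for each class
metric_precision = Precision(
    task="multiclass",
    num_classes=len(dataset_test.class_to_idx),
    average=None,
)

net.eval()
with torch.no_grad():
    for images, labels in dataloader_test:
        outputs = net(images)
        _, preds = torch.max(outputs, 1)
        metric_precision(preds, labels)

# Tensor of shape (num_classes,): precision for class index i at position i
precision = metric_precision.compute()

# Map each class name to its score via the dataset's class-to-index mapping
precision_per_class = {
    k: precision[v].item()
    for k, v in dataset_test.class_to_idx.items()
}
print(precision_per_class)

The printed dictionary maps class names to values between 0 and 1; noticeably low entries flag the classes the classifier struggles with.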