
Understanding text generation metrics

At PyBooks, the team just evaluated the performance of a pretrained model and obtained a BLEU score of approximately 0.082 and a rouge1_fmeasure of around 0.2692. The ROUGE-1 F-measure combines precision (how many generated tokens appear in the reference) and recall (how many reference tokens appear in the generated text), while BLEU is primarily precision-based. How would you interpret these scores in terms of the model's performance?
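To make the precision/recall intuition behind the ROUGE-1 F-measure concrete, here is a minimal illustrative sketch in pure Python. It computes clipped unigram overlap between a candidate and a reference, then derives precision, recall, and their harmonic mean. This is a simplified teaching example, not the torchmetrics or rouge-score library implementation, and the example sentences are hypothetical.

```python
from collections import Counter

def unigram_scores(candidate, reference):
    """Clipped unigram precision, recall, and F-measure (ROUGE-1 style sketch)."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    # Clip each candidate word's count to its count in the reference
    overlap = sum(min(count, ref[word]) for word, count in cand.items())
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = unigram_scores("the cat sat on the mat", "the cat is on the mat")
print(f"precision={p:.3f} recall={r:.3f} f1={f:.3f}")
```

A low F-measure like 0.2692 means only a modest fraction of unigrams overlap between the model's output and the reference, so both precision and recall of the generated text are fairly weak.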

This exercise is part of the course

Deep Learning for Text with PyTorch
