Understanding text generation metrics
At PyBooks, the team just evaluated a pretrained model and obtained a BLEU score of approximately 0.082 and a rouge1_fmeasure
of around 0.2692. The F-measure combines precision (how many selected items are relevant) and recall (how many relevant items are selected) into a single value. How would you interpret these scores in terms of the model's performance?
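For context, scores like these can be computed with torchmetrics, whose ROUGEScore returns keys such as rouge1_fmeasure. Below is a minimal sketch of that computation; the generated and reference sentences are invented placeholders, not the exercise's actual data.

```python
# Minimal sketch of computing BLEU and ROUGE with torchmetrics.
# The sentences below are hypothetical placeholders, not the
# exercise's data.
from torchmetrics.text import BLEUScore, ROUGEScore

generated = ["the books arrived at the store today"]            # hypothetical model output
references = [["the new books arrived at the bookstore today"]]  # hypothetical reference(s)

# BLEU: n-gram precision of the generated text against the references
bleu = BLEUScore()
print(bleu(generated, references))

# ROUGE-1 F-measure: harmonic mean of unigram precision and recall
rouge = ROUGEScore()
scores = rouge(generated, references)
print(scores["rouge1_fmeasure"])
```

A BLEU near 0.082 means only a small fraction of the model's n-grams match the references, while a rouge1_fmeasure around 0.2692 indicates modest unigram overlap once precision and recall are balanced.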
This exercise is part of the course
Deep Learning for Text with PyTorch
