Understanding text generation metrics
At PyBooks, the team just evaluated a pretrained model and obtained a BLEU score of approximately 0.082 and a rouge1_fmeasure of around 0.2692. BLEU is precision-oriented (how many of the generated n-grams appear in the reference text), while ROUGE is recall-oriented (how many of the reference n-grams appear in the generated text); the F-measure balances the two. How would you interpret these scores in terms of the model's performance?
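For context, here is a minimal sketch of how scores like these can be computed, assuming torchmetrics (and its nltk dependency for ROUGE) is installed; the prediction/reference pair below is a made-up placeholder, not the PyBooks data:

```python
from torchmetrics.text import BLEUScore, ROUGEScore

# Hypothetical prediction/reference pair; not the actual PyBooks data.
preds = ["the novel explores themes of memory and loss"]
refs = ["the book explores the themes of memory and grief"]

# BLEU: n-gram precision of the prediction against its reference(s).
# Each prediction is paired with a list of acceptable references.
bleu = BLEUScore()
print(f"BLEU: {bleu(preds, [refs]):.4f}")

# ROUGE: recall-oriented overlap; returns a dict of scores including
# rouge1_fmeasure, the harmonic mean of unigram precision and recall.
rouge = ROUGEScore()
scores = rouge(preds, refs)
print(f"rouge1_fmeasure: {scores['rouge1_fmeasure']:.4f}")
```

Both metrics range from 0 (no overlap with the reference) to 1 (perfect overlap), which is the scale against which the scores above should be judged.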
This exercise is part of the course Deep Learning for Text with PyTorch.