Understanding text generation metrics
At PyBooks, the team just evaluated the performance of a pretrained model and obtained a BLEU score of approximately 0.082 and a rouge1_fmeasure
of around 0.2692. BLEU is a precision-oriented n-gram overlap metric, while the ROUGE-1 F-measure combines precision (how many generated tokens are relevant, i.e. appear in the reference) and recall (how many reference tokens are captured in the generation). How would you interpret these scores in terms of the model's performance?
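To make the precision/recall intuition concrete, here is a minimal sketch of how a unigram (ROUGE-1 style) F-measure can be computed by hand; the function name and example sentences are illustrative, not part of the exercise:

```python
from collections import Counter

def rouge1_fmeasure(prediction: str, reference: str) -> float:
    """Unigram-overlap F-measure: harmonic mean of precision and recall."""
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    # Clipped overlap: each shared token counts at most as often as it
    # appears in the reference.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)  # selected items that are relevant
    recall = overlap / len(ref_tokens)      # relevant items that are selected
    return 2 * precision * recall / (precision + recall)

score = rouge1_fmeasure("the cat sat on the mat", "the cat lay on the mat")
print(round(score, 4))  # → 0.8333
```

A high F-measure requires both high precision and high recall; a value around 0.27, as in the exercise, means only a modest fraction of unigrams overlap between the generated and reference texts.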
This exercise is part of the course
Deep Learning for Text with PyTorch
