Quiz 2 - Question 1
Imagine an attention mask that is a diagonal matrix (a matrix with ones along the diagonal and zeros everywhere else). If you applied this attention mask to the raw attention weights, which tokens could the model attend to when predicting the next token?
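To make the question concrete, here is a minimal NumPy sketch of how such a mask is typically applied: positions where the mask is zero are set to negative infinity before the softmax, so they receive zero attention weight. The array sizes and variable names are illustrative, not part of the question.

```python
import numpy as np

# Hypothetical 4x4 raw attention scores (one row per query token).
scores = np.random.randn(4, 4)

# Diagonal attention mask: ones on the diagonal, zeros elsewhere.
mask = np.eye(4)

# Masked positions get -inf so the softmax assigns them zero probability.
masked = np.where(mask == 1, scores, -np.inf)

# Row-wise softmax turns the masked scores into attention weights.
weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

print(weights)
```

Inspecting `weights` for this mask shows which tokens each position can attend to, which is exactly what the question asks you to reason about.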
This exercise is part of the course
Google DeepMind: Discover The Transformer Architecture