Quiz 2 - Question 1
Imagine an attention mask that is a diagonal matrix (a matrix with ones along the diagonal and zeros everywhere else). If you applied this attention mask to the raw attention weights, which tokens could the model attend to when predicting the next token?
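Before answering, it can help to see how such a mask acts on the scores. The snippet below is an illustrative sketch only, not part of the quiz: it assumes the common convention of applying a mask by setting blocked positions to negative infinity before the softmax, and all variable names are hypothetical.

```python
import numpy as np

seq_len = 4
rng = np.random.default_rng(0)

# Raw (unnormalized) attention scores for a 4-token sequence:
# scores[i, j] is how strongly token i attends to token j.
scores = rng.normal(size=(seq_len, seq_len))

# A diagonal mask: ones along the diagonal, zeros everywhere else.
mask = np.eye(seq_len)

# Common masking convention: positions where the mask is 0 are set
# to -inf so they receive zero weight after the softmax.
masked_scores = np.where(mask == 1, scores, -np.inf)

# Row-wise softmax turns the masked scores into attention weights.
weights = np.exp(masked_scores - masked_scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

print(weights)  # inspect which positions keep nonzero weight for each token
```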
This exercise is part of the course Google DeepMind: Discover The Transformer Architecture.