Quiz 3 - Question 2
Assume you are training a model on a dataset that was tokenized using a subword tokenizer. The tokenizer has a vocabulary of 7,012 subword tokens and additionally includes four special tokens: a padding token, an unknown token, and beginning- and end-of-sequence tokens. You set the model's embedding size to 300, meaning each token is represented by a 300-dimensional vector.

What is the shape (the dimensions) of the matrix that stores your model's embeddings?
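As a worked check (a minimal sketch in plain Python): the embedding matrix has one row per entry in the full vocabulary, which includes the four special tokens, and one column per embedding dimension.

```python
# Full vocabulary: subword tokens plus the four special tokens
# (padding, unknown, beginning-of-sequence, end-of-sequence).
subword_tokens = 7_012
special_tokens = 4
vocab_size = subword_tokens + special_tokens

embedding_dim = 300

# The embedding matrix stores one embedding-dim vector per vocabulary entry.
shape = (vocab_size, embedding_dim)
print(shape)  # (7016, 300)
```

In a framework such as PyTorch this would correspond to `nn.Embedding(7016, 300)`, whose weight matrix has exactly this shape.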
This exercise is part of the course Google DeepMind: Represent Your Language Data.