Quiz 2 - Question 1
Imagine you train a byte pair encoding (BPE) tokenizer on a mix of English and Amharic texts, so the two languages share a single vocabulary of English and Amharic subword tokens. You apply this tokenizer to the following Amharic sentence:
ስለተዋወቅን ደስ ብሎኛል
The tokenizer splits this sentence into 14 tokens, but splits its English translation, “Nice to meet you”, into only 7 tokens.
Which explanation is most plausible, given how BPE learns merges and builds its subword vocabulary?
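To reason about this, recall that BPE learns merges by pair frequency: substrings that recur often in the training data get merged into longer subword tokens. The sketch below illustrates that effect with the Hugging Face `tokenizers` library; the toy corpus and the trainer settings (`vocab_size`, `min_frequency`) are illustrative assumptions, not details from the quiz.

```python
# A toy demonstration, assuming the Hugging Face `tokenizers` library.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Imbalanced toy corpus: the English phrases repeat, while the Amharic
# sentence appears only once, mimicking an English-heavy training set.
corpus = [
    "Nice to meet you",
    "Nice to meet you too",
    "It is nice to meet new people",
    "ስለተዋወቅን ደስ ብሎኛል",
]

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

# min_frequency=2 means only pairs seen at least twice get merged, so
# the once-seen Amharic words stay as single characters while the
# repeated English words collapse into whole-word tokens.
trainer = trainers.BpeTrainer(
    vocab_size=200, min_frequency=2, special_tokens=["[UNK]"]
)
tokenizer.train_from_iterator(corpus, trainer)

for text in ["Nice to meet you", "ስለተዋወቅን ደስ ብሎኛል"]:
    enc = tokenizer.encode(text)
    print(len(enc.tokens), enc.tokens)
```

With this setup the English sentence encodes to a handful of whole-word tokens while the Amharic sentence falls back to individual characters, the same asymmetry the question describes.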
This exercise is part of the course Google DeepMind: Represent Your Language Data.