Embedding in PyTorch
PyBooks found success with a book recommendation system. However, it doesn't account for the semantics found in the text. PyTorch's built-in embedding layer can learn and represent relationships between words directly from data. Your team is curious to explore this capability to improve the book recommendation system. Can you help implement it?
torch and torch.nn as nn have been imported for you.
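As a quick orientation before the exercise: nn.Embedding is essentially a trainable lookup table, where each integer index maps to a learnable vector that gets updated during training. Below is a minimal sketch with made-up sizes (3 indices, 4 dimensions) and illustrative variable names (emb, idx), not the exercise solution itself.

import torch
import torch.nn as nn

# A trainable lookup table: 3 possible indices, each mapped to a 4-dimensional vector
emb = nn.Embedding(num_embeddings=3, embedding_dim=4)

# Look up the vectors for indices 0 and 2
idx = torch.LongTensor([0, 2])
print(emb(idx).shape)  # torch.Size([2, 4])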
This exercise is part of the course
Deep Learning for Text with PyTorch
Exercise instructions
- Map a unique index to each word in words, saving to word_to_idx.
- Convert the word indices from word_to_idx into a PyTorch tensor and save to inputs.
- Initialize an embedding layer using torch.nn with ten dimensions.
- Pass the inputs tensor to the embedding layer and review the output.
Interactive exercise
Complete the sample code to successfully finish this exercise.
# Map a unique index to each word
words = ["This", "book", "was", "fantastic", "I", "really", "love", "science", "fiction", "but", "the", "protagonist", "was", "rude", "sometimes"]
word_to_idx = {word: i for i, word in enumerate(words)}
# Convert word_to_idx to a tensor
inputs = torch.LongTensor([word_to_idx[w] for w in words])
# Initialize embedding layer with ten dimensions
embedding = nn.Embedding(num_embeddings=len(words), embedding_dim=10)
# Pass the tensor to the embedding layer
output = embedding(inputs)
print(output)
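For reference, with the fifteen tokens above and embedding_dim=10, the output is a tensor of shape (15, 10): one ten-dimensional vector per token, which you can confirm with a quick shape check.

print(output.shape)  # torch.Size([15, 10])

One detail worth noticing: "was" appears twice in words, so word_to_idx holds only fourteen unique keys, with "was" mapped to the index of its last occurrence. Because enumerate never produces an index larger than len(words) - 1, setting num_embeddings=len(words) still keeps every lookup in range.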