
Embedding in PyTorch

PyBooks found success with a book recommendation system. However, the system doesn't account for much of the semantics in the text. PyTorch's built-in embedding layer can learn and represent relationships between words directly from data. Your team is curious to explore this capability to improve the book recommendation system. Can you help implement it?
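Before the exercise, here is a minimal sketch of what nn.Embedding provides, using a made-up three-word vocabulary (not part of the exercise): it is a trainable lookup table whose rows are ordinary parameters, so they are updated by backpropagation like any other layer's weights.

import torch
import torch.nn as nn

# Toy vocabulary for illustration only
vocab = {"book": 0, "fantastic": 1, "rude": 2}

# One trainable 4-dimensional vector per word
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=4)

# Indexing with a tensor of word indices returns the matching rows
idx = torch.tensor([vocab["book"], vocab["fantastic"]])
vectors = embedding(idx)  # shape: (2, 4)

# The rows are parameters, so gradients flow back into them
vectors.sum().backward()
print(embedding.weight.grad.shape)  # torch.Size([3, 4])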

torch and torch.nn as nn have been imported for you.

This exercise is part of the course Deep Learning for Text with PyTorch.

Exercise instructions

  • Map a unique index to each word in words, saving to word_to_idx.
  • Look up the index of each word in word_to_idx, convert the result to a PyTorch tensor, and save it to inputs.
  • Initialize an embedding layer using the nn module with ten embedding dimensions.
  • Pass the inputs tensor to the embedding layer and review the output.

Hands-on interactive exercise

Finish this exercise by completing the sample code below.

# Map a unique index to each word
words = ["This", "book", "was", "fantastic", "I", "really", "love", "science", "fiction", "but", "the", "protagonist", "was", "rude", "sometimes"]
word_to_idx = {word: i for i, word in enumerate(words)}

# Convert the word indices to a tensor
inputs = torch.tensor([word_to_idx[w] for w in words])

# Initialize embedding layer with ten dimensions
embedding = nn.Embedding(num_embeddings=len(words), embedding_dim=10)

# Pass the tensor to the embedding layer
output = embedding(inputs)
print(output)
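As a quick sanity check (added here, not part of the original exercise), the output stacks one 10-dimensional vector per input index. Because "was" appears twice in words, both positions share a single embedding row; word_to_idx therefore has only 14 unique entries, while num_embeddings=len(words) still covers the largest index.

print(output.shape)  # torch.Size([15, 10])

# "was" occupies positions 2 and 12 in words, so both map to the same row
print(torch.equal(output[2], output[12]))  # True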