
Shakespearean language preprocessing pipeline

Over at PyBooks, the team wants to transform a vast library of Shakespearean text data for further analysis. The most efficient way to do this is with a text processing pipeline, starting with the preprocessing steps.

The following have been loaded for you: torch, nltk, stopwords, PorterStemmer, get_tokenizer.

The Shakespearean text data is saved as shakespeare and the sentences have already been extracted.

This exercise is part of the course Deep Learning for Text with PyTorch.

Hands-on interactive exercise

Complete the sample code below to finish this exercise.

# Create a set of English stopwords
stop_words = set(stopwords.words("english"))
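Once the stopwords are defined, the rest of the preprocessing pipeline would typically tokenize each sentence, filter out the stopwords, and stem what remains. Below is a minimal sketch, not the official solution: the "basic_english" tokenizer choice and the preprocess_sentences helper name are assumptions for illustration, the imports are included only to make the sketch self-contained (the exercise loads these modules for you), and the shakespeare list here is a tiny stand-in for the real data.

# A minimal sketch of the remaining preprocessing steps (assumed, not the
# exercise's official solution): tokenize, remove stopwords, then stem.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from torchtext.data.utils import get_tokenizer

nltk.download("stopwords")

stop_words = set(stopwords.words("english"))
tokenizer = get_tokenizer("basic_english")  # assumed tokenizer choice
stemmer = PorterStemmer()

def preprocess_sentences(sentences):  # hypothetical helper name
    processed = []
    for sentence in sentences:
        tokens = tokenizer(sentence)                          # tokenize the sentence
        tokens = [t for t in tokens if t not in stop_words]   # drop stopwords
        tokens = [stemmer.stem(t) for t in tokens]            # stem remaining tokens
        processed.append(tokens)
    return processed

# Example usage with a tiny stand-in for the shakespeare sentences
shakespeare = ["To be, or not to be, that is the question."]
print(preprocess_sentences(shakespeare))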