Running a spaCy pipeline
You've already run a spaCy NLP pipeline on a single piece of text and extracted tokens from a given list of Doc containers. In this exercise, you'll practice the initial steps of running a spaCy pipeline on texts, a list of text strings.
You will use the en_core_web_sm model for this purpose. The spaCy package has already been imported for you.
This exercise is part of the course
Natural Language Processing with spaCy
Exercise instructions
- Load the en_core_web_sm model as nlp.
- Run the nlp() model on each item of texts, and append each corresponding Doc container to a documents list.
- Print the token texts for each Doc container of the documents list.
Interactive exercise
Complete the sample code to finish this exercise successfully.
# Load en_core_web_sm model as nlp
nlp = spacy.____(____)
# Run an nlp model on each item of texts and append the Doc container to documents
documents = []
for text in ____:
    documents.append(____)
# Print the token texts for each Doc container
for doc in documents:
print([____ for ____ in ____])