Running a spaCy pipeline
You've already run a spaCy NLP pipeline on a single piece of text and extracted tokens from a given list of Doc containers. In this exercise, you'll practice the initial steps of running a spaCy pipeline on `texts`, a list of text strings.
You will use the `en_core_web_sm` model for this purpose. The `spacy` package has already been imported for you.
This exercise is part of the course Natural Language Processing with spaCy.
Exercise instructions
- Load the `en_core_web_sm` model as `nlp`.
- Run `nlp()` on each item of `texts`, and append each resulting `Doc` container to a `documents` list.
- Print the token texts for each `Doc` container in the `documents` list.
Hands-on interactive exercise
Try this exercise by completing the sample code.
# Load en_core_web_sm model as nlp
nlp = spacy.____(____)
# Run an nlp model on each item of texts and append the Doc container to documents
documents = []
for text in ____:
    documents.append(____)
# Print the token texts for each Doc container
for doc in documents:
    print([____ for ____ in ____])
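For reference, a completed version of the scaffold might look like the sketch below. The `texts` variable here is a stand-in (the exercise provides its own), and `spacy.blank("en")` is used as a lightweight substitute for `spacy.load("en_core_web_sm")` so the snippet runs without downloading the model; in the actual exercise you would load `en_core_web_sm` as instructed.

```python
import spacy

# Stand-in for the exercise's `texts` variable (an assumption)
texts = ["A spaCy pipeline processes text.", "It returns Doc containers."]

# In the exercise: nlp = spacy.load("en_core_web_sm")
# A blank English pipeline is used here so the sketch runs without the model
nlp = spacy.blank("en")

# Run the pipeline on each item of texts and collect the Doc containers
documents = []
for text in texts:
    documents.append(nlp(text))

# Print the token texts for each Doc container
for doc in documents:
    print([token.text for token in doc])
```

Calling `nlp(text)` returns a `Doc` container, and iterating over a `Doc` yields `Token` objects whose `.text` attribute holds the original token string.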