Classifying a restaurant's customer review
Let's practice loading an LLM from the Hugging Face Hub into a pipeline to perform text classification on customer restaurant reviews.
The model_name variable has already been instantiated for you with the name of a BERT-based LLM particularly suited to classifying reviews on a 1-to-5 star rating scale.
Exercise instructions
- Import the necessary function from the transformers library to load Hugging Face LLMs as pipelines.
- Load the model with model_name within the pipeline, specifying a suitable task for text classification.
- Pass the customer review defined in review to the pipeline to get a sentiment prediction.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Import the function for loading Hugging Face pipelines
from transformers import ____
review = "The food was good, but service at the restaurant was a bit slow"
# Load the pipeline for text classification
classifier = ____(____, model=model_name)
# Pass the customer review to the model for prediction
prediction = ____(____)
print(prediction)
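For reference, here is a minimal sketch of one possible completed version. The model name below is only an assumed example of a 1-to-5 star review classifier; in the exercise, model_name is already defined for you, so you would not set it yourself.

# Sketch of a completed solution (assumed model name; model_name is pre-set in the exercise)
from transformers import pipeline

model_name = "nlptown/bert-base-multilingual-uncased-sentiment"  # assumed example value

review = "The food was good, but service at the restaurant was a bit slow"

# Load a text-classification pipeline using the pre-instantiated model name
classifier = pipeline("text-classification", model=model_name)

# Pass the customer review to the model for prediction
prediction = classifier(review)
print(prediction)

Running this prints a list with one dictionary per input, something like [{'label': '4 stars', 'score': ...}]; the exact label and score depend on the model and the review.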
This exercise is part of the course
Introduction to LLMs in Python
Learn the nuts and bolts of LLMs and the revolutionary transformer architecture they are based on!
Large Language Models (LLMs) represent the current pinnacle of AI technology, driving remarkable advancements in Natural Language Processing and Understanding. This chapter serves as your gateway to comprehending LLMs: what they are, their remarkable capabilities, and the wide array of language tasks they excel at. You'll gain practical experience in loading and harnessing various LLMs for both language understanding and generation tasks. Along the way, you'll be introduced to the successful catalyst at the heart of most LLMs: the transformer architecture. Ready to start this journey into the world of LLMs?