
Using AutoClasses

You’ve seen how tokenizers work and explored their role in preparing text for models. Now, let’s take it a step further by combining the AutoModel and AutoTokenizer classes with the pipeline() function, which offers a nice balance of control and convenience.

Continue with the sentiment analysis task and combine AutoClasses with the pipeline() function.

AutoModelForSequenceClassification, AutoTokenizer and pipeline from the transformers library have already been imported for you.
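
For reference, the corresponding import statement looks like this (a minimal sketch of what the exercise environment already provides for you):

from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline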

This exercise is part of the course Working with Hugging Face.

Exercise instructions

  • Download the model and tokenizer and save as my_model and my_tokenizer, respectively.
  • Create the pipeline and save as my_pipeline.
  • Predict the sentiment using my_pipeline and save the result as output.

Hands-on interactive exercise

Complete the sample code below to finish this exercise.

# Download the model and tokenizer
my_model = ____.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
my_tokenizer = ____.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")

# Create the pipeline
my_pipeline = pipeline(task="sentiment-analysis", ____=____, ____=____)

# Predict the sentiment
output = ____("This course is pretty good, I guess.")
print(f"Sentiment using AutoClasses: {output[0]['label']}")
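
For comparison, here is one possible completion of the starter code above, assuming the same checkpoint and variable names used in the exercise. The pipeline() function accepts the downloaded objects through its model and tokenizer parameters:

from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Download the model and tokenizer from the same checkpoint
my_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)
my_tokenizer = AutoTokenizer.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)

# Create the pipeline, passing the model and tokenizer explicitly
my_pipeline = pipeline(task="sentiment-analysis", model=my_model, tokenizer=my_tokenizer)

# Predict the sentiment; the result is a list of dicts with 'label' and 'score' keys
output = my_pipeline("This course is pretty good, I guess.")
print(f"Sentiment using AutoClasses: {output[0]['label']}")

Passing the model and tokenizer objects explicitly, rather than just a checkpoint name, lets you inspect or customize them before wrapping them in a pipeline.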