
Evaluating the BERT model

Having tokenized the sample reviews with BERT's tokenizer, it's now time to evaluate the BERT model on the PyBooks samples. You will also evaluate the model's sentiment prediction on new data.

The following have been imported for you: BertTokenizer, BertForSequenceClassification, torch. The trained model instance is also preloaded. We will now test it on a new data sample.
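For reference, a minimal sketch of how such a tokenizer and model could be set up, assuming the "bert-base-uncased" checkpoint and a binary sentiment head (in the exercise itself, the trained instances are already preloaded, so this is illustrative only):

from transformers import BertTokenizer, BertForSequenceClassification

# Illustrative setup; the exercise preloads equivalents of these objects.
# "bert-base-uncased" and num_labels=2 are assumptions, not stated in the exercise.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()  # switch to evaluation mode before making predictions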

This exercise is part of the course

Deep Learning for Text with PyTorch

Exercise instructions

  • Prepare the evaluation text for the model by tokenizing it and returning PyTorch tensors.
  • Convert the output logits to probabilities between zero and one.
  • Display the predicted sentiment derived from the probabilities.

Hands-on interactive exercise

Try this exercise by completing the sample code.

text = "I had an awesome day!"

# Tokenize the text and return PyTorch tensors
input_eval = tokenizer(____, return_tensors=____, truncation=True, padding=True, max_length=32)
outputs_eval = model(**input_eval)

# Convert the output logits to probabilities
predictions = torch.nn.functional.____(outputs_eval.____, dim=-1)

# Display the sentiments
predicted_label = ____ if torch.____(predictions) > 0 else ____
print(f"Text: {text}\nSentiment: {predicted_label}")