Training and testing the Transformer model
With the TransformerEncoder model in place, the next step at PyBooks is to train the model on sample reviews and evaluate its performance. Training on these sample reviews will help PyBooks understand the sentiment trends in its vast repository. With a well-performing model, PyBooks can automate sentiment analysis, ensuring readers get insightful recommendations and feedback.
The following packages have been imported for you: torch, nn, and optim.
The model instance of the TransformerEncoder class, token_embeddings, and the train_sentences, train_labels, test_sentences, and test_labels variables are preloaded for you.
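The training loop below also references criterion and optimizer, which are not listed among the preloaded objects. A minimal setup sketch, assuming cross-entropy loss and the Adam optimizer with a typical learning rate (both assumptions, not confirmed by the exercise):

criterion = nn.CrossEntropyLoss()                       # assumed loss for 2-class sentiment
optimizer = optim.Adam(model.parameters(), lr=0.001)    # assumed optimizer and learning rate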
This exercise is part of the course Deep Learning for Text with PyTorch.
Exercise instructions
- In the training loop, split the sentences into tokens and stack the embeddings (see the shape sketch after this list).
- Zero the gradients and perform a backward pass.
- In the predict function, deactivate the gradient computations, then get the sentiment prediction.
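To make the first step concrete: assuming token_embeddings maps each token string to a tensor of shape (1, 512), as the fallback in predict suggests, stacking along dim=1 yields a batch of shape (1, seq_len, 512). A small sketch with hypothetical tokens assumed to be present in token_embeddings:

tokens = "great read".split()                       # ["great", "read"]
embeds = [token_embeddings[t] for t in tokens]      # two tensors, each (1, 512)
data = torch.stack(embeds, dim=1)                   # shape: (1, 2, 512)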
Hands-on interactive exercise
Try this exercise by working through the completed sample code below.
for epoch in range(5):
    for sentence, label in zip(train_sentences, train_labels):
        # Split the sentence into tokens and stack the embeddings
        tokens = sentence.split()
        data = torch.stack([token_embeddings[token] for token in tokens], dim=1)
        output = model(data)
        loss = criterion(output, torch.tensor([label]))
        # Zero the gradients and perform a backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"Epoch {epoch}, Loss: {loss.item()}")
def predict(sentence):
    model.eval()
    # Deactivate gradient computations and get the sentiment prediction
    with torch.no_grad():
        tokens = sentence.split()
        # Fall back to a random embedding for tokens unseen during training
        data = torch.stack([token_embeddings.get(token, torch.rand((1, 512))) for token in tokens], dim=1)
        output = model(data)
        predicted = torch.argmax(output, dim=1)
        return "Positive" if predicted.item() == 1 else "Negative"
sample_sentence = "This product can be better"
print(f"'{sample_sentence}' is {predict(sample_sentence)}")