Loading and inspecting pre-trained models
You're building a conversational AI assistant that can engage in human-like dialog across a wide range of topics, leveraging the powerful BERT model that has been pre-trained on a large corpus of text data.
You'll print the configuration to verify that you've loaded a conversational AI model with parameters like model_type: bert, num_attention_heads: 12, and num_hidden_layers: 12.
This exercise is part of the course
Efficient AI Model Training with PyTorch
Exercise instructions
- Initialize the model with the appropriate AutoModel class to load the bert-base-uncased model.
- Print the model's configuration.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
from transformers import AutoModelForSequenceClassification
# Load a pre-trained bert-base-uncased model
model = ____.____("bert-base-uncased")
# Print the model's configuration
print(model.____)
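One way to fill in the blanks, assuming the model is loaded with the from_pretrained() method of the AutoModelForSequenceClassification class already imported in the scaffold:

```python
from transformers import AutoModelForSequenceClassification

# Load a pre-trained bert-base-uncased model (downloads weights on first use)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Print the model's configuration; the output includes model_type: bert,
# num_attention_heads: 12, and num_hidden_layers: 12
print(model.config)
```

Printing model.config displays a BertConfig object, whose fields let you confirm the architecture matches the expected bert-base-uncased setup.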