
Image-text to text with ViLT

Time to have a go at multi-modal generation, starting with Visual Question Answering (VQA). You will use the dandelin/vilt-b32-finetuned-vqa model to determine the color of the traffic light in the following image:

[Image: a traffic light showing red]

The preprocessor (processor), model (model), and image (image) have been loaded for you.
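
For context, here is a minimal sketch of how these objects might have been loaded behind the scenes; the filename traffic_light.jpg is a hypothetical stand-in for whatever image file the exercise environment actually uses.

from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Load the preprocessor and the VQA-finetuned ViLT model from the Hugging Face Hub
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Load the traffic light image (hypothetical filename)
image = Image.open("traffic_light.jpg")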

This exercise is part of the course Multi-Modal Models with Hugging Face.

Exercise instructions

  • Preprocess the text prompt and image.
  • Run the model on the encoding to get the answer logits, and assign the output to outputs.
  • Find the ID of the answer with the highest confidence using the output logits.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

text = "What color is the traffic light?"

# Preprocess the text prompt and image
encoding = ____(____, ____, return_tensors="pt")

# Run the model to get the answer logits
outputs = ____

# Find the ID of the answer with the highest confidence
idx = outputs.logits.____
print("Predicted answer:", model.config.id2label[idx])