Streaming Semantic Events
You're building a weather assistant that provides real-time forecasts. The OpenAI client has been initialized and configured to work with the Responses API. You'll stream semantic events to track when the response starts, when text blocks finish, and when the full response is complete. This creates a more engaging user experience by showing progress as the model generates the forecast.
Exercise instructions
- Handle the `"response.created"` event by printing a start message.
- Handle the `"response.output_text.done"` event by printing a completion message.
- Handle the `"response.completed"` event by printing the full response text stored in `current_text`.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
prompt = "Explain how to read a weather forecast in one sentence for a beginner hiker."
with client.responses.create(model="gpt-5-mini", input=prompt, stream=True) as stream:
for event in stream:
# Find response created events
if event.type == "____":
print("Forecast generation started...\n")
# Find output text completed events
elif event.type == "____":
print("\n--- Forecast complete ---\n")
# Find response completed events
elif event.type == "____":
print(f"\nFull forecast:\n{current_text}")