
Streaming with Function Calls

You've already seen how many use cases are unlocked by function-calling LLMs. When streaming, function calls emit their own event types, which is useful for providing real-time feedback to users when the model is preparing to call a tool, or for logging to track tool usage.

The convert_timezone() function you defined earlier to convert datetimes between timezones has been provided for you, along with a tools list containing its function definition for the Responses API.
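For reference, the pre-defined objects might look something like the sketch below. The exact signature of convert_timezone() and the wording of the tool schema are assumptions; the versions defined for you in the exercise may differ.

def convert_timezone(datetime_str, from_timezone, to_timezone):
    """Convert an ISO 8601 datetime string between IANA timezones (illustrative helper)."""
    from datetime import datetime
    from zoneinfo import ZoneInfo
    dt = datetime.fromisoformat(datetime_str).replace(tzinfo=ZoneInfo(from_timezone))
    return dt.astimezone(ZoneInfo(to_timezone)).isoformat()

# A matching Responses API function tool definition (descriptions are illustrative)
tools = [
    {
        "type": "function",
        "name": "convert_timezone",
        "description": "Convert a datetime from one IANA timezone to another.",
        "parameters": {
            "type": "object",
            "properties": {
                "datetime_str": {"type": "string", "description": "Datetime in ISO 8601 format, e.g. 2025-01-20T14:30"},
                "from_timezone": {"type": "string", "description": "Source IANA timezone, e.g. America/New_York"},
                "to_timezone": {"type": "string", "description": "Target IANA timezone, e.g. Asia/Tokyo"},
            },
            "required": ["datetime_str", "from_timezone", "to_timezone"],
        },
    }
]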

This exercise is part of the course Working with the OpenAI Responses API.

Exercise instructions

  • Complete the streaming context manager by calling client.responses.create() with the model "gpt-5-mini", the prompt, and the tools list.
  • Inside the loop, check for "function_call_arguments.delta" events.
  • Add a condition to check for "function_call_arguments.done" events.
  • Add a final condition to check if the event type is "response.completed" and print a final completion message.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

prompt = "What time is 2:30pm on January 20th in New York in Tokyo time?"

# Open the streaming connection and enable tool-calling
with ____ as stream:
    for event in stream:
        # Filter for function call arguments delta events
        if ____:
            print(f"\nTool args streaming: {event.delta}")
        # Filter for function call arguments complete events
        elif ____:
            print("Tool call args complete.")
        # Filter for response completed events
        elif ____:
            print("\n--- Completed ---")
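If you get stuck, here is a minimal sketch of one possible solution. It assumes the client has already been created as client = OpenAI(), that stream=True is passed to enable streaming (as in earlier exercises), and that the full event type strings carry the "response." prefix used by the current SDK; the exercise's checker may differ in these details.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = "What time is 2:30pm on January 20th in New York in Tokyo time?"

# Open the streaming connection and enable tool-calling
with client.responses.create(
    model="gpt-5-mini",
    input=prompt,
    tools=tools,
    stream=True,
) as stream:
    for event in stream:
        # Filter for function call arguments delta events
        if event.type == "response.function_call_arguments.delta":
            print(f"\nTool args streaming: {event.delta}")
        # Filter for function call arguments complete events
        elif event.type == "response.function_call_arguments.done":
            print("Tool call args complete.")
        # Filter for response completed events
        elif event.type == "response.completed":
            print("\n--- Completed ---")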