
Conversation Chaining

1. Conversation Chaining

Let's take everything we've learned so far to create a system where we can have an actual back-and-forth conversation with the model!

2. Solving the Conversation Puzzle

For conversations, we really need two things: a mechanism that allows us to control the flow of the conversation, so that prompt follows response continually;

3. Solving the Conversation Puzzle

and we need a way to store and retrieve previous prompts and responses, so the model has access to the conversation history. Let's build this step-by-step!

4. The Conversation History

The Responses API makes storing and retrieving conversation histories incredibly simple. Each response, like this one, has .id and .output_text attributes. These IDs can actually be used for much more than just traceability. We can set up another request and use the previous_response_id parameter to provide the first response's ID, which loads the context from that conversation. With this context provided, we can see that the model is able to access and respond to prompts that require the context from the first request. You can think of these IDs like branches in a conversation, allowing us to pick up a conversation at any point.
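
Here's a minimal sketch of that pattern using the OpenAI Python SDK; the model name and example prompts are placeholders rather than the ones shown on the slide:

```python
from openai import OpenAI

client = OpenAI()

# First request: the response object carries .id and .output_text
response1 = client.responses.create(
    model="gpt-4o-mini",
    input="My favorite planet is Neptune. What makes it unusual?"
)
print(response1.id)
print(response1.output_text)

# Second request: pass the first response's ID so the model
# can see the earlier prompt and reply
response2 = client.responses.create(
    model="gpt-4o-mini",
    input="Which planet did I say was my favorite?",
    previous_response_id=response1.id
)
print(response2.output_text)
```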

5. The Conversation History

And this can keep on going. Here, we can create a third request asking for a summary of the conversation so far, passing it the ID from response2. This functionality essentially provides a conversation history right out of the box! Now let's talk about the second piece: a mechanism to control the flow of the conversation.
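
Continuing the same sketch, a third chained request might look like this:

```python
# Third request: chain from response2 to ask for a recap of the conversation
response3 = client.responses.create(
    model="gpt-4o-mini",
    input="Summarize our conversation so far in one sentence.",
    previous_response_id=response2.id
)
print(response3.output_text)
```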

6. Crafting a Conversation

To start our conversation, we'll write a system prompt to provide to the instructions parameter of our request. Here, we're creating a system specifically designed for personalized learning. We also initially set a latest_response_id variable to None, as there is no conversation history for the first request; we'll update this variable at the end of each iteration with the latest response ID. In the next step, we create a while loop that runs until the user types "exit", at which point the loop breaks and the conversation ends. Notice that we're also using the input() function, which will open up a text box for the user to type into in many environments. Then we make the request, passing our system prompt, the user_input, and the response ID variable to load any conversation history. Finally, we print the model response and extract the response ID for the next iteration. Time to give this a go!
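
Putting the pieces together, here is a sketch of that loop; the model name and the exact wording of the system prompt are illustrative, not taken from the video:

```python
from openai import OpenAI

client = OpenAI()

# System prompt defining the assistant's role
system_prompt = (
    "You are a personalized learning assistant. "
    "Adapt your explanations to the user's questions and keep answers focused."
)

# No history exists yet, so there is no previous response ID
latest_response_id = None

while True:
    user_input = input("You: ")
    if user_input == "exit":
        break

    response = client.responses.create(
        model="gpt-4o-mini",
        instructions=system_prompt,
        input=user_input,
        previous_response_id=latest_response_id
    )

    print("Assistant:", response.output_text)

    # Store the ID so the next iteration loads this conversation's history
    latest_response_id = response.id
```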

7. Crafting a Conversation

Let's ask: explain why honey doesn't expire, in space terms. The model begins to break it down, drawing parallels between honey and a planet's conditions. This is OK, but let's ask for a

8. Crafting a Conversation

shorter, more concise answer that uses emojis. This requires context of the conversation so far, as we haven't given it the text to edit. That's pretty good!

9. Let's practice!

Time for you to get hands-on with creating conversations in the exercises!
