1. Multi-turn chat completions with GPT
Great work so far! In this video, we'll begin to unleash the full potential of chat models by creating multi-turn conversations.
2. Chat completions for single-turn tasks
Recall that messages are sent to the Chat Completions endpoint as a list of dictionaries, where each dictionary assigns content to a specific role: system, user, or assistant.
For single-turn tasks, no content is sent to the assistant role - the model relies only on its existing knowledge, the behavior set in the system role, and the instruction from the user.
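As a sketch, a single-turn request might look like the following. The tutor prompt, question, and model name are illustrative placeholders, and the openai Python library (v1+) is assumed.

```python
# Single-turn request: content goes only to the system and user roles.
# The prompt and question here are illustrative placeholders.
messages = [
    {"role": "system", "content": "You are a helpful data science tutor."},
    {"role": "user", "content": "What is overfitting?"},
]

# With the openai library (v1+), this list would be sent as:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   response = client.chat.completions.create(model="gpt-4o-mini",
#                                             messages=messages)
#   print(response.choices[0].message.content)
```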
3. Providing examples
Providing assistant messages can be a simple way for developers to steer the model in the right direction without having to surface anything to end-users.
For a data science tutor application, we could provide a few examples of data science questions and answers that would be sent to the API along with the user's question.
Let's improve our data science tutor by providing an example.
4. Providing examples
Between the system message and the user's question, we'll add a user and assistant message to serve as an ideal example for the model.
The model now not only has its pre-existing understanding, but also an ideal example to guide its response.
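A minimal sketch of this layout, with placeholder text: a user/assistant example pair sits between the system message and the end-user's actual question.

```python
# Few-shot steering: an example question and ideal answer guide the
# model's style and depth. All text here is an illustrative placeholder.
messages = [
    {"role": "system", "content": "You are a helpful data science tutor."},
    # Example question and ideal answer for the model to imitate
    {"role": "user", "content": "What is the mean of 1, 2, and 3?"},
    {"role": "assistant", "content": "The mean is 2: (1 + 2 + 3) / 3 = 2."},
    # The end-user's actual question comes last
    {"role": "user", "content": "What is the median of 1, 2, 3, and 100?"},
]
```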
5. The response
With an example to work with, the assistant provides a response in line with the example.
6. Storing responses
Another common use for providing assistant messages is to store responses. Storing responses means that we can create a conversation history, which we can feed back into the model to hold multi-turn conversations. This is exactly what happens under the hood of AI chatbots like ChatGPT!
7. Building a conversation
To code a conversation, we'll need to create a system so that when a user message is sent and an assistant response is generated,
8. Building a conversation
they are fed back into the messages
9. Building a conversation
and stored to be sent with the next user message.
10. Building a conversation
Then, when a new user message is provided,
11. Building a conversation
the model has the context from the conversation history to draw from.
This means that if we introduce ourselves in the first user message, then ask the model what our name is in the second, it should return the correct answer, as it has access to the conversation history.
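The name example can be sketched as a message history like the one below; the names and replies are hypothetical placeholders.

```python
# Hypothetical two-turn history illustrating context carry-over.
# The stored assistant turn lets the model answer the follow-up.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi, my name is Ada."},
    {"role": "assistant", "content": "Nice to meet you, Ada! How can I help?"},
    {"role": "user", "content": "What is my name?"},
]
# Sent together, the model can answer "Ada" because the first
# introduction is still present in the conversation history.
```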
Let's start to code this in Python.
12. Coding a conversation
We start by defining a system message to set the assistant's behavior - you can also add user-assistant example messages here if you wish.
Then we define a list of questions. Here we ask why Python is popular, and then ask the model to summarize its previous response in one sentence, which requires context from that previous response.
Because we want a response for each question, we start by looping over the user_qs list.
Next, to convert the user questions into messages for the API, we create a dictionary
and add it to the list of messages using the list append method.
We can now send the messages to the Chat Completions endpoint and store the response.
We extract the assistant's message by subsetting the API response, convert it to a dictionary so it matches the messages format, then add it to the messages list for the next iteration.
Finally, we'll add two print statements so the output is a conversation between the user and assistant written as a script.
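Putting those steps together, the loop might look like the sketch below. The questions and model name are placeholders, and the API call is shown in comments so the sketch runs without an API key; in practice the commented lines would replace the placeholder response.

```python
# Sketch of the conversation loop described above.
messages = [
    {"role": "system", "content": "You are a helpful data science tutor."}
]
user_qs = [
    "Why is Python so popular for data science?",
    "Summarize this in one sentence.",
]

for q in user_qs:
    print("User:", q)

    # Convert the question into a message and append it to the history
    user_dict = {"role": "user", "content": q}
    messages.append(user_dict)

    # With the openai library (v1+), this is where the request is sent:
    #   response = client.chat.completions.create(model="gpt-4o-mini",
    #                                             messages=messages)
    #   assistant_dict = {"role": "assistant",
    #                     "content": response.choices[0].message.content}
    assistant_dict = {"role": "assistant", "content": "(model response here)"}

    # Store the response so the next iteration has the full history
    messages.append(assistant_dict)
    print("Assistant:", assistant_dict["content"], "\n")
```

Because every turn is appended before the next request, the second question ("Summarize this in one sentence.") arrives with the first answer already in the history.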
13. Conversation with an AI
We can see that we were successfully able to ask a follow-up question about the model's response without having to repeat our original question or the model's answer.
14. Let's practice!
You've learned about the key functionality underpinning many AI-powered chatbots and assistants - time to begin creating your own!