
The Responses API: a Developer's Best Friend

1. The Responses API: a Developer's Best Friend

Hi, and welcome to this course on the OpenAI Responses API!

2. OpenAI and the Start of the Generative AI Race

Since the launch of GPT-3, OpenAI has established itself as a leader in building and hosting proprietary AI technologies. Many of the AI products and features that you've used recently may be using the OpenAI API to send prompts to OpenAI's large language models, or LLMs.

3. OpenAI's API Evolution

Over this time, the API interface has changed drastically,

4. OpenAI's API Evolution

starting with the Completions endpoint that really made OpenAI LLMs widely available for the first time;

5. OpenAI's API Evolution

to the Chat Completions endpoint, which was designed with LLM-powered chatbots in mind.

6. OpenAI's API Evolution

Since then, there has been the rise of AI agents and agentic systems, which allow LLM applications to "choose" to perform actions to retrieve external information or trigger events through the use of tools. AI agents require a lot of additional functionality beyond standard LLM calls, so OpenAI first released the Assistants API, and finally

7. OpenAI's API Evolution

the Responses API, which is a simplified interface for both standard chat and tool-based functionality.

8. Responses API x Developers

In a nutshell, the Responses API is designed to make it simpler than ever to develop AI applications, so the Responses API is a developer's best friend! Let's make our first Responses API request!

9. Our First Responses API Request

First, to set up your environment for communication with the OpenAI API, you must instantiate an OpenAI client. This is where you would specify your API key if you were working independently, ideally using an environment variable. In this course, you don't need to create or enter your own API key: everything is set up for you to begin prompting! To create a request to the Responses API, we call client.responses.create(), specifying the model we'd like to prompt. Let's talk about prompting! In the Responses API, prompts are split across two parameters: instructions and input. The way to think about these is that the instructions should set clear requirements on how the model should behave, whereas the input is the task or question at hand. The instructions are given higher priority by the model than the input, so if the instructions require the output to be given in French, but the input says to answer in English, the model should, in most cases, still respond in French. Finally, the model we've chosen has reasoning capabilities, which allow it to think more deeply about the question; as a consequence, it often takes longer to respond and increases cost. Our question is relatively simple, so we'll set the reasoning effort to "minimal" and max_output_tokens to 60, so we should get a concise output fairly quickly. We'll talk more about these and other parameters in detail in a later lesson. Let's take a look at the output.
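The steps above can be sketched in Python. This is a minimal sketch, not the course's exact code: the model name "gpt-5-mini" and the example prompt are assumptions (any reasoning-capable model your account can access would work), and the client reads an OPENAI_API_KEY environment variable. If the openai package or the key is missing, the sketch simply prints the parameters it would have sent.

```python
# Build the request parameters first, then send them to the Responses API.
request_params = {
    "model": "gpt-5-mini",                    # assumed model name
    "instructions": "Always respond in French.",  # how the model should behave
    "input": "What is the capital of France?",    # the task at hand
    "reasoning": {"effort": "minimal"},       # simple question: minimal reasoning
    "max_output_tokens": 60,                  # keep the reply concise
}

try:
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.responses.create(**request_params)
    print(response.output_text)
except Exception as err:
    # No package or API key available: show what would have been sent.
    print(f"Request not sent ({err}); parameters were: {request_params}")
```

Separating the parameters into a dictionary like this is optional; passing them directly as keyword arguments to client.responses.create() works just as well.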

10. The Responses Output

There's a lot of information in the response from the API. Let's separate this out.

11. The Responses Output

Firstly, the key thing we want is the model's response to our task, which can be extracted from the .output_text attribute. The response also contains useful metadata, like the number of tokens used in the output and the ID of the response. Response IDs, as we'll see a little later, can be used to bookmark particular points in a conversation without having to reload entire message histories. Finally, we can extract the output in structured form through the .output attribute. Each object here is referred to as an "item", and extracting information from individual items can be useful for error handling and writing custom logic. We'll use this more in Chapter 2, where we'll incorporate tool use, like web search, into our requests.
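The access pattern described above can be sketched as follows. To keep it runnable without an API call, this sketch uses a stand-in object with the same attribute names as a real Responses API result; the ID, text, and token values are hypothetical placeholders.

```python
from types import SimpleNamespace

# Stand-in mimicking the shape of a Responses API result, so the
# attribute-access pattern below runs without a real API call.
response = SimpleNamespace(
    id="resp_abc123",  # hypothetical response ID
    output_text="Paris est la capitale de la France.",
    usage=SimpleNamespace(output_tokens=12),
    output=[SimpleNamespace(type="message")],
)

print(response.output_text)          # the model's reply as plain text
print(response.id)                   # ID for bookmarking this point in the conversation
print(response.usage.output_tokens)  # tokens used in the output

for item in response.output:         # structured "items"
    print(item.type)
```

With a real response object from client.responses.create(), these same attribute accesses apply unchanged.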

12. Let's practice!

Time to put this into practice!
