
Making requests to DeepSeek models

1. Making requests to DeepSeek models

Let's learn how to make requests to DeepSeek models!

2. Recap

As we've seen, we'll be accessing and using DeepSeek models via Together AI's API: we send a request to the API and receive a response.

3. Creating a request

There are several ways to create requests to DeepSeek models, but in this course, we'll use OpenAI's Python library. OpenAI, like DeepSeek, develops AI models and applications, including its GPT series of models and the ChatGPT application. DeepSeek's API is fully compatible with this library, which is nice because we can quickly switch between model providers with minimal code changes. We start by importing the OpenAI class from openai, which we'll use to instantiate an API client. The client configures the environment for communicating with the API. Inside, we set the base_url parameter to divert the request from the default OpenAI API to our DeepSeek model provider, and provide an API key to authenticate the request. In this course, these requests will always be sent to Together AI, and the api_key and base_url parameters will be populated for you. If you want to use DeepSeek's API directly in personal projects, create an API key through the documentation linked and set the base_url to "https://api.deepseek.com".

4. Creating a request

Now for the request code. We'll create a request to the chat completions API endpoint. Endpoints are like access points for different API functionality. This endpoint is used to send a series of messages representing a conversation to a model, and we access it by calling the .create() method on client.chat.completions. Inside this method, we specify the model and the messages to send. The messages argument takes a list of dictionaries; content sent with the user role is how we prompt the model. Here, we prompt the model to explain the concept of hallucination in the context of AI. Let's take a look at the API response.
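A minimal sketch of the request described above. The model name "deepseek-chat" and the DEEPSEEK_API_KEY environment variable are assumptions for illustration; the API call itself only runs when a real key is available:

```python
import os

# The conversation: a list of role/content dictionaries.
# Content under the "user" role is our prompt to the model.
messages = [
    {"role": "user",
     "content": "Explain the concept of hallucination in the context of AI."},
]

def request_completion(api_key, base_url="https://api.deepseek.com"):
    """Send `messages` to the chat completions endpoint and return the reply text."""
    from openai import OpenAI
    client = OpenAI(api_key=api_key, base_url=base_url)
    response = client.chat.completions.create(
        model="deepseek-chat",  # assumed model name for DeepSeek's direct API
        messages=messages,
    )
    return response.choices[0].message.content

# Only send the request when a real key is present in the environment.
if os.environ.get("DEEPSEEK_API_KEY"):
    print(request_completion(os.environ["DEEPSEEK_API_KEY"]))
```

Adding more dictionaries to the messages list (for example, a prior assistant reply) is how a multi-turn conversation is represented.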

5. The response

There's a lot here, so we'll add

6. The response

additional spacing to improve readability. The response from the API is a ChatCompletion object, which has attributes for accessing different information. It has an .id attribute, .choices, .created, .model, and other attributes below. We can see that the response message is located under the .choices attribute,

7. Interpreting the response

so we'll start by accessing it. Attributes are accessed using a dot, then the name of the attribute. We've gotten much closer to the text. Notice from the square brackets at the beginning and end that this is actually a list with a single element.

8. Interpreting the response

Let's extract the first element to dig deeper. Ok - we're left with a Choice object, which has its own set of attributes. The message is located underneath the .message attribute,

9. Interpreting the response

which we can chain to our existing code. Almost there! Finally, we need to access the ChatCompletionMessage's .content attribute. There we have it - our model response as a string! We started off with a complex response object, but by taking it one attribute at a time, we were able to get to the result.
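The attribute chain we just walked through can be demonstrated with a stand-in object that mimics the shape of the ChatCompletion response (the values here are invented for illustration):

```python
from types import SimpleNamespace

# A stand-in with the same nested structure as a ChatCompletion response.
response = SimpleNamespace(
    id="chatcmpl-123",
    model="deepseek-chat",
    choices=[
        SimpleNamespace(
            message=SimpleNamespace(
                role="assistant",
                content="Hallucination is when a model generates plausible but false output.",
            )
        )
    ],
)

# Step 1: .choices is a list with a single element.
first_choice = response.choices[0]
# Step 2: the Choice object holds the message under .message.
message = first_choice.message
# Step 3: the text itself lives under .content.
print(message.content)

# The same result, chained in one expression:
text = response.choices[0].message.content
```

Chaining the whole path as response.choices[0].message.content is the idiomatic one-liner you'll see in most example code.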

10. API usage costs

It's important to note that using models via APIs often incurs a cost, which is dependent on the provider, model requested, and the size of the model input and output. But in this course, we've configured the exercises in such a way that it's free to make requests to DeepSeek models!

11. Let's practice!

Time to make your own requests!