Setting token limits
An e-commerce platform has just hired you to improve the performance of their customer service bot, which is built using the OpenAI API. You've decided to start by ensuring that input messages don't cause any rate limit issues, setting a limit of 100 tokens and testing it with a sample input message.
The tiktoken library has been preloaded.
This exercise is part of the course Developing AI Systems with the OpenAI API.
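If you haven't worked with tiktoken before, here is a minimal sketch of how token counting works. The sample string is purely illustrative, and the model lookup assumes a recent tiktoken release that recognises gpt-4o-mini:

import tiktoken

# Look up the encoding used by a given model; recent tiktoken releases
# resolve gpt-4o-mini to the o200k_base encoding
encoding = tiktoken.encoding_for_model("gpt-4o-mini")

# encode() returns the list of token IDs, so its length is the token count
tokens = encoding.encode("I'd like to buy a shirt and a jacket.")
print(len(tokens))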
Exercise instructions
- Use the tiktoken library to create an encoding for the gpt-4o-mini model.
- Check for the expected number of tokens in the input message.
- Print the response if the message is within the token limit.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
client = OpenAI(api_key="")
input_message = {"role": "user", "content": "I'd like to buy a shirt and a jacket. Can you suggest two color pairings for these items?"}
# Use tiktoken to create the encoding for your model
encoding = tiktoken.____(____)
# Check for the number of tokens
num_tokens = ____
# Run the chat completions function and print the response
if num_tokens <= ____:
    response = client.chat.completions.create(model="gpt-4o-mini", messages=[input_message])
    print(____)
else:
    print("Message exceeds token limit")