1. Generating a customer response
Welcome back to the case study. Last time, we transcribed and cleaned up a customer's audio message, so we can now generate a professional customer response.
2. Reminder
As a reminder, we stored the refined translated text in a variable called corrected_text.
3. Case study plan
In this video we'll tackle the following steps:
First, we'll moderate the customer message to catch anything inappropriate.
Then, we'll generate a response powered by internal resources.
And finally, we'll run moderation again - this time on the AI's reply - to make sure it meets our standards before sending it out.
Let's dive into moderation.
4. Customer message moderation
To moderate the customer question, we make a request to the OpenAI moderation endpoint, specifying the model and passing our text as input.
The response includes a set of category scores, which indicate the model's confidence that the message contains harmful content such as hate or violence.
We access these scores using the .category_scores attribute and then convert them into a dictionary using the .model_dump() method.
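The steps just described can be sketched as follows. The API call itself is shown in comments (it needs an OpenAI client and API key); the sample `scores` dictionary below is illustrative, standing in for what `.model_dump()` returns:

```python
# Sketch of the moderation request (assumes an OpenAI client and the
# corrected_text variable from earlier in the case study; the model
# name is illustrative):
#
# from openai import OpenAI
# client = OpenAI()
# response = client.moderations.create(
#     model="omni-moderation-latest",
#     input=corrected_text,
# )
# scores = response.results[0].category_scores.model_dump()

# A sample of what the resulting dictionary looks like:
scores = {"hate": 0.0001, "violence": 0.0003, "self-harm": 0.0002}
print(scores["violence"])
```

Once the scores are a plain dictionary, individual categories can be looked up by name.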
5. Customer message moderation
This makes it easier to work with specific categories.
To keep things simple, we'll focus on just one category: violence.
6. Customer message moderation
We extract the violence score from the scores dictionary, and check whether it's above 0.7. If it is, we flag the message.
Categories and thresholds should be adjusted depending on the use case.
In our case, the message passed moderation - so we can move on to generating a response.
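As a minimal sketch, the threshold check might look like this; `is_flagged` is a hypothetical helper name, and the 0.7 threshold matches the one used in the video:

```python
# Hypothetical helper: flag a message when a moderation category's
# score exceeds a chosen threshold.
def is_flagged(scores: dict, category: str = "violence", threshold: float = 0.7) -> bool:
    """Return True if the given category's score is above the threshold."""
    return scores.get(category, 0.0) > threshold

scores = {"hate": 0.0001, "violence": 0.0003}
print(is_flagged(scores))  # False: the message passes moderation
```

Swapping the `category` or `threshold` arguments lets you tune the check to your own use case.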
7. Generating a response
To do that, we'll give our chatbot the context it needs to answer like a real support assistant.
We have two documents to help the model respond: an FAQ file with common customer questions and answers, and a content overview listing current tracks, courses, and projects - complete with descriptions and links.
8. Generating a response
We create a system prompt that defines the assistant's role, includes
9. Generating a response
both resources, and sets expectations
10. Generating a response
for how to respond.
We ask for clear, concise replies. And if the model doesn't know the answer, it should politely direct the customer to the support email.
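One way to assemble such a prompt is sketched below; the `faq` and `content_overview` strings stand in for the two documents, and the support email is deliberately left as a placeholder:

```python
# Illustrative system prompt assembly. The faq and content_overview
# strings are stand-ins for the real documents.
faq = "Q: How do I reset my password?\nA: Use the 'Forgot password' link."
content_overview = "Track: Data Analyst | Course: Intro to SQL | <link>"

system_prompt = f"""You are a customer support assistant.
Answer clearly and concisely using the resources below.
If you don't know the answer, politely direct the customer to our support email.

FAQ:
{faq}

Content overview:
{content_overview}"""
```

Keeping the resources in the system prompt means every customer question is answered with the same grounding material.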
With our prompt ready,
11. Generating a response
we send a request to the chat completions endpoint.
The system message contains our instructions, and the user message is the customer's request.
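The request can be sketched like this; `system_prompt` and `corrected_text` are placeholders for the values built earlier in the case study, the model name is illustrative, and the API call itself is shown in comments since it needs a client and key:

```python
# Placeholders for values built earlier in the case study:
system_prompt = "You are a customer support assistant. ..."
corrected_text = "Which course should I take to learn SQL?"

# System message carries the instructions; user message carries the request.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": corrected_text},
]

# With an OpenAI client, the call would look like:
# response = client.chat.completions.create(
#     model="gpt-4o-mini",  # illustrative model name
#     messages=messages,
# )
# reply = response.choices[0].message.content
```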
12. Generating a response
The model returns a helpful, well-written response that even includes course suggestions with direct links to the content.
13. Response moderation
Before we send it out, we still have one more check to run.
We want to make absolutely sure the AI's reply is safe - so we moderate that, too.
We pass our reply as input to the model, and extract the scores.
14. Response moderation
This time, we check every category, not just violence. If any score is above 0.7, we flag it and replace the response with a fallback message asking the customer to contact support for further assistance.
In this example, the AI response passed moderation - so we're good to go.
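This final check can be sketched as a small helper; `moderate_reply` and the fallback wording are hypothetical, but the logic mirrors the narration: if any category score exceeds 0.7, the reply is replaced with a fallback message:

```python
# Hypothetical fallback message for flagged replies.
FALLBACK = ("We're unable to answer this automatically. "
            "Please contact our support team for further assistance.")

def moderate_reply(reply: str, scores: dict, threshold: float = 0.7) -> str:
    """Return the reply unchanged, or the fallback if any score is too high."""
    if any(score > threshold for score in scores.values()):
        return FALLBACK
    return reply

scores = {"hate": 0.0001, "violence": 0.0002, "harassment": 0.0001}
print(moderate_reply("Here are some SQL courses you might like...", scores))
```

Because every category is checked here, a reply can be blocked for any kind of harmful content, not just violence.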
15. Recap
And that wraps up this part of the case study.
We moderated the incoming message, generated a context-aware response, and validated the AI's output before sending it.
16. Let's practice!
Now it's your turn. Let's practice!