1. Assigning chat roles
So far, we've explored how to generate responses with Llama and tune their style and length using parameters.
2. Defining roles
But what if we wanted to refine its tone even more, and maybe even assign it a specific personality? For example, we might be building a customer support chatbot that serves customers with different expectations. Some users may prefer a friendly, conversational assistant, while others expect clear, professional answers.
5. Using roles in chat completion
We can use chat roles to cater to these different needs.
With chat roles, we can guide Llama's responses by defining two key roles in our prompts.
The system role sets the assistant's personality and style.
The user role represents the person asking the question.
These roles can be set as part of our Llama call when using create chat completion. Let's see how it works.
6. Using roles in chat completion
Rather than sending just a single prompt as we've done so far, we now need a way to send a structured conversation.
The create chat completion function lets us send a list of messages to Llama 3, where each message carries details such as its role, system or user, and its content. This is done by passing the list of messages to the 'messages' argument of create chat completion.
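As a minimal sketch, assuming we've loaded a Llama 3 model with the llama-cpp-python package (the model path and message contents below are placeholders), the call takes this shape:

from llama_cpp import Llama

# Load a local Llama 3 model; the path is a placeholder
llm = Llama(model_path="path/to/llama-3-model.gguf")

# A conversation is a list of dictionaries,
# each with a 'role' key and a 'content' key
messages = [
    {"role": "system", "content": "..."},
    {"role": "user", "content": "..."}
]

# The whole conversation is passed to the 'messages' argument
response = llm.create_chat_completion(messages=messages)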
7. The system role
The system role gives instructions to the model about how it should behave throughout the conversation.
For example, let's say we want Llama to act as a business consultant. We define the system role by starting a new dictionary within the messages list. Under the 'role' key, we specify 'system', and under the 'content' key, we ask the model to behave as a consultant.
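As an illustration (the exact wording of the instruction is just an example), the system message could look like this:

# The system message sets the assistant's behavior for the whole conversation
messages = [
    {"role": "system",
     "content": "You are a business consultant who gives clear, professional advice."}
]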
8. The user role
Next, we need to add a user message, which represents the actual question being asked.
We append another dictionary, with 'user' under 'role', and our question about business strategy under 'content'.
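Continuing the sketch, with an illustrative question about business strategy:

# Append the user's question as a second message
messages.append(
    {"role": "user",
     "content": "What should a small business focus on in its first-year strategy?"}
)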
9. Generating the response
Now that we have our structured conversation, we can send it to Llama using create chat completion. The response is returned in a format similar to the one we get when passing a simple prompt.
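Assuming the llm object and messages list from the earlier sketch, generating the response looks like this:

# Send the structured conversation to the model
response = llm.create_chat_completion(messages=messages)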
10. The assistant role
We can access the response as the first element of 'choices'. However, when using the messages list, we need to access the response through the 'message' key. This contains a dictionary specifying 'assistant' under the 'role' key, as the response is generated by the assistant, and the response text under the 'content' key. So, if we want to access only the output text, we select the 'message' and 'content' keys after accessing the first element of 'choices'.
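A short sketch of that access pattern, assuming the response object from the previous call:

# The first choice holds a 'message' dictionary of the form
# {"role": "assistant", "content": "...generated reply..."}
reply = response["choices"][0]["message"]

# Select 'content' to get only the output text
print(reply["content"])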
14. Let's practice!
Let's practice assigning roles.