Ensuring safe responses
You're configuring an internal chatbot for a medical team. To ensure consistent responses, you need to limit variability by setting a token limit and restricting token selection.
You have been provided the Llama class instance in the llm variable, along with the code to call the completion. You are also given a sample prompt to test with.
This exercise is part of the course Working with Llama 3
Exercise instructions
- Set the model parameters so that the response is limited to a maximum of ten tokens and the model only ever chooses between the two most likely tokens at each completion step.
Hands-on interactive exercise
Try this exercise by completing the sample code below.
output = llm(
    "What are the symptoms of strep throat?",
    # Set the model parameters
    max_tokens=____,  # Limit response length
    top_k=____  # Restrict word choices
)
print(output['choices'][0]['text'])
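
For reference, the completed call could look like the sketch below, assuming the llama-cpp-python library (the model path is a placeholder; in the exercise itself, llm is already provided). Here max_tokens=10 caps the completion at ten tokens, and top_k=2 restricts sampling to the two most likely tokens at each step.

from llama_cpp import Llama

# Load the model (placeholder path; the exercise supplies llm for you)
llm = Llama(model_path="path/to/model.gguf")

output = llm(
    "What are the symptoms of strep throat?",
    max_tokens=10,  # Limit the response to at most ten tokens
    top_k=2  # Sample only from the two most likely tokens per step
)
print(output['choices'][0]['text'])

A low top_k narrows the candidate pool at each step, making outputs more predictable; combined with a small max_tokens, this keeps responses short and consistent, which suits a medical chatbot.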