
Ensuring safe responses

You're configuring an internal chatbot for a medical team. To ensure consistent responses, you need to limit variability by setting a token limit and restricting token selection.

You have been provided with a Llama class instance in the llm variable, along with the code to call the completion. You are also given a sample prompt to test with.

This exercise is part of the course

Working with Llama 3

Exercise instructions

  • Set the model parameters so that the response is limited to a maximum of ten tokens, and the model only ever chooses between the two most likely tokens at each completion step.
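To see what restricting token selection does under the hood, here is a minimal sketch of top-k sampling, using made-up candidate scores (not real Llama 3 logits): only the k highest-scoring tokens are kept, and their probabilities are renormalized before sampling.

```python
import math

# Hypothetical candidate tokens with made-up scores (logits).
logits = {"fever": 2.0, "sore": 1.5, "banana": -1.0, "the": 0.5}

def top_k_filter(logits, k):
    # Keep the k most likely tokens and renormalize via softmax
    # so their probabilities sum to 1.
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(math.exp(score) for _, score in top)
    return {token: math.exp(score) / total for token, score in top}

# With k=2, only the two most likely tokens remain eligible.
probs = top_k_filter(logits, k=2)
print(sorted(probs))
```

With top_k=2, every completion step is forced to pick between just the two most probable tokens, which cuts down on unlikely or off-topic word choices.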

Interactive exercise

Complete the sample code to finish this exercise.

output = llm(
    "What are the symptoms of strep throat?",
    # Set the model parameters
    max_tokens=____,  # Limit response length
    top_k=____  # Restrict word choices
)

print(output['choices'][0]['text'])
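For reference, the call completed per the instructions above would pass max_tokens=10 and top_k=2. The sketch below uses a hypothetical stand-in for the llama-cpp-python Llama instance (the real llm would load a model, which is not shown here) that returns a response dict in the same shape:

```python
# Hypothetical stand-in for the Llama instance `llm`; it only echoes
# the parameters back in a llama-cpp-python-style response dict.
def llm(prompt, max_tokens=None, top_k=None):
    return {"choices": [{"text": f"(max_tokens={max_tokens}, top_k={top_k})"}]}

# Completed call per the instructions: cap the response at ten tokens
# and sample only from the two most likely tokens at each step.
output = llm(
    "What are the symptoms of strep throat?",
    max_tokens=10,  # Limit response length
    top_k=2  # Restrict word choices
)

print(output['choices'][0]['text'])
```

A low max_tokens keeps answers short and predictable, and a small top_k reduces variability between runs, both useful properties for a medical chatbot.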