1. Shot prompting
We've seen how DeepSeek models generate text and respond to prompts. But how do we guide the model to produce better, more accurate responses?
2. Providing examples
Providing examples in a prompt helps the model understand what we expect.
This technique, called shot prompting, plays a huge role in shaping AI responses.
Let's break it down.
3. What is shot prompting?
Shot prompting means including examples in a prompt to guide the model's response.
There are three main approaches:
Zero-shot prompting gives no examples, just an instruction.
One-shot prompting provides a single example to guide the response.
Few-shot prompting includes multiple examples to provide more context and improve the model's understanding of the desired output.
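The three approaches above can be sketched as simple prompt templates. The reviews and sentiment labels here are invented for illustration, not from a specific dataset:

```python
# Three ways to prompt a model for a classification task.
# The example reviews and labels below are made up for illustration.

# Zero-shot: just the instruction, no examples.
zero_shot = (
    "Classify the sentiment of this review as positive or negative: "
    "'The soup was cold.'"
)

# One-shot: a single worked example before the real input.
one_shot = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: 'Great service!' -> positive\n"
    "Review: 'The soup was cold.' ->"
)

# Few-shot: several worked examples to give the model more context.
few_shot = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: 'Great service!' -> positive\n"
    "Review: 'I waited an hour for my food.' -> negative\n"
    "Review: 'Lovely atmosphere and friendly staff.' -> positive\n"
    "Review: 'The soup was cold.' ->"
)

print(zero_shot)
print(one_shot)
print(few_shot)
```

The only difference between the three is how many worked examples precede the input we actually want classified.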
4. Why does shot prompting matter?
Adding examples improves AI performance.
Shot prompting helps with classification (sorting text into defined categories), sentiment analysis (identifying opinions or emotions), data extraction (pulling specific information from unstructured text), and much more.
Let's see it in action.
5. Zero-shot prompting
Let's use a model to analyze restaurant reviews and assign each one a sentiment score from 1 to 5: 1 indicating a poor experience and 5 indicating an amazing experience.
Here, we provide no examples for the model, only an instruction clearly indicating the task, and the labels to use - the numbers 1 to 5.
Sending this request and extracting the result, we can see that the model assigned a number to each review, but the formatting isn't easy to read. It would be better if each review and its score appeared alongside one another.
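A zero-shot version of this prompt might look like the following. The reviews are invented stand-ins, and the client setup is a sketch of the OpenAI-compatible SDK that DeepSeek's API supports, left commented out so the snippet runs without an API key:

```python
# Zero-shot: an instruction and the labels to use, but no worked examples.
# The reviews below are invented for illustration.
reviews = [
    "The pasta was bland and the service was slow.",
    "Absolutely wonderful evening, we'll be back!",
]

prompt = (
    "Rate the sentiment of each restaurant review on a scale from 1 to 5, "
    "where 1 is a poor experience and 5 is an amazing experience.\n\n"
    + "\n".join(reviews)
)

# Sending the prompt with an OpenAI-compatible client (assumed setup):
# from openai import OpenAI
# client = OpenAI(api_key="<your-key>", base_url="https://api.deepseek.com")
# response = client.chat.completions.create(
#     model="deepseek-chat",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(response.choices[0].message.content)

print(prompt)
```

With no examples, the model decides the output format on its own, which is why the response can end up hard to read.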
6. One-shot prompting
Let's now add an example before the two reviews we wish to analyze. This is a one-shot prompt, as one example is provided. We also add arrows after each review and the number 1 after the example. This should make it clearer for the model that we're only looking for the number.
Running this,
we see that the formatting is much better. Providing an example also gave the model extra context about how to determine the final score.
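Extending the sketch to one-shot, we prepend a single worked example and add arrows after each review to signal that only a number should follow. The example review and its score of 1 are invented for illustration:

```python
# One-shot: one worked example, plus arrows to pin down the output format.
# The example review and its score are invented for illustration.
example = "The waiter forgot our order and the food was cold. -> 1"

reviews = [
    "The pasta was bland and the service was slow.",
    "Absolutely wonderful evening, we'll be back!",
]

prompt = (
    "Rate the sentiment of each restaurant review on a scale from 1 to 5, "
    "where 1 is a poor experience and 5 is an amazing experience.\n\n"
    + example + "\n"
    + "\n".join(f"{review} ->" for review in reviews)
)

print(prompt)
```

The trailing arrows leave an obvious slot for the model to fill, so the response tends to come back as one score per review.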
7. Few-shot prompting
Let's step it up a final time and add two more examples, so there are three in total. We now have a few-shot prompt to send to the model.
This time we didn't see any changes: the formatting stayed the way we asked for, and the scores remained consistent. Try experimenting with four, five, or ten examples to see if the scores change!
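The few-shot version of the sketch simply grows the list of worked examples; choosing examples that span the score range (here a 1, a 3, and a 5, all invented for illustration) gives the model a sense of the whole scale:

```python
# Few-shot: three worked examples covering the range of possible scores.
# All example reviews and scores here are invented for illustration.
examples = [
    "The waiter forgot our order and the food was cold. -> 1",
    "Decent food, but nothing special. -> 3",
    "Best meal I've had all year, incredible staff! -> 5",
]

reviews = [
    "The pasta was bland and the service was slow.",
    "Absolutely wonderful evening, we'll be back!",
]

prompt = (
    "Rate the sentiment of each restaurant review on a scale from 1 to 5, "
    "where 1 is a poor experience and 5 is an amazing experience.\n\n"
    + "\n".join(examples) + "\n"
    + "\n".join(f"{review} ->" for review in reviews)
)

print(prompt)
```

Adding more examples follows the same pattern: append to the `examples` list and the prompt grows with it.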
Hopefully you're starting to grasp just how many problems can be solved using DeepSeek models, and how important prompts are to obtaining good results.
8. Let's practice!
Time to put this into practice!