1. Deep analysis and code generation
So far, we've explored a few prompting techniques and use cases for chat models; how do these translate to reasoning models?
2. Recap...
Recall that reasoning models output a series of thinking tokens, denoted by <think> tags, and a final response. In these thinking tokens, the model breaks complex tasks, like data analysis or writing code, into structured sub-tasks; it then completes each sub-task, updates its understanding, and either continues to the next step or iterates.
Making a request to a reasoning model only requires updating the model argument to a DeepSeek reasoning model; note, however, that the temperature parameter isn't supported for reasoning models.
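As a minimal sketch, assuming the OpenAI-compatible client setup from earlier in the course and an API key stored in api_key, the request looks like this:

```python
from openai import OpenAI

# Point the OpenAI client at DeepSeek's API.
client = OpenAI(api_key=api_key, base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-reasoner",  # the reasoning model; temperature isn't supported
    messages=[{"role": "user", "content": "Summarize the trade-offs of microservices."}]
)

print(response.choices[0].message.reasoning_content)  # the thinking tokens
print(response.choices[0].message.content)            # the final response
```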
Interestingly, despite the similarities in syntax, the prompting best practices we've discussed for chat models require some modifications for reasoning models.
3. 1. Keep it simple
Firstly, keep prompts simple. Writing concise prompts is generally good practice, but for reasoning models, even providing examples can diminish performance.
The few-shot prompts we wrote for chat models can actually confuse reasoning models, which are designed to create their own reasoning steps; providing too much guidance can interfere with this process.
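To illustrate the difference, here's a brief sketch; the sentiment task and review text are hypothetical, not from the course:

```python
# Few-shot style that suits chat models but can confuse reasoning models:
few_shot_prompt = """Classify the sentiment of the review as Positive or Negative.
Review: The battery lasts all day! // Positive
Review: It broke within a week. // Negative
Review: The screen is stunning. //"""

# Preferred for reasoning models: state the task directly, with no examples.
simple_prompt = (
    "Classify the sentiment of this review as Positive or Negative: "
    "The screen is stunning."
)
```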
4. 2. Encourage reasoning
Second, explicitly encouraging the model to reason can enhance this capability.
Adding something like this to the end of the prompt encourages the reasoning process. However, the additional reasoning will likely increase token usage and time-to-response, so consider the needs of the use case before adding it.
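As a rough sketch of the idea (the exact phrase shown on the slide isn't reproduced here, so this wording is an assumption):

```python
prompt = "Identify trends in this quarterly sales data and explain what drives them."

# Hypothetical nudge appended to encourage explicit reasoning (assumed wording).
prompt += " Take your time and reason through each step before giving your answer."
```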
5. 3. Stay away from simple tasks!
The last point is more around when to use reasoning models than how to use them; put bluntly, stay away from simple tasks!
Consider simple tasks the domain of the chat model. Reasoning models, when presented with simple tasks requiring a single or very few steps, can overthink and get lost during the thinking process. They may still get to the correct answer eventually, but at much greater cost and time taken than a chat model.
This is analogous to humans: being asked a really simple question can sometimes cause us to doubt ourselves and hesitate before responding.
6. 3. Stay away from simple tasks!
If we ask DeepSeek's reasoning model what 1+1 is, here's what we get. The model starts with a logical approach, then visualizes the sum with apples, then counts on fingers, and so on, eventually arriving at the correct answer.
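Reproducing this with the client from earlier (a sketch; the actual output will vary from run to run):

```python
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "What is 1 + 1?"}]
)

print(response.choices[0].message.reasoning_content)  # a long, winding thinking trace
print(response.choices[0].message.content)            # the final answer: 2
```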
7. 3. Stay away from simple tasks!
This is what the chat model responded with.
As reasoning models progress, they may improve their ability to break off from this thinking process when it quickly converges on an answer, but for now, stick to chat models for these simple tasks.
Let's apply these techniques to prompt a model to solve a multi-step coding problem.
8. Example: Code debugging
Code debugging can be tricky. It's an iterative process of thinking, executing code, and updating our knowledge based on the code error or output. Does this sound familiar? This mirrors how reasoning models think,
9. Example: Code debugging
so they are perfectly suited to this type of task.
10. Example: Code debugging
Let's present the model with a task to fix some code. Notice that we use delimiters here to separate the task and inputs, which is also a good prompting practice. Let's see how the model responds.
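Here's a sketch of what such a prompt might look like; the buggy function is hypothetical, and the course's actual code will differ:

```python
# Hypothetical buggy function to be fixed.
buggy_code = """
def average(values):
    total = 0
    for v in values:
        total = v          # bug: overwrites instead of accumulating
    return total / len(values)
"""

# Triple backticks delimit the code from the task instructions.
prompt = f"Fix the bug in this Python function and explain the fix:\n```{buggy_code}```"

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": prompt}]
)
print(response.choices[0].message.content)
```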
11. Example: Code debugging
Here's the abbreviated output. We can see that the model interrogates each line of code to identify what's wrong, what needs to be added, and where. Reasoning models can save you hours in code debugging alone.
12. Let's practice!
Let's get reasoning!