
Advanced prompt engineering for coding assistance

1. Advanced prompt engineering for coding assistance

Hi! In this video, we'll explore advanced prompt engineering techniques.

2. Chain-of-thought

Model performance depends not only on what the AI model knows but also on how it's prompted. Asking the model to complete a complex task, like writing a recursive function, in a single step often fails.

3. Chain-of-thought

Guiding the model through intermediate steps works better. That’s Chain-of-Thought prompting: helping models solve complex tasks by giving them the time and steps needed to generate the correct answer.

4. Chain-of-thought

There are two approaches to Chain-of-Thought. The easiest one is to append "Let's think step by step" to the end of the prompt. This simple sentence is surprisingly powerful because it encourages the model to break the task down into intermediate steps. The idea was first proposed by researchers from the University of Tokyo, together with Google Research, in the paper "Large Language Models are Zero-Shot Reasoners" (Kojima et al., 2022).
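For instance, contrast a direct prompt with its Chain-of-Thought variant (the task wording here is illustrative):

Direct prompt:
Write a Python function that checks if a string is a palindrome.

Chain-of-Thought prompt:
Write a Python function that checks if a string is a palindrome. Let's think step by step.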

5. Chain-of-thought

In this example from the original paper, the model fails to give the correct answer with a direct prompt, shown on the left. In the example on the right, it succeeds in solving the same problem when given a Chain-of-Thought prompt.

6. Chain-of-thought

The other approach is to provide the intermediate steps we expect the model to follow before reaching the final answer. For example, instead of simply writing "Write a Python function that checks if a string is a palindrome", the Chain-of-Thought version outlines the process as a series of steps, as in the sketch below. This second method is more powerful because the model no longer has to figure out the steps itself; they are provided to it.
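One way such a step-by-step prompt might read (the exact steps are illustrative, not taken from the slide):

Write a Python function that checks if a string is a palindrome.
Step 1: Normalize the input by lowercasing it and removing non-alphanumeric characters.
Step 2: Compare the normalized string with its reverse.
Step 3: Return True if they match and False otherwise.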

7. Reasoning models

The current trend in AI is to embed reasoning into the models themselves, so explicit Chain-of-Thought prompting is often unnecessary for these reasoning models. For example, when given the basic prompt "Write a Python function that checks if a string is a palindrome," a reasoning model automatically breaks the task down into steps. Reasoning models can also self-verify intermediate steps and reflect on their own reasoning. In general, they tend to produce more accurate responses for coding tasks.
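For reference, here is the kind of function such a prompt might yield; this is a minimal sketch, not a model's verbatim output:

def is_palindrome(text):
    # Keep only alphanumeric characters, lowercased, so punctuation and case are ignored
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    # A palindrome reads the same forwards and backwards
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True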

8. System roles

There is one more ingredient that can really make a difference in the model's output: a system role definition. A system role definition is a prompt that comes before the main prompt and sets expectations for the model's role and behavior. Role definitions have been shown to consistently improve output quality and accuracy. There are no strict rules for writing them, since they depend on the task. Let's look at some examples!

9. System roles

Imagine we are stuck understanding the logic behind the palindrome function and want to use an AI model as a coding tutor. A possible system role definition could be: "You are a friendly programming tutor. Explain each concept in simple terms, using analogies when helpful. Highlight common mistakes to avoid."

Now imagine we are using the AI model to debug a codebase with a complex function. In this case, we would need a different system prompt, such as: "You are a senior software engineer helping to debug code. First, identify what the code is trying to do. Then, analyze where and why it might be failing."
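In an API setting, the system role definition is typically passed as a separate message ahead of the user prompt. Here is a minimal sketch using the OpenAI Python client; this choice of provider is an assumption, since the video does not name one, the model name is illustrative, and an API key is assumed to be configured in the environment:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # The system role definition comes first and sets the model's behavior
        {"role": "system",
         "content": "You are a friendly programming tutor. Explain each concept "
                    "in simple terms, using analogies when helpful. "
                    "Highlight common mistakes to avoid."},
        # The main prompt follows as the user message
        {"role": "user",
         "content": "Explain the logic behind the palindrome function."},
    ],
)
print(response.choices[0].message.content)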

10. Let's practice!

We've explored three powerful concepts for better prompting: Chain-of-Thought prompting, reasoning models, and system role definitions. Now, let's put these concepts into practice!
