
Adding context for smarter AI responses

1. Adding context for smarter AI responses

Hi there! In this video, we will explore how to provide enough context in a prompt to ensure the model has all the information it needs to complete the task effectively.

2. Prompt refinement

But before we dive in, let’s take a moment to talk about prompt refinement. Prompt crafting is an iterative process in which we start with an initial prompt, feed it to the model, and review the model’s response.

3. Prompt refinement

Next, based on that review, we refine the initial prompt. This cycle continues until the output meets our expectations.

4. Prompt refinement

Prompt refinement isn’t just about fixing errors; it can also help gradually increase the complexity of a task. This is especially useful in coding, where applying iterative improvements can lead to significant enhancements. For example, when refining a prompt iteratively, we might request features like preventing runtime errors or adding input validation based on an initial implementation that doesn’t address these concerns out of the box.

5. Prompt refinement

Let’s put this into practice with an example! In this case, we need assistance writing a Python function to validate email addresses. We can iteratively refine the prompt, starting with: “Write a Python function to validate email addresses.” To this prompt, the model responds with an initial implementation. Next, we take our email database into account and ask the model to identify flawed email addresses by refining the prompt: “Make sure it covers edge cases such as not allowing user@domain.” The model updates the regular expression accordingly and identifies the flawed email addresses. We can add further rounds until the model’s response, and the code it produces, meet our expectations.
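
To make this concrete, here is a minimal sketch of what the refined implementation might look like. The exact regular expression the model returns will vary; a typical refinement rejects addresses such as “user@domain” by requiring a dot-separated top-level domain:

```python
import re

# Sketch of a refined email validator: the pattern requires a
# dot-separated top-level domain, so "user@domain" is rejected.
EMAIL_PATTERN = re.compile(
    r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)*\.[A-Za-z]{2,}$"
)

def is_valid_email(address: str) -> bool:
    """Return True if the address matches the refined pattern."""
    return EMAIL_PATTERN.match(address) is not None

print(is_valid_email("user@domain.com"))  # True
print(is_valid_email("user@domain"))      # False: missing top-level domain
```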

6. Contextual prompting

Another important aspect to consider when crafting our coding prompts is providing the right context so the model has all the necessary information to complete the task. This is known as “contextual prompting” and helps guide the model more effectively. In coding, the most powerful contextual technique is “code context injection”: including the relevant existing code in the prompt, such as function signatures, helper classes, or configuration files, to reduce mismatches between the context the model assumes and our real codebase.
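
As an illustration, a code-context-injection prompt might embed the relevant signature directly. The function and conventions below are hypothetical stand-ins for real code from our project:

```python
# Hypothetical example: injecting an existing function signature
# into the prompt so the model matches our project's conventions.
existing_code = '''
def fetch_user(user_id: int) -> dict:
    """Return the user record for the given id from the users table."""
'''

prompt = f"""Given this existing function from our codebase:
{existing_code}
Write a companion function fetch_users_by_role(role: str) -> list[dict]
that follows the same naming and database conventions."""

print(prompt)
```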

7. Contextual prompting

When working with large codebases, it is important to isolate the code relevant to the task from unrelated parts. One effective strategy is to generate the call graph of the function we are focusing on. This allows us to identify and provide only the information that function needs at runtime, as sketched below.
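
One lightweight way to approximate this in Python is to walk the module’s abstract syntax tree and collect the names called inside the target function. This is a rough sketch: it only resolves direct calls, not methods or aliased imports, but it is enough to shortlist which definitions to paste into the prompt as context:

```python
import ast

def called_names(source: str, func_name: str) -> set[str]:
    """Collect the names of functions invoked inside func_name."""
    tree = ast.parse(source)
    calls: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == func_name:
            # Walk the body of the target function and record direct calls.
            for inner in ast.walk(node):
                if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                    calls.add(inner.func.id)
    return calls

code = """
def helper(x):
    return x * 2

def target(y):
    return helper(y) + len(str(y))
"""

print(called_names(code, "target"))  # {'helper', 'len', 'str'} (set order varies)
```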

8. Contextual prompting

Finally, contextual prompting also makes a huge difference when using an AI model for debugging. Suppose one of our scripts is failing, and we are using an AI assistant to help fix it.

9. Contextual prompting

Asking: "I am getting a ConnectionError in my Python script. How can I fix it?" Will give back a list of things that might be causing the issue, as the model can only guess the context.

10. Contextual prompting

The best contextual prompt is the one that provides all the available context. In this case, that means both the code snippet and the error message. One possible prompt is shown below. With the full context, the model can identify the issue in our code with much more certainty.
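
For instance, a full-context version of the prompt could be assembled as follows. The failing snippet and the error text are hypothetical stand-ins for our own:

```python
# Hypothetical example: bundling the failing code and the exact
# error message into a single debugging prompt.
failing_code = '''
import requests

def get_status(url):
    response = requests.get(url, timeout=1)
    return response.status_code
'''

error_message = (
    "requests.exceptions.ConnectionError: HTTPSConnectionPool("
    "host='internal-api.local', port=443): Max retries exceeded"
)

prompt = (
    "I am getting a ConnectionError in my Python script.\n"
    f"Here is the code:\n{failing_code}\n"
    f"And the full error message:\n{error_message}\n"
    "What is causing it, and how can I fix it?"
)

print(prompt)
```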

11. Let's practice!

Now, let’s put all these concepts into practice!
