

1. Mastering context engineering

In this lesson, we'll break down context engineering and compare it to prompt engineering, a term you might be familiar with. We'll talk about best practices and principles for getting the outcomes you want with AI.

But first, what is context? Context is kind of like AI's memory, and the information you give to AI really matters, specifically what you're providing Replit Agent with. We saw that in our earlier lesson. It's just like instructions you'd give a human: if those instructions are unclear, out of order, or contain misleading information, you're going to get suboptimal results from AI. And just like a person, it's easy for AI to lose track of goals with inadequate context. At Replit, we're making improvements to how our Agent works every day, but it's still our job as managers, as vibe coders, to instruct and manage these AI systems.

Now, context can be anything. It can be our prompt, images, documentation, or even URLs that Agent can go fetch and pull the contents of. Agent can also take screenshots of those URLs. It can be research requests, as we saw earlier: "Hey, go search the web for this thing." And just like a human, AI learns from examples. If we can give AI examples of what we want, whether that's a screenshot, a snippet of code, or a certain library to use and implement, that's incredibly powerful.

So what does good context look like? Good context looks like specific instructions. Sometimes this means emotional words or, quote unquote, speaking the language. In a design sense, that's saying "minimalist," or using the term "border radius," which refers to how rounded corners are, rather than trying to describe corner rounding. The right keywords or research requests can go a long way in building with AI.
And if we're thinking about vibe coding as puzzle pieces that we're fitting together, then determining which pieces we're using and instructing AI to go use them can be incredibly powerful. So: look at libraries that do X, Y, and Z; here's an example of implementing this feature. Similarly, error logs that isolate a problem are leading clues for AI on what to fix. As we think about what good context looks like, this is really just good engineering: we're breaking things down from first principles and instructing our Agent how to implement them or how to go about fixing them. Images that show visual bugs can be really powerful as well, because certain things aren't present in console logs, or AI has trouble seeing them. Now, with Agent's testing, as we'll see more of, it's able to catch a lot on its own, but at times we still need to provide that additional context.

Now let's talk a little bit about context reasoning. This is a question I get all the time. People ask me, how do I think about building these prompts, these messages to send to Agent? The questions you need to ask yourself are: What problem am I trying to solve? What information is most valuable to solve it? And how can I communicate that information to bridge the gap between what I want and what the model is doing, or what the outputs look like?

This can be a little high level, so let's give an example. Your role is to arrange existing information into an input most likely to produce an output. As a vibe coder, this is like solving a puzzle or building with Legos: we're trying to figure out how to take the pieces of information we have and create a solution to our problem.

Now let's spend some time talking about good prompts and bad prompts. We're going to go line by line here and talk about what could be improved. A bad prompt is: ‘Fix my code’.
That's because it's not specific; it doesn't show or specify what AI should work on. A better prompt is: ‘My site fails when processing user input. The error seems to be from the database. Can you help debug this? Here's the relevant error message’. That's very specific, provides examples, and shows our Agent what could possibly be going wrong.

Similarly: ‘Make a website’. That's not descriptive. It doesn't instruct Agent on what you want, and if our Agent doesn't know, it can't really build towards the desired outcome. A better prompt is: ‘Create a simple portfolio website. It needs sections for home, about me, and a contact form. Use a clean, modern design theme and placeholder content. Here is an example screenshot’. This is clear, specific, and testable.

Last prompt: ‘Don't make it slow’. First thing: you don't really want to use negative prompts with AI; it's proven to be an inefficient way of prompting. We want to reframe this positively, telling the model what to do rather than what not to do. So: ‘Refactor the data processing function to handle larger inputs more efficiently. Could we use a different algorithm or data structure? Check out these docs’. We're using positive instruction, we're specifying a goal, and we're even asking Agent a question to open a further area of exploration.

When you're messaging Agent, you can really think about it like a text message or a Slack message to one of your coworkers. Would a human be able to read this thing? I send texts to people all the time and nobody reads them. But we have to ask: Is this clear, simple, and concise? Would a human get confused by this message? Is there conflicting information, or information that's out of order? And finally, are my goals clear? Is what I'm trying to communicate logical, concise, and explainable? These are the most important things when prompting AI. And so, one of the best practices in context engineering is to refresh context for new topics.
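To make the debugging prompt above concrete, here's a small, made-up example of the kind of error message worth pasting into your message verbatim. The function and form data are hypothetical; the point is that the exact error text isolates the problem for Agent far better than "my site fails."

```python
# Hypothetical example: the kind of error message worth pasting into a prompt.

def process_user_input(form):
    # Fails when the submitted form is missing the "email" field.
    return form["email"].lower()

try:
    process_user_input({"name": "Ada"})
except KeyError as err:
    # Pasting this exact message into your prompt tells Agent where to look,
    # which is far more useful than "my site fails".
    print(f"KeyError: {err}")  # prints: KeyError: 'email'
```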
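And to make the refactoring prompt above concrete, here's a hypothetical sketch of the kind of change it might lead to: a data-processing function that checks membership in a list (a linear scan per lookup) swapped for one that uses a set (constant-time lookups on average). The function names and data are invented for illustration.

```python
# Hypothetical sketch of the kind of refactor the prompt above asks for.
# Function names and data are invented for illustration.

def filter_known_records_slow(records, known_ids):
    """Keep records whose id appears in known_ids (a list)."""
    # List membership is a linear scan, so this is
    # O(len(records) * len(known_ids)) overall.
    return [r for r in records if r["id"] in known_ids]

def filter_known_records_fast(records, known_ids):
    """Same result, but a set gives O(1) average-time lookups."""
    known = set(known_ids)
    return [r for r in records if r["id"] in known]

records = [{"id": i} for i in range(10)]
matches = filter_known_records_fast(records, [2, 5, 7])
print([r["id"] for r in matches])  # prints: [2, 5, 7]
```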
You'll see examples of that in the course. You can do it by creating new chats with Agent, by clicking the plus icon, or even by using checkpoints: when you roll back your app to an earlier version, we also roll back the context. Another best practice is updating context on key project decisions. For example, we have a Replit.md file in each project, which contains documentation, so you can update that file to update what's in Agent's context for long-running instructions that should span multiple chats. And really, think critically about what goes in context and how context is structured, because those are crucial elements. When you're typing out messages to AI, ask yourself: if I were typing this to a human being, would they understand it? Would our AI partner understand this message?

And finally, less is more. I've said it's like talking to a human many times; simplicity and focus win over everything. If you can distill exactly what you want down to the simplest form possible, that goes a long way. Now we'll move on to the next lesson and start building in Replit.
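As a quick illustration of the Replit.md idea mentioned above, here's a hypothetical sketch of what such a file might contain. Every project detail here is invented; the real file in your project should reflect your own app and the decisions you've made with Agent.

```markdown
# Replit.md (hypothetical sketch)

## Project
A simple portfolio site with sections for home, about me, and a contact form.

## Design decisions
- Clean, minimalist theme; small border radius on cards and buttons
- Placeholder content until real copy is ready

## Long-running instructions for Agent
- Validate user input before it touches the database
- Keep the contact form's fields limited to name, email, and message
```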

2. Let's practice!
