Prompt engineering
1. Prompt engineering
We are now ready to start the development phase.

2. LLM lifecycle: Prompt engineering
The first task is to load the base model selected in the ideation phase and start prompt engineering.

3. The development cycle
The development phase is a cyclic process of building and improving the application, known as the development cycle. At the heart of every application is the prompt, which instructs the LLM to generate desired outputs. We'll revisit this cycle as we progress, filling in new parts as we learn them. But why do we care about prompt engineering?

4. Why is prompt engineering important?
Prompt engineering enhances prompts in three ways. First, by giving clear instructions, we can improve performance, getting more accurate and helpful responses from LLMs. Second, with well-crafted prompts, we gain more control over the output, steering LLMs to generate the desired content. Third, since these models can make mistakes or produce irrelevant information, prompt engineering helps us reduce bias and hallucinations. But how do we find the perfect prompt?

5. Elements of a prompt
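As a concrete sketch, here is how the four elements of a prompt might be assembled for the dish-calorie example. The dish names and calorie figures are illustrative placeholders, not real data, and the exact wording is one of many reasonable choices:

```python
# Sketch of the four prompt elements for the dish-calorie example.
# Dish names and calorie figures are illustrative placeholders.

instruction = "Estimate the calories of the dish. Answer with a number only."

# Examples / additional context: other dishes with known calories
examples = (
    "Dish: Caesar salad\nCalories: 470\n"
    "Dish: Margherita pizza\nCalories: 850\n"
)

input_data = "Dish: Spaghetti carbonara"  # the actual dish to predict
output_indicator = "Calories:"            # guides what the model should produce

prompt = f"{instruction}\n\n{examples}{input_data}\n{output_indicator}"
print(prompt)
```

Keeping the elements as separate variables makes it easy to swap in new instructions or examples while experimenting.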
A typical prompt includes four elements: an instruction for the model, examples or additional context, input data, and an output indicator. In our example, we want to predict the calories of a dish. The instruction specifies the task and format, the examples are other dishes with their calories, the input is the actual dish, and the output indicator guides what the model should produce.

6. Finding the perfect prompt
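A minimal sketch of running the same prompt as a series of mini-experiments at different settings. The `generate` function below is a stub standing in for whichever client our model provider exposes; its name and parameters are assumptions, not a real API:

```python
# Sketch of prompt mini-experiments at different LLM settings.
# `generate` is a stub; a real implementation would call the model here.

def generate(prompt: str, temperature: float = 0.7, max_tokens: int = 50) -> str:
    # Stand-in for a real LLM call; echoes the settings it was given.
    return f"[output at temperature={temperature}, max_tokens={max_tokens}]"

prompt = "Estimate the calories of the dish.\nDish: Spaghetti carbonara\nCalories:"

# Lower temperature -> more deterministic output; max_tokens caps length.
for temperature in (0.0, 0.7, 1.0):
    print(temperature, generate(prompt, temperature=temperature, max_tokens=10))
```

Looping over settings like this mirrors what a playground does interactively, and makes each prompt-and-settings combination a repeatable mini-experiment.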
When testing prompts, it's worth adjusting LLM settings such as temperature, which affects the randomness of the output, or max tokens, which limits the output length. Additionally, we can enhance prompts by using in-context learning, where we show the model examples of inputs and outputs. Numerous online resources offer prompt design patterns to assist us. A playground environment is useful for trying various models and settings. Each prompt acts like a mini-experiment. At this stage, we're evaluating prompt quality using our own judgment.

7. Prompt management
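One way to sketch this kind of tracking without a dedicated tool. The record fields are our own choice rather than a specific tool's schema, and the model name and output value are illustrative placeholders:

```python
import json

# Sketch of prompt management: log each experiment as one record so results
# are reproducible and shareable. Field names are our own choice; the model
# name and output value are illustrative placeholders.

def log_experiment(log, prompt, output, model, settings):
    record = {"prompt": prompt, "output": output,
              "model": model, "settings": settings}
    log.append(record)
    return record

experiments = []
log_experiment(
    experiments,
    prompt="Estimate the calories of the dish.\nDish: Spaghetti carbonara\nCalories:",
    output="650",                                    # placeholder response
    model="example-base-model",                      # placeholder model name
    settings={"temperature": 0.0, "max_tokens": 10},
)

# Records serialize cleanly to JSON, so they can live in version control.
print(json.dumps(experiments[0]))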
When finding that perfect prompt, tracking results, known as prompt management, is crucial for efficiency, reproducibility, and collaboration. We should track the prompt itself and its output, along with details about the model and settings used. This way, we can revisit and reuse past results. Using a dedicated prompt management tool or our preferred version control system is recommended. At this stage, we should begin generating a collection of good input-output pairs that can be used to evaluate the application later on.

8. Prompt templates for reusability
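A minimal template sketch for the dish-calorie task, using Python's built-in string formatting. The template wording itself is an assumption:

```python
# Minimal prompt template: a {dish} placeholder makes the same prompt
# reusable for any input dish.

TEMPLATE = (
    "Estimate the calories of the dish. Answer with a number only.\n"
    "Dish: {dish}\n"
    "Calories:"
)

def build_prompt(dish: str) -> str:
    return TEMPLATE.format(dish=dish)

print(build_prompt("Spaghetti carbonara"))
print(build_prompt("Miso ramen"))
```

The same pattern scales to multiple placeholders, so one template can serve as the recipe for an entire task.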
Once we've gathered a collection of promising prompts, we're ready to proceed; we can always return to refine our prompt engineering later on. Right now, it's crucial to start developing prompt templates. In our application, we have input data that we need to transform into output. Templates use placeholders for the input and work like recipes for different tasks, fitting any kind of data. They're essential for making prompts reusable, making our application more flexible and efficient. For example, with our dish-calorie prediction template, we can predict calories for any type of dish. This will be even more helpful as we create advanced applications.

9. The development cycle
Starting from the original development cycle diagram, we are now ready to extend it.

10. The development cycle
We add the activity of prompt engineering to it. We use a dashed arrow to denote that it leads to new and improved prompts to be used in the LLM application.

11. Let's practice!
We've introduced the fundamentals of prompt engineering as we develop our application. Now, let's apply this knowledge in practice.