
Personalizing and Categorizing with LLMs

1. Personalizing and Categorizing with LLMs

Great work! In the last video, we built an agentic workflow that summarizes text. But a summary sitting at the end of a pipeline doesn't do much on its own.

2. From User Feedback to Custom Responses

What if the agent could summarize user feedback, classify each message by sentiment (such as positive or negative), and use that label to drive what happens next, even drafting a personalized reply? The classification step here is crucial: it turns unpredictable summaries into clean, predictable labels. These labels can then be routed using rule-based nodes like If and Switch. Let's start with how to categorize with LLMs.

3. LLMs for Classification

When we prompted the model to summarize the user feedback, the prompt was made up of two parts: an instruction describing the task to complete, and the user feedback inserted using an expression. The same applies to classification tasks, but it's important that the instruction contains the category labels we want the model to use. Without these provided labels, the model will guess what we're looking for, and that doesn't give us the clear and predictable labels we need for downstream rule-based nodes. A prompt like "Classify this feedback as positive, negative, or neutral. Reply with one word only." is often sufficient, but for more subjective classification, you may need to provide examples in the prompt to guide the model's classifications.
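As a rough sketch of this two-part pattern, here is how the prompt could be assembled and the model's one-word reply cleaned into a predictable label. The function and variable names are illustrative, not n8n internals, and the normalization step is an assumed safety net rather than something the video prescribes:

```python
# Allowed category labels; anything else falls back to "neutral".
LABELS = {"positive", "negative", "neutral"}

INSTRUCTION = (
    "Classify this feedback as positive, negative, or neutral. "
    "Reply with one word only."
)

def build_prompt(feedback: str) -> str:
    """Stitch the fixed instruction and the user feedback into one
    prompt, mirroring how n8n combines text with an expression."""
    return f"{INSTRUCTION}\n\nFeedback: {feedback}"

def normalize_label(raw_output: str) -> str:
    """Clean the model's reply (whitespace, case, trailing period)
    into one of the allowed labels."""
    label = raw_output.strip().lower().rstrip(".")
    return label if label in LABELS else "neutral"
```

Keeping a fallback label means a slightly off-format model reply never breaks the downstream Switch routing.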

4. From User Feedback to Custom Responses

Once we have a clean one-word output, we can use a Switch node to generate different responses depending on the user's sentiment. The prompt follows the same two-part pattern: instruction plus data. But rather than duplicating an Agent node on every branch,

5. From User Feedback to Custom Responses

we can use Edit Fields to set the instruction ("Draft a short, polite apology" for negative, "Write a friendly thank-you" for positive, and so on) and connect all branches into a single Agent. The prompt still combines instruction and data: the original feedback is unchanged, but we're now pulling in the instruction as an expression as well. One Agent, many instructions, each producing a response unique to each customer. Let's build this!
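Conceptually, the branch-specific Edit Fields values act like a lookup from sentiment label to instruction. A minimal sketch of that idea (the neutral-branch wording is an assumed placeholder, since the video only spells out the negative and positive instructions):

```python
# Switch + Edit Fields as a lookup table: one instruction per branch.
# The "neutral" wording below is assumed, not from the video.
INSTRUCTIONS = {
    "negative": "Draft a short, polite apology to this customer.",
    "positive": "Write a friendly thank-you to this customer.",
    "neutral": "Write a brief, courteous acknowledgement.",
}

def build_reply_prompt(sentiment: str, feedback: str) -> str:
    """One Agent, many instructions: the instruction varies with the
    sentiment label, while the feedback data stays the same."""
    instruction = INSTRUCTIONS[sentiment]
    return f"{instruction}\n\nOriginal feedback: {feedback}"
```

This is why the classification labels must be clean and predictable: they are the keys that select the instruction.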

6. Example: Acme's feedback pipeline

We start with a Manual Trigger and an Edit Fields node containing one summarized customer feedback message under the field "feedback". Next, we add an Agent node; here's where we combine our instruction with the data. We type the instruction: "Classify this feedback as positive, negative, or neutral. Reply with one word only." Then we drag the feedback field in as an expression: that's the data part. Instruction plus data, stitched together into one prompt. Running it, the Agent returns "negative". One clean word.

Now, a Switch node after the Agent. We add a rule checking whether the output equals "negative", one for "positive", and another for "neutral". On the negative branch, we add an Edit Fields node and create one field called "instruction" with the value: "Draft a short, polite apology to this customer." We add Edit Fields nodes to the other branches with their own custom instructions.

We then connect all three Edit Fields nodes to a single AI Agent node. Its prompt pulls in the instruction field via an expression, plus the original feedback, which we extract from the first Edit Fields node. This is a single agent, but the instruction it receives depends on which branch the data came from.

Let's execute the workflow and see what happens! The first Agent classifies, the Switch routes to the negative branch, Edit Fields sets the apology instruction, and the second Agent produces a tailored reply addressing the export button issue. Fully automated, end to end. In the next video, we'll bring all of these pieces together (triggers, data transformation, conditional logic, and LLMs) into one complete workflow from start to finish.
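The whole pipeline can be sketched end to end in plain Python. Here, `call_llm` is a hypothetical stub standing in for a real model call, with canned replies so the flow runs without an API key; in practice both calls would go to the LLM, and the neutral instruction wording is assumed:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stub for a model call; returns canned outputs."""
    if prompt.startswith("Classify"):
        return "negative"  # canned classification for the demo
    return "We're sorry about the export button issue; a fix is on the way."

# One instruction per Switch branch; "neutral" wording is assumed.
INSTRUCTIONS = {
    "negative": "Draft a short, polite apology to this customer.",
    "positive": "Write a friendly thank-you to this customer.",
    "neutral": "Write a brief, courteous acknowledgement.",
}

def run_pipeline(feedback: str) -> dict:
    # Agent 1: instruction plus data, one predictable label back.
    label = call_llm(
        "Classify this feedback as positive, negative, or neutral. "
        f"Reply with one word only.\n\nFeedback: {feedback}"
    ).strip().lower()

    # Switch + Edit Fields: pick the branch-specific instruction.
    instruction = INSTRUCTIONS.get(label, INSTRUCTIONS["neutral"])

    # Agent 2: same feedback, branch-specific instruction.
    reply = call_llm(f"{instruction}\n\nOriginal feedback: {feedback}")
    return {"sentiment": label, "reply": reply}

result = run_pipeline("The export button is broken and I lost my report.")
```

Swapping the stub for a real model call is the only change needed to make this sketch live; the routing logic stays identical.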

7. Let's practice!

Time to put your Agents to work!
