
Introducing LLMs into Workflows

1. Introducing LLMs into Workflows

Welcome back! You've come a long way — triggers, conditions, expressions, and merging data. But every node so far follows fixed rules. What happens when a task doesn't fit neatly into a rule? That's where large language models come in.

2. Fixed rules vs. language understanding

An If node checking whether a number is greater than ten will always give the same answer — no surprises. That's because it follows a fixed rule. Large Language Models, or LLMs, work differently. Ask one to summarize the same paragraph twice and you'll get two slightly different responses — different wording, maybe a different emphasis. That's not a flaw; language models generate text by predicting the most likely next word, and that prediction can vary. So we've got a trade-off: the predictability of rule-based workflows versus the flexibility of LLMs. When does it make sense to give up the first for the second?
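That sampling behavior can be illustrated with a toy sketch. This is not a real language model — the word list and probabilities are made up — but it shows why the same "prompt" can produce different text on different runs:

```javascript
// Hypothetical next-word probabilities (a real model predicts over a huge vocabulary)
const nextWord = { great: 0.5, solid: 0.3, decent: 0.2 };

// Pick a word at random, weighted by its probability
function sample(dist) {
  let r = Math.random();
  for (const [word, p] of Object.entries(dist)) {
    if (r < p) return word;
    r -= p;
  }
  return Object.keys(dist)[0]; // fallback for floating-point rounding
}

// Same starting text, possibly different continuations across runs:
console.log(`The product is ${sample(nextWord)}`);
console.log(`The product is ${sample(nextWord)}`);
```

Run it a few times and the continuation shifts between the candidate words — the same kind of variation you see when re-running an LLM prompt.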

3. When to use each

LLMs shine when the task needs language understanding — summarizing text, drafting a reply, or interpreting unstructured data — things that are genuinely hard to capture with an If or Switch rule. But for precise calculations, exact formatting, or clear, explainable decisions, rule-based nodes are faster, cheaper, and more consistent. A good rule of thumb: if a rule-based node can handle the job, use it. Reach for an LLM only when you need that flexibility.
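A quick sketch of the contrast, with made-up data, shows why each side wins where it does:

```javascript
// Rule-based: exact, repeatable, free — an If node handles this perfectly.
const order = { total: 129.99, items: 3 };
const needsReview = order.total > 100; // same input, always the same answer

// Language-based: no simple rule captures this — a case for an LLM.
const feedback = "The app is fine I guess, but setup took me an entire evening.";
// A rule like feedback.includes("fine") would misread the mixed sentiment here.

console.log(needsReview); // true
```

The first check is trivially deterministic; the second requires actually understanding the sentence, which is where a language model earns its cost.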

4. New node: AI Agent

In n8n, we connect to an LLM through the AI Agent node. It takes a text prompt — usually built from data flowing through the workflow — and sends it to a language model. To tell the Agent which model to use, we attach a Chat Model sub-node underneath it. In this course, we'll use the OpenAI Chat Model and connect to a GPT-series model. Think of the Agent as the orchestrator and the Chat Model as the raw engine that generates the output. In the exercises, OpenAI credentials are already set up for you to select from the dropdown, so we can jump straight into building.

5. Example: Summarizing customer feedback

Let's see this in action. We start with a Manual Trigger and add an Edit Fields node to create some sample data — a piece of submitted product feedback. Next, we add an Agent node and attach an OpenAI Chat Model sub-node underneath. Inside the model sub-node, credentials should already be available to select, and the model can be changed here as well. In the Agent's prompt field, we'll set the source of the prompt to be defined below and type: "Summarize this customer feedback in one sentence." Then, we reference the feedback field from Edit Fields using an expression, which we can drag in from the Input panel, just like we did with expressions in the last chapter. When we run the workflow, Edit Fields creates our data, the Agent sends the prompt plus the feedback to the model, and a moment later we get back a one-line summary. Now here's the interesting part — run it again with the same input. The summary will say essentially the same thing, but the wording shifts slightly. Quick tip: when we're iterating on a prompt like this, we can pin the data on specific nodes. Pinning freezes a node's output so we can re-run the Agent without triggering the whole workflow each time.
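The finished prompt field ends up looking roughly like this — the field name `feedback` is an assumption here; match whatever name you gave the field in your Edit Fields node:

```
Summarize this customer feedback in one sentence:
{{ $json.feedback }}
```

When the workflow runs, n8n resolves the `{{ }}` expression to the actual feedback text before the Agent sends the prompt to the model.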

6. The Agent's output

The best part? The Agent's response is just a text field in the output — exactly like any other node. We can reference it with expressions, reshape it in Edit Fields, or feed it into an If node to branch based on what the model said. The LLM slots right into the same data flow we've been building all along. It just produces text instead of following a fixed rule. In the next video, we'll put the Agent to work on real tasks — personalizing messages and categorizing data at scale.
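As a sketch of that branching idea: because the Agent's reply is ordinary text, normal string logic applies to it. The field name `output` below is an assumption — check the Agent's output in the Input panel to confirm what yours is called:

```javascript
// A sample item as it might leave the Agent node (field name assumed to be `output`)
const item = { output: "The customer loves the product but wants a refund option." };

// The kind of condition an If node could evaluate on the summary text
const mentionsRefund = item.output.toLowerCase().includes("refund");

console.log(mentionsRefund); // true
```

In n8n itself, this check would live in an If node's condition as an expression rather than in a code block, but the logic is the same.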

7. Let's practice!

Time to add some AI to your workflows!
