Using API data and LLMs
1. Using API data and LLMs
Welcome back! In this video, we'll connect our workflows to external APIs and even use a large language model, or LLM, to generate output for us.
2. Why this matters
APIs let us pull in real-world data, like user details or product info. LLMs add intelligence: they can summarize text, classify messages, or even generate new content. Together, they make our automations far more powerful.
3. Why this matters
Why does this matter? Most business automations rely on data from other systems: CRMs, payment tools, or support platforms. APIs make those connections possible, while LLMs help us understand or act on the data automatically.
4. How APIs work
An API is just a way for one system to talk to another. Think of it like ordering food at a restaurant: we place a request, and the kitchen returns our meal. In n8n, the HTTP Request node is our waiter. It sends requests and brings back data, usually in JSON format.
5. Using API data
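To see what the "waiter" brings back, here is a minimal Python sketch (outside n8n) of parsing the kind of JSON an API response contains. The endpoint is omitted and the field names are the ones mentioned above, used here as a hypothetical payload.

```python
import json

# A hypothetical JSON response body, as an HTTP Request node might receive it
raw_response = '{"userId": 42, "budget": 25000, "niche": "e-commerce"}'

# Parse the JSON text into a dictionary of named fields
data = json.loads(raw_response)

# Individual fields can now be inspected or passed to later steps
print(data["userId"], data["budget"], data["niche"])
```

In n8n, this parsing happens automatically; the Schema or Table View shows exactly these fields.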
When the HTTP Request node gets data back, it's structured in fields like userId, budget, or niche. We can inspect these fields in n8n's Schema or Table View, just like we did earlier, and pass them into later nodes. For example, we might call an API that lists support tickets and use the fields to decide whether a ticket should be escalated or closed.
6. Filtering and branching data
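The support-ticket example above can be sketched in Python. The ticket fields and routing rules here are hypothetical; the point is that decisions come straight from the API's fields, just as a branching node would make them.

```python
# Hypothetical support tickets, as returned by a ticketing API
tickets = [
    {"id": 1, "priority": "high", "status": "open"},
    {"id": 2, "priority": "low", "status": "open"},
    {"id": 3, "priority": "low", "status": "resolved"},
]

def route(ticket):
    """Decide an action from a ticket's fields, like an If node routing items."""
    if ticket["status"] == "resolved":
        return "close"
    if ticket["priority"] == "high":
        return "escalate"
    return "queue"

actions = {t["id"]: route(t) for t in tickets}
```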
Here's where things get interesting. Once we have API data, we can apply logic to it. Say Acme's finance team pulls ten customer records from an API but only wants to process customers with a budget greater than 20,000. Adding an If or Switch node allows us to branch our workflow so that only relevant data continues. This keeps our workflows clean and efficient. Remember our earlier principle: listen, filter, act? It applies here, too. Here's a quick note:
7. APIs behind the scenes
Most action nodes we see, like Slack, Google Sheets, or Notion, are actually HTTP calls under the hood. n8n handles the credentials and sends the request for us. If an app or action isn't covered natively, or we want more control, we can always use the HTTP Request node directly. It's our all-purpose tool for custom integrations.
8. From APIs to AI
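As a sketch of what "under the hood" means: posting a Slack message is just an HTTP POST to Slack's `chat.postMessage` Web API method with a bearer token and a JSON body. The token and channel below are placeholders, and no request is actually sent.

```python
import json

# What the Slack node does behind the scenes: an HTTP POST to Slack's Web API.
# The token and channel are placeholders, not real credentials.
url = "https://slack.com/api/chat.postMessage"
headers = {
    "Authorization": "Bearer xoxb-placeholder-token",
    "Content-Type": "application/json",
}
body = json.dumps({"channel": "#finance", "text": "New customer record processed"})
```

An HTTP Request node configured with this URL, headers, and body would do the same thing as the native Slack node.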
Now that we can fetch and filter data from APIs, let's consider what would happen if our workflow could actually understand that data. That's where large language models, or LLMs, come in. They don't just move information; they interpret it, summarize it, and generate insights automatically. Let's combine the two to make our API-powered workflows even smarter.
9. Adding LLM intelligence
Once we can pull data from APIs, the next step is making sense of it, and that's where LLMs come in. Imagine a customer leaves feedback like: "The checkout crashed and I lost my cart".
10. Adding LLM intelligence
Without an LLM, we'd have to tag that manually or build a complex rule.
11. Adding LLM intelligence
With an LLM, one API call can analyze it instantly and return a smart label. For example, send the text to OpenAI or Hugging Face via an HTTP Request. The model can summarize the issue or classify it as a bug, feature request, or praise. And remember, at its core, this is still just another API call.
12. APIs and LLMs
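To make "still just another API call" concrete, here is a sketch of the JSON body an HTTP Request node could send to a chat-style LLM API. The payload follows the OpenAI chat-completions shape; the model name is illustrative, and no request is actually sent here.

```python
import json

feedback = "The checkout crashed and I lost my cart"

# Sketch of a chat-completions request body; the model name is illustrative
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {
            "role": "system",
            "content": "Classify the feedback as bug, feature request, or praise. "
                       "Reply with the label only.",
        },
        {"role": "user", "content": feedback},
    ],
}

# This JSON string is what the HTTP Request node would POST
request_body = json.dumps(payload)
```

The model's reply (e.g. "bug") comes back as JSON too, ready to feed into an If or Switch node downstream.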
Let's recap. APIs bring in real-world data. Logic nodes let us filter or branch based on that data.
13. APIs and LLMs
LLMs interpret, summarize, or generate new content. Together, they turn simple workflows into intelligent systems that understand and act on information automatically.
14. Let's practice!
Time for some practice. Bring data and intelligence together to create smarter automations.