
1. Evaluating and Logging Workflow Outputs

Not all failures throw errors. Sometimes a workflow completes with green checkmarks across the board — and still delivers garbage. Let's see how that happens, and what to do about it.

2. The Silent Workflow Killer

Here's an inventory pipeline. There's a lot going on, but let's focus on the core processing nodes first — a Manual Trigger with 48 product records pinned, a Transform node, and at the far end, an If node that routes data to either a dashboard or a quarantine path. When we execute the workflow, every node shows green checkmarks and 48 items flow through. It looks perfect. But scroll through the Transform node's output — some products have empty SKUs, others have negative quantities. The workflow didn't break; it just passed bad data through without questioning it. Error handling from the last lesson wouldn't have caught this — nothing errored. So how do we know what happened at each step, and how do we stop bad data before it reaches the end?
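To make the problem concrete, here is a hypothetical sample of what those pinned product records might look like — the field names (`sku`, `quantity`, `warehouse`) are assumptions for illustration, not taken from the actual workflow:

```javascript
// Hypothetical product records; field names are assumed for illustration.
const products = [
  { sku: 'INV-1001', quantity: 12, warehouse: 'EAST' }, // clean record
  { sku: '',         quantity: 5,  warehouse: 'EAST' }, // empty SKU — flows through silently
  { sku: 'INV-1003', quantity: -4, warehouse: 'WEST' }, // negative quantity — also no error
];

// Every record is valid JSON, so every node reports success.
console.log(`${products.length} items passed through`);
```

None of these records throws an error at any node, which is exactly why the green checkmarks are misleading.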

3. Checkpoint Logging

One approach is to log a checkpoint after each major step. We've added Data Table nodes that branch off after each processing step — Log Fetch, Log Transform, Log Evaluate. They tap off the main pipeline as side connections, so they don't interfere with the data flowing to the next step. Let's look more closely at Log Fetch. It's set to Insert, pointing at an execution log table, and it maps three fields: the step name — here "Fetch" — the item count, pulled from the previous node using an expression, and a timestamp using `$now`. Each checkpoint captures the same three fields, creating a running record of the pipeline. Now look at the Read Log node at the end. This is a Data Table Get node that reads the full log table. In the output panel, each checkpoint appears as a separate item: "Fetch", 48 items; "Transform", 48 items; "Evaluate", 1 item. The counts tell us exactly how many items passed through each step. Imagine that second entry said 12 instead of 48 — thirty-six products vanished during the transform step. Without these checkpoints, we'd never know where the drop happened. With them, we can pinpoint the exact step. And because the data table persists between executions, we can inspect it anytime.
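The three mapped fields can be sketched in plain JavaScript. This is a minimal illustration of the checkpoint shape, not the actual node configuration — in n8n, the count and timestamp would be expressions such as `{{ $('Fetch').all().length }}` and `{{ $now }}` rather than function calls:

```javascript
// Minimal sketch of the three fields each Log node records.
// The function name and field names are assumptions for illustration.
function makeCheckpoint(stepName, items) {
  return {
    step: stepName,                       // e.g. "Fetch", "Transform", "Evaluate"
    itemCount: items.length,              // how many items reached this step
    loggedAt: new Date().toISOString(),   // stands in for n8n's $now
  };
}

// Simulate the running record the execution log table accumulates.
const log = [
  makeCheckpoint('Fetch', new Array(48).fill({})),
  makeCheckpoint('Transform', new Array(48).fill({})),
  makeCheckpoint('Evaluate', [{}]),
];

log.forEach((entry) => console.log(`${entry.step}: ${entry.itemCount}`));
```

Because each checkpoint records the same fields, comparing `itemCount` across entries is enough to spot where items were dropped.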

4. Closing the Loop: Evaluation

Checkpoints tell us what happened, but they don't stop bad data from reaching the end. That's what the Evaluate Output node is for — this is the quality gate. This Code node loops through every output item and asks specific questions: does each product have a valid SKU? Is the quantity a positive number? Is the warehouse code one we recognize? For each problem, it pushes a message into an issues list. After the loop, it produces a single verdict — pass or fail — with the list of issues. In the output, we see pass is false, with a list of specific issues — the empty SKUs, negative quantities, and unknown warehouse code we spotted earlier. The If node reads that pass field and routes accordingly — true sends data to the dashboard, false sends it to the quarantine path. Now click into the quarantine node and look at its fields. Notice the expression referencing the evaluation node by name: `$('Evaluate Output').item.json`. This syntax lets us pull data from any upstream node, not just the one directly before. We'll use this in the exercises to wire up both branches of the quality gate.
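The logic inside the Evaluate Output node might look like the sketch below. The field names (`sku`, `quantity`, `warehouse`) and the list of recognized warehouse codes are assumptions — the real workflow's schema may differ:

```javascript
// Hypothetical sketch of the Evaluate Output Code node's checks.
// Field names and warehouse codes are assumed, not taken from the actual workflow.
const KNOWN_WAREHOUSES = ['EAST', 'WEST', 'CENTRAL'];

function evaluateItems(items) {
  const issues = [];
  for (const item of items) {
    if (!item.sku || item.sku.trim() === '') {
      issues.push('Missing or empty SKU');
    }
    if (typeof item.quantity !== 'number' || item.quantity <= 0) {
      issues.push(`Invalid quantity: ${item.quantity}`);
    }
    if (!KNOWN_WAREHOUSES.includes(item.warehouse)) {
      issues.push(`Unknown warehouse code: ${item.warehouse}`);
    }
  }
  // Single verdict: pass only if no item raised any issue.
  return { pass: issues.length === 0, issues };
}

// Example: one clean record and one broken record.
const verdict = evaluateItems([
  { sku: 'INV-1001', quantity: 12, warehouse: 'EAST' },
  { sku: '', quantity: -4, warehouse: 'ZZZ' },
]);
console.log(verdict.pass, verdict.issues);
```

The If node then only needs to read the single `pass` field from this output — all the per-item detail stays in `issues` for the quarantine branch.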

5. Let's practice!

Let's give this a go!
