Persisting Data with Data Tables
1. Persisting Data with Data Tables
2. The Need for Persistence
Up until now, every workflow we've built has been stateless. It runs, it processes data, and when it's done, everything disappears. The next time it runs, it starts from a completely clean slate.

3. The Need for Persistence
For many automations, that's fine. But what about when we need our workflow to remember something? Maybe you want to store which records you've already processed, so you don't process them again. Or maybe you want to log every execution for auditing. That's where data tables come in.
4. n8n Data Tables
n8n provides data tables as a built-in storage feature that gives us a persistent, visual table right inside the interface. We can create tables, define columns, and then use Data Table nodes in our workflows to insert, read, update, and delete rows. Unlike a database that we'd have to set up externally, data tables are provided out of the box: we can open the Data tables tab on the Overview page and see our stored data in a spreadsheet-like view anytime. Let's see how this works!
5. Writing and Reading Data Tables
We start with a Manual Trigger that has three pinned weather records: London, Tokyo, and Berlin. This represents incoming data that we want to store so it's available between runs. To persist it, we need a data table. On the Overview page, we open the Data tables tab and create a new table called weather_log. Inside, we create a column for each field we want to store: city, temp_c, humidity, description, and date. The "created at" and "updated at" columns are included by default. Back in the workflow, we add a Data Table node, set the operation to Insert, select the weather_log table, and map each field to its column. When we execute, the node writes three rows, and we can open the Data tables tab and see them right there. Now let's switch the same node's operation to Get and enable Return All. When we execute, we get nine rows instead of three. That's because the Manual Trigger still has three pinned items flowing in, and the Data Table node runs once per input item, so Get executes three times, each time returning all three rows. To fix this, we open the node's Settings and enable Execute Once. Now when we run it again, we get our three rows of persisted data. Note that this is different from pinned data, which is just saved test data for development. Data tables are real persistent storage, visible and editable from the n8n interface.
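The nine-row surprise follows directly from per-item execution: three input items times three stored rows gives nine output items. A plain-JavaScript sketch of that behavior (sample values only, run outside n8n):

```javascript
// Illustration of n8n's per-item execution (sample data only).
const pinnedItems = ["London", "Tokyo", "Berlin"]; // three pinned trigger items
const tableRows = [
  { city: "London" },
  { city: "Tokyo" },
  { city: "Berlin" },
]; // three rows already stored in weather_log

// Without Execute Once: the Get operation runs once per input item,
// and each run returns every row in the table.
const withoutExecuteOnce = pinnedItems.flatMap(() => tableRows);
console.log(withoutExecuteOnce.length); // 9

// With Execute Once: the node runs a single time regardless of input count.
const withExecuteOnce = tableRows;
console.log(withExecuteOnce.length); // 3
```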
6. Preventing Data Table Duplicates
But what happens when we run the workflow again and some of the data already exists? We don't want duplicates. Let's extend the pattern. This time, the workflow reads the existing table first, then uses a Code node to compare the incoming records against what's already stored. If a record matches on city and date, it gets skipped; only new records pass through to the final Insert node. Let's execute and look at the Code node. Three records went in (London, Paris, and Tokyo), but only Paris came out. London and Tokyo were already in the table, so they were filtered out. If we check the Data tables tab, Paris has been added and there are no duplicates. This makes the workflow idempotent, meaning it's safe to run multiple times. The Code node does a simple array comparison; the Data Table node handles all the storage. This pattern is essential for any automation that runs on a schedule.
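Inside an n8n Code node the items would arrive via $input, but the comparison itself is just an array filter. A minimal sketch of that logic in plain JavaScript, with illustrative sample rows standing in for the table contents:

```javascript
// Sketch of the Code node's duplicate check, written as plain JavaScript
// so it runs outside n8n. The sample rows below are illustrative.
const existingRows = [
  { city: "London", date: "2024-05-01" },
  { city: "Tokyo", date: "2024-05-01" },
];

const incomingRecords = [
  { city: "London", date: "2024-05-01" },
  { city: "Paris", date: "2024-05-01" },
  { city: "Tokyo", date: "2024-05-01" },
];

// A record is a duplicate if some stored row matches on both city and date.
const isDuplicate = (record) =>
  existingRows.some(
    (row) => row.city === record.city && row.date === record.date
  );

// Only new records pass through to the Insert node.
const newRecords = incomingRecords.filter((record) => !isDuplicate(record));
console.log(newRecords); // only the Paris record remains
```

Because the check compares against whatever is already stored, running the same input through twice adds nothing the second time, which is exactly the idempotency the lesson describes.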
7. Let's practice!
In the exercises, you'll build this from the ground up: first writing and reading data, then adding the comparison step so your workflow only processes new records. By the end, you'll have an automation that can run repeatedly without creating duplicates.