Resuming after a restart
Your streaming pipeline at Global Retail Analytics picks up new CSV files from a Unity Catalog volume and writes them to Delta Lake. After a weekend outage, the cluster restarts on Monday morning. Three new files landed over the weekend. The stream processes only those three files — none of the files from the previous week are reloaded.
What allows the stream to skip previously processed files and resume exactly where it left off?
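The behavior described hinges on Structured Streaming's checkpoint: a persisted log of which input files (offsets) have already been processed, which the restarted stream reads before picking up new work. The following is a minimal pure-Python sketch of that idea, not the Spark API itself; all file and function names are illustrative.

```python
import json
import os
import tempfile

def process_new_files(landing_dir, checkpoint_path):
    """Process only files not yet recorded in the checkpoint log,
    then persist the updated log -- mimicking how a stream's
    checkpoint lets a restarted job resume where it left off."""
    seen = set()
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            seen = set(json.load(f))
    new_files = sorted(f for f in os.listdir(landing_dir) if f not in seen)
    # ... the real pipeline would transform and write these files here ...
    with open(checkpoint_path, "w") as f:
        json.dump(sorted(seen | set(new_files)), f)
    return new_files

# Demo: the first run sees every file; after a "restart",
# only files that landed in the meantime are processed.
tmp = tempfile.mkdtemp()
landing = os.path.join(tmp, "landing")
os.mkdir(landing)
ckpt = os.path.join(tmp, "checkpoint.json")
for name in ("mon.csv", "tue.csv"):
    open(os.path.join(landing, name), "w").close()
first = process_new_files(landing, ckpt)
open(os.path.join(landing, "sat.csv"), "w").close()
second = process_new_files(landing, ckpt)
```

In real Spark code, this bookkeeping is handled for you when the writer is given a `checkpointLocation` option; deleting that directory would cause the stream to reprocess everything from scratch.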
This exercise is part of the course
Data Transformation with Spark SQL in Databricks