Building a multi-task job pipeline
Your team at Global Retail Analytics has split the data pipeline into three notebooks: task1_ingest cleans the raw CSV and writes a Delta table, task2_revenue aggregates revenue by country, and task3_top_customers ranks customers by spend. All three are preloaded in the Exercises folder.
Your job is to wire them into a Databricks Job, run the pipeline, and fix any errors that come up.
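As a sketch of what the finished pipeline might look like, here is a hedged example of a multi-task job definition in the Databricks Jobs API 2.1 JSON format. The notebook paths and cluster key are assumptions for illustration; in the exercise you would point each task at the corresponding notebook in your Exercises folder. Note how `depends_on` makes task2_revenue and task3_top_customers wait for task1_ingest to finish.

```json
{
  "name": "global-retail-pipeline",
  "tasks": [
    {
      "task_key": "task1_ingest",
      "notebook_task": { "notebook_path": "/Exercises/task1_ingest" },
      "job_cluster_key": "pipeline_cluster"
    },
    {
      "task_key": "task2_revenue",
      "depends_on": [ { "task_key": "task1_ingest" } ],
      "notebook_task": { "notebook_path": "/Exercises/task2_revenue" },
      "job_cluster_key": "pipeline_cluster"
    },
    {
      "task_key": "task3_top_customers",
      "depends_on": [ { "task_key": "task1_ingest" } ],
      "notebook_task": { "notebook_path": "/Exercises/task3_top_customers" },
      "job_cluster_key": "pipeline_cluster"
    }
  ]
}
```

Because the two downstream tasks depend only on the ingest step, they can run in parallel once the Delta table is written.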
This exercise is part of the course *Data Transformation with Spark SQL in Databricks*.