Building a multi-task job pipeline
Your team at Global Retail Analytics has split the data pipeline into three notebooks: task1_ingest cleans the raw CSV and writes a Delta table, task2_revenue aggregates revenue by country, and task3_top_customers ranks customers by spend. All three are preloaded in the Exercises folder.
Your job is to wire them into a Databricks Job, run the pipeline, and fix any errors that come up.
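The wiring described above can be expressed as a multi-task job definition. Below is a minimal sketch in the style of the Databricks Jobs API 2.1 JSON format, where `task2_revenue` and `task3_top_customers` both depend on `task1_ingest` so the ingest task runs first; the cluster ID and notebook paths are placeholder assumptions, not values from the exercise:

```json
{
  "name": "retail_analytics_pipeline",
  "tasks": [
    {
      "task_key": "ingest",
      "notebook_task": { "notebook_path": "/Exercises/task1_ingest" },
      "existing_cluster_id": "YOUR-CLUSTER-ID"
    },
    {
      "task_key": "revenue",
      "depends_on": [ { "task_key": "ingest" } ],
      "notebook_task": { "notebook_path": "/Exercises/task2_revenue" },
      "existing_cluster_id": "YOUR-CLUSTER-ID"
    },
    {
      "task_key": "top_customers",
      "depends_on": [ { "task_key": "ingest" } ],
      "notebook_task": { "notebook_path": "/Exercises/task3_top_customers" },
      "existing_cluster_id": "YOUR-CLUSTER-ID"
    }
  ]
}
```

In the Databricks UI, the same structure is built by adding each notebook as a task and setting its "Depends on" field; tasks with no dependency edge between them (here, `revenue` and `top_customers`) can run in parallel once `ingest` succeeds.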
This exercise is part of the course
Data Transformation with Spark SQL in Databricks