
Building a multi-task job pipeline

Your team at Global Retail Analytics has split the data pipeline into three notebooks: task1_ingest cleans the raw CSV and writes a Delta table, task2_revenue aggregates revenue by country, and task3_top_customers ranks customers by spend. All three are preloaded in the Exercises folder.

Your job is to wire them into a Databricks Job, run the pipeline, and fix any errors that come up.
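Wiring the notebooks together means creating one job with three tasks linked by dependencies. Below is a minimal sketch of a Jobs API 2.1-style payload, built as a plain Python dict. The notebook paths, task keys, and the fan-out shape (both aggregation tasks depending only on the ingest task) are assumptions; a real job definition also needs cluster settings.

```python
def build_pipeline_payload(folder="/Workspace/Exercises"):
    """Return a hypothetical multi-task job payload.

    task1_ingest writes the Delta table, so both downstream
    tasks declare a dependency on it and wait for it to finish.
    """
    return {
        "name": "retail_analytics_pipeline",
        "tasks": [
            {
                "task_key": "task1_ingest",
                "notebook_task": {"notebook_path": f"{folder}/task1_ingest"},
            },
            {
                "task_key": "task2_revenue",
                "depends_on": [{"task_key": "task1_ingest"}],
                "notebook_task": {"notebook_path": f"{folder}/task2_revenue"},
            },
            {
                "task_key": "task3_top_customers",
                "depends_on": [{"task_key": "task1_ingest"}],
                "notebook_task": {"notebook_path": f"{folder}/task3_top_customers"},
            },
        ],
    }

payload = build_pipeline_payload()
print([t["task_key"] for t in payload["tasks"]])
```

The same structure can be entered through the Jobs UI: add each notebook as a task, then set the "Depends on" field of the two aggregation tasks to the ingest task.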

This exercise is part of the course Data Transformation with Spark SQL in Databricks.
