
Running an ETL Pipeline

Ready to run your first ETL pipeline? Let's get to it!

Here, the functions extract(), transform(), and load() have been defined for you. To run this ETL pipeline, you're going to execute each of these functions in order. If you're curious, take a peek at what the extract() function looks like.

import pandas as pd

def extract(file_name):
    print(f"Extracting data from {file_name}")
    return pd.read_csv(file_name)

This exercise is part of the course

Introduction to Data Pipelines

Exercise instructions

  • Use the extract() function to extract data from the raw_data.csv file.
  • Transform the extracted_data DataFrame using the transform() function.
  • Finally, load the transformed_data DataFrame to the cleaned_data SQL table.
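Putting the three steps together, a completed run of the pipeline might look like the sketch below. Only extract() matches the definition shown above; the transform() and load() bodies here are hypothetical stand-ins for the ones the course defines for you (a simple dropna() and a print of the target table), so the exact behavior in the exercise may differ.

```python
import pandas as pd

def extract(file_name):
    print(f"Extracting data from {file_name}")
    return pd.read_csv(file_name)

def transform(data_frame):
    # Hypothetical transformation: drop rows with missing values
    print("Transforming data")
    return data_frame.dropna()

def load(data_frame, target_table):
    # Hypothetical load: the course writes to a SQL table;
    # here we only report the destination
    print(f"Loading data to the {target_table} table")

# Create a small raw_data.csv so this sketch is self-contained
pd.DataFrame({"id": [1, 2, None], "value": [10, 20, 30]}).to_csv(
    "raw_data.csv", index=False
)

# Run the pipeline: extract, then transform, then load
extracted_data = extract(file_name="raw_data.csv")
transformed_data = transform(data_frame=extracted_data)
load(data_frame=transformed_data, target_table="cleaned_data")
```

Each step hands its result to the next, which is the core pattern the exercise asks you to reproduce.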

Hands-on interactive exercise

Try this exercise by completing the sample code below.

# Extract data from the raw_data.csv file
extracted_data = ____(file_name="raw_data.csv")

# Transform the extracted_data
transformed_data = transform(data_frame=____)

# Load the transformed_data to the cleaned_data SQL table
____(data_frame=transformed_data, target_table="cleaned_data")