Writing to a file
In the video, you saw that files are often loaded into an MPP database like Amazon Redshift in order to make them available for analysis.
The typical workflow is to write the data into columnar data files. These data files are then uploaded to a storage system, and from there they can be copied into the data warehouse. In the case of Amazon Redshift, the storage system would be S3, for example.
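To make the later steps of that workflow concrete, here is a minimal sketch of the upload-and-copy part. The bucket name, target table, IAM role ARN, and the open database connection conn are all hypothetical placeholders, not part of this exercise; the sketch assumes boto3 for the S3 upload and a DB-API connection (e.g. from redshift_connector or psycopg2) for the COPY statement.

import boto3

# Upload the Parquet file to S3 (assumes AWS credentials are configured).
s3 = boto3.client("s3")
s3.upload_file("films_pdf.parquet", "my-data-bucket", "staging/films_pdf.parquet")

# Copy the data from S3 into Redshift. The table name, bucket, and
# IAM role below are hypothetical.
copy_stmt = """
    COPY films
    FROM 's3://my-data-bucket/staging/films_pdf.parquet'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
    FORMAT AS PARQUET;
"""
with conn.cursor() as cur:
    cur.execute(copy_stmt)
conn.commit()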
The first step is to write the data to a file in the right format. For this exercise, you'll use the Apache Parquet file format.
There's a PySpark DataFrame called film_sdf and a pandas DataFrame called film_pdf in your workspace.
This exercise is part of the course
Introduction to Data Engineering
Exercise instructions
- Write the pandas DataFrame film_pdf to a parquet file called "films_pdf.parquet".
- Write the PySpark DataFrame film_sdf to a parquet file called "films_sdf.parquet".
Hands-on interactive exercise
Try to solve this exercise by completing the sample code.
# Write the pandas DataFrame to parquet
film_pdf.____("____")
# Write the PySpark DataFrame to parquet
film_sdf.____.____("____")
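One way to fill in the blanks, using pandas' DataFrame.to_parquet method and PySpark's DataFrameWriter (reached through the DataFrame's write attribute):

# Write the pandas DataFrame to parquet
film_pdf.to_parquet("films_pdf.parquet")

# Write the PySpark DataFrame to parquet
film_sdf.write.parquet("films_sdf.parquet")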