Using lazy processing
Lazy processing operations will usually return in about the same amount of time regardless of the actual quantity of data. Remember that this is because Spark does not perform any transformations until an action is requested.
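As a minimal sketch of this behavior (assuming a local SparkSession and a hypothetical file example.csv with a name column), the transformation below is only recorded in the query plan; nothing is read or computed until the count() action at the end:

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

# Defining the DataFrame and transformation returns almost immediately
df = spark.read.format('csv').options(header=True).load('example.csv')
upper_df = df.withColumn('name_upper', F.upper(df['name']))

# Only this action forces Spark to read the file and apply the transformation
print(upper_df.count())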
For this exercise, we'll define a DataFrame (aa_dfw_df) and add a couple of transformations. Note the amount of time required for the transformations to complete when they are defined versus when the data is actually queried. These differences may be short, but they will be noticeable. When working with a full Spark cluster and larger quantities of data, the difference will be more apparent.
This exercise is part of the course
Cleaning Data with PySpark
Exercise instructions
- Load the DataFrame.
- Add the transformation for F.lower() to the Destination Airport column.
- Show the DataFrame, noting the time difference for this action to complete.
Hands-on interactive exercise
Try this exercise by completing this sample code.
# Load the CSV file
aa_dfw_df = ____.____.____('csv').options(header=True).load('AA_DFW_2018.csv.gz')
# Add the airport column using the F.lower() method
aa_dfw_df = aa_dfw_df.withColumn('airport', ____(aa_dfw_df['Destination Airport']))
# Show the DataFrame
____
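One possible way to fill in the blanks is sketched below, assuming the exercise environment already provides a SparkSession named spark and pyspark.sql.functions imported as F:

# Load the CSV file (spark and F are assumed to be provided by the environment)
aa_dfw_df = spark.read.format('csv').options(header=True).load('AA_DFW_2018.csv.gz')

# Add the airport column using the F.lower() method
aa_dfw_df = aa_dfw_df.withColumn('airport', F.lower(aa_dfw_df['Destination Airport']))

# Show the DataFrame; this action triggers the actual read and transformation
aa_dfw_df.show()

Defining the DataFrame and adding the withColumn() transformation should return almost instantly; the noticeable wait happens only at show(), when Spark finally reads the file and applies the recorded transformations.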