Using lazy processing
Lazy processing operations will usually return in about the same amount of time regardless of the actual quantity of data. Remember that this is because Spark does not perform any transformations until an action is requested.

For this exercise, we'll define a DataFrame (aa_dfw_df) and add a couple of transformations. Note the amount of time required for the transformations to complete when they are defined versus when the data is actually queried. These differences may be short, but they will be noticeable. When working with a full Spark cluster and larger quantities of data, the difference will be more apparent.
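To see the timing difference described above, you can wrap the transformation and action steps in a simple timer. The sketch below is illustrative: it assumes a local SparkSession is available (in the course environment, `spark` and `F` are already provided) and uses the same `AA_DFW_2018.csv.gz` file as the exercise.

```python
import time

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

# Assumes a local SparkSession; in the course environment `spark` and `F`
# are already set up for you.
spark = SparkSession.builder.appName('lazy_demo').getOrCreate()

# Defining the DataFrame and its transformations only builds an execution
# plan, so this block returns almost immediately.
start = time.time()
df = spark.read.format('csv').options(header=True).load('AA_DFW_2018.csv.gz')
df = df.withColumn('airport', F.lower(df['Destination Airport']))
print('Transformations defined in {:.3f}s'.format(time.time() - start))

# An action such as show() forces Spark to read the file and apply the
# transformations, so this step takes noticeably longer.
start = time.time()
df.show(5)
print('Action completed in {:.3f}s'.format(time.time() - start))
```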
This exercise is part of the course Cleaning Data with PySpark.

Exercise instructions
- Load the DataFrame.
- Add the transformation for `F.lower()` to the `Destination Airport` column.
- Show the DataFrame, noting the time difference for this action to complete.
Hands-on interactive exercise

Complete this exercise by filling in the sample code below.
# Load the CSV file
aa_dfw_df = ____.____.____('csv').options(Header=True).load('AA_DFW_2018.csv.gz')
# Add the airport column using the F.lower() method
aa_dfw_df = aa_dfw_df.withColumn('airport', ____(aa_dfw_df['Destination Airport']))
# Show the DataFrame
____
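For reference, one possible completion of the scaffold is sketched below. It assumes the course environment already provides a SparkSession named `spark` and has imported `pyspark.sql.functions` as `F`.

```python
# One possible completion, assuming `spark` (a SparkSession) and
# `pyspark.sql.functions as F` are already available.

# Load the CSV file
aa_dfw_df = spark.read.format('csv').options(Header=True).load('AA_DFW_2018.csv.gz')

# Add the airport column using the F.lower() method
aa_dfw_df = aa_dfw_df.withColumn('airport', F.lower(aa_dfw_df['Destination Airport']))

# Show the DataFrame
aa_dfw_df.show()
```

Defining `aa_dfw_df` and adding the `airport` column should return almost instantly; the call to `show()` is the action that triggers the actual read and transformation, which is where the time is spent.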