
Writing Spark configurations

Now that you've reviewed some of the Spark configurations on your cluster, you want to modify some of the settings to tune Spark to your needs. You'll import some data to verify that your changes have affected the cluster.

The spark.sql.shuffle.partitions configuration is initially set to the default value of 200 partitions.

The spark object is available for use. A file named departures.txt.gz is available for import. An initial DataFrame containing the distinct rows from departures.txt.gz is available as departures_df.
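
Before making any changes, you can inspect the current setting through the SparkSession's runtime configuration. A minimal sketch, assuming the spark object provided by the exercise:

# Read the current shuffle-partition setting (returned as a string, defaults to '200')
print(spark.conf.get('spark.sql.shuffle.partitions'))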

This exercise is part of the course Cleaning Data with PySpark.

Exercise instructions

  • Store the number of partitions in departures_df in the variable before.
  • Change the spark.sql.shuffle.partitions configuration to 500 partitions.
  • Recreate the departures_df DataFrame reading the distinct rows from the departures file.
  • Print the number of partitions from before and after the configuration change.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Store the number of partitions in variable
before = departures_df.____

# Configure Spark to use 500 partitions
____('spark.sql.shuffle.partitions', ____)

# Recreate the DataFrame using the departures data file
departures_df = spark.read.csv('departures.txt.gz').____

# Print the number of partitions for each instance
print("Partition count before change: %d" % ____)
print("Partition count after change: %d" % ____)