Filter and Count

The RDD transformation filter() returns a new RDD containing only the elements that satisfy a given function, which makes it useful for narrowing down large datasets by keyword. In this exercise, you'll select the lines containing the keyword Spark from fileRDD, an RDD consisting of the lines of text in the README.md file. Next, you'll count the total number of lines containing the keyword Spark and finally print the first 4 lines of the filtered RDD.
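
As a quick illustration before the exercise, here is a minimal sketch of filter() on a small RDD; the names numbersRDD and evenRDD are made up for this example, and only sc from your workspace is assumed:

# Build a small example RDD and keep only the even numbers
numbersRDD = sc.parallelize([1, 2, 3, 4, 5, 6])
evenRDD = numbersRDD.filter(lambda x: x % 2 == 0)
print(evenRDD.collect())  # [2, 4, 6]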

Remember, you already have a SparkContext sc, file_path, and fileRDD available in your workspace.

This exercise is part of the course Big Data Fundamentals with PySpark.

Exercise instructions

  • Create a filter() transformation to select the lines containing the keyword Spark.
  • Count how many lines in fileRDD_filter contain the keyword Spark.
  • Print the first four lines of the resulting RDD.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Filter the fileRDD to select lines with Spark keyword
fileRDD_filter = fileRDD.filter(lambda line: 'Spark' in ____)

# How many lines in fileRDD_filter contain the keyword Spark?
print("The total number of lines with the keyword Spark is", fileRDD_filter.____())

# Print the first four lines of fileRDD_filter
for line in fileRDD_filter.____(____):
  print(line)
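
One possible completion of the blanks, using the standard PySpark RDD methods filter(), count(), and take(); this is a sketch that assumes fileRDD is already loaded in your workspace:

# Filter fileRDD to select lines containing the keyword Spark
fileRDD_filter = fileRDD.filter(lambda line: 'Spark' in line)

# Count the lines in fileRDD_filter that contain the keyword Spark
print("The total number of lines with the keyword Spark is", fileRDD_filter.count())

# Print the first four lines of fileRDD_filter
for line in fileRDD_filter.take(4):
  print(line)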