ReduceByKey and Collect
One of the most popular pair RDD transformations is reduceByKey(), which operates on (key, value) pairs and merges the values for each key. In this exercise, you'll first create a pair RDD from a list of tuples, then combine the values that share the same key, and finally print out the result.
Remember, you already have a SparkContext sc available in your workspace.
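For illustration, here is a minimal sketch of how reduceByKey() merges values that share a key; the sample data and the summing lambda are assumptions made for this illustration only:

# Minimal sketch: reduceByKey() merges values that share the same key.
# Assumes the SparkContext sc mentioned above is available.
sample = sc.parallelize([("a", 1), ("b", 2), ("a", 3)])
merged = sample.reduceByKey(lambda x, y: x + y)  # sums values per key
print(merged.collect())  # [('a', 4), ('b', 2)] (order may vary)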
This exercise is part of the course
Big Data Fundamentals with PySpark
Exercise instructions
- Create a pair RDD named Rdd with the tuples (1, 2), (3, 4), (3, 6), (4, 5).
- Transform Rdd with reduceByKey() into a pair RDD Rdd_Reduced by adding the values with the same key.
- Collect the contents of the pair RDD Rdd_Reduced and iterate over them to print the output.
Interactive hands-on exercise
Try to solve this exercise by completing the sample code.
# Create PairRDD Rdd with key value pairs
Rdd = sc.parallelize([____])
# Apply reduceByKey() operation on Rdd
Rdd_Reduced = Rdd.reduceByKey(lambda x, y: ____)
# Iterate over the result and print the output
for num in Rdd_Reduced.____:
    print("Key {} has {} Counts".format(____, num[1]))