Loading data in PySpark shell
In PySpark, we express our computation through operations on distributed collections that are automatically parallelized across the cluster. In the previous exercise, you saw an example of loading a list as a parallelized collection; in this exercise, you'll load data from a local file in the PySpark shell.
Remember, you already have a SparkContext sc and a file_path variable (the path to the README.md file) available in your workspace.
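As a quick recap of the previous exercise, here is a minimal sketch of loading a list as a parallelized collection (the list contents below are purely illustrative):

# Recap: distribute a local Python list across the cluster as an RDD
numbers = sc.parallelize([1, 2, 3, 4, 5])

# Count the elements in the distributed collection
print(numbers.count())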
This exercise is part of the course
Big Data Fundamentals with PySpark
Exercise instructions
- Load a local text file README.md in the PySpark shell.
Hands-on interactive exercise
Try to solve this exercise by completing the sample code.
# Load a local file into PySpark shell
lines = sc.____(file_path)
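For reference, a minimal sketch of how the completed call typically looks, assuming sc and file_path are defined as described above: SparkContext.textFile reads a text file and returns an RDD with one element per line.

# Load a local text file into an RDD of lines
lines = sc.textFile(file_path)

# Sanity check: number of lines and a peek at the first few
print(lines.count())
print(lines.take(3))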