In this chapter you will learn how to create and query a SQL table in Spark. Spark SQL brings the expressiveness of SQL to Spark. You will also learn how to use SQL window functions. A window function performs a calculation across the rows related to the current row, which greatly simplifies results that are difficult to express using only joins and traditional aggregations. We'll use window functions to compute running sums, running differences, and other operations that are awkward to express in basic SQL.
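To give a flavor of what a window function looks like, here is a minimal PySpark sketch of a running sum. The sales data, its columns, and the query are illustrative assumptions, not the chapter's own example.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data, not the chapter's dataset.
sales = spark.createDataFrame(
    [("2023-01-01", 10.0), ("2023-01-02", 20.0), ("2023-01-03", 5.0)],
    ["day", "amount"],
)
sales.createOrReplaceTempView("sales")

# Running sum: each row's total covers every row up to and including it, ordered by day.
spark.sql("""
    SELECT day,
           amount,
           SUM(amount) OVER (
               ORDER BY day
               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
           ) AS running_total
    FROM sales
""").show()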
In this chapter, you will load natural language text and then apply a moving window analysis to find frequent word sequences.
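As a rough preview, and not the chapter's own code, a moving window over tokenized words can be expressed with a lag window function. The (position, word) dataframe below is a hypothetical stand-in for the chapter's data.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

# Hypothetical tokenized text: one word per row, with its position in the document.
words = spark.createDataFrame(
    [(1, "to"), (2, "be"), (3, "or"), (4, "not"), (5, "to"), (6, "be")],
    ["position", "word"],
)

# Slide a two-word window over the text with lag(), then count each word pair.
w = Window.orderBy("position")
bigrams = (
    words.withColumn("prev_word", F.lag("word", 1).over(w))
         .where(F.col("prev_word").isNotNull())
         .groupBy("prev_word", "word")
         .count()
         .orderBy(F.desc("count"))
)
bigrams.show()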
In the previous chapters you learned how to use the expressiveness of SQL window functions. That expressiveness makes it important to understand how to properly cache dataframes and SQL tables. It is also important to know how to evaluate your application's performance; you will learn how to do this using the Spark UI. You'll also learn a best practice for logging in Spark. Spark SQL brings with it another useful tool for diagnosing query performance issues: the query execution plan. You will learn how to use the execution plan to understand the provenance of a dataframe.
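Here is a hedged sketch of the kind of tooling involved, assuming a throwaway dataframe rather than the chapter's dataset: caching a dataframe, caching a SQL table, and printing a query execution plan.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# A throwaway dataframe stands in for real data here.
df = spark.range(1_000_000).withColumn("bucket", F.col("id") % 10)

# Cache the dataframe so repeated queries reuse the in-memory copy.
df.cache()
df.count()  # a first action materializes the cache

# Print the query execution plan for an aggregation over the cached data.
df.groupBy("bucket").count().explain()

# SQL tables can be cached as well.
df.createOrReplaceTempView("numbers")
spark.sql("CACHE TABLE numbers")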
Previous chapters provided you with the tools for loading raw text, tokenizing it, and extracting word sequences. These skills are valuable for analysis on their own, but they also prepare text for machine learning. In this chapter, what you've learned comes together as you use logistic regression to classify text. By its conclusion, you will have loaded raw natural language text data and used it to train a text classifier.
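As a non-authoritative sketch of how such a classifier might be assembled with Spark ML, assuming a tiny hypothetical labeled dataset (the chapter works with its own corpus and feature choices):

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.getOrCreate()

# Tiny hypothetical labeled texts; the chapter uses a real corpus.
train = spark.createDataFrame(
    [("spark makes big data simple", 1.0),
     ("the cat sat on the mat", 0.0)],
    ["text", "label"],
)

# Tokenize the raw text, turn tokens into numeric features, then fit the classifier.
pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="words"),
    HashingTF(inputCol="words", outputCol="tf"),
    IDF(inputCol="tf", outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="label"),
])
model = pipeline.fit(train)
model.transform(train).select("text", "prediction").show(truncate=False)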