21. Schemas
● Schema Pros
○ Enable column names instead of column positions
○ Queries using SQL (or DataFrame) syntax
○ Make your data more structured
● Schema Cons
○ ??
○ ??
○ ??
22. Schemas
● Schema Pros
○ Enable column names instead of column positions
○ Queries using SQL (or DataFrame) syntax
○ Make your data more structured
● Schema Cons
○ Make your data more structured
○ Reduce future flexibility (app is more fragile)
○ Y2K
24. HiveContext
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
FYI - a less preferred alternative: org.apache.spark.sql.SQLContext
25. DataFrames
Primary abstraction in Spark SQL
Evolved from SchemaRDD
Exposes functionality via SQL or DF API
SQL for developer productivity (ETL, BI, etc)
DF for data scientist productivity (R / Pandas)
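The two styles side by side, as a minimal sketch in the shell (assumes a DataFrame df registered as the table "stocks"; the Date and Close column names are illustrative):

```scala
// SQL style: write the query as a string against a registered table.
val viaSql = sqlContext.sql("SELECT Date, Close FROM stocks WHERE Close > 100")

// DataFrame API style: the same query via method calls.
val viaDf = df.select("Date", "Close").filter(df("Close") > 100)
```

Both produce the same logical plan, so the choice is about which audience reads the code, not about performance.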
26. Live Coding - Spark-Shell
Maven Packages for CSV and Avro
export SPARK_PKGS="org.apache.hadoop:hadoop-aws:2.7.1,\
com.amazonaws:aws-java-sdk-s3:1.10.30,\
com.databricks:spark-csv_2.10:1.3.0,\
com.databricks:spark-avro_2.10:2.0.1"
spark-shell --packages $SPARK_PKGS
27. Live Coding - Loading CSV
val path = "AAPL.csv"
val df = sqlContext.read.
format("com.databricks.spark.csv").
option("header", "true").
option("inferSchema", "true").
load(path)
df.registerTempTable("stocks")
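Once the temp table is registered, it can be queried with plain SQL; a minimal sketch (the Date and Close columns are assumed to come from the CSV header):

```scala
// Query the "stocks" temp table registered above.
val top = sqlContext.sql(
  "SELECT Date, Close FROM stocks ORDER BY Close DESC LIMIT 5")
top.show()
```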
28. Caching
If I run a query twice, how many times will the
data be read from disk?
29. Caching
If I run a query twice, how many times will the
data be read from disk?
1. RDDs are lazy.
2. Therefore the data will be read twice.
3. Unless you cache the RDD, all transformations in the RDD will execute on each action.
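A sketch of the fix, continuing the stocks example above:

```scala
// Without cache: each action re-reads AAPL.csv from disk.
// With cache: the first action reads and materializes; later actions reuse it.
df.cache()   // lazy - nothing is read yet
df.count()   // action #1: reads from disk and populates the cache
df.count()   // action #2: served from the cache, no disk read
```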
32. Caching Comparison
Caching Spark SQL DataFrames vs
caching plain non-DataFrame RDDs
● RDDs cached at level of individual records
● DataFrames know more about the data.
● DataFrames are cached using an in-memory
columnar format.
33. Caching Comparison
What is the difference between these:
(a) sqlContext.cacheTable("df_table")
(b) df.cache
(c) sqlContext.sql("CACHE TABLE df_table")
40. Schema Inference
Infer schema of JSON files:
● By default it scans the entire file.
● It finds the broadest type that will fit a field.
● This is an RDD operation so it happens fast.
Infer schema of CSV files:
● CSV parser uses the same logic as the JSON parser.
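Inference in action, as a sketch (assumes a people.json file exists; the field names are illustrative):

```scala
// Scans the file and picks the broadest type that fits each field.
val people = sqlContext.read.json("people.json")
people.printSchema()   // shows the inferred field names and types
```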
41. User Defined Functions
How do you apply a “UDF”?
● Import types (StringType, IntegerType, etc)
● Create UDF (in Scala)
● Apply the function (in SQL)
Notes:
● UDFs can take single or multiple arguments
● registerFunction takes an optional second argument: the return type
42. Live Coding - UDF
● Import types (StringType, IntegerType, etc)
● Create UDF (in Scala)
● Apply the function (in SQL)
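The three steps might look like this in the shell, continuing the stocks example above (the UDF name yearOf and the Date string format are assumptions for illustration):

```scala
// Step 1: import types (needed when a UDF's SQL type must be spelled out)
import org.apache.spark.sql.types.{StringType, IntegerType}

// Step 2: create and register the UDF in Scala
// (for a Scala lambda the return type is inferred)
sqlContext.udf.register("yearOf", (date: String) => date.take(4).toInt)

// Step 3: apply the function in SQL against the registered "stocks" table
sqlContext.sql("SELECT yearOf(Date) AS year, Close FROM stocks").show()
```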
43. Live Coding - Autocomplete
Find all types available for SQL schemas +UDF
Types and their meanings:
StringType = String
IntegerType = Int
DoubleType = Double
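These types are what you would use to declare a schema explicitly instead of inferring one; a minimal sketch, reusing the AAPL.csv example (the symbol and close column names are illustrative):

```scala
import org.apache.spark.sql.types.{StructType, StructField, StringType, DoubleType}

// An explicit schema: column names paired with the SQL types above.
val schema = StructType(Seq(
  StructField("symbol", StringType, nullable = true),
  StructField("close",  DoubleType, nullable = true)))

// Passing a schema to the reader skips inference (and the full file scan).
val stocks = sqlContext.read.
  format("com.databricks.spark.csv").
  option("header", "true").
  schema(schema).
  load("AAPL.csv")
```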