Abstract:
Data exploration often requires running aggregation and slice-and-dice queries on data pulled from disparate sources. You may want to identify distribution patterns, outliers, etc., and aid the feature selection process as you train your predictive models. As you begin to understand your data, you want to ask ad hoc questions expressed through your visualization tool (which typically translates them to SQL queries), study the results, and iteratively explore the data set through more queries. Unfortunately, even when data sets fit in memory, computations over large data sets take time, breaking the train of thought and increasing time to insight. We know Spark can be fast through its in-memory parallel processing, but Spark 1.x isn't quite there. Spark 2.0 promises up to 10X better speed than its predecessor and ushers in some impressive improvements to interactive query performance. We first explore these advances - compiling the query plan to eliminate virtual function calls, and other improvements in the Catalyst engine. We compare the performance to other popular query processing engines by studying the Spark query plans. We then go through SnappyData (an open source project that integrates Spark with a database offering OLTP, OLAP and stream processing in a single cluster), where we use smarter data colocation and synopsis data structures (e.g. stratified sampling) to dramatically cut down on both memory requirements and query latency. We explain the key concepts in summarizing data using structures like stratified samples by walking through examples in Apache Zeppelin notebooks (an open source visualization tool for Spark) and demonstrate how you can explore massive data sets with just your laptop's resources while achieving remarkable speeds.
Bio:
Jags Ramnarayan is a founder and the CTO of SnappyData. Previously, Jags was the Chief Architect for “fast data” products at Pivotal and served in the extended leadership team of the company. At Pivotal and previously at VMWare, he led the technology direction for GemFire and other distributed in-memory products.
Explore big data at the speed of thought with Spark 2.0 and SnappyData
1. Explore big data at speed of thought
with Spark 2.0 and SnappyData
www.snappydata.io
Jags Ramnarayan
CTO, Co-founder @ SnappyData
2. Our Pedigree
SnappyData SpinOut
● New Spark-based open source
project started by Pivotal
GemFire founders+engineers
● Decades of in-memory data
management experience
● Focus on real-time, operational
analytics: Spark inside an
OLTP+OLAP database
Funded by Pivotal, GE, GTD Capital
3. Our Mission
Spark Executor Disparate data formats
… JSON, CSV, Parquet..
DB Tier
(NoSQL, SQL, ..)
Spark Cluster is for COMPUTE
Spark
Jobs
S3, HDFS, Files…
Ephemeral, read-only STATE
Spark is a Compute engine that works with disparate databases
4. Our Mission – Spark cluster is also an Operational DB
Spark Executor
Spark Cluster is for COMPUTE
Spark
Jobs
S3, HDFS, Files…
Spark read-only cache
Deep fusion of Spark with hybrid in-memory database – OLTP, OLAP
SnappyData
- Support mutability, transactions
- Point lookups, updates
- Higher performance, less complex
- SQL compliant (not just selects)
- HA (replication across geos)
- Persistence: backup, recovery
- Far fewer resources (synopses)
5. Focus for this talk
• Operational Analytics – interactive analytic query processing
• Improvements in Spark SQL performance
• Why is in-memory analytics still challenging?
• The SnappyData solution – brief overview (will not dive into Hybrid DB)
• Synopses Data Engine – focus on Stratified sampling
• Demo using Zeppelin
• Q&A
6. DataFrame(DF) and Query plan in Spark
• Distributed data organized as named columns
- Similar to R/Python DataFrame
• But, with richer transformations, optimizations
• Can be created from many disparate sources
• Any SQL in Spark, when compiled, is expressed as transformations on DFs
(Query plan diagram: Scan, Filter, Project, Join, Aggregate operators over the data, each stage producing a DataFrame)
select AVG(ArrDelay) arrivalDelay,
UniqueCarrier carrier from airline JOIN history
where <filter> group by UniqueCarrier
7. Is this fast enough?
- Spark 1.6, MacBook Pro 4 core, 2.8 GHz Intel i7, enough RAM
- Airline OnTime performance data set, 105 million records

Query | Parquet files in OS buffer | Managed in Spark memory
select AVG(ArrDelay) from airline | ~3 seconds | ~2 seconds
select AVG(ArrDelay) arrivalDelay, UniqueCarrier carrier from airline group by UniqueCarrier order by UniqueCarrier | ~10 seconds | ~6 seconds
8. Spark 1.6 query plan
What is expensive?
Scan over 105 million Integers
select AVG(ArrDelay) from airline
Shuffle results from each partition so we can compute Avg across all partitions - cheap in this case … only 11 partitions
9. How did Spark 2.0 do?
- Spark 2.0, MacBook Pro 4 core, 2.8 GHz Intel i7, enough RAM
- Airline OnTime performance data set, 105 million records

Query | Parquet files in OS buffer | Managed in Spark memory
select AVG(ArrDelay) from airline | ~3 seconds | ~600 milliseconds

More than 3X faster than Spark 1.6
10. Spark 2.0 query plan
What is different?
Scan over 105 million integers is much faster now
Shuffle results from each partition so we can compute Avg across all partitions - cheap in this case … only 11 partitions
select AVG(ArrDelay) from airline
11. Whole Stage Code Generation
- Each operator implemented using functions
- And functions imply chasing pointers … expensive
- Code generation:
-- Remove virtual function calls
-- Arrays and variables instead of objects
-- Capitalize on modern CPU cache
Aggregate
Filter
Scan
Project
How to remove complexity? Add a layer
How to improve perf? Remove a layer
Filter() {
  getNextRow() {
    row = child.getNextRow()  // get a row from Scan
    apply filter condition
    if true: return row
  }
}
Scan() {
  getNextRow() {
    return row from fileInputStream
  }
}
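The pointer-chasing cost above can be seen even in a toy interpreter. A minimal Python sketch (hypothetical, not Spark's actual generated code) contrasting the Volcano-style operator chain, where every operator pulls rows from its child one call at a time, with the single fused loop that whole-stage code generation collapses the plan into:

```python
data = [5, -3, 12, 7, -1, 9]

# Operator-at-a-time: each operator pulls rows from its child with one
# call per row -- the per-row "virtual function call" cost the slide cites.
def scan(rows):
    for row in rows:
        yield row

def filt(child, pred):
    for row in child:
        if pred(row):
            yield row

def agg_sum(child):
    total = 0
    for row in child:
        total += row
    return total

chained = agg_sum(filt(scan(data), lambda r: r > 0))

# Whole-stage style: the same plan fused into one tight loop over local
# variables -- no per-row calls between operators, cache-friendly.
def fused(rows):
    total = 0
    for row in rows:
        if row > 0:
            total += row
    return total

assert chained == fused(data) == 33
```

Both produce the same answer; the fused form simply removes the call boundaries between operators, which is what the generated code does with arrays and locals instead of objects.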
13. Good enough? Hitting the CPU Wall?
select
count(*) , airlineName
From history t1, current t2, airports t3
Where t1 Join t2 Join t3
group by description
order by count desc limit 8
Distributed Joins can be very expensive
(Chart: response time in seconds vs. concurrency, from 1 to 10 concurrent users; response times climb toward 200 seconds as concurrency grows.)
14. Moving, Copying costs
• Aggregations – GroupBy, MapReduce
• Joins with other streams, Reference data
Shuffle costs (copying, serialization)
Excessive copying in Java-based scale-out stores
15. Challenges with In-memory Analytics
- DRAM is still relatively expensive for the deluge of data
- Analytics in the cloud requires fluid data movement
-- How do you move large volumes to/from clouds?
16. Use statistical techniques to shrink data?
• Most apps are happy to trade off 1% accuracy for a 200x speedup!
• Can usually get a 99.9% accurate answer by looking at only a tiny fraction of the data!
• Often can make perfectly accurate decisions without perfectly accurate answers!
• A/B testing, visualization, ...
• The data itself is usually noisy
• Processing the entire data set doesn't necessarily mean exact answers!
• Inference is probabilistic anyway
17. SnappyData
A Hybrid Open source system for Transactions, Analytics,
Streaming
(https://github.com/SnappyDataInc/snappydata)
18. SnappyData – In-memory Hybrid DB with Spark
A single unified cluster: OLTP + OLAP + Streaming for real-time analytics
- Spark: batch design, high throughput; rapidly maturing
- Store: real-time design; low latency, HA, concurrency; matured over 13 years
Vision: drastically reduce the cost and complexity in modern big data
19. Maintain recent data in-memory, lazily fetch from source
(Architecture diagram: Snappy Data Server – Spark Executor + Store. In-memory compute and state hold current operational data plus synopses data, serving interactive analytic queries through the Spark API ++ – Java, Scala, Python, R, REST.)
- Process and store streams from Kafka
- Batch compute over reference data
- Lazy write, fetch on demand from external data: RDB, HDFS, S3, MPP DB (history data)
20. Realizing ‘speed-of-thought’ Analytics
(Architecture diagram: Snappy Data Server – Spark Executor + Store. In-memory compute and state over a hybrid store – rows, columnar, index, synopses – with overflow and local persistence.)
- Stream processing over Kafka queues (partitions)
- Batch compute; process via Spark or SQL programs
- Reference data in RDB; history in HDFS / MPP DB
- Interactive analytic queries (SQL, JDBC, ODBC) through the Spark API ++ – Java, Scala, Python, R, REST
21. Fast, Fewer resources, Flexible
• Fast
- Stream, ingested data colocated on shared key
- Tables colocated on shared key
- Far less copying, serialization
- Improvements to vectorization (20X faster than Spark)
• Use less memory, CPU
- Maintain only "hot/active" data in RAM
- Summarize all data using synopses
• Flexible
- Spark. Enough said.
22. Features
- Deeply integrated database for Spark
- 100% compatible with Spark
- Extensions for Transactions (updates), SQL stream processing
- Extensions for High Availability
- Approximate query processing for interactive OLAP
- OLTP+OLAP Store
- Replicated and partitioned tables
- Tables can be Row or Column oriented (in-memory & on-disk)
- SQL extensions for compatibility with SQL Standard
- create table, view, indexes, constraints, etc
25. Uniform (Random) Sampling

Original table (AdImpressions):
ID | Advertiser | Geo | Bid
1 | adv10 | NY | 0.0001
2 | adv10 | VT | 0.0005
3 | adv20 | NY | 0.0002
4 | adv10 | NY | 0.0003
5 | adv20 | NY | 0.0001
6 | adv30 | VT | 0.0001

Uniform sample:
ID | Advertiser | Geo | Bid | Sampling Rate
3 | adv20 | NY | 0.0002 | 1/3
5 | adv20 | NY | 0.0001 | 1/3

SELECT avg(bid) FROM AdImpressions WHERE geo = 'VT'
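The slide's uniform sample can be sketched in a few lines of Python (a hypothetical illustration using the six-row table from the slide, not SnappyData code). Note the weakness the slide is driving at: a uniform sample may contain no VT rows at all, so the `geo = 'VT'` query can come back empty:

```python
import random

random.seed(42)
ad_impressions = [
    (1, "adv10", "NY", 0.0001),
    (2, "adv10", "VT", 0.0005),
    (3, "adv20", "NY", 0.0002),
    (4, "adv10", "NY", 0.0003),
    (5, "adv20", "NY", 0.0001),
    (6, "adv30", "VT", 0.0001),
]

# Uniform sampling keeps every row with the same probability (rate 1/3),
# regardless of its Geo value.
sample = random.sample(ad_impressions, k=len(ad_impressions) // 3)

# The estimate for "WHERE geo = 'VT'" uses only matching sampled rows;
# with just 2 VT rows in the base table, the sample may contain none.
vt_bids = [bid for (_, _, geo, bid) in sample if geo == "VT"]
estimate = sum(vt_bids) / len(vt_bids) if vt_bids else None
```

Missing rare groups entirely is exactly what the stratified sample on the following slide fixes.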
26. Uniform (Random) Sampling

Original table (AdImpressions):
ID | Advertiser | Geo | Bid
1 | adv10 | NY | 0.0001
2 | adv10 | VT | 0.0005
3 | adv20 | NY | 0.0002
4 | adv10 | NY | 0.0003
5 | adv20 | NY | 0.0001
6 | adv30 | VT | 0.0001

Larger uniform sample:
ID | Advertiser | Geo | Bid | Sampling Rate
3 | adv20 | NY | 0.0002 | 2/3
5 | adv20 | NY | 0.0001 | 2/3
1 | adv10 | NY | 0.0001 | 2/3
2 | adv10 | VT | 0.0005 | 2/3

SELECT avg(bid) FROM AdImpressions WHERE geo = 'VT'
27. Stratified Sampling

Original table (AdImpressions):
ID | Advertiser | Geo | Bid
1 | adv10 | NY | 0.0001
2 | adv10 | VT | 0.0005
3 | adv20 | NY | 0.0002
4 | adv10 | NY | 0.0003
5 | adv20 | NY | 0.0001
6 | adv30 | VT | 0.0001

Stratified sample on Geo:
ID | Advertiser | Geo | Bid | Sampling Rate
3 | adv20 | NY | 0.0002 | 1/4
2 | adv10 | VT | 0.0005 | 1/2

SELECT avg(bid) FROM AdImpressions WHERE geo = 'VT'
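Stratified sampling can be sketched as: group rows by the stratification column, then sample within each group so every group keeps at least one row. A hypothetical Python sketch over the slide's table (the per-stratum rate here is a made-up 1/2, not SnappyData's policy):

```python
import random
from collections import defaultdict

random.seed(7)
ad_impressions = [
    (1, "adv10", "NY", 0.0001),
    (2, "adv10", "VT", 0.0005),
    (3, "adv20", "NY", 0.0002),
    (4, "adv10", "NY", 0.0003),
    (5, "adv20", "NY", 0.0001),
    (6, "adv30", "VT", 0.0001),
]

# Group rows by the stratification column (Geo).
strata = defaultdict(list)
for row in ad_impressions:
    strata[row[2]].append(row)

# Sample within each stratum, keeping at least one row per stratum, and
# record each stratum's sampling rate for unbiased re-scaling later.
sample, rates = [], {}
for geo, rows in strata.items():
    k = max(1, len(rows) // 2)
    rates[geo] = k / len(rows)
    sample.extend(random.sample(rows, k))

# AVG(bid) WHERE geo = 'VT' now always has VT rows to work with.
vt_bids = [bid for (_, _, geo, bid) in sample if geo == "VT"]
estimate = sum(vt_bids) / len(vt_bids)
```

Unlike the uniform sample, the rare VT stratum is guaranteed representation, so the query stays answerable from the sample.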
28. Value of Sampling grows with volume
Select avg(Bid), Advertiser from T1 group by Advertiser
Select avg(Bid), Advertiser from T1 group by Advertiser with error 0.1
(Chart: speed/accuracy tradeoff – error (%) vs. execution time. Interactive queries need ~2 sec responses, while executing on the entire dataset takes ~30 mins. At 1% error, execution time drops from 100 secs to 2 secs.)
29. Query execution with accuracy guarantee
Parse query → can the query be executed on samples?
- Yes (recent time window, computable from samples, within error constraints): in-memory execution with error bar → response
- No (point query on history, outlier query, very complex query): execute in parallel on the base table → response
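The "execution with error bar" branch can be illustrated with a plain normal-approximation confidence interval for a sampled mean. A hedged Python sketch (SnappyData's actual error estimation may differ; the population, sample size, and 0.2 error constraint are made-up values):

```python
import math
import random
import statistics

random.seed(1)
# Stand-in "base table" column and a uniform sample of 1% of it.
population = [random.gauss(10.0, 2.0) for _ in range(100_000)]
sample = random.sample(population, 1_000)

# 95% confidence interval for the mean via the normal approximation.
mean = statistics.fmean(sample)
stderr = statistics.stdev(sample) / math.sqrt(len(sample))
half_width = 1.96 * stderr
ci_low, ci_high = mean - half_width, mean + half_width

# The planner's decision: if the error bar fits the user's constraint,
# answer from the sample; otherwise fall back to the base table.
within_constraint = half_width < 0.2
```

The same check maps onto the flow above: a tight enough interval means the sample path returns `mean ± half_width`; too wide, and the query runs on the base table instead.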
30. Synopses Data Engine Features
• Support for uniform sampling
• Support for stratified sampling
- Solutions exist for stored data (BlinkDB)
- SnappyData works for infinite streams of data too
• Support for exponentially decaying windows over time
• Support for synopses
- Top-K queries, heavy hitters, outliers, ...
• [future] Support for joins
• Workload mining (http://CliffGuard.org)
31. Sketching techniques
● Sampling not effective for outlier detection
○ MAX/MIN etc
● Other probabilistic structures like CMS, heavy hitters, etc
● SnappyData implements Hokusai
○ Capturing item frequencies in timeseries
● Design permits TopK queries over arbitrary time intervals
(Top100 popular URLs)
SELECT pageURL, count(*) frequency FROM Table
WHERE …. GROUP BY ….
ORDER BY frequency DESC
LIMIT 100
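A minimal Count-Min Sketch illustrates the "CMS" structure mentioned above (a hypothetical sketch for illustration, not SnappyData's Hokusai implementation): it estimates item frequencies in bounded space and may overcount on hash collisions but never undercounts.

```python
import hashlib

class CountMinSketch:
    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, item):
        # One hash per row, derived by prefixing the row id.
        for row in range(self.depth):
            digest = hashlib.blake2b(f"{row}:{item}".encode()).digest()
            yield row, int.from_bytes(digest[:8], "big") % self.width

    def add(self, item, count=1):
        for row, col in self._buckets(item):
            self.table[row][col] += count

    def estimate(self, item):
        # The minimum across rows bounds the overcount from collisions.
        return min(self.table[row][col] for row, col in self._buckets(item))

cms = CountMinSketch()
for url in ["/home"] * 50 + ["/login"] * 30 + ["/about"] * 5:
    cms.add(url)
```

A TopK query like the slide's "Top 100 popular URLs" would pair a structure like this with a small heap of candidate keys, querying `estimate` to rank them.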
33. Free Cloud trial service – Project iSight
● Free AWS/Azure credits for folks to try out SnappyData
● One click launch of private SnappyData cluster with Zeppelin
● Multiple notebooks with comprehensive description of concepts and value
● Bring your own data sets to try 'Instant visualization' using Synopses data
Send email to chomp@snappydata.io to be notified. Anticipate release in next 2 weeks
34. Unified OLAP/OLTP streaming w/ Spark
● Far fewer resources: TB problem becomes GB.
○ CPU contention drops
● Far less complex
○ single cluster for stream ingestion, continuous queries, interactive queries and machine learning
● Much faster
○ compressed data managed in distributed memory in columnar form reduces volume and is much more responsive
35. www.snappydata.io
SnappyData is Open Source
● Ad Analytics example/benchmark -
https://github.com/SnappyDataInc/snappy-poc
● https://github.com/SnappyDataInc/snappydata
● Learn more www.snappydata.io/blog
● Connect:
○ twitter: www.twitter.com/snappydata
○ facebook: www.facebook.com/snappydata
○ slack: http://snappydata-slackin.herokuapp.com
37. Use Case Patterns
1. Operational Analytics DB
- Caching for Analytics over disparate sources
- Federate queries between samples and the backend
2. Stream analytics for Spark
Process streams, transform, real-time scoring, store, query
3. In-memory transactional store
Highly concurrent apps, SQL cache, OLTP + OLAP
39. Snappy Spark Cluster Deployment topologies
• Unified cluster: Snappy store and Spark Executor share the JVM memory; reference-based access – zero copy
• Split cluster: SnappyStore is isolated but uses the same column format as Spark for high throughput
40. Simple API – Spark Compatible
● Access a table as a DataFrame; the catalog is automatically recovered
● Store RDD[T]/DataFrame in SnappyData tables
● Access from remote SQL clients
● Additional API for updates, inserts, deletes

// Save a DataFrame using the Snappy or Spark context …
context.createExternalTable("T1", "ROW", myDataFrame.schema, props)

// Save using the DataFrame API
dataDF.write.format("ROW").mode(SaveMode.Append).options(props).saveAsTable("T1")

val impressionLogs: DataFrame = context.table(colTable)
val campaignRef: DataFrame = context.table(rowTable)
val parquetData: DataFrame = context.table(parquetTable)
<… Now use any of the DataFrame APIs … >
41. Extends Spark
CREATE [TEMPORARY] TABLE [IF NOT EXISTS] table_name
(
  <column definition>
) USING 'JDBC | ROW | COLUMN'
OPTIONS (
  COLOCATE_WITH 'table_name',  // Default none
  PARTITION_BY 'PRIMARY KEY | column name',  // Replicated table by default
  REDUNDANCY '1',  // Manage HA
  PERSISTENT 'DISKSTORE_NAME ASYNCHRONOUS | SYNCHRONOUS',
  // Empty string will map to the default disk store
  OFFHEAP 'true | false',
  EVICTION_BY 'MEMSIZE 200 | COUNT 200 | HEAPPERCENT',
  …
) [AS select_statement];
42. Simple to Ingest Streams using SQL
Consume from stream → transform raw data → continuous analytics → ingest into in-memory store → overflow table to HDFS

create stream table AdImpressionLog
(<columns>) using directkafka_stream options (
  <socket endpoints>
  "topics 'adnetwork-topic'",
  "rowConverter 'AdImpressionLogAvroDecoder'")

// Register continuous query and write each window batch to a column table
streamingContext.registerCQ(
  "select publisher, geo, avg(bid) as avg_bid, count(*) imps,
   count(distinct(cookie)) uniques from AdImpressionLog
   window (duration '2' seconds, slide '2' seconds)
   where geo != 'unknown' group by publisher, geo")
  .foreachDataFrame(df => {
    df.write.format("column").mode(SaveMode.Append)
      .saveAsTable("adImpressions")
  })
44. How do we extend Spark for Real Time?
• Spark Executors are long running; driver failure doesn't shut down Executors
• Driver HA – drivers run "managed" with a standby secondary
• Data HA – consensus-based clustering integrated for eager replication
45. How do we extend Spark for Real Time?
• Bypass the scheduler for low-latency SQL
• Deep integration with Spark Catalyst (SQL) – colocation optimizations, index use, etc.
• Full SQL support – persistent catalog, transactions, DML
47. Concurrent Ingest + Query Performance
• AWS, 4 c4.2xlarge instances - 8 cores, 15GB mem each
• Each node ingests a stream from Kafka in parallel
• Parallel batch writes to store (32 partitions)
• Only a few cores used for stream writes, as most of the CPU is reserved for OLAP queries

Stream ingestion rate per second (on 4 nodes, with a cap on CPU to allow for queries):
Spark-Cassandra: 322,000 | Spark-InMemoryDB: 480,000 | SnappyData: 670,000

2X – 45X faster (vs Cassandra, IMDB)
https://github.com/SnappyDataInc/snappy-poc
48. Concurrent Ingest + Query Performance
Sample "scan" oriented OLAP query (Spark SQL) performance, executed while ingesting data:

select count(*) AS adCount, geo from adImpressions
group by geo order by adCount desc limit 20;

Response time (millis) by table size:
Table size | Spark-Cassandra | Spark-InMemoryDB | SnappyData
30M | 20,346 | 3,649 | 1,056
60M | 65,061 | 5,801 | 1,571
90M | 93,960 | 7,295 | 2,144

2X – 45X faster
https://github.com/SnappyDataInc/snappy-poc
Editor's Notes
CONTEXT SHOULD BE OUR MISSION …. LAMBDA LIKE WOULD BE BETTER ….
optimizations to enable direct access of storage into local execution variables, avoiding all copying to bring data from storage layer to execution layer (possible only due to our unique embedded mode). Integrated with whole-stage code generation of Spark 2.0 so that these get compiled by JIT into exactly one memory load instruction for one primitive value (uncompressed).
There is a reciprocal relationship with Spark RDDs/DataFrames: any table is visible as a DataFrame and vice versa. Hence, all the Spark APIs and transformations can also be applied to Snappy-managed tables.
For instance, you can use the DataFrame data source API to save any arbitrary DataFrame into a Snappy table, as shown in the example.
One cool aspect of Spark is its ability to take an RDD of objects (say with nested structure) and implicitly infer its schema, i.e. turn it into a DataFrame and store it.
The SQL dialect will be Spark SQL ++, i.e. we are extending SQL to be much more compliant with standard SQL.
A number of the extensions that dictate things like HA, disk persistence, etc. are all specified through OPTIONS in Spark SQL.
Manage data (mutable) in Spark executors (the store memory manager works with the block manager)
Make executors long lived
Which means Spark drivers run de-coupled … they can fail.
- Managed drivers
- Selective scheduling
- Deep integration with the query engine for optimizations
- Full SQL support: including transactions, DML, catalog integration