It is common for consumer Internet companies to start off with popular third-party tools for their analytics needs. Then, as the user base and the company grow, they end up building their own analytics data pipeline and query engine to cope with their data scale, satisfy custom data-enrichment and reporting needs, and ensure high data quality. That's exactly the path that was taken at Grammarly, the popular online proofreading service.
In this session, Grammarly will share how they improved business and marketing analytics, previously done with Mixpanel, by building their own in-house analytics engine and application on top of Apache Spark. Chernetsov will touch upon several Spark tweaks and gotchas that they experienced along the way:
– Outputting data to several storage systems in a single Spark job
– Dealing with the Spark memory model, and building a custom spillable data structure for your data traversal
– Implementing a custom query language with parser combinators on top of the Spark SQL parser
– Writing a custom query optimizer and analyzer when you want something that is not exactly SQL
– Flexible-schema storage and querying multi-schema data with schema conflicts
– Custom aggregation functions in Spark SQL
Building a Versatile Analytics Pipeline on Top of Apache Spark with Mikhail Chernetsov
1. Building a Versatile Analytics Pipeline on Top of Apache Spark
Misha Chernetsov, Grammarly
Spark Summit 2017, June 6, 2017
3. About Me: Misha Chernetsov
Data Team Lead @ Grammarly
Building Analytics Pipelines (5 years)
Coding on JVM (12 years), Scala + Spark (3 years)
@chernetsov
4. Analytics @ Consumer Product Company
A tool that helps us better understand:
● Who are our users?
● How do they interact with the product?
● How do they get in, engage, pay, and how long do they stay?
5. Analytics @ Consumer Product Company
We want our decisions to be data-driven.
Everyone: product managers, marketing, engineers, support...
8. Example Report 2 – Comparison of Cohort Retention Over Time (chart, dummy data)
9. Example Report 3 – Payer Conversions By Traffic Source (chart, dummy data)
Number of users who bought a subscription, per calendar day, split by traffic source type (where the user came from): Ads, Email, Social.
10. Example: Data
● Landing page visit
○ URL with UTM tags
○ Referrer
● Subscription purchased
○ Is first in subscription

11. Example: Data
Everything is an Event
{
"eventName": "page-visit",
"url": "...?utm_medium=ad",
…
}
{
"eventName": "subscribe",
"period": "12 months",
…
}
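On the JVM side, these events might be modeled roughly as below. This is a hypothetical sketch: only eventName appears verbatim above; the other field names are assumptions based on later slides (tfingerprint, userId, setAttributedUserId).

case class Event(
  eventName: String,            // e.g. "page-visit", "subscribe"
  timestamp: Long,              // event time, epoch millis (assumed)
  tfingerprint: Option[String], // browser fingerprint, used for attribution
  userId: Option[Long],         // empty until the user authenticates
  payload: Map[String, String]  // event-specific fields, e.g. "url" -> "...?utm_medium=ad"
) {
  // Hypothetical helper matching the attribution snippets later on
  def setAttributedUserId(id: Option[Long]): Event = copy(userId = id)
}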
14. capture → enrich → index → query

Use 3rd Party?
1. Integrated Event Analytics
2. UI over your DB
● Reports are not tailored for your needs, limited capability.
● Pre-aggregation / enriching is still on you.
● Hard to achieve accuracy and trust.
25. Enrichment 1: User Attribution
tfingerprint = abc (all events from a given browser)
Authenticated: userId = 123
Authenticated: userId = 756
Heuristics attribute the anonymous events in between to one of these users.
27. Enrichment 1: User Attribution

import scala.collection.mutable.ArrayBuffer

rdd.mapPartitions { it =>
  // Peekable iterator, so we can look at the next event without consuming it
  val iterator = it.buffered
  // Buffer the leading anonymous events (no userId yet)
  val buffer = new ArrayBuffer[Event]()
  while (iterator.hasNext && iterator.head.userId.isEmpty)
    buffer += iterator.next()
  // The first authenticated event carries the userId; back-fill it onto
  // the buffered events (assumes the partition eventually contains one)
  val userId = iterator.head.userId
  buffer.iterator.map(_.setAttributedUserId(userId)) ++ iterator
}
The buffer can grow big and OOM your worker for outliers who use Grammarly without ever registering.
32. Spark & Memory
By default we should operate in User Memory (the small fraction):
● User Memory: 100% - spark.memory.fraction = 25%
● Spark Memory: spark.memory.fraction = 75%
Let's get into Spark Memory and use its safety features.
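For reference, a minimal sketch of where that split is configured. The 75% / 25% figures match spark.memory.fraction = 0.75, the Spark 1.6 default; the talk does not show the actual cluster config.

import org.apache.spark.SparkConf

// Spark Memory (execution + storage) vs. User Memory: plain JVM heap that
// Spark neither tracks nor protects, which is where the ArrayBuffer lived
val conf = new SparkConf()
  .set("spark.memory.fraction", "0.75")        // Spark Memory: 75%
  .set("spark.memory.storageFraction", "0.5")  // storage share within Spark Memory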
33. Enrichment 1: User Attribution

rdd.mapPartitions { it =>
  val iterator = it.buffered
  // Same traversal as before, but with the spillable buffer
  val buffer = new SpillableBuffer[Event]()
  while (iterator.hasNext && iterator.head.userId.isEmpty)
    buffer.append(iterator.next())
  val userId = iterator.head.userId
  buffer.iterator.map(_.setAttributedUserId(userId)) ++ iterator
}

Can safely grow in memory while there is enough free Spark Memory; spills to disk otherwise.
43. SizeTracker

trait SizeTracker {
  // Call on every append; periodically estimates the collection size
  // and saves samples
  def afterUpdate(): Unit = { … }
  // Extrapolates from the saved samples
  def estimateSize(): Long = { … }
}
44. Spillable

trait Spillable[C] {
  // Implemented by the concrete collection: writes its contents to disk
  def spill(inMemCollection: C): Unit

  // Call on every append to the collection: tries to double (x2) the
  // reserved memory if needed, and spills when the request is not granted
  def maybeSpill(currentMemory: Long, inMemCollection: C): Unit = { … }
}
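A minimal sketch of how these two ideas might combine into the SpillableBuffer used above (matching its append / iterator calls). This is an assumption for illustration, not Grammarly's actual class: a real Spillable negotiates with Spark's TaskMemoryManager, while this standalone version uses a fixed threshold and Java serialization, and assumes T is serializable.

import java.io._
import scala.collection.mutable.ArrayBuffer

// Hypothetical spillable buffer: append-only, tracks an estimated size,
// and moves its in-memory contents to a temp file past a threshold.
class SpillableBuffer[T](estimateBytes: T => Long = (_: T) => 256L, // crude default estimate
                         thresholdBytes: Long = 64L << 20) {
  private val inMem = new ArrayBuffer[T]()
  private val spillFiles = new ArrayBuffer[File]()
  private var estimated = 0L

  def append(el: T): Unit = {
    inMem += el
    estimated += estimateBytes(el)           // stand-in for SizeTracker.afterUpdate()
    if (estimated > thresholdBytes) spill()  // stand-in for Spillable.maybeSpill()
  }

  private def spill(): Unit = {
    val f = File.createTempFile("spillable-buffer", ".bin")
    f.deleteOnExit()
    val out = new ObjectOutputStream(new BufferedOutputStream(new FileOutputStream(f)))
    try out.writeObject(inMem.toVector) finally out.close()
    spillFiles += f
    inMem.clear()
    estimated = 0L
  }

  // Spilled chunks first, then whatever is still in memory
  def iterator: Iterator[T] = {
    val fromDisk = spillFiles.iterator.flatMap { f =>
      val in = new ObjectInputStream(new BufferedInputStream(new FileInputStream(f)))
      try in.readObject().asInstanceOf[Vector[T]] finally in.close()
    }
    fromDisk ++ inMem.iterator
  }
}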
46. Custom Spillable Collection
● Be safe with outliers
● Get outside User Memory (25%), use Spark Memory (75%)
● Spark APIs: could be a bit friendlier and more high-level
53. Spark Pipeline

Raw Kafka → Stream: Save Raw Kafka
Raw Kafka → Stream: User Attr. (+ Batch: User Attr.) → User-attributed Kafka
User-attributed Kafka → Stream: Calc Props (+ Batch: Calc Props) → Enriched and Queryable
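A minimal sketch of one stage of such a pipeline (the "Stream: User Attr." box), using the Spark 1.6-era direct Kafka stream. Topic names, the batch interval, and the parseEvent / attributeUsers / sendToKafka helpers are assumptions, not code from the talk.

import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Read raw events, attribute users, write them on to the next topic
val ssc = new StreamingContext(conf, Seconds(30)) // conf: the SparkConf above
val kafkaParams = Map("metadata.broker.list" -> "broker:9092")

KafkaUtils
  .createDirectStream[String, String, StringDecoder, StringDecoder](
    ssc, kafkaParams, Set("raw-events"))
  .map { case (_, json) => parseEvent(json) }  // hypothetical JSON-to-Event parser
  .foreachRDD { rdd =>
    val attributed = attributeUsers(rdd)       // hypothetical: the mapPartitions logic above
    attributed.foreachPartition(sendToKafka("user-attributed-events")) // hypothetical writer
  }

ssc.start()
ssc.awaitTermination()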
55. Spark Pipeline
● Connectors for everything
● Great for batch
○ Shuffle with spilling
○ Failure recovery
● Great for streaming
○ Fast
○ Low overhead
64. Writer

rdd.foreachPartition { iterator =>
  val writer = new BufferedWriter(
    new OutputStreamWriter(new FileOutputStream(...))
  )
  iterator.foreach { el =>
    writer.write(el.toString)
    writer.newLine()
  }
  writer.close() // flushes the buffer, makes sure everything is actually written
}
● Buffer
● Non-blocking
● Idempotent / Dedupe (see the sketch below)
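For the idempotent / dedupe goal, one common approach (an assumption here; the talk does not show the mechanism) is to derive a deterministic key per element, so a retried partition write overwrites the same rows instead of appending duplicates.

import java.util.UUID

// Same event always maps to the same key on every retry, which makes the
// write idempotent for any sink that upserts by key (field choice is hypothetical)
def dedupeKey(eventName: String, tfingerprint: String, timestamp: Long): String =
  UUID.nameUUIDFromBytes(s"$eventName|$tfingerprint|$timestamp".getBytes("UTF-8")).toString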
68. AndWriter

val andWriteToX = rdd.mapPartitions { iterator =>
  val writer = new XWriter()
  iterator.map { el =>
    writer.write(el)
    el // pass the element through so downstream stages keep consuming
  }.closing(() => writer.close()) // close the writer once the iterator is exhausted
}
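The .closing combinator above is not part of the standard Iterator API. A minimal sketch of how such an extension might be defined (an assumption; the talk does not show its implementation):

// Hypothetical implicit class: runs the callback exactly once, after the
// last element has been served, so the writer flushes exactly when the
// partition is fully consumed
implicit class ClosingIterator[A](underlying: Iterator[A]) {
  def closing(onClose: () => Unit): Iterator[A] = new Iterator[A] {
    private var closed = false
    def hasNext: Boolean = {
      val more = underlying.hasNext
      if (!more && !closed) { closed = true; onClose() }
      more
    }
    def next(): A = underlying.next()
  }
}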
75. Index
● Parquet on AWS S3
● Custom partitioning: by eventName and time interval (see the sketch below)
● Append changes; compact + merge on the fly when querying
● Randomized names to maximize S3 parallelism
● Use s3a for max performance, and tweak for S3 read-after-write consistency
● Support flexible schema, even with conflicts!
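A sketch of what that layout might look like. The bucket name, partition columns, and prefix length are assumptions; the talk only states the partitioning dimensions and the randomized names.

// Randomized top-level prefix spreads keys across S3 partitions:
//   s3a://analytics/<random>/eventName=page-visit/month=2017-06/part-*.parquet
val prefix = scala.util.Random.alphanumeric.take(4).mkString
df.write
  .mode("append")
  .partitionBy("eventName", "month")  // assumed partition columns
  .parquet(s"s3a://analytics/$prefix")

Readers would then list and union the randomized prefixes, which is where the "compact + merge on the fly when querying" step above comes in.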
82. Option 3: Custom Query Language

SEGMENT "eventName"
WHERE foo = "bar" AND x.y IN ("a", "b", "c")
UNIQUE
BY m IS NOT NULL
TIME from 2 months ago to today
STEP 1 month SPAN 1 week
83. (Same query; the WHERE / BY clauses are the expressions.)
84. Option 3: Custom Query Language
● Segment, Funnel, Retention
● UI & as DataFrame in Zeppelin
● Spark <= 1.6 – Scala Parser Combinators (see the sketch below)
● Reuse the most complex part: the expression parser
● Relatively extensible
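A minimal sketch of the parser-combinator approach for a fragment of the grammar above. The AST and grammar details are assumptions for illustration; in particular, the toy condition parser stands in for the reused expression parser.

import scala.util.parsing.combinator.JavaTokenParsers

// Hypothetical AST for a tiny subset of the SEGMENT query
case class Segment(event: String, where: Option[String], step: String, span: String)

object SegmentParser extends JavaTokenParsers {
  def duration: Parser[String] =
    wholeNumber ~ ("month" | "week" | "day") ^^ { case n ~ u => s"$n $u" }

  // Toy condition; the real parser would delegate here to the reused,
  // much more complex expression parser
  def condition: Parser[String] =
    ident ~ ("=" ~> stringLiteral) ^^ { case k ~ v => s"$k = $v" }

  def segment: Parser[Segment] =
    ("SEGMENT" ~> stringLiteral) ~
      opt("WHERE" ~> condition) ~
      ("STEP" ~> duration) ~ ("SPAN" ~> duration) ^^ {
        case event ~ w ~ step ~ span => Segment(event, w, step, span)
      }

  def apply(in: String): ParseResult[Segment] = parseAll(segment, in)
}

// Usage: SegmentParser("""SEGMENT "subscribe" WHERE foo = "bar" STEP 1 month SPAN 1 week""")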
86. Conclusion
● Custom versatile analytics is doable and enjoyable
● Spark is a great platform to build analytics on
○ Enrichment Pipeline: Batch / Streaming, Query, ML
● Would be cool to see even the deep internals slightly more extensible