Patterns of the
Lambda Architecture
Truth and Lies at the Edge of Scale
Flip Kromer — CSC
I’m Flip Kromer, Distinguished Engineer at CSC. If you are a large enterprise looking to add Big Data capabilities, especially ones involving legacy systems: we’re a big, stable company that specializes in turning technology into an enterprise-grade solution.
Pattern Set
This talk will equip you with two things.
One is a set of patterns for how we design high-scale architectures to solve specific solution cases, now that extra infrastructure is nearly free.
Tradeoff Rules
PICK
ANY
TWO
Along with a set of tradeoff rules, along the lines of the pick-any-two trinity but more sophisticated.
Lambda Architecture
So what is the Lambda Architecture? Here’s two examples.
Search w/ Update
[Diagram: a batch “Build Indexes” job turns A Ton of Text into a Historical Index; a Live Indexer turns More Text into a Recent Index; an API serves from both.]
In this system, we have a whole ton of historical text, with more arriving all the time,
and want to allow immediate real-time search across the whole corpus.
Search w/ Update
[The same diagram, highlighting the batch step: Build Indexes builds the main Historical Index.]
We will use a large periodic batch job to create indexes on the historical data. This takes a while, far longer than our recency demands allow, so we might as well have our elephants use clever algorithms and optimally organize the data for rapid retrieval.
Search w/ Update
[The same diagram, highlighting the speed step: the Live Indexer updates the Recent Index.]
Until the next elephant stampede arrives with an updated index, each new record that shows up is not only filed with the historical data but also run through simple, fast indexing to make it immediately searchable. Merging new records directly would require stuffing them into the right place in the historical index, which eventually means moving records around, which demands far too much time and complexity to be workable.
Search w/ Update
[The same diagram, highlighting the serving step: the API serves results.]
The system that serves the data simply pulls from both indexes in real time.
[The search diagram again.]
Lambda Architecture
Batch
Speed
Serving
We have a batch layer for the global corpus, a speed layer for recent results, and a serving layer for access.
[The search diagram again.]
Lambda Architecture
Global
Relevant
Immediate
The batch layer gives you global truth, the speed layer gives you relevant (recent) truth, and the serving layer gives you immediate access.
Train Recomm’der
[Diagram: a batch “Train Recommender” job turns Visitor History into Visitor:Product “Alsobuy” recommendations; a Recommender service fetches and updates each visitor’s history and updates recommendations; the Webserver reads the stored results.]
Another familiar architecture is a high-scale recommender system: “Given that the user has looked at mod-style dresses and mason jars, show them these knitting needles.” This diagram shows a recommender, but most machine-learning systems look like this.
Train Recomm’der
[The same diagram, highlighting the batch step: Build Model.]
You have one system process all the examples you’ve ever seen to produce a predictive model. The trained model it produces can then react immediately to all future examples as they occur.
Train Recomm’der
[The same diagram, highlighting the speed step: Applies Model.]
In this system we’re going to have one system apply the model and store the recommendation. Your operations team is better off with two systems that can fail without breaking the site than with the apply-model step coupled to serving pages.
Train Recomm’der
[The same diagram, highlighting the serving step: Serves Result.]
So that the web layer can just serve the result without being contaminated by the
recommender system’s code.
Train Recomm’der
[The same diagram, labeled with the three layers: Batch, Speed, Serving.]
Again, the same three pieces
Lambda Arch Layers
• Batch layer: Deep Global Truth (throughput)
• Speed layer: Relevant Local Truth (throughput)
• Serving layer: Rapid Retrieval (latency)
The batch and speed layers care about throughput; the serving layer cares about latency.
Lambda Arch: Technology
• Batch layer: Hadoop, Spark, Batch DB Reports
• Speed layer: Storm+Trident, Spark Streaming, Samza, AMQP, …
• Serving layer: Web APIs, Static Assets, RPC, …
Lambda Architecture
Batch
Speed
Serving
λ
λ
Where does the name “lambda” come from?
In my head, it’s because the flow diagram…
Lambda Architecture
Batch
Speed
Serving
λ
looks like the shape of the character for lambda
Lambda Architecture
λ(v)
• Pure Function on immutable data
But really it means this new mindset: building a pure function (lambda) on immutable data.
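As a sketch of that mindset (the event log and word-count view here are invented for illustration, not from the talk): every serving view is conceptually a pure function folded over an immutable log.

```python
# The Lambda mindset: a serving view is (conceptually) a pure function
# applied to the full, immutable log of events.
from functools import reduce

def apply_event(view, event):
    """Pure step: returns a new view, never mutates its inputs."""
    counts = dict(view)
    counts[event["word"]] = counts.get(event["word"], 0) + 1
    return counts

def batch_view(events):
    """Batch layer: recompute the view from the entire immutable log."""
    return reduce(apply_event, events, {})

log = [{"word": "lambda"}, {"word": "data"}, {"word": "lambda"}]
print(batch_view(log))  # {'lambda': 2, 'data': 1}
```

The speed layer is the same `apply_event` applied to one new event at a time; nothing is ever updated in place.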
Ideal Data System
• Capacity -- Can process arbitrarily large amounts of data
• Affordability -- Cheap to run
• Simplicity -- Easy to build, maintain, debug
• Resilience -- Jobs/Processes fail & restart gracefully
• Responsiveness -- Low latency for delivering results
• Justification -- Incorporates all relevant data into result
• Comprehensive -- Answers questions about any subject
• Recency -- Promptly incorporates changes in the world
• Accuracy -- Few approximations or avoidable errors
The laziest, and therefore best, knobs are the Capacity/Affordability ones. The pre-big-data era can be thought of as one where only those two exist. Big Data broke the handle off the Capacity knob, either because Affordability ramps too fast or because the speed of light starts threatening resilience, responsiveness, or recency.
* _Comprehensive_: complete; including all or nearly all elements or aspects of something
* _Concise_: giving a lot of information clearly and in a few words; brief but comprehensive
Ideal Data System
[The same list of nine properties again.]
You would think that what mattered was correctness — justified true belief
Ideal Data System
[The same list of nine properties again.]
When you look at what we actually do, the non-negotiables are that the system be manageable and economical, given that you must process arbitrarily large amounts of data.
Truth is a nice-to-have.
Tradeoff Rules
PICK
ANY
TWO
Set of tradeoff rules along the lines of the pick-any-two trinity but more sophisticated
At Scale
THIS
AND
THIS
AND TRY TO BE GOOD
Basically, given big data you have to accommodate any amount of data and produce static reports or queries that execute within the duration of human patience. So you must be fast and cheap, sacrificing good.
Patterns
[The recommender diagram again.]
The world you’re modeling changes (new sets of products are released, new and varied customers sign up, changes to the site drive new behavior), but it changes slowly. So it’s no big deal if the training stage is run only once a week, over several hours.
The first example follows a pretty familiar general form I’ll call “Train / React”. You have one system process all the examples you’ve ever seen to produce a predictive model. The trained model can then react immediately to all future examples as they occur.
Pattern: Train / React
• Model of the world lets you make immediate decisions
• World changes slowly, so we can re-build the model at leisure
• Relax: Recency
• Batch layer: train a machine learning model
• Speed layer: apply that model
• Examples: most Machine Learning thingies (Recommender)
A big fat job that only needs to run occasionally; the results of the job inform what happens immediately.
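A minimal Train / React sketch, assuming a toy co-view recommender (the session data and function names are invented, not the talk’s code): the batch layer grinds over all history at leisure, and the speed layer only applies the frozen model.

```python
# Train / React: batch trains; speed only applies.
from collections import defaultdict
from itertools import combinations

def train(sessions):
    """Batch layer: count product co-occurrences across all history."""
    alsoviewed = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for a, b in combinations(set(session), 2):
            alsoviewed[a][b] += 1
            alsoviewed[b][a] += 1
    return alsoviewed

def react(model, product):
    """Speed layer: apply the frozen model to one event, immediately."""
    scores = model.get(product, {})
    return sorted(scores, key=scores.get, reverse=True)

model = train([["dress", "jar", "needles"], ["jar", "needles"]])
print(react(model, "jar"))  # ['needles', 'dress']
```

If the model goes stale during the week, nobody gets hurt; the next batch run refreshes it.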
Search w/ Update
[The search diagram again.]
Pattern: Baseline / Delta
• Understanding the world takes a long time
• World changes much faster than that, and you care
• Relax: Simplicity, Accuracy
• Batch layer: process the entire world
• Speed layer: handle any changes since the last big run
• Examples: real-time search index; Count Distinct; other approximate stream algorithms
In Train / React, the world changes, but slowly; training in batch mode is just fine.
In Baseline / Delta, the world changes so quickly that you can’t run the compute job fast enough.
So you are sacrificing simplicity (there are two systems where there was only one) and accuracy (the recent records won’t update global normalized frequencies).
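A minimal Baseline / Delta sketch using count-distinct (illustrative only: exact sets stand in for what a real system would do with an approximate structure like HyperLogLog).

```python
# Baseline / Delta: batch computes an exact baseline; speed keeps only the
# deltas since that run; serving merges both.

def build_baseline(all_records):
    """Batch layer: slow, exact pass over the whole history."""
    return set(all_records)

class SpeedLayer:
    """Tracks only items seen since the last batch run."""
    def __init__(self):
        self.recent = set()
    def observe(self, record):
        self.recent.add(record)

def count_distinct(baseline, speed):
    """Serving layer: merge baseline and delta. Slightly inexact is fine."""
    return len(baseline | speed.recent)

baseline = build_baseline(["a", "b", "c"])
speed = SpeedLayer()
speed.observe("c")      # already in the baseline
speed.observe("d")      # genuinely new
print(count_distinct(baseline, speed))  # 4
```

When the next batch run lands, the speed layer’s set is discarded and starts over.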
Pagerank
[Diagram: a batch “Converge Pagerank” job turns Friend Relations into per-User Pageranks. At request time, retrieve Bob’s Facebook network and his friends’ pageranks, estimate Bob’s pagerank, but don’t bother updating Bob’s friends (or friends’ friends, or …). An API serves the result.]
(Lazy Propagation)
Pagerank
[Diagram: a small network of nodes with pagerank scores: 48, 42, 24s, 12s, and a fringe of 6s.]
This next example has an importantly different flavor.
The core way that Google identifies important web pages is the “Pagerank” algorithm, which basically says “a page is interesting if other interesting pages link to it”. That’s recursive, of course, but the math works out. You can do similar things on a social network like Twitter to find spammers and superstars, or among college football teams or World of Warcraft players to prepare a competitive ranking, or among buyers and sellers in a market to detect fraud.
To define a reputation ranking on, say, Twitter, you simulate a game of multiple rounds.
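Those rounds can be sketched as a toy power iteration (the three-node graph and the parameter values are invented; the real job grinds over the full graph in the batch layer).

```python
# Batch layer's job, sketched: iterate "a node's rank is fed by the ranks
# of nodes linking to it" until the scores settle.

def pagerank(links, damping=0.85, rounds=30):
    nodes = set(links) | {t for ts in links.values() for t in ts}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(rounds):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, targets in links.items():
            for t in targets:
                nxt[t] += damping * rank[src] / len(targets)
        rank = nxt
    return rank

links = {"a": ["b"], "b": ["c"], "c": ["b"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # b (linked by both a and c)
```

Each round is an easy map-and-sum over the whole graph, which is exactly the shape Hadoop likes.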
New Record Appears
[The network again: a new node appears with an unknown score (“?”), linked from existing nodes.]
Doing this is kinda literally what Hadoop was born to do, and it’s a simple
Hadoop-101 level program.
Acting out all those rounds using every interaction we’ve ever seen takes a fair
amount of time, though, and so a problem comes when we meet a new person.
This new person accrues some reputational jellybeans, and we don’t want to live in
total ignorance of what their score is; and they dispatch some as well, which should
change the scores of those they follow.
Update Using Local
[The network again: the new node’s followers, scored 12 (with 3 outbound links) and 24 (with 5), pay out shares of 12÷3 = 4 and 24÷5 ≈ 5, giving the new node an estimated score of 9.]
Well, we can roughly guess the score of the new node by having their followers pay
out a jellybean share proportional to what they would have gotten in the last
pagerank round.
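The slide’s arithmetic, as a sketch (the follower list is a hypothetical input shape; in practice you would read it from the serving store):

```python
# Speed layer's white lie: guess the newcomer's score from its immediate
# neighborhood only, as on the slide (12/3 = 4, 24/5 = 5, guess = 9).

def guess_rank(followers):
    """followers: list of (rank, outbound_link_count) pairs. Each follower
    pays out the per-link share it gave in the last pagerank round."""
    return sum(rank / out_links for rank, out_links in followers)

# One follower with rank 12 and 3 outbound links, one with rank 24 and 5.
print(guess_rank([(12, 3), (24, 5)]))  # 8.8, which the slide rounds to 9
```

No global recomputation, no propagation; just a local read and a sum.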
“A Guess beats a Blank Stare”
* World rate of change not really relevant
* The solution is actually to tell a lie
…Ignoring Correctness
[The network again: the newcomer gets a guessed score, but the neighbors’ scores are left alone. Meh.]
But we’re not going to update the neighbors. You’d be concurrently updating an arbitrary number of outbound nodes, and then of course those nodes’ changes should rightfully propagate as well; this is why we play the multiple pagerank rounds in the first place.
What we do instead is lie. Look, planes don’t fall out of the sky if you get someone’s coolness quotient wrong in the first decimal place.
Batch Updates Graph
[The network after the next batch run: every score is recomputed globally.]
(A Guess beats a Blank)
This has an importantly different flavor
* World rate of change not really relevant
* The solution is actually to tell a lie
Pattern: World/Local
• Understanding the world needs the full graph
• You can tell a little white lie reading the immediate graph only
• Relaxing: Accuracy, Justification
• Batch layer: uses global graph information
• Speed layer: just reads the immediate neighborhood
• Examples: “Whom to follow”, clustering, anything at 2nd-degree (friend-of-a-friend)
The problem isn’t so much the volume of data; it’s how _far away_ that data is.
You can’t justify doing that second-order query, for three reasons: time, compute resources, and computational risk.
Pattern: Guess Beats Blank
• You can’t calculate a good answer quickly
• But Comprehensiveness is a must
• Relaxing: Accuracy, Justification
• Batch layer: finds the correct answer
• Speed layer: makes a reasonable guess
• Examples: any time the sparse-data case is also the most valuable
In this case, we can’t sacrifice comprehensiveness — for every record that exists, we
must return a relevant answer. So we sacrifice truthfulness — or more precisely, we
sacrifice accuracy and justification.
The Marine Corps’ 80% Rule
“Any decision made with more than 80% of the necessary information is hesitation”
— “The Marine Corps Way”, Santamaria & Martino
When there’s lots of data already, the imperfect result in the speed layer doesn’t have a huge effect. When there isn’t much data, it’s overwhelmingly better to fill in with an imperfect result.
Security
[Diagram: an Interaction Net store feeds a batch “Find Potential Evilness” job that produces Connection Counts and Agents of Interest; an approximate streaming aggregation checks each connection against “Agent of Interest?” and sends Detected Evilnesses to a Dashboard.]
In security, you have the data-breach type problems (why is someone strip-mining computers in turn to a server in [name your own semi-friendly country]?) and the Bradley Manning type problems (why is a GS-5 at a console in Kuwait downloading every single diplomatic dispatch?).
Pattern: Slow Boil / Flash Fire
• Two tempos of data: months vs milliseconds
• Short-term data too much to store
• Long-term data too much to handle immediately
• Often accompanies Baseline / Delta, Global / Local
• Examples: Trending Topics; Insider Security
Global/Local: Why has a contractor sysadmin in Hawaii accessed PowerPoint presos from every single group within our organization?
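One way to sketch the two tempos (all names and thresholds here are invented): the batch layer boils months of history down to a baseline rate, and the speed layer keeps only a short window to spot flash fires against it.

```python
# Slow Boil / Flash Fire: long history -> baseline; short window -> alarms.
from collections import deque

def batch_baseline(per_minute_counts):
    """Batch layer: months of history boiled down to an expected rate."""
    return sum(per_minute_counts) / len(per_minute_counts)

class FlashDetector:
    """Speed layer: keeps only a short window of recent event timestamps."""
    def __init__(self, baseline_per_minute, factor=10, keep=1000):
        self.events = deque(maxlen=keep)       # short-term data only
        self.threshold = baseline_per_minute * factor
    def observe(self, timestamp):
        self.events.append(timestamp)
        recent = sum(1 for t in self.events if t > timestamp - 60)
        return recent > self.threshold         # True means flash fire

baseline = batch_baseline([2, 3, 1])           # long-term: ~2 events/minute
detector = FlashDetector(baseline)             # alarm above 20 per minute
alarms = [detector.observe(t) for t in range(30)]  # burst: 30 events in 30 s
print(alarms[0], alarms[-1])  # False True
```

Neither layer could do this alone: the speed layer can’t hold months of history, and the batch layer can’t notice a burst in milliseconds.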
Banking, Oversimplified
[Diagram: Transaction Update Records flow into an Event Store, which a batch “Reconcile Accounts” job turns into Account Balances. The event store is essential; the account-balance view is a nice-to-have. The essential store wins over the fast layer.]
(CAP Tradeoffs)
Pattern: C-A-P Tradeoffs
• Can’t depend on when data will roll in (Justification)
• Can’t live in ignorance (Comprehensiveness)
• Batch layer: the final answer
• Speed layer: actionable views
• Examples: Security (Authorization vs Auditing), lots of counting problems (Banking)
Pattern: Out-of-Order
[Same tradeoffs, layers, and examples as the C-A-P Tradeoffs slide.]
Common Theme
The System Asymptotes to Truth over time
We keep seeing this common theme: you are building a system that approaches correctness over time. This leads to a best practice that I’ll call the Improver pattern:
Scrape Product Web
• Scrapers: yield partial records
• Unifier: connects all identifiers for a common object
• Resolver: combines partial records into unified record
Entity Resolution
Pattern: Improver
• Improver: function(best guess, {new facts}) ~> new best guess
• Batch layer: f(blank, {all facts}) ~> best possible guess
• Speed layer: f(current best, {new fact}) ~> new best guess
• Batch and speed layer share the same code & contract, and asymptote to truth.
The way you build your resolver is such that the same function serves both layers.
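The contract can be sketched like this (the record-merging rule and field names are invented; a real resolver would be much smarter about conflicts):

```python
# The Improver contract: one pure function shared by both layers.
from functools import reduce

def improve(best_guess, fact):
    """f(best guess, new fact) ~> new best guess. Non-empty fields win."""
    merged = dict(best_guess)
    for key, value in fact.items():
        if value is not None:
            merged[key] = value
    return merged

def batch_layer(facts):
    """f(blank, {all facts}) ~> best possible guess."""
    return reduce(improve, facts, {})

def speed_layer(current_best, fact):
    """f(current best, {new fact}) ~> new best guess. Same code, same contract."""
    return improve(current_best, fact)

facts = [{"title": "Mason Jar", "price": None}, {"price": 4.99}]
best = batch_layer(facts)
best = speed_layer(best, {"price": 5.49})
print(best)  # {'title': 'Mason Jar', 'price': 5.49}
```

Because both layers call the same pure function, the speed layer’s running guess and the batch layer’s periodic recomputation converge on the same answer.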
Two Big Ideas
• Fine-grained control over architectural tradeoffs
  • Approximate a pure function on all data
  • What we do now that architecture is free
• Truth lives at the edge, not the middle
  • Data is syndicated forward from arrival to serving
  • “Query at write time”
Lets you trade off how quickly, how expensively, how true, how justified.
A new paradigm for how, when, and where we handle truth.
λ Arch: Truth, not Plumbing
• Lambda architecture isn’t about speed layer / batch layer.
• It’s about
  • moving truth to the edge, not the center;
  • enabling fine-grained tradeoffs against fundamental limits;
  • decoupling the consumer from infrastructure;
  • decoupling the consumer from asynchrony;
  • …with profound implications for how you build your teams.
This way of doing it simplifies the architecture: local interactions only, and the elimination of asynchrony. That in turn profoundly simplifies development and operations, and lets you structure your teams the way you structure the architecture.
for a Dinky Little Blog
So far, talked about a bunch of reasons why you might be led **to** a lambda
architecture
And when there's a new technology people always first ask why they should do it
differently, which is a wise Thing to ask and a foolish thing to insist on
But let's look at it from the other end, from what life is like if this were the natural
state of being.
And to do so, let's take the most unjustifiable case for a high scale architecture: a blog
engine
Blog: Traditional Approach
• Familiar with the ORM Rails-style blog:
• Models: User, Article, Comment
• Views:
• /user/:id (user info, links to their articles and comments);
• /articles (list of articles);
• /articles/:id (article content, comments, author info)
User: id 3, name joeman, homepage http://…, photo http://…, bio “…”
Article: id 7, title “The Crisis”, body “These are…”, author_id 3, created_at 2014-08-08
Comment: id 12, body “lol”, article_id 7, author_id 3
[Mock-ups: a user page (author name, photo, bio, “Joe has written 2 Articles” with snippets) and an article page (title, body, author sidebar with photo and bio, and comments: “First Post”, “lol”, “No comment”).]
[Diagram: the article show and user show actions assemble their pages from the articles, users, and comments models at read time, behind the Webserver.]
Traditional: Assemble on Read
DB models are sole source of truth
Denormalized
Used directly by reader and writer
View is constructed from spare parts at read time
Syndicate on Write
[Diagram: model changes (Δ article, Δ user, Δ comment) from the articles, users, and comments models flow through Biographer and Reporter processes into View Fragments, which the show action serves.]
Data Engineer: “What data model would you like to receive?”
Web Engineer: {“title”:”…”, “body”:”…”,…}
• (…hack hack hack…) /articles/v1/show.json
Web Engineer: “lol um can I also have”
{“title”:”…”, “body”:”…”, “snippet”:…}
• (…hack hack hack…) /articles/v2/show.json
Syndicated Data
• The Data is always _there_
• …but sometimes it’s more perfect than other times.
Syndicated Data
• Reports are cheap, single-concern, and faithful to the view.
• You start thinking like the customer, not the database
• All pages render in O(1):
• Your imagination doesn’t have to fit inside a TCP timeout
• Data is immutable, flows are idempotent:
• Interface change is safe
• Data is always _there_,
• Asynchrony doesn’t affect consumers
• Everything is decoupled:
• Way harder to break everything
One of the worst pains is the query that takes 1500 milliseconds: it needs to be immediate, it’s usually mission-critical, and it’s expensive in all ways.
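Syndicate-on-write can be sketched in a few lines (the reporter and fragment names are invented for illustration): writes fan out to reporters that pre-build view fragments, so reads are one lookup.

```python
# Syndicate on write: when a model changes, subscribed reporters rebuild the
# affected view fragments; page reads become a single O(1) fetch.

fragments = {}      # serving layer: pre-rendered view fragments
subscribers = {}    # model name -> list of reporter callbacks

def subscribe(model, reporter):
    subscribers.setdefault(model, []).append(reporter)

def update(model, record):
    """Write path: syndicate the change forward to every reporter."""
    for reporter in subscribers.get(model, []):
        reporter(record)

def article_reporter(article):
    # Reporter: faithful to the view, not to the database schema.
    fragments[f"article/{article['id']}/compact"] = {
        "title": article["title"], "snippet": article["body"][:40]}

subscribe("article", article_reporter)
update("article", {"id": 7, "title": "The Crisis", "body": "These are the times..."})
print(fragments["article/7/compact"]["title"])  # read time: one lookup
```

All the joining and formatting work happened at write time, which is why the reader never waits on a 1500-millisecond query.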
Changes update models
[Diagram: update article, update user, and update comments actions write Δ article, Δ user, and Δ comment into the user, comment, and article models, plus a history log.]
The models stay the same: User, Article, Comment, updated directly.
Reporters can subscribe to models.
On update, a reporter receives the updated object and can do anything else it wants. Typically, it creates a new report.
Reports live in the target domain, faithful to the data consumer. In this case, they look very close to the information hierarchy of the rendered page.
All pages render in O(1). Your imagination is not constrained by the length of a TCP timeout.
Models Trigger Reporters
[Diagram: each model update (Δ article, Δ user, Δ comment) triggers reporters that rebuild view fragments: compact article, expanded article, user’s # articles, user’s # comments, expanded user, sidebar user, compact comment, micro user.]
Serve Report Fragments
[Diagram: the show article action reads pre-built fragments (expanded article, sidebar user, compact comments, micro users) and renders the article page directly.]
[The article page mock-up again, next to the report it is rendered from:]
article show rendered
{
"title":"Article Title",
"body":"Article Body Lorem [...]",
"author":{ ... },
"comments": [
{"comment_id":1, "body":"First Post",...},
{"comment_id":2, "body":"lol",...},
...
]}
Serve Report Fragments
[Diagram: likewise, the show user action reads pre-built fragments (expanded user, user’s # articles, user’s # comments, compact articles, micro users) and renders the user page directly.]
Reports are Cheap
[Diagram: the same models and reporters now feed many views cheaply: list articles, show article, list user’s articles, show user.]
Lambda Architecture
Entity Resolution
Intake
[Diagram: parsers for Amazon, eBay, and Ma&Pa Electronics each emit a VendorListing, keyed by keywords, mfr & model, and ASIN, into the Listings store, arriving by Bulk, Stream, and RPC Callback paths.]
Batch Layer: Resolve/Unify
[Diagram: a batch Product Resolver unifies all Listings into Unified Products.]
Improve Product Resolver
[Diagram: the resolver also fetches existing Unified Products as input.]
Update Product Resolver
[Diagram: as each new listing arrives, fetch the matching products, resolve, and update Unified Products in place.]
Cannot have Consistency
[The same resolve-and-update diagram.]
Objections
• Three objections:
1. Why hasn’t it been done before?
2. Architecture Astronaut
3. I’m not at high scale
• Responses:
1. Chef/Puppet/Docker/etc
2. Chef/Puppet/Docker/etc
3. Shut Up
Objections
• Two APIs? Really?
• Yes. Guilty. That’s dumb and must be fixed.
• Spark or Samza, if you’re willing to drink only one flavor of Kool-Aid
• EZbake.io, a CSC / 42six project to attack this
• …but we shouldn’t be living at the low level anyhow
Objections
• Orchestration: “logical plan” (dataflow graph)
• Optimization/Allocation: “physical plan” (what goes where)
• Resource Projector: instantiates infrastructure
  • HTTP listeners, Trident streams, Oozie scheduling, ETL flows, cron jobs, etc.
• Transport Machineries: move data around, fulfilling locality/ordering/etc guarantees
• Data Processing: UDFs and operators

Apache Flink: API, runtime, and project roadmap
 
Gradoop: Scalable Graph Analytics with Apache Flink @ Flink Forward 2015
Gradoop: Scalable Graph Analytics with Apache Flink @ Flink Forward 2015Gradoop: Scalable Graph Analytics with Apache Flink @ Flink Forward 2015
Gradoop: Scalable Graph Analytics with Apache Flink @ Flink Forward 2015
 
Unifying Stream, SWL and CEP for Declarative Stream Processing with Apache Flink
Unifying Stream, SWL and CEP for Declarative Stream Processing with Apache FlinkUnifying Stream, SWL and CEP for Declarative Stream Processing with Apache Flink
Unifying Stream, SWL and CEP for Declarative Stream Processing with Apache Flink
 
Functional Comparison and Performance Evaluation of Streaming Frameworks
Functional Comparison and Performance Evaluation of Streaming FrameworksFunctional Comparison and Performance Evaluation of Streaming Frameworks
Functional Comparison and Performance Evaluation of Streaming Frameworks
 
Apache Big Data EU 2016: Building Streaming Applications with Apache Apex
Apache Big Data EU 2016: Building Streaming Applications with Apache ApexApache Big Data EU 2016: Building Streaming Applications with Apache Apex
Apache Big Data EU 2016: Building Streaming Applications with Apache Apex
 
K. Tzoumas & S. Ewen – Flink Forward Keynote
K. Tzoumas & S. Ewen – Flink Forward KeynoteK. Tzoumas & S. Ewen – Flink Forward Keynote
K. Tzoumas & S. Ewen – Flink Forward Keynote
 
Apache Beam: A unified model for batch and stream processing data
Apache Beam: A unified model for batch and stream processing dataApache Beam: A unified model for batch and stream processing data
Apache Beam: A unified model for batch and stream processing data
 
Big Migrations: Moving elephant herds by Carlos Izquierdo
Big Migrations: Moving elephant herds by Carlos IzquierdoBig Migrations: Moving elephant herds by Carlos Izquierdo
Big Migrations: Moving elephant herds by Carlos Izquierdo
 
Flink history, roadmap and vision
Flink history, roadmap and visionFlink history, roadmap and vision
Flink history, roadmap and vision
 
Introduction to Spark Streaming
Introduction to Spark StreamingIntroduction to Spark Streaming
Introduction to Spark Streaming
 
Apache Spark Streaming: Architecture and Fault Tolerance
Apache Spark Streaming: Architecture and Fault ToleranceApache Spark Streaming: Architecture and Fault Tolerance
Apache Spark Streaming: Architecture and Fault Tolerance
 

Viewers also liked

Devoxx France 2015 - The Docker Orchestration Ecosystem on Azure
Devoxx France 2015 - The Docker Orchestration Ecosystem on AzureDevoxx France 2015 - The Docker Orchestration Ecosystem on Azure
Devoxx France 2015 - The Docker Orchestration Ecosystem on AzurePatrick Chanezon
 
Riak in Ten Minutes
Riak in Ten MinutesRiak in Ten Minutes
Riak in Ten MinutesJon Meredith
 
DataFrames: The Extended Cut
DataFrames: The Extended CutDataFrames: The Extended Cut
DataFrames: The Extended CutWes McKinney
 
Architectural Patterns for Scaling Microservices and APIs - GlueCon 2015
Architectural Patterns for Scaling Microservices and APIs - GlueCon 2015Architectural Patterns for Scaling Microservices and APIs - GlueCon 2015
Architectural Patterns for Scaling Microservices and APIs - GlueCon 2015Lori MacVittie
 
ApacheCon NA 2015 Spark / Solr Integration
ApacheCon NA 2015 Spark / Solr IntegrationApacheCon NA 2015 Spark / Solr Integration
ApacheCon NA 2015 Spark / Solr Integrationthelabdude
 
Microservices: next-steps
Microservices: next-stepsMicroservices: next-steps
Microservices: next-stepsBoyan Dimitrov
 
Apache Spark and the Emerging Technology Landscape for Big Data
Apache Spark and the Emerging Technology Landscape for Big DataApache Spark and the Emerging Technology Landscape for Big Data
Apache Spark and the Emerging Technology Landscape for Big DataPaco Nathan
 
Scikit-learn for easy machine learning: the vision, the tool, and the project
Scikit-learn for easy machine learning: the vision, the tool, and the projectScikit-learn for easy machine learning: the vision, the tool, and the project
Scikit-learn for easy machine learning: the vision, the tool, and the projectGael Varoquaux
 
Big Data Ecosystem at LinkedIn. Keynote talk at Big Data Innovators Gathering...
Big Data Ecosystem at LinkedIn. Keynote talk at Big Data Innovators Gathering...Big Data Ecosystem at LinkedIn. Keynote talk at Big Data Innovators Gathering...
Big Data Ecosystem at LinkedIn. Keynote talk at Big Data Innovators Gathering...Mitul Tiwari
 
Computing recommendations at extreme scale with Apache Flink @Buzzwords 2015
Computing recommendations at extreme scale with Apache Flink @Buzzwords 2015Computing recommendations at extreme scale with Apache Flink @Buzzwords 2015
Computing recommendations at extreme scale with Apache Flink @Buzzwords 2015Till Rohrmann
 
Deep Learning as a Cat/Dog Detector
Deep Learning as a Cat/Dog DetectorDeep Learning as a Cat/Dog Detector
Deep Learning as a Cat/Dog DetectorRoelof Pieters
 
Building Your Data Warehouse with Amazon Redshift
Building Your Data Warehouse with Amazon RedshiftBuilding Your Data Warehouse with Amazon Redshift
Building Your Data Warehouse with Amazon RedshiftAmazon Web Services
 
Spark ETL Techniques - Creating An Optimal Fantasy Baseball Roster
Spark ETL Techniques - Creating An Optimal Fantasy Baseball RosterSpark ETL Techniques - Creating An Optimal Fantasy Baseball Roster
Spark ETL Techniques - Creating An Optimal Fantasy Baseball RosterDon Drake
 
Gluecon Monitoring Microservices and Containers: A Challenge
Gluecon Monitoring Microservices and Containers: A ChallengeGluecon Monitoring Microservices and Containers: A Challenge
Gluecon Monitoring Microservices and Containers: A ChallengeAdrian Cockcroft
 

Viewers also liked (16)

Deep Dive - DynamoDB
Deep Dive - DynamoDBDeep Dive - DynamoDB
Deep Dive - DynamoDB
 
Devoxx France 2015 - The Docker Orchestration Ecosystem on Azure
Devoxx France 2015 - The Docker Orchestration Ecosystem on AzureDevoxx France 2015 - The Docker Orchestration Ecosystem on Azure
Devoxx France 2015 - The Docker Orchestration Ecosystem on Azure
 
Riak in Ten Minutes
Riak in Ten MinutesRiak in Ten Minutes
Riak in Ten Minutes
 
DataFrames: The Extended Cut
DataFrames: The Extended CutDataFrames: The Extended Cut
DataFrames: The Extended Cut
 
Architectural Patterns for Scaling Microservices and APIs - GlueCon 2015
Architectural Patterns for Scaling Microservices and APIs - GlueCon 2015Architectural Patterns for Scaling Microservices and APIs - GlueCon 2015
Architectural Patterns for Scaling Microservices and APIs - GlueCon 2015
 
ApacheCon NA 2015 Spark / Solr Integration
ApacheCon NA 2015 Spark / Solr IntegrationApacheCon NA 2015 Spark / Solr Integration
ApacheCon NA 2015 Spark / Solr Integration
 
Microservices: next-steps
Microservices: next-stepsMicroservices: next-steps
Microservices: next-steps
 
Apache Spark and the Emerging Technology Landscape for Big Data
Apache Spark and the Emerging Technology Landscape for Big DataApache Spark and the Emerging Technology Landscape for Big Data
Apache Spark and the Emerging Technology Landscape for Big Data
 
Scikit-learn for easy machine learning: the vision, the tool, and the project
Scikit-learn for easy machine learning: the vision, the tool, and the projectScikit-learn for easy machine learning: the vision, the tool, and the project
Scikit-learn for easy machine learning: the vision, the tool, and the project
 
Akka streams
Akka streamsAkka streams
Akka streams
 
Big Data Ecosystem at LinkedIn. Keynote talk at Big Data Innovators Gathering...
Big Data Ecosystem at LinkedIn. Keynote talk at Big Data Innovators Gathering...Big Data Ecosystem at LinkedIn. Keynote talk at Big Data Innovators Gathering...
Big Data Ecosystem at LinkedIn. Keynote talk at Big Data Innovators Gathering...
 
Computing recommendations at extreme scale with Apache Flink @Buzzwords 2015
Computing recommendations at extreme scale with Apache Flink @Buzzwords 2015Computing recommendations at extreme scale with Apache Flink @Buzzwords 2015
Computing recommendations at extreme scale with Apache Flink @Buzzwords 2015
 
Deep Learning as a Cat/Dog Detector
Deep Learning as a Cat/Dog DetectorDeep Learning as a Cat/Dog Detector
Deep Learning as a Cat/Dog Detector
 
Building Your Data Warehouse with Amazon Redshift
Building Your Data Warehouse with Amazon RedshiftBuilding Your Data Warehouse with Amazon Redshift
Building Your Data Warehouse with Amazon Redshift
 
Spark ETL Techniques - Creating An Optimal Fantasy Baseball Roster
Spark ETL Techniques - Creating An Optimal Fantasy Baseball RosterSpark ETL Techniques - Creating An Optimal Fantasy Baseball Roster
Spark ETL Techniques - Creating An Optimal Fantasy Baseball Roster
 
Gluecon Monitoring Microservices and Containers: A Challenge
Gluecon Monitoring Microservices and Containers: A ChallengeGluecon Monitoring Microservices and Containers: A Challenge
Gluecon Monitoring Microservices and Containers: A Challenge
 

Similar to Patterns of the Lambda Architecture -- 2015 April -- Hadoop Summit, Europe

Building Big Data Streaming Architectures
Building Big Data Streaming ArchitecturesBuilding Big Data Streaming Architectures
Building Big Data Streaming ArchitecturesDavid Martínez Rego
 
Using Hazelcast in the Kappa architecture
Using Hazelcast in the Kappa architectureUsing Hazelcast in the Kappa architecture
Using Hazelcast in the Kappa architectureOliver Buckley-Salmon
 
Enterprise Data World 2018 - Building Cloud Self-Service Analytical Solution
Enterprise Data World 2018 - Building Cloud Self-Service Analytical SolutionEnterprise Data World 2018 - Building Cloud Self-Service Analytical Solution
Enterprise Data World 2018 - Building Cloud Self-Service Analytical SolutionDmitry Anoshin
 
Engineering Machine Learning Data Pipelines Series: Streaming New Data as It ...
Engineering Machine Learning Data Pipelines Series: Streaming New Data as It ...Engineering Machine Learning Data Pipelines Series: Streaming New Data as It ...
Engineering Machine Learning Data Pipelines Series: Streaming New Data as It ...Precisely
 
Metail and Elastic MapReduce
Metail and Elastic MapReduceMetail and Elastic MapReduce
Metail and Elastic MapReduceGareth Rogers
 
Lessons Learned Replatforming A Large Machine Learning Application To Apache ...
Lessons Learned Replatforming A Large Machine Learning Application To Apache ...Lessons Learned Replatforming A Large Machine Learning Application To Apache ...
Lessons Learned Replatforming A Large Machine Learning Application To Apache ...Databricks
 
Introduction to the Typesafe Reactive Platform
Introduction to the Typesafe Reactive PlatformIntroduction to the Typesafe Reactive Platform
Introduction to the Typesafe Reactive PlatformBoldRadius Solutions
 
Patterns of the Lambda Architecture -- 2015 April - Hadoop Summit, Europe
Patterns of the Lambda Architecture -- 2015 April - Hadoop Summit, EuropePatterns of the Lambda Architecture -- 2015 April - Hadoop Summit, Europe
Patterns of the Lambda Architecture -- 2015 April - Hadoop Summit, EuropeFlip Kromer
 
Getting Started with Amazon Redshift
Getting Started with Amazon RedshiftGetting Started with Amazon Redshift
Getting Started with Amazon RedshiftAmazon Web Services
 
Getting Started with Amazon Redshift
Getting Started with Amazon RedshiftGetting Started with Amazon Redshift
Getting Started with Amazon RedshiftAmazon Web Services
 
Artur Borycki - Beyond Lambda - how to get from logical to physical - code.ta...
Artur Borycki - Beyond Lambda - how to get from logical to physical - code.ta...Artur Borycki - Beyond Lambda - how to get from logical to physical - code.ta...
Artur Borycki - Beyond Lambda - how to get from logical to physical - code.ta...AboutYouGmbH
 
AWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOT
AWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOTAWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOT
AWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOTAmazon Web Services
 
Big Data in the Cloud: How the RISElab Enables Computers to Make Intelligent ...
Big Data in the Cloud: How the RISElab Enables Computers to Make Intelligent ...Big Data in the Cloud: How the RISElab Enables Computers to Make Intelligent ...
Big Data in the Cloud: How the RISElab Enables Computers to Make Intelligent ...Amazon Web Services
 
Getting Started with Amazon Redshift
Getting Started with Amazon RedshiftGetting Started with Amazon Redshift
Getting Started with Amazon RedshiftAmazon Web Services
 
Data Apps with the Lambda Architecture - with Real Work Examples on Merging B...
Data Apps with the Lambda Architecture - with Real Work Examples on Merging B...Data Apps with the Lambda Architecture - with Real Work Examples on Merging B...
Data Apps with the Lambda Architecture - with Real Work Examples on Merging B...Altan Khendup
 
AWS re:Invent 2016: Accenture Cloud Platform Serverless Journey (ARC202)
AWS re:Invent 2016: Accenture Cloud Platform Serverless Journey (ARC202)AWS re:Invent 2016: Accenture Cloud Platform Serverless Journey (ARC202)
AWS re:Invent 2016: Accenture Cloud Platform Serverless Journey (ARC202)Amazon Web Services
 
2014 09-12 lambda-architecture-at-indix
2014 09-12 lambda-architecture-at-indix2014 09-12 lambda-architecture-at-indix
2014 09-12 lambda-architecture-at-indixYu Ishikawa
 
Cloud Lambda Architecture Patterns
Cloud Lambda Architecture PatternsCloud Lambda Architecture Patterns
Cloud Lambda Architecture PatternsAsis Mohanty
 
Five Early Challenges Of Building Streaming Fast Data Applications
Five Early Challenges Of Building Streaming Fast Data ApplicationsFive Early Challenges Of Building Streaming Fast Data Applications
Five Early Challenges Of Building Streaming Fast Data ApplicationsLightbend
 
Petabytes and Nanoseconds
Petabytes and NanosecondsPetabytes and Nanoseconds
Petabytes and NanosecondsRobert Greiner
 

Similar to Patterns of the Lambda Architecture -- 2015 April -- Hadoop Summit, Europe (20)

Building Big Data Streaming Architectures
Building Big Data Streaming ArchitecturesBuilding Big Data Streaming Architectures
Building Big Data Streaming Architectures
 
Using Hazelcast in the Kappa architecture
Using Hazelcast in the Kappa architectureUsing Hazelcast in the Kappa architecture
Using Hazelcast in the Kappa architecture
 
Enterprise Data World 2018 - Building Cloud Self-Service Analytical Solution
Enterprise Data World 2018 - Building Cloud Self-Service Analytical SolutionEnterprise Data World 2018 - Building Cloud Self-Service Analytical Solution
Enterprise Data World 2018 - Building Cloud Self-Service Analytical Solution
 
Engineering Machine Learning Data Pipelines Series: Streaming New Data as It ...
Engineering Machine Learning Data Pipelines Series: Streaming New Data as It ...Engineering Machine Learning Data Pipelines Series: Streaming New Data as It ...
Engineering Machine Learning Data Pipelines Series: Streaming New Data as It ...
 
Metail and Elastic MapReduce
Metail and Elastic MapReduceMetail and Elastic MapReduce
Metail and Elastic MapReduce
 
Lessons Learned Replatforming A Large Machine Learning Application To Apache ...
Lessons Learned Replatforming A Large Machine Learning Application To Apache ...Lessons Learned Replatforming A Large Machine Learning Application To Apache ...
Lessons Learned Replatforming A Large Machine Learning Application To Apache ...
 
Introduction to the Typesafe Reactive Platform
Introduction to the Typesafe Reactive PlatformIntroduction to the Typesafe Reactive Platform
Introduction to the Typesafe Reactive Platform
 
Patterns of the Lambda Architecture -- 2015 April - Hadoop Summit, Europe
Patterns of the Lambda Architecture -- 2015 April - Hadoop Summit, EuropePatterns of the Lambda Architecture -- 2015 April - Hadoop Summit, Europe
Patterns of the Lambda Architecture -- 2015 April - Hadoop Summit, Europe
 
Getting Started with Amazon Redshift
Getting Started with Amazon RedshiftGetting Started with Amazon Redshift
Getting Started with Amazon Redshift
 
Getting Started with Amazon Redshift
Getting Started with Amazon RedshiftGetting Started with Amazon Redshift
Getting Started with Amazon Redshift
 
Artur Borycki - Beyond Lambda - how to get from logical to physical - code.ta...
Artur Borycki - Beyond Lambda - how to get from logical to physical - code.ta...Artur Borycki - Beyond Lambda - how to get from logical to physical - code.ta...
Artur Borycki - Beyond Lambda - how to get from logical to physical - code.ta...
 
AWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOT
AWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOTAWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOT
AWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOT
 
Big Data in the Cloud: How the RISElab Enables Computers to Make Intelligent ...
Big Data in the Cloud: How the RISElab Enables Computers to Make Intelligent ...Big Data in the Cloud: How the RISElab Enables Computers to Make Intelligent ...
Big Data in the Cloud: How the RISElab Enables Computers to Make Intelligent ...
 
Getting Started with Amazon Redshift
Getting Started with Amazon RedshiftGetting Started with Amazon Redshift
Getting Started with Amazon Redshift
 
Data Apps with the Lambda Architecture - with Real Work Examples on Merging B...
Data Apps with the Lambda Architecture - with Real Work Examples on Merging B...Data Apps with the Lambda Architecture - with Real Work Examples on Merging B...
Data Apps with the Lambda Architecture - with Real Work Examples on Merging B...
 
AWS re:Invent 2016: Accenture Cloud Platform Serverless Journey (ARC202)
AWS re:Invent 2016: Accenture Cloud Platform Serverless Journey (ARC202)AWS re:Invent 2016: Accenture Cloud Platform Serverless Journey (ARC202)
AWS re:Invent 2016: Accenture Cloud Platform Serverless Journey (ARC202)
 
2014 09-12 lambda-architecture-at-indix
2014 09-12 lambda-architecture-at-indix2014 09-12 lambda-architecture-at-indix
2014 09-12 lambda-architecture-at-indix
 
Cloud Lambda Architecture Patterns
Cloud Lambda Architecture PatternsCloud Lambda Architecture Patterns
Cloud Lambda Architecture Patterns
 
Five Early Challenges Of Building Streaming Fast Data Applications
Five Early Challenges Of Building Streaming Fast Data ApplicationsFive Early Challenges Of Building Streaming Fast Data Applications
Five Early Challenges Of Building Streaming Fast Data Applications
 
Petabytes and Nanoseconds
Petabytes and NanosecondsPetabytes and Nanoseconds
Petabytes and Nanoseconds
 

Recently uploaded

专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改
专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改
专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改yuu sss
 
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfPredicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfBoston Institute of Analytics
 
Multiple time frame trading analysis -brianshannon.pdf
Multiple time frame trading analysis -brianshannon.pdfMultiple time frame trading analysis -brianshannon.pdf
Multiple time frame trading analysis -brianshannon.pdfchwongval
 
Student profile product demonstration on grades, ability, well-being and mind...
Student profile product demonstration on grades, ability, well-being and mind...Student profile product demonstration on grades, ability, well-being and mind...
Student profile product demonstration on grades, ability, well-being and mind...Seán Kennedy
 
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一fhwihughh
 
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档208367051
 
Call Girls In Dwarka 9654467111 Escorts Service
Call Girls In Dwarka 9654467111 Escorts ServiceCall Girls In Dwarka 9654467111 Escorts Service
Call Girls In Dwarka 9654467111 Escorts ServiceSapana Sha
 
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort servicejennyeacort
 
Semantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptxSemantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptxMike Bennett
 
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSINGmarianagonzalez07
 
DBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdfDBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdfJohn Sterrett
 
Heart Disease Classification Report: A Data Analysis Project
Heart Disease Classification Report: A Data Analysis ProjectHeart Disease Classification Report: A Data Analysis Project
Heart Disease Classification Report: A Data Analysis ProjectBoston Institute of Analytics
 
Data Factory in Microsoft Fabric (MsBIP #82)
Data Factory in Microsoft Fabric (MsBIP #82)Data Factory in Microsoft Fabric (MsBIP #82)
Data Factory in Microsoft Fabric (MsBIP #82)Cathrine Wilhelmsen
 
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024thyngster
 
Student Profile Sample report on improving academic performance by uniting gr...
Student Profile Sample report on improving academic performance by uniting gr...Student Profile Sample report on improving academic performance by uniting gr...
Student Profile Sample report on improving academic performance by uniting gr...Seán Kennedy
 
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)jennyeacort
 
Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...
Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...
Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...ssuserf63bd7
 
NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...
NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...
NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...Amil Baba Dawood bangali
 
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptx
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptxNLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptx
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptxBoston Institute of Analytics
 

Recently uploaded (20)

专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改
专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改
专业一比一美国俄亥俄大学毕业证成绩单pdf电子版制作修改
 
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfPredicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
 
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
 
Multiple time frame trading analysis -brianshannon.pdf
Multiple time frame trading analysis -brianshannon.pdfMultiple time frame trading analysis -brianshannon.pdf
Multiple time frame trading analysis -brianshannon.pdf
 
Student profile product demonstration on grades, ability, well-being and mind...
Student profile product demonstration on grades, ability, well-being and mind...Student profile product demonstration on grades, ability, well-being and mind...
Student profile product demonstration on grades, ability, well-being and mind...
 
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
 
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
原版1:1定制南十字星大学毕业证(SCU毕业证)#文凭成绩单#真实留信学历认证永久存档
 
Call Girls In Dwarka 9654467111 Escorts Service
Call Girls In Dwarka 9654467111 Escorts ServiceCall Girls In Dwarka 9654467111 Escorts Service
Call Girls In Dwarka 9654467111 Escorts Service
 
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
 
Semantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptxSemantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptx
 
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
 
DBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdfDBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdf
 
Heart Disease Classification Report: A Data Analysis Project
Heart Disease Classification Report: A Data Analysis ProjectHeart Disease Classification Report: A Data Analysis Project
Heart Disease Classification Report: A Data Analysis Project
 
Data Factory in Microsoft Fabric (MsBIP #82)
Data Factory in Microsoft Fabric (MsBIP #82)Data Factory in Microsoft Fabric (MsBIP #82)
Data Factory in Microsoft Fabric (MsBIP #82)
 
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
 
Student Profile Sample report on improving academic performance by uniting gr...
Student Profile Sample report on improving academic performance by uniting gr...Student Profile Sample report on improving academic performance by uniting gr...
Student Profile Sample report on improving academic performance by uniting gr...
 
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
 
Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...
Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...
Statistics, Data Analysis, and Decision Modeling, 5th edition by James R. Eva...
 
NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...
NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...
NO1 Certified Black Magic Specialist Expert Amil baba in Lahore Islamabad Raw...
 
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptx
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptxNLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptx
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptx
 

Patterns of the Lambda Architecture -- 2015 April -- Hadoop Summit, Europe

  • 1. Patterns of the Lambda Architecture Truth and Lies at the Edge of Scale Flip Kromer — CSC I’m Flip Kromer, Distinguished Engineer at CSC. If you are a large enterprise company looking to add Big Data capabilities — especially one involving legacy systems — we’re a big, stable company that specializes in turning technology into an enterprise-grade solution.
  • 2. Pattern Set This talk will equip you with two things. One is patterns for how we design high-scale architectures to solve specific use cases, now that extra infrastructure is nearly free.
  • 3. Tradeoff Rules PICK ANY TWO Along with a set of tradeoff rules along the lines of the pick-any-two trinity, but more sophisticated.
  • 4. Lambda Architecture So what is the Lambda Architecture? Here are two examples.
  • 5. Search w/ Update Build Indexes A Ton of Text Historical Index Live Indexer More Text Recent Index API In this system, we have a whole ton of historical text, with more arriving all the time, and want to allow immediate real-time search across the whole corpus.
  • 6. Search w/ Update Build Indexes A Ton of Text Historical Index Live Indexer More Text Recent Index API Build Main Index We will use a large periodic batch job to create indexes on the historical data. This takes a while — far longer than our recency demands allow — so we might as well have our elephants use clever algorithms and optimally organize the data for rapid retrieval.
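The batch step above can be sketched as a single full pass over the corpus that emits an inverted index. This is a toy illustration, not the system on the slides: the corpus shape, tokenizer, and index layout are all hypothetical stand-ins.

```python
from collections import defaultdict

def build_historical_index(corpus):
    """Batch job: one full pass over every (doc_id, text) pair.

    Because we rebuild from scratch each run, we can afford the extra
    work of sorting the posting lists for fast retrieval -- the 'clever
    algorithms' step the slides describe.
    """
    index = defaultdict(set)
    for doc_id, text in corpus:
        for term in text.lower().split():
            index[term].add(doc_id)
    # Freeze into sorted posting lists, optimized for lookup.
    return {term: sorted(docs) for term, docs in index.items()}

corpus = [(1, "big data lambda"), (2, "lambda architecture patterns")]
historical = build_historical_index(corpus)
```

The key property is that the batch job owns the whole corpus and can rebuild everything; freshness comes from the speed layer, not from patching this structure in place.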
  • 7. Search w/ Update Build Indexes A Ton of Text Historical Index Live Indexer More Text Recent Index API Update Recent Index Until the next stampede delivers an updated index, as each new record arrives we will not only file it with the historical data but also use simple, fast indexing to make it immediately searchable. Merging new records directly would require stuffing them into the right place in the historical index, which eventually means moving records around, which demands far too much time and complexity to be workable.
  • 8. Search w/ Update Build Indexes A Ton of Text Historical Index Live Indexer More Text Recent Index API Serve Result The system that serves the data simply pulls from both indexes at read time and merges the results.
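As a sketch, the serving layer's merge is just a read against both indexes, preferring the fresher one on conflict. All names here are illustrative, not from the talk:

```python
# Sketch of a serving layer that merges batch and speed results at read time.
# The "indexes" are plain dicts mapping a query term to (doc_id, score) pairs.

def search(query, historical_index, recent_index, limit=10):
    """Query both indexes and merge by score, letting fresher entries win."""
    hits = {}
    # Batch-built index: big, well-organized, slightly stale.
    for doc_id, score in historical_index.get(query, []):
        hits[doc_id] = score
    # Speed-layer index: small, crude, but covers everything since the last batch run.
    for doc_id, score in recent_index.get(query, []):
        hits[doc_id] = score  # recent wins on conflict
    return sorted(hits, key=hits.get, reverse=True)[:limit]

historical = {"lambda": [("doc1", 0.9), ("doc2", 0.5)]}
recent = {"lambda": [("doc3", 0.7), ("doc2", 0.8)]}
print(search("lambda", historical, recent))  # ['doc1', 'doc2', 'doc3']
```

Note that the reader never knows (or cares) which layer a hit came from; that decoupling is the point.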
  • 9. Build Indexes A Ton of Text Historical Index Live Indexer More Text Recent Index API Lambda Architecture Batch Speed Serving We have a batch layer for the global corpus; a speed layer for recent results; and a serving layer for access.
  • 10. Build Indexes A Ton of Text Historical Index Live Indexer More Text Recent Index API Lambda Architecture Global Relevant Immediate The same three layers, recast: the batch layer is global, the speed layer is relevant, and the serving layer is immediate.
  • 11. Train Recomm’der Visitor History History Alsobuy Visitor: Product Visitor Alsobuy Update Recommendation Fetch/Update History Visitor: Product History Webserver Recommender Another familiar architecture is a high-scale recommender system — “Given that the user has looked at mod-style dresses and mason jars, show them these knitting needles”. This diagram shows a recommender, but most machine-learning systems look like this.
  • 12. Train Recomm’der Visitor History History Alsobuy Visitor: Product Visitor Alsobuy Update Recommendation Fetch/Update History Visitor: Product History Webserver Recommender Build Model You have one system process all the examples you’ve ever seen to produce a predictive model. The trained model it produces can then react immediately to all future examples as they occur.
  • 13. Train Recomm’der Visitor History History Alsobuy Visitor: Product Visitor Alsobuy Update Recommendation Fetch/Update History Visitor: Product History Webserver Recommender Applies Model The trained model can then react immediately to all future examples as they occur. In this system we have one component apply the model and store the recommendation. Your operations team is better off with two systems that can fail without breaking the site than with the apply-model step coupled to serving pages.
  • 16. Lambda Arch Layers • Batch layer Deep Global Truth throughput • Speed layer Relevant Local Truth throughput • Serving layer Rapid Retrieval latency The batch and speed layers care about throughput; the serving layer cares about latency.
  • 17. Lambda Arch: Technology • Batch layer Hadoop, Spark, Batch DB Reports • Speed layer Storm+Trident, Spark Streaming, Samza, AMQP, … • Serving layer Web APIs, Static Assets, RPC, …
  • 18. Lambda Architecture Batch Speed Serving λ λ Where does the name lambda come from? In my head it’s because the flow diagram…
  • 19. Lambda Architecture Batch Speed Serving λ looks like the shape of the character for lambda
  • 20. Lambda Architecture λ(v) • Pure Function on immutable data But really it means this new mindset of building pure functions (lambdas) on immutable data.
  • 22. Ideal Data System • Capacity -- Can process arbitrarily large amounts of data • Affordability -- Cheap to run • Simplicity -- Easy to build, maintain, debug • Resilience -- Jobs/Processes fail&restart gracefully • Responsiveness -- Low latency for delivering results • Justification -- Incorporates all relevant data into result • Comprehensive -- Answer questions about any subject • Recency -- Promptly incorporates changes in world • Accuracy -- Few approximations or avoidable errors The laziest, and therefore best, knobs are the Capacity/Affordability ones. The pre-big-data era can be thought of as one where only those two exist. Big Data broke the handle off the Capacity knob, either because Affordability ramps too fast or because the speed of light starts threatening resilience, responsiveness or recency. * _Comprehensive_: complete; including all or nearly all elements or aspects of something * _concise_: giving a lot of information clearly and in a few words; brief but…
  • 23. Ideal Data System • Capacity -- Can process arbitrarily large amounts of data • Affordability -- Cheap to run • Simplicity -- Easy to build, maintain, debug • Resilience -- Jobs/Processes fail&restart gracefully • Responsiveness -- Low latency for delivering results • Justification -- Incorporates all relevant data into result • Comprehensive -- Answer questions about any subject • Recency -- Promptly incorporates changes in world • Accuracy -- Few approximations or avoidable errors You would think that what mattered was correctness — justified true belief
  • 24. Ideal Data System • Capacity -- Can process arbitrarily large amounts of data • Affordability -- Cheap to run • Simplicity -- Easy to build, maintain, debug • Resilience -- Jobs/Processes fail&restart gracefully • Responsiveness -- Low latency for delivering results • Justification -- Incorporates all relevant data into result • Comprehensive -- Answer questions about any subject • Recency -- Promptly incorporates changes in world • Accuracy -- Few approximations or avoidable errors When you look at what we actually do, the non-negotiables are that it be manageable and economical, given that you must process arbitrarily large amounts of data. Truth is a nice-to-have.
  • 25. Tradeoff Rules PICK ANY TWO Set of tradeoff rules along the lines of the pick-any-two trinity but more sophisticated
  • 26. At Scale AND THIS THIS AND TRY TO BE GOOD Basically, given big data you have to accommodate any amount of data and produce static reports or queries that execute within the duration of human patience — so you must be fast and cheap, sacrificing good.
  • 28. Train Recomm’der Visitor History History Alsobuy Visitor: Product Visitor Alsobuy Update Recommendation Fetch/Update History Visitor: Product History Webserver Recommender The world you’re modeling changes — new sets of products are released, new and varied customers sign up, changes to the site drive new behavior — but it changes slowly. So it’s no big deal if the training stage is only run once a week over several hours. The first example follows a pretty familiar general form I’ll call “Train / React”. You have one system process all the examples you’ve ever seen to produce a predictive model. The trained model it produces can then react immediately to all future examples as they occur.
  • 29. Pattern: Train / React • Model of the world lets you make immediate decisions • World changes slowly, so we can re-build model at leisure • Relax: Recency • Batch layer: Train a machine learning model • Speed layer: Apply that model • Examples: most Machine Learning thingies (Recommender) Big fat job that only needs to run occasionally; results of the job inform what happens immediately
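The Train / React split can be sketched in a few lines. This is a minimal, hypothetical co-occurrence “model” standing in for real machine learning; none of these names come from the talk:

```python
# Minimal Train / React shape: a slow batch trainer and a fast per-event applier.
from collections import defaultdict
from itertools import combinations

def train(purchase_history):
    """Batch layer: scan all history, build an 'also bought' model. Run rarely."""
    alsobuy = defaultdict(lambda: defaultdict(int))
    for basket in purchase_history:
        for a, b in combinations(sorted(set(basket)), 2):
            alsobuy[a][b] += 1
            alsobuy[b][a] += 1
    return alsobuy

def recommend(model, item, n=3):
    """Speed layer: apply the frozen model to one event, immediately."""
    return [other for other, _ in
            sorted(model[item].items(), key=lambda kv: -kv[1])[:n]]

history = [["dress", "jar"], ["dress", "jar", "needles"], ["jar", "needles"]]
model = train(history)       # the big, occasional job
print(recommend(model, "jar"))  # the cheap, immediate reaction
```

The asymmetry is the pattern: `train` may take hours over all history; `recommend` is a dictionary lookup per request.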
  • 30. Search w/ Update Build Indexes A Ton of Text Historical Index Live Indexer More Text Recent Index API
  • 31. Pattern: Baseline / Delta • Understanding the world takes a long time • World changes much faster than that, and you care • Relax: Simplicity, Accuracy • Batch layer: Process the entire world • Speed layer: Handle any changes since last big run • Examples: Real-time Search index; Count Distinct; other Approximate Stream Algorithms In Train / React the world changes, but slowly; training in batch mode is just fine. In Baseline / Delta the world changes so quickly that you can’t run the batch job fast enough. So you are sacrificing simplicity — there are two systems where there was only one — and accuracy — the recent records won’t update global normalized frequencies.
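A minimal sketch of the Baseline / Delta read path. `BaselineDeltaStore` and its method names are illustrative, not from the talk:

```python
# Baseline / Delta as a key-value store: the batch layer drops a full snapshot,
# the speed layer overlays everything that arrived since.

class BaselineDeltaStore:
    def __init__(self):
        self.baseline = {}   # rebuilt wholesale by the batch job
        self.delta = {}      # small, mutable, absorbed into the next baseline

    def batch_rebuild(self, snapshot):
        """Batch layer: replace the baseline and discard now-covered deltas."""
        self.baseline = dict(snapshot)
        self.delta.clear()

    def write(self, key, value):
        """Speed layer: record changes without touching the big baseline."""
        self.delta[key] = value

    def read(self, key):
        """Serving layer: the delta wins, because it is newer, if rougher."""
        return self.delta.get(key, self.baseline.get(key))

store = BaselineDeltaStore()
store.batch_rebuild({"a": 1, "b": 2})
store.write("b", 99)
store.write("c", 3)
print(store.read("a"), store.read("b"), store.read("c"))  # 1 99 3
```

The delta layer never rewrites the baseline in place; that merge only happens at the next batch run, which is exactly why it stays simple and fast.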
  • 32. Pagerank Converge Pagerank Friend Relations User Pagerank Retrieve Bob’s Facebook Ntwk Bob Bob’s Friends’ Pageranks Estimate Bob’s Pagerank But don’t bother updating Bob’s Friends (or friends friends or …) API (Lazy Propagation)
  • 33. Pagerank 48 24 42 12 12 6 24 24 42 6 6 6 6 6 6 6 This next example has an importantly different flavor. The core way that Google identifies important web pages is the “Pagerank” algorithm, which basically says “a page is interesting if other interesting pages link to it”.That’s recursive of course but the math works out.You can do similar things on a social network like twitter to find spammers and superstars, or among college football teams or world of warcraft players to prepare a competitive ranking, or among buyers and sellers in a market to detect fraud. To define a reputation ranking on say Twitter you simulate a game of multiple rounds.
  • 34. 48 24 42 12 12 6 24 24 42 6 6 6 6 6 6 6 9 4 - 5 - New Record Appears ? Doing this is kinda literally what Hadoop was born to do, and it’s a simple Hadoop-101 level program. Acting out all those rounds using every interaction we’ve ever seen takes a fair amount of time, though, and so a problem comes when we meet a new person. This new person accrues some reputational jellybeans, and we don’t want to live in total ignorance of what their score is; and they dispatch some as well, which should change the scores of those they follow.
  • 35. 48 24 42 12 12 6 24 24 42 6 6 6 6 6 6 6 9 4 - 5 - Update Using Local 12÷3 = 4 24÷5 ≈ 5 9 Well, we can roughly guess the score of the new node by having their followers pay out a jellybean share proportional to what they would have gotten in the last pagerank round. “A Guess beats a Blank Stare” * World rate of change not really relevant * The solution is actually to tell a lie
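The slide's arithmetic as code: a purely local estimate, assuming we already know each follower's rank and out-degree from the last batch run. The names are illustrative:

```python
# Guess a new node's rank from its immediate followers only;
# no global recomputation, no propagation to anyone else.

def estimate_rank(followers, rank, out_degree):
    """Each follower pays out the share it would grant in one pagerank round."""
    return sum(rank[f] / out_degree[f] for f in followers)

rank = {"u1": 12, "u2": 24}       # scores from the last batch pagerank run
out_degree = {"u1": 3, "u2": 5}   # how many accounts each follower points at
guess = estimate_rank(["u1", "u2"], rank, out_degree)
print(round(guess))  # 12/3 + 24/5 = 4 + 4.8, which rounds to 9 as on the slide
```

That is the whole speed layer: one pass over the immediate neighborhood, reading batch-layer output, writing one new value.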
  • 36. 48 24 42 12 12 6 24 24 42 6 6 6 6 6 6 6 9 4 - 5 - …Ignoring Correctness meh But we’re not going to update the neighbors.You’d be concurrently updating an arbitrary number of outbound nodes, and then of course those nodes’ changes should rightfully propagate as well — this is why we play the multiple pagerank rounds in the first place. What we do instead is lie. Look, planes don’t fall out of the sky if you get someone’s coolness quotient wrong in the first decimal point.
  • 37. Batch Updates Graph 42 30 36 11 10 6 21 21 36 4 6 6 6 5 5 4 9 3 9 6 (A Guess beats a Blank) This has an importantly different flavor * World rate of change not really relevant * The solution is actually to tell a lie
  • 38. Pattern: World/Local • Understanding the world needs full graph • You can tell a little white lie reading immediate graph only • Relaxing: Accuracy, Justification • Batch layer: uses global graph information • Speed layer: just reads immediate neighborhood • Examples:“Whom to follow”, Clustering, anything at 2nd- degree (friend-of-a-friend) Problem isn’t so much about the volume of data, it’s about how _far away_ that data is. You can’t justify doing that second-order query for three reasons: * time * compute resources * computational risk
  • 39. Pattern: Guess Beats Blank • You can’t calculate a good answer quickly • But Comprehensiveness is a must • Relaxing: Accuracy, Justification • Batch layer: finds the correct answer • Speed layer: makes a reasonable guess • Examples:Any time the sparse-data case is also the most valuable In this case, we can’t sacrifice comprehensiveness — for every record that exists, we must return a relevant answer. So we sacrifice truthfulness — or more precisely, we sacrifice accuracy and justification.
  • 40. Marine Corps’ 80% Rule “Any decision made with more than 80% of the necessary information is hesitation” — “The Marine Corps Way,” Santamaria & Martino When there’s lots of data already, the imperfect result in the speed layer doesn’t have a huge effect. When there isn’t much data, it’s overwhelmingly better to fill in with an imperfect result.
  • 41. A Guess Beats a Blank • You can’t calculate a good answer quickly • But Comprehensiveness is a must • Relaxing: Accuracy, Justification • Batch layer: finds the correct answer • Speed layer: makes a reasonable guess • Examples:Any time the sparse-data case is also the most valuable In this case, we can’t sacrifice comprehensiveness — for every record that exists, we must return a relevant answer. So we sacrifice truthfulness — or more precisely, we sacrifice accuracy and justification.
  • 42. Security Find Potential Evilness Connection Counts Agents of Interest Store Interaction Net Connections Detected Evilnesses Approximate Streaming Agg Agent of Interest? Dashboard In security, you have the data-breach type problems — why is someone strip-mining computers one after another and shipping the results to a server in [name your own semi-friendly country]? — and Bradley-Manning-type problems — why is a GS-5 at a console in Kuwait downloading every single diplomatic dispatch?
  • 43. Pattern: Slow Boil/Flash Fire • Two tempos of data: months vs milliseconds • Short-term data too much to store • Long-term data too much to handle immediately • Often accompanies Baseline / Delta, World / Local • Examples: • Trending Topics • Insider Security World/Local: Why has a contractor sysadmin in Hawaii accessed PowerPoint presos from every single group within our organization?
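One way to sketch the two tempos, with illustrative names and thresholds: the batch layer supplies a long-term baseline rate (the slow boil), while the speed layer keeps only a short window of raw events (the flash fire):

```python
# Two tempos in one detector: a batch-built long-term profile (months of data,
# boiled down to one number) and a short sliding window that never stores history.
from collections import deque

class TwoTempoDetector:
    def __init__(self, baseline_rate, window_secs=60, factor=10):
        self.baseline_rate = baseline_rate  # events/min, computed by the batch layer
        self.window = deque()               # only the last window_secs of timestamps
        self.window_secs = window_secs
        self.factor = factor                # how far above baseline counts as an alarm

    def observe(self, ts):
        """Speed layer: admit one event, expire the old ones, check the rate."""
        self.window.append(ts)
        while self.window and self.window[0] < ts - self.window_secs:
            self.window.popleft()
        return len(self.window) > self.baseline_rate * self.factor

detector = TwoTempoDetector(baseline_rate=2)  # batch job said ~2 connections/min is normal
alarms = [detector.observe(t) for t in range(30)]  # a burst: 30 events in 30 seconds
print(alarms[0], alarms[-1])  # False True
```

Neither tempo could do this alone: the window can't know what "normal" is, and the batch profile can't see the burst until far too late.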
  • 45. Banking, Oversimplified Reconcile Accounts Account Balances Event Store Transaction Update Records nice-to-have essential The batch layer wins over the fast layer (CAP Tradeoffs).
  • 46. Pattern: C-A-P Tradeoffs • C-A-P tradeoffs: • Can’t depend on when data will roll in (Justification) • Can’t live in ignorance (Comprehensiveness) • Batch layer: The final answer • Speed layer: Actionable views • Examples: Security (Authorization vs Auditing), lots of counting problems (Banking)
  • 47. Pattern: Out-of-Order • C-A-P tradeoffs: • Can’t depend on when data will roll in (Justification) • Can’t live in ignorance (Comprehensiveness) • Batch layer: The final answer • Speed layer: Actionable views • Examples: Security (Authorization vs Auditing), lots of counting problems (Banking)
  • 48. Common Theme The System Asymptotes to Truth over time We keep seeing this common theme — you are building a system that approaches correctness over time. This leads to a best practice that I’ll call the improver pattern:
  • 50. • Scrapers: yield partial records • Unifier: connects all identifiers for a common object • Resolver: combines partial records into unified record Entity Resolution
  • 51. Pattern: Improver • Improver: function(best guess, {new facts}) ~> new best guess • Batch layer: f(blank, {all facts}) ~> best possible guess • Speed layer: f(current best, {new fact}) ~> new best guess • Batch and speed layer share same code & contract, asymptote to truth. The way you build your resolver is such that the same function serves both layers.
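A minimal sketch of the contract, with a toy field-merging function standing in for a real entity resolver (all names here are illustrative):

```python
# The Improver contract: one pure function, two call patterns.

def improve(best_guess, new_facts):
    """function(best guess, {new facts}) ~> new best guess."""
    merged = dict(best_guess)
    for fact in new_facts:
        # A toy resolver: newer non-empty fields win. Real resolution is messier.
        merged.update({k: v for k, v in fact.items() if v is not None})
    return merged

all_facts = [{"name": "Flip", "city": None}, {"city": "Austin"}]

# Batch layer: f(blank, {all facts}) ~> best possible guess
batch_view = improve({}, all_facts)

# Speed layer: f(current best, {new fact}) ~> new best guess
speed_view = improve(batch_view, [{"employer": "CSC"}])

print(batch_view)  # {'name': 'Flip', 'city': 'Austin'}
print(speed_view)  # same, plus 'employer': 'CSC'
```

Because both layers call the identical function, the speed layer's incremental answers and the batch layer's from-scratch answers converge on the same truth.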
  • 52. Two Big Ideas • Fine-grained control over architectural tradeoffs • Truth lives at the edge, not the middle Lets you trade off how quickly, how expensively, how true, how justified New Paradigm for how, when and where we handle truth
  • 53. Two Big Ideas • Fine-grained control over architectural tradeoffs • Approximate a pure function on all data • What we do now that architecture is free • Truth lives at the edge, not the middle Lets you trade off how quickly, how expensively, how true, how justified New Paradigm for how, when and where we handle truth
  • 54. Two Big Ideas • Fine-grained control over architectural tradeoffs • Approximate a pure function on all data • What we do now that architecture is free • Truth lives at the edge, not the middle • Data is syndicated forward from arrival to serving • “Query at write time” Lets you trade off how quickly, how expensively, how true, how justified New Paradigm for how, when and where we handle truth
  • 55. • Lambda architecture isn’t about speed layer / batch layer. • It's about • moving truth to the edge, not the center; • enabling fine-grained tradeoffs against fundamental limits; • decoupling consumer from infrastructure • decoupling consumer from asynchrony • …with profound implications for how you build your teams λ Arch: Truth, not Plumbing This way of doing it simplifies architecture: Local interactions only Elimination of asynchrony Which in turn profoundly simplifies development and operations And allows you to structure team like you do the
  • 56. Lambda Architecture for a Dinky Little Blog So far, I’ve talked about a bunch of reasons why you might be led **to** a lambda architecture. And when there’s a new technology, people always first ask why they should do it differently — which is a wise thing to ask and a foolish thing to insist on. But let’s look at it from the other end, from what life is like if this were the natural state of being. And to do so, let’s take the most unjustifiable case for a high-scale architecture: a blog engine.
  • 57. Blog: Traditional Approach • Familiar with the ORM Rails-style blog: • Models: User, Article, Comment • Views: • /user/:id (user info, links to their articles and comments); • /articles (list of articles); • /articles/:id (article content, comments, author info)
  • 58. User (id 3, name joeman, homepage http://…, photo http://…, bio “…”); Article (id 7, title “The Crisis”, body “These are…”, author_id 3, created_at 2014-08-08); Comment (id 12, body “lol”, article_id 7, author_id 3)
  • 59. [Page mockups: a “user show” page (author name, photo, bio, and the user’s articles with snippets) beside an “article show” page (article title and body, author sidebar, comments list).]
  • 60. articles users comments Webserver Traditional: Assemble on Read DB models are the sole source of truth; denormalized; used directly by reader and writer; the view is constructed from spare parts at read time.
  • 61. Syndicate on Write Δ article Biographers View Fragments show Reporters Δ user Biographers Δ com’t Biographers articles users comments
  • 62. • (…hack hack hack…) /articles/v2/show.json /articles/v1/show.json • (…hack hack hack…) What data model would you like to receive? {“title”:”…”, “body”:”…”,…} lol um can I also have Data Engineer Web Engineer {“title”:”…”, “body”:”…”, “snippet”:…}
  • 63. Syndicated Data • The Data is always _there_ • …but sometimes it’s more perfect than other times.
  • 64. Syndicated Data • Reports are cheap, single-concern, and faithful to the view. • You start thinking like the customer, not the database • All pages render in O(1): • Your imagination doesn’t have to fit inside a TCP timeout • Data is immutable, flows are idempotent: • Interface change is safe • Data is always _there_, • Asynchrony doesn’t affect consumers • Everything is decoupled: • Way harder to break everything One of the worst pains in asses is the query that takes 1500 milliseconds. Needs to be immediate, usually mission-critical, expensive in all ways
  • 65. • Lambda architecture isn’t about speed layer / batch layer. • It's about • moving truth to the edge, not the center; • enabling fine-grained tradeoffs against fundamental limits; • decoupling consumer from infrastructure • decoupling consumer from asynchrony • …with profound implications for how you build your teams λ Arch: Truth, not Plumbing This way of doing it simplifies architecture: Local interactions only Elimination of asynchrony Which in turn profoundly simplifies development and operations And allows you to structure team like you do the
  • 66.
  • 67.
  • 68. Changes update models update article update user update comments Δ article Δ user Δ com’nt models user com’nt article history Models stay the same: User, Article, Comment, updated directly. Reporters can subscribe to models. On update, a reporter receives the updated object and can do anything else it wants; typically, it creates a new report. Reports live in the target domain: faithful to the data consumer. In this case, they look very close to the information hierarchy of the rendered page. All pages render in O(1). Your imagination is not constrained by the length of a TCP timeout.
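The subscribe-and-rebuild flow can be sketched like so. `Models`, `article_reporter`, and the fragment keys are illustrative stand-ins for the blog's reporters:

```python
# Syndicate-on-write sketch: reporters subscribe to model updates and
# pre-render view fragments, so the read path is an O(1) lookup.

class Models:
    def __init__(self):
        self.subscribers = []
        self.fragments = {}  # the "view fragments" store the pages read from

    def subscribe(self, reporter):
        self.subscribers.append(reporter)

    def update(self, kind, obj):
        """Write path: push each change to every interested reporter."""
        for reporter in self.subscribers:
            reporter(self, kind, obj)

def article_reporter(models, kind, obj):
    """Rebuilds the 'compact article' fragment whenever an article changes."""
    if kind == "article":
        models.fragments[("compact_article", obj["id"])] = {
            "title": obj["title"], "snippet": obj["body"][:20]}

models = Models()
models.subscribe(article_reporter)
models.update("article", {"id": 7, "title": "The Crisis",
                          "body": "These are the times that try men's souls."})

# Read path: no joins, no queries; the fragment is already there.
print(models.fragments[("compact_article", 7)]["title"])  # The Crisis
```

Each reporter is single-concern and cheap to add, which is why new views stop requiring new queries.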
  • 69. Models Trigger Reporters update article update user update comments Δ article Δ user Δ com’nt models user com’nt article history compact article user’s # articles expanded user user’s # comments sidebar user compact comment expanded article exp’d article compact article user’s # articles exp’d user sidebar user user’s # comments compact comment micro user micro user
  • 70. Serve Report Fragments exp’d article compact article user’s # articles exp’d user sidebar user user’s # comments compact comment micro user show article [Mockup: the rendered “article show” page (title, body, author sidebar, comments).]
  • 71. [Mockup: the rendered “article show” page beside its report.] article show rendered { "title": "Article Title", "body": "Article Body Lorem [...]", "author": { ... }, "comments": [ {"comment_id": 1, "body": "First Post", ...}, {"comment_id": 2, "body": "lol", ...}, ... ] }
  • 72. Serve Report Fragments [Mockup: the rendered “article show” page (title, body, author sidebar, comments).] exp’d article compact article user’s # articles exp’d user sidebar user user’s # comments compact comment micro user show user
  • 73. Reports are Cheap update article update user update comments Δ article Δ user Δ com’nt models user com’nt article history compact article user’s # articles expanded user user’s # comments sidebar user compact comment expanded article exp’d article compact article user’s # articles exp’d user sidebar user user’s # comments compact comment micro user micro user list articles show article list user’s articles show user
  • 74. Two Big Ideas • Fine-grained control over those architectural tradeoffs • Truth lives at the edge, not the middle Lets you trade off how quickly, how expensively, how true, how justified New Paradigm for how, when and where we handle truth
  • 80. Cannot have Consistency Product Resolver key words mfr & model ASIN VendorListing Fetch Products Unified Products Resolve & Update Listings Unify Products
  • 82. Objections • Three objections 1. Why hasn’t it been done before 2. Architecture Astronaut 3. I’m not at high scale • Response 1. Chef/Puppet/Docker/etc 2. Chef/Puppet/Docker/etc 3. Shut Up
  • 83.
  • 84. Objections • Two APIs? Really? • Yes. Guilty. That’s dumb and must be fixed. • Spark or Samza, if you’re willing to only drink one flavor of Kool-Aid • EZbake.io, a CSC / 42six project to attack this • …but we shouldn’t be living at the low level anyhow
  • 85. Objections • Orchestration: “logical plan” (dataflow graph) • Optimization/Allocation: “physical plan” (what goes where) • Resource Projector: instantiates infrastructure • HTTP listeners,Trident streams, Oozie scheduling, ETL flows, cron jobs, etc • Transport Machineries: • move data around, fulfilling locality/ordering/etc guarantees • Data Processing: UDFs and operators