Scaling Apache Giraph
Nitay Joffe, Data Infrastructure Engineer
nitay@apache.org
June 3, 2013
Agenda
1 Background
2 Scaling
3 Results
4 Questions
Background
What is Giraph?
• Apache open source graph computation engine based on Google’s Pregel.
• Support for Hadoop, Hive, HBase, and Accumulo.
• BSP model with a simple “think like a vertex” API.
• Combiners, Aggregators, Mutability, and more.
• Configurable Graph<I,V,E,M> (each type parameter implements Hadoop's Writable):
– I: Vertex ID
– V: Vertex Value
– E: Edge Value
– M: Message data
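To make the parameterization concrete, here is a minimal sketch of a user computation class, assuming Giraph's BasicComputation-style API (Giraph 1.1+); the specific Writable types chosen are only illustrative:

```java
import org.apache.giraph.graph.BasicComputation;
import org.apache.giraph.graph.Vertex;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.LongWritable;

// Graph<I, V, E, M> as it shows up in user code; all four types are Writables.
public class MyComputation extends BasicComputation<
    LongWritable,    // I: vertex ID
    DoubleWritable,  // V: vertex value
    FloatWritable,   // E: edge value
    DoubleWritable> {// M: message data

  @Override
  public void compute(Vertex<LongWritable, DoubleWritable, FloatWritable> vertex,
                      Iterable<DoubleWritable> messages) {
    // "Think like a vertex": read messages, update this vertex, send messages.
    vertex.voteToHalt();
  }
}
```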
What is Giraph NOT?
• A Graph database. See Neo4J.
• A completely asynchronous generic MPI system.
• A slow tool.
Why not Hive?
[Diagram: iterative MapReduce dataflow: input splits feed map tasks, which write intermediate files for reduce tasks, which write outputs that become the next iteration's input.]
• Too much disk. Limited in-memory caching.
• Each iteration becomes a MapReduce job!
Giraph components
Master – Application coordinator
• Synchronizes supersteps
• Assigns partitions to workers before superstep begins
Workers – Computation & messaging
• Handle I/O – reading and writing the graph
• Computation/messaging of assigned partitions
ZooKeeper
• Maintains global application state
Giraph Dataflow
[Diagram: three phases across the master and workers.]
1 Loading the graph: workers read input format splits and load / send graph partitions (Parts 0-3) as assigned by the master.
2 Compute / Iterate: each worker computes and sends messages for its partitions of the in-memory graph, then sends stats to the master and iterates.
3 Storing the graph: workers write their partitions through the output format.
Giraph Job Lifetime
[Diagram: Input → Compute Superstep → "All vertices halted?" If No, run another superstep; if Yes, check "Master halted?" If No, continue; if Yes, write Output.]
Vertex Lifecycle: a vertex is Active until it votes to halt, becoming Inactive; receiving a message makes it Active again.
Simple Example – Compute the maximum value
[Diagram: two processors exchange vertex values (5, 1, 2, ...) over successive supersteps until every vertex holds the global maximum, 5.]
Connected Components
e.g. Finding Communities
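As a sketch of what the maximum-value example looks like in code (assuming Giraph 1.1's BasicComputation API; per the speaker notes, a real version of this example is checked into Giraph):

```java
import org.apache.giraph.graph.BasicComputation;
import org.apache.giraph.graph.Vertex;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;

// Every vertex keeps the largest value it has seen and forwards it to its
// neighbors; once no value changes anywhere, all vertices stay halted.
public class MaxValueComputation extends
    BasicComputation<LongWritable, DoubleWritable, NullWritable, DoubleWritable> {

  @Override
  public void compute(Vertex<LongWritable, DoubleWritable, NullWritable> vertex,
                      Iterable<DoubleWritable> messages) {
    boolean changed = (getSuperstep() == 0);  // always announce the initial value
    for (DoubleWritable m : messages) {
      if (m.get() > vertex.getValue().get()) {
        vertex.setValue(new DoubleWritable(m.get()));
        changed = true;
      }
    }
    if (changed) {
      sendMessageToAllEdges(vertex, vertex.getValue());
    }
    vertex.voteToHalt();  // reactivated automatically if a message arrives later
  }
}
```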
PageRank – ranking websites
Mahout (Hadoop): 854 lines. Giraph: < 30 lines.
• Send each neighbor an equal fraction of your PageRank.
• New PageRank = 0.15 / (# of vertices) + 0.85 * (sum of incoming messages)
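A hedged sketch of that update rule as a compute method (again assuming the BasicComputation API; the fixed superstep cutoff and unweighted edges are simplifications):

```java
import org.apache.giraph.graph.BasicComputation;
import org.apache.giraph.graph.Vertex;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;

public class SimplePageRank extends
    BasicComputation<LongWritable, DoubleWritable, NullWritable, DoubleWritable> {

  private static final int MAX_SUPERSTEPS = 30;

  @Override
  public void compute(Vertex<LongWritable, DoubleWritable, NullWritable> vertex,
                      Iterable<DoubleWritable> messages) {
    if (getSuperstep() > 0) {
      double sum = 0;
      for (DoubleWritable m : messages) {
        sum += m.get();
      }
      // New PageRank = 0.15 / (# of vertices) + 0.85 * (sum of messages)
      vertex.setValue(new DoubleWritable(
          0.15 / getTotalNumVertices() + 0.85 * sum));
    }
    if (getSuperstep() < MAX_SUPERSTEPS && vertex.getNumEdges() > 0) {
      // Send each neighbor an equal fraction of this vertex's rank.
      sendMessageToAllEdges(vertex,
          new DoubleWritable(vertex.getValue().get() / vertex.getNumEdges()));
    } else {
      vertex.voteToHalt();
    }
  }
}
```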
Scaling
Problem: Worker Crash.
Solution: Checkpointing.
[Diagram: superstep i (no checkpoint), i+1 (checkpoint), i+2 (no checkpoint), then a worker failure rolls the job back to the i+1 checkpoint; it reruns i+2, checkpoints at i+3, survives a second worker failure after that checkpoint completes, reruns i+3 without a checkpoint, and the application completes.]
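Checkpoint frequency is a per-job trade-off between checkpoint overhead and recomputation after a crash. As a hedged example (the property name giraph.checkpointFrequency is my recollection of Giraph's option and should be verified against the release in use):

```java
import org.apache.giraph.conf.GiraphConfiguration;

public class CheckpointConfigExample {
  public static void main(String[] args) {
    // Assumed option name: checkpoint every 5 supersteps, so a worker crash
    // costs at most 5 supersteps of recomputation.
    GiraphConfiguration conf = new GiraphConfiguration();
    conf.setInt("giraph.checkpointFrequency", 5);
  }
}
```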
Problem: Master Crash.
Solution: ZooKeeper Master Queue.
[Diagram: before failure, “Active” Master 0 holds the active master state in ZooKeeper while “Spare” Master 1 and Master 2 wait; after Master 0 fails, Master 1 becomes “Active” and picks up the state from ZooKeeper.]
Problem: Primitive Collections.
• Graphs are often parameterized with primitive types (e.g. long IDs, double values).
• Boxing/unboxing costs. Objects have internal overhead.
Solution: Use fastutil, e.g. Long2DoubleOpenHashMap.
“fastutil extends the Java™ Collections Framework by providing type-specific maps, sets, lists and queues with a small memory footprint and fast access and insertion.”
[Diagrams: small example graphs with long vertex IDs and double edge weights: Single Source Shortest Path (s to t), Network Flow, and Count In-Degree.]
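A minimal sketch of the difference (fastutil's Long2DoubleOpenHashMap is a real class; the surrounding usage is illustrative):

```java
import it.unimi.dsi.fastutil.longs.Long2DoubleOpenHashMap;
import java.util.HashMap;
import java.util.Map;

public class PrimitiveCollectionsDemo {
  public static void main(String[] args) {
    // Boxed: every key and value is a heap object (Long / Double),
    // plus an Entry object per mapping.
    Map<Long, Double> boxed = new HashMap<>();
    boxed.put(42L, 0.85);           // autoboxes both key and value

    // Primitive: keys and values live in long[] / double[] arrays,
    // with no per-entry objects and no boxing on the hot path.
    Long2DoubleOpenHashMap primitive = new Long2DoubleOpenHashMap();
    primitive.put(42L, 0.85);       // no boxing
    double rank = primitive.get(42L);
    System.out.println(rank);
  }
}
```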
Problem: Too many objects.
Lots of time spent in GC.
Graph: 1B Vertices, 200B Edges, 200 Workers.
• 1B Edges per Worker. 1 object per edge value.
• List<Edge<I, E>> → ~10B objects
• 5M Vertices per Worker. 10 objects per vertex value.
• Map<I, Vertex<I, V, E>> → ~50M objects
• 1 Message per Edge. 10 objects per message data.
• Map<I, List<M>> → ~10B objects
• Objects used ~= O(E*e + V*v + M*m) => O(E*e)
Label Propagation
e.g. Who's sleeping?
[Diagram: a small graph of vertices 1-5 with edge weights (0.5, 0.2, 0.8, ...) propagating label scores among “Boring”, “Amazing”, and “Confusing”. Q: What did he think?]
Problem: Too many objects.
Lots of time spent in GC.
Solution: byte[]
• Serialize messages, edges, and vertices.
• Iterable interface with representative object.
[Diagram: serialized inputs consumed through an Iterable; each next() call deserializes the next element into a single reused representative object.]
Objects per worker ~= O(V)
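A hedged sketch of the idea: messages serialized into one growing byte[], then replayed through an Iterable that reuses a single representative object. Only plain Hadoop Writable / DataOutput APIs are used; the buffer class itself is illustrative, not Giraph's actual internals:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.DoubleWritable;

// Append-only buffer of serialized messages for one vertex.
public class MessageBuffer implements Iterable<DoubleWritable> {
  private final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
  private final DataOutputStream out = new DataOutputStream(bytes);
  private int count = 0;

  public void add(DoubleWritable message) throws IOException {
    message.write(out);   // serialize instead of retaining the object
    count++;
  }

  @Override
  public Iterator<DoubleWritable> iterator() {
    final DataInputStream in =
        new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
    final DoubleWritable representative = new DoubleWritable();  // reused each time
    return new Iterator<DoubleWritable>() {
      private int read = 0;
      @Override public boolean hasNext() { return read < count; }
      @Override public DoubleWritable next() {
        try {
          representative.readFields(in);  // deserialize into the same object
        } catch (IOException e) {
          throw new IllegalStateException(e);
        }
        read++;
        return representative;
      }
      @Override public void remove() { throw new UnsupportedOperationException(); }
    };
  }
}
```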
Problem: Serialization of byte[]
• DataInput? Kryo? Custom?
Solution: Unsafe
• Dangerous. No formal API. Volatile. Non-portable (Oracle JVM only).
• AWESOME. As fast as it gets.
• True native. Essentially C: *(long*)(data+offset);
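A hedged sketch of reading a long straight out of a byte[] with sun.misc.Unsafe. The reflective access trick is standard; offsets and bounds checks are entirely the caller's responsibility, which is exactly what makes this dangerous:

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeReader {
  private static final Unsafe UNSAFE;
  private static final long BYTE_ARRAY_BASE;
  static {
    try {
      // Unsafe.getUnsafe() rejects user code, so grab the singleton via reflection.
      Field f = Unsafe.class.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      UNSAFE = (Unsafe) f.get(null);
      BYTE_ARRAY_BASE = UNSAFE.arrayBaseOffset(byte[].class);
    } catch (ReflectiveOperationException e) {
      throw new ExceptionInInitializerError(e);
    }
  }

  // Equivalent in spirit to C's *(long*)(data + offset): no bounds checks,
  // native byte order, no object allocation.
  public static long readLong(byte[] data, int offset) {
    return UNSAFE.getLong(data, BYTE_ARRAY_BASE + offset);
  }
}
```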
Problem: Large Aggregations.
[Diagram: every worker sends all of its aggregator values directly to the master.]
Solution: Sharded Aggregators.
[Diagram, three steps: workers own aggregators; aggregator owners communicate with the master; aggregator owners distribute the final values back to all workers.]
K-Means Clustering
e.g. Similar Emails
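For context, a hedged sketch of how an aggregator is used from the application's point of view; the names follow Giraph's aggregator API as I recall it (registerAggregator in a MasterCompute, aggregate()/getAggregatedValue() in the computation) and should be verified against the Giraph version in use. How the values are sharded across workers is transparent to this code:

```java
import org.apache.giraph.aggregators.LongSumAggregator;
import org.apache.giraph.master.DefaultMasterCompute;

// Master side: register a named aggregator once, before the first superstep.
public class MyMasterCompute extends DefaultMasterCompute {
  @Override
  public void initialize() throws InstantiationException, IllegalAccessException {
    registerAggregator("edge.count", LongSumAggregator.class);
  }
}

// Worker side, inside a computation's compute() method:
//   aggregate("edge.count", new LongWritable(vertex.getNumEdges()));
//   long total = this.<LongWritable>getAggregatedValue("edge.count").get();
```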
Problem: Network Wait.
• RPC doesn't fit the model.
• Synchronous calls are no good.
Solution: Netty
• Tune queue sizes & threads.
[Diagram: two superstep timelines between barriers. Without overlap, the worker finishes compute and then waits on the network before the end-of-superstep barrier. With early sending (a short "time to first message"), network transfer overlaps compute and the end-of-superstep wait shrinks.]
Results
Scalability Graphs
[Chart: Increasing Workers. Iteration time (sec, 0-450) vs. number of workers (50-300); 2B Vertices, 200B Edges, 20 Compute Threads.]
[Chart: Increasing Data Size. Iteration time (sec, 0-450) vs. number of edges (1E+09 to 1.01E+11); 50 Workers, 20 Compute Threads.]
Lessons Learned
• Coordinating is a zoo. Be resilient with ZooKeeper.
• Efficient networking is hard. Let Netty help.
• Primitive collections, primitive performance. Use fastutil.
• byte[] is simple yet powerful.
• Being Unsafe can be a good thing.
• Have a graph? Use Giraph.
What’s the final result?
Comparison with Hive:
• 20x CPU speedup
• 100x Elapsed time speedup. 15 hours => 9 minutes.
Computations on entire Facebook graph no longer “weekend jobs”.
Now they’re coffee breaks.
Questions?
Problem: Measurements.
• Need tools to gain visibility into the system.
• Problems with connecting to Hadoop sub-processes.
Solution: Do it all.
• YourKit – see YourKitProfiler
• jmap – see JMapHistoDumper
• VisualVM – with jstatd & SSH SOCKS proxy
• Yammer Metrics
• Hadoop Counters
• Logging & GC prints
Problem: Mutations
• Synchronization.
• Load balancing.
Solution: Reshuffle resources
• Mutations are handled at the barrier between supersteps.
• Master rebalances vertex assignments to optimize distribution.
• Handle mutations in batches.
• Avoid mutations when using byte[] (each mutation forces full deserialization and re-serialization).
• Favor algorithms which don't mutate the graph.
Speaker Notes

1. No internal FB repo. Everyone is a committer. A global epoch followed by a global barrier where components do concurrent computation and send messages. Graphs are sparse.
2. Giraph is a map-only job.
3. Code is real, checked into Giraph. All vertices find the maximum value in a strongly connected graph.
4. One active master, with spare masters taking over in the event of an active master failure. All active master state is stored in ZooKeeper so that a spare master can immediately step in when an active master fails. The "active" master is implemented as a queue in ZooKeeper. A single worker failure causes the superstep to fail, and the application reverts to the last committed superstep automatically. The master detects worker failure during any superstep with a ZooKeeper "health" znode, chooses the last committed superstep, and sends a command through ZooKeeper for all workers to restart from that superstep.
5. One active master, with spare masters taking over in the event of an active master failure. All active master state is stored in ZooKeeper so that a spare master can immediately step in when an active master fails. The "active" master is implemented as a queue in ZooKeeper.
6. Primitive collections are primitive. Lots of boxing/unboxing of types. An object and a reference for each instance.
7. There are also other implementations, like Map<I,E> for edges, which use more space but are better for lots of mutations. Realistically, for FB-sized graphs you need even bigger. Edges are not uniform in reality; some vertices are much larger.
8. Dangerous, non-portable, volatile. Oracle JVM only. No formal API. Allocate non-GC memory. Inherit from String (a final class). Direct memory access (C pointer casts).
9. Cluster open source projects. Histograms. Job metrics.
10. Start sending messages early and send them alongside computation. Tune message buffer sizes to reduce wait time.
11. First things first: what's going on with the system? Want a debugger, but don't have one. Use YourKit's API to create granular snapshots within the application. JMap binding errors: spawn it from within the process.
12. With byte[], any mutation requires full deserialization / re-serialization.