
Consensus: Why Can't We All Just Agree?


Video and slides synchronized; mp3 and slide download available at https://bit.ly/2KMjdkx.

Heidi Howard takes a journey through history, starting in the days when the field of distributed consensus began with a proof of its impossibility and arriving at the present day, when a recently revised understanding of consensus enables a new generation of scalable yet resilient distributed systems. Filmed at qconlondon.com.

Heidi Howard is a PhD student in the Systems Research Group at the University of Cambridge Computer Laboratory. Her research interests are consistency, fault-tolerance and performance in distributed systems, specializing in distributed consensus algorithms. She is most widely known for her generalization of Leslie Lamport’s Paxos algorithm for solving consensus, known as Flexible Paxos.


Consensus: Why Can't We All Just Agree?

  1. 1. Distributed Consensus: Why Can't We All Just Agree? Heidi Howard PhD Student @ University of Cambridge heidi.howard@cl.cam.ac.uk @heidiann360 hh360.user.srcf.net
  2. 2. InfoQ.com: News & Community Site Watch the video with slide synchronization on InfoQ.com! https://www.infoq.com/presentations/future-distributed-consensus • Over 1,000,000 software developers, architects and CTOs read the site worldwide every month • 250,000 senior developers subscribe to our weekly newsletter • Published in 4 languages (English, Chinese, Japanese and Brazilian Portuguese) • Post content from our QCon conferences • 2 dedicated podcast channels: The InfoQ Podcast, with a focus on Architecture and The Engineering Culture Podcast, with a focus on building • 96 deep dives on innovative topics packed as downloadable emags and minibooks • Over 40 new content items per week
  3. 3. Purpose of QCon - to empower software development by facilitating the spread of knowledge and innovation Strategy - practitioner-driven conference designed for YOU: influencers of change and innovation in your teams - speakers and topics driving the evolution and innovation - connecting and catalyzing the influencers and innovators Highlights - attended by more than 12,000 delegates since 2007 - held in 9 cities worldwide Presented at QCon London www.qconlondon.com
  4. 4. Sometimes inconsistency is not an option • Distributed locking • Safety critical systems • Distributed scheduling • Strongly consistent databases • Blockchain Anything which requires guaranteed agreement • Leader election • Orchestration services • Distributed file systems • Coordination & configuration • Strongly consistent databases
  5. 5. What is Distributed Consensus? “The process of reaching agreement over state between unreliable hosts connected by unreliable networks, all operating asynchronously”
  6. 6. A walk through time We are going to take a journey through the developments in distributed consensus, spanning over three decades. Stops include: • FLP Result & CAP Theorem • Viewstamped Replication, Paxos & Multi-Paxos • State Machine Replication • Paxos Made Live, Zookeeper & Raft • Flexible Paxos Bob
  7. 7. Fischer, Lynch & Paterson Result We begin with a slippery start Impossibility of distributed consensus with one faulty process Michael Fischer, Nancy Lynch and Michael Paterson ACM SIGACT-SIGMOD Symposium on Principles of Database Systems 1983
  8. 8. FLP Result We cannot guarantee agreement in an asynchronous system where even one host might fail. Why? We cannot reliably detect failures: we cannot know for sure the difference between a slow host/network and a failed host. Note: we can still guarantee safety; the issue is limited to guaranteeing liveness.
  9. 9. Solution to FLP In practice: We approximate reliable failure detectors using heartbeats and timers. We accept that sometimes the service will not be available (when it could be). In theory: We make weak assumptions about the synchrony of the system e.g. messages arrive within a year.
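
Since the slide describes approximating failure detectors with heartbeats and timers, here is a minimal sketch of that idea in Python. The class name, timeout value and host identifiers are illustrative assumptions, not from the talk.

```python
import time

class HeartbeatFailureDetector:
    """Approximate failure detector: a peer is *suspected* (not known)
    to have failed if no heartbeat arrives within the timeout."""

    def __init__(self, timeout_secs=5.0):
        self.timeout_secs = timeout_secs
        self.last_seen = {}  # peer id -> timestamp of last heartbeat

    def record_heartbeat(self, peer):
        # Called whenever a heartbeat message arrives from `peer`.
        self.last_seen[peer] = time.monotonic()

    def suspected(self, peer):
        # True if we have not heard from `peer` recently. This can be
        # wrong: a slow host or network is indistinguishable from a
        # failed one, which is exactly the FLP obstacle.
        last = self.last_seen.get(peer)
        return last is None or time.monotonic() - last > self.timeout_secs


# Usage sketch:
fd = HeartbeatFailureDetector(timeout_secs=2.0)
fd.record_heartbeat("node-2")
print(fd.suspected("node-2"))  # False: heard from it just now
print(fd.suspected("node-3"))  # True: never heard from it
```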
  10. 10. Viewstamped Replication the forgotten algorithm Viewstamped Replication Revisited Barbara Liskov and James Cowling MIT Tech Report MIT-CSAIL-TR-2012-021 Not the original from 1988, but recommended
  11. 11. Viewstamped Replication In my view, the pioneering algorithm in the field of distributed consensus. Approach: select one node to be the ‘master’. The master is responsible for replicating decisions. Once a decision has been replicated onto a majority of nodes, it is committed. We rotate the master when the old master fails, with agreement from a majority of nodes.
  12. 12. Paxos Lamport’s consensus algorithm The Part-Time Parliament Leslie Lamport ACM Transactions on Computer Systems May 1998
  13. 13. Paxos The textbook algorithm for reaching consensus on a single value. • a two-phase process: promise and commit • each phase requiring majority agreement (aka quorums)
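
As a rough illustration of the two phases and their quorum requirement, here is a minimal single-decree Paxos acceptor sketch in Python. It only shows the acceptor's promise/commit bookkeeping; proposers, message passing and retries are omitted, and all names and numbers are illustrative.

```python
class PaxosAcceptor:
    """Single-decree Paxos acceptor: tracks the highest proposal number
    promised (P) and the last proposal accepted (C), as in the slides."""

    def __init__(self):
        self.promised = 0      # P: highest proposal number promised
        self.accepted = None   # C: (proposal number, value) or None

    def on_promise_request(self, n):
        # Phase 1: promise not to accept proposals numbered below n.
        if n > self.promised:
            self.promised = n
            # Reply OK, returning any previously accepted value so the
            # proposer must carry it forward (see the node-failure example).
            return ("OK", self.accepted)
        return ("NO", None)

    def on_commit_request(self, n, value):
        # Phase 2: accept the value only if no higher promise was made.
        if n >= self.promised:
            self.promised = n
            self.accepted = (n, value)
            return "OK"
        return "NO"


def majority(acceptors):
    return len(acceptors) // 2 + 1


# Usage sketch: proposal 13 carrying Bob's request across three acceptors.
nodes = [PaxosAcceptor() for _ in range(3)]
oks = [a.on_promise_request(13) for a in nodes]
if sum(r == "OK" for r, _ in oks) >= majority(nodes):
    accepted = [a.on_commit_request(13, "B") for a in nodes]
    print("committed:", accepted.count("OK") >= majority(nodes))
```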
  14. 14. Paxos Example - Failure Free
  15.–21. [Slide diagrams: three nodes, each with a promise register (P) and a commit register (C). An incoming request from Bob triggers Phase 1: Promise(13)? is sent and the nodes reply OK, recording P: 13. In Phase 2, Commit(13, B)? is sent; the nodes record C: 13, B and reply OK. Bob is granted the lock.]
  22. 22. Paxos Example - Node Failure
  23.–32. [Slide diagrams: Bob's request is promised under proposal 13 and committed on a majority (nodes 2 and 3 record C: 13, B), but node 1 never learns the outcome. When Alice later runs Phase 1 with a higher proposal number (22), a node that already accepted (13, B) returns that value with its OK, so Alice's Phase 2 must re-propose Bob's value: Commit(22, B). The earlier decision is preserved despite the failure.]
  33. 33. Paxos Example - Conflict
  34.–37. [Slide diagrams: Bob and Alice repeatedly run Phase 1 with ever-increasing proposal numbers (13, 21, 33, 41); each new promise supersedes the other's before Phase 2 can complete, so neither request is committed (duelling proposers).]
  38. 38. What does Paxos give us? Safety - decisions are always final. Liveness - a decision will be reached as long as a majority of nodes are up and able to communicate*. Clients must wait two round trips to a majority of nodes, sometimes longer. *plus the weak synchrony assumptions needed to circumvent the FLP result
  39. 39. Multi-Paxos Lamport’s leader-driven consensus algorithm Paxos Made Moderately Complex Robbert van Renesse and Deniz Altinbuken ACM Computing Surveys April 2015 Not the original, but highly recommended
  40. 40. Multi-Paxos Lamport’s insight: Phase 1 is not specific to the request, so it can be done before the request arrives and reused across multiple instances of Paxos. Implication: Bob now only has to wait one round trip.
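
A hedged sketch of that insight: the leader runs Phase 1 once for its term and then commits a sequence of requests with only Phase 2 round trips. Transport, failover and the recovery of previously accepted values are elided, and all names are illustrative.

```python
class SlotAcceptor:
    """Illustrative acceptor that keeps one accepted value per log slot."""
    def __init__(self):
        self.promised = 0
        self.accepted = {}  # slot -> (term, value)

    def promise(self, term):
        if term > self.promised:
            self.promised = term
            return True
        return False

    def accept(self, term, slot, value):
        if term >= self.promised:
            self.promised = term
            self.accepted[slot] = (term, value)
            return True
        return False


class MultiPaxosLeader:
    """The Multi-Paxos insight: Phase 1 runs once per leader term and is
    reused for every log slot, so each request needs one round trip."""
    def __init__(self, term, acceptors):
        self.term, self.acceptors = term, acceptors
        self.log, self.elected = [], False

    def run_phase1_once(self):
        oks = sum(a.promise(self.term) for a in self.acceptors)
        self.elected = oks > len(self.acceptors) // 2
        return self.elected

    def commit(self, value):
        # Phase 2 only: a single round trip to a majority per request.
        slot = len(self.log)
        oks = sum(a.accept(self.term, slot, value) for a in self.acceptors)
        if oks > len(self.acceptors) // 2:
            self.log.append(value)
            return True
        return False


# Usage sketch:
acceptors = [SlotAcceptor() for _ in range(3)]
leader = MultiPaxosLeader(term=13, acceptors=acceptors)
leader.run_phase1_once()
print(leader.commit("B"), leader.commit("A"))  # True True
```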
  41. 41. State Machine Replication fault-tolerant services using consensus Implementing Fault-Tolerant Services Using the State Machine Approach: A Tutorial Fred Schneider ACM Computing Surveys 1990
  42. 42. State Machine Replication (SMR) A general technique for making a service, such as a database, fault-tolerant. [Diagram: clients connecting to a single application instance.]
  43. 43. [Diagram: clients connecting over the network to multiple application replicas, each coordinated by a consensus module.]
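
To make the SMR idea concrete, here is a minimal Python sketch: every replica applies the same deterministic commands in the same consensus-agreed order, so all replicas end up in the same state. The key/value commands and names are illustrative assumptions.

```python
class KeyValueStateMachine:
    """A deterministic state machine: applying the same commands in the
    same order always produces the same state."""
    def __init__(self):
        self.state = {}

    def apply(self, command):
        op, key, *rest = command
        if op == "set":
            self.state[key] = rest[0]
        elif op == "delete":
            self.state.pop(key, None)
        return self.state.get(key)


# In SMR, consensus decides this log; here we just hard-code it.
agreed_log = [("set", "lock", "Bob"), ("set", "x", 1), ("delete", "x")]

replicas = [KeyValueStateMachine() for _ in range(3)]
for command in agreed_log:          # every replica applies the same order
    for replica in replicas:
        replica.apply(command)

assert all(r.state == {"lock": "Bob"} for r in replicas)
print(replicas[0].state)
```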
  44. 44. CAP Theorem You cannot have your cake and eat it CAP Theorem Eric Brewer Presented at Symposium on Principles of Distributed Computing, 2000
  45. 45. Consistency, Availability & Partition Tolerance - Pick Two [Diagram: four nodes separated by a network partition, with clients B and C on opposite sides.]
  46. 46. Paxos Made Live & Chubby How Google uses Paxos Paxos Made Live - An Engineering Perspective Tushar Chandra, Robert Griesemer and Joshua Redstone ACM Symposium on Principles of Distributed Computing 2007
  47. 47. Isn’t this a solved problem? “There are significant gaps between the description of the Paxos algorithm and the needs of a real-world system. In order to build a real-world system, an expert needs to use numerous ideas scattered in the literature and make several relatively small protocol extensions. The cumulative effort will be substantial and the final system will be based on an unproven protocol.”
  48. 48. Paxos Made Live Paxos Made Live documents the challenges in constructing Chubby, a distributed coordination service built using Multi-Paxos and state machine replication.
  49. 49. Challenges • Handling disk failure and corruption • Dealing with limited storage capacity • Effectively handling read-only requests • Dynamic membership & reconfiguration • Supporting transactions • Verifying safety of the implementation
  50. 50. Fast Paxos Like Multi-Paxos, but faster Fast Paxos Leslie Lamport Microsoft Research Tech Report MSR-TR-2005-112
  51. 51. Fast Paxos Paxos: Any node can commit a value in 2 RTTs Multi-Paxos: The leader node can commit a value in 1 RTT But, what about any node committing a value in 1 RTT?
  52. 52. Fast Paxos We can bypass the leader node for many operations, so any node can commit a value in 1 RTT. However, we must increase the size of the quorum.
  53. 53. Zookeeper The open source solution Zookeeper: wait-free coordination for internet-scale systems Hunt et al USENIX ATC 2010 Code: zookeeper.apache.org
  54. 54. Zookeeper Consensus for the masses. It utilizes and extends Multi-Paxos for strong consistency. Unlike “Paxos Made Live”, the design is clearly discussed and the implementation is openly available.
  55. 55. Egalitarian Paxos Don’t restrict yourself unnecessarily There Is More Consensus in Egalitarian Parliaments Iulian Moraru, David G. Andersen, Michael Kaminsky SOSP 2013 also see Generalized Consensus and Paxos
  56. 56. Egalitarian Paxos The basis of SMR is that every replica of an application receives the same commands in the same order. However, sometimes the ordering can be relaxed…
  57.–58. [Slide diagrams: the same commands (C=1, B?, C=C+1, C?, B=0, B=C) shown under a partial ordering and under a total ordering; the partial ordering admits many possible total orderings.]
  59. 59. Egalitarian Paxos Allow requests to be out-of-order if they are commutative. Conflict becomes much less common. Works well in combination with Fast Paxos.
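
A hedged sketch of the commutativity idea behind Egalitarian Paxos: two commands conflict only if they touch the same key and at least one of them writes it; non-conflicting commands can be executed in different orders on different replicas. The command encoding is an illustrative assumption.

```python
def conflicts(cmd_a, cmd_b):
    """Commands commute unless they touch the same key and at least one
    of them is a write; only conflicting commands must be ordered."""
    (op_a, key_a), (op_b, key_b) = cmd_a, cmd_b
    return key_a == key_b and ("write" in (op_a, op_b))


# Usage sketch (compare with the partial-ordering slide):
print(conflicts(("write", "C"), ("read", "C")))   # True: must be ordered
print(conflicts(("write", "C"), ("write", "B")))  # False: can be reordered
print(conflicts(("read", "B"), ("read", "B")))    # False: reads commute
```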
  60. 60. Raft Consensus Paxos made understandable In Search of an Understandable Consensus Algorithm Diego Ongaro and John Ousterhout USENIX ATC 2014
  61. 61. Raft Raft has taken the wider community by storm, largely due to its understandable description. It’s another variant of SMR with Multi-Paxos. Key features: • Really strong leadership - all other nodes are passive • Various optimizations - e.g. dynamic membership and log compaction
  62. 62. Flexible Paxos Paxos made scalable Flexible Paxos: Quorum intersection revisited Heidi Howard, Dahlia Malkhi, Alexander Spiegelman ArXiv:1608.06696
  63. 63. Majorities are not needed Usually, we require majorities to agree so we can guarantee that all quorums (groups) intersect. This work shows that not all quorums need to intersect: only a phase 1 (leader election) quorum and a phase 2 (replication) quorum must share a node (sketched in code after the examples below). This applies to all algorithms in this class: Paxos, Viewstamped Replication, Zookeeper, Raft etc.
  64. 64. Example: Non-strict majorities [Diagram: a phase 2 (replication) quorum and a phase 1 (leader election) quorum of unequal sizes.]
  65. 65. Example: Counting quorums [Diagram: replication quorum vs. leader election quorum.]
  66. 66. Example: Group quorums [Diagram: replication quorum vs. leader election quorum drawn from groups of nodes.]
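
To make the relaxed intersection requirement concrete, here is a small Python sketch that checks whether every phase 1 (leader election) quorum intersects every phase 2 (replication) quorum. It uses a counting-quorum example as an assumption: with 6 nodes, replication quorums of size 2 are safe as long as leader election quorums have size 5, since 5 + 2 > 6.

```python
from itertools import combinations

def all_intersect(phase1_quorums, phase2_quorums):
    """Flexible Paxos safety requirement: every phase 1 quorum must share
    at least one node with every phase 2 quorum. Quorums of the same
    phase do not need to intersect each other."""
    return all(set(q1) & set(q2)
               for q1 in phase1_quorums
               for q2 in phase2_quorums)


# Counting quorums over 6 nodes (sizes are an illustrative assumption):
nodes = range(6)
phase2 = list(combinations(nodes, 2))   # replication: any 2 of 6
phase1 = list(combinations(nodes, 5))   # leader election: any 5 of 6
print(all_intersect(phase1, phase2))    # True, since 5 + 2 > 6

# Plain majorities (any 4 of 6 for both phases) also satisfy it:
majorities = list(combinations(nodes, 4))
print(all_intersect(majorities, majorities))  # True
```

Shrinking the replication quorum reduces the work on the critical path of each request, at the cost of a larger quorum during the rarer leader elections.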
  67. 67. How strong is the leadership? [Diagram: a spectrum from strong leadership to leaderless. Leader driven: Raft, Viewstamped Replication, Multi-Paxos, Zookeeper, Chubby. Leader only when needed: Paxos, Fast Paxos. Leaderless: Egalitarian Paxos.]
  68. 68. Who is the winner? Depends on the award: • Best for minimum latency: Viewstamped Replication • Most widely used open source project: Zookeeper • Easiest to understand: Raft • Best for WANs: Egalitarian Paxos
  69. 69. Future 1. More scalable consensus algorithms utilizing Flexible Paxos. 2. A clearer understanding of consensus and better explained algorithms. 3. Consensus in challenging settings such as geo-replicated systems.
  70. 70. Summary Do not be discouraged by impossibility results and dense abstract academic papers. Don’t give up on consistency. Consensus is achievable, even performant and scalable. Find the right algorithm and quorum system for your specific domain. There is no single silver bullet. heidi.howard@cl.cam.ac.uk @heidiann360
  71. 71. Watch the video with slide synchronization on InfoQ.com! https://www.infoq.com/presentations/future-distributed-consensus
