ApacheCon 2016
1. Apache BookKeeper
DISTRIBUTED STORE
a Salesforce Use Case
Venkateswararao Jujjuri (JV)
Cloud Storage Architect
vjujjuri@salesforce.com
jujjuri@gmail.com
@jvjujjuri | Twitter
https://www.linkedin.com/in/jvjujjuri
2. Agenda
Salesforce needs and requirements
Hunt and Selection
BookKeeper Introduction
Improvements and Enhancements
As a Service at Scale @ Salesforce
Performance
Community
Q & A
3. Salesforce Application Storage Needs
Store for Persistent WAL, data, and objects
Low, constant write latencies
• Transaction Log, Smaller writes
Low, constant Random Read latencies
Highly available
Append Only entries
• Objects
Highly Consistent for immutable data
Long Term Storage
Distributed and linearly scalable.
On commodity hardware
Low Operating Cost
4. What did we consider?
Build vs. Buy
• Time-To-Market, resources, cost.
Finalists
• Ceph
• A CP System
• With unreliable reads, the read path can behave like an AP system.
• A lot of effort would be needed to get AP behavior on the write path.
• Remember: Immutable data.
• BookKeeper
• Effectively a CAP system, because data is immutable/append-only.
• Came close to what we want
• Almost there but not everything.
5. Apache BookKeeper
A highly consistent, available, replicated, distributed log service.
Immutable, append-only store.
Thick Client, Simple and Elegant placement policy
• No Central Master
• No complicated hashing/computing for placement
Low latency, both on writes and reads.
Runs on commodity hardware.
Built for WAL use-case, but can be expanded to broader storage needs
Uses ZooKeeper as a consensus service and metadata store.
Awesome Community.
7. Apache BookKeeper
A system to reliably log streams of records.
Designed to store write-ahead logs for database-like applications.
Inspired by and designed to solve HDFS NameNode availability deficiencies.
Opensource Chronology
• 2008: Open-sourced as a contribution to ZooKeeper
• 2011: Subproject of ZooKeeper
• 2012: In production
8. Terminology
Journal: Write-ahead log
Ledger: Log stream
Entry: Each entry of a log stream
Client: Library that runs with the application
Bookie: Server
Ensemble: Set of bookies across which a ledger is striped
Cluster: All bookies belonging to a given instance of BookKeeper
Write Quorum Size: Number of replicas.
Ack Quorum Size: Number of responses needed before client’s write is satisfied.
LAC: Last Add Confirmed.
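The striping terminology above can be sketched with a toy model. This is an illustrative simulation (not the real Java client); the function name `write_set` is an assumption, but the round-robin placement of each entry's write quorum over the ensemble mirrors how BookKeeper stripes entries.

```python
# Toy model of BookKeeper striping: an entry is stored on a "write quorum"
# of bookies chosen round-robin from the ledger's ensemble; the client
# waits for "ack quorum" responses before acknowledging the write.

def write_set(entry_id, ensemble, write_quorum_size):
    """Bookies that store a given entry: round-robin over the ensemble."""
    n = len(ensemble)
    return [ensemble[(entry_id + i) % n] for i in range(write_quorum_size)]

ensemble = ["bookie1", "bookie2", "bookie3"]   # ensemble size 3
print(write_set(0, ensemble, 2))  # ['bookie1', 'bookie2']
print(write_set(1, ensemble, 2))  # ['bookie2', 'bookie3']
print(write_set(2, ensemble, 2))  # ['bookie3', 'bookie1']
```

With ensemble size 3 and write quorum 2, consecutive entries rotate across bookies, so reads and writes spread load over the whole ensemble.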
9. Major Components
• Thick client: carries most of the weight of the protocol.
• Thin server (bookie): bookies never initiate any interaction with ZooKeeper or fellow bookies.
• ZooKeeper monitors bookies.
• Metadata is stored on ZooKeeper.
• Auditor to monitor bookies and identify under-replicated ledgers.
• Replication workers to replicate under-replicated ledger copies.
10. Create Ledger
• Returns a writer LedgerHandle
Add an entry to the Ledger
• Write To the Ledger
Open Ledger
• Returns a read-only LedgerHandle.
• May request a non-recovery read handle.
Get an entry from the ledger
• Read from the ledger
Close ledger
Delete Ledger
Basic Operations
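The basic operations above can be sketched as a minimal in-memory model. The real API is the Java BookKeeper client; the class and method names here (`BookKeeperClient`, `create_ledger`, and so on) are illustrative assumptions that only mirror the lifecycle: create, add, close, open read-only, read, delete.

```python
# Minimal in-memory sketch of the ledger lifecycle (not the real client).

class Ledger:
    def __init__(self, ledger_id):
        self.ledger_id = ledger_id
        self.entries = []     # append-only; entries are immutable once written
        self.closed = False

class WriteHandle:
    """Writer ledger handle: the only way to append entries."""
    def __init__(self, ledger):
        self.ledger = ledger

    def add_entry(self, data):
        assert not self.ledger.closed, "cannot append to a closed ledger"
        self.ledger.entries.append(data)
        return len(self.ledger.entries) - 1   # entry id

    def close(self):
        self.ledger.closed = True

class ReadHandle:
    """Read-only ledger handle: can read, never write."""
    def __init__(self, ledger):
        self.ledger = ledger

    def read_entries(self, first, last):
        return self.ledger.entries[first:last + 1]

class BookKeeperClient:
    def __init__(self):
        self._ledgers = {}
        self._next_id = 0

    def create_ledger(self):
        ledger = Ledger(self._next_id)
        self._ledgers[self._next_id] = ledger
        self._next_id += 1
        return WriteHandle(ledger)            # writer handle

    def open_ledger(self, ledger_id):
        return ReadHandle(self._ledgers[ledger_id])   # read-only handle

    def delete_ledger(self, ledger_id):
        del self._ledgers[ledger_id]
```

The split between writer and read-only handles models the single-writer discipline: a reader opening a ledger never gets append capability.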
11. Salesforce Application with BookKeeper
[Diagram: the application uses a store interface built on the BookKeeper client, a user-space library; the client talks to the bookies and to the ZooKeeper server machines.]
12. Guarantees
Commitment
• If an entry has been acknowledged, it must be readable.
• If an entry is read once, it must always be readable.
• If the write of entry ID 'n' is successful, all entries up to 'n' are successfully committed.
Consistency
• Last Add Confirmed (LAC) provides consistency among readers.
• Fencing provides consistency among writers.
13. Out-of-order write and In-Order Ack.
• Application has liberty to pre-allocate entryIDs
• Multiple application threads can write in parallel.
User-defined ledger names
• Not restricted by BK-generated ledger names
Explicit LAC updates
• Added ReadLac, WriteLac to the protocol.
• Maintain both piggy-back LAC and explicit LAC simultaneously.
Enhancements - in our internal branch, working to push upstream
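The out-of-order-write, in-order-ack behavior can be sketched as follows. This is a simplified model under the assumption that entry IDs are pre-allocated (as the slide describes): bookie writes may complete in any order, but acknowledgements and the LAC advance only over the contiguous prefix of completed entries.

```python
# Sketch of "out-of-order write, in-order ack": writes complete in any
# order, but acks (and the LAC) advance only over the contiguous prefix.

class OutOfOrderLedger:
    def __init__(self):
        self.completed = set()
        self.lac = -1          # Last Add Confirmed: no entry confirmed yet

    def on_write_complete(self, entry_id):
        """Called when the write for entry_id finishes (in any order)."""
        self.completed.add(entry_id)
        acked = []
        while self.lac + 1 in self.completed:   # ack strictly in order
            self.lac += 1
            acked.append(self.lac)
        return acked   # entry ids acknowledged to the app by this completion

led = OutOfOrderLedger()
print(led.on_write_complete(1))  # [] (entry 0 not done yet, hold the ack)
print(led.on_write_complete(2))  # []
print(led.on_write_complete(0))  # [0, 1, 2] - acks delivered in order
print(led.lac)                   # 2
```

This is why multiple application threads can write in parallel without breaking the guarantee that a successful write of entry 'n' implies entries up to 'n' are committed.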
14. Conventional Name Space.
• User defined Names
• Treat LedgerId as an i-node in a file system.
Disk scrubbers and Repairs
• Actively hunt down and repair bit rot and corruption
Scalable Metadata Store
• Separate and dedicated metadata store
• Not restricted by ZK limitations
Enhancements - Future
15. Out of order write and in order Ack
[Diagram: writer threads App A, App B, and App C append entries 0-8 to the same ledger in parallel, out of order.]
16. Last Add Confirmed
[Diagram: while writers App A, App B, and App C append entries 0-8, reader App D can read only up to the Last Add Confirmed (LAC) point; entries beyond the LAC are not yet visible.]
18. What Can Happen?
Client
• Client restarts
• Client loses connection with ZooKeeper
• Client loses connection with bookies
Bookie
• Bookie goes down
• Disk(s) on the bookie go bad, I/O issues
• Bookie gets disconnected from the network
ZooKeeper
• Gets disconnected from the rest of the cluster
19. Writing Client Crash
bookie
bookie
bookie
zookeeper
What is the last entry?
• Nothing happens until a reader attempts to
read.
• Recovery process gets initiated when a
process opens the ledger for reading.
• Close the ledger on zoo keeper
• Identify Last entry of the ledger.
• Update metadata on zookeeper with
Last Add Confirmed. (LAC)
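The "identify the last entry" step can be sketched as follows. This is a deliberately simplified model (the real protocol also re-replicates the recovered tail and uses quorum-based negative responses); here, recovery starts from the highest LAC any bookie reports and commits every subsequent entry still readable from at least one bookie, stopping at the first gap.

```python
# Simplified model of finding the last committed entry after a writer crash.

def recover_lac(bookies):
    """bookies: list of dicts {'lac': int, 'entries': set of entry ids}."""
    lac = max(b['lac'] for b in bookies)   # best confirmed point known
    # Scan forward: any entry still readable somewhere gets committed.
    while any(lac + 1 in b['entries'] for b in bookies):
        lac += 1
    return lac

bookies = [
    {'lac': 3, 'entries': {0, 1, 2, 3, 4, 5}},
    {'lac': 4, 'entries': {0, 1, 2, 3, 4, 6}},   # entry 5 missing here
    {'lac': 3, 'entries': {0, 1, 2, 3, 4, 5, 6}},
]
print(recover_lac(bookies))  # 6 - entries 5 and 6 are recovered and committed
```

Once the new LAC is known, the recovering reader writes it to the ledger metadata on ZooKeeper and closes the ledger, as the slide describes.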
20. Client gets disconnected from Bookies
Either the bookie is down or the network between client and bookie has issues.
Contact ZooKeeper to get the list of available bookies.
Update the ensemble set and register the change with ZooKeeper.
Continue with new set.
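The ensemble-change step can be sketched as follows. This is an illustrative model (function names are assumptions): the client picks a replacement from the available bookies and records a new ensemble in the ledger metadata, keyed by the first entry ID it covers, so readers know which ensemble serves which entry range.

```python
# Sketch of an ensemble change after a bookie failure: the new ensemble
# applies from the next entry id onward; older entries keep the old one.

def replace_bookie(ensembles, failed, available, next_entry_id):
    """ensembles: dict {start_entry_id: [bookies]}; mutated in place."""
    current = ensembles[max(ensembles)]              # latest ensemble
    spare = next(b for b in available if b not in current)
    new_ensemble = [spare if b == failed else b for b in current]
    ensembles[next_entry_id] = new_ensemble          # register the change
    return ensembles

ensembles = {0: ["b1", "b2", "b3"]}
replace_bookie(ensembles, "b2", ["b1", "b2", "b3", "b4"], next_entry_id=7)
print(ensembles)  # {0: ['b1', 'b2', 'b3'], 7: ['b1', 'b4', 'b3']}
```

Keeping the old ensemble entry for ids 0-6 is what lets the writer "continue with the new set" without rewriting anything already acknowledged.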
21. Client gets disconnected from ZooKeeper
Tries to re-establish the connection.
Can continue to read and write to the ledger.
Until then, no metadata operations can be performed:
• Cannot create a ledger
• Cannot open a ledger
• Cannot close a ledger
22. Reader Opens while writer is active.
Application control
BK guarantees correctness.
Reader initiates the recovery process.
• Fences the ledger on ZooKeeper.
• Informs all bookies in the ensemble that recovery has started.
• After these steps, the writer gets write errors (if actively writing).
• Reader contacts all bookies to learn the last entry.
• Replicates the last entry if it doesn't have enough replicas.
• Updates ZooKeeper with the LAC and closes the ledger.
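The fencing step can be sketched as follows. This is a toy model of the bookie-side behavior: once a recovering reader fences a ledger on a bookie, that bookie rejects further appends, which is how a still-live old writer ends up seeing write errors.

```python
# Sketch of fencing: a fenced ledger rejects writes on this bookie.

class Bookie:
    def __init__(self):
        self.fenced_ledgers = set()
        self.entries = {}          # (ledger_id, entry_id) -> data

    def fence(self, ledger_id):
        self.fenced_ledgers.add(ledger_id)

    def add_entry(self, ledger_id, entry_id, data):
        if ledger_id in self.fenced_ledgers:
            raise IOError("ledger fenced")   # old writer sees a write error
        self.entries[(ledger_id, entry_id)] = data

bookie = Bookie()
bookie.add_entry(ledger_id=1, entry_id=0, data=b"ok")   # writer still active
bookie.fence(ledger_id=1)                               # reader starts recovery
try:
    bookie.add_entry(1, 1, b"late write")
except IOError as e:
    print(e)   # ledger fenced
```

Because the writer can no longer extend the ledger past the fence, the reader's recovered LAC is final: this is the "consistency among writers" guarantee from the slide on guarantees.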
23. Recovery begins when the ledger is opened by the reader in recovery mode
• Check whether the ledger needs recovery (not closed)
• Fence the ledger first, then initiate recovery
• Step 1: Flag that the ledger is in recovery by updating ZooKeeper state.
• Step 2: Fence the bookies.
• Step 3: Recover the ledger.
Fencing and Recovery
26. Auditor
• Runs on every bookie machine; a leader is elected through ZooKeeper.
• One active auditor per cluster.
• Watches for bookie failures and manages the under-replicated ledgers list.
Replication Workers
• Responsible for performing replication to maintain quorum copies.
• Can run on any machine in the cluster; usually runs on each bookie machine.
• Work on the under-replicated ledgers list published by the Auditor.
• Pick one ledger at a time, create a lock on ZooKeeper, and replicate to the local bookie.
• If the local bookie is already part of the ensemble, drop the lock and move to the next one in the list.
Bookie Crashes - Auto Recovery
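The replication-worker loop above can be sketched as follows. This is an illustrative model under simplifying assumptions: the ZooKeeper lock becomes an in-process set, and "replicating" an entry range is reduced to adding the local bookie to the ledger's replica list.

```python
# Sketch of a replication worker draining the auditor's
# under-replicated ledgers list.

def run_worker(local_bookie, under_replicated, ensembles, locks):
    """Replicate each unlocked under-replicated ledger to local_bookie."""
    repaired = []
    for ledger_id in list(under_replicated):
        if ledger_id in locks:
            continue                     # another worker holds the lock
        locks.add(ledger_id)             # "create a lock on ZooKeeper"
        if local_bookie in ensembles[ledger_id]:
            locks.discard(ledger_id)     # already a replica here; skip it
            continue
        ensembles[ledger_id].append(local_bookie)   # copy the entries over
        under_replicated.discard(ledger_id)
        locks.discard(ledger_id)
        repaired.append(ledger_id)
    return repaired

ensembles = {10: ["b1"], 11: ["b2", "b3"]}
under = {10}
print(run_worker("b2", under, ensembles, locks=set()))  # [10]
print(ensembles[10])  # ['b1', 'b2'] - quorum restored on the local bookie
```

The per-ledger lock is what lets many workers run in parallel across the cluster without two of them repairing the same ledger at once.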