Apache BookKeeper: a Salesforce Use Case
Venkateswararao Jujjuri (JV)
Cloud Storage Architect
@jvjujjuri | Twitter
Salesforce needs and requirements
Hunt and Selection
Improvements and Enhancements
As a Service at Scale @ Salesforce
Q & A
Salesforce Application Storage Needs
Store for Persistent WAL, data, and objects
Low, constant write latencies
• Transaction Log, Smaller writes
Low, constant Random Read latencies
Append-only entries
Highly Consistent for immutable data
Long Term Storage
Distributed and linearly scalable.
On commodity hardware
Low Operating Cost
What did we consider?
Build vs. Buy
• Time-To-Market, resources, cost.
• A CP system
• With unreliable reads, the read path can behave like an AP system.
• A lot of effort to get AP behavior on the write path.
• Remember: immutable data.
• Effectively a CAP system, because of immutable/append-only data.
• Came close to what we want.
• Almost there, but not everything.
A highly consistent, available, replicated, distributed log service.
Immutable, append-only store.
Thick client; simple and elegant placement policy:
• No central master
• No complicated hashing/computing for placement
Low latency on both writes and reads.
Runs on commodity hardware.
Built for the WAL use-case, but can be expanded to broader storage needs.
Uses ZooKeeper as its consensus service and metadata store.
A system to reliably log streams of records.
Designed to store write-ahead logs for database-like applications.
Inspired by, and designed to solve, HDFS NameNode availability deficiencies.
• 2008: Open-sourced contribution to ZooKeeper
• 2011: Sub-project of ZooKeeper
• 2012: In production
Journal: Write-ahead log
Ledger: Log stream
Entry: Each entry of the log stream
Client: Library that lives with the application
Ensemble: Set of bookies across which a ledger is striped
Cluster: All bookies belonging to a given instance of BookKeeper
Write Quorum Size: Number of replicas
Ack Quorum Size: Number of responses needed before the client's write is satisfied
LAC: Last Add Confirmed
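These terms map directly onto the BookKeeper client API. A minimal Java sketch (assuming BookKeeper 4.x; the ZooKeeper connect string and password are illustrative):

    import org.apache.bookkeeper.client.BookKeeper;
    import org.apache.bookkeeper.client.BookKeeper.DigestType;
    import org.apache.bookkeeper.client.LedgerHandle;

    public class Terms {
        public static void main(String[] args) throws Exception {
            // Client: the library, living inside the application process.
            // ZooKeeper doubles as the metadata store.
            BookKeeper bk = new BookKeeper("zk1:2181");

            // Ensemble = 3 bookies, write quorum = 3 replicas,
            // ack quorum = 2 responses before the client's write is satisfied.
            LedgerHandle lh = bk.createLedger(3, 3, 2, DigestType.MAC, "secret".getBytes());

            // Entry: one record of the log stream (the ledger).
            long entryId = lh.addEntry("a log record".getBytes());

            // LAC: Last Add Confirmed, the highest entry safe for readers.
            System.out.println("entry " + entryId + ", LAC " + lh.getLastAddConfirmed());

            lh.close();
            bk.close();
        }
    }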
• Thick client: carries the heavy weight of the protocol.
• Thin server (bookie): bookies never initiate any interaction with ZooKeeper or fellow bookies.
• ZooKeeper monitors bookies.
• Metadata is stored on ZooKeeper.
• Auditor to monitor bookies and identify under-replicated ledgers.
• Replication workers to replicate under-replicated ledger copies.
Writer
• Gets a writer ledger handle.
• Add an entry to the ledger: write to the ledger.
Reader
• Gets a read-only ledger handle; may ask for a non-recovery read handle.
• Get an entry from the ledger: read from the ledger.
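A hedged end-to-end sketch of that flow against the standard client API (the ledger id and password would normally be shared out of band; the connect string is assumed):

    import java.util.Enumeration;
    import org.apache.bookkeeper.client.BookKeeper;
    import org.apache.bookkeeper.client.BookKeeper.DigestType;
    import org.apache.bookkeeper.client.LedgerEntry;
    import org.apache.bookkeeper.client.LedgerHandle;

    public class WriteThenRead {
        public static void main(String[] args) throws Exception {
            BookKeeper bk = new BookKeeper("zk1:2181");
            byte[] pw = "secret".getBytes();

            // Writer: gets a writer ledger handle and appends entries.
            LedgerHandle writer = bk.createLedger(3, 2, 2, DigestType.MAC, pw);
            long last = -1;
            for (int i = 0; i < 10; i++) {
                last = writer.addEntry(("entry-" + i).getBytes());
            }
            long ledgerId = writer.getId();
            writer.close();

            // Reader: a non-recovery handle reads without fencing a live writer.
            LedgerHandle reader = bk.openLedgerNoRecovery(ledgerId, DigestType.MAC, pw);
            Enumeration<LedgerEntry> entries = reader.readEntries(0, last);
            while (entries.hasMoreElements()) {
                System.out.println(new String(entries.nextElement().getEntry()));
            }
            reader.close();
            bk.close();
        }
    }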
Salesforce Application with BookKeeper
[Diagram: the Salesforce application embedding the BookKeeper client library]
• If an entry has been acknowledged, it must be readable.
• If an entry is read once, it must always be readable.
• If the write of entryID 'n' is successful, all entries up to 'n' are successfully committed.
• Last Add Confirmed provides consistency among readers.
• Fencing provides consistency among writers.
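The LAC guarantee is what makes a tailing reader safe: it only reads entries at or below the LAC, so nothing it returns can later disappear. A sketch, reusing the assumed bk/ledgerId/pw from above:

    import java.util.Enumeration;
    import org.apache.bookkeeper.client.BookKeeper;
    import org.apache.bookkeeper.client.BookKeeper.DigestType;
    import org.apache.bookkeeper.client.LedgerEntry;
    import org.apache.bookkeeper.client.LedgerHandle;

    public class TailReader {
        // Tail a ledger that another process may still be writing.
        static void tail(BookKeeper bk, long ledgerId, byte[] pw) throws Exception {
            LedgerHandle lh = bk.openLedgerNoRecovery(ledgerId, DigestType.MAC, pw);
            long next = 0;
            for (int round = 0; round < 100; round++) {   // bounded for the sketch
                long lac = lh.readLastConfirmed();        // highest safely readable entry
                if (lac >= next) {
                    Enumeration<LedgerEntry> es = lh.readEntries(next, lac);
                    while (es.hasMoreElements()) {
                        System.out.println(new String(es.nextElement().getEntry()));
                    }
                    next = lac + 1;                       // read once, always readable
                }
                Thread.sleep(100);
            }
            lh.close();
        }
    }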
Enhancements - in the internal branch, working to push upstream
Out-of-order write and in-order ack (see the first sketch below)
• Application has the liberty to pre-allocate entryIDs.
• Multiple application threads can write in parallel.
User-defined ledger names
• Not restricted by BK-generated ledger names.
Explicit LAC updates (see the second sketch below)
• Added ReadLac and WriteLac to the protocol.
• Maintain both piggy-backed LAC and explicit LAC simultaneously.
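First sketch, out-of-order write with in-order ack: upstream BookKeeper's createLedgerAdv handle lets the application pre-allocate entry IDs so many threads can submit in parallel, while acks still surface in entry-ID order. A sketch, assuming BookKeeper 4.4+ (connect string and password illustrative):

    import java.util.concurrent.CountDownLatch;
    import org.apache.bookkeeper.client.BookKeeper;
    import org.apache.bookkeeper.client.BookKeeper.DigestType;
    import org.apache.bookkeeper.client.LedgerHandle;

    public class OutOfOrderWrites {
        public static void main(String[] args) throws Exception {
            BookKeeper bk = new BookKeeper("zk1:2181");
            // "Adv" handle: the application, not BK, assigns entry IDs.
            LedgerHandle lh = bk.createLedgerAdv(3, 3, 2, DigestType.MAC, "secret".getBytes());

            int n = 100;
            CountDownLatch done = new CountDownLatch(n);
            for (int i = 0; i < n; i++) {
                final long entryId = i;                   // pre-allocated by the app
                byte[] data = ("entry-" + entryId).getBytes();
                // Writes may land on bookies out of order; acks are
                // delivered in entry-ID order (in-order ack).
                lh.asyncAddEntry(entryId, data,
                    (rc, handle, eid, ctx) -> done.countDown(), null);
            }
            done.await();
            lh.close();
            bk.close();
        }
    }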
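Second sketch, explicit LAC: a client-side knob pushes an explicit LAC to bookies when the writer is idle, instead of relying only on the LAC piggy-backed on the next write. A hedged sketch; treat the exact configuration API as an assumption (upstream spells the setter setExplictLacInterval):

    import org.apache.bookkeeper.client.BookKeeper;
    import org.apache.bookkeeper.conf.ClientConfiguration;

    public class ExplicitLacConfig {
        public static void main(String[] args) throws Exception {
            ClientConfiguration conf = new ClientConfiguration();
            conf.setZkServers("zk1:2181");
            // Assumption: push an explicit LAC to bookies every 100 ms even
            // when the writer is idle; 0 (the default) keeps piggy-back only.
            conf.setExplictLacInterval(100);
            BookKeeper bk = new BookKeeper(conf);
            // ... normal ledger usage; readers see the LAC advance while the
            // writer idles, instead of waiting for the next piggy-backed write.
            bk.close();
        }
    }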
Enhancements - Future
Conventional Name Space
• User-defined names
• Treat the ledgerId as an i-node in a file system.
Disk Scrubbers and Repairs
• Actively hunt and repair bit rot and corruption.
Scalable Metadata Store
• Separate and dedicated metadata store
• Not restricted by ZooKeeper limitations
Out-of-order write and in-order ack
[Diagram: entries 0-5 of a ledger appended in parallel by writer apps A, B, and C]
Last Add Confirmed
[Diagram: entries 0-5 with writer apps A, B, and C; reader app D sees entries only up to the LAC]
What Can Happen?
• Client Restarts
• Client loses connection with ZooKeeper.
• Client loses connection with bookies.
• Bookie goes down.
• Disk(s) on a bookie go bad; IO issues.
• Bookie gets disconnected from the network, i.e., from the rest of the cluster.
Writing Client Crash
What is the last entry?
• Nothing happens until a reader attempts to open the ledger.
• The recovery process gets initiated when a process opens the ledger for reading:
• Close the ledger on ZooKeeper.
• Identify the last entry of the ledger.
• Update metadata on ZooKeeper with the Last Add Confirmed (LAC).
Client gets disconnected from bookies
Either the bookie is down, or the network between client and bookie has issues.
Contact ZooKeeper to get the list of available bookies.
Update the ensemble set and register it with ZooKeeper.
Continue with the new set.
Client gets disconnected from ZooKeeper
Tries to re-establish the connection.
Can continue to read and write to the ledger.
Until then, no metadata operations can be performed:
• Cannot create a ledger
• Cannot open a ledger
• Cannot close a ledger
Reader opens while the writer is active
BK guarantees correctness.
The reader initiates the recovery process:
• Fences the ledger on ZooKeeper.
• Informs all bookies in the ensemble that recovery has started.
• After these steps the writer will get write errors (if actively writing).
• The reader contacts all bookies to learn the last entry.
• Replicates the last entry if it doesn't have enough replicas.
• Updates ZooKeeper with the LAC and closes the ledger.
Recovery begins when the ledger is opened by the reader in recovery mode:
• Check if the ledger needs recovery (not closed).
• Fence the ledger first, then recover:
• Step 1: Flag that the ledger is in recovery by updating ZooKeeper state.
• Step 2: Fence the bookies.
• Step 3: Recover the ledger.
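From the client's perspective, this whole recovery sequence is triggered by a single call. A sketch of a recovery-mode open (same assumed bk/ledgerId/pw as in the earlier sketches):

    import org.apache.bookkeeper.client.BookKeeper;
    import org.apache.bookkeeper.client.BookKeeper.DigestType;
    import org.apache.bookkeeper.client.LedgerHandle;

    public class RecoveryOpen {
        // Recovery-mode open: flags recovery in ZooKeeper, fences the
        // ensemble, finds the true last entry, re-replicates it if it is
        // short on copies, and seals the ledger with its final LAC.
        static long recoverAndSeal(BookKeeper bk, long ledgerId, byte[] pw) throws Exception {
            LedgerHandle lh = bk.openLedger(ledgerId, DigestType.MAC, pw);
            // After fencing, the old writer's next addEntry() fails with a
            // fencing error; readers can trust everything up to this LAC.
            long lastEntry = lh.getLastAddConfirmed();
            lh.close();
            return lastEntry;
        }
    }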
Fencing and Recovery
[Diagram: writes and non-recovery reads proceed until a reader fences and recovers the ledger; recovery reads succeed while the old writer's attempt to write fails]
Bookie Crashes - Auto Recovery
Auditor
• Starts on every bookie machine; a leader gets elected through ZooKeeper.
• One active auditor per cluster.
• Watches bookie failures and manages the under-replicated ledgers list.
Replication Worker
• Responsible for performing replication to maintain quorum copies.
• Can run on any machine in the cluster; usually runs on each bookie machine.
• Works on the under-replicated ledgers list published by the Auditor.
• Picks one ledger at a time, creates a lock on ZooKeeper, and replicates to the local bookie.
• If the local bookie is already part of the ensemble, drops the lock and moves to the next one in the list.
Heterogeneous Stores and Tiered Architecture
Clusters of storage serving App Instances