EVCache & Moneta (GoSF)
GoSF July 20, 2016
Ephemeral Volatile Cache
Clustered memcached optimized for AWS and tuned for Netflix use cases.
What is EVCache?
● Distributed Memcached
● Tunable Replication
● Highly Resilient
● Topology Aware
● Data Chunking
● Additional Functionality
Ephemeral Volatile Cache
[Diagram: a home page request served from EVCache]
Why Optimize for AWS
● Instances disappear
● Zones disappear
● Regions can "disappear" (Chaos Kong)
● These do happen (and we test all the time)
● Network can be lossy
○ Throttling
○ Dropped packets
● Customer requests move between regions
EVCache Use @ Netflix
● 70+ distinct EVCache clusters
● Used by 500+ Microservices
● Data replicated over 3 AWS regions
● Over 1.5 Million replications per second
● 65+ Billion objects
● Tens of Millions ops/second (Trillions per day)
● 170+ Terabytes of data stored
● Clusters from 3 to hundreds of instances
● 11000+ memcached instances of varying size
Architecture
[Diagram: clients and servers register with Eureka (Service Discovery). Each server runs Memcached, Prana (Sidecar), and monitoring & other processes; each client application embeds the client library and EVCache client.]
Architecture
● Complete bipartite graph between clients and servers
● Sets fan out, gets prefer closer servers
● Multiple full copies of data
[Diagram: a client connected to replicas in us-west-2a, us-west-2b, and us-west-2c]

Reading
[Diagram: each client reads from the replica in its own availability zone]

Writing
[Diagram: each client fans writes out to all three zones]
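The zone topology above can be sketched as a toy client, with in-memory maps standing in for the memcached replicas (all names here are illustrative, not the real EVCache client API): writes fan out to every zone so each zone holds a full copy, while reads prefer the client's own zone and fall back to the others.

```go
package main

import (
	"errors"
	"fmt"
)

// zoneStore stands in for one memcached replica in an availability zone.
type zoneStore map[string]string

// evClient is a toy topology-aware client: one full copy per zone.
type evClient struct {
	localZone string
	zones     map[string]zoneStore
}

// Set fans out to every zone, so each zone holds a full copy of the data.
func (c *evClient) Set(key, value string) {
	for _, z := range c.zones {
		z[key] = value
	}
}

// Get prefers the client's own zone and falls back to the other copies.
func (c *evClient) Get(key string) (string, error) {
	if v, ok := c.zones[c.localZone][key]; ok {
		return v, nil
	}
	for name, z := range c.zones {
		if name == c.localZone {
			continue
		}
		if v, ok := z[key]; ok {
			return v, nil
		}
	}
	return "", errors.New("miss")
}

func main() {
	c := &evClient{
		localZone: "us-west-2a",
		zones: map[string]zoneStore{
			"us-west-2a": {}, "us-west-2b": {}, "us-west-2c": {},
		},
	}
	c.Set("user:42", "profile-blob")
	v, _ := c.Get("user:42")
	fmt.Println(v) // profile-blob

	// Simulate losing the local zone: reads fall back to another full copy.
	delete(c.zones["us-west-2a"], "user:42")
	v, _ = c.Get("user:42")
	fmt.Println(v)
}
```

This is why instances or whole zones can disappear without losing data: every zone is a complete replica.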
Use Case: Lookaside cache
[Diagram: data flow between the application (microservice), its service client library and Ribbon client, and the EVCache client's connections to the cache servers]
Use Case: Primary Store
[Diagram: offline/nearline services write precomputes for recommendations into EVCache; online client applications read them through the client library and EVCache client]
Use Case: Transient Data Store
[Diagram: several online client applications, each embedding the client library and EVCache client, sharing the transient data store]
Additional Features
● Global data replication
● Secondary indexing (debugging)
● Cache warming (faster deployments)
● Consistency checking
All powered by metadata flowing through Kafka
Cross-Region Replication
[Diagram: replication from Region A to Region B via Kafka, a Repl Relay, and a Repl Proxy]
1. App mutates data in Region A
2. Send metadata to Kafka
3. Repl Relay polls the message
4. Get data for the set
5. Send the message to the remote Repl Proxy over HTTPS
6. Repl Proxy mutates data in Region B
7. App reads in Region B
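The relay's half of that flow can be sketched in a few lines, with a channel standing in for the Kafka topic and a callback standing in for the HTTPS call to the remote Repl Proxy (all names hypothetical):

```go
package main

import "fmt"

// event mirrors the mutation metadata published to Kafka (step 2).
type event struct{ key, op string }

// relay sketches steps 3-5: poll a message, fetch the current value for
// sets (the payload is NOT in Kafka, only metadata), and hand it to a
// sender that stands in for the HTTPS call to the remote Repl Proxy.
func relay(msgs <-chan event, local map[string]string, send func(key, op, value string)) {
	for m := range msgs {
		value := ""
		if m.op == "set" { // step 4: get data for the set
			value = local[m.key]
		}
		send(m.key, m.op, value) // step 5: send over HTTPS
	}
}

func main() {
	local := map[string]string{"user:42": "profile-blob"} // Region A cache
	remote := map[string]string{}                         // Region B cache

	msgs := make(chan event, 2)
	msgs <- event{"user:42", "set"}
	msgs <- event{"user:7", "delete"}
	close(msgs)

	relay(msgs, local, func(key, op, value string) {
		// The Repl Proxy in Region B applies the mutation (step 6).
		if op == "set" {
			remote[key] = value
		} else {
			delete(remote, key)
		}
	})
	fmt.Println(remote["user:42"]) // profile-blob
}
```

Fetching the value at relay time (rather than shipping it through Kafka) keeps the event stream small and always replicates the latest value.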
Cache Warming (Deployments)
[Diagram: the Cache Warmer replays keys from Kafka into new instances while online client applications keep serving through the client library and EVCache client]
Minimal Code Example
Create EVCache Object
EVCache evCache = new EVCache.Builder()
.setAppName("EVCACHE_TEST")
.setCachePrefix("pre")
.setDefaultTTL(900)
.build();
Write Data
evCache.set("key", "value");
Read Data
evCache.get("key");
Delete Data
evCache.delete("key");
Failure Resilience in Client
● Operation Fast Failure
● Tunable Read Retries
● Read/Write Queues
● Set with Tunable Latch
● Async Replication through Kafka
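Two of those resilience ideas can be sketched directly (illustrative code, not the real client): a bounded write queue that fails an operation immediately when full instead of blocking, and reads that retry against a tunable number of additional replicas before giving up.

```go
package main

import (
	"errors"
	"fmt"
)

// enqueue sketches operation fast failure: each connection has a bounded
// write queue, and when it is full the operation fails immediately
// rather than blocking the caller.
func enqueue(q chan string, op string) error {
	select {
	case q <- op:
		return nil
	default:
		return errors.New("fast fail: write queue full")
	}
}

// getWithRetries sketches tunable read retries: try up to retries+1
// replicas, then fail fast once they are exhausted.
func getWithRetries(replicas []map[string]string, key string, retries int) (string, error) {
	attempts := retries + 1
	if attempts > len(replicas) {
		attempts = len(replicas)
	}
	for i := 0; i < attempts; i++ {
		if v, ok := replicas[i][key]; ok {
			return v, nil
		}
	}
	return "", errors.New("fast fail: retries exhausted")
}

func main() {
	q := make(chan string, 1)
	fmt.Println(enqueue(q, "set a")) // <nil>
	fmt.Println(enqueue(q, "set b")) // fast fail: write queue full

	replicas := []map[string]string{{}, {"k": "v"}}
	v, _ := getWithRetries(replicas, "k", 1)
	fmt.Println(v) // v
}
```

The common thread is bounding every operation's cost: a degraded cache should shed load quickly rather than stall its callers.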
Moneta
Next-gen EVCache Server
That is, why I'm talking about this at a Go meetup
Moneta
Moneta: The Goddess of Memory
Juno Moneta: The Protectress of Funds for Juno
● Evolution of the EVCache server
● EVCache on SSD
● Cost optimization
● Continually lowering EVCache cost per stream
● Takes advantage of global request patterns
Old Server
● Stock Memcached and Prana (Netflix sidecar)
● Solid, worked for years
● All data stored in RAM in Memcached
● Became more expensive with expansion / N+1 architecture
[Diagram: old server stack: stock Memcached, with Prana and metrics & other processes alongside]
Optimization
● Global data means many copies
● Access patterns are heavily region-oriented
● In one region:
○ Hot data is used often
○ Cold data is almost never touched
● Keep hot data in RAM, cold data on SSD
● Size RAM for working set, SSD for overall dataset
New Server
● Adds Rend and Mnemonic
● Still looks like Memcached
● Unlocks cost-efficient storage & server-side intelligence
[Diagram: new server stack: Rend is the external-facing server, with Memcached (RAM) and Mnemonic (SSD) as internal stores; Prana and metrics & other processes run alongside]

Rend
https://github.com/netflix/rend
go get github.com/netflix/rend
Rend
● High-performance Memcached proxy & server
● Written in Go
○ Powerful concurrency primitives
○ Productive and fast
● Manages the L1/L2 relationship
● Server-side data chunking
● Tens of thousands of connections
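Server-side data chunking can be illustrated with a small sketch: split a large value into fixed-size pieces stored under derived keys, then reassemble on read. The key-naming scheme (`key/0`, `key/1`, ...) is made up for illustration, not Rend's actual format.

```go
package main

import "fmt"

// chunk splits a value into fixed-size pieces under derived keys
// (key/0, key/1, ...). The naming scheme is illustrative only.
func chunk(key, value string, size int) map[string]string {
	out := map[string]string{}
	for i := 0; len(value) > 0; i++ {
		n := size
		if n > len(value) {
			n = len(value)
		}
		out[fmt.Sprintf("%s/%d", key, i)] = value[:n]
		value = value[n:]
	}
	return out
}

// assemble reverses chunk, reading derived keys in order until one is missing.
func assemble(key string, store map[string]string) string {
	var v string
	for i := 0; ; i++ {
		part, ok := store[fmt.Sprintf("%s/%d", key, i)]
		if !ok {
			return v
		}
		v += part
	}
}

func main() {
	store := chunk("big", "hello world", 4)
	fmt.Println(len(store))               // 3
	fmt.Println(assemble("big", store))   // hello world
}
```

Doing this server-side means clients see one logical value regardless of memcached's item-size limits.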
Rend
● Modular to allow future changes / expansion of scope
○ Set of libraries and a default main()
● Manages connections, request orchestration, and backing stores
● Low-overhead metrics library
● Multiple orchestrators
● Parallel locking for data integrity
[Diagram: Rend internals: connection management and protocol handling feed the server loop, request orchestration, and backend handlers, with metrics instrumented throughout]
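One way Rend's modularity could look in Go is a single backend interface that both stores satisfy; the interface below is a hypothetical sketch of that idea, not Rend's actual types.

```go
package main

import (
	"errors"
	"fmt"
)

// Handler is a hypothetical backend interface: memcached-in-RAM and
// Mnemonic-on-SSD could both sit behind the same set of operations,
// letting the orchestration layer treat them uniformly.
type Handler interface {
	Set(key string, value []byte) error
	Get(key string) ([]byte, error)
	Delete(key string) error
}

// mapStore is a trivial in-memory Handler used to exercise the interface.
type mapStore map[string][]byte

func (m mapStore) Set(key string, value []byte) error { m[key] = value; return nil }

func (m mapStore) Get(key string) ([]byte, error) {
	if v, ok := m[key]; ok {
		return v, nil
	}
	return nil, errors.New("not found")
}

func (m mapStore) Delete(key string) error { delete(m, key); return nil }

func main() {
	var h Handler = mapStore{}
	h.Set("k", []byte("v"))
	v, _ := h.Get("k")
	fmt.Println(string(v)) // v
}
```

Swapping a backend then means providing another Handler, which is what makes "a set of libraries and a default main()" a workable design.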
Moneta in Production
● Serving some of our most important personalization data
● Rend runs with two ports
○ One for regular users (read heavy or active management)
○ Another for "batch" uses: Replication and Precompute
● Maintains working set in RAM
● Optimized for precomputes
○ Smartly replaces data in L1
[Diagram: production Moneta stack: Rend exposes a standard port and a batch port externally; Memcached and Mnemonic are internal, with Prana and metrics & other processes alongside]
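The L1/L2 read path implied above can be sketched with maps standing in for the two stores (a simplification, not the real server): serve hot data from L1 (memcached in RAM); on an L1 miss, read L2 (Mnemonic on SSD) and promote the value into L1, with batch traffic skipping the promotion so precomputes don't evict hot user data.

```go
package main

import "fmt"

// get sketches the L1/L2 read path: L1 stands in for memcached (RAM),
// L2 for Mnemonic (SSD). User-facing reads promote L2 hits into L1;
// batch reads do not, so they cannot flush the hot working set.
func get(l1, l2 map[string]string, key string, promote bool) (string, bool) {
	if v, ok := l1[key]; ok {
		return v, true // hot: served from RAM
	}
	if v, ok := l2[key]; ok {
		if promote {
			l1[key] = v // this key is hot again; keep it in RAM
		}
		return v, true
	}
	return "", false
}

func main() {
	l1 := map[string]string{}
	l2 := map[string]string{"user:42": "precomputed-recs"}

	v, _ := get(l1, l2, "user:42", true) // user path: promotes into L1
	fmt.Println(v, l1["user:42"])        // precomputed-recs precomputed-recs

	get(l1, l2, "user:42", false) // batch path: leaves L1 alone
}
```

Separating the two ports is what lets replication and precompute traffic run at volume without disturbing the RAM working set.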
Mnemonic
Open source Soon™
Mnemonic
● Manages data storage to SSD
● Reuses Rend server libraries
○ Handles Memcached protocol
● Core logic maps Memcached operations onto RocksDB
Mnemonic stack (top to bottom): Rend Server Core Lib (Go) → Mnemonic Op Handler (Go) → Mnemonic Core (C++) → RocksDB
Why RocksDB for Moneta
● Fast at medium to high write load
○ Goal: 99% read latency ~20-25ms
● LSM Tree Design minimizes random writes to SSD
○ Data writes are buffered
● SST: Static Sorted Table
[Diagram: records are buffered in memtables, then flushed to SST files]
How we use RocksDB
● FIFO "Compaction"
○ More suitable for our precompute use cases
○ Level compaction generated too much traffic to SSD
● Bloom filters and indices kept in-memory
● Records sharded across many RocksDBs per instance
○ Reduces number of SST files checked, decreasing latency
[Diagram: the Mnemonic Core Lib hashes keys (e.g., ABC, XYZ) across many RocksDB instances]
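The sharding step can be sketched as a simple hash of the key to one of N RocksDB instances (the choice of FNV-1a here is illustrative; the real hash function isn't specified above):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor hashes a key to one of n RocksDB instances, so each lookup
// only consults the bloom filters and SST files of a single small shard.
func shardFor(key string, n int) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() % uint32(n))
}

func main() {
	const shards = 8
	for _, k := range []string{"ABC", "XYZ"} {
		fmt.Printf("%s -> rocksdb shard %d\n", k, shardFor(k, shards))
	}
}
```

Because each shard holds a fraction of the data, the number of SST files any one read must check stays small, which is what keeps tail latency down.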
FIFO Limitation
● FIFO compaction not suitable for all use cases
○ Very frequently updated records may prematurely push out other valid records
● Future: custom compaction or level compaction
[Diagram: over time, FIFO compaction drops the oldest SST files whole; repeated updates to records A and B (A1-A3, B1-B3) fill new SSTs and push out still-valid records C-H]
Moneta Performance Benchmark
● 1.7ms 99th percentile read latency
○ Server-side latency
○ Not using batch port
● Load: 1K writes/sec, 3K reads/sec
○ Reads have 10% misses
● Instance type: i2.xlarge
Open Source
https://github.com/netflix/EVCache
https://github.com/netflix/rend
Thank You
@sgmansfield
smansfield@netflix.com
