3. About
● Founded in 2013 by the original creators of Apache Spark
● Data and AI platform as a service for 5000+ customers
● 1500+ employees, 400+ engineers, >$400M annual recurring revenue
● 3 cloud providers, 50+ regions
● Launching millions of VMs / day to run data engineering and ML workloads, processing exabytes of data
4. Agenda
● Monitoring at Databricks before M3
● Deploying M3
○ Architecture
○ Migration
● Lessons Learned
○ Operational advice
○ Things to monitor
○ Updates and upgrades
6. Monitoring At Databricks
● Monitoring targets:
○ Cloud-native, majority of services run on Kubernetes
○ Customer Spark workloads run on VMs in customer environments
● Prometheus-based monitoring since 2016
● All service teams use metrics, dashboards, alerts
○ Most engineers are PromQL-literate
● Use-cases: real-time alerting, debugging, SLO reporting, automated event response
● Monitoring and data-driven decision making are core to the Databricks engineering culture
8. Scale Numbers
● 50+ regions / k8s clusters across multiple cloud providers
● 100+ microservices
● Infrastructure footprint of 4M+ VMs of Databricks services and customer Apache Spark workers
● Largest single Prometheus instance
○ 900k samples / sec
○ High churn rate: many series have < 100 samples (i.e. metrics from short-lived Spark jobs persist for < 100 minutes at a 1 min scrape interval)
○ Disk usage (15d retention): 4TB
○ Huge AWS VM: x1e.16xlarge (64 vCPU, 1952GB RAM)
9. Scaling Bottlenecks & Pain Points
Operational
● Frequent capacity issues - OOMs, high disk usage
● Multi-hour Prometheus updates (long WAL recovery process during startup)
UX
● Mental overhead of a sharded view of metrics
● Big queries never completing (and causing OOMs)
● Short retention period
● Subject to a strict metric whitelist
10. Searching for a Scalable Monitoring Solution
Requirements:
● Support for high metric volume, cardinality, and churn rate
● Minimum 90d retention
● Compatible with PromQL
● Global view of (some) metrics
● High availability setup
Nice-to-have:
● Good update and maintenance story - less manual intervention, no metrics gaps
● Battle-tested in large scale production environment
● Open source
(Mid-2019) Alternatives considered: sharded Prometheus, Thanos, Cortex, Datadog, SignalFx
11. Why M3?
● Fulfilled all our hard requirements
○ Designed for large scale workloads and horizontally scalable
○ Exposes Prometheus API query endpoint
○ High availability with multi-replica setup
○ Designed for multi-region and cloud setup, with global querying feature
● Battle-tested at high scale at Uber in a production environment
● Has a Kubernetes operator for automated cluster operations (example resource after this list)
● Cool features that we'd be interested in using
○ Aggregation on ingest
○ Downsampling (potentially longer retention)
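To make the operator point concrete, here is a minimal sketch of an M3DBCluster resource, assuming the m3db-operator's v1alpha1 CRD; the image tag, shard count, and namespace preset are illustrative values in the shape of the operator's quick-start docs, not our production config:

    apiVersion: operator.m3db.io/v1alpha1
    kind: M3DBCluster
    metadata:
      name: example-cluster
    spec:
      image: quay.io/m3db/m3dbnode:latest  # pin a real version in practice
      replicationFactor: 3
      numberOfShards: 256
      etcdEndpoints:
        - http://etcd:2379
      isolationGroups:        # spread replicas across failure domains
        - name: group1
          numInstances: 1
        - name: group2
          numInstances: 1
        - name: group3
          numInstances: 1
      namespaces:
        - name: metrics
          preset: 10s:2d      # operator preset: 10s resolution, 2d retention

The operator then handles placement, node replacement, and scaling from this declarative spec.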
20. Migration
1. Shadow deployment
○ Dual-write metrics to both Prom and M3 storage (see the config sketch after this list)
○ Evaluate alerts using both the Prom and M3 rule engines
○ Open a query endpoint for the Observability team to test queries and dashboards
2. Behavior validation
○ Compare alert evaluation between old and new system
○ Compare dashboards side-by-side
3. Incremental rollout strategy
○ Percentage-based rollout of ad-hoc query traffic to M3, staged across environments
○ Per-service rollout of alert evaluation
4. Final outcome: All ad-hoc query traffic and alerts served from M3
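A minimal sketch of step 1's dual write on the Prometheus side, assuming the m3coordinator's standard remote-write endpoint (the hostname is a placeholder); Prometheus keeps its local TSDB as the Prom copy and additionally ships every sample to M3:

    # prometheus.yml fragment
    remote_write:
      - url: "http://m3coordinator.example.internal:7201/api/v1/prometheus/remote/write"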
23. Outcome
● 1-yr migration (mid-2019 to mid-2020)
● M3 runs as the sole metrics provider in all environments across clouds
○ (beta) Global query endpoint available via M3 for all metrics
● User experience largely unchanged (PromQL everything)
● Retention is now 90d nearly everywhere
● Migration went pretty smoothly and avoided major outages
● Higher confidence to continue scaling metrics workloads into upcoming years
● No more giant VMs with 2TB RAM!!
25. M3 From The Trenches
● System metrics to monitor
● General operational advice
● What to alert on
● How we do updates/upgrades
26. Overview
● Overall M3 has been amazingly stable
○ By far our biggest issue is running out of disk space
● Across more than 50 deployments only a few have been problematic
○ We'll dive into why, and how to avoid it
27. M3 at Databricks
● Large number of clusters means things HAVE to be automated
○ We use a combination of Spinnaker and Jenkins to kubectl apply templates
● About 900k samples per second in large clusters
● About 200k series read per second in large clusters
28. Key Metrics to Watch
● Memory used (alert if steadily over 60%)
○ We've seen that spikes can cause OOMs if you're consistently over this
○ Resolve by
■ Scale up cluster, or reduce incoming metric load
○ sum(container_memory_rss{filter}) by (kubernetes_pod_name)
● Disk space used (alert if predict_linear shows the disk filling within 14 days; see the rule sketch after this list)
○ 14 days seems long, but it gives us plenty of time to provision new nodes and allow data to migrate
○ Resolve by
■ Scale up cluster, reduce retention, reduce incoming metric load
○ (kubelet_volume_stats_capacity_bytes{filter} - kubelet_volume_stats_available_bytes{filter}) / kubelet_volume_stats_capacity_bytes{filter}
● Cluster scale-up can be slow
○ Be sure to test how long it takes in your cluster
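A minimal sketch of the two alerts above as a Prometheus-style rule file; the {filter} placeholders need real matchers for your M3 pods and volumes, and the memory-limit denominator, lookbehind windows, and 'for' durations are assumptions (only the 60% threshold and 14-day horizon come from this slide):

    groups:
      - name: m3-capacity
        rules:
          # Memory steadily over 60% of the pod's limit (limit metric is an assumed choice).
          - alert: M3HighMemory
            expr: |
              sum by (kubernetes_pod_name) (container_memory_rss{filter})
                / sum by (kubernetes_pod_name) (container_spec_memory_limit_bytes{filter})
                > 0.60
            for: 30m
          # Volume projected to be full within 14 days, based on the last 6h trend.
          - alert: M3DiskFillingUp
            expr: predict_linear(kubelet_volume_stats_available_bytes{filter}[6h], 14 * 24 * 3600) <= 0
            for: 1h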
29. General Advice
● Avoid a lot of custom things
○ Staying as close to what the operator expects is best
● Observe query rates and set limits (config sketch after this list)
● Have a good testing env
○ Need to iterate quickly
○ Be able to throw away data
○ Try to have it at scale
● Have a look at the M3 dashboards and learn what things mean
○ https://grafana.com/grafana/dashboards/8126
○ Very dev focused; we suggest making your own with key metrics
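On setting limits, a minimal sketch of per-query limits in the m3query/m3coordinator config; these field names existed around the M3 1.0 era but should be treated as an assumption to verify against your version's docs:

    # m3coordinator / m3query config fragment (assumed field names)
    limits:
      perQuery:
        # Abort queries that touch more series than this.
        maxFetchedSeries: 100000
        # Error out rather than silently returning partial results at the limit.
        requireExhaustive: true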
31. Other Alerting
● high latency ingesting samples: coordinator_ingest_latency_bucket (rule sketch after this list)
● rate(coordinator_write_errors{code!='4XX'}[1m])
● rate(coordinator_fetch_errors{code!='4XX'}[1m])
● high out of order samples:
○ rate(database_tick_merged_out_of_order_blocks[5m]) > X
○ this can help catch double scrapes
■ Due to the pull-based architecture, double scrapes can trigger false alerts
○ inhibit during node startup
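A minimal sketch of these as rules; the metric names come from this slide, while rule names, thresholds, windows, and durations are illustrative (the slide's X stays something you tune per cluster):

    groups:
      - name: m3-coordinator
        rules:
          # p99 ingest latency; units and threshold are assumptions.
          - alert: M3HighIngestLatency
            expr: histogram_quantile(0.99, sum by (le) (rate(coordinator_ingest_latency_bucket[5m]))) > 5
            for: 10m
          # Non-4XX write and fetch errors out of the coordinator.
          - alert: M3CoordinatorWriteErrors
            expr: rate(coordinator_write_errors{code!='4XX'}[1m]) > 0
            for: 10m
          - alert: M3CoordinatorFetchErrors
            expr: rate(coordinator_fetch_errors{code!='4XX'}[1m]) > 0
            for: 10m
          # Out-of-order samples, useful for catching double scrapes; inhibit during node startup.
          - alert: M3OutOfOrderSamples
            expr: rate(database_tick_merged_out_of_order_blocks[5m]) > 100  # 100 stands in for the X above
            for: 15m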
32. Upgrades / Updates
● So far very smooth from a compatibility standpoint
○ Only seen one small query eval regression
○ Just did the 1.0 update, also smooth
■ Some API changes
● We manage this via Spinnaker + Jenkins
○ One pain point here is the lack of fully self-driving updates (i.e. only kubectl apply)
■ (This is actually now available)
○ Requires us to be vigilant to ensure our configs and m3db versions stay in sync
● Suggestion: Have a readiness check for coordinators (probe sketch after this list)
○ Restarting many at the same time can make k8s unhappy
○ Requires setting a connect consistency on the coordinator config
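A minimal sketch of such a probe on the m3coordinator container, assuming the coordinator's /health endpoint on its usual 7201 port (probe timings are illustrative):

    # Deployment pod-spec fragment for m3coordinator
    readinessProbe:
      httpGet:
        path: /health
        port: 7201
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3

With a connect consistency level set as noted above, a coordinator only reports ready once it can reach enough m3db nodes, so a rolling update pauses instead of restarting everything at once.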
33. Metric Spikes
For any high volume system, you will need a way to deal with spikes.
For example: A service adds a label with exploding cardinality
● Have a way to identify the source of the spike (example queries after this list)
● Be able to cut off that source easily
○ Preferable to OOMing your cluster
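One sketch of the identification step: standard PromQL cardinality queries against the query endpoint (these are heavy, so run them ad hoc; suspect_metric and some_label are hypothetical names):

    # Top 10 metric names by series count
    topk(10, count by (__name__) ({__name__=~".+"}))

    # Top 10 values of a suspect label on a suspect metric
    topk(10, count by (some_label) (suspect_metric))

Cutting the source off can then be as simple as a metric_relabel_configs drop rule on the offending scrape job, assuming Prometheus-side scraping.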
34. Capacity
● Brief overview of capacity planning at Databricks
● We've found that one m3db replica per 50,000 incoming time-series works pretty well
○ We are write heavy
● For the same workload we need about 50 write coordinators in each of two deployments (100 total)
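As a worked example of the replica rule of thumb above (numbers illustrative): a region writing 1M active time series would call for roughly 1,000,000 / 50,000 = 20 m3db replicas, plus headroom for spikes and for the slow cluster scale-up mentioned earlier.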
35. Future Work
Some examples of nifty new things M3 will enable us to do now that we're getting operationally mature
● Downsampling for older metrics
○ Expect a significant savings in disk space
● Using different namespaces for metrics with different requirements (config sketch after this list)
● Allowing direct push into M3 from difficult-to-scrape services
○ E.g. Databricks jobs, developer laptops
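For the namespace and downsampling items, a minimal sketch of how aggregated namespaces are declared in the m3coordinator config; the namespace names, retentions, and resolutions here are illustrative, not a plan of record:

    # m3coordinator config fragment
    clusters:
      - namespaces:
          # Raw metrics, short retention
          - namespace: default
            type: unaggregated
            retention: 48h
          # Downsampled rollup: 1 year at 5 min resolution
          - namespace: agg_5m_1y
            type: aggregated
            retention: 8760h
            resolution: 5m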
36. Conclusion
● Overall a successful migration for us
● Community has been helpful
● Nice new things on the horizon