Reproducible builds, a fast and safe deployment process, and self-healing services form the basis of stable and maintainable infrastructure. In this talk I cover, from the Site Reliability Engineering (SRE) perspective, how Dropbox addresses these challenges, which technologies are used, and what lessons were learned along the way.
6. Dropbox Backend Infrastructure:
Something one might call a “Hybrid Cloud”.
A few datacenters + AWS VPCs + an Edge Network (POPs).
Running Ubuntu Server, Puppet/Chef and Nagios.
The rest of the stack is pretty custom.
Dropbox today is not just “file storage”,
but dozens of services,
running on tens of thousands of machines.
12. Problems:
Repo is growing, new languages are in use:
Golang, Node.js, Rust.
No way to track dependencies; they are installed at runtime via Puppet.
Global Encap repo deployed via rsync onto the whole fleet.
13. In search of a better build system
What are the requirements?
• Fast
• Reproducible
• Hermetic
• Flexible
• Explicit dependencies
15. A Historical Perspective*
•(2006) Google got annoyed with Make and began “Blaze”
•(2012) Looks like ex-googlers at Twitter were missing “Blaze”, hence began “Pants”
•(2013) Looks like ex-googlers at Facebook were missing “Blaze”, hence began “Buck”
•(2014) Google realised what was going on and released “Blaze” as “Bazel”
•(2016) Ex-googlers at Thought Machine are still missing “Blaze”, hence began “Please”, in Go this time :)
16. Bazel Concepts
•WORKSPACE: one per repo, defines external dependencies
•BUILD files: Python-like DSL for describing build targets (a test is also a build target)
•`*.bzl` files: macros and extensions
•Labels such as `//dropbox/aws:ec2allocate` specify build targets
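To make these concepts concrete, here is a minimal BUILD file sketch for the `//dropbox/aws` package mentioned above; all file and library names other than `ec2allocate` are invented for illustration.

```python
# dropbox/aws/BUILD (hypothetical example)

py_library(
    name = "ec2lib",                  # a reusable library target
    srcs = ["ec2lib.py"],
)

py_binary(
    name = "ec2allocate",             # built and run as //dropbox/aws:ec2allocate
    srcs = ["ec2allocate.py"],
    deps = [":ec2lib"],               # explicit dependency on the library above
)

py_test(
    name = "ec2allocate_test",        # a test is also a build target
    srcs = ["ec2allocate_test.py"],
    deps = [":ec2lib"],
)
```

Because every dependency is declared explicitly, Bazel can rebuild and retest exactly the targets affected by a change.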
21. Migration Status
•Migration started in July 2015
•~6,400 Bazel BUILD files (~314,094 lines)
•~9,000 lines of custom *.bzl code
•Custom rules for: Python, Golang, Rust, Node.js
•BUILD file generator for CMake, Python
•Mostly done, still a work in progress …
22. Key Insights
•A robust remote build cache is essential.
•Keep explicit dependencies between components.
•It is possible to retrofit a new build system into an old codebase.
•Bazel, Pants, Buck, Please — pick one, or write your own :)
29. Gestalt: Challenges
•About 500 files and 60,000 SLOC
•Complex evaluation rules
•Configuration tends to become a Turing-complete language
•Advanced linters and validation needed
•Specifying resource limits is tricky
31. YAPS Packages: Historical approach
•Install Debian packages via Puppet/Chef
•Use Python’s Virtualenv & PyPI
•Encap — “Bag of rats” dependencies :)
•Blast the whole repo via rsync every few minutes from cron
32. YAPS Packages: Current approach
•SquashFS images. Native Linux in-kernel support
•Transparent compression and de-duplication
•Read-only mounts, +1 from security
•Loopback device mounts are fast
•A SquashFS image contains one or more Bazel targets plus the transitive dependency closure for each target
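The mechanics can be sketched with standard Linux tooling. This is illustrative only: the paths are hypothetical, and the commands require squashfs-tools and root privileges.

```shell
# Build a compressed, de-duplicated image from a package tree
mksquashfs ./aws-tools-build aws-tools.sqsh -comp xz

# Mount it read-only via a loopback device
mount -t squashfs -o loop,ro aws-tools.sqsh /srv/aws-tools
```

The read-only loopback mount is what gives the security and speed benefits listed above.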
33. $ cd /srv/aws-tools
$ tree -L 3
.
|-- ec2terminate # <- executable file
`-- ec2terminate.runfiles # <- transitive closure
|-- MANIFEST # <- list of all files
`-- __main__ # <- dependencies
|-- _solib_k8
|-- configs
|-- dbops
|-- devsecrets
|-- dpkg
`-- dropbox
...
34. YAPS Packages: Challenges
•*.pyc files have to be in the package
•Unmountable packages due to open file descriptors
•If code has to be modified on the prod server (YOLO), a special procedure called “Hijacking” is required
•The full package has to be pushed even for a one-line change (Xdelta compression might help)
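For instance, xdelta3 can ship only the binary diff between two package images; the filenames here are hypothetical.

```shell
# Encode a delta from the old image to the new one
xdelta3 -e -s aws-tools-v1.sqsh aws-tools-v2.sqsh update.vcdiff

# On the target host, reconstruct the new image from old image + delta
xdelta3 -d -s aws-tools-v1.sqsh update.vcdiff aws-tools-v2.sqsh
```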
36. Process Manager: Historical approach
•Using Supervisord with configuration generated by Puppet
•An update of Supervisord requires tasks to be restarted
•Tasks are lost if Supervisord is killed by the OOM killer
•Supervisord is really old, from 2004 (has XMLRPC?!)
37. Process Manager: Current approach
•Using Dbxinit: in-house project written in Go
•Keeps local state, thus can be updated without task downtime, and can survive OOM kills
•Supports health-checks for tasks
•Has resource limits: RSS, max fds, OOM score
•Speaks JSON HTTP
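Dbxinit is in-house and its actual wire format is not public; as a purely hypothetical illustration, a JSON task definition with a health check and resource limits might look like the fragment below. Every field name is invented.

```json
{
  "name": "ec2allocate",
  "command": "/srv/aws-tools/ec2allocate",
  "healthcheck": {
    "http": "http://localhost:8080/healthz",
    "interval_s": 10
  },
  "limits": {
    "rss_bytes": 2147483648,
    "max_fds": 1024,
    "oom_score_adj": 500
  }
}
```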
38. Configuration Management: Historical approach
•Puppet 2.x in server mode
•Perf problems with the server as the fleet grew in size
•No linters or unit tests, which caused a lot of errors
•“Blast to the fleet” deployment model
•A single global cron run executes all modules, which is slow
39. Configuration Management: Current approach
•Chef 12.x in Zero mode
•Invested heavily into linters and unit-testing
•Easy to test on a single production machine
•Has 3 runs: “platform”, “global” and “service”
•Cookbooks deployed via YAPS
•Generally trying to move service owners out of CM
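Zero mode means a run needs no Chef server, which is what makes testing on a single production machine easy; for example (the cookbook name is hypothetical):

```shell
# Converge one cookbook locally with chef-zero, no Chef server needed
chef-client --local-mode --runlist 'recipe[dropbox_platform]'
```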
41. Containers: Runc for stateless services
•Runc is integrated with Dbxinit; each task runs inside its own container
•Runc uses a minimal Ubuntu Docker image
•The main use case is dependency isolation via mount namespaces
•Doesn’t use network namespaces yet
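In OCI terms, this isolation setup corresponds to the namespaces array in runc’s config.json; an abridged fragment enabling mount (but not network) namespaces might look like:

```json
{
  "linux": {
    "namespaces": [
      {"type": "pid"},
      {"type": "mount"},
      {"type": "ipc"},
      {"type": "uts"}
    ]
  }
}
```

Omitting `{"type": "network"}` leaves the container in the host network namespace, matching the bullet above.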
42. Containers: Challenges
•Log rotation: logs should be moved off the box ASAP, since a machine with a stateless service can be shut down without notice
•Looking into the ELK stack to solve that problem
•Resource accounting: no resource limits are currently enforced
45. Ops Automation: Nagios &amp; Naoru
•Nagios runs on all production machines &amp; AWS EC2 instances
•Common problems are automatically fixed by an auto-remediation system called “Naoru”
•Its input is a stream of Nagios alerts; its output is a set of remediations that can be executed automatically
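The alert-to-remediation flow can be sketched as a simple dispatch loop; everything here (alert fields, remediation names) is invented for illustration and is not Naoru’s actual API.

```python
# Hypothetical sketch of an alert-driven auto-remediation planner.
# Alert fields and remediation names are invented, not Naoru's API.

# Map alert types to remediation actions considered safe to automate.
REMEDIATIONS = {
    "disk_full": "clean_tmp_and_logs",
    "service_down": "restart_service",
}

def plan_remediations(alerts):
    """Turn a stream of alert dicts into a list of executable remediations.

    Alerts with no known safe remediation are escalated to a human.
    """
    actions, escalations = [], []
    for alert in alerts:
        action = REMEDIATIONS.get(alert["type"])
        if action:
            actions.append({"host": alert["host"], "action": action})
        else:
            escalations.append(alert)
    return actions, escalations

alerts = [
    {"host": "web1", "type": "disk_full"},
    {"host": "db3", "type": "kernel_panic"},  # no safe auto-fix known
]
actions, escalations = plan_remediations(alerts)
print(actions)       # [{'host': 'web1', 'action': 'clean_tmp_and_logs'}]
print(escalations)   # [{'host': 'db3', 'type': 'kernel_panic'}]
```

The key design point is the explicit allowlist: only problems with a known safe fix are remediated automatically; everything else pages a human.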
47. Talk Summary
Building:
•Unified build system with clean dependencies
Deploying:
•One deployment system and sound packaging
Running:
•Robust process management and automation of simple tasks