Accelerating development velocity of production ML systems with Docker

Kinnary Jangla

The rise of microservices has allowed ML systems to grow in complexity but has also introduced new challenges when things inevitably go wrong. Most companies provide isolated development environments for engineers to work in. While a necessity once a team reaches even a modest size, this same organizational choice introduces potentially frustrating dependencies when those individual environments inevitably drift apart. Kinnary Jangla explains how Pinterest dockerized the services powering its home feed to accelerate development and decrease operational complexity, and outlines the benefits Pinterest gained from this change, benefits that may apply to other microservice-based ML systems.

The project was initially motivated by the difficulty of testing individual changes in a reproducible way. Without standardized environments, predeployment testing often yielded unrepresentative results, causing downtime and confusion for those responsible for keeping the service up. The Docker solution that was eventually deployed prepackages all the dependencies of each microservice, allowing developers to quickly spin up large portions of the home feed stack and always test against the current team-wide configs. This architecture has enabled the team to debug latency issues, expand its testing suite to include connections to simulated databases, and iterate more quickly on its Thrift APIs.

Kinnary shares tips and tricks for dockerizing a large-scale legacy production service and discusses how an architectural change like this can change how an ML team works.
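The setup described above, where each microservice ships with its dependencies prepackaged so developers can bring up large portions of the stack and always test against the current team-wide configs, is commonly expressed as a Dockerfile per service plus a Compose file. The sketch below is a hypothetical illustration of that pattern, not Pinterest's actual configuration; all service names, images, ports, and paths are assumptions:

```yaml
# docker-compose.yml -- hypothetical sketch of a dockerized home-feed stack.
# Service names, images, ports, and paths are illustrative assumptions.
services:
  feed-ranker:              # an ML microservice with its deps baked into the image
    build: ./feed-ranker    # its Dockerfile pins the runtime and library versions
    ports:
      - "9090:9090"         # e.g., a Thrift RPC port
    environment:
      - CONFIG_PATH=/configs/team-wide.yaml
    volumes:
      - ./configs:/configs:ro   # every engineer tests against the shared configs
    depends_on:
      - feature-store
  feature-store:            # simulated database for local and integration testing
    image: redis:7          # a stand-in; the production backing store would differ
```

With a file like this, `docker compose up` starts the service and its simulated database together, so predeployment tests run in the same environment for every engineer rather than in drifting per-developer setups.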