The typical organizational model is that teams are in constant flux: they are created for a piece of work, are responsible only for the change, and are not empowered, or not trusted, to run products. A high-performance organizational model lets teams take full responsibility for cost, compliance, and security, and lets them own their own incidents. This improves quality, lowers change failure rates, reduces costs, and leads to happier employees.

DevOps is about creating with the end in mind: cross-functional autonomous teams with end-to-end responsibility. You build it, you run it. You break it, you fix it. This means you want to automate everything in a CI/CD pipeline. Roll forward, don't roll back.

DevOps principles play an important role in a data-driven maturity model: continuous prototyping, and a data mindset and skills for everybody. In a data science workflow, combining the input data and deriving the model features usually takes most of the work, and many iterations before it is done. Implement features one by one. Start with a baseline model and compare it against more complex models, to see if the additional complexity is worth the performance gain.

The result of a data scientist's work is a trained model. Such a model consists of four components: input data, derived features, the chosen model type, and hyperparameters. A trained model is always the combination of data and code. So where do you run this trained model? Model management versions the code, but not the data. A model management server stores hyperparameters, performance metrics, metadata, and trained models.

In a data science pipeline, we have two components to deploy: the application and the trained model. So we split the pipeline into parts: a build pipeline, a train pipeline, and a deploy pipeline. A complete pipeline mapped to Azure components would look largely like this: an Azure DevOps Build pipeline, an Azure ML Training pipeline, and an Azure DevOps Release pipeline.
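The "baseline first" advice above can be sketched in a few lines. This is a minimal, self-contained illustration with toy data and hand-rolled models (the data, model choices, and threshold are assumptions for the example, not from the original text): only keep the more complex model if it clearly beats the baseline.

```python
# Sketch: start with a baseline model, then check whether a more complex
# model is worth its extra complexity. Data and models are illustrative.

def mse(y_true, y_pred):
    """Mean squared error between two sequences."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def baseline_predict(y_train, x_test):
    """Baseline: always predict the mean of the training targets."""
    mean = sum(y_train) / len(y_train)
    return [mean] * len(x_test)

def linear_predict(x_train, y_train, x_test):
    """A slightly more complex model: a simple least-squares line."""
    n = len(x_train)
    mx = sum(x_train) / n
    my = sum(y_train) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(x_train, y_train)) / \
            sum((x - mx) ** 2 for x in x_train)
    intercept = my - slope * mx
    return [slope * x + intercept for x in x_test]

# Toy data with a clear linear trend.
x_train, y_train = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
x_test, y_test = [5, 6], [10.1, 11.9]

baseline_error = mse(y_test, baseline_predict(y_train, x_test))
linear_error = mse(y_test, linear_predict(x_train, y_train, x_test))

# Only adopt the complex model if it clearly beats the baseline.
print(linear_error < baseline_error)  # → True for this data
```

The same pattern scales up: swap the hand-rolled line for any candidate model, keep the baseline fixed, and let the comparison decide.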
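To make the model-management idea concrete, here is a minimal sketch of the kind of record such a server might store per trained model: the four components (input data, derived features, model type, hyperparameters) plus performance metrics. The in-memory `registry` dict, the function names, and all the example values are assumptions for illustration, standing in for a real model management server.

```python
import hashlib
import json

# In-memory stand-in for a model management server (illustrative only).
registry = {}

def register_model(name, input_data_uri, features, model_type,
                   hyperparameters, metrics):
    """Store one trained-model record and return its version id."""
    record = {
        "input_data": input_data_uri,       # which data the model was trained on
        "features": features,               # derived feature list
        "model_type": model_type,           # chosen algorithm
        "hyperparameters": hyperparameters,
        "metrics": metrics,                 # performance on held-out data
    }
    # A content hash gives a reproducible version: same four components
    # and metrics, same version id.
    version = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()[:12]
    registry[(name, version)] = record
    return version

# Hypothetical example entry.
version = register_model(
    name="churn-model",
    input_data_uri="blob://datasets/churn/2024-01.parquet",
    features=["tenure_months", "avg_monthly_spend"],
    model_type="gradient_boosting",
    hyperparameters={"n_estimators": 100, "learning_rate": 0.1},
    metrics={"auc": 0.87},
)
```

Because the version id is derived from the record's content, retraining with the same data, features, model type, and hyperparameters yields the same id, which is exactly the reproducibility that "a trained model is the combination of data and code" demands.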