2. Crawl-Walk-Run-Fly
Crawl
• Standardize Source Control
• Define branching strategy
• Define Defect Tracking tool
• Automate System test cases
• Plan for Synthetic Transactions
• Enable Quality Checks
• Enable Automated Unit Test cases
• Expose to cloud
• Build Knowledge base
Walk
• Build Continuous Integration pipeline
• Build an Artifact repository
• Build Continuous Delivery pipeline for Dev
• Enable Synthetic transactions
Run
• Build Continuous Delivery pipeline for QA and production
• Enable Build Promotion
• Set up Infrastructure and application monitoring tools
Fly
• Enable Blue-Green environments
• Enable Infrastructure as Code
• Enable Metric-based alerts
• Enable Continuous Deployment
• Build pipelines of pipelines
• Progress towards DevSecOps
3. Recap
• Crawl Stage details at https://www.slideshare.net/EkloveMohan/preparing-for-devops
• Covered roles and responsibilities for the Organization, Developers, QA and Operations.
• Allow 2-3 months for the stage to mature.
• What we have in place to start this phase:
• Single Source Control system
• Well defined Branching Strategy
• Defect tracking tool
• Quality checks for the source code
• Unit Test Automation
• Automated System test cases
• Knowledge about the complete application
• Exposure to cloud
4. DevOps Statistics – “To be or not to be”
• Forrester declared 2017 the “Year of DevOps”, and their data confirms that
50% of organizations were implementing DevOps by the end of 2017.
Forrester now predicts that 2018 will be the “year of enterprise DevOps”.
https://go.forrester.com/blogs/2018-the-year-of-enterprise-devops/
• Puppet.com indicates that in 2017, high-performing DevOps organizations
deployed 46 times more frequently, had 440 times faster lead time from
commit to deploy, 96 times faster mean time to recover from downtime and a 5
times lower change failure rate (changes are 1/5 as likely to fail).
https://puppet.com/system/files/2017-10/2017-state-of-devops-report-puppet-dora.pdf
• International Data Corporation (IDC) believes that DevOps will be adopted --
in either practice or discipline -- by 80% of Global 1000 organizations by 2019.
http://business-technology-roundtable.blogspot.com/2015/02/the-devops-path-to-digital.html
5. Walk Stage - Roles and Responsibilities
• Organization
• Monitor progress on a daily basis
• Filter Continuous Integration and Delivery tools
• Define Artifact repository
• Quality Assurance (QA)
• Optimize system test cases
• Enable Synthetic Transactions
• Developers
• Build Continuous Integration (CI) pipeline
• Script tasks to run for CI
• Improve Unit test case coverage
• Operations
• Build Continuous Delivery (CD) pipeline
• Script tasks to run the CD process
• Build on cloud knowledge
6. Monitor Progress
• “Transformation” team to define realistic goals for each sprint. E.g.
• Automated unit test and system test coverage should be at 20% during the first 2 sprints and
should increase by 5% during each subsequent sprint.
• Baseline the quality score and ensure that code quality, design quality, architecture
quality and test quality do not deteriorate at any stage. If they do, ask “why”.
• Talk to the team to understand the hindrances. Bringing
transformation won’t be easy. We have architects in the group to
mitigate technical challenges and managers to handle people and
process challenges. There is always a “Plan B”.
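The sprint goals above imply a simple schedule of coverage targets. A minimal sketch, assuming the 20% baseline for two sprints and the 5% per-sprint increase stated above (the function name and the 100% cap are my own additions):

```python
def coverage_target(sprint: int, base: float = 20.0, step: float = 5.0) -> float:
    """Automated test coverage target (%) for a 1-indexed sprint:
    flat at `base` for the first two sprints, then +`step` per sprint,
    capped at 100%."""
    if sprint <= 2:
        return base
    return min(100.0, base + step * (sprint - 2))

targets = [coverage_target(s) for s in range(1, 7)]
print(targets)  # [20.0, 20.0, 25.0, 30.0, 35.0, 40.0]
```

The “Transformation” team can compare each sprint’s measured coverage against this target and ask “why” whenever the actual number falls below it.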
7. Continuous Integration and Delivery tools
• Pick a common tool for Continuous Integration (CI) and
Continuous Delivery (CD). While it is absolutely fine to
have different tools for CI and CD, keeping them the same helps
in terms of the learning curve and standardizing tools
across the organization.
• Compare tools on the basis of what your project needs and
not only on the basis of someone’s past experience. Factors
like plugin support (e.g. SonarQube support), cost,
Software as a Service vs self-hosted, API driven, GUI,
command-line integration, and training and documentation
along with community support are some important
points to consider.
• Not all features are available in a single tool, therefore
most tools provide an option of executing custom scripts, which
can be in PowerShell, shell, or NAnt. Developers and
Operations need to create these as and when required.
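A custom script of this kind is usually just a wrapper that runs a command and signals success or failure back to the CI tool through its exit code. A minimal sketch in Python (the pytest command is a placeholder assumption; swap in the project’s real test or quality-check runner):

```python
import subprocess
import sys

def run_step(name: str, command: list) -> bool:
    """Run one pipeline step as a child process; return True on success.
    The CI tool only sees this script's exit code, so the caller must
    translate a failure into a non-zero exit."""
    print(f"== {name} ==")
    return subprocess.run(command).returncode == 0

if __name__ == "__main__":
    # Placeholder command; replace with the project's actual runner.
    if not run_step("unit tests", [sys.executable, "-m", "pytest", "-q"]):
        sys.exit(1)  # non-zero exit makes the CI tool mark this step failed
```

The same wrapper can run quality checks, packaging, or any other step the chosen CI tool does not support natively.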
8. Artifact repository
• Build a repository to store all the code builds. The repository
should have a provision for storing builds per environment.
Most CI tools store all the builds on the CI server itself, but
it’s better to keep them off the CI servers (you may run into disk
space issues and have to clean up old builds for the CI process
to run efficiently).
• It can be simple file system storage, but it needs a good amount of
disk space as well as a solid backup strategy.
• Cloud can be a good option to start with if it has not been tried as
yet in the organization. E.g. AWS Simple Storage Service (S3)
provides all the capabilities that are expected from a repository.
• Configure the repository such that every build first goes to the
lowest environment. Only when it is tested and verified should it
move to the next environment. E.g. the build should first
be deployed only to Dev. Once certified that everything is
working as expected, only then move the artifacts to the QA
environment. NONE OF THE ARTIFACTS SHOULD DIRECTLY
BE TARGETED FOR THE PRODUCTION ENVIRONMENT
(NOT EVEN HOTFIXES). IT HAS TO FOLLOW THE CYCLE OF
NON-PROD to PROD.
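A minimal sketch of the per-environment layout on S3, assuming one key prefix per environment; the bucket name, key layout, and zip packaging are illustrative assumptions, not prescribed here:

```python
ENV_ORDER = ["dev", "qa", "prod"]  # every build enters at "dev" and moves up

def artifact_key(env: str, app: str, build_id: str) -> str:
    """Key layout: <env>/<app>/<build_id>.zip"""
    return f"{env}/{app}/{build_id}.zip"

def promotion_source(to_env: str, app: str, build_id: str) -> str:
    """A build may only be promoted from the previous environment's prefix.
    This enforces the NON-PROD-to-PROD cycle: nothing lands in prod
    without first existing in qa, and nothing in qa without dev."""
    i = ENV_ORDER.index(to_env)
    if i == 0:
        raise ValueError("builds enter 'dev' via upload, not promotion")
    return artifact_key(ENV_ORDER[i - 1], app, build_id)

if __name__ == "__main__":
    import boto3  # third-party AWS SDK: pip install boto3
    s3 = boto3.client("s3")
    bucket = "my-artifact-repo"  # hypothetical bucket name
    s3.upload_file("build.zip", bucket, artifact_key("dev", "orders", "1.0.42"))
    # After Dev verification, copy the same build into the QA prefix:
    s3.copy_object(
        Bucket=bucket,
        Key=artifact_key("qa", "orders", "1.0.42"),
        CopySource={"Bucket": bucket, "Key": promotion_source("qa", "orders", "1.0.42")},
    )
```

Because promotion is a copy within the bucket rather than a fresh upload, the artifact deployed to QA is byte-for-byte the one certified in Dev.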
9. Continuous Integration pipeline
[Pipeline diagram: 1. Download custom scripts → 2. Download source code → 3. Build source code → 4. Quality checks → 5. Run automated unit test cases → 6. Upload artifacts. Each stage proceeds on Pass; any Fail feeds back to the team.]
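The CI pipeline above is a sequence of stages with fail-fast feedback. A sketch of that control flow (stage bodies are placeholder lambdas; a real pipeline would call the custom scripts from slide 7):

```python
from typing import Callable, List, Tuple

Stage = Tuple[str, Callable[[], bool]]  # (name, step returning True on success)

def run_pipeline(stages: List[Stage]) -> str:
    """Run stages in order; stop at the first failure and report it
    so the team gets feedback at the exact point of breakage."""
    for name, step in stages:
        if not step():
            return f"FAIL at '{name}'"
    return "PASS"

ci_stages: List[Stage] = [
    ("download custom scripts",  lambda: True),
    ("download source code",     lambda: True),
    ("build source code",        lambda: True),
    ("quality checks",           lambda: True),
    ("run automated unit tests", lambda: True),
    ("upload artifacts",         lambda: True),
]
print(run_pipeline(ci_stages))  # PASS
```

The “Fail” arrows in the diagram correspond to the early return: a failed stage stops the run, so later stages (like uploading artifacts) never execute on a broken build.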
10. Master-Slave for CI
• Every developer check-in should trigger the CI pipeline and
create a build. If there are too many check-ins at almost the
same time, the CI server queues each request and
processes them in the order they were received.
• For a small team this may not be an issue, but for large
teams, waiting for the old build to complete before starting a new
one delays the process of “continuous feedback”.
• Implement master-slave for the CI process. The master
server monitors for all the changes but distributes the
pipeline execution to the slaves. The master keeps track of all
the running jobs on each of the slaves, and all reporting
continues to be from the master.
• This also provides high availability in case a slave node fails.
The master keeps track of all the “active” slaves and distributes
the tasks accordingly.
[Diagram: the Master CI server polls Source Control for changes and sends commands to three Slave nodes.]
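The master’s dispatch decision can be sketched very simply: among the slaves it currently tracks as “active”, send the next pipeline run to the least-loaded one. The bookkeeping structure below is an illustrative assumption, not a real CI server’s data model:

```python
def pick_slave(slaves: dict) -> str:
    """slaves: name -> {"active": bool, "jobs": int}.
    Return the active slave with the fewest running jobs;
    inactive (failed) slaves are skipped entirely."""
    active = {name: s for name, s in slaves.items() if s["active"]}
    if not active:
        raise RuntimeError("no active slaves; master cannot distribute work")
    return min(active, key=lambda name: active[name]["jobs"])

fleet = {
    "slave-1": {"active": True,  "jobs": 2},
    "slave-2": {"active": True,  "jobs": 0},
    "slave-3": {"active": False, "jobs": 0},  # failed node, skipped
}
print(pick_slave(fleet))  # slave-2
```

This is why a slave failure does not stop the pipeline: the master simply stops routing work to the dead node and the remaining slaves absorb the load.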
11. Continuous Delivery/Deployment
• To build the CD pipeline, there are two options:
• Continuous Delivery – CI and CD work independently
• Continuous Deployment – the CI process triggers the CD pipeline
• For the Dev environment, configure Continuous Deployment. Other
environments require a bit more confidence building before we can
go for Continuous Deployment.
• With the CI pipeline, the build was created and uploaded to the Artifact
repository. The CD process starts with downloading the build and ends with
deploying the build to the target environment.
• To keep things simple, there is one CD pipeline per environment, whereas
there is one CI pipeline per application. This is due to the fact that
the build is created irrespective of the environment, whereas the
deployment is done on a separate set of servers with a distinct set of
permissions. E.g. Dev users do not have permission to deploy to the QA
environment.
• Before deployment to the target environment, the CD pipeline
should have a “config transformation” phase where the environment-specific
configs are updated before they are deployed. E.g. update the QA
DB connection strings before deploying to the QA environment.
[Diagram: Continuous Delivery – the Continuous Integration stage completes, then a manual trigger starts the Deployment stage. Continuous Deployment – the Continuous Integration stage auto-triggers the Deployment stage.]
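The “config transformation” phase can be as small as substituting environment-specific values into a config template before deployment. A sketch using the standard library’s `string.Template`; the placeholder name, connection strings, and config format are illustrative assumptions:

```python
import string

# One value set per target environment (illustrative values).
ENV_VALUES = {
    "qa":   {"db_conn": "Server=qa-db;Database=app"},
    "prod": {"db_conn": "Server=prod-db;Database=app"},
}

def transform_config(template_text: str, env: str) -> str:
    """Fill ${...} placeholders in the config with the target
    environment's values, so the same artifact deploys everywhere."""
    return string.Template(template_text).substitute(ENV_VALUES[env])

template = "connectionString=${db_conn}"
print(transform_config(template, "qa"))  # connectionString=Server=qa-db;Database=app
```

Keeping the values outside the artifact is what lets one build serve every environment: the binary never changes, only the substituted config does.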
12. Preparing Infrastructure on Cloud
• To get started on cloud, use the IaaS (Infrastructure as a Service) offering. Create an EC2 instance/VM and follow the wizard
steps (choosing operating system, storage space, memory and CPUs, security group, etc.).
• Create a common root user and password on the servers, or use a common .pem file for all servers.
• The manual steps of creating the servers at this stage help us automate the process of server creation later. For now,
create all the servers (manually) needed for the application. Call this environment a PoC or Dev environment (since it is
for experimentation purposes).
• Install all the pre-requisites on the servers created (e.g. Java runtime, .NET framework, web server, etc.).
• Optionally, take a snapshot of the servers once all pre-requisites are installed. Use this next time instead of building the
servers again.
• To run the application on cloud, you also need database(s). Generate the database scripts, DDL and DML (only the master
data, not the transactional data). E.g. on Amazon Web Services, create an RDS (Relational Database Service) instance, choose the
database server and run the script.
• Note all the public IP addresses of the servers that are created. This information is needed while deploying binaries to the
cloud servers. (Note: every time the servers are restarted, the public IP addresses change; therefore either use Elastic
IPs or update the CD script with the new IP addresses before the CD pipeline is executed.)
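Collecting those public IPs can itself be scripted. A sketch assuming AWS with the boto3 SDK and an `env=dev` tag on the servers (the tag name is my assumption); as noted above, these IPs change on restart unless Elastic IPs are attached:

```python
def public_ips(reservations: list) -> list:
    """Extract public IP addresses from an EC2 DescribeInstances
    response body; instances without a public IP are skipped."""
    ips = []
    for reservation in reservations:
        for inst in reservation.get("Instances", []):
            ip = inst.get("PublicIpAddress")
            if ip:
                ips.append(ip)
    return ips

if __name__ == "__main__":
    import boto3  # third-party AWS SDK: pip install boto3
    resp = boto3.client("ec2").describe_instances(
        Filters=[{"Name": "tag:env", "Values": ["dev"]}]
    )
    print(public_ips(resp["Reservations"]))
```

Running this just before the CD pipeline keeps the deployment scripts pointed at the servers’ current addresses instead of stale hard-coded ones.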
13. Continuous Delivery pipeline
[Pipeline diagram: 1. Download custom scripts → 2. Download build from Artifact repo. → 3. Transform config files → 4. Deploy → 5. Run system tests. Each stage proceeds on Pass; any Fail feeds back to the team.]
14. Enable Synthetic transaction
• Create a standalone pipeline for executing a synthetic transaction (a dummy
transaction that touches all the external endpoints and then voids the transaction).
• The pipeline executes once a day, at a predefined time, ideally just before business
hours.
• It “warms up” the environment as well as provides early feedback if an issue exists
while connecting to third parties or other components within the enterprise.
[Pipeline diagram: Trigger → 1. Download custom scripts → 2. Create dummy record in the database → 3. Execute test case → 4. Clean up database. Each stage proceeds on Pass; any Fail feeds back to the team.]
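The heart of the synthetic transaction is create-check-clean-up, with the clean-up guaranteed even when the check fails. A sketch with an in-memory dict standing in for the real database and endpoints (record key and check are illustrative assumptions):

```python
def synthetic_transaction(db: dict, check) -> bool:
    """Create a dummy record, run the end-to-end check against it,
    and always void the record afterwards so no synthetic data
    lingers in the environment. Returns True if the check passed."""
    db["synthetic-001"] = {"amount": 0}          # 2. create dummy record
    try:
        return bool(check(db["synthetic-001"]))  # 3. execute test case
    finally:
        db.pop("synthetic-001", None)            # 4. clean up database

store: dict = {}
ok = synthetic_transaction(store, lambda rec: rec["amount"] == 0)
print(ok, store)  # True {}
```

The `try/finally` is what makes this safe to run unattended every morning: a failing endpoint produces a failed (feedback-raising) run, not leftover dummy data.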
15. Closing words
• Things have been simple during the “walk” stage; we haven’t used a lot of
industry-standard tools as yet. We tried to work with what we built during the
“crawl” stage.
• The CD pipeline steps are common for on-premise and cloud during this
phase, but that changes during our “run” phase. The intent is to gain
knowledge of cloud (and CI/CD tools) and start deployment with a basic setup.
• Let the “walk” phase run for 1-2 months and gain confidence by doing the
simple things right. Dev and Ops have started working together and have
gained some level of maturity.
• Start preparing for the “Run” stage. Let some more projects start with the
“crawl” stage. Time now to rename our “Transformation” team the “DevOps”
CoE.