A deep dive into how Nine Publishing (formerly Fairfax Media) continuously delivers and integrates new features and bug fixes into its microservices platform.
Deep dive - Concourse CI/CD and Pipelines
1. Deep Dive - Concourse CI/CD & Pipelines
A deep dive into how we continuously deliver and integrate new features and bug fixes into our platform
Syed Imam
DevOps Engineer, Platform Engineering
2. A few things before we start…
● Let’s keep this interruption-free (read: No Device!)
● There’s an attendance sheet going around
● Feel free/don’t bother to take notes
● Jump in anytime for questions
● We also have a separate Q&A session at the end
3. Well, is this as deep as it sounds?
➔ What is Concourse?
➔ Why Concourse?
➔ What does it look like?
➔ What is it made of?
➔ Where do I start?
➔ What if I get stuck?
8. Why Concourse?
● Infrastructure as Code concept
● Means: pipeline defined as code
● Stateless - makes it easier to maintain
● Declarative - pipeline.yaml
● Containers - reusability and reproducibility
● Extensible - add/use as many components as you want
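To make "pipeline defined as code" concrete, a minimal pipeline.yaml might look like the sketch below. The resource and job names, repository URI, and task file path are all illustrative, not our actual configuration:

```yaml
# Hypothetical minimal Concourse pipeline: one git resource, one job.
resources:
  - name: repo
    type: git
    source:
      uri: https://example.com/org/app.git
      branch: master

jobs:
  - name: unit-test
    plan:
      - get: repo
        trigger: true          # run whenever a new commit is detected
      - task: run-tests
        file: repo/ci/tasks/unit-test.yaml
```

Because the whole pipeline lives in a declarative file like this, it can be versioned, reviewed, and reproduced like any other code.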
13. What is it made of?
1. Resources: things that you want to pass into or get out of an operation/job
2. Resource types: each resource has a type (e.g. docker image), which has to be defined in the pipeline
3. Jobs: the name for an actual operation/action
14. What is it made of?
resources and types
● Represent all external input and output to the jobs
● Most of our resources belong to a single type: docker-image
● A few primitives you should be aware of:
○ check - discovers new versions of the resource (e.g. a new commit)
○ input - retrieves the resource at a particular version (e.g. checkout a
git repo at commit X)
○ output - creates a new version of the resource (e.g. a docker image
build outputs to ECR in AWS)
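As an illustration of the primitives above, here is a hypothetical pair of resource declarations (names, URIs, and the ECR repository are made up): `check` polls the git resource for new commits, jobs consume it as an input, and the image resource is written to as an output:

```yaml
resources:
  - name: repo                # `check` polls this for new commits
    type: git
    source:
      uri: https://example.com/org/app.git
      branch: master

  - name: app-image           # written to as an output by a build job
    type: docker-image
    source:
      repository: 123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/app
```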
15. What is it made of?
As we were discussing, the most vital part of a pipeline is the job. Each job has a single build plan, and a plan consists of a sequence of steps:
● get
● put
● aggregate
● task
● do
16. What is it made of?
● job → plan → get (step)
○ What’s the first thing you’d need to do something with something?
○ get fetches a resource as input and
○ makes it available for subsequent tasks
○ It can have attributes (e.g. version, passed, params, trigger)
● job → plan → put (step)
○ What happens after something has been done to something?
○ put pushes output to a resource
○ A successful put is followed by an implicit get
○ Later steps can then use that updated resource
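Putting get and put together, a hypothetical build-and-push job might look like this (the `passed: [unit-test]` constraint and resource names are illustrative assumptions):

```yaml
jobs:
  - name: build-and-push
    plan:
      - get: repo
        trigger: true          # a new commit triggers the job
        passed: [unit-test]    # only versions that passed unit-test
      - task: build
        file: repo/ci/tasks/build.yaml
      - put: app-image         # push the built image; followed by an implicit get
        params:
          build: repo          # directory containing the Dockerfile
```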
18. What is it made of?
● job → plan → aggregate
○ Keyword that declares the underlying sub-steps to
be performed in parallel
○ A single sub-step failure fails the whole
aggregate
● job → plan → do
○ Performs the underlying steps serially
○ Serial execution is the default, unless you
introduce an aggregate
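A sketch of both keywords in one plan (task names and file paths are hypothetical):

```yaml
plan:
  - aggregate:          # sub-steps run in parallel; one failure fails the lot
      - get: repo
      - get: ci-util
  - do:                 # sub-steps run serially (the default ordering)
      - task: lint
        file: repo/ci/tasks/lint.yaml
      - task: test
        file: repo/ci/tasks/test.yaml
```

Note that newer Concourse releases deprecate `aggregate` in favour of `in_parallel`, but the semantics shown here match what the slide describes.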
19. What is it made of?
● job → plan → task (step)
○ Takes inputs from resources and
○ provides output for subsequent tasks
○ Uses (pre)configured Docker image(s)
○ Mostly executes shell scripts
○ Not independent, in the sense that it
cannot be rerun without rerunning the
job it belongs to (tip: try it)
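For reference, a task configuration file of its own (the kind referenced by `file:` in a plan) might look like this; the image, tag, and script are illustrative assumptions:

```yaml
# Hypothetical ci/tasks/unit-test.yaml
platform: linux

image_resource:           # the (pre)configured Docker image the task runs in
  type: docker-image
  source:
    repository: golang
    tag: "1.21"

inputs:
  - name: repo            # made available by an earlier `get: repo` step

run:
  path: sh
  args:
    - -c
    - |
      cd repo
      go test ./...
```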
20. What is it made of?: Shared Tasks
● Shared task repo: <scm-url>/concourse-util/src/master/
● Collection of all shared tasks in a single place
● More generic/fundamental in nature
● Example: slack, kubernetes, git
● New task that could be used by others?
● Talk to maintainers/raise a pull-request
21. What is it made of?
config.yaml and helm_config.yaml
● We aim to keep things as templated as possible
● (Noticed the variables being passed for resources and jobs?)
● One common way to achieve that is to isolate the values
of those variables somewhere other than
pipeline.yaml
● That’s where config.yaml and helm_config.yaml
come into play
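As a sketch of how that separation works, the pipeline references variables with `((...))` placeholders, and a separate values file supplies them (file contents here are hypothetical):

```yaml
# pipeline.yaml - references variables via ((...)) placeholders
resources:
  - name: repo
    type: git
    source:
      uri: ((repo_uri))
      branch: ((branch))

# config.yaml (a separate file) would then supply the values, e.g.:
#   repo_uri: https://example.com/org/app.git
#   branch: master
```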
22. How do I initiate a pipeline?
Go to #tech-ci-ink (or infra) and type
● set-pipeline main <repo-name>
● expose-pipeline main <repo-name>
● unpause-pipeline main <repo-name>
● check-resource main <repo-name> <resource-name> (e.g. repo,
ci-util)
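These Slack commands presumably wrap Concourse's fly CLI. For reference, the direct fly equivalents look roughly like this (the target name `main`, pipeline name, and file paths are illustrative):

```
fly -t main set-pipeline -p <repo-name> -c ci/pipeline.yaml -l ci/config.yaml
fly -t main expose-pipeline -p <repo-name>
fly -t main unpause-pipeline -p <repo-name>
fly -t main check-resource -r <repo-name>/<resource-name>
```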
24. Where do I start?
● Template-driven approach
● Check out the skeleton apps
● skeleton-go-api is relatively up to date
● Raise a pull request for useful changes that’ll help others too
26. What if I get stuck?: Deployment Failure
● Check the logs
○ Especially the CI logs before "Rollback was a success! Happy Helming!"
● Identify exactly what part of helm upgrade/install failed
● It’s good to do a quick review of the helm diff
● A failure will replicate through development → test → staging → production
● Check whether the relevant secrets have been installed in the namespace
○ kubectl -n <namespace> get secrets
● Any issue with the docker pull?
● Missing environment variables?
27. What if I get stuck?: Deployment Failure
● A pod may fail to restart across multiple release versions
○ kubectl -n <namespace> get rs,pods
○ kubectl -n <namespace> delete rs <replicaset>
● Timing out?
○ The default helm install limit is 300s (5 minutes)
○ Rollout time ≈ number of pods (or replicas) × (image download + health-check
pass) / maxSurge
● Solutions
○ Simplify the image (i.e. bare minimum)
○ Increase the maxSurge value
○ Increase the limit: ~/ci/helm_config.yaml
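As a rough worked example of the timing rule above (all numbers are made-up assumptions, not measured values): with 6 pods, ~60s to pull the image, ~40s until the health check passes, and a maxSurge of 2, the rollout needs about 6 × (60 + 40) / 2 = 300s, which sits exactly at helm's default limit:

```python
# Rough estimate of helm rollout time per the rule of thumb above.
# All inputs are illustrative assumptions.

def estimated_rollout_seconds(pods: int, image_pull_s: int,
                              health_check_s: int, max_surge: int) -> float:
    """pods x (image download + health-check pass) / maxSurge"""
    return pods * (image_pull_s + health_check_s) / max_surge

total = estimated_rollout_seconds(pods=6, image_pull_s=60,
                                  health_check_s=40, max_surge=2)
print(total)  # 300.0 -> exactly at helm's 300s default, with no headroom

# Raising maxSurge to 3 cuts the estimate to 200s, comfortably under the limit
print(estimated_rollout_seconds(6, 60, 40, 3))  # 200.0
```

This is why the slide's suggested fixes all work the same lever: a smaller image shrinks the download term, a larger maxSurge divides the total, and raising the limit simply allows more time.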
28. What if I get stuck?
● Just a simple question or query? Post it in the #tech-infr-ci
channel
● (But hey, no one responded the other day!)
● Check /roster pe-batsman to find out the batsman for
any particular day
● Need help setting up a new pipeline or improving an
existing one?
● Raise a JIRA ticket in the Platform Engineering (PE) project with
enough lead time
Continuous Integration: once a feature/bugfix branch is merged to master, tests and a subsequent build are triggered automatically.
Continuous Delivery: Continuous Integration + automated release to all pre-prod environments (Dev, Test, Staging)
Continuous Deployment: Continuous Integration + automated release to all environments (Dev, Test, Staging, Production)
Each job has a single build plan. A plan consists of a sequence of steps.