AWS DevOps Portfolio
- Continuous Integration & Continuous Delivery: AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline
- Infrastructure as Code: AWS CloudFormation, AWS OpsWorks, AWS Config
- Monitoring & Logging: Amazon CloudWatch, AWS CloudTrail
- Platform as a Service: AWS Elastic Beanstalk
AWS CodeCommit
- Use standard Git tools
- Scalability, availability, and durability of Amazon S3
- Encryption at rest with customer-specific keys
(Diagram: git pull/push to CodeCommit over SSH or HTTPS; Git objects stored in Amazon S3; Git index in Amazon DynamoDB; encryption key in AWS KMS)
AWS CodePipeline
- Connect to best-of-breed tools
- Accelerate your release process
- Consistently verify each release
(Diagram, pipeline stages: Source (1. Pull) → Build (1. Build, 2. Unit test) → Beta (1. Deploy, 2. UI test) → Gamma (1. Deploy, 2. Perf test) → Production (1. Deploy canary, 2. Deploy region 1, 3. Deploy region 2))
AWS CodeDeploy
- Easy and reliable deployments
- Scale with ease
- Deploy to any server
(Diagram: CodeDeploy takes application revisions v1, v2, v3 and rolls them out to deployment groups: Dev, Test, Production)
AWS CodeBuild (NEW!)
- Fully managed build service that compiles source code, runs tests, and produces software packages
- Scales continuously and processes multiple builds concurrently
- You can provide custom build environments suited to your needs via Docker images
- Only pay by the minute for the compute resources you use
- Launched with CodePipeline and Jenkins integration
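To make the build-service bullets concrete, here is a minimal sketch of the kind of build specification CodeBuild reads. The real file is YAML (buildspec.yml) at the root of your source; it is modeled here as a Python dict for illustration. The phase names and "commands" structure follow the buildspec format, while the specific commands and file paths are hypothetical.

```python
# A minimal buildspec, modeled as a Python dict. The install/build phase names
# and the "commands" lists mirror the buildspec.yml format; the actual commands
# shown (pip, pytest) are placeholders for your project's build steps.
buildspec = {
    "version": 0.2,
    "phases": {
        "install": {"commands": ["pip install -r requirements.txt"]},
        "build": {"commands": ["pytest tests/", "python setup.py sdist"]},
    },
    "artifacts": {"files": ["dist/*"]},
}

def phase_commands(spec):
    """Flatten the commands the build would run, in phase order."""
    order = ["install", "pre_build", "build", "post_build"]
    cmds = []
    for phase in order:
        cmds.extend(spec["phases"].get(phase, {}).get("commands", []))
    return cmds

print(phase_commands(buildspec))
```

Because the spec is plain data checked into the repository, it is versioned alongside the code it builds.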
What is AWS CloudTrail?
AWS CloudTrail is a fully managed service that records API calls made on your AWS account. CloudTrail helps you gain visibility into API activity, enables you to troubleshoot operational issues, conduct security analysis, and meet internal or external compliance requirements.
Customers are making API calls on a growing set of services around the world, CloudTrail is continuously recording those API calls, and delivering log files to customers.
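The log files CloudTrail delivers are JSON documents with a top-level "Records" array; each record describes one API call. The sketch below parses a trimmed, hypothetical log file; the field names shown (eventTime, eventSource, eventName, awsRegion) follow the CloudTrail record format, but the sample events themselves are invented.

```python
import json

# A trimmed, hypothetical CloudTrail log file. Real files are delivered to S3
# as (gzipped) JSON with a top-level "Records" array.
log_file = json.dumps({
    "Records": [
        {"eventTime": "2016-11-01T12:00:00Z", "eventSource": "ec2.amazonaws.com",
         "eventName": "RunInstances", "awsRegion": "us-east-1"},
        {"eventTime": "2016-11-01T12:00:05Z", "eventSource": "s3.amazonaws.com",
         "eventName": "CreateBucket", "awsRegion": "us-east-1"},
    ]
})

def calls_by_service(raw):
    """Count API calls per service -- the kind of summary a security analysis starts from."""
    counts = {}
    for rec in json.loads(raw)["Records"]:
        svc = rec["eventSource"]
        counts[svc] = counts.get(svc, 0) + 1
    return counts

print(calls_by_service(log_file))
```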
What is Elastic Beanstalk?
AWS Elastic Beanstalk is an easy-to-use service for deploying, scaling, and managing web applications and services.
Editor's Notes
- welcome everyone
- we build the tools that developers inside of Amazon use, as well as a new set of AWS tools that all of customers can use
- today, we're going to talk about DevOps at Amazon, and give you an inside peek at how Amazon develops our web applications and services
- this talk is broken into 2 sections
- first, I'll start with the backstory about Amazon's own DevOps transformation, and the changes that we made to become more agile with our product delivery
- after covering this history, we're going to switch back to the present
- I'm going to introduce 3 new AWS services that give you the same type of tools that we use internally at Amazon
- You should walk away with a high level understanding of the different parts involved with a DevOps transformation, and an idea of how you could use our AWS Code services in your own DevOps processes
DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.
Teams work more efficiently and effectively, becoming more nimble and agile
Using automation to work efficiently and release software more rapidly
You bake reliability and security into your automated practices to ensure your service is always running and infrastructure is in compliance
When you’re growing quickly and moving fast, you need IaC to help you manage your infrastructure at scale. Repeatable processes
Ultimately helps your organization increase its speed and velocity. The end goal is to innovate for your customers faster and become a better business
delivering a product or service in the shortest amount of time by increasing operational efficiency through shared responsibilities—without compromising on quality, reliability, stability, resilience, or security—and doing this in a repeatable fashion for a continuous delivery model
The combination of microservices and increased release frequency means more deployments and more operational challenges
Need ways to release software safely and reliably
Treat infrastructure like you treat application code
Innovations of the cloud – you can treat your entire infrastructure like code. You can access it programmatically using APIs
We have a service that lets you use templates – declare the AWS resources you want provisioned and how you want them provisioned
Then you can check these templates into GitHub and version control them
You can easily replicate environments and share them with others
You save time
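As a concrete illustration of "declare the resources you want", here is a minimal CloudFormation template built as a Python dict and serialized to JSON. "AWS::S3::Bucket" is a real resource type; the logical name "ArtifactBucket" is illustrative. A file like this is exactly what you would commit and version-control alongside your application code.

```python
import json

# A minimal CloudFormation template: one declared S3 bucket, no clicking.
# The logical resource name ("ArtifactBucket") is our own choice; the
# template keys and the resource type are standard CloudFormation.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example: one S3 bucket, declared as code",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
        }
    },
}

body = json.dumps(template, indent=2)
print(body)
```

With credentials configured, this body is what you would hand to CloudFormation (for example as the template body of a create-stack call) to have the resource provisioned for you.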
Monitor logs and metrics to improve application performance and infrastructure performance
When trying to move quickly, you need to understand how changes are impacting your performance
Services need to be on 24/7, so real-time monitoring and analysis becomes really important
You can set automation (alerts, automatic changes, etc.)
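One common form of that automation is a CloudWatch alarm on a metric. The sketch below builds the parameters you would pass to CloudWatch's PutMetricAlarm API (for example via boto3's cloudwatch client); the parameter names follow that API, while the alarm name, threshold, and instance ID are hypothetical.

```python
# Sketch: parameters for a CloudWatch alarm that fires when average EC2 CPU
# stays above a threshold. Parameter names follow the PutMetricAlarm API;
# the instance ID and threshold below are illustrative.
def high_cpu_alarm(instance_id, threshold=80.0):
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,              # evaluate 5-minute averages
        "EvaluationPeriods": 2,     # require two breaching periods in a row
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }

params = high_cpu_alarm("i-0123456789abcdef0")
print(params["AlarmName"])
# With AWS credentials configured, you would then call:
#   boto3.client("cloudwatch").put_metric_alarm(**params)
```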
Shared release pipeline => friction
Coordinate changes, eg lib upgrade
Quick changes? Gotta merge everyone else’s; “merge weeks”
Re-build & re-deploy entire application
Amazon had 1 TEAM for this deployment!
- when you're working with a monolithic app, you have many developers all pushing changes through a shared release pipeline
- this causes frictions at many points of the lifecycle
- upfront during development, engineers need to coordinate their changes to make sure they're not making changes that will break someone else's code
- if you want to upgrade a shared library to take advantage of a new feature, you need to convince everyone else to upgrade at the same time – good luck with that
- and if you want to quickly push an important fix for your feature, you still need to merge it in with everyone else's in process changes
- this leads to "merge Fridays", or worse yet "merge weeks", where all the developers have to merge their changes and resolve any conflicts for the next release
- even after development, you also face overhead when you're pushing the changes through the delivery pipeline
- you need to re-build the entire app, run all of the test suites to make sure there are no regressions, and re-deploy the entire app
- to give you an idea of this overhead, Amazon had a central team whose sole job it was to deploy this monolithic app into production
- even if you're just making a one-line change in a tiny piece of code you own, you still need to go through this heavyweight process and wait to catch the next train leaving the station
- for a fast growth company trying to innovate and compete, this overhead and sluggishness was unacceptable
- the monolith became too big to scale efficiently so we made a couple of big changes
- one was architectural, and the other was organizational
2-pizza -> 8 people; if more, split
Cultural change: teams w/ full autonomy
-> “small startups”
-> working directly with their customers
https://www.flickr.com/photos/ryu1miwa/5147549795/
- in conjunction with breaking apart the architecture, we also broke apart the organization
- we split up the hierarchical org into small teams
- we called them 2-pizza teams, because if they got larger than you could feed with 2 pizzas, we'd break them up
- in reality, the target number is about 8 people per team, so I personally think the 2 pizza goal is maybe a little too frugal
- another important change that went along with this is cultural
- when we split up the org, we gave the teams full autonomy
- they became small startups that owned every aspect of their service
- they worked directly with their customers (internal or external), set their roadmap, designed their features, wrote the code, ran the tests, deployed to production, and operated it
- if there was pain anywhere in the process they felt it
- operational issue in the middle of the night, the team was paged
- lack of tests breaking customers, the team got a bunch of support tickets
- that motivation ensured the team focused on all aspects of the software lifecycle, broke down any barriers between the phases, and made the process flow as efficiently as possible
- we didn't have this term at the time, but this was the start of our "DevOps" culture
Constellation of services, Amazon.com, 2009
Primitives, e.g. display buy button, calculating taxes
Packaged as standalone web service, HTTP interface
Highly decoupled
- we took the monolith and broke it apart into a service oriented architecture
- factored the app into small, focused, single-purpose services, which we call "primitives"
- for example, we had a primitive for displaying the buy button on a product page, and we had one for calculating taxes
- every primitive was packaged as a standalone web service, and got an HTTP interface
- these building blocks only communicated to each other through the web service interfaces
- this created a highly decoupled architecture where these services could be iterated on independently as long as they adhered to their web service interface
- to give you an idea of the scope of these small services, I've included this graphic
- this is the constellation of services that deliver the Amazon.com website back in 2009, 6 years ago
- this term didn't exist back then, but today you'd call this a microservice architecture
New code written fast. Tooling gap!
- these two changes decoupled the teams and made a dramatic improvement to the front end of the lifecycle
- it was very easy for them to make decisions and write new code for their microservice
- but when they went to deploy their code to production, they struggled with trying to handle this themselves
- we had a tooling gap, and the old way of having a central team push out the entire codebase was no longer workable
- that wouldn't scale to be able to serve thousands of different teams with different technologies and release schedules
- to fix this, Amazon started a new central tools team to build a new breed of developer tools
https://www.flickr.com/photos/frostyone1/4415746604/
- these new tools had some unique characteristics
- the tools had to be self-service, because there's no other way to be able to scale to that many customers
- the tools had to be technology agnostic, because the teams chose many different types of platforms and programming languages for their services
- the tools had to encourage best practices, while we allow autonomy, we also want to support shared learning across the teams so everyone can improve
- and of course, in the service-oriented mindset, the tools were delivered as primitive services
- with these new tools, we completed the puzzle
- the teams were decoupled and they had the tools necessary to efficiently release on their own
- what does success look like
- there are a lot of ways that you can measure the process, and no one way is perfect
- but here's one data point
- when you have thousands of independent teams
- producing highly-factored microservices
- that are deployed across multiple dev, test, and production environments
- in a continuous delivery process
- you get a lot of deployments
- at Amazon in 2014, we ran over 50M deployments
- that's an average of 1.5 deployments every second
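The arithmetic behind that claim checks out, as a quick back-of-the-envelope calculation shows:

```python
# 50M deployments in 2014, averaged over the seconds in a year.
deployments = 50_000_000
seconds_per_year = 365 * 24 * 3600
rate = deployments / seconds_per_year
print(round(rate, 2))  # about 1.6 per second, consistent with the "1.5/second" figure
```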
- if developers feel full ownership of their service, they will be motivated to make it better
- if you focus on the customer, the right things will happen naturally
Deliver results
- if teams are accountable for delivering measurable results against strategically selected goals, they will prioritize their efforts accordingly
Optimize response times
- automation makes everything fast, effortless, and reliable
- don't blast an update across your entire production environment, take baby steps to minimize impact of unforeseen problems. rollbacks are just a click away
- monitor extensively to catch issues before customers do. use canaries
Annual planning
- annual planning needs to stack rank existing initiatives along with the newly proposed initiatives
https://www.flickr.com/photos/stevendepolo/5749192025/
- after we tell customers the story of our DevOps transformation, they typically ask us how they can do the same
- I'm not going to over-simplify this, because it is a very complex answer
- this can involve organizational changes, cultural changes, and process changes
- plus there's no one right answer for these
- every company is going to tweak their approach to optimize for their own environment
- but there is one standard thing that every DevOps transformation needs, and that's an efficient and reliable continuous delivery pipeline
- that's the focus for the rest of this talk
- the final service is CodeCommit, where we implemented the Git protocol on top of Amazon S3 storage
- this means from the front-end, it behaves like any other Git source control system
- you'll use the same Git tools and issue the same Git commands that you do today
- on the backend though, we've taken a whole new approach
- rather than use a file-system based architecture, we built CodeCommit on top of Amazon S3 and DynamoDB
- this brings all of their benefits of replicated cloud-based storage, plus some interesting bonus features
- one of those is that CodeCommit automatically encrypts all repositories using customer-specific keys
- this means that every customer will have their repositories encrypted differently in S3
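Since CodeCommit speaks the ordinary Git protocol, working with it is just a matter of pointing your Git tools at the repository endpoint. The sketch below builds a CodeCommit HTTPS clone URL following CodeCommit's documented URL scheme; the region and repository name are illustrative.

```python
# CodeCommit exposes each repository over HTTPS (and SSH) to standard Git
# clients. The URL scheme below follows the CodeCommit HTTPS endpoint format;
# region and repository name here are placeholders.
def clone_url(region, repo_name):
    return f"https://git-codecommit.{region}.amazonaws.com/v1/repos/{repo_name}"

url = clone_url("us-east-1", "my-service")
print(url)
# Then use your everyday Git workflow, e.g.:
#   git clone <url>
#   git push / git pull as usual
```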
- the next service is CodePipeline, which was inspired by our internal Pipelines service
- it allows you to completely model out your custom software release process
- you specify how you want your new code changes built and unit tested, how they should be deployed to pre-production test environments, how they should be validated with functional and performance tests, and ultimately how they should roll out to production
- you have complete control over the end-to-end workflow and how each step is performed
- you can connect to an AWS service like CodeDeploy, your own custom server like Jenkins, or even an integrated partner tool like GitHub
- it's completely extensible and allows anyone to plug in
- one of the great things about CodePipeline is how it integrates our large ecosystem of developer tool partners
- you'll see in the upcoming demo how easy it is to discover and connect to these partner services, and include them as a step in your own release process
- after you set up your automated release workflow, you're free to push changes as often as you like
- CodePipeline will automatically marshal your code changes through your process as quickly as it can, while ensuring that they go through all of your quality checks
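The release process described above can be sketched as an ordered list of stages, each with its own actions. The stage and action names below mirror the example pipeline from the slides (Source, Build, Beta, Gamma, Production); the data structure is a simplification for illustration, not CodePipeline's exact API shape.

```python
# A simplified model of the example pipeline: ordered stages, each with actions
# that must all succeed before a change moves on.
pipeline = [
    ("Source",     ["Pull"]),
    ("Build",      ["Build", "Unit test"]),
    ("Beta",       ["Deploy", "UI test"]),
    ("Gamma",      ["Deploy", "Perf test"]),
    ("Production", ["Deploy canary", "Deploy region 1", "Deploy region 2"]),
]

def run(pipeline, execute):
    """Marshal a change through every stage; stop at the first failing action."""
    for stage, actions in pipeline:
        for action in actions:
            if not execute(stage, action):
                return f"Failed at {stage}:{action}"
    return "Released"

# A change that passes every quality check reaches production:
print(run(pipeline, lambda stage, action: True))
```

The useful property this models is the one in the notes: a change only rolls forward as far as its quality checks allow, so a failed performance test in Gamma never reaches production.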
- the first service I'd like to introduce is CodeDeploy
- CodeDeploy is the externalization of our internal Apollo service, and it enables you to deploy just like Amazon
- you specify what version of your application to install on what group of servers, and CodeDeploy coordinates that rollout for you
- it has the same rolling update feature to deploy without downtime
- it has the health tracking feature to stop bad deployments before they take down your application
- all you do is define how to install your application on a single machine, and CodeDeploy can scale that across a fleet of hundreds of servers
- when we launched CodeDeploy, it only supported deployments to Amazon EC2 instances
- but earlier this year, we released support for on premises deployments
- this allows you to deploy to servers in your private data center, as well as VMs in other clouds
- as long as the machine can run our agent and make calls to our public service endpoint, you can deploy to it
- this means you can have a single tool to centralize the deployment for all of your applications to all of your different environments
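"Define how to install your application on a single machine" concretely means writing an AppSpec file (appspec.yml) that ships with your revision. The sketch below models one as a Python dict; the hook names (BeforeInstall, ApplicationStart, ValidateService) are real lifecycle events in the EC2/on-premises AppSpec format, while the file paths and script names are hypothetical.

```python
# An AppSpec for CodeDeploy, modeled as a dict (the real artifact is YAML).
# "hooks" maps lifecycle events to scripts CodeDeploy runs on each instance;
# the script paths below are placeholders for your own deployment scripts.
appspec = {
    "version": 0.0,
    "os": "linux",
    "files": [{"source": "/build/output", "destination": "/var/www/my-service"}],
    "hooks": {
        "BeforeInstall":    [{"location": "scripts/stop_server.sh", "timeout": 60}],
        "ApplicationStart": [{"location": "scripts/start_server.sh", "timeout": 60}],
        "ValidateService":  [{"location": "scripts/health_check.sh", "timeout": 120}],
    },
}

hook_order = ["BeforeInstall", "ApplicationStart", "ValidateService"]
scripts = [appspec["hooks"][h][0]["location"] for h in hook_order]
print(scripts)
```

Because the AppSpec describes a single machine, the same file scales unchanged from one dev box to a fleet of hundreds of servers.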
Here are the key benefits of CloudFormation
Automation is obviously one of the key benefits of CloudFormation: automated creation, update, and deletion of your application or infrastructure
But more powerful is to use it to manage all your infrastructure: commit, version, and roll back templates just as with application code, so you can track changes and test them extensively before using them in production
Creation is atomic, so you get deterministic behavior: either your application started up successfully or it didn't, but you don't have any orphaned resources floating around
The templates can be used as blueprints inside or across organizations, so you can share or enforce best practices
Some softer advantages: CloudFormation is highly configurable, closely integrated with all AWS services, allows you to follow a modular approach to infrastructure management and provisioning, and lets you get started quickly compared to selecting the right services and putting something together yourself
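The "templates as blueprints" point becomes clearer with a Parameters section, which lets different teams or environments reuse one template with different inputs. In the sketch below, the Parameters/Resources/Ref structure is standard CloudFormation; the parameter, topic, and naming choices are illustrative.

```python
import json

# A parameterized CloudFormation template: one blueprint, many stacks.
# "AWS::SNS::Topic" is a real resource type; the parameter name and the
# "alerts-<env>" naming convention are our own illustrative choices.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "Environment": {"Type": "String", "AllowedValues": ["dev", "test", "prod"]},
    },
    "Resources": {
        "AlertTopic": {
            "Type": "AWS::SNS::Topic",
            "Properties": {
                # Ref substitutes the parameter at stack-creation time, so
                # dev, test, and prod stacks each get a distinct topic name.
                "TopicName": {"Fn::Join": ["-", ["alerts", {"Ref": "Environment"}]]},
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

Creating the same template once with Environment=dev and once with Environment=prod yields two independent, identically shaped stacks, which is exactly the repeatable-environments benefit described above.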