This session teaches you how to architect scalable, highly available, and secure applications on AWS. In this session, we cover the differences between traditional and cloud-based availability, how to apply AWS availability options to workloads, architectural design patterns for automating fault tolerance, and examples of highly available architectures.
2. Architecting Highly Available Applications on AWS
• ME: Alex Sinner – AWS Solutions Architect
• YOU: Here to learn more about running highly available, scalable applications on AWS
• TODAY: about best practices and things to think about when building for large scale
12. Regions
US-WEST (Oregon)
EU-WEST (Ireland)
ASIA PAC (Tokyo)
US-WEST (N. California)
SOUTH AMERICA (Sao Paulo)
US-EAST (Virginia)
AWS GovCloud (US)
ASIA PAC (Sydney)
ASIA PAC (Singapore)
CHINA (Beijing)
13. Availability Zones
US-WEST (Oregon)
EU-WEST (Ireland)
ASIA PAC (Tokyo)
US-WEST (N. California)
SOUTH AMERICA (Sao Paulo)
US-EAST (Virginia)
AWS GovCloud (US)
ASIA PAC (Sydney)
ASIA PAC (Singapore)
CHINA (Beijing)
17. Day One, User One
• A single EC2 Instance
– With full stack on this host
• Web app
• Database
• Management
• Etc.
• A single Elastic IP
• Route53 for DNS
[Diagram: User → Amazon Route 53 → Elastic IP → EC2 instance]
18. “We’re gonna need a bigger box”
• Simplest approach
• Can now leverage PIOPs
• High I/O instances
• High memory instances
• High CPU instances
• High storage instances
• Easy to change instance sizes
• Will hit an endpoint eventually
[Diagram: m1.small → m3.xlarge → i2.4xlarge]
20. Day One, User One:
• We could potentially get to a few hundred to a few thousand users depending on application complexity and traffic
• No failover
• No redundancy
• Too many eggs in one basket
[Diagram: User → Amazon Route 53 → Elastic IP address → EC2 instance]
22. Day Two, User >1:
First, let’s separate out our single host into more than one:
• Web
• Database
– Make use of a database service?
[Diagram: User → Amazon Route 53 → Elastic IP address → Web instance → Database instance]
23. Database Options: Self-Managed to Fully-Managed
• Database server on Amazon EC2 – your choice of database running on Amazon EC2; Bring Your Own License (BYOL)
• Amazon RDS – Microsoft SQL Server, Oracle, MySQL, or PostgreSQL as a managed service; flexible licensing (BYOL or License Included)
• Amazon DynamoDB – managed NoSQL database service using SSD storage; seamless scalability; zero administration
• Amazon Redshift – massively parallel, petabyte-scale data warehouse service; fast, powerful, and easy to scale
24. But how do I choose what DB technology I need? SQL? NoSQL?
27. Why start with SQL?
• Established and well-worn technology
• Lots of existing code, communities, books, background, tools, etc.
• You aren’t going to break SQL DBs in your first 10 million users. No really, you won’t*.
• Clear patterns to scalability
* Unless you are manipulating data at MASSIVE scale; even then, SQL will have a place in your stack
28. AH HA! You said “massive amounts”; I will have massive amounts!
29. If your usage is such that you will be generating several TB (>5) of data in the first year OR have an incredibly data-intensive workload… you might need NoSQL
30. Regardless, why NoSQL?
• Super low latency applications
• Metadata driven datasets
• Highly non-relational data
• Need schema-less data constructs*
• Massive amounts of data (again, in the TB range)
• Rapid ingest of data ( thousands of records/sec )
• Already have skilled staff
*Need != “it is easier to do dev without schemas”
31. Amazon DynamoDB
• Managed, provisioned-throughput NoSQL database
• Fast, predictable performance
• Fully distributed, fault-tolerant architecture
• Considerations for non-uniform data
Feature – Details
• Provisioned throughput – dial provisioned read/write capacity up or down
• Predictable performance – average single-digit millisecond latencies from SSD-backed infrastructure
• Strong consistency – be sure you are reading the most up-to-date values
• Fault tolerant – data replicated across Availability Zones
• Monitoring – integrated with Amazon CloudWatch
• Secure – integrates with AWS Identity and Access Management (AWS IAM)
• Amazon EMR – integrates with Amazon EMR for complex analytics on large datasets
32. But back to the main path… Let’s see how far SQL at the core can grow
33. User >100
First let’s separate out our single host into more than one:
• Web
• Database
– Use RDS to make your life easier
[Diagram: User → Amazon Route 53 → Elastic IP → Web instance → RDS DB instance]
34. User >1000
Next let’s address our lack of failover and redundancy issues:
• Elastic Load Balancing
• Another web instance
– In another Availability Zone
• Enable Amazon RDS Multi-AZ
[Diagram: User → Amazon Route 53 → Elastic Load Balancing → web instances in two Availability Zones → RDS DB Instance Active (Multi-AZ) with a Standby in the other Availability Zone]
35. Elastic Load Balancing
• Create highly scalable applications
• Distribute load across EC2 instances in multiple Availability Zones
Feature – Details
• Available – load balances across instances in multiple Availability Zones
• Health checks – automatically checks the health of instances and takes them in or out of service
• Session stickiness – routes requests to the same instance
• Secure Sockets Layer – supports SSL offload from web and application servers with flexible cipher support
• Monitoring – publishes metrics to CloudWatch
37. User >10k–100k
[Diagram: User → Amazon Route 53 → Elastic Load Balancing → multiple web instances in each of two Availability Zones → RDS DB Instance Active (Multi-AZ) with a Standby and several Read Replicas]
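Read replicas only help if the application actually routes reads to them. A minimal routing sketch, with a round-robin pool of replica endpoints (all hostnames here are hypothetical, and this is an illustration of the pattern, not an AWS API):

```python
import itertools

class RoutingConnectionPool:
    """Sends writes to the primary and spreads reads across read replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)  # round-robin over replicas

    def endpoint_for(self, sql):
        # Crude classification: only SELECTs go to a replica;
        # everything else must hit the write master.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._replicas)
        return self.primary

pool = RoutingConnectionPool(
    primary="mydb-primary.example.com",
    replicas=["replica-1.example.com", "replica-2.example.com"],
)
print(pool.endpoint_for("INSERT INTO users VALUES (1)"))  # primary endpoint
print(pool.endpoint_for("SELECT * FROM users"))           # one of the replicas
```

One caveat worth remembering: replicas lag the master slightly, so reads that must observe a write the same request just made should still go to the primary.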
38. Shift some load around:
Let’s lighten the load on our web and database instances:
• Move static content from the web instance to Amazon S3 and CloudFront
• Move dynamic content from the load balancer to CloudFront
• Move session/state and DB caching to ElastiCache or Amazon DynamoDB
[Diagram: User → Amazon Route 53 → Amazon CloudFront → Elastic Load Balancer and Amazon S3; web instance backed by RDS DB Instance Active (Multi-AZ), ElastiCache, and Amazon DynamoDB]
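The DB-caching move in this step follows the classic cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache. A sketch of that flow, with a plain dict standing in for ElastiCache and a hypothetical query function standing in for RDS:

```python
import time

cache = {}          # stand-in for ElastiCache (memcached or Redis)
CACHE_TTL = 60      # seconds before a cached entry is considered stale

def query_database(key):
    # Stand-in for an expensive query against the RDS instance.
    return f"row-for-{key}"

def get_with_cache(key):
    entry = cache.get(key)
    if entry is not None:
        value, stored_at = entry
        if time.time() - stored_at < CACHE_TTL:
            return value                    # cache hit: DB untouched
    value = query_database(key)             # cache miss: go to the DB...
    cache[key] = (value, time.time())       # ...then populate the cache
    return value

print(get_with_cache("user:42"))  # first call misses and fills the cache
print(get_with_cache("user:42"))  # second call is served from the cache
```

With a real ElastiCache client the dict operations become get/set calls against the cache cluster endpoint, but the hit/miss/populate logic is the same.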
41. Auto Scaling
Automatic resizing of compute clusters based on demand.
[Diagram: Amazon CloudWatch metrics trigger an Auto Scaling policy]
• Control – define minimum and maximum instance pool sizes and when scaling and cooldown occur
• Integrated with Amazon CloudWatch – use metrics gathered by CloudWatch to drive scaling
• Instance types – run Auto Scaling for On-Demand and Spot Instances; compatible with VPC

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name MyGroup \
  --launch-configuration-name MyConfig \
  --min-size 4 \
  --max-size 200 \
  --availability-zones us-west-2c
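Behind the CLI command above, a scaling policy is essentially a rule that maps a CloudWatch metric to a desired instance count, clamped between the group's min and max sizes. A toy illustration of that clamping logic (this is not an AWS API, just the decision rule, with the thresholds picked arbitrarily):

```python
def desired_capacity(current, avg_cpu, min_size=4, max_size=200,
                     scale_out_at=70, scale_in_at=30):
    """Very simplified scaling rule: add an instance above the high
    threshold, remove one below the low threshold, never leave bounds."""
    if avg_cpu > scale_out_at:
        current += 1
    elif avg_cpu < scale_in_at:
        current -= 1
    return max(min_size, min(max_size, current))

print(desired_capacity(4, avg_cpu=85))    # scale out: 4 -> 5
print(desired_capacity(4, avg_cpu=10))    # would scale in, but min-size holds at 4
print(desired_capacity(200, avg_cpu=95))  # max-size caps the group at 200
```

Real Auto Scaling adds cooldown periods and step adjustments on top of this, but the min/max clamping behaves exactly like the bounds in the CLI flags.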
42. Auto Scaling can scale from
one instance to thousands
and back down
43. User >500k+:
[Diagram: User → Amazon Route 53 → Amazon CloudFront → Amazon S3 and Elastic Load Balancing → web instances in two Availability Zones, each zone with ElastiCache and RDS Read Replicas, backed by RDS DB Instance Active (Multi-AZ) with a Standby, plus Amazon DynamoDB]
44.
45. Use Tools:
Managing your infrastructure will take up an ever-increasing share of your time. Use tools to automate repetitive tasks.
• Tools to manage AWS resources
• Tools to manage software and configuration on your
instances
• Automated data analysis of logs and user actions
46. AWS Application Management Solutions
From convenience (higher-level services) to control (do it yourself):
• AWS Elastic Beanstalk
• AWS OpsWorks
• AWS CloudFormation
• Amazon EC2
47. User >500k+:
You’ll potentially start to run into issues with the speed and performance of your applications:
• Have monitoring/metrics/logging in place
– If you can’t build it internally, outsource it! (3rd-party SaaS)
• Pay attention to what customers are saying works well
• Squeeze as much performance as you can out of each service/component
51. AWS Marketplace & Partners Can Help
• Customers can find, research, and buy software
• Simple pricing that aligns with the Amazon EC2 usage model
• Launch in minutes
• AWS Marketplace billing integrated into your AWS account
• 1300+ products across 20+ categories
Learn more at: aws.amazon.com/marketplace
55. SOA’ing
Move services into their own tiers/modules. Treat each of these as 100% separate pieces of your infrastructure and scale them independently.
Amazon.com and AWS do this extensively! It offers flexibility and a greater understanding of each component.
56. Loose coupling sets you free!
• The looser they're coupled, the bigger they scale
– Independent components
– Design everything as a black box
– Decouple interactions
– Favor services with built-in redundancy and scalability rather than building your own
[Diagram: Tight coupling – Controller A calls Controller B directly. Loose coupling – Controller A → queue → Controller B; use Amazon SQS for buffers.]
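The queue-as-buffer idea fits in a few lines. In this sketch Python's standard queue.Queue stands in for Amazon SQS so the example runs locally; the producer never calls the consumer directly, which is exactly the decoupling the slide describes:

```python
import queue
import threading

buffer = queue.Queue()  # stand-in for an Amazon SQS queue

def controller_a(jobs):
    # Producer: enqueues work and moves on; never calls B directly.
    for job in jobs:
        buffer.put(job)

def controller_b(results):
    # Consumer: drains the queue at its own pace.
    while True:
        job = buffer.get()
        if job is None:          # sentinel value: no more work
            break
        results.append(f"processed:{job}")

results = []
worker = threading.Thread(target=controller_b, args=(results,))
worker.start()
controller_a(["resize-img-1", "resize-img-2"])
buffer.put(None)                 # signal shutdown
worker.join()
print(results)
```

Because A and B only share the queue, either side can be scaled, restarted, or slowed down independently; with SQS the queue itself is also durable and redundant, which an in-memory queue is not.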
57. Loose coupling + SOA = winning
Examples:
• Email
• Queuing
• Transcoding
• Search
• Databases
• Monitoring
• Metrics
• Logging
Amazon CloudSearch, Amazon SQS, Amazon SNS, Amazon Elastic Transcoder, Amazon SWF, Amazon SES
In the early days, if someone has a service for it already,
opt to use that instead of building it yourself.
DON’T RE-INVENT THE WHEEL
58. On re-inventing the wheel…
If you find yourself writing your own: queue, DNS server, database, storage system, or monitoring tool…
61. Users > 1 Million
Reaching a million users and above is going to require a bit of everything covered so far:
• Multi-AZ
• Elastic Load Balancing between tiers
• Auto Scaling
• Service-oriented architecture
• Serving content smartly (S3/CloudFront)
• Caching off DB
• Moving state off tiers that autoscale
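"Moving state off tiers that autoscale" means a web instance must be able to disappear without losing anyone's session. A sketch of the idea, with a shared dict standing in for a DynamoDB or ElastiCache session table (class and field names are made up for illustration):

```python
import uuid

session_store = {}  # stand-in for a shared DynamoDB/ElastiCache session table

class WebInstance:
    """A stateless web instance: all session data lives in the shared store,
    never in instance memory."""

    def login(self, user):
        sid = str(uuid.uuid4())
        session_store[sid] = {"user": user, "cart": []}
        return sid

    def add_to_cart(self, sid, item):
        session_store[sid]["cart"].append(item)

# Instance A handles the login; instance B (a different autoscaled host)
# can still serve the same session, because no state lives on A.
a, b = WebInstance(), WebInstance()
sid = a.login("alice")
b.add_to_cart(sid, "book")
print(session_store[sid]["cart"])  # ['book']
```

Once sessions live in an external store, Auto Scaling can terminate any web instance at any time without logging users out, which is what makes the web tier safe to scale in lightweight chunks.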
62. Users > 1 Million
[Diagram: User → Amazon Route 53 → Amazon CloudFront → Amazon S3 and Elastic Load Balancer → web instances backed by RDS DB Instance Active (Multi-AZ) with Read Replicas and ElastiCache; Amazon SQS feeding worker instances; internal app instances; Amazon DynamoDB, Amazon SES, and Amazon CloudWatch alongside]
64. From 5 to 10 Million Users
You may start to run into issues with your database around contention on the write master.
How can you solve it?
• Federation – splitting into multiple DBs based on function
• Sharding – splitting one data set up across multiple hosts
• Moving some functionality to other types of DBs (NoSQL)
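The difference between the first two options can be made concrete. Federation routes by function (separate databases for users, products, forums), while sharding routes rows of one data set by key. A hash-based shard router sketch, with hypothetical endpoint names:

```python
import hashlib

SHARDS = ["users-shard-0.example.com",
          "users-shard-1.example.com",
          "users-shard-2.example.com"]

def shard_for(user_id):
    """Map a key to a shard with a stable hash, so the same user
    always lands on the same host."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Federation, by contrast, is just a static function -> database map:
FEDERATED = {"users": "users-db.example.com",
             "products": "products-db.example.com"}

print(shard_for(12345))       # deterministic: always the same shard for this user
print(FEDERATED["products"])  # all product queries go to the products DB
```

One design caveat: plain modulo hashing reshuffles almost every key when the shard count changes, which is why production sharding schemes often use consistent hashing or a lookup directory instead.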
67. Review
• Multi-AZ your infrastructure
• Make use of self-scaling services
– Elastic Load Balancing, Amazon S3, Amazon SNS, Amazon SQS, Amazon SWF, Amazon SES, etc.
• Build in redundancy at every level
• Most likely start with SQL
• Cache data both inside and outside your infrastructure
• Use automation tools in your infrastructure
68. Review (cont)
• Make sure you have good metrics/monitoring/logging tools in place
• Split tiers into individual services (SOA)
• Use Auto Scaling when you’re ready for it
• Don’t reinvent the wheel
• Move to NoSQL when it really makes sense, but do your best not to administer it yourself
69. Putting all this together means we should now easily be able to handle 10+ million users!
Introduce yourself, who the crowd is, and our goal for today
So we are going to start small with a single user system and go through different steps in the evolution of that system so that it can deal with millions of users.
Scaling is a big topic, with lots of opinions, guides, how-tos, and 3rd parties. If you are new to scaling on AWS, you might ask yourself this question: “So how do I scale?”
“And if you are like most people, it’s really hard to know where to start. Again, there are all these resources, Twitter-based experts, and blog posts preaching to you how to scale”… “so again, where do we start?”
If you are like me, you’ll start where I usually start when I want to learn how to do something. A search engine. In this case I’ve gone and searched for “scaling on AWS” using my favorite search engine.
It’s important to note something about the results here. First off, there are a lot of things to read. This search was from a few months ago, and there were almost a million posts on how to scale on AWS.
Unfortunately for us and our search engine here, however, the first response back is actually not what we are looking for. Auto Scaling IS an AWS service, and it’s great, but…
“Auto Scaling is a tool and a destination for your infrastructure. It isn’t a single thing. It’s not a checkbox you can click when launching something. Your infrastructure really has to be built with the right properties in mind for Auto Scaling to work.”… “So again, where do we start?”
What do we need first?
We need some basics to lay the foundations we’ll need to build our knowledge of AWS on top of.
First up we have AWS regions. This is the most macro level concept we have at AWS today. ( describe regions, their number, locations ).
https://aws.amazon.com/about-aws/globalinfrastructure
Next up we have Availability Zones; these are part of our regions and exist within them. There will be at a minimum two of these in every region, and generally speaking your infrastructure will live in one or more AZs inside a given region. We’ll be talking a lot about Multi-AZ architectures today, as they’re a core component of having a highly available, highly redundant, and highly durable infrastructure on AWS.
Focus on the importance of AZs for HA architecture
We have over 30 services, and we are going to cover some today
Cover different layers of service groups, such as networking, compute, databases, storage, higher level application services, etc…
Consider this as your toolbox to build highly available, scalable systems.
So let’s start from day one, user one, of our new infrastructure and application.
This here is the most basic set up you would need to serve up a web application. We have Route53 for DNS, an EC2 instance running our webapp and database, and an Elastic IP attached to the EC2 instance so Route53 can direct traffic to us. Now in scaling this infrastructure, the only real option we have is to get a bigger EC2 instance…
Scaling the one EC2 instance we have to a larger one is the simplest approach to start with. There are a lot of different AWS instance types to go with depending on your workload. Some have high I/O, CPU, memory, or local storage. You can also make use of EBS-Optimized instances and Provisioned IOPS to help scale the storage for this instance quite a bit.
So while we could potentially reach a few hundred or a few thousand users supported by this single instance, it’s not a long-term play.
We also have to consider some other issues with this infrastructure: no failover, no redundancy, and too many eggs in one basket, since we have both the database and the webapp on the same instance.
The first thing we can do to address the issue of too many eggs in one basket, and to overcome the “no bigger boat” problem, is to split out our webapp and database into two instances. This gives us more flexibility in scaling these two things independently. And since we are breaking out the database, this is a great time to think about maybe making use of a database service instead of managing this ourselves…
Section A [end]
At AWS there are a lot of different options for running databases. One is to just install pretty much any database you can think of on an EC2 instance and manage all of it yourself. If you are really comfortable doing DBA-like activities, like backups, patching, security, and tuning, this could be an option for you.
If not, then we have a few options that we think are a better idea:
First is Amazon RDS, or Relational Database Service. With RDS you get a managed database instance of either MySQL, PostgreSQL, Oracle, or SQL Server, with features such as automated daily backups, simple scaling, patch management, snapshots and restores, high availability, and read replicas, depending on the engine you go with.
Next up we have DynamoDB, a NoSQL database built on top of SSDs. DynamoDB is based on the Dynamo whitepaper published by Amazon.com back in 2007, considered the grandfather of most modern NoSQL databases like Cassandra and Riak. The DynamoDB that we have here at AWS is kind of like a cousin of the original paper. One of the key concepts of DynamoDB is what we call “Zero Administration”. With DynamoDB the only knobs to tweak are the reads and writes per second you want the DB to be able to perform at. You set it, and it will give you that capacity, with query responses averaging single-digit milliseconds. We’ve had customers with loads such as half a million reads and writes per second without DynamoDB even blinking.
Lastly we have Amazon Redshift, a multi-petabyte-scale data warehouse service. With Redshift, much like most AWS services, the idea is that you can start small and scale as you need to, while only paying for the scale you are at. What this means is that you can store a terabyte of data for less than a thousand dollars per year with Redshift. This is several times cheaper than most other data warehouse providers’ costs, and again, you can scale and grow as your business dictates without needing to sign an expensive contract upfront.
Given that we have all these different options, from running pretty much anything you want yourself, to making use of one of the database services AWS provides, how do you choose? How do you decide between SQL and NoSQL?
Read as is
So why start with SQL databases? Generally speaking, SQL-based databases are established and well-worn technology. There’s a good chance SQL is older than most people in this room. It has, however, continued to power most of the largest web applications we deal with on a daily basis. There is a lot of existing code, and there are books, tools, communities, and people who know and understand SQL. Some of these newer NoSQL databases might have a handful, tops, of companies using them at scale. You also aren’t going to break SQL databases in your first 10 million users. And yes, there is an asterisk here, and we’ll get to that in a second. Lastly, there are a lot of clear patterns for scalability that we’ll discuss a bit throughout this talk. So as for my point here at the bottom: I again strongly recommend SQL-based technology, unless your application is doing something SUPER weird with the data, or you’ll have MASSIVE amounts of it; even then, SQL will be in your stack.
AH HA! you say. I said “massive amounts”, and we all assume we’ll have massive amounts, so that means you must be the lone exception in this room… well, let’s clarify this a bit.
If your usage is such that you will be generating several terabytes (greater than 5) of data in the first year, OR you will have an incredibly data-intensive workload, then you might need NoSQL.
So why else might you need NoSQL? There are definitely use cases where it makes sense to go NoSQL right off the bat. Some examples:
Super low latency applications.
Metadata-driven datasets.
Highly non-relational data.
Kind of going along with the previous one is where you really NEED schema-less data constructs. And let’s highlight the word NEED here. This isn’t just developers saying it’s easy to make apps without schemas. That’s just laziness.
Massive amounts of data, again from the previous slide, in the several-TB range.
Rapid ingest of data, where you need to ingest potentially thousands of records per second into a single dataset.
You already have staff skilled in NoSQL (and can hire more as you scale).
Talk about DynamoDB in the sense that using a managed solution takes away the operational burden at scale
Read this slide…
So for this scenario today, we’re going to go with RDS and MySQL as our database engine.
Next up we need to address the lack of failover and redundancy in our infrastructure. We’re going to do this by adding in another webapp instance, and enabling the Multi-AZ feature of RDS, which will give us a standby instance in a different AZ from the Primary. We’re also going to replace our EIP with an Elastic Load Balancer to share the load between our two web instances
For those who aren’t familiar yet with ELB (Elastic Load Balancing), it is a highly scalable load balancing service that you can put in front of tiers of your application where you have multiple instances that you want to share load across. ELB is a really great service, in that it does a lot for you without you having to do much. It will create a self-healing, self-scaling load balancer that can do things such as SSL termination, handle sticky sessions, and have multiple listeners. It will also do health checks back to the instances behind it, and put a bunch of metrics into CloudWatch for you as well. This is a key service in building highly available infrastructures on AWS.
Read this slide.
Most of you will get to this point and be pretty well off honestly. You can take this really pretty far for most web applications. We could scale this out over another AZ maybe.
Add in another tier of read replicas.
Imagine for instance if you cached the search pages for highly requested queries. This could take load off your search, off your web application, your database, etc. So now we can see here that we’ve got CloudFront in front of both S3 and our ELB. Now that we’ve got that covered, lets move back to the session information, and database queries we can be caching as well.
Section [begin]
Read slide
Read slide
Talk about auto-scaling.
Read slide.
If we add in Auto Scaling, our caching layer (both inside and outside our infrastructure), and the read replicas with MySQL, we can now handle a pretty serious load. This could potentially even get us into the millions of users by itself if we continued to scale it horizontally and vertically. By the way, introducing Auto Scaling at low user counts is as beneficial as at high user counts, once your web layer is scalable in lighter-weight chunks.
Section end
Read slide
Section begin
Discuss lightly pros/cons of each.
Elastic Beanstalk is the easiest to start with, but offers less control. OpsWorks gives you more tools, with a bit more work on your part. CloudFormation is a template-driven tool with its own template language, so there is a bit of a learning curve, but it is very, very powerful. Lastly, you could do all this manually, but at scale it’s nearly impossible without a huge team.
Read slide
Pay attention to what your metrics say to you. Host-level metrics are great for deep diving on problems, but aggregate-level metrics will be more valuable as a bigger picture of what is going on with your infrastructure. Log analysis is also very much needed, and incredibly powerful to have in your infrastructure. Don’t skimp on it. Log everything centrally. Lastly we have external site metrics. It’s amazing how many people don’t think about this last one. You need to understand how your site is performing from the view of your end users. (The top two are from CloudWatch, the bottom left is from Kibana/Logstash, the bottom right is Pingdom.)
Read slide
Section A [end]
Read slide, talk about how awesome the marketplace is to find the kind of tools you need to help you scale.
Section end
Section B [begin]
We can go even further than what we have so far. Up to now we’ve had just a single webapp tier doing all of our application workload. While that works for some sites and applications, for many it doesn’t. Which brings us on to our next topic…
Say nothing, go quick from this slide to next one.
Service Oriented Architecture!
Read slide, sum up SOA, and mention that Amazon.com and AWS have hundreds of services under the hood that represent the sites and services you see. It’s a core principle in application/service development at Amazon.
Talk about loose coupling and how it pertains to SOA architectures. Describe the SQS as a buffer example.
Combining Loose coupling, SOA, and prebuilt services, can also really have some huge advantages. Instead of writing all these mini services yourself, try and leverage already existing services and applications, especially when you are starting out. DON’T REINVENT THE WHEEL! For example, at AWS we have services to help you with Email, Queues, Transcoding, Search, Databases, and Monitoring and Metrics. Lean on other 3rd parties for more.
Read slide.
Read slide
Read slide
This diagram is missing the other AZ, but we’ve only got so much room on the slide. We can see we’ve added in some internal pools for different tasks, perhaps. Maybe we’re now using SQS for something, and have SES for sending our outbound email. Again our users will still talk to Route 53, and then to CloudFront, to get to our site and our content hosted behind our ELB and on S3.