The cloud makes it easy to spin up the IT resources you need. But the true value of the cloud lies in its ability to provide a set of building blocks for your applications. Join us in this hands-on session to learn how to use Amazon Virtual Private Cloud (VPC) and Amazon Elastic Compute Cloud (EC2), along with Amazon EC2 Auto Scaling and Elastic Load Balancing, to design a scalable architecture and build your applications in no time. We will also discover how to modernize your application with our serverless service, AWS Lambda.
Scalable Web Apps
A very popular use-case for AWS services
Applications with growing, variable or cyclical demand fit AWS well
Elasticity and automation can be exercised to real advantage
AWS services allow you to accelerate application development
“Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet).”
“Cloud computing is a type of computing that relies on sharing computing resources rather than having local servers or personal devices to handle applications.”
“Cloud computing is a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet technologies.”
Most cloud providers are extremely reliable, with many maintaining 99.99% uptime; the service is always on, as long as workers have an Internet connection. Moving to the cloud gives everyone access to enterprise-class technology. It also allows smaller businesses to act faster than big, established competitors: pay-as-you-go pricing and cloud business applications mean small outfits can run with the big players and disrupt the market while remaining lean and nimble.
Cloud-based services are ideal for businesses with growing or fluctuating bandwidth demands. If your needs increase it’s easy to scale up your cloud capacity, drawing on the service’s remote servers. Likewise, if you need to scale down again, the flexibility is baked into the service. This level of agility can give businesses using cloud computing a real advantage over competitors – it’s not surprising that CIOs and IT Directors rank ‘operational agility’ as a top driver for cloud adoption.
Cloud computing cuts out the high cost of hardware. You simply pay as you go and enjoy a subscription-based model that’s kind to your cash flow. Add to that the ease of setup and management, and suddenly your scary, hairy IT project looks a lot friendlier. Cloud computing offers a flexible cost structure, thereby limiting exposure.
AWS provides the deepest and broadest cloud platform in the world. Virtually every conceivable use case can be built and deployed on the AWS Cloud.
This is a simple view of the set of services offered by AWS. At the core are the compute, storage and data services that are the heart of our offering. We then surround these offerings with a range of supporting components like management tools, networking services and application services. All these capabilities are hosted within our global data center footprint that allows you to consume services without having to build out your own facilities or procure hardware equipment.
TALKING POINTS
AWS has developed the broadest collection of services available from any cloud provider.
Our approach to regions, availability zones, and POPs provides global coverage for high availability, low latency applications.
Foundation services across compute, storage, security, and networking offer customers flexibility in their architecture. We have a full spectrum of options to meet most price-to-performance scenarios.
We offer the capability for both managed and unmanaged database options.
The offerings for Analytics and Application Services enable advanced data processing and workloads.
Amazon Redshift, our cloud-based data warehouse, is the fastest-growing service in the history of AWS.
Our management tools offer a lot of insight and flexibility to let you manage your AWS resources through either our tools or the management tools you’re already familiar with.
Recent expansion into enterprise applications has been entirely driven by customer feedback on where they’d like us to deliver value.
Core Services: The core services of the platform provide a strong foundation to build upon.
Compute: Broad selection of instance types for general-purpose computing, high-performance computing, high-memory computing, GPUs, high-IO computing, even dedicated instances for regulated workloads. And we actually have twice as many as the nearest competitor for compute instances.
Storage: We launched S3 10 years ago. Since then, we’ve added archival storage with Glacier, file systems with Amazon Elastic File System, and block storage with Amazon Elastic Block Store (EBS). Inside EBS we have General Purpose volumes, Provisioned IOPS volumes, and magnetic media. Innovation on behalf of our customers has also given latency-sensitive transactional workloads a consistent high-IOPS environment to run in.
Databases: The same is true for databases. Today we have six different database engines available on the Relational Database Service: Aurora, PostgreSQL, MySQL, MariaDB, and the traditional Oracle and SQL Server databases. All of these databases can run in Multi-AZ configurations, keeping a synchronous standby in a second Availability Zone for higher availability.
Security: If we take a look at identity and access control, we have a very, very deep and fine-grained set of identity and access controls which allow you to not just specify a high-level role, but get really detailed on who can do what to which resource. So you can say that a specific individual inside your organization -- Bill -- can add a table to a specific RDS instance at a particular time only from within the corporate network between 9am and 5pm.
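The fine-grained control described above can be expressed as an IAM policy document. Here is a minimal sketch of such a policy, built as a Python dict: the resource ARN, account ID, IP range, and time window are all hypothetical placeholders, though `rds:ModifyDBInstance` and the `aws:SourceIp`/`aws:CurrentTime` condition keys are real IAM identifiers.

```python
# Sketch of a fine-grained IAM policy: allow modifying one specific RDS
# instance, only from the corporate network, only during a set time window.
# Resource ARN, account ID, CIDR range, and timestamps are hypothetical.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["rds:ModifyDBInstance"],
        "Resource": "arn:aws:rds:us-east-1:123456789012:db:example-db",
        "Condition": {
            "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
            "DateGreaterThan": {"aws:CurrentTime": "2017-01-01T09:00:00Z"},
            "DateLessThan": {"aws:CurrentTime": "2017-01-01T17:00:00Z"},
        },
    }],
}

print(json.dumps(policy, indent=2))
```

You would attach a policy like this to the individual user (“Bill”) or to a group he belongs to; every condition must match for the action to be allowed.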
Enterprise Apps: Server infrastructure isn’t the only place we’ve invested. For instance, Amazon WorkSpaces provides an extremely capable desktop-as-a-service offering, and customers like Johnson & Johnson are currently deploying it across more than 25,000 seats.
Marketplace: Finally, I want to call your attention to the AWS Marketplace, which allows you to deploy third-party software directly to your AWS environment. At present, the Marketplace offers over 2,600 products from more than 900 sellers, and customers consume over 205 million running hours a month.
I’d like to walk through a few examples of how we think the AWS Cloud can help you…
1/ And we continue to iterate at a faster clip than anybody.
2/ In 2014 we launched 516 significant services and features
3/ In 2015: 722; last year 1,017; this year over 1,300 (over 3 new features/day)
4/ So, pace of innovation continues to accelerate which is extending the gap in functionality when you study it closely
And finally, here are the regions we either have launched or will launch in 2016, 2017, and 2018. In this three-year period we will add 11 regions. That is as many regions as we launched in our first 10 years.
And we’re just getting started. I fully expect that our pace of global expansion is going to continue to accelerate.
Let’s review the bidding here. We have launched 2 new North America regions – Ohio and Montreal. We launched our third European region in London. And we launched 2 new regions in Asia Pacific – Mumbai and Seoul.
And we have preannounced another 6 regions. In the US, we will add a second region for certain government workloads on the east coast. In Europe, we will add a region in Paris and a region in Sweden. We will launch our first Middle Eastern region in Bahrain. And we will add two more Asia Pacific regions – one in Hong Kong and a second mainland China region in Ningxia.
Phew…that was tiring to just list. And we are not done…we are just getting started…stay tuned.
This is Gartner’s latest market segment share sizing released this past Fall
Can see they estimate AWS’s market segment share at 44.1%, up from 39.7% a year before
More than double the other 9 providers listed here combined…
So significant market segment leadership position that’s expanding
So as a quick refresher, EC2 instances are virtual machines. They are guests that are running on top of a hypervisor that is running on a physical piece of hardware.
Now, this may be a little obvious to some people, but I wanted to talk about how the different sizes of EC2 instances actually work, using the c4 as an example. A c4.8xlarge gives you roughly the same amount of resources – CPU, memory, and network – as two c4.4xlarges, and those two 4xlarges are in turn roughly equal to four 2xlarges, and so on and so forth.
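A quick sketch of that doubling, using the published c4 specs (vCPU count and memory in GiB). Memory doubles exactly at each size step; vCPU is only roughly double at the top end, since the c4.8xlarge has 36 vCPUs versus 2 × 16 on two c4.4xlarges.

```python
# Published c4 instance specs (vCPU, memory in GiB); each size step roughly
# doubles resources, so one 8xlarge ~= two 4xlarges ~= four 2xlarges.
c4 = {
    "c4.2xlarge": {"vcpu": 8,  "mem_gib": 15},
    "c4.4xlarge": {"vcpu": 16, "mem_gib": 30},
    "c4.8xlarge": {"vcpu": 36, "mem_gib": 60},
}

def combined(instance_type, count):
    """Total resources of `count` instances of the given type."""
    return {k: v * count for k, v in c4[instance_type].items()}

two_4xl = combined("c4.4xlarge", 2)   # memory matches the 8xlarge exactly;
                                      # vCPU is 32 vs 36, i.e. "roughly" equal
```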
Walk through the terminology from what an AMI is, launching an instance into a specific network environment, in specific AZ/Region, there are multiple regions, block storage is in an AZ, S3 is regional and holds snapshots.
High-level description of Security Groups. Focus on how they control network traffic and the differences between EC2 Classic and VPC security groups.
Users that want the low cost and flexibility of Amazon EC2 without any up-front payment or long-term commitment
Applications being developed or tested on Amazon EC2 for the first time
Applications with short term, spiky, or unpredictable workloads that cannot be interrupted
Applications with steady state or predictable usage
Applications that require reserved capacity
Users able to make upfront payments to reduce their total computing costs even further
You can save up to 75% off the On-Demand rate. You can choose between three payment options when you purchase a Standard Reserved Instance. With the All Upfront option, you pay for the entire Reserved Instance with one upfront payment. This option provides you with the largest discount compared to On-Demand Instance pricing. With the Partial Upfront option, you make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term. The No Upfront option does not require any upfront payment and provides a discounted hourly rate for the duration of the term.
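The trade-off between the three payment options can be sketched as an effective hourly rate that blends the upfront payment over the term with the hourly charge. All prices below are made-up illustrative numbers, not real AWS rates; the ordering (All Upfront cheapest, No Upfront shallowest discount) is the point.

```python
# Sketch of comparing the three RI payment options against On-Demand.
# All dollar figures are hypothetical, for illustration only.
HOURS_PER_YEAR = 8760

def effective_hourly(upfront, hourly, term_years=1):
    """Blended cost per hour over the term: amortized upfront + hourly rate."""
    return upfront / (term_years * HOURS_PER_YEAR) + hourly

on_demand = 0.10                                    # $/hr, hypothetical
all_up    = effective_hourly(upfront=500, hourly=0.0)
partial   = effective_hourly(upfront=250, hourly=0.03)
no_up     = effective_hourly(upfront=0,   hourly=0.065)

# The more you pay upfront, the lower the blended rate.
assert all_up < partial < no_up < on_demand
```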
Now if your needs change after you have purchased a Reserved Instance, you can request to move your Reserved Instance to another Availability Zone within the same region, change its network platform or, for Linux/UNIX RIs, modify the instance type of your reservation to another type in the same instance family, at no additional cost. The other option is to sell your unneeded RIs on the Reserved Instance Marketplace!
Applications that have flexible start and end times
Experiments that can only be conducted at very low compute prices (Brookhaven and Fermi – analyzing the origins of our universe), or businesses that need extremely low infrastructure costs to achieve profitability, such as adtech.
Users with urgent computing needs or large amounts of additional capacity
Spot Instances give customers the ability to purchase compute capacity with no upfront commitment, at hourly rates usually lower than the On-Demand rate – often as much as 90% cheaper. For those wondering what a 90% discount looks like: it’s about 1c per core-hour. Ask yourself what your best people could do, or how well your application could perform, with a 10,000-core data center that costs just $100 per hour.
So the Spot rules are actually pretty simple. There is a market-determined pricing mechanism that is often as much as 90% off the On-Demand price. You never pay more than your bid; in fact, you’ll often pay significantly less than your bid! Should the market price exceed your bid, we give you 2 minutes to wrap up your work. Here is a quick example of the impact of bidding on interruptions and price.
At a bid of 25% of On-Demand, you kept your instance for almost 7 days, being interrupted during a few short periods. However, you only paid the market price, which averaged 86% off – just under 20c per hour during the week, only 14% of the OD price.
At 50%, you would have been interrupted just once, for a very short period on the sixth day. Your average discount for the week is 85%, just 21c per hour, paying just 15% of OD.
At 75%, you would not have been interrupted even once, again achieving an average discount of 85% – just 21c an hour, paying just 15% of OD.
So a simple tip for getting started with Spot is bid the on-demand price and you’ll still only pay the market rate often just 10% of the on-demand price! If you’re using Spot fleet it will automagically handle the re-provisioning of your capacity!
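The bidding behaviour above can be sketched as a toy simulation: you pay the market price for every hour it stays at or below your bid, and lose the instance whenever it rises above. All prices here are invented, and the model ignores the 2-minute warning and partial hours; it only illustrates why bidding the On-Demand price rarely costs you the On-Demand price.

```python
# Toy model of Spot bidding: pay the market price while it stays at or
# below your bid; get interrupted whenever it rises above. Prices invented.
def run_spot(bid, market_prices):
    """Return (hours_kept, total_paid, interruptions) over hourly prices."""
    hours = interruptions = 0
    paid = 0.0
    running = True
    for price in market_prices:
        if price <= bid:
            hours += 1
            paid += price          # you pay the market price, never your bid
            running = True
        elif running:
            interruptions += 1     # price exceeded bid: instance reclaimed
            running = False
    return hours, paid, interruptions

market = [0.02, 0.02, 0.03, 0.09, 0.02, 0.02]   # hypothetical $/hr; OD = 0.10

# Bid the On-Demand price: never interrupted, average cost far below OD.
hours, paid, stops = run_spot(bid=0.10, market_prices=market)
```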
So having a balanced meal means -
Use Reserved Instances for known/steady-state workloads
Set up multiple Auto Scaling groups
Scale using Spot, On-Demand or both
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can also create a Hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate datacenter.
Here’s a visualization of the network components of a VPC, which can span availability zones
Traffic can be routed from a subnet to the internet, or it can be kept private
You can also route subnet traffic to a Virtual Private Gateway, which connects via VPN to a customer data center
Here’s a notional chart of server load versus time, where you can see activity ramp up to a midday peak
This is common for many applications, to see a surge of activity during the day when customers wake up and then a ramp down when customer go to bed
Overlaid onto the chart is the capacity of one server, which in this case is enough to serve customers up until about 8am, and then later between 4pm and midnight
So two servers are traditionally required, since you have to buy for the peak, even though for roughly 16 hours of the day you don’t need the second server
Notice that the area under the curve represents server capacity needed as well as the expense of that server capacity. The difference between the capacity provisioned and the need is the savings opportunity
What if you could provision one server for the first 8 hours…
Continue using that server for the next 8 hours…
Add an additional server for the middle of the day to accommodate peak demand…
And then scale back down to one server for the final 8 hours…
That would save you a third of the traditional capacity requirement
You can do this with Amazon EC2 by purchasing On-Demand Instances, where you pay by the hour
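The arithmetic behind that one-third saving is simple: instead of paying for two servers around the clock, you pay for one server off-peak and two during the 8-hour peak.

```python
# Server-hours per day for the diurnal example above:
# traditionally you buy two servers for the peak and run them all day.
fixed_cost = 2 * 24            # two servers, 24 hours

# With On-Demand you run one server for 16 off-peak hours and two servers
# for the 8 peak hours.
elastic_cost = 1 * 16 + 2 * 8  # = 32 server-hours

savings = 1 - elastic_cost / fixed_cost   # one third of the fixed capacity
```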
Some applications are spikier – for example, some enterprise applications may be sitting dormant for most of the month but require substantial server capacity for the final days
In this example, the end of month need is predictably 6x what is required during the remainder of the month
The expense to provision for peak is significantly higher than in the previous example,
The area under the curve traditionally provisioned is significantly larger than what is needed, making the savings opportunity higher
With this predictable workload pattern, you can capitalize on the savings opportunity with Amazon EC2
In this case it’s 75%
Elastic Load Balancing performs health checks. If it finds an unhealthy instance, it stops sending traffic to that instance and reroutes traffic to healthy instances.
At the same time, Auto Scaling periodically performs health checks on instances. When Auto Scaling determines that an instance is unhealthy, it terminates that instance and launches a new one.
Using this functionality across multiple availability zones allows your architecture to fail over to either availability zone, enabling a highly available web architecture within a region
You’ll notice as well that static content is delivered through CloudFront, our content delivery network
Lamborghini: Automobili Lamborghini manufactures luxury super sports cars in Italy. When the company’s outdated website and infrastructure needed an update, they chose AWS to bring a new website online in less than one month, supporting a new product launch that generated a 250% increase in website traffic.
Unilever: Unilever migrated 500 web properties in less than 5 months to a standardized digital marketing platform running on AWS capable of supporting global campaigns, and reducing the time to launch new projects by 75%.
Discovery Communications: Discovery Communications is a leader in nonfiction media, reaching more than 1.8 billion cumulative subscribers in 218 countries and territories. Discovery uses AWS to run more than 40 websites while easily meeting fluctuating traffic.
Airbnb: Tobi Knaup, an engineer at Airbnb says, “Because of AWS, there has always been an easy answer (in terms of time required and cost) to scale our site.”
McCormick: McCormick & Company is a global leader in the flavor industry with more than $4 billion in annual sales. McCormick uses AWS to host the FlavorPrint website and store user profiles, photographs, and how-to videos. By using AWS, McCormick was able to create an entirely new way to present its products to customers, and along the way, double capacity and reduce infrastructure costs by over 50%.
Lafarge: A leader in building materials, Lafarge uses AWS to host 20 active corporate websites and plans to expand its use to more websites and applications. Using AWS gives Lafarge the ability to instantaneously add or remove instances in order to manage website load during peak periods.
Dow Jones: Dow Jones & Company is a global provider of news and business information, delivering content to consumers and organizations via newspapers, Web sites, mobile apps, video, newsletters, magazines, proprietary databases, conferences, and radio. The WSJ.com product running on AWS Tokyo leverages multiple Availability Zones on Amazon EC2 instances to run Dow Jones app code and Oracle databases.
Ziff Davis: Ziff Davis is an American publisher and Internet company. Ziff Davis is using AWS to host its web properties such as PCMags.com, IGN.com and AskMen.com. AWS provides a uniform environment for the enterprise’s web properties, and eliminates the cost of licensing, on-site maintenance, and hardware refreshes.
Reddit: 4 billion page views per month with only 20 people in the whole company
…then all of this is irrelevant if clients can’t resolve you
EBS volume types:
Magnetic: 1 TB maximum volume size
SSD-backed, in two flavours, each with a 16 TB maximum volume size:
General Purpose: 3,000 IOPS per volume up to 1 TB, 10,000 IOPS maximum
Provisioned IOPS (PIOPS): 20,000 IOPS maximum
If you could use everything you’ve already learned about designing service-based applications without the need to manage the server-based infrastructure, that would be pretty compelling, right?
You can have your application’s operations fully managed: no provisioning, high availability built in, no patching or monitoring of operating systems.
Also, when creating web services, a lot of the code that your development team will be responsible for writing is relevant to the web services paradigm itself. Running a web server, exposing an API, marshalling requests/responses, etc. By architecting to be serverless, your developers can focus on the core business logic that matters.
And finally, the serverless applications you build will have their scaling managed for you, no matter what that scale is.
I’ll explain the service by describing the four different components that I think about when building a Lambda-based application. We’ll jump into each of these components now.
Next is the event source. For the code that you’ve written and would like to have executed, the event source will define how and when that occurs. There are a number of different event sources available today and that’s continuing to grow at a very rapid pace.
Each event source type defines what data and metadata are passed to your function, so that it can process with all the context your application needs. For example, if you’d like your code to execute whenever a new object lands in an S3 bucket, you choose S3 as the event source; when your Lambda function is triggered, it will be provided metadata like the userIdentity that uploaded the object, the bucket the object was created in, and the key and size of the new object.
And if, for example, you choose Amazon API Gateway as your event source, your Lambda function will receive all of the HTTPS request details it needs to process that API request.
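For the S3 case just described, a handler might look like the minimal sketch below. The event shape follows the standard S3 notification format (a `Records` list with a nested `s3` section); the bucket and key values in the sample event are made up, and the sample is trimmed to only the fields the handler reads.

```python
# Minimal sketch of a Lambda handler for an S3 event source: pull out the
# bucket, key, and size of each newly created object.
def handler(event, context):
    """Summarise each new S3 object in the triggering event."""
    results = []
    for record in event["Records"]:
        s3 = record["s3"]
        results.append({
            "bucket": s3["bucket"]["name"],
            "key": s3["object"]["key"],
            "size": s3["object"]["size"],
        })
    return results

# Sample S3 put event, trimmed to the fields used above (values invented):
sample_event = {"Records": [{"s3": {
    "bucket": {"name": "my-demo-bucket"},
    "object": {"key": "uploads/photo.jpg", "size": 1024},
}}]}

summary = handler(sample_event, None)
```

In a real deployment, Lambda invokes `handler` for you with the event and a context object; nothing in the function itself deals with servers, queues, or polling.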
Lambda allows you to work within a model that provides an amazing balance between abstraction and control. You get to be abstracted away from all the undifferentiated heavy lifting of infrastructure, and you get full control over the code required to run your application. All of the practices and tools your developers are using for code creation and management can still be used before deploying to Lambda.
Security is AWS’s #1 priority and always will be. Using Lambda means you get native integration with AWS features and services like IAM and VPC that make implementing security best practices easier. Lambda is already part of many mission-critical applications for AWS customers today.
You pay per function execution. When you’ve provisioned a server that no users are interacting with, you’re still paying for that unused capacity. Not with Lambda. No concept of paying for idle capacity, no commitments required. And there is a gigantic free tier available. The first 1 million function executions per month are free with Lambda. And the Lambda free tier does not expire after 12 months like some other AWS free tiers.
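A back-of-the-envelope sketch of the request side of that pricing: the first 1 million invocations each month are free, and beyond that, requests are billed at the published $0.20 per million. Duration (GB-second) charges are deliberately omitted here to keep the illustration simple.

```python
# Lambda request pricing sketch: first 1M invocations/month free, then
# $0.20 per million requests. Duration charges are omitted for simplicity.
FREE_REQUESTS = 1_000_000
PRICE_PER_REQUEST = 0.20 / 1_000_000

def monthly_request_cost(invocations):
    """Request charge for a month, after the perpetual free tier."""
    billable = max(0, invocations - FREE_REQUESTS)
    return billable * PRICE_PER_REQUEST
```

So an application doing 900,000 invocations a month pays nothing in request charges, and even 3 million invocations costs only 40 cents, which is the "no paying for idle" point above in concrete terms.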
There is already a booming community around Lambda, there to support you, and it has documented answers to a lot of the questions you may run into when starting out.
Your function is colocated on the AWS platform with all of the other services at your function’s fingertips. You could write a simple API and a simple code function that’s deployed and managed by AWS Lambda that directly (and securely) integrates with a single relational database that could grow and scale up to 64TB with Amazon Aurora. That’s insane! Not to mention the Support, Solutions Architect, and Partner organizations that are here to help make you successful.
Now I’m going to give a demonstration for how to build a serverless application on AWS using AWS Lambda and Amazon API Gateway.
It will be a very typical three-tier web application.
The presentation tier will be a static HTML front end stored in Amazon S3, which will reach out to the logic tier via HTTPS API requests.
The logic tier will be an API deployed through Amazon API Gateway, with processing performed by functions inside AWS Lambda.
Those functions will persist data inside a DynamoDB table which will be our data tier.
Once we’re done, we’ll have a fully scalable, managed application that requires no server-based operations.