1. Amazon Web Services
1 | P a g e
J.T.M.COE, Faizpur
CHAPTER 1. INTRODUCTION
1.1. OVERVIEW
Amazon has a long history of using a decentralized IT infrastructure. This
arrangement enabled our development teams to access compute and storage
resources on demand, and it has increased overall productivity and agility. By 2005,
Amazon had spent over a decade and millions of dollars building and managing the
large-scale, reliable, and efficient IT infrastructure that powered one of the world's
largest online retail platforms. Amazon launched Amazon Web Services (AWS) so
that other organizations could benefit from Amazon's experience and investment in
running a large-scale, distributed, transactional IT infrastructure. AWS has been
operating since 2006, and today serves hundreds of thousands of customers
worldwide. Today Amazon.com runs a global web platform serving millions of
customers and managing billions of dollars' worth of commerce every year. Using
AWS, you can requisition compute power, storage, and other services in minutes and
have the flexibility to choose the development platform or programming model that
makes the most sense for the problems you are trying to solve. You pay only for
what you use, with no up-front expenses or long-term commitments, making AWS a
cost-effective way to deliver applications.
Here are some examples of how organizations, from research firms to large
enterprises, use AWS today:
A large enterprise quickly and economically deploys new internal applications, such
as HR solutions, payroll applications, inventory management solutions, and online
training to its distributed workforce.
An e-commerce website accommodates sudden demand for a hot product caused by
viral buzz from Facebook and Twitter without having to upgrade its infrastructure.
A pharmaceutical research firm executes large-scale simulations using computing
power provided by AWS.
Media companies serve unlimited video, music, and other media to their worldwide
customer base.
1.2. PURPOSE
AWS offers low, pay-as-you-go pricing with no up-front expenses or long-term
commitments. We are able to build and manage a global infrastructure at scale, and
pass the cost-saving benefits on to you in the form of lower prices. With the
efficiencies of our scale and expertise, we have been able to lower our prices on 15
different occasions over the past four years. AWS provides a massive global cloud
infrastructure that allows you to quickly innovate, experiment, and iterate. Instead of
waiting weeks or months for hardware, you can instantly deploy new applications,
instantly scale up as your workload grows, and instantly scale down based on
demand. Whether you need one virtual server or thousands, whether you need them
for a few hours or 24/7, you still only pay for what you use.
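To make the pay-as-you-go idea concrete, here is a minimal sketch of per-hour billing. The instance names and hourly rates are illustrative placeholders, not actual AWS prices:

```python
# Pay-as-you-go billing sketch: you are charged only for the hours
# each server actually runs, with no up-front or minimum fees.
# The hourly rates below are illustrative placeholders, not real AWS prices.
HOURLY_RATE = {
    "small": 0.05,   # hypothetical $/hour for a small instance
    "large": 0.20,   # hypothetical $/hour for a large instance
}

def monthly_bill(usage):
    """usage: list of (instance_type, hours_run) tuples for the month."""
    return sum(HOURLY_RATE[itype] * hours for itype, hours in usage)

# One small server running 24/7 for 30 days, plus a large server
# spun up for a 10-hour batch job, then terminated:
bill = monthly_bill([("small", 24 * 30), ("large", 10)])
print(round(bill, 2))  # 38.0
```

The point of the sketch is that the large server costs $2 for its 10-hour job and then nothing further, rather than a capital purchase that sits idle afterwards.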
AWS is a language- and operating system-agnostic platform. You choose the
development platform or programming model that makes the most sense for your
business. You can choose which services you use, one or several, and choose how
you use them. This flexibility allows you to focus on innovation, not infrastructure.
AWS is a secure, durable technology platform with industry-recognized
certifications and audits: PCI DSS Level 1, ISO 27001, FISMA Moderate,
FedRAMP, HIPAA, and SOC 1 (formerly referred to as SAS 70 and/or SSAE 16) and
SOC 2 audit reports. Our services and data centres have multiple layers of
operational and physical security to ensure the integrity and safety of your data.
CHAPTER 2. WHY USE AWS
AWS is readily distinguished from other vendors in the traditional IT computing
landscape because it is:
1. Flexible: AWS enables organizations to use the programming models, operating
systems, databases, and architectures with which they are already familiar. In
addition, this flexibility helps organizations mix and match architectures in order to
serve their diverse business needs.
2. Cost-effective: With AWS, organizations pay only for what they use, without
up-front or long-term commitments.
3. Scalable and elastic: Organizations can quickly add and subtract AWS resources to
their applications in order to meet customer demand and manage costs.
4. Secure: In order to provide end-to-end security and end-to-end privacy, AWS
builds services in accordance with security best practices, provides the appropriate
security features in those services, and documents how to use those features.
5. Experienced: When using AWS, organizations can leverage Amazon's more than
fifteen years of experience delivering large-scale, global infrastructure in a reliable,
secure fashion.
2.1. FLEXIBLE
The first key difference between AWS and other IT models is flexibility. Using
traditional models to deliver IT solutions often requires large investments in new
architectures, programming languages, and operating systems. Although these
investments are valuable, the time that it takes to adapt to new technologies can also
slow down your business and prevent you from quickly responding to changing
markets and opportunities. When the opportunity to innovate arises, you want to be
able to move quickly and not always have to support legacy infrastructure and
applications or deal with protracted procurement processes.
In contrast, the flexibility of AWS allows you to keep the programming models,
languages, and operating systems that you are already using, or choose others that are
better suited for your project. You don't have to learn new skills. Flexibility means
that migrating legacy applications to the cloud is easy and cost-effective. Instead of
re-writing applications, you can easily move them to the AWS cloud and tap into
advanced computing capabilities. Building applications on AWS is very much like
building applications using existing hardware resources. Since AWS provides a
flexible, virtual IT infrastructure, you can use the services together as a platform or
separately for specific needs. AWS runs almost anything, from full web applications
to batch processing to off-site data backups.
In addition, you can move existing SOA-based solutions to the cloud by migrating
discrete components of legacy applications. Typically, these components benefit
from high availability and scalability, or they are self-contained applications with
few internal dependencies. Larger organizations typically run in a hybrid mode
where pieces of the application run in their data centre and other portions run in the
cloud. Once these organizations gain experience with the cloud, they begin
transitioning more of their projects to the cloud, and they begin to appreciate many
of the benefits outlined in this document. Ultimately, many organizations see the
unique advantages of the cloud and AWS and make it a permanent part of their IT
mix.
Finally, AWS provides you flexibility when provisioning new services. Instead of
the weeks and months it takes to plan, budget, procure, set up, deploy, operate, and
hire for a new project, you can simply sign up for AWS and immediately begin
deploying on the cloud the equivalent of 1, 10, 100, or 1,000 servers. Whether you
want to prototype an application or host a production solution, AWS makes it simple
for you to get started and be productive. Many customers find the flexibility of AWS
to be a great asset in improving time to market and overall organizational
productivity.
2.2. COST-EFFECTIVE
Cost is one of the most complex elements of delivering contemporary IT
solutions. The cloud provides an on-demand IT infrastructure that lets you consume
only the amount of resources that you actually need.
You are not limited to a set amount of storage, bandwidth, or computing
resources. It is often difficult to predict requirements for these resources. As a result,
you might provision too few resources, which has an impact on customer
satisfaction, or you might provision too many resources and miss an opportunity to
maximize return on investment (ROI) through full utilization. The cloud provides the
flexibility to strike the right balance. AWS requires no up-front investment, long-
term commitment, or minimum spend. You can get started through a completely
self-service experience on-line, scale up and down as needed, and terminate your
relationship with AWS at any time. You can access new resources almost instantly.
The ability to respond quickly to changes, no matter how large or small, means that
you can take on new opportunities and meet business challenges that could drive
revenue and reduce costs. If you want to consult with AWS for deeper technical
discussions, our sales and solutions architecture teams are available.
2.3. SCALABLE AND ELASTIC
In the traditional IT organization, scalability and elasticity were often equated with
investment and infrastructure.
In the cloud, scalability and elasticity provide opportunity for savings and improved
ROI. AWS uses the term elastic to describe the ability to scale computing resources
up and down easily, with minimal friction. Elasticity helps you avoid provisioning
resources up front for projects with variable consumption rates or short lifetimes.
Instead of acquiring hardware, setting it up, and maintaining it in order to allocate
resources to your applications, you use AWS to allocate resources using simple API
calls. The AWS cloud is also a useful resource for implementing short-term jobs,
mission-critical jobs, and jobs repeated at regular intervals. For example, when a
pharmaceutical company needs to run drug simulations (a short-term job), it can use
AWS to spin up resources in the cloud, and then shut them down when it no longer
needs additional resources. When an enterprise has to deal quickly with the effects of
a natural disaster on its data centre (a mission-critical job), it can use AWS to tap into
new storage and computing resources to accommodate demand. Furthermore, AWS
can preserve computing resources and reduce costs for regularly repeated tasks,
such as month-end payroll or invoice processing.
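The allocate-by-API-call idea above can be sketched as a toy model. The class and method names here are invented for illustration; real elastic scaling goes through AWS service APIs rather than this interface:

```python
# Toy model of elastic capacity: resources are acquired and released
# with simple calls rather than by procuring hardware. The class and
# method names are invented for illustration, not a real AWS API.
class ElasticFleet:
    def __init__(self):
        self.servers = 0

    def scale_to(self, demand, per_server_capacity=100):
        """Allocate just enough servers for current demand (one call in
        the cloud; weeks of procurement in a traditional data centre)."""
        needed = -(-demand // per_server_capacity)  # ceiling division
        self.servers = needed
        return self.servers

fleet = ElasticFleet()
print(fleet.scale_to(950))   # demand spike: 10 servers
print(fleet.scale_to(120))   # demand lull: scale down to 2
print(fleet.scale_to(0))     # short-term job finished: 0 servers, 0 cost
```

The last call is the point of elasticity: when the short-term job ends, capacity (and cost) drops to zero instead of sitting idle.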
2.4. SECURE
AWS delivers a scalable cloud-computing platform that provides customers with
end-to-end security and end-to-end privacy.
AWS builds security into its services in accordance with security best practices, and
documents how to use the security features. It is important that you leverage AWS
security features and best practices to design an appropriately secure application
environment.
Ensuring the confidentiality, integrity, and availability of your data is of the utmost
importance to AWS, as is maintaining your trust and confidence. AWS takes the
following approaches to secure the cloud infrastructure:
Certifications and accreditations. AWS has in the past successfully completed
multiple SAS 70 Type II audits, and now publishes a Service Organization Controls 1
(SOC 1) report, published under both the SSAE 16 and the ISAE 3402 professional
standards. In addition to the SOC 1 report, AWS publishes a Service Organization
Controls 2 (SOC 2), Type II report. Similar to the SOC 1 in the evaluation of
controls, the SOC 2 report is an attestation report that expands the evaluation of
controls to the criteria set forth by the American Institute of Certified Public
Accountants (AICPA) Trust Services Principles. Additionally, AWS publishes a
Service Organization Controls 3 (SOC 3) report. The SOC 3 report is a publicly
available summary of the AWS SOC 2 report and provides the AICPA SysTrust seal.
Physical security. Amazon has many years of experience designing, constructing,
and operating large-scale data centres. The AWS infrastructure is located in
Amazon-controlled data centres throughout the world. Knowledge of the location of
the data centres is limited to those within Amazon who have a legitimate business
reason for this information. The data centres are physically secured in a variety of
ways to prevent unauthorized access.
Secure services. Each service in the AWS cloud is architected to be secure. The
services contain a number of capabilities that restrict unauthorized access or usage
without sacrificing the flexibility that customers demand.
Data privacy. You can encrypt personal and business data in the AWS cloud, and
publish backup and redundancy procedures for services so that your customers can
protect their data and keep their applications running.
2.5. EXPERIENCED
AWS provides a low-friction path to cloud computing by design. Nevertheless, as
with any IT project, the move to the AWS cloud should be done thoughtfully. You
should hold your cloud-computing partner to the same high standards that you would
expect of any hardware or software vendor. The trust that you place in your cloud-
computing vendor will be critical as your organization grows and your customers
continue to expect the best experience. The AWS cloud provides levels of scale,
security, reliability, and privacy that are often cost prohibitive for many
organizations to meet or exceed. AWS has built an infrastructure based on lessons
learned from over sixteen years' experience managing the multi-billion-dollar
Amazon.com business. AWS customers benefit as Amazon continues to hone its
infrastructure management skills and capabilities. Today Amazon.com runs a global
web platform serving millions of customers and managing billions of dollars' worth
of commerce every year. AWS has been operating since 2006, and today serves
hundreds of thousands of customers worldwide. Moreover, AWS has a demonstrated
track record of listening to its customers and delivering highly innovative new
features at a rapid pace. These new releases have the same high standards of security
and reliability that are demonstrated in all existing AWS infrastructure services.
CHAPTER 3. HISTORY
Officially launched in 2006, Amazon Web Services provides online services for
other web sites or client-side applications. Most of these services are not exposed
directly to end users, but instead offer functionality that other developers can use in
their applications. Amazon Web Services offerings are accessed over HTTP, using
the REST architectural style and SOAP protocol. All services are billed based on
usage, but how usage is measured for billing varies from service to service. In late
2003, Chris Pinkham and Benjamin Black presented a paper describing a vision for
Amazon's retail computing infrastructure that was completely standardized,
completely automated, and would rely extensively on web services for services such
as storage, drawing on internal work already underway. Near the end they mentioned
the possibility of selling virtual servers as a service, proposing the company could
generate revenue from the new infrastructure investment. The first AWS service
launched for public usage was Simple Queue Service in November 2004. Amazon
EC2 was built by a team in Cape Town, South Africa, under Pinkham and lead
developer Chris Brown. In June 2007, Amazon claimed that more than 180,000
developers had signed up to use Amazon Web Services. In November 2010, it was
reported that all of Amazon.com retail web services had been moved to AWS. On
April 20, 2011, some parts of Amazon Web Services suffered a major outage. A
portion of volumes using the Elastic Block Store (EBS) service became "stuck" and
were unable to fulfil read/write requests. It took at least two days for service to be
fully restored. On June 29, 2012, several websites that rely on Amazon Web Services
were taken offline due to a severe storm of historic proportions in Northern Virginia,
where AWS's largest data centre cluster is located. On October 22, 2012, a major
outage occurred, affecting many sites such as Reddit, Foursquare, Pinterest, and
others. The cause was a latent memory leak bug in an operational data collection
agent. On December 24, 2012, AWS suffered another outage, causing websites such
as Netflix instant video to be unavailable for customers in the northeastern United
States. AWS later issued a statement detailing the issues with the Elastic Load
Balancing service that led up to the outage. In November 2012, AWS hosted its first
customer event in Las Vegas. On April 30, 2013, AWS began offering a certification
program for computer engineers with expertise in cloud computing.
In 2015, Gartner estimated that AWS customers were deploying 10x more
infrastructure on AWS than the combined adoption of the next 14 providers. During
the 2015 re:Invent keynote, AWS disclosed that it has more than a million
active customers every month in 190 countries, including nearly 2,000 government
agencies, 5,000 education institutions, and more than 17,500 nonprofits. AWS
adoption has increased since launch in 2006. Customers include NASA, the Obama
Campaign, Pinterest, Kempinski Hotels, Netflix, Infor, and the CIA. AWS
Distinguished Engineer James Hamilton has also provided a timeline of AWS' 10-year history.
CHAPTER 4. SERVICES OF AWS
4.1. Amazon Web Services Cloud Platform
AWS is a comprehensive cloud services platform that offers compute power,
storage, content delivery, and other functionality that organizations can use to deploy
applications and services cost effectively with flexibility, scalability, and reliability.
AWS self-service means that you can proactively address your internal plans and
react to external demands when you choose.
Figure 1: Amazon Web Services cloud platform
4.2. COMPUTE & NETWORKING
4.2.1 Amazon Elastic Compute Cloud (Amazon EC2)
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides
resizable compute capacity in the cloud. It is designed to make web-scale computing
easier for developers and system administrators. Amazon EC2's simple web service
interface allows you to obtain and configure capacity with minimal friction. It
provides you with complete control of your computing resources and lets you run on
Amazon's proven computing environment. Amazon EC2 reduces the time required to
obtain and boot new server instances to minutes, allowing you to quickly scale
capacity, both up and down, as your computing requirements change. Amazon EC2
changes the economics of computing by allowing you to pay only for capacity that
you actually use. Amazon EC2 provides developers and system administrators the
tools to build failure-resilient applications and isolate themselves from common
failure scenarios.
Figure 2: Amazon EC2
4.2.2 Auto Scaling
Auto Scaling allows you to scale your Amazon EC2 capacity up or down
automatically according to conditions you define. With Auto Scaling, you can ensure
that the number of Amazon EC2 instances you are using increases seamlessly during
demand spikes to maintain performance, and decreases automatically during demand
lulls to minimize costs. Auto Scaling is particularly well suited for applications that
experience hourly, daily, or weekly variability in usage. Auto Scaling is enabled by
Amazon CloudWatch and available at no additional charge beyond Amazon
CloudWatch fees.
Figure 3: EC2 Auto Scaling
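The scale-out-on-spikes, scale-in-on-lulls behaviour above can be sketched as a simple threshold policy. The CPU thresholds, bounds, and function name are illustrative assumptions, not the real Auto Scaling API:

```python
# Threshold-based auto-scaling sketch: add instances when average CPU
# is high, remove them when it is low, within min/max bounds.
# Thresholds and names are illustrative, not the real Auto Scaling API.
def desired_instances(current, avg_cpu, min_size=1, max_size=10):
    if avg_cpu > 70:          # demand spike: scale out for performance
        return min(current + 1, max_size)
    if avg_cpu < 30:          # demand lull: scale in to minimize costs
        return max(current - 1, min_size)
    return current            # within the comfort band: no change

print(desired_instances(3, avg_cpu=85))  # 4
print(desired_instances(3, avg_cpu=20))  # 2
print(desired_instances(1, avg_cpu=20))  # 1 (never below min_size)
```

In a real deployment a monitoring service such as CloudWatch supplies the metric and the policy is evaluated continuously; this sketch only shows the decision step.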
4.2.3. Elastic Load Balancing
Elastic Load Balancing automatically distributes incoming application traffic across
multiple Amazon EC2 instances. It enables you to achieve even greater fault
tolerance in your applications, seamlessly providing the amount of load balancing
capacity needed in response to incoming application traffic. Elastic Load Balancing
detects unhealthy instances and automatically reroutes traffic to healthy instances
until the unhealthy instances have been restored. Customers can enable Elastic Load
Balancing within a single Availability Zone or across multiple zones for even more
consistent application performance.
Figure 4: EC2 Elastic Load Balancing
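A toy version of the behaviour described above, round-robin distribution that skips unhealthy instances until they are restored, can be sketched as follows. The class is invented for illustration and is not how Elastic Load Balancing is actually implemented:

```python
# Toy load balancer: rotate requests across instances, skipping any
# marked unhealthy until they are restored. Invented for illustration;
# not the actual Elastic Load Balancing implementation.
from itertools import cycle

class ToyLoadBalancer:
    def __init__(self, instances):
        self.healthy = set(instances)
        self._ring = cycle(instances)   # round-robin order

    def mark_unhealthy(self, instance):
        self.healthy.discard(instance)

    def mark_healthy(self, instance):
        self.healthy.add(instance)

    def route(self):
        """Return the next healthy instance for an incoming request."""
        for _ in range(2 * len(self.healthy) + 2):
            candidate = next(self._ring)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy instances")

lb = ToyLoadBalancer(["i-a", "i-b", "i-c"])
lb.mark_unhealthy("i-b")
print([lb.route() for _ in range(4)])  # ['i-a', 'i-c', 'i-a', 'i-c']
```

Traffic silently flows around the failed instance; once `mark_healthy("i-b")` is called, it rejoins the rotation.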
4.2.4. Amazon WorkSpaces
Amazon WorkSpaces is a fully managed desktop computing service in the cloud.
Amazon WorkSpaces allows customers to easily provision cloud-based desktops
that allow end users to access the documents, applications, and resources they need
with the device of their choice, including laptops, iPad, Kindle Fire, or Android
tablets. With a few clicks in the AWS Management Console, customers can
provision a high-quality desktop experience for any number of users at a cost that is
highly competitive with traditional desktops and half the cost of most virtual desktop
infrastructure (VDI) solutions.
Figure 5: Amazon WorkSpaces
4.2.5 Amazon Virtual Private Cloud (Amazon VPC)
Amazon Virtual Private Cloud lets you provision a logically isolated section of the
Amazon Web Services (AWS) Cloud where you can launch AWS resources in a
virtual network that you define. You have complete control over your virtual
networking environment, including selection of your own IP address range, creation
of subnets, and configuration of route tables and network gateways. You can easily
customize the network configuration for your Amazon VPC. For example, you can
create a public-facing subnet for your web servers that has access to the Internet, and
place your backend systems such as databases or application servers in a private-
facing subnet with no Internet access. You can leverage multiple layers of security
(including security groups and network access control lists) to help control access to
Amazon EC2 instances in each subnet. Additionally, you can create a hardware
virtual private network (VPN) connection between your corporate data centre and
your VPC and leverage the AWS cloud as an extension of your corporate data
centre.
Figure 6: Amazon Virtual Private Cloud (Amazon VPC)
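The public/private subnet layout described above can be sketched with Python's standard ipaddress module. The CIDR blocks are arbitrary examples, not AWS defaults:

```python
# Carving a VPC address range into a public and a private subnet.
# The CIDR blocks are arbitrary examples, not AWS defaults.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")      # the whole VPC range
subnets = list(vpc.subnets(new_prefix=24))     # 256 possible /24 subnets

public_subnet = subnets[0]    # e.g. web servers, with Internet access
private_subnet = subnets[1]   # e.g. databases, no Internet access

print(public_subnet)                                        # 10.0.0.0/24
print(private_subnet)                                       # 10.0.1.0/24
print(ipaddress.ip_address("10.0.1.25") in private_subnet)  # True
```

In an actual VPC, route tables and security groups (not shown here) are what make one subnet Internet-facing and the other private; the sketch only covers the address planning.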
4.2.6. Amazon Route 53
Amazon Route 53 is a highly available and scalable Domain Name System (DNS)
web service. It is designed to give developers and businesses an extremely reliable
and cost effective way to route end users to Internet applications by translating
human readable names, such as www.example.com, into the numeric IP addresses,
such as 192.0.2.1, that computers use to connect to each other. Route 53 effectively
connects user requests to infrastructure running in AWS, such as an EC2 instance, an
elastic load balancer, or an Amazon S3 bucket. Route 53 can also be used to route
users to infrastructure outside of AWS. Amazon Route 53 is designed to be fast, easy
to use, and cost effective. It answers DNS queries with low latency by using a global
network of DNS servers. Queries for your domain are automatically routed to the
nearest DNS server, and thus are answered with the best possible performance. With
Route 53, you can create and manage your public DNS records with the AWS
Management Console or with an easy-to-use API. It's also integrated with other
Amazon Web Services. For instance, by using the AWS Identity and Access
Management (IAM) service with Route 53, you can control who in your organization
can make changes to your DNS records. Like other Amazon Web Services, there are
no long-term contracts or minimum usage requirements for using Route 53; you pay
only for managing domains through the service and the number of queries that the
service answers.
Figure 7: Amazon Route 53
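The core translation Route 53 performs, from a human-readable name to a numeric IP address, can be sketched as a lookup over a zone's record set. The `RECORDS` table and `resolve()` function are invented for illustration (the example name and address reuse the documentation values from the text):

```python
# Minimal DNS-style lookup: translate a human-readable name into the
# numeric IP address that computers use to connect. The records table
# and resolve() function are invented for illustration only.
RECORDS = {
    "www.example.com": "192.0.2.1",   # A record: name -> IPv4 address
    "api.example.com": "192.0.2.44",
}

def resolve(name):
    """Return the IP address registered for a hostname."""
    try:
        return RECORDS[name]
    except KeyError:
        raise LookupError(f"NXDOMAIN: {name}")

print(resolve("www.example.com"))  # 192.0.2.1
```

A real DNS service adds record types, TTL-based caching, and routing policies (latency, weighted, failover) on top of this basic mapping.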
4.2.7. AWS Direct Connect
AWS Direct Connect makes it easy to establish a dedicated network connection from
your premises to AWS. Using AWS Direct Connect, you can establish private
connectivity between AWS and your data centre, office, or co-location environment,
which in many cases can reduce your network costs, increase bandwidth throughput,
and provide a more consistent network experience than Internet-based connections.
AWS Direct Connect lets you establish a dedicated network connection between
your network and one of the AWS Direct Connect locations. Using industry-
standard 802.1Q virtual LANs (VLANs), this dedicated connection can be
partitioned into multiple logical connections. This allows you to use the same
connection to access public resources such as objects stored in Amazon S3 using
public IP address space, and private resources such as Amazon EC2 instances
running within an Amazon VPC using private IP space, while maintaining network
separation between the public and private environments. Logical connections can be
reconfigured at any time to meet your changing needs.
Figure 8: AWS Direct Connect
4.3. STORAGE & CONTENT DELIVERY
4.3.1 Amazon Simple Storage Service (Amazon S3)
Amazon S3 is storage for the Internet. It is designed to make web-scale
computing easier for developers. Amazon S3 provides a simple web services
interface that can be used to store and retrieve any amount of data, at any time, from
anywhere on the web. The container for objects stored in Amazon S3 is called an
Amazon S3 bucket. Amazon S3 gives any developer access to the same highly
scalable, reliable, secure, fast, inexpensive infrastructure that Amazon uses to run its
own global network of websites. The service aims to maximize benefits of scale and
to pass those benefits on to developers.
Figure 9: STORAGE CONTENT DELIVERY NETWORK
Figure 10: Amazon Simple Storage Service (Amazon S3)
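The bucket/object model described above can be sketched as a toy key-value store. The class is invented for illustration; the real S3 interface is an HTTP web service, not an in-process object:

```python
# Toy object store modelling S3's structure: named buckets hold
# objects addressed by key, stored and retrieved at any time.
# Invented for illustration; the real S3 API is HTTP-based.
class ToyObjectStore:
    def __init__(self):
        self.buckets = {}

    def create_bucket(self, name):
        self.buckets.setdefault(name, {})

    def put_object(self, bucket, key, data: bytes):
        self.buckets[bucket][key] = data

    def get_object(self, bucket, key):
        return self.buckets[bucket][key]

store = ToyObjectStore()
store.create_bucket("my-reports")
store.put_object("my-reports", "2024/summary.txt", b"quarterly numbers")
print(store.get_object("my-reports", "2024/summary.txt"))  # b'quarterly numbers'
```

Note the flat keyspace: "2024/summary.txt" looks like a path, but to the store it is just an object key within the bucket, which is also how S3 treats it.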
4.3.2. Amazon Glacier
Amazon Glacier is an extremely low-cost storage service that provides secure and
durable storage for data archiving and backup. In order to keep costs low, Amazon
Glacier is optimized for data that is infrequently accessed and for which retrieval
times of several hours are suitable. With Amazon Glacier, customers can reliably
store large or small amounts of data for as little as $0.01 per gigabyte per
month, a significant savings compared to on-premises solutions. Companies typically
overpay for data archiving. First, they're forced to make an expensive upfront
payment for their archiving solution (which does not include the ongoing cost for
operational expenses such as power, facilities, staffing, and maintenance). Second,
since companies have to guess what their capacity requirements will be, they
understandably overprovision to make sure they have enough capacity for data
redundancy and unexpected growth. This set of circumstances results in under-utilized
capacity and wasted money. With Amazon Glacier, you pay only for what
you use. Amazon Glacier changes the game for data archiving and backup because
you pay nothing up front, pay a very low price for storage, and can scale your usage
up or down as needed, while AWS handles all of the operational heavy lifting
required to do data retention well. It only takes a few clicks in the AWS
Management Console to set up Amazon Glacier, and then you can upload any
amount of data you choose.
Figure 11: Amazon Glacier
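At the quoted rate of $0.01 per gigabyte per month, archival cost scales linearly with what you actually store. A quick sketch: only the per-GB rate comes from the text above; the data sizes are made-up examples:

```python
# Glacier-style pay-for-what-you-use archiving cost at the rate quoted
# in the text ($0.01 per GB per month). The data sizes are made-up
# examples; there is no capacity to guess at or overprovision.
RATE_PER_GB_MONTH = 0.01  # from the text

def archive_cost(gb_stored, months):
    return gb_stored * RATE_PER_GB_MONTH * months

# Archiving 500 GB of backups for a year:
print(round(archive_cost(500, 12), 2))   # 60.0
# Scaling down: after deleting 400 GB you pay only for the remainder.
print(round(archive_cost(100, 12), 2))   # 12.0
```

Contrast this with the overprovisioning problem described above: there is no upfront purchase sized for "unexpected growth", only a bill that tracks actual usage.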
4.3.3. Amazon Elastic Block Storage (EBS)
Amazon Elastic Block Store (EBS) provides block level storage volumes for use
with Amazon EC2 instances. Amazon EBS volumes are network-attached, and
persist independently from the life of an instance. Amazon EBS provides highly
available, highly reliable, predictable storage volumes that can be attached to a
running Amazon EC2 instance and exposed as a device within the instance. Amazon
EBS is particularly suited for applications that require a database, file system, or
access to raw block level storage.
Figure 12: Amazon Elastic Block Storage (EBS)
4.3.4 AWS Storage Gateway
AWS Storage Gateway is a service connecting an on-premises software appliance
with cloud-based storage to provide seamless and secure integration between an
organization's on-premises IT environment and AWS's storage infrastructure. The
service enables you to securely upload data to the AWS cloud for cost-effective
backup and rapid disaster recovery. AWS Storage Gateway supports industry-
standard storage protocols that work with your existing applications. It provides
low-latency performance by maintaining data on your on-premises storage hardware
while asynchronously uploading this data to AWS, where it is encrypted and
securely stored in Amazon Simple Storage Service (Amazon S3) or Amazon Glacier.
Using AWS Storage Gateway, you can back up point-in-time snapshots of your
on-premises application data to Amazon S3 for future recovery. In the event you need
replacement capacity for disaster recovery purposes, or if you want to leverage
Amazon EC2's on-demand compute capacity for additional capacity during peak
periods, for new projects, or as a more cost-effective way to run your normal
workloads, you can use AWS Storage Gateway to mirror your on-premises data to
Amazon EC2 instances.
Figure 13: AWS Storage Gateway
4.4 DATABASE
4.4.1 Amazon Relational Database Service (Amazon RDS)
Amazon Relational Database Service (Amazon RDS) is a web service that makes it
easy to set up, operate, and scale a relational database in the cloud. It provides cost
efficient and resizable capacity while managing time-consuming database
administration tasks, freeing you up to focus on your applications and business.
Amazon RDS gives you access to the capabilities of a familiar MySQL, Oracle, SQL
Server or Postgre SQL database. This means that the code, applications, and tools
you already use today with your existing databases can be used with Amazon RDS.
Amazon RDS automatically patches the database software and backs up your
database, storing the backups for a retention period that you define and enabling
point-in-time recovery. You benefit from the flexibility of being able to scale the
compute resources or storage capacity associated with your relational database
instance by using a single API call. In addition, Amazon RDS makes it easy to use
replication to enhance availability and reliability for production databases and to
scale out beyond the capacity of a single database deployment for read-heavy
database workloads.
Figure 14: AWS RDS
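The single-API-call provisioning and scaling described above can be sketched with boto3 as follows. The instance identifier, class, and sizes are illustrative, and the calls themselves are commented out because they need AWS credentials and would create billable resources.

```python
# Sketch: provisioning a MySQL instance with one API call, then scaling
# its storage with another. All names and sizes are illustrative; the
# boto3 calls are commented out because they require AWS credentials.
db_params = {
    "DBInstanceIdentifier": "mydb",          # hypothetical instance name
    "Engine": "mysql",
    "DBInstanceClass": "db.t2.micro",
    "AllocatedStorage": 20,                  # GiB
    "MasterUsername": "admin",
    "MasterUserPassword": "change-me-please",
    "BackupRetentionPeriod": 7,              # days of automated backups
}
# import boto3
# rds = boto3.client("rds")
# rds.create_db_instance(**db_params)
# # Later, scale storage with a single call:
# rds.modify_db_instance(DBInstanceIdentifier="mydb", AllocatedStorage=100)
print(db_params["Engine"])
```

Because the engine is stock MySQL, any existing client library or tool that speaks the MySQL protocol can connect to the resulting endpoint unchanged.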
4.4.2. Amazon DynamoDB
Amazon DynamoDB is a fast, fully managed NoSQL database service that makes it simple and cost-effective to store and retrieve any amount of data, and serve any level of request traffic. Amazon DynamoDB is designed to address the core problems of database management: performance, scalability, and reliability. Developers can create a database table that can store and retrieve any amount of data and serve any level of request traffic. DynamoDB automatically spreads the data and traffic for the table over a sufficient number of servers to handle the request capacity specified by the customer and the amount of data stored, while maintaining consistent, fast performance. All data items are stored on solid state drives (SSDs) and are automatically replicated across three Availability Zones in a Region to provide built-in high availability and data durability. With DynamoDB, you can offload the administrative burden of operating and scaling a highly available, distributed database cluster while paying a low variable price for only the resources you consume.
Figure 15: Amazon DynamoDB
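A minimal sketch of writing and reading one item illustrates the table model described above. DynamoDB's low-level API uses typed attribute values ("S" for string, "N" for number). The table name and item data are hypothetical, and the boto3 calls are commented out since they need credentials and an existing table.

```python
# Sketch: storing and reading one item via the low-level DynamoDB API.
# The "Music" table and its Artist/SongTitle key are hypothetical; the
# boto3 calls are commented out because they require AWS credentials.
item = {
    "Artist": {"S": "No One You Know"},
    "SongTitle": {"S": "Call Me Today"},
    "Year": {"N": "2013"},   # numbers travel as strings on the wire
}
# import boto3
# ddb = boto3.client("dynamodb")
# ddb.put_item(TableName="Music", Item=item)
# got = ddb.get_item(
#     TableName="Music",
#     Key={"Artist": item["Artist"], "SongTitle": item["SongTitle"]},
# )
# print(got["Item"]["Year"]["N"])
print(item["Year"]["N"])
```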
4.4.3. Amazon ElastiCache
Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale
an in-memory cache in the cloud. The service improves the performance of web
applications by allowing you to retrieve information from a fast, managed, in-
memory caching system, instead of relying entirely on slower disk-based databases.
ElastiCache supports two open-source caching engines:
Memcached: a widely adopted memory object caching system. ElastiCache is protocol-compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service.
Figure 16: Amazon ElastiCache
Redis: a popular open-source in-memory key-value store that supports data structures such as sorted sets and lists. ElastiCache supports Redis master/slave replication, which can be used to achieve cross-AZ redundancy.
Amazon ElastiCache automatically detects and replaces failed nodes, reducing the overhead associated with self-managed infrastructures, and provides a resilient system that mitigates the risk of overloaded databases, which slow website and application load times. Through integration with Amazon CloudWatch, Amazon ElastiCache provides enhanced visibility into key performance metrics associated with your Memcached or Redis nodes.
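The "retrieve from a fast in-memory cache instead of a slower disk-based database" pattern above is usually implemented as cache-aside. The sketch below uses a plain Python dict as a stand-in for a cache node; with ElastiCache you would point a Memcached or Redis client at your cluster endpoint instead.

```python
import time

# Sketch of the cache-aside pattern ElastiCache enables: check the cache
# first and fall back to the (slow) database only on a miss. A plain dict
# stands in for a Memcached/Redis node here.
cache = {}

def slow_db_query(user_id):
    time.sleep(0.01)           # simulate disk-based database latency
    return {"id": user_id, "name": "user-%d" % user_id}

def get_user(user_id):
    key = "user:%d" % user_id
    if key in cache:           # cache hit: served from memory
        return cache[key]
    value = slow_db_query(user_id)
    cache[key] = value         # populate so the next read is fast
    return value

get_user(42)                   # miss: hits the database
print(get_user(42)["name"])    # hit: served from the in-memory cache
```

The same logic, with the dict replaced by a networked cache client, is what offloads read traffic from the database.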
4.4.4 Amazon Redshift
Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service
that makes it simple and cost-effective to efficiently analyze all your data using your
existing business intelligence tools.
It is optimized for datasets ranging from a few hundred gigabytes to a petabyte or
more and costs less than $1,000 per terabyte per year, a tenth the cost of most
traditional data warehousing solutions. Amazon Redshift delivers fast query and I/O
performance for virtually any size dataset by using columnar storage technology and
parallelizing and distributing queries across multiple nodes.
We have made Amazon Redshift easy to use by automating most of the common
administrative tasks associated with provisioning, configuring, monitoring, backing
up, and securing a data warehouse. Powerful security functionality is built-in.
Amazon Redshift supports Amazon VPC out of the box and you can encrypt all your
data and backups with just a few clicks. Once you've provisioned your cluster, you
can connect to it and start loading data and running queries using the same SQL-
based tools you use today.
Figure 17: Amazon Redshift
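The cost claim above can be made concrete with simple arithmetic, using the figures quoted in the text (under $1,000 per terabyte per year for Redshift versus roughly ten times that for a traditional warehouse; both figures are illustrative upper bounds, not official prices):

```python
# Rough cost comparison using the figures quoted above: Amazon Redshift
# at under $1,000 per terabyte per year versus roughly $10,000 per
# terabyte per year for a traditional data warehouse.
terabytes = 50
redshift_annual = terabytes * 1_000       # upper bound from the text
traditional_annual = terabytes * 10_000   # "a tenth the cost" implies ~10x
print(redshift_annual)                        # 50000
print(traditional_annual)                     # 500000
print(traditional_annual // redshift_annual)  # 10
```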
4.5. ANALYTICS
4.5.1. Amazon Elastic MapReduce (Amazon EMR)
Amazon Elastic MapReduce (Amazon EMR) is a web service that makes it easy to
quickly and cost-effectively process vast amounts of data. Amazon EMR uses
Hadoop, an open source framework, to distribute your data and processing across a
resizable cluster of Amazon EC2 instances. Amazon EMR is used in a variety of
applications, including log analysis, web indexing, data warehousing, machine
learning, financial analysis, scientific simulation, and bioinformatics. Customers
launch millions of Amazon EMR clusters every year.
Figure 18: Amazon Elastic MapReduce
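Launching the resizable Hadoop cluster described above comes down to one API call. The parameter block below is a sketch: the cluster name, instance counts, and release label are illustrative, and the boto3 call is commented out because it requires credentials and would start billable EC2 instances.

```python
# Sketch: the parameters for launching a small Hadoop cluster with the
# EMR API. Names, counts, and the release label are illustrative; the
# boto3 call is commented out because it requires AWS credentials.
cluster_params = {
    "Name": "log-analysis-demo",        # hypothetical cluster name
    "ReleaseLabel": "emr-5.36.0",       # example EMR release
    "Applications": [{"Name": "Hadoop"}],
    "Instances": {
        "InstanceCount": 3,             # 1 master + 2 core nodes
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "KeepJobFlowAliveWhenNoSteps": False,  # shut down after the job
    },
    "JobFlowRole": "EMR_EC2_DefaultRole",
    "ServiceRole": "EMR_DefaultRole",
}
# import boto3
# emr = boto3.client("emr")
# response = emr.run_job_flow(**cluster_params)
# print(response["JobFlowId"])
print(cluster_params["Instances"]["InstanceCount"])
```

Resizing the cluster later is a matter of changing the instance count, which is what makes EMR suit bursty workloads such as log analysis.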
4.5.2. AWS Data Pipeline
AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. With AWS Data Pipeline, you can regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon Elastic MapReduce (EMR).
AWS Data Pipeline helps you easily create complex data processing
workloads that are fault tolerant, repeatable, and highly available. You don’t have to
worry about ensuring resource availability, managing inter-task dependencies,
retrying transient failures or timeouts in individual tasks, or creating a failure
notification system. AWS Data Pipeline also allows you to move and process data that was previously locked up in on-premises data silos.
Figure 19: Amazon Data Pipeline
4.6. Amazon AppStream
Amazon AppStream is a flexible, low-latency service that lets you stream resource-intensive applications and games from the cloud. It deploys and renders your application on AWS infrastructure and streams the output to mass-market devices, such as personal computers, tablets, and mobile phones. Because your application is running in the cloud, it can scale to handle vast computational and storage needs, regardless of the devices your customers are using. You can choose to stream either all or parts of your application from the cloud. Amazon AppStream enables use cases for games and applications that wouldn't be possible running natively on mass-market devices. Using Amazon AppStream, your games and applications are no longer constrained by the hardware in your customers' hands. Amazon AppStream includes an SDK that currently supports streaming applications from Microsoft Windows Server 2008 R2 to devices running FireOS, Android, iOS, and Microsoft Windows. A Mac OS X SDK is planned for 2014.
Figure 20: Amazon AppStream
4.6.1. Amazon Elastic Transcoder
Amazon Elastic Transcoder is media transcoding in the cloud. It is designed to be a highly scalable, easy-to-use, and cost-effective way for developers and businesses to convert (or transcode) media files from their source format into versions that will play back on devices like smartphones, tablets, and PCs. Amazon Elastic Transcoder manages all aspects of the transcoding process for you transparently and automatically. There is no need to administer software, scale hardware, tune performance, or otherwise manage transcoding infrastructure. You simply
create a transcoding job specifying the location of your source video and how you
want it transcoded. Amazon Elastic Transcoder also provides transcoding presets for
popular output formats, which means that you don’t need to guess about which
settings work best on particular devices. All these features are available via service
APIs and the AWS Management Console.
Figure 21: Amazon Elastic Transcoder
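The "create a transcoding job specifying your source and a preset" step described above can be sketched with boto3 as follows. The pipeline ID and S3 object keys are placeholders, and the call is commented out because it needs credentials and an existing pipeline.

```python
# Sketch: a transcoding job that converts a source video into a
# smartphone-friendly rendition using a preset. The pipeline ID, object
# keys, and preset ID are placeholders; the boto3 call is commented out
# because it requires AWS credentials and an existing pipeline.
job = {
    "PipelineId": "1111111111111-abcde1",       # hypothetical pipeline
    "Input": {"Key": "raw/source-video.mp4"},   # object in the input bucket
    "Output": {
        "Key": "out/video-720p.mp4",
        "PresetId": "1351620000001-000010",     # example system preset id
    },
}
# import boto3
# ets = boto3.client("elastictranscoder")
# response = ets.create_job(**job)
# print(response["Job"]["Id"])
print(job["Output"]["Key"])
```

Using a preset means the device-specific codec settings are chosen for you, which is the point the text makes about not guessing which settings work best.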
CHAPTER 5. WORKING OF AWS
5.1. HOW DOES AWS WORK?
When you create a stack, AWS CloudFormation makes underlying service calls to AWS to provision and configure your resources. Note that AWS CloudFormation can perform only actions that you have permission to perform. For example, to create EC2 instances by using AWS CloudFormation, you need permissions to create instances. You'll need similar permissions to terminate instances when you delete stacks with instances. You use AWS Identity and Access Management (IAM) to manage permissions. The calls that AWS CloudFormation makes are all declared by your template. For example, suppose you have a template that describes an EC2 instance with a t1.micro instance type. When you use that template to create a stack, AWS CloudFormation calls the Amazon EC2 create-instance API and specifies the instance type as t1.micro. The following diagram summarizes the AWS CloudFormation workflow for creating stacks.
Figure 22: Working of AWS
1. Design an AWS CloudFormation template (a JSON-formatted document) in AWS CloudFormation Designer or write one in a text editor. You can also choose to use a provided template. The template describes the resources you want and their settings. For example, suppose you want to create an EC2 instance. Your template can declare an EC2 instance and describe its properties.
2. Save the template locally or in an S3 bucket. If you created a template, save it with any file extension, such as .json or .txt.
3. Create an AWS CloudFormation stack by specifying the location of your template, such as a path on your local computer or an Amazon S3 URL. If the template contains parameters, you can specify input values when you create the stack. Parameters enable you to pass in values to your template so that you can customize your resources each time you create a stack. You can create stacks by using the AWS CloudFormation console, API, or AWS CLI.
Note: If you specify a template file stored locally, AWS CloudFormation uploads it to an S3 bucket in your AWS account. AWS CloudFormation creates a bucket for each region in which you upload a template file. The buckets are accessible to anyone with Amazon Simple Storage Service (Amazon S3) permissions in your AWS account. If a bucket created by AWS CloudFormation is already present, the template is added to that bucket. You can use your own bucket and manage its permissions by manually uploading templates to Amazon S3. Then whenever you create or update a stack, specify the Amazon S3 URL of a template file.
4. AWS CloudFormation provisions and configures resources by making calls to the AWS services that are described in your template.
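A minimal template of the kind described above might look like the following sketch. The AMI ID is a placeholder and would need to be replaced with a real image ID for your region.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal example: one t1.micro EC2 instance",
  "Resources": {
    "MyInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": "t1.micro",
        "ImageId": "ami-12345678"
      }
    }
  }
}
```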
After all the resources have been created, AWS CloudFormation reports that your stack has been created. You can then start using the resources in your stack. If stack creation fails, AWS CloudFormation rolls back your changes by deleting the resources that it created.
5.1.1 Updating a Stack with Change Sets
When you need to update your stack's resources, you can modify the stack's
template. You don't need to create a new stack and delete the old one. To update a
stack, create a change set by submitting a modified version of the original stack
template, different input parameter values, or both. AWS CloudFormation compares
the modified template with the original template and generates a change set. The
change set lists the proposed changes. After reviewing the changes, you can execute
the change set to update your stack or you can create a new change set. The
following diagram summarizes the workflow for updating a stack.
Important
Updates can cause interruptions. Depending on the resource and properties that you
are updating, an update might interrupt or even replace an existing resource. For
more information, see AWS CloudFormation Stacks Updates.
You can modify an AWS CloudFormation stack template by using AWS CloudFormation Designer or a text editor. For example, if you want to change the instance type for an EC2 instance, you would change the value of the InstanceType property in the original stack's template.
Save the AWS CloudFormation template locally or in an S3 bucket.
Create a change set by specifying the stack that you want to update and the location
of the modified template, such as a path on your local computer or an Amazon S3
URL. If the template contains parameters, you can specify values when you create
the change set. For more information about creating change sets, see the section called Updating Stacks Using Change Sets. Note: If you specify a template that is stored on your local computer, AWS CloudFormation automatically uploads your template to an S3 bucket in your AWS account.
View the change set to check that AWS CloudFormation will perform the changes that you expect. For example, check whether AWS CloudFormation will replace any
critical stack resources. You can create as many change sets as you need until you
have included the changes that you want.
Important: Change sets don't indicate whether your stack update will be successful.
For example, a change set doesn't check if you will surpass an account limit, if you're
updating a resource that doesn't support updates, or if you have sufficient
permissions to modify a resource, all of which can cause a stack update to fail.
Execute the change set that you want to apply to your stack. AWS CloudFormation updates your stack by updating only the resources that you modified and signals that your stack has been successfully updated. If the stack update fails, AWS CloudFormation rolls back changes to restore the stack to the last known working state.
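The create/review/execute cycle above can be sketched with boto3. The stack name, change-set name, and template URL are placeholders, and the calls are commented out because they require credentials and an existing stack.

```python
# Sketch: creating, reviewing, and executing a change set against an
# existing stack. Names and the template URL are placeholders; the boto3
# calls are commented out because they require AWS credentials.
change_set_params = {
    "StackName": "my-app-stack",               # hypothetical stack
    "ChangeSetName": "resize-instance",
    "TemplateURL": "https://s3.amazonaws.com/my-bucket/modified.json",
}
# import boto3
# cfn = boto3.client("cloudformation")
# cfn.create_change_set(**change_set_params)
# # Review the proposed changes before applying them:
# changes = cfn.describe_change_set(
#     StackName="my-app-stack", ChangeSetName="resize-instance")
# # Apply only once the change set looks right:
# cfn.execute_change_set(
#     StackName="my-app-stack", ChangeSetName="resize-instance")
print(change_set_params["ChangeSetName"])
```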
CHAPTER 6. PRICING
Amazon Web Services: Complex Pricing Options
AWS offers several pricing options. It offers on-demand usage pricing that is rounded up to the nearest hour; this is Amazon's most expensive pricing. Another option is called a Reserved Instance, where you make a commitment for either one year or three years for a specific VM instance. These come with payment options: pay in full up front, pay a portion of the cost up front, or pay nothing up front. AWS' lowest prices come from paying all of the cost up front. Any discounts are tied to up-front payments, which can create a trade-off in agility and flexibility, but more on that later.
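The trade-off above can be illustrated with simple arithmetic. The hourly and upfront figures below are hypothetical, not official AWS prices; the point is only the shape of the comparison.

```python
# Rough comparison of the pricing models described above, using
# illustrative (not official) rates for one VM instance.
HOURS_PER_YEAR = 24 * 365                    # 8760

on_demand_rate = 0.10                        # $/hour, hypothetical
on_demand_annual = on_demand_rate * HOURS_PER_YEAR

# A hypothetical 1-year all-upfront Reserved Instance for the same VM:
reserved_all_upfront = 500.00                # one payment, no hourly fee

savings = on_demand_annual - reserved_all_upfront
print(round(on_demand_annual, 2))   # 876.0
print(round(savings, 2))            # 376.0
```

The saving is real only if the instance actually runs all year; the upfront commitment is what sacrifices the agility mentioned above.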
Figure 23: Amazon Web Services Cost Estimation Calculator
Figure 24: Amazon Web Services Pricing
CHAPTER 6. ADVANTAGES AND DISADVANTAGES
6.1. ADVANTAGES
1. Easy to use, flexible, cost-effective, reliable, secure, scalable, and high-performance.
2. Trade capital expense for variable expense.
3. Benefit from massive economies of scale.
4. Stop guessing about capacity.
5. Increase speed and agility.
6. Stop spending money running and maintaining data centres.
7. Go global in minutes.
8. Thousands of independent software vendors like SAP, Oracle, Adobe,
Microsoft, Esri, etc. have made their software available on AWS to
customers along with go-to-market partnerships with system integrators
(such as Capgemini, Cognizant and Wipro) that provide both application
development expertise and managed services.
9. More than five times the compute capacity in use compared with the aggregate total of the other fourteen leading providers in the market.
10. Seven years in the market, with hundreds of thousands of customers in over 190 countries running every imaginable use case on AWS.
11. Groups of data centers, which it calls "regions," on the East and West Coasts
of the U.S., and in Ireland, Japan, Singapore, Australia and Brazil; it also has
one region dedicated to the U.S. federal government.
6.2. DISADVANTAGES
1. You must have an account with Amazon and be connected to the internet; the services are usable only from internet-connected computers.
2. The learning curve for a software-defined data centre is sometimes steep for larger enterprises.
3. Billing is extremely confusing; NPI recommends going through a reseller for
a more detailed monthly bill.
4. AWS does not include enterprise-grade support by default. Customers will
need to buy Business tier support for this, which carries up to a 10%
premium on the customer's overall AWS spend.
5. Almost all enterprise customers require a custom agreement (vs. the click-
through agreement online), and significant terms-and-conditions negotiation.
6. AWS has experienced high-profile outages in recent history.
CHAPTER 7. WHAT'S NEXT
AWS, an extremely customizable cloud computing provider, has the potential to become an era-defining cloud service provider. In a brief period of time it has built a trusted and rapidly expanding cloud infrastructure, one that continues to grow and dominate the computing and IoT market. Through these defining parameters one can easily foresee tremendous change in cloud computing, dominated by AWS. Unlike its contemporaries, AWS aims at greater affordability, customization, and the "get what I want!" forces of the modern-day market.
Market Expansion to Battle Competition
With the rapid expansion of digitalization and IoT (thanks especially to the smartphone industry), the area of cloud consumption is vast; vast enough that it can and will support the competitors. But, assuming the consumer's discretion is wise, one would opt for AWS at least for a trial, which, it seems, will be appealing enough for consumers to embrace it permanently. Other competitors are also well equipped, though. Long-tail cloud use cases that AWS won't optimize for will continue to spring up. Today's long-tail use case may be the size of the entire 2017 cloud computing market 10-12 years down the line: Google, DigitalOcean, and Microsoft are well positioned to pick up what Amazon is proud to offer. But as far as the need of the hour is concerned, AWS has an edge over these competitors.
With its expanding wings, where AWS has already shaken hands with Nokia for 5G, IoT, and cloud services, it is further attracting the attention and trust of cloud consumers and service providers. People are willing to collaborate with AWS due to the gravity built around it. Apart from its goodwill privileges, AWS also readily caters to the modern-day needs of cloud services. It allows its customers to have at their disposal a full-fledged virtual cluster of computers, available all the time, through the internet. Therefore, with its doubly tightened bow, AWS Cloud Services will surely overshadow its counterparts and expand its reach as time progresses.
AWS Competitive Market Strategies
In a practical sense, a company won't need more than one server until it has more users than it can accommodate. But, in consideration of cost reduction, one might as well rethink maximizing the profit margin, and that is where AWS stands high and apart. Having a low-margin monopoly in its DNA, Amazon will sweep away the expensive service providers. In the flux of growing enlightenment about cloud computing and its abundant expenses, similar companies will obviously cut their costs too to battle the competition, but to what end? They also have to keep up their brand value (as it has been carried so far). They offer services, which AWS also offers, at relatively higher prices. Ultimately the consumer will benefit from the cut-throat competition and the search for market domination. Prices may be driven down so low that cloud services become cheaper than an internet connection. Had the infrastructure not been this wide, with the kind of demand we see nowadays, the same consumers would be facing a crisis instead.
Even with respect to its service delivery, AWS stands higher than its counterparts
due to its affordability, efficiency and trust. The success of AWS will depend on
bringing the center of data mass onto AWS for transactional data. Besides that, other
factors such as going up the application stack strategically will also help AWS climb
the ladder of success. If the future cloud computing infrastructure demands what
AWS is already offering, then it will be highly unlikely for other budding
competitors to match steps with the success that AWS would have gained by that
time.
Ground Realities of the Market
AWS is safer, more cost-effective, and more efficient compared to its corresponding competitors. Google has proficiency in machine learning (it puts greater effort into machine learning development than into cloud betterment), while Microsoft is good at solving only about 80% of cloud computing needs and problems. So, while AWS brings exclusivity in its approach, it becomes sensible to think of it as a master of the arena. Every firm has its specialty in something, and
AWS has its in cloud computing. With proper grip over safety, affordability and
reliance, AWS Cloud Services will continue to exhibit its mastery over cloud
computing in the coming years.
Cloud computing is a sticky business—one has to keep continuing unless something
very unfortunate happens and the company/individual has to switch to some other
company. AWS delivers a promising cloud service that is capable of lasting with you for a lifetime. You won't have to spend years and millions of dollars on inter-cloud transfers, for AWS Cloud Services will give you no cause for discouragement. It will let you keep your customer relations intact and blooming through its seamless services, thereby securing its own success.
CHAPTER 8. CONCLUSION
Amazon Web Services provides many services that we can use to connect over the internet and use cloud storage. In other words, without web services we could never reach beyond our own computers. Amazon provides these services at very low cost. With AWS we pay only for what we use and for how long we use it. When we use AWS services, we find fewer errors, faster work speed, and easy usability.
The role of an enterprise architect is to enable the organization to be
innovative and respond rapidly to changing customer behaviour. The enterprise
architect holds the long-term business vision of the organization and is responsible
for the journey it has to take to reach this target landscape. They support an
organization to achieve their objectives by successfully evolving across all domains;
Business, Application, Technology and Data.
This is no different when moving to the cloud. The enterprise architect role is key to successful cloud adoption. Enterprise architects can use AWS services as
architectural building blocks to guide the technology decisions of the organization to
realize the enterprise’s business vision.
It has been challenging for enterprise architects to measure their goals and
demonstrate their value with on-premises architectures. With AWS Cloud adoption,
enterprise architects can use AWS services to create traceability and relationships
across the enterprise architecture domains, allowing the architect to correctly track
how their organization is changing and improving. AWS lets the enterprise architect
address end-to-end traceability, operational modelling, and governance. It is easier to
gather data on transition architectures in the cloud as the organization moves to its
target state.
The wide breadth of AWS services and agility means it is also easier for
architects and application teams to respond rapidly when architectural deviations are
identified and changes need to take place.
Using AWS services, you can more easily execute and realize the value of
enterprise architecture practices.