Disaster Recovery using Amazon Web Services - Webinar
In the event of a disaster, you can quickly restore data locally or launch resources in Amazon Web Services (AWS) to help ensure business continuity. In this presentation, you will learn about the AWS services that you can leverage for your disaster recovery (DR) solution, four common DR architectures that leverage the AWS Cloud, and how to get started.
Archive or Backup
Big Data & High Performance Computing
Disaster recovery
Packaged business applications
Web applications
So let’s start with where DR fits into your continuity plans overall. It’s part of a business continuity continuum, and I’d like to point out that implementing DR is not an all-or-nothing proposition: you can work your way across the continuum, and today we’ll discuss some of the things to consider and how AWS can play a part.

The starting point is usually thinking about how to keep your applications up and running. You’ll have a requirement in the form of how many nines of reliability you need, keeping in mind that every nine you add after the first few adds a lot of cost, often around 10x for each additional nine. The next thing you’re likely to plan for is how to back up your data so it’s safe and available to you in the event of a disaster: how do you store your data so it’s durable and available when you need it? And then you need a plan for what to do in the unlikely event that one of those black-swan events occurs and you face a true disaster. How do you deal with recovery?
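To make the "nines" trade-off concrete, here is a small illustrative helper (not from the presentation) that converts an availability target into worst-case downtime per year. It shows why each additional nine is so expensive: the permitted downtime shrinks by 10x every time.

```python
# Illustrative sketch: convert an availability target in "nines" into
# worst-case downtime per year.

HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years for simplicity

def downtime_hours_per_year(nines: int) -> float:
    """Worst-case downtime per year for N nines of availability
    (e.g. 3 nines = 99.9%)."""
    availability = 1 - 10 ** (-nines)
    return HOURS_PER_YEAR * (1 - availability)

for n in range(2, 6):
    print(f"{n} nines ({(1 - 10 ** (-n)) * 100:.3f}%): "
          f"{downtime_hours_per_year(n):.2f} hours/year")
```

At 3 nines you are allowed roughly 8.76 hours of downtime per year; at 4 nines, under an hour. That shrinking budget is what drives the cost curve mentioned above.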
Disaster recovery is at one end of that continuum, and how you choose to implement your DR is influenced by a couple of requirements:

How long you’re able to be down; that’s your Recovery Time Objective, or RTO.
How much data you can tolerate losing, or how in sync your backup data has to be with what you have in your operating environment; that’s your Recovery Point Objective, or RPO.

These are not technological things; these are business considerations. The easy answer is an RTO of minutes and an RPO of no data loss, but that’s likely to be much more expensive than is feasible, and chances are you don’t need to be that stringent. So now you can start to analyze the trade-offs between the cost of achieving various recovery times and data restoration. And you start to think about the requirements for different types of outages, from restoring a file that was accidentally deleted through to handling a complete system outage due to a natural disaster.

A common path to the cloud is to start with backup and recovery plans that use the cloud for your backups, and then identify the applications that are candidates for a full DR plan in the cloud. Any app that you can already run in the cloud is low-hanging fruit; replicating the full stack is at the more complex and involved end of the scale. So you have a lot of flexibility in how you approach the solution that fits you best, and Jeff is going to talk about what some of those architectures look like and how you can implement them.
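The RTO/RPO trade-off can be sketched as a simple selection problem over the four DR architectures this webinar covers (backup-and-restore, pilot light, warm standby, multi-site). The RTO, RPO, and relative cost figures below are hypothetical placeholders for illustration, not numbers from the presentation.

```python
# Illustrative only: hypothetical RTO/RPO/cost figures showing how the
# business requirement narrows the choice of DR architecture.

# (strategy, typical RTO in hours, typical RPO in hours, relative cost)
STRATEGIES = [
    ("backup-and-restore", 24.0, 24.0, 1),
    ("pilot-light",         4.0,  1.0,  3),
    ("warm-standby",        1.0,  0.25, 6),
    ("multi-site",          0.1,  0.0,  10),
]

def cheapest_meeting(rto_hours: float, rpo_hours: float):
    """Return the lowest-cost strategy whose typical RTO and RPO both
    meet the business requirement, or None if nothing qualifies."""
    candidates = [s for s in STRATEGIES
                  if s[1] <= rto_hours and s[2] <= rpo_hours]
    return min(candidates, key=lambda s: s[3], default=None)

print(cheapest_meeting(rto_hours=8, rpo_hours=2))    # pilot-light qualifies
print(cheapest_meeting(rto_hours=0.5, rpo_hours=0))  # only multi-site qualifies
```

The point is the shape of the decision, not the numbers: tighter RTO/RPO requirements push you toward the more expensive architectures on the right of the continuum.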
We’re often asked how it is that some customers are able to reduce costs as dramatically as the claims I made earlier, while still getting the recovery performance they need. That’s a great question so I’ll take a minute to point out in simple terms one of the ways that can be accomplished. [talk to the slide]
On your own
Bringing on a full-time consultant
With an ISV solution
With a system integrator
AWS has eight Regions, and each Region is a separate cloud. This gives our customers complete control over where data is stored, and a lot of options for where to host your disaster recovery site. You are literally a few mouse clicks away from deploying across the globe. This is a lot easier than doing the same with off-site tape backup, your own data centers, or colocation facilities.
Slide notes: You can choose to deploy and run your applications in multiple physical locations within the AWS cloud. Amazon Web Services are available in geographic Regions. When you use AWS, you can specify the Region in which your data will be stored, instances run, queues started, and databases instantiated.

For most AWS infrastructure services, including Amazon EC2, there are eight Regions: US East (Northern Virginia), US West (Northern California), US West (Oregon), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), AWS GovCloud (US), and South America (Sao Paulo).

Within each Region are Availability Zones (AZs). Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low-latency network connectivity to other Availability Zones in the same Region. By launching instances in separate Availability Zones, you can protect your applications from a failure (unlikely as it might be) that affects an entire zone. Regions consist of one or more Availability Zones, are geographically dispersed, and are in separate geographic areas or countries. The Amazon EC2 service level agreement commitment is 99.95% availability for each Amazon EC2 Region.
With AWS, you’ll see that the same security isolations are employed as would be found in a traditional data center. These include physical data center security, separation of the network, isolation of the server hardware, and isolation of storage. AWS customers have control over their data: they own the data, not us; they can encrypt their data at rest and in motion, just as they would in their own data center.
Our customers continue to make very heavy use of Amazon S3. We now process up to 500,000 S3 requests per second. Many of these are PUT requests, representing new data that is flowing in to S3. As of the end of the fourth quarter of 2011, there are 762 billion (762,000,000,000) objects in S3.
AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry standard 802.1q VLANs, this dedicated connection can be partitioned into multiple logical connections. This allows you to use the same connection to access public resources, such as objects stored in Amazon S3 using public IP address space, and private resources, such as Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC) using private IP space, while maintaining network separation between the public and private environments. Logical connections can be reconfigured at any time to meet your changing needs. http://aws.amazon.com/directconnect/

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a private, isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define. With Amazon VPC, you can define a virtual network topology that closely resembles a traditional network that you might operate in your own datacenter. You have control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can easily customize the network configuration for your Amazon VPC. For example, you can create a public-facing subnet for your webservers that has access to the Internet, and place your backend systems, such as databases or application servers, in a private-facing subnet with no Internet access.
You can leverage multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet. Additionally, you can create a hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate datacenter. http://aws.amazon.com/vpc/

Dedicated Instances are Amazon EC2 instances launched within your Amazon VPC that run on hardware dedicated to a single customer. Dedicated Instances let you take full advantage of the benefits of Amazon VPC and the AWS cloud – on-demand elastic provisioning, paying only for what you use, and a private, isolated virtual network – all while ensuring that your Amazon EC2 compute instances are isolated at the hardware level. You can easily create a VPC that contains dedicated instances only, providing physical isolation for all Amazon EC2 compute instances launched into that VPC, or you can choose to mix both dedicated instances and non-dedicated instances within the same VPC based on application-specific requirements. http://aws.amazon.com/dedicated-instances/
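The public/private subnet layout described above can be sketched with Python's standard `ipaddress` module. This is not from the webinar; the CIDR blocks are hypothetical examples of carving a VPC range into subnets.

```python
# Illustrative sketch: split a hypothetical VPC address range into the
# public and private subnets described above.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")        # the VPC's private range
public, private, *spare = vpc.subnets(new_prefix=24)

print(f"public subnet (web servers): {public}")
print(f"private subnet (databases):  {private}")
# The public subnet would route 0.0.0.0/0 to an Internet gateway;
# the private subnet would have no route to the Internet.
```

The routing behavior is configured in the VPC's route tables, not in the addressing itself; the split here only shows how the address space is partitioned.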
Advantages of simple Backup and Restore:
- Simple to get started
- Extremely cost effective (mostly backup storage)

Preparation Phase:
- Take backups of current systems
- Store backups in S3
- Describe the procedure to restore from backup on AWS:
  - Know which AMI to use; build your own as needed
  - Know how to restore the system from backups
  - Know how to switch to the new system
  - Know how to configure the deployment
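The preparation phase above can be sketched as a scripted runbook. This sketch only assembles the CLI steps rather than executing them; the bucket name, AMI id, and backup filename are hypothetical placeholders, while the `aws s3 cp` and `aws ec2 run-instances` subcommands are standard AWS CLI.

```python
# Illustrative sketch of a backup-and-restore runbook. Names below are
# hypothetical placeholders, not values from the presentation.

BACKUP_BUCKET = "my-dr-backups"   # hypothetical S3 bucket for backups
RESTORE_AMI = "ami-12345678"      # hypothetical pre-built restore AMI

def preparation_commands(backup_file: str) -> list:
    """Assemble (but do not run) the CLI steps: push the backup to S3,
    then record the launch command used to restore on AWS."""
    return [
        f"aws s3 cp {backup_file} s3://{BACKUP_BUCKET}/{backup_file}",
        f"aws ec2 run-instances --image-id {RESTORE_AMI} --count 1",
    ]

for cmd in preparation_commands("db-dump-2012-04-01.tar.gz"):
    print(cmd)
```

In practice the restore step needs more parameters (instance type, key pair, security groups) and a tested procedure for loading the backup onto the launched instance; the point here is that the runbook is written down and rehearsed before a disaster, not improvised during one.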