SPECIAL REPORT

Disaster Recovery Deep Dive
Preserve your data when disaster strikes

Copyright © InfoWorld Media Group. All rights reserved.
Sponsored by
INFOWORLD.COM | DEEP DIVE SERIES
Smart strategies for
data protection
Disaster recovery and business continuity may be IT’s most important
responsibilities. The key is to create smart policies collaboratively—and then
line up the appropriate technologies to match.
By Matt Prigge
DATA IS LIKE OXYGEN. Without it, most modern
organizations would quickly find themselves gasping
for breath; if they went too long without their customer,
transaction, or production data, they’d die.
The threats to data are everywhere, from natural disas-
ters and malware outbreaks to simple human error, like
accidental deletions. Yet surprisingly few IT organizations
take the time to construct the policies and infrastructure
needed to protect the data organizations need to survive.
The result? Poorly planned, ad hoc recovery and continuity
methodologies that no one fully understands and which are
often obsolete before they’re even implemented.
Thriving in today’s data-centric environment requires
a renewed focus on disaster recovery and business con-
tinuity procedures and policies. It’s not just an IT prob-
lem. Data loss affects every part of every organization;
because of that, every stakeholder—from the executive
suite to facilities and maintenance—needs to be involved
in the decisions surrounding data ownership.
This type of collaborative approach is neither easy nor
inexpensive. But it will allow both IT and business stake-
holders to be on the same page about where the organi-
zation’s disaster recovery (DR) and business continuity
(BC) dollars are being spent, and why. More important, it
will ensure everyone knows what to expect in the event
of a disaster and can respond appropriately.
WHAT ARE DR AND BC?
Disaster recovery and business continuity have come to
mean different things to different people. But the core
concepts are the same: Deploying the correct policies,
procedures and layers of technology to allow organiza-
tions to continue to function in the event disaster strikes.
This can range from something as simple as having easily
accessible backups in case of hardware failure to nearly
instant fail-over to a secondary data center when your
primary one is taken out by a cataclysmic event.
Generally speaking, DR is a series of procedures and
policies that when properly implemented allow your
organization to recover from a data disaster. How long
it takes you to recover, and how stale the data will be
once you’re back online, will depend on the recovery
time objectives (RTOs) and recovery point objectives
(RPOs) decided upon by your organization’s stakehold-
ers. DR’s job is not to provide consistent up-time, but to
ensure you can recover your data in the event that it’s
lost, inaccessible, or compromised.
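To make these two objectives concrete, here is a minimal sketch of how a single recovery event is judged against its RTO and RPO. The timestamps and thresholds are illustrative assumptions, not figures from this report:

```python
from datetime import datetime, timedelta

def meets_objectives(last_backup, failure, back_online, rto, rpo):
    """Return (rto_ok, rpo_ok) for a single recovery event.

    RTO: elapsed time from the failure until service is restored.
    RPO: staleness of the restored data -- the gap between the last
    good backup and the moment of failure.
    """
    rto_ok = (back_online - failure) <= rto
    rpo_ok = (failure - last_backup) <= rpo
    return rto_ok, rpo_ok

# Example: nightly backups and a four-hour restore, measured against
# a 6-hour RTO and a 24-hour RPO.
last_backup = datetime(2012, 5, 31, 23, 0)   # previous night's backup
failure = datetime(2012, 6, 1, 9, 0)
back_online = datetime(2012, 6, 1, 13, 0)    # restored four hours later
print(meets_objectives(last_backup, failure, back_online,
                       rto=timedelta(hours=6), rpo=timedelta(hours=24)))
# (True, True)
```

The same nightly backup schedule would fail a 4-hour RPO outright, which is exactly the kind of mismatch the stakeholder discussions below are meant to surface.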
A comprehensive disaster recovery plan would also
include anything else required to get your environment
back on its feet, including how and where backup sets
are stored, how they can be retrieved, what hardware is
needed to access them, where to obtain alternate hard-
ware if the primary systems are inaccessible, whether
installation media for applications would be required,
and so on.
Business continuity systems aim to mitigate or neu-
tralize the effects of a disaster by minimizing business
interruption. They typically rely on high availability (HA)
hardware—redundant systems operating in real time and
with real data, so that if one fails the others can pick up
the slack almost instantaneously with little or no effect
on business processes. An HA event (such as a cluster
node failing over) might introduce a minute’s worth of
service interruption, whereas recovering a backup onto a
new piece of hardware could take hours or even days—
especially if replacement hardware isn’t readily available.
But having HA systems in place does not ensure
disaster won’t strike. There will always be failure vec-
tors like data corruption, software glitches, power spikes,
and storms that can skip right on past every redundant
system to despoil your data and ruin your day. That’s
the most important thing to remember about HA: It
only lowers your probability of experiencing disaster, it
doesn’t make you immune.
Business continuity ties together the availability goals
of HA with the recoverability goals of DR. A carefully
conceived BC design can allow you to recover from a
full-blown disaster with limited downtime and near-zero
data loss. It is by far the most involved and expensive
approach to take, but many enterprises have concluded
that data is so vital to their day-to-day operations that BC is
too important not to pursue.
BUILDING A POLICY
There are four main stages to building a coherent BC/
DR policy for your organization. The first task is to define
a scope that details which applications will be covered by
the policy. Second, you need to define the risks you’re
attempting to protect your infrastructure against. The
third task is to define the precise requirements of the
policy in terms of RTO and RPO for the risk categories
you’ve defined. Only after you’ve completed those steps
can you determine the technological building blocks
you’ll need to deliver on the policies you’ve defined.
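The four stages map naturally onto a simple written record that both IT and business stakeholders can sign off on. The sketch below models one (the application names, risk categories, and numbers are hypothetical examples, not recommendations):

```python
from dataclasses import dataclass, field

@dataclass
class Objective:
    rto_hours: float  # maximum tolerable downtime for this risk
    rpo_hours: float  # maximum tolerable data staleness for this risk

@dataclass
class BcdrPolicy:
    scope: list                                      # stage 1: applications covered
    objectives: dict = field(default_factory=dict)   # stages 2-3: risk -> Objective

# Stage 4 -- choosing the technological building blocks -- follows
# from these numbers rather than preceding them.
policy = BcdrPolicy(scope=["order-db", "erp", "file-shares"])
policy.objectives["hardware failure"] = Objective(rto_hours=0.5, rpo_hours=0.25)
policy.objectives["site loss"] = Objective(rto_hours=48, rpo_hours=24)
print(policy.objectives["site loss"])
# Objective(rto_hours=48, rpo_hours=24)
```

Keeping the record this explicit makes the later testing and compliance work auditable: every test either met the written numbers or it didn't.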
Throughout this process, it’s vital to involve business
stakeholders in the decision-making process—however
nontechnical they may be. In fact, the less technical
those stakeholders are, the more critical it is that they
be involved. A failure to set downtime and recovery
expectations properly can have devastating, career-end-
ing consequences.
SCOPE
When considering the scope of a BC/DR plan, it’s impor-
tant to start by looking at the user-facing applications you
are attempting to protect. If you start from the perspec-
tive of the application and then determine everything
it needs to survive, you’ll find all the infrastructure and
data associated with it. If you start from the infrastruc-
ture and work up to the application, it’s easy to blind
yourself to things the application needs in order to run.
For example, you could create a flawless backup of your
application and database clusters, but still forget to back
up a file share that contains the application client, along
with all of its hard-to-reproduce configuration data.
Though it makes a great deal of sense to separate less
mission-critical applications from those that the business
depends upon heavily, it is increasingly common to see
BC/DR policy scopes that extend to nearly all of the
core IT infrastructure. This is primarily a result of the
popularity of server virtualization, network convergence,
and centralization of primary storage resources. In such
environments, a BC/DR policy that seeks to protect a
mission-critical application served by a small collection
of virtual machines may well end up including nearly
every major resource in the data center within its scope.
This is not necessarily a bad thing, but it can drive up the
costs significantly.
RISKS
After a scope for the policy has been set, the next step
is to determine which risks the policy will attempt to
address. IT architectures are vulnerable to a dizzying
array of risks, ranging from disk failure all the way up to
Hurricane Katrina-level natural disasters.
Typically, almost everyone will want to plan for com-
mon risks such as hardware failure, but some organi-
zations may choose not to plan for less-probable risks
such as regional disasters or pandemic outbreaks. It’s
not unheard of to see significant investments made in
an effort to mitigate disasters that the business’s core
operations—specifically human resources—could not
withstand in any case.
Knowing which risks you are and are not protecting
the organization’s data against is critical to thorough
planning and implementation. No matter what your
organization chooses to do, the most important thing is
that everyone from the C-level executives on down is
on board with the decisions made.
REQUIREMENTS
After you’ve set your scope and defined the risks, it’s
time to put hard numbers on how quickly your BC/DR
policy dictates you recover. At the very least, this should
include targets for recovery times and recovery points.
However, your organization may not decide to place the
same RTO/RPO requirements on every kind of disaster.
For example, your organization may require a
30-minute RTO and 15-minute RPO for a particularly
critical database application in the face of hardware and
software failures, but may be satisfied with longer RTO/
RPOs if the entire office building is destroyed by a fire
and employees are unable to return to work.
Of course, we’d all like to have near-zero recovery
time and data loss. While this is absolutely possible with
technology available today, it also may be prohibitively
expensive. Thus, defining policy requirements is a bal-
ancing act between the relative cost of providing ultra-
low RTO/RPO and the cost to the business of downtime
or data loss.
However, it’s important to avoid getting caught up in
discussions of how much recovery options will cost until
you’ve determined what the organization truly needs in
order to remain viable. Too often, huge price tags will
dissuade IT from even offering aggressive RTO/RPO
solutions for fear of being laughed out of the corner
office. If in the final analysis budgets won’t support more
expensive options, a collective decision can be made to
scale back the requirements, but it will be made based
on a full understanding of the situation.
BUILDING BLOCKS
After you have a solid idea of the services you need to
protect, the risks you need to protect them from, and
your recovery objectives, you can figure out the tech-
nological building blocks you’ll need to deliver on the
policy. Due to the sheer number of different solutions
available, there is really no one right way to protect any
given infrastructure.
The trick to designing a good BC/DR solution is to
treat each component as a layer of protection, each with
different capabilities that combine to provide a compre-
hensive solution.
SOFTWARE BUILDING BLOCKS
The heart of any BC/DR strategy is the software that
makes it tick. Generally speaking, this will involve at least
one kind of backup software for disaster recovery, but
may also leverage application-level backup/replication
and even full-blown host-based replication software to
deliver on business continuity goals.
Agent-Based Backup—Traditionally, the most com-
mon software component you’ll see is the agent-based
backup utility. This software generally utilizes one or
more centralized server installations to draw backup data
from software agents installed within the individual serv-
ers you want to protect, then stores the backups on tape,
direct-attached disk (DAS), or other storage hardware.
If disaster strikes and data is lost, the data can be
restored to the way it was at the time of the last backup.
This works reasonably well for situations where you’re
protecting against data corruption or deletion and there
is a relatively long RPO (usually 24 hours or more).
However, when risks include wholesale loss of a server
containing a substantial amount of data, traditional
agent-based backup is unlikely to meet more aggressive
RTO requirements, since restoring a server from bare
metal can require a significant amount of time.
But don’t write off agent-based backup utilities just
yet. Despite their stodgy reputation, modern backup
agents may offer advanced features such as image-
based backups and client side deduplication, which can
both shorten recovery objectives and decrease the time
needed to create backup sets.
Image-based backups capture the contents of an
entire server at once instead of file by file and restore
the machine exactly as it was, making backups easier
to capture and restore. Client-side deduplication lets
you use cheap centralized backup storage and distrib-
ute the dedupe load, making backup over a WAN faster
because there’s less data to pass down the wire. These
more advanced forms of agent-based backup are useful
for organizations who need to shorten their RTO/RPO
but don’t have the virtualization stacks in place to make
backup more seamless.
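Client-side deduplication boils down to hashing chunks of data before they leave the server and sending only the chunks the central store hasn't seen. This toy sketch shows the idea with fixed-size chunks and SHA-256 fingerprints; real products use far more sophisticated chunking, and the function names here are illustrative:

```python
import hashlib

def backup_chunks(data, seen_hashes, chunk_size=4096):
    """Split data into fixed-size chunks and return only the chunks
    the central store has not seen before.  The less data crosses the
    WAN, the faster the backup window closes."""
    new_chunks = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen_hashes:
            seen_hashes.add(digest)
            new_chunks.append(chunk)
    return new_chunks

store = set()
first = backup_chunks(b"A" * 8192, store)                 # two identical chunks
second = backup_chunks(b"A" * 4096 + b"B" * 4096, store)  # one chunk already known
print(len(first), len(second))  # 1 1 -- duplicates never leave the client
```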
Virtualized Backup—Virtualized infrastructures have
significantly more backup options available to them.
Because virtualization introduces an abstraction layer
between the operating system and the underlying server
and storage hardware, it is far easier to both obtain con-
sistent image-based backups and to restore those back-
ups onto different hardware configurations should the
need ever arise.
In fact, image-based virtualization backup software
can also be leveraged to perform business continuity
duties in addition to disaster recovery. Instead of stor-
ing backups on dedicated storage media like tape or
network-attached storage, the backups can be stored
live on a backup virtualization cluster: a collection of
virtualized hosts located in a secondary location that
can be turned on at a moment’s notice. Though it’s dif-
ficult to achieve RPOs of under 30 minutes with this
approach, it can be an exceptionally cheap way to offer
aggressive RTO when more advanced hardware-based
approaches like SAN-to-SAN replication (see below)
aren’t feasible.
Host-Based Replication—You can achieve a similarly
aggressive RTO for physical servers by using host-based
software that replicates data from your physical produc-
tion servers to dedicated recovery servers, which can be
either physical or virtual. However, that solution requires
an expensive one-to-one ratio between your protected
assets and recovery assets—you’d essentially be paying
for two of everything. This kind of software is also typi-
cally not dual-use, which means you’d still need to field
separate disaster recovery software to provide archival
backups. However, in situations where operating systems
and applications can be virtualized and have not been,
it is often possible to utilize host-based software to rep-
licate many physical servers onto a single virtualization
host—handily avoiding the need for a duplication of
physical resources.
Application-Level Backup—Certain types of appli-
cations come with their own tools for providing disas-
ter recovery and business continuity. For example, you
could configure a database engine to capture full back-
ups and snapshots of transaction logs each night to a
secondary storage device for DR, while replicating trans-
action logs to a server in a secondary data center for
BC. Though these solutions can provide a cheap way to
provide very stringent RTO/RPO, they are also one-off
solutions that will only apply to applications that support
them. If you’ve got more than one critical database with
its own DR and BC processes, you’d have to manage
each of them separately, increasing the complexity of
your BC/DR approach and creating a potentially huge
source of headaches.
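The database pattern described above (a nightly full backup plus shipped transaction logs) is what makes point-in-time recovery possible: restore the full backup, then roll the log forward only to the moment you trust. This sketch reduces the idea to a dictionary and a list of timestamped entries; the shapes are illustrative, not any database engine's actual format:

```python
def restore_point_in_time(full_backup, txn_log, target_time):
    """Rebuild state from a full backup plus a transaction log,
    replaying entries up to (and including) target_time.  Each log
    entry is (timestamp, key, value)."""
    db = dict(full_backup)
    for ts, key, value in txn_log:
        if ts > target_time:
            break
        db[key] = value
    return db

full = {"balance": 100}
log = [(1, "balance", 110), (2, "balance", 90), (3, "balance", 40)]
# Roll forward only to t=2 -- e.g., just before a corrupting transaction.
print(restore_point_in_time(full, log, target_time=2))  # {'balance': 90}
```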
Keep it Simple!—No matter what mix of software you
choose, try to keep it as simple as possible. Though you
can achieve a great deal by carefully layering several dif-
ferent kinds of backup software and adding a healthy
dose of customized scripting, the more complexity you
introduce into the equation, the more likely you are to
encounter backup failures or—worse—put your data at
even greater risk.
HARDWARE BUILDING BLOCKS
No matter what kind of approach you use, hardware
will play an essential role. With disaster recovery the
hardware’s job is to store backup data and provide data
retention/archival capabilities. In BC applications, the
role of hardware can be much more varied—ranging from
virtualization hosts with their own inboard storage, to
shared network attached storage (NAS) systems, and
synchronized storage area networks (SANs) hundreds
of miles apart. The public cloud can also be leveraged
to play a substantial role in both DR and BC.
Tape Backup—When most people think of DR-related
backups, the first thing that comes to mind is usually tape
backup. Though much maligned, tape backup systems
have remained relevant due to sequential read and write
performance that can be two to three times faster than
disk, as well as their long shelf life and relative
durability. Though disk-based backup has become much
more popular recently, disaster recovery approaches that
require backup media to be shipped off site for archival
storage still generally rely on tape.
Disk-Based Backup—However, tape suffers from a
number of substantial limitations, primarily because it’s
a sequential medium. Reading random chunks of data
off different parts of the tape can be very time consum-
ing. Backups that rely heavily on updating or referenc-
ing previous backup file data are much better suited
to on-line spinning disk. But even where disk is being
utilized as the first backup tier, off-site archives are still
frequently stored on tape—forming the oft-used disk-to-
disk-to-tape model.
Disk backup media can take a number of different
forms. In the simplest application, it can be a backup
server with a large amount of direct-attached storage. In
situations where the backup software is running within a
virtualization environment or is on a system that stores
backup sets elsewhere, NAS is frequently used.
NAS and VTL—In large installations requiring signi-
ficant amounts of throughput and long periods of data
retention, it’s much more common to see dedicated,
purpose-built NAS backup appliances. These devices
come in two basic flavors: one that acts as an arbi-
trarily large NAS and is accessible via a variety of file-
sharing protocols; and virtual tape libraries (VTLs) that
emulate iSCSI or Fibre-Channel-based tape libraries.
VTLs can be useful in situations where legacy backup
software expects to have a real tape drive attached,
but tapes themselves aren’t desirable. By using a VTL,
you can provide a deduplicated disk-based backup
option while still allowing the legacy backup software
to function.
For most organizations that need to store large
amounts of data for a significant period of time, however,
a NAS backup appliance is much more likely to be the
right answer. The primary reason for this is that nearly
every enterprise-grade backup appliance will implement
some kind of deduplication to remove portions of the
data set that are exact copies of each other—like, for
example, the same company-wide email sent to every
employee’s in-box. Deduping the data can significantly
reduce the amount of storage required by each backup
set, allowing organizations to increase the amount of
historical backup data they can maintain.
There are a wide variety of deduplication products,
some of which are more efficient than others, but they
generally utilize one of two approaches: at-rest dedupli-
cation and inline deduplication. In the first instance, data
is backed up onto the appliance as-is, and then dedupli-
cated after the backup is complete. In the second, data
is deduplicated as it is being written onto the device.
Typically, devices that leverage at-rest deduplication
take less time to back up and restore data because they
don’t have to dedupe and/or rehydrate data as it moves
on and off the device. However, because they must
store at least one backup in its un-deduplicated state,
they also require more raw storage compared to inline
deduplication devices.
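That capacity trade-off reduces to rough arithmetic. The sketch below is a deliberately crude model: the 10:1 dedupe ratio, and the assumption that an at-rest device holds one full un-deduplicated set alongside the deduped store at its peak, are illustrative assumptions, not vendor figures:

```python
def raw_storage_needed(backup_set_gb, dedupe_ratio, at_rest):
    """Rough peak raw capacity for one backup set.

    At-rest devices land the full set on disk first and dedupe it
    afterward, so at peak they hold the raw set plus its deduped copy;
    inline devices only ever store the reduced set.  Crude model.
    """
    deduped = backup_set_gb / dedupe_ratio
    return backup_set_gb + deduped if at_rest else deduped

print(raw_storage_needed(1000, dedupe_ratio=10, at_rest=True))   # 1100.0
print(raw_storage_needed(1000, dedupe_ratio=10, at_rest=False))  # 100.0
```

Even under this crude model, the direction of the trade-off is clear: at-rest buys speed with raw capacity, inline buys capacity with processing time.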
The type of deduplication device that’s best for a
given application really depends upon how it will be
used. For instance, if you plan to store backups of a
virtualized environment on a NAS, you may want to
use an at-rest deduplication platform due to its typically
faster read and write performance. Some virtualization
backup software can restore a virtual machine while that
virtual machine’s backup data resides on the NAS—yield-
ing extremely short RTO.
In this kind of instant restore model, a virtualization
host can mount the backup image directly from the
NAS, start the VM, and then live-migrate it back to the
original primary storage platform—essentially eliminating
the time it would normally take to restore the data back
to primary storage in a traditional backup. In that sense,
what is normally considered a DR resource can be lever-
aged to supply some level of BC functionality—especially
if the NAS is located at a remote data center along with
dedicated virtualization capacity.
SAN Replication—However, most BC approaches
rely on a dedicated warm site—a data center housed in
a remote location that contains all of the infrastructure
necessary to keep the organization running if the pri-
mary data center goes down. That site is almost always
coupled with some type of storage replication, usually
a secondary SAN that’s replicating data from the pro-
duction SAN.
Because it essentially doubles your organization’s
investment in storage hardware and requires large
amounts of low-latency bandwidth to connect both sites,
this type of replication can be very expensive, but it is
widely recognized as the most comprehensive business
continuity approach available.
There are two kinds of SAN-to-SAN replication: syn-
chronous and asynchronous. With synchronous replica-
tion, data is kept in precise synchronization between
the production SAN and recovery SAN. By contrast,
asynchronous replication captures snapshots of your
data at specified intervals; if the primary storage system
fails, you can roll back to a recent snapshot to minimize
the data loss.
At first glance, it would appear that synchronous rep-
lication is better. Because data on the recovery SAN
is always identical to that on the production SAN, the
potential for data loss is exactly zero. The problem? If that
production data becomes corrupted, the corrupted data
will be replicated on the recovery SAN as well. (That’s
why disaster recovery is an essential component of even
the most robust business continuity system—you always
want to have a reliable backup set.) With asynchronous
replication, you can roll back to the most recent snap-
shot of data preceding the corruption event.
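That rollback behavior is the asynchronous side's key advantage, and it can be sketched as a toy model. The class and method names below are hypothetical, not a vendor API:

```python
class AsyncReplica:
    """Toy recovery SAN that keeps periodic snapshots rather than a
    live mirror of production."""

    def __init__(self):
        self.snapshots = []  # list of (timestamp, copy of the data)

    def take_snapshot(self, ts, production_data):
        self.snapshots.append((ts, dict(production_data)))

    def roll_back_to(self, before_ts):
        """Return the newest snapshot taken strictly before before_ts --
        e.g., the moment corruption appeared on the production SAN."""
        good = [(ts, d) for ts, d in self.snapshots if ts < before_ts]
        return good[-1][1] if good else None

replica = AsyncReplica()
replica.take_snapshot(1, {"row": "ok"})
replica.take_snapshot(2, {"row": "ok-v2"})
replica.take_snapshot(3, {"row": "CORRUPT"})  # corruption already replicated
print(replica.roll_back_to(3))  # {'row': 'ok-v2'}
```

A synchronous mirror has no equivalent of `roll_back_to`: whatever production holds, the mirror holds, which is why a separate backup set remains essential.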
If you have stringent RPO requirements, you probably
need synchronous replication to meet them. But
you’ll also need to make sure your DR resources are
aligned correctly in case you need them. Depending
on the storage platform you choose you may be able to
adopt a hybrid approach, taking snapshots of the SAN
production data and storing them on the recovery side
SAN—effectively providing you with zero RPO as well
as the ability to roll back to an earlier data set without
having to restore from backups.
Cloud-based DR and BC—In scenarios where a large
amount of Internet bandwidth is available, the public
cloud can also provide a viable option for both DR and
BC goals. In DR applications, cloud-based backup can
provide a replacement for tape media or NAS repli-
cation in order to provide off-site backup protection.
However, managing the security of backup data being
placed in the cloud and ensuring that the backup data
can be quickly and easily brought back on site can be
a challenge.
The public cloud can also fill the role of a fully deployed
hot recovery data center. Through the use of application-
specific backup software or host-based replication tools,
entire server infrastructures can be protected by storage
and computing resources located in the cloud.
Whether you decide to opt for cloud-based DR or
BC depends on a variety of factors, including the type of
IT resources your organization has, whether you have
the skill set in house to navigate the fundamentally dif-
ferent nature of cloud services, how much bandwidth
your data requires, and how closely you must adhere to
government regulations regarding information security.
TESTING AND COMPLIANCE
No matter what solution you choose for disaster recov-
ery or business continuity, building a consistent test-
ing and compliance regimen—and sticking to it—is vital.
Without frequent testing, you can never be certain
the data you thought was backed up can actually be
restored, or that your secondary data center will come
on line when the Big One hits.
This is where many organizations fail. Overworked
and understaffed IT orgs may feel they lack the time and
resources required for frequent testing of DR and BC.
But that may also be due to poorly designed processes
or a lack of sufficient automation. Making an extra effort
to streamline these processes will help not only in testing
these solutions but also in implementing them when a
real disaster strikes.
In the ideal scenario, organizations will deploy their
DR or BC solutions as part of their day-to-day opera-
tions—allowing them to both leverage their investments
and test them at the same time. For example, you might
use the DR system for a mission-critical database to pop-
ulate a read-only database platform that’s used for gener-
ating reports. If a problem arises with a report, that could
be a sign your DR system isn’t working as it should.
Or you might routinely transition workloads between
a production data center and a recovery data center—
spreading the load across both data centers when you
need more performance while also demonstrating that
your fail-over methodology is functioning properly.
Backups can and do fail silently. If you do not test
them, you won’t find out until it’s too late—a situation
nobody wants to find themselves in. You’ll also have an
excellent opportunity to evaluate whether you’re meet-
ing your organization’s written RTO/RPO requirements,
and to lobby for changes if you’re not.
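A minimal automated restore test follows the same shape regardless of the backup product: restore into scratch space, compare checksums against the live copy, and clean up. In this sketch, `restore_fn` is a stand-in for whatever your backup tool's restore command is; everything here is illustrative, not a real tool's API:

```python
import hashlib
import os
import shutil
import tempfile

def verify_restore(source_path, restore_fn):
    """Restore a file into a scratch directory and confirm its checksum
    matches the live copy.  A silent backup failure shows up here as a
    mismatch, long before a real disaster does."""
    scratch = tempfile.mkdtemp(prefix="dr-test-")
    try:
        restored = restore_fn(source_path, scratch)
        def digest(path):
            with open(path, "rb") as fh:
                return hashlib.sha256(fh.read()).hexdigest()
        return digest(source_path) == digest(restored)
    finally:
        shutil.rmtree(scratch)

# Stand-in "restore" that simply copies the file, as a real tool would
# pull it from the backup set.
def fake_restore(src, dest_dir):
    dest = os.path.join(dest_dir, os.path.basename(src))
    shutil.copy(src, dest)
    return dest

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"critical data")
print(verify_restore(f.name, fake_restore))  # True
```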
BC/DR: A BUSINESS NECESSITY
As businesses become increasingly dependent on data
to survive, the importance of disaster recovery and busi-
ness continuity measures cannot be overstated. Only
through thoughtful and collaborative planning that
involves all levels of the organization—both technical
and non-technical—can you field a comprehensive disas-
ter recovery and business continuity strategy that appro-
priately sets expectations and adequately protects the
lifeblood of the business.

 
Business Continuity Getting Started
Business Continuity Getting StartedBusiness Continuity Getting Started
Business Continuity Getting Startedmxp5714
 
Disaster recovery white_paper
Disaster recovery white_paperDisaster recovery white_paper
Disaster recovery white_paperCMR WORLD TECH
 
Forrester: How Organizations Are Improving Business Resiliency with Continuou...
Forrester: How Organizations Are Improving Business Resiliency with Continuou...Forrester: How Organizations Are Improving Business Resiliency with Continuou...
Forrester: How Organizations Are Improving Business Resiliency with Continuou...EMC
 
Disaster recovery in the Cloud - whitepaper
Disaster recovery in the Cloud - whitepaper Disaster recovery in the Cloud - whitepaper
Disaster recovery in the Cloud - whitepaper Karolina Dryja
 
Mastering disaster a data center checklist
Mastering disaster a data center checklistMastering disaster a data center checklist
Mastering disaster a data center checklistChris Wick
 
Integra: Attack of the Business Killing Monster (Infographic)
Integra: Attack of the Business Killing Monster (Infographic)Integra: Attack of the Business Killing Monster (Infographic)
Integra: Attack of the Business Killing Monster (Infographic)Jessica Legg
 
Disaster Recovery: Develop Efficient Critique for an Emergency
Disaster Recovery: Develop Efficient Critique for an EmergencyDisaster Recovery: Develop Efficient Critique for an Emergency
Disaster Recovery: Develop Efficient Critique for an Emergencysco813f8ko
 
Data_Protection_WP - Jon Toigo
Data_Protection_WP - Jon ToigoData_Protection_WP - Jon Toigo
Data_Protection_WP - Jon ToigoEd Ahl
 
Mastering disaster e book Telehouse
Mastering disaster e book TelehouseMastering disaster e book Telehouse
Mastering disaster e book TelehouseTelehouse
 
Cloud Backup or Cloud Disaster Recovery – Key differences explained! | Sysfore
Cloud Backup or Cloud Disaster Recovery – Key differences explained! | SysforeCloud Backup or Cloud Disaster Recovery – Key differences explained! | Sysfore
Cloud Backup or Cloud Disaster Recovery – Key differences explained! | SysforeSysfore Technologies
 
Prevent & Protect
Prevent & ProtectPrevent & Protect
Prevent & ProtectMike McMillan
 

Semelhante a Disaster Recovery - Deep Dive (20)

Disaster Recovery Deep Dive
Disaster Recovery Deep DiveDisaster Recovery Deep Dive
Disaster Recovery Deep Dive
 
Business Continuity for Mission Critical Applications
Business Continuity for Mission Critical ApplicationsBusiness Continuity for Mission Critical Applications
Business Continuity for Mission Critical Applications
 
The cost of downtime
The cost of downtimeThe cost of downtime
The cost of downtime
 
Insider's Guide- The Data Protection Imperative
Insider's Guide- The Data Protection ImperativeInsider's Guide- The Data Protection Imperative
Insider's Guide- The Data Protection Imperative
 
V mware quick start guide to disaster recovery
V mware   quick start guide to disaster recoveryV mware   quick start guide to disaster recovery
V mware quick start guide to disaster recovery
 
Planning For Disaster And Everyday Threats Wp111438
Planning For Disaster And Everyday Threats Wp111438Planning For Disaster And Everyday Threats Wp111438
Planning For Disaster And Everyday Threats Wp111438
 
ProfitBricks-white-paper-Disaster-Recovery-US
ProfitBricks-white-paper-Disaster-Recovery-USProfitBricks-white-paper-Disaster-Recovery-US
ProfitBricks-white-paper-Disaster-Recovery-US
 
Whitepaper : Building a disaster ready infrastructure
Whitepaper : Building a disaster ready infrastructureWhitepaper : Building a disaster ready infrastructure
Whitepaper : Building a disaster ready infrastructure
 
RUNNING HEADER Disaster Recovery Plan Information and Documentat.docx
RUNNING HEADER Disaster Recovery Plan Information and Documentat.docxRUNNING HEADER Disaster Recovery Plan Information and Documentat.docx
RUNNING HEADER Disaster Recovery Plan Information and Documentat.docx
 
Business Continuity Getting Started
Business Continuity Getting StartedBusiness Continuity Getting Started
Business Continuity Getting Started
 
Disaster recovery white_paper
Disaster recovery white_paperDisaster recovery white_paper
Disaster recovery white_paper
 
Forrester: How Organizations Are Improving Business Resiliency with Continuou...
Forrester: How Organizations Are Improving Business Resiliency with Continuou...Forrester: How Organizations Are Improving Business Resiliency with Continuou...
Forrester: How Organizations Are Improving Business Resiliency with Continuou...
 
Disaster recovery in the Cloud - whitepaper
Disaster recovery in the Cloud - whitepaper Disaster recovery in the Cloud - whitepaper
Disaster recovery in the Cloud - whitepaper
 
Mastering disaster a data center checklist
Mastering disaster a data center checklistMastering disaster a data center checklist
Mastering disaster a data center checklist
 
Integra: Attack of the Business Killing Monster (Infographic)
Integra: Attack of the Business Killing Monster (Infographic)Integra: Attack of the Business Killing Monster (Infographic)
Integra: Attack of the Business Killing Monster (Infographic)
 
Disaster Recovery: Develop Efficient Critique for an Emergency
Disaster Recovery: Develop Efficient Critique for an EmergencyDisaster Recovery: Develop Efficient Critique for an Emergency
Disaster Recovery: Develop Efficient Critique for an Emergency
 
Data_Protection_WP - Jon Toigo
Data_Protection_WP - Jon ToigoData_Protection_WP - Jon Toigo
Data_Protection_WP - Jon Toigo
 
Mastering disaster e book Telehouse
Mastering disaster e book TelehouseMastering disaster e book Telehouse
Mastering disaster e book Telehouse
 
Cloud Backup or Cloud Disaster Recovery – Key differences explained! | Sysfore
Cloud Backup or Cloud Disaster Recovery – Key differences explained! | SysforeCloud Backup or Cloud Disaster Recovery – Key differences explained! | Sysfore
Cloud Backup or Cloud Disaster Recovery – Key differences explained! | Sysfore
 
Prevent & Protect
Prevent & ProtectPrevent & Protect
Prevent & Protect
 

Mais de Envision Technology Advisors

8 Strategies For Building A Modern DataCenter
8 Strategies For Building A Modern DataCenter8 Strategies For Building A Modern DataCenter
8 Strategies For Building A Modern DataCenterEnvision Technology Advisors
 
Unleashing IT: Seize Innovation, Accelerate Business, Drive Outcomes. All thr...
Unleashing IT: Seize Innovation, Accelerate Business, Drive Outcomes. All thr...Unleashing IT: Seize Innovation, Accelerate Business, Drive Outcomes. All thr...
Unleashing IT: Seize Innovation, Accelerate Business, Drive Outcomes. All thr...Envision Technology Advisors
 

Mais de Envision Technology Advisors (20)

The State of Global Markets 2013
The State of Global Markets 2013The State of Global Markets 2013
The State of Global Markets 2013
 
Ten Myths About Recovery Deleted Files
Ten Myths About Recovery Deleted FilesTen Myths About Recovery Deleted Files
Ten Myths About Recovery Deleted Files
 
Detecting Stopping Advanced Attacks
Detecting Stopping Advanced AttacksDetecting Stopping Advanced Attacks
Detecting Stopping Advanced Attacks
 
8 Strategies For Building A Modern DataCenter
8 Strategies For Building A Modern DataCenter8 Strategies For Building A Modern DataCenter
8 Strategies For Building A Modern DataCenter
 
Unleashing IT: Seize Innovation, Accelerate Business, Drive Outcomes. All thr...
Unleashing IT: Seize Innovation, Accelerate Business, Drive Outcomes. All thr...Unleashing IT: Seize Innovation, Accelerate Business, Drive Outcomes. All thr...
Unleashing IT: Seize Innovation, Accelerate Business, Drive Outcomes. All thr...
 
7 Steps To Developing A Cloud Security Plan
7 Steps To Developing A Cloud Security Plan7 Steps To Developing A Cloud Security Plan
7 Steps To Developing A Cloud Security Plan
 
Avoiding The Seven Deadly Sins of IT
Avoiding The Seven Deadly Sins of ITAvoiding The Seven Deadly Sins of IT
Avoiding The Seven Deadly Sins of IT
 
Forrester Emerging MSSP Wave
Forrester Emerging MSSP WaveForrester Emerging MSSP Wave
Forrester Emerging MSSP Wave
 
RetroFit's Network Monitoring Solution
RetroFit's Network Monitoring SolutionRetroFit's Network Monitoring Solution
RetroFit's Network Monitoring Solution
 
Network Latency
Network LatencyNetwork Latency
Network Latency
 
2013 Threat Report
2013 Threat Report2013 Threat Report
2013 Threat Report
 
Termination of Windows XP
Termination of Windows XPTermination of Windows XP
Termination of Windows XP
 
WhenThe Going Gets Tough
WhenThe Going Gets ToughWhenThe Going Gets Tough
WhenThe Going Gets Tough
 
As A Man-Thinketh
As A Man-ThinkethAs A Man-Thinketh
As A Man-Thinketh
 
Project Management | Why do projects fail?
Project Management | Why do projects fail?Project Management | Why do projects fail?
Project Management | Why do projects fail?
 
Tips using Siri
Tips using SiriTips using Siri
Tips using Siri
 
Too Many EHR Alers
Too Many EHR AlersToo Many EHR Alers
Too Many EHR Alers
 
Top12 health issues
Top12 health issuesTop12 health issues
Top12 health issues
 
Final Federal IT Health Plan
Final Federal IT Health PlanFinal Federal IT Health Plan
Final Federal IT Health Plan
 
Roadmap IT Long Term Care
Roadmap IT Long Term CareRoadmap IT Long Term Care
Roadmap IT Long Term Care
 

Último

Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxKatpro Technologies
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfEnterprise Knowledge
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...Martijn de Jong
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍾 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍾 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍾 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍾 8923113531 🎰 Avail...gurkirankumar98700
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Igalia
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
Developing An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilDeveloping An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilV3cube
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 

Último (20)

Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍾 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍾 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍾 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍾 8923113531 🎰 Avail...
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
Developing An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilDeveloping An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of Brazil
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 

Disaster Recovery - Deep Dive

Disaster recovery and business continuity have come to mean different things to different people.
But the core concepts are the same: deploying the correct policies, procedures, and layers of technology to allow organizations to continue to function in the event disaster strikes. This can range from something as simple as having easily accessible backups in case of hardware failure to nearly instant failover to a secondary data center when your primary one is taken out by a cataclysmic event.

Generally speaking, DR is a series of procedures and policies that, when properly implemented, allow your organization to recover from a data disaster. How long it takes you to recover, and how stale the data will be once you're back online, will depend on the recovery time objectives (RTOs) and recovery point objectives (RPOs) decided upon by your organization's stakeholders. DR's job is not to provide consistent uptime, but to ensure you can recover your data in the event that it's lost, inaccessible, or compromised.

A comprehensive disaster recovery plan would also include anything else required to get your environment back on its feet, including how and where backup sets are stored, how they can be retrieved, what hardware is needed to access them, where to obtain alternate hardware if the primary systems are inaccessible, whether installation media for applications would be required, and so on.

Business continuity systems aim to mitigate or neutralize the effects of a disaster by minimizing business interruption. They typically rely on high-availability (HA) hardware: redundant systems operating in real time and with real data, so that if one fails the others can pick up the slack almost instantaneously with little or no effect on business processes. An HA event (such as a cluster node failing over) might introduce a minute's worth of
service interruption, whereas recovering a backup onto a new piece of hardware could take hours or even days, especially if replacement hardware isn't readily available.

But having HA systems in place does not ensure disaster won't strike. There will always be failure vectors like data corruption, software glitches, power spikes, and storms that can skip right past every redundant system to despoil your data and ruin your day. That's the most important thing to remember about HA: It only lowers your probability of experiencing disaster; it doesn't make you immune.

Business continuity ties together the availability goals of HA with the recoverability goals of DR. A carefully conceived BC design can allow you to recover from a full-blown disaster with limited downtime and near-zero data loss. It is by far the most involved and expensive approach to take, but many enterprises have concluded that data is so vital to their day-to-day operations that BC is too important not to pursue.

BUILDING A POLICY
There are four main stages to building a coherent BC/DR policy for your organization. The first task is to define a scope that details which applications will be covered by the policy. Second, you need to define the risks you're attempting to protect your infrastructure against. The third task is to define the precise requirements of the policy, in terms of RTO and RPO, for the risk categories you've defined. Only after you've completed those steps can you determine the technological building blocks you'll need to deliver on the policies you've defined.

Throughout this process, it's vital to involve business stakeholders in the decision-making process, however nontechnical they may be. In fact, the less technical those stakeholders are, the more critical it is that they be involved.
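As a rough illustration of the RTO/RPO arithmetic these policy stages rest on, the sketch below (all figures hypothetical, and the function name is invented for this example) checks a simple periodic-backup scheme against stated objectives: worst-case data loss is roughly the interval between backups, and worst-case downtime is the measured end-to-end restore time.

```python
# Toy model of recovery objectives; all values are hypothetical.
# Worst-case data loss for periodic backups is roughly the backup interval;
# worst-case downtime is the measured end-to-end restore time.

def meets_objectives(backup_interval_min, restore_time_min, rpo_min, rto_min):
    """Return (rpo_ok, rto_ok) for a simple periodic-backup scheme."""
    worst_case_data_loss = backup_interval_min   # data written since last backup
    worst_case_downtime = restore_time_min       # time to restore service
    return worst_case_data_loss <= rpo_min, worst_case_downtime <= rto_min

# Nightly backups (1,440 minutes apart, 2-hour restore) against a
# 15-minute RPO and 30-minute RTO:
rpo_ok, rto_ok = meets_objectives(1440, 120, rpo_min=15, rto_min=30)
print(rpo_ok, rto_ok)  # prints: False False -- nightly backups miss both targets
```

Trivial as the model is, it captures the point of the requirements stage: objectives are only meaningful when checked against what the backup scheme can actually deliver.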
A failure to set downtime and recovery expectations properly can have devastating, career-ending consequences.

SCOPE
When considering the scope of a BC/DR plan, it's important to start by looking at the user-facing applications you are attempting to protect. If you start from the perspective of the application and then determine everything it needs to survive, you'll find all the infrastructure and data associated with it. If you start from the infrastructure and work up to the application, it's easy to blind yourself to things the application needs in order to run. For example, you could create a flawless backup of your application and database clusters, but still forget to back up a file share that contains the application client, along with all of its hard-to-reproduce configuration data.

Though it makes a great deal of sense to separate less mission-critical applications from those the business depends upon heavily, it is increasingly common to see BC/DR policy scopes that extend to nearly all of the core IT infrastructure, largely a result of the popularity of server virtualization, network convergence, and the centralization of primary storage resources. In such environments, a BC/DR policy that seeks to protect a mission-critical application served by a small collection of virtual machines may well end up including nearly every major resource in the data center within its scope. This is not necessarily a bad thing, but it can drive up costs significantly.

RISKS
After a scope for the policy has been set, the next step is to determine which risks the policy will attempt to address. IT architectures are vulnerable to a dizzying array of risks, ranging from disk failure all the way up to Hurricane Katrina-level natural disasters.
Typically, almost everyone will want to plan for common risks such as hardware failure, but some organizations may choose not to plan for less-probable risks such as regional disasters or pandemic outbreaks. It's not unheard of to see significant investments made in an effort to mitigate disasters that the business's core operations—specifically human resources—could not withstand in any case. Knowing which risks you are and are not protecting the organization's data against is critical to thorough planning and implementation. No matter what your organization chooses to do, the most important thing is that everyone from the C-level executives on down is on board with the decisions made.

REQUIREMENTS
After you've set your scope and defined the risks, it's time to put hard numbers on how quickly your BC/DR policy dictates you recover. At the very least, this should
include targets for recovery times and recovery points. However, your organization may decide not to place the same RTO/RPO requirements on every kind of disaster. For example, it may require a 30-minute RTO and 15-minute RPO for a particularly critical database application in the face of hardware and software failures, but be satisfied with longer RTOs and RPOs if the entire office building is destroyed by a fire and employees are unable to return to work.

Of course, we'd all like to have near-zero recovery time and data loss. While this is absolutely possible with technology available today, it may also be prohibitively expensive. Thus, defining policy requirements is a balancing act between the relative cost of providing ultra-low RTO/RPO and the cost to the business of downtime or data loss.

However, it's important to avoid getting caught up in discussions of how much recovery options will cost until you've determined what the organization truly needs in order to remain viable. Too often, huge price tags dissuade IT from even offering aggressive RTO/RPO solutions for fear of being laughed out of the corner office. If in the final analysis budgets won't support the more expensive options, a collective decision can be made to scale back the requirements, but it will be made based on a full understanding of the situation.

BUILDING BLOCKS
After you have a solid idea of the services you need to protect, the risks you need to protect them from, and your recovery objectives, you can figure out the technological building blocks you'll need to deliver on the policy. Given the sheer number of different solutions available, there is really no one right way to protect any given infrastructure.
The trick to designing a good BC/DR solution is to treat each component as a layer of protection, each with different capabilities that combine to provide a comprehensive solution.

SOFTWARE BUILDING BLOCKS
The heart of any BC/DR strategy is the software that makes it tick. Generally speaking, this will involve at least one kind of backup software for disaster recovery, but may also leverage application-level backup/replication and even full-blown host-based replication software to deliver on business continuity goals.

Agent-Based Backup—Traditionally, the most common software component you'll see is the agent-based backup utility. This software generally utilizes one or more centralized server installations to draw backup data from software agents installed within the individual servers you want to protect, then stores the backups on tape, direct-attached storage (DAS), or other storage hardware. If disaster strikes and data is lost, the data can be restored to the way it was at the time of the last backup.

This works reasonably well for situations where you're protecting against data corruption or deletion and there is a relatively long RPO (usually 24 hours or more). However, when risks include wholesale loss of a server containing a substantial amount of data, traditional agent-based backup is unlikely to meet more aggressive RTO requirements, since restoring a server from bare metal can require a significant amount of time.

But don't write off agent-based backup utilities just yet. Despite their stodgy reputation, modern backup agents may offer advanced features such as image-based backups and client-side deduplication, which can both shorten recovery objectives and decrease the time needed to create backup sets. Image-based backups capture the contents of an entire server at once instead of file by file and restore the machine exactly as it was, making backups easier to capture and restore.
Client-side deduplication lets you use cheap centralized backup storage and distribute the dedupe load, making backup over a WAN faster because there's less data to pass down the wire. These more advanced forms of agent-based backup are useful for organizations that need to shorten their RTO/RPO but don't have the virtualization stacks in place to make backup more seamless.

Virtualized Backup—Virtualized infrastructures have significantly more backup options available to them. Because virtualization introduces an abstraction layer between the operating system and the underlying server and storage hardware, it is far easier both to obtain consistent image-based backups and to restore those backups onto different hardware configurations should the need ever arise.
In fact, image-based virtualization backup software can also be leveraged to perform business continuity duties in addition to disaster recovery. Instead of storing backups on dedicated storage media like tape or network-attached storage, the backups can be stored live on a backup virtualization cluster—a collection of virtualized hosts located in a secondary location that can be turned on at a moment's notice. Though it's difficult to achieve RPOs of under 30 minutes with this approach, it can be an exceptionally cheap way to offer aggressive RTO when more advanced hardware-based approaches like SAN-to-SAN replication (see below) aren't feasible.

Host-Based Replication—You can achieve a similarly aggressive RTO for physical servers by using host-based software that replicates data from your physical production servers to dedicated recovery servers, which can be either physical or virtual. However, that solution requires an expensive one-to-one ratio between your protected assets and recovery assets—you'd essentially be paying for two of everything. This kind of software is also typically not dual-use, which means you'd still need to field separate disaster recovery software to provide archival backups. However, in situations where operating systems and applications can be virtualized and have not been, it is often possible to utilize host-based software to replicate many physical servers onto a single virtualization host—handily avoiding the need to duplicate physical resources.

Application-Level Backup—Certain types of applications come with their own tools for providing disaster recovery and business continuity.
For example, you could configure a database engine to capture full backups and snapshots of transaction logs each night to a secondary storage device for DR, while replicating transaction logs to a server in a secondary data center for BC. Though these solutions can provide a cheap way to meet very stringent RTO/RPO, they are also one-off solutions that apply only to applications that support them. If you've got more than one critical database with its own DR and BC processes, you'd have to manage each of them separately, increasing the complexity of your BC/DR approach and creating a potentially huge source of headaches.

Keep It Simple!—No matter what mix of software you choose, try to keep it as simple as possible. Though you can achieve a great deal by carefully layering several different kinds of backup software and adding a healthy dose of customized scripting, the more complexity you introduce into the equation, the more likely you are to encounter backup failures or—worse—put your data at even greater risk.

HARDWARE BUILDING BLOCKS
No matter what kind of approach you use, hardware will play an essential role. With disaster recovery, the hardware's job is to store backup data and provide data retention/archival capabilities. In BC applications, the role of hardware can be much more varied—ranging from virtualization hosts with their own onboard storage, to shared network-attached storage (NAS) systems, to synchronized storage area networks (SANs) hundreds of miles apart. The public cloud can also be leveraged to play a substantial role in both DR and BC.

Tape Backup—When most people think of DR-related backups, the first thing that comes to mind is usually tape. Though much maligned, tape backup systems have remained relevant due to sequential read and write performance that can be two to three times faster than disk, as well as their long shelf life and relative durability.
Though disk-based backup has become much more popular recently, disaster recovery approaches that require backup media to be shipped off site for archival storage still generally rely on tape.

Disk-Based Backup—However, tape suffers from a number of substantial limitations, primarily because it's a sequential medium. Reading random chunks of data off different parts of the tape can be very time consuming. Backups that rely heavily on updating or referencing previous backup file data are much better suited to online spinning disk. But even where disk is being utilized as the first backup tier, off-site archives are still frequently stored on tape—forming the oft-used disk-to-disk-to-tape model.

Disk backup media can take a number of different forms. In the simplest application, it can be a backup server with a large amount of direct-attached storage. In situations where the backup software is running within a
virtualization environment or is on a system that stores backup sets elsewhere, NAS is frequently used.

NAS and VTL—In large installations requiring significant amounts of throughput and long periods of data retention, it's much more common to see dedicated, purpose-built NAS backup appliances. These devices come in two basic flavors: one that acts as an arbitrarily large NAS and is accessible via a variety of file-sharing protocols; and virtual tape libraries (VTLs) that emulate iSCSI- or Fibre Channel-based tape libraries. VTLs can be useful in situations where legacy backup software expects to have a real tape drive attached, but tapes themselves aren't desirable. By using a VTL, you can provide a deduplicated disk-based backup option while still allowing the legacy backup software to function.

For most organizations that need to store large amounts of data for a significant period of time, however, a NAS backup appliance is much more likely to be the right answer. The primary reason is that nearly every enterprise-grade backup appliance will implement some kind of deduplication to remove portions of the data set that are exact copies of each other—like, for example, the same company-wide email sent to every employee's inbox. Deduping the data can significantly reduce the amount of storage required by each backup set, allowing organizations to increase the amount of historical backup data they can maintain.

There are a wide variety of deduplication products, some of which are more efficient than others, but they generally utilize one of two approaches: at-rest deduplication and inline deduplication. In the first instance, data is backed up onto the appliance as-is, and then deduplicated after the backup is complete. In the second, data is deduplicated as it is being written onto the device.
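Either way, the core idea is content-addressed storage: split the incoming stream into chunks, hash each chunk, and store a chunk only the first time its hash is seen. A toy inline-style sketch in Python; real appliances typically use variable-size chunking and far stronger bookkeeping, so treat this purely as an illustration of the principle:

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks for simplicity

class DedupeStore:
    """Toy inline deduplication: store each unique chunk once, keyed by hash."""

    def __init__(self):
        self.chunks = {}  # sha256 hex digest -> chunk bytes

    def write(self, data: bytes) -> list:
        """Ingest a stream; return its 'recipe' (ordered list of chunk hashes)."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            # Deduplication happens here: an already-seen chunk costs
            # only a dictionary lookup, not additional storage.
            self.chunks.setdefault(digest, chunk)
            recipe.append(digest)
        return recipe

    def read(self, recipe: list) -> bytes:
        """Rehydrate the original stream from its recipe."""
        return b"".join(self.chunks[d] for d in recipe)

store = DedupeStore()
# The same company-wide email backed up from two different mailboxes:
email = b"all-hands memo " * 1000
r1 = store.write(email)
r2 = store.write(email)  # second copy adds no new chunks, only a new recipe
assert store.read(r1) == email
```

The second backup of the identical email consumes essentially no extra space, which is exactly why deduplicating appliances can retain so much more backup history per terabyte of raw disk.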
Typically, devices that leverage at-rest deduplication take less time to back up and restore data because they don't have to dedupe and/or rehydrate data as it moves on and off the device. However, because they must store at least one backup in its un-deduplicated state, they also require more raw storage than inline deduplication devices.

The type of deduplication device that's best for a given application really depends upon how it will be used. For instance, if you plan to store backups of a virtualized environment on a NAS, you may want to use an at-rest deduplication platform due to its typically faster read and write performance. Some virtualization backup software can restore a virtual machine while that virtual machine's backup data resides on the NAS—yielding extremely short RTO. In this kind of instant-restore model, a virtualization host can mount the backup image directly from the NAS, start the VM, and then live-migrate it back to the original primary storage platform—essentially eliminating the time it would normally take to restore the data back to primary storage in a traditional backup. In that sense, what is normally considered a DR resource can be leveraged to supply some level of BC functionality—especially if the NAS is located at a remote data center along with dedicated virtualization capacity.

SAN Replication—However, most BC approaches rely on a dedicated warm site—a data center housed in a remote location that contains all of the infrastructure necessary to keep the organization running if the primary data center goes down. That site is almost always coupled with some type of storage replication, usually a secondary SAN that's replicating data from the production SAN.
Because it essentially doubles your organization's investment in storage hardware and requires large amounts of low-latency bandwidth to connect both sites, this type of replication can be very expensive, but it is widely recognized as the most comprehensive business continuity approach available.

There are two kinds of SAN-to-SAN replication: synchronous and asynchronous. With synchronous replication, data is kept in precise synchronization between the production SAN and the recovery SAN. By contrast, asynchronous replication captures snapshots of your data at specified intervals; if the primary storage system fails, you can roll back to a recent snapshot to minimize the data loss.

At first glance, it would appear that synchronous replication is better. Because data on the recovery SAN is always identical to that on the production SAN, the recovery point is effectively zero—no acknowledged write is ever lost. The problem? If that production data becomes corrupted, the corrupted data will be replicated on the recovery SAN as well. (That's why disaster recovery is an essential component of even
the most robust business continuity system—you always want to have a reliable backup set.) With asynchronous replication, you can roll back to the most recent snapshot of data preceding the corruption event.

If you have stringent RPO requirements, you probably need synchronous replication to meet them. But you'll also need to make sure your DR resources are aligned correctly in case you need them. Depending on the storage platform you choose, you may be able to adopt a hybrid approach, taking snapshots of the SAN production data and storing them on the recovery-side SAN—effectively providing you with zero RPO as well as the ability to roll back to an earlier data set without having to restore from backups.

Cloud-Based DR and BC—In scenarios where a large amount of Internet bandwidth is available, the public cloud can also provide a viable option for both DR and BC goals. In DR applications, cloud-based backup can provide a replacement for tape media or NAS replication in order to provide off-site backup protection. However, managing the security of backup data being placed in the cloud and ensuring that the backup data can be quickly and easily brought back on site can be a challenge.

The public cloud can also fill the role of a fully deployed hot recovery data center. Through the use of application-specific backup software or host-based replication tools, entire server infrastructures can be protected by storage and computing resources located in the cloud.

Whether you decide to opt for cloud-based DR or BC depends on a variety of factors, including the type of IT resources your organization has, whether you have the skill set in house to navigate the fundamentally different nature of cloud services, how much bandwidth your data requires, and how closely you must adhere to government regulations regarding information security.
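The asynchronous snapshot-and-rollback behavior described above can be sketched in a few lines. This is a toy model with in-memory dictionaries standing in for real arrays; its only purpose is to show why corruption replicates forward but an earlier snapshot still saves you:

```python
import copy

class ProductionSAN:
    def __init__(self):
        self.data = {}

class RecoverySAN:
    """Retains point-in-time snapshots taken from production at intervals."""

    def __init__(self, retain: int = 3):
        self.snapshots = []   # oldest -> newest
        self.retain = retain  # keep only the N most recent snapshots

    def take_snapshot(self, prod: ProductionSAN):
        self.snapshots.append(copy.deepcopy(prod.data))
        self.snapshots = self.snapshots[-self.retain:]

    def roll_back(self, steps_back: int = 1) -> dict:
        """Return the Nth-most-recent snapshot (1 = latest)."""
        return self.snapshots[-steps_back]

prod = ProductionSAN()
recovery = RecoverySAN()

prod.data["orders"] = "good data"
recovery.take_snapshot(prod)       # scheduled replication interval

prod.data["orders"] = "CORRUPTED"  # corruption event on production
recovery.take_snapshot(prod)       # the corruption is replicated, too

print(recovery.roll_back(1)["orders"])  # CORRUPTED
print(recovery.roll_back(2)["orders"])  # good data
```

Synchronous replication behaves like taking a "snapshot" on every write: zero data loss, but also zero history, which is precisely why the article insists on a separate DR backup set behind any replication scheme.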
TESTING AND COMPLIANCE
No matter what solution you choose for disaster recovery or business continuity, building a consistent testing and compliance regimen—and sticking to it—is vital. Without frequent testing, you can never be certain the data you thought was backed up can actually be restored, or that your secondary data center will come online when the Big One hits.

This is where many organizations fail. Overworked and understaffed IT orgs may feel they lack the time and resources required for frequent testing of DR and BC. But that may also be due to poorly designed processes or a lack of sufficient automation. Making an extra effort to streamline these processes will help not only in testing these solutions but also in implementing them when a real disaster strikes.

In the ideal scenario, organizations will deploy their DR or BC solutions as part of their day-to-day operations—allowing them to both leverage their investments and test them at the same time. For example, you might use the DR system for a mission-critical database to populate a read-only database platform that's used for generating reports. If a problem arises with a report, that could be a sign your DR system isn't working as it should. Or you might routinely transition workloads between a production data center and a recovery data center—spreading the load across both data centers when you need more performance while also demonstrating that your fail-over methodology is functioning properly.

Backups can and do fail silently. If you do not test them, you won't find out until it's too late—a situation nobody wants to find themselves in. You'll also have an excellent opportunity to evaluate whether you're meeting your organization's written RTO/RPO requirements, and to lobby for changes if you're not.
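Automating even a basic restore check goes a long way toward catching silent failures. A minimal sketch of such a verification pass, assuming backups land as plain files with a known checksum manifest; every path, threshold, and helper name here is hypothetical:

```python
import hashlib
import time
from pathlib import Path

MAX_AGE_SECONDS = 24 * 3600  # hypothetical 24-hour RPO target

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

def verify_backup(backup: Path, expected_sha256: str) -> list:
    """Return a list of problems found (an empty list means the backup passes)."""
    if not backup.exists():
        return [f"{backup}: backup file missing"]
    problems = []
    # Silent-corruption check: does the backup still match its manifest?
    if sha256_of(backup) != expected_sha256:
        problems.append(f"{backup}: checksum mismatch (corrupt or truncated)")
    # RPO check: is the newest backup recent enough to meet written policy?
    age = time.time() - backup.stat().st_mtime
    if age > MAX_AGE_SECONDS:
        problems.append(f"{backup}: backup is {age / 3600:.1f}h old, violates RPO")
    return problems

# Hypothetical scheduled-job usage: fail loudly, never silently.
# for path, digest in read_manifest("backups.manifest"):   # hypothetical helper
#     for problem in verify_backup(Path(path), digest):
#         alert(problem)  # hypothetical: page the on-call admin
```

A checksum pass is not a substitute for a full restore drill, but it is cheap enough to run daily, and it directly measures the two things the written policy promises: intact data and a current recovery point.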
BC/DR: A BUSINESS NECESSITY
As businesses become increasingly dependent on data to survive, the importance of disaster recovery and business continuity measures cannot be overstated. Only through thoughtful and collaborative planning that involves all levels of the organization—both technical and non-technical—can you field a comprehensive disaster recovery and business continuity strategy that appropriately sets expectations and adequately protects the lifeblood of the business.