WAN. Web. Mobile. Cloud.
Confidence in Application Performance™
Deploying Your Application in the Cloud:
Strategies to Proactively Mitigate Performance Risk
A Shunra Software Best Practices White Paper By Marty Brandwin
Corporations worldwide are shifting technology resources and infrastructure to the Cloud. These businesses expect to realize gains in operational efficiency and scalability as a result of the Cloud's elasticity, and they expect to reduce capital expenditures on IT infrastructure as they migrate to an operational pay-as-you-go expense and offload typical infrastructure management responsibilities (and costs) to the Cloud provider.

Today, organizations recognize the value and significant gains that Cloud computing offers. They are also knowledgeable enough to recognize the risks involved with Cloud deployments, such as the potential bottlenecks and points of failure that are introduced as application topology and dependencies now include extra hops to the Cloud. Other risks include network latency, data security, bandwidth limitations, reliance on third-party content delivery networks, and potential development costs if application architecture or components require refactoring. The end result of all of these possible impairments is reduced application performance and a poor user experience.

Cloud computing, therefore, is not an instant "win". It is critical to analyze the potential tradeoffs that may be necessary when moving an application, or some of its components, to the Cloud. It is also vital to be proactive in determining the impact these changes will have on application performance and, most importantly, user experience.

Is my application Cloud-ready?

When analyzing an existing application for its Cloud-readiness, it is imperative to break the application down into its core dependencies, components and functionality. With each "piece" of the application, organizations must weigh the unique benefits and risks to determine whether the Cloud paradigm is the best option – whether each component will function as expected in the Cloud, whether it is scalable, what costs will be incurred to maintain the component in the Cloud, and how end users will experience it.

Typically, preparing an application for the Cloud requires one of two application development efforts: re-architecting application components with a SaaS-like infrastructure, or building new components and applications that leverage Cloud APIs for design, process and workflow. Both situations introduce costs and performance risk to the application.
Additional latency introduced by extra hops to the Cloud has an additive effect that can impair end user experience.

[Figure: side-by-side client/server login sequence diagrams (session initiation, login request and reply, login page request and download, sporadic download acknowledgements, session teardown). The local user at 1 msec latency completes the sequence in 3 seconds; the remote user at 50 msec latency takes 30 seconds.]
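The compounding illustrated above is straightforward to estimate: every sequential client-server "turn" pays the full round-trip time, so a chatty exchange multiplies even a small per-hop delay. The sketch below is a simplification under stated assumptions: the turn count is hypothetical, and turns are treated as strictly sequential.

```python
# Minimal model of latency's additive effect: each sequential
# request/response "turn" pays the full round-trip time (RTT).
# The turn count here is a hypothetical illustration, not a
# measurement from any real application.

def transaction_time_ms(turns: int, rtt_ms: float, server_ms: float = 0.0) -> float:
    """Wall-clock time for `turns` strictly sequential round trips,
    each costing one RTT plus optional server processing time."""
    return turns * (rtt_ms + server_ms)

local = transaction_time_ms(turns=500, rtt_ms=2)     # ~1 ms each way
remote = transaction_time_ms(turns=500, rtt_ms=100)  # ~50 ms each way

print(f"local:  {local / 1000:.1f} s")   # 1.0 s
print(f"remote: {remote / 1000:.1f} s")  # 50.0 s
```

The point is the ratio, not the absolute numbers: total time grows linearly with both latency and chattiness, which is why reducing application turns is often as valuable as reducing latency itself.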
© 2011 Shunra Software Ltd. All rights reserved. Shunra is a registered trademark of Shunra Software.
Cloud infrastructure changes mean existing investments in architecture, data structure and performance engineering may not be leverageable. Re-architecting the middleware and back-end tiers of an application to leverage Cloud APIs can be a significant undertaking. Application development and management platforms must be capable of supporting the Cloud model throughout all stages of the application development lifecycle. Without appropriate planning for the development, refactoring and management of applications deployed to the Cloud, organizations may be forced to seek out ad hoc solutions that represent additional costs and corporate investment, offsetting at least some of the expected gains from a Cloud migration.

Most importantly, all of these changes put a burden on the QA/Testing team. Not only does application functionality in the Cloud need to be validated, so does performance and adherence to service level objectives (SLOs). While the application may perform well in the traditional datacenter, the variability of hosting it in the Cloud introduces new performance risk.

Complicating the migration, and critical to accurately assessing application topology changes, is the requirement to have a thorough understanding of the services and architecture offered by the Cloud provider and of the role of third-party vendors that may be working with the provider (content delivery networks, for example). Service level guarantees and other performance metrics are increasingly easy to establish and monitor, though it is much more difficult to anticipate unplanned outages, and the resulting application behavior, in the Cloud as opposed to the traditional datacenter.

Moving from the traditional datacenter into the Cloud paradigm necessitates a hand-off of control – control of data, control of centralized IT functionality. Best practices, therefore, dictate a well-choreographed and thorough performance assessment of the application in advance of deployment to the Cloud. While management and maintenance control is largely relinquished, preparedness and validation of application performance provide the assurance IT organizations need to confidently deploy to the Cloud.

Proactively testing (and validating) end user experience

Now that you have thoroughly assessed Cloud provider capabilities and applied that knowledge to your application development and hosting plans, there is one more requirement to complete your proactive strategy: validate and ensure the end user experience.

The best-laid plans cannot fully anticipate and account for the performance and experience risks associated with deploying applications in the Cloud. In fact, application issues within the Cloud environment can not only resurface, as they did in the datacenter, but also be magnified. Take, for example, the latency implications for a chatty application: the introduction of minimal additional latency can create significant performance bottlenecks when a large number of application calls are occurring. In addition, multi-tenancy and shared Cloud resources mean that some applications can be negatively impacted by the high load and resource requirements of other applications.

Pre-deployment performance testing is essential.

The current Cloud performance testing paradigm requires a pre-deployment migration of application components and data to a Cloud-based staging area in order to test functionality, establish benchmarks and set expectations. Copying virtual machines and other components over to the Cloud from the datacenter introduces its own performance and resiliency risks that need to be understood.

To optimize pre-deployment testing, organizations must be able to:

- Collect real-world Cloud network information over time, including latency, jitter, packet loss, and bandwidth constraints
- Replay these real-world impairments in a test lab
- Understand datacenter location and end user location(s)
- Automatically recreate multiple network scenarios, including best- and worst-case conditions

This approach to pre-deployment testing empowers organizations to proactively plan for and successfully deploy applications to the Cloud.

Once application components or a reference system are deployed, which can be time-intensive, additional testing code may be required and the application may be placed in a debug state. From there, the application or its components can be stress tested, and the interaction of the Cloud-based and datacenter-based components can be analyzed. What-if scenarios, times of peak load, scalability, etc. are all conditions that can then be tested. While this high-level view of testing is consistent with what QA and Performance Engineers have come to expect in traditional datacenters, the pay-as-you-go model of the Cloud makes it a costly proposition.

Rather, pre-deployment testing in the datacenter, with realistic Cloud-based simulation, is a more cost-effective and flexible means of testing applications. By precisely emulating Cloud conditions and services prior to deployment, organizations are able to test more scenarios at less cost and be certain of the end user experience.
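Before any tooling is involved, the value of replaying captured impairments can be illustrated with a small Monte-Carlo model: feed recorded round-trip and loss figures into a toy transaction simulator and observe the resulting end-user times. This is only an illustrative sketch with invented numbers (the RTT samples, loss rate and timeout below are hypothetical), not a description of Shunra's actual replay mechanism.

```python
import random

# Toy replay model: sample from captured RTTs and apply a recorded
# loss rate, charging a full retransmission timeout per lost send.
# All values below are hypothetical illustrations.

RECORDED_RTTS_MS = [48, 52, 75, 50, 49, 110, 51, 53]  # captured round trips
LOSS_RATE = 0.02                                      # captured packet loss
RETRANSMIT_TIMEOUT_MS = 200

def simulate_turn(rng: random.Random) -> float:
    """One request/response turn: one recorded RTT, plus a full
    timeout for every send that is lost before one gets through."""
    delay = rng.choice(RECORDED_RTTS_MS)
    while rng.random() < LOSS_RATE:
        delay += RETRANSMIT_TIMEOUT_MS
    return delay

def simulate_transaction(turns: int, seed: int = 0) -> float:
    """Total time (ms) for a transaction of sequential turns."""
    rng = random.Random(seed)
    return sum(simulate_turn(rng) for _ in range(turns))

print(f"estimated transaction time: {simulate_transaction(100) / 1000:.2f} s")
```

Running the simulation across many seeds yields a distribution rather than a single number, which is closer to what a real pre-deployment test aims to characterize: best-case, typical, and worst-case end-user experience.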
In addition, emulating Cloud conditions and simulating real-world usage scenarios, like outages and peak loads, early in the Cloud deployment/development lifecycle allows organizations to better anticipate and plan for capacity and resource requirements. Analysis of application behavior in the datacenter under Cloud conditions and what-if scenarios can also help organizations determine which application components are best suited for, or are even capable of being deployed to, the Cloud.

A Practical Example with Shunra's PerformanceSuite

To realize value and the fastest return on your Cloud migration investment, best practices dictate proactive pre-deployment testing with solutions like Shunra's PerformanceSuite. As the leading application performance engineering provider, Shunra has helped thousands of companies worldwide build performance into their applications, whether WAN, Web, Mobile or Cloud.

When a multinational entertainment company decided to migrate its online communities and social media properties to a private IBM-hosted Cloud, it turned to Shunra to proactively determine and validate its migration strategy. The company had several load generation tools available and functionality testing experience in the lab, but it recognized the potential impact of the move on its end users and wanted to ensure optimal application performance under real network conditions.

The company knew that latency would be introduced to the online applications based on the physics alone of a geographic move. However, it also needed to understand how additional gateways, network queues, and conditions requiring packets to be re-sent could multiply this delay.

In order to test the impact of latency and other real-world network constraints, Shunra's NetworkCatcher was deployed to the private Cloud to capture real-life latency, jitter and packet loss values. This data was then replayed in a test lab using Shunra's PerformanceSuite and its seamless integration with HP LoadRunner and Performance Center. The data was played in sequential order, and again in random order, with various factors imposed to change parameters in order to test performance and scalability under the breadth of real-life conditions.

[Figure: NetworkCatcher enables capture and playback of real-world network behavior.]

The company was able to precisely recreate the conditions of the private Cloud and accurately simulate multiple test scenarios in its on-site lab. As a result of an extensive and thorough pre-deployment performance test, Shunra helped the company validate the performance and associated requirements of the online communities prior to deployment. This was of utmost importance, as the company operates one of the most popular family-focused communities on the Web, and user experience could not be compromised. Shunra was also able to quantify the potential gains in efficiency, providing a cost justification for the migration.

As a result of supporting this migration project, the company now employs Shunra for performance validation and needs analysis on dozens of online application releases annually.

Key Impairments and Risks

As mentioned, network impairments that are experienced in the datacenter can be magnified within a Cloud architecture. Assessing performance under varying Cloud network conditions is essential. Impairments to consider include:

Latency

Latency is the amount of time required for a packet to reach its destination across a given physical link. It is also, more often than not, a primary source of performance problems. One way to think about latency is through a simple analogy: the driving distance between two points. How long a car takes to get from point A to point B depends on factors like distance, speed limits, and traffic congestion. If points A and B are close in proximity, then latency is negligible. As the distance becomes greater, however, as it does when you introduce a Cloud topology and the multiple gateways that must be traversed in a typical transaction, greater performance risk is introduced.

Factors contributing to latency include:

- Geographic distance – increasing the distance between links introduces a delay based on the physics of sending data packets from one location to another; this delay is magnified by the potential need for additional "turns" or the need to re-send packets when they become corrupt or fragmented; a vicious cycle can result, as the increased distance also increases the risk of packet corruption or loss.

- Network queues – when traversing a path consisting of multiple intermediate networks, packets tend to "queue up" at busy routers, much as traffic accumulates at busy intersections; overloading these routes increases latency; and, if packets need to be re-sent, additional traffic, and thus latency, is created.

Before migrating an application to the Cloud, it is essential to understand the combined impact of real-world network latencies and application "turns" on the performance of critical business services to the end user.
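The geographic-distance factor above has a hard physical floor that is easy to estimate: signals in optical fiber propagate at roughly two-thirds the speed of light, on the order of 200 km per millisecond. A back-of-the-envelope sketch (the city-pair distance is an approximation used purely for illustration):

```python
# Physics floor on latency: light in optical fiber covers roughly
# 200 km per millisecond (about 2/3 of c in vacuum).

FIBER_KM_PER_MS = 200.0

def propagation_rtt_ms(distance_km: float) -> float:
    """Minimum round-trip time from distance alone; routing hops,
    queuing, and retransmissions all add on top of this floor."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Philadelphia to London is on the order of 5,700 km (approximate):
print(f"{propagation_rtt_ms(5700):.0f} ms minimum round trip")  # 57 ms
```

Measured round trips are typically several times this floor once routers, queues, and application turns are counted, which is why captured real-world values matter more than theoretical ones.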
Jitter

Jitter is a measure of the variability of latency. It describes the variation in time (or delay) experienced between sending and receiving data packets. The result of jitter can be packet loss or re-ordering, which can have a dramatic impact on the performance of video or audio streams.

Bandwidth Availability

Bandwidth describes the speed at which information travels on a link per unit of time. Data cannot be sent or received faster than the underlying media allows. Bandwidth considerations, however, are more complicated than just the speeds at which data can be transmitted, known as theoretical bandwidth. Rather, when considering bandwidth and its impact on performance, we must consider other factors that affect how much of the available bandwidth can actually be used:

- Bottlenecks – a network is only as fast as its slowest link; if users connect to a 1.5 Mbps WAN through a 56 Kbps dial-up link, real bandwidth is 56 Kbps.

- Utilization – as with any channel, the more traffic there is (think about cars on the highway), the slower the speed.

- Protocol overhead (bandwidth allocation) – different protocols impose different bandwidth penalties, i.e., the percentage of the data stream allocated to addressing and other control functions; for example, ATM has an overhead of roughly 10% (5 bytes for every 53-byte ATM cell), effectively lowering the network bandwidth available for data transfer by that amount.

- Quality of Service (QoS) – many network providers allocate bandwidth based on the type of traffic or destination; for example, video may get a higher priority than email because of the greater potential performance problems with video; similarly, traffic going to a corporate customer may be prioritized over traffic to a residential customer.

- Asymmetric bandwidth – another complication occurs when downloaded data is received much faster than uploaded data, as with a Digital Subscriber Line (DSL) network; typically used in residential settings, when DSL is used in a business environment, even a small upload can temporarily slow or stop other data traffic.

In Cloud environments, the impact of network connections and the amount of data that can be carried is an essential consideration, especially since bandwidth is subject to contention by multiple applications. In a public Cloud environment in particular, the performance of any given application is subject to the volume of traffic generated by all the other applications utilizing the same infrastructure.

Packet Loss

In general, when data carried across a network is lost or corrupted, the affected packets must be resent. As discussed, this can compound network impairments like latency and jitter, causing significant performance degradation. This degradation is due not so much to the packet loss itself as to the time it takes applications to respond to it. The most significant effect of packet loss comes from application timeouts – the length of time a network host is programmed to wait for a reply before resending the latest information. Each time a packet must be resent, the resulting timeout can severely reduce the quality of the end user experience.

Packet loss can occur for several reasons:

- Hardware or software bugs – packets can be assembled or disassembled incorrectly due to infrastructure or software defects.

- Electrical problems – high power lines, inadequate noise isolation, air conditioners and other electrical sources can disrupt data transmission.

- Network loads – when traffic coming to a router exceeds the router's ability to process it, an overflow condition results; the router may handle this automatically by proactively dropping packets.

- IP header corruption – when packet header information is corrupted, a router may misinterpret the packet as invalid and drop it; header corruption typically occurs because of errors at the physical network layer that cause data bits to toggle.

- Fragmentation – when a data packet exceeds the maximum size allowed to traverse the network, it may be broken down into smaller packets before being sent on its way; this fragmentation takes time, increases the aggregate processing required (because there are more packets to process), and adds more risk of lost packets.

Networks are imperfect. Network conditions change. With a huge number of data packets flying in many different directions, across complex network infrastructures that incorporate multiple technologies from multiple vendors, not every 0 and 1 will travel from endpoint to endpoint exactly as expected.

Cloud migrations introduce performance risk that can and must be mitigated to maintain user satisfaction, productivity and revenue streams. A proactive approach to performance engineering empowers organizations to see how their code will behave under variable and worst-case conditions. By incorporating the realities of the network environment into the test cycle, organizations gain valuable insight into the vulnerabilities that can adversely affect application performance. And they are best equipped to resolve issues before end users are affected – saving considerable time and money.
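The timeout effect described under Packet Loss can be put in expected-value terms. Assuming independent loss with probability p and a fixed retransmission timeout, the expected number of failed sends per packet is p/(1-p), and each failure costs a full timeout. A minimal sketch (the RTT, loss rate and timeout values are illustrative assumptions):

```python
# Sketch: expected per-turn delay under independent random loss.
# Expected sends per packet form a geometric series, so expected
# failures are loss / (1 - loss); each failure stalls for a full
# retransmission timeout. Values below are illustrative.

def expected_delay_ms(rtt_ms: float, loss: float, timeout_ms: float) -> float:
    """Expected delay for one successful request/response turn."""
    return rtt_ms + (loss / (1.0 - loss)) * timeout_ms

print(expected_delay_ms(50, 0.00, 1000))  # 50.0 ms with no loss
print(expected_delay_ms(50, 0.02, 1000))  # ~70.4 ms: 2% loss adds ~40%
```

Even a 2% loss rate inflates the expected turn time by roughly 40% in this example, entirely through waiting on timeouts, which is exactly the degradation mechanism the section above describes.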
About Shunra

When deploying applications across WAN, Web, Mobile or Cloud-based networks, risk mitigation and cost avoidance are paramount. Today, 80% of the costs associated with application development occur in remediating failed or underperforming applications after deployment, when the ineffective application has already had a negative impact on the end user or customer experience. Shunra offers a proactive approach to application performance engineering (APE). When implemented at the policy level and as a best practice across the application lifecycle, the Shunra PerformanceSuite™ builds real-world application performance testing (latency, packet loss, bandwidth optimization, jitter) into all business- and mission-critical applications, all prior to deployment. The Shunra solution discovers, predicts, emulates and analyzes the performance of applications over real-world networks – all within an offline, pre-production, test lab or COE environment. The results? Shunra provides customized performance results, enabling pre-production remediation and optimization, and confidence in application performance prior to deployment.

Shunra is the industry-recognized leader in Application Performance Engineering (APE), offering over a decade of experience with some of the most complex and sophisticated networks in the world. Customers include WalMart, McDonalds, Bank of America, Apple Computer, Cisco, Verizon, FedEx, GE, Walt Disney, TJX, Best Buy, eBay, Siemens, Motorola, Marriott, Merrill Lynch, ATT, ADP, ING Direct, Citibank, Thomson Reuters, MasterCard, IBM, Boeing, HP, Pfizer, Intel, and the Federal Reserve Bank.

Shunra is based in Philadelphia, PA and is privately held. For more information, call 1.877.474.8672 or visit www.shunra.com.
Ask Shunra About Our Proactive Strategies for Deploying Your Application in the Cloud Today!

Visit www.shunra.com and request to be contacted, or contact Shunra directly at 1.877.474.8672 or 1.215.564.4046 (worldwide offices listed below).

WAN. Web. Mobile. Cloud.
Confidence in Application Performance™
Application Performance Engineering – www.shunra.com
Call your local office today to find out more!

North America, Headquarters: 1800 J.F. Kennedy Blvd., Ste 601, Philadelphia, PA, USA | Tel: 215 564 4046 | Toll Free: 1 877 474 8672 | Fax: 215 564 4047 | info@shunra.com

Israel Office: 6B Hanagar Street, Neve Neeman B, Hod Hasharon 45240, Israel | Tel: +972 9 764 3743 | Fax: +972 9 764 3754 | info@shunra.com

European Office: 73 Watling Street, London EC4M 9BJ | Tel: +44 207 153 9835 | Fax: +44 207 285 6816 | saleseurope@shunra.com

For a complete list of our channel partners, please visit our website: www.shunra.com