This paper points out that storage infrastructures should be upgraded now to allow the creation of a denser environment that will fully realize the potential of compute and flash storage investments. Failure to do so will have two key costs to the organization.
The Cost of Maintaining the Storage Network Status Quo
Prepared by: George Crump, Lead Analyst !
Prepared: February 2014
Storage infrastructure upgrades have typically been driven by the obsolescence of the prior architecture. Many customers migrated over time to 8Gbps Fibre Channel because its infrastructure components cost the same as their 4Gbps counterparts. The move to the next generation of storage infrastructure, Gen 5, may have an entirely different motivation: the desire to fully leverage investments made in server compute via virtualization and in storage tiers via solid-state drives (SSDs). The slow, rolling upgrades of the past will simply take too long to fully utilize these assets.
To maximize return on investment (ROI) in compute and storage, data centers need to build far denser environments. This means hosts that can support more virtual servers and desktops, databases that can scale to support more users per host, and unstructured data environments that can manipulate billions of discrete files. The compute power to support these architectures is readily available, and the advent of flash storage allows the storage media to keep pace with compute demand. But if the storage interconnect is not upgraded, these capabilities will go to waste, putting the organization at a competitive disadvantage.
Storage infrastructures should be upgraded now to allow the creation of a denser environment that will fully realize the potential of compute and flash storage investments. Failure to do so will have two key costs to the organization.
The Cost of Incrementalism
The traditional approach to upgrades is incremental: as new servers or storage are added to the data center environment, the next generation of storage infrastructure is adopted piece by piece. Because of this incremental approach, most organizations never fully complete the transition before the next generation of infrastructure becomes available.
For example, before many customers could upgrade completely from 2Gbps to 4Gbps Fibre Channel, 8Gbps Fibre Channel had already become widely available. The problem with this incremental approach is that compute power and storage media, thanks to SSDs, are making quantum leaps in performance. If the infrastructure isn't upgraded to support these advancements, it will fall further and further behind. When the storage network falls behind, application response time degrades, availability declines, and investments in fast storage systems like flash-based arrays don't achieve their full potential.
Taking full advantage of the enormous potential of compute-rich virtual hosts and near-zero-latency storage requires a complete and almost "all at once" upgrade of the storage network infrastructure to the latest generation. This next generation architecture, Gen 5 Fibre Channel, not only has the speed and bandwidth to fully support current and future compute and storage tiers, it also has the features needed to optimize performance and availability for the virtual infrastructure. Full utilization of the compute and storage performance now available will help drive down costs for the data center as a whole.
The Cost of Workarounds
Through a variety of workarounds, some performance demands can be met with the current storage network infrastructure. The problem is that these workarounds have both present and future costs, and in many cases they make the eventual upgrade to the next generation storage architecture more expensive and more difficult to implement. A few examples follow:
Storage Switzerland, LLC
• Workaround: Additional network cards can be added to the hosts, and traffic can be routed to multiple switches and through multiple storage systems.
• Cost: The obvious cost of buying multiple components that are not individually fully utilized. There are also the additional management costs and complexity associated with making these changes.
• Workaround: Some vendors have proposed "server-side SSD" solutions that, in theory, should eliminate the storage network altogether.
• Cost: While this approach has value, it should be introduced after the storage network is optimized. The reason is that server-side solutions end up creating performance storage silos that need to be independently managed, creating more work for the administrator.
Even if the server-side solution can network storage across servers, making a "server-side network," the solution adds cost and complexity. First, a dedicated network has to be designed to make the aggregation of storage perform well. Second, the storage administrator has the unenviable task of maintaining a shared but totally isolated storage architecture. What's more, there is a high degree of risk with these solutions: server-side networks are new to the marketplace and have yet to be fully vetted by thousands of production IT implementations.
• Workaround: Another workaround is storage quality of service (QoS). Storage QoS, in general, allows available storage resources (IOPS) to be provisioned on a prioritized basis, with more critical applications or workloads given some form of preference. Like server-side SSD solutions, it has its place but is best implemented after the storage network infrastructure has been upgraded.
• Cost: While QoS is one of the key requirements of a next generation storage architecture, it needs to be built on a foundation that has enough raw performance and bandwidth to go around. In other words, it is easier to divide up a large pie than a small one.
The cost of the QoS workaround is the purchase of a new storage system with specific QoS capabilities, only to have it bottlenecked by the storage infrastructure. There is also the cost of the storage network and HBAs being unaware of the QoS settings on the storage system and unable to cooperate with them. The next generation architecture will need an end-to-end approach to QoS in which the HBAs, the storage switch and eventually the storage system itself can all work together under the same QoS umbrella.
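The prioritized-IOPS idea behind storage QoS can be illustrated with a minimal sketch. The workload names, weights and the 100,000-IOPS budget below are hypothetical; real arrays and HBAs expose this capability through vendor-specific interfaces.

```python
# Minimal sketch of prioritized IOPS provisioning (storage QoS).
# Workload names, weights and the 100,000-IOPS budget are illustrative.

def provision_iops(total_iops, workloads):
    """Split an IOPS budget among workloads in proportion to priority weight."""
    total_weight = sum(w["weight"] for w in workloads)
    return {w["name"]: total_iops * w["weight"] // total_weight
            for w in workloads}

workloads = [
    {"name": "oltp-db",  "weight": 5},  # most critical workload
    {"name": "vdi-pool", "weight": 3},
    {"name": "backup",   "weight": 2},  # lowest priority
]

shares = provision_iops(100_000, workloads)
print(shares)  # {'oltp-db': 50000, 'vdi-pool': 30000, 'backup': 20000}
```

The point of the "large pie" remark above is visible here: the weights only divide whatever budget the underlying fabric can actually deliver, so upgrading the fabric first makes every share larger.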
The ROI of a Next Generation Architecture
A next generation storage infrastructure should allow the creation of a denser environment, enabling the full realization of compute and flash storage's potential. Instead of applying a series of workarounds, the storage infrastructure has to be treated strategically. Implementing Gen 5 infrastructure as that strategy allows for complete optimization of the environment.
In most cases, the 16Gbps bandwidth that Gen 5 fabric architectures provide will obviate the need, at least as a first step, for costly server-side solutions or storage systems with QoS, allowing those capabilities to be added and budgeted for as they are needed.
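As a rough illustration of the headroom involved, per-link throughput by generation can be sketched from line rates and encodings; these figures come from the public Fibre Channel roadmap, not from this paper, and are back-of-the-envelope only.

```python
# Back-of-the-envelope per-link Fibre Channel throughput by generation.
# Line rates (Gbaud) and encodings are from the public FC roadmap.
links = {
    "4GFC":       (4.25,   8 / 10),   # 8b/10b encoding
    "8GFC":       (8.5,    8 / 10),   # 8b/10b encoding
    "16GFC Gen5": (14.025, 64 / 66),  # 64b/66b encoding
}

for name, (gbaud, efficiency) in links.items():
    mb_per_s = gbaud * efficiency * 1000 / 8  # Gbaud -> MB/s per direction
    print(f"{name}: ~{mb_per_s:.0f} MB/s per direction")
```

These raw figures land slightly above the nominal 400/800/1600 MB/s payload rates usually quoted, which also account for framing overhead; either way, Gen 5 doubles the effective throughput of an 8Gbps link.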
Beyond the cost avoidance capabilities of Gen 5, there is also the ability to deliver a more significant ROI on compute and storage tier investments. Modern host servers can often, from a CPU and memory perspective, support significantly more virtual machines (VMs) than they do at present. The typical VM count per host is between 12 and 20; with Gen 5 and flash-based storage, that count should easily be able to triple. Considering the cost of a server appropriately equipped for virtualization, the potential to reduce the number of physical hosts by as much as two-thirds represents a significant cost savings.
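The consolidation arithmetic above can be checked with a quick sketch. The 300-VM environment is an assumed figure for illustration; the densities are the ones cited in the paragraph.

```python
# Hypothetical consolidation math using the VM densities cited above.
total_vms = 300                  # size of the environment (assumed figure)
vms_per_host_today = 15          # midpoint of the 12-20 VMs-per-host range
vms_per_host_gen5 = vms_per_host_today * 3  # density triples with Gen 5 + flash

hosts_today = -(-total_vms // vms_per_host_today)  # ceiling division
hosts_gen5 = -(-total_vms // vms_per_host_gen5)

print(hosts_today, hosts_gen5)   # 20 7
print(f"reduction: {1 - hosts_gen5 / hosts_today:.0%}")  # reduction: 65%
```

Under these assumptions the host count drops from 20 to 7, roughly the two-thirds reduction the paper cites, before even counting the HBAs, switch ports and cabling those hosts would have needed.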
Density also applies to the network itself. A denser network means fewer HBAs, switch ports
and cable runs. It also provides a much simpler environment to manage and diagnose since
there are fewer variables to consider.
A denser environment also means better utilization of the storage system. Prior to flash, storage systems had to add capacity to meet performance demands. Now workloads have to be added to the storage system to exploit flash's performance capability. In other words, there is often more than enough storage media performance available; the key is developing a storage architecture that allows you to tap into all of it. Gen 5 is an excellent example of such an architecture.
All of this density also leads to better power utilization. This is power efficiency in its purest form. With a dense environment, there are simply fewer servers, network connections and storage systems needed.
Conclusion
Continuing the policy of the "slow-roll" upgrade seems safe, but in reality it ends up costing the organization. With a "slow-roll" upgrade, the potential of the compute processing power and storage I/O is never realized. That means the dense compute and storage environments that would save the organization both upfront capital dollars and long-term operational expenses cannot be built. The storage network status quo ends up being very expensive: the bottleneck it creates ends up requiring extra physical hosts, extra physical network connections and additional separate storage systems, all of which require power, cooling and management cycles.
To avoid the impact of the storage network status quo, data centers should consider an all-at-once move to high performance storage architectures like Gen 5 FC. This will allow them to dramatically reduce server and storage purchases while delivering the performance that businesses are demanding.