4. The Business Challenge

[Chart: Data and Capex Required curves rising steeply over time against a nearly flat Capex Budget Reality line]

• Data Growth – 20-40% per year
• Capex Required – more infrastructure, more energy
• Capex Budget Reality – only 1-5%/year increase
• Inescapable data growth meets budget reality
• Operational impacts
  – Can’t just buy more/cheaper storage
  – Energy requirements/costs
  – Increasing energy costs
  – CRC Legislation – reduce energy requirements

“Every two days now, we create as much information as we did from the dawn of civilisation up until 2003.”
– Eric Schmidt, Executive Chairman, Google
5. Analysing Storage Efficiency
The Vision Of Your Tomorrow
Changing the storage cost curve – reducing cost and energy requirements

[Waterfall chart: Raw → Usable → Allocated → Active → Data Size]
• Raw → Usable: 30% loss (RAID, spares)
• Usable → Allocated: 30% loss (LUN sizes, performance)
• Allocated → Active: 15% loss (unused LUNs, zero access)
• Active → Data Size: 40% loss (utilisation)
13. IBM Systems and Technology Group
Virtualisation Built-In – Virtualise the ‘Complete Infrastructure’
FREE Virtual Centre Plug-In for Storwize V7000

[Diagram: Tivoli Storage Productivity Center manages both the Virtual Server Infrastructure and the Virtual Storage Infrastructure (Storwize V7000, storage virtualisation)]
{DESCRIPTION} This is a title page. The module presented on this page is called: 1H2012 IBM STG Smarter Storage Selling, Module 5 – ltu35954, Mastering the IBM Storage V7000 Value Proposition Client Presentation. Randy Arseneau, Consultant, Storage Platform Strategy. {TRANSCRIPT} Hello and welcome to the Mastering the IBM Storage V7000 Value Proposition Client Presentation. My name is Randy Arseneau. I’m going to take you through this module, which is part of our new 2012 STG Smarter Storage Selling curriculum.
An entirely new set of requirements are being placed on what are in many cases inefficient IT infrastructures. To capture new opportunities, IT organizations must respond to dramatic increases in demand and workload while meeting demands for new services and greater service quality. Meanwhile, IT budgets over the last 6+ years have increased only slightly. This is what we call the IT conundrum. The need to do dramatically more without additional resources. So, while the possibilities for innovation on a Smarter Planet are widely varied… every IT organization faces the stark reality that the demand for computing capacity is nearly insatiable while their IT budgets are increasingly viewed as a means of cost control.
Title of Presentation Client Copyright IDC. Reproduction is forbidden unless authorized. All rights reserved.
So let’s first look at the business challenge related to the information and data growth our customers are facing. <click> Data continues to grow at an extremely rapid rate, with industry numbers suggesting between 20 and 40% per year. Indeed our own studies, including those performed by Butterfly for example, support these numbers, with an average data growth rate of 34% per year. <click> Now what does this actually mean in terms of capital expense and impact? This data growth means our customers are having to buy more infrastructure, which is clearly taking up more data centre space and incurring more cost. The capex curve is not as aggressive as the data curve, because the cost of technology does keep declining and storage densities do keep increasing. However, storage densities are not increasing as quickly as they have in the past: historically, storage densities grew faster than data, but today that has changed, and data growth is exceeding the technology. The other impact of buying all this infrastructure is, of course, energy consumption: the more physical infrastructure you put on the floor, the more disks you are powering, and the greater the energy consumption becomes. <click> So how does the budget look in comparison to this capex requirement? Industry numbers suggest storage budgets are increasing by only 1 to 5% per year, and some customers tell us their budgets are actually being reduced, expanding the problem even further. <click> Looking at the capex and budget curves together really highlights the issue our customers are facing: the reality of inescapable data growth meets the reality of the budgets available. It is this very issue that emphasises the need to implement a smarter, more efficient storage infrastructure. <click> The final consideration is operational costs.
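The gap between these two curves compounds quickly. A minimal back-of-the-envelope sketch, using the 34% growth and 5% budget figures quoted above (the function name and the 5-year horizon are illustrative, not from the source):

```python
# Illustrative sketch: how 34%/year data growth outpaces a 5%/year
# budget increase. Both series start from an index of 100.

def compound(start, rate, years):
    """Grow `start` by `rate` (e.g. 0.34 = 34%) compounded over `years`."""
    return start * (1 + rate) ** years

data_index = compound(100, 0.34, 5)    # capacity demand after 5 years
budget_index = compound(100, 0.05, 5)  # budget after 5 years

print(f"Data after 5 years:   {data_index:.0f} (was 100)")
print(f"Budget after 5 years: {budget_index:.0f} (was 100)")
```

After five years, demand has more than quadrupled while the budget has grown by barely a quarter, which is the squeeze the capex chart illustrates.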
There could be a temptation to simply buy cheaper storage technologies to counter the problem, but this does not address the issue of increasing operational costs. There are some key challenges here, such as the staffing cost of managing an ever-growing environment, and in particular finding the right skills in the market. Also, every disk that is purchased consumes energy, and energy can no longer be thought of as a limitless resource; there is a need to evaluate whether energy providers will be able to keep up with the required growth in demand. Energy costs have also been increasing dramatically, and combined with the increasing energy requirements of the additional infrastructure needed to keep pace with data growth, this compounds the problem. Lastly, recent UK government legislation requires companies to reduce their carbon footprint, an element of which will ultimately require them to reduce their energy consumption or face significant penalties. Whilst this currently affects larger companies, within the next few months it will impact all businesses, so it is something they need to consider.
Let’s look in a little more detail at storage efficiency. Here we have what is often referred to as the storage wastage or waterfall chart. I’m sure this will be familiar to many of you, but let’s quickly step through the causes of this effect. <click> On average, companies lose 30 percent of capacity configuring RAID-protected arrays, taking into account parity drives and hot spares. This gives us the usable capacity. There isn’t necessarily a lot we can do in this space, unless the customer is currently employing a lot of RAID 1/mirroring in their environment, in which case we can perhaps provide the right service levels with alternative, more cost-effective RAID levels. <click> At the next step, on average another 30% is lost when allocating storage. This can be due to company-standard LUN sizes leaving undefined capacity, as well as arrays configured for performance. These are often defined by the number of disks/spindles required to drive the performance needed, but can realise more capacity than was required, which is then not utilised effectively. <click> Approximately an additional 15% is lost through configured LUNs not being used, perhaps having been allocated to a system that has since been decommissioned without the capacity being redeployed back into the storage pool, or through volumes that have not been accessed for an extended period of time but contain data of value. <click> Finally, the actual data size is, in the majority of cases, significantly smaller than what was allocated. Project or application integration managers often ask for more capacity than they really need initially, to cover themselves, making requests that hopefully cover all eventualities. However, if they are not using it yet, why would our customers want to pay for that depreciating asset, as well as the wasted energy consumption powering something that is not yet being utilised?
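The waterfall can be sketched numerically. One plausible reading of the chart, shown here purely for illustration, is that each loss applies to the previous stage in turn (the stage names and a 40% active-to-data loss are assumptions drawn from the slide, not exact figures from the talk):

```python
# Hedged sketch of the storage "waterfall": apply each fractional loss
# sequentially, starting from 100 TB of raw capacity.

def waterfall(raw_tb, losses):
    """Return capacity at each stage after applying each loss in turn."""
    stages = [raw_tb]
    for loss in losses:
        stages.append(stages[-1] * (1 - loss))
    return stages

# raw -> usable (RAID/spares) -> allocated (LUN sizing/performance)
# -> active (unused LUNs/zero access) -> actual data size
stages = waterfall(100.0, [0.30, 0.30, 0.15, 0.40])
for name, tb in zip(["Raw", "Usable", "Allocated", "Active", "Data"], stages):
    print(f"{name:<10}{tb:6.1f} TB")
```

Under this reading, roughly a quarter to a third of the raw capacity ends up holding actual data, which is the order of magnitude behind the "$30,000 return on a $100,000 investment" framing used later.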
Another way to think about this is that our clients are in effect getting a $30,000 return from their $100,000 investment. Another example to help clients visualise the inefficiency: this utilisation is like having a 10-storey building with only 3 floors occupied, and then, when employing 20 more people, buying a new building to put them in. This is in effect what many of our clients are doing today: believing they need, and subsequently buying, more storage when their existing storage is perhaps only 30% utilised. IBM can help our clients address these recognised problems and indeed reverse the trends. So how can we help our customers reverse this trend? Let’s first focus on the 30% loss from usable to allocated. <click> With storage virtualisation technologies, which IBM has had in the market for over 8 years now, we can ensure all usable capacity is grouped into storage pools, enabling it to be carved up without wastage. Think of multiple partially filled glasses of water versus a jug of water shared with straws: there is only one resource to keep as near to full as possible. Automated tiering can then minimise the number of drives required to meet performance requirements whilst ensuring they are fully utilised, always keeping the most frequently accessed data on the fastest technology. Thin provisioning allows you to allocate more storage capacity than you have available, the concept of over-provisioning, reducing the wastage associated with a project that has been over-specified. The efficiency gains from these technologies can be significant; one of our clients achieved a 47% increase in utilisation through storage virtualisation. <click> By combining these technologies, it is possible for our clients’ allocated capacity to actually be higher than the usable capacity.
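The thin-provisioning idea described above can be sketched in a few lines. This is a toy model with invented names, not a product API: volumes advertise a large virtual size, but physical capacity is only drawn from the shared pool as data is actually written.

```python
# Minimal thin-provisioning sketch (assumed class/method names, purely
# illustrative): advertised capacity can exceed physical capacity.

class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb  # real disk behind the pool
        self.used_gb = 0                # physically consumed so far
        self.virtual_gb = 0             # total advertised to hosts

    def create_volume(self, virtual_gb):
        # Over-provisioning: no physical space is reserved up front.
        self.virtual_gb += virtual_gb

    def write(self, gb):
        # Physical capacity is drawn only when data actually lands.
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: add physical capacity")
        self.used_gb += gb

pool = ThinPool(physical_gb=100)
pool.create_volume(80)   # project A asks for 80 GB "to be safe"
pool.create_volume(80)   # project B likewise: 160 GB now advertised
pool.write(30)           # but only 30 GB has actually been written
print(pool.virtual_gb, pool.used_gb)   # 160 advertised, 30 consumed
```

This is why allocated capacity can exceed usable capacity: the pool carries 160 GB of allocations on 100 GB of disk, and capacity forecasting (the reporting step discussed next) is what keeps the pool from running out.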
This is very similar in approach to the over-provisioning of processors that many of our customers are doing today. <click> If we then look at the loss from allocated to active: with the right storage management tools, we can identify any capacity that is not being accessed. If a volume contains no data, it can be released and added back into the storage pool. If it contains data of value that is not being accessed, then with the right automation tools, linked to policies driven from the reporting, it can be moved to a more cost-efficient technology. The reporting tools also become highly valuable, following our previous step, in providing capacity forecasting and reporting, ensuring the right physical capacity is available to satisfy the now over-allocated storage. <click> By understanding what there is, how it is being used, and moving data to the right place, we can ensure all allocated capacity is active. <click> Finally, looking at the data size: data reduction technologies such as compression and deduplication ensure data takes up less physical capacity, and with compression technologies that can be applied to production data without compromising performance, the benefit of these technologies becomes more accessible. IBM’s Real-time Compression technology in particular offers our clients a significant differential compared to the rest of the market. With our compression technology, we can compress Oracle databases by up to 85%, and typically by 80% on average. When discussing this with customers, they often ask whether there is a performance impact from this compression. The answer is yes, a positive impact. The reason is simple: in the end-to-end process of writing or reading data, the slowest part is the moving-parts disk technology. By compressing the data in flight by 80%, we are in effect only writing or reading approximately one fifth of the capacity, speeding up the total process.
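The "one fifth" claim above is simple arithmetic, sketched here with an illustrative helper (the function name and figures are mine, not IBM's):

```python
# Back-of-the-envelope: at an 80% compression ratio, only 20% of the
# logical bytes reach the slow disks, so disk I/O volume shrinks to
# roughly one fifth. Figures are illustrative.

def physical_bytes(logical_bytes, compression_ratio):
    """Bytes actually hitting disk after compression."""
    return logical_bytes * (1 - compression_ratio)

logical = 1_000_000_000                 # a 1 GB logical write
on_disk = physical_bytes(logical, 0.80) # roughly 200 MB hits disk
print(on_disk / logical)                # fraction of I/O remaining, ~0.2
```

At the 50-60% average ratio quoted in the next paragraph, the same arithmetic still roughly halves the bytes moved, which is why the performance question tends to resolve in compression's favour for disk-bound workloads.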
Across all of a customer’s data and file types, we see on average between 50 and 60% compression. In discussions with customers we can ask, “Mr Customer, whilst you clearly will need proof points, what if, at worst, performance was the same, but your entire data was compressed by an average of 60%? Could you afford not to investigate further?” The answer is very typically no, and we can then propose a proof of concept or workshop to determine the compression ratios they are going to achieve. <click> It is through this compression technology in particular that we can ensure our customers’ data size is actually significantly larger than the capacity required to store it. <click> Through the combination of all these technologies, you can see we can help change that storage curve. This will reduce both our clients’ costs and their energy requirements, helping to reduce operational costs. Very importantly, we can also discuss with our clients how we can apply these technologies and efficiencies to their existing infrastructure, helping them gain significant additional value from their existing assets. This is the vision of tomorrow’s infrastructure, utilising technologies that IBM has available today and with which we would like to start working with our clients to deliver a smarter storage infrastructure. We can see here that we can transform that $100,000 investment to return perhaps $140,000, a proposition that we can be sure CFOs will like to hear.
Virtual infrastructures improve efficiency, provide transparent mobility, and give common manageability and capabilities regardless of the type of virtualized resource. VMware leads the way in virtualized x86 server infrastructures; IBM leads the way in virtualized storage infrastructures. Like VMware vSphere Hypervisor for servers, IBM’s Storage Hypervisor comes in both a midrange (Storwize V7000) and an enterprise (SAN Volume Controller) package. Like VMware vCenter Server for servers, IBM’s Tivoli Storage Productivity Center unifies and simplifies virtual storage management.
04/20/12 IBM Confidential
As I mentioned on the previous slide, IBM Active Cloud Engine has multi-site support. That means clients can support Active Clouds. Traditionally, clouds have been implemented for a data center or for a cloud service provider. If you or part of your work group are in another part of the world, you might not have fast access to the information you need to do your job. In an Active Cloud, users share a single view of files and directories in each site, based on their access permissions, of course. Files flow to the users: users access files the same way they do in a single-site environment, and instead of users chasing their files, files migrate to users automatically, as needed. IBM Active Cloud Engine also implements policy-driven file distribution, so you can pre-populate remote sites with files you know people need to access. That enables fast performance for remote users.
Now I would like to introduce our featured announcement. One year ago, IBM set a new standard for midrange storage with the introduction of Storwize V7000. Storwize V7000 is optimized for virtual server environments. It includes: virtual storage for flexibility; Easy Tier for performance; and VMware integration for clients managing their virtualized data centers with VMware tools. Storwize V7000 supports 100% virtualized storage, not just for its own storage but for external storage too, which can improve flexibility across the data center. Storwize V7000 has enterprise-class storage efficiency built in. Besides virtualization and Easy Tier, which I just mentioned, Storwize V7000 includes thin provisioning and efficient snapshots, plus optional sophisticated local and remote mirroring. Storwize V7000 is easy to set up, often in less than a day, and easy to manage: its innovative, intuitive GUI eliminates complexity, and non-disruptive data migration is included.