20. Source: Gartner 2006. Legacy DC: 20,000 ft², 800 kW, 100-200 racks, designed to accommodate 2-3 kW per rack; annual operating expense = $800k. Introducing 1/3 (+33%) high-density infrastructure into a legacy facility is cost prohibitive: annual operating expense = $4.6M* (*peripheral DC costs considered).
Legacy server vs. high-density server:
Power per server: 2-3 kW per rack vs. >20 kW per rack
Power per floor space: 30-40 W/ft² vs. 700-800 W/ft²
Cooling needs (chilled airflow): 200-300 cfm vs. 3,000 cfm
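The airflow figures are roughly what the standard sensible-heat approximation (BTU/hr ≈ 1.08 × cfm × ΔT°F) predicts from the rack power; a minimal sketch, assuming a 25°F supply/return temperature difference (the ΔT is an assumption, not from the slide):

```python
# Rough check of the slide's airflow figures using the standard
# sensible-heat approximation: BTU/hr ~= 1.08 * cfm * delta_T(F).
# The 25 F supply/return temperature difference is an assumption.

def required_cfm(rack_kw, delta_t_f=25.0):
    btu_per_hr = rack_kw * 3412.0          # 1 kW ~= 3412 BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)

for kw in (2.5, 20.0):
    print(f"{kw:>5.1f} kW rack -> ~{required_cfm(kw):,.0f} cfm of chilled air")
# 2.5 kW rack  -> ~316 cfm   (legacy rack, in line with 200-300 cfm)
# 20.0 kW rack -> ~2,527 cfm (high-density rack, in line with ~3,000 cfm)
```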
36. Physical Server, ESX Layer. Benefits: Performance, Scalability, Stability. No Managed Services, VMotion, etc. Browns Virtual Machines
37. Physical Server, ESX Layer, SAN, VMotion, OS. Benefits: Performance, Scalability, Stability. No Managed Services, VMotion, etc. Browns Virtual Machines
38. Physical Server, ESX Layer, Windows OS, SAN, VMotion, Managed Services. Benefits: Performance, Scalability, Stability. Browns Virtual Machines
39.
40.
41. Virtualisation Cost Analysis
No change (three years): As-is cost (hardware, electricity) -$194,166.41; Provisioning of new hardware -$26,974.36; Total -$221,140.77; Greenhouse emissions (tonnes) 387.23. (Assumes software costs are static.)
Virtualisation (three years): Virtualisation hardware -$49,700.00; Gain in productivity $29,587.50; Virtualisation software -$14,700.00; Internal implementation costs (including provisioning) -$6,069.23; Consulting costs -$16,000.00; Total -$56,881.73; Greenhouse emissions (tonnes) 44.28.
Net change $164,259.04 (74% reduction); Greenhouse reduction 114.32 tonnes per annum; Electricity savings $19,053.00 over 3 years; Server count reduction 10; NPV $153,973.94 after 3 years (77%).
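The headline figures on the slide can be reproduced from its own line items; a minimal sketch of that arithmetic (the NPV figure is not recomputed here because the slide doesn't give the discount rate):

```python
# Reproduce slide 41's summary figures from its line items.
no_change_total = -194_166.41 + -26_974.36               # as-is cost + provisioning
virtualised_total = (-49_700.00 + 29_587.50 - 14_700.00   # hardware, productivity gain, software,
                     - 6_069.23 - 16_000.00)               # internal implementation, consulting

net_change = virtualised_total - no_change_total           # saving over three years
pct_reduction = net_change / -no_change_total * 100

emissions_saved_pa = (387.23 - 44.28) / 3                  # tonnes CO2-e per annum

print(f"Net change over 3 years: ${net_change:,.2f} ({pct_reduction:.0f}% reduction)")
print(f"Greenhouse reduction:    {emissions_saved_pa:.2f} tonnes per annum")
# Net change over 3 years: $164,259.04 (74% reduction)
# Greenhouse reduction:    114.32 tonnes per annum
```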
42.
43.
Editor's Notes
Standards-Based Data Center Structured Cabling System Design 3/20/06 Copyright (c) 2006 Ortronics/Legrand. All rights reserved. JS
Transcript: So as we go through the CPI, the critical physical infrastructure, we're going to touch on a few different things. For today's presentation it's mainly going to be power, racks, and air conditioning. However, I wanted to put this slide together just to show you what all that Layer Zero stuff is. This is really a whole separate industry from what we do, but it's something that has a significant impact on how we architect and operate our systems. On the power side, you've got the UPSs, the uninterruptible power supplies; you've got the generators, the batteries, and the power distribution: the PDUs, or power distribution units, which you find both in the racks and out on the floor, branch circuits, distribution panels, etcetera. On the cooling side, you've got CRAC, kind of a funny acronym, but it stands for computer room air conditioning; those are the big air conditioners off in the corner or at the end of the rows. The only other things you'll really see inside the datacenter are the ADUs, the air distribution units. Those don't supply cool air, they just distribute it, so in essence they're just extra fans. I'm not going to cover the chillers, cooling towers, or condensers, because those are typically outside the datacenter room itself. On the rack side, we're getting away from the term cabinets, because for a lot of customers the word cabinet implies an old-style glass-front type of rack. You'll typically hear most people, at least in this industry, refer to the two different types of racks as server racks, which are fully perforated enclosures, and telco racks, which are typically your two- and four-post open frames. Now we have seen a major trend of consolidation, with people starting to bring the core or the MDF into the same room as the servers and storage. And we'll talk about some of the issues around airflow, because a lot of our equipment still cools right to left, whereas most servers and storage cool front to back. We'll touch a little bit on structured cabling, but not too much for this presentation, and we're not going to get into security and fire suppression at all. It's also important to touch on the management methodologies employed in this industry. It's still a very legacy, analog style of building management system that you'll typically see a lot of the UPS and air conditioning managed by. It's very reactive, and most of the time the IT department has no insight into it, even though these are arguably the most critical components in the datacenter. So there's a large opportunity for Cisco here, something we often refer to as Cisco Connected Real Estate or CCRE, to start migrating customers off of these BMS systems and onto an IP-based network. What you will also find, and this is nice for us in terms of positioning because the competitive implications aren't as great, is that a lot of times customers want redundancy in their management platforms. They want to keep the BMS and let the Facilities guys still look at that stuff, but they want to complement it with an IP-based network. So that migration doesn't necessarily have to be off the BMS; it can be in parallel, but it's a case-by-case basis. I think we all know the benefits of an IP-based network; all you're doing is telling the customer they can extend that reach to the most critical components in their datacenter, the power and the cooling.
Author's Original Notes: BILL LULOFS. One of the most important slides: what products do we consider 'Data Center Products'? Our intent is, over time, to support JUST this portfolio of products with VFrame and Fabric Manager.
Transcript: So when you actually go out to a customer site and you're walking through the datacenter, a great way to appear credible with these guys is to actually go and look at the Facilities equipment. Where is it, what does it look like, what does it do? We've put this slide up as a graphical aid to show you where some of this stuff lives. When you're looking at the cooling infrastructure, it's typically off at the end of the rows: those big gray boxes without much instrumentation on the front, maybe just a small LED display. It's important to go over, look at that, and ask the customer how heavily it's loaded, what efficiency it's running at, how they're managing it, and whether they're getting alerts, like an SMS to the phone, if something goes wrong. These are all good questions to be asking. Depending on how facility-savvy the IT guy you're meeting with is, they may or may not know, but either way it shows them that you know what you're talking about in this space. Now the ADUs, the air distribution units, are a fairly new approach, and you won't always see them in the datacenter. If you do, it's probably a good indication that the customer is either fairly savvy on the facility side or they've been burnt: literally, they've got hot spots they need to get rid of, and we'll show you a little more about what those look like. These will typically sit on the outside of a rack, or actually inside as a rack-mounted piece of equipment. On the power side, the UPS and batteries: in a lot of large enterprise-type datacenters, UPSs are still in standalone rooms. These may be down in the basement and fairly far from the datacenter. Whether you go take a look depends on your comfort level, but if you do get to that level of comfort, ask the same questions about the UPS that you ask about the CRAC, the air conditioning units: how heavily is it loaded, what's its efficiency, how much runtime does it have, what's the battery refresh cycle? Is it proactively monitored and managed, specifically the batteries? That's a great question, because a lot of times if those aren't proactively monitored and power goes down, you basically lose your whole datacenter because the battery bank wasn't ready to supply power. It happens more often than you'd think. Generators outside the datacenter are probably something you may not want to get into. But what I would point out is, when you're talking to a customer and he asks what he should do for UPS battery runtime, the first question to ask is: do you have a generator? If they do, you can go with a much smaller battery runtime, literally five or ten minutes tops, because it takes less than a minute for a generator to spin up. So if they have a generator in place, it's set to go, and it's regularly tested, you can really get away with a much smaller battery. It's a good way to save money and also reduce your environmental impact from recycling batteries. On the racks, I'm not going to talk too much, because at the end of the day it's bent metal, but we will talk about some of the structured cabling, the air distribution, and the locations of a rack SOE, a standard operating environment.
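To make the battery-runtime point concrete, here's a hypothetical sizing sketch; the load, runtime, and inverter-efficiency figures are illustrative assumptions, not from the transcript:

```python
# Hypothetical sketch: how much energy a UPS battery bank must deliver
# for a given runtime. With a tested generator, a 5-10 minute runtime
# needs a far smaller (and cheaper) bank than a 30-minute one.

def battery_kwh(load_kw, runtime_min, inverter_efficiency=0.92):
    """Energy the battery bank must supply to hold the load for runtime_min."""
    return load_kw * (runtime_min / 60.0) / inverter_efficiency

load_kw = 100.0                     # illustrative IT load
for minutes in (10, 30):            # with generator vs. without
    print(f"{minutes:>2} min runtime -> ~{battery_kwh(load_kw, minutes):.0f} kWh of battery")
# 10 min runtime -> ~18 kWh of battery
# 30 min runtime -> ~54 kWh of battery
```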
Transcript: So switching gears a little and focusing specifically on cooling: when we look at CRAC units, or what are sometimes called CRAH, computer room air handlers (I think that acronym came about because CRAC was a little bit loaded), they're typically very inefficient, very unmanageable, and without a doubt the biggest consumer of power in the datacenter; we'll talk a little more about why later. And again, we mentioned the air distribution units earlier, which basically just help get the cold air where it needs to go.
Transcript: Now, one of the new cooling methodologies that's very compelling, again from our partner APC: they're claiming they can cool up to 30 kilowatts in a standard server rack. To give a comparison, if you fully populated a rack with an HP blade system, so four chassis, that's only going to be roughly 15 kilowatts, so they can cool almost double what's required today. We'll talk a little more about what we see out there for common rack densities, but 15 kilowatts is pretty high for a production environment; you typically don't even see densities that high today, though a lot of the analysts are saying that's where it's going. Basically, this in-row cooling design separates the racks by about eight inches and puts a self-contained cooling unit in between, as a closed-return system. What that means is it supplies the cold air to the front of the rack and pulls the hot air off the back of the rack, right in between the two racks, so it's a very, very efficient system. It does require a little more space, but when you net it out against the air handlers that typically sit at the end of the row, which need a lot of room for access, it ends up being about the same. A very innovative solution that customers should take note of, particularly on new datacenter designs where it's much easier to specify.
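A quick sanity check of the densities mentioned above; the per-chassis draw is an assumption chosen to match the ~15 kW figure in the talk:

```python
# Compare a fully populated blade rack against the claimed in-row cooling capacity.
chassis_per_rack  = 4
kw_per_chassis    = 3.75       # assumed draw per blade chassis (~15 kW per rack, as in the talk)
inrow_capacity_kw = 30.0       # vendor-claimed per-rack in-row cooling capacity

rack_load_kw = chassis_per_rack * kw_per_chassis
print(f"Rack load ~{rack_load_kw:.0f} kW vs. {inrow_capacity_kw:.0f} kW of in-row cooling "
      f"({inrow_capacity_kw / rack_load_kw:.0f}x headroom)")
# Rack load ~15 kW vs. 30 kW of in-row cooling (2x headroom)
```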
Transcript: I'm not going to read every one of these ten steps, but this is a good reference slide for you to take a look at some top ten steps. First and foremost, and this is where we can help customers, they need to establish a baseline. Once you have a baseline or a benchmark, you can start to put policies in place that help you improve your overall efficiency. Most customers today on the IT side, and to some extent the Facilities side, don't even have a baseline on how efficient their datacenter is, because it's a very manual process today. We're looking at ways to automate that process, and that's what we're working towards with some of the product business units: looking at how we can help customers manage efficiency in their datacenter. But it's really too early to say much more than that we're looking at it. So this is a good best-practices slide that you can share with a customer to say here are ten things you can do today that will help.
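The transcript doesn't name a specific baseline metric; one common choice is PUE, total facility power divided by IT load. A minimal sketch, with illustrative metered figures:

```python
# Minimal efficiency baseline: PUE = total facility power / IT load.
# The metered figures below are illustrative assumptions.

def pue(total_facility_kw, it_load_kw):
    return total_facility_kw / it_load_kw

total_kw, it_kw = 800.0, 400.0        # e.g. metered at the utility feed and at the PDUs
print(f"PUE = {pue(total_kw, it_kw):.2f}  (1.0 is ideal; legacy rooms often run 2.0 or higher)")
# PUE = 2.00  (1.0 is ideal; legacy rooms often run 2.0 or higher)
```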
Transcript: Again, I'm not going to read through each point, but here are some best practices on the power side. The first one is to look at the cooling top ten steps; again, cooling is roughly 50 percent of the power consumed in the datacenter, so it has the biggest impact. The next one I'll mention is to standardize on a standard operating environment for your racks: standardize on how many kilowatts you're going to spec per rack, and design the power and cooling based on that. It's a much simpler way to do it, much better in terms of availability, and much better in terms of efficiency. So again, please feel free to share this with anybody who asks questions.
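A rough illustration of the per-rack SOE idea above; all figures here are illustrative assumptions, not a recommended design point:

```python
# Illustrative sketch: sizing power and cooling from a per-rack standard
# operating environment (SOE) instead of per-device guesses.
racks          = 100
kw_per_rack    = 6.0          # the SOE you standardize on (assumption)
cooling_margin = 1.2          # design headroom above IT load (assumption)

it_load_kw = racks * kw_per_rack
cooling_kw = it_load_kw * cooling_margin
print(f"Design for ~{it_load_kw:.0f} kW of IT load and ~{cooling_kw:.0f} kW of cooling capacity")
# Design for ~600 kW of IT load and ~720 kW of cooling capacity
```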
[Walk through and summarize customer benefits]
"Dell IT target is to decommission 10% and virtualize >60% of workloads." Consider a data center with 1,000 physical servers. Typically we find:
- 10%, or 100 servers, can be decommissioned as they are no longer in use.
- 10%, or 100 servers, can be consolidated to 50 servers without virtualization, e.g. by assigning more users/folders per more modern file server.
- 60% of servers can be virtualized, consolidating 600 servers down to ~50.
- 100 servers may run efficiently without virtualization, but would benefit from more power-efficient servers.
- It is likely that some servers, say 10%, are best left alone.
So we reach up to a 70% server volume reduction, from 1,000 servers to 300. The virtualized servers are likely to run at higher utilization and draw more power per server; however, using modern energy-efficient servers and storage can save nearly as much power.
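The arithmetic behind the 70% figure above, as a minimal sketch of the buckets just listed:

```python
# Walk the hypothetical 1,000-server estate through each bucket.
start = 1000
after = (
    0      # 10% (100) decommissioned
    + 50   # 10% (100) consolidated 2:1 without virtualization
    + 50   # 60% (600) virtualized onto ~50 hosts
    + 100  # 10% (100) refreshed onto more power-efficient servers
    + 100  # 10% (100) left alone
)
print(f"{start} -> {after} servers ({(start - after) / start:.0%} reduction)")
# 1000 -> 300 servers (70% reduction)
```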
Let's look at a real-life customer example. In 2003, the IT service arm of a leading North American energy company started a consolidation project, using VMware Infrastructure as the primary vehicle for achieving this. The customer had 1,000+ x86 industry-standard servers:
- >100 SQL servers
- Lotus Notes mail servers
- Citrix servers
- Lots of line-of-business applications
Using VMware, they reduced their server count to 80, approximately a 12-to-1 reduction. What is interesting is how consolidation has affected their infrastructure beyond the reduced server count. Virtualization-based consolidation not only affects the servers that are consolidated, but raises the quality and service levels of everything around it, including storage, network, and facilities. Specifically, in this project:
- Consolidation of storage: moved from siloed direct-attached storage to highly available tiered storage, specifically a Hitachi SAN.
- A 10:1 consolidation of network ports, or in the words of the customer, "from spaghetti wiring to world-class datacenter."
- Dramatic power savings and cost avoidance on the facilities and hardware infrastructure, with a 20-to-1 reduction in racks and power whips.
These are significant savings in hardware and further reduce power consumption beyond the servers themselves.
--- So what's been the net impact of this virtualization project?