Three Best Practices for Optimizing your IT Infrastructure
In a survey by the Uptime Institute, 42% of enterprise data center managers reported that they would run out of power capacity within 24 months. This statistic isn't surprising when you consider that today's IT hardware requires more power distribution, air conditioning, and UPS capacity than in the past.
What steps is your data center taking to mitigate the disruptions to availability, reliability, and uptime caused by a loss of capacity?
To view the recorded webinar event, please visit http://www.42u.com/it-optimization-webinar.htm
Ladies and gentlemen: Thanks for standing by, and welcome to today’s session in the 42U Web Seminar Series. Today’s presentation is entitled “Three Best Practices for Optimizing Your IT Infrastructure.” During the presentation, all participants will be in a listen-only mode. However, we encourage your questions or comments at any time through the “chat” feature located at the lower left of your screen. These questions will be addressed as time allows. As a reminder, this Web Seminar is being recorded today, May 14, 2008, and a recording will be sent to all attendees within 48 hours.
Transition: Thank you, Rebecca. I’d like to keep this as a very focused discussion today, so I’m going to outline a very specific agenda. First up, we’re going to talk about how the growth in energy demands is impacting many areas of the data center environment. Next, we’ll address what we have seen as best practices for optimization, and what we feel are the three primary focus areas. #1, benchmarking your current environment to get a real sense of where you are starting from in your level of efficiency. #2, refining your cooling approach: what we’ve seen in the way of new technologies that are really helping people get a handle on both the impact of higher-density environments and the bottom line. And finally, #3, designing your power infrastructure in a way that promotes throughput efficiency while at the same time reducing overall consumption. Without further ado, I’d like to turn it over to Patrick. Patrick?
Transition: That’s a pretty powerful statement by the ACEEE. Has the US Government spent any of our tax dollars on studying this issue as well? I’ve noticed they like to get involved in these kinds of things from time to time.
Transition: That information from the EPA study is pretty interesting, too- and shows that this problem is getting some real focus. Patrick, do you have any information on what exactly is driving this power increase these studies identified?
With the cost of energy skyrocketing at the same time as user demand for more performance is increasing, IT managers are facing a power, space, and cost crunch. Some data center managers have already reported that the cost of electricity and cooling in the data center exceeds the cost of the equipment itself. IDC estimates that for every $1.00 spent on new data center hardware, an additional $0.50 is spent on power and cooling, more than double the amount of five years ago. According to Gartner, 70 percent of CIOs report that power and/or cooling issues are now their single largest problem in the data center. Gartner estimates that 50 percent of data centers in 2008 will have insufficient power and cooling capacity to meet demand, with 48 percent of the data center budget being spent on energy, up from 8 percent a few years ago. It is clear that IT managers need new ways to reduce power as they increase data center performance.
Blade servers are a key server consolidation and infrastructure management technology whose deployment can deliver the needed increase in performance while giving data center managers new ways to cut power consumption and costs. It may seem counterintuitive: blade servers pack more horsepower into smaller chassis and enable a greater concentration of compute power in the data center. With multiple, highly compact blade servers, a single blade server chassis can deliver more compute resources than racks of individual server towers or rows of server racks. But this increased density can translate into higher power consumption per square foot. How does one get around this physical limitation? Besides adopting blade server architectures in the data center, it takes smarter planning, superior power management tools, and effective use of advanced new technologies such as virtualization.
Transition: Interesting to see that broken out- it certainly seems that even small percentage increases in efficiency can make a significant difference in your energy costs. Can you provide any more detail on where this energy goes?
Thanks for that, Patrick. From this slide, it’s pretty clear that those tiny wattage savings can add up to a pretty big number. It really seems that any positive changes you can make to the efficiency of the parts that make up a data center are becoming more and more beneficial. I guess the next thing to do would be to go ahead and start looking at the physical ways to begin optimizing the environment. I believe we identified three ways to begin, and the first was to establish a benchmark. How do we go about doing that?
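To see how those tiny wattage savings add up, here is a back-of-the-envelope calculation; the electricity rate, PUE multiplier, and server counts below are illustrative assumptions, not figures from the presentation:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_cost_per_watt(rate_per_kwh: float, pue: float = 2.0) -> float:
    """Annual cost of one watt of IT load, including the facility
    overhead (cooling, power distribution) captured by the PUE multiplier."""
    kwh_per_year = 1 / 1000 * HOURS_PER_YEAR  # one watt running for a year, in kWh
    return kwh_per_year * rate_per_kwh * pue

# Assumed $0.10/kWh and a PUE of 2.0: each watt costs about $1.75 per year,
# so shaving 10 W from each of 1,000 servers saves on the order of $17,500/year.
per_watt = annual_cost_per_watt(0.10)
print(round(per_watt, 2))              # 1.75
print(round(per_watt * 10 * 1000, 0))  # 17520.0
```

Even at these modest assumed rates, a watt saved at the server is roughly two watts saved at the meter, which is why component-level efficiency gains compound so quickly.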
Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE) Transition: Hey, that’s some great technical information- can we take a look at a good graphic to help us tie this all together?
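To tie the two metrics together, here is a quick worked example; the kilowatt figures are hypothetical, not from the webinar:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center Infrastructure Efficiency: the inverse of PUE,
    expressed as a percentage of power that reaches IT equipment."""
    return it_equipment_kw / total_facility_kw * 100

# Hypothetical facility: 1,000 kW drawn at the meter, 500 kW reaching IT gear.
print(pue(1000, 500))   # 2.0  -- half the power is overhead
print(dcie(1000, 500))  # 50.0 -- 50% of power does useful IT work
```

The two metrics carry the same information; PUE closer to 1.0 (or DCiE closer to 100%) means less power lost to cooling and distribution overhead.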
Is energy measured at the component level? Is CPU throttling enabled on the servers, and does the performance lab measure the range of power consumed under a variety of loads? Is thermal profiling used to identify hot spots and overcooling? Are energy capacities monitored from the total data center level all the way down to the circuit level? Is energy usage continuously monitored to determine peak and low energy demands? Is feedback of live data available to individual organizations, allowing them to react appropriately? Is the energy savings plan documented? Transition: It certainly seems that measuring power at multiple points in the flow is important and necessary to get a complete picture. Is the technology available today to really capture the information we need, wherever we want to measure it? ((PATRICK CHATS ABOUT THIS TECHNOLOGY))
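The continuous-monitoring items in that checklist amount to tracking peak and low demand per circuit over time. A minimal sketch of that bookkeeping, assuming invented circuit names and meter readings:

```python
from collections import defaultdict

# Rolling peak/low demand per circuit, fed by periodic meter readings.
peaks = defaultdict(lambda: float("-inf"))
lows = defaultdict(lambda: float("inf"))

def record(circuit: str, kw: float) -> None:
    """Update peak and low demand for one circuit from a new reading (kW)."""
    peaks[circuit] = max(peaks[circuit], kw)
    lows[circuit] = min(lows[circuit], kw)

# Hypothetical readings from two branch circuits over a day.
for circuit, kw in [("PDU-A1", 3.2), ("PDU-A1", 4.8),
                    ("PDU-A1", 2.9), ("PDU-B2", 6.1)]:
    record(circuit, kw)

print(peaks["PDU-A1"], lows["PDU-A1"])  # 4.8 2.9
```

In practice this data would come from branch-circuit meters or intelligent PDUs, but the principle is the same: the spread between peak and low demand tells you how much headroom each circuit actually has.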
Transition 2: Interesting. But power seems to be only one part of the equation. What other information do we need to gather to be able to get the greatest accuracy in our benchmark results?
Cables, pipes, etc. Cable openings, perforated tile placement, etc. Cabinet layout, cooling unit orientation, ceiling height, etc. Patrick <paraphrase>: This leads to the manifestation of hot spots? Transition: Fantastic information on improving cooling efficiency- again, it seems that examining and improving the small details really adds up to make a big impact. So what should people expect out of benchmarking their data center?
Data center reliability
Optimization of your current cooling infrastructure
Enablement of precision cooling to eliminate hot spots
Reduction in bypass airflow
ASHRAE compliance
Complete understanding of your data center’s environment, including cooling requirements and deficiencies
Transition: The data we’ve found sure seems to point to the fact that by doing this assessment, a company can get a lot of good, measurable data to help form their efficiency strategy. What have you seen as the best way to get started?
Transition: Okay, that makes sense. Just as a quick note: so far in our best practices discussion, we’ve talked about benchmarking to really know where you’re starting from. Let’s say we’ve got that benchmarking task complete. Now what’s next? Would this be a good time to start talking about optimizing the cooling approach?
Here is what industry analysts are saying:
Significant increase in kW per rack
Energy prices are increasing
Space utilization numbers are down because data centers are now full from a power and cooling perspective
Transition: That’s a really interesting slide, Patrick- power, heat, and utility costs are increasing, and the space you can use in the racks to house the equipment is shrinking! Seems like the wrong direction to be headed everywhere you look. Can you talk about basic cooling designs- kind of where we’ve been, and where we’re going?
Cold air escapes through cable cutouts
Escaping cold air reduces static pressure, resulting in insufficient cold aisle airflow
The result is vertical and zone hot spots in high heat load areas
Transition: Patrick- hang on- does this hot & cold aisle scenario best address the needs of today’s high density server environments? Seems like this kind of solution has been out there a while- there’s got to be a better way.
Transition: We’ve actually seen some early success stories using the close-coupled cooling solution. Some customers are cooling up to 30 kW per rack, with still further expansion capability. Patrick, could you go into any more detail on that?
How would you set this up in an N+1 configuration? Transition: Okay, this type of solution continues to look interesting. But here’s a question for you- in most data centers, redundancy is a key concern. Can you address that in this sort of cooling scenario?
Transition: Hey, I hate to say this, but I do see one potential problem here. In this case, aren’t you using a lot of floor space? Real estate’s not getting any cheaper, you know!
Transition: Okay- got it. You’re actually increasing the cooling capacity in a lot less space- pretty neat. How does this affect your Total Cost of Ownership calculations from a budget perspective?
Patrick to discuss slide:
- Best case scenario: 45% real estate savings
- Best case scenario: 20% energy savings
Transition: This makes a lot of sense, Patrick. This kind of solution really allows you to consolidate your equipment into fewer cabinets, as well as showing some serious energy savings. Okay, I’m understanding the cooling part pretty well. Now, what about improving the efficiency of your power? How does that work- you just plug stuff in, and that’s it- right?
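Those best-case percentages translate directly into a quick savings estimate for a given site; the baseline cost inputs below are made-up illustrations, not customer data:

```python
def close_coupled_savings(annual_energy_cost: float,
                          annual_real_estate_cost: float,
                          energy_pct: float = 0.20,
                          real_estate_pct: float = 0.45) -> float:
    """Best-case annual savings from the slide's figures:
    20% of energy spend plus 45% of real estate spend."""
    return (annual_energy_cost * energy_pct
            + annual_real_estate_cost * real_estate_pct)

# Hypothetical site: $200k/yr on energy, $100k/yr on data center floor space.
print(round(close_coupled_savings(200_000, 100_000), 2))  # 85000.0
```

Actual results would of course depend on the site; the slide presents these as best-case figures, so a conservative estimate would apply smaller percentages.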
Transition: Gadzooks, that slide has a lot of information. Can you help me break this down into some smaller, more easily digestible areas of focus for an efficiency improvement discussion?
Give a brief 40-second introduction to the design and capabilities of the system. This will lead into the market drivers and conditions regarding MACs, the concept of modularity, and the market the product was designed for. Transition: ((AFTER PATRICK SAYS FLYWHEEL)) Flywheels? That sounds neat, but is that really practical for a high-load data center environment? I’m all about new technology, but I’ve heard that you can only get about 15 seconds of carryover with those things! Is that really helpful? Transition 2: Okay, high-efficiency UPS is one area of focus- what do you think of PDUs- would that be another area of interest?
Transition: It’s amazing that the UPS and PDUs in the data center can have so much impact on how power is used, and how it affects the equipment it gives life to. Alright, these seem like good best-practice focus areas. Let’s say a company wanted to move forward with these things we’ve been discussing- what would you recommend to them?
Thanks for all that great information, Patrick! At this time we’re going to open up our conference for Q&A. As a reminder, if you have a question, please type it into the box in the lower left-hand corner of your screen, and then choose the ‘Chat’ button. We’ve had several questions already, so we’ll just start with those first as the rest come in. Also, if you’d like to receive a copy of today’s presentation, please email the address shown. (((DO THE Q&A))) At this time, it looks like we don’t have any time for more questions. I would like to once again thank Patrick for joining us today. Thanks also to everyone for joining our presentation. Don’t forget, if you’d like more specific information on a particular application for what we’ve been discussing, we’re here to help- do get in touch. Have a great day!