A White Paper from the Experts in Business-Critical Continuity




Energy Logic: Reducing Data Center Energy Consumption by
Creating Savings that Cascade Across Systems
Executive Summary

A number of associations, consultants and vendors have promoted best practices for enhancing data center energy efficiency. These practices cover everything from facility lighting to cooling system design, and have proven useful in helping some companies slow or reverse the trend of rising data center energy consumption. However, most organizations still lack a cohesive, holistic approach for reducing data center energy use.

Emerson Network Power analyzed the available energy-saving opportunities and identified the top ten. Each of these ten opportunities was then applied to a 5,000-square-foot data center model based on real-world technologies and operating parameters. Through the model, Emerson Network Power was able to quantify the savings of each action at the system level, as well as identify how energy reduction in some systems affects consumption in supporting systems.

The model demonstrates that reductions in energy consumption at the IT equipment level have the greatest impact on overall consumption because they cascade across all supporting systems. This led to the development of Energy Logic, a vendor-neutral roadmap for optimizing data center energy efficiency that starts with the IT equipment and progresses to the support infrastructure. This paper shows how Energy Logic can deliver a 50 percent or greater reduction in data center energy consumption without compromising performance or availability.

This approach has the added benefit of removing the three most critical constraints faced by data center managers today: power, cooling and space. In the model, the 10 Energy Logic strategies freed up two-thirds of floor space, one-third of UPS capacity and 40 percent of precision cooling capacity.

All of the technologies used in the Energy Logic approach are available today, and many can be phased into the data center as part of regular technology upgrades/refreshes, minimizing capital expenditures.

The model also identified some gaps in existing technologies that, if addressed, could enable greater energy reductions and help organizations make better decisions regarding the most efficient technologies for a particular data center.




Introduction

The double impact of rising data center energy consumption and rising energy costs has elevated the importance of data center efficiency as a strategy to reduce costs, manage capacity and promote environmental responsibility.

Data center energy consumption has been driven by the demand within almost every organization for greater computing capacity and increased IT centralization. While this was occurring, global electricity prices increased 56 percent between 2002 and 2006.

The financial implications are significant; estimates of annual power costs for U.S. data centers now range as high as $3.3 billion.

This trend impacts data center capacity as well. According to the Fall 2007 Survey of the Data Center Users Group (DCUG®), an influential group of data center managers, power limitations were cited as the primary factor limiting growth by 46 percent of respondents, more than any other factor.

In addition to financial and capacity considerations, reducing data center energy use has become a priority for organizations seeking to reduce their environmental footprint.

There is general agreement that improvements in data center efficiency are possible. In a report to the U.S. Congress, the Environmental Protection Agency concluded that best practices can reduce data center energy consumption by 50 percent by 2011.

The EPA report included a list of Top 10 Energy Saving Best Practices as identified by the Lawrence Berkeley National Lab. Other organizations, including Emerson Network Power, have distributed similar information and there is evidence that some best practices are being adopted.

The Spring 2007 DCUG Survey found that 77 percent of respondents already had their data center arranged in a hot-aisle/cold-aisle configuration to increase cooling system efficiency, 65 percent use blanking panels to minimize recirculation of hot air and 56 percent have sealed the floor to prevent cooling losses.

While progress has been made, an objective, vendor-neutral evaluation of efficiency opportunities across the spectrum of data center systems has been lacking. This has made it difficult for data center managers to prioritize efficiency efforts and tailor best practices to their data center equipment and operating practices.

This paper closes that gap by outlining a holistic approach to energy reduction, based on quantitative analysis, that enables a 50 percent or greater reduction in data center energy consumption.




Data Center Energy Consumption

The first step in prioritizing energy saving opportunities was to gain a solid understanding of data center energy consumption. Emerson Network Power modeled energy consumption for a typical 5,000-square-foot data center (Figure 1) and analyzed how energy is used within the facility. Energy use was categorized as either “demand side” or “supply side.”

Demand-side systems are the servers, storage, communications and other IT systems that support the business. Supply-side systems exist to support the demand side. In this analysis, demand-side systems (processors, server power supplies, other server components, storage and communication equipment) account for 52 percent of total consumption. Supply-side systems include the UPS, power distribution, cooling, lighting and building switchgear, and account for 48 percent of consumption.

Information on the data center and infrastructure equipment and operating parameters on which the analysis was based is presented in Appendix A. Note that all data centers are different and the savings potential will vary by facility. However, at minimum, this analysis provides an order-of-magnitude comparison for data center energy reduction strategies.

The distinction between demand and supply power consumption is valuable because reductions in demand-side energy use cascade through the supply side. For example, in the 5,000-square-foot data center used to

Computing equipment, 52 percent (demand side): processor 15%, server power supply 14%, other server components 15%, storage 4%, communication equipment 4%.
Support systems, 48 percent (supply side): cooling 38%, UPS 5%, PDU 1%, building switchgear/MV transformer 3%, lighting 1%.

Category                                            Power Draw*
Computing                                           588 kW
Lighting                                            10 kW
UPS and distribution losses                         72 kW
Cooling power draw for computing and UPS losses     429 kW
Building switchgear/MV transformer/other losses     28 kW
TOTAL                                               1127 kW

Figure 1. Analysis of a typical 5,000-square-foot data center shows that demand-side computing equipment accounts for 52 percent of energy usage and supply-side systems account for 48 percent.

* This represents the average power draw (kW). Daily energy consumption (kWh) can be captured by multiplying the power draw by 24.
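To make the bookkeeping in Figure 1 concrete, the short sketch below tallies the modeled power draws and converts them to daily energy using the 24-hour multiplication noted in the footnote. It is a minimal illustration of the figure's arithmetic; the category names and values come directly from Figure 1, and nothing else is assumed.

```python
# Average power draw (kW) for the modeled 5,000-square-foot data center, per Figure 1.
power_draw_kw = {
    "computing": 588,
    "lighting": 10,
    "ups_and_distribution_losses": 72,
    "cooling_for_computing_and_ups_losses": 429,
    "switchgear_mv_transformer_other_losses": 28,
}

total_kw = sum(power_draw_kw.values())        # 1127 kW total facility draw
demand_kw = power_draw_kw["computing"]        # demand-side (IT) load
supply_kw = total_kw - demand_kw              # everything that supports it

print(f"Total facility draw: {total_kw} kW")
print(f"Demand side: {demand_kw / total_kw:.0%}, supply side: {supply_kw / total_kw:.0%}")

# Footnote to Figure 1: daily energy (kWh) = average power draw (kW) x 24 hours.
print(f"Daily energy consumption: {total_kw * 24} kWh")
```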



analyze energy consumption, a 1 Watt reduction at the server component level (processor, memory, hard disk, etc.) results in an additional 1.84 Watt savings in the power supply, power distribution system, UPS system, cooling system and building entrance switchgear and medium voltage transformer (Figure 2). Consequently, every Watt of savings that can be achieved on the processor level creates a total of 2.84 Watts of savings for the facility.

The Energy Logic Approach

Energy Logic takes a sequential approach to reducing energy costs, applying the 10 technologies and best practices that exhibited the most potential in the order in which they have the greatest impact.

While the sequence is important, Energy Logic is not intended to be a step-by-step approach in the sense that each step can only be undertaken after the previous one is complete. The energy saving measures included in Energy Logic should be considered a guide. Many organizations will already have undertaken some measures at the end of the sequence or will have to deploy some technologies out of sequence to remove existing constraints to growth.

The first step in the Energy Logic approach is to establish an IT equipment procurement policy that exploits the energy efficiency benefits of low power processors and high-efficiency power supplies.

As these technologies are specified in new equipment, inefficient servers will be phased out and replaced with higher-efficiency units, creating a solid foundation for an energy-optimized data center.

Power management software has great potential to reduce energy costs and should be considered as part of an energy optimization strategy, particularly for data centers that have large differences between peak and average utilization rates. Other facilities may


The Cascade Effect: a 1 Watt saving at the server component level grows to a cumulative facility saving of 1.18 W after the DC-DC conversion, 1.49 W after the AC-DC conversion, 1.53 W after power distribution, 1.67 W after the UPS, 2.74 W after cooling, and 2.84 W after the building switchgear and medium voltage transformer.

Figure 2. With the Cascade Effect, a 1 Watt savings at the server component level creates a reduction in facility energy consumption of 2.84 Watts.
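The cascade in Figure 2 can be read as a lookup table: every watt removed at the server component level also removes the losses each upstream system would have incurred carrying that watt. The short sketch below simply encodes the figure's cumulative values and scales them linearly to an arbitrary component-level saving; the numbers are taken from Figure 2 and nothing else is assumed.

```python
# Cumulative facility-level saving (W) per 1 W saved at the server component level,
# as read from Figure 2.
CASCADE_CUMULATIVE_W = [
    ("Server component", 1.00),
    ("+ DC-DC conversion", 1.18),
    ("+ AC-DC conversion", 1.49),
    ("+ Power distribution", 1.53),
    ("+ UPS", 1.67),
    ("+ Cooling", 2.74),
    ("+ Building switchgear / transformer", 2.84),
]

def facility_saving_w(component_saving_w: float) -> float:
    """Scale the 1 W cascade linearly to an arbitrary component-level saving."""
    return component_saving_w * CASCADE_CUMULATIVE_W[-1][1]

for stage, cumulative in CASCADE_CUMULATIVE_W:
    print(f"{stage:38s} {cumulative:5.2f} W saved per 1 W at the component")

print(f"A 10 W component saving is worth about {facility_saving_w(10):.1f} W for the facility.")
```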


choose not to employ power management because of concerns about response times. A significant opportunity exists within the industry to enhance the sophistication of power management to make it an even more powerful tool in managing energy use.

The next step involves IT projects that may not be driven by efficiency considerations, but have an impact on energy consumption. They include:

• Blade servers
• Server virtualization

These technologies have emerged as “best practice” approaches to data center management and play a role in optimizing a data center for efficiency, performance and manageability.

Once policies and plans have been put in place to optimize IT systems, the focus shifts to supply-side systems. The most effective approaches to infrastructure optimization include:

• Cooling best practices
• 415V AC power distribution
• Variable capacity cooling
• Supplemental cooling
• Monitoring and optimization

Emerson Network Power has quantified the savings that can be achieved through each of these actions individually and as part of the Energy Logic sequence (Figure 3). Note that savings for supply-side systems look smaller when taken as part of Energy Logic because those systems are now supporting a smaller load.

Reducing Energy Consumption and Eliminating Constraints to Growth

Employing the Energy Logic approach to the model data center reduced energy use by 52 percent without compromising performance or availability.

In its unoptimized state, the 5,000-square-foot data center model used to develop the Energy Logic approach supported a total compute load of 529 kW and total facility load of 1127 kW. Through the optimization strategies presented here, this facility has been transformed to enable the same level of performance using significantly less power and space. Total compute load was reduced to 367 kW, while rack density was increased from 2.9 kW per rack to 6.1 kW per rack.

This has reduced the number of racks required to support the compute load from 210 to 60 and eliminated power, cooling and space limitations constraining growth (Figure 4).

Total energy consumption was reduced to 542 kW and the total floor space required for IT equipment was reduced by 65 percent (Figure 5).

Energy Logic is suitable for every type of data center; however, the sequence may be affected by facility type. Facilities operating at high utilization rates throughout a 24-hour day will want to focus initial efforts on sourcing IT equipment with low power processors and high efficiency power supplies. Facilities that experience predictable peaks in activity may achieve the greatest benefit from power management technology. Figure 6 shows how compute load and type of operation influence priorities.




Energy Saving Action | Savings Independent of Other Actions, kW (%) | Energy Logic Savings with the Cascade Effect, kW (%) | Cumulative Savings (kW) | ROI
Lower power processors | 111 (10%) | 111 (10%) | 111 | 12 to 18 mo.
High-efficiency power supplies | 141 (12%) | 124 (11%) | 235 | 5 to 7 mo.
Power management features | 125 (11%) | 86 (8%) | 321 | Immediate
Blade servers | 8 (1%) | 7 (1%) | 328 | TCO reduced 38%*
Server virtualization | 156 (14%) | 86 (8%) | 414 | TCO reduced 63%**
415V AC power distribution | 34 (3%) | 20 (2%) | 434 | 2 to 3 mo.
Cooling best practices | 24 (2%) | 15 (1%) | 449 | 4 to 6 mo.
Variable capacity cooling: variable speed fan drives | 79 (7%) | 49 (4%) | 498 | 4 to 10 mo.
Supplemental cooling | 200 (18%) | 72 (6%) | 570 | 10 to 12 mo.
Monitoring and optimization: cooling units work as a team | 25 (2%) | 15 (1%) | 585 | 3 to 6 mo.

* Source for blade impact on TCO: IDC. ** Source for virtualization impact on TCO: VMware.

Figure 3. Using the model of a 5,000-square-foot data center consuming 1127 kW of power, the actions included in the Energy Logic approach work together to produce a 585 kW reduction in energy use.
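The sequential bookkeeping behind Figure 3 can be sketched in a few lines: each action's "with the cascade effect" saving is subtracted in order from the 1127 kW baseline, which is how the cumulative column reaches 585 kW. This is a minimal illustration using only the numbers in the table.

```python
# "Energy Logic savings with the cascade effect" (kW), in sequence, from Figure 3.
ENERGY_LOGIC_ACTIONS_KW = [
    ("Lower power processors", 111),
    ("High-efficiency power supplies", 124),
    ("Power management features", 86),
    ("Blade servers", 7),
    ("Server virtualization", 86),
    ("415V AC power distribution", 20),
    ("Cooling best practices", 15),
    ("Variable capacity cooling (variable speed fan drives)", 49),
    ("Supplemental cooling", 72),
    ("Monitoring and optimization (cooling units work as a team)", 15),
]

baseline_kw = 1127   # unoptimized facility load from Figure 1
load_kw = baseline_kw
cumulative_kw = 0

for action, saving in ENERGY_LOGIC_ACTIONS_KW:
    load_kw -= saving
    cumulative_kw += saving
    print(f"{action:60s} -{saving:3d} kW  -> {load_kw:4d} kW")

print(f"Cumulative saving: {cumulative_kw} kW ({cumulative_kw / baseline_kw:.0%} of baseline)")
```

The final load, 1127 - 585 = 542 kW, matches the optimized consumption quoted in the previous section.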


Constraint                                              Unoptimized                    Optimized           Capacity Freed Up
Data center space (sq ft)                                      4988                        1768                  3220 (65%)

UPS capacity (kVA)                                            2 * 750                    2 * 500                2 * 250 (33%)

Cooling plant capacity (tons)                                  350                         200                   150 (43%)

Building entrance switchgear and genset (kW)                   1169                        620                   549 (47%)

Figure 4. Energy Logic removes constraints to growth in addition to reducing energy consumption.


Floor plan diagrams (before and after Energy Logic) show rack rows in a hot-aisle/cold-aisle arrangement with CW CRAC units, supplemental cooling modules, a UPS room, a battery room and a control center. Space savings: 65 percent.

P1 = Stage one distribution, side A     D1 = Stage two distribution, side A     R = Rack
P2 = Stage one distribution, side B     D2 = Stage two distribution, side B     XDV = Supplemental cooling module
CW CRAC = Chilled water computer room air conditioner

Figure 5. The top diagram shows the unoptimized data center layout. The lower diagram shows the data center after the Energy Logic actions were applied. Space required to support data center equipment was reduced from 5,000 square feet to 1,768 square feet (65 percent).

24-hour operations, I/O intensive: 1. Lowest power processor; 2. High-efficiency power supplies; 3. Blade servers; 4. Power management.
24-hour operations, compute intensive: 1. Virtualization; 2. Lowest power processor; 3. High-efficiency power supply; 4. Power management; 5. Consider mainframe architecture.
Daytime operations, I/O intensive: 1. Power management; 2. Low power processor; 3. High-efficiency power supplies; 4. Blade servers.
Daytime operations, compute intensive: 1. Virtualization; 2. Power management; 3. Low power processor; 4. High-efficiency power supplies.
All facility types: cooling best practices, variable capacity cooling, supplemental cooling, 415V AC distribution, monitoring and optimization.

Figure 6. The Energy Logic approach can be tailored to the compute load and type of operation.

The Ten Energy Logic Actions

1. Processor Efficiency
In the absence of a true standard measure of processor efficiency comparable to the U.S. Department of Transportation fuel efficiency standard for automobiles, Thermal Design Power (TDP) serves as a proxy for server power consumption.

The typical TDP of processors in use today is between 80 and 103 Watts (91 W average). For a price premium, processor manufacturers provide lower voltage versions of their processors that consume on average 30 Watts less than standard processors (Figure 7). Independent research studies show these lower power processors deliver the same performance as higher power models (Figure 8).

In the 5,000-square-foot data center modeled for this paper, low power processors create a 10 percent reduction in overall data center power consumption.

2. Power Supplies
As with processors, many of the server power supplies in use today are operating at efficiencies below what is currently available. The U.S. EPA estimated the average efficiency of installed server power supplies at 72 percent in 2005. In the model, we assume the unoptimized data center uses power supplies that average 79 percent efficiency across a mix of servers that range from four years old to new.

          Sockets   Speed (GHz)   Standard   Low power   Saving
AMD          1        1.8-2.6      103 W       65 W       38 W
AMD          2        1.8-2.6       95 W       68 W       27 W
Intel        2        1.8-2.6       80 W       50 W       30 W

Figure 7. Intel and AMD offer a variety of low power processors that deliver average savings between 27 W and 38 W.
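To see how the per-processor numbers in Figure 7 translate to the facility level, the sketch below combines them with the 2.84 cascade factor from Figure 2. The server count is a purely hypothetical parameter used for illustration; the paper's own 111 kW (10 percent) result comes from the specific equipment mix in Appendix A, not from this sketch.

```python
# Per-processor TDP savings (W) for low power parts, from Figure 7.
PROCESSOR_SAVINGS_W = {
    "AMD 1-socket": 38,
    "AMD 2-socket": 27,
    "Intel 2-socket": 30,
}

CASCADE_FACTOR = 2.84  # facility watts saved per watt saved at the component level (Figure 2)

def facility_saving_kw(processor: str, sockets_per_server: int, server_count: int) -> float:
    """Estimate facility-level savings (kW) from specifying low power processors.

    `server_count` is a hypothetical fleet size, not a figure from the paper.
    """
    per_server_w = PROCESSOR_SAVINGS_W[processor] * sockets_per_server
    return per_server_w * server_count * CASCADE_FACTOR / 1000

# Example: 500 hypothetical dual-socket Intel servers.
print(f"{facility_saving_kw('Intel 2-socket', 2, 500):.0f} kW saved at the facility level")
```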

Figure 8. Performance results for standard- and low-power processor systems (Opteron 2218 HE 2.6 GHz, Woodcrest LV 5148 2.3 GHz, Opteron 2218 2.6 GHz, Woodcrest 5140 2.3 GHz) using the American National Standards Institute AS3AP benchmark; transactions per second (higher is better) are shown across five load levels. Source: AnandTech.

Best-in-class power supplies are available today that deliver efficiency of 90 percent. Use of these power supplies reduces power draw within the data center by 124 kW, or 11 percent of the 1127 kW total.
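To make the efficiency comparison concrete, the sketch below computes the input power and losses for a given IT load at the 79 percent average efficiency assumed for the unoptimized model and at the 90 percent efficiency of best-in-class units. The 400 kW load is a hypothetical example, not the paper's model; the paper's own result is the 124 kW facility-level reduction quoted above.

```python
def psu_input_kw(it_load_kw: float, efficiency: float) -> float:
    """Power drawn from the distribution system to deliver `it_load_kw` through a PSU."""
    return it_load_kw / efficiency

# Hypothetical 400 kW of downstream IT load served through server power supplies.
it_load_kw = 400
for label, eff in [("installed average (unoptimized model)", 0.79),
                   ("best-in-class", 0.90)]:
    input_kw = psu_input_kw(it_load_kw, eff)
    print(f"{label:38s} input {input_kw:6.1f} kW, losses {input_kw - it_load_kw:5.1f} kW")
```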


As with other data center systems, server power supply efficiency varies depending on load. Some power supplies perform better at partial loads than others, and this is particularly important in dual-corded devices where power supply utilization can average less than 30 percent. Figure 9 shows power supply efficiencies at different loads for two power supply models. At 20 percent load, model A has an efficiency of approximately 88 percent while model B has an efficiency closer to 82 percent.

Figure 9. Power supply efficiency can vary significantly depending on load, and power supplies are often sized for a load that exceeds the maximum server configuration. The chart plots efficiency (70 to 90 percent) against percent of full load for models A and B, and marks the typical PSU load in a redundant configuration, the typical configuration at 100 percent CPU utilization, the maximum configuration and the nameplate rating.

Figure 9 also highlights another opportunity to increase efficiency: sizing power supplies closer to actual load. Notice that the maximum configuration is about 80 percent of
the nameplate rating and the typical configuration is 67 percent of the nameplate rating. Server manufacturers should allow purchasers to choose power supplies sized for a typical or maximum configuration.

3. Power Management Software
Data centers are sized for peak conditions that may rarely exist. In a typical business data center, daily demand progressively increases from about 5 a.m. to 11 a.m. and then begins to drop again at 5 p.m. (Figure 10).

Server power consumption remains relatively high as server load decreases (Figure 11). In idle mode, most servers consume between 70 and 85 percent of full operational power. Consequently, a facility operating at just 20 percent capacity may use 80 percent as much energy as the same facility operating at 100 percent capacity.

Server processors have built-in power management features that can reduce power when the processor is idle. Too often these features are disabled because of concerns regarding response time; however, this decision may need to be reevaluated in light of the significant savings this technology can enable.

In the model, we assume that the idle power draw is 80 percent of the peak power draw without power management, and that it drops to 45 percent of peak power draw when power management is enabled. With this scenario, power management can save an additional 86 kW, or eight percent of the unoptimized data center load.

4. Blade Servers
Many organizations have implemented blade servers to meet processing requirements and improve server management. While the move to blade servers is typically not driven by energy considerations, blade servers can play a role in reducing energy consumption.

Blade servers consume about 10 percent less power than equivalent rack mount servers because multiple servers share common



Figure 10. Daily utilization for a typical business data center: percent utilization increases from about 5 a.m. to 11 a.m. and begins to drop again at 5 p.m.

Figure 11. Low processor activity does not translate into low power consumption: AC power input versus percent CPU time for a Dell Power Edge 2400 (Web/SQL server) shows power draw (Watts) remaining relatively high as CPU activity falls.
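As a rough illustration of the power management assumptions stated above (idle draw of 80 percent of peak without power management, 45 percent with it), the sketch below estimates the daily energy for a single server with a hypothetical utilization profile. The peak draw and busy hours are illustrative assumptions only; the 86 kW figure in the paper comes from the full Appendix A model.

```python
# Idle power draw as a fraction of peak, per the paper's model assumptions.
IDLE_FRACTION_NO_PM = 0.80    # power management disabled
IDLE_FRACTION_WITH_PM = 0.45  # power management enabled

def daily_energy_kwh(peak_w: float, busy_hours: float, idle_fraction: float) -> float:
    """Daily energy (kWh) for one server: peak draw while busy, reduced draw while idle."""
    idle_hours = 24 - busy_hours
    watt_hours = peak_w * busy_hours + peak_w * idle_fraction * idle_hours
    return watt_hours / 1000

# Hypothetical server: 300 W peak, busy 10 hours per day.
peak_w, busy_hours = 300, 10
without_pm = daily_energy_kwh(peak_w, busy_hours, IDLE_FRACTION_NO_PM)
with_pm = daily_energy_kwh(peak_w, busy_hours, IDLE_FRACTION_WITH_PM)

print(f"Without power management: {without_pm:.2f} kWh/day")
print(f"With power management:    {with_pm:.2f} kWh/day")
print(f"Saving per server:        {without_pm - with_pm:.2f} kWh/day")
```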




power supplies, cooling fans and other components.

In the model, we see a one percent reduction in total energy consumption when 20 percent of rack-based servers are replaced with blade servers. More importantly, blades facilitate the move to a high-density data center architecture, which can significantly reduce energy consumption.

5. Server Virtualization
As server technologies are optimized, virtualization is increasingly being deployed to increase server utilization and reduce the number of servers required.

In our model, we assume that 25 percent of servers are virtualized, with eight non-virtualized physical servers being replaced by one virtualized physical server. We also assume that the applications being virtualized were residing in single-processor and two-processor servers and that the virtualized applications are hosted on servers with at least two processors.

Implementing virtualization provides an incremental eight percent reduction in total data center power draw for the 5,000-square-foot facility.

6. Cooling Best Practices
Most data centers have implemented some best practices, such as the hot-aisle/cold-aisle rack arrangement. Potential exists in sealing gaps in floors, using blanking panels in open spaces in racks, and avoiding mixing of hot and cold air. ASHRAE has published several excellent papers on these best practices.

Computational fluid dynamics (CFD) can be used to identify inefficiencies and optimize data center airflow (Figure 12). Many organizations, including Emerson Network Power, offer CFD imaging as part of data center assessment services focused on improving cooling efficiency.

Additionally, temperatures in the cold aisle can often be raised if current temperatures are below 68° F. Chilled water temperatures can often be raised from 45° F to 50° F.

In the model, cooling system efficiency is improved five percent simply by implementing best practices. This reduces overall facility energy costs by one percent with virtually no investment in new technology.

7. 415V AC Power Distribution
The critical power system represents another opportunity to reduce energy consumption; however, even more than with other systems, care must be taken to ensure reductions in energy consumption are not achieved at the cost of reduced equipment availability.

Most data centers use a type of UPS called a double-conversion system. These systems convert incoming power to DC and then back to AC within the UPS. This enables the UPS to generate a clean, consistent waveform for IT equipment and effectively isolates IT equipment from the power source.

UPS systems that don't convert the incoming power (line interactive or passive standby systems) can operate at higher efficiencies because they avoid the losses associated with the conversion process. However, these systems may compromise equipment protection because they do not fully condition incoming power.

A bigger opportunity exists downstream from the UPS. In most data centers, the UPS




                                                                             11
8. Variable Capacity Cooling                        Newer technologies such
                                                       Data center systems are sized to handle peak        as Digital Scroll com-
                                                       loads, which rarely exist. Consequently, oper-      pressors and variable
                                                       ating efficiency at full load is often not a good   frequency drives in com-
                                                       indication of actual operating efficiency.          puter room air condi-
                                                                                                           tioners (CRACs) allow
                                                       Newer technologies, such as Digital Scroll
Figure 12. CFD imaging can be used to evalu-                                                               high efficiencies to be
                                                       compressors and variable frequency drives in
ate cooling efficiency and optimize airflow.                                                               maintained at partial
This image shows hot air being recirculated            computer room air conditioners (CRACs),
                                                                                                           loads.
as it is pulled back toward the CRAC, which is         allow high efficiencies to be maintained at
poorly positioned.                                     partial loads.

                                                       Digital scroll compressors allow the capacity
provides power at 480V, which is then                  of room air conditioners to be matched
stepped down via a transformer, with                   exactly to room conditions without turning
accompanying losses, to 208V in the power              compressors on and off.
distribution system. These stepdown losses
can be eliminated by converting UPS output             Typically, CRAC fans run at a constant speed
power to 415V.                                         and deliver a constant volume of air flow.
                                                       Converting these fans to variable frequency
The 415V three-phase input provides 240V               drive fans allows fan speed and power draw to
single-phase, line-to-neutral input directly to        be reduced as load decreases. Fan power is
the server (Figure 13). This higher voltage not        directly proportional to the cube of fan rpm
only eliminates stepdown losses but also               and a 20 percent reduction in fan speed pro-
enables an increase in server power supply             vides almost 50 percent savings in fan power
efficiency. Servers and other IT equipment             consumption. These drives are available in
can handle 240V AC input without any issues.           retrofit kits that make it easy to upgrade
In the model, an incremental two percent               existing CRACs with a payback of less than
reduction in facility energy use is achieved by        one year.
using 415V AC power distribution.

  Traditional 208V Distribution

                              PDU                    PDU                         12V      Servers
  480V            480V                208V                         208V
           UPS                                                             PSU            Storage
                          transformer             panelboard
                                                                                  Comm Equipment
  415V Distribution

  480V                                               PDU                         12V       Servers
                          415V 3-phase                             240V
           UPS                                                             PSU             Storage
                          240V phase to           panelboard
                             neutral                                               Comm Equipment


Figure 13. 415V power distribution provides a more efficient alternative to using 208V power.




                                                  12
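The fan-power claim above follows from the cube-law (fan affinity) relationship and is easy to verify; a minimal Python check:

    # Fan affinity law: fan power scales with the cube of fan speed.
    def fan_power_fraction(speed_fraction):
        return speed_fraction ** 3

    speed_reduction = 0.20                       # 20 percent reduction in fan speed
    remaining = fan_power_fraction(1.0 - speed_reduction)
    print(f"Power at 80% speed: {remaining:.1%} of full-speed power")
    print(f"Savings: {1.0 - remaining:.1%}")
    # 0.8 cubed is 0.512, i.e. roughly 49 percent savings in fan power,
    # consistent with the "almost 50 percent" figure cited above.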
In the chilled water-based air conditioning system used in this analysis, the use of variable frequency drives provides an incremental saving of four percent in data center power consumption.

9. High Density Supplemental Cooling
Traditional room-cooling systems have proven very effective at maintaining a safe, controlled environment for IT equipment. However, optimizing data center energy efficiency requires moving from traditional data center densities (2 to 3 kW per rack) to an environment that can support much higher densities (in excess of 30 kW).

This requires implementing an approach to cooling that shifts some of the cooling load from traditional CRAC units to supplemental cooling units. Supplemental cooling units are mounted above or alongside equipment racks (Figure 14) and pull hot air directly from the hot aisle and deliver cold air to the cold aisle.

Figure 14. Supplemental cooling enables higher rack densities and improved cooling efficiency.

Supplemental cooling units can reduce cooling costs by 30 percent compared to traditional approaches to cooling. These savings are achieved because supplemental cooling brings cooling closer to the source of heat, reducing the fan power required to move air. They also use more efficient heat exchangers and deliver only sensible cooling, which is ideal for the dry heat generated by electronic equipment.

Refrigerant is delivered to the supplemental cooling modules through an overhead piping system, which, once installed, allows cooling modules to be easily added or relocated as the environment changes.

In the model, 20 racks at 12 kW density per rack use high density supplemental cooling while the remaining 40 racks (at 3.2 kW density) are supported by the traditional room cooling system. This creates an incremental six percent reduction in overall data center energy costs. As the facility evolves and more racks move to high density, the savings will increase.

10. Monitoring and Optimization
One of the consequences of rising equipment densities has been increased diversity within the data center. Rack densities are rarely uniform across a facility, and this can create cooling inefficiencies if monitoring and optimization are not implemented. Room cooling units on one side of a facility may be humidifying the environment based on local conditions while units on the opposite side of the facility are dehumidifying.

Cooling control systems can monitor conditions across the data center and coordinate the activities of multiple units to prevent conflicts and increase teamwork (Figure 15).
Figure 15. System-level control reduces conflict between room air conditioning units operating in different zones in the data center. (Diagram: without coordination, some units run humidifiers while others dehumidify; in teamwork mode, the control disables conflicting humidification, dehumidification and reheat across zones.)
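To illustrate the kind of conflict the figure depicts (this is only a sketch of the general idea, not the control algorithm used in any particular product), the difference between units acting on local readings and units acting on a shared, room-level view can be expressed in a few lines of Python:

    # Hypothetical relative-humidity readings (%RH) at six CRAC units; illustrative values only.
    local_rh = {"unit1": 38.0, "unit2": 39.0, "unit3": 52.0,
                "unit4": 51.0, "unit5": 40.0, "unit6": 47.0}
    SETPOINT, DEADBAND = 45.0, 3.0

    def action(rh):
        if rh < SETPOINT - DEADBAND:
            return "humidify"
        if rh > SETPOINT + DEADBAND:
            return "dehumidify"
        return "hold"

    # Uncoordinated: each unit reacts to its own sensor, so some humidify
    # while others dehumidify at the same time.
    print({unit: action(rh) for unit, rh in local_rh.items()})

    # Coordinated ("teamwork") mode: one decision based on the room average.
    avg_rh = sum(local_rh.values()) / len(local_rh)
    print(f"room average {avg_rh:.1f}% RH -> all units: {action(avg_rh)}")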



In the model, an incremental saving of one percent is achieved as a result of system-level monitoring and control.

Other Opportunities for Savings
Energy Logic prioritizes the most effective energy reduction strategies, but it is not intended to be a comprehensive list of energy reducing measures. In addition to the actions in the Energy Logic strategy, data center managers should consider the feasibility of the following:

• Consolidate data storage from direct attached storage to network attached storage. Also, faster disks consume more power, so consider reorganizing data so that less frequently used data is on slower archival drives.

• Use economizers where appropriate to allow outside air to be used to support data center cooling during colder months, creating opportunities for energy-free cooling. With today’s high-density computing environments, economizers can be cost effective in many more locations than might be expected.

• Monitor and reduce parasitic losses from generators, exterior lighting and perimeter access control. For a 1 MW load, generator losses of 20 kW to 50 kW have been measured.

What the Industry Can Learn from the Energy Logic Model
The Energy Logic model not only prioritizes actions for data center managers; it also provides a roadmap for the industry to deliver the information and technologies that data center managers can use to optimize their facilities. Here are specific actions the industry can take to support increased data center efficiency:

1. Define universally accepted metrics for processor, server and data center efficiency
There have been tremendous technology advances in server processors in the last decade. Until 2005, higher processor performance was linked with higher clock speeds and hotter chips consuming more power. Recent advances in multi-core technology have driven performance increases by using more computing cores operating at relatively lower clock speeds, which reduces power consumption.
Today processor manufacturers offer a range of server processors from which a customer needs to select the right processor for a given application. What is lacking is an easy-to-understand and easy-to-use measure, such as the miles-per-gallon automotive fuel efficiency ratings developed by the U.S. Department of Transportation, that can help buyers select the ideal processor for a given load. The performance-per-Watt metric is evolving gradually, with the SPEC score being used as the server performance measure, but more work is needed.

This same philosophy could be applied at the facility level. An industry standard of data center efficiency that measures performance per Watt of energy used would be extremely beneficial in measuring the progress of data center optimization efforts. The PUE ratio developed by the Green Grid provides a measure of infrastructure efficiency, but not total facility efficiency. IT management needs to work with IT equipment and infrastructure manufacturers to develop the miles-per-gallon equivalent for both systems and facilities.

2. More sophisticated power management
While enabling power management features provides tremendous savings, IT management often prefers to stay away from this technology because its impact on availability is not clearly established. As more tools become available to manage power management features, and as data becomes available to confirm that availability is not impacted, we should see this technology gain market acceptance. More sophisticated controls that would allow these features to be enabled only during periods of low utilization, or turned off when critical applications are being processed, would eliminate much of the resistance to using power management.

3. Matching power supply capacity to server configuration
Server manufacturers tend to oversize power supplies to accommodate the maximum configuration of a particular server. Some users may be willing to pay an efficiency penalty for the flexibility to more easily upgrade, but many would prefer a choice between a power supply sized for a standard configuration and one sized for the maximum configuration. Server manufacturers should consider making these options available, and users need to be educated about the impact power supply size has on energy consumption.

4. Designing for high density
A perception persists that high-density environments are more expensive than simply spreading the load over a larger space. High-density environments employing blade and virtualized servers are actually economical because they drive down energy costs and remove constraints to growth, often delaying or eliminating the need to build new facilities.

5. High-voltage distribution
415V power distribution is used commonly in Europe, but UPS systems that easily support this architecture are not readily available in the United States. Manufacturers of critical power equipment should provide 415V output as an option on UPS systems and can do more to educate their customers regarding high-voltage power distribution.

6. Integrated measurement and control
Data that can be easily collected from IT systems and the racks that support them has
yet to be effectively integrated with support system controls. This level of integration would allow IT systems, applications and support systems to be more effectively managed based on actual conditions at the IT equipment level.

Conclusion
Data center managers and designers, IT equipment manufacturers and infrastructure providers must all collaborate to truly optimize data center efficiency.

For data center managers, there are a number of actions that can be taken today to significantly drive down energy consumption while freeing physical space and power and cooling capacity to support growth.

Energy reduction initiatives should begin with policies that encourage the use of efficient IT technologies, specifically low-power processors and high-efficiency power supplies. This will allow more efficient technologies to be introduced into the data center as part of the normal equipment replacement cycle.

Power management software should also be considered in applications where it is appropriate, as it may provide greater savings than any other single technology, depending on data center utilization.

IT consolidation projects also play an important role in data center optimization. Both blade servers and virtualization contribute to energy savings and support a high-density environment that facilitates true optimization.

The final step in the Energy Logic optimization strategy is to focus on infrastructure systems, employing a combination of best practices and efficient technologies to increase the efficiency of power and cooling systems.

Together these strategies created a 52 percent reduction in energy use in the 5,000-square-foot data center model developed by Emerson Network Power while removing constraints to growth.

Appendix B shows exactly how these savings are achieved over time as legacy technologies are phased out and savings cascade across systems.
Appendix A: Data Center Assumptions Used to Model Energy Use



To quantify the results of the efficiency improvement options presented in this paper, a hypothetical data center that has not been optimized for energy efficiency was created. The ten efficiency actions presented in this paper were then applied to this facility sequentially to quantify results.

This 5,000-square-foot hypothetical data center has 210 racks with an average heat density of 2.8 kW/rack. The racks are arranged in a hot-aisle/cold-aisle configuration. Cold aisles are four feet wide, and hot aisles are three feet wide. Based on this configuration and operating parameters, average facility power draw was calculated to be 1,127 kW. Following are additional details used in the analysis.

Servers
• Age is based on an average server replacement cycle of 4-5 years.
• Processor Thermal Design Power averages 91 W/processor.
• All servers have dual redundant power supplies. The average DC-DC conversion efficiency is assumed to be 85 percent and the average AC-DC conversion efficiency is assumed to be 79 percent for the mix of servers from four years old to new.
• Daytime power draw is assumed to exist for 14 hours on weekdays and 4 hours on weekends. Nighttime power draw is 80 percent of daytime power draw.
• See Figure 16 for more details on server configuration and operating parameters.

                                        Single socket   Two sockets   Four sockets   More than four   Total
  Number of servers                           157            812             84              11       1,064
  Daytime power draw (Watts/server)           277            446            893           4,387          –
  Nighttime power draw (Watts/server)         247            388            775           3,605          –
  Total daytime power draw (kW)                44            362             75              47         528
  Total nighttime power draw (kW)              39            315             65              38         457
  Average server power draw (kW)               41            337             70              42         490

Figure 16. Server operating parameters used in the Energy Logic model.
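The average server power draw in Figure 16 can be reproduced from the stated duty cycle (14 daytime hours on weekdays, 4 on weekends, with the remaining hours at the nighttime draw); a quick Python check:

    # Reproduce the 490 kW average server draw from the Figure 16 totals.
    TOTAL_DAYTIME_KW = 528
    TOTAL_NIGHTTIME_KW = 457

    daytime_hours_per_week = 14 * 5 + 4 * 2                        # 78 hours
    nighttime_hours_per_week = 24 * 7 - daytime_hours_per_week     # 90 hours

    average_kw = (TOTAL_DAYTIME_KW * daytime_hours_per_week
                  + TOTAL_NIGHTTIME_KW * nighttime_hours_per_week) / (24 * 7)
    print(f"Average server power draw: {average_kw:.0f} kW")       # ~490 kW, as in Figure 16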
Storage
• Storage type: network attached storage.
• Capacity is 120 Terabytes.
• Average power draw is 49 kW.

Communication Equipment
• Routers, switches and hubs required to interconnect the servers, storage and access points through the Local Area Network and provide secure access to public networks.
• Average power draw is 49 kW.

Power Distribution Units (PDU)
• Provide 208V, 3-phase output through whips and rack power strips to power servers, storage, communication equipment and lighting (average load is 539 kW).
• Input from the UPS is 480V, 3-phase.
• Efficiency of power distribution is 97.5 percent.

UPS System
• Two double-conversion 750 kVA UPS modules arranged in a dual redundant (1+1) configuration with input filters for power factor correction (power factor = 91 percent).
• The UPS receives 480V input power from the distribution board and provides 480V, 3-phase power to the power distribution units on the data center floor.
• UPS efficiency at part load: 91.5 percent.

Cooling System
• The cooling system is chilled water based.
• Total sensible heat load on the precision cooling system includes heat generated by the IT equipment, UPS and PDUs, building egress and human load.
• Cooling system components:
  - Eight 128.5 kW chilled water based precision cooling units placed at the end of each hot aisle, including one redundant unit.
  - The chilled water source is a chiller plant consisting of three 200-ton chillers (n+1) with matching condensers for heat rejection and four chilled water pumps (n+2).
  - The chillers, pumps and air conditioners are powered from the building distribution board (480V, 3-phase).
  - Total cooling system power draw is 429 kW.

Building Substation
• The building substation provides 480V, 3-phase power to the UPSs and the cooling system.
• Average load on the building substation is 1,099 kW.
• Utility input is a 13.5 kV, 3-phase connection.
• The system consists of a transformer with isolation switchgear on the incoming line, and switchgear, circuit breakers and a distribution panel on the low-voltage line.
• Substation, transformer and building entrance switchgear composite efficiency is 97.5 percent.
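These assumptions also imply a baseline efficiency figure for the model. Using the PUE ratio mentioned earlier (total facility power divided by IT equipment power) and the average draws listed in this appendix, a rough estimate can be computed as follows; note that summing the server, storage and communication loads into a single IT figure is a simplification made here, not a number published in the paper:

    # Rough baseline PUE for the model data center, from Appendix A figures.
    it_load_kw = 490 + 49 + 49      # servers + storage + communication equipment = 588 kW
    facility_load_kw = 1127         # calculated average facility power draw

    pue = facility_load_kw / it_load_kw
    print(f"IT load: {it_load_kw} kW, facility: {facility_load_kw} kW, PUE ~ {pue:.2f}")
    # Roughly 1.9 before any of the ten Energy Logic actions are applied.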
Appendix B: Timing Benefits from Efficiency Improvement Actions




  Efficiency Improvement Area              Savings   Estimated Cumulative Yearly Savings (kW)
                                            (kW)     Year 1   Year 2   Year 3   Year 4   Year 5

  IT Policies
    Low power processors                     111        6       22       45       78      111
    Higher efficiency power supplies         124       12       43       68       99      124
    Server power management                   86        9       26       43       65       86

  IT Projects
    Blade servers                              7        1        7        7        7        7
    Virtualization                             86        9       65       69       86       86

  Best Practices
    415V AC power distribution                20        0        0       20       20       20
    Cooling best practices                    15       15       15       15       15       15

  Infrastructure Projects
    Variable capacity cooling                 49       49       49       49       49       49
    High density supplemental cooling         72        0       57       72       72       72
    Monitoring and optimization               15        0       15       15       15       15

  Total                                      585      100      299      402      505      585
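As a cross-check of the table, the individual savings sum to the 585 kW total, which against the 1,127 kW baseline from Appendix A corresponds to the roughly 52 percent reduction cited in the conclusion:

    # Verify the Appendix B total against the 1,127 kW baseline from Appendix A.
    savings_kw = {
        "Low power processors": 111,
        "Higher efficiency power supplies": 124,
        "Server power management": 86,
        "Blade servers": 7,
        "Virtualization": 86,
        "415V AC power distribution": 20,
        "Cooling best practices": 15,
        "Variable capacity cooling": 49,
        "High density supplemental cooling": 72,
        "Monitoring and optimization": 15,
    }
    total_kw = sum(savings_kw.values())
    BASELINE_KW = 1127
    print(f"Total savings: {total_kw} kW ({total_kw / BASELINE_KW:.0%} of baseline)")
    # 585 kW, about 52 percent of the 1,127 kW baseline.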
Note: Energy Logic is an approach developed by Emerson Network Power to provide organizations with the information they need to more effectively reduce data center energy consumption. It is not directly tied to a particular Emerson Network Power product or service. We encourage use of the Energy Logic approach in industry discussions on energy efficiency and will permit use of the Energy Logic graphics with the following attribution:

The Energy Logic approach developed by Emerson Network Power.

To request a graphic, send an email to energylogic@emerson.com.

Emerson Network Power
1050 Dearborn Drive
P.O. Box 29186
Columbus, Ohio 43229
800.877.9222 (U.S. & Canada Only)
614.888.0246 (Outside U.S.)
Fax: 614.841.6022
EmersonNetworkPower.com
Liebert.com

While every precaution has been taken to ensure accuracy and completeness in this literature, Liebert Corporation assumes no responsibility, and disclaims all liability for damages resulting from use of this information or for any errors or omissions.
Specifications subject to change without notice.
© 2007 Liebert Corporation. All rights reserved throughout the world.
Trademarks or registered trademarks are property of their respective owners.
® Liebert and the Liebert logo are registered trademarks of the Liebert Corporation.
Business-Critical Continuity, Emerson Network Power and the Emerson Network Power logo are trademarks and service marks of the Emerson Electric Co.

WP154-158-117
SL-24621

Emerson Network Power. The global leader in enabling Business-Critical Continuity™. EmersonNetworkPower.com
AC Power | Connectivity | DC Power | Embedded Computing | Embedded Power | Monitoring | Outside Plant | Power Switching & Controls | Precision Cooling | Racks & Integrated Cabinets | Services

From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slidevu2urc
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 

Último (20)

EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 

Reducing Data Center Energy 50% with Energy Logic

Introduction

The double impact of rising data center energy consumption and rising energy costs has elevated the importance of data center efficiency as a strategy to reduce costs, manage capacity and promote environmental responsibility.

Data center energy consumption has been driven by the demand within almost every organization for greater computing capacity and increased IT centralization. While this was occurring, global electricity prices increased 56 percent between 2002 and 2006.

The financial implications are significant; estimates of annual power costs for U.S. data centers now range as high as $3.3 billion.

This trend impacts data center capacity as well. According to the Fall 2007 Survey of the Data Center Users Group (DCUG®), an influential group of data center managers, power limitations were cited as the primary factor limiting growth by 46 percent of respondents, more than any other factor.

In addition to financial and capacity considerations, reducing data center energy use has become a priority for organizations seeking to reduce their environmental footprint.

There is general agreement that improvements in data center efficiency are possible. In a report to the U.S. Congress, the Environmental Protection Agency concluded that best practices can reduce data center energy consumption by 50 percent by 2011.

The EPA report included a list of Top 10 Energy Saving Best Practices as identified by the Lawrence Berkeley National Lab. Other organizations, including Emerson Network Power, have distributed similar information and there is evidence that some best practices are being adopted.

The Spring 2007 DCUG Survey found that 77 percent of respondents already had their data center arranged in a hot-aisle/cold-aisle configuration to increase cooling system efficiency, 65 percent use blanking panels to minimize recirculation of hot air and 56 percent have sealed the floor to prevent cooling losses.

While progress has been made, an objective, vendor-neutral evaluation of efficiency opportunities across the spectrum of data center systems has been lacking. This has made it difficult for data center managers to prioritize efficiency efforts and tailor best practices to their data center equipment and operating practices.

This paper closes that gap by outlining a holistic approach to energy reduction, based on quantitative analysis, that enables a 50 percent or greater reduction in data center energy consumption.
Data Center Energy Consumption

The first step in prioritizing energy saving opportunities was to gain a solid understanding of data center energy consumption. Emerson Network Power modeled energy consumption for a typical 5,000-square-foot data center (Figure 1) and analyzed how energy is used within the facility. Energy use was categorized as either "demand side" or "supply side."

Demand-side systems are the servers, storage, communications and other IT systems that support the business. Supply-side systems exist to support the demand side. In this analysis, demand-side systems—which include processors, server power supplies, other server components, storage and communication equipment—account for 52 percent of total consumption. Supply-side systems include the UPS, power distribution, cooling, lighting and building switchgear, and account for 48 percent of consumption.

Information on the data center and infrastructure equipment and operating parameters on which the analysis was based is presented in Appendix A. Note that all data centers are different and the savings potential will vary by facility. However, at minimum, this analysis provides an order-of-magnitude comparison for data center energy reduction strategies.

The distinction between demand and supply power consumption is valuable because reductions in demand-side energy use cascade through the supply side.

Figure 1. Analysis of a typical 5,000-square-foot data center shows that demand-side computing equipment accounts for 52 percent of energy usage and supply-side systems account for 48 percent. Computing equipment (52 percent, demand): processor 15 percent, server power supply 14 percent, other server components 15 percent, storage 4 percent, communication equipment 4 percent. Support systems (48 percent, supply): cooling 38 percent, UPS 5 percent, building switchgear/MV transformer 3 percent, lighting 1 percent, PDU 1 percent. Average power draw by category: computing 588 kW; lighting 10 kW; UPS and distribution losses 72 kW; cooling power draw for computing and UPS losses 429 kW; building switchgear/MV transformer/other losses 28 kW; total 1127 kW. These figures represent average power draw (kW); daily energy consumption (kWh) can be captured by multiplying the power draw by 24.
For example, in the 5,000-square-foot data center used to analyze energy consumption, a 1 Watt reduction at the server component level (processor, memory, hard disk, etc.) results in an additional 1.84 Watt savings in the power supply, power distribution system, UPS system, cooling system and building entrance switchgear and medium voltage transformer (Figure 2). Consequently, every Watt of savings that can be achieved on the processor level creates a total of 2.84 Watts of savings for the facility.

Figure 2. With the Cascade Effect, a 1 Watt savings at the server component level creates a reduction in facility energy consumption of 2.84 Watts: 1 Watt saved at the server component, plus an additional 0.18 W in DC-DC conversion (1.18 W cumulative), 0.31 W in AC-DC conversion (1.49 W), 0.04 W in power distribution (1.53 W), 0.14 W in the UPS (1.67 W), 1.07 W in cooling (2.74 W) and 0.07 W in the building switchgear/transformer (2.84 W cumulative).
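The cascade multiplier can be approximated from the component efficiencies listed in Appendix A. The short sketch below is illustrative only: the conversion and distribution efficiencies are taken from the model assumptions (85 percent DC-DC, 79 percent AC-DC, 97.5 percent power distribution, 91.5 percent UPS, 97.5 percent switchgear/transformer), while the cooling overhead factor of 0.64 W of cooling power per Watt of heat removed is an assumed value chosen so the result lands close to the published 2.84 W figure.

# Illustrative reconstruction of the Figure 2 cascade effect.
# Stage efficiencies follow the Appendix A model assumptions; the cooling
# overhead factor (W of cooling power per W of heat removed) is an assumption.

def cascade_multiplier(component_watts=1.0,
                       dc_dc_eff=0.85,
                       ac_dc_eff=0.79,
                       pdu_eff=0.975,
                       ups_eff=0.915,
                       cooling_overhead=0.64,
                       switchgear_eff=0.975):
    """Return facility-level watts saved per watt saved at a server component."""
    w = component_watts
    w /= dc_dc_eff              # server DC-DC conversion losses
    w /= ac_dc_eff              # server power supply (AC-DC) losses
    w /= pdu_eff                # power distribution losses
    w /= ups_eff                # UPS losses
    w *= 1 + cooling_overhead   # cooling power needed to remove the heat
    w /= switchgear_eff         # building switchgear / MV transformer losses
    return w

if __name__ == "__main__":
    print(f"Facility savings per 1 W saved at the component: {cascade_multiplier():.2f} W")
    # Prints roughly 2.8 W, in line with the 2.84 W figure in the paper.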
The Energy Logic Approach

Energy Logic takes a sequential approach to reducing energy costs, applying the 10 technologies and best practices that exhibited the most potential in the order in which they have the greatest impact.

While the sequence is important, Energy Logic is not intended to be a step-by-step approach in the sense that each step can only be undertaken after the previous one is complete. The energy saving measures included in Energy Logic should be considered a guide. Many organizations will already have undertaken some measures at the end of the sequence or will have to deploy some technologies out of sequence to remove existing constraints to growth.

The first step in the Energy Logic approach is to establish an IT equipment procurement policy that exploits the energy efficiency benefits of low power processors and high-efficiency power supplies. As these technologies are specified in new equipment, inefficient servers will be phased out and replaced with higher-efficiency units, creating a solid foundation for an energy-optimized data center.

Power management software has great potential to reduce energy costs and should be considered as part of an energy optimization strategy, particularly for data centers that have large differences between peak and average utilization rates. Other facilities may choose not to employ power management because of concerns about response times. A significant opportunity exists within the industry to enhance the sophistication of power management to make it an even more powerful tool in managing energy use.

The next step involves IT projects that may not be driven by efficiency considerations, but have an impact on energy consumption. They include:

• Blade servers
• Server virtualization

These technologies have emerged as "best practice" approaches to data center management and play a role in optimizing a data center for efficiency, performance and manageability.

Once policies and plans have been put in place to optimize IT systems, the focus shifts to supply-side systems. The most effective approaches to infrastructure optimization include:

• Cooling best practices
• 415V AC power distribution
• Variable capacity cooling
• Supplemental cooling
• Monitoring and optimization

Emerson Network Power has quantified the savings that can be achieved through each of these actions individually and as part of the Energy Logic sequence (Figure 3). Note that savings for supply-side systems look smaller when taken as part of Energy Logic because those systems are now supporting a smaller load.

Reducing Energy Consumption and Eliminating Constraints to Growth

Employing the Energy Logic approach to the model data center reduced energy use by 52 percent without compromising performance or availability.

In its unoptimized state, the 5,000-square-foot data center model used to develop the Energy Logic approach supported a total compute load of 529 kW and a total facility load of 1127 kW. Through the optimization strategies presented here, this facility has been transformed to enable the same level of performance using significantly less power and space. Total compute load was reduced to 367 kW, while rack density was increased from 2.9 kW per rack to 6.1 kW per rack.

This has reduced the number of racks required to support the compute load from 210 to 60 and eliminated the power, cooling and space limitations constraining growth (Figure 4).

Total energy consumption was reduced to 542 kW and the total floor space required for IT equipment was reduced by 65 percent (Figure 5).

Energy Logic is suitable for every type of data center; however, the sequence may be affected by facility type. Facilities operating at high utilization rates throughout a 24-hour day will want to focus initial efforts on sourcing IT equipment with low power processors and high-efficiency power supplies. Facilities that experience predictable peaks in activity may achieve the greatest benefit from power management technology. Figure 6 shows how compute load and type of operation influence priorities.
Figure 3. Using the model of a 5,000-square-foot data center consuming 1127 kW of power, the actions included in the Energy Logic approach work together to produce a 585 kW reduction in energy use.

Energy Saving Action | Savings Independent of Other Actions (kW / %) | Energy Logic Savings with the Cascade Effect (kW / %) | Cumulative Savings (kW) | ROI
Lower power processors | 111 / 10% | 111 / 10% | 111 | 12 to 18 mo.
High-efficiency power supplies | 141 / 12% | 124 / 11% | 235 | 5 to 7 mo.
Power management features | 125 / 11% | 86 / 8% | 321 | Immediate
Blade servers | 8 / 1% | 7 / 1% | 328 | TCO reduced 38%*
Server virtualization | 156 / 14% | 86 / 8% | 414 | TCO reduced 63%**
415V AC power distribution | 34 / 3% | 20 / 2% | 434 | 2 to 3 mo.
Cooling best practices | 24 / 2% | 15 / 1% | 449 | 4 to 6 mo.
Variable capacity cooling: variable speed fan drives | 79 / 7% | 49 / 4% | 498 | 4 to 10 mo.
Supplemental cooling | 200 / 18% | 72 / 6% | 570 | 10 to 12 mo.
Monitoring and optimization: cooling units work as a team | 25 / 2% | 15 / 1% | 585 | 3 to 6 mo.

* Source for blade impact on TCO: IDC
** Source for virtualization impact on TCO: VMware

Figure 4. Energy Logic removes constraints to growth in addition to reducing energy consumption.

Constraint | Unoptimized | Optimized | Capacity Freed Up
Data center space (sq ft) | 4988 | 1768 | 3220 (65%)
UPS capacity (kVA) | 2 x 750 | 2 x 500 | 2 x 250 (33%)
Cooling plant capacity (tons) | 350 | 200 | 150 (43%)
Building entrance switchgear and genset (kW) | 1169 | 620 | 549 (47%)
Figure 5. The top diagram shows the unoptimized data center layout; the bottom diagram shows the data center after the Energy Logic actions were applied. Space required to support data center equipment was reduced from 5,000 square feet to 1,768 square feet (65 percent). Both layouts retain the hot-aisle/cold-aisle arrangement with a control center, battery room and UPS room; the optimized layout uses fewer racks and room cooling units, with supplemental cooling modules serving the high-density rows. Legend: P1/P2 = stage one power distribution, sides A and B; D1/D2 = stage two power distribution, sides A and B; R = rack; XDV = supplemental cooling module; CW CRAC = chilled water computer room air conditioner.
Figure 6. The Energy Logic approach can be tailored to the compute load and type of operation. For I/O-intensive, 24-hour operations: 1. lowest power processors, 2. high-efficiency power supplies, 3. blade servers, 4. power management. For compute-intensive, 24-hour operations: 1. virtualization, 2. lowest power processors, 3. high-efficiency power supplies, 4. power management, 5. consider mainframe architecture. For I/O-intensive, daytime operations: 1. power management, 2. low power processors, 3. high-efficiency power supplies, 4. blade servers. For compute-intensive, daytime operations: 1. virtualization, 2. power management, 3. low power processors, 4. high-efficiency power supplies. Cooling best practices, variable capacity cooling, supplemental cooling, 415V AC distribution, and monitoring and optimization apply across all profiles.

The Ten Energy Logic Actions

1. Processor Efficiency

In the absence of a true standard measure of processor efficiency comparable to the U.S. Department of Transportation fuel efficiency standard for automobiles, Thermal Design Power (TDP) serves as a proxy for server power consumption.

The typical TDP of processors in use today is between 80 and 103 Watts (91 W average). For a price premium, processor manufacturers provide lower voltage versions of their processors that consume on average 30 Watts less than standard processors (Figure 7). Independent research studies show these lower power processors deliver the same performance as higher power models (Figure 8).

In the 5,000-square-foot data center modeled for this paper, low power processors create a 10 percent reduction in overall data center power consumption.

Figure 7. Intel and AMD offer a variety of low power processors that deliver average savings between 27 W and 38 W.

Vendor | Sockets | Speed (GHz) | Standard | Low power | Saving
AMD | 1 | 1.8-2.6 | 103 W | 65 W | 38 W
AMD | 2 | 1.8-2.6 | 95 W | 68 W | 27 W
Intel | 2 | 1.8-2.6 | 80 W | 50 W | 30 W

2. Power Supplies

As with processors, many of the server power supplies in use today are operating at efficiencies below what is currently available. The U.S. EPA estimated the average efficiency of installed server power supplies at 72 percent in 2005. In the model, we assume the unoptimized data center uses power supplies that average 79 percent across a mix of servers that range from four years old to new.
Best-in-class power supplies are available today that deliver efficiency of 90 percent. Use of these power supplies reduces power draw within the data center by 124 kW, or 11 percent of the 1127 kW total.

As with other data center systems, server power supply efficiency varies depending on load. Some power supplies perform better at partial loads than others, and this is particularly important in dual-corded devices where power supply utilization can average less than 30 percent. Figure 9 shows power supply efficiencies at different loads for two power supply models. At 20 percent load, model A has an efficiency of approximately 88 percent while model B has an efficiency closer to 82 percent.

Figure 9 also highlights another opportunity to increase efficiency: sizing power supplies closer to actual load. Notice that the maximum configuration is about 80 percent of the nameplate rating and the typical configuration is 67 percent of the nameplate rating.

Figure 8. Performance results for standard- and low-power processor systems using the American National Standards Institute AS3AP benchmark (transactions per second across five loads for Opteron 2218 HE 2.6 GHz, Woodcrest LV 5148 2.3 GHz, Opteron 2218 2.6 GHz and Woodcrest 5140 2.3 GHz systems; higher is better). Source: Anandtech.

Figure 9. Power supply efficiency can vary significantly depending on load, and power supplies are often sized for a load that exceeds the maximum server configuration. The chart plots efficiency against percent of full load for two power supply models (A and B), with markers for the typical PSU load in a redundant configuration, the typical configuration at 100 percent CPU utilization, the maximum configuration and the nameplate rating.
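A simple calculation illustrates why sizing matters. The efficiency curve below is hypothetical (it is not the measured Model A/Model B data behind Figure 9), but it shows how a dual-corded server that loads each oversized supply at only 15 to 20 percent draws noticeably more power from the wall than one whose supplies are sized closer to the actual configuration.

# Hypothetical power supply efficiency curve (fraction of rated load -> efficiency).
# Values are illustrative only, not the Model A/Model B data from Figure 9.
import numpy as np

load_points = np.array([0.10, 0.20, 0.30, 0.50, 0.75, 1.00])
efficiency  = np.array([0.75, 0.82, 0.86, 0.89, 0.90, 0.89])

def input_power(dc_load_w, psu_rating_w, n_psus=2):
    """AC input power for a server whose DC load is shared across redundant supplies."""
    per_psu_load = dc_load_w / n_psus
    load_fraction = per_psu_load / psu_rating_w
    eff = np.interp(load_fraction, load_points, efficiency)
    return dc_load_w / eff, load_fraction, eff

for rating in (1000, 400):   # oversized vs. right-sized supplies (watts)
    watts_in, frac, eff = input_power(dc_load_w=300, psu_rating_w=rating)
    print(f"{rating} W supplies: {frac:.0%} load each, "
          f"{eff:.0%} efficient, {watts_in:.0f} W from the wall")

Even before the cascade effect is applied, right-sizing saves tens of watts per server in this example.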
Server manufacturers should allow purchasers to choose power supplies sized for a typical or maximum configuration.

3. Power Management Software

Data centers are sized for peak conditions that may rarely exist. In a typical business data center, daily demand progressively increases from about 5 a.m. to 11 a.m. and then begins to drop again at 5 p.m. (Figure 10).

Server power consumption remains relatively high as server load decreases (Figure 11). In idle mode, most servers consume between 70 and 85 percent of full operational power. Consequently, a facility operating at just 20 percent capacity may use 80 percent as much energy as the same facility operating at 100 percent capacity.

Server processors have built-in power management features that can reduce power when the processor is idle. Too often these features are disabled because of concerns regarding response time; however, this decision may need to be reevaluated in light of the significant savings this technology can enable.

In the model, we assume that the idle power draw is 80 percent of the peak power draw without power management, and falls to 45 percent of peak power draw when power management is enabled. With this scenario, power management can save an additional 86 kW, or eight percent of the unoptimized data center load.

Figure 10. Daily utilization for a typical business data center (percent CPU time by hour of the day).

Figure 11. Low processor activity does not translate into low power consumption (AC power input versus percent CPU time for a Dell PowerEdge 2400 web/SQL server).
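The idle-draw ratios above (80 percent of peak without power management, 45 percent with it) translate into per-server savings that depend on how much of the day the server spends near idle. The sketch below is a back-of-envelope illustration; the peak draw and the idle-hours share are assumptions, not figures from the model.

# Back-of-envelope effect of enabling processor power management on one server.
# Idle-draw ratios (80% of peak without, 45% with) come from the model in the
# paper; the peak draw and the share of hours spent near idle are assumptions.

PEAK_W = 446             # assumed peak draw, similar to the two-socket servers in Figure 16
IDLE_HOURS_SHARE = 0.6   # assumed fraction of the day the server is essentially idle

def average_draw(idle_ratio, peak_w=PEAK_W, idle_share=IDLE_HOURS_SHARE):
    """Average power draw given the idle draw as a fraction of peak."""
    return idle_share * idle_ratio * peak_w + (1 - idle_share) * peak_w

without_pm = average_draw(idle_ratio=0.80)
with_pm    = average_draw(idle_ratio=0.45)
print(f"Without power management: {without_pm:.0f} W average")
print(f"With power management:    {with_pm:.0f} W average")
print(f"Savings per server: {without_pm - with_pm:.0f} W "
      f"(before the 2.84x cascade effect)")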
4. Blade Servers

Many organizations have implemented blade servers to meet processing requirements and improve server management. While the move to blade servers is typically not driven by energy considerations, blade servers can play a role in reducing energy consumption.

Blade servers consume about 10 percent less power than equivalent rack mount servers because multiple servers share common power supplies, cooling fans and other components.

In the model, we see a one percent reduction in total energy consumption when 20 percent of rack-based servers are replaced with blade servers. More importantly, blades facilitate the move to a high-density data center architecture, which can significantly reduce energy consumption.

5. Server Virtualization

As server technologies are optimized, virtualization is increasingly being deployed to increase server utilization and reduce the number of servers required.

In our model, we assume that 25 percent of servers are virtualized, with eight non-virtualized physical servers being replaced by one virtualized physical server. We also assume that the applications being virtualized were residing in single-processor and two-processor servers and that the virtualized applications are hosted on servers with at least two processors.

Implementing virtualization provides an incremental eight percent reduction in total data center power draw for the 5,000-square-foot facility.

6. Cooling Best Practices

Most data centers have implemented some best practices, such as the hot-aisle/cold-aisle rack arrangement. Potential exists in sealing gaps in floors, using blanking panels in open spaces in racks, and avoiding mixing of hot and cold air. ASHRAE has published several excellent papers on these best practices.

Computational fluid dynamics (CFD) can be used to identify inefficiencies and optimize data center airflow (Figure 12). Many organizations, including Emerson Network Power, offer CFD imaging as part of data center assessment services focused on improving cooling efficiency.

Figure 12. CFD imaging can be used to evaluate cooling efficiency and optimize airflow. This image shows hot air being recirculated as it is pulled back toward the CRAC, which is poorly positioned.

Additionally, temperatures in the cold aisle may be able to be raised if current temperatures are below 68° F. Chilled water temperatures can often be raised from 45° F to 50° F.

In the model, cooling system efficiency is improved five percent simply by implementing best practices. This reduces overall facility energy costs by one percent with virtually no investment in new technology.

7. 415V AC Power Distribution

The critical power system represents another opportunity to reduce energy consumption; however, even more than with other systems, care must be taken to ensure reductions in energy consumption are not achieved at the cost of reduced equipment availability.

Most data centers use a type of UPS called a double-conversion system. These systems convert incoming power to DC and then back to AC within the UPS. This enables the UPS to generate a clean, consistent waveform for IT equipment and effectively isolates IT equipment from the power source.

UPS systems that don't convert the incoming power—line interactive or passive standby systems—can operate at higher efficiencies because they avoid the losses associated with the conversion process. However, these systems may compromise equipment protection because they do not fully condition incoming power.

A bigger opportunity exists downstream from the UPS.
In most data centers, the UPS provides power at 480V, which is then stepped down via a transformer, with accompanying losses, to 208V in the power distribution system. These stepdown losses can be eliminated by converting UPS output power to 415V.

The 415V three-phase input provides 240V single-phase, line-to-neutral input directly to the server (Figure 13). This higher voltage not only eliminates stepdown losses but also enables an increase in server power supply efficiency. Servers and other IT equipment can handle 240V AC input without any issues.

In the model, an incremental two percent reduction in facility energy use is achieved by using 415V AC power distribution.

Figure 13. 415V power distribution provides a more efficient alternative to 208V power. In the traditional approach, 480V UPS output is stepped down to 208V through PDU transformers before reaching the panelboard and the server power supplies. In the 415V approach, the UPS output is distributed at 415V three-phase, delivering 240V phase-to-neutral directly to servers, storage and communication equipment.

8. Variable Capacity Cooling

Data center systems are sized to handle peak loads, which rarely exist. Consequently, operating efficiency at full load is often not a good indication of actual operating efficiency.

Newer technologies, such as Digital Scroll compressors and variable frequency drives in computer room air conditioners (CRACs), allow high efficiencies to be maintained at partial loads.

Digital Scroll compressors allow the capacity of room air conditioners to be matched exactly to room conditions without turning compressors on and off.

Typically, CRAC fans run at a constant speed and deliver a constant volume of air flow. Converting these fans to variable frequency drive fans allows fan speed and power draw to be reduced as load decreases. Fan power is directly proportional to the cube of fan rpm, and a 20 percent reduction in fan speed provides almost 50 percent savings in fan power consumption. These drives are available in retrofit kits that make it easy to upgrade existing CRACs with a payback of less than one year.
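The fan affinity law cited above is easy to verify: because fan power scales with the cube of fan speed, a modest speed reduction yields a disproportionately large power reduction. A minimal check:

# Fan affinity law: power scales with the cube of fan speed.
def fan_power_fraction(speed_fraction):
    """Fraction of full fan power drawn at a given fraction of full speed."""
    return speed_fraction ** 3

for speed in (1.0, 0.9, 0.8, 0.7):
    saving = 1 - fan_power_fraction(speed)
    print(f"{speed:.0%} speed -> {saving:.0%} fan power savings")
# At 80% speed the fan draws 0.8**3 = 51% of full power,
# i.e. the "almost 50 percent savings" cited in the paper.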
In the chilled water-based air conditioning system used in this analysis, the use of variable frequency drives provides an incremental saving of four percent in data center power consumption.

9. High Density Supplemental Cooling

Traditional room-cooling systems have proven very effective at maintaining a safe, controlled environment for IT equipment. However, optimizing data center energy efficiency requires moving from traditional data center densities (2 to 3 kW per rack) to an environment that can support much higher densities (in excess of 30 kW).

This requires implementing an approach to cooling that shifts some of the cooling load from traditional CRAC units to supplemental cooling units. Supplemental cooling units are mounted above or alongside equipment racks (Figure 14), and pull hot air directly from the hot aisle and deliver cold air to the cold aisle.

Supplemental cooling units can reduce cooling costs by 30 percent compared to traditional approaches to cooling. These savings are achieved because supplemental cooling brings cooling closer to the source of heat, reducing the fan power required to move air. They also use more efficient heat exchangers and deliver only sensible cooling, which is ideal for the dry heat generated by electronic equipment.

Refrigerant is delivered to the supplemental cooling modules through an overhead piping system, which, once installed, allows cooling modules to be easily added or relocated as the environment changes.

Figure 14. Supplemental cooling enables higher rack densities and improved cooling efficiency.

In the model, 20 racks at 12 kW density per rack use high density supplemental cooling while the remaining 40 racks (at 3.2 kW density) are supported by the traditional room cooling system. This creates an incremental six percent reduction in overall data center energy costs. As the facility evolves and more racks move to high density, the savings will increase.

10. Monitoring and Optimization

One of the consequences of rising equipment densities has been increased diversity within the data center. Rack densities are rarely uniform across a facility, and this can create cooling inefficiencies if monitoring and optimization are not implemented. Room cooling units on one side of a facility may be humidifying the environment based on local conditions while units on the opposite side of the facility are dehumidifying.

Cooling control systems can monitor conditions across the data center and coordinate the activities of multiple units to prevent conflicts and increase teamwork (Figure 15).
Figure 15. System-level control reduces conflict between room air conditioning units operating in different zones in the data center. In teamwork mode, the control coordinates humidification, dehumidification and compressor operation across the units rather than allowing units in different zones to work against one another.

In the model, an incremental saving of one percent is achieved as a result of system-level monitoring and control.

Other Opportunities for Savings

Energy Logic prioritizes the most effective energy reduction strategies, but it is not intended to be a comprehensive list of energy reducing measures. In addition to the actions in the Energy Logic strategy, data center managers should consider the feasibility of the following:

• Consolidate data storage from direct attached storage to network attached storage. Also, faster disks consume more power, so consider reorganizing data so that less frequently used data is on slower archival drives.

• Use economizers where appropriate to allow outside air to be used to support data center cooling during colder months, creating opportunities for energy-free cooling. With today's high-density computing environment, economizers can be cost effective in many more locations than might be expected.

• Monitor and reduce parasitic losses from generators, exterior lighting and perimeter access control. For a 1 MW load, generator losses of 20 kW to 50 kW have been measured.

What the Industry Can Learn from the Energy Logic Model

The Energy Logic model not only prioritizes actions for data center managers; it also provides a roadmap for the industry to deliver the information and technologies that data center managers can use to optimize their facilities. Here are specific actions the industry can take to support increased data center efficiency:

1. Define universally accepted metrics for processor, server and data center efficiency

There have been tremendous technology advances in server processors in the last decade. Until 2005, higher processor performance was linked with higher clock speeds and hotter chips consuming more power. Recent advances in multi-core technology have driven performance increases by using more computing cores operating at relatively lower clock speeds, which reduces power consumption.

Today processor manufacturers offer a range of server processors from which a customer needs to select the right processor for the given application. What is lacking is an easy-to-understand and easy-to-use measure, such as the miles-per-gallon automotive fuel efficiency ratings developed by the U.S. Department of Transportation, that can help buyers select the ideal processor for a given load. The performance-per-Watt metric is evolving gradually, with the SPEC score being used as the server performance measure, but more work is needed.

This same philosophy could be applied to the facility level. An industry standard of data center efficiency that measures performance per Watt of energy used would be extremely beneficial in measuring the progress of data center optimization efforts. The PUE ratio developed by the Green Grid provides a measure of infrastructure efficiency, but not total facility efficiency. IT management needs to work with IT equipment and infrastructure manufacturers to develop the miles-per-gallon equivalent for both systems and facilities.
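For reference, PUE (total facility power divided by IT equipment power) can be computed directly from the Figure 1 power draws for the unoptimized model data center. The sketch below simply restates that arithmetic; as noted above, PUE reflects infrastructure overhead rather than total facility efficiency.

# PUE for the unoptimized model data center, using the Figure 1 power draws (kW).
it_load_kw = 588                 # computing equipment (demand side)
facility_kw = {
    "computing": 588,
    "lighting": 10,
    "ups_and_distribution": 72,
    "cooling": 429,
    "switchgear_transformer": 28,
}
total_kw = sum(facility_kw.values())   # 1127 kW
pue = total_kw / it_load_kw
print(f"Total facility load: {total_kw} kW")
print(f"PUE = {pue:.2f}")              # roughly 1.9 for the unoptimized facility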
2. More sophisticated power management

While enabling power management features provides tremendous savings, IT management often prefers to stay away from this technology as the impact on availability is not clearly established. As more tools become available to manage power management features, and data is available to ensure that availability is not impacted, we should see this technology gain market acceptance. More sophisticated controls that would allow these features to be enabled only during periods of low utilization, or turned off when critical applications are being processed, would eliminate much of the resistance to using power management.

3. Matching power supply capacity to server configuration

Server manufacturers tend to oversize power supplies to accommodate the maximum configuration of a particular server. Some users may be willing to pay an efficiency penalty for the flexibility to more easily upgrade, but many would prefer a choice between a power supply sized for a standard configuration and one sized for the maximum configuration. Server manufacturers should consider making these options available, and users need to be educated about the impact power supply size has on energy consumption.

4. Designing for high density

A perception persists that high-density environments are more expensive than simply spreading the load over a larger space. High-density environments employing blade and virtualized servers are actually economical as they drive down energy costs and remove constraints to growth, often delaying or eliminating the need to build new facilities.

5. High-voltage distribution

415V power distribution is used commonly in Europe, but UPS systems that easily support this architecture are not readily available in the United States. Manufacturers of critical power equipment should provide 415V output as an option on UPS systems and can do more to educate their customers regarding high-voltage power distribution.

6. Integrated measurement and control

Data that can be easily collected from IT systems and the racks that support them has yet to be effectively integrated with support system controls.
This level of integration would allow IT systems, applications and support systems to be more effectively managed based on actual conditions at the IT equipment level.

Conclusion

Data center managers and designers, IT equipment manufacturers and infrastructure providers must all collaborate to truly optimize data center efficiency.

For data center managers, there are a number of actions that can be taken today to significantly drive down energy consumption while freeing physical space and power and cooling capacity to support growth.

Energy reduction initiatives should begin with policies that encourage the use of efficient IT technologies, specifically low power processors and high-efficiency power supplies. This will allow more efficient technologies to be introduced into the data center as part of the normal equipment replacement cycle.

Power management software should also be considered in applications where it is appropriate, as it may provide greater savings than any other single technology, depending on data center utilization.

IT consolidation projects also play an important role in data center optimization. Both blade servers and virtualization contribute to energy savings and support a high-density environment that facilitates true optimization.

The final steps in the Energy Logic optimization strategy focus on infrastructure systems, employing a combination of best practices and efficient technologies to increase the efficiency of power and cooling systems.

Together these strategies created a 52 percent reduction in energy use in the 5,000-square-foot data center model developed by Emerson Network Power while removing constraints to growth. Appendix B shows exactly how these savings are achieved over time as legacy technologies are phased out and savings cascade across systems.
Appendix A: Data Center Assumptions Used to Model Energy Use

To quantify the results of the efficiency improvement options presented in this paper, a hypothetical data center that has not been optimized for energy efficiency was created. The ten efficiency actions presented in this paper were then applied to this facility sequentially to quantify results.

This 5,000-square-foot hypothetical data center has 210 racks with an average heat density of 2.8 kW per rack. The racks are arranged in a hot-aisle/cold-aisle configuration. Cold aisles are four feet wide, and hot aisles are three feet wide. Based on this configuration and operating parameters, average facility power draw was calculated to be 1127 kW. Following are additional details used in the analysis.

Servers
• Age is based on an average server replacement cycle of 4-5 years.
• Processor Thermal Design Power averages 91 W per processor.
• All servers have dual redundant power supplies. The average DC-DC conversion efficiency is assumed at 85 percent and the average AC-DC conversion efficiency is assumed at 79 percent for the mix of servers from four years old to new.
• Daytime power draw is assumed to exist for 14 hours on weekdays and 4 hours on weekends. Nighttime power draw is 80 percent of daytime power draw.
• See Figure 16 for more details on server configuration and operating parameters.

Figure 16. Server operating parameters used in the Energy Logic model.

Parameter | Single socket | Two sockets | Four sockets | More than four | Total
Number of servers | 157 | 812 | 84 | 11 | 1064
Daytime power draw (Watts/server) | 277 | 446 | 893 | 4387 | –
Nighttime power draw (Watts/server) | 247 | 388 | 775 | 3605 | –
Total daytime power draw (kW) | 44 | 362 | 75 | 47 | 528
Total nighttime power draw (kW) | 39 | 315 | 65 | 38 | 457
Average server power draw (kW) | 41 | 337 | 70 | 42 | 490
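The class totals in Figure 16 follow directly from the server counts and per-server draws. A quick cross-check (values taken from the table; small differences from the published totals are due to rounding of the per-server figures):

# Cross-check of the Figure 16 totals: server count x per-server draw, by class.
servers = {
    "single socket":  {"count": 157, "day_w": 277,  "night_w": 247},
    "two sockets":    {"count": 812, "day_w": 446,  "night_w": 388},
    "four sockets":   {"count": 84,  "day_w": 893,  "night_w": 775},
    "more than four": {"count": 11,  "day_w": 4387, "night_w": 3605},
}

day_total = sum(s["count"] * s["day_w"] for s in servers.values()) / 1000
night_total = sum(s["count"] * s["night_w"] for s in servers.values()) / 1000
print(f"Total daytime draw:   {day_total:.0f} kW")   # ~529 kW vs. 528 kW in the table
print(f"Total nighttime draw: {night_total:.0f} kW") # ~459 kW vs. 457 kW in the table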
Storage
• Storage type: network attached storage.
• Capacity is 120 terabytes.
• Average power draw is 49 kW.

Communication Equipment
• Routers, switches and hubs required to interconnect the servers, storage and access points through the local area network and provide secure access to public networks.
• Average power draw is 49 kW.

Power Distribution Units (PDUs)
• Provide output of 208V, 3-phase through whips and rack power strips to power servers, storage, communication equipment and lighting (average load is 539 kW).
• Input from the UPS is 480V, 3-phase.
• Efficiency of power distribution is 97.5 percent.

UPS System
• Two double-conversion 750 kVA UPS modules arranged in a dual redundant (1+1) configuration with input filters for power factor correction (power factor = 91 percent).
• The UPS receives 480V input power from the distribution board and provides 480V, 3-phase power to the power distribution units on the data center floor.
• UPS efficiency at part load: 91.5 percent.

Cooling System
• The cooling system is chilled water based.
• Total sensible heat load on the precision cooling system includes heat generated by the IT equipment, UPS and PDUs, building egress and human load.
• Cooling system components:
  - Eight 128.5 kW chilled water based precision cooling units placed at the end of each hot aisle, including one redundant unit.
  - The chilled water source is a chiller plant consisting of three 200-ton chillers (n+1) with matching condensers for heat rejection and four chilled water pumps (n+2).
  - The chillers, pumps and air conditioners are powered from the building distribution board (480V, 3-phase).
  - Total cooling system power draw is 429 kW.

Building Substation
• The building substation provides 480V, 3-phase power to the UPS and cooling systems.
• Average load on the building substation is 1,099 kW.
• Utility input is a 13.5 kV, 3-phase connection.
• The system consists of a transformer with isolation switchgear on the incoming line, and switchgear, circuit breakers and a distribution panel on the low-voltage line.
• Substation, transformer and building entrance switchgear composite efficiency is 97.5 percent.
Appendix B: Timing Benefits from Efficiency Improvement Actions

Efficiency Improvement | Savings (kW) | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 (estimated cumulative yearly savings, kW)

IT Policies
Low power processors | 111 | 6 | 22 | 45 | 78 | 111
Higher efficiency power supplies | 124 | 12 | 43 | 68 | 99 | 124
Server power management | 86 | 9 | 26 | 43 | 65 | 86

IT Projects
Blade servers | 7 | 1 | 7 | 7 | 7 | 7
Virtualization | 86 | 9 | 65 | 69 | 86 | 86

Best Practices
415V AC power distribution | 20 | 0 | 0 | 20 | 20 | 20
Cooling best practices | 15 | 15 | 15 | 15 | 15 | 15

Infrastructure Projects
Variable capacity cooling | 49 | 49 | 49 | 49 | 49 | 49
High density supplemental cooling | 72 | 0 | 57 | 72 | 72 | 72
Monitoring and optimization | 15 | 0 | 15 | 15 | 15 | 15

Total | 585 | 100 | 299 | 402 | 505 | 585
Note: Energy Logic is an approach developed by Emerson Network Power to provide organizations with the information they need to more effectively reduce data center energy consumption. It is not directly tied to a particular Emerson Network Power product or service. We encourage use of the Energy Logic approach in industry discussions on energy efficiency and will permit use of the Energy Logic graphics with the following attribution: The Energy Logic approach developed by Emerson Network Power. To request a graphic, send an email to energylogic@emerson.com.

Emerson Network Power
1050 Dearborn Drive, P.O. Box 29186, Columbus, Ohio 43229
800.877.9222 (U.S. & Canada only) | 614.888.0246 (outside U.S.) | Fax: 614.841.6022
EmersonNetworkPower.com | Liebert.com

While every precaution has been taken to ensure accuracy and completeness in this literature, Liebert Corporation assumes no responsibility, and disclaims all liability for damages resulting from use of this information or for any errors or omissions. Specifications subject to change without notice. © 2007 Liebert Corporation. All rights reserved throughout the world. Trademarks or registered trademarks are property of their respective owners. Liebert and the Liebert logo are registered trademarks of the Liebert Corporation. Business-Critical Continuity, Emerson Network Power and the Emerson Network Power logo are trademarks and service marks of the Emerson Electric Co. WP154-158-117 SL-24621

Emerson Network Power. The global leader in enabling Business-Critical Continuity™. EmersonNetworkPower.com
AC Power | Connectivity | DC Power | Embedded Computing | Embedded Power | Monitoring | Outside Plant | Power Switching & Controls | Precision Cooling | Racks & Integrated Cabinets | Services