Green computing and eco-friendly data centers are important topics as data center energy usage is growing rapidly. The document discusses 5 key ways to make data centers more energy efficient: 1) Measure power usage effectiveness to track efficiency, 2) Manage airflow to prevent hot and cold air mixing, 3) Adjust thermostat settings higher to reduce cooling needs, 4) Utilize free cooling techniques like evaporative cooling when possible, and 5) Optimize power distribution to minimize energy losses. Proper implementation of these strategies can significantly reduce energy consumption and costs while also helping the environment.
2. Abstract
The green data center has moved from the theoretical to the realistic, with IT leaders
being challenged to construct new data centers (or retrofit existing ones) with
energy-saving features, sustainable materials, and other environmental efficiencies
in mind.
This project examines the environmental effects caused by data centers, how severe
those effects are, and how to overcome them. The measures provided are not limited
to construction; they also address other dimensions of data center operation.
3. Introduction
In recent years, there has been rapid growth in the Information Technology
(IT) industry in India. Data centers, the key infrastructure component
powering this sector, operate continuously (24x7) throughout the year and are very
energy intensive. These high-tech facilities generally consume many times the energy
of a typical office building, as much as a hundred times more on a square-meter basis.
These facilities are experiencing significant growth in India, making data centers one
of the fastest growing energy-use sectors and impacting electrical supply and distribution.
By one estimate, data centers consumed 0.8% of the world’s electricity in 2005, at an
aggregate cost of 7.2 billion USD. These figures are nearly double the estimates for
2000, indicating an annual growth rate of 16%. Although the net consumption of data
centers in India has yet to be quantified, the IT-heavy nature of the country’s economy
suggests that relative data center energy consumption in India exceeds the global average.
4. It’s not “speculation” any more…
Atmospheric warming is now a broadly accepted trend.
41% of the Earth’s population (2.3 billion) live in water-stressed areas; 3.5 billion will do
so by 2025.
These increasingly include developed areas such as the Western USA, Australia, and SE England.
There is now increasing pressure on traditional energy resources, especially from rapidly
industrializing nations (China, India).
Energy consumption is expected to increase by over 129% in parts of Asia by 2020.
CEO/CFO focus is turning to energy costs & environmental costs (CO2 emissions).
Major investors increasingly see poor green performance as a source of risk.
6. Information technology (IT) accounts for….
–Accounts for 2% of anthropogenic CO2.
–Roughly equivalent to the aviation industry.
–IT energy usage will double in the next 4 years.
7. Energy Demands Have Data Centers at a
Tipping Point
IT energy demand doubling every 9 – 24 months.
6,500 US data centers consume electricity equal to that of the State of Utah.
On average, 100 units of generated energy yield only 3 units of productive IT work.
IT accounts for 2% of anthropogenic CO2 emissions.
e-waste cannot be ignored – 1 billion computers potential scrap by 2010.
Enough to fill the Rose Bowl each year.
IBM investing to lead in power and cooling efficiency!
8. What is a Green Data Center?
Diagnose: get the facts to understand your energy use and opportunities for improvement.
Build: plan, build, and upgrade to energy-efficient data centers.
Virtualize: implement virtualization and other innovative technologies.
Cool: use innovative cooling solutions.
Manage & Measure: seize control with energy management software.
9. How is energy typically used in the data
center?
[Chart: typical energy splits. Data center overall: 55% / 45%; server/storage hardware vs. processor: 70% / 30%.]
10. Ways for an Eco-friendly Data Center
By applying several simple design choices, you can improve facility efficiency, reduce
costs, and reduce your impact on the environment. Here are the top five best
practices:
Measure PUE
Manage airflow
Adjust the thermostat
Use free cooling
Optimize power distribution
11. 1. Measure PUE
We can't manage what we don’t measure, so we have to be sure to track the data
center's energy use. The industry uses a ratio called Power Usage Effectiveness (PUE) to
measure and help reduce the energy used for non-computing functions like cooling
and power distribution. To effectively use PUE, it's important to measure often. It
should be sampled at least once per second. It’s even more important to capture
energy data over the entire year, since seasonal weather variations affect PUE.
It’s a measure of how effectively you deliver power and
cooling to the IT equipment.
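The PUE bookkeeping above can be sketched in a few lines. The quarterly energy figures below are hypothetical; the annual value is energy-weighted rather than a plain average of interval PUEs, so seasonal variation is reflected properly:

```python
# Minimal sketch of PUE bookkeeping (hypothetical figures).
# PUE = total facility energy / IT equipment energy; closer to 1.0 is better.

def pue(total_kwh, it_kwh):
    """PUE for a single measurement interval."""
    return total_kwh / it_kwh

def annual_pue(samples):
    """Energy-weighted annual PUE from (total_kwh, it_kwh) interval samples."""
    total = sum(t for t, _ in samples)
    it = sum(i for _, i in samples)
    return total / it

# Hypothetical quarterly energy totals (kWh): overhead rises in warm quarters.
quarters = [(520_000, 310_000), (560_000, 312_000),
            (640_000, 315_000), (555_000, 311_000)]
print(round(annual_pue(quarters), 2))
```

In practice the samples would come far more often (the text suggests at least once per second), but the weighting logic is the same.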
12. A service provider should focus on reducing energy use while serving the explosive
growth of the Internet. Most data centers use almost as much
non-computing or "overhead" energy (for cooling and power conversion) as they
do to power their servers.
According to the Uptime Institute's 2014 Data Center Survey, the global average of
respondents' largest data centers is around 1.7.
To decrease PUE, a different approach should be taken. Facilities have
different power and cooling infrastructures and are located in different
climates. Seasonal weather patterns also impact PUE values, which tend to be
lower during cooler quarters. Some operators have managed to maintain a low
average PUE across their entire fleet of data center sites around the world, even
during hot, humid Atlanta summers.
13. 2. Manage airflow
Good air flow management is crucial to efficient data center operation. Minimize hot
and cold air mixing by using well-designed containment. Then, eliminate hot spots and
be sure to use blanking plates (or flat sheets of metal) for any empty slots in your rack.
A little analysis can have big payoffs. For example, thermal modelling using
computational fluid dynamics (CFD) can quickly help characterize and optimize air
flow for the facility without having to reorganize the computing room.
14. One of the simplest ways to save energy in a data center is to raise the
temperature. It’s a myth that data centers need to be kept chilly. According to
expert recommendations and most IT equipment manufacturers' specifications,
data center operators can safely raise their cold aisle to 80°F or higher. By doing so,
we can significantly reduce facility energy use.
Another way is to use thermal modelling to locate “hot spots” and better understand
airflow in the data center. In the design phase, physically arrange equipment to
even out temperatures in the facility. Even after that, certain equipment like
computer room air conditioners (CRACs) can be moved to reduce hot spots and
even out the ambient temperature, ultimately reducing the amount of time the
CRACs must run.
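A minimal sketch of the hot-spot idea: flag racks whose inlet temperature exceeds the recommended cold-aisle limit (27°C, per the ASHRAE guidance quoted later in this document). The rack names and readings are hypothetical:

```python
# Flag "hot spots": racks whose inlet air exceeds the recommended limit.
# The 27 °C limit is the ASHRAE cold-aisle recommendation cited in the text;
# rack names and sensor readings are hypothetical.

ASHRAE_MAX_C = 27.0

inlet_temps_c = {
    "rack-A1": 24.5,
    "rack-A2": 29.1,   # likely recirculated hot-aisle air
    "rack-B1": 26.0,
    "rack-B2": 31.4,   # likely a hot spot
}

hot_spots = sorted(name for name, t in inlet_temps_c.items() if t > ASHRAE_MAX_C)
print(hot_spots)
```

In a real facility the readings would come from a sensor grid or a CFD model, but the triage step is this simple.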
16. To cut cooling costs and save energy, prevent the “hot aisle” air behind the server racks
from mixing with the “cold aisle” in front of the server racks. In large data centers, people
use appropriate ducting and permanent enclosures. In addition, they take simple measures
well-suited for smaller “closet” style data centers. For instance, we:
Use blanking panels (or flat sheets of metal) to close off empty rack slots and prevent
hot aisle air from seeping into the cold aisle.
Hang plastic curtains (like those used in commercial refrigerators) to seal off the cold
aisle.
Enclose areas with components that run hotter (such as power supply units or PSUs)
with plastic curtains.
These efforts help to reduce the total amount of energy used for cooling. At the same time,
they ensure that the cooler air sent into the cold aisles is truly cool enough to do
its job.
17. 3. Adjust the thermostat
It has long been believed that IT equipment needs to run at low temperatures—
between 15°C/60°F and 21°C/70°F. However, the American Society of Heating,
Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends cold aisle
temperatures of up to 27°C/81°F, which has been found to have no detrimental effect on
equipment. Most IT equipment manufacturers spec machines at 32°C/90°F or higher,
so there is plenty of margin. In addition, most CRACs are set to dehumidify the air
down to 40% relative humidity and to reheat air if the return air is too cold. Raising the
temperature and turning off dehumidifying and reheating provides significant energy
savings.
18. An elevated cold aisle temperature allows CRACs to operate more efficiently at higher
intake temperatures. Also, it allows for more days of “free cooling”—days where
mechanical cooling doesn’t need to run—if the facility has air- or water-side
economization.
The simple act of raising the temperature from 22°C/72°F to 27°C/81°F in a single
200kW networking room could save tens of thousands of dollars annually in energy
costs.
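The savings claim can be roughed out as follows. The ~4% cooling-energy savings per °F of setpoint raise is a commonly cited rule of thumb, not a guarantee, and the cooling overhead fraction and electricity price are assumptions for illustration:

```python
# Rough estimate of annual savings from raising the setpoint 72 °F -> 81 °F.
# Assumptions (illustrative): cooling draws ~45% as much power as the IT load,
# ~4% cooling-energy savings per 1 °F raised, electricity at $0.10/kWh.

room_kw = 200            # IT load of the networking room (from the text)
cooling_overhead = 0.45  # assumed cooling power as a fraction of IT load
savings_per_deg_f = 0.04 # rule-of-thumb savings per degree Fahrenheit
delta_f = 81 - 72

cooling_kwh_per_year = room_kw * cooling_overhead * 24 * 365
saved_kwh = cooling_kwh_per_year * min(savings_per_deg_f * delta_f, 1.0)
saved_usd = saved_kwh * 0.10
print(round(saved_usd))
```

Under these assumptions the result lands in the tens of thousands of dollars per year, consistent with the claim above.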
19. 4. Use free cooling
Chillers typically use the most energy in a data center's cooling infrastructure, so the
largest opportunity for savings lies in minimizing their use. Take advantage of "free
cooling" to remove heat from the facility without using a chiller. This can include
using low temperature ambient air,
evaporating water, or
a large thermal reservoir.
While there's more than one way to free cool, water and air-side economizers are
proven and readily available.
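One way to gauge the free-cooling opportunity is to count the hours when ambient air, plus an assumed economizer approach margin, still meets the cold-aisle limit. The 5°C approach margin and the hourly temperatures below are hypothetical; the 27°C limit is the ASHRAE figure cited in the thermostat section:

```python
# Sketch: how many hours could an air-side economizer carry the load?
# APPROACH_C is an assumed margin between ambient and supply air temperature.

APPROACH_C = 5.0
COLD_AISLE_MAX_C = 27.0

def free_cooling_hours(hourly_ambient_c):
    """Count hours where ambient + approach still meets the cold-aisle limit."""
    return sum(1 for t in hourly_ambient_c if t + APPROACH_C <= COLD_AISLE_MAX_C)

# Toy example: 24 hourly readings (deg C) for a mild day.
day = [12, 11, 11, 10, 10, 11, 13, 15, 18, 20, 22, 24,
       25, 26, 25, 24, 22, 20, 18, 16, 15, 14, 13, 12]
print(free_cooling_hours(day))
```

Run against a full year of weather data for a candidate site, this kind of count is a quick first screen for economizer viability.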
20. Cooling with water—not chillers
The electricity that powers a data center ultimately turns into heat. Most data centers
use chillers or air conditioning units to cool things down, requiring 30-70% overhead
in energy usage. Data centers often use water as an energy-efficient way to cool
instead.
Trap hot air and cool equipment with water
Google has designed custom cooling systems for its server racks, named
“Hot Huts” because they serve as temporary homes for the hot air that leaves the
servers, sealing it away from the rest of the data center floor. Fans on top of each Hot
Hut unit pull hot air from behind the servers through water-cooled coils. The chilled air
leaving the Hot Hut returns to the ambient air in the data center, where the servers
can draw it in again, cooling them down and completing the cycle.
21. Take advantage of evaporative cooling: Evaporation is a powerful tool. In our
bodies, it helps us maintain our temperature even when outside temperatures are
warmer than we are. It works similarly in a data center’s cooling towers. As hot
water from the data center flows down the towers through a material that speeds
evaporation, some of the water turns to vapor. A fan lifts this vapor, removing the
excess heat in the process, and the tower sends the cooled water back into the
data center.
Use the natural cooling power of sea water: Evaporating water isn't the only way to
free cool. Some data centers use sea water to cool without chillers. Data centers
can be established in places like Hamina, Finland, as Google has done, because of
the cold climate and the location on the Gulf of Finland. The cooling system
pumps cold water from the sea to the facility, transfers heat from operations
to the sea water through a heat exchanger, and then cools this water before
returning it to the sea. This approach can provide all of the needed cooling year
round, without installing any mechanical chillers at all.
22. 5. Optimize power distribution
We can minimize power distribution losses by eliminating as many power
conversion steps as possible. For the conversion steps that remain, specify
efficient equipment such as transformers and power distribution units (PDUs). One of the
largest losses in data center power distribution is from the uninterruptible power
supply (UPS), so it's important to select a high-efficiency model. Also, keep
voltages as high as possible, as close to the load as possible, to reduce line losses.
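Because losses multiply through the chain, upgrading or eliminating conversion stages compounds. A small sketch with illustrative (not vendor) stage efficiencies:

```python
# Sketch: power-distribution losses multiply through each conversion stage.
# Stage efficiencies below are illustrative assumptions, not vendor figures.

def delivered_fraction(stage_efficiencies):
    """Fraction of input power that survives every conversion stage."""
    frac = 1.0
    for eff in stage_efficiencies:
        frac *= eff
    return frac

legacy = [0.90, 0.94, 0.95, 0.80]     # UPS, transformer, PDU, server PSU
optimized = [0.97, 0.98, 0.99, 0.94]  # high-efficiency replacements

print(round(delivered_fraction(legacy), 3))
print(round(delivered_fraction(optimized), 3))
```

Under these assumed numbers, the legacy chain delivers only about 64% of input power to the IT load while the optimized chain delivers nearly 89%, which is why stage-by-stage efficiency matters.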
Building custom, highly-efficient servers: Usually servers are high-performance
computers that run all the time. They're the core of data centers, and should be
designed to use as little energy as possible. This can be done by minimizing power loss
and by removing unnecessary parts. Also ensure that servers use little energy while
waiting for a task, rather than hogging power when there’s less computing
work to be done.
23. Optimize the power path: A typical server wastes up to a third of the energy it uses
before any of that energy reaches the parts that do the actual computing. Servers lose
the most energy at the power supply, which converts the AC voltage coming from a
standard outlet to a set of low DC voltages. They then lose more at the voltage
regulator, which further converts the power supply's output to the voltages required
by microchips. Designed with low efficiency standards in order to save on initial cost,
traditional servers end up costing much more in electricity in the long run.
Use efficient power supplies and highly efficient voltage regulator modules to ensure
that most of the power goes to the components that do the actual computing work.
Cut out two of the AC/DC conversion stages by putting back-up batteries directly on
the server racks. This yields an estimated annual savings of over 500 kWh per server,
or 25%, over a typical system.
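As a back-of-envelope check on those figures: if 500 kWh/year is 25% of a server's annual energy, the implied baseline and average draw come out as plausible values for an always-on server. The numbers are purely illustrative:

```python
# Sanity-check the "500 kWh/year = 25%" figure from the text.
baseline_kwh = 500 / 0.25                 # implied annual energy per server
avg_watts = baseline_kwh * 1000 / (24 * 365)  # implied average power draw
print(round(baseline_kwh), round(avg_watts))
```

A ~2,000 kWh/year baseline corresponds to roughly a 230 W continuous draw, which is consistent with a typical always-on server.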
24. Conclusion
The primary concern before and after converting to a green data center should be
power management. Most administrators are not comfortable monitoring power usage
because their main concern is performance and uptime, not power. This practice needs
to change: power usage should be measured and monitored wherever needed. Even
after converting to a GDC, the enterprise needs to monitor usage to reap the
benefits of the transition to green.
25. Vendors can also help with power management by selling energy-efficient or
recyclable hardware. Based on the growth of the industry, power needs are
increasing, and thus there is a need to monitor power usage. As power needs
increase, more earth-friendly measures need to be taken to help the environment. Also,
management and administrators need to discuss the increasing needs of their data
centers. The power scheme currently used in a data center will change with the growth
of the industrial as well as the educational sector.