Best Practices for Data Centers: Lessons Learned
Steve Greenberg, Evan Mills, and Bill Tschudi, Lawrence Berkeley National Laboratory
Peter Rumsey, Rumsey Engineers
Bruce Myatt, EYP Mission Critical Facilities
ABSTRACT
Over the past few years, the authors benchmarked 22 data center buildings. From this
effort, we have determined that data centers can be over 40 times as energy intensive as
conventional office buildings. Studying the more efficient of these facilities enabled us to
compile a set of “best-practice” technologies for energy efficiency. These best practices include:
improved air management, emphasizing control and isolation of hot and cold air streams; right-
sizing central plants and ventilation systems to operate efficiently both at inception and as the
data center load increases over time; optimized central chiller plants, designed and controlled to
maximize overall cooling plant efficiency; central air-handling units in lieu of distributed units;
“free cooling” from either air-side or water-side economizers; alternative humidity control,
including elimination of control conflicts and the use of direct evaporative cooling; improved
uninterruptible power supplies; high-efficiency computer power supplies; on-site generation
combined with special chillers for cooling using the waste heat; direct liquid cooling of racks or
computers; and lowering the standby losses of standby generation systems.
Other benchmarking findings include power densities from 5 to nearly 100 Watts per
square foot; though lower than originally predicted, these densities are growing. A 5:1 variation
in cooling effectiveness index (ratio of cooling power to computer power) was found, as well as
large variations in power distribution efficiency and overall center performance (ratio of
computer power to total building power). These observed variations indicate the potential for
energy savings achievable by implementing best practices in the design and operation of data
centers.
[Figure: Computer Power : Total Power, by Data Center Number (centers 1-12 and 16-22; legible data labels range from 0.60 to 0.75). Note: all values are shown as a fraction of the respective data center total power consumption.]
The following sections briefly cover data center best practices that have emerged from
studying these centers. The references and links offer further details.
[Figure: Computer Power : HVAC Power ratio (0.0 to 4.0), by Data Center Number (centers 1-12, 14, and 16-22).]
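To make these benchmark metrics concrete, the sketch below computes the two ratios charted above for a single facility. The power readings are hypothetical examples, not measurements from the study.

```python
# Benchmark metrics for one facility; the power readings below are
# hypothetical examples, not measured data from the study.

computer_power_kw = 750.0   # IT equipment load measured at the racks
hvac_power_kw = 450.0       # chiller plant, pumps, towers, and air handlers
total_power_kw = 1500.0     # all power entering the building

# Overall center performance: fraction of building power reaching the computers.
computer_to_total = computer_power_kw / total_power_kw

# Computer power : HVAC power, the ratio charted above (higher is better).
computer_to_hvac = computer_power_kw / hvac_power_kw

print(f"Computer:Total = {computer_to_total:.2f}")  # 0.50
print(f"Computer:HVAC  = {computer_to_hvac:.2f}")   # 1.67
```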
Improving "Air management" - or optimizing the delivery of cool air and the collection
of waste heat - can involve many design and operational practices. Air cooling improvements
can often be made by addressing:
• Use of "hot aisle and cold aisle" arrangements, in which rows of racks are oriented so
that the cold inlet sides face each other and, similarly, the hot discharge sides face each
other (see Figure 3 for a typical arrangement; a simple airflow sizing sketch follows below)
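As a rough illustration of why hot/cold separation matters, the following sketch applies the standard sensible-heat relation for air at sea level; it is not a formula from the paper, and the rack load and temperature rise are assumed values.

```python
# Rough airflow sizing for a rack, using the standard sensible-heat
# relation for air at sea level (about 1.08 BTU/hr per cfm per deg F).
# The rack load and temperature rise below are assumed values.

rack_load_w = 10_000   # IT load of one rack, watts
delta_t_f = 20.0       # cold-aisle inlet to hot-aisle exhaust rise, deg F

rack_load_btuh = rack_load_w * 3.412                # watts -> BTU/hr
airflow_cfm = rack_load_btuh / (1.08 * delta_t_f)

print(f"Required airflow: {airflow_cfm:.0f} cfm")   # ~1580 cfm
```

The larger the inlet-to-exhaust temperature difference that isolation makes possible, the less air the fans must move for the same rack load.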
[Figure: UPS efficiency versus Percent of Rated Active Power Load (0% to 100%), comparing double-conversion and delta-conversion UPS designs; the efficiency axis spans roughly 70% to 95%.]
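Curves like these translate directly into operating cost. The sketch below compares annual losses for the two topologies at a 40% load fraction; the efficiency values are assumptions read from typical published curves, not data from the study.

```python
# Annual UPS losses at part load for two topologies. The efficiency
# values are assumptions from typical published curves, not data from
# the study; redundant designs often keep UPS load fractions this low.

it_load_kw = 400.0       # actual IT load carried by the UPS
ups_rating_kw = 1000.0   # nameplate rating (40% load fraction)
hours_per_year = 8760

assumed_efficiency = {
    "double-conversion": 0.88,  # assumed efficiency at 40% load
    "delta-conversion": 0.94,   # assumed efficiency at 40% load
}

for topology, eff in assumed_efficiency.items():
    loss_kw = it_load_kw / eff - it_load_kw
    annual_mwh = loss_kw * hours_per_year / 1000
    print(f"{topology}: {loss_kw:.1f} kW of loss, {annual_mwh:.0f} MWh/yr")
```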
• Water flow is a very efficient method of transporting heat. On a volume basis, water
carries approximately 3,500 times as much heat as air, and moving the water requires an
order of magnitude less energy (a quick check of this ratio appears after this list).
Water-cooled systems can thus save not just energy but space as well.
• Cooling racks of IT equipment reliably and economically is the main purpose of the data
center cooling system; conditioning the remaining space in the data center room without
the rack load is a minor task in both difficulty and importance.
• Capturing heat at a high temperature directly from the racks allows for much greater use
of waterside economizer free cooling, which can reduce cooling energy use by 60% or
more when operating.
• Transferring heat from a small volume of hot air directly off the equipment to a chilled
water loop is more efficient than mixing hot air with a large volume of ambient air and
removing heat from the entire mixed volume. The water-cooled rack provides the
ultimate hot/cold air separation and can run at very high hot-side temperatures without
creating uncomfortably hot working conditions for occupants.
• Direct liquid cooling of components offers the greatest cooling system efficiency by
eliminating airflow needs entirely. When direct liquid component systems become
available, they should be evaluated on a case-by-case basis.
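As promised above, here is a quick order-of-magnitude check of the 3,500:1 water-to-air heat transport claim, using textbook properties near room conditions (values vary somewhat with temperature and pressure).

```python
# Order-of-magnitude check of the 3,500:1 water-to-air heat transport
# claim, using textbook properties near room conditions.

# Volumetric heat capacity = density * specific heat.
water_kj_per_m3_k = 1000.0 * 4.18   # ~4180 kJ/(m^3*K)
air_kj_per_m3_k = 1.20 * 1.005      # ~1.2 kJ/(m^3*K)

ratio = water_kj_per_m3_k / air_kj_per_m3_k
print(f"Water carries ~{ratio:.0f} times as much heat per unit volume")
```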
• Create a process for the IT, facilities, and design personnel to make decisions together. It
is important for everyone involved to understand the other parties' concerns.
• Institute an energy management program, integrated with other functions (risk
management, cost control, quality assurance, employee recognition, and training).
• Use life-cycle cost analysis as a decision-making tool, including energy price volatility
and non-energy benefits (e.g., reliability, environmental impacts); a minimal worked
sketch follows this list.
• Create design intent documents to help involve all key stakeholders, and keep the team
on the same page, while clarifying and preserving the rationale for key design decisions.
• Adopt quantifiable goals based on Best Practices.
• Minimize construction and operating costs by introducing energy optimization at the
earliest phases of design.
• Include integrated monitoring, measuring and controls in the facility design.
• Benchmark existing facilities, track performance, and assess opportunities.
• Incorporate a comprehensive commissioning (quality assurance) process for construction
and retrofit projects.
• Include periodic “re-commissioning” in the overall facility maintenance program.
• Ensure that all facility operations staff receive site-specific training on the identification
and proper operation of energy-efficiency features.
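The following sketch illustrates the life-cycle cost comparison mentioned above. Every figure in it (first costs, energy use, price, discount and escalation rates) is a hypothetical placeholder; the structure of the comparison is the point, not the numbers.

```python
# Minimal life-cycle cost comparison; all numbers are hypothetical
# placeholders. First cost plus discounted, escalating energy bills.

def life_cycle_cost(first_cost, annual_kwh, price_per_kwh,
                    years=15, discount=0.07, escalation=0.03):
    """Present value of first cost plus escalating annual energy bills."""
    lcc = first_cost
    for year in range(1, years + 1):
        energy_cost = annual_kwh * price_per_kwh * (1 + escalation) ** year
        lcc += energy_cost / (1 + discount) ** year
    return lcc

baseline = life_cycle_cost(first_cost=1_000_000, annual_kwh=8_000_000,
                           price_per_kwh=0.10)
efficient = life_cycle_cost(first_cost=1_200_000, annual_kwh=5_500_000,
                            price_per_kwh=0.10)

print(f"Baseline design:  ${baseline:,.0f}")
print(f"Efficient design: ${efficient:,.0f}")
```

Sensitivity runs over the escalation and discount rates capture the energy price volatility that the bullet above mentions.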
Summary
Through the study of 22 data centers, the following best practices have emerged:
• Improved air management, emphasizing control and isolation of hot and cold air streams.
• Right-sized central plants and ventilation systems to operate efficiently both at inception
and as the data center load increases over time.
• Optimized central chiller plants, designed and controlled to maximize overall cooling
plant efficiency, including the chillers, pumps, and towers.
• Central air-handling units with high fan efficiency, in lieu of distributed units.
• Air-side or water-side economizers, operating in series with, or in lieu of, compressor-
based cooling, to provide “free cooling” when ambient conditions allow.
• Alternative humidity control, including elimination of simultaneous humidification and
dehumidification, and the use of direct evaporative cooling.
• Improved configuration and operation of uninterruptible power supplies.
• High-efficiency computer power supplies to reduce load at the racks.
Implementing these best practices for new centers and as retrofits can be very cost-
effective due to the high energy intensity of data centers. Making information available to
design, facilities, and IT personnel is central to this process. To this end, LBNL has recently
developed a self-paced on-line training resource that includes further detail on how to implement
best practices, along with tools to assist data center operators and service providers in capturing
the energy savings potential. See http://hightech.lbl.gov/DCTraining/top.html
References
ASHRAE 2004. Thermal Guidelines for Data Processing Environments. TC 9.9 Mission Critical
Facilities. Atlanta, Ga.: American Society of Heating, Refrigerating and Air-Conditioning
Engineers.
ASHRAE 2005a. Datacom Equipment Power Trends and Cooling Applications. TC 9.9. Atlanta,
Ga.: American Society of Heating, Refrigerating and Air-Conditioning Engineers.
RMI 2003. Design Recommendations for High-Performance Data Centers – Report of the
Integrated Design Charrette. Snowmass, Colo.: Rocky Mountain Institute.
Tschudi, W., P. Rumsey, E. Mills, T. Xu. 2005. "Measuring and Managing Energy Use in
Cleanrooms." HPAC Engineering, December.