Deploying High-Density Zones in a Low-Density Data Center
New breakthroughs in power and cooling technology allow for a simple and rapid
deployment of self-contained high-density zones within an existing or new low-density data
center. The independence of these high-density zones allows for predictable and reliable
operation of high-density equipment without a negative impact on the performance of
existing low-density power and cooling infrastructure. A side benefit is that these high-density
zones operate at much higher electrical efficiency than conventional designs.
Guidance on the planning, design, implementation, and predictable operation of high-density
zones is provided.
High-density equipment such as blade servers, 1U servers, and multi-core high-end servers provides more
computing per watt compared to previous-generation servers. However, when consolidated, this new
generation of equipment requires concentrated power and cooling resources. Data center operators and IT
executives are often uncertain about the capability of their existing data center and whether a new data
center must be built to support higher rack densities. Fortunately, a simple solution exists that allows for the
rapid deployment of high-density racks within a traditional low-density data center. A high-density zone, as
illustrated in Figure 1, allows data center managers to support a mixed-density data center environment for
a fraction of the cost of building an entire new data center.
In this paper a high-density zone is defined as one or more rows of racks that support a per-rack density
average of 4 kW or above. A high-density zone resides within the borders of a larger, low-density data
center. The high-density zone is not the same as a high-density data center, which is a data center
dedicated to supporting nothing but high-density racks. Managing for the deployment and operation of a
high-density data center is not the subject of this paper.
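As an illustrative aside, the paper's 4 kW average threshold can be expressed as a one-line check. The function name and rack loads below are hypothetical, not taken from the paper:

```python
# Minimal sketch of this paper's high-density zone definition: a row whose
# average per-rack load is 4 kW or above. Values are illustrative only.

HIGH_DENSITY_THRESHOLD_KW = 4.0  # per-rack average threshold used in this paper

def is_high_density_zone(rack_loads_kw):
    """True if the average per-rack load meets or exceeds the threshold."""
    avg = sum(rack_loads_kw) / len(rack_loads_kw)
    return avg >= HIGH_DENSITY_THRESHOLD_KW

row = [6.5, 8.0, 3.5, 12.0, 5.0]    # one row of five racks, kW each
print(sum(row) / len(row))           # 7.0 kW average
print(is_high_density_zone(row))     # True
```

Note that the definition uses the row average, so a lightly loaded 3.5 kW rack can still sit inside a high-density zone.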
High-density zone compared to "spreading out" strategy
Although today's IT equipment operates at high power density (that is, each individual server draws a high
amount of power), this does not always mean such devices must be deployed in a high-density manner by
packing them together in a rack. In fact, a popular strategy has been to spread out high-density servers by
installing fewer in each rack. If the equipment is dispersed like this, the data center's average power density
will likely stay in the range that the data center was originally designed for. In this way, a variety of technical
problems can be avoided.
However, the "spreading out" strategy may not be viable for a number of reasons:
- Consumption of additional floor space, which may be difficult to justify or simply not possible
- Executive management perception that partially-filled racks are wasteful
- Increased cabling costs (because of longer runs)
- Increased cost and difficulty of maintenance, since cabling and mounting may be intertwined with
other equipment in a nonstandard manner, scattered throughout the room
- Reduced electrical efficiency of the data center, because cooling-system air paths are longer and
less well targeted; this increases mixing of hot and cool air, which results in lower return
temperature to the air conditioner (see sidebar, "Why shorter air paths increase data center
efficiency")
For these reasons, it is expected that data center operators
will begin to deploy IT equipment at its full density capability,
in zones, rather than try to stay within an overall room
power density by spreading out the load. With new power
and cooling technologies, there is now a significant
efficiency gain from concentrating high-density
equipment into zones.
This paper assumes the choice has been made to deploy
high-density IT racks in a low-density data center. Row-based
cooling, as a technique to implement these high-density
zones, is presented as a simple solution for
addressing high-density power and cooling issues in both
existing and new data centers. For more information regarding alternatives for deploying high-density
equipment, including the option of spreading out IT equipment, see APC White Paper #46, "Cooling
Strategies for Ultra-High Density Racks and Blade Servers".
The Problem: Unmanaged High Density
Traditional data center design uses a raised floor to distribute cooling to low-density IT equipment (Figure
2a). However, when high-density equipment is randomly installed throughout a low-density data center, the
cooling stability is upset and hot spots begin to appear (Figure 2b).
Data centers designed for low-density racks (typically 1-2 kW / rack) vary dramatically in construction.
Ceiling heights, raised floor depths, room geometry, power distribution, and raised floor obstructions are all
quite different. In addition, IT managers vary in how they define a high-density rack. This paper defines a
high-density rack as 4 kW or higher. Regardless of which number is used to denote a high-density rack, the
following deployment issues need to be considered:
- Delayed server deployment: uncertainty about which rack can cool a newly provisioned server
forces a cooling assessment, adding to an already long deployment cycle
- Unplanned downtime: due to overloaded power distribution circuits or thermal shutdown of IT
equipment
- Unpredictable cooling throughout the data center: no certainty that every high-density server will be
properly cooled after every move, add, or change
- Loss of cooling redundancy: as more high-density racks are added, air conditioning units that were
once redundant are now required to supply the concentrated airflows
- Difficult power monitoring: some subsystems are extremely impractical or costly to instrument for
power consumption (for example, PDUs, due to the number of output connections, or switchgear)
Fortunately a solution exists that can neutralize these issues and is discussed in the following sections.
Placing high-density racks in an isolated, standardized self-contained area of the data center provides a low
cost, viable solution to the challenges mentioned above. This high-density zone avoids dependence on
the unpredictable nature of raised-floor cooling and does not require complex computational fluid dynamics
(CFD) analysis prior to installation.
Figure 3 illustrates three high-density zone implementation methodologies, all of which are capable of
supporting independent power distribution, UPS, and cooling systems. This "drop-in" solution eliminates the
hot spots in Figure 2b by simply moving high-density equipment into the zone. The heat generated from the
high-density IT equipment within this zone is rejected to the outdoors with no impact to the existing data
center cooling system or the surrounding low-density IT racks. In fact, the zone acts as its own high-density
data center within an existing low-density data center and is thermally "invisible" to the rest of the room.
What is "room neutral"?
High-density zones operate on the idea of isolating server exhaust heat and directing all of that heat into the
air conditioner intakes, where the air is first cooled before being redistributed to the front of the servers. By
isolating both hot and cold air streams, the high-density zone neutralizes the thermal impact that high-density
IT racks would otherwise have on traditional low-density data centers. In other words, the zone
presents a room neutral load to the existing data center cooling system.
Although this paper focuses on the cooling of high-density zones, it is also possible to power a zone with its
own dedicated UPS and power distribution. This may be desirable in situations where the existing data
center UPS is at capacity or is being phased out due to end-of-life or when targeted power availability is
required for a specific zone.
The system in Figure 4 integrates a cluster of high-density IT racks with a high-density row-based cooling
system and high-density UPS and power distribution system in a pre-manufactured, pre-tested zone.
Row-based cooling architecture
A row-based cooling architecture makes it possible to have a room-neutral high-density zone. Row-based
cooling is an air distribution approach in which the air conditioners are dedicated to a specific row of racks.
This is in stark contrast with room-based cooling where perimeter air conditioners are "dedicated" to the
entire room. Row-based air conditioners may be installed above IT racks, adjacent to IT racks, or in
combination. An example of a row-based air conditioner is shown in Figure 5.
While most facilities and IT personnel understand the basic idea behind high-density zones, they question
how the zone can be "room neutral" in the midst of constant moves, adds, and changes. Considering their
past experience with the variability and at times perplexing nature of raised-floor cooling, skepticism toward
the long-term predictability of high-density
zones is not surprising.
Though raised floors and high-density
zones are both governed by the same
laws of fluid dynamics and
thermodynamics, one major aspect
sets them apart: standardization.
If raised floors were standardized so
that they all had the same depth, same
dimensions, same under floor
obstructions, same under floor airflow
pattern, same CRAC locations, and
same air leakage from tile cutouts,
they could more easily be modeled in
real time so as to predict their behavior
using design and planning software
tools. If this standardization existed, IT
managers would be able to predict the
cooling impact of adding a blade
chassis to a particular rack and make rational decisions based on the prediction. However, these raised
floor attributes by their very nature are customized and are not conducive to standardization. Furthermore,
the variability of all these attributes would make real-time computational fluid dynamics (CFD) modeling
nearly impossible in a typical data center.
In contrast, high-density zones use standardized hot / cold aisle widths, rack height, and air path distances
to the rack. Row-based cooling also eliminates the variability introduced by the raised floor. These
simplifications make it possible to design predictable high-density zones using standardized tools. These
design tools provide the confidence that any design will capture and neutralize the expected amount of hot
exhaust air. For more information on the row-based cooling architecture, and how it compares to room-based
cooling, see APC White Paper #130, "The Advantages of Row and Rack-Oriented Cooling
Architectures for Data Centers."
Zone Containment Methods
Server exhaust heat can be diverted back to the air conditioners in three ways: uncontained, hot aisle
containment, and rack air containment (see Figure 5). All of these methods leverage a row-based cooling
concept (i.e., the air conditioner is brought within a few feet of the IT rack).
1. UNCONTAINED
Uncontained zones rely on the standard layout and widths of
the common hot aisle and cold aisle arrangement to keep hot
and cold air streams from mixing. For this reason, uncontained
zones depend on multiple racks in a row and are not effective
in cooling stand-alone IT racks. The hot and cold aisles formed
by rows of racks (and in some cases walls) are what isolate the
hot and cold air streams as illustrated in Figure 6. The closer
an IT equipment rack is to a row-based air conditioner, the
greater the amount of exhaust air that is captured and cooled.
As the distance between the IT rack and the row-based air
conditioner increases, the more the hot exhaust air mixes with
the surrounding air in the data center.
Importance of blanking panels
Proper row-based cooling depends on the
isolation of hot and cold air streams. If any of
the vertical space in a rack is not filled by
equipment, the gaps between equipment
allow hot exhaust air to flow through the rack
and to the front of equipment such as
servers. This mixing between the hot and
cold air streams reduces the effectiveness of
row-based cooling. For more information see
APC White Paper #44, "Improving Rack
Cooling Performance Using Airflow
Management™ Blanking Panels."
When to use this method
- When IT racks designated for the zone are moved and relocated frequently
- When IT racks from a variety of different vendors are used
Considerations for this method:
- More row-based air conditioners are required at lower densities in order to properly capture hot
exhaust air from all IT racks
Why shorter air paths
increase data center efficiency
With traditional room-based perimeter cooling, cool supply air must travel further to each rack, requiring extra fan power. In
addition, if there is no raised floor there can be substantial mixing of the cool supply air with warm exhaust air in the room,
which will require lowering the supply temperature far below what is needed at the IT racks. Hot spots can force lowering the
supply temperature even further to control overheating. Too-low supply temperature can risk condensation on the cooling
coil, which results in wasteful dehumidification-rehumidification and reduction of system cooling capacity. Long return paths
similarly can cause air mixing, which lowers the return temperature to the cooling unit. Lower return temperature slows the
rate of heat transfer to the coil, so heat is removed less efficiently.
The much shorter air paths in row-based cooling dramatically lessen mixing of supply and return air (and with containment
and blanking panels, virtually eliminate mixing). On the supply side, this allows operation at a higher coil temperature, which
takes less chiller energy to maintain and is much less likely to cause wasteful condensation. On the return side, it produces
a higher return temperature, which increases the heat removal rate. Compared to long-path cooling systems, these
combined short-path effects (1) increase operational cooling efficiency and (2) increase the cooling capacity of the heat
removal system.
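The return-temperature effect described in the sidebar can be sketched with the standard sensible-cooling relation (heat removed is roughly the volumetric heat capacity of air times airflow times the supply-to-return temperature difference). The airflow and temperatures below are assumed for illustration only:

```python
# Illustrative sketch, not from the paper: at a fixed airflow, the sensible
# heat an air conditioner removes scales with the supply-to-return split.
# For air near sea level, rho * cp is roughly 1.21 kJ per cubic metre per K.

AIR_VOL_HEAT_CAPACITY = 1.21  # kJ/(m^3*K)

def sensible_capacity_kw(airflow_m3_s, supply_c, return_c):
    """Approximate heat removal (kW) at a given airflow and temperatures."""
    return AIR_VOL_HEAT_CAPACITY * airflow_m3_s * (return_c - supply_c)

# Same 5 m^3/s of airflow and 20 C supply air in both cases:
long_path = sensible_capacity_kw(5.0, 20.0, 28.0)   # mixed, cooler return air
short_path = sensible_capacity_kw(5.0, 20.0, 35.0)  # hot, unmixed return air
print(long_path, short_path)  # the hotter return removes far more heat
```

The same coil and fans remove nearly twice the heat when mixing is eliminated and the return air arrives 7 degrees hotter, which is the efficiency and capacity gain the sidebar describes.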
2. HOT AISLE containment
Hot aisle containment zones are identical to uncontained zones except that the hot aisle in every
pair of rows is contained. The hot aisle becomes the hot exhaust channel by enclosing it with ceiling panels
and a door at each end of the aisle (Figure 7). In addition, the racks' rear doors are removed. The hot
exhaust air is physically contained and unable to mix with the ambient data center air. A wall or another row
of racks is required to form a cold aisle in order to isolate the cold supply air.
When to use this method
- In cases where floor space must be conserved. This method is popular because it consumes the
same space as two rows of low-density racks.
- In data centers with hot aisle / cold aisle layouts
Considerations for this method:
- Hot aisle containment panels increase capital cost
- Hot aisle containment may exceed work environment policies due to high temperature
- Incompatible with some types of cabling, power strips, labels, and other materials that are not rated
for high temperatures
- Not possible with a single row of racks
- The authority having jurisdiction (AHJ) may require fire suppression in the hot aisle
3. RACK containment
Rack containment (also called rack air containment) is similar to hot aisle containment except that the hot
exhaust air is contained using the back frame of the equipment racks and a series of panels to form a rear
air channel. This channel can be attached to a single IT rack or to a row of racks (Figure 8). The panels
used to create the hot exhaust air channel increase the depth of a normal rack by 20 cm (8 in). An optional
series of front panels may be used on rack containment arrangements that require complete containment of
hot and cold air streams as shown in Figure 9. This optional front containment adds an additional 20 cm (8
in) to the depth of the rack.
When to use this method
- In cases where hot aisle containment is the preferred method, but a single odd row is left
- When frequent access to and easy management of communication cables is required
- For complete isolation in cases such as stand-alone open data center environments or mixed
layouts (only when optional front containment is used)
- In wiring closets that lack any form of cooling, exposing high-density equipment to high
temperatures (only when optional front containment is used)
- When sound attenuation is required (only when optional front containment is used)
Considerations for this method:
- Front and rear containment panels increase capital cost
- In a single rack configuration, cost increases substantially when cooling redundancy is required
Why NOT use containment?
It may appear that containment would be the
clear choice for any row-based cooling
scenario. However, this is not always the case.
With row-based cooling, containment is
more important at lower densities, where the
ratio of IT racks to air conditioners is higher.
The higher this ratio the greater the distance
between IT racks and air conditioners, with
more chance for hot exhaust air to "escape."
Higher densities, on the other hand, mean a
lower ratio of IT racks to air conditioners,
with shorter air paths and less chance for hot
exhaust to escape; in this case,
containment is less essential because
airflow is tightly targeted and tends to
"behave" all by itself.
In addition, there may be practical
considerations that rule out containment,
such as higher cost at certain rack power
densities, company restrictions on hot work
environments (i.e., a contained hot aisle),
and incompatibility with existing racks.
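The sidebar's ratio argument can be made concrete with a rough calculation; the 17 kW per-cooler capacity below is an assumed figure for illustration, not an APC specification:

```python
import math

# Hypothetical sketch: how many row-based coolers a 10-rack row needs at two
# densities, and the resulting racks-per-cooler ratio. A higher ratio means
# longer average air paths, which is where containment matters most.

RACKS = 10
UNIT_CAPACITY_KW = 17.0  # assumed usable capacity per row-based cooler

def coolers_needed(rack_kw):
    """Coolers required for the row at a given per-rack density (kW)."""
    return math.ceil(RACKS * rack_kw / UNIT_CAPACITY_KW)

for rack_kw in (4.0, 12.0):
    n = coolers_needed(rack_kw)
    print(f"{rack_kw} kW/rack: {n} coolers, {RACKS / n:.2f} racks per cooler")
```

At 4 kW per rack each cooler serves more than three racks, so exhaust air has farther to travel and containment helps; at 12 kW per rack the ratio falls near one-to-one and airflow is tightly targeted without it.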
Additional High-Density Zone Benefits
The decision on whether to move forward with deployment of a high-density zone should also consider the
following benefits:
- Standardization of design elements
- Compatibility with any data center, new or existing
- Configurability with dedicated UPS and power distribution
- Configurability with any level of redundancy
- Configurability with any number of IT racks
Standardized design elements
In order for high-density zones to provide predictable performance they must include standard design
elements. This includes components such as air conditioners, power distribution, UPS, and racks. In
addition, standard dimensions play a key role in predictably isolating hot and cold air flows. Standard
dimensions include hot / cold aisle widths, rack height, and standard (short) airflow travel distances.
Modularity is also a benefit of standardization and allows high-density zones to be quickly deployed, altered
over time, and even moved to another data center. Standardized components and dimensions greatly
simplify the design process. These pre-designed standard solutions may even be re-ordered for other data
centers. Data center personnel can also leverage standardization by deploying predictable capacity and
change management software that maintains the peak performance of the high-density zone (this is
discussed later). For more information on standardization see APC White Paper #116, "Standardization and
Modularity in Network-Critical Physical Infrastructure".
Compatible with any data center, new or existing
High-density zones are modular and independent of room-based cooling architectures and existing UPS
architectures. Therefore, few constraints exist to prevent their deployment in new or existing data centers.
Sufficient floor space must be available and the floor must have enough weight-bearing capacity. All other
aspects of a standardized high-density zone are replicable in multiple types of data centers.
Configurable with dedicated UPS and power distribution
The architecture of the high-density zone allows for deployment of zone-specific UPS and PDU
configurations in cases where the existing data center UPS is at capacity or is being phased out due to
end-of-life. These systems are rack-based and designed to be modular and scalable.
Configurable with any level of redundancy
Redundancy levels vary depending upon the criticality of the IT assets. Traditional data center design is
such that the entire physical infrastructure is built to satisfy the redundancy requirements of the most critical
set of assets. This type of design is extremely expensive both from a capital cost and operational cost
perspective. A much more cost-effective design is to provide redundant power and cooling only where and
when required. High-density zones allow for this targeted redundancy / availability approach by including
redundant power and cooling modules when appropriate. Note that the core infrastructure such as chilled
water piping and electrical service entrance must be designed and built on day one with the highest
redundancy level required.
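As a sketch of this targeted approach, module counts for a zone can be sized as N units to carry the load plus the spares its criticality demands; the load and per-unit capacity below are assumed values, not from the paper:

```python
import math

# Illustrative sketch: size cooling (or UPS) modules per zone rather than
# building the whole room to the highest redundancy level.

def units_required(zone_load_kw, unit_capacity_kw, spares=1):
    """N units to carry the load, plus `spares` redundant units (N+1, N+2...)."""
    n = math.ceil(zone_load_kw / unit_capacity_kw)
    return n + spares

print(units_required(80.0, 17.0, spares=0))  # non-critical zone: N only
print(units_required(80.0, 17.0, spares=1))  # critical zone: N+1
```

Only the critical zone pays for the spare module, while the core chilled water piping and service entrance are built once to the highest level, as noted above.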
Configurable with any number of IT racks
High-density zones are scalable in that they accommodate the number of IT racks required at a specific
power density. These zones can range in size from a single IT rack to 20 or more racks depending on local
requirements.
Combining these characteristics results in a highly flexible high-density solution that can extend the life of a
legacy data center and postpone the capital outlay required for building a new one.
In-House vs. Vendor-Assisted Deployment
The data center owner has two options for the deployment of high-density zones: in-house deployment or
vendor-assisted deployment. In both cases a solid project plan is required. More specific information
regarding data center projects and system planning is available in APC white papers #140, "Data Center
Projects: Standardized Process", and #142, "Data Center Projects: System Planning."
IT managers can easily deploy smaller-sized zones or smaller data centers (fewer than 20 racks) with no
previous experience. A worksheet and checklist are provided in Appendix A. This worksheet can serve as a
helpful guide and facilitates the collection of information required to specify and deploy a high-density zone.
The worksheet assumes the project owner has knowledge of the IT equipment associated with the planned
high-density zone (e.g. total power requirements, plug requirements, rack U-height requirements, and
communications cabling requirements).
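As a sketch only, the kind of inputs the worksheet gathers could be captured in a simple record; the field names and values below are hypothetical, not the actual Appendix A fields:

```python
from dataclasses import dataclass, field

# Hypothetical record of worksheet inputs for a planned high-density zone.

@dataclass
class ZoneWorksheet:
    total_power_kw: float                            # total IT power requirement
    plug_types: list = field(default_factory=list)   # plug requirements
    rack_u_height: int = 42                          # rack U-height requirement
    cable_runs: int = 0                              # communications cabling runs

ws = ZoneWorksheet(total_power_kw=48.0,
                   plug_types=["IEC C19"],
                   rack_u_height=42,
                   cable_runs=24)
print(ws.total_power_kw / 12)  # average kW/rack if spread over 12 racks
```

Collecting these values up front is what lets the containment method and component choices that follow be made on numbers rather than guesswork.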
If the worksheet is properly filled out, an educated decision can be made on which zone containment
method to choose. APC TradeOff Tool™ #10, "Data Center InRow™ Containment Selector", (see Figure
10) can help select the most appropriate zone containment method. The results generated by the tool are
based on typical scenarios and in some cases the recommended containment option may differ from the
actual final design.
Once a containment type is chosen, a decision must be made on which components the zone will include.
The worksheet helps data center staff determine whether to include a dedicated UPS, PDU, or chiller. In
some cases, certain preferences and constraints dictate which components are included in a zone and
which are not. Table 3 provides a list of possible constraints that could affect the ultimate configuration of
the high-density zone.
Even with the constraint of no spare UPS, chiller, or power distribution capacity, it is still possible to extend
the life of an existing data center by installing a high-density zone with its own dedicated power and cooling
resources. For example, the high-density zone in Figure 11 includes its own chiller plant, UPS, and power
distribution. It is assumed that the data center's electrical service entrance has sufficient spare capacity to
supply power to this packaged solution. In cases where a data center has run out of spare electrical service
capacity, a decision must be made to install additional utility feeds or build a new data center. Other
factors beyond the scope of this paper such as available floor space, virtualization potential, business
objectives, leasing contracts, and future growth plans factor into the buy-or-build decision.
From the time a need for a high-density zone is identified, IT and facilities personnel can expect to populate
the racks in a given zone in one to three months, assuming the required budget is approved. However,
internal company processes may extend the proposed timeline.
Although it is possible for data center staff to deploy high-density zones without outside assistance, projects
involving data centers with 20 or more racks can be considerably more complex. In such cases consultation
with design experts and project managers is recommended.
Vendor-assisted deployment usually begins with an assessment of the existing data center or the design
plans for a new data center. In either case an assessment provides the design experts with valuable
information, including preferences and constraints, which allows optimum design decisions. Assessments
help answer questions such as:
- Can an existing row be retrofit with row-based air conditioners to avoid downtime?
- If spare chilled water capacity is unavailable should a self-contained air conditioning unit be used as
opposed to a packaged chiller?
- What steps can be taken to increase the speed of deployment of a future high-density zone?
An effective assessment (such as APC's Blade Server Readiness Assessment) measures spare bulk power
and cooling capacity as well as spare distribution capacity. Bulk cooling capacity is measured at the chiller
while the distribution capacity is measured at the CRAH units on the data center floor. This data provides an
estimate of cooling capacity and compares constraints against current and future requirements. Ultimately
this will help answer the question, "When will I run out of cooling capacity and require a high-density zone?"
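Under a simplifying assumption of linear load growth, that question reduces to a one-line runway estimate; the figures below are illustrative, not assessment data:

```python
# Hedged sketch: months of spare cooling capacity left, assuming the IT load
# grows linearly. A real assessment would also weigh distribution capacity.

def months_until_exhausted(spare_kw, growth_kw_per_month):
    """Rough runway before a high-density zone becomes necessary."""
    if growth_kw_per_month <= 0:
        return float("inf")  # flat or shrinking load never exhausts capacity
    return spare_kw / growth_kw_per_month

# e.g., 60 kW of spare CRAH capacity and 5 kW of new IT load per month
print(months_until_exhausted(60.0, 5.0))  # 12.0 months of runway
```

The same calculation run separately for bulk (chiller) and distribution (CRAH) capacity reveals which constraint binds first.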
After measuring and analyzing the data, a plan is created to meet future high-density needs. In the end, an
effective design plan for mixed-density data centers should incorporate power, cooling and floor space
utilization efficiency. An effective design plan allows a data center to use up its power, cooling, and space
resources all at the same point in the future, thereby avoiding stranded resources.
Real-Time Management of High-Density Zone
The architecture of row-based cooling makes real-time modeling of cooling performance possible. Design
tools can configure racks, row-based air conditioners, UPS, and power distribution based on high-density
zone specifications such as average and peak power density per rack, containment, redundancy, and plug
types. Once a high-density zone is deployed, real-time planning and management tools allow IT personnel
to maintain predictable operation even after moves, adds, and changes take place. Examples of appropriate
design and planning tools include APC's InfraStruXure Designer and APC's Capacity and Change Manager.
For more information on management and its critical role in predictable performance, see APC White Paper
#150, "Power and Cooling Capacity Management for Data Centers".
In the past it was a major challenge for IT personnel to successfully deploy a mix of high-density and low-density
equipment in the same data center space. Traditional data centers were specified to cool a uniform
rack power density and were not capable of predictably cooling a large number of high-density racks. Now
architectures such as row-based cooling allow for the rapid deployment of high-density zones within an
existing or new low-density data center. Modular row-oriented power and cooling can now be added where
and when high-density racks are required, without any effect on the existing room-level infrastructure. In
combination with capacity and change management systems, zones offer a high-density deployment
solution capable of maintaining a room-neutral and predictable operation even after moves, adds, and
changes.
About the authors
Neil Rasmussen is the Senior VP of Innovation for APC. He establishes the technology direction for the
world's largest R&D budget devoted to power, cooling, and rack infrastructure for critical networks. Neil is
currently working to advance the science of high-efficiency, high-density, scalable data center infrastructure
solutions and is the principal architect of the APC InfraStruXure® system.
Prior to founding APC in 1981, Neil received his Bachelor's and Master's degrees from MIT in electrical
engineering, where he did his thesis on the analysis of a 200 MW power supply for a tokamak fusion reactor.
From 1979 to 1981, he worked at MIT Lincoln Laboratories on flywheel energy storage systems and solar
electric power systems.
Victor Avelar is a Strategic Research Analyst at APC. He is responsible for research in data center design and
operations and consults with clients on risk assessment and design practices to optimize the availability of
their data center environments. Victor holds a Bachelor's degree in Mechanical Engineering from
Rensselaer Polytechnic Institute and an MBA from Babson College. He is a member of AFCOM and the
American Society for Quality.