DATA CENTER EFFICIENCY
Hyperscale architecture is only one
of the ongoing trends within data
center design. Focus continues on
improving the efficiency of data
centers, especially in cooling and
power consumption. Immersion
cooling is one area of progress;
while air as a cooling medium is
relatively risk free, fluids such
as water can transfer heat from
a surface roughly 24 times faster
and can store more heat within
an equivalent volume. Immersion
cooling has taken a few different
forms. In 2018, Microsoft began
testing the concept of submerging
a data center built into a shipping
container. The goal was to
evaluate the viability of deploying
these types of data centers near
the coastlines of population centers.
Measures such as cooling with the
surrounding seawater reduced
power consumption, with the
potential to run solely on power
generated from wind and tidal forces.
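The earlier claim that water stores far more heat than an equivalent volume of air can be illustrated with a back-of-the-envelope volumetric heat capacity comparison (a sketch; the property values are approximate textbook figures near room temperature):

```python
# Compare how much heat water and air can store in the same volume
# (volumetric heat capacity = density x specific heat).
# Property values are approximate textbook figures near 20 C.

WATER_DENSITY = 997.0   # kg/m^3
WATER_CP = 4186.0       # J/(kg*K), specific heat of water
AIR_DENSITY = 1.204     # kg/m^3
AIR_CP = 1005.0         # J/(kg*K), specific heat of air

water_vol_cp = WATER_DENSITY * WATER_CP  # J/(m^3*K)
air_vol_cp = AIR_DENSITY * AIR_CP        # J/(m^3*K)

print(f"Water: {water_vol_cp:,.0f} J per m^3 per K")
print(f"Air:   {air_vol_cp:,.0f} J per m^3 per K")
print(f"Ratio: {water_vol_cp / air_vol_cp:,.0f}x")
```

The ratio comes out in the thousands, which is why even modest liquid flow rates can carry away heat loads that would require enormous volumes of air.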
Liquid cooling is not just for the
data center building or external heat
exchangers, since rack and server
level cooling has also emerged.
While the presence of liquid
within any structure that relies
heavily on electrical power
is often considered a risk,
traditional air cooling loses
efficiency as densities rise, to the
point that cooling 60 to 70 kW per
rack with air alone can
become extremely difficult. While
data centers are not yet seeing this
level of load, densities are increasing
in both typical and hyperscale
data centers.
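The difficulty of air cooling at these densities can be shown with a simple flow-rate sketch based on Q = m * cp * dT (the 60 kW load and 15 K temperature rise are illustrative assumptions):

```python
# Rough sizing sketch: coolant flow needed to remove 60 kW from a rack
# with a 15 K coolant temperature rise (Q = m_dot * cp * dT).
# The heat load and temperature rise are illustrative assumptions.

Q = 60_000.0   # W, assumed rack heat load
DT = 15.0      # K, assumed coolant temperature rise

AIR_CP, AIR_DENSITY = 1005.0, 1.204      # J/(kg*K), kg/m^3
WATER_CP, WATER_DENSITY = 4186.0, 997.0  # J/(kg*K), kg/m^3

air_mass_flow = Q / (AIR_CP * DT)                  # kg/s
air_vol_flow = air_mass_flow / AIR_DENSITY         # m^3/s
water_mass_flow = Q / (WATER_CP * DT)              # kg/s
water_vol_flow = water_mass_flow / WATER_DENSITY   # m^3/s

print(f"Air:   {air_vol_flow:.2f} m^3/s ({air_vol_flow * 2118.88:.0f} CFM)")
print(f"Water: {water_vol_flow * 1000:.2f} L/s")
```

Under these assumptions, air cooling needs several cubic meters of airflow per second through a single rack, while water does the same job with under a liter per second.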
24 | ICT TODAY
Liquid cooling is currently used
to cool the central processing units
(CPUs) within a server; the CPU is the
part of the server that generates
a significant amount of heat and
is most affected by it. Unlike full
immersion, liquid is only used in
conjunction with the CPU heat
sinks, allowing other elements
to run within the ambient airflow.
Liquid cooling does not necessarily
require chilled liquid; warm water
(e.g., 30 to 40 °C [86 to 104 °F])
still provides more than adequate
heat absorption while supporting
traditional heat exchangers.
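A quick sketch shows why warm water suffices; the supply temperature, return temperature, and flow rate below are illustrative assumptions, not figures from any particular deployment:

```python
# Sketch: heat absorbed by a warm-water cooling loop (no chiller).
# Supply/return temperatures and flow rate are illustrative assumptions.

WATER_CP = 4186.0  # J/(kg*K), specific heat of water
supply_c = 35.0    # C, warm supply water (within the 30-40 C band)
return_c = 45.0    # C, return water after the CPU cold plates
flow_kg_s = 0.5    # kg/s (about 0.5 L/s) assumed loop flow rate

heat_removed_w = flow_kg_s * WATER_CP * (return_c - supply_c)
print(f"Heat removed: {heat_removed_w / 1000:.1f} kW")
```

Even with no chilled supply, half a liter per second and a 10 K rise removes on the order of 20 kW, and the warm return water is hot enough for a conventional heat exchanger to reject outdoors.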
Liquid cooling, while expected
to be a significant part of the data
center industry, is not the only
endeavor to decrease costs. Originating
from Facebook’s research and
development, the Open Compute
Project (OCP) is an effort modeled
on open-source software where
participants provide and share
information about data center issues.
Broken into a number of subject
areas, including equipment and
infrastructure, OCP examined what
was needed to meet its objectives,
leaving little unchallenged. Since
its incorporation in 2011, OCP
concepts have been increasingly
adopted in hyperscale designs, as
changes to the physical dimensions
of servers allowed for increased heat
transfer potential. Because these
new dimensions were physically
incompatible with standardized
EIA/ECA-310-E racks, they prompted
a re-envisioning of the ubiquitous
telecom rack.
Other areas of OCP focus on
electrical power distribution.
Progress has been made on
refining dc infrastructure to
avoid the power losses and
heat generated when converting
traditional ac power into dc for
IT equipment and batteries. These
developments triggered a review
of the building infrastructure,
which included a look at pathway
sizes. As rack heights have
increased, wider racks and the
option to move fully loaded
racks into place may require larger
pathways capable of handling
the increased loading.
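The conversion-loss argument above can be quantified with a simple efficiency-chain sketch; every efficiency figure below is an assumption chosen for illustration, not a measurement from any product or facility:

```python
# Illustrative efficiency-chain comparison of ac vs. dc distribution.
# All efficiency figures are assumptions for this sketch, not
# measurements from any particular product or facility.

IT_LOAD_W = 100_000.0  # W delivered to IT equipment

# Assumed traditional ac chain: double-conversion UPS (94%),
# then an ac/dc power supply in each server (92%).
ac_chain = 0.94 * 0.92

# Assumed dc chain: one facility-level rectifier (96%)
# feeding dc directly to the IT equipment and batteries.
dc_chain = 0.96

ac_input = IT_LOAD_W / ac_chain
dc_input = IT_LOAD_W / dc_chain

print(f"ac draw: {ac_input / 1000:.1f} kW, "
      f"loss {(ac_input - IT_LOAD_W) / 1000:.1f} kW")
print(f"dc draw: {dc_input / 1000:.1f} kW, "
      f"loss {(dc_input - IT_LOAD_W) / 1000:.1f} kW")
```

Under these assumed efficiencies, the dc chain dissipates roughly a third of the heat of the ac chain for the same IT load, which is the motivation for refining dc distribution.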
THE FOG ROLLS IN
Much like any weather system,
the cloud continues to move and
change according to its surroundings.
Where the cloud meets the horizon
line, some notice a new formation,
one caused by the continued
development of IoT. IoT is no longer a new
concept, and much has been written
about it. However, IoT’s impact is
much larger and continues to be
a force for change. IoT is often
viewed simply as the connections
between devices and their
respective systems.
Regardless of the actual definition, edge data
centers will be required, either as new construction
or possibly created from smaller existing enterprise
or colocation data centers.