These advancements and growth
occurred in the course of about
eight years and led to the realm of
the hyperscale data center. While the
term “hyperscale” often connotes
a very large computer room area
in excess of 9,000 m2 (100,000 ft2),
hyperscale may also refer to a data
center with as few as 5,000 servers
and 929 m2 (10,000 ft2) that can
provide high-volume traffic and the
ability to handle heavy computing
workloads for organizations that
run most of their applications
in the cloud.1 Granted, additional
space helps accommodate greater
volume and allows proper selection
of equipment and components to fit
specific use case requirements,
but running a data center is about
much more than space. The data
center, regardless of size, must
also be operationally efficient.
May/June 2019 I 23
Change may seem justifiably
overwhelming for data center
operators, engineers, designers,
consultants, project managers,
and installers who must
simultaneously attend to their
daily responsibilities while keeping
up with the large and continuous
influx of new technologies and
paradigms. The new ANSI/BICSI
002-2019, Data Center Design and
Implementation Best Practices
standard provides assistance
and a path forward amid ongoing
changes and trends in the age
of the Internet of Things (IoT),
big data and ICT transformation.
A CLOUD IS BORN
In 2009, enterprise data centers
were seemingly all the rage.
Store the data within the
company, and provide it to staff as
needed. To most people, a reference
to “the cloud” meant something that
existed on the horizon that could
bring rain; those in data center
circles thought similarly. Except
to them, the cloud was bringing
a new paradigm. Shortly thereafter,
the cloud entered common use.
The cloud became an easy-to-use
metaphor to describe data that was
stored somewhere and waiting
to rain down on the user. Use
of the cloud grew quickly as some
companies saw financial benefits
in having “someone else” store
information, reducing internal
infrastructure and labor costs.
Other companies viewed the cloud
as the provider of much needed
redundancy for critical data systems.
As demand grew, so too did the
number and physical size
of data centers.
Data center construction followed
a steady growth pattern as new
applications drove people, and
certain regions of the globe,
to create and demand more data.
As construction trended
upward, technological
advancements performed
as historically expected,
shrinking the feature and
function sizes of key components
and allowing more powerful servers,
storage systems and network equipment
to reside in the same physical space.
Because the ability to send data
from point A to point B depends
on the pathway and space available,
not unlike automobile traffic
(Figure 1), advancements in
networking protocols allowed
networks to begin determining
optimum routes.
FIGURE 1: Data traffic is similar to vehicle traffic; both need available pathways
and space.
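As an illustration of how a network can determine an optimum route, link-state protocols such as OSPF compute least-cost paths with Dijkstra's shortest-path algorithm. The following is a minimal sketch in Python; the network topology and link costs shown are hypothetical, not drawn from the article.

```python
import heapq

def shortest_route(graph, source, dest):
    """Dijkstra's algorithm: find the lowest-cost path through a network.

    graph maps each node to a dict of {neighbor: link_cost}.
    Returns (total_cost, path), or (float('inf'), []) if unreachable.
    """
    # Priority queue of (cost so far, current node, path taken)
    queue = [(0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dest:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return float('inf'), []

# Hypothetical four-node network; costs might represent congestion or distance
network = {
    'A': {'B': 1, 'C': 4},
    'B': {'C': 1, 'D': 5},
    'C': {'D': 1},
}
cost, path = shortest_route(network, 'A', 'D')
# Prefers A -> B -> C -> D (total cost 3) over the direct but costlier links
```

Much like a navigation app steering a driver around a traffic jam, the algorithm avoids congested (high-cost) links when a cheaper pathway exists.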