Data centers are among the most energy-intensive facilities in the world. The most authoritative studies, which calculate energy usage from the bottom up by examining performance data from existing data centers rather than extrapolating from past totals, estimate that data centers consumed around 205 terawatt-hours (TWh) of electricity in 2018, or roughly one percent of global electricity use. That’s more than either Argentina (124 TWh) or Sweden (123 TWh) consumed as an entire country in 2020.
Considering that global internet traffic has increased ten-fold over the last decade, however, the fact that data center power usage only grew about six percent between 2010 and 2018 is a remarkable achievement. Even the most alarmist headlines about data center energy demands are frequently followed by reports that concede the industry as a whole has made tremendous strides in efficiency to keep power usage far lower than forecasts predicted.
Some of those strides are the result of innovations in server technology and the economies of scale provided by hyperscale facilities. But at the individual data center level, a new generation of AI-powered data center infrastructure management (DCIM) tools have played a critical role in helping colocation providers become far more efficient.
Understanding Data Center Power Usage Effectiveness (PUE)
One of the most important calculations for determining data center efficiency is power usage effectiveness (PUE), which measures how much energy the facility is actually using relative to its IT equipment requirements. A facility’s PUE is calculated using the following formula:
PUE = Total power consumed / IT energy requirements
If a data center consumes 100,000 kilowatts, but its servers, storage, and networking equipment only require 55,000 kilowatts, the facility has a PUE score of roughly 1.82. That means that for every kilowatt required to power the IT stack, the data center actually consumes an additional 0.82 kilowatts. The higher a facility’s PUE score, the more energy is required to keep it operational, which translates into higher costs being passed along to customers.
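As a quick sanity check, the PUE formula can be expressed in a few lines of Python. This is purely illustrative; the function name and units are our own, not part of any DCIM product.

```python
def pue(total_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    if it_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_kw / it_kw

# The worked example above: 100,000 kW total facility draw, 55,000 kW IT load.
print(round(pue(100_000, 55_000), 2))  # 1.82
```

A perfectly efficient facility would score 1.0, meaning every watt drawn goes to IT equipment; real-world scores above 1.0 reflect cooling, lighting, and power-distribution overhead.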
In the previous example, only a little over half of the facility’s total energy consumption goes toward its IT equipment. Where, then, is the rest of the power going? In most cases, it’s being consumed by the data center’s cooling resources.
Data Center Cooling Inefficiencies
Servers and other IT equipment generate massive amounts of heat, especially when processors are running at full capacity. Managing the heat discharged by this equipment is one of the primary challenges facing data center managers. A wide range of strategies, including hot/cold aisle deployments on the data floor, structured cabling, and cold-aisle containment systems, have been developed to improve air flow within the white space.
But while these measures are important, they often don’t address the key problem with cooling infrastructure. Many computer room air conditioner (CRAC) and computer room air handler (CRAH) units are not connected to an intelligent environmental system capable of making minute adjustments in real time to address hot spots and overcooling. Data center personnel frequently lack visibility into where cooling resources are needed. Rather than attempting constant manual adjustments whenever a server runs hot, they keep the temperature in the entire data room lower than necessary to ensure that nothing overheats and crashes before they can respond to a problem.
Unfortunately, this approach wastes valuable cooling resources. A server’s heat output can change drastically over the course of a day due to fluctuating traffic. It may need only half as much cooling during periods of low traffic, so running the cooling system at the level needed to accommodate a maximum IT load is incredibly inefficient. Without some way of identifying and responding to server load fluctuations across the entire data center environment, it’s understandable why data center managers take a “set it and forget it” approach to cooling, despite the high costs. Nobody has ever been fired for overcooling, but people have been fired for overheating.
Optimizing Cooling With Artificial Intelligence Solutions
Data centers can drastically improve energy efficiency by implementing tools that make it easier to monitor, control, and automate their cooling infrastructure. These systems utilize a combination of sensors and Artificial Intelligence to continuously monitor the data center environment and create a more accurate picture of cooling needs throughout the facility. The AI platform can then use this data to dynamically make fine-tuned adjustments as needed.
Machine learning tools can analyze temperature data over time to identify trends and model the optimal airflow patterns to eliminate hot spots and cold spots. This ensures that air conditioning units are running as efficiently as possible, which both saves energy and reduces mechanical wear and tear.
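To make the sense-and-adjust idea concrete, here is a deliberately simplified sketch of a dynamic cooling loop. Real platforms such as Vigilent use trained machine learning models over fleet-wide sensor data; this example uses only a basic proportional rule, and every name, setpoint, and gain value is a hypothetical assumption.

```python
# Illustrative only: a proportional sense-and-adjust rule for one CRAH unit.
# Real DCIM systems model airflow across the whole facility; the target
# temperature and gain below are assumptions, not vendor defaults.

TARGET_C = 25.0   # assumed desired server inlet temperature, in Celsius
MAX_FAN = 100.0   # fan speed expressed as a percentage

def adjust_fan_speed(current_fan: float, inlet_temp_c: float,
                     gain: float = 5.0) -> float:
    """Nudge a cooling unit's fan speed toward the temperature target."""
    error = inlet_temp_c - TARGET_C          # positive means running hot
    new_speed = current_fan + gain * error   # proportional correction
    return max(0.0, min(MAX_FAN, new_speed)) # clamp to the unit's range

# A hot rack gets more airflow; an overcooled one gets less.
print(adjust_fan_speed(60.0, 27.0))  # 70.0
print(adjust_fan_speed(60.0, 23.0))  # 50.0
```

The payoff of closing this loop automatically is exactly the point made above: cooling tracks the actual heat load minute by minute instead of being pinned at the worst-case setting.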
Improving Evoque’s Energy Efficiency with Vigilent delivered by BGIS
When Evoque began looking for a dynamic cooling solution that could help improve energy efficiency in its colocation data centers, we consulted with BGIS’ GCET Professional Services and ultimately chose the Vigilent Dynamic Cooling Management System. Leveraging the latest innovations in Internet of Things (IoT) sensors and AI applications, Vigilent’s integrated system consists of four interconnected components:
- Wireless IoT sensors distributed throughout the facility provide real-time visibility into heat levels and troublesome hot spots/cold spots. Information is gathered and stored continuously, generating valuable data for future analysis.
- Sophisticated machine learning algorithms use data gathered by the monitoring system to automatically adjust cooling resources to match the current heat load. Changes are made constantly and in real time, with no need for manual intervention.
- Powerful AI technology combs through monitoring system data to produce actionable insights. Interactions between cooling resources and heat load can be identified quickly, and airflow modeled to reflect ongoing trends.
Alarms & Notifications
Although Vigilent’s system is largely automated, there are cases where human intervention is required. In these situations, the system can alert data center personnel the moment a problem emerges to ensure a speedy response and continuous uptime. These events are commonly caused by IT equipment being added or removed, a cooling unit failure that reduces efficiency, or an object left near the racks that inhibits airflow to the servers.
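A minimal sketch of that kind of threshold-based alerting is shown below. The sensor names, temperature limit, and message format are all assumptions for illustration and do not reflect Vigilent’s actual API.

```python
# Hypothetical illustration of threshold-based alerting. The 32 C limit
# and sensor naming scheme are assumptions, not a vendor specification.

def check_sensors(readings: dict[str, float],
                  high_c: float = 32.0) -> list[str]:
    """Return an alert message for each sensor above the temperature limit."""
    return [f"ALERT: {sensor} at {temp:.1f} C exceeds {high_c} C"
            for sensor, temp in readings.items() if temp > high_c]

# One rack running hot, one within limits: only the hot rack raises an alert.
alerts = check_sensors({"rack-12-inlet": 33.5, "rack-14-inlet": 24.8})
print(alerts)  # ['ALERT: rack-12-inlet at 33.5 C exceeds 32.0 C']
```

In practice such checks would run continuously against the live sensor feed, with alerts routed to on-call staff rather than printed.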
Results and Plans for Expansion
After BGIS implemented Vigilent’s integrated solution in Evoque’s Lisle, IL facility just outside of Chicago, the results quickly proved the value of handing cooling control management over to a sophisticated AI system.
- 20% reduction in facility’s PUE
- 200 kWh per hour in energy savings (a 200 kW reduction in power draw)
- Reduced the number of CRAH units running from all 103 to an average of just 63
“Creating more efficient and sustainable data centers is at the core of Evoque’s strategy and values,” shared Evoque's Vice President of Data Center Engineering John Diamond. “The success from this first implementation of Vigilent is truly a win, win, win for our customers, for Evoque, and for the environment.”
After seeing such a tremendous impact, Evoque is installing Vigilent in all of its data center locations.
With multiple colocation facilities located in major markets alongside comprehensive cloud consulting services, Evoque Data Center Solutions provides the resources and expertise that can help organizations accelerate their digital transformation. To learn more about how our data center and cloud services can help you reduce your energy costs and manage your business more effectively, talk to one of our specialists today.
For more information about BGIS, visit the BGIS website.