Data Center Efficiency: Put A Cap On Lost Cooling Capacity

Posted by Gaw Technology on 2/26/2013

Green computing and efficiency are the big buzzwords in the data center industry, and for good reason. With both environmental and monetary costs at stake, it’s important to put energy efficiency on the radar. But there’s another factor that most data center owners and operators aren’t taking into account nearly as much as they should.

The “Biggest Loser”: Data Center Capacity

Like most data centers, yours may be falling victim to “lost capacity.” And this loss overshadows all others when it comes to your data center’s cost effectiveness.

Data center capacity is the amount of IT equipment a data center is designed to support, typically expressed as a power density in kW/sqft or kW/cabinet.

Data center industry experts estimate that 30% or more of data center capacity is lost in operation. What does this mean in the grand scheme of things? Globally, a minimum of 4.65 GW, out of 15.5 GW of available data center capacity, is unusable. That adds up to about 31 million sqft of wasted data center floor space and $70 billion of unrealized capital expense.
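
The arithmetic behind those figures is straightforward. Here’s a quick back-of-the-envelope check in Python; the average design density (~150 W/sqft) and build-out cost (~$15 per watt) are assumptions implied by the numbers above, not published constants.

```python
# Back-of-the-envelope check of the lost-capacity figures.
global_capacity_gw = 15.5          # available data center capacity worldwide
loss_fraction = 0.30               # estimated share of capacity lost in operation

lost_gw = global_capacity_gw * loss_fraction
print(f"Lost capacity: {lost_gw:.2f} GW")                            # ~4.65 GW

implied_density_w_per_sqft = 150   # assumed average design density
lost_sqft = lost_gw * 1e9 / implied_density_w_per_sqft
print(f"Wasted floor space: {lost_sqft / 1e6:.0f} million sqft")     # ~31 million

implied_cost_per_watt = 15         # assumed capital cost of built capacity, $/W
stranded_capex = lost_gw * 1e9 * implied_cost_per_watt
print(f"Unrealized capital expense: ${stranded_capex / 1e9:.0f}B")   # ~$70 billion
```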

These are big-time losses, but why isn’t anyone talking about them? And why are they happening?

First of all, lost capacity doesn’t get much attention because it isn’t a glaringly obvious, single line-item loss. It builds up over time through the fragmentation of resources.

The Main Culprit: Resource Fragmentation

Fragmentation occurs when the data center as actually built and operated no longer matches the original design -- an inefficient scattering of infrastructure resources like space, power, networking and cooling. The main contributor to this mismatch is an unrealistic, idealized prediction of power density and cooling capacity.

As this fragmentation continues, the data center loses its potential to support the intended IT load.
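
To make that concrete, here is a minimal sketch of how fragmentation strands capacity; the cabinet budgets and loads are hypothetical. The unused kilowatts scattered across cabinets add up to a healthy-looking total, yet no single cabinet can accept the next deployment.

```python
# Hypothetical per-cabinet power budgets and as-built loads (kW).
cabinet_budget_kw = 6.0
cabinet_loads_kw = [4.8, 5.1, 4.5, 5.3, 4.9, 5.0]
new_server_kw = 2.0                                   # the next planned deployment

headroom = [cabinet_budget_kw - load for load in cabinet_loads_kw]
total_headroom = sum(headroom)
cabinets_that_fit = [h for h in headroom if h >= new_server_kw]

print(f"Total unused capacity: {total_headroom:.1f} kW")        # looks healthy on paper
print(f"Cabinets that can take a {new_server_kw} kW server: {len(cabinets_that_fit)}")
# Result: several kW of "available" capacity, but nowhere to put the new load.
```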

The real kicker is that this slow-creeping problem isn’t spotted until long after the data center has reached operational maturity and the margin on capacity has closed. This blind spot -- the time lag between a problem’s cause and its discovery -- is what drives the loss of capacity.

Simulate And Visualize, Early And Often

Realistically predicting fragmentation -- by simulating how space, power and cooling will actually be used -- makes it easier to balance the short-term design plan against the long-term use of intended capacity.

This simulation isn’t a “one and done” deal. Quite the opposite: you should be simulating and visualizing resource fragmentation frequently, especially as IT modifications occur. It’s crucial that your data center be flexible -- able to adapt to change, easily and without fuss or fumble, many times during its lifespan.
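
As one illustration of that habit, the sketch below shows a simple pre-change check that could be re-run for every proposed IT modification. The cabinet names, budgets and loads are made up for the example.

```python
def check_proposed_change(cabinets, placements):
    """Return the cabinets whose power or cooling budget the proposed placements would exceed."""
    violations = []
    for name, added_kw in placements.items():
        cab = cabinets[name]
        projected = cab["load_kw"] + added_kw
        if projected > cab["power_budget_kw"] or projected > cab["cooling_budget_kw"]:
            violations.append((name, projected))
    return violations

# Hypothetical current state of two cabinets.
cabinets = {
    "A01": {"load_kw": 4.2, "power_budget_kw": 6.0, "cooling_budget_kw": 5.5},
    "A02": {"load_kw": 3.1, "power_budget_kw": 6.0, "cooling_budget_kw": 5.5},
}
proposed = {"A01": 1.5, "A02": 2.0}   # kW added by a planned deployment

for name, projected in check_proposed_change(cabinets, proposed):
    print(f"Cabinet {name} would run at {projected:.1f} kW -- over budget")
```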

Respect The Numbers

To predict the impact of proposed changes before implementation, integrate live data -- the numbers coming out of your DCIM systems -- with the most up-to-date IT configuration and roadmap information.

Combining that data with the simulation model gives you operational validation of IT service availability and long-term data center capacity -- the most important metrics of data center performance.
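
The sketch below shows one way that data plumbing might look: measured per-cabinet readings (which in practice would come from a DCIM export) are merged with the planned additions on the IT roadmap before the capacity model is evaluated. All names and numbers are illustrative.

```python
def project_loads(measured_kw, roadmap_kw):
    """Combine measured peak loads with planned roadmap additions, per cabinet."""
    cabinets = set(measured_kw) | set(roadmap_kw)
    return {c: measured_kw.get(c, 0.0) + roadmap_kw.get(c, 0.0) for c in cabinets}

# In practice these readings would come from a live DCIM export; hard-coded here.
measured = {"A01": 4.2, "A02": 3.1, "B07": 2.4}   # measured peak kW per cabinet
roadmap = {"A01": 1.5, "B07": 3.0}                # planned additions from the IT roadmap, kW

projected = project_loads(measured, roadmap)
for cabinet, kw in sorted(projected.items()):
    print(f"{cabinet}: projected {kw:.1f} kW")
# Validate the capacity simulation against these projected loads, not against
# nameplate ratings or original design assumptions alone.
```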

What You Should Expect

  • Increase in usable capacity: from the typical 60-70% to above 90%
  • Increase in IT service availability: made possible by assessing redundancy and environmental risks before any changes are implemented
  • Extended facility lifespan: brought about by strategic, proactive operational planning

Put a cap on capacity loss, simulate and crunch numbers early and often … and get the most out of your data center.

Need help assessing and improving the cooling capacity of your cabinets and other data center accessories? Our enclosure consultants are happy to help you optimize the airflow in your data center. Click the button below for a free enclosure consultation and keep your valuable equipment running at full-speed efficiency.
