Data center cooling is a crucial component of any IT environment. In fact, many IT and Facilities Managers cite this as their number one priority. This is not a shock! Server equipment is not only expensive, but it’s also the nerve center of any successful business. You simply -- and literally -- cannot afford to have your servers overheat.
Though computing equipment keeps getting smaller, it often draws as much electricity as the equipment it replaced, or even more. That means more heat is being generated in data centers.
In other words, it’s getting hot in there.
Today’s IT equipment can push data centers to 750 watts per square foot, and that growth in computing capability has driven corresponding increases in rack and room power densities. How to cool these higher-powered racks and server rack shelves is a question that challenges every data center manager.
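As a rough illustration of what those densities mean for cooling, the heat a rack gives off tracks its electrical load almost one-for-one, and the airflow needed to carry that heat away follows from the standard sensible-heat formula. The 8 kW rack and 20 °F temperature rise below are hypothetical figures chosen for the example, not measurements:

```python
def rack_heat_btu_per_hr(watts):
    """Convert electrical load (watts) to heat output; 1 W = 3.412 BTU/hr."""
    return watts * 3.412

def required_airflow_cfm(watts, delta_t_f=20.0):
    """Approximate airflow (CFM) needed to remove `watts` of heat at a given
    intake-to-exhaust temperature rise in deg F, using the sensible-heat
    relation CFM = BTU/hr / (1.08 * delta_T)."""
    return rack_heat_btu_per_hr(watts) / (1.08 * delta_t_f)

# Hypothetical 8 kW rack with a 20 deg F rise across the servers:
load_w = 8000
print(round(rack_heat_btu_per_hr(load_w)))  # ~27296 BTU/hr
print(round(required_airflow_cfm(load_w)))  # ~1264 CFM
```

Double the rack power and the required airflow doubles too, which is exactly why rising power density keeps pressure on cooling design.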
To avoid catastrophic equipment failures, data centers should be designed and operated to keep equipment within the recommended temperature range. Electronic equipment is built to tolerate the extremes of the allowable operating envelope. But remember: Prolonged exposure to temperatures outside the recommended range may reduce equipment reliability and longevity.
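The distinction between the two envelopes can be sketched in a few lines. The thresholds below follow ASHRAE TC 9.9's widely cited Class A1 figures (recommended 18–27 °C, allowable 15–32 °C); always confirm against your own equipment's specifications:

```python
# Envelope thresholds in deg C (ASHRAE TC 9.9 Class A1 figures; verify
# against your equipment's own specifications before relying on them).
RECOMMENDED = (18.0, 27.0)
ALLOWABLE = (15.0, 32.0)

def classify_intake(temp_c):
    """Classify a server intake temperature against the two envelopes."""
    if RECOMMENDED[0] <= temp_c <= RECOMMENDED[1]:
        return "recommended"
    if ALLOWABLE[0] <= temp_c <= ALLOWABLE[1]:
        return "allowable"  # tolerable, but reliability may suffer over time
    return "out of range"   # risk of equipment failure

print(classify_intake(22.0))  # recommended
print(classify_intake(30.0))  # allowable
print(classify_intake(35.0))  # out of range
```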
Two Ways To Chill Out
In high-density equipment rooms, successful thermal management relies on a seamless integration of server rack-cooling design and room-cooling design.
If you haven’t already, evaluate and remediate your data center for optimum control and use of energy through the use of segregated hot and cold aisles.
1. Server Rack Cooling:
The ideal path inside a rack ensures that no “hot spots” occur and the waste heat is effectively removed. Simulations and real-world testing show that moving air through a cabinet from bottom to top results in the lowest internal cabinet temperatures.
The most common airflow pattern pulls cooler air in at the front and exhausts heated air toward the rear or sides; equipment with this pattern is known as “front-intake” equipment.
Be wary of “rear-intake” equipment. When cabinet air comes in through the rear and exhausts out the front, it does not allow hot air to exit the top of the rack.
Downward airflow is less than ideal, creating “mixed convection” (a mixture of forced airflow and natural convection) both during normal operation and in the event of fan failure.
2. Room Cooling Systems:
The bottom-up system:
- Access floor with perforated floor tiles.
- Air is generally supplied by down-flow Computer Room Air Conditioning (CRAC) units.
- Re-circulation can occur at the top of the racks when the supply flow rate is insufficient, and equipment shelves above that interface are exposed to significantly higher temperatures.
The top-down system:
- Modular cooling units are installed above selected equipment racks. The number of units depends on the heat release of the equipment.
- Built-in fans and cooling coils move and condition the air, while a refrigeration loop controls the cooling coil temperature.
- This system produces a well-mixed cold aisle and the servers draw air with even temperature conditions.
No Borrowing From Peter To Pay Paul!
Over-cooling of some racks should not compensate for under-cooling of others! In other words, there is no “credit” to be gained from intake temperatures below the minimum recommended temperature.
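The point above is easy to see with a quick sketch: an average intake temperature can look healthy even when individual racks are over- or under-cooled, so compliance should be checked rack by rack. The readings below are hypothetical:

```python
# Recommended envelope in deg C (ASHRAE-style figure used for illustration).
RECOMMENDED = (18.0, 27.0)

# Hypothetical per-rack intake temperatures.
intakes = {"rack-1": 15.0, "rack-2": 30.0, "rack-3": 21.0}

average = sum(intakes.values()) / len(intakes)
out_of_range = {name: t for name, t in intakes.items()
                if not RECOMMENDED[0] <= t <= RECOMMENDED[1]}

print(round(average, 1))  # 22.0 -- looks fine on average...
print(out_of_range)       # ...but two of the three racks are out of range
```

Averaging hides the problem: the over-cooled rack earns no “credit” that offsets the under-cooled one.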
The increase in power density is driven by the ability to pack an ever-greater amount of performance into today’s servers. The power density trend will continue to challenge cooling technology.
To maximize your data center with cabinets and accessories, visit Gaw Technology online.
Gaw Technology’s consultants will help you every step of the way as you create a data center that keeps your servers cool.