Your high-performance, multiprocessor servers are working hard, processing and storing countless bytes of data. That work consumes a lot of power, nearly all of which is converted directly into heat. Servers may be getting smaller and faster, but they’re also drawing more power and, therefore, producing more heat.
A typical 1U server draws 250 to 500W, which means that a standard 42U server rack, with 40 servers stacked on top of each other, can draw 10 to 20kW and produce roughly 35,000 to 70,000 British thermal units per hour (BTU/hr) of heat. That requires three to six tons of cooling per rack -- about ten years ago, this was the amount of cooling required for a 200x400 sq. ft. room with 10 to 15 42U racks!
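The arithmetic above can be sketched in a few lines. This is a back-of-the-envelope check, not a sizing tool; it uses the standard conversion factors of roughly 3.412 BTU/hr per watt and 12,000 BTU/hr per ton of cooling.

```python
# Back-of-the-envelope rack cooling math from the figures above.
# Conversion factors: 1 W ~= 3.412 BTU/hr; 1 ton of cooling = 12,000 BTU/hr.

WATTS_TO_BTU_HR = 3.412
BTU_HR_PER_TON = 12_000

def rack_heat_load(servers: int, watts_per_server: float) -> tuple[float, float]:
    """Return (heat in BTU/hr, tons of cooling) for one rack."""
    watts = servers * watts_per_server
    btu_hr = watts * WATTS_TO_BTU_HR
    return btu_hr, btu_hr / BTU_HR_PER_TON

# A 42U rack with 40 1U servers, at both ends of the 250-500W range:
low_btu, low_tons = rack_heat_load(40, 250)    # 10 kW total
high_btu, high_tons = rack_heat_load(40, 500)  # 20 kW total
print(f"{low_btu:,.0f} BTU/hr ({low_tons:.1f} tons)")
print(f"{high_btu:,.0f} BTU/hr ({high_tons:.1f} tons)")
```

Run it and the low end comes out near 34,000 BTU/hr (about 2.8 tons) and the high end near 68,000 BTU/hr (about 5.7 tons) -- the "35,000 to 70,000 BTU/hr" and "three to six tons" figures above, rounded.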
Even with server virtualization -- partitioning a physical server into smaller virtual servers to maximize resources, save space and energy, assist in disaster recovery, centralize server administration, decrease operational costs and cut down on hardware maintenance -- and colocation, many existing power distribution systems cannot deliver 20 to 30 kW per rack.
Spreading servers across half-empty racks to lower the average power per square foot and prevent overheating is not uncommon. But it isn’t efficient, either.
Optimizing your data center for maximum airflow is crucial. The best way to do this is to lay out your servers according to the hot aisle/cold aisle design.
The goal of this strategic layout is to conserve energy and manage airflow, thereby lowering cooling costs. Essentially, it involves lining up server racks in alternating rows, with hot air exhausts facing one way and cold air intakes facing the other.
The cold aisles are the rows that contain the rack fronts and typically face air conditioner output ducts. The heated exhausts pour into the hot aisles, which typically face the air conditioner return ducts.
To isolate the hot aisles from the cold aisles and prevent air mixing, use a containment system. Plenum spaces -- like raised floors -- provide airflow pathways to prevent overheating. Vendors may offer other options that combine containment with variable frequency drives (VFDs) on the fans.
Hot & Cold Aisles: Quick Tips
Raise the floor about 1.5 feet to create the plenum space mentioned above, so that air pushed by the air conditioning equipment can pass through it.
Use rack grilles with high airflow ratings -- look for those rated around 600 cubic feet per minute (CFM).
Place devices with side or top exhausts in their own area of your data center, since they don’t fit the front-to-back airflow of a hot aisle/cold aisle layout.
Install automatic doors in your data center.
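To put the CFM tip in context, the widely used sensible-heat approximation for air relates heat load to airflow: BTU/hr ≈ 1.08 × CFM × ΔT°F, where the 1.08 factor assumes sea-level air density. The sketch below is illustrative only -- the 20 kW rack and 25°F aisle-to-aisle temperature rise are assumed figures, not measurements -- and is no substitute for vendor sizing.

```python
# Rough airflow sizing using the sensible-heat approximation for air:
#   heat (BTU/hr) ~= 1.08 * CFM * delta_T (deg F)
# The 1.08 factor assumes sea-level air density; adjust for altitude.

def required_cfm(heat_btu_hr: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to carry away a heat load at a given
    cold-aisle-to-hot-aisle temperature rise in degrees Fahrenheit."""
    return heat_btu_hr / (1.08 * delta_t_f)

# Assumed example: a fully loaded 20 kW rack with a 25 F rise.
rack_btu_hr = 20_000 * 3.412           # ~68,240 BTU/hr of heat
cfm = required_cfm(rack_btu_hr, 25)
print(round(cfm))                      # total CFM the rack must breathe
```

At these assumed numbers the rack needs roughly 2,500 CFM of cold air, which is why a single 600 CFM floor grille isn’t enough for a dense rack -- plan on several per cold aisle.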
Your servers work hard; don’t make them work hard in the heat. Keep your data center efficient with strategic hot/cold layout and the cabinets and accessories to keep things cool.