Facebook Seeks Patent on Cooling Automation
March 5, 2012
The server room at the new Facebook data center in Prineville, Oregon, featuring a hot aisle containment system. (Photo credit: Alan Brandt)
Engineers from Facebook are seeking a patent on a data center cooling system that uses a load balancer to automatically shift workloads among racks of servers. The system can also manage fans that adjust the volume of air in either the hot aisle or cold aisle.
The system described by Facebook is one of several approaches to designing intelligent cooling systems in which servers, sensors and cooling equipment can "talk" to one another to provide advanced management of high-density racks of IT equipment. Facebook's technology targets a particular challenge in large data centers - the tendency for on-board server fans to fight with row-level cooling systems as the temperature rises.
Raising the temperature in the data center can save big money on power costs. In recent years, industry research has shown that servers can perform effectively at temperatures above 80 degrees F, well above the average ranges in the low 70s at which most data centers are maintained.
But if you nudge the thermostat too high, the energy savings can evaporate in a flurry of fan activity. Several studies in which servers were tested at higher temperatures discovered that on-board server fans kicked on between 77 and 80 degrees. This fan activity consumed energy that offset the gains from using less room-level cooling.
Companies like Facebook and Microsoft have sought to address this by reducing or eliminating on-board server fans. This approach only works in a design in which airflow is closely managed and monitored, typically by using aisle containment and temperature sensors that provide greater control over conditions in the racks.
Facebook applied these techniques in 2010 in several retrofits of its leased data center space, which involved detailed analysis of fan speeds. The company worked with its server vendor to adjust the algorithm driving the fan speeds.
The patent application by Facebook engineers Amir Michael and Michael Paleczny was submitted in 2010, and was recently made public. Most cooling automation systems focus on adjusting the airflow being provided to the racks of servers. But the Facebook patent filing describes the use of a load balancer that can redistribute the workload across servers to shift compute activity away from "hot spots" inside racks. The Facebook system can also adjust fans that manage airflow entering the cold aisle and exiting the hot aisle, providing multiple ways to adjust for changing thermal conditions.
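The filing itself does not include an implementation, but the control loop it describes, steering new work away from hot racks while trimming the containment fans, might look roughly like the sketch below. The sensor readings, the 80-degree threshold and the weighting scheme are assumptions made for illustration, not details taken from the patent.

```python
# Illustrative sketch of a thermal-aware load balancer, loosely based on the
# behavior described in the Facebook filing. The threshold and weighting
# scheme are assumptions for this example, not patent details.

HOT_SPOT_F = 80.0   # assumed inlet temperature above which a rack is a "hot spot"


def rebalance(rack_inlet_temps, base_weight=1.0):
    """Return per-rack load-balancer weights that steer new work away
    from racks whose inlet temperature exceeds the hot-spot threshold."""
    weights = {}
    for rack, temp_f in rack_inlet_temps.items():
        if temp_f >= HOT_SPOT_F:
            # Scale the weight down in proportion to how far the rack
            # has drifted past the threshold.
            weights[rack] = base_weight / (1.0 + (temp_f - HOT_SPOT_F))
        else:
            weights[rack] = base_weight
    return weights


def adjust_aisle_fans(hot_aisle_f, cold_aisle_f, setpoint_f=80.0, step_pct=5):
    """Return a fan-speed change (in percent) for the containment fans,
    based on how far the hot aisle is from its setpoint."""
    if hot_aisle_f > setpoint_f:
        return +step_pct   # push more air through the cold aisle
    if hot_aisle_f < setpoint_f - 5 and cold_aisle_f < setpoint_f - 10:
        return -step_pct   # back the fans off to save energy
    return 0


# Example reading: rack B is running hot, so it gets a smaller share of new work.
print(rebalance({"rack_a": 74.0, "rack_b": 86.0, "rack_c": 77.5}))
```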
The submission builds upon the techniques described in a 2009 patent submission by members of Facebook's data center team, which focused on designs that would allow servers to operate without fans, including modifications inside the server chassis to improve airflow and provide more cool air to components.
The Facebook patents discuss reducing the use of fans by only activating them in a certain temperature range, or going without fans altogether. The Open Compute designs released by Facebook in April 2011 feature a server chassis with four 60 millimeter fans at the rear of the server. The 1.5U chassis allows the use of 60 millimeter fans, which are more efficient than the 40 millimeter fans seen in many 1U chassis.
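The filings do not spell out the fan logic, but activating fans only within a certain temperature range usually implies a simple hysteresis band, so the fans do not oscillate around a single trigger point. A minimal sketch, assuming the 77 and 80 degree figures from the studies cited above as the band edges:

```python
# Illustrative hysteresis band for chassis fans: spin up only above an upper
# threshold, spin down only below a lower one, so the fans don't flap on and off.
# The 77/80 F band is borrowed from the studies cited above, not from the patent.

FAN_ON_F = 80.0
FAN_OFF_F = 77.0


def update_fans(inlet_temp_f, fans_on):
    """Return the new fan state given the current inlet temperature."""
    if inlet_temp_f >= FAN_ON_F:
        return True
    if inlet_temp_f <= FAN_OFF_F:
        return False
    return fans_on  # inside the band: leave the fans alone


state = False
for reading in (75.0, 78.5, 81.0, 79.0, 76.5):
    state = update_fans(reading, state)
    print(reading, "fans on" if state else "fans off")
```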
Facebook is hardly alone in seeking to solve these problems. Over the past five years, a number of data center researchers and vendors have focused on automated cooling systems that can adjust to temperature and pressure changes in the server environment. Here are a few examples, with a rough sketch of the control pattern they share after the list:
In 2007, HP introduced Dynamic Smart Cooling, a system that deployed sensors throughout the data center, which communicated with the air conditioning systems. HP used the system in its own data centers, but the technology was lightly adopted by customers.
In 2008, Opengate Data Systems introduced a heat containment system for data center racks, equipped with modules that monitor air pressure in a server cabinet and can adjust fan activity based on pressure within the cabinet.
In 2009, Lawrence Berkeley Labs and Intel developed a proof-of-concept that integrated a sensor network into building management systems, which could then adjust the output of cooling systems in response to changes in server temperature and pressure readings at the top and bottom of each rack.
In 2010, Brocade opened a new data center on its San Jose campus with a network of 1,500 temperature sensors tied into its building management system, which can auto-adjust cooling as workloads shift.
In 2010, SynapSense introduced Adaptive Control, software that can dynamically adjust the temperature set points and fan speed in computer room air handlers (CRAHs) based on sensor readings of server inlet temperatures and sub-floor air pressures.
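These systems differ in the details, but they share a common pattern: a sensor network feeds a control loop that nudges cooling output toward measured conditions. The sketch below illustrates that general pattern for a CRAH driven by inlet temperatures and sub-floor pressure; the targets, step sizes and function names are invented for the example and do not describe any particular vendor's product.

```python
# Generic sketch of the sensor-driven pattern the systems above share:
# read server inlet temperatures and sub-floor pressure, then nudge the
# CRAH setpoint and fan speed toward targets. All values are illustrative.

TARGET_INLET_F = 75.0        # desired worst-case server inlet temperature
TARGET_PRESSURE_PA = 12.0    # desired sub-floor static pressure


def adjust_crah(max_inlet_f, subfloor_pa, setpoint_f, fan_pct):
    """Return a new (setpoint, fan speed) pair for one control interval."""
    # If the warmest inlet is comfortably below target, raise the setpoint
    # to save energy; if it is above target, lower the setpoint.
    if max_inlet_f < TARGET_INLET_F - 2:
        setpoint_f += 0.5
    elif max_inlet_f > TARGET_INLET_F:
        setpoint_f -= 0.5

    # Use sub-floor pressure to decide whether the fans are moving
    # too much or too little air through the raised floor.
    if subfloor_pa < TARGET_PRESSURE_PA:
        fan_pct = min(100, fan_pct + 5)
    else:
        fan_pct = max(30, fan_pct - 5)

    return setpoint_f, fan_pct


print(adjust_crah(max_inlet_f=78.2, subfloor_pa=10.5, setpoint_f=72.0, fan_pct=60))
```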