It Doesn't Take a Supercomputer to Justify Liquid Cooling

How rear-door heat exchangers solve the high-density data center problem

A look inside the cooling plant at a Google data center. (Photo: Google)

When the national weather services for Denmark and Switzerland upgraded their computing capacities, they each turned to supercomputers that are cooled by internal heat exchangers.

It doesn't take a supercomputer to justify liquid cooling, however. Heat exchangers have been used inside server cabinets for many years to dissipate heat and reduce the cooling needed from computer room air handling (CRAH) units. Recent advances are causing data center managers who may have dismissed them as risky to take a second look.

Rear door heat exchangers (RDHx) are being used for dense server environments in any data center where racks use 20 kW or more of power. "That level of usage is typical of organizations conducting intense research or mining bitcoins," says John Peter "JP" Valiulis, VP of North America marketing, thermal management, for Vertiv (formerly Emerson Network Power).

Cooling for data centers with high power density per rack is an especially timely subject today, as machine learning software starts making its way into enterprise and service provider data centers. Training machine learning algorithms requires massive, power-hungry GPU clusters, pushing data center power density far beyond the average 3 kW to 6 kW per rack.
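
For a rough sense of the arithmetic, the short Python sketch below compares a conventional rack with a GPU training rack; the server counts and per-server wattages are illustrative assumptions, not figures from any vendor.

```python
# Rough rack-density comparison: a conventional rack vs. a GPU training rack.
# All wattages and server counts are illustrative assumptions.

TYPICAL_1U_SERVER_W = 350      # assumed draw of a typical dual-socket 1U server
GPU_TRAINING_NODE_W = 3000     # assumed draw of a multi-GPU training node

def rack_power_kw(servers: int, watts_per_server: float) -> float:
    """Total rack load in kW for a given server count and per-server draw."""
    return servers * watts_per_server / 1000.0

# A dozen conventional 1U servers lands squarely in the 3-6 kW range.
print(f"12 x 1U servers: {rack_power_kw(12, TYPICAL_1U_SERVER_W):.1f} kW")

# Even 8 GPU nodes pushes a rack well past the 20 kW mark where RDHx starts to pay off.
print(f"8 x GPU nodes:   {rack_power_kw(8, GPU_TRAINING_NODE_W):.1f} kW")
```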

Read more: Deep Learning Driving Up Data Center Power Density

RDHx systems target mission-critical, high-transaction-rate work that demands the smallest possible server count. "Education, government and, in particular, the defense sector, are classic candidates for RDHx," says Mark Simmons, director of enterprise product architecture at Fujitsu. "Industries that don't want to run massive quantities of water throughout the data center," he adds, should be interested, too.

For racks with less energy usage, RDHx systems may not be cost-effective, Simmons continues. "Most data centers use only 3 kW to 6 kW per rack. Even if they used 10 kW per rack, RDHx would be expensive."

Heat Removal

RDHx systems make economic sense for intense computing applications because they excel at heat removal.

Typical RDHx systems are radiator-like doors attached to the back of racks, with coils or plates that carry chilled water or another coolant for direct heat exchange. "This method of heat dissipation is very efficient because it places heat removal very close to the heat source," Valiulis says. Consequently, it enables a neutral room, without the need for hot or cold aisles.

They are so efficient that Lawrence Berkeley National Laboratory (LBNL) suggests it may be possible to eliminate CRAH units from the data center entirely. In an internal case study 10 years ago, server outlet air temperatures were reduced by 10°F (5.5°C) to 35°F (19.4°C), depending on the server workload, coolant temperature and flow rate. In that example, 48 percent of the waste heat was removed.
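
For a back-of-the-envelope sense of what a door-mounted coil can do, the sensible heat pulled out of the exhaust stream scales with airflow and the air-side temperature drop (Q = ṁ·cp·ΔT). The airflow and temperature figures in this sketch are assumptions for illustration, not the LBNL test conditions.

```python
# Sensible heat removed from exhaust air crossing a rear-door coil: Q = m_dot * cp * dT.
# The airflow and temperature drop below are illustrative assumptions.

AIR_DENSITY = 1.2   # kg/m^3, approximate at typical room conditions
AIR_CP = 1.005      # kJ/(kg*K), specific heat of air

def heat_removed_kw(airflow_m3_per_h: float, delta_t_c: float) -> float:
    """Sensible heat (kW) removed for a given airflow and air-side temperature drop."""
    mass_flow_kg_s = airflow_m3_per_h / 3600.0 * AIR_DENSITY
    return mass_flow_kg_s * AIR_CP * delta_t_c   # kJ/s == kW

# Assume a dense rack moving about 3,000 m^3/h of air, with the door
# knocking 15 C off the exhaust temperature.
print(f"{heat_removed_kw(3000, 15):.1f} kW removed at the door")
```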

The technology has improved in the past decade. Today, Simmons says, "RDHx can reduce the energy used for cooling 80 percent at the racks, and 50 percent in the data center overall."

Technological Advances

Adding RDHx systems to existing racks is possible.

Liebert's XDR door replaces existing back doors on racks by Knurr and other leading manufacturers. This passive door provides up to 20 kW of cooling, using a coolant that changes phase into a gas at room temperature, thus reducing concerns about introducing liquid into the data center.

Other manufacturers are designing heat exchanger doors for their own racks. The Fujitsu RDHx system, for example, can be retrofitted onto PRIMERGY racks, which contain high performance Fujitsu CX400 servers. "We add a backpack to a standard 19 inch rack, making it deep enough to contain the heat exchanger," Simmons says.

That isn't the only advance. "This field-replaceable system uses liquid-to-liquid heat exchange to dissipate heat directly, rather than by air flow," Simmons says. "This removes heat quicker and reduces cooling needs. It's very simple."

The backpack also is designed to prevent leaks. Like a double-hulled oil tanker, it contains any leak within its shell and triggers the patented leak detection system to send an alert.

Fujitsu is using these backpacks in its European operations and expects to launch the system in the U.S. this fall.

Run Cool, Run Fast

As heat builds up inside cabinets, servers run slower. Adding RDHx, however, alleviates the problem for Fujitsu's own high performance data centers. "They can run at maximum speed all the time," Simmons says, because the heat is removed.

Some in the industry have suggested that RDHx systems can provide the extra cooling needed to overclock servers and therefore increase processing speeds.

Air, Water or Coolant

Initially, RDHx doors cooled servers passively via large radiators attached to the back of the racks, Valiulis says. "Those doors relied on the fans within the servers to remove the heat. In about the past three years, active doors became available that use their own built-in fans to pull heat through the servers."

Early systems, and many current ones, use chilled water to remove heat. Some recent versions use hot water (at 40°C) instead. Others rely on coolants such as the popular refrigerant R-410A. The next generation of RDHx systems is likely to explore even more efficient refrigerants.

Liquid-to-liquid heat exchange is considered the most efficient.
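
The reason both water and a vaporizing refrigerant work comes down to simple heat-balance arithmetic: water carries heat away as a sensible temperature rise, while a refrigerant carries it as the latent heat of its phase change. The sketch below shows the rough flow rates involved; the property values, especially the refrigerant's latent heat, are approximate assumptions for illustration only.

```python
# Rough coolant flow needed to carry 20 kW away from one rack.
# Property values are approximate and for illustration only.

RACK_LOAD_KW = 20.0
WATER_CP = 4.19        # kJ/(kg*K), specific heat of water
WATER_DELTA_T = 10.0   # K temperature rise across the door, assumed
R410A_LATENT = 200.0   # kJ/kg, rough latent heat of vaporization (assumption)

# Water carries the heat as a sensible temperature rise: Q = m_dot * cp * dT.
water_kg_s = RACK_LOAD_KW / (WATER_CP * WATER_DELTA_T)
print(f"Water:  ~{water_kg_s:.2f} kg/s (~{water_kg_s * 60:.0f} L/min)")

# A pumped refrigerant carries it as latent heat of the phase change: Q = m_dot * h_fg.
r410a_kg_s = RACK_LOAD_KW / R410A_LATENT
print(f"R-410A: ~{r410a_kg_s:.2f} kg/s vaporized")
```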

Overall Benefits

RDHx systems are good solutions for high performance computing centers and dense server racks, but they also add value to less intensive computing environments.

By efficiently removing heat, these cooling systems support increased density, which helps data centers decrease their footprints. As Simmons, himself a former data center manager, explains, "When RDHx systems are used, data centers can fill up the entire rack with servers. This typically isn't done with air-cooled systems."

Data centers also can more easily segment the physical space. For example, high performance computing may be consolidated in one area of the data center, which can be cooled with RDHx systems without adding additional CRAH units.

RDHx systems are more efficient, less expensive and easier to install than CRAH systems, and may allow data centers to add capacity in areas where it otherwise would be impractical. "RDHx systems can make a lot of sense for data centers with some high density areas," Valiulis says.

This ability adds an important element of flexibility, especially for older data centers struggling to meet today's power-intensive needs.

LBNL evaluated passive heat exchangers several years ago. It reports that passive doors don't require electrical energy and perform well at higher chilled water set points.

According to its technology bulletin Data Center Rack Cooling with Rear-door Heat Exchanger, "Depending on the climate and piping arrangements, RDHx devices can eliminate chiller energy because they can use treated water from a plate-and-frame heat exchanger connected to a cooling tower." Maintenance consists of removing dust from the air side of the heat exchanger and maintaining the water system at the chiller.
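
The chiller-elimination claim comes down to approach temperatures: with the chiller bypassed, the coldest water the doors can see is roughly the outdoor wet-bulb temperature plus the cooling tower's approach plus the plate-and-frame exchanger's approach. The sketch below illustrates the check; the approach values and door rating are illustrative assumptions, not LBNL figures.

```python
# Can treated tower water alone feed the RDHx loop, with the chiller bypassed?
# Achievable supply temperature is roughly:
#   outdoor wet-bulb + cooling-tower approach + plate-and-frame HX approach
# All figures below are illustrative assumptions.

TOWER_APPROACH_C = 4.0   # assumed cooling-tower approach
HX_APPROACH_C = 1.5      # assumed plate-and-frame heat exchanger approach
DOOR_RATING_C = 20.0     # assumed warmest supply water the doors can use

def economizer_supply_c(wet_bulb_c: float) -> float:
    """Approximate RDHx supply water temperature with the chiller bypassed."""
    return wet_bulb_c + TOWER_APPROACH_C + HX_APPROACH_C

for wet_bulb in (12.0, 20.0):
    supply = economizer_supply_c(wet_bulb)
    verdict = "chiller can stay off" if supply <= DOOR_RATING_C else "chiller needed"
    print(f"wet-bulb {wet_bulb:4.1f} C -> supply ~{supply:4.1f} C ({verdict})")
```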

Whether RDHx is effective depends on the ability to adjust the system to deliver the right amount of cooling. "The ability to adjust refrigerant offers higher protection and efficiency," Valiulis says.

RDHx Isn't for Everybody

"RDHx systems are not being well-adopted," Valiulis says.

This cooling method is best for high performance computing platforms. "The very large, commoditized computing companies like Google, Amazon and Network Appliance aren't embracing this technology because they don't have a need for really highly dense, fast infrastructures," Simmons says. For those applications, "good enough" computing actually is good enough.

Colocation host Cosentry cited other reasons when it opted not to use RDHx in its facilities. Jason Black, former VP of data center services and solutions engineering, now VP and GM at TierPoint, explains that RDHx systems don't provide the flexibility Cosentry needs as it lays out the data center floor.

"Typically," Black elaborates, "a rear door heat exchanger requires hard piping to each cabinet door.  This creates a problem when colocation customers move out and we need to repurpose the space." Today's flexible piping could simplify, but not eliminate, the piping issue, however.

Black says he also is concerned about introducing liquid into the data hall. The IBM heat eXchanger door, for example, holds six gallons of water and supports a flow rate of 8 to 10 gallons per minute. A catastrophic failure could drench a cabinet and the cabling underneath the raised floor. To avoid that possibility, Black says, "We have specifically designed our data centers with mechanical corridors to eliminate any water/coolant from the data hall space."
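
A quick bit of arithmetic shows why a ruptured line worries him: the water held in the door itself is only part of the exposure, because the loop keeps feeding the leak until someone isolates it. The response time in this sketch is an illustrative assumption.

```python
# Rough worst-case spill if a door's supply line lets go before it is isolated.
# Door volume and flow rate echo the figures cited above; the response time is
# an illustrative assumption.

DOOR_VOLUME_GAL = 6.0      # water held in the door itself
LOOP_FLOW_GPM = 10.0       # upper end of the cited flow rate
MINUTES_TO_ISOLATE = 5.0   # assumed time to detect the leak and close valves

spill_gal = DOOR_VOLUME_GAL + LOOP_FLOW_GPM * MINUTES_TO_ISOLATE
print(f"Potential spill: ~{spill_gal:.0f} gallons")
```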

LBNL, in contrast, piped chilled water for its RDHx system underneath its raised floors using flexible tubing with quick disconnect fittings. Alternatively, overhead piping could have been used.

RDHx also makes accessing servers a hassle, Valiulis says. "You have to swing open a door to access each rack, and close it when you're done." That's a minor inconvenience, but it adds two more steps to servicing every rack.

Ensuring security is another concern. "Mechanical systems need maintenance at least quarterly," Black points out. Cosentry data centers have a mechanical hall that enables maintenance technicians to do their jobs without coming into contact with customers' servers, thus enhancing security. "Rear door heat exchangers would negate these security procedures," Black says.

Simmons, at Fujitsu, disagrees on two points. He says that once the new RDHx systems are set up, they are virtually maintenance free. "Fujitsu's backpacks are, essentially, closed loop systems. You can lock the racks and still access the backpacks."

Future Cooling

The practicality of RDHx for routine computer operations is becoming less of a discussion point as the industry develops newer, higher tech cooling solutions. In the relatively near future, server cooling may be performed at the chip level. Chip manufacturers are developing liquid-cooled chips that dissipate heat where it is generated, thus enabling more compact board and server designs.

For example, Fujitsu's Cool-Central liquid cooling technology for its PRIMERGY servers dissipates 60 to 80 percent of the heat generated by the servers. This cuts cooling costs by half and allows data center density to increase between 250 and 500 percent. The water in these cooling loops routinely reaches 40°C but still provides ample cooling.
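
The cost claim follows from a simple heat split: whatever fraction of server heat the water loop captures never reaches the room's air handlers. The rack load and capture fractions in this sketch are illustrative assumptions, not Fujitsu specifications.

```python
# Heat left for the room air handlers when a direct-to-chip loop captures most of it.
# The rack load and capture fractions are illustrative assumptions.

RACK_LOAD_KW = 30.0

for captured in (0.6, 0.7, 0.8):
    air_side_kw = RACK_LOAD_KW * (1.0 - captured)
    print(f"{captured:.0%} captured by warm water -> {air_side_kw:.1f} kW left for CRAH units")
```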

Looking further into the future, university researchers are investigating quantum cooling. A team at the University of Texas at Arlington has developed a computer chip that cools itself to -228°C without using coolant when operating at room temperature. (Previous chips had to be immersed in coolant to achieve that feat.)

To achieve this intense cooling, electron filters called quantum wells are designed into the chips. These wells are so tiny that only super-cooled electrons can pass through them, thus cooling the chip. The process is in the early research stage but appears to reduce chip energy usage ten-fold.

Implementation Checklist

In the meantime, before quantum wells and liquid-cooled chips become commonplace, high performance data centers can improve performance, increase density and reduce cooling costs by installing rear door heat exchangers.

To help these systems operate at maximum efficiency, LBNL recommends installing blanking panels in server racks to prevent hot exhaust air from short-circuiting back to equipment inlets. It also advises scrutinizing raised floor tile arrangements to ensure air is directed where it is needed and increasing the data center setpoint temperature. To monitor the system and allow adjustments that improve performance, an energy monitoring and control system is important.

Ensuring hot aisle/cold aisle containment is less important when heat exchangers are used in the server cabinets, although that arrangement may still be valuable. "Using an RDHx can sufficiently reduce server outlet temperatures to the point where hot and cold aisles are no longer relevant," the LBNL bulletin reports. Typically, however, CRAH units are still in place and are augmented by RDHx systems.

Once RDHx systems are installed, check for air gaps. LBNL reports that RDHx doors don't always fit the racks as tightly as they should. Seal any gaps around cabinet doors with tubing to increase efficiency, and measure temperatures at the rack outflows before and after heat exchangers are installed. Also monitor the rate of flow through the system to ensure the RDHx is functioning properly and to correlate liquid flow rates and server temperatures. Ensure that coolant temperatures at each door are above the dew point to prevent condensation, and check the system periodically for leaks.
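
The dew-point check is easy to automate. The sketch below estimates the dew point from room temperature and relative humidity using the standard Magnus approximation and flags a door whose supply water runs too cold; the room conditions and supply temperature are illustrative assumptions.

```python
import math

# Condensation check for an RDHx door: the coolant supply should stay above the
# room's dew point. Dew point is estimated with the Magnus approximation; the
# room conditions and supply temperature are illustrative assumptions.

MAGNUS_A, MAGNUS_B = 17.62, 243.12   # standard Magnus coefficients (over water)

def dew_point_c(dry_bulb_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (C) from dry-bulb temperature and relative humidity."""
    gamma = math.log(rel_humidity_pct / 100.0) + MAGNUS_A * dry_bulb_c / (MAGNUS_B + dry_bulb_c)
    return MAGNUS_B * gamma / (MAGNUS_A - gamma)

room_temp_c, room_rh_pct = 24.0, 45.0   # assumed data hall conditions
coolant_supply_c = 18.0                 # assumed door supply temperature

dew_point = dew_point_c(room_temp_c, room_rh_pct)
margin = coolant_supply_c - dew_point
status = "OK" if margin > 0 else "condensation risk"
print(f"Dew point ~{dew_point:.1f} C, margin {margin:+.1f} C ({status})")
```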

Conclusion

RDHx can be a strategic piece of data center hardware, or an expensive solution to a minor problem, depending upon the data center. Before considering RDHx, think carefully about current and future needs and know what you're trying to accomplish, Simmons says. That will determine whether RDHx is right for your organization now, or in the future.

Gail Dutton covers the intersection of business and technology. She is a regular contributor to Penton publications and can be reached at [email protected].
