Kava: Google Redesigns Data Center Cooling Every 12 to 18 Months

Efficiency is a never-ending quest for the web giant’s infrastructure team

ITPro Today

March 25, 2016

Joe Kava, Google’s VP of data center operations, speaking at the company’s GCP Next 2016 event in San Francisco (Source: video by Google)

For any large-scale internet company, data center efficiency and profit margins are closely linked, and at a scale like Google’s, efficiency is everything.

Designing for better efficiency is a never-ending process for Google’s infrastructure team, and since cooling is the biggest source of inefficiency in data centers, it has always gotten special attention.

“Since I’ve been at Google, we have redesigned our fundamental cooling technology on average every 12 to 18 months,” said Joe Kava, who’s overseen the company’s data center operations for the last eight years.

This efficiency chase has produced innovations in water- and air-based data center cooling. The company has developed ways to use seawater and industrial canal water for cooling; it has devised systems for reclaiming and recycling grey water and for harvesting rainwater.

Google has also pushed the envelope in using outside air for cooling, or airside economization. “We’ve designed data centers that don’t use water cooling at all,” Kava said.

Like other web-scale data center operators and companies in the business of providing data center capacity, Google improves its data center design with every new facility that comes online.

“There’s no one-size-fits-all model at Google,” he said. “Each data center is designed for highest performance and efficiency for that specific location. We don’t rest on our laurels.”

Data Centers Central to Google’s Cloud Pitch

Kava spoke about Google’s data center best practices this week at the company’s first cloud user conference in San Francisco, called GCP Next. The event was a big effort to send the message that Google is a serious competitor to Amazon Web Services and Microsoft Azure in the enterprise cloud market.

Google CEO Sundar Pichai and the company’s former CEO and chairman Eric Schmidt delivered keynotes at the event, as did Urs Hölzle, its senior VP of technical infrastructure and its eighth employee, and Diane Greene, the VMware co-founder who recently joined to lead Google’s enterprise cloud business.

See more: Go on a Virtual 360-Degree Google Data Center Tour

Kava’s keynote at GCP Next was no filler. The might of Google data centers is a key part of its cloud pitch, on par with the low cost of its cloud services and all the sexy features, like machine learning and container orchestration.

The message boils down to something like this: Look, we design and build the best data centers in the world, and now you can use them too. It’s a message Google has been using to sell its cloud for several years now.

Testing claims like this is difficult, since ultimately the only customer of Google’s data centers is Google itself, and, as Kava put it, 99 percent of Googlers aren’t allowed to set foot in the company’s data centers, for security reasons. But the overall strength of engineering at Google is hard to argue with.

“World’s Largest Data Center Campus”

Kava showed off a video of the company’s data center campus in Iowa, which he said was the largest data center campus in the world.

Construction trucks visible in the video give a sense of scale: each building pad is more than one-third of a mile long and houses a multi-story data center, he said.

Data center scale is another important message for Google to send as it ratchets up its cloud business. Its biggest rivals are far ahead in the number of locations where their cloud services are available, and Google has to catch up.

The company announced this week it was bringing two additional cloud availability regions online this year – in Oregon and Japan – and 10 more locations next year.

Machine Learning Helps Fine-Tune for Efficiency

Google execs spent a lot of time talking about machine learning at the conference. The company is increasingly using machine learning technologies in its web services, and this week it launched its first machine learning services as cloud offerings.

One way it is using machine learning internally is to optimize data center efficiency, as Data Center Knowledge reported earlier.

A data center is a collection of complex systems working together, Kava said, and the sheer number of interactions and operating parameters involved makes it impossible for humans to work out how to optimize them.

“However, it is pretty trivial for computers to crunch through those scenarios and find the optimal settings,” he said. “Over the past couple of years we’ve developed these algorithms and we’ve trained them with billions of data points from all of our data centers all over the world.”
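Google hasn’t published the code behind these algorithms (its public accounts describe neural networks trained to predict facility PUE), but the general pattern is easy to sketch: fit a model that predicts an efficiency metric from plant settings, then search the model for the settings it predicts to be most efficient. Below is a minimal illustration in Python using scikit-learn; every variable name, range, and coefficient is hypothetical, not Google’s production system.

```python
# Minimal sketch of model-driven plant tuning (all data hypothetical;
# not Google's production system). Fit a regressor that predicts PUE
# from operating parameters, then grid-search it for efficient settings.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical telemetry: each row is one snapshot of plant settings.
# Columns: chilled-water setpoint (C), pumps running, cooling-tower
# fan speed (%), outside wet-bulb temperature (C).
X = rng.uniform([10, 1, 20, 5], [22, 8, 100, 25], size=(5000, 4))

# Synthetic PUE standing in for the measured facility PUE.
pue = (1.08
       + 0.004 * (22 - X[:, 0])              # colder water costs energy
       + 0.010 * X[:, 1]                     # each pump adds load
       + 0.0008 * X[:, 2]                    # fan power draw
       - 0.0005 * X[:, 2] * (X[:, 3] < 15)   # fans pay off in cool weather
       + rng.normal(0, 0.005, 5000))         # sensor noise

model = GradientBoostingRegressor().fit(X, pue)

# Search candidate settings for today's weather (wet bulb = 12 C).
candidates = np.array([[t, p, f, 12.0]
                       for t in np.linspace(10, 22, 13)
                       for p in range(1, 9)
                       for f in np.linspace(20, 100, 9)])
best = candidates[np.argmin(model.predict(candidates))]
print("Suggested setpoints (water temp, pumps, fan %):", best[:3])
```

In practice a system like this would be trained on real telemetry and validated before any suggested setpoint reached the plant; the sketch only shows the fit-then-search shape of the approach.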

Read more: Google Using Machine Learning to Boost Data Center Efficiency

Visualizations generated from analysis of this data help operations teams decide how to configure the electrical and mechanical plants in Google data centers.

Visualization helps the team see inflection points in curves that may not be intuitive otherwise. Using this process, Kava’s team has found that as many as 19 independent variables can affect data center performance.
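A fitted model of this kind also lends itself to the curves Kava described: sweep one variable while holding the others fixed, then look for bends in the predicted-efficiency curve. A short continuation of the hypothetical sketch above, reusing its fitted model:

```python
# Continuation of the sketch above; reuses its fitted `model`.
import numpy as np

fan = np.linspace(20, 100, 81)                 # sweep fan speed only
sweep = np.column_stack([
    np.full_like(fan, 18.0),                   # water setpoint, fixed
    np.full_like(fan, 4.0),                    # pumps running, fixed
    fan,                                       # the variable under study
    np.full_like(fan, 12.0),                   # wet-bulb temp, fixed
])
curve = model.predict(sweep)

# A spike in the second difference marks a bend in the predicted curve.
knee = fan[1:-1][np.argmax(np.abs(np.diff(curve, 2)))]
print(f"Predicted PUE bends most sharply near {knee:.0f}% fan speed")
```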

Perhaps the biggest operating principle in Google’s infrastructure approach is end-to-end ownership. By designing everything in-house, from servers to data centers, and relying exclusively on internal staff for data center operations, the company exercises full control over its infrastructure, its performance, and its efficiency.
