Google Cloud Anticipates Machine Learning Growth with GPU Update

Data Center Knowledge

November 19, 2016

Brought to You by The WHIR

Google is bringing graphics processing unit (GPU) chips to its Compute Engine and Cloud Machine Learning to boost performance for intensive computing tasks like rendering and large-scale simulations. GPUs will be available from Google Cloud Platform worldwide in early 2017, according to an announcement this week.

In a separate announcement, the company unveiled a new Cloud Machine Learning group and a series of moves related to machine learning and artificial intelligence.

Google introduced its new Cloud Jobs API, which applies machine learning to the hiring process; significantly reduced prices for its Cloud Vision API; and launched a premium edition of its Cloud Translation (formerly Google Translate) API. A blog post outlining Google’s machine learning-related updates also announced the general availability of the Cloud Natural Language API.

The announcements collectively represent a push by Google not just to broaden its compute-intensive cloud services, but to deliver practical services built on them. GPUs are used as accelerators on many clouds, from Peer1 to AWS, though industry players are engaged in an ongoing debate about the scalability and relative efficiency of different approaches.

Google says the GPUs on its cloud will be AMD’s FirePro S9300 x2 for remote workstations and NVIDIA’s Tesla P100 and Tesla K80 for deep learning, AI, and high-performance computing (HPC) applications. Google is offering the GPUs in passthrough mode for bare-metal performance, and up to eight can be attached to each VM instance.

“These new instances of GPUs in the Google Cloud offer extraordinary performance advantages over comparable CPU-based systems and underscore the inflection point we are seeing in computing today,” said Todd Mostak, founder and CEO of data exploration startup MapD, which used them as part of an early access program. “Using standard analytical queries on the 1.2 billion row NYC taxi dataset, we found that a single Google n1-highmem-32 instance with 8 attached K80 dies is on average 85 times faster than Impala running on a cluster of 6 nodes each with 32 vCPUs. Further, the innovative SSD storage configuration via NVME further reduced cold load times by a factor of five. This performance offers tremendous flexibility for enterprises interested in millisecond speed at over billions of rows.”

GPU instances take minutes to set up from Google Cloud Console or the gcloud command line, and are priced per minute.
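For readers curious what provisioning looks like outside the Console, the sketch below creates a VM with attached K80s through the Compute Engine v1 REST API using the google-api-python-client library. The project ID, zone, machine type, and boot image are illustrative assumptions, not values from Google’s announcement.

    # Minimal sketch: create a Compute Engine VM with attached GPUs via the
    # v1 REST API. Assumes Application Default Credentials are configured.
    from googleapiclient import discovery

    PROJECT = 'my-project'   # hypothetical project ID
    ZONE = 'us-east1-d'      # assumed zone offering K80 accelerators

    compute = discovery.build('compute', 'v1')

    config = {
        'name': 'gpu-instance-1',
        'machineType': f'zones/{ZONE}/machineTypes/n1-highmem-32',
        # Up to eight passthrough GPUs can be attached per VM.
        'guestAccelerators': [{
            'acceleratorType': f'zones/{ZONE}/acceleratorTypes/nvidia-tesla-k80',
            'acceleratorCount': 8,
        }],
        # GPU VMs cannot live-migrate, so host maintenance must terminate them.
        'scheduling': {'onHostMaintenance': 'TERMINATE', 'automaticRestart': True},
        'disks': [{
            'boot': True,
            'autoDelete': True,
            'initializeParams': {
                'sourceImage': 'projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts',
            },
        }],
        'networkInterfaces': [{'network': 'global/networks/default'}],
    }

    # instances().insert returns a zone Operation that can be polled for completion.
    operation = compute.instances().insert(project=PROJECT, zone=ZONE, body=config).execute()
    print(operation['name'])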

Google Cloud Platform also extended its geographic reach earlier this month with a new Tokyo region.

This article was originally posted at The WHIR.

About the Author

Data Center Knowledge

Data Center Knowledge, a sister site to ITPro Today, is a leading online source of daily news and analysis about the data center industry. Areas of coverage include power and cooling technology, processor and server architecture, networks, storage, the colocation industry, data center company stocks, cloud, the modern hyper-scale data center space, edge computing, infrastructure for machine learning, and virtual and augmented reality. Each month, hundreds of thousands of data center professionals (C-level, business, IT and facilities decision-makers) turn to DCK to help them develop data center strategies and/or design, build and manage world-class data centers. These buyers and decision-makers rely on DCK as a trusted source of breaking news and expertise on these specialized facilities.
