Custom AI Hardware: A New Front in the Cloud Wars

Alibaba Cloud joins the race for the cloud platform with the best AI processor.


Low prices, high performance, a wide range of tools, and sheer scale are no longer enough to convince customers that your cloud is better than the competitors’. The world’s largest cloud platforms are now also competing on who can design the best processors for machine learning.

In September, Alibaba Cloud launched Hanguang 800, its custom machine learning chip, following announcements along similar lines by some of its biggest rivals, Amazon Web Services and Google Cloud.

This custom hardware powers cutting-edge features in one of the fastest growing segments of the cloud services market: Platform-as-a-Service. According to IHS Markit | Technology, PaaS grew 41 percent in the first half of 2019. This segment is where machine learning and artificial intelligence techniques are “most heavily used,” Devan Adams, a principal analyst at IHS, said in a recent announcement.

The only segment of the market that grew faster than PaaS was what IHS calls Cloud-as-a-Service, or CaaS. According to the market research group (part of the Informa Tech family that also includes Data Center Knowledge), CaaS includes all the services an Infrastructure-as-a-Service offering does, plus management of server and cloud operating systems. CaaS is a middle ground between IaaS and PaaS: the provider manages more of the customer's stack than an IaaS provider does but less than a PaaS provider.

Custom AI hardware and teams of highly skilled experts like data scientists are how cloud providers increasingly differentiate themselves, Adams said. He tied the growth in PaaS usage to the rising adoption of AI and ML techniques.

Alibaba’s new AI chip is built for “inference,” a subset of ML workloads. Inference is the phase in which a system applies an already “trained” model to new data to make decisions; the training itself is typically done beforehand on a different type of hardware.
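The training/inference split the article describes can be sketched in a few lines of code. The following is a hypothetical illustration, not anything resembling Alibaba's actual stack: training iteratively adjusts a model's weights (the compute-heavy phase run on training hardware such as Google's TPUs), while inference is a cheap forward pass with frozen weights (the phase chips like Hanguang 800 and AWS Inferentia accelerate).

```python
def train(samples, epochs=500, lr=0.05):
    """Training: repeatedly adjust weights to fit the data.
    This loop is the expensive part, done once on training hardware."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y   # prediction error on one sample
            w -= lr * err * x       # gradient-descent weight update
            b -= lr * err
    return w, b

def infer(model, x):
    """Inference: a single forward pass with frozen weights.
    This is the cheap, high-volume operation inference chips speed up."""
    w, b = model
    return w * x + b

# Fit y = 2x + 1 from three examples, then serve a prediction.
model = train([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
prediction = infer(model, 3.0)  # should be close to 7.0
```

The asymmetry is the point: `train` loops over the data thousands of times, while `infer` is one multiply-add per query, which is why providers build separate silicon for each phase.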

AWS introduced its custom ML inference chip, called Inferentia, last November.

Alphabet’s Google Cloud has been running its custom Tensor Processing Unit ASICs for ML training since around 2015.

About the Authors

Yevgeniy Sverdlik

Former editor in chief of Data Center Knowledge.

