Confidential Computing, the Next Big Thing Making Cloud Less Scary for Enterprises
Securing data while in use opens doors from the on-prem data center to the cloud for more applications.
Google Cloud’s new AMD-enabled confidential computing offering isn't the only attempt to protect data while in use by applications. All the major cloud vendors, including Amazon Web Services, Microsoft Azure, and IBM Cloud, have their own approaches to the challenge of securing sensitive data at runtime – a major barrier to moving some enterprise applications from on-premises corporate data centers to the cloud.
According to a survey by the Confidential Computing Consortium at June's Linux Foundation Open Source Summit, only 30 percent of attendees had heard of the technology, but it promises to change the way more security-conscious organizations view public cloud infrastructure.
"Sensitive data being used by most applications in the data center and public cloud today are not protected against attacks that target data while it is in use by applications," said Stephen Walli, governing board chair of the Confidential Computing Consortium and principal program manager at Microsoft.
As more applications move to the cloud or out to the edge, traditional perimeter security defenses are limited in their ability to protect against attacks, he told Data Center Knowledge. Plus, there are the challenges of protecting against a cloud service provider's own employees, or against other customers of the same shared service.
"There have been high-profile data breaches, such as the Target breach, that would have been prevented if confidential computing was used to protect applications," Walli added.
A hardware-based trusted execution environment protects against attackers with physical access to the hardware, root access to the host operating system or hypervisor, or privileged access to the orchestration system. "This enables many workloads to move to the public cloud which previously could not due to security concerns or compliance requirements," he explained.
According to Gartner analyst Steve Riley, confidential computing in a hardware-based trusted execution environment, or “enclave,” protects the code and the data from attackers, including those with internal access, such as a service provider’s staff.
This approach makes more sense in cloud environments than on premises. "For workloads completely contained within your own data center, we don’t see nearly as much value," Riley told DCK.
Hardware-Based Security for Heavy Lifting
Confidential computing took another big step toward the mainstream last month, when Google Cloud announced the public beta of its new confidential computing offering. But the technology has been around for a while.
IBM, for example, has had confidential computing on its IBM Z mainframes for a couple of years now, first in beta; this spring, its Secure Execution for Linux service became generally available.
"We are now in the fourth generation of really looking at confidential computing, both on premise and in the cloud," said Rohit Badlaney, VP of IBM Z Hybrid Cloud at IBM.
The technology is widely used by IBM’s enterprise customers, he said, including Bank of America, Daimler, and Apple.
Like Google Cloud, IBM's platform doesn't require that applications be rewritten.
"This entire enclave technology is designed around Open Container Initiative containers, and as long as the container is designed to work on multi-platform, it's very much lift and shift," Badlaney told DCK. "It has to be architecturally compliant, but a container is a container in my view."
IBM's confidential computing offering runs on IBM Z servers, which can have up to 16 terabytes of protected memory, making it capable of handling extremely large workloads.
"We literally went through a client engagement, a healthcare security provider, that needed this level of security and confidential computing," said Badlaney. "They in essence lifted and shifted their workload from Azure cloud to IBM Cloud."
Memory-wise, the AMD Epyc 2 chip that Google Cloud uses to enable confidential computing tops out at 896 gigabytes. "And if you do the math, with Intel SGX [Intel’s secure enclave technology] you can have 64 to 128 megabytes of protected memory," said Badlaney.
When enterprises move from Intel to IBM Z, there’s a significant performance improvement, said Marcel Mitran, distinguished engineer and CTO of LinuxONE at IBM.
"Generally, for application workload written in Java, we see a two-times uplift in performance," he said. "For cryptography, we see an order of magnitude level of improvement. Our systems have 5GHz clocked cores, massive amounts of cache, and that's all great for high performance enterprise computing."
But the Intel SGX platform is a good fit for some narrowly focused applications, like key management. And IBM does offer Intel SGX-powered secure enclaves for those customers who want them – as does Microsoft Azure.
"Microsoft has offered confidential computing to Azure customers for several years and the Azure confidential computing service became generally available in April," said Vikas Bhatia, principal group program manager at Microsoft.
He promised more confidential computing innovation in the coming months, including innovation in hardware, software, and services.
Amazon has been going its own way on confidential computing. AWS doesn't support Intel SGX or the AMD-based runtime security yet, focusing instead on its own approach, called Nitro Enclaves, which provide a cryptographic isolation layer. The enclaves are built on AWS’s own Nitro hardware, including Nitro security chips and Nitro cards.
Not All Confidential Computing Is Created Equal
Besides the memory limitations, there are other differences between the confidential computing offerings of the major vendors.
The IBM Z server, for example, is not necessarily a good fit for the average data center.
“Using the Z server is not only cost-prohibitive, it is also architecture-prohibitive," said Thomas Hatch, CTO and co-founder at SaltStack, a cybersecurity vendor. "Relying on a mainframe approach can be much more restrictive with respect to how the data center is designed and built out."
That's why the introduction of AMD's Epyc chipset is a strong win for the chipmaker and will help it gain more market share in the data center, he said.
There are also some differences in how the security is implemented on the platforms.
The memory integrity protection provided by Intel SGX is different from the AMD Secure Encrypted Virtualization feature on the Epyc processors, said Ambuj Kumar, co-founder and CEO at Fortanix, which offers runtime encryption products built around Intel SGX.
"The AMD SEV solution supports faster random memory access than current Intel SGX platforms, but this comes at the cost of weaker security properties," he said.
And while memory is a constraint today on Intel SGX, it's not as bad as Google makes it sound, according to Kumar. "The current generation of Intel SGX-enabled processors has a processor reserved memory range of up to 256MB. This is only a cache and the effective enclave memory size can be as large as 64GB."
And that will get better, he added. "Forthcoming changes to the Intel architecture will increase the cache size and increase the range of available data center platforms that provide support for Intel SGX."