Cloud Infrastructure Reaches Turning Point as Container Adoption Becomes Universal
Container adoption hits 90% as enterprises grapple with the complexity of Kubernetes, a new Nutanix study finds.
When the cloud first became an option for enterprise users, virtual machine instances (VMs) were the only way to run workloads.
Times have changed, and now containers have become the standard approach for application deployment and management, at least according to Nutanix's seventh annual Enterprise Cloud Index (ECI). In recent years, Nutanix ECI reports have identified cloud service growth and complexity as key challenges. In 2025, many of those concerns remain, even as new ones, including the use of generative AI (GenAI), have been introduced.
The new report is based on responses from 1,500 IT and DevOps/platform engineering decision-makers worldwide, providing insights into how organizations are modernizing their infrastructure to support cloud-native applications.
Key findings of the Nutanix ECI report include:
• 90% of organizations now run containerized applications.
• 98% use Kubernetes environments, with 80% managing multiple instances.
• 81% say their infrastructure needs improvement for cloud-native support.
• 94% report clear benefits from cloud-native applications/containers.
• Over 80% have implemented a GenAI strategy, driving new infrastructure demands.
Lee Caswell, senior vice president of product and solutions marketing for Nutanix, told ITPro Today that he was surprised to see 90% of organizations report some of their applications are containerized. He noted that while container adoption is well-known in the public cloud, the new report shows that container adoption is underway in private data centers as well.
"We see great value for customers in adopting containers to speed application development and testing, as well as provide application portability across the hybrid cloud," Caswell said.
How GenAI Is Impacting Hybrid Cloud Infrastructure
The report found that 90% of organizations expect IT costs to rise due to GenAI implementation efforts.
"Everyone looking at GenAI workloads is concerned about the cost and availability of GPUs," Caswell said.
Caswell said Nutanix expects to see continued software innovation that will change the mix of CPU and GPU resources required for evolving training, inferencing, and agentic workloads. As such, he noted that it is important for organizations adopting GenAI to use an infrastructure platform that offers a full choice of GPU and CPU options that can be modified over time.
"The pace of LLM [large language model] development similarly means there is terrific value in relying on infrastructure that simplifies access to LLMs from leading providers like Nvidia and Hugging Face," he said.
Container Sprawl Is a Real Concern
Containerization adoption appears nearly universal, with 90% of organizations having at least some containerized applications. That said, 81% stated that their infrastructure needs improvement to support cloud-native applications.
"A decade into cloud-native, many organizations are discovering that allowing every development team to manage their own Kubernetes environment is proving difficult at scale," Tobi Knauf, vice president and general manager of cloud-native for Nutanix, told ITPro Today.
Kubernetes has become the de facto platform across the hybrid cloud for container orchestration. Knauf noted that Kubernetes was not designed to be used by each DevOps team individually, but rather as a centrally managed platform. But because DevOps and Kubernetes rose in popularity around the same time, many organizations applied a DevOps approach to Kubernetes and ended up with hundreds or sometimes thousands of disparate clusters.
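One quick way to gauge the sprawl Knauf describes is simply to count how many clusters a team's kubeconfig references. The snippet below is an illustrative sketch: it writes a hypothetical kubeconfig excerpt to a temp file (real files typically live at `~/.kube/config`) and counts the cluster entries; the cluster names are invented for the example.

```shell
# Hypothetical kubeconfig excerpt (names are illustrative);
# a real file would be ~/.kube/config or pointed to by $KUBECONFIG.
cat > /tmp/sample-kubeconfig.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster: {server: https://prod.example.com}
  name: prod-us-east
- cluster: {server: https://dev.example.com}
  name: dev-team-a
- cluster: {server: https://edge.example.com}
  name: edge-site-42
EOF

# Count distinct cluster entries -- a rough proxy for sprawl.
grep -c '^- cluster:' /tmp/sample-kubeconfig.yaml
```

Against a live environment, `kubectl config get-contexts` gives the same picture per engineer; aggregating those counts across teams is often the first step in sizing a consolidation effort.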
What Needs to Improve to Better Support Containers
To control costs and improve efficiency, organizations will need to rein in the proliferation of clusters and Kubernetes environments. Several strategies can help them manage multiple Kubernetes environments more effectively.
"Organizations that run across different types of infrastructure (on-prem, one or more public clouds, and the edge), as most larger organizations do, often don't have a common operating model to manage Kubernetes across these environments," Knauf said.
Multiple clusters can lead to duplicate efforts, expensive operations, and security issues because of a lack of central governance.
Knauf suggests a series of strategies to tackle the issue, including:
• Use a common operating model. Having a common operating model for Kubernetes across any infrastructure eliminates duplicate efforts and automates operations.
• Embrace platform engineering. Adopt a platform engineering approach to centralize platform functions such as governance, observability, and security.
• Use open source. Adopt a Kubernetes platform that is based on open source and has open APIs, to avoid locking your apps and operational workflows into a single platform.
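The centralized-governance idea behind these strategies can be sketched with plain, open Kubernetes objects. Below is a minimal, hypothetical example: a platform team defines a ResourceQuota per tenant namespace (the names `team-a` and `platform-quota` are invented here) and stamps it out uniformly across clusters instead of letting each team set its own limits.

```shell
# Hypothetical per-tenant guardrail a platform team might apply
# identically across every cluster; names are illustrative.
cat > /tmp/team-a-quota.yaml <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: platform-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
EOF

# In practice the platform team would loop over contexts, e.g.:
#   kubectl apply -f /tmp/team-a-quota.yaml --context "$cluster"
grep 'kind:' /tmp/team-a-quota.yaml
```

Because this relies only on standard Kubernetes APIs, the same manifest works on any conformant distribution, which is the portability point behind the open-source recommendation above.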