A Hyperconvergence Progress Report: Has Kubernetes Stolen the Show?

If an open source extension succeeds in making Kubernetes a virtual machine orchestrator, the value proposition supporting hyperconvergence could get pulled out from under it.

Scott Fulton III, Contributor

January 3, 2020

Joe Fernandes, VP of cloud platform products, Red Hat (Photo: Scott Fulton)

The basic idea behind hyperconverged infrastructure (HCI) is that the resources each server in a data center brings to the table — memory, storage, processing power, networking bandwidth — may be pooled together and consumed like fuel or electricity rather than as a collection of individual devices. The premise of Kubernetes is that applications and individual services may be distributed throughout a data center, utilizing infrastructure in a more liquid fashion.

Throughout 2018 and into 2019, various marketers pushed the idea that hyperconvergence and Kubernetes were like chocolate and peanut butter: “Two great tastes that taste great together.” There was a way to phrase the arguments for both to make it seem like they were a great architectural fit for one another.

“In the immediate future, IT departments have to manage an infrastructure ‘duality,’” wrote IDC’s Ashish Nadkarni back in March 2017 [PDF], “the ability to deploy and manage two sets of applications, each with vastly diverse infrastructure requirements and service-level objectives.” Hyperconverged platforms resided in one compartment of this duality, Nadkarni continued. Architecture at that time tended to be optimized, in his view, for “current-generation applications” largely composed of legacy software running in first-generation virtual machines.

Yet as Kubernetes’ conquests grew, by late 2018 having thoroughly subsumed the Docker trend that gave rise to it, the overlaps between Kubernetes and HCI became a subject of greater debate. Now, all of a sudden, there’s discussion over whether the two platforms are more like chocolate and chocolate — still nice, but a bit rich if your New Year’s resolution is to slim down.

At the heart of this issue is an emerging Kubernetes project called KubeVirt, which transforms what was originally a “next-generation applications” orchestrator into an all-purpose virtualization management framework, incorporating both old and new generations. KubeVirt began as a project inside Red Hat; the Cloud Native Computing Foundation accepted it into its Sandbox for early-stage projects in 2019.

“A lot of people who run OpenShift and Kubernetes say they run it in a virt environment,” Joe Fernandes, Red Hat’s VP of cloud platform products, said while speaking at a recent OpenShift event in San Diego. (OpenShift is Red Hat’s Kubernetes-based container platform.) By “virt” Fernandes was referring to one common form of a software-based virtual environment. Red Hat’s own manifestation of KubeVirt is currently being distributed as a technical preview.

“When you’re running on bare metal,” Fernandes continued, “what do you do with the workloads that still run in VMs? Well, you bring those VMs to Kubernetes instead of bringing Kubernetes to the VMs. That’s the idea behind KubeVirt: bringing a mix of container- and VM-based workloads, all managed with a Kubernetes control plane, on a shared platform.”
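
What does that look like in practice? Here is a minimal sketch, assuming a cluster with KubeVirt installed and the official Python client for Kubernetes; the VM name and demo disk image are illustrative choices, not anything Red Hat prescribes. The point is that a virtual machine becomes just another Kubernetes object, submitted through the same API that handles containers.

```python
# A minimal sketch, assuming a cluster with KubeVirt installed and the
# official Python client ("pip install kubernetes"). The VM name and the
# demo disk image are illustrative.
from kubernetes import client, config

config.load_kube_config()  # read the local kubeconfig, as kubectl would

vm = {
    "apiVersion": "kubevirt.io/v1alpha3",  # KubeVirt's API group circa 2019-2020
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-vm"},
    "spec": {
        "running": True,  # start the VM as soon as it is created
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    # a demo image that packages a bootable VM disk
                    "containerDisk": {"image": "kubevirt/cirros-container-disk-demo"},
                }],
            }
        },
    },
}

# The same API machinery that creates Deployments submits this VM.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1alpha3",
    namespace="default", plural="virtualmachines", body=vm,
)
```

Under the hood, KubeVirt’s controllers run each VM inside a pod, which is how the VM inherits Kubernetes’ scheduling, networking, and lifecycle machinery.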

KubeVirt started attracting attention in the open source software development community late last year, intentionally promoting the expression “container-native virtualization,” or CNV, as a cause unto itself. VMware has had a similar effort in the works for the last couple of years: Project Pacific, unveiled in 2019, aims to make Kubernetes native to vSphere.

“Leveraging KubeVirt, the idea is to run containers and virtual machines as equal citizens on the same infrastructure... with Kubernetes as the orchestrator,” Naren Narendra, product marketing director for Kubernetes platform maker Diamanti, said in a recent CNCF webinar. Diamanti’s D20 is a three-node server and storage platform it describes as “Hyperconvergence 3.0,” using certified Kubernetes 1.12 as its lowest-level orchestrator.

“This is important because, regardless of the workload — whether it’s a container or a virtual machine — they both can be orchestrated using Kubernetes,” continued Narendra. “It’s the same mechanism, it’s the same tool, it’s the same learning, it’s the same training that you need to have.”
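
To make the “same mechanism” claim concrete, here is a hedged sketch, assuming KubeVirt’s custom resources are installed on the cluster (the namespace is an illustrative choice): one client, one set of credentials, and one API convention enumerate both kinds of workload.

```python
# A sketch of the "same mechanism, same tool" point: one client and one
# credential set list both containers and VMs. Assumes KubeVirt's custom
# resources are installed; the namespace is illustrative.
from kubernetes import client, config

config.load_kube_config()
namespace = "default"

# Containers: pods, via the core Kubernetes API.
for pod in client.CoreV1Api().list_namespaced_pod(namespace).items:
    print("pod:", pod.metadata.name)

# Virtual machines: KubeVirt's custom resources, via the same API server.
vms = client.CustomObjectsApi().list_namespaced_custom_object(
    group="kubevirt.io", version="v1alpha3",
    namespace=namespace, plural="virtualmachineinstances",
)
for vm in vms.get("items", []):
    print("vm:", vm["metadata"]["name"])
```

The operational payoff Narendra describes is exactly this symmetry: no second control plane, no second credential store, no second set of habits to train.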

Narendra’s point illuminates the problem with earlier infrastructure models that tried to incorporate hyperconvergence as it had been explained at the time: With containerized environments running as layers atop a platform otherwise constructed from VMs that did not share the same network overlay, the result wasn’t really converged. The whole point of an orchestration environment is to distribute workloads, not encapsulate them.

When Cisco first tried to solve the problem of co-existence, it partnered with Docker, Inc. Together, they envisioned a multiplicity of orchestrator environments, each with its own clusters, but all bound together by a common FlexVolume. This central volume would represent the “union,” as Cisco put it, of all the file systems for the containers inside the system. When a container requested more storage, the request was fulfilled transparently by FlexVolume, which made that storage look to the container like an iSCSI logical unit.
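
For context, FlexVolume was Kubernetes’ early interface for out-of-tree storage drivers, since superseded by the Container Storage Interface (CSI). Below is a rough sketch of the shape of that plumbing using the Python client; the driver name and its options are hypothetical stand-ins, not Cisco’s actual HyperFlex driver.

```python
# A hedged sketch of FlexVolume-style plumbing: a PersistentVolume whose
# storage is supplied by an out-of-tree driver. The driver name and its
# options are hypothetical stand-ins, not Cisco's actual HyperFlex driver.
from kubernetes import client, config

config.load_kube_config()

pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="hx-volume"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "10Gi"},
        access_modes=["ReadWriteOnce"],
        flex_volume=client.V1FlexPersistentVolumeSource(
            driver="example.com/hyperflex",       # hypothetical driver name
            fs_type="ext4",
            options={"volumeName": "hx-volume"},  # driver-specific, illustrative
        ),
    ),
)

# To the container this is ordinary mounted storage; the driver decides
# how to present it underneath, e.g., as an iSCSI logical unit.
client.CoreV1Api().create_persistent_volume(pv)
```

In other words, the container’s view of storage stays generic while the vendor’s driver performs the “union” Cisco described beneath it.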

“The existence of the containers inside VMs and the use of FlexVolume,” wrote Sam Halabi for his 2017 book Demystifying HCI, “give containers the same resiliency that HX [Cisco HCI] applies on VMs. VMs and their data — and hence containers and their data — benefit from all the data services such as snapshots, replication, deduplication, compression, and so on.”

What wasn’t obvious at the time was Cisco’s implication behind “resiliency” in this context: a high level of compartmentalization that enabled cohabitants to order resources from the same source without recognizing each other’s existence. It’s a kind of convergence, yes, but not really “hyper” — not the kind of unification that enables messaging and API calls between software components, for example, or that consolidates load balancing into a single process.

This is what a truly cohabitative system would allow. Common sense would tell an IT specialist that the platform at the lowest layer of data center infrastructure would be the one that ultimately unifies the environment — and that Kubernetes, typically placed at the application level, would sit too high up the stack. But Kubernetes’ astounding pervasiveness, and its unyielding quest for ubiquity, may be enough to flip that architectural model on its ear. Now it’s a race to the bottom, so to speak, and KubeVirt may be the engine that gets it there.

Yet traditional HCI platforms — the likes of Cisco’s HyperFlex, Dell EMC’s VxRail, and HPE’s SimpliVity — may yet have the upper hand, if data center and IT managers decide there is no “value-add” in mixing containerized workloads with legacy workloads. Having one load balancer, one network overlay, and a single pane of glass may make life easier for marketing. But such a convergence of convergences may require retooling and retraining on a scale that enterprise IT organizations have historically never undertaken — so drastic that legacy workloads and systems end up being protected and preserved after all. 2020 will determine whether Kubernetes has enough momentum to break through this otherwise impenetrable shield.

About the Author

Scott Fulton III

Contributor

Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
