4 Open-Source Tools for Running VMs in a Cloud-Native Environment
Are your legacy workloads keeping you from going cloud-native? Here are four solutions for running your virtual machines in a cloud-native environment, with minimal tweaking.
If you're like many IT pros today, you want to go cloud-native. But you have legacy workloads, like monoliths, that will only run on virtual machines.
You could maintain separate environments for your cloud-native workloads and your legacy ones. But wouldn't it be better if you could find a way to integrate the VMs into your cloud-native setup, so you could manage them seamlessly alongside your containers?
Fortunately, you can. This article walks through four open-source solutions for running VMs in a cloud-native environment, with minimal reconfiguration or tweaking required.
Why Run VMs in Cloud-Native Environments?
Before diving into the tools, let's consider why it's important to be able to run VMs in an environment that otherwise consists of containerized, loosely coupled, cloud-native workloads.
The main reason is simple: VMs that host legacy workloads are not going away, but maintaining separate hosting environments to run them is a burden.
Meanwhile, transforming your legacy workloads to meet cloud-native standards may not be an option. Although in a perfect world you'd have the time and engineering resources to refactor your legacy workloads so they can run natively in a cloud-native environment, that's not always possible in the real world.
So, you need tools — like one of the four open-source solutions described below — that let legacy VM workloads coexist peacefully with cloud-native workloads.
1. Running VMs with KubeVirt
Probably the most popular solution for deploying virtual machines within a cloud-native environment is KubeVirt.
KubeVirt works by running virtual machines inside Kubernetes Pods. If you want to run a virtual machine alongside containers, then you simply install KubeVirt into an existing Kubernetes cluster with:
export RELEASE=v0.35.0

# Deploy the KubeVirt operator
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml

# Create the KubeVirt CR (instance deployment request), which triggers the actual installation
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml

# Wait until all KubeVirt components are up
kubectl -n kubevirt wait kv kubevirt --for condition=Available
Then, you create and apply a YAML file that describes each of the virtual machines you want to run. KubeVirt executes each machine inside a container, so from Kubernetes' perspective, the VM is just a regular Pod (with a few limitations, which are discussed in the following section). However, you still get a VM image, persistent storage, and fixed CPU and memory allocations, just as you would with a conventional VM.
What this means is that KubeVirt requires essentially no changes to your VM. All you have to do is install KubeVirt and create deployments for your VMs to make them operate as Pods.
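For illustration, here is a minimal VirtualMachine manifest of the kind KubeVirt accepts. The VM name and the demo container-disk image are placeholders; in practice you would point the volume at your own VM image:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm                # placeholder name
spec:
  running: true               # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio   # present the disk to the guest as a virtio device
        resources:
          requests:
            memory: 64Mi      # fixed memory allocation, as with a conventional VM
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo  # demo image; swap in your own
```

Once applied with kubectl apply -f, the VM shows up as a Pod in the cluster, and you can manage its lifecycle with KubeVirt's virtctl tool (for example, virtctl start testvm or virtctl console testvm).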
2. The Virtlet Approach
If you want to go all-in on treating VMs as Pods, you might like Virtlet, an open-source tool from Mirantis.
Virtlet is similar to KubeVirt in that it also lets you run VMs inside Kubernetes Pods. However, the key difference between the two tools is that Virtlet provides even deeper integration of VMs into the Kubernetes Pod specification. This means you can do things with Virtlet like manage VMs as part of DaemonSets or ReplicaSets, which you can't do natively using KubeVirt. (KubeVirt has equivalent features, but they are add-ons rather than native parts of Kubernetes.)
Mirantis also says that Virtlet usually offers better networking performance than KubeVirt, although it's hard to know definitively because there are so many variables involved in network configuration.
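To give a sense of that deeper Pod-level integration, a Virtlet VM is described with an ordinary Pod spec plus a runtime annotation, following the conventions in Virtlet's documentation. The image reference and node selector here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cirros-vm
  annotations:
    # Tell Kubernetes to schedule this Pod through the Virtlet runtime
    kubernetes.io/target-runtime: virtlet.cloud
spec:
  nodeSelector:
    extraRuntime: virtlet     # only schedule on nodes where Virtlet is running
  containers:
    - name: cirros-vm
      # The virtlet.cloud/ prefix marks the "image" as a VM image, not a container image
      image: virtlet.cloud/cirros
```

Because this is a plain Pod spec, the same template can sit inside a ReplicaSet or DaemonSet definition, which is exactly the kind of native integration KubeVirt only approximates with add-ons.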
3. Istio Support for VMs
What if you don't want to manage your VMs as if they were containers? What if you want to treat them like VMs, while still allowing them to integrate easily with microservices?
Probably the best solution is to connect your VMs to Istio, the open-source service mesh. Under this approach, you can deploy and manage VMs using standard VM tooling while still handling networking, load balancing, and so on via Istio.
Unfortunately, the process for connecting VMs to Istio is relatively tedious, and it is currently difficult to automate. It boils down to installing Istio on each of the VMs you want to connect, configuring a namespace for them, and then connecting each VM to Istio. For a full rundown of the Istio-VM integration process, check out the documentation.
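In broad strokes, the onboarding flow looks something like the following. This is an illustrative sketch rather than a copy-paste recipe: the exact istioctl flags and generated files vary between Istio releases, and names such as vm-workloads, vm-sa, and legacy-app are placeholders.

```shell
# 1. Create a namespace and service account for the VM workloads
kubectl create namespace vm-workloads
kubectl create serviceaccount vm-sa -n vm-workloads

# 2. Describe the VM workloads as a WorkloadGroup and generate the
#    configuration files the VM's sidecar will need
istioctl x workload group create --name legacy-app --namespace vm-workloads \
  --labels app=legacy-app --serviceAccount vm-sa > workloadgroup.yaml
kubectl apply -f workloadgroup.yaml -n vm-workloads
istioctl x workload entry configure -f workloadgroup.yaml -o vm-files --autoregister

# 3. Copy the generated files (root cert, cluster.env, mesh config) to each VM,
#    install the Istio sidecar package there, and start it, e.g.:
#      sudo dpkg -i istio-sidecar.deb && sudo systemctl start istio
```

As the article notes, step 3 has to be repeated on every VM, which is what makes the process tedious and hard to automate at scale.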
4. Containers and VMs Side-by-Side with OpenStack
The techniques we've looked at so far involve taking cloud-native platforms like Kubernetes or Istio and adding VM support to them.
An alternative approach is to take a non-cloud-native platform that lets you run VMs, then graft cloud-native tooling onto it.
That's what you get if you run VMs and containers together on OpenStack. OpenStack was originally designed as a way to deploy VMs (among other types of resources) to build a private cloud. But OpenStack can now also host Kubernetes.
So, you could use OpenStack to deploy and manage VMs, while simultaneously running cloud-native, containerized workloads on OpenStack via Kubernetes. You'd end up with two orchestration layers — the underlying OpenStack installation and then the Kubernetes environment — so this approach is more complex from an administrative perspective.
Its main benefit, however, is that you'd have the ability to keep your VMs and containers relatively separate from each other because the VMs would not be part of Kubernetes. Nor would you be limited to Kubernetes tooling for managing the VMs. You can treat your VMs as standard VMs, while treating containers as standard containers.
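As a rough sketch of what this dual-layer setup looks like day to day, you might provision VMs with the openstack CLI while deploying containers through kubectl against a Kubernetes cluster hosted on OpenStack (for instance, one created with OpenStack's Magnum service). The flavor, image, template, and cluster names below are placeholders:

```shell
# Launch a legacy VM directly on OpenStack (Nova) -- standard VM tooling
openstack server create --flavor m1.small --image ubuntu-20.04 legacy-app-vm

# Create a Kubernetes cluster on OpenStack via Magnum
# (assumes a cluster template named k8s-template already exists)
openstack coe cluster create --cluster-template k8s-template \
  --node-count 3 cloud-native-cluster

# Deploy containerized workloads to that cluster with ordinary kubectl tooling
kubectl apply -f cloud-native-app.yaml
```

The two command sets never overlap: the VM lives entirely in OpenStack's world, while the containers live entirely in Kubernetes' world, which is precisely the separation this approach is meant to preserve.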
Conclusion
The open-source ecosystem offers a number of approaches for helping VMs coexist with cloud-native workloads. The best solution for you depends on whether you want to take a Kubernetes-centric approach (in which case KubeVirt or Virtlet is the way to go), or you want your VMs to exist alongside containers without being tightly integrated with them (in which case OpenStack makes the most sense). And if you just want integration at the network level but not the orchestration level, consider connecting VMs to an Istio service mesh.