5 Reasons Why Kubernetes Is So Challenging
Among the reasons why Kubernetes is so challenging are incomplete tooling and confusing terms.
September 20, 2021
This may be an unpopular opinion, so I’ll put it bluntly: Kubernetes is such an unholy mess of unnecessary architectural complexity, half-baked configuration tooling and constantly changing rules that I have sometimes wondered if the whole thing is just a joke. In fact, the reasons why Kubernetes is so challenging can sometimes seem to outnumber the reasons why Kubernetes brings value to organizations.
I have imagined engineers inside Google--where Kubernetes originated--saying to themselves, “Hey, let’s make a system that is so needlessly complex that no reasonable person would ever touch it, then see how many hapless engineers we can get to wed their careers to it.”
OK, I’m being a bit hyperbolic. I don’t actually think that Google intended Kubernetes as a massive joke, and I recognize that Kubernetes offers a range of important benefits. (In fact, I wrote a blog post on that very topic, to offset the hate mail that this piece may generate.) But I do think that Kubernetes as a platform suffers from some deep architectural and operational challenges that developers should address now if they want Kubernetes to become a platform that ordinary people actually use and value.
To prove the point, here are the five major reasons why Kubernetes is so maddening to me.
Kubernetes doesn’t know what it wants to be.
Kubernetes is basically a platform for orchestrating containers. Containers are really the only type of workload that Kubernetes can handle without special add-ons. But that hasn’t stopped the Kubernetes community from trying to extend Kubernetes into an orchestrator for VMs and serverless, too, through projects like KubeVirt and Knative. Although these projects are external to Kubernetes itself, they’re closely aligned with it. The Cloud Native Computing Foundation, which hosts Kubernetes development, has been pretty vocal about promoting Kubernetes as a tool for orchestrating VMs.
And then there are folks talking about reasons to deploy monoliths on Kubernetes, which is kind of like suggesting that you make your cat live in a doghouse.
Now, you could argue that it would be great if we could deploy everything under the sun with Kubernetes. I’d respond that IT pros have been managing most types of workloads for many years without Kubernetes, and have been getting along just fine. The problem Kubernetes was designed to solve--container orchestration--is a relatively narrow and specific one. Trying to turn Kubernetes into a management platform for VMs, monoliths and other things that don’t involve containers is creating a solution for a problem that doesn’t exist.
This matters because it leaves me with the impression that no one is really sure what Kubernetes is even supposed to be. Is it just a container orchestrator, or is it going to become the place where you deploy every type of application you could ever imagine--including apps that would be much simpler to deploy without using Kubernetes? I think the Kubernetes community needs to answer these questions in order to establish a clear mission.
Kubernetes tooling is incomplete.
Another reason why Kubernetes is so challenging is that it seems to struggle to decide what it does and doesn’t want its native tooling to do. A lot of the built-in systems can do certain things on their own, but they require integration with external tools to provide complete functionality.
Take security contexts, for example. Kubernetes provides some rules that you can enforce natively with security contexts, such as preventing containers from running in privileged mode. But it mostly expects you to integrate with external frameworks, like AppArmor or SELinux, to make full use of security contexts.
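To make that split concrete, here’s a minimal pod spec sketch (the pod name, image and SELinux level are purely illustrative). The first few fields are enforced natively by Kubernetes, while the SELinux options simply hand off to a framework that has to be set up on the node itself:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo        # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true                # enforced natively by the kubelet
  containers:
  - name: app
    image: nginx:1.21                 # any image; the tag is illustrative
    securityContext:
      privileged: false               # native: blocks privileged mode
      allowPrivilegeEscalation: false # native
      seLinuxOptions:                 # delegated: only meaningful if SELinux
        level: "s0:c123,c456"         #   is enabled and labeled on the node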
Admission controllers are similar. Kubernetes has a bunch of built-in admission controllers that you can use (provided they are enabled in your particular version of Kubernetes, of course). But then it also uses webhooks to let you define custom admission controllers. While it’s great to have this flexibility, it’s also kind of confusing.
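For example, registering a custom admission controller means pointing the API server at your own webhook service. A rough sketch of the registration is below; the webhook name, service and path are hypothetical, and a real setup also needs a server running behind the service plus a CA bundle:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy-webhook            # hypothetical
webhooks:
- name: pods.example.com              # hypothetical
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                 # reject requests if the webhook is unreachable
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: policy-system        # hypothetical
      name: pod-policy-service        # hypothetical
      path: /validate
    # caBundle: <base64-encoded CA certificate> -- omitted here

The built-in controllers, by contrast, are toggled through the API server’s --enable-admission-plugins flag, which is a very different workflow.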
I tend to think that the lives of both Kubernetes developers and users would be simpler if each category of built-in Kubernetes tool either provided complete domain functionality natively or was designed only to integrate with external tools that can provide that functionality. As it stands, the tooling feels kind of half-baked--as if developers said, “We’re going to kind of implement some things ourselves, but we’ll force our users to use third-party tools to address the stuff we don’t feel like building.”
The community is rife with confusing terminology.
This is a bit of a potshot, but I’m going to take it: Like much of the IT industry, the Kubernetes community suffers from a predilection for making up needlessly complex and obscure terms.
Case in point: “Kubernetes,” a word that no one outside of tech has any idea how to pronounce. And even those inside tech debate the proper pronunciation.
I also kind of hate the term “pods.” Just call them containers or container groups, which is what they are. Likewise, what Kubernetes calls a “deployment” is not what most people think of when they hear “deployment.” The same goes for “service account.” And don’t get me started on “admission controller!”
To be fair, I suppose the IT industry as a whole is teeming with terminology that is more complex or obscure than it needs to be. It’s not just an issue with Kubernetes. Still, one of the reasons why Kubernetes is harder to learn is that many Kubernetes terms don’t really make a lot of sense at first glance.
Roles vs. ClusterRoles: An arbitrary construct.
This criticism is a bit specific and nitpicky, but another reason why Kubernetes is so challenging is the way the platform’s RBAC system distinguishes between Roles and ClusterRoles. The former enforce access controls within individual namespaces, and the latter apply cluster-wide, across all namespaces in a cluster.
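To illustrate the distinction, here’s a sketch of the two constructs side by side (the names and namespace are hypothetical). Both grant read access to pods; the only real difference is scope:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a               # applies only inside this namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader-global         # applies across every namespace in the cluster
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]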
Now, I totally understand why separating workloads into namespaces is useful. I’m not criticizing namespaces as a concept.
Instead, what irks me is that it feels unnecessarily complex and arbitrary to force Kubernetes users to create different categories of RBAC rules for namespaces and clusters.
After all, why stop there? Why not have PodRoles and NodeRoles and TuesdayAfternoonRoles (the latter would, of course, be in effect only Tuesday afternoons) so that you can apply RBAC configurations based on various other constructs to which you may or may not want to apply a specific set of rules?
Plus, given that multi-cluster Kubernetes has become a thing, ClusterRoles feel even less useful, because they aren’t necessarily universal. If you are managing multiple clusters, you’ll need to create multiple ClusterRoles. At that point, it feels like you may as well do away with ClusterRoles entirely and define everything on a namespace-by-namespace basis using Roles.
Kubernetes features are constantly changing.
Kubernetes became an open source project in 2014. Seven years and more than 20 releases later, Kubernetes is still constantly changing with each major release.
I get that platforms change and add new features all the time. That’s great. But what’s not great is when you make huge changes that upend existing features, as Kubernetes has a tendency to do.
Take, for example, pod security policies, which for a couple of years were supposed to be essential to securing Kubernetes containers. (Excuse me--I should say “pods.”) And then, along came Kubernetes 1.21, which deprecated pod security policies in favor of a replacement, the Pod Security admission mechanism--a similar but different feature.
I understand why this change was made--pod security policies were widely considered confusing and error-prone--but Kubernetes is no longer in beta (at least, not officially), and I think it’s kind of problematic to break a major feature not that long after you introduced it.
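For what it’s worth, the replacement model moves the policy decision onto the namespace itself via labels rather than a dedicated cluster resource. A minimal sketch, assuming the newer Pod Security admission controller is available in your cluster (the namespace name is hypothetical):

apiVersion: v1
kind: Namespace
metadata:
  name: team-a                                       # hypothetical
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject pods that violate the policy
    pod-security.kubernetes.io/warn: baseline        # warn, but still admit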
Conclusion
Let me again insist that I think Kubernetes overall is great. But I also think it’s more complex than it has to be. Indeed, I suspect that some folks want it to be overly complex, because then they can create a whole universe of third-party management tools that address Kubernetes’s complexity.
The good news is that Kubernetes is still relatively new, and it’s not too late to make things simpler. I mean, I guess we’re probably not going to change the nomenclature for things like pods and namespaces at this point, but Kubernetes developers could still make decisions that err on the side of simplicity.