Mastering Kubernetes in the Cloud: A Guide to Cloud Controller Manager
Cloud Controller Manager is a crucial yet often overlooked Kubernetes component that streamlines cloud integrations. Here's why it matters and how to use it effectively.

If you choose to run a Kubernetes cluster in a public cloud, you'll want to know a thing or two about the Kubernetes Cloud Controller Manager. Although Cloud Controller Manager doesn't feature prominently in most discussions of Kubernetes components, it plays a critical role in streamlining the deployment and management of Kubernetes in the cloud.
To provide guidance on why and how to use Cloud Controller Manager, this article explains how the component works, why it's important, and how to get started using it.
What Is Kubernetes Cloud Controller Manager?
Cloud Controller Manager is a component within Kubernetes that integrates Kubernetes clusters with specific cloud platforms — such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform.
Cloud Controller Manager is part of the Kubernetes control plane, meaning the core set of services responsible for managing Kubernetes clusters.
What Does Cloud Controller Manager Do?
The Cloud Controller Manager component solves a key challenge that engineers face when deploying Kubernetes in the cloud: the need to integrate Kubernetes with various cloud platforms, each of which works in a different way.
To understand why this is important, let's step back a bit and think about how Kubernetes works from an infrastructure perspective. In theory, one of the factors that makes Kubernetes so popular is that it's an open source, vendor-agnostic platform. This means that, for the most part, Kubernetes works in a consistent way regardless of where you choose to set up your cluster — whether on-prem or in any of the various public clouds.
To put this another way, Kubernetes theoretically treats each node (meaning a server that forms part of a Kubernetes cluster) the same. It's not supposed to matter which operating system kernel your node runs, which type of CPU it has, whether it's a virtual machine or bare-metal, and so on.
That said, there are nuanced differences between the cloud platforms that you could use to host a Kubernetes cluster. Each cloud provider uses different APIs to create and manage the servers that function as Kubernetes nodes, as well as to configure resources like network load balancers.
This means that Kubernetes can't actually ignore the underlying infrastructure platform it is hosted on or be truly vendor-agnostic. It needs to be able to support the unique APIs of whichever platform is hosting a cluster.
This is where Cloud Controller Manager comes in. Cloud Controller Manager essentially serves as a compatibility layer that translates generic Kubernetes requests — like "identify a node's IP address" or "create a load balancer" — into API requests that work with a specific cloud platform.
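As a concrete illustration, a Service of type LoadBalancer is one such generic request. The manifest below (names and labels are illustrative) says nothing cloud-specific; it is the service controller inside Cloud Controller Manager that translates it into an API call to provision, say, an AWS ELB or an Azure Load Balancer:

```yaml
# Illustrative Service manifest. The "LoadBalancer" type asks for a real
# cloud load balancer; Cloud Controller Manager provisions it through the
# cloud provider's API and writes the resulting address back to the Service.
apiVersion: v1
kind: Service
metadata:
  name: web-lb            # example name
spec:
  type: LoadBalancer
  selector:
    app: web              # assumes pods labeled app: web
  ports:
  - port: 80
    targetPort: 8080
```

The same manifest works unchanged across clouds; only the Cloud Controller Manager behind it differs.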
Benefits of Cloud Controller Manager
The main benefit of Cloud Controller Manager is that it offers a simple way for Kubernetes to interact with cloud provider APIs without requiring any special configuration or code implementation on the part of Kubernetes users. Cluster admins can simply choose which cloud they need to integrate with, then enable the appropriate Cloud Controller Manager.
In addition, from the perspective of the Kubernetes project, Cloud Controller Manager is advantageous because it separates cloud-specific compatibility logic into a distinct component. Rather than building support for each cloud platform's APIs directly into the Kubernetes control plane, Cloud Controller Manager uses a plugin architecture that allows the various cloud providers to write the logic necessary for Kubernetes to integrate with their APIs, then make it available to Kubernetes users as a component that the users can optionally enable.
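Conceptually, the plugin architecture can be sketched as a Go interface that each cloud vendor implements. The sketch below is a heavily simplified, hypothetical stand-in for the real interface in the k8s.io/cloud-provider package (names and signatures here are abbreviated for illustration, not the actual API):

```go
package main

import "fmt"

// LoadBalancer is a toy version of the capability a cloud plugin exposes
// for managing load balancers.
type LoadBalancer interface {
	EnsureLoadBalancer(serviceName string) (ip string, err error)
}

// CloudProvider is a simplified stand-in for the interface that the real
// k8s.io/cloud-provider package defines. Each vendor ships its own
// implementation; Cloud Controller Manager only talks to the interface.
type CloudProvider interface {
	// ProviderName identifies the cloud (e.g., "aws", "azure", "gce").
	ProviderName() string
	// LoadBalancer returns the provider's load-balancer implementation,
	// or false if the provider does not support load balancers.
	LoadBalancer() (LoadBalancer, bool)
}

// fakeCloud is a toy implementation showing the plugin shape.
type fakeCloud struct{}

func (fakeCloud) ProviderName() string { return "fake" }

func (fakeCloud) LoadBalancer() (LoadBalancer, bool) { return fakeLB{}, true }

type fakeLB struct{}

func (fakeLB) EnsureLoadBalancer(serviceName string) (string, error) {
	// A real provider would call the cloud's API here.
	return "203.0.113.10", nil
}

func main() {
	var cloud CloudProvider = fakeCloud{}
	if lb, ok := cloud.LoadBalancer(); ok {
		ip, _ := lb.EnsureLoadBalancer("my-service")
		fmt.Println(cloud.ProviderName(), ip)
	}
}
```

Because the control loops in Cloud Controller Manager depend only on the interface, a vendor can update its implementation whenever its APIs change without touching core Kubernetes.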
This approach makes it easy for cloud providers to update the compatibility layer as needed in order to keep it in sync with their APIs. It also prevents Kubernetes developers from having to be responsible for keeping compatibility logic up-to-date within the core Kubernetes codebase.
When to Use Cloud Controller Manager
Deciding whether to use Cloud Controller Manager is straightforward in most cases. It boils down to the following considerations:
If you're running Kubernetes on top of a public cloud platform, you should enable the component. An exception is cases where your cluster is very simple and doesn't include any type of complex node or networking configuration. In that case, Cloud Controller Manager may not be necessary because you would not be making requests that require the use of the cloud provider's APIs.
If you're running Kubernetes on bare-metal servers that you are managing yourself, Cloud Controller Manager is not necessary because Kubernetes can interact with nodes and other resources directly, without having to use special APIs.
How to Use Cloud Controller Manager
In most Kubernetes distributions, Cloud Controller Manager is installed by default but not enabled by default. (An exception is most managed Kubernetes services, like Amazon EKS, which use Cloud Controller Manager by default to integrate with the cloud host environment.)
To turn on Cloud Controller Manager in a Kubernetes cluster that you've deployed without using a managed cloud service, add the following option to the configuration settings for kube-controller-manager, kube-apiserver, and kubelet within your cluster:
--cloud-provider=external
(Note that if kube-controller-manager, kube-apiserver, or kubelet are already running, you'll need to stop and restart them after making the configuration change for it to take effect.)
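If you bootstrap your cluster with kubeadm, one common way to set this flag is through the kubeadm configuration file. The following is a sketch; adapt the API version and fields to your cluster:

```yaml
# Sketch of a kubeadm configuration that sets --cloud-provider=external
# for the kubelet, API server, and controller manager.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: external
controllerManager:
  extraArgs:
    cloud-provider: external
```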
Then, deploy the Cloud Controller Manager that matches the cloud you are using. You can do this by creating a DaemonSet using the following sample YAML code (borrowed from the Kubernetes documentation):
# This is an example of how to set up cloud-controller-manager as a DaemonSet in your cluster.
# It assumes that your masters can run pods and have the role node-role.kubernetes.io/master.
# Note that this DaemonSet will not work straight out of the box for your cloud; it is
# meant to be a guideline.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:cloud-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: cloud-controller-manager
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: cloud-controller-manager
  name: cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: cloud-controller-manager
    spec:
      serviceAccountName: cloud-controller-manager
      containers:
      - name: cloud-controller-manager
        # for in-tree providers we use registry.k8s.io/cloud-controller-manager
        # this can be replaced with any other image for out-of-tree providers
        image: registry.k8s.io/cloud-controller-manager:v1.8.0
        command:
        - /usr/local/bin/cloud-controller-manager
        - --cloud-provider=[YOUR_CLOUD_PROVIDER] # Add your own cloud provider here!
        - --leader-elect=true
        - --use-service-account-credentials
        # these flags will vary for every cloud provider
        - --allocate-node-cidrs=true
        - --configure-cloud-routes=true
        - --cluster-cidr=172.17.0.0/16
      tolerations:
      # this is required so CCM can bootstrap itself
      - key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
        effect: NoSchedule
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      # this is to restrict CCM to only run on master nodes
      # the node selector may vary depending on your cluster setup
      nodeSelector:
        node-role.kubernetes.io/master: ""
Be sure to replace the YOUR_CLOUD_PROVIDER placeholder and adjust the network settings (such as --cluster-cidr) in the sample code as needed for your environment.
Alternatively, you can build and run container images based on your cloud provider's controller-manager code. Azure offers guidance on how to do this in its GitHub repository, and GCP also maintains Cloud Controller Manager code on GitHub. Likewise, AWS provides documentation on deploying Cloud Controller Manager using pre-existing resources available on GitHub.