
kubernetes role to hide the master


The key here is indeed the kubelet component of Kubernetes.
I suspect managed Kubernetes offerings do not run the kubelet on their control plane hosts at all.
You can do the same on your DIY cluster to prove it.

The main job of the kubelet is to run Pods.
If you don't need to run Pods on a host, you don't start the kubelet there.
Control plane components can run as systemd services or as static containers instead.
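
For illustration, here is a minimal sketch of running kube-apiserver as a plain systemd unit with no kubelet involved. The binary path, flag values and certificate locations are assumptions for the example, not a complete or production configuration:

# /etc/systemd/system/kube-apiserver.service (hypothetical example)
[Unit]
Description=Kubernetes API Server
After=network.target etcd.service

[Service]
# Binary location and flag values are placeholders; a real API server needs more flags.
ExecStart=/usr/local/bin/kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --service-cluster-ip-range=10.96.0.0/12 \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
  --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now kube-apiserver; kube-controller-manager and kube-scheduler can be handled the same way, and no node ever shows up in kubectl get nodes for that host.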

There is an alpha feature to self-host the control plane components (i.e. run them as Pods): https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/self-hosting/
So in the future they may start running the kubelet on master hosts, but there is no need for it now.

The kubelet is the primary “node agent” that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.

https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/

When the kubelet flag --register-node is true (the default), the kubelet will attempt to register itself with the API server. This is the preferred pattern, used by most distros.

https://kubernetes.io/docs/concepts/architecture/nodes/#self-registration-of-nodes
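
As a rough sketch, those registration knobs look like this on the kubelet command line (the node name is a placeholder and other required flags are omitted):

# Default: the kubelet registers itself with the API server under the given name.
kubelet --register-node=true --hostname-override=worker-1 ...

# Opting out of self-registration; the Node object must then be created manually.
kubelet --register-node=false ...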


After you have created a cluster, you can run the command below to delete the master node

kubectl delete node master-node-name

After you do this you can no longer see that master node in kubectl get nodes, but you should still be able to interact with the cluster normally. Only the node's entry in etcd is deleted here.
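
You can verify this on a test cluster with the usual read commands:

kubectl get nodes                 # the deleted master no longer appears
kubectl get pods -n kube-system   # the API server still answers requests normally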

Another way to achieve the same result is to configure the kubelet not to register the node, via the --register-node=false flag, and administer the Node object manually, as sketched below.

I believe this is what the managed kubernetes service providers do internally.
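
A minimal sketch of that manual-administration pattern, assuming a node named worker-1 (the name, label and file name are made up for the example):

# Start the kubelet without self-registration (other required flags omitted):
kubelet --register-node=false ...

# node.yaml -- create the Node object yourself through the API:
apiVersion: v1
kind: Node
metadata:
  name: worker-1
  labels:
    kubernetes.io/hostname: worker-1

kubectl apply -f node.yaml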


How would you disable the kubelet "at all"? I mean, I install my k8s master with "kubeadm init" and I don't install or run "systemctl start kubelet", yet my node still registers and remains as a "Not Ready" node, so the registering part is still there.

If you've set up your Kubernetes cluster using kubeadm, the kubelet is required on all nodes, including the master, as it deploys the vast majority of key cluster components such as kube-apiserver, kube-controller-manager or kube-scheduler as Pods in the kube-system namespace (you can list them with kubectl get pods -n kube-system). In other words: you cannot run a kubeadm cluster without a running kubelet on your master node. Without it, none of the system Pods forming your Kubernetes cluster can be deployed. See also this section in the official Kubernetes documentation.
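
You can see this dependency directly on a kubeadm control plane node: the control plane itself is declared as static Pod manifests that only the kubelet reads. The paths below are the usual kubeadm defaults, shown for illustration:

ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

# The kubelet turns these files into running Pods via its staticPodPath setting:
grep staticPodPath /var/lib/kubelet/config.yaml
# staticPodPath: /etc/kubernetes/manifests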

As for Self-hosting the Kubernetes control plane mentioned by @Ivan, it's better to read it carefully in the official docs to understand how it really works:

kubeadm allows you to experimentally create a self-hosted Kubernetes control plane. This means that key components such as the API server, controller manager, and scheduler run as DaemonSet pods configured via the Kubernetes API instead of static pods configured in the kubelet via static files.

It isn't written anywhere that you don't need the kubelet on the master node at present. On the contrary, it says that when using the self-hosted Kubernetes control plane (currently experimental) approach in kubeadm:

key components such as the API server, controller manager, and scheduler run as DaemonSet Pods configured via the Kubernetes API instead of static Pods configured in the kubelet via static files.

So again: in both approaches the key cluster components run as Pods. With self-hosting the DaemonSets are configured via the Kubernetes API, but they are still Pods; and yes, static Pods configured via static files (which is the current kubeadm approach) still need a kubelet on the master node that can read those static files and create the Pods declared in them.
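
To make the distinction concrete, here is the skeleton of each form; both end up as Pods created by a kubelet, only the place where the definition lives differs (the component name, image tag and node label below are placeholders, not what kubeadm generates verbatim):

# Static Pod: a plain Pod manifest dropped into the kubelet's staticPodPath,
# e.g. /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - name: kube-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.18.0   # example version
---
# Self-hosted: the same component declared as a DaemonSet through the API
# (tolerations for the master taint omitted for brevity):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: self-hosted-kube-scheduler
  namespace: kube-system
spec:
  selector:
    matchLabels:
      component: kube-scheduler
  template:
    metadata:
      labels:
        component: kube-scheduler
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: kube-scheduler
        image: k8s.gcr.io/kube-scheduler:v1.18.0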