
Kubernetes Master Worker Node Kubeadm Join issue


You should first try:

    kubeadm reset

The join fails because a previous kubeadm run already left Kubernetes state on the node (configuration files, certificates, and a kubelet bound to its port), and kubeadm refuses to overwrite it.
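A minimal sketch of that sequence on the failing node (the endpoint, token, and hash below are hypothetical placeholders, not values from this question):

    # Wipe the previous kubeadm state on this node (run as root)
    kubeadm reset

    # Then retry the join with the command printed by `kubeadm init`
    kubeadm join 192.168.1.10:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>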


Regarding kubeadm reset:

1 ) As described here:

The "reset" command executes the following phases:preflight              Run reset pre-flight checksupdate-cluster-status  Remove this node from the ClusterStatus object.remove-etcd-member     Remove a local etcd member.cleanup-node           Run cleanup node.

So I recommend running the preflight phase first (by using the --skip-phases flag to skip the other phases) before executing all the phases together.
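A sketch of that two-step flow, assuming a kubeadm version whose reset command supports phases:

    # Run only the pre-flight checks by skipping the remaining phases
    kubeadm reset --skip-phases=update-cluster-status,remove-etcd-member,cleanup-node

    # If the checks pass, run the full reset (all phases)
    kubeadm reset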

2 ) When you execute the cleanup-node phase you can see that the following steps are being logged:

    ..
    [reset] Stopping the kubelet service
    [reset] Unmounting mounted directories in "/var/lib/kubelet"
    [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
    [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
    ..
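The cleanup-node phase can also be invoked on its own, which is handy when you only need the steps logged above:

    # Run only the node cleanup steps shown in the log above
    kubeadm reset phase cleanup-node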

Let's go over the [reset] entries and see how they solve the 4 errors you mentioned:

A ) The first [reset] entry will fix the Port 10250 is in use issue (kubelet was listening on this port).

B ) The third [reset] entry will fix the /etc/kubernetes/manifests is not empty error, and the fourth [reset] entry will fix the /etc/kubernetes/kubelet.conf already exists error.

C ) And we're left with the /etc/kubernetes/pki/ca.crt already exists error.
I thought that the third [reset] entry of removing /etc/kubernetes/pki should take care of that.
But, in my case, when I ran the kubeadm join with a verbosity level of 5 (by appending the --v=5 flag) I encountered the error below:

    I0929 ... checks.go:432] validating if ...
    [preflight] Some fatal errors occurred:
    [ERROR FileAvailable-etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists

So I had to remove the /etc/kubernetes/pki folder manually and then the kubeadm join was successful again.
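Putting those fixes together, a hedged cleanup sequence for a node where kubeadm reset was not enough (run as root; the endpoint, token, and hash are placeholders):

    # Confirm nothing is still listening on the kubelet port (error A)
    ss -tlnp | grep 10250

    # Manually remove the leftover PKI directory (error C)
    rm -rf /etc/kubernetes/pki

    # Retry the join with extra verbosity to surface any remaining pre-flight failures
    kubeadm join <control-plane-endpoint> --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> --v=5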


The kubeadm join command should be run on the worker node, not on the master!
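If you no longer have the original join command, you can regenerate it on the control-plane node and paste the output on the worker:

    # On the master / control-plane node: print a fresh join command
    kubeadm token create --print-join-command

    # On the worker node: run the printed command, which looks like
    # kubeadm join <control-plane-endpoint> --token <token> --discovery-token-ca-cert-hash sha256:<hash>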
