
Kubernetes- error uploading crisocket: timed out waiting for the condition


I encountered the following issue after the node was rebooted:

[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8smaster" as an annotation
[kubelet-check] Initial timeout of 40s passed.
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition

Steps to get rid of this issue:

  1. Check the hostname again; it might have changed after the reboot.

    Code:

    sudo vi /etc/hostname
    sudo vi /etc/hosts
  2. Perform the following clean-up actions

    Code:

    sudo kubeadm reset
    sudo rm -rf /var/lib/cni/
    systemctl daemon-reload
    systemctl restart kubelet
    sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
  3. Execute the init command with the extra flags shown below

    Code:

    sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=10.10.10.2 --ignore-preflight-errors=all    

    (where 10.10.10.2 is the IP of the master node and 192.168.0.0/16 is the private subnet assigned to Pods; a quick way to verify the result is sketched below)
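
If the init completes without errors, a quick sanity check is to point kubectl at the freshly generated admin.conf and confirm that the node registered under the expected hostname. This is only a sketch using the default kubeadm paths and standard kubectl commands; adjust it to your setup:

    Code:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    hostnamectl                      # static hostname should match /etc/hostname and /etc/hosts
    kubectl get nodes                # the master turns Ready once a Pod network add-on is installed
    kubectl get pods -n kube-system  # control-plane pods should be Running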


I had the same problem on Ubuntu 16.04 amd64 and fixed it with these commands:

swapoff -a    # will turn off the swap
kubeadm reset
systemctl daemon-reload
systemctl restart kubelet
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X  # will reset iptables
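
Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, so the kubelet doesn't hit the same problem after the next restart, the swap entry in /etc/fstab also needs to be commented out. A minimal sketch, assuming a standard fstab line containing " swap " (back up the file and review the change before rebooting):

sudo cp /etc/fstab /etc/fstab.bak          # keep a backup before editing
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # comment out any swap mount line
swapon --show                              # prints nothing when no swap is active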

Also, have a look at the related kubeadm swap issue on GitHub, where people still report hitting the problem after turning swap off.

You may also try adding the --fail-swap-on=false flag to the /etc/default/kubelet file, though it didn't help in my case.
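
For reference, a minimal /etc/default/kubelet carrying that flag could look like the two lines below; this file is sourced by the kubeadm-installed kubelet systemd unit on Debian/Ubuntu, and the kubelet has to be restarted (systemctl daemon-reload && systemctl restart kubelet) for the change to take effect:

# /etc/default/kubelet
KUBELET_EXTRA_ARGS=--fail-swap-on=false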

It seems to be fixed in the latest Kubernetes version; after upgrading the cluster, I haven't experienced the issue again.
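
If you want to confirm which versions you are on before and after such an upgrade, a quick check (output format differs slightly between releases):

kubeadm version -o short
kubelet --version
kubectl version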