kubeadm v1.18.2 with CRI-O version 1.18.2 failing to start master node from private repo on CentOS 7 / RHEL 7


So the problem is not exactly a bug in CRI-O, as we (and the CRI-O dev team) initially thought. Rather, there are several configurations that need to be applied if the user wants to use CRI-O as the CRI for Kubernetes and also wants to pull images from a private repo.

I will not repeat the CRI-O configuration here, as it is already documented in the ticket I raised with the team: Kubernetes v1.18.2 with crio version 1.18.2 failing to sync with kubelet on RH7 #3915.

The first configuration to apply is setting up the container registries from which the images will be pulled:

$ cat /etc/containers/registries.conf
[[registry]]
prefix = "k8s.gcr.io"
insecure = false
blocked = false
location = "k8s.gcr.io"

[[registry.mirror]]
location = "my.private.repo"
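
After editing registries.conf, CRI-O needs to pick up the change. As a quick sanity check (assuming crictl is installed and pointed at the CRI-O socket, and that the pause image tag below matches your Kubernetes version), the mirror can be exercised directly:

$ sudo systemctl restart crio
$ sudo crictl pull k8s.gcr.io/pause:3.2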

CRI-O recommends passing this configuration as a flag to the kubelet (see haircommander/cri-o-kubeadm), but for me this configuration alone was not working.
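
For reference, a minimal sketch of what that flag-based approach looks like on CentOS 7 / RHEL 7, where the stock kubelet systemd unit reads extra flags from /etc/sysconfig/kubelet (this sketch is my assumption, not taken verbatim from the gist):

$ cat /etc/sysconfig/kubelet
# Assumed file: extra flags sourced by the kubelet systemd unit on CentOS/RHEL
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd

$ sudo systemctl daemon-reload && sudo systemctl restart kubelet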

I went back to the Kubernetes manual, which recommends not passing the flag to the kubelet directly but instead setting it in the file /var/lib/kubelet/config.yaml at run time. For me this is not possible, as the node needs to start with the CRI-O socket and not any other socket (ref. Configure cgroup driver used by kubelet on control-plane node).
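
For reference, the run-time file the manual refers to is generated by kubeadm during init, which is exactly why it cannot be edited up front; on a node where init has already run, a check would look like this (hypothetical output, assuming the sample config below was used):

$ grep cgroupDriver /var/lib/kubelet/config.yaml
cgroupDriver: systemd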

So I managed to get it up and running by setting this in my kubeadm config file instead; a sample is below:

$ cat /tmp/config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: node.name
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
controlPlaneEndpoint: 1.2.3.4:6443
imageRepository: my.private.repo
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.85.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
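
Before initializing, the same config file can be used to pre-pull all control-plane images from the private repo, which quickly confirms that imageRepository and the registry mirror are wired up correctly (assuming my.private.repo actually mirrors every required k8s.gcr.io image for v1.18.2):

$ sudo kubeadm config images pull --config /tmp/config.yaml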

Then the user can simply start the master / worker node with the flag --config <file.yml>, and the node will launch successfully.
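
Concretely, for the master node above that is (a worker would instead use kubeadm join with its own JoinConfiguration, which is not shown here):

$ sudo kubeadm init --config /tmp/config.yaml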

Hope all the information here will help someone else.