coredns can't start when using CRI-O and with SELinux on Kubernetes



On the host, do the following:

chcon -R -t container_file_t /var/lib/kubelet/container_id/volumes

This will change the label on the host volumes to make them accessible to the container's SELinux label.
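
To verify that the relabel took effect, you can check the SELinux context of the directory (container_id here is a placeholder, as above):

ls -dZ /var/lib/kubelet/container_id/volumes

The type field of the printed context should now be container_file_t.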

I do not know of a good way to handle the passing in of secrets, but adding the rule

allow container_t tmpfs_t:file open;

would probably be best.
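
If you go that route, here is a minimal sketch of loading such a rule as a local policy module, assuming the rule above is all that's missing (the module name coredns_tmpfs is my own choice):

# coredns_tmpfs.te -- local policy module carrying the allow rule above
module coredns_tmpfs 1.0;

require {
    type container_t;
    type tmpfs_t;
    class file open;
}

allow container_t tmpfs_t:file open;

Compile and install it with the standard SELinux toolchain:

checkmodule -M -m -o coredns_tmpfs.mod coredns_tmpfs.te
semodule_package -o coredns_tmpfs.pp -m coredns_tmpfs.mod
semodule -i coredns_tmpfs.pp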

In OpenShift these are all handled automatically, I believe, although I don't work at that level of the stack.


I've looked into it and it seems that the problem lies in the kubelet version. Let me elaborate:

SELinux Volumes not relabeled in 1.16 (this link provides more details about the issue).

I tried to reproduce this coredns issue on different versions of Kubernetes.

The issue shows up on version 1.16 and newer. It seems to work properly with SELinux enabled on version 1.15.6.

For this to work you will need a working CentOS and CRI-O environment.

CRI-O version:

Version:            0.1.0
RuntimeName:        cri-o
RuntimeVersion:     1.16.2
RuntimeApiVersion:  v1alpha1

To deploy this infrastructure I mostly followed this site: KubeVirt

Kubernetes v1.15.7

Steps to reproduce:

  • Disable SELinux and restart the machine:
    • $ setenforce 0
    • $ sed -i s/^SELINUX=.*$/SELINUX=disabled/ /etc/selinux/config
    • $ reboot
  • Check if SELinux is disabled by invoking the command: $ sestatus (expected output is shown after this list)
  • Install packages with $ yum install INSERT_PACKAGES_BELOW
    • kubelet-1.15.7-0.x86_64
    • kubeadm-1.15.7-0.x86_64
    • kubectl-1.15.7-0.x86_64
  • Initialize the Kubernetes cluster with the following command: $ kubeadm init --pod-network-cidr=10.244.0.0/16
  • Wait for the cluster to initialize correctly and follow the kubeadm instructions to connect to the cluster
  • Apply the Flannel CNI: $ kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
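
When SELinux is disabled, sestatus should report something like:

SELinux status:                 disabled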

Check if the coredns pods are running correctly with the command: $ kubectl get pods -A

It should give output similar to this:

NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-2c7lt                     1/1     Running   2          7m59s
kube-system   coredns-5c98db65d4-5dp9s                     1/1     Running   2          7m59s
kube-system   etcd-centos-kube-master                      1/1     Running   2          7m20s
kube-system   kube-apiserver-centos-kube-master            1/1     Running   2          7m4s
kube-system   kube-controller-manager-centos-kube-master   1/1     Running   2          6m55s
kube-system   kube-flannel-ds-amd64-mzh27                  1/1     Running   2          7m14s
kube-system   kube-proxy-bqll8                             1/1     Running   2          7m58s
kube-system   kube-scheduler-centos-kube-master            1/1     Running   2          6m58s

Coredns pods in a Kubernetes cluster with SELinux disabled are working properly.

Enable SELinux:

From the root account, invoke the following commands to enable SELinux and restart the machine:

  • $ setenforce 1
  • $ sed -i s/^SELINUX=.*$/SELINUX=enforcing/ /etc/selinux/config
  • $ reboot

Check if the coredns pods are running correctly. They should not get a CrashLoopBackOff error when running: $ kubectl get pods -A

Kubernetes v1.16.4

Steps to reproduce:

  • Run $ kubeadm reset if coming from another version
  • Remove old Kubernetes packages with $ yum remove OLD_PACKAGES
  • Disable SELinux and restart the machine:
    • $ setenforce 0
    • $ sed -i s/^SELINUX=.*$/SELINUX=disabled/ /etc/selinux/config
    • $ reboot
  • Check if SELinux is disabled by invoking the command: $ sestatus
  • Install packages with $ yum install INSERT_PACKAGES_BELOW
    • kubelet-1.16.4-0.x86_64
    • kubeadm-1.16.4-0.x86_64
    • kubectl-1.16.4-0.x86_64
  • Initialize the Kubernetes cluster with the following command: $ kubeadm init --pod-network-cidr=10.244.0.0/16
  • Wait for the cluster to initialize correctly and follow the kubeadm instructions to connect to the cluster
  • Apply the Flannel CNI: $ kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml

Check if the coredns pods are running correctly with the command: $ kubectl get pods -A

It should give output similar to this:

NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-5644d7b6d9-fgbkl                     1/1     Running   1          13m
kube-system   coredns-5644d7b6d9-x6h4l                     1/1     Running   1          13m
kube-system   etcd-centos-kube-master                      1/1     Running   1          12m
kube-system   kube-apiserver-centos-kube-master            1/1     Running   1          12m
kube-system   kube-controller-manager-centos-kube-master   1/1     Running   1          12m
kube-system   kube-proxy-v52ls                             1/1     Running   1          13m
kube-system   kube-scheduler-centos-kube-master            1/1     Running   1          12m

Enable SELinux:

From the root account, invoke the following commands to enable SELinux and restart the machine:

  • $ setenforce 1
  • $ sed -i s/^SELINUX=.*$/SELINUX=enforcing/ /etc/selinux/config
  • $ reboot

After the reboot, the coredns pods should enter the CrashLoopBackOff state, as shown below:

NAMESPACE     NAME                                         READY   STATUS             RESTARTS   AGE
kube-system   coredns-5644d7b6d9-fgbkl                     0/1     CrashLoopBackOff   25         113m
kube-system   coredns-5644d7b6d9-x6h4l                     0/1     CrashLoopBackOff   25         113m
kube-system   etcd-centos-kube-master                      1/1     Running            1          112m
kube-system   kube-apiserver-centos-kube-master            1/1     Running            1          112m
kube-system   kube-controller-manager-centos-kube-master   1/1     Running            1          112m
kube-system   kube-proxy-v52ls                             1/1     Running            1          113m
kube-system   kube-scheduler-centos-kube-master            1/1     Running            1          112m

Logs from the pod coredns-5644d7b6d9-fgbkl show:

plugin/kubernetes: open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied
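
If SELinux is the culprit, the corresponding AVC denial should also be visible in the audit log; assuming auditd is running, something like this should surface it:

ausearch -m avc -ts recent

Look for a denied { open } entry whose source context contains container_t and whose path points at the pod's serviceaccount token.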


I did it with the commands below.

semanage fcontext -a -t container_file_t "/var/lib/kubelet/pods/pod_id/volumes(/.*)?"
restorecon -R -v /var/lib/kubelet/pods/pod_id/
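
To confirm the rule is registered and the labels were applied (pod_id again stands in for the actual pod UID):

semanage fcontext -l | grep /var/lib/kubelet/pods
ls -dZ /var/lib/kubelet/pods/pod_id/volumes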