
Ufw firewall blocks kubernetes (with calico)


I'm trying to install a Kubernetes cluster on my server (Debian 10). On this server I use ufw as the firewall. Before creating the cluster I allowed these ports on ufw: 179/tcp, 4789/udp, 5473/tcp, 443/tcp, 6443/tcp, 2379/tcp, 4149/tcp, 10250/tcp, 10255/tcp, 10256/tcp, 9099/tcp

NOTE: all executable commands begin with $

  • Following your initial setup, I installed ufw on a Debian 10 machine and allowed the same ports you mention:
$ sudo apt update && sudo apt upgrade -y
$ sudo apt install ufw -y
$ sudo ufw allow ssh
Rule added
Rule added (v6)
$ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
$ sudo ufw allow 179/tcp
$ sudo ufw allow 4789/tcp
$ sudo ufw allow 5473/tcp
$ sudo ufw allow 443/tcp
$ sudo ufw allow 6443/tcp
$ sudo ufw allow 2379/tcp
$ sudo ufw allow 4149/tcp
$ sudo ufw allow 10250/tcp
$ sudo ufw allow 10255/tcp
$ sudo ufw allow 10256/tcp
$ sudo ufw allow 9099/tcp
$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
179/tcp                    ALLOW       Anywhere
4789/tcp                   ALLOW       Anywhere
5473/tcp                   ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
6443/tcp                   ALLOW       Anywhere
2379/tcp                   ALLOW       Anywhere
4149/tcp                   ALLOW       Anywhere
10250/tcp                  ALLOW       Anywhere
10255/tcp                  ALLOW       Anywhere
10256/tcp                  ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
179/tcp (v6)               ALLOW       Anywhere (v6)
4789/tcp (v6)              ALLOW       Anywhere (v6)
5473/tcp (v6)              ALLOW       Anywhere (v6)
443/tcp (v6)               ALLOW       Anywhere (v6)
6443/tcp (v6)              ALLOW       Anywhere (v6)
2379/tcp (v6)              ALLOW       Anywhere (v6)
4149/tcp (v6)              ALLOW       Anywhere (v6)
10250/tcp (v6)             ALLOW       Anywhere (v6)
10255/tcp (v6)             ALLOW       Anywhere (v6)
10256/tcp (v6)             ALLOW       Anywhere (v6)
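  • (Optional) If you need to review or remove a ufw rule later, you can list the rules with an index and delete by number; this is only a convenience check, not a required step:
$ sudo ufw status numbered
$ sudo ufw delete <RULE_NUMBER>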

  • Installing Docker prerequisites:
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
  • Adding Docker repository:
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian buster stable"
  • Update source list and install Docker-ce:
$ sudo apt-get update
$ sudo apt-get -y install docker-ce
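  • (Optional, not part of the original steps) A quick sanity check that the Docker daemon is installed and running before continuing:
$ sudo systemctl status docker
$ sudo docker version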

NOTE: On production systems it is recommended to install a fixed (pinned) version of Docker:

$ apt-cache madison docker-ce
$ sudo apt-get install docker-ce=<VERSION>
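  • If you do pin a Docker version, you may also want to hold the package so apt doesn't upgrade it later (optional, same idea as the hold applied to the kube packages below):
$ sudo apt-mark hold docker-ce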

  • Installing Kube Tools - kubeadm, kubectl, kubelet:
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  • Configure Kubernetes repository (copy the 3 lines and paste at once):
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
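  • (Optional) You can confirm the repository file was written correctly before updating; it should contain the single deb line from above:
$ cat /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main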
  • Installing packages:
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
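  • (Optional) Verify the tools are installed and check their versions; the exact versions will vary depending on when you install:
$ kubeadm version
$ kubectl version --client
$ kubelet --version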
  • After installing, mark these packages on hold so they don't get updated automatically:
$ sudo apt-mark hold kubelet kubeadm kubectl
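  • (Optional) You can confirm the hold took effect; the output should list kubelet, kubeadm and kubectl (plus docker-ce if you held it earlier):
$ sudo apt-mark showhold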

  • Initializing the cluster (the pod network CIDR matches Calico's default):
$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
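  • The end of the kubeadm init output includes a kubeadm join command with a token for adding worker nodes; save it if you plan to grow the cluster. If you lose it, it can be regenerated later:
$ sudo kubeadm token create --print-join-command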
  • Make kubectl work for your non-root user:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
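  • At this point kubectl should already reach the cluster, but the node will stay NotReady until the pod network (Calico, next step) is installed; a quick check:
$ kubectl cluster-info
$ kubectl get nodes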
  • Installing the Calico network plugin:
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
  • Check the status:
$ kubectl get pods -n kube-system
NAME                                           READY   STATUS    RESTARTS   AGE
calico-kube-controllers-555fc8cc5c-wnnvq       1/1     Running   0          26m
calico-node-sngt8                              1/1     Running   0          26m
coredns-66bff467f8-2qqlv                       1/1     Running   0          55m
coredns-66bff467f8-vptpr                       1/1     Running   0          55m
etcd-kubeadm-ufw-debian10                      1/1     Running   0          55m
kube-apiserver-kubeadm-ufw-debian10            1/1     Running   0          55m
kube-controller-manager-kubeadm-ufw-debian10   1/1     Running   0          55m
kube-proxy-nx8cz                               1/1     Running   0          55m
kube-scheduler-kubeadm-ufw-debian10            1/1     Running   0          55m
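  • (Optional) If some of these pods are still starting on your side, you can wait for the Calico daemonset to settle and then confirm the node reports Ready (the daemonset name comes from the manifest applied above):
$ kubectl -n kube-system rollout status ds/calico-node
$ kubectl get nodes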

Considerations:

Sorry my ufw rules are a bit messy, I tried too many things to get kubernetes working.

  • It's normal to try many things to make something work, but sometimes that ends up becoming the issue itself.
  • I'm posting the step-by-step I followed to deploy it in the same environment as yours, so you can follow it again and reach the same result.
  • My Felix probe didn't get any errors; the only time it failed was when I deployed Kubernetes (on purpose) without creating the ufw rules first.

If this does not solve the problem, next steps:

  • Now, if after following this tutorial you still get a similar problem, please update the question with the output of the following commands:
    • kubectl describe pod <pod_name> -n kube-system
    • kubectl get pod <pod_name> -n kube-system
    • kubectl logs <pod_name> -n kube-system
    • It's always recommended to start with a clean installation of Linux; if you are running a VM, delete it and create a new one (see the reset sketch at the end of this list).
    • If you are running on bare metal, consider what else is running on the server; there may be other software interfering with network communication.
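    • (Reset sketch) If you prefer to redo the installation on the same machine instead of recreating the VM, kubeadm can tear down most of what it created; note this does not undo the ufw rules or uninstall the packages:
$ sudo kubeadm reset
$ rm -rf $HOME/.kube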

Let me know in the comments if you run into any problems while following these troubleshooting steps.