Kubernetes cluster VirtualBox issues with networking (NAT and Host-only adapters)



As I mentioned in a (now deleted) comment, I recreated this on my Ubuntu 18.04 host. I created two Ubuntu 18.10 VMs, each with two adapters (one NAT and one host-only). I have the same configuration as you specified here, and everything works fine.

What I had to do was add the second adapter manually; I did it with netplan before running kubeadm init on the master and kubeadm join on the node.

Just in case you did not do that: add the host-only adapter network to the YAML file in /etc/netplan/50-cloud-init.yaml, then run sudo netplan generate and sudo netplan apply. For nginx I used the deployment from the official Kubernetes documentation. Then I exposed the service:
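For reference, the netplan entry could look something like this. This is only a sketch: the interface name (enp0s8) and the 192.168.56.x address are assumptions - check yours with `ip link` and your VirtualBox host-only network settings.

```yaml
# /etc/netplan/50-cloud-init.yaml (sketch, not verbatim from my setup)
network:
  version: 2
  ethernets:
    enp0s3:            # NAT adapter, gets its address via DHCP from VirtualBox
      dhcp4: true
    enp0s8:            # host-only adapter, static address on the host-only subnet
      dhcp4: false
      addresses: [192.168.56.11/24]
```

After editing, `sudo netplan generate && sudo netplan apply` picks it up; each VM needs a unique address on that subnet.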

kubectl create service nodeport nginx --tcp=80:80

Curling my node's IP address on the NodePort from the host machine works fine.
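To see which port to curl, you can read the assigned NodePort back from the service. NODE_IP here is a hypothetical host-only address of the node, and 30080 is only a stand-in value used when no cluster is reachable:

```shell
# NODE_IP is the node's host-only address (assumption - use your own).
NODE_IP=192.168.56.11
# Ask Kubernetes which NodePort it assigned to the nginx service;
# fall back to a stand-in value when no cluster is reachable.
NODE_PORT=$(kubectl get service nginx \
  -o jsonpath='{.spec.ports[0].nodePort}' 2>/dev/null || echo 30080)
echo "http://${NODE_IP}:${NODE_PORT}"   # curl this URL from the host machine
```

NodePorts are allocated from the 30000-32767 range by default, which is why the port differs from the container's port 80.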

This was just to demonstrate what I did to make it work in my environment. Judging from the pod error you described, it seems like something is wrong with Flannel itself:

/run/flannel/subnet.env: no such file or directory

I checked this directory on master and it looks like this:

/run/flannel/subnet.env

FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

Check if the file is there; if that does not help, we can troubleshoot further if you provide more information. However, there are too many unknowns and I had to guess in some places, so my advice would be to destroy it all and try again with the information I have provided, and to run nginx with a NodePort rather than a ClusterIP service. A ClusterIP is only reachable from inside the cluster - for example, from a node or another pod.
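A quick way to check on each node is to source the file and print the variables Flannel is expected to have written. The path is the one from the error above; everything else in this snippet is just an illustration:

```shell
# Sketch: verify Flannel wrote its subnet config on this node.
SUBNET_ENV=/run/flannel/subnet.env
if [ -f "$SUBNET_ENV" ]; then
    # The file is plain KEY=VALUE lines, so it can be sourced directly.
    . "$SUBNET_ENV"
    echo "network=$FLANNEL_NETWORK subnet=$FLANNEL_SUBNET mtu=$FLANNEL_MTU"
else
    echo "missing: $SUBNET_ENV - flannel has not initialized on this node" >&2
fi
```

If the file is missing, the flannel pod on that node never got far enough to write it, so its logs (`kubectl logs -n kube-system <flannel-pod>`) are the next place to look.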


Allow me to bump this thread. A long time ago I configured 1 NAT adapter for internet access and 1 host-only adapter for remote SSH, and hit the same errors, especially when setting up Rancher Longhorn.

Now I no longer build it that way. First, I build a gateway server using CentOS with iptables (1 NAT adapter, 1 host-only adapter).

Then the other VMs have just one host-only interface, connected directly through the gateway server.
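The gateway idea can be sketched with standard iptables NAT rules. This is an illustration only: the interface names (enp0s3 for NAT, enp0s8 for host-only) and the 192.168.56.0/24 subnet are assumptions, not taken from the original setup:

```shell
# On the CentOS gateway VM: forward and masquerade traffic from the
# host-only network (where the cluster VMs live) out via the NAT adapter.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.56.0/24 -o enp0s3 -j MASQUERADE
iptables -A FORWARD -i enp0s8 -o enp0s3 -j ACCEPT
iptables -A FORWARD -i enp0s3 -o enp0s8 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

Each cluster VM then sets the gateway's host-only address as its default route, so all of them share one NAT path while keeping a single, stable interface - which avoids the dual-adapter confusion Kubernetes components run into.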