Kubernetes coredns readiness probe failed
I ran into a similar issue after upgrading my host machine to Ubuntu 18.04, which uses systemd-resolved as its local DNS resolver. The nameserver field in /etc/resolv.conf then points to the local stub address 127.0.0.53, which causes CoreDNS to fail to start (its loop plugin detects a forwarding loop).
You can find the details at the following link: https://github.com/coredns/coredns/blob/master/plugin/loop/README.md#troubleshooting-loops-in-kubernetes-clusters
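As a quick local check, the loop condition can be spotted straight from the node's resolv.conf. A minimal sketch, assuming a systemd-resolved host; the `has_loopback_nameserver` helper is my own (hypothetical) naming, and the `--resolv-conf` fix is the one suggested in the CoreDNS loop README:

```shell
#!/bin/sh
# Detect the condition that trips CoreDNS's loop plugin: the node's
# resolv.conf forwarding to a local stub resolver (e.g. 127.0.0.53).

# Hypothetical helper: returns 0 if the given resolv.conf points at a 127.x address.
has_loopback_nameserver() {
    grep -Eq '^nameserver[[:space:]]+127\.' "$1"
}

if has_loopback_nameserver /etc/resolv.conf 2>/dev/null; then
    echo "Loopback nameserver found; CoreDNS will crash-loop with a detected loop."
    echo "On systemd-resolved hosts, point kubelet at the real upstream list, e.g.:"
    echo "  kubelet ... --resolv-conf=/run/systemd/resolve/resolv.conf"
fi
```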
I just hit this problem myself. Apparently the lack of a hostname in the health-check URL is fine.
What got me ahead was
microk8s.inspect
The output said there was a problem with iptables forwarding. Since I have firewalld on my system, I temporarily disabled it:
systemctl stop firewalld
and then disabled DNS in MicroK8s and enabled it again (for some unknown reason the DNS pod didn't come up on its own):
microk8s.disable dns
microk8s.enable dns
After that, it started without any issues.
I would start troubleshooting by verifying the kubelet agent on the master and worker nodes, in order to exclude any intercommunication issue between cluster nodes whenever the rest of the core runtime Pods are up and running, since kubelet
is the component that performs the liveness and readiness probes.
systemctl status kubelet -l
journalctl -u kubelet
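Since kubelet is what runs the probes, its journal is where probe failures surface. A small sketch for narrowing the log down; `probe_failures` is a hypothetical helper name of my own, and the pattern matches kubelet's "Readiness probe failed" / "Liveness probe failed" event text:

```shell
#!/bin/sh
# Filter kubelet's journal down to probe failures.
# probe_failures is a hypothetical helper, not a kubelet feature.
probe_failures() {
    grep -iE '(readiness|liveness) probe failed'
}

# Show the 20 most recent probe failures (a harmless no-op on machines
# where journalctl or the kubelet unit is unavailable).
journalctl -u kubelet --no-pager 2>/dev/null | probe_failures | tail -n 20
```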
The health-check URLs mentioned in the question are fine, as they are predefined in the CoreDNS deployment by design.
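Those URLs come from the stock CoreDNS Deployment manifest itself. A sketch of the relevant probe fragment, matching the upstream defaults (/health on port 8080 from the health plugin, /ready on port 8181 from the ready plugin):

```yaml
# Probe fragment from the standard CoreDNS Deployment (upstream defaults).
livenessProbe:
  httpGet:
    path: /health   # served by CoreDNS's health plugin
    port: 8080
readinessProbe:
  httpGet:
    path: /ready    # served by CoreDNS's ready plugin
    port: 8181
```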
Ensure that the CNI plugin Pods are functioning and that the cluster overlay network carries Pod-to-Pod traffic, as CoreDNS is very sensitive to any issue affecting cluster-wide networking.
In addition to @Hang Du's answer about the CoreDNS loopback issue, I encourage you to read more about investigating CoreDNS problems in the official Kubernetes DNS debugging documentation.