
DNS problem on AWS EKS when running in private subnets


I feel like I have to give this a proper answer, because coming upon this question was the answer to 10 straight hours of debugging for me. As @Daniel said in his comment, the issue I found was my network ACL blocking outbound traffic on UDP port 53, which Kubernetes apparently uses to resolve DNS records.

The process was especially confusing for me because one of my pods actually worked the entire time, since (I think?) it happened to be in the same availability zone as the Kubernetes DNS resolver.
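
If you want to check that kind of per-node asymmetry yourself, running a throwaway lookup pod pinned to different nodes is a quick way to do it. This is only a sketch, not part of my original debugging: the node name is a placeholder, and busybox:1.28 is used because later busybox images ship a flaky nslookup.

```
# Run a throwaway pod and test cluster DNS from inside it.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 \
  -- nslookup kubernetes.default

# To test from a specific node (and therefore a specific subnet/zone),
# pin the pod with an override; the node name below is a placeholder.
kubectl run dns-test-node2 --rm -it --restart=Never --image=busybox:1.28 \
  --overrides='{"spec":{"nodeName":"ip-10-0-2-123.ec2.internal"}}' \
  -- nslookup kubernetes.default
```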


To elaborate on the comment from @Daniel, you need:

  1. an ingress rule for UDP port 53
  2. an ingress rule for UDP on ephemeral ports (e.g. 1025–65535)

I hadn't added (2) and was seeing CoreDNS receiving requests and trying to respond, but the response wasn't getting back to the requester.
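For reference, here is roughly what those network ACL entries look like via the AWS CLI. This is a sketch rather than my exact setup: the ACL ID and rule numbers are placeholders, and you may want to scope the CIDR blocks more tightly than 0.0.0.0/0.

```
# Placeholder: substitute your subnet's network ACL ID.
ACL_ID=acl-0123456789abcdef0

# (1) inbound DNS queries on UDP 53
aws ec2 create-network-acl-entry --network-acl-id "$ACL_ID" --ingress \
  --rule-number 100 --protocol udp --port-range From=53,To=53 \
  --cidr-block 0.0.0.0/0 --rule-action allow

# (2) inbound DNS responses on ephemeral UDP ports
aws ec2 create-network-acl-entry --network-acl-id "$ACL_ID" --ingress \
  --rule-number 110 --protocol udp --port-range From=1025,To=65535 \
  --cidr-block 0.0.0.0/0 --rule-action allow

# plus the outbound side mentioned above: DNS queries leaving on UDP 53
aws ec2 create-network-acl-entry --network-acl-id "$ACL_ID" --egress \
  --rule-number 100 --protocol udp --port-range From=53,To=53 \
  --cidr-block 0.0.0.0/0 --rule-action allow
```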

A tip for others dealing with these kinds of issues: turn on CoreDNS logging by adding the log plugin to the Corefile in the coredns ConfigMap, which I was able to do with kubectl edit configmap -n kube-system coredns. See the CoreDNS docs: https://github.com/coredns/coredns/blob/master/README.md#examples. This can help you figure out whether the issue is CoreDNS receiving the queries or sending the responses back.
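
Concretely, that looks something like the following. A sketch: the ConfigMap name and namespace are the EKS defaults, and the k8s-app=kube-dns label is what the EKS-managed CoreDNS pods are typically labeled with.

```
# Open the CoreDNS config for editing:
kubectl edit configmap -n kube-system coredns

# In the Corefile, add the "log" plugin inside the server block, e.g.:
#
#   .:53 {
#       log          # <- add this line to log every query/response
#       errors
#       health
#       ...
#   }

# Then tail the CoreDNS pods to see whether queries arrive and whether
# responses go back out:
kubectl logs -n kube-system -l k8s-app=kube-dns -f
```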


I had been struggling with this issue as well for a couple of hours, I think; I lost track of time.

Since I am using the default VPC but with the worker nodes inside a private subnet, it wasn't working.

I went through the amazon-vpc-cni-k8s repository and found the solution.

We have to set the environment variable AWS_VPC_K8S_CNI_EXTERNALSNAT=true on the aws-node DaemonSet.

You can either apply the updated YAML or set it through the dashboard. However, for it to take effect you have to restart the worker node instances so the IP route tables are refreshed.
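
For example, one way to flip the flag without editing YAML by hand is kubectl set env (a sketch; the DaemonSet name and namespace are the amazon-vpc-cni-k8s defaults):

```
# Set the flag on the aws-node DaemonSet:
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_EXTERNALSNAT=true

# Check that it is set:
kubectl describe daemonset aws-node -n kube-system | grep -i EXTERNALSNAT
```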

The issue link is here.

Thanks!