
Deploying Ingress Nginx Controller ELB in EKS Cluster with multiple nodes


This comes down to how the ingress controller and the ELB interact. The ELB only marks the node where the ingress controller pod is running as InService; the rest of the nodes show up as OutOfService. If the ingress-controller pod is rescheduled to another node, the ELB starts reporting that node as InService instead. You can verify this by deleting the controller pod and watching which node becomes healthy.

The recommendation is to use an NLB or ALB load balancer with the ingress controller. From Kubernetes 1.18, NLB will be the default for the ingress load balancer. Try this tutorial for changing the load balancer type.
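As a minimal sketch of switching to an NLB, the controller Service can carry the standard AWS load balancer type annotation. The Service name, namespace, labels, and ports below are assumptions based on the stock ingress-nginx manifests; adjust them to match your installation.

```yaml
# Hedged sketch: ingress-nginx controller Service requesting an NLB instead of
# a Classic ELB via the in-tree AWS provider annotation.
# Name/namespace/selector are assumptions from the upstream manifests.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller      # assumed name
  namespace: ingress-nginx            # assumed namespace
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
```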


This is expected behavior when externalTrafficPolicy is set to Local on the service (which is what you have). With externalTrafficPolicy: Local you don't get any extra hops: once traffic arrives at a node, it doesn't leave that node, so the load balancer sends traffic only to the nodes where the Ingress Controller pods are running. On the other nodes, the health check returns 503 and those nodes are treated as unhealthy.

Change the externalTrafficPolicy to Cluster if you want all nodes to be healthy.
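A minimal sketch of that change is below; it is the same controller Service with only the traffic policy switched, and again the name and namespace are assumptions to be replaced with your own.

```yaml
# Hedged sketch: controller Service with externalTrafficPolicy set to Cluster,
# so every node passes the load balancer health check (at the cost of an extra
# hop to a node that actually runs a controller pod).
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller      # assumed name
  namespace: ingress-nginx            # assumed namespace
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster      # all nodes report healthy; traffic may hop
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
```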

This is generally not recommended, though, because with Cluster the client's IP address is not propagated to the end Pods. However, this only applies to NLBs and not to Classic Elastic Load Balancers, so the best option is to use an NLB with the nginx ingress controller. If you still want all nodes to be healthy, stick with the Local policy and run the controller as a DaemonSet, as sketched below.
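The sketch below shows the DaemonSet approach: one controller pod per node, so every node stays InService even with externalTrafficPolicy: Local. The image tag, args, labels, and service account are assumptions and should be taken from the ingress-nginx release you actually deploy.

```yaml
# Hedged sketch: running the ingress-nginx controller as a DaemonSet so a pod
# exists on every node. Image tag, args and labels are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      serviceAccountName: ingress-nginx          # assumed service account
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # assumed tag
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
```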

See the official documentation around this.