
True LoadBalancing in Kubernetes?


NodePort is not a load balancer.

You're right about this in one way: yes, it's not designed to be a load balancer.

Users still hit a single node, i.e. the "Node" which is a K8s minion, not a real load balancer, right?

With NodePort, you do hit a single node at any one time, but remember that kube-proxy is running on ALL nodes. So you can hit the NodePort on any node in the cluster (even a node the workload isn't running on) and you'll still reach the endpoint you want. This becomes important later.
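As a quick hedged sketch of that behaviour (the Deployment name "web" and the node IP are placeholders, not from the original post), you can expose a workload with a NodePort Service and curl any node:

# Expose an existing Deployment (name "web" is hypothetical) on a NodePort
kubectl expose deployment web --port=80 --type=NodePort

# Find the port kube-proxy opens on every node for this Service
kubectl get svc web -o jsonpath='{.spec.ports[0].nodePort}'

# Any node IP works, even one where no "web" pod is scheduled
curl http://<any-node-ip>:<node-port>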

The sub-domain that we specify in ingress-service should point to "a" node in the K8s cluster.

No, this isn't how it works.

Your ingress controller still needs to be exposed externally. If you're using a cloud provider, a commonly used pattern is to expose your ingress controller with a Service of Type=LoadBalancer. The load balancing still happens with Services, but Ingress allows you to use that Service in a more user-friendly way. Don't confuse Ingress with load balancing.
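For illustration only (the hostname and backend Service name here are assumptions, and kubectl create ingress needs a reasonably recent kubectl), an Ingress object just defines routing rules; external traffic still enters through the ingress controller's own Service:

# Route a hostname to a backend Service via the ingress controller
# (hostname "app.example.com" and Service "web" are hypothetical)
kubectl create ingress web --rule="app.example.com/*=web:80"

# The DNS name points at the ingress controller's Service (e.g. Type=LoadBalancer),
# not at any individual node or pod; the Ingress only describes the routing.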

I have a doubt about how the cloud provider LB does the load balancing. Does it really distribute the traffic to the appropriate nodes where the pods are deployed, or does it just forward the traffic to the master node or a minion?

If you look at a provisioned service in Kubernetes, you'll see why it makes sense.

Here's a Service of Type LoadBalancer:

kubectl get svc nginx-ingress-controller -n kube-system
NAME                       TYPE           CLUSTER-IP   EXTERNAL-IP        PORT(S)                      AGE
nginx-ingress-controller   LoadBalancer   <redacted>   internal-a4c8...   80:32394/TCP,443:31281/TCP   147d

You can see I've deployed an ingress controller with type LoadBalancer. This has created an AWS ELB, but also notice that, just like with a NodePort Service, port 80 of the ingress controller Service has been mapped to port 32394 on the nodes.
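If you want to see that mapping directly (a small sketch against the same Service), the allocated NodePorts are right there in the Service spec:

# Show the service port -> NodePort mapping that kube-proxy programs on every node
kubectl get svc nginx-ingress-controller -n kube-system \
  -o jsonpath='{range .spec.ports[*]}{.port} -> {.nodePort}{"\n"}{end}'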

So, let's look at the actual LoadBalancer in AWS:

aws elb describe-load-balancers --load-balancer-names a4c80f4eb1d7c11e886d80652b702125
{
    "LoadBalancerDescriptions": [
        {
            "LoadBalancerName": "a4c80f4eb1d7c11e886d80652b702125",
            "DNSName": "internal-a4c8<redacted>",
            "CanonicalHostedZoneNameID": "<redacted>",
            "ListenerDescriptions": [
                {
                    "Listener": {
                        "Protocol": "TCP",
                        "LoadBalancerPort": 443,
                        "InstanceProtocol": "TCP",
                        "InstancePort": 31281
                    },
                    "PolicyNames": []
                },
                {
                    "Listener": {
                        "Protocol": "HTTP",
                        "LoadBalancerPort": 80,
                        "InstanceProtocol": "HTTP",
                        "InstancePort": 32394
                    },
                    "PolicyNames": []
                }
            ],
            "Policies": {
                "AppCookieStickinessPolicies": [],
                "LBCookieStickinessPolicies": [],
                "OtherPolicies": []
            },
            "BackendServerDescriptions": [],
            "AvailabilityZones": [
                "us-west-2a",
                "us-west-2b",
                "us-west-2c"
            ],
            "Subnets": [
                "<redacted>",
                "<redacted>",
                "<redacted>"
            ],
            "VPCId": "<redacted>",
            "Instances": [
                {
                    "InstanceId": "<redacted>"
                },
                {
                    "InstanceId": "<redacted>"
                },
                {
                    "InstanceId": "<redacted>"
                },
                {
                    "InstanceId": "<redacted>"
                },
                {
                    "InstanceId": "<redacted>"
                },
                {
                    "InstanceId": "<redacted>"
                },
                {
                    "InstanceId": "<redacted>"
                },
                {
                    "InstanceId": "<redacted>"
                }
            ],
            "HealthCheck": {
                "Target": "TCP:32394",
                "Interval": 10,
                "Timeout": 5,
                "UnhealthyThreshold": 6,
                "HealthyThreshold": 2
            },
            "SourceSecurityGroup": {
                "OwnerAlias": "337287630927",
                "GroupName": "k8s-elb-a4c80f4eb1d7c11e886d80652b702125"
            },
            "SecurityGroups": [
                "sg-8e0749f1"
            ],
            "CreatedTime": "2018-03-01T18:13:53.990Z",
            "Scheme": "internal"
        }
    ]
}

The most important things to note here are:

The LoadBalancer is mapping port 80 on the ELB to the NodePort:

{
    "Listener": {
        "Protocol": "HTTP",
        "LoadBalancerPort": 80,
        "InstanceProtocol": "HTTP",
        "InstancePort": 32394
    },
    "PolicyNames": []
}
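You can pull just that mapping out with jq if you prefer (same describe call as above, only the filter is new):

# Show only the ELB port -> NodePort pairs for each listener
aws elb describe-load-balancers --load-balancer-names a4c80f4eb1d7c11e886d80652b702125 \
  | jq '.LoadBalancerDescriptions[].ListenerDescriptions[].Listener | {LoadBalancerPort, InstancePort}'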

You'll also see that there are multiple target Instances, not one:

aws elb describe-load-balancers --load-balancer-names a4c80f4eb1d7c11e886d80652b702125 | jq '.LoadBalancerDescriptions[].Instances | length'
8

And finally, if you look at the number of nodes in my cluster, you'll see it's actually all the nodes that have been added to the LoadBalancer:

kubectl get nodes -l "node-role.kubernetes.io/node=" --no-headers=true | wc -l
8
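If you want to cross-check that those really are the machines the ELB is targeting, the EC2 instance IDs are visible on the Node objects themselves (this assumes the AWS cloud provider has populated spec.providerID, as it normally does):

# Each node's providerID ends with its EC2 instance ID, e.g. aws:///us-west-2a/i-0123456789abcdef0
kubectl get nodes -l "node-role.kubernetes.io/node=" \
  -o jsonpath='{range .items[*]}{.spec.providerID}{"\n"}{end}'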

So, in summary: Kubernetes does implement true load balancing with Services (whether of type NodePort or LoadBalancer), and Ingress just makes that Service more accessible to the outside world.