
Kubernetes pod resolve external kafka hostname in coredns not as hostaliases inside pod


Thank you for the question and for showing the effort you have put into solving the problem.

You are right that adding hostAliases is not a good practice: if a kafka host's IP changes, you will have to apply the new IP to the deployment, which triggers a pod reload.

I am not sure how externalIPs fits here as a solution, since:

Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints. externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.

But even if I take it for granted that the externalIP solution works, the way you are accessing your service is still not correct.

DNS resolution is failing because your domain name is wrong. Changing camel.component.kafka.configuration.brokers=kafka-worker1.default:9092 to camel.component.kafka.configuration.brokers=kafka-worker1.default.svc.cluster.local:9092 may fix it. Note: if your k8s cluster uses a domain other than the default, replace cluster.local with your cluster domain.

Check the Kubernetes DNS debugging documentation for reference.
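For example, one way to check resolution from inside the cluster is to run a throwaway pod with nslookup (the pod name dns-test and the busybox:1.28 image are just example choices; any image that ships nslookup will do):

# Run a temporary pod and resolve the broker service name from inside the cluster.
# If this lookup fails, fix DNS before touching the kafka client configuration.
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kafka-worker1.default.svc.cluster.local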

There are two solutions which I can think of:

First, a Service without selectors plus manual Endpoints creation:

(example code) The name of the Endpoints object is used to attach it to the Service, so use the same name, kafka-worker, for both.

apiVersion: v1
kind: Service
metadata:
  name: kafka-worker
spec:
  type: ClusterIP
  ports:
    - port: 9092
      targetPort: 9092
---
apiVersion: v1
kind: Endpoints
metadata:
  name: kafka-worker
subsets:
  - addresses:
      - ip: 10.76.XX.XX # kafka worker 1
      - ip: 10.76.XX.XX # kafka worker 2
      - ip: 10.76.XX.XX # kafka worker 3
    ports:
      - port: 9092
        name: kafka-worker

The way to access this would be camel.component.kafka.configuration.brokers=kafka-worker.default.svc.cluster.local:9092

Notes:
- You can add more information to your endpoint addresses, such as nodeName and hostname; check the Endpoints API reference.
- The advantage of this approach is that k8s will load balance across the kafka workers for you.
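If you go this route, a quick way to confirm that the Service and Endpoints are wired together (names and namespace here match the example manifest above) is:

# The Endpoints object should list your three broker IPs on port 9092.
kubectl get endpoints kafka-worker -n default
# The Service description should show the same IPs on its Endpoints line.
kubectl describe service kafka-worker -n default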

Second, ExternalName:

For this approach you need a single domain name defined already; how to do that is out of scope of this answer. Say kafka-worker.abc.com is your domain name: it is then your responsibility to attach all three kafka worker node IPs to it on your DNS server, for example in a round-robin fashion. Note: this kind of load balancing (via DNS) is not always preferred, because the DNS server performs no health checks to determine which nodes are alive and which are dead.

This approach is not guaranteed to work and may need additional tweaks depending on how your system's networking resolves domain names. That is, the node where your coredns/kube-dns runs must be able to resolve kafka-worker.abc.com; otherwise, when k8s returns the CNAME, your application will fail to resolve it.

Here is an example:

apiVersion: v1
kind: Service
metadata:
  name: kafka-worker
spec:
  type: ExternalName
  externalName: kafka-worker.abc.com
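As a rough check (assuming the ExternalName Service above is created in the default namespace), you can confirm what the cluster DNS hands back; the lookup should return a CNAME to kafka-worker.abc.com, and it only helps if that name is resolvable from your pods:

# Expect a CNAME pointing at kafka-worker.abc.com rather than a ClusterIP.
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kafka-worker.default.svc.cluster.local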

Update: Following the update in your question. Looking at the first error, it seems you have created 3 services, which generates 3 DNS names:

kafka-worker3.default.svc.cluster.local
kafka-worker2.default.svc.cluster.local
kafka-worker1.default.svc.cluster.local

Please check my example code: you DO NOT need to create 3 services, just one Service attached to an Endpoints object that holds the 3 IPs of your 3 brokers.

For your second error: a hostname is not a domain name; a hostname is typically the name given to the machine (please check the difference). For the sake of simplicity I would suggest using only IPs in the Endpoints object.
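If you later do want names on the endpoint addresses, the optional hostname field should be a plain DNS label, not a fully qualified domain name; a minimal sketch (IPs kept as placeholders from the example above, hostname value is just an illustration):

apiVersion: v1
kind: Endpoints
metadata:
  name: kafka-worker
subsets:
  - addresses:
      - ip: 10.76.XX.XX              # simplest: IP only
      - ip: 10.76.XX.XX
        hostname: kafka-worker-2     # optional; a short label, not something like kafka-worker-2.abc.com
    ports:
      - port: 9092
        name: kafka-worker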