
Communication Between Two Services in Kubernetes Cluster Using Ingress as API Gateway


Chris,

I haven't used Linode or Kong and don't know what your frontend actually does, so I'll just point out what I can see:

  • The simplest DNS check is to curl (or ping, dig, etc.):

    • http://[dataapi's pod ip]:80 from a host node
    • http://[kong-proxy svc's internal ip]/dataapi/api/values from a host node (or another pod - see below)
  • Default path matching on the nginx ingress controller is prefix-based, so your nginx ingress with path: / and nginx.ingress.kubernetes.io/rewrite-target: / actually matches everything and rewrites it to /. This may not be an issue if you properly specify all your other ingresses so they take priority over "/" (see the sketch after this list).

  • You said 'using a kong ingress as a proxy to redirect incoming'; just want to make sure you're actually proxying (not redirecting the client).

  • Is Chrome just relaying an upstream error from frontend-service? An external client shouldn't be able to resolve the cluster's URLs (unless you've joined your local machine to the cluster's network or done some other fancy trick). By default, DNS only works within the cluster.

  • Cluster DNS generally follows [service name].[namespace name].svc.cluster.local (e.g. dataapi.kong.svc.cluster.local). If cluster DNS is working, then curl, ping, wget, etc. from a pod in the cluster pointed at that svc name will resolve to the cluster svc IP, not an external IP.

  • Is your dataapi service configured to respond to /dataapi/api/values, or does it not care what the URI is?
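
To make the path-priority point above concrete, here's a minimal sketch of an ingress that pairs a specific /dataapi rule with the catch-all "/". The service names, ports, and ingress class here are assumptions based on your description, not your actual manifests:

```yaml
# Hypothetical sketch only: service names, ports, and class are assumed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: kong
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      # More specific prefix: takes priority for /dataapi/api/values
      - path: /dataapi
        pathType: Prefix
        backend:
          service:
            name: dataapi
            port:
              number: 80
      # Catch-all: everything else falls through to the frontend
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
```

With prefix matching, the longest matching path wins regardless of rule order, so the catch-all only receives requests that nothing more specific claims.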

If you don't have any network policies restricting traffic within a namespace, you should be able to create a test pod in the same namespace and curl the service DNS name and the pod IPs directly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: curl-test
  namespace: kong
spec:
  containers:
  - name: curl-test
    image: buildpack-deps
    imagePullPolicy: Always
    command:
    - "curl"
    - "-v"
    - "http://dataapi:80/dataapi/api/values"
  #nodeSelector:
  #  kubernetes.io/hostname: [a more different node's hostname]
```

The pod should attempt DNS resolution from inside the cluster, so it should find dataapi's svc IP and curl port 80 at path /dataapi/api/values. Service IPs are virtual, so they aren't actually 'reachable'; instead, iptables routes them to a pod IP, which has an actual network endpoint and IS addressable.
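
For reference, that svc-to-pod mapping comes from the Service's selector and ports. A minimal sketch of what your dataapi Service might look like (the selector label and targetPort are assumptions, not your actual manifest):

```yaml
# Hypothetical dataapi Service: the clusterIP is virtual; kube-proxy programs
# iptables to DNAT traffic for it to the pod IPs matching the selector.
apiVersion: v1
kind: Service
metadata:
  name: dataapi
  namespace: kong
spec:
  selector:
    app: dataapi        # assumed pod label
  ports:
  - port: 80            # the virtual service port the test pod curls
    targetPort: 80      # assumed container port on the dataapi pods
```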

Once it completes, just check the logs (kubectl logs curl-test) and then delete the pod.

If this fails, the nature of the failure in the logs should tell you whether it's a DNS or link issue. If it works, then you probably don't have a cluster DNS issue, but you could still have an inter-node communication issue. To test this, run the same manifest as above but uncomment the nodeSelector field to force the pod onto a different node than your kong-proxy pod (as sketched below). It's a manual method, but it's quick for troubleshooting. Just rinse and repeat as needed for other nodes.
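
For example, the same test pod pinned to a specific node; the pod name and hostname here are placeholders, so substitute a real node name from kubectl get nodes:

```yaml
# Same test pod, pinned to a specific node to check inter-node communication.
# Replace the hostname below with one of your actual node names.
apiVersion: v1
kind: Pod
metadata:
  name: curl-test-node2
  namespace: kong
spec:
  containers:
  - name: curl-test
    image: buildpack-deps
    imagePullPolicy: Always
    command:
    - "curl"
    - "-v"
    - "http://dataapi:80/dataapi/api/values"
  nodeSelector:
    kubernetes.io/hostname: some-other-node   # placeholder hostname
```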

Of course, it may not be any of this, but hopefully this helps troubleshoot.


After a lot of help from Eric G (thank you!) on this, and reading this previous StackOverflow answer, I finally solved the issue. As the answer in that link illustrates, our frontend pod was serving our application to a web browser, which knows NOTHING about Kubernetes clusters.

As the link suggests, we added another rule to our nginx ingress to route our HTTP requests to the proper service:

```yaml
- host: gateway.*******.com
  http:
    paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: gateway-service
            port:
              number: 80
```

Then from our Angular frontend, we sent our HTTP requests as follows:

```typescript
...
http.get<string>("http://gateway.*******.com/api/name_of_controller");
...
```

And we were finally able to communicate with our backend service the way we wanted, with both frontend and backend in the same Kubernetes cluster.