
Assign External IP to a Kubernetes Service


First of all, run this command (with namespace replaced by the namespace your services live in):

kubectl get -n namespace services

The above command will return output like this:

NAME       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
backend    NodePort   10.100.44.154   <none>        9400:3003/TCP    13h
frontend   NodePort   10.107.53.39    <none>        3000:30017/TCP   13h

It is clear from the above output that external IPs are not assigned to the services yet. To assign an external IP to the backend service, run the following command.

kubectl patch svc backend -n namespace -p '{"spec":{"externalIPs":["192.168.0.194"]}}'

and to assign an external IP to the frontend service, run this command.

kubectl patch svc frontend -n namespace -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
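
If you prefer a declarative approach, the same externalIPs field can be set directly in the Service manifest and applied with kubectl apply. This is just a minimal sketch for the backend service; the selector label and targetPort are assumptions, so match them to your actual Pod labels and container port:

apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: namespace   # use your actual namespace
spec:
  type: NodePort
  selector:
    app: backend         # assumed label; must match your Pod labels
  externalIPs:
  - 192.168.0.194
  ports:
  - port: 9400           # service port, as in the output above
    targetPort: 9400     # assumed; point this at your container port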

Now list the services again to verify the external IP assignment:

kubectl get -n namespace services

We now get output like this:

NAME       TYPE       CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
backend    NodePort   10.100.44.154   192.168.0.194   9400:3003/TCP    13h
frontend   NodePort   10.107.53.39    192.168.0.194   3000:30017/TCP   13h

Cheers! The Kubernetes external IPs are now assigned.
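
As a quick sanity check, you can try hitting the backend on its external IP and service port. This assumes 192.168.0.194 is routable from where you run the check and that the backend answers HTTP on port 9400:

curl http://192.168.0.194:9400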


If this is just for testing, then try

kubectl port-forward service/nginx-service 80:80

Then you can

curl http://localhost:80
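
Note that forwarding to local port 80 usually needs elevated privileges, so a high local port is often easier; this is just the same command with a different local port:

kubectl port-forward service/nginx-service 8080:80
curl http://localhost:8080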


A solution that could work (and not only for testing, though it has its shortcomings) is to have your Pod use the host network by setting the hostNetwork spec field to true.

This means that you won't need a Service to expose your Pod, as it will always be accessible on your host via a single port (the containerPort you specified in the manifest). There is no need to keep a DNS mapping record in that case.

This also means that you can only run a single instance of this Pod on a given node (talking about shortcomings...). That constraint makes it a good candidate for a DaemonSet object.

If your Pod still needs to access/resolve internal Kubernetes hostnames, you need to set the dnsPolicy spec field to ClusterFirstWithHostNet. This setting will enable your Pod to access the K8S DNS service.

Example:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx-reverse-proxy
  template:
    metadata:
      labels:
        app: nginx-reverse-proxy
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      tolerations:  # allow a Pod instance to run on Master - optional
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: nginx
        name: nginx
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
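
Once the DaemonSet is running, you can check which node each Pod landed on and then reach nginx directly on that node's IP (assuming port 80 is reachable on the host; <node-ip> is a placeholder for the HOST IP shown in the output):

kubectl get pods -o wide -l app=nginx-reverse-proxy
curl http://<node-ip>:80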

EDIT: I was put on this track thanks to the ingress-nginx documentation.