
Kubernetes Pod to run with OpenVPN client sidecar and have functional DNS through the tunnel and in cluster


I solved the problem by running a sidecar DNS server instead, because:

  • it is easier to implement, maintain and understand;
  • it works without surprises.

Here is an example pod with CoreDNS:

apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: default
spec:
  volumes:
  - name: config-volume
    configMap:
      name: foo-config
      items:
        - key: Corefile
          path: Corefile
  dnsPolicy: None # Signals Kubernetes that you want to supply your own DNS - otherwise `/etc/resolv.conf` will be overwritten by Kubernetes and there is then no way to update it.
  dnsConfig:
    nameservers:
      - 127.0.0.1 # This sets the local CoreDNS as the DNS resolver. When `dnsPolicy` is set, `dnsConfig` must be provided.
  containers:
    - name: dns
      image: coredns/coredns
      env:
        - name: LOCAL_DNS
          value: 10.233.0.3 # insert local DNS IP address (see kube-dns service ClusterIP)
        - name: REMOTE_DNS
          value: 192.168.255.1 # insert remote DNS IP address
      args:
        - '-conf'
        - /etc/coredns/Corefile
      volumeMounts:
        - name: config-volume
          readOnly: true
          mountPath: /etc/coredns
    - name: test
      image: debian:buster
      command:
        - bash
        - -c
        - apt update && apt install -y dnsutils && cat /dev/stdout
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: foo-config
  namespace: default
data:
  Corefile: |
    cluster.local:53 {
      errors
      health
      forward . {$LOCAL_DNS}
      cache 30
    }
    cluster.remote:53 {
      errors
      health
      rewrite stop {
        # rewrite cluster.remote to cluster.local and back
        name suffix cluster.remote cluster.local answer auto
      }
      forward . {$REMOTE_DNS}
      cache 30
    }

The CoreDNS config above simply forwards cluster.local queries to the local service and cluster.remote queries to the remote one. Using it, I was able to resolve the kubernetes service IP of both clusters:

k exec -it -n default foo -c test -- bash
root@foo:/# dig @localhost kubernetes.default.svc.cluster.local +short
10.100.0.1
root@foo:/# dig @localhost kubernetes.default.svc.cluster.remote +short
10.43.0.1

Update:

Possibly, the following CoreDNS configuration is sufficient if you also require access to the internet, since cluster.local resolution is provided by Kubernetes itself through the default forward:

.:53 {
  errors
  health
  forward . {$LOCAL_DNS}
  cache 30
}
cluster.remote:53 {
  errors
  health
  forward . {$REMOTE_DNS}
  cache 30
}
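To verify, you can repeat the dig checks from the test container; public names should now resolve through the sidecar as well. A sketch, assuming the pod from above (example.com is just an arbitrary public name):

k exec -it -n default foo -c test -- bash
root@foo:/# dig @localhost example.com +short
root@foo:/# dig @localhost kubernetes.default.svc.cluster.remote +short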


Ad 1.) I am not sure I understand what you mean by namespace rotation (do you mean round-robin domain rotation?), but you could set the resolver timeout to 0, so the resolver sends DNS queries to both name servers right away and returns the quicker response. A hedged sketch of how that could look follows below.
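The resolver options can be set per pod via dnsConfig. A minimal sketch, assuming 10.233.0.3 and 192.168.255.1 are placeholders for your local and remote DNS servers:

apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  dnsPolicy: None
  dnsConfig:
    nameservers:
      - 10.233.0.3    # local cluster DNS (placeholder)
      - 192.168.255.1 # remote DNS reachable through the VPN tunnel (placeholder)
    options:
      - name: timeout
        value: "0"    # resolver timeout in seconds, as suggested above
      - name: rotate  # round-robin between the listed name servers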

A better idea is to leverage the native kubernetes DNS (coredns, kubedns) and just set the forwarding rule there. As per the documentation, you could add something like this to the coredns/kube-dns configmap in the kube-system namespace:

cluster.remote:53 {
    errors
    cache 30
    forward . <remote cluster dns ip>
}
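For context, the stanza sits alongside the default server block in the coredns ConfigMap. A sketch of what the edited ConfigMap might look like, assuming a standard CoreDNS install (the default block is abbreviated, and 192.168.255.1 is a placeholder for the remote DNS IP):

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }
    cluster.remote:53 {
        errors
        cache 30
        forward . 192.168.255.1
    }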

This way you won't need to touch /etc/resolv.conf in the pod at all; you just need to ensure kubedns can reach the remote DNS server, or configure your application for iterative DNS resolution. You can find more details in the official kubernetes documentation https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/ and coredns https://coredns.io/plugins/forward/ . Of course, modifying the kubedns/coredns configuration requires you to have admin rights in the cluster.
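If you do have admin rights, the change itself is small; a sketch, assuming a CoreDNS-based cluster:

# Open the CoreDNS ConfigMap for editing and add the cluster.remote block
kubectl -n kube-system edit configmap coredns

# CoreDNS picks up the new Corefile on its own if the reload plugin is enabled;
# otherwise, restart the deployment to apply the change
kubectl -n kube-system rollout restart deployment coredns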