
How to access locally installed postgresql in microk8s cluster


First, you need to configure PostgreSQL to listen on more than just your VM's localhost. Let's assume the node on which your PostgreSQL instance is installed has a network interface configured with the IP address 10.1.2.3.

Add the following entry to your /etc/postgresql/10/main/postgresql.conf:

listen_addresses = 'localhost,10.1.2.3'

and restart your postgres service:

sudo systemctl restart postgresql

You can check if it listens on the desired address by running:

sudo ss -ntlp | grep postgres
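
If everything went well, postgres should be bound to both addresses. The output should look roughly like this (PIDs, buffer sizes and file descriptors will differ on your system):

LISTEN  0  244  127.0.0.1:5432  0.0.0.0:*  users:(("postgres",pid=1234,fd=3))
LISTEN  0  244  10.1.2.3:5432   0.0.0.0:*  users:(("postgres",pid=1234,fd=4))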

Pods deployed within your MicroK8s cluster should be able to reach the IP addresses of your node, e.g. you should be able to ping the mentioned 10.1.2.3 from your Pods.
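
A quick way to verify this (pod-name is a placeholder, and this assumes the image used by your Pod ships with ping):

microk8s.kubectl exec -ti pod-name -- ping -c 3 10.1.2.3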

Since this doesn't require any load balancing, you can reach your PostgreSQL instance directly from your Pods, without configuring an additional Service to expose it to your cluster.
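
For example, a JDBC connection URL pointing directly at the node's IP would look like this (mydb is a placeholder database name):

jdbc:postgresql://10.1.2.3:5432/mydb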

If you don't want to refer to your PostgreSQL instance in your application by its IP address, you can edit your Deployment (which manages the set of Pods that connect to your postgres db) to modify the default content of the /etc/hosts file used by your Pods.

Edit your app Deployment by running:

microk8s.kubectl edit deployment your-app

and add the following section under the Pod template spec:

  hostAliases: # it should be on the same indentation level as "containers:"
  - hostnames:
    - postgres
    - postgresql
    ip: 10.1.2.3
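
For context, this is roughly where the section lands in the Deployment manifest (your-app and its image are placeholders):

spec:
  template:
    spec:
      hostAliases:
      - hostnames:
        - postgres
        - postgresql
        ip: 10.1.2.3
      containers:
      - name: your-app
        image: your-app-image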

After saving it, all your Pods managed by this Deployment will be recreated according to the new specification. When you exec into your Pod by running:

microk8s.kubectl exec -ti pod-name -- /bin/bash

you should see an additional section in your /etc/hosts file:

# Entries added by HostAliases.
10.1.2.3    postgres    postgresql

From now on you can refer to your Postgres instance in your app by the names postgres:5432 or postgresql:5432, and they will be resolved to your VM's IP address.
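
For instance, the JDBC URL from the earlier example (mydb again being a placeholder) becomes:

jdbc:postgresql://postgres:5432/mydb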

I hope it helps.

UPDATE:

I almost forgot that some time ago I posted an answer on a very similar topic. You can find it here. It describes the usage of a Service without a selector, which is basically what you mentioned in your question. And yes, it can also be used for configuring access to your PostgreSQL instance running on the same host. As this kind of Service has no selectors by definition, no Endpoints object is created automatically by Kubernetes and you need to create one yourself. Once you have the IP address of your Postgres instance (in our example it is 10.1.2.3), you can use it in your Endpoints definition.
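
A minimal sketch of such a Service with its manually created Endpoints might look like this (the name postgres is just an example; note that the Endpoints object must have the same name as the Service):

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres # must match the Service name
subsets:
- addresses:
  - ip: 10.1.2.3 # IP of the node running Postgres
  ports:
  - port: 5432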

Once you have configured everything on the Kubernetes side, you may still encounter an issue with Postgres. In the Pod that is trying to connect to the Postgres instance you may see the following error message:

org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host 10.1.7.151

It basically means that your pg_hba.conf file lacks the entry required to allow your Pod to access your PostgreSQL database. Authentication is host-based, in other words only hosts with certain IPs, or with IPs within a certain range, are allowed to authenticate.

Client authentication is controlled by a configuration file, which traditionally is named pg_hba.conf and is stored in the database cluster's data directory. (HBA stands for host-based authentication.)

So now you are probably wondering which network you should allow in your pg_hba.conf. To handle cluster networking, MicroK8s uses flannel. Take a look at the content of your /var/snap/microk8s/common/run/flannel/subnet.env file. Mine looks as follows:

FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.53.1/24
FLANNEL_MTU=1410
FLANNEL_IPMASQ=false

Adding just the flannel network to your pg_hba.conf should be enough to ensure that all your Pods can connect to PostgreSQL.
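
Based on the FLANNEL_NETWORK value above, a minimal pg_hba.conf entry could look like this (all/all and md5 are placeholders; restrict the database, user and authentication method to whatever your setup requires):

host    all    all    10.1.0.0/16    md5

Remember to reload PostgreSQL afterwards (e.g. sudo systemctl reload postgresql) so the new rule takes effect.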