How do I model a PostgreSQL failover cluster with Docker/Kubernetes?



There's an example in OpenShift: https://github.com/openshift/postgresql/tree/master/examples/replica The principle is the same in pure Kubernetes (it doesn't use anything truly OpenShift-specific, and you can use the images with plain Docker).


You can give PostDock a try, either with docker-compose or Kubernetes. So far I have tried it in our project with docker-compose, with the topology shown below:

    pgmaster (primary node1)  --|
    |- pgslave1 (node2)       --|
    |- pgslave2 (node3)       --|----pgpool (master_slave_mode stream)----client
    |- pgslave3 (node4)       --|
    |- pgslave4 (node5)       --|
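A minimal docker-compose sketch of this topology might look like the following. The image names and environment variables here are assumptions based on how PostDock describes its setup; check the project's own compose files for the real settings and the full list of nodes:

```yaml
version: "3"
services:
  pgmaster:                      # primary (node1)
    image: postdock/postgres     # assumed image name
    environment:
      NODE_ID: 1
      NODE_NAME: node1
  pgslave1:                      # standby (node2), replicates from pgmaster
    image: postdock/postgres
    environment:
      NODE_ID: 2
      NODE_NAME: node2
      REPLICATION_PRIMARY_HOST: pgmaster   # assumed variable name
  pgpool:                        # single entry point for clients
    image: postdock/pgpool       # assumed image name
    ports:
      - "5432:5432"              # clients connect here, not to the nodes
```

The remaining standby nodes (node3 to node5) would follow the same pattern as pgslave1.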

I have tested the following scenarios, and they all work very well:

  • Replication: changes made at the primary (i.e., master) node are replicated to all standby (i.e., slave) nodes.
  • Failover: stop the primary node, and a standby node (e.g., node4) automatically takes over the primary role.
  • Prevention of two primary nodes: resurrect the previous primary node (node1); node4 continues as the primary node, while node1 comes back in sync as a standby node.

These changes are all transparent to the client application. The client just points to the pgpool node and keeps working fine in all of the scenarios above.
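During a failover the entry point can briefly refuse connections, so the one thing worth adding on the client side is a small retry loop around the initial connect. A sketch (the function name and parameters are my own; `connect` stands for any psycopg2-style connect callable, and psycopg2 users would pass `psycopg2.OperationalError` as the error type):

```python
import time


def connect_with_retry(connect, host, port, retries=3, delay=0.1,
                       errors=(OSError,)):
    """Open a connection via `connect`, retrying on transient errors.

    `connect` is any callable taking psycopg2-style keyword arguments
    (e.g. psycopg2.connect). `errors` is the tuple of exception types
    to retry on; for psycopg2 that would be (OperationalError,).
    """
    last_err = None
    for _ in range(retries):
        try:
            return connect(host=host, port=port)
        except errors as err:       # e.g. connection refused during failover
            last_err = err
            time.sleep(delay)       # give the cluster a moment to recover
    raise last_err
```

The client only ever passes the pgpool host here; it never needs to know which PostgreSQL node is currently the primary.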

Note: In case you have problems getting PostDock up and running, you can try my forked version of PostDock.

Pgpool-II with Watchdog

A problem with the architecture above is that pgpool is a single point of failure. So I have also tried enabling Watchdog for Pgpool-II with a delegated virtual IP, so as to avoid that single point of failure.

    master (primary node1)  --\
    |- slave1 (node2)       ---\     / pgpool1 (active)  \
    |- slave2 (node3)       ----|---|                     |----client
    |- slave3 (node4)       ---/     \ pgpool2 (standby) /
    |- slave4 (node5)       --/

I have tested the following scenarios, and they all work very well:

  • Normal scenario: both pgpools start up, and the virtual IP is automatically assigned to one of them, in my case pgpool1.
  • Failover: shut down pgpool1. The virtual IP is automatically moved to pgpool2, which then becomes active.
  • Restart of the failed pgpool: start pgpool1 again. The virtual IP stays with pgpool2, and pgpool1 now works as a standby.

These changes are all transparent to the client application. The client just points to the virtual IP and keeps working fine in all of the scenarios above.
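For reference, the watchdog side of this setup lives in pgpool.conf. A sketch of the relevant parameters as they might look on pgpool1 (host names and the virtual IP are placeholders; see the Pgpool-II watchdog documentation for the full set):

```
use_watchdog = on
delegate_IP = '192.168.1.100'        # the virtual IP clients connect to
wd_hostname = 'pgpool1'              # this node
wd_port = 9000
wd_lifecheck_method = 'heartbeat'
heartbeat_destination0 = 'pgpool2'   # the other pgpool node
heartbeat_destination_port0 = 9694
other_pgpool_hostname0 = 'pgpool2'
other_pgpool_port0 = 5432
other_wd_port0 = 9000
```

pgpool2 would carry the mirror-image configuration, pointing back at pgpool1.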

You can find this project at my GitHub repository on the watchdog branch.


Kubernetes's StatefulSet is a good base for setting up a stateful service. You will still need some work to configure the correct membership among the PostgreSQL replicas.

Kubernetes has an example of it: http://blog.kubernetes.io/2017/02/postgresql-clusters-kubernetes-statefulsets.html
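As a starting point, a minimal StatefulSet sketch for PostgreSQL might look like the following. Names, image, and storage size are placeholders, and the replication/membership configuration mentioned above still has to be layered on top:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres        # headless Service giving stable DNS names
  replicas: 3                  # pods get stable names: postgres-0, -1, -2
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # one PersistentVolumeClaim per replica,
    - metadata:                # so each pod keeps its data across restarts
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

The stable pod names and per-pod volumes are what make StatefulSet a better fit than a plain Deployment here: a standby can always find the primary at a predictable DNS name (e.g. postgres-0.postgres).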