Quickstart with Patroni

Israel Barth

Patroni is a tool written in Python for managing replication and high availability in PostgreSQL clusters.

The purpose of the tool is to make it easier to manage your PostgreSQL clusters, including support for automated failover and switchover operations.

In this article we will explain how to set up a minimal Patroni cluster for testing purposes.

Set up a Patroni test cluster

In this section we will share the steps to set up a Patroni test cluster.

At the time this article was written, the latest available TPAexec version (v23.6) did not yet support deploying Patroni clusters. The latest available Patroni version at the time, 2.1.4, was used.

With that in mind, we prepared a TPAexec post-deploy hook to set up a Patroni test cluster using Docker containers. This is how the cluster will look:

  • Operating System: Ubuntu Focal
  • PostgreSQL: 14
  • Patroni: 2.1.4
  • DCS: etcd
  • Nodes:
      • ts-36-patroni-node-1: primary
      • ts-36-patroni-node-2: standby
      • ts-36-patroni-node-3: standby

The Patroni cluster will use etcd as the DCS layer. Each of the containers will have PostgreSQL, Patroni, and etcd installed.
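To see why the DCS matters: Patroni's failover logic boils down to a leader key with a time-to-live stored in the DCS. The node holding (and refreshing) that key runs as primary; if the key expires, the standbys compete to take it over. The following Python sketch illustrates only the concept; the `LeaderLock` class is hypothetical and is not Patroni's actual implementation:

```python
import time

class LeaderLock:
    """Toy model of a DCS leader key with a TTL (illustration only)."""

    def __init__(self, ttl):
        self.ttl = ttl
        self.holder = None
        self.expires_at = 0.0

    def acquire(self, node, now=None):
        """Try to take or refresh the leader key; returns True on success."""
        now = time.monotonic() if now is None else now
        if self.holder in (None, node) or now >= self.expires_at:
            self.holder = node
            self.expires_at = now + self.ttl
            return True
        return False

lock = LeaderLock(ttl=30)
assert lock.acquire("node-1", now=0)       # node-1 becomes leader
assert not lock.acquire("node-2", now=10)  # lease still valid, takeover denied
assert lock.acquire("node-2", now=31)      # lease expired -> failover happens
```

The `ttl` here corresponds to the `ttl` setting you will see later in the `bootstrap.dcs` section of `patroni.yml`.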

1- Create the initial cluster configuration:

tpaexec configure ts-36 \
--architecture M1 \
--platform docker \
--distribution Ubuntu \
--os-version focal \
--postgres-version 14 \
--extra-packages etcd patroni

2- Override the configuration so that it uses a smaller set of nodes that matches the aforementioned architecture:

cat << 'EOF' > ts-36/config.yml
---
architecture: M1
cluster_name: ts-36
cluster_tags: {}

cluster_vars:
  enable_pg_backup_api: false
  packages:
    common:
    - etcd
    - patroni
  postgres_version: '14'
  preferred_python_version: python3
  use_volatile_subscriptions: true

locations:
- Name: main
- Name: dr

instance_defaults:
  image: tpa/ubuntu:focal
  platform: docker
  vars:
    ansible_user: root

instances:
- Name: ts-36-patroni-node-1
  location: main
  node: 1
  role:
  - primary
- Name: ts-36-patroni-node-2
  location: main
  node: 2
  role:
  - replica
  upstream: ts-36-patroni-node-1
- Name: ts-36-patroni-node-3
  location: dr
  node: 3
  role:
  - replica
  upstream: ts-36-patroni-node-1
EOF

3- Create the post-deploy hook, and its dependent files:

mkdir ts-36/hooks

cat << 'EOF' > ts-36/hooks/post-deploy.yml
# Set facts
- name: Get container IP
  # xargs is used to trim trailing spaces
  shell: hostname -I | xargs
  register: container_ip_command
- name: Set container IP as a fact
  set_fact:
    container_ip: "{{ container_ip_command.stdout }}"
- name: Get container name
  # xargs is used to trim trailing spaces
  shell: hostname | xargs
  register: container_name_command
- name: Set container name as a fact
  set_fact:
    container_name: "{{ container_name_command.stdout }}"

# Configure etcd
- name: Add ETCDCTL_API to .bashrc (postgres)
  lineinfile:
    path: "~{{ postgres_user }}/.bashrc"
    regexp: '^export ETCDCTL_API='
    line: "export ETCDCTL_API=3"
- name: Add ETCDCTL_API to .bashrc (root)
  lineinfile:
    path: "/root/.bashrc"
    regexp: '^export ETCDCTL_API='
    line: "export ETCDCTL_API=3"
- name: Create etcd configuration file
  template:
    src: etcd.j2
    dest: /etc/default/etcd
  vars:
    node_ip: "{{ container_ip }}"
    node_name: "{{ container_name }}"
    cluster_token: "etcd-patroni-{{ cluster_name }}"
- name: Start etcd service
  service:
    name: etcd
    state: started

# Stop conflicting processes
- name: Stop repmgr service
  service:
    name: repmgr
    state: stopped
- name: Stop PostgreSQL service
  service:
    name: postgres
    state: stopped

# Configure Patroni
- name: Create Patroni logging folder
  file:
    state: directory
    path: /var/log/patroni
    owner: "{{ postgres_user }}"
    group: "{{ postgres_user }}"
- name: Create Patroni configuration file
  template:
    src: patroni.yml.j2
    dest: /etc/patroni.yml
  vars:
    node_ip: "{{ container_ip }}"
    node_name: "{{ container_name }}"
    patroni_cluster_name: "patroni-{{ cluster_name }}"
    data_directory: "{{ postgres_data_dir }}"
    bin_directory: "{{ postgres_bin_dir }}"
    postgres_port: "{{ postgres_port }}"
- name: Create Patroni service
  template:
    src: patroni.service.j2
    dest: /etc/systemd/system/patroni.service
- name: Reload systemd daemon
  shell: systemctl daemon-reload
- name: Start Patroni service
  service:
    name: patroni
    state: started
EOF

cat << EOF > ts-36/hooks/etcd.j2
#[Member]
ETCD_LISTEN_PEER_URLS="http://{{ node_ip }}:2380"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://{{ node_ip }}:2379"
ETCD_NAME="{{ node_name }}"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://{{ node_ip }}:2380"
ETCD_INITIAL_CLUSTER="{%- for h in hostvars -%}
{{ hostvars[h].container_name }}=http://{{ hostvars[h].container_ip }}:2380,
{%- endfor -%}"
ETCD_ADVERTISE_CLIENT_URLS="http://{{ node_ip }}:2379"
ETCD_INITIAL_CLUSTER_TOKEN="{{ cluster_token }}"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat << 'EOF' > ts-36/hooks/patroni.service.j2
[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL
After=syslog.target
After=network.target

[Service]
Type=simple
User=postgres
Group=postgres
StandardOutput=syslog
# Start the patroni process
ExecStart=/bin/patroni /etc/patroni.yml
# Send HUP to reload from patroni.yml
ExecReload=/bin/kill -s HUP $MAINPID
# Only kill the patroni process, not its children, so it will gracefully stop postgres
KillMode=process
# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=30
# Restart the service if it crashed
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

cat << 'EOF' > ts-36/hooks/patroni.yml.j2
scope: {{ patroni_cluster_name }}
namespace: /db/
name: {{ node_name }}

restapi:
  listen: "0.0.0.0:8008"
  connect_address: "{{ node_ip }}:8008"

etcd3:
  hosts:
{% for h in hostvars -%}
  - {{ hostvars[h].container_ip }}:2379
{% endfor %}

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      use_slots: true
  initdb:
  - encoding: UTF8

postgresql:
  listen: "0.0.0.0:{{ postgres_port }}"
  connect_address: "{{ node_ip }}:{{ postgres_port }}"
  data_dir: "{{ data_directory }}"
  bin_dir: "{{ bin_directory }}"
  authentication:
    replication:
      username: repmgr
    superuser:
      username: postgres

tags:
  nosync: false

log:
  level: INFO
  dir: /var/log/patroni
  file_size: 50000000
  file_num: 10
  loggers:
    etcd.client: DEBUG
    urllib3: DEBUG
EOF

Note: the commands above create a simple configuration file for Patroni (patroni.yml). It is just a sample configuration, which you can change according to your needs. The full list of options for the patroni.yml file can be found in the YAML Configuration Settings section of the Patroni documentation.
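For example, the sample configuration does not enable synchronous replication, which Patroni supports through its DCS settings. A sketch of what that addition to patroni.yml.j2 could look like (this fragment is an assumption for illustration, not part of the cluster built above):

```yaml
# Hypothetical addition: enable synchronous replication. These settings
# live under bootstrap.dcs and are applied when the cluster is first
# bootstrapped (afterwards they can be changed with `patronictl edit-config`).
bootstrap:
  dcs:
    synchronous_mode: true
    # Number of synchronous standbys Patroni should maintain
    synchronous_node_count: 1
```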

4- Provision the cluster:

tpaexec provision ts-36

5- Deploy the cluster:

tpaexec deploy ts-36

At this point you will have a Patroni cluster with the aforementioned architecture.

You can verify that by checking etcd and Patroni information:

1- In any node, check etcd member list:

etcdctl member list --write-out=table

The output should look similar to:

postgres@ts-36-patroni-node-1:~ $ etcdctl member list --write-out=table
+------------------+---------+----------------------+------------------------+------------------------+
|        ID        | STATUS  |         NAME         |       PEER ADDRS       |      CLIENT ADDRS      |
+------------------+---------+----------------------+------------------------+------------------------+
| 53e2188e2c657738 | started | ts-36-patroni-node-1 | http://172.17.0.2:2380 | http://172.17.0.2:2379 |
| c64e6395fafdc498 | started | ts-36-patroni-node-2 | http://172.17.0.3:2380 | http://172.17.0.3:2379 |
| d80fc6730c50b3cd | started | ts-36-patroni-node-3 | http://172.17.0.4:2380 | http://172.17.0.4:2379 |
+------------------+---------+----------------------+------------------------+------------------------+

2- In any node, check Patroni member list:

patronictl -c /etc/patroni.yml list -e -f pretty

The output should look similar to:

postgres@ts-36-patroni-node-1:~ $ patronictl -c /etc/patroni.yml list -e -f pretty
+ Cluster: patroni-ts-36 (7171076102150087983)
+----------------------+------------+---------+---------+----+-----------+-----------------+-------------------+------+
| Member               | Host       | Role    | State   | TL | Lag in MB | Pending restart | Scheduled restart | Tags |
+----------------------+------------+---------+---------+----+-----------+-----------------+-------------------+------+
| ts-36-patroni-node-1 | 172.17.0.2 | Leader  | running |  2 |           | *               |                   |      |
| ts-36-patroni-node-2 | 172.17.0.3 | Replica | running |  2 |         0 | *               |                   |      |
| ts-36-patroni-node-3 | 172.17.0.4 | Replica | running |  2 |         0 | *               |                   |      |
+----------------------+------------+---------+---------+----+-----------+-----------------+-------------------+------+
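If you want to script against the cluster state (for monitoring or automation), one option is to parse this table. The sketch below extracts the leader's member name; it runs here on a captured sample of the rows shown above, but in practice you would pipe the output of `patronictl -c /etc/patroni.yml list` into the same filter:

```shell
# Sample rows captured from the `patronictl list` output shown above
sample='| ts-36-patroni-node-1 | 172.17.0.2 | Leader  | running | 2 |   |
| ts-36-patroni-node-2 | 172.17.0.3 | Replica | running | 2 | 0 |'

# Field 2 is the member name; pick the row whose Role column ($4) says Leader,
# then strip the padding spaces around the name.
leader=$(printf '%s\n' "$sample" | awk -F'|' '$4 ~ /Leader/ {gsub(/ /, "", $2); print $2}')
echo "$leader"   # prints ts-36-patroni-node-1
```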
