Consider a copy of the production cluster, created for test purposes. In this case, renaming the database is important to prevent applications from connecting to the test cluster as if it were production. This article shows how to change the database name in a PGD cluster.
Starting with the following PGD cluster:
```
bdrdb=# SELECT * FROM bdr.group_versions_details ORDER BY 2;
  node_id   | node_name | postgres_version | bdr_version | bdr_edition
------------+-----------+------------------+-------------+-------------
 3999258439 | rt92679n1 | 14.7.0           | 4.3.0       | EE
  639297188 | rt92679n2 | 14.7.0           | 4.3.0       | EE
 3138151972 | rt92679n3 | 14.7.0           | 4.3.0       | EE
(3 rows)
```
The cluster contains three BDR nodes (one of them a Witness), as well as a Barman server and two HARP Proxy nodes. The full `config.yml` is shown below:
```yaml
---
architecture: BDR-Always-ON
cluster_name: rt92679
cluster_tags: {}
cluster_vars:
  bdr_database: bdrdb
  bdr_node_group: rt92679
  bdr_version: '4'
  enable_pg_backup_api: false
  epas_redwood_compat: false
  failover_manager: harp
  harp_consensus_protocol: bdr
  postgres_coredump_filter: '0xff'
  postgres_flavour: epas
  postgres_version: '14'
  preferred_python_version: python3
  tpa_2q_repositories:
  - products/bdr4/release
  - products/harp/release
  - products/default/release
  use_volatile_subscriptions: true
locations:
- Name: a
- Name: b
instance_defaults:
  image: tpa/rocky:8
  platform: docker
  vars:
    ansible_user: root
instances:
- Name: rt92679n1
  backup: rt92679b1
  location: a
  node: 1
  role:
  - bdr
- Name: rt92679n2
  location: a
  node: 2
  role:
  - bdr
- Name: rt92679n3
  location: a
  node: 3
  role:
  - bdr
  - witness
- Name: rt92679b1
  location: b
  node: 4
  role:
  - barman
- Name: rt92679p1
  location: a
  node: 5
  role:
  - harp-proxy
- Name: rt92679p2
  location: a
  node: 6
  role:
  - harp-proxy
```
The following instructions show how to change the database name on this cluster, for example from `bdrdb` to `bdrdbnew`.

IMPORTANT: This procedure requires downtime.
Each node has an associated interface connection string, also called a DSN:
```
bdrdb=# SELECT node_name, interface_connstr FROM bdr.node_summary WHERE peer_state_name in ('ACTIVE', 'STANDBY') ORDER BY 1;
 node_name |                   interface_connstr
-----------+---------------------------------------------------------
 rt92679n1 | host=rt92679n1 port=5444 dbname=bdrdb user=enterprisedb
 rt92679n2 | host=rt92679n2 port=5444 dbname=bdrdb user=enterprisedb
 rt92679n3 | host=rt92679n3 port=5444 dbname=bdrdb user=enterprisedb
(3 rows)
```
Also, each node has subscriptions, used to receive data from the other nodes. For example, on node 1:
```
bdrdb=# SELECT sub_name, origin_name, target_name FROM bdr.subscription_summary ORDER BY 1;
               sub_name                | origin_name | target_name
---------------------------------------+-------------+-------------
 bdr_bdrdb_rt92679_rt92679n2_rt92679n1 | rt92679n2   | rt92679n1
 bdr_bdrdb_rt92679_rt92679n3_rt92679n1 | rt92679n3   | rt92679n1
(2 rows)
```
The first step is to disable all subscriptions on all nodes, which stops replication among the nodes. Do this on all nodes:
```sql
SELECT bdr.alter_subscription_disable(sub_name)
FROM bdr.subscription_summary;
```
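To confirm that replication is stopped, you can inspect `bdr.subscription_summary` again. A quick check, assuming your PGD version exposes the `sub_enabled` column in this view:

```sql
-- Run on each node; all subscriptions should now report sub_enabled = f
SELECT sub_name, sub_enabled
FROM bdr.subscription_summary
ORDER BY 1;
```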
Then we also need to disable Raft Consensus on all nodes:
```sql
SELECT bdr.consensus_disable();
```
Next, change the interface DSN on all nodes, replacing the database name `bdrdb` with `bdrdbnew`. Do this on all nodes:
```sql
SELECT bdr.alter_node_interface(
    node_name := t.node_name,
    interface_dsn := replace(t.interface_connstr, 'dbname=bdrdb', 'dbname=bdrdbnew')
)
FROM bdr.node_summary t
WHERE t.peer_state_name IN ('ACTIVE', 'STANDBY');
```
You can confirm that the interface connection strings have changed:
```
bdrdb=# SELECT node_name, interface_connstr FROM bdr.node_summary WHERE peer_state_name in ('ACTIVE', 'STANDBY') ORDER BY 1;
 node_name |                    interface_connstr
-----------+------------------------------------------------------------
 rt92679n1 | host=rt92679n1 port=5444 dbname=bdrdbnew user=enterprisedb
 rt92679n2 | host=rt92679n2 port=5444 dbname=bdrdbnew user=enterprisedb
 rt92679n3 | host=rt92679n3 port=5444 dbname=bdrdbnew user=enterprisedb
(3 rows)
```
Now notice how the BDR extension is configured to be loaded by Postgres:

```
bdrdb=# SHOW shared_preload_libraries;
                          shared_preload_libraries
------------------------------------------------------------------------------
 $libdir/dbms_pipe, $libdir/edb_gen, $libdir/dbms_aq, pg_stat_statements, bdr
(1 row)
```
Take note of that value and remove the `, bdr` part of the text. Then use the resulting value to change the `shared_preload_libraries` parameter so that `bdr` is no longer loaded. Do this on all nodes:
```bash
echo "shared_preload_libraries = '\$libdir/dbms_pipe, \$libdir/edb_gen, \$libdir/dbms_aq, pg_stat_statements'" >> /opt/postgres/data/postgresql.auto.conf
```
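If you prefer not to edit the value by hand, the trailing `, bdr` entry can be stripped programmatically. A minimal sketch, assuming the value shown above; adjust the string to match your own `SHOW shared_preload_libraries` output:

```shell
# Current value as reported by SHOW shared_preload_libraries (assumed)
cur='$libdir/dbms_pipe, $libdir/edb_gen, $libdir/dbms_aq, pg_stat_statements, bdr'

# Drop a trailing ", bdr" entry; other libraries are left untouched
new=$(printf '%s' "$cur" | sed 's/, *bdr$//')

echo "$new"
```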
On all nodes, as `root`, restart Postgres, for example:

```bash
systemctl restart postgres
```
On the Barman node (if any), locate the name of the server, for example:
```
[barman@rt92679b1 ~]$ barman list-servers
rt92679n1 - Backups from rt92679n1
```
As `root`, deactivate the server:

```
[root@rt92679b1 ~]# sed -i 's/active = true/active = false/' /etc/barman.d/rt92679n1.conf
```
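The `sed` edit above flips the server's `active` flag in place. A minimal illustration of the same substitution on a sample config fragment (the file path and contents here are assumed for demonstration only):

```shell
# Create a sample Barman server config fragment (assumed contents)
printf '[rt92679n1]\nactive = true\n' > /tmp/rt92679n1.conf

# Same in-place substitution as used above
sed -i 's/active = true/active = false/' /tmp/rt92679n1.conf

cat /tmp/rt92679n1.conf
```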
Then stop the Barman WAL receiver:

```
[barman@rt92679b1 ~]$ barman receive-wal --stop rt92679n1
Stopped process receive-wal(4478)
```
On all nodes, as `root`, stop the HARP Manager service:

```bash
systemctl stop harp-manager
```
Now that the BDR workers are not loaded, Barman is not attached to any BDR node, and HARP Manager is not running, you can rename the database. First, connect to a different database, for example `postgres`, and cancel any backends that might still be connected to the BDR database. Do this on all nodes:
```sql
SELECT pg_cancel_backend(pid) FROM pg_stat_activity WHERE datname = 'bdrdb';
```
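If any session remains connected despite the cancel request, the rename below fails with `ERROR: database "bdrdb" is being accessed by other users`. In that case you can terminate the remaining backends, which is a stronger measure than cancelling their current queries:

```sql
-- Terminate (not just cancel) any backends still connected to bdrdb
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'bdrdb';
```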
Then you can finally rename the database. Do this on all nodes:
```sql
ALTER DATABASE bdrdb RENAME TO bdrdbnew;
```
Also, on all nodes, reset the `shared_preload_libraries` value to what it was, that is, with `bdr` included:

```sql
ALTER SYSTEM RESET shared_preload_libraries;
```
As `root`, restart Postgres on all nodes:

```bash
systemctl restart postgres
```
Finally, connect to the database, now called `bdrdbnew` in our case, and re-enable all subscriptions:
```sql
SELECT bdr.alter_subscription_enable(sub_name)
FROM bdr.subscription_summary;
```
Do this on all nodes, and BDR should work correctly with the renamed database.
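To verify, you can reuse the queries from earlier in this article. A quick sketch; the exact output depends on your cluster, and `bdr.consensus_enable()` is only needed if Raft consensus did not resume on its own after the restarts:

```sql
-- All nodes should report ACTIVE (or STANDBY) again
SELECT node_name, peer_state_name
FROM bdr.node_summary
ORDER BY 1;

-- Only if Raft consensus did not resume after the restart
SELECT bdr.consensus_enable();
```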
At this point, we still need to handle Barman (if any), HARP, and TPAexec. In the `<cluster_dir>/config.yml` file on the TPAexec host, you'll see the `bdr_database` setting, for example:
```yaml
bdr_database: bdrdb
```
Change it to the new database name, for example:

```yaml
bdr_database: bdrdbnew
```
Then run:

```bash
cd <cluster_dir>
tpaexec provision .
tpaexec deploy .
```

This automatically adjusts the Barman and HARP configuration.