How to convert non-TDE nodes to TDE in a PGD cluster using TPAexec

Edgar Silva Leitão

This article describes how to migrate data from nodes that do not have Transparent Data Encryption (TDE) enabled to TDE-enabled nodes within the same running PGD 5 cluster.

TDE is only available in EDB Postgres Extended (PGE) 15 and EDB Postgres Advanced Server (EPAS) 15 and above, while BDR 4 is compatible only up to PGE 14 and EPAS 14. The cluster in this example therefore consists of 2 data nodes and 1 witness running EPAS 15 on a PGD 5 cluster.
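
If in doubt about what an existing node is running, you can confirm the flavour and version from psql; on an EPAS 15 node the output should mention EnterpriseDB Advanced Server 15:

SELECT version();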

Strategy

The strategy used in this article relies on the fact that nodes without TDE enabled and nodes with TDE enabled can replicate to each other:

1- Deploy the cluster using the config.yml with only the nodes bdr1, bdr2, and bdr3 that are not using TDE.

2- Create a schema and a table on the node bdr1.

3- Add 3 new nodes (bdr4, bdr5, bdr6) in the config.yml file to be deployed with TDE enabled.

4- Add preferred_first_bdr_primary: bdr1 to be used as an upstream to provide data to the new nodes.

5- Provision and deploy the cluster.

6- At this point, the cluster has bdr1, bdr2, and bdr3 without TDE, and bdr4, bdr5, and bdr6 with TDE.

7- Part and drop bdr1, bdr2, and bdr3 from the cluster, leaving only bdr4, bdr5, and bdr6 with TDE enabled in the cluster.

NOTE: The inverse can be done to migrate data from TDE-enabled nodes to non-TDE-enabled nodes.

This article covers the full process, from the initial deployment to the removal of the non-TDE nodes from the cluster.

TPAexec config.yml file

Use the following configuration to deploy a 3-node cluster without TDE enabled:

---
architecture: PGD-Always-ON
cluster_name: tdecluster
cluster_tags: {}

keyring_backend: legacy

cluster_vars:
  apt_repository_list: []
  bdr_database: bdrdb
  bdr_node_group: tdecluster
  bdr_node_groups:
  - name: tdecluster
    options:
      enable_proxy_routing: true
  - name: dc1_subgroup
    options:
      location: dc1
    parent_group_name: tdecluster
  bdr_version: '5'
  default_pgd_proxy_options:
    listen_port: 6432
  edb_repositories:
  - enterprise
  - postgres_distributed
  epas_redwood_compat: false
  failover_manager: pgd
  postgres_coredump_filter: '0xff'
  postgres_flavour: epas
  postgres_version: '15'
  preferred_python_version: python3
  use_volatile_subscriptions: true
  yum_repository_list:
  - EPEL


locations:
- Name: dc1

instance_defaults:
  image: tpa/rocky:8
  platform: docker
  vars:
    ansible_user: root

instances:
- Name: bdr1
  location: dc1
  node: 1
  role:
  - bdr
  - pgd-proxy
  vars:
    bdr_child_group: dc1_subgroup
    bdr_node_options:
      route_priority: 100
- Name: bdr2
  location: dc1
  node: 2
  role:
  - bdr
  - pgd-proxy
  vars:
    bdr_child_group: dc1_subgroup
    bdr_node_options:
      route_priority: 100
- Name: bdr3
  location: dc1
  node: 3
  role:
  - bdr
  - pgd-proxy
  - witness
  vars:
    bdr_child_group: dc1_subgroup
    bdr_node_options:
      route_priority: 100

Deploying the cluster

For best compatibility, use the latest TPAexec version for the deployment.
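
You can check which version is installed with:

tpaexec info

This prints the installed TPA version along with details about the installation.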

1- Create a directory for the cluster and cd to it:

mkdir ~/tdecluster
cd ~/tdecluster

2- Create a config.yml file in the cluster directory with the content above:

vim config.yml

3- Relink, provision, and deploy:

tpaexec relink .

tpaexec provision .

tpaexec deploy .
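
Since this example uses the Docker platform, a quick sanity check after the deployment is to list the running containers; the container names normally match the instance names from config.yml:

docker ps --format '{{.Names}}'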

Feeding some data into the cluster

Access the bdrdb database in node bdr1 and create a schema and a table, then insert some data.
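
On the Docker platform used in this example, one way to open a psql session on bdr1 is shown below; this is a sketch assuming the default enterprisedb OS user created for EPAS deployments:

docker exec -it bdr1 su - enterprisedb -c 'psql bdrdb'

Then, in psql: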

\c bdrdb

CREATE SCHEMA schema1;

CREATE TABLE schema1.tbl1 (id INTEGER PRIMARY KEY, msg TEXT);

INSERT INTO schema1.tbl1 VALUES (1, 'TDETEST');

Verify the inserted data in both bdr1 and bdr2:

SELECT * FROM schema1.tbl1;
 id |   msg   
----+---------
  1 | TDETEST
(1 row)

Deploying TDE-enabled nodes with TPAexec

Add the following parameter to the config.yml file under cluster_vars. It forces TPAexec to use node bdr1 as the upstream that provides the schema and the table to the new nodes:

preferred_first_bdr_primary: bdr1
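
For clarity, this is a one-line addition at the same level as the other cluster_vars entries:

cluster_vars:
  ...
  preferred_first_bdr_primary: bdr1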

Then add 2 new data nodes (bdr4 and bdr5) and 1 witness node (bdr6), with TDE options for the initdb command, just below the bdr3 block in the config.yml file:

- Name: bdr4
  location: dc1
  node: 4
  role:
  - bdr
  - pgd-proxy
  vars:
    bdr_child_group: dc1_subgroup
    bdr_node_options:
      route_priority: 100
    postgres_initdb_opts:
    - --data-encryption
    - --data-checksums
    - "--key-wrap-command=''/usr/bin/openssl enc -e -aes-128-cbc -pbkdf2 -pass pass:12345678 -out %p''"
    - "--key-unwrap-command=''/usr/bin/openssl enc -d -aes-128-cbc -pbkdf2 -pass pass:12345678 -in %p''"
- Name: bdr5
  location: dc1
  node: 5
  role:
  - bdr
  - pgd-proxy
  vars:
    bdr_child_group: dc1_subgroup
    bdr_node_options:
      route_priority: 100
    postgres_initdb_opts:
    - --data-encryption
    - --data-checksums
    - "--key-wrap-command=''/usr/bin/openssl enc -e -aes-128-cbc -pbkdf2 -pass pass:12345678 -out %p''"
    - "--key-unwrap-command=''/usr/bin/openssl enc -d -aes-128-cbc -pbkdf2 -pass pass:12345678 -in %p''"
- Name: bdr6
  location: dc1
  node: 6
  role:
  - bdr
  - pgd-proxy
  - witness
  vars:
    bdr_child_group: dc1_subgroup
    bdr_node_options:
      route_priority: 100
    postgres_initdb_opts:
    - --data-encryption
    - --data-checksums
    - "--key-wrap-command=''/usr/bin/openssl enc -e -aes-128-cbc -pbkdf2 -pass pass:12345678 -out %p''"
    - "--key-unwrap-command=''/usr/bin/openssl enc -d -aes-128-cbc -pbkdf2 -pass pass:12345678 -in %p''"

IMPORTANT: To enable TDE via TPAexec, you must also specify --data-checksums. The option -pass pass:12345678 defines the passphrase used for the encryption and decryption process; use a strong passphrase in production. The key-unwrap-command is written to postgresql.conf as the data_encryption_key_unwrap_command setting.
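
After the deployment, you can verify this setting on any TDE-enabled node from psql, for example on bdr4:

SHOW data_encryption_key_unwrap_command;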

Provision and deploy the cluster with the new nodes:

tpaexec provision .

tpaexec deploy .

Confirming TDE and checking replicated data on the new nodes

The cluster is now deployed with six nodes: half of them don't have TDE enabled, and the other half do. We can check whether TDE is enabled by running the following query in the BDR database from any node:

WITH node_tde AS (
  SELECT jsonb_array_elements(
    bdr.run_on_all_nodes($$
      SELECT data_encryption_version FROM pg_control_init()
    $$)::jsonb
  ) AS j
)
SELECT j->>'node_id' AS node_id,
       j->>'node_name' AS node_name,
       j->'response'->'command_tuples'->0->>'data_encryption_version' AS tde_enabled
FROM node_tde
ORDER BY 2;

The following output is expected:

  node_id   | node_name | tde_enabled
------------+-----------+-------------
 4245713078 | bdr1      | 0
 1609217846 | bdr2      | 0
 1630662325 | bdr3      | 0
 959108419  | bdr4      | 1
 3841624025 | bdr5      | 1
 4056130800 | bdr6      | 1
(6 rows)

In the output, nodes bdr1, bdr2, and bdr3 have tde_enabled = 0, meaning TDE is not enabled on them, while nodes bdr4, bdr5, and bdr6 have tde_enabled = 1, confirming that TDE is enabled on those three nodes.
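
If you only need to check a single node, the same information is available locally, without bdr.run_on_all_nodes; 0 means TDE is disabled, 1 means it is enabled:

SELECT data_encryption_version FROM pg_control_init();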

Also, the table schema1.tbl1 now exists on nodes bdr4 and bdr5 with the values inserted earlier. At this point, you can access node bdr1 and insert a new row:

INSERT INTO schema1.tbl1 VALUES (2, 'TDETEST2');

Then check the replicated values on nodes bdr2, bdr4, and bdr5:

SELECT * FROM schema1.tbl1;
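
Both rows should be present on every data node:

 id |   msg    
----+----------
  1 | TDETEST
  2 | TDETEST2
(2 rows)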

Parting and dropping nodes

We have confirmed that the replication is working between all the data nodes, and all of them have the same data. Now is the time to remove the non-TDE nodes from the cluster.

First, check Raft Consensus:

SELECT * FROM bdr.group_raft_details;

 instance_id |  node_id   | node_name | node_group_name |     state     | leader_type | leader_id  | leader_name | current_term | commit_index | nodes | voting_nodes | protocol_version 
-------------+------------+-----------+-----------------+---------------+-------------+------------+-------------+--------------+--------------+-------+--------------+------------------
           1 | 4245713078 | bdr1      | tdecluster      | RAFT_FOLLOWER | NODE        | 1609217846 | bdr2        |            2 |         1761 |     6 |            6 |             5004
           1 | 1609217846 | bdr2      | tdecluster      | RAFT_LEADER   | NODE        | 1609217846 | bdr2        |            2 |         1761 |     6 |            6 |             5004
           1 | 1630662325 | bdr3      | tdecluster      | RAFT_FOLLOWER | NODE        | 1609217846 | bdr2        |            2 |         1761 |     6 |            6 |             5004
           1 |  959108419 | bdr4      | tdecluster      | RAFT_FOLLOWER | NODE        | 1609217846 | bdr2        |            2 |         1761 |     6 |            6 |             5004
           1 | 3841624025 | bdr5      | tdecluster      | RAFT_FOLLOWER | NODE        | 1609217846 | bdr2        |            2 |         1761 |     6 |            6 |             5004
           1 | 4056130800 | bdr6      | tdecluster      | RAFT_FOLLOWER | NODE        | 1609217846 | bdr2        |            2 |         1761 |     6 |            6 |             5004

Now check the nodes of the cluster:

SELECT * FROM bdr.node_summary;
 node_name | node_group_name |                  interface_connstr                  | peer_state_name | peer_target_state_name | node_seq_id | node_local_dbname |  node_id   | node_group_id | node_kind_name 
-----------+-----------------+-----------------------------------------------------+-----------------+------------------------+-------------+-------------------+------------+---------------+----------------
 bdr1      | dc1_subgroup    | host=bdr1 port=5444 dbname=bdrdb user=enterprisedb  | ACTIVE          | ACTIVE                 |           1 | bdrdb             | 4245713078 |    1302278103 | data
 bdr2      | dc1_subgroup    | host=bdr2 port=5444 dbname=bdrdb user=enterprisedb  | ACTIVE          | ACTIVE                 |           2 | bdrdb             | 1609217846 |    1302278103 | data
 bdr3      | dc1_subgroup    | host=bdr3 port=5444 dbname=bdrdb user=enterprisedb  | ACTIVE          | ACTIVE                 |           3 | bdrdb             | 1630662325 |    1302278103 | witness
 bdr4      | dc1_subgroup    | host=bdr4 port=5444 dbname=bdrdb user=enterprisedb  | ACTIVE          | ACTIVE                 |           4 | bdrdb             |  959108419 |    1302278103 | data
 bdr5      | dc1_subgroup    | host=bdr5 port=5444 dbname=bdrdb user=enterprisedb  | ACTIVE          | ACTIVE                 |           5 | bdrdb             | 3841624025 |    1302278103 | data
 bdr6      | dc1_subgroup    | host=bdr6 port=5444 dbname=bdrdb user=enterprisedb  | ACTIVE          | ACTIVE                 |           6 | bdrdb             | 4056130800 |    1302278103 | witness
(6 rows)

Access the BDR database on node bdr4 and part the non-TDE nodes from the cluster one by one. Don't part all of them at the same time.

SELECT bdr.part_node(
  node_name := 'bdr1',
  wait_for_completion := true,
  force := false
); 

SELECT bdr.part_node(
  node_name := 'bdr2',
  wait_for_completion := true,
  force := false
); 

SELECT bdr.part_node(
  node_name := 'bdr3',
  wait_for_completion := true,
  force := false
); 
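
Between each call, you can check from any remaining node that the parted node has left the node list (or shows the PARTED state) before moving on:

SELECT node_name, peer_state_name FROM bdr.node_summary;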

Now, drop the non-TDE nodes from the BDR catalog of each remaining node. This step is optional; to perform it, access the BDR database on nodes bdr4, bdr5, and bdr6, and execute the following on each of them:

SELECT bdr.drop_node(
  node_name := 'bdr1',
  cascade := true,
  force := false
);

SELECT bdr.drop_node(
  node_name := 'bdr2',
  cascade := true,
  force := false
);

SELECT bdr.drop_node(
  node_name := 'bdr3',
  cascade := true,
  force := false
);

Wrapping up

All the non-TDE nodes have been completely removed from the cluster, and you can confirm that by running the following queries:

SELECT * FROM bdr.node_summary;

SELECT * FROM bdr.group_raft_details;

You can also run the TDE check query from earlier again to confirm that all remaining nodes are TDE-enabled:

WITH node_tde AS (
  SELECT jsonb_array_elements(
    bdr.run_on_all_nodes($$
      SELECT data_encryption_version FROM pg_control_init()
    $$)::jsonb
  ) AS j
)
SELECT j->>'node_id' AS node_id,
       j->>'node_name' AS node_name,
       j->'response'->'command_tuples'->0->>'data_encryption_version' AS tde_enabled
FROM node_tde
ORDER BY 2;

It should return the following output, confirming that all remaining nodes have TDE enabled:

  node_id   | node_name | tde_enabled
------------+-----------+-------------
 959108419  | bdr4      | 1
 3841624025 | bdr5      | 1
 4056130800 | bdr6      | 1
(3 rows)

And with that, we have successfully migrated a 3-node PGD cluster from non-TDE to TDE.

To finish the procedure, remove the blocks for bdr1, bdr2, and bdr3 from the config.yml file and run tpaexec provision again, so that future TPAexec deployments of the cluster don't add them back.
