Create an EDB Replica Cluster in a Different Namespace in a Kubernetes Environment

Swapnil Suryawanshi
  • Creating an EDB replica cluster in another namespace in a cloud-native environment typically involves several steps, assuming you are using Kubernetes as your orchestration platform.
  • Below is a high-level guide on how to achieve this.
  • In this article, the replica cluster is created in another namespace using the backup (object store) method.

Prerequisites before creating Replica cluster

Before you can take a backup of any cluster, you first have to set up object storage for it, so that the backup files and WAL files can be archived.

  1. Request access to Amazon AWS.
  2. Create an IAM user, and generate the ACCESS_KEY_ID and ACCESS_SECRET_KEY.
  • ACCESS_KEY_ID: the ID of the access key that will be used to upload files into S3
  • ACCESS_SECRET_KEY: the secret part of the access key mentioned above
  3. Create a bucket and grant external applications access to it. The following permissions are required:
  • “s3:AbortMultipartUpload”
  • “s3:DeleteObject”
  • “s3:GetObject”
  • “s3:ListBucket”
  • “s3:PutObject”
  • “s3:PutObjectTagging”
  4. Verify access from external sources (sample commands follow the note below).

Note: The steps above will vary according to the storage you are using. In my case, I used an Amazon S3 bucket to store the WAL files and backups.
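
For the last prerequisite (verifying access from external sources), a quick check with the AWS CLI is usually enough. A minimal sketch, assuming the AWS CLI is configured with the access key created above and the bucket is named my-bucket as in the examples below:

# Confirm list/read access to the bucket
aws s3 ls s3://my-bucket/
# Confirm write and delete access with a small test object
echo access-check > /tmp/access-check.txt
aws s3 cp /tmp/access-check.txt s3://my-bucket/access-check.txt
aws s3 rm s3://my-bucket/access-check.txt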

Content

My Configuration:

  • Operator: Cloud Native PostgreSQL v1.20.2
  • Storage: AWS S3
  • Database: PostgreSQL v15.3

In my example, I have used two different namespaces: cnp and cnp2.

  • Namespace cnp: source cluster (one primary and one replica instance)
  • Namespace cnp2: replica cluster

Step:1

  • If you don't already have the source and target namespaces, create them using the following commands:

  • Create two namespaces: cnp and cnp2

user@LAPTOP385PNIN cnp-yaml % kubectl create ns cnp
namespace/cnp created
user@LAPTOP385PNIN cnp-yaml % 
user@LAPTOP385PNIN cnp-yaml % kubectl create ns cnp2
namespace/cnp2 created
user@LAPTOP385PNIN cnp-yaml % 
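
To confirm that both namespaces exist before moving on, you can run:

kubectl get namespace cnp cnp2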

Step:2

  • To be allowed to store backups in the S3 bucket, you need:

  • The ACCESS_KEY_ID and ACCESS_SECRET_KEY credentials.

  • The access key used must have permission to upload files into the bucket.

  • Given that, create a Kubernetes secret with the credentials, which you can do with the following commands:

  • Create the same secret, with the same credentials, in both namespaces.

user@LAPTOP385PNIN cnp-yaml % kubectl create secret generic aws-creds \
 --from-literal=ACCESS_KEY_ID=XXXXXXXXX7R6L \
 --from-literal=ACCESS_SECRET_KEY=XXXXXXXXX5T7m -n cnp
secret/aws-creds created
user@LAPTOP385PNIN cnp-yaml % kubectl create secret generic aws-creds \
 --from-literal=ACCESS_KEY_ID=XXXXXXXXX7R6L \
 --from-literal=ACCESS_SECRET_KEY=XXXXXXXXX5T7m -n cnp2
secret/aws-creds created
user@LAPTOP385PNIN cnp-yaml % 
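
To confirm the secret is present in both namespaces (each cluster reads it from its own namespace), you can run:

kubectl get secret aws-creds -n cnp
kubectl get secret aws-creds -n cnp2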

Step:3

  • Create the two-node cluster cluster-example-with-backup, with backups to S3 configured, in namespace cnp:
user@LAPTOP385PNIN cnp-yaml % cat cluster-example-with-backup.yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example-with-backup
  namespace: cnp
spec:
  logLevel: info
  startDelay: 3600
  stopDelay: 1800
  nodeMaintenanceWindow:
    inProgress: false
    reusePVC: true
  backup:
    barmanObjectStore:
      s3Credentials:
        accessKeyId:
          name: aws-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: aws-creds
          key: ACCESS_SECRET_KEY
      destinationPath: 's3://my-bucket/test'
  enableSuperuserAccess: true
  monitoring:
    disableDefaultQueries: false
    enablePodMonitor: false
  minSyncReplicas: 0
  postgresGID: 26
  replicationSlots:
    highAvailability:
      slotPrefix: _cnp_
    updateInterval: 30
  primaryUpdateMethod: switchover
  postgresUID: 26
  walStorage:
    resizeInUseVolumes: true
    size: 1Gi
  maxSyncReplicas: 0
  switchoverDelay: 3600
  storage:
    resizeInUseVolumes: true
    size: 2Gi
  primaryUpdateStrategy: unsupervised
  instances: 2
  imagePullPolicy: Always
user@LAPTOP385PNIN cnp-yaml % kubectl apply -f cluster-example-with-backup.yaml -n cnp
cluster.postgresql.k8s.enterprisedb.io/cluster-example-with-backup created
user@LAPTOP385PNIN cnp-yaml % 
user@LAPTOP385PNIN cnp-yaml % kubectl get pods -n cnp -L role
NAME                            READY   STATUS    RESTARTS   AGE   ROLE
cluster-example-with-backup-1   1/1     Running   0          53s   primary
cluster-example-with-backup-2   1/1     Running   0          29s   replica
user@LAPTOP385PNIN cnp-yaml % 
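
Before taking a backup, it is worth confirming that the cluster is healthy and that WAL archiving to the object store is working. A quick check with the cnp plugin (the same plugin used later for the replica cluster):

kubectl get cluster -n cnp
kubectl cnp status cluster-example-with-backup -n cnp

The Continuous Backup section of the status output should confirm that WAL archiving is working once the object store is reachable.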

Step:4

  • Take a backup of the primary cluster cluster-example-with-backup:
user@LAPTOP385PNIN cnp-yaml % cat backup.yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Backup
metadata:
 name: cluster-example-trigger-backup
spec:
 cluster:
  name: cluster-example-with-backup
user@LAPTOP385PNIN cnp-yaml % kubectl apply -f backup.yaml -n cnp
backup.postgresql.k8s.enterprisedb.io/cluster-example-trigger-backup created

user@LAPTOP385PNIN cnp-yaml % kubectl get backup -n cnp
NAME                             AGE   CLUSTER                       PHASE       ERROR
cluster-example-trigger-backup   45s   cluster-example-with-backup   completed
user@LAPTOP385PNIN cnp-yaml % aws s3 ls s3://my-bucket/test --recursive --human-readable --summarize
2023-09-25 12:44:37  0 Bytes test/
2023-09-25 13:08:00  1.3 KiB test/cluster-example-with-backup/base/20230925T073733/backup.info
2023-09-25 13:07:35  31.2 MiB test/cluster-example-with-backup/base/20230925T073733/data.tar
2023-09-25 13:05:25  16.0 MiB test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000001
2023-09-25 13:05:42  16.0 MiB test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000002
2023-09-25 13:05:59  16.0 MiB test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000003
2023-09-25 13:06:19 338 Bytes test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000003.00000028.backup
2023-09-25 13:10:40  16.0 MiB test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000004
2023-09-25 13:15:41  16.0 MiB test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000005

Total Objects: 9
 Total Size: 111.2 MiB
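
You can also inspect the Backup resource itself, whose status records details such as the backup ID and the WAL positions covered:

kubectl describe backup cluster-example-trigger-backup -n cnp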

Step:5

  • You'll need to define a replication configuration for your EDB replica. This typically includes how the replica reaches the source cluster (in this example, through its backup object store), the authentication credentials, and the replication method (e.g., WAL archive or streaming replication); a streaming-based sketch is shown after the output below.

  • To deploy the EDB replica in the target namespace, you define another Cluster resource managed by the operator. Here's a simplified example:

  • In my example, I have created the one-node replica cluster cluster-example-replica-from-backup-simple in the target namespace cnp2.

user@LAPTOP385PNIN cnp-yaml % cat cluster-example-replica-from-backup-simple.yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example-replica-from-backup-simple
spec:
  instances: 1

  bootstrap:
    recovery:
      source: cluster-example-with-backup

  replica:
    enabled: true
    source: cluster-example-with-backup

  storage:
    size: 1Gi

  externalClusters:
    - name: cluster-example-with-backup
      barmanObjectStore:
        destinationPath: 's3://my-bucket/test'
        s3Credentials:
          accessKeyId:
            key: ACCESS_KEY_ID
            name: aws-creds
          inheritFromIAMRole: false
          secretAccessKey:
            key: ACCESS_SECRET_KEY
            name: aws-creds
user@LAPTOP385PNIN cnp-yaml % kubectl apply -f cluster-example-replica-from-backup-simple.yaml -n cnp2      
cluster.postgresql.k8s.enterprisedb.io/cluster-example-replica-from-backup-simple created
user@LAPTOP385PNIN cnp-yaml % kubectl get pod -n cnp2
NAME                                                               READY   STATUS      RESTARTS   AGE
cluster-example-replica-from-backup-simple-1-full-recovery6vgdw   0/1     Completed   0          73s
cluster-example-replica-from-backup-simple-1                      0/1     Running     0          41s
user@LAPTOP385PNIN cnp-yaml % kubectl get pod -n cnp2
NAME                                           READY   STATUS    RESTARTS   AGE
cluster-example-replica-from-backup-simple-1   1/1     Running   0          3m28s
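
As mentioned above, the same replica cluster could instead be fed through streaming replication rather than from the object store. The following is only a minimal, untested sketch: it assumes a hypothetical cluster name cluster-example-replica-streaming, that the source cluster's read-write service is reachable across namespaces as cluster-example-with-backup-rw.cnp.svc, and that the source cluster's replication client certificate secret (cluster-example-with-backup-replication) and CA secret (cluster-example-with-backup-ca) have been copied into namespace cnp2.

apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example-replica-streaming
spec:
  instances: 1

  # Bootstrap the replica with a physical base backup taken over the network
  bootstrap:
    pg_basebackup:
      source: cluster-example-with-backup

  replica:
    enabled: true
    source: cluster-example-with-backup

  storage:
    size: 1Gi

  externalClusters:
    - name: cluster-example-with-backup
      # Cross-namespace service name of the source primary (assumption)
      connectionParameters:
        host: cluster-example-with-backup-rw.cnp.svc
        user: streaming_replica
        dbname: postgres
        sslmode: verify-full
      # Client certificate and CA generated by the operator for the source cluster;
      # these secrets must be copied into the cnp2 namespace first
      sslKey:
        name: cluster-example-with-backup-replication
        key: tls.key
      sslCert:
        name: cluster-example-with-backup-replication
        key: tls.crt
      sslRootCert:
        name: cluster-example-with-backup-ca
        key: ca.crt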

Replication Verification

Step:6

  • To cross-verify the replication role of the replica cluster cluster-example-replica-from-backup-simple, check the output of kubectl cnp status <cluster_name> (the cnp plugin).

  • It shows Replication role = Designated primary.

user@LAPTOP385PNIN cnp-yaml % kubectl cnp status cluster-example-replica-from-backup-simple -n cnp2
Replica Cluster Summary
Name:        cluster-example-replica-from-backup-simple
Namespace:      cnp2
System ID:      7282663856800313363
PostgreSQL Image:  quay.io/enterprisedb/postgresql:15.3
Designated primary: cluster-example-replica-from-backup-simple-1
Source cluster:   cluster-example-with-backup
Status:       Cluster in healthy state 
Instances:      1
Ready instances:   1

Certificates Status
Certificate Name                                          Expiration Date                 Days Left Until Expiration
cluster-example-replica-from-backup-simple-ca             2023-12-24 07:55:42 +0000 UTC   89.88
cluster-example-replica-from-backup-simple-replication    2023-12-24 07:55:42 +0000 UTC   89.88
cluster-example-replica-from-backup-simple-server         2023-12-24 07:55:42 +0000 UTC   89.88

Continuous Backup status
Not configured

Unmanaged Replication Slot Status
No unmanaged replication slots found

Instances status
Name                                           Database Size   Current LSN   Replication role     Status   QoS          Manager Version   Node
cluster-example-replica-from-backup-simple-1   29 MB           0/A000000     Designated primary   OK       BestEffort   1.20.2            k3d-k3s-default-server-0

  • To cross-verify the status of the replica cluster, log into the replica pod cluster-example-replica-from-backup-simple-1 and check the pg_is_in_recovery() result.
user@LAPTOP385PNIN cnp-yaml % kubectl exec -ti cluster-example-replica-from-backup-simple-1 -n cnp2 -- psql -U postgres
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
psql (15.3)
Type "help" for help.

postgres=# 
postgres=# 
postgres=# select pg_is_in_recovery();
pg_is_in_recovery 
t
(1 row)
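
To confirm that the replica keeps replaying WAL from the archive, you can also check the last replayed location; it should advance after new WAL segments are archived by the primary (as done in Step:8 below):

postgres=# select pg_last_wal_replay_lsn();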

Data Replication test

Step:7

  • To verify whether data is properly replicated, log in to both the primary and the replica clusters and check the available tables:
user@LAPTOP385PNIN cnp-yaml % kubectl exec -ti cluster-example-with-backup-1 -n cnp -- psql -U postgres        
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
psql (15.3)
Type "help" for help.

postgres=# \dt
Did not find any relations.
postgres=# 
user@LAPTOP385PNIN cnp-yaml % kubectl exec -ti cluster-example-replica-from-backup-simple-1 -n cnp2 -- psql -U postgres
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
psql (15.3)
Type "help" for help.

postgres=# \dt
Did not find any relations.
postgres=# 

Step:8

  • Create a simple table t1 and insert a few values by logging in to the primary pod cluster-example-with-backup-1 in namespace cnp.

  • To get the data over to the replica, switch a few WAL files on the primary so they are archived to the object store.

user@LAPTOP385PNIN cnp-yaml % kubectl exec -ti cluster-example-with-backup-1 -n cnp -- psql -U postgres        
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
psql (15.3)
Type "help" for help.

postgres=# create table t1 (id int);
CREATE TABLE
postgres=# 

postgres=# \dt
    List of relations
Schema | Name | Type | Owner  
public | t1  | table | postgres
(1 row)

postgres=# insert into t1 values (1);
INSERT 0 1
postgres=# insert into t1 values (2);
INSERT 0 1
postgres=# 
postgres=# insert into t1 values (3);
INSERT 0 1

postgres=# checkpoint ;
CHECKPOINT
postgres=# 
postgres=# select pg_switch_wal();
pg_switch_wal 
0/6018780
(1 row)

postgres=# select pg_switch_wal();
pg_switch_wal 
0/7000000
(1 row)

postgres=# select pg_switch_wal();
pg_switch_wal 
0/7000000
(1 row)

postgres=# checkpoint ;
CHECKPOINT
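
Before checking the object store, you can also note which WAL segment the primary is currently on, which makes it easier to match against the S3 listing in the next step:

postgres=# select pg_walfile_name(pg_current_wal_lsn());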

Step:9

  • Verify that the WAL files were switched and archived to the object store.
user@LAPTOP385PNIN cnp-yaml % aws s3 ls s3://my-bucket/test --recursive --human-readable --summarize  
2023-09-25 12:44:37  0 Bytes test/
2023-09-25 13:08:00  1.3 KiB test/cluster-example-with-backup/base/20230925T073733/backup.info
2023-09-25 13:07:35  31.2 MiB test/cluster-example-with-backup/base/20230925T073733/data.tar
2023-09-25 13:05:25  16.0 MiB test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000001
2023-09-25 13:05:42  16.0 MiB test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000002
2023-09-25 13:05:59  16.0 MiB test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000003
2023-09-25 13:06:19 338 Bytes test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000003.00000028.backup
2023-09-25 13:10:40  16.0 MiB test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000004
2023-09-25 13:15:41  16.0 MiB test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000005
2023-09-25 13:42:17  16.0 MiB test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000006
2023-09-25 13:42:39  16.0 MiB test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000007
2023-09-25 13:42:56  16.0 MiB test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000008
2023-09-25 13:47:39  16.0 MiB test/cluster-example-with-backup/wals/0000000100000000/000000010000000000000009

Total Objects: 13
 Total Size: 175.2 MiB

Step:10

  • To cross-verify the data, log in to the replica cluster pod cluster-example-replica-from-backup-simple-1 in namespace cnp2:
user@LAPTOP385PNIN cnp-yaml % kubectl exec -ti cluster-example-replica-from-backup-simple-1 -n cnp2 -- psql -U postgres
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
psql (15.3)
Type "help" for help.

postgres=# 
postgres=# 
postgres=# \dt
    List of relations
Schema | Name | Type | Owner  
public | t1  | table | postgres
(1 row)

postgres=# 
postgres=# 
postgres=# select * from test;
ERROR: relation "test" does not exist
LINE 1: select * from test;
           ^
postgres=# select * from t1;
id 
 1
 2
 3
(3 rows)

postgres=#

Conclusion

By following the above steps, you can create a PostgreSQL replica cluster in another namespace within a cloud-native environment using Kubernetes. Make sure to adjust the configuration to match your specific requirements and security policies.
