- You can back up a PostgreSQL cluster online using EDB Postgres for Kubernetes's continuous physical backup and WAL archiving capability. You can then restore your database to any point in time after the first available backup, with no downtime required.
- Backups are taken from the primary or the designated primary instance in a cluster.
- The continuous backup infrastructure is orchestrated using the Barman tool, so backups are stored in tar format. Both base backups and WAL files can be compressed and encrypted.
- You can archive the backup files in any service supported by the Barman Cloud infrastructure:
  - AWS S3
  - Microsoft Azure Blob Storage
  - Google Cloud Storage
Prerequisites before backup:
Before taking a backup of any cluster, you first have to set up the object storage that will hold the backup files and WAL files. For AWS S3:
- Request access to Amazon AWS.
- Create an IAM role, and generate ACCESS_KEY_ID and ACCESS_SECRET_KEY.
  - ACCESS_KEY_ID: the ID of the access key that will be used to upload files into S3
  - ACCESS_SECRET_KEY: the secret part of the access key mentioned above
- Create a bucket and grant external applications access to it. The following permissions are required:
- “s3:AbortMultipartUpload”
- “s3:DeleteObject”
- “s3:GetObject”
- “s3:ListBucket”
- “s3:PutObject”
- “s3:PutObjectTagging”
- Verify access from external sources.
Note: The steps above vary according to the storage service you are using. Here, we show an example of an AWS S3 bucket storing the WAL files and backups.
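As an illustration, the bucket permissions listed above can be expressed as an IAM policy document. This is only a sketch: the bucket name my-cnp-backups is a placeholder, and you would attach the policy with the AWS CLI (e.g. `aws iam create-policy`).

```shell
# Hypothetical sketch: an IAM policy document granting the S3 permissions
# listed above. "my-cnp-backups" is a placeholder bucket name.
cat > cnp-backup-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject",
        "s3:PutObjectTagging"
      ],
      "Resource": [
        "arn:aws:s3:::my-cnp-backups",
        "arn:aws:s3:::my-cnp-backups/*"
      ]
    }
  ]
}
EOF

# Validate that the document is well-formed JSON before attaching it, e.g. via
# `aws iam create-policy --policy-document file://cnp-backup-policy.json`.
python3 -m json.tool cnp-backup-policy.json > /dev/null && echo "policy OK"
```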
Setup details:
- Operator: EDB Postgres for Kubernetes v1.20.1
- Storage: AWS S3
- Database: PostgreSQL v15.3
Step 1: To store backups in S3 buckets, you need the ACCESS_KEY_ID and ACCESS_SECRET_KEY credentials. The access key used must have permission to upload files into the bucket.
- Given that, create a Kubernetes secret with the credentials using the following commands:
1] Create the namespace
swapnilsuryawanshi@LAPTOP385PNIN ~ % oc create namespace cnp
namespace/cnp created
2] Create a Kubernetes secret with the credentials.
swapnilsuryawanshi@LAPTOP385PNIN ~ % kubectl create secret generic aws-creds \
--from-literal=ACCESS_KEY_ID=xxxxxxxxxxxxx \
--from-literal=ACCESS_SECRET_KEY=xxxxxxxxxxxxx -n cnp
secret/aws-creds created
Note: Replace the xxxxxxxxxxxxx placeholders with your actual credentials.
3] Verify the created secret.
swapnilsuryawanshi@LAPTOP385PNIN ~ % oc get secret -n cnp
NAME                       TYPE                                  DATA   AGE
aws-creds                  Opaque                                2      18s
builder-dockercfg-dqxd9    kubernetes.io/dockercfg               1      4m11s
builder-token-t2g6x        kubernetes.io/service-account-token   4      4m12s
default-dockercfg-rxjx9    kubernetes.io/dockercfg               1      4m11s
default-token-qq6d7        kubernetes.io/service-account-token   4      4m12s
deployer-dockercfg-zbcvv   kubernetes.io/dockercfg               1      4m11s
deployer-token-gljct       kubernetes.io/service-account-token   4      4m12s
swapnilsuryawanshi@LAPTOP385PNIN ~ % oc describe secrets/aws-creds -n cnp
Name: aws-creds
Namespace: cnp
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
ACCESS_KEY_ID: 20 bytes
ACCESS_SECRET_KEY: 40 bytes
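The DATA byte counts in the describe output are the decoded lengths of each key: Kubernetes stores Secret values base64-encoded and decodes them on use. A quick local illustration, using a fake documentation-only access key ID:

```shell
# Kubernetes stores Secret values base64-encoded; `oc describe` reports the
# decoded size. "AKIAIOSFODNN7EXAMPLE" is a fake, documentation-only key.
key="AKIAIOSFODNN7EXAMPLE"   # 20 characters, like the 20 bytes shown above
encoded=$(printf '%s' "$key" | base64)
printf '%s' "$encoded" | base64 -d | wc -c
# → 20
```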
Once that secret has been created, you can configure your cluster as in the following example.
Step 2: Create the cluster to test continuous WAL archiving and backup. (Here, the cluster is named 'cluster-sample'.)
swapnilsuryawanshi@LAPTOP385PNIN ~ % cat cluster-sample.yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-sample
  namespace: cnp
spec:
  logLevel: info
  startDelay: 30
  stopDelay: 30
  nodeMaintenanceWindow:
    inProgress: false
    reusePVC: true
  backup:
    barmanObjectStore:
      s3Credentials:
        accessKeyId:
          key: ACCESS_KEY_ID
          name: aws-creds
        secretAccessKey:
          key: ACCESS_SECRET_KEY
          name: aws-creds
        inheritFromIAMRole: false
      destinationPath: 's3://swapnil-cnpg/CNP/'
    target: prefer-standby
  enableSuperuserAccess: true
  monitoring:
    disableDefaultQueries: false
    enablePodMonitor: false
  minSyncReplicas: 0
  postgresGID: 26
  replicationSlots:
    highAvailability:
      enabled: false
      slotPrefix: _cnp_
    updateInterval: 30
  primaryUpdateMethod: switchover
  bootstrap:
    initdb:
      import:
        schemaOnly: false
  failoverDelay: 0
  postgresUID: 26
  walStorage:
    resizeInUseVolumes: true
    size: 1Gi
  maxSyncReplicas: 0
  switchoverDelay: 40000000
  storage:
    resizeInUseVolumes: true
    size: 2Gi
  primaryUpdateStrategy: unsupervised
  instances: 1
  imagePullPolicy: Always
swapnilsuryawanshi@LAPTOP385PNIN ~ % oc apply -f cluster-sample.yaml -n cnp
cluster.postgresql.k8s.enterprisedb.io/cluster-sample created
A snippet of the pods after creating the cluster cluster-sample:
swapnilsuryawanshi@LAPTOP385PNIN ~ % oc get pods -n cnp
NAME                            READY   STATUS            RESTARTS   AGE
cluster-sample-1-initdb-qxnb2   0/1     PodInitializing   0          68s
swapnilsuryawanshi@LAPTOP385PNIN ~ % oc get pods -n cnp
NAME                            READY   STATUS            RESTARTS   AGE
cluster-sample-1                1/1     Running           0          5m22s
Step 3: Take a backup of the cluster cluster-sample.
swapnilsuryawanshi@LAPTOP385PNIN ~ % cat backup-sample.yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Backup
metadata:
  name: backup-sample
  namespace: cnp
spec:
  cluster:
    name: cluster-sample
  target: primary
swapnilsuryawanshi@LAPTOP385PNIN ~ % oc apply -f backup-sample.yaml -n cnp
backup.postgresql.k8s.enterprisedb.io/backup-sample created
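Besides on-demand Backup resources like the one above, the operator also supports recurring backups through a ScheduledBackup resource. A minimal sketch, where the name and the daily schedule are assumptions (the operator uses a six-field cron format that includes seconds):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ScheduledBackup
metadata:
  name: scheduled-backup-sample   # hypothetical name
  namespace: cnp
spec:
  schedule: "0 0 0 * * *"         # seconds minutes hours day month weekday: daily at midnight
  cluster:
    name: cluster-sample
```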
After creating a backup for the cluster cluster-sample, here's a snippet of the backup status.
swapnilsuryawanshi@LAPTOP385PNIN ~ % oc get backup -n cnp
NAME AGE CLUSTER PHASE ERROR
backup-sample 9s cluster-sample running
swapnilsuryawanshi@LAPTOP385PNIN ~ % oc get backup -n cnp
NAME AGE CLUSTER PHASE ERROR
backup-sample 3m22s cluster-sample completed
You can also verify the cluster-sample logs:
{"level":"info","ts":"2023-06-21T07:04:10Z","msg":"Backup started","backupName":"backup-sample","backupNamespace":"backup-sample","logging_pod":"cluster-sample-1","options":["--user","postgres","--name","backup-1687331050","--cloud-provider","aws-s3","s3://swapnil-cnpg/CNP/","cluster-sample"]}
:
:
{"level":"info","ts":"2023-06-21T07:05:27Z","msg":"Backup completed","backupName":"backup-sample","backupNamespace":"backup-sample","logging_pod":"cluster-sample-1"}
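Because the operator emits structured JSON logs like the ones above, you can filter them programmatically rather than by eye. A minimal local sketch, using a trimmed copy of the "Backup completed" event as a hard-coded sample line rather than live logs:

```shell
# Parse one structured log line and pull out the fields of interest.
line='{"level":"info","ts":"2023-06-21T07:05:27Z","msg":"Backup completed","backupName":"backup-sample"}'
printf '%s\n' "$line" | python3 -c 'import sys, json; e = json.load(sys.stdin); print(e["msg"], e["backupName"])'
# → Backup completed backup-sample
```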
Step 4: Once the backup completes successfully, verify the backup file and WAL files in your storage:
swapnilsuryawanshi@LAPTOP385PNIN ~ % aws s3 ls s3://swapnil-cnpg/ --recursive --human-readable --summarize
2023-06-21 10:30:02 0 Bytes CNP/
2023-06-21 12:35:28 1.3 KiB CNP/cluster-sample/base/20230621T070412/backup.info
2023-06-21 12:34:15 31.0 MiB CNP/cluster-sample/base/20230621T070412/data.tar
2023-06-21 12:25:42 16.0 MiB CNP/cluster-sample/wals/0000000100000000/000000010000000000000001
2023-06-21 12:30:40 16.0 MiB CNP/cluster-sample/wals/0000000100000000/000000010000000000000002
2023-06-21 12:34:15 16.0 MiB CNP/cluster-sample/wals/0000000100000000/000000010000000000000003
2023-06-21 12:34:32 16.0 MiB CNP/cluster-sample/wals/0000000100000000/000000010000000000000004
2023-06-21 12:34:58 348 Bytes CNP/cluster-sample/wals/0000000100000000/000000010000000000000004.00000028.backup
2023-06-21 12:35:00 16.0 MiB CNP/cluster-sample/wals/0000000100000000/000000010000000000000005
2023-06-21 12:40:00 16.0 MiB CNP/cluster-sample/wals/0000000100000000/000000010000000000000006
Total Objects: 10
Total Size: 127.0 MiB
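The WAL file names in the listing follow PostgreSQL's naming scheme: 8 hex digits for the timeline, then the high and low 32 bits of the segment number (with the default 16 MiB segment size there are 256 segments per "xlog id"). A sketch of reconstructing the fourth segment's name:

```shell
# Reconstruct a WAL segment file name: timeline, then the segment number
# split into high/low parts (256 segments per xlog id at 16 MiB segments).
timeline=1
segment=4
printf '%08X%08X%08X\n' "$timeline" $((segment / 256)) $((segment % 256))
# → 000000010000000000000004
```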
You can also download the backup.info file and inspect the backup metadata:
swapnilsuryawanshi@LAPTOP385PNIN Downloads % cat backup.info
backup_label='START WAL LOCATION: 0/4000028 (file 000000010000000000000004)\nCHECKPOINT LOCATION: 0/4000060\nBACKUP METHOD: streamed\nBACKUP FROM: primary\nSTART TIME: 2023-06-21 07:04:12 UTC\nLABEL: Barman backup cloud 20230621T070412\nSTART TIMELINE: 1\n'
backup_name=backup-1687331050
begin_offset=40
begin_time=2023-06-21 07:04:12.561049+00:00
begin_wal=000000010000000000000004
begin_xlog=0/4000028
compression=None
config_file=/var/lib/postgresql/data/pgdata/postgresql.conf
copy_stats={'total_time': 73.883539, 'number_of_workers': 2, 'analysis_time': 0, 'analysis_time_per_item': {'data': 0}, 'copy_time_per_item': {'data': 72.09514}, 'serialized_copy_time_per_item': {'data': 50.509505}, 'copy_time': 72.09514, 'serialized_copy_time': 50.509505}
deduplicated_size=None
end_offset=312
end_time=2023-06-21 07:04:14.946463+00:00
end_wal=000000010000000000000004
end_xlog=0/4000138
error=None
hba_file=/var/lib/postgresql/data/pgdata/pg_hba.conf
ident_file=/var/lib/postgresql/data/pgdata/pg_ident.conf
included_files=['/var/lib/postgresql/data/pgdata/custom.conf', '/var/lib/postgresql/data/pgdata/postgresql.auto.conf']
mode=None
pgdata=/var/lib/postgresql/data/pgdata
server_name=cloud
size=None
status=DONE
systemid=7247029303540998162
tablespaces=None
timeline=1
version=150003
xlog_segment_size=16777216
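The begin_offset and end_offset fields are simply the byte positions within the 16 MiB WAL segment, i.e. the low 24 bits of the corresponding LSN. You can check this with shell arithmetic:

```shell
# begin_xlog=0/4000028 → offset 0x28 = 40; end_xlog=0/4000138 → 0x138 = 312.
echo $((0x4000028 & 0xFFFFFF))   # → 40
echo $((0x4000138 & 0xFFFFFF))   # → 312
```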