This document describes a basic setup for getting Kubernetes (K8s) up and running on a RHEL 8 server.
Add the Docker repository:
dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
Install the Kubernetes repository by creating a kubernetes.repo file under /etc/yum.repos.d/ with the following contents:
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
#exclude=kubelet kubeadm kubectl
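As an optional sanity check, you can confirm that both repositories are now visible to dnf:
dnf repolist | grep -Ei 'docker|kubernetes'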
Install the following packages using dnf:
dnf install -y cri-tools.x86_64 kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64 kubernetes-cni.x86_64 docker-buildx-plugin.x86_64 docker-ce.x86_64 docker-ce-cli.x86_64
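Optionally, verify that the key packages installed correctly by checking their versions:
kubeadm version
kubectl version --client
docker --version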
We also need to install helm, the cnp plugin, k3d, and kind.
To install helm, execute the following as root:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
To install the CNP plugin, execute the following as root:
curl -sSfL https://github.com/EnterpriseDB/kubectl-cnp/raw/main/install.sh | sudo sh -s -- -b /usr/local/bin
To install k3d, execute the following as root:
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
To install kind, execute the following as root:
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-amd64
# For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-arm64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
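With all four tools in place, a quick optional version check confirms the installs; note that kubectl cnp version is assumed here to mirror the upstream cnpg plugin's version subcommand:
helm version --short
kubectl cnp version
k3d version
kind version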
K8s / Docker should not be run as the root user, so we need to add any non-root users who will manage k8s clusters to the docker group, e.g.:
usermod -aG docker ec2-user
systemctl enable docker
systemctl start docker
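Note that the new group membership only takes effect at the user's next login. Alternatively, the user can start a subshell with the new group and confirm Docker is reachable without sudo:
newgrp docker
docker ps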
The server is now ready to deploy and test k8s clusters. Below is an example of setting up a three-node EPAS 16 database cluster with TDE (Transparent Data Encryption) enabled.
To begin with, let's create a YAML file that defines the cluster. We will call this file cluster_epas16_tde.yaml, and its contents will be:
---
apiVersion: v1
kind: Secret
metadata:
  name: tde-key
data:
  key: bG9zcG9sbGl0b3NkaWNlbnBpb3Bpb3Bpb2N1YW5kb3RpZW5lbmhhbWJyZWN1YW5kb3RpZW5lbmZyaW8=
---
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: tde
spec:
  instances: 3
  imageName: quay.io/enterprisedb/edb-postgres-advanced:16.3
  licenseKey: FN7QGAIBA5GGSY3FNZZWKAP7QAAACAYBARCGC5DBAEFAAAIBKIA77AQAAEAVGAP7QIAAAAAK76AQKAIC76CAAAAA7YAXH74AAH7ACBT3EJRXK43UN5WWK4RCHIRGKZDCEIWCE3TBNVSSEORCEIWCE3TBNVSXG4DBMNSSEORCEIWCE4DSN5SHKY3UEI5CEY3MN52WILLOMF2GS5TFFVYG643UM5ZGK43RNQRCYITFPBYGS4TZL5SGC5DFEI5CEMRQGI2S2MBWFUZDCVBQGA5DAMB2GAYFUIRMEJYHK3DMKNSWG4TFOQRDU6ZCOVZWK4TOMFWWKIR2EJREO3DKLJLTK6S2KMYXEWSYNN2GGMSWPFSG2VTZJRMEE6LCGJIT2IRMEJYGC43TO5XXEZBCHIRGIRSKI5LUQUTPLFKU42KUNZBGUYKGMN5FS3DHGBNEM232J5KTCNLBK5SHCWT2NRGWI3SNHURCYITTMVZHMZLSJRXWGYLUNFXW4IR2EIZG4ZDRFZUW6IT5PUATCAXLTP6JJ6SA2UPZ6QDW4CY4SL2BV7H6BS6QJFZVX26IU5TXN3XH7BU33RL4BBN6U25ZQ4YBDR5C2MRACMICKGTNB7OOXYL2GFbbbb=
  postgresql:
    epas:
      tde:
        enabled: true
        secretKeyRef:
          name: tde-key
          key: key
  storage:
    size: 1Gi
The license key shown above is not valid; substitute a valid key of your own.
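Note also that, as with any Kubernetes Secret, the value of key must be base64 encoded. To generate your own entry from a passphrase (my-tde-passphrase below is a placeholder; use your own secret value):
echo -n 'my-tde-passphrase' | base64 -w0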
So that we can start from scratch, we can clear down any existing clusters and helm repositories; this step is optional. As your non-root user, execute:
k3d cluster delete -a
helm repo remove edb
Next, we create the cluster and add the helm repository. The helm repository is only needed if we are using EDB proprietary images, e.g. PGD or EPAS. Execute the following commands:
k3d cluster create tde
helm repo add edb https://enterprisedb.github.io/edb-postgres-for-kubernetes-charts/
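Before going further, it is worth confirming that kubectl now points at the new cluster; k3d prefixes the kubeconfig context name with k3d-:
kubectl config current-context
# should report k3d-tde
kubectl get nodes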
We can search the helm repository as follows to list the available helm charts:
helm search repo edb
The output will resemble:
NAME                                         CHART VERSION  APP VERSION  DESCRIPTION
edb/edb-postgres-distributed-for-kubernetes  1.0.0          1.0.0        EDB Postgres Distributed for Kubernetes Helm Chart
edb/edb-postgres-for-kubernetes              0.21.2         1.23.2       EDB Postgres for Kubernetes Helm Chart
edb/cloud-native-postgresql                  0.13.0         1.15.0       Cloud Native Postgresql Helm Chart
In this example we will use the edb/edb-postgres-for-kubernetes chart, as follows:
helm upgrade --dependency-update \
  --install edb-epas-tde \
  --namespace postgresql-operator-system \
  --create-namespace \
  edb/edb-postgres-for-kubernetes \
  --set image.imageCredentials.username=k8s_enterprise_pgd \
  --set image.imageCredentials.password={repos2_company_token}
Using kubectl, you can confirm the status of the operator as follows:
kubectl get deployments -A
The output should look similar to that shown below:
NAMESPACE                    NAME                                       READY  UP-TO-DATE  AVAILABLE  AGE
kube-system                  local-path-provisioner                     1/1    1           1          48m
kube-system                  coredns                                    1/1    1           1          48m
kube-system                  traefik                                    1/1    1           1          48m
postgresql-operator-system   edb-epas-tde-edb-postgres-for-kubernetes   1/1    1           1          48m
kube-system                  metrics-server                             1/1    1           1          48m
The key entry being:
postgresql-operator-system edb-epas-tde-edb-postgres-for-kubernetes 1/1 1
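Alternatively, rather than polling kubectl get deployments, you can block until the operator deployment has finished rolling out:
kubectl rollout status deployment/edb-epas-tde-edb-postgres-for-kubernetes -n postgresql-operator-system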
Finally, we are ready to apply the YAML file using kubectl, as follows:
kubectl apply -f cluster_epas16_tde.yaml
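While the instances start up, you can also watch the Cluster resource itself; a minimal check, using the resource name tde defined in the YAML above:
kubectl get cluster tde
This reports the number of ready instances and the overall cluster phase.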
All being well, after a short wait, you can check which pods are running, and their state using the following command:
kubectl get pods -A
The output should resemble:
NAMESPACE                    NAME                                                        READY  STATUS     RESTARTS  AGE
kube-system                  local-path-provisioner-6c86858495-cp9kh                     1/1    Running    0         53m
kube-system                  coredns-6799fbcd5-9fhpm                                     1/1    Running    0         53m
kube-system                  helm-install-traefik-crd-ss46f                              0/1    Completed  0         53m
kube-system                  helm-install-traefik-s2674                                  0/1    Completed  1         53m
kube-system                  svclb-traefik-9e4dd8e1-2k7rq                                2/2    Running    0         53m
kube-system                  traefik-f4564c4f4-rbtfh                                     1/1    Running    0         53m
postgresql-operator-system   edb-epas-tde-edb-postgres-for-kubernetes-59d5949979-zrs4w   1/1    Running    0         53m
kube-system                  metrics-server-54fd9b65b-mtkf5                              1/1    Running    0         53m
default                      tde-1                                                       1/1    Running    0         51m
default                      tde-2                                                       1/1    Running    0         51m
default                      tde-3                                                       1/1    Running    0         51m
As can be seen, all three tde pods are running in the default namespace.
We can now check on the pods themselves, using psql as shown below.
kubectl exec --stdin --tty tde-1 -- psql -c "select version();"
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
version
-----------------------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 16.3 (EnterpriseDB Advanced Server 16.3.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-20), 64-bit
(1 row)
kubectl exec --stdin --tty tde-1 -- psql -c "select data_encryption_version from pg_control_init();"
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
data_encryption_version
-------------------------
1
(1 row)
A data_encryption_version value of 1 indicates that TDE is enabled.
To check whether Redwood (Oracle compatibility) mode is enabled, simply query the postgres parameters for redwood:
kubectl exec --stdin --tty tde-1 -- psql -c "select name,setting from pg_settings where name like '%redwood%';"
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
name | setting
----------------------------+---------
edb_redwood_date | on
edb_redwood_greatest_least | on
edb_redwood_raw_names | off
edb_redwood_strings | on
(4 rows)
Again, in our example, Redwood is enabled.
Using the cnp plugin installed above, we can gather a great deal of information about the status of the cluster using a command of the following form:
kubectl cnp status cluster_name -n namespace
In our case, the output is:
kubectl cnp status tde -n default
Cluster Summary
Name: tde
Namespace: default
System ID: 7385132692532666402
PostgreSQL Image: quay.io/enterprisedb/edb-postgres-advanced:16.3
Primary instance: tde-1
Primary start time: 2024-06-27 10:46:58 +0000 UTC (uptime 1h2m3s)
Status: Cluster in healthy state
Instances: 3
Ready instances: 3
Current Write LSN: 0/A000060 (Timeline: 1 - WAL File: 00000001000000000000000A)
Certificates Status
Certificate Name  Expiration Date                Days Left Until Expiration
----------------  ---------------                --------------------------
tde-ca            2024-09-25 10:41:12 +0000 UTC  89.95
tde-replication   2024-09-25 10:41:12 +0000 UTC  89.95
tde-server        2024-09-25 10:41:12 +0000 UTC  89.95
Continuous Backup status
Not configured
Physical backups
No running physical backups found
Streaming Replication status
Replication Slots Enabled
Name   Sent LSN   Write LSN  Flush LSN  Replay LSN  Write Lag  Flush Lag  Replay Lag  State      Sync State  Sync Priority  Replication Slot
----   --------   ---------  ---------  ----------  ---------  ---------  ----------  -----      ----------  -------------  ----------------
tde-2  0/A000060  0/A000060  0/A000060  0/A000060   00:00:00   00:00:00   00:00:00    streaming  async       0              active
tde-3  0/A000060  0/A000060  0/A000060  0/A000060   00:00:00   00:00:00   00:00:00    streaming  async       0              active
Unmanaged Replication Slot Status
No unmanaged replication slots found
Managed roles status
No roles managed
Tablespaces status
No managed tablespaces
Pod Disruption Budgets status
Name         Role     Expected Pods  Current Healthy  Minimum Desired Healthy  Disruptions Allowed
----         ----     -------------  ---------------  -----------------------  -------------------
tde-primary  primary  1              1                1                        0
tde          replica  2              2                1                        1
Instances status
Name   Database Size  Current LSN  Replication role  Status  QoS         Manager Version  Node
----   -------------  -----------  ----------------  ------  ---         ---------------  ----
tde-1  69 MB          0/A000060    Primary           OK      BestEffort  1.23.2           k3d-tde-server-0
tde-2  68 MB          0/A000060    Standby (async)   OK      BestEffort  1.23.2           k3d-tde-server-0
tde-3  68 MB          0/A000060    Standby (async)   OK      BestEffort  1.23.2           k3d-tde-server-0
As can be seen from the output, the cluster is in a healthy state, with all three instances ready and both standbys streaming; everything is looking exactly as it should.
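As a final check, you may wish to connect to the primary from outside the pods. By convention the operator creates read-write and read-only services named after the cluster (here tde-rw and tde-ro) and stores the generated password for the default app user in the tde-app secret; a sketch, assuming those defaults:
# retrieve the generated password for the "app" user
kubectl get secret tde-app -o jsonpath='{.data.password}' | base64 -d; echo
# forward the read-write service to localhost and connect
kubectl port-forward svc/tde-rw 5432:5432 &
psql -h localhost -p 5432 -U app app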