---
title: "Installation, Configuration and Deployment Demo"
description: "Walk through the process of installing, configuring and deploying the Cloud Native PostgreSQL Operator via a browser-hosted Kubernetes environment"
navTitle: Install, Configure, Deploy
product: 'Cloud Native Operator'
platform: ubuntu
tags:
  - postgresql
  - cloud-native-postgresql-operator
  - kubernetes
  - k3d
  - live-demo
katacodaPanel:
  scenario: ubuntu:2004
  initializeCommand: clear; echo -e \\\\033[1mPreparing k3d and kubectl...\\\\n\\\\033[0m; snap install kubectl --classic; wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash; clear; echo -e \\\\033[2mk3d is ready\\ - enjoy Kubernetes\\!\\\\033[0m;
  codelanguages: shell, yaml
showInteractiveBadge: true
---
It will take roughly 5-10 minutes to work through.

<KatacodaPanel />

Once [k3d](https://k3d.io/) is ready, we need to start a cluster:

```shell
k3d cluster create
__OUTPUT__
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created volume 'k3d-k3s-default-images'
INFO[0000] Starting new tools node...
INFO[0000] Pulling image 'docker.io/rancher/k3d-tools:5.1.0'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Pulling image 'docker.io/rancher/k3s:v1.21.5-k3s2'
INFO[0002] Starting Node 'k3d-k3s-default-tools'
INFO[0006] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0007] Pulling image 'docker.io/rancher/k3d-proxy:5.1.0'
INFO[0011] Using the k3d-tools node to gather environment information
INFO[0011] HostIP: using network gateway...
INFO[0011] Starting cluster 'k3s-default'
INFO[0011] Starting servers...
INFO[0011] Starting Node 'k3d-k3s-default-server-0'
INFO[0018] Starting agents...
INFO[0018] Starting helpers...
INFO[0018] Starting Node 'k3d-k3s-default-serverlb'
INFO[0024] Injecting '172.19.0.1 host.k3d.internal' into /etc/hosts of all nodes...
INFO[0024] Injecting records for host.k3d.internal and for 2 network members into CoreDNS configmap...
INFO[0025] Cluster 'k3s-default' created successfully!
INFO[0025] You can now use it like this:
kubectl cluster-info
```

This will create the Kubernetes cluster, and you will be ready to use it.
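
The tail of the k3d output above suggests the next step. As an optional sanity check (not part of the original walkthrough), you can run `kubectl cluster-info` to confirm the API server is reachable:

```shell
# k3d merges the new cluster's credentials into your kubeconfig,
# so kubectl can talk to it without further configuration
kubectl cluster-info
```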

Verify that it works with the following command:

```shell
kubectl get nodes
__OUTPUT__
NAME                       STATUS   ROLES                  AGE   VERSION
k3d-k3s-default-server-0   Ready    control-plane,master   16s   v1.21.5+k3s2
```

You will see one node called `k3d-k3s-default-server-0`. If the status isn't yet "Ready", wait for a few seconds and run the command above again.

## Install Cloud Native PostgreSQL

Now that the Kubernetes cluster is running, you can proceed with Cloud Native PostgreSQL installation as described in the ["Installation and upgrades"](installation_upgrade.md) section:

```shell
kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.10.0.yaml
__OUTPUT__
namespace/postgresql-operator-system created
customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/clusters.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/poolers.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/scheduledbackups.postgresql.k8s.enterprisedb.io created
serviceaccount/postgresql-operator-manager created
clusterrole.rbac.authorization.k8s.io/postgresql-operator-manager created
clusterrolebinding.rbac.authorization.k8s.io/postgresql-operator-manager-rolebinding created
service/postgresql-operator-webhook-service created
deployment.apps/postgresql-operator-controller-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-validating-webhook-configuration created
```

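Before creating a cluster, you can optionally wait for the operator deployment to finish rolling out. This check isn't part of the original demo, but it avoids racing a webhook that isn't ready yet (the deployment name comes from the `kubectl apply` output above):

```shell
# Block until the operator's controller manager reports all replicas ready
kubectl rollout status deployment \
  -n postgresql-operator-system \
  postgresql-operator-controller-manager
```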

```yaml
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"postgresql.k8s.enterprisedb.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"cluster-example","namespace":"default"},"spec":{"instances":3,"primaryUpdateStrategy":"unsupervised","storage":{"size":"1Gi"}}}
  creationTimestamp: "2021-11-12T05:56:37Z"
  generation: 1
  name: cluster-example
  namespace: default
  resourceVersion: "2005"
  uid: 621d46bc-8a3b-4039-a9f3-6f21ab4ef68d
spec:
  affinity:
    podAntiAffinityType: preferred
    topologyKey: ""
  bootstrap:
    initdb:
      database: app
      encoding: UTF8
      localeCType: C
      localeCollate: C
      owner: app
  enableSuperuserAccess: true
  imageName: quay.io/enterprisedb/postgresql:14.1
  imagePullPolicy: IfNotPresent
  instances: 3
  logLevel: info
  maxSyncReplicas: 0
  minSyncReplicas: 0
  postgresGID: 26
  postgresUID: 26
  postgresql:
    parameters:
      log_destination: csvlog
      wal_keep_size: 512MB
  primaryUpdateStrategy: unsupervised
  resources: {}
  startDelay: 30
  stopDelay: 30
  storage:
    resizeInUseVolumes: true
    size: 1Gi
status:
  certificates:
    clientCASecret: cluster-example-ca
    expirations:
      cluster-example-ca: 2022-02-10 05:51:37 +0000 UTC
      cluster-example-replication: 2022-02-10 05:51:37 +0000 UTC
      cluster-example-server: 2022-02-10 05:51:37 +0000 UTC
    replicationTLSSecret: cluster-example-replication
    serverAltDNSNames:
    - cluster-example-rw
    - cluster-example-ro.default.svc
    serverCASecret: cluster-example-ca
    serverTLSSecret: cluster-example-server
  cloudNativePostgresqlCommitHash: f616a0d
  cloudNativePostgresqlOperatorHash: 02abbad9215f5118906c0c91d61bfbdb33278939861d2e8ea21978ce48f37421
  configMapResourceVersion: {}
  currentPrimary: cluster-example-1
  currentPrimaryTimestamp: "2021-11-12T05:57:15Z"
  healthyPVC:
  - cluster-example-1
  - cluster-example-2
  licenseStatus:
    isImplicit: true
    isTrial: true
    licenseExpiration: "2021-12-12T05:56:37Z"
    licenseStatus: Implicit trial license
    repositoryAccess: false
    valid: true
  phase: Cluster in healthy state
  poolerIntegrations:
    pgBouncerIntegration: {}
  pvcCount: 3
  readService: cluster-example-r
  readyInstances: 3
  secretsResourceVersion:
    applicationSecretVersion: "934"
    clientCaSecretVersion: "930"
    replicationSecretVersion: "932"
    serverCaSecretVersion: "930"
    serverSecretVersion: "931"
    superuserSecretVersion: "933"
  targetPrimary: cluster-example-1
  targetPrimaryTimestamp: "2021-11-12T05:56:38Z"
  writeService: cluster-example-rw
```

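Alongside the full YAML above, a quicker way to watch the cluster converge is to list its pods. This optional check assumes the `postgresql` label, which the operator applies to every instance pod it creates:

```shell
# Each instance runs in its own pod, labeled with the cluster's name
kubectl get pods -l postgresql=cluster-example
```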

## Install the kubectl-cnp plugin

Cloud Native PostgreSQL provides [a plugin for kubectl](cnp-plugin) to manage a cluster in Kubernetes, along with a script to install it:

```shell
curl -sSfL \
  https://github.com/EnterpriseDB/kubectl-cnp/raw/main/install.sh | \
  sudo sh -s -- -b /usr/local/bin
__OUTPUT__
EnterpriseDB/kubectl-cnp info checking GitHub for latest tag
EnterpriseDB/kubectl-cnp info found version: 1.10.0 for v1.10.0/linux/x86_64
EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp
```

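If you'd like to confirm the installation before using it, `kubectl` can list the plugins it has discovered on your PATH (an optional check, not in the original demo):

```shell
# kubectl finds plugins by their "kubectl-" executable name prefix,
# so the file installed above appears as the "cnp" subcommand
kubectl plugin list
```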

```shell
kubectl cnp status cluster-example
__OUTPUT__
Cluster in healthy state
Name:              cluster-example
Namespace:         default
PostgreSQL Image:  quay.io/enterprisedb/postgresql:14.1
Primary instance:  cluster-example-1
Instances:         3
Ready instances:   3
Current Timeline:  1
Current WAL file:  000000010000000000000005

Continuous Backup status
Not configured

Instances status
Manager Version  Pod name           Current LSN  Received LSN  Replay LSN  System ID            Primary  Replicating  Replay paused  Pending restart  Status
---------------  --------           -----------  ------------  ----------  ---------            -------  -----------  -------------  ---------------  ------
1.10.0           cluster-example-1  0/5000060                              7029558504442904594  ✓        ✗            ✗              ✗                OK
1.10.0           cluster-example-2               0/5000060     0/5000060   7029558504442904594  ✗        ✓            ✗              ✗                OK
1.10.0           cluster-example-3               0/5000060     0/5000060   7029558504442904594  ✗        ✓            ✗              ✗                OK
```

!!! Note "There's more"

Now if we check the status...

```shell
kubectl cnp status cluster-example
__OUTPUT__
Failing over Failing over to cluster-example-2
Name:              cluster-example
Namespace:         default
PostgreSQL Image:  quay.io/enterprisedb/postgresql:14.1
Primary instance:  cluster-example-2
Instances:         3
Ready instances:   2
Current Timeline:  2
Current WAL file:  000000020000000000000006

Continuous Backup status
Not configured

Instances status
Manager Version  Pod name           Current LSN  Received LSN  Replay LSN  System ID            Primary  Replicating  Replay paused  Pending restart  Status
---------------  --------           -----------  ------------  ----------  ---------            -------  -----------  -------------  ---------------  ------
1.10.0           cluster-example-3               0/60000A0     0/60000A0   7029558504442904594  ✗        ✗            ✗              ✗                OK
45               cluster-example-1  -            -             -           -                    -        -            -              -                pod not available
1.10.0           cluster-example-2  0/6000F58                              7029558504442904594  ✓        ✗            ✗              ✗                OK
```

...the failover process has begun, with the second pod promoted to primary. Once the failed pod has restarted, it will become a replica of the new primary:

```shell
kubectl cnp status cluster-example
__OUTPUT__
Cluster in healthy state
Name:              cluster-example
Namespace:         default
PostgreSQL Image:  quay.io/enterprisedb/postgresql:14.1
Primary instance:  cluster-example-2
Instances:         3
Ready instances:   3
Current Timeline:  2
Current WAL file:  000000020000000000000006

Continuous Backup status
Not configured

Instances status
Manager Version  Pod name           Current LSN  Received LSN  Replay LSN  System ID            Primary  Replicating  Replay paused  Pending restart  Status
---------------  --------           -----------  ------------  ----------  ---------            -------  -----------  -------------  ---------------  ------
1.10.0           cluster-example-3               0/60000A0     0/60000A0   7029558504442904594  ✗        ✗            ✗              ✗                OK
1.10.0           cluster-example-2  0/6004CA0                              7029558504442904594  ✓        ✗            ✗              ✗                OK
1.10.0           cluster-example-1               0/6004CA0     0/6004CA0   7029558504442904594  ✗        ✓            ✗              ✗                OK
```

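With the cluster healthy again, you could also connect to it directly. As a hedged sketch (not part of the original demo, and assuming the instance image's default local peer authentication for the `postgres` user), you can open `psql` inside the current primary's pod:

```shell
# Ask the current primary whether it is in recovery; a primary returns "f".
# Substitute the pod name shown as "Primary instance" in the status output.
kubectl exec -ti cluster-example-2 -- psql -U postgres -c 'SELECT pg_is_in_recovery();'
```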

### Further reading

This is all it takes to get a PostgreSQL cluster up and running, but of course there's a lot more possible - and certainly much more that is prudent before you should ever deploy in a production environment!

- Design goals and possibilities offered by the Cloud Native PostgreSQL Operator: check out the [Architecture](architecture/) and [Use cases](use_cases/) sections.

- Configuring a secure and reliable system: read through the [Security](security/), [Failure Modes](failure_modes/) and [Backup and Recovery](backup_recovery/) sections.