Commit 7d04710

Merge branch 'release/2021-04-28' into main
Former-commit-id: 1be9c9f
2 parents fc0cfb0 + 5c24cbf

88 files changed: +3860 −449 lines


.gitignore (+1)

````diff
@@ -79,3 +79,4 @@ product_docs/content/
 product_docs/content_build/
 static/nginx_redirects.generated
 temp_kubernetes/
+advocacy_docs/kubernetes/cloud_native_postgresql/*.md.in
````

advocacy_docs/kubernetes/cloud_native_postgresql/api_reference.mdx (+288 −243)

Large diffs are not rendered by default.

advocacy_docs/kubernetes/cloud_native_postgresql/backup_recovery.mdx (+1 −1)

````diff
@@ -112,7 +112,7 @@ kubectl create secret generic minio-creds \
   --from-literal=MINIO_SECRET_KEY=<minio secret key here>
 ```
 
-!!! NOTE "Note"
+!!! Note
     Cloud Object Storage credentials will be used only by MinIO Gateway in this case.
 
 !!! Important
````

advocacy_docs/kubernetes/cloud_native_postgresql/cnp-plugin.mdx (+1 −1)

````diff
@@ -128,7 +128,7 @@ To get a certificate, you need to provide a name for the secret to store
 the credentials, the cluster name, and a user for this certificate
 
 ```shell
-kubectl cnp certificate cluster-cert --cnp-cluster cluster-example --cnp-user appuser
+kubectl cnp certificate cluster-cert --cnp-cluster cluster-example --cnp-user appuser
 ```
 
 After the secret is created, you can get it using `kubectl`
````

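The hunk above generates a TLS client certificate and stores it in a Kubernetes secret. Secret `data` fields are base64-encoded, so once retrieved (for example via `kubectl get secret cluster-cert -o jsonpath=...`) they need a local decode step. A minimal offline sketch of that decode, using a made-up payload rather than a real certificate:

```shell
# Made-up base64 value standing in for a secret's data field
# (a real cluster-cert secret would expose tls.crt / tls.key).
encoded='LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t'

# Decode it the same way you would a field extracted with jsonpath.
printf '%s' "$encoded" | base64 -d
# → -----BEGIN CERTIFICATE-----
```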
advocacy_docs/kubernetes/cloud_native_postgresql/e2e.mdx (+1)

````diff
@@ -44,6 +44,7 @@ and the following suite of E2E tests are performed on that cluster:
 * Restore from backup;
 * Pod affinity using `NodeSelector`;
 * Metrics collection;
+* Operator pod deletion;
 * Primary endpoint switch in case of failover in less than 10 seconds;
 * Primary endpoint switch in case of switchover in less than 20 seconds;
 * Recover from a degraded state in less than 60 seconds.
````

advocacy_docs/kubernetes/cloud_native_postgresql/index.mdx (+4 −1)

````diff
@@ -24,7 +24,9 @@ navigation:
   - rolling_update
   - backup_recovery
   - postgresql_conf
+  - operator_conf
   - storage
+  - labels_annotations
   - samples
   - monitoring
   - expose_pg_services
@@ -36,6 +38,7 @@ navigation:
   - container_images
   - operator_capability_levels
   - api_reference
+  - release_notes
   - credits
 
 ---
@@ -64,7 +67,7 @@ and is available under the [EnterpriseDB Limited Use License](https://www.enterp
 You can [evaluate Cloud Native PostgreSQL for free](evaluation.md).
 You need a valid license key to use Cloud Native PostgreSQL in production.
 
-!!! IMPORTANT
+!!! Important
     Currently, based on the [Operator Capability Levels model](operator_capability_levels.md),
     users can expect a **"Level III - Full Lifecycle"** set of capabilities from the
     Cloud Native PostgreSQL Operator.
````

advocacy_docs/kubernetes/cloud_native_postgresql/installation.mdx (+8 −2)

````diff
@@ -11,12 +11,12 @@ product: 'Cloud Native Operator'
 The operator can be installed like any other resource in Kubernetes,
 through a YAML manifest applied via `kubectl`.
 
-You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.2.1.yaml)
+You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.3.0.yaml)
 as follows:
 
 ```sh
 kubectl apply -f \
-  https://get.enterprisedb.io/cnp/postgresql-operator-1.2.1.yaml
+  https://get.enterprisedb.io/cnp/postgresql-operator-1.3.0.yaml
 ```
 
 Once you have run the `kubectl` command, Cloud Native PostgreSQL will be installed in your Kubernetes cluster.
@@ -92,3 +92,9 @@ the pod will be rescheduled on another node.
 
 As far as OpenShift is concerned, details might differ depending on the
 selected installation method.
+
+!!! Seealso "Operator configuration"
+    You can change the default behavior of the operator by overriding
+    some default options. For more information, please refer to the
+    ["Operator configuration"](operator_conf.md) section.
+
````

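The "Operator configuration" call-out added above refers to overriding operator defaults via operator_conf.md. As a hedged illustration of what such an override can look like, here is a ConfigMap sketch; the ConfigMap name, namespace, and the `INHERITED_ANNOTATIONS`/`INHERITED_LABELS` keys are assumptions drawn from the referenced section's topic, not contents of this commit:

```yaml
# Hypothetical sketch of an operator-level ConfigMap. All names and key
# spellings here are assumptions; consult the "Operator configuration"
# (operator_conf.md) section for the authoritative ones.
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-operator-controller-manager-config
  namespace: postgresql-operator-system
data:
  INHERITED_ANNOTATIONS: categories
  INHERITED_LABELS: environment, workload, app
```

After editing such a ConfigMap, the operator deployment typically needs a restart (for example with `kubectl rollout restart deployment`) before the new values take effect.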
advocacy_docs/kubernetes/cloud_native_postgresql/interactive_demo.mdx (+82 −17)

````diff
@@ -1,5 +1,5 @@
 ---
-title: "Installation, Configuration and Demployment Demo"
+title: "Installation, Configuration and Deployment Demo"
 description: "Walk through the process of installing, configuring and deploying the Cloud Native PostgreSQL Operator via a browser-hosted Minikube console"
 navTitle: Install, Configure, Deploy
 product: 'Cloud Native PostgreSQL Operator'
@@ -21,6 +21,7 @@ Want to see what it takes to get the Cloud Native PostgreSQL Operator up and run
 1. Installing the Cloud Native PostgreSQL Operator
 2. Deploying a three-node PostgreSQL cluster
 3. Installing and using the kubectl-cnp plugin
+4. Testing failover to verify the resilience of the cluster
 
 It will take roughly 5-10 minutes to work through.
 
@@ -64,7 +65,7 @@ You will see one node called `minikube`. If the status isn't yet "Ready", wait f
 Now that the Minikube cluster is running, you can proceed with Cloud Native PostgreSQL installation as described in the ["Installation"](installation.md) section:
 
 ```shell
-kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.2.0.yaml
+kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.3.0.yaml
 __OUTPUT__
 namespace/postgresql-operator-system created
 customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created
@@ -164,13 +165,13 @@ metadata:
   annotations:
     kubectl.kubernetes.io/last-applied-configuration: |
       {"apiVersion":"postgresql.k8s.enterprisedb.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"cluster-example","namespace":"default"},"spec":{"instances":3,"primaryUpdateStrategy":"unsupervised","storage":{"size":"1Gi"}}}
-  creationTimestamp: "2021-04-07T00:33:43Z"
+  creationTimestamp: "2021-04-27T15:11:21Z"
   generation: 1
   name: cluster-example
   namespace: default
-  resourceVersion: "1806"
+  resourceVersion: "2572"
   selfLink: /apis/postgresql.k8s.enterprisedb.io/v1/namespaces/default/clusters/cluster-example
-  uid: 38ddc347-3f2e-412a-aa14-a26904e1a49e
+  uid: 6a693046-a9d0-41b0-ac68-7a96d7e2ff07
 spec:
   affinity:
     topologyKey: ""
@@ -196,21 +197,27 @@ status:
   instances: 3
   instancesStatus:
     healthy:
-    - cluster-example-3
     - cluster-example-1
     - cluster-example-2
+    - cluster-example-3
   latestGeneratedNode: 3
   licenseStatus:
     isImplicit: true
     isTrial: true
-    licenseExpiration: "2021-05-07T00:33:43Z"
+    licenseExpiration: "2021-05-27T15:11:21Z"
     licenseStatus: Implicit trial license
     repositoryAccess: false
    valid: true
   phase: Cluster in healthy state
   pvcCount: 3
   readService: cluster-example-r
   readyInstances: 3
+  secretsResourceVersion:
+    applicationSecretVersion: "1479"
+    caSecretVersion: "1475"
+    replicationSecretVersion: "1477"
+    serverSecretVersion: "1476"
+    superuserSecretVersion: "1478"
   targetPrimary: cluster-example-1
   writeService: cluster-example-rw
 ```
@@ -238,7 +245,7 @@ curl -sSfL \
   sudo sh -s -- -b /usr/local/bin
 __OUTPUT__
 EnterpriseDB/kubectl-cnp info checking GitHub for latest tag
-EnterpriseDB/kubectl-cnp info found version: 1.2.1 for v1.2.1/linux/x86_64
+EnterpriseDB/kubectl-cnp info found version: 1.3.0 for v1.3.0/linux/x86_64
 EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp
 ```
@@ -247,7 +254,7 @@ The `cnp` command is now available in kubectl:
 ```shell
 kubectl cnp status cluster-example
 __OUTPUT__
-Cluster in healthy state
+Cluster in healthy state
 Name: cluster-example
 Namespace: default
 PostgreSQL Image: quay.io/enterprisedb/postgresql:13.2
@@ -256,23 +263,81 @@ Instances: 3
 Ready instances: 3
 
 Instances status
-Pod name           Current LSN  Received LSN  Replay LSN  System ID            Primary  Replicating  Replay paused  Pending restart
---------           -----------  ------------  ----------  ---------            -------  -----------  -------------  ---------------
-cluster-example-1  0/6000060                              6941211174657425425  ✓        ✗            ✗              ✗
-cluster-example-2  0/6000060    0/6000060                 6941211174657425425  ✗        ✓            ✗              ✗
-cluster-example-3  0/6000060    0/6000060                 6941211174657425425  ✗        ✓            ✗              ✗
+Pod name           Current LSN  Received LSN  Replay LSN  System ID            Primary  Replicating  Replay paused  Pending restart  Status
+--------           -----------  ------------  ----------  ---------            -------  -----------  -------------  ---------------  ------
+cluster-example-1  0/5000060                              6955855494195015697  ✓        ✗            ✗              ✗                OK
+cluster-example-2  0/5000060    0/5000060                 6955855494195015697  ✗        ✓            ✗              ✗                OK
+cluster-example-3  0/5000060    0/5000060                 6955855494195015697  ✗        ✓            ✗              ✗                OK
 ```
 
 !!! Note "There's more"
     See [the Cloud Native PostgreSQL Plugin page](cnp-plugin/) for more commands and options.
 
+## Testing failover
+
+As our status checks show, we're running two replicas - if something happens to the primary instance of PostgreSQL, the cluster will fail over to one of them. Let's demonstrate this by killing the primary pod:
+
+```shell
+kubectl delete pod --wait=false cluster-example-1
+__OUTPUT__
+pod "cluster-example-1" deleted
+```
+
+This simulates a hard shutdown of the server - a scenario where something has gone wrong.
+
+Now if we check the status...
+
+```shell
+kubectl cnp status cluster-example
+__OUTPUT__
+Failing over Failing over to cluster-example-2
+Name: cluster-example
+Namespace: default
+PostgreSQL Image: quay.io/enterprisedb/postgresql:13.2
+Primary instance: cluster-example-2
+Instances: 3
+Ready instances: 2
+
+Instances status
+Pod name           Current LSN  Received LSN  Replay LSN  System ID            Primary  Replicating  Replay paused  Pending restart  Status
+--------           -----------  ------------  ----------  ---------            -------  -----------  -------------  ---------------  ------
+cluster-example-1  -            -             -           -                    -        -            -              -                unable to upgrade connection: container not found ("postgres")
+cluster-example-2  0/7000230                              6955855494195015697  ✓        ✗            ✗              ✗                OK
+cluster-example-3  0/70000A0    0/70000A0                 6955855494195015697  ✗        ✓            ✗              ✗                OK
+```
+
+...the failover process has begun, with the second pod promoted to primary. Once the failed pod has restarted, it will become a replica of the new primary:
+
+```shell
+kubectl cnp status cluster-example
+__OUTPUT__
+Cluster in healthy state
+Name: cluster-example
+Namespace: default
+PostgreSQL Image: quay.io/enterprisedb/postgresql:13.2
+Primary instance: cluster-example-2
+Instances: 3
+Ready instances: 3
+
+Instances status
+Pod name           Current LSN  Received LSN  Replay LSN  System ID            Primary  Replicating  Replay paused  Pending restart  Status
+--------           -----------  ------------  ----------  ---------            -------  -----------  -------------  ---------------  ------
+cluster-example-1  0/7004268    0/7004268                 6955855494195015697  ✗        ✓            ✗              ✗                OK
+cluster-example-2  0/7004268                              6955855494195015697  ✓        ✗            ✗              ✗                OK
+cluster-example-3  0/7004268    0/7004268                 6955855494195015697  ✗        ✓            ✗              ✗                OK
+```
+
 
 ### Further reading
 
 This is all it takes to get a PostgreSQL cluster up and running, but of course there's a lot more possible - and certainly much more that is prudent before you should ever deploy in a production environment!
 
-- For information on using the Cloud Native PostgreSQL Operator to deploy on public cloud platforms, see the [Cloud Setup](cloud_setup/) section.
+- Deploying on public cloud platforms: see the [Cloud Setup](cloud_setup/) section.
+
+- Design goals and possibilities offered by the Cloud Native PostgreSQL Operator: check out the [Architecture](architecture/) and [Use cases](use_cases/) sections.
+
+- Configuring a secure and reliable system: read through the [Security](security/), [Failure Modes](failure_modes/) and [Backup and Recovery](backup_recovery/) sections.
+
+- Webinar: [Watch Gabriele Bartolini discuss and demonstrate Cloud Native PostgreSQL lifecycle management](https://www.youtube.com/watch?v=S-I9y-HnAnI)
 
-- For the design goals and possibilities offered by the Cloud Native PostgreSQL Operator, check out the [Architecture](architecture/) and [Use cases](use_cases/) sections.
+- Development: [Leonardo Cecchi writes about setting up a local environment using Cloud Native PostgreSQL for application development](https://www.enterprisedb.com/blog/cloud-native-postgresql-application-developers)
 
-- And for details on what it takes to configure a secure and reliable system, read through the [Security](security/), [Failure Modes](failure_modes/) and [Backup and Recovery](backup_recovery/) sections.
````
advocacy_docs/kubernetes/cloud_native_postgresql/labels_annotations.mdx (+84)

````diff
@@ -0,0 +1,84 @@
+---
+title: 'Labels and annotations'
+originalFilePath: 'src/labels_annotations.md'
+product: 'Cloud Native Operator'
+---
+
+Resources in Kubernetes are organized in a flat structure, with no hierarchical
+information or relationship between them. However, such resources and objects
+can be linked together and put in relationship through **labels** and
+**annotations**.
+
+!!! info
+    For more information, please refer to the Kubernetes documentation on
+    [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) and
+    [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/).
+
+In short:
+
+- an annotation is used to assign additional non-identifying information to
+  resources with the goal to facilitate integration with external tools
+- a label is used to group objects and query them through Kubernetes' native
+  selector capability
+
+You can select one or more labels and/or annotations you will use
+in your Cloud Native PostgreSQL deployments. Then you need to configure the operator
+so that when you define these labels and/or annotations in a cluster's metadata,
+they are automatically inherited by all resources created by it (including pods).
+
+!!! Note
+    Label and annotation inheritance is the technique adopted by Cloud Native
+    PostgreSQL in lieu of alternative approaches such as pod templates.
+
+## Pre-requisites
+
+By default, no label or annotation defined in the cluster's metadata is
+inherited by the associated resources.
+In order to enable label/annotation inheritance, you need to follow the
+instructions provided in the ["Operator configuration"](operator_conf.md) section.
+
+Below we will continue on that example and limit it to the following:
+
+- annotations: `categories`
+- labels: `app`, `environment`, and `workload`
+
+!!! Note
+    Feel free to select the names that most suit your context for both
+    annotations and labels. Remember that you can also use wildcards
+    in naming and adopt strategies like `mycompany/*` for all labels
+    or annotations starting with `mycompany/` to be inherited.
+
+## Defining cluster's metadata
+
+When defining the cluster, **before** any resource is deployed, you can
+properly set the metadata as follows:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+  annotations:
+    categories: database
+  labels:
+    environment: production
+    workload: database
+    app: sso
+spec:
+  # ... <snip>
+```
+
+Once the cluster is deployed, you can verify, for example, that the labels
+have been correctly set in the pods with:
+
+```shell
+kubectl get pods --show-labels
+```
+
+## Current limitations
+
+Cloud Native PostgreSQL does not currently support synchronization of labels
+or annotations after a resource has been created. For example, suppose you
+deploy a cluster. When you add a new annotation to be inherited and define it
+in the existing cluster, the operator will not automatically set it
+on the associated resources.
````
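The note in the new labels_annotations page suggests wildcard-style names such as `mycompany/*` to inherit every label or annotation sharing a prefix. The matching semantics can be pictured with ordinary shell glob patterns; this is a loose local analogy, not the operator's actual implementation:

```shell
# Hypothetical helper mimicking wildcard-style matching of label names
# against an inherited-prefix rule like "mycompany/*". Illustrative only;
# the operator's real matching logic may differ.
is_inherited() {
  case "$1" in
    mycompany/*) echo "inherited" ;;
    *)           echo "not-inherited" ;;
  esac
}

is_inherited "mycompany/team"   # → inherited
is_inherited "environment"      # → not-inherited
```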
advocacy_docs/kubernetes/cloud_native_postgresql/license_keys.mdx (+3)

````diff
@@ -44,6 +44,9 @@ kubectl rollout restart deployment -n [NAMESPACE_NAME_HERE] \
   postgresql-operator-controller-manager
 ```
 
+!!! Seealso "Operator configuration"
+    For more information, please refer to the ["Operator configuration"](operator_conf.md) section.
+
 The validity of the license key can be checked inside the cluster status.
 
 ```sh
````

advocacy_docs/kubernetes/cloud_native_postgresql/monitoring.mdx (+2 −2)

````diff
@@ -5,15 +5,15 @@ product: 'Cloud Native Operator'
 ---
 
 For each PostgreSQL instance, the operator provides an exporter of metrics for
-[Prometheus](https://prometheus.io/) via HTTP, on port 8000.
+[Prometheus](https://prometheus.io/) via HTTP, on port 9187.
 The operator comes with a predefined set of metrics, as well as a highly
 configurable and customizable system to define additional queries via one or
 more `ConfigMap` objects - and, future versions, `Secret` too.
 
 The exporter can be accessed as follows:
 
 ```shell
-curl http://<pod ip>:8000/metrics
+curl http://<pod ip>:9187/metrics
 ```
 
 All monitoring queries are:
````

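The monitoring change above moves the exporter to port 9187, which serves metrics in the Prometheus text exposition format. A small offline sketch of consuming that format follows; the sample metric lines are made up for illustration and are not taken from the real exporter's output:

```shell
# Made-up sample in Prometheus text exposition format. The real exporter
# at http://<pod ip>:9187/metrics serves the same layout: "# HELP"/"# TYPE"
# comment lines followed by "name value" samples.
sample='# HELP pg_up Whether the PostgreSQL instance is up
# TYPE pg_up gauge
pg_up 1
pg_static 100'

# Drop comment lines and print name=value pairs.
printf '%s\n' "$sample" | awk '!/^#/ { print $1 "=" $2 }'
# → pg_up=1
# → pg_static=100
```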