
Commit 0326062

Merge pull request #1048 from EnterpriseDB/release/2021-03-09
Production Release 2021-03-09 Former-commit-id: 251f073
2 parents f140662 + 0229dba commit 0326062

355 files changed, +6447 -26593 lines changed


.github/workflows/deploy-develop.yml (+1 -1)

@@ -47,7 +47,7 @@ jobs:
 NODE_OPTIONS: --max-old-space-size=4096
 ALGOLIA_API_KEY: ${{ secrets.ALGOLIA_API_KEY }}
 ALGOLIA_APP_ID: ${{ secrets.ALGOLIA_APP_ID }}
-ALGOLIA_INDEX_NAME: edb-staging
+ALGOLIA_INDEX_NAME: edb-docs-staging
 INDEX_ON_BUILD: true

 - name: Netlify deploy

.github/workflows/deploy-main.yml (+1 -1)

@@ -47,7 +47,7 @@ jobs:
 NODE_OPTIONS: --max-old-space-size=4096
 ALGOLIA_API_KEY: ${{ secrets.ALGOLIA_API_KEY }}
 ALGOLIA_APP_ID: ${{ secrets.ALGOLIA_APP_ID }}
-ALGOLIA_INDEX_NAME: edb
+ALGOLIA_INDEX_NAME: edb-docs
 GTM_ID: GTM-5W8M67
 INDEX_ON_BUILD: true

.github/workflows/update-pdfs-on-develop.yml (-1)

@@ -13,7 +13,6 @@ jobs:
 - uses: actions/checkout@v2
 with:
 ref: develop
-fetch-depth: 0 # fetch whole repo so git-restore-mtime can work
 ssh-key: ${{ secrets.ADMIN_SECRET_SSH_KEY }}
 - name: Update submodules
 run: git submodule update --init --remote

README.md (+1 -3)

@@ -36,8 +36,6 @@ We recommend using MacOS to work with the EDB Docs application.
 
 1. Pull the shared icon files down with `git submodule update --init`.
 
-1. Now select which sources you want with `yarn config-sources`.
-
 1. And finally, you can start up the site locally with `yarn develop`, which should make it live at `http://localhost:8000/`. Huzzah!
 
 ### Installation of PDF / Doc Conversion Tools (optional)
@@ -64,7 +62,7 @@ If you are a Windows user, you can work with Docs without installing it locally
 
 ### Configuring Which Sources are Loaded
 
-When doing local development of the site or advocacy content, you may want to load other sources to experience the full site. The more sources you load, the slower the site will build, so it's recommended to typically only load the content you'll be working with the most.
+By default, all document sources will be loaded into the app during development. It's possible to set up a configuration file, `dev-sources.json`, to only load specific sources, but this is not required.
 
 #### `yarn config-sources`

advocacy_docs/kubernetes/cloud_native_operator/architecture.mdx (+9)

@@ -13,6 +13,7 @@ Cloud Native PostgreSQL currently supports clusters based on asynchronous and sy
 * One primary, with optional multiple hot standby replicas for High Availability
 * Available services for applications:
   * `-rw`: applications connect to the only primary instance of the cluster
+  * `-ro`: applications connect only to hot standby replicas for read-only workloads
   * `-r`: applications connect to any of the instances for read-only workloads
 * Shared-nothing architecture recommended for better resilience of the PostgreSQL cluster:
   * PostgreSQL instances should reside on different Kubernetes worker nodes and share only the network
@@ -45,12 +46,17 @@ The following diagram shows the architecture:
 
 ![Applications reading from any instance in round robin](./images/architecture-r.png)
 
+Applications can also access hot standby replicas through the `-ro` service made available
+by the operator. This service enables the application to offload read-only queries from the
+primary node.
+
 ## Application deployments
 
 Applications are supposed to work with the services created by Cloud Native PostgreSQL
 in the same Kubernetes cluster:
 
 * `[cluster name]-rw`
+* `[cluster name]-ro`
 * `[cluster name]-r`
 
 Those services are entirely managed by the Kubernetes cluster and
@@ -97,6 +103,9 @@ you can use the following environment variables in your applications:
 * `PG_DATABASE_R_SERVICE_HOST`: the IP address of the service
   pointing to all the PostgreSQL instances for read-only workloads
 
+* `PG_DATABASE_RO_SERVICE_HOST`: the IP address of the
+  service pointing to all hot-standby replicas of the cluster
+
 * `PG_DATABASE_RW_SERVICE_HOST`: the IP address of the
   service pointing to the *primary* instance of the cluster
 
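
Editor's note: as a quick way to exercise the new `-ro` service, the snippet below is a minimal sketch. It assumes the QuickStart cluster name `cluster-example` and an `app` user/database, which are illustrative and not taken from this diff.

```shell
# The operator creates one service per access mode
kubectl get service cluster-example-rw cluster-example-ro cluster-example-r

# Run a throwaway client pod and connect through the read-only service,
# which load-balances across the hot standby replicas only
kubectl run psql-ro --rm -it --image=quay.io/enterprisedb/postgresql:13 -- \
  psql -h cluster-example-ro -U app app
```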

advocacy_docs/kubernetes/cloud_native_operator/before_you_start.mdx (+1 -1)

@@ -11,7 +11,7 @@ specific to Kubernetes and PostgreSQL.
 
 | Resource | Description |
 |----------|-------------|
-| [Node](https://kubernetes.io/docs/concepts/architecture/nodes/) | A *node* is a worker machine in Kubernetes, either virtual or physical, where all services necessary to run pods are managed by the master(s). |
+| [Node](https://kubernetes.io/docs/concepts/architecture/nodes/) | A *node* is a worker machine in Kubernetes, either virtual or physical, where all services necessary to run pods are managed by the control plane node(s). |
 | [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/) | A *pod* is the smallest computing unit that can be deployed in a Kubernetes cluster and is composed of one or more containers that share network and storage. |
 | [Service](https://kubernetes.io/docs/concepts/services-networking/service/) | A *service* is an abstraction that exposes as a network service an application that runs on a group of pods and standardizes important features such as service discovery across applications, load balancing, failover, and so on. |
 | [Secret](https://kubernetes.io/docs/concepts/configuration/secret/) | A *secret* is an object that is designed to store small amounts of sensitive data such as passwords, access keys, or tokens, and use them in pods. |
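
Editor's note: for readers new to these objects, the standard `kubectl` verbs are enough to inspect each of them in a running cluster. A short sketch, assuming only a working kubeconfig:

```shell
# Worker machines managed by the control plane
kubectl get nodes

# Pods, services, and secrets in the current namespace
kubectl get pods
kubectl get services
kubectl get secrets
```
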
advocacy_docs/kubernetes/cloud_native_operator/cnp-plugin.mdx (new file, +144)

@@ -0,0 +1,144 @@
+---
+title: 'Cloud Native PostgreSQL Plugin'
+originalFilePath: 'src/cnp-plugin.md'
+product: 'Cloud Native Operator'
+---
+
+Cloud Native PostgreSQL provides a plugin for `kubectl` to manage a cluster in Kubernetes.
+The plugin also works with `oc` in an OpenShift environment.
+
+## Install
+
+You can install the plugin in your system with:
+
+```sh
+curl -sSfL \
+  https://github.com/EnterpriseDB/kubectl-cnp/raw/main/install.sh | \
+  sudo sh -s -- -b /usr/local/bin
+```
+
+## Use
+
+Once the plugin is installed, you can start using it like this:
+
+```shell
+kubectl cnp <command> <args...>
+```
+
+### Status
+
+The `status` command provides a brief overview of the current status of your cluster.
+
+```shell
+kubectl cnp status cluster-example
+```
+
+```shell
+Cluster in healthy state
+Name: cluster-example
+Namespace: default
+PostgreSQL Image: quay.io/enterprisedb/postgresql:13
+Primary instance: cluster-example-1
+Instances: 3
+Ready instances: 3
+
+Instances status
+Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart
+-------- ----------- ------------ ---------- --------- ------- ----------- ------------- ---------------
+cluster-example-1 0/6000060 6927251808674721812 ✓ ✗ ✗ ✗
+cluster-example-2 0/6000060 0/6000060 6927251808674721812 ✗ ✓ ✗ ✗
+cluster-example-3 0/6000060 0/6000060 6927251808674721812 ✗ ✓ ✗ ✗
+
+```
+
+You can also get a more verbose version of the status by adding `--verbose` or just `-v`:
+
+```shell
+kubectl cnp status cluster-example --verbose
+```
+
+```shell
+Cluster in healthy state
+Name: cluster-example
+Namespace: default
+PostgreSQL Image: quay.io/enterprisedb/postgresql:13
+Primary instance: cluster-example-1
+Instances: 3
+Ready instances: 3
+
+PostgreSQL Configuration
+archive_command = '/controller/manager wal-archive %p'
+archive_mode = 'on'
+archive_timeout = '5min'
+full_page_writes = 'on'
+hot_standby = 'true'
+listen_addresses = '*'
+logging_collector = 'off'
+max_parallel_workers = '32'
+max_replication_slots = '32'
+max_worker_processes = '32'
+port = '5432'
+ssl = 'on'
+ssl_ca_file = '/tmp/ca.crt'
+ssl_cert_file = '/tmp/server.crt'
+ssl_key_file = '/tmp/server.key'
+unix_socket_directories = '/var/run/postgresql'
+wal_keep_size = '512MB'
+wal_level = 'logical'
+wal_log_hints = 'on'
+
+
+PostgreSQL HBA Rules
+# Grant local access
+local all all peer
+
+# Require client certificate authentication for the streaming_replica user
+hostssl postgres streaming_replica all cert clientcert=1
+hostssl replication streaming_replica all cert clientcert=1
+
+# Otherwise use md5 authentication
+host all all all md5
+
+
+Instances status
+Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart
+-------- ----------- ------------ ---------- --------- ------- ----------- ------------- ---------------
+cluster-example-1 0/6000060 6927251808674721812 ✓ ✗ ✗ ✗
+cluster-example-2 0/6000060 0/6000060 6927251808674721812 ✗ ✓ ✗ ✗
+cluster-example-3 0/6000060 0/6000060 6927251808674721812 ✗ ✓ ✗ ✗
+```
+
+The command also supports output in `yaml` and `json` format.
+
+### Promote
+
+The `promote` command promotes a pod in the cluster to primary, so you
+can start maintenance work or test a switchover scenario in your cluster:
+
+```shell
+kubectl cnp promote cluster-example cluster-example-2
+```
+
+### Certificates
+
+Clusters created using the Cloud Native PostgreSQL operator work with a CA to sign
+a TLS authentication certificate.
+
+To get a certificate, you need to provide a name for the secret to store
+the credentials, the cluster name, and a user for this certificate:
+
+```shell
+kubectl cnp certificate cluster-cert --cnp-cluster cluster-example --cnp-user appuser
+```
+
+After the secret is created, you can get it using `kubectl`:
+
+```shell
+kubectl get secret cluster-cert
+```
+
+You can view its content in plain text with the following command:
+
+```shell
+kubectl get secret cluster-cert -o json | jq -r '.data | map(@base64d) | .[]'
+```
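
Editor's note: the generated secret can then be fed to a client for certificate authentication. A hedged sketch, assuming the secret uses the standard `kubernetes.io/tls` key names (`tls.crt`/`tls.key`), which is not confirmed by this diff:

```shell
# Extract the client certificate and key from the generated secret
kubectl get secret cluster-cert -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt
kubectl get secret cluster-cert -o jsonpath='{.data.tls\.key}' | base64 -d > tls.key

# They could then be passed to psql as the appuser role, for example:
# psql "host=cluster-example-rw user=appuser dbname=app sslmode=verify-ca sslcert=tls.crt sslkey=tls.key sslrootcert=ca.crt"
```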

advocacy_docs/kubernetes/cloud_native_operator/credits.mdx (+8 -6)

@@ -7,13 +7,15 @@ product: 'Cloud Native Operator'
 Cloud Native PostgreSQL (Operator for Kubernetes/OpenShift) has been designed,
 developed, and tested by the EnterpriseDB Cloud Native team:
 
-- Leonardo Cecchi
-- Marco Nenciarini
-- Jonathan Gonzalez
-- Francesco Canovai
+- Gabriele Bartolini
 - Jonathan Battiato
+- Francesco Canovai
+- Leonardo Cecchi
+- Valerio Del Sarto
 - Niccolò Fei
-- Devin Nemec
+- Jonathan Gonzalez
+- Danish Khan
+- Marco Nenciarini
+- Jitendra Wadle
 - Adam Wright
-- Gabriele Bartolini
 

advocacy_docs/kubernetes/cloud_native_operator/e2e.mdx (+1 -1)

@@ -32,7 +32,7 @@ and the following suite of E2E tests are performed on that cluster:
 * Installation of the operator;
 * Creation of a `Cluster`;
 * Usage of a persistent volume for data storage;
-* Connection via services;
+* Connection via services, including read-only;
 * Scale-up of a `Cluster`;
 * Scale-down of a `Cluster`;
 * Failover;

advocacy_docs/kubernetes/cloud_native_operator/expose_pg_services.mdx (+1 -1)

@@ -8,7 +8,7 @@ This section explains how to expose a PostgreSQL service externally, allowing ac
 to your PostgreSQL database **from outside your Kubernetes cluster** using
 NGINX Ingress Controller.
 
-If you followed the [QuickStart](/quickstart), you should have by now
+If you followed the [QuickStart](./quickstart.md), you should have by now
 a database that can be accessed inside the cluster via the
 `cluster-example-rw` (primary) and `cluster-example-r` (read-only)
 services in the `default` namespace. Both services use port `5432`.
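
Editor's note: before setting up an Ingress, a quick way to verify those services from a workstation is a port-forward. A minimal sketch, assuming the QuickStart's `cluster-example-rw` service and an illustrative `app` user/database:

```shell
# Forward the primary (read-write) service to localhost:5432
kubectl port-forward svc/cluster-example-rw 5432:5432 &

# Connect from outside the cluster through the forwarded port
psql -h 127.0.0.1 -p 5432 -U app app
```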

advocacy_docs/kubernetes/cloud_native_operator/failure_modes.mdx (+5 -9)

@@ -131,25 +131,21 @@ Self-healing will happen after `tolerationSeconds`.
 
 ## Self-healing
 
-If the failed pod is a standby, the pod is removed from the `-r` service.
+If the failed pod is a standby, the pod is removed from the `-r` service
+and from the `-ro` service.
 The pod is then restarted using its PVC if available; otherwise, a new
 pod will be created from a backup of the current primary. The pod
-will be added again to the `-r` service when ready.
+will be added again to the `-r` service and to the `-ro` service when ready.
 
 If the failed pod is the primary, the operator will promote the active pod
 with status ready and the lowest replication lag, then point the `-rw`service
-to it. The failed pod will be removed from the `-r` service.
+to it. The failed pod will be removed from the `-r` service and from the
+`-ro` service.
 Other standbys will start replicating from the new primary. The former
 primary will use `pg_rewind` to synchronize itself with the new one if its
 PVC is available; otherwise, a new standby will be created from a backup of the
 current primary.
 
-!!! Important
-    Due to a [bug in PostgreSQL 13 streaming replication](https://www.postgresql.org/message-id/flat/20201209.174314.282492377848029776.horikyota.ntt%40gmail.com)
-    it is not guaranteed that an existing standby is able to follow a promoted
-    primary, even if the new primary contains all the required WALs. Standbys
-    will be able to follow a primary if WAL archiving is configured.
-
 ## Manual intervention
 
 In the case of undocumented failure, it might be necessary to intervene
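
Editor's note: one way to observe the self-healing behaviour described above is to watch the service endpoints while a pod fails and recovers. A small sketch, assuming a cluster named `cluster-example`:

```shell
# Watch which pod currently backs the read-write service during a failover
kubectl get endpoints cluster-example-rw --watch

# Check the standby-facing services once the failed pod is back;
# it should reappear behind both -r and -ro
kubectl get endpoints cluster-example-r cluster-example-ro
```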

advocacy_docs/kubernetes/cloud_native_operator/images/apps-in-k8s.png (file mode changed: 100755 → 100644)

advocacy_docs/kubernetes/cloud_native_operator/images/apps-outside-k8s.png (file mode changed: 100755 → 100644)

advocacy_docs/kubernetes/cloud_native_operator/images/architecture-in-k8s.png (file mode changed: 100755 → 100644)

advocacy_docs/kubernetes/cloud_native_operator/images/architecture-r.png (file mode changed: 100755 → 100644)

advocacy_docs/kubernetes/cloud_native_operator/images/architecture-rw.png (file mode changed: 100755 → 100644)

advocacy_docs/kubernetes/cloud_native_operator/images/network-storage-architecture.png (file mode changed: 100755 → 100644)

advocacy_docs/kubernetes/cloud_native_operator/images/operator-capability-level.png (file mode changed: 100755 → 100644)

advocacy_docs/kubernetes/cloud_native_operator/images/public-cloud-architecture-storage-replication.png (file mode changed: 100755 → 100644)

advocacy_docs/kubernetes/cloud_native_operator/images/public-cloud-architecture.png (file mode changed: 100755 → 100644)

advocacy_docs/kubernetes/cloud_native_operator/images/shared-nothing-architecture.png (file mode changed: 100755 → 100644)

advocacy_docs/kubernetes/cloud_native_operator/index.mdx (+1)

@@ -28,6 +28,7 @@ navigation:
 - ssl_connections
 - kubernetes_upgrade
 - e2e
+- cnp-plugin
 - license_keys
 - container_images
 - operator_capability_levels

advocacy_docs/kubernetes/cloud_native_operator/installation.mdx (+2 -2)

@@ -9,12 +9,12 @@ product: 'Cloud Native Operator'
 The operator can be installed like any other resource in Kubernetes,
 through a YAML manifest applied via `kubectl`.
 
-You can install the [latest operator manifest](../samples/postgresql-operator-1.0.0.yaml)
+You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.1.0.yaml)
 as follows:
 
 ```sh
 kubectl apply -f \
-  https://docs.enterprisedb.io/cloud-native-postgresql/latest/samples/postgresql-operator-1.0.0.yaml
+  https://get.enterprisedb.io/cnp/postgresql-operator-1.1.0.yaml
 ```
 
 Once you have run the `kubectl` command, Cloud Native PostgreSQL will be installed in your Kubernetes cluster.
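
Editor's note: after applying the manifest, the installation can be checked with standard `kubectl` commands. A sketch, assuming the operator deployment lands in a `postgresql-operator-system` namespace (the namespace name is not stated in this diff):

```shell
# Confirm the operator deployment and controller pod are running
kubectl get deployments -n postgresql-operator-system
kubectl get pods -n postgresql-operator-system

# The Cluster custom resource definition should now be registered
kubectl get crd | grep postgresql
```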

advocacy_docs/kubernetes/cloud_native_operator/kubernetes_upgrade.mdx (+1 -1)

@@ -80,7 +80,7 @@ When **disabled**, Kubernetes forces the recreation of the
 Pod on a different node with a new PVC by relying on
 PostgreSQL's physical streaming replication, then destroys
 the old PVC together with the Pod. This scenario is generally
-not recommended unless the database's size is small, and recloning
+not recommended unless the database's size is small, and re-cloning
 the new PostgreSQL instance takes shorter than waiting.
 
 !!! Note
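
Editor's note: the scenario above is typically triggered by draining a node during a Kubernetes upgrade. A minimal sketch of the standard cordon/drain/uncordon cycle (the node name is illustrative):

```shell
# Cordon and drain the worker node that is about to be upgraded
kubectl cordon worker-node-1
kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data

# ... upgrade the node ...

# Put the node back into service
kubectl uncordon worker-node-1
```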

advocacy_docs/kubernetes/cloud_native_operator/operator_capability_levels.mdx (+2 -2)

@@ -70,7 +70,7 @@ PostgreSQL instance and to reconcile the pod status with the instance itself
 based on the PostgreSQL cluster topology. The instance manager also starts a
 web server that is invoked by the `kubelet` for probes. Unix signals invoked
 by the `kubelet` are filtered by the instance manager and, where appropriate,
-forwarded to the `postmaster` process for fast and controlled reactions to
+forwarded to the `postgres` process for fast and controlled reactions to
 external events. The instance manager is written in Go and has no external
 dependencies.
 
@@ -374,7 +374,7 @@ for PostgreSQL have been implemented.
 ### Kubernetes events
 
 Record major events as expected by the Kubernetes API, such as creating resources,
-removing nodes, upgrading, and so on. Events can be displayed throught
+removing nodes, upgrading, and so on. Events can be displayed through
 the `kubectl describe` and `kubectl get events` command.
 
 ## Level 5 - Auto Pilot
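
Editor's note: those events can be inspected with the commands the paragraph mentions. A short sketch, assuming a cluster named `cluster-example`:

```shell
# Events recorded by the operator for a specific pod of the cluster
kubectl describe pod cluster-example-1

# List recent events in the namespace, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp
```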

advocacy_docs/kubernetes/cloud_native_operator/rolling_update.mdx (+1 -1)

@@ -39,7 +39,7 @@ managed by the `primaryUpdateStrategy` option, accepting these two values:
 The default and recommended value is `switchover`.
 
 The upgrade keeps the Cloud Native PostgreSQL identity and does not
-reclone the data. Pods will be deleted and created again with the same PVCs.
+re-clone the data. Pods will be deleted and created again with the same PVCs.
 
 During the rolling update procedure, the services endpoints move to reflect
 the cluster's status, so the applications ignore the node that
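
Editor's note: the configured strategy can be read back from a live cluster. A hedged sketch, assuming a `Cluster` resource named `cluster-example` and that the option is exposed as `.spec.primaryUpdateStrategy`:

```shell
# Show the currently configured strategy for primary updates
kubectl get cluster cluster-example -o jsonpath='{.spec.primaryUpdateStrategy}'

# Follow the rolling update as pods are deleted and recreated with their PVCs
kubectl get pods --watch
```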

advocacy_docs/kubernetes/cloud_native_operator/samples/backup-example.yaml (file mode changed: 100755 → 100644)
