
Commit a3e6762

Author: George Song
Merge develop into main for release.
Former-commit-id: 1f310f4
2 parents: 3f8b0b0 + c6e4fdb

943 files changed: +841, -63856 lines


advocacy_docs/kubernetes/cloud_native_postgresql/index.mdx

Lines changed: 1 addition & 0 deletions
```diff
@@ -15,6 +15,7 @@ navigation:
   - architecture
   - installation
   - quickstart
+  - interactive
   - cloud_setup
   - bootstrap
   - resource_management
```
Lines changed: 8 additions & 0 deletions
```diff
@@ -0,0 +1,8 @@
+---
+title: 'Cloud Native PostgreSQL Interactive Demonstrations'
+navTitle: 'Interactive Demos'
+product: 'Cloud Native Operator'
+indexCards: full
+showInteractiveBadge: false
+---
+
```
Lines changed: 278 additions & 0 deletions
@@ -0,0 +1,278 @@ (new file)

---
title: "Installation, Configuration and Deployment Demo"
description: "Walk through the process of installing, configuring and deploying the Cloud Native PostgreSQL Operator via a browser-hosted Minikube console"
navTitle: Install, Configure, Deploy
product: 'Cloud Native PostgreSQL Operator'
platform: ubuntu
tags:
  - postgresql
  - cloud-native-postgresql-operator
  - kubernetes
  - minikube
  - live-demo
katacodaPanel:
  scenario: minikube
  codelanguages: shell, yaml
showInteractiveBadge: true
---

Want to see what it takes to get the Cloud Native PostgreSQL Operator up and running? This section will demonstrate the following:

1. Installing the Cloud Native PostgreSQL Operator
2. Deploying a three-node PostgreSQL cluster
3. Installing and using the kubectl-cnp plugin

It will take roughly 5-10 minutes to work through.

!!!interactive This demo is interactive
    You can follow along right in your browser by clicking the button below. Once the environment initializes, you'll see a terminal open at the bottom of the screen.

<KatacodaPanel />

[Minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/) is already installed in this environment; we just need to start the cluster:

```shell
minikube start
__OUTPUT__
* minikube v1.8.1 on Ubuntu 18.04
* Using the none driver based on user configuration
* Running on localhost (CPUs=2, Memory=2460MB, Disk=145651MB) ...
* OS release is Ubuntu 18.04.4 LTS
* Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...
  - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
* Launching Kubernetes ...
* Enabling addons: default-storageclass, storage-provisioner
* Configuring local host environment ...
* Waiting for cluster to come online ...
* Done! kubectl is now configured to use "minikube"
```

This creates the Kubernetes cluster and leaves it ready to use.
Verify that it works with the following command:

```shell
kubectl get nodes
__OUTPUT__
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   66s   v1.17.3
```

You will see one node called `minikube`. If the status isn't yet "Ready", wait a few seconds and run the command above again.

## Install Cloud Native PostgreSQL

Now that the Minikube cluster is running, you can proceed with Cloud Native PostgreSQL installation as described in the ["Installation"](installation.md) section:

```shell
kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.2.0.yaml
__OUTPUT__
namespace/postgresql-operator-system created
customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/clusters.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/scheduledbackups.postgresql.k8s.enterprisedb.io created
mutatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-mutating-webhook-configuration created
serviceaccount/postgresql-operator-manager created
clusterrole.rbac.authorization.k8s.io/postgresql-operator-manager created
clusterrolebinding.rbac.authorization.k8s.io/postgresql-operator-manager-rolebinding created
service/postgresql-operator-webhook-service created
deployment.apps/postgresql-operator-controller-manager created
validatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-validating-webhook-configuration created
```

And then verify that it was successfully installed:

```shell
kubectl get deploy -n postgresql-operator-system postgresql-operator-controller-manager
__OUTPUT__
NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
postgresql-operator-controller-manager   1/1     1            1           52s
```
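As an optional aside (not part of the original demo), if the deployment isn't `1/1` yet you can block until it finishes rolling out instead of re-running `get deploy`:

```shell
# Wait (up to 2 minutes) for the operator deployment to become available;
# exits non-zero on timeout so it can be used in scripts.
kubectl rollout status deployment \
  -n postgresql-operator-system \
  postgresql-operator-controller-manager \
  --timeout=120s
```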

## Deploy a PostgreSQL cluster

As with any other deployment in Kubernetes, to deploy a PostgreSQL cluster
you need to apply a configuration file that defines your desired `Cluster`.

The [`cluster-example.yaml`](../samples/cluster-example.yaml) sample file
defines a simple `Cluster` using the default storage class to allocate
disk space:

```yaml
cat <<EOF > cluster-example.yaml
# Example of PostgreSQL cluster
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3

  # Example of rolling update strategy:
  # - unsupervised: automated update of the primary once all
  #                 replicas have been upgraded (default)
  # - supervised: requires manual supervision to perform
  #               the switchover of the primary
  primaryUpdateStrategy: unsupervised

  # Require 1Gi of space
  storage:
    size: 1Gi
EOF
```

!!! Note "There's more"
    For more detailed information about the available options, please refer
    to the ["API Reference" section](api_reference.md).

To create the 3-node PostgreSQL cluster, run the following command:

```shell
kubectl apply -f cluster-example.yaml
__OUTPUT__
cluster.postgresql.k8s.enterprisedb.io/cluster-example created
```

You can check that the pods are being created with the `get pods` command. It'll take a bit to initialize, so if you run it
immediately after applying the cluster configuration you'll see the status as `Init:` or `PodInitializing`:

```shell
kubectl get pods
__OUTPUT__
NAME                             READY   STATUS            RESTARTS   AGE
cluster-example-1-initdb-kq2vw   0/1     PodInitializing   0          18s
```

...give it a minute, and then check on it again:

```shell
kubectl get pods
__OUTPUT__
NAME                READY   STATUS    RESTARTS   AGE
cluster-example-1   1/1     Running   0          56s
cluster-example-2   1/1     Running   0          35s
cluster-example-3   1/1     Running   0          19s
```

Now we can check the status of the cluster:

```shell
kubectl get cluster cluster-example -o yaml
__OUTPUT__
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"postgresql.k8s.enterprisedb.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"cluster-example","namespace":"default"},"spec":{"instances":3,"primaryUpdateStrategy":"unsupervised","storage":{"size":"1Gi"}}}
  creationTimestamp: "2021-04-07T00:33:43Z"
  generation: 1
  name: cluster-example
  namespace: default
  resourceVersion: "1806"
  selfLink: /apis/postgresql.k8s.enterprisedb.io/v1/namespaces/default/clusters/cluster-example
  uid: 38ddc347-3f2e-412a-aa14-a26904e1a49e
spec:
  affinity:
    topologyKey: ""
  bootstrap:
    initdb:
      database: app
      owner: app
  imageName: quay.io/enterprisedb/postgresql:13.2
  instances: 3
  postgresql:
    parameters:
      logging_collector: "off"
      max_parallel_workers: "32"
      max_replication_slots: "32"
      max_worker_processes: "32"
      wal_keep_size: 512MB
  primaryUpdateStrategy: unsupervised
  resources: {}
  storage:
    size: 1Gi
status:
  currentPrimary: cluster-example-1
  instances: 3
  instancesStatus:
    healthy:
    - cluster-example-3
    - cluster-example-1
    - cluster-example-2
  latestGeneratedNode: 3
  licenseStatus:
    isImplicit: true
    isTrial: true
    licenseExpiration: "2021-05-07T00:33:43Z"
    licenseStatus: Implicit trial license
    repositoryAccess: false
    valid: true
  phase: Cluster in healthy state
  pvcCount: 3
  readService: cluster-example-r
  readyInstances: 3
  targetPrimary: cluster-example-1
  writeService: cluster-example-rw
```
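The `status` section lists two generated services: `cluster-example-rw` (routes to the primary) and `cluster-example-r` (for reads). As an optional sketch, not part of the original walkthrough, you could test connectivity from a throwaway client pod. The secret name `cluster-example-app` is an assumption based on the operator's usual `<cluster>-app` naming for the generated `app` user credentials; verify it with `kubectl get secrets` first.

```shell
# Hypothetical connectivity check: run psql in a temporary pod,
# reading the generated password for the "app" user from its secret.
kubectl run pgclient --rm -it --restart=Never \
  --image=quay.io/enterprisedb/postgresql:13.2 \
  --env=PGPASSWORD="$(kubectl get secret cluster-example-app \
    -o jsonpath='{.data.password}' | base64 -d)" \
  -- psql -h cluster-example-rw -U app app
```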

!!! Note
    By default, the operator will install the latest available minor version
    of the latest major version of PostgreSQL at the time the operator was released.
    You can override this by setting [the `imageName` key in the `spec` section of
    the `Cluster` definition](../api_reference/#clusterspec).

!!! Important
    The immutable infrastructure paradigm requires that you always
    point to a specific version of the container image.
    Never use tags like `latest` or `13` in a production environment,
    as it might lead to unpredictable scenarios in terms of update
    policies and version consistency in the cluster.
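For example, pinning the image in the `Cluster` spec might look like this sketch (the `13.2` tag matches the image used elsewhere in this demo; check the registry for the tags that actually exist):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  # Pin an exact version rather than a moving tag like "latest" or "13"
  imageName: quay.io/enterprisedb/postgresql:13.2
  storage:
    size: 1Gi
```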

## Install the kubectl-cnp plugin

Cloud Native PostgreSQL provides a plugin for kubectl to manage a cluster in Kubernetes, along with a script to install it:

```shell
curl -sSfL \
  https://github.com/EnterpriseDB/kubectl-cnp/raw/main/install.sh | \
  sudo sh -s -- -b /usr/local/bin
__OUTPUT__
EnterpriseDB/kubectl-cnp info checking GitHub for latest tag
EnterpriseDB/kubectl-cnp info found version: 1.2.1 for v1.2.1/linux/x86_64
EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp
```

The `cnp` command is now available in kubectl:

```shell
kubectl cnp status cluster-example
__OUTPUT__
Cluster in healthy state
Name:               cluster-example
Namespace:          default
PostgreSQL Image:   quay.io/enterprisedb/postgresql:13.2
Primary instance:   cluster-example-1
Instances:          3
Ready instances:    3

Instances status
Pod name           Current LSN  Received LSN  Replay LSN  System ID            Primary  Replicating  Replay paused  Pending restart
--------           -----------  ------------  ----------  ---------            -------  -----------  -------------  ---------------
cluster-example-1  0/6000060                              6941211174657425425  ✓        ✗            ✗              ✗
cluster-example-2               0/6000060     0/6000060   6941211174657425425  ✗        ✓            ✗              ✗
cluster-example-3               0/6000060     0/6000060   6941211174657425425  ✗        ✓            ✗              ✗
```

!!! Note "There's more"
    See [the Cloud Native PostgreSQL Plugin page](../cnp-plugin/) for more commands and options.
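As one more taste of what the plugin can do (a hedged sketch, not part of the original demo; check `kubectl cnp --help` for the subcommands your plugin version actually supports), a manual switchover to a chosen replica could look like:

```shell
# Hypothetical: promote cluster-example-2 to be the new primary
kubectl cnp promote cluster-example cluster-example-2
```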

### Further reading

This is all it takes to get a PostgreSQL cluster up and running - but of course, there's a lot more you can do, and much more that's prudent to understand before you ever deploy in a production environment!

- For information on using the Cloud Native PostgreSQL Operator to deploy on public cloud platforms, see the [Cloud Setup](../cloud_setup/) section.

- For the design goals and possibilities offered by the Cloud Native PostgreSQL Operator, check out the [Architecture](../architecture/) and [Use cases](../use_cases/) sections.

- And for details on what it takes to configure a secure and reliable system, read through the [Security](../security/), [Failure Modes](../failure_modes/) and [Backup and Recovery](../backup_recovery/) sections.

advocacy_docs/kubernetes/cloud_native_postgresql/quickstart.mdx

Lines changed: 5 additions & 0 deletions
```diff
@@ -9,6 +9,11 @@ using Cloud Native PostgreSQL on a local Kubernetes cluster in
 [Minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/) or
 [Kind](https://kind.sigs.k8s.io/).
 
+!!! Tip "Live demonstration"
+    Don't want to install anything locally just yet? Try a demonstration directly in your browser:
+
+    [Cloud Native PostgreSQL Operator Interactive Quickstart](interactive/installation_and_deployment/)
+
 RedHat OpenShift Container Platform users can test the certified operator for
 Cloud Native PostgreSQL on the [Red Hat CodeReady Containers (CRC)](https://developers.redhat.com/products/codeready-containers/overview)
 for OpenShift.
```

advocacy_docs/supported-open-source/barman/single-server-streaming/index.mdx

Lines changed: 1 addition & 0 deletions
```diff
@@ -12,6 +12,7 @@ iconName: coffee
 directoryDefaults:
   platform: ubuntu
   prevNext: true
+  showInteractiveBadge: true
---
 
 This section demonstrates setting up a backup and recovery scenario using a Barman server and a PostgreSQL server. It covers:
```

advocacy_docs/supported-open-source/barman/single-server-streaming/step01-db-setup.mdx

Lines changed: 10 additions & 10 deletions
````diff
@@ -22,8 +22,8 @@ This demo is interactive. You can follow along right in your browser, using Kata
 !!! Note
     When you click "Start Now," Katacoda will load a Docker Compose environment with two container images representing a
     PostgreSQL 13 server with [the Pagila database](https://github.com/devrimgunduz/pagila) loaded (named `pg`)
-    and a backup server for Barman.
-
+    and a backup server for Barman.
+
     Once you see a `postgres@pg` prompt, you can follow the steps below.
 
 <KatacodaPanel />
@@ -64,8 +64,8 @@ Let's call our dedicated backup user, "barman". We'll create it interactively vi
 ```shell
 createuser --superuser --replication -P barman
 __OUTPUT__
-Enter password for new role:
-Enter it again:
+Enter password for new role:
+Enter it again:
 ```
 
 Enter `example-password` when prompted (twice).
@@ -80,8 +80,8 @@ Now we will create the streaming replication user, "streaming_barman". This does
 ```shell
 createuser --replication -P streaming_barman
 __OUTPUT__
-Enter password for new role:
-Enter it again:
+Enter password for new role:
+Enter it again:
 ```
 
 Enter `example-password` when prompted (twice).
@@ -113,12 +113,12 @@ psql -d pagila
 Show max_wal_senders;
 Show max_replication_slots;
 __OUTPUT__
- max_wal_senders
+ max_wal_senders
 -----------------
               10
 (1 row)
 
- max_replication_slots
+ max_replication_slots
 -----------------------
                     10
 (1 row)
@@ -129,13 +129,13 @@ The default for both of these (for PostgreSQL 10 and above) is 10, so we're fine
 
 ### Gazing fondly at data
 
-Before we end, let's query some data - this is what we're going to back up!
+Before we end, let's query some data - this is what we're going to back up!
 
 ```sql
 select * from actor where last_name='KILMER';
 \q
 __OUTPUT__
- actor_id | first_name | last_name |      last_update
+ actor_id | first_name | last_name |      last_update
 ----------+------------+-----------+------------------------
        23 | SANDRA     | KILMER    | 2020-02-15 09:34:33+00
        45 | REESE      | KILMER    | 2020-02-15 09:34:33+00
````

gatsby-config.js

Lines changed: 1 addition & 0 deletions
```diff
@@ -231,6 +231,7 @@ module.exports = {
       customTypes: {
         seealso: 'note',
         hint: 'tip',
+        interactive: 'interactive',
       },
     },
   ],
```
