
Commit 452039c

Merge pull request #1210 from EnterpriseDB/release/2021-04-07
Former-commit-id: 5363a19
2 parents: ac67806 + 6058063

4,230 files changed (+88,346 / -33,828 lines)


.husky/_/husky.sh

Lines changed: 30 additions & 0 deletions
@@ -0,0 +1,30 @@
+#!/bin/sh
+if [ -z "$husky_skip_init" ]; then
+  debug () {
+    [ "$HUSKY_DEBUG" = "1" ] && echo "husky (debug) - $1"
+  }
+
+  readonly hook_name="$(basename "$0")"
+  debug "starting $hook_name..."
+
+  if [ "$HUSKY" = "0" ]; then
+    debug "HUSKY env variable is set to 0, skipping hook"
+    exit 0
+  fi
+
+  if [ -f ~/.huskyrc ]; then
+    debug "sourcing ~/.huskyrc"
+    . ~/.huskyrc
+  fi
+
+  export readonly husky_skip_init=1
+  sh -e "$0" "$@"
+  exitCode="$?"
+
+  if [ $exitCode != 0 ]; then
+    echo "husky - $hook_name hook exited with code $exitCode (error)"
+    exit $exitCode
+  fi
+
+  exit 0
+fi
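
For context, this helper is not run on its own: hook files that husky generates under `.husky/` source it and then run the project's checks. A minimal sketch of such a hook, assuming a hypothetical `pre-commit` hook that runs `npm test` (the hook files themselves are not shown in this diff), looks like:

```sh
#!/bin/sh
# Source the shared helper shown above; the path is relative to the hook file.
. "$(dirname "$0")/_/husky.sh"

# Project-specific check; placeholder command, replace with the repository's own.
npm test
```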

advocacy_docs/kubernetes/cloud_native_postgresql/api_reference.mdx

Lines changed: 13 additions & 0 deletions
@@ -36,6 +36,7 @@ Below you will find a description of the defined resources:
 * [ClusterSpec](#clusterspec)
 * [ClusterStatus](#clusterstatus)
 * [DataBackupConfiguration](#databackupconfiguration)
+* [MonitoringConfiguration](#monitoringconfiguration)
 * [NodeMaintenanceWindow](#nodemaintenancewindow)
 * [PostgresConfiguration](#postgresconfiguration)
 * [RecoveryTarget](#recoverytarget)
@@ -212,6 +213,7 @@ ClusterSpec defines the desired state of Cluster
 | backup | The configuration to be used for backups | *[BackupConfiguration](#backupconfiguration) | false |
 | nodeMaintenanceWindow | Define a maintenance window for the Kubernetes nodes | *[NodeMaintenanceWindow](#nodemaintenancewindow) | false |
 | licenseKey | The license key of the cluster. When empty, the cluster operates in trial mode and after the expiry date (default 30 days) the operator will cease any reconciliation attempt. For details, please refer to the license agreement that comes with the operator. | string | false |
+| monitoring | The configuration of the monitoring infrastructure of this cluster | *[MonitoringConfiguration](#monitoringconfiguration) | false |
 
 
 ## ClusterStatus
@@ -229,6 +231,7 @@ ClusterStatus defines the observed state of Cluster
 | pvcCount | How many PVCs have been created by this cluster | int32 | false |
 | jobCount | How many Jobs have been created by this cluster | int32 | false |
 | danglingPVC | List of all the PVCs created by this cluster and still available which are not attached to a Pod | []string | false |
+| initializingPVC | List of all the PVCs that are being initialized by this cluster | []string | false |
 | licenseStatus | Status of the license | licensekey.Status | false |
 | writeService | Current write pod | string | false |
 | readService | Current list of read pods | string | false |
@@ -248,6 +251,16 @@ DataBackupConfiguration is the configuration of the backup of the data directory
 | jobs | The number of parallel jobs to be used to upload the backup, defaults to 2 | *int32 | false |
 
 
+## MonitoringConfiguration
+
+MonitoringConfiguration is the type containing all the monitoring configuration for a certain cluster
+
+| Field | Description | Scheme | Required |
+| -------------------- | ------------------------------ | -------------------- | -------- |
+| customQueriesConfigMap | The list of config maps containing the custom queries | []corev1.ConfigMapKeySelector | false |
+| customQueriesSecret | The list of secrets containing the custom queries | []corev1.SecretKeySelector | false |
+
+
 ## NodeMaintenanceWindow
 
 NodeMaintenanceWindow contains information that the operator will use while upgrading the underlying node.
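
To show how the new `monitoring` field and the `MonitoringConfiguration` type fit together, here is a minimal sketch of a `Cluster` manifest using them; the object names are placeholders and the snippet itself is not part of the commit. Whether a given operator release accepts `customQueriesSecret` depends on the version, so treat that part as an assumption.

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example              # placeholder cluster name
spec:
  instances: 3
  storage:
    size: 1Gi
  monitoring:                        # *MonitoringConfiguration (optional)
    customQueriesConfigMap:          # []corev1.ConfigMapKeySelector
      - name: example-monitoring
        key: custom-queries
    customQueriesSecret:             # []corev1.SecretKeySelector
      - name: example-monitoring-secret
        key: custom-queries
```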

advocacy_docs/kubernetes/cloud_native_postgresql/architecture.mdx

Lines changed: 5 additions & 6 deletions
@@ -39,16 +39,15 @@ purposes.
 Applications must be aware of the limitations that [Hot Standby](https://www.postgresql.org/docs/current/hot-standby.html)
 presents and familiar with the way PostgreSQL operates when dealing with these workloads.
 
-Applications can access any PostgreSQL instance at any time through the `-r`
-service made available by the operator at connection time.
+Applications can access hot standby replicas through the `-ro` service made available
+by the operator. This service enables the application to offload read-only queries from the
+primary node.
 
 The following diagram shows the architecture:
 
-![Applications reading from any instance in round robin](./images/architecture-r.png)
+![Applications reading from hot standby replicas in round robin](./images/architecture-read-only.png)
 
-Applications can also access hot standby replicas through the `-ro` service made available
-by the operator. This service enables the application to offload read-only queries from the
-primary node.
+Applications can also access any PostgreSQL instance at any time through the `-r` service at connection time.
 
 ## Application deployments
 
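As a concrete sketch of the two services described above, assuming a cluster named `cluster-example` in the `default` namespace (cluster, user, and database names are illustrative, not taken from this commit), an application could connect with:

```sh
# Read-only workloads: the -ro service targets the hot standby replicas.
psql "host=cluster-example-ro.default.svc user=app dbname=app"

# Any instance, primary or standby: the -r service.
psql "host=cluster-example-r.default.svc user=app dbname=app"
```
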
advocacy_docs/kubernetes/cloud_native_postgresql/credits.mdx

Lines changed: 2 additions & 0 deletions
@@ -15,7 +15,9 @@ developed, and tested by the EnterpriseDB Cloud Native team:
 - Niccolò Fei
 - Jonathan Gonzalez
 - Danish Khan
+- Anand Nednur
 - Marco Nenciarini
+- Gabriele Quaresima
 - Jitendra Wadle
 - Adam Wright


advocacy_docs/kubernetes/cloud_native_postgresql/index.mdx

Lines changed: 2 additions & 0 deletions
@@ -17,13 +17,15 @@ navigation:
 - quickstart
 - cloud_setup
 - bootstrap
+- resource_management
 - security
 - failure_modes
 - rolling_update
 - backup_recovery
 - postgresql_conf
 - storage
 - samples
+- monitoring
 - expose_pg_services
 - ssl_connections
 - kubernetes_upgrade

advocacy_docs/kubernetes/cloud_native_postgresql/installation.mdx

Lines changed: 14 additions & 2 deletions
@@ -6,15 +6,17 @@ product: 'Cloud Native Operator'
 
 ## Installation on Kubernetes
 
+### Directly using the operator manifest
+
 The operator can be installed like any other resource in Kubernetes,
 through a YAML manifest applied via `kubectl`.
 
-You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.1.0.yaml)
+You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.2.0.yaml)
 as follows:
 
 ```sh
 kubectl apply -f \
-  https://get.enterprisedb.io/cnp/postgresql-operator-1.1.0.yaml
+  https://get.enterprisedb.io/cnp/postgresql-operator-1.2.0.yaml
 ```
 
 Once you have run the `kubectl` command, Cloud Native PostgreSQL will be installed in your Kubernetes cluster.
@@ -25,6 +27,16 @@ You can verify that with:
 kubectl get deploy -n postgresql-operator-system postgresql-operator-controller-manager
 ```
 
+### Using the Operator Lifecycle Manager (OLM)
+
+OperatorHub is a community-sourced index of operators available via the
+[Operator Lifecycle Manager](https://github.com/operator-framework/operator-lifecycle-manager),
+which is a package managing system for operators.
+
+You can install Cloud Native PostgreSQL using the metadata available in the
+[Cloud Native PostgreSQL page](https://operatorhub.io/operator/cloud-native-postgresql)
+from the [OperatorHub.io website](https://operatorhub.io), following the installation steps listed on that page.
+
 ## Installation on Openshift
 
 ### Via the web interface
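
A rough way to confirm an OLM-based installation, assuming OLM itself is already running in the cluster and the `csv` short name for ClusterServiceVersion is available (the target namespace depends on how the operator was subscribed), is to list the installed ClusterServiceVersions:

```sh
# Check that the Cloud Native PostgreSQL ClusterServiceVersion reports "Succeeded".
kubectl get csv -A
```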

advocacy_docs/kubernetes/cloud_native_postgresql/license_keys.mdx

Lines changed: 54 additions & 6 deletions
@@ -4,19 +4,65 @@ originalFilePath: 'src/license_keys.md'
 product: 'Cloud Native Operator'
 ---
 
-Each `Cluster` resource has a `licenseKey` parameter in its definition.
-
-A `licenseKey` is always required for the operator to work.
+A license key is always required for the operator to work.
 
 The only exception is when you run the operator with Community PostgreSQL:
-in this case, if the `licenseKey` parameter is unset, a cluster will be
-started with the default trial license - which automatically expires after 30 days.
+in this case, if the license key is unset, a cluster will be started with the default
+trial license - which automatically expires after 30 days.
 
 !!! Important
     After the license expiration, the operator will cease any reconciliation attempt
     on the cluster, effectively stopping to manage its status.
     The pods and the data will still be available.
 
+## Company level license keys
+
+A license key allows you to create an unlimited number of PostgreSQL
+clusters in your installation.
+
+The license key needs to be available in a `ConfigMap` in the same
+namespace where the operator is deployed.
+
+In Kubernetes the operator is deployed by default in
+the `postgresql-operator-system` namespace.
+When instead OLM is used (i.e. on OpenShift), the operator is installed
+by default in the `openshift-operators` namespace.
+
+Given the namespace name, and the license key, you can create
+the config map with the following command:
+
+```
+kubectl create configmap -n [NAMESPACE_NAME_HERE] \
+  postgresql-operator-controller-manager-config \
+  --from-literal=EDB_LICENSE_KEY=[LICENSE_KEY_HERE]
+```
+
+The following command can be used to reload the config map:
+
+```
+kubectl rollout restart deployment -n [NAMESPACE_NAME_HERE] \
+  postgresql-operator-controller-manager
+```
+
+The validity of the license key can be checked inside the cluster status.
+
+```sh
+kubectl get cluster cluster_example -o yaml
+[...]
+status:
+  [...]
+  licenseStatus:
+    licenseExpiration: "2021-11-06T09:36:02Z"
+    licenseStatus: Trial
+    valid: true
+    isImplicit: false
+    isTrial: true
+[...]
+```
+
+## Cluster level license keys
+
+Each `Cluster` resource has a `licenseKey` parameter in its definition.
 You can find the expiration date, as well as more information about the license,
 in the cluster status:
 
@@ -29,6 +75,8 @@ status:
     licenseExpiration: "2021-11-06T09:36:02Z"
     licenseStatus: Trial
     valid: true
+    isImplicit: false
+    isTrial: true
 [...]
 ```
 
@@ -38,4 +86,4 @@ the expiration date or move the cluster to a production license.
 Cloud Native PostgreSQL is distributed under the EnterpriseDB Limited Usage License
 Agreement, available at [enterprisedb.com/limited-use-license](https://www.enterprisedb.com/limited-use-license).
 
-Cloud Native PostgreSQL: Copyright (C) 2019-2020 EnterpriseDB.
+Cloud Native PostgreSQL: Copyright (C) 2019-2021 EnterpriseDB.
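
For the cluster level case, a minimal sketch of where the `licenseKey` parameter sits in a `Cluster` definition (placeholder values, not part of this commit; the field itself is documented in the API reference above) could be:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example              # placeholder cluster name
spec:
  instances: 3
  licenseKey: "<LICENSE_KEY_HERE>"   # optional string; trial mode when empty
  storage:
    size: 1Gi
```
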
advocacy_docs/kubernetes/cloud_native_postgresql/monitoring.mdx

Lines changed: 95 additions & 0 deletions
@@ -0,0 +1,95 @@
+---
+title: 'Monitoring'
+originalFilePath: 'src/monitoring.md'
+product: 'Cloud Native Operator'
+---
+
+For each PostgreSQL instance, the operator provides an exporter of metrics for
+[Prometheus](https://prometheus.io/) via HTTP, on port 8000.
+The operator comes with a predefined set of metrics, as well as a highly
+configurable and customizable system to define additional queries via one or
+more `ConfigMap` objects - and, in future versions, `Secret` too.
+
+The exporter can be accessed as follows:
+
+```shell
+curl http://<pod ip>:8000/metrics
+```
+
+All monitoring queries are:
+
+- transactionally atomic (one transaction per query)
+- executed with the `pg_monitor` role
+
+Please refer to the
+["Default roles" section in PostgreSQL documentation](https://www.postgresql.org/docs/current/default-roles.html)
+for details on the `pg_monitor` role.
+
+## User defined metrics
+
+Users will be able to define metrics through the available interface
+that the operator provides. This interface is currently in *beta* state and
+only supports definition of custom queries as `ConfigMap` and `Secret` objects
+using a YAML file that is inspired by the [queries.yaml file](https://github.com/prometheus-community/postgres_exporter/blob/main/queries.yaml)
+of the PostgreSQL Prometheus Exporter.
+
+Queries must be defined in a `ConfigMap` to be referenced in the `monitoring`
+section of the `Cluster` definition, as in the following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+
+  storage:
+    size: 1Gi
+
+  monitoring:
+    customQueriesConfigMap:
+      - name: example-monitoring
+        key: custom-queries
+```
+
+Specifically, the `monitoring` section looks for an array with the name
+`customQueriesConfigMap`, which, as the name suggests, needs a list of
+`ConfigMap` key references to be used as the source of custom queries.
+
+For example:
+
+```yaml
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  namespace: default
+  name: example-monitoring
+data:
+  custom-queries: |
+    pg_replication:
+      query: "SELECT CASE WHEN NOT pg_is_in_recovery()
+              THEN 0
+              ELSE GREATEST (0,
+                EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())))
+              END AS lag"
+      primary: true
+      metrics:
+        - lag:
+            usage: "GAUGE"
+            description: "Replication lag behind primary in seconds"
+```
+
+The object must have a name and be in the same namespace as the `Cluster`.
+Note that the above query will be executed on the `primary` node, with the
+following output.
+
+```text
+# HELP custom_pg_replication_lag Replication lag behind primary in seconds
+# TYPE custom_pg_replication_lag gauge
+custom_pg_replication_lag 0
+```
+
+This framework enables the definition of custom metrics to monitor the database
+or the application inside the PostgreSQL cluster.
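
Since the `MonitoringConfiguration` type also exposes a `customQueriesSecret` list, an equivalent Secret-based sketch, assuming your operator version already accepts Secrets and using placeholder names and a placeholder query, could look like this:

```yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: default
  name: example-monitoring-secret   # placeholder name
stringData:
  custom-queries: |
    pg_database_size:
      query: "SELECT pg_database_size(current_database()) AS size"
      metrics:
        - size:
            usage: "GAUGE"
            description: "Size of the current database in bytes"
```

It would then be referenced from `spec.monitoring.customQueriesSecret` in the same way the `ConfigMap` is referenced above.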
