diff --git a/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx b/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx index bceff3185dd..cd93f21eb6c 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx @@ -267,6 +267,36 @@ the following parameters: instances, you can add a label with the key `k8s.enterprisedb.io/reload` to it. Otherwise, you must reload the instances using the `kubectl cnp reload` subcommand. +#### Customizing the `streaming_replica` client certificate + +In some environments, it may not be possible to generate a certificate with the +common name `streaming_replica` due to company policies or other security +concerns, such as a CA shared across multiple clusters. In such cases, the user +mapping feature can be used to allow authentication as the `streaming_replica` +user with certificates containing different common names. + +To configure this setup, add a `pg_ident.conf` entry for the predefined map +named `cnp_streaming_replica`. + +For example, to enable `streaming_replica` authentication using a certificate +with the common name `streaming-replica.cnp.svc.cluster.local`, add the +following to your cluster definition: + +```yaml +apiVersion: postgresql.k8s.enterprisedb.io/v1 +kind: Cluster +metadata: + name: cluster-example +spec: + postgresql: + pg_ident: + - cnp_streaming_replica streaming-replica.cnp.svc.cluster.local streaming_replica +``` + +For further details on how `pg_ident.conf` is managed by the operator, see the +["PostgreSQL Configuration" page](postgresql_conf.md#the-pg_ident-section) in +the documentation. + #### Cert-manager example This simple example shows how to use [cert-manager](https://cert-manager.io/) diff --git a/product_docs/docs/postgres_for_kubernetes/1/failover.mdx b/product_docs/docs/postgres_for_kubernetes/1/failover.mdx index bb5fa67f5d6..e42809098ea 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/failover.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/failover.mdx @@ -96,3 +96,268 @@ expected outage. Enabling a new configuration option to delay failover provides a mechanism to prevent premature failover for short-lived network or node instability. + +## Failover Quorum (Quorum-based Failover) + +!!! Warning + *Failover quorum* is an experimental feature introduced in version 1.27.0. + Use with caution in production environments. + +Failover quorum is a mechanism that enhances data durability and safety during +failover events in EDB Postgres for Kubernetes-managed PostgreSQL clusters. + +Quorum-based failover allows the controller to determine whether to promote a replica +to primary based on the state of a quorum of replicas. +This is useful when stronger data durability is required than the one offered +by [synchronous replication](replication.md#synchronous-replication) and +default automated failover procedures. + +When synchronous replication is not enabled, some data loss is expected and +accepted during failover, as a replica may lag behind the primary when +promoted. + +With synchronous replication enabled, the guarantee is that the application +will not receive explicit acknowledgment of the successful commit of a +transaction until the WAL data is known to be safely received by all required +synchronous standbys. +This is not enough to guarantee that the operator is able to promote the most +advanced replica. 
+ +For example, in a three-node cluster with synchronous replication set to `ANY 1 +(...)`, data is written to the primary and one standby before a commit is +acknowledged. If both the primary and the aligned standby become unavailable +(such as during a network partition), the remaining replica may not have the +latest data. Promoting it could lose some data that the application considered +committed. + +Quorum-based failover addresses this risk by ensuring that failover only occurs +if the operator can confirm the presence of all synchronously committed data in +the instance to promote, and it does not occur otherwise. + +This feature allows users to choose their preferred trade-off between data +durability and data availability. + +Failover quorum can be enabled by setting the annotation +`alpha.k8s.enterprisedb.io/failoverQuorum="true"` in the `Cluster` resource. + +!!! info + When this feature is out of the experimental phase, the annotation + `alpha.k8s.enterprisedb.io/failoverQuorum` will be replaced by a configuration option in + the `Cluster` resource. + +### How it works + +Before promoting a replica to primary, the operator performs a quorum check, +following the principles of the Dynamo `R + W > N` consistency model[^1]. + +In the quorum failover, these values assume the following meaning: + +- `R` is the number of *promotable replicas* (read quorum); +- `W` is the number of replicas that must acknowledge the write before the + `COMMIT` is returned to the client (write quorum); +- `N` is the total number of potentially synchronous replicas; + +*Promotable replicas* are replicas that have these properties: + +- are part of the cluster; +- are able to report their state to the operator; +- are potentially synchronous; + +If `R + W > N`, then we can be sure that among the promotable replicas there is +at least one that has confirmed all the synchronous commits, and we can safely +promote it to primary. If this is not the case, the controller will not promote +any replica to primary, and will wait for the situation to change. + +Users can force a promotion of a replica to primary through the +`kubectl cnp promote` command even if the quorum check is failing. + +!!! Warning + Manual promotion should only be used as a last resort. Before proceeding, + make sure you fully understand the risk of data loss and carefully consider the + consequences of prioritizing the resumption of write workloads for your + applications. + +An additional CRD is used to track the quorum state of the cluster. A `Cluster` +with the quorum failover enabled will have a `FailoverQuorum` resource with the same +name as the `Cluster` resource. The `FailoverQuorum` CR is created by the +controller when the quorum failover is enabled, and it is updated by the primary +instance during its reconciliation loop, and read by the operator during quorum +checks. It is used to track the latest known configuration of the synchronous +replication. + +!!! Important + Users should not modify the `FailoverQuorum` resource directly. During + PostgreSQL configuration changes, when it is not possible to determine the + configuration, the `FailoverQuorum` resource will be reset, preventing any + failover until the new configuration is applied. + +The `FailoverQuorum` resource works in conjunction with PostgreSQL synchronous +replication. + +!!! 
Warning
+    There is no guarantee that `COMMIT` operations acknowledged to the
+    client but not performed synchronously, such as those issued after
+    explicitly disabling synchronous replication with
+    `SET synchronous_commit TO local`, will be present on a promoted replica.
+
+### Quorum Failover Example Scenarios
+
+In the following scenarios, `R` is the number of promotable replicas, `W` is
+the number of replicas that must acknowledge a write before commit, and `N` is
+the total number of potentially synchronous replicas. The "Failover" column
+indicates whether failover is allowed under quorum failover rules.
+
+#### Scenario 1: Three-node cluster, failing pod(s)
+
+Consider a cluster with `instances: 3`, `synchronous.number=1`, and
+`dataDurability=required`.
+
+- If only the primary fails, two promotable replicas remain (R=2).
+  Since `R + W > N` (2 + 1 > 2), failover is allowed and safe.
+- If both the primary and one replica fail, only one promotable replica
+  remains (R=1). Since `R + W = N` (1 + 1 = 2), failover is not allowed, to
+  prevent possible data loss.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 2 | 1 | 2 | ✅ |
+| 1 | 1 | 2 | ❌ |
+
+#### Scenario 2: Three-node cluster, network partition
+
+A cluster with `instances: 3`, `synchronous.number: 1`, and
+`dataDurability: required` experiences a network partition.
+
+- If the operator can communicate with the primary, no failover occurs. The
+  cluster can be impacted if the primary cannot reach any standby, since it
+  won't commit transactions due to synchronous replication requirements.
+- If the operator cannot reach the primary but can reach both replicas (R=2),
+  failover is allowed. If the operator can reach only one replica (R=1),
+  failover is not allowed, as the synchronous standby may be the one that is
+  unreachable.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 2 | 1 | 2 | ✅ |
+| 1 | 1 | 2 | ❌ |
+
+#### Scenario 3: Five-node cluster, network partition
+
+A cluster with `instances: 5`, `synchronous.number=2`, and
+`dataDurability=required` experiences a network partition.
+
+- If the operator can communicate with the primary, no failover occurs. The
+  cluster can be impacted if the primary cannot reach at least two standbys,
+  since it won't commit transactions due to synchronous replication
+  requirements.
+- If the operator cannot reach the primary but can reach at least three
+  replicas (R=3), failover is allowed. If the operator can reach only two
+  replicas (R=2), failover is not allowed, as a synchronous standby may be
+  among the unreachable replicas.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 3 | 2 | 4 | ✅ |
+| 2 | 2 | 4 | ❌ |
+
+#### Scenario 4: Three-node cluster with remote synchronous replicas
+
+Consider a cluster with `instances: 3` and remote synchronous replicas defined
+in `standbyNamesPre` or `standbyNamesPost`. We assume that the primary is
+failing.
+
+This scenario requires an important consideration. Replicas listed in
+`standbyNamesPre` or `standbyNamesPost` are not counted in
+`R` (they cannot be promoted), but are included in `N` (they may have received
+synchronous writes). So, if
+`synchronous.number <= len(standbyNamesPre) + len(standbyNamesPost)`, failover
+is not possible, as no local replica can be guaranteed to have the required
+data. The operator prevents such configurations during validation, but some
+invalid configurations are shown below for clarity.
+
+**Example configurations:**
+
+Configuration #1 (valid):
+
+```yaml
+instances: 3
+postgresql:
+  synchronous:
+    method: any
+    number: 2
+    standbyNamesPre:
+      - angus
+```
+
+In this configuration, when the primary fails, `R = 2` (the local replicas),
+`W = 2`, and `N = 3` (2 local replicas + 1 remote), allowing failover.
+If an additional replica fails (`R = 1`), failover is not allowed.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 2 | 2 | 3 | ✅ |
+| 1 | 2 | 3 | ❌ |
+
+Configuration #2 (invalid):
+
+```yaml
+instances: 3
+postgresql:
+  synchronous:
+    method: any
+    number: 1
+    maxStandbyNamesFromCluster: 1
+    standbyNamesPre:
+      - angus
+```
+
+In this configuration, `R = 2` (the local replicas), `W = 1`, and `N = 3`
+(2 local replicas + 1 remote).
+Failover is not possible in this setup, so quorum failover cannot be
+enabled with this configuration.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 1 | 1 | 2 | ❌ |
+
+Configuration #3 (invalid):
+
+```yaml
+instances: 3
+postgresql:
+  synchronous:
+    method: any
+    number: 1
+    maxStandbyNamesFromCluster: 0
+    standbyNamesPre:
+      - angus
+      - malcolm
+```
+
+In this configuration, `R = 0` (no local replica is counted), `W = 1`, and
+`N = 2` (0 local replicas + 2 remote).
+Failover is not possible in this setup, so quorum failover cannot be
+enabled with this configuration.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 0 | 1 | 2 | ❌ |
+
+#### Scenario 5: Three-node cluster, preferred data durability, network partition
+
+Consider a cluster with `instances: 3`, `synchronous.number=1`, and
+`dataDurability=preferred` that experiences a network partition.
+
+- If the operator can communicate with both the primary and the API server,
+  the primary continues to operate, removing unreachable standbys from the
+  `synchronous_standby_names` set.
+- If the primary cannot reach the operator or API server, a quorum check is
+  performed. The `FailoverQuorum` status cannot have changed, as the primary
+  cannot have received a new configuration. If the operator can reach both
+  replicas, failover is allowed (`R=2`). If only one replica is reachable
+  (`R=1`), failover is not allowed.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 2 | 1 | 2 | ✅ |
+| 1 | 1 | 2 | ❌ |
+
+[^1]: [Dynamo: Amazon’s highly available key-value store](https://www.amazon.science/publications/dynamo-amazons-highly-available-key-value-store)
diff --git a/product_docs/docs/postgres_for_kubernetes/1/imagevolume_extensions.mdx b/product_docs/docs/postgres_for_kubernetes/1/imagevolume_extensions.mdx
new file mode 100644
index 00000000000..84c8f122dc4
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/imagevolume_extensions.mdx
@@ -0,0 +1,354 @@
+---
+title: 'Image Volume Extensions'
+originalFilePath: 'src/imagevolume_extensions.md'
+---
+
+
+EDB Postgres for Kubernetes supports the **dynamic loading of PostgreSQL extensions** into a
+`Cluster` at Pod startup using the [Kubernetes `ImageVolume` feature](https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/)
+and the `extension_control_path` GUC introduced in PostgreSQL 18, to which this
+project contributed.
+
+This feature allows you to mount a [PostgreSQL extension](https://www.postgresql.org/docs/current/extend-extensions.html),
+packaged as an OCI-compliant container image, as a read-only and immutable
+volume inside a running pod at a known filesystem path.
+
+You can make the extension available either globally, using the
+[`shared_preload_libraries` option](postgresql_conf.md#shared-preload-libraries),
+or at the database level through the `CREATE EXTENSION` command. For the
+latter, you can use the [`Database` resource’s declarative extension management](declarative_database_management.md#managing-extensions-in-a-database)
+to ensure consistent, automated extension setup within your PostgreSQL
+databases.
+
+## Benefits
+
+Image volume extensions decouple the distribution of PostgreSQL operand
+container images from the distribution of extensions. This eliminates the
+need to define and embed extensions at build time within your PostgreSQL
+images—a major adoption blocker for PostgreSQL as a containerized workload,
+including from a security and supply chain perspective.
+
+As a result, you can:
+
+- Use the [official PostgreSQL `minimal` operand images](https://github.com/enterprisedb/docker-postgres?tab=readme-ov-file#minimal-images)
+  provided by EDB Postgres for Kubernetes.
+- Dynamically add the extensions you need to your `Cluster` definitions,
+  without rebuilding or maintaining custom PostgreSQL images.
+- Reduce your operational surface by using immutable, minimal, and secure base
+  images while adding only the extensions required for each workload.
+
+Extension images must be built according to the
+[documented specifications](#image-specifications).
+
+## Requirements
+
+To use image volume extensions with EDB Postgres for Kubernetes, you need:
+
+- **PostgreSQL 18 or later**, with support for `extension_control_path`.
+- **Kubernetes 1.33**, with the `ImageVolume` feature gate enabled.
+- **EDB Postgres for Kubernetes-compatible extension container images**, ensuring:
+    - A PostgreSQL major version matching that of the `Cluster` resource.
+    - An operating system distribution compatible with that of the `Cluster` resource.
+    - A CPU architecture matching that of the `Cluster` resource.
+
+## How it works
+
+Extension images are defined in the `.spec.postgresql.extensions` stanza of a
+`Cluster` resource, which accepts an ordered list of extensions to be added to
+the PostgreSQL cluster.
+
+!!! Info
+    For field-level details, see the
+    [API reference for `ExtensionConfiguration`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-ExtensionConfiguration).
+
+Each image volume is mounted at `/extensions/<name>`, where `<name>` is the
+name given to the extension in the list.
+
+By default, EDB Postgres for Kubernetes automatically manages the relevant GUCs, setting:
+
+- `extension_control_path` to `/extensions/<name>/share`, allowing
+  PostgreSQL to locate any extension control file within `/extensions/<name>/share/extension`
+- `dynamic_library_path` to `/extensions/<name>/lib`
+
+These values are appended in the order in which the extensions are defined in
+the `extensions` list, ensuring deterministic path resolution within
+PostgreSQL. This allows PostgreSQL to discover and load the extension without
+requiring manual configuration inside the pod.
+
+!!! Info
+    Depending on how your extension container images are built and their layout,
+    you may need to adjust the default `extension_control_path` and
+    `dynamic_library_path` values to match the image structure.
+
+!!! Important
+    If the extension image includes shared libraries, they must be compiled
+    with the same PostgreSQL major version, operating system distribution, and CPU
+    architecture as the PostgreSQL container image used by your cluster, to ensure
+    compatibility and prevent runtime issues.
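+
+Once the cluster is running, you can verify that the expected paths have been
+appended by querying the corresponding GUCs from any `psql` session connected
+to the cluster. This is an illustrative check only: the exact values depend on
+the extensions defined in your `Cluster`, and how you open the session (for
+example, via the `cnp` plugin's `psql` command) is up to you.
+
+```sql
+-- Each extension defined in the Cluster adds one entry to these settings
+SHOW extension_control_path;
+SHOW dynamic_library_path;
+```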
+ +## How to add a new extension + +Adding an extension to a database in EDB Postgres for Kubernetes involves a few steps: + +1. Define the extension image in the `Cluster` resource so that PostgreSQL can + discover and load it. +2. Add the library to [`shared_preload_libraries`](postgresql_conf.md#shared-preload-libraries) + if the extension requires it. +3. Declare the extension in the `Database` resource where you want it + installed, if the extension supports `CREATE EXTENSION`. + +!!! Warning + Avoid making changes to extension images and PostgreSQL configuration + settings (such as `shared_preload_libraries`) simultaneously. + First, allow the pod to roll out with the new extension image, then update + the PostgreSQL configuration. + This limitation will be addressed in a future release of EDB Postgres for Kubernetes. + +For illustration purposes, this guide uses a simple, fictitious extension named +`foo` that supports `CREATE EXTENSION`. + +### Adding a new extension to a `Cluster` resource + +You can add an `ImageVolume`-based extension to a `Cluster` using the +`.spec.postgresql.extensions` stanza. For example: + +```yaml +apiVersion: postgresql.k8s.enterprisedb.io/v1 +kind: Cluster +metadata: + name: foo-18 +spec: + # ... + postgresql: + extensions: + - name: foo + image: + reference: # registry path for your extension image + # ... +``` + +The `name` field is **mandatory** and **must be unique within the cluster**, as +it determines the mount path (`/extensions/foo` in this example). It must +consist of *lowercase alphanumeric characters or hyphens (`-`)* and must start +and end with an alphanumeric character. + +The `image` stanza follows the [Kubernetes `ImageVolume` API](https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/). +The `reference` must point to a valid container registry path for the extension +image. + +!!! Important + When a new extension is added to a running `Cluster`, EDB Postgres for Kubernetes will + automatically trigger a [rolling update](rolling_update.md) to attach the new + image volume to each pod. Before adding a new extension in production, + ensure you have thoroughly tested it in a staging environment to prevent + configuration issues that could leave your PostgreSQL cluster in an unhealthy + state. + +Once mounted, EDB Postgres for Kubernetes will automatically configure PostgreSQL by appending: + +- `/extensions/foo/share` to `extension_control_path` +- `/extensions/foo/lib` to `dynamic_library_path` + +This ensures that the PostgreSQL container is ready to serve the `foo` +extension when requested by a database, as described in the next section. The +`CREATE EXTENSION foo` command, triggered automatically during the +[reconciliation of the `Database` resource](declarative_database_management.md/#managing-extensions-in-a-database), +will work without additional configuration, as PostgreSQL will locate: + +- the extension control file at `/extensions/foo/share/extension/foo.control` +- the shared library at `/extensions/foo/lib/foo.so` + +### Adding a new extension to a `Database` resource + +Once the extension is available in the PostgreSQL instance, you can leverage +declarative databases to [manage the lifecycle of your extensions](declarative_database_management.md#managing-extensions-in-a-database) +within the target database. 
+ +Continuing with the `foo` example, you can request the installation of the +`foo` extension in the `app` database of the `foo-18` cluster using the +following resource definition: + +```yaml +apiVersion: postgresql.k8s.enterprisedb.io/v1 +kind: Database +metadata: + name: foo-app +spec: + name: app + owner: app + cluster: + name: foo-18 + extensions: + - name: foo + version: 1.0 +``` + +EDB Postgres for Kubernetes will automatically reconcile this resource, executing the +`CREATE EXTENSION foo` command inside the `app` database if it is not +already installed, ensuring your desired state is maintained without manual +intervention. + +## Advanced Topics + +In some cases, the default expected structure may be insufficient for your +extension image, particularly when: + +- The extension requires additional system libraries. +- Multiple extensions are bundled in the same image. +- The image uses a custom directory structure. + +Following the *"convention over configuration"* paradigm, EDB Postgres for Kubernetes allows +you to finely control the configuration of each extension image through the +following fields: + +- `extension_control_path`: A list of relative paths within the container image + to be appended to PostgreSQL’s `extension_control_path`, allowing it to + locate extension control files. +- `dynamic_library_path`: A list of relative paths within the container image + to be appended to PostgreSQL’s `dynamic_library_path`, enabling it to locate + shared library files for extensions. +- `ld_library_path`: A list of relative paths within the container image to be + appended to the `LD_LIBRARY_PATH` environment variable of the instance + manager process, allowing PostgreSQL to locate required system libraries at + runtime. + +This flexibility enables you to support complex or non-standard extension +images while maintaining clarity and predictability. + +### Setting Custom Paths + +If your extension image does not use the default `lib` and `share` directories +for its libraries and control files, you can override the defaults by +explicitly setting `extension_control_path` and `dynamic_library_path`. + +For example: + +```yaml +spec: + postgresql: + extensions: + - name: my-extension + extension_control_path: + - my/share/path + dynamic_library_path: + - my/lib/path + image: + reference: # registry path for your extension image +``` + +EDB Postgres for Kubernetes will configure PostgreSQL with: + +- `/extensions/my-extension/my/share/path` appended to `extension_control_path` +- `/extensions/my-extension/my/lib/path` appended to `dynamic_library_path` + +This allows PostgreSQL to discover your extension’s control files and shared +libraries correctly, even with a non-standard layout. + +### Multi-extension Images + +You may need to include multiple extensions within the same container image, +adopting a structure where each extension’s files reside in their own +subdirectory. + +For example, to package PostGIS and pgRouting together in a single image, each +in its own subdirectory: + +```yaml +# ... +spec: + # ... + postgresql: + extensions: + - name: geospatial + extension_control_path: + - postgis/share + - pgrouting/share + dynamic_library_path: + - postgis/lib + - pgrouting/lib + # ... + image: + reference: # registry path for your geospatial image + # ... + # ... + # ... +``` + +### Including System Libraries + +Some extensions, such as PostGIS, require system libraries that may not be +present in the base PostgreSQL image. 
To support these requirements, you can
+package the necessary libraries within your extension container image and make
+them available to PostgreSQL using the `ld_library_path` field.
+
+For example, if your extension image includes a `system` directory with the
+required libraries:
+
+```yaml
+# ...
+spec:
+  # ...
+  postgresql:
+    extensions:
+      - name: postgis
+        # ...
+        ld_library_path:
+          - system
+        image:
+          reference: # registry path for your PostGIS image
+        # ...
+      # ...
+  # ...
+```
+
+EDB Postgres for Kubernetes will set the `LD_LIBRARY_PATH` environment variable to include
+`/extensions/postgis/system`, allowing PostgreSQL to locate and load these
+system libraries at runtime.
+
+!!! Important
+    Since `ld_library_path` must be set when the PostgreSQL process starts,
+    changing this value requires a **cluster restart** for the new value to take effect.
+    EDB Postgres for Kubernetes does not currently trigger this restart automatically; you will need to
+    manually restart the cluster (e.g., using `kubectl cnp restart`) after modifying `ld_library_path`.
+
+## Image Specifications
+
+A standard extension container image for EDB Postgres for Kubernetes includes two
+required directories at its root:
+
+- `share`: contains the extension control file (e.g., `<extension>.control`)
+  and any SQL files.
+- `lib`: contains the extension's shared library (e.g., `<extension>.so`) and
+  any additional required libraries.
+
+Following this structure ensures that the extension will be automatically
+discoverable and usable by PostgreSQL within EDB Postgres for Kubernetes without requiring
+manual configuration.
+
+!!! Important
+    We encourage PostgreSQL extension developers to publish OCI-compliant extension
+    images following this layout as part of their artifact distribution, making
+    their extensions easily consumable within Kubernetes environments.
+    Ideally, extension images should target a specific operating system
+    distribution and architecture, be tied to a particular PostgreSQL version, and
+    be built using the distribution’s native packaging system (for example, using
+    Debian or RPM packages). This approach ensures consistency, security, and
+    compatibility with the PostgreSQL images used in your clusters.
+
+## Caveats
+
+Currently, adding, removing, or updating an extension image triggers a
+restart of the PostgreSQL pods. This behavior is inherited from how
+[image volumes](https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/)
+work in Kubernetes.
+
+Before performing an extension update, ensure you have:
+
+- Thoroughly tested the update process in a staging environment.
+- Verified that the extension image contains the required upgrade path between
+  the currently installed version and the target version.
+- Updated the `version` field for the extension in the relevant `Database`
+  resource definition to align with the new version in the image.
+
+These steps help prevent downtime or data inconsistencies in your PostgreSQL
+clusters during extension updates.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
index 763d65f9f08..948833d6af5 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
@@ -81,14 +81,14 @@ through a YAML manifest applied via `kubectl`.
There are two different manifests available depending on your subscription plan: -- Standard: The [latest standard operator manifest](https://get.enterprisedb.io/pg4k/pg4k-standard-1.26.1.yaml). -- Enterprise: The [latest enterprise operator manifest](https://get.enterprisedb.io/pg4k/pg4k-enterprise-1.26.1.yaml). +- Standard: The [latest standard operator manifest](https://get.enterprisedb.io/pg4k/pg4k-standard-1.27.0-rc1.yaml). +- Enterprise: The [latest enterprise operator manifest](https://get.enterprisedb.io/pg4k/pg4k-enterprise-1.27.0-rc1.yaml). You can install the manifest for the latest version of the operator by running: ```sh kubectl apply --server-side -f \ - https://get.enterprisedb.io/pg4k/pg4k-$EDB_SUBSCRIPTION_PLAN-1.26.1.yaml + https://get.enterprisedb.io/pg4k/pg4k-$EDB_SUBSCRIPTION_PLAN-1.27.0-rc1.yaml ``` You can verify that with: @@ -283,6 +283,8 @@ When versions are not directly upgradable, the old version needs to be removed before installing the new one. This won't affect user data but only the operator itself. + + ### Upgrading to 1.26 from a previous minor version !!! Important diff --git a/product_docs/docs/postgres_for_kubernetes/1/instance_manager.mdx b/product_docs/docs/postgres_for_kubernetes/1/instance_manager.mdx index 922e8860b3d..fb1f0945cb7 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/instance_manager.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/instance_manager.mdx @@ -192,9 +192,9 @@ spec: failureThreshold: 10 ``` -### Primary Isolation (alpha) +### Primary Isolation -EDB Postgres for Kubernetes 1.26 introduces an opt-in experimental behavior for the liveness +EDB Postgres for Kubernetes 1.27 introduces an additional behavior for the liveness probe of a PostgreSQL primary, which will report a failure if **both** of the following conditions are met: @@ -203,35 +203,34 @@ following conditions are met: The effect of this behavior is to consider an isolated primary to be not alive and subsequently **shut it down** when the liveness probe fails. -It is **disabled by default** and can be enabled by adding the following -annotation to the `Cluster` resource: +It is **enabled by default** and can be disabled by adding the following: ```yaml -metadata: - annotations: - alpha.k8s.enterprisedb.io/livenessPinger: '{"enabled": true}' +spec: + probes: + liveness: + isolationCheck: + enabled: false ``` -!!! Warning - This feature is experimental and will be introduced in a future EDB Postgres for Kubernetes - release with a new API. If you decide to use it now, note that the API **will - change**. - !!! Important - If you plan to enable this experimental feature, be aware that the default - liveness probe settings—automatically derived from `livenessProbeTimeout`—might + Be aware that the default liveness probe settings—automatically derived from `livenessProbeTimeout`—might be aggressive (30 seconds). As such, we recommend explicitly setting the liveness probe configuration to suit your environment. -The annotation also accepts two optional network settings: `requestTimeout` -and `connectionTimeout`, both defaulting to `500` (in milliseconds). +The spec also accepts two optional network settings: `requestTimeout` +and `connectionTimeout`, both defaulting to `1000` (in milliseconds). In cloud environments, you may need to increase these values. 
For example: ```yaml -metadata: - annotations: - alpha.k8s.enterprisedb.io/livenessPinger: '{"enabled": true,"requestTimeout":1000,"connectionTimeout":1000}' +spec: + probes: + liveness: + isolationCheck: + enabled: true + requestTimeout: "2000" + connectionTimeout: "2000" ``` ## Readiness Probe diff --git a/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx b/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx index da6678647e8..807707f61f5 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx @@ -16,6 +16,14 @@ connect to publications on a publisher node. Subscribers pull data changes from these publications and can re-publish them, enabling cascading replication and complex topologies. +!!! Important + To protect your logical replication subscribers after a failover of the + publisher cluster in EDB Postgres for Kubernetes, ensure that replication slot + synchronization for logical decoding is enabled. Without this, your logical + replication clients may lose data and fail to continue seamlessly after a + failover. For configuration details, see + ["Replication: Logical Decoding Slot Synchronization"](replication.md#logical-decoding-slot-synchronization). + This flexible model is particularly useful for: - Online data migrations @@ -249,7 +257,7 @@ the `Subscription` status will reflect the following: If an error occurs during reconciliation, `status.applied` will be `false`, and an error message will be included in the `status.message` field. -### Removing a subscription +### Removing a Subscription The `subscriptionReclaimPolicy` field controls the behavior when deleting a `Subscription` object: @@ -278,6 +286,13 @@ spec: In this case, deleting the `Subscription` object also removes the `subscriber` subscription from the `app` database of the `king` cluster. +### Resilience to Failovers + +To ensure that your logical replication subscriptions remain operational after +a failover of the publisher, configure EDB Postgres for Kubernetes to synchronize logical +decoding slots across the cluster. For detailed instructions, see +[Logical Decoding Slot Synchronization](replication.md#logical-decoding-slot-synchronization). + ## Limitations Logical replication in PostgreSQL has some inherent limitations, as outlined in diff --git a/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx b/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx index d84987538bf..2927ffa558d 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx @@ -119,11 +119,11 @@ than `1`, the operator manages `instances -1` replicas, including high availability (HA) through automated failover and rolling updates through switchover operations. -EDB Postgres for Kubernetes manages replication slots for all the replicas -in the HA cluster. The implementation is inspired by the previously -proposed patch for PostgreSQL, called -[failover slots](https://wiki.postgresql.org/wiki/Failover_slots), and -also supports user defined physical replication slots on the primary. +EDB Postgres for Kubernetes manages replication slots for all replicas in the +high-availability cluster. 
It also supports user-defined physical replication +slots on the primary and enables logical decoding failover—natively for +PostgreSQL 17 and later using `sync_replication_slots`, and through the +`pg_failover_slots` extension for earlier versions. ### Service Configuration diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/index.mdx index 819262bd416..039bf3a5e47 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/index.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/index.mdx @@ -1,8 +1,9 @@ --- -title: API Reference - v1.26.1 +title: API Reference - v1.27.0-rc1 originalFilePath: src/pg4k.v1.md navTitle: API Reference navigation: + - v1.27.0-rc1 - v1.26.1 - v1.26.0 - v1.25.1 @@ -115,6 +116,7 @@ navigation: - [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster) - [ClusterImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ClusterImageCatalog) - [Database](#postgresql-k8s-enterprisedb-io-v1-Database) +- [FailoverQuorum](#postgresql-k8s-enterprisedb-io-v1-FailoverQuorum) - [ImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ImageCatalog) - [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler) - [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication) @@ -260,6 +262,38 @@ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api- +
+ +## FailoverQuorum + +**Appears in:** + +

FailoverQuorum contains the information about the current failover +quorum status of a PG cluster. It is updated by the instance manager +of the primary node and reset to zero by the operator to trigger +an update.

+ + + + + + + + + + + + + +
FieldDescription
apiVersion [Required]
string
postgresql.k8s.enterprisedb.io/v1
kind [Required]
string
FailoverQuorum
metadata [Required]
+meta/v1.ObjectMeta +
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field.
status
+FailoverQuorumStatus +
+

Most recently observed status of the failover quorum.

+
+
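+
+As a quick way to inspect this resource, you can fetch it through `kubectl`
+using the owning cluster's name (illustrative only; assumes a cluster named
+`cluster-example` with quorum failover enabled, whose `FailoverQuorum` object
+shares its name):
+
+```sh
+kubectl get failoverquorum cluster-example -o yaml
+```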
## ImageCatalog @@ -2952,6 +2986,60 @@ storage

+
+ +## ExtensionConfiguration + +**Appears in:** + +- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration) + +

ExtensionConfiguration is the configuration used to add +PostgreSQL extensions to the Cluster.

+ + + + + + + + + + + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+

The name of the extension, required

+
image [Required]
+core/v1.ImageVolumeSource +
+

The image containing the extension, required

+
extension_control_path
+[]string +
+

The list of directories inside the image which should be added to extension_control_path. +If not defined, defaults to "/share".

+
dynamic_library_path
+[]string +
+

The list of directories inside the image which should be added to dynamic_library_path. +If not defined, defaults to "/lib".

+
ld_library_path
+[]string +
+

The list of directories inside the image which should be added to ld_library_path.

+
+
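+
+As a minimal, illustrative sketch (the extension name and image reference are
+placeholders, not real artifacts), these fields are used in a `Cluster` as
+follows:
+
+```yaml
+spec:
+  postgresql:
+    extensions:
+      - name: my-extension               # mounted at /extensions/my-extension
+        image:
+          reference: registry.example.com/my-extension:1.0
+        # optional overrides; when omitted, "share" and "lib" are assumed
+        extension_control_path:
+          - share
+        dynamic_library_path:
+          - lib
+```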
## ExtensionSpec @@ -3078,6 +3166,54 @@ of WAL archiving and backups for this external cluster

+
+ +## FailoverQuorumStatus + +**Appears in:** + +- [FailoverQuorum](#postgresql-k8s-enterprisedb-io-v1-FailoverQuorum) + +

FailoverQuorumStatus is the latest observed status of the failover +quorum of the PG cluster.

+ + + + + + + + + + + + + + + + + +
FieldDescription
method
+string +
+

Contains the latest reported Method value.

+
standbyNames
+[]string +
+

StandbyNames is the list of potentially synchronous +instance names.

+
standbyNumber
+int +
+

StandbyNumber is the number of synchronous standbys that transactions +need to wait for replies from.

+
primary
+string +
+

Primary is the name of the primary instance that updated +this object the latest time.

+
+
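+
+For illustration only, a status populated by the primary might look like the
+following (instance names and values are hypothetical):
+
+```yaml
+status:
+  method: any
+  standbyNumber: 1
+  standbyNames:
+    - cluster-example-2
+    - cluster-example-3
+  primary: cluster-example-1
+```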
## ImageCatalogRef @@ -3333,6 +3469,44 @@ conflict with the operator's intended functionality or design.

+
+ +## IsolationCheckConfiguration + +**Appears in:** + +- [LivenessProbe](#postgresql-k8s-enterprisedb-io-v1-LivenessProbe) + +

IsolationCheckConfiguration contains the configuration for the isolation check +functionality in the liveness probe

+ + + + + + + + + + + + + + +
FieldDescription
enabled
+bool +
+

Whether primary isolation checking is enabled for the liveness probe

+
requestTimeout
+int +
+

Timeout in milliseconds for requests during the primary isolation check

+
connectionTimeout
+int +
+

Timeout in milliseconds for connections during the primary isolation check

+
+
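+
+These fields are set under `.spec.probes.liveness.isolationCheck` of a
+`Cluster`, for example (timeout values are illustrative and expressed in
+milliseconds):
+
+```yaml
+spec:
+  probes:
+    liveness:
+      isolationCheck:
+        enabled: true
+        requestTimeout: 1000
+        connectionTimeout: 1000
+```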
## LDAPBindAsAuth @@ -3486,6 +3660,40 @@ the bind+search LDAP authentication process

LDAPScheme defines the possible schemes for LDAP

+
+ +## LivenessProbe + +**Appears in:** + +- [ProbesConfiguration](#postgresql-k8s-enterprisedb-io-v1-ProbesConfiguration) + +

LivenessProbe is the configuration of the liveness probe

+ + + + + + + + + + + +
FieldDescription
Probe
+Probe +
(Members of Probe are embedded into this type.) +

Probe is the standard probe configuration

+
isolationCheck
+IsolationCheckConfiguration +
+

Configure the feature that extends the liveness probe for a primary +instance. In addition to the basic checks, this verifies whether the +primary is isolated from the Kubernetes API server and from its +replicas, ensuring that it can be safely shut down if network +partition or API unavailability is detected. Enabled by default.

+
+
## ManagedConfiguration @@ -4472,6 +4680,13 @@ This should only be used for debugging and troubleshooting. Defaults to false.

+extensions
+[]ExtensionConfiguration + + +

The configuration of the extensions to be added

+ + @@ -4507,9 +4722,9 @@ the primary server of the cluster as part of rolling updates

**Appears in:** -- [ProbeWithStrategy](#postgresql-k8s-enterprisedb-io-v1-ProbeWithStrategy) +- [LivenessProbe](#postgresql-k8s-enterprisedb-io-v1-LivenessProbe) -- [ProbesConfiguration](#postgresql-k8s-enterprisedb-io-v1-ProbesConfiguration) +- [ProbeWithStrategy](#postgresql-k8s-enterprisedb-io-v1-ProbeWithStrategy)

Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.

@@ -4649,7 +4864,7 @@ to be injected in the PostgreSQL Pods

liveness [Required]
-Probe +LivenessProbe

The liveness probe configuration

@@ -5112,6 +5327,19 @@ It may only contain lower case letters, numbers, and the underscore character. This can only be set at creation time. By default set to _cnp_.

+synchronizeLogicalDecoding
+bool + + +

When enabled, the operator automatically manages synchronization of logical +decoding (replication) slots across high-availability clusters.

+

Requires one of the following conditions:

+
    +
  • PostgreSQL version 17 or later
  • +
  • PostgreSQL version < 17 with pg_failover_slots extension enabled
  • +
+ + diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.27.0-rc1.mdx b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.27.0-rc1.mdx new file mode 100644 index 00000000000..e5bc5008cda --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1/v1.27.0-rc1.mdx @@ -0,0 +1,6462 @@ +--- +title: API Reference - v1.27.0-rc1 +navTitle: v1.27.0-rc1 +pdfExclude: 'true' + +--- + +

Package v1 contains API Schema definitions for the postgresql v1 API group

+ +## Resource Types + +- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup) +- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster) +- [ClusterImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ClusterImageCatalog) +- [Database](#postgresql-k8s-enterprisedb-io-v1-Database) +- [FailoverQuorum](#postgresql-k8s-enterprisedb-io-v1-FailoverQuorum) +- [ImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ImageCatalog) +- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler) +- [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication) +- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup) +- [Subscription](#postgresql-k8s-enterprisedb-io-v1-Subscription) + +
+ +## Backup + +

A Backup resource is a request for a PostgreSQL backup by the user.

+ + + + + + + + + + + + + + + + +
FieldDescription
apiVersion [Required]
string
postgresql.k8s.enterprisedb.io/v1
kind [Required]
string
Backup
metadata [Required]
+meta/v1.ObjectMeta +
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field.
spec [Required]
+BackupSpec +
+

Specification of the desired behavior of the backup. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

+
status
+BackupStatus +
+

Most recently observed status of the backup. This data may not be up to +date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

+
+ +
+ +## Cluster + +

Cluster is the Schema for the PostgreSQL API

+ + + + + + + + + + + + + + + + +
FieldDescription
apiVersion [Required]
string
postgresql.k8s.enterprisedb.io/v1
kind [Required]
string
Cluster
metadata [Required]
+meta/v1.ObjectMeta +
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field.
spec [Required]
+ClusterSpec +
+

Specification of the desired behavior of the cluster. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

+
status
+ClusterStatus +
+

Most recently observed status of the cluster. This data may not be up +to date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

+
+ +
+ +## ClusterImageCatalog + +

ClusterImageCatalog is the Schema for the clusterimagecatalogs API

+ + + + + + + + + + + + + +
FieldDescription
apiVersion [Required]
string
postgresql.k8s.enterprisedb.io/v1
kind [Required]
string
ClusterImageCatalog
metadata [Required]
+meta/v1.ObjectMeta +
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field.
spec [Required]
+ImageCatalogSpec +
+

Specification of the desired behavior of the ClusterImageCatalog. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

+
+ +
+ +## Database + +

Database is the Schema for the databases API

+ + + + + + + + + + + + + + + + +
FieldDescription
apiVersion [Required]
string
postgresql.k8s.enterprisedb.io/v1
kind [Required]
string
Database
metadata [Required]
+meta/v1.ObjectMeta +
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field.
spec [Required]
+DatabaseSpec +
+

Specification of the desired Database. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

+
status
+DatabaseStatus +
+

Most recently observed status of the Database. This data may not be up to +date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

+
+ +
+ +## FailoverQuorum + +**Appears in:** + +

FailoverQuorum contains the information about the current failover +quorum status of a PG cluster. It is updated by the instance manager +of the primary node and reset to zero by the operator to trigger +an update.

+ + + + + + + + + + + + + +
FieldDescription
apiVersion [Required]
string
postgresql.k8s.enterprisedb.io/v1
kind [Required]
string
FailoverQuorum
metadata [Required]
+meta/v1.ObjectMeta +
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field.
status
+FailoverQuorumStatus +
+

Most recently observed status of the failover quorum.

+
+ +
+ +## ImageCatalog + +

ImageCatalog is the Schema for the imagecatalogs API

+ + + + + + + + + + + + + +
FieldDescription
apiVersion [Required]
string
postgresql.k8s.enterprisedb.io/v1
kind [Required]
string
ImageCatalog
metadata [Required]
+meta/v1.ObjectMeta +
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field.
spec [Required]
+ImageCatalogSpec +
+

Specification of the desired behavior of the ImageCatalog. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

+
+ +
+ +## Pooler + +

Pooler is the Schema for the poolers API

+ + + + + + + + + + + + + + + + +
FieldDescription
apiVersion [Required]
string
postgresql.k8s.enterprisedb.io/v1
kind [Required]
string
Pooler
metadata [Required]
+meta/v1.ObjectMeta +
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field.
spec [Required]
+PoolerSpec +
+

Specification of the desired behavior of the Pooler. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

+
status
+PoolerStatus +
+

Most recently observed status of the Pooler. This data may not be up to +date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

+
+ +
+ +## Publication + +

Publication is the Schema for the publications API

+ + + + + + + + + + + + + + + + +
FieldDescription
apiVersion [Required]
string
postgresql.k8s.enterprisedb.io/v1
kind [Required]
string
Publication
metadata [Required]
+meta/v1.ObjectMeta +
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field.
spec [Required]
+PublicationSpec +
+ No description provided.
status [Required]
+PublicationStatus +
+ No description provided.
+ +
+ +## ScheduledBackup + +

ScheduledBackup is the Schema for the scheduledbackups API

+ + + + + + + + + + + + + + + + +
FieldDescription
apiVersion [Required]
string
postgresql.k8s.enterprisedb.io/v1
kind [Required]
string
ScheduledBackup
metadata [Required]
+meta/v1.ObjectMeta +
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field.
spec [Required]
+ScheduledBackupSpec +
+

Specification of the desired behavior of the ScheduledBackup. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

+
status
+ScheduledBackupStatus +
+

Most recently observed status of the ScheduledBackup. This data may not be up +to date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

+
+ +
+ +## Subscription + +

Subscription is the Schema for the subscriptions API

+ + + + + + + + + + + + + + + + +
FieldDescription
apiVersion [Required]
string
postgresql.k8s.enterprisedb.io/v1
kind [Required]
string
Subscription
metadata [Required]
+meta/v1.ObjectMeta +
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field.
spec [Required]
+SubscriptionSpec +
+ No description provided.
status [Required]
+SubscriptionStatus +
+ No description provided.
+ +
+ +## AffinityConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

AffinityConfiguration contains the info we need to create the +affinity rules for Pods

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
enablePodAntiAffinity
+bool +
+

Activates anti-affinity for the pods. The operator will define pods +anti-affinity unless this field is explicitly set to false

+
topologyKey
+string +
+

TopologyKey to use for anti-affinity configuration. See k8s documentation +for more info on that

+
nodeSelector
+map[string]string +
+

NodeSelector is map of key-value pairs used to define the nodes on which +the pods can run. +More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/

+
nodeAffinity
+core/v1.NodeAffinity +
+

NodeAffinity describes node affinity scheduling rules for the pod. +More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity

+
tolerations
+[]core/v1.Toleration +
+

Tolerations is a list of Tolerations that should be set for all the pods, in order to allow them to run +on tainted nodes. +More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/

+
podAntiAffinityType
+string +
+

PodAntiAffinityType allows the user to decide whether pod anti-affinity between cluster instance has to be +considered a strong requirement during scheduling or not. Allowed values are: "preferred" (default if empty) or +"required". Setting it to "required", could lead to instances remaining pending until new kubernetes nodes are +added if all the existing nodes don't match the required pod anti-affinity rule. +More info: +https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity

+
additionalPodAntiAffinity
+core/v1.PodAntiAffinity +
+

AdditionalPodAntiAffinity allows to specify pod anti-affinity terms to be added to the ones generated +by the operator if EnablePodAntiAffinity is set to true (default) or to be used exclusively if set to false.

+
additionalPodAffinity
+core/v1.PodAffinity +
+

AdditionalPodAffinity allows to specify pod affinity terms to be passed to all the cluster's pods.

+
+ +
+ +## AvailableArchitecture + +**Appears in:** + +- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus) + +

AvailableArchitecture represents the state of a cluster's architecture

+ + + + + + + + + + + +
FieldDescription
goArch [Required]
+string +
+

GoArch is the name of the executable architecture

+
hash [Required]
+string +
+

Hash is the hash of the executable

+
+ +
+ +## BackupConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

BackupConfiguration defines how the backup of the cluster are taken. +The supported backup methods are BarmanObjectStore and VolumeSnapshot. +For details and examples refer to the Backup and Recovery section of the +documentation

+ + + + + + + + + + + + + + + + + +
FieldDescription
volumeSnapshot
+VolumeSnapshotConfiguration +
+

VolumeSnapshot provides the configuration for the execution of volume snapshot backups.

+
barmanObjectStore
+github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanObjectStoreConfiguration +
+

The configuration for the barman-cloud tool suite

+
retentionPolicy
+string +
+

RetentionPolicy is the retention policy to be used for backups +and WALs (i.e. '60d'). The retention policy is expressed in the form +of XXu where XX is a positive integer and u is in [dwm] - +days, weeks, months. +It's currently only applicable when using the BarmanObjectStore method.

+
target
+BackupTarget +
+

The policy to decide which instance should perform backups. Available +options are empty string, which will default to prefer-standby policy, +primary to have backups run always on primary instances, prefer-standby +to have backups run preferably on the most updated standby, if available.

+
+ +
+ +## BackupMethod + +(Alias of `string`) + +**Appears in:** + +- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec) + +- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus) + +- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec) + +

BackupMethod defines the way of executing the physical base backups of +the selected PostgreSQL instance

+ +
+ +## BackupPhase + +(Alias of `string`) + +**Appears in:** + +- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus) + +

BackupPhase is the phase of the backup

+ +
+ +## BackupPluginConfiguration + +**Appears in:** + +- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec) + +- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec) + +

BackupPluginConfiguration contains the backup configuration used by +the backup plugin

+ + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+

Name is the name of the plugin managing this backup

+
parameters
+map[string]string +
+

Parameters are the configuration parameters passed to the backup +plugin for this backup

+
+ +
+ +## BackupSnapshotElementStatus + +**Appears in:** + +- [BackupSnapshotStatus](#postgresql-k8s-enterprisedb-io-v1-BackupSnapshotStatus) + +

BackupSnapshotElementStatus is a volume snapshot that is part of a volume snapshot method backup

+ + + + + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+

Name is the snapshot resource name

+
type [Required]
+string +
+

Type is tho role of the snapshot in the cluster, such as PG_DATA, PG_WAL and PG_TABLESPACE

+
tablespaceName
+string +
+

TablespaceName is the name of the snapshotted tablespace. Only set +when type is PG_TABLESPACE

+
+ +
+ +## BackupSnapshotStatus + +**Appears in:** + +- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus) + +

BackupSnapshotStatus the fields exclusive to the volumeSnapshot method backup

+ + + + + + + + +
FieldDescription
elements
+[]BackupSnapshotElementStatus +
+

The elements list, populated with the gathered volume snapshots

+
+ +
+ +## BackupSource + +**Appears in:** + +- [BootstrapRecovery](#postgresql-k8s-enterprisedb-io-v1-BootstrapRecovery) + +

BackupSource contains the backup we need to restore from, plus some +information that could be needed to correctly restore it.

+ + + + + + + + + + + +
FieldDescription
LocalObjectReference
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference +
(Members of LocalObjectReference are embedded into this type.) + No description provided.
endpointCA
+github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector +
+

EndpointCA store the CA bundle of the barman endpoint. +Useful when using self-signed certificates to avoid +errors with certificate issuer and barman-cloud-wal-archive.

+
+ +
+ +## BackupSpec + +**Appears in:** + +- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup) + +

BackupSpec defines the desired state of Backup

+ + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
cluster [Required]
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference +
+

The cluster to backup

+
target
+BackupTarget +
+

The policy to decide which instance should perform this backup. If empty, +it defaults to cluster.spec.backup.target. +Available options are empty string, primary and prefer-standby. +primary to have backups run always on primary instances, +prefer-standby to have backups run preferably on the most updated +standby, if available.

+
method
+BackupMethod +
+

The backup method to be used, possible options are barmanObjectStore, +volumeSnapshot or plugin. Defaults to: barmanObjectStore.

+
pluginConfiguration
+BackupPluginConfiguration +
+

Configuration parameters passed to the plugin managing this backup

+
online
+bool +
+

Whether the default type of backup with volume snapshots is +online/hot (true, default) or offline/cold (false) +Overrides the default setting specified in the cluster field '.spec.backup.volumeSnapshot.online'

+
onlineConfiguration
+OnlineConfiguration +
+

Configuration parameters to control the online/hot backup with volume snapshots +Overrides the default settings specified in the cluster '.backup.volumeSnapshot.onlineConfiguration' stanza

+
+ +
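As a hedged illustration of the fields above, a minimal `Backup` manifest might look like the following (the cluster and resource names are placeholders):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Backup
metadata:
  name: backup-example
spec:
  # Reference to the Cluster resource to back up
  cluster:
    name: cluster-example
  # One of: barmanObjectStore (default), volumeSnapshot, plugin
  method: volumeSnapshot
  # Online/hot backup; overrides .spec.backup.volumeSnapshot.online
  online: true
  # Run the backup on the most updated standby, when available
  target: prefer-standby
```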
+ +## BackupStatus + +**Appears in:** + +- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup) + +

BackupStatus defines the observed state of Backup

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
BarmanCredentials
+github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanCredentials +
(Members of BarmanCredentials are embedded into this type.) +

The potential credentials for each cloud provider

+
endpointCA
+github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector +
+

EndpointCA stores the CA bundle of the barman endpoint. +Useful when using self-signed certificates to avoid +errors with certificate issuer and barman-cloud-wal-archive.

+
endpointURL
+string +
+

Endpoint to be used to upload data to the cloud, +overriding the automatic endpoint discovery

+
destinationPath
+string +
+

The path where to store the backup (i.e. s3://bucket/path/to/folder); +this path, with different destination folders, will be used for WALs +and for data. This may not be populated in case of errors.

+
serverName
+string +
+

The server name on S3; the cluster name is used if this +parameter is omitted

+
encryption
+string +
+

Encryption method required by the S3 API

+
backupId
+string +
+

The ID of the Barman backup

+
backupName
+string +
+

The Name of the Barman backup

+
phase
+BackupPhase +
+

The last backup status

+
startedAt
+meta/v1.Time +
+

When the backup was started

+
stoppedAt
+meta/v1.Time +
+

When the backup was terminated

+
beginWal
+string +
+

The starting WAL

+
endWal
+string +
+

The ending WAL

+
beginLSN
+string +
+

The starting xlog

+
endLSN
+string +
+

The ending xlog

+
error
+string +
+

The detected error

+
commandOutput
+string +
+

Unused. Retained for compatibility with old versions.

+
commandError
+string +
+

The backup command output in case of error

+
backupLabelFile
+[]byte +
+

Backup label file content as returned by Postgres in case of online (hot) backups

+
tablespaceMapFile
+[]byte +
+

Tablespace map file content as returned by Postgres in case of online (hot) backups

+
instanceID
+InstanceID +
+

Information to identify the instance where the backup has been taken from

+
snapshotBackupStatus
+BackupSnapshotStatus +
+

Status of the volumeSnapshot backup

+
method
+BackupMethod +
+

The backup method being used

+
online
+bool +
+

Whether the backup was online/hot (true) or offline/cold (false)

+
pluginMetadata
+map[string]string +
+

A map containing the plugin metadata

+
+ +
+ +## BackupTarget + +(Alias of `string`) + +**Appears in:** + +- [BackupConfiguration](#postgresql-k8s-enterprisedb-io-v1-BackupConfiguration) + +- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec) + +- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec) + +

BackupTarget describes the preferred targets for a backup

+ +
+ +## BootstrapConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

BootstrapConfiguration contains information about how to create the PostgreSQL +cluster. Only a single bootstrap method can be defined among the supported +ones. initdb will be used as the bootstrap method if left +unspecified. Refer to the Bootstrap page of the documentation for more +information.

+ + + + + + + + + + + + + + +
FieldDescription
initdb
+BootstrapInitDB +
+

Bootstrap the cluster via initdb

+
recovery
+BootstrapRecovery +
+

Bootstrap the cluster from a backup

+
pg_basebackup
+BootstrapPgBaseBackup +
+

Bootstrap the cluster taking a physical backup of another compatible +PostgreSQL instance

+
+ +
+ +## BootstrapInitDB + +**Appears in:** + +- [BootstrapConfiguration](#postgresql-k8s-enterprisedb-io-v1-BootstrapConfiguration) + +

BootstrapInitDB is the configuration of the bootstrap process when +initdb is used +Refer to the Bootstrap page of the documentation for more information.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
database
+string +
+

Name of the database used by the application. Default: app.

+
owner
+string +
+

Name of the owner of the database in the instance to be used +by applications. Defaults to the value of the database key.

+
secret
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference +
+

Name of the secret containing the initial credentials for the +owner of the user database. If empty a new secret will be +created from scratch

+
redwood
+bool +
+

Whether to enable or disable Redwood compatibility. Requires +EPAS, and for EPAS it defaults to true

+
options
+[]string +
+

The list of options that must be passed to initdb when creating the cluster. +Deprecated: This could lead to inconsistent configurations, +please use the explicit provided parameters instead. +If defined, explicit values will be ignored.

+
dataChecksums
+bool +
+

Whether the -k option should be passed to initdb, +enabling checksums on data pages (default: false)

+
encoding
+string +
+

The value to be passed as option --encoding for initdb (default:UTF8)

+
localeCollate
+string +
+

The value to be passed as option --lc-collate for initdb (default:C)

+
localeCType
+string +
+

The value to be passed as option --lc-ctype for initdb (default:C)

+
locale
+string +
+

Sets the default collation order and character classification in the new database.

+
localeProvider
+string +
+

This option sets the locale provider for databases created in the new cluster. +Available from PostgreSQL 16.

+
icuLocale
+string +
+

Specifies the ICU locale when the ICU provider is used. +This option requires localeProvider to be set to icu. +Available from PostgreSQL 15.

+
icuRules
+string +
+

Specifies additional collation rules to customize the behavior of the default collation. +This option requires localeProvider to be set to icu. +Available from PostgreSQL 16.

+
builtinLocale
+string +
+

Specifies the locale name when the builtin provider is used. +This option requires localeProvider to be set to builtin. +Available from PostgreSQL 17.

+
walSegmentSize
+int +
+

The value in megabytes (1 to 1024) to be passed to the --wal-segsize +option for initdb (default: empty, resulting in PostgreSQL default: 16MB)

+
postInitSQL
+[]string +
+

List of SQL queries to be executed as a superuser in the postgres +database right after the cluster has been created - to be used with extreme care +(by default empty)

+
postInitApplicationSQL
+[]string +
+

List of SQL queries to be executed as a superuser in the application +database right after the cluster has been created - to be used with extreme care +(by default empty)

+
postInitTemplateSQL
+[]string +
+

List of SQL queries to be executed as a superuser in the template1 +database right after the cluster has been created - to be used with extreme care +(by default empty)

+
import
+Import +
+

Bootstraps the new cluster by importing data from an existing PostgreSQL +instance using logical backup (pg_dump and pg_restore)

+
postInitApplicationSQLRefs
+SQLRefs +
+

List of references to ConfigMaps or Secrets containing SQL files +to be executed as a superuser in the application database right after +the cluster has been created. The references are processed in a specific order: +first, all Secrets are processed, followed by all ConfigMaps. +Within each group, the processing order follows the sequence specified +in their respective arrays. +(by default empty)

+
postInitTemplateSQLRefs
+SQLRefs +
+

List of references to ConfigMaps or Secrets containing SQL files +to be executed as a superuser in the template1 database right after +the cluster has been created. The references are processed in a specific order: +first, all Secrets are processed, followed by all ConfigMaps. +Within each group, the processing order follows the sequence specified +in their respective arrays. +(by default empty)

+
postInitSQLRefs
+SQLRefs +
+

List of references to ConfigMaps or Secrets containing SQL files +to be executed as a superuser in the postgres database right after +the cluster has been created. The references are processed in a specific order: +first, all Secrets are processed, followed by all ConfigMaps. +Within each group, the processing order follows the sequence specified +in their respective arrays. +(by default empty)

+
+ +
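A minimal, illustrative sketch of a cluster bootstrapped with `initdb`, combining a few of the fields above (names and storage size are placeholders; `storage.size` belongs to `StorageConfiguration`):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  bootstrap:
    initdb:
      # Application database and its owner
      database: app
      owner: app
      # Passed to initdb as --encoding and -k respectively
      encoding: UTF8
      dataChecksums: true
```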
+ +## BootstrapPgBaseBackup + +**Appears in:** + +- [BootstrapConfiguration](#postgresql-k8s-enterprisedb-io-v1-BootstrapConfiguration) + +

BootstrapPgBaseBackup contains the configuration required to take +a physical backup of an existing PostgreSQL cluster

+ + + + + + + + + + + + + + + + + +
FieldDescription
source [Required]
+string +
+

The name of the server of which we need to take a physical backup

+
database
+string +
+

Name of the database used by the application. Default: app.

+
owner
+string +
+

Name of the owner of the database in the instance to be used +by applications. Defaults to the value of the database key.

+
secret
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference +
+

Name of the secret containing the initial credentials for the +owner of the user database. If empty a new secret will be +created from scratch

+
+ +
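The following is a hedged sketch of bootstrapping from a physical backup of another instance; it assumes an `externalClusters` entry (see `ExternalCluster` below) and uses placeholder names and secrets:

```yaml
spec:
  bootstrap:
    pg_basebackup:
      # Name of the externalClusters entry to clone from
      source: source-cluster
      database: app
      owner: app
  externalClusters:
    - name: source-cluster
      connectionParameters:
        host: source-cluster-rw
        user: streaming_replica
        sslmode: verify-full
      # Client certificate, key, and CA used for the streaming connection
      sslCert:
        name: source-cluster-replication
        key: tls.crt
      sslKey:
        name: source-cluster-replication
        key: tls.key
      sslRootCert:
        name: source-cluster-ca
        key: ca.crt
```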
+ +## BootstrapRecovery + +**Appears in:** + +- [BootstrapConfiguration](#postgresql-k8s-enterprisedb-io-v1-BootstrapConfiguration) + +

BootstrapRecovery contains the configuration required to restore +from an existing cluster using 3 methodologies: external cluster, +volume snapshots or backup objects. Full recovery and Point-In-Time +Recovery are supported. +The method can also be used to create clusters in continuous recovery +(replica clusters), also supporting cascading replication when instances > 1. +Once the cluster exits recovery, the password for the superuser +will be changed through the provided secret. +Refer to the Bootstrap page of the documentation for more information.

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
backup
+BackupSource +
+

The backup object containing the physical base backup from which to +initiate the recovery procedure. +Mutually exclusive with source and volumeSnapshots.

+
source
+string +
+

The external cluster whose backup we will restore. This is also +used as the name of the folder under which the backup is stored, +so it must be set to the name of the source cluster +Mutually exclusive with backup.

+
volumeSnapshots
+DataSource +
+

The static PVC data source(s) from which to initiate the +recovery procedure. Currently supporting VolumeSnapshot +and PersistentVolumeClaim resources that map an existing +PVC group, compatible with EDB Postgres for Kubernetes, and taken with +a cold backup copy on a fenced Postgres instance (a limitation +that will be removed in the future once online backup support +is implemented). +Mutually exclusive with backup.

+
recoveryTarget
+RecoveryTarget +
+

By default, the recovery process applies all the available +WAL files in the archive (full recovery). However, you can also +end the recovery as soon as a consistent state is reached or +recover to a point-in-time (PITR) by specifying a RecoveryTarget object, +as expected by PostgreSQL (i.e., timestamp, transaction Id, LSN, ...). +More info: https://www.postgresql.org/docs/current/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET

+
database
+string +
+

Name of the database used by the application. Default: app.

+
owner
+string +
+

Name of the owner of the database in the instance to be used +by applications. Defaults to the value of the database key.

+
secret
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference +
+

Name of the secret containing the initial credentials for the +owner of the user database. If empty a new secret will be +created from scratch

+
+ +
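A possible recovery bootstrap referencing a `Backup` object is sketched below; the `recoveryTarget` fields (such as `targetTime`) belong to the `RecoveryTarget` type, and the backup name is a placeholder:

```yaml
spec:
  bootstrap:
    recovery:
      # Backup object to restore from (mutually exclusive with source
      # and volumeSnapshots)
      backup:
        name: backup-example
      # Optional Point-In-Time Recovery target
      recoveryTarget:
        targetTime: "2024-01-01 12:00:00+00"
```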
+ +## CatalogImage + +**Appears in:** + +- [ImageCatalogSpec](#postgresql-k8s-enterprisedb-io-v1-ImageCatalogSpec) + +

CatalogImage defines the image and major version

+ + + + + + + + + + + +
FieldDescription
image [Required]
+string +
+

The image reference

+
major [Required]
+int +
+

The PostgreSQL major version of the image. Must be unique within the catalog.

+
+ +
+ +## CertificatesConfiguration + +**Appears in:** + +- [CertificatesStatus](#postgresql-k8s-enterprisedb-io-v1-CertificatesStatus) + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

CertificatesConfiguration contains the needed configurations to handle server certificates.

+ + + + + + + + + + + + + + + + + + + + +
FieldDescription
serverCASecret
+string +
+

The secret containing the Server CA certificate. If not defined, a new secret will be created +with a self-signed CA and will be used to generate the TLS certificate ServerTLSSecret. + +Contains: + +

+
    +
  • ca.crt: CA that should be used to validate the server certificate, +used as sslrootcert in client connection strings.
  • +
  • ca.key: key used to generate Server SSL certs, if ServerTLSSecret is provided, +this can be omitted.
  • +
+
serverTLSSecret
+string +
+

The secret of type kubernetes.io/tls containing the server TLS certificate and key that will be set as +ssl_cert_file and ssl_key_file so that clients can connect to postgres securely. +If not defined, ServerCASecret must provide also ca.key and a new secret will be +created using the provided CA.

+
replicationTLSSecret
+string +
+

The secret of type kubernetes.io/tls containing the client certificate to authenticate as +the streaming_replica user. +If not defined, ClientCASecret must provide also ca.key, and a new secret will be +created using the provided CA.

+
clientCASecret
+string +
+

The secret containing the Client CA certificate. If not defined, a new secret will be created +with a self-signed CA and will be used to generate all the client certificates. + +Contains: + +

+
    +
  • ca.crt: CA that should be used to validate the client certificates, +used as ssl_ca_file of all the instances.
  • +
  • ca.key: key used to generate client certificates, if ReplicationTLSSecret is provided, +this can be omitted.
  • +
+
serverAltDNSNames
+[]string +
+

The list of the server alternative DNS names to be added to the generated server TLS certificates, when required.

+
+ +
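As an illustrative sketch, user-provided certificates could be wired in as follows (all secret names and DNS names are placeholders):

```yaml
spec:
  certificates:
    # CA and server certificate used by PostgreSQL for TLS
    serverCASecret: my-server-ca
    serverTLSSecret: my-server-tls
    # CA and client certificate used to authenticate streaming_replica
    clientCASecret: my-client-ca
    replicationTLSSecret: my-replication-tls
    serverAltDNSNames:
      - cluster-example.internal.example.com
```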
+ +## CertificatesStatus + +**Appears in:** + +- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus) + +

CertificatesStatus contains configuration certificates and related expiration dates.

+ + + + + + + + + + + +
FieldDescription
CertificatesConfiguration
+CertificatesConfiguration +
(Members of CertificatesConfiguration are embedded into this type.) +

Needed configurations to handle server certificates, initialized with default values, if needed.

+
expirations
+map[string]string +
+

Expiration dates for all certificates.

+
+ +
+ +## ClusterMonitoringTLSConfiguration + +**Appears in:** + +- [MonitoringConfiguration](#postgresql-k8s-enterprisedb-io-v1-MonitoringConfiguration) + +

ClusterMonitoringTLSConfiguration is the type containing the TLS configuration +for the cluster's monitoring

+ + + + + + + + +
FieldDescription
enabled
+bool +
+

Enable TLS for the monitoring endpoint. +Changing this option will force a rollout of all instances.

+
+ +
+ +## ClusterSpec + +**Appears in:** + +- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster) + +

ClusterSpec defines the desired state of Cluster

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
description
+string +
+

Description of this PostgreSQL cluster

+
inheritedMetadata
+EmbeddedObjectMetadata +
+

Metadata that will be inherited by all objects related to the Cluster

+
imageName
+string +
+

Name of the container image, supporting both tags (<image>:<tag>) +and digests for deterministic and repeatable deployments +(<image>:<tag>@sha256:<digestValue>)

+
imageCatalogRef
+ImageCatalogRef +
+

Defines the major PostgreSQL version we want to use within an ImageCatalog

+
imagePullPolicy
+core/v1.PullPolicy +
+

Image pull policy. +One of Always, Never or IfNotPresent. +If not defined, it defaults to IfNotPresent. +Cannot be updated. +More info: https://kubernetes.io/docs/concepts/containers/images#updating-images

+
schedulerName
+string +
+

If specified, the pod will be dispatched by specified Kubernetes +scheduler. If not specified, the pod will be dispatched by the default +scheduler. More info: +https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/

+
postgresUID
+int64 +
+

The UID of the postgres user inside the image, defaults to 26

+
postgresGID
+int64 +
+

The GID of the postgres user inside the image, defaults to 26

+
instances [Required]
+int +
+

Number of instances required in the cluster

+
minSyncReplicas
+int +
+

Minimum number of instances required in synchronous replication with the +primary. Undefined or 0 allow writes to complete when no standby is +available.

+
maxSyncReplicas
+int +
+

The target value for the synchronous replication quorum, that can be +decreased if the number of ready standbys is lower than this. +Undefined or 0 disable synchronous replication.

+
postgresql
+PostgresConfiguration +
+

Configuration of the PostgreSQL server

+
replicationSlots
+ReplicationSlotsConfiguration +
+

Replication slots management configuration

+
bootstrap
+BootstrapConfiguration +
+

Instructions to bootstrap this cluster

+
replica
+ReplicaClusterConfiguration +
+

Replica cluster configuration

+
superuserSecret
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference +
+

The secret containing the superuser password. If not defined a new +secret will be created with a randomly generated password

+
enableSuperuserAccess
+bool +
+

When this option is enabled, the operator will use the SuperuserSecret +to update the postgres user password (if the secret is +not present, the operator will automatically create one). When this +option is disabled, the operator will ignore the SuperuserSecret content, delete +it when automatically created, and then blank the password of the postgres +user by setting it to NULL. Disabled by default.

+
certificates
+CertificatesConfiguration +
+

The configuration for the CA and related certificates

+
imagePullSecrets
+[]github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference +
+

The list of pull secrets to be used to pull the images. If the license key +contains a pull secret, that secret will be automatically included.

+
storage
+StorageConfiguration +
+

Configuration of the storage of the instances

+
serviceAccountTemplate
+ServiceAccountTemplate +
+

Configure the generation of the service account

+
walStorage
+StorageConfiguration +
+

Configuration of the storage for PostgreSQL WAL (Write-Ahead Log)

+
ephemeralVolumeSource
+core/v1.EphemeralVolumeSource +
+

EphemeralVolumeSource allows the user to configure the source of ephemeral volumes.

+
startDelay
+int32 +
+

The time in seconds that is allowed for a PostgreSQL instance to +successfully start up (default 3600). +The startup probe failure threshold is derived from this value using the formula: +ceiling(startDelay / 10).

+
stopDelay
+int32 +
+

The time in seconds that is allowed for a PostgreSQL instance to +gracefully shutdown (default 1800)

+
smartStopDelay
+int32 +
+

Deprecated: please use SmartShutdownTimeout instead

+
smartShutdownTimeout
+int32 +
+

The time in seconds that controls the window of time reserved for the smart shutdown of Postgres to complete. +Make sure you reserve enough time for the operator to request a fast shutdown of Postgres +(that is: stopDelay - smartShutdownTimeout).

+
switchoverDelay
+int32 +
+

The time in seconds that is allowed for a primary PostgreSQL instance +to gracefully shutdown during a switchover. +Default value is 3600 seconds (1 hour).

+
failoverDelay
+int32 +
+

The amount of time (in seconds) to wait before triggering a failover +after the primary PostgreSQL instance in the cluster was detected +to be unhealthy

+
livenessProbeTimeout
+int32 +
+

LivenessProbeTimeout is the time (in seconds) that is allowed for a PostgreSQL instance +to successfully respond to the liveness probe (default 30). +The Liveness probe failure threshold is derived from this value using the formula: +ceiling(livenessProbe / 10).

+
affinity
+AffinityConfiguration +
+

Affinity/Anti-affinity rules for Pods

+
topologySpreadConstraints
+[]core/v1.TopologySpreadConstraint +
+

TopologySpreadConstraints specifies how to spread matching pods among the given topology. +More info: +https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/

+
resources
+core/v1.ResourceRequirements +
+

Resources requirements of every generated Pod. Please refer to +https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ +for more information.

+
ephemeralVolumesSizeLimit
+EphemeralVolumesSizeLimitConfiguration +
+

EphemeralVolumesSizeLimit allows the user to set the limits for the ephemeral +volumes

+
priorityClassName
+string +
+

Name of the priority class which will be used in every generated Pod, if the PriorityClass +specified does not exist, the pod will not be able to schedule. Please refer to +https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass +for more information

+
primaryUpdateStrategy
+PrimaryUpdateStrategy +
+

Deployment strategy to follow to upgrade the primary server during a rolling +update procedure, after all replicas have been successfully updated: +it can be automated (unsupervised - default) or manual (supervised)

+
primaryUpdateMethod
+PrimaryUpdateMethod +
+

Method to follow to upgrade the primary server during a rolling +update procedure, after all replicas have been successfully updated: +it can be with a switchover (switchover) or in-place (restart - default)

+
backup
+BackupConfiguration +
+

The configuration to be used for backups

+
nodeMaintenanceWindow
+NodeMaintenanceWindow +
+

Define a maintenance window for the Kubernetes nodes

+
licenseKey
+string +
+

The license key of the cluster. When empty, the cluster operates in +trial mode and after the expiry date (default 30 days) the operator +will cease any reconciliation attempt. For details, please refer to +the license agreement that comes with the operator.

+
licenseKeySecret
+core/v1.SecretKeySelector +
+

The reference to the license key. When this is set, it takes precedence over LicenseKey.

+
monitoring
+MonitoringConfiguration +
+

The configuration of the monitoring infrastructure of this cluster

+
externalClusters
+[]ExternalCluster +
+

The list of external clusters which are used in the configuration

+
logLevel
+string +
+

The instances' log level, one of the following values: error, warning, info (default), debug, trace

+
projectedVolumeTemplate
+core/v1.ProjectedVolumeSource +
+

Template to be used to define projected volumes, projected volumes will be mounted +under /projected base folder

+
env
+[]core/v1.EnvVar +
+

Env follows the Env format to pass environment variables +to the pods created in the cluster

+
envFrom
+[]core/v1.EnvFromSource +
+

EnvFrom follows the EnvFrom format to pass environment variables +sources to the pods to be used by Env

+
managed
+ManagedConfiguration +
+

The configuration that is used by the portions of PostgreSQL that are managed by the instance manager

+
seccompProfile
+core/v1.SeccompProfile +
+

The SeccompProfile applied to every Pod and Container. +Defaults to: RuntimeDefault

+
tablespaces
+[]TablespaceConfiguration +
+

The tablespaces configuration

+
enablePDB
+bool +
+

Manage the PodDisruptionBudget resources within the cluster. When +configured as true (default setting), the pod disruption budgets +will safeguard the primary node from being terminated. Conversely, +setting it to false will result in the absence of any +PodDisruptionBudget resource, permitting the shutdown of all nodes +hosting the PostgreSQL cluster. This latter configuration is +advisable for any PostgreSQL cluster employed for +development/staging purposes.

+
plugins
+[]PluginConfiguration +
+

The plugins configuration, containing +any plugin to be loaded with the corresponding configuration

+
probes
+ProbesConfiguration +
+

The configuration of the probes to be injected +in the PostgreSQL Pods.

+
+ +
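For orientation, a minimal `Cluster` manifest touching a few of the fields above might look like this (image tag, names, and storage size are placeholders):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  # Container image; digests are also supported for repeatable deployments
  imageName: quay.io/enterprisedb/postgresql:17.2
  storage:
    size: 1Gi
  # Blank the postgres password and delete the autogenerated secret
  enableSuperuserAccess: false
  # Roll the primary automatically after all replicas are updated
  primaryUpdateStrategy: unsupervised
```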
+ +## ClusterStatus + +**Appears in:** + +- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster) + +

ClusterStatus defines the observed state of Cluster

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
instances
+int +
+

The total number of PVC Groups detected in the cluster. It may differ from the number of existing instance pods.

+
readyInstances
+int +
+

The total number of ready instances in the cluster. It is equal to the number of ready instance pods.

+
instancesStatus
+map[PodStatus][]string +
+

InstancesStatus indicates in which status the instances are

+
instancesReportedState
+map[PodName]InstanceReportedState +
+

The reported state of the instances during the last reconciliation loop

+
managedRolesStatus
+ManagedRoles +
+

ManagedRolesStatus reports the state of the managed roles in the cluster

+
tablespacesStatus
+[]TablespaceState +
+

TablespacesStatus reports the state of the declarative tablespaces in the cluster

+
timelineID
+int +
+

The timeline of the Postgres cluster

+
topology
+Topology +
+

Instances topology.

+
latestGeneratedNode
+int +
+

ID of the latest generated node (used to avoid node name clashing)

+
currentPrimary
+string +
+

Current primary instance

+
targetPrimary
+string +
+

Target primary instance, this is different from the previous one +during a switchover or a failover

+
lastPromotionToken
+string +
+

LastPromotionToken is the last verified promotion token that +was used to promote a replica cluster

+
pvcCount
+int32 +
+

How many PVCs have been created by this cluster

+
jobCount
+int32 +
+

How many Jobs have been created by this cluster

+
danglingPVC
+[]string +
+

List of all the PVCs created by this cluster and still available +which are not attached to a Pod

+
resizingPVC
+[]string +
+

List of all the PVCs that have ResizingPVC condition.

+
initializingPVC
+[]string +
+

List of all the PVCs that are being initialized by this cluster

+
healthyPVC
+[]string +
+

List of all the PVCs not dangling nor initializing

+
unusablePVC
+[]string +
+

List of all the PVCs that are unusable because another PVC is missing

+
licenseStatus
+github.com/EnterpriseDB/cloud-native-postgres/pkg/licensekey.Status +
+

Status of the license

+
writeService
+string +
+

Current write pod

+
readService
+string +
+

Current list of read pods

+
phase
+string +
+

Current phase of the cluster

+
phaseReason
+string +
+

Reason for the current phase

+
secretsResourceVersion
+SecretsResourceVersion +
+

The list of resource versions of the secrets +managed by the operator. Every change here is done in the +interest of the instance manager, which will refresh the +secret data

+
configMapResourceVersion
+ConfigMapResourceVersion +
+

The list of resource versions of the configmaps, +managed by the operator. Every change here is done in the +interest of the instance manager, which will refresh the +configmap data

+
certificates
+CertificatesStatus +
+

The configuration for the CA and related certificates, initialized with defaults.

+
firstRecoverabilityPoint
+string +
+

The first recoverability point, stored as a date in RFC3339 format. +This field is calculated from the content of FirstRecoverabilityPointByMethod.

+

Deprecated: the field is not set for backup plugins.

+
firstRecoverabilityPointByMethod
+map[BackupMethod]meta/v1.Time +
+

The first recoverability point, stored as a date in RFC3339 format, per backup method type.

+

Deprecated: the field is not set for backup plugins.

+
lastSuccessfulBackup
+string +
+

Last successful backup, stored as a date in RFC3339 format. +This field is calculated from the content of LastSuccessfulBackupByMethod.

+

Deprecated: the field is not set for backup plugins.

+
lastSuccessfulBackupByMethod
+map[BackupMethod]meta/v1.Time +
+

Last successful backup, stored as a date in RFC3339 format, per backup method type.

+

Deprecated: the field is not set for backup plugins.

+
lastFailedBackup
+string +
+

Last failed backup, stored as a date in RFC3339 format.

+

Deprecated: the field is not set for backup plugins.

+
cloudNativePostgresqlCommitHash
+string +
+

The commit hash of the operator that is currently running

+
currentPrimaryTimestamp
+string +
+

The timestamp when the last actual promotion to primary has occurred

+
currentPrimaryFailingSinceTimestamp
+string +
+

The timestamp when the primary was detected to be unhealthy +This field is reported when .spec.failoverDelay is populated or during online upgrades

+
targetPrimaryTimestamp
+string +
+

The timestamp when the last request for a new primary has occurred

+
poolerIntegrations
+PoolerIntegrations +
+

The integration needed by poolers referencing the cluster

+
cloudNativePostgresqlOperatorHash
+string +
+

The hash of the binary of the operator

+
availableArchitectures
+[]AvailableArchitecture +
+

AvailableArchitectures reports the available architectures of a cluster

+
conditions
+[]meta/v1.Condition +
+

Conditions for cluster object

+
instanceNames
+[]string +
+

List of instance names in the cluster

+
onlineUpdateEnabled
+bool +
+

OnlineUpdateEnabled shows if the online upgrade is enabled inside the cluster

+
image
+string +
+

Image contains the image name used by the pods

+
pgDataImageInfo
+ImageInfo +
+

PGDataImageInfo contains the details of the latest image that has run on the current data directory.

+
pluginStatus
+[]PluginStatus +
+

PluginStatus is the status of the loaded plugins

+
switchReplicaClusterStatus
+SwitchReplicaClusterStatus +
+

SwitchReplicaClusterStatus is the status of the switch to replica cluster

+
demotionToken
+string +
+

DemotionToken is a JSON token containing the information +from pg_controldata such as Database system identifier, Latest checkpoint's +TimeLineID, Latest checkpoint's REDO location, Latest checkpoint's REDO +WAL file, and Time of latest checkpoint

+
systemID
+string +
+

SystemID is the latest detected PostgreSQL SystemID

+
+ +
+ +## ConfigMapResourceVersion + +**Appears in:** + +- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus) + +

ConfigMapResourceVersion is the resource versions of the config maps +managed by the operator

+ + + + + + + + +
FieldDescription
metrics
+map[string]string +
+

A map with the versions of all the config maps used to pass metrics. +Map keys are the config map names, map values are the versions

+
+ +
+ +## DataDurabilityLevel + +(Alias of `string`) + +**Appears in:** + +- [SynchronousReplicaConfiguration](#postgresql-k8s-enterprisedb-io-v1-SynchronousReplicaConfiguration) + +

DataDurabilityLevel specifies how strictly to enforce synchronous replication +when cluster instances are unavailable. Options are required or preferred.

+ +
+ +## DataSource + +**Appears in:** + +- [BootstrapRecovery](#postgresql-k8s-enterprisedb-io-v1-BootstrapRecovery) + +

DataSource contains the configuration required to bootstrap a +PostgreSQL cluster from an existing storage

+ + + + + + + + + + + + + + +
FieldDescription
storage [Required]
+core/v1.TypedLocalObjectReference +
+

Configuration of the storage of the instances

+
walStorage
+core/v1.TypedLocalObjectReference +
+

Configuration of the storage for PostgreSQL WAL (Write-Ahead Log)

+
tablespaceStorage
+map[string]core/v1.TypedLocalObjectReference +
+

Configuration of the storage for PostgreSQL tablespaces

+
+ +
+ +## DatabaseObjectSpec + +**Appears in:** + +- [ExtensionSpec](#postgresql-k8s-enterprisedb-io-v1-ExtensionSpec) + +- [SchemaSpec](#postgresql-k8s-enterprisedb-io-v1-SchemaSpec) + +

DatabaseObjectSpec contains the fields which are common to every +database object

+ + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+

Name of the extension/schema

+
ensure
+EnsureOption +
+

Specifies whether an extension/schema should be present or absent in +the database. If set to present, the extension/schema will be +created if it does not exist. If set to absent, the +extension/schema will be removed if it exists.

+
+ +
+ +## DatabaseObjectStatus + +**Appears in:** + +- [DatabaseStatus](#postgresql-k8s-enterprisedb-io-v1-DatabaseStatus) + +

DatabaseObjectStatus is the status of the managed database objects

+ + + + + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+

The name of the object

+
applied [Required]
+bool +
+

True if the object has been installed successfully in +the database

+
message
+string +
+

Message is the object reconciliation message

+
+ +
+ +## DatabaseReclaimPolicy + +(Alias of `string`) + +**Appears in:** + +- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec) + +

DatabaseReclaimPolicy describes a policy for end-of-life maintenance of databases.

+ +
+ +## DatabaseRoleRef + +**Appears in:** + +- [TablespaceConfiguration](#postgresql-k8s-enterprisedb-io-v1-TablespaceConfiguration) + +

DatabaseRoleRef is a reference to a role available inside PostgreSQL

+ + + + + + + + +
FieldDescription
name
+string +
+ No description provided.
+ +
+ +## DatabaseSpec + +**Appears in:** + +- [Database](#postgresql-k8s-enterprisedb-io-v1-Database) + +

DatabaseSpec is the specification of a Postgresql Database, built around the +CREATE DATABASE, ALTER DATABASE, and DROP DATABASE SQL commands of +PostgreSQL.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
cluster [Required]
+core/v1.LocalObjectReference +
+

The name of the PostgreSQL cluster hosting the database.

+
ensure
+EnsureOption +
+

Ensure the PostgreSQL database is present or absent - defaults to "present".

+
name [Required]
+string +
+

The name of the database to create inside PostgreSQL. This setting cannot be changed.

+
owner [Required]
+string +
+

Maps to the OWNER parameter of CREATE DATABASE. +Maps to the OWNER TO command of ALTER DATABASE. +The role name of the user who owns the database inside PostgreSQL.

+
template
+string +
+

Maps to the TEMPLATE parameter of CREATE DATABASE. This setting +cannot be changed. The name of the template from which to create +this database.

+
encoding
+string +
+

Maps to the ENCODING parameter of CREATE DATABASE. This setting +cannot be changed. Character set encoding to use in the database.

+
locale
+string +
+

Maps to the LOCALE parameter of CREATE DATABASE. This setting +cannot be changed. Sets the default collation order and character +classification in the new database.

+
localeProvider
+string +
+

Maps to the LOCALE_PROVIDER parameter of CREATE DATABASE. This +setting cannot be changed. This option sets the locale provider for +databases created in the new cluster. Available from PostgreSQL 16.

+
localeCollate
+string +
+

Maps to the LC_COLLATE parameter of CREATE DATABASE. This +setting cannot be changed.

+
localeCType
+string +
+

Maps to the LC_CTYPE parameter of CREATE DATABASE. This setting +cannot be changed.

+
icuLocale
+string +
+

Maps to the ICU_LOCALE parameter of CREATE DATABASE. This +setting cannot be changed. Specifies the ICU locale when the ICU +provider is used. This option requires localeProvider to be set to +icu. Available from PostgreSQL 15.

+
icuRules
+string +
+

Maps to the ICU_RULES parameter of CREATE DATABASE. This setting +cannot be changed. Specifies additional collation rules to customize +the behavior of the default collation. This option requires +localeProvider to be set to icu. Available from PostgreSQL 16.

+
builtinLocale
+string +
+

Maps to the BUILTIN_LOCALE parameter of CREATE DATABASE. This +setting cannot be changed. Specifies the locale name when the +builtin provider is used. This option requires localeProvider to +be set to builtin. Available from PostgreSQL 17.

+
collationVersion
+string +
+

Maps to the COLLATION_VERSION parameter of CREATE DATABASE. This +setting cannot be changed.

+
isTemplate
+bool +
+

Maps to the IS_TEMPLATE parameter of CREATE DATABASE and ALTER DATABASE. If true, this database is considered a template and can +be cloned by any user with CREATEDB privileges.

+
allowConnections
+bool +
+

Maps to the ALLOW_CONNECTIONS parameter of CREATE DATABASE and +ALTER DATABASE. If false then no one can connect to this database.

+
connectionLimit
+int +
+

Maps to the CONNECTION LIMIT clause of CREATE DATABASE and +ALTER DATABASE. How many concurrent connections can be made to +this database. -1 (the default) means no limit.

+
tablespace
+string +
+

Maps to the TABLESPACE parameter of CREATE DATABASE. +Maps to the SET TABLESPACE command of ALTER DATABASE. +The name of the tablespace (in PostgreSQL) that will be associated +with the new database. This tablespace will be the default +tablespace used for objects created in this database.

+
databaseReclaimPolicy
+DatabaseReclaimPolicy +
+

The policy for end-of-life maintenance of this database.

+
schemas
+[]SchemaSpec +
+

The list of schemas to be managed in the database

+
extensions
+[]ExtensionSpec +
+

The list of extensions to be managed in the database

+
+ +
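A hedged example of a `Database` resource using the fields above (cluster, schema, and extension names are illustrative):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Database
metadata:
  name: database-example
spec:
  # Cluster hosting the database
  cluster:
    name: cluster-example
  # Database name and owner inside PostgreSQL
  name: app
  owner: app
  encoding: UTF8
  # Declaratively managed schemas and extensions
  schemas:
    - name: reporting
      ensure: present
  extensions:
    - name: pg_stat_statements
      ensure: present
```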
+ +## DatabaseStatus + +**Appears in:** + +- [Database](#postgresql-k8s-enterprisedb-io-v1-Database) + +

DatabaseStatus defines the observed state of Database

+ + + + + + + + + + + + + + + + + + + + +
FieldDescription
observedGeneration
+int64 +
+

A sequence number representing the latest +desired state that was synchronized

+
applied
+bool +
+

Applied is true if the database was reconciled correctly

+
message
+string +
+

Message is the reconciliation output message

+
schemas
+[]DatabaseObjectStatus +
+

Schemas is the status of the managed schemas

+
extensions
+[]DatabaseObjectStatus +
+

Extensions is the status of the managed extensions

+
+ +
+ +## EPASConfiguration + +**Appears in:** + +- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration) + +

EPASConfiguration contains EDB Postgres Advanced Server specific configurations

+ + + + + + + + + + + +
FieldDescription
audit
+bool +
+

If true enables edb_audit logging

+
tde
+TDEConfiguration +
+

TDE configuration

+
+ +
+ +## EmbeddedObjectMetadata + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

EmbeddedObjectMetadata contains metadata to be inherited by all resources related to a Cluster

+ + + + + + + + + + + +
FieldDescription
labels
+map[string]string +
+ No description provided.
annotations
+map[string]string +
+ No description provided.
+ +
+ +## EnsureOption + +(Alias of `string`) + +**Appears in:** + +- [DatabaseObjectSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseObjectSpec) + +- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec) + +- [RoleConfiguration](#postgresql-k8s-enterprisedb-io-v1-RoleConfiguration) + +

EnsureOption represents whether we should enforce the presence or absence of +a Role in a PostgreSQL instance

+ +
+ +## EphemeralVolumesSizeLimitConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

EphemeralVolumesSizeLimitConfiguration contains the configuration of the ephemeral +storage

+ + + + + + + + + + + +
FieldDescription
shm
+k8s.io/apimachinery/pkg/api/resource.Quantity +
+

Shm is the size limit of the shared memory volume

+
temporaryData
+k8s.io/apimachinery/pkg/api/resource.Quantity +
+

TemporaryData is the size limit of the temporary data volume

+
+ +
+ +## ExtensionConfiguration + +**Appears in:** + +- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration) + +

ExtensionConfiguration is the configuration used to add +PostgreSQL extensions to the Cluster.

+ + + + + + + + + + + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+

The name of the extension, required

+
image [Required]
+core/v1.ImageVolumeSource +
+

The image containing the extension, required

+
extension_control_path
+[]string +
+

The list of directories inside the image which should be added to extension_control_path. +If not defined, defaults to "/share".

+
dynamic_library_path
+[]string +
+

The list of directories inside the image which should be added to dynamic_library_path. +If not defined, defaults to "/lib".

+
ld_library_path
+[]string +
+

The list of directories inside the image which should be added to ld_library_path.

+
+ +
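A rough sketch of declaring an extension image follows; it assumes the `extensions` list lives under `spec.postgresql` and that the image volume is specified through the `reference` field of `core/v1.ImageVolumeSource` (the image reference is a placeholder):

```yaml
spec:
  postgresql:
    extensions:
      - name: postgis
        # Image volume containing the extension files
        image:
          reference: registry.example.com/extensions/postgis:17
        # Directories appended to the corresponding PostgreSQL settings
        extension_control_path:
          - /share
        dynamic_library_path:
          - /lib
```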
+ +## ExtensionSpec + +**Appears in:** + +- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec) + +

ExtensionSpec configures an extension in a database

+ + + + + + + + + + + + + + +
FieldDescription
DatabaseObjectSpec
+DatabaseObjectSpec +
(Members of DatabaseObjectSpec are embedded into this type.) +

Common fields

+
version [Required]
+string +
+

The version of the extension to install. If empty, the operator will +install the default version (whatever is specified in the +extension's control file)

+
schema [Required]
+string +
+

The name of the schema in which to install the extension's objects, +in case the extension allows its contents to be relocated. If not +specified (default), and the extension's control file does not +specify a schema either, the current default object creation schema +is used.

+
+ +
+ +## ExternalCluster + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

ExternalCluster represents the connection parameters to an +external cluster which is used in the other sections of the configuration

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+

The server name, required

+
connectionParameters
+map[string]string +
+

The list of connection parameters, such as dbname, host, username, etc

+
sslCert
+core/v1.SecretKeySelector +
+

The reference to an SSL certificate to be used to connect to this +instance

+
sslKey
+core/v1.SecretKeySelector +
+

The reference to an SSL private key to be used to connect to this +instance

+
sslRootCert
+core/v1.SecretKeySelector +
+

The reference to an SSL CA public key to be used to connect to this +instance

+
password
+core/v1.SecretKeySelector +
+

The reference to the password to be used to connect to the server. +If a password is provided, EDB Postgres for Kubernetes creates a PostgreSQL +passfile at /controller/external/NAME/pass (where "NAME" is the +cluster's name). This passfile is automatically referenced in the +connection string when establishing a connection to the remote +PostgreSQL server from the current PostgreSQL Cluster. This ensures +secure and efficient password management for external clusters.

+
barmanObjectStore
+github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanObjectStoreConfiguration +
+

The configuration for the barman-cloud tool suite

+
plugin [Required]
+PluginConfiguration +
+

The configuration of the plugin that is taking care +of WAL archiving and backups for this external cluster

+
+ +
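An illustrative `externalClusters` entry combining connection parameters with a password secret might look like this (host, user, and secret names are placeholders):

```yaml
spec:
  externalClusters:
    - name: source-db
      connectionParameters:
        host: source-db.example.com
        user: postgres
        dbname: app
      # The operator writes a passfile under /controller/external/source-db/pass
      password:
        name: source-db-credentials
        key: password
```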
+ +## FailoverQuorumStatus + +**Appears in:** + +- [FailoverQuorum](#postgresql-k8s-enterprisedb-io-v1-FailoverQuorum) + +

FailoverQuorumStatus is the latest observed status of the failover +quorum of the PG cluster.

+ + + + + + + + + + + + + + + + + +
FieldDescription
method
+string +
+

Contains the latest reported Method value.

+
standbyNames
+[]string +
+

StandbyNames is the list of potentially synchronous +instance names.

+
standbyNumber
+int +
+

StandbyNumber is the number of synchronous standbys that transactions +need to wait for replies from.

+
primary
+string +
+

Primary is the name of the primary instance that updated +this object the latest time.

+
+ +
+ +## ImageCatalogRef + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

ImageCatalogRef defines the reference to a major version in an ImageCatalog

+ + + + + + + + + + + +
FieldDescription
TypedLocalObjectReference
+core/v1.TypedLocalObjectReference +
(Members of TypedLocalObjectReference are embedded into this type.) + No description provided.
major [Required]
+int +
+

The major version of PostgreSQL we want to use from the ImageCatalog

+
+ +
+ +## ImageCatalogSpec + +**Appears in:** + +- [ClusterImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ClusterImageCatalog) + +- [ImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ImageCatalog) + +

ImageCatalogSpec defines the desired ImageCatalog

+ + + + + + + + +
FieldDescription
images [Required]
+[]CatalogImage +
+

List of CatalogImages available in the catalog

+
+ +
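As a sketch, an `ImageCatalog` listing two major versions could look like the following (image references are placeholders):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ImageCatalog
metadata:
  name: postgresql-catalog
spec:
  images:
    # Each major version must be unique within the catalog
    - major: 16
      image: quay.io/enterprisedb/postgresql:16.6
    - major: 17
      image: quay.io/enterprisedb/postgresql:17.2
```

A `Cluster` could then select a major version from it through `imageCatalogRef` (see `ImageCatalogRef` above):

```yaml
spec:
  imageCatalogRef:
    apiGroup: postgresql.k8s.enterprisedb.io
    kind: ImageCatalog
    name: postgresql-catalog
    major: 17
```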
+ +## ImageInfo + +**Appears in:** + +- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus) + +

ImageInfo contains the information about a PostgreSQL image

+ + + + + + + + + + + +
FieldDescription
image [Required]
+string +
+

Image is the image name

+
majorVersion [Required]
+int +
+

MajorVersion is the major version of the image

+
+ +
+ +## Import + +**Appears in:** + +- [BootstrapInitDB](#postgresql-k8s-enterprisedb-io-v1-BootstrapInitDB) + +

Import contains the configuration to initialize a database from a logical snapshot of an externalCluster

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
source [Required]
+ImportSource +
+

The source of the import

+
type [Required]
+SnapshotType +
+

The import type. Can be microservice or monolith.

+
databases [Required]
+[]string +
+

The databases to import

+
roles
+[]string +
+

The roles to import

+
postImportApplicationSQL
+[]string +
+

List of SQL queries to be executed as a superuser in the application +database right after it is imported - to be used with extreme care +(by default empty). Only available in microservice type.

+
schemaOnly
+bool +
+

When set to true, only the pre-data and post-data sections of +pg_restore are invoked, avoiding data import. Default: false.

+
pgDumpExtraOptions
+[]string +
+

List of custom options to pass to the pg_dump command. IMPORTANT: +Use these options with caution and at your own risk, as the operator +does not validate their content. Be aware that certain options may +conflict with the operator's intended functionality or design.

+
pgRestoreExtraOptions
+[]string +
+

List of custom options to pass to the pg_restore command. IMPORTANT: +Use these options with caution and at your own risk, as the operator +does not validate their content. Be aware that certain options may +conflict with the operator's intended functionality or design.

+
+ +
+ +## ImportSource + +**Appears in:** + +- [Import](#postgresql-k8s-enterprisedb-io-v1-Import) + +

ImportSource describes the source for the logical snapshot

+ + + + + + + + +
FieldDescription
externalCluster [Required]
+string +
+

The name of the externalCluster used for import

+
+ +
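A hedged sketch of a microservice-type import driven by the fields above; the external cluster, database, and secret names are placeholders:

```yaml
spec:
  bootstrap:
    initdb:
      database: app
      owner: app
      import:
        # Logical import via pg_dump/pg_restore
        type: microservice
        databases:
          - app
        source:
          externalCluster: source-db
  externalClusters:
    - name: source-db
      connectionParameters:
        host: source-db.example.com
        user: postgres
        dbname: app
      password:
        name: source-db-credentials
        key: password
```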
+ +## InstanceID + +**Appears in:** + +- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus) + +

InstanceID contains the information to identify an instance

+ + + + + + + + + + + +
FieldDescription
podName
+string +
+

The pod name

+
ContainerID
+string +
+

The container ID

+
+ +
+ +## InstanceReportedState + +**Appears in:** + +- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus) + +

InstanceReportedState describes the last reported state of an instance during a reconciliation loop

+ + + + + + + + + + + + + + +
FieldDescription
isPrimary [Required]
+bool +
+

indicates if an instance is the primary one

+
timeLineID
+int +
+

indicates the TimelineID the instance is currently on

+
ip [Required]
+string +
+

IP address of the instance

+
+ +
+ +## IsolationCheckConfiguration + +**Appears in:** + +- [LivenessProbe](#postgresql-k8s-enterprisedb-io-v1-LivenessProbe) + +

IsolationCheckConfiguration contains the configuration for the isolation check +functionality in the liveness probe

+ + + + + + + + + + + + + + +
FieldDescription
enabled
+bool +
+

Whether primary isolation checking is enabled for the liveness probe

+
requestTimeout
+int +
+

Timeout in milliseconds for requests during the primary isolation check

+
connectionTimeout
+int +
+

Timeout in milliseconds for connections during the primary isolation check

+
+ +
+ +## LDAPBindAsAuth + +**Appears in:** + +- [LDAPConfig](#postgresql-k8s-enterprisedb-io-v1-LDAPConfig) + +

LDAPBindAsAuth provides the required fields to use the +bind authentication for LDAP

+ + + + + + + + + + + +
FieldDescription
prefix
+string +
+

Prefix for the bind authentication option

+
suffix
+string +
+

Suffix for the bind authentication option

+
+ +
+ +## LDAPBindSearchAuth + +**Appears in:** + +- [LDAPConfig](#postgresql-k8s-enterprisedb-io-v1-LDAPConfig) + +

LDAPBindSearchAuth provides the required fields to use +the bind+search LDAP authentication process

+ + + + + + + + + + + + + + + + + + + + +
FieldDescription
baseDN
+string +
+

Root DN to begin the user search

+
bindDN
+string +
+

DN of the user to bind to the directory

+
bindPassword
+core/v1.SecretKeySelector +
+

Secret with the password for the user to bind to the directory

+
searchAttribute
+string +
+

Attribute to match against the username

+
searchFilter
+string +
+

Search filter to use when doing the search+bind authentication

+
+ +
+ +## LDAPConfig + +**Appears in:** + +- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration) + +

LDAPConfig contains the parameters needed for LDAP authentication

+ + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
server
+string +
+

LDAP hostname or IP address

+
port
+int +
+

LDAP server port

+
scheme
+LDAPScheme +
+

LDAP scheme to be used, possible options are ldap and ldaps

+
bindAsAuth
+LDAPBindAsAuth +
+

Bind as authentication configuration

+
bindSearchAuth
+LDAPBindSearchAuth +
+

Bind+Search authentication configuration

+
tls
+bool +
+

Set to 'true' to enable LDAP over TLS. Defaults to 'false'

+
+ +
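For illustration, a bind+search LDAP setup could be expressed as follows, assuming the configuration is placed under `spec.postgresql.ldap` (server, DNs, and secret names are placeholders):

```yaml
spec:
  postgresql:
    ldap:
      server: ldap.example.com
      port: 636
      scheme: ldaps
      bindSearchAuth:
        baseDN: ou=people,dc=example,dc=com
        bindDN: cn=admin,dc=example,dc=com
        # Secret holding the password for the bind user
        bindPassword:
          name: ldap-bind-secret
          key: password
        searchAttribute: uid
```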
+ +## LDAPScheme + +(Alias of `string`) + +**Appears in:** + +- [LDAPConfig](#postgresql-k8s-enterprisedb-io-v1-LDAPConfig) + +

LDAPScheme defines the possible schemes for LDAP

+ +
+ +## LivenessProbe + +**Appears in:** + +- [ProbesConfiguration](#postgresql-k8s-enterprisedb-io-v1-ProbesConfiguration) + +

LivenessProbe is the configuration of the liveness probe

+ + + + + + + + + + + +
FieldDescription
Probe
+Probe +
(Members of Probe are embedded into this type.) +

Probe is the standard probe configuration

+
isolationCheck
+IsolationCheckConfiguration +
+

Configure the feature that extends the liveness probe for a primary +instance. In addition to the basic checks, this verifies whether the +primary is isolated from the Kubernetes API server and from its +replicas, ensuring that it can be safely shut down if network +partition or API unavailability is detected. Enabled by default.

+
+ +
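A hedged sketch of tuning the isolation check, assuming the liveness probe configuration is exposed under `spec.probes.liveness` (timeout values are illustrative, in milliseconds):

```yaml
spec:
  probes:
    liveness:
      isolationCheck:
        enabled: true
        requestTimeout: 1000
        connectionTimeout: 1000
```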
+ +## ManagedConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

ManagedConfiguration represents the portions of PostgreSQL that are managed +by the instance manager

+ + + + + + + + + + + +
FieldDescription
roles
+[]RoleConfiguration +
+

Database roles managed by the Cluster

+
services
+ManagedServices +
+

Services roles managed by the Cluster

+
+ +
+ +## ManagedRoles + +**Appears in:** + +- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus) + +

ManagedRoles tracks the status of a cluster's managed roles

+ + + + + + + + + + + + + + +
FieldDescription
byStatus
+map[RoleStatus][]string +
+

ByStatus gives the list of roles in each state

+
cannotReconcile
+map[string][]string +
+

CannotReconcile lists roles that cannot be reconciled in PostgreSQL, +with an explanation of the cause

+
passwordStatus
+map[string]PasswordState +
+

PasswordStatus gives the last transaction id and password secret version for each managed role

+
+ +
+ +## ManagedService + +**Appears in:** + +- [ManagedServices](#postgresql-k8s-enterprisedb-io-v1-ManagedServices) + +

ManagedService represents a specific service managed by the cluster. +It includes the type of service and its associated template specification.

+ + + + + + + + + + + + + + +
FieldDescription
selectorType [Required]
+ServiceSelectorType +
+

SelectorType specifies the type of selectors that the service will have. +Valid values are "rw", "r", and "ro", representing read-write, read, and read-only services.

+
updateStrategy
+ServiceUpdateStrategy +
+

UpdateStrategy describes how the service differences should be reconciled

+
serviceTemplate [Required]
+ServiceTemplateSpec +
+

ServiceTemplate is the template specification for the service.

+
+ +
+ +## ManagedServices + +**Appears in:** + +- [ManagedConfiguration](#postgresql-k8s-enterprisedb-io-v1-ManagedConfiguration) + +

ManagedServices represents the services managed by the cluster.

+ + + + + + + + + + + +
FieldDescription
disabledDefaultServices
+[]ServiceSelectorType +
+

DisabledDefaultServices is a list of service types that are disabled by default. +Valid values are "r", and "ro", representing read, and read-only services.

+
additional
+[]ManagedService +
+

Additional is a list of additional managed services specified by the user.

+
+ +
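An illustrative `managed.services` stanza disabling the default read services and adding a LoadBalancer-backed read-write service; the `serviceTemplate` contents follow the standard Kubernetes Service types, and names are placeholders:

```yaml
spec:
  managed:
    services:
      # Do not create the default read and read-only services
      disabledDefaultServices:
        - ro
        - r
      additional:
        - selectorType: rw
          serviceTemplate:
            metadata:
              name: cluster-example-rw-lb
            spec:
              type: LoadBalancer
```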
+ +## Metadata + +**Appears in:** + +- [PodTemplateSpec](#postgresql-k8s-enterprisedb-io-v1-PodTemplateSpec) + +- [ServiceAccountTemplate](#postgresql-k8s-enterprisedb-io-v1-ServiceAccountTemplate) + +- [ServiceTemplateSpec](#postgresql-k8s-enterprisedb-io-v1-ServiceTemplateSpec) + +

Metadata is a structure similar to the metav1.ObjectMeta, but still +parseable by controller-gen to create a suitable CRD for the user. +The comment of PodTemplateSpec has an explanation of why we are +not using the core data types.

+ + + + + + + + + + + + + + +
FieldDescription
name
+string +
+

The name of the resource. Only supported for certain types

+
labels
+map[string]string +
+

Map of string keys and values that can be used to organize and categorize +(scope and select) objects. May match selectors of replication controllers +and services. +More info: http://kubernetes.io/docs/user-guide/labels

+
annotations
+map[string]string +
+

Annotations is an unstructured key value map stored with a resource that may be +set by external tools to store and retrieve arbitrary metadata. They are not +queryable and should be preserved when modifying objects. +More info: http://kubernetes.io/docs/user-guide/annotations

+
+ +
+ +## MonitoringConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

MonitoringConfiguration is the type containing all the monitoring +configuration for a certain cluster

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
disableDefaultQueries
+bool +
+

Whether the default queries should be injected. +Set it to true if you don't want to inject default queries into the cluster. +Default: false.

+
customQueriesConfigMap
+[]github.com/cloudnative-pg/machinery/pkg/api.ConfigMapKeySelector +
+

The list of config maps containing the custom queries

+
customQueriesSecret
+[]github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector +
+

The list of secrets containing the custom queries

+
enablePodMonitor
+bool +
+

Enable or disable the PodMonitor

+
tls
+ClusterMonitoringTLSConfiguration +
+

Configure TLS communication for the metrics endpoint. +Changing tls.enabled option will force a rollout of all instances.

+
podMonitorMetricRelabelings
+[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig +
+

The list of metric relabelings for the PodMonitor. Applied to samples before ingestion.

+
podMonitorRelabelings
+[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig +
+

The list of relabelings for the PodMonitor. Applied to samples before scraping.

+
+ +
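A sketch of a possible `monitoring` stanza using the fields above; the ConfigMap reference and the relabeling rule are illustrative assumptions:

```yaml
spec:
  monitoring:
    enablePodMonitor: true
    disableDefaultQueries: false
    customQueriesConfigMap:
      - name: example-monitoring
        key: custom-queries
    podMonitorMetricRelabelings:
      # Keep only metrics whose name starts with cnp_
      - sourceLabels: [__name__]
        regex: cnp_.*
        action: keep
```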
+ +## NodeMaintenanceWindow + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

NodeMaintenanceWindow contains information that the operator +will use while upgrading the underlying node.

+

This option is only useful when the chosen storage prevents the Pods +from being freely moved across nodes.

+ + + + + + + + + + + +
FieldDescription
reusePVC
+bool +
+

Reuse the existing PVC (wait for the node to come +up again) or not (recreate it elsewhere - when instances >1)

+
inProgress
+bool +
+

Is there a node maintenance activity in progress?

+
+ +
+ +## OnlineConfiguration + +**Appears in:** + +- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec) + +- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec) + +- [VolumeSnapshotConfiguration](#postgresql-k8s-enterprisedb-io-v1-VolumeSnapshotConfiguration) + +

OnlineConfiguration contains the configuration parameters for the online volume snapshot

+ + + + + + + + + + + +
FieldDescription
waitForArchive
+bool +
+

If false, the function will return immediately after the backup is completed, +without waiting for WAL to be archived. +This behavior is only useful with backup software that independently monitors WAL archiving. +Otherwise, WAL required to make the backup consistent might be missing and make the backup useless. +By default, or when this parameter is true, pg_backup_stop will wait for WAL to be archived when archiving is +enabled. +On a standby, this means that it will wait only when archive_mode = always. +If write activity on the primary is low, it may be useful to run pg_switch_wal on the primary in order to trigger +an immediate segment switch.

+
immediateCheckpoint
+bool +
+

Control whether the I/O workload for the backup initial checkpoint will +be limited, according to the checkpoint_completion_target setting on +the PostgreSQL server. If set to true, an immediate checkpoint will be +used, meaning PostgreSQL will complete the checkpoint as soon as +possible. false by default.

+
+ +
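For illustration, these parameters could be set in the `backup.volumeSnapshot.onlineConfiguration` stanza of a `Cluster` (or overridden per `Backup`/`ScheduledBackup`); the snapshot class name is an assumption:

```yaml
spec:
  backup:
    volumeSnapshot:
      className: csi-snapclass
      online: true
      onlineConfiguration:
        immediateCheckpoint: true
        waitForArchive: true
```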
+ +## PasswordState + +**Appears in:** + +- [ManagedRoles](#postgresql-k8s-enterprisedb-io-v1-ManagedRoles) + +

PasswordState represents the state of the password of a managed RoleConfiguration

+ + + + + + + + + + + +
FieldDescription
transactionID
+int64 +
+

the last transaction ID to affect the role definition in PostgreSQL

+
resourceVersion
+string +
+

the resource version of the password secret

+
+ +
+ +## PgBouncerIntegrationStatus + +**Appears in:** + +- [PoolerIntegrations](#postgresql-k8s-enterprisedb-io-v1-PoolerIntegrations) + +

PgBouncerIntegrationStatus encapsulates the needed integration for the pgbouncer poolers referencing the cluster

+ + + + + + + + +
FieldDescription
secrets
+[]string +
+ No description provided.
+ +
+ +## PgBouncerPoolMode + +(Alias of `string`) + +**Appears in:** + +- [PgBouncerSpec](#postgresql-k8s-enterprisedb-io-v1-PgBouncerSpec) + +

PgBouncerPoolMode is the mode of PgBouncer

+ +
+ +## PgBouncerSecrets + +**Appears in:** + +- [PoolerSecrets](#postgresql-k8s-enterprisedb-io-v1-PoolerSecrets) + +

PgBouncerSecrets contains the versions of the secrets used +by pgbouncer

+ + + + + + + + +
FieldDescription
authQuery
+SecretVersion +
+

The auth query secret version

+
+ +
+ +## PgBouncerSpec + +**Appears in:** + +- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec) + +

PgBouncerSpec defines how to configure PgBouncer

+ + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
poolMode
+PgBouncerPoolMode +
+

The pool mode. Default: session.

+
authQuerySecret
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference +
+

The credentials of the user to be used for the authentication query. If specified, an AuthQuery (e.g. "SELECT usename, passwd FROM pg_catalog.pg_shadow WHERE usename=$1") must also be specified, and no automatic CNP Cluster integration will be triggered.

+
authQuery
+string +
+

The query used to download the hash of the password of a certain user. Default: "SELECT usename, passwd FROM public.user_search($1)". If specified, an AuthQuerySecret must also be specified, and no automatic CNP Cluster integration will be triggered.

+
parameters
+map[string]string +
+

Additional parameters to be passed to PgBouncer - please check +the CNP documentation for a list of options you can configure

+
pg_hba
+[]string +
+

PostgreSQL Host Based Authentication rules (lines to be appended +to the pg_hba.conf file)

+
paused
+bool +
+

When set to true, PgBouncer will disconnect from the PostgreSQL +server, first waiting for all queries to complete, and pause all new +client connections until this value is set to false (default). Internally, +the operator calls PgBouncer's PAUSE and RESUME commands.

+
+ +
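A sketch of the `pgbouncer` stanza inside a `Pooler`; the pool mode, parameters, and `pg_hba` rule are illustrative assumptions:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Pooler
metadata:
  name: pooler-example-rw
spec:
  cluster:
    name: cluster-example
  type: rw
  instances: 3
  pgbouncer:
    poolMode: transaction
    parameters:
      max_client_conn: "1000"
      default_pool_size: "10"
    pg_hba:
      - host all app 10.0.0.0/8 scram-sha-256
```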
+ +## PluginConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +- [ExternalCluster](#postgresql-k8s-enterprisedb-io-v1-ExternalCluster) + +

PluginConfiguration specifies a plugin that need to be loaded for this +cluster to be reconciled

+ + + + + + + + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+

Name is the plugin name

+
enabled
+bool +
+

Enabled is true if this plugin will be used

+
isWALArchiver
+bool +
+

Only one plugin can be declared as WALArchiver. +Cannot be active if ".spec.backup.barmanObjectStore" configuration is present.

+
parameters
+map[string]string +
+

Parameters is the configuration of the plugin

+
+ +
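A sketch of how a plugin might be declared under `spec.plugins` in a `Cluster`; the plugin name and its parameters are assumptions for illustration only:

```yaml
spec:
  plugins:
    - name: barman-cloud.cloudnative-pg.io  # illustrative plugin name
      enabled: true
      isWALArchiver: true
      parameters:
        barmanObjectName: object-store-example
```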
+ +## PluginStatus + +**Appears in:** + +- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus) + +

PluginStatus is the status of a loaded plugin

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+

Name is the name of the plugin

+
version [Required]
+string +
+

Version is the version of the plugin loaded by the +latest reconciliation loop

+
capabilities
+[]string +
+

Capabilities are the list of capabilities of the +plugin

+
operatorCapabilities
+[]string +
+

OperatorCapabilities are the list of capabilities of the +plugin regarding the reconciler

+
walCapabilities
+[]string +
+

WALCapabilities are the list of capabilities of the +plugin regarding the WAL management

+
backupCapabilities
+[]string +
+

BackupCapabilities are the list of capabilities of the +plugin regarding the Backup management

+
restoreJobHookCapabilities
+[]string +
+

RestoreJobHookCapabilities are the list of capabilities of the +plugin regarding the RestoreJobHook management

+
status
+string +
+

Status contain the status reported by the plugin through the SetStatusInCluster interface

+
+ +
+ +## PodTemplateSpec + +**Appears in:** + +- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec) + +

PodTemplateSpec is a structure allowing the user to set +a template for Pod generation.

+

Unfortunately we can't use the corev1.PodTemplateSpec +type because the generated CRD won't have the field for the +metadata section.

+

References: +https://github.com/kubernetes-sigs/controller-tools/issues/385 +https://github.com/kubernetes-sigs/controller-tools/issues/448 +https://github.com/prometheus-operator/prometheus-operator/issues/3041

+ + + + + + + + + + + +
FieldDescription
metadata
+Metadata +
+

Standard object's metadata. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

+
spec
+core/v1.PodSpec +
+

Specification of the desired behavior of the pod. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

+
+ +
+ +## PodTopologyLabels + +(Alias of `map[string]string`) + +**Appears in:** + +- [Topology](#postgresql-k8s-enterprisedb-io-v1-Topology) + +

PodTopologyLabels represent the topology of a Pod. map[labelName]labelValue

+ +
+ +## PoolerIntegrations + +**Appears in:** + +- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus) + +

PoolerIntegrations encapsulates the needed integration for the poolers referencing the cluster

+ + + + + + + + +
FieldDescription
pgBouncerIntegration
+PgBouncerIntegrationStatus +
+ No description provided.
+ +
+ +## PoolerMonitoringConfiguration + +**Appears in:** + +- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec) + +

PoolerMonitoringConfiguration is the type containing all the monitoring +configuration for a certain Pooler.

+

Mirrors the Cluster's MonitoringConfiguration but without the custom queries +part for now.

+ + + + + + + + + + + + + + +
FieldDescription
enablePodMonitor
+bool +
+

Enable or disable the PodMonitor

+
podMonitorMetricRelabelings
+[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig +
+

The list of metric relabelings for the PodMonitor. Applied to samples before ingestion.

+
podMonitorRelabelings
+[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig +
+

The list of relabelings for the PodMonitor. Applied to samples before scraping.

+
+ +
+ +## PoolerSecrets + +**Appears in:** + +- [PoolerStatus](#postgresql-k8s-enterprisedb-io-v1-PoolerStatus) + +

PoolerSecrets contains the versions of all the secrets used

+ + + + + + + + + + + + + + + + + +
FieldDescription
serverTLS
+SecretVersion +
+

The server TLS secret version

+
serverCA
+SecretVersion +
+

The server CA secret version

+
clientCA
+SecretVersion +
+

The client CA secret version

+
pgBouncerSecrets
+PgBouncerSecrets +
+

The version of the secrets used by PgBouncer

+
+ +
+ +## PoolerSpec + +**Appears in:** + +- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler) + +

PoolerSpec defines the desired state of Pooler

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
cluster [Required]
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference +
+

This is the cluster reference on which the Pooler will work. The Pooler name should never match any cluster name within the same namespace.

+
type
+PoolerType +
+

Type of service to forward traffic to. Default: rw.

+
instances
+int32 +
+

The number of replicas we want. Default: 1.

+
template
+PodTemplateSpec +
+

The template of the Pod to be created

+
pgbouncer [Required]
+PgBouncerSpec +
+

The PgBouncer configuration

+
deploymentStrategy
+apps/v1.DeploymentStrategy +
+

The deployment strategy to use for pgbouncer to replace existing pods with new ones

+
monitoring
+PoolerMonitoringConfiguration +
+

The configuration of the monitoring infrastructure of this pooler.

+
serviceTemplate
+ServiceTemplateSpec +
+

Template for the Service to be created

+
+ +
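A sketch of a complete `Pooler` exercising the optional monitoring and service template fields; all names and values are illustrative assumptions:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Pooler
metadata:
  name: pooler-example-ro
spec:
  cluster:
    name: cluster-example
  type: ro
  instances: 2
  pgbouncer:
    poolMode: session
  monitoring:
    enablePodMonitor: true
  serviceTemplate:
    metadata:
      labels:
        app: pooler
    spec:
      type: LoadBalancer
```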
+ +## PoolerStatus + +**Appears in:** + +- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler) + +

PoolerStatus defines the observed state of Pooler

+ + + + + + + + + + + +
FieldDescription
secrets
+PoolerSecrets +
+

The resource version of the config object

+
instances
+int32 +
+

The number of pods trying to be scheduled

+
+ +
+ +## PoolerType + +(Alias of `string`) + +**Appears in:** + +- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec) + +

PoolerType is the type of the connection pool, meaning the service +we are targeting. Allowed values are rw and ro.

+ +
+ +## PostgresConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

PostgresConfiguration defines the PostgreSQL configuration

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
parameters
+map[string]string +
+

PostgreSQL configuration options (postgresql.conf)

+
synchronous
+SynchronousReplicaConfiguration +
+

Configuration of the PostgreSQL synchronous replication feature

+
pg_hba
+[]string +
+

PostgreSQL Host Based Authentication rules (lines to be appended +to the pg_hba.conf file)

+
pg_ident
+[]string +
+

PostgreSQL User Name Maps rules (lines to be appended +to the pg_ident.conf file)

+
epas
+EPASConfiguration +
+

EDB Postgres Advanced Server specific configurations

+
syncReplicaElectionConstraint
+SyncReplicaElectionConstraints +
+

Requirements to be met by sync replicas. This will affect how the "synchronous_standby_names" parameter will be +set up.

+
shared_preload_libraries
+[]string +
+

Lists of shared preload libraries to add to the default ones

+
ldap
+LDAPConfig +
+

Options to specify LDAP configuration

+
promotionTimeout
+int32 +
+

Specifies the maximum number of seconds to wait when promoting an instance to primary. +Default value is 40000000, greater than one year in seconds, +big enough to simulate an infinite timeout

+
enableAlterSystem
+bool +
+

If this parameter is true, the user will be able to invoke ALTER SYSTEM +on this EDB Postgres for Kubernetes Cluster. +This should only be used for debugging and troubleshooting. +Defaults to false.

+
extensions
+[]ExtensionConfiguration +
+

The configuration of the extensions to be added

+
+ +
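A sketch of a `postgresql` stanza combining several of the fields above; parameter values are illustrative assumptions:

```yaml
spec:
  postgresql:
    parameters:
      max_connections: "200"
      shared_buffers: 256MB
    shared_preload_libraries:
      - pg_stat_statements
    pg_hba:
      - host all app 10.0.0.0/8 scram-sha-256
    synchronous:
      method: any
      number: 1
```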
+ +## PrimaryUpdateMethod + +(Alias of `string`) + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

PrimaryUpdateMethod contains the method to use when upgrading +the primary server of the cluster as part of rolling updates

+ +
+ +## PrimaryUpdateStrategy + +(Alias of `string`) + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

PrimaryUpdateStrategy contains the strategy to follow when upgrading +the primary server of the cluster as part of rolling updates

+ +
+ +## Probe + +**Appears in:** + +- [LivenessProbe](#postgresql-k8s-enterprisedb-io-v1-LivenessProbe) + +- [ProbeWithStrategy](#postgresql-k8s-enterprisedb-io-v1-ProbeWithStrategy) + +

Probe describes a health check to be performed against a container to determine whether it is +alive or ready to receive traffic.

+ + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
initialDelaySeconds
+int32 +
+

Number of seconds after the container has started before liveness probes are initiated. +More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

+
timeoutSeconds
+int32 +
+

Number of seconds after which the probe times out. +Defaults to 1 second. Minimum value is 1. +More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

+
periodSeconds
+int32 +
+

How often (in seconds) to perform the probe. +Default to 10 seconds. Minimum value is 1.

+
successThreshold
+int32 +
+

Minimum consecutive successes for the probe to be considered successful after having failed. +Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.

+
failureThreshold
+int32 +
+

Minimum consecutive failures for the probe to be considered failed after having succeeded. +Defaults to 3. Minimum value is 1.

+
terminationGracePeriodSeconds
+int64 +
+

Optional duration in seconds the pod needs to terminate gracefully upon probe failure. +The grace period is the duration in seconds after the processes running in the pod are sent +a termination signal and the time when the processes are forcibly halted with a kill signal. +Set this value longer than the expected cleanup time for your process. +If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this +value overrides the value provided by the pod spec. +Value must be non-negative integer. The value zero indicates stop immediately via +the kill signal (no opportunity to shut down). +This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. +Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.

+
+ +
+ +## ProbeStrategyType + +(Alias of `string`) + +**Appears in:** + +- [ProbeWithStrategy](#postgresql-k8s-enterprisedb-io-v1-ProbeWithStrategy) + +

ProbeStrategyType is the type of the strategy used to declare a PostgreSQL instance +ready

+ +
+ +## ProbeWithStrategy + +**Appears in:** + +- [ProbesConfiguration](#postgresql-k8s-enterprisedb-io-v1-ProbesConfiguration) + +

ProbeWithStrategy is the configuration of the startup probe

+ + + + + + + + + + + + + + +
FieldDescription
Probe
+Probe +
(Members of Probe are embedded into this type.) +

Probe is the standard probe configuration

+
type
+ProbeStrategyType +
+

The probe strategy

+
maximumLag
+k8s.io/apimachinery/pkg/api/resource.Quantity +
+

Lag limit. Used only for streaming strategy

+
+ +
+ +## ProbesConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

ProbesConfiguration represent the configuration for the probes +to be injected in the PostgreSQL Pods

+ + + + + + + + + + + + + + +
FieldDescription
startup [Required]
+ProbeWithStrategy +
+

The startup probe configuration

+
liveness [Required]
+LivenessProbe +
+

The liveness probe configuration

+
readiness [Required]
+ProbeWithStrategy +
+

The readiness probe configuration

+
+ +
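A sketch of the `probes` stanza using the streaming strategy with a lag cap for startup and readiness; thresholds and lag values are illustrative assumptions:

```yaml
spec:
  probes:
    startup:
      type: streaming
      maximumLag: 16Mi
      failureThreshold: 30
      periodSeconds: 10
    readiness:
      type: streaming
      maximumLag: 32Mi
    liveness:
      periodSeconds: 10
```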
+ +## PublicationReclaimPolicy + +(Alias of `string`) + +**Appears in:** + +- [PublicationSpec](#postgresql-k8s-enterprisedb-io-v1-PublicationSpec) + +

PublicationReclaimPolicy defines a policy for end-of-life maintenance of Publications.

+ +
+ +## PublicationSpec + +**Appears in:** + +- [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication) + +

PublicationSpec defines the desired state of Publication

+ + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
cluster [Required]
+core/v1.LocalObjectReference +
+

The name of the PostgreSQL cluster that identifies the "publisher"

+
name [Required]
+string +
+

The name of the publication inside PostgreSQL

+
dbname [Required]
+string +
+

The name of the database where the publication will be installed in +the "publisher" cluster

+
parameters
+map[string]string +
+

Publication parameters part of the WITH clause as expected by +PostgreSQL CREATE PUBLICATION command

+
target [Required]
+PublicationTarget +
+

Target of the publication as expected by PostgreSQL CREATE PUBLICATION command

+
publicationReclaimPolicy
+PublicationReclaimPolicy +
+

The policy for end-of-life maintenance of this publication

+
+ +
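A sketch of a `Publication` resource built from these fields; cluster, database, and table names are illustrative assumptions:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Publication
metadata:
  name: pub-example
spec:
  cluster:
    name: cluster-source
  name: pub
  dbname: app
  publicationReclaimPolicy: delete
  target:
    objects:
      - table:
          schema: public
          name: numbers
```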
+ +## PublicationStatus + +**Appears in:** + +- [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication) + +

PublicationStatus defines the observed state of Publication

+ + + + + + + + + + + + + + +
FieldDescription
observedGeneration
+int64 +
+

A sequence number representing the latest +desired state that was synchronized

+
applied
+bool +
+

Applied is true if the publication was reconciled correctly

+
message
+string +
+

Message is the reconciliation output message

+
+ +
+ +## PublicationTarget + +**Appears in:** + +- [PublicationSpec](#postgresql-k8s-enterprisedb-io-v1-PublicationSpec) + +

PublicationTarget is what this publication should publish

+ + + + + + + + + + + +
FieldDescription
allTables
+bool +
+

Marks the publication as one that replicates changes for all tables +in the database, including tables created in the future. +Corresponding to FOR ALL TABLES in PostgreSQL.

+
objects
+[]PublicationTargetObject +
+

Just the following schema objects

+
+ +
+ +## PublicationTargetObject + +**Appears in:** + +- [PublicationTarget](#postgresql-k8s-enterprisedb-io-v1-PublicationTarget) + +

PublicationTargetObject is an object to publish

+ + + + + + + + + + + +
FieldDescription
tablesInSchema
+string +
+

Marks the publication as one that replicates changes for all tables +in the specified list of schemas, including tables created in the +future. Corresponding to FOR TABLES IN SCHEMA in PostgreSQL.

+
table
+PublicationTargetTable +
+

Specifies a list of tables to add to the publication. Corresponding +to FOR TABLE in PostgreSQL.

+
+ +
+ +## PublicationTargetTable + +**Appears in:** + +- [PublicationTargetObject](#postgresql-k8s-enterprisedb-io-v1-PublicationTargetObject) + +

PublicationTargetTable is a table to publish

+ + + + + + + + + + + + + + + + + +
FieldDescription
only
+bool +
+

Whether to limit to the table only or include all its descendants

+
name [Required]
+string +
+

The table name

+
schema
+string +
+

The schema name

+
columns
+[]string +
+

The columns to publish

+
+ +
+ +## RecoveryTarget + +**Appears in:** + +- [BootstrapRecovery](#postgresql-k8s-enterprisedb-io-v1-BootstrapRecovery) + +

RecoveryTarget allows you to configure the point at which the recovery process stops. All the target options except TargetTLI are mutually exclusive.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
backupID
+string +
+

The ID of the backup from which to start the recovery process. +If empty (default) the operator will automatically detect the backup +based on targetTime or targetLSN if specified. Otherwise use the +latest available backup in chronological order.

+
targetTLI
+string +
+

The target timeline ("latest" or a positive integer)

+
targetXID
+string +
+

The target transaction ID

+
targetName
+string +
+

The target name (to be previously created +with pg_create_restore_point)

+
targetLSN
+string +
+

The target LSN (Log Sequence Number)

+
targetTime
+string +
+

The target time as a timestamp in the RFC3339 standard

+
targetImmediate
+bool +
+

End recovery as soon as a consistent state is reached

+
exclusive
+bool +
+

Set the target to be exclusive. If omitted, defaults to false, so that +in Postgres, recovery_target_inclusive will be true

+
+ +
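A sketch of point-in-time recovery using a recovery target inside `bootstrap.recovery`; the source name and timestamp are illustrative assumptions (all target options except `targetTLI` are mutually exclusive):

```yaml
spec:
  bootstrap:
    recovery:
      source: origin-cluster
      recoveryTarget:
        # Stop recovery at this point in time (RFC3339)
        targetTime: "2025-01-01T10:00:00Z"
```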
+ +## ReplicaClusterConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

ReplicaClusterConfiguration encapsulates the configuration of a replica +cluster

+ + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
self
+string +
+

Self defines the name of this cluster. It is used to determine if this is a primary +or a replica cluster, comparing it with primary

+
primary
+string +
+

Primary defines which Cluster is defined to be the primary in the distributed PostgreSQL cluster, based on the +topology specified in externalClusters

+
source [Required]
+string +
+

The name of the external cluster which is the replication origin

+
enabled
+bool +
+

If replica mode is enabled, this cluster will be a replica of an +existing cluster. Replica cluster can be created from a recovery +object store or via streaming through pg_basebackup. +Refer to the Replica clusters page of the documentation for more information.

+
promotionToken
+string +
+

A demotion token generated by an external cluster used to +check if the promotion requirements are met.

+
minApplyDelay
+meta/v1.Duration +
+

When replica mode is enabled, this parameter allows you to replay +transactions only when the system time is at least the configured +time past the commit time. This provides an opportunity to correct +data loss errors. Note that when this parameter is set, a promotion +token cannot be used.

+
+ +
+ +## ReplicationSlotsConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

ReplicationSlotsConfiguration encapsulates the configuration +of replication slots

+ + + + + + + + + + + + + + +
FieldDescription
highAvailability
+ReplicationSlotsHAConfiguration +
+

Replication slots for high availability configuration

+
updateInterval
+int +
+

Standby will update the status of the local replication slots +every updateInterval seconds (default 30).

+
synchronizeReplicas
+SynchronizeReplicasConfiguration +
+

Configures the synchronization of the user defined physical replication slots

+
+ +
+ +## ReplicationSlotsHAConfiguration + +**Appears in:** + +- [ReplicationSlotsConfiguration](#postgresql-k8s-enterprisedb-io-v1-ReplicationSlotsConfiguration) + +

ReplicationSlotsHAConfiguration encapsulates the configuration +of the replication slots that are automatically managed by +the operator to control the streaming replication connections +with the standby instances for high availability (HA) purposes. +Replication slots are a PostgreSQL feature that makes sure +that PostgreSQL automatically keeps WAL files in the primary +when a streaming client (in this specific case a replica that +is part of the HA cluster) gets disconnected.

+ + + + + + + + + + + + + + +
FieldDescription
enabled
+bool +
+

If enabled (default), the operator will automatically manage replication slots +on the primary instance and use them in streaming replication +connections with all the standby instances that are part of the HA +cluster. If disabled, the operator will not take advantage +of replication slots in streaming connections with the replicas. +This feature also controls replication slots in replica cluster, +from the designated primary to its cascading replicas.

+
slotPrefix
+string +
+

Prefix for replication slots managed by the operator for HA. +It may only contain lower case letters, numbers, and the underscore character. +This can only be set at creation time. By default set to _cnp_.

+
synchronizeLogicalDecoding
+bool +
+

When enabled, the operator automatically manages synchronization of logical +decoding (replication) slots across high-availability clusters.

+

Requires one of the following conditions:

+
    +
  • PostgreSQL version 17 or later
  • PostgreSQL version < 17 with pg_failover_slots extension enabled
+
+ +
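A sketch of a `replicationSlots` stanza combining the HA settings above with user-defined slot synchronization; the prefix and the exclude pattern are illustrative assumptions:

```yaml
spec:
  replicationSlots:
    updateInterval: 30
    highAvailability:
      enabled: true
      slotPrefix: _cnp_
    synchronizeReplicas:
      enabled: true
      excludePatterns:
        - "^test_"
```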
+ +## RoleConfiguration + +**Appears in:** + +- [ManagedConfiguration](#postgresql-k8s-enterprisedb-io-v1-ManagedConfiguration) + +

RoleConfiguration is the representation, in Kubernetes, of a PostgreSQL role +with the additional field Ensure specifying whether to ensure the presence or +absence of the role in the database

+

The defaults of the CREATE ROLE command are applied +Reference: https://www.postgresql.org/docs/current/sql-createrole.html

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+

Name of the role

+
comment
+string +
+

Description of the role

+
ensure
+EnsureOption +
+

Ensure the role is present or absent - defaults to "present"

+
passwordSecret
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference +
+

Secret containing the password of the role (if present) +If null, the password will be ignored unless DisablePassword is set

+
connectionLimit
+int64 +
+

If the role can log in, this specifies how many concurrent +connections the role can make. -1 (the default) means no limit.

+
validUntil
+meta/v1.Time +
+

Date and time after which the role's password is no longer valid. +When omitted, the password will never expire (default).

+
inRoles
+[]string +
+

List of one or more existing roles to which this role will be +immediately added as a new member. Default empty.

+
inherit
+bool +
+

Whether a role "inherits" the privileges of roles it is a member of. Default is true.

+
disablePassword
+bool +
+

DisablePassword indicates that a role's password should be set to NULL in Postgres

+
superuser
+bool +
+

Whether the role is a superuser who can override all access restrictions within the database. Superuser status is dangerous and should be used only when really needed. You must yourself be a superuser to create a new superuser. Default is false.

+
createdb
+bool +
+

When set to true, the role being defined will be allowed to create +new databases. Specifying false (default) will deny a role the +ability to create databases.

+
createrole
+bool +
+

Whether the role will be permitted to create, alter, drop, comment +on, change the security label for, and grant or revoke membership in +other roles. Default is false.

+
login
+bool +
+

Whether the role is allowed to log in. A role having the login +attribute can be thought of as a user. Roles without this attribute +are useful for managing database privileges, but are not users in +the usual sense of the word. Default is false.

+
replication
+bool +
+

Whether a role is a replication role. A role must have this +attribute (or be a superuser) in order to be able to connect to the +server in replication mode (physical or logical replication) and in +order to be able to create or drop replication slots. A role having +the replication attribute is a very highly privileged role, and +should only be used on roles actually used for replication. Default +is false.

+
bypassrls
+bool +
+

Whether a role bypasses every row-level security (RLS) policy. +Default is false.

+
+ +
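A sketch of declarative role management under `managed.roles`; the role name, membership, and secret are illustrative assumptions:

```yaml
spec:
  managed:
    roles:
      - name: dante
        ensure: present
        comment: Dante Alighieri
        login: true
        inherit: true
        connectionLimit: 10
        inRoles:
          - pg_monitor
        passwordSecret:
          name: cluster-example-dante
```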
+ +## SQLRefs + +**Appears in:** + +- [BootstrapInitDB](#postgresql-k8s-enterprisedb-io-v1-BootstrapInitDB) + +

SQLRefs holds references to ConfigMaps or Secrets +containing SQL files. The references are processed in a specific order: +first, all Secrets are processed, followed by all ConfigMaps. +Within each group, the processing order follows the sequence specified +in their respective arrays.

+ + + + + + + + + + + +
FieldDescription
secretRefs
+[]github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector +
+

SecretRefs holds a list of references to Secrets

+
configMapRefs
+[]github.com/cloudnative-pg/machinery/pkg/api.ConfigMapKeySelector +
+

ConfigMapRefs holds a list of references to ConfigMaps

+
+ +
+ +## ScheduledBackupSpec + +**Appears in:** + +- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup) + +

ScheduledBackupSpec defines the desired state of ScheduledBackup

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
suspend
+bool +
+

If this backup is suspended or not

+
immediate
+bool +
+

Whether the first backup has to be started immediately after creation or not

+
schedule [Required]
+string +
+

The schedule does not follow the same format used in Kubernetes CronJobs +as it includes an additional seconds specifier, +see https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format

+
cluster [Required]
+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference +
+

The cluster to backup

+
backupOwnerReference
+string +
+

Indicates which ownerReference should be put inside the created backup resources.

+
    +
  • none: no owner reference for created backup objects (same behavior as before the field was introduced)
  • self: sets the ScheduledBackup object as owner of the backup
  • cluster: sets the cluster as owner of the backup
+
target
+BackupTarget +
+

The policy to decide which instance should perform this backup. If empty, +it defaults to cluster.spec.backup.target. +Available options are empty string, primary and prefer-standby. +primary to have backups run always on primary instances, +prefer-standby to have backups run preferably on the most updated +standby, if available.

+
method
+BackupMethod +
+

The backup method to be used, possible options are barmanObjectStore, +volumeSnapshot or plugin. Defaults to: barmanObjectStore.

+
pluginConfiguration
+BackupPluginConfiguration +
+

Configuration parameters passed to the plugin managing this backup

+
online
+bool +
+

Whether the default type of backup with volume snapshots is +online/hot (true, default) or offline/cold (false) +Overrides the default setting specified in the cluster field '.spec.backup.volumeSnapshot.online'

+
onlineConfiguration
+OnlineConfiguration +
+

Configuration parameters to control the online/hot backup with volume snapshots +Overrides the default settings specified in the cluster '.backup.volumeSnapshot.onlineConfiguration' stanza

+
+ +
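A sketch of a `ScheduledBackup` built from these fields; note the six-field cron expression with a leading seconds specifier (values are illustrative assumptions):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ScheduledBackup
metadata:
  name: backup-example
spec:
  schedule: "0 0 0 * * *"  # seconds minutes hours day month weekday
  backupOwnerReference: self
  immediate: true
  method: barmanObjectStore
  cluster:
    name: cluster-example
```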
+ +## ScheduledBackupStatus + +**Appears in:** + +- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup) + +

ScheduledBackupStatus defines the observed state of ScheduledBackup

+ + + + + + + + + + + + + + +
FieldDescription
lastCheckTime
+meta/v1.Time +
+

The latest time the schedule was checked

+
lastScheduleTime
+meta/v1.Time +
+

Information when was the last time that backup was successfully scheduled.

+
nextScheduleTime
+meta/v1.Time +
+

Next time we will run a backup

+
+ +
+ +## SchemaSpec + +**Appears in:** + +- [DatabaseSpec](#postgresql-k8s-enterprisedb-io-v1-DatabaseSpec) + +

SchemaSpec configures a schema in a database

+ + + + + + + + + + + +
FieldDescription
DatabaseObjectSpec
+DatabaseObjectSpec +
(Members of DatabaseObjectSpec are embedded into this type.) +

Common fields

+
owner [Required]
+string +
+

The role name of the user who owns the schema inside PostgreSQL. +It maps to the AUTHORIZATION parameter of CREATE SCHEMA and the +OWNER TO command of ALTER SCHEMA.

+
+ +
+ +## SecretVersion + +**Appears in:** + +- [PgBouncerSecrets](#postgresql-k8s-enterprisedb-io-v1-PgBouncerSecrets) + +- [PoolerSecrets](#postgresql-k8s-enterprisedb-io-v1-PoolerSecrets) + +

SecretVersion contains a secret name and its ResourceVersion

+ + + + + + + + + + + +
FieldDescription
name
+string +
+

The name of the secret

+
version
+string +
+

The ResourceVersion of the secret

+
+ +
+ +## SecretsResourceVersion + +**Appears in:** + +- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus) + +

SecretsResourceVersion is the resource versions of the secrets +managed by the operator

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
superuserSecretVersion
+string +
+

The resource version of the "postgres" user secret

+
replicationSecretVersion
+string +
+

The resource version of the "streaming_replica" user secret

+
applicationSecretVersion
+string +
+

The resource version of the "app" user secret

+
managedRoleSecretVersion
+map[string]string +
+

The resource versions of the managed roles secrets

+
caSecretVersion
+string +
+

Unused. Retained for compatibility with old versions.

+
clientCaSecretVersion
+string +
+

The resource version of the PostgreSQL client-side CA secret

+
serverCaSecretVersion
+string +
+

The resource version of the PostgreSQL server-side CA secret

+
serverSecretVersion
+string +
+

The resource version of the PostgreSQL server-side secret

+
barmanEndpointCA
+string +
+

The resource version of the Barman Endpoint CA if provided

+
externalClusterSecretVersion
+map[string]string +
+

The resource versions of the external cluster secrets

+
metrics
+map[string]string +
+

A map with the versions of all the secrets used to pass metrics. +Map keys are the secret names, map values are the versions

+
+ +
+ +## ServiceAccountTemplate + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

ServiceAccountTemplate contains the template needed to generate the service accounts

+ + + + + + + + +
FieldDescription
metadata [Required]
+Metadata +
+

Metadata are the metadata to be used for the generated +service account

+
+ +
+ +## ServiceSelectorType + +(Alias of `string`) + +**Appears in:** + +- [ManagedService](#postgresql-k8s-enterprisedb-io-v1-ManagedService) + +- [ManagedServices](#postgresql-k8s-enterprisedb-io-v1-ManagedServices) + +

ServiceSelectorType describes a valid value for generating the service selectors. +It indicates which type of service the selector applies to, such as read-write, read, or read-only

+ +
+ +## ServiceTemplateSpec + +**Appears in:** + +- [ManagedService](#postgresql-k8s-enterprisedb-io-v1-ManagedService) + +- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec) + +

ServiceTemplateSpec is a structure allowing the user to set +a template for Service generation.

+ + + + + + + + + + + +
FieldDescription
metadata
+Metadata +
+

Standard object's metadata. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

+
spec
+core/v1.ServiceSpec +
+

Specification of the desired behavior of the service. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

+
+ +
+ +## ServiceUpdateStrategy + +(Alias of `string`) + +**Appears in:** + +- [ManagedService](#postgresql-k8s-enterprisedb-io-v1-ManagedService) + +

ServiceUpdateStrategy describes how the changes to the managed service should be handled

+ +
+ +## SnapshotOwnerReference + +(Alias of `string`) + +**Appears in:** + +- [VolumeSnapshotConfiguration](#postgresql-k8s-enterprisedb-io-v1-VolumeSnapshotConfiguration) + +

SnapshotOwnerReference defines the reference type for the owner of the snapshot. +This specifies which owner the processed resources should relate to.

+ +
+ +## SnapshotType + +(Alias of `string`) + +**Appears in:** + +- [Import](#postgresql-k8s-enterprisedb-io-v1-Import) + +

SnapshotType is a type of allowed import

+ +
+ +## StorageConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +- [TablespaceConfiguration](#postgresql-k8s-enterprisedb-io-v1-TablespaceConfiguration) + +

StorageConfiguration is the configuration used to create and reconcile PVCs, +usable for WAL volumes, PGDATA volumes, or tablespaces

+ + + + + + + + + + + + + + + + + +
FieldDescription
storageClass
+string +
+

StorageClass to use for PVCs. Applied after +evaluating the PVC template, if available. +If not specified, the generated PVCs will use the +default storage class

+
size
+string +
+

Size of the storage. Required if not already specified in the PVC template. +Changes to this field are automatically reapplied to the created PVCs. +Size cannot be decreased.

+
resizeInUseVolumes
+bool +
+

Resize existing PVCs. Defaults to true.

+
pvcTemplate
+core/v1.PersistentVolumeClaimSpec +
+

Template to be used to generate the Persistent Volume Claim

+
+ +
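A sketch of the same structure used both for `storage` (PGDATA) and, assuming a separate WAL volume is wanted, for `walStorage`; class names and sizes are illustrative assumptions:

```yaml
spec:
  storage:
    storageClass: standard
    size: 10Gi
    resizeInUseVolumes: true
  walStorage:
    storageClass: standard
    size: 5Gi
```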
+ +## SubscriptionReclaimPolicy + +(Alias of `string`) + +**Appears in:** + +- [SubscriptionSpec](#postgresql-k8s-enterprisedb-io-v1-SubscriptionSpec) + +

SubscriptionReclaimPolicy describes a policy for end-of-life maintenance of Subscriptions.

+ +
+ +## SubscriptionSpec + +**Appears in:** + +- [Subscription](#postgresql-k8s-enterprisedb-io-v1-Subscription) + +

SubscriptionSpec defines the desired state of Subscription

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
cluster [Required]
+core/v1.LocalObjectReference +
+

The name of the PostgreSQL cluster that identifies the "subscriber"

+
name [Required]
+string +
+

The name of the subscription inside PostgreSQL

+
dbname [Required]
+string +
+

The name of the database where the publication will be installed in +the "subscriber" cluster

+
parameters
+map[string]string +
+

Subscription parameters included in the WITH clause of the PostgreSQL +CREATE SUBSCRIPTION command. Most parameters cannot be changed +after the subscription is created and will be ignored if modified +later, except for a limited set documented at: +https://www.postgresql.org/docs/current/sql-altersubscription.html#SQL-ALTERSUBSCRIPTION-PARAMS-SET

+
publicationName [Required]
+string +
+

The name of the publication inside the PostgreSQL database in the +"publisher"

+
publicationDBName
+string +
+

The name of the database containing the publication on the external +cluster. Defaults to the one in the external cluster definition.

+
externalClusterName [Required]
+string +
+

The name of the external cluster with the publication ("publisher")

+
subscriptionReclaimPolicy
+SubscriptionReclaimPolicy +
+

The policy for end-of-life maintenance of this subscription

+
+ +
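A sketch of a `Subscription` resource; names are illustrative assumptions, and the external cluster is assumed to be declared in the subscriber `Cluster`'s `externalClusters` section:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Subscription
metadata:
  name: sub-example
spec:
  cluster:
    name: cluster-dest
  name: sub
  dbname: app
  publicationName: pub
  externalClusterName: cluster-source
  parameters:
    copy_data: "true"
  subscriptionReclaimPolicy: delete
```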
+ +## SubscriptionStatus + +**Appears in:** + +- [Subscription](#postgresql-k8s-enterprisedb-io-v1-Subscription) + +

SubscriptionStatus defines the observed state of Subscription

+ + + + + + + + + + + + + + +
FieldDescription
observedGeneration
+int64 +
+

A sequence number representing the latest +desired state that was synchronized

+
applied
+bool +
+

Applied is true if the subscription was reconciled correctly

+
message
+string +
+

Message is the reconciliation output message

+
+ +
+ +## SwitchReplicaClusterStatus + +**Appears in:** + +- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus) + +

SwitchReplicaClusterStatus contains all the statuses regarding the switch of a cluster to a replica cluster

+ + + + + + + + +
FieldDescription
inProgress
+bool +
+

InProgress indicates if there is an ongoing procedure of switching a cluster to a replica cluster.

+
+ +
+ +## SyncReplicaElectionConstraints + +**Appears in:** + +- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration) + +

SyncReplicaElectionConstraints contains the constraints for sync replicas election.

+

For anti-affinity parameters two instances are considered in the same location +if all the labels values match.

+

In the future, restricting synchronous replica election by name will be supported.

+ + + + + + + + + + + +
FieldDescription
nodeLabelsAntiAffinity
+[]string +
+

A list of node label values to extract and compare, to evaluate whether the pods reside in the same topology or not

+
enabled [Required]
+bool +
+

This flag enables the constraints for sync replicas

+
+ +
+ +## SynchronizeReplicasConfiguration + +**Appears in:** + +- [ReplicationSlotsConfiguration](#postgresql-k8s-enterprisedb-io-v1-ReplicationSlotsConfiguration) + +

SynchronizeReplicasConfiguration contains the configuration for the synchronization of user defined +physical replication slots

+ + + + + + + + + + + +
FieldDescription
enabled [Required]
+bool +
+

When set to true, every replication slot that is on the primary is synchronized on each standby

+
excludePatterns
+[]string +
+

List of regular expression patterns to match the names of replication slots to be excluded (by default empty)

+
+ +
+ +## SynchronousReplicaConfiguration + +**Appears in:** + +- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration) + +

SynchronousReplicaConfiguration contains the configuration of the +PostgreSQL synchronous replication feature. +Important: at this moment, also .spec.minSyncReplicas and .spec.maxSyncReplicas +need to be considered.

+ + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
method [Required]
+SynchronousReplicaConfigurationMethod +
+

Method to select synchronous replication standbys from the listed +servers, accepting 'any' (quorum-based synchronous replication) or +'first' (priority-based synchronous replication) as values.

+
number [Required]
+int +
+

Specifies the number of synchronous standby servers that +transactions must wait for responses from.

+
maxStandbyNamesFromCluster
+int +
+

Specifies the maximum number of local cluster pods that can be +automatically included in the synchronous_standby_names option in +PostgreSQL.

+
standbyNamesPre
+[]string +
+

A user-defined list of application names to be added to +synchronous_standby_names before local cluster pods (the order is +only useful for priority-based synchronous replication).

+
standbyNamesPost
+[]string +
+

A user-defined list of application names to be added to +synchronous_standby_names after local cluster pods (the order is +only useful for priority-based synchronous replication).

+
dataDurability
+DataDurabilityLevel +
+

If set to "required", data durability is strictly enforced. Write operations +with synchronous commit settings (on, remote_write, or remote_apply) will +block if there are insufficient healthy replicas, ensuring data persistence. +If set to "preferred", data durability is maintained when healthy replicas +are available, but the required number of instances will adjust dynamically +if replicas become unavailable. This setting relaxes strict durability enforcement +to allow for operational continuity. This setting is only applicable if both +standbyNamesPre and standbyNamesPost are unset (empty).

+
+ +
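A sketch of quorum-based synchronous replication with relaxed durability via `postgresql.synchronous`; the numbers are illustrative assumptions:

```yaml
spec:
  instances: 3
  postgresql:
    synchronous:
      method: any
      number: 1
      maxStandbyNamesFromCluster: 2
      dataDurability: preferred
```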
+ +## SynchronousReplicaConfigurationMethod + +(Alias of `string`) + +**Appears in:** + +- [SynchronousReplicaConfiguration](#postgresql-k8s-enterprisedb-io-v1-SynchronousReplicaConfiguration) + +

SynchronousReplicaConfigurationMethod configures whether to use +quorum based replication or a priority list

+ +
+ +## TDEConfiguration + +**Appears in:** + +- [EPASConfiguration](#postgresql-k8s-enterprisedb-io-v1-EPASConfiguration) + +

TDEConfiguration contains the Transparent Data Encryption configuration

+ + + + + + + + + + + + + + + + + + + + +
FieldDescription
enabled
+bool +
+

True if we want to have TDE enabled

+
secretKeyRef
+core/v1.SecretKeySelector +
+

Reference to the secret that contains the encryption key

+
wrapCommand
+core/v1.SecretKeySelector +
+

WrapCommand is the encrypt command provided by the user

+
unwrapCommand
+core/v1.SecretKeySelector +
+

UnwrapCommand is the decryption command provided by the user

+
passphraseCommand
+core/v1.SecretKeySelector +
+

PassphraseCommand is the command executed to get the passphrase that will be +passed to the OpenSSL command to encrypt and decrypt

+
+ +
+ +## TablespaceConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +

TablespaceConfiguration is the configuration of a tablespace, and includes +the storage specification for the tablespace

+ + + + + + + + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+

The name of the tablespace

+
storage [Required]
+StorageConfiguration +
+

The storage configuration for the tablespace

+
owner
+DatabaseRoleRef +
+

Owner is the PostgreSQL user owning the tablespace

+
temporary
+bool +
+

When set to true, the tablespace will be added as a temp_tablespaces +entry in PostgreSQL, and will be available to automatically house temp +database objects, or other temporary files. Please refer to PostgreSQL +documentation for more information on the temp_tablespaces GUC.

+
+ +
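A sketch of declarative tablespaces under `tablespaces`; names, owner, and sizes are illustrative assumptions:

```yaml
spec:
  tablespaces:
    - name: analytics
      owner:
        name: app
      storage:
        size: 5Gi
        storageClass: standard
    - name: scratch
      temporary: true
      storage:
        size: 1Gi
```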
+ +## TablespaceState + +**Appears in:** + +- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus) + +

TablespaceState represents the state of a tablespace in a cluster

+ + + + + + + + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+

Name is the name of the tablespace

+
owner
+string +
+

Owner is the PostgreSQL user owning the tablespace

+
state [Required]
+TablespaceStatus +
+

State is the latest reconciliation state

+
error
+string +
+

Error is the reconciliation error, if any

+
+ +
+ +## TablespaceStatus + +(Alias of `string`) + +**Appears in:** + +- [TablespaceState](#postgresql-k8s-enterprisedb-io-v1-TablespaceState) + +

TablespaceStatus represents the status of a tablespace in the cluster

+ +
+ +## Topology + +**Appears in:** + +- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus) + +

Topology contains the cluster topology

+ + + + + + + + + + + + + + +
FieldDescription
instances
+map[PodName]PodTopologyLabels +
+

Instances contains the pod topology of the instances

+
nodesUsed
+int32 +
+

NodesUsed represents the count of distinct nodes accommodating the instances. +A value of '1' suggests that all instances are hosted on a single node, +implying the absence of High Availability (HA). Ideally, this value should +be the same as the number of instances in the Postgres HA cluster, implying +shared nothing architecture on the compute side.

+
successfullyExtracted
+bool +
+

SuccessfullyExtracted indicates if the topology data was extracted. It is useful to enact fallback behaviors in synchronous replica election in case of failures.

+
+ +
+ +## VolumeSnapshotConfiguration + +**Appears in:** + +- [BackupConfiguration](#postgresql-k8s-enterprisedb-io-v1-BackupConfiguration) + +

VolumeSnapshotConfiguration represents the configuration for the execution of snapshot backups.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
labels
+map[string]string +
+

Labels are key-value pairs that will be added to the .metadata.labels of the snapshot resources.

+
annotations
+map[string]string +
+

Annotations are key-value pairs that will be added to the .metadata.annotations of the snapshot resources.

+
className
+string +
+

ClassName specifies the Snapshot Class to be used for PG_DATA PersistentVolumeClaim. +It is the default class for the other types if no specific class is present

+
walClassName
+string +
+

WalClassName specifies the Snapshot Class to be used for the PG_WAL PersistentVolumeClaim.

+
tablespaceClassName
+map[string]string +
+

TablespaceClassName specifies the Snapshot Class to be used for the tablespaces. It defaults to the PGDATA Snapshot Class, if set.

+
snapshotOwnerReference
+SnapshotOwnerReference +
+

SnapshotOwnerReference indicates the type of owner reference the snapshot should have

+
online
+bool +
+

Whether the default type of backup with volume snapshots is +online/hot (true, default) or offline/cold (false)

+
onlineConfiguration
+OnlineConfiguration +
+

Configuration parameters to control the online/hot backup with volume snapshots

+
diff --git a/product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx b/product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx index 70abb0425bc..1676b6831aa 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx @@ -380,9 +380,9 @@ Fixed rules: ```text local all all peer -hostssl postgres streaming_replica all cert -hostssl replication streaming_replica all cert -hostssl all cnp_pooler_pgbouncer all cert +hostssl postgres streaming_replica all cert map=cnp_streaming_replica +hostssl replication streaming_replica all cert map=cnp_streaming_replica +hostssl all cnp_pooler_pgbouncer all cert map=cnp_pooler_pgbouncer ``` Default rules: @@ -404,8 +404,9 @@ The resulting `pg_hba.conf` will look like this: ```text local all all peer -hostssl postgres streaming_replica all cert -hostssl replication streaming_replica all cert +hostssl postgres streaming_replica all cert map=cnp_streaming_replica +hostssl replication streaming_replica all cert map=cnp_streaming_replica +hostssl all cnp_pooler_pgbouncer all cert map=cnp_pooler_pgbouncer diff --git a/product_docs/docs/postgres_for_kubernetes/1/preview_version.mdx b/product_docs/docs/postgres_for_kubernetes/1/preview_version.mdx index c6f8ac705dd..643a31b1193 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/preview_version.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/preview_version.mdx @@ -38,11 +38,11 @@ are not backwards compatible and could be removed entirely. ## Current Preview Version -The current preview version is **1.26.0-rc3**. -The preview version is available on OpenShift. -There are two different manifests available depending on your subscription plan: +The current preview version is **1.27.0-rc1**. -* Standard: The [release candidate for the standard operator manifest](https://get.enterprisedb.io/pg4k/pg4k-standard-1.26.0-rc3.yaml). -* Enterprise: The [release candidate for the enterprise operator manifest](https://get.enterprisedb.io/pg4k/pg4k-enterprise-1.26.0-rc3.yaml). +For more information on the current preview version and how to test, please view the links below: + +- [Announcement](https://cloudnative-pg.io/releases/cloudnative-pg-1-27.0-rc1-released/) +- [Documentation](https://cloudnative-pg.io/documentation/preview/) diff --git a/product_docs/docs/postgres_for_kubernetes/1/replication.mdx b/product_docs/docs/postgres_for_kubernetes/1/replication.mdx index 7304c067bbb..4ba1c6d2ded 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/replication.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/replication.mdx @@ -85,8 +85,8 @@ following excerpt taken from `pg_hba.conf`: ``` # Require client certificate authentication for the streaming_replica user -hostssl postgres streaming_replica all cert -hostssl replication streaming_replica all cert +hostssl postgres streaming_replica all cert map=cnp_streaming_replica +hostssl replication streaming_replica all cert map=cnp_streaming_replica ``` !!! Seealso "Certificates" @@ -125,6 +125,11 @@ EDB Postgres for Kubernetes supports both details on managing this behavior, refer to the [Data Durability and Synchronous Replication](#data-durability-and-synchronous-replication) section. +!!! Important + The [*failover quorum* feature](failover.md#failover-quorum-quorum-based-failover) (experimental) + can be used alongside synchronous replication to improve data durability + and safety during failover events. 
+ Direct configuration of the `synchronous_standby_names` option is not permitted. However, EDB Postgres for Kubernetes automatically populates this option with the names of local pods, while also allowing customization to extend synchronous @@ -738,6 +743,78 @@ spec: size: 1Gi ``` +### Logical Decoding Slot Synchronization + +EDB Postgres for Kubernetes can synchronize logical decoding (replication) slots across all +nodes in a high-availability cluster, ensuring seamless continuation of logical +replication after a failover or switchover. This feature is disabled by +default, and enabling it requires two steps. + +The first step is to enable logical decoding slot synchronization: + +```yaml + # ... + replicationSlots: + highAvailability: + synchronizeLogicalDecoding: true +``` + +The second step involves configuring PostgreSQL parameters: the required +configuration depends on your PostgreSQL version, as explained below. + +When enabled, the operator automatically manages logical decoding slot states +during failover and switchover, preventing slot invalidation and avoiding data +loss for logical replication clients. + +#### Behavior on PostgreSQL 17 and later + +For PostgreSQL 17 and newer, EDB Postgres for Kubernetes transparently manages the +[`synchronized_standby_slots` parameter](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-SYNCHRONIZED-STANDBY-SLOTS). + +You must enable both `sync_replication_slots` and `hot_standby_feedback` in +your PostgreSQL configuration: + +```yaml +# ... +postgresql: + parameters: + # ... + hot_standby_feedback: 'on' + sync_replication_slots: 'on' +``` + +Additionally, you must create the logical replication `Subscription` with the +`failover` option enabled, for example: + +```yaml +apiVersion: postgresql.k8s.enterprisedb.io/v1 +kind: Subscription +# ... +spec: +# ... + parameters: + failover: 'true' +# ... +``` + +When configured, logical WAL sender processes send decoded changes to plugins +only after the specified replication slots confirm receiving and flushing the +relevant WAL, ensuring that: + +- logical replication slots do not consume changes until they are safely + received by the replicas of the publisher, and +- logical replication clients can seamlessly reconnect to a promoted standby + without missing data after failover. + +For more details on logical replication slot synchronization, see the +PostgreSQL documentation on [Logical Replication Failover](https://www.postgresql.org/docs/current/logical-replication-failover.html). + +#### Behavior on PostgreSQL 16 and earlier + +For PostgreSQL 16 and older versions, EDB Postgres for Kubernetes uses the +[`pg_failover_slots` extension](https://github.com/EnterpriseDB/pg_failover_slots) +to maintain synchronization of logical replication slots across failovers. + ### Capping the WAL size retained for replication slots When replication slots is enabled, you might end up running out of disk space diff --git a/product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx b/product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx index 918fe18ef7f..6c9c41ca1c9 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx @@ -5,29 +5,32 @@ originalFilePath: 'src/rolling_update.md' -The operator allows you to change the PostgreSQL version used in a cluster -while applications continue running against it. 
+The operator allows changing the PostgreSQL version used in a cluster while +applications are running against it. -Rolling upgrades are triggered when: +!!! Important + Only upgrades for PostgreSQL minor releases are supported. -- you change the `imageName` attribute in the cluster specification; +Rolling upgrades are started when: -- the [image catalog](image_catalog.md) is updated with a new image for the - major version used by the cluster; +- the user changes the `imageName` attribute of the cluster specification; -- a change in the PostgreSQL configuration requires a restart to apply; +- the [image catalog](image_catalog.md) is updated with a new image for the major used by the cluster; -- you change the `Cluster` `.spec.resources` values; +- a change in the PostgreSQL configuration requires a restart to be + applied; -- you resize the persistent volume claim on AKS; +- a change on the `Cluster` `.spec.resources` values -- the operator is updated, ensuring Pods run the latest instance manager - (unless [in-place updates are enabled](installation_upgrade.md#in-place-updates-of-the-instance-manager)). +- a change in size of the persistent volume claim on AKS -During a rolling upgrade, the operator upgrades all replicas one Pod at a time, -starting from the one with the highest serial. +- after the operator is updated, to ensure the Pods run the latest instance + manager (unless [in-place updates are enabled](installation_upgrade.md#in-place-updates-of-the-instance-manager)). -The primary is always the last node to be upgraded. +The operator starts upgrading all the replicas, one Pod at a time, and begins +from the one with the highest serial. + +The primary is the last node to be upgraded. Rolling updates are configurable and can be either entirely automated (`unsupervised`) or requiring human intervention (`supervised`). 
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-logical-destination.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-logical-destination.yaml index 71a6a8fe31d..86879d80e6a 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-logical-destination.yaml +++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-logical-destination.yaml @@ -39,3 +39,5 @@ spec: dbname: app publicationName: pub externalClusterName: cluster-example + parameters: + failover: 'true' diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-logical-source.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-logical-source.yaml index 06032bf3ddf..5bf2a264d61 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-logical-source.yaml +++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-logical-source.yaml @@ -3,9 +3,9 @@ kind: Cluster metadata: name: cluster-example spec: - instances: 1 + instances: 3 - imageName: quay.io/enterprisedb/postgresql:16 + imageName: quay.io/enterprisedb/postgresql:17 storage: size: 1Gi @@ -25,11 +25,21 @@ spec: - INSERT INTO another_schema.numbers_three (m) (SELECT generate_series(1,10000)) - ALTER TABLE another_schema.numbers_three OWNER TO app + replicationSlots: + highAvailability: + synchronizeLogicalDecoding: true + managed: roles: - name: app login: true replication: true + + postgresql: + parameters: + hot_standby_feedback: 'on' + sync_replication_slots: 'on' + --- apiVersion: postgresql.k8s.enterprisedb.io/v1 kind: Publication diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-quorum.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-quorum.yaml new file mode 100644 index 00000000000..16787992691 --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-syncreplicas-quorum.yaml @@ -0,0 +1,16 @@ +apiVersion: postgresql.k8s.enterprisedb.io/v1 +kind: Cluster +metadata: + name: cluster-example + annotations: + alpha.k8s.enterprisedb.io/failoverQuorum: "true" +spec: + instances: 3 + + postgresql: + synchronous: + method: any + number: 1 + + storage: + size: 1G