diff --git a/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx b/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx
index bceff3185dd..cd93f21eb6c 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx
@@ -267,6 +267,36 @@ the following parameters:
instances, you can add a label with the key `k8s.enterprisedb.io/reload` to it. Otherwise,
you must reload the instances using the `kubectl cnp reload` subcommand.
+#### Customizing the `streaming_replica` client certificate
+
+In some environments, it may not be possible to generate a certificate with the
+common name `streaming_replica`, for example because of company policies or
+security constraints such as a CA shared across multiple clusters. In such
+cases, you can use PostgreSQL's user name mapping feature to allow
+authentication as the `streaming_replica` user with certificates that carry a
+different common name.
+
+To configure this setup, add a `pg_ident.conf` entry for the predefined map
+named `cnp_streaming_replica`.
+
+For example, to enable `streaming_replica` authentication using a certificate
+with the common name `streaming-replica.cnp.svc.cluster.local`, add the
+following to your cluster definition:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+spec:
+ postgresql:
+ pg_ident:
+ - cnp_streaming_replica streaming-replica.cnp.svc.cluster.local streaming_replica
+```
+
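+The client certificate itself can be produced by any tool able to create a
+`kubernetes.io/tls` secret, which you then reference through
+`.spec.certificates.replicationTLSSecret`. As an illustrative sketch, a
+[cert-manager](https://cert-manager.io/) `Certificate` requesting the custom
+common name used above might look like the following (the issuer name is a
+placeholder):
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+  name: streaming-replica-client-cert
+spec:
+  secretName: streaming-replica-client-cert
+  commonName: streaming-replica.cnp.svc.cluster.local
+  usages:
+    - client auth
+  issuerRef:
+    name: my-postgres-issuer   # placeholder issuer
+    kind: Issuer
+```
+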
+For further details on how `pg_ident.conf` is managed by the operator, see the
+["PostgreSQL Configuration" page](postgresql_conf.md#the-pg_ident-section) in
+the documentation.
+
#### Cert-manager example
This simple example shows how to use [cert-manager](https://cert-manager.io/)
diff --git a/product_docs/docs/postgres_for_kubernetes/1/failover.mdx b/product_docs/docs/postgres_for_kubernetes/1/failover.mdx
index bb5fa67f5d6..e42809098ea 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/failover.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/failover.mdx
@@ -96,3 +96,268 @@ expected outage.
Enabling a new configuration option to delay failover provides a mechanism to
prevent premature failover for short-lived network or node instability.
+
+## Failover Quorum (Quorum-based Failover)
+
+!!! Warning
+ *Failover quorum* is an experimental feature introduced in version 1.27.0.
+ Use with caution in production environments.
+
+Failover quorum is a mechanism that enhances data durability and safety during
+failover events in EDB Postgres for Kubernetes-managed PostgreSQL clusters.
+
+Quorum-based failover allows the controller to determine whether to promote a replica
+to primary based on the state of a quorum of replicas.
+This is useful when you need stronger data durability than that offered by
+[synchronous replication](replication.md#synchronous-replication) and the
+default automated failover procedures.
+
+When synchronous replication is not enabled, some data loss is expected and
+accepted during failover, as a replica may lag behind the primary when
+promoted.
+
+With synchronous replication enabled, the application does not receive
+explicit acknowledgment that a transaction has committed until the WAL data
+is known to be safely received by all required synchronous standbys.
+However, this alone is not enough to guarantee that the operator can promote
+the most advanced replica.
+
+For example, in a three-node cluster with synchronous replication set to `ANY 1
+(...)`, data is written to the primary and one standby before a commit is
+acknowledged. If both the primary and the aligned standby become unavailable
+(such as during a network partition), the remaining replica may not have the
+latest data. Promoting it could lose some data that the application considered
+committed.
+
+Quorum-based failover addresses this risk by ensuring that failover occurs
+only when the operator can confirm that the instance to be promoted contains
+all synchronously committed data; otherwise, no failover takes place.
+
+This feature allows users to choose their preferred trade-off between data
+durability and data availability.
+
+Failover quorum can be enabled by setting the annotation
+`alpha.k8s.enterprisedb.io/failoverQuorum="true"` in the `Cluster` resource.
+
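+For example, a minimal `Cluster` manifest with the annotation set:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+  annotations:
+    alpha.k8s.enterprisedb.io/failoverQuorum: "true"
+spec:
+  instances: 3
+```
+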
+!!! info
+ When this feature is out of the experimental phase, the annotation
+ `alpha.k8s.enterprisedb.io/failoverQuorum` will be replaced by a configuration option in
+ the `Cluster` resource.
+
+### How it works
+
+Before promoting a replica to primary, the operator performs a quorum check,
+following the principles of the Dynamo `R + W > N` consistency model[^1].
+
+In quorum failover, these values have the following meaning:
+
+- `R` is the number of *promotable replicas* (read quorum);
+- `W` is the number of replicas that must acknowledge a write before the
+  `COMMIT` is returned to the client (write quorum);
+- `N` is the total number of potentially synchronous replicas.
+
+*Promotable replicas* are replicas that:
+
+- are part of the cluster;
+- are able to report their state to the operator;
+- are potentially synchronous.
+
+If `R + W > N`, at least one of the promotable replicas is guaranteed to have
+confirmed all the synchronous commits, and it can safely be promoted to
+primary. If this is not the case, the controller does not promote any replica
+and waits for the situation to change.
+
+Users can still force the promotion of a replica to primary through the
+`kubectl cnp promote` command, even if the quorum check fails.
+
+!!! Warning
+ Manual promotion should only be used as a last resort. Before proceeding,
+ make sure you fully understand the risk of data loss and carefully consider the
+ consequences of prioritizing the resumption of write workloads for your
+ applications.
+
+An additional CRD tracks the quorum state of the cluster: a `Cluster` with
+quorum failover enabled has a `FailoverQuorum` resource with the same name as
+the `Cluster` resource. The controller creates the `FailoverQuorum` resource
+when quorum failover is enabled; the primary instance updates it during its
+reconciliation loop, and the operator reads it during quorum checks. It tracks
+the latest known configuration of synchronous replication.
+
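+For example, you can inspect the current quorum information, for instance with
+`kubectl get failoverquorum cluster-example -o yaml` (assuming the default
+resource naming). A sketch of the status you can expect, based on the
+`FailoverQuorumStatus` fields and with purely illustrative values:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: FailoverQuorum
+metadata:
+  name: cluster-example
+status:
+  # illustrative values
+  method: any
+  primary: cluster-example-1
+  standbyNames:
+    - cluster-example-2
+    - cluster-example-3
+  standbyNumber: 1
+```
+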
+!!! Important
+ Users should not modify the `FailoverQuorum` resource directly. During
+ PostgreSQL configuration changes, when it is not possible to determine the
+ configuration, the `FailoverQuorum` resource will be reset, preventing any
+ failover until the new configuration is applied.
+
+The `FailoverQuorum` resource works in conjunction with PostgreSQL synchronous
+replication.
+
+!!! Warning
+    There is no guarantee that transactions acknowledged to the client but not
+    committed synchronously, such as those executed after explicitly disabling
+    synchronous replication with `SET synchronous_commit TO local`, will be
+    present on a promoted replica.
+
+### Quorum Failover Example Scenarios
+
+In the following scenarios, `R` is the number of promotable replicas, `W` is
+the number of replicas that must acknowledge a write before commit, and `N` is
+the total number of potentially synchronous replicas. The "Failover" column
+indicates whether failover is allowed under quorum failover rules.
+
+#### Scenario 1: Three-node cluster, failing pod(s)
+
+A cluster with `instances: 3`, `synchronous.number: 1`, and
+`dataDurability: required`.
+
+- If only the primary fails, two promotable replicas remain (R=2).
+ Since `R + W > N` (2 + 1 > 2), failover is allowed and safe.
+- If both the primary and one replica fail, only one promotable replica
+ remains (R=1). Since `R + W = N` (1 + 1 = 2), failover is not allowed to
+ prevent possible data loss.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 2 | 1 | 2 | ✅ |
+| 1 | 1 | 2 | ❌ |
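+
+A `Cluster` manifest matching this scenario might look like the following
+sketch; only the fields relevant to the scenario are shown, and it assumes the
+`dataDurability` option lives in the `.spec.postgresql.synchronous` stanza of
+the synchronous replication configuration:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+  annotations:
+    alpha.k8s.enterprisedb.io/failoverQuorum: "true"
+spec:
+  instances: 3
+  postgresql:
+    synchronous:
+      method: any
+      number: 1
+      dataDurability: required
+```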
+
+#### Scenario 2: Three-node cluster, network partition
+
+A cluster with `instances: 3`, `synchronous.number: 1`, and
+`dataDurability: required` experiences a network partition.
+
+- If the operator can communicate with the primary, no failover occurs. The
+  cluster can still be impacted if the primary cannot reach any standby, since
+  it won't commit transactions due to synchronous replication requirements.
+- If the operator cannot reach the primary but can reach both replicas (R=2),
+  failover is allowed. If the operator can reach only one replica (R=1),
+  failover is not allowed, as the synchronous standby may be the unreachable
+  one.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 2 | 1 | 2 | ✅ |
+| 1 | 1 | 2 | ❌ |
+
+#### Scenario 3: Five-node cluster, network partition
+
+A cluster with `instances: 5`, `synchronous.number: 2`, and
+`dataDurability: required` experiences a network partition.
+
+- If the operator can communicate with the primary, no failover occurs. The
+  cluster can still be impacted if the primary cannot reach at least two
+  standbys, since it won't commit transactions due to synchronous replication
+  requirements.
+- If the operator cannot reach the primary but can reach at least three
+  replicas (R=3), failover is allowed. If the operator can reach only two
+  replicas (R=2), failover is not allowed, as the synchronous standbys may be
+  among the unreachable ones.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 3 | 2 | 4 | ✅ |
+| 2 | 2 | 4 | ❌ |
+
+#### Scenario 4: Three-node cluster with remote synchronous replicas
+
+A cluster with `instances: 3` and remote synchronous replicas defined in
+`standbyNamesPre` or `standbyNamesPost`. We assume that the primary is failing.
+
+This scenario requires an important consideration. Replicas listed in
+`standbyNamesPre` or `standbyNamesPost` are not counted in
+`R` (they cannot be promoted), but are included in `N` (they may have received
+synchronous writes). So, if
+`synchronous.number <= len(standbyNamesPre) + len(standbyNamesPost)`, failover
+is not possible, as no local replica can be guaranteed to have the required
+data. The operator prevents such configurations during validation, but some
+invalid configurations are shown below for clarity.
+
+**Example configurations:**
+
+Configuration #1 (valid):
+
+```yaml
+instances: 3
+postgresql:
+ synchronous:
+ method: any
+ number: 2
+ standbyNamesPre:
+ - angus
+```
+
+In this configuration, when the primary fails, `R = 2` (the local replicas),
+`W = 2`, and `N = 3` (2 local replicas + 1 remote), allowing failover.
+If an additional replica fails (`R = 1`), failover is not allowed.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 2 | 2 | 3 | ✅ |
+| 1 | 2 | 3 | ❌ |
+
+Configuration #2 (invalid):
+
+```yaml
+instances: 3
+postgresql:
+ synchronous:
+ method: any
+ number: 1
+ maxStandbyNamesFromCluster: 1
+ standbyNamesPre:
+ - angus
+```
+
+In this configuration, `R = 2` (the local replicas), `W = 1`, and `N = 3`
+(2 local replicas + 1 remote). Since `R + W` is never greater than `N`,
+failover is not possible in this setup, so quorum failover cannot be
+enabled with this configuration.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 2 | 1 | 3 | ❌ |
+
+Configuration #3 (invalid):
+
+```yaml
+instances: 3
+postgresql:
+ synchronous:
+ method: any
+ number: 1
+ maxStandbyNamesFromCluster: 0
+ standbyNamesPre:
+ - angus
+ - malcolm
+```
+
+In this configuration, `R = 0` (no local replica can be synchronous), `W = 1`,
+and `N = 2` (0 local replicas + 2 remote).
+Failover is not possible in this setup, so quorum failover cannot be
+enabled with this configuration.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 0 | 1 | 2 | ❌ |
+
+#### Scenario 5: Three-node cluster, preferred data durability, network partition
+
+Consider a cluster with `instances: 3`, `synchronous.number: 1`, and
+`dataDurability: preferred` that experiences a network partition.
+
+- If the operator can communicate with both the primary and the API server,
+ the primary continues to operate, removing unreachable standbys from the
+ `synchronous_standby_names` set.
+- If the primary cannot reach the operator or API server, a quorum check is
+ performed. The `FailoverQuorum` status cannot have changed, as the primary cannot
+ have received new configuration. If the operator can reach both replicas,
+ failover is allowed (`R=2`). If only one replica is reachable (`R=1`),
+ failover is not allowed.
+
+| R | W | N | Failover |
+| :-: | :-: | :-: | :------: |
+| 2 | 1 | 2 | ✅ |
+| 1 | 1 | 2 | ❌ |
+
+[^1]: [Dynamo: Amazon’s highly available key-value store](https://www.amazon.science/publications/dynamo-amazons-highly-available-key-value-store)
diff --git a/product_docs/docs/postgres_for_kubernetes/1/imagevolume_extensions.mdx b/product_docs/docs/postgres_for_kubernetes/1/imagevolume_extensions.mdx
new file mode 100644
index 00000000000..84c8f122dc4
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/imagevolume_extensions.mdx
@@ -0,0 +1,354 @@
+---
+title: 'Image Volume Extensions'
+originalFilePath: 'src/imagevolume_extensions.md'
+---
+
+
+
+EDB Postgres for Kubernetes supports the **dynamic loading of PostgreSQL extensions** into a
+`Cluster` at Pod startup using the [Kubernetes `ImageVolume` feature](https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/)
+and the `extension_control_path` GUC introduced in PostgreSQL 18, to which this
+project contributed.
+
+This feature allows you to mount a [PostgreSQL extension](https://www.postgresql.org/docs/current/extend-extensions.html),
+packaged as an OCI-compliant container image, as a read-only and immutable
+volume inside a running pod at a known filesystem path.
+
+You can make the extension available either globally, using the
+[`shared_preload_libraries` option](postgresql_conf.md#shared-preload-libraries),
+or at the database level through the `CREATE EXTENSION` command. For the
+latter, you can use the [`Database` resource’s declarative extension management](declarative_database_management.md/#managing-extensions-in-a-database)
+to ensure consistent, automated extension setup within your PostgreSQL
+databases.
+
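+For instance, a `Database` resource that declaratively requests an extension
+might look like the following sketch (the `vector` extension name is just an
+example):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Database
+metadata:
+  name: cluster-example-app
+spec:
+  name: app
+  owner: app
+  cluster:
+    name: cluster-example
+  extensions:
+    - name: vector   # illustrative extension name
+      ensure: present
+```
+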
+## Benefits
+
+Image volume extensions decouple the distribution of PostgreSQL operand
+container images from the distribution of extensions. This eliminates the
+need to define and embed extensions at build time within your PostgreSQL
+images—a major adoption blocker for PostgreSQL as a containerized workload,
+including from a security and supply chain perspective.
+
+As a result, you can:
+
+- Use the [official PostgreSQL `minimal` operand images](https://github.com/enterprisedb/docker-postgres?tab=readme-ov-file#minimal-images)
+ provided by EDB Postgres for Kubernetes.
+- Dynamically add the extensions you need to your `Cluster` definitions,
+ without rebuilding or maintaining custom PostgreSQL images.
+- Reduce your operational surface by using immutable, minimal, and secure base
+ images while adding only the extensions required for each workload.
+
+Extension images must be built according to the
+[documented specifications](#image-specifications).
+
+## Requirements
+
+To use image volume extensions with EDB Postgres for Kubernetes, you need:
+
+- **PostgreSQL 18 or later**, with support for `extension_control_path`.
+- **Kubernetes 1.33**, with the `ImageVolume` feature gate enabled.
+- **EDB Postgres for Kubernetes-compatible extension container images**, ensuring:
+  - A PostgreSQL major version matching that of the `Cluster` resource.
+  - An operating system distribution compatible with that of the `Cluster` resource.
+  - A CPU architecture matching that of the `Cluster` resource.
+
+## How it works
+
+Extension images are defined in the `.spec.postgresql.extensions` stanza of a
+`Cluster` resource, which accepts an ordered list of extensions to be added to
+the PostgreSQL cluster.
+
+!!! Info
+ For field-level details, see the
+ [API reference for `ExtensionConfiguration`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-ExtensionConfiguration).
+
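+For example, the following sketch adds a hypothetical `vector` extension image
+to a cluster, using the `name` and `image` fields described in the API
+reference above (the image reference is a placeholder):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  postgresql:
+    extensions:
+      - name: vector
+        image:
+          reference: registry.example.com/extensions/vector:latest  # placeholder
+  storage:
+    size: 1Gi
+```
+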
+Each extension image volume is mounted read-only under the `/extensions/`
+directory inside the PostgreSQL pods.
+
+## FailoverQuorum
+
+FailoverQuorum contains the information about the current failover
+quorum status of a PG cluster. It is updated by the instance manager
+of the primary node and reset to zero by the operator to trigger
+an update.
+
+| Field | Description |
+| :---- | :---------- |
+| `apiVersion` [Required] *string* | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] *string* | `FailoverQuorum` |
+| `metadata` [Required] *meta/v1.ObjectMeta* | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `status` *FailoverQuorumStatus* | Most recently observed status of the failover quorum. |
+
## ImageCatalog
@@ -2952,6 +2986,60 @@ storage
+## ExtensionConfiguration
+
+ExtensionConfiguration is the configuration used to add
+PostgreSQL extensions to the Cluster.
+
+| Field | Description |
+| :---- | :---------- |
+| `name` [Required] *string* | The name of the extension, required. |
+| `image` [Required] *core/v1.ImageVolumeSource* | The image containing the extension, required. |
+| `extension_control_path` *[]string* | The list of directories inside the image which should be added to `extension_control_path`. If not defined, defaults to `"/share"`. |
+| `dynamic_library_path` *[]string* | The list of directories inside the image which should be added to `dynamic_library_path`. If not defined, defaults to `"/lib"`. |
+| `ld_library_path` *[]string* | The list of directories inside the image which should be added to `ld_library_path`. |
+
+## FailoverQuorumStatus
+
+FailoverQuorumStatus is the latest observed status of the failover
+quorum of the PG cluster.
+
+| Field | Description |
+| :---- | :---------- |
+| `method` *string* | Contains the latest reported `Method` value. |
+| `standbyNames` *[]string* | StandbyNames is the list of potentially synchronous instance names. |
+| `standbyNumber` *int* | StandbyNumber is the number of synchronous standbys that transactions need to wait for replies from. |
+| `primary` *string* | Primary is the name of the primary instance that updated this object the latest time. |
+
+## IsolationCheckConfiguration
+
+IsolationCheckConfiguration contains the configuration for the isolation check
+functionality in the liveness probe.
+
+| Field | Description |
+| :---- | :---------- |
+| `enabled` *bool* | Whether primary isolation checking is enabled for the liveness probe. |
+| `requestTimeout` *int* | Timeout in milliseconds for requests during the primary isolation check. |
+| `connectionTimeout` *int* | Timeout in milliseconds for connections during the primary isolation check. |
+
+LDAPScheme defines the possible schemes for LDAP.
+
+## LivenessProbe
+
+**Appears in:**
+
+- [ProbesConfiguration](#postgresql-k8s-enterprisedb-io-v1-ProbesConfiguration)
+
+LivenessProbe is the configuration of the liveness probe.
+
+| Field | Description |
+| :---- | :---------- |
+| `Probe` *Probe* | (Members of `Probe` are embedded into this type.) Probe is the standard probe configuration. |
+| `isolationCheck` *IsolationCheckConfiguration* | Configure the feature that extends the liveness probe for a primary instance. In addition to the basic checks, this verifies whether the primary is isolated from the Kubernetes API server and from its replicas, ensuring that it can be safely shut down if a network partition or API unavailability is detected. Enabled by default. |
+
+`extensions`
+The configuration of the extensions to be added.
+
+Probe describes a health check to be performed against a container to
+determine whether it is alive or ready to receive traffic.
@@ -4649,7 +4864,7 @@ to be injected in the PostgreSQL Pods
+`liveness`
+[Required]
+The liveness probe configuration.
@@ -5112,6 +5327,19 @@ It may only contain lower case letters, numbers, and the underscore character. This can only be set at creation time. By default set to `_cnp_`.
+`synchronizeLogicalDecoding`
+When enabled, the operator automatically manages synchronization of logical
+decoding (replication) slots across high-availability clusters.
+Requires one of the following conditions:
+Package v1 contains API Schema definitions for the postgresql v1 API group.
+
+## Resource Types
+
+- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup)
+- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster)
+- [ClusterImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ClusterImageCatalog)
+- [Database](#postgresql-k8s-enterprisedb-io-v1-Database)
+- [FailoverQuorum](#postgresql-k8s-enterprisedb-io-v1-FailoverQuorum)
+- [ImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ImageCatalog)
+- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler)
+- [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication)
+- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup)
+- [Subscription](#postgresql-k8s-enterprisedb-io-v1-Subscription)
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | Backup |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+BackupSpec + |
+
+ Specification of the desired behavior of the backup. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
status +BackupStatus + |
+
+ Most recently observed status of the backup. This data may not be up to +date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
Cluster is the Schema for the PostgreSQL API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | Cluster |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+ClusterSpec + |
+
+ Specification of the desired behavior of the cluster. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
status +ClusterStatus + |
+
+ Most recently observed status of the cluster. This data may not be up +to date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
ClusterImageCatalog is the Schema for the clusterimagecatalogs API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | ClusterImageCatalog |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+ImageCatalogSpec + |
+
+ Specification of the desired behavior of the ClusterImageCatalog. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
Database is the Schema for the databases API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | Database |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+DatabaseSpec + |
+
+ Specification of the desired Database. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
status +DatabaseStatus + |
+
+ Most recently observed status of the Database. This data may not be up to +date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
FailoverQuorum contains the information about the current failover +quorum status of a PG cluster. It is updated by the instance manager +of the primary node and reset to zero by the operator to trigger +an update.
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | FailoverQuorum |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
status +FailoverQuorumStatus + |
+
+ Most recently observed status of the failover quorum. + |
+
ImageCatalog is the Schema for the imagecatalogs API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | ImageCatalog |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+ImageCatalogSpec + |
+
+ Specification of the desired behavior of the ImageCatalog. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
Pooler is the Schema for the poolers API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | Pooler |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+PoolerSpec + |
+
+ Specification of the desired behavior of the Pooler. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
status +PoolerStatus + |
+
+ Most recently observed status of the Pooler. This data may not be up to +date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
Publication is the Schema for the publications API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | Publication |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+PublicationSpec + |
++ No description provided. | +
status [Required]+PublicationStatus + |
++ No description provided. | +
ScheduledBackup is the Schema for the scheduledbackups API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | ScheduledBackup |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+ScheduledBackupSpec + |
+
+ Specification of the desired behavior of the ScheduledBackup. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
status +ScheduledBackupStatus + |
+
+ Most recently observed status of the ScheduledBackup. This data may not be up +to date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
Subscription is the Schema for the subscriptions API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | Subscription |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+SubscriptionSpec + |
++ No description provided. | +
status [Required]+SubscriptionStatus + |
++ No description provided. | +
AffinityConfiguration contains the info we need to create the +affinity rules for Pods
+ +Field | Description |
---|---|
enablePodAntiAffinity +bool + |
+
+ Activates anti-affinity for the pods. The operator will define pods +anti-affinity unless this field is explicitly set to false + |
+
topologyKey +string + |
+
+ TopologyKey to use for anti-affinity configuration. See k8s documentation +for more info on that + |
+
nodeSelector +map[string]string + |
+
+ NodeSelector is map of key-value pairs used to define the nodes on which +the pods can run. +More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ + |
+
nodeAffinity +core/v1.NodeAffinity + |
+
+ NodeAffinity describes node affinity scheduling rules for the pod. +More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity + |
+
tolerations +[]core/v1.Toleration + |
+
+ Tolerations is a list of Tolerations that should be set for all the pods, in order to allow them to run +on tainted nodes. +More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ + |
+
podAntiAffinityType +string + |
+
+ PodAntiAffinityType allows the user to decide whether pod anti-affinity between cluster instance has to be +considered a strong requirement during scheduling or not. Allowed values are: "preferred" (default if empty) or +"required". Setting it to "required", could lead to instances remaining pending until new kubernetes nodes are +added if all the existing nodes don't match the required pod anti-affinity rule. +More info: +https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + |
+
additionalPodAntiAffinity +core/v1.PodAntiAffinity + |
+
+ AdditionalPodAntiAffinity allows to specify pod anti-affinity terms to be added to the ones generated +by the operator if EnablePodAntiAffinity is set to true (default) or to be used exclusively if set to false. + |
+
additionalPodAffinity +core/v1.PodAffinity + |
+
+ AdditionalPodAffinity allows to specify pod affinity terms to be passed to all the cluster's pods. + |
+
AvailableArchitecture represents the state of a cluster's architecture
+ +Field | Description |
---|---|
goArch [Required]+string + |
+
+ GoArch is the name of the executable architecture + |
+
hash [Required]+string + |
+
+ Hash is the hash of the executable + |
+
BackupConfiguration defines how the backup of the cluster are taken. +The supported backup methods are BarmanObjectStore and VolumeSnapshot. +For details and examples refer to the Backup and Recovery section of the +documentation
+ +Field | Description |
---|---|
volumeSnapshot +VolumeSnapshotConfiguration + |
+
+ VolumeSnapshot provides the configuration for the execution of volume snapshot backups. + |
+
barmanObjectStore +github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanObjectStoreConfiguration + |
+
+ The configuration for the barman-cloud tool suite + |
+
retentionPolicy +string + |
+
+ RetentionPolicy is the retention policy to be used for backups
+and WALs (i.e. '60d'). The retention policy is expressed in the form
+of |
+
target +BackupTarget + |
+
+ The policy to decide which instance should perform backups. Available
+options are empty string, which will default to |
+
BackupMethod defines the way of executing the physical base backups of +the selected PostgreSQL instance
+ + + +## BackupPhase + +(Alias of `string`) + +**Appears in:** + +- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus) + +BackupPhase is the phase of the backup
+ + + +## BackupPluginConfiguration + +**Appears in:** + +- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec) + +- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec) + +BackupPluginConfiguration contains the backup configuration used by +the backup plugin
+ +Field | Description |
---|---|
name [Required]+string + |
+
+ Name is the name of the plugin managing this backup + |
+
parameters +map[string]string + |
+
+ Parameters are the configuration parameters passed to the backup +plugin for this backup + |
+
BackupSnapshotElementStatus is a volume snapshot that is part of a volume snapshot method backup
+ +Field | Description |
---|---|
name [Required]+string + |
+
+ Name is the snapshot resource name + |
+
type [Required]+string + |
+
+ Type is tho role of the snapshot in the cluster, such as PG_DATA, PG_WAL and PG_TABLESPACE + |
+
tablespaceName +string + |
+
+ TablespaceName is the name of the snapshotted tablespace. Only set +when type is PG_TABLESPACE + |
+
BackupSnapshotStatus the fields exclusive to the volumeSnapshot method backup
+ +Field | Description |
---|---|
elements +[]BackupSnapshotElementStatus + |
+
+ The elements list, populated with the gathered volume snapshots + |
+
BackupSource contains the backup we need to restore from, plus some +information that could be needed to correctly restore it.
+ +Field | Description |
---|---|
LocalObjectReference +github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+(Members of LocalObjectReference are embedded into this type.)
+ No description provided. |
+
endpointCA +github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector + |
+
+ EndpointCA store the CA bundle of the barman endpoint. +Useful when using self-signed certificates to avoid +errors with certificate issuer and barman-cloud-wal-archive. + |
+
BackupSpec defines the desired state of Backup
+ +Field | Description |
---|---|
cluster [Required]+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ The cluster to backup + |
+
target +BackupTarget + |
+
+ The policy to decide which instance should perform this backup. If empty,
+it defaults to |
+
method +BackupMethod + |
+
+ The backup method to be used, possible options are |
+
pluginConfiguration +BackupPluginConfiguration + |
+
+ Configuration parameters passed to the plugin managing this backup + |
+
online +bool + |
+
+ Whether the default type of backup with volume snapshots is
+online/hot ( |
+
onlineConfiguration +OnlineConfiguration + |
+
+ Configuration parameters to control the online/hot backup with volume snapshots +Overrides the default settings specified in the cluster '.backup.volumeSnapshot.onlineConfiguration' stanza + |
+
BackupStatus defines the observed state of Backup
+ +Field | Description |
---|---|
BarmanCredentials +github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanCredentials + |
+(Members of BarmanCredentials are embedded into this type.)
+ The potential credentials for each cloud provider + |
+
endpointCA +github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector + |
+
+ EndpointCA store the CA bundle of the barman endpoint. +Useful when using self-signed certificates to avoid +errors with certificate issuer and barman-cloud-wal-archive. + |
+
endpointURL +string + |
+
+ Endpoint to be used to upload data to the cloud, +overriding the automatic endpoint discovery + |
+
destinationPath +string + |
+
+ The path where to store the backup (i.e. s3://bucket/path/to/folder) +this path, with different destination folders, will be used for WALs +and for data. This may not be populated in case of errors. + |
+
serverName +string + |
+
+ The server name on S3, the cluster name is used if this +parameter is omitted + |
+
encryption +string + |
+
+ Encryption method required to S3 API + |
+
backupId +string + |
+
+ The ID of the Barman backup + |
+
backupName +string + |
+
+ The Name of the Barman backup + |
+
phase +BackupPhase + |
+
+ The last backup status + |
+
startedAt +meta/v1.Time + |
+
+ When the backup was started + |
+
stoppedAt +meta/v1.Time + |
+
+ When the backup was terminated + |
+
beginWal +string + |
+
+ The starting WAL + |
+
endWal +string + |
+
+ The ending WAL + |
+
beginLSN +string + |
+
+ The starting xlog + |
+
endLSN +string + |
+
+ The ending xlog + |
+
error +string + |
+
+ The detected error + |
+
commandOutput +string + |
+
+ Unused. Retained for compatibility with old versions. + |
+
commandError +string + |
+
+ The backup command output in case of error + |
+
backupLabelFile +[]byte + |
+
+ Backup label file content as returned by Postgres in case of online (hot) backups + |
+
tablespaceMapFile +[]byte + |
+
+ Tablespace map file content as returned by Postgres in case of online (hot) backups + |
+
instanceID +InstanceID + |
+
+ Information to identify the instance where the backup has been taken from + |
+
snapshotBackupStatus +BackupSnapshotStatus + |
+
+ Status of the volumeSnapshot backup + |
+
method +BackupMethod + |
+
+ The backup method being used + |
+
online +bool + |
+
+ Whether the backup was online/hot ( |
+
pluginMetadata +map[string]string + |
+
+ A map containing the plugin metadata + |
+
BackupTarget describes the preferred targets for a backup
+ + + +## BootstrapConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +BootstrapConfiguration contains information about how to create the PostgreSQL
+cluster. Only a single bootstrap method can be defined among the supported
+ones. initdb
will be used as the bootstrap method if left
+unspecified. Refer to the Bootstrap page of the documentation for more
+information.
Field | Description |
---|---|
initdb +BootstrapInitDB + |
+
+ Bootstrap the cluster via initdb + |
+
recovery +BootstrapRecovery + |
+
+ Bootstrap the cluster from a backup + |
+
pg_basebackup +BootstrapPgBaseBackup + |
+
+ Bootstrap the cluster taking a physical backup of another compatible +PostgreSQL instance + |
+
BootstrapInitDB is the configuration of the bootstrap process when +initdb is used +Refer to the Bootstrap page of the documentation for more information.
+ +Field | Description |
---|---|
database +string + |
+
+ Name of the database used by the application. Default: |
+
owner +string + |
+
+ Name of the owner of the database in the instance to be used
+by applications. Defaults to the value of the |
+
secret +github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ Name of the secret containing the initial credentials for the +owner of the user database. If empty a new secret will be +created from scratch + |
+
redwood +bool + |
+
+ If we need to enable/disable Redwood compatibility. Requires +EPAS and for EPAS defaults to true + |
+
options +[]string + |
+
+ The list of options that must be passed to initdb when creating the cluster. +Deprecated: This could lead to inconsistent configurations, +please use the explicit provided parameters instead. +If defined, explicit values will be ignored. + |
+
dataChecksums +bool + |
+
+ Whether the |
+
encoding +string + |
+
+ The value to be passed as option |
+
localeCollate +string + |
+
+ The value to be passed as option |
+
localeCType +string + |
+
+ The value to be passed as option |
+
locale +string + |
+
+ Sets the default collation order and character classification in the new database. + |
+
localeProvider +string + |
+
+ This option sets the locale provider for databases created in the new cluster. +Available from PostgreSQL 16. + |
+
icuLocale +string + |
+
+ Specifies the ICU locale when the ICU provider is used.
+This option requires |
+
icuRules +string + |
+
+ Specifies additional collation rules to customize the behavior of the default collation.
+This option requires |
+
builtinLocale +string + |
+
+ Specifies the locale name when the builtin provider is used.
+This option requires |
+
walSegmentSize +int + |
+
+ The value in megabytes (1 to 1024) to be passed to the |
+
postInitSQL +[]string + |
+
+ List of SQL queries to be executed as a superuser in the |
+
postInitApplicationSQL +[]string + |
+
+ List of SQL queries to be executed as a superuser in the application +database right after the cluster has been created - to be used with extreme care +(by default empty) + |
+
postInitTemplateSQL +[]string + |
+
+ List of SQL queries to be executed as a superuser in the |
+
import +Import + |
+
+ Bootstraps the new cluster by importing data from an existing PostgreSQL
+instance using logical backup ( |
+
postInitApplicationSQLRefs +SQLRefs + |
+
+ List of references to ConfigMaps or Secrets containing SQL files +to be executed as a superuser in the application database right after +the cluster has been created. The references are processed in a specific order: +first, all Secrets are processed, followed by all ConfigMaps. +Within each group, the processing order follows the sequence specified +in their respective arrays. +(by default empty) + |
+
postInitTemplateSQLRefs +SQLRefs + |
+
+ List of references to ConfigMaps or Secrets containing SQL files
+to be executed as a superuser in the |
+
postInitSQLRefs +SQLRefs + |
+
+ List of references to ConfigMaps or Secrets containing SQL files
+to be executed as a superuser in the |
+
BootstrapPgBaseBackup contains the configuration required to take +a physical backup of an existing PostgreSQL cluster
+ +Field | Description |
---|---|
source [Required]+string + |
+
+ The name of the server of which we need to take a physical backup + |
+
database +string + |
+
+ Name of the database used by the application. Default: |
+
owner +string + |
+
+ Name of the owner of the database in the instance to be used
+by applications. Defaults to the value of the |
+
secret +github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ Name of the secret containing the initial credentials for the +owner of the user database. If empty a new secret will be +created from scratch + |
+
BootstrapRecovery contains the configuration required to restore
+from an existing cluster using 3 methodologies: external cluster,
+volume snapshots or backup objects. Full recovery and Point-In-Time
+Recovery are supported.
+The method can be also be used to create clusters in continuous recovery
+(replica clusters), also supporting cascading replication when instances
>
Field | Description |
---|---|
backup +BackupSource + |
+
+ The backup object containing the physical base backup from which to
+initiate the recovery procedure.
+Mutually exclusive with |
+
source +string + |
+
+ The external cluster whose backup we will restore. This is also
+used as the name of the folder under which the backup is stored,
+so it must be set to the name of the source cluster
+Mutually exclusive with |
+
volumeSnapshots +DataSource + |
+
+ The static PVC data source(s) from which to initiate the
+recovery procedure. Currently supporting |
+
recoveryTarget +RecoveryTarget + |
+
+ By default, the recovery process applies all the available
+WAL files in the archive (full recovery). However, you can also
+end the recovery as soon as a consistent state is reached or
+recover to a point-in-time (PITR) by specifying a |
+
database +string + |
+
+ Name of the database used by the application. Default: |
+
owner +string + |
+
+ Name of the owner of the database in the instance to be used
+by applications. Defaults to the value of the |
+
secret +github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ Name of the secret containing the initial credentials for the +owner of the user database. If empty a new secret will be +created from scratch + |
+
CatalogImage defines the image and major version
+ +Field | Description |
---|---|
image [Required]+string + |
+
+ The image reference + |
+
major [Required]+int + |
+
+ The PostgreSQL major version of the image. Must be unique within the catalog. + |
+
CertificatesConfiguration contains the needed configurations to handle server certificates.
+ +Field | Description |
---|---|
serverCASecret +string + |
+
+ The secret containing the Server CA certificate. If not defined, a new secret will be created +with a self-signed CA and will be used to generate the TLS certificate ServerTLSSecret. + +Contains: + + +
|
+
serverTLSSecret +string + |
+
+ The secret of type kubernetes.io/tls containing the server TLS certificate and key that will be set as
+ |
+
replicationTLSSecret +string + |
+
+ The secret of type kubernetes.io/tls containing the client certificate to authenticate as
+the |
+
clientCASecret +string + |
+
+ The secret containing the Client CA certificate. If not defined, a new secret will be created +with a self-signed CA and will be used to generate all the client certificates. + +Contains: + + +
|
+
serverAltDNSNames +[]string + |
+
+ The list of the server alternative DNS names to be added to the generated server TLS certificates, when required. + |
+
CertificatesStatus contains configuration certificates and related expiration dates.
+ +Field | Description |
---|---|
CertificatesConfiguration +CertificatesConfiguration + |
+(Members of CertificatesConfiguration are embedded into this type.)
+ Needed configurations to handle server certificates, initialized with default values, if needed. + |
+
expirations +map[string]string + |
+
+ Expiration dates for all certificates. + |
+
ClusterMonitoringTLSConfiguration is the type containing the TLS configuration +for the cluster's monitoring
+ +Field | Description |
---|---|
enabled +bool + |
+
+ Enable TLS for the monitoring endpoint. +Changing this option will force a rollout of all instances. + |
+
ClusterSpec defines the desired state of Cluster
+ +Field | Description |
---|---|
description +string + |
+
+ Description of this PostgreSQL cluster + |
+
inheritedMetadata +EmbeddedObjectMetadata + |
+
+ Metadata that will be inherited by all objects related to the Cluster + |
+
imageName +string + |
+
+ Name of the container image, supporting both tags ( |
+
imageCatalogRef +ImageCatalogRef + |
+
+ Defines the major PostgreSQL version we want to use within an ImageCatalog + |
+
imagePullPolicy +core/v1.PullPolicy + |
+
+ Image pull policy.
+One of |
+
schedulerName +string + |
+
+ If specified, the pod will be dispatched by specified Kubernetes +scheduler. If not specified, the pod will be dispatched by the default +scheduler. More info: +https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/ + |
+
postgresUID +int64 + |
+
+ The UID of the |
+
postgresGID +int64 + |
+
+ The GID of the |
+
instances [Required]+int + |
+
+ Number of instances required in the cluster + |
+
minSyncReplicas +int + |
+
+ Minimum number of instances required in synchronous replication with the +primary. Undefined or 0 allow writes to complete when no standby is +available. + |
+
maxSyncReplicas +int + |
+
+ The target value for the synchronous replication quorum, that can be +decreased if the number of ready standbys is lower than this. +Undefined or 0 disable synchronous replication. + |
+
postgresql +PostgresConfiguration + |
+
+ Configuration of the PostgreSQL server + |
+
replicationSlots +ReplicationSlotsConfiguration + |
+
+ Replication slots management configuration + |
+
bootstrap +BootstrapConfiguration + |
+
+ Instructions to bootstrap this cluster + |
+
replica +ReplicaClusterConfiguration + |
+
+ Replica cluster configuration + |
+
superuserSecret +github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ The secret containing the superuser password. If not defined a new +secret will be created with a randomly generated password + |
+
enableSuperuserAccess +bool + |
+
+ When this option is enabled, the operator will use the |
+
certificates +CertificatesConfiguration + |
+
+ The configuration for the CA and related certificates + |
+
imagePullSecrets +[]github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ The list of pull secrets to be used to pull the images. If the license key +contains a pull secret that secret will be automatically included. + |
+
storage +StorageConfiguration + |
+
+ Configuration of the storage of the instances + |
+
serviceAccountTemplate +ServiceAccountTemplate + |
+
+ Configure the generation of the service account + |
+
walStorage +StorageConfiguration + |
+
+ Configuration of the storage for PostgreSQL WAL (Write-Ahead Log) + |
+
ephemeralVolumeSource +core/v1.EphemeralVolumeSource + |
+
+ EphemeralVolumeSource allows the user to configure the source of ephemeral volumes. + |
+
startDelay +int32 + |
+
+ The time in seconds that is allowed for a PostgreSQL instance to +successfully start up (default 3600). +The startup probe failure threshold is derived from this value using the formula: +ceiling(startDelay / 10). + |
+
stopDelay +int32 + |
+
+ The time in seconds that is allowed for a PostgreSQL instance to +gracefully shutdown (default 1800) + |
+
smartStopDelay +int32 + |
+
+ Deprecated: please use SmartShutdownTimeout instead + |
+
smartShutdownTimeout +int32 + |
+
+ The time in seconds that controls the window of time reserved for the smart shutdown of Postgres to complete.
+Make sure you reserve enough time for the operator to request a fast shutdown of Postgres
+(that is: |
+
switchoverDelay +int32 + |
+
+ The time in seconds that is allowed for a primary PostgreSQL instance +to gracefully shutdown during a switchover. +Default value is 3600 seconds (1 hour). + |
+
failoverDelay +int32 + |
+
+ The amount of time (in seconds) to wait before triggering a failover +after the primary PostgreSQL instance in the cluster was detected +to be unhealthy + |
+
livenessProbeTimeout +int32 + |
+
+ LivenessProbeTimeout is the time (in seconds) that is allowed for a PostgreSQL instance +to successfully respond to the liveness probe (default 30). +The Liveness probe failure threshold is derived from this value using the formula: +ceiling(livenessProbe / 10). + |
+
affinity +AffinityConfiguration + |
+
+ Affinity/Anti-affinity rules for Pods + |
+
topologySpreadConstraints +[]core/v1.TopologySpreadConstraint + |
+
+ TopologySpreadConstraints specifies how to spread matching pods among the given topology. +More info: +https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/ + |
+
resources +core/v1.ResourceRequirements + |
+
+ Resources requirements of every generated Pod. Please refer to +https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ +for more information. + |
+
ephemeralVolumesSizeLimit +EphemeralVolumesSizeLimitConfiguration + |
+
+ EphemeralVolumesSizeLimit allows the user to set the limits for the ephemeral +volumes + |
+
priorityClassName +string + |
+
+ Name of the priority class which will be used in every generated Pod, if the PriorityClass +specified does not exist, the pod will not be able to schedule. Please refer to +https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass +for more information + |
+
primaryUpdateStrategy +PrimaryUpdateStrategy + |
+
+ Deployment strategy to follow to upgrade the primary server during a rolling
+update procedure, after all replicas have been successfully updated:
+it can be automated ( |
+
primaryUpdateMethod +PrimaryUpdateMethod + |
+
+ Method to follow to upgrade the primary server during a rolling
+update procedure, after all replicas have been successfully updated:
+it can be with a switchover ( |
+
backup +BackupConfiguration + |
+
+ The configuration to be used for backups + |
+
nodeMaintenanceWindow +NodeMaintenanceWindow + |
+
+ Define a maintenance window for the Kubernetes nodes + |
+
licenseKey +string + |
+
+ The license key of the cluster. When empty, the cluster operates in +trial mode and after the expiry date (default 30 days) the operator +will cease any reconciliation attempt. For details, please refer to +the license agreement that comes with the operator. + |
+
licenseKeySecret +core/v1.SecretKeySelector + |
+
+ The reference to the license key. When this is set it take precedence over LicenseKey. + |
+
monitoring +MonitoringConfiguration + |
+
+ The configuration of the monitoring infrastructure of this cluster + |
+
externalClusters +[]ExternalCluster + |
+
+ The list of external clusters which are used in the configuration + |
+
logLevel +string + |
+
+ The instances' log level, one of the following values: error, warning, info (default), debug, trace + |
+
projectedVolumeTemplate +core/v1.ProjectedVolumeSource + |
+
+ Template to be used to define projected volumes, projected volumes will be mounted
+under |
+
env +[]core/v1.EnvVar + |
+
+ Env follows the Env format to pass environment variables +to the pods created in the cluster + |
+
envFrom +[]core/v1.EnvFromSource + |
+
+ EnvFrom follows the EnvFrom format to pass environment variables +sources to the pods to be used by Env + |
+
managed +ManagedConfiguration + |
+
+ The configuration that is used by the portions of PostgreSQL that are managed by the instance manager + |
+
seccompProfile +core/v1.SeccompProfile + |
+
+ The SeccompProfile applied to every Pod and Container.
+Defaults to: |
+
tablespaces +[]TablespaceConfiguration + |
+
+ The tablespaces configuration + |
+
enablePDB +bool + |
+
+ Manage the |
+
plugins +[]PluginConfiguration + |
+
+ The plugins configuration, containing +any plugin to be loaded with the corresponding configuration + |
+
probes +ProbesConfiguration + |
+
+ The configuration of the probes to be injected +in the PostgreSQL Pods. + |
+
ClusterStatus defines the observed state of Cluster
+ +Field | Description |
---|---|
instances +int + |
+
+ The total number of PVC Groups detected in the cluster. It may differ from the number of existing instance pods. + |
+
readyInstances +int + |
+
+ The total number of ready instances in the cluster. It is equal to the number of ready instance pods. + |
+
instancesStatus +map[PodStatus][]string + |
+
+ InstancesStatus indicates in which status the instances are + |
+
instancesReportedState +map[PodName]InstanceReportedState + |
+
+ The reported state of the instances during the last reconciliation loop + |
+
managedRolesStatus +ManagedRoles + |
+
+ ManagedRolesStatus reports the state of the managed roles in the cluster + |
+
tablespacesStatus +[]TablespaceState + |
+
+ TablespacesStatus reports the state of the declarative tablespaces in the cluster + |
+
timelineID +int + |
+
+ The timeline of the Postgres cluster + |
+
topology +Topology + |
+
+ Instances topology. + |
+
latestGeneratedNode +int + |
+
+ ID of the latest generated node (used to avoid node name clashing) + |
+
currentPrimary +string + |
+
+ Current primary instance + |
+
targetPrimary +string + |
+
+ Target primary instance, this is different from the previous one +during a switchover or a failover + |
+
lastPromotionToken +string + |
+
+ LastPromotionToken is the last verified promotion token that +was used to promote a replica cluster + |
+
pvcCount +int32 + |
+
+ How many PVCs have been created by this cluster + |
+
jobCount +int32 + |
+
+ How many Jobs have been created by this cluster + |
+
danglingPVC +[]string + |
+
+ List of all the PVCs created by this cluster and still available +which are not attached to a Pod + |
+
resizingPVC +[]string + |
+
+ List of all the PVCs that have ResizingPVC condition. + |
+
initializingPVC +[]string + |
+
+ List of all the PVCs that are being initialized by this cluster + |
+
healthyPVC +[]string + |
+
+ List of all the PVCs not dangling nor initializing + |
+
unusablePVC +[]string + |
+
+ List of all the PVCs that are unusable because another PVC is missing + |
+
licenseStatus +github.com/EnterpriseDB/cloud-native-postgres/pkg/licensekey.Status + |
+
+ Status of the license + |
+
writeService +string + |
+
+ Current write pod + |
+
readService +string + |
+
+ Current list of read pods + |
+
phase +string + |
+
+ Current phase of the cluster + |
+
phaseReason +string + |
+
+ Reason for the current phase + |
+
secretsResourceVersion +SecretsResourceVersion + |
+
+ The list of resource versions of the secrets +managed by the operator. Every change here is done in the +interest of the instance manager, which will refresh the +secret data + |
+
configMapResourceVersion +ConfigMapResourceVersion + |
+
+ The list of resource versions of the configmaps, +managed by the operator. Every change here is done in the +interest of the instance manager, which will refresh the +configmap data + |
+
certificates +CertificatesStatus + |
+
+ The configuration for the CA and related certificates, initialized with defaults. + |
+
firstRecoverabilityPoint +string + |
+
+ The first recoverability point, stored as a date in RFC3339 format. +This field is calculated from the content of FirstRecoverabilityPointByMethod. +Deprecated: the field is not set for backup plugins. + |
+
firstRecoverabilityPointByMethod +map[BackupMethod]meta/v1.Time + |
+
+ The first recoverability point, stored as a date in RFC3339 format, per backup method type. +Deprecated: the field is not set for backup plugins. + |
+
lastSuccessfulBackup +string + |
+
+ Last successful backup, stored as a date in RFC3339 format. +This field is calculated from the content of LastSuccessfulBackupByMethod. +Deprecated: the field is not set for backup plugins. + |
+
lastSuccessfulBackupByMethod +map[BackupMethod]meta/v1.Time + |
+
+ Last successful backup, stored as a date in RFC3339 format, per backup method type. +Deprecated: the field is not set for backup plugins. + |
+
lastFailedBackup +string + |
+
+ Last failed backup, stored as a date in RFC3339 format. +Deprecated: the field is not set for backup plugins. + |
+
cloudNativePostgresqlCommitHash +string + |
+
+ The commit hash number of which this operator running + |
+
currentPrimaryTimestamp +string + |
+
+ The timestamp when the last actual promotion to primary has occurred + |
+
currentPrimaryFailingSinceTimestamp +string + |
+
+ The timestamp when the primary was detected to be unhealthy. +This field is reported when `.spec.failoverDelay` is populated or during online upgrades + |
+
targetPrimaryTimestamp +string + |
+
+ The timestamp when the last request for a new primary has occurred + |
+
poolerIntegrations +PoolerIntegrations + |
+
+ The integration needed by poolers referencing the cluster + |
+
cloudNativePostgresqlOperatorHash +string + |
+
+ The hash of the binary of the operator + |
+
availableArchitectures +[]AvailableArchitecture + |
+
+ AvailableArchitectures reports the available architectures of a cluster + |
+
conditions +[]meta/v1.Condition + |
+
+ Conditions for cluster object + |
+
instanceNames +[]string + |
+
+ List of instance names in the cluster + |
+
onlineUpdateEnabled +bool + |
+
+ OnlineUpdateEnabled shows if the online upgrade is enabled inside the cluster + |
+
image +string + |
+
+ Image contains the image name used by the pods + |
+
pgDataImageInfo +ImageInfo + |
+
+ PGDataImageInfo contains the details of the latest image that has run on the current data directory. + |
+
pluginStatus +[]PluginStatus + |
+
+ PluginStatus is the status of the loaded plugins + |
+
switchReplicaClusterStatus +SwitchReplicaClusterStatus + |
+
+ SwitchReplicaClusterStatus is the status of the switch to replica cluster + |
+
demotionToken +string + |
+
+ DemotionToken is a JSON token containing the information +from pg_controldata such as Database system identifier, Latest checkpoint's +TimeLineID, Latest checkpoint's REDO location, Latest checkpoint's REDO +WAL file, and Time of latest checkpoint + |
+
systemID +string + |
+
+ SystemID is the latest detected PostgreSQL SystemID + |
+
ConfigMapResourceVersion is the resource versions of the config maps managed by the operator
+ +Field | Description |
---|---|
metrics +map[string]string + |
+
+ A map with the versions of all the config maps used to pass metrics. +Map keys are the config map names, map values are the versions + |
+
DataDurabilityLevel specifies how strictly to enforce synchronous replication when cluster instances are unavailable. Options are `required` or `preferred`.
DataSource contains the configuration required to bootstrap a PostgreSQL cluster from existing storage
+ +Field | Description |
---|---|
storage [Required]+core/v1.TypedLocalObjectReference + |
+
+ Configuration of the storage of the instances + |
+
walStorage +core/v1.TypedLocalObjectReference + |
+
+ Configuration of the storage for PostgreSQL WAL (Write-Ahead Log) + |
+
tablespaceStorage +map[string]core/v1.TypedLocalObjectReference + |
+
+ Configuration of the storage for PostgreSQL tablespaces + |
+
DatabaseObjectSpec contains the fields which are common to every +database object
+ +Field | Description |
---|---|
name [Required]+string + |
+
+ Name of the extension/schema + |
+
ensure +EnsureOption + |
+
+ Specifies whether an extension/schema should be present or absent in the database. If set to `present`, the object is created when missing; if set to `absent`, it is dropped when present + |
+
DatabaseObjectStatus is the status of the managed database objects
+ +Field | Description |
---|---|
name [Required]+string + |
+
+ The name of the object + |
+
applied [Required]+bool + |
+
+ True if the object has been installed successfully in the database + |
+
message +string + |
+
+ Message is the object reconciliation message + |
+
DatabaseReclaimPolicy describes a policy for end-of-life maintenance of databases.
## DatabaseRoleRef

**Appears in:**

- [TablespaceConfiguration](#postgresql-k8s-enterprisedb-io-v1-TablespaceConfiguration)

DatabaseRoleRef is a reference to a role available inside PostgreSQL.
+ +Field | Description |
---|---|
name +string + |
++ No description provided. | +
DatabaseSpec is the specification of a PostgreSQL Database, built around the `CREATE DATABASE`, `ALTER DATABASE`, and `DROP DATABASE` SQL commands of PostgreSQL.
Field | Description |
---|---|
cluster [Required]+core/v1.LocalObjectReference + |
+
+ The name of the PostgreSQL cluster hosting the database. + |
+
ensure +EnsureOption + |
+
+ Ensure the PostgreSQL database is |
+
name [Required]+string + |
+
+ The name of the database to create inside PostgreSQL. This setting cannot be changed. + |
+
owner [Required]+string + |
+
+ Maps to the |
+
template +string + |
+
+ Maps to the |
+
encoding +string + |
+
+ Maps to the |
+
locale +string + |
+
+ Maps to the |
+
localeProvider +string + |
+
+ Maps to the |
+
localeCollate +string + |
+
+ Maps to the |
+
localeCType +string + |
+
+ Maps to the |
+
icuLocale +string + |
+
+ Maps to the |
+
icuRules +string + |
+
+ Maps to the |
+
builtinLocale +string + |
+
+ Maps to the |
+
collationVersion +string + |
+
+ Maps to the |
+
isTemplate +bool + |
+
+ Maps to the |
+
allowConnections +bool + |
+
+ Maps to the |
+
connectionLimit +int + |
+
+ Maps to the |
+
tablespace +string + |
+
+ Maps to the |
+
databaseReclaimPolicy +DatabaseReclaimPolicy + |
+
+ The policy for end-of-life maintenance of this database. + |
+
schemas +[]SchemaSpec + |
+
+ The list of schemas to be managed in the database + |
+
extensions +[]ExtensionSpec + |
+
+ The list of extensions to be managed in the database + |
+
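To make the fields above concrete, here is a minimal sketch of a `Database` manifest; the cluster name, database name, owner, and the schema and extension entries are illustrative placeholders:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Database
metadata:
  name: db-one
spec:
  cluster:
    name: cluster-example
  name: one
  owner: app
  schemas:
    - name: reporting      # placeholder schema
      owner: app
  extensions:
    - name: bloom          # placeholder extension
      ensure: present
```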
DatabaseStatus defines the observed state of Database
+ +Field | Description |
---|---|
observedGeneration +int64 + |
+
+ A sequence number representing the latest +desired state that was synchronized + |
+
applied +bool + |
+
+ Applied is true if the database was reconciled correctly + |
+
message +string + |
+
+ Message is the reconciliation output message + |
+
schemas +[]DatabaseObjectStatus + |
+
+ Schemas is the status of the managed schemas + |
+
extensions +[]DatabaseObjectStatus + |
+
+ Extensions is the status of the managed extensions + |
+
EPASConfiguration contains EDB Postgres Advanced Server specific configurations
+ +Field | Description |
---|---|
audit +bool + |
+
+ If true enables edb_audit logging + |
+
tde +TDEConfiguration + |
+
+ TDE configuration + |
+
EmbeddedObjectMetadata contains metadata to be inherited by all resources related to a Cluster
+ +Field | Description |
---|---|
labels +map[string]string + |
++ No description provided. | +
annotations +map[string]string + |
++ No description provided. | +
EnsureOption represents whether we should enforce the presence or absence of a Role in a PostgreSQL instance.

## EphemeralVolumesSizeLimitConfiguration

**Appears in:**

- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)

EphemeralVolumesSizeLimitConfiguration contains the configuration of the ephemeral storage.
+ +Field | Description |
---|---|
shm +k8s.io/apimachinery/pkg/api/resource.Quantity + |
+
+ Shm is the size limit of the shared memory volume + |
+
temporaryData +k8s.io/apimachinery/pkg/api/resource.Quantity + |
+
+ TemporaryData is the size limit of the temporary data volume + |
+
ExtensionConfiguration is the configuration used to add +PostgreSQL extensions to the Cluster.
+ +Field | Description |
---|---|
name [Required]+string + |
+
+ The name of the extension, required + |
+
image [Required]+core/v1.ImageVolumeSource + |
+
+ The image containing the extension, required + |
+
extension_control_path +[]string + |
+
+ The list of directories inside the image which should be added to extension_control_path. +If not defined, defaults to "/share". + |
+
dynamic_library_path +[]string + |
+
+ The list of directories inside the image which should be added to dynamic_library_path. +If not defined, defaults to "/lib". + |
+
ld_library_path +[]string + |
+
+ The list of directories inside the image which should be added to ld_library_path. + |
+
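As a rough sketch of how these fields combine, the fragment below declares an extension image inside a Cluster's `.spec.postgresql.extensions` stanza; the extension name and image reference are placeholders:

```yaml
# Fragment of a Cluster spec; extension name and image reference are placeholders.
postgresql:
  extensions:
    - name: my-extension
      image:
        reference: registry.example.com/my-extension:1.0
      extension_control_path:
        - /share
      dynamic_library_path:
        - /lib
```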
ExtensionSpec configures an extension in a database
+ +Field | Description |
---|---|
DatabaseObjectSpec +DatabaseObjectSpec + |
+(Members of DatabaseObjectSpec are embedded into this type.)
+ Common fields + |
+
version [Required]+string + |
+
+ The version of the extension to install. If empty, the operator will +install the default version (whatever is specified in the +extension's control file) + |
+
schema [Required]+string + |
+
+ The name of the schema in which to install the extension's objects, +in case the extension allows its contents to be relocated. If not +specified (default), and the extension's control file does not +specify a schema either, the current default object creation schema +is used. + |
+
ExternalCluster represents the connection parameters to an +external cluster which is used in the other sections of the configuration
+ +Field | Description |
---|---|
name [Required]+string + |
+
+ The server name, required + |
+
connectionParameters +map[string]string + |
+
+ The list of connection parameters, such as dbname, host, username, etc + |
+
sslCert +core/v1.SecretKeySelector + |
+
+ The reference to an SSL certificate to be used to connect to this +instance + |
+
sslKey +core/v1.SecretKeySelector + |
+
+ The reference to an SSL private key to be used to connect to this +instance + |
+
sslRootCert +core/v1.SecretKeySelector + |
+
+ The reference to an SSL CA public key to be used to connect to this +instance + |
+
password +core/v1.SecretKeySelector + |
+
+ The reference to the password to be used to connect to the server.
+If a password is provided, EDB Postgres for Kubernetes creates a PostgreSQL
+passfile at |
+
barmanObjectStore +github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanObjectStoreConfiguration + |
+
+ The configuration for the barman-cloud tool suite + |
+
plugin [Required]+PluginConfiguration + |
+
+ The configuration of the plugin that is taking care +of WAL archiving and backups for this external cluster + |
+
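A sketch of an `externalClusters` entry using the fields above; the server name, host, and secret names are placeholders:

```yaml
externalClusters:
  - name: cluster-origin                         # placeholder server name
    connectionParameters:
      host: cluster-origin-rw.default.svc        # placeholder host
      user: postgres
      dbname: postgres
    password:
      name: cluster-origin-superuser             # placeholder secret
      key: password
    sslRootCert:
      name: cluster-origin-ca                    # placeholder secret
      key: ca.crt
```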
FailoverQuorumStatus is the latest observed status of the failover +quorum of the PG cluster.
+ +Field | Description |
---|---|
method +string + |
+
+ Contains the latest reported Method value. + |
+
standbyNames +[]string + |
+
+ StandbyNames is the list of potentially synchronous +instance names. + |
+
standbyNumber +int + |
+
+ StandbyNumber is the number of synchronous standbys that transactions +need to wait for replies from. + |
+
primary +string + |
+
+ Primary is the name of the primary instance that updated +this object the latest time. + |
+
ImageCatalogRef defines the reference to a major version in an ImageCatalog
+ +Field | Description |
---|---|
TypedLocalObjectReference +core/v1.TypedLocalObjectReference + |
+(Members of TypedLocalObjectReference are embedded into this type.)
+ No description provided. |
+
major [Required]+int + |
+
+ The major version of PostgreSQL we want to use from the ImageCatalog + |
+
ImageCatalogSpec defines the desired ImageCatalog
+ +Field | Description |
---|---|
images [Required]+[]CatalogImage + |
+
+ List of CatalogImages available in the catalog + |
+
ImageInfo contains the information about a PostgreSQL image
+ +Field | Description |
---|---|
image [Required]+string + |
+
+ Image is the image name + |
+
majorVersion [Required]+int + |
+
+ MajorVersion is the major version of the image + |
+
Import contains the configuration to init a database from a logic snapshot of an externalCluster
+ +Field | Description |
---|---|
source [Required]+ImportSource + |
+
+ The source of the import + |
+
type [Required]+SnapshotType + |
+
+ The import type. Can be |
+
databases [Required]+[]string + |
+
+ The databases to import + |
+
roles +[]string + |
+
+ The roles to import + |
+
postImportApplicationSQL +[]string + |
+
+ List of SQL queries to be executed as a superuser in the application +database right after is imported - to be used with extreme care +(by default empty). Only available in microservice type. + |
+
schemaOnly +bool + |
+
+ When set to true, only the |
+
pgDumpExtraOptions +[]string + |
+
+ List of custom options to pass to the |
+
pgRestoreExtraOptions +[]string + |
+
+ List of custom options to pass to the |
+
ImportSource describes the source for the logical snapshot
+ +Field | Description |
---|---|
externalCluster [Required]+string + |
+
+ The name of the externalCluster used for import + |
+
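The import configuration is typically referenced from the `initdb` bootstrap stanza; a sketch of a `microservice`-type import, with placeholder cluster and database names, might look like this:

```yaml
bootstrap:
  initdb:
    database: app
    owner: app
    import:
      type: microservice
      databases:
        - app
      source:
        externalCluster: cluster-origin   # placeholder, must match an externalClusters entry
      pgDumpExtraOptions:
        - "--jobs=2"
```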
InstanceID contains the information to identify an instance
+ +Field | Description |
---|---|
podName +string + |
+
+ The pod name + |
+
ContainerID +string + |
+
+ The container ID + |
+
InstanceReportedState describes the last reported state of an instance during a reconciliation loop
+ +Field | Description |
---|---|
isPrimary [Required]+bool + |
+
+ indicates if an instance is the primary one + |
+
timeLineID +int + |
+
+ indicates on which TimelineId the instance is + |
+
ip [Required]+string + |
+
+ IP address of the instance + |
+
IsolationCheckConfiguration contains the configuration for the isolation check +functionality in the liveness probe
+ +Field | Description |
---|---|
enabled +bool + |
+
+ Whether primary isolation checking is enabled for the liveness probe + |
+
requestTimeout +int + |
+
+ Timeout in milliseconds for requests during the primary isolation check + |
+
connectionTimeout +int + |
+
+ Timeout in milliseconds for connections during the primary isolation check + |
+
LDAPBindAsAuth provides the required fields to use the +bind authentication for LDAP
+ +Field | Description |
---|---|
prefix +string + |
+
+ Prefix for the bind authentication option + |
+
suffix +string + |
+
+ Suffix for the bind authentication option + |
+
LDAPBindSearchAuth provides the required fields to use +the bind+search LDAP authentication process
+ +Field | Description |
---|---|
baseDN +string + |
+
+ Root DN to begin the user search + |
+
bindDN +string + |
+
+ DN of the user to bind to the directory + |
+
bindPassword +core/v1.SecretKeySelector + |
+
+ Secret with the password for the user to bind to the directory + |
+
searchAttribute +string + |
+
+ Attribute to match against the username + |
+
searchFilter +string + |
+
+ Search filter to use when doing the search+bind authentication + |
+
LDAPConfig contains the parameters needed for LDAP authentication
+ +Field | Description |
---|---|
server +string + |
+
+ LDAP hostname or IP address + |
+
port +int + |
+
+ LDAP server port + |
+
scheme +LDAPScheme + |
+
+ LDAP schema to be used, possible options are |
+
bindAsAuth +LDAPBindAsAuth + |
+
+ Bind as authentication configuration + |
+
bindSearchAuth +LDAPBindSearchAuth + |
+
+ Bind+Search authentication configuration + |
+
tls +bool + |
+
+ Set to 'true' to enable LDAP over TLS. 'false' is default + |
+
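A sketch of an LDAP bind+search configuration under `.spec.postgresql.ldap`, assuming `ldap` as the scheme; the host, DNs, and secret name are placeholders:

```yaml
postgresql:
  ldap:
    server: ldap.example.com              # placeholder LDAP host
    port: 389
    scheme: ldap
    tls: true
    bindSearchAuth:
      baseDN: ou=people,dc=example,dc=com # placeholder base DN
      bindDN: cn=admin,dc=example,dc=com  # placeholder bind DN
      bindPassword:
        name: ldap-bind-secret            # placeholder secret
        key: password
      searchAttribute: uid
```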
LDAPScheme defines the possible schemes for LDAP
## LivenessProbe

**Appears in:**

- [ProbesConfiguration](#postgresql-k8s-enterprisedb-io-v1-ProbesConfiguration)

LivenessProbe is the configuration of the liveness probe.
+ +Field | Description |
---|---|
Probe +Probe + |
+(Members of Probe are embedded into this type.)
+ Probe is the standard probe configuration + |
+
isolationCheck +IsolationCheckConfiguration + |
+
+ Configure the feature that extends the liveness probe for a primary +instance. In addition to the basic checks, this verifies whether the +primary is isolated from the Kubernetes API server and from its +replicas, ensuring that it can be safely shut down if network +partition or API unavailability is detected. Enabled by default. + |
+
ManagedConfiguration represents the portions of PostgreSQL that are managed +by the instance manager
+ +Field | Description |
---|---|
roles +[]RoleConfiguration + |
+
+ Database roles managed by the |
+
services +ManagedServices + |
+
+ Services roles managed by the |
+
ManagedRoles tracks the status of a cluster's managed roles
+ +Field | Description |
---|---|
byStatus +map[RoleStatus][]string + |
+
+ ByStatus gives the list of roles in each state + |
+
cannotReconcile +map[string][]string + |
+
+ CannotReconcile lists roles that cannot be reconciled in PostgreSQL, +with an explanation of the cause + |
+
passwordStatus +map[string]PasswordState + |
+
+ PasswordStatus gives the last transaction id and password secret version for each managed role + |
+
ManagedService represents a specific service managed by the cluster. +It includes the type of service and its associated template specification.
+ +Field | Description |
---|---|
selectorType [Required]+ServiceSelectorType + |
+
+ SelectorType specifies the type of selectors that the service will have. +Valid values are "rw", "r", and "ro", representing read-write, read, and read-only services. + |
+
updateStrategy +ServiceUpdateStrategy + |
+
+ UpdateStrategy describes how the service differences should be reconciled + |
+
serviceTemplate [Required]+ServiceTemplateSpec + |
+
+ ServiceTemplate is the template specification for the service. + |
+
ManagedServices represents the services managed by the cluster.
+ +Field | Description |
---|---|
disabledDefaultServices +[]ServiceSelectorType + |
+
+ DisabledDefaultServices is a list of service types that are disabled by default. +Valid values are "r", and "ro", representing read, and read-only services. + |
+
additional +[]ManagedService + |
+
+ Additional is a list of additional managed services specified by the user. + |
+
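Putting `ManagedServices` together, a sketch of a `.spec.managed.services` stanza; the service name, labels, and the `patch` update strategy value are illustrative assumptions:

```yaml
managed:
  services:
    disabledDefaultServices:
      - ro
      - r
    additional:
      - selectorType: rw
        updateStrategy: patch              # assumed update strategy value
        serviceTemplate:
          metadata:
            name: cluster-example-rw-lb    # placeholder service name
            labels:
              service-tier: external
          spec:
            type: LoadBalancer
```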
Metadata is a structure similar to the metav1.ObjectMeta, but still +parseable by controller-gen to create a suitable CRD for the user. +The comment of PodTemplateSpec has an explanation of why we are +not using the core data types.
+ +Field | Description |
---|---|
name +string + |
+
+ The name of the resource. Only supported for certain types + |
+
labels +map[string]string + |
+
+ Map of string keys and values that can be used to organize and categorize +(scope and select) objects. May match selectors of replication controllers +and services. +More info: http://kubernetes.io/docs/user-guide/labels + |
+
annotations +map[string]string + |
+
+ Annotations is an unstructured key value map stored with a resource that may be +set by external tools to store and retrieve arbitrary metadata. They are not +queryable and should be preserved when modifying objects. +More info: http://kubernetes.io/docs/user-guide/annotations + |
+
MonitoringConfiguration is the type containing all the monitoring +configuration for a certain cluster
+ +Field | Description |
---|---|
disableDefaultQueries +bool + |
+
+ Whether the default queries should be injected.
+Set it to |
+
customQueriesConfigMap +[]github.com/cloudnative-pg/machinery/pkg/api.ConfigMapKeySelector + |
+
+ The list of config maps containing the custom queries + |
+
customQueriesSecret +[]github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector + |
+
+ The list of secrets containing the custom queries + |
+
enablePodMonitor +bool + |
+
+ Enable or disable the |
+
tls +ClusterMonitoringTLSConfiguration + |
+
+ Configure TLS communication for the metrics endpoint. +Changing tls.enabled option will force a rollout of all instances. + |
+
podMonitorMetricRelabelings +[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig + |
+
+ The list of metric relabelings for the |
+
podMonitorRelabelings +[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig + |
+
+ The list of relabelings for the |
+
NodeMaintenanceWindow contains information that the operator +will use while upgrading the underlying node.
+This option is only useful when the chosen storage prevents the Pods +from being freely moved across nodes.
+ +Field | Description |
---|---|
reusePVC +bool + |
+
+ Reuse the existing PVC (wait for the node to come
+up again) or not (recreate it elsewhere - when |
+
inProgress +bool + |
+
+ Is there a node maintenance activity in progress? + |
+
OnlineConfiguration contains the configuration parameters for the online volume snapshot
+ +Field | Description |
---|---|
waitForArchive +bool + |
+
+ If false, the function will return immediately after the backup is completed, +without waiting for WAL to be archived. +This behavior is only useful with backup software that independently monitors WAL archiving. +Otherwise, WAL required to make the backup consistent might be missing and make the backup useless. +By default, or when this parameter is true, pg_backup_stop will wait for WAL to be archived when archiving is +enabled. +On a standby, this means that it will wait only when archive_mode = always. +If write activity on the primary is low, it may be useful to run pg_switch_wal on the primary in order to trigger +an immediate segment switch. + |
+
immediateCheckpoint +bool + |
+
+ Control whether the I/O workload for the backup initial checkpoint will
+be limited, according to the |
+
PasswordState represents the state of the password of a managed RoleConfiguration
+ +Field | Description |
---|---|
transactionID +int64 + |
+
+ the last transaction ID to affect the role definition in PostgreSQL + |
+
resourceVersion +string + |
+
+ the resource version of the password secret + |
+
PgBouncerIntegrationStatus encapsulates the needed integration for the pgbouncer poolers referencing the cluster
+ +Field | Description |
---|---|
secrets +[]string + |
++ No description provided. | +
PgBouncerPoolMode is the mode of PgBouncer
## PgBouncerSecrets

**Appears in:**

- [PoolerSecrets](#postgresql-k8s-enterprisedb-io-v1-PoolerSecrets)

PgBouncerSecrets contains the versions of the secrets used by pgbouncer.
+ +Field | Description |
---|---|
authQuery +SecretVersion + |
+
+ The auth query secret version + |
+
PgBouncerSpec defines how to configure PgBouncer
+ +Field | Description |
---|---|
poolMode +PgBouncerPoolMode + |
+
+ The pool mode. Default: |
+
authQuerySecret +github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ The credentials of the user that need to be used for the authentication +query. In case it is specified, also an AuthQuery +(e.g. "SELECT usename, passwd FROM pg_catalog.pg_shadow WHERE usename=$1") +has to be specified and no automatic CNP Cluster integration will be triggered. + |
+
authQuery +string + |
+
+ The query that will be used to download the hash of the password +of a certain user. Default: "SELECT usename, passwd FROM public.user_search($1)". +In case it is specified, also an AuthQuerySecret has to be specified and +no automatic CNP Cluster integration will be triggered. + |
+
parameters +map[string]string + |
+
+ Additional parameters to be passed to PgBouncer - please check +the CNP documentation for a list of options you can configure + |
+
pg_hba +[]string + |
+
+ PostgreSQL Host Based Authentication rules (lines to be appended +to the pg_hba.conf file) + |
+
paused +bool + |
+
+ When set to |
+
PluginConfiguration specifies a plugin that need to be loaded for this +cluster to be reconciled
+ +Field | Description |
---|---|
name [Required]+string + |
+
+ Name is the plugin name + |
+
enabled +bool + |
+
+ Enabled is true if this plugin will be used + |
+
isWALArchiver +bool + |
+
+ Only one plugin can be declared as WALArchiver. +Cannot be active if ".spec.backup.barmanObjectStore" configuration is present. + |
+
parameters +map[string]string + |
+
+ Parameters is the configuration of the plugin + |
+
PluginStatus is the status of a loaded plugin
+ +Field | Description |
---|---|
name [Required]+string + |
+
+ Name is the name of the plugin + |
+
version [Required]+string + |
+
+ Version is the version of the plugin loaded by the +latest reconciliation loop + |
+
capabilities +[]string + |
+
+ Capabilities are the list of capabilities of the +plugin + |
+
operatorCapabilities +[]string + |
+
+ OperatorCapabilities are the list of capabilities of the +plugin regarding the reconciler + |
+
walCapabilities +[]string + |
+
+ WALCapabilities are the list of capabilities of the +plugin regarding the WAL management + |
+
backupCapabilities +[]string + |
+
+ BackupCapabilities are the list of capabilities of the +plugin regarding the Backup management + |
+
restoreJobHookCapabilities +[]string + |
+
+ RestoreJobHookCapabilities are the list of capabilities of the +plugin regarding the RestoreJobHook management + |
+
status +string + |
+
+ Status contain the status reported by the plugin through the SetStatusInCluster interface + |
+
PodTemplateSpec is a structure allowing the user to set +a template for Pod generation.
+Unfortunately we can't use the corev1.PodTemplateSpec +type because the generated CRD won't have the field for the +metadata section.
+References: +https://github.com/kubernetes-sigs/controller-tools/issues/385 +https://github.com/kubernetes-sigs/controller-tools/issues/448 +https://github.com/prometheus-operator/prometheus-operator/issues/3041
+ +Field | Description |
---|---|
metadata +Metadata + |
+
+ Standard object's metadata. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + |
+
spec +core/v1.PodSpec + |
+
+ Specification of the desired behavior of the pod. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
PodTopologyLabels represent the topology of a Pod. map[labelName]labelValue
## PoolerIntegrations

**Appears in:**

- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)

PoolerIntegrations encapsulates the needed integration for the poolers referencing the cluster.
+ +Field | Description |
---|---|
pgBouncerIntegration +PgBouncerIntegrationStatus + |
++ No description provided. | +
PoolerMonitoringConfiguration is the type containing all the monitoring +configuration for a certain Pooler.
+Mirrors the Cluster's MonitoringConfiguration but without the custom queries +part for now.
+ +Field | Description |
---|---|
enablePodMonitor +bool + |
+
+ Enable or disable the |
+
podMonitorMetricRelabelings +[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig + |
+
+ The list of metric relabelings for the |
+
podMonitorRelabelings +[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig + |
+
+ The list of relabelings for the |
+
PoolerSecrets contains the versions of all the secrets used
+ +Field | Description |
---|---|
serverTLS +SecretVersion + |
+
+ The server TLS secret version + |
+
serverCA +SecretVersion + |
+
+ The server CA secret version + |
+
clientCA +SecretVersion + |
+
+ The client CA secret version + |
+
pgBouncerSecrets +PgBouncerSecrets + |
+
+ The version of the secrets used by PgBouncer + |
+
PoolerSpec defines the desired state of Pooler
+ +Field | Description |
---|---|
cluster [Required]+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ This is the cluster reference on which the Pooler will work. +Pooler name should never match with any cluster name within the same namespace. + |
+
type +PoolerType + |
+
+ Type of service to forward traffic to. Default: |
+
instances +int32 + |
+
+ The number of replicas we want. Default: 1. + |
+
template +PodTemplateSpec + |
+
+ The template of the Pod to be created + |
+
pgbouncer [Required]+PgBouncerSpec + |
+
+ The PgBouncer configuration + |
+
deploymentStrategy +apps/v1.DeploymentStrategy + |
+
+ The deployment strategy to use for pgbouncer to replace existing pods with new ones + |
+
monitoring +PoolerMonitoringConfiguration + |
+
+ The configuration of the monitoring infrastructure of this pooler. + |
+
serviceTemplate +ServiceTemplateSpec + |
+
+ Template for the Service to be created + |
+
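A sketch of a complete `Pooler` manifest built from the fields above; the names and PgBouncer parameters are placeholders:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Pooler
metadata:
  name: pooler-example-rw
spec:
  cluster:
    name: cluster-example
  type: rw
  instances: 3
  pgbouncer:
    poolMode: session
    parameters:
      max_client_conn: "1000"    # placeholder tuning values
      default_pool_size: "10"
  monitoring:
    enablePodMonitor: true
```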
PoolerStatus defines the observed state of Pooler
+ +Field | Description |
---|---|
secrets +PoolerSecrets + |
+
+ The resource version of the config object + |
+
instances +int32 + |
+
+ The number of pods trying to be scheduled + |
+
PoolerType is the type of the connection pool, meaning the service we are targeting. Allowed values are `rw` and `ro`.
PostgresConfiguration defines the PostgreSQL configuration
+ +Field | Description |
---|---|
parameters +map[string]string + |
+
+ PostgreSQL configuration options (postgresql.conf) + |
+
synchronous +SynchronousReplicaConfiguration + |
+
+ Configuration of the PostgreSQL synchronous replication feature + |
+
pg_hba +[]string + |
+
+ PostgreSQL Host Based Authentication rules (lines to be appended +to the pg_hba.conf file) + |
+
pg_ident +[]string + |
+
+ PostgreSQL User Name Maps rules (lines to be appended +to the pg_ident.conf file) + |
+
epas +EPASConfiguration + |
+
+ EDB Postgres Advanced Server specific configurations + |
+
syncReplicaElectionConstraint +SyncReplicaElectionConstraints + |
+
+ Requirements to be met by sync replicas. This will affect how the "synchronous_standby_names" parameter will be +set up. + |
+
shared_preload_libraries +[]string + |
+
+ Lists of shared preload libraries to add to the default ones + |
+
ldap +LDAPConfig + |
+
+ Options to specify LDAP configuration + |
+
promotionTimeout +int32 + |
+
+ Specifies the maximum number of seconds to wait when promoting an instance to primary. +Default value is 40000000, greater than one year in seconds, +big enough to simulate an infinite timeout + |
+
enableAlterSystem +bool + |
+
+ If this parameter is true, the user will be able to invoke |
+
extensions +[]ExtensionConfiguration + |
+
+ The configuration of the extensions to be added + |
+
PrimaryUpdateMethod contains the method to use when upgrading +the primary server of the cluster as part of rolling updates
## PrimaryUpdateStrategy

(Alias of `string`)

**Appears in:**

- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)

PrimaryUpdateStrategy contains the strategy to follow when upgrading the primary server of the cluster as part of rolling updates.

## Probe

**Appears in:**

- [LivenessProbe](#postgresql-k8s-enterprisedb-io-v1-LivenessProbe)

- [ProbeWithStrategy](#postgresql-k8s-enterprisedb-io-v1-ProbeWithStrategy)

Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.
+ +Field | Description |
---|---|
initialDelaySeconds +int32 + |
+
+ Number of seconds after the container has started before liveness probes are initiated. +More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + |
+
timeoutSeconds +int32 + |
+
+ Number of seconds after which the probe times out. +Defaults to 1 second. Minimum value is 1. +More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + |
+
periodSeconds +int32 + |
+
+ How often (in seconds) to perform the probe. +Default to 10 seconds. Minimum value is 1. + |
+
successThreshold +int32 + |
+
+ Minimum consecutive successes for the probe to be considered successful after having failed. +Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + |
+
failureThreshold +int32 + |
+
+ Minimum consecutive failures for the probe to be considered failed after having succeeded. +Defaults to 3. Minimum value is 1. + |
+
terminationGracePeriodSeconds +int64 + |
+
+ Optional duration in seconds the pod needs to terminate gracefully upon probe failure. +The grace period is the duration in seconds after the processes running in the pod are sent +a termination signal and the time when the processes are forcibly halted with a kill signal. +Set this value longer than the expected cleanup time for your process. +If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this +value overrides the value provided by the pod spec. +Value must be non-negative integer. The value zero indicates stop immediately via +the kill signal (no opportunity to shut down). +This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. +Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. + |
+
ProbeStrategyType is the type of the strategy used to declare a PostgreSQL instance ready.

## ProbeWithStrategy

**Appears in:**

- [ProbesConfiguration](#postgresql-k8s-enterprisedb-io-v1-ProbesConfiguration)

ProbeWithStrategy is the configuration of the startup probe.
+ +Field | Description |
---|---|
Probe +Probe + |
+(Members of Probe are embedded into this type.)
+ Probe is the standard probe configuration + |
+
type +ProbeStrategyType + |
+
+ The probe strategy + |
+
maximumLag +k8s.io/apimachinery/pkg/api/resource.Quantity + |
+
+ Lag limit. Used only for |
+
ProbesConfiguration represent the configuration for the probes +to be injected in the PostgreSQL Pods
+ +Field | Description |
---|---|
startup [Required]+ProbeWithStrategy + |
+
+ The startup probe configuration + |
+
liveness [Required]+LivenessProbe + |
+
+ The liveness probe configuration + |
+
readiness [Required]+ProbeWithStrategy + |
+
+ The readiness probe configuration + |
+
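As an orientation only, a sketch of a `.spec.probes` stanza; the `streaming` strategy value and the specific thresholds are illustrative assumptions:

```yaml
probes:
  startup:
    type: streaming        # assumed strategy value
    maximumLag: 16Mi
    failureThreshold: 30
    periodSeconds: 10
  readiness:
    type: streaming        # assumed strategy value
    maximumLag: 32Mi
  liveness:
    periodSeconds: 10
    isolationCheck:
      enabled: true
      requestTimeout: 1000
      connectionTimeout: 1000
```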
PublicationReclaimPolicy defines a policy for end-of-life maintenance of Publications.
## PublicationSpec

**Appears in:**

- [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication)

PublicationSpec defines the desired state of Publication.
+ +Field | Description |
---|---|
cluster [Required]+core/v1.LocalObjectReference + |
+
+ The name of the PostgreSQL cluster that identifies the "publisher" + |
+
name [Required]+string + |
+
+ The name of the publication inside PostgreSQL + |
+
dbname [Required]+string + |
+
+ The name of the database where the publication will be installed in +the "publisher" cluster + |
+
parameters +map[string]string + |
+
+ Publication parameters part of the |
+
target [Required]+PublicationTarget + |
+
+ Target of the publication as expected by PostgreSQL |
+
publicationReclaimPolicy +PublicationReclaimPolicy + |
+
+ The policy for end-of-life maintenance of this publication + |
+
PublicationStatus defines the observed state of Publication
+ +Field | Description |
---|---|
observedGeneration +int64 + |
+
+ A sequence number representing the latest +desired state that was synchronized + |
+
applied +bool + |
+
+ Applied is true if the publication was reconciled correctly + |
+
message +string + |
+
+ Message is the reconciliation output message + |
+
PublicationTarget is what this publication should publish
+ +Field | Description |
---|---|
allTables +bool + |
+
+ Marks the publication as one that replicates changes for all tables
+in the database, including tables created in the future.
+Corresponding to |
+
objects +[]PublicationTargetObject + |
+
+ Just the following schema objects + |
+
PublicationTargetObject is an object to publish
+ +Field | Description |
---|---|
tablesInSchema +string + |
+
+ Marks the publication as one that replicates changes for all tables
+in the specified list of schemas, including tables created in the
+future. Corresponding to |
+
table +PublicationTargetTable + |
+
+ Specifies a list of tables to add to the publication. Corresponding
+to |
+
PublicationTargetTable is a table to publish
+ +Field | Description |
---|---|
only +bool + |
+
+ Whether to limit to the table only or include all its descendants + |
+
name [Required]+string + |
+
+ The table name + |
+
schema +string + |
+
+ The schema name + |
+
columns +[]string + |
+
+ The columns to publish + |
+
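A sketch of a `Publication` manifest targeting a single table; the cluster, database, and table names are placeholders:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Publication
metadata:
  name: publication-example
spec:
  cluster:
    name: cluster-example
  name: pub_one
  dbname: app
  target:
    objects:
      - table:
          schema: public
          name: orders        # placeholder table
          columns:
            - id
            - created_at
```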
RecoveryTarget allows to configure the moment where the recovery process +will stop. All the target options except TargetTLI are mutually exclusive.
+ +Field | Description |
---|---|
backupID +string + |
+
+ The ID of the backup from which to start the recovery process. +If empty (default) the operator will automatically detect the backup +based on targetTime or targetLSN if specified. Otherwise use the +latest available backup in chronological order. + |
+
targetTLI +string + |
+
+ The target timeline ("latest" or a positive integer) + |
+
targetXID +string + |
+
+ The target transaction ID + |
+
targetName +string + |
+
+ The target name (to be previously created
+with |
+
targetLSN +string + |
+
+ The target LSN (Log Sequence Number) + |
+
targetTime +string + |
+
+ The target time as a timestamp in the RFC3339 standard + |
+
targetImmediate +bool + |
+
+ End recovery as soon as a consistent state is reached + |
+
exclusive +bool + |
+
+ Set the target to be exclusive. If omitted, defaults to false, so that
+in Postgres, |
+
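A sketch of a point-in-time recovery target, assuming the usual `bootstrap.recovery` placement; the source name and timestamp are placeholders:

```yaml
bootstrap:
  recovery:
    source: cluster-origin                  # placeholder external cluster
    recoveryTarget:
      targetTime: "2024-05-01T10:00:00Z"    # placeholder RFC3339 timestamp
      targetTLI: "latest"
      exclusive: false
```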
ReplicaClusterConfiguration encapsulates the configuration of a replica +cluster
+ +Field | Description |
---|---|
self +string + |
+
+ Self defines the name of this cluster. It is used to determine if this is a primary or a replica cluster, comparing it with `primary` + |
+
primary +string + |
+
+ Primary defines which Cluster is defined to be the primary in the distributed PostgreSQL cluster, based on the +topology specified in externalClusters + |
+
source [Required]+string + |
+
+ The name of the external cluster which is the replication origin + |
+
enabled +bool + |
+
+ If replica mode is enabled, this cluster will be a replica of an +existing cluster. Replica cluster can be created from a recovery +object store or via streaming through pg_basebackup. +Refer to the Replica clusters page of the documentation for more information. + |
+
promotionToken +string + |
+
+ A demotion token generated by an external cluster used to +check if the promotion requirements are met. + |
+
minApplyDelay +meta/v1.Duration + |
+
+ When replica mode is enabled, this parameter allows you to replay +transactions only when the system time is at least the configured +time past the commit time. This provides an opportunity to correct +data loss errors. Note that when this parameter is set, a promotion +token cannot be used. + |
+
ReplicationSlotsConfiguration encapsulates the configuration +of replication slots
+ +Field | Description |
---|---|
highAvailability +ReplicationSlotsHAConfiguration + |
+
+ Replication slots for high availability configuration + |
+
updateInterval +int + |
+
+ Standby will update the status of the local replication slots
+every `updateInterval` seconds (default 30) + |
+
synchronizeReplicas +SynchronizeReplicasConfiguration + |
+
+ Configures the synchronization of the user defined physical replication slots + |
+
ReplicationSlotsHAConfiguration encapsulates the configuration +of the replication slots that are automatically managed by +the operator to control the streaming replication connections +with the standby instances for high availability (HA) purposes. +Replication slots are a PostgreSQL feature that makes sure +that PostgreSQL automatically keeps WAL files in the primary +when a streaming client (in this specific case a replica that +is part of the HA cluster) gets disconnected.
+ +Field | Description |
---|---|
enabled +bool + |
+
+ If enabled (default), the operator will automatically manage replication slots +on the primary instance and use them in streaming replication +connections with all the standby instances that are part of the HA +cluster. If disabled, the operator will not take advantage +of replication slots in streaming connections with the replicas. +This feature also controls replication slots in replica cluster, +from the designated primary to its cascading replicas. + |
+
slotPrefix +string + |
+
+ Prefix for replication slots managed by the operator for HA.
+It may only contain lower case letters, numbers, and the underscore character.
+This can only be set at creation time. By default set to |
+
synchronizeLogicalDecoding +bool + |
+
+ When enabled, the operator automatically manages synchronization of logical +decoding (replication) slots across high-availability clusters. +Requires one of the following conditions: +
|
+
RoleConfiguration is the representation, in Kubernetes, of a PostgreSQL role +with the additional field Ensure specifying whether to ensure the presence or +absence of the role in the database
+The defaults of the CREATE ROLE command are applied +Reference: https://www.postgresql.org/docs/current/sql-createrole.html
+ +Field | Description |
---|---|
name [Required]+string + |
+
+ Name of the role + |
+
comment +string + |
+
+ Description of the role + |
+
ensure +EnsureOption + |
+
+ Ensure the role is |
+
passwordSecret +github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ Secret containing the password of the role (if present) +If null, the password will be ignored unless DisablePassword is set + |
+
connectionLimit +int64 + |
+
+ If the role can log in, this specifies how many concurrent
+connections the role can make. |
+
validUntil +meta/v1.Time + |
+
+ Date and time after which the role's password is no longer valid. +When omitted, the password will never expire (default). + |
+
inRoles +[]string + |
+
+ List of one or more existing roles to which this role will be +immediately added as a new member. Default empty. + |
+
inherit +bool + |
+
+ Whether a role "inherits" the privileges of roles it is a member of.
+Defaults is |
+
disablePassword +bool + |
+
+ DisablePassword indicates that a role's password should be set to NULL in Postgres + |
+
superuser +bool + |
+
+ Whether the role is a |
+
createdb +bool + |
+
+ When set to |
+
createrole +bool + |
+
+ Whether the role will be permitted to create, alter, drop, comment
+on, change the security label for, and grant or revoke membership in
+other roles. Default is |
+
login +bool + |
+
+ Whether the role is allowed to log in. A role having the |
+
replication +bool + |
+
+ Whether a role is a replication role. A role must have this
+attribute (or be a superuser) in order to be able to connect to the
+server in replication mode (physical or logical replication) and in
+order to be able to create or drop replication slots. A role having
+the |
+
bypassrls +bool + |
+
+ Whether a role bypasses every row-level security (RLS) policy.
+Default is |
+
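A sketch of declarative role management under `.spec.managed.roles`; the role name, comment, and secret name are placeholders:

```yaml
managed:
  roles:
    - name: dante                   # placeholder role
      ensure: present
      comment: Application analyst
      login: true
      inherit: true
      connectionLimit: 10
      inRoles:
        - pg_monitor
      passwordSecret:
        name: dante-password        # placeholder secret
```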
SQLRefs holds references to ConfigMaps or Secrets +containing SQL files. The references are processed in a specific order: +first, all Secrets are processed, followed by all ConfigMaps. +Within each group, the processing order follows the sequence specified +in their respective arrays.
+ +Field | Description |
---|---|
secretRefs +[]github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector + |
+
+ SecretRefs holds a list of references to Secrets + |
+
configMapRefs +[]github.com/cloudnative-pg/machinery/pkg/api.ConfigMapKeySelector + |
+
+ ConfigMapRefs holds a list of references to ConfigMaps + |
+
ScheduledBackupSpec defines the desired state of ScheduledBackup
+ +Field | Description |
---|---|
suspend +bool + |
+
+ If this backup is suspended or not + |
+
immediate +bool + |
+
+ If the first backup has to be immediately start after creation or not + |
+
schedule [Required]+string + |
+
+ The schedule does not follow the same format used in Kubernetes CronJobs +as it includes an additional seconds specifier, +see https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format + |
+
cluster [Required]+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ The cluster to backup + |
+
backupOwnerReference +string + |
+
+ Indicates which ownerReference should be put inside the created backup resources. +
|
+
target +BackupTarget + |
+
+ The policy to decide which instance should perform this backup. If empty,
+it defaults to |
+
method +BackupMethod + |
+
+ The backup method to be used, possible options are |
+
pluginConfiguration +BackupPluginConfiguration + |
+
+ Configuration parameters passed to the plugin managing this backup + |
+
online +bool + |
+
+ Whether the default type of backup with volume snapshots is
+online/hot ( |
+
onlineConfiguration +OnlineConfiguration + |
+
+ Configuration parameters to control the online/hot backup with volume snapshots +Overrides the default settings specified in the cluster '.backup.volumeSnapshot.onlineConfiguration' stanza + |
+
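A sketch of a `ScheduledBackup` manifest using the six-field cron format described above; the schedule, cluster name, owner reference, and backup method are placeholders:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ScheduledBackup
metadata:
  name: backup-example
spec:
  schedule: "0 0 0 * * *"    # seconds minutes hours day-of-month month day-of-week
  cluster:
    name: cluster-example
  backupOwnerReference: self
  method: barmanObjectStore
  immediate: true
```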
ScheduledBackupStatus defines the observed state of ScheduledBackup
+ +Field | Description |
---|---|
lastCheckTime +meta/v1.Time + |
+
+ The latest time the schedule was evaluated + |
+
lastScheduleTime +meta/v1.Time + |
+
+ Information when was the last time that backup was successfully scheduled. + |
+
nextScheduleTime +meta/v1.Time + |
+
+ Next time we will run a backup + |
+
SchemaSpec configures a schema in a database
+ +Field | Description |
---|---|
DatabaseObjectSpec +DatabaseObjectSpec + |
+(Members of DatabaseObjectSpec are embedded into this type.)
+ Common fields + |
+
owner [Required]+string + |
+
+ The role name of the user who owns the schema inside PostgreSQL.
+It maps to the |
+
SecretVersion contains a secret name and its ResourceVersion
+ +Field | Description |
---|---|
name +string + |
+
+ The name of the secret + |
+
version +string + |
+
+ The ResourceVersion of the secret + |
+
SecretsResourceVersion is the resource versions of the secrets +managed by the operator
+ +Field | Description |
---|---|
superuserSecretVersion +string + |
+
+ The resource version of the "postgres" user secret + |
+
replicationSecretVersion +string + |
+
+ The resource version of the "streaming_replica" user secret + |
+
applicationSecretVersion +string + |
+
+ The resource version of the "app" user secret + |
+
managedRoleSecretVersion +map[string]string + |
+
+ The resource versions of the managed roles secrets + |
+
caSecretVersion +string + |
+
+ Unused. Retained for compatibility with old versions. + |
+
clientCaSecretVersion +string + |
+
+ The resource version of the PostgreSQL client-side CA secret version + |
+
serverCaSecretVersion +string + |
+
+ The resource version of the PostgreSQL server-side CA secret version + |
+
serverSecretVersion +string + |
+
+ The resource version of the PostgreSQL server-side secret version + |
+
barmanEndpointCA +string + |
+
+ The resource version of the Barman Endpoint CA if provided + |
+
externalClusterSecretVersion +map[string]string + |
+
+ The resource versions of the external cluster secrets + |
+
metrics +map[string]string + |
+
+ A map with the versions of all the secrets used to pass metrics. +Map keys are the secret names, map values are the versions + |
+
ServiceAccountTemplate contains the template needed to generate the service accounts
+ +Field | Description |
---|---|
metadata [Required]+Metadata + |
+
+ Metadata are the metadata to be used for the generated +service account + |
+
ServiceSelectorType describes a valid value for generating the service selectors. +It indicates which type of service the selector applies to, such as read-write, read, or read-only
## ServiceTemplateSpec

**Appears in:**

- [ManagedService](#postgresql-k8s-enterprisedb-io-v1-ManagedService)

- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)

ServiceTemplateSpec is a structure allowing the user to set a template for Service generation.
+ +Field | Description |
---|---|
metadata +Metadata + |
+
+ Standard object's metadata. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + |
+
spec +core/v1.ServiceSpec + |
+
+ Specification of the desired behavior of the service. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
ServiceUpdateStrategy describes how the changes to the managed service should be handled
## SnapshotOwnerReference

(Alias of `string`)

**Appears in:**

- [VolumeSnapshotConfiguration](#postgresql-k8s-enterprisedb-io-v1-VolumeSnapshotConfiguration)

SnapshotOwnerReference defines the reference type for the owner of the snapshot. This specifies which owner the processed resources should relate to.

## SnapshotType

(Alias of `string`)

**Appears in:**

- [Import](#postgresql-k8s-enterprisedb-io-v1-Import)

SnapshotType is a type of allowed import.

## StorageConfiguration

**Appears in:**

- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)

- [TablespaceConfiguration](#postgresql-k8s-enterprisedb-io-v1-TablespaceConfiguration)

StorageConfiguration is the configuration used to create and reconcile PVCs, usable for WAL volumes, PGDATA volumes, or tablespaces.
+ +Field | Description |
---|---|
storageClass +string + |
+
+ StorageClass to use for PVCs. Applied after +evaluating the PVC template, if available. +If not specified, the generated PVCs will use the +default storage class + |
+
size +string + |
+
+ Size of the storage. Required if not already specified in the PVC template. +Changes to this field are automatically reapplied to the created PVCs. +Size cannot be decreased. + |
+
resizeInUseVolumes +bool + |
+
+ Resize existent PVCs, defaults to true + |
+
pvcTemplate +core/v1.PersistentVolumeClaimSpec + |
+
+ Template to be used to generate the Persistent Volume Claim + |
+
SubscriptionReclaimPolicy describes a policy for end-of-life maintenance of Subscriptions.
## SubscriptionSpec

**Appears in:**

- [Subscription](#postgresql-k8s-enterprisedb-io-v1-Subscription)

SubscriptionSpec defines the desired state of Subscription.
+ +Field | Description |
---|---|
cluster [Required]+core/v1.LocalObjectReference + |
+
+ The name of the PostgreSQL cluster that identifies the "subscriber" + |
+
name [Required]+string + |
+
+ The name of the subscription inside PostgreSQL + |
+
dbname [Required]+string + |
+
+ The name of the database where the publication will be installed in +the "subscriber" cluster + |
+
parameters +map[string]string + |
+
+ Subscription parameters included in the |
+
publicationName [Required]+string + |
+
+ The name of the publication inside the PostgreSQL database in the +"publisher" + |
+
publicationDBName +string + |
+
+ The name of the database containing the publication on the external +cluster. Defaults to the one in the external cluster definition. + |
+
externalClusterName [Required]+string + |
+
+ The name of the external cluster with the publication ("publisher") + |
+
subscriptionReclaimPolicy +SubscriptionReclaimPolicy + |
+
+ The policy for end-of-life maintenance of this subscription + |
+
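A sketch of a `Subscription` manifest wiring the fields above together; the cluster, database, and publication names are placeholders:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Subscription
metadata:
  name: subscription-example
spec:
  cluster:
    name: cluster-dest               # placeholder subscriber cluster
  name: sub_one
  dbname: app
  publicationName: pub_one
  externalClusterName: cluster-origin   # placeholder publisher
  parameters:
    copy_data: "true"
```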
SubscriptionStatus defines the observed state of Subscription
+ +Field | Description |
---|---|
observedGeneration +int64 + |
+
+ A sequence number representing the latest +desired state that was synchronized + |
+
applied +bool + |
+
+ Applied is true if the subscription was reconciled correctly + |
+
message +string + |
+
+ Message is the reconciliation output message + |
+
SwitchReplicaClusterStatus contains all the statuses regarding the switch of a cluster to a replica cluster
+ +Field | Description |
---|---|
inProgress +bool + |
+
+ InProgress indicates if there is an ongoing procedure of switching a cluster to a replica cluster. + |
+
SyncReplicaElectionConstraints contains the constraints for sync replicas election.
+For anti-affinity parameters two instances are considered in the same location +if all the labels values match.
+In future synchronous replica election restriction by name will be supported.
+ +Field | Description |
---|---|
nodeLabelsAntiAffinity +[]string + |
+
+ A list of node labels values to extract and compare to evaluate if the pods reside in the same topology or not + |
+
enabled [Required]+bool + |
+
+ This flag enables the constraints for sync replicas + |
+
SynchronizeReplicasConfiguration contains the configuration for the synchronization of user defined +physical replication slots
+ +Field | Description |
---|---|
enabled [Required]+bool + |
+
+ When set to true, every replication slot that is on the primary is synchronized on each standby + |
+
excludePatterns +[]string + |
+
+ List of regular expression patterns to match the names of replication slots to be excluded (by default empty) + |
+
SynchronousReplicaConfiguration contains the configuration of the PostgreSQL synchronous replication feature. Important: at this moment, also `.spec.minSyncReplicas` and `.spec.maxSyncReplicas` need to be considered.
Field | Description |
---|---|
method [Required]+SynchronousReplicaConfigurationMethod + |
+
+ Method to select synchronous replication standbys from the listed +servers, accepting 'any' (quorum-based synchronous replication) or +'first' (priority-based synchronous replication) as values. + |
+
number [Required]+int + |
+
+ Specifies the number of synchronous standby servers that +transactions must wait for responses from. + |
+
maxStandbyNamesFromCluster +int + |
+
+ Specifies the maximum number of local cluster pods that can be
+automatically included in the |
+
standbyNamesPre +[]string + |
+
+ A user-defined list of application names to be added to
+ |
+
standbyNamesPost +[]string + |
+
+ A user-defined list of application names to be added to
+ |
+
dataDurability +DataDurabilityLevel + |
+
+ If set to "required", data durability is strictly enforced. Write operations
+with synchronous commit settings ( |
+
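A sketch of a `.spec.postgresql.synchronous` stanza combining the fields above; the external standby name is a placeholder:

```yaml
postgresql:
  synchronous:
    method: any
    number: 1
    maxStandbyNamesFromCluster: 2
    standbyNamesPre:
      - external-standby-1    # placeholder application name
    dataDurability: required
```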
SynchronousReplicaConfigurationMethod configures whether to use +quorum based replication or a priority list
## TDEConfiguration

**Appears in:**

- [EPASConfiguration](#postgresql-k8s-enterprisedb-io-v1-EPASConfiguration)

TDEConfiguration contains the Transparent Data Encryption configuration.
+ +Field | Description |
---|---|
enabled +bool + |
+
+ True if we want to have TDE enabled + |
+
secretKeyRef +core/v1.SecretKeySelector + |
+
+ Reference to the secret that contains the encryption key + |
+
wrapCommand +core/v1.SecretKeySelector + |
+
+ WrapCommand is the encrypt command provided by the user + |
+
unwrapCommand +core/v1.SecretKeySelector + |
+
+ UnwrapCommand is the decryption command provided by the user + |
+
passphraseCommand +core/v1.SecretKeySelector + |
+
+ PassphraseCommand is the command executed to get the passphrase that will be +passed to the OpenSSL command to encrypt and decrypt + |
+
TablespaceConfiguration is the configuration of a tablespace, and includes +the storage specification for the tablespace
+ +Field | Description |
---|---|
name [Required]+string + |
+
+ The name of the tablespace + |
+
storage [Required]+StorageConfiguration + |
+
+ The storage configuration for the tablespace + |
+
owner +DatabaseRoleRef + |
+
+ Owner is the PostgreSQL user owning the tablespace + |
+
temporary +bool + |
+
+ When set to true, the tablespace will be added as a |
+
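A sketch of a `.spec.tablespaces` stanza; the tablespace names, storage class, and owner are placeholders:

```yaml
tablespaces:
  - name: analytics          # placeholder tablespace
    owner:
      name: app
    storage:
      size: 5Gi
      storageClass: standard # placeholder storage class
  - name: scratch
    temporary: true
    storage:
      size: 1Gi
```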
TablespaceState represents the state of a tablespace in a cluster
+ +Field | Description |
---|---|
name [Required]+string + |
+
+ Name is the name of the tablespace + |
+
owner +string + |
+
+ Owner is the PostgreSQL user owning the tablespace + |
+
state [Required]+TablespaceStatus + |
+
+ State is the latest reconciliation state + |
+
error +string + |
+
+ Error is the reconciliation error, if any + |
+
TablespaceStatus represents the status of a tablespace in the cluster
## Topology

**Appears in:**

- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)

Topology contains the cluster topology.
+ +Field | Description |
---|---|
instances +map[PodName]PodTopologyLabels + |
+
+ Instances contains the pod topology of the instances + |
+
nodesUsed +int32 + |
+
+ NodesUsed represents the count of distinct nodes accommodating the instances. +A value of '1' suggests that all instances are hosted on a single node, +implying the absence of High Availability (HA). Ideally, this value should +be the same as the number of instances in the Postgres HA cluster, implying +shared nothing architecture on the compute side. + |
+
successfullyExtracted +bool + |
+
+ SuccessfullyExtracted indicates if the topology data was extracted. It is useful to enact fallback behaviors in synchronous replica election in case of failures + |
+
VolumeSnapshotConfiguration represents the configuration for the execution of snapshot backups.
+ +Field | Description |
---|---|
labels +map[string]string + |
+
+ Labels are key-value pairs that will be added to .metadata.labels snapshot resources. + |
+
annotations +map[string]string + |
+
+ Annotations key-value pairs that will be added to .metadata.annotations snapshot resources. + |
+
className +string + |
+
+ ClassName specifies the Snapshot Class to be used for PG_DATA PersistentVolumeClaim. +It is the default class for the other types if no specific class is present + |
+
walClassName +string + |
+
+ WalClassName specifies the Snapshot Class to be used for the PG_WAL PersistentVolumeClaim. + |
+
tablespaceClassName +map[string]string + |
+
+ TablespaceClassName specifies the Snapshot Class to be used for the tablespaces. +defaults to the PGDATA Snapshot Class, if set + |
+
snapshotOwnerReference +SnapshotOwnerReference + |
+
+ SnapshotOwnerReference indicates the type of owner reference the snapshot should have + |
+
online +bool + |
+
+ Whether the default type of backup with volume snapshots is
+online/hot ( |
+
onlineConfiguration +OnlineConfiguration + |
+
+ Configuration parameters to control the online/hot backup with volume snapshots + |
+
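A sketch of a `.spec.backup.volumeSnapshot` stanza; the snapshot class names, labels, and the owner-reference value are placeholders:

```yaml
backup:
  volumeSnapshot:
    className: csi-snapclass       # placeholder VolumeSnapshotClass
    walClassName: csi-snapclass
    snapshotOwnerReference: cluster
    online: true
    onlineConfiguration:
      immediateCheckpoint: true
      waitForArchive: true
    labels:
      backup-tier: nightly         # placeholder label
```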