Commit 5387f43

Merge pull request #6276 from EnterpriseDB/DOCS-1127-tpa-23-35
DOCS-1127 Import TPA 23.35 documentation
2 parents b23f515 + 00971a8

18 files changed: +866 -202 lines changed

product_docs/docs/tpa/23/architecture-M1.mdx

Lines changed: 61 additions & 36 deletions
@@ -5,11 +5,8 @@ originalFilePath: architecture-M1.md

---

-A Postgres cluster with one or more active locations, each with the same
-number of Postgres nodes and an extra Barman node. Optionally, there can
-also be a location containing only a witness node, or a location
-containing only a single node, even if the active locations have more
-than one.
+A Postgres cluster with a single primary node and physical replication
+to a number of standby nodes including backup and failover management.

This architecture is suitable for production and is also suited to
testing, demonstrating and learning due to its simplicity and ability to
@@ -19,25 +16,53 @@ If you select subscription-only EDB software with this architecture
it will be sourced from EDB Repos 2.0 and you will need to
[provide a token](reference/edb_repositories/).

-## Application and backup failover
-
-The M1 architecture implements failover management in that it ensures
-that a replica will be promoted to take the place of the primary should
-the primary become unavailable. However it *does not provide any
-automatic facility to reroute application traffic to the primary*. If
-you require, automatic failover of application traffic you will need to
-configure this at the application itself (for example using multi-host
-connections) or by using an appropriate proxy or load balancer and the
-facilities offered by your selected failover manager.
-
-The above is also true of the connection between the backup node and the
-primary created by TPA. The backup will not be automatically adjusted to
-target the new primary in the event of failover, instead it will remain
-connected to the original primary. If you are performing a manual
-failover and wish to connect the backup to the new primary, you may
-simply re-run `tpaexec deploy`. If you wish to automatically change the
-backup source, you should implement this using your selected failover
-manager as noted above.
+## Failover management
+
+The M1 architecture always includes a failover manager. Supported
+options are repmgr, EDB Failover Manager (EFM) and Patroni. In all
+cases, the failover manager will be configured by default to ensure that
+a replica will be promoted to take the place of the primary should the
+primary become unavailable.
+
+### Application failover
+
+The M1 architecture does not generally provide an automatic facility to
+reroute application traffic to the primary. There are several ways you
+can add this capability to your cluster.
+
+In TPA:
+
+- If you choose repmgr as the failover manager and enable PgBouncer, you
+  can include the `repmgr_redirect_pgbouncer: true` hash under
+  `cluster_vars` in `config.yml`. This causes repmgr to automatically
+  reconfigure PgBouncer to route traffic to the new primary on failover.
+
+- If you choose Patroni as the failover manager and enable PgBouncer,
+  Patroni will automatically reconfigure PgBouncer to route traffic to
+  the new primary on failover.
+
+- If you choose EFM as the failover manager, you can use the
+  `efm_conf_settings` hash under `cluster_vars` in `config.yml` to
+  [configure EFM to use a virtual IP address
+  (VIP)](/efm/latest/04_configuring_efm/05_using_vip_addresses/). This
+  is an additional IP address which will always route to the primary
+  node.
+
+- Place an appropriate proxy or load balancer between the cluster and
+  your application and use a [TPA hook](tpaexec-hooks/) to configure
+  your selected failover manager to update it with the route to the new
+  primary on failover.
+
+- Handle failover at the application itself, for example by using
+  multi-host connection strings.
+
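The `cluster_vars` settings named in the added bullets above look roughly as follows in `config.yml`. This is a minimal, hypothetical sketch, not text from this commit: the `failover_manager` key and the commented `virtual.ip.*` property names are assumptions based on the TPA and EFM documentation linked above, and the values are illustrative.

```yaml
# Hypothetical config.yml fragment (sketch only; values illustrative).
cluster_vars:
  failover_manager: repmgr        # or: efm, patroni (assumed key)
  # With repmgr and PgBouncer enabled, repoint PgBouncer at the new
  # primary automatically on failover:
  repmgr_redirect_pgbouncer: true

  # Alternatively, with EFM as the failover manager, pass EFM properties
  # through efm_conf_settings to configure a VIP. The property names and
  # values below are assumptions; see the linked EFM documentation.
  # efm_conf_settings:
  #   virtual.ip: 10.33.0.50
  #   virtual.ip.interface: eth0
  #   virtual.ip.prefix: 24
```

After changing these settings, re-running `tpaexec deploy` applies them to the cluster.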
+### Backup failover
+
+TPA does not configure any kind of 'backup failover'. If the Postgres
+node from which you are backing up is down, backups will simply halt
+until the node is back online. To manually connect the backup to the new
+primary, edit `config.yml` to add the `backup` hash to the new primary
+instance and re-run `tpaexec deploy`.
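As a sketch of the manual re-pointing described above: in TPA's `config.yml`, the `backup` hash on an instance names the Barman node that backs it up. The instance layout and names below are hypothetical, for illustration only.

```yaml
# Hypothetical excerpt: after a manual failover, move the backup hash to
# the instance that is now the primary, then re-run `tpaexec deploy`.
instances:
- Name: kaftan         # illustrative name: the newly promoted primary
  location: main
  node: 2
  role:
  - replica            # role as originally generated by tpaexec configure
  backup: kapok        # Barman now backs up this instance
- Name: kapok          # illustrative name: the Barman node
  location: main
  node: 3
  role:
  - barman
```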

## Cluster configuration

@@ -78,18 +103,18 @@ More detail on the options is provided in the following section.

#### Additional Options

-| Parameter | Description | Behaviour if omitted |
-| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------ |
-| `--platform` | One of `aws`, `docker`, `bare`. | Defaults to `aws`. |
-| `--location-names` | A space-separated list of location names. The number of active locations is equal to the number of names supplied, minus one for each of the witness-only location and the single-node location if they are requested. | A single location called "main" is used. |
-| `--primary-location` | The location where the primary server will be. Must be a member of `location-names`. | The first listed location is used. |
-| `--data-nodes-per-location` | A number from 1 upwards. In each location, one node will be configured to stream directly from the cluster's primary node, and the other nodes, if present, will stream from that one. | Defaults to 2. |
-| `--witness-only-location` | A location name, must be a member of `location-names`. | No witness-only location is added. |
-| `--single-node-location` | A location name, must be a member of `location-names`. | No single-node location is added. |
-| `--enable-haproxy` | 2 additional nodes will be added as a load balancer layer.<br/>Only supported with Patroni as the failover manager. | HAproxy nodes will not be added to the cluster. |
-| `--enable-pgbouncer` | PgBouncer will be configured in the Postgres nodes to pool connections for the primary. | PgBouncer will not be configured in the cluster. |
-| `--patroni-dcs` | Select the Distributed Configuration Store backend for patroni.<br/>Only option is `etcd` at this time. <br/>Only supported with Patroni as the failover manager. | Defaults to `etcd`. |
-| `--efm-bind-by-hostname` | Enable efm to use hostnames instead of IP addresses to configure the cluster `bind.address`. | Defaults to use IP addresses |
+| Parameter | Description | Behaviour if omitted |
+| --------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------ |
+| `--platform` | One of `aws`, `docker`, `bare`. | Defaults to `aws`. |
+| `--location-names` | A space-separated list of location names. The number of locations is equal to the number of names supplied. | A single location called "main" is used. |
+| `--primary-location` | The location where the primary server will be. Must be a member of `location-names`. | The first listed location is used. |
+| `--data-nodes-per-location` | A number from 1 upwards. In each location, one node will be configured to stream directly from the cluster's primary node, and the other nodes, if present, will stream from that one. | Defaults to 2. |
+| `--witness-only-location` | A location name, must be a member of `location-names`. This location will be populated with a single witness node only. | No witness-only location is added. |
+| `--single-node-location` | A location name, must be a member of `location-names`. This location will be populated with a single data node only. | No single-node location is added. |
+| `--enable-haproxy` | Two additional nodes will be added as a load balancer layer.<br/>Only supported with Patroni as the failover manager. | HAproxy nodes will not be added to the cluster. |
+| `--enable-pgbouncer` | PgBouncer will be configured in the Postgres nodes to pool connections for the primary. | PgBouncer will not be configured in the cluster. |
+| `--patroni-dcs` | Select the Distributed Configuration Store backend for patroni.<br/>Only option is `etcd` at this time. <br/>Only supported with Patroni as the failover manager. | Defaults to `etcd`. |
+| `--efm-bind-by-hostname` | Enable efm to use hostnames instead of IP addresses to configure the cluster `bind.address`. | Defaults to use IP addresses |

<br/><br/>
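For orientation, the options in this table shape the `config.yml` that `tpaexec configure` generates. Below is a heavily abbreviated, hypothetical sketch of the resulting layout, assuming `--location-names main dr` and the default of two data nodes per location; all names are illustrative and none of this is taken from the commit.

```yaml
# Hypothetical abbreviated config.yml for an M1 cluster (sketch only).
cluster_name: m1-example
cluster_vars:
  failover_manager: repmgr       # assumption; efm and patroni also supported
locations:
- Name: main
- Name: dr
instances:
- Name: kaboom                   # primary, in the primary location
  location: main
  node: 1
  role:
  - primary
- Name: kaftan                   # streams directly from the primary
  location: main
  node: 2
  role:
  - replica
  upstream: kaboom
- Name: kapok                    # Barman backup node
  location: main
  node: 3
  role:
  - barman
# ...plus the dr location's nodes: one streaming from the primary and
# any further nodes cascading from that one, as the table describes.
```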

product_docs/docs/tpa/23/architecture-PGD-Always-ON.mdx

Lines changed: 4 additions & 4 deletions
@@ -5,10 +5,10 @@ originalFilePath: architecture-PGD-Always-ON.md

---

-!!! Note
+!!!Note
+
This architecture is for Postgres Distributed 5 only.
If you require PGD 4 or 3.7 please use [BDR-Always-ON](architecture-BDR-Always-ON/).
-!!!

EDB Postgres Distributed 5 in an Always-ON configuration,
suitable for use in test and production.
@@ -85,9 +85,9 @@ data centre that provides a level of redundancy, in whatever way
this definition makes sense to your use case. For example, AWS
regions, your own data centres, or any other designation to identify
where your servers are hosted.
+!!!

-
-!!! Note Note for AWS users
+!!! Note for AWS users

If you are using TPA to provision an AWS cluster, the locations will
be mapped to separate availability zones within the `--region` you
product_docs/docs/tpa/23/architecture-PGD-Lightweight.mdx

Lines changed: 103 additions & 0 deletions
@@ -0,0 +1,103 @@
+---
+description: Configuring a PGD Lightweight cluster with TPA.
+title: PGD Lightweight
+originalFilePath: architecture-PGD-Lightweight.md
+
+---
+
+!!! Note
+
+This architecture is for Postgres Distributed 5 only.
+If you require PGD 4 or 3.7 please use [BDR-Always-ON](../architecture-BDR-Always-ON/).
+
+EDB Postgres Distributed 5 in a Lightweight configuration,
+suitable for use in test and production.
+
+This architecture requires an EDB subscription.
+All software will be sourced from [EDB Repos 2.0](edb_repositories/).
+
+## Cluster configuration
+
+### Overview of configuration options
+
+An example invocation of `tpaexec configure` for this architecture
+is shown below.
+
+```bash
+tpaexec configure ~/clusters/pgd-lw \
+--architecture Lightweight \
+--edb-postgres-extended 15 \
+--platform aws --instance-type t3.micro \
+--distribution Debian \
+--location-names main dr
+```
+
+You can list all available options using the help command.
+
+```bash
+tpaexec configure --architecture Lightweight --help
+```
+
+The table below describes the mandatory options for PGD Lightweight
+and additional important options.
+More detail on the options is provided in the following section.
+
+#### Mandatory Options
+
+| Options | Description |
+| ----------------------------------------------------- | -------------------------------------------------------------------------------------------- |
+| `--architecture` (`-a`) | Must be set to `Lightweight` |
+| Postgres flavour and version (e.g. `--postgresql 15`) | A valid [flavour and version specifier](../tpaexec-configure/#postgres-flavour-and-version). |
+
+<br/><br/>
+
+#### Additional Options
+
+| Options | Description | Behaviour if omitted |
+| -------------------------------- | ----------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- |
+| `--platform` | One of `aws`, `docker`, `bare`. | Defaults to `aws`. |
+| `--location-names` | A space-separated list of location names. The number of locations is equal to the number of names supplied. | TPA will configure a single location with three data nodes. |
+| `--add-proxy-nodes-per-location` | The number of proxy nodes in each location. | PGD-proxy will be installed on each data node. |
+| `--bdr-database` | The name of the database to be used for replication. | Defaults to `bdrdb`. |
+| `--enable-pgd-probes` | Enable http(s) api endpoints for pgd-proxy such as `health/is-ready` to allow probing proxy's health. | Disabled by default. |
+| `--proxy-listen-port` | The port on which proxy nodes will route traffic to the write leader. | Defaults to 6432 |
+| `--proxy-read-only-port` | The port on which proxy nodes will route read-only traffic to shadow nodes. | Defaults to 6433 |
+
+<br/><br/>
+
+### More detail about Lightweight configuration
+
+A PGD Lightweight cluster comprises 2 locations, with a primary active location containing 2 nodes and a disaster recovery (dr) location with a single node.
+
+Location names for the cluster are specified as
+`--location-names primary dr`. A location represents an independent
+data centre that provides a level of redundancy, in whatever way
+this definition makes sense to your use case. For example, AWS
+regions, your own data centres, or any other designation to identify
+where your servers are hosted.
+
+!!! Note for AWS users
+
+If you are using TPA to provision an AWS cluster, the locations will
+be mapped to separate availability zones within the `--region` you
+specify.
+You may specify multiple `--regions`, but TPA does not currently set
+up VPC peering to allow instances in different regions to
+communicate with each other. For a multi-region cluster, you will
+need to set up VPC peering yourself.
+
+By default, every data node (in every location) will also run PGD-Proxy
+for connection routing. To create separate PGD-Proxy instances instead,
+use `--add-proxy-nodes-per-location 3` (or however many proxies you want
+to add).
+
+Global routing will make every proxy route to a single write leader, elected amongst all available data nodes across all locations.
+
+You may optionally specify `--bdr-database dbname` to set the name of
+the database with BDR enabled (default: bdrdb).
+
+You may optionally specify `--enable-pgd-probes [{http, https}]` to
+enable http(s) API endpoints that allow you to easily probe the proxy's health.
+
+You may also specify any of the options described by
+[`tpaexec help configure-options`](../tpaexec-configure/).
