Commit 5373174

Pre-release updates
Signed-off-by: Dj Walker-Morgan <[email protected]>
1 parent 6d5da73 commit 5373174

File tree

5 files changed: +396 −2 lines changed

Lines changed: 103 additions & 0 deletions
---
description: Configuring a PGD Lightweight cluster with TPA.
title: PGD Lightweight
originalFilePath: architecture-PGD-Lightweight.md

---

!!! Note

    This architecture is for Postgres Distributed 5 only.
    If you require PGD 4 or 3.7, please use [BDR-Always-ON](../architecture-BDR-Always-ON/).

EDB Postgres Distributed 5 in a Lightweight configuration is
suitable for use in test and production.

This architecture requires an EDB subscription.
All software will be sourced from [EDB Repos 2.0](edb_repositories/).

## Cluster configuration

### Overview of configuration options

An example invocation of `tpaexec configure` for this architecture
is shown below.

```bash
tpaexec configure ~/clusters/pgd-lw \
    --architecture Lightweight \
    --edb-postgres-extended 15 \
    --platform aws --instance-type t3.micro \
    --distribution Debian \
    --location-names main dr
```

You can list all available options using the help command.

```bash
tpaexec configure --architecture Lightweight --help
```

The table below describes the mandatory options for PGD Lightweight
and additional important options.
More detail on the options is provided in the following section.

#### Mandatory Options

| Options                                               | Description                                                                                  |
| ----------------------------------------------------- | -------------------------------------------------------------------------------------------- |
| `--architecture` (`-a`)                               | Must be set to `Lightweight`                                                                 |
| Postgres flavour and version (e.g. `--postgresql 15`) | A valid [flavour and version specifier](../tpaexec-configure/#postgres-flavour-and-version). |

<br/><br/>

#### Additional Options

| Options                          | Description                                                                                                  | Behaviour if omitted                                         |
| -------------------------------- | ------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------ |
| `--platform`                     | One of `aws`, `docker`, `bare`.                                                                              | Defaults to `aws`.                                           |
| `--location-names`               | A space-separated list of location names. The number of locations is equal to the number of names supplied.  | TPA will configure a single location with three data nodes.  |
| `--add-proxy-nodes-per-location` | The number of proxy nodes in each location.                                                                  | PGD-Proxy will be installed on each data node.               |
| `--bdr-database`                 | The name of the database to be used for replication.                                                         | Defaults to `bdrdb`.                                         |
| `--enable-pgd-probes`            | Enable HTTP(S) API endpoints for pgd-proxy, such as `health/is-ready`, to allow probing the proxy's health.  | Disabled by default.                                         |
| `--proxy-listen-port`            | The port on which proxy nodes will route traffic to the write leader.                                        | Defaults to 6432.                                            |
| `--proxy-read-only-port`         | The port on which proxy nodes will route read-only traffic to shadow nodes.                                  | Defaults to 6433.                                            |

<br/><br/>

### More detail about Lightweight configuration

A PGD Lightweight cluster comprises two locations: a primary active
location containing two nodes, and a disaster recovery (dr) location
with a single node.

Location names for the cluster are specified as
`--location-names main dr`. A location represents an independent
data centre that provides a level of redundancy, in whatever way
this definition makes sense to your use case. For example, AWS
regions, your own data centres, or any other designation to identify
where your servers are hosted.

!!! Note for AWS users

    If you are using TPA to provision an AWS cluster, the locations will
    be mapped to separate availability zones within the `--region` you
    specify.
    You may specify multiple `--regions`, but TPA does not currently set
    up VPC peering to allow instances in different regions to
    communicate with each other. For a multi-region cluster, you will
    need to set up VPC peering yourself.

By default, every data node (in every location) will also run PGD-Proxy
for connection routing. To create separate PGD-Proxy instances instead,
use `--add-proxy-nodes-per-location 3` (or however many proxies you want
to add).
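
For example, a sketch of an invocation that adds two dedicated proxy
nodes per location (the cluster directory and versions here are
illustrative, reusing the earlier example):

```bash
# Dedicated proxy nodes instead of running PGD-Proxy on the data nodes.
tpaexec configure ~/clusters/pgd-lw \
    --architecture Lightweight \
    --edb-postgres-extended 15 \
    --location-names main dr \
    --add-proxy-nodes-per-location 2
```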

Global routing will make every proxy route to a single write leader,
elected amongst all available data nodes across all locations.

You may optionally specify `--bdr-database dbname` to set the name of
the database with BDR enabled (default: `bdrdb`).

You may optionally specify `--enable-pgd-probes [{http, https}]` to
enable HTTP(S) API endpoints, such as `health/is-ready`, that allow you
to easily probe the proxy's health.
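
As a hedged illustration of probing (the host and port below are
hypothetical placeholders; use whatever values TPA configured for your
proxies):

```bash
# Check a proxy's readiness via the pgd-proxy HTTP API.
# PROXY_HOST and PROBE_PORT are hypothetical placeholders.
curl -s "http://$PROXY_HOST:$PROBE_PORT/health/is-ready"
```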

You may also specify any of the options described by
[`tpaexec help configure-options`](../tpaexec-configure/).

product_docs/docs/tpa/23/reference/barman.mdx

Lines changed: 106 additions & 2 deletions

TPA versions `<23.35` created the `barman` Postgres user as a `superuser`.

Beginning with `23.35` the `barman` user is created with `NOSUPERUSER`,
so any re-deploys on existing clusters will remove the `superuser` attribute
from the `barman` Postgres user. Instead, the `barman_role` is granted the
required set of privileges and the `barman` user is granted `barman_role` membership.

This avoids granting the `superuser` attribute to the `barman` user, using the set
of privileges provided in the [Barman Manual](https://docs.pgbarman.org/release/latest/#postgresql-connection).

## Shared Barman server

!!! Note

    To use the shared Barman functionality with clusters created using a
    TPA version earlier than 23.35, you must:

    a) upgrade to a version of TPA that supports creating
       shared Barman instances.
    b) after upgrading TPA, run deploy on $first-cluster so TPA can make
       the necessary config changes for subsequent clusters to run
       smoothly against the shared Barman node.

Some deployments may want to share a single Barman server for multiple
clusters. Shared Barman server deployment is supported within tpaexec
via the `barman_shared` setting, which can be set via `vars:` under the
Barman server instance in the config of any cluster that plans to use
an existing Barman server. `barman_shared` is a boolean variable, so
the possible values are `true` and `false` (the default). When making
any changes to the Barman config in a shared scenario, you must ensure
that configurations across multiple clusters remain in sync, to avoid
a scenario where one cluster adds a specific configuration and a
second cluster overrides it.

A typical workflow for using a shared Barman server across multiple
clusters is described below.

1. Create a TPA cluster with an instance that has the `barman` role
   (call it 'first-cluster' for this example).

2. In the second cluster (second-cluster, for example), reference this
   particular Barman instance from $clusters/first-cluster as a shared
   Barman server instance, and use `bare` as the platform so that
   running provision does not try to create a new Barman instance. Also
   specify the IP address of the Barman instance that this cluster can
   use to access it.

   ```yml
   - Name: myBarman
     node: 5
     role:
     - barman
     platform: bare
     ip_address: x.x.x.x
     vars:
       barman_shared: true
   ```

3. Once the second-cluster is provisioned, but before running deploy,
   make sure that it can access the Barman server instance via ssh. You
   can allow this access by copying second-cluster's public key to the
   Barman server instance via `ssh-copy-id`, and then ssh in to make
   sure you can log in without having to specify the password.

   ```bash
   # add first-cluster's key to the ssh-agent
   $ cd $clusters/first-cluster
   $ ssh-add id_first-cluster
   $ cd $clusters/second-cluster
   $ ssh-keyscan -t rsa,ecdsa -4 $barman-server-ip >> tpa_known_hosts
   $ ssh-copy-id -i id_second-cluster.pub -o 'UserKnownHostsFile=tpa_known_hosts' $user@$barman-server-ip
   $ ssh -F ssh_config $barman-server
   ```

4. Copy the Barman user's keys from first-cluster to second-cluster:

   ```bash
   $ mkdir $clusters/second-cluster/keys
   $ cp $clusters/first-cluster/keys/id_barman* $clusters/second-cluster/keys
   ```

5. Run `tpaexec deploy $clusters/second-cluster`.
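
   As an optional sanity check (a sketch, not a required step), you can
   then confirm on the shared Barman node that the backed-up instances
   from both clusters are registered:

   ```bash
   # Run as the barman user on the shared Barman node; instances from
   # both clusters should be listed.
   barman list-server
   ```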

!!! Note

    You must use caution when setting up clusters that share a Barman
    server instance. There are a number of important aspects you must
    consider before attempting such a setup. For example:

    1. Make sure that no two instances in any of the clusters sharing a
       Barman server use the same name.
    2. Barman configuration and settings should otherwise remain in sync
       in all the clusters using a common Barman server, to avoid a
       scenario where one cluster sets up a specific configuration and
       the others do not, either because the configuration is missing or
       uses a different value.
    3. The version of Postgres on the instances being backed up needs to
       be the same across the different clusters.
    4. Different clusters using a common Barman server cannot specify
       different versions of the Barman packages when attempting to
       override the default.

    Some of these may be addressed in a future release as we continue to
    improve the shared Barman server support.

!!! Warning

    Be extremely careful when deprovisioning clusters sharing a common
    Barman node, especially where the first cluster that deployed Barman
    uses a non-bare platform. Deprovisioning the first cluster that
    originally provisioned and deployed Barman will effectively leave
    the other clusters sharing the Barman node in an inconsistent state,
    because the Barman node will already have been deprovisioned by the
    first cluster and it won't exist anymore.
!!!
Lines changed: 166 additions & 0 deletions

---
description: Generating standards-compliant clusters with TPA
title: Compliance
originalFilePath: compliance.md

---

TPA can generate configurations designed to make it easy for a
cluster to comply with the STIG or CIS standards. If you pass
`--compliance stig` or `--compliance cis` to `tpaexec configure`,
TPA will:

- Check that other options are compatible with the appropriate
  standard.
- Add various entries to the generated `config.yml`, including
  marking that this is a cluster meant to comply with a particular
  standard and setting Postgres configuration as required by
  the standard.
- Adjust some deployment tasks to enforce compliance.
- Run checks at the end of deployment.

The deploy-time checks can
be skipped by giving the option `--excluded_tasks=compliance` to `tpaexec
deploy`. This feature is intended for testing only, when using a test
system on which full compliance is impossible (for example,
because SSL certificates are not available).
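
For example, on a test system (the cluster directory name here is just
an illustration):

```bash
# Skip the deploy-time compliance checks -- intended for testing only.
tpaexec deploy ~/clusters/test-cluster --excluded_tasks=compliance
```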

There are some situations in which TPA will intentionally fail to
comply with the selected standard; these are documented under Exceptions
below.

## STIG

STIG compliance is indicated by the `--compliance stig` option to
`tpaexec configure`.

### Option compatibility

STIG compliance requires the `bare` platform and the `epas` flavour.
It requires RedHat OS, version 8 or 9.
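
A hedged sketch of a compatible `tpaexec configure` invocation follows;
the architecture, cluster directory, and Postgres version shown are
illustrative assumptions, not requirements stated here:

```bash
# Sketch only: STIG requires the bare platform and the epas flavour
# on RedHat 8 or 9; the remaining values are assumptions.
tpaexec configure ~/clusters/stig-cluster \
    --architecture M1 \
    --platform bare \
    --distribution RedHat \
    --edb-postgres-advanced 15 --redwood \
    --compliance stig
```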

### Settings in config.yml

The following entry is added to `cluster_vars` to use the SQL/Protect
feature of EDB Postgres Advanced Server:

```
extra_postgres_extensions: [ 'sql_protect' ]
```

The following entries are added to `cluster_vars` to force clients
to use SSL authentication:

```
hba_force_hostssl: True
hba_force_certificate_auth: True
hba_cert_authentication_map: sslmap
```

The following entries are added to `cluster_vars` to set GUCs in
postgresql.conf:

```
tcp_keepalives_idle: 10
tcp_keepalives_interval: 10
tcp_keepalives_count: 10
log_destination: "stderr"
postgres_log_file_mode: "0600"
```

The following entries are added to `postgres_conf_settings` in
`cluster_vars` to set GUCs in postgresql.conf:

```
edb_audit: "xml"
edb_audit_statement: "all"
edb_audit_connect: "all"
edb_audit_disconnect: "all"
statement_timeout: 1000
client_min_messages: "ERROR"
```

### Deployment differences

During deployment, TPA will set connection limits for the database users
it creates, corresponding to the number of connections that are needed
for normal operation. As each user is set up, it will also check that
an SSL client certificate has been provided for it.
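
As a quick read-only check on a deployed node (a sketch; `-1` means no
limit), you can inspect the limits TPA applied:

```bash
# Inspect per-role connection limits set during deployment.
psql -c "SELECT rolname, rolconnlimit FROM pg_roles;"
```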

### Providing client ssl certificates

STIG requires DOD-approved ssl certificates for client connections.
These certificates can't be generated by TPA and therefore must be
supplied. When setting up authentication for a user from a
node in the cluster, TPA will look for a certificate/key pair on the
node. The certificate and key should be in files called `<username>.crt`
and `<username>.key` in the directory given by the `ssl_client_cert_dir`
setting. The default for this setting is `/`, so the files would be,
for example, `/barman.crt` and `/barman.key` when the `barman` user is
being set up.
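
As a sketch under the default `ssl_client_cert_dir` of `/`, supplying
the pair for the `barman` user might look like this (the certificate
and key themselves must come from your DOD-approved issuer; TPA cannot
generate them):

```bash
# Place the barman user's client certificate/key pair where TPA
# will look for it (the default ssl_client_cert_dir is /).
sudo cp barman.crt /barman.crt
sudo cp barman.key /barman.key
sudo chmod 0600 /barman.key
```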

### Final checks

At the end of deployment, TPA will check that the server has FIPS
enabled.

### Exceptions

If you select EFM as the failover manager, TPA will configure password
authentication for the EFM user. This goes against the STIG requirement
that all TCP connections use certificate authentication. The reason for
this exception is that EFM does not support certificate authentication.

## CIS

CIS compliance is indicated by the `--compliance cis` option to `tpaexec
configure`.
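
A hedged sketch of an invocation (the architecture, flavour, and
cluster directory are illustrative assumptions, not CIS requirements
stated in this document):

```bash
# Sketch only: enable CIS compliance checks and settings at configure time.
tpaexec configure ~/clusters/cis-cluster \
    --architecture M1 \
    --postgresql 15 \
    --compliance cis
```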

### Settings in config.yml

The following entries are added to `cluster_vars` to set GUCs in
postgresql.conf:

```
log_connections: "on"
log_disconnections: "on"
```

The following entry is added to `cluster_vars` to enable required
extensions:

```
extra_postgres_extensions: ["passwordcheck", "pgaudit"]
```

The following entry is added to `cluster_vars` to set the umask for
the postgres OS user:

```
extra_bash_rc_lines: "umask 0077"
```

The following entries are added to `postgres_conf_settings` in
`cluster_vars` to set GUCs in postgresql.conf:

```
log_error_verbosity: "verbose"
log_line_prefix: "'%m [%p]: [%l-1] db=%d,user=%u,app=%a,client=%h '"
log_replication_commands: "on"
temp_file_limit: "1GB"
```

### Final checks

At the end of deployment, TPA will check that the server has FIPS
enabled.

### Exceptions

TPA does not support pgBackRest, which is mentioned in the CIS
specification. Instead, TPA installs Barman.

TPA does not install and configure `set_user` as required by the CIS
specification. This is because preventing logon by the Postgres user
would leave TPA unable to connect to, and configure, the database.
