
Commit 170d569

Merge pull request #1249 from EnterpriseDB/release/2021-04-14
Former-commit-id: 3d13fc8
2 parents 3226951 + 9b80fc0 commit 170d569

128 files changed, +7674 -58 lines


advocacy_docs/supported-open-source/pgbackrest/06-use_case_1.mdx

+13 -1
@@ -10,7 +10,7 @@ tags:
### Description

- * pgBackRest runs locally on database servers and stores repository on remote storage (NFS, S3, or Azure object stores).
+ * pgBackRest runs locally on database servers and stores repository on remote storage (NFS, S3, Azure or GCS compatible object stores).
* Cron task active on the node where to take backups.
* Manually reconfigure cron task to take backup from another server.

@@ -74,6 +74,18 @@ repo1-azure-key=***
To use the shared access signatures, set the `repo1-azure-key-type` option to **sas** and the `repo1-azure-key` option to the shared access signature token.
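Purely as an editor's illustration (not part of this commit), the SAS variant described in that sentence would look roughly like this in the repository configuration, with the token value as a placeholder:

```ini
repo1-type=azure
repo1-azure-key-type=sas
# With key-type=sas, repo1-azure-key holds the shared access signature token itself
repo1-azure-key=<sas-token>
```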

+ ##### GCS-compatible Object Stores
+
+ ```ini
+ repo1-type=gcs
+ repo1-gcs-key-type=service
+ repo1-path=/repo1
+ repo1-gcs-bucket=bucket-name
+ repo1-gcs-key=/etc/pgbackrest/gcs-key.json
+ ```
+
+ `repo1-gcs-key` is a **token** or **service** key file depending on the `repo1-gcs-key-type` option.
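For contrast with the service-key example added above, a minimal sketch (again, not part of the commit) of the token variant might look like the following; the bucket name and token are placeholders:

```ini
repo1-type=gcs
repo1-path=/repo1
repo1-gcs-bucket=bucket-name
repo1-gcs-key-type=token
# With key-type=token, repo1-gcs-key is the OAuth2 token itself rather than a path to a key file
repo1-gcs-key=<oauth2-token>
```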
##### Recommended Settings

Use those settings to enable encryption, parallel operations and ensure displaying enough information in the console and in the log file:
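The settings list itself falls outside this hunk. Purely as an illustration of the kind of options that sentence refers to (not the page's actual list), pgBackRest options covering encryption, parallelism, and logging could look like:

```ini
# Encrypt the repository (the passphrase is a placeholder)
repo1-cipher-type=aes-256-cbc
repo1-cipher-pass=<passphrase>

# Run backup and restore with several parallel processes
process-max=4

# Progress in the console, more detail in the log file
log-level-console=info
log-level-file=detail
```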

advocacy_docs/supported-open-source/pgbackrest/07-use_case_2.mdx

+1 -1
@@ -10,7 +10,7 @@ tags:
### Description

- * pgBackRest runs on a dedicated backup server and stores repository on local or remote storage (NFS, S3, or Azure object stores).
+ * pgBackRest runs on a dedicated backup server and stores repository on local or remote storage (NFS, S3, Azure or GCS compatible object stores).
* Cron task active to take backups from the primary database cluster.
* Manually reconfigure cron task to take backup from another server.

advocacy_docs/supported-open-source/pgbackrest/index.mdx

+1 -1
@@ -17,6 +17,6 @@ pgBackRest is an easy-to-use backup and restore tool that aims to enable your or
!!! Note
Looking for pgBackRest documentation? Head over to [http://www.pgbackrest.org](http://www.pgbackrest.org)

- pgBackRest [v2.32](https://github.com/pgbackrest/pgbackrest/releases/tag/release/2.32) is the current stable release. Release notes are on the [Releases](http://www.pgbackrest.org/release.html) page.
+ pgBackRest [v2.33](https://github.com/pgbackrest/pgbackrest/releases/tag/release/2.33) is the current stable release. Release notes are on the [Releases](http://www.pgbackrest.org/release.html) page.

EDB collaborates with the community on this open-source software. The packages provided in EDB repositories are technically equivalent to the packages provided by the **PostgreSQL** community. All of the use cases shown in this document are fully tested and supported by **EDB Postgres Advanced Server**.

product_docs/docs/bdr/3.6/index.mdx

+12 -4
@@ -15,7 +15,7 @@ Available as two editions, BDR Standard provides essential multi-master replicat
## BDR Enterprise

- To provide very high availability, avoid data conflicts, and to cope with more advanced usage scenarios, the Enterprise Edition provides the following extensive additional features:
+ To provide very high availability, avoid data conflicts, and to cope with more advanced usage scenarios, the Enterprise edition provides the following extensive additional features:

* Eager replication provides conflict free replication by synchronizing across cluster nodes before committing a transaction
* Commit at most once consistency guards application transactions even in the presence of node failures
@@ -27,12 +27,16 @@ To provide very high availability, avoid data conflicts, and to cope with more a
BDR Enterprise requires EDB Postgres Extended v11 (formerly known as 2ndQuadrant Postgres) which is SQL and on-disk compatible with PostgreSQL.

!!!note
- The documentation for the latest stable 3.6 release is available here: [BDR3.6 Enterprise Edition](https://documentation.2ndquadrant.com/bdr3-enterprise/release/latest-3.6/)
+ The documentation for the latest stable 3.6 release is available here:
+
+ [BDR 3.6 Enterprise Edition](https://documentation.2ndquadrant.com/bdr3-enterprise/release/latest-3.6/)
+
+ **This is a protected area of our website, if you need access please [contact us](https://www.enterprisedb.com/contact)**
!!!

## BDR Standard

- The Standard Edition provides loosely-coupled multi-master logical replication using a mesh topology. This means that you can write to any node and the changes will be sent directly, row-by-row to all the other nodes that are part of the BDR cluster.
+ The Standard edition provides loosely-coupled multi-master logical replication using a mesh topology. This means that you can write to any node and the changes will be sent directly, row-by-row to all the other nodes that are part of the BDR cluster.

By default BDR uses asynchronous replication to provide row-level eventual consistency, applying changes on the peer nodes only after the local commit.

@@ -46,7 +50,11 @@ The following are included to support very high availability and geographically
BDR Standard requires PostgreSQL v10 or v11.

!!!note
- The documentation for the latest stable 3.6 release is available here: [BDR3.6 Standard Edition](https://documentation.2ndquadrant.com/bdr3/release/latest-3.6/)
+ The documentation for the latest stable 3.6 release is available here:
+
+ [BDR 3.6 Standard Edition](https://documentation.2ndquadrant.com/bdr3/release/latest-3.6/)
+
+ **This is a protected area of our website, if you need access please [contact us](https://www.enterprisedb.com/contact)**
!!!


@@ -0,0 +1,56 @@
---
title: "Architecture Overview"
---

This guide explains how best to configure Failover Manager and Pgpool to leverage the benefits they provide for Advanced Server. Using the reference architecture described in the Architecture section, you can learn how to achieve high availability by implementing an automatic failover mechanism (with Failover Manager) while scaling the system for larger workloads and more concurrent clients with read-intensive or mixed workloads, achieving horizontal read scalability (with Pgpool).

The architecture described in this document has been developed and tested for EFM 4.2, EDB pgPool 4.2, and Advanced Server 13.

Documentation for Advanced Server and Failover Manager is available from EnterpriseDB at:

<https://www.enterprisedb.com/docs/>

Documentation for pgPool-II can be found at:

<http://www.pgpool.net/docs/latest/en/html>

## Failover Manager Overview

Failover Manager is a high-availability module that monitors the health of a Postgres streaming replication cluster and verifies failures quickly. When a database failure occurs, Failover Manager can automatically promote a streaming replication Standby node into a writable Primary node to ensure continued performance and protect against data loss with minimal service interruption.

**Basic EFM Architecture Terminology**

A Failover Manager cluster consists of EFM processes that reside on the following hosts on a network:

- A **Primary** node is the Primary database server that is servicing database clients.
- One or more **Standby nodes** are streaming replication servers associated with the Primary node.
- The **Witness node** confirms assertions of either the Primary or a Standby in a failover scenario. If, during a failure situation, the Primary finds itself in a partition with half or more of the nodes, it will stay Primary. As such, EFM supports running in a cluster with an even number of agents.
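As an illustrative sketch only (not taken from this commit), these roles are typically reflected in each node's `efm.properties` file; the address below is a placeholder:

```ini
# efm.properties (illustrative excerpt)
bind.address=192.168.0.10:7800
# Allow EFM to promote a Standby automatically when the Primary fails
auto.failover=true
# Set to true only on the witness host, which runs no database
is.witness=false
```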
## Pgpool-II Overview

Pgpool-II (Pgpool) is an open-source application that provides connection pooling and load balancing for horizontal scalability of SELECT queries on multiple Standbys in EPAS and community Postgres clusters. For every backend, a `backend_weight` parameter can set the ratio of read traffic to be directed to the backend node. To prevent read traffic on the Primary node, the `backend_weight` parameter can be set to 0. In such cases, data modification language (DML) queries (i.e., INSERT, UPDATE, and DELETE) will still be sent to the Primary node, while read queries are load-balanced to the Standbys, providing scalability with mixed and read-intensive workloads.

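A minimal sketch, not taken from this commit, of how `backend_weight` might be set in `pgpool.conf` so that the Primary receives no read traffic; hostnames and ports are placeholders:

```ini
# Primary: weight 0 keeps SELECT load off this node; writes are still routed here
backend_hostname0 = 'primary.example.com'
backend_port0 = 5444
backend_weight0 = 0

# Standbys: equal non-zero weights share the read traffic
backend_hostname1 = 'standby1.example.com'
backend_port1 = 5444
backend_weight1 = 1

backend_hostname2 = 'standby2.example.com'
backend_port2 = 5444
backend_weight2 = 1
```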
EnterpriseDB supports the following Pgpool functionality:

- Load balancing
- Connection pooling
- High availability
- Connection limits

### PCP Overview

Pgpool provides an interface called PCP for administrators to perform management operations such as retrieving the status of Pgpool or terminating Pgpool processes remotely. PCP commands are UNIX commands that manipulate Pgpool via the network.

### Pgpool Watchdog

`watchdog` is an optional subprocess of Pgpool that provides a high availability feature. Features added by `watchdog` include:

- Health checking of the pgpool service
- Mutual monitoring of other watchdog processes
- Changing leader/Standby state if certain faults are detected
- Automatic virtual IP address assignment synchronized with server switching
- Automatic registration of a server as a Standby during recovery

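As an illustration only, assuming standard Pgpool parameter names rather than anything stated in this commit, enabling the watchdog and the virtual IP it manages involves `pgpool.conf` settings along these lines; the address is a placeholder:

```ini
# Enable the watchdog sub-process
use_watchdog = on
# Virtual IP that follows the active (leader) Pgpool node
delegate_ip = '192.168.0.100'
```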
More information about the `Pgpool watchdog` component can be found at:

<http://www.pgpool.net/docs/latest/en/html/tutorial-watchdog.html>
@@ -0,0 +1,28 @@
---
title: "Architecture"
---

![A typical EFM and Pgpool configuration](images/edb_ha_architecture.png)

The sample architecture diagram shows four nodes as described in the table below:

| **Systems** | **Components** |
| ----------- | -------------- |
| Primary Pgpool/EFM witness node | The Primary Pgpool node runs only Pgpool and the EFM witness, leaving as many resources available to Pgpool as possible. During normal runmode (no Pgpool failovers), the Primary Pgpool node holds the Virtual IP address, and all applications connect through the Virtual IP address to Pgpool. Pgpool forwards all write traffic to the Primary Database node and balances all reads across all Standby nodes. On the Primary Pgpool node, the EFM witness process ensures that a minimum quota of three EFM agents remains available even if one of the database nodes fails, for example when a node is already unavailable due to maintenance or failure and another failure occurs. |
| Primary Database node | The Primary Database node runs only Postgres (Primary) and EFM, leaving all resources to Postgres. Read/write traffic (i.e., INSERT, UPDATE, DELETE) is forwarded to this node by the Primary Pgpool node. |
| Standby nodes | The Standby nodes run Postgres (Standby), EFM and an inactive Pgpool process. In case of a Primary database failure, EFM promotes Postgres on one of these Standby nodes to handle read-write traffic. In case of a Primary Pgpool failure, the Pgpool watchdog activates Pgpool on one of the Standby nodes, which attaches the VIP and handles the forwarding of application connections to the Database nodes. Note that in a double failure situation (both the Primary Pgpool node and the Primary Database node have failed), both of these Primary processes might end up on the same node. |

This architecture:

- Achieves high availability by providing two Standbys that can be promoted in case of a Primary Postgres node failure.
- Achieves high availability by providing at least three Pgpool processes in a watchdog configuration.
- Increases performance with mixed and read-intensive workloads by introducing increased read scalability with more than one Standby for load balancing.
- Reduces load on the Primary database node by redirecting read-only traffic through the Primary Pgpool node.
- Prevents resource contention between Pgpool and Postgres on the Primary Database node. By not running Pgpool on the Primary database node, the Primary Postgres process can utilize as many resources as possible.
- Prevents resource contention between Pgpool and Postgres on the Primary Pgpool node. By not running Standby databases on the Primary Pgpool node, Pgpool can utilize as many resources as possible.
- Optionally, synchronous replication can be set up to achieve near-zero data loss in a failure event.

!!! Note
The architecture also allows us to completely separate 3 virtual machines running Postgres from 3 virtual machines running Pgpool. This kind of setup requires 2 extra virtual machines, but it is a better choice if you want to prevent resource contention between Pgpool and Postgres in failover scenarios. In this setup, the architecture can run without an extra 7th node running the EFM witness process. To increase failure resolution, EFM witness agents could be deployed on the Pgpool servers.

![Deployment of EFM and Pgpool on separate virtual machines](images/edb_ha_architecture_separate_VM.png)
