
Commit fc0cfb0

Merge pull request #1310 from EnterpriseDB/release/2021-04-27
Former-commit-id: 00b6f58
2 parents d91ed57 + 31a7f58 commit fc0cfb0


68 files changed: +1079 -721 lines


README.md

+1
@@ -48,6 +48,7 @@ If you need to run parts of the RST to MDX conversion pipeline, you'll need to i
 
 If you are a Windows user, you can work with Docs without installing it locally by using a Docker container and VSCode. See [Working on Docs in a Docker container using VSCode](README_DOCKER_VSCODE.md)
 
+
 ## Sources
 
 - Advocacy (`/advocacy_docs`, always loaded)

product_docs/docs/efm/4.0/efm_pgpool_ha_guide/01_introduction.mdx

+1 -1
@@ -8,7 +8,7 @@ legacyRedirectsGenerated:
 
 This guide explains how to configure Failover Manager and Pgpool best to leverage the benefits that they provide for Advanced Server. Using the reference architecture described in the Architecture section, you can learn how to achieve high availability by implementing an automatic failover mechanism (with Failover Manager) while scaling the system for larger workloads and an increased number of concurrent clients with read-intensive or mixed workloads to achieve horizontal scaling/read-scalability (with Pgpool).
 
-The architecture described in this document has been developed and tested for EFM 4.1, EDB Pgpool 4.1, and Advanced Server 13.
+The architecture described in this document has been developed and tested for EFM 4.0, EDB Pgpool, and Advanced Server 12.
 
 Documentation for Advanced Server and Failover Manager are available from EnterpriseDB at:
 

product_docs/docs/efm/4.0/efm_pgpool_ha_guide/02_architecture.mdx

+1 -1
@@ -3,7 +3,7 @@ title: "Architecture"
 
 legacyRedirectsGenerated:
   # This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
-  - "/edb-docs/d/edb-postgres-failover-manager/user-guides/high-availability-scalability-guide/4.1/architecture.html"
+  - "/edb-docs/d/edb-postgres-failover-manager/user-guides/high-availability-scalability-guide/4.0/architecture.html"
 ---
 
 ![A typical EFM and Pgpool configuration](images/edb_ha_architecture.png)

product_docs/docs/efm/4.0/efm_pgpool_ha_guide/03_components_ha_pgpool.mdx

+37 -61
@@ -3,94 +3,68 @@ title: "Implementing High Availability with Pgpool"
 
 legacyRedirectsGenerated:
   # This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
-  - "/edb-docs/d/edb-postgres-failover-manager/user-guides/high-availability-scalability-guide/4.1/components_ha_pgpool.html"
+  - "/edb-docs/d/edb-postgres-failover-manager/user-guides/high-availability-scalability-guide/4.0/components_ha_pgpool.html"
 ---
 
 Failover Manager monitors the health of Postgres nodes; in the event of a database failure, Failover Manager performs an automatic failover to a Standby node. Note that Pgpool does not monitor the health of backend nodes and will not perform failover to any Standby nodes.
 
 ## Configuring Failover Manager
 
-Failover Manager provides functionality that will remove failed database nodes from Pgpool load balancing; it can also re-attach nodes to Pgpool when returned to the Failover Manager cluster. To configure EFM for high availability using Pgpool, you must set the following properties in the cluster properties file:
+Failover Manager provides functionality that will remove failed database nodes from Pgpool load balancing; Failover Manager can also re-attach nodes to Pgpool when returned to the Failover Manager cluster. To configure this behavior, you must identify the load balancer attach and detach scripts in the efm.properties file in the following parameters:
 
-pgpool.enable =<true/false>
+`script.load.balancer.attach`=`/path/to/load_balancer_attach.sh %h`
 
-'pcp.user' = <User that would be invoking PCP commands>
+`script.load.balancer.detach`=`/path/to/load_balancer_detach.sh %h`
 
-'pcp.host' = <Virtual IP that would be used by pgpool. Same as pgpool parameter 'delegate_IP’>
+The script referenced by `load.balancer.detach` is called when Failover Manager decides that a database node has failed. The script detaches the node from Pgpool by issuing a PCP interface call. You can verify a successful execution of the load.balancer.detach script by calling `SHOW NODES` in a psql session attached to the Pgpool port. The call to SHOW NODES should return that the node is marked as down; Pgpool will not send any queries to a downed node.
 
-'pcp.port' = <The port on which pgpool listens for pcp commands>
+The script referenced by `load.balancer.attach` is called when a resume command is issued to the efm command-line interface to add a downed node back to the Failover Manager cluster. Once the node rejoins the cluster, the script referenced by `load.balancer.attach` is invoked, issuing a PCP interface call, which adds the node back to the Pgpool cluster. You can verify a successful execution of the `load.balancer.attach` script by calling `SHOW NODES` in a psql session attached to the Pgpool port; the command should return that the node is marked as up. At this point, Pgpool will resume using this node as a load balancing candidate. Sample scripts for each of these parameters are provided in Appendix B.
 
-'pcp.pass.file' = <Absolute path of PCPPASSFILE>
-
-'pgpool.bin' = <Absolute path of pgpool bin directory>
 
 ## Configuring Pgpool
 
-The section lists the configuration of some important parameters in the `pgpool.conf` file to integrate the Pgpool-II with EFM.
-
-**Backend node setting**
-
-There are three PostgreSQL backend nodes, one Primary and two Standby nodes. Configure using `backend_*` configuration parameters in `pgpool.conf`, and use the equal backend weights for all nodes. This will make the read queries to be distributed equally among all nodes.
-
-```text
-backend_hostname0 = ‘server1_IP'
-backend_port0 = 5444
-backend_weight0 = 1
-backend_flag0 = 'DISALLOW_TO_FAILOVER'
-
-backend_hostname1 = ‘server2_IP'
-backend_port1 = 5444
-backend_weight1 = 1
-backend_flag1 = 'DISALLOW_TO_FAILOVER'
-
-backend_hostname2 = ‘server3_IP'
-backend_port2 = 5444
-backend_weight2 = 1
-backend_flag2 = 'DISALLOW_TO_FAILOVER'
-```
-
-**Enable Load-balancing and streaming replication mode**
-
-Set the following configuration parameter in the `pgpool.conf` file to enable load balancing and streaming replication mode
+You must provide values for the following configuration parameters in the pgpool.conf file on the Pgpool host:
 
 ```text
+follow_master_command = '/path/to/follow_primary.sh %d %P'
+load_balance_mode = on
 master_slave_mode = on
 master_slave_sub_mode = 'stream'
-load_balance_mode = on
-```
-
-**Disable health-checking and failover**
-
-Health-checking and failover must be handled by EFM and hence, these must be disabled on Pgpool-II side. To disable the health-check and failover on pgpool-II side, assign the following values:
-
-```text
-health_check_period = 0
 fail_over_on_backend_error = off
+health_check_period = 0
 failover_if_affected_tuples_mismatch = off
-failover_command = ‘’
-failback_command = ‘’
+failover_command = ''
+failback_command = ''
+search_primary_node_timeout = 3
+backend_hostname0='primary'
+backend_port0=5433
+backend_flag0='ALLOW_TO_FAILOVER'
+backend_hostname1='standby1'
+backend_port1=5433
+backend_flag1='ALLOW_TO_FAILOVER'
+backend_hostname2='standby2'
+backend_port2=5433
+backend_flag2='ALLOW_TO_FAILOVER'
+sr_check_period = 10
+sr_check_user = 'enterprisedb'
+sr_check_password = 'edb'
+sr_check_database = 'edb'
+health_check_user = 'enterprisedb'
+health_check_password = 'edb'
+health_check_database = 'edb'
 ```
 
-Ensure the following while setting up the values in the `pgpool.conf` file:
-
-- Keep the value of wd_priority in pgpool.conf different on each node. The node with the highest value gets the highest priority.
-- The properties backend_hostname0 , backend_hostname1, backend_hostname2 and so on are shared properties (in EFM terms) and should hold the same value for all the nodes in pgpool.conf file.
-- Update the correct interface value in *if\_* \* and arping cmd props in the pgpool.conf file.
-- Add the properties heartbeat_destination0, heartbeat_destination1, heartbeat_destination2 etc. as per the number of nodes in pgpool.conf file on every node. Here heartbeat_destination0 should be the ip/hostname of the local node.
+When the primary/master node is changed in Pgpool (either by failover or by manual promotion) in a non-Failover Manager setup, Pgpool detaches all standby nodes from itself, and executes the `follow_master_command` for each standby node, making them follow the new primary node. Since Failover Manager reconfigures the standby nodes before executing the post-promotion script (where a standby is promoted to primary in Pgpool to match the Failover Manager configuration), the `follow_master_command` merely needs to reattach standby nodes to Pgpool.
 
-**Setting up PCP**
+Note that the load-balancing is turned on to ensure read scalability by distributing read traffic across the standby nodes.
 
-Script uses the PCP interface, So we need to set up the PCP and .PCPPASS file to allow PCP connections without password prompt.
+Note also that the health checking and error-triggered backend failover have been turned off, as Failover Manager will be responsible for performing health checks and triggering failover. It is not advisable for Pgpool to perform health checking in this case, so as not to create a conflict with Failover Manager, or prematurely perform failover.
 
-setup PCP: <http://www.pgpool.net/docs/latest/en/html/configuring-pcp-conf.html>
-
-setup PCPPASS: <https://www.pgpool.net/docs/latest/en/html/pcp-commands.html>
+Finally, `search_primary_node_timeout` has been set to a low value to ensure prompt recovery of Pgpool services upon an Failover Manager-triggered failover.
 
-Note that the load-balancing is turned on to ensure read scalability by distributing read traffic across the standby nodes
+## pgpool_backend.sh
 
-The health checking and error-triggered backend failover have been turned off, as Failover Manager will be responsible for performing health checks and triggering failover. It is not advisable for Pgpool to perform health checking in this case, so as not to create a conflict with Failover Manager, or prematurely perform failover.
-
-Finally, `search_primary_node_timeout` has been set to a low value to ensure prompt recovery of Pgpool services upon an Failover Manager-triggered failover.
+In order for the attach and detach scripts to be successfully called, a pgpool_backend.sh script must be provided. `pgpool_backend.sh` is a helper script for issuing the actual PCP interface commands on Pgpool. Nodes in Failover Manager are identified by IP addresses, while PCP commands refer to a node ID. `pgpool_backend.sh` provides a layer of abstraction to perform the IP address to node ID mapping transparently.
 
 ## Virtual IP Addresses
 
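The diff above replaces the old PCP property list with attach/detach script hooks; the actual sample scripts ship in Appendix B of the guide. As a rough illustration of what such a detach hook does, here is a hedged sketch; the PCP port, user, and output parsing below are assumptions, not the shipped script:

```shell
#!/bin/sh
# Hypothetical load_balancer_detach.sh-style sketch (the shipped samples are
# in Appendix B). EFM substitutes the failed node's address for %h ($1 here).
# Assumes PCP listens on localhost:9898 and ~/.pcppass allows -w (no prompt).
PCP_PORT=9898
PCP_USER=enterprisedb

# PCP commands take a numeric node ID while EFM passes an IP/hostname, so the
# script has to translate. This helper reads "node_id hostname" pairs on
# stdin and prints the ID whose hostname matches $1.
node_id_for_host() {
    awk -v h="$1" '$2 == h { print $1; exit }'
}

if [ -n "$1" ]; then
    failed_host=$1
    node_count=$(pcp_node_count -h localhost -p "$PCP_PORT" -U "$PCP_USER" -w)
    listing=""
    i=0
    while [ "$i" -lt "$node_count" ]; do
        # pcp_node_info prints the backend hostname as its first field
        host=$(pcp_node_info -h localhost -p "$PCP_PORT" -U "$PCP_USER" -w "$i" | awk '{ print $1 }')
        listing="$listing$i $host
"
        i=$((i + 1))
    done
    node_id=$(printf '%s' "$listing" | node_id_for_host "$failed_host")
    [ -n "$node_id" ] || exit 1
    # Detach the failed backend so Pgpool stops routing queries to it
    pcp_detach_node -h localhost -p "$PCP_PORT" -U "$PCP_USER" -w "$node_id"
fi
```

An attach hook would be the mirror image, calling `pcp_attach_node` after the same lookup; either way, the result can be checked with the `SHOW NODES` query described in the diff.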
@@ -159,3 +133,5 @@ other_pgpool_port1 = 9999
 other_wd_port1 = 9000
 wd_priority = 5 # use high watchdog priority on server 4
 ```
+!!! Note
+    Replace the value of eth0 with the network interface on your system.
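The added note tells readers to replace eth0; that interface name appears in the watchdog/VIP commands elsewhere in pgpool.conf. As a hedged illustration of the parameters the note refers to (the VIP address and eth0 values below are placeholders, not values from this commit):

```text
# Illustrative placeholders: substitute your own VIP and network interface
delegate_IP = '172.16.1.200'
if_up_cmd = 'ip addr add $_IP_$/24 dev eth0 label eth0:0'
if_down_cmd = 'ip addr del $_IP_$/24 dev eth0'
arping_cmd = 'arping -U $_IP_$ -w 1 -I eth0'
```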

0 commit comments
