**README.md**
If you are a Windows user, you can work with Docs without installing it locally by using a Docker container and VSCode. See [Working on Docs in a Docker container using VSCode](README_DOCKER_VSCODE.md)
**product_docs/docs/efm/4.0/efm_pgpool_ha_guide/01_introduction.mdx**
This guide explains how best to configure Failover Manager and Pgpool to leverage the benefits they provide for Advanced Server. Using the reference architecture described in the Architecture section, you can learn how to achieve high availability with an automatic failover mechanism (provided by Failover Manager) while scaling the system for larger workloads and more concurrent clients, achieving horizontal scaling/read scalability for read-intensive or mixed workloads (provided by Pgpool).
The architecture described in this document has been developed and tested for EFM 4.0, EDB Pgpool, and Advanced Server 12.
Documentation for Advanced Server and Failover Manager is available from EnterpriseDB at:
Failover Manager monitors the health of Postgres nodes; in the event of a database failure, Failover Manager performs an automatic failover to a Standby node. Note that Pgpool does not monitor the health of backend nodes and will not perform failover to any Standby nodes.
## Configuring Failover Manager
Failover Manager provides functionality that removes failed database nodes from Pgpool load balancing; Failover Manager can also re-attach nodes to Pgpool when they are returned to the Failover Manager cluster. To configure this behavior, you must identify the load balancer attach and detach scripts in the `efm.properties` file, and set the following PCP connection properties:

```text
pcp.host = <virtual IP used by Pgpool; same as the Pgpool parameter 'delegate_IP'>
pcp.port = <the port on which Pgpool listens for PCP commands>
pcp.pass.file = <absolute path of PCPPASSFILE>
pgpool.bin = <absolute path of the Pgpool bin directory>
```

The script referenced by `load.balancer.detach` is called when Failover Manager decides that a database node has failed. The script detaches the node from Pgpool by issuing a PCP interface call. You can verify successful execution of the `load.balancer.detach` script by calling `SHOW NODES` in a psql session attached to the Pgpool port; the node should be marked as down. Pgpool will not send any queries to a downed node.

The script referenced by `load.balancer.attach` is called when a resume command is issued to the efm command-line interface to add a downed node back to the Failover Manager cluster. Once the node rejoins the cluster, the script is invoked and issues a PCP interface call that adds the node back to the Pgpool cluster. You can verify successful execution of the `load.balancer.attach` script by calling `SHOW NODES` in a psql session attached to the Pgpool port; the node should be marked as up. At this point, Pgpool resumes using the node as a load balancing candidate. Sample scripts for each of these parameters are provided in Appendix B.
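For illustration, a minimal `efm.properties` fragment wiring these pieces together might look like the following. This is a sketch: the script paths and addresses are assumptions, and the `script.load.balancer.*` property names should be checked against the property reference for your EFM version.

```text
# Hypothetical paths and addresses -- adjust for your installation.
# %h is replaced by EFM with the address of the affected node.
script.load.balancer.attach = /usr/edb/efm-4.0/bin/load_balancer_attach.sh %h
script.load.balancer.detach = /usr/edb/efm-4.0/bin/load_balancer_detach.sh %h
pcp.host = 172.16.10.2
pcp.port = 9898
pcp.pass.file = /var/lib/edb/.pcppass
pgpool.bin = /usr/edb/pgpool/bin
```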
## Configuring Pgpool
You must provide values for the following configuration parameters in the `pgpool.conf` file on the Pgpool host:

```text
fail_over_on_backend_error = off
health_check_period = 0
failover_if_affected_tuples_mismatch = off
failover_command = ''
failback_command = ''
search_primary_node_timeout = 3
backend_hostname0 = 'primary'
backend_port0 = 5433
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = 'standby1'
backend_port1 = 5433
backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_hostname2 = 'standby2'
backend_port2 = 5433
backend_flag2 = 'ALLOW_TO_FAILOVER'
sr_check_period = 10
sr_check_user = 'enterprisedb'
sr_check_password = 'edb'
sr_check_database = 'edb'
health_check_user = 'enterprisedb'
health_check_password = 'edb'
health_check_database = 'edb'
```
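To confirm that Pgpool sees the backends as expected, you can inspect the status column of `SHOW POOL_NODES` output. The following is a sketch of a small helper that extracts that column; the psql invocation in the comment and the pipe-separated column layout are assumptions to verify against your Pgpool version.

```shell
# Hypothetical helper: print the status column for one backend from
# `SHOW POOL_NODES` output (node_id | hostname | port | status | ...).
node_status() {
  # $1 = node_id; rows are read from stdin
  awk -F'|' -v id="$1" '$1 + 0 == id + 0 { gsub(/ /, "", $4); print $4; exit }'
}

# In practice the rows would come from something like:
#   psql -h <pgpool-host> -p 9999 -U enterprisedb -c "SHOW POOL_NODES"
sample=" 0 | primary  | 5433 | up   | 0.33
 1 | standby1 | 5433 | down | 0.33
 2 | standby2 | 5433 | up   | 0.33"
printf '%s\n' "$sample" | node_status 1   # prints: down
```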
When the primary/master node is changed in Pgpool (either by failover or by manual promotion) in a non-Failover Manager setup, Pgpool detaches all standby nodes from itself, and executes the `follow_master_command` for each standby node, making them follow the new primary node. Since Failover Manager reconfigures the standby nodes before executing the post-promotion script (where a standby is promoted to primary in Pgpool to match the Failover Manager configuration), the `follow_master_command` merely needs to reattach standby nodes to Pgpool.
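As an illustration, a `follow_master_command` that only reattaches standbys might be wired up as shown below. The script path is an assumption; `%d` and `%h` are Pgpool's substitutions for the node ID and hostname of each detached standby.

```text
follow_master_command = '/path/to/pgpool_follow_master.sh %d %h'
```

Here `pgpool_follow_master.sh` is a hypothetical wrapper that simply issues `pcp_attach_node` for the given node rather than re-basing its replication, since Failover Manager has already reconfigured the standbys.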
Note that load balancing is turned on to ensure read scalability by distributing read traffic across the standby nodes.
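This corresponds to `pgpool.conf` settings along these lines, enabling load balancing and streaming replication mode (a sketch; confirm the parameter names against your Pgpool-II version, as some were renamed in later releases):

```text
load_balance_mode = on
master_slave_mode = on
master_slave_sub_mode = 'stream'
```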
Note also that health checking and error-triggered backend failover have been turned off, as Failover Manager is responsible for performing health checks and triggering failover. It is not advisable for Pgpool to perform health checking in this case, so that it does not conflict with Failover Manager or prematurely perform failover.

Finally, `search_primary_node_timeout` has been set to a low value to ensure prompt recovery of Pgpool services upon a Failover Manager-triggered failover.

In order for the attach and detach scripts to be successfully called, a `pgpool_backend.sh` script must be provided. `pgpool_backend.sh` is a helper script for issuing the actual PCP interface commands on Pgpool. Nodes in Failover Manager are identified by IP addresses, while PCP commands refer to a node ID. `pgpool_backend.sh` provides a layer of abstraction to perform the IP address to node ID mapping transparently.
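A minimal sketch of the mapping layer such a helper might implement is shown below. The IP addresses and node IDs are assumptions for illustration, and the `pcp_detach_node` call in the comment would take its connection options from the `efm.properties` values described earlier.

```shell
# Hypothetical IP-address-to-node-ID mapping, in the spirit of pgpool_backend.sh.
# The addresses and IDs below are illustrative only.
ip_to_node_id() {
  case "$1" in
    172.16.10.10) echo 0 ;;   # primary
    172.16.10.11) echo 1 ;;   # standby1
    172.16.10.12) echo 2 ;;   # standby2
    *) echo "unknown node: $1" >&2; return 1 ;;
  esac
}

# A detach operation would then look something like:
#   pcp_detach_node -h "$PCP_HOST" -p "$PCP_PORT" -w "$(ip_to_node_id 172.16.10.11)"
ip_to_node_id 172.16.10.11   # prints: 1
```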
## Virtual IP Addresses
```text
other_pgpool_port1 = 9999
other_wd_port1 = 9000
wd_priority = 5                   # use high watchdog priority on server 4
```
!!! Note
    Replace the value of `eth0` with the network interface on your system.