Commit 0c3bd0a

Dev: sbd: Improve the process of leveraging maintenance mode (#1950)
## Problem

#1744 leverages maintenance mode when the cluster needs to be restarted, but several problems remain when resources are running:

#### Configuration is changed before the user is warned, which can leave the cluster inconsistent

```
# crm sbd configure watchdog-timeout=45
INFO: No 'msgwait-timeout=' specified in the command, use 2*watchdog timeout: 90
INFO: Configuring disk-based SBD
INFO: Initializing SBD device /dev/sda5
INFO: Update SBD_WATCHDOG_DEV in /etc/sysconfig/sbd: /dev/watchdog0
INFO: Sync file /etc/sysconfig/sbd to sle16-2
INFO: Already synced /etc/sysconfig/sbd to all nodes
INFO: Update SBD_DELAY_START in /etc/sysconfig/sbd: 131
INFO: Sync file /etc/sysconfig/sbd to sle16-2
INFO: Already synced /etc/sysconfig/sbd to all nodes
WARNING: "stonith-timeout" in crm_config is set to 119, it was 71
INFO: Sync directory /etc/systemd/system/sbd.service.d to sle16-2
WARNING: Resource is running, need to restart cluster service manually on each node
WARNING: Or, run with `crm -F` or `--force` option, the `sbd` subcommand will leverage maintenance mode for any changes that require restarting sbd.service
WARNING: Understand risks that running RA has no cluster protection while the cluster is in maintenance mode and restarting

# crm sbd purge
INFO: Stop sbd resource 'stonith-sbd'(stonith:fence_sbd)
INFO: Remove sbd resource 'stonith-sbd'
INFO: Disable sbd.service on node sle16-1
INFO: Disable sbd.service on node sle16-2
INFO: Move /etc/sysconfig/sbd to /etc/sysconfig/sbd.bak on all nodes
INFO: Delete cluster property "stonith-timeout" in crm_config
INFO: Delete cluster property "priority-fencing-delay" in crm_config
WARNING: "stonith-enabled" in crm_config is set to false, it was true
WARNING: Resource is running, need to restart cluster service manually on each node
WARNING: Or, run with `crm -F` or `--force` option, the `sbd` subcommand will leverage maintenance mode for any changes that require restarting sbd.service
WARNING: Understand risks that running RA has no cluster protection while the cluster is in maintenance mode and restarting
```

#### Pacemaker exits fatally when adding diskless SBD on a running cluster with resources running

```
# crm cluster init sbd -S -y
INFO: Loading "default" profile from /etc/crm/profiles.yml
INFO: Loading "knet-default" profile from /etc/crm/profiles.yml
INFO: Configuring diskless SBD
WARNING: Diskless SBD requires cluster with three or more nodes. If you want to use diskless SBD for 2-node cluster, should be combined with QDevice.
INFO: Update SBD_WATCHDOG_TIMEOUT in /etc/sysconfig/sbd: 15
INFO: Update SBD_WATCHDOG_DEV in /etc/sysconfig/sbd: /dev/watchdog0
INFO: Sync file /etc/sysconfig/sbd to sle16-2
INFO: Already synced /etc/sysconfig/sbd to all nodes
INFO: Enable sbd.service on node sle16-1
INFO: Enable sbd.service on node sle16-2
WARNING: Resource is running, need to restart cluster service manually on each node
WARNING: Or, run with `crm -F` or `--force` option, the `sbd` subcommand will leverage maintenance mode for any changes that require restarting sbd.service
WARNING: Understand risks that running RA has no cluster protection while the cluster is in maintenance mode and restarting
WARNING: "stonith-watchdog-timeout" in crm_config is set to 30, it was 0

Broadcast message from systemd-journald@sle16-1 (Thu 2025-10-23 10:54:11 CEST):
pacemaker-controld[5674]: emerg: Shutting down: stonith-watchdog-timeout configured (30) but SBD not active

Message from syslogd@sle16-1 at Oct 23 10:54:11 ...
pacemaker-controld[5674]: emerg: Shutting down: stonith-watchdog-timeout configured (30) but SBD not active

ERROR: cluster.init: Failed to run 'crm configure property stonith-watchdog-timeout=30': ERROR: Failed to run 'crm_mon -1rR': crm_mon: Connection to cluster failed: Connection refused
```

## Solution

- Drop the function `restart_cluster_if_possible`
- Introduce a new function `utils.able_to_restart_cluster` that checks whether the cluster can be restarted, and call it before changing any configuration
- Leverage maintenance mode in the `sbd device remove` and `sbd purge` commands as well

#### Add SBD via the sbd stage while resources are running

```
# crm cluster init sbd -S -y
INFO: Loading "default" profile from /etc/crm/profiles.yml
INFO: Loading "knet-default" profile from /etc/crm/profiles.yml
WARNING: Please stop all running resources and try again
WARNING: Or use 'crm -F/--force' option to leverage maintenance mode
WARNING: Understand risks that running RA has no cluster protection while the cluster is in maintenance mode and restarting
INFO: Aborting the configuration change attempt
INFO: Done (log saved to /var/log/crmsh/crmsh.log on sle16-1)

# Leverage maintenance mode
# crm -F cluster init sbd -S -y
INFO: Loading "default" profile from /etc/crm/profiles.yml
INFO: Loading "knet-default" profile from /etc/crm/profiles.yml
INFO: Set cluster to maintenance mode
WARNING: "maintenance-mode" in crm_config is set to true, it was false
INFO: Configuring diskless SBD
WARNING: Diskless SBD requires cluster with three or more nodes. If you want to use diskless SBD for 2-node cluster, should be combined with QDevice.
INFO: Update SBD_WATCHDOG_TIMEOUT in /etc/sysconfig/sbd: 15
INFO: Update SBD_WATCHDOG_DEV in /etc/sysconfig/sbd: /dev/watchdog0
INFO: Sync file /etc/sysconfig/sbd to sle16-2
INFO: Already synced /etc/sysconfig/sbd to all nodes
INFO: Enable sbd.service on node sle16-1
INFO: Enable sbd.service on node sle16-2
INFO: Restarting cluster service
INFO: BEGIN Waiting for cluster
...........
INFO: END Waiting for cluster
WARNING: "stonith-watchdog-timeout" in crm_config is set to 30, it was 0
WARNING: "stonith-enabled" in crm_config is set to true, it was false
INFO: Update SBD_DELAY_START in /etc/sysconfig/sbd: 41
INFO: Sync file /etc/sysconfig/sbd to sle16-2
INFO: Already synced /etc/sysconfig/sbd to all nodes
WARNING: "stonith-timeout" in crm_config is set to 71, it was 60s
INFO: Set cluster from maintenance mode to normal
INFO: Delete cluster property "maintenance-mode" in crm_config
INFO: Done (log saved to /var/log/crmsh/crmsh.log on sle16-1)
```

#### Purge SBD while resources are running

```
# crm sbd purge
WARNING: Please stop all running resources and try again
WARNING: Or use 'crm -F/--force' option to leverage maintenance mode
WARNING: Understand risks that running RA has no cluster protection while the cluster is in maintenance mode and restarting
INFO: Aborting the configuration change attempt
```

#### Add a device

```
# crm sbd device add /dev/sda6
INFO: Configured sbd devices: /dev/sda5
INFO: Append devices: /dev/sda6
WARNING: Please stop all running resources and try again
WARNING: Or use 'crm -F/--force' option to leverage maintenance mode
WARNING: Understand risks that running RA has no cluster protection while the cluster is in maintenance mode and restarting
INFO: Aborting the configuration change attempt
```

#### Remove a device

```
# crm sbd device remove /dev/sda6
INFO: Configured sbd devices: /dev/sda5;/dev/sda6
INFO: Remove devices: /dev/sda6
WARNING: Please stop all running resources and try again
WARNING: Or use 'crm -F/--force' option to leverage maintenance mode
WARNING: Understand risks that running RA has no cluster protection while the cluster is in maintenance mode and restarting
INFO: Aborting the configuration change attempt
```

#### Configure SBD while DLM is running

```
# crm sbd configure watchdog-timeout=40
INFO: No 'msgwait-timeout=' specified in the command, use 2*watchdog timeout: 80
WARNING: Please stop all running resources and try again
WARNING: Or use 'crm -F/--force' option to leverage maintenance mode
WARNING: Understand risks that running RA has no cluster protection while the cluster is in maintenance mode and restarting
INFO: Aborting the configuration change attempt

# Leverage maintenance mode
# crm -F sbd configure watchdog-timeout=40
INFO: No 'msgwait-timeout=' specified in the command, use 2*watchdog timeout: 80
INFO: Set cluster to maintenance mode
WARNING: "maintenance-mode" in crm_config is set to true, it was false
WARNING: Please stop DLM related resources (gfs2-clone) and try again
INFO: Set cluster from maintenance mode to normal
INFO: Delete cluster property "maintenance-mode" in crm_config
```
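The control flow described in the Solution section can be sketched with toy stand-ins. Everything below is a simplified illustration, not crmsh's actual implementation: the real `leverage_maintenance_mode` and `able_to_restart_cluster` live in `crmsh/utils.py` and query the CIB and crm_mon, while here the cluster state is reduced to a dictionary and two flags:

```python
from contextlib import contextmanager

# Toy stand-ins; in crmsh these would be CIB properties and crm_mon queries.
crm_config = {}
resources_running = True
dlm_running = False

@contextmanager
def leverage_maintenance_mode(force=False):
    """Yield True when maintenance mode is enabled for the block,
    and always restore the cluster to normal afterwards."""
    if force:
        crm_config["maintenance-mode"] = "true"
    try:
        yield force
    finally:
        if force:
            crm_config.pop("maintenance-mode", None)

def able_to_restart_cluster(in_maintenance_mode):
    """Simplified decision table mirroring the description above."""
    if not resources_running:
        return True
    if in_maintenance_mode and not dlm_running:
        return True
    return False  # the caller aborts before touching any configuration

# The sbd subcommands follow this shape: check first, then change
# config and restart, all under the (possibly enabled) maintenance mode.
with leverage_maintenance_mode(force=True) as enabled:
    if able_to_restart_cluster(enabled):
        pass  # update /etc/sysconfig/sbd, then restart the cluster
```

The key property shown is that the `finally` clause always removes the `maintenance-mode` property again, matching the `INFO: Set cluster from maintenance mode to normal` lines in the transcripts above.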
2 parents 722ae58 + b387201 commit 0c3bd0a

7 files changed: +94 −89 lines changed
crmsh/sbd.py

Lines changed: 5 additions & 18 deletions
```diff
@@ -593,22 +593,6 @@ def enable_sbd_service(self):
             logger.info("Enable %s on node %s", constants.SBD_SERVICE, node)
             service_manager.enable_service(constants.SBD_SERVICE, node)
 
-    @staticmethod
-    def restart_cluster_if_possible(with_maintenance_mode=False):
-        if not ServiceManager().service_is_active(constants.PCMK_SERVICE):
-            return
-        if not xmlutil.CrmMonXmlParser().is_non_stonith_resource_running():
-            bootstrap.restart_cluster()
-        elif with_maintenance_mode:
-            if not utils.is_dlm_running():
-                bootstrap.restart_cluster()
-            else:
-                logger.warning("Resource is running, need to restart cluster service manually on each node")
-        else:
-            logger.warning("Resource is running, need to restart cluster service manually on each node")
-            logger.warning("Or, run with `crm -F` or `--force` option, the `sbd` subcommand will leverage maintenance mode for any changes that require restarting sbd.service")
-            logger.warning("Understand risks that running RA has no cluster protection while the cluster is in maintenance mode and restarting")
-
     def configure_sbd(self):
         '''
         Configure fence_sbd resource and related properties
@@ -746,6 +730,9 @@ def init_and_deploy_sbd(self, restart_first=False):
         self._load_attributes_from_bootstrap()
 
         with utils.leverage_maintenance_mode() as enabled:
+            if not utils.able_to_restart_cluster(enabled):
+                return
+
             self.initialize_sbd()
             self.update_configuration()
             self.enable_sbd_service()
@@ -760,7 +747,7 @@ def init_and_deploy_sbd(self, restart_first=False):
         restart_cluster_first = restart_first or \
             (self.diskless_sbd and not ServiceManager().service_is_active(constants.SBD_SERVICE))
         if restart_cluster_first:
-            SBDManager.restart_cluster_if_possible(with_maintenance_mode=enabled)
+            bootstrap.restart_cluster()
 
         self.configure_sbd()
         bootstrap.adjust_properties(with_sbd=True)
@@ -770,7 +757,7 @@ def init_and_deploy_sbd(self, restart_first=False):
         # This helps prevent unexpected issues, such as nodes being fenced
         # due to large SBD_WATCHDOG_TIMEOUT values combined with smaller timeouts.
         if not restart_cluster_first:
-            SBDManager.restart_cluster_if_possible(with_maintenance_mode=enabled)
+            bootstrap.restart_cluster()
 
     def join_sbd(self, remote_user, peer_host):
         '''
```

crmsh/ui_sbd.py

Lines changed: 29 additions & 14 deletions
```diff
@@ -517,8 +517,11 @@ def _device_remove(self, devices_to_remove: typing.List[str]):
 
         logger.info("Remove devices: %s", ';'.join(devices_to_remove))
         update_dict = {"SBD_DEVICE": ";".join(left_device_list)}
-        sbd.SBDManager.update_sbd_configuration(update_dict)
-        sbd.SBDManager.restart_cluster_if_possible()
+        with utils.leverage_maintenance_mode() as enabled:
+            if not utils.able_to_restart_cluster(enabled):
+                return
+            sbd.SBDManager.update_sbd_configuration(update_dict)
+            bootstrap.restart_cluster()
 
     @command.completers_repeating(sbd_device_completer)
     def do_device(self, context, *args) -> bool:
@@ -601,22 +604,34 @@ def do_purge(self, context, *args) -> bool:
         if not self._service_is_active(constants.SBD_SERVICE):
             return False
 
+        purge_crashdump = False
+        if args:
+            if args[0] == "crashdump":
+                if not self._is_crashdump_configured():
+                    logger.error("SBD crashdump is not configured")
+                    return False
+                purge_crashdump = True
+            else:
+                logger.error("Invalid argument: %s", ' '.join(args))
+                logger.info("Usage: crm sbd purge [crashdump]")
+                return False
+
         utils.check_all_nodes_reachable("purging SBD")
 
-        if args and args[0] == "crashdump":
-            if not self._is_crashdump_configured():
-                logger.error("SBD crashdump is not configured")
+        with utils.leverage_maintenance_mode() as enabled:
+            if not utils.able_to_restart_cluster(enabled):
                 return False
-            self._set_crashdump_option(delete=True)
-            update_dict = self._set_crashdump_in_sysconfig(restore=True)
-            if update_dict:
-                sbd.SBDManager.update_sbd_configuration(update_dict)
-            sbd.SBDManager.restart_cluster_if_possible()
-            return True
 
-        sbd.purge_sbd_from_cluster()
-        sbd.SBDManager.restart_cluster_if_possible()
-        return True
+            if purge_crashdump:
+                self._set_crashdump_option(delete=True)
+                update_dict = self._set_crashdump_in_sysconfig(restore=True)
+                if update_dict:
+                    sbd.SBDManager.update_sbd_configuration(update_dict)
+            else:
+                sbd.purge_sbd_from_cluster()
+
+            bootstrap.restart_cluster()
+            return True
 
     def _print_sbd_type(self):
         if not self.service_manager.service_is_active(constants.SBD_SERVICE):
```

crmsh/utils.py

Lines changed: 28 additions & 0 deletions
```diff
@@ -3306,4 +3306,32 @@ def validate_and_get_reachable_nodes(
         member_list.remove(node)
 
     return member_list + remote_list
+
+
+def able_to_restart_cluster(in_maintenance_mode: bool = False) -> bool:
+    """
+    Check whether it is able to restart cluster now
+    1. If pacemaker is not running, return True
+    2. If no non-stonith resource is running, return True
+    3. If in maintenance mode and DLM is not running, return True
+    4. Otherwise, return False with warning messages to guide user
+    """
+    if not ServiceManager().service_is_active(constants.PCMK_SERVICE):
+        return True
+    crm_mon_parser = xmlutil.CrmMonXmlParser()
+    if not crm_mon_parser.is_non_stonith_resource_running():
+        return True
+    elif in_maintenance_mode:
+        if is_dlm_running():
+            dlm_related_ids = crm_mon_parser.get_resource_top_parent_id_set_via_type(constants.DLM_CONTROLD_RA)
+            logger.warning("Please stop DLM related resources (%s) and try again", ', '.join(dlm_related_ids))
+            return False
+        else:
+            return True
+    else:
+        logger.warning("Please stop all running resources and try again")
+        logger.warning("Or use 'crm -F/--force' option to leverage maintenance mode")
+        logger.warning("Understand risks that running RA has no cluster protection while the cluster is in maintenance mode and restarting")
+        logger.info("Aborting the configuration change attempt")
+        return False
 # vim:ts=4:sw=4:et:
```

crmsh/xmlutil.py

Lines changed: 7 additions & 0 deletions
```diff
@@ -1627,6 +1627,13 @@ def is_resource_started(self, ra):
         xpath = f'//resource[(@id="{ra}" or @resource_agent="{ra}") and @active="true" and @role="Started"]'
         return bool(self.xml_elem.xpath(xpath))
 
+    def get_resource_top_parent_id_set_via_type(self, ra_type):
+        """
+        Given configured ra type, get the topmost parent ra id set
+        """
+        xpath = f'//resource[@resource_agent="{ra_type}"]'
+        return set([get_topmost_rsc(elem).get('id') for elem in self.xml_elem.xpath(xpath)])
+
     def get_resource_id_list_via_type(self, ra_type):
         """
         Given configured ra type, get the ra id list
```
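The new `get_resource_top_parent_id_set_via_type` helper walks each matching resource up to its topmost wrapper (for example, the clone around a `controld` instance), which is how the "Please stop DLM related resources (gfs2-clone)" warning names `gfs2-clone` rather than the inner resource. A standalone sketch of that idea against a made-up crm_mon-style XML snippet, using only the standard library (the real helper relies on crmsh's `get_topmost_rsc` and lxml):

```python
import xml.etree.ElementTree as ET

# Made-up crm_mon-style XML: a controld resource wrapped in a clone,
# plus a plain Dummy resource. The shapes are illustrative only.
CRM_MON_XML = """
<crm_mon>
  <resources>
    <clone id="gfs2-clone">
      <resource id="gfs2" resource_agent="ocf:pacemaker:controld"/>
    </clone>
    <resource id="d" resource_agent="ocf:pacemaker:Dummy"/>
  </resources>
</crm_mon>
"""

def top_parent_id_set(xml_text: str, ra_type: str) -> set:
    """Return the ids of the topmost wrappers of all resources of ra_type."""
    root = ET.fromstring(xml_text)
    # ElementTree has no getparent(), so build a child -> parent map once.
    parent_of = {child: parent for parent in root.iter() for child in parent}
    ids = set()
    for elem in root.findall(f".//resource[@resource_agent='{ra_type}']"):
        top = elem
        # Climb until the next ancestor would be the <resources> container.
        while parent_of[top].tag != "resources":
            top = parent_of[top]
        ids.add(top.get("id"))
    return ids

print(top_parent_id_set(CRM_MON_XML, "ocf:pacemaker:controld"))  # {'gfs2-clone'}
```

An unwrapped resource is its own topmost parent, so `top_parent_id_set(CRM_MON_XML, "ocf:pacemaker:Dummy")` yields `{'d'}`.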

test/features/sbd_ui.feature

Lines changed: 18 additions & 0 deletions
```diff
@@ -132,3 +132,21 @@ Feature: crm sbd ui test cases
     And Run "crm cluster restart --all" on "hanode1"
     Then Service "sbd.service" is "stopped" on "hanode1"
     Then Service "sbd.service" is "stopped" on "hanode2"
+
+  @clean
+  Scenario: Leverage maintenance mode
+    When Run "crm cluster init -y" on "hanode1"
+    And Run "crm cluster join -c hanode1 -y" on "hanode2"
+    Then Cluster service is "started" on "hanode1"
+    Then Cluster service is "started" on "hanode2"
+    When Run "crm configure primitive d Dummy" on "hanode1"
+    When Try "crm cluster init sbd -s /dev/sda5 -y"
+    Then Expected "Or use 'crm -F/--force' option to leverage maintenance mode" in stderr
+    When Run "crm -F cluster init sbd -s /dev/sda5 -y" on "hanode1"
+    Then Service "sbd" is "started" on "hanode1"
+    And Service "sbd" is "started" on "hanode2"
+    When Try "crm sbd purge"
+    Then Expected "Or use 'crm -F/--force' option to leverage maintenance mode" in stderr
+    When Run "crm -F sbd purge" on "hanode1"
+    Then Service "sbd.service" is "stopped" on "hanode1"
+    Then Service "sbd.service" is "stopped" on "hanode2"
```

test/unittests/test_sbd.py

Lines changed: 2 additions & 53 deletions
```diff
@@ -406,57 +406,6 @@ def test_enable_sbd_service(self, mock_list_cluster_nodes, mock_ServiceManager,
             call("Enable %s on node %s", constants.SBD_SERVICE, 'node2')
         ])
 
-    @patch('crmsh.xmlutil.CrmMonXmlParser')
-    @patch('crmsh.sbd.ServiceManager')
-    def test_restart_cluster_if_possible_return(self, mock_ServiceManager, mock_CrmMonXmlParser):
-        mock_ServiceManager.return_value.service_is_active.return_value = False
-        SBDManager.restart_cluster_if_possible()
-        mock_ServiceManager.return_value.service_is_active.assert_called_once_with(constants.PCMK_SERVICE)
-        mock_CrmMonXmlParser.assert_not_called()
-
-    @patch('logging.Logger.warning')
-    @patch('crmsh.utils.is_dlm_running')
-    @patch('crmsh.xmlutil.CrmMonXmlParser')
-    @patch('crmsh.sbd.ServiceManager')
-    def test_restart_cluster_if_possible_manually(
-            self, mock_ServiceManager, mock_CrmMonXmlParser, mock_is_dlm_running, mock_logger_warning,
-    ):
-        mock_ServiceManager.return_value.service_is_active.return_value = True
-        mock_CrmMonXmlParser.return_value.is_non_stonith_resource_running.return_value = True
-        mock_is_dlm_running.return_value = False
-        SBDManager.restart_cluster_if_possible()
-        mock_ServiceManager.return_value.service_is_active.assert_called_once_with(constants.PCMK_SERVICE)
-        mock_logger_warning.assert_has_calls([
-            call("Resource is running, need to restart cluster service manually on each node"),
-            call("Or, run with `crm -F` or `--force` option, the `sbd` subcommand will leverage maintenance mode for any changes that require restarting sbd.service"),
-            call("Understand risks that running RA has no cluster protection while the cluster is in maintenance mode and restarting")
-        ])
-
-    @patch('logging.Logger.warning')
-    @patch('crmsh.utils.is_dlm_running')
-    @patch('crmsh.xmlutil.CrmMonXmlParser')
-    @patch('crmsh.sbd.ServiceManager')
-    def test_restart_cluster_if_possible_dlm_running(
-            self, mock_ServiceManager, mock_CrmMonXmlParser, mock_is_dlm_running, mock_logger_warning,
-    ):
-        mock_ServiceManager.return_value.service_is_active.return_value = True
-        mock_CrmMonXmlParser.return_value.is_non_stonith_resource_running.return_value = True
-        mock_is_dlm_running.return_value = True
-        SBDManager.restart_cluster_if_possible(with_maintenance_mode=True)
-        mock_ServiceManager.return_value.service_is_active.assert_called_once_with(constants.PCMK_SERVICE)
-        mock_logger_warning.assert_called_once_with("Resource is running, need to restart cluster service manually on each node")
-
-    @patch('crmsh.bootstrap.restart_cluster')
-    @patch('logging.Logger.warning')
-    @patch('crmsh.xmlutil.CrmMonXmlParser')
-    @patch('crmsh.sbd.ServiceManager')
-    def test_restart_cluster_if_possible(self, mock_ServiceManager, mock_CrmMonXmlParser, mock_logger_warning, mock_restart_cluster):
-        mock_ServiceManager.return_value.service_is_active.return_value = True
-        mock_CrmMonXmlParser.return_value.is_non_stonith_resource_running.return_value = False
-        SBDManager.restart_cluster_if_possible()
-        mock_ServiceManager.return_value.service_is_active.assert_called_once_with(constants.PCMK_SERVICE)
-        mock_restart_cluster.assert_called_once()
-
     @patch('crmsh.bootstrap.prompt_for_string')
     def test_prompt_for_sbd_device_diskless(self, mock_prompt_for_string):
         mock_prompt_for_string.return_value = "none"
@@ -644,10 +593,10 @@ def test_init_and_deploy_sbd_not_config_sbd(self, mock_ServiceManager):
         sbdmanager_instance._load_attributes_from_bootstrap.assert_not_called()
 
     @patch('crmsh.bootstrap.adjust_properties')
-    @patch('crmsh.sbd.SBDManager.restart_cluster_if_possible')
+    @patch('crmsh.bootstrap.restart_cluster')
     @patch('crmsh.sbd.SBDManager.enable_sbd_service')
     @patch('crmsh.sbd.ServiceManager')
-    def test_init_and_deploy_sbd(self, mock_ServiceManager, mock_enable_sbd_service, mock_restart_cluster_if_possible, mock_adjust_properties):
+    def test_init_and_deploy_sbd(self, mock_ServiceManager, mock_enable_sbd_service, mock_restart_cluster, mock_adjust_properties):
         mock_bootstrap_ctx = Mock(cluster_is_running=True)
         sbdmanager_instance = SBDManager(bootstrap_context=mock_bootstrap_ctx)
         sbdmanager_instance.get_sbd_device_from_bootstrap = Mock()
```

test/unittests/test_ui_sbd.py

Lines changed: 5 additions & 4 deletions
```diff
@@ -469,14 +469,14 @@ def test_device_remove_last_dev(self):
             self.sbd_instance_diskbased._device_remove(["/dev/sda1"])
         self.assertEqual(str(e.exception), "Not allowed to remove all devices")
 
-    @mock.patch('crmsh.sbd.SBDManager.restart_cluster_if_possible')
+    @mock.patch('crmsh.bootstrap.restart_cluster')
     @mock.patch('crmsh.sbd.SBDManager.update_sbd_configuration')
     @mock.patch('logging.Logger.info')
-    def test_device_remove(self, mock_logger_info, mock_update_sbd_configuration, mock_restart_cluster_if_possible):
+    def test_device_remove(self, mock_logger_info, mock_update_sbd_configuration, mock_restart_cluster):
         self.sbd_instance_diskbased.device_list_from_config = ["/dev/sda1", "/dev/sda2"]
         self.sbd_instance_diskbased._device_remove(["/dev/sda1"])
         mock_update_sbd_configuration.assert_called_once_with({"SBD_DEVICE": "/dev/sda2"})
-        mock_restart_cluster_if_possible.assert_called_once()
+        mock_restart_cluster.assert_called_once()
         mock_logger_info.assert_called_once_with("Remove devices: %s", "/dev/sda1")
 
     def test_do_device_no_service(self):
@@ -571,9 +571,10 @@ def test_do_purge_no_service(self, mock_purge_sbd_from_cluster):
         self.assertFalse(res)
         mock_purge_sbd_from_cluster.assert_not_called()
 
+    @mock.patch('crmsh.bootstrap.restart_cluster')
     @mock.patch('crmsh.utils.check_all_nodes_reachable')
     @mock.patch('crmsh.sbd.purge_sbd_from_cluster')
-    def test_do_purge(self, mock_purge_sbd_from_cluster, mock_check_all_nodes_reachable):
+    def test_do_purge(self, mock_purge_sbd_from_cluster, mock_check_all_nodes_reachable, mock_restart_cluster):
         self.sbd_instance_diskbased._load_attributes = mock.Mock()
         self.sbd_instance_diskbased._service_is_active = mock.Mock(return_value=True)
         res = self.sbd_instance_diskbased.do_purge(mock.Mock())
```
