
Conversation

@casassg (Contributor) commented Dec 4, 2025

Summary

Fixes an issue where nodes are permanently marked as DOWN when the resume.py script encounters a GCP API error (such as ZONE_RESOURCE_POOL_EXHAUSTED / stockouts) during bulk insert operations.

Description

Currently, when resume.py handles a failed bulk insert operation, it calls down_nodes_notify_jobs, which executes:

    scontrol update nodename=... state=down ...

This forces the node into a manual failure state, requiring administrator intervention to resume. This behavior is detrimental during transient cloud errors (like stockouts) because it prevents Slurm from retrying the allocation on other available nodes or retrying the same node later.

This PR changes the state update from state=down to state=POWER_DOWN. This allows Slurm to return the node to a power-save state (IDLE~), keeping it eligible for future scheduling attempts without manual intervention.
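For readers less familiar with Slurm node states, a minimal illustration of the operational difference (the node name and reason string are placeholders, not values from this PR):

```sh
# Old behavior: the node is administratively failed and stays DOWN
# until an operator clears it by hand:
scontrol update nodename=compute-0 state=DOWN reason="bulk insert failed"
scontrol update nodename=compute-0 state=RESUME   # manual recovery step

# New behavior: the node is handed back to Slurm's power-save machinery
# and settles at IDLE~ (powered down but still schedulable):
scontrol update nodename=compute-0 state=POWER_DOWN reason="bulk insert failed"
```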

Changes

  • Modified down_nodes_notify_jobs in resume.py to use state=POWER_DOWN instead of state=down.
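For context, a simplified sketch of what the updated call site might look like (based on the hunk quoted in the review thread below; the reason_quoted helper and the elided job-notification logic are assumptions, not verbatim repo code):

```python
import shlex

def down_nodes_notify_jobs(nodes, reason):
    """Hand failed nodes back to Slurm's power-save machinery instead of
    marking them DOWN, so they stay eligible for future scheduling."""
    nodelist = util.to_hostlist(nodes)   # util/log/run/lookup come from resume.py
    reason_quoted = shlex.quote(reason)  # assumed quoting; the actual helper may differ
    log.error(f"Marking nodes {nodelist} as POWER_DOWN, reason: {reason}")
    # Previously state=down, which required an administrator to clear the node.
    run(
        f"{lookup().scontrol} update nodename={nodelist} "
        f"state=POWER_DOWN reason={reason_quoted}",
        check=False,
    )
```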

Fixes #4940

@casassg requested review from a team and samskillman as code owners on December 4, 2025 21:39
@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @casassg, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the robustness of Slurm node management on Google Cloud Platform by refining how the system reacts to transient API errors during node provisioning. Instead of permanently marking affected nodes as 'down' and requiring manual intervention, the system will now transition them to a 'power-down' state. This allows Slurm to automatically re-evaluate and potentially reuse these nodes once resources become available, ensuring smoother operation and reducing administrative overhead.

Highlights

  • Improved Node State Handling: Changed the resume.py script to mark nodes as POWER_DOWN instead of DOWN when GCP API errors (e.g., ZONE_RESOURCE_POOL_EXHAUSTED) occur during bulk insert operations.
  • Automated Recovery: This modification allows Slurm to automatically return nodes to a power-save state (IDLE~), making them eligible for future scheduling attempts without requiring manual administrator intervention.
  • Enhanced Resilience: The change prevents nodes from being permanently marked as failed due to transient cloud errors, improving the overall resilience and automation of the Slurm cluster on GCP.

@gemini-code-assist (bot) left a comment:


Code Review

This pull request correctly addresses an issue where nodes were permanently marked as DOWN due to transient cloud errors. Changing the state to POWER_DOWN is a good solution that improves cluster resilience by allowing Slurm to retry node allocation. My review includes a suggestion to update a related log message and the function's docstring to align with this change, ensuring code clarity and maintainability.

Commit: …dules/slurm_files/scripts/resume.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@casassg changed the title from "power down node instead of mark as down" to "fix(slurm): power down node instead of mark as down" on Dec 4, 2025
@sarthakag (Contributor) commented:

Hi @casassg, can you please re-target this PR against develop instead of main?

Review thread on resume.py:

```diff
 nodelist = util.to_hostlist(nodes)
-log.error(f"Marking nodes {nodelist} as DOWN, reason: {reason}")
-run(f"{lookup().scontrol} update nodename={nodelist} state=down reason={reason_quoted}", check=False)
+log.error(f"Marking nodes {nodelist} as POWER_DOWN, reason: {reason}")
+run(f"{lookup().scontrol} update nodename={nodelist} state=POWER_DOWN reason={reason_quoted}", check=False)
```
A collaborator commented:

Thanks @casassg for the contribution. While there is merit in this idea, there are a few concerns with this implementation:

  1. This method is not only called for bulk insert failures; applying POWER_DOWN in the other cases is not suitable.
  2. Bulk insert failures can happen for various reasons (quota, permissions, stockout, etc.). Applying a blanket POWER_DOWN state may result in an infinite loop where Slurm keeps powering the node up for jobs while GCP keeps denying the request.

@casassg (Author) replied:

Mmm, fair enough. I can try to limit the power down to stockouts.
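For illustration, one possible shape for that narrowing (a hypothetical sketch, not code from this PR; the error-code strings and function name are assumptions):

```python
# Hypothetical: only treat transient stockouts as retryable.
TRANSIENT_ERROR_CODES = {"ZONE_RESOURCE_POOL_EXHAUSTED", "RESOURCE_POOL_EXHAUSTED"}

def failed_node_state(error_code: str) -> str:
    """POWER_DOWN for stockouts so Slurm can retry elsewhere or later;
    DOWN for persistent errors (quota, permissions) to avoid a loop where
    Slurm keeps powering up a node that GCP keeps rejecting."""
    return "POWER_DOWN" if error_code in TRANSIENT_ERROR_CODES else "DOWN"
```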

@casassg (Author) replied:

Let me know what you think, @bytetwin.

@casassg (Author) commented:

Any way we can get this checked or CI running?

A contributor replied:

Approved the CI workflows.

@casassg changed the base branch from main to develop on December 8, 2025 22:55
@casassg (Author) commented Dec 8, 2025:

> Hi @casassg, can you please re-target this PR against develop instead of main?

rebased

@sarthakag added the release-chore (to not include into release notes) label on Dec 30, 2025


Development

Successfully merging this pull request may close these issues.

Nodes permanently marked DOWN on resume failure (e.g., stockouts) instead of returning to pool

3 participants