fix(slurm): power down node instead of mark as down #4941
base: develop
Conversation
Summary of Changes

Hello @casassg, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request enhances the robustness of Slurm node management on Google Cloud Platform by refining how the system reacts to transient API errors during node provisioning. Instead of permanently marking affected nodes as 'down' and requiring manual intervention, the system will now transition them to a 'power-down' state. This allows Slurm to automatically re-evaluate and potentially reuse these nodes once resources become available, ensuring smoother operation and reducing administrative overhead.
Code Review
This pull request correctly addresses an issue where nodes were permanently marked as DOWN due to transient cloud errors. Changing the state to POWER_DOWN is a good solution that improves cluster resilience by allowing Slurm to retry node allocation. My review includes a suggestion to update a related log message and the function's docstring to align with this change, ensuring code clarity and maintainability.
…dules/slurm_files/scripts/resume.py

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Hi @casassg, can you please re-target this PR against develop instead of main?
```diff
 nodelist = util.to_hostlist(nodes)
-log.error(f"Marking nodes {nodelist} as DOWN, reason: {reason}")
-run(f"{lookup().scontrol} update nodename={nodelist} state=down reason={reason_quoted}", check=False)
+log.error(f"Marking nodes {nodelist} as POWER_DOWN, reason: {reason}")
```
Thanks @casassg for the contribution. While there is merit in this idea, there are a few concerns with this implementation:
- This method is not called only for bulk insert failures. Making it power_down in the other cases is not suitable.
- Bulk insert failures can have various causes, such as quota, permissions, or stockouts. Applying a blanket power_down state may result in an infinite loop where Slurm keeps powering up the node for jobs while GCP keeps denying the request.
Mmm, fair enough. I can try to restrict power_down to stockout errors.
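For reference, one possible shape for that narrowing (the error-code plumbing and names here are assumptions for illustration, not code from this PR):

```python
# Sketch: choose the Slurm state from the bulk-insert error code.
# ZONE_RESOURCE_POOL_EXHAUSTED is the stockout code mentioned in this PR;
# the set membership and the function name are assumed.
STOCKOUT_ERROR_CODES = {"ZONE_RESOURCE_POOL_EXHAUSTED"}

def state_for_failed_node(error_code: str) -> str:
    """Return the scontrol state to apply to a node that failed bulk insert."""
    if error_code in STOCKOUT_ERROR_CODES:
        # Transient capacity issue: power down so Slurm can retry later.
        return "POWER_DOWN"
    # Quota, permission, and other persistent errors: keep DOWN to force
    # operator attention and avoid the power-up/deny loop described above.
    return "DOWN"
```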
lmk what you think @bytetwin
any way we can get this checked or CI running?
Approved the CI workflows.
rebased
Summary
Fixes an issue where nodes are permanently marked as `DOWN` when the `resume.py` script encounters a GCP API error (such as `ZONE_RESOURCE_POOL_EXHAUSTED` / stockouts) during bulk insert operations.

Description
Currently, when `resume.py` handles a failed bulk insert operation, it calls `down_nodes_notify_jobs`, which executes:

`scontrol update nodename=... state=down ...`

This forces the node into a manual failure state, requiring administrator intervention to resume. This behavior is detrimental during transient cloud errors (like stockouts) because it prevents Slurm from retrying the allocation on other available nodes or retrying the same node later.
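For illustration (standard Slurm `scontrol` semantics, not code from this PR), the manual recovery a `DOWN` node requires looks roughly like:

```python
import subprocess

# Hypothetical node name; with state=down an operator must clear the state
# manually before Slurm will schedule on the node again.
node = "mycluster-compute-0"
subprocess.run(
    ["scontrol", "update", f"nodename={node}", "state=resume"],
    check=True,
)
```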
This PR changes the state update from `state=down` to `state=POWER_DOWN`. This allows Slurm to return the node to a power-save state (`IDLE~`), keeping it eligible for future scheduling attempts without manual intervention.

Changes

- Updated `down_nodes_notify_jobs` in `resume.py` to use `state=POWER_DOWN` instead of `state=down`.
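One way to verify the new behavior (an assumed check, not part of this PR): after a failed bulk insert, affected nodes should report a powered-down state such as `idle~` in `sinfo` rather than `down`:

```python
import subprocess

# Print each node with its short state; powered-down cloud nodes carry a
# "~" suffix (e.g. "idle~"), while failed nodes would show "down".
out = subprocess.run(
    ["sinfo", "-N", "-o", "%N %t"],
    capture_output=True, text=True, check=True,
).stdout
print(out)
```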
down_nodes_notify_jobsinresume.pyto usestate=POWER_DOWNinstead ofstate=down.Fixes #4940