[PT 2.6 perf fix] Remove expensive memory_full_info kernel call #40
Why are these changes needed?
[PyTorch 2.6 perf fix]
The Ray monitoring process (`agent.py`) periodically collects memory stats. That collection used an expensive low-level kernel call (see the Ray call site), which caused the training process to stall. Removing the expensive kernel call fixed the regression on PyTorch 2.6.
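To illustrate the cost difference, here is a hedged sketch using `psutil` (the library Ray's stats collection is built on). On Linux, `Process.memory_full_info()` parses `/proc/<pid>/smaps` to compute USS/PSS, which is far more expensive than `Process.memory_info()`, which only reads `/proc/<pid>/stat`; the fix amounts to relying on the cheap call for periodic monitoring. This is an illustrative snippet, not the actual diff:

```python
import psutil

proc = psutil.Process()

# Cheap: reads /proc/<pid>/stat; RSS/VMS are sufficient for
# periodic memory monitoring.
rss = proc.memory_info().rss

# Expensive on Linux: memory_full_info() walks /proc/<pid>/smaps
# to compute USS/PSS, and can stall the monitored process.
# full = proc.memory_full_info()  # the call removed by this PR

print(rss)
```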
Related issue number
The call was first flagged as expensive in 2019:
- Reduce reporter CPU by ericl · Pull Request #6553 · ray-project/ray

and then reintroduced unnoticed in 2022:
- [Core] Export additional metrics for workers and Raylet memory by mwtian · Pull Request #25418 · ray-project/ray
As a next step, we would like to contribute to Ray OSS by exposing allowed metrics as a config.
Checks
- I've signed off every commit (using `git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- If I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.