Support multi-GPU eval in tallyqa.py (#69) #334
Open
BuildWithAbid wants to merge 1 commit
Conversation
Eval is embarrassingly parallel per sample, so each rank evaluates a shard of the dataset and the counts are summed with `all_reduce` at the end. Launch with:

```
torchrun --nproc_per_node=<N> -m moondream.eval.tallyqa --model <path>
```

The single-GPU / CPU / MPS path is unchanged: when `LOCAL_RANK` is not in the environment, the process group is never initialized.

Also replaces the few `args.debug` references inside `eval_tallyqa` with the `debug` parameter it already accepts, so the function can be called from `eval_all.py` without relying on module-level globals.
Closes #69 (reference pattern; I'll port the other `moondream/eval/*.py` scripts in a follow-up PR once the approach here is approved).

Approach
TallyQA eval is embarrassingly parallel per sample (no gradient sync, each row is independent), so multi-GPU is just dataset sharding plus a single `all_reduce` at the end:

- When launched with `torchrun`, init an NCCL process group and pin each process to its local-rank device.
- Each rank evaluates `dataset.shard(world_size, rank)`.
- `all_reduce` the four counters (`total`, `correct`, `total_simple`, `correct_simple`); compute accuracies on rank 0.
- No new dependencies; just `torch.distributed`, which ships with torch.
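A minimal sketch of that flow, assuming a Hugging Face `datasets`-style `shard`; `eval_tallyqa_sketch` is an illustrative name and the model call is elided, so this is the shape of the change rather than the exact code in the diff:

```python
import os

import torch
import torch.distributed as dist


def eval_tallyqa_sketch(dataset) -> None:
    # torchrun sets LOCAL_RANK; its absence means single-GPU / CPU / MPS,
    # and no process group is ever created.
    distributed = "LOCAL_RANK" in os.environ
    rank, world_size = 0, 1
    if distributed:
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)  # pin this process to its GPU
        dist.init_process_group(backend="nccl")
        rank, world_size = dist.get_rank(), dist.get_world_size()
        dataset = dataset.shard(num_shards=world_size, index=rank)

    total = correct = total_simple = correct_simple = 0
    for row in dataset:
        ...  # run the model on `row` and update the four counters

    if distributed:
        # Sum the per-rank counters; afterwards every rank holds the global totals.
        counts = torch.tensor(
            [total, correct, total_simple, correct_simple], device="cuda"
        )
        dist.all_reduce(counts, op=dist.ReduceOp.SUM)
        total, correct, total_simple, correct_simple = counts.tolist()

    if rank == 0:  # only rank 0 reports
        print(f"Accuracy: {100 * correct / max(total, 1):.2f}")
        print(f"Simple accuracy: {100 * correct_simple / max(total_simple, 1):.2f}")

    if distributed:
        dist.destroy_process_group()  # clean shutdown, no NCCL exit warnings
```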
Usage

Single GPU / CPU / MPS (unchanged):
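Presumably the plain module run (inferred from the `torchrun` command below, not quoted from the diff):

```
python -m moondream.eval.tallyqa --model <path>
```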
Multi-GPU:
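```
torchrun --nproc_per_node=<N> -m moondream.eval.tallyqa --model <path>
```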
The distributed path only activates when `LOCAL_RANK` is set in the environment (i.e. the process was launched by `torchrun`), so existing single-GPU invocations behave exactly as before.
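That activation check can be as small as a single environment lookup (`torchrun` exports `LOCAL_RANK`, `RANK`, and `WORLD_SIZE`; a plain `python` launch does not):

```python
import os

# True only under torchrun; plain `python` runs never initialize a process group.
distributed = "LOCAL_RANK" in os.environ
```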
Notes

- `print` calls only run on rank 0 to keep stdout clean.
- `dist.destroy_process_group()` is called on shutdown to avoid NCCL exit warnings.
- Also replaces the `args.debug` references inside `eval_tallyqa` with the `debug` parameter it already accepts; this means `eval_all.py` can call it without leaking module globals. Happy to split this into a separate commit/PR if preferred.
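A sketch of the first two notes; `rank0_print` and the `try`/`finally` placement are illustrative, not identifiers from this PR:

```python
import os

import torch.distributed as dist


def rank0_print(*args, **kwargs):
    # Only rank 0 writes to stdout; the other ranks stay silent.
    if int(os.environ.get("RANK", "0")) == 0:
        print(*args, **kwargs)


try:
    ...  # run the eval
finally:
    # Tear down the process group so NCCL doesn't warn at interpreter exit.
    if dist.is_available() and dist.is_initialized():
        dist.destroy_process_group()
```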
Test plan

- `black --check` passes.
- Single-GPU path with `LOCAL_RANK` unset (unchanged behavior).
- Distributed path: `dist.init_process_group`, `dataset.shard(contiguous=True)`, `dist.all_reduce`, and the rank-0 print guards.

Once this pattern is accepted I'll port the other eval scripts (`pope.py`, `chartqa.py`, `textvqa.py`, `docvqa.py`, `mmstar.py`, `coco_map.py`, `naturalbench.py`, `countbenchqa.py`, `realworldqa.py`) and update `eval_all.py` in a follow-up.