Compose split routed experts from vLLM responses #1349
Open
S1ro1 wants to merge 7 commits into
Conversation
Cursor Bugbot has reviewed your changes and found 3 potential issues.
…)" This reverts commit 5be2f9e.
…rts-revert-1198 # Conflicts: # verifiers/clients/renderer_client.py # verifiers/types.py
Member
can we put this in a utils file? trying to keep most folders unified by object type, e.g.
Contributor
Author
Yeah, we can clean this up afterward. For now I'm not 100% sure we can merge this: it hits inference speed quite a lot and adds significant engineering overhead on prime-rl.

Summary

- `RoutedExperts` token payload type backed by `int16` bytes.
- Composes `prompt_routed_experts` plus completion routed experts into one sequence-aligned payload.
- Rebased on `main` so the PR also carries the renderer multimodal sidecar changes without conflicts.

Validation

- `uv run pytest tests/test_renderer_client.py tests/test_env_server.py -q`
- `uvx ruff@0.15.12 format --isolated --check .`
- `uvx ruff@0.15.12 check --isolated .`

Note
Medium Risk: touches token parsing/serialization paths across multiple clients and changes the `routed_experts` wire format/type, so malformed or partial payloads, or shape mismatches, could break downstream consumers and truncation behavior.

Overview
- Adds a new `RoutedExperts` bytes-based type plus `verifiers/clients/routed_experts.py` utilities to decode base64 `int16` routed-expert payloads and compose split prompt + completion routing into a single sequence-aligned buffer.
- Updates the OpenAI chat, OpenAI completions, and renderer clients to read routed-expert data from `model_extra`/response fields (`prompt_routed_experts` + completion `routed_experts`), removing the previous inline base85/NumPy decode path, and threads the composed routing into `ResponseTokens`.
- Adjusts response token truncation to slice the new bytes-based routed-experts buffer correctly so routing metadata remains consistent when prompts/completions are clipped.