Add prefix aware request scheduling proposal #602
1. **Prefix affinity consistent hashing**

   This goes a step beyond session affinity by using a prefix-aware hash function to route requests with similar prefixes to the same or similar servers. A naive hash function can simply take the hash of the first N characters/tokens of the request, so that all requests with the same first N characters/tokens are routed to the same server. The [vLLM production stack](https://github.com/vllm-project/production-stack/issues/59) is exploring this strategy using simhash, and preliminary experiments showed mixed results. KubeAI uses a simple strategy of hashing only the request prefix up to a configurable `prefixCharLength`. Its effectiveness is likely highly dependent on the input length distribution.
Is that a moving window of up to `prefixCharLength`, or does it always have exactly `prefixCharLength` characters?
Exactly the first `prefixCharLength` characters: https://github.com/substratusai/kubeai/blob/8007ab256b536f8b15e31dd14bd2c1b9eadb3e3e/internal/loadbalancer/group.go#L70
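For illustration, here is a minimal Go sketch of that naive strategy: hash exactly the first `prefixCharLength` characters (or the whole prompt if it is shorter) and map the hash onto a fixed endpoint list. The FNV hash, the endpoint slice, and the modulo placement are assumptions for the example, not the KubeAI or EPP implementation.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// prefixHashEndpoint hashes only the first prefixCharLength bytes of the
// prompt, so requests sharing that prefix land on the same endpoint.
// Shorter prompts are hashed in full. (Bytes, not runes, for simplicity.)
func prefixHashEndpoint(prompt string, prefixCharLength int, endpoints []string) string {
	prefix := prompt
	if len(prompt) > prefixCharLength {
		prefix = prompt[:prefixCharLength]
	}
	h := fnv.New64a()
	h.Write([]byte(prefix))
	return endpoints[h.Sum64()%uint64(len(endpoints))]
}

func main() {
	endpoints := []string{"pod-a", "pod-b", "pod-c"}
	// Two requests that share the same first 100 characters map to the same pod.
	fmt.Println(prefixHashEndpoint("You are a helpful assistant. Summarize ...", 100, endpoints))
}
```

A real scheduler would also need to rebalance when the endpoint list changes, which is where consistent hashing comes in.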
1. Prefix affinity needs to be aware of the server load, otherwise we will create hot spots. We can use queue length and KV cache utilization to understand the server load. This is similar to the [queue depth threshold](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/2a615e981228aa6ffc2a89219c986ac863dde776/pkg/epp/scheduling/scheduler.go#L40) for LoRA affinity.
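To make that load-awareness concrete, here is a hypothetical Go sketch (not code from this repo): prefer the prefix-affine endpoint only while its queue length and KV cache utilization stay under configurable thresholds, otherwise fall back to the least-queued endpoint. The `podMetrics` struct and the threshold values are illustrative assumptions.

```go
package main

import "fmt"

// podMetrics is a hypothetical snapshot of per-endpoint load, analogous to
// the queue depth and KV cache utilization metrics the EPP already scrapes.
type podMetrics struct {
	Name            string
	WaitingQueueLen int
	KVCacheUsage    float64 // fraction of KV cache blocks in use, 0.0-1.0
}

const (
	queueThreshold   = 5   // assumed queue depth above which affinity is abandoned
	kvCacheThreshold = 0.8 // assumed KV cache utilization ceiling
)

// pickEndpoint prefers the prefix-affine pod unless it is overloaded,
// in which case it falls back to the least-queued pod to avoid hot spots.
func pickEndpoint(affine podMetrics, all []podMetrics) podMetrics {
	if affine.WaitingQueueLen <= queueThreshold && affine.KVCacheUsage <= kvCacheThreshold {
		return affine
	}
	best := all[0]
	for _, p := range all[1:] {
		if p.WaitingQueueLen < best.WaitingQueueLen {
			best = p
		}
	}
	return best
}

func main() {
	pods := []podMetrics{
		{Name: "pod-a", WaitingQueueLen: 9, KVCacheUsage: 0.9},
		{Name: "pod-b", WaitingQueueLen: 1, KVCacheUsage: 0.3},
	}
	// pod-a holds the prefix but is overloaded, so the fallback picks pod-b.
	fmt.Println(pickEndpoint(pods[0], pods).Name)
}
```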
## Proposal
+1 to start with this approach since it seems relatively simple to implement, but also in theory should be more resilient than the other two options.
Cong's PoC is in main...liu-cong:llm-instance-gateway:prefix-poc (or at least, a version of it is) for those interested.
Overall this looks reasonable to me. I do feel strongly about changing the proposal number to match the PR number; a strictly incrementing number can have race-condition concerns if multiple proposals are opened at once, and the PR number will still (roughly, assuming PRs don't take too long to merge) follow the chronological order of creation. Thanks for the great work!
A few nits, otherwise /lgtm
### Report prefix cache indexes on the router

If the router knows what prefixes are currently cached on each model server replica, it can make the optimal decision. A potential solution is to have the model server (or a sidecar) report the KV cache indexes to the router.
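As a sketch of what such a reported index could enable (hypothetical types, not an API defined by this proposal): each replica reports the hashes of the prefix blocks it currently holds, and the scheduler scores candidates by how many leading blocks of the incoming request are already cached.

```go
package main

import "fmt"

// cacheIndex is a hypothetical view of reported KV cache state:
// for each replica, the set of prefix-block hashes it currently holds.
type cacheIndex map[string]map[uint64]bool

// bestReplica returns the replica caching the longest leading run of the
// request's prefix blocks, i.e. the one needing the least prefill work.
func bestReplica(idx cacheIndex, requestBlocks []uint64) (string, int) {
	bestName, bestLen := "", -1
	for replica, blocks := range idx {
		matched := 0
		for _, b := range requestBlocks {
			if !blocks[b] {
				break // the prefix match must be contiguous from the start
			}
			matched++
		}
		if matched > bestLen {
			bestName, bestLen = replica, matched
		}
	}
	return bestName, bestLen
}

func main() {
	idx := cacheIndex{
		"pod-a": {101: true, 102: true},
		"pod-b": {101: true},
	}
	name, n := bestReplica(idx, []uint64{101, 102, 103})
	fmt.Println(name, n) // pod-a 2
}
```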
@liu-cong I see that you updated the PR name to use "scheduling", but this design option and the preceding one both use the term "router". Please review these design options and update the terminology accordingly, e.g. find and replace "router" with "scheduler".
Unfortunately I don't see a unified set of terminology; many other projects use the term "routing". I added a "Terminology" section to clarify this, using "scheduling" when referring to the EPP and "routing" when referring to other projects where that term was picked. Let me know what you think.
+1 (see
@liu-cong thanks for resolving my review feedback! /lgtm
This proposal was initially discussed in #498