Add prefix aware request scheduling proposal #602

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Open
wants to merge 4 commits into main
Conversation

liu-cong
Contributor

@liu-cong liu-cong commented Mar 28, 2025

This proposal was initially discussed in #498

@k8s-ci-robot
Contributor

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Mar 28, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: liu-cong
Once this PR has been reviewed and has the lgtm label, please assign sergeykanzhelev for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


netlify bot commented Mar 28, 2025

Deploy Preview for gateway-api-inference-extension ready!

Name Link
🔨 Latest commit bb19231
🔍 Latest deploy log https://app.netlify.com/sites/gateway-api-inference-extension/deploys/681be6c51d2ad000080bd3a1
😎 Deploy Preview https://deploy-preview-602--gateway-api-inference-extension.netlify.app

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Mar 28, 2025

1. **Prefix affinity consistent hashing**

This goes a step beyond session affinity by using a prefix-aware hash function to route requests with similar prefixes to the same or similar servers. A naive hash function simply takes the hash of the first N characters/tokens of the request, so all requests sharing the same first N characters/tokens are routed to the same server. The [vLLM production stack](https://github.com/vllm-project/production-stack/issues/59) is exploring this strategy using simhash, and preliminary experiments showed mixed results. KubeAI uses a simpler strategy: it hashes only the request prefix up to a configurable `prefixCharLength`. Its effectiveness is likely highly dependent on the input length distribution.
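The KubeAI-style strategy described above can be sketched as follows. This is a minimal illustration, not KubeAI's actual implementation; the `prefixCharLength` value, the hash choice (FNV-1a), and the prompts are assumptions for the example.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// prefixCharLength is illustrative; KubeAI makes it configurable.
const prefixCharLength = 16

// pickServer hashes at most the first prefixCharLength characters of
// the prompt and maps the hash onto one of n servers, so requests that
// share that prefix land on the same replica.
func pickServer(prompt string, n int) int {
	p := prompt
	if len(p) > prefixCharLength {
		p = p[:prefixCharLength]
	}
	h := fnv.New64a()
	h.Write([]byte(p))
	return int(h.Sum64() % uint64(n))
}

func main() {
	// Both prompts share their first 16 characters, so they hash to
	// the same server regardless of the differing suffix.
	a := pickServer("You are a helpful assistant. Q1", 4)
	b := pickServer("You are a helpful assistant. Q2", 4)
	fmt.Println(a == b) // true
}
```

This also shows the sensitivity to input length distribution: prompts shorter than `prefixCharLength` are hashed in full, so any variation in them defeats the affinity.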
Contributor


Is that a moving window of up to prefixCharLength? Or does it always have exactly prefixCharLength characters?

Contributor Author


1. Prefix affinity needs to be aware of server load, otherwise it will create hot spots. We can use queue length and KV cache utilization to gauge server load. This is similar to the [queue depth threshold](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/2a615e981228aa6ffc2a89219c986ac863dde776/pkg/epp/scheduling/scheduler.go#L40) for LoRA affinity.
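The load-aware fallback described above can be sketched roughly as follows; the `Server` fields, the threshold values, and the `schedule` function are illustrative assumptions, not the EPP's actual datapath types or defaults.

```go
package main

import "fmt"

// Server carries the two load signals mentioned above; the field
// names are assumptions for this sketch.
type Server struct {
	Name        string
	QueueLen    int
	KVCacheUtil float64 // fraction in [0.0, 1.0]
}

// Thresholds are illustrative, mirroring the idea of the LoRA-affinity
// queue depth threshold.
const (
	queueDepthThreshold  = 5
	kvCacheUtilThreshold = 0.8
)

// schedule prefers the prefix-affine server, but falls back to the
// least-queued replica when the affine one looks overloaded, which
// avoids turning a popular prefix into a hot spot.
func schedule(affine *Server, all []*Server) *Server {
	if affine.QueueLen <= queueDepthThreshold && affine.KVCacheUtil <= kvCacheUtilThreshold {
		return affine
	}
	best := all[0]
	for _, s := range all[1:] {
		if s.QueueLen < best.QueueLen {
			best = s
		}
	}
	return best
}

func main() {
	hot := &Server{Name: "pod-a", QueueLen: 12, KVCacheUtil: 0.9}
	idle := &Server{Name: "pod-b", QueueLen: 2, KVCacheUtil: 0.4}
	// pod-a is prefix-affine but overloaded, so the request spills over.
	fmt.Println(schedule(hot, []*Server{hot, idle}).Name) // pod-b
}
```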


## Proposal
Contributor


+1 to starting with this approach: it seems relatively simple to implement, and in theory it should also be more resilient than the other two options.

@smarterclayton
Contributor

Cong's PoC is in main...liu-cong:llm-instance-gateway:prefix-poc (or at least, a version of it is) for those interested.

@kfswain kfswain mentioned this pull request Apr 22, 2025
@danehans
Contributor

@liu-cong are you planning on reviving this PR now that #677 has merged?

@liu-cong
Contributor Author

liu-cong commented Apr 25, 2025

@liu-cong are you planning on reviving this PR now that #677 has merged?

Yes, I am working on getting the prefix cache POC PR out and will update this one as well.

@ahg-g ahg-g changed the title Add prefex aware routing proposal Add prefix aware routing proposal Apr 29, 2025
@liu-cong liu-cong changed the title Add prefix aware routing proposal Add prefix aware request scheduling proposal May 6, 2025
@liu-cong liu-cong force-pushed the prefix-proposal branch from a648849 to c2dbe97 Compare May 6, 2025 05:34
@liu-cong liu-cong marked this pull request as ready for review May 6, 2025 05:34
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 6, 2025
@k8s-ci-robot k8s-ci-robot requested a review from Jeffwan May 6, 2025 05:34
@kfswain
Collaborator

kfswain commented May 6, 2025

Overall this looks reasonable to me. I do feel strongly about changing the proposal number to match the PR number: strict incrementing can have race-condition concerns if multiple proposals are opened at once, and the PR number will still (roughly, assuming PRs don't take too long to merge) follow the chronological order of creation. Thanks for the great work!

Contributor

@danehans danehans left a comment


A few nits, otherwise /lgtm


### Report prefix cache indexes on the router

If the router knows which prefixes are currently cached on each model server replica, it can make the optimal decision. A potential solution is to have the model server (or a sidecar) report the KV cache indexes to the router.
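A rough sketch of what such a reported index could look like on the router side. The block size, the `Report`/`BestServer` API, and the hashing scheme are hypothetical, not an existing interface in this project; real reports would be incremental and token-based, but the lookup shape is the same.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// blockSize is the (illustrative) number of characters per hashed
// prefix block; real systems would use token blocks.
const blockSize = 16

func blockHash(s string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(s))
	return h.Sum64()
}

// Index maps a cached prefix-block hash to the replicas reporting it.
type Index struct {
	cached map[uint64]map[string]bool
}

func NewIndex() *Index {
	return &Index{cached: map[uint64]map[string]bool{}}
}

// Report is the hypothetical call a model server (or sidecar) would
// make to advertise the prefix blocks currently in its KV cache.
func (ix *Index) Report(server, prompt string) {
	for end := blockSize; end <= len(prompt); end += blockSize {
		h := blockHash(prompt[:end])
		if ix.cached[h] == nil {
			ix.cached[h] = map[string]bool{}
		}
		ix.cached[h][server] = true
	}
}

// BestServer returns the replica with the longest cached prefix of the
// incoming prompt, plus the matched length in characters.
func (ix *Index) BestServer(prompt string) (string, int) {
	bestServer, bestLen := "", 0
	for end := blockSize; end <= len(prompt); end += blockSize {
		for s := range ix.cached[blockHash(prompt[:end])] {
			if end > bestLen {
				bestServer, bestLen = s, end
			}
		}
	}
	return bestServer, bestLen
}

func main() {
	ix := NewIndex()
	ix.Report("pod-a", "You are a helpful assistant. Summarize:")
	// The new prompt shares only the first 16-character block.
	srv, n := ix.BestServer("You are a helpful assistant. Translate:")
	fmt.Println(srv, n) // pod-a 16
}
```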
Contributor


@liu-cong I see that you updated the PR name to use scheduling, but this design option and the preceding one both use the term "router". Please review these design options and update the terminology accordingly, e.g. find and replace "router" with "scheduler".

Contributor Author


Unfortunately I don't see a unified set of terminology; many other projects use the term "routing".

I added a "Terminology" section to clarify this, and I use "scheduling" when referring to the EPP, and "routing" when referring to other projects where that term was picked.

Let me know what you think.

@danehans
Contributor

danehans commented May 7, 2025

I do feel strongly about changing to proposal number to match the PR number...

+1 (see docs/proposals/README.md).

@oglok oglok mentioned this pull request May 8, 2025
@danehans
Contributor

danehans commented May 8, 2025

@liu-cong thanks for resolving my review feedback!

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label May 8, 2025
@danehans danehans requested review from danehans, kfswain and ahg-g May 8, 2025 21:36
@danehans
Contributor

danehans commented May 8, 2025

@kfswain @ahg-g PTAL


6 participants