
Conversation


@ZhengHongming888 ZhengHongming888 commented Oct 25, 2025

Purpose

This PR adds a 'cpu' option for P/D disaggregation / KV transfer via nixl_connector in a pure-CPU machine or environment.

With this option you can run vllm serve on a pure-CPU machine or Docker image as below:

VLLM_SKIP_WARMUP=True vllm serve meta-llama/Llama-3.2-3B-Instruct --port 8200 --gpu-memory-utilization 0.9 --enforce-eager --tensor-parallel-size 1 --max_model_len 4096 --kv-transfer-config '{"kv_connector":"NixlConnector","kv_role":"kv_both","kv_buffer_device":"cpu"}'

This helps build heterogeneous serving systems that mix CPU and GPU instances, for example on top of the llm-d framework.
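
For reference, the same configuration can also be exercised from the offline-inference API. The sketch below is illustrative only: it assumes KVTransferConfig accepts the JSON keys shown above as constructor arguments and that LLM() takes a kv_transfer_config parameter, so please verify against your vLLM version; model and sampling settings mirror the command above.

```python
# Hedged sketch: offline-inference equivalent of the serve command above.
# Assumes KVTransferConfig takes the same keys as the --kv-transfer-config
# JSON and that LLM() accepts kv_transfer_config (verify on your vLLM version).
from vllm import LLM, SamplingParams
from vllm.config import KVTransferConfig

kv_transfer_config = KVTransferConfig(
    kv_connector="NixlConnector",
    kv_role="kv_both",        # single instance acting as both producer and consumer
    kv_buffer_device="cpu",   # the option this PR enables on CPU-only hosts
)

llm = LLM(
    model="meta-llama/Llama-3.2-3B-Instruct",
    max_model_len=4096,
    enforce_eager=True,
    kv_transfer_config=kv_transfer_config,
)

print(llm.generate(["Hello"], SamplingParams(max_tokens=16)))
```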

Thanks.

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of CI tests to quickly catch errors.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

@mergify mergify bot added the kv-connector label Oct 25, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for using the nixl_connector on CPU-only environments by including 'cpu' as a supported device in _NIXL_SUPPORTED_DEVICE. The change is straightforward and correctly enables the NixlConnectorWorker to initialize on a CPU platform with a CPU-based KV buffer, which is necessary for features like P/D disaggregation in CPU-only setups. The implementation is correct and aligns with the intended purpose. I have no issues to report with this change.

@njhill njhill changed the title add cpu device support for nixl_connector [P/D] Add cpu device support for nixl_connector Oct 28, 2025
Member

@njhill njhill left a comment


@njhill njhill added ready ONLY add when PR is ready to merge/full CI is needed and removed ready ONLY add when PR is ready to merge/full CI is needed labels Oct 28, 2025
@njhill
Member

njhill commented Oct 28, 2025

@ZhengHongming888 could you please sign off your commit per the DCO instructions: https://github.com/vllm-project/vllm/pull/27510/checks?check_run_id=53663014635

@njhill njhill marked this pull request as draft October 28, 2025 15:36
@njhill njhill marked this pull request as ready for review October 28, 2025 15:37

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines 83 to 87
),
"tpu": ("cpu",),
"xpu": ("cpu",),
"cpu": ("cpu",),
}


P1: Enabling CPU device hits unimplemented buffer copy APIs

Adding "cpu": ("cpu",) makes the Nixl connector accept CPU hosts, but the CPU platform still lacks the copy primitives that copy_kv_blocks() uses when kv_buffer_device == "cpu". After a transfer completes, get_finished() calls sync_recved_kv_to_device/save_kv_to_host, which invoke copy_kv_blocks. That function delegates to current_platform.insert_blocks_to_device or swap_out_blocks_to_host (see kv_connector/utils.py), neither of which exist on CpuPlatform. As soon as a request is received or saved, this path will raise an AttributeError, so a CPU-only deployment cannot actually run. Either avoid host-buffer copies for CPU or implement the required methods in CpuPlatform before advertising CPU support here.

Useful? React with 👍 / 👎.
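
To make the concern above concrete, here is a minimal, hypothetical sketch of what the missing primitives might look like on CpuPlatform. It assumes their signatures mirror the GPU-platform implementations that copy_kv_blocks() dispatches to (tensor caches copied by block index); the indexed dimension is illustrative and would have to match the actual KV-cache layout. This is not part of the PR's diff.

```python
import torch


class CpuPlatform:  # illustrative stand-in for the class in vllm/platforms/cpu.py
    @classmethod
    def insert_blocks_to_device(
        cls,
        src_cache: torch.Tensor,
        dst_cache: torch.Tensor,
        src_block_indices: torch.Tensor,
        dst_block_indices: torch.Tensor,
    ) -> None:
        # On a pure-CPU deployment the "device" cache already lives in host
        # memory, so this reduces to a block-indexed host-to-host copy.
        dst_cache[dst_block_indices] = src_cache[src_block_indices]

    @classmethod
    def swap_out_blocks_to_host(
        cls,
        src_cache: torch.Tensor,
        dst_cache: torch.Tensor,
        src_block_indices: torch.Tensor,
        dst_block_indices: torch.Tensor,
    ) -> None:
        # Same copy in the opposite direction: device == host on CPU.
        dst_cache[dst_block_indices] = src_cache[src_block_indices]
```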

Member


@ZhengHongming888 this is a good point. Will this change actually have any use without other changes?

Signed-off-by: Zheng, Hongming <[email protected]>
@ZhengHongming888
Author

> @ZhengHongming888 could you please sign off your commit per the DCO instructions: https://github.com/vllm-project/vllm/pull/27510/checks?check_run_id=53663014635

done. thanks @njhill

@njhill njhill added the ready ONLY add when PR is ready to merge/full CI is needed label Oct 28, 2025