[model] Add support for openPangu_Ultra_MoE #27521
Conversation
Signed-off-by: yuantao <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run `fastcheck` CI, which starts running only a small and essential subset of CI tests to quickly catch errors. You can ask your reviewers to trigger select CI tests on top of `fastcheck` CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can add the `ready` label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai. 🚀
Documentation preview: https://vllm--27521.org.readthedocs.build/en/27521/
Code Review
This pull request adds support for the openPangu_Ultra_MoE model. The changes include a new model implementation file and updates to various configuration and registry files to integrate the new model. The implementation appears to be largely adapted from the existing deepseek_v2 model.
I've identified a critical issue in the scaling logic within the OpenPanguMoE module, which seems to have been carried over from the deepseek_v2 implementation. This logic flaw could lead to incorrect computations, particularly in float16 precision, potentially affecting the model's output. A detailed comment with a suggested fix is provided below. The other changes appear to be correct and consistent with adding a new model to the framework.
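For readers without the inline diff, here is a minimal sketch of the deepseek_v2-style float16-safe combination pattern the review alludes to. The function name and the exact point where the compensation is re-applied are assumptions for illustration, not this PR's actual code:

```python
import torch


def combine_expert_outputs(
    routed_output: torch.Tensor,
    shared_output: torch.Tensor,
    routed_scaling_factor: float,
) -> torch.Tensor:
    """Combine routed and shared expert outputs, deepseek_v2-style (sketch).

    In bf16/fp32 the routed output can be scaled directly. In fp16, a large
    scaling factor risks overflow, so the shared branch is down-scaled
    instead, and the missing overall factor must be compensated downstream
    (e.g., in the decoder layer's residual handling).
    """
    if routed_output.dtype != torch.float16:
        return routed_output * routed_scaling_factor + shared_output
    # fp16 path: keep magnitudes in range; the overall routed_scaling_factor
    # is assumed to be re-applied outside this function.
    return routed_output + shared_output * (1.0 / routed_scaling_factor)
```

If the fp16 branch is copied without the matching downstream compensation, the two precision paths produce differently scaled outputs, which is the kind of flaw the review describes.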
…raMoEForCausalLM Signed-off-by: yuantao <[email protected]>
Hi, can you give us (me and https://github.com/kcmnd) access to your fork repo? We tested it and it doesn't work right now; we can fix some of the code.
Purpose
Add support for openPangu_Ultra_MoE models
FIX #27019
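For context, wiring a new model into vLLM typically includes an entry in the model registry. A minimal sketch of what that might look like here, where the module name `openpangu` and the architecture string are assumptions based on the truncated commit message, not verified against this PR's diff:

```python
# vllm/model_executor/models/registry.py (sketch; names are assumptions)
_TEXT_GENERATION_MODELS = {
    # ... existing entries ...
    # Maps the HF architecture string to (module name, model class name).
    "OpenPanguUltraMoEForCausalLM": ("openpangu", "OpenPanguUltraMoEForCausalLM"),
}
```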
Test Plan

```bash
vllm serve $LOCAL_CKPT_DIR/openpangu-ultra-moe-718b-model \
    --data-parallel-size 4 \
    --data-parallel-size-local 1 \
    --data-parallel-start-rank $NODE_RANK \
    --data-parallel-address $MASTER_NODE_IP \
    --data-parallel-rpc-port 13389 \
    --tensor-parallel-size 8 \
    --served-model-name pangu_ultra_moe \
    --enable-expert-parallel \
    --trust-remote-code
```

Test Result
The server starts normally.
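A quick way to sanity-check the running server, sketched with the OpenAI-compatible API; the default port 8000 and the prompt are assumptions, while the model name matches `--served-model-name` above:

```python
# Hypothetical smoke test against the server started by the command above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="pangu_ultra_moe",  # must match --served-model-name
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=32,
)
print(resp.choices[0].message.content)
```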
Essential Elements of an Effective PR Description Checklist
(Optional) The necessary documentation update, such as updating `supported_models.md` and `examples` for a new model.