mix-placement #4470
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
- If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request introduces a 'mix-placement' feature, which appears to allow shared experts to be treated as regular experts, affecting model configuration, expert map management, and MoE operations. A new patch for DeepSeek V2/V3 models is added to support this feature. The changes are extensive and touch several parts of the codebase.
My review has identified a few critical issues that need to be addressed. There's a potential for a runtime error in vllm_ascend/eplb/adaptor/vllm_adaptor.py due to an incorrect in-place tensor copy with mismatched shapes. Another critical bug was found in vllm_ascend/ops/fused_moe/moe_mlp.py where the logic for calculating group sizes is flawed and can lead to index errors or incorrect results. Additionally, a hardcoded 'magic number' in vllm_ascend/ops/fused_moe/experts_selector.py reduces code clarity and should be replaced with a named constant.
Other changes, such as refactoring and bug fixes in vllm_ascend/ops/fused_moe/fused_moe.py, seem correct and improve the code.
```python
            value=-1
        )
        self.expert_map_per_layer[layer_id].copy_(updated_expert_map_padded)
        self.expert_map_per_layer_cpu[layer_id].copy_(updated_expert_map)
```
The in-place copy self.expert_map_per_layer_cpu[layer_id].copy_(updated_expert_map) will raise a RuntimeError if updated_expert_map has a different shape than self.expert_map_per_layer_cpu[layer_id]. The logic for padding updated_expert_map for the device tensor self.expert_map_per_layer suggests that shape mismatches are expected. The CPU-side map should be handled in a way that accommodates shape changes to avoid crashes. Reassigning the tensor, as was done previously, is a safer approach.
```diff
-        self.expert_map_per_layer_cpu[layer_id].copy_(updated_expert_map)
+        self.expert_map_per_layer_cpu[layer_id] = updated_expert_map.clone()
```
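For illustration, here is a minimal, self-contained sketch (the dictionary layout and sizes are invented, not the adaptor's real state) of why the in-place copy fails when the remapped layer has a different number of entries, while rebinding the entry does not:

```python
import torch

# Invented state: a per-layer CPU expert map with 8 slots.
expert_map_per_layer_cpu = {0: torch.full((8,), -1, dtype=torch.int64)}

# An updated map with a different length, as can happen after remapping.
updated_expert_map = torch.arange(6, dtype=torch.int64)

try:
    # copy_ requires the source to be broadcastable to the destination shape.
    expert_map_per_layer_cpu[0].copy_(updated_expert_map)
except RuntimeError as exc:
    print(f"in-place copy failed: {exc}")

# Rebinding the entry accepts the new shape without error.
expert_map_per_layer_cpu[0] = updated_expert_map.clone()
```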
```python
        group_diff = torch.diff(group_list)
        new_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)
```
The calculation of `new_group` from `group_list` (which appears to be a cumulative sum of group sizes) is incorrect and can lead to runtime errors or incorrect behavior.

- If `group_list` contains only one element, `torch.diff(group_list)` will be empty, and `group_diff[0]` will raise an `IndexError`.
- If `group_list` is `[g1, g1+g2, ...]`, `group_diff` will be `[g2, g3, ...]`. The current logic `torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)` would produce `[g2, g2, g3, ...]`, which is incorrect because the first group size should be `g1`.

The correct way to get group sizes from a cumulative sum tensor is to use `torch.diff` with the first element of `group_list` prepended.
```diff
-        group_diff = torch.diff(group_list)
-        new_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)
+        new_group = torch.cat((group_list[0:1], torch.diff(group_list)))
```
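As a quick check of the two formulas, here is a hedged sketch with made-up group sizes; it also shows the single-element case that would raise an `IndexError` under the original logic:

```python
import torch

# Made-up cumulative sums: group sizes [3, 2, 4] -> cumsum [3, 5, 9].
group_list = torch.tensor([3, 5, 9])

# Original logic: loses the first group size and duplicates the second.
group_diff = torch.diff(group_list)                                 # tensor([2, 4])
wrong = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)  # tensor([2, 2, 4])

# Suggested logic: prepend the first cumulative value before differencing.
right = torch.cat((group_list[0:1], torch.diff(group_list)))        # tensor([3, 2, 4])

# Single-element case: torch.diff returns an empty tensor, so indexing
# group_diff[0] in the original logic would raise an IndexError.
single = torch.tensor([7])
print(torch.diff(single).numel())  # 0
```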
```python
        pad_shared_expert_weights = torch.full((topk_weights.shape[0], 1),
                                               0.4,
                                               dtype=topk_weights.dtype,
                                               device=topk_weights.device)
```
The value 0.4 is used as a hardcoded weight for the padded shared expert. This "magic number" makes the code harder to understand and maintain. It's unclear why this specific value is chosen and what its implications are, especially concerning weight normalization which typically expects weights to sum to 1. This value should be defined as a named constant with an explanatory comment, or passed as a parameter to the function to improve clarity and maintainability.
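One possible shape for the fix, sketched with a hypothetical constant name and helper function (neither appears in the PR); passing the weight in as a function parameter would serve the same purpose:

```python
import torch

# Hypothetical name; the value and its justification (e.g. how it interacts
# with weight normalization) should come from the model configuration.
SHARED_EXPERT_ROUTING_WEIGHT = 0.4


def pad_shared_expert_weights(topk_weights: torch.Tensor) -> torch.Tensor:
    """Append one column holding the shared expert's routing weight."""
    pad = torch.full((topk_weights.shape[0], 1),
                     SHARED_EXPERT_ROUTING_WEIGHT,
                     dtype=topk_weights.dtype,
                     device=topk_weights.device)
    return torch.cat([topk_weights, pad], dim=-1)
```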
Signed-off-by: Che Ruan <[email protected]>
7a21148 to a6d11e2