
Conversation

@Mercykid-bash (Contributor) commented Nov 26, 2025

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message and fill in the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist bot left a comment


Code Review

This pull request introduces a 'mix-placement' feature, which appears to allow shared experts to be treated as regular experts, affecting model configuration, expert map management, and MoE operations. A new patch for DeepSeek V2/V3 models is added to support this feature. The changes are extensive and touch several parts of the codebase.

My review has identified a few critical issues that need to be addressed. There's a potential for a runtime error in vllm_ascend/eplb/adaptor/vllm_adaptor.py due to an incorrect in-place tensor copy with mismatched shapes. Another critical bug was found in vllm_ascend/ops/fused_moe/moe_mlp.py where the logic for calculating group sizes is flawed and can lead to index errors or incorrect results. Additionally, a hardcoded 'magic number' in vllm_ascend/ops/fused_moe/experts_selector.py reduces code clarity and should be replaced with a named constant.

Other changes, such as refactoring and bug fixes in vllm_ascend/ops/fused_moe/fused_moe.py, seem correct and improve the code.

value=-1
)
self.expert_map_per_layer[layer_id].copy_(updated_expert_map_padded)
self.expert_map_per_layer_cpu[layer_id].copy_(updated_expert_map)

critical

The in-place copy self.expert_map_per_layer_cpu[layer_id].copy_(updated_expert_map) will raise a RuntimeError if updated_expert_map has a different shape than self.expert_map_per_layer_cpu[layer_id]. The logic for padding updated_expert_map for the device tensor self.expert_map_per_layer suggests that shape mismatches are expected. The CPU-side map should be handled in a way that accommodates shape changes to avoid crashes. Reassigning the tensor, as was done previously, is a safer approach.

Suggested change
- self.expert_map_per_layer_cpu[layer_id].copy_(updated_expert_map)
+ self.expert_map_per_layer_cpu[layer_id] = updated_expert_map.clone()
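
A minimal sketch of the failure mode, using made-up tensor names and sizes rather than anything from the diff: copy_ requires the source to broadcast into the destination, so a resized map crashes in place, while rebinding the reference does not.

```python
import torch

# Hypothetical shapes, for illustration only: the existing per-layer CPU map has
# 16 slots, while the updated map has 12 entries after remapping.
expert_map_cpu = torch.full((16,), -1, dtype=torch.int32)
updated_expert_map = torch.arange(12, dtype=torch.int32)

try:
    # In-place copy_ demands broadcast-compatible shapes -> RuntimeError here.
    expert_map_cpu.copy_(updated_expert_map)
except RuntimeError as err:
    print(f"copy_ failed: {err}")

# Rebinding (as in the suggestion above) accepts any shape for the new map.
expert_map_cpu = updated_expert_map.clone()
```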

Comment on lines +115 to +116
group_diff = torch.diff(group_list)
new_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)

critical

The calculation of new_group from group_list (which appears to be a cumulative sum of group sizes) is incorrect and can lead to runtime errors or incorrect behavior.

  1. If group_list contains only one element, torch.diff(group_list) will be empty, and group_diff[0] will raise an IndexError.
  2. If group_list is [g1, g1+g2, ...], group_diff will be [g2, g3, ...]. The current logic torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0) would produce [g2, g2, g3, ...], which is incorrect as the first group size should be g1.

The correct way to get group sizes from a cumulative sum tensor is to use torch.diff with the first element of group_list prepended.

Suggested change
- group_diff = torch.diff(group_list)
- new_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)
+ new_group = torch.cat((group_list[0:1], torch.diff(group_list)))
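
A short numeric sketch of the difference, using made-up group sizes rather than values from the PR:

```python
import torch

# Sizes [3, 2, 4] stored as a cumulative sum, as group_list appears to be.
group_list = torch.tensor([3, 5, 9])

# Original logic: drops the first group size and repeats the second.
group_diff = torch.diff(group_list)                                     # tensor([2, 4])
old_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)  # tensor([2, 2, 4]) -- wrong
# With a single-element group_list, group_diff is empty and group_diff[0] raises IndexError.

# Suggested logic: prepend the first cumulative value before differencing.
new_group = torch.cat((group_list[0:1], torch.diff(group_list)))        # tensor([3, 2, 4]) -- correct

# The suggested form also handles the single-group case without error.
single = torch.tensor([7])
assert torch.equal(torch.cat((single[0:1], torch.diff(single))), torch.tensor([7]))
```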

Comment on lines +98 to +101
pad_shared_expert_weights = torch.full((topk_weights.shape[0], 1),
0.4,
dtype=topk_weights.dtype,
device=topk_weights.device)

high

The value 0.4 is used as a hardcoded weight for the padded shared expert. This "magic number" makes the code harder to understand and maintain. It's unclear why this specific value is chosen and what its implications are, especially concerning weight normalization which typically expects weights to sum to 1. This value should be defined as a named constant with an explanatory comment, or passed as a parameter to the function to improve clarity and maintainability.
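
One possible shape for that refactor, sketched below; the constant name and helper function are illustrative rather than taken from experts_selector.py, and the concatenation step assumes the padded column is appended to topk_weights as the surrounding lines suggest.

```python
import torch

# Illustrative constant name; the rationale for 0.4 should be documented by the PR author.
SHARED_EXPERT_PAD_WEIGHT = 0.4  # routing weight assigned to the padded shared-expert slot

def pad_shared_expert_weights(topk_weights: torch.Tensor) -> torch.Tensor:
    """Append one column carrying the shared-expert weight to the top-k routing weights."""
    pad = torch.full((topk_weights.shape[0], 1),
                     SHARED_EXPERT_PAD_WEIGHT,
                     dtype=topk_weights.dtype,
                     device=topk_weights.device)
    return torch.cat((topk_weights, pad), dim=-1)
```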

Signed-off-by: Che Ruan <[email protected]>
