addrmsnorm_bias #4462
base: main
Conversation
Signed-off-by: zxwang <[email protected]>
Code Review
This pull request introduces a new Triton kernel for RMS Normalization with fused bias addition, which is a welcome performance optimization. The review identified a few critical issues in the new Triton kernel and its wrapper function: a hardcoded device property that should be fetched dynamically, a None value passed to the kernel that will likely cause a runtime crash, and a leftover debug print statement. In addition, the new Triton code path does not appear to be covered by unit tests; adding tests for this functionality would confirm its correctness and prevent future regressions. Please address the comments below to improve the robustness and correctness of the implementation.
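The missing test coverage can be addressed with a reference-based unit test. Below is a minimal sketch, assuming a wrapper named rms_norm_add_bias(x, residual, weight, bias, eps) that returns the normalized output together with the updated residual; the wrapper name, signature, and fusion semantics (residual and bias added before normalization) are assumptions for illustration, not the PR's actual API.

```python
import torch


def ref_rms_norm_add_bias(x, residual, weight, bias, eps):
    # Pure-PyTorch reference: fused residual/bias add followed by RMSNorm.
    # The exact fusion order is an assumption; adjust to match the kernel.
    z = (x + residual + bias).float()
    rstd = torch.rsqrt(z.pow(2).mean(dim=-1, keepdim=True) + eps)
    out = (z * rstd).to(x.dtype) * weight
    return out, z.to(x.dtype)


def test_rms_norm_add_bias():
    # Hypothetical import; the real wrapper added in this PR may differ.
    from vllm_ascend.ops.layernorm import rms_norm_add_bias

    torch.manual_seed(0)
    device = "npu"  # assumes torch_npu is available on the test machine
    x = torch.randn(32, 4096, dtype=torch.float16, device=device)
    residual = torch.randn_like(x)
    weight = torch.randn(4096, dtype=torch.float16, device=device)
    bias = torch.randn(4096, dtype=torch.float16, device=device)

    out, new_residual = rms_norm_add_bias(x, residual, weight, bias, eps=1e-6)
    ref_out, ref_res = ref_rms_norm_add_bias(x, residual, weight, bias, 1e-6)

    torch.testing.assert_close(out, ref_out, atol=1e-2, rtol=1e-2)
    torch.testing.assert_close(new_residual, ref_res, atol=1e-2, rtol=1e-2)
```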
# _, num_vectorcore = get_device_properties()
num_vectorcore = 40
The num_vectorcore is hardcoded to 40. The code should use the get_device_properties() function to dynamically fetch this value, as intended by the commented-out line. Hardcoding device properties makes the code less portable and may lead to suboptimal performance on different hardware.
Suggested change:
-   # _, num_vectorcore = get_device_properties()
-   num_vectorcore = 40
+   _, num_vectorcore = get_device_properties()
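For reference, a minimal sketch of the dynamic lookup with a defensive fallback; the try/except guard and the fallback value are assumptions, while the tuple-unpacking form follows the commented-out line in the diff.

```python
try:
    _, num_vectorcore = get_device_properties()
except Exception:
    # Conservative fallback if the device query is unavailable; 40 simply
    # preserves the previous hardcoded behavior.
    num_vectorcore = 40
```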
residual.stride(0) if residual is not None else None,
residual_out.stride(0) if residual is not None else None,
Passing None for stride arguments to a Triton kernel can lead to a TypeError at runtime. When residual is None, None is passed for stride_z_row and stride_z_out_row. A dummy integer value, such as 0, should be passed instead.
Suggested change:
-   residual.stride(0) if residual is not None else None,
-   residual_out.stride(0) if residual is not None else None,
+   residual.stride(0) if residual is not None else 0,
+   residual_out.stride(0) if residual_out is not None else 0,
vllm_ascend/ops/layernorm.py (outdated)
x_hat = x * rstd
x_hat = x_hat.to(original_dtype)
tl.device_print("[Row %d]xxxdtype: %s", row_idx, x_hat.dtype)
A debug print statement tl.device_print is present in the Triton kernel. This should be removed from production code as it can generate a lot of output and may impact performance.
Suggested change:
-   tl.device_print("[Row %d]xxxdtype: %s", row_idx, x_hat.dtype)
+   # tl.device_print("[Row %d]xxxdtype: %s", row_idx, x_hat.dtype)
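If the diagnostic output is still occasionally useful, an alternative to commenting it out is to guard it with a tl.constexpr flag so the print is compiled out of normal builds. A small self-contained sketch of the pattern; the kernel here is illustrative, not the PR's.

```python
import triton
import triton.language as tl


@triton.jit
def _copy_sketch(x_ptr, out_ptr, N: tl.constexpr, BLOCK_SIZE: tl.constexpr,
                 DEBUG: tl.constexpr):
    cols = tl.arange(0, BLOCK_SIZE)
    mask = cols < N
    x = tl.load(x_ptr + cols, mask=mask, other=0.0)
    if DEBUG:
        # Resolved at compile time: with DEBUG=False the device_print is not
        # compiled into the kernel, so there is no runtime or output cost.
        tl.device_print("x: ", x)
    tl.store(out_ptr + cols, x, mask=mask)

# Launch with DEBUG=False in production, DEBUG=True while debugging, e.g.:
# _copy_sketch[(1,)](x, out, N=n, BLOCK_SIZE=triton.next_power_of_2(n), DEBUG=False)
```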
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Signed-off-by: zxwang <[email protected]>
Signed-off-by: zxwang <[email protected]>
What this PR does / why we need it?
Does this PR introduce any user-facing change?
How was this patch tested?