CANN: optimize the rope ops #15335
Conversation
Thank you for your contribution! Here are a few suggestions; I'm happy to discuss them with you.
if(ctx.init_ptr == nullptr || !is_attention) {
Add a comment indicating that is_attention is a flag used for accuracy testing.
Thank you for your suggestion; it has been revised.
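For reference, a sketch of what the revised guard might look like (the exact comment wording in the merged change may differ):
// is_attention is a flag used for accuracy testing; when it is false,
// the cached sin/cos buffers are recomputed instead of reused.
if(ctx.init_ptr == nullptr || !is_attention) {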
ggml/src/ggml-cann/aclnn_ops.cpp
Outdated
if(ctx.init_ptr != nullptr){
    ACL_CHECK(aclrtFree(ctx.init_ptr));
}
ACL_CHECK(aclrtMalloc(&ctx.init_ptr,theta_scale_length * sizeof(float_t), ACL_MEM_MALLOC_HUGE_FIRST));
There is a missing space after the comma following &ctx.init_ptr.
Thank you for your suggestion; it has been revised.
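With the space added after the comma, the line reads:
ACL_CHECK(aclrtMalloc(&ctx.init_ptr, theta_scale_length * sizeof(float_t), ACL_MEM_MALLOC_HUGE_FIRST));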
ggml/src/ggml-cann/common.h
Outdated
void* init_ptr = nullptr;
void* sin_ptr = nullptr;
void* cos_ptr = nullptr;
int64_t max_position_length = 200000;
A maximum prompt length of 200,000 is a bit excessive; let's initialize it to 65,536 here, and rename it to max_prompt_length.
Thank you for your suggestion; it has been revised.
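With both suggestions applied, the declarations in common.h would read:
void* init_ptr = nullptr;
void* sin_ptr = nullptr;
void* cos_ptr = nullptr;
int64_t max_prompt_length = 65536;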
Optimize the performance of the rope operator by reusing sin_tensor and cos_tensor across different layers for each token.
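A minimal sketch of the reuse pattern, built from the context members shown in the diffs above (the cache-invalidation condition and kernel launches here are illustrative, not the exact merged logic):
// Compute sin/cos once per token; later layers reuse the cached buffers.
if(ctx.sin_ptr == nullptr || !is_attention) {
    if(ctx.sin_ptr != nullptr){
        ACL_CHECK(aclrtFree(ctx.sin_ptr));
        ACL_CHECK(aclrtFree(ctx.cos_ptr));
    }
    ACL_CHECK(aclrtMalloc(&ctx.sin_ptr, theta_scale_length * sizeof(float_t), ACL_MEM_MALLOC_HUGE_FIRST));
    ACL_CHECK(aclrtMalloc(&ctx.cos_ptr, theta_scale_length * sizeof(float_t), ACL_MEM_MALLOC_HUGE_FIRST));
    // ... launch the kernels that fill the sin/cos buffers from theta_scale ...
}
// Every subsequent ROPE call for this token reads ctx.sin_ptr / ctx.cos_ptr directly.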
Before Optimization
root@worker-33-138:/home/y00939322/rope_test/llama.cpp-master# ./build/bin/llama-bench -m /home/y00939322/qwen2.5-0.5b-instruct-fp16.gguf -p 5 -n 5 -b 1 -sm none -mg 0 -t 8 -fa 1
After Optimization
root@worker-33-138:/home/y00939322/rope_test/llama.cpp-rope_ops# ./build/bin/llama-bench -m /home/y00939322/qwen2.5-0.5b-instruct-fp16.gguf -p 5 -n 5 -b 1 -sm none -mg 0 -t 8 -fa 1
Verifying the Operator Precision:
ROPE(type=f32,ne_a=[128,32,2,1],n_dims=128,mode=0,n_ctx=512,fs=1.000000,ef=0.000000,af=1.000000,ff=0,v=0): OK
ROPE(type=f32,ne_a=[128,40,2,1],n_dims=128,mode=0,n_ctx=512,fs=1.000000,ef=0.000000,af=1.000000,ff=0,v=0): OK
ROPE(type=f32,ne_a=[128,52,2,1],n_dims=128,mode=0,n_ctx=512,fs=1.000000,ef=0.000000,af=1.000000,ff=0,v=0): OK
ROPE(type=f32,ne_a=[128,64,2,1],n_dims=128,mode=0,n_ctx=512,fs=1.000000,ef=0.000000,af=1.000000,ff=0,v=0): OK
ROPE(type=f32,ne_a=[64,1,2,1],n_dims=64,mode=2,n_ctx=512,fs=1.000000,ef=0.000000,af=1.000000,ff=0,v=0): OK
ROPE(type=f32,ne_a=[64,71,2,1],n_dims=64,mode=2,n_ctx=512,fs=1.000000,ef=0.000000,af=1.000000,ff=0,v=0): OK
ROPE(type=f32,ne_a=[64,8,2,1],n_dims=64,mode=2,n_ctx=512,fs=1.000000,ef=0.000000,af=1.000000,ff=0,v=0): OK