Enhancement: Improve ROCm performance on various quants (benchmarks included) #11931

Open · cb88 opened this issue Feb 17, 2025 · 3 comments
Labels: enhancement (New feature or request)

cb88 commented Feb 17, 2025

Prerequisites

  • I am running the latest code. Mention the version if possible as well.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

This started with benchmarks showing some variability in model performance when running different quants via cuBLAS / MMQ on different hardware, so, to make it clearer where improvements are needed: benchmarks!

Git revision b4735

Relevant subset of results from:
./bin/test-backend-ops perf -o MUL_MAT
with bar graphs (with and without the MI100, since it is a lot faster than the others). MI100 results provided by @IMbackK.

MI60-MI25-MI100_MAT_MUL.xlsx

Anyone running Vega 20 (Radeon VII, Radeon Pro Vega II Duo, MI50, or MI60) should probably use Q4_0 or Q4_1 quants if they can, as those have almost twice the compute throughput available. Avoid Q2, as it is very slow.

Vega 10 MMQ has reduced performance for K-quants (avoid them), and slightly better compute performance for Q4_0 and Q4_1.

The MI100 sees 48-50 TFLOPS on most quants, but it should see higher performance on several of these. Currently only f16 is faster, and it is probably still underperforming: peak theoretical fp16 on the MI100 is 8x its FP32 performance.
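For reference, AMD's spec-sheet peaks for the MI100 (theoretical figures) line up with that 8x claim: 23.1 TFLOPS FP32 (vector) versus 184.6 TFLOPS FP16 (matrix/MFMA), and 184.6 / 23.1 ≈ 8.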

Motivation

Many inexpensive, large-VRAM GPUs are leaving performance on the table.

Possible Implementation

No response

IMbackK (Collaborator) commented Feb 17, 2025

Thanks @cb88, I'll try and see if I can reproduce the gfx906 performance on gfx908 by forcing it down the same code paths; from there I can see if I can optimize some of the worst offenders.

Failing that, we can take the easy way to a bit better performance by disabling MMQ for all cases except Q4_0, Q4_1, and the IQ quants.
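A sketch of the kind of gate that would implement this fallback (illustrative only; the function name and the exact list of IQ types are assumptions, not the actual llama.cpp code):

#include "ggml.h"  // for enum ggml_type

// On the affected GCN targets, keep MMQ only for the quant types where it
// wins and route everything else (K-quants, Q2, ...) to the hipBLAS path.
static bool gcn_prefer_mmq(enum ggml_type type) {
    switch (type) {
        case GGML_TYPE_Q4_0:
        case GGML_TYPE_Q4_1:
        case GGML_TYPE_IQ2_XXS:  // ...and the remaining IQ types
            return true;
        default:
            return false;
    }
}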

cb88 (Author) commented Feb 17, 2025

There is also the issue of failed loop unrolling on gfx900, which may be slowing it down; gfx906, for whatever reason, doesn't emit this warning.

mmq.cuh:2502:24: warning: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering [-Wpass-failed=transform-warning]
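For context on what the warning is about: the MMQ kernels lean on #pragma unroll loops whose trip counts are compile-time constants, and the warning means the optimizer gave up on one of them for this target. A minimal sketch of the shape involved (a hypothetical kernel, not the actual mmq.cuh code):

#include <hip/hip_runtime.h>

// The pragma requests full unrolling; it can only succeed when the trip
// count is a compile-time constant and the target can absorb the extra
// register pressure, which is presumably where gfx900 falls over.
template <int n_iter>
__global__ void scale_rows(const float * x, float * y, const float s) {
    const int base = blockIdx.x*blockDim.x*n_iter + threadIdx.x;
#pragma unroll
    for (int i = 0; i < n_iter; ++i) {  // n_iter is a template constant
        y[base + i*blockDim.x] = s * x[base + i*blockDim.x];
    }
}

Compiling with clang's -Rpass=loop-unroll / -Rpass-missed=loop-unroll remarks should show, per target, which loops actually got unrolled; that would confirm whether gfx906 really unrolls these loops or merely doesn't warn.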

cb88 (Author) commented Feb 18, 2025

Some additional benchmarks that were requested:

rocblas-bench -f gemm_ex --transposeA N --transposeB N -m 4096 -n 512 -k 4096 --alpha 1 --a_type f16_r --lda 4096 --b_type f16_r --ldb 4096 --beta 0 --c_type f16_r --ldc 4096 --d_type f16_r --ldd 4096 --compute_type f32_r
device MI60 = 15628
device MI25 = 3821.15

rocblas-bench -f gemm_ex --transposeA N --transposeB N -m 4096 -n 512 -k 4096 --alpha 1 --a_type f16_r --lda 4096 --b_type f16_r --ldb 4096 --beta 0 --c_type f16_r --ldc 4096 --d_type f16_r --ldd 4096 --compute_type f16_r
device MI60 = 1547740 (original output was 1.54774e+06, in exponential notation for some reason; was the value too long???)
device MI25 = 622459

rocblas-bench -f gemm_ex --transposeA N --transposeB N -m 4096 -n 512 -k 4096 --alpha 1 --a_type f16_r --lda 4096 --b_type f16_r --ldb 4096 --beta 0 --c_type f32_r --ldc 4096 --d_type f32_r --ldd 4096 --compute_type f32_r
device MI60 = 9721.52
device MI25 = 4784.55

(Values are the rocblas-Gflops figures reported by rocblas-bench.)
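For anyone who wants to run the same GEMM from C++ rather than through rocblas-bench, here is a minimal sketch of the equivalent rocblas_gemm_ex call for the first invocation above (f16 A/B/C/D with fp32 accumulation). Error checking and input initialization are omitted, and the header path can differ between ROCm versions, so treat this as illustrative rather than a drop-in tool:

#include <hip/hip_runtime.h>
#include <rocblas/rocblas.h>  // older ROCm releases install this as <rocblas.h>

int main() {
    const rocblas_int m = 4096, n = 512, k = 4096;
    const float alpha = 1.0f, beta = 0.0f;  // fp32 alpha/beta to match --compute_type f32_r

    rocblas_handle handle;
    rocblas_create_handle(&handle);

    // Column-major, non-transposed: A is m x k, B is k x n, C/D are m x n.
    rocblas_half *dA, *dB, *dC, *dD;
    hipMalloc(&dA, sizeof(*dA) * m * k);  // lda = m = 4096
    hipMalloc(&dB, sizeof(*dB) * k * n);  // ldb = k = 4096
    hipMalloc(&dC, sizeof(*dC) * m * n);  // ldc = m = 4096
    hipMalloc(&dD, sizeof(*dD) * m * n);  // ldd = m = 4096

    rocblas_gemm_ex(handle,
                    rocblas_operation_none, rocblas_operation_none,
                    m, n, k,
                    &alpha,
                    dA, rocblas_datatype_f16_r, m,
                    dB, rocblas_datatype_f16_r, k,
                    &beta,
                    dC, rocblas_datatype_f16_r, m,
                    dD, rocblas_datatype_f16_r, m,
                    rocblas_datatype_f32_r,             // --compute_type f32_r
                    rocblas_gemm_algo_standard, 0, 0);
    hipDeviceSynchronize();

    hipFree(dA); hipFree(dB); hipFree(dC); hipFree(dD);
    rocblas_destroy_handle(&handle);
    return 0;
}

Swapping the compute_type argument to rocblas_datatype_f16_r (with rocblas_half alpha/beta) should correspond to the second benchmark, and changing c_type/d_type to f32_r gives the third.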
