
Try some QD8-BF16 Experiments #11466


Draft · wants to merge 2 commits into main
Conversation

@mcr229 (Contributor) commented Jun 7, 2025

We've prototyped some new QD8-BF16-QB4W kernels in XNNPACK. Let's try leveraging them in ExecuTorch and see what our performance looks like:

Exports:

f32
python -m examples.models.llama.export_llama --checkpoint ~/Desktop/oss/models/llama3.2-1b/Llama3.2-1B/consolidated.00.pth -p ~/Desktop/oss/models/llama3.2-1b/Llama3.2-1B/params.json -d fp32 -kv -X -qmode 8da4w --group_size 64 --use_sdpa_with_kv_cache -n llama3_qd8_f32.pte --verbose

bf16
python -m examples.models.llama.export_llama --checkpoint ~/Desktop/oss/models/llama3.2-1b/Llama3.2-1B/consolidated.00.pth -p ~/Desktop/oss/models/llama3.2-1b/Llama3.2-1B/params.json -d bf16 -kv -X -qmode 8da4w --group_size 64 --use_sdpa_with_kv_cache -n llama3_qd8_bf16.pte  --verbose
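
For context on the -qmode 8da4w --group_size 64 flags above: the weight side ("qb4w") is group-wise symmetric int4 quantization with a per-group scale. The numpy sketch below is only an illustration of that scheme under assumed conventions (group size 64, int4 range [-8, 7]); it is not the ExecuTorch/torchao implementation, and the function names are made up.

import numpy as np

def quantize_weights_groupwise_int4(w, group_size=64):
    # Illustrative qb4w-style quantization: w is [out_features, in_features];
    # each row is split into groups of `group_size` values sharing one scale.
    out_f, in_f = w.shape
    assert in_f % group_size == 0
    groups = w.reshape(out_f, in_f // group_size, group_size)
    # Symmetric per-group scale: map the max magnitude onto the int4 range.
    scales = np.abs(groups).max(axis=-1, keepdims=True) / 7.0
    scales = np.where(scales == 0.0, 1.0, scales)
    # Quantize to [-8, 7]; store the 4-bit codes in an int8 container.
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q.reshape(out_f, in_f), scales.squeeze(-1)

def dequantize_groupwise_int4(q, scales, group_size=64):
    # Reconstruct approximate f32 weights to inspect the round-trip error.
    out_f, in_f = q.shape
    g = q.reshape(out_f, in_f // group_size, group_size).astype(np.float32)
    return (g * scales[..., None]).reshape(out_f, in_f)

The activation side ("qd8") is dynamic int8 quantization of the activations at runtime, which is why the exported model size does not depend on the activation dtype.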

There is no change in model size, since the dtype only affects activations. To make the comparison fairer, we removed delegation of all operators other than qd8-bf16-qb4w; this is because XNNPACK still lacks bf16 implementations of those other operators.
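
A hedged sketch of what "only delegate qd8-bf16-qb4w" could look like with the XNNPACK partitioner is below. The config_precisions argument and ConfigPrecisionType values are assumptions based on the partitioner API and may not match how this PR actually restricts delegation.

from executorch.backends.xnnpack.partition.config.xnnpack_config import (
    ConfigPrecisionType,
)
from executorch.backends.xnnpack.partition.xnnpack_partitioner import (
    XnnpackPartitioner,
)

# Assumption: limiting the partitioner to dynamic-quant configs keeps the
# dynamically quantized (qd8 + qb4w) linears on XNNPACK while leaving the
# remaining fp32/bf16 operators to the portable kernels.
partitioner = XnnpackPartitioner(
    config_precisions=[ConfigPrecisionType.DYNAMIC_QUANT],
)
# Pass `partitioner` to the usual lowering entry point, e.g.
# to_edge_transform_and_lower(exported_program, partitioner=[partitioner]).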

e1q:/data/local/tmp/bf16 $ ./llama_main --model_path=llama3_qd8_f32.pte --tokenizer_path=../llama_artifacts/tokenizer.model                                                                                                                                                                                   
The answer to the ultimate question isI 00:00:03.375228 executorch:text_prefiller.cpp:96] Prefill token result numel(): 128256
:I 00:00:03.375524 executorch:runner.cpp:282] RSS after prompt prefill: 1812.015625 MiB (0 if unsupported)
 the original three . If you can’t find a single instance of an animal or a person who doesn’t belong to one of those three groups, then you don’t belong to one of those three groups.
We can see the evidence of our species by observing our culture. There is no such thing as a ‘random’ or ‘pure’ culture. Even in cultures where a man can beat a woman and the woman will let him, she is a woman. A man can be raped, but a woman is raped. A man can kill someone, but a woman is killed. A man can get
I 00:00:05.265945 executorch:runner.cpp:302] RSS after finishing text generation: 1812.015625 MiB (0 if unsupported)
PyTorchObserver {"prompt_tokens":7,"generated_tokens":120,"model_load_start_ms":1734622925574,"model_load_end_ms":1734622928866,"inference_start_ms":1734622928866,"inference_end_ms":1734622930826,"prompt_eval_end_ms":1734622928936,"first_token_ms":1734622928936,"aggregate_sampling_time_ms":138,"SCALING_FACTOR_UNITS_PER_SECOND":1000}
I 00:00:05.265997 executorch:stats.h:108]       Prompt Tokens: 7    Generated Tokens: 120
I 00:00:05.265999 executorch:stats.h:114]       Model Load Time:                3.292000 (seconds)
I 00:00:05.266002 executorch:stats.h:124]       Total inference time:           1.960000 (seconds)               Rate:  61.224490 (tokens/second)
I 00:00:05.266014 executorch:stats.h:132]               Prompt evaluation:      0.070000 (seconds)               Rate:  100.000000 (tokens/second)
I 00:00:05.266018 executorch:stats.h:143]               Generated 120 tokens:   1.890000 (seconds)               Rate:  63.492063 (tokens/second)
I 00:00:05.266025 executorch:stats.h:151]       Time to first generated token:  0.070000 (seconds)
I 00:00:05.266036 executorch:stats.h:158]       Sampling time over 127 tokens:  0.138000 (seconds)
e1q:/data/local/tmp/bf16 $ ./llama_main --model_path=llama3_qd8_bf16.pte --tokenizer_path=../llama_artifacts/tokenizer.model
The answer to the ultimate question isI 00:00:03.217430 executorch:text_prefiller.cpp:96] Prefill token result numel(): 128256
:I 00:00:03.217586 executorch:runner.cpp:282] RSS after prompt prefill: 1305.042969 MiB (0 if unsupported)
 “I don’t know.”<|end_of_text|><|begin_of_text|>
The answer to the ultimate question is: “I don’t know.”
“Then there’s a little problem: We’re living in an era of faith and a period of disbelief.”
A woman at a press conference says that it is too early to declare that a no-fault approach is right. In a situation of so many possibilities, how can we possibly make a decision?
The answer to the ultimate question is: “I don’t know.”
The answer to the ultimate question is: “I don’t know.”
The answer to the ultimate question is:
I 00:00:05.314108 executorch:runner.cpp:302] RSS after finishing text generation: 1305.042969 MiB (0 if unsupported)
PyTorchObserver {"prompt_tokens":7,"generated_tokens":120,"model_load_start_ms":1734622914462,"model_load_end_ms":1734622917579,"inference_start_ms":1734622917579,"inference_end_ms":1734622919757,"prompt_eval_end_ms":1734622917661,"first_token_ms":1734622917661,"aggregate_sampling_time_ms":207,"SCALING_FACTOR_UNITS_PER_SECOND":1000}
I 00:00:05.316944 executorch:stats.h:108]       Prompt Tokens: 7    Generated Tokens: 120
I 00:00:05.316948 executorch:stats.h:114]       Model Load Time:                3.117000 (seconds)
I 00:00:05.316951 executorch:stats.h:124]       Total inference time:           2.178000 (seconds)               Rate:  55.096419 (tokens/second)
I 00:00:05.316955 executorch:stats.h:132]               Prompt evaluation:      0.082000 (seconds)               Rate:  85.365854 (tokens/second)
I 00:00:05.316958 executorch:stats.h:143]               Generated 120 tokens:   2.096000 (seconds)               Rate:  57.251908 (tokens/second)
I 00:00:05.316961 executorch:stats.h:151]       Time to first generated token:  0.082000 (seconds)
I 00:00:05.316964 executorch:stats.h:158]       Sampling time over 127 tokens:  0.207000 (seconds)

A few things to notice here: the BF16 model uses noticeably less memory than the fp32 model (RSS of ~1305 MiB vs. ~1812 MiB after generation). We also see some performance drop with BF16. This is likely because the GEMM kernel contains an extra shift to produce bf16 outputs (values are still computed in f32, just right-shifted before storing), and because the Quantize kernel for bf16 --> qd8 is still a naive implementation, so it is a bit slower. Finally, the bf16 results appear to be nonsensical.
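
To make the "right shift before storing" point concrete, the sketch below shows the usual f32 -> bf16 conversion by truncation: a bf16 value is just the upper 16 bits of the f32 bit pattern, so a kernel can accumulate in f32 and drop the low 16 mantissa bits on store. This is an illustration of the format, not the XNNPACK kernel, which would typically round-to-nearest-even rather than truncate.

import numpy as np

def f32_to_bf16_truncate(x):
    # Reinterpret the f32 bits as uint32, shift right by 16, and keep the
    # upper half as the raw bf16 bit pattern (truncation, not rounding).
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)

def bf16_to_f32(bf16_bits):
    # Expand raw bf16 bits back to f32; the dropped mantissa bits are zero.
    return (bf16_bits.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.14159265, 0.1, 65504.0], dtype=np.float32)
print(x - bf16_to_f32(f32_to_bf16_truncate(x)))  # precision lost per element

Since bf16 keeps f32's 8 exponent bits but only 7 mantissa bits, this lost precision is one plausible contributor to the degraded output quality, alongside any remaining issues in the prototype kernels.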


pytorch-bot bot commented Jun 7, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11466

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit d4845a0 with merge base fff7b3c:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label (authors need to sign the CLA before a PR can be reviewed) on Jun 7, 2025

github-actions bot commented Jun 7, 2025

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.
