[Speculative decoding] feat: add DFlash support #22105
ruixiang63 wants to merge 22 commits into ggml-org:master
Conversation
EAGLE3 is an encoder-decoder based speculative decoding method:
- Extracts features from target model at specific layers
- Uses feature fusion layer to compress target features
- Generates draft tokens with single-layer decoder
- Maps draft vocabulary to target vocabulary via d2t tensor

Key changes:
- Add LLM_ARCH_EAGLE3 architecture
- Add EAGLE3 encoder/decoder graph (src/models/eagle3.cpp)
- Add feature extraction from target model layers
- Add g_embeddings handling for decoder input
- Add GGML_TENSOR_FLAG_SYNC for GPU synchronization
- Add --eagle3 flag for speculative-simple example
- Add EAGLE3 model conversion in convert_hf_to_gguf.py
Hi @ruixiang63, thanks for your contribution! Per our contribution guidelines, the automated PR checker found the following issue(s) that need your attention:
Please note that maintainers reserve the right to make final decisions on PRs. If you believe there is a mistake, please comment below.
I think the method of exposing the hidden states of the target model needs to be cleaner, as it's used in both EAGLE3 and DFlash, and I guess even MTP. It probably needs a refactoring to expose these endpoints.
@ggerganov has already been working on this refactoring. And you're very welcome to contribute if you have any better ideas for this PR :)
Trying this against Issue 1 (small, easy):
Rebased onto the latest master. Hybrid target models (e.g. Qwen3.5) now benefit from the speculative checkpointing mechanism recently merged upstream, and DFlash performance improves. PR description updated with the new performance numbers.
Have you also looked at DDTree perhaps?
Not yet, but I'll take a look. I'd expect it to come after this PR gets merged.
Out of curiosity, have you tested quantizing the DFlash model to Q8? https://huggingface.co/lym00/Qwen3.6-35B-A3B-DFlash-GGUF-Test
I don't know if this is useful, but I managed to get it working on AMD (though with poor performance). Main GPU: R9700 AI PRO running Unsloth Q5 Qwen3.5 27B (Vulkan backend) + DFlash bf16 compiled GGUF. The acceptance rate works well with the current parameters; changing them does affect the rate. It actually runs. Here's the command and the result. If you'd like me to test something specific that might help, just let me know. I'm clearly out of my depth and can't really suggest improvements. Command + result + evaluation:
llama.cpp-b8941 does not have this parameter. |
Because it's not merged into the master branch yet?
When is it expected to be merged into master? |
Getting this startup error. Tried these models:
Meanwhile, https://huggingface.co/spiritbuun/Qwen3.6-27B-DFlash-GGUF fails to load on startup:
set_dflash: DFlash extraction enabled for layers [0, 0, 0, 0, 0]
exec "$LLAMA_SERVER"
I'm getting the following error trying to run this PR with the Vulkan backend on an R9700; only one token is generated before it crashes:
Full Log
The DFlash GGUF you referenced is meant for another fork of llama.cpp, not this PR.
My plan of next steps for this PR:

This PR currently supports:

Current working commands for llama-cli and llama-server:

# llama-cli
./build/bin/llama-cli \
-m "${TARGET_MODEL_GGUF}" \
-md "${DFLASH_MODEL_GGUF}" \
--dflash -p "Write a quicksort algorithm in Python. Write code only." -n 256 --draft-max 16 \
-cd 512 -c 512 \
--temp 0 --top-k 1 --seed 42 -ngl 99 -ngld 99 \
--jinja -rea off
# llama-server
./build/bin/llama-server \
-m "${TARGET_MODEL_GGUF}" \
-md "${DFLASH_MODEL_GGUF}" \
--dflash --draft-max 16 \
-c 2048 -cd 512 \
--temp 0 --top-k 1 --seed 42 \
-ngl 99 -ngld 99 \
--jinja -rea off \
-np 1 \
--host 0.0.0.0 --port 8088
Why isn't there any speedup after enabling the dflash parameter on this branch? Meanwhile, performance drops significantly when I switch to the official parameters. T_T
200 tokens/s as normal:
Falls back to 40 tokens/s:
I've tried many different patches and configurations over the weekend for my single 3090 setup. There's no benefit from DFlash that I can see. I cannot reproduce any of the claimed speedups in real workflows with Qwen 27B or Qwen 35B. Originally posted by @aminya in TheTom#103 (comment)
For me it's crashing after generating 1 token.
done_getting_tensors: tensor 'token_embd.weight' (q4_K) (and 0 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead
extract_dflash_features: Start to extract DFlash features: 5 layers, 4 tokens, 5120 embd
res send: sending result for task id = 0
data: {"choices":[{"finish_reason":null,"index":0,"delta":{"content":"Hello"}}],"created":1777351446,"id":"chatcmpl-DbepysCBNfjQ2DyiF5glro4x77VGN159","model":"Qwen3.6-27B-Q4_K_M.gguf","system_fingerprint":"b0-unknown","object":"chat.completion.chunk","timings":{"cache_n":0,"prompt_n":13,"prompt_ms":731.354,"prompt_per_token_ms":56.258,"prompt_per_second":17.77524974225888,"predicted_n":1,"predicted_ms":0.001,"predicted_per_token_ms":0.001,"predicted_per_second":1000000.0}}
Did you follow the steps outlined in the PR description? Did you try the model mentioned there to check for speedups? Please note that this is still a draft PR, and many parts will need to be refactored after the upstream changes. As mentioned in the PR description, the current performance is not yet optimal. However, you should still see speedups if you follow the correct steps in the PR description and use the correct models.
Thanks everyone for reporting the issues. I really appreciate your efforts in trying out this PR! The most common issue I've seen so far is the use of GGUF models that were not converted with this PR. If you run into any issues, the safest fallback is to use a GGUF model converted with this PR and run it with the commands from the PR description. Once the upstream unified API has been finalized, this PR will continue moving forward and be polished further. Thanks again!
Overview
This PR adds DFlash speculative decoding to llama.cpp, achieving up to 8x speedup (Qwen3) with full numerical equivalence to the original reference implementation.
Compared to EAGLE3, which uses an autoregressive draft model and generates one token per draft step, DFlash produces an entire block of candidates in a single draft forward pass, resulting in higher per-iteration draft throughput. However, DFlash relies on multiple transformer layers for its draft model, whereas EAGLE3 uses only a single transformer layer.
There is still meaningful headroom for further performance improvements with the current implementation, summarized in the Future Performance Work section below.
Performance Evaluation (NVIDIA L40S 48GB)
Numbers below were collected with `--draft-max 16`, `--temp 0 --top-k 1 --seed 42`, n = 256. Baseline is `llama-cli` running the target model alone with the same sampling parameters.

Qwen3-8B
Draft: `z-lab/Qwen3-8B-DFlash` (bf16), Target: `Qwen/Qwen3-8B` (bf16)

Qwen3-4B
Draft: `z-lab/Qwen3-4B-DFlash` (bf16), Target: `Qwen/Qwen3-4B` (bf16)

GPT-OSS-20B
Draft: `z-lab/gpt-oss-20b-DFlash` (bf16), Target: `openai/gpt-oss-20b` (bf16)

For MoE targets (gpt-oss-20b), DFlash speedup is generally smaller than for dense-attention targets because more experts get activated during the parallel verification step than during single-token autoregressive decoding (same observation as in #18039 for gpt-oss EAGLE3).
Qwen3.5-4B (With Performance Issue)
Draft: `z-lab/Qwen3.5-4B-DFlash` (bf16), Target: `Qwen/Qwen3.5-4B` (bf16)

Speedup is intrinsically limited on hybrid target models:
- For hybrid targets (Qwen3.5, ...), when the target verifies draft tokens, llama.cpp writes KV / recurrent state for the full `[id_last + draft block]` before acceptance is known.
- Pure-attention target models can drop rejected suffixes with `seq_rm`; hybrid targets cannot, because recurrent state is not decomposable by token position.
- Current workaround in `examples/speculative-simple/speculative-simple.cpp`: snapshot target state before verify; on rejection, restore + replay (rerun the target model forward) only the accepted prefix to recover recurrent state.
- Cost: each rejected step requires one extra target forward, which is the main reason hybrid speedup lags pure-attention.

Qwen3.5-9B
Draft: `z-lab/Qwen3.5-9B-DFlash` (bf16), Target: `Qwen/Qwen3.5-9B` (bf16)

How to run DFlash in llama.cpp
Step 1: Convert models to GGUF
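A rough sketch of this step, assuming the standard `convert_hf_to_gguf.py` interface from this branch; the local checkpoint paths and output file names below are placeholders, using the Qwen3-8B pair from the tables above:

# Sketch only: convert target and DFlash draft checkpoints to GGUF (paths are placeholders)
python3 convert_hf_to_gguf.py /path/to/Qwen3-8B --outfile Qwen3-8B-bf16.gguf --outtype bf16
python3 convert_hf_to_gguf.py /path/to/Qwen3-8B-DFlash --outfile Qwen3-8B-DFlash-bf16.gguf --outtype bf16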
[Optional] Step 2: Quantize GGUF models
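A sketch of the optional quantization step using the standard `llama-quantize` tool (built in Step 3), with Q8_0 shown as an example type; whether the DFlash draft tolerates quantization well is still an open question (see the Q8 comment earlier in the thread):

# Optional: quantize the converted GGUFs (Q8_0 shown as an example)
./build/bin/llama-quantize Qwen3-8B-bf16.gguf Qwen3-8B-Q8_0.gguf Q8_0
./build/bin/llama-quantize Qwen3-8B-DFlash-bf16.gguf Qwen3-8B-DFlash-Q8_0.gguf Q8_0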
Step 3: Build llama.cpp
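A minimal sketch of a CUDA build using the usual CMake flow; swap the backend flag (e.g. -DGGML_VULKAN=ON) for other backends:

cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j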
Step 4: Run DFlash speculative decoding
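A condensed run sketch mirroring the working `llama-cli` command posted earlier in the thread; model paths are placeholders, and `--dflash` is the flag added by this PR:

./build/bin/llama-cli \
    -m Qwen3-8B-bf16.gguf \
    -md Qwen3-8B-DFlash-bf16.gguf \
    --dflash --draft-max 16 \
    -c 512 -cd 512 \
    --temp 0 --top-k 1 --seed 42 \
    -ngl 99 -ngld 99 \
    -p "Write a quicksort algorithm in Python. Write code only." -n 256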
Future Performance Work
KV cache / graph reuse for the DFlash decoder
The DFlash decoder currently rebuilds its graph every iteration (`graphs reused = 0`). The main cause is that `cross.n_enc` (the length of `accumulated_target_ctx`) grows monotonically, which changes the shape of `target_ctx` and invalidates all downstream tensor shapes.

Possible improvements:
add a draft-side KV cache to the DFlash decoder.
This would make the implementation closer to the original reference: committed target-context K/V would be materialized once and reused across iterations, instead of recomputing K/V from the full accumulated context every step. This reduces draft-side compute and also makes graph shapes much more stable, which should improve graph reuse. Since the DFlash decoder attention includes both cross-attention and self-attention, the current llama.cpp implementation does not support this pattern well.
keep the current no-cache design, but fix the `target_ctx` input shape.
Instead of letting `target_ctx` grow every iteration, reserve a fixed-size buffer, track the active length separately, and mask out the padded region in attention. This preserves the current semantics while allowing the decoder graph to be reused. This method is not ideal compared to using a KV cache.

Hybrid target model performance improvement (For all speculative decoding methods)
Hybrid targets (e.g. Qwen3.5) are slower because the problem is no longer just draft-side graph reuse. During target verify, llama.cpp writes KV / recurrent state for the full draft block before acceptance is known. Pure-attention target models can discard rejected suffixes with `seq_rm`, but hybrid targets cannot, because their recurrent state is not decomposable by token position.

The current workaround is:
- snapshot the target state before verify
- on rejection, restore the snapshot
- replay only the accepted prefix

This is correct, but each rejected step may require one extra target forward, which is the main reason hybrid speedup lags pure-attention. A more fundamental future improvement would be target-side deferred commit (SGLang implementation): verify would compute temporary recurrent states, and only the accepted-prefix state would be committed. That would remove replay from the hybrid path, but it requires deeper changes to llama.cpp's recurrent-state update flow.
Note this applies to all hybrid models used as target models in speculative decoding methods, not just DFlash.
Updates: Thanks to #19493 and #22227, llama.cpp now supports fallback for hybrid model states.
More (Low Priority)
Requirements