Granite Four #13550
Conversation
This will be necessary to support Jamba (and other recurrent models mixed with Attention). Doesn't compile yet, and finding a slot isn't yet done correctly for recurrent states.
* llama : begin work on support for variable GQA
  This will also be useful for Jamba if we consider the Mamba layers to have 0 KV heads.
* llama : gracefully fail when not finding hybrid slot
* ggml : simplify SSM-related operators
* llama : make recurrent state slot allocation contiguous
* llama : adapt internal uses of batches to llama_ubatch
This reduces overhead when running hellaswag on thousands of sequences with very small (100k-parameter) Mamba models.
This was otherwise a problem when running the HellaSwag benchmark with small batch sizes, which made it crash.
This removes the need for ggml_ssm_conv!!! But performance seems slightly worse on my system, especially for prompt processing. Maybe ggml_mul_mat isn't optimized for small row sizes? More performance testing is necessary until GGML_OP_SSM_CONV is removed.
* ggml : make ggml_ssm_scan not modify its source tensors
* llama : fix shared recurrent tail cell count for small ubatch sizes
  Otherwise it was impossible to run the 'parallel' example with '-ub 1' with a Mamba or Jamba model.
* ggml : allow GGML_OP_CONCAT to work on non-contiguous tensors The implementation already supported it, and this makes Mamba's conv step slightly faster.
This can be changed back later if the name change is wrong. I was renaming the functions anyway to generalize kv-cache-related functions to hybrid and recurrent model architectures. I think llama_past is a better name than llama_cache for a combined kv cache and recurrent state cache, because the states it contains pretty much always come before the newly-added ones for any particular sequence. Also 'llama_past_clear' sounds more obvious in what it does than 'llama_kv_cache_clear'. The future is what the models generate. (For embeddings, the kv cache isn't really used anyway) Still, I'm open to better suggestions.
Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
Signed-off-by: Gabe Goodhart <[email protected]> Co-authored-by: Sigbjørn Skjæret <[email protected]>
…o GraniteFour
* origin/compilade/refactor-kv-cache:
  memory : avoid referring to KV in recurrent cache logs
  model : make falcon-h1 use shared mamba2 layer builder
Signed-off-by: Gabe Goodhart <[email protected]>
Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
Some of the tensor names are common with Llama4
…o GraniteFour * origin/compilade/refactor-kv-cache: gguf-py : avoid adding duplicate tensor mappings for Jamba
* origin/master:
  llama : support Jamba hybrid Transformer-Mamba models (ggml-org#7531)
  ggml : add ggml_scale_bias (ggml-org#14417)
@compilade @ggerganov @CISC I think this one is ready to go now! The main outstanding questions I have with the changes here are:
If we prefer to focus on composition over inheritance, we could move to a …
src/llama-arch.h (Outdated)
@@ -52,6 +52,7 @@ enum llm_arch {
    LLM_ARCH_MAMBA2,
    LLM_ARCH_JAMBA,
    LLM_ARCH_FALCON_H1,
    LLM_ARCH_BAMBA,
Looking over the code again, I think we could probably collapse LLM_ARCH_BAMBA into LLM_ARCH_GRANITE_MOE_HYBRID.
This might then suggest we change GRANITE_MOE_HYBRID to simply GRANITE_HYBRID (which would have the nice benefit of removing the extra indentation!)
The only hang-up with collapsing these is how to determine when to use rope. Right now, the only way to tell is by the architecture name (bamba uses it, granitehybrid doesn't). This could be handled with an hparam, but I'm not clear exactly which one to use. I see rope_finetuned, which doesn't appear to actually be used anywhere, so it might be a good option, but its default value is false, which is reported as "unknown", so it wouldn't be a perfect fit. I also see n_no_rope_layer_step, which is used for this in llama4, but from what I can tell there's no corresponding constant for plumbing it through via conversion.
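To make the trade-off concrete, here is a minimal, self-contained sketch (hypothetical names, not the llama.cpp implementation) contrasting the two options discussed above: deciding rope usage from the architecture enum versus from a converter-provided hparam flag.

```cpp
// Hypothetical sketch (not the llama.cpp code) of the two options discussed
// above for deciding whether a hybrid layer applies rope.
#include <iostream>

enum llm_arch_sketch { ARCH_BAMBA, ARCH_GRANITE_HYBRID };

struct hparams_sketch {
    // Option 2: store the decision in the hparams. Per the later commit, the
    // PR ends up carrying this via rope_finetuned; the flag name here is
    // purely illustrative.
    bool use_rope = false;
};

// Option 1: the decision is tied to the architecture name.
static bool use_rope_by_arch(llm_arch_sketch arch) {
    return arch == ARCH_BAMBA;  // bamba ropes, granitehybrid does not
}

// Option 2: the decision is read from converter-provided metadata.
static bool use_rope_by_hparams(const hparams_sketch & hp) {
    return hp.use_rope;
}

int main() {
    hparams_sketch hp;
    hp.use_rope = true;  // as a converter would set it for a Bamba checkpoint
    std::cout << "by arch:    " << use_rope_by_arch(ARCH_BAMBA) << "\n"
              << "by hparams: " << use_rope_by_hparams(hp)      << "\n";
    return 0;
}
```

The hparams route is what allows the two architectures to collapse into one enum value, at the cost of requiring the converter to emit the flag.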
I was just looking at that - we should avoid this. I haven't used this pattern and I am not sure I understand how it works. In general, code de-duplication is not an objective here, even with composition. It's completely fine to copy-paste the model architectures instead of reusing them with inheritance. Pretty much all the architectures in … What we typically do for deduplication is to extract common blocks into … The …
That makes sense. For now, I think I'll remove the virtual inheritance and inherit only from …
I also did not understand this at all until I tried to get it working here. I didn't fully trust it until putting together a dummy version: #13550 (comment)
It's not an issue; it's easy to filter out by hiding whitespace.
The only key difference is the use of rope, which is now set via rope_finetuned in the hparams. Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
Per PR discussion, it's simpler to keep this with basic inheritance and not introduce the complexity of virtual inheritance and multiple inheritance ggml-org#13550 (comment) Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
* origin/master: llama : remove llm_graph_input_one (ggml-org#14603) Signed-off-by: Gabe Goodhart <[email protected]>
I've removed the virtual inheritance now and collapsed …
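For readers following along, a minimal sketch (hypothetical class names, not the real llama.cpp graph-builder types) of what "basic inheritance" means here: the hybrid builder derives from a single base and simply calls the shared layer builders, with no virtual or multiple inheritance involved.

```cpp
// Minimal sketch of plain single inheritance for a hybrid graph builder.
// Names are illustrative only.
#include <iostream>

struct llm_build_base_sketch {
    void build_attention_layer(int il) { std::cout << "attn layer "   << il << "\n"; }
    void build_mamba2_layer   (int il) { std::cout << "mamba2 layer " << il << "\n"; }
};

struct llm_build_hybrid_sketch : llm_build_base_sketch {
    void build(int n_layer) {
        for (int il = 0; il < n_layer; ++il) {
            // a hybrid model interleaves recurrent (SSM) and attention blocks
            if (il % 4 == 3) {
                build_attention_layer(il);
            } else {
                build_mamba2_layer(il);
            }
        }
    }
};

int main() {
    llm_build_hybrid_sketch{}.build(8);
    return 0;
}
```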
Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
This matches how recurrent vs attention heads are identified for Jamba Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
Can merge after @compilade approves.
ml.get_key(LLM_KV_ROPE_SCALING_FINETUNED, rope_finetuned, false);
hparams.rope_finetuned = rope_finetuned;

// A layer is recurrent IFF the n_head_kv value is set to 0
Suggested change:
- // A layer is recurrent IFF the n_head_kv value is set to 0
+ // A layer is recurrent IF the n_head_kv value is set to 0
I actually meant IFF as in "if and only if". Happy to change it if that's too obscure, though.
Heh, never heard of that abbreviation, one lives and learns... :)
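As an aside, here is an illustrative, self-contained sketch (hypothetical structures, not llama.cpp's actual hparams) of the convention that comment documents: a layer is treated as recurrent if and only if its n_head_kv value is 0, with the per-layer flag cached at load time.

```cpp
// Sketch of deriving a per-layer "recurrent" flag from n_head_kv == 0.
// All names and the max-layer constant are illustrative, not llama.cpp's.
#include <array>
#include <cstddef>
#include <cstdint>
#include <iostream>

constexpr std::size_t MAX_LAYERS = 512;  // stand-in for a max-layer constant

struct hparams_sketch {
    uint32_t n_layer = 0;
    std::array<uint32_t, MAX_LAYERS> n_head_kv_arr       = {};
    std::array<bool,     MAX_LAYERS> recurrent_layer_arr = {};

    bool recurrent_layer(uint32_t il) const { return recurrent_layer_arr[il]; }
};

static void populate_recurrent_layers(hparams_sketch & hp) {
    for (uint32_t il = 0; il < hp.n_layer; ++il) {
        // IFF: zero KV heads <=> recurrent (SSM) layer
        hp.recurrent_layer_arr[il] = (hp.n_head_kv_arr[il] == 0);
    }
}

int main() {
    hparams_sketch hp;
    hp.n_layer       = 4;
    hp.n_head_kv_arr = {0, 0, 8, 0};  // one attention layer among SSM layers
    populate_recurrent_layers(hp);
    for (uint32_t il = 0; il < hp.n_layer; ++il) {
        std::cout << "layer " << il
                  << (hp.recurrent_layer(il) ? " recurrent" : " attention") << "\n";
    }
    return 0;
}
```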
Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
* origin/master:
  cmake : do not search for curl libraries by ourselves (ggml-org#14613)
  SYCL: Initial set_rows kernel implementation (ggml-org#14562)
  llama : minor coding style fix for smollm3 (ggml-org#14605)
  cmake : bump llguidance version to v1.0.1 (ggml-org#14609)
  cmake : llguidance build parser library only (ggml-org#14608)
  cuda : support Falcon-H1 state size for SSM_SCAN (ggml-org#14602)
Signed-off-by: Gabe Goodhart <[email protected]>
Description
This PR is the end-point for architecture support for Granite 4.0 (#13269). It incorporates a number of changes from other in-flight branches that will need to be merged first:
Additionally, this PR replaces some work done on other PRs / branches:
- Bamba support: Bamba architecture #10810
- Bamba support: https://github.com/gabe-l-hart/llama.cpp/tree/BambaArchitectureRefactor
- Granite 4.0 support: https://github.com/gabe-l-hart/llama.cpp/tree/GraniteFourDraft
  - Like the Bamba work, this will also be abandoned in favor of this PR
- Jamba: llama : support Jamba hybrid Transformer-Mamba models #7531
  - … master. … support in this branch, but on further inspection, it looks like the Jamba architecture has some additional bells-and-whistles (eg sliding-window-attention) that would need further work, so my plan is to leave Jamba off for now and possibly tackle it later (hopefully it's much easier than the original branch!)

Outstanding Questions
Besides the upstream PRs, there are a few questions to answer before this PR is merge ready:
- There are changes to llama-kv-cache beyond those in feat: Hybrid unified/recurrent cache #13276, but they depend on the addition of hparams.recurrent_layer_arr, which is only populated correctly if there is a valid model architecture to check against. Should I move all of these changes to the hybrid cache PR or keep them here where the model architectures become real?
- Is there a better way to implement hparams.recurrent_layer_arr? Using a max-layer-size std::array doesn't feel quite right.
- … Bamba and granite-4.0-tiny-shared-preview on this branch vs the respective draft branches, so I need to determine if this is due to changes in the attention implementation (ie "working as expected") or a bug somewhere.
- Using dynamic_cast to get the right cache type could be expensive (though it's likely negligible relative to the tensor math). Should we do something more clever to handle different cache types in llama-graph?
- The switch statement for determining the type of KV cache to allocate in llama-model.cpp seems redundant with llama_model_is_recurrent and llama_model_is_hybrid. Should we use those functions instead and eliminate the duplicate logic and additional place to tweak for new recurrent / hybrid models?

Testing
To test out this branch, I've been using the following models:
- granite-4.0-tiny-preview: https://huggingface.co/ibm-granite/granite-4.0-tiny-preview
- Bamba-9B-v1: https://huggingface.co/ibm-ai-platform/Bamba-9B-v1
- mamba2-370m-hf: https://huggingface.co/AntonV/mamba2-370m-hf

Details
This PR has a lot of changes in it, some of which are isolated in the prereq-PRs above. In addition to the general mamba2 and llama_kv_cache_hybrid changes, this PR does the following:

python side
- Conversion support for BambaForCausalLM and GraniteMoeHybridForCausalLM
- A change in gguf_writer.py that allows duplicate key/value pairs through add_key_value if (and only if) they match both value and type with the existing key. This is a convenience for hybrid models so that the converter doesn't need to rewrite the hparam conversion from multiple parents.
- A HybridAttention section under Keys in constants.py to hold attention.layer_indices. OPEN QUESTION: Should this just go under Attention?

c++ side
- Add llama_model_is_hybrid akin to llama_model_is_recurrent
- Split llama_model_is_recurrent into llm_arch_is_* implemented in llama-arch.* and llama_model_is_* implemented in llama-model.*. This was done so that they could be used during model initialization before the model itself can be passed as the argument, specifically to determine how to populate hparams.recurrent_layer_arr (see below).
- Add hparams.recurrent_layer_arr and support parsing it
- hparams.n_embd_k_s / hparams.n_embd_v_s … 0. This should be fine since none of those places interact with the hybrid caching.
- Add hparams.recurrent_layer(uint32_t) to check whether a given layer is recurrent
- Add bamba and granitemoeshared in llama-arch.* (the boring part!)
- Pass hparams as an additional argument to the llama_model.create_memory method
- In llama-graph, anywhere that a specific cache type needs to be fetched, it is grabbed using new methods get_recurrent_cache / get_unified_cache. These methods use dynamic_cast to handle both non-hybrid caches and hybrid caches.
- llama-model.cpp: add bamba and granitemoehybrid in llama-model
- Convert build_mamba_layer / build_mamba2_layer from llm_build_mamba and build_attention_layer / build_layer_ffn from llm_build_granite into static methods on their respective classes. This makes for some gross function signatures where member data needs to be explicitly passed, but it allows the hybrid model architecture(s) to use these methods without complex inheritance.