Granite Four #13550

Merged · 173 commits merged into ggml-org:master on Jul 11, 2025
Conversation

gabe-l-hart (Contributor) commented May 14, 2025

Description

This PR is the end-point for architecture support for Granite 4.0 (#13269). It incorporates a number of changes from other in-flight branches that will need to be merged first:

Additionally, this PR replaces some work done on other PRs / branches:

Outstanding Questions

Besides the upstream PRs, there are a few questions to answer before this PR is merge ready:

  • This PR contains several changes to llama-kv-cache beyond those in feat: Hybrid unified/recurrent cache #13276, but they depend on the addition of hparams.recurrent_layer_arr which is only populated correctly if there is a valid model architecture to check against. Should I move all of these changes to the hybrid cache PR or keep them here where the model architectures become real?
  • Is there a more efficient way to implement hparams.recurrent_layer_arr? Using a max-layer-size std::array doesn't feel quite right (see the sketch after this list).
  • There are still some numerical differences between the attention outputs when running Bamba and granite-4.0-tiny-shared-preview on this branch vs the respective draft branches, so I need to determine whether this is due to changes in the attention implementation (i.e. "working as expected") or a bug somewhere.
  • The use of dynamic_cast to get the right cache type could be expensive (though it's likely negligible relative to the tensor math). Should we do something more clever to handle different cache types in llama-graph?
  • The switch statement for determining the type of KV cache to allocate in llama-model.cpp seems redundant with llama_model_is_recurrent and llama_model_is_hybrid. Should we use those functions instead and eliminate the duplicate logic and additional place to tweak for new recurrent / hybrid models?
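
For reference, here is a minimal sketch of the max-layer-size std::array approach referred to above. The struct, constant, and member names are illustrative stand-ins, not the PR's actual declarations:

// Illustrative sketch only: a fixed-size per-layer flag array plus the
// accessor that answers "is this layer recurrent?".
#include <array>
#include <cstdint>

constexpr uint32_t MAX_LAYERS = 512; // stand-in for the real max-layer constant

struct hparams_sketch {
    uint32_t n_layer = 0;
    // true for layers that use the recurrent (mamba) cache, false for attention layers
    std::array<bool, MAX_LAYERS> recurrent_layer_arr = {};

    bool recurrent_layer(uint32_t il) const {
        return il < n_layer && recurrent_layer_arr[il];
    }
};

An alternative would be a std::vector<bool> sized at load time, which avoids the fixed cap at the cost of diverging from how the other per-layer hparams arrays are stored.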

Testing

To test out this branch, I've been using the following models:

Details

This PR has a lot of changes in it, some of which are isolated in the prereq-PRs above. In addition to the general mamba2 and llama_kv_cache_hybrid changes, this PR does the following:

python side

  • Add conversion support for BambaForCausalLM and GraniteMoeHybridForCausalLM
    • This includes one small tweak to gguf_writer.py that allows duplicate key/value pairs through add_key_value if (and only if) they match both value and type with the existing key. This is a convenience for hybrid models so that the converter doesn't need to rewrite the hparam conversion from multiple parents.
    • This also adds the new HybridAttention section under Keys in constants.py to hold attention.layer_indices. OPEN QUESTION: Should this just go under Attention?

c++ side

  • Add a new public API function llama_model_is_hybrid akin to llama_model_is_recurrent
    • I also split up both this function and llama_model_is_recurrent into llm_arch_is_* implemented in llama-arch.* and llama_model_is_* implemented in llama-model.*. This was done so that they could be used during model initialization before the model itself can be passed as the argument, specifically to determine how to populate hparams.recurrent_layer_arr (see below).
  • Add hparams.recurrent_layer_arr and support parsing it
    • The current implementation pre-allocates it as a fixed-length array which doesn't feel quite right.
  • Add an optional layer id to hparams.n_embd_k_s / hparams.n_embd_v_s
    • This is done because for hybrid models, the values may be different by layer.
    • I plumbed through as many usages of these methods as I could find to properly pass the layer index, but there are some places where it's not available which default to layer 0. This should be fine since none of those places interact with the hybrid caching.
  • Add hparams.recurrent_layer(uint32_t) to check whether a given layer is recurrent
  • Model name/param/arch plumbing for bamba and granitemoeshared in llama-arch.* (the boring part!)
  • (possibly breaking) Add hparams as an additional argument to the llama_model.create_memory method
    • This is done so the hparams can be given to the cache construction and used to determine which layers are recurrent for hybrid cache creation
  • In llama-graph, anywhere that a specific cache type needs to be fetched, it is grabbed using the new methods get_recurrent_cache / get_unified_cache. These methods use dynamic_cast to handle both non-hybrid caches and hybrid caches (see the sketch after this list).
  • Add support for instantiating the hybrid cache in llama-model.cpp
  • Add model support for bamba and granitemoehybrid in llama-model
    • Most of this is "business as usual," but that breaks down when trying to avoid code duplication for the hybrid architecture
    • To avoid code duplication, I hoisted build_mamba_layer / build_mamba2_layer from llm_build_mamba and build_attention_layer / build_layer_ffn from llm_build_granite into static methods on their respective classes. This makes for some gross function signatures where member data needs to be explicitly passed, but it allows the hybrid model architecture(s) to use these methods without complex inheritance.
    • I tried an alternative route using diamond inheritance, but this would have required some kind of "don't actually initialize the graph" switch in the parent model builders' constructors to avoid trying to build the parent model graphs during initialization of the hybrid class.
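
To make the dynamic_cast lookup above concrete, here is a minimal sketch of the getter pattern. The class names are placeholders, not llama.cpp's real cache types:

// Illustrative sketch: try the hybrid cache first and pull out the child
// cache of the requested kind, otherwise fall back to a direct cast for
// non-hybrid models. Returns nullptr if the memory is of the wrong kind.
struct memory_base     { virtual ~memory_base() = default; };
struct recurrent_cache : memory_base {};
struct unified_cache   : memory_base {};
struct hybrid_cache    : memory_base {
    recurrent_cache * recurrent = nullptr;
    unified_cache   * unified   = nullptr;
};

static recurrent_cache * get_recurrent_cache(memory_base * mem) {
    if (auto * hyb = dynamic_cast<hybrid_cache *>(mem)) {
        return hyb->recurrent; // hybrid model: use its recurrent child cache
    }
    return dynamic_cast<recurrent_cache *>(mem); // pure recurrent model (or nullptr)
}

static unified_cache * get_unified_cache(memory_base * mem) {
    if (auto * hyb = dynamic_cast<hybrid_cache *>(mem)) {
        return hyb->unified; // hybrid model: use its attention child cache
    }
    return dynamic_cast<unified_cache *>(mem); // pure attention model (or nullptr)
}

Since these casts happen during graph construction rather than per tensor element, the overhead is likely negligible next to the tensor math, as noted in the questions above.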

compilade added 30 commits April 3, 2024 20:47
This will be necessary to support Jamba
(and other recurrent models mixed with Attention).

Doesn't compile yet, and finding a slot isn't yet done correctly for recurrent states.
* llama : begin work on support for variable GQA

This will also be useful for Jamba if we consider the Mamba layers
to have 0 KV heads.

* llama : gracefully fail when not finding hybrid slot
* ggml : simplify SSM-related operators

* llama : make recurrent state slot allocation contiguous

* llama : adapt internal uses of batches to llama_ubatch
This reduces overhead when running hellaswag
on thousands of sequences with very small 100k params Mamba models.
This otherwise was a problem when running the HellaSwag benchmark
with small batch sizes, making it crash.
This removes the need for ggml_ssm_conv!!!
But performance seems slightly worse on my system,
especially for prompt processing.
Maybe ggml_mul_mat isn't optimized for small row sizes?
More performance testing is necessary until GGML_OP_SSM_CONV is removed.

* ggml : make ggml_ssm_scan not modify its source tensors

* llama : fix shared recurrent tail cell count for small ubatch sizes

Otherwise it was impossible to run the 'parallel' example with '-ub 1'
with a Mamba or Jamba model.
* ggml : allow GGML_OP_CONCAT to work on non-contiguous tensors

The implementation already supported it,
and this makes Mamba's conv step slightly faster.
This can be changed back later if the name change is wrong.
I was renaming the functions anyway to generalize kv-cache-related
functions to hybrid and recurrent model architectures.
I think llama_past is a better name than llama_cache for a combined
kv cache and recurrent state cache, because the states it contains
pretty much always come before the newly-added ones for any particular
sequence. Also 'llama_past_clear' sounds more obvious in what it does
than 'llama_kv_cache_clear'. The future is what the models generate.
(For embeddings, the kv cache isn't really used anyway)

Still, I'm open to better suggestions.
* origin/master:
llama : remove llm_graph_input_one (ggml-org#14603)

Signed-off-by: Gabe Goodhart <[email protected]>
gabe-l-hart (Contributor, Author):

I've removed the virtual inheritance now and collapsed Bamba and GraniteMoeHybrid into simply GraniteHybrid, which covers all permutations of hybrid architectures (with and without the granite multipliers on top of llama) and dense/MoE/MoE+shared.

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <[email protected]>
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <[email protected]>
This matches how recurrent vs attention heads are identified for Jamba

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <[email protected]>
ggerganov (Member) left a comment:
Can merge after @compilade approves.

ggerganov requested review from compilade and CISC on July 10, 2025 06:14.

ml.get_key(LLM_KV_ROPE_SCALING_FINETUNED, rope_finetuned, false);
hparams.rope_finetuned = rope_finetuned;

// A layer is recurrent IFF the n_head_kv value is set to 0
Collaborator:

Suggested change:
- // A layer is recurrent IFF the n_head_kv value is set to 0
+ // A layer is recurrent IF the n_head_kv value is set to 0

gabe-l-hart (Contributor, Author):

I actually meant IFF as in "if and only if". Happy to change it if that's too obscure, though.

Collaborator:

Heh, never heard of that abbreviation, one lives and learns... :)

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <[email protected]>
* origin/master:
cmake : do not search for curl libraries by ourselves (ggml-org#14613)
SYCL: Initial set_rows kernel implementation (ggml-org#14562)
llama : minor coding style fix for smollm3 (ggml-org#14605)
cmake : bump llguidance version to v1.0.1 (ggml-org#14609)
cmake : llguidance build parser library only (ggml-org#14608)
cuda : support Falcon-H1 state size for SSM_SCAN (ggml-org#14602)

Signed-off-by: Gabe Goodhart <[email protected]>
* origin/master:
Smoldocling support (ggml-org#14597)
Docs: script to auto-generate ggml operations docs (ggml-org#14598)
CISC (Collaborator) commented Jul 10, 2025:

@compilade You have the final say. :)

        ]

        return super().modify_tensors(data_torch, name, bid)


@ModelBase.register("GraniteMoeHybridForCausalLM", "BambaForCausalLM")
class GraniteHybridModel(Mamba2Model, GraniteMoeModel):
Collaborator:

Multiple inheritance in Python works by using the method from the first class in the resolution order where it's present. (At least according to https://stackoverflow.com/questions/3277367/how-does-pythons-super-work-with-multiple-inheritance)

In this case, it means methods from Mamba2Model will be used for mostly everything, and GraniteMoeModel will be used for its prepare_tensors override (from LlamaModel somewhere in its parent hierarchy), unless I'm misunderstanding the order.

The resolution order seems to be

$ python3
>>> import convert_hf_to_gguf
>>> convert_hf_to_gguf.GraniteHybridModel.__mro__
(<class 'convert_hf_to_gguf.GraniteHybridModel'>, <class 'convert_hf_to_gguf.Mamba2Model'>, <class 'convert_hf_to_gguf.GraniteMoeModel'>, <class 'convert_hf_to_gguf.GraniteModel'>, <class 'convert_hf_to_gguf.LlamaModel'>, <class 'convert_hf_to_gguf.TextModel'>, <class 'convert_hf_to_gguf.ModelBase'>, <class 'object'>)

(Noting this here, because I had to check how that works, not because there's a problem).

gabe-l-hart (Contributor, Author):

Yep, that's right. I like the suggestions below to be more explicit.

        return [(self.map_tensor_name(name), data_torch)]

    def set_gguf_parameters(self):
        GraniteMoeModel.set_gguf_parameters(self)
Collaborator:

If all the key-values are overwritten below (which might not be the case, I did not verify), then it could be simpler to not call the parent set_gguf_parameters.

gabe-l-hart (Contributor, Author):

There's at least one part of GraniteMoeModel that should be kept, so I'm inclined to keep this as is.

The gist is to be explicit about which base class is being used with the
multiple inheritance setup

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <[email protected]>
gabe-l-hart (Contributor, Author):

@compilade thanks for catching my slop! 🏓 back to you (fixed all except the question about set_gguf_parameters)

Comment on lines 273 to 281
if any(key in kv_data for kv_data in self.kv_data):
    # Warn about duplicate keys if they differ by value or type
    if any(
        (
            key in kv_data
            and (kv_data[key].value != val or kv_data[key].type != vtype)
        )
        for kv_data in self.kv_data
    ):
        logger.warning(f'Duplicated key name {key!r}, overwriting it with new value {val!r} of type {vtype.name}')
compilade (Collaborator) commented Jul 10, 2025:

My concern with this is how this would hide redundant overrides (meaning they might not get noticed to be removed).

In a way, the warnings kind of encourage making set_gguf_parameters easier to follow (so that the overrides don't happen). I could be wrong, though.

For example, these seem to be the duplicate keys for which warnings are hidden by this section:

diff --git a/convert_hf_to_gguf.py b/convert_hf_to_gguf.py
index 2df43ba11..28188b43d 100755
--- a/convert_hf_to_gguf.py
+++ b/convert_hf_to_gguf.py
@@ -6550,13 +6550,6 @@ class GraniteHybridModel(Mamba2Model, GraniteMoeModel):
     def set_gguf_parameters(self):
         GraniteMoeModel.set_gguf_parameters(self)
 
-        ## General Params ##
-        self.gguf_writer.add_embedding_length(self.d_model)
-        self.gguf_writer.add_block_count(self.block_count)
-        self.gguf_writer.add_context_length(self.hparams.get("max_position_embeddings", 0))
-        self.gguf_writer.add_vocab_size(self.hparams["vocab_size"])
-        self.gguf_writer.add_feed_forward_length(self.hparams["intermediate_size"])
-
         ## Mamba mixer params ##
         self.gguf_writer.add_ssm_conv_kernel(self.find_hparam(["conv_kernel", "d_conv"]))
         self.gguf_writer.add_ssm_state_size(self.find_hparam(["state_size", "d_state"]))
@@ -6573,14 +6566,8 @@ class GraniteHybridModel(Mamba2Model, GraniteMoeModel):
         ]
         if rope_dim := self.hparams.get("attn_rotary_emb"):
             self.gguf_writer.add_rope_dimension_count(rope_dim)
-        self.gguf_writer.add_head_count(self.hparams["num_attention_heads"])
         self.gguf_writer.add_head_count_kv(head_count_kv_vec)
 
-        ## Feed Forward Params ##
-        self.gguf_writer.add_layer_norm_rms_eps(
-            self.find_hparam(["layer_norm_epsilon", "rms_norm_eps"], optional=True) or 1e-5
-        )
-
         ## If Bamba, use rope, otherwise don't
         use_rope = "BambaForCausalLM" in self.hparams["architectures"]
         self.gguf_writer.add_rope_scaling_finetuned(use_rope)

gabe-l-hart (Contributor, Author):

Yeah, that's a good point. Given that we've moved pretty significantly away from the complex inheritance on the c++ side, I think it might make sense to do something similar here, since the multiple inheritance is definitely causing confusing code here as well. Let me take a look at how to simplify this.

compilade (Collaborator) commented Jul 10, 2025:

This part isn't really made confusing by multiple inheritance, but by the long (but linear) family tree of GraniteMoeModel(GraniteModel(LlamaModel(TextModel))) for the inheritance of the set_gguf_parameters method.

Collaborator:

It's a double-edged sword: since we are getting more layers of inheritance, it's easy to add duplicates below without noticing, and then to add lots of noise above about something that isn't really an issue. Noise leads to complacency.

gabe-l-hart (Contributor, Author):

Yeah, agreed. I guess the real question is whether we want to support using parents directly for gguf params in a multiple-inheritance situation. If it's single inheritance, there should be no reason to overwrite what a parent did. It's possible that a child uses a different key for the same value as a parent, but then the parent's lookup wouldn't find the key and the child's lookup would have a different value (which I think is actually happening here).

gabe-l-hart (Contributor, Author):

Heh, very good point! Scope creep for sure

gabe-l-hart (Contributor, Author):

Ok removed. That leaves a few known override warnings:

WARNING:gguf.gguf_writer:Duplicated key name 'granitehybrid.embedding_length', overwriting it with new value 1536 of type UINT32
WARNING:gguf.gguf_writer:Duplicated key name 'granitehybrid.block_count', overwriting it with new value 40 of type UINT32
WARNING:gguf.gguf_writer:Duplicated key name 'granitehybrid.vocab_size', overwriting it with new value 49160 of type UINT32
WARNING:gguf.gguf_writer:Duplicated key name 'granitehybrid.feed_forward_length', overwriting it with new value 512 of type UINT32
WARNING:gguf.gguf_writer:Duplicated key name 'granitehybrid.attention.head_count', overwriting it with new value 12 of type UINT32
WARNING:gguf.gguf_writer:Duplicated key name 'granitehybrid.attention.head_count_kv', overwriting it with new value [0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0] of type ARRAY
WARNING:gguf.gguf_writer:Duplicated key name 'granitehybrid.attention.layer_norm_rms_epsilon', overwriting it with new value 1e-05 of type FLOAT32
WARNING:gguf.gguf_writer:Duplicated key name 'granitehybrid.context_length', overwriting it with new value 1048576 of type UINT32

gabe-l-hart (Contributor, Author):

I think this can be resolved here with a comment in the converter and the warnings reinstated?

compilade (Collaborator) commented Jul 10, 2025:

> Ok removed. That leaves a few known override warnings

All of these (except the head_count_kv one (and the context length, I forgot that)) can be avoided by applying the patch from #13550 (comment).

The source hparams fields seem to be mostly the same both times they are set (small difference with layer_norm_epsilon, but both times the actually-used field is rms_norm_eps (for granite-4.0-tiny-random at least)).

If you think it's clearer to keep it this way, this is fine with me too.

gabe-l-hart (Contributor, Author):

🤦 Nope, you're totally right. I'm concurrently trying to update my draft of bumping llama.cpp in ollama and multitasking poorly. I'll push with those removed.

…alue

After further discussion, this encourages sloppy overwriting in the model
converters

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <[email protected]>
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <[email protected]>
Branch: GraniteFour

Signed-off-by: Gabe Goodhart <[email protected]>
Co-authored-by: Francis Couture-Harpin <[email protected]>

(thanks for the sharp eyes and patience!)

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <[email protected]>
gabe-l-hart (Contributor, Author):

Ok, thanks for the patience @compilade, I think it should be good-to-go once CI is green!

CISC merged commit 0aedae0 into ggml-org:master on Jul 11, 2025. 51 checks passed.
gabe-l-hart (Contributor, Author):

@compilade @ggerganov @CISC Thank you so much for all the help and discussion in getting this merged! It's really great to have it officially in, and even more so for all the other great hybrid recurrent models that have come in during the process.

Labels: Apple Metal, examples, ggml, hot, Nvidia GPU, python, server, testing