
Conversation

@kylesayrs (Contributor) commented Aug 26, 2025

Purpose

  • Support fully-expressive attention and kv cache quantization
  • Support running kv cache quantization evals with hf transformers

Prerequisites

Changes

New Classes

  • Add hookable attention and kvcache implementations which are registered to the attention module as submodules
    • QuantizedAttentionImpl injects itself into the model by registering a new attention implementation called ct_hooked_attention and overriding model.config._attn_implementation to point at the new implementation name
    • QuantizedKVCache injects itself into the model by overriding the past_key_values input kwarg to attention, and wrapping the functionality of the original cache
    • Calibration and transform hooks can be added to these modules via the hook functions (a usage sketch follows this list)
      • register_query_hook
      • register_key_hook
      • register_value_hook
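
As a rough usage sketch (names taken from this PR's description; exact signatures are assumed rather than confirmed), a calibration hook might look like this:

```python
import torch

def absmax_observer(name: str):
    """Build a hook that records the abs-max of the tensor it observes."""
    def hook(module: torch.nn.Module, value: torch.Tensor) -> torch.Tensor:
        # a hook may inspect and/or replace the query/key/value tensor;
        # here we only log a summary statistic for calibration purposes
        print(f"{name} absmax: {value.abs().max().item():.4f}")
        return value
    return hook

# Pseudo-usage against the helpers described above (hypothetical call shapes):
#
# initialize_hooked_attention(model, attn_module)
# register_query_hook(attn_module, absmax_observer("query"))
# register_key_hook(attn_module, absmax_observer("key"))
# register_value_hook(attn_module, absmax_observer("value"))
```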

Quantization Lifecycle Changes

  • Apply
    • The kv_cache_scheme field of the quantization config is now used to call initialize_hooked_kv_cache (a config sketch follows this list)
    • Attention modules can now be targeted; initialize_hooked_attention is called when attention modules are explicitly targeted (see is_narrow_match)
    • Remove logic for "merging" kv cache schemes (this doesn't really make sense, and I'm not sure why it was ever included)
  • Initialize
    • Hooked kv cache and attention modules have their quantization parameters initialized by initialize_module_for_quantization
    • The presence of attention or kvcache submodules determines whether attention quantization or kv-cache-only quantization is being applied
  • Serialization
    • QuantizationConfig.from_pretrained was cleaned up and given additional comments
    • The kv_cache_scheme field is added if there are any attention modules with a quantization_scheme attached
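
As promised above, a hedged sketch of a config that exercises both paths during apply: the kv_cache_scheme field (driving initialize_hooked_kv_cache) and a narrow attention target (driving initialize_hooked_attention via is_narrow_match). Field names follow this PR's description; the values and exact schema are illustrative assumptions, not the canonical format.

```python
# illustrative only; not the authoritative schema
quantization_config = {
    # presence of kv_cache_scheme triggers initialize_hooked_kv_cache
    "kv_cache_scheme": {"num_bits": 8, "type": "float", "strategy": "tensor"},
    "config_groups": {
        "attn_group": {
            # a "narrow" target: the attention submodule itself rather than
            # every module inside the decoder layer (see is_narrow_match)
            "targets": ["re:.*self_attn$"],
            "input_activations": {"num_bits": 8, "type": "float", "dynamic": False},
        },
    },
}
```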

Helpers

  • is_narrow_match is used to check that attention modules are being specifically targeted (rather than targeting all modules in a layer)
  • get_num_attn_heads, get_num_kv_heads, and get_head_dim read attention shape values from the model config (see the sketch below)
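
A minimal sketch of roughly what these helpers compute from a Hugging Face config, assuming standard transformers attribute names; the real implementations may handle more model variants.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-0.5B")  # example model

# roughly equivalent to get_num_attn_heads / get_num_kv_heads / get_head_dim
num_attn_heads = config.num_attention_heads
num_kv_heads = getattr(config, "num_key_value_heads", num_attn_heads)  # GQA-aware
head_dim = getattr(config, "head_dim", None) or config.hidden_size // num_attn_heads

print(num_attn_heads, num_kv_heads, head_dim)
```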

Testing

  • Added tests for is_narrow_match
  • Added tests for the new attention and kvcache classes

@brian-dellabetta (Contributor) left a comment

This looks good, though I have a number of questions and minor suggestions.

Comment on lines +146 to +114
# assumes only one model at a time
global _original_impl
Contributor:

😬 I don't want to delay things, but we should briefly consider whether there are alternative solutions.

kylesayrs (Contributor Author):

I spent 20 minutes exploring this; it requires creating specialized _ct_hooked_attention functions and a specialized QuantizedAttentionImpl, which is more complexity than value added, imho.

Contributor:

Can _original_impl be registered at the module level (i.e. on each self_attn block) instead of setting a global var?

kylesayrs (Contributor Author):

Sure, but in order to register the _original_impl, it needs to be gotten from somewhere.

The first time, you "get" it from model.config. However, on subsequent calls, model.config is overridden. This means that in order to "get" the original implementation, you'd have to go find the last Attention module you registered it to, or else store it in some global store.

You could register it to the model module itself or something like that, but I think that's less reliable than just a global store. If it's that functionality you're after, we can turn it into a hash table or something, keyed by model hash.
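
For readers following the thread, a rough illustration of the pattern being discussed (a minimal sketch; the function name and exact flow are assumed rather than quoted from the diff):

```python
from transformers import PreTrainedModel

_original_impl = None  # module-scoped; assumes only one model at a time


def enable_hooked_attention(model: PreTrainedModel) -> None:
    """Point the model at the hooked attention implementation."""
    global _original_impl
    if _original_impl is None:
        # first call: the config still names the original implementation
        _original_impl = model.config._attn_implementation
    # after this point, reading model.config gives back the hooked name,
    # which is why the original has to be stashed somewhere else
    model.config._attn_implementation = "ct_hooked_attention"
```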

Contributor:

Just trying to figure out what the lifetime of _original_impl is: once everything is set, can it basically be treated as no longer necessary? Or is it important for the entire duration, during model loading as well as during any forward passes?

Contributor:

This is all net-new, so it's not like we will be breaking anything pre-existing. Global vars make me nervous, but this seems like a legitimate enough use case to use them and accept the risk.

kylesayrs (Contributor Author):

If it makes you feel any better, this variable is still only scoped to this file. This is the same as any module-scoped read, only this time we're writing to it and therefore need the global keyword.

@dsikka (Collaborator) left a comment

If the goal is to use this generally for kv_cache and attn quantization, can we move initialize_hooked_attention and initialize_hooked_kv_cache to initialize.py?

I understand we haven't hooked them in yet for those workflows, but I think these belong there.

dsikka previously approved these changes Sep 2, 2025

@dsikka (Collaborator) left a comment

Do a pass through on any missing docstrings, otherwise lgtm.
Nice work.

Base automatically changed from kylesayrs/transform-simplify-key to main September 8, 2025 18:46
@dsikka dsikka dismissed stale reviews from brian-dellabetta and themself September 8, 2025 18:46

The base branch was changed.

@kylesayrs kylesayrs force-pushed the kylesayrs/r3-only branch 2 times, most recently from e224a5d to 05ec17e Compare October 8, 2025 19:20
@kylesayrs kylesayrs changed the base branch from main to kylesayrs/add-attn-head-strat October 8, 2025 19:20
@brian-dellabetta (Contributor) left a comment

Following for the most part. A few clarifications, but this makes sense to me

@kylesayrs kylesayrs marked this pull request as draft October 8, 2025 21:06
@kylesayrs kylesayrs force-pushed the kylesayrs/add-attn-head-strat branch from d084c5e to e3f24d4 Compare October 9, 2025 14:19
@kylesayrs kylesayrs changed the base branch from kylesayrs/add-attn-head-strat to main October 9, 2025 18:14
@kylesayrs kylesayrs dismissed brian-dellabetta’s stale review October 9, 2025 18:14

The base branch was changed.

@kylesayrs kylesayrs changed the base branch from main to kylesayrs/add-attn-head-strat October 9, 2025 18:15
Base automatically changed from kylesayrs/add-attn-head-strat to main October 9, 2025 20:11
@kylesayrs kylesayrs marked this pull request as ready for review October 13, 2025 20:41
kylesayrs (Contributor Author):
Last nightly worked, but e2e failed due to model storage issues
https://github.com/neuralmagic/llm-compressor-testing/actions/runs/18483826999

@brian-dellabetta (Contributor) left a comment

We can resolve the global var thread. I have another new comment we might want to consider in a follow-up, but marking this as approved. Cool stuff! Excited to see it in action.

Comment on lines +182 to +183
# use any status from modules (in practice, use the last module)
model_status = None
Contributor:

This seems like something that might've been missed in the scoped quant work. If multiple statuses are found, rather than just using the last one found, don't we want to set format="mixed-precision" in the returned QuantizationConfig?

@kylesayrs (Contributor Author) commented Oct 14, 2025

Essentially, you're right. However, this value is effectively meaningless, as it is later overridden by the model compressor.
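
For context, a sketch of the status-collection behavior under discussion, assuming each quantized submodule exposes a quantization_status attribute; as noted, the resulting format is recomputed later by the model compressor rather than derived from these statuses.

```python
import torch

def infer_model_status(model: torch.nn.Module):
    # use any status from modules (in practice, the last module found wins)
    model_status = None
    for submodule in model.modules():
        status = getattr(submodule, "quantization_status", None)
        if status is not None:
            model_status = status
    return model_status
```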
