[Quantization] Allow loading of transform configs #40673
Conversation
cc @MekkCyber
Putting in draft for now, need to do some more testing
[For maintainers] Suggested jobs to run (before merge)
run-slow: compressed_tensors_integration
"""Models quantized using compressed tensors can be saved to disk""" | ||
return True | ||
|
||
def dequantize(self, model: "PreTrainedModel"): |
Do we have to call this dequantize to match the Hugging Face API? If not, decompress would be more accurate, since it might involve something beyond quantization.
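For reference, a minimal sketch of what a decompression-backed implementation of this method might look like. This is not the PR's actual code: self.compressor and the decompress_model call are assumptions about the surrounding quantizer class, made for illustration only.

    def dequantize(self, model: "PreTrainedModel"):
        """Reverse compression on the model in place.

        Kept as `dequantize` to match the Hugging Face quantizer API,
        even though decompression may undo more than quantization alone
        (e.g. transforms applied to the weights).
        """
        # `self.compressor` (assumed to be a compressed-tensors
        # ModelCompressor held by this quantizer) and its
        # `decompress_model` method are hypothetical here.
        self.compressor.decompress_model(model)
        return model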
Purpose
Allow loading of models whose quantization configs include transform configs (see the loading sketch below)

Prerequisites

Changes
- Require compressed-tensors 0.11.0 (to support transform features)
- Removed update_dtype in order to reduce complexity and give users more control/predictability of model data types

Testing
- Extended CompressedTensorsTest, added an online quip-style transformed model for testing
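As a usage sketch, loading one of these transform-quantized checkpoints should require nothing beyond the usual from_pretrained call once this change lands. The model id below is a placeholder, not a real Hub checkpoint.

    from transformers import AutoModelForCausalLM

    # Hypothetical hub id for a quip-style transformed, quantized
    # checkpoint; substitute a real compressed-tensors model.
    model = AutoModelForCausalLM.from_pretrained(
        "org/example-llama-quip-w4a16",
        device_map="auto",
    )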