# AI Fix for: Create AWQ guide for llm-docs #1932
## Conversation
👋 Hi! Thank you for contributing to llm-compressor. Please add the `ready` label when the PR is ready for review. Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
**Summary of Changes**: Hello @shanaya-Gupta, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly improves the documentation for Activation-Aware Quantization (AWQ).
**Code Review**

This pull request aims to improve the documentation for Activation-Aware Quantization (AWQ) by creating a more detailed guide. The new guide explains the parameters of the `AWQModifier`, which is a good improvement. However, the updated `README.md` file appears to be incomplete, as it ends abruptly, leaving a sentence unfinished and omitting a crucial code example. I have provided a review comment with a suggestion to fix this.
* **`targets`**: (`Union[str, List[str]]`) Specifies the modules within the model to be quantized. This typically targets `Linear` layers using regular expressions. For instance, `["Linear"]` targets all `Linear` layers, while `r"re:.*layers\.\d+\.(q_proj|k_proj|v_proj|o_proj|gate_proj|up_proj|down_proj)$"` targets specific projection layers in a transformer block.
* **`ignore`**: (`Union[str, List[str]]`) Lists modules to exclude from quantization. Common exclusions include the `lm_head` (output layer) due to its sensitivity, or specific MoE gate layers. Example: `["lm_head", "re:.*mlp.gate$"]`.
* **`duo_scaling`**: (`bool`, optional) If `True` (default), enables duo scaling, a technique where two scaling factors are applied to each weight tensor. This often improves accuracy for certain models, especially those with varying weight distributions.
* **`mappings`**: (`List[AWQMapping]`, optional) A list of `AWQMapping` objects that define how input and output activations are grouped for scaling. These mappings are crucial for aligning normalization layers with their corresponding linear layers to optimize scaling and preserve accuracy. `llm-compressor` provides default mappings for popular architectures (e.g., Llama). For custom models, you might need to define your own. For example, a common mapping structure for Llama-like models looks like this:
The documentation for the `mappings` parameter is incomplete. The sentence introducing the example is left hanging, and the code example for a Llama-like model mapping is missing. This could be confusing for users trying to understand how to configure mappings.

I've suggested a fix to complete this section and also restore the link to the mappings registry which was present in the previous version.
* **`mappings`**: (`List[AWQMapping]`, optional) A list of `AWQMapping` objects that define how input and output activations are grouped for scaling. These mappings are crucial for aligning normalization layers with their corresponding linear layers to optimize scaling and preserve accuracy. `llm-compressor` provides default mappings for popular architectures (e.g., Llama). For custom models, you might need to define your own. For example, a common mapping structure for Llama-like models looks like this:

```python
[
    AWQMapping(
        "re:.*input_layernorm",
        ["re:.*q_proj", "re:.*k_proj", "re:.*v_proj"],
    ),
    AWQMapping("re:.*v_proj", ["re:.*o_proj"]),
    AWQMapping(
        "re:.*post_attention_layernorm",
        ["re:.*gate_proj", "re:.*up_proj"],
    ),
    AWQMapping(
        "re:.*up_proj",
        ["re:.*down_proj"],
    ),
]
```
To support other model families, you can supply your own mappings via the `mappings` argument, or you can contribute them to the registry [here](/src/llmcompressor/modifiers/awq/mappings.py).
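As a rough, non-authoritative sketch of what passing custom mappings to the modifier might look like (assuming `AWQMapping` is exported alongside `AWQModifier` from `llmcompressor.modifiers.awq`, and using made-up module-name regexes for a hypothetical custom architecture):

```python
from llmcompressor.modifiers.awq import AWQMapping, AWQModifier

# Hypothetical regex patterns for a custom architecture; replace them with
# the module names your model actually uses.
custom_mappings = [
    AWQMapping(
        "re:.*attention_norm",
        ["re:.*wq", "re:.*wk", "re:.*wv"],
    ),
    AWQMapping("re:.*wv", ["re:.*wo"]),
    AWQMapping(
        "re:.*ffn_norm",
        ["re:.*w1", "re:.*w3"],
    ),
    AWQMapping("re:.*w3", ["re:.*w2"]),
]

# Parameter names follow the guide above; the concrete values are examples,
# and quantization scheme/config options are omitted for brevity.
recipe = AWQModifier(
    targets=["Linear"],
    ignore=["lm_head"],
    mappings=custom_mappings,
    duo_scaling=True,
)
```

The regex patterns should mirror the smooth-layer and balance-layer module names in your model's architecture, following the same convention as the default mappings.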
While I do think that the AWQ README could use more detail, I closed the issue you were trying to fix since it actually requires a vLLM PR rather than an llm-compressor PR. Sorry about any confusion; the updated issue is here: vllm-project/vllm#26840. It still needs someone to work on it, though I don't have the necessary access level to label it as a good first issue.
Hi @shanaya-Gupta, thanks for the contribution. The first sections look improved, but you've deleted some sections that should remain:
## Compressing Your Own Model ##
To use your own model, start with an existing example and change the `model_id` to match your own model stub.
```python
model_id = "path/to/your/model"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
```

## Adding Mappings ##
In order to target weight and activation scaling locations within the model, the `AWQModifier` must be provided an AWQ mapping. For example, the AWQ mapping for the Llama family of models looks like this:

```python
[
    AWQMapping(
        "re:.*input_layernorm",
        ["re:.*q_proj", "re:.*k_proj", "re:.*v_proj"],
    ),
    AWQMapping("re:.*v_proj", ["re:.*o_proj"]),
    AWQMapping(
        "re:.*post_attention_layernorm",
        ["re:.*gate_proj", "re:.*up_proj"],
    ),
    AWQMapping(
        "re:.*up_proj",
        ["re:.*down_proj"],
    ),
]
```

To support other model families, you can supply your own mappings via the `mappings` argument when instantiating the `AWQModifier`, or you can add them to the registry [here](/src/llmcompressor/modifiers/awq/mappings.py) (contributions are welcome!)
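For context, a minimal end-to-end sketch of how these sections fit together might look like the following. It assumes llm-compressor's top-level `oneshot` entry point, the `open_platypus` calibration-dataset shorthand, and a `W4A16` scheme; the model path and output directory are placeholders, not part of the original guide:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier

# Placeholder model stub; swap in your own.
model_id = "path/to/your/model"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Default mappings cover Llama-like architectures; pass `mappings=...`
# here for other model families, as described above.
recipe = AWQModifier(targets=["Linear"], ignore=["lm_head"], scheme="W4A16")

# One-shot calibration and compression over a small calibration set.
oneshot(
    model=model,
    dataset="open_platypus",
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=256,
)

# Save the compressed model and tokenizer to a placeholder output directory.
save_dir = model_id.rstrip("/").split("/")[-1] + "-awq-w4a16"
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)
```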
We should keep these sections.
Resolves #1931.