
Conversation

shanaya-Gupta

Resolves #1931.


👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.


Summary of Changes

Hello @shanaya-Gupta, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the documentation for Activation-Aware Quantization (AWQ) within the llm-compressor project. The existing README.md has been transformed into a detailed guide, providing users with a clearer understanding of AWQ's functionality, its integration with llm-compressor, and comprehensive explanations of the AWQModifier's configuration parameters. This enhancement aims to make it easier for users to implement and optimize AWQ for their large language models, ultimately leading to better model compression and inference performance.

Highlights

  • AWQ Guide Enhancement: The examples/awq/README.md has been completely revamped to provide a more comprehensive and detailed guide on using Activation-Aware Quantization (AWQ) with the llm-compressor library.
  • Detailed Parameter Explanation: The updated guide now thoroughly explains the key parameters of the AWQModifier (e.g., scheme, targets, ignore, duo_scaling, mappings), offering clearer guidance on configuration and usage.
  • Focus on llm-compressor Integration: The new content emphasizes how AWQ integrates with llm-compressor and highlights its benefits, such as efficient 4-bit weight quantization and accelerated inference with vLLM.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist (bot) left a comment


Code Review

This pull request aims to improve the documentation for Activation-Aware Quantization (AWQ) by creating a more detailed guide. The new guide explains the parameters of the AWQModifier, which is a good improvement. However, the updated README.md file appears to be incomplete, as it ends abruptly, leaving a sentence unfinished and omitting a crucial code example. I have provided a review comment with a suggestion to fix this.

* **`targets`**: (`Union[str, List[str]]`) Specifies the modules within the model to be quantized. This typically targets `Linear` layers using regular expressions. For instance, `["Linear"]` targets all `Linear` layers, while `r"re:.*layers\.\d+\.(q_proj|k_proj|v_proj|o_proj|gate_proj|up_proj|down_proj)$"` targets specific projection layers in a transformer block.
* **`ignore`**: (`Union[str, List[str]]`) Lists modules to exclude from quantization. Common exclusions include the `lm_head` (output layer) due to its sensitivity, or specific MoE gate layers. Example: `["lm_head", "re:.*mlp.gate$"]`.
* **`duo_scaling`**: (`bool`, optional) If `True` (default), scaling factors are computed from both input activations and weights rather than activations alone. This often improves accuracy for certain models, especially those with varying weight distributions.
* **`mappings`**: (`List[AWQMapping]`, optional) A list of `AWQMapping` objects that define how input and output activations are grouped for scaling. These mappings are crucial for aligning normalization layers with their corresponding linear layers to optimize scaling and preserve accuracy. `llm-compressor` provides default mappings for popular architectures (e.g., Llama). For custom models, you might need to define your own. For example, a common mapping structure for Llama-like models looks like this:
Severity: medium

The documentation for the mappings parameter is incomplete. The sentence introducing the example is left hanging, and the code example for a Llama-like model mapping is missing. This could be confusing for users trying to understand how to configure mappings.

I've suggested a fix to complete this section and also restore the link to the mappings registry which was present in the previous version.

Suggested change

* **`mappings`**: (`List[AWQMapping]`, optional) A list of `AWQMapping` objects that define how input and output activations are grouped for scaling. These mappings are crucial for aligning normalization layers with their corresponding linear layers to optimize scaling and preserve accuracy. `llm-compressor` provides default mappings for popular architectures (e.g., Llama). For custom models, you might need to define your own. For example, a common mapping structure for Llama-like models looks like this:
```python
[
    AWQMapping(
        "re:.*input_layernorm",
        ["re:.*q_proj", "re:.*k_proj", "re:.*v_proj"],
    ),
    AWQMapping("re:.*v_proj", ["re:.*o_proj"]),
    AWQMapping(
        "re:.*post_attention_layernorm",
        ["re:.*gate_proj", "re:.*up_proj"],
    ),
    AWQMapping(
        "re:.*up_proj",
        ["re:.*down_proj"],
    ),
]
```

To support other model families, you can supply your own mappings via the mappings argument, or you can contribute them to the registry here.
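For additional context beyond the suggested text, a minimal `AWQModifier` recipe that ties the parameters above together might look like the following sketch. The import paths follow the existing llm-compressor AWQ examples; the scheme, targets, and ignore values (and the commented-out one-shot call) are illustrative placeholders rather than content from this PR.

```python
# Illustrative sketch only: a minimal AWQ recipe using the parameters described above.
# Import paths follow the existing llm-compressor AWQ examples; concrete values are
# placeholders, not content from this PR.
from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier

recipe = [
    AWQModifier(
        scheme="W4A16_ASYM",   # 4-bit weights, 16-bit activations, asymmetric
        targets=["Linear"],    # quantize Linear layers...
        ignore=["lm_head"],    # ...but leave the sensitive output head alone
    ),
]

# The recipe is then applied with the one-shot entrypoint, e.g.:
# oneshot(
#     model=model,                    # an AutoModelForCausalLM instance
#     dataset=calibration_dataset,    # small calibration set
#     recipe=recipe,
#     max_seq_length=512,
#     num_calibration_samples=256,
# )
```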


HDCharles commented Oct 14, 2025

While I do think that the AWQ README could use more detail, I closed the issue you were trying to fix since it actually requires a vLLM PR rather than an llm-compressor PR. Sorry about any confusion; the updated issue is here: vllm-project/vllm#26840 and still needs someone to work on it, though I don't have the necessary access level to label it as a good first issue.


@brian-dellabetta left a comment


Hi @shanaya-Gupta, thanks for the contribution. The first sections look improved, but you've deleted some that should remain.

Comment on lines -17 to -45
## Compressing Your Own Model ##
To use your own model, start with an existing example and change the `model_id` to match your own model stub.
```python
from transformers import AutoModelForCausalLM

model_id = "path/to/your/model"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
```

## Adding Mappings ##
In order to target weight and activation scaling locations within the model, the `AWQModifier` must be provided an AWQ mapping. For example, the AWQ mapping for the Llama family of models looks like this:

```python
[
    AWQMapping(
        "re:.*input_layernorm",
        ["re:.*q_proj", "re:.*k_proj", "re:.*v_proj"],
    ),
    AWQMapping("re:.*v_proj", ["re:.*o_proj"]),
    AWQMapping(
        "re:.*post_attention_layernorm",
        ["re:.*gate_proj", "re:.*up_proj"],
    ),
    AWQMapping(
        "re:.*up_proj",
        ["re:.*down_proj"],
    ),
]
```

To support other model families, you can supply your own mappings via the `mappings` argument when instantiating the `AWQModifier`, or you can add them to the registry [here](/src/llmcompressor/modifiers/awq/mappings.py) (contributions are welcome!)
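For illustration, passing a custom set of mappings when instantiating the modifier might look like the sketch below. It assumes `AWQMapping` can be imported alongside `AWQModifier` from the `llmcompressor.modifiers.awq` module that backs the registry linked above; the regex patterns are the Llama-family mapping shown in this section, and the remaining arguments are placeholders.

```python
# Illustrative sketch: supplying the Llama-style mappings from this section via the
# `mappings` argument. The AWQMapping import alongside AWQModifier is assumed to match
# the registry module linked above; scheme/targets/ignore values are placeholders.
from llmcompressor.modifiers.awq import AWQMapping, AWQModifier

llama_mappings = [
    AWQMapping(
        "re:.*input_layernorm",
        ["re:.*q_proj", "re:.*k_proj", "re:.*v_proj"],
    ),
    AWQMapping("re:.*v_proj", ["re:.*o_proj"]),
    AWQMapping(
        "re:.*post_attention_layernorm",
        ["re:.*gate_proj", "re:.*up_proj"],
    ),
    AWQMapping(
        "re:.*up_proj",
        ["re:.*down_proj"],
    ),
]

modifier = AWQModifier(
    mappings=llama_mappings,
    scheme="W4A16_ASYM",
    targets=["Linear"],
    ignore=["lm_head"],
)
```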

We should keep these sections

