
Conversation

@brian-dellabetta (Collaborator) commented on Sep 26, 2025

SUMMARY:
Resolves #1652

Our multimodal examples all ignore `"re:vision_tower.*"`, but this misses cases where the name is prefixed with something else (e.g. `model.vision_tower`). This PR loosens the regexes to allow anything to precede `vision_tower` or `multi_modal_projector` and still be caught by the ignore. Layers beginning with `vision_tower`, without a prefix, will still be caught.
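For reference, the updated ignore list looks roughly like the sketch below (based on the multimodal examples; the exact targets, scheme, and ignore entries vary per example):

```python
from llmcompressor.modifiers.quantization import GPTQModifier

recipe = GPTQModifier(
    targets="Linear",
    scheme="W4A16",
    ignore=[
        "lm_head",
        # Loosened from "re:vision_tower.*" so prefixed names such as
        # "model.vision_tower..." are also ignored.
        "re:.*vision_tower.*",
        "re:.*multi_modal_projector.*",
    ],
)
```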

Also some formatting fixes, which apparently are not enforced on `examples/` as part of the CI/CD checks.

TEST PLAN:
Running `llm-compressor/examples/multimodal_vision/mistral3_example.py` on latest main shows we are quantizing layers we don't want to:

```
2025-09-26T20:02:43.571160+0000 | compress_modules | INFO - Quantizing model.vision_tower.transformer.layers.4.feed_forward.gate_proj using 512 samples
```

After these changes, those lines no longer appear in the logs.
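A quick programmatic check is sketched below; it assumes the example's compressed `model` object and that quantized modules carry a `quantization_scheme` attribute (as compressed-tensors attaches), so treat it as illustrative rather than part of the example script:

```python
# After oneshot(...) in the example script, verify that no vision_tower or
# multi_modal_projector submodule was scheduled for quantization.
leaked = [
    name
    for name, module in model.named_modules()
    if ("vision_tower" in name or "multi_modal_projector" in name)
    and hasattr(module, "quantization_scheme")
]
assert not leaked, f"unexpectedly quantized: {leaked}"
```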


👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.

gemini-code-assist bot (Contributor) commented

Summary of Changes

Hello @brian-dellabetta, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the quantization process within multimodal examples by correcting regular expression patterns. Previously, the patterns were too restrictive, leading to the unintended quantization of vision_tower and multi_modal_projector layers when they appeared with prefixes. The updated regexes ensure these critical components are consistently ignored, aligning with the intended behavior and resolving a reported issue.

Highlights

  • Regex Fixes for Multimodal Examples: The pull request updates regex patterns used to ignore vision_tower and multi_modal_projector layers during quantization in various multimodal examples.
  • Improved Layer Exclusion Logic: The regexes are now more flexible, using `re:.*vision_tower.*` instead of `re:vision_tower.*`, to correctly identify and ignore these layers even when they are prefixed (e.g., `model.vision_tower`); see the sketch after this list.
  • Resolution of Quantization Issue: This change directly addresses and resolves issue #1652 (Error when loading GPTQ-quantized Mistral-Small model with vLLM), where unintended layers were being quantized.
  • Formatting Improvements: Minor formatting adjustments were also included in some Python files for improved readability.
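The behavioral difference is easy to see with plain `re`; this demo assumes, as a simplification, that `re:` ignore patterns are matched against the full module name anchored at the start (i.e., `re.match` semantics):

```python
import re

# Module names in the style of the Mistral-3 example's logs.
names = [
    "vision_tower.transformer.layers.0.feed_forward.gate_proj",
    "model.vision_tower.transformer.layers.4.feed_forward.gate_proj",
]

old = r"vision_tower.*"    # pattern body of the old "re:vision_tower.*" ignore
new = r".*vision_tower.*"  # loosened pattern body from this PR

for name in names:
    # The old pattern misses the "model."-prefixed name; the new one catches both.
    print(f"{name}\n  old: {bool(re.match(old, name))}  new: {bool(re.match(new, name))}")
```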

@brian-dellabetta added the ready (When a PR is ready for review) label on Sep 26, 2025
@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request updates the regex patterns in several example scripts to correctly ignore vision_tower and multi_modal_projector layers during quantization, even when they are prefixed. The changes are correct and address the issue described. I've also pointed out a few places where the vision_model regex was not updated, which should be done for consistency with the goal of this PR. Additionally, some minor formatting fixes are included, which improve code readability.

@kylesayrs previously approved these changes on Sep 30, 2025
@fynnsu (Collaborator) left a comment


I added suggestions for some that weren't updated.

I'm also wondering if we should approach this another way, given this seems very common. For example we could:

  1. Have this behavior by default (essentially auto-append `.*` to regexes). Slightly risky, but probably 95+% of the time what the user wants.
  2. Have some kind of flag (maybe on by default) that appends `.*` to the regexes.

Idk, maybe that doesn't make sense, but it seems like we will almost always want the `.*` there, and it would help prevent silent failures.
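A minimal sketch of what option 2 could look like (a hypothetical `loosen_ignore_patterns` helper and `auto_wrap` flag, not actual llm-compressor API):

```python
def loosen_ignore_patterns(ignore: list[str], auto_wrap: bool = True) -> list[str]:
    """Hypothetical helper: prepend ".*" to each "re:" pattern so that
    prefixed module names (e.g. "model.vision_tower...") also match."""
    if not auto_wrap:
        return ignore
    loosened = []
    for pattern in ignore:
        if pattern.startswith("re:") and not pattern[3:].startswith(".*"):
            pattern = "re:.*" + pattern[3:]
        loosened.append(pattern)
    return loosened

# loosen_ignore_patterns(["re:vision_tower.*", "lm_head"])
# -> ["re:.*vision_tower.*", "lm_head"]
```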

@brian-dellabetta (Collaborator, Author) replied

> I added suggestions for some that weren't updated.
>
> I'm also wondering if we should approach this another way, given this seems very common. For example we could:
>
> 1. Have this behavior by default (essentially auto-append `.*` to regexes). Slightly risky but probably 95+% of the time what the user wants.
> 2. Have some kind of flag (maybe on by default) that appends `.*` to the regexes.
>
> Idk, maybe that doesn't make sense but it seems like we will almost always want to have the `.*` there and would help prevent silent failures.

Thanks @fynnsu for the catches. We may want to do that in the future, but it would be a change in behavior, so we'd have to think it through.

@kylesayrs (Collaborator) replied

Re: @fynnsu, this is a philosophical take, but I usually prefer explicit over implicit behavior. This also allows users to be more specific when matching paths, especially when it comes to nested models (which might have repeated path segment names).

@brian-dellabetta merged commit 183364e into main on Oct 1, 2025; 8 checks passed.
@brian-dellabetta deleted the bdellabe/examples-fix-ignores branch on Oct 1, 2025 at 15:24.
cajeonrh pushed a commit to cajeonrh/llm-compressor that referenced this pull request Oct 2, 2025
…ct#1871)


Signed-off-by: Brian Dellabetta <[email protected]>
Co-authored-by: Fynn Schmitt-Ulms <[email protected]>
Signed-off-by: Cassie Jeon <[email protected]>
Labels: ready (When a PR is ready for review)
Successfully merging this pull request may close: Error when loading GPTQ-quantized Mistral-Small model with vLLM (#1652)
3 participants