[examples] fix vision_tower/multi_modal_projector regexes #1871
Conversation
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
Summary of Changes

Hello @brian-dellabetta, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request refines the quantization process within multimodal examples by correcting regular expression patterns. Previously, the patterns were too restrictive, leading to the unintended quantization of prefixed `vision_tower` and `multi_modal_projector` layers.
Code Review

This pull request updates the regex patterns in several example scripts to correctly ignore `vision_tower` and `multi_modal_projector` layers during quantization, even when they are prefixed. The changes are correct and address the issue described. I've also pointed out a few places where the `vision_model` regex was not updated, which should be done for consistency with the goal of this PR. Additionally, some minor formatting fixes are included, which improve code readability. A sketch of the resulting ignore configuration is shown below.
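For illustration, here is a minimal sketch of the corrected ignore list, modeled on the `GPTQModifier` recipes used in the multimodal examples (the exact schemes and ignore entries vary per example script, so treat this as representative rather than exact):

```python
from llmcompressor.modifiers.quantization import GPTQModifier

# A leading ".*" lets each pattern match module names even when the model
# wraps them under a prefix such as "model." (e.g. "model.vision_tower...").
recipe = GPTQModifier(
    targets="Linear",
    scheme="W4A16",
    ignore=[
        "re:.*lm_head",
        "re:.*vision_tower.*",           # was "re:vision_tower.*"
        "re:.*multi_modal_projector.*",  # was "re:multi_modal_projector.*"
    ],
)
```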
I added suggestions for some that weren't updated.

I'm also wondering if we should approach this another way, given this seems very common. For example we could:

- Have this behavior by default (essentially auto-prepend `.*` to regexes). Slightly risky, but probably 95+% of the time what the user wants.
- Have some kind of flag (maybe on by default) that prepends `.*` to the regexes.

Idk, maybe that doesn't make sense, but it seems like we will almost always want the `.*` there, and it would help prevent silent failures (see the sketch below for why the anchored pattern misses prefixed names).
Resolved review threads on:

- tests/e2e/vLLM/recipes/INT8/recipe_int8_channel_weight_dynamic_per_token.yaml
- tests/e2e/vLLM/recipes/actorder/recipe_w4a16_actorder_weight.yaml
Co-authored-by: Fynn Schmitt-Ulms <[email protected]>
Thanks @fynnsu for the catches. We may want to do that in the future, but it would be a change in behavior, so we'd have to think about it.
Re: @fynnsu, this is a philosophical take, but I usually prefer explicit over implicit behavior. This also allows users to be more specific when matching paths, especially when it comes to nested models (which might have repeated path segment names).
[examples] fix vision_tower/multi_modal_projector regexes (vllm-project#1871)

SUMMARY:
Resolves vllm-project#1652

Our multimodal examples all ignore `"re:vision_tower.*"`, but this misses cases where the name is prefixed with something else (e.g. `model.vision_tower`). This PR loosens the regexes to allow anything to precede `vision_tower` or `multi_modal_projector` and still be caught by the ignore. Layers beginning with `vision_tower`, without a prefix, will still be caught.

Also some formatting fixes; these must not be covered by the ci/cd checks on `examples/`.

TEST PLAN:
Running `llm-compressor/examples/multimodal_vision/mistral3_example.py` on latest main shows we are quantizing layers we don't want to be:

```
2025-09-26T20:02:43.571160+0000 | compress_modules | INFO - Quantizing model.vision_tower.transformer.layers.4.feed_forward.gate_proj using 512 samples
```

After these changes, those layers no longer appear in the logs.

---------

Signed-off-by: Brian Dellabetta <[email protected]>
Co-authored-by: Fynn Schmitt-Ulms <[email protected]>
Signed-off-by: Cassie Jeon <[email protected]>