
Conversation

@kylesayrs
Collaborator

@kylesayrs kylesayrs commented Sep 17, 2025

Purpose

  • Add tests for quantization in order to add more confidence to future quantization refactors
    • Each of the tested values has been checked and looks correct

Changes

  • Observer changes
    • Update calculate_updated_min_max to store running values, even if the computation short-circuits (helpful for testing)
    • Update get_qparams_along_dim to support multiple dims and negative dims (see the sketch below)
      • This actually results in a silent typing bug with token quantization, which is fixed by following the base class implementation
      • This change essentially duplicates the base class implementation; future work could involve cleaning up the inheritance structure here
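
For illustration, a minimal sketch (not the observer's actual code) of computing min/max along multiple, possibly negative, dims by reducing over all other dims; the helper name `minmax_along_dims` and the reduction choice are assumptions for this example:

```python
import torch


def minmax_along_dims(observed: torch.Tensor, dim):
    """Reduce min/max over every dimension *not* listed in `dim`.

    Accepts a single int or an iterable of ints; negative dims are
    converted to positive indices before use.
    """
    dims = {dim} if isinstance(dim, int) else set(dim)
    dims = {d if d >= 0 else observed.ndim + d for d in dims}  # normalize negatives
    reduce_dims = tuple(d for d in range(observed.ndim) if d not in dims)
    min_vals = torch.amin(observed, dim=reduce_dims, keepdim=True)
    max_vals = torch.amax(observed, dim=reduce_dims, keepdim=True)
    return min_vals, max_vals


# e.g. token-wise stats for a (batch, seq_len, hidden) activation
x = torch.randn(2, 4, 8)
mins, maxs = minmax_along_dims(x, dim=(0, 1))   # same result as dim=(0, -2)
print(mins.shape, maxs.shape)                   # torch.Size([2, 4, 1]) twice
```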

Testing

  • Added tests pass

Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @kylesayrs, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on strengthening the quantization observer mechanisms and expanding test coverage. It refines the logic for calculating min-max values and global parameters within observers, ensuring more robust and predictable behavior. Crucially, it adds extensive end-to-end tests for different quantization strategies, which will serve as a vital safeguard for future modifications and improvements to the quantization framework.

Highlights

  • Observer Logic Refinements: Enhanced calculate_updated_min_max to consistently store running values and updated calculate_gparam to use patch_attr for safe, temporary state modification, preventing unintended side effects on running means.
  • Dimension Handling in Observers: Improved get_qparams_along_dim to support multiple and negative dimensions, increasing flexibility for quantization parameter calculation.
  • Comprehensive Quantization E2E Tests: Introduced new end-to-end tests for various quantization strategies (tensor, channel, group, block, token) covering both weight and activation quantization, significantly boosting confidence in the quantization pipeline.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces valuable cleanups to the quantization observers and adds comprehensive end-to-end tests. The observer changes, such as making calculate_gparam side-effect-free and enhancing get_qparams_along_dim, are solid improvements. The new tests significantly increase confidence in the quantization logic for future refactoring. I've included a few suggestions to enhance the new test files by removing debugging print statements and commented-out code, which will improve overall maintainability. Great work on strengthening both the implementation and test coverage.

Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Collaborator

@shanjiaz shanjiaz left a comment


Do you think we should update the MSE observer as well, since reset was removed from calibration?

@kylesayrs
Collaborator Author

@shanjiaz global parameters are not supported by the MSE observer anyway, so it's not relevant without that support

shanjiaz previously approved these changes Sep 17, 2025
Collaborator

@shanjiaz shanjiaz left a comment


Looks good to me!

Signed-off-by: Kyle Sayers <[email protected]>
@kylesayrs kylesayrs changed the title [Observers] Small observers cleanup, add e2e quantization tests [Observers] Small observers cleanup, add e2e quantization tests, increase test file limit Sep 17, 2025
dim = set(dim)

# convert negative dims
dim = [d if d >= 0 else observed.ndim + d for d in dim]
Collaborator


Shouldn't the cast to set happen after this line?

Collaborator Author


Technically either is fine, since the argument type just needs to be an iterable. I'm purely matching the implementation on the base class for now

Update get_qparams_along_dim to support multiple dims and negative dims
This actually results in a silent typing bug with token quantization, and is fixed on the base class implementation
This change essentially duplicates the base class implementation. Future work could involve cleaning up the inheritance structure here

Collaborator


I mean more that you might end up with duplicates in dim if you create this list and don't cast back to a set.

e.g. if there are 3 dims and dim={1,2,-1}, then dim=[1,2,2] after this line.
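
A small snippet illustrating the concern (plain Python, values taken from the example above):

```python
# Illustrative only: converting negative dims after casting to a set can
# reintroduce duplicates, so a second dedup (or converting first) is needed.
observed_ndim = 3
dim = {1, 2, -1}

converted = [d if d >= 0 else observed_ndim + d for d in dim]
print(sorted(converted))       # [1, 2, 2] -- -1 collapsed onto 2, now duplicated

deduplicated = set(converted)
print(sorted(deduplicated))    # [1, 2]
```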

updated_min_val, updated_max_val = self.calculate_updated_min_max(
observed=observed
)
# patch to avoid affecting running means
Collaborator


This is because we are calculating the global scale, right? We don't want the calculate_qparams result to change based on this calculation?

Collaborator Author


Update calculate_gparam to restore original running values, rather than relying on resetting after calculation

Yes

Collaborator


Why is this preferable? If anything, this now seems more confusing

Collaborator Author


From a programming standpoint, this decouples calculate_gparam and Observer.reset (there's no way to footgun yourself by calling calculate_gparam and forgetting to call Observer.reset).

From a functionality standpoint, I think this fixes a bug where metrics would be updated twice (which has implications for running values), specifically when called from calibrate_activations. In the case of activations, we don't want to reset after each gparam calculation, since we still need those metrics to compute running values.

Collaborator


I think you're right about the 2nd point.

I don't know if I agree with the first point. This feels like a hack.

Collaborator Author


Case 1

Consider the case of strategy="tensor_group", dynamic=False and averaging_constant != 1.

On activation hook, calibrate_activations calls call_observer with should_calculate_gparam=True*. This causes calculate_updated_min_max to be called twice, which causes the running min/max to move faster than if no global param was calculated.

Case 2

Consider the case of strategy="tensor_group", dynamic="local" and averaging_constant != 1.

Originally, calculate_gparam would call calculate_updated_min_max and the running values would update (twice*). Now, the running values will not update.

* Note that running values are updated, even if should_calculate_qparams=False

TLDR

So it seems that this change fixes a bug where running values are updated twice, but changes the behavior of dynamic="local" to calculate global parameters based on true values, not running values. I assumed that global parameters should be the true min/max of all values, not running values, but maybe @dsikka you think this shouldn't be the case?

I've reverted the change since it's not necessary for group quant, but we should definitely look into exactly the behavior we want for global scales (and scales in general; running means are slightly strange anyway and seem to be a vestige of QAT).
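
For intuition, a toy numeric sketch of Case 1, assuming the usual exponential-moving-average update `running += c * (observed - running)` (the observer's exact update may differ):

```python
# Toy illustration: the same batch max observed once vs. twice per step,
# with averaging_constant = 0.01 and 100 calibration batches.
averaging_constant = 0.01
batch_max = 10.0


def ema_update(running: float, observed: float, c: float = averaging_constant) -> float:
    return running + c * (observed - running)


running_once = running_twice = 0.0
for _ in range(100):
    running_once = ema_update(running_once, batch_max)
    running_twice = ema_update(ema_update(running_twice, batch_max), batch_max)

print(round(running_once, 2))   # ~6.34 -- single update per batch
print(round(running_twice, 2))  # ~8.66 -- double update moves roughly twice as fast
```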

should_calculate_gparam=True,
should_calculate_qparams=False,
)
module.weight_observer.reset()
Collaborator


Because we only attach one observer, I’m fairly sure we’re resetting to prevent global scale metrics from impacting quant scale metrics

Collaborator Author


Update calculate_gparam to restore original running values, rather than relying on resetting after calculation

Reset is replaced by patching and restoring the metrics
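
A minimal sketch of the patch-and-restore idea; `patch_attr` is re-implemented here for illustration, and `ToyObserver` and its gparam formula are stand-ins, not the library's actual observer:

```python
from contextlib import contextmanager

import torch


@contextmanager
def patch_attr(obj, name, value):
    """Temporarily set obj.<name> to value, restoring the original on exit."""
    original = getattr(obj, name)
    setattr(obj, name, value)
    try:
        yield
    finally:
        setattr(obj, name, original)


class ToyObserver:
    """Stand-in for an observer that keeps running min/max state."""

    def __init__(self):
        self.min_val = torch.tensor(0.0)
        self.max_val = torch.tensor(0.0)

    def calculate_gparam(self, observed: torch.Tensor) -> torch.Tensor:
        # Work on temporary copies of the running stats; patch_attr restores
        # the originals on exit, so the running values used later for qparams
        # are unaffected by this global-scale calculation.
        with patch_attr(self, "min_val", self.min_val.clone()), \
             patch_attr(self, "max_val", self.max_val.clone()):
            self.min_val = torch.minimum(self.min_val, observed.min())
            self.max_val = torch.maximum(self.max_val, observed.max())
            gparam = (self.max_val - self.min_val).clamp(min=1e-12)
        return gparam  # min_val / max_val are back to their pre-call values
```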

kylesayrs and others added 2 commits September 18, 2025 10:45
@kylesayrs kylesayrs changed the title [Observers] Small observers cleanup, add e2e quantization tests, increase test file limit [Observers] Small observers cleanup, add e2e quantization tests Sep 18, 2025
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
@kylesayrs
Collaborator Author

#1903

@kylesayrs kylesayrs closed this Oct 9, 2025
kylesayrs added a commit that referenced this pull request Oct 14, 2025
…servers (#1903)

## Purpose ##
* FP4
* Fix bug discovered
[here](#1830 (comment))
where dynamic="local" nvfp4 calculations would increment the observer
twice as fast as normal
  * Enable MSE observer to be used with FP4 (a toy Python sketch follows
this list)
    ```pseudocode
    mse_quant_error := mean((x - fake_quant(x))**2)
    global_scale <- min[min_vals, max_vals, global_scale](mse_quant_error(x))
    scale, zp <- min[min_vals, max_vals](mse_quant_error(x, global_scale))
    ```
* Simplification
* Make supporting attention calibration easier by separating out
weight/activation/attention reshaping
* Improve readability of observer code by removing many levels of
function indirection
* Drop support for calibration with non-divisible group sizes. This is
not really a loss, since [forward
passes](https://github.com/neuralmagic/compressed-tensors/blob/main/src/compressed_tensors/quantization/lifecycle/forward.py#L279)
also make this assumption
* New observers
* `memoryless_minmax` computes min and max values on the fly in a
dynamic-quantization style. This observer is useful for PTQ weight
quantization
* `static_minmax` computes absolute min and max values across all
observations. This observer is useful for PTQ activation quantization
* `memoryless_mse` computes best qparams w.r.t. MSE loss for each
observation. This observer is useful for PTQ weight quantization
* Memory improvements
* All observers no longer store copies of scales and zero points,
reducing the amount of required memory
* Newly introduced "memoryless" observers do not store any quantization
parameters, which greatly reduces the memory requirements for PTQ weight
quantization of very large models
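
As mentioned above, a toy Python sketch of the MSE idea (a symmetric,
int4-style grid search, not the actual MSE observer or the FP4 format):

```python
import torch


def fake_quant(x: torch.Tensor, scale: torch.Tensor, qmax: int = 7) -> torch.Tensor:
    """Symmetric fake quantization: quantize to [-qmax-1, qmax], then dequantize."""
    return torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale


def mse_best_scale(x: torch.Tensor, steps: int = 20, qmax: int = 7) -> torch.Tensor:
    """Grid-search a shrunken abs-max whose scale minimizes the MSE quant error."""
    absmax = x.abs().amax()
    best_scale, best_err = absmax / qmax, float("inf")
    for i in range(1, steps + 1):
        scale = (absmax * i / steps) / qmax     # candidate scale
        err = torch.mean((x - fake_quant(x, scale, qmax)) ** 2).item()
        if err < best_err:
            best_scale, best_err = scale, err
    return best_scale


x = torch.randn(128)
print(mse_best_scale(x))
```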

| Diagrams |
| - |
| Before |
| <img width="886" height="595" alt="before"
src="https://github.com/user-attachments/assets/660d94c2-3ac8-4e05-9e9b-53d21145abac"
/> |
| After | 
<img width="1527" height="595" alt="after"
src="https://github.com/user-attachments/assets/51a0107e-3fbd-413c-a7a6-03ddc3612169"
/> |

## Changes ##
* Standardize reshaping using `flatten_for_calibration`
* This function reshapes all observed values to
`(num_observations, *qparams_shape, group_size)`
* This function removes the complexity associated with passing "reduce
dims" and trying to handle weights, activations, and attention states
all in the same function (see the sketch after this list)
* In the future, this function could be applied to the quantization
forward pass, although there's probably no need to do so outside of
standardization
* Implement `get_global_scale` on `Observer` base
* This function decouples minmax calculations from regular qparam
calculations (avoiding the double increment bug)
* This function enables the MSE observer to be used with FP4 global
scales
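
A rough sketch of the reshaping idea for the simple 2D group-quantized
weight case (illustrative only; the real `flatten_for_calibration` also
handles activations and attention states):

```python
import torch


def flatten_for_calibration_group(weight: torch.Tensor, group_size: int) -> torch.Tensor:
    """Reshape a (rows, cols) weight to (num_observations, *qparams_shape, group_size).

    For 2D group quantization: one observation, qparams of shape
    (rows, cols // group_size), and group_size values per group. Assumes
    cols is divisible by group_size, matching the dropped non-divisible support.
    """
    rows, cols = weight.shape
    assert cols % group_size == 0
    return weight.reshape(1, rows, cols // group_size, group_size)


w = torch.randn(16, 64)
flat = flatten_for_calibration_group(w, group_size=32)
print(flat.shape)             # torch.Size([1, 16, 2, 32])
min_vals = flat.amin(dim=-1)  # one value per (observation, row, group)
max_vals = flat.amax(dim=-1)
print(min_vals.shape)         # torch.Size([1, 16, 2])
```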

## Testing ##
* Added additional minmax tests which check exact values of scales. These
tests pass both on main and on this branch, demonstrating that minmax
observer behavior remains unchanged
* Added additional MSE tests which check exact values of MSE losses.
These tests pass both on main and on this branch, demonstrating that MSE
observer behavior remains unchanged
* Added FP4 MSE test

## Evaluation ##
```
nvfp4-static-minmax
| Tasks  |Version|Filter|n-shot| Metric |   |Value |   |Stderr|
|--------|------:|------|-----:|--------|---|-----:|---|------|
|mmmu_val|      0|none  |     0|mmmu_acc|↑  |0.6167|±  |   N/A|
```

```
nvfp4-minmax
| Tasks  |Version|Filter|n-shot| Metric |   |Value |   |Stderr|
|--------|------:|------|-----:|--------|---|-----:|---|------|
|mmmu_val|      0|none  |     0|mmmu_acc|↑  |0.6011|±  |   N/A|
```

---------

Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Dan Huang <[email protected]>
Co-authored-by: dhuangnm <[email protected]>
kylesayrs added a commit that referenced this pull request Oct 14, 2025
…servers (#1903)

ronantakizawa pushed a commit to ronantakizawa/llm-compressor that referenced this pull request Oct 15, 2025
…servers (vllm-project#1903)

cajeonrh pushed a commit to cajeonrh/llm-compressor that referenced this pull request Oct 16, 2025
…servers (vllm-project#1903)

zhanglei1172 pushed a commit to zhanglei1172/llm-compressor that referenced this pull request Oct 17, 2025
…servers (vllm-project#1903)
