Conversation

@Xia-Weiwen Xia-Weiwen commented Sep 25, 2025

Summary
This is a reopen of #3039 that retains backward compatibility; unit tests are deliberately left unchanged to verify that compatibility.
This PR merges the two definitions of get_block_size in torchao/quantization/pt2e/observer.py and torchao/quantization/observer.py into a single definition in torchao/quantization/utils.py.
It also removes duplicate definitions of granularities in the pt2e path.
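
For context, here is a minimal usage sketch of the consolidated helper. The import path follows this PR, and the expected values are derived from the granularity definitions shown in the review excerpts below, not verified against the merged commit:

from torchao.quantization.granularity import PerGroup, PerToken
from torchao.quantization.utils import get_block_size

input_shape = (8, 256)
# PerGroup: blocks of `group_size` elements along the last dimension
assert get_block_size(input_shape, PerGroup(group_size=64)) == (1, 64)
# PerToken: one block per token, spanning the full last dimension
assert get_block_size(input_shape, PerToken()) == (1, 256)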

Test plan
This utility function is used in many places, so it should already be covered by a wide range of unit tests, in particular:

pytest -sv test/quantization/pt2e/test_quantize_pt2e.py

@pytorch-bot pytorch-bot bot added the ci-no-td label Sep 25, 2025

pytorch-bot bot commented Sep 25, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3059

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 23c2bc1 with merge base 5e90c47:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Sep 25, 2025
@Xia-Weiwen Xia-Weiwen added the topic: not user facing Use this tag if you don't want this PR to show up in release notes label Sep 25, 2025
:func:`~torchao.quantization.quant_primitives.quantize_affine` for docs for
`block_size`
Attributes:
block_size (Tuple[int, ...]): The size of each quantization group
Contributor

nit: match tuple type

Collaborator Author

Thanks. Updated.

)
for i in range(len(block_size)):
assert input_shape[i] % block_size[i] == 0, (
f"Block size {block_size} does not divide input shape {input_shape}"
Contributor

@jerryzh168 jerryzh168 Sep 25, 2025

nit: Not all shapes in input shape {input_shape} are divisible by block size {block_size}

Collaborator Author

Thanks. Updated.
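
For reference, the loop presumably reads as follows after the suggested message is adopted (a sketch; not verified against the merged commit):

for i in range(len(block_size)):
    assert input_shape[i] % block_size[i] == 0, (
        f"Not all shapes in input shape {input_shape} are divisible by block size {block_size}"
    )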

return (1,) * (len(input_shape) - 1) + (input_shape[-1],)
elif isinstance(granularity, PerGroup):
assert input_shape[-1] % granularity.group_size == 0, (
f"Group size {granularity.group_size} does not divide input shape {input_shape}"
Contributor

@jerryzh168 jerryzh168 Sep 25, 2025

same for this one: Last dimension of input {input_shape[-1]} is not divisible by group size {granularity.group_size}

Collaborator Author

Thanks. Updated.
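
Likewise, the group-size check presumably becomes (again a sketch based only on the suggested wording):

assert input_shape[-1] % granularity.group_size == 0, (
    f"Last dimension of input {input_shape[-1]} is not divisible by group size {granularity.group_size}"
)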

@Xia-Weiwen Xia-Weiwen mentioned this pull request Sep 25, 2025
@Xia-Weiwen Xia-Weiwen marked this pull request as ready for review September 25, 2025 02:40
Comment on lines 36 to 37
from .. import Granularity, PerAxis, PerBlock, PerGroup, PerRow, PerTensor, PerToken
from ..utils import get_block_size
Contributor

this does feel a bit weird; does importing from the full path work here?

Collaborator Author

Thanks. Updated.
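
The full-path import would presumably look something like the following (assuming the granularity classes live in torchao.quantization.granularity and the helper in torchao.quantization.utils, per this PR):

from torchao.quantization.granularity import (
    Granularity,
    PerAxis,
    PerBlock,
    PerGroup,
    PerRow,
    PerTensor,
    PerToken,
)
from torchao.quantization.utils import get_block_size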

@Xia-Weiwen Xia-Weiwen merged commit 0064084 into pytorch:main Sep 25, 2025
18 checks passed