Check numerical equivalence / closeness between different kernel preferences #2651
Conversation
for i in range(1, len(kp_and_res)):
    kp, res = kp_and_res[i]
    self.assertTrue(
        compute_error(res, kp_and_res[0][1]) > 28,
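For context on the threshold: compute_error is an SQNR-style metric, so this assertion requires roughly 28 dB of signal-to-quantization-noise ratio between each result and the reference. A minimal sketch of such a metric (assuming compute_error returns SQNR in decibels) is:

```python
import torch

def sqnr_db(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Signal-to-quantization-noise ratio in decibels; larger means y is closer to x.
    # A threshold of 28 dB corresponds to roughly 4% relative error in the L2 norm.
    return 20 * torch.log10(torch.linalg.norm(x) / torch.linalg.norm(x - y))
```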
cc @vkuzo we don't have equivalence yet due to some differences in implementation. Do you think we should make the torchao quant primitives (choose_scale_float8 + quantize_float8) match the triton ones?
do we know what the differences are?
IMO we should also choose either TORCH or FBGEMM (but not AUTO) as the reference, and match others to the reference
yeah, see the PR summary for the differences
I can update the test to use TORCH as the reference
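A sketch of what that could look like, with the TORCH result as the fixed baseline (a hypothetical restructuring of the existing kp_and_res list, not the final test code):

```python
# Sketch only: assumes kp_and_res is a list of (KernelPreference, output) pairs
# and that one of the entries was produced with KernelPreference.TORCH.
results = dict(kp_and_res)
reference = results[KernelPreference.TORCH]  # fixed reference instead of index 0
for kp, res in results.items():
    if kp is KernelPreference.TORCH:
        continue
    assert compute_error(res, reference) > 28, f"sqnr too low for {kp} vs TORCH"
```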
    hp_value_ub: Optional[float] = None,
    reverse: bool = False,
I think we should make the callsites match instead of having a flag
yeah, this is not the final state, I was just trying things out to check whether this helps or not
Stacked PRs:
stack-info: PR: #2651, branch: jerryzh168/stack/15

Check numerical equivalence / closeness between different kernel preferences

Summary:

This PR checks that the different kernel preferences for Float8Tensor (AUTO, TORCH and FBGEMM) produce similar numerics.

The triton implementation and the torchao implementation are actually a bit different right now; we need to decide whether or not to fix that.

1. Difference in the quantize op

The main difference is that the triton implementation computes:

```
a_scale = MAX_FP8 / max_abs
a_scale = 1.0 / a_scale
a_fp8 = a * a_scale
```

while torch does:

```
a_scale = max_abs / MAX_FP8
a_fp8 = a / a_scale
```

The hp_value_lb and hp_value_ub settings are also slightly different.

triton choose-scale and quantize code: https://github.com/pytorch/FBGEMM/blob/a4286c01ef01dad435b2ec8798605127d3032cd8/fbgemm_gpu/experimental/gemm/triton_gemm/fp8_gemm.py#L2382-L2392

torchao choose-scale and quantize code:
https://github.com/pytorch/ao/blob/3c466f844684af0fb80014094f2ca8663881eb33/torchao/quantization/quant_primitives.py#L2183
https://github.com/pytorch/ao/blob/3c466f844684af0fb80014094f2ca8663881eb33/torchao/quantization/quant_primitives.py#L2283

2. (Potentially) a difference in the matrix multiplication ops

TORCH and AUTO/FBGEMM use different quantized mm ops.

Added a reverse option to bring the sqnr closer:

```
granularity: PerTensor() sizes: ((128,), 256, 128) kp: KernelPreference.AUTO tensor(inf, device='cuda:0', dtype=torch.bfloat16)
granularity: PerTensor() sizes: ((128,), 256, 128) kp: KernelPreference.FBGEMM tensor(inf, device='cuda:0', dtype=torch.bfloat16)
granularity: PerTensor() sizes: ((32, 128), 64, 256) kp: KernelPreference.AUTO tensor(inf, device='cuda:0', dtype=torch.bfloat16)
granularity: PerTensor() sizes: ((32, 128), 64, 256) kp: KernelPreference.FBGEMM tensor(inf, device='cuda:0', dtype=torch.bfloat16)
granularity: PerRow() sizes: ((128,), 256, 128) kp: KernelPreference.AUTO tensor(inf, device='cuda:0', dtype=torch.bfloat16)
granularity: PerRow() sizes: ((128,), 256, 128) kp: KernelPreference.FBGEMM tensor(inf, device='cuda:0', dtype=torch.bfloat16)
granularity: PerRow() sizes: ((32, 128), 64, 256) kp: KernelPreference.AUTO tensor(64.5000, device='cuda:0', dtype=torch.bfloat16)
granularity: PerRow() sizes: ((32, 128), 64, 256) kp: KernelPreference.FBGEMM tensor(68., device='cuda:0', dtype=torch.bfloat16)
```
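To see why the two formulations above can diverge, here is a minimal standalone sketch (not the torchao or FBGEMM kernels; per-tensor max-abs scaling follows the summary, and the hp_value_lb / hp_value_ub clamping is omitted) that quantizes the same input both ways:

```python
import torch

# Per-tensor fp8 quantization done both ways, as described in the summary.
MAX_FP8 = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3fn

def quantize_torch_style(a: torch.Tensor) -> torch.Tensor:
    # torch-style: scale = max_abs / MAX_FP8, then divide by the scale
    a_scale = a.abs().amax() / MAX_FP8
    return (a / a_scale).to(torch.float8_e4m3fn)

def quantize_triton_style(a: torch.Tensor) -> torch.Tensor:
    # triton-style: scale = MAX_FP8 / max_abs, take the reciprocal, then multiply
    a_scale = MAX_FP8 / a.abs().amax()
    a_scale = 1.0 / a_scale
    return (a * a_scale).to(torch.float8_e4m3fn)

a = torch.randn(1024, 1024, dtype=torch.bfloat16)
q_torch = quantize_torch_style(a).to(torch.float32)
q_triton = quantize_triton_style(a).to(torch.float32)
# x / s and x * (1 / s) round differently, so some elements can land in
# adjacent fp8 bins even though the two formulations are mathematically equal.
print("mismatched elements:", (q_torch != q_triton).sum().item())
```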
Test Plan:
python test/quantization/quantize_/workflows/float8/test_float8_tensor.py -k test_kernel_preference_numerical_equivalence
Reviewers:
Subscribers:
Tasks:
Tags: