mxfp8 emulated grouped gemm #2626
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2626
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit c2373b8 with merge base 0e00df3.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from e29fb79 to fa77af5.
Force-pushed from fa77af5 to 4fc1a2a.
add emulated mxfp8 grouped gemm stack-info: PR: #2626, branch: danielvegamyhre/stack/22
Force-pushed from 4fc1a2a to c2373b8.
    out_dtype: Optional[torch.dtype] = torch.bfloat16,
    block_size: int = 32,
) -> torch.Tensor:
    # Dequantize input
nit: claude might have gotten a little wordy w/ this one on the comments
I hand-wrote all of this actually, including the comments lol. Maybe I'm old fashioned, but I still hand-write everything. Maybe because I only use the free tiers, but I get pretty terrible results asking AI tools to help with this kind of work. I do use it for debugging assistance though, and the AI autocomplete is pretty good; it makes beautiful docstrings.
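For context on the `# Dequantize input` step in the diff quoted above, here is a minimal sketch of blockwise MX dequantization. It is not the PR's actual code: the function name and shapes are illustrative, and it assumes the per-block scales have already been decoded to a float dtype.

```python
import torch

def dequantize_mxfp8(data_fp8: torch.Tensor, scales: torch.Tensor, block_size: int = 32) -> torch.Tensor:
    # data_fp8: (M, K) tensor in torch.float8_e4m3fn.
    # scales:   (M, K // block_size) per-block scales, already decoded to a float dtype.
    M, K = data_fp8.shape
    blocks = data_fp8.to(torch.float32).reshape(M, K // block_size, block_size)
    # Broadcast each block's scale across its block_size elements, then restore the shape.
    hp = blocks * scales.to(torch.float32).unsqueeze(-1)
    return hp.reshape(M, K).to(torch.bfloat16)
```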
Stacked PRs:
mxfp8 emulated grouped gemm
add emulated mxfp8 grouped gemm to unblock mxfp8 MoE work while we wait on torch._scaled_grouped_mm support for mxfp8 in pytorch/pytorch#156806
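For readers unfamiliar with the approach, an emulated grouped GEMM simply dequantizes both operands to high precision and runs ordinary matmuls per group. The sketch below is illustrative, not the PR's actual API: it assumes activations are stacked along dim 0 with cumulative end offsets `offs`, expert weights form a 3D (G, K, N) stack, and both have already been dequantized to bf16 (e.g. with a blockwise helper like the one sketched earlier).

```python
import torch

def emulated_grouped_mm_bf16(
    a: torch.Tensor,       # (total_M, K) bf16 activations, groups stacked along dim 0
    b: torch.Tensor,       # (G, K, N) bf16 expert weights
    offs: torch.Tensor,    # (G,) cumulative end offsets of each group's rows in `a`
    out_dtype: torch.dtype = torch.bfloat16,
) -> torch.Tensor:
    out = torch.empty(a.shape[0], b.shape[-1], dtype=out_dtype, device=a.device)
    start = 0
    for g in range(b.shape[0]):
        end = int(offs[g])
        # A plain high-precision matmul per group stands in for the fused scaled kernel.
        out[start:end] = (a[start:end] @ b[g]).to(out_dtype)
        start = end
    return out
```

Once torch._scaled_grouped_mm gains mxfp8 support (pytorch/pytorch#156806), this dequantize-then-matmul path can be swapped for the fused kernel.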