mxfp8 emulated grouped gemm #2626


Merged

merged 1 commit into main from danielvegamyhre/stack/22 on Jul 30, 2025
Conversation

@danielvegamyhre (Contributor) commented Jul 29, 2025

Stacked PRs:

- mxfp8 emulated grouped gemm (#2626)

Add an emulated mxfp8 grouped gemm to unblock mxfp8 MoE work while we wait on torch._scaled_grouped_mm support for mxfp8 in pytorch/pytorch#156806.
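
For context, a hedged sketch of what an emulated mxfp8 grouped gemm can look like (hypothetical names; not this PR's actual implementation): each operand is fake-quantized to mxfp8, i.e. one power-of-two scale per 32-element block with values cast to float8_e4m3fn and back, and the per-group matmuls then run in high precision so MoE numerics can be validated before a native fp8 grouped kernel lands.

    import torch

    def emulated_mxfp8_grouped_mm_sketch(
        A: torch.Tensor,       # (M, K) stacked token activations
        B: torch.Tensor,       # (G, K, N) per-expert weights
        offs: torch.Tensor,    # (G,) assumed cumulative end-row offsets into A
        block_size: int = 32,  # mxfp8 scaling block size; assumes K % 32 == 0
        out_dtype: torch.dtype = torch.bfloat16,
    ) -> torch.Tensor:
        def fake_quant_mxfp8(x: torch.Tensor) -> torch.Tensor:
            # One power-of-two scale per `block_size` elements along the last
            # dim; round-trip through float8_e4m3fn to emulate mxfp8 precision.
            blocks = x.reshape(-1, block_size)
            amax = blocks.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12)
            # floor(log2(448)) == 8, so scaled blocks land near the fp8 range;
            # clamp to +-448 (e4m3 max) to saturate instead of overflowing.
            scale = torch.exp2(torch.floor(torch.log2(amax)) - 8)
            q = (blocks / scale).clamp(-448.0, 448.0).to(torch.float8_e4m3fn)
            return (q.to(x.dtype) * scale).reshape(x.shape)

        A_dq, B_dq = fake_quant_mxfp8(A), fake_quant_mxfp8(B)
        out = torch.empty(A.shape[0], B.shape[-1], dtype=out_dtype, device=A.device)
        start = 0
        for g in range(B.shape[0]):
            end = int(offs[g])
            # High-precision matmul per group on fake-quantized operands.
            out[start:end] = (A_dq[start:end] @ B_dq[g]).to(out_dtype)
            start = end
        return out

Once torch._scaled_grouped_mm gains mxfp8 support, a dequantize-then-matmul loop like this can be swapped for the native fp8 grouped kernel.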

pytorch-bot (bot) commented Jul 29, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2626

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit c2373b8 with merge base 0e00df3:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label Jul 29, 2025
@danielvegamyhre force-pushed the danielvegamyhre/stack/22 branch from e29fb79 to fa77af5 (July 29, 2025 19:45)
@danielvegamyhre added the topic: not user facing label Jul 29, 2025
@danielvegamyhre force-pushed the danielvegamyhre/stack/22 branch from fa77af5 to 4fc1a2a (July 29, 2025 19:50)
@danielvegamyhre (Contributor, Author) commented:

cc @vkuzo @drisspg for review

Commit: add emulated mxfp8 grouped gemm

stack-info: PR: #2626, branch: danielvegamyhre/stack/22

@danielvegamyhre force-pushed the danielvegamyhre/stack/22 branch from 4fc1a2a to c2373b8 (July 29, 2025 21:36)
Review thread on the diff (excerpt):

        out_dtype: Optional[torch.dtype] = torch.bfloat16,
        block_size: int = 32,
    ) -> torch.Tensor:
        # Dequantize input
Contributor (reviewer) commented:

nit: claude might have gotten a little wordy w/ this one on the comments

@danielvegamyhre (Contributor, Author) commented Jul 30, 2025:

I hand wrote all of this actually, including the comments lol. Maybe I'm old fashioned, but I still hand write everything. Maybe it's because I only use the free tiers, but I get pretty terrible results asking AI tools to help with this kind of work. I do use them for debugging assistance, though, and the AI autocomplete is pretty good; it makes beautiful docstrings.

@danielvegamyhre merged commit 9834869 into main Jul 30, 2025
20 checks passed
Labels: CLA Signed, topic: not user facing
3 participants