
Groupwise GEMM Full Kernel Tuning #4521


Open: jwfromm wants to merge 2 commits into main from export-D78537466

Conversation

@jwfromm (Contributor) commented on Jul 18, 2025

Summary:
This diff introduces extensive kernel tuning and heuristics for the CUTLASS version of DeepGEMM. With these heuristics, performance of the CUTLASS kernel is generally as good as or better than DeepGEMM for an equivalent workload, and the numerics are identical. This makes the CUTLASS kernels in FBGEMM quite a bit easier to use than DeepGEMM, since they are AOT compiled and require neither runtime tuning nor JIT compilation. We also expect substantially better performance for memory-bound workloads such as decode.
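For reference, here is a minimal sketch of the groupwise (blockwise-scaled) FP8 GEMM numerics these kernels implement, assuming DeepGEMM-style 1x128 activation groups and 128x128 weight blocks; the helper below is illustrative only and is not the FBGEMM or DeepGEMM API:

```python
# Illustrative reference only: dequantize blockwise-scaled FP8 operands and run
# the GEMM in float32. Block sizes follow DeepGEMM's convention (1x128 activation
# groups along K, 128x128 weight blocks); the real kernels never materialize the
# dequantized matrices.
import torch

def groupwise_gemm_reference(xq, wq, x_scale, w_scale, block=128):
    # xq:      [M, K] FP8 activations
    # wq:      [N, K] FP8 weights
    # x_scale: [M, K // block] float32 scales (one per 1 x block group)
    # w_scale: [N // block, K // block] float32 scales (one per block x block tile)
    x = xq.to(torch.float32) * x_scale.repeat_interleave(block, dim=1)
    w = wq.to(torch.float32) * w_scale.repeat_interleave(block, dim=0).repeat_interleave(block, dim=1)
    return (x @ w.t()).to(torch.bfloat16)
```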

The key techniques used to get this performance are limiting tile swizzling for compute-bound shapes, which reduces register pressure and dramatically improves performance, and performing an implicit input transpose for memory-bound shapes, which allows more efficient WGMMA tiling.
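As a rough sketch of the kind of shape-based dispatch this implies (the threshold, field names, and tile shapes below are hypothetical; the actual heuristics live in the tuned C++ CUTLASS templates):

```python
# Hypothetical shape-based config selection; illustrative only.
def pick_kernel_config(M, N, K):
    if M <= 128:
        # Decode-style GEMMs have small M, so the kernel is memory bound:
        # implicitly transpose the problem so the large dimension maps onto the
        # WGMMA N tile, and keep tiles small for occupancy.
        return {"implicit_transpose": True, "limit_swizzle": False, "tile_shape": (64, 128, 128)}
    # Compute-bound shapes: keep the natural layout, use large tiles, and limit
    # tile swizzling to reduce register pressure.
    return {"implicit_transpose": False, "limit_swizzle": True, "tile_shape": (128, 128, 128)}
```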

Benchmarking across a wide range of shapes shows the general performance benefit of these kernels in memory-bound domains and their competitiveness with DeepGEMM in compute-bound domains.
{F1980376140}
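As a back-of-envelope illustration of the memory-bound vs. compute-bound split (the example shapes here are arbitrary, not the benchmarked set):

```python
# Rough arithmetic intensity of an M x N x K GEMM with FP8 inputs (1 byte each)
# and a BF16 output (2 bytes), assuming each operand is read once.
def arithmetic_intensity(M, N, K):
    flops = 2 * M * N * K
    bytes_moved = M * K + N * K + 2 * M * N
    return flops / bytes_moved

print(arithmetic_intensity(1, 8192, 8192))     # ~2 FLOPs/byte: decode-style, memory bound
print(arithmetic_intensity(8192, 8192, 8192))  # ~4096 FLOPs/byte: compute bound
```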

Differential Revision: D78537466

Summary: This diff expands the templatization of FBGEMM's CUTLASS groupwise GEMM to include options to limit the swizzling of loaded tiles and to implicitly transpose tiles. With a modest amount of additional tuning, these options give decent performance in both compute-bound and memory-bound domains. In a follow-up diff, we will dramatically expand these heuristics for better performance.

Differential Revision: D78537461
@meta-cla bot added the cla signed label on Jul 18, 2025

netlify bot commented on Jul 18, 2025

Deploy Preview for pytorch-fbgemm-docs ready!

🔨 Latest commit: 11d27ab
🔍 Latest deploy log: https://app.netlify.com/projects/pytorch-fbgemm-docs/deploys/6879d58ffa9aff0008e3ec6a
😎 Deploy Preview: https://deploy-preview-4521--pytorch-fbgemm-docs.netlify.app

@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D78537466

Summary:
Pull Request resolved: pytorch#4521

Differential Revision: D78537466

@jwfromm force-pushed the export-D78537466 branch from 6f874ab to 11d27ab on July 18, 2025 at 05:03