
Add tracing option to quantize bench #3651

Closed

jwfromm wants to merge 4 commits

Conversation

jwfromm (Contributor) commented Jan 31, 2025

Summary:
X-link: https://github.com/facebookresearch/FBGEMM/pull/727

Adds support for the --trace option, which produces GPU traces for each benchmarked operator. Tracing only works internally, so in OSS builds we fall back to nullcontext.
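
A minimal sketch of the fallback pattern described above, assuming `torch.profiler` as an OSS stand-in for the internal tracer; `trace_context`, `trace_path`, and `op_name` are hypothetical names, not the actual quantize_bench API:

```python
import contextlib

import torch


def trace_context(enabled: bool, trace_path: str):
    """Return a GPU profiling context, or nullcontext when tracing is unavailable."""
    if not enabled:
        return contextlib.nullcontext()
    try:
        # OSS stand-in for the internal tracer: emit a Chrome trace per operator.
        return torch.profiler.profile(
            activities=[
                torch.profiler.ProfilerActivity.CPU,
                torch.profiler.ProfilerActivity.CUDA,
            ],
            on_trace_ready=lambda prof: prof.export_chrome_trace(trace_path),
        )
    except Exception:
        # No profiler backend available: benchmark without tracing.
        return contextlib.nullcontext()


# Usage inside the benchmark loop:
# with trace_context(args.trace, f"{op_name}_trace.json"):
#     op(*inputs)
```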

Reviewed By: jiawenliu64

Differential Revision: D68980020

facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D68980020


netlify bot commented Jan 31, 2025

Deploy Preview for pytorch-fbgemm-docs ready!

Name                 Link
🔨 Latest commit      ff0150b
🔍 Latest deploy log   https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/67a01814dbb5e500085ccf6f
😎 Deploy Preview     https://deploy-preview-3651--pytorch-fbgemm-docs.netlify.app

Summary:
X-link: facebookresearch/FBGEMM#695


This diff is the NVIDIA mirror of D68686266, which changes dynamic grouped gemm to return a tensor of shape [total_M, N] when zero_start_index_M isn't provided. We also add appropriate tests to make sure the behavior doesn't break going forward.
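
A reference sketch of the output contract described above; the helper name and the [N, K] weight layout are assumptions for illustration, not the FBGEMM kernel:

```python
import torch


def grouped_gemm_reference(xs, ws):
    """Reference semantics: each xs[g] is [M_g, K], each ws[g] is [N, K].
    Without zero_start_index_M, per-group outputs are concatenated along
    the row dimension into one [total_M, N] tensor, total_M = sum(M_g)."""
    return torch.cat([x @ w.t() for x, w in zip(xs, ws)], dim=0)


xs = [torch.randn(m, 64) for m in (3, 5, 2)]
ws = [torch.randn(32, 64) for _ in xs]
out = grouped_gemm_reference(xs, ws)
assert out.shape == (10, 32)  # total_M = 3 + 5 + 2 = 10, N = 32
```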

Reviewed By: jasonjk-park, jianyuh, jiawenliu64

Differential Revision: D68689077
…3639)

Summary:

X-link: facebookresearch/FBGEMM#714

D68797978 implemented a new feature that allowed partial rowwise quantization for jagged tensors, in the hopes of improving MoE performance. However, it operated on the wrong dimension (oops). This update shifts the quantization to the proper per-group non-zero rows.
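
An illustrative sketch of the corrected behavior, under assumed conventions (activations packed as [G, M, K], zero_start_index_M giving the count of valid rows per group, e4m3-style scaling); this is not the FBGEMM kernel itself:

```python
import torch


def partial_rowwise_scales(x: torch.Tensor, zero_start_index_M: torch.Tensor) -> torch.Tensor:
    """Compute rowwise scales only over each group's valid (non-zero) rows.

    x: [G, M, K] jagged-padded activations; zero_start_index_M[g] is the
    number of valid rows in group g. Padding rows keep scale 1.0 so they
    stay zero after quantize/dequantize.
    """
    G, M, _ = x.shape
    fp8_max = 448.0  # e4m3 max; an assumption for illustration
    scales = torch.ones(G, M, dtype=torch.float32)
    for g in range(G):
        m = int(zero_start_index_M[g])
        row_max = x[g, :m].abs().amax(dim=1).clamp(min=1e-12)
        scales[g, :m] = row_max / fp8_max  # dequant: x_q.float() * scale
    return scales
```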

Reviewed By: jasonjk-park, jiawenliu64

Differential Revision: D68872138
Summary:

X-link: facebookresearch/FBGEMM#724

When benchmarking quantize functions, we'd like the overhead to mimic e2e behavior as closely as possible. For example, weights should be quantized ahead of time. The current design of quantize_bench does not allow this.

To accommodate this, I've added a new optional preprocess phase that allows some transformations to be applied independently of benchmarking. Here we use it to prepare data for grouped gemm benchmarks so they more accurately capture e2e behavior.
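
A minimal sketch of the split between the one-time preprocess phase and the timed region; the class and method names are illustrative, not the actual quantize_bench interface, and simple int8 rowwise quantization stands in for the real quantize ops:

```python
import time

import torch


class GroupedGemmBench:
    def preprocess(self, ws):
        """One-time setup, excluded from timing: quantize weights ahead of
        time, as they would be in a real e2e deployment."""
        scales = [w.abs().amax(dim=1, keepdim=True).clamp(min=1e-12) / 127.0 for w in ws]
        wqs = [(w / s).round().to(torch.int8) for w, s in zip(ws, scales)]
        return wqs, scales

    def benchmark(self, xs, wqs, scales):
        """Timed region: only the per-iteration work is measured."""
        start = time.perf_counter()
        outs = [x @ (wq.float() * s).t() for x, wq, s in zip(xs, wqs, scales)]
        return outs, time.perf_counter() - start


bench = GroupedGemmBench()
ws = [torch.randn(32, 64) for _ in range(3)]   # [N, K] weights
xs = [torch.randn(m, 64) for m in (3, 5, 2)]   # [M_g, K] activations
wqs, scales = bench.preprocess(ws)             # runs once, not timed
outs, elapsed = bench.benchmark(xs, wqs, scales)
```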

Reviewed By: jiawenliu64

Differential Revision: D68964950
Summary:

X-link: facebookresearch/FBGEMM#727

Adds support for the --trace option, which produces GPU traces for each benchmarked operator. Tracing only works internally, so in OSS builds we fall back to nullcontext.

Reviewed By: jiawenliu64

Differential Revision: D68980020

facebook-github-bot (Contributor) commented:

This pull request has been merged in 79fcd5b.
