
Port autograd parts of lfilter to python #3954


Open
wants to merge 5 commits into main
Conversation

samanklesaria
Collaborator

PLEASE NOTE THAT THE TORCHAUDIO REPOSITORY IS NO LONGER ACTIVELY MONITORED. You may not get a response. For open discussions, visit https://discuss.pytorch.org/.


pytorch-bot (bot) commented Jul 1, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/audio/3954

Note: Links to docs will display an error until the docs builds have been completed.

❌ 8 New Failures

As of commit 3b3e0dd with merge base bf4e412:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

samanklesaria changed the title from "Port lfilter_core_loop wrapper to python" to "[WIP] Port autograd parts of lfilter to python" on Jul 7, 2025
samanklesaria marked this pull request as ready for review on July 12, 2025 at 00:02
samanklesaria requested a review from a team as a code owner on July 12, 2025 at 00:02
@@ -100,194 +100,19 @@ void lfilter_core_generic_loop(
}
}

Member


We've got lfilter_core_generic_loop left-over above in this file. Is it still needed?

Collaborator Author


Good point. That path was previously used when tensors were on a CUDA device but torchaudio had been built without CUDA support, so we should probably keep it around. I think we can register it with the dispatcher under the CompositeExplicitAutograd key, which, as I understand it, acts as a catch-all when no other dispatch key kicks in.
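For reference, a minimal sketch of that kind of registration (not the code in this PR): kernels registered under CompositeExplicitAutograd are used by the dispatcher whenever no backend-specific (CPU/CUDA) kernel is registered for the operator. The operator name and the exact signature of lfilter_core_generic_loop below are taken as assumptions from the surrounding file and may differ from torchaudio's actual schema.

```cpp
#include <torch/torch.h>
#include <torch/library.h>

// Assumed signature of the generic loop kept in this file (see the hunk above).
void lfilter_core_generic_loop(
    const torch::Tensor& input_signal_windows,
    const torch::Tensor& a_coeff_flipped,
    torch::Tensor& padded_output_waveform);

// Register the generic loop as a catch-all kernel: it only runs when no
// CPU/CUDA-specific kernel is registered for this operator.
TORCH_LIBRARY_IMPL(torchaudio, CompositeExplicitAutograd, m) {
  // "_lfilter_core_loop" is the assumed operator name for illustration.
  m.impl("_lfilter_core_loop", &lfilter_core_generic_loop);
}
```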
