Implement AME #651

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Open
mdbenito opened this issue Mar 2, 2025 · 0 comments · May be fixed by #655
Labels: new-method (Implementation of new algorithms for valuation or influence functions)

Comments

mdbenito (Collaborator) commented Mar 2, 2025

Introduced in Lin, Jinkun, Anqi Zhang, Mathias Lécuyer, Jinyang Li, Aurojit Panda, and Siddhartha Sen. “Measuring the Effect of Training Data on Deep Learning Predictions via Randomized Experiments.” In Proceedings of the 39th International Conference on Machine Learning, 13468–504. PMLR, 2022.

For the "exact" AME, the sampling scheme is very similar to Owen sampling, which makes the implementation trivial.
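
For reference, a minimal sketch of that two-level sampling scheme, assuming the procedure as described in the paper: draw a subset probability per experiment, then include each training point independently with that probability. The function name and the toy utility below are hypothetical, not pyDVL's API; the paper estimates the per-point effects with a LASSO regression on the inclusion indicators, which the toy example replaces with ordinary least squares.

```python
import numpy as np


def ame_masks(
    n_points: int,
    n_experiments: int,
    probabilities=(0.2, 0.4, 0.6, 0.8),
    seed: int = 0,
) -> tuple[np.ndarray, np.ndarray]:
    """Two-level sampling as described in Lin et al. (2022).

    For each experiment, draw a subset probability p from a small set of
    levels, then include every training point independently with
    probability p. Owen sampling follows the same pattern, except that
    the probability is drawn from [0, 1], hence the similarity noted above.

    Returns the boolean inclusion matrix of shape (n_experiments, n_points)
    and the vector of sampled probabilities.
    """
    rng = np.random.default_rng(seed)
    ps = rng.choice(np.asarray(probabilities), size=n_experiments)
    masks = rng.random((n_experiments, n_points)) < ps[:, None]
    return masks, ps


if __name__ == "__main__":
    n = 10
    masks, ps = ame_masks(n, n_experiments=2_000)

    # Stand-in utility: in the real method each row would mean retraining
    # the model on the selected subset and evaluating it. Here we use a
    # toy additive score so the regression target is easy to check.
    true_effects = np.linspace(0.0, 1.0, n)
    rng = np.random.default_rng(1)
    utilities = masks.astype(float) @ true_effects
    utilities += 0.01 * rng.standard_normal(len(masks))

    # The paper fits a LASSO regression of the utilities on the inclusion
    # indicators; plain least squares suffices for this toy, additive utility.
    design = np.column_stack([masks.astype(float), np.ones(len(masks))])
    coef, *_ = np.linalg.lstsq(design, utilities, rcond=None)
    print("estimated per-point effects:", np.round(coef[:-1], 2))
```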

mdbenito added the new-method label on Mar 2, 2025
mdbenito linked pull request #655 on Mar 6, 2025 that will close this issue
mdbenito modified the milestone from v0.11.0 to v0.10.1 on Apr 9, 2025