
Calculate the pass rate without changing pytest.skip to pytest.xfail in tests. #2957

Open
anmyachev opened this issue Dec 6, 2024 · 5 comments
Labels
enhancement New feature or request tests: ut

Comments

@anmyachev
Contributor

Such a replacement is made in tests quite often. Obviously, this approach will not work once our part is in-tree in Triton. What can be done about it?

@pbchekin @whitneywhtsang thoughts?

Part of #2030

@anmyachev anmyachev added enhancement New feature or request tests: ut labels Dec 6, 2024
@whitneywhtsang
Contributor

We probably want to change the title of this issue; we usually change pytest.skip to pytest.xfail, not the other way around.
I think it is reasonable to add the code below to CUDA-specific unit tests upstream. WDYT?

if is_xpu():
    pytest.xfail()

@anmyachev anmyachev changed the title Calculate the pass rate without changing pytest.xfail to pytest.skip in tests. Calculate the pass rate without changing pytest.skip to pytest.xfail in tests. Dec 6, 2024
@anmyachev
Contributor Author

> I think it is reasonable to add the code below to CUDA-specific unit tests upstream. WDYT?

I believe they will not want to run CUDA tests on the HIP backend, since this would increase CI time, and in some situations there is also a chance of hangs or aborts, which would break their entire CI.

@anmyachev
Contributor Author

In theory, we know exactly how many tests we skip for our own reasons, since all such cases are listed in the skipfiles. Maybe we can manually take this number into account when calculating the pass rate?
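A minimal sketch of such an adjusted pass-rate calculation (the function name and counting scheme are hypothetical, not existing tooling): skips accounted for in the skipfiles are excluded from the denominator, so only unplanned skips count against the rate.

```python
def adjusted_pass_rate(passed: int, failed: int, skipped: int, known_skips: int) -> float:
    """Pass rate that ignores tests we deliberately skip.

    known_skips is the number of skips listed in the skipfiles;
    it must not exceed the total number of skipped tests.
    """
    if known_skips > skipped:
        raise ValueError("known_skips cannot exceed total skips")
    # Treat deliberately skipped tests as if they were never collected.
    considered = passed + failed + (skipped - known_skips)
    if considered == 0:
        return 1.0
    return passed / considered
```

For example, with 90 passed, 5 failed, and 10 skipped tests of which 5 are in the skipfiles, the adjusted rate is 90 / 100 = 0.9 instead of 90 / 105.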

@whitneywhtsang
Contributor

> I think it is reasonable to add the code below to CUDA-specific unit tests upstream. WDYT?

> I believe they will not want to run CUDA tests on the HIP backend, since this would increase CI time, and in some situations there is also a chance of hangs or aborts, which would break their entire CI.

True, xfail would execute the test case while skip doesn't.
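That behavioral difference can be illustrated with a small self-contained sketch (a toy model of pytest's outcome semantics, not pytest itself): a skipped test body is never executed, while an xfailed test body still runs, so it can still hang or crash.

```python
def run_test(test, mode="run"):
    """Toy model of pytest outcome semantics for skip vs. xfail.

    mode="skip":  the test body is never executed.
    mode="xfail": the body runs; a failure becomes "xfailed",
                  an unexpected pass becomes "xpassed".
    """
    if mode == "skip":
        return "skipped"  # body not executed at all
    try:
        test()
    except Exception:
        return "xfailed" if mode == "xfail" else "failed"
    return "xpassed" if mode == "xfail" else "passed"

calls = []

def flaky_test():
    calls.append("ran")  # records that the body actually executed
    raise AssertionError("known failure on this backend")
```

Under mode="skip" the body never runs (nothing is appended to `calls`), while under mode="xfail" the body executes and the failure is recorded as expected.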

@whitneywhtsang
Contributor

> In theory, we know exactly how many tests we skip for our own reasons, since all such cases are listed in the skipfiles. Maybe we can manually take this number into account when calculating the pass rate?

My concern is that there may be newly skipped tests, and we could accidentally forget to account for them. How can we prevent that?
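One way to guard against that (a hypothetical sketch; the function and data shapes are illustrative, not existing tooling) is to diff the skips observed in a run against the skipfile entries and have CI flag any skip that is unaccounted for.

```python
def find_unaccounted_skips(observed_skips, skipfile_entries):
    """Return skipped test IDs that are not listed in the skipfiles.

    observed_skips:   test IDs reported as skipped in a run
    skipfile_entries: test IDs we deliberately skip (from the skipfiles)
    """
    return sorted(set(observed_skips) - set(skipfile_entries))

# Example: one skip in the run is not covered by the skipfiles.
observed = ["test_a", "test_b", "test_new"]
known = ["test_a", "test_b"]
unaccounted = find_unaccounted_skips(observed, known)
# A CI gate could then fail (or adjust the pass rate) if
# `unaccounted` is non-empty, so new skips are never silently missed.
```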
