
Conversation


@dvicini dvicini commented Nov 17, 2025

The latest conftest.py removes tests with unavailable variants, as explained in #1659. This makes sense, as the code previously used a somewhat inefficient way to skip tests with missing variants.

However, this seems to cause a minor issue with the latest pytest versions (>= 8.4.1): test files without any valid tests now produce a non-zero exit code (exit code 5, indicating that no tests were collected). This can happen, for example, when running test_optixdenoiser on a machine without a GPU, or test_polarizer when no polarization variant has been built.
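For reference, pytest exposes its exit codes through the `pytest.ExitCode` enum, where an empty collection maps to `NO_TESTS_COLLECTED` (value 5):

```python
# pytest reports its outcome via the pytest.ExitCode enum; an empty
# collection yields NO_TESTS_COLLECTED, whose value is 5. Many CI
# systems treat any non-zero exit code as a failure.
import pytest

print(int(pytest.ExitCode.NO_TESTS_COLLECTED))  # 5
print(int(pytest.ExitCode.OK))                  # 0
```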

This PR modifies the logic in conftest.py to mark such tests as skipped, using the same (fast) logic originally added in #1659. On my system, both with and without this PR, the test suite (LLVM only, -m "not slow") takes around 50s, so I do not observe a significant performance cost from having pytest report the skipped tests instead of removing them entirely.

It's a somewhat minor change, but it ensures that test runs without failures consistently return exit code 0 :) It may be worth double-checking performance on other systems, in case there are known challenges (e.g., on certain OSes).

@merlinND

I don't know about performance implications, but personally I like having the tests marked as 'skipped' (with a clear reason) instead of having them disappear 👍


wjakob commented Nov 18, 2025

In those few files, how about just adding a dummy test function? I hesitate to change the test discovery mechanism.


wjakob commented Nov 18, 2025

Just a quick question: we can't really know ahead of time which variants the user has. So there always needs to be a way of generating tests at run time. I have not reviewed the PR in detail but don't fully understand how this can be compatible with marking tests as skipped.


dvicini commented Nov 19, 2025

I only modified the pytest_collection_modifyitems function, which is called after the tests have been collected. pytest allows marking tests as skipped at that point, instead of dropping them from the collection. The mechanism that decides which tests can and cannot run is exactly identical to before.
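For illustration, here is a minimal, hypothetical sketch of this approach in a conftest.py. It assumes the variant is passed to tests as a parameter named `variant`, and `AVAILABLE_VARIANTS` stands in for Mitsuba's actual variant query; these names are placeholders, not the real identifiers from the PR:

```python
# Hypothetical sketch: in pytest's pytest_collection_modifyitems hook,
# attach a skip marker to tests whose requested variant is unavailable
# instead of removing them from `items`. pytest then reports these tests
# as skipped (with a reason) and still exits with code 0.
import pytest

AVAILABLE_VARIANTS = {"llvm_ad_rgb"}  # placeholder for the real variant list


def pytest_collection_modifyitems(config, items):
    skip_marker = pytest.mark.skip(reason="unsupported variant")
    for item in items:
        # Parametrized tests expose their parameters via `item.callspec`.
        callspec = getattr(item, "callspec", None)
        variant = callspec.params.get("variant") if callspec else None
        if variant is not None and variant not in AVAILABLE_VARIANTS:
            item.add_marker(skip_marker)
```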
