Automate testing of specific cases #698
base: main
Conversation
Why is this important?

While the weekly/comprehensive tests cover many cases, there's really no way to test truly comprehensively. In #697, we saw that while errors in the weekly tests had been resolved, some use cases of parameters had gone untested, in particular those noted in #516 (comment). In addition to resolving #516, these changes would address the "Challenges to easily displaying test output" section of #634 (comment).

Current TODO list:
A path forward here is to make a list of specific cases that need to be tested, and then, as much as possible, combine them into a minimal set of cfgs. That is, we don't need to automate every "min-case" cfg separately.
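The combining step above can be sketched as a greedy set-cover pass. The cfg and case names below are hypothetical placeholders, not the actual test cfgs:

```python
def minimal_cfg_set(cases_needed, cfg_coverage):
    """Greedily pick cfgs until every required case is covered.

    cfg_coverage maps a cfg name to the set of cases it exercises.
    Greedy set cover is not guaranteed optimal, but it is a simple
    way to combine many "min-case" cases into a small set of cfgs.
    """
    uncovered = set(cases_needed)
    chosen = []
    while uncovered:
        # Pick the cfg covering the most still-uncovered cases.
        best = max(cfg_coverage, key=lambda c: len(cfg_coverage[c] & uncovered))
        gained = cfg_coverage[best] & uncovered
        if not gained:
            raise ValueError(f"No cfg covers: {sorted(uncovered)}")
        chosen.append(best)
        uncovered -= gained
    return chosen


# Hypothetical example: three candidate cfgs covering five cases.
coverage = {
    "min_case_a.cfg": {"ts_only", "climo_only"},
    "min_case_b.cfg": {"climo_only", "diags_depend_on_climo"},
    "min_case_c.cfg": {"tc_analysis", "ilamb"},
}
needed = {"ts_only", "climo_only", "diags_depend_on_climo", "tc_analysis", "ilamb"}
print(minimal_cfg_set(needed, coverage))
```

The greedy choice here is a sketch only; in practice the real constraint is which parameter combinations each cfg can exercise together.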
Current test analysis (2025-04-04)

The "min-case" cfgs: understanding what cases (e.g., parameter combinations) are currently covered.

"min_case" cfgs

Run on v3 data, unless otherwise noted.
"weekly" cfgs

Both comprehensive_v2 and comprehensive_v3:
Differences:
Which "min_case" cases are already covered in the weekly cfgs?
The following cases are tested by the weekly cfgs, but there is an important caveat: in some sense, the "min_case" cfgs are testing that the sets also do not depend on any other subtask. That is, if we're running only sets that depend on climo, zppy shouldn't care if the ts task is absent from the cfg. That is obviously not tested by a comprehensive cfg that runs as much as possible.
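As a rough illustration of the caveat above: a min-case cfg that runs only climo-dependent sets omits the ts task entirely. The fragment below is hypothetical (it is not one of the actual test cfgs, and zppy's real parser may differ), but it shows the shape of the check:

```python
import configparser

# Hypothetical min-case cfg: climo-dependent work only, no [ts] section.
MIN_CASE_CFG = """
[default]
case = v3.LR.historical_0051

[climo]
years = "1985:1989:5",

[e3sm_diags]
sets = "lat_lon",
"""

parser = configparser.ConfigParser()
parser.read_string(MIN_CASE_CFG)

# The point of the min-case test: zppy should run this cfg successfully
# even though no ts task is present.
assert "ts" not in parser.sections()
print(parser.sections())
```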
That leaves these cases as untested in weekly testing:
Given the number of cases involved:

What are other improvements we can make to the tests?
It may be best to test the
I don't think this will actually help that much.
Summary
Refactor image check testing to be more robust.
Objectives:
- Run zppy jobs not just for the comprehensive/weekly tests, but for all the min-case tests as well. Initial implementation ideas are in Automate zppy tests #520.
- Make the image_check_failures directories easily accessible.

Issue resolution:
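One way the image_check_failures objective above could look, as a sketch only (the paths, the `*_diff.png` naming, and the directory layout are assumptions, not the actual zppy test layout):

```python
import shutil
from pathlib import Path


def collect_image_check_failures(diff_dir: Path, out_dir: Path) -> list:
    """Copy diff images produced by a failed image check into a single
    image_check_failures directory, so they are easy to browse.

    Assumes (hypothetically) one *_diff.png per mismatched image,
    possibly nested under per-set subdirectories.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for diff in sorted(diff_dir.rglob("*_diff.png")):
        dest = out_dir / diff.name
        shutil.copy2(diff, dest)
        copied.append(dest)
    return copied
```

A flat output directory is assumed here for simplicity; keeping the per-set subdirectory structure would be an equally reasonable design.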
Select one: This pull request is...
Big Change
1. Does this do what we want it to do?
Required:
If applicable:
2. Are the implementation details accurate & efficient?
Required:
If applicable:
zppy/conda, not just an import statement.

3. Is this well documented?
Required:
4. Is this code clean?
Required:
If applicable: