What is the Problem
We are drowning in open issues going back to 2013, with no reliable means of seeing if and how later changes to the code base affect open issues. Similarly, the significant number of tests locked in issue comments is not used to help us detect regressions. The question therefore is how to best utilise the 100+ tests that have been written over the years in response to user issues. I can see three technical means of going about this, for JUnit and XQSuite tests.
Known-issue test folder
Ask users to submit tests into a folder which is executed by our CI in a separate step that has the allowed-to-fail flag set. In pseudo code:
ant test backlog
This would run only the tests in the test/src/backlog folder; on Travis we would call it e.g. like this:
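A minimal sketch of the corresponding .travis.yml entry, assuming an environment variable is used to mark the extra build step (the variable name is illustrative):

matrix:
  include:
    # additional job that runs only the backlog tests
    - env: BACKLOG=true
      script: ant test backlog
  allow_failures:
    # a failing backlog job must not break the overall build
    - env: BACKLOG=true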
Once a backlog test passes, it should be pulled into the main test-suite and the corresponding issue closed.
Pros:
clear to submitters where their code should go
tests don’t need to be written expecting failure, but can contain the expected value from the get-go.
Cons:
An NPE-causing test could potentially remain in the backlog folder marked as %pending indefinitely.
Our test reporter on Travis makes it very hard to see if and how individual tests failed, so finding them will be needle-in-a-haystack work.
Requires active maintenance when moving tests between test-suites
Enable %test:warn aka yellow tests for XQSuite tests
A yellow test, in addition to fail=red and pass=green, is similar to @Ignore in JUnit 4. This is different from %test:pending: pending tests are not executed at all, whereas warnings are run but don’t halt execution of the test-suite on failure.
(: more tests here :)
declare
    %test:warn
    %test:assertEquals(3)
function ao:wrap-element-sequence() {
    (: ao:wrap#1 and ao:get-i-elements#1 are assumed to be defined elsewhere in the module :)
    let $xml := <root><i/><i/><i/></root>
    return $xml/node()
        => for-each(ao:wrap#1)
        => for-each(ao:get-i-elements#1)
        => count()
};
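For contrast, a sketch of the same test marked as pending instead (the function name suffix is illustrative); XQSuite would skip it entirely rather than run it and tolerate the failure:

declare
    %test:pending
    %test:assertEquals(3)
function ao:wrap-element-sequence-pending() {
    (: identical body to the yellow test above; with %test:pending it is never executed :)
    let $xml := <root><i/><i/><i/></root>
    return $xml/node()
        => for-each(ao:wrap#1)
        => for-each(ao:get-i-elements#1)
        => count()
};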
The main challenge would be how to report these warnings.
Once a warning is passing, the annotation should be removed and the test will be run as part of the regular test-suite.
Pros:
warnings already test the desired outcome
easy to put tests into a logical location (i.e. warnings about, say, map syntax would be added alongside other map tests)
more power to XQSuite test authors outside of eXist’s core repo
Cons:
the approach is largely abandoned in most test-suites (including JUnit 5)
suffers from the same needle-in-a-haystack problem as the previous option
Standard Approach with or without new annotation
Rewrite tests from issue comments to actually expect the erroneous output, and add them to our test-suite. A green test-suite would then mean that all known bugs are still accounted for; accidental fixes would show up as red tests, which require investigation.
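Taking the illustrative pipeline from the yellow-test example above, and assuming the underlying bug makes it yield 2 instead of the correct 3, such a rewritten test would assert the known-wrong value (the function name suffix is illustrative):

declare
    (: deliberately asserts the erroneous output; this test turns red as soon as the bug is fixed :)
    %test:assertEquals(2)
function ao:wrap-element-sequence-known-bug() {
    let $xml := <root><i/><i/><i/></root>
    return $xml/node()
        => for-each(ao:wrap#1)
        => for-each(ao:get-i-elements#1)
        => count()
};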
Pros:
requires no code changes to how and where we perform tests
while counterintuitive to most reporters, it is widely practiced elsewhere
Cons:
without a new annotation, this requires a large-scale rewrite of many tests
With new annotation %test:expectFail
To avoid large-scale rewrites of prepared tests, we could facilitate this by adding a %test:expectFail annotation.
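A sketch of what this could look like, assuming the proposed annotation existed; the function body merely stands in for a buggy code path, and the names and values are illustrative:

declare
    %test:expectFail
    %test:assertEquals(2) (: the correct expected value :)
function ao:known-bug() {
    (: stand-in for the buggy computation: the correct result is 2, but the bug currently yields 3 :)
    1 + 2
};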
Unlike %test:pending, these tests would still be executed, and count as green while the faulty output is generated, i.e. anything but 2 in the example above.