Include open issue test in main repo #2393

Open
duncdrum opened this issue Jan 15, 2019 · 1 comment
Labels: discuss (ask for feedback), enhancement (new features, suggestions, etc.)

duncdrum commented Jan 15, 2019

What is the Problem

We are drowning in open issues going back to 2013, with no reliable means of seeing if and how later changes to the code base affect open issues. Similarly, the significant number of tests locked in issue comments is not used to help us detect regressions. The question therefore is how to best utilise the 100+ tests that have been written over the years in response to user issues. I can see three technical means of going about this, for JUnit and XQSuite tests.

Known-issue test folder

Ask users to submit tests into a folder which is executed by our CI in a separate step that has the allowed-to-fail flag set. In pseudo code:

ant test backlog

This step runs only the tests in the test/src/backlog folder; on Travis we would call it e.g. like this:

matrix:
  include:
    - jdk: openjdk8
    - jdk: openjdk9
    - jdk: openjdk10
    - jdk: openjdk11
    - jdk: oraclejdk11
    # extra job running only the backlog tests
    - jdk: openjdk8
      env: BACKLOG=true
      script: ant test backlog
  allow_failures:
    # failures in the backlog job do not break the build
    - env: BACKLOG=true

Once a backlog test passes, it should be pulled into the main test-suite and the corresponding issue closed.
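
For illustration, a backlog test could be an ordinary XQSuite module that simply asserts the correct, expected value (a minimal sketch; the module name, test name, and values below are made up, not taken from a real issue):

xquery version "3.1";

(: hypothetical module placed under test/src/backlog: it asserts the
   *correct* value from the start, fails in the allowed-to-fail backlog
   step while the bug is open, and is promoted into the main test-suite
   once it passes. The body stands in for whatever the issue reproduces. :)
module namespace backlog = "http://exist-db.org/xquery/test/backlog";

declare namespace test = "http://exist-db.org/xquery/xqsuite";

declare
  %test:assertEquals(3)
function backlog:wrap-element-sequence () {
  let $xml := <root><i/><i/><i/></root>
  return count($xml/i)
};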

Pro:

  • clear for submitters where to put their code
  • tests don’t need to be written expecting failure, but can contain the expected value from the get-go

Con:

  • an NPE-causing test could potentially remain in the backlog folder, marked as %pending, indefinitely
  • our test reporter on Travis makes it very hard to see if and how individual tests failed, so this will be needle-in-the-haystack work
  • Requires active maintenance when moving tests between test-suites

Enable %test:warn aka yellow tests for XQSuite tests

A yellow test, in addition to fail=red and pass=green, is similar to @Ignore in JUnit 4. This is different from %test:pending: pending tests are not executed at all, whereas warning tests are run but don’t halt execution of the test-suite on failure.

(: more tests here :)

(: runs as a "yellow" test: a failure is reported as a warning
   instead of halting the test-suite :)
declare
  %test:warn
  %test:assertEquals(3)
function ao:wrap-element-sequence () {
  let $xml := <root><i/><i/><i/></root>
  return $xml/node()
    => for-each(ao:wrap#1)
    => for-each(ao:get-i-elements#1)
    => count()
};

The main challenge would be how to report these.

<testsuites>
    <testsuite package="http://exist-db.org/xquery/test/bang" timestamp="2012-10-16T10:30:12.966+02:00" failures="0" tests="1" warnings="1" time="PT0.046S">
        <testcase name="constructor" class="ao:other"/>
        <warnings failures="1" tests="1">
            <testcase name="functions3" class="ao:wrap-element-sequence">
                <failure message="assertEquals failed." type="failure-error-code-1">2 = 3</failure>
                <output>2</output>
            </testcase>
        </warnings>
    </testsuite>
</testsuites>

Once a warning test passes, the annotation should be removed and the test runs as part of the regular test-suite.

Pros:

  • warnings already test the desired outcome
  • easy to put tests into a logical location (i.e. warnings about e.g. map syntax would be added alongside the other map tests)
  • more power to XQSuite test authors outside of eXist’s core repo

Cons:

  • the approach has largely been abandoned in most test frameworks (including JUnit 5)
  • suffers from the same needle-in-the-haystack problem as the previous option

Standard Approach with or without new annotation

Rewrite the tests from issue comments to actually expect the erroneous output, and add them to our test-suite. A green test-suite would then mean that all known bugs are still accounted for; accidental fixes would show up as red tests, which require investigation.
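
For illustration (the function name and the value 2 are hypothetical, not taken from an actual issue), such a rewritten test asserts the value the engine currently produces, not the correct one:

xquery version "3.1";

module namespace kb = "http://exist-db.org/xquery/test/known-bugs";

declare namespace test = "http://exist-db.org/xquery/xqsuite";

(: hypothetical known bug: assume the engine currently returns 2 here,
   although the correct result would be 3. Asserting the buggy value keeps
   the suite green; an accidental fix turns this test red and prompts
   investigation. :)
declare
  %test:assertEquals(2)
function kb:known-wrong-count () {
  let $xml := <root><i/><i/><i/></root>
  return count($xml/i)
};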

Pro:

  • requires no code changes to how and where we perform tests
  • while counterintuitive to most reporters, it is widely practiced elsewhere

Cons:

  • without a new annotation, this requires a large-scale rewrite of many tests

With new annotation %test:expectFail

To avoid large-scale rewrites of prepared tests, we could facilitate this by adding a %test:expectFail annotation.

(: with %test:expectFail the test is green only while the result
   still differs from the asserted value 2 :)
declare
  %test:expectFail
  %test:assertEquals(2)
function ef:test-sum () {
  1 + 1
};

Unlike %test:pending, these tests are executed, and they are green when faulty output is generated, i.e. anything but 2 in the above example.

<testsuites>
    <testsuite package="http://exist-db.org/xquery/test/bang" timestamp="2012-10-16T10:30:12.966+02:00" failures="0" tests="1" expected="1" time="PT0.046S">
        <testcase name="constructor" class="ef:test-sum"/>
    </testsuite>
</testsuites>
duncdrum added the enhancement and discuss labels Jan 15, 2019
adamretter commented

I think "known issue test folder" is the only approach which will work
