add file content #577

Open · wants to merge 86 commits into master

Conversation

BrentBlanckaert
Collaborator

No description provided.

@BrentBlanckaert BrentBlanckaert self-assigned this Jan 11, 2025
@BrentBlanckaert BrentBlanckaert added the enhancement New feature or request label Jan 11, 2025
@BrentBlanckaert
Collaborator Author

BrentBlanckaert commented Feb 18, 2025

Documentation for usage of files

This documentation discusses all the changes made regarding the I/O of a test.

Files

files describes files that are added as input for a test; Dodona will provide the student with these files. This is done as follows:

files:
  - name: "animal.txt"
    url: "media/workdir/animal.txt"

files is not to be confused with file, which specifies the contents and location of a file that the student's code should generate. This is done as follows:

file:
  content: "animal.txt" # a file containing the expected content
  location: "media/workdir/animal.txt" # the location where the file should be generated
  oracle: ...

There are several issues with this:

  • The names files and file are easily confused
  • In content, you had to reference a file and couldn't specify the content directly
  • You could have multiple input files but only one output file
  • The naming and formatting needed more consistency

The name files was changed to input_files and file was changed to output_files. The name url in files and location in file were also changed to path. You can now also specify multiple files for the output files. An example is the following:

input_files:
  - name: "animal.txt"
    path: "media/workdir/animal.txt"
  - name: "human.txt"
    path: "media/workdir/human.txt"
output_files:
  data: 
    - content: "lion" 
      path: "media/workdir/animal.txt" 
    - content: "tim"
      path: "media/workdir/human.txt"
  oracle: ....
output_files:
  - content: "animal" 
    path: "media/workdir/animal.txt" 
  - content: "human"
    path: "media/workdir/human.txt"

You can still specify paths in the content section of output files.
We can distinguish between actual content and the path to a file that contains it by using !path.
An example is the following:

output_files:
  data: 
    - content: !path "animal.txt" 
      path: "media/workdir/animal.txt" 
    - content: "Humans can make music and a warm meal"
      path: "media/workdir/human.txt"
  oracle: ....

So content will now expect the actual content by default and not a path to load it from.
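Under the hood, such a tag can be implemented with a custom PyYAML constructor, which is how the DSL parser registers its other tags. Below is a minimal sketch: the `PathString` wrapper and `_path_string` names are illustrative and not necessarily TESTed's actual code.

```python
import yaml  # PyYAML, already used for parsing the DSL


class PathString(str):
    """Marks a string as a path to a file rather than literal content."""


def _path_string(loader: yaml.Loader, node: yaml.Node) -> PathString:
    # Build the scalar as usual, then wrap it so later stages know the
    # value should be loaded from a file instead of used verbatim.
    return PathString(loader.construct_scalar(node))


loader = yaml.SafeLoader
yaml.add_constructor("!path", _path_string, loader)

parsed = yaml.load('content: !path "animal.txt"', Loader=loader)
assert isinstance(parsed["content"], PathString)
assert str(parsed["content"]) == "animal.txt"
```

Plain strings keep loading as `str`, so only values explicitly tagged with `!path` are treated as file references.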

For the feedback, it's all still a bit fuzzy, because right now all the content is dumped one after another. Potential solutions:

  • Usage of tabs in the solutions
  • Only show the file names, and show the content when clicking on them

Most of that will probably need to happen on Dodona itself.

Stdin, Stdout and Stderr

Currently, you have to specify the full contents of the stdin, stdout and stderr channels. This can get ugly when there is a lot of text, which is why the use of files is also very beneficial here.

Example for Stdin:

stdin: !path "media/workdir/animal.txt"

The usage of !path is also present here. This is consistent with what was discussed above.

Under the hood, stderr and stdout are both just textual output channels, so they have the exact same functionality.
If they are a dictionary, they used to expect the key data, but now you can also use content, which is more consistent with the rest. Just like before, you can use !path to specify that you want to use a file instead of directly specifying the content.
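As a rough sketch of that rule, the two dictionary keys and the bare-string shorthand can all be normalized to one shape. The function name and the exact shape are assumptions, not TESTed's real code:

```python
# Normalize a textual output channel (stdout/stderr) spec:
# a bare string, the legacy "data" key, and the new "content" key
# all end up as a dict with a "content" key.
def normalize_text_channel(spec):
    if isinstance(spec, str):
        return {"content": spec}
    if isinstance(spec, dict):
        normalized = dict(spec)
        if "data" in normalized:  # legacy key, kept for compatibility
            normalized["content"] = normalized.pop("data")
        return normalized
    raise TypeError(f"unsupported stdout/stderr spec: {spec!r}")


assert normalize_text_channel("hello\n") == {"content": "hello\n"}
assert normalize_text_channel({"data": "hello\n"}) == {"content": "hello\n"}
```

Extra keys such as config or oracle would simply pass through unchanged in this scheme.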

Examples for Stdout:

stdout: !path "media/workdir/animal.txt"

stdout: 
  content: !path "media/workdir/animal.txt"
  config: ...
  oracle: ...

@BrentBlanckaert BrentBlanckaert marked this pull request as ready for review February 25, 2025 16:20
@BrentBlanckaert
Collaborator Author

Documentation for new extensions (updated version)

This documentation discusses all the changes made regarding the I/O of a test.

Input files

Input files (currently files in TESTed) are a list of files defined in the DSL. Each file can be represented using two fields: path (required) and content. A few examples illustrate what these do:

First example

tabs:
- tab: counting
  contexts:
  - testcases:
    - expression: count_words('fish.txt', 'sharks')
      return: 1
      input_files:
        - path: "fish.txt"
          content: "There are sharks in the water!"
        - path: "mammal.txt"
          content: "There are tigers in the water!"

A path must ALWAYS be provided. This is the string that will be shown to the student in the above expression. The path field (originally called name) contains a path relative to the working directory. In the case of expression, statement, or descriptions, an exact match will be checked. In the expression, that string becomes a hyperlink pointing to the provided content.

The presence of the content field also indicates that the file does not yet exist in the working directory. In this case, a file will still need to be created in the working directory.

The feedback for this example is the following:

First example feedback

{"command": "start-judgement"}
{"title": "counting", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "count_words(&#x27;<a class=\"file-link\" target=\"_blank\">fish.txt</a>&#x27;, &#x27;sharks&#x27;)", "format": "html"}, "command": "start-testcase"}
{"expected": "1", "channel": "return", "command": "start-test"}
{"generated": "1", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"message": {"description": "<div class='contains-file''><p>File: <a class=\"file-link\" target=\"_blank\"><span class=\"code\">mammal.txt</span></a></p></div>", "format": "html"}, "command": "append-message"}
{"data": {"statements": "count_words('fish.txt', 'sharks')", "files": [{"path": "fish.txt", "content": "There are sharks in the water!"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

The input files that appear are provided in data, and the start of the HTML needed to make a hyperlink is provided in the description. Dodona will use this to create its own hyperlink or pop-up.

Second example

tabs:
- tab: counting
  contexts:
  - testcases:
    - expression: count_words('fish.txt', 'sharks')
      return: 1
      input_files:
        - path: "fish.txt"

In this case, the file is expected to already be present in the working directory. Dodona will make the hyperlink in the description point to a file provided in the evaluation folder with the same path provided in path.
The feedback would be the following:

Second example feedback

{"command": "start-judgement"}
{"title": "counting", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "count_words(&#x27;<a class=\"file-link\" target=\"_blank\">fish.txt</a>&#x27;, &#x27;sharks&#x27;)", "format": "html"}, "command": "start-testcase"}
{"expected": "1", "channel": "return", "command": "start-test"}
{"generated": "1", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"statements": "count_words('fish.txt', 'sharks')", "files": [{"path": "fish.txt"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

The main difference is that only path is present in data.files.

Output files

In TESTed, you were only able to provide a single output file using file. This has been extended to support multiple files.
An output_file contains both a path and a content field, both of which are required.

  • The path field specifies the path to the file that the student should have generated (in the working directory).
  • The content field contains either the expected contents of the generated file or a path to a real file containing the expected contents (located in the evaluation folder).

First example

tabs:
- tab: output_file
  contexts:
  - testcases:
    - expression: genereer('origineel_tekst.txt', 'text', 3)
      return: true
      output_files:
        - content: !path "files_tests/tekst1.txt"
          path: "text1.txt"
        - content: !path "files_tests/tekst2.txt"
          path: "text2.txt"
        - content: "Created using write mode.\n3\n"
          path: "text3.txt"
      input_files:
        - path: "origineel_tekst.txt"
          content: "Created using write mode.\n"

The feedback provided for this test would look something like the following:

{"command": "start-judgement"}
{"title": "output_file", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "genereer(&#x27;<a class=\"file-link\" target=\"_blank\">origineel_tekst.txt</a>&#x27;, &#x27;text&#x27;, 3)", "format": "html"}, "command": "start-testcase"}
{"expected": "Created using write mode.\n1\n", "channel": "file: text1.txt", "command": "start-test"}
{"generated": "Created using write mode.\n1\n", "status": {"enum": "correct"}, "command": "close-test"}
{"expected": "Created using write mode.\n2\n", "channel": "file: text2.txt", "command": "start-test"}
{"generated": "Created using write mode.\n2\n", "status": {"enum": "correct"}, "command": "close-test"}
{"expected": "Created using write mode.\n3\n", "channel": "file: text3.txt", "command": "start-test"}
{"generated": "Created using write mode.\n3\n", "status": {"enum": "correct"}, "command": "close-test"}
{"expected": "True", "channel": "return", "command": "start-test"}
{"generated": "True", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"statements": "genereer('origineel_tekst.txt', 'text', 3)", "files": [{"path": "origineel_tekst.txt", "content": "Created using write mode.\n"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

For each output file, the feedback includes a pair showing both the expected and the generated content. The channel field in the expected output will be labeled as file: <path of output file>. The full contents of both files are included directly in the feedback, as there is currently no support for returning files themselves instead of their contents.
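The comparison described above can be sketched as follows. The helper name and the return shape are illustrative only; the real implementation lives inside TESTed's oracles:

```python
import pathlib
import tempfile


# Read the file the student generated in the working directory and compare
# it to the expected content, reporting on a channel labeled "file: <path>".
def check_output_file(workdir, path, expected_content):
    generated = (pathlib.Path(workdir) / path).read_text()
    return {
        "channel": f"file: {path}",
        "expected": expected_content,
        "generated": generated,
        "correct": generated == expected_content,
    }


with tempfile.TemporaryDirectory() as workdir:
    # Simulate a file the student's code would have written.
    (pathlib.Path(workdir) / "text3.txt").write_text("Created using write mode.\n3\n")
    result = check_output_file(workdir, "text3.txt", "Created using write mode.\n3\n")

assert result["channel"] == "file: text3.txt"
assert result["correct"]
```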

Stdin

In TESTed, stdin could only be a string or another basic type. I also expanded the capabilities there.
If stdin is a string, it is equivalent to using the field content. Alternatively, a path (relative to the working directory) can be provided. If only path is provided, with no content, then it is assumed that a file is present in the evaluation folder at the location given by path.
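These rules can be summarized in a small sketch (names are illustrative, not TESTed's actual implementation):

```python
import pathlib
import tempfile


# Resolve a stdin spec to the text fed to the submission:
# - a bare string is the content itself (equivalent to {"content": ...});
# - a mapping with "content" uses that directly;
# - a mapping with only "path" reads the file from the evaluation folder.
def resolve_stdin(spec, evaluation_folder):
    if isinstance(spec, str):
        return spec
    if "content" in spec:
        return spec["content"]
    return (pathlib.Path(evaluation_folder) / spec["path"]).read_text()


assert resolve_stdin("hello", ".") == "hello"
assert resolve_stdin({"content": "hello"}, ".") == "hello"

with tempfile.TemporaryDirectory() as folder:
    pathlib.Path(folder, "hello.txt").write_text("hello\n")
    assert resolve_stdin({"path": "hello.txt"}, folder) == "hello\n"
```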

First example

tabs:
- tab: stdin
  contexts:
  - testcases:
      - stdin: "hello"
        stdout: "hello world!\n"

In this setup, stdin cannot be an object, as this would cause issues in the validator. It could be mistaken for one of the next examples, leading to incorrect interpretation or validation errors.

Second example

tabs:
- tab: stdin
  contexts:
  - testcases:
      - stdin:
          content: "hello"
        stdout: "hello world!\n"

This one is equivalent to the first example. The feedback for those would look something like this:

{"command": "start-judgement"}
{"title": "stdin", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "hello\n", "format": "console"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

This is exactly the same as it is now, except that stdin is no longer present in data, because we no longer saw any use for it.

Third example

tabs:
- tab: stdin
  contexts:
  - testcases:
      - stdin: 
          path: "hello.txt"
        stdout: "hello world!\n"

In this case, the content must be read from a file. The output looks like this:

Third example feedback

{"command": "start-judgement"}
{"title": "stdin", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "$ submission &lt; <a class=\"file-link\" target=\"_blank\">hello.txt</a>", "format": "html"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"files": [{"path": "hello.txt"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

The provided information about the file is also given in data.

Fourth example

tabs:
- tab: stdin
  contexts:
  - testcases:
      - stdin:
          path: "hello.txt"
          content: "hello"
        stdout: "hello world!\n"

In this case, the file doesn't need to physically exist in the working directory. The feedback will be the following:

Fourth example feedback

{"command": "start-judgement"}
{"title": "stdin", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "$ submission &lt; <a class=\"file-link\" target=\"_blank\">hello.txt</a>", "format": "html"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"files": [{"path": "hello.txt", "content": "hello\n"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

Now the content will be provided in data.

Fifth example

tabs:
- tab: stdin
  contexts:
  - testcases:
      - stdin:
          path: "hello.txt"
          content: "hello"
        arguments: ["world"]
        stdout: "hello world!\n"

Because an argument was provided, the description will look a bit different:

Fifth example feedback

{"command": "start-judgement"}
{"title": "stdin", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "$ submission world &lt; <a class=\"file-link\" target=\"_blank\">hello.txt</a>", "format": "html"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"files": [{"path": "hello.txt", "content": "hello\n"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

Sixth example

tabs:
- tab: stdin
  contexts:
  - testcases:
      - stdin:
          content: "hello"
        arguments: ["world"]
        stdout: "hello world!\n"

Sixth example feedback

{"command": "start-judgement"}
{"title": "stdin", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "$ submission world <<< hello", "format": "console"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

TESTed uses here-files in this case, but for single-line content, you can use a special shorthand syntax, as shown in the description.
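The choice between the <<< shorthand and a here-document could look roughly like this. This is an illustrative sketch, not TESTed's actual rendering code:

```python
# Render the console description for a submission run: single-line stdin
# content uses the bash here-string shorthand (<<<), while multi-line
# content falls back to a here-document.
def render_command(arguments, stdin_content):
    cmd = " ".join(["$ submission", *arguments])
    if stdin_content is None:
        return cmd
    stripped = stdin_content.rstrip("\n")
    if "\n" not in stripped:
        return f"{cmd} <<< {stripped}"
    return f"{cmd} << EOF\n{stripped}\nEOF"


assert render_command(["world"], "hello") == "$ submission world <<< hello"
```

This reproduces the description shown in the sixth example's feedback.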

Seventh example

tabs:
- tab: stdin
  contexts:
  - testcases:
      - stdin:
          path: "hello.txt"
          content: "hello"
        arguments: ["world"]
        description: "stdin_test world < hello.txt"
        stdout: "hello world!\n"

Here the description will actually use the file provided in stdin to generate the start of a hyperlink:

Seventh example feedback:

{"command": "start-judgement"}
{"title": "stdin", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "stdin_test world &lt; <a class=\"file-link\" target=\"_blank\">hello.txt</a>", "format": "html"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"files": [{"path": "hello.txt", "content": "hello\n"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

Stdout & Stderr

In TESTed, stdout and stderr already followed the exact same syntax, so it makes sense that this is still the case.
A few examples using stdout:

First example

tabs:
- tab: stdout
  contexts:
  - testcases:
      - stdin: "hello"
        stdout: "hello world!\n"

This was already possible in TESTed.

Second example

tabs:
- tab: stdout
  contexts:
  - testcases:
      - stdin: "hello"
        stdout: 
          content: "hello world!\n"

In TESTed, the key data would be used. This is still possible, but content is a better name and more consistent with the new changes.

First and second example feedback

{"command": "start-judgement"}
{"title": "stdout", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "hello\n", "format": "console"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

This is still the same as it is in TESTed.

Third example

tabs:
- tab: stdout
  contexts:
  - testcases:
      - stdin: "hello"
        stdout: 
           path: "files_tests/hello_out.txt"

Just like stdin, you can provide a path.
This would generate the following feedback:

Third example feedback

{"command": "start-judgement"}
{"title": "stdout", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "hello\n", "format": "console"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

Currently, the feedback is still the same. At a later stage, Dodona could use the new DSL to provide the feedback as a file instead of just a string.

Fifth example

tabs:
- tab: stdout
  contexts:
  - testcases:
    - stdin:
        path: "hello.txt"
        url: "media/workdir/hello.txt"
      arguments: ["world"]
      stdout:
        path: "files_tests/hello_out.txt"
        content: "hello world!\n"
      stderr:
        path: "files_tests/hello_err.txt"
        data: "ERROR\n" # Deprecated

The content or data can still be specified directly. This means that TESTed doesn't need to read the file from an external location. While this isn't particularly useful at the moment, it will become more relevant in the future when Dodona supports including files in the feedback instead of displaying their full content directly.

Fifth example feedback

{"command": "start-judgement"}
{"title": "stdout", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "$ submission world &lt; <a class=\"file-link\" target=\"_blank\">hello.txt</a>", "format": "html"}, "command": "start-testcase"}
{"expected": "ERROR\n", "channel": "stderr", "command": "start-test"}
{"generated": "ERROR\n", "status": {"enum": "correct"}, "command": "close-test"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"files": [{"path": "hello.txt"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

@jorg-vr
Contributor

jorg-vr commented May 12, 2025

This version of the described features looks fine to me.
I would still remove the fifth example for stdout, as it has no implemented support for now. But that is not very important.

Is the code also ready for review?

@BrentBlanckaert
Collaborator Author

Yes, the code is ready too.

@BrentBlanckaert
Collaborator Author

Just realised that the way deprecated messages are done still needs to change, but other than that it should be fine.

Contributor

@jorg-vr jorg-vr left a comment

I didn't review all JSON schemas, but I assume the feedback I have given on the strict schema is applicable to all.

@@ -148,6 +168,7 @@ def _parse_yaml(yaml_stream: str) -> YamlObject:
yaml.add_constructor("!" + actual_type, _custom_type_constructors, loader)
yaml.add_constructor("!expression", _expression_string, loader)
yaml.add_constructor("!oracle", _return_oracle, loader)
yaml.add_constructor("!path", _path_string, loader)
Contributor

has this ever been discussed during one of your thesis meetings?

@BrentBlanckaert
Collaborator Author

has this ever been discussed during one of your thesis meetings?

It has. This is used in the output_files and all of us were fine with the usage and the name.

@BrentBlanckaert
Collaborator Author

Documentation for new extensions (Newest version)

This documentation discusses all the changes made regarding the I/O of a test.

Input files

Input files (currently files in TESTed) are a list of files defined in the DSL. Each file can be represented using two fields: path (required) and content. A few examples illustrate what these do:

First example

tabs:
- tab: counting
  contexts:
  - testcases:
    - expression: count_words('fish.txt', 'sharks')
      return: 1
      input_files:
        - path: "fish.txt"
          content: "There are sharks in the water!"
        - path: "mammal.txt"
          content: "There are tigers in the water!"

A path must ALWAYS be provided. This is the string that will be shown to the student in the above expression. The path field (originally called name) contains a path relative to the working directory. In the case of expression, statement, or descriptions, Dodona will use path to find the exact match. In the expression, that string becomes a hyperlink pointing to the provided content.

The presence of the content field also indicates that the file does not yet exist in the working directory. In this case, a file will still need to be created in the working directory.

The feedback for this example is the following:

First example feedback

{"command": "start-judgement"}
{"title": "counting", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "count_words('fish.txt', 'sharks')", "format": "python"}, "command": "start-testcase"}
{"expected": "1", "channel": "return", "command": "start-test"}
{"generated": "1", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"statements": "count_words('fish.txt', 'sharks')", "files": [{"path": "fish.txt", "content": "There are sharks in the water!"}, {"path": "mammal.txt", "content": "There are tigers in the water!"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

All the input files are provided in data. Dodona will use this to edit the provided description and create a hyperlink or pop-up.

Second example

tabs:
- tab: counting
  contexts:
  - testcases:
    - expression: count_words('fish.txt', 'sharks')
      return: 1
      input_files:
        - path: "fish.txt"

In this case, the file is expected to already be present in the working directory. Dodona will make the hyperlink in the description point to a file provided in the evaluation folder with the same path provided in path.
The feedback would be the following:

Second example feedback

{"command": "start-judgement"}
{"title": "counting", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "count_words('fish.txt', 'sharks')", "format": "python"}, "command": "start-testcase"}
{"expected": "1", "channel": "return", "command": "start-test"}
{"generated": "1", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"statements": "count_words('fish.txt', 'sharks')", "files": [{"path": "fish.txt"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

The main difference is that only path is present in data.files.

Output files

In TESTed, you were only able to provide a single output file using file. This has been extended to support multiple files.
An output_file contains both a path and a content field, both of which are required.

  • The path field specifies the path to the file that the student should have generated (in the working directory).
  • The content field contains either the expected contents of the generated file or a path to a real file containing the expected contents (located in the evaluation folder).

First example

tabs:
- tab: output_file
  contexts:
  - testcases:
    - expression: genereer('origineel_tekst.txt', 'text', 3)
      return: true
      output_files:
        - content: !path "files_tests/tekst1.txt"
          path: "text1.txt"
        - content: !path "files_tests/tekst2.txt"
          path: "text2.txt"
        - content: "Created using write mode.\n3\n"
          path: "text3.txt"
      input_files:
        - path: "origineel_tekst.txt"
          content: "Created using write mode.\n"

The feedback provided for this test would look something like the following:

{"command": "start-judgement"}
{"title": "output_file", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "genereer('origineel_tekst.txt', 'text', 3)", "format": "python"}, "command": "start-testcase"}
{"expected": "Created using write mode.\n1\n", "channel": "file: text1.txt", "command": "start-test"}
{"generated": "Created using write mode.\n1\n", "status": {"enum": "correct"}, "command": "close-test"}
{"expected": "Created using write mode.\n2\n", "channel": "file: text2.txt", "command": "start-test"}
{"generated": "Created using write mode.\n2\n", "status": {"enum": "correct"}, "command": "close-test"}
{"expected": "Created using write mode.\n3\n", "channel": "file: text3.txt", "command": "start-test"}
{"generated": "Created using write mode.\n3\n", "status": {"enum": "correct"}, "command": "close-test"}
{"expected": "True", "channel": "return", "command": "start-test"}
{"generated": "True", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"statements": "genereer('origineel_tekst.txt', 'text', 3)", "files": [{"path": "origineel_tekst.txt", "content": "Created using write mode.\n"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

For each output file, the feedback includes a pair showing both the expected and the generated content. The channel field in the expected output will be labeled as file: <path of output file>. The full contents of both files are included directly in the feedback, as there is currently no support for returning files themselves instead of their contents.

Stdin

In TESTed, stdin could only be a string or another basic type. I started by expanding the capabilities there.
If stdin is a string, it is equivalent to using a map with the field content. Alternatively, a path (relative to the working directory) can be provided. If only path is provided, with no content, then it is assumed that a file is present in the evaluation folder at the location given by path.

First example

tabs:
- tab: stdin
  contexts:
  - testcases:
      - stdin: "hello"
        stdout: "hello world!\n"

In this setup, stdin cannot be an object, as this would cause issues in the validator. It could be mistaken for one of the next examples, leading to incorrect interpretation or validation errors.

Second example

tabs:
- tab: stdin
  contexts:
  - testcases:
      - stdin:
          content: "hello"
        stdout: "hello world!\n"

This one is equivalent to the first example. The feedback for those would look something like this:

{"command": "start-judgement"}
{"title": "stdin", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "hello\n", "format": "console"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"stdin": "hello\n"}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

This is the exact same as it is now.

Third example

tabs:
- tab: stdin
  contexts:
  - testcases:
      - stdin: 
          path: "hello.txt"
        stdout: "hello world!\n"

In this case, the content must be read from a file. The output looks like this:

Third example feedback

{"command": "start-judgement"}
{"title": "stdin", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "$ submission < hello.txt", "format": "console"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"files": [{"path": "hello.txt"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

The provided information about the file is also given in data.

Fourth example

tabs:
- tab: stdin
  contexts:
  - testcases:
      - stdin:
          path: "hello.txt"
          content: "hello"
        stdout: "hello world!\n"

In this case, the file doesn't need to physically exist in the working directory. The feedback will be the following:

Fourth example feedback

{"command": "start-judgement"}
{"title": "stdin", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "$ submission < hello.txt", "format": "console"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"stdin": "hello\n", "files": [{"path": "hello.txt", "content": "hello\n"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

Now the content will be provided in data.

Fifth example

tabs:
- tab: stdin
  contexts:
  - testcases:
      - stdin:
          path: "hello.txt"
          content: "hello"
        arguments: ["world"]
        stdout: "hello world!\n"

Because an argument was provided, the description will look a bit different:

Fifth example feedback

{"command": "start-judgement"}
{"title": "stdin", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "$ submission world < hello.txt", "format": "console"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"stdin": "hello\n", "files": [{"path": "hello.txt", "content": "hello\n"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

Sixth example

tabs:
- tab: stdin
  contexts:
  - testcases:
      - stdin:
          content: "hello"
        arguments: ["world"]
        stdout: "hello world!\n"

Sixth example feedback

{"command": "start-judgement"}
{"title": "stdin", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "$ submission world <<< hello", "format": "console"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"stdin": "hello\n"}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

TESTed normally feeds stdin via a here-document in this case, but for single-line content it can use the shorter here-string syntax (`<<<`), as shown in the description.
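The two redirection styles shown in the descriptions can be sketched with Python's subprocess module. This is a sketch of the general mechanism, not TESTed's actual implementation; `cat` stands in for the submission under test, and the temporary file is created only to make the example self-contained.

```python
import pathlib
import subprocess
import tempfile

# Prepare a stand-in for hello.txt in a temporary directory.
path = pathlib.Path(tempfile.mkdtemp()) / "hello.txt"
path.write_text("hello\n")

# Equivalent of `submission < hello.txt`: stdin redirected from a file.
with open(path) as f:
    from_file = subprocess.run(["cat"], stdin=f, capture_output=True, text=True)

# Equivalent of `submission <<< hello`: stdin fed from an in-memory
# string, like a here-string.
from_string = subprocess.run(
    ["cat"], input="hello\n", capture_output=True, text=True
)

print(from_file.stdout, end="")    # hello
print(from_string.stdout, end="")  # hello
```

Both invocations deliver the same bytes on stdin; only the displayed command in the feedback differs.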

Seventh example

tabs:
- tab: stdin
  contexts:
  - testcases:
      - stdin:
          path: "hello.txt"
          content: "hello"
        arguments: ["world"]
        description: "stdin_test world < hello.txt"
        stdout: "hello world!\n"

Here, a custom description is provided explicitly:

Seventh example feedback:

{"command": "start-judgement"}
{"title": "stdin", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "stdin_test world < hello.txt", "format": "text"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"stdin": "hello\n", "files": [{"path": "hello.txt", "content": "hello\n"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

The stdin information will still be provided in data, so that Dodona can generate a hyperlink in the given description.

Stdout & Stderr

In TESTed, stdout and stderr already followed exactly the same syntax, so it makes sense to keep that behaviour.
A few examples using stdout:

First example

tabs:
- tab: stdout
  contexts:
  - testcases:
      - stdin: "hello"
        stdout: "hello world!\n"

This was already possible in TESTed.

Second example

- tab: stdout
  contexts:
  - testcases:
      - stdin: "hello"
        stdout: 
          content: "hello world!\n"

In TESTed, the key data was used. This is still possible, but content is a clearer name and more consistent with the other changes.

First and second example feedback

{"command": "start-judgement"}
{"title": "stdout", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "hello\n", "format": "console"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"stdin": "hello\n"}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

This is still the same as it is in TESTed.

Third example

tabs:
- tab: stdout
  contexts:
  - testcases:
      - stdin: "hello"
        stdout: 
           path: "files_tests/hello_out.txt"

Just like stdin, you can provide a path to a file (relative to the working directory).
This would generate the following feedback:

Third example feedback

{"command": "start-judgement"}
{"title": "stdout", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "hello\n", "format": "console"}, "command": "start-testcase"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"stdin": "hello\n"}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}

Currently, the feedback is still the same. At a later stage, Dodona could use the new DSL to present the feedback as a file instead of a plain string.
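Resolving an expected stdout from a path can be sketched as follows. This is an illustration, not TESTed's actual code: the temporary directory stands in for the working directory, where a real suite would ship files_tests/hello_out.txt with the exercise.

```python
import tempfile
from pathlib import Path

# Simulated working directory; in a real exercise the expected-output
# file would already exist there.
workdir = Path(tempfile.mkdtemp())
out_file = workdir / "hello_out.txt"
out_file.write_text("hello world!\n")

# The judge resolves the expected stdout by reading the referenced file
# instead of taking the text from the test suite itself.
expected = out_file.read_text()
actual = "hello world!\n"  # what the submission printed
print(expected == actual)  # True
```

The comparison itself is unchanged; only the origin of the expected text differs.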

Fourth example

tabs:
- tab: stdout
  contexts:
  - testcases:
    - stdin:
        path: "hello.txt"
      arguments: ["world"]
      stdout:
        path: "files_tests/hello_out.txt"
        content: "hello world!\n"
      stderr:
        path: "files_tests/hello_err.txt"
        data: "ERROR\n" # Deprecated

The content or data can still be specified directly. This means that TESTed doesn't need to read the file from an external location. While this isn't particularly useful at the moment, it will become more relevant in the future when Dodona supports including files in the feedback instead of displaying their full content directly.

Fourth example feedback

{"command": "start-judgement"}
{"title": "stdout", "command": "start-tab"}
{"command": "start-context"}
{"description": {"description": "$ submission world < hello.txt", "format": "console"}, "command": "start-testcase"}
{"expected": "ERROR\n", "channel": "stderr", "command": "start-test"}
{"generated": "ERROR\n", "status": {"enum": "correct"}, "command": "close-test"}
{"expected": "hello world!\n", "channel": "stdout", "command": "start-test"}
{"generated": "hello world!\n", "status": {"enum": "correct"}, "command": "close-test"}
{"command": "close-testcase"}
{"data": {"files": [{"path": "hello.txt"}]}, "command": "close-context"}
{"command": "close-tab"}
{"command": "close-judgement"}
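The lookup order described above (inline content first, then the deprecated data key, and only then the file at path) can be sketched like this. The function resolve_expected is hypothetical and not part of TESTed's API; it only illustrates the precedence.

```python
def resolve_expected(channel: dict) -> str:
    """Hypothetical lookup: inline `content` wins, the deprecated `data`
    key is still honoured, and otherwise the file at `path` is read."""
    if "content" in channel:
        return channel["content"]
    if "data" in channel:  # deprecated spelling of `content`
        return channel["data"]
    with open(channel["path"], encoding="utf-8") as f:
        return f.read()

# Inline content takes precedence, so no file access is needed.
print(resolve_expected({"path": "files_tests/hello_out.txt",
                        "content": "hello world!\n"}), end="")  # hello world!
print(resolve_expected({"path": "files_tests/hello_err.txt",
                        "data": "ERROR\n"}), end="")            # ERROR
```

Because the inline value wins, TESTed never needs to touch the referenced file when content or data is present.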

@@ -581,7 +581,7 @@ class Testcase(WithFeatures, WithFunctions):
input: Statement | MainInput | LanguageLiterals
description: Message | None = None
output: Output = field(factory=Output)
link_files: list[FileUrl] = field(factory=list)
link_files: list[InputFile] = field(factory=list)
@pdawyndt pdawyndt May 24, 2025


Would seem logical to also rename link_files to input_files, as it's a list of input files.
