Adds datarouter integration test #6
saksham-chawla wants to merge 2 commits into codex/replay-pr-33-base from
Conversation
- A basic ITF test that verifies DLT log messages sent by datarouter running on the reference integration platform (QNX QEMU target) are captured on the host
Good test addition, but reliability, maintainability, and documentation mismatches need fixes.
Status: Changes Suggested | Risk: Medium | Readiness: 6/10 (Needs Polish)
Issues Identified & Suggestions
- Align message count assertion with documented requirement to avoid flakiness: `tests/integration/test_datarouter_dlt.py`, `tests/integration/README.md`
- Use unique temp files and robust startup polling for tests: `tests/integration/test_datarouter_dlt.py`, `tests/integration/conftest.py`
- Avoid private ITF internals and mark test-only dependency: `tests/integration/conftest.py`, `MODULE.bazel`
Review Details
📁 6 files reviewed | 💬 6 comments
```python
logger.info("Found %d messages with app_id=%s, context_id=%s", message_count, APP_ID, CTX_ID)
...
assert message_count > 1, f"Expected more than 1 message, but got {message_count}"
```
[Logic] The README and docstring state that the test only needs to observe at least one `APP_ID=DR` / `CTX_ID=STAT` message during the capture window, but the assertion requires `message_count > 1`. Because the datarouter emits roughly every 5 s and the capture runs for only 10 s, any clock skew or jitter that produces a single emission will cause this test to fail even though the system behaved as intended, making the test flaky. Please align the assertion with the documented requirement:

```diff
- assert message_count > 1, f"Expected more than 1 message, but got {message_count}"
+ assert message_count >= 1, f"Expected at least 1 message, but got {message_count}"
```

(If a higher threshold is really required, update the documentation and fixture comments accordingly.)
File: tests/integration/test_datarouter_dlt.py
Line: 65
Chain of Thought
- Validation Decision: accepted
- Confidence: 0.97
- Likelihood Score: 4/5
- Likelihood Reason: With messages emitted every ~5s and a 10s capture window, capturing exactly 1 message is a realistic scenario depending on timing. The assertion `> 1` would cause the test to fail in that case, making it flaky. The documentation explicitly states "at least 1 message" but the code requires more than 1.
- Suggested Action: keep
- Tool Calls Used: 2
Reasoning
The comment accurately identifies a real inconsistency between the documentation and the code:
- README.md line 69: "The test expects at least 1 message during the capture window for reliable test results" — clearly documents `>= 1`.
- Test line 65: `assert message_count > 1` — requires strictly more than 1, i.e., at least 2 messages.
- Test lines 23-24: Comment says messages sent every ~5s, capture window is 10s.
With a 5-second emission interval and a 10-second capture window, the test could realistically capture 1 or 2 messages depending on when the capture starts relative to the emission cycle. If capture starts just after an emission, the next one arrives at ~5s and the one after at ~10s — but the capture may end before the second one. Getting exactly 1 message is plausible and would cause a test failure despite the system working correctly.
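This timing argument can be sanity-checked with a toy simulation. Note this is illustrative only: `messages_captured` is a hypothetical helper, and the 5 s interval / 10 s window values come from the test's own comments.

```python
def messages_captured(first_emission_offset: float,
                      interval: float = 5.0,
                      window: float = 10.0) -> int:
    """Count emissions landing inside [0, window), given the first
    emission occurs `first_emission_offset` seconds after capture starts."""
    count = 0
    t = first_emission_offset
    while t < window:
        count += 1
        t += interval
    return count

# If capture starts just before an emission, two messages land in the window;
# a fraction of a second of jitter pushes the second one out of it.
print(messages_captured(4.9))  # 2 (emissions at ~4.9 s and ~9.9 s)
print(messages_captured(5.1))  # 1 (emission at ~5.1 s; next at ~10.1 s misses)
```

Depending on where the capture window lands in the emission cycle, the count is 1 or 2, so `>= 1` is the only assertion robust to jitter.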
The comment is:
- Accurate: The mismatch between documentation and assertion is real and verified.
- Actionable: It provides a concrete fix (change `> 1` to `>= 1`) with a clear diff.
- Introduced by this PR: This is entirely new code added in this PR.
- Addresses flakiness: A legitimate concern for integration tests with timing dependencies.
Evidence Summary
1. File Snapshot (HEAD)
- File:
tests/integration/conftest.py
1: # *******************************************************************************
2: # Copyright (c) 2025 Contributors to the Eclipse Foundation
3: #
4: # See the NOTICE file(s) distributed with this work for additional
5: # information regarding copyright ownership.
6: #
7: # This program and the accompanying materials are made available under the
8: # terms of the Apache License Version 2.0 which is available at
9: # https://www.apache.org/licenses/LICENSE-2.0
10: #
...
2. File Snapshot (HEAD)
- File:
tests/integration/README.md
1: # *******************************************************************************
2: # Copyright (c) 2025 Contributors to the Eclipse Foundation
3: #
4: # See the NOTICE file(s) distributed with this work for additional
5: # information regarding copyright ownership.
6: #
7: # This program and the accompanying materials are made available under the
8: # terms of the Apache License Version 2.0 which is available at
9: # https://www.apache.org/licenses/LICENSE-2.0
10: #
...
```python
def test_dlt_capture(datarouter_running, test_config_fixture, dlt_receiver_config):
    """Verify DLT messages can be captured from the datarouter.
    """
    dlt_file = "/tmp/test_dlt_capture.dlt"
```
[Testing] The test uses a hardcoded file path `/tmp/test_dlt_capture.dlt`. This can cause conflicts if tests are run in parallel and is not clean. Use the pytest `tmp_path` fixture to create a unique temporary file for the test run. This requires adding `tmp_path` to the test function's arguments, using it to construct the file path, and passing `str(dlt_file)` to `DltReceive`.
Example:

```python
# In test function signature:
def test_dlt_capture(..., tmp_path):
# In test body:
dlt_file = tmp_path / "test_dlt_capture.dlt"
# In DltReceive call:
with DltReceive(
    ...,
    file_name=str(dlt_file),
    ...
):
```
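For comparison, the standard-library mechanism underlying this advice can be sketched with `tempfile`; pytest's `tmp_path` gives each test the same properties — a unique directory plus automatic cleanup — without the explicit context manager:

```python
import tempfile
from pathlib import Path

# Each TemporaryDirectory yields a fresh, unique directory, so two
# concurrent runs cannot collide on a shared /tmp filename.
with tempfile.TemporaryDirectory() as d1, tempfile.TemporaryDirectory() as d2:
    f1 = Path(d1) / "test_dlt_capture.dlt"
    f2 = Path(d2) / "test_dlt_capture.dlt"
    f1.write_bytes(b"")  # simulate the receiver creating the capture file
    assert f1 != f2      # same filename, distinct parent directories
# On exit both directories (and the files inside) are removed automatically.
```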
File: tests/integration/test_datarouter_dlt.py
Line: 34
Chain of Thought
- Validation Decision: accepted
- Confidence: 0.82
- Likelihood Score: 3/5
- Likelihood Reason: This is an integration test that connects to a specific QEMU VM target and takes 10+ seconds, so parallel execution of the same test is unlikely in practice. However, the hardcoded path has no cleanup, and collisions with other tests or stale files from previous runs are plausible. The cleanliness concern is real even if parallel execution is rare.
- Suggested Action: keep
- Tool Calls Used: 3
Reasoning
The comment correctly identifies that the newly added test file uses a hardcoded path /tmp/test_dlt_capture.dlt at line 34. This is introduced by this PR (the entire file is new).
Accuracy: The comment accurately describes the issue. The hardcoded /tmp path:
- Could cause conflicts if tests run in parallel (though unlikely for this specific integration test)
- Leaves stale files without cleanup
- Is not idiomatic pytest — the `tmp_path` fixture exists precisely for this purpose
Actionability: The comment is highly actionable, providing:
- The specific problem (hardcoded path at line 34)
- A concrete solution (use the `tmp_path` fixture)
- Exact code examples showing what to change in the function signature and body
Considerations: This is a Bazel-based integration test (py_itf_test) that connects to a QEMU VM. While the parallel test conflict risk is somewhat low for this specific test (it requires a running QEMU target), the suggestion to use tmp_path is still a good practice improvement. The DltReceive accepts a file_name parameter as a string, so passing str(dlt_file) from tmp_path should work identically. pytest's tmp_path also provides automatic cleanup, which the hardcoded path lacks.
Diff Responsibility: This is newly introduced code in this PR, so the issue is squarely within the PR's responsibility.
Evidence Summary
1. File Snapshot (HEAD)
- File:
tests/integration/BUILD
1: # *******************************************************************************
2: # Copyright (c) 2025 Contributors to the Eclipse Foundation
3: #
4: # See the NOTICE file(s) distributed with this work for additional
5: # information regarding copyright ownership.
6: #
7: # This program and the accompanying materials are made available under the
8: # terms of the Apache License Version 2.0 which is available at
9: # https://www.apache.org/licenses/LICENSE-2.0
10: #
...
2. File Snapshot (HEAD)
- File:
tests/integration/conftest.py
1: # *******************************************************************************
2: # Copyright (c) 2025 Contributors to the Eclipse Foundation
3: #
4: # See the NOTICE file(s) distributed with this work for additional
5: # information regarding copyright ownership.
6: #
7: # This program and the accompanying materials are made available under the
8: # terms of the Apache License Version 2.0 which is available at
9: # https://www.apache.org/licenses/LICENSE-2.0
10: #
...
```
remote = "https://github.com/eclipse-score/baselibs.git",
)

bazel_dep(name = "score_itf", version = "0.1.0")
```
[Maintainability] `score_itf` is only used by integration tests (the BUILD file sets `default_testonly = True`), but the `bazel_dep` is declared as a non-dev dependency. This forces all downstream consumers of `score_logging` to resolve `score_itf`, even though they don't need it.
Other test-only dependencies in this file (e.g., `score_process`, `score_platform`) use `dev_dependency = True`. Apply the same here:

```suggestion
bazel_dep(name = "score_itf", version = "0.1.0", dev_dependency = True)
```
File: MODULE.bazel
Line: 124
Chain of Thought
- Validation Decision: accepted
- Confidence: 0.95
- Likelihood Score: 5/5
- Likelihood Reason: The issue is inevitable: any downstream consumer of `score_logging` will be forced to resolve `score_itf` even though it's only used by tests. This is a direct consequence of the missing `dev_dependency = True` flag.
- Suggested Action: keep
- Tool Calls Used: 9
Reasoning
The comment is accurate and actionable:
- New code introduced by this PR: The `bazel_dep(name = "score_itf", version = "0.1.0")` on line 124 is entirely new — the base MODULE.bazel ends at line 122. This is not legacy debt.
- `score_itf` is only used in test code: The only reference to `score_itf` is in `tests/integration/BUILD`, which:
  - Has `package(default_testonly = True)` on line 16
  - Uses `load("@score_itf//:defs.bzl", "py_itf_test")` and `@score_itf//config`
  - No other BUILD file in the repository references `score_itf`
- Inconsistency with existing patterns: Other test-only dependencies in MODULE.bazel consistently use `dev_dependency = True`:
  - `score_process` (line 34): `dev_dependency = True`
  - `score_platform` (line 35): `dev_dependency = True`
  - `score_toolchains_gcc` (line 38): `dev_dependency = True`
  - `score_toolchains_qnx` (line 39): `dev_dependency = True`
  - etc.
- Real downstream impact: In Bazel's module system, non-dev dependencies propagate to downstream consumers. Missing `dev_dependency = True` means any project depending on `score_logging` will need to resolve `score_itf`, even though they have no use for it.
- The fix is concrete and correct: Adding `dev_dependency = True` is the right solution, consistent with the existing codebase conventions.
Evidence Summary
1. File Snapshot (HEAD)
- File:
tests/integration/BUILD
1: # *******************************************************************************
2: # Copyright (c) 2025 Contributors to the Eclipse Foundation
3: #
4: # See the NOTICE file(s) distributed with this work for additional
5: # information regarding copyright ownership.
6: #
7: # This program and the accompanying materials are made available under the
8: # terms of the Apache License Version 2.0 which is available at
9: # https://www.apache.org/licenses/LICENSE-2.0
10: #
...
2. File Snapshot (HEAD)
- File:
MODULE.bazel
1: # *******************************************************************************
2: # Copyright (c) 2025 Contributors to the Eclipse Foundation
3: #
4: # See the NOTICE file(s) distributed with this work for additional
5: # information regarding copyright ownership.
6: #
7: # This program and the accompanying materials are made available under the
8: # terms of the Apache License Version 2.0 which is available at
9: # https://www.apache.org/licenses/LICENSE-2.0
10: #
...
3. File Snapshot (BASE)
- File: `MODULE.bazel`
- Lines: 115-125
115: )
116:
117: bazel_dep(name = "score_baselibs", version = "0.2.0")
118: git_override(
119: module_name = "score_baselibs",
120: commit = "3c65b223e9f516f95935bb4cd2e83d6088ca016f",
121: remote = "https://github.com/eclipse-score/baselibs.git",
122: )
123:
4. File Lookup
- Filename: `BUILD`
- Max Results: 10
Found 10 file(s) matching 'BUILD':
- BUILD
- examples/BUILD
- score/datarouter/BUILD
- score/datarouter/build_configuration_flags/BUILD
- score/datarouter/daemon_communication/BUILD
- score/datarouter/dlt_filetransfer_trigger_lib/BUILD
- score/datarouter/error/BUILD
- score/datarouter/lib/synchronized/BUILD
- score/datarouter/network/BUILD
...
5. File Lookup
- Filename: `BUILD.bazel`
- Max Results: 10
No files found matching 'BUILD.bazel'
6. File Snapshot (HEAD)
- File:
examples/BUILD
1: # *******************************************************************************
2: # Copyright (c) 2025 Contributors to the Eclipse Foundation
3: #
4: # See the NOTICE file(s) distributed with this work for additional
5: # information regarding copyright ownership.
6: #
7: # This program and the accompanying materials are made available under the
8: # terms of the Apache License Version 2.0 which is available at
9: # https://www.apache.org/licenses/LICENSE-2.0
10: #
...
7. File Snapshot (HEAD)
- File:
score/datarouter/BUILD
1: # *******************************************************************************
2: # Copyright (c) 2025 Contributors to the Eclipse Foundation
3: #
4: # See the NOTICE file(s) distributed with this work for additional
5: # information regarding copyright ownership.
6: #
7: # This program and the accompanying materials are made available under the
8: # terms of the Apache License Version 2.0 which is available at
9: # https://www.apache.org/licenses/LICENSE-2.0
10: #
...
8. File Snapshot (HEAD)
- File:
BUILD
1: # *******************************************************************************
2: # Copyright (c) 2025 Contributors to the Eclipse Foundation
3: #
4: # See the NOTICE file(s) distributed with this work for additional
5: # information regarding copyright ownership.
6: #
7: # This program and the accompanying materials are made available under the
8: # terms of the Apache License Version 2.0 which is available at
9: # https://www.apache.org/licenses/LICENSE-2.0
10: #
...
```python
if TARGET_CONFIG_KEY in session.stash:
    target_ip_address = session.config.getoption("--target-ip")
    target_config = session.stash[TARGET_CONFIG_KEY]
    target_config._BaseProcessor__ip_address = target_ip_address
```
[Maintainability] The hook rewrites target_config._BaseProcessor__ip_address. That field is name-mangled specifically to keep it private to the ITF BaseProcessor implementation, so mutating it from the test suite creates a hard dependency on score_itf internals. Any future change to the plugin (renaming the attribute, validating writes, or making the config immutable) will break these tests even though the public API remains compatible, making upgrades or rollbacks of score_itf high risk. Please override the target IP through the supported configuration surface—e.g., update the target config file passed via --target_config, or use a public setter/fixture provided by score_itf—so the tests depend only on the published contract.
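Since score_itf's public API is not shown in this review, the following is only a hypothetical sketch of the config-file route: render a per-run target config with the desired IP and pass its path wherever `--target_config` is consumed. The JSON shape and the `ip_address` key are assumptions, not the documented score_itf format.

```python
import json
import tempfile
from pathlib import Path

def write_target_config(template_path: Path, target_ip: str) -> Path:
    """Render a per-run target config with the desired IP.

    Hypothetical sketch: assumes the ITF target config is JSON with an
    'ip_address' key; the real score_itf schema may differ.
    """
    config = json.loads(template_path.read_text())
    config["ip_address"] = target_ip
    out = Path(tempfile.mkdtemp()) / "target_config.json"
    out.write_text(json.dumps(config, indent=2))
    return out
```

The returned path would then be handed to the plugin through its supported option, keeping the tests on the published contract so internal renames in `BaseProcessor` cannot break them.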
File: tests/integration/conftest.py
Line: 46
Chain of Thought
- Validation Decision: accepted
- Confidence: 0.82
- Likelihood Score: 4/5
- Likelihood Reason: Accessing a name-mangled private attribute of an external library creates a hard coupling to internal implementation details. Since this is an external dependency (score_itf/ITF plugin), any update to the library that changes the BaseProcessor internals will break these tests. The double-underscore naming convention was specifically chosen by the library author to signal this attribute should not be accessed externally.
- Suggested Action: keep
- Tool Calls Used: 11
Reasoning
The comment correctly identifies a real anti-pattern introduced by this PR:
- New code, not legacy: The file `tests/integration/conftest.py` is entirely new in this PR, so this is not pre-existing debt.
- Name-mangling is correctly identified: `_BaseProcessor__ip_address` is indeed a Python name-mangled attribute, resulting from `__ip_address` being defined in `class BaseProcessor`. This naming convention is Python's mechanism for making attributes private to the class.
- External dependency concern is valid: The `itf.plugins.base.base_plugin` module is not part of this repository (confirmed by `find_file` returning no results for `base_plugin.py`). Accessing private internals of an external library is particularly risky since the consuming code has no control over when those internals change.
- Actionability: The comment suggests concrete alternatives — updating the target config file via `--target_config` or using a public setter/fixture from score_itf. While we can't verify the exact API of the external library, these are reasonable and specific suggestions for avoiding the anti-pattern.
- The risk is real: When score_itf updates, if `BaseProcessor` changes its `__ip_address` attribute name, adds validation, or makes configs immutable, line 46 will fail with an `AttributeError` or unexpected behavior, even though the public API may remain compatible.
The only limitation is that without access to the score_itf library source, we can't verify exactly what public alternatives exist. However, the fundamental concern about accessing name-mangled private attributes of an external dependency is well-founded and actionable.
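The name-mangling behavior referenced above is plain Python and easy to demonstrate (the class below is a stand-in, not the real score_itf `BaseProcessor`):

```python
class BaseProcessor:
    """Stand-in for the ITF class; not the real score_itf implementation."""

    def __init__(self, ip_address: str):
        # Double-underscore attributes are rewritten by the compiler to
        # _BaseProcessor__ip_address, marking them class-private.
        self.__ip_address = ip_address

proc = BaseProcessor("192.168.0.1")
# External code can still reach the mangled name today...
print(proc._BaseProcessor__ip_address)  # 192.168.0.1
# ...but renaming __ip_address inside the class silently breaks every
# such external access, while the public API stays unchanged.
```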
Evidence Summary
1. File Lookup
- Filename:
base_plugin.py
No files found matching 'base_plugin.py'
2. File Snapshot (HEAD)
- File:
tests/integration/test_datarouter_dlt.py
1: # *******************************************************************************
2: # Copyright (c) 2025 Contributors to the Eclipse Foundation
3: #
4: # See the NOTICE file(s) distributed with this work for additional
5: # information regarding copyright ownership.
6: #
7: # This program and the accompanying materials are made available under the
8: # terms of the Apache License Version 2.0 which is available at
9: # https://www.apache.org/licenses/LICENSE-2.0
10: #
...
3. File Snapshot (HEAD)
- File:
tests/integration/conftest.py
1: # *******************************************************************************
2: # Copyright (c) 2025 Contributors to the Eclipse Foundation
3: #
4: # See the NOTICE file(s) distributed with this work for additional
5: # information regarding copyright ownership.
6: #
7: # This program and the accompanying materials are made available under the
8: # terms of the Apache License Version 2.0 which is available at
9: # https://www.apache.org/licenses/LICENSE-2.0
10: #
...
4. File Lookup
- Filename:
MODULE.bazel
Found 1 file(s) matching 'MODULE.bazel':
- MODULE.bazel
```python
if "datarouter" not in output:
    logger.info("Datarouter not running. Starting Datarouter..")
    execute_command(ssh, _DATAROUTER_START_CMD)
    time.sleep(_DATAROUTER_STARTUP_TIMEOUT_SEC)
```
[Testing] Using a fixed `time.sleep(2)` to wait for the `datarouter` process to start can lead to flaky tests. On a heavily loaded system or a slow VM, the process might take longer to initialize, causing the test to fail unnecessarily. A more robust approach is to poll for the process status in a loop with a timeout.

```suggestion
timeout_seconds = 10
poll_interval = 0.5
end_time = time.time() + timeout_seconds
datarouter_started = False
while time.time() < end_time:
    _, stdout_lines, _ = execute_command_output(ssh, _DATAROUTER_CHECK_CMD)
    if "datarouter" in "\n".join(stdout_lines):
        datarouter_started = True
        break
    time.sleep(poll_interval)
if not datarouter_started:
    pytest.fail(f"Datarouter did not start within {timeout_seconds} seconds")
```
File: tests/integration/conftest.py
Line: 69
Chain of Thought
- Validation Decision: accepted
- Confidence: 0.85
- Likelihood Score: 3/5
- Likelihood Reason: The test runs against a QEMU VM (mentioned in the code comments). On a heavily loaded CI system or slow VM, 2 seconds may not be enough for the datarouter process to initialize, causing flaky test failures. The concern is plausible but not inevitable since the author likely tested with this timeout.
- Suggested Action: keep
- Tool Calls Used: 2
Reasoning
The comment targets line 69 in a brand-new file introduced by this PR (tests/integration/conftest.py). The code uses time.sleep(_DATAROUTER_STARTUP_TIMEOUT_SEC) where the constant is 2 seconds, to wait for a datarouter process to start on a QEMU VM before checking if it's running.
Accuracy: The comment correctly identifies the fixed sleep pattern and its potential for flakiness. The current code does have a verification check after the sleep (lines 71-73) that will fail the test if the process hasn't started, but this means if 2 seconds isn't long enough, the test will fail unnecessarily rather than retrying.
Actionability: The suggestion provides a complete, concrete code replacement using a polling loop with configurable timeout (10 seconds) and poll interval (0.5 seconds). This is highly actionable.
Diff Responsibility: This is entirely new code introduced by this PR, so the issue is squarely introduced by this change.
Technical merit: The polling approach is strictly better because:
- It can return faster if the process starts quickly (no need to always wait the full 2 seconds)
- It tolerates slower environments by allowing up to 10 seconds
- It's a well-established pattern for avoiding flaky integration tests
The context that this runs against a QEMU VM (as noted in the code's own comments) makes variable startup times more likely, strengthening the comment's concern.
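The suggested polling pattern generalizes into a small reusable helper; this is a sketch, with the predicate standing in for the SSH-based process check:

```python
import time
from typing import Callable

def wait_for(predicate: Callable[[], bool],
             timeout_s: float = 10.0,
             poll_interval_s: float = 0.5) -> bool:
    """Poll `predicate` until it returns True or the timeout elapses.

    Returns True on success, False on timeout; the caller decides
    whether a timeout becomes pytest.fail() or a retry.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_interval_s)
    return False

# Example: a fake process check that succeeds on the third poll.
calls = {"n": 0}
def fake_check() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3

assert wait_for(fake_check, timeout_s=5.0, poll_interval_s=0.01)
```

`time.monotonic()` avoids spurious timeouts if the wall clock jumps during the wait.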
Evidence Summary
1. File Snapshot (HEAD)
- File:
tests/integration/conftest.py
1: # *******************************************************************************
2: # Copyright (c) 2025 Contributors to the Eclipse Foundation
3: #
4: # See the NOTICE file(s) distributed with this work for additional
5: # information regarding copyright ownership.
6: #
7: # This program and the accompanying materials are made available under the
8: # terms of the Apache License Version 2.0 which is available at
9: # https://www.apache.org/licenses/LICENSE-2.0
10: #
...
```
3. Capturing DLT messages over UDP for approximately 10 seconds
4. Verifying receipt of messages with `APP_ID=DR` and `CTX_ID=STAT`

The datarouter emits statistics messages every 5 seconds. The test expects at least 1 message during the capture window for reliable test results
```
[Documentation] The documentation states the test expects "at least 1 message", but the implementation in `test_datarouter_dlt.py` asserts `message_count > 1` (i.e., at least 2 messages). The documentation should be updated to match the code to avoid confusion.

```suggestion
The datarouter emits statistics messages every 5 seconds. The test expects more than 1 message during the capture window for reliable test results
```
File: tests/integration/README.md
Line: 69
Chain of Thought
- Validation Decision: accepted
- Confidence: 0.97
- Likelihood Score: 5/5
- Likelihood Reason: The mismatch between documentation and code is definitively present. Anyone reading the README will get a wrong understanding of the test's assertion threshold.
- Suggested Action: keep
- Tool Calls Used: 1
Reasoning
The comment correctly identifies a documentation-code mismatch introduced in this PR:
- README line 69 states: "The test expects at least 1 message during the capture window"
- test_datarouter_dlt.py line 65 asserts: `message_count > 1` (i.e., at least 2 messages)
"At least 1" corresponds to >= 1, but the code uses > 1 which means "more than 1" or "at least 2". The suggested fix ("The test expects more than 1 message during the capture window") accurately matches the code's assertion. Both files are newly added in this PR, so this is not legacy debt. The comment is specific, actionable, and correct.
Evidence Summary
1. File Snapshot (HEAD)
- File:
tests/integration/test_datarouter_dlt.py
1: # *******************************************************************************
2: # Copyright (c) 2025 Contributors to the Eclipse Foundation
3: #
4: # See the NOTICE file(s) distributed with this work for additional
5: # information regarding copyright ownership.
6: #
7: # This program and the accompanying materials are made available under the
8: # terms of the Apache License Version 2.0 which is available at
9: # https://www.apache.org/licenses/LICENSE-2.0
10: #
...
Adds Datarouter DLT Integration Test
This pull request introduces a new integration test to verify the capture of DLT (Diagnostic Log and Trace) messages from the `datarouter` component running on a QNX 8.0 QEMU target. The test leverages the `score_itf` framework to connect to the QEMU VM, ensure the `datarouter` process is running, capture DLT messages over UDP, and then assert the presence of expected DLT messages with specific application and context IDs. It also includes comprehensive documentation for setting up and running these integration tests.

Architecture Diagram:
```mermaid
flowchart TD
    subgraph Host_Machine
        A["Bazel Build (+36 lines)"]
        B["test_datarouter_dlt.py (+65 lines)"]
        C[/"conftest.py (+86 lines)"/]
        D["DLT Receiver"]
        E[("DLT File")]
        A -->|runs| B
        B -->|uses fixtures| C
        D -->|writes to| E
        B -->|reads & asserts| E
    end
    subgraph QEMU_VM
        F["Datarouter Process"]
    end
    Host_Machine -- SSH commands --> QEMU_VM
    C -->|starts if needed| F
    F -->|sends DLT (UDP)| D
    A -->|depends on| G["score_itf"]
```

This summary was automatically generated by @propel-code-bot