fix(mcp): share brownfield store for serve --db #487

Open

andrew-adamson wants to merge 1 commit into Q00:main from andrew-adamson:fix/mcp-shared-brownfield-store

Conversation

@andrew-adamson

Summary

  • pass a shared BrownfieldStore through mcp serve --db instead of falling back to a separate default store
  • keep brownfield handler state aligned with the server database selection
  • add focused coverage for injected-store reuse and initialization
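A minimal sketch of the wiring the summary describes: one `BrownfieldStore` constructed from the server's `--db` selection and injected into the handler, rather than the handler falling back to its own default store. Class and function names here are illustrative reductions, not the project's actual API.

```python
class BrownfieldStore:
    """Reduced stand-in for the persistence-layer store."""

    def __init__(self, db_path: str) -> None:
        self.db_path = db_path
        self._engine = None

    def initialize(self) -> None:
        # The real store creates an engine and runs create_all/migrations here.
        self._engine = object()


class BrownfieldHandler:
    """Reduced stand-in for the MCP brownfield tool handler."""

    def __init__(self, store: BrownfieldStore) -> None:
        # Reuse the injected store instead of constructing a default one,
        # keeping handler state aligned with the server's database selection.
        self.store = store


def create_server(db_path: str):
    store = BrownfieldStore(db_path)    # one store per server process...
    store.initialize()
    handler = BrownfieldHandler(store)  # ...shared with the handler
    return store, handler
```

With this shape, the handler and server always point at the same database, and the store's lifecycle is owned by the server, matching the existing EventStore pattern mentioned in the review below.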

Testing

```
uv run pytest \
  tests/unit/mcp/server/test_adapter.py::TestCreateOuroborosServerBrownfieldStore::test_injected_store_is_shared_with_handler_and_owned_by_server \
  tests/unit/mcp/tools/test_brownfield_handler.py::TestBrownfieldHandlerDispatch::test_injected_store_initializes_once
```

Contributor

@ouroboros-agent ouroboros-agent Bot left a comment


Review — ouroboros-agent[bot]

Verdict: REQUEST_CHANGES

Reviewing commit d29cef8 for PR #487

Review record: 90c532e9-f263-4895-842b-be697ab985a9

Blocking Findings

| # | File:Line | Severity | Finding |
|---|-----------|----------|---------|
| 1 | src/ouroboros/mcp/tools/brownfield_handler.py:197 | BLOCKING | The new shared-store path can permanently wedge brownfield MCP access after a failed startup initialization. BrownfieldStore.initialize() assigns _engine before running create_all/migrations (src/ouroboros/persistence/brownfield.py:165-177), and _run_mcp_server() now catches brownfield_store.initialize() failures and continues startup (src/ouroboros/cli/commands/mcp.py:218-223). After that, _get_store() treats any injected store with a non-None _engine as ready and skips re-initialization, even if the earlier initialize failed mid-flight. The next brownfield request will then run against a partially initialized store instead of retrying init, causing persistent runtime failures until restart. This needs an explicit "initialized successfully" signal or a retry-on-failure path, plus a regression test for failed shared-store init. |

Recovery Notes

First recoverable review artifact generated from codex analysis log.

Non-blocking Suggestions

None.

Design Notes

The direction is sensible: sharing one BrownfieldStore through the MCP server matches the existing EventStore lifecycle and avoids per-request connection churn. The weak point is initialization state tracking; the new code relies on an internal engine attribute instead of a successful-init contract, which makes partial startup failures unsafe.


Reviewed by ouroboros-agent[bot] via Codex deep analysis

@shaun0927
Collaborator

The overall direction is good. Sharing the brownfield store across mcp serve --db is more consistent with the rest of the server lifecycle and avoids per-request drift.

My concern is startup-failure semantics.

Right now it looks possible for brownfield_store.initialize() to fail partway through startup, the server to continue running, and the handler path to later treat the injected store as effectively ready based on partial internal state. That would leave brownfield access wedged until restart instead of giving us a clean retry/fail-fast boundary.

Please tighten that before merge:

  1. Track "successfully initialized" explicitly instead of inferring readiness from _engine-style internals, or
  2. Guarantee a safe retry path after failed shared-store initialization.

Please also add a regression test for the failed shared-init path so we know the next brownfield request behaves intentionally rather than inheriting a half-initialized store.
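One possible shape for the explicit success contract requested above, combining both options: a flag assigned only after initialization fully completes, and a getter that retries rather than trusting `_engine`-style internals. Names like `_initialized` and `get_store` are illustrative, not the project's actual identifiers.

```python
class BrownfieldStore:
    def __init__(self) -> None:
        self._engine = None
        self._initialized = False  # set only after init fully succeeds

    def initialize(self) -> None:
        self._engine = object()  # engine creation
        self._run_migrations()   # may raise; flag stays False if it does
        self._initialized = True  # success signal, assigned last

    def _run_migrations(self) -> None:
        # Placeholder for create_all/migrations in the real store.
        pass


def get_store(store: BrownfieldStore) -> BrownfieldStore:
    # Retry path: a store with an engine but no success flag is
    # re-initialized, not treated as ready.
    if not store._initialized:
        store.initialize()
    return store
```

A regression test for the failed shared-init path would then simulate a store whose first `initialize()` raised (engine set, flag unset) and assert that the next `get_store()` call retries instead of returning the half-initialized store.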

So: good direction, but I want the failure mode made safe.
