6 changes: 0 additions & 6 deletions .env.developer-example

@@ -53,12 +53,6 @@ PLANEXE_IFRAME_GENERATOR_CONFIRMATION_PRODUCTION_URL='https://example.com/'
 PLANEXE_IFRAME_GENERATOR_CONFIRMATION_DEVELOPMENT_URL='https://example.com/'
 PLANEXE_WORKER_ID=1
 
-# PLANEXE_HOST_RUN_DIR="/absolute/path/to/PlanExe/run"
-# Example paths:
-# - macOS: /Users/you/PlanExe/run
-# - Linux: /home/you/PlanExe/run
-# - Windows: C:\Users\you\PlanExe\run
-
 # mcp
 # PLANEXE_MCP_API_KEY='your-api-key-here'
 # PLANEXE_MCP_HTTP_HOST='127.0.0.1'

6 changes: 0 additions & 6 deletions .env.docker-example

@@ -42,12 +42,6 @@ PLANEXE_FRONTEND_MULTIUSER_ADMIN_PASSWORD='admin'
 # PLANEXE_CREDITS_PER_PLAN='1'
 # PLANEXE_CREDIT_PRICE_CENTS='100'
 
-# PLANEXE_HOST_RUN_DIR="/absolute/path/to/PlanExe/run"
-# Example paths:
-# - macOS: /Users/you/PlanExe/run
-# - Linux: /home/you/PlanExe/run
-# - Windows: C:\Users\you\PlanExe\run
-
 # mcp
 # PLANEXE_MCP_API_KEY='your-api-key-here'
 # PLANEXE_MCP_HTTP_HOST='0.0.0.0' # bind all interfaces inside containers

2 changes: 0 additions & 2 deletions AGENTS.md

@@ -57,8 +57,6 @@ Always check the package-level `AGENTS.md` for file-specific rules
 ## Docker notes
 - `PLANEXE_POSTGRES_PORT` changes the host port mapping only; containers still
   connect to Postgres on 5432.
-- Keep `PLANEXE_HOST_RUN_DIR` consistent with run dir mounts so outputs land in
-  the expected host folder.
 
 ## Documentation sync
 - When changing Docker services, env defaults, or port mappings, update

14 changes: 6 additions & 8 deletions docker-compose.md

@@ -4,7 +4,7 @@ Docker Compose for PlanExe
 TL;DR
 -----
 - Services: `database_postgres` (DB on `${PLANEXE_POSTGRES_PORT:-5432}`), `worker_plan` (API on 8000), `frontend_multi_user` (UI on `${PLANEXE_FRONTEND_MULTIUSER_PORT:-5001}`), plus DB workers (`worker_plan_database_1/2/3` by default; `worker_plan_database` in `manual` profile), and `mcp_cloud` (MCP interface, stdio); `frontend_multi_user` waits for Postgres and worker health.
-- Shared host files: `.env` and `./llm_config/` mounted read-only; `./run` bind-mounted so outputs persist; `.env` is also loaded via `env_file`.
+- Shared host files: `.env` and `./llm_config/` mounted read-only; `.env` is also loaded via `env_file`.
 - Postgres defaults to user/db/password `planexe`; override via env or `.env`; data lives in the `database_postgres_data` volume.
 - Env defaults live in `docker-compose.yml` but can be overridden in `.env` or your shell (URLs, timeouts, run dirs, optional auth).
 - `develop.watch` syncs code/config for `worker_plan`; rebuild with `--no-cache` after big moves or dependency changes; restart policy is `unless-stopped`.
@@ -32,7 +32,6 @@ Why compose (escaping dependency hell)
 What compose sets up
 --------------------
 - Reusable local stack with consistent env/paths under `/app` in each container.
-- Shared run dir: `PLANEXE_RUN_DIR=/app/run` in the containers, bound to `${PLANEXE_HOST_RUN_DIR:-${PWD}/run}` on the host so outputs persist.
 - Postgres data volume: `database_postgres_data` keeps the database files outside the repo tree.
 
 Service: `database_postgres` (Postgres DB)
@@ -70,18 +69,18 @@ Service: `worker_plan` (pipeline API)
 -------------------------------------
 - Purpose: runs the PlanExe pipeline and exposes the API on port 8000; the frontend depends on its health.
 - Build: `worker_plan/Dockerfile`.
-- Env: `PLANEXE_CONFIG_PATH=/app`, `PLANEXE_RUN_DIR=/app/run`, `PLANEXE_HOST_RUN_DIR=${PWD}/run`, `PLANEXE_WORKER_RELAY_PROCESS_OUTPUT=true`.
+- Env: `PLANEXE_CONFIG_PATH=/app`, `PLANEXE_WORKER_RELAY_PROCESS_OUTPUT=true`.
 - Health: `http://localhost:8000/healthcheck` checked via the compose healthcheck.
-- Volumes: `.env` (ro), `llm_config/` (ro), `run/` (rw).
+- Volumes: `.env` (ro), `llm_config/` (ro).
 - Watch: sync `worker_plan/` into `/app/worker_plan`, rebuild on `worker_plan/pyproject.toml`, restart on compose edits.
 
 Service: `worker_plan_database` (DB-backed worker)
 --------------------------------------------------
 - Purpose: polls `PlanItem` rows in Postgres, marks them processing, runs the PlanExe pipeline, and writes progress/events back to the DB; no HTTP port exposed.
 - Build: `worker_plan_database/Dockerfile` (ships `worker_plan` code, shared `database_api` models, and this worker subclass).
 - Depends on: `database_postgres` health.
-- Env defaults: derives `SQLALCHEMY_DATABASE_URI` from `PLANEXE_POSTGRES_HOST|PORT|DB|USER|PASSWORD` (fallbacks to `database_postgres` + `planexe/planexe` on 5432); `PLANEXE_CONFIG_PATH=/app`, `PLANEXE_RUN_DIR=/app/run`; MachAI confirmation URLs default to `https://example.com/iframe_generator_confirmation` for both `PLANEXE_IFRAME_GENERATOR_CONFIRMATION_PRODUCTION_URL` and `PLANEXE_IFRAME_GENERATOR_CONFIRMATION_DEVELOPMENT_URL` (override with real endpoints).
-- Volumes: `.env` (ro), `llm_config/` (ro), `run/` (rw for pipeline output).
+- Env defaults: derives `SQLALCHEMY_DATABASE_URI` from `PLANEXE_POSTGRES_HOST|PORT|DB|USER|PASSWORD` (fallbacks to `database_postgres` + `planexe/planexe` on 5432); `PLANEXE_CONFIG_PATH=/app`; MachAI confirmation URLs default to `https://example.com/iframe_generator_confirmation` for both `PLANEXE_IFRAME_GENERATOR_CONFIRMATION_PRODUCTION_URL` and `PLANEXE_IFRAME_GENERATOR_CONFIRMATION_DEVELOPMENT_URL` (override with real endpoints).
+- Volumes: `.env` (ro), `llm_config/` (ro). Pipeline output stays inside the container; the worker persists final artifacts via the DB.
 - Entrypoint: `python -m worker_plan_database.app` (runs the long-lived poller loop).
 - Multiple workers: compose defines `worker_plan_database_1/2/3` with `PLANEXE_WORKER_ID` set to `1/2/3`. Start the trio with:
 - `docker compose up -d worker_plan_database_1 worker_plan_database_2 worker_plan_database_3`
@@ -92,7 +91,7 @@ Service: `mcp_cloud` (MCP interface)
 - Purpose: Model Context Protocol (MCP) server that provides a standardized interface for AI agents and developer tools to interact with PlanExe. Communicates with `worker_plan_database` via the shared Postgres database.
 - Build: `mcp_cloud/Dockerfile` (ships shared `database_api` models and the MCP server implementation).
 - Depends on: `database_postgres` and `worker_plan` health.
-- Env defaults: derives `SQLALCHEMY_DATABASE_URI` from `PLANEXE_POSTGRES_HOST|PORT|DB|USER|PASSWORD` (fallbacks to `database_postgres` + `planexe/planexe` on 5432); `PLANEXE_CONFIG_PATH=/app`, `PLANEXE_RUN_DIR=/app/run`; `PLANEXE_MCP_HTTP_HOST=0.0.0.0`, `PLANEXE_MCP_HTTP_PORT=8001`; `PLANEXE_MCP_PUBLIC_BASE_URL=http://localhost:8001` for report download URLs; `PLANEXE_MCP_REQUIRE_AUTH=false` by default.
+- Env defaults: derives `SQLALCHEMY_DATABASE_URI` from `PLANEXE_POSTGRES_HOST|PORT|DB|USER|PASSWORD` (fallbacks to `database_postgres` + `planexe/planexe` on 5432); `PLANEXE_CONFIG_PATH=/app`; `PLANEXE_MCP_HTTP_HOST=0.0.0.0`, `PLANEXE_MCP_HTTP_PORT=8001`; `PLANEXE_MCP_PUBLIC_BASE_URL=http://localhost:8001` for report download URLs; `PLANEXE_MCP_REQUIRE_AUTH=false` by default.
 - Ports: host `${PLANEXE_MCP_HTTP_PORT:-8001}` -> container `8001`.
 - Volumes: `llm_config/` (ro for provider configs).
 - Health: `http://localhost:8001/healthcheck` checked via the compose healthcheck.
@@ -103,7 +102,6 @@ Usage notes
 -----------
 - Ports: host `8000->worker_plan`, `${PLANEXE_FRONTEND_MULTIUSER_PORT:-5001}->frontend_multi_user`, `PLANEXE_POSTGRES_PORT (default 5432)->database_postgres`; change mappings in `docker-compose.yml` if needed.
 - `.env` must exist before `docker compose up`; it is both loaded and mounted read-only. Same for `llm_config/`. If missing, start from `.env.docker-example`.
-- To relocate outputs, set `PLANEXE_HOST_RUN_DIR` (or edit the bind mount) to another host path.
 - Database: connect on `localhost:${PLANEXE_POSTGRES_PORT:-5432}` with `planexe/planexe` by default; data persists via the `database_postgres_data` volume.
 
 Example: running stack

7 changes: 0 additions & 7 deletions docker-compose.yml

@@ -25,7 +25,6 @@ x-worker_plan_database-base: &worker_plan_database_base
   environment: &worker_plan_database_env
     PLANEXE_WORKER_ID: ${PLANEXE_WORKER_ID:-worker_plan_database}
     PLANEXE_CONFIG_PATH: /app
-    PLANEXE_RUN_DIR: /app/run
     # Internal container-to-container connection always uses port 5432.
     # This is NOT affected by PLANEXE_POSTGRES_PORT (which is for host mapping only).
     PLANEXE_POSTGRES_HOST: database_postgres
@@ -38,7 +37,6 @@ x-worker_plan_database-base: &worker_plan_database_base
   volumes:
     - ./.env:/app/.env:ro
     - ./llm_config:/app/llm_config:ro
-    - ./run:/app/run
   restart: unless-stopped
 
 services:
@@ -79,15 +77,12 @@ services:
       - .env
     environment:
       PLANEXE_CONFIG_PATH: /app
-      PLANEXE_HOST_RUN_DIR: ${PLANEXE_HOST_RUN_DIR:-${PWD}/run}
-      PLANEXE_RUN_DIR: /app/run
       PLANEXE_WORKER_RELAY_PROCESS_OUTPUT: "true"
     ports:
       - "8000:8000"
     volumes:
       - ./.env:/app/.env:ro
      - ./llm_config:/app/llm_config:ro
-      - ./run:/app/run
     healthcheck:
       test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/healthcheck').read()"]
       interval: 10s
@@ -159,7 +154,6 @@ services:
     volumes:
       - ./.env:/app/.env:ro
       - ./llm_config:/app/llm_config:ro
-      - ./run:/app/run
     healthcheck:
       test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:5000/healthcheck').read()"]
       interval: 10s
@@ -201,7 +195,6 @@ services:
       - .env
     environment:
       PLANEXE_CONFIG_PATH: /app
-      PLANEXE_RUN_DIR: /app/run
       # Internal container-to-container connection always uses port 5432.
       PLANEXE_POSTGRES_HOST: database_postgres
       PLANEXE_POSTGRES_PORT: "5432"

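As noted in docker-compose.md, `worker_plan_database` and `mcp_cloud` derive `SQLALCHEMY_DATABASE_URI` from the `PLANEXE_POSTGRES_*` variables with fallbacks to `database_postgres` and `planexe/planexe` on 5432. A minimal sketch of that derivation under those compose defaults; the function name and exact URI scheme are assumptions, not the repo's actual code:

```python
import os

def derive_database_uri() -> str:
    # Fall back to the compose defaults: in-cluster host "database_postgres"
    # on 5432, with user/db/password all "planexe" unless overridden via env.
    host = os.environ.get("PLANEXE_POSTGRES_HOST", "database_postgres")
    port = os.environ.get("PLANEXE_POSTGRES_PORT", "5432")
    db = os.environ.get("PLANEXE_POSTGRES_DB", "planexe")
    user = os.environ.get("PLANEXE_POSTGRES_USER", "planexe")
    password = os.environ.get("PLANEXE_POSTGRES_PASSWORD", "planexe")
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"
```

Note that inside the compose network the port is always 5432; `PLANEXE_POSTGRES_PORT` only changes the host-side mapping, which is why the compose file pins it to `"5432"` for container-to-container connections.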
2 changes: 1 addition & 1 deletion docs/docker.md

@@ -75,7 +75,7 @@ psql -h localhost -p 5433 -U planexe -d planexe
 
 ## Environment notes
 - The worker exports logs to stdout when `PLANEXE_WORKER_RELAY_PROCESS_OUTPUT=true` (set in `docker-compose.yml`).
-- Shared volumes: `./run` is mounted into both services; `.env` and `./llm_config/` are mounted read-only. Ensure they exist on the host before starting.
+- Shared volumes: `.env` and `./llm_config/` are mounted read-only. Ensure they exist on the host before starting.
 - Database: Postgres runs in `database_postgres` and listens on host `${PLANEXE_POSTGRES_PORT:-5432}` mapped to container `5432`; data is persisted in the named volume `database_postgres_data`.
 - Multiuser UI: binds to container port `5000`, exposed on host `${PLANEXE_FRONTEND_MULTIUSER_PORT:-5001}`.
 - MCP server downloads: set `PLANEXE_MCP_PUBLIC_BASE_URL` so clients receive a reachable `/download/...` URL (defaults to `http://localhost:8001` in compose).

2 changes: 0 additions & 2 deletions docs/railway.md

@@ -41,8 +41,6 @@ PLANEXE_OAUTH_GOOGLE_CLIENT_SECRET="secret"
 PLANEXE_POSTGRES_HOST="databasepostgres.railway.internal"
 PLANEXE_POSTGRES_PASSWORD="secret"
 PLANEXE_STRIPE_SECRET_KEY="secret"
-POSTGRES_DATABASE_HOST="secret"
-POSTGRES_DATABASE_PUBLIC_PORT="secret"
 ```
 
 Generate `<a-strong-random-string>` with `openssl rand -hex 32`.

1 change: 0 additions & 1 deletion frontend_multi_user/railway.md

@@ -27,7 +27,6 @@ PLANEXE_API_KEY_SECRET="${{shared.PLANEXE_API_KEY_SECRET}}"
 PLANEXE_DATABASE_WORKER_API_KEY="${{shared.PLANEXE_DATABASE_WORKER_API_KEY}}"
 PLANEXE_DATABASE_WORKER_URL="${{shared.PLANEXE_DATABASE_WORKER_URL}}"
 PLANEXE_POSTGRES_HOST="${{shared.PLANEXE_POSTGRES_HOST}}"
-POSTGRES_DATABASE_HOST="${{shared.POSTGRES_DATABASE_HOST}}"
 ```
 
 ## Session / admin login (production)

14 changes: 0 additions & 14 deletions frontend_multi_user/src/app.py

@@ -58,10 +58,6 @@
 
 from src.utils import CREDIT_SCALE, to_credit_decimal, format_credit_display
 
-RUN_DIR = "run"
-
-SHOW_DEMO_PLAN = False
-
 DEMO_FORM_RUN_PROMPT_UUIDS = [
     "ab700769-c3ba-4f8a-913d-8589fea4624e",
     "00e1c738-a663-476a-b950-62785922f6f0",
@@ -270,15 +266,6 @@ def __init__(self):
         self.planexe_project_root = Path(__file__).parent.parent.parent.absolute()
         logger.info(f"MyFlaskApp.__init__. planexe_project_root: {self.planexe_project_root!r}")
 
-        override_planexe_run_dir = self.planexe_dotenv.get_absolute_path_to_dir(DotEnvKeyEnum.PLANEXE_RUN_DIR.value)
-        if isinstance(override_planexe_run_dir, Path):
-            debug_planexe_run_dir = 'override'
-            self.planexe_run_dir = override_planexe_run_dir
-        else:
-            debug_planexe_run_dir = 'default'
-            self.planexe_run_dir = self.planexe_project_root / RUN_DIR
-        logger.info(f"MyFlaskApp.__init__. planexe_run_dir ({debug_planexe_run_dir}): {self.planexe_run_dir!r}")
-
         self.worker_plan_url = (os.environ.get("PLANEXE_WORKER_PLAN_URL") or "http://worker_plan:8000").rstrip("/")
         logger.info(f"MyFlaskApp.__init__. worker_plan_url: {self.worker_plan_url}")
 
@@ -688,7 +675,6 @@ def load_user(user_id):
         self.app.config['PUBLIC_BASE_URL'] = self.public_base_url
         self.app.config['OAUTH_PROVIDERS'] = self.oauth_providers
         self.app.config['WORKER_PLAN_URL'] = self.worker_plan_url
-        self.app.config['PLANEXE_RUN_DIR'] = self.planexe_run_dir
         self.app.config['PLANEXE_PROJECT_ROOT'] = self.planexe_project_root
         self.app.config['PATH_TO_PYTHON'] = self.path_to_python
         self.app.config['PROMPT_CATALOG'] = self.prompt_catalog

13 changes: 1 addition & 12 deletions frontend_multi_user/src/plan_routes.py

@@ -9,7 +9,7 @@
 from decimal import Decimal
 from typing import Any, Optional
 
-from flask import Blueprint, current_app, jsonify, make_response, redirect, render_template, request, send_file, url_for
+from flask import Blueprint, current_app, jsonify, make_response, redirect, render_template, request, url_for
 from flask_login import current_user, login_required
 from sqlalchemy import func
 from sqlalchemy.exc import DataError
@@ -45,8 +45,6 @@
 
 plan_routes_bp = Blueprint("plan_routes", __name__)
 
-SHOW_DEMO_PLAN = False
-
 
 def _new_model(model_cls: Any, **kwargs: Any) -> Any:
     from typing import cast
@@ -903,15 +901,6 @@ def viewplan():
         logger.warning("Unauthorized report access attempt. plan_id=%s user_id=%s", plan_id, current_user.id)
         return jsonify({"error": "Forbidden"}), 403
 
-    if SHOW_DEMO_PLAN:
-        planexe_run_dir = current_app.config["PLANEXE_RUN_DIR"]
-        demo_plan_id = "20250524_universal_manufacturing"
-        demo_plan_dir = (planexe_run_dir / demo_plan_id).absolute()
-        path_to_html_file = demo_plan_dir / FilenameEnum.REPORT.value
-        if not path_to_html_file.exists():
-            return jsonify({"error": "Demo report not found"}), 404
-        return send_file(str(path_to_html_file), mimetype="text/html")
-
     if not plan.generated_report_html:
         logger.error("Report HTML not found for plan_id=%s", plan_id)
         return jsonify({"error": "Report not available"}), 404

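With the demo/file-system path removed, `viewplan` serves reports exclusively from the `generated_report_html` column. A stripped-down sketch of that remaining logic; the `Plan` stand-in and `render_report` helper are illustrative, not the actual Flask view:

```python
class Plan:
    """Stand-in for the PlanItem row; only the field used here is from the source."""
    def __init__(self, generated_report_html=None):
        self.generated_report_html = generated_report_html

def render_report(plan):
    # Serve the report straight from the DB column; there is no
    # filesystem fallback now that the run-dir plumbing is gone.
    if not plan.generated_report_html:
        return ({"error": "Report not available"}, 404)
    return (plan.generated_report_html, 200)
```

The design consequence is that the frontend no longer needs any shared volume with the worker: once the worker has written the report into Postgres, every service that can reach the DB can serve it.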
7 changes: 3 additions & 4 deletions mcp_cloud/AGENTS.md

@@ -364,10 +364,9 @@ The same `_get_download_base_url()` function is used to build both `download_url
 example: three isolated try/except blocks for HTTP, DB, and zip snapshot.
 
 ## Worker HTTP fallback ordering
-- When resolving file lists or artifacts, try fast local sources first:
-  1. DB (`fetch_report_from_db` / `list_files_from_zip_snapshot` / `fetch_file_from_zip_snapshot`)
-  2. Local run directory (`list_files_from_local_run_dir`)
-  3. Worker HTTP (`fetch_file_list_from_worker_plan` / `fetch_artifact_from_worker_plan`)
+- When resolving file lists or artifacts, check the DB first and fall back to the worker for anything not yet persisted:
+  1. DB (`fetch_report_from_db` / `list_files_from_zip_snapshot` / `fetch_file_from_zip_snapshot`) — for completed/snapshotted plans
+  2. Worker HTTP (`fetch_file_list_from_worker_plan` / `fetch_artifact_from_worker_plan`) — the live source for in-flight plans
 - The report fallback chain (`_fetch_report_with_fallbacks`) follows this same
   DB-first convention: DB → zip snapshot → HTTP. This matches the zip path
   (`fetch_user_downloadable_zip`) which also tries the DB before HTTP.

1 change: 0 additions & 1 deletion mcp_cloud/Dockerfile

@@ -3,7 +3,6 @@ FROM python:3.13-slim
 ENV PYTHONDONTWRITEBYTECODE=1 \
     PYTHONUNBUFFERED=1 \
     PLANEXE_CONFIG_PATH=/app \
-    PLANEXE_RUN_DIR=/app/run \
     PIP_NO_CACHE_DIR=1 \
     PIP_PREFER_BINARY=1 \
     PYTHONPATH=/app:/app/mcp_cloud:/app/database_api

7 changes: 3 additions & 4 deletions mcp_cloud/README.md

@@ -419,10 +419,9 @@ Client → FastAPI (http_server.py)
 
 When fetching plan files (reports, zips), `worker_fetchers.py` tries sources in priority order:
 
-1. **DB** (`PlanItem.generated_report_html` / `PlanItem.run_zip_snapshot`) — fastest, always available for completed plans
-2. **Zip snapshot extraction** — extracts individual files from the DB zip snapshot
-3. **Local run directory** (`PLANEXE_RUN_DIR/{run_id}/`) — low latency if co-located
-4. **Worker HTTP** (`PLANEXE_WORKER_PLAN_URL/runs/{run_id}/...`) — last-resort fallback (10s timeout, 3s connect)
+1. **DB** (`PlanItem.generated_report_html` / `PlanItem.run_zip_snapshot`) — served directly from Postgres; available once a plan has completed (or a snapshot has been persisted)
+2. **Zip snapshot extraction** — pulls an individual file out of the DB-stored zip snapshot
+3. **Worker HTTP** (`PLANEXE_WORKER_PLAN_URL/runs/{run_id}/...`) — the live source for in-flight plans and anything not yet persisted to the DB (10s timeout, 3s connect)
 
 This layered approach avoids blocking on the worker HTTP API when the data is already available locally or in the database.

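The DB → zip snapshot → worker HTTP ordering described in this README change can be expressed as a generic first-non-empty chain. The source names below echo `worker_fetchers.py`, but this helper and its signature are a simplified assumption rather than the repo's actual code:

```python
def fetch_with_fallbacks(run_id, sources):
    """Try each (name, fetch) pair in priority order; return the first hit."""
    for name, fetch in sources:
        try:
            result = fetch(run_id)
        except Exception:
            continue  # an unreachable source must not break the chain
        if result:
            return name, result
    return None, None

# Hypothetical usage: DB and zip snapshot miss, so the live worker answers.
sources = [
    ("db", lambda rid: None),         # report not yet persisted
    ("zip", lambda rid: None),        # no snapshot stored
    ("http", lambda rid: b"report"),  # worker HTTP has the in-flight data
]
```

The isolated `try/except` per source mirrors the convention called out in `mcp_cloud/AGENTS.md`: a failure in one tier (say, a worker HTTP timeout) degrades gracefully to "not found" rather than aborting the whole lookup.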
2 changes: 0 additions & 2 deletions mcp_cloud/app.py

@@ -17,7 +17,6 @@
     ensure_planitem_stop_columns,
     PLANEXE_SERVER_INSTRUCTIONS,
     mcp_cloud_server as mcp_cloud,
-    BASE_DIR_RUN,
     WORKER_PLAN_URL,
     REPORT_FILENAME,
     REPORT_CONTENT_TYPE,
@@ -88,7 +87,6 @@
 from mcp_cloud.worker_fetchers import (  # noqa: F401
     fetch_artifact_from_worker_plan,
     fetch_file_list_from_worker_plan,
-    list_files_from_local_run_dir,
     fetch_zip_from_worker_plan,
     fetch_user_downloadable_zip,
 )

3 changes: 0 additions & 3 deletions mcp_cloud/db_setup.py

@@ -247,9 +247,6 @@ def ensure_last_progress_at_column() -> None:
 
 mcp_cloud_server = Server("planexe-mcp-cloud", instructions=PLANEXE_SERVER_INSTRUCTIONS)
 
-# Base directory for run artifacts (not used directly, fetched via worker_plan HTTP API)
-BASE_DIR_RUN = Path(os.environ.get("PLANEXE_RUN_DIR", Path(__file__).parent.parent / "run")).resolve()
-
 WORKER_PLAN_URL = os.environ.get("PLANEXE_WORKER_PLAN_URL", "http://worker_plan:8000")
 
 REPORT_FILENAME = "report.html"

3 changes: 0 additions & 3 deletions mcp_cloud/handlers.py

@@ -50,7 +50,6 @@
 from mcp_cloud.worker_fetchers import (
     fetch_artifact_from_worker_plan,
     fetch_file_list_from_worker_plan,
-    list_files_from_local_run_dir,
     fetch_user_downloadable_zip,
 )
 from mcp_cloud.model_profiles import _get_model_profiles_sync
@@ -305,8 +304,6 @@ async def handle_plan_status(arguments: dict[str, Any]) -> CallToolResult:
     files = []
     if plan_uuid:
         files_list = await asyncio.to_thread(list_files_from_zip_snapshot, plan_uuid)
-        if not files_list:
-            files_list = await asyncio.to_thread(list_files_from_local_run_dir, plan_uuid)
         if not files_list:
             try:
                 files_list = await asyncio.wait_for(

3 changes: 0 additions & 3 deletions mcp_cloud/tests/test_plan_status_tool.py

@@ -53,9 +53,6 @@ def test_plan_status_falls_back_to_zip_snapshot_files_when_primary_source_empty(
     ), patch(
         "mcp_cloud.handlers.list_files_from_zip_snapshot",
         return_value=[("001-2-plan.txt", "2026-03-08T23:49:53Z"), ("log.txt", "2026-03-08T23:50:00Z")],
-    ), patch(
-        "mcp_cloud.handlers.list_files_from_local_run_dir",
-        return_value=None,
-    ):
+    ):
         result = asyncio.run(handle_plan_status({"plan_id": plan_id}))
 