diff --git a/.env.developer-example b/.env.developer-example index f38765cee..3f5ad6d38 100644 --- a/.env.developer-example +++ b/.env.developer-example @@ -53,12 +53,6 @@ PLANEXE_IFRAME_GENERATOR_CONFIRMATION_PRODUCTION_URL='https://example.com/' PLANEXE_IFRAME_GENERATOR_CONFIRMATION_DEVELOPMENT_URL='https://example.com/' PLANEXE_WORKER_ID=1 -# PLANEXE_HOST_RUN_DIR="/absolute/path/to/PlanExe/run" -# Example paths: -# - macOS: /Users/you/PlanExe/run -# - Linux: /home/you/PlanExe/run -# - Windows: C:\Users\you\PlanExe\run - # mcp # PLANEXE_MCP_API_KEY='your-api-key-here' # PLANEXE_MCP_HTTP_HOST='127.0.0.1' diff --git a/.env.docker-example b/.env.docker-example index ece05167a..8b303c8ee 100644 --- a/.env.docker-example +++ b/.env.docker-example @@ -42,12 +42,6 @@ PLANEXE_FRONTEND_MULTIUSER_ADMIN_PASSWORD='admin' # PLANEXE_CREDITS_PER_PLAN='1' # PLANEXE_CREDIT_PRICE_CENTS='100' -# PLANEXE_HOST_RUN_DIR="/absolute/path/to/PlanExe/run" -# Example paths: -# - macOS: /Users/you/PlanExe/run -# - Linux: /home/you/PlanExe/run -# - Windows: C:\Users\you\PlanExe\run - # mcp # PLANEXE_MCP_API_KEY='your-api-key-here' # PLANEXE_MCP_HTTP_HOST='0.0.0.0' # bind all interfaces inside containers diff --git a/AGENTS.md b/AGENTS.md index bf1f5d211..d55497dac 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -57,8 +57,6 @@ Always check the package-level `AGENTS.md` for file-specific rules ## Docker notes - `PLANEXE_POSTGRES_PORT` changes the host port mapping only; containers still connect to Postgres on 5432. -- Keep `PLANEXE_HOST_RUN_DIR` consistent with run dir mounts so outputs land in - the expected host folder. 
## Documentation sync - When changing Docker services, env defaults, or port mappings, update diff --git a/docker-compose.md b/docker-compose.md index b86b954d3..b39e21356 100644 --- a/docker-compose.md +++ b/docker-compose.md @@ -4,7 +4,7 @@ Docker Compose for PlanExe TL;DR ----- - Services: `database_postgres` (DB on `${PLANEXE_POSTGRES_PORT:-5432}`), `worker_plan` (API on 8000), `frontend_multi_user` (UI on `${PLANEXE_FRONTEND_MULTIUSER_PORT:-5001}`), plus DB workers (`worker_plan_database_1/2/3` by default; `worker_plan_database` in `manual` profile), and `mcp_cloud` (MCP interface, stdio); `frontend_multi_user` waits for Postgres and worker health. -- Shared host files: `.env` and `./llm_config/` mounted read-only; `./run` bind-mounted so outputs persist; `.env` is also loaded via `env_file`. +- Shared host files: `.env` and `./llm_config/` mounted read-only; `.env` is also loaded via `env_file`. - Postgres defaults to user/db/password `planexe`; override via env or `.env`; data lives in the `database_postgres_data` volume. - Env defaults live in `docker-compose.yml` but can be overridden in `.env` or your shell (URLs, timeouts, run dirs, optional auth). - `develop.watch` syncs code/config for `worker_plan`; rebuild with `--no-cache` after big moves or dependency changes; restart policy is `unless-stopped`. @@ -32,7 +32,6 @@ Why compose (escaping dependency hell) What compose sets up -------------------- - Reusable local stack with consistent env/paths under `/app` in each container. -- Shared run dir: `PLANEXE_RUN_DIR=/app/run` in the containers, bound to `${PLANEXE_HOST_RUN_DIR:-${PWD}/run}` on the host so outputs persist. - Postgres data volume: `database_postgres_data` keeps the database files outside the repo tree. Service: `database_postgres` (Postgres DB) @@ -70,9 +69,9 @@ Service: `worker_plan` (pipeline API) ------------------------------------- - Purpose: runs the PlanExe pipeline and exposes the API on port 8000; the frontend depends on its health. 
- Build: `worker_plan/Dockerfile`. -- Env: `PLANEXE_CONFIG_PATH=/app`, `PLANEXE_RUN_DIR=/app/run`, `PLANEXE_HOST_RUN_DIR=${PWD}/run`, `PLANEXE_WORKER_RELAY_PROCESS_OUTPUT=true`. +- Env: `PLANEXE_CONFIG_PATH=/app`, `PLANEXE_WORKER_RELAY_PROCESS_OUTPUT=true`. - Health: `http://localhost:8000/healthcheck` checked via the compose healthcheck. -- Volumes: `.env` (ro), `llm_config/` (ro), `run/` (rw). +- Volumes: `.env` (ro), `llm_config/` (ro). - Watch: sync `worker_plan/` into `/app/worker_plan`, rebuild on `worker_plan/pyproject.toml`, restart on compose edits. Service: `worker_plan_database` (DB-backed worker) @@ -80,8 +79,8 @@ Service: `worker_plan_database` (DB-backed worker) - Purpose: polls `PlanItem` rows in Postgres, marks them processing, runs the PlanExe pipeline, and writes progress/events back to the DB; no HTTP port exposed. - Build: `worker_plan_database/Dockerfile` (ships `worker_plan` code, shared `database_api` models, and this worker subclass). - Depends on: `database_postgres` health. -- Env defaults: derives `SQLALCHEMY_DATABASE_URI` from `PLANEXE_POSTGRES_HOST|PORT|DB|USER|PASSWORD` (fallbacks to `database_postgres` + `planexe/planexe` on 5432); `PLANEXE_CONFIG_PATH=/app`, `PLANEXE_RUN_DIR=/app/run`; MachAI confirmation URLs default to `https://example.com/iframe_generator_confirmation` for both `PLANEXE_IFRAME_GENERATOR_CONFIRMATION_PRODUCTION_URL` and `PLANEXE_IFRAME_GENERATOR_CONFIRMATION_DEVELOPMENT_URL` (override with real endpoints). -- Volumes: `.env` (ro), `llm_config/` (ro), `run/` (rw for pipeline output). +- Env defaults: derives `SQLALCHEMY_DATABASE_URI` from `PLANEXE_POSTGRES_HOST|PORT|DB|USER|PASSWORD` (fallbacks to `database_postgres` + `planexe/planexe` on 5432); `PLANEXE_CONFIG_PATH=/app`; MachAI confirmation URLs default to `https://example.com/iframe_generator_confirmation` for both `PLANEXE_IFRAME_GENERATOR_CONFIRMATION_PRODUCTION_URL` and `PLANEXE_IFRAME_GENERATOR_CONFIRMATION_DEVELOPMENT_URL` (override with real endpoints). 
+- Volumes: `.env` (ro), `llm_config/` (ro). Pipeline output stays inside the container; the worker persists final artifacts via the DB. - Entrypoint: `python -m worker_plan_database.app` (runs the long-lived poller loop). - Multiple workers: compose defines `worker_plan_database_1/2/3` with `PLANEXE_WORKER_ID` set to `1/2/3`. Start the trio with: - `docker compose up -d worker_plan_database_1 worker_plan_database_2 worker_plan_database_3` @@ -92,7 +91,7 @@ Service: `mcp_cloud` (MCP interface) - Purpose: Model Context Protocol (MCP) server that provides a standardized interface for AI agents and developer tools to interact with PlanExe. Communicates with `worker_plan_database` via the shared Postgres database. - Build: `mcp_cloud/Dockerfile` (ships shared `database_api` models and the MCP server implementation). - Depends on: `database_postgres` and `worker_plan` health. -- Env defaults: derives `SQLALCHEMY_DATABASE_URI` from `PLANEXE_POSTGRES_HOST|PORT|DB|USER|PASSWORD` (fallbacks to `database_postgres` + `planexe/planexe` on 5432); `PLANEXE_CONFIG_PATH=/app`, `PLANEXE_RUN_DIR=/app/run`; `PLANEXE_MCP_HTTP_HOST=0.0.0.0`, `PLANEXE_MCP_HTTP_PORT=8001`; `PLANEXE_MCP_PUBLIC_BASE_URL=http://localhost:8001` for report download URLs; `PLANEXE_MCP_REQUIRE_AUTH=false` by default. +- Env defaults: derives `SQLALCHEMY_DATABASE_URI` from `PLANEXE_POSTGRES_HOST|PORT|DB|USER|PASSWORD` (fallbacks to `database_postgres` + `planexe/planexe` on 5432); `PLANEXE_CONFIG_PATH=/app`; `PLANEXE_MCP_HTTP_HOST=0.0.0.0`, `PLANEXE_MCP_HTTP_PORT=8001`; `PLANEXE_MCP_PUBLIC_BASE_URL=http://localhost:8001` for report download URLs; `PLANEXE_MCP_REQUIRE_AUTH=false` by default. - Ports: host `${PLANEXE_MCP_HTTP_PORT:-8001}` -> container `8001`. - Volumes: `llm_config/` (ro for provider configs). - Health: `http://localhost:8001/healthcheck` checked via the compose healthcheck. 
@@ -103,7 +102,6 @@ Usage notes ----------- - Ports: host `8000->worker_plan`, `${PLANEXE_FRONTEND_MULTIUSER_PORT:-5001}->frontend_multi_user`, `PLANEXE_POSTGRES_PORT (default 5432)->database_postgres`; change mappings in `docker-compose.yml` if needed. - `.env` must exist before `docker compose up`; it is both loaded and mounted read-only. Same for `llm_config/`. If missing, start from `.env.docker-example`. -- To relocate outputs, set `PLANEXE_HOST_RUN_DIR` (or edit the bind mount) to another host path. - Database: connect on `localhost:${PLANEXE_POSTGRES_PORT:-5432}` with `planexe/planexe` by default; data persists via the `database_postgres_data` volume. Example: running stack diff --git a/docker-compose.yml b/docker-compose.yml index 656d9501f..099acef19 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -25,7 +25,6 @@ x-worker_plan_database-base: &worker_plan_database_base environment: &worker_plan_database_env PLANEXE_WORKER_ID: ${PLANEXE_WORKER_ID:-worker_plan_database} PLANEXE_CONFIG_PATH: /app - PLANEXE_RUN_DIR: /app/run # Internal container-to-container connection always uses port 5432. # This is NOT affected by PLANEXE_POSTGRES_PORT (which is for host mapping only). 
PLANEXE_POSTGRES_HOST: database_postgres @@ -38,7 +37,6 @@ x-worker_plan_database-base: &worker_plan_database_base volumes: - ./.env:/app/.env:ro - ./llm_config:/app/llm_config:ro - - ./run:/app/run restart: unless-stopped services: @@ -79,15 +77,12 @@ services: - .env environment: PLANEXE_CONFIG_PATH: /app - PLANEXE_HOST_RUN_DIR: ${PLANEXE_HOST_RUN_DIR:-${PWD}/run} - PLANEXE_RUN_DIR: /app/run PLANEXE_WORKER_RELAY_PROCESS_OUTPUT: "true" ports: - "8000:8000" volumes: - ./.env:/app/.env:ro - ./llm_config:/app/llm_config:ro - - ./run:/app/run healthcheck: test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/healthcheck').read()"] interval: 10s @@ -159,7 +154,6 @@ services: volumes: - ./.env:/app/.env:ro - ./llm_config:/app/llm_config:ro - - ./run:/app/run healthcheck: test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:5000/healthcheck').read()"] interval: 10s @@ -201,7 +195,6 @@ services: - .env environment: PLANEXE_CONFIG_PATH: /app - PLANEXE_RUN_DIR: /app/run # Internal container-to-container connection always uses port 5432. PLANEXE_POSTGRES_HOST: database_postgres PLANEXE_POSTGRES_PORT: "5432" diff --git a/docs/docker.md b/docs/docker.md index 4f7cf3768..655029635 100644 --- a/docs/docker.md +++ b/docs/docker.md @@ -75,7 +75,7 @@ psql -h localhost -p 5433 -U planexe -d planexe ## Environment notes - The worker exports logs to stdout when `PLANEXE_WORKER_RELAY_PROCESS_OUTPUT=true` (set in `docker-compose.yml`). -- Shared volumes: `./run` is mounted into both services; `.env` and `./llm_config/` are mounted read-only. Ensure they exist on the host before starting.*** +- Shared volumes: `.env` and `./llm_config/` are mounted read-only. Ensure they exist on the host before starting. - Database: Postgres runs in `database_postgres` and listens on host `${PLANEXE_POSTGRES_PORT:-5432}` mapped to container `5432`; data is persisted in the named volume `database_postgres_data`. 
- Multiuser UI: binds to container port `5000`, exposed on host `${PLANEXE_FRONTEND_MULTIUSER_PORT:-5001}`. - MCP server downloads: set `PLANEXE_MCP_PUBLIC_BASE_URL` so clients receive a reachable `/download/...` URL (defaults to `http://localhost:8001` in compose). diff --git a/docs/railway.md b/docs/railway.md index dba4b57ff..adfd9cdd4 100644 --- a/docs/railway.md +++ b/docs/railway.md @@ -41,8 +41,6 @@ PLANEXE_OAUTH_GOOGLE_CLIENT_SECRET="secret" PLANEXE_POSTGRES_HOST="databasepostgres.railway.internal" PLANEXE_POSTGRES_PASSWORD="secret" PLANEXE_STRIPE_SECRET_KEY="secret" -POSTGRES_DATABASE_HOST="secret" -POSTGRES_DATABASE_PUBLIC_PORT="secret" ``` Generate `` with `openssl rand -hex 32`. diff --git a/frontend_multi_user/railway.md b/frontend_multi_user/railway.md index ed941d09e..4d1a29baf 100644 --- a/frontend_multi_user/railway.md +++ b/frontend_multi_user/railway.md @@ -27,7 +27,6 @@ PLANEXE_API_KEY_SECRET="${{shared.PLANEXE_API_KEY_SECRET}}" PLANEXE_DATABASE_WORKER_API_KEY="${{shared.PLANEXE_DATABASE_WORKER_API_KEY}}" PLANEXE_DATABASE_WORKER_URL="${{shared.PLANEXE_DATABASE_WORKER_URL}}" PLANEXE_POSTGRES_HOST="${{shared.PLANEXE_POSTGRES_HOST}}" -POSTGRES_DATABASE_HOST="${{shared.POSTGRES_DATABASE_HOST}}" ``` ## Session / admin login (production) diff --git a/frontend_multi_user/src/app.py b/frontend_multi_user/src/app.py index dfae36b94..85da35912 100644 --- a/frontend_multi_user/src/app.py +++ b/frontend_multi_user/src/app.py @@ -58,10 +58,6 @@ from src.utils import CREDIT_SCALE, to_credit_decimal, format_credit_display -RUN_DIR = "run" - -SHOW_DEMO_PLAN = False - DEMO_FORM_RUN_PROMPT_UUIDS = [ "ab700769-c3ba-4f8a-913d-8589fea4624e", "00e1c738-a663-476a-b950-62785922f6f0", @@ -270,15 +266,6 @@ def __init__(self): self.planexe_project_root = Path(__file__).parent.parent.parent.absolute() logger.info(f"MyFlaskApp.__init__. 
planexe_project_root: {self.planexe_project_root!r}") - override_planexe_run_dir = self.planexe_dotenv.get_absolute_path_to_dir(DotEnvKeyEnum.PLANEXE_RUN_DIR.value) - if isinstance(override_planexe_run_dir, Path): - debug_planexe_run_dir = 'override' - self.planexe_run_dir = override_planexe_run_dir - else: - debug_planexe_run_dir = 'default' - self.planexe_run_dir = self.planexe_project_root / RUN_DIR - logger.info(f"MyFlaskApp.__init__. planexe_run_dir ({debug_planexe_run_dir}): {self.planexe_run_dir!r}") - self.worker_plan_url = (os.environ.get("PLANEXE_WORKER_PLAN_URL") or "http://worker_plan:8000").rstrip("/") logger.info(f"MyFlaskApp.__init__. worker_plan_url: {self.worker_plan_url}") @@ -688,7 +675,6 @@ def load_user(user_id): self.app.config['PUBLIC_BASE_URL'] = self.public_base_url self.app.config['OAUTH_PROVIDERS'] = self.oauth_providers self.app.config['WORKER_PLAN_URL'] = self.worker_plan_url - self.app.config['PLANEXE_RUN_DIR'] = self.planexe_run_dir self.app.config['PLANEXE_PROJECT_ROOT'] = self.planexe_project_root self.app.config['PATH_TO_PYTHON'] = self.path_to_python self.app.config['PROMPT_CATALOG'] = self.prompt_catalog diff --git a/frontend_multi_user/src/plan_routes.py b/frontend_multi_user/src/plan_routes.py index 9e32341df..314a13d9d 100644 --- a/frontend_multi_user/src/plan_routes.py +++ b/frontend_multi_user/src/plan_routes.py @@ -9,7 +9,7 @@ from decimal import Decimal from typing import Any, Optional -from flask import Blueprint, current_app, jsonify, make_response, redirect, render_template, request, send_file, url_for +from flask import Blueprint, current_app, jsonify, make_response, redirect, render_template, request, url_for from flask_login import current_user, login_required from sqlalchemy import func from sqlalchemy.exc import DataError @@ -45,8 +45,6 @@ plan_routes_bp = Blueprint("plan_routes", __name__) -SHOW_DEMO_PLAN = False - def _new_model(model_cls: Any, **kwargs: Any) -> Any: from typing import cast @@ -903,15 +901,6 @@ 
def viewplan(): logger.warning("Unauthorized report access attempt. plan_id=%s user_id=%s", plan_id, current_user.id) return jsonify({"error": "Forbidden"}), 403 - if SHOW_DEMO_PLAN: - planexe_run_dir = current_app.config["PLANEXE_RUN_DIR"] - demo_plan_id = "20250524_universal_manufacturing" - demo_plan_dir = (planexe_run_dir / demo_plan_id).absolute() - path_to_html_file = demo_plan_dir / FilenameEnum.REPORT.value - if not path_to_html_file.exists(): - return jsonify({"error": "Demo report not found"}), 404 - return send_file(str(path_to_html_file), mimetype="text/html") - if not plan.generated_report_html: logger.error("Report HTML not found for plan_id=%s", plan_id) return jsonify({"error": "Report not available"}), 404 diff --git a/mcp_cloud/AGENTS.md b/mcp_cloud/AGENTS.md index 2dd5177a4..fc53f4b3e 100644 --- a/mcp_cloud/AGENTS.md +++ b/mcp_cloud/AGENTS.md @@ -364,10 +364,9 @@ The same `_get_download_base_url()` function is used to build both `download_url example: three isolated try/except blocks for HTTP, DB, and zip snapshot. ## Worker HTTP fallback ordering -- When resolving file lists or artifacts, try fast local sources first: - 1. DB (`fetch_report_from_db` / `list_files_from_zip_snapshot` / `fetch_file_from_zip_snapshot`) - 2. Local run directory (`list_files_from_local_run_dir`) - 3. Worker HTTP (`fetch_file_list_from_worker_plan` / `fetch_artifact_from_worker_plan`) +- When resolving file lists or artifacts, check the DB first and fall back to the worker for anything not yet persisted: + 1. DB (`fetch_report_from_db` / `list_files_from_zip_snapshot` / `fetch_file_from_zip_snapshot`) — for completed/snapshotted plans + 2. Worker HTTP (`fetch_file_list_from_worker_plan` / `fetch_artifact_from_worker_plan`) — the live source for in-flight plans - The report fallback chain (`_fetch_report_with_fallbacks`) follows this same DB-first convention: DB → zip snapshot → HTTP. 
This matches the zip path (`fetch_user_downloadable_zip`) which also tries the DB before HTTP. diff --git a/mcp_cloud/Dockerfile b/mcp_cloud/Dockerfile index 2c7641694..25e81d366 100644 --- a/mcp_cloud/Dockerfile +++ b/mcp_cloud/Dockerfile @@ -3,7 +3,6 @@ FROM python:3.13-slim ENV PYTHONDONTWRITEBYTECODE=1 \ PYTHONUNBUFFERED=1 \ PLANEXE_CONFIG_PATH=/app \ - PLANEXE_RUN_DIR=/app/run \ PIP_NO_CACHE_DIR=1 \ PIP_PREFER_BINARY=1 \ PYTHONPATH=/app:/app/mcp_cloud:/app/database_api diff --git a/mcp_cloud/README.md b/mcp_cloud/README.md index 6eceb6959..6e6d45c09 100644 --- a/mcp_cloud/README.md +++ b/mcp_cloud/README.md @@ -419,10 +419,9 @@ Client → FastAPI (http_server.py) When fetching plan files (reports, zips), `worker_fetchers.py` tries sources in priority order: -1. **DB** (`PlanItem.generated_report_html` / `PlanItem.run_zip_snapshot`) — fastest, always available for completed plans -2. **Zip snapshot extraction** — extracts individual files from the DB zip snapshot -3. **Local run directory** (`PLANEXE_RUN_DIR/{run_id}/`) — low latency if co-located -4. **Worker HTTP** (`PLANEXE_WORKER_PLAN_URL/runs/{run_id}/...`) — last-resort fallback (10s timeout, 3s connect) +1. **DB** (`PlanItem.generated_report_html` / `PlanItem.run_zip_snapshot`) — served directly from Postgres; available once a plan has completed (or a snapshot has been persisted) +2. **Zip snapshot extraction** — pulls an individual file out of the DB-stored zip snapshot +3. **Worker HTTP** (`PLANEXE_WORKER_PLAN_URL/runs/{run_id}/...`) — the live source for in-flight plans and anything not yet persisted to the DB (10s timeout, 3s connect) This layered approach avoids blocking on the worker HTTP API when the data is already available locally or in the database. 
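The DB → zip snapshot → worker HTTP ordering the README describes can be sketched as a simple first-hit fallback chain. This is a hypothetical illustration, not code from the PlanExe repo: the stub fetchers stand in for the real `fetch_report_from_db`, `fetch_file_from_zip_snapshot`, and `fetch_artifact_from_worker_plan` helpers named in this diff.

```python
from typing import Callable, Optional, Sequence

# One fetcher per source, in priority order; each returns bytes on a hit
# or None to let the next source try.
Fetcher = Callable[[str], Optional[bytes]]


def fetch_with_fallbacks(run_id: str, fetchers: Sequence[Fetcher]) -> Optional[bytes]:
    """Return the first non-None result, trying sources in priority order."""
    for fetch in fetchers:
        result = fetch(run_id)
        if result is not None:
            return result
    return None


# Stub wiring: DB first, then zip snapshot, then the live worker over HTTP.
def fetch_from_db(run_id: str) -> Optional[bytes]:
    return None  # stub: plan not yet persisted to Postgres


def fetch_from_zip_snapshot(run_id: str) -> Optional[bytes]:
    return None  # stub: no zip snapshot stored either


def fetch_from_worker_http(run_id: str) -> Optional[bytes]:
    return b"<html>in-flight report</html>"  # stub: live worker answers


report = fetch_with_fallbacks(
    "run-123", [fetch_from_db, fetch_from_zip_snapshot, fetch_from_worker_http]
)
```

Because the chain short-circuits on the first hit, completed plans are served straight from the DB and the worker HTTP call only fires for in-flight runs, matching the behavior described above.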
diff --git a/mcp_cloud/app.py b/mcp_cloud/app.py index b7687c1f3..dda4648d3 100644 --- a/mcp_cloud/app.py +++ b/mcp_cloud/app.py @@ -17,7 +17,6 @@ ensure_planitem_stop_columns, PLANEXE_SERVER_INSTRUCTIONS, mcp_cloud_server as mcp_cloud, - BASE_DIR_RUN, WORKER_PLAN_URL, REPORT_FILENAME, REPORT_CONTENT_TYPE, @@ -88,7 +87,6 @@ from mcp_cloud.worker_fetchers import ( # noqa: F401 fetch_artifact_from_worker_plan, fetch_file_list_from_worker_plan, - list_files_from_local_run_dir, fetch_zip_from_worker_plan, fetch_user_downloadable_zip, ) diff --git a/mcp_cloud/db_setup.py b/mcp_cloud/db_setup.py index 228e5a7f5..83b08b164 100644 --- a/mcp_cloud/db_setup.py +++ b/mcp_cloud/db_setup.py @@ -247,9 +247,6 @@ def ensure_last_progress_at_column() -> None: mcp_cloud_server = Server("planexe-mcp-cloud", instructions=PLANEXE_SERVER_INSTRUCTIONS) -# Base directory for run artifacts (not used directly, fetched via worker_plan HTTP API) -BASE_DIR_RUN = Path(os.environ.get("PLANEXE_RUN_DIR", Path(__file__).parent.parent / "run")).resolve() - WORKER_PLAN_URL = os.environ.get("PLANEXE_WORKER_PLAN_URL", "http://worker_plan:8000") REPORT_FILENAME = "report.html" diff --git a/mcp_cloud/handlers.py b/mcp_cloud/handlers.py index 4456747b8..0c76218d3 100644 --- a/mcp_cloud/handlers.py +++ b/mcp_cloud/handlers.py @@ -50,7 +50,6 @@ from mcp_cloud.worker_fetchers import ( fetch_artifact_from_worker_plan, fetch_file_list_from_worker_plan, - list_files_from_local_run_dir, fetch_user_downloadable_zip, ) from mcp_cloud.model_profiles import _get_model_profiles_sync @@ -305,8 +304,6 @@ async def handle_plan_status(arguments: dict[str, Any]) -> CallToolResult: files = [] if plan_uuid: files_list = await asyncio.to_thread(list_files_from_zip_snapshot, plan_uuid) - if not files_list: - files_list = await asyncio.to_thread(list_files_from_local_run_dir, plan_uuid) if not files_list: try: files_list = await asyncio.wait_for( diff --git a/mcp_cloud/tests/test_plan_status_tool.py 
b/mcp_cloud/tests/test_plan_status_tool.py index b86d598be..4201dab57 100644 --- a/mcp_cloud/tests/test_plan_status_tool.py +++ b/mcp_cloud/tests/test_plan_status_tool.py @@ -53,9 +53,6 @@ def test_plan_status_falls_back_to_zip_snapshot_files_when_primary_source_empty( ), patch( "mcp_cloud.handlers.list_files_from_zip_snapshot", return_value=[("001-2-plan.txt", "2026-03-08T23:49:53Z"), ("log.txt", "2026-03-08T23:50:00Z")], - ), patch( - "mcp_cloud.handlers.list_files_from_local_run_dir", - return_value=None, ): result = asyncio.run(handle_plan_status({"plan_id": plan_id})) diff --git a/mcp_cloud/worker_fetchers.py b/mcp_cloud/worker_fetchers.py index 630e45cf8..ba28df801 100644 --- a/mcp_cloud/worker_fetchers.py +++ b/mcp_cloud/worker_fetchers.py @@ -12,7 +12,6 @@ from worker_plan_api.format_datetime import format_datetime_utc from mcp_cloud.db_setup import ( - BASE_DIR_RUN, REPORT_FILENAME, WORKER_PLAN_URL, ZIP_SNAPSHOT_MAX_BYTES, @@ -178,36 +177,6 @@ async def fetch_file_list_from_worker_plan(run_id: str) -> Optional[list[tuple[s return None -def list_files_from_local_run_dir(run_id: str) -> Optional[list[tuple[str, str]]]: - """ - List files from local run directory when this service shares PLANEXE_RUN_DIR - with the worker (e.g., Docker compose). - - Returns list of (filename, ISO-8601 UTC timestamp) tuples sorted by name, - or None if the directory does not exist. 
- """ - - run_dir = (BASE_DIR_RUN / run_id).resolve() - try: - if not run_dir.is_relative_to(BASE_DIR_RUN): - return None - except ValueError: - return None - if not run_dir.exists() or not run_dir.is_dir(): - return None - try: - results = [] - for path in run_dir.iterdir(): - if path.is_file(): - mtime = datetime.fromtimestamp(path.stat().st_mtime, tz=UTC) - mtime_str = format_datetime_utc(mtime) - results.append((path.name, mtime_str)) - results.sort(key=lambda t: t[0]) - return results - except Exception as exc: - logger.warning("Unable to list local run dir files for %s: %s", run_id, exc) - return None - async def fetch_zip_from_worker_plan(run_id: str) -> Optional[bytes]: """Fetch the zip snapshot from worker_plan via HTTP.""" try: diff --git a/worker_plan/AGENTS.md b/worker_plan/AGENTS.md index 9822a69c6..c0a869552 100644 --- a/worker_plan/AGENTS.md +++ b/worker_plan/AGENTS.md @@ -10,8 +10,8 @@ consumers. - Avoid renaming response fields like `run_id`, `run_dir`, `display_run_dir`. - Artifact contract: `/runs/{run_id}/zip` must not include `track_activity.jsonl` in downloadable zips. -- Maintain the run directory conventions (`PlanExe_...`) and environment-driven - paths (`PLANEXE_RUN_DIR`, `PLANEXE_HOST_RUN_DIR`, `PLANEXE_CONFIG_PATH`). +- Maintain the run directory conventions (`PlanExe_...`); run outputs go under + `{PLANEXE_CONFIG_PATH}/run/`. - When changing pipeline behavior, keep the subprocess invocation in `start_pipeline_subprocess` consistent with `worker_plan_internal`. - Keep `PlanExeDotEnv.load().update_os_environ()` early so `.env` overrides work. diff --git a/worker_plan/README.md b/worker_plan/README.md index e8824de5a..ee4faee1c 100644 --- a/worker_plan/README.md +++ b/worker_plan/README.md @@ -36,9 +36,7 @@ If you must stay on Python 3.14, expect source builds and potential failures; ex | --- | --- | --- | | `PLANEXE_WORKER_HOST` | `0.0.0.0` | Host address the worker binds to (only when running via `python -m worker_plan.app`). 
| | `PLANEXE_WORKER_PORT` | `8000` | Port the worker listens on (only when running via `python -m worker_plan.app`). | -| `PLANEXE_RUN_DIR` | `run` | Directory under which run output folders are created. | -| `PLANEXE_HOST_RUN_DIR` | *(unset)* | Optional host path base returned in `display_run_dir` to hint where runs live on the host. | -| `PLANEXE_CONFIG_PATH` | `.` | Working directory for the pipeline; used as the `cwd` when spawning `worker_plan_internal.plan.run_plan_pipeline`. | +| `PLANEXE_CONFIG_PATH` | `.` | Working directory for the pipeline; used as the `cwd` when spawning `worker_plan_internal.plan.run_plan_pipeline`. Run outputs are written to `{PLANEXE_CONFIG_PATH}/run/`. | | `PLANEXE_WORKER_RELAY_PROCESS_OUTPUT` | `false` | When `true`, pipe pipeline stdout/stderr to the worker logs instead of suppressing them. | | `PLANEXE_PURGE_ENABLED` | `false` | Enable the background scheduler that purges old run directories. | | `PLANEXE_PURGE_MAX_AGE_HOURS` | `1` | Maximum age (hours) of runs to delete when purging (scheduler and manual default). | diff --git a/worker_plan/app.py b/worker_plan/app.py index 88ffb8729..d7e6802e1 100644 --- a/worker_plan/app.py +++ b/worker_plan/app.py @@ -47,8 +47,7 @@ # Default to repo root so runs land in PlanExe/run when env vars aren't set. 
DEFAULT_APP_ROOT = Path(__file__).parent.parent.resolve() APP_ROOT = Path(os.environ.get("PLANEXE_CONFIG_PATH", DEFAULT_APP_ROOT)).resolve() -RUN_BASE_PATH = Path(os.environ.get("PLANEXE_RUN_DIR", APP_ROOT / "run")).resolve() -HOST_RUN_DIR_BASE = os.environ.get("PLANEXE_HOST_RUN_DIR") +RUN_BASE_PATH = (APP_ROOT / "run").resolve() RELAY_PROCESS_OUTPUT = os.environ.get("PLANEXE_WORKER_RELAY_PROCESS_OUTPUT", "false").lower() == "true" PURGE_ENABLED = os.environ.get("PLANEXE_PURGE_ENABLED", "false").lower() == "true" PURGE_MAX_AGE_HOURS = float(os.environ.get("PLANEXE_PURGE_MAX_AGE_HOURS", "1")) @@ -158,19 +157,6 @@ def has_pipeline_complete_file(path_dir: Path) -> bool: return False -def build_display_run_dir(run_dir: Path) -> str: - """ - Returns a user-facing path string for the run directory. - If PLANEXE_HOST_RUN_DIR is set, map to that base to hint where to find the run on the host. - """ - if HOST_RUN_DIR_BASE: - try: - return str(Path(HOST_RUN_DIR_BASE) / run_dir.name) - except Exception: - return str(run_dir) - return str(run_dir) - - def build_env( run_dir: Path, llm_model: str, @@ -257,12 +243,10 @@ def start_run(request: StartRunRequest) -> StartRunResponse: with process_lock: process_store[run_id] = info - display_run_dir = build_display_run_dir(run_dir) - return StartRunResponse( run_id=run_id, run_dir=str(run_dir), - display_run_dir=display_run_dir, + display_run_dir=str(run_dir), pid=process.pid, status="running", ) @@ -298,7 +282,6 @@ def run_status(run_id: str) -> RunStatusResponse: pipeline_complete = has_pipeline_complete_file(run_dir) last_update_seconds_ago = time_since_last_modification(run_dir) run_dir_exists = run_dir.exists() - display_run_dir = build_display_run_dir(run_dir) with process_lock: info = process_store.get(run_id) @@ -317,7 +300,7 @@ def run_status(run_id: str) -> RunStatusResponse: return RunStatusResponse( run_id=run_id, run_dir=str(run_dir), - display_run_dir=display_run_dir, + display_run_dir=str(run_dir), 
run_dir_exists=run_dir_exists, pid=pid, running=running, diff --git a/worker_plan/railway.md b/worker_plan/railway.md index 9271dcb5f..25d6ae98b 100644 --- a/worker_plan/railway.md +++ b/worker_plan/railway.md @@ -3,8 +3,6 @@ ``` OPENROUTER_API_KEY="${{shared.OPENROUTER_API_KEY}}" PLANEXE_CONFIG_PATH="/app" -PLANEXE_HOST_RUN_DIR="/app/run" -PLANEXE_RUN_DIR="/app/run" PLANEXE_WORKER_RELAY_PROCESS_OUTPUT="true" PLANEXE_POSTGRES_PASSWORD="${{shared.PLANEXE_POSTGRES_PASSWORD}}" PLANEXE_LLM_CONFIG_WHITELISTED_CLASSES="${{shared.PLANEXE_LLM_CONFIG_WHITELISTED_CLASSES}}" @@ -12,7 +10,7 @@ PLANEXE_LLM_CONFIG_WHITELISTED_CLASSES="${{shared.PLANEXE_LLM_CONFIG_WHITELISTED ## Volume - None -The `worker_plan` gets initialized via env vars. It does write to disk inside the `run` dir. +The `worker_plan` gets initialized via env vars. ## Settings - Private Networking diff --git a/worker_plan/worker_plan_api/planexe_dotenv.py b/worker_plan/worker_plan_api/planexe_dotenv.py index 92910641d..76397b6be 100644 --- a/worker_plan/worker_plan_api/planexe_dotenv.py +++ b/worker_plan/worker_plan_api/planexe_dotenv.py @@ -16,7 +16,6 @@ class DotEnvKeyEnum(str, Enum): PATH_TO_PYTHON = "PATH_TO_PYTHON" - PLANEXE_RUN_DIR = "PLANEXE_RUN_DIR" @dataclass class PlanExeDotEnv: diff --git a/worker_plan/worker_plan_internal/utils/planexe_dotenv.py b/worker_plan/worker_plan_internal/utils/planexe_dotenv.py index bd8cc8116..fdc51bc1e 100644 --- a/worker_plan/worker_plan_internal/utils/planexe_dotenv.py +++ b/worker_plan/worker_plan_internal/utils/planexe_dotenv.py @@ -16,7 +16,6 @@ class DotEnvKeyEnum(str, Enum): PATH_TO_PYTHON = "PATH_TO_PYTHON" - PLANEXE_RUN_DIR = "PLANEXE_RUN_DIR" @dataclass class PlanExeDotEnv: diff --git a/worker_plan_database/Dockerfile b/worker_plan_database/Dockerfile index 76aebae94..fddfc943e 100644 --- a/worker_plan_database/Dockerfile +++ b/worker_plan_database/Dockerfile @@ -3,7 +3,6 @@ FROM python:3.13-slim ENV PYTHONDONTWRITEBYTECODE=1 \ PYTHONUNBUFFERED=1 \ 
PLANEXE_CONFIG_PATH=/app \ - PLANEXE_RUN_DIR=/app/run \ PIP_NO_CACHE_DIR=1 \ PIP_PREFER_BINARY=1 \ PYTHONPATH=/app:/app/worker_plan_database:/app/database_api diff --git a/worker_plan_database/README.md b/worker_plan_database/README.md index 0c2a0fd38..a78409d44 100644 --- a/worker_plan_database/README.md +++ b/worker_plan_database/README.md @@ -17,7 +17,7 @@ Subclass of the `worker_plan` service that runs the PlanExe pipeline with a Post - `PLANEXE_POSTGRES_HOST|PORT|DB|USER|PASSWORD` - falls back to the `database_postgres` service defaults (`planexe/planexe` on port 5432) - Logs stream to stdout with [12-factor style logging](https://12factor.net/logs). Configure with `PLANEXE_LOG_LEVEL` (defaults to `INFO`). -- Volumes mounted in compose: `./run` (pipeline output), `.env`, `./llm_config/` +- Volumes mounted in compose: `.env`, `./llm_config/` (read-only). Durable artifacts are persisted via the DB. - Entrypoint: `python -m worker_plan_database.app` ## Run locally with a venv diff --git a/worker_plan_database/app.py b/worker_plan_database/app.py index dfebcb506..030f56ac9 100644 --- a/worker_plan_database/app.py +++ b/worker_plan_database/app.py @@ -60,8 +60,7 @@ def _new_model(model_cls: Any, **kwargs: Any) -> Any: # --- Global Paths --- BASE_DIR = Path(__file__).parent.parent.absolute() -# Default to shared PLANEXE_RUN_DIR (mounted volume) so worker_plan can read outputs. -BASE_DIR_RUN = Path(os.environ.get("PLANEXE_RUN_DIR", BASE_DIR / "run")).resolve() +BASE_DIR_RUN = (BASE_DIR / "run").resolve() BASE_DIR_RUN.mkdir(exist_ok=True) PLANEXE_CONFIG_PATH_VAR = BASE_DIR
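After this patch the run directory is no longer independently configurable: both `worker_plan/app.py` and `worker_plan_database/app.py` derive it from a single root (`PLANEXE_CONFIG_PATH` when set, else the package root). A minimal sketch of the shared resolution logic, assuming the defaults shown in the diff (`resolve_run_base_path` is a hypothetical helper name, not one the patch introduces):

```python
import os
from pathlib import Path


def resolve_run_base_path(config_path: "str | None", default_root: Path) -> Path:
    """Run outputs always land in <root>/run; the root comes from
    PLANEXE_CONFIG_PATH when set, otherwise from the given default root."""
    root = Path(config_path).resolve() if config_path else default_root.resolve()
    return root / "run"


# In the containers PLANEXE_CONFIG_PATH=/app, so outputs go to /app/run.
run_base = resolve_run_base_path(os.environ.get("PLANEXE_CONFIG_PATH"), Path.cwd())
```

With `PLANEXE_RUN_DIR` and `PLANEXE_HOST_RUN_DIR` gone, there is exactly one input to this computation, which is why the compose files, Dockerfiles, and Railway configs in this diff can all drop their run-dir entries.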