Requiem is a monorepo for three related surfaces:
- a native C++ execution engine under `src/`, `include/requiem/`, and `tests/`,
- a TypeScript CLI under `packages/cli/`, and
- a Next.js operator console under `ready-layer/`.
The repository proves local build, route-contract, replay-oriented workflows, and a request-bound ReadyLayer deployment backed by shared Supabase state with a durable control-plane queue for plan jobs. It does not prove autonomous background workers, email-based invite/seat management, or enterprise SaaS workspace tenancy.
- Engine: local/native execution and deterministic test surfaces.
- CLI: local operator tooling backed by on-disk state under `~/.requiem` or repo-local `.requiem` paths, depending on command/config.
- ReadyLayer web app: an authenticated operator console with a mix of:
  - routes backed by local single-runtime control-plane state in development,
  - routes that require an external API endpoint via `REQUIEM_API_URL`, and
  - informational/demo surfaces that are explicit about degraded or stubbed behavior.
ReadyLayer derives tenant context from the authenticated Supabase user ID in middleware and forwards it as both x-user-id and x-tenant-id to server routes. Within each tenant scope, the control plane supports tenant-local organizations with explicit admin / operator / viewer role membership.
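The user-to-tenant mapping above can be sketched as a small pure function. This is an illustrative sketch, not the repo's actual middleware: `deriveTenantHeaders` is a hypothetical name, and only the header names and the user-ID-equals-tenant-ID rule come from this README.

```typescript
// Hypothetical sketch of ReadyLayer's tenant-header derivation, assuming
// the Supabase user ID has already been resolved by auth middleware.
// deriveTenantHeaders is an illustrative name, not the repo's actual API.
function deriveTenantHeaders(supabaseUserId: string): Record<string, string> {
  // The tenant ID equals the authenticated user ID by construction.
  return {
    "x-user-id": supabaseUserId,
    "x-tenant-id": supabaseUserId,
  };
}
```

Server routes downstream can then rely on `x-tenant-id` for request scoping without re-resolving auth.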
What works today:
- create/update/delete organizations per tenant,
- assign roles to actor IDs directly via `set_member_role`,
- role-enforced access to org CRUD, job queue, and health endpoints.
What does not exist:
- email-based invite with durable token and acceptance flow,
- invite revocation or expiry handling,
- member deactivation/removal independent of org deletion,
- seat accounting, billing integration, or self-service role change,
- org-switching UI or shared multi-user workspace semantics.
Membership is currently controlled via explicit API calls from an admin — not through an invite/accept product flow.
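As a concrete illustration of that admin-driven flow, the sketch below builds the request an admin client might send for the `set_member_role` action. The endpoint path and payload shape are assumptions for illustration, not the repo's confirmed route contract; only the action name and the three roles come from this README.

```typescript
// Roles supported by tenant-local org membership, per the README.
type Role = "admin" | "operator" | "viewer";

// Builds a hypothetical set_member_role request; the /api/orgs path and
// body field names are illustrative assumptions.
function buildSetMemberRoleRequest(orgId: string, actorId: string, role: Role) {
  return {
    path: `/api/orgs/${encodeURIComponent(orgId)}`, // hypothetical route
    body: { action: "set_member_role", actor_id: actorId, role },
  };
}
```

An admin-authenticated client would POST `body` to `path`; role enforcement then gates org CRUD, job queue, and health endpoints.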
Supported today:
- local development on one machine,
- local verification/CI runs,
- one or more ReadyLayer instances connected to Supabase auth plus Supabase service-role backed shared coordination/state, and optionally an external Requiem API endpoint.
Not supported as a proven topology today:
- any deployment that expects an autonomous background worker polling the durable queue (plan jobs require explicit `action=process` calls from an operator or external scheduler),
- serverless/edge topologies that reinterpret request-bound execution as durable async orchestration,
- any deployment that markets the current ReadyLayer surface as a shared multi-user SaaS control plane with invite/seat management.
- The CLI persists local operational state to SQLite/on-disk storage.
- Many ReadyLayer routes read or write tenant-scoped local control-plane state.
- In production-like deployments, ReadyLayer request rate limiting and idempotency replay are backed by shared Supabase state; local development still uses process-local fallbacks where safe.
- Some routes also depend on `REQUIEM_API_URL` to reach an external runtime/API service.
- `/app/tenants` currently returns a stub payload and should be read as a truth-disclosure surface, not proof of live multi-tenant control-plane enforcement.
Do not deploy this repository as though it already provides:
- enterprise multi-user SaaS with invite/seat management,
- autonomous background workers polling the durable queue,
- horizontally safe control-plane coordination beyond shared Supabase state,
- cross-replica idempotency/rate-limit guarantees beyond what Supabase OCC provides, or
- production-ready backend telemetry for every ReadyLayer route.
Before any non-local deployment:
- Read docs/DEPLOYMENT.md.
- Fill environment variables from docs/ENVIRONMENT.md.
- Decide whether your deployment is:
  - local-single-runtime (developer-only, filesystem-backed, not a production claim),
  - shared request-bound ReadyLayer (requires Supabase auth envs plus `SUPABASE_SERVICE_ROLE_KEY`), or
  - shared request-bound ReadyLayer + external API (same as above plus a reachable `REQUIEM_API_URL`).
- Understand that durable plan jobs can be enqueued and recovered after process loss, but there is no autonomous background worker — processing requires explicit `action=process` calls. Foreground API execution remains request-bound.
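To make the operator-driven processing model concrete, here is a sketch of the call an operator or external scheduler might issue. The `/api/jobs` path and response shape are assumptions for illustration; only `action=process` and the bearer-auth mode come from this README.

```typescript
// Builds an explicit action=process call for the durable plan-job queue.
// There is no autonomous background worker in the repo: an operator or
// external scheduler must trigger processing. The /api/jobs endpoint is
// a hypothetical placeholder.
function buildProcessCall(baseUrl: string, bearerToken: string) {
  return {
    url: `${baseUrl}/api/jobs`, // hypothetical endpoint
    init: {
      method: "POST",
      headers: {
        "content-type": "application/json",
        authorization: `Bearer ${bearerToken}`,
      },
      body: JSON.stringify({ action: "process" }),
    },
  };
}
```

A cron-style external scheduler would issue `fetch(c.url, c.init)` on an interval; the queue's lease semantics then protect against duplicate processing.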
- `src/`, `include/requiem/`, `tests/` — native engine.
- `packages/cli/` — CLI and local operator workflows.
- `packages/ai/` — TypeScript policy/tooling layer used by the console and CLI workflows.
- `ready-layer/` — Next.js operator console and API routes.
- `scripts/` — build, verification, route inventory, and repo policy checks.
- `docs/` — canonical docs plus historical material.
- Node.js `20.11.0` or newer
- pnpm `8.15.0`
- CMake and a C++20-capable compiler for native engine builds/tests
node scripts/bootstrap-preflight.mjs
pnpm install --frozen-lockfile

First install requires outbound access to the public npm registry (https://registry.npmjs.org) or a reachable internal mirror. The bootstrap preflight fails early with exact remediation when Node/corepack/pnpm/registry prerequisites are missing.
pnpm run doctor
pnpm run route:inventory
pnpm run lint
pnpm run typecheck
pnpm run build
pnpm run test
pnpm run verify:all
pnpm run verify:deploy-readiness
pnpm run verify:first-customer

- `pnpm run doctor` — local prerequisite and state inspection.
- `pnpm run route:inventory` — regenerate `routes.manifest.json` from the current route tree.
- `pnpm run lint` — ReadyLayer lint.
- `pnpm run typecheck` — ReadyLayer type-check.
- `pnpm run build` — native engine build plus web build.
- `pnpm run test` — engine smoke tests.
- `pnpm run verify:first-customer` — boots the local ReadyLayer server with strict API auth and runs the canonical API smoke proof.
- `pnpm run verify:release` — canonical first-customer go-live gate: deploy-readiness, route truth, docs truth, lint, typecheck, build, smoke tests, survivability checks, and the local first-customer boot path.
- `pnpm run verify:all` — standard repo gate: doctor, route inventory, route checks, lint, typecheck, build, test.
- `pnpm run verify:deploy-readiness` — checks Node/pnpm/Vercel/env-contract parity.
pnpm run verify:routes
pnpm run verify:tenant-isolation
pnpm run verify:nosecrets
pnpm run verify:no-stack-leaks
pnpm run verify:determinism
pnpm run verify:replay
pnpm rl --help
pnpm rl doctor

Two env example files exist:
- root `/.env.example` — repo-level local/dev variables,
- `ready-layer/.env.example` — ReadyLayer deployment contract.
Authoritative documentation: docs/ENVIRONMENT.md.
High-level requirements:
- ReadyLayer auth UI requires: `NEXT_PUBLIC_SUPABASE_URL`, `NEXT_PUBLIC_SUPABASE_ANON_KEY`
- ReadyLayer strict authenticated API mode requires: `REQUIEM_AUTH_SECRET` (also enables direct protected API bearer auth for operator/service clients)
- Production-like shared control-plane/idempotency/rate limiting require: `SUPABASE_URL` (or `NEXT_PUBLIC_SUPABASE_URL`) and `SUPABASE_SERVICE_ROLE_KEY`
- Routes that fetch external runtime data require: `REQUIEM_API_URL`
- Prisma/DB workflows require: `DATABASE_URL` and, where your setup uses it, `DIRECT_DATABASE_URL`
- Unsafe local-only auth fallback: `REQUIEM_ALLOW_INSECURE_DEV_AUTH=1` only outside strict auth mode; never use this for production deployment
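A topology-aware preflight for these variables can be sketched as below. The grouping logic is illustrative (docs/ENVIRONMENT.md remains authoritative); the variable names themselves come from this README.

```typescript
// Illustrative env-contract check per deployment topology. This mirrors
// the README's grouping but is a sketch, not the repo's actual doctor.
type Topology = "local" | "shared" | "shared+api";

function missingEnvVars(
  topology: Topology,
  env: Record<string, string | undefined>,
): string[] {
  // Auth UI variables are needed in every topology.
  const required = ["NEXT_PUBLIC_SUPABASE_URL", "NEXT_PUBLIC_SUPABASE_ANON_KEY"];
  if (topology !== "local") {
    // Shared request-bound ReadyLayer: strict API auth + service-role state.
    required.push("REQUIEM_AUTH_SECRET", "SUPABASE_SERVICE_ROLE_KEY");
  }
  if (topology === "shared+api") {
    // External runtime routes additionally need a reachable API endpoint.
    required.push("REQUIEM_API_URL");
  }
  return required.filter((name) => !env[name]);
}
```

Running this against `process.env` before boot gives an early, explicit list of what a chosen topology still lacks.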
Readiness truth:
- `/api/readiness` passes for local-single-runtime only when the authenticated ReadyLayer env contract is present and filesystem persistence is healthy.
- In production-like deployments, `/api/readiness` fails closed unless shared Supabase control-plane persistence plus shared idempotency/rate limiting are configured.
- If `REQUIEM_API_URL` is configured, `/api/readiness` also probes that external runtime and fails until it is reachable.
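The fail-closed readiness rules can be modeled as a small decision function. The input shape here is an assumption for illustration; `/api/readiness` and the three rules are from this README.

```typescript
// Illustrative model of /api/readiness fail-closed semantics.
interface ReadinessInput {
  productionLike: boolean;
  supabaseSharedStateConfigured: boolean;
  requiemApiUrlConfigured: boolean;
  requiemApiReachable: boolean;
}

function readinessPasses(input: ReadinessInput): boolean {
  // Production-like deployments fail closed without shared Supabase state.
  if (input.productionLike && !input.supabaseSharedStateConfigured) return false;
  // If an external API is configured, it must actually be reachable.
  if (input.requiemApiUrlConfigured && !input.requiemApiReachable) return false;
  return true;
}
```

The key property is that configuring `REQUIEM_API_URL` makes readiness stricter, never looser: a configured but unreachable external runtime is a failure, not a warning.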
| Topology | Status | Notes |
|---|---|---|
| CLI on one machine | Supported | Uses local filesystem/SQLite state. |
| ReadyLayer dev server on one machine | Supported | Uses local filesystem state and process-local caches intentionally. |
| ReadyLayer deployment with Supabase-backed shared state | Supported | Request coordination/control-plane state are shared; execution still remains request-bound in the handling runtime. |
| Multiple ReadyLayer replicas sharing production traffic | Supported with bounded semantics | Safe only for request-bound flows backed by shared Supabase state. Durable plan jobs can be enqueued and recovered, but there is no autonomous background worker — processing requires explicit operator calls. |
| Edge/serverless deployment claiming durable async continuation | Not supported | Request-bound execution must not be marketed as durable async orchestration. |
| Durable plan-job queue with operator-driven processing | Supported | Jobs are persisted before execution, leases protect against duplicate processing, stale leases are recoverable. No autonomous background worker exists. |
| Email-based invite / seat management | Not implemented | Membership is set directly via the `set_member_role` API. No invite/accept/revoke flow exists. |
| Org/team multi-user SaaS control plane | Partially implemented | Tenant-local orgs and role membership exist. Invite/seat/billing workflows do not. |
Full detail: docs/DEPLOYMENT.md. Release gate: pnpm run verify:release.
At a high level:
- the native engine is the local trust anchor for engine-specific build/test flows,
- the CLI orchestrates local workflows and persists local state,
- ReadyLayer middleware authenticates through Supabase and maps each authenticated user to a tenant ID equal to that user ID,
- ReadyLayer API routes use tenant wrappers that add structured error handling, route contracts, shared durable idempotency/rate limiting in production-like deployments, and explicit request-bound execution headers,
- some console routes remain informational, degraded, or stubbed rather than live runtime proof.
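The tenant-wrapper pattern described above can be sketched as a higher-order route handler. `withTenant` is a hypothetical name and a deliberately minimal subset: the real wrappers also add route contracts, idempotency, rate limiting, and execution headers.

```typescript
// Minimal sketch of the tenant-wrapper pattern, using the web-standard
// Request/Response types available in Next.js route handlers and Node 18+.
type TenantHandler = (req: Request, tenantId: string) => Promise<Response>;

function withTenant(handler: TenantHandler): (req: Request) => Promise<Response> {
  return async (req) => {
    // Tenant context arrives as the x-tenant-id header set by middleware.
    const tenantId = req.headers.get("x-tenant-id");
    if (!tenantId) {
      return new Response(JSON.stringify({ error: "missing tenant context" }), {
        status: 401,
        headers: { "content-type": "application/json" },
      });
    }
    try {
      return await handler(req, tenantId);
    } catch {
      // Structured error handling: never leak stack traces to clients.
      return new Response(JSON.stringify({ error: "internal error" }), {
        status: 500,
        headers: { "content-type": "application/json" },
      });
    }
  };
}
```

The design point is that every wrapped route gets tenant scoping and fail-closed error shaping for free, so individual handlers only see already-validated tenant context.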
Use these docs as the current truth spine:
- docs/ENVIRONMENT.md
- docs/DEPLOYMENT.md
- docs/OPERATOR_RUNBOOK.md
- docs/SECURITY.md
- docs/limitations.md
- docs/reference/ROUTE_MATURITY.md
- The repository contains historical and aspirational material under `docs/`; not every document describes current deployable truth.
- Some ReadyLayer routes are intentionally informational or stub-backed.
- Current tenant isolation language in code is stronger than the current hosted-product reality; read it as route/request scoping, not as proof of org-SaaS maturity.
- Build and verification coverage is strongest for local/CI workflows, not for clustered production operations.
Start here:
If you are evaluating the repo, prefer command results and source-linked docs over narrative claims.
- Squash-only merges
- Auto-delete merged branches
- Weekly dependency update windows
- Security scanning in CI