PushPals is a human-guided and autonomous AI coding helper that runs as a multi-service local system around your repository. It combines chat-style interaction, strict planning, delegated execution, and controlled git integration with auditability built in.
PushPals is designed for two modes that can coexist in one runtime:
- Human-driven mode: you ask for work in chat, and the system plans and executes code changes safely.
- Autonomous mode: `RemoteBuddy` periodically proposes and dispatches bounded maintenance objectives (when policy and eligibility gates allow).
Both modes flow through the same queues, events, and integration pipeline so behavior is observable and debuggable.
- Excalidraw source: `docs/architecture.excalidraw`
- Mermaid runtime flow:

```mermaid
flowchart LR
  U[User] --> C[apps/client]
  C -->|POST /message| L[apps/localbuddy]
  L -->|POST /requests/enqueue| S[(apps/server)]
  S -->|SSE/WS session events| C
  S -->|POST /requests/claim| R[apps/remotebuddy]
  R -->|POST /jobs/enqueue| S
  S -->|POST /jobs/claim| W[apps/workerpals]
  W -->|POST /jobs/:id/complete or fail| S
  W -->|POST /completions/enqueue| S
  S -->|POST /completions/claim| M[apps/source_control_manager]
  M -->|merge/push/PR status events| S
  S --- DB[(outputs/data/pushpals.db)]
  W -->|agent/<worker>/<job> commits| G[(Git branches)]
  M -->|integration merge/push| G
```
**apps/client**
- Expo mission-control UI (web/iOS/Android).
- Subscribes to the server event stream and renders a live timeline/status.

**apps/localbuddy**
- User ingress on `POST /message`.
- Handles lightweight local responses and status/read-only prompts.
- Enqueues delegated requests for remote orchestration.

**apps/server**
- Central control plane and event hub.
- Hosts session/event transport, queue APIs, worker heartbeats, and autonomy APIs.
- Persists all state in SQLite (`outputs/data/pushpals.db` by default).

**apps/remotebuddy**
- Planner/orchestrator.
- Claims queued requests, produces strict planning JSON, emits assistant messages, and optionally enqueues `task.execute` jobs.
- Runs an optional autonomy loop for objective ideation/scoring/dispatch.

**apps/workerpals**
- Job execution daemon (host worktree mode or Docker mode).
- Claims jobs, executes the backend agent (`miniswe` or `openhands`), streams logs, and emits completion records.

**apps/source_control_manager**
- Completion consumer and integration daemon.
- Applies worker output (cherry-pick / no-ff / ff-only), runs checks, pushes the integration branch, and optionally opens or reuses a PR.
- Bun 1.x
- Python 3.12+ (for the integration/eval harness and Python executor scripts)
- Docker (recommended; required for the default `bun run start` flow)
- Git + GitHub auth if push/PR automation is enabled
```sh
bun install
cp .env.example .env
cp configs/local.example.toml configs/local.toml
```

Windows PowerShell:

```powershell
bun install
Copy-Item .env.example .env
Copy-Item configs/local.example.toml configs/local.toml
```

- `bun run start`: Preferred startup path. Runs preflights (config presence, LLM reachability, integration branch/worktree checks, Docker image checks, startup warmup), then launches the full stack.
- `bun run start -c`: Same as above, with runtime-state cleanup first.
- `bun run dev:full`: Direct concurrent launcher without the `start.ts` preflight workflow.

Single-service launchers:

- `bun run server:only`
- `bun run localbuddy:only`
- `bun run remotebuddy:only`
- `bun run workerpals:only`
- `bun run workerpals:only:docker`
- `bun run source_control_manager:only`
- `bun run source_control_manager:only:dev`
- `bun run client:only`
- `bun run client:only:offline`
- `bun run web:only`
- `bun run ios:only`
- `bun run android:only`
Use this for terminal-first chat routed through LocalBuddy -> RemoteBuddy.
Install globally from npm:

```sh
npm i -g @pushpalsdev/cli
```

or with Bun:

```sh
bun install -g @pushpalsdev/cli
```

For local development, a one-time local command install from the repo root:

```sh
bun link
```

Then from any git repo:

```sh
pushpals
```

Notes:

- `pushpals` hard-fails if the current directory is not a git repo.
- If LocalBuddy is down, `pushpals` auto-starts an embedded `server + localbuddy + remotebuddy + source_control_manager`.
- Auto-start does not clone this repository; it downloads release-tagged runtime binaries and runtime assets into `~/.pushpals/runtime`.
- Override the runtime tag when needed via `pushpals --runtime-tag vX.Y.Z`.
- `pushpals` validates that LocalBuddy is attached to the same repo root.
- It stores endpoint state in `.git/pushpals-cli-state.json`, including a copyable `monitoringHubUrl=...`.
- Direct OS binaries are published per release under: https://github.com/PushPalsDev/pushpals/releases
Tag-based release:

```sh
git tag vX.Y.Z
git push origin vX.Y.Z
```

Optional but recommended before tagging: update the reusable release notes in `release_log.md`.

The release CLI workflow will:

- publish `@pushpalsdev/cli` to npm
- build Windows/Linux/macOS standalone binaries
- attach binaries + checksums to GitHub Releases
- use `release_log.md` as the release body when present
PushPals also ships a VS Code extension client in `apps/vscode-client` that can:
- Start/stop local stack services (`server`, `localbuddy`, `remotebuddy`, `workerpals:only:docker`).
- Verify/build the worker Docker image before stack startup.
- Provide an in-editor chat/event client wired to your local PushPals server.
Build and package:

```sh
bun run vscode:client:compile
bun run vscode:client:package
```

Terminal 1:

```sh
bun run server:only
```

Terminal 2:

```sh
bun run remotebuddy:only
```

To feed it work directly:

```sh
curl -X POST http://localhost:3001/sessions \
  -H "Content-Type: application/json" \
  -d '{"sessionId":"dev"}'
curl -X POST http://localhost:3001/requests/enqueue \
  -H "Content-Type: application/json" \
  -d '{"sessionId":"dev","prompt":"Summarize current failing tests","priority":"interactive"}'
```

Full pipeline loop:

- Terminal 1: `bun run server:only`
- Terminal 2: `bun run remotebuddy:only`
- Terminal 3: `bun run workerpals:only:docker`
- Terminal 4: `bun run source_control_manager:only:dev`

Chat-only loop:

- Terminal 1: `bun run server:only`
- Terminal 2: `bun run localbuddy:only`
- `bun run test`: Root tests + protocol tests.
- `bun run test:integration`: End-to-end integration harness (`tests/integration/integration_controller.py --mode integration`).
- `bun run test:integration:eval`: Backend evaluation mode (`--mode eval`) with scenario/budget controls.
- `bun run smoke`: Smoke script for startup/stack sanity.

Direct eval wrapper:

```sh
python -u tests/integration/test_workerpals_backend_eval.py
```

Useful eval knobs:

- `WORKERPALS_E2E_BACKENDS=miniswe,openhands`
- `WORKERPALS_E2E_EVAL_SCENARIO_SUITE=quick|real-lite|real-hard`
- `WORKERPALS_E2E_SCENARIOS_PER_BACKEND=1`
- `WORKERPALS_E2E_MAX_TOTAL_SEC=900`
- `WORKERPALS_E2E_MAX_BACKEND_SEC=1200`
- `WORKERPALS_E2E_EVAL_OUTPUT=outputs/workerpals_backend_eval.json`
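The knobs above are plain environment variables; a minimal sketch of collecting them with fallbacks (the function name and the default values here are illustrative assumptions, not the harness's actual defaults):

```python
import os

def eval_knobs(env=os.environ):
    """Collect WORKERPALS_E2E_* eval knobs; defaults are illustrative assumptions."""
    return {
        "backends": env.get("WORKERPALS_E2E_BACKENDS", "miniswe").split(","),
        "suite": env.get("WORKERPALS_E2E_EVAL_SCENARIO_SUITE", "quick"),
        "scenarios_per_backend": int(env.get("WORKERPALS_E2E_SCENARIOS_PER_BACKEND", "1")),
        "max_total_sec": int(env.get("WORKERPALS_E2E_MAX_TOTAL_SEC", "900")),
    }

knobs = eval_knobs({"WORKERPALS_E2E_BACKENDS": "miniswe,openhands"})
print(knobs["backends"])  # ['miniswe', 'openhands']
```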
- Runtime/services: Bun + TypeScript (ESM)
- Persistence: SQLite (`bun:sqlite`)
- UI: Expo + React Native + Expo Router
- Worker runtimes: Python 3.12+, Docker sandbox image
- Git integration: git CLI, optional GitHub CLI (`gh`) for auth/PR workflows
- Agent/event protocol: `packages/protocol` JSON schema + TS types
- Shared config/communication: `packages/shared`
Configured in `configs/backend.toml` and resolved by `apps/workerpals/src/backends/backend_config.ts`.

- `miniswe` (default)
  - Python executor: `apps/workerpals/src/backends/miniswe/miniswe_executor.py`
  - Uses `mini-swe-agent`.
- `openhands`
  - Python executor: `apps/workerpals/src/backends/openhands/openhands_executor.py`
  - Uses the OpenHands SDK / agent-server toolchain.

How to switch:

```toml
# configs/local.toml
[workerpals]
executor = "openhands" # or "miniswe"
```

Or via env override:

```sh
WORKERPALS_EXECUTOR=openhands bun run workerpals:only:docker
```

LocalBuddy, RemoteBuddy, and WorkerPals each have per-service LLM config:
- `LOCALBUDDY_LLM_BACKEND`
- `REMOTEBUDDY_LLM_BACKEND`
- `WORKERPALS_LLM_BACKEND`

Supported backend values:

- `lmstudio`
- `ollama`

Compatibility aliases accepted by the config normalizer:

- `openai_compatible` -> `lmstudio`
- `ollama_chat` -> `ollama`

Related settings per service:

- `*_LLM_ENDPOINT`
- `*_LLM_MODEL`
- `*_LLM_API_KEY`
- `*_LLM_SESSION_ID`
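A sketch of how that alias normalization can work; only the alias pairs and the two supported values come from the lists above, and the function name is hypothetical:

```python
# Canonical backend values plus the compatibility aliases listed above.
ALIASES = {"openai_compatible": "lmstudio", "ollama_chat": "ollama"}
SUPPORTED = {"lmstudio", "ollama"}

def normalize_backend(value: str) -> str:
    """Map a configured backend string to its canonical value (hypothetical helper)."""
    canonical = value.strip().lower()
    canonical = ALIASES.get(canonical, canonical)
    if canonical not in SUPPORTED:
        raise ValueError(f"unsupported LLM backend: {value!r}")
    return canonical

print(normalize_backend("openai_compatible"))  # lmstudio
```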
- The client sends user text to LocalBuddy: `POST /message`.
- LocalBuddy chooses:
  - the local reply path for lightweight chat/status/read-only requests, or
  - the remote delegation path by enqueuing to the server: `POST /requests/enqueue`.
- An explicit remote override command is supported in chat: `/ask_remote_buddy ...`.
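The routing decision above can be pictured as a small dispatcher. This is an illustrative sketch, not LocalBuddy's code: the predicate and function names are assumptions; only the `/ask_remote_buddy` override and the two paths come from the flow described above.

```python
def route_message(text: str, looks_lightweight) -> str:
    """Pick LocalBuddy's reply path: 'local' or 'delegate' (illustrative sketch)."""
    if text.startswith("/ask_remote_buddy"):
        return "delegate"   # explicit override always delegates
    if looks_lightweight(text):
        return "local"      # chat/status/read-only handled locally
    return "delegate"       # everything else is enqueued via POST /requests/enqueue

# Toy lightweight-ness predicate for the demo (an assumption, not the real heuristic).
is_light = lambda t: t.lower().startswith(("status", "hello"))

print(route_message("/ask_remote_buddy refactor auth", is_light))  # delegate
print(route_message("status?", is_light))                          # local
```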
Main server route families in `apps/server/src/server_main.ts`:

- Session/event transport:
  - `POST /sessions`
  - `GET /sessions/:id/events` (SSE replay via `after` cursor)
  - `GET /sessions/:id/ws` (WebSocket replay)
  - `POST /sessions/:id/message`
  - `POST /sessions/:id/command` (auth protected)
- Request queue:
  - `POST /requests/enqueue`, `POST /requests/claim`, `POST /requests/:id/complete`, `POST /requests/:id/fail`, `GET /requests`
- Job queue and workers:
  - `POST /jobs/enqueue`, `POST /jobs/claim`, `POST /jobs/:id/complete`, `POST /jobs/:id/fail`, `POST /jobs/:id/log`, `GET /jobs`, `GET /jobs/:id/logs`, `POST /workers/heartbeat`, `GET /workers`
- Completion queue:
  - `POST /completions/enqueue`, `POST /completions/claim`, `POST /completions/:id/processed`, `POST /completions/:id/fail`, `GET /completions`
- Autonomy APIs:
  - lock lifecycle (`/autonomy/lock/acquire|renew|release`)
  - snapshot/objective/outcome/eligibility APIs
  - question lifecycle APIs
- Status/ops:
  - `GET /system/status`, `GET /healthz`, `POST /admin/shutdown` (auth protected)
Both request and job queues are priority ordered:

- `interactive`
- `normal`
- `background`

Queue implementations:

- `apps/server/src/requests.ts`
- `apps/server/src/jobs.ts`
- `apps/server/src/completions.ts`

Shared behavior:

- FIFO within each priority band.
- Claim transitions are atomic.
- Queue position and ETA snapshots are derived from the live pending order.
- SLO summaries are derived over rolling windows and exposed by `GET /system/status`.
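The claim ordering described above (higher priority bands first, FIFO within a band) can be sketched in a few lines; the class and data shapes here are assumptions, not the server's implementation:

```python
from collections import deque

PRIORITY_ORDER = ("interactive", "normal", "background")

class PriorityBandQueue:
    """FIFO within each priority band; claims drain higher bands first (sketch)."""
    def __init__(self):
        self.bands = {p: deque() for p in PRIORITY_ORDER}

    def enqueue(self, item, priority="normal"):
        self.bands[priority].append(item)

    def claim(self):
        for p in PRIORITY_ORDER:
            if self.bands[p]:
                return self.bands[p].popleft()
        return None  # nothing pending

q = PriorityBandQueue()
q.enqueue("bg-1", "background")
q.enqueue("int-1", "interactive")
q.enqueue("int-2", "interactive")
print(q.claim(), q.claim(), q.claim())  # int-1 int-2 bg-1
```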
RemoteBuddy planner output feeds strict execution payloads.

The worker contract (`task.execute` in job params, schema v2) includes:

- `schemaVersion`
- `lane` (`deterministic` or `worker`)
- `instruction`
- `planning.intent`
- `planning.scope` (read/write bounds)
- `planning.acceptanceCriteria`
- `planning.validationSteps`
- queue/execution/finalization budgets

WorkerPals validates and executes this payload in:

- direct worktree mode, or
- Docker mode via `apps/workerpals/src/docker_executor.ts` and `apps/workerpals/src/job_runner.ts`.
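A rough validator for that payload shape can clarify the contract. The field names come from the list above; the function itself and the error format are assumptions (WorkerPals' real validation lives in the TypeScript daemon):

```python
REQUIRED_TOP = ("schemaVersion", "lane", "instruction", "planning")
REQUIRED_PLANNING = ("intent", "scope", "acceptanceCriteria", "validationSteps")

def validate_task_execute(params: dict) -> list:
    """Return missing-field errors for a task.execute payload (illustrative sketch)."""
    errors = [f"missing {k}" for k in REQUIRED_TOP if k not in params]
    if params.get("lane") not in ("deterministic", "worker"):
        errors.append("lane must be 'deterministic' or 'worker'")
    planning = params.get("planning", {})
    errors += [f"missing planning.{k}" for k in REQUIRED_PLANNING if k not in planning]
    return errors

payload = {
    "schemaVersion": 2,
    "lane": "worker",
    "instruction": "Fix failing test",
    "planning": {"intent": "...", "scope": {}, "acceptanceCriteria": [], "validationSteps": []},
}
print(validate_task_execute(payload))  # []
```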
When WorkerPals finishes mutable work:

- A completion record is enqueued with commit/branch metadata.
- SourceControlManager claims the completion.
- The configured merge strategy applies changes into the integration branch: `cherry-pick`, `no-ff`, or `ff-only`.
- Optional checks run.
- The integration branch is pushed when enabled.
- An optional PR is opened or reused when enabled.
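The three strategies map naturally onto standard git invocations. The README does not show the commands SourceControlManager actually runs, so treat this as a plain-git sketch of their semantics, not the daemon's code:

```python
def integration_command(strategy: str, worker_branch: str, commit_sha: str) -> list:
    """Plain-git equivalent of each merge strategy (illustrative, not the daemon's code)."""
    if strategy == "cherry-pick":
        return ["git", "cherry-pick", commit_sha]          # replay the worker commit
    if strategy == "no-ff":
        return ["git", "merge", "--no-ff", worker_branch]  # always create a merge commit
    if strategy == "ff-only":
        return ["git", "merge", "--ff-only", worker_branch]  # refuse non-fast-forward merges
    raise ValueError(f"unknown strategy: {strategy}")

print(integration_command("no-ff", "agent/w1/j42", "abc123"))
# ['git', 'merge', '--no-ff', 'agent/w1/j42']
```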
SourceControlManager also exposes a localhost status API (`apps/source_control_manager/src/http.ts`):

- `GET /health`
- `GET /jobs`
- `GET /jobs/:id`
- `GET /stats`
Main event/session store in `apps/server/src/db.ts`:

- `sessions`
- `events` (append-only cursor log)

Queue + worker tables:

- `requests`, `jobs`, `job_logs`, `job_artifacts`, `workers`, `completions`

Autonomy tables in `apps/server/src/autonomy.ts`:

- `autonomy_snapshots`, `autonomy_candidates`, `autonomy_objectives`, `autonomy_outcomes`, `autonomy_pattern_stats`, `questions_queue`, `autonomy_llm_calls`, `autonomy_dispatch_lock`
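The append-only event log with a monotonically increasing cursor can be sketched with `sqlite3` directly. Only the table names `sessions` and `events` come from the list above; the column names and the in-memory database are assumptions for illustration:

```python
import sqlite3

# In-memory stand-in for outputs/data/pushpals.db; column names are assumptions.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY)")
db.execute("""CREATE TABLE events (
    cursor INTEGER PRIMARY KEY AUTOINCREMENT,  -- monotonically increasing replay cursor
    session_id TEXT NOT NULL,
    payload TEXT NOT NULL)""")

db.execute("INSERT INTO sessions VALUES ('dev')")
for msg in ("hello", "planning", "done"):
    db.execute("INSERT INTO events (session_id, payload) VALUES ('dev', ?)", (msg,))

# Replay semantics of GET /sessions/:id/events with an `after` cursor:
# return everything appended past the client's last-seen cursor.
after = 1
rows = db.execute(
    "SELECT cursor, payload FROM events WHERE session_id = 'dev' AND cursor > ? ORDER BY cursor",
    (after,),
).fetchall()
print(rows)  # [(2, 'planning'), (3, 'done')]
```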
- Source-of-truth branch: `main`
- Integration branch (default): `main_agents`
- Worker branches: `agent/<workerId>/<jobId>`
- SourceControlManager worktree default: `.worktrees/source_control_manager`
- Worker job execution uses isolated worktrees and, optionally, an isolated Docker runtime.
`scripts/start.ts` enforces critical safety checks before full startup, including:

- required local config files (`.env`, `configs/local.toml`)
- LLM endpoint preflight
- integration branch existence/sync checks
- dedicated SourceControlManager worktree guard
- worker sandbox image availability/rebuild policy
Canonical config files:

- `configs/default.toml`
- `configs/<profile>.toml`
- `configs/local.toml` (local override, typically gitignored)

Load order (last wins):

1. `configs/default.toml`
2. `configs/<PUSHPALS_PROFILE>.toml`
3. `configs/local.toml`
4. environment variables

High-value env overrides:

- `PUSHPALS_PROFILE`
- `PUSHPALS_SERVER_URL`
- `PUSHPALS_DATA_DIR`
- `LOCALBUDDY_LLM_*`
- `REMOTEBUDDY_LLM_*`
- `WORKERPALS_LLM_*`
- `WORKERPALS_EXECUTOR`
- `WORKERPALS_REQUIRE_DOCKER`
- `WORKERPALS_DOCKER_IMAGE`
- `SOURCE_CONTROL_MANAGER_*`
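The "last wins" layering can be sketched as a simple merge over the load order. This is a shallow merge over flat dicts for illustration (real TOML loaders typically deep-merge nested tables); the keys and values shown are illustrative, not actual defaults:

```python
def merge_layers(*layers):
    """Apply config layers in load order; later layers override earlier keys (shallow)."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

default = {"server_url": "http://localhost:3001", "executor": "miniswe"}
profile = {"executor": "openhands"}
local = {}
env = {"server_url": "http://localhost:4000"}  # environment variables win last

print(merge_layers(default, profile, local, env))
# {'server_url': 'http://localhost:4000', 'executor': 'openhands'}
```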
- `apps/client`: Expo UI
- `apps/localbuddy`: user ingress and local routing
- `apps/remotebuddy`: orchestration, planning, autonomy
- `apps/workerpals`: executor daemon and backend adapters
- `apps/source_control_manager`: integration daemon
- `apps/server`: event/queue/autonomy API and persistence
- `packages/protocol`: protocol schemas, types, validators
- `packages/shared`: config loader, communication utilities
- `prompts`: system and planning prompts
- `tests/integration`: e2e + backend eval harness
- This repository is under active development.
- For most local development, use Docker worker mode (`workerpals:only:docker`) to keep toolchains reproducible.
- If you only need chat ingress, you can run `localbuddy` without worker services, but delegated coding work requires `server + remotebuddy + workerpals`, and usually `source_control_manager` for integration completion.
