A developer toolkit for running AI tools (like Gemini CLI and Codex) in a secure, isolated Docker sandbox, plus supporting local workflows (proxy, MCP integration, and insights reporting).
This toolkit is designed for developers who want to leverage AI assistants on their local codebase without granting unrestricted access. It provides full project context in read-only mode while only allowing modifications to specific, user-approved directories.
Developer Note: Hi! This project came from the need to safely integrate powerful AI tools like Codex and Gemini into complex local development workflows without compromising security or codebase integrity. I hope it helps you work smarter and safer with AI. You can find me on LinkedIn.
- Secure by Default: Mounts your entire project as read-only for context, and only selected service directories as read-write.
- Secret Redaction: Automatically backs up and redacts sensitive information like API keys from your code before the AI session begins.
- Seamless Authentication: Intelligently mounts host-machine credentials for the chosen tool (e.g., `~/.codex`, `~/.config/gemini-cli`) into the container, so your tools work out of the box.
- Multi-Tool Support: Easily switch between different AI tools (e.g., `gemini`, `codex`).
- Interactive & Non-Interactive Modes:
  - Launch an interactive AI chat session (TUI).
  - Run non-interactive, one-shot prompts.
  - Get a direct bash shell inside the sandboxed container for debugging or manual changes (`--shell`).
- Smart Logging: Logs non-interactive sessions. Automatically detects interactive TUIs (like `codex` without arguments) and runs them directly for full functionality, without interfering with piping.
- Post-Session Control: After your session, a menu lets you:
  - Review all changed files.
  - Generate a git-compatible diff file to easily apply changes in your IDE (see the sketch after this list).
  - Restore your original files from the backup, discarding all changes.
- Robust & User-Friendly: Colored output, error handling, and graceful session management make it a pleasure to use.
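
For the diff option, a typical flow after the session might look like the following sketch. The diff filename is hypothetical; use whatever path the post-session menu reports:

```bash
# Hypothetical filename; the post-session menu reports the actual diff path.
git apply --check ai-session.diff   # dry run: verify the diff applies cleanly
git apply ai-session.diff           # apply the AI's changes to your working tree
```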
Before you begin, ensure you have the following installed on your system:
- Docker: To run the containerized environment.
- Git: Required for the "Generate Diff" feature.
- rsync: Required for the backup and restore functionality.
- AI CLI Tools: The Docker image (`ai-sandbox` by default) must have the desired AI CLI tools (e.g., `gemini`, `codex`) installed.
- gitleaks (Optional): Required only if using the `--gitleaks` flag (see the example below).
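
For reference, this is the kind of scan the `--gitleaks` flag relies on. The invocation below is illustrative; the toolkit's actual arguments may differ:

```bash
# Illustrative standalone gitleaks scan of the current project.
gitleaks detect --source . --no-banner
```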
Recommended setup order:
- Install Node dependencies (proxy/runtime tools):

  ```bash
  cd ai-sandbox
  npm install
  ```

- Create one shared runtime env file in root:

  ```bash
  cp .env.example .env
  ```

  Then edit ai-sandbox/.env and set at minimum (a sample file is shown after the notes below):

  - `LLM_API_BASE_URL`
  - `LLM_API_KEY`

- Install MCP Python dependencies (required for `--enrich-with-guru`):

  ```bash
  cd ai-sandbox/dev-sandbox-mcp
  ./setup_and_run.sh
  ```

Notes:

- `dev-sandbox-mcp` scripts now fall back to `../.env`, so a single root `.env` is enough.
- `dev-sandbox-mcp/.env` is optional (local override only).
- If a Codex MCP registration prompt appears in `setup_and_run.sh`, you can accept or skip it; the dependency install is the important part.
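
For reference, a minimal `ai-sandbox/.env` might look like this. All values are placeholders; the port default is an assumption, so check `config/llm-provider.env.example` for the real defaults:

```bash
# ai-sandbox/.env — placeholder values, not real credentials.
# Your OpenAI-compatible endpoint:
LLM_API_BASE_URL=https://api.example.com/v1
# Provider API key:
LLM_API_KEY=sk-your-key-here
# Optional; 8787 is an assumed example, see config/llm-provider.env.example:
SANDBOX_PROXY_PORT=8787
```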
AI Sandbox is a small developer toolkit for running AI-assisted workflows locally with safer defaults.
It currently includes:
- `ai-server.js`: OpenAI-compatible proxy service (provider-agnostic via env config)
- `insights/`: local Codex history analytics and HTML reporting
- `dev-sandbox-mcp/`: MCP server for coding/developer tools against OpenAI-compatible/local LLM APIs
`opencode-mcp` exists in this repository but is intentionally out of scope for current migration work.
```
ai-sandbox/
  README.md
  Dockerfile
  ai-server.js
  moe-proxy-server.md
  config/
    llm-provider.env.example
  insights/
  insights.sh
  dev-sandbox-mcp/
  openai-codex-mcp/    # compatibility wrapper (deprecated path)
```
```bash
cd ai-sandbox
make help
```

Most used targets:

```bash
# One-time setup
make init

# Run proxy
make proxy-start

# Run MCP server
make mcp-run

# Generate insights report
make insights-30d
make insights-guru
make insights-showcase
make insights-pdf

# Validate local code changes
make check
```

To run the proxy directly with npm:

```bash
cd ai-sandbox
npm install
cp config/llm-provider.env.example .env
npm run start
```

Core env vars:

- `LLM_API_BASE_URL`
- `LLM_API_KEY`
- `SANDBOX_PROXY_PORT`
Legacy aliases are still supported (`LONGCAT_*`, `PROXY_TOKEN`, `DEBUG`, etc.) during migration.
Detailed proxy runtime and endpoint behavior: see `moe-proxy-server.md`.
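
As a quick smoke test, something like the following should work, assuming the proxy exposes the standard OpenAI-compatible chat completions route. The `/v1/chat/completions` path, the fallback port 8787, and the model name are all assumptions; check `moe-proxy-server.md` for the actual endpoint behavior:

```bash
# Hypothetical smoke test against the local proxy; path, fallback port,
# and model name are assumptions — adjust to your configuration.
curl -s "http://localhost:${SANDBOX_PROXY_PORT:-8787}/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${LLM_API_KEY}" \
  -d '{"model": "your-model", "messages": [{"role": "user", "content": "ping"}]}'
```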
The Dockerfile provides a small container runtime for AI CLI agents (`codex`, `gemini`) with common debugging tools.
Build:

```bash
cd ai-sandbox
docker build -t ai-sandbox-agent .
```

Run interactively:

```bash
docker run --rm -it \
  -v "$(pwd)":/workspace \
  -w /workspace \
  ai-sandbox-agent
```

Run with stricter sandboxing (read-only root + scoped writable mounts):
```bash
docker run --rm -it \
  --read-only \
  --cap-drop=ALL \
  --tmpfs /tmp:rw,noexec,nosuid,size=512m \
  --tmpfs /home/node:rw,noexec,nosuid,size=256m \
  -v "$(pwd)":/workspace:ro \
  -v "$(pwd)/api":/workspace/api:rw \
  -w /workspace \
  ai-sandbox-agent
```

Notes:

- Default user is non-root (`node`).
- Runtime home is `/tmp` (`XDG_*` paths are under `/tmp/.cache` and `/tmp/.config`).
- The intended pattern is read-only or scoped mounts from host projects, with explicit writable areas.
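
To verify the mount policy from inside a container started with the stricter flags above, a quick probe like this should show the workspace as read-only and only the scoped paths as writable:

```bash
# Run inside the container started with the stricter flags above.
touch /workspace/probe 2>/dev/null \
  && echo "WARNING: /workspace is writable" \
  || echo "OK: /workspace is read-only"
touch /workspace/api/probe && echo "OK: /workspace/api is writable"
touch /tmp/probe && echo "OK: /tmp is writable"
rm -f /workspace/api/probe /tmp/probe
```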
```bash
bash ai-sandbox/insights.sh --days 30
```

For Guru enrichment:

```bash
bash ai-sandbox/insights.sh --days 30 --enrich-with-guru
```

Requirement: `dev-sandbox-mcp` dependencies must be installed first (`./setup_and_run.sh`).

- For public demos/showcase, anonymize and enrich the generated report:

  ```bash
  make insights-showcase
  ```

- For PDF export, install Playwright once:

  ```bash
  cd ai-sandbox
  npm install -D playwright
  npx playwright install chromium
  make insights-pdf
  ```

Output:

- `ai-sandbox/insights-report/index.html`
- `ai-sandbox/insights-report/data.json`
- `ai-sandbox/insights-report/report.pdf` (via `make insights-pdf`)
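
Opening `index.html` directly may work; if the page loads `data.json` via fetch (an assumption about the report's implementation, which browsers block for `file://` pages), serve the folder over HTTP instead:

```bash
# Serve the generated report locally, then open http://localhost:8080
cd ai-sandbox/insights-report
python3 -m http.server 8080
```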
Sample setup:

```bash
cd ai-sandbox/dev-sandbox-mcp
cp .env.template .env
./setup_and_run.sh
```

Compatibility notes:

- New canonical path: `ai-sandbox/dev-sandbox-mcp`
- Legacy path kept as wrapper: `ai-sandbox/openai-codex-mcp`
- Legacy MCP alias `openai_codex` is still registered alongside `dev_sandbox_mcp`.
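
To confirm both aliases are registered, you can inspect your Codex configuration. The file location below is an assumption and depends on your Codex CLI version:

```bash
# Assumption: Codex CLI keeps MCP registrations in ~/.codex/config.toml.
grep -n 'dev_sandbox_mcp\|openai_codex' ~/.codex/config.toml
```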
All components are moving toward canonical `LLM_*` configuration keys.
Primary reference: `config/llm-provider.env.example`.
Recommended workflow:

- Keep a single runtime env at `ai-sandbox/.env`.
- `dev-sandbox-mcp` scripts automatically fall back to `../.env` when the local `dev-sandbox-mcp/.env` is missing (see the sketch below).
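
A minimal sketch of that fallback pattern, for illustration only (the actual scripts may differ):

```bash
# Illustrative env-resolution pattern, not the actual script contents.
if [ -f .env ]; then
  set -a; . ./.env; set +a        # local override wins
elif [ -f ../.env ]; then
  set -a; . ../.env; set +a       # fall back to the shared root .env
fi
```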
For configuring reusable commit-planning behavior in new projects (service map, project policy, prompt/schema templates), see `github-manager-project-setup.md`.
During migration:
- Old env names continue to work with deprecation warnings.
- The old directory path `openai-codex-mcp` forwards to `dev-sandbox-mcp`.
- Existing automation should continue to run without immediate breaking changes.
Keep changes focused, small, and scoped to the component being updated.
This suite is intended to be MIT-licensed at project root.
License boundaries:
- Root `ai-sandbox/` files follow `ai-sandbox/LICENSE` (MIT).
- Nested components can keep their own license files and terms.
- When redistributing, preserve notices from both root and nested component licenses.
