labeeb-io/AI-Sandbox

AI Sandbox

A developer toolkit for running AI tools (like Gemini CLI and Codex) in a secure, isolated Docker sandbox, plus supporting local workflows (proxy, MCP integration, and insights reporting).

This toolkit is designed for developers who want to leverage AI assistants on their local codebase without granting unrestricted access. It provides full project context in read-only mode while only allowing modifications to specific, user-approved directories.

Developer Note: Hi! This project grew out of the need to safely integrate powerful AI tools like Codex and Gemini into complex local development workflows without compromising security or codebase integrity. I hope it helps you work smarter and safer with AI. You can find me on LinkedIn.

Key Features

  • Secure by Default: Mounts your entire project as read-only for context, and only selected service directories as read-write.
  • Secret Redaction: Automatically backs up and redacts sensitive information like API keys from your code before the AI session begins.
  • Seamless Authentication: Intelligently mounts host-machine credentials for the chosen tool (e.g., ~/.codex, ~/.config/gemini-cli) into the container, so your tools work out-of-the-box.
  • Multi-Tool Support: Easily switch between different AI tools (e.g., gemini, codex).
  • Interactive & Non-Interactive Modes:
    • Launch an interactive AI chat session (TUI).
    • Run non-interactive, one-shot prompts.
    • Get a direct bash shell inside the sandboxed container for debugging or manual changes (--shell).
  • Smart Logging: Logs non-interactive sessions. Automatically detects interactive TUIs (like codex run without arguments) and runs them directly for full functionality, without interfering with their I/O via piping.
  • Post-Session Control: After your session, a menu lets you:
    • Review all changed files.
    • Generate a git-compatible diff file to easily apply changes in your IDE.
    • Restore your original files from the backup, discarding all changes.
  • Robust & User-Friendly: Colored output, error handling, and graceful session management make it a pleasure to use.

Prerequisites

Before you begin, ensure you have the following installed on your system:

  1. Docker: To run the containerized environment.
  2. Git: Required for the "Generate Diff" feature.
  3. rsync: Required for the backup and restore functionality.
  4. AI CLI Tools: The Docker image (ai-sandbox by default) must have the desired AI CLI tools (e.g., gemini, codex) installed.
  5. gitleaks (Optional): Required only if using the --gitleaks flag.
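
A quick way to confirm the host tools are present before starting is a small check like the following (an illustrative sketch, not part of the toolkit itself):

```shell
# Report which of the required host tools are missing, if any.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "ok"
}

# Example: check_tools docker git rsync
```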

Installation

Recommended setup order:

  1. Install Node dependencies (proxy/runtime tools):
cd ai-sandbox
npm install
  2. Create one shared runtime env file in the root:
cp .env.example .env

Then edit ai-sandbox/.env and set at minimum:

  • LLM_API_BASE_URL
  • LLM_API_KEY
  3. Install MCP Python dependencies (required for --enrich-with-guru):
cd ai-sandbox/dev-sandbox-mcp
./setup_and_run.sh

Notes:

  • dev-sandbox-mcp scripts now fall back to ../.env, so a single root .env is enough.
  • dev-sandbox-mcp/.env is optional (local override only).
  • If a Codex MCP registration prompt appears in setup_and_run.sh, you can accept or skip it; installing the dependencies is the important part.
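
With that layout, a minimal ai-sandbox/.env might look like this (the endpoint and key values below are placeholders; use your actual provider details):

```shell
# ai-sandbox/.env — minimal runtime configuration (placeholder values)
LLM_API_BASE_URL=https://api.example.com/v1
LLM_API_KEY=replace-with-your-key
```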

AI Sandbox Components

AI Sandbox is a small developer toolkit for running AI-assisted workflows locally with safer defaults.

It currently includes:

  • ai-server.js: OpenAI-compatible proxy service (provider-agnostic via env config)
  • insights/: local Codex history analytics and HTML reporting
  • dev-sandbox-mcp/: MCP server for coding/developer tools against OpenAI-compatible/local LLM APIs

opencode-mcp exists in this repository but is intentionally out of scope for current migration work.

Repository Layout

ai-sandbox/
  README.md
  Dockerfile
  ai-server.js
  moe-proxy-server.md
  config/
    llm-provider.env.example
  insights/
  insights.sh
  dev-sandbox-mcp/
  openai-codex-mcp/   # compatibility wrapper (deprecated path)

Quick Start

Quick Commands (Makefile)

cd ai-sandbox
make help

Most used targets:

# One-time setup
make init

# Run proxy
make proxy-start

# Run MCP server
make mcp-run

# Generate insights report
make insights-30d
make insights-guru
make insights-showcase
make insights-pdf

# Validate local code changes
make check

1. Proxy

cd ai-sandbox
npm install
cp config/llm-provider.env.example .env
npm run start

Core env vars:

  • LLM_API_BASE_URL
  • LLM_API_KEY
  • SANDBOX_PROXY_PORT

Legacy aliases are still supported (LONGCAT_*, PROXY_TOKEN, DEBUG, etc.) during migration.
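The alias handling can be sketched as a simple "canonical first, legacy second" lookup. The specific legacy variable names below are assumptions for illustration; only the LONGCAT_ prefix appears in the migration notes, and the proxy may implement this differently:

```shell
# Resolve an env value: prefer the canonical LLM_* name, fall back to the
# legacy alias with a deprecation warning on stderr.
resolve_env() {
  canonical="$1"; legacy="$2"
  eval "canon_val=\${$canonical:-}"
  eval "legacy_val=\${$legacy:-}"
  if [ -n "$canon_val" ]; then
    echo "$canon_val"
  elif [ -n "$legacy_val" ]; then
    echo "deprecated: $legacy, use $canonical instead" >&2
    echo "$legacy_val"
  fi
}

# Example (hypothetical legacy name): resolve_env LLM_API_KEY LONGCAT_API_KEY
```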

Detailed proxy runtime and endpoint behavior:

  • moe-proxy-server.md
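
Since the proxy is OpenAI-compatible, a quick smoke test can be sent with curl. The port default (8080) and the /v1/chat/completions route are assumptions here; check moe-proxy-server.md for the actual endpoint behavior:

```shell
# Send an OpenAI-style chat completion request to the locally running proxy.
# SANDBOX_PROXY_PORT and the route are assumptions; see moe-proxy-server.md.
URL="http://localhost:${SANDBOX_PROXY_PORT:-8080}/v1/chat/completions"
curl -s "$URL" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${LLM_API_KEY:-placeholder}" \
  -d '{"model":"default","messages":[{"role":"user","content":"ping"}]}' \
  || echo "proxy not reachable at $URL"
```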

Docker Agent Runtime

The Dockerfile provides a small container runtime for AI CLI agents (codex, gemini), along with common debugging tools.

Build:

cd ai-sandbox
docker build -t ai-sandbox-agent .

Run interactively:

docker run --rm -it \
  -v "$(pwd)":/workspace \
  -w /workspace \
  ai-sandbox-agent

Run with stricter sandboxing (read-only root + scoped writable mounts):

docker run --rm -it \
  --read-only \
  --cap-drop=ALL \
  --tmpfs /tmp:rw,noexec,nosuid,size=512m \
  --tmpfs /home/node:rw,noexec,nosuid,size=256m \
  -v "$(pwd)":/workspace:ro \
  -v "$(pwd)/api":/workspace/api:rw \
  -w /workspace \
  ai-sandbox-agent

Notes:

  • Default user is non-root (node).
  • Runtime home is /tmp (XDG_* paths are under /tmp/.cache and /tmp/.config).
  • Intended pattern is read-only or scoped mounts from host projects, with explicit writable areas.

2. Insights Report

bash ai-sandbox/insights.sh --days 30

For Guru enrichment:

bash ai-sandbox/insights.sh --days 30 --enrich-with-guru

Requirements:

  • dev-sandbox-mcp dependencies must be installed first (./setup_and_run.sh).
  • For public demos/showcase, anonymize and enrich the generated report:
make insights-showcase
  • For PDF export, install Playwright once:
cd ai-sandbox
npm install -D playwright
npx playwright install chromium
make insights-pdf

Output:

  • ai-sandbox/insights-report/index.html
  • ai-sandbox/insights-report/data.json
  • ai-sandbox/insights-report/report.pdf (via make insights-pdf)

Sample:

Codex Insights Report Sample

3. MCP Service

cd ai-sandbox/dev-sandbox-mcp
cp .env.template .env
./setup_and_run.sh

Compatibility notes:

  • New canonical path: ai-sandbox/dev-sandbox-mcp
  • Legacy path kept as wrapper: ai-sandbox/openai-codex-mcp
  • Legacy MCP alias openai_codex is still registered alongside dev_sandbox_mcp

Configuration Model

All components are moving toward canonical LLM_* configuration keys.

Primary reference:

  • config/llm-provider.env.example

Recommended workflow:

  • Keep a single runtime env at ai-sandbox/.env.
  • dev-sandbox-mcp scripts automatically fall back to ../.env when a local dev-sandbox-mcp/.env is missing.
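
The fallback described above can be sketched as follows (illustrative only; the actual dev-sandbox-mcp scripts may implement it differently):

```shell
# Prefer a local .env; otherwise load the shared one from the parent directory.
load_env() {
  for candidate in .env ../.env; do
    if [ -f "$candidate" ]; then
      set -a            # auto-export everything the env file defines
      . "$candidate"
      set +a
      echo "loaded $candidate"
      return 0
    fi
  done
  echo "no .env found" >&2
  return 1
}
```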

Git Manager Setup

For configuring reusable commit-planning behavior in new projects (service map, project policy, prompt/schema templates), see:

  • github-manager-project-setup.md

Backward Compatibility

During migration:

  • Old env names continue to work with deprecation warnings.
  • Old directory path openai-codex-mcp forwards to dev-sandbox-mcp.
  • Existing automation should continue to run without immediate breaking changes.

Contributing

Keep changes focused, small, and scoped to the component being updated.

License

This suite is intended to be MIT-licensed at project root.

License boundaries:

  • Root ai-sandbox/ files follow ai-sandbox/LICENSE (MIT).
  • Nested components can keep their own license files and terms.
  • When redistributing, preserve notices from both root and nested component licenses.
