# Docker Setup
**Status:** ✅ Complete
**Last Updated:** December 9, 2025
RiceCoder can be run in Docker containers for isolated, reproducible environments. This guide covers building, running, and configuring RiceCoder in Docker.
```bash
# Clone the repository
git clone https://github.com/moabualruz/ricecoder.git
cd ricecoder

# Build the Docker image
docker build -t ricecoder:latest .

# Verify the image was built
docker images | grep ricecoder
```

```bash
# Display version
docker run --rm ricecoder:latest --version

# Display help
docker run --rm ricecoder:latest --help

# Initialize configuration
docker run --rm ricecoder:latest init
```

Image characteristics:

- Base Image: Alpine Linux 3.18 (minimal, ~5 MB)
- Binary: Statically linked (MUSL), no external dependencies
- Size: ~50-100 MB (including Rust build artifacts)
- User: Non-root user (ricecoder:1000) for security
- Entrypoint: tini (proper signal handling)
The Dockerfile uses a multi-stage build for efficiency:
- **Builder Stage**: Rust 1.75 with build tools
  - Installs dependencies (pkg-config, libssl-dev, musl-tools)
  - Compiles with static linking (MUSL)
  - Produces a statically linked binary
- **Runtime Stage**: Alpine Linux 3.18
  - Minimal base image
  - Only runtime dependencies (ca-certificates, tini)
  - Non-root user for security
  - Final image size: ~50-100 MB
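A multi-stage Dockerfile matching this description might look like the sketch below. This is illustrative only: the stage layout, binary path, and package lists are assumptions based on the bullets above, and the repository's actual Dockerfile is authoritative.

```dockerfile
# --- Builder stage: compile a statically linked MUSL binary ---
FROM rust:1.75 AS builder
RUN apt-get update && apt-get install -y pkg-config libssl-dev musl-tools
RUN rustup target add x86_64-unknown-linux-musl
WORKDIR /src
COPY . .
RUN cargo build --release --target x86_64-unknown-linux-musl

# --- Runtime stage: minimal Alpine image, non-root user, tini entrypoint ---
FROM alpine:3.18
RUN apk add --no-cache ca-certificates tini \
    && adduser -D -u 1000 ricecoder
COPY --from=builder /src/target/x86_64-unknown-linux-musl/release/ricecoder /usr/local/bin/ricecoder
USER ricecoder
WORKDIR /workspace
ENTRYPOINT ["/sbin/tini", "--", "ricecoder"]
CMD ["--help"]
```

Because the builder toolchain lives only in the first stage, the final image carries just the Alpine base, the static binary, and tini.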
```bash
# Run with default help command
docker run ricecoder:latest

# Run specific command
docker run ricecoder:latest --version
docker run ricecoder:latest --help
```

```bash
# Run interactive chat
docker run -it ricecoder:latest chat

# Run with TTY allocation and automatic cleanup
docker run -it --rm ricecoder:latest chat
```

Mount your workspace directory to access files:
```bash
# Mount current directory as /workspace
docker run -it -v $(pwd):/workspace ricecoder:latest chat

# Mount specific directory
docker run -it -v /path/to/project:/workspace ricecoder:latest chat

# Mount with read-only access
docker run -it -v $(pwd):/workspace:ro ricecoder:latest chat
```

Pass environment variables to the container:
```bash
# Set OpenAI API key
docker run -e OPENAI_API_KEY="sk-..." ricecoder:latest chat

# Set multiple variables
docker run \
  -e OPENAI_API_KEY="sk-..." \
  -e RICECODER_LOG_LEVEL="debug" \
  ricecoder:latest chat

# Load from .env file
docker run --env-file .env ricecoder:latest chat
```

Mount configuration files into the container:
```bash
# Mount global config
docker run -v ~/.ricecoder:/home/ricecoder/.ricecoder ricecoder:latest chat

# Mount project config
docker run -v $(pwd)/.agent:/workspace/.agent ricecoder:latest chat

# Mount both
docker run \
  -v ~/.ricecoder:/home/ricecoder/.ricecoder \
  -v $(pwd)/.agent:/workspace/.agent \
  -v $(pwd):/workspace \
  ricecoder:latest chat
```

```bash
# Build with version tag
docker build -t ricecoder:v0.1.6 .

# Build with multiple tags
docker build -t ricecoder:latest -t ricecoder:v0.1.6 .

# Tag existing image
docker tag ricecoder:latest ricecoder:v0.1.6
```

```bash
# Run in background
docker run -d --name ricecoder-daemon ricecoder:latest

# View logs
docker logs ricecoder-daemon

# Stop container
docker stop ricecoder-daemon

# Remove container
docker rm ricecoder-daemon
```

Create a docker-compose.yml file:
```yaml
version: '3.8'
services:
  ricecoder:
    build: .
    image: ricecoder:latest
    container_name: ricecoder
    stdin_open: true
    tty: true
    volumes:
      - .:/workspace
      - ~/.ricecoder:/home/ricecoder/.ricecoder
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - RICECODER_LOG_LEVEL=info
    working_dir: /workspace
    command: chat
```

Run with Docker Compose:
```bash
# Start container
docker-compose up

# Run a one-off command
docker-compose run ricecoder --version

# Stop container
docker-compose down
```

```bash
# Expose port for services
docker run -p 8080:8080 ricecoder:latest

# Connect to host network
docker run --network host ricecoder:latest

# Create custom network
docker network create ricecoder-net
docker run --network ricecoder-net ricecoder:latest
```

Supported environment variables:

| Variable | Description | Example |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key | `sk-...` |
| `ANTHROPIC_API_KEY` | Anthropic API key | `sk-ant-...` |
| `RICECODER_HOME` | Config directory | `/home/ricecoder/.ricecoder` |
| `RICECODER_LOG_LEVEL` | Log level | `debug`, `info`, `warn`, `error` |
| `RICECODER_OFFLINE` | Offline mode | `true`, `false` |
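The `docker run` invocations above get long once mounts and keys are involved. A small wrapper function (hypothetical — not shipped with RiceCoder; the `ricecoder_docker` name and `RICECODER_IMAGE` override are made up for this sketch) bundles the usual flags:

```shell
# ricecoder_docker: run RiceCoder in Docker with the current directory
# mounted as /workspace and API keys forwarded from the host environment.
# Hypothetical convenience wrapper, not part of the RiceCoder project.
ricecoder_docker() {
    # RICECODER_IMAGE is an assumed override hook; defaults to the local tag
    image="${RICECODER_IMAGE:-ricecoder:latest}"
    docker run -it --rm \
        -v "$(pwd):/workspace" \
        -v "$HOME/.ricecoder:/home/ricecoder/.ricecoder" \
        -e OPENAI_API_KEY \
        -e ANTHROPIC_API_KEY \
        -e RICECODER_LOG_LEVEL \
        "$image" "$@"
}

# Usage:
#   ricecoder_docker chat
#   ricecoder_docker review src/main.rs
```

Passing `-e VAR` without a value forwards the variable's value from the host shell, so secrets never need to appear in your shell history.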
Mount configuration files:
```bash
# Global config (~/.ricecoder/config.yaml)
docker run -v ~/.ricecoder:/home/ricecoder/.ricecoder ricecoder:latest

# Project config (.agent/config.yaml)
docker run -v $(pwd)/.agent:/workspace/.agent ricecoder:latest

# Steering files (.ai/steering/)
docker run -v $(pwd)/.ai:/workspace/.ai ricecoder:latest
```

Make sure Docker is running:
```bash
# Start Docker (macOS)
open -a Docker

# Start Docker (Linux)
sudo systemctl start docker

# Check Docker status
docker ps
```

Add your user to the docker group:
```bash
# Add user to docker group
sudo usermod -aG docker $USER

# Apply group changes
newgrp docker

# Verify
docker ps
```

Check build logs:
```bash
# Build with verbose output
docker build --progress=plain -t ricecoder:latest .

# Rebuild from scratch to rule out a stale cache
docker build --no-cache -t ricecoder:latest .
```

Check container logs:
```bash
# View logs
docker logs <container-id>

# Run with interactive terminal
docker run -it ricecoder:latest /bin/sh

# Check entrypoint
docker inspect ricecoder:latest | grep -A 5 Entrypoint
```

Verify mount paths:
```bash
# Check mounted volumes
docker inspect <container-id> | grep -A 10 Mounts

# List files in container
docker run -v $(pwd):/workspace ricecoder:latest ls -la /workspace

# Check permissions on the host
ls -la $(pwd)
```

Increase Docker memory limit:
```bash
# Set memory limit
docker run -m 4g ricecoder:latest chat

# Check current limits
docker stats
```

- Builder stage: ~2 GB (Rust toolchain)
- Runtime stage: ~50-100 MB (Alpine + binary)
- Compressed: ~20-30 MB (when pushed to registry)
- First build: ~10-15 minutes (downloads Rust toolchain)
- Subsequent builds: ~2-5 minutes (uses cache)
- With `--no-cache`: ~10-15 minutes
- Startup time: <2 seconds
- Memory usage: ~50-100 MB (base)
- CPU usage: Minimal when idle
The container runs as non-root user (ricecoder:1000) for security:
```bash
# Check user
docker run ricecoder:latest whoami

# Run as root (not recommended)
docker run --user root ricecoder:latest whoami
```

Run with a read-only filesystem:

```bash
docker run --read-only ricecoder:latest --version
```

Set resource limits:
```bash
# Limit CPU and memory
docker run \
  --cpus 2 \
  --memory 2g \
  ricecoder:latest chat
```

Use Docker secrets for sensitive data:
```bash
# Create secret (requires Docker Swarm mode)
echo "sk-..." | docker secret create openai_key -

# Validate and render the compose configuration
docker-compose config
```

```bash
# Login to Docker Hub
docker login

# Tag image
docker tag ricecoder:latest moabualruz/ricecoder:latest
docker tag ricecoder:latest moabualruz/ricecoder:v0.1.6

# Push to Docker Hub
docker push moabualruz/ricecoder:latest
docker push moabualruz/ricecoder:v0.1.6

# Pull from Docker Hub
docker pull moabualruz/ricecoder:latest
```

```bash
# Tag for private registry
docker tag ricecoder:latest registry.example.com/ricecoder:latest

# Push to private registry
docker push registry.example.com/ricecoder:latest

# Pull from private registry
docker pull registry.example.com/ricecoder:latest
```

```bash
# Run interactive chat
docker run -it \
  -v $(pwd):/workspace \
  -e OPENAI_API_KEY="sk-..." \
  ricecoder:latest chat
```

```bash
# Review code in container
docker run -it \
  -v $(pwd):/workspace \
  -e OPENAI_API_KEY="sk-..." \
  ricecoder:latest review src/main.rs
```

```bash
# Generate code from spec
docker run -it \
  -v $(pwd):/workspace \
  -e OPENAI_API_KEY="sk-..." \
  ricecoder:latest gen --spec my-feature
```

```bash
# Create docker-compose.yml
cat > docker-compose.yml << 'EOF'
version: '3.8'
services:
  ricecoder:
    build: .
    volumes:
      - .:/workspace
      - ~/.ricecoder:/home/ricecoder/.ricecoder
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    working_dir: /workspace
    stdin_open: true
    tty: true
EOF

# Start development environment
docker-compose up -d

# Run commands
docker-compose exec ricecoder chat
docker-compose exec ricecoder review src/main.rs

# Stop when done
docker-compose down
```

Related documentation:

- Installation Setup - Installation methods
- Configuration Guide - Configuration options
- Quick Start Guide - Get started quickly
- Troubleshooting Guide - Common issues