Conversation


@zhravan commented Oct 21, 2025

Issue

Link to related issue(s):


Description

Short summary of what this PR changes or introduces.


Scope of Change

Select all applicable areas impacted by this PR:

  • View (UI/UX)
  • API
  • CLI
  • Infra / Deployment
  • Docs
  • Other (specify): ________

Screenshot / Video / GIF (if applicable)

Attach or embed screenshots, screen recordings, or GIFs demonstrating the feature or fix.


Related PRs (if any)

Link any related or dependent PRs across repos.


Additional Notes for Reviewers (optional)

Anything reviewers should know before testing or merging (e.g., environment variables, setup steps).


Developer Checklist

To be completed by the developer who raised the PR.

  • Add valid/relevant title for the PR
  • Self-review done
  • Manual dev testing done
  • No secrets exposed
  • No merge conflicts
  • Docs added/updated (if applicable)
  • Removed debug prints / secrets / sensitive data
  • Unit / Integration tests passing
  • Follows all standards defined in Nixopus Docs

Reviewer Checklist

To be completed by the reviewer before merge.

  • Peer review done
  • No console.log / fmt.Print statements left
  • No secrets exposed
  • If any DB migrations, migration changes are verified
  • Verified release changes are production-ready

Summary by CodeRabbit

  • New Features

    • Added frontend hot-reload development server for improved development experience.
    • Integrated proxy setup into the development installation workflow.
  • Improvements

    • Enhanced development environment configuration with automatic proxy initialization.
    • Streamlined Docker Compose setup for local development with health checks.
  • Changes

    • Reorganized Docker Compose services for better development environment structure.


coderabbitai bot commented Oct 21, 2025

Walkthrough

Added an Air live-reload configuration for Go API development. Enhanced the Caddy proxy with a listen-directive guard. Refactored the Python CLI development workflow to integrate proxy setup, loading, and service initialization. Renamed the SuperTokens service in Docker Compose. Introduced a new view service with hot-reload support and a Dockerfile for the Next.js development environment.

Changes

Cohort / File(s): Summary

  • Go API Configuration (api/.air.toml): Introduces an Air configuration for live-reload development, defining build paths, filters, and UI settings that drive automatic Go application rebuilds.
  • API Proxy Enhancement (api/internal/features/deploy/proxy/caddy.go): Adds a guard ensuring the Caddy listen directive exists, defaulting to [":80"] when empty.
  • Python CLI Development Refactor (cli/app/commands/install/development.py): Refactors the development workflow to include proxy setup, loading, and unified service initialization; adds proxy_port to the defaults; updates the _get_config signature to support overrides; introduces the _setup_proxy_config, _load_proxy, and _start_all_services methods.
  • Docker Development Composition (docker-compose-dev.yml): Renames the supertokens service to nixopus-supertokens, adds a new view service with hot-reload support, includes healthchecks, updates the SUPERTOKENS_CONNECTION_URI reference, and adds a projects-dummy volume mount.
  • View Development Dockerfile (view/Dockerfile.dev): Introduces a Next.js development Dockerfile based on node:18-alpine, with automatic package manager detection, dependency installation, and development server configuration.

Sequence Diagram

sequenceDiagram
    participant User
    participant CLI as CLI (development.py)
    participant Proxy as Proxy Setup
    participant API as API Service
    participant View as View Service
    participant Docker as Docker Compose

    User->>CLI: Run development install
    CLI->>CLI: _setup_proxy_config()
    CLI->>Proxy: _load_proxy()
    Proxy->>Docker: docker-compose config
    Docker->>API: Build & Start (with Air)
    Docker->>View: Build & Start (Node.js)
    Docker->>Docker: Health checks
    Proxy->>API: Monitor for rebuild
    API->>API: Hot reload via Air
    View->>View: Hot reload via Next.js dev server
    CLI->>CLI: _start_all_services()
    CLI->>User: Development environment ready

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

The changes span multiple layers (Go, Python, Docker) with mixed complexity. The Python CLI refactoring introduces new workflow logic and method signatures that require careful verification of control-flow integration. The Docker Compose changes involve a service rename and environment variable updates that need dependency validation. The configuration and Dockerfile additions are straightforward but add to the overall scope.

Possibly related PRs

Suggested labels

nixopus-view, nixopus-api, backend, api, nixopus-docker, cli

Suggested reviewers

  • raghavyuva

Poem

🐰 With Air that whistles through the Go,
And Caddy listening down below,
A View springs up with Next.js grace,
Services dance in Docker's space—
Live reload hops make dev quite fast!

Pre-merge checks

✅ Passed checks (3 passed)

  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The pull request title "feat: live reloading dockerized dev setup" directly and accurately summarizes the main objective of the changeset. The changes comprehensively establish live reloading for a dockerized development environment: introducing an Air configuration for Go live-reload (.air.toml), adding a new Dockerfile for the view service with development-focused settings, restructuring the docker-compose file with a new view service and refactored service naming, and updating the development install command to integrate proxy setup and service orchestration. The title is concise, clear, and specific enough that a teammate reviewing the commit history would understand the primary change without confusion.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 91.67%, which is sufficient; the required threshold is 80.00%.

@zhravan requested a review from raghavyuva on October 22, 2025 at 08:25
@zhravan marked this pull request as ready for review on October 22, 2025 at 08:25
@zhravan changed the title from "Dev setup modification" to "feat: live reloading dockerized dev setup" on Oct 22, 2025
@coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (3)
cli/app/commands/install/development.py (1)

403-455: Consider more robust configuration templating and validation.

The current approach uses string replacement to populate the Caddy configuration template. While functional, this has several concerns:

  1. Fragile templating: String replacement is brittle and doesn't validate that placeholders exist or are correctly formatted.
  2. No validation: After JSON manipulation (parsing and modifying the listen directive), there's no validation that the result is a valid Caddy configuration.
  3. Hardcoded domains: The domains app.localhost and api.localhost are hardcoded but could benefit from being configurable.
  4. Listen directive inconsistency: Line 446 sets [":80", ":443"], but api/internal/features/deploy/proxy/caddy.go at line 86 defaults to [":80"] only.

Consider these improvements:

  1. Use a proper templating library (like Jinja2, already common in Python projects) instead of string replacement
  2. Validate the JSON structure after modifications
  3. Make domains configurable through the config system
  4. Align listen directives across the codebase

Example validation after manipulation:

     # Parse and write back
     caddy_config = json.loads(config_str)
     
     # Ensure nixopus server has listen directive for both HTTP and HTTPS
     if "apps" in caddy_config and "http" in caddy_config["apps"]:
         if "servers" in caddy_config["apps"]["http"]:
             if "nixopus" in caddy_config["apps"]["http"]["servers"]:
                 server = caddy_config["apps"]["http"]["servers"]["nixopus"]
                 if "listen" not in server or not server["listen"]:
                     server["listen"] = [":80", ":443"]
+    
+    # Validate the configuration structure
+    if "apps" not in caddy_config or "http" not in caddy_config["apps"]:
+        raise Exception("Invalid Caddy configuration: missing required apps.http section")
     
     with open(caddy_json_template, "w") as f:
         json.dump(caddy_config, f, indent=2)
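
As a sketch of the templating suggestion in point 1 (the Jinja2 usage and the function below are assumptions, and it presumes the template's {env.*} tokens were rewritten as Jinja2 {{ ... }} placeholders), rendering and parsing in one step also covers point 2's validation:

import json
from jinja2 import Template

def render_caddy_config(template_path: str, app_domain: str, api_domain: str) -> dict:
    # Render placeholders with Jinja2 instead of chained str.replace calls.
    with open(template_path) as f:
        template = Template(f.read())
    rendered = template.render(APP_DOMAIN=app_domain, API_DOMAIN=api_domain)
    # json.loads doubles as a structural sanity check on the rendered output.
    return json.loads(rendered)
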
view/Dockerfile.dev (2)

12-17: Package manager detection is good, but consider consistency in fallback.

The conditional installation logic properly handles yarn, npm, and pnpm lockfiles. However, the fallback case (line 16) uses npm install without the --frozen-lockfile equivalent, which is inconsistent with the other branches that ensure deterministic installations.

Consider using a more consistent approach for the fallback:

 RUN \
   if [ -f yarn.lock ]; then yarn install --frozen-lockfile --network-timeout 600000; \
   elif [ -f package-lock.json ]; then npm ci; \
   elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm i --frozen-lockfile; \
-  else echo "No lockfile found. Installing with npm..." && npm install; \
+  else echo "Warning: No lockfile found. Run 'npm install' locally first." && exit 1; \
   fi

Alternatively, document why the fallback doesn't need locked dependencies for development.


13-13: Consider reducing the yarn network timeout.

The 600000ms (10 minutes) network timeout for yarn is very long and might mask genuine network or registry issues. Typical timeouts are 1-2 minutes (60000-120000ms).

Consider reducing the timeout:

-  if [ -f yarn.lock ]; then yarn install --frozen-lockfile --network-timeout 600000; \
+  if [ -f yarn.lock ]; then yarn install --frozen-lockfile --network-timeout 120000; \
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4494bbb and ea6fd9e.

📒 Files selected for processing (6)
  • api/.air.toml (1 hunks)
  • api/internal/features/deploy/proxy/caddy.go (1 hunks)
  • api/internal/features/supertokens/auth.go (0 hunks)
  • cli/app/commands/install/development.py (10 hunks)
  • docker-compose-dev.yml (6 hunks)
  • view/Dockerfile.dev (1 hunks)
💤 Files with no reviewable changes (1)
  • api/internal/features/supertokens/auth.go
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-15T22:28:14.568Z
Learnt from: zhravan
PR: raghavyuva/nixopus#0
File: :0-0
Timestamp: 2025-09-15T22:28:14.568Z
Learning: In the nixopus codebase, there is a critical proxy loop issue in api/internal/features/deploy/tasks/create.go where Caddy domain registration uses the same domain as both the domain name and upstream host in AddDomainWithAutoTLS(), causing applications to never be properly served. The fix requires using proper upstream host configuration like "localhost" or "127.0.0.1" instead of the domain name.

Applied to files:

  • api/internal/features/deploy/proxy/caddy.go
🧬 Code graph analysis (1)
cli/app/commands/install/development.py (5)
cli/app/commands/proxy/load.py (2)
  • Load (121-134)
  • LoadConfig (68-81)
cli/app/utils/config.py (2)
  • Config (10-118)
  • get_yaml_value (39-49)
cli/app/commands/install/base.py (1)
  • _get_config (36-49)
cli/app/commands/install/run.py (9)
  • _get_config (125-141)
  • run (175-223)
  • _run_preflight_checks (263-265)
  • _setup_clone_and_config (274-291)
  • _setup_proxy_config (337-365)
  • _create_env_files (293-335)
  • _setup_ssh (367-388)
  • _load_proxy (410-430)
  • _handle_installation_error (256-261)
cli/app/utils/timeout.py (1)
  • TimeoutWrapper (6-26)
🔇 Additional comments (7)
api/.air.toml (1)

13-14: Verify exclude_file path patterns match intended files.

The exclude_file patterns include paths like "api/versions.json", "api/tmp/*", and "api/logs/*". If Air runs from the api/ directory (as suggested by root = "."), these patterns would try to match api/api/versions.json, which likely doesn't exist. If Air runs from the repository root, then the root = "." should probably be root = "./api".

Review the path patterns and ensure they correctly match the intended files relative to the working directory where Air is invoked.
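
For illustration, a minimal .air.toml using root-relative patterns might look like the sketch below (an assumption-laden example, not the file from this PR; it presumes Air is invoked from inside the api/ directory):

# Patterns resolve relative to root, so no "api/" prefix is needed
# when Air runs from inside api/.
root = "."
tmp_dir = "tmp"

[build]
  cmd = "go build -o ./tmp/main ."
  bin = "./tmp/main"
  include_ext = ["go"]
  exclude_dir = ["tmp", "logs"]
  exclude_file = ["versions.json"]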

cli/app/commands/install/development.py (3)

91-143: Excellent platform support check with clear guidance.

The WSL2 check provides comprehensive guidance for Windows users, clearly explaining what works natively versus what requires WSL2, and includes step-by-step installation instructions. This significantly improves the developer experience.


507-576: Good improvements to service validation.

The updated validation logic now checks services through the Caddy proxy (api.localhost, app.localhost), includes proper retry logic with timeouts, and optionally displays Docker container status. These enhancements significantly improve the reliability of the development setup process.


580-658: Comprehensive and helpful success messaging.

The success messages provide clear, actionable information for developers including proxy URLs, direct access endpoints, logging commands, hot-reload capabilities, and configuration file locations. The distinction between minimal and verbose output is well-balanced.

docker-compose-dev.yml (3)

29-29: Consistent service renaming across configuration.

The renaming of the SuperTokens service from supertokens to nixopus-supertokens is applied consistently across both the service definition and the connection URI. This improves naming consistency in the development environment.

Also applies to: 152-154


71-110: Well-configured view service with proper isolation.

The new view service is properly configured with:

  • Source code mounting for hot reload
  • Isolated node_modules via named volume (line 94) to avoid host/container conflicts
  • Appropriate healthcheck and dependency ordering
  • Development-specific environment variables

The named volume for node_modules is a best practice for avoiding issues with different architectures and file systems.
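
A generic sketch of that pattern (service and volume names here are illustrative, not the PR's exact definitions):

# Source is bind-mounted for hot reload; node_modules lives in a named
# volume so host and container dependency trees never collide.
services:
  view:
    build:
      context: ./view
      dockerfile: Dockerfile.dev
    volumes:
      - ./view:/app
      - view_node_modules:/app/node_modules

volumes:
  view_node_modules: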


88-88: WATCHPACK_POLLING may impact performance.

Setting WATCHPACK_POLLING=true enables polling for file changes, which is more resource-intensive than native file system events. This is sometimes necessary for Docker on Windows or network mounts, but it increases CPU usage.

Consider documenting why polling is required and whether it can be made conditional based on the host OS. Users on Linux/macOS may not need this setting.
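
One way to make it conditional, as suggested (a sketch using Compose's ${VAR:-default} substitution; the opt-in convention is an assumption):

# Hosts that need polling (e.g., Docker Desktop on Windows) export
# WATCHPACK_POLLING=true; Linux/macOS hosts keep native file events.
environment:
  - WATCHPACK_POLLING=${WATCHPACK_POLLING:-false}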

Comment on lines +45 to +46
# Mount projects-dummy for Docker build contexts
- ${HOME}/projects-dummy:${HOME}/projects-dummy

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify security implications of projects-dummy mount.

Mounting ${HOME}/projects-dummy:${HOME}/projects-dummy gives the API container read/write access to a directory in the user's home directory. The comment indicates this is for "Docker build contexts," but this could be a security risk if it provides broader access than necessary.

Please clarify:

  1. Why is this mount necessary for Docker build contexts?
  2. Can this be scoped more narrowly (e.g., read-only, specific subdirectory)?
  3. Is this documented for users so they understand what access they're granting?

🏁 Script executed:

#!/bin/bash
# Search for usage of projects-dummy in the codebase to understand its purpose

echo "=== Searching for projects-dummy references ==="
rg -n "projects-dummy" --type-add 'config:*.{yml,yaml,json,env}' --type config --type go --type py

echo ""
echo "=== Checking for Docker build context usage ==="
rg -n "BuildContext|build.*context" --type go -A 3 -B 3

Length of output: 9397


I need to gather more context about projects-dummy. Let me search for documentation and configuration files that might explain its purpose.


🏁 Script executed:

#!/bin/bash
# Search for projects-dummy in documentation and config files

echo "=== Searching for projects-dummy in documentation ==="
fd -e md -e txt -e rst | head -20 | xargs rg -l "projects-dummy" 2>/dev/null || echo "No markdown/text docs found"

echo ""
echo "=== Searching for projects-dummy in all files ==="
rg -n "projects-dummy" -C 2

echo ""
echo "=== Checking for .env files and setup documentation ==="
fd -e env -e sample | xargs rg -l "projects-dummy" 2>/dev/null || echo "Not found in env files"

echo ""
echo "=== Checking docker-compose files for context ==="
find . -name "*docker-compose*" -type f | head -10 | xargs cat 2>/dev/null | grep -A 5 -B 5 "projects-dummy" || echo "Not found in other compose files"

Length of output: 1209


🏁 Script executed:

#!/bin/bash
# Search for projects-dummy in documentation and configuration

echo "=== All references to projects-dummy in codebase ==="
rg -n "projects-dummy" -C 3

echo ""
echo "=== Search for documentation files ==="
find . -type f \( -name "README*" -o -name "*.md" -o -name "CONTRIBUTING*" \) -exec grep -l "projects-dummy" {} \; 2>/dev/null || echo "No docs mentioning projects-dummy found"

echo ""
echo "=== Check environment example files ==="
find . -type f \( -name ".env*" -o -name "*.example" \) -exec cat {} \; 2>/dev/null | grep -i "projects" || echo "No env examples found"

echo ""
echo "=== Check if projects-dummy is conditionally used ==="
rg -n "projects-dummy|PROJECTS_DUMMY" -C 5

Length of output: 1590


Remove or clarify the projects-dummy mount—it appears unused and poses unnecessary security risk.

Based on code analysis, the projects-dummy mount is not used by the application:

  1. Build contexts don't use this mount: The deployment service derives build context from cloned repository paths (contextPath = repoPath), not from projects-dummy
  2. No code references it: No Go code, environment files, or documentation mention projects-dummy
  3. Unnecessary home directory access: Unlike the SSH mount (read-only), this grants full read/write access to a user directory with no documented purpose
  4. Misleading comment: The comment claiming it's "for Docker build contexts" contradicts actual build context logic

Either remove this mount or add documentation explaining:

  • What specific use case requires this directory
  • Why it cannot use the existing build context mechanism
  • Security implications for developers
🤖 Prompt for AI Agents
In docker-compose-dev.yml around lines 45-46, the projects-dummy bind mount
(${HOME}/projects-dummy:${HOME}/projects-dummy) appears unused and grants
unnecessary read/write access to a user home directory; either remove the mount
entirely or replace it with a minimally-privileged, documented alternative: if
unused, delete the volume line and the comment; if needed for a specific dev
workflow, change it to a read-only mount or a dedicated minimal path (e.g.,
./projects-dummy:${CONTAINER_PATH}:ro), add a clear comment describing the exact
use case, why the app cannot use existing build-contexts, and call out the
security implications for developers.

@raghavyuva (Owner) left a comment

LGTM!

Can you address the critical CodeRabbit comments?

@zhravan changed the base branch from master to feat/develop on October 22, 2025 at 18:05
@zhravan changed the base branch from feat/develop back to master on October 22, 2025 at 18:05
@coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cli/app/commands/install/development.py (1)

96-103: Fix the WSL detection logic.

The code reads /proc/version twice in the same condition on Line 101. The second f.read() will return an empty string because the file pointer is at EOF after the first read, causing WSL detection to fail if the file contains "wsl" but not "microsoft".

Apply this diff to fix the issue:

         # Check if running in WSL
         is_wsl = False
         try:
             if os.path.exists("/proc/version"):
                 with open("/proc/version", "r") as f:
-                    is_wsl = "microsoft" in f.read().lower() or "wsl" in f.read().lower()
+                    content = f.read().lower()
+                    is_wsl = "microsoft" in content or "wsl" in content
         except Exception:
             is_wsl = False
🧹 Nitpick comments (2)
cli/app/commands/install/development.py (2)

403-455: Add error handling for JSON parsing.

The method correctly sets up proxy configuration with localhost domains and host.docker.internal reverse proxy URLs. However, the JSON parsing on Line 438 could fail if the template is malformed, which would raise an unclear error.

Consider adding explicit error handling:

         # Replace placeholders
         config_str = config_str.replace("{env.APP_DOMAIN}", view_domain)
         config_str = config_str.replace("{env.API_DOMAIN}", api_domain)
         config_str = config_str.replace("{env.APP_REVERSE_PROXY_URL}", app_reverse_proxy_url)
         config_str = config_str.replace("{env.API_REVERSE_PROXY_URL}", api_reverse_proxy_url)
 
         # Parse and write back
-        caddy_config = json.loads(config_str)
+        try:
+            caddy_config = json.loads(config_str)
+        except json.JSONDecodeError as e:
+            raise Exception(f"Invalid Caddy config template JSON: {e}")

523-575: Consider extracting retry logic and magic numbers.

The validation logic correctly checks services through the Caddy proxy using localhost domains. However, there's code duplication between API and View validation loops, and magic numbers (10 retries, 3-second delays, 5-second timeouts) are hardcoded.

Consider extracting the retry logic:

def _check_service_health(self, url: str, service_name: str, max_retries: int = 10, retry_delay: int = 3, timeout: int = 5) -> bool:
    """Check if a service is responding at the given URL"""
    for i in range(max_retries):
        try:
            response = urllib.request.urlopen(url, timeout=timeout)
            if response.status == 200:
                return True
        except Exception:
            if i < max_retries - 1:
                time.sleep(retry_delay)
    return False

Then use it for both API and View:

api_ready = self._check_service_health("http://api.localhost/api/v1/health", "API")
view_ready = self._check_service_health("http://app.localhost", "View")
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ea6fd9e and 4f7b683.

📒 Files selected for processing (1)
  • cli/app/commands/install/development.py (10 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
cli/app/commands/install/development.py (5)
cli/app/commands/proxy/load.py (2)
  • Load (121-134)
  • LoadConfig (68-81)
cli/app/utils/config.py (2)
  • Config (10-118)
  • get_yaml_value (39-49)
cli/app/commands/install/base.py (1)
  • _get_config (36-49)
cli/app/commands/install/run.py (9)
  • _get_config (125-141)
  • run (175-223)
  • _run_preflight_checks (263-265)
  • _setup_clone_and_config (274-291)
  • _setup_proxy_config (337-365)
  • _create_env_files (293-335)
  • _setup_ssh (367-388)
  • _load_proxy (410-430)
  • _handle_installation_error (256-261)
cli/app/utils/timeout.py (1)
  • TimeoutWrapper (6-26)
🔇 Additional comments (9)
cli/app/commands/install/development.py (9)

13-13: LGTM!

The new imports for proxy configuration components are correctly added and used later in the file.

Also applies to: 26-26


110-143: LGTM!

The detailed Windows/WSL2 guidance is comprehensive and user-friendly. It clearly explains what works on native Windows versus what requires WSL2, with actionable installation steps.


169-169: LGTM!

The proxy_port is correctly added to defaults and consistently retrieved from config.


173-173: LGTM!

The signature change to accept optional user_config and defaults parameters improves flexibility while maintaining backward compatibility through sensible defaults.

Also applies to: 190-190


198-198: LGTM!

The workflow correctly sequences proxy setup (prepare config file) before starting services, and proxy loading (via Admin API) after services are running. This ensures Caddy is available to receive the configuration.

Also applies to: 201-202
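
A condensed sketch of that sequencing (the class and method bodies below are stand-ins; only the ordering is taken from the walkthrough):

class DevInstall:
    # Hypothetical stand-in for the CLI's installer class.
    def _setup_proxy_config(self): ...  # write the Caddy JSON config to disk
    def _start_all_services(self): ...  # docker compose up (Caddy included)
    def _load_proxy(self): ...          # POST the config to Caddy's Admin API

    def run(self):
        self._setup_proxy_config()  # before services: the file must exist
        self._start_all_services()  # containers come up, Caddy among them
        self._load_proxy()          # after services: the Admin API is reachable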


441-446: LGTM!

The listen directive guard safely handles nested dictionary access and only adds default listeners when missing or empty, preserving any existing configuration.
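
That guard pattern as a standalone sketch (the helper name and server key are assumptions):

def ensure_listen_directive(caddy_config: dict) -> dict:
    # Walk the nested structure defensively; set defaults only when the
    # directive is missing or empty, leaving existing values untouched.
    server = (
        caddy_config.get("apps", {})
        .get("http", {})
        .get("servers", {})
        .get("nixopus")
    )
    if server is not None and not server.get("listen"):
        server["listen"] = [":80", ":443"]
    return caddy_config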


456-485: LGTM!

The proxy loading implementation correctly uses the Admin API with proper timeout handling and error reporting. The int casting on Line 458 is good defensive coding.
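
For reference, loading a config through Caddy's Admin API comes down to a POST to /load (a minimal sketch; the CLI's actual timeout, port handling, and error reporting may differ):

import json
import urllib.request

def load_proxy(caddy_config: dict, admin_port: int = 2019) -> None:
    # Caddy's Admin API accepts a complete JSON config at POST /load.
    request = urllib.request.Request(
        f"http://localhost:{admin_port}/load",
        data=json.dumps(caddy_config).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(request, timeout=10)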


486-506: LGTM!

Consolidating service startup into a single Docker Compose command simplifies the workflow and is more efficient than starting services individually.
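
The consolidated startup can reduce to a single subprocess call (a sketch; the flags and file name are assumptions based on the compose file in this PR):

import subprocess

def start_all_services(compose_file: str = "docker-compose-dev.yml") -> None:
    # One `docker compose up` replaces per-service starts; --build picks up
    # Dockerfile changes and -d detaches so the CLI can continue.
    subprocess.run(
        ["docker", "compose", "-f", compose_file, "up", "-d", "--build"],
        check=True,
    )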


590-658: LGTM!

The success message is comprehensive and developer-friendly. It correctly displays localhost domains for proxy access, mentions hot reload capabilities, and provides useful commands for common development tasks.

@zhravan merged commit a05a0d6 into raghavyuva:master on Oct 22, 2025 (7 of 8 checks passed).