feat: per-persona model selection for AI CLI adapters #27

@johannes-engler-mw

Description

Summary

Add the ability to configure different AI models for different reviewer personas. Currently, the entire review workflow runs as a single AI process, and all reviewers inherit the same model. This feature would allow users to assign specific models to specific reviewers (e.g., use a stronger model for principal reviews and a faster model for quality checks).

Motivation

  • Different review personas have different complexity needs — a holistic architecture review benefits from a more capable model, while a focused lint-style check can use a faster/cheaper one.
  • OpenCode already supports `--model provider/model` per invocation, and Claude Code supports `--model`. The mechanism exists but OCR doesn't expose it.
  • Cost optimization: users can balance review quality vs. token spend per persona.

Current Architecture

  1. The dashboard's command-runner.ts spawns one AI process for the entire 8-phase review workflow.
  2. The AI acts as "Tech Lead" and spawns individual reviewer tasks internally (Phase 4) using the CLI's built-in Task tool.
  3. All reviewers inherit the parent process's model — no per-persona control.

Key files

| File | Role |
| --- | --- |
| `packages/dashboard/src/server/services/ai-cli/types.ts` | `SpawnOptions` type — no `model` field |
| `packages/dashboard/src/server/services/ai-cli/opencode-adapter.ts` | OpenCode spawn — no `--model` flag |
| `packages/dashboard/src/server/services/ai-cli/claude-adapter.ts` | Claude Code spawn — no `--model` flag |
| `packages/dashboard/src/server/socket/command-runner.ts` | Orchestrates workflow spawn (single process) |
| `.ocr/config.yaml` | `default_team` config — count only, no model field |
| `.ocr/skills/references/workflow.md` | Phase 4 reviewer spawning instructions |

Proposed Design

1. Config schema (config.yaml)

Extend default_team to support model per reviewer type:

```yaml
default_team:
  principal:
    count: 2
    model: anthropic/claude-sonnet-4-20250514
  quality:
    count: 2
    model: anthropic/claude-haiku-4-5-20251001
  security:
    count: 1
    # model omitted = use default
```

Backwards-compatible: `principal: 2` (a bare number) still works as shorthand for `{ count: 2 }`.
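The shorthand handling could be a small normalization step at config-load time. A minimal sketch, assuming a `ReviewerConfig` shape matching the proposed schema — the type and function names here are illustrative, not the actual OCR code:

```typescript
// Sketch only: ReviewerConfig mirrors the proposed per-reviewer schema.
type ReviewerConfig = { count: number; model?: string }

// Accept either the bare-number shorthand (principal: 2) or the
// extended object form ({ count: 2, model: "..." }).
function normalizeReviewer(value: number | ReviewerConfig): ReviewerConfig {
  return typeof value === "number" ? { count: value } : value
}
```

Running both forms through one normalizer keeps the rest of the pipeline ignorant of the shorthand.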

2. Type changes (SpawnOptions)

```typescript
export type SpawnOptions = {
  prompt: string
  cwd: string
  mode: SpawnMode
  maxTurns?: number
  allowedTools?: string[]
  resumeSessionId?: string
  model?: string // NEW: e.g. "anthropic/claude-sonnet-4-20250514"
}
```

3. Adapter changes

Both adapters pass `--model` when `opts.model` is set:

  • OpenCode: `opencode run "prompt" --format json --model <model>`
  • Claude Code: `claude --print --model <model> ...`
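The adapter change amounts to conditional argv construction. A hedged sketch for the OpenCode side (the real adapter in `opencode-adapter.ts` may build its arguments differently; `Opts` is a trimmed-down stand-in for `SpawnOptions`):

```typescript
// Illustrative argv builder for the OpenCode adapter; not the actual code.
type Opts = { prompt: string; model?: string }

function buildOpencodeArgs(opts: Opts): string[] {
  const args = ["run", opts.prompt, "--format", "json"]
  // Append --model only when explicitly configured, so the CLI's own
  // default model applies otherwise (see "Default model" below).
  if (opts.model) args.push("--model", opts.model)
  return args
}
```

Keeping the flag conditional means existing single-model invocations produce byte-identical command lines.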

4. Architectural change: per-reviewer process spawning

This is the core change. Instead of one monolithic AI session:

| Phase | Process | Model |
| --- | --- | --- |
| 1-3 (Tech Lead) | Single AI process | Default model |
| 4 (Reviews) | One AI process per reviewer | Per-reviewer model from config |
| 5-8 (Synthesis) | Single AI process | Default model |

command-runner.ts would need to:

  1. Parse default_team config (including model fields)
  2. Run Phases 1-3 as one process (Tech Lead analysis)
  3. Spawn N separate processes for Phase 4, each with its own --model flag and reviewer persona prompt
  4. Collect all review outputs
  5. Run Phases 5-8 as another process for synthesis

5. Workflow instructions update

Update .ocr/skills/references/workflow.md Phase 4 to reflect that reviewers may be pre-spawned by the command runner rather than self-spawned by the Tech Lead.

Complexity & Considerations

  • Backwards compatibility: Number-only `default_team` values (`principal: 2`) must keep working.
  • Process orchestration: Phases 1-3 must complete before Phase 4 starts; Phase 4 reviewers can run in parallel; Phases 5-8 need all reviews collected.
  • State tracking: `ocr state transition` calls need to work across multiple processes.
  • Error handling: Individual reviewer process failures shouldn't abort the entire review.
  • Session context: Each reviewer process needs the same context files (discovered-standards, requirements, Tech Lead guidance) but doesn't share a conversation history with other reviewers.
  • Default model: When no model is specified per-reviewer, the AI CLI's own default applies (no --model flag passed).

Acceptance Criteria

  • config.yaml supports both `reviewer: 2` (shorthand) and `reviewer: { count: 2, model: "..." }` (extended)
  • `SpawnOptions.model` is passed as `--model` to both Claude Code and OpenCode adapters
  • Each reviewer in Phase 4 runs as a separate AI process with its configured model
  • Reviews are collected and written to the correct session paths
  • State transitions work correctly across the multi-process workflow
  • Existing single-model workflows continue to work unchanged

Metadata

Labels: enhancement (New feature or request)