
Mo Abualruz edited this page Dec 9, 2025 · 2 revisions

Multi-Agent Framework Guide

Orchestrate specialized AI agents for comprehensive code analysis, generation, and transformation.

Status: ✅ Complete

Phase: Phase 2 - Alpha Enhanced Features

Last Updated: December 3, 2025


Overview

The RiceCoder Multi-Agent Framework provides specialized agents for different tasks (code review, testing, documentation, refactoring) with orchestration capabilities for sequential, parallel, and conditional workflows. Each agent is an expert in its domain, and the framework coordinates them to provide comprehensive analysis and generation.

Key Concepts

  • Specialized agents: Each agent focuses on a specific task (code review, testing, documentation, etc.)
  • Agent isolation: Agents run independently; one agent's failure doesn't affect others
  • Parallel execution: Multiple agents can run concurrently for faster analysis
  • Result aggregation: Findings from multiple agents are combined and prioritized
  • Conflict resolution: Conflicting findings are resolved by severity and source
  • Deterministic execution: Same input always produces identical output
  • Configurable workflows: Sequential, parallel, and conditional execution patterns

Getting Started

Basic Usage

Run a single agent for code review:

# Review code in current directory
rice review

# Review specific file
rice review src/main.rs

# Review with specific agent
rice agent code-review src/main.rs

Running Multiple Agents

Execute multiple agents in parallel:

# Run all agents on code
rice analyze

# Run specific agents
rice analyze --agents code-review,security-scan,performance-analysis

# Run agents sequentially
rice analyze --sequential

Exiting Agent Execution

Press Ctrl+C to cancel agent execution at any time. Completed agents' results are preserved.


Agent Types

CodeReviewAgent (MVP)

Analyzes code quality, security, performance, and best practices.

Capabilities:

  • Code quality analysis (naming, structure, complexity)
  • Security vulnerability scanning
  • Performance optimization detection
  • Best practice violation checking
  • Automatic fix suggestions

Usage:

# Run code review
rice review src/main.rs

# Review with specific checks
rice review --checks naming,security,performance

# Review with custom rules
rice review --config .ricecoder/review-rules.yaml

Output:

Code Review Results for src/main.rs
===================================

Critical Issues (1):
  - Line 42: Hardcoded API key detected
    Suggestion: Move to environment variable
    Fix: Use std::env::var("API_KEY")

Warnings (3):
  - Line 15: Function too complex (cyclomatic complexity: 8)
    Suggestion: Break into smaller functions
  
  - Line 28: Missing error handling
    Suggestion: Use Result<T, E> for fallible operations
  
  - Line 35: Inefficient algorithm (O(n²))
    Suggestion: Use HashMap for O(1) lookups

Info (2):
  - Line 10: Consider using iterator instead of loop
  - Line 50: Doc comment missing for public function

Summary:
  Total Issues: 6
  Critical: 1, Warnings: 3, Info: 2
  Severity Score: 7.5/10

Future Agents (Phase 2+)

TestAgent: Generates and runs tests

  • Generate unit tests from code
  • Generate integration tests
  • Run test suite
  • Report coverage

DocAgent: Generates documentation

  • Generate API documentation
  • Generate README sections
  • Generate architecture diagrams
  • Generate usage examples

RefactorAgent: Suggests refactoring

  • Identify code smells
  • Suggest design patterns
  • Propose architectural improvements
  • Generate refactoring diffs

ComplianceAgent: Compliance checking (Phase 3)

  • Check HIPAA compliance
  • Check GDPR compliance
  • Check security standards
  • Generate compliance reports

Agent Orchestration

Sequential Execution

Agents run one after another, with later agents able to use findings from earlier agents:

# Sequential execution (default)
rice analyze --sequential

# Workflow: Analysis → Suggestion → Approval → Application
# Agent 1 analyzes code
# Agent 2 generates suggestions based on findings
# Agent 3 validates suggestions
# Agent 4 applies approved changes

Use Cases:

  • Code review followed by automatic fixes
  • Analysis followed by documentation generation
  • Testing followed by performance optimization
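The pipeline above can be sketched as follows. This is a minimal illustration of sequential orchestration with context passing, using a hypothetical `Stage` trait and stand-in agents, not the actual `ricecoder_agents` API:

```rust
// Illustrative sketch: each stage sees every finding produced by the
// stages before it. Types and agent names here are hypothetical.

#[derive(Debug, Clone, PartialEq)]
struct Finding {
    message: String,
}

trait Stage {
    fn run(&self, prior: &[Finding]) -> Vec<Finding>;
}

struct ReviewStage;
impl Stage for ReviewStage {
    fn run(&self, _prior: &[Finding]) -> Vec<Finding> {
        vec![Finding { message: "missing error handling".into() }]
    }
}

struct SuggestStage;
impl Stage for SuggestStage {
    fn run(&self, prior: &[Finding]) -> Vec<Finding> {
        // Generate one suggestion per earlier finding.
        prior
            .iter()
            .map(|f| Finding { message: format!("suggest fix for: {}", f.message) })
            .collect()
    }
}

fn run_sequential(stages: &[&dyn Stage]) -> Vec<Finding> {
    let mut all = Vec::new();
    for stage in stages {
        let mut new = stage.run(&all);
        all.append(&mut new);
    }
    all
}

fn main() {
    for f in run_sequential(&[&ReviewStage, &SuggestStage]) {
        println!("{}", f.message);
    }
}
```

Because each stage receives the accumulated findings, a suggestion stage can react to whatever the review stage reported, which is what makes sequential mode useful despite being slower than parallel mode.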

Parallel Execution

Multiple agents run concurrently for faster analysis:

# Parallel execution
rice analyze --parallel

# All agents run simultaneously
# Results aggregated after all complete

Use Cases:

  • Comprehensive code analysis (quality, security, performance)
  • Multiple perspectives on the same code
  • Faster feedback for large codebases
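The fan-out/aggregate pattern behind parallel mode can be sketched with OS threads. The agent functions here are hypothetical stand-ins; the framework's actual scheduling may differ:

```rust
// Sketch: run independent agents concurrently, then aggregate findings
// once all complete. Agent functions are illustrative placeholders.
use std::thread;

fn review_agent(code: &str) -> Vec<String> {
    if code.contains("unwrap()") {
        vec!["avoid unwrap() in production code".into()]
    } else {
        vec![]
    }
}

fn security_agent(code: &str) -> Vec<String> {
    if code.contains("password") {
        vec!["possible hardcoded secret".into()]
    } else {
        vec![]
    }
}

fn analyze_parallel(code: &str) -> Vec<String> {
    let agents: Vec<fn(&str) -> Vec<String>> = vec![review_agent, security_agent];
    let handles: Vec<_> = agents
        .into_iter()
        .map(|agent| {
            let code = code.to_string();
            thread::spawn(move || agent(&code))
        })
        .collect();
    // Aggregate after all agents finish; a panicked agent is skipped
    // rather than failing the whole run (agent isolation).
    let mut findings = Vec::new();
    for h in handles {
        if let Ok(mut part) = h.join() {
            findings.append(&mut part);
        }
    }
    findings
}

fn main() {
    let code = "let password = \"s3cret\"; data.unwrap();";
    println!("{} findings", analyze_parallel(code).len());
}
```

Joining handles in a fixed order keeps the aggregated output deterministic even though the agents themselves run concurrently.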

Conditional Execution

Execution depends on previous agent results:

# Conditional execution
rice analyze --conditional

# If code review finds critical issues, run security scan
# If security scan passes, run performance analysis
# If performance analysis finds issues, suggest optimizations

Use Cases:

  • Prioritized analysis (critical issues first)
  • Conditional workflows based on findings
  • Efficient resource usage
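The branching described above can be sketched in a few lines. Agent names and the trigger condition are hypothetical; in practice the CLI drives this via configuration rather than code:

```rust
// Sketch of conditional orchestration: which agent runs next depends
// on what the previous agent found. All names here are illustrative.

#[derive(Debug, PartialEq)]
#[allow(dead_code)]
enum Severity {
    Info,
    Warning,
    Critical,
}

fn code_review(code: &str) -> Severity {
    if code.contains("unsafe") { Severity::Critical } else { Severity::Info }
}

fn conditional_workflow(code: &str) -> Vec<&'static str> {
    let mut ran = vec!["code-review"];
    match code_review(code) {
        // Critical findings escalate to a security scan ...
        Severity::Critical => ran.push("security-scan"),
        // ... otherwise spend the budget on performance analysis.
        _ => ran.push("performance-analysis"),
    }
    ran
}

fn main() {
    println!("{:?}", conditional_workflow("unsafe { ptr.read() }"));
    println!("{:?}", conditional_workflow("fn add(a: i32, b: i32) -> i32 { a + b }"));
}
```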

Configuration

Agent Settings

Configure agents in .ricecoder/config.yaml:

agents:
  code-review:
    enabled: true
    checks:
      - naming
      - security
      - performance
      - best-practices
    severity-threshold: warning  # critical, warning, info
    auto-fix: false
    
  test-agent:
    enabled: false  # Phase 2
    generate-unit-tests: true
    generate-integration-tests: false
    
  doc-agent:
    enabled: false  # Phase 2
    generate-api-docs: true
    generate-readme: false
    
  refactor-agent:
    enabled: false  # Phase 2
    suggest-patterns: true
    suggest-architecture: false

Orchestration Settings

Configure orchestration behavior:

orchestration:
  # Execution mode: sequential, parallel, conditional
  mode: parallel
  
  # Timeout per agent (milliseconds)
  agent-timeout: 30000
  
  # Total timeout for all agents
  total-timeout: 120000
  
  # Retry failed agents
  retry-failed: true
  retry-count: 3
  retry-backoff: exponential
  
  # Result aggregation
  aggregate-results: true
  deduplicate-findings: true
  prioritize-by-severity: true
  
  # Conflict resolution
  conflict-strategy: severity  # severity, source, user-prompt

Custom Rules

Define custom rules for code review:

# .ricecoder/review-rules.yaml
naming:
  functions: snake_case
  types: PascalCase
  constants: UPPER_CASE
  
security:
  check-hardcoded-secrets: true
  check-unsafe-operations: true
  check-input-validation: true
  
performance:
  check-algorithms: true
  check-allocations: true
  check-n-plus-one: true
  
best-practices:
  require-error-handling: true
  require-doc-comments: true
  require-tests: false

Usage Examples

Example 1: Code Review on Single File

Review a specific file:

rice review src/main.rs

# Output shows:
# - Code quality issues
# - Security vulnerabilities
# - Performance opportunities
# - Best practice violations

Example 2: Comprehensive Analysis

Analyze entire project with multiple agents:

rice analyze --parallel

# Runs all enabled agents concurrently:
# - CodeReviewAgent analyzes code quality
# - SecurityAgent scans for vulnerabilities
# - PerformanceAgent identifies optimizations
# - Results aggregated and prioritized

Example 3: Sequential Workflow

Run agents in sequence with context passing:

rice analyze --sequential

# Workflow:
# 1. CodeReviewAgent finds issues
# 2. SecurityAgent focuses on security issues
# 3. RefactorAgent suggests improvements
# 4. TestAgent generates tests for changes

Example 4: Conditional Workflow

Run agents based on findings:

rice analyze --conditional

# Workflow:
# 1. CodeReviewAgent analyzes code
# 2. If critical issues found:
#    - SecurityAgent runs security scan
#    - If vulnerabilities found:
#      - Generate security fixes
# 3. If no critical issues:
#    - PerformanceAgent runs optimization analysis

Example 5: Custom Configuration

Use custom configuration for specific analysis:

# Use custom rules
rice review --config .ricecoder/strict-rules.yaml

# Output uses strict rules:
# - More checks enabled
# - Lower severity threshold
# - Stricter naming conventions

Example 6: Dry-Run Mode

Preview agent findings without applying changes:

rice analyze --dry-run

# Shows what agents would find
# No changes applied
# Useful for testing configurations

Example 7: Export Results

Export agent findings to file:

rice analyze --export results.json

# Exports findings in JSON format:
# {
#   "agents": [
#     {
#       "name": "CodeReviewAgent",
#       "findings": [...],
#       "execution_time_ms": 1234
#     }
#   ],
#   "aggregated": {...}
# }

Result Aggregation

Finding Prioritization

Findings are prioritized by severity:

Critical (must fix)
  ↓
Warning (should fix)
  ↓
Info (nice to have)
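One simple way to implement this ordering, assuming a severity enum like the one in the result format above, is to derive `Ord` so declaration order doubles as sort order:

```rust
// Sketch: deriving Ord on a severity enum lets aggregated findings be
// sorted critical-first. Types here are hypothetical, not the
// ricecoder_agents API.

#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Severity {
    Critical, // declared first, so it sorts first
    Warning,
    Info,
}

fn prioritize(mut findings: Vec<(Severity, &str)>) -> Vec<(Severity, &str)> {
    findings.sort_by(|a, b| a.0.cmp(&b.0));
    findings
}

fn main() {
    let sorted = prioritize(vec![
        (Severity::Info, "consider an iterator"),
        (Severity::Critical, "hardcoded API key"),
        (Severity::Warning, "missing error handling"),
    ]);
    println!("{:?}", sorted[0]);
}
```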

Deduplication

Identical findings from multiple agents are merged:

Before deduplication:
- CodeReviewAgent: "Missing error handling at line 42"
- SecurityAgent: "Missing error handling at line 42"

After deduplication:
- "Missing error handling at line 42" (from CodeReviewAgent, SecurityAgent)
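A merge like this can be sketched by keying findings on (message, location) and collecting the reporting agents; the tuple shapes here are illustrative:

```rust
// Sketch: merge identical findings keyed by (message, location) while
// recording every agent that reported them. Hypothetical types.
use std::collections::BTreeMap;

// Input items are (agent, message, location).
fn deduplicate(raw: Vec<(&str, &str, &str)>) -> Vec<(String, Vec<String>)> {
    let mut merged: BTreeMap<(String, String), Vec<String>> = BTreeMap::new();
    for (agent, message, location) in raw {
        merged
            .entry((message.to_string(), location.to_string()))
            .or_default()
            .push(agent.to_string());
    }
    merged
        .into_iter()
        .map(|((message, location), agents)| (format!("{message} at {location}"), agents))
        .collect()
}

fn main() {
    let out = deduplicate(vec![
        ("CodeReviewAgent", "Missing error handling", "line 42"),
        ("SecurityAgent", "Missing error handling", "line 42"),
    ]);
    println!("{:?}", out);
}
```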

Conflict Resolution

When agents produce conflicting findings:

Agent A: "Use HashMap for O(1) lookup"
Agent B: "Use Vec for memory efficiency"

Resolution:
- Severity-based: If one is critical, use that
- Source-based: Prefer findings from more specialized agent
- User-prompted: Ask user to choose
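The severity-based strategy reduces to a single comparison. A minimal sketch, assuming hypothetical `Severity` and `Finding` types:

```rust
// Sketch of severity-based conflict resolution: when two agents disagree
// about the same location, the higher-severity finding wins. Types are
// illustrative, not the ricecoder_agents API.

#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
#[allow(dead_code)]
enum Severity {
    Critical, // earlier variants compare as "less", i.e. higher priority
    Warning,
    Info,
}

#[derive(Debug, Clone, PartialEq)]
struct Finding {
    severity: Severity,
    message: String,
}

fn resolve(a: Finding, b: Finding) -> Finding {
    // Ties go to the first finding, keeping resolution deterministic.
    if a.severity <= b.severity { a } else { b }
}

fn main() {
    let winner = resolve(
        Finding { severity: Severity::Warning, message: "Use HashMap for O(1) lookup".into() },
        Finding { severity: Severity::Info, message: "Use Vec for memory efficiency".into() },
    );
    println!("{}", winner.message);
}
```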

Result Format

Aggregated results include:

{
  "summary": {
    "total_findings": 15,
    "critical": 2,
    "warnings": 8,
    "info": 5,
    "severity_score": 7.2
  },
  "findings": [
    {
      "id": "finding-001",
      "severity": "critical",
      "category": "security",
      "message": "Hardcoded API key detected",
      "location": "src/main.rs:42",
      "source_agents": ["CodeReviewAgent", "SecurityAgent"],
      "suggestion": "Move to environment variable",
      "auto_fixable": true
    }
  ],
  "agents": [
    {
      "name": "CodeReviewAgent",
      "status": "completed",
      "execution_time_ms": 1234,
      "findings_count": 8
    }
  ]
}

Error Handling

Agent Failures

If an agent fails, other agents continue:

rice analyze --parallel

# If CodeReviewAgent fails:
# - Error is logged
# - Other agents continue
# - Results from successful agents are returned
# - Failed agent is reported in summary

Timeout Handling

Agents that exceed the timeout are cancelled:

rice analyze --agent-timeout 30000

# If agent exceeds 30 seconds:
# - Agent is cancelled
# - Partial results (if any) are returned
# - Timeout is reported in summary
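A rough sketch of this pattern uses a channel with a receive timeout: the agent runs on a worker thread, and if it misses the deadline the orchestrator moves on without it. Note this sketch abandons the slow thread rather than killing it; real cancellation needs cooperation from the agent:

```rust
// Sketch: bounding an agent's run time with std::sync::mpsc::recv_timeout.
// Function names are illustrative, not the ricecoder_agents API.
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn run_with_timeout<F>(agent: F, timeout: Duration) -> Option<Vec<String>>
where
    F: FnOnce() -> Vec<String> + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // If the receiver has already given up, this send just fails silently.
        let _ = tx.send(agent());
    });
    // None means the agent timed out; the thread itself is abandoned.
    rx.recv_timeout(timeout).ok()
}

fn main() {
    let fast = run_with_timeout(|| vec!["finding".to_string()], Duration::from_secs(1));
    let slow = run_with_timeout(
        || {
            thread::sleep(Duration::from_millis(300));
            vec![]
        },
        Duration::from_millis(20),
    );
    println!("fast: {:?}, slow timed out: {}", fast, slow.is_none());
}
```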

Retry Logic

Failed agents can be retried:

orchestration:
  retry-failed: true
  retry-count: 3
  retry-backoff: exponential  # linear, exponential, fixed
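The three backoff strategies differ only in how the delay grows per attempt. A sketch of the arithmetic, with illustrative names:

```rust
// Sketch: retry delay per attempt (0-indexed) under the three backoff
// strategies named in the config above. Names are illustrative.

fn backoff_ms(strategy: &str, base_ms: u64, attempt: u32) -> u64 {
    match strategy {
        // fixed: same delay every time
        "fixed" => base_ms,
        // linear: base, 2*base, 3*base, ...
        "linear" => base_ms * (attempt as u64 + 1),
        // exponential: base, 2*base, 4*base, ...
        "exponential" => base_ms * 2u64.pow(attempt),
        _ => base_ms,
    }
}

fn main() {
    for attempt in 0..3 {
        println!("retry {attempt}: wait {} ms", backoff_ms("exponential", 1000, attempt));
    }
}
```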

Error Recovery

On critical errors, execution can be rolled back:

rice analyze --rollback-on-error

# If critical error occurs:
# - All changes are rolled back
# - System returns to previous state
# - Error details are reported

Performance

Optimization Tips

  1. Use parallel execution for independent agents:

    rice analyze --parallel
  2. Reduce agent timeout for faster feedback:

    orchestration:
      agent-timeout: 15000  # 15 seconds instead of 30
  3. Disable unnecessary agents:

    agents:
      test-agent:
        enabled: false
      doc-agent:
        enabled: false
  4. Use caching for repeated analysis:

    orchestration:
      cache-results: true
      cache-ttl: 3600  # 1 hour

Performance Metrics

View agent performance:

rice analyze --metrics

# Output:
# CodeReviewAgent: 1234ms (8 findings)
# SecurityAgent: 2345ms (3 findings)
# PerformanceAgent: 1567ms (2 findings)
# Total: 5146ms
# Aggregation: 234ms

Troubleshooting

Issue: Agent execution is slow

Solution: Check timeout and reduce if needed:

# View current timeout
rice config get orchestration.agent-timeout

# Reduce timeout
rice config set orchestration.agent-timeout 15000

# Or use faster agents
rice analyze --agents code-review

Issue: Agent fails with timeout

Solution: Increase timeout or reduce scope:

# Increase timeout
rice config set orchestration.agent-timeout 60000

# Or analyze smaller scope
rice review src/main.rs  # Instead of entire project

Issue: Conflicting findings from different agents

Solution: Check conflict resolution strategy:

orchestration:
  conflict-strategy: severity  # Use severity-based resolution

Or review findings manually:

rice analyze --export results.json
# Review results.json and resolve conflicts manually

Issue: Agent produces incorrect findings

Solution: Check agent configuration and rules:

# View agent configuration
cat .ricecoder/config.yaml

# View custom rules
cat .ricecoder/review-rules.yaml

# Adjust rules if needed
rice config set agents.code-review.checks naming,security

Issue: Results are not aggregated correctly

Solution: Check aggregation settings:

orchestration:
  aggregate-results: true
  deduplicate-findings: true
  prioritize-by-severity: true

Issue: Agent crashes or hangs

Solution: Check logs and report issue:

# View logs
rice logs --agent code-review

# Enable verbose logging
rice analyze --verbose

# Report issue with logs
rice bug report --logs

Issue: Custom rules not being applied

Solution: Verify rule file location and format:

# Check rule file exists
ls -la .ricecoder/review-rules.yaml

# Validate YAML syntax
rice config validate .ricecoder/review-rules.yaml

# Reload configuration
rice config reload

Issue: Agent findings are outdated

Solution: Clear cache and re-run:

# Clear cache
rice cache clear

# Re-run analysis
rice analyze

Advanced Usage

Custom Agents

Create custom agents for specialized tasks:

use async_trait::async_trait;
use ricecoder_agents::{Agent, AgentError, Finding};

pub struct CustomAgent;

#[async_trait]
impl Agent for CustomAgent {
    type Input = String;
    type Output = Vec<Finding>;
    type Error = AgentError;

    async fn execute(&self, _input: Self::Input) -> Result<Self::Output, Self::Error> {
        // Custom analysis logic goes here; returning an empty Vec means
        // the agent has no findings to report.
        Ok(vec![])
    }
}

Agent Composition

Compose agents into complex workflows:

# Define workflow
rice workflow create my-workflow

# Add agents to workflow
rice workflow add-agent my-workflow code-review
rice workflow add-agent my-workflow security-scan
rice workflow add-agent my-workflow performance-analysis

# Run workflow
rice workflow run my-workflow

Integration with CI/CD

Run agents in CI/CD pipeline:

# .github/workflows/analyze.yml
- name: Run code analysis
  run: rice analyze --export results.json

- name: Check for critical issues
  run: |
    if grep -q '"severity": "critical"' results.json; then
      echo "Critical issues found"
      exit 1
    fi

Programmatic Usage

Use agents programmatically:

use ricecoder_agents::{AgentInput, CodeReviewAgent};

// Inside an async function, since Agent::execute is async:
let agent = CodeReviewAgent::new();

let input = AgentInput {
    code: "fn main() { println!(\"Hello\"); }".to_string(),
    config: Default::default(),
};

let output = agent.execute(input).await?;
println!("Findings: {}", output.findings.len());

Best Practices

1. Start with Code Review

Always run code review first to catch obvious issues:

rice review src/main.rs

2. Use Parallel Execution

Run multiple agents in parallel for faster feedback:

rice analyze --parallel

3. Review Findings Carefully

Don't blindly apply all suggestions:

rice analyze --export results.json
# Review results.json
# Apply only relevant suggestions

4. Keep Rules Updated

Update custom rules as project standards evolve:

# Edit rules
vim .ricecoder/review-rules.yaml

# Reload configuration
rice config reload

5. Integrate with CI/CD

Run agents on every commit:

# .github/workflows/analyze.yml
on: [push, pull_request]
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run analysis
        run: rice analyze

6. Monitor Agent Performance

Track agent execution times:

rice analyze --metrics

7. Document Custom Rules

Document why custom rules exist:

# .ricecoder/review-rules.yaml
# Custom rules for our project

naming:
  # We use snake_case for all functions
  # This matches our coding standards
  functions: snake_case
