[Feature Request] Display AI thinking process and detailed call trace with error visibility #324

@IAliceBobI

Description

What problem does this solve? (describe the problem you are experiencing)

When using BrowserOS's agent mode, users cannot see:

  1. The AI's thinking/reasoning process (chain of thought)
  2. Detailed tool/API call traces (what tools are being called, parameters, responses)
  3. Specific error messages like JSON formatting errors, API failures, etc.

This makes it difficult to:

  • Understand why the agent made certain decisions
  • Debug issues when the agent fails or gets stuck
  • Learn from the agent's behavior
  • Troubleshoot configuration or model-specific problems

How are you working around this today? (your current solution or workaround)

Currently there's no way to see these details. Users only see the final result or generic error messages like "Planning failed" without understanding what went wrong.

What's your proposed solution? (how should BrowserOS address this?)

1. AI Thinking Process Display

  • Show the model's thinking/reasoning chain (for models that support thinking tags like Claude's extended thinking or Gemini's reasoning)
  • Display this in an expandable/collapsible section so users can see the thought process
  • Allow users to toggle this on/off based on their preference
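One way to sketch this: the agent's output arrives as a stream of events, and the UI routes reasoning tokens into a separate collapsible pane. The event shapes below are hypothetical (BrowserOS's actual agent protocol may differ); they only illustrate the thinking/answer split and the toggle:

```typescript
// Hypothetical event shapes - not BrowserOS's real protocol.
type AgentEvent =
  | { kind: "thinking"; text: string } // reasoning tokens (e.g. Claude extended thinking)
  | { kind: "answer"; text: string };  // final-response tokens

interface ChatView {
  thinking: string; // rendered in an expandable/collapsible section
  answer: string;   // always rendered
}

// Accumulate events into the two panes; discard thinking when the toggle is off.
function reduceEvents(events: AgentEvent[], showThinking: boolean): ChatView {
  const view: ChatView = { thinking: "", answer: "" };
  for (const ev of events) {
    if (ev.kind === "thinking") {
      if (showThinking) view.thinking += ev.text;
    } else {
      view.answer += ev.text;
    }
  }
  return view;
}
```

The key design point is that thinking text is collected separately rather than interleaved, so the UI can collapse or hide it without touching the final answer.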

2. Detailed Call Trace

  • Show each tool/API call being made (e.g., browser_click, browser_snapshot, etc.)
  • Display the parameters sent with each call
  • Show the response/return value from each call
  • Include timing information for each call
  • Present this in a structured, readable format (possibly a timeline or tree view)
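A minimal sketch of what a trace record for the above could look like. The field names and the renderer are assumptions for illustration; only the tool names (`browser_click`, `browser_snapshot`) come from this issue:

```typescript
// Hypothetical trace record; nested children support a tree view.
interface TraceEntry {
  tool: string;                    // e.g. "browser_click", "browser_snapshot"
  params: Record<string, unknown>; // parameters sent with the call
  result?: unknown;                // response/return value on success
  error?: string;                  // set when the call failed
  startedAt: number;               // ms timestamps, for timing information
  endedAt: number;
  children: TraceEntry[];          // nested tool calls
}

// Render one entry as an indented timeline line, e.g. "OK browser_click (12ms)".
function renderEntry(e: TraceEntry, depth = 0): string {
  const status = e.error ? "ERR" : "OK";
  const line = `${"  ".repeat(depth)}${status} ${e.tool} (${e.endedAt - e.startedAt}ms)`;
  return [line, ...e.children.map((c) => renderEntry(c, depth + 1))].join("\n");
}
```

Keeping timestamps on every entry means the same data can back either view: a flat timeline sorted by `startedAt`, or the tree shown here.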

3. Enhanced Error Visibility

  • Show specific error messages (JSON parsing errors, API errors, etc.)
  • Highlight which step/call failed
  • Provide actionable error messages (not generic "Planning failed")
  • Include raw error details for advanced users/developers
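The points above suggest carrying errors as structured data rather than flattening them to "Planning failed". A hedged sketch, with an invented error taxonomy (the codes and hint texts are illustrative, not BrowserOS's real errors):

```typescript
// Hypothetical structured error: which step failed, why, and the raw detail.
interface AgentError {
  code: "JSON_PARSE" | "API_ERROR" | "TOOL_FAILED"; // illustrative taxonomy
  step: string; // which call failed, e.g. "browser_snapshot"
  raw: string;  // full provider/tool error, for advanced users
}

// Map a structured error to an actionable message for the UI.
function describeError(err: AgentError): string {
  const hints: Record<AgentError["code"], string> = {
    JSON_PARSE: "The model returned malformed JSON; retry or try a different model.",
    API_ERROR: "The provider API rejected the request; check your key and quota.",
    TOOL_FAILED: "A browser tool call failed; the page may have changed.",
  };
  return `Step "${err.step}" failed (${err.code}): ${hints[err.code]}`;
}
```

Because `raw` travels with the error, the UI can show the actionable one-liner by default and expose the full detail behind a "show raw error" expander for developers.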

UI Suggestions:

  • Add a "Debug Mode" or "Verbose Mode" toggle in settings
  • Create a "Call Trace" panel/section that can be expanded
  • Use color coding for success, errors, and warnings
  • Make it collapsible to avoid cluttering the UI
  • Consider a separate "Developer/Debug" tab for advanced users
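The toggles above could be grouped under one settings shape; this is purely a hypothetical layout, with defaults chosen so the normal UI stays uncluttered:

```typescript
// Hypothetical settings shape for the proposed Debug/Verbose mode.
interface DebugSettings {
  verboseMode: boolean;   // master "Debug Mode" toggle in settings
  showThinking: boolean;  // expandable thinking section
  showCallTrace: boolean; // call-trace panel
  colorCode: boolean;     // color coding for success / errors / warnings
}

// Everything debug-related off by default to avoid cluttering the UI.
const defaultDebugSettings: DebugSettings = {
  verboseMode: false,
  showThinking: false,
  showCallTrace: false,
  colorCode: true,
};
```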

Additional context (optional - add screenshots, examples, or other helpful details)

Related existing issues:

Example of desired experience:
Similar to how ChatGPT shows reasoning for o1 models, or how browser DevTools show network requests - but tailored for AI agent interactions.

This would significantly improve:

  • Debugging capabilities
  • Educational value (users can learn how the agent thinks)
  • Trust and transparency
  • Ability to provide better bug reports

Priority: Enhancement/Better UX
Labels: enhancement, feature request, debugging, user experience
