feat: stream prompt_sandbox output for faster first token #246
Conversation
Replace blocking MCP prompt_sandbox with a local AI SDK generator tool that streams sandbox output in real-time. Uses detached runCommand + cmd.logs() to yield chunks as they arrive, reducing first visible token from 10-60s to ~3-8s.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
📝 Walkthrough

The changes migrate the prompt_sandbox tool from MCP server registration to a local streaming tool integrated in setupToolsForRequest. The synchronous promptSandbox implementation is replaced with an async generator promptSandboxStreaming that provides real-time log streaming, while local streaming tools are merged last into the tool set to override MCP-provided equivalents.
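The "merged last" override behavior follows from plain object-spread ordering; a sketch with illustrative tool names and shapes:

```typescript
type ToolSet = Record<string, { source: string }>;

// Later spreads win: merging local streaming tools last means a local
// prompt_sandbox shadows any MCP-registered tool of the same name.
function mergeTools(mcpTools: ToolSet, localStreamingTools: ToolSet): ToolSet {
  return { ...mcpTools, ...localStreamingTools };
}
```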
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Chat Client
    participant Setup as setupToolsForRequest
    participant Tool as createPromptSandboxStreamingTool
    participant Stream as promptSandboxStreaming
    participant Sandbox as OpenClaw Sandbox
    Client->>Setup: Initiate tool setup with authToken
    Setup->>Tool: Create streaming tool (accountId, apiKey)
    Tool->>Tool: Register generator-based execute function
    Setup->>Setup: Merge localStreamingTools last (override MCP tools)
    Client->>Tool: Execute with prompt parameter
    Tool->>Stream: Invoke async generator (accountId, apiKey, prompt)
    Stream->>Sandbox: Get or create per-account sandbox
    Stream->>Sandbox: Run command: openclaw agent --agent main --message <prompt>
    loop Stream logs in real-time
        Sandbox-->>Stream: Log chunk (stdout/stderr)
        Stream->>Stream: Accumulate output & yield log entry
        Stream-->>Tool: Streaming update
        Tool-->>Client: Yield intermediate result
    end
    Sandbox-->>Stream: Process termination (exitCode)
    Stream->>Stream: Finalize with aggregated stdout/stderr/exitCode
    Stream-->>Tool: Return complete state
    Tool-->>Client: Yield final result (summary object)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ Passed checks (1 passed)
🧹 Nitpick comments (2)
lib/sandbox/promptSandboxStreaming.ts (1)
35-45: Consider adding error handling for sandbox operations.

If `getOrCreateSandbox` or `sandbox.runCommand` throws, the error will propagate uncaught. Consider wrapping these operations in try-catch to provide more context-rich errors or ensure graceful degradation.

🛡️ Example error handling pattern
```diff
+ let sandbox;
+ let sandboxId: string;
+ let created: boolean;
+
+ try {
+   const result = await getOrCreateSandbox(accountId);
+   sandbox = result.sandbox;
+   sandboxId = result.sandboxId;
+   created = result.created;
+ } catch (error) {
+   throw new Error(`Failed to acquire sandbox for account ${accountId}: ${error instanceof Error ? error.message : String(error)}`);
+ }
- const { sandbox, sandboxId, created } =
-   await getOrCreateSandbox(accountId);
- const cmd = await sandbox.runCommand({
+ const cmd = await sandbox.runCommand({
    cmd: "openclaw",
    args: ["agent", "--agent", "main", "--message", prompt],
    env: {
      RECOUP_API_KEY: apiKey,
    },
    detached: true,
- });
+ }).catch((error) => {
+   throw new Error(`Failed to start sandbox command: ${error instanceof Error ? error.message : String(error)}`);
+ });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/sandbox/promptSandboxStreaming.ts` around lines 35 - 45, Wrap the calls to getOrCreateSandbox(accountId) and sandbox.runCommand({...}) in a try-catch inside promptSandboxStreaming.ts so sandbox creation/command failures are caught and annotated with context (accountId, sandboxId when available) before rethrowing or returning a controlled error; specifically, catch errors around getOrCreateSandbox and the call that produces cmd, log or attach the original error message and relevant identifiers (sandboxId, created flag, command args like "openclaw" and prompt) and ensure any created resources are cleaned up or rolled back if needed before propagating the enriched error.

lib/chat/tools/createPromptSandboxStreamingTool.ts (1)
40-63: Redundant stdout accumulation - already tracked in promptSandboxStreaming.

The `stdout` variable is accumulated here (lines 58-60) even though `promptSandboxStreaming` already maintains accumulated `stdout` and `stderr` in its return value (lines 51-52 overwrite with the final values anyway). This duplication violates DRY and wastes memory for long-running processes.

You can simplify by only using the yielded chunks for streaming updates and relying on the final return value for the complete output.
♻️ Simplified approach using only streamed chunks for display
```diff
  let stdout = "";
  let stderr = "";
  let exitCode = 0;
  let sandboxId = "";
  let created = false;
+ let streamedOutput = "";

  while (true) {
    const { value, done } = await gen.next();
    if (done) {
      sandboxId = value.sandboxId;
      stdout = value.stdout;
      stderr = value.stderr;
      exitCode = value.exitCode;
      created = value.created;
      break;
    }
    if (value.stream === "stdout") {
-     stdout += value.data;
+     streamedOutput += value.data;
    }
-   yield { status: "streaming" as const, output: stdout };
+   yield { status: "streaming" as const, output: streamedOutput };
  }

  yield {
    status: "complete" as const,
    output: stdout,
    stderr,
    exitCode,
  };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/chat/tools/createPromptSandboxStreamingTool.ts` around lines 40 - 63, Remove the redundant local accumulation of stdout/stderr inside the streaming loop: stop maintaining the top-level stdout/stderr strings (and their initializations) and do not append value.data into stdout on each chunk. Instead, use the yielded chunks for streaming updates (e.g., yield { status: "streaming", output: <currentChunkOrClient-side-accumulation> } using value.data) and when the generator finishes, read the final complete outputs (value.stdout / value.stderr) and other metadata (value.sandboxId, value.exitCode, value.created) from the final returned value. Update references in this function (the gen loop and the final-done branch) so the final complete outputs come only from the generator's return values and remove the local stdout/stderr accumulation logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@lib/chat/tools/createPromptSandboxStreamingTool.ts`:
- Around line 40-63: Remove the redundant local accumulation of stdout/stderr
inside the streaming loop: stop maintaining the top-level stdout/stderr strings
(and their initializations) and do not append value.data into stdout on each
chunk. Instead, use the yielded chunks for streaming updates (e.g., yield {
status: "streaming", output: <currentChunkOrClient-side-accumulation> } using
value.data) and when the generator finishes, read the final complete outputs
(value.stdout / value.stderr) and other metadata (value.sandboxId,
value.exitCode, value.created) from the final returned value. Update references
in this function (the gen loop and the final-done branch) so the final complete
outputs come only from the generator's return values and remove the local
stdout/stderr accumulation logic.
In `@lib/sandbox/promptSandboxStreaming.ts`:
- Around line 35-45: Wrap the calls to getOrCreateSandbox(accountId) and
sandbox.runCommand({...}) in a try-catch inside promptSandboxStreaming.ts so
sandbox creation/command failures are caught and annotated with context
(accountId, sandboxId when available) before rethrowing or returning a
controlled error; specifically, catch errors around getOrCreateSandbox and the
call that produces cmd, log or attach the original error message and relevant
identifiers (sandboxId, created flag, command args like "openclaw" and prompt)
and ensure any created resources are cleaned up or rolled back if needed before
propagating the enriched error.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (5)
- `lib/chat/__tests__/setupToolsForRequest.test.ts` is excluded by `!**/*.test.*`, `!**/__tests__/**` and included by `lib/**`
- `lib/chat/tools/__tests__/createPromptSandboxStreamingTool.test.ts` is excluded by `!**/*.test.*`, `!**/__tests__/**` and included by `lib/**`
- `lib/mcp/tools/sandbox/__tests__/registerPromptSandboxTool.test.ts` is excluded by `!**/*.test.*`, `!**/__tests__/**` and included by `lib/**`
- `lib/sandbox/__tests__/promptSandbox.test.ts` is excluded by `!**/*.test.*`, `!**/__tests__/**` and included by `lib/**`
- `lib/sandbox/__tests__/promptSandboxStreaming.test.ts` is excluded by `!**/*.test.*`, `!**/__tests__/**` and included by `lib/**`
📒 Files selected for processing (6)
- `lib/chat/setupToolsForRequest.ts`
- `lib/chat/tools/createPromptSandboxStreamingTool.ts`
- `lib/mcp/tools/sandbox/index.ts`
- `lib/mcp/tools/sandbox/registerPromptSandboxTool.ts`
- `lib/sandbox/promptSandbox.ts`
- `lib/sandbox/promptSandboxStreaming.ts`
💤 Files with no reviewable changes (2)
- lib/mcp/tools/sandbox/registerPromptSandboxTool.ts
- lib/sandbox/promptSandbox.ts
- Use explicit type assertions for IteratorResult narrowing (TS can't narrow IteratorYieldResult due to `done?: false`)
- Replace `tool()` helper with plain Tool object (helper can't infer generator generics in [email protected])
- Use `inputSchema` field directly instead of `parameters`

Co-Authored-By: Claude Opus 4.6 <[email protected]>
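For context on the IteratorResult bullet: `IteratorYieldResult` declares `done?: false` (optional), so depending on the TypeScript version a `done` check may not narrow the non-done branch of an `IteratorResult` union, and an explicit assertion is used instead. A sketch with illustrative types:

```typescript
type Chunk = { stream: "stdout" | "stderr"; data: string };
type Final = { stdout: string; exitCode: number };

function classify(
  result: IteratorResult<Chunk, Final>,
): { kind: "yield"; data: string } | { kind: "return"; exitCode: number } {
  if (result.done) {
    // done: true narrows cleanly to IteratorReturnResult<Final>
    return { kind: "return", exitCode: result.value.exitCode };
  }
  // Explicit assertion where the optional `done?: false` may defeat narrowing:
  const yielded = result as IteratorYieldResult<Chunk>;
  return { kind: "yield", data: yielded.value.data };
}
```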
Summary
- Replace blocking MCP `prompt_sandbox` with a local AI SDK generator tool that streams sandbox output in real-time
- Uses `runCommand({ detached: true })` + `cmd.logs()` async generator to yield chunks as they arrive

Changes
- `lib/sandbox/promptSandboxStreaming.ts`
- `lib/chat/tools/createPromptSandboxStreamingTool.ts`
- `lib/chat/setupToolsForRequest.ts`
- `lib/mcp/tools/sandbox/index.ts`
- `lib/mcp/tools/sandbox/registerPromptSandboxTool.ts`
- `lib/sandbox/promptSandbox.ts`

Test plan
- `promptSandboxStreaming` (chunk ordering, stderr accumulation, detached mode, created flag, exit codes)
- `createPromptSandboxStreamingTool` (status progression, param passing, stderr handling, schema)
- `setupToolsForRequest` (override behavior, authToken gating)

🤖 Generated with Claude Code
Summary by CodeRabbit
Release Notes
New Features
Updates