fix: use agent.generate() in /api/chat/generate (#227)
…ons/prepareStep to agent constructor

The non-streaming /api/chat/generate endpoint was bypassing the ToolLoopAgent by calling generateText() directly, while providerOptions and prepareStep were silently ignored when passed at call time to agent.stream()/agent.generate(). Move providerOptions (thinking/reasoning budgets) and prepareStep (tool chain orchestration) into the ToolLoopAgent constructor in getGeneralAgent.ts so they apply consistently across all paths: streaming, non-streaming, and email.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

Provider-specific options and a prepareStep hook were moved into agent initialization; ChatConfig was simplified to only include agent and messages, and handlers now call the agent's generate method instead of a separate generateText helper.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ❌ Failed checks (1 warning)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/chat/handleChatGenerate.ts (1)
7-15: ⚠️ Potential issue | 🟡 Minor

Stale JSDoc — still references generateText. Line 13 says "Generates text using the AI SDK's generateText" but the implementation now uses agent.generate(). Update the doc to reflect the new approach.

📝 Suggested fix

```diff
- * 3. Generates text using the AI SDK's generateText
+ * 3. Generates text using the agent's generate method
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/chat/handleChatGenerate.ts` around lines 7 - 15, The JSDoc for handleChatGenerate is stale: it still says "Generates text using the AI SDK's generateText" even though the implementation calls agent.generate(); update the doc block for handleChatGenerate to reflect the current flow (e.g., "Generates response using the agent.generate() method"), and remove or replace any mention of generateText so the documentation matches the implementation.
🧹 Nitpick comments (4)
lib/agents/generalAgent/getGeneralAgent.ts (3)
55-55: Magic number 111 for step count limit.

stepCountIs(111) uses an unexplained literal. Consider extracting it to a named constant (e.g., MAX_TOOL_LOOP_STEPS) in your constants file for clarity and easier tuning.

♻️ Suggested refactor

```diff
+// In lib/const.ts (or alongside DEFAULT_MODEL)
+export const MAX_TOOL_LOOP_STEPS = 111;

 // In getGeneralAgent.ts
-const stopWhen = stepCountIs(111);
+const stopWhen = stepCountIs(MAX_TOOL_LOOP_STEPS);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/agents/generalAgent/getGeneralAgent.ts` at line 55, Replace the magic literal 111 passed to stepCountIs with a named constant: define MAX_TOOL_LOOP_STEPS (or similar) in your constants file and use stopWhen(stepCountIs(MAX_TOOL_LOOP_STEPS)); update references in getGeneralAgent.ts (the stopWhen variable) to use that constant so the limit is documented and configurable.
24-90: Function exceeds the 50-line guideline and carries multiple responsibilities.
getGeneralAgent currently handles: account email lookup, artist info retrieval, knowledge base fetching, system prompt construction, tool setup, and agent instantiation. This is doing quite a lot for a single function body (~66 lines). Consider extracting the data-fetching and prompt-building logic (lines 27–51) into a helper like buildAgentContext(body) to keep this function focused on agent construction. This would also improve testability — the data layer can be tested independently.

As per coding guidelines, lib/**/*.ts: "Keep functions under 50 lines" and "Single responsibility per function".
Verify each finding against the current code and only fix it if needed. In `@lib/agents/generalAgent/getGeneralAgent.ts` around lines 24 - 90, Extract the data-fetching and prompt-building steps in getGeneralAgent into a new helper buildAgentContext(body: ChatRequestBody) that performs the account email lookup (selectAccountEmails), artist info & knowledge fetch (selectAccountInfo, getKnowledgeBaseText), account details (getAccountWithDetails), base prompt creation (getSystemPrompt), image URL extraction (extractImageUrlsFromMessages) and final instructions assembly (buildSystemPromptWithImages); have buildAgentContext return a small POJO with email, artistInstruction, knowledgeBaseText, accountWithDetails, baseSystemPrompt, imageUrls, and instructions, then simplify getGeneralAgent to call buildAgentContext(body), setup tools and instantiate ToolLoopAgent (preserving providerOptions/stopWhen/model) so getGeneralAgent stays under 50 lines and each responsibility is testable/isolated.
67-81: Hardcoded thinking budgets and reasoning settings could be extracted as named constants for easier tuning.

The providerOptions structure is correct — AI SDK v6 intentionally namespaces options by provider, and each provider reads only its own namespace, so including all three simultaneously is the recommended pattern and doesn't create coupling. The settings (budgetTokens: 12000 for Anthropic, 8192 for Google, "medium" for OpenAI) would benefit from being defined as constants (e.g., ANTHROPIC_THINKING_BUDGET, GOOGLE_THINKING_BUDGET, OPENAI_REASONING_EFFORT) at the file or module level to improve maintainability and make tuning points obvious.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/agents/generalAgent/getGeneralAgent.ts` around lines 67 - 81, Extract the hardcoded provider tuning values into named module-level constants and replace the inline literals in providerOptions; for example define ANTHROPIC_THINKING_BUDGET = 12000, GOOGLE_THINKING_BUDGET = 8192 and OPENAI_REASONING_EFFORT = "medium" (and similarly for reasoningSummary and includeThoughts if desired) and then use those constants inside the providerOptions object where AnthropicProviderOptions, GoogleGenerativeAIProviderOptions and OpenAIResponsesProviderOptions are set in getGeneralAgent.ts to make tuning obvious and maintainable.

lib/chat/types.ts (1)
4-18: ChatConfig carries agent-constructor properties that overlap with call-time parameters.

ChatConfig extends RoutingDecision (which has model, instructions, tools via the agent, stopWhen) and adds system, messages, tools. This means a ChatConfig object carries both:

- Agent construction concerns: model, instructions, stopWhen, agent
- Call-time concerns: messages, system, experimental_*

Now that generation goes through agent.generate(), these two concerns are distinct — the agent already encapsulates its construction config. Passing the full ChatConfig to agent.generate() (as done in handleChatGenerate.ts) creates redundancy where model, instructions, and tools exist in both the agent instance and the call-time arguments.

Consider separating ChatConfig into the agent's routing/construction config (RoutingDecision) and a leaner call-time config (e.g., ChatGenerateInput with just messages and any call-time overrides). This would make the contract between setup and execution explicit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/chat/types.ts` around lines 4 - 18, ChatConfig currently mixes agent-construction fields (RoutingDecision: model, instructions, agent, stopWhen) with call-time fields (system, messages, experimental_*), causing redundant/conflicting data when calling agent.generate(); split these responsibilities by introducing a new call-time type (e.g., ChatGenerateInput) that contains only call-time parameters (messages, system, experimental_* and any per-call overrides) and remove agent-construction fields from ChatConfig or stop extending RoutingDecision; update usages such as handleChatGenerate.ts to pass the agent instance (constructed from RoutingDecision/agent) and the new ChatGenerateInput to agent.generate() so construction config lives in RoutingDecision/agent and runtime inputs live in ChatGenerateInput.
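The proposed split can be sketched as follows. The type names come from the review comment above; the exact field lists are assumptions for illustration, not the project's actual definitions:

```typescript
// Hypothetical construction-time config: stays with the agent.
interface RoutingDecision {
  agent: unknown; // the constructed ToolLoopAgent instance
  model: string;
  instructions: string;
  stopWhen?: unknown;
}

// Hypothetical call-time input: only what a single generate/stream call needs.
interface ChatGenerateInput {
  messages: Array<{ role: "user" | "assistant" | "system"; content: string }>;
  abortSignal?: AbortSignal;
}

// Builds the lean call-time input, leaving construction concerns behind.
function makeGenerateInput(
  messages: ChatGenerateInput["messages"],
  abortSignal?: AbortSignal,
): ChatGenerateInput {
  return abortSignal ? { messages, abortSignal } : { messages };
}
```

With this shape, `agent.generate(makeGenerateInput(messages))` carries no duplicate model/instructions/tools fields.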
```typescript
prepareStep: options => {
  const next = getPrepareStepResult(options);
  if (next) return { ...options, ...next };
  return options;
},
```
🧩 Analysis chain
🏁 Scripts executed (Repository: recoupable/api):

```shell
find node_modules/ai -name "*.d.ts" 2>/dev/null | head -10

# Search for ToolLoopAgent and prepareStep in the codebase
rg "prepareStep" --type ts --type tsx -B 3 -A 5 lib/

# Check the actual implementation context
cat -n lib/agents/generalAgent/getGeneralAgent.ts | head -80

# Look for type definitions in node_modules/ai
find node_modules/ai -type f -name "*.d.ts" 2>/dev/null | head -20

# Search for ToolLoopAgent in node_modules
rg "ToolLoopAgent" node_modules/ai --type ts -A 10 2>/dev/null | head -50

# Check the getPrepareStepResult implementation
cat -n lib/chat/toolChains/getPrepareStepResult.ts

# Search for any other prepareStep usage in the codebase
rg "prepareStep" lib --type ts -B 2 -A 5 2>/dev/null

# Check package.json for ai version
cat package.json | grep -A 2 '"ai"'

# Try looking for ToolLoopAgent in the main ai package
ls -la node_modules/ai/ | head -20

# Look for any TypeScript definitions in ai
find node_modules/ai -name "*.ts" -o -name "*.d.ts" 2>/dev/null | xargs grep -l "ToolLoopAgent" 2>/dev/null | head -5

# Check the main entry point
cat node_modules/ai/package.json | grep -E '"main"|"types"|"exports"' -A 1
```
🌐 Web query: ai sdk 6.0.0 beta ToolLoopAgent prepareStep callback signature

💡 Result:

In AI SDK v6 (including the v6.0.0 betas), ToolLoopAgent's prepareStep uses the PrepareStepFunction type:

```typescript
type PrepareStepFunction<
  TOOLS extends Record<string, Tool> = Record<string, Tool>,
> = (options: {
  steps: Array<StepResult<TOOLS>>;
  stepNumber: number;
  model: LanguageModel;
  messages: Array<ModelMessage>;
  experimental_context: unknown;
}) => PrepareStepResult<TOOLS> | PromiseLike<PrepareStepResult<TOOLS>>;

type PrepareStepResult<
  TOOLS extends Record<string, Tool> = Record<string, Tool>,
> =
  | {
      model?: LanguageModel;
      toolChoice?: ToolChoice<TOOLS>;
      activeTools?: Array<keyof TOOLS>;
      system?: string | SystemModelMessage | Array<SystemModelMessage>;
      messages?: Array<ModelMessage>;
      experimental_context?: unknown;
      providerOptions?: ProviderOptions;
    }
  | undefined;
```

Returning undefined means "no per-step overrides; keep using the outer/agent settings." [1][2]
🏁 Scripts executed:

```shell
# Check if there are any TypeScript type files or documentation in the repo itself
find . -path ./node_modules -prune -o -name "*.d.ts" -type f -print | xargs grep -l "prepareStep" 2>/dev/null

# Look for any documentation or comments about the API
rg "prepareStep" . --type md --type txt 2>/dev/null | head -20

# Check if there's any version history or migration guide
fd -t f "CHANGELOG|MIGRATION|README" . 2>/dev/null | head -10
```
Fix prepareStep return type: should only return override fields, not merged options.
The current implementation returns { ...options, ...next }, which spreads the entire options object (including steps, stepNumber, messages, model) back into the result. However, the ai SDK v6 beta expects prepareStep to return a partial PrepareStepResult containing only override fields (model?, toolChoice?, activeTools?, system?, messages?, experimental_context?, providerOptions?), or undefined for no overrides.
Returning the full options object violates the SDK contract. Simplify to:

```typescript
prepareStep: options => getPrepareStepResult(options)
```

This correctly returns either the override object or undefined, matching the expected signature.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/agents/generalAgent/getGeneralAgent.ts` around lines 62 - 66, prepareStep
currently returns a merged object ({ ...options, ...next }) which violates the
ai SDK v6 prepareStep signature; change prepareStep to return only the override
object or undefined by returning the result of getPrepareStepResult(options)
directly (i.e., prepareStep: options => getPrepareStepResult(options)) so it
returns only fields allowed in PrepareStepResult (model?, toolChoice?,
activeTools?, system?, messages?, experimental_context?, providerOptions?)
instead of the entire options object.
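The contract can be illustrated with a self-contained sketch. The local type aliases below mirror (in simplified form) the SDK signature quoted in the web query result, and getPrepareStepResult is a hypothetical stand-in for the project's helper:

```typescript
// Simplified stand-ins for the AI SDK v6 prepareStep types (assumptions).
type PrepareStepOptions = {
  steps: unknown[];
  stepNumber: number;
  messages: unknown[];
};

type PrepareStepResult =
  | { toolChoice?: string; activeTools?: string[]; system?: string }
  | undefined;

// Hypothetical stand-in for the project's getPrepareStepResult helper:
// force a tool call on the first step, no overrides afterwards.
function getPrepareStepResult(options: PrepareStepOptions): PrepareStepResult {
  return options.stepNumber === 0 ? { toolChoice: "required" } : undefined;
}

// Correct shape: return only the override object (or undefined),
// never the merged options — steps/stepNumber/messages must not leak back.
const prepareStep = (options: PrepareStepOptions): PrepareStepResult =>
  getPrepareStepResult(options);

const first = prepareStep({ steps: [], stepNumber: 0, messages: [] });
const later = prepareStep({ steps: [], stepNumber: 3, messages: [] });
```

Note that `first` contains only the `toolChoice` override and `later` is `undefined`, which the SDK interprets as "keep the agent's outer settings."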
```diff
  const { agent } = chatConfig;

- const result = await generateText(chatConfig);
+ const result = await agent.generate(chatConfig);
```
🧩 Analysis chain
🏁 Scripts executed:

```shell
# Find and examine the handleChatGenerate.ts file
fd "handleChatGenerate" --type f

# Find where ToolLoopAgent is defined or imported
rg "ToolLoopAgent" --type ts --type tsx -B 2 -A 5

# Search for agent.generate calls to see usage patterns
rg "agent\.generate" --type ts --type tsx -B 3 -A 3

# Check if there's node_modules or type definitions for ai SDK
find . -path ./node_modules -prune -o -name "*.d.ts" -type f -exec rg "ToolLoopAgent|generate\(" {} + | head -30

# Read the handleChatGenerate.ts file to see full context
cat -n lib/chat/handleChatGenerate.ts

# Search for imports and ToolLoopAgent without file type filtering
rg "ToolLoopAgent" -A 5 -B 5

# Look for ai package imports to understand the SDK being used
rg "from ['\"]ai" --type ts

# Read setupChatRequest to understand what chatConfig contains
cat -n lib/chat/setupChatRequest.ts

# Read the types file to understand ChatConfig structure
cat -n lib/chat/types.ts

# Check if there's a stream variant to compare parameters
rg "agent\.stream" -A 3 -B 3

# Search for how agent.generate or agent.stream are called in tests to understand expected parameters
rg "agent\.(generate|stream)\(" --context 5

# Check the test file for handleChatGenerate to see what parameters are used
cat -n lib/chat/__tests__/handleChatGenerate.test.ts | head -100

# Look for any documentation about ai SDK's ToolLoopAgent.generate method signature
rg "generate\(" lib/agents -A 2 -B 2

# Look for more context on what parameters agent.generate and agent.stream accept
rg "agent\.stream\(" -B 5 -A 10 lib/chat/handleChatStream.ts

# Check the test to see what parameters the mock agent.generate receives
cat -n lib/chat/__tests__/handleChatGenerate.test.ts | tail -200 | head -100

# Search for PR-related comments or any explanation about why full chatConfig is passed
rg "chatConfig" lib/chat/handleChatGenerate.ts -B 3 -A 3

# Check if there's any TypeScript error with the current setup by looking at tsconfig
find . -name "tsconfig.json" -type f | head -1 | xargs cat

# Check the package.json to identify ai SDK version
cat -n package.json | grep -A 2 -B 2 '"ai"'

# See if the same issue exists in handleChatStream.ts
cat -n lib/chat/handleChatStream.ts | head -50

# Check how other agents pass parameters to understand the pattern
cat -n lib/ai/generateImage.ts

# Search for all uses of agent.generate and agent.stream to see the full pattern
rg "agent\.(generate|stream)\(" -A 1 -B 1

# Check what happens if we look at the exact test expectations
cat -n lib/chat/__tests__/handleChatGenerate.test.ts | sed -n '370,440p'

# Check if experimental_generateMessageId is used within agent.generate/stream
rg "experimental_generateMessageId" -B 3 -A 3

# Check if there's a pattern of what createUIMessageStream expects
rg "createUIMessageStream" -A 10

# Look for any ai SDK documentation or comments about parameter expectations
rg "agent\.(generate|stream)" -B 5 | grep -A 5 -B 5 "param\|parameter\|accept\|expect"

# Get the specific lines around handleChatStream to see if it needs the same fix
cat -n lib/chat/handleChatStream.ts | sed -n '30,45p'

# Check if experimental_generateMessageId is used within agent.generate or agent.stream calls
rg "messages|prompt|abortSignal" lib/chat/handleChatGenerate.ts -B 2 -A 2

# Search for the PR description or any comments about why full chatConfig was passed
git log --all --oneline -20 | head -10

# Check if there are any tests that specifically test agent.generate being called with chatConfig
rg "agent.generate" lib/chat/__tests__/handleChatGenerate.test.ts -B 3 -A 3

# Look for what the mock agent.generate expects to be called with
cat -n lib/chat/__tests__/handleChatGenerate.test.ts | sed -n '70,82p'

# Check if there's any validation or type checking for what agent.generate receives
rg "toHaveBeenCalledWith.*generate" lib/chat/__tests__/handleChatGenerate.test.ts -B 2 -A 2

# Check the git commit details for the PR that changed this
git show 5380308 --stat

# Check if there are assertions in tests that verify what agent.generate was called with
rg "toHaveBeenCalled" lib/chat/__tests__/handleChatGenerate.test.ts -A 3 -B 1

# Check if createUIMessageStream documentation or tests show what it expects
cat -n lib/chat/__tests__/handleChatStream.test.ts | sed -n '1,100p'

# Verify if experimental_generateMessageId is actually used by checking SDK expectations
rg "agent\.generate\|agent\.stream" lib/ -A 2 | grep -E "messages|prompt|experimental" | head -20
```
Refactor agent.generate() and agent.stream() calls to pass only required parameters.
Both handleChatGenerate.ts (line 31) and handleChatStream.ts (line 36) pass the entire chatConfig object to agent.generate() and agent.stream(), but the ToolLoopAgent is already initialized with model, instructions, tools, and stopWhen in its constructor. Passing the full config repeats these properties, which are silently ignored by the agent method signatures that only accept messages/prompt/abortSignal.
This mirrors the pattern used elsewhere in the codebase (e.g., generateEmailResponse.ts uses { messages }, generateImage.ts uses { messages }) and reduces fragility by making intent explicit.
♻️ Suggested refactor

```diff
- const result = await agent.generate(chatConfig);
+ const { agent, messages } = chatConfig;
+ const result = await agent.generate({ messages });
```

Apply the same change to handleChatStream.ts line 36:

```diff
- const result = await agent.stream(chatConfig);
+ const result = await agent.stream({ messages });
```

Also update the JSDoc on line 13 of handleChatGenerate.ts from "Generates text using the AI SDK's generateText" to "Generates text using the AI SDK's ToolLoopAgent."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/chat/handleChatGenerate.ts` around lines 29 - 31, Refactor the calls to
agent.generate and agent.stream so they receive only the required parameters
(e.g., { messages } or { prompt, abortSignal }) instead of the entire chatConfig
object: replace await agent.generate(chatConfig) and await
agent.stream(chatConfig) with calls that pass just the message/prompt and
abortSignal expected by the ToolLoopAgent API (preserving any existing
abortSignal), and update the JSDoc at the top of handleChatGenerate from
"Generates text using the AI SDK's generateText" to "Generates text using the AI
SDK's ToolLoopAgent." Ensure you only remove redundant fields already provided
to the ToolLoopAgent constructor and keep the semantics identical.
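The narrowing described above can be sketched with a minimal stand-in agent. FakeAgent and GenerateInput here are hypothetical simplifications of the ToolLoopAgent surface; the only assumption is that generate() consumes just the call-time fields:

```typescript
type Message = { role: "user" | "assistant"; content: string };

// Hypothetical minimal stand-in for the agent surface used by the handler.
interface GenerateInput {
  messages: Message[];
  abortSignal?: AbortSignal;
}

class FakeAgent {
  async generate({ messages }: GenerateInput): Promise<{ text: string }> {
    // Echo the last user message so the flow is observable.
    return { text: messages[messages.length - 1]?.content ?? "" };
  }
}

// Instead of passing the whole chatConfig, pick only the call-time fields.
async function run() {
  const chatConfig = {
    agent: new FakeAgent(),
    model: "some-model", // construction-time field: not re-sent per call
    instructions: "be helpful", // construction-time field: not re-sent per call
    messages: [{ role: "user", content: "hi" } as Message],
  };
  const { agent, messages } = chatConfig;
  return agent.generate({ messages });
}
```

The destructure makes the contract explicit: the agent keeps its construction config, and each call supplies only messages (plus an abort signal if needed).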
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/chat/setupChatRequest.ts (1)
25-28: ⚠️ Potential issue | 🔴 Critical

Add await to the convertToModelMessages call — it currently returns a Promise instead of an array.

In AI SDK v6.0.0-beta.122, convertToModelMessages() is asynchronous and must be awaited. The current code calls .slice() on the Promise directly, which fails at runtime (a Promise has no .slice method). This leaves messages in ChatConfig unusable, breaking both agent.generate() and agent.stream() calls at the model layer.

Fix

```diff
- const convertedMessages = convertToModelMessages(body.messages, {
+ const convertedMessages = (await convertToModelMessages(body.messages, {
    tools,
    ignoreIncompleteToolCalls: true,
- }).slice(-MAX_MESSAGES);
+ })).slice(-MAX_MESSAGES);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/chat/setupChatRequest.ts` around lines 25 - 28, The call to convertToModelMessages is async but is used synchronously: change the convertedMessages assignment in setupChatRequest (where convertedMessages is created from convertToModelMessages(body.messages, { tools, ignoreIncompleteToolCalls: true }).slice(-MAX_MESSAGES)) to await the promise before slicing so convertedMessages becomes an array (e.g., await convertToModelMessages(...).then(...) or await the result then call .slice), ensuring ChatConfig.messages is a real array for agent.generate()/agent.stream; keep the same arguments (body.messages, tools, ignoreIncompleteToolCalls) and preserve the MAX_MESSAGES truncation.
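The failure mode can be reproduced in isolation. convertMessages below is a hypothetical async converter standing in for convertToModelMessages, and MAX_MESSAGES is an assumed constant:

```typescript
// Hypothetical stand-in for an async message converter (assumption).
async function convertMessages(msgs: string[]): Promise<string[]> {
  return msgs.map(m => m.trim());
}

const MAX_MESSAGES = 2;

async function demo() {
  const input = [" a ", " b ", " c "];

  // Broken pattern: the un-awaited return value is a Promise,
  // and Promise has no .slice method.
  const promise = convertMessages(input) as unknown as { slice?: unknown };
  const brokenHasSlice = typeof promise.slice === "function";

  // Correct pattern: await first, then slice the resulting array.
  const converted = (await convertMessages(input)).slice(-MAX_MESSAGES);
  return { brokenHasSlice, converted };
}
```

Awaiting before `.slice()` preserves the MAX_MESSAGES truncation while guaranteeing an actual array reaches the rest of the pipeline.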
🧹 Nitpick comments (1)
lib/chat/setupChatRequest.ts (1)
30-36: ...decision spreads instructions redundantly alongside the explicit system field.

decision includes an instructions property (aliased to system on Line 32), so the final config carries both { instructions: "...", system: "..." } with identical values — a harmless but imprecise side-effect of the broad spread. TypeScript doesn't flag excess spread properties, so this is invisible in type-checking.

Prefer explicitly picking the fields ChatConfig actually needs, which is both cleaner and guards against future fields added to the getGeneralAgent return type leaking into the config:

♻️ Explicit field construction

```diff
- const config: ChatConfig = {
-   ...decision,
-   system,
-   messages: convertedMessages,
-   experimental_generateMessageId: generateUUID,
-   tools,
- };
+ const config: ChatConfig = {
+   agent: decision.agent,
+   model: decision.model,
+   system,
+   messages: convertedMessages,
+   experimental_generateMessageId: generateUUID,
+   tools,
+ };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/chat/setupChatRequest.ts` around lines 30 - 36, The config object is currently created by spreading decision which redundantly includes instructions (the same value as system) causing both instructions and system to appear; instead, stop spreading the entire decision and construct ChatConfig explicitly by selecting only the needed fields from decision (e.g., any properties required by ChatConfig) and then add system, messages: convertedMessages, experimental_generateMessageId: generateUUID, and tools; update the code around the config creation (the variable named config in setupChatRequest.ts) to remove "...decision" and pick explicit fields to avoid leaking extra properties from getGeneralAgent returns.
ChatConfig no longer extends RoutingDecision — now just { agent, messages }.
Removes dead fields (system, model, instructions, tools, experimental_*)
that were silently ignored by agent.generate()/agent.stream(). Fixes
evals to use agent.generate() instead of bypassing the agent via
generateText().
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
🧹 Nitpick comments (1)
lib/chat/setupChatRequest.ts (1)
11-11: Minor JSDoc inaccuracy: `tools` isn't a top-level routing decision field.

The docstring says "Gets the routing decision (agent, model, tools)" but from `getGeneralAgent.ts`, `RoutingDecision` returns `{ agent, model, instructions, stopWhen }` — `tools` is encapsulated inside `agent`, not a standalone field.

📝 Suggested clarification

```diff
- * 1. Gets the routing decision (agent, model, tools) from getGeneralAgent
+ * 1. Gets the routing decision (agent) from getGeneralAgent
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/chat/setupChatRequest.ts` at line 11, Update the JSDoc in setupChatRequest.ts to accurately describe the RoutingDecision shape returned by getGeneralAgent: it returns an object with { agent, model, instructions, stopWhen } and tools are part of the agent object (not a top-level field). Locate the docstring that currently says "Gets the routing decision (agent, model, tools)" and change it to mention agent (which contains tools), model, instructions, and stopWhen, referencing getGeneralAgent and the RoutingDecision type/name to keep the comment consistent with the actual return structure.
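For reference, the shape this nitpick describes can be sketched minimally as follows; the field types here are illustrative guesses, since only the field names (`agent`, `model`, `instructions`, `stopWhen`) come from the review comment:

```typescript
// Illustrative sketch of the described RoutingDecision shape:
// tools live on the agent, not at the top level.
interface AgentLike {
  tools: Record<string, unknown>;
  generate(opts: { messages: string[] }): Promise<unknown>;
}

interface RoutingDecision {
  agent: AgentLike; // tools are encapsulated here, not top-level
  model: string;
  instructions: string;
  stopWhen: (stepCount: number) => boolean;
}

const decision: RoutingDecision = {
  agent: {
    tools: { search: {} },
    generate: async ({ messages }) => messages.length,
  },
  model: "example-model",
  instructions: "example instructions",
  stopWhen: (n) => n >= 5,
};

// Tools are reached through the agent:
const toolNames = Object.keys(decision.agent.tools);
console.log(toolNames.join(",")); // "search"
```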
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/chat/setupChatRequest.ts`:
- Around line 21-24: The call to convertToModelMessages returns a Promise in AI
SDK v6, so await its result before slicing: update the convertedMessages
assignment in setupChatRequest (the convertToModelMessages call used to produce
convertedMessages which is then sliced with .slice(-MAX_MESSAGES)) to await
convertToModelMessages(...) so convertedMessages becomes the resolved
ModelMessage[] prior to applying .slice; ensure convertedMessages remains named
the same and that the containing async function (setupChatRequest) already
supports awaiting.
---
Nitpick comments:
In `@lib/chat/setupChatRequest.ts`:
- Line 11: Update the JSDoc in setupChatRequest.ts to accurately describe the
RoutingDecision shape returned by getGeneralAgent: it returns an object with {
agent, model, instructions, stopWhen } and tools are part of the agent object
(not a top-level field). Locate the docstring that currently says "Gets the
routing decision (agent, model, tools)" and change it to mention agent (which
contains tools), model, instructions, and stopWhen, referencing getGeneralAgent
and the RoutingDecision type/name to keep the comment consistent with the actual
return structure.
```diff
 const convertedMessages = convertToModelMessages(body.messages, {
-  tools,
+  tools: decision.agent.tools,
   ignoreIncompleteToolCalls: true,
 }).slice(-MAX_MESSAGES);
```
🧩 Analysis chain
🌐 Web query:
Vercel AI SDK v6 convertToModelMessages async function documentation
💡 Result:
In Vercel AI SDK v6, `convertToModelMessages()` is documented as an async function in the AI SDK UI reference. (ai-sdk.dev)

Docs
- API Reference: AI SDK UI: `convertToModelMessages` (ai-sdk.dev)
- Migration Guide (5.x → 6.0): explains why it's async (to support async `Tool.toModelOutput()`). (ai-sdk.dev)

Signature (v6)
- Returns: `Promise<ModelMessage[]>` (ai-sdk.dev)
- Parameters: (ai-sdk.dev)
  - `messages: Message[]` (UI messages from `useChat`)
  - `options?: { tools?: ToolSet; ignoreIncompleteToolCalls?: boolean; convertDataPart?: (part: DataUIPart) => TextPart | FilePart | undefined }`
Usage (v6)
```typescript
import { convertToModelMessages, streamText } from "ai";

const result = streamText({
  model: "anthropic/claude-sonnet-4.5",
  messages: await convertToModelMessages(messages),
});
```

Citations:
- 1: https://ai-sdk.dev/docs/reference/ai-sdk-ui/convert-to-model-messages?utm_source=openai
- 2: https://ai-sdk.dev/docs/reference/ai-sdk-ui/convert-to-model-messages?utm_source=openai
- 3: https://ai-sdk.dev/docs/migration-guides/migration-guide-6-0?utm_source=openai
- 4: https://ai-sdk.dev/docs/reference/ai-sdk-ui/convert-to-model-messages?utm_source=openai
- 5: https://ai-sdk.dev/docs/reference/ai-sdk-ui/convert-to-model-messages?utm_source=openai
- 6: https://ai-sdk.dev/docs/reference/ai-sdk-ui/convert-to-model-messages?utm_source=openai
🏁 Script executed:
```shell
# First, let's examine the actual file to understand the full context
fd setupChatRequest.ts -t f --exec wc -l {} \;
```

Repository: recoupable/api
Length of output: 91
🏁 Script executed:
```shell
# Read the file around the mentioned lines (21-24) with more context
cat -n lib/chat/setupChatRequest.ts | head -40
```

Repository: recoupable/api
Length of output: 1314
`convertToModelMessages` is async in AI SDK v6 — the missing `await` breaks the chat request.

`convertToModelMessages()` returns `Promise<ModelMessage[]>` in AI SDK v6 and must be awaited. Without the `await`, `.slice(-MAX_MESSAGES)` is invoked on the Promise itself; a Promise has no `slice` method, so the call throws a `TypeError` and the agent never receives the converted messages, breaking the chat request.

The enclosing function is already async (line 18), so adding `await` is straightforward:
🐛 Fix
```diff
-  const convertedMessages = convertToModelMessages(body.messages, {
-    tools: decision.agent.tools,
-    ignoreIncompleteToolCalls: true,
-  }).slice(-MAX_MESSAGES);
+  const convertedMessages = (
+    await convertToModelMessages(body.messages, {
+      tools: decision.agent.tools,
+      ignoreIncompleteToolCalls: true,
+    })
+  ).slice(-MAX_MESSAGES);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-const convertedMessages = convertToModelMessages(body.messages, {
-  tools: decision.agent.tools,
-  ignoreIncompleteToolCalls: true,
-}).slice(-MAX_MESSAGES);
+const convertedMessages = (
+  await convertToModelMessages(body.messages, {
+    tools: decision.agent.tools,
+    ignoreIncompleteToolCalls: true,
+  })
+).slice(-MAX_MESSAGES);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/chat/setupChatRequest.ts` around lines 21 - 24, The call to
convertToModelMessages returns a Promise in AI SDK v6, so await its result
before slicing: update the convertedMessages assignment in setupChatRequest (the
convertToModelMessages call used to produce convertedMessages which is then
sliced with .slice(-MAX_MESSAGES)) to await convertToModelMessages(...) so
convertedMessages becomes the resolved ModelMessage[] prior to applying .slice;
ensure convertedMessages remains named the same and that the containing async
function (setupChatRequest) already supports awaiting.
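The failure mode described in this finding can be reproduced with a small self-contained sketch; the async converter below is a hypothetical stand-in for the AI SDK function, not its real implementation:

```typescript
// Stand-in for the AI SDK's async converter (not the real implementation).
async function convertToModelMessages(msgs: string[]): Promise<string[]> {
  return msgs;
}

// Buggy: .slice() is invoked on the Promise itself, which has no such method.
function buggySlice(msgs: string[]): string {
  try {
    (convertToModelMessages(msgs) as unknown as string[]).slice(-2);
    return "no error";
  } catch (e) {
    return (e as Error).name;
  }
}

// Fixed: await the Promise first, then slice the resolved array.
async function fixedSlice(msgs: string[]): Promise<string[]> {
  return (await convertToModelMessages(msgs)).slice(-2);
}

const buggyResult = buggySlice(["a", "b", "c"]);
const fixedResult = fixedSlice(["a", "b", "c"]);
console.log(buggyResult); // "TypeError"
fixedResult.then((r) => console.log(r.join(","))); // "b,c"
```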
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
```diff
+  const { agent } = chatConfig;
+
-  const result = await generateText(chatConfig);
+  const result = await agent.generate(chatConfig);
```
Passing extra `agent` property to `agent.generate()`
Low Severity
`agent.generate(chatConfig)` passes the full `chatConfig` object, including the `agent` property, back into the agent itself, since `chatConfig` is `{ agent, messages }`. In `callChatFunctionsWithResult.ts` (changed in the same PR), this is done correctly by destructuring and passing only `{ messages: convertedMessages }`. The same pattern in `generateImage.ts` also correctly passes only `{ messages }`. The `ToolLoopAgent.generate()` method only needs `messages`/`prompt`/`abortSignal` — passing `agent` is unnecessary and inconsistent across the codebase.
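A minimal sketch of the destructuring pattern this comment recommends; `MiniAgent` and its signature are illustrative stand-ins for the repo's `ToolLoopAgent`, not the real class:

```typescript
// Illustrative stand-in for ToolLoopAgent: generate() reads only { messages }.
class MiniAgent {
  async generate(opts: { messages: string[] }): Promise<string> {
    return `handled ${opts.messages.length} message(s)`;
  }
}

interface ChatConfig {
  agent: MiniAgent;
  messages: string[];
}

// Destructure the agent out and pass only the fields generate() needs,
// instead of feeding the whole config (agent included) back into the agent.
async function handleChatGenerate(chatConfig: ChatConfig): Promise<string> {
  const { agent, messages } = chatConfig;
  return agent.generate({ messages });
}

const resultPromise = handleChatGenerate({
  agent: new MiniAgent(),
  messages: ["hi", "there"],
});
resultPromise.then((r) => console.log(r)); // "handled 2 message(s)"
```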


Summary
- Move `providerOptions` and `prepareStep` into the `ToolLoopAgent` constructor in `getGeneralAgent.ts` so they apply consistently across all chat paths (streaming, non-streaming, email)
- Replace `generateText(chatConfig)` with `agent.generate(chatConfig)` in `handleChatGenerate.ts` so the non-streaming endpoint uses the agent like streaming does
- Remove `providerOptions` and `prepareStep` from `setupChatRequest.ts` and the `ChatConfig` type since they now live in the agent constructor

Root cause: `providerOptions` (thinking/reasoning budgets) and `prepareStep` (tool chain orchestration) were built in `setupChatRequest()` and passed at call time, but `ToolLoopAgent.generate()`/`.stream()` only accept `messages`/`prompt`/`abortSignal` — everything else must be set in the constructor. These settings were silently dropped.

Test plan
- `pnpm test` passes
- `pnpm build` passes
- `pnpm lint` passes
- `/api/chat/generate` returns correct responses
- `/api/chat` streaming still works

🤖 Generated with Claude Code
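The constructor-time configuration described above can be sketched roughly as follows; the class name, option names, and behavior here are illustrative stand-ins, not the repo's real `ToolLoopAgent` API:

```typescript
// Illustrative sketch only: settings that generate()/stream() ignore at call
// time are fixed at construction, so every path shares them.
type PrepareStep = (stepIndex: number) => { note: string };

interface AgentOptions {
  providerOptions: Record<string, unknown>; // e.g. thinking/reasoning budgets
  prepareStep: PrepareStep; // e.g. tool-chain orchestration
}

class MiniToolLoopAgent {
  private opts: AgentOptions;

  constructor(opts: AgentOptions) {
    this.opts = opts;
  }

  // Call sites pass only messages; everything else comes from the constructor.
  async generate({ messages }: { messages: string[] }): Promise<string> {
    const step = this.opts.prepareStep(0);
    return `${step.note}: ${messages.length} msg(s)`;
  }
}

const agent = new MiniToolLoopAgent({
  providerOptions: { anthropic: { thinking: { budgetTokens: 1024 } } },
  prepareStep: (i) => ({ note: `step ${i}` }),
});

const outPromise = agent.generate({ messages: ["hello"] });
outPromise.then((r) => console.log(r)); // "step 0: 1 msg(s)"
```

With this shape, the streaming, non-streaming, and email paths all construct the same agent once and never need to re-pass `providerOptions` or `prepareStep` at call time.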
Note
Medium Risk
Touches core chat generation and message/tool execution flow; misconfiguration could change model behavior, tool chaining, or token usage across endpoints.
Overview
Fixes non-streaming chat generation to run through the configured `ToolLoopAgent` by switching `/api/chat/generate` from `generateText(chatConfig)` to `agent.generate(chatConfig)`.

Moves `providerOptions` (thinking/reasoning budgets) and `prepareStep` (tool-chain orchestration via `getPrepareStepResult`) into the `ToolLoopAgent` constructor in `getGeneralAgent`, and simplifies `setupChatRequest`/`ChatConfig` to return only `{ agent, messages }` so these settings aren't silently dropped. Tests and eval helper are updated accordingly.

Written by Cursor Bugbot for commit d2f7c03. This will update automatically on new commits. Configure here.
Summary by CodeRabbit