Conversation


@baturyilmaz baturyilmaz commented Nov 24, 2025

Add missing schema validation for xAI Responses API server-side tools:

  • Add custom_tool_call type to outputItemSchema
  • Make toolCallSchema fields optional for in_progress states
  • Add input field for custom_tool_call (vs arguments)
  • Add action field for in_progress tool execution states
  • Add 12 streaming event types for tool lifecycle:
    • web_search_call: in_progress, searching, completed
    • x_search_call: in_progress, searching, completed
    • code_execution_call: in_progress, executing, completed
    • code_interpreter_call: in_progress, executing, completed

Fixes validation errors ('Invalid JSON response', 'No matching discriminator') when using xai.responses() with xai.tools.webSearch(), xai.tools.xSearch(), or xai.tools.codeExecution().

Background

The xAI Responses API with server-side tools (web_search, x_search, code_execution) was failing with validation errors when using the Vercel AI SDK:

```
AI_TypeValidationError: Invalid JSON response
Error: No matching discriminator for output[].type
```

Root cause: The xAI API returns response formats that were not included in the SDK's Zod validation schemas (see the sketch after this list):

  1. custom_tool_call type - Server-side tool calls use this type instead of the standard tool call types
  2. Streaming progress events - Events like response.web_search_call.in_progress, response.web_search_call.searching, and response.web_search_call.completed were not recognized
  3. Optional fields during execution - During the in_progress state, fields like name, arguments, and call_id are undefined
  4. Different field names - custom_tool_call uses an input field instead of arguments
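For illustration, here is roughly how an in-progress server-side tool call item and its completed counterpart might look. The shapes follow the description above; the ids and values are invented for the example, not captured API output:

```typescript
// Illustrative shapes only — values are made up; field presence follows the
// root-cause description above.

// While the tool is still running, most identifying fields are absent:
const inProgressItem = {
  type: 'custom_tool_call',
  id: 'ws_123', // example id
  status: 'in_progress',
  action: { query: 'latest AI developments' }, // provider-specific payload, hence loosely typed
};

// Once the call completes, the item carries `input` (not `arguments`) with the tool input:
const completedItem = {
  type: 'custom_tool_call',
  id: 'ws_123',
  call_id: 'call_123',
  name: 'web_search',
  input: '{"query":"latest AI developments"}',
  status: 'completed',
};
```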

Summary

Updated packages/xai/src/responses/xai-responses-api.ts to support the complete xAI Responses API format:

1. Added custom_tool_call Type Support

Type definition (XaiResponsesToolCall):

```typescript
export type XaiResponsesToolCall = {
  type:
    | 'function_call'
    | 'web_search_call'
    | 'x_search_call'
    | 'code_interpreter_call'
    | 'custom_tool_call';  // ✅ Added
  id: string;
  call_id?: string;        // ✅ Made optional
  name?: string;           // ✅ Made optional
  arguments?: string;      // ✅ Made optional
  input?: string;          // ✅ Added for custom_tool_call
  status: string;
  action?: any;            // ✅ Added for in_progress state
};
```

Schema (outputItemSchema):

```typescript
z.object({
  type: z.literal('custom_tool_call'),
  ...toolCallSchema.shape,
}),
```
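For context, this variant sits alongside the other output item variants in a discriminated union on `type`, which is why an unrecognized `type` surfaced as "No matching discriminator". A rough sketch of that union, where the surrounding variant list is an assumption (the real outputItemSchema presumably contains additional variants, e.g. message output items):

```typescript
import { z } from 'zod';

// Sketch only — variant list is illustrative, not the exact outputItemSchema.
// toolCallSchema is the schema shown in section 2 below.
const outputItemSchema = z.discriminatedUnion('type', [
  z.object({ type: z.literal('function_call'), ...toolCallSchema.shape }),
  z.object({ type: z.literal('web_search_call'), ...toolCallSchema.shape }),
  z.object({ type: z.literal('x_search_call'), ...toolCallSchema.shape }),
  z.object({ type: z.literal('code_interpreter_call'), ...toolCallSchema.shape }),
  z.object({ type: z.literal('custom_tool_call'), ...toolCallSchema.shape }), // ✅ new variant
]);
```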

2. Made Tool Call Fields Optional

Updated toolCallSchema to handle in-progress states where fields are undefined:

```typescript
const toolCallSchema = z.object({
  name: z.string().optional(),      // Was required
  arguments: z.string().optional(), // Was required
  input: z.string().optional(),     // ✅ New (for custom_tool_call)
  call_id: z.string().optional(),   // Was required
  id: z.string(),
  status: z.string(),
  action: z.any().optional(),       // ✅ New (for in_progress state)
});
```
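The change also adds explicit `?? ''` fallbacks where the language model code maps these now-optional fields, and reads `input` rather than `arguments` for `custom_tool_call` (see the commit notes further down). A minimal sketch of that pattern, with illustrative names (`toToolCallPart`, `part`) that are not the actual implementation:

```typescript
// Illustrative only — function and property names are invented; the real mapping
// lives in the xAI responses language model code.
type ParsedToolCall = {
  type: string;
  id: string;
  call_id?: string;
  name?: string;
  arguments?: string;
  input?: string;
};

function toToolCallPart(part: ParsedToolCall) {
  return {
    type: 'tool-call' as const,
    toolCallId: part.call_id ?? part.id,
    toolName: part.name ?? '',
    // custom_tool_call carries its JSON in `input`; the other call types use `arguments`.
    input: (part.type === 'custom_tool_call' ? part.input : part.arguments) ?? '',
  };
}
```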

3. Added 12 Streaming Event Types

Added to xaiResponsesChunkSchema for the complete tool execution lifecycle (a sketch of two of the chunk variants follows the list):

Web Search:

  • response.web_search_call.in_progress
  • response.web_search_call.searching
  • response.web_search_call.completed

X Search:

  • response.x_search_call.in_progress
  • response.x_search_call.searching
  • response.x_search_call.completed

Code Execution:

  • response.code_execution_call.in_progress
  • response.code_execution_call.executing
  • response.code_execution_call.completed

Code Interpreter:

  • response.code_interpreter_call.in_progress
  • response.code_interpreter_call.executing
  • response.code_interpreter_call.completed
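A rough sketch of what two of these chunk variants could look like in xaiResponsesChunkSchema; only the `type` literals are taken from the list above, the other field names are assumptions:

```typescript
import { z } from 'zod';

// Sketch only — the remaining ten lifecycle events follow the same pattern.
const webSearchInProgressChunkSchema = z.object({
  type: z.literal('response.web_search_call.in_progress'),
  item_id: z.string().optional(), // assumed field
  output_index: z.number().optional(), // assumed field
});

const webSearchSearchingChunkSchema = z.object({
  type: z.literal('response.web_search_call.searching'),
  item_id: z.string().optional(), // assumed field
  output_index: z.number().optional(), // assumed field
});
```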

Manual Verification

Tested all server-side tools with both generateText() and streamText() to ensure end-to-end functionality:

✅ Web Search Tool

```typescript
import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';

const { text, sources } = await generateText({
  model: xai.responses('grok-4-fast'),
  prompt: 'What are the latest developments in AI?',
  tools: {
    web_search: xai.tools.webSearch(),
  },
});

console.log(text); // Comprehensive response
console.log(sources); // Array of URL citations
```

Result: ✅ Returned a comprehensive response with 14 URL citations; no validation errors

✅ X Search Tool

```typescript
import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';

const { text, sources } = await generateText({
  model: xai.responses('grok-4-fast'),
  prompt: 'What are people saying about AI on X this week?',
  tools: {
    x_search: xai.tools.xSearch({
      allowedXHandles: ['elonmusk', 'xai'],
      fromDate: '2025-11-18',
      toDate: '2025-11-24',
      enableImageUnderstanding: true,
      enableVideoUnderstanding: true,
    }),
  },
});

console.log(text); // Analysis of X discussions
console.log(sources); // Array of X post citations
```

Result: ✅ Returned an analysis with 16 X post citations; all streaming events were handled properly

✅ Code Execution Tool

```typescript
import { xai } from '@ai-sdk/xai';
import { generateText } from 'ai';

const { text } = await generateText({
  model: xai.responses('grok-4-fast'),
  prompt: 'Calculate the factorial of 20 using Python',
  tools: {
    code_execution: xai.tools.codeExecution(),
  },
});

console.log(text); // Result with code execution details
```

Result: ✅ Computed the result with execution details; no validation errors

✅ Multiple Tools with Streaming

```typescript
import { xai } from '@ai-sdk/xai';
import { streamText } from 'ai';

const { fullStream, usage: usagePromise } = streamText({
  model: xai.responses('grok-4-fast'),
  system: 'You are an AI research assistant.',
  tools: {
    web_search: xai.tools.webSearch(),
    x_search: xai.tools.xSearch(),
    code_execution: xai.tools.codeExecution(),
  },
  prompt: 'Research prompt caching in LLMs and explain how it reduces costs',
});

const sources = new Set<string>();
let lastToolName = '';

for await (const event of fullStream) {
  switch (event.type) {
    case 'tool-call':
      lastToolName = event.toolName;
      if (event.providerExecuted) {
        console.log(`[Calling ${event.toolName} on server...]`);
      }
      break;

    case 'tool-result':
      console.log(`[${lastToolName} completed]`);
      break;

    case 'text-delta':
      process.stdout.write(event.text);
      break;

    case 'source':
      if (event.sourceType === 'url') {
        sources.add(event.url);
      }
      break;
  }
}

const usage = await usagePromise;
console.log(`\nSources used: ${sources.size}`);
console.log(`Token usage: ${usage.inputTokens} input, ${usage.outputTokens} output`);
```

Result: ✅ Full streaming response with web searches, real-time progress updates, and source citations. All streaming events (tool-call, tool-result, text-delta, source) work correctly.

Summary of manual testing:

  • ✅ All three tool types (web_search, x_search, code_execution) work without validation errors
  • ✅ Both generateText() and streamText() work correctly
  • ✅ Source citations are properly parsed and returned
  • ✅ Streaming progress events are handled correctly
  • ✅ No "Invalid JSON response" or "No matching discriminator" errors

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • I have reviewed this pull request (self-review)

Related issues

closes #10607


gr2m commented Nov 24, 2025

can you please resolve the conflict?

@gr2m gr2m added the feature (New feature or request), ai/provider, and provider/xai labels and removed the ai/core label on Nov 24, 2025

gr2m commented Nov 24, 2025

see also #10497

… type

@baturyilmaz baturyilmaz force-pushed the fix/xai-responses-api-validation branch from e65f5ed to 26f44c6 on November 24, 2025 at 19:43
Make name, arguments, input, and call_id fields optional in toolCallSchema
to match xAI API behavior during in_progress states. Add explicit ?? ''
fallbacks in language model code for clear, maintainable handling of
missing fields.

Add custom_tool_call handling in both doGenerate and doStream methods,
using the input field (instead of arguments) for custom tool calls per
the API specification.
@baturyilmaz baturyilmaz force-pushed the fix/xai-responses-api-validation branch from 26f44c6 to f7e041a on November 24, 2025 at 21:12
@baturyilmaz (Contributor, Author) commented:

This PR is related to the following issue: #10607

@gr2m gr2m left a comment

Thank you Batur, I confirmed the problem and your fix. Congratulations on landing your first contribution to the AI SDK!

@gr2m gr2m added the backport label Nov 25, 2025
@gr2m gr2m merged commit b39ec2c into vercel:main Nov 25, 2025
20 of 21 checks passed
vercel-ai-sdk bot pushed a commit that referenced this pull request Nov 25, 2025
… type (#10523)

@vercel-ai-sdk vercel-ai-sdk bot removed the backport label Nov 25, 2025

vercel-ai-sdk bot commented Nov 25, 2025

✅ Backport PR created: #10610

vercel-ai-sdk bot added a commit that referenced this pull request Nov 25, 2025
…_tool_call type (#10610)

This is an automated backport of #10523 to the release-v5.0 branch. FYI
@baturyilmaz

Co-authored-by: Batur <[email protected]>

Successfully merging this pull request may close these issues.

xAI: error when using xSearch tool (APICallError [AI_APICallError]: Invalid JSON response)
