[Responses API] Function calling #1587

Merged

Wauplin merged 4 commits into responses-server from responses-server-function-calling on Jul 2, 2025

Conversation

@Wauplin Wauplin (Contributor) commented Jul 2, 2025

Built on top of #1576.

Based on https://platform.openai.com/docs/api-reference/responses/create and https://platform.openai.com/docs/guides/function-calling?api-mode=responses#streaming

Works both with and without streaming.

Note: the implementation is starting to get messy, especially in streaming mode, and complexity increases with every new event type we add. I do think a refactoring would be beneficial, e.g. with an internal state object that keeps track of the current state and "knows" what to emit and when (typically, emitting the "done"/"completed" events each time a new output/content is generated); a rough sketch follows. Food for thought for a future PR.
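
For illustration, a minimal TypeScript sketch of what such a state object could look like (names and shapes are hypothetical, not part of this PR):

interface StreamEvent {
  type: string;
  sequence_number: number;
  [key: string]: unknown;
}

// Tracks the output item currently being streamed and centralizes event
// emission, so "output_item.done" is guaranteed to fire before a new item
// is opened, instead of being hand-emitted across the streaming loop.
class StreamingState {
  private sequenceNumber = 0;
  private currentItem: Record<string, unknown> | null = null;

  openItem(item: Record<string, unknown>, outputIndex: number): StreamEvent[] {
    // Close the previous item first: callers never have to remember
    // to emit the "done" event themselves.
    const events = this.closeItem(outputIndex);
    this.currentItem = item;
    events.push(this.emit("response.output_item.added", { output_index: outputIndex, item }));
    return events;
  }

  closeItem(outputIndex: number): StreamEvent[] {
    if (this.currentItem === null) return [];
    const item = { ...this.currentItem, status: "completed" };
    this.currentItem = null;
    return [this.emit("response.output_item.done", { output_index: outputIndex, item })];
  }

  private emit(type: string, payload: Record<string, unknown>): StreamEvent {
    return { type, ...payload, sequence_number: this.sequenceNumber++ };
  }
}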

Non-stream

Run:

pnpm run example function

Output:

{
  created_at: 1751467285177,
  error: null,
  id: 'resp_0b2ab98168a9813e0f7373f940221da4ef3211f43c9faac8',
  instructions: null,
  max_output_tokens: null,
  metadata: null,
  model: 'meta-llama/Llama-3.3-70B-Instruct',
  object: 'response',
  output: [
    {
      type: 'function_call',
      id: 'fc_f40ac964165602e2fcb2f955777acff8c4b9359d49eaf79b',
      call_id: '9cd167c7f',
      name: 'get_current_weather',
      arguments: '{"location": "Boston, MA", "unit": "fahrenheit"}',
      status: 'completed'
    }
  ],
  status: 'completed',
  tool_choice: 'auto',
  tools: [
    {
      name: 'get_current_weather',
      parameters: [Object],
      strict: true,
      type: 'function',
      description: 'Get the current weather in a given location'
    }
  ],
  temperature: 1,
  top_p: 1,
  output_text: ''
}
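
For reference, the example script amounts to roughly the following sketch, using the official openai client pointed at the local responses server (the base URL, port, and env var are assumptions; the tool definition matches the one echoed in the output above):

import OpenAI from "openai";

// Assumption: the local responses server listens on this URL.
const client = new OpenAI({
  baseURL: "http://localhost:3000/v1",
  apiKey: process.env.HF_TOKEN,
});

const response = await client.responses.create({
  model: "meta-llama/Llama-3.3-70B-Instruct",
  input: "What is the weather like in Boston today?",
  tools: [
    {
      type: "function",
      name: "get_current_weather",
      description: "Get the current weather in a given location",
      parameters: {
        type: "object",
        properties: {
          location: { type: "string", description: "City and state, e.g. Boston, MA" },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location", "unit"],
        additionalProperties: false,
      },
      strict: true,
    },
  ],
  tool_choice: "auto",
});

console.log(response);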

Stream

Run:

pnpm run example function_streaming

Output:

{
  type: 'response.created',
  response: {
    created_at: 1751467334073,
    error: null,
    id: 'resp_8d86745178f2b9fc0da000156655956181c76a7701712a05',
    instructions: null,
    max_output_tokens: null,
    metadata: null,
    model: 'meta-llama/Llama-3.3-70B-Instruct',
    object: 'response',
    output: [],
    status: 'in_progress',
    tool_choice: 'auto',
    tools: [ [Object] ],
    temperature: 1,
    top_p: 1
  },
  sequence_number: 0
}
{
  type: 'response.in_progress',
  response: {
    created_at: 1751467334073,
    error: null,
    id: 'resp_8d86745178f2b9fc0da000156655956181c76a7701712a05',
    instructions: null,
    max_output_tokens: null,
    metadata: null,
    model: 'meta-llama/Llama-3.3-70B-Instruct',
    object: 'response',
    output: [],
    status: 'in_progress',
    tool_choice: 'auto',
    tools: [ [Object] ],
    temperature: 1,
    top_p: 1
  },
  sequence_number: 1
}
{
  type: 'response.output_item.added',
  output_index: 0,
  item: {
    type: 'function_call',
    id: 'fc_9bdc8945b9cb6c95c5c248db4203f0707ba9fd338dee2454',
    call_id: '83a9d4baf',
    name: 'get_weather',
    arguments: ''
  },
  sequence_number: 2
}
{
  type: 'response.function_call_arguments.delta',
  item_id: 'fc_9bdc8945b9cb6c95c5c248db4203f0707ba9fd338dee2454',
  output_index: 0,
  delta: '{"latitude": 48.8567, "longitude": 2.3508}',
  sequence_number: 3
}
{
  type: 'response.function_call_arguments.done',
  item_id: 'fc_9bdc8945b9cb6c95c5c248db4203f0707ba9fd338dee2454',
  output_index: 0,
  arguments: '{"latitude": 48.8567, "longitude": 2.3508}',
  sequence_number: 4
}
{
  type: 'response.output_item.done',
  output_index: 0,
  item: {
    type: 'function_call',
    id: 'fc_9bdc8945b9cb6c95c5c248db4203f0707ba9fd338dee2454',
    call_id: '83a9d4baf',
    name: 'get_weather',
    arguments: '{"latitude": 48.8567, "longitude": 2.3508}',
    status: 'completed'
  },
  sequence_number: 5
}
{
  type: 'response.completed',
  response: {
    created_at: 1751467334073,
    error: null,
    id: 'resp_8d86745178f2b9fc0da000156655956181c76a7701712a05',
    instructions: null,
    max_output_tokens: null,
    metadata: null,
    model: 'meta-llama/Llama-3.3-70B-Instruct',
    object: 'response',
    output: [ [Object] ],
    status: 'completed',
    tool_choice: 'auto',
    tools: [ [Object] ],
    temperature: 1,
    top_p: 1
  },
  sequence_number: 6
}
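
The streaming variant only differs by passing stream: true and iterating over the emitted events (same assumptions and client setup as the non-stream sketch above):

const stream = await client.responses.create({
  model: "meta-llama/Llama-3.3-70B-Instruct",
  input: "What is the weather like in Paris today?",
  tools: [
    {
      type: "function",
      name: "get_weather",
      description: "Get the current weather for given coordinates",
      parameters: {
        type: "object",
        properties: {
          latitude: { type: "number" },
          longitude: { type: "number" },
        },
        required: ["latitude", "longitude"],
        additionalProperties: false,
      },
      strict: true,
    },
  ],
  stream: true,
});

// Each iteration yields one of the events shown above, in sequence_number order.
for await (const event of stream) {
  console.log(event);
}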

@Wauplin Wauplin merged commit 3818ce8 into responses-server Jul 2, 2025
4 of 5 checks passed
@Wauplin Wauplin deleted the responses-server-function-calling branch July 2, 2025 14:52