    :param reason: Reason for the response being incomplete
    """

    reason: str


@json_schema_type
class OpenAIResponsePrompt(BaseModel):
    """Reference to a prompt template and its variables.

    :param id: The unique identifier of the prompt template to use.
    :param variables: (Optional) Map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files.
    :param version: (Optional) Version of the prompt template.
    """

    id: str
    variables: Optional[dict[str, Any]] = None
    version: Optional[str] = None


@json_schema_type
class OpenAIResponseReasoning(BaseModel):
    """Configuration options for reasoning models.

    :param effort: (Optional) The effort level to use for reasoning.
    :param generate_summary: Deprecated. Use the generate_summary_text field instead. (Optional) Whether to generate a summary of the reasoning process.
    The description of the function, including guidance on when and how to call it,
    and guidance about what to tell the user when calling (if anything).
    """

    name: Optional[str] = None
    """The name of the function."""

    parameters: Optional[object] = None
    """Parameters of the function in JSON Schema."""

    type: Optional[Literal["function"]] = None
    """The type of the tool, i.e. `function`."""
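Serialized, a function tool definition with these fields is a small JSON object. A hedged sketch of that wire shape; the field names follow the class above, while the `get_weather` function and its schema are made up for illustration:

```python
# Illustrative payload: `type`, `name`, and `parameters` mirror the model's
# fields; `parameters` carries a JSON Schema describing the arguments.
tool = {
    "type": "function",
    "name": "get_weather",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
```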
@json_schema_type
class OpenAIResponseObject(BaseModel):
    """Complete OpenAI response object containing generation results and metadata.

    Based on OpenAI Responses API schema: https://github.com/openai/openai-python/blob/34014aedbb8946c03e97e5c8d72e03ad2259cd7c/src/openai/types/responses/response.py#L38

    :param created_at: Unix timestamp when the response was created
    :param error: (Optional) Error details if the response generation failed
    :param id: Unique identifier for this response
    :param incomplete_details: (Optional) Incomplete details if the response is incomplete
    :param instructions: (Optional) A system (or developer) message inserted into the model's context.
    :param max_output_tokens: (Optional) An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
    :param max_tool_calls: (Optional) The maximum number of total calls to built-in tools that can be processed in a response.
    :param metadata: (Optional) Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
    :param model: Model identifier used for generation
    :param object: Object type identifier, always "response"
    :param output: List of generated output items (messages, tool calls, etc.)
    :param parallel_tool_calls: Whether tool calls can be executed in parallel
    :param previous_response_id: (Optional) ID of the previous response in a conversation
    :param prompt: (Optional) Reference to a prompt template and its variables.
    :param prompt_cache_key: (Optional) Used to cache responses for similar requests to optimize your cache hit rates. Replaces the user field.
    :param reasoning: (Optional) Configuration options for reasoning models.
    :param safety_identifier: (Optional) A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies.
    :param service_tier: (Optional) Specifies the processing type used for serving the request.
    :param status: Current status of the response generation
    :param temperature: (Optional) Sampling temperature used for generation
    :param text: Text formatting configuration for the response
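A minimal sketch of a payload carrying the required fields documented above; the identifier, model name, and metadata values are invented for demonstration, and optional fields are simply omitted:

```python
import time

# Illustrative minimal response payload matching the documented fields of
# OpenAIResponseObject; only required fields plus `metadata` are shown.
response = {
    "id": "resp_123",
    "object": "response",  # always "response" per the docstring
    "created_at": int(time.time()),
    "model": "gpt-4o",
    "status": "completed",
    "output": [],  # generated output items (messages, tool calls, etc.)
    "parallel_tool_calls": False,
    "metadata": {"run": "demo"},  # up to 16 key-value pairs
}
```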