
Incomplete responses are not handled by parse_response when using Structured Output #2486

@ejm-betterup

Description


Confirm this is an issue with the Python library and not an underlying OpenAI API

  • This is an issue with the Python library

Describe the bug

I'm using Structured Outputs with max_output_tokens=2000 for my fine-tuned model. When I generate a response, response.output has a length of 8 and the last item has status='incomplete'. parse_response doesn't handle this case and instead raises a ValidationError when it tries to parse the truncated JSON.

It seems like there should either be a way to constrain the length of response.output, or incomplete outputs should be skipped during parsing, for example:

if output.type == "message" and output.status != "incomplete":

instead of https://github.com/openai/openai-python/blob/main/src/openai/lib/_parsing/_responses.py#L63
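
For illustration, the same guard applied on the caller side could look like the sketch below. This is not the library's internal loop; parse_completed_outputs is a hypothetical helper that relies only on the type, status, and content fields of Responses output items:

from typing import Any

from pydantic import BaseModel


def parse_completed_outputs(response: Any, text_format: type[BaseModel]) -> list[BaseModel]:
    # Hypothetical helper showing the proposed guard: only message outputs
    # the model actually finished are handed to pydantic, so truncated JSON
    # from an incomplete item never raises a ValidationError.
    parsed: list[BaseModel] = []
    for output in response.output:
        if output.type != "message" or output.status == "incomplete":
            continue
        for item in output.content:
            if item.type == "output_text":
                parsed.append(text_format.model_validate_json(item.text))
    return parsed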

To Reproduce

To reproduce, ensure that response.output contains an incomplete output item, then call parse_response with a structured output format.

raw_response.output[-1] = ResponseOutputMessage(id='xxx', content=[ResponseOutputText(annotations=[], text='{"a": {"b": 5.0, , "c": {"d": 4.0,', type='output_text', logprobs=[])], role='assistant', status='incomplete', type='message')

parse_response(
   text_format=ExpectedResponse,
   response=raw_response,
)

Code snippets

client = OpenAI()
resp = client.responses.parse(
    model=model,
    instructions=prompt,
    input=input_text.to_list(),
    temperature=temperature,
    max_output_tokens=max_tokens,
    text_format=ExpectedResponse,
)


yields


  File "/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1124, in _process_response
    return api_response.parse()
           ~~~~~~~~~~~~~~~~~~^^
  File "/venv/lib/python3.13/site-packages/openai/_response.py", line 325, in parse
    parsed = self._options.post_parser(parsed)
  File "/.venv/lib/python3.13/site-packages/openai/resources/responses/responses.py", line 936, in parser
    return parse_response(
        input_tools=tools,
        text_format=text_format,
        response=raw_response,
    )
  File "/.venv/lib/python3.13/site-packages/openai/lib/_parsing/_responses.py", line 75, in parse_response
    "parsed": parse_text(item.text, text_format=text_format),
              ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/.venv/lib/python3.13/site-packages/openai/lib/_parsing/_responses.py", line 137, in parse_text
    return cast(TextFormatT, model_parse_json(text_format, text))
                             ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
  File "/.venv/lib/python3.13/site-packages/openai/_compat.py", line 169, in model_parse_json
    return model.model_validate_json(data)
           ~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
  File "/.venv/lib/python3.13/site-packages/pydantic/main.py", line 746, in model_validate_json
    return cls.__pydantic_validator__.validate_json(
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
        json_data, strict=strict, context=context, by_alias=by_alias, by_name=by_name
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
pydantic_core._pydantic_core.ValidationError: 1 validation error for ExpectedResponse
  Invalid JSON: EOF while parsing an object at line 1 column 234 [type=json_invalid, input_value='{"example_key":...n": {"clarity": 6', input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/json_invalid
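
A caller-side workaround, for reference: catch the ValidationError and retry with a larger output budget. This is only a rough sketch that reuses the variables from the snippet above and assumes the truncation comes from hitting max_output_tokens:

from pydantic import ValidationError

try:
    resp = client.responses.parse(
        model=model,
        instructions=prompt,
        input=input_text.to_list(),
        temperature=temperature,
        max_output_tokens=max_tokens,
        text_format=ExpectedResponse,
    )
except ValidationError:
    # The structured output was cut off at max_output_tokens, so retry with
    # more headroom (or surface a clearer "output truncated" error instead).
    resp = client.responses.parse(
        model=model,
        instructions=prompt,
        input=input_text.to_list(),
        temperature=temperature,
        max_output_tokens=max_tokens * 2,
        text_format=ExpectedResponse,
    )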

OS

macOS

Python version

Python 3.13

Library version

openai v1.86.0
