
Some implementations of the streaming API may return "" as finish_reason #389


Closed
wants to merge 2 commits

Conversation

migeyusu

Some implementations of the streaming API may return "" as finish_reason, such as o3 (https://o3.fan/).

@migeyusu
Author

[Screenshot: QQ20250325-122743]
As you can see, some implementations use "" as finish_reason. This works well in Cherry Studio, but throws an exception in the OpenAI library:
"Unknown ChatFinishReason value."
It should be treated as null so that streaming continues.
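For readers hitting this with a non-conforming backend, one workaround is to consume the raw SSE chunks yourself and normalize the field before further processing. This is a minimal sketch, not part of the OpenAI library; the helper name `normalize_finish_reason` is hypothetical:

```python
import json

def normalize_finish_reason(raw):
    """Treat an empty-string finish_reason the same as JSON null.

    Hypothetical helper: "" and None both normalize to None, so downstream
    code only ever sees None or a real finish reason string.
    """
    return raw if raw else None

# Example chunk as some non-OpenAI backends emit it, with "" instead of null.
chunk = json.loads(
    '{"choices":[{"index":0,"delta":{"content":"hi"},"finish_reason":""}]}'
)
reason = normalize_finish_reason(chunk["choices"][0]["finish_reason"])
if reason is None:
    pass  # still streaming: keep reading chunks
```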

@jsquire
Collaborator

jsquire commented Jul 10, 2025

Hi @migeyusu. Thank you for your contribution and for your interest in improving the OpenAI developer experience. The OpenAI library is intended to conform to the OpenAI REST API and work with the OpenAI service. While we appreciate that it can be used with other services/models, it is not a goal of the project to support other services where they deviate from the OpenAI implementation.

In this scenario, the platform documentation is explicit in the set of supported values for finish_reason under choice for a chat completion object:

finish_reason  (string)

The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number 
of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or 
function_call (deprecated) if the model called a function.

The REST API specification defines this more explicitly as:

 choices:
   type: array
   description: A list of chat completion choices. Can be more than one if `n` is
     greater than 1.
   items:
     type: object
     required:
       - finish_reason
       - index
       - message
       - logprobs
     properties:
       finish_reason:
         type: string
         description: >
           The reason the model stopped generating tokens. This will be
           `stop` if the model hit a natural stop point or a provided
           stop sequence,
           `length` if the maximum number of tokens specified in the
           request was reached,
           `content_filter` if content was omitted due to a flag from our
           content filters,
           `tool_calls` if the model called a tool, or `function_call`
           (deprecated) if the model called a function.
         enum:
           - stop
           - length
           - tool_calls
           - content_filter
           - function_call

While I wish the spec explicitly stated null as a valid value, treating null as "no finish_reason" is consistent with OpenAI patterns in the spec and is demonstrated by all of its examples, which show null in that scenario:

{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-4o-mini", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
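To make the contrast concrete, here is a small sketch (not library code) that checks a chunk's finish_reason against the enum quoted above, accepting null but not "":

```python
import json

# Enum values from the REST API spec quoted above; None (JSON null) marks an
# in-progress streaming chunk.
VALID_FINISH_REASONS = {"stop", "length", "tool_calls", "content_filter", "function_call"}

def is_spec_valid(reason):
    """True if finish_reason is null or one of the documented enum values."""
    return reason is None or reason in VALID_FINISH_REASONS

chunk = json.loads(
    '{"id":"chatcmpl-123","object":"chat.completion.chunk",'
    '"choices":[{"index":0,"delta":{"content":""},"finish_reason":null}]}'
)
print(is_spec_valid(chunk["choices"][0]["finish_reason"]))  # True: null is valid
print(is_spec_valid(""))  # False: "" is not a documented value
```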

Unfortunately, we cannot accept this submission, as we do not want to set the precedent that we offer explicit support for non-OpenAI services/models. I'm going to close this out. Thank you, again, for your efforts.

@jsquire jsquire closed this Jul 10, 2025