fix(responses): support OpenAI Responses tool loops (#65)
saxyguy81 wants to merge 2 commits into
Coordination note for reviewers: This PR is intentionally parallel to #64 and #66. There is no intended dependency between these PRs. They can be reviewed and merged independently; if one lands first, I will rebase the others only if GitHub reports a conflict.
Summary
- Normalize `function_call` item IDs to `fc_*` while preserving `call_id` as `call_*`.
- Apply the fix across `/responses`, `/v1/responses`, and `/codex/v1/responses`.
- Track `previous_response_id` continuation state for Codex Responses tool loops, scoped by client auth context with TTL/max-entry cleanup.
- Accept `stream_options.include_usage`, plus Droid-sent `prompt_cache_retention` and `safety_identifier`, while stripping Codex-backend-unsupported fields before upstream dispatch.
- Ensure streamed `response.function_call_arguments.*` events carry `item_id == fc_*`, completed `function_call.id == fc_*`, and `function_call.call_id == call_*`.
- Synthesize `response.completed` payloads from streamed output items when the final upstream completed event omits output.

This remains parallel to #64; it is the Responses/OpenAI-provider compatibility PR. The latest commit folds in the Droid/Factory streaming and continuation compatibility work on top of the original `fc_*` fix.

Validation
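As a rough illustration of the ID invariants the test suite is meant to pin down, a minimal sketch follows. The event dict shapes and the helper name are simplified assumptions for illustration, not the actual ccproxy test code.

```python
# Sketch of the fc_*/call_* invariant on streamed Responses events.
# Event structures here are illustrative, not actual ccproxy types.

def check_tool_loop_ids(events: list[dict]) -> bool:
    """Return True if function-call IDs follow the fc_*/call_* convention."""
    for event in events:
        etype = event.get("type", "")
        if etype.startswith("response.function_call_arguments."):
            # Argument deltas must reference the fc_* item ID.
            if not event.get("item_id", "").startswith("fc_"):
                return False
        elif etype == "response.output_item.done":
            item = event.get("item", {})
            if item.get("type") == "function_call":
                # Completed items must carry both ID families.
                if not item.get("id", "").startswith("fc_"):
                    return False
                if not item.get("call_id", "").startswith("call_"):
                    return False
    return True

events = [
    {"type": "response.function_call_arguments.delta", "item_id": "fc_123"},
    {"type": "response.output_item.done",
     "item": {"type": "function_call", "id": "fc_123", "call_id": "call_abc"}},
]
print(check_tool_loop_ids(events))  # True
```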
- `git diff --check`
- `uv run ruff check ccproxy/plugins/codex/plugin.py ccproxy/plugins/codex/config.py ccproxy/plugins/codex/adapter.py ccproxy/plugins/codex/routes.py ccproxy/plugins/codex/responses_state.py ccproxy/llms/models/openai.py ccproxy/llms/streaming/accumulators.py ccproxy/llms/formatters/common/identifiers.py ccproxy/streaming/deferred.py ccproxy/streaming/buffer.py ccproxy/streaming/errors.py tests/plugins/codex/integration/test_codex_basic.py tests/plugins/codex/unit/test_adapter.py tests/plugins/codex/unit/test_responses_state.py tests/unit/llms/streaming/test_accumulators.py tests/unit/llms/test_openai_responses_request_models.py tests/unit/streaming/test_buffer_parse_responses.py`
- `uv run ruff format --check` over the same file list
- `uv run pytest tests/plugins/codex/unit/test_adapter.py tests/plugins/codex/unit/test_responses_state.py tests/unit/llms/test_openai_responses_request_models.py tests/unit/streaming/test_buffer_parse_responses.py tests/unit/llms/streaming/test_accumulators.py tests/plugins/codex/integration/test_codex_basic.py -q` - 58 passed
- `uv run pytest tests/plugins/codex/unit/test_adapter.py tests/plugins/codex/unit/test_routes.py tests/plugins/codex/unit/test_responses_state.py tests/plugins/codex/integration/test_codex_basic.py tests/plugins/codex/integration/test_codex_websocket.py tests/unit/streaming/test_buffer_parse_responses.py tests/unit/llms/streaming/test_accumulators.py -q` - 67 passed
- `uv run pytest tests/plugins/codex tests/unit/streaming/test_buffer_parse_responses.py tests/unit/llms/streaming/test_accumulators.py tests/integration/test_streaming_converters.py -q` - 91 passed, 1 existing Pydantic deprecation warning
- `uv run mypy ccproxy/plugins/codex/adapter.py ccproxy/plugins/codex/routes.py ccproxy/plugins/codex/responses_state.py ccproxy/plugins/codex/plugin.py ccproxy/plugins/codex/config.py ccproxy/llms/models/openai.py ccproxy/llms/streaming/accumulators.py ccproxy/llms/formatters/common/identifiers.py ccproxy/streaming/deferred.py ccproxy/streaming/buffer.py ccproxy/streaming/errors.py tests/plugins/codex/integration/test_codex_basic.py tests/plugins/codex/unit/test_adapter.py tests/plugins/codex/unit/test_responses_state.py tests/unit/llms/streaming/test_accumulators.py tests/unit/llms/test_openai_responses_request_models.py tests/unit/streaming/test_buffer_parse_responses.py` - success, 17 source files
- `./Taskfile check` - all checks passed; mypy success on 592 source files; 616 files already formatted

Live local smoke
- `/health` returned `pass` after restart.
- Exercised `/responses`, `/v1/responses`, and `/codex/v1/responses` with `stream_options.include_usage`.
- Verified `previous_response_id` continuation, with `fc_*` item IDs and `call_*` call IDs.
- An invalid `previous_response_id` returns an OpenAI-style JSON error body.
- Output began with `GPT_DROID_EXEC_OK` and returned `/Users/smhanan/Projects/inline` plus branch `codex/inline-typora-repair`.