
Conversation

@bolinfest (Collaborator) commented Oct 27, 2025

This adds an RPC to the app server to get the ConversationSummary via a rollout path. Now that the VS Code extension supports showing the Codex UI in an editor panel whose URI maps to the rollout file, we need to be able to get the ConversationSummary from the rollout file directly.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

path: &Path,
fallback_provider: &str,
) -> std::io::Result<ConversationSummary> {
let head = read_head_records(path, 1).await?;

P1: Load more than one line when building conversation summary

The new GetConversationSummary RPC reads a rollout with read_head_records(path, 1) and then feeds that vector into extract_conversation_summary, which only returns a value if the head contains a user message. Normal rollouts start with a session_meta line followed by the first user message, so limiting the read to a single record leaves only the meta line in head and causes the helper to return None, producing an error for every valid conversation. The head read should include the first user message (e.g. use the same head limit as list_conversations or loop until a user message is found) so the RPC can succeed.
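The suggested fix can be sketched with simplified stand-in types; the real `read_head_records`, `RolloutLine`, and `extract_conversation_summary` from codex-rs are not reproduced here. The sketch only illustrates the point of the review: a head limit of 1 usually yields just the `session_meta` line, so the read must extend far enough to reach the first user message.

```rust
// Simplified stand-in for a rollout record; the real codex types differ.
#[derive(Debug, Clone)]
enum RolloutRecord {
    SessionMeta(String),
    UserMessage(String),
    Other,
}

/// Stand-in for extract_conversation_summary: succeeds only if the head
/// slice contains a user message.
fn extract_summary(head: &[RolloutRecord]) -> Option<String> {
    head.iter().find_map(|r| match r {
        RolloutRecord::UserMessage(text) => Some(text.clone()),
        _ => None,
    })
}

/// Stand-in for the RPC path: read a bounded head of the rollout, then
/// try to build a summary from it.
fn summary_from_head(records: &[RolloutRecord], head_limit: usize) -> Option<String> {
    let head: Vec<RolloutRecord> = records.iter().take(head_limit).cloned().collect();
    extract_summary(&head)
}

fn main() {
    let rollout = vec![
        RolloutRecord::SessionMeta("meta".into()),
        RolloutRecord::UserMessage("hello codex".into()),
        RolloutRecord::Other,
    ];
    // A head limit of 1 reproduces the bug: only session_meta is read.
    assert_eq!(summary_from_head(&rollout, 1), None);
    // A larger head limit (as list_conversations uses) reaches the
    // first user message.
    assert_eq!(summary_from_head(&rollout, 10), Some("hello codex".into()));
    println!("ok");
}
```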


@bolinfest (Collaborator, Author)

@codex please review


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

$macro!(crate::InterruptConversationParams);
$macro!(crate::InterruptConversationResponse);
$macro!(crate::ListConversationsParams);
$macro!(crate::ListConversationsResponse);
$macro!(crate::LoginApiKeyParams);
$macro!(crate::LoginApiKeyResponse);

P1: Export GetConversationSummary schemas

The commit introduces GetConversationSummaryParams/GetConversationSummaryResponse, but export.rs’s for_each_schema_type! list does not include these types alongside the other RPC schemas. This macro drives generation of TypeScript and JSON schema artifacts during generate_types; omitting the new types means clients depending on those generated files will not get definitions for the new RPC and may fail to compile or remain unaware of it. Add both structs to the list to keep schema generation complete.
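A minimal illustration of the `for_each_schema_type!` pattern and why an omitted type silently disappears from generated artifacts. The macro signature here is illustrative, not the real export.rs code (in the actual list the types are referenced as `crate::…` paths, as shown in the excerpt above), and it collects names rather than driving schema generation.

```rust
// Stand-in for the per-type macro that export.rs would invoke for each
// schema type; here it just records the type's name.
macro_rules! collect_schema_name {
    ($out:expr, $ty:ty) => {
        $out.push(stringify!($ty).to_string());
    };
}

// Stand-in for for_each_schema_type!: every type listed here gets an
// artifact generated; a type missing from this list is simply never seen
// by the code generator.
macro_rules! for_each_schema_type {
    ($macro:ident, $out:expr) => {
        $macro!($out, GetConversationSummaryParams);
        $macro!($out, GetConversationSummaryResponse);
    };
}

fn exported_schema_names() -> Vec<String> {
    let mut names = Vec::new();
    for_each_schema_type!(collect_schema_name, names);
    names
}

fn main() {
    let names = exported_schema_names();
    // Both new RPC types appear, so generated TS/JSON schemas include them.
    assert!(names.contains(&"GetConversationSummaryParams".to_string()));
    assert!(names.contains(&"GetConversationSummaryResponse".to_string()));
    println!("{names:?}");
}
```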


bolinfest added a commit that referenced this pull request Oct 27, 2025
Because conversations that use the Responses API can have encrypted
reasoning messages, trying to resume a conversation with a different
provider could lead to confusing "failed to decrypt" errors. (This is
reproducible by starting a conversation using ChatGPT login and resuming
it as a conversation that uses OpenAI models via Azure.)

This changes `ListConversationsParams` to take a `model_providers:
Option<Vec<String>>` and adds `model_provider` on each
`ConversationSummary` it returns so these cases can be disambiguated.
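A simplified sketch of the shape described above. The field names follow the commit message, but the surrounding types and the filtering logic are illustrative stand-ins for the real app-server-protocol definitions, not the actual codex code.

```rust
#[derive(Debug)]
struct ListConversationsParams {
    // When Some, only conversations recorded with one of these providers
    // are returned, avoiding cross-provider resume of conversations that
    // carry encrypted reasoning messages.
    model_providers: Option<Vec<String>>,
}

#[derive(Debug)]
struct ConversationSummary {
    // Lets a client tell apart conversations that used different providers.
    model_provider: String,
    preview: String,
}

fn filter_summaries(
    params: &ListConversationsParams,
    all: Vec<ConversationSummary>,
) -> Vec<ConversationSummary> {
    match &params.model_providers {
        None => all,
        Some(providers) => all
            .into_iter()
            .filter(|s| providers.contains(&s.model_provider))
            .collect(),
    }
}

fn main() {
    let all = vec![
        ConversationSummary { model_provider: "openai".into(), preview: "a".into() },
        ConversationSummary { model_provider: "azure".into(), preview: "b".into() },
    ];
    let params = ListConversationsParams { model_providers: Some(vec!["azure".into()]) };
    let filtered = filter_summaries(&params, all);
    assert_eq!(filtered.len(), 1);
    assert_eq!(filtered[0].preview, "b");
    println!("ok");
}
```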

Note this ended up making changes to
`codex-rs/core/src/rollout/tests.rs` because it had a number of cases
that expected `Some` for the value of `next_cursor` even though the list
of rollouts was complete, which contradicts this docstring:


https://github.com/openai/codex/blob/bcd64c7e7231d6316a2377d1525a0fa74f21b783/codex-rs/app-server-protocol/src/protocol.rs#L334-L337

If there are no more items to return, then `next_cursor` should be
`None`. This PR updates that logic.
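The `next_cursor` contract the updated tests now follow can be sketched as below, using simplified stand-in types rather than the real codex pagination code: return `Some(cursor)` only when more items remain, `None` once the listing is complete.

```rust
/// Return one page of items plus a cursor for the next page.
/// The cursor is None exactly when the listing is exhausted.
fn page(items: &[&str], start: usize, page_size: usize) -> (Vec<String>, Option<usize>) {
    let end = (start + page_size).min(items.len());
    let page: Vec<String> = items[start..end].iter().map(|s| s.to_string()).collect();
    let next_cursor = if end < items.len() { Some(end) } else { None };
    (page, next_cursor)
}

fn main() {
    let items = ["a", "b", "c"];
    let (first, cursor) = page(&items, 0, 2);
    assert_eq!(first, vec!["a", "b"]);
    assert_eq!(cursor, Some(2));
    let (rest, cursor) = page(&items, 2, 2);
    assert_eq!(rest, vec!["c"]);
    // The list is complete, so next_cursor must be None.
    assert_eq!(cursor, None);
    println!("ok");
}
```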






---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/5658).
* #5803
* #5793
* __->__ #5658
Base automatically changed from pr5793 to main October 27, 2025 09:33
bolinfest added a commit that referenced this pull request Oct 27, 2025
…#5793)

An AppServer client should be able to use any (`model_provider`, `model`) pair in the user's config. `NewConversationParams` already supported specifying the `model`, but this PR expands it to support `model_provider` as well.
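A hedged sketch of the expanded params: only the two fields named in the PR are modeled, and the struct shape and defaulting logic are illustrative, not the real `NewConversationParams` definition.

```rust
#[derive(Debug, Default)]
struct NewConversationParams {
    model: Option<String>,
    // New in this PR: lets the client pick any provider from the
    // user's config, not just the default.
    model_provider: Option<String>,
}

/// Resolve the (provider, model) pair, falling back to config defaults
/// for any field the client left unset.
fn resolve(
    params: &NewConversationParams,
    default_provider: &str,
    default_model: &str,
) -> (String, String) {
    let provider = params
        .model_provider
        .clone()
        .unwrap_or_else(|| default_provider.to_string());
    let model = params.model.clone().unwrap_or_else(|| default_model.to_string());
    (provider, model)
}

fn main() {
    let params = NewConversationParams {
        model: Some("some-model".into()),
        model_provider: Some("azure".into()),
    };
    assert_eq!(
        resolve(&params, "openai", "default-model"),
        ("azure".into(), "some-model".into())
    );
    // Unset fields fall back to the configured defaults.
    let defaults = NewConversationParams::default();
    assert_eq!(
        resolve(&defaults, "openai", "default-model"),
        ("openai".into(), "default-model".into())
    );
    println!("ok");
}
```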

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/5793).
* #5803
* __->__ #5793
@bolinfest bolinfest merged commit 5ee8a17 into main Oct 27, 2025
40 checks passed
@bolinfest bolinfest deleted the pr5803 branch October 27, 2025 16:11
@github-actions github-actions bot locked and limited conversation to collaborators Oct 27, 2025
