feat: introduce GetConversationSummary RPC #5803
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
        path: &Path,
        fallback_provider: &str,
    ) -> std::io::Result<ConversationSummary> {
        let head = read_head_records(path, 1).await?;
**Load more than one line when building conversation summary**

The new GetConversationSummary RPC reads a rollout with `read_head_records(path, 1)` and then feeds that vector into `extract_conversation_summary`, which only returns a value if the head contains a user message. Normal rollouts start with a `session_meta` line followed by the first user message, so limiting the read to a single record leaves only the meta line in `head` and causes the helper to return `None`, producing an error for every valid conversation. The head read should include the first user message (e.g. use the same head limit as `list_conversations`, or loop until a user message is found) so the RPC can succeed.
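A minimal sketch of the suggested fix, assuming `read_head_records(path, n)` returns the first `n` rollout records; the `HEAD_RECORD_LIMIT` constant, the function name, and the exact arguments of `extract_conversation_summary` are hypothetical stand-ins, not the PR's actual code:

```rust
use std::path::Path;

// Hypothetical constant: mirrors whatever head limit list_conversations uses.
const HEAD_RECORD_LIMIT: usize = 10;

// Only the parameters and return type shown in the diff above come from the PR;
// everything else here is a stand-in.
async fn build_conversation_summary(
    path: &Path,
    fallback_provider: &str,
) -> std::io::Result<ConversationSummary> {
    // Read enough records that the head includes the first user message,
    // not just the leading session_meta line.
    let head = read_head_records(path, HEAD_RECORD_LIMIT).await?;
    extract_conversation_summary(&head, fallback_provider).ok_or_else(|| {
        std::io::Error::new(
            std::io::ErrorKind::InvalidData,
            "rollout head did not contain a user message",
        )
    })
}
```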
@codex please review
💡 Codex Review
`codex-rs/app-server-protocol/src/export.rs`, lines 70 to 75 in c1a5774:

    $macro!(crate::InterruptConversationParams);
    $macro!(crate::InterruptConversationResponse);
    $macro!(crate::ListConversationsParams);
    $macro!(crate::ListConversationsResponse);
    $macro!(crate::LoginApiKeyParams);
    $macro!(crate::LoginApiKeyResponse);
The commit introduces `GetConversationSummaryParams`/`GetConversationSummaryResponse`, but `export.rs`'s `for_each_schema_type!` list does not include these types alongside the other RPC schemas. This macro drives generation of TypeScript and JSON schema artifacts during `generate_types`; omitting the new types means clients depending on those generated files will not get definitions for the new RPC and may fail to compile or remain unaware of it. Add both structs to the list to keep schema generation complete.
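A minimal sketch of the suggested addition, following the pattern of the surrounding entries (the exact position within the list is assumed):

```rust
$macro!(crate::GetConversationSummaryParams);
$macro!(crate::GetConversationSummaryResponse);
```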
Because conversations that use the Responses API can have encrypted reasoning messages, trying to resume a conversation with a different provider could lead to confusing "failed to decrypt" errors. (This is reproducible by starting a conversation using ChatGPT login and resuming it as a conversation that uses OpenAI models via Azure.) This changes `ListConversationsParams` to take a `model_providers: Option<Vec<String>>` and adds `model_provider` on each `ConversationSummary` it returns so these cases can be disambiguated.

Note this ended up making changes to `codex-rs/core/src/rollout/tests.rs` because it had a number of cases where it expected `Some` for the value of `next_cursor` even though the list of rollouts was complete. According to this docstring: https://github.com/openai/codex/blob/bcd64c7e7231d6316a2377d1525a0fa74f21b783/codex-rs/app-server-protocol/src/protocol.rs#L334-L337, if there are no more items to return, then `next_cursor` should be `None`. This PR updates that logic.

---

Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/5658).

* #5803
* #5793
* __->__ #5658
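A sketch of the shapes this description implies; only `model_providers` and `model_provider` come from the PR text, while the other fields shown are assumed placeholders for whatever the structs already contain:

```rust
use serde::{Deserialize, Serialize};
use std::path::PathBuf;

#[derive(Debug, Serialize, Deserialize)]
pub struct ListConversationsParams {
    pub page_size: Option<usize>, // assumed pre-existing field
    pub cursor: Option<String>,   // assumed pre-existing field
    /// When set, only return conversations recorded with one of these providers.
    pub model_providers: Option<Vec<String>>,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct ConversationSummary {
    pub path: PathBuf,   // assumed pre-existing field
    pub preview: String, // assumed pre-existing field
    /// Provider the conversation was recorded with, so a resume can match it
    /// and avoid "failed to decrypt" errors from encrypted reasoning messages.
    pub model_provider: String,
}
```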
…#5793) An AppServer client should be able to use any (`model_provider`, `model`) pair in the user's config. `NewConversationParams` already supported specifying the `model`, but this PR expands it to support `model_provider` as well.

---

Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/5793).

* #5803
* __->__ #5793
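A sketch of the expanded params, assuming the struct is serde-serialized; the PR text only confirms that `model` was already supported and that `model_provider` is added, so treating both as `Option<String>` and omitting the remaining fields is an assumption:

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
pub struct NewConversationParams {
    /// Model to use; already supported before this change.
    pub model: Option<String>,
    /// Provider key from the user's config; new in this PR.
    pub model_provider: Option<String>,
    // ...other existing fields omitted from this sketch
}
```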
This adds an RPC to the app server to get the `ConversationSummary` via a rollout path. Now that the VS Code extension supports showing the Codex UI in an editor panel where the URI of the panel maps to the rollout file, we need to be able to get the `ConversationSummary` from the rollout file directly.
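A sketch of what the new RPC's request/response pair might look like given this description; beyond the rollout path itself, the field names (`rollout_path`, `summary`) are assumptions rather than the PR's actual definitions:

```rust
use serde::{Deserialize, Serialize};
use std::path::PathBuf;

#[derive(Debug, Serialize, Deserialize)]
pub struct GetConversationSummaryParams {
    /// Rollout file backing the editor panel (the panel URI maps to this path).
    pub rollout_path: PathBuf,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct GetConversationSummaryResponse {
    pub summary: ConversationSummary,
}
```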