
Conversation

@brandon93s

@brandon93s brandon93s commented Jan 14, 2026

What does this PR do?

How did you verify your code works?

  • Tested with several large sessions via bun dev to confirm compaction triggers

Copilot AI review requested due to automatic review settings January 14, 2026 14:35
@github-actions
Contributor

Hey! Your PR title "openai: add input limit for codex-5.2" doesn't follow conventional commit format.

Please update it to start with one of:

  • feat: or feat(scope): new feature
  • fix: or fix(scope): bug fix
  • docs: or docs(scope): documentation changes
  • chore: or chore(scope): maintenance tasks
  • refactor: or refactor(scope): code refactoring
  • test: or test(scope): adding or updating tests

Where scope is the package name (e.g., app, desktop, opencode).

See CONTRIBUTING.md for details.

@github-actions
Contributor

The following comment was made by an LLM; it may be inaccurate:

No duplicate PRs found

@brandon93s brandon93s changed the title from "openai: add input limit for codex-5.2" to "fix(opencode): add input limit for codex-5.2" Jan 14, 2026
Contributor

Copilot AI left a comment


Pull request overview

This PR attempts to address the ChatGPT Pro plan's lower context limit compared to the API limit by adding an input field to the model's limit configuration for the gpt-5.2-codex model.

Changes:

  • Added input: 272000 to the limit object for the gpt-5.2-codex model configuration
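
As an illustration only, the resulting limit block for gpt-5.2-codex might look roughly like the sketch below. The context and output values are hypothetical placeholders; only the input: 272000 entry is taken from this PR.

```ts
// Hypothetical shape of the model's limit configuration; only `input` comes from this PR.
const limit = {
  context: 400_000, // placeholder: API context window, not confirmed here
  output: 128_000,  // placeholder: max output tokens, not confirmed here
  input: 272_000,   // new: lower usable input limit on the ChatGPT Pro plan
}
```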


@brandon93s brandon93s marked this pull request as draft January 14, 2026 14:46
@brandon93s brandon93s marked this pull request as ready for review January 14, 2026 15:16
  const count = input.tokens.input + input.tokens.cache.read + input.tokens.output
  const output = Math.min(input.model.limit.output, SessionPrompt.OUTPUT_TOKEN_MAX) || SessionPrompt.OUTPUT_TOKEN_MAX
- const usable = context - output
+ const usable = input.model.limit.input || context - output
Author


If an input limit is provided, it's always the usable limit. This can be less than context - output.
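
To make the behavior concrete, here is a small illustrative sketch. The numbers are hypothetical and the helper function usableTokens is invented for this example; it is not part of the codebase.

```ts
// Mirrors the changed line: an explicit input limit, when provided, is always the usable limit.
function usableTokens(context: number, output: number, inputLimit?: number): number {
  return inputLimit || context - output
}

// Hypothetical values, not the real model's numbers:
usableTokens(400_000, 32_000)           // 368_000: falls back to context - output
usableTokens(400_000, 32_000, 272_000)  // 272_000: the explicit limit wins, even though it is smaller
```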

@brandon93s brandon93s marked this pull request as ready for review January 14, 2026 17:23
@brandon93s
Author

brandon93s commented Jan 14, 2026

Auto-compaction now triggers based on the optional input limit.

There is a separate issue (I believe with all models) where the next outgoing message can overrun context/input limits. In this scenario, the error should be caught, auto-compaction should run on the thread history, and the latest message should then be retried. Perhaps this could be integrated into SessionRetry.retryable with a forced auto-compaction. I won't attempt that in this PR.
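
For illustration, the compaction trigger described above might conceptually look like the sketch below. This is a rough approximation, not opencode's actual implementation; the names and threshold are made up.

```ts
// Rough sketch only; names and threshold are hypothetical, not from the opencode codebase.
const COMPACT_THRESHOLD = 0.9

function shouldCompact(count: number, usable: number): boolean {
  // count: input + cached + output tokens, as computed in the diff above
  // usable: the explicit input limit when set, otherwise context - output
  return count > usable * COMPACT_THRESHOLD
}
```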


Development

Successfully merging this pull request may close these issues.

Default Codex context length not matching Pro plan
