OAuth plugin for OpenCode that lets you use ChatGPT Plus/Pro rate limits with models like gpt-5.2, gpt-5.3-codex, and gpt-5.1-codex-max.
Note
Renamed from opencode-openai-codex-auth-multi — If you were using the old package, update your config to use oc-chatgpt-multi-auth instead. The rename was necessary because OpenCode blocks plugins containing opencode-openai-codex-auth in the name.
- GPT-5.2, GPT-5.3 Codex, GPT-5.1 Codex Max and all GPT-5.x variants via ChatGPT OAuth
- Multi-account support — Add up to 20 ChatGPT accounts, health-aware rotation with automatic failover
- Per-project accounts — Each project gets its own account storage (new in v4.10.0)
- Click-to-switch — Switch accounts directly from the OpenCode TUI
- Strict tool validation — Automatically cleans schemas for compatibility with strict models
- Auto-update notifications — Get notified when a new version is available
- 22 model presets — Full variant system with reasoning levels (none/low/medium/high/xhigh)
- Prompt caching — Session-based caching for faster multi-turn conversations
- Usage-aware errors — Friendly messages with rate limit reset timing
- Plugin compatible — Works alongside other OpenCode plugins (oh-my-opencode, dcp, etc.)
Terms of Service Warning — Read Before Installing
[!CAUTION] This plugin uses OpenAI's official OAuth authentication (the same method as OpenAI's official Codex CLI) for personal development use with your ChatGPT Plus/Pro subscription.
This plugin is for personal development only:
- Not for commercial services, API resale, or multi-user applications
- For production use, see OpenAI Platform API
By using this plugin, you acknowledge:
- This is an unofficial tool not endorsed by OpenAI
- Users are responsible for compliance with OpenAI's Terms of Use
- You assume all risks associated with using this plugin
For Humans
Option A: Let an LLM do it
Paste this into any LLM agent (Claude Code, OpenCode, Cursor, etc.):
Install the oc-chatgpt-multi-auth plugin and add the OpenAI model definitions to ~/.config/opencode/opencode.json by following: https://raw.githubusercontent.com/ndycode/oc-chatgpt-multi-auth/main/README.md
Option B: One-command install
npx -y oc-chatgpt-multi-auth@latest

This writes the config to ~/.config/opencode/opencode.json, backs up any existing config, and clears the plugin cache.
Want the legacy config (OpenCode v1.0.209 and below)? Add the --legacy flag.
Option C: Manual setup
- Add the plugin to ~/.config/opencode/opencode.json:
  { "plugin": ["oc-chatgpt-multi-auth@latest"] }
- Login with your ChatGPT account:
  opencode auth login
- Add models — copy the full configuration below
- Use it:
  opencode run "Hello" --model=openai/gpt-5.2 --variant=medium
For LLM Agents
- Edit the OpenCode configuration file at ~/.config/opencode/opencode.json
  Note: This path works on all platforms. On Windows, ~ resolves to your user home directory (e.g., C:\Users\YourName).
- Add the plugin to the plugin array:
  { "plugin": ["oc-chatgpt-multi-auth@latest"] }
- Add the model definitions from the Full Models Configuration section
- Set provider to "openai" and choose a model
opencode run "Hello" --model=openai/gpt-5.2 --variant=medium

| Model | Variants | Notes |
|---|---|---|
| gpt-5.2 | none, low, medium, high, xhigh | Latest GPT-5.2 with reasoning levels |
| gpt-5.3-codex | low, medium, high, xhigh | Latest GPT-5.3 Codex for code generation (default: xhigh) |
| gpt-5.1-codex-max | low, medium, high, xhigh | Maximum-context Codex |
| gpt-5.1-codex | low, medium, high | Standard Codex |
| gpt-5.1-codex-mini | medium, high | Lightweight Codex |
| gpt-5.1 | none, low, medium, high | GPT-5.1 base model |
Using variants:
# Modern OpenCode (v1.0.210+)
opencode run "Hello" --model=openai/gpt-5.2 --variant=high
# Legacy OpenCode (v1.0.209 and below)
opencode run "Hello" --model=openai/gpt-5.2-high

Full Models Configuration (Copy-Paste Ready)
Add this to your ~/.config/opencode/opencode.json:
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["oc-chatgpt-multi-auth@latest"],
"provider": {
"openai": {
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
},
"models": {
"gpt-5.2": {
"name": "GPT 5.2 (OAuth)",
"limit": { "context": 272000, "output": 128000 },
"modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
"variants": {
"none": { "reasoningEffort": "none" },
"low": { "reasoningEffort": "low" },
"medium": { "reasoningEffort": "medium" },
"high": { "reasoningEffort": "high" },
"xhigh": { "reasoningEffort": "xhigh" }
}
},
"gpt-5.3-codex": {
"name": "GPT 5.3 Codex (OAuth)",
"limit": { "context": 272000, "output": 128000 },
"modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
"variants": {
"low": { "reasoningEffort": "low" },
"medium": { "reasoningEffort": "medium" },
"high": { "reasoningEffort": "high" },
"xhigh": { "reasoningEffort": "xhigh" }
},
"options": {
"reasoningEffort": "xhigh",
"reasoningSummary": "detailed"
}
},
"gpt-5.1-codex-max": {
"name": "GPT 5.1 Codex Max (OAuth)",
"limit": { "context": 272000, "output": 128000 },
"modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
"variants": {
"low": { "reasoningEffort": "low" },
"medium": { "reasoningEffort": "medium" },
"high": { "reasoningEffort": "high" },
"xhigh": { "reasoningEffort": "xhigh" }
}
},
"gpt-5.1-codex": {
"name": "GPT 5.1 Codex (OAuth)",
"limit": { "context": 272000, "output": 128000 },
"modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
"variants": {
"low": { "reasoningEffort": "low" },
"medium": { "reasoningEffort": "medium" },
"high": { "reasoningEffort": "high" }
}
},
"gpt-5.1-codex-mini": {
"name": "GPT 5.1 Codex Mini (OAuth)",
"limit": { "context": 272000, "output": 128000 },
"modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
"variants": {
"medium": { "reasoningEffort": "medium" },
"high": { "reasoningEffort": "high" }
}
},
"gpt-5.1": {
"name": "GPT 5.1 (OAuth)",
"limit": { "context": 272000, "output": 128000 },
"modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
"variants": {
"none": { "reasoningEffort": "none" },
"low": { "reasoningEffort": "low" },
"medium": { "reasoningEffort": "medium" },
"high": { "reasoningEffort": "high" }
}
}
}
}
}
}

For legacy OpenCode (v1.0.209 and below), use config/opencode-legacy.json, which has individual model entries like gpt-5.2-low, gpt-5.2-medium, etc.
Add multiple ChatGPT accounts for higher combined quotas. The plugin uses health-aware rotation with automatic failover and supports up to 20 accounts.
opencode auth login  # Run again to add more accounts

The plugin provides built-in tools for managing your OpenAI accounts. These are available directly in OpenCode — just ask the agent or type the tool name.
Note: Tools were renamed from openai-accounts-* to codex-* in v4.12.0 for brevity.
List all configured accounts with their status.
codex-list
Output:
OpenAI Accounts (3 total):
[1] [email protected] (active)
[2] [email protected]
[3] [email protected]
Use codex-switch to change active account.
Switch to a different account by index (1-based).
codex-switch index=2
Output:
Switched to account [2] [email protected]
Show detailed status including rate limits and health scores.
codex-status
Output:
OpenAI Account Status:
[1] [email protected] (active)
Health: 100/100
Rate Limit: 45/50 requests remaining
Resets: 2m 30s
Last Used: 5 minutes ago
[2] [email protected]
Health: 85/100
Rate Limit: 12/50 requests remaining
Resets: 8m 15s
Last Used: 1 hour ago
Show live runtime metrics (request counts, latency, errors, rotations) for the current plugin process.
codex-metrics
Output:
Codex Plugin Metrics:
Uptime: 12m
Total upstream requests: 84
Successful responses: 77
Failed responses: 7
Average successful latency: 842ms
Check if all account tokens are still valid (read-only check).
codex-health
Output:
Checking 3 account(s):
✓ [1] [email protected]: Healthy
✓ [2] [email protected]: Healthy
✗ [3] [email protected]: Token expired
Summary: 2 healthy, 1 unhealthy
Refresh all OAuth tokens and save them to disk. Use this after long idle periods.
codex-refresh
Output:
Refreshing 3 account(s):
✓ [1] [email protected]: Refreshed
✓ [2] [email protected]: Refreshed
✗ [3] [email protected]: Failed - Token expired
Summary: 2 refreshed, 1 failed
Difference from health check: codex-health only validates tokens. codex-refresh actually refreshes them and saves new tokens to disk.
Remove an account by index. Useful for cleaning up expired accounts.
codex-remove index=3
Output:
Removed: [3] [email protected]
Remaining accounts: 2
Export all accounts to a portable JSON file. Useful for backup or migration.
codex-export path="~/backup/accounts.json"
Output:
Exported 3 account(s) to ~/backup/accounts.json
Import accounts from a JSON file (exported via codex-export). Merges with existing accounts.
codex-import path="~/backup/accounts.json"
Output:
Imported 2 new account(s) (1 duplicate skipped)
Total accounts: 4
| Tool | What It Does | Example |
|---|---|---|
| codex-list | List all accounts | "list my accounts" |
| codex-switch | Switch active account | "switch to account 2" |
| codex-status | Show rate limits & health | "show account status" |
| codex-metrics | Show runtime metrics | "show plugin metrics" |
| codex-health | Validate tokens (read-only) | "check account health" |
| codex-refresh | Refresh & save tokens | "refresh my tokens" |
| codex-remove | Remove an account | "remove account 3" |
| codex-export | Export accounts to file | "export my accounts" |
| codex-import | Import accounts from file | "import accounts from backup" |
How rotation works:
- Health scoring tracks success/failure per account
- Token bucket prevents hitting rate limits
- Hybrid selection prefers healthy accounts with available tokens
- Always retries when all accounts are rate-limited (waits for reset with live countdown)
- 20% jitter on retry delays to avoid thundering herd
- Auto-removes accounts after 3 consecutive auth failures (new in v4.11.0)
Per-project accounts (v4.10.0+):
By default, each project gets its own account storage namespace. This means you can keep different active accounts per project without writing account files into your repo. Works from subdirectories too; the plugin walks up to find the project root (v4.11.0). Disable with perProjectAccounts: false in your config.
Storage locations:
- Per-project: ~/.opencode/projects/{project-key}/openai-codex-accounts.json
- Global (when per-project accounts are disabled): ~/.opencode/openai-codex-accounts.json
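The walk-up behavior can be approximated as follows (a minimal sketch, assuming the project root is identified by a marker such as a .git directory; the plugin's real detection logic may differ):

```python
from pathlib import Path


def find_project_root(start: Path, markers: tuple[str, ...] = (".git",)) -> Path:
    """Walk up from `start` until a directory containing one of `markers` is found.

    Falls back to `start` itself if no marker exists anywhere above it, so the
    account storage still gets a stable per-directory namespace.
    """
    current = start.resolve()
    for candidate in (current, *current.parents):
        if any((candidate / m).exists() for m in markers):
            return candidate
    return current
```

This is why running OpenCode from a subdirectory of a project still resolves to the same account namespace as running it from the project root.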
Quick reset: Most issues can be resolved by deleting ~/.opencode/auth/openai.json and running opencode auth login again.
OpenCode uses ~/.config/opencode/ on all platforms including Windows.
| File | Path |
|---|---|
| Main config | ~/.config/opencode/opencode.json |
| Auth tokens | ~/.opencode/auth/openai.json |
| Multi-account (global) | ~/.opencode/openai-codex-accounts.json |
| Multi-account (per-project) | ~/.opencode/projects/{project-key}/openai-codex-accounts.json |
| Plugin config | ~/.opencode/openai-codex-auth-config.json |
| Debug logs | ~/.opencode/logs/codex-plugin/ |
Windows users: ~ resolves to your user home directory (e.g., C:\Users\YourName).
401 Unauthorized Error
Cause: Token expired or not authenticated.
Solutions:
- Re-authenticate: opencode auth login
- Check the auth file exists: cat ~/.opencode/auth/openai.json
Browser Doesn't Open for OAuth
Cause: Port 1455 conflict or SSH/WSL environment.
Solutions:
- Manual URL paste:
  - Re-run opencode auth login
  - Select "ChatGPT Plus/Pro (manual URL paste)"
  - Paste the full redirect URL (including #code=...) after login
- Check port availability:
  # macOS/Linux
  lsof -i :1455
  # Windows
  netstat -ano | findstr :1455
- Stop Codex CLI if it's running — both use port 1455
Model Not Found
Cause: Missing provider prefix or config mismatch.
Solutions:
- Use the openai/ prefix:
  # Correct
  --model=openai/gpt-5.2
  # Wrong
  --model=gpt-5.2
- Verify the model is in your config:
  { "models": { "gpt-5.2": { ... } } }
Rate Limit Exceeded
Cause: ChatGPT subscription usage limit reached.
Solutions:
- Wait for reset (plugin shows timing in error message)
- Add more accounts: opencode auth login
- Switch to a different model family
Multi-Turn Context Lost
Cause: Old plugin version or missing config.
Solutions:
- Update the plugin: npx -y oc-chatgpt-multi-auth@latest
- Ensure your config has: { "include": ["reasoning.encrypted_content"], "store": false }
OAuth Callback Issues (Safari/WSL/Docker)
Safari HTTPS-only mode:
- Use Chrome or Firefox instead, or
- Temporarily disable Safari > Settings > Privacy > "Enable HTTPS-only mode"
WSL2:
- Use VS Code's port forwarding, or
- Configure Windows → WSL port forwarding
SSH / Remote:
ssh -L 1455:localhost:1455 user@remote

Docker / Containers:
- OAuth with localhost redirect doesn't work in containers
- Use SSH port forwarding or manual URL flow
Works alongside oh-my-opencode. No special configuration needed.
{
"plugin": [
"oc-chatgpt-multi-auth@latest",
"oh-my-opencode@latest"
]
}

List this plugin before dcp:
{
"plugin": [
"oc-chatgpt-multi-auth@latest",
"@tarquinen/opencode-dcp@latest"
]
}

- openai-codex-auth — Not needed. This plugin replaces the original.
Create ~/.opencode/openai-codex-auth-config.json for optional settings:
| Option | Default | What It Does |
|---|---|---|
| codexMode | true | Uses the Codex-OpenCode bridge prompt (synced with the latest Codex CLI) |
| codexTuiV2 | true | Enables Codex-style terminal UI output (set false for legacy output) |
| codexTuiColorProfile | truecolor | Terminal color profile for the Codex UI (truecolor, ansi256, ansi16) |
| codexTuiGlyphMode | ascii | Glyph mode for the Codex UI (ascii, unicode, auto) |
| fastSession | false | Forces low-latency settings per request (reasoningEffort=none/low, reasoningSummary=off, textVerbosity=low) |
| fastSessionStrategy | hybrid | hybrid speeds up simple turns but keeps full depth on complex prompts; always forces fast tuning on every turn |
| fastSessionMaxInputItems | 30 | Max input items kept when fast tuning is applied |
| Option | Default | What It Does |
|---|---|---|
| perProjectAccounts | true | Each project gets its own account storage namespace under ~/.opencode/projects/ |
| toastDurationMs | 5000 | How long toast notifications stay visible (ms) |
| Option | Default | What It Does |
|---|---|---|
| retryAllAccountsRateLimited | true | Wait and retry when all accounts are rate-limited |
| retryAllAccountsMaxWaitMs | 0 | Max wait time (0 = unlimited) |
| retryAllAccountsMaxRetries | Infinity | Max retry attempts |
| fallbackToGpt52OnUnsupportedGpt53 | true | Automatically retry once with gpt-5.2-codex when gpt-5.3-codex is rejected due to ChatGPT Codex OAuth entitlement |
| fetchTimeoutMs | 60000 | Request timeout to the Codex backend (ms) |
| streamStallTimeoutMs | 45000 | Abort non-stream parsing if the SSE stream stalls (ms) |
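Putting a few of these options together, a ~/.opencode/openai-codex-auth-config.json might look like this (values are illustrative; include only the keys you want to override, since anything omitted falls back to the defaults above):

```json
{
  "codexMode": true,
  "codexTuiColorProfile": "ansi256",
  "fastSession": true,
  "fastSessionStrategy": "hybrid",
  "perProjectAccounts": true,
  "toastDurationMs": 3000,
  "retryAllAccountsRateLimited": true,
  "retryAllAccountsMaxWaitMs": 600000,
  "fetchTimeoutMs": 120000
}
```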
DEBUG_CODEX_PLUGIN=1 opencode # Enable debug logging
ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode # Log all API requests
CODEX_PLUGIN_LOG_LEVEL=debug opencode # Set log level (debug|info|warn|error)
CODEX_MODE=0 opencode # Temporarily disable bridge prompt
CODEX_TUI_V2=0 opencode # Disable Codex-style UI (legacy output)
CODEX_TUI_COLOR_PROFILE=ansi16 opencode # Force UI color profile
CODEX_TUI_GLYPHS=unicode opencode # Override glyph mode (ascii|unicode|auto)
CODEX_AUTH_PREWARM=0 opencode # Disable startup prewarm (prompt/instruction cache warmup)
CODEX_AUTH_FAST_SESSION=1 opencode # Enable faster response defaults
CODEX_AUTH_FAST_SESSION_STRATEGY=always opencode # Force fast mode for all prompts
CODEX_AUTH_FAST_SESSION_MAX_INPUT_ITEMS=24 opencode # Tune fast-mode history window
CODEX_AUTH_FALLBACK_GPT53_TO_GPT52=0 opencode # Disable gpt-5.3 -> gpt-5.2 fallback (strict mode)
CODEX_AUTH_FETCH_TIMEOUT_MS=120000 opencode # Override request timeout
CODEX_AUTH_STREAM_STALL_TIMEOUT_MS=60000 opencode # Override SSE stall timeout

For all options, see docs/configuration.md.
- Getting Started — Complete installation guide
- Configuration — All configuration options
- Troubleshooting — Common issues and fixes
- Architecture — How the plugin works
- numman-ali/opencode-openai-codex-auth by numman-ali — Original plugin
- ndycode — Multi-account support and maintenance
MIT License. See LICENSE for details.
Legal
- Personal / internal development only
- Respect subscription quotas and data handling policies
- Not for production services or bypassing intended limits
By using this plugin, you acknowledge:
- Terms of Service risk — This approach may violate ToS of AI model providers
- No guarantees — APIs may change without notice
- Assumption of risk — You assume all legal, financial, and technical risks
- Not affiliated with OpenAI. This is an independent open-source project.
- "ChatGPT", "GPT-5", "Codex", and "OpenAI" are trademarks of OpenAI, L.L.C.