LanternCX/codex-gateway
codex-gateway

Language: English | 简体中文

A self-hosted gateway that:

  • accepts OpenAI-compatible downstream requests (/v1/models, /v1/chat/completions, /v1/responses)
  • authenticates upstream requests with OAuth tokens obtained via interactive CLI login
  • protects downstream access using one fixed API key from config

Features

  • Interactive OAuth callback login (default): codex-gateway auth login
  • Runtime directory storage (config.yaml, oauth-token.json)
  • Default upstream mode is codex_oauth (compatible with ChatGPT OAuth tokens)
  • OpenAI-compatible endpoints:
    • GET /v1/models
    • POST /v1/chat/completions (streaming supported)
    • POST /v1/responses (JSON and stream pass-through supported)
    • In codex_oauth mode, /v1/models returns a compatibility model list; /v1/chat/completions is translated to and from Codex responses backend requests; /v1/responses proxies to the Codex responses backend path (default /backend-api/codex/responses, configurable via upstream.codex_responses_path).
    • In openai_api mode, /v1/chat/completions and /v1/responses proxy directly to the corresponding upstream paths.
  • Fixed downstream API key validation via Authorization: Bearer <fixed_key>
  • Automatic OAuth refresh before upstream calls
  • Structured logging with configurable level/format/output/color and file rotation settings
  • Default stdout logging is human-readable text with terminal color auto-detection
  • Request correlation via X-Request-ID (auto-generated when missing)
  • Health endpoint: GET /healthz
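
The X-Request-ID behavior listed above can be sketched as follows; this is an illustration of the documented semantics (generate an ID only when the header is missing), not the gateway's actual implementation:

```python
import uuid

def ensure_request_id(headers: dict) -> str:
    """Return the correlation ID for a request, generating one when
    the X-Request-ID header is missing (per the documented behavior)."""
    request_id = headers.get("X-Request-ID")
    if not request_id:
        request_id = str(uuid.uuid4())
        headers["X-Request-ID"] = request_id
    return request_id
```

A caller-supplied ID is preserved as-is, so downstream clients can correlate their own logs with the gateway's.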

Documentation

Runtime Directory

All runtime files are resolved from --workdir (default: current directory):

  • config.yaml
  • oauth-token.json
  • Structured logs emitted to stdout or file (logging.output)
  • logs/ when logging.output is file or both (default path <workdir>/logs)

Runtime path policy:

  • --config must point to a file inside --workdir
  • gateway-generated runtime artifacts are stored under --workdir only

Upstream mode:

  • upstream.mode: codex_oauth (default): transform chat-completions to Codex backend responses flow
  • upstream.mode: openai_api: direct proxy to upstream.base_url
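
For reference, the two modes above might look like this in config.yaml (field names as documented; the base_url value is a placeholder):

```yaml
# Default: transform chat-completions into the Codex responses backend flow
upstream:
  mode: codex_oauth

# Alternative: direct proxy to an OpenAI-compatible upstream
# upstream:
#   mode: openai_api
#   base_url: "https://api.example.com"
```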

Quick Start

  1. Build:
go build -o codex-gateway ./cmd/codex-gateway
  2. Prepare config (from repository root):
cp config.example.yaml config.yaml

Then edit config.yaml and set at least auth.downstream_api_key. upstream.mode defaults to codex_oauth when omitted; set upstream.base_url only when upstream.mode is openai_api. For the Codex OAuth callback mode, the OAuth endpoints and client id already have defaults. If needed, you can also set an outbound proxy used by both auth login and serve requests:

network:
  proxy_url: "http://127.0.0.1:7890"

network.proxy_url must be an absolute URL with host and a supported scheme: http, https, socks5, or socks5h (for example, http://127.0.0.1:7890 or socks5h://127.0.0.1:1080). Leave network.proxy_url empty or unset to use no explicit proxy.
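
A sketch of the validation rules just stated for network.proxy_url (absolute URL with a host and one of the four supported schemes); the gateway's own checks may differ in detail:

```python
from urllib.parse import urlparse

SUPPORTED_SCHEMES = {"http", "https", "socks5", "socks5h"}

def is_valid_proxy_url(value: str) -> bool:
    """Accept an empty value (no explicit proxy) or an absolute URL
    with a host and a supported scheme, per the config documentation."""
    if not value:
        return True  # empty/unset means no explicit proxy
    parsed = urlparse(value)
    return parsed.scheme in SUPPORTED_SCHEMES and bool(parsed.hostname)
```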

  3. Run OAuth login (interactive):
./codex-gateway auth login --workdir . --config config.yaml

This command starts a local callback listener and opens the browser authorization URL.

  4. Start server:
./codex-gateway serve --workdir . --config config.yaml

After startup, logs include:

  • api_prefix (for example http://127.0.0.1:8080/v1)
  • available_models discovered via a startup probe (GET /v1/models)

API Reference

Endpoint summary:

  • GET /healthz
  • GET /v1/models
  • POST /v1/chat/completions
  • POST /v1/responses
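
A minimal sketch of how a client might assemble a request to these endpoints; the api_key is the fixed downstream key from config.yaml, and the model name is only an example:

```python
def build_chat_request(api_key: str, model: str, prompt: str):
    """Assemble headers and a JSON-serializable body for
    POST /v1/chat/completions against the gateway."""
    headers = {
        # fixed downstream key (auth.downstream_api_key in config.yaml)
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return headers, body
```

The same Authorization header is required on all /v1 endpoints; GET /healthz needs no auth.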

OpenCode custom provider

For OpenCode clients that target this gateway and expect Codex-style responses/thinking behavior, prefer @ai-sdk/openai as the custom provider package (instead of a generic OpenAI-compatible adapter).

opencode.json example:

{
  "providers": {
    "gateway": {
      "package": "@ai-sdk/openai",
      "name": "Gateway",
      "options": {
        "baseURL": "http://127.0.0.1:8080/v1",
        "apiKey": "<downstream_api_key>"
      }
    }
  },
  "models": {
    "gateway/gpt-5.3-codex": {
      "reasoning": true,
      "limit": {
        "input": 200000,
        "output": 32000
      }
    }
  }
}

If your OpenCode setup uses the provider-catalog schema (npm + nested models), you can use this provider block under providers as a compatibility fallback:

"gateway": {
  "name": "gateway",
  "npm": "@ai-sdk/openai-compatible",
  "models": {
    "gpt-5.3-codex": {
      "name": "gpt-5.3-codex",
      "variants": {
        "xhigh": { "reasoningEffort": "xhigh" },
        "high": { "reasoningEffort": "high" },
        "low": { "reasoningEffort": "low" }
      }
    }
  },
  "options": {
    "baseURL": "http://localhost:8080/v1"
  }
}

Note: in codex_oauth mode, the chat-completions compatibility layer maps tools, tool_choice, parallel_tool_calls, reasoning_effort, and the tool_call_id on tool messages, and returns chat tool_calls in both non-stream and stream responses. max_tokens and max_completion_tokens are accepted for compatibility but ignored (not forwarded upstream). For full, unmodified Codex event semantics, use POST /v1/responses.
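
As an illustration of the mapped tool fields, a chat-completions request carrying a tool definition might look like this (the tool name and schema are made up for the example):

```json
{
  "model": "gpt-5.3-codex",
  "messages": [
    { "role": "user", "content": "What is the weather in Paris?" }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "parameters": {
          "type": "object",
          "properties": { "city": { "type": "string" } },
          "required": ["city"]
        }
      }
    }
  ],
  "tool_choice": "auto",
  "parallel_tool_calls": false
}
```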

Note: in codex_oauth mode, POST /v1/responses now injects default instructions ("You are a helpful assistant.") when the field is missing or blank, and treats max_output_tokens/max_completion_tokens as compatibility-only (accepted, then removed before forwarding upstream).
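
For example, a minimal POST /v1/responses payload; the input and stream field shapes follow the OpenAI Responses API and are assumed here. If instructions were omitted or blank, the gateway would inject the default quoted above:

```json
{
  "model": "gpt-5.3-codex",
  "instructions": "You are a helpful assistant.",
  "input": "Reply with exactly: hello",
  "stream": false
}
```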

Request payload example (POST /v1/chat/completions):

{
  "model": "gpt-5.3-codex",
  "messages": [
    {
      "role": "user",
      "content": "Reply with exactly: hello"
    }
  ],
  "stream": false
}
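
With stream: false, a successful reply follows the OpenAI chat-completions shape. A sketch of extracting the assistant text (the sample response below is fabricated for illustration):

```python
import json

# Fabricated non-stream chat completion, trimmed to the fields used here
sample = json.loads("""
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "hello" },
      "finish_reason": "stop"
    }
  ]
}
""")

def first_message_content(response: dict) -> str:
    """Pull the assistant text out of a non-stream chat completion."""
    return response["choices"][0]["message"]["content"]
```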

Errors

Errors generated by the gateway are returned in an OpenAI-style envelope:

{
  "error": {
    "message": "...",
    "type": "gateway_error",
    "code": "..."
  }
}

Common status mapping:

  • 401: downstream fixed API key missing/invalid
  • 503: OAuth token unavailable or refresh failed
  • 502: upstream network/service error (upstream_unavailable or upstream_error)

Notes:

  • The envelope above only applies to errors generated by the gateway.
  • Upstream 4xx responses are relayed as-is and may not match the gateway envelope.
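
A client-side sketch that distinguishes gateway-generated errors (which carry the envelope above) from relayed upstream errors; the helper name and the code strings in the comments are hypothetical:

```python
import json

def parse_gateway_error(body: str):
    """Return (message, code) when the body matches the gateway's
    error envelope, else None (e.g. a relayed upstream error)."""
    try:
        err = json.loads(body).get("error") or {}
    except (ValueError, AttributeError):
        return None  # not JSON, or not an object: relayed as-is
    if err.get("type") != "gateway_error":
        return None  # upstream envelopes use other type values
    return err.get("message"), err.get("code")
```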

Development

Run tests:

go test ./...
go test -race ./...

About

🛜 codex-gateway: forward your Codex subscription to an OpenAI-compatible API
