fix: Resolve the series of runtime API Error issues in the dist build output #269
xiaoFjun-eng wants to merge 1 commit into claude-code-best:main
Conversation
1. Fix `API Error: undefined is not an object (evaluating '_OpenAI_instances.add')`
- build.ts: add `external: ['openai']` so the Bun bundler no longer packs the openai SDK into a
chunk, which prevented the lazy initialization of private class fields (WeakSet/WeakMap) from running out of order
2. Fix `API Error: Cannot access 'OpenAI' before initialization`
- openai/client.ts, grok/client.ts: replace the static `import OpenAI` with `import type` (type-only)
plus `await import('openai')` inside the function body, fully sidestepping the TDZ (Temporal Dead Zone)
issue introduced by Bun code splitting
- openai/index.ts, grok/index.ts: add `await` at the corresponding call sites
3. Fix `API Error: undefined is not an object (evaluating 'DEFAULT_MODEL_MAP[cleanModel]')`
- openai/modelMapping.ts, grok/modelMapping.ts: replace the module-level constants DEFAULT_MODEL_MAP /
DEFAULT_FAMILY_MAP with the functions getDefaultModelMap() / getDefaultFamilyMap(),
so Bun's lazy module-load order can no longer leave the constants undefined at initialization time
- Covers both the first-login (/login) path and subsequent re-entries into the UI
4. Reset the model-string cache after login (login.tsx)
- After a successful /login, call resetSettingsCache(), applyConfigEnvironmentVariables(), and
resetModelStrings() in sequence, so model mappings take effect immediately after switching providers and no stale cache remains
5. Add a resetModelStrings() utility function (modelStrings.ts)
- Exposes a cache-reset entry point for the login flow to call
6. Improve error logging (openai/index.ts, grok/index.ts)
- catch blocks now also print error.stack, making runtime errors in the build output easier to locate
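The pattern behind items 2 and 3 can be sketched as follows. This is an illustrative stand-in, not the project's actual code: `loadOpenAI` substitutes for the real `await import('openai')` so the sketch is self-contained, and `'placeholder-key'` replaces the environment lookup. The idea is that module scope keeps only a type, and the implementation class is loaded on first call, after every chunk has been evaluated.

```typescript
// Type-only shape stands in for `import type OpenAI from 'openai'`.
type OpenAILike = { apiKey: string }

// Stand-in for `await import('openai')`: defers class evaluation to first use.
async function loadOpenAI(): Promise<new (opts: { apiKey: string }) => OpenAILike> {
  return class {
    apiKey: string
    constructor(opts: { apiKey: string }) {
      this.apiKey = opts.apiKey
    }
  }
}

let cachedClient: OpenAILike | null = null

export async function getClient(): Promise<OpenAILike> {
  if (cachedClient) return cachedClient
  // No module-level reference to the class exists, so there is no TDZ window
  // for Bun's code splitting to hit before this line runs.
  const OpenAIClass = await loadOpenAI()
  cachedClient = new OpenAIClass({ apiKey: 'placeholder-key' }) // real code reads OPENAI_API_KEY
  return cachedClient
}
```

Repeated calls reuse the cached instance, matching the caching behavior the PR's clients rely on.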
📝 Walkthrough
The PR converts OpenAI and Grok client initialization to asynchronous functions using dynamic imports, marks the openai package as external in the build configuration, refactors model mapping constants into factory functions, and introduces a post-login cache reset mechanism for model strings.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Actionable comments posted: 3
🧹 Nitpick comments (1)
src/services/api/grok/modelMapping.ts (1)
Lines 100-107: Avoid redundant factory function calls.
The factory functions are called twice each: once for the existence check and once for value retrieval. This creates unnecessary object allocations on every resolveGrokModel() call.
♻️ Proposed optimization using optional chaining

```diff
 // 5. Exact model name lookup
-if (getDefaultModelMap()[cleanModel]) {
-  return getDefaultModelMap()[cleanModel]
+const mappedModel = getDefaultModelMap()[cleanModel]
+if (mappedModel) {
+  return mappedModel
 }

 // 6. Family-level default
-if (family && getDefaultFamilyMap()[family]) {
-  return getDefaultFamilyMap()[family]
+const familyDefault = family && getDefaultFamilyMap()[family]
+if (familyDefault) {
+  return familyDefault
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/api/grok/modelMapping.ts` around lines 100 - 107, The code calls getDefaultModelMap() and getDefaultFamilyMap() twice each which causes unnecessary allocations; in resolveGrokModel(), call each factory once and cache the returned map in a local const (e.g., const modelMap = getDefaultModelMap()) then use modelMap[cleanModel] (or optional chaining) for the existence check and return, and do the same for family with const familyMap = getDefaultFamilyMap() to avoid duplicate function calls and object creation.
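The same idea as a self-contained sketch. The map contents and the fallback behavior here are made up for illustration; only the call-once pattern reflects the review suggestion: each factory is invoked a single time per resolution and its result reused from a local.

```typescript
// Hypothetical stand-ins for the PR's factories, with illustrative contents.
const getDefaultModelMap = (): Record<string, string> => ({
  grok: 'grok-2-latest',
})
const getDefaultFamilyMap = (): Record<string, string> => ({
  grok: 'grok-2-latest',
})

function resolveModel(cleanModel: string, family?: string): string | undefined {
  // Call each factory once; check and return from the same local map.
  const modelMap = getDefaultModelMap()
  const mapped = modelMap[cleanModel]
  if (mapped) return mapped

  const familyMap = getDefaultFamilyMap()
  return family ? familyMap[family] : undefined
}
```

With this shape, an exact-name hit allocates one map instead of two, and a family fallback allocates two instead of four.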
ℹ️ Review info: configuration defaults | review profile: CHILL | plan: Pro | run ID: 6c544ea2-10ce-460d-a0bb-b0eaf1e2e30b
📒 Files selected for processing (9):
- build.ts
- src/commands/login/login.tsx
- src/services/api/grok/client.ts
- src/services/api/grok/index.ts
- src/services/api/grok/modelMapping.ts
- src/services/api/openai/client.ts
- src/services/api/openai/index.ts
- src/services/api/openai/modelMapping.ts
- src/utils/model/modelStrings.ts
In `build.ts`:

```diff
   splitting: true,
   define: getMacroDefines(),
   features,
+  external: ['openai'],
```
Move openai from devDependencies to dependencies.
Marking openai as external correctly prevents Bun from bundling it, which resolves the private class field initialization issue. However, openai is currently in devDependencies and is required at runtime by the bundled CLI.
When users install this package via npm, openai will not be installed (devDependencies are excluded), causing runtime errors when they use the OpenAI or Grok provider:
- src/services/api/openai/client.ts dynamically imports openai
- src/services/api/grok/client.ts uses openai for the Grok provider
- Both are invoked at runtime when the user selects these providers
Move openai to dependencies to ensure it's installed alongside the CLI.
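A sketch of the suggested `package.json` shape. The version range below is illustrative only; the actual entry should carry whatever version is currently pinned in `devDependencies`:

```json
{
  "dependencies": {
    "openai": "^4.0.0"
  }
}
```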
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@build.ts` at line 62, Move the openai package from devDependencies to
dependencies in package.json so it is installed at runtime; update
package.json's "dependencies" to include the same openai version currently in
"devDependencies" and remove it from "devDependencies" so dynamic imports in
src/services/api/openai/client.ts and usage in src/services/api/grok/client.ts
resolve at runtime and the bundled CLI can require/openai without missing-module
errors.
In `src/services/api/grok/client.ts`:

```diff
 export async function getGrokClient(options?: {
   maxRetries?: number
   fetchOverride?: typeof fetch
   source?: string
-}): OpenAI {
+}): Promise<OpenAI> {
```
Tests need to be updated to handle the async signature.
The async conversion of getGrokClient() is correct for resolving the TDZ issue, but the test file calls it synchronously without await. All 6 calls in src/services/api/grok/__tests__/client.test.ts (lines 19, 27, 32, 33, 38, 41) return Promise<OpenAI> instead of the client instance, causing assertions like client.baseURL to fail.
Update the tests to be async functions and await all getGrokClient() calls.
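A self-contained sketch of the failure mode the review describes. `FakeClient`, the stubbed loader, and the URL are stand-ins, not the project's real client: without `await`, assertions run against a Promise rather than the client instance.

```typescript
// Hypothetical minimal client; the real code returns an OpenAI instance.
class FakeClient {
  constructor(public baseURL: string) {}
}

async function getGrokClient(): Promise<FakeClient> {
  // Stands in for the dynamic `await import('openai')` in the real factory.
  const mod = await Promise.resolve({ Client: FakeClient })
  return new mod.Client('https://api.x.ai/v1')
}

async function demo(): Promise<string> {
  // Wrong (pre-PR style): `const client = getGrokClient()` would leave
  // `client.baseURL` undefined, because `client` would be a Promise.
  const client = await getGrokClient() // correct: await before reading fields
  return client.baseURL
}
```

Each of the six test call sites needs the same change: mark the test function `async` and `await` the factory before asserting on fields like `baseURL`.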
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/api/grok/client.ts` around lines 15 - 19, Update the tests that
call getGrokClient so they await its async result: make the test functions async
and add await before each getGrokClient() invocation in the client.test.ts tests
(all places where client is assigned from getGrokClient), then use the resolved
client instance for assertions like client.baseURL; ensure all six calls that
currently return Promise<OpenAI> are awaited so assertions operate on the actual
OpenAI instance.
In `src/services/api/openai/client.ts`:

```diff
 export async function getOpenAIClient(options?: {
   maxRetries?: number
   fetchOverride?: typeof fetch
   source?: string
-}): OpenAI {
+}): Promise<OpenAI> {
+  if (cachedClient) return cachedClient
+
+  const { default: OpenAIClass } = await import('openai')
```
Add in-flight promise guard to prevent duplicate client construction during concurrent async initialization.
The current implementation checks cachedClient at line 21, but the await import('openai') at line 23 creates a window where concurrent calls bypass the cache check and each constructs a separate OpenAI instance. This violates the caching guarantee.
Implement in-flight promise deduplication: track a cachedClientPromise and return it if a build is already in progress, ensuring concurrent calls share the same initialization rather than duplicating work.
🔧 Suggested fix

```diff
 let cachedClient: OpenAI | null = null
+let cachedClientPromise: Promise<OpenAI> | null = null

 export async function getOpenAIClient(options?: {
   maxRetries?: number
   fetchOverride?: typeof fetch
   source?: string
 }): Promise<OpenAI> {
-  if (cachedClient) return cachedClient
-
-  const { default: OpenAIClass } = await import('openai')
-  const apiKey = process.env.OPENAI_API_KEY || ''
-  const baseURL = process.env.OPENAI_BASE_URL
-
-  const client = new OpenAIClass({
-    apiKey,
-    ...(baseURL && { baseURL }),
-    maxRetries: options?.maxRetries ?? 0,
-    timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
-    dangerouslyAllowBrowser: true,
-    ...(process.env.OPENAI_ORG_ID && { organization: process.env.OPENAI_ORG_ID }),
-    ...(process.env.OPENAI_PROJECT_ID && { project: process.env.OPENAI_PROJECT_ID }),
-    fetchOptions: getProxyFetchOptions({ forAnthropicAPI: false }),
-    ...(options?.fetchOverride && { fetch: options.fetchOverride }),
-  })
-
-  if (!options?.fetchOverride) {
-    cachedClient = client
-  }
-
-  return client
+  if (!options?.fetchOverride) {
+    if (cachedClient) return cachedClient
+    if (cachedClientPromise) return cachedClientPromise
+  }
+
+  const buildClient = async (): Promise<OpenAI> => {
+    const { default: OpenAIClass } = await import('openai')
+    const apiKey = process.env.OPENAI_API_KEY || ''
+    const baseURL = process.env.OPENAI_BASE_URL
+
+    return new OpenAIClass({
+      apiKey,
+      ...(baseURL && { baseURL }),
+      maxRetries: options?.maxRetries ?? 0,
+      timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
+      dangerouslyAllowBrowser: true,
+      ...(process.env.OPENAI_ORG_ID && { organization: process.env.OPENAI_ORG_ID }),
+      ...(process.env.OPENAI_PROJECT_ID && { project: process.env.OPENAI_PROJECT_ID }),
+      fetchOptions: getProxyFetchOptions({ forAnthropicAPI: false }),
+      ...(options?.fetchOverride && { fetch: options.fetchOverride }),
+    })
+  }
+
+  if (options?.fetchOverride) return buildClient()
+
+  cachedClientPromise = buildClient()
+  try {
+    cachedClient = await cachedClientPromise
+    return cachedClient
+  } finally {
+    cachedClientPromise = null
+  }
 }
```

🤖 Prompt for AI Agents
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/api/openai/client.ts` around lines 16 - 23, The getOpenAIClient
function can race on concurrent calls because it checks cachedClient before an
await and may construct multiple OpenAI instances; add an in-flight promise
guard by introducing a module-scoped cachedClientPromise: when getOpenAIClient
is called, if cachedClient exists return it, else if cachedClientPromise exists
return await cachedClientPromise; otherwise set cachedClientPromise to an async
initializer that imports OpenAI (the OpenAIClass), constructs the OpenAI client,
assigns it to cachedClient, then clears cachedClientPromise on success/failure,
and finally return the created client; update getOpenAIClient to use
cachedClientPromise so concurrent calls share the same initialization and
reference the same OpenAI instance.
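The guard can be exercised in isolation with a construction counter. All names below are illustrative stand-ins for the real client and loader; the timeout simulates the latency of the dynamic import, and two concurrent callers should share a single construction.

```typescript
// Hypothetical client shape; the real code caches an OpenAI instance.
type Client = { id: number }

let cached: Client | null = null
let inFlight: Promise<Client> | null = null
let constructions = 0

async function getClientOnce(): Promise<Client> {
  if (cached) return cached
  if (inFlight) return inFlight // join the initialization already in progress
  inFlight = (async () => {
    await new Promise(resolve => setTimeout(resolve, 10)) // simulate import latency
    constructions += 1
    return { id: constructions }
  })()
  try {
    cached = await inFlight
    return cached
  } finally {
    inFlight = null // allow a fresh attempt if initialization threw
  }
}
```

Calling `Promise.all([getClientOnce(), getClientOnce()])` resolves both callers to the same object with `constructions === 1`, which is exactly the invariant the unguarded version violates.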
If you want to exclude openai from the bundle, you also need to change package.json so that it becomes a runtime dependency.
Summary by CodeRabbit
- Fix `API Error: undefined is not an object (evaluating '_OpenAI_instances.add')`
- Fix `API Error: Cannot access 'OpenAI' before initialization`
- Fix `API Error: undefined is not an object (evaluating 'DEFAULT_MODEL_MAP[cleanModel]')`
- Reset the model-string cache after login (login.tsx)
- Add a resetModelStrings() utility function (modelStrings.ts)
- Improve error logging (openai/index.ts, grok/index.ts)