fix: normalize Ollama base URL to include /v1 to prevent 410 errors #803
octo-patch wants to merge 3 commits into ValueCell-ai:main
Conversation
…es during active send

During an active send, the history poll (which fires every 4s) can race with a gateway reconnection. If the gateway's view of the session is incomplete at that moment -- because it's mid-reconnect or hasn't fully persisted the conversation yet -- the loaded history contains fewer messages than the local state, causing the entire conversation to vanish from the UI. The user then has to restart ClawX to see their messages again.

Add a guard in applyLoadedMessages: if a send is in progress AND the loaded history contains fewer messages than the current local state, keep the local messages instead of replacing them. The next history load after the run completes will reconcile the final state.

Fixes ValueCell-ai#709
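The guard described in this commit can be sketched as follows. This is a simplified stand-in, not ClawX's actual code: the `SessionState` shape and the reduced `applyLoadedMessages` signature are assumptions for illustration.

```typescript
// Simplified stand-in for the guard described in the commit message.
// Type and variable names are illustrative, not ClawX's actual code.
interface SessionState {
  sending: boolean;   // true while a send is in progress
  messages: string[]; // current local message list
}

function applyLoadedMessages(state: SessionState, loaded: string[]): string[] {
  // If a send is in flight and the polled history is shorter than local
  // state, assume the gateway view is incomplete and keep local messages.
  // The next history load after the run completes reconciles final state.
  if (state.sending && loaded.length < state.messages.length) {
    return state.messages;
  }
  return loaded;
}
```

The key design point is that the guard only rejects the polled history while `sending` is true; idle sessions always accept the gateway's view.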
When a skill's gateway skillKey differs from its ClawHub slug, the merge logic failed to find the existing skill, causing it to appear as a duplicate placeholder with a 'Recently installed, initializing...' description that never resolves.

Fix the matching predicate to also compare against the skill's slug field (which defaults to skillKey when absent from the gateway response), so skills are properly merged regardless of naming differences between the gateway and ClawHub.

Fixes ValueCell-ai#317
…ixes ValueCell-ai#783)

Ollama's OpenAI-compatible API is served under the /v1 path segment. When users configure the base URL as http://localhost:11434 (without /v1), the gateway constructs requests to /chat/completions instead of /v1/chat/completions, causing Ollama to return HTTP 410 Gone errors.

Add an explicit normalization step for the ollama provider type that strips any trailing chat endpoint suffix and ensures /v1 is appended when absent. Existing URLs already containing /v1 are unchanged.

Also adds two test cases covering the bare URL (missing /v1) and full endpoint URL (/v1/chat/completions) input patterns.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 5a90d0e7b2
```ts
preApplyState.sending &&
preApplyState.lastUserMessageAt &&
finalMessages.length < preApplyState.messages.length &&
preApplyState.messages.length > 1
```
Compare message recency, not count, before rejecting polled history
This length-based guard can discard newer gateway history in long sessions because `chat.history` is fetched with `limit: 200`, so a valid post-send response may legitimately have fewer messages than local state after optimistic appends. In that case `finalMessages` is replaced with stale local messages, but later in the same function `filteredMessages` (from the discarded history) can still flip `sending` to false, stopping polling and leaving the assistant reply missing from the UI. The regression is reachable when a session is near or over the 200-message window and the send flow relies on history polling instead of streamed finals.
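One way to read the reviewer's suggestion is to compare timestamps rather than counts. The following is a hedged sketch only; the `Msg` shape and `shouldKeepLocalMessages` name are assumptions, not the actual fix.

```typescript
// Hypothetical recency-based guard: instead of rejecting polled history
// whenever it has fewer messages than local state, only keep local state
// when the polled history actually ends before the last local message.
interface Msg {
  id: string;
  createdAt: number; // epoch millis, assumed available on each message
}

function shouldKeepLocalMessages(
  sending: boolean,
  local: Msg[],
  polled: Msg[],
): boolean {
  if (!sending || local.length === 0) return false;
  const lastLocal = local[local.length - 1].createdAt;
  const lastPolled =
    polled.length > 0 ? polled[polled.length - 1].createdAt : 0;
  // A windowed history (limit: 200) can be shorter than local state yet
  // still newer; only treat it as stale if it ends before our last message.
  return lastPolled < lastLocal;
}
```

Under this scheme a truncated-but-fresh history window would still be accepted, avoiding the stuck-`sending` failure mode the review describes.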
```ts
  .replace(/\/v1\/chat\/completions$/i, '/v1')
  .replace(/\/chat\/completions$/i, '');
if (!withoutEndpoint.endsWith('/v1')) {
  return withoutEndpoint + '/v1';
```
Preserve protocol-aware URL trimming for Ollama base URLs
The new Ollama branch ignores `apiProtocol` and only strips chat-completions suffixes before appending `/v1`. If an Ollama account is configured with `openai-responses` or `anthropic-messages` and a user provides an endpoint URL like `.../v1/responses` or `.../v1/messages`, this logic produces malformed URLs such as `.../v1/responses/v1`, causing runtime calls to fail. Previously the unregistered-provider path trimmed protocol-specific suffixes; this change regresses that behavior for non-default Ollama protocol selections.
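A sketch of the protocol-aware trimming the review asks to preserve. The suffix table, the `ApiProtocol` union, and the function name are illustrative assumptions, not the repository's actual code.

```typescript
// Hypothetical protocol-aware variant: trim the suffix matching the
// account's configured apiProtocol before appending /v1, so URLs like
// .../v1/responses are not mangled into .../v1/responses/v1.
type ApiProtocol =
  | 'openai-completions'
  | 'openai-responses'
  | 'anthropic-messages';

const PROTOCOL_SUFFIXES: Record<ApiProtocol, RegExp[]> = {
  'openai-completions': [/\/v1\/chat\/completions$/i, /\/chat\/completions$/i],
  'openai-responses': [/\/v1\/responses$/i, /\/responses$/i],
  'anthropic-messages': [/\/v1\/messages$/i, /\/messages$/i],
};

function normalizeOllamaBaseUrlForProtocol(
  baseUrl: string,
  protocol: ApiProtocol,
): string {
  let url = baseUrl.replace(/\/+$/, ''); // drop trailing slashes
  for (const suffix of PROTOCOL_SUFFIXES[protocol]) {
    url = url.replace(suffix, ''); // strip the protocol-specific endpoint
  }
  return url.endsWith('/v1') ? url : url + '/v1';
}
```

The point of the table is that each protocol's endpoint suffix is removed before `/v1` is re-appended, so all three protocol selections normalize to the same `/v1` base.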
Fixes #783
Problem
When users configure Ollama with a bare base URL (e.g. `http://localhost:11434` without the `/v1` suffix), the gateway constructs requests to `/chat/completions` instead of `/v1/chat/completions`. Ollama's old non-OpenAI endpoint has been removed from that path, so it returns HTTP 410 Gone, which surfaces in ClawX as "410 status code (no body)".

The root cause is in `normalizeProviderBaseUrl`: for the `ollama` provider type with the `openai-completions` protocol, it strips the `/chat/completions` suffix but does not ensure the `/v1` segment is present.

Solution
Add an explicit normalisation branch for the `ollama` provider type that:

- Strips any trailing `/v1/chat/completions` or `/chat/completions` suffix.
- Appends `/v1` when the resulting URL does not already end with it.

URLs that already include `/v1` (the documented default) pass through unchanged.

Testing
- `pnpm exec vitest run tests/unit/provider-runtime-sync.test.ts`: all 14 tests pass (12 existing + 2 new).
- `pnpm exec tsc -p tsconfig.json --noEmit`: no type errors.

Two new test cases cover:

- Bare URL (`http://localhost:11434`) → auto-appended to `http://localhost:11434/v1`
- Full endpoint URL (`http://localhost:11434/v1/chat/completions`) → normalized to `http://localhost:11434/v1`
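The normalisation this PR describes can be sketched as follows. This is a simplified stand-in assuming a dedicated helper; the real change lives inside `normalizeProviderBaseUrl` alongside other provider branches.

```typescript
// Simplified sketch of the Ollama normalisation branch described above.
function normalizeOllamaBaseUrl(baseUrl: string): string {
  const withoutEndpoint = baseUrl
    .replace(/\/+$/, '')                         // drop trailing slashes
    .replace(/\/v1\/chat\/completions$/i, '/v1') // full endpoint -> /v1
    .replace(/\/chat\/completions$/i, '');       // bare endpoint -> base
  // Ensure the OpenAI-compatible /v1 segment is present.
  return withoutEndpoint.endsWith('/v1')
    ? withoutEndpoint
    : withoutEndpoint + '/v1';
}
```

This mirrors the two new test cases: a bare URL gains `/v1`, a full endpoint URL collapses to `/v1`, and a URL already ending in `/v1` passes through unchanged.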