
fix: normalize Ollama base URL to include /v1 to prevent 410 errors#803

Open
octo-patch wants to merge 3 commits into ValueCell-ai:main from octo-patch:fix/issue-783-ollama-base-url-missing-v1

Conversation

@octo-patch
Contributor

Fixes #783

Problem

When users configure Ollama with a bare base URL (e.g. http://localhost:11434 without the /v1 suffix), the gateway constructs requests to /chat/completions instead of /v1/chat/completions. Ollama's old non-OpenAI endpoint has been removed from that path, so it returns HTTP 410 Gone — which surfaces in ClawX as "410 status code (no body)".

The root cause is in normalizeProviderBaseUrl: for the ollama provider type with the openai-completions protocol, it strips the /chat/completions suffix but does not ensure the /v1 segment is present.

Solution

Add an explicit normalization branch for the ollama provider type that:

  1. Strips any trailing /v1/chat/completions or /chat/completions suffix.
  2. Appends /v1 when the resulting URL does not already end with it.

URLs that already include /v1 (the documented default) pass through unchanged.
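The two-step normalization above can be sketched as follows. This is an illustrative standalone helper, not the actual patch: the real logic lives inside normalizeProviderBaseUrl, and the function name here is hypothetical.

```typescript
// Hypothetical sketch of the normalization branch described above.
// The real code lives in normalizeProviderBaseUrl; this shape is illustrative.
function normalizeOllamaBaseUrl(baseUrl: string): string {
  // 1. Strip trailing slashes, then any chat-completions endpoint suffix.
  const withoutEndpoint = baseUrl
    .replace(/\/+$/, '')
    .replace(/\/v1\/chat\/completions$/i, '/v1')
    .replace(/\/chat\/completions$/i, '');
  // 2. Append /v1 when it is not already the final segment.
  return withoutEndpoint.endsWith('/v1')
    ? withoutEndpoint
    : withoutEndpoint + '/v1';
}
```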

Testing

  • pnpm exec vitest run tests/unit/provider-runtime-sync.test.ts — all 14 tests pass (12 existing + 2 new).
  • pnpm exec tsc -p tsconfig.json --noEmit — no type errors.

Two new test cases cover:

  • Bare Ollama URL (http://localhost:11434) → auto-appended to http://localhost:11434/v1
  • Full endpoint URL (http://localhost:11434/v1/chat/completions) → normalized to http://localhost:11434/v1

octo-patch and others added 3 commits April 6, 2026 09:25
…es during active send

During an active send, the history poll (which fires every 4s) can race
with a gateway reconnection. If the gateway's view of the session is
incomplete at that moment -- because it's mid-reconnect or hasn't fully
persisted the conversation yet -- the loaded history contains fewer
messages than the local state, causing the entire conversation to vanish
from the UI. The user then has to restart ClawX to see their messages
again.

Add a guard in applyLoadedMessages: if a send is in progress AND the
loaded history contains fewer messages than the current local state, keep
the local messages instead of replacing them. The next history load after
the run completes will reconcile the final state.

Fixes ValueCell-ai#709
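The guard described in this commit message can be sketched as below. The interfaces and the exact shape of applyLoadedMessages are assumptions for illustration; only the sending flag and the length comparison come from the commit text.

```typescript
// Illustrative sketch of the guard in applyLoadedMessages; types and field
// names are assumptions, only the guard condition follows the commit message.
interface ChatMessage { id: string; text: string }
interface SessionState { sending: boolean; messages: ChatMessage[] }

function applyLoadedMessages(
  state: SessionState,
  loaded: ChatMessage[],
): ChatMessage[] {
  // While a send is in flight, a shorter history snapshot is likely a
  // mid-reconnect race, not a genuine truncation: keep local messages.
  if (state.sending && loaded.length < state.messages.length) {
    return state.messages;
  }
  // Otherwise trust the gateway; the next load reconciles the final state.
  return loaded;
}
```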
When a skill's gateway skillKey differs from its ClawHub slug, the
merge logic failed to find the existing skill, causing it to appear
as a duplicate placeholder with 'Recently installed, initializing...'
description that never resolves.

Fix the matching predicate to also compare against the skill's slug
field (which defaults to skillKey when absent from the gateway
response), so skills are properly merged regardless of naming
differences between the gateway and ClawHub.

Fixes ValueCell-ai#317
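The widened matching predicate might look like the sketch below. The field names skillKey and slug come from the commit message; the surrounding types and the helper name are assumptions.

```typescript
// Hedged sketch of the widened merge predicate; skillKey/slug follow the
// commit message, everything else is illustrative.
interface GatewaySkill { skillKey: string; slug?: string }

function matchesInstalledSkill(skill: GatewaySkill, hubSlug: string): boolean {
  // slug defaults to skillKey when absent from the gateway response
  const slug = skill.slug ?? skill.skillKey;
  return skill.skillKey === hubSlug || slug === hubSlug;
}
```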
…ixes ValueCell-ai#783)

Ollama's OpenAI-compatible API is served under the /v1 path segment.
When users configure the base URL as http://localhost:11434 (without /v1),
the gateway constructs requests to /chat/completions instead of
/v1/chat/completions, causing Ollama to return HTTP 410 Gone errors.

Add an explicit normalization step for ollama provider type that strips
any trailing chat endpoint suffix and ensures /v1 is appended when
absent. Existing URLs already containing /v1 are unchanged.

Also adds two test cases covering the bare URL (missing /v1) and full
endpoint URL (/v1/chat/completions) input patterns.
Contributor

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 5a90d0e7b2

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines +1397 to +1400
preApplyState.sending &&
preApplyState.lastUserMessageAt &&
finalMessages.length < preApplyState.messages.length &&
preApplyState.messages.length > 1
Contributor


P1 Badge Compare message recency, not count, before rejecting polled history

This length-based guard can discard newer gateway history in long sessions because chat.history is fetched with limit: 200, so a valid post-send response may legitimately have fewer messages than local state after optimistic appends. In that case finalMessages is replaced with stale local messages, but later in the same function filteredMessages (from the discarded history) can still flip sending to false, stopping polling and leaving the assistant reply missing from the UI. The regression is reachable when a session is near/over the 200-message window and the send flow relies on history polling instead of streamed finals.
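A recency check along the lines the reviewer suggests could be sketched as below. This is one possible interpretation, not the reviewer's code; the TimedMessage shape and createdAt field are assumptions.

```typescript
// One possible recency-based staleness check, per the review suggestion:
// compare the newest message's timestamp rather than raw message counts.
// Field names (createdAt) are assumptions for illustration.
interface TimedMessage { id: string; createdAt: number }

function loadedHistoryIsStale(
  local: TimedMessage[],
  loaded: TimedMessage[],
): boolean {
  if (local.length === 0) return false; // nothing local to protect
  if (loaded.length === 0) return true; // empty snapshot during a send is stale
  const newestLocal = local[local.length - 1].createdAt;
  const newestLoaded = loaded[loaded.length - 1].createdAt;
  // A windowed fetch (limit: 200) can be shorter yet newer; only reject
  // snapshots whose newest message is older than what is shown locally.
  return newestLoaded < newestLocal;
}
```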


Comment on lines +60 to +63
.replace(/\/v1\/chat\/completions$/i, '/v1')
.replace(/\/chat\/completions$/i, '');
if (!withoutEndpoint.endsWith('/v1')) {
return withoutEndpoint + '/v1';
Contributor


P2 Badge Preserve protocol-aware URL trimming for Ollama base URLs

The new Ollama branch ignores apiProtocol and only strips chat-completions suffixes before appending /v1. If an Ollama account is configured with openai-responses or anthropic-messages and a user provides an endpoint URL like .../v1/responses or .../v1/messages, this logic produces malformed URLs such as .../v1/responses/v1, causing runtime calls to fail. Previously the unregistered-provider path trimmed protocol-specific suffixes; this change regresses that behavior for non-default Ollama protocol selections.
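Protocol-aware trimming, as the reviewer describes, could look like the sketch below. The protocol names mirror the comment (openai-completions, openai-responses, anthropic-messages); the helper itself and the suffix table are hypothetical.

```typescript
// Sketch of protocol-aware suffix trimming per the review comment.
// Protocol names come from the comment; this helper is hypothetical.
type ApiProtocol =
  | 'openai-completions'
  | 'openai-responses'
  | 'anthropic-messages';

const PROTOCOL_SUFFIXES: Record<ApiProtocol, RegExp> = {
  'openai-completions': /\/chat\/completions$/i,
  'openai-responses': /\/responses$/i,
  'anthropic-messages': /\/messages$/i,
};

function normalizeOllamaBaseUrl(baseUrl: string, protocol: ApiProtocol): string {
  // Strip the protocol-specific endpoint suffix before ensuring /v1, so
  // ".../v1/responses" becomes ".../v1" rather than ".../v1/responses/v1".
  const trimmed = baseUrl
    .replace(/\/+$/, '')
    .replace(PROTOCOL_SUFFIXES[protocol], '');
  return trimmed.endsWith('/v1') ? trimmed : trimmed + '/v1';
}
```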




Development

Successfully merging this pull request may close these issues.

[Bug]: When using the Ollama model API service, the error "410 status code (no body)" is shown
