[BUG]: Backend Error when detecting Ollama instances #3469

Closed
t73biz opened this issue Mar 15, 2025 · 2 comments
Labels: possible bug (Bug was reported but is not confirmed or is unable to be replicated.)

Comments


t73biz commented Mar 15, 2025

How are you running AnythingLLM?

Docker (local)

What happened?

I am running AnythingLLM within a Docker container on an Ubuntu 24 instance with Ollama v0.5.7.
The local Ollama instance is accessible at http://localhost:11434.
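As a quick sanity check from the host (assuming the default Ollama API on port 11434), listing the installed models confirms the server is reachable:

# Should return a JSON list of local models if Ollama is up
curl http://localhost:11434/api/tags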

When I run the following command to stand up the Docker instance (I left out the -d flag so I could debug), I get an error in the console.

Run Command:

export STORAGE_LOCATION=$HOME/AnythingLLMDesktop/storage && \
mkdir -p $STORAGE_LOCATION && \
touch "$STORAGE_LOCATION/.env" && \
docker run -p 3001:3001 \
--cap-add SYS_ADMIN \
--add-host=host.docker.internal:host-gateway \
-v ${STORAGE_LOCATION}:/app/server/storage \
-v ${STORAGE_LOCATION}/.env:/app/server/.env \
-e STORAGE_DIR="/app/server/storage" \
mintplexlabs/anythingllm

Error:

[backend] error: TypeError: fetch failed
    at node:internal/deps/undici/undici:12625:11
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async ollamaAIModels (/app/server/utils/helpers/customModels.js:346:18)
    at async getCustomModels (/app/server/utils/helpers/customModels.js:49:14)
    at async /app/server/endpoints/system.js:960:35
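
The trace is consistent with Docker's default bridge networking: inside the container, localhost refers to the container itself, not the Ubuntu host, so the fetch in customModels.js has nothing to connect to. A quick way to confirm from inside the container (the container name below is hypothetical; the image runs Node 18+, so its built-in fetch can be used directly):

docker exec -it anythingllm node -e \
  "fetch('http://localhost:11434/api/tags').then(r => r.text()).then(console.log).catch(console.error)"

The same call against http://host.docker.internal:11434 should only succeed once the host-gateway mapping (or host networking) is in place and Ollama is listening on an interface the container can reach.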

Are there known steps to reproduce?

Visit the http://localhost:3001/settings/llm-preference page and select Ollama as the LLM Provider.

t73biz added the possible bug label on Mar 15, 2025

t73biz commented Mar 15, 2025

OK, after quite a bit of fussing, this is the command that allowed Ollama on the host to be accessed from the Docker container.

export STORAGE_LOCATION=$HOME/AnythingLLMDesktop/storage && \
mkdir -p $STORAGE_LOCATION && \
touch "$STORAGE_LOCATION/.env" && \
docker run --rm -d -p 3001:3001 \
--cap-add SYS_ADMIN \
--network=host \
-v ${STORAGE_LOCATION}:/app/server/storage \
-v ${STORAGE_LOCATION}/.env:/app/server/.env \
-e STORAGE_DIR="/app/server/storage" \
mintplexlabs/anythingllm

The difference here is the --network=host flag; with it, I was able to use http://localhost:11434 as the Ollama config URL value.

I'm not sure whether this is secure for production. My gut says no, but it does work locally.

timothycarambat (Member) commented

That is the correct solution; otherwise, the loopback address keeps resolving inside the container network rather than the host network (where Ollama is running).
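
For readers who would rather avoid host networking in production, a minimal sketch of the bridge-network alternative, assuming Ollama is installed as a systemd service and can be rebound via its OLLAMA_HOST environment variable (adjust for your setup):

# Make Ollama listen on all interfaces so the container can reach it through the host gateway
sudo systemctl edit ollama       # add under [Service]: Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama

# Then keep the original run command (with --add-host=host.docker.internal:host-gateway,
# without --network=host) and set the Ollama Base URL in AnythingLLM to:
#   http://host.docker.internal:11434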
