How are you running AnythingLLM?
Docker (local)

What happened?
I am running AnythingLLM within a Docker container on an Ubuntu 24 instance with Ollama v0.5.7.
The local Ollama instance is accessible at http://localhost:11434.
When running the following command to stand up the Docker instance (I left out the -d param to be able to debug), I get an error in the console.

Run Command:

Error:
[backend] error: TypeError: fetch failed
    at node:internal/deps/undici/undici:12625:11
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async ollamaAIModels (/app/server/utils/helpers/customModels.js:346:18)
    at async getCustomModels (/app/server/utils/helpers/customModels.js:49:14)
    at async /app/server/endpoints/system.js:960:35
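The trace shows the AnythingLLM backend's fetch to the Ollama API failing outright when it tries to list models. A quick check that narrows this down to container-versus-host networking, assuming curl is available on the host and inside the image (the container name below is a placeholder):

# On the host, Ollama answers on its default port:
curl http://localhost:11434/api/tags

# Inside the container, localhost is the container's own loopback, so the
# same request cannot reach the host's Ollama and fails:
docker exec -it <anythingllm-container> curl http://localhost:11434/api/tags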
Are there known steps to reproduce?
Visit the http://localhost:3001/settings/llm-preference endpoint and select Ollama as the LLM Provider
That is the correct solution; otherwise the loopback address will keep resolving to the container's own network rather than the host network (where Ollama is running).
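For readers landing here, a minimal sketch of that kind of fix, assuming the resolution was to make the container reach the host's Ollama instead of its own loopback. The image name, port, and volume below follow the standard AnythingLLM Docker instructions rather than the exact command from this thread, which is not shown above, and storage-related flags are trimmed for brevity:

# Option 1: map the host gateway into the container, then set the Ollama
# Base URL in the AnythingLLM settings to http://host.docker.internal:11434
docker run -p 3001:3001 \
  --add-host=host.docker.internal:host-gateway \
  -v ~/anythingllm:/app/server/storage \
  mintplexlabs/anythingllm

# Option 2 (Linux only): share the host's network stack, so
# http://localhost:11434 inside the container reaches the host's Ollama
docker run --network=host \
  -v ~/anythingllm:/app/server/storage \
  mintplexlabs/anythingllm

Either way, the point of the comment above is that localhost inside the container is not the host, so the Ollama base URL has to resolve to the host network.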