[BUG]: docker AnythingLLM auto stop #3481

Closed
Kapsugar opened this issue Mar 17, 2025 · 2 comments
Labels
possible bug: Bug was reported but is not confirmed or is unable to be replicated.

Comments

@Kapsugar

How are you running AnythingLLM?

Docker (remote machine)

What happened?

When I deployed AnythingLLM with Docker, the interface threw this error after I selected a model and clicked "Next". The backend logs are included below. I'm running in an x86 environment, and both the image and Docker are x86 builds. How can I resolve this?

[Screenshot of the error]

This is the error log:

All migrations have been successfully applied.
┌─────────────────────────────────────────────────────────┐
│ Update available 5.3.1 -> 6.5.0 │
│ │
│ This is a major update - please follow the guide at │
│ https://pris.ly/d/major-version-upgrade │
│ │
│ Run the following to update │
│ npm i --save-dev prisma@latest │
│ npm i @prisma/client@latest │
└─────────────────────────────────────────────────────────┘
[backend] info: [EncryptionManager] Self-assigning key & salt for encrypting arbitrary data.
[backend] info: [TokenManager] Initialized new TokenManager instance for model: gpt-3.5-turbo
[backend] info: [TokenManager] Returning existing instance for model: gpt-3.5-turbo
[backend] info: [TELEMETRY ENABLED] Anonymous Telemetry enabled. Telemetry helps Mintplex Labs Inc improve AnythingLLM.
[backend] info: prisma:info Starting a sqlite pool with 7 connections.
[backend] info: [TELEMETRY SENT] {"event":"server_boot","distinctId":"884daea3-3643-4fee-8520-386a677904ac","properties":{"commit":"--","runtime":"docker"}}
[backend] info: [CommunicationKey] RSA key pair generated for signed payloads within AnythingLLM services.
[backend] info: [EncryptionManager] Loaded existing key & salt for encrypting arbitrary data.
[backend] info: Primary server in HTTP mode listening on port 3001
[backend] info: [BackgroundWorkerService] Feature is not enabled and will not be started.
[backend] info: [MetaGenerator] fetching custom meta tag settings...
[backend] error: Error: The OPENAI_API_KEY environment variable is missing or empty; either provide it, or instantiate the OpenAI client with an apiKey option, like new OpenAI({ apiKey: 'My API Key' }).
at new OpenAI (/app/server/node_modules/openai/index.js:53:19)
at openAiModels (/app/server/utils/helpers/customModels.js:91:18)
at getCustomModels (/app/server/utils/helpers/customModels.js:43:20)
at /app/server/endpoints/system.js:960:41
at Layer.handle [as handle_request] (/app/server/node_modules/express/lib/router/layer.js:95:5)
at next (/app/server/node_modules/express/lib/router/route.js:149:13)
at /app/server/utils/middleware/multiUserProtected.js:60:7
at Layer.handle [as handle_request] (/app/server/node_modules/express/lib/router/layer.js:95:5)
at next (/app/server/node_modules/express/lib/router/route.js:149:13)
at validatedRequest (/app/server/utils/middleware/validatedRequest.js:20:5)
[backend] error: TypeError: fetch failed
at node:internal/deps/undici/undici:12625:11
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async ollamaAIModels (/app/server/utils/helpers/customModels.js:346:18)
at async getCustomModels (/app/server/utils/helpers/customModels.js:49:14)
at async /app/server/endpoints/system.js:960:35
[backend] info: EmbeddingEngine changed from undefined to native - resetting undefined namespaces
[backend] info: [Event Logged] - workspace_vectors_reset
[backend] info: Resetting anythingllm managed vector namespaces for
/usr/local/bin/docker-entrypoint.sh: line 7: 113 Illegal instruction (core dumped) node /app/server/index.js

Are there known steps to reproduce?

No response

Kapsugar added the possible bug label Mar 17, 2025
@Kapsugar
Author

The Ollama configuration is fine, and I can call this model successfully from the desktop version of AnythingLLM. Below is the command I used to create the container.

docker run -d -p 3001:3001 \
  --name anythingllm \
  --cap-add SYS_ADMIN \
  --add-host=host.docker.internal:host-gateway \
  -v ${STORAGE_LOCATION}:/app/server/storage \
  -v ${STORAGE_LOCATION}/.env:/app/server/.env \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm
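
For reference, the command above only works if STORAGE_LOCATION is already exported in the shell and points to a writable directory containing a .env file; if it is unset, the -v mounts resolve to an empty path. A minimal setup sketch (the $HOME/anythingllm path below is only an example, not something from this report):

# Create the host storage directory and an empty .env before starting the container.
export STORAGE_LOCATION=$HOME/anythingllm
mkdir -p "$STORAGE_LOCATION"
touch "$STORAGE_LOCATION/.env"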

@timothycarambat
Member

This happens when the container cannot write to STORAGE_LOCATION. If you search GitHub for issues mentioning
/usr/local/bin/docker-entrypoint.sh: line 7: 113 Illegal instruction (core dumped) node /app/server/index.js

you will find it often occurs when the underlying CPU cannot run the Prisma binary, so the process aborts. This works on the desktop app because the Prisma client detects the runtime to use slightly differently there, whereas the Docker container always tries to run the Linux runtime, which is incompatible with CPUs that lack AVX2 support.
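
A quick way to confirm this on the host (a generic Linux check, not from the original thread) is to look at the CPU flags and verify the storage path is writable:

# Prints "avx2" if the host CPU advertises the AVX2 instruction set; no output means
# the Prisma engine shipped in the container is likely to die with "Illegal instruction".
grep -o avx2 /proc/cpuinfo | sort -u

# Confirm the storage location exists and is writable before starting the container.
ls -ld "$STORAGE_LOCATION"
touch "$STORAGE_LOCATION/.write-test" && rm "$STORAGE_LOCATION/.write-test"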
