How are you running AnythingLLM?
Docker (remote machine)

What happened?
When I deployed AnythingLLM with Docker, the interface threw the error below after I selected a model and clicked "Next". I've included the backend logs. I'm using an x86 environment, and both the image and Docker are x86-based. How can I resolve this?

Here is the error log:
All migrations have been successfully applied.
┌─────────────────────────────────────────────────────────┐
│ Update available 5.3.1 -> 6.5.0 │
│ │
│ This is a major update - please follow the guide at │
│ https://pris.ly/d/major-version-upgrade │
│ │
│ Run the following to update │
│ npm i --save-dev prisma@latest │
│ npm i @prisma/client@latest │
└─────────────────────────────────────────────────────────┘
[backend] info: [EncryptionManager] Self-assigning key & salt for encrypting arbitrary data.
[backend] info: [TokenManager] Initialized new TokenManager instance for model: gpt-3.5-turbo
[backend] info: [TokenManager] Returning existing instance for model: gpt-3.5-turbo
[backend] info: [TELEMETRY ENABLED] Anonymous Telemetry enabled. Telemetry helps Mintplex Labs Inc improve AnythingLLM.
[backend] info: prisma:info Starting a sqlite pool with 7 connections.
[backend] info: [TELEMETRY SENT] {"event":"server_boot","distinctId":"884daea3-3643-4fee-8520-386a677904ac","properties":{"commit":"--","runtime":"docker"}}
[backend] info: [CommunicationKey] RSA key pair generated for signed payloads within AnythingLLM services.
[backend] info: [EncryptionManager] Loaded existing key & salt for encrypting arbitrary data.
[backend] info: Primary server in HTTP mode listening on port 3001
[backend] info: [BackgroundWorkerService] Feature is not enabled and will not be started.
[backend] info: [MetaGenerator] fetching custom meta tag settings...
[backend] error: Error: The OPENAI_API_KEY environment variable is missing or empty; either provide it, or instantiate the OpenAI client with an apiKey option, like new OpenAI({ apiKey: 'My API Key' }).
at new OpenAI (/app/server/node_modules/openai/index.js:53:19)
at openAiModels (/app/server/utils/helpers/customModels.js:91:18)
at getCustomModels (/app/server/utils/helpers/customModels.js:43:20)
at /app/server/endpoints/system.js:960:41
at Layer.handle [as handle_request] (/app/server/node_modules/express/lib/router/layer.js:95:5)
at next (/app/server/node_modules/express/lib/router/route.js:149:13)
at /app/server/utils/middleware/multiUserProtected.js:60:7
at Layer.handle [as handle_request] (/app/server/node_modules/express/lib/router/layer.js:95:5)
at next (/app/server/node_modules/express/lib/router/route.js:149:13)
at validatedRequest (/app/server/utils/middleware/validatedRequest.js:20:5)
[backend] error: TypeError: fetch failed
at node:internal/deps/undici/undici:12625:11
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async ollamaAIModels (/app/server/utils/helpers/customModels.js:346:18)
at async getCustomModels (/app/server/utils/helpers/customModels.js:49:14)
at async /app/server/endpoints/system.js:960:35
[backend] info: EmbeddingEngine changed from undefined to native - resetting undefined namespaces
[backend] info: [Event Logged] - workspace_vectors_reset
[backend] info: Resetting anythingllm managed vector namespaces for
/usr/local/bin/docker-entrypoint.sh: line 7: 113 Illegal instruction (core dumped) node /app/server/index.js
Are there known steps to reproduce?
No response
My Ollama configuration is correct, and I can successfully call the same model from the desktop version of AnythingLLM. Below is my container creation command.
This happens either when the container cannot write to STORAGE_LOCATION, or when the underlying CPU cannot run the Prisma binary. If you search GitHub for issues mentioning "/usr/local/bin/docker-entrypoint.sh: line 7: 113 Illegal instruction (core dumped) node /app/server/index.js", you will find it is most often the CPU: the Prisma engine bundled in the container requires AVX2, and the process aborts when the CPU lacks it. This works on desktop because the Prisma client there detects the runtime to use slightly differently, whereas the Docker container always tries to run the Linux runtime, which is incompatible with CPUs that lack AVX2.
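To confirm which of the two it is, you can run something like the following on the Docker host (a minimal sketch, assuming a Linux host; STORAGE_LOCATION stands in for whatever host directory you mounted as storage):

# Check whether the CPU advertises AVX2 (required by the bundled Prisma engine)
grep -q avx2 /proc/cpuinfo && echo "AVX2 supported" || echo "no AVX2 - expect Illegal instruction"

# Check that the mounted storage directory is writable
touch "$STORAGE_LOCATION/.write-test" && rm "$STORAGE_LOCATION/.write-test" && echo "storage is writable"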
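For comparison with your container creation command, this is the run command from the AnythingLLM README at the time of writing (double-check it against the current docs; the key point is that the mounted host directory must be writable by the container user):

export STORAGE_LOCATION=$HOME/anythingllm
mkdir -p $STORAGE_LOCATION && touch "$STORAGE_LOCATION/.env"
docker run -d -p 3001:3001 \
  --cap-add SYS_ADMIN \
  -v ${STORAGE_LOCATION}:/app/server/storage \
  -v ${STORAGE_LOCATION}/.env:/app/server/.env \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm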