The bug
Hi folks,
Just set up Immich on my CM3588. I followed the docs to configure armnn wherever needed, but the machine-learning logs still show CPUExecutionProvider.
Is there anything I can do to troubleshoot and fix this?
The OS that Immich Server is running on
Debian bookworm
Version of Immich Server
v1.125.2
Version of Immich Mobile App
v.irrelevant
Platform with the issue
Server
Web
Mobile
Your docker-compose.yml content
services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    extends:
      file: hwaccel.transcoding.yml
      service: rkmpp # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
    volumes:
      # Do not edit the next line. If you want to change the media storage location on your system, edit the value of UPLOAD_LOCATION in the .env file
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - '2283:2283'
    depends_on:
      - redis
      - database
    restart: always
    healthcheck:
      disable: false
    labels:
      - homepage.group=media
      - homepage.name=immich
      - homepage.icon=immich.png
      - homepage.href=https://photos/
      - homepage.description=photos and videos
  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, openvino] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-armnn
    extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
      file: hwaccel.ml.yml
      service: armnn # set to one of [armnn, cuda, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: always
    healthcheck:
      disable: false
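Since the armnn device and volume mappings come in through the extends section, one way to confirm they are actually applied is to render the merged configuration — a minimal sketch, assuming docker compose v2 and that the compose file is in the current directory:

# Render the fully merged configuration and inspect the ML service;
# if the extends merge worked, a devices entry for /dev/mali0 and the
# libmali.so / mali_csffw.bin volume mounts should be listed here.
docker compose config | grep -A 20 'immich-machine-learning:'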
Your .env content
# machine learning
MACHINE_LEARNING_ANN=true
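A quick way to confirm this variable actually reaches the machine-learning container (assuming the container name immich_machine_learning from the compose file above):

# Print the ML container's environment and look for the ANN toggle
docker exec immich_machine_learning env | grep -i machine_learning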
Reproduction steps
Configure armnn and rkmpp as described in docs
Look at logs of machine learning container
...
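The provider-selection lines only appear when a model is loaded, so one way to re-trigger them (assuming the service and container names above) is to restart the ML service, follow its logs, and then run a Smart Search or Face Detection job from the admin jobs page:

# Restart only the machine-learning service and follow its output
docker compose restart immich-machine-learning
docker logs -f immich_machine_learning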
Relevant log output
[01/30/25 04:57:02] INFO Starting gunicorn 23.0.0
[01/30/25 04:57:02] INFO Listening at: http://[::]:3003 (9)
[01/30/25 04:57:02] INFO Using worker: app.config.CustomUvicornWorker
[01/30/25 04:57:02] INFO Booting worker with pid: 10
[01/30/25 04:57:10] INFO Started server process [10]
[01/30/25 04:57:10] INFO Waiting for application startup.
[01/30/25 04:57:10] INFO Created in-memory cache with unloading after 300s
of inactivity.
[01/30/25 04:57:10] INFO Initialized request thread pool with 8 threads.
[01/30/25 04:57:10] INFO Application startup complete.
[01/30/25 05:57:06] INFO Loading visual model 'ViT-B-32__openai' to memory
[01/30/25 05:57:06] INFO Setting execution providers to
['CPUExecutionProvider'], in descending order of
preference
[01/30/25 05:57:07] INFO Loading detection model 'buffalo_l' to memory
[01/30/25 05:57:07] INFO Setting execution providers to
['CPUExecutionProvider'], in descending order of
preference
[01/30/25 05:57:08] INFO Loading recognition model 'buffalo_l' to memory
[01/30/25 05:57:08] INFO Setting execution providers to
['CPUExecutionProvider'], in descending order of
preference
[01/30/25 06:04:30] INFO Shutting down due to inactivity.
[01/30/25 06:04:30] INFO Shutting down
[01/30/25 06:04:30] INFO Waiting for application shutdown.
[01/30/25 06:04:31] INFO Application shutdown complete.
[01/30/25 06:04:31] INFO Finished server process [10]
[01/30/25 06:04:31] ERROR Worker (pid:10) was sent SIGINT!
[01/30/25 06:04:31] INFO Booting worker with pid: 966
[01/30/25 06:04:38] INFO Started server process [966]
[01/30/25 06:04:38] INFO Waiting for application startup.
[01/30/25 06:04:38] INFO Created in-memory cache with unloading after 300s
of inactivity.
[01/30/25 06:04:38] INFO Initialized request thread pool with 8 threads.
[01/30/25 06:04:38] INFO Application startup complete.
[01/30/25 16:49:36] INFO Loading visual model 'ViT-B-32__openai' to memory
[01/30/25 16:49:36] INFO Setting execution providers to
['CPUExecutionProvider'], in descending order of
preference
[01/30/25 16:49:37] INFO Loading detection model 'buffalo_l' to memory
[01/30/25 16:49:37] INFO Setting execution providers to
['CPUExecutionProvider'], in descending order of
preference
[01/30/25 16:49:39] INFO Loading recognition model 'buffalo_l' to memory
[01/30/25 16:49:39] INFO Setting execution providers to
['CPUExecutionProvider'], in descending order of
preference
Additional information
hwaccel.ml.yaml:
services:
armnn:
devices:
- /dev/mali0:/dev/mali0
volumes:
- /lib/firmware/mali_csffw.bin:/lib/firmware/mali_csffw.bin:ro # Mali firmware for your chipset (not always required depending on the driver)
- /usr/lib/aarch64-linux-gnu/libmali.so:/usr/lib/libmali.so:ro # Mali driver for your chipset (always required)
Verifying the devices exist on the host machine:
ls -la /dev/mali0
crw-rw-rw- 1 root root 10, 121 Jan 29 15:59 /dev/mali0
ls -la /lib/firmware/mali_csffw.bin
-rw-r--r-- 1 root root 278528 Jul 28 2020 /lib/firmware/mali_csffw.bin
ls -la /usr/lib/aarch64-linux-gnu/libmali.so
lrwxrwxrwx 1 root root 12 Jul 28 2020 /usr/lib/aarch64-linux-gnu/libmali.so -> libmali.so.1
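For comparison, the same paths can be checked inside the running machine-learning container, since hardware inference can only work if the device node and driver are visible there (container name as above; note that hwaccel.ml.yml mounts the driver to /usr/lib/libmali.so inside the container):

# Check that the Mali device node, firmware, and driver are visible
# from inside the ML container, at the paths hwaccel.ml.yml mounts them to
docker exec immich_machine_learning ls -la /dev/mali0 /lib/firmware/mali_csffw.bin /usr/lib/libmali.so

If any of these are missing inside the container, the ARM NN path presumably cannot initialize and inference falls back to the CPU provider.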
This discussion was converted from issue #15794 on January 30, 2025 17:27.