refactor(docker): switch from global to local npm install #2220
base: master
Conversation
Change the Docker images to use a local npm install instead of a global one (`npm -g`). This avoids permission issues when installing plugins and running in rootless container environments like Podman.

Changes:
- Dockerfile: use `npm install` instead of `npm i -g` for tarballs
- Dockerfile_rel: use `npm install` instead of `npm i -g` for releases
- Dockerfile_base_24.04: remove setcap on the node binary (not needed)
- startup.sh: update path to the local `node_modules/.bin/`
- README.md: update documentation for the new directory structure

The server now installs in `/home/node/signalk/node_modules/` and runs entirely in user space without requiring special capabilities.
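For illustration, the shape of the change might look like the sketch below. The base image, tarball name, and CMD are placeholders, not the PR's literal diff:

```dockerfile
# Sketch only: illustrates local (user-space) install vs. the old global one.
FROM node:24-slim
COPY --chown=node:node signalk-server.tgz /home/node/signalk/
USER node
WORKDIR /home/node/signalk
# Before (global): RUN npm i -g ./signalk-server.tgz  -- needs root-writable paths
# After (local): resolves into ./node_modules, entirely in user space
RUN npm install ./signalk-server.tgz
# startup.sh now calls binaries from the local node_modules/.bin/
ENV PATH=/home/node/signalk/node_modules/.bin:$PATH
CMD ["signalk-server"]
```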
When running with rootless Podman/Docker and the host's D-Bus socket mounted, the container should use the host's D-Bus and Avahi services instead of trying to start its own. Previously, startup.sh always tried to start D-Bus and Avahi, which produced confusing error messages like "Address already in use" and "Failed to contact D-Bus daemon" even though everything worked fine. Now we detect if the host D-Bus socket is already connected and skip starting container services in that case. This gives clean startup logs for rootless container deployments while maintaining backwards compatibility with Docker Compose setups that don't mount the socket.
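A minimal sketch of that check, assuming the host socket is mounted at the default `/run/dbus/system_bus_socket` path; the actual startup.sh logic may differ:

```sh
#!/bin/sh
# If the mounted host D-Bus socket answers, reuse the host's services;
# otherwise start the container's own dbus/avahi as before.
if [ -S /run/dbus/system_bus_socket ] && \
   dbus-send --system --dest=org.freedesktop.DBus --type=method_call \
             --print-reply / org.freedesktop.DBus.ListNames >/dev/null 2>&1; then
  echo "Host D-Bus detected, using host D-Bus/Avahi services"
else
  dbus-daemon --system 2>/dev/null &
  avahi-daemon --no-chroot 2>/dev/null &
fi
```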
Symlinks don't work reliably in all Docker/Compose configurations. Copy the @SignalK packages to the nested node_modules location where the admin-ui expects to find them.
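A sketch of that copy, using the paths quoted later in this thread (treat them as illustrative):

```sh
cd /home/node/signalk
# Copy instead of symlink so the admin UI finds the scoped packages
# at the nested path it resolves against.
mkdir -p node_modules/signalk-server/node_modules
cp -r node_modules/@signalk node_modules/signalk-server/node_modules/
```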
Adding some insight into the local install: there is value in having it. So we could experiment with containers, creating a new separate rootless container build. Once we are happy with that, we could deprecate the old `-g` containers AND switch the non-container install over - or strongly discourage or even completely deprecate the npm-based install.
In Docker, Docker Compose, Podman, or Podman Compose, can the end user log in as root? The current rootless approach seems to run on Podman, but not in Docker or Docker Compose. Or we could support platforms with their own build recipes.
I am not a fan of maintaining two platforms. There must be a simple solution that happily coexists in both worlds. I will set up a VM with the Docker toolchain and see what's needed to make both happy.
Allow Dockerfile_base_24.04 to accept a `NODE_VERSION` build arg (e.g., `NODE_VERSION=22`) in addition to the existing `NODE` arg (e.g., `NODE=22.x`) for better compatibility with different build scenarios. Add shell logic to ensure the NodeSource setup script receives the proper version format (with the `.x` suffix) regardless of input format. Maintains backward compatibility with CI/CD workflows that use `--build-arg NODE=22.x` while enabling manual builds with `--build-arg NODE_VERSION=22`.
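A sketch of that shell logic, assuming it runs inside the Dockerfile's RUN step; the variable names besides `NODE` and `NODE_VERSION` are illustrative:

```sh
# Prefer NODE (e.g. "22.x"), fall back to NODE_VERSION (e.g. "22")
NODE_INPUT="${NODE:-${NODE_VERSION}}"
case "$NODE_INPUT" in
  *.x) NODE_SETUP="$NODE_INPUT"     ;;   # already in NodeSource form
  *)   NODE_SETUP="${NODE_INPUT}.x" ;;   # append the expected suffix
esac
# The NodeSource setup script is versioned as e.g. "setup_22.x"
curl -fsSL "https://deb.nodesource.com/setup_${NODE_SETUP}" | bash -
```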
Add automatic detection of the container runtime environment and export a `CONTAINER_RUNTIME` environment variable to differentiate between Docker, Podman, Kubernetes, containerd, CRI-O, and LXC at runtime.

Detection uses multiple methods in priority order:
- File markers (`/.dockerenv` for Docker, `/run/.containerenv` for Podman)
- Environment variables (`KUBERNETES_SERVICE_HOST` for Kubernetes)
- cgroup patterns (`/kubepods`, `/lxc`, `/containerd`, `/docker`, `/libpod`)
- Runtime sockets (`/var/run/crio`, `/var/run/containerd/containerd.sock`)

This enables plugins and addons to adapt behavior based on the actual runtime environment. The same container image now works across all runtimes while exposing runtime-specific information via `process.env.CONTAINER_RUNTIME`.

Maintains backward compatibility with the `IS_IN_DOCKER` environment variable, which remains set to `true` in all containerized environments for the existing server update detection logic.

Users can override detection by setting `CONTAINER_RUNTIME` manually:
`docker run -e CONTAINER_RUNTIME=custom signalk/signalk-server`

Modified files:
- docker/startup.sh: add runtime detection logic for 6 container runtimes
- docker/README.md: add comprehensive documentation with usage examples and detection methods for all supported runtimes
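A hedged sketch of that priority order in shell (see the list above) — the mapping details and final fallback are assumptions, not the literal startup.sh code:

```sh
#!/bin/sh
# Sketch only: detection order as described in the commit message.
if [ -n "${CONTAINER_RUNTIME:-}" ]; then
  :                                              # manual override wins
elif [ -f /run/.containerenv ]; then
  CONTAINER_RUNTIME=podman                       # Podman file marker
elif [ -f /.dockerenv ]; then
  CONTAINER_RUNTIME=docker                       # Docker file marker
elif [ -n "${KUBERNETES_SERVICE_HOST:-}" ]; then
  CONTAINER_RUNTIME=kubernetes                   # kubelet-injected variable
elif grep -qE '(kubepods|libpod|lxc|containerd|docker)' /proc/1/cgroup 2>/dev/null; then
  # cgroup patterns; map the match to a runtime name
  case "$(cat /proc/1/cgroup)" in
    *kubepods*)   CONTAINER_RUNTIME=kubernetes ;;
    *libpod*)     CONTAINER_RUNTIME=podman ;;
    *lxc*)        CONTAINER_RUNTIME=lxc ;;
    *containerd*) CONTAINER_RUNTIME=containerd ;;
    *)            CONTAINER_RUNTIME=docker ;;
  esac
elif [ -d /var/run/crio ]; then
  CONTAINER_RUNTIME=crio                         # CRI-O runtime socket dir
elif [ -S /var/run/containerd/containerd.sock ]; then
  CONTAINER_RUNTIME=containerd
else
  CONTAINER_RUNTIME=docker                       # assumed fallback
fi
export CONTAINER_RUNTIME
export IS_IN_DOCKER=true                         # unchanged, for existing update logic
```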
tkurki left a comment:
Some inline comments.
I am just not sure how to proceed - like this, change all the containers at once, or start building new ones alongside the -g ones.
Does this break anything for the user - so that something that used to work no longer does?
> - **`crio`** - CRI-O (common in Kubernetes environments)
> - **`lxc`** - LXC/LXD containers
>
> ### Detection Methods
I would remove Detection Methods, Usage in Code, and Manual Override altogether. Detection is pretty obvious in the code, usage should be simple enough for the target audience, and if you need to override the container runtime you should know how to do it and not need 3x instructions.
> ## Container Runtime Detection
>
> The server automatically detects which container runtime is being used and sets the `CONTAINER_RUNTIME` environment variable accordingly. This enables plugins and addons to adapt their behavior based on the actual runtime environment.
Add to the master list of env variables.
> ### Backward Compatibility
>
> The `IS_IN_DOCKER` environment variable remains set to `true` in all containerized environments for backward compatibility with existing code and plugins.
Nothing changes in this regard, so I don't want to add "remains set" here - remove?
> ## Directory structure
>
> - server files: `/home/node/signalk`
> - settings files and plugins: `/home/node/.signalk`
>
> You most probably want to mount `/home/node/.signalk` from the host or as a volume to persist your settings.
Why remove?
> `FROM ubuntu:24.04`
> `ARG NODE`
> `ARG NODE_VERSION`
> `# Ensure NODE has .x suffix for NodeSource (supports both "22" and "22.x" input)`
This I don't understand. And if the argument changes, don't the workflow files need to be changed as well?
In fairness, I am quite new to how Docker is used today, which images are used by whom, and in which use case.

Supported platforms

For rapid user adoption I need:
- Major features work
I would love it if we just needed one Docker image that can be used as it was and for the Universal Installer. With #2239 we could use:
Just to be clear - this PR is a work in progress, not ready for merge. I want to discuss the direction first. What I'm proposing:

- Node 24 - ready for it, good WASM support.

One thing I need input on: npm global packages in userspace only, or separate images? For the Universal Installer I need everything running in userspace (no root). If that's a problem for traditional Docker users, we could do two images - but I'd rather not. Worst case: keep Ubuntu for Docker as-is and Alpine for the Universal Installer. But that doubles the maintenance and defeats the purpose. What's your take?
I think we are now discussing image and disk size in many forums at the same time. The size difference is ~500 MB on disk, less in compressed images. The extracted image on disk can't be as small as you indicated. I haven't tested this Alpine image yet, so I can't say whether there are compatibility issues with plugins etc.
@KEGustafsson Thanks,
This PR switches the Docker images from a global npm install (`npm i -g`) to a local install (`npm install`). This fixes the permission issues when installing plugins in rootless container environments like Podman.
What changed
The key fix is the symlink for @SignalK packages - the admin UI looks for them in a nested path, so we create:

`ln -s /home/node/signalk/node_modules/@signalk node_modules/signalk-server/node_modules/@signalk`

BLE works without `setcap cap_net_raw+eip` on the node binary when running with container capabilities - this is the cleaner approach for containerized deployments. Let me know if you'd like any changes!

Tested on RPi 4 and RPi 5 with headless Podman:
- `NET_ADMIN` + `NET_RAW` capabilities, no setcap needed
- `--net=host` + `NET_ADMIN`

To verify
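A hypothetical invocation matching the capability flags above, with the image name and settings path taken from elsewhere in this thread (not this PR's literal verification steps):

```sh
podman run -d --name signalk \
  --net=host \
  --cap-add=NET_ADMIN --cap-add=NET_RAW \
  -v ~/.signalk:/home/node/.signalk \
  signalk/signalk-server
```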