
Add Security information to README.md
Also add information about ramalama.conf to the ramalama.1 man page.

Signed-off-by: Daniel J Walsh <[email protected]>
rhatdan committed Feb 11, 2025
1 parent 9b72335 commit fa973e6
Showing 2 changed files with 41 additions and 3 deletions.
18 changes: 16 additions & 2 deletions README.md
@@ -17,12 +17,26 @@ RamaLama then pulls AI Models from model registries. Starting a chatbot or a res

When both Podman and Docker are installed, RamaLama defaults to Podman. The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behaviour. When neither is installed, RamaLama will attempt to run the model with software on the local system.
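
For example, to force Docker even when Podman is installed (a minimal sketch; the model reference below is illustrative, not a recommendation):

```console
$ export RAMALAMA_CONTAINER_ENGINE=docker
$ ramalama run ollama://tinyllama
```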

RamaLama supports multiple AI model registry types, called transports.
Supported transports:
## SECURITY

### Test and run your models more securely

RamaLama defaults to running AI models inside rootless containers using Podman or Docker. These containers isolate the AI model from information on the underlying host. The AI model is mounted into the container as a volume in read-only mode, so the process running the model, llama.cpp or vLLM, is isolated from the host. In addition, since `ramalama run` uses the `--network=none` option, the container cannot reach the network and leak information off the system. Finally, containers are run with the `--rm` option, which means that any content written while the container runs is wiped out when the application exits.

### How RamaLama delivers a robust security footprint

✅ Container Isolation – AI models run inside isolated containers, preventing direct access to the host system.
✅ Read-Only Volume Mounts – The AI model is mounted read-only, so processes inside the container cannot modify host files.
✅ No Network Access – `ramalama run` is executed with `--network=none`, so the model has no outbound connectivity through which information could leak.
✅ Auto-Cleanup – Containers run with `--rm`, wiping out any temporary data once the session ends.
✅ Drop All Linux Capabilities – Containers run with all Linux capabilities dropped, removing avenues of attack on the underlying host.
✅ No New Privileges – This Linux kernel feature prevents container processes from gaining additional privileges.
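
Taken together, these protections roughly correspond to a container invocation like the following sketch. This is not the exact command RamaLama constructs; the model path and serving command are placeholders for illustration:

```console
$ podman run --rm --network=none \
    --cap-drop=all --security-opt=no-new-privileges \
    -v /path/to/model.gguf:/model.gguf:ro \
    quay.io/ramalama/ramalama llama-server --model /model.gguf
```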

## TRANSPORTS

RamaLama supports multiple AI model registry types, called transports.
Supported transports:

| Transports | Web Site |
| ------------- | --------------------------------------------------- |
| HuggingFace | [`huggingface.co`](https://www.huggingface.co) |
26 changes: 25 additions & 1 deletion docs/ramalama.1.md
@@ -29,10 +29,25 @@ used within the VM.

Default settings for flags are defined in `ramalama.conf(5)`.

RamaLama supports multiple AI model registry types, called transports. Supported transports:
## SECURITY

### Test and run your models more securely

RamaLama defaults to running AI models inside rootless containers using Podman or Docker. These containers isolate the AI model from information on the underlying host. The AI model is mounted into the container as a volume in read-only mode, so the process running the model, llama.cpp or vLLM, is isolated from the host. In addition, since `ramalama run` uses the `--network=none` option, the container cannot reach the network and leak information off the system. Finally, containers are run with the `--rm` option, which means that any content written while the container runs is wiped out when the application exits.

### How RamaLama delivers a robust security footprint

✅ Container Isolation – AI models run inside isolated containers, preventing direct access to the host system.
✅ Read-Only Volume Mounts – The AI model is mounted read-only, so processes inside the container cannot modify host files.
✅ No Network Access – `ramalama run` is executed with `--network=none`, so the model has no outbound connectivity through which information could leak.
✅ Auto-Cleanup – Containers run with `--rm`, wiping out any temporary data once the session ends.
✅ Drop All Linux Capabilities – Containers run with all Linux capabilities dropped, removing avenues of attack on the underlying host.
✅ No New Privileges – This Linux kernel feature prevents container processes from gaining additional privileges.
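
One way to spot-check these settings on a running container is with `podman inspect` (a sketch assuming podman's Docker-compatible inspect fields; `CONTAINER` is a placeholder for the container name or ID):

```console
$ podman inspect --format '{{.HostConfig.NetworkMode}} {{.HostConfig.CapDrop}}' CONTAINER
```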

## MODEL TRANSPORTS

RamaLama supports multiple AI model registry types, called transports. Supported transports:

| Transports | Prefix | Web Site |
| ------------- | ------ | --------------------------------------------------- |
| URL based | https://, http://, file:// | `https://web.site/ai.model`, `file://tmp/ai.model`|
@@ -156,6 +171,15 @@ show RamaLama version

## CONFIGURATION FILES

**ramalama.conf** (`/usr/share/ramalama/ramalama.conf`, `/etc/ramalama/ramalama.conf`, `$HOME/.config/ramalama/ramalama.conf`)

RamaLama has built-in defaults for command line options. These defaults can be overridden using the ramalama.conf configuration files.

Distributions ship the `/usr/share/ramalama/ramalama.conf` file with their default settings. Administrators can override fields in this file by creating `/etc/ramalama/ramalama.conf`. Users can further modify defaults by creating `$HOME/.config/ramalama/ramalama.conf`. RamaLama merges its built-in defaults with the specified fields from these files, if they exist. Fields specified in the user's file override the administrator's file, which overrides the distribution's file, which overrides the built-in defaults.

RamaLama uses built-in defaults if no ramalama.conf file is found.

If the **RAMALAMA_CONFIG** environment variable is set, its value is used as the location of the ramalama.conf file instead of the default paths.
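
As a hedged illustration of a user-level override (the key shown follows `ramalama.conf(5)`; consult that page for the authoritative option list):

```console
$ cat $HOME/.config/ramalama/ramalama.conf
[ramalama]
# Prefer Docker over the Podman default; key name per ramalama.conf(5).
engine = "docker"
```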

## SEE ALSO
**[podman(1)](https://github.com/containers/podman/blob/main/docs/podman.1.md)**, **docker(1)**, **[ramalama.conf(5)](ramalama.conf.5.md)**
