Add Security information to README.md #787

Merged · 1 commit · Feb 11, 2025
18 changes: 16 additions & 2 deletions README.md
@@ -17,12 +17,26 @@ RamaLama then pulls AI Models from model registries. Starting a chatbot or a res

When both Podman and Docker are installed, RamaLama defaults to Podman. The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behaviour. When neither is installed, RamaLama will attempt to run the model with software on the local system.
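
For example, on a host with both engines installed, a user could force Docker for the current shell session. This is a minimal sketch; the model name is illustrative:

```bash
# Force RamaLama to use Docker instead of Podman for this shell session.
export RAMALAMA_CONTAINER_ENGINE=docker

# Subsequent commands now run the model inside a Docker container;
# "tinyllama" is only an illustrative model name.
ramalama run tinyllama
```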

RamaLama supports multiple AI model registry types called transports.
Supported transports:
## SECURITY

### Test and run your models more securely

RamaLama defaults to running AI models inside of rootless containers using Podman or Docker. These containers isolate the AI models from information on the underlying host. With RamaLama containers, the AI model is mounted as a volume into the container in read/only mode. This results in the process running the model, llama.cpp or vLLM, being isolated from the host. In addition, since `ramalama run` uses the --network=none option, the container cannot reach the network and leak information out of the system. Finally, containers are run with the --rm option, which means that any content written during the running of the container is wiped out when the application exits.

Collaborator: "read/only" typo, can catch typos in follow on PRs though

### Here’s how RamaLama delivers a robust security footprint:

✅ Container Isolation – AI models run within isolated containers, preventing direct access to the host system.
✅ Read-Only Volume Mounts – The AI model is mounted in read-only mode, meaning that processes inside the container cannot modify host files.
✅ No Network Access – `ramalama run` is executed with --network=none, meaning the model has no outbound connectivity through which information could be leaked.
✅ Auto-Cleanup – Containers run with --rm, wiping out any temporary data once the session ends.
✅ Drop All Linux Capabilities – All Linux capabilities are dropped, so a compromised process has no capabilities with which to attack the underlying host.
✅ No New Privileges – A Linux kernel feature that prevents container processes from gaining additional privileges.

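To make these protections concrete, here is a rough, hand-written sketch of the kind of container-engine options described above; the image name and model path are placeholders rather than RamaLama's actual defaults:

```bash
# Illustrative podman invocation approximating the protections listed above.
# The image name and the model path are placeholders, not RamaLama's real defaults.
podman run --rm \
    --network=none \
    --cap-drop=all \
    --security-opt=no-new-privileges \
    -v "$HOME/models/model.gguf:/model.gguf:ro" \
    quay.io/example/llama-server:latest

# --rm                : temporary data is wiped when the container exits
# --network=none      : the model has no outbound connectivity
# --cap-drop=all      : no Linux capabilities are available to attack the host
# no-new-privileges   : processes cannot gain additional privileges
# ":ro" on the volume : the model cannot be modified from inside the container
```
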
## TRANSPORTS

RamaLama supports multiple AI model registry types called transports.
Supported transports:

| Transports | Web Site |
| ------------- | --------------------------------------------------- |
| HuggingFace | [`huggingface.co`](https://www.huggingface.co) |
26 changes: 25 additions & 1 deletion docs/ramalama.1.md
@@ -29,10 +29,25 @@ used within the VM.

Default settings for flags are defined in `ramalama.conf(5)`.

RamaLama supports multiple AI model registry types called transports. Supported transports:
## SECURITY

### Test and run your models more securely

RamaLama defaults to running AI models inside of rootless containers using Podman or Docker. These containers isolate the AI models from information on the underlying host. With RamaLama containers, the AI model is mounted as a volume into the container in read/only mode. This results in the process running the model, llama.cpp or vLLM, being isolated from the host. In addition, since `ramalama run` uses the --network=none option, the container cannot reach the network and leak information out of the system. Finally, containers are run with the --rm option, which means that any content written during the running of the container is wiped out when the application exits.

Collaborator: read/only typo again

Member Author: What is the typo?

Collaborator: I'm wrong, sorry. I'd never seen read/only written with a slash before; it's just a way of writing it I'm unaware of. I'm used to seeing read-only or readonly. I have seen r/o before though once or twice 😄


### Here’s how RamaLama delivers a robust security footprint:

✅ Container Isolation – AI models run within isolated containers, preventing direct access to the host system.
✅ Read-Only Volume Mounts – The AI model is mounted in read-only mode, meaning that processes inside the container cannot modify host files.
✅ No Network Access – `ramalama run` is executed with --network=none, meaning the model has no outbound connectivity through which information could be leaked.
✅ Auto-Cleanup – Containers run with --rm, wiping out any temporary data once the session ends.
✅ Drop All Linux Capabilities – All Linux capabilities are dropped, so a compromised process has no capabilities with which to attack the underlying host.
✅ No New Privileges – A Linux kernel feature that prevents container processes from gaining additional privileges.

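A hedged way to spot-check these properties on a running RamaLama container is to inspect it with the container engine itself. The snippet below assumes Podman and assumes the most recently started container is the one RamaLama launched; exact inspect field paths can vary between engine versions:

```bash
# Grab the ID of the most recently started container (assumed here to be the
# one RamaLama launched); --latest and -q are standard podman-ps flags.
CTR=$(podman ps --latest -q)

# Confirm the container was created with no network and with all capabilities
# dropped. The Go-template field names follow Podman's Docker-compatible
# inspect output, though exact paths may differ across versions.
podman inspect "$CTR" --format 'network={{.HostConfig.NetworkMode}} capdrop={{.HostConfig.CapDrop}}'

# Confirm the model mount is read-only (the Mounts entry should report RW=false).
podman inspect "$CTR" --format '{{json .Mounts}}'
```
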
## MODEL TRANSPORTS

RamaLama supports multiple AI model registry types called transports. Supported transports:

| Transports | Prefix | Web Site |
| ------------- | ------ | --------------------------------------------------- |
| URL based | https://, http://, file:// | `https://web.site/ai.model`, `file://tmp/ai.model`|
@@ -156,6 +171,15 @@ show RamaLama version

## CONFIGURATION FILES

**ramalama.conf** (`/usr/share/ramalama/ramalama.conf`, `/etc/ramalama/ramalama.conf`, `$HOME/.config/ramalama/ramalama.conf`)

RamaLama has built-in defaults for command line options. These defaults can be overridden using the ramalama.conf configuration files.

Distributions ship the `/usr/share/ramalama/ramalama.conf` file with their default settings. Administrators can override fields in this file by creating the `/etc/ramalama/ramalama.conf` file. Users can further modify defaults by creating the `$HOME/.config/ramalama/ramalama.conf` file. RamaLama merges its built-in defaults with the specified fields from these files, if they exist. Fields specified in the user's file override the administrator's file, which overrides the distribution's file, which overrides the built-in defaults.

RamaLama uses built-in defaults if no ramalama.conf file is found.

If the **RAMALAMA_CONFIG** environment variable is set, then its value is used for the ramalama.conf file rather than the default.
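
As a quick illustration of this override mechanism, a user could point RAMALAMA_CONFIG at a throwaway file. The `[ramalama]` table and the `engine` key below are assumptions about the file format, so consult ramalama.conf(5) for the real key names:

```bash
# Write a minimal, throwaway configuration file.
# The [ramalama] table and the "engine" key are illustrative guesses;
# see ramalama.conf(5) for the authoritative format and key names.
cat > /tmp/test-ramalama.conf <<'EOF'
[ramalama]
engine = "docker"
EOF

# Point RamaLama at this file instead of the default search path.
RAMALAMA_CONFIG=/tmp/test-ramalama.conf ramalama version
```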

## SEE ALSO
**[podman(1)](https://github.com/containers/podman/blob/main/docs/podman.1.md)**, **docker(1)**, **[ramalama.conf(5)](ramalama.conf.5.md)**