Add Security information to README.md #787
Conversation
Reviewer's Guide by Sourcery

This pull request enhances the documentation by adding a comprehensive SECURITY section to both the README and the man page (docs/ramalama.1.md). The added sections explain how RamaLama uses container-based isolation, read-only volume mounts, network restrictions, auto-cleanup, and Linux capability restrictions to secure AI model execution. Details about ramalama.conf configuration file usage and precedence are also included. No diagrams were generated, as the changes are simple and do not need a visual representation.
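The mechanisms listed in that summary correspond to ordinary container options. As a rough sketch only (the image name, model path, and serving command below are placeholders for illustration, not RamaLama's actual command line), the described isolation maps onto flags like these:

```bash
# Rough illustration of the isolation described above; the image name,
# model path, and serving command are placeholders, not RamaLama's own invocation.
#   --rm            remove the container, and anything it wrote, on exit
#   --network=none  no network access from inside the container
#   --cap-drop=all  drop all Linux capabilities
#   :ro             mount the model into the container read-only
podman run --rm --network=none --cap-drop=all \
  -v ~/models/mymodel.gguf:/mnt/models/model.file:ro \
  quay.io/ramalama/ramalama \
  llama-server --model /mnt/models/model.file
```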
Also add information about ramalama.conf to the ramalama.1 man page.

Signed-off-by: Daniel J Walsh <[email protected]>
### Test and run your models more securely

Because RamaLama defaults to running AI models inside of rootless containers using Podman on Docker. These containers isolate the AI models from information on the underlying host. With RamaLama containers, the AI model is mounted as a volume into the container in read/only mode. This results in the process running the model, llama.cpp or vLLM, being isolated from the host. In addition, since `ramalama run` uses the --network=none option, the container can not reach the network and leak any information out of the system. Finally, containers are run with --rm options which means that any content written during the running of the container is wiped out when the application exits.
"read/only" typo, can catch typos in follow on PRs though
### Test and run your models more securely

Because RamaLama defaults to running AI models inside of rootless containers using Podman on Docker. These containers isolate the AI models from information on the underlying host. With RamaLama containers, the AI model is mounted as a volume into the container in read/only mode. This results in the process running the model, llama.cpp or vLLM, being isolated from the host. In addition, since `ramalama run` uses the --network=none option, the container can not reach the network and leak any information out of the system. Finally, containers are run with --rm options which means that any content written during the running of the container is wiped out when the application exits.
read/only typo again
What is the typo?
I'm wrong, sorry.
I'd never seen read/only written with a slash before; it's just a way of writing it I'm unaware of. I'm used to seeing read-only or readonly.
I have seen r/o once or twice before though 😄
Also add information about ramalama.conf to the ramalama.1 man page.
Summary by Sourcery
Documentation: Add a SECURITY section to the README and the ramalama.1 man page describing container-based isolation, read-only model mounts, network restrictions, auto-cleanup, and Linux capability restrictions, and document ramalama.conf usage and precedence.