
provides binary/installer to ease the installation/onboarding of ramalama #812

Open
benoitf opened this issue Feb 13, 2025 · 6 comments
@benoitf (Contributor) commented Feb 13, 2025

Proposal: Provide Self-Contained Installers for Windows and macOS

Problem

Currently, Python is not installed by default on Windows and macOS. Since the ramalama package requires a Python runtime, installation is more complex than it would be with a self-contained binary.

  • The package is available on PyPI, but users need a proper Python installation.
  • Homebrew can be used on macOS, but not all users have it installed.
  • Windows users need to install Python separately before using the package.

Suggested Solution

To improve accessibility, I am thinking of:

  1. A .pkg installer for macOS and an .exe installer for Windows
  2. A self-contained binary for Windows and macOS (with a potential startup delay due to unpacking) or a directory to unpack

Potential Approach

It seems that PyInstaller can generate these self-contained packages out of the box. Using it to create platform-specific installers might simplify installation and adoption.
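To illustrate the PyInstaller approach, here is a minimal sketch of the runtime side of a one-file build. The build command in the comment and the `bin/ramalama` entry-point path are assumptions, not the project's actual layout; `sys.frozen` and `sys._MEIPASS` are the standard attributes PyInstaller sets in a frozen app.

```python
# Typical one-file build (entry-point path is an assumption):
#   pyinstaller --onefile --name ramalama bin/ramalama
import sys
from pathlib import Path

def bundle_base_dir() -> Path:
    """Directory holding bundled data files.

    PyInstaller one-file builds unpack themselves into a temporary
    directory exposed as sys._MEIPASS; outside a frozen build we
    fall back to the current working directory.
    """
    if getattr(sys, "frozen", False):
        # Running from a PyInstaller bundle.
        return Path(getattr(sys, "_MEIPASS", Path(sys.executable).parent))
    return Path.cwd()
```

A check like this lets the same code locate its data files whether it runs from source, from a wheel, or from the self-contained binary.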

Benefits

  • Easier installation process without requiring users to set up Python manually
  • Broader accessibility for non-developer users
  • Reduces friction in adoption
@rhatdan (Member) commented Feb 13, 2025

@lsm5 I wonder if this is something we could execute via GitHub Actions when we generate a release?
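For reference, a release-triggered build could be sketched roughly like the workflow below. This is a hypothetical fragment: the workflow name, the `bin/ramalama` entry point, and the Python version are assumptions, though the actions themselves (`checkout`, `setup-python`, `upload-artifact`) are standard.

```yaml
# Hypothetical sketch: build self-contained binaries on tagged releases.
name: build-installers
on:
  release:
    types: [published]
jobs:
  pyinstaller:
    strategy:
      matrix:
        os: [macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pyinstaller .
      - run: pyinstaller --onefile --name ramalama bin/ramalama  # entry point assumed
      - uses: actions/upload-artifact@v4
        with:
          name: ramalama-${{ matrix.os }}
          path: dist/*
```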

@ericcurtin (Collaborator) commented Feb 13, 2025

For macOS, this is mainly a packaging effort.

Could we consider running RamaLama inside podman-machine, or WSL2 on Windows? Porting it to run natively on Windows will be a significant effort.

Note that if we run RamaLama directly on Windows and/or macOS, we lose all the container features of RamaLama, which are a key goal of RamaLama and Podman Desktop.

@benoitf (Contributor, Author) commented Feb 14, 2025

> Note if we run RamaLama directly on Windows and/or macOS you lose all the container features of RamaLama, which is kind of a key goal of RamaLama and Podman Desktop.

Hello, I'm not sure I follow. Isn't this only a packaging matter? The Python runtime would be included in the bundle, so I don't see why any of the container features wouldn't work.

@ericcurtin (Collaborator) commented Feb 14, 2025

> Note if we run RamaLama directly on Windows and/or macOS you lose all the container features of RamaLama, which is kind of a key goal of RamaLama and Podman Desktop.

> Hello, I'm not sure I follow. Isn't this only a packaging matter? The Python runtime would be included, so I don't see why any of the container features wouldn't work.

Because containers don't exist natively on Windows or macOS. But if you run RamaLama inside a Linux VM such as podman-machine or WSL2 (WSL2 should already have the GPU passthrough needed on Windows), you are in a Linux environment where you can run containers.

@benoitf (Contributor, Author) commented Feb 14, 2025

> Because containers don't exist natively on Windows or macOS. But if you run RamaLama inside a Linux VM such as podman-machine or WSL2 (WSL2 should already have the GPU passthrough needed on Windows), you are in a Linux environment where you can run containers.

If you have a podman machine on macOS or Windows, you also have the podman CLI on your host (which is a podman-remote), so the podman command you run from the host is what launches the container. Why would you go inside the podman machine to run the command when it works from the host?

@ericcurtin (Collaborator) commented Feb 14, 2025

> Because containers don't exist natively on Windows or macOS. But if you run RamaLama inside a Linux VM such as podman-machine or WSL2 (WSL2 should already have the GPU passthrough needed on Windows), you are in a Linux environment where you can run containers.

> If you have a podman machine on macOS or Windows, you also have the podman CLI on your host (which is a podman-remote), so the podman command you run from the host is what launches the container. Why would you go inside the podman machine to run the command when it works from the host?

Because RamaLama makes all sorts of assumptions that the base OS is Unix-like, which is a fair assumption for a container-oriented tool...

It's not only a packaging thing for Windows: if you try to execute RamaLama on Windows, it will fail in multiple ways.

You also don't need to package python3 when running inside podman-machine, as you can depend on the Linux distro's packaging.
