Running Agents as Containers #428
Replies: 3 comments 2 replies
-
My opinions:
- Security: blindly running external code is the de-facto standard nowadays (…).
- Ease of use: Working in a container environment is a bit harder for agent creators (they need to write a Dockerfile, though maybe we can generate it for common cases?), but potentially simpler for agent users -- the only system dependency is a container runtime. As Matouš mentions, we may even set up a VM through Lima (which is a static binary) and run a container runtime there, avoiding any setup steps for the user, but at the cost of running another VM if the user already has one through Rancher Desktop / Colima / etc. Or we could support both cases and detect whether a viable container runtime is already present on the machine (see the sketch after this comment).
- Kubernetes or plain containers?: The platform manages the lifecycles of agents, potentially scaling them to zero when not needed. This can be done either over the container runtime directly, or we can adopt Kubernetes, which already has a lot of related functionality. Adopting Kubernetes could also mean simpler daemon management (OpenTelemetry, Arize Phoenix, etc.), better potential for "scaling up", and better manifest management since …
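If we go the detection route, the check can stay small. A minimal sketch in Python (the candidate-runtime list, its ordering, and the Lima fallback message are assumptions for illustration, not a product decision) that looks for an already-working container runtime before bootstrapping one:

```python
import shutil
import subprocess

# Candidate CLIs, in a rough (assumed) order of preference.
CANDIDATES = ["docker", "podman", "nerdctl"]

def find_container_runtime() -> str | None:
    """Return the first CLI that is installed *and* can talk to a daemon."""
    for cli in CANDIDATES:
        if shutil.which(cli) is None:
            continue
        try:
            # `<cli> info` fails when the daemon/VM behind the CLI is not running.
            subprocess.run(
                [cli, "info"],
                check=True,
                capture_output=True,
                timeout=10,
            )
            return cli
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
            continue
    return None

if __name__ == "__main__":
    runtime = find_container_runtime()
    if runtime:
        print(f"Reusing existing runtime: {runtime}")
    else:
        print("No usable runtime found; this is where we'd provision one via Lima.")
```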
-
I think running in containers makes a lot of sense. I also think that if we are ever planning to run the platform remotely (i.e. not just on the user's laptop), then running on Kubernetes is more or less non-negotiable. For running locally, I don't think it's any more difficult to use Docker [1] vs. Kubernetes. There are various options for running a local cluster (kind, minikube, k3s, ...), and a lot of the container runtimes have a Kubernetes option built in (Docker, Podman, Rancher, Colima). So basically, if you can run a container, you can run Kubernetes.

Also, in my experience working with folks who weren't primarily software developers, there isn't a substantial learning curve between the two options [2], especially if the right tooling is put in place, which we'll want to do anyway for our own sanity. Docker does provide some shortcuts that make getting running easier (especially around networking and exposing services), but that can be a double-edged sword: things that worked on your laptop may require substantial rewrites when you try to run them in your distributed environment. So IMO we may as well get them "right" the first time.

[1] Here referring to any way to run containers locally: Docker, Podman, etc.
[2] Particularly if it's just a matter of running existing Kubernetes resources. Creating Kubernetes resources is a little trickier.
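To make the "existing functionality" point concrete: the lifecycle management mentioned above (e.g. scaling an agent to zero when idle) is a one-call patch against a Deployment, and it looks identical whether the cluster underneath is kind, minikube, k3s, or a runtime's built-in option. A rough sketch with the official `kubernetes` Python client; the Deployment name and namespace are made up for illustration:

```python
from kubernetes import client, config

def scale_agent(deployment: str, replicas: int, namespace: str = "agents") -> None:
    """Scale an agent's Deployment up or down (0 = park it while idle)."""
    # Uses the local kubeconfig, so the same call works against any local
    # cluster flavor or a remote cluster later on.
    config.load_kube_config()
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# Example: park a hypothetical "summarizer-agent" until it is needed again.
if __name__ == "__main__":
    scale_agent("summarizer-agent", replicas=0)
```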
-
Curious why the WebAssembly component model wouldn't work here? In my view, agents should be lightweight and run anywhere without requiring a container runtime.
-
In Pre-alpha, agents are built similarly to Heroku buildpacks and run in a non-isolated environment. The lack of strict isolation poses some security challenges. To address this, we're currently thinking about moving toward a containerized solution. While Docker containers are a clear candidate, we're evaluating whether plain Docker containers alone would suffice, or whether adopting Kubernetes or offering lighter alternatives like Lima would be more advantageous. Please share your thoughts, recommendations, and experiences.
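On the "generate the Dockerfile for common cases" idea raised in the first comment: the generator doesn't have to be elaborate for a containerized build to stay as simple for agent creators as the current buildpack-style flow. A hedged sketch, where the base image, file names, and entrypoint convention are all assumptions about what a "common" Python agent looks like:

```python
from pathlib import Path

# Template for a conventional Python agent: dependencies in requirements.txt,
# entrypoint in agent.py. Both conventions are assumptions for this sketch.
DOCKERFILE_TEMPLATE = """\
FROM python:{python_version}-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "{entrypoint}"]
"""

def generate_dockerfile(agent_dir: str, python_version: str = "3.12",
                        entrypoint: str = "agent.py") -> Path:
    """Write a Dockerfile for a conventional Python agent and return its path."""
    path = Path(agent_dir) / "Dockerfile"
    path.write_text(
        DOCKERFILE_TEMPLATE.format(python_version=python_version,
                                   entrypoint=entrypoint)
    )
    return path
```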