Distributed task runner for your home network. Single binary, mTLS by default, single-file worker enrollment.
Hearth lets one host queue heavy background jobs (image-to-PDF, video transcoding, batch ML inference, …) and have other home machines pick them up over the LAN. Coordinator and worker can run on any OS — Mac, Windows, Linux, NAS — in any combination. Nothing leaves your network.
Status: alpha. Wire protocol, public API, and bundle format may still change. Pin a commit if you depend on it.
| Term | What it is |
|---|---|
| Coordinator | Single host that owns the queue, stores blobs, and serves the gRPC API. |
| Worker | Any host that pulls jobs of one or more kinds and runs your Handler. |
| Handler | A user-implemented Go interface (pkg/worker.Handler) that does the work. |
| Bundle | A single .hearth file (~1 KB) carrying CA cert, client cert + key, coordinator address. |
Grab a binary from the latest release or go build ./cmd/hearth. Then:
```sh
# On the always-on host (auto-creates CA + admin bundle on first run):
hearth coordinator

# Submit a job from the same host (CLI auto-finds the admin bundle):
hearth submit --kind echo --payload "hi"
hearth status
```

To enroll a worker:

```sh
# On the coordinator:
hearth enroll --addr <coord-ip>:7843 my-laptop   # → my-laptop.hearth (~1 KB)

# Move my-laptop.hearth to the worker (USB, scp, SD card — your choice).
# On the worker, with your own handler-bearing binary (see below):
my-worker my-laptop.hearth
```

The OSS `hearth worker` command can also load a bundle for connectivity testing, but it ships with no handlers — to actually process jobs you build your own binary that imports `pkg/runner`.
Full guide: docs/USAGE.md
External projects depend on exactly four packages:
- `github.com/notpop/hearth/pkg/worker` — the `Handler` interface (worker side)
- `github.com/notpop/hearth/pkg/runner` — `RunWorker` (or `Run` for more control)
- `github.com/notpop/hearth/pkg/client` — programmatic submission (web app, bot, …)
- `github.com/notpop/hearth/pkg/job` — domain types shared by all of the above
A complete worker is ~15 lines:
```go
package main

import (
	"context"
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/notpop/hearth/pkg/runner"
	"github.com/notpop/hearth/pkg/worker"
)

type myHandler struct{}

func (myHandler) Kind() string { return "my-task" }

func (myHandler) Handle(ctx context.Context, in worker.Input) (worker.Output, error) {
	return worker.Output{Payload: []byte("done")}, nil
}

func main() {
	if len(os.Args) != 2 {
		log.Fatal("usage: my-worker <bundle.hearth>")
	}
	ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer cancel()
	if err := runner.RunWorker(ctx, os.Args[1], myHandler{}); err != nil {
		log.Fatal(err)
	}
}
```

`runner.RunWorker` reads the bundle, dials the coordinator over mTLS, registers the worker, and runs the handler loop until the context is cancelled. The polished example with a real handler lives at `examples/img2pdf/cmd/img2pdf-worker/main.go`.
- Idempotent. Hearth may re-deliver a job after a crash or lease expiry.
- Honour `ctx`. Cancellation fires when the lease is lost or the coordinator asks the worker to abandon. Stop work and return.
- Return errors to retry. Hearth applies the job's backoff policy until `MaxAttempts`.
- Use blobs for big payloads. `worker.OutputBlob{Reader: ...}` — the runtime CAS-stores it and surfaces just the SHA-256.
```sh
hearth coordinator [--listen ...] [--data ...] [--ca ...] [--mdns]
hearth enroll [--addr <host:port>] [--out <path>] [--validity <dur>] <name>
hearth submit [--bundle <path>] --kind <k> [--payload <s>] [--blob <path> ...]
hearth status [--bundle <path>] [--job <id>] [--watch] [--limit N]
hearth cancel [--bundle <path>] <job-id>
hearth nodes [--bundle <path>]
hearth ca init [--dir <path>] [--name <cn>]
hearth worker --bundle <path>   # connectivity check only — no handlers
hearth version
```
The CLI commands that take `--bundle` fall back, in order, to `$HEARTH_BUNDLE`, `./.hearth/admin.hearth`, and `~/.hearth/admin.hearth`, so you can omit the flag when running on the coordinator host.
Hearth lives entirely on your LAN. mTLS-authenticated gRPC is the only wire protocol; mDNS is used for zero-config discovery (`_hearth._tcp.local`).
A typical home router bridges WiFi and Ethernet at L2, so a Mac on WiFi and a PC on a wired LAN talk to the coordinator without any extra setup. mDNS multicast usually reaches both interfaces too. If your router enables AP isolation or a segregated guest network, mDNS may not reach across — fall back to passing an explicit IP with `hearth enroll --addr <ip>:7843`, which embeds it in the bundle.
Tasks are exposed as Nix flake apps. Inside `nix develop` (or after `direnv allow`) every task is also defined as a plain shell function — no `nix run .#` ceremony needed for interactive use:
```sh
nix develop                 # enter the dev shell (or rely on direnv)
build                       # ./bin/hearth
test                        # full suite (~10 s)
test-race
cover                       # per-package coverage
vet
lint                        # vet + staticcheck
proto                       # regenerate gRPC stubs
release-build v0.2.1-alpha

# CLI wrappers
ca-init
enroll --addr <ip>:7843 my-worker
coordinator
```

For non-interactive use (CI, scripts), invoke as flake apps — the `#` is fixed Nix flake syntax for selecting an attribute:
```sh
nix run .#build
nix run .#test
nix run .#release-build -- v0.2.1-alpha
nix run .#enroll -- --addr <ip>:7843 my-worker
```

Other Nix flakes can consume Hearth as a build input:

```nix
inputs.hearth.url = "github:notpop/hearth";
# then in your devShell: hearth.packages.${system}.default
```

`pkg/` is the public API surface. Everything else lives under `internal/` and may change.
MIT © 2026 notpop