
fix(service): route engine creation through create_engine factory#234

Open
mountarreat wants to merge 1 commit into nikopueringer:main from mountarreat:fix/service-create-engine

Conversation

@mountarreat
Contributor

What does this change?

`backend/service.py::_get_engine` constructed `CorridorKeyEngine`
directly, bypassing the `CorridorKeyModule.backend.create_engine`
factory that the CLI path already uses. That left the repo with
two parallel construction paths for the same engine, so any
future change to engine setup had to be applied twice or
silently diverge between CLI and service.

This PR routes the service through the factory. To keep the
change a pure architectural refactor with no user-observable
behavior change, model precision is made explicit at both call
sites:

- `create_engine` in `CorridorKeyModule/backend.py` gains a
  `model_precision: torch.dtype = torch.float16` parameter,
  forwarded to `CorridorKeyEngine` in the Torch branch. The
  default matches the value previously hardcoded inside the
  factory, so the CLI call site in `clip_manager.py:633` is
  unchanged: it continues to pass no `model_precision` kwarg
  and gets FP16.
- `_get_engine` in `backend/service.py` now calls
  `create_engine(backend="torch", device=self._device,
  img_size=2048, model_precision=torch.float32)`. Explicit
  `backend="torch"` preserves current service behavior (Torch on
  every platform, no MLX auto-pick on Apple Silicon). Explicit
  `model_precision=torch.float32` preserves the service's
  existing precision (FP32 weights plus `mixed_precision=True`
  autocast), previously reached implicitly by omitting the kwarg
  and falling through to `CorridorKeyEngine`'s own default at
  `inference_engine.py:60`.

The CLI and service therefore continue to run at different
precisions (FP16 vs FP32), as they have been. This is now
documented intent at the call sites instead of an implicit
divergence caused by the service bypassing the factory. Whether
to unify the two paths on a single precision is a separate
design call.
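The shape of the refactor can be sketched as below. This is a dependency-free illustration, not the real code: string dtypes stand in for `torch.float16`/`torch.float32`, `CorridorKeyEngine` is a stub, and the signatures mirror the description above rather than the actual modules.

```python
# Stand-ins for torch.float16 / torch.float32 so the sketch runs without torch.
FLOAT16, FLOAT32 = "float16", "float32"


class CorridorKeyEngine:
    """Stub engine; the real one lives in inference_engine.py."""

    def __init__(self, device, img_size, model_precision=FLOAT32):
        # FP32 default here is what the service used to reach implicitly.
        self.device = device
        self.img_size = img_size
        self.model_precision = model_precision


def create_engine(backend, device, img_size, model_precision=FLOAT16):
    """Single construction path. The FP16 default matches the value the
    factory previously hardcoded, so callers that omit the kwarg see no
    behavior change."""
    if backend == "torch":
        return CorridorKeyEngine(device, img_size, model_precision=model_precision)
    raise ValueError(f"unsupported backend: {backend}")


# CLI call site (clip_manager.py): omits the kwarg, keeps FP16.
cli_engine = create_engine(backend="torch", device="cuda", img_size=2048)

# Service call site (backend/service.py): both choices now explicit.
svc_engine = create_engine(
    backend="torch", device="cpu", img_size=2048, model_precision=FLOAT32
)
```

With both call sites spelled out this way, the FP16-vs-FP32 divergence is visible at a glance rather than hidden in a default.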

The old "Loading checkpoint: <name>" info log in `_get_engine`
is dropped because `create_engine` already logs the checkpoint
name plus the device.

How was it tested?

TDD: four new regression tests were added in
`tests/test_backend.py` and observed failing on `main` before
the fix.

- `TestCreateEngineModelPrecision` (3 tests): the factory
  exposes `model_precision`, defaults to `torch.float16`, and
  forwards the requested value to `CorridorKeyEngine` (verified
  by patching `CorridorKeyEngine` and inspecting the kwarg).
- `TestServiceEngineRouting` (1 test): a bare
  `CorridorKeyService()` calls `_get_engine()`, and that call
  routes through `create_engine` with `backend="torch"`,
  `device="cpu"` (the service default), `img_size=2048`, and
  `model_precision=torch.float32`.
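The forwarding check can be sketched as below. It is self-contained: `backend_mod` is a stub standing in for `CorridorKeyModule.backend`, and the names are illustrative, but the patch-and-inspect pattern is the one the description refers to.

```python
import types
from unittest import mock


def _make_backend_module():
    """Build a stub module mimicking CorridorKeyModule.backend."""
    mod = types.SimpleNamespace()

    class CorridorKeyEngine:  # stub engine, replaced by the mock below
        def __init__(self, **kwargs):
            self.kwargs = kwargs

    def create_engine(backend, device, img_size, model_precision="float16"):
        if backend != "torch":
            raise ValueError(backend)
        # Looked up through the module attribute so mock.patch.object works.
        return mod.CorridorKeyEngine(
            device=device, img_size=img_size, model_precision=model_precision
        )

    mod.CorridorKeyEngine = CorridorKeyEngine
    mod.create_engine = create_engine
    return mod


backend_mod = _make_backend_module()

with mock.patch.object(backend_mod, "CorridorKeyEngine") as engine_cls:
    backend_mod.create_engine(
        backend="torch", device="cpu", img_size=2048,
        model_precision="float32",  # torch.float32 in the real test
    )

# The factory must forward the requested precision untouched.
assert engine_cls.call_args.kwargs["model_precision"] == "float32"
```

Patching the engine class rather than instantiating it keeps the test free of checkpoint discovery and downloads, which is exactly what the pre-fix failure below demonstrated was missing.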

One informative pre-fix failure: the service-routing test saw
_get_engine reach _discover_checkpoint and attempt a real
HuggingFace download, confirming the factory was being bypassed.

Verified locally on Windows with Python 3.13.11:

- `uv run ruff format --check`: clean
- `uv run ruff check`: clean
- `uv run pytest -q --tb=short -m "not gpu"`: 367 passed, 1
  skipped (MLX), 4 deselected (GPU). The +4 passes vs upstream's
  363 are the 4 new regression tests.
- `uv run corridorkey --help`: exits 0
- The two warnings in the suite output (a `PytestConfigWarning:
  Unknown config option: env` and a torch autocast warning on a
  no-CUDA machine) are both pre-existing on main and not
  introduced by this change.

Checklist

- `uv run pytest` passes
- `uv run ruff check` passes
- `uv run ruff format --check` passes
