fix(service): route engine creation through create_engine factory #234
Open
mountarreat wants to merge 1 commit into nikopueringer:main from
Conversation
`backend/service.py::_get_engine` constructed `CorridorKeyEngine`
directly, bypassing the `CorridorKeyModule.backend.create_engine`
factory that the CLI path already uses. The service and CLI had
two parallel construction paths, so any future change to engine
setup had to be applied twice (or silently diverge between CLI
and service).
This commit routes the service through the factory. To keep the
change a pure architectural refactor with no user-observable
behavior change, model precision is made explicit at both call
sites:
- The factory gains a `model_precision: torch.dtype = torch.float16`
parameter in `CorridorKeyModule/backend.py`, forwarded to
CorridorKeyEngine in the Torch branch. The default matches the
value previously hardcoded inside the factory, so the existing
CLI call site in `clip_manager.py:633` is unchanged: it continues
to pass no `model_precision` kwarg and gets FP16 (see the sketch
after this list).
- `_get_engine` in `backend/service.py` now calls
`create_engine(backend="torch", device=self._device,
img_size=2048, model_precision=torch.float32)`. Explicit
`backend="torch"` preserves the service's current behavior of
using Torch on every platform rather than auto-picking MLX on
Apple Silicon. Explicit `model_precision=torch.float32`
preserves the service's current behavior of relying on
CorridorKeyEngine's own default (FP32 weights plus
mixed_precision autocast for speed on safe ops, see
`inference_engine.py:59-81`), which was previously reached
implicitly by omitting the kwarg.
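For concreteness, a minimal sketch of the two call sites after this
change. This is not the real code: the import path for
`CorridorKeyEngine`, the factory's other parameters and defaults, and
the checkpoint handling are placeholders.

```python
# CorridorKeyModule/backend.py -- sketch only; checkpoint resolution, the MLX
# branch, and the factory's other parameters/defaults are placeholders.
import torch

from CorridorKeyModule.inference_engine import CorridorKeyEngine  # assumed import path


def create_engine(
    backend: str = "auto",
    device: str = "cpu",
    img_size: int = 1024,
    model_precision: torch.dtype = torch.float16,  # new kwarg; default keeps the CLI on FP16
):
    checkpoint = ...  # checkpoint discovery elided
    if backend == "torch":
        # Torch branch: forward the requested precision to the engine.
        return CorridorKeyEngine(
            checkpoint,
            device=device,
            img_size=img_size,
            model_precision=model_precision,
        )
    ...  # MLX branch / auto-pick elided


# backend/service.py::_get_engine -- excerpt of the new call inside the method
engine = create_engine(
    backend="torch",                # Torch on every platform, no MLX auto-pick
    device=self._device,
    img_size=2048,
    model_precision=torch.float32,  # matches the engine's previous implicit default
)
```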
The CLI and service therefore continue to run at different
precisions (FP16 vs FP32) as they have been. This is now
documented intent at the call sites instead of an implicit
divergence caused by the service bypassing the factory. Whether
the two paths should be unified on a single precision is a
separate design question for upstream; this PR deliberately does
not make that call.
The old "Loading checkpoint: <name>" info log inside `_get_engine`
is dropped: `create_engine` already logs the checkpoint name plus
the device ("Torch engine loaded: <name> (device=<device>)"),
which is strictly more informative.
Regression coverage in `tests/test_backend.py`:
- `TestCreateEngineModelPrecision`: asserts the factory exposes
`model_precision`, defaults to `torch.float16`, and forwards the
requested value to CorridorKeyEngine (verified by patching
CorridorKeyEngine and inspecting the kwarg).
- `TestServiceEngineRouting`: a bare `CorridorKeyService()` calls
`_get_engine()` and that call routes through `create_engine`
with `backend="torch"`, `device="cpu"` (the service default),
`img_size=2048`, and `model_precision=torch.float32`.
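A condensed sketch of the patch-and-inspect approach those tests use.
Exact test names, fixtures, and patch targets in
`tests/test_backend.py` may differ; it assumes `create_engine` and
`CorridorKeyEngine` are reachable via `CorridorKeyModule.backend` and
that `backend/service.py` imports `create_engine` into its own
namespace.

```python
# tests/test_backend.py -- condensed sketch; names and patch targets are assumptions
from unittest.mock import patch

import torch

from CorridorKeyModule import backend


def test_model_precision_forwarded_to_engine():
    # Patch the engine class so no weights load; only the forwarded kwarg matters.
    # (In practice checkpoint discovery may also need stubbing to avoid any download.)
    with patch.object(backend, "CorridorKeyEngine") as engine_cls:
        backend.create_engine(backend="torch", model_precision=torch.bfloat16)
    assert engine_cls.call_args.kwargs["model_precision"] is torch.bfloat16


def test_service_routes_through_factory():
    from backend.service import CorridorKeyService

    service = CorridorKeyService()
    # Patch the factory as seen from the service module and inspect the routing.
    with patch("backend.service.create_engine") as factory:
        service._get_engine()
    assert factory.call_args.kwargs == {
        "backend": "torch",
        "device": "cpu",            # the service default
        "img_size": 2048,
        "model_precision": torch.float32,
    }
```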
TDD: all four tests were written first and observed failing on
`upstream/main` before the fix. One informative failure: the
service-routing test saw `_get_engine` reach `_discover_checkpoint`
and attempt a real HuggingFace download, confirming the factory
was being bypassed.
Verified locally on Windows with Python 3.13.11:
- uv run ruff format --check: clean
- uv run ruff check: clean
- uv run pytest -q --tb=short -m "not gpu": 367 passed, 1 skipped
(MLX), 4 deselected (GPU), 2 warnings. The +4 passes vs
upstream's 363 are the 4 new regression tests. Both warnings are
pre-existing on upstream/main (pytest unknown-option `env`
warning addressed on a separate branch, plus a torch autocast
warning on a no-CUDA machine). Neither is introduced by this
change.
- uv run corridorkey --help: exits 0.
What does this change?
`backend/service.py::_get_engine` constructed `CorridorKeyEngine`
directly, bypassing the `CorridorKeyModule.backend.create_engine`
factory that the CLI path already uses. That left the repo with two
parallel construction paths for the same engine.
This PR routes the service through the factory. To keep the
change a pure architectural refactor with no user-observable
behavior change, model precision is made explicit at both call
sites:
- `create_engine` in `CorridorKeyModule/backend.py` gains a
  `model_precision: torch.dtype = torch.float16` parameter, forwarded
  to `CorridorKeyEngine` in the Torch branch. The default matches the
  value previously hardcoded inside the factory, so the CLI call site
  in `clip_manager.py:633` is unchanged.
- `_get_engine` in `backend/service.py` now calls
  `create_engine(backend="torch", device=self._device, img_size=2048,
  model_precision=torch.float32)`. Explicit `backend="torch"`
  preserves current service behavior (Torch on every platform, no MLX
  auto-pick). Explicit `model_precision=torch.float32` preserves the
  service's existing precision (FP32 weights plus
  `mixed_precision=True` autocast, reached implicitly before by
  omitting the kwarg and falling through to `CorridorKeyEngine`'s own
  default at `inference_engine.py:60`).

The CLI and service therefore continue to run at different
precisions (FP16 vs FP32) as they have been. This is now
documented intent at the call sites instead of an implicit
divergence from the factory bypass. Whether to unify the two
paths on a single precision is a separate design call.
The old "Loading checkpoint: " info log in
_get_engineis dropped because
create_enginealready logs the checkpointname plus device.
How was it tested?
TDD: four new regression tests were added in
tests/test_backend.pyand observed failing onmainbeforethe fix.
TestCreateEngineModelPrecision(3 tests): the factoryexposes
model_precision, defaults totorch.float16, andforwards the requested value to
CorridorKeyEngine(patchedand inspected).
TestServiceEngineRouting(1 test): a bareCorridorKeyService()calls_get_engine()and the callroutes through
create_enginewithbackend="torch",device="cpu",img_size=2048, andmodel_precision=torch.float32.One informative pre-fix failure: the service-routing test saw
_get_enginereach_discover_checkpointand attempt a realHuggingFace download, confirming the factory was being bypassed.
Verified locally on Windows with Python 3.13.11:
- `uv run ruff format --check`: clean
- `uv run ruff check`: clean
- `uv run pytest -q --tb=short -m "not gpu"`: 367 passed, 1 skipped
  (MLX), 4 deselected (GPU). The +4 passes vs upstream's 363 are the
  4 new regression tests.
- `uv run corridorkey --help`: exits 0.

The two warnings (`PytestConfigWarning: Unknown config option: env`
and a torch autocast warning on a no-CUDA machine) are both
pre-existing on `main` and not introduced by this change.
Checklist
- `uv run pytest` passes
- `uv run ruff check` passes
- `uv run ruff format --check` passes