[harness] nikopueringer/CorridorKey#234: fix(service): route engine creation through create_engine factory #1
Open
joshhickson wants to merge 1 commit into harness-base-207b879359 from
Conversation
`backend/service.py::_get_engine` constructed `CorridorKeyEngine`
directly, bypassing the `CorridorKeyModule.backend.create_engine`
factory that the CLI path already uses. The service and CLI had
two parallel construction paths, so any future change to engine
setup had to be applied twice (or silently diverge between CLI
and service).
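The pre-fix shape of that divergence can be sketched as follows. This is a minimal stand-in, not the real CorridorKey code: `Engine`, `create_engine`, and `Service` here are illustrative, and the bodies are placeholders.

```python
# Minimal sketch of the pre-fix divergence. Engine, create_engine, and
# Service are illustrative stand-ins, not the real CorridorKey code.

class Engine:
    def __init__(self, device, img_size):
        self.device = device
        self.img_size = img_size

def create_engine(device, img_size=2048):
    # CLI path: construction goes through the factory.
    return Engine(device=device, img_size=img_size)

class Service:
    def _get_engine(self):
        # Service path, pre-fix: direct construction bypasses the factory,
        # so any kwarg later added to create_engine never reaches this site.
        return Engine(device="cpu", img_size=2048)

engine = Service()._get_engine()
```

Any new construction-time option added to the factory silently fails to apply on the service path, which is the maintenance hazard this PR removes.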
This commit routes the service through the factory. To keep the
change a pure architectural refactor with no user-observable
behavior change, model precision is made explicit at both call
sites:
- The factory gains a `model_precision: torch.dtype = torch.float16`
parameter in `CorridorKeyModule/backend.py`, forwarded to
CorridorKeyEngine in the Torch branch. The default matches the
value previously hardcoded inside the factory, so the existing
CLI call site in `clip_manager.py:633` is unchanged: it continues
to pass no `model_precision` kwarg and gets FP16.
- `_get_engine` in `backend/service.py` now calls
`create_engine(backend="torch", device=self._device,
img_size=2048, model_precision=torch.float32)`. Explicit
`backend="torch"` preserves the service's current behavior of
using Torch on every platform rather than auto-picking MLX on
Apple Silicon. Explicit `model_precision=torch.float32`
preserves the service's current behavior of relying on
CorridorKeyEngine's own default (FP32 weights plus
mixed_precision autocast for speed on safe ops, see
`inference_engine.py:59-81`), which was previously reached
implicitly by omitting the kwarg.
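Under those assumptions, the two call sites can be sketched like this. String sentinels stand in for `torch.float16`/`torch.float32` so the sketch runs without torch installed, and every name not quoted from the PR body is hypothetical.

```python
# Illustrative stand-ins for torch.float16 / torch.float32.
FLOAT16, FLOAT32 = "float16", "float32"

class CorridorKeyEngine:
    def __init__(self, device, img_size, model_precision):
        self.device = device
        self.img_size = img_size
        self.model_precision = model_precision

def create_engine(backend="torch", device="cpu", img_size=2048,
                  model_precision=FLOAT16):
    # Torch branch: forward the precision to the engine. The FP16 default
    # matches the value previously hardcoded inside the factory, so the
    # existing CLI call site needs no change.
    if backend == "torch":
        return CorridorKeyEngine(device=device, img_size=img_size,
                                 model_precision=model_precision)
    raise ValueError(f"unknown backend: {backend!r}")

# CLI path: passes no model_precision kwarg, gets FP16 (unchanged behavior).
cli_engine = create_engine(backend="torch", device="cuda")

# Service path: backend and precision are now explicit rather than implicit.
svc_engine = create_engine(backend="torch", device="cpu",
                           img_size=2048, model_precision=FLOAT32)

print(cli_engine.model_precision, svc_engine.model_precision)  # float16 float32
```

The point of the sketch is that both paths now flow through one constructor, with the precision difference visible at the call sites rather than buried in which path omits a kwarg.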
The CLI and service therefore continue to run at different
precisions (FP16 vs FP32) as they have been. This is now
documented intent at the call sites instead of an implicit
divergence caused by the service bypassing the factory. Whether
the two paths should be unified on a single precision is a
separate design question for upstream; this PR deliberately does
not make that call.
The old "Loading checkpoint: <name>" info log inside `_get_engine`
is dropped: `create_engine` already logs the checkpoint name plus
the device ("Torch engine loaded: <name> (device=<device>)"),
which is strictly more informative.
Regression coverage in `tests/test_backend.py`:
- `TestCreateEngineModelPrecision`: asserts that the factory exposes a
  `model_precision` parameter, that it defaults to `torch.float16`, and
  that the requested value is forwarded to CorridorKeyEngine (verified
  by patching CorridorKeyEngine and inspecting the kwarg).
- `TestServiceEngineRouting`: asserts that a bare `CorridorKeyService()`
  calling `_get_engine()` routes through `create_engine` with
  `backend="torch"`, `device="cpu"` (the service default),
  `img_size=2048`, and `model_precision=torch.float32`.
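The forwarding assertion in the first test class can be sketched roughly as follows. This is a self-contained stand-in using `unittest.mock`: the module object and factory below are hypothetical, whereas the real tests patch `CorridorKeyEngine` inside `CorridorKeyModule.backend`.

```python
import types
from unittest import mock

# Hypothetical stand-in for CorridorKeyModule.backend as a module object.
backend_mod = types.ModuleType("backend_stub")

class CorridorKeyEngine:
    def __init__(self, **kwargs):
        self.kwargs = kwargs

backend_mod.CorridorKeyEngine = CorridorKeyEngine

def create_engine(backend="torch", device="cpu", img_size=2048,
                  model_precision="float16"):
    # Factory under test: forwards model_precision to the engine class.
    return backend_mod.CorridorKeyEngine(device=device, img_size=img_size,
                                         model_precision=model_precision)

# Patch the engine class and inspect the kwarg the factory forwards.
with mock.patch.object(backend_mod, "CorridorKeyEngine") as fake_engine:
    create_engine(model_precision="float32")
    forwarded = fake_engine.call_args.kwargs["model_precision"]

with mock.patch.object(backend_mod, "CorridorKeyEngine") as fake_engine:
    create_engine()  # omit the kwarg: the FP16 default must be forwarded
    default = fake_engine.call_args.kwargs["model_precision"]

print(forwarded, default)  # float32 float16
```

Patching the engine class rather than instantiating it keeps the test fast and avoids touching checkpoints, which is what makes the bypass failure mode (the real `_discover_checkpoint` download) observable.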
TDD: all four tests were written first and observed failing on
`upstream/main` before the fix. One informative failure: the
service-routing test saw `_get_engine` reach `_discover_checkpoint`
and attempt a real HuggingFace download, confirming the factory
was being bypassed.
Verified locally on Windows with Python 3.13.11:
- uv run ruff format --check: clean
- uv run ruff check: clean
- uv run pytest -q --tb=short -m "not gpu": 367 passed, 1 skipped
(MLX), 4 deselected (GPU), 2 warnings. The +4 passes vs
upstream's 363 are the 4 new regression tests. Both warnings are
pre-existing on upstream/main (pytest unknown-option `env`
warning addressed on a separate branch, plus a torch autocast
warning on a no-CUDA machine). Neither is introduced by this
change.
- uv run corridorkey --help: exits 0.
Automated replay of nikopueringer#234
Base sha: 207b879359d2d2c1d48ee17ffef6125b0660c142
Head sha: 7bd904788f07d2e56d6dcdc004664e83bdc7270f
This PR exists only to exercise the deployed LogoMesh App. Do not merge.