fix: HOTFIX prevent AUTODETECT model from being queried on Ollama #7477
bf8b4bd introduced a regression that makes Ollama.ts query /api/show for the AUTODETECT model. Since it is not a real model, the log gets littered with "model not found" errors.
While the AUTODETECT model itself is not a real model, models satisfying isFromAutoDetect are. Looking up fimSupport is probably not relevant for auto-detected models right now, but I'm considering applying the same sort of logic to detect whether thinking is supported, in which case /api/show would still need to be queried for those models. But that's another story.
cc @RomneyDa
Summary by cubic
Prevent Ollama from calling /api/show for the sentinel AUTODETECT model to stop “model not found” log spam. The constructor now early-returns only when options.model === "AUTODETECT", so real auto-detected models continue to be queried normally.
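A minimal sketch of the guard described above, assuming a simplified provider class; the constructor shape, the LLMOptions interface, and the fetchModelInfo helper are illustrative stand-ins, not the exact code in Ollama.ts:

```ts
// Sketch of the early-return guard. Only "AUTODETECT" and /api/show
// come from the PR; the surrounding structure is assumed.
interface LLMOptions {
  model: string;
  apiBase?: string;
}

class Ollama {
  model: string;
  apiBase: string;

  constructor(options: LLMOptions) {
    this.model = options.model;
    this.apiBase = options.apiBase ?? "http://localhost:11434";

    // The sentinel AUTODETECT is not a real model, so skip the metadata
    // lookup for it. Real models discovered via autodetection (those
    // satisfying isFromAutoDetect) do not match this literal comparison
    // and are still queried normally.
    if (options.model === "AUTODETECT") {
      return;
    }

    // Fire-and-forget capability lookup; failures are logged, not thrown.
    this.fetchModelInfo().catch((e) => console.warn(e));
  }

  private async fetchModelInfo(): Promise<void> {
    // /api/show returns metadata (e.g. whether FIM is supported) for a
    // real model; calling it with AUTODETECT is what produced the
    // "model not found" log spam.
    const resp = await fetch(`${this.apiBase}/api/show`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ name: this.model }),
    });
    if (!resp.ok) {
      throw new Error(
        `Ollama /api/show failed for ${this.model}: ${resp.status}`,
      );
    }
  }
}
```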