feat: enable thinking/reasoning toggle for ollama models #7508
Conversation
I need to fix those gui tests
Force-pushed from c153d7b to ad392a7
Just a few questions/nitpicks, thanks for the contribution @fbricon! Appreciate the detailed description and screen recording.
if ("anthropic" === model.underlyingProviderName) { | ||
return true; | ||
} |
I believe older models such as 3.5 Sonnet don't support reasoning?
Although I'm a little confused, this file is just for Ollama autodetect, no?
I just moved that existing check from gui/src/components/mainInput/InputToolbar.tsx.
And no, this file seems to handle different models from different providers.
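To make the 3.5 Sonnet concern concrete, a hypothetical tightening of the moved check could look like the sketch below; the function name and the model-name matching are illustrative only, not what this PR ships:

```ts
// Hypothetical refinement: treat Anthropic models as reasoning-capable
// except known older ones such as Claude 3.5 Sonnet. The string matching
// here is illustrative, not the PR's actual logic.
function supportsReasoning(model: {
  underlyingProviderName: string;
  model: string;
}): boolean {
  if (model.underlyingProviderName === "anthropic") {
    return !model.model.includes("3-5-sonnet");
  }
  return false;
}
```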
…update related usages Signed-off-by: Fred Bricon <[email protected]>
Force-pushed from ad392a7 to 14e84c9
Thanks for making those updates! Lmk if this is good to merge and I'll do so.
I can't test the Anthropic models, so if you can do that and I didn't break anything, then go ahead and merge it.
@fbricon I have verified this works with e.g. claude 4.1 opus locally. Appreciate the contribution!
…bricon/ollama-support-reasoning-flag
🎉 This PR is included in version 1.13.0 🎉
The release is available on:
Your semantic-release bot 📦🚀
This PR enables users to toggle thinking for Ollama models that support it. In this recording, you can see that qwen3's reasoning is not enabled by default, but setting reasoning: true in the defaultCompletionOptions allows it:
ollama-models-toggle-thinking.mp4
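For reference, a minimal config sketch (assuming Continue's YAML config format; the model entry below is illustrative):

```yaml
# Sketch: enable thinking by default for a local qwen3 model.
models:
  - name: Qwen3 8B
    provider: ollama
    model: qwen3:8b
    defaultCompletionOptions:
      reasoning: true
```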
Hardcoding reasoning capabilities for Ollama is tricky: e.g. qwen3:8b has reasoning capabilities, but think=true causes an Ollama error if the model version pulled into Ollama is too old. Pulling a newer version fixes it. A quick probe is sketched below.
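One way to check whether the locally pulled build accepts the flag is to hit Ollama's chat API directly (a sketch; the model tag and prompt are illustrative):

```bash
# Probe the local model with thinking enabled; builds that are too old
# return an error from Ollama instead of a response.
curl http://localhost:11434/api/chat -d '{
  "model": "qwen3:8b",
  "messages": [{ "role": "user", "content": "hi" }],
  "think": true
}'

# If that errors, re-pull the model to get a newer build.
ollama pull qwen3:8b
```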
Also worth noting that some models, like gpt-oss:20b, will just ignore the think: false flag passed to Ollama. It supports different levels of thinking, but they need to be set in the system prompt. See https://huggingface.co/openai/gpt-oss-120b#reasoning-levels
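Per that model card, the level goes into the system prompt rather than through Ollama's think flag; a minimal sketch of the messages array (the user prompt is illustrative):

```ts
// Sketch: gpt-oss reads its reasoning level ("low" / "medium" / "high")
// from the system prompt, per the model card linked above.
const messages = [
  { role: "system", content: "Reasoning: high" },
  { role: "user", content: "Explain the birthday paradox." },
];
```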
Summary by cubic
Adds a reasoning/thinking toggle for supported Ollama models. Auto-detects support, shows the toggle in the UI, and sends think to Ollama; docs and schema updated.
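As a rough illustration of the last step, here is a hypothetical sketch of how the toggle could map onto the Ollama chat request body; the names CompletionOptions and buildOllamaChatBody are illustrative, not Continue's actual internals:

```ts
// Sketch: forward the GUI's reasoning toggle to Ollama as "think".
interface CompletionOptions {
  reasoning?: boolean;
}

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildOllamaChatBody(
  model: string,
  messages: ChatMessage[],
  options: CompletionOptions,
) {
  return {
    model,
    messages,
    // Only include "think" when the user has set the toggle, so model
    // builds that predate thinking support aren't sent an unknown field.
    ...(options.reasoning !== undefined ? { think: options.reasoning } : {}),
  };
}
```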