Thinking model disabled agent prefill #15404
base: master
Conversation
common/chat.cpp (outdated)

```diff
@@ -647,6 +647,8 @@ common_reasoning_format common_reasoning_format_from_name(const std::string & format) {
         return COMMON_REASONING_FORMAT_DEEPSEEK;
     } else if (format == "deepseek-legacy") {
         return COMMON_REASONING_FORMAT_DEEPSEEK_LEGACY;
+    } else if (format == "granite") {
```
I just noticed that this was missing, so figured I'd add it (not strictly required for this PR)
We would recommend using `auto` instead of adding a new type here (so the existing enum `COMMON_REASONING_FORMAT_GRANITE` should be removed altogether - it is not actively used anywhere in the code base).

`reasoning_format` is supposed to be used for selecting the output format, i.e. the JSON response format from server to client (for example, whether thinking text comes back in a separate `message.reasoning_content` field or stays inline in `message.content`). It should have no effect on the content itself.

We should better document this to prevent future contributors from thinking that `reasoning_format` means "template format", when it actually means "server response schema".
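To illustrate that distinction, here is a hedged sketch of how the enum's doc comments could spell out the schema-selection role directly (the comment wording is assumed, not the actual header; the values match the `common_reasoning_format` names discussed above):

```cpp
// Sketch only: this enum selects the server *response schema*,
// never the chat template behavior.
enum common_reasoning_format {
    COMMON_REASONING_FORMAT_NONE,            // leave reasoning inline in message.content
    COMMON_REASONING_FORMAT_AUTO,            // pick a sensible default (currently the deepseek schema)
    COMMON_REASONING_FORMAT_DEEPSEEK_LEGACY, // extract reasoning, then re-inject it inline in message.content
    COMMON_REASONING_FORMAT_DEEPSEEK,        // return reasoning in a separate message.reasoning_content field
    COMMON_REASONING_FORMAT_GRANITE,         // unused; the review above suggests removing it
};
```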
Thanks! I'll yank this out.

For the life of me, I couldn't find where `"auto"` was actually being used. I found several places where there was a special case for `_NONE` or `_DEEPSEEK_LEGACY`, but I couldn't find `_AUTO` anywhere other than being set as the default. Can you clarify how this gets used to auto-determine the right reasoning parser?
The long story is that the `message.reasoning_content` API response format was initially introduced by the DeepSeek hosted API, hence the name `deepseek`. This enum was added so that if OpenAI wanted their own API schema, we could select between either `deepseek` or `openai`. However, the meaning was lost over time due to a combination of bad naming and poor documentation.

`auto` was added as a no-brainer solution since DeepSeek's `reasoning_content` is now supported by many applications, while OAI migrated to their new Responses API.

This `reasoning_format` solely determines the API schema; it is unrelated to the notion of a parser or anything at the chat template layer. The parser is determined by the chat template itself.
Thanks, that's very helpful context!
```cpp
// 2. The chat template supports it
bool enable_thinking = params_base.reasoning_budget != 0;
if (enable_thinking && params_base.use_jinja) {
    common_chat_templates_inputs dummy_inputs;
```
This logic could move into a free function in `chat.*` (something like `common_chat_supports_enable_thinking`).

Also, an alternate implementation of this would be to be more explicit and do some kind of `switch` on the subtype of `common_chat_params`, but that felt hard to maintain (similar to other places where an arch enum is required). Since this is a boot-time operation, it feels OK to do the extra template expansion to avoid another place where a developer would need to poke in architecture-specific logic.
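For concreteness, a minimal sketch of what that free function could look like, assuming the dummy-render approach from the diff above (the name comes from the suggestion; treat the body as an illustration, not the final implementation):

```cpp
#include "chat.h"

#include <string>

// Sketch only: render a trivial conversation twice and report whether
// flipping enable_thinking changes the prompt at all. If it does not,
// the template ignores the flag and thinking should stay disabled.
static bool common_chat_supports_enable_thinking(const common_chat_templates * tmpls) {
    common_chat_templates_inputs dummy_inputs;
    common_chat_msg msg;
    msg.role    = "user";
    msg.content = "test";
    dummy_inputs.messages = { msg };

    dummy_inputs.enable_thinking = false;
    const std::string prompt_off = common_chat_templates_apply(tmpls, dummy_inputs).prompt;

    dummy_inputs.enable_thinking = true;
    const std::string prompt_on = common_chat_templates_apply(tmpls, dummy_inputs).prompt;

    return prompt_off != prompt_on;
}
```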
Force-pushed from `363c664` to `17b4bf8`
Branch: gabe-l-hart/thinking-model-disabled-agent-prefill Signed-off-by: Gabe Goodhart <[email protected]>
Branch: gabe-l-hart/thinking-model-disabled-agent-prefill Signed-off-by: Gabe Goodhart <[email protected]>
…value: From what I can tell, this started as a Qwen3-specific keyword, but since the code in `chat.cpp` translates `inputs.enable_thinking` to the right thinking kwarg for the given model, this is now more of a standardized kwarg, so it should always override the default value when sent as part of the `chat_template_kwargs` field in the API. Branch: gabe-l-hart/thinking-model-disabled-agent-prefill Signed-off-by: Gabe Goodhart <[email protected]>
With the `use_jinja` check, non-jinja models would enable thinking and always fail assistant prefill. Branch: gabe-l-hart/thinking-model-disabled-agent-prefill Signed-off-by: Gabe Goodhart <[email protected]>
Force-pushed from `bb48ad1` to `0f07f58`
Tried the following:

Request containing … EDIT: I believe I am still getting …
Thanks for testing it @arichiardi. Any chance you can post both the command to launch the server and the full request?
@ryan-mangeno if you have any cycles for testing, I'd love some help putting this one through its paces
@gabe-l-hart this is my call (copied from …):
Ok, looking at this request, it doesn't look like it is actually triggering the condition this PR is targeting, since it does not end with an `assistant` message. I modified the above request to end with an `assistant` message and tried sending `enable_thinking` both ways:

```sh
# Incorrectly sent as a string -> error
curl -XPOST -d'{"model":"UNUSED", "messages":[{"role":"user","content":"Can you make images?"}, {"role":"assistant","content":"Nope, I sure can not"}],"stream":false,"temperature":0.0,"max_tokens":30,"chat_template_kwargs":{"enable_thinking":"false"}}' -H "Content-Type: application/json" http://localhost:8081/v1/chat/completions

# Correctly sent as a bool -> ok
curl -XPOST -d'{"model":"UNUSED", "messages":[{"role":"user","content":"Can you make images?"}, {"role":"assistant","content":"Nope, I sure can not"}],"stream":false,"temperature":0.0,"max_tokens":30,"chat_template_kwargs":{"enable_thinking":false}}' -H "Content-Type: application/json" http://localhost:8081/v1/chat/completions
```

This behavior is a bit confusing given that the code is parsing the value as a string and explicitly checking against the string literal `"false"`.
Ok! I think I've figured out my issue and it comes down to misusing `chat_template_kwargs`.
I was not quite right before. This opens up a different question: what is the correct way to handle a user sending an invalid type in a chat template kwarg? This seems ill-defined, since chat template kwargs are model-specific except for `enable_thinking`. Roughly, the options are:

1. Reject the request with an error
2. Make a best-effort attempt to coerce the value

My usual inclination is to do "best effort" (2) in this case, but that often leads to loosey-goosey APIs, so the more correct thing to do would be (1). @ngxson any thoughts on this one?
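For reference, a minimal sketch of what the fail-fast option (1) could look like, assuming the kwargs arrive as an nlohmann::json object (the helper name `parse_enable_thinking` is hypothetical, not actual server code):

```cpp
#include <nlohmann/json.hpp>

#include <stdexcept>

using json = nlohmann::json;

// Hypothetical helper: look up enable_thinking in chat_template_kwargs and
// fail fast on anything that is not a real boolean.
static bool parse_enable_thinking(const json & chat_template_kwargs, bool default_value) {
    const auto it = chat_template_kwargs.find("enable_thinking");
    if (it == chat_template_kwargs.end()) {
        return default_value; // not set: keep the template-derived default
    }
    if (!it->is_boolean()) {
        // "true"/"false"/"1"/"yes" strings are ambiguous, so reject the
        // request outright rather than guessing at truthiness
        throw std::invalid_argument("enable_thinking must be a boolean");
    }
    return it->get<bool>();
}
```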
Branch: gabe-l-hart/thinking-model-disabled-agent-prefill Signed-off-by: Gabe Goodhart <[email protected]>
I'd probably go for (2) myself. Additionally, if the param requires a …

Agree that leaving the value unset without any sort of feedback is sub-optimal.
There are too many possible "truthy" / "falsy" strings and too many ambiguous strings that don't have a clear truthy/falsy value, so the simplest thing to do here is to reject the request. Ideally, this would be a 422 (Unprocessable Entity), but right now it's coming back as a 500. Branch: gabe-l-hart/thinking-model-disabled-agent-prefill Signed-off-by: Gabe Goodhart <[email protected]>
I've implemented the fail-fast path (reject any string in the …). Right now, any errors raised in … come back as a 500 rather than a more appropriate 422.
It looks like more-nuanced error types in …

I think (1) would be much better as it would be extensible to other methods within …
Fixes: #15401
cc @matteoserva from the original PR
Changes
- Determine whether `enable_thinking` should be true based on whether the rendered template changes when enabled
- Set `input.enable_thinking` from `chat_template_kwargs.enable_thinking` if set explicitly there
- Ensure `enable_thinking` is true only when set by default for thinking models or explicitly via kwarg (see the sketch after this list)
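Putting those pieces together, a hedged sketch of the resulting precedence (reusing the hypothetical helpers sketched earlier; `tmpls`, `chat_template_kwargs`, and `inputs` are assumed context, not the actual server variables):

```cpp
// Illustrative only: the default comes from whether the chat template
// actually reacts to the flag; an explicit boolean in
// chat_template_kwargs.enable_thinking overrides that default.
inputs.enable_thinking = parse_enable_thinking(
    chat_template_kwargs,
    /* default_value = */ common_chat_supports_enable_thinking(tmpls));
```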
Testing
All model types were tested using the following set of calls:
- Thinking model w/ thinking enabled
- Thinking model w/ thinking disabled
- Non-thinking model

```sh
./bin/llama-server -m ~/models/granite-3.0-2b-instruct/granite-3.0-2B-instruct-F16.gguf --jinja
```