Thinking model disabled agent prefill #15404


Open · wants to merge 6 commits into master from gabe-l-hart/thinking-model-disabled-agent-prefill

Conversation

gabe-l-hart (Collaborator):

Fixes: #15401

cc @matteoserva from the original PR

Changes

  • Auto-detect whether enable_thinking should be true based on whether the rendered template changes when it is enabled (see the sketch below)
  • Always set input.enable_thinking from chat_template_kwargs.enable_thinking when it is set explicitly there
  • Fix the logic so that assistant-response prefill is rejected only when enable_thinking is actually true, whether set by default for thinking models or explicitly via the kwarg
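
For illustration, a minimal sketch of that detection idea using the public chat-template helpers (the function name is hypothetical; the exact server-side code may differ):

// Hypothetical sketch: detect whether a chat template reacts to
// enable_thinking by rendering the same messages twice and comparing.
static bool template_supports_enable_thinking(const common_chat_templates * tmpls) {
    common_chat_templates_inputs inputs;
    common_chat_msg msg;
    msg.role    = "user";
    msg.content = "test";
    inputs.messages = { msg };

    inputs.enable_thinking = true;
    const auto rendered_on  = common_chat_templates_apply(tmpls, inputs);
    inputs.enable_thinking = false;
    const auto rendered_off = common_chat_templates_apply(tmpls, inputs);

    // If the rendered prompt differs, the template honors enable_thinking
    return rendered_on.prompt != rendered_off.prompt;
}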

Testing

All model types were tested using the following set of calls:

# Test without setting enable_thinking
curl http://localhost:8080/apply-template -d '{"messages": [{"role": "user", "content": "hello world"}, {"role": "assistant", "content": "hi hi"}]}'

# Test with enable_thinking: true
curl http://localhost:8080/apply-template -d '{"messages": [{"role": "user", "content": "hello world"}, {"role": "assistant", "content": "hi hi"}], "chat_template_kwargs": {"enable_thinking": true}}'

# Test with enable_thinking: false
curl http://localhost:8080/apply-template -d '{"messages": [{"role": "user", "content": "hello world"}, {"role": "assistant", "content": "hi hi"}], "chat_template_kwargs": {"enable_thinking": false}}'

Thinking model w/ thinking enabled

./bin/llama-server -hf ibm-research/granite-3.2-8b-instruct-GGUF --jinja
# Reasoning enabled implicitly => error
curl http://localhost:8080/apply-template -d '{"messages": [{"role": "user", "content": "hello world"}, {"role": "assistant", "content": "hi hi"}]}'
# {"error":{"code":500,"message":"Assistant response prefill is incompatible with enable_thinking.","type":"server_error"}}

# Reasoning enabled explicitly => error
curl http://localhost:8080/apply-template -d '{"messages": [{"role": "user", "content": "hello world"}, {"role": "assistant", "content": "hi hi"}], "chat_template_kwargs": {"enable_thinking": true}}'
# {"error":{"code":500,"message":"Assistant response prefill is incompatible with enable_thinking.","type":"server_error"}}

# Reasoning disabled explicitly => ok
curl http://localhost:8080/apply-template -d '{"messages": [{"role": "user", "content": "hello world"}, {"role": "assistant", "content": "hi hi"}], "chat_template_kwargs": {"enable_thinking": false}}'
# {"prompt":"<|start_of_role|>system<|end_of_role|>Knowledge Cutoff Date: April 2024.\nToday's Date: August 18, 2025.\nYou are Granite, developed by IBM. You are a helpful AI assistant.<|end_of_text|>\n<|start_of_role|>user<|end_of_role|>hello world<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>hi hi"}

Thinking model w/ thinking disabled

./bin/llama-server -hf ibm-research/granite-3.2-8b-instruct-GGUF --jinja --reasoning-budget 0
# Reasoning disabled implicitly => ok
curl http://localhost:8080/apply-template -d '{"messages": [{"role": "user", "content": "hello world"}, {"role": "assistant", "content": "hi hi"}]}'
# {"prompt":"<|start_of_role|>system<|end_of_role|>Knowledge Cutoff Date: April 2024.\nToday's Date: August 18, 2025.\nYou are Granite, developed by IBM. You are a helpful AI assistant.<|end_of_text|>\n<|start_of_role|>user<|end_of_role|>hello world<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>hi hi"}

# Reasoning enabled explicitly => error
curl http://localhost:8080/apply-template -d '{"messages": [{"role": "user", "content": "hello world"}, {"role": "assistant", "content": "hi hi"}], "chat_template_kwargs": {"enable_thinking": true}}'
# {"error":{"code":500,"message":"Assistant response prefill is incompatible with enable_thinking.","type":"server_error"}}

# Reasoning disabled explicitly => ok
curl http://localhost:8080/apply-template -d '{"messages": [{"role": "user", "content": "hello world"}, {"role": "assistant", "content": "hi hi"}], "chat_template_kwargs": {"enable_thinking": false}}'
# {"prompt":"<|start_of_role|>system<|end_of_role|>Knowledge Cutoff Date: April 2024.\nToday's Date: August 18, 2025.\nYou are Granite, developed by IBM. You are a helpful AI assistant.<|end_of_text|>\n<|start_of_role|>user<|end_of_role|>hello world<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>hi hi"}

Non-Thinking model

./bin/llama-server -m ~/models/granite-3.0-2b-instruct/granite-3.0-2B-instruct-F16.gguf --jinja
# Reasoning disabled implicitly => ok
curl http://localhost:8080/apply-template -d '{"messages": [{"role": "user", "content": "hello world"}, {"role": "assistant", "content": "hi hi"}]}'
# {"prompt":"<|start_of_role|>user<|end_of_role|>hello world<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>hi hi"}

# Reasoning enabled explicitly => error
curl http://localhost:8080/apply-template -d '{"messages": [{"role": "user", "content": "hello world"}, {"role": "assistant", "content": "hi hi"}], "chat_template_kwargs": {"enable_thinking": true}}'
# {"error":{"code":500,"message":"Assistant response prefill is incompatible with enable_thinking.","type":"server_error"}}

# Reasoning disabled explicitly => ok
curl http://localhost:8080/apply-template -d '{"messages": [{"role": "user", "content": "hello world"}, {"role": "assistant", "content": "hi hi"}], "chat_template_kwargs": {"enable_thinking": false}}'
# {"prompt":"<|start_of_role|>user<|end_of_role|>hello world<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>hi hi"}

common/chat.cpp (outdated)
@@ -647,6 +647,8 @@ common_reasoning_format common_reasoning_format_from_name(const std::string & fo
return COMMON_REASONING_FORMAT_DEEPSEEK;
} else if (format == "deepseek-legacy") {
return COMMON_REASONING_FORMAT_DEEPSEEK_LEGACY;
} else if (format == "granite") {
gabe-l-hart (Collaborator Author):
I just noticed that this was missing, so figured I'd add it (not strictly required for this PR)

Collaborator:

We would recommend using auto instead of adding a new type here (so the existing enum COMMON_REASONING_FORMAT_GRANITE should be removed altogether; it is not actively used anywhere in the code base).

reasoning_format is supposed to select the output format, i.e. the JSON response format from server to client. It should have no effect on the content.

We should document this better to prevent future contributors from thinking that reasoning_format means "template format", when it actually means "server response schema".

gabe-l-hart (Collaborator Author):

Thanks! I'll yank this out.

For the life of me, I couldn't find where "auto" was actually being used. I found several places where there was a special case for _NONE or _DEEPSEEK_LEGACY, but I couldn't find _AUTO anywhere other than being set as the default. Can you clarify how this gets used to auto-determine the right reasoning parser?

@ngxson (Collaborator), Aug 18, 2025:

The long story is that the message.reasoning_content API response format was initially introduced by the DeepSeek hosted API, hence the name deepseek.

This enum was added so that, if OpenAI wanted their own API schema, we could select between either deepseek or openai. However, the meaning was lost over time due to a combination of bad naming and poor documentation.

auto was added as a no-brainer solution since deepseek-style reasoning_content is now supported by many applications, while OAI migrated to their new Responses API.

This reasoning_format solely determines the API schema; it is unrelated to the notion of a parser or anything at the chat template layer. The parser is determined by the chat template itself.

gabe-l-hart (Collaborator Author):

Thanks, that's very helpful context!

// 2. The chat template supports it
bool enable_thinking = params_base.reasoning_budget != 0;
if (enable_thinking && params_base.use_jinja) {
    common_chat_templates_inputs dummy_inputs;
gabe-l-hart (Collaborator Author):

This logic could move into a free function in chat.* (something like common_chat_supports_enable_thinking).

An alternate implementation would be more explicit: some kind of switch on the subtype of common_chat_params. That felt hard to maintain (similar to other places where an arch enum is required). Since this is a boot-time operation, the extra template expansion seems acceptable to avoid yet another place where a developer would need to add architecture-specific logic.

@gabe-l-hart force-pushed the gabe-l-hart/thinking-model-disabled-agent-prefill branch from 363c664 to 17b4bf8 on August 18, 2025 19:39
@gabe-l-hart changed the title from "Gabe l hart/thinking model disabled agent prefill" to "Thinking model disabled agent prefill" on Aug 18, 2025
Branch: gabe-l-hart/thinking-model-disabled-agent-prefill

Signed-off-by: Gabe Goodhart <[email protected]>
Branch: gabe-l-hart/thinking-model-disabled-agent-prefill

Signed-off-by: Gabe Goodhart <[email protected]>
…value

From what I can tell, this started as a Qwen3-specific keyword, but since
the code in `chat.cpp` translates inputs.enable_thinking into the right
thinking kwarg for the given model, it is now more of a standardized
kwarg, so it should always override the default value when sent as part of
the chat_template_kwargs field in the API.

Branch: gabe-l-hart/thinking-model-disabled-agent-prefill

Signed-off-by: Gabe Goodhart <[email protected]>
With the use_jinja check, non-jinja models would enable thinking and always
fail assistant prefill

Branch: gabe-l-hart/thinking-model-disabled-agent-prefill

Signed-off-by: Gabe Goodhart <[email protected]>
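
The rough shape of that fix, for illustration (reusing the hypothetical template_supports_enable_thinking helper from the sketch above; this is not the literal PR code):

// Only treat thinking as enabled when the budget allows it AND the jinja
// template actually reacts to the enable_thinking kwarg; without --jinja
// there is no template kwarg to render, so thinking stays disabled.
bool enable_thinking = params_base.reasoning_budget != 0;
if (enable_thinking) {
    enable_thinking = params_base.use_jinja
        && template_supports_enable_thinking(chat_templates.get());
}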
@arichiardi commented Aug 19, 2025:

Tried the following:

LLAMA_CHAT_TEMPLATE_KWARGS='{\"enable_thinking\":false}' ✔️

Request containing {"chat_template_kwargs": {"enable_thinking":false}} 🔴

EDIT: I believe I am still getting the exception {"code":500,"message":"Assistant response prefill is incompatible with enable_thinking.","type":"server_error"} when the request contains it.

@gabe-l-hart (Collaborator Author):

Thanks for testing it @arichiardi. Any chance you can post both the command to launch the server and the full curl command for the request so I can try to repro?

@gabe-l-hart (Collaborator Author):

@ryan-mangeno if you have any cycles for testing, I'd love some help putting this one through its paces

@arichiardi:

@gabe-l-hart this is my call (copied from gptel)

curl \
--disable \
--location \
--silent \
--compressed \
-XPOST \
-y7200 \
-Y1 \
-D- \
-w\(ef61b92be715ea1d69de4307f3024c21\ .\ \%\{size_header\}\) \
-d\{\"model\"\:\"GLM-4.5\"\,\"messages\"\:\[\{\"role\"\:\"user\"\,\"content\"\:\"Can\ you\ make\ images\?\"\}\]\,\"stream\"\:true\,\"temperature\"\:1.0\,\"chat_template_kwargs\"\:\{\"enable_thinking\"\:\"false\"\}\} \
-HContent-Type\:\ application/json \
-HContent-Type\:\ application/json \
http\://<your-host>:<your-port>/v1/chat/completions

@gabe-l-hart (Collaborator Author):

Ok, looking at this request, it doesn't look like it actually exercises the condition this PR targets, since it does not end with an assistant turn. I can't run GLM-4.5 locally, but I ran the same request against Granite 3.2 8b without setting --reasoning-budget 0 and got clean responses.

I modified the above request to end with an assistant turn, and found that I could reproduce what you report: I got the rejection when setting "chat_template_kwargs":{"enable_thinking":"false"}. The key here is that "false" is being sent as a string, not a bool. If I remove the quotes around it so it is sent as a bool, the request goes through cleanly.

# Incorrectly sent as a string -> error
curl -XPOST -d'{"model":"UNUSED", "messages":[{"role":"user","content":"Can you make images?"}, {"role":"assistant","content":"Nope, I sure can not"}],"stream":false,"temperature":0.0,"max_tokens":30,"chat_template_kwargs":{"enable_thinking":"false"}}' -H "Content-Type: application/json" -H "Content-Type: application/json" http://localhost:8081/v1/chat/completions

# Correctly sent as a bool -> ok
curl -XPOST -d'{"model":"UNUSED", "messages":[{"role":"user","content":"Can you make images?"}, {"role":"assistant","content":"Nope, I sure can not"}],"stream":false,"temperature":0.0,"max_tokens":30,"chat_template_kwargs":{"enable_thinking":false}}' -H "Content-Type: application/json" -H "Content-Type: application/json" http://localhost:8081/v1/chat/completions

This behavior is a bit confusing given that the code parses the value as a string and explicitly checks it against the string literals "true" and "false". I'll look a bit more to understand why sending it as a string does not match.

@gabe-l-hart (Collaborator Author):

Ok! I think I've figured out my issue, and it comes down to misusing json_value. This serializes the field and always returns a std::string. The serialized representation of a JSON string, dumped into a std::string, is "\"false\"" (note the quote characters inside the string itself), so it fails to match the check. I'll fix this shortly.

@gabe-l-hart (Collaborator Author):

I was not quite right before: json_value does not always return a std::string. The issue is here, where the inputs.chat_template_kwargs mapping is constructed using .dump(), which serializes every value to its JSON text representation.
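
For illustration, a minimal standalone reproduction of that .dump() behavior (assumes nlohmann/json, which is what the server uses for request parsing):

#include <iostream>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

int main() {
    json as_bool   = false;    // a JSON boolean
    json as_string = "false";  // a JSON string holding the text false

    // .dump() produces the JSON text representation, so the string value
    // keeps its quote characters inside the resulting std::string
    std::cout << as_bool.dump()   << "\n";  // prints: false
    std::cout << as_string.dump() << "\n";  // prints: "false"

    // A comparison against the literal "false" therefore fails for the
    // string case, because the serialized form is "\"false\""
    std::cout << (as_string.dump() == "false") << "\n";  // prints: 0
}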

This opens up a different question: what is the correct way to handle a user sending an invalid type in a chat template kwarg? This seems ill-defined, since chat template kwargs are model-specific except for "enable_thinking". It seems like there are three choices:

  1. Explicitly reject "stringified bools" as invalid requests for "enable_thinking"
  2. Intentionally parse "stringified bools" as if they were the corresponding bool (then you have to wonder about things like "False", "No", etc. that are sometimes parsed as valid bools on some platforms)
  3. Keep the current implementation, which silently treats them as unset (not the best UX!)

My usual inclination is to do "best effort" (2) in this case, but that often leads to loosey-goosey APIs, so the more correct thing to do would be (1).

@ngxson any thoughts on this one?

Branch: gabe-l-hart/thinking-model-disabled-agent-prefill

Signed-off-by: Gabe Goodhart <[email protected]>
@arichiardi commented Aug 20, 2025:

@gabe-l-hart

I'd probably go for (2) myself. Additionally, if the param requires a false literal (as opposed to a string) we could just reject the request (fail fast, I like it).

Agree that leaving the value unset without any sort of feedback is sub-optimal.

There are too many possible "truthy" / "falsy" strings and too many
ambiguous strings that don't have a clear truthy/falsy value, so the
simplest thing to do here is to reject the request. Ideally, this would be
a 422 (Unprocessable Entity), but right now it's coming back as a 500.

Branch: gabe-l-hart/thinking-model-disabled-agent-prefill

Signed-off-by: Gabe Goodhart <[email protected]>
@gabe-l-hart (Collaborator Author) commented Aug 20, 2025:

I've implemented the fail-fast path (reject any string in the "enable_thinking" kwarg) locally. I think this is more maintainable going forward, especially if additional kwargs with more complex types become standardized (e.g. documents from the HF apply_chat_template signature). It's also far easier, from a backwards-compatibility standpoint, to later turn an error into a success than the other way around.
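
A minimal sketch of that fail-fast check, assuming the kwargs arrive as a parsed nlohmann::json object (names are illustrative, not the exact PR code):

#include <stdexcept>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

// Hypothetical helper: extract enable_thinking from chat_template_kwargs,
// rejecting anything that is not a real JSON boolean.
static bool get_enable_thinking(const json & kwargs, bool default_value) {
    const auto it = kwargs.find("enable_thinking");
    if (it == kwargs.end()) {
        return default_value;
    }
    if (!it->is_boolean()) {
        // fail fast: the string "false" is not accepted in place of false
        throw std::invalid_argument("chat_template_kwargs.enable_thinking must be a boolean");
    }
    return it->get<bool>();
}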

Right now, any errors raised in oaicompat_chat_params_parse result in a 500. This is not really correct as this should be a 422 (Unprocessable Entity) (or simply a 400). I'm going to look into whether there's a way to raise errors with error-codes from the free-functions in utils.hpp.

@gabe-l-hart (Collaborator Author) commented Aug 20, 2025:

It looks like more-nuanced error types in server would require one of the following approaches:

  1. Define an exception type in utils.hpp that carries the HTTP status code to be used (from this enum), then handle it correctly in the generic handling logic (sketched below)
  2. Add try / catch around the handlers for the OAI endpoints and explicitly check the error strings

I think (1) would be much better, as it would be extensible to other methods within utils.hpp, but it would also be much more invasive. As such, I'm not going to do it as part of this PR and will wait for further thoughts.
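
For context, a sketch of what option (1) might look like (purely illustrative; none of these names exist in the codebase):

#include <stdexcept>
#include <string>

// Hypothetical exception carrying the HTTP status to return, so the
// generic handler can map parse errors to 400/422 instead of a blanket 500.
class http_error : public std::runtime_error {
public:
    http_error(int status, const std::string & msg)
        : std::runtime_error(msg), status_(status) {}
    int status() const { return status_; }
private:
    int status_;
};

// Usage inside a parsing helper:
//   throw http_error(422, "enable_thinking must be a boolean");
// The endpoint wrapper would catch http_error and use e.status() for the
// response code, falling back to 500 for anything else.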

Successfully merging this pull request may close these issues.

Eval bug: Thinking model with thinking disabled cannot use /apply-template with final assistant turn