The `top_k` parameter is troublesome for many OpenAI-compatible API endpoints. It leads to the following error message, and the operation never completes:
```
❯ enzyme init
Found existing enzyme data:
/Users/redacted/Work/.enzyme
Clear and hard reset? [y/N] y
Cleared /Users/redacted/Work/.enzyme
LLM: google/gemini-2.5-flash @ https://openrouter.ai/api/v1 (key: you••••-key)
Indexed: 289 discovered, 289 new, 0 updated, 0 deleted
Selection: LLM selection failed: LLM entity selection call failed
⠋ Embedding [152/283 docs]
^C
```
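For reproduction outside of enzyme, here is a minimal sketch of the failing request pattern, assuming enzyme forwards `top_k` in the request body the way the OpenAI Python SDK's `extra_body` does (endpoint, key, and model below are placeholders):

```python
from openai import OpenAI

# Point the client at any OpenAI-compatible endpoint, e.g. Azure OpenAI
# or a local LM Studio server (base_url and api_key are placeholders).
client = OpenAI(base_url="http://localhost:1234/v1", api_key="foo")

# top_k is not part of the OpenAI Chat Completions spec, so it has to be
# passed via extra_body; strict endpoints reject the whole request.
response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "ping"}],
    extra_body={"top_k": 40},  # <- the parameter strict endpoints reject
)
print(response.choices[0].message.content)
```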
For example, all Azure OpenAI models do not support the parameter and respond with an error. I also tried a local LM Studio server, and it rejects requests with this parameter as well. I could work around it by using a local LiteLLM proxy with this config:
```yaml
model_list:
  - model_name: gpt-4.1-mini
    litellm_params:
      model: azure/gpt-4.1-mini
      drop_params: true
      # Explicitly name the problematic parameter
      additional_drop_params: ["top_k"]
```
and this env:
```sh
export OPENAI_API_KEY=foo
export OPENAI_BASE_URL=http://localhost:4000/v1
export OPENAI_MODEL=gpt-4.1-mini
```
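For what it's worth, a quick way to check that the proxy really drops the parameter is a direct request through it, roughly like this sketch (assuming the proxy from the config above is running on localhost:4000 and the env above is set):

```python
import os
from openai import OpenAI

# Reads the env from above; the LiteLLM proxy strips top_k
# before forwarding the request to Azure.
client = OpenAI(
    base_url=os.environ["OPENAI_BASE_URL"],
    api_key=os.environ["OPENAI_API_KEY"],
)

response = client.chat.completions.create(
    model=os.environ["OPENAI_MODEL"],
    messages=[{"role": "user", "content": "ping"}],
    extra_body={"top_k": 40},  # dropped by LiteLLM, so Azure never sees it
)
print(response.choices[0].message.content)
```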
Then the requests went through and the `Selection: LLM selection failed` error message was gone, but there is another error:
```
LiteLLM completion() model= gpt-4.1-mini; provider = azure
2026-04-09 00:44:24,920 - LiteLLM Router - INFO - litellm.acompletion(model=azure/gpt-4.1-mini) Exception litellm.BadRequestError: AzureException BadRequestError - Invalid schema for response_format 'catalyst_response': In context=(), 'required' is required to be supplied and to be an array including every key in properties. Missing 'eraLabels'.
```
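The cause seems to be that Azure's strict structured-output validation (same as the upstream OpenAI API) requires `required` to be an array listing every key in `properties`. A `response_format` that would pass validation has to look roughly like this; apart from `eraLabels` (taken from the error message), the property names are made up, since I don't know the actual `catalyst_response` schema:

```python
# Hypothetical sketch of a response_format that Azure would accept.
# Only "eraLabels" comes from the error message; "entities" is a
# placeholder for whatever else catalyst_response contains.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "catalyst_response",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "entities": {"type": "array", "items": {"type": "string"}},
                "eraLabels": {"type": "array", "items": {"type": "string"}},
            },
            # Strict mode: every key in properties must appear here,
            # which is exactly what the AzureException complains about.
            "required": ["entities", "eraLabels"],
            "additionalProperties": False,
        },
    },
}
```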
I usually never have any problems using OpenAI-compatible clients with the LiteLLM proxy, and I even use coding agents like OpenCode with it. Azure OpenAI is a popular choice in the corporate world for compliance reasons.