
[BUG] Models with native function calling fail when using Ollama #4036

@MB-Finski

Description


Some models served with Ollama that support function calling natively (e.g. ollama_chat/gpt-oss:20b) raise an exception when the model tries to use a tool. This happens because the model returns ChatCompletionMessageToolCall objects under the tool_calls key instead of the expected format inside the message content.

As per the litellm docs, I have tried setting:

litellm.register_model(model_cost={
    "ollama_chat/gpt-oss:20b": {
        "supports_function_calling": True,
    },
})

I have even tried setting function_calling_llm on the agents, but that LLM is never used even when tools are provided to the agent; the default OpenAI model is always used instead. This is confusing, since the docs state that function_calling_llm should override whatever llm has been set.

I ran out of time debugging this, but it seems the LLM is never treated as function-calling capable. For example, in the _handle_non_streaming_response method, available_functions is None despite the agent having been provided with tools. Similarly, the supports_function_calling method is never called.

As a side note: the documentation should be updated to recommend ollama_chat instead of ollama for some of the newer open-weights models such as gpt-oss. Using plain "ollama" as the provider leads to the wrong prompt formatting; with ollama_chat the correct chat template is applied on Ollama's end, and even system messages are formatted correctly.
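For reference, a minimal crewAI LLM configuration using the ollama_chat provider might look like the following (the model name and base URL are examples for a default local Ollama setup, not prescriptive values):

```python
from crewai import LLM

# ollama_chat/ routes through Ollama's chat endpoint, so the model's own
# chat template (including system-message handling) is applied server-side.
llm = LLM(
    model="ollama_chat/gpt-oss:20b",
    base_url="http://localhost:11434",
)
```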

Steps to Reproduce

  1. Use ollama_chat as the LLM provider with a model that supports function calling natively (e.g. gpt-oss)
  2. Instantiate an agent with tools and provide the above LLM to the agent
  3. Build a crew and give it a task that requires the above agent to use tools
  4. Response processing fails with a parsing error: 'Error details: 'list' object has no attribute 'rstrip''
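The failure mode in step 4 can be reproduced in isolation. The names below are illustrative, not crewAI internals: post-processing that assumes the model's answer is a plain string breaks when it instead receives the tool_calls list:

```python
def postprocess(answer):
    # Mimics response handling that assumes `answer` is plain text.
    return answer.rstrip()

# Normal text response: content is a string, so this works.
print(postprocess("Final answer\n"))

# Native tool-call response: the message carries a list of tool-call
# objects, and the same string-only handling raises the reported error.
tool_calls = [{"function": {"name": "search", "arguments": "{}"}}]
try:
    postprocess(tool_calls)
except AttributeError as exc:
    print(exc)  # 'list' object has no attribute 'rstrip'
```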

Expected behavior

The LLM should be treated as a function-calling capable model.

Screenshots/Code snippets

Operating System

Other (specify in additional context)

Python Version

3.12

crewAI Version

1.6.1

crewAI Tools Version

1.6.1

Virtual Environment

Venv

Evidence

An unknown error occurred. Please check the details below.
Error details: 'list' object has no attribute 'rstrip'

Possible Solution

None

Additional context

OS: Debian 13

Metadata

Labels: bug (Something isn't working)