Description
Checks
- I have updated to the latest minor and patch version of Strands
- I have checked the documentation and this is not expected behavior
- I have searched ./issues and there are no duplicates of my issue
Strands Version
0.1.8
Python Version
3.10+
Operating System
Linux
Installation Method
pip
Steps to Reproduce
- Install litellm 1.73.0.
- Run `hatch test tests-integ`.
- Observe that `test_agent` from ./tests-integ/test_model_litellm.py fails with a `None` tool name validation error.
Expected Behavior
LiteLLM model provider does not stream tool uses with `None` names.
Actual Behavior
LiteLLM model provider returns a `None` tool name due to a bug in the underlying LiteLLM client. See additional context for more details.
Additional Context
We are seeing litellm failures in the integration tests executed in our PR workflows (example). To understand the issue, pip install litellm v1.73.0 and send the following request:
```json
{
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "text": "What is the time in New York?",
          "type": "text"
        }
      ]
    }
  ],
  "model": "bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0",
  "stream": true,
  "stream_options": {
    "include_usage": true
  },
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "tool_time",
        "description": "tool_time",
        "parameters": {
          "properties": {},
          "type": "object",
          "required": []
        }
      }
    }
  ]
}
```
litellm will stream payloads similar to the following:
```
ModelResponseStream(id='chatcmpl-b6984fa2-5f78-4b53-8e2b-b4429769eb9e', created=1750699986, model='us.anthropic.claude-3-7-sonnet-20250219-v1:0', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content="I'll", role='assistant', function_call=None, tool_calls=None, audio=None), logprobs=None)], provider_specific_fields={}, stream_options={'include_usage': True}, citations=None)
ModelResponseStream(id='chatcmpl-b6984fa2-5f78-4b53-8e2b-b4429769eb9e', created=1750699986, model='us.anthropic.claude-3-7-sonnet-20250219-v1:0', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' check the current time in New', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], provider_specific_fields={}, stream_options={'include_usage': True}, citations=None)
ModelResponseStream(id='chatcmpl-b6984fa2-5f78-4b53-8e2b-b4429769eb9e', created=1750699986, model='us.anthropic.claude-3-7-sonnet-20250219-v1:0', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' York for you. Let', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], provider_specific_fields={}, stream_options={'include_usage': True}, citations=None)
ModelResponseStream(id='chatcmpl-b6984fa2-5f78-4b53-8e2b-b4429769eb9e', created=1750699986, model='us.anthropic.claude-3-7-sonnet-20250219-v1:0', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content=' me fetch that information.', role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], provider_specific_fields={}, stream_options={'include_usage': True}, citations=None)
ModelResponseStream(id='chatcmpl-b6984fa2-5f78-4b53-8e2b-b4429769eb9e', created=1750699986, model='us.anthropic.claude-3-7-sonnet-20250219-v1:0', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content='', role='assistant', function_call=None, tool_calls=[ChatCompletionDeltaToolCall(id='tooluse_iZJCYpXyQ_ejPQQ1PZRzOg', function=Function(arguments='', name='tool_time'), type='function', index=0)], audio=None), logprobs=None)], provider_specific_fields={}, stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b6984fa2-5f78-4b53-8e2b-b4429769eb9e', created=1750699986, model='us.anthropic.claude-3-7-sonnet-20250219-v1:0', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content='', role='assistant', function_call=None, tool_calls=[ChatCompletionDeltaToolCall(id=None, function=Function(arguments='', name=None), type='function', index=0)], audio=None), logprobs=None)], provider_specific_fields={}, stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b6984fa2-5f78-4b53-8e2b-b4429769eb9e', created=1750699986, model='us.anthropic.claude-3-7-sonnet-20250219-v1:0', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(provider_specific_fields=None, content='', role='assistant', function_call=None, tool_calls=[ChatCompletionDeltaToolCall(id=None, function=Function(arguments='{}', name=None), type='function', index=1)], audio=None), logprobs=None)], provider_specific_fields={}, stream_options={'include_usage': True})
ModelResponseStream(id='chatcmpl-b6984fa2-5f78-4b53-8e2b-b4429769eb9e', created=1750699986, model='us.anthropic.claude-3-7-sonnet-20250219-v1:0', object='chat.completion.chunk', system_fingerprint=None, choices=[StreamingChoices(finish_reason='tool_calls', index=0, delta=Delta(provider_specific_fields=None, content=None, role=None, function_call=None, tool_calls=None, audio=None), logprobs=None)], provider_specific_fields={}, stream_options={'include_usage': True})
```
Notice that there is only one tool call, yet the `ChatCompletionDeltaToolCall` payloads carry mismatching indices (the `name` arrives at index 0, while the `arguments` chunk arrives at index 1). All deltas belonging to the same tool call must share the same index; otherwise the response cannot be pieced back together correctly.
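To make the failure mode concrete, here is a minimal sketch of index-keyed delta aggregation (the usual OpenAI-style streaming rule). The `Function`/`DeltaToolCall` classes below are simplified stand-ins for litellm's types, not its actual implementation; the delta values mirror the payloads shown above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Function:
    arguments: str
    name: Optional[str]

@dataclass
class DeltaToolCall:
    id: Optional[str]
    function: Function
    index: int

def aggregate(deltas):
    """Merge streamed tool-call deltas, grouping them by index."""
    calls = {}
    for d in deltas:
        call = calls.setdefault(d.index, {"id": None, "name": None, "arguments": ""})
        if d.id:
            call["id"] = d.id
        if d.function.name:
            call["name"] = d.function.name
        call["arguments"] += d.function.arguments
    return calls

# Deltas mirroring the stream above: the tool name arrives at index 0,
# but the '{}' arguments chunk arrives at index 1.
deltas = [
    DeltaToolCall(id="tooluse_iZJCYpXyQ_ejPQQ1PZRzOg", function=Function("", "tool_time"), index=0),
    DeltaToolCall(id=None, function=Function("", None), index=0),
    DeltaToolCall(id=None, function=Function("{}", None), index=1),
]

calls = aggregate(deltas)
# The mismatched index produces a spurious second tool call whose name is None,
# which is what trips Strands' tool name validation.
```

With consistent indices, `aggregate` would instead yield a single entry named `tool_time` with arguments `{}`.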
This issue appears to be a bug introduced while addressing BerriAI/litellm#11580.
Possible Solution
As a temporary workaround, we are pinning our litellm dependency to <1.73.0 (#270). Once a fix is released in litellm, we will need to revert that PR.
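For anyone hitting this outside of Strands, the same pin can be applied directly (illustrative pip invocation; the Strands-side change lives in #270):

```shell
pip install "litellm<1.73.0"
```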