Conversation

@Maximgitman (Contributor) commented Oct 7, 2025

Fix: OpenAI Agents SDK compatibility - Handle None usage values

Title

Fix: OpenAI Agents SDK compatibility - Handle None usage values in ResponseAPI

Relevant issues

Fixes a compatibility issue with the OpenAI Agents SDK where the ResponseAPI returns None values in usage fields, causing Pydantic validation errors.

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🐛 Bug Fix

Changes

Problem: None Values in Usage Fields

When using the OpenAI Agents SDK with LiteLLM's ResponseAPI, the SDK crashes when usage fields contain None values instead of integers. This happens because:

  1. Some providers return None for token counts instead of 0
  2. Token detail fields (input_tokens_details, output_tokens_details) can be None or contain None values
  3. OpenAI Agents SDK expects all numeric fields to be integers, not None

Error:

ValidationError: 2 validation errors for Usage
input_tokens_details
  Input should be a valid dictionary or instance of InputTokensDetails [type=model_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/model_type
output_tokens_details
  Input should be a valid dictionary or instance of OutputTokensDetails [type=model_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/model_type

Stack trace location:

File ~/miniconda3/lib/python3.12/site-packages/agents/models/openai_responses.py:118, in OpenAIResponsesModel.get_response
    usage = Usage(
        requests=1,
        input_tokens=response.usage.input_tokens,
        output_tokens=response.usage.output_tokens,
        total_tokens=response.usage.total_tokens,
        input_tokens_details=response.usage.input_tokens_details,  # ← None causes crash
        output_tokens_details=response.usage.output_tokens_details,  # ← None causes crash
    )
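The root cause is easy to demonstrate in isolation. A minimal sketch, using stand-in pydantic v2 models shaped like the SDK's Usage and InputTokensDetails (illustrative assumptions, not the SDK's actual definitions):

from pydantic import BaseModel

# Stand-ins shaped like the Agents SDK models (illustrative assumption)
class InputTokensDetails(BaseModel):
    cached_tokens: int = 0

class Usage(BaseModel):
    requests: int
    input_tokens: int
    input_tokens_details: InputTokensDetails

# Passing None where a nested model is required raises the same error:
Usage(requests=1, input_tokens=10, input_tokens_details=None)
# pydantic_core.ValidationError: 1 validation error for Usage
# input_tokens_details
#   Input should be a valid dictionary or instance of InputTokensDetails
#   [type=model_type, input_value=None, input_type=NoneType]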

Reproduction Example

from agents import (
    ModelSettings,
    Agent,
    OpenAIResponsesModel,
    Runner,
)
from openai.types.shared import Reasoning
from openai import AsyncOpenAI

# LiteLLM proxy client
litellm_client = AsyncOpenAI(
    base_url="http://0.0.0.0:4000",
)

# Create agent with ResponseAPI model
test_agent = Agent(
    name="Test Agent",
    instructions="You are a helpful agent.",
    model=OpenAIResponsesModel("claude-4.5-sonnet", openai_client=litellm_client),
    model_settings=ModelSettings(
        reasoning=Reasoning(effort="low", summary="detailed"),
        include_usage=True  # This triggers the usage field requirement
    ),
)

# This would crash before the fix with ValidationError
# (top-level await: run inside a notebook or an asyncio.run wrapper)
result = await Runner.run(test_agent, "What do you think about LiteLLM")
result.to_input_list()

Solution

Normalize usage fields:

  • Convert None numeric values to 0
  • Remove or clean token detail fields when None
  • Ensure input_tokens_details.cached_tokens defaults to 0
  • Ensure output_tokens_details.reasoning_tokens defaults to 0

This ensures full compatibility with OpenAI Agents SDK and other OpenAI-compatible clients.
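For example, a raw provider payload like the one below is normalized as shown in the arrow comments (a sketch of the intended mapping, not a verbatim trace):

# Raw usage as some providers return it
raw_usage = {
    "input_tokens": None,                                 # -> 0
    "output_tokens": 20,                                  # unchanged
    "total_tokens": None,                                 # -> 0
    "input_tokens_details": None,                         # -> InputTokensDetails(cached_tokens=0)
    "output_tokens_details": {"reasoning_tokens": None},  # -> OutputTokensDetails(reasoning_tokens=0)
}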

Code Changes

Modified File: litellm/responses/utils.py - Usage field normalization

Before:

response_api_usage: ResponseAPIUsage = (
    ResponseAPIUsage(**usage) if isinstance(usage, dict) else usage
)

After:

if isinstance(usage, dict):
    usage_clean = usage.copy()

    # Ensure numeric fields default to zero rather than None
    for numeric_key in ("input_tokens", "output_tokens", "total_tokens"):
        if usage_clean.get(numeric_key) is None:
            usage_clean[numeric_key] = 0

    # Drop detail fields when provider returns None, or clean nested None values
    for detail_key in ("input_tokens_details", "output_tokens_details"):
        detail_value = usage_clean.get(detail_key)
        if detail_value is None:
            usage_clean.pop(detail_key, None)
        elif isinstance(detail_value, dict):
            usage_clean[detail_key] = {
                k: v for k, v in detail_value.items() if v is not None
            }

    response_api_usage: ResponseAPIUsage = ResponseAPIUsage(**usage_clean)
else:
    response_api_usage = usage

# Normalise token detail fields so they match OpenAI format
input_details = response_api_usage.input_tokens_details
if input_details is None:
    input_details = InputTokensDetails(cached_tokens=0)
elif input_details.cached_tokens is None:
    input_details.cached_tokens = 0
response_api_usage.input_tokens_details = input_details

output_details = response_api_usage.output_tokens_details
if output_details is None:
    output_details = OutputTokensDetails(reasoning_tokens=0)
elif output_details.reasoning_tokens is None:
    output_details.reasoning_tokens = 0
response_api_usage.output_tokens_details = output_details

Tests Added

Test: Enhanced test_response_api_transform_usage_with_none_values in tests/test_litellm/responses/test_responses_utils.py

def test_response_api_transform_usage_with_none_values(self):
    """Test transformation handles None values properly"""
    usage = {
        "input_tokens": None,  # None should become 0
        "output_tokens": 20,
        "total_tokens": None,  # None should become 0
        "input_tokens_details": None,  # None should be normalized
        "output_tokens_details": {"reasoning_tokens": None},  # Nested None
    }

    result = ResponseAPILoggingUtils.transform_usage_to_litellm_usage(usage)

    assert result.prompt_tokens == 0
    assert result.completion_tokens == 20
    assert result.total_tokens == 20
    assert result.prompt_tokens_details is not None
    assert result.prompt_tokens_details.cached_tokens == 0

Test Results

Related Tests (9 tests, all passing ✅)

$ poetry run pytest tests/test_litellm/responses/test_responses_utils.py -v

tests/test_litellm/responses/test_responses_utils.py::TestResponsesAPIRequestUtils::test_get_optional_params_responses_api PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponsesAPIRequestUtils::test_get_optional_params_responses_api_unsupported_param PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponsesAPIRequestUtils::test_get_requested_response_api_optional_param PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponsesAPIRequestUtils::test_decode_previous_response_id_to_original_previous_response_id PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponsesAPIRequestUtils::test_update_responses_api_response_id_with_model_id_handles_dict PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponseAPILoggingUtils::test_is_response_api_usage_true PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponseAPILoggingUtils::test_is_response_api_usage_false PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponseAPILoggingUtils::test_transform_response_api_usage_to_chat_usage PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponseAPILoggingUtils::test_transform_response_api_usage_with_none_values PASSED

====================================================== 9 passed in 0.42s ======================================================

Impact

Before this fix:

  • ❌ OpenAI Agents SDK crashes with ValidationError when encountering None usage values
  • ❌ Token calculations fail in Agents SDK applications

After this fix:

  • ✅ Full OpenAI Agents SDK compatibility with ResponseAPI
  • ✅ All token fields are valid integers (never None)
  • ✅ Token detail objects always present with default values
  • ✅ Backward compatible with existing code


vercel bot commented Oct 7, 2025

@Maximgitman is attempting to deploy a commit to the CLERKIEAI Team on Vercel.

A member of the Team first needs to authorize it.

"citations": citations,
"thinking_blocks": thinking_blocks,
},
thinking_blocks=thinking_blocks,
Contributor:

please revert this - this means users cannot pass back in thinking blocks when using anthropic reasoning.

the thinking blocks contain the message signatures and are a recognized top level param, which is a guaranteed standard spec across anthropic/bedrock/etc.

Instead - we can opt for a 'strict' mode to drop non-openai fields, especially when using the responses api for the agents sdk. I think that's fine.
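(For illustration only, such a 'strict' mode could filter roughly like this; the field set, function name, and hook point are assumptions, not LiteLLM's actual design:)

# Illustrative sketch only - field names and hook point are assumptions
OPENAI_MESSAGE_FIELDS = {"role", "content", "tool_calls", "function_call", "name"}

def strip_non_openai_fields(message: dict, strict: bool = False) -> dict:
    if not strict:
        return message  # default: keep provider extras such as thinking_blocks
    return {k: v for k, v in message.items() if k in OPENAI_MESSAGE_FIELDS}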

Contributor:

I would recommend splitting this change from the responses api none usage

Contributor Author:

Thanks, totally agree.
Will split the PRs and see what we can do with a 'strict' mode there.

"citations": citations,
"thinking_blocks": thinking_blocks,
},
thinking_blocks=thinking_blocks,
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I would recommend splitting this change from the responses api none usage

@Maximgitman force-pushed the fix-response-api-usage-none-handling branch from be7f61d to bf5246b on October 7, 2025 03:11