Groq LiteLLM Integration does not work when using structured outputs #2140

@amolpsingh

Description

Please read this first

  • Have you read the docs? Agents SDK docs
  • Have you searched for related issues? Others may have faced similar issues.

Describe the bug

When using LitellmModel with Groq models and specifying an output_type parameter, the SDK attempts to use OpenAI's structured outputs feature by setting tool_choice to "json_tool_call". Groq's API doesn't accept this tool_choice value, so the request fails with a 400 Bad Request error.

Debug information

  • Agents SDK version: 0.6.1
  • Python version: 3.11.8

Repro steps

import asyncio
from pydantic import BaseModel
from agents import Agent, Runner, function_tool
from agents.extensions.models.litellm_model import LitellmModel

class AgentResponse(BaseModel):
    message: str
    status: str

@function_tool
def dummy_tool(input_text: str) -> str:
    return f"Processed: {input_text}"

agent = Agent(
    name="Demo Agent",
    instructions="Use the dummy tool to process input and respond with a structured output.",
    tools=[dummy_tool],
    output_type=AgentResponse,  # This triggers the error
    model=LitellmModel(
        model="groq/openai/gpt-oss-120b",
        api_key="your-api-key"
    )
)

async def main():
    result = await Runner.run(
        starting_agent=agent,
        input="Hello, please process this message"
    )
    print(result.final_output)

asyncio.run(main())

Expected behavior

The agent should return a structured response matching the AgentResponse schema. Instead, it fails with:

litellm.exceptions.BadRequestError: litellm.BadRequestError: GroqException - {"error":{"message":"code=400, message=Requested tool_choice `json_tool_call` was not defined in the request, type=invalid_request_error","type":"invalid_request_error"}}

Root Cause Analysis

The issue occurs because:

  1. When output_type is specified, the SDK builds a JSON schema from the Pydantic model
  2. The SDK sets tool_choice to "json_tool_call" to enforce structured output
  3. Groq's API doesn't support this specific tool_choice value
  4. The LiteLLM integration passes the parameter through without validating it against the target provider
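The mismatch in step 3–4 could be guarded against with provider-aware parameter filtering. The sketch below is purely illustrative (the function name and the set of Groq-supported values are assumptions, not SDK or Groq API facts; verify the accepted values against Groq's docs):

```python
# Hypothetical sketch: drop a tool_choice value the target provider does not
# accept before the request is sent. "GROQ_SUPPORTED_TOOL_CHOICES" is an
# assumption based on the error above, not taken from Groq's documentation.
GROQ_SUPPORTED_TOOL_CHOICES = {"none", "auto", "required"}

def sanitize_tool_choice(model: str, tool_choice):
    """Return tool_choice, or None if the provider would reject it."""
    if model.startswith("groq/") and isinstance(tool_choice, str):
        if tool_choice not in GROQ_SUPPORTED_TOOL_CHOICES:
            return None  # fall back to the provider's default behavior
    return tool_choice

print(sanitize_tool_choice("groq/openai/gpt-oss-120b", "json_tool_call"))  # None
print(sanitize_tool_choice("groq/openai/gpt-oss-120b", "auto"))            # auto
```

A check like this in the LiteLLM integration layer would avoid the 400 instead of surfacing it to the caller.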

Flaky Workaround

  1. Remove output_type and parse JSON manually:
    agent = Agent(
        name="Demo Agent",
        instructions="Respond with JSON in the format: {'message': '...', 'status': '...'}",
        tools=[dummy_tool],
        # Remove output_type
        model=LitellmModel(...)
    )
    
    # Parse manually after getting the response (requires `import json`)
    parsed = AgentResponse(**json.loads(result.final_output))
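The manual parse above is fragile because the model may wrap the JSON in markdown fences or extra prose. A slightly more defensive sketch, using pydantic v2's model_validate_json and a regex to pull out the JSON object (the regex fallback is an assumption about typical model output, not guaranteed behavior):

```python
import re
from pydantic import BaseModel

class AgentResponse(BaseModel):
    message: str
    status: str

def parse_response(raw: str) -> AgentResponse:
    # Extract the first {...} span, tolerating ```json fences or surrounding prose
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match is None:
        raise ValueError(f"No JSON object found in: {raw!r}")
    # pydantic validates types and required fields for us
    return AgentResponse.model_validate_json(match.group(0))

resp = parse_response('```json\n{"message": "hi", "status": "ok"}\n```')
print(resp.message)  # hi
```

This is still only a workaround; the underlying fix belongs in the SDK/LiteLLM layer so that output_type works with Groq directly.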
