Add exclude_params feature to function_tool for hidden parameter injection #795

Open · wants to merge 1 commit into `main`

Conversation

chiehmin-wei

Summary

This PR adds an exclude_params feature to function_tool and function_schema, allowing developers to hide specific parameters from the LLM while still making them available to the function with their default values.

Motivation

When building AI agents for production systems, there's often a need to pass context-sensitive parameters (like timestamps, user IDs, or environment flags) to function tools without exposing these implementation details to the LLM. This is particularly important for evaluation scenarios where you want to ensure the agent behaves authentically without being influenced by "meta" parameters.

Use Case: Incident Response Bot Evaluation

Our primary use case is an incident response bot that reacts to PagerDuty alerts and investigates root causes using tools like search_slack. During evaluation, we run the bot against historical incidents to measure its performance. However, we face a critical challenge:

The Problem:

  • When evaluating on a 10-day-old incident, Slack contains messages posted after the incident occurred
  • These messages often reveal the root cause that wasn't available during the actual incident
  • The AI agent could "cheat" by accessing this future knowledge, making evaluations unrealistic

The Solution:
We need to pass a timestamp parameter to search_slack to filter results to only show messages available at incident time. However:

  • Each incident in our evaluation dataset occurred at different times
  • The timestamp parameter must be dynamically set per evaluation
  • The LLM agent should remain unaware of this filtering - it should always believe it's triaging a "current" incident

Example Implementation:

```python
from typing import Any, Dict, Optional

@function_tool(exclude_params=["timestamp"])
def search_slack(query: str, timestamp: Optional[str] = None) -> Dict[str, Any]:
    """
    Search Slack messages for incident-related information.

    Args:
        query: Search query for Slack messages
        timestamp: Internal parameter for filtering messages (hidden from LLM)
    """
    # Filter messages to only show those posted before the timestamp.
    # In evaluation: timestamp = incident_time
    # In production: timestamp = current_time
    return search_slack_with_time_filter(query, timestamp)
```

With `exclude_params=["timestamp"]`, the LLM only sees:

```json
{
  "name": "search_slack",
  "parameters": {
    "type": "object",
    "properties": {
      "query": {"type": "string", "description": "Search query for Slack messages"}
    },
    "required": ["query"]
  }
}
```

The agent calls `search_slack("API errors in payment service")` while the function receives both `query` and the pre-configured `timestamp`.
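How the hidden value gets set per run is outside this diff; as an assumption about intended usage, a caller could bind a per-evaluation default before registering the tool, for example with `functools.partial` (the names and the timestamp value below are illustrative, and whether `function_tool` accepts a `partial` directly depends on its introspection — treat this purely as a sketch of the binding pattern):

```python
from functools import partial
from typing import Any, Dict, Optional

def search_slack(query: str, timestamp: Optional[str] = None) -> Dict[str, Any]:
    # Stand-in for the real tool body; just echoes its inputs.
    return {"query": query, "timestamp": timestamp}

# Per-evaluation setup: bind the incident's timestamp as the hidden default.
incident_time = "2025-05-20T14:03:00Z"  # hypothetical incident timestamp
eval_search_slack = partial(search_slack, timestamp=incident_time)

# The agent still calls the tool with only the visible parameter:
result = eval_search_slack("API errors in payment service")
```

In production the same pattern would bind `timestamp` to the current time instead, leaving the agent-facing interface identical across environments.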

Implementation Details

  • Added exclude_params parameter to both function_schema() and function_tool() decorator
  • Parameter validation: Ensures excluded parameters have default values (required for function execution)
  • Schema filtering: Excluded parameters are removed from the JSON schema presented to the LLM
  • Backwards compatible: Existing code continues to work unchanged
  • Comprehensive testing: Added test coverage for various scenarios
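The filtering and validation steps listed above can be modeled in isolation. The sketch below is a simplified stand-alone illustration, not the PR's actual code — the real `function_schema()` also handles docstring parsing and richer type mapping — but it shows the two rules the list describes: excluded parameters are dropped from the schema, and they must carry default values:

```python
import inspect
from typing import Any, Callable, Dict, List, Optional

def build_schema(func: Callable, exclude_params: Optional[List[str]] = None) -> Dict[str, Any]:
    """Build a minimal JSON schema for func's parameters, dropping excluded ones."""
    exclude = set(exclude_params or [])
    sig = inspect.signature(func)
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}

    properties: Dict[str, Any] = {}
    required: List[str] = []
    for name, param in sig.parameters.items():
        if name in exclude:
            # Validation: an excluded parameter needs a default, because the
            # LLM can never supply it and the call must still succeed.
            if param.default is inspect.Parameter.empty:
                raise ValueError(f"excluded parameter {name!r} must have a default value")
            continue
        properties[name] = {"type": type_map.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)

    return {"type": "object", "properties": properties, "required": required}
```

Backwards compatibility falls out naturally: when `exclude_params` is omitted, the exclusion set is empty and the schema is generated exactly as before.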

Files Changed

  • src/agents/function_schema.py: Core schema generation logic
  • src/agents/tool.py: function_tool decorator implementation
  • docs/tools.md: Documentation and examples
  • tests/test_function_schema.py: Schema generation tests
  • tests/test_function_tool.py: function_tool decorator tests

Benefits

  1. Authentic evaluations: Agents can't access future information during historical incident analysis
  2. Clean abstractions: Implementation details stay hidden from LLM reasoning
  3. Flexible parameter injection: Different contexts (eval vs prod) can inject different values
  4. Maintained agent behavior: Agent code remains identical across environments

This commit adds the exclude_params feature to function_tool and function_schema, allowing users to exclude specific parameters from the JSON schema presented to the LLM while still making them available to the function with their default values.

- Added exclude_params parameter to function_schema and function_tool
- Modified parameter processing to skip excluded parameters
- Added validation to ensure excluded parameters have default values
- Updated documentation in tools.md with examples
- Added tests for the new feature

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
@chiehmin-wei chiehmin-wei changed the title Add exclude_params feature to function_tool Add exclude_params feature to function_tool for hidden parameter injection May 30, 2025