
Conversation

shah-siddd (Contributor) commented Sep 10, 2025

Pull Request

Summary

Add a comprehensive LiteLLM integration to the Openlayer Python SDK, enabling automatic tracing and monitoring of completions across 100+ LLM providers through LiteLLM's unified interface.

Changes

  • Core Integration: Added litellm_tracer.py with full support for streaming and non-streaming completions
  • Public API: Added a trace_litellm() function to openlayer.lib for easy setup (see the usage sketch after this list)
  • Streaming Support: Collects usage data for streamed completions via stream_options={"include_usage": True}
  • Provider Detection: Multi-tier provider detection supporting all LiteLLM-compatible services
  • Example Documentation: Added comprehensive Jupyter notebook (litellm_tracing.ipynb) with multi-provider examples
  • Test Coverage: Complete test suite with 12 passing tests covering all functionality
  • Dependency Management: Added LiteLLM as optional dependency with conditional imports
  • Data Parity: Ensured 100% data consistency between streaming and non-streaming modes
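
For reference, a minimal usage sketch of the new API (the model string and the no-argument trace_litellm() call are illustrative assumptions, not prescriptive):

```python
import litellm
from openlayer.lib import trace_litellm

# Patch LiteLLM so that subsequent completion() calls are traced
trace_litellm()

# Non-streaming completion -- traced automatically
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Streaming completion -- include_usage asks LiteLLM to attach token
# usage to the final chunk, so the trace captures the same usage data
stream = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
    stream_options={"include_usage": True},
)
for chunk in stream:
    pass  # consume the stream; usage arrives with the last chunk
```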

Context

LiteLLM is a popular library that provides a unified interface to call 100+ LLM APIs (OpenAI, Anthropic, Google, AWS Bedrock, etc.) using the same input/output format. This integration allows users to:

  • Monitor multiple providers through a single integration point
  • Switch between LLM services without changing tracing setup
  • Compare performance across different providers and models
  • Reduce integration complexity by supporting all providers at once

This addresses the need for comprehensive LLM monitoring across diverse model providers in production environments.
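
To illustrate the single integration point: switching providers is just a change of model string, with no change to the tracing setup (the models below are illustrative):

```python
import litellm
from openlayer.lib import trace_litellm

trace_litellm()  # one-time tracing setup, independent of provider

# Identical call shape across providers; only the model string changes
for model in ("gpt-4o-mini", "anthropic/claude-3-5-sonnet-20240620", "groq/llama3-8b-8192"):
    litellm.completion(
        model=model,
        messages=[{"role": "user", "content": "Say hi."}],
    )
```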

Testing

  • Unit tests: All 12 LiteLLM integration tests passing
  • Integration tests: Conditional imports and dependency management tests passing
  • Core functionality: 21 core tracing tests passing (no regressions)
  • Manual testing:
    • ✅ Verified streaming vs non-streaming data parity (100% match)
    • ✅ Tested multi-provider support (OpenAI, Anthropic, Groq)
    • ✅ Validated cost calculation and token counting accuracy
    • ✅ Confirmed proper error handling and graceful fallbacks

Test Results:

tests/test_litellm_integration.py - 12/12 PASSED ✅
tests/test_integration_conditional_imports.py - 3/3 PASSED ✅ 
tests/test_tracing_core.py - 21/21 PASSED ✅

Key Technical Achievements:

  • 🎯 100% data parity between streaming and non-streaming modes
  • Proper streaming implementation using the official LiteLLM streaming usage API
  • 🔍 Comprehensive provider detection with multiple fallback strategies
  • 🛡️ Robust error handling maintaining system stability
  • 📊 Complete metadata capture including costs, tokens, latency, and provider info

"\n",
"Once you've run the examples above, you can:\n",
"\n",
"1. **Visit your OpenLayer dashboard** to see all the traced completions\n",
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Some instances of "OpenLayer" here and in other files (e.g., litellm_tracer.py and __init__.py)

return "unknown"


def detect_provider_from_model_name(model_name: str) -> str:
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The provider names (strings) you return here need to be identical to the providers the backend expect. Otherwise, we won't identify the provider correctly and trigger cost estimation.

The providers recognized on the backend are: Anthropic, Azure, Cohere, OpenAI, Google, Mistral, Groq, and Bedrock. While meta is currently unrecognized by the backend, I would return it capitalized, to follow the convention
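
For illustration, a hedged sketch of the normalization the reviewer describes (this mapping is an assumption for clarity, not the PR's actual code):

```python
# Canonical provider names the backend recognizes, per the review comment.
# "Meta" is included capitalized by convention even though the backend
# does not recognize it yet.
_CANONICAL_PROVIDERS = {
    "anthropic": "Anthropic",
    "azure": "Azure",
    "cohere": "Cohere",
    "openai": "OpenAI",
    "google": "Google",
    "mistral": "Mistral",
    "groq": "Groq",
    "bedrock": "Bedrock",
    "meta": "Meta",
}


def normalize_provider(raw: str) -> str:
    """Map a detected provider string to the backend's expected spelling."""
    return _CANONICAL_PROVIDERS.get(raw.lower(), "unknown")
```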

@gustavocidornelas merged commit 0851a6e into main on Sep 16, 2025 (5 checks passed).
@gustavocidornelas deleted the siddhant/open-7336-integrate-litellm-for-tracing branch on September 16, 2025 at 14:38.