Lite llm #265

Open · wants to merge 12 commits into main
Conversation


@intern004-tellis commented Feb 14, 2025

Summary by CodeRabbit

  • New Features

    • Integrated LiteLLM support across agents and tools.
    • Added a new API endpoint for parsing operations.
  • Refactor

    • Updated configuration to replace legacy API keys with a unified LiteLLM model setting.
    • Streamlined provider handling and secret management to support fewer, standardized options.
  • Chores

    • Refreshed database credentials and runtime environment configurations.
    • Introduced several new dependencies to enhance system capabilities.


coderabbitai bot commented Feb 14, 2025

Walkthrough

The changes update multiple components to shift from using OpenAI-based configurations toward a unified LiteLLM setup. Database connection handling now uses a full URL via POSTGRES_URL. Environment variables, configuration methods, agent construction, and provider controls are revised to reference LiteLLM. Additionally, tool integrations, secret management provider options, and documentation (including a new parse endpoint) have been modified. Dependency and runtime specifications were updated, and docker-compose credentials were changed accordingly.
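For orientation, here is a minimal sketch of the setup the walkthrough describes, assuming the environment variable names used in this PR (LITELLM_MODEL, POSTGRES_URL) and the ChatLiteLLM wrapper from langchain_community; treat it as an illustration, not the PR's exact code:

import os

from langchain_community.chat_models import ChatLiteLLM
from sqlalchemy import create_engine

# Model selection is driven by a single environment variable.
litellm_model = os.getenv("LITELLM_MODEL")
if not litellm_model:
    raise ValueError("LITELLM_MODEL environment variable is not set")
llm = ChatLiteLLM(model=litellm_model)

# Database connections take a full URL instead of assembled host/user/password parts.
postgres_url = os.getenv("POSTGRES_URL")
if not postgres_url:
    raise ValueError("POSTGRES_URL environment variable is not set")
engine = create_engine(postgres_url)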

Changes

File(s) Change Summary
app/alembic/env.py Replaced POSTGRES_SERVER with POSTGRES_URL to use a complete connection string; minor control flow punctuation update.
app/core/config_provider.py, app/main.py Added new attribute lite_llm_Model and method get_litellm_config; updated environment variable check from "OPENAI_API_KEY" to "LITELLM_MODEL".
app/modules/intelligence/agents/agent_factory.py, app/modules/intelligence/agents/agent_injector_service.py Imported ChatLiteLLM (and ChatOllama), retrieved litellm_model from env, and added a new "lite_llm_agent" entry.
app/modules/intelligence/agents/agents/… (blast_radius, code_gen, debug_rag, integration_test, low_level_design, rag, unit_test)
Removed initialization of openai_api_key from class constructors, eliminating dependency on the OpenAI API key.
app/modules/intelligence/provider/provider_controller.py,
app/modules/intelligence/provider/provider_router.py,
app/modules/intelligence/provider/provider_service.py
Introduced and applied litellm_provider for provider operations; streamlined methods to use LiteLLM exclusively.
app/modules/intelligence/tools/tool_service.py Removed webpage_extractor_tool and github_tool; added new tool "Lite_LLM" using the completion import from litellm.
app/modules/key_management/secret_manager.py,
app/modules/key_management/secrets_schema.py
Updated provider types by removing "openai" from accepted values and adjusted validation accordingly.
docker-compose.yaml Updated environment variables: new POSTGRES_PASSWORD, changed POSTGRES_DB from momentum to railway, and updated NEO4J_AUTH.
docs/parsing.md Added new "Parse Directory" section documenting a POST /parse endpoint with request/response examples and error codes.
requirements.txt, runtime.txt Added new dependencies (e.g., psycopg2-binary, langchain_ollama, etc.) and specified Python version python-3.10.12.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant MainApp
    participant AgentFactory
    participant AgentInjector
    participant LiteLLM

    Client->>MainApp: Initiate application start
    MainApp->>AgentFactory: Request agent creation
    AgentFactory->>AgentInjector: Delegate construction of "lite_llm_agent"
    AgentInjector->>LiteLLM: Instantiate ChatLiteLLM using LITELLM_MODEL
    LiteLLM-->>AgentInjector: Return LiteLLM agent instance
    AgentInjector-->>AgentFactory: Agent created
    AgentFactory-->>MainApp: Return agent instance
sequenceDiagram
    participant User
    participant API_Gateway
    participant ParseEndpoint
    participant ParserService

    User->>API_Gateway: POST /parse with repo details
    API_Gateway->>ParseEndpoint: Forward ParsingRequest
    ParseEndpoint->>ParserService: Process parsing operation
    ParserService-->>ParseEndpoint: Return ParsingResponse
    ParseEndpoint-->>API_Gateway: Send response (status, project ID)
    API_Gateway-->>User: Deliver final response
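To make the second flow concrete, a minimal FastAPI sketch of the documented POST /parse endpoint; the ParsingRequest/ParsingResponse shapes and the parser_service helper are hypothetical stand-ins for the real schemas in docs/parsing.md:

from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()

class ParsingRequest(BaseModel):  # hypothetical shape
    repo_name: str
    branch_name: str

class ParsingResponse(BaseModel):  # hypothetical shape
    message: str
    status: str
    project_id: str

@router.post("/parse", response_model=ParsingResponse)
async def parse_directory(request: ParsingRequest) -> ParsingResponse:
    # Delegate to the parser service, then report status and project ID back.
    project_id = await parser_service.parse(request.repo_name, request.branch_name)  # assumed helper
    return ParsingResponse(message="Parsing started", status="submitted", project_id=project_id)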

Poem

I'm a rabbit hopping with delight,
New changes sprout, shining so bright!
LiteLLM now leads our code's parade,
Old keys and tools gently fade.
With each hop through revised config and flow,
I nibble on carrots and watch our progress grow!
🥕 Hoppy coding, off we go!


Quality Gate failed

Failed conditions
  • Reliability Rating on New Code: B (required ≥ A)
  • Security Rating on New Code: E (required ≥ A)

See analysis details on SonarQube Cloud


@intern004-tellis (Author) commented:

adding litellm

# Construct the database URL from environment variables
POSTGRES_SERVER = os.getenv("POSTGRES_SERVER", "localhost")
POSTGRES_URL = os.getenv("POSTGRES_SERVER", "postgresql://postgres:oGxDeZWeNtKpqPnqaHBAgJAxGHnAjhKm@postgres-production-77f9.up.railway.app:5432/railway")

Check failure

Code scanning / SonarCloud

PostgreSQL database passwords should not be disclosed (High)

Make sure this PostgreSQL database password gets changed and removed from the code. See more on SonarQube Cloud
@coderabbitai bot left a comment

Actionable comments posted: 10

🧹 Nitpick comments (9)
app/modules/key_management/secret_manager.py (2)

50-50: Use uppercase for environment variable names.

Environment variables should follow the convention of using uppercase letters.

Update the environment variable name:

-        if os.getenv("isDevelopmentMode") == "enabled":
+        if os.getenv("ISDEVELOPMENTMODE") == "enabled":

Also applies to: 188-188

🧰 Tools
🪛 Ruff (0.8.2)

50-50: Use capitalized environment variable ISDEVELOPMENTMODE instead of isDevelopmentMode

(SIM112)


185-186: Avoid Depends calls in argument defaults.

Ruff (B008) flags function calls in argument defaults. Note that calling get_db() directly inside the function body would return a generator rather than a Session, so the cleaner fix here is to read the defaults from module-level singletons:

db_dependency = Depends(get_db)
auth_dependency = Depends(AuthService.check_auth)

@router.delete("/secrets/{provider}")
def delete_secret(
    provider: Literal["anthropic", "deepseek", "all"],
    user=auth_dependency,
    db: Session = db_dependency,
):
    # Rest of the function...
🧰 Tools
🪛 Ruff (0.8.2)

185-185: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


186-186: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)

app/modules/intelligence/provider/provider_router.py (1)

37-37: Unnecessary parameter passing.

The provider_request parameter is unused now that the LiteLLM provider is forced.

Consider removing the unused parameter:

    async def set_global_ai_provider(
-       provider_request: SetProviderRequest,
        db: Session = Depends(get_db),
        user=Depends(AuthService.check_auth),
    ):
app/core/config_provider.py (1)

16-16: Follow Python naming conventions.

The variable name lite_llm_Model uses inconsistent casing. Use snake_case for variable names in Python.

-        self.lite_llm_Model = os.getenv("LITELLM_MODEL")
+        self.lite_llm_model = os.getenv("LITELLM_MODEL")
app/modules/intelligence/agents/agent_factory.py (1)

27-27: Remove unused import.

The ChatOllama import is not used in the code.

-from langchain_ollama import ChatOllama
🧰 Tools
🪛 Ruff (0.8.2)

27-27: langchain_ollama.ChatOllama imported but unused

Remove unused import: langchain_ollama.ChatOllama

(F401)

app/modules/intelligence/agents/agent_injector_service.py (1)

32-32: Remove unused import.

The ChatOllama import is not used in the code.

-from langchain_ollama import ChatOllama
🧰 Tools
🪛 Ruff (0.8.2)

32-32: langchain_ollama.ChatOllama imported but unused

Remove unused import: langchain_ollama.ChatOllama

(F401)

app/modules/intelligence/agents/agents/unit_test_agent.py (1)

26-26: Remove commented code.

Instead of keeping the commented line, remove it entirely as it's no longer needed with the transition to LiteLLM.

-        # self.openai_api_key = os.getenv("OPENAI_API_KEY")
app/modules/intelligence/agents/agents/low_level_design_agent.py (1)

65-65: Remove commented code.

Instead of keeping the commented line, remove it entirely as it's no longer needed with the transition to LiteLLM.

-        # self.openai_api_key = os.getenv("OPENAI_API_KEY")
docs/parsing.md (1)

85-87: Consider adding rate limiting information.

The additional notes section could benefit from information about:

  • Rate limiting policies
  • Maximum repository size limits
  • Timeout values
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between dfaa35e and 0b0441a.

📒 Files selected for processing (22)
  • app/alembic/env.py (2 hunks)
  • app/core/config_provider.py (1 hunks)
  • app/main.py (1 hunks)
  • app/modules/intelligence/agents/agent_factory.py (3 hunks)
  • app/modules/intelligence/agents/agent_injector_service.py (4 hunks)
  • app/modules/intelligence/agents/agents/blast_radius_agent.py (0 hunks)
  • app/modules/intelligence/agents/agents/code_gen_agent.py (0 hunks)
  • app/modules/intelligence/agents/agents/debug_rag_agent.py (0 hunks)
  • app/modules/intelligence/agents/agents/integration_test_agent.py (0 hunks)
  • app/modules/intelligence/agents/agents/low_level_design_agent.py (1 hunks)
  • app/modules/intelligence/agents/agents/rag_agent.py (1 hunks)
  • app/modules/intelligence/agents/agents/unit_test_agent.py (1 hunks)
  • app/modules/intelligence/provider/provider_controller.py (5 hunks)
  • app/modules/intelligence/provider/provider_router.py (2 hunks)
  • app/modules/intelligence/provider/provider_service.py (2 hunks)
  • app/modules/intelligence/tools/tool_service.py (2 hunks)
  • app/modules/key_management/secret_manager.py (4 hunks)
  • app/modules/key_management/secrets_schema.py (2 hunks)
  • docker-compose.yaml (2 hunks)
  • docs/parsing.md (1 hunks)
  • requirements.txt (1 hunks)
  • runtime.txt (1 hunks)
💤 Files with no reviewable changes (4)
  • app/modules/intelligence/agents/agents/debug_rag_agent.py
  • app/modules/intelligence/agents/agents/code_gen_agent.py
  • app/modules/intelligence/agents/agents/blast_radius_agent.py
  • app/modules/intelligence/agents/agents/integration_test_agent.py
✅ Files skipped from review due to trivial changes (1)
  • runtime.txt
🧰 Additional context used
🪛 Ruff (0.8.2)
app/modules/intelligence/agents/agent_factory.py

27-27: langchain_ollama.ChatOllama imported but unused

Remove unused import: langchain_ollama.ChatOllama

(F401)

app/modules/intelligence/agents/agent_injector_service.py

32-32: langchain_ollama.ChatOllama imported but unused

Remove unused import: langchain_ollama.ChatOllama

(F401)

app/modules/intelligence/provider/provider_service.py

8-8: app.modules.key_management.secret_manager.SecretManager imported but unused

Remove unused import: app.modules.key_management.secret_manager.SecretManager

(F401)


38-38: Local variable provider is assigned to but never used

Remove assignment to unused variable provider

(F841)


62-62: Local variable provider is assigned to but never used

Remove assignment to unused variable provider

(F841)

app/modules/key_management/secret_manager.py

50-50: Use capitalized environment variable ISDEVELOPMENTMODE instead of isDevelopmentMode

(SIM112)


185-185: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


186-186: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


188-188: Use capitalized environment variable ISDEVELOPMENTMODE instead of isDevelopmentMode

(SIM112)

🪛 GitHub Check: SonarCloud
app/alembic/env.py

[failure] 22-22: PostgreSQL database passwords should not be disclosed

Make sure this PostgreSQL database password gets changed and removed from the code.

See more on SonarQube Cloud

🔇 Additional comments (13)
app/modules/key_management/secrets_schema.py (3)

9-9: LGTM! Provider type updated correctly.

The provider type has been correctly updated to remove OpenAI support, aligning with the PR objectives to shift away from OpenAI-based configurations.


13-18: Verify API key validation for Deepseek provider.

The API key validation logic checks for "sk-ant-" (Anthropic) and general "sk-" prefixes, but doesn't specifically validate the "sk-or-" prefix for Deepseek keys that's checked in the validate_provider_and_api_key method.

Consider updating the validation to be consistent:

     def api_key_format(cls, v: str):
         if v.startswith("sk-ant-"):
             return v
+        elif v.startswith("sk-or-"):
+            return v
         elif v.startswith("sk-"):
             return v
         else:
             raise ValueError("Invalid API key format")
🧰 Tools
🪛 Ruff (0.8.2)

13-16: Combine if branches using logical or operator

Combine if branches

(SIM114)
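
If the branches are combined as Ruff suggests (SIM114), a sketch of the result; note that str.startswith accepts a tuple of prefixes, and listing "sk-ant-" and "sk-or-" alongside the general "sk-" check is redundant but keeps the accepted formats explicit:

def api_key_format(cls, v: str):
    # All accepted prefixes in one check; "sk-" alone would already cover the others.
    if v.startswith(("sk-ant-", "sk-or-", "sk-")):
        return v
    raise ValueError("Invalid API key format")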


24-31: LGTM! Provider validation is thorough.

The provider validation logic correctly validates API key formats for both Anthropic and Deepseek providers.

app/modules/key_management/secret_manager.py (2)

48-58: LGTM! Provider validation and secret ID generation are correct.

The method signature and validation logic have been properly updated to handle only Anthropic and Deepseek providers.

🧰 Tools
🪛 Ruff (0.8.2)

50-50: Use capitalized environment variable ISDEVELOPMENTMODE instead of isDevelopmentMode

(SIM112)


106-106: LGTM! Provider types consistently updated.

The method signatures and provider list have been consistently updated across all methods to remove OpenAI support.

Also applies to: 132-132, 184-192

app/modules/intelligence/agents/agents/rag_agent.py (1)

60-60:

✅ Verification successful

LGTM! Safely removed unused OpenAI API key.

The removal of the OpenAI API key initialization aligns with the migration to LiteLLM, and the key was not used elsewhere in the code. The LLM configuration is properly handled through the ProviderService.

Let's verify that there are no remaining OpenAI dependencies in the codebase:


🏁 Script executed:

#!/bin/bash
# Description: Search for any remaining OpenAI API key references or OpenAI-specific imports.

# Test: Search for OpenAI API key references
echo "Searching for OpenAI API key references..."
rg -i "openai.*api.*key"

# Test: Search for OpenAI imports
echo "Searching for OpenAI imports..."
rg -i "^from\s+openai\s+import|^import\s+openai"

# Test: Search for OpenAI configuration
echo "Searching for OpenAI configuration..."
rg -i "openai\..*config"

Length of output: 873


Approval Notice: OpenAI API Key Removal Validated

The removal of the OpenAI API key initialization in rag_agent.py is confirmed safe. Our search shows that while commented references still exist in this file (and similarly in a couple of other agent files and docs), there are no active dependencies on the OpenAI key—LLM configuration is now exclusively managed via ProviderService. These commented lines serve as historical references and do not impact current functionality.

  • File Reference: app/modules/intelligence/agents/agents/rag_agent.py (line 60)
  • Verification: No active OpenAI dependencies remain in use.
app/modules/intelligence/provider/provider_controller.py (1)

47-52:

❓ Verification inconclusive

Verify model_type consistency.

The get_preferred_llm method returns a tuple with model_type, but it's not clear if this is compatible with LiteLLM's model types.


🏁 Script executed:

#!/bin/bash
# Check model type usage across the codebase
rg -A 2 "model_type" --type python

Length of output: 65


The first run failed because ripgrep has no built-in python file type (its type name is py). I'll re-run the verification with a glob pattern instead, to gather the necessary details on how model_type is used across the codebase:

Once the output is available, we can verify whether the model_type returned by get_preferred_llm is consistent with LiteLLM’s expected model types. This step is crucial for ensuring that the value passed to GetProviderResponse aligns with downstream expectations.


🏁 Script executed:

#!/bin/bash
# Re-run search for "model_type" in all Python files using a glob filter.
rg -A 2 "model_type" -g "*.py"

Length of output: 754


Action Required: Verify LiteLLM Model Type Consistency

The updated search confirms that the code uses model_type as a string (see its definition in app/modules/intelligence/provider/provider_schema.py). In the controller (lines 47–52), the value returned from await self.service.get_preferred_llm(user_id) is directly assigned to model_type in the response. Please verify that this string value is fully compatible with LiteLLM’s expected model type conventions. If LiteLLM enforces a specific set of supported values (or an enum), ensure that the value provided here adheres to those restrictions.

  • Location for review:
    • app/modules/intelligence/provider/provider_controller.py: Lines 47–52
    • app/modules/intelligence/provider/provider_schema.py: model_type definition
app/modules/intelligence/agents/agent_factory.py (1)

75-77:

✅ Verification successful

Verify agent configuration consistency.

The lite_llm_agent configuration differs from other agents which use both mini_llm and reasoning_llm. Verify if this is intentional or if it should follow the same pattern.


🏁 Script executed:

#!/bin/bash
# Search for other LiteLLM agent configurations in the codebase
rg -A 5 "ChatLiteLLM.*model.*litellm_model"

Length of output: 548


Agent configuration appears intentional and consistent.

After verifying with the search results, both the agent_factory.py and provider_service.py files use the same ChatLiteLLM(model=litellm_model) configuration. There is no evidence that additional parameters like mini_llm or reasoning_llm should be added for the lite_llm_agent. If the simplified configuration was intended, no changes are required. Otherwise, please confirm if additional LLM configurations are desired.

  • Location Checked:
    • app/modules/intelligence/agents/agent_factory.py (lines 75-77)
    • app/modules/intelligence/provider/provider_service.py (usage of ChatLiteLLM(model=litellm_model))
app/modules/intelligence/agents/agent_injector_service.py (1)

63-65: LGTM!

The lite_llm_agent configuration matches the implementation in agent_factory.py.

app/main.py (1)

94-96:

❓ Verification inconclusive

Verify if additional environment variables are required for LiteLLM.

The change from OPENAI_API_KEY to LITELLM_MODEL aligns with the transition to LiteLLM. However, LiteLLM might require additional environment variables for proper configuration.


🌐 Web query:

What environment variables are required for LiteLLM configuration?

💡 Result:

LiteLLM requires specific environment variables depending on the LLM provider and deployment setup. Here's a breakdown of essential configurations:

Core Configuration

  • Provider API Keys: Required for authentication with respective services:
    • OPENAI_API_KEY (OpenAI) [1][4][5][7]
    • ANTHROPIC_API_KEY (Anthropic) [1][7]
    • AZURE_API_KEY (Azure OpenAI) [3][7]
    • REPLICATE_API_KEY (Replicate) [1][7]
    • TOGETHERAI_API_KEY (Together AI) [1][7]
    • XAI_API_KEY (xAI/Grok) [5]

Provider-Specific Settings

For Azure OpenAI deployments:

os.environ['AZURE_API_BASE'] = "your-endpoint-url"  # Required[1][3][7]
os.environ['AZURE_API_VERSION'] = "2023-05-15"     # Optional[1][7]

For Google Vertex AI:

os.environ['VERTEX_PROJECT'] = "your-project-id"    # Required[5]
os.environ['VERTEX_LOCATION'] = "us-central1"       # Required[5]

Proxy Configuration

  • LITELLM_MASTER_KEY: Authentication key for proxy server[3][8][9]
  • DATABASE_URL: For persistent logging (optional)[3]

Advanced Configuration

Variable Type Example Variables Purpose
API Base URLs OPENAI_API_BASE, NVIDIA_NIM_API_BASE Custom endpoints[1][4][5]
Organization IDs OPENAI_ORGANIZATION Team management[4]
Telemetry OTEL_SERVICE_NAME OpenTelemetry integration[6]
Server Config HOST, PORT, NUM_WORKERS Deployment settings[10]

Best Practices:

  1. Store sensitive keys in environment variables rather than code[1][2][4]
  2. Use .env files with python-dotenv for local development[4]
  3. Reference variables in YAML configs using os.environ["VAR_NAME"] syntax[1][2]

For proxy deployments, the minimal required variables are LITELLM_MASTER_KEY and provider-specific API keys[3][8]. Cloud deployments might require additional parameters like project IDs and regions based on your infrastructure setup[5][7].



Attention: Verify Additional LiteLLM Environment Variables

The transition replacing OPENAI_API_KEY with LITELLM_MODEL is appropriate for LiteLLM. However, LiteLLM's configuration can require additional environment variables depending on your deployment and LLM provider. For instance:

  • Provider-Specific API Keys: Depending on which provider you're using, you may need to set variables like OPENAI_API_KEY, ANTHROPIC_API_KEY, AZURE_API_KEY, REPLICATE_API_KEY, TOGETHERAI_API_KEY, or XAI_API_KEY.
  • Azure/OpenAI Configurations: For Azure setups, consider adding AZURE_API_BASE (and optionally AZURE_API_VERSION).
  • Google Vertex AI Deployments: Variables such as VERTEX_PROJECT and VERTEX_LOCATION might be necessary.
  • Proxy and Advanced Settings: If you're using a LiteLLM proxy, ensure you also define LITELLM_MASTER_KEY, and possibly DATABASE_URL among others for logging or custom endpoints.

Please review your deployment requirements and update the environment variable configuration accordingly.
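
As a follow-up, a sketch of a startup check that fails fast when required variables are missing; the per-provider variable names are assumptions to adjust for your deployment:

import os

# Assumed names; extend per the providers actually in use.
REQUIRED = ["LITELLM_MODEL"]
PROVIDER_KEYS = {
    "openrouter": ["OPENROUTER_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "azure": ["AZURE_API_KEY", "AZURE_API_BASE"],
}

def validate_litellm_env() -> None:
    provider = os.getenv("LITELLM_PROVIDER", "openrouter")
    missing = [v for v in REQUIRED + PROVIDER_KEYS.get(provider, []) if not os.getenv(v)]
    if missing:
        raise RuntimeError(f"Missing environment variables for LiteLLM: {', '.join(missing)}")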

docker-compose.yaml (1)

8-8: Verify database name change implications.

The change from 'momentum' to 'railway' database name might require data migration. Please ensure that:

  1. Data migration scripts are in place
  2. Application configurations are updated accordingly
  3. Backup of the old database is created before migration
requirements.txt (1)

6-6:

❓ Verification inconclusive

Verify compatibility between LiteLLM and Langchain packages.

The PR shifts from OpenAI to LiteLLM. Please verify the compatibility between:

  • litellm==1.59.9
  • langchain==0.3.16
  • langchain-core==0.3.32
  • langchain-openai==0.3.2
  • langchain_ollama==0.2.2

Also applies to: 31-31, 34-43


🌐 Web query:

What are the compatible versions between LiteLLM and Langchain packages?

💡 Result:

LiteLLM and LangChain maintain compatibility through specific version constraints across their ecosystem of packages. Based on current documentation:

Core Package Compatibility

Package Current Version Compatibility Notes
langchain-core 0.3.x Required by LangChain v0.3+ ([8][9]). LiteLLM works with both 0.2.x and 0.3.x via proxy integration ([4][7]).
langchain 0.3.x Requires langchain-core >=0.3 ([8]). Compatible with LiteLLM through ChatLiteLLM integration ([1]).
langgraph 0.2.x Works with langchain-core 0.2.x or 0.3.x ([8][9]).
langchain-community 0.3.x Depends on langchain-core >=0.3 ([8]).

Key Integration Points

  1. LiteLLM Proxy works with:

    • LangChain via OpenAI-compatible API endpoints ([3][4])
    • Requires openai>=1.0 client library ([10])
    • Supports both Pydantic v1 and v2 ([10])
  2. Version Conflicts Resolution:

# Recommended installation pattern
pip install "langchain>=0.3" "langchain-core>=0.3" "langgraph>=0.2" litellm

Critical Version Constraints

  • Avoid mixing langchain-core 0.2.x with langgraph <0.2 due to dependency conflicts ([9])
  • LiteLLM requires:
    • openai>=1.0 for SDK integrations ([10])
    • pydantic>=2 for data validation ([10])

For production systems, use the latest stable versions of both ecosystems and follow LangChain's official dependency matrix when combining multiple components[8]. Breaking changes in LangChain are typically announced in advance for non-beta interfaces[2][5].



Action Required: Confirm Compatibility Constraints

Based on the latest compatibility documentation, the specified versions appear to be within the supported ranges:

  • litellm==1.59.9 integrates properly with LangChain’s 0.3.x ecosystem.
  • langchain==0.3.16 and langchain-core==0.3.32 satisfy the core requirements, as LangChain v0.3+ requires langchain-core>=0.3.
  • langchain-openai==0.3.2 and langchain_ollama==0.2.2 also fall in line with the expected integration points.

Please double-check that:

  • Any additional dependencies (e.g., openai>=1.0 and the relevant pydantic version) required by LiteLLM are met.
  • There are no conflicts with other ecosystem packages (such as langgraph) if they are used elsewhere in the project.

If all these constraints are validated against the official LiteLLM and LangChain documentation, this set of versions should be acceptable for the PR shift from OpenAI to LiteLLM.

docs/parsing.md (1)

7-47: LGTM! Well-structured API documentation.

The documentation for the /parse endpoint is comprehensive and follows best practices:

  • Clear request/response schemas with field descriptions
  • Practical examples provided
  • Status codes well documented

# Construct the database URL from environment variables
POSTGRES_SERVER = os.getenv("POSTGRES_SERVER", "localhost")
POSTGRES_URL = os.getenv("POSTGRES_SERVER", "postgresql://postgres:oGxDeZWeNtKpqPnqaHBAgJAxGHnAjhKm@postgres-production-77f9.up.railway.app:5432/railway")

⚠️ Potential issue

Critical: Remove hardcoded database credentials.

The hardcoded PostgreSQL connection string contains sensitive credentials that should not be in the source code. This poses a significant security risk.

  1. Move the connection string to a secure environment variable or configuration file.
  2. Remove the hardcoded credentials immediately.
  3. Rotate the exposed database credentials.

Apply this diff:

-POSTGRES_URL = os.getenv("POSTGRES_SERVER", "postgresql://postgres:oGxDeZWeNtKpqPnqaHBAgJAxGHnAjhKm@postgres-production-77f9.up.railway.app:5432/railway")
+POSTGRES_URL = os.getenv("POSTGRES_URL")
+if not POSTGRES_URL:
+    raise ValueError("POSTGRES_URL environment variable is not set")
🧰 Tools
🪛 GitHub Check: SonarCloud

[failure] 22-22: PostgreSQL database passwords should not be disclosed

Make sure this PostgreSQL database password gets changed and removed from the code.

See more on SonarQube Cloud

from langchain_deepseek import ChatDeepSeek
from langchain_openai.chat_models import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
from langchain_community.chat_models import ChatLiteLLM

from app.modules.key_management.secret_manager import SecretManager

🛠️ Refactor suggestion

Remove unused import and improve error handling.

The SecretManager import is unused, and the LiteLLM initialization lacks proper error handling.

Apply these changes:

-from app.modules.key_management.secret_manager import SecretManager
...
    def _initialize_llm(self, size: str):
        """Initialize LiteLLM model with OpenRouter as the provider."""
        litellm_model = os.getenv("LITELLM_MODEL")
+       if not litellm_model:
+           raise ValueError("LITELLM_MODEL environment variable is not set")
        provider = os.getenv("LITELLM_PROVIDER", "openrouter")
        logging.info(f"Initializing LiteLLM with model {litellm_model} and provider {litellm_provider}")
        self.llm = ChatLiteLLM(model=litellm_model)
        return self.llm

Also applies to: 59-65

🧰 Tools
🪛 Ruff (0.8.2)

8-8: app.modules.key_management.secret_manager.SecretManager imported but unused

Remove unused import: app.modules.key_management.secret_manager.SecretManager

(F401)

    ]

    async def set_global_ai_provider(self, user_id: str, provider: str):
        provider = provider.lower()
        # First check if preferences exist
        provider = litellm_provider  # Force all users to use LiteLLM

🛠️ Refactor suggestion

Remove unused variables.

The provider variable is assigned but never used in multiple locations.

Apply these changes:

-       provider = litellm_provider # Force all users to use LiteLLM
...
-       provider = os.getenv("LITELLM_PROVIDER", "openrouter")

Also applies to: 62-62

🧰 Tools
🪛 Ruff (0.8.2)

38-38: Local variable provider is assigned to but never used

Remove assignment to unused variable provider

(F841)

Comment on lines 16 to +19
    async def list_available_llms(self) -> List[ProviderInfo]:
        try:
            providers = await self.service.list_available_llms()
-           return providers
+           return litellm_provider

⚠️ Potential issue

Fix type mismatch in list_available_llms.

The method is declared to return List[ProviderInfo] but returns a string.

Apply this fix:

    async def list_available_llms(self) -> List[ProviderInfo]:
        try:
            providers = await self.service.list_available_llms()
-           return litellm_provider
+           return providers
🧰 Tools
🪛 Ruff (0.8.2)

18-18: Local variable providers is assigned to but never used

Remove assignment to unused variable providers

(F841)

Comment on lines +22 to +25
    def get_litellm_config(self):
        return {
            "model": self.lite_llm_Model,
        }

🛠️ Refactor suggestion

Add validation for required configuration.

The LiteLLM model configuration is required for the application to function. Add validation to ensure the environment variable is set.

     def get_litellm_config(self):
+        if not self.lite_llm_model:
+            raise ValueError("LITELLM_MODEL environment variable is not set")
         return {
-            "model": self.lite_llm_Model,
+            "model": self.lite_llm_model,
         }
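
With that validation in place, a call-site sketch (assuming the snake_case attribute from the earlier naming fix):

from langchain_community.chat_models import ChatLiteLLM

config = config_provider.get_litellm_config()  # raises if LITELLM_MODEL is unset
llm = ChatLiteLLM(model=config["model"])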

Comment on lines +71 to +73
"Lite_LLM": completion(
model=""
),

⚠️ Potential issue

Fix tool configuration and naming.

The tool has the following issues:

  1. Empty model string will cause runtime errors
  2. Tool name uses inconsistent casing (should be snake_case)
-            "Lite_LLM": completion(
-                model=""
-            ),
+            "lite_llm": completion(
+                model=os.getenv("LITELLM_MODEL")
+            ),

Additionally, add validation to ensure the model is configured:

if not os.getenv("LITELLM_MODEL"):
    raise ValueError("LITELLM_MODEL environment variable is not set")
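
One more hedge: litellm.completion issues a request when called and requires a messages argument, so registering the bare call in the tool map would execute (and fail) at construction time. A sketch of deferring it with functools.partial:

import os
from functools import partial

from litellm import completion

litellm_model = os.getenv("LITELLM_MODEL")
if not litellm_model:
    raise ValueError("LITELLM_MODEL environment variable is not set")

# Store a callable; the request only happens when the tool is invoked.
lite_llm = partial(completion, model=litellm_model)

# At call time:
# response = lite_llm(messages=[{"role": "user", "content": "Summarize this diff"}])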

Comment on lines +7 to +8
      POSTGRES_PASSWORD: oGxDeZWeNtKpqPnqaHBAgJAxGHnAjhKm
      POSTGRES_DB: railway

⚠️ Potential issue

Move sensitive credentials to environment variables.

Hardcoding database credentials in the docker-compose file is a security risk. Consider using environment variables or a secrets management solution.

-      POSTGRES_PASSWORD: oGxDeZWeNtKpqPnqaHBAgJAxGHnAjhKm
-      POSTGRES_DB: railway
+      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
+      POSTGRES_DB: ${POSTGRES_DB}
-      NEO4J_AUTH: neo4j/o7sODlqFQuNRwMNnVv9Pjf7pqkAgKiBnAZnpJymkQQQ
+      NEO4J_AUTH: neo4j/${NEO4J_PASSWORD}

Also applies to: 23-23
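
A sketch of the matching .env file (kept out of version control; names assumed to match the substitutions above, with the leaked values rotated first):

POSTGRES_PASSWORD=<rotated-secret>
POSTGRES_DB=railway
NEO4J_PASSWORD=<rotated-secret>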

@@ -1,10 +1,16 @@
fastapi[all]==0.115.6
instructor==1.5.2
jiter==0.5.0
psycopg2-binary==2.9.10

⚠️ Potential issue

Remove duplicate entry for psycopg2-binary.

The package psycopg2-binary is listed twice with the same version. Remove one of the duplicate entries.

 psycopg2-binary==2.9.10
 psycopg2==2.9.10
 litellm==1.59.9
 joblib==1.4.2
 json_repair==0.35.0
 langsmith==0.3.3
 langchain_ollama==0.2.2
 langchain-core==0.3.32
 ollama==0.4.7
 sentry_sdk==2.20.0
 json5==0.9.28
 kombu==5.4.2
 uvicorn==0.32.1
 sqlalchemy==2.0.36
 alembic==1.14.0
 gunicorn==23.0.0
 python-dotenv==1.0.1
 postgres==4.0
-psycopg2-binary==2.9.10

Also applies to: 22-22

litellm==1.59.9
joblib==1.4.2
json_repair==0.35.0
langsmith==0.3.3
langchain_ollama==0.2.2
langchain-core==0.3.32

⚠️ Potential issue

Remove duplicate entry for langchain-core.

The package langchain-core is listed twice with the same version. Remove one of the duplicate entries.

 langsmith==0.3.3
 langchain_ollama==0.2.2
-langchain-core==0.3.32
 ollama==0.4.7
 sentry_sdk==2.20.0

Also applies to: 38-38

langchain_ollama==0.2.2
langchain-core==0.3.32
ollama==0.4.7
sentry_sdk==2.20.0

⚠️ Potential issue

Consolidate sentry-sdk entries.

The package sentry_sdk is listed twice, once as a basic package and once with FastAPI extras. Consolidate these into a single entry with FastAPI extras.

 langchain_ollama==0.2.2
 langchain-core==0.3.32
 ollama==0.4.7
-sentry_sdk==2.20.0

Also applies to: 68-68
