LLM Gateway is an open-source API gateway for Large Language Models (LLMs). It acts as middleware between your applications and various LLM providers, allowing you to:
- Route requests to multiple LLM providers (OpenAI, Anthropic, Google Vertex AI, and others)
- Manage API keys for different providers in one place
- Track token usage and costs across all your LLM interactions
- Analyze performance metrics to optimize your LLM usage

Key features:

- Unified API Interface: Compatible with the OpenAI API format for seamless migration
- Usage Analytics: Track requests, tokens used, response times, and costs
- Multi-provider Support: Connect to various LLM providers through a single gateway
- Performance Monitoring: Compare different models' performance and cost-effectiveness
You can use LLM Gateway in two ways:
- Hosted Version: For immediate use without setup, visit [llmgateway.io](https://llmgateway.io) to create an account and get an API key.
- Self-Hosted: Deploy LLM Gateway on your own infrastructure for complete control over your data and configuration.

Example request:

```http
POST https://api.llmgateway.io/v1/chat/completions
Content-Type: application/json
Authorization: Bearer your-llmgateway-api-key

{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "Hello, how are you?"}
  ]
}
```
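
Since the gateway speaks the OpenAI API format, an existing OpenAI client can usually be pointed at it by overriding the base URL. Below is a minimal TypeScript sketch using the official `openai` package; the base URL and key mirror the request above, and the model name is illustrative rather than a statement of which models the gateway exposes.

```typescript
import OpenAI from "openai";

// Sketch: reuse the standard OpenAI SDK, but point it at LLM Gateway.
// The base URL and bearer token mirror the raw HTTP example above.
const client = new OpenAI({
  apiKey: process.env.LLMGATEWAY_API_KEY, // your LLM Gateway API key
  baseURL: "https://api.llmgateway.io/v1",
});

async function main() {
  const completion = await client.chat.completions.create({
    // Switching providers is a matter of changing this model string to
    // another model exposed by the gateway (the name here is illustrative).
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello, how are you?" }],
  });

  console.log(completion.choices[0].message.content);

  // Token counts come back in the standard `usage` field of the response,
  // the same kind of data surfaced by the gateway's usage analytics.
  console.log(completion.usage);
}

main().catch(console.error);
```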

To set up the project locally:

- Install dependencies:

  ```bash
  pnpm install
  ```

- Start development servers:

  ```bash
  pnpm dev
  ```

- Build for production:

  ```bash
  pnpm build
  ```

The repository is organized as follows:

- `apps/ui`: Vite + React frontend
- `apps/api`: Hono backend
- `apps/gateway`: API gateway for routing LLM requests
- `apps/docs`: Documentation site
- `packages/db`: Drizzle ORM schema and migrations
- `packages/models`: Model and provider definitions
- `packages/shared`: Shared types and utilities

This project is licensed under the MIT License - see the LICENSE file for details.