💡 Feature Description
Allow Gemini Code Flow (and Gemini CLI) to use OpenRouter or other OpenAI-API-compatible LLM routing tools (LiteLLM is BYOK; Ollama requires a local GPU), so that users are not fully bound to Google.
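For context, "OpenAI-API-compatible" means a request differs only in base URL and key. Here is a minimal sketch of the kind of call this feature would enable, using the standard `openai` Python SDK against OpenRouter's OpenAI-compatible endpoint; the model ID and API key below are illustrative placeholders, not anything Gemini Code Flow supports today:

```python
from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible endpoint, so only the
# base URL and key change relative to a stock OpenAI client.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",  # placeholder; bring your own key
)

response = client.chat.completions.create(
    model="deepseek/deepseek-chat",  # example OpenRouter model ID
    messages=[{"role": "user", "content": "Refactor this function for clarity."}],
)
print(response.choices[0].message.content)
```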
🤔 Problem or Use Case
Gemini is not universally accessible, whether due to geographic restrictions or cost. Some models (e.g., DeepSeek and Qwen) are cheaper than Gemini and sometimes outperform it (see https://livebench.ai/#/), so offering alternative providers would widen access to Gemini Code Flow.
💭 Proposed Solution
Possible adoption of one of the following two proxy tools (a sketch of the translation layer such a proxy provides follows the list):
- https://github.com/RoderickGrc/gemini-cli-proxy-for-ollama
- https://github.com/csyangwen/OpenAI-to-Gemini-API-Proxy
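Whichever tool is adopted, the core job is the same: accept the Gemini-style `generateContent` request that Gemini CLI emits and re-shape it into an OpenAI `chat/completions` request for the chosen backend. Below is a rough sketch of that translation under the public Gemini and OpenAI REST schemas; it is illustrative only, not code taken from either repo:

```python
def gemini_to_openai(gemini_body: dict, model: str) -> dict:
    """Map a Gemini generateContent-style body to an OpenAI chat body."""
    role_map = {"user": "user", "model": "assistant"}  # Gemini calls the assistant role "model"
    messages = []

    # Gemini carries an optional system prompt in 'systemInstruction'
    system = gemini_body.get("systemInstruction")
    if system:
        text = "".join(p.get("text", "") for p in system.get("parts", []))
        messages.append({"role": "system", "content": text})

    # Each Gemini 'content' has a role and a list of text parts
    for content in gemini_body.get("contents", []):
        text = "".join(p.get("text", "") for p in content.get("parts", []))
        messages.append({
            "role": role_map.get(content.get("role", "user"), "user"),
            "content": text,
        })

    return {"model": model, "messages": messages}


# Example: what a Gemini CLI request might look like after translation
body = {
    "systemInstruction": {"parts": [{"text": "You are a coding assistant."}]},
    "contents": [{"role": "user", "parts": [{"text": "Explain this diff."}]}],
}
print(gemini_to_openai(body, model="deepseek/deepseek-chat"))
```

A real proxy would also need to translate streaming responses and tool calls back into Gemini's format, which is where most of the work in the linked projects lies.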
🔄 Alternatives Considered
- Hacking Claude Flow MCP to use Claude Code Router (see "How about adding something like Claude Code Router?", ruvnet/claude-flow#618)
- Hacking Claude Flow MCP (or Gemini alternatives) to be compatible with most IDEs
✨ Additional Context
A similar question was raised in clduab11/gemini-flow#7.
🎯 Impact
Who would benefit from this feature?
- New users
- Experienced users
- Developers
- Documentation
- Performance
- Other: Geoblocked users + Token-poor users
Thanks for helping make Gemini Code Flow better! 🚀