Note: Cortex CLI is a fork of Google's Gemini CLI with additional Ollama support and enhanced features.
Cortex CLI is an open-source AI agent that brings the power of multiple AI providers directly into your terminal. It provides lightweight access to Gemini, Ollama, and other AI providers, giving you the most direct path from your prompt to various AI models.
- 🎯 Multiple AI Providers: Support for Gemini, Ollama, and more
- 🧠 Powerful Models: Access to Gemini 2.5 Pro, Llama, CodeLlama, and local models via Ollama
- 🔧 Built-in Tools: Google Search grounding, file operations, shell commands, web fetching
- 🔌 Extensible: MCP (Model Context Protocol) support for custom integrations
- 💻 Terminal-first: Designed for developers who live in the command line
- 🛡️ Open source: Apache 2.0 licensed (forked from Google's Gemini CLI)
```bash
# Using npx (no installation required)
npx https://github.com/MarkCodering/cortex-cli

# Or install globally with npm
npm install -g @markcodering/cortex-cli

# Note: Homebrew formula coming soon
# brew install cortex-cli
```
- Node.js version 20 or higher
- macOS, Linux, or Windows
See Releases for more details.
New preview releases will be published each week at UTC 2359 on Tuesdays. These releases will not have been fully vetted and may contain regressions or other outstanding issues. Please help us test them by installing with the `preview` tag:

```bash
npm install -g @google/gemini-cli@preview
```
New stable releases will be published each week at UTC 2000 on Tuesdays. Each stable release is the full promotion of the previous week's preview release, plus any bug fixes and validations. Use the `latest` tag:

```bash
npm install -g @google/gemini-cli@latest
```
New nightly releases will be published at UTC 0000 each day, containing all changes from the main branch as of the time of release. Assume there are pending validations and outstanding issues. Use the `nightly` tag:

```bash
npm install -g @google/gemini-cli@nightly
```
- Query and edit large codebases
- Generate new apps from PDFs, images, or sketches using multimodal capabilities
- Debug issues and troubleshoot with natural language
- Automate operational tasks like querying pull requests or handling complex rebases
- Use MCP servers to connect new capabilities, including media generation with Imagen, Veo or Lyria
- Run non-interactively in scripts for workflow automation
- Ground your queries with built-in Google Search for real-time information
- Conversation checkpointing to save and resume complex sessions
- Custom context files (GEMINI.md) to tailor behavior for your projects
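Custom context files shape how the CLI responds within a given project. As a purely illustrative sketch (the conventions below are hypothetical, not prescribed by Cortex CLI), a project's `GEMINI.md` might look like:

```markdown
# Project context

- This is a TypeScript monorepo; prefer `pnpm` commands over `npm`.
- Follow the existing ESLint configuration when suggesting code.
- Tests live in `__tests__/` directories next to the source files.
```

The CLI picks up this context when working in the project, so suggestions follow your stated conventions without repeating them in every prompt.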
Integrate Gemini CLI directly into your GitHub workflows with Gemini CLI GitHub Action:
- Pull Request Reviews: Automated code review with contextual feedback and suggestions
- Issue Triage: Automated labeling and prioritization of GitHub issues based on content analysis
- On-demand Assistance: Mention `@gemini-cli` in issues and pull requests for help with debugging, explanations, or task delegation
- Custom Workflows: Build automated, scheduled, and on-demand workflows tailored to your team's needs
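As a sketch of the on-demand pattern, a workflow could trigger the agent when someone mentions it in an issue comment. The action path, version, and input names below are assumptions based on the upstream Gemini CLI action; check that action's own documentation for the real interface:

```yaml
name: gemini-cli-assist
on:
  issue_comment:
    types: [created]
jobs:
  assist:
    if: contains(github.event.comment.body, '@gemini-cli')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Action name and inputs are illustrative, not verified here.
      - uses: google-github-actions/run-gemini-cli@v0
        with:
          gemini_api_key: ${{ secrets.GEMINI_API_KEY }}
```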
Choose the authentication method that best fits your needs:
✨ Best for: Individual developers as well as anyone who has a Gemini Code Assist License.
Benefits:
- Free tier: 60 requests/min and 1,000 requests/day
- Gemini 2.5 Pro with 1M token context window
- No API key management - just sign in with your Google account
- Automatic updates to latest models
```bash
cortex
```
If you are using a paid Code Assist License from your organization, remember to set your Google Cloud project:

```bash
# Set your Google Cloud Project
export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_NAME"
cortex
```
✨ Best for: Developers who need specific model control or paid tier access
Benefits:
- Free tier: 100 requests/day with Gemini 2.5 Pro
- Model selection: Choose specific Gemini models
- Usage-based billing: Upgrade for higher limits when needed
```bash
# Get your key from https://aistudio.google.com/apikey
export GEMINI_API_KEY="YOUR_API_KEY"
cortex
```
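Because the CLI reads its key from the environment, a small guard can fail fast with a clear message when the variable is missing. This `require_env` helper is a hypothetical convenience, not part of Cortex CLI:

```bash
# Hypothetical helper: check that a named environment variable is set
# before launching the CLI, so a missing key fails with a clear message.
require_env() {
  eval "val=\${$1:-}"
  if [ -z "$val" ]; then
    echo "missing required variable: $1" >&2
    return 1
  fi
}

# Example: require_env GEMINI_API_KEY && cortex
```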
✨ Best for: Enterprise teams and production workloads
Benefits:
- Enterprise features: Advanced security and compliance
- Scalable: Higher rate limits with billing account
- Integration: Works with existing Google Cloud infrastructure
```bash
# Get your key from Google Cloud Console
export GOOGLE_API_KEY="YOUR_API_KEY"
export GOOGLE_GENAI_USE_VERTEXAI=true
cortex
```
✨ Best for: Privacy-focused developers, local development, and custom models
Benefits:
- Complete privacy: All processing happens locally
- No API costs: Free to use with your own hardware
- Custom models: Use any model supported by Ollama
- Offline capable: Works without internet connection
```bash
# Install Ollama first: https://ollama.ai/
# Then pull a model (e.g., llama2, codellama, etc.)
ollama pull llama2

# Start Ollama server
ollama serve

# Use with Cortex CLI
export CORTEX_AUTH_TYPE=ollama
export OLLAMA_BASE_URL=http://localhost:11434  # optional
cortex -m llama2
```
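The `OLLAMA_BASE_URL` export is optional because the CLI falls back to `http://localhost:11434`. The same defaulting behavior can be expressed with shell parameter expansion; this resolver is a sketch illustrating the documented default, not code from Cortex CLI:

```bash
# Fall back to the documented default endpoint when the variable is unset.
resolve_ollama_url() {
  echo "${OLLAMA_BASE_URL:-http://localhost:11434}"
}
```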
For Google Workspace accounts and other authentication methods, see the authentication guide.
```bash
# Start an interactive session in the current directory
cortex

# Include additional directories as context
cortex --include-directories ../lib,../docs

# Use a specific Gemini model
cortex -m gemini-2.5-flash

# Or use an Ollama model
cortex -m llama2

# Run non-interactively with a prompt
cortex -p "Explain the architecture of this codebase"
```
```bash
cd new-project/
cortex
> Write me a Discord bot that answers questions using a FAQ.md file I will provide
```
```bash
git clone https://github.com/MarkCodering/cortex-cli
cd cortex-cli
cortex
> Give me a summary of all of the changes that went in yesterday
```
```bash
# Make sure Ollama is running locally
ollama serve

# In another terminal, use Cortex CLI with Ollama
export CORTEX_AUTH_TYPE=ollama
export OLLAMA_BASE_URL=http://localhost:11434  # optional, this is the default
cortex -m llama2
> Help me refactor this function to be more efficient
```
- Quickstart Guide - Get up and running quickly
- Authentication Setup - Detailed auth configuration
- Configuration Guide - Settings and customization
- Keyboard Shortcuts - Productivity tips
- Commands Reference - All slash commands (`/help`, `/chat`, `/mcp`, etc.)
- Checkpointing - Save and resume conversations
- Memory Management - Using GEMINI.md context files
- Token Caching - Optimize token usage
- Built-in Tools Overview
- MCP Server Integration - Extend with custom tools
- Custom Extensions - Build your own commands
- Architecture Overview - How Gemini CLI works
- IDE Integration - VS Code companion
- Sandboxing & Security - Safe execution environments
- Enterprise Deployment - Docker, system-wide config
- Telemetry & Monitoring - Usage tracking
- Tools API Development - Create custom tools
- Settings Reference - All configuration options
- Theme Customization - Visual customization
- .gemini Directory - Project-specific settings
- Environment Variables
- Troubleshooting Guide - Common issues and solutions
- FAQ - Quick answers
- Use the `/bug` command to report issues directly from the CLI
Configure MCP servers in `~/.gemini/settings.json` to extend Gemini CLI with custom tools:
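The full schema is covered in the MCP Server Integration guide. As an illustrative sketch (the server name, package, and token placeholder here are hypothetical), an entry might look like:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "your-token-here" }
    }
  }
}
```

Once a server is configured, its tools become addressable from the prompt, as in the examples below.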
```bash
> @github List my open pull requests
> @slack Send a summary of today's commits to #dev channel
> @database Run a query to find inactive users
```
See the MCP Server Integration guide for setup instructions.
We welcome contributions! Cortex CLI is fully open source (Apache 2.0), forked from Google's Gemini CLI, and we encourage the community to:
- Report bugs and suggest features
- Improve documentation
- Submit code improvements
- Share your MCP servers and extensions
- Add support for additional AI providers
See our Contributing Guide for development setup, coding standards, and how to submit pull requests.
- GitHub Repository - Source code and issues
- Original Gemini CLI - Upstream project
- Ollama - Local AI model runner
See the Uninstall Guide for removal instructions.
- License: Apache License 2.0
- Original: Forked from Google's Gemini CLI
- Terms of Service: Terms & Privacy
- Security: Security Policy
Built with ❤️ by the community, forked from Google's Gemini CLI