Your AI Companion, Engineered for Intelligence
A powerful AI assistant that combines multiple leading language models with the Model Context Protocol (MCP) for advanced tool usage and automation. This project provides both a command-line interface and a REST API for interacting with AI models while giving them access to a variety of tools and capabilities.
- Multiple LLM Support:
  - Anthropic
  - OpenAI
  - xAI
- Advanced Tool Usage: Full MCP (Model Context Protocol) integration for powerful tool capabilities
- Multiple Interfaces:
  - Interactive CLI for direct usage
  - REST API for programmatic access
- Conversation Management: Persistent storage and retrieval of chat histories
- Dynamic Tool Discovery: Automatic detection and integration of MCP-compatible tools
- Usage Tracking: Built-in monitoring of model usage and performance
Supported providers:

- Anthropic
  - Advanced reasoning capabilities
  - Extensive tool integration support
  - Configurable parameters
- OpenAI
  - State-of-the-art performance
  - Robust tool handling
  - Advanced configuration options
- xAI
  - Latest AI technology
  - Custom API endpoint support
  - Flexible deployment options
Each provider can be configured with:
- Custom temperature settings
- Token limit adjustments
- API key configuration
- Provider-specific parameters
MCP is a universal protocol that standardizes how AI models interact with tools and services (see the sketch after this list):
- Tool Definition: Tools describe their capabilities and requirements in a standard format
- Structured Communication: Models and tools communicate through a defined protocol
- Dynamic Discovery: Tools can be added or removed without changing the core system
- Language Agnostic: Works with any programming language or framework
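For illustration, a tool description in this standard format might look like the following sketch. The field names mirror the `/tools` response shown later in this README; the exact MCP schema may differ, and `get_weather` is a hypothetical tool:

```python
# Hypothetical tool description; field names mirror the /tools response
# format shown under the API endpoints, not necessarily the exact MCP schema.
weather_tool = {
    "name": "get_weather",
    "description": "Fetch the current weather for a city",
    "parameters": {
        "city": {"type": "string", "description": "City to look up"},
    },
}
```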
The codebase is organized into the following components:

- API Server (`api_server.py`):
  - FastAPI-based REST API
  - Handles chat requests and responses
  - Manages conversation state
  - Provides tool listing and usage endpoints
- CLI Interface (`cli_chat.py`):
  - Interactive command-line interface
  - Direct model interaction
  - Tool exploration and usage
  - Conversation saving/loading
- Memory Management (`src/memory_manager.py`):
  - Persistent storage of conversations
  - Chat history retrieval
  - Context window management
  - Message formatting and processing
- LLM Integration (`src/llm_factory.py`, `src/llm_helper.py`):
  - Model initialization and configuration
  - Response parsing and formatting
  - Tool integration with models
  - Usage tracking and monitoring
- Database Layer (`src/database.py`, sketched after this list):
  - SQLAlchemy ORM
  - Message storage
  - Conversation tracking
  - Usage statistics
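As a rough illustration of the database layer, models along these lines could back message storage and conversation tracking. This is a hypothetical schema; the real one lives in `src/database.py`:

```python
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

# Hypothetical models; the actual schema in src/database.py may differ.
class Conversation(Base):
    __tablename__ = "conversations"
    id = Column(String, primary_key=True)          # conversation UUID
    title = Column(String)
    created_at = Column(DateTime)

class Message(Base):
    __tablename__ = "messages"
    id = Column(Integer, primary_key=True)
    conversation_id = Column(String, ForeignKey("conversations.id"))
    role = Column(String)                          # "user", "assistant", or "tool"
    content = Column(Text)
```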
A typical request flows through the system as follows (sketched in code after the list):

- User sends a message through either interface (REST API or CLI)
- Message is processed by the agent system
- LLM receives the message with context and available tools
- Model decides if tool usage is needed
- If tools are needed, requests are sent through MCP
- Results are incorporated into the model's response
- Final response is returned to the user
- Conversation is saved to the database
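In code, that loop might look roughly like this. Every name here is illustrative, not the project's actual interfaces:

```python
# Illustrative sketch of the request flow; all names are hypothetical.
async def handle_message(user_input, conversation, llm, tools):
    messages = conversation.context() + [{"role": "user", "content": user_input}]
    reply = await llm.chat(messages, tools=tools.definitions())
    while reply.tool_calls:                        # model requested tool usage
        for call in reply.tool_calls:
            result = await tools.execute(call.name, **call.arguments)  # via MCP
            messages.append({"role": "tool", "name": call.name, "content": result})
        reply = await llm.chat(messages, tools=tools.definitions())
    conversation.save(user_input, reply.content)   # persist to the database
    return reply.content                           # final response to the user
```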
- System Requirements:
  - Python 3.9+
  - API keys for desired providers:
    - `ANTHROPIC_API_KEY`
    - `OPENAI_API_KEY`
    - `GROK_API_KEY`
  - Sufficient storage for conversation history
- API Keys Setup:

  ```bash
  # Add to your environment or .env file
  export ANTHROPIC_API_KEY=your_anthropic_key_here
  export OPENAI_API_KEY=your_openai_key_here
  export GROK_API_KEY=your_grok_key_here
  ```
- Clone and Setup:

  ```bash
  git clone https://github.com/madtank/OllamaAssist.git
  cd OllamaAssist
  python -m venv venv
  source venv/bin/activate
  pip install -r requirements.txt
  ```
- Configuration: Create a `.env` file:

  ```bash
  # LLM API Keys
  ANTHROPIC_API_KEY=your_anthropic_key_here
  OPENAI_API_KEY=your_openai_key_here
  GROK_API_KEY=your_grok_key_here
  ```
- LLM Configuration: Configure your preferred provider (`anthropic`, `openai`, or `grok`) in `config.json`:

  ```json
  {
    "llm": {
      "provider": "anthropic",
      "settings": {
        "temperature": 0,
        "max_tokens": 4096
      }
    }
  }
  ```
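A minimal sketch of config-driven provider selection, in the spirit of `src/llm_factory.py`; the real factory's interface may differ:

```python
import json

# Minimal sketch; the actual factory in src/llm_factory.py may differ.
def load_llm_settings(config_path="config.json"):
    with open(config_path) as f:
        cfg = json.load(f)["llm"]
    provider = cfg["provider"]  # "anthropic", "openai", or "grok"
    if provider not in {"anthropic", "openai", "grok"}:
        raise ValueError(f"Unsupported provider: {provider}")
    return provider, cfg.get("settings", {})

# e.g. ("anthropic", {"temperature": 0, "max_tokens": 4096})
provider, settings = load_llm_settings()
```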
The command-line interface provides an interactive way to chat with the AI:

```bash
python cli_chat.py
```

Features:
- Interactive chat session
- Tool exploration and usage
- Conversation saving/loading
- Command history
- Real-time responses
Available commands:
```
/help  - Show available commands
/tools - List available tools
/save  - Save current conversation
/load  - Load a saved conversation
/clear - Start a new conversation
/exit  - Exit the application
```

Run the API server for programmatic access:
```bash
uvicorn api_server:app --host 0.0.0.0 --port 8000
```
- Chat (`POST /chat`): Request body (a Python example follows this list):

  ```json
  {
    "input": "Your message here",
    "conversation_id": "optional-id",
    "user_id": "optional-user-id",
    "title": "optional-title"
  }
  ```

  Response:

  ```json
  {
    "output": "AI response",
    "conversation_id": "conversation-uuid"
  }
  ```
- Tools (`GET /tools`): Lists available tools and their capabilities:

  ```json
  {
    "tools": [
      {
        "name": "tool_name",
        "description": "Tool description",
        "parameters": {
          "param1": {"type": "string", "description": "..."},
          "param2": {"type": "integer", "description": "..."}
        }
      }
    ]
  }
  ```
- Conversations (`GET /conversations`): Retrieves conversation history:

  ```json
  {
    "conversations": [
      {
        "id": "conversation-uuid",
        "title": "Conversation title",
        "created_at": "timestamp",
        "messages": [...]
      }
    ]
  }
  ```
- Create an MCP-compatible tool (a filled-in example follows these steps):

  ```python
  from mcp_core import MCPTool

  class MyTool(MCPTool):
      name = "my_tool"
      description = "Tool description"

      async def execute(self, **kwargs):
          # Tool implementation
          pass
  ```
- Add to `mcp_config.json`:

  ```json
  {
    "mcpServers": {
      "my-tool": {
        "command": "python",
        "args": ["my_tool.py"]
      }
    }
  }
  ```
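As referenced in step 1, a filled-in version of the skeleton might look like this. The tool itself is hypothetical and assumes the same `mcp_core.MCPTool` base class:

```python
from mcp_core import MCPTool

# Hypothetical example tool built on the skeleton above.
class EchoTool(MCPTool):
    name = "echo"
    description = "Return the input text unchanged"

    async def execute(self, text: str = "", **kwargs):
        # The MCP layer handles the wire format; tools return plain values.
        return text
```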
To run the test suite:

```bash
# Run all tests
python -m pytest

# Run specific test file
python -m pytest tests/test_tools.py

# Run with coverage
coverage run -m pytest
coverage report
```

The system maintains a sliding window of conversation history (see the sketch after this list) to:
- Prevent context overflow
- Maintain relevant information
- Optimize model performance
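A simplified version of that windowing might look like the following. This is illustrative only; `src/memory_manager.py` may count tokens against the model's context limit rather than messages:

```python
# Illustrative sliding-window trim; the real memory manager may instead
# budget by tokens rather than message count.
def trim_context(messages, max_messages=20):
    """Keep any system prompt plus the most recent messages."""
    if len(messages) <= max_messages:
        return messages
    system = [m for m in messages if m["role"] == "system"]
    recent = [m for m in messages if m["role"] != "system"]
    keep = max(max_messages - len(system), 0)
    return system + recent[len(recent) - keep:]
```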
Models can use multiple tools in sequence to:
- Break down complex tasks
- Combine tool capabilities
- Handle multi-step operations
The system includes robust error handling (sketched below) for:
- Tool failures
- Model errors
- Network issues
- Invalid inputs
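One hypothetical shape for that handling, wrapping a single tool call; names and error mapping are illustrative, not the project's actual code:

```python
import asyncio

# Hypothetical wrapper; the project's actual error handling may differ.
async def safe_execute(tool, **kwargs):
    try:
        return await asyncio.wait_for(tool.execute(**kwargs), timeout=30)
    except asyncio.TimeoutError:
        return {"error": "tool timed out"}          # network issues / hangs
    except ValueError as exc:
        return {"error": f"invalid input: {exc}"}   # bad arguments
    except Exception as exc:
        return {"error": f"tool failure: {exc}"}    # tool or model errors
```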
Contributions are welcome:

- Fork the repository
- Create a feature branch
- Implement your changes
- Add tests for new functionality
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.