A comprehensive GenAI platform demonstrating enterprise-level AI capabilities including multi-agent workflows, RAG systems, knowledge graphs, and MLOps integration.
- Research Agent: Intelligent information gathering and web research
- Data Analysis Agent: Advanced data processing and analytics
- Content Generation Agent: AI-powered content creation and documentation
- Crew Manager: Orchestrates multi-agent workflows using CrewAI
- Vector Store Integration: ChromaDB and Pinecone support
- Document Processing: PDF, Word, and text document ingestion
- Semantic Search: Advanced similarity search capabilities
- Context-Aware Generation: LLM responses enhanced with retrieved knowledge
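The retrieve-then-generate flow can be sketched as follows. This is a minimal illustration with hypothetical names (`RetrievedChunk`, `build_augmented_prompt`); the actual implementation lives in `rag/rag_system.py`:

```python
from dataclasses import dataclass

@dataclass
class RetrievedChunk:
    text: str
    score: float   # similarity score from the vector store
    source: str    # originating document

def build_augmented_prompt(query: str, chunks: list[RetrievedChunk], top_k: int = 5) -> str:
    """Assemble the most relevant retrieved chunks and the user query into one prompt."""
    # Keep only the top_k most similar chunks, highest score first.
    context = sorted(chunks, key=lambda c: c.score, reverse=True)[:top_k]
    context_block = "\n\n".join(f"[{c.source}] {c.text}" for c in context)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {query}"
    )

chunks = [
    RetrievedChunk("Agents can divide complex tasks among themselves.", 0.91, "agents.pdf"),
    RetrievedChunk("Vector stores index document embeddings.", 0.62, "rag.md"),
]
prompt = build_augmented_prompt("What are multi-agent systems good for?", chunks, top_k=1)
```

The assembled prompt is then sent to the configured LLM provider, grounding the response in retrieved knowledge.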
- Neo4j Integration: Graph database for complex relationships
- Entity Recognition: Automatic extraction of entities and relationships
- Graph Analytics: Advanced querying and relationship analysis
- Knowledge Discovery: Intelligent insights from connected data
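Extracted entities and relationships are typically stored as graph triples. A naive sketch of how a triple might be rendered into a Cypher `MERGE` statement (illustrative only; the real logic is in `knowledge_graph/neo4j_manager.py`):

```python
def triple_to_cypher(subject: str, relation: str, obj: str) -> str:
    """Render an (entity)-[relation]->(entity) triple as an idempotent Cypher MERGE."""
    # Normalize the relation name to Cypher's conventional UPPER_SNAKE_CASE.
    rel = relation.upper().replace(" ", "_")
    return (
        f"MERGE (a:Entity {{name: '{subject}'}}) "
        f"MERGE (b:Entity {{name: '{obj}'}}) "
        f"MERGE (a)-[:{rel}]->(b)"
    )

stmt = triple_to_cypher("CrewAI", "orchestrates", "Research Agent")
```

In production the statement would be parameterized rather than string-formatted, to avoid Cypher injection.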
- Multi-Provider Support: OpenAI, Anthropic, Google AI integration
- Provider Abstraction: Unified interface for different LLM APIs
- Load Balancing: Intelligent routing across providers
- Fallback Mechanisms: Automatic failover for reliability
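The failover idea can be sketched as a priority-ordered loop over providers; the names below (`complete_with_fallback`, the stub providers) are hypothetical stand-ins for what `llm_providers/provider_manager.py` does:

```python
import logging
from typing import Callable

log = logging.getLogger("llm_providers")

def complete_with_fallback(prompt: str, providers: list[tuple[str, Callable[[str], str]]]) -> str:
    """Try each provider in priority order, falling back to the next on failure."""
    errors = []
    for name, call in providers:
        try:
            return call(prompt)
        except Exception as exc:  # outage, rate limit, timeout, ...
            log.warning("provider %s failed: %s", name, exc)
            errors.append((name, str(exc)))
    raise RuntimeError(f"all providers failed: {errors}")

def flaky_provider(prompt: str) -> str:    # stand-in for a rate-limited primary
    raise TimeoutError("rate limited")

def stable_provider(prompt: str) -> str:   # stand-in for the fallback provider
    return f"echo: {prompt}"

answer = complete_with_fallback("hi", [("primary", flaky_provider), ("fallback", stable_provider)])
```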
- MLflow Integration: Experiment tracking and model management
- Prometheus Metrics: System performance monitoring
- Health Checks: Comprehensive system health monitoring
- Logging: Structured logging with correlation IDs
```
enterprise-ai-assistant-platform/
├── agents/                          # Multi-agent AI components
│   ├── research_agent.py            # Information gathering agent
│   ├── data_analysis_agent.py       # Data processing agent
│   ├── content_generation_agent.py  # Content creation agent
│   └── crew_manager.py              # Multi-agent orchestration
├── api/                             # FastAPI routes and endpoints
│   └── routes/                      # API route definitions
│       ├── agents.py                # Agent interaction endpoints
│       ├── rag.py                   # RAG system endpoints
│       ├── knowledge_graph.py       # Knowledge graph endpoints
│       └── health.py                # Health check endpoints
├── core/                            # Core system components
│   ├── config.py                    # Configuration management
│   └── logging.py                   # Structured logging setup
├── knowledge_graph/                 # Knowledge graph implementation
│   ├── knowledge_graph_system.py    # Graph operations
│   └── neo4j_manager.py             # Neo4j database management
├── llm_providers/                   # LLM provider abstractions
│   └── provider_manager.py          # Multi-provider management
├── mlops/                           # MLOps and monitoring
│   └── monitoring.py                # Metrics and health checks
├── rag/                             # RAG system implementation
│   ├── rag_system.py                # Core RAG functionality
│   ├── vector_store.py              # Vector database management
│   └── document_processor.py        # Document ingestion
└── tests/                           # Comprehensive test suite
    ├── unit/                        # Unit tests
    ├── integration/                 # Integration tests
    └── e2e/                         # End-to-end tests
```
- Python 3.11+
- Docker & Docker Compose
- Virtual environment (recommended)
1. Clone the repository

```bash
git clone <repository-url>
cd enterprise-ai-assistant-platform
```

2. Create a virtual environment

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

3. Install dependencies

```bash
pip install -r requirements.txt
pip install -r requirements-test.txt
```

4. Configure the environment

```bash
cp .env.example .env
# Edit .env with your API keys and configuration
```

5. Start infrastructure services

```bash
docker-compose up -d neo4j postgres chromadb mlflow
```

6. Run the application

```bash
python -m uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```
The platform will be available at:
- API: http://localhost:8000
- API Documentation: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
- Health Check: http://localhost:8000/health
Create a `.env` file with the following configuration:

```env
# Application Settings
DEBUG=true
LOG_LEVEL=INFO
API_HOST=0.0.0.0
API_PORT=8000

# LLM API Keys
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
GOOGLE_API_KEY=your_google_key

# Vector Database
PINECONE_API_KEY=your_pinecone_key
PINECONE_ENVIRONMENT=your_pinecone_env
CHROMADB_HOST=localhost
CHROMADB_PORT=8000

# Knowledge Graph
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=password

# MLOps
MLFLOW_TRACKING_URI=http://localhost:5000
PROMETHEUS_PORT=9090
```

The platform uses multiple databases:
- Neo4j: Knowledge graph storage
- ChromaDB: Vector embeddings
- PostgreSQL: Structured data (via Docker)
- MLflow: Model and experiment tracking
`GET /health`

Returns system health status and component availability.
Agents:

```
POST /api/v1/agents/research
POST /api/v1/agents/analyze
POST /api/v1/agents/generate
POST /api/v1/agents/workflow
```

RAG:

```
POST /api/v1/rag/query
POST /api/v1/rag/ingest
GET  /api/v1/rag/documents
```

Knowledge graph:

```
POST /api/v1/kg/query
POST /api/v1/kg/entities
GET  /api/v1/kg/relationships
```

Example agent request:

```python
import httpx

response = httpx.post("http://localhost:8000/api/v1/agents/research", json={
    "task": "Research the latest trends in generative AI",
    "context": "Focus on enterprise applications",
    "parameters": {"depth": "comprehensive"}
})
```

Example RAG query:

```python
response = httpx.post("http://localhost:8000/api/v1/rag/query", json={
    "query": "What are the benefits of multi-agent systems?",
    "top_k": 5,
    "filters": {"category": "ai-research"}
})
```

The platform includes a comprehensive test suite with unit, integration, and end-to-end tests.
```bash
# Run all tests
python -m pytest

# Run specific test categories
python -m pytest tests/unit/          # Unit tests
python -m pytest tests/integration/   # Integration tests
python -m pytest tests/e2e/           # End-to-end tests

# Run with coverage
python -m pytest --cov=. --cov-report=html

# Run a specific test file
python -m pytest tests/unit/test_research_agent.py -v
```

- Unit Tests: Test individual components in isolation
- Integration Tests: Test component interactions
- E2E Tests: Test complete workflows and API endpoints

Coverage reports are generated in the `htmlcov/` directory.
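A unit test in this suite might look like the following sketch. The `FakeLLM` stub and the simplified `ResearchAgent` are illustrative, not the actual classes, but they show the pattern of isolating an agent from network calls:

```python
# Illustrative shape of tests/unit/test_research_agent.py (not the actual file)
class FakeLLM:
    """Stub provider so the agent can be tested without network calls."""
    def complete(self, prompt: str) -> str:
        return "GenAI adoption is growing in the enterprise."

class ResearchAgent:
    """Simplified stand-in for the real agent in agents/research_agent.py."""
    def __init__(self, llm):
        self.llm = llm

    def run(self, task: str) -> str:
        return self.llm.complete(f"Research task: {task}")

def test_research_agent_returns_findings():
    agent = ResearchAgent(llm=FakeLLM())
    result = agent.run("latest trends in generative AI")
    assert "enterprise" in result

test_research_agent_returns_findings()
```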
1. Build the application

```bash
docker build -t enterprise-ai-platform .
```

2. Deploy with Docker Compose

```bash
docker-compose up -d
```
- Environment Variables: Secure API key management
- Database Persistence: Configure volume mounts
- Load Balancing: Use reverse proxy (nginx/traefik)
- Monitoring: Enable Prometheus metrics collection
- Logging: Configure centralized log aggregation
- Security: Implement authentication and rate limiting
- Application health endpoint
- Database connectivity checks
- External service availability
- Resource utilization monitoring
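These checks roll up into a single health payload served by `GET /health`. A minimal sketch of the aggregation (hypothetical helper name; the real logic is in `mlops/monitoring.py`):

```python
def aggregate_health(components: dict[str, bool]) -> dict:
    """Roll individual component checks up into one health payload."""
    healthy = all(components.values())
    return {
        "status": "healthy" if healthy else "degraded",
        "components": {name: ("up" if ok else "down") for name, ok in components.items()},
    }

status = aggregate_health({"neo4j": True, "chromadb": True, "mlflow": False})
```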
- Request/response metrics
- Agent execution times
- Database query performance
- LLM provider response times
- Structured JSON logging
- Correlation ID tracking
- Error aggregation
- Performance profiling
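Structured JSON logging with correlation IDs can be implemented with a custom stdlib formatter. A minimal sketch, assuming the correlation ID is attached to each record (helper names are illustrative; the project's setup lives in `core/logging.py`):

```python
import json
import logging
import uuid

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON object carrying a correlation ID."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "correlation_id": getattr(record, "correlation_id", None),
        })

def make_log_line(message: str, correlation_id: str) -> str:
    """Build and format a record the way a request-scoped logger would."""
    record = logging.LogRecord("app", logging.INFO, __file__, 0, message, None, None)
    record.correlation_id = correlation_id  # normally injected by middleware
    return JsonFormatter().format(record)

line = make_log_line("request received", correlation_id=str(uuid.uuid4()))
```

In the running service, a FastAPI middleware would generate the correlation ID per request so every log line for that request can be grouped during aggregation.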
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Add tests for new functionality
- Ensure all tests pass (`python -m pytest`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Follow PEP 8 style guidelines
- Write comprehensive tests
- Update documentation
- Use type hints
- Add docstrings to all functions and classes
This project is licensed under the MIT License - see the LICENSE file for details.
- Testing Documentation - Comprehensive testing guide
- API Reference - Interactive API documentation
- Architecture Decision Records - Technical decisions and rationale
For support and questions:
- Create an issue in the GitHub repository
- Check the FAQ
- Review the troubleshooting guide
Built with ❤️ for demonstrating enterprise GenAI capabilities