Enterprise AI Assistant Platform

A comprehensive GenAI platform demonstrating enterprise-level AI capabilities, including multi-agent workflows, RAG systems, knowledge graphs, and MLOps integration.


🚀 Features

🤖 Multi-Agent AI System

  • Research Agent: Intelligent information gathering and web research
  • Data Analysis Agent: Advanced data processing and analytics
  • Content Generation Agent: AI-powered content creation and documentation
  • Crew Manager: Orchestrates multi-agent workflows using CrewAI

📚 RAG (Retrieval-Augmented Generation)

  • Vector Store Integration: ChromaDB and Pinecone support
  • Document Processing: PDF, Word, and text document ingestion
  • Semantic Search: Advanced similarity search capabilities
  • Context-Aware Generation: LLM responses enhanced with retrieved knowledge
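The retrieve-then-generate flow above can be sketched in a few lines. This is a toy illustration only: the `embed`, `cosine`, and `retrieve` names are hypothetical stand-ins, with a bag-of-words "embedding" in place of a real embedding model and vector store.

```python
# Toy sketch of retrieval for RAG: "embed" documents, rank them by
# cosine similarity to the query, and return the best matches as
# context for the LLM. embed() is a stand-in for a real model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Bag-of-words pseudo-embedding, purely for illustration.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "Multi-agent systems coordinate specialized agents.",
    "Vector stores index document embeddings for search.",
    "Knowledge graphs model entities and relationships.",
]
context = retrieve("How do vector stores support search?", docs)
print(context[0])  # the vector-store document ranks first
```

In the real system the ranking would be delegated to ChromaDB or Pinecone, but the shape of the flow is the same: embed, rank, take the top-k, and feed them to the LLM as context.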

πŸ•ΈοΈ Knowledge Graph System

  • Neo4j Integration: Graph database for complex relationships
  • Entity Recognition: Automatic extraction of entities and relationships
  • Graph Analytics: Advanced querying and relationship analysis
  • Knowledge Discovery: Intelligent insights from connected data
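Conceptually, the graph layer stores extracted entities and relationships as (subject, relation, object) triples and answers relationship queries over them. The sketch below is an in-memory stand-in with hypothetical names; in the platform this role belongs to Neo4j via `neo4j_manager.py`.

```python
# In-memory stand-in for the knowledge-graph layer: store
# (subject, relation, object) triples and answer simple
# relationship queries. The real platform persists these in Neo4j.
from collections import defaultdict
from typing import Optional

class TinyGraph:
    def __init__(self):
        self.triples = []
        self.out_edges = defaultdict(list)

    def add(self, subj: str, rel: str, obj: str) -> None:
        self.triples.append((subj, rel, obj))
        self.out_edges[subj].append((rel, obj))

    def related(self, subj: str, rel: Optional[str] = None) -> list[str]:
        # All objects reachable from subj, optionally filtered by relation.
        return [o for r, o in self.out_edges[subj] if rel is None or r == rel]

g = TinyGraph()
g.add("RAG", "USES", "VectorStore")
g.add("RAG", "USES", "LLM")
g.add("VectorStore", "IMPLEMENTED_BY", "ChromaDB")

print(g.related("RAG", "USES"))  # ['VectorStore', 'LLM']
```

A graph database adds indexing, multi-hop traversal, and a query language (Cypher, in Neo4j's case) on top of this basic triple model.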

🔧 LLM Provider Management

  • Multi-Provider Support: OpenAI, Anthropic, Google AI integration
  • Provider Abstraction: Unified interface for different LLM APIs
  • Load Balancing: Intelligent routing across providers
  • Fallback Mechanisms: Automatic failover for reliability
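The unified-interface-plus-fallback idea can be sketched as follows. The `Provider` class and `generate()` signature are illustrative assumptions, not the platform's actual interface: each provider exposes the same method, and the manager tries them in order until one succeeds.

```python
# Sketch of a provider abstraction with ordered fallback: call each
# provider through a common interface and return the first success.
# Provider names and the generate() signature are illustrative.
class ProviderError(Exception):
    pass

class Provider:
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy  # stands in for a real availability check

    def generate(self, prompt: str) -> str:
        if not self.healthy:
            raise ProviderError(f"{self.name} unavailable")
        return f"[{self.name}] response to: {prompt}"

def generate_with_fallback(providers: list, prompt: str) -> str:
    errors = []
    for p in providers:
        try:
            return p.generate(prompt)
        except ProviderError as e:
            errors.append(str(e))
    raise ProviderError("all providers failed: " + "; ".join(errors))

chain = [Provider("openai", healthy=False), Provider("anthropic")]
print(generate_with_fallback(chain, "hello"))  # falls back to anthropic
```

Load balancing fits the same shape: instead of a fixed order, the manager would pick the next provider by latency, cost, or a round-robin policy before applying the same failover loop.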

📊 MLOps & Monitoring

  • MLflow Integration: Experiment tracking and model management
  • Prometheus Metrics: System performance monitoring
  • Health Checks: Comprehensive system health monitoring
  • Logging: Structured logging with correlation IDs
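Structured logging with correlation IDs can be done with the standard library alone. This is a minimal sketch, assuming JSON-per-line output and a `correlation_id` field; the field names are illustrative, not the platform's actual schema.

```python
# Minimal structured-logging sketch: emit one JSON object per log
# record, carrying a correlation ID so all records for one request
# can be grouped. Uses only the standard library.
import json
import logging
import uuid

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "correlation_id": getattr(record, "correlation_id", None),
        })

logger = logging.getLogger("platform")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Generate one ID per incoming request and attach it to every record.
cid = str(uuid.uuid4())
logger.info("request received", extra={"correlation_id": cid})
```

In a FastAPI app the correlation ID would typically be created in middleware and propagated via a context variable, so every log line within a request carries the same ID.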

πŸ—οΈ Architecture

enterprise-ai-assistant-platform/
├── agents/                     # Multi-agent AI components
│   ├── research_agent.py      # Information gathering agent
│   ├── data_analysis_agent.py # Data processing agent
│   ├── content_generation_agent.py # Content creation agent
│   └── crew_manager.py        # Multi-agent orchestration
├── api/                        # FastAPI routes and endpoints
│   └── routes/                 # API route definitions
│       ├── agents.py          # Agent interaction endpoints
│       ├── rag.py             # RAG system endpoints
│       ├── knowledge_graph.py # Knowledge graph endpoints
│       └── health.py          # Health check endpoints
├── core/                       # Core system components
│   ├── config.py              # Configuration management
│   └── logging.py             # Structured logging setup
├── knowledge_graph/            # Knowledge graph implementation
│   ├── knowledge_graph_system.py # Graph operations
│   └── neo4j_manager.py       # Neo4j database management
├── llm_providers/             # LLM provider abstractions
│   └── provider_manager.py    # Multi-provider management
├── mlops/                     # MLOps and monitoring
│   └── monitoring.py          # Metrics and health checks
├── rag/                       # RAG system implementation
│   ├── rag_system.py          # Core RAG functionality
│   ├── vector_store.py        # Vector database management
│   └── document_processor.py  # Document ingestion
└── tests/                     # Comprehensive test suite
    ├── unit/                  # Unit tests
    ├── integration/           # Integration tests
    └── e2e/                   # End-to-end tests

🚦 Quick Start

Prerequisites

  • Python 3.11+
  • Docker & Docker Compose
  • Virtual environment (recommended)

Installation

  1. Clone the repository

    git clone <repository-url>
    cd enterprise-ai-assistant-platform
  2. Create virtual environment

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies

    pip install -r requirements.txt
    pip install -r requirements-test.txt
  4. Configure environment

    cp .env.example .env
    # Edit .env with your API keys and configuration
  5. Start infrastructure services

    docker-compose up -d neo4j postgres chromadb mlflow
  6. Run the application

    python -m uvicorn main:app --host 0.0.0.0 --port 8000 --reload

The API will be available at http://localhost:8000, with FastAPI's interactive documentation at http://localhost:8000/docs.

🔧 Configuration

Environment Variables

Create a .env file with the following configuration:

# Application Settings
DEBUG=true
LOG_LEVEL=INFO
API_HOST=0.0.0.0
API_PORT=8000

# LLM API Keys
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
GOOGLE_API_KEY=your_google_key

# Vector Database
PINECONE_API_KEY=your_pinecone_key
PINECONE_ENVIRONMENT=your_pinecone_env
CHROMADB_HOST=localhost
CHROMADB_PORT=8000

# Knowledge Graph
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=password

# MLOps
MLFLOW_TRACKING_URI=http://localhost:5000
PROMETHEUS_PORT=9090

Database Setup

The platform uses multiple databases:

  • Neo4j: Knowledge graph storage
  • ChromaDB: Vector embeddings
  • PostgreSQL: Structured data (via Docker)
  • MLflow: Model and experiment tracking

📖 API Documentation

Core Endpoints

Health Check

GET /health

Returns system health status and component availability.

Multi-Agent Workflows

POST /api/v1/agents/research
POST /api/v1/agents/analyze
POST /api/v1/agents/generate
POST /api/v1/agents/workflow

RAG System

POST /api/v1/rag/query
POST /api/v1/rag/ingest
GET /api/v1/rag/documents

Knowledge Graph

POST /api/v1/kg/query
POST /api/v1/kg/entities
GET /api/v1/kg/relationships

Example Usage

Research Agent

import httpx

response = httpx.post("http://localhost:8000/api/v1/agents/research", json={
    "task": "Research the latest trends in generative AI",
    "context": "Focus on enterprise applications",
    "parameters": {"depth": "comprehensive"}
})

RAG Query

response = httpx.post("http://localhost:8000/api/v1/rag/query", json={
    "query": "What are the benefits of multi-agent systems?",
    "top_k": 5,
    "filters": {"category": "ai-research"}
})

🧪 Testing

The platform includes a comprehensive test suite with unit, integration, and end-to-end tests.

Running Tests

# Run all tests
python -m pytest

# Run specific test categories
python -m pytest tests/unit/          # Unit tests
python -m pytest tests/integration/   # Integration tests
python -m pytest tests/e2e/          # End-to-end tests

# Run with coverage
python -m pytest --cov=. --cov-report=html

# Run specific test file
python -m pytest tests/unit/test_research_agent.py -v

Test Structure

  • Unit Tests: Test individual components in isolation
  • Integration Tests: Test component interactions
  • E2E Tests: Test complete workflows and API endpoints

Coverage reports are generated in htmlcov/ directory.
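A unit test in this suite might look like the following sketch. The `chunk_text` helper is a hypothetical stand-in for a document-processor function, not actual platform code; the point is the shape of an isolated, pytest-discoverable test.

```python
# Illustrative unit test in the style of tests/unit/. chunk_text is a
# hypothetical stand-in for a document-processor helper.
def chunk_text(text: str, size: int) -> list[str]:
    """Split text into fixed-size chunks; the last chunk may be shorter."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [text[i:i + size] for i in range(0, len(text), size)]

def test_chunk_text_splits_evenly():
    assert chunk_text("abcdef", 2) == ["ab", "cd", "ef"]

def test_chunk_text_keeps_remainder():
    assert chunk_text("abcde", 2) == ["ab", "cd", "e"]
```

Because the helper is pure, the unit test needs no database, network, or LLM; integration and e2e tests cover those layers instead.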

🚀 Deployment

Docker Deployment

  1. Build the application

    docker build -t enterprise-ai-platform .
  2. Deploy with Docker Compose

    docker-compose up -d

Production Considerations

  • Environment Variables: Secure API key management
  • Database Persistence: Configure volume mounts
  • Load Balancing: Use reverse proxy (nginx/traefik)
  • Monitoring: Enable Prometheus metrics collection
  • Logging: Configure centralized log aggregation
  • Security: Implement authentication and rate limiting

📊 Monitoring & Observability

Health Checks

  • Application health endpoint
  • Database connectivity checks
  • External service availability
  • Resource utilization monitoring

Metrics

  • Request/response metrics
  • Agent execution times
  • Database query performance
  • LLM provider response times

Logging

  • Structured JSON logging
  • Correlation ID tracking
  • Error aggregation
  • Performance profiling

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes
  4. Add tests for new functionality
  5. Ensure all tests pass (python -m pytest)
  6. Commit your changes (git commit -m 'Add amazing feature')
  7. Push to the branch (git push origin feature/amazing-feature)
  8. Open a Pull Request

Development Guidelines

  • Follow PEP 8 style guidelines
  • Write comprehensive tests
  • Update documentation
  • Use type hints
  • Add docstrings to all functions and classes

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🆘 Support

For support and questions, please open an issue on the repository.

Built with ❤️ for demonstrating enterprise GenAI capabilities
