An intelligent AI assistant that leverages Large Language Models (LLMs) and agentic AI to extract insights from internal documents and team reports. This system helps product and engineering teams answer questions, summarize issues, and route queries intelligently.
This project implements an AI assistant system capable of:
- Internal Q&A: Search and retrieve information from internal documents
- Issue Summarization: Analyze and summarize reported issues with severity and affected components
- Intelligent Routing: Use LLM-powered decision-making to route queries to appropriate tools
- Modular Architecture: Clean, scalable design with containerized deployment
Retrieves relevant information from internal documents to answer questions such as:
- "What are the issues reported on email notification?"
- "What did users say about the search bar?"
Input Documents:
- `ai_test_bug_report`
- `ai_test_user_feedback`
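The retrieval step can be sketched in miniature. The real system embeds documents with OpenAI and stores them in Qdrant; the toy vectors, document texts, and helper names below are made up purely for illustration, though the `top_k` and `threshold` parameters mirror the `RETRIEVAL_TOP_K` and `SIMILARITY_THRESHOLD` settings.

```python
import math

# Toy stand-in for the indexed corpus. In the real system these would be
# OpenAI embeddings stored in Qdrant; vectors here are hand-picked.
DOCS = [
    ("Bug #12: email notifications are delayed by hours", [0.9, 0.1, 0.0]),
    ("Feedback #48: the search bar ignores partial matches", [0.1, 0.9, 0.0]),
    ("Feedback #51: dark mode looks great", [0.0, 0.1, 0.9]),
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vector, top_k=5, threshold=0.7):
    """Return up to top_k documents whose similarity exceeds the threshold,
    mirroring the RETRIEVAL_TOP_K and SIMILARITY_THRESHOLD settings."""
    scored = [(cosine(query_vector, vec), text) for text, vec in DOCS]
    scored.sort(reverse=True)
    return [{"text": text, "score": round(score, 2)}
            for score, text in scored[:top_k] if score >= threshold]

# A query vector close to the "search bar" document:
hits = retrieve([0.2, 0.95, 0.05])
```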
Provides structured analysis of issue text, including:
- Reported issues
- Affected features/components
- Severity levels
- Requirements
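The fields above can be carried in a small structured result. The field names in this sketch are illustrative assumptions — the actual schema is defined in `summarizer_tool.py` — but the shape (a JSON-serializable dict) matches what the agent expects.

```python
from dataclasses import dataclass, field, asdict
from typing import List

# Illustrative output schema for the summarizer tool. Field names are
# assumptions; the real schema lives in src/tools/summarizer_tool.py.
@dataclass
class IssueSummary:
    reported_issues: List[str] = field(default_factory=list)
    affected_components: List[str] = field(default_factory=list)
    severity: str = "unknown"  # e.g. "low" | "medium" | "high"
    requirements: List[str] = field(default_factory=list)

summary = IssueSummary(
    reported_issues=["Email notifications arrive hours late"],
    affected_components=["email notification service"],
    severity="high",
    requirements=["Notifications should be delivered within one minute"],
)
result = asdict(summary)  # JSON-serializable dict for the agent
```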
- Receives and processes user queries
- Decides which tool to use based on query intent
- Explains reasoning for tool selection
- Returns structured output (JSON/dict format)
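The routing step can be illustrated with a minimal stand-in. The real agent asks an LLM to choose a tool; this sketch substitutes simple keyword matching only to show the structured output shape (tool name, arguments, reasoning). The `summarizer_tool` name is an assumption; `search_internal_qa_tool` appears in the API example below.

```python
# Minimal stand-in for the agent's routing step. The real system uses an
# LLM; keyword matching here only demonstrates the structured output.
def route_query(query: str) -> dict:
    issue_words = ("bug", "issue", "error", "crash", "summarize")
    if any(word in query.lower() for word in issue_words):
        tool = "summarizer_tool"  # assumed name, see summarizer_tool.py
        reason = "Query mentions issues, so it is routed to summarization."
    else:
        tool = "search_internal_qa_tool"
        reason = "No issue keywords found, so it is routed to internal Q&A."
    return {
        "tool_call": {"tool_name": tool, "arguments": {"query": query}},
        "reasoning": reason,
    }

decision = route_query("What did users say about the search bar?")
```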
```
ai_test/
├── src/
│   ├── api/
│   │   └── main.py              # FastAPI application
│   ├── agent/
│   │   ├── __init__.py
│   │   └── agent.py             # AI Agent implementation
│   ├── tools/
│   │   ├── __init__.py
│   │   ├── qa_tool.py           # Internal Q&A functionality
│   │   └── summarizer_tool.py   # Issue summarization
│   ├── vectorstore/
│   │   ├── __init__.py
│   │   └── indexing.py          # Document indexing and retrieval
│   └── config.py                # Configuration management
├── data/
│   ├── ai_test_bug_report/      # Bug report documents
│   └── ai_test_user_feedback/   # User feedback documents
├── tests/
├── Dockerfile                   # Container configuration
├── docker-compose.dev.yaml      # Development environment setup
├── pyproject.toml               # Project dependencies
├── .env.example                 # Environment variables template
└── README.md
```
- Python 3.13+
- Docker & Docker Compose (optional)
- OpenAI API Key
- Qdrant instance (local or cloud)
- Clone the repository:

```bash
git clone https://github.com/KJ-AIML/ai_test.git
cd ai_test
```

- Set up environment variables:

```bash
cp .env.example .env
```

- Edit `.env` with your configuration:

```env
OPENAI_API_KEY=your_api_key_here
QDRANT_URL=your_qdrant_url
QDRANT_API_KEY=your_qdrant_key
```

Use the provided `docker-compose.dev.yaml` to bootstrap services (the API, plus any linked services you configure, such as Qdrant or Redis):

```bash
docker compose -f docker-compose.dev.yaml up --build
# or, with the legacy CLI:
docker-compose -f docker-compose.dev.yaml up
```

The API will be available at http://localhost:3000.
POST /api/v1/internal_agent/query
Request:

```json
{
  "query": "What did users say about the search bar?"
}
```

Response:

```json
{
  "query": "What did users say about the search bar?",
  "tools_used": [
    "search_internal_qa_tool"
  ],
  "tool_executions": [
    {
      "step": 1,
      "tool_call": {
        "tool_name": "search_internal_qa_tool",
        "arguments": {
          "query": "search bar"
        }
      },
      "tool_result": {
        "tool_name": "search_internal_qa_tool",
        "tool_type": "internal_qna",
        "answer": "Found search-related feedback",
        "hits": [
          {
            "text": "Feedback #48: ...",
            "score": 0.89
          }
        ]
      }
    }
  ],
  "final_answer": "**Summary:**\nFound 2 issues...\n\n**References:**\n- Feedback #48: ...",
  "metadata": {
    "total_tokens": 1500
  }
}
```

Environment variables are managed through the `.env` file. Key configurations:
| Variable | Description | Default |
|---|---|---|
| `SERVER_PORT` | API server port | `3000` |
| `SERVER_HOST` | API server host | `0.0.0.0` |
| `OPENAI_API_KEY` | OpenAI API key | Required |
| `QDRANT_URL` | Qdrant vector database URL | Required |
| `QDRANT_API_KEY` | Qdrant API key | Required |
| `EMBEDDING_MODEL` | Embedding model | `text-embedding-3-small` |
| `VECTOR_STORE_COLLECTION_NAME` | Qdrant collection name | `test` |
| `RETRIEVAL_TOP_K` | Number of documents to retrieve | `5` |
| `SIMILARITY_THRESHOLD` | Similarity threshold for retrieval | `0.7` |
| `LOG_LEVEL` | Logging level | `info` |
- Framework: FastAPI
- LLM Integration: LangChain, LangGraph, OpenAI
- Vector Store: Qdrant
- Embeddings: OpenAI Text Embedding 3 Small
- Caching: Redis
- Configuration: Pydantic Settings
- Server: Uvicorn
- Container: Docker
```bash
curl -X POST "http://localhost:3000/api/v1/internal_agent/query" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What did users say about the search bar?"
  }'
```

- Create a new tool file in `src/tools/`
- Implement the tool with structured output
- Register the tool in the agent
- Update documentation
Example tool structure:

```python
from typing import Any, Dict


class MyTool:
    def execute(self, input_data: Dict[str, Any]) -> Dict[str, Any]:
        """
        Execute the tool with the given input.

        Args:
            input_data: Input parameters

        Returns:
            Structured output as a dictionary
        """
        # Implementation
        return {"result": "..."}
```

- Build the Docker image:

```bash
docker build -t ai-test:latest .
```

- Push to a registry:

```bash
docker push your-registry/ai-test:latest
```

Update `.env` with production values:
- Use secure OpenAI and Qdrant credentials
- Set `DEBUG=False`
- Configure an appropriate `LOG_LEVEL`
- Set up Redis for caching
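For illustration, a production `.env` might look like the fragment below; all values are placeholders, and only variables already documented above are used:

```env
OPENAI_API_KEY=<production-key>
QDRANT_URL=<production-qdrant-url>
QDRANT_API_KEY=<production-key>
DEBUG=False
LOG_LEVEL=warning
```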
Logs are stored in `logs/app.log`. Configure logging in `.env`:

```env
LOG_LEVEL=info
LOG_SAVE_TO_FILE=true
LOG_FILE=logs/app.log
```

This is an internal project for job evaluation purposes.
- Verify the Qdrant URL and API key in `.env`
- Ensure the Qdrant service is running
- Check network connectivity
- Verify API key is correct and has sufficient quota
- Check rate limiting
- Review API usage at https://platform.openai.com/usage