A next-generation search agent that delivers trust-scored answers by orchestrating multiple AIsa search endpoints. Unlike standard RAG systems, this agent doesn't just retrieve information; it evaluates the credibility and consensus of sources to assign a deterministic confidence score to every answer.
This project serves as a flagship demonstration of the AIsa platform's capabilities, specifically its Unified Model Gateway and diverse specialized Search APIs.
The agent employs a Two-Phase Retrieval Strategy:
- Discovery Phase: Queries 4 distinct streams in parallel (Scholar, Web, Smart, Tavily).
- Reasoning Phase: Extracts a valid
search_idfrom the AIsa results to trigger AIsa Explain, performing a meta-analysis of the search session.
Endpoints Used:
- AIsa Scholar: Deep academic retrieval.
- AIsa Web: Structured web search.
- AIsa Smart: Intelligent mixed-mode search.
- AIsa Explain: Native reasoning engine (triggered post-search).
- Tavily: External validation signal (routed through AIsa).
We moved beyond "hallucinated confidence" to a deterministic scoring rubric:
- Source Quality: Weighted points for Academic > Smart/Web > External.
- Agreement Analysis: An LLM chain explicitly identifies whether independent sources agree, disagree, or conflict.
- Final Score: A calculated 0-100 metric that users can trust.
- Claim Extraction: The UI lists the specific atomic claims used to synthesize the answer.
- Raw Data Verification: A "Developer View" toggle allows users to inspect the raw JSON responses from every API call, ensuring no "black box" magic.
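A minimal sketch of the deterministic rubric. The specific weights and agreement multipliers below are illustrative assumptions, not the project's actual constants; only the shape (weighted source quality scaled by an agreement verdict, yielding 0-100) follows the description above.

```python
# Illustrative source-quality weights (Academic > Smart/Web > External);
# the project's real constants may differ.
SOURCE_WEIGHTS = {"scholar": 3.0, "smart": 2.0, "web": 2.0, "tavily": 1.0}

# Agreement verdicts from the LLM chain scale the raw quality score.
AGREEMENT_FACTOR = {"agree": 1.0, "partial": 0.7, "conflict": 0.4}

def trust_score(sources: list[str], verdict: str) -> int:
    """Deterministic 0-100 score from the source mix and agreement verdict."""
    raw = sum(SOURCE_WEIGHTS.get(s, 0.0) for s in sources)
    max_raw = sum(SOURCE_WEIGHTS.values())  # best case: all four streams hit
    score = 100 * (raw / max_raw) * AGREEMENT_FACTOR[verdict]
    return round(score)
```

Because the function is pure arithmetic over fixed tables, the same evidence always yields the same score, which is what makes the confidence deterministic rather than hallucinated.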
The system is built on a clean, modular stack:
- Platform: AIsa (Model Gateway + Search Suite)
- Orchestration: LangChain (Parallel retrieval, Chain-of-Thought processing)
- Frontend: Streamlit (Interactive, transparent UI)
- Language: Python
```mermaid
graph TD
    User[User Query] --> Agent
    subgraph "Phase 1: Search & Discovery"
        Agent -->|Parallel| Scholar[AIsa Scholar]
        Agent -->|Parallel| Web[AIsa Web]
        Agent -->|Parallel| Smart[AIsa Smart]
        Agent -->|Parallel| Tavily[Tavily through AIsa]
    end
    Scholar & Web & Smart -->|Extract Search ID| Phase2["Phase 2: AIsa Explain"]
    Phase2 --> Explain[Deep Explanation]
    Scholar & Web & Smart & Tavily & Explain --> Claims[Claim Normalization Chain]
    Claims --> Agreement[Agreement Analysis]
    Agreement --> Scoring[Deterministic Scoring]
    Scoring --> Final[Synthesis & Explainability]
    Final --> UI[Streamlit Interface]
```
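In the pipeline above, the Agreement Analysis step is an LLM chain. As a deterministic stand-in for illustration only, the sketch below collapses per-source stance labels toward a draft answer into a single verdict; the `agreement_verdict` helper, the stance labels, and the verdict names are all hypothetical.

```python
def agreement_verdict(stances: dict[str, str]) -> str:
    """Collapse per-source stances ("support"/"oppose") into one verdict.

    Stand-in for the LLM agreement chain: in the real agent, stances come
    from claim-level comparison across normalized claims, not hard labels.
    """
    votes = list(stances.values())
    support = votes.count("support")
    if support == len(votes):
        return "agree"          # all independent sources concur
    if support >= len(votes) / 2:
        return "partial"        # majority support with some dissent
    return "conflict"           # sources substantially contradict
```

The verdict then feeds the deterministic scoring step, so disagreement among sources lowers the final confidence in a reproducible way.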
- Python 3.10+
- AIsa API Key (Includes access to Tavily search)
- Clone the repository:

  ```bash
  git clone <repo-url>
  cd verity/search_agent
  ```

- Set up the environment. Create a `.env` file in `search_agent/`:

  ```env
  AIsa_API_KEY=your_key_here
  AIsa_BASE_URL=https://api.aisa.one/v1
  ```

- Install dependencies:

  ```bash
  python3 -m venv venv
  source venv/bin/activate
  pip install -r requirements.txt
  ```
Launch the Web UI:

```bash
streamlit run app.py
```

Run the verification script:

```bash
python verify_agent.py
```

This project highlights why AIsa is the superior choice for building agentic search systems:
- Unified Access: One API key unlocks Academic, Web, and Smart search, plus hundreds of LLMs.
- Specialized Endpoints: Instead of generic search, AIsa offers domain-specific retrieval (Scholar vs Web) that allows for nuanced trust scoring.
- Developer Experience: Simple, OpenAI-compatible interfaces make integration with tools like LangChain seamless.