An AI-powered research assistant that performs multi-step, deep research on any topic by combining web search (via Tavily) and large language model-based agents.
The goal of this repository is to provide a clean and understandable implementation of a deep research agent system using LangChain and LangGraph. It showcases how multiple specialized agents can collaborate to perform credible research, draft structured answers, refine them, and fact-check information.
The system is based on a multi-agent pipeline, where each agent specializes in a part of the research and answer development process:
- A Research Agent fetches data from the web using Tavily.
- A Drafting Agent organizes research into a structured response.
- Depending on the content, either a Refinement Agent improves unclear drafts, or a Fact-Checking Agent verifies controversial claims.
- Finally, a Polishing Agent performs a last refinement pass before delivering the final answer.
The entire agent workflow is orchestrated dynamically using LangGraph.
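Conceptually, each agent reads from and writes to a shared state object that LangGraph threads through the graph. The sketch below shows that pattern in plain Python with no dependencies; all function names and state keys are illustrative placeholders, not the project's actual API.

```python
# Dependency-free sketch of agents sharing one state dict, mirroring how
# LangGraph passes state between nodes. Names are illustrative only.

def research(state: dict) -> dict:
    # The real Research Agent calls the Tavily search API here.
    state["research"] = f"findings about {state['query']}"
    return state

def draft(state: dict) -> dict:
    # The real Drafting Agent prompts an LLM with the research results.
    state["draft"] = f"Draft based on: {state['research']}"
    return state

def polish(state: dict) -> dict:
    # The real Polishing Agent does a final LLM editing pass.
    state["answer"] = state["draft"].strip() + " (polished)"
    return state

state = {"query": "history of solar power"}
for step in (research, draft, polish):
    state = step(state)
print(state["answer"])
```

In the actual project these functions are LangGraph nodes, and the conditional refine/fact-check branch sits between drafting and polishing.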
- Online Research: Gathers fresh web data using the Tavily Search API.
- Answer Drafting: Summarizes and organizes research into clear, professional responses.
- Refinement and Fact-Checking: Automatically routes drafts for improvement or fact-checking based on detected needs.
- Final Polishing: Every answer is finalized through an editing agent for clarity and professionalism.
- Dynamic Routing: Uses LangGraph conditional routing to adapt the workflow based on research content.
- Streamlit Frontend: Clean web app interface with chat history and a minimalistic user experience.
- Python 3.10+
- API keys for:
  - Tavily Search API
  - Groq LLM API (or another supported LLM)
1. Clone the repository:

   ```shell
   git clone https://github.com/your-username/deep-research-assistant.git
   cd deep-research-assistant
   ```

2. Install dependencies:

   ```shell
   pip install -r requirements.txt
   ```

3. Create a `.env` file and set your API keys:

   ```shell
   TAVILY_API_KEY="your_tavily_key"
   GROQ_API_KEY="your_groq_key"
   ```

4. Run the application:

   ```shell
   streamlit run app.py
   ```
1. Clone the repository.

2. Create and configure `.env` with your keys.

3. Build and run the Docker container:

   ```shell
   docker build -t deep-research-assistant .
   docker run -p 8501:8501 deep-research-assistant
   ```
Once running, the app will:
- Let you enter a research query.
- Automatically:
  - Search for information.
  - Draft an answer.
  - Refine or fact-check the answer as needed.
  - Finalize the output.
- Display the final answer on the main page.
- Store previous queries and answers in the sidebar under "Chat History."
- ResearchAgent: Uses Tavily search to gather information.
- AnswerDrafter: Converts research into an initial draft.
- RefinerAgent: Enhances unclear or incomplete drafts.
- FactCheckerAgent: Verifies controversial information based on research.
- FinalPolishAgent: Improves overall clarity, structure, and professionalism.
- Controller (LangGraph): Decides whether to refine, fact-check, or finalize the draft based on heuristics.
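The README does not spell out the controller's heuristics, so the routing rules below are an assumption: a simple keyword-based sketch of how a draft might be classified as unclear, controversial, or ready to finalize. The marker lists and node names are illustrative.

```python
# Illustrative keyword heuristics for the controller's routing decision.
# The actual rules in the project may differ.

UNCLEAR_MARKERS = ("unclear", "ambiguous", "not sure", "might be")
CONTROVERSIAL_MARKERS = ("controversial", "disputed", "debated", "contested")

def route(draft: str) -> str:
    """Return the next node name: 'refine', 'fact_check', or 'finalize'."""
    text = draft.lower()
    if any(marker in text for marker in UNCLEAR_MARKERS):
        return "refine"
    if any(marker in text for marker in CONTROVERSIAL_MARKERS):
        return "fact_check"
    return "finalize"
```

A function with this shape is exactly what LangGraph's conditional-edge mechanism expects: it inspects the state and returns the name of the next node.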
1. The Research Agent fetches online information.

2. The Drafting Agent organizes findings into a structured answer.

3. Routing logic checks the content:
   - If unclear → send to the RefinerAgent.
   - If controversial → send to the FactCheckerAgent.
   - Otherwise → proceed.

4. All paths finish with a final polishing step.

5. The final answer is shown in the UI and stored in chat history.
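The workflow above can be sketched end to end as a single driver with plain function dispatch standing in for LangGraph's conditional edges. Agent behaviour is passed in as callables so the control flow is visible; every name here is a stand-in, since the real agents call Tavily and an LLM.

```python
# End-to-end sketch of the workflow. In the real project these steps are
# LangGraph nodes and the branch is a conditional edge; here the agents
# are injected as plain callables.

def run_pipeline(query: str, research_fn, draft_fn, router,
                 refine_fn, fact_check_fn, polish_fn) -> dict:
    state = {"query": query, "path": []}
    state["research"] = research_fn(query)                 # step 1
    state["draft"] = draft_fn(state["research"])           # step 2
    branch = router(state["draft"])                        # step 3
    state["path"].append(branch)
    if branch == "refine":
        state["draft"] = refine_fn(state["draft"])
    elif branch == "fact_check":
        state["draft"] = fact_check_fn(state["draft"], state["research"])
    state["answer"] = polish_fn(state["draft"])            # step 4
    return state                                           # step 5: UI renders state["answer"]
```

Recording the chosen branch in `state["path"]` makes the routing decision inspectable, which is useful when debugging why a draft was (or was not) fact-checked.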
- Add multi-turn conversation or follow-up question capability.
- Support additional LLM backends beyond Groq.
- Implement more fine-grained control over research depth and breadth.