This project provides multiple LangGraph-based implementations (strategies) of a universal assistant powered by MCP (Model Context Protocol) servers. It aims to showcase how LangGraph and MCP can be combined to build modular, extensible AI agents capable of interacting with various tools and data sources.
This documentation provides the details necessary for Large Language Models (LLMs) and AI agents to understand, diagnose, and enhance this project workspace. It covers:
- Project Structure and Purpose: High-level description of the project, its goals, and the directory structure.
- Key Dependencies: Information about project dependencies, Python version, and build system details extracted from `pyproject.toml`.
- LangGraph Setup: Details on how LangGraph is configured and integrated, based on `langgraph.json`.
- Implementation Strategies: Overview of the different LangGraph-based assistant strategies implemented and links to their specific documentation.
- Common Code Patterns: Documentation of recurring code patterns and conventions used throughout the project, including LLM node implementation, model loading, and MCP interaction (an illustrative sketch of these patterns follows this list).
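
To make those pattern descriptions concrete, the following is a minimal, illustrative sketch of how an MCP-backed LLM node might be wired into a LangGraph graph. It is not this project's actual code: the MCP server configuration, the model identifier (`openai:gpt-4o`), and names such as `build_graph` and `llm_node` are assumptions, and the `langchain-mcp-adapters` API may differ between versions.

```python
# Illustrative sketch only -- not the project's actual implementation.
# Assumed packages: langgraph, langchain, langchain-mcp-adapters.
import asyncio

from langchain.chat_models import init_chat_model
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.graph import END, START, MessagesState, StateGraph


async def build_graph():
    # MCP interaction: connect to one or more MCP servers and expose their
    # tools as LangChain tools. The "filesystem" server below is a placeholder.
    client = MultiServerMCPClient(
        {
            "filesystem": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
                "transport": "stdio",
            }
        }
    )
    tools = await client.get_tools()

    # Model loading: the provider/model string is an assumption and would
    # normally come from environment configuration (see the .env step below).
    model = init_chat_model("openai:gpt-4o").bind_tools(tools)

    # LLM node: call the model on the accumulated messages and append its reply.
    async def llm_node(state: MessagesState) -> dict:
        response = await model.ainvoke(state["messages"])
        return {"messages": [response]}

    builder = StateGraph(MessagesState)
    builder.add_node("llm", llm_node)
    builder.add_edge(START, "llm")
    builder.add_edge("llm", END)
    return builder.compile()


if __name__ == "__main__":
    graph = asyncio.run(build_graph())
```

A compiled graph object like this is the kind of entry point that `langgraph.json` typically maps to (for example, `path/to/module.py:graph`), which is how the LangGraph API server knows what to serve.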
- Create and activate a virtual environment

  ```bash
  git clone {{REPO_URL}}
  cd langgraph-mcp
  python3 -m venv .venv
  source .venv/bin/activate
  ```
- Install the LangGraph CLI

  ```bash
  pip install -U "langgraph-cli[inmem]"
  ```

  Note: the "inmem" extra is needed to run the LangGraph API server in development mode (without requiring a Docker installation).
- Install the dependencies

  ```bash
  source .venv/bin/activate
  pip install -e .
  pip install -e ".[dev]"
  pip install -e ".[test]"
  ```
- Configure environment variables (see the note after these steps)

  ```bash
  cp env.example .env
  ```
- Run tests

  ```bash
  source .venv/bin/activate
  pytest tests/test_assistant_with_planner.py
  pytest tests/test_planner_style_agent.py
  ```
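
A note on the environment-variable step above: the actual variable names live in `env.example`, and the LangGraph CLI can also load the `.env` file itself when it is referenced from `langgraph.json`. The snippet below is only one common way such values are read from Python code, assuming `python-dotenv` is available; the key name `OPENAI_API_KEY` is an illustrative assumption, not necessarily one this project uses.

```python
# Illustrative only: reading values from .env with python-dotenv (an assumption,
# not necessarily how this project loads its configuration).
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

# "OPENAI_API_KEY" is a placeholder name; check env.example for the real keys.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("Expected provider credentials to be set in .env")
```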