This repository demonstrates how to build LLM applications using LangGraph, FastAPI, and Streamlit. It focuses on streaming both token outputs and reasoning state from LangGraph through FastAPI to any frontend client, with Streamlit as the example implementation.
This project provides three progressive examples:
- AsyncIO Console Unit Graph (`01_asyncio_console_unit_graph.py`): a simple graph with a single node that handles both joke and poem generation.
- AsyncIO Console Basic Graph (`02_asyncio_console_basic_graph.py`): a more complex graph with separate nodes for generating jokes and poems.
- FastAPI + LangGraph + Streamlit (`03_fastapi_langgraph_streamlit/`): a complete web application that streams LangGraph outputs to a Streamlit UI through FastAPI.
Together, the examples show how to:
- Stream both tokens and thinking/reasoning state from LLMs
- Connect LangGraph to any frontend through FastAPI
- Demonstrate real-time UI updates with Streamlit
- Show both final content and reasoning process in the UI
To get started:

- Clone this repository:

  ```bash
  git clone https://github.com/yigit353/LangGraph-FastAPI-Streamlit.git
  cd LangGraph-FastAPI-Streamlit
  ```
- Create and activate a virtual environment:

  ```bash
  python -m venv venv

  # On Windows
  venv\Scripts\activate

  # On macOS/Linux
  source venv/bin/activate
  ```
- Install required packages:

  ```bash
  pip install -r requirements.txt
  ```
- Copy example.env to .env:

  ```bash
  cp example.env .env
  ```
- Get a DeepSeek API key:
  - Visit the DeepSeek Platform
  - Create an account and generate an API key
  - Add your API key to the .env file:

    ```
    DEEPSEEK_API_KEY=your_api_key_here
    ```
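The scripts need this key at runtime. A minimal sketch of the usual python-dotenv pattern for reading it (an assumption about how this repo loads its config, not a confirmed detail of the scripts):

```python
# Minimal sketch: load .env into the process environment via python-dotenv
# (assumed to be how the example scripts pick up the key).
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
api_key = os.environ["DEEPSEEK_API_KEY"]  # raises KeyError if the key is missing
```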
Once set up, run the first example:

```bash
python 01_asyncio_console_unit_graph.py
```
This example demonstrates a simple graph with a single node that handles both joke and poem generation.
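For orientation, here is a minimal sketch of what a one-node graph in this style looks like. It is illustrative, not the repository's actual script; the state fields, node name, and DeepSeek client settings are assumptions:

```python
# Hypothetical sketch of a one-node LangGraph graph; field and node names
# are assumptions, not copied from 01_asyncio_console_unit_graph.py.
import asyncio
import os
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph


class State(TypedDict):
    kind: str    # "joke" or "poem"
    topic: str
    output: str


llm = ChatOpenAI(
    model="deepseek-chat",                # assumed model name
    base_url="https://api.deepseek.com",  # assumed DeepSeek endpoint
    api_key=os.environ["DEEPSEEK_API_KEY"],
)


async def generate(state: State) -> dict:
    # A single node handles both kinds of output, switching on state["kind"].
    prompt = f"Write a short {state['kind']} about {state['topic']}."
    response = await llm.ainvoke(prompt)
    return {"output": response.content}


builder = StateGraph(State)
builder.add_node("generate", generate)
builder.add_edge(START, "generate")
builder.add_edge("generate", END)
app = builder.compile()

if __name__ == "__main__":
    result = asyncio.run(app.ainvoke({"kind": "joke", "topic": "cats"}))
    print(result["output"])
```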
Run the second example:

```bash
python 02_asyncio_console_basic_graph.py
```
This example shows a more complex graph with separate nodes for generating jokes and poems.
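A sketch of the two-node variant, again with assumed names; the key difference is a conditional edge that routes each request to a dedicated node per content type:

```python
# Hypothetical sketch of a two-node graph with a conditional routing edge;
# node and field names are assumptions, not copied from the script.
import asyncio
import os
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph


class State(TypedDict):
    kind: str
    topic: str
    output: str


llm = ChatOpenAI(
    model="deepseek-chat",                # assumed model name
    base_url="https://api.deepseek.com",  # assumed DeepSeek endpoint
    api_key=os.environ["DEEPSEEK_API_KEY"],
)


async def write_joke(state: State) -> dict:
    response = await llm.ainvoke(f"Tell a short joke about {state['topic']}.")
    return {"output": response.content}


async def write_poem(state: State) -> dict:
    response = await llm.ainvoke(f"Write a four-line poem about {state['topic']}.")
    return {"output": response.content}


def route(state: State) -> str:
    # Conditional edge: pick the node that matches the requested kind.
    return "write_joke" if state["kind"] == "joke" else "write_poem"


builder = StateGraph(State)
builder.add_node("write_joke", write_joke)
builder.add_node("write_poem", write_poem)
builder.add_conditional_edges(START, route)
builder.add_edge("write_joke", END)
builder.add_edge("write_poem", END)
app = builder.compile()

if __name__ == "__main__":
    result = asyncio.run(app.ainvoke({"kind": "poem", "topic": "the sea"}))
    print(result["output"])
```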
To run the full web application:

- Start the FastAPI server:

  ```bash
  cd 03_fastapi_langgraph_streamlit
  python server.py
  ```

- In a new terminal, start the Streamlit UI:

  ```bash
  cd 03_fastapi_langgraph_streamlit
  streamlit run streamlit_ui.py
  ```
- Navigate to http://localhost:8501 in your browser to use the application.
- (Optional) Test the API independently:

  ```bash
  cd 03_fastapi_langgraph_streamlit
  python test_client.py
  ```
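If you prefer to probe the stream by hand, a minimal SSE client along these lines works against any SSE endpoint. The URL, port, and query parameters below are assumptions for illustration, not necessarily what `test_client.py` or `server.py` use:

```python
# Hypothetical minimal SSE client; the route and parameters are assumptions.
import asyncio

import httpx


async def main() -> None:
    url = "http://localhost:8000/stream"        # assumed host, port, and path
    params = {"kind": "joke", "topic": "cats"}  # assumed query parameters
    async with httpx.AsyncClient(timeout=None) as client:
        async with client.stream("GET", url, params=params) as response:
            async for line in response.aiter_lines():
                # SSE frames arrive as "data: <payload>" lines.
                if line.startswith("data: "):
                    print(line[len("data: "):])


if __name__ == "__main__":
    asyncio.run(main())
```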
This project addresses several key challenges in building AI applications:
- Token Streaming: Most LLM applications need to stream tokens to provide a responsive UI. This project demonstrates how to stream both final outputs and intermediate reasoning (see the sketch after this list).
- Separation of Concerns: By separating the LLM logic (LangGraph), the API layer (FastAPI), and the UI (Streamlit), the architecture stays modular and maintainable.
- Frontend Agnostic: While this example uses Streamlit, the FastAPI backend can serve any frontend (React, Vue, etc.) through server-sent events (SSE); the sketch after this list shows the shape of such an endpoint.
- Reasoning Transparency: The UI shows both the final output and the LLM's reasoning process, making the system more transparent and trustworthy.
- Progressive Learning Path: The three examples progress from simple to complex, making it easier to understand each component.
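To make the streaming, SSE, and transparency points concrete, here is a hedged sketch of an endpoint that re-emits LangGraph events as SSE, separating answer tokens from reasoning. The route path, payload shape, the `my_graph` module, and the `reasoning_content` key (which depends on the chat-model client and the model in use) are all assumptions; the repository's `server.py` may differ:

```python
# Hypothetical sketch of an SSE endpoint over a compiled LangGraph graph.
# Route path, payload shape, and the imported module are assumptions.
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

from my_graph import graph  # hypothetical module exposing a compiled graph

app = FastAPI()


@app.get("/stream")
async def stream(kind: str, topic: str):
    async def event_source():
        async for event in graph.astream_events(
            {"kind": kind, "topic": topic}, version="v2"
        ):
            if event["event"] != "on_chat_model_stream":
                continue
            chunk = event["data"]["chunk"]
            # Reasoning-capable models (e.g. deepseek-reasoner) may expose
            # their chain of thought separately; the exact key depends on
            # the chat-model client, so treat this as an assumption.
            thinking = chunk.additional_kwargs.get("reasoning_content")
            if thinking:
                yield f"data: {json.dumps({'type': 'thinking', 'text': thinking})}\n\n"
            if chunk.content:
                yield f"data: {json.dumps({'type': 'token', 'text': chunk.content})}\n\n"
        yield "data: [DONE]\n\n"

    # text/event-stream is the standard SSE media type; any SSE-capable
    # frontend (React, Vue, plain EventSource, ...) can consume this.
    return StreamingResponse(event_source(), media_type="text/event-stream")
```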
This project is licensed under the MIT License.
Contributions are welcome! Please feel free to submit a Pull Request.