RAGFlow is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs. It offers a streamlined RAG workflow adaptable to enterprises of any scale. Powered by a converged context engine and pre-built agent templates, RAGFlow enables developers to transform complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.
Try our demo at https://demo.ragflow.io.
- 2025-12-26 Supports 'Memory' for AI agent.
- 2025-11-19 Supports Gemini 3 Pro.
- 2025-11-12 Supports data synchronization from Confluence, S3, Notion, Discord, Google Drive.
- 2025-10-23 Supports MinerU & Docling as document parsing methods.
- 2025-10-15 Supports orchestrable ingestion pipeline.
- 2025-08-08 Supports OpenAI's latest GPT-5 series models.
- 2025-08-01 Supports agentic workflow and MCP.
- 2025-05-23 Adds a Python/JavaScript code executor component to Agent.
- 2025-05-05 Supports cross-language query.
- 2025-03-19 Supports using a multi-modal model to make sense of images within PDF or DOCX files.
⭐️ Star our repository to stay up-to-date with exciting new features and improvements! Get instant notifications for new releases! 🌟
- Deep document understanding-based knowledge extraction from unstructured data with complicated formats.
- Finds "needle in a data haystack" of literally unlimited tokens.
- Intelligent and explainable.
- Plenty of template options to choose from.
- Visualization of text chunking to allow human intervention.
- Quick view of the key references and traceable citations to support grounded answers.
- Supports Word, slides, Excel, txt, images, scanned copies, structured data, web pages, and more.
- Streamlined RAG orchestration catering to both personal use and large businesses.
- Configurable LLMs as well as embedding models.
- Multiple recall paired with fused re-ranking.
- Intuitive APIs for seamless integration with your business.
- CPU >= 4 cores
- RAM >= 16 GB
- Disk >= 50 GB
- Docker >= 24.0.0 & Docker Compose >= v2.26.1
- gVisor: Required only if you intend to use the code executor (sandbox) feature of RAGFlow.
> [!TIP]
> If you have not installed Docker on your local machine (Windows, Mac, or Linux), see Install Docker Engine.
- Ensure `vm.max_map_count` >= 262144:

  To check the value of `vm.max_map_count`:

  ```bash
  $ sysctl vm.max_map_count
  ```

  Reset `vm.max_map_count` to a value of at least 262144 if it is not:

  ```bash
  # In this case, we set it to 262144:
  $ sudo sysctl -w vm.max_map_count=262144
  ```

  This change will be reset after a system reboot. To make your change permanent, add or update the `vm.max_map_count` value in /etc/sysctl.conf accordingly:

  ```bash
  vm.max_map_count=262144
  ```
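As a quick sanity check, the read-only lookup above can be scripted. This is a minimal sketch: it only reports the value and never changes it, so it needs no root privileges.

```shell
# Warn when vm.max_map_count is below the 262144 that Elasticsearch requires.
# Falls back to 0 if sysctl is unavailable on this system.
required=262144
current=$(sysctl -n vm.max_map_count 2>/dev/null || echo 0)
if [ "$current" -lt "$required" ]; then
  echo "vm.max_map_count=$current is too low; raise it to at least $required"
else
  echo "vm.max_map_count=$current is sufficient"
fi
```

Raising the value still requires the `sudo sysctl -w` command shown above.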
- Clone the repo:

  ```bash
  $ git clone https://github.com/infiniflow/ragflow.git
  ```
- Start up the server using the pre-built Docker images:

  > [!CAUTION]
  > All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64. If you are on an ARM64 platform, follow this guide to build a Docker image compatible with your system.

  The command below downloads the `v0.23.1` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.23.1`, update the `RAGFLOW_IMAGE` variable accordingly in docker/.env before using `docker compose` to start the server.

  ```bash
  $ cd ragflow/docker

  # Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases)
  # This step ensures the entrypoint.sh file in the code matches the Docker image version.
  # git checkout v0.23.1

  # Use CPU for DeepDoc tasks:
  $ docker compose -f docker-compose.yml up -d

  # To use GPU to accelerate DeepDoc tasks:
  # sed -i '1i DEVICE=gpu' .env
  # docker compose -f docker-compose.yml up -d
  ```

Note: Prior to `v0.22.0`, we provided both images with embedding models and slim images without embedding models. Details as follows:
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
|---|---|---|---|
| v0.21.1 | ≈9 | ✔️ | Stable release |
| v0.21.1-slim | ≈2 | ❌ | Stable release |
Starting with `v0.22.0`, we ship only the slim edition and no longer append the `-slim` suffix to the image tag.
- Check the server status after having the server up and running:

  ```bash
  $ docker logs -f docker-ragflow-cpu-1
  ```

  The following output confirms a successful launch of the system:

  ```
       ____   ___    ______ ______ __
      / __ \ /   |  / ____// ____// /____  _      __
     / /_/ // /| | / / __ / /_   / // __ \| | /| / /
    / _, _// ___ |/ /_/ // __/  / // /_/ /|  |/ |/ /
   /_/ |_|/_/  |_|\____//_/    /_/ \____/ |__/|__/

   * Running on all addresses (0.0.0.0)
  ```

  If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network abnormal` error because, at that moment, your RAGFlow may not be fully initialized.

- In your web browser, enter the IP address of your server and log in to RAGFlow.

  With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (sans port number), as the default HTTP serving port `80` can be omitted when using the default configurations.
- In service_conf.yaml.template, select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key.

  See llm_api_key_setup for more information.
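For illustration, a `user_default_llm` block in service_conf.yaml.template might look like the following. The factory name and key are placeholders; check the template itself for the exact field names available.

```yaml
user_default_llm:
  factory: 'OpenAI'       # the LLM factory you selected (placeholder value)
  api_key: 'sk-your-key'  # replace with your real API key
```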
The show is on!
When it comes to system configurations, you will need to manage the following files:
- .env: Keeps the fundamental setups for the system, such as `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, and `MINIO_PASSWORD`.
- service_conf.yaml.template: Configures the back-end services. The environment variables in this file are automatically populated when the Docker container starts. Any environment variables set within the Docker container are available for use, allowing you to customize service behavior based on the deployment environment.
- docker-compose.yml: The system relies on docker-compose.yml to start up.
The ./docker/README file provides a detailed description of the environment settings and service configurations, which can be used as `${ENV_VARS}` in the service_conf.yaml.template file.
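For example, a snippet of service_conf.yaml.template might reference those variables like this (the values below are illustrative; the exact entries and defaults live in the template itself):

```yaml
mysql:
  user: '${MYSQL_USER:-root}'                     # falls back to root if unset
  password: '${MYSQL_PASSWORD:-infini_rag_flow}'  # populated from docker/.env
```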
To update the default HTTP serving port (80), go to docker-compose.yml and change `80:80` to `<YOUR_SERVING_PORT>:80`.
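For instance, to serve on host port 8080 instead, the mapping in docker-compose.yml would look like this (the service name and host port here are illustrative; keep whatever service name your compose file already uses):

```yaml
services:
  ragflow:
    ports:
      - "8080:80"   # host port 8080 -> container port 80
```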
Updates to the above configurations require a reboot of all containers to take effect:

```bash
$ docker compose -f docker-compose.yml up -d
```
RAGFlow uses Elasticsearch by default for storing full text and vectors. To switch to Infinity, follow these steps:
- Stop all running containers:

  ```bash
  $ docker compose -f docker/docker-compose.yml down -v
  ```

  > [!WARNING]
  > `-v` will delete the docker container volumes, and the existing data will be cleared.
- Set `DOC_ENGINE` in docker/.env to `infinity`.

- Start the containers:

  ```bash
  $ docker compose -f docker-compose.yml up -d
  ```
> [!WARNING]
> Switching to Infinity on a Linux/arm64 machine is not yet officially supported.
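After the switch, the relevant line in docker/.env simply reads:

```
DOC_ENGINE=infinity
```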
This image is approximately 2 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
```

Or if you are behind a proxy, you can pass proxy arguments:
```bash
docker build --platform linux/amd64 \
  --build-arg http_proxy=http://YOUR_PROXY:PORT \
  --build-arg https_proxy=http://YOUR_PROXY:PORT \
  -f Dockerfile -t infiniflow/ragflow:nightly .
```

RAGFlow supports flexible dependency configurations for faster builds and smaller images.
| Use Case | RAGFLOW_EXTRAS Value |
|---|---|
| Full (default) | `all` |
| Full-lite (excludes deepdoc; uses Elasticsearch/OpenSearch) | `full-lite` |
| Docling sidecar | `docling-sidecar` |
| With S3-compatible storage (e.g., Garage, AWS) + deepdoc | `db-postgres,storage-s3,vectorstore-elasticsearch,deepdoc` |
| Custom selection | `docling-sidecar,llm-anthropic,observability` |
The full-lite option is a recommended balance of features and image size. It differs from the standard installation in four key ways:

- Comprehensive features: It includes all LLM providers, integrations, web search tools, GraphRAG, and agent capabilities.
- Excludes deepdoc: The heavy `deepdoc` parsing suite is excluded, as it is intended for use with a Docling sidecar.
- Vector database: It defaults to Elasticsearch or OpenSearch for document vectors.
- No observability by default: The `full-lite` option DOES NOT include the `observability` group. If you need pgvector-backed telemetry features, you must add it explicitly (e.g., `full-lite,observability`).
| Group | Description |
|---|---|
| `docling-sidecar` | Core dependencies for Docling sidecar users (no deepdoc) |
| `db-postgres` / `db-mysql` | Application database driver |
| `storage-minio` / `storage-s3` / `storage-azure` | Object storage backend |
| `vectorstore-elasticsearch` / `vectorstore-opensearch` | Vector database |
| `deepdoc` | Built-in document processing (skip if using Docling) |
| `llm-*` | LLM provider integrations (e.g., `llm-openai`, `llm-azure`) |
| `integrations-*` | Data source connectors (e.g., `integrations-postgres`, `integrations-s3`) |
| `observability` | Monitoring and tracing integrations (includes pgvector support) |
> [!NOTE]
> `*` is a wildcard for provider or connector names (e.g., `llm-openai`, `integrations-postgres`), so you can include only the ones you need.
```bash
# Full image (default - backward compatible)
docker build -t ragflow:full .

# Docling user (no deepdoc, smaller image)
docker build --build-arg RAGFLOW_EXTRAS="docling-sidecar" -t ragflow:docling .

# Custom selection
docker build --build-arg RAGFLOW_EXTRAS="docling-sidecar,llm-anthropic,observability" -t ragflow:custom .
```

- Skip `deepdoc` in RAGFLOW_EXTRAS when building.
- Ensure RAGFlow and Docling share a Docker network for container-name resolution. The network name depends on your Docker Compose setup (e.g., `ragflow_default`, `docker_default`). Discover the correct network name by running:

  ```bash
  docker network ls
  ```
- Run Docling as a sidecar container. Choose one networking approach:

  **Option A: Shared Docker network** (recommended for Docker Compose)

  ```bash
  # Replace <network> with the network name found above (e.g., ragflow_default)
  docker run --name docling --network <network> ds4sd/docling-serve
  ```

  > [!TIP]
  > RAGFlow communicates with Docling over the internal container network, so no port mapping is required for RAGFlow-to-Docling communication. You only need to add the port mapping (`-p 5001:5001`) if you want to connect to Docling from your host machine for manual testing or debugging.

  Then set `DOCLING_BASE_URL` in `.env`. It MUST be the full base URL including protocol and port:

  ```
  DOCLING_BASE_URL=http://docling:5001
  ```

  (Recommended for clarity and reliable container-to-container resolution.)
  **Option B: Host networking** (simpler for development)

  ```bash
  docker run -p 5001:5001 ds4sd/docling-serve
  ```

  Then set `DOCLING_BASE_URL` in `.env` based on your platform:

  - Mac/Windows: `DOCLING_BASE_URL=http://host.docker.internal:5001`
  - Linux: Check your `docker0` bridge IP via `ip addr show docker0` (commonly `172.17.0.1`), then set `DOCLING_BASE_URL=http://<docker0-ip>:5001`

  > [!NOTE]
  > On Linux, Option A (shared network) is highly recommended over Option B to avoid the need for platform-specific `docker0` IP discovery.

  > [!IMPORTANT]
  > `DOCLING_BASE_URL` is used directly by the RAGFlow service and must include the port (e.g., `:5001`) even if Docling is running on the default port, to avoid ambiguity in container networking.
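To avoid typos in the URL, it can be composed from its parts. A small sketch follows; the defaults below match the Linux `docker0` case discussed above and are only examples for your own values.

```shell
# Compose DOCLING_BASE_URL from a host and port; both defaults are examples
# (docker0 is commonly 172.17.0.1 on Linux, 5001 is the port used above).
DOCLING_HOST="${DOCLING_HOST:-172.17.0.1}"
DOCLING_PORT="${DOCLING_PORT:-5001}"
DOCLING_BASE_URL="http://${DOCLING_HOST}:${DOCLING_PORT}"
echo "DOCLING_BASE_URL=${DOCLING_BASE_URL}"   # paste this line into .env
```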
- Configure the system to use Docling by editing the configuration template before building or starting your containers:

  Set `layout_recognizer: "Docling"` in service_conf.yaml.template. Add it under the `ragflow` section or a dedicated `recognizer` block for clarity:

  ```yaml
  ragflow:
    # ... existing config ...
    layout_recognizer: "Docling"
  ```

  > [!NOTE]
  > Always edit the `.template` file in the `docker/` directory; it is rendered into the final `service_conf.yaml` used at runtime when the containers start.
- Install `uv` and `pre-commit`, or skip this step if they are already installed:

  ```bash
  pipx install uv pre-commit
  ```
- Clone the source code and install Python dependencies:

  ```bash
  git clone https://github.com/infiniflow/ragflow.git
  cd ragflow/
  uv sync --python 3.12  # install RAGFlow dependent python modules
  uv run scripts/download_deps.py
  pre-commit install
  ```
- Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:

  ```bash
  docker compose -f docker/docker-compose-base.yml up -d
  ```

  Add the following line to /etc/hosts to resolve all hosts specified in docker/.env to `127.0.0.1`:

  ```
  127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
  ```
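The append can be made idempotent so repeated runs do not duplicate the line. Here is a sketch; it writes to a local demo file by default so it is safe to try, and you would point `HOSTS_FILE` at /etc/hosts (running with sudo) for real use.

```shell
# Append the alias line only if it is not already present (exact-line match).
# HOSTS_FILE defaults to a local demo file rather than the real /etc/hosts.
HOSTS_FILE="${HOSTS_FILE:-./hosts.demo}"
LINE="127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager"
touch "$HOSTS_FILE"
grep -qxF "$LINE" "$HOSTS_FILE" || echo "$LINE" >> "$HOSTS_FILE"
```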
- If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:

  ```bash
  export HF_ENDPOINT=https://hf-mirror.com
  ```
- If your operating system does not have jemalloc, install it as follows:

  ```bash
  # Ubuntu
  sudo apt-get install libjemalloc-dev
  # CentOS
  sudo yum install jemalloc
  # OpenSUSE
  sudo zypper install jemalloc
  # macOS
  brew install jemalloc
  ```
- Launch the backend service:

  ```bash
  source .venv/bin/activate
  export PYTHONPATH=$(pwd)
  bash docker/launch_backend_service.sh
  ```
- Install frontend dependencies:

  ```bash
  cd web
  npm install
  ```
- Launch the frontend service:

  ```bash
  npm run dev
  ```

  The terminal output will confirm a successful launch of the system.
- Stop the RAGFlow front-end and back-end services after development is complete:

  ```bash
  pkill -f "ragflow_server.py|task_executor.py"
  ```
See the RAGFlow Roadmap 2026
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our Contribution Guidelines first.





