🚧 Note: This repository is under active development. We will be continuously updating with model weights, training code, and more resources in the coming weeks. Stay tuned! ✨
Natural Language to SQL (NL2SQL) enables intuitive interaction with databases by transforming natural language queries into structured SQL statements. Despite recent advances in enhancing human-computer interaction within database applications, significant challenges persist, particularly in inference performance for complex scenarios involving multi-table joins and nested queries. Current methodologies primarily rely on supervised fine-tuning (SFT) to train NL2SQL models, which may limit adaptability and interpretability in new environments (e.g., finance and healthcare). To enhance the reasoning performance of NL2SQL models in such complex situations, we introduce SQL-R1, a novel NL2SQL reasoning model trained with reinforcement learning (RL) algorithms. We design a specialized RL-based reward function tailored for NL2SQL tasks and discuss the impact of cold start on the effectiveness of intensive training. In addition, we achieve competitive accuracy using only a small amount of synthetic NL2SQL data for augmented training, and further explore data engineering for RL. In our experiments, SQL-R1 achieves execution accuracy of 88.6% and 67.1% on the Spider and BIRD benchmarks, respectively.
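The exact reward design is described in the paper; as a rough illustration of what an execution-based reward for NL2SQL can look like (a generic formulation, not necessarily the one SQL-R1 uses), consider the sketch below. It assumes the policy wraps its final SQL in an `<answer>` block and that the gold query executes cleanly:

```python
import re
import sqlite3

def nl2sql_reward(response: str, gold_sql: str, db_path: str) -> float:
    """Generic execution-based reward sketch for an NL2SQL rollout.
    Not the exact function used to train SQL-R1 (see the paper)."""
    # Format check: assume the final SQL is wrapped in an <answer> block.
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    if match is None:
        return -1.0  # malformed output is penalized outright
    pred_sql = match.group(1).strip()

    conn = sqlite3.connect(db_path)
    try:
        gold_rows = set(conn.execute(gold_sql).fetchall())
        try:
            pred_rows = set(conn.execute(pred_sql).fetchall())
        except sqlite3.Error:
            return -0.5  # well-formatted but non-executable SQL
        return 1.0 if pred_rows == gold_rows else 0.0
    finally:
        conn.close()
```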
```bibtex
@article{ma2025sql,
  title={SQL-R1: Training Natural Language to SQL Reasoning Model By Reinforcement Learning},
  author={Ma, Peixian and Zhuang, Xialie and Xu, Chengjin and Jiang, Xuhui and Chen, Ran and Guo, Jian},
  journal={arXiv preprint arXiv:2504.08600},
  year={2025}
}
```

- [2025.05.27] 🎉 We have released the full code of SQL-R1.
- [2025.05.21] 🎉 We have released our model weights on Hugging Face! Check out the Model Weights section below.
- [2025.04.11] 📑 Our paper is now available on arXiv.
- 📊 Release model weights on HuggingFace
- 🔧 Open source training code
- 📝 Detailed documentation
- 🛠️ Environment setup guide
We are excited to release our SQL-R1 model weights! You can find them on Hugging Face:
| Model | Size | Link |
|---|---|---|
| SQL-R1 (3B) | 3B | 🤗 Download |
| SQL-R1 (7B) | 7B | 🤗 Download |
| SQL-R1 (14B) | 14B | 🤗 Download |
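The released checkpoints build on the Qwen2.5-Coder-Instruct base models (see the project structure below), so they should load with the standard `transformers` API. A minimal sketch; the repository ID is a placeholder for the actual link in the table:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo ID; use the Hugging Face link from the table above.
model_id = "MPX0222/SQL-R1-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```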
This repository is organized as follows:
```
SQL-R1/
├── data/       # Data processing scripts and datasets
│   ├── Spider/
│   └── BIRD/
├── models/     # Base models
│   ├── Qwen2.5-Coder-3B-Instruct/
│   └── Qwen2.5-Coder-7B-Instruct/
├── sh/         # Scripts for training, inference and evaluation
├── src/        # Source code
└── verl/       # Verl framework
```
- Python 3.9+
- CUDA 12.0+
- 8 x 80GB+ GPU (for training) / 2 x 40GB GPU (for inference)
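Whether a local machine meets these requirements can be probed with standard `torch` calls, e.g.:

```python
import torch

# Report CUDA version and per-GPU memory against the requirements above.
print(f"CUDA available: {torch.cuda.is_available()} (CUDA {torch.version.cuda})")
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")
```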
- Clone the repository:
```bash
git clone https://github.com/MPX0222/SQL-R1.git
cd SQL-R1
```

- Create and activate a virtual environment (recommended):

```bash
conda create -n sqlr1 python=3.9
conda activate sqlr1
```

- Install dependencies:

```bash
pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu121
pip install vllm==0.6.3 ray
pip install flash-attn --no-build-isolation
pip install -e .  # For verl integration
pip install wandb IPython matplotlib sqlparse func_timeout
```
- Download the model weights from Hugging Face and put them in the `models/` directory.
- Copy the database information in the `db_info` directory to the related dataset directory (`data/Spider/`, `data/BIRD/`). A quick path check is sketched below.
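As a sanity check that these files landed in the expected places, an illustrative snippet like the following can be run from the repository root (directory names follow the project structure above; adjust for the weights you actually downloaded):

```python
from pathlib import Path

# Illustrative check that the setup steps above were completed.
expected = [
    Path("models"),       # model weights from Hugging Face
    Path("data/Spider"),  # Spider dataset + copied db_info
    Path("data/BIRD"),    # BIRD dataset + copied db_info
]
for p in expected:
    print(f"{p}: {'ok' if p.exists() else 'MISSING'}")
```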
Note: Please set the related data paths and parameters before running the scripts.
- Run training:
```bash
sh sh/train_sqlr1.sh
```

- Run inference:

```bash
sh sh/inference.sh
```
- Run evaluation:

```bash
# evaluate spider
sh sh/eval_spider.sh
# evaluate bird
sh sh/eval_bird.sh
```
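Both scripts report execution accuracy: a predicted query counts as correct when its execution result matches the gold query's result on the target database. A minimal sketch of that comparison (the actual evaluation code, adapted from OmniSQL, additionally handles edge cases; `func_timeout` is in the dependency list above):

```python
import sqlite3
from func_timeout import FunctionTimedOut, func_timeout

def run_sql(db_path: str, sql: str) -> frozenset:
    """Execute a query and return its result set."""
    conn = sqlite3.connect(db_path)
    try:
        return frozenset(conn.execute(sql).fetchall())
    finally:
        conn.close()

def execution_accuracy(examples, timeout_s: float = 30.0) -> float:
    """examples: iterable of (pred_sql, gold_sql, db_path) triples."""
    correct, total = 0, 0
    for pred_sql, gold_sql, db_path in examples:
        total += 1
        try:
            pred = func_timeout(timeout_s, run_sql, args=(db_path, pred_sql))
            gold = func_timeout(timeout_s, run_sql, args=(db_path, gold_sql))
        except (sqlite3.Error, FunctionTimedOut):
            continue  # failing or timed-out queries count as wrong
        correct += int(pred == gold)
    return correct / max(total, 1)
```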
We thank the OmniSQL team: we follow their evaluation code and database information retrieval code, and have adapted and modified their evaluation scripts for our project.
