
OpenBMB/UltraRAG: A Low-Code MCP Framework for Building Complex and Innovative RAG Pipelines


UltraRAG

Less Code, Lower Barrier, Faster Deployment


Homepage | Documentation | Dataset | Paper | Daily

简体中文 | English




💡 About UltraRAG

UltraRAG is the first lightweight RAG development framework built on the Model Context Protocol (MCP) architecture, jointly launched by THUNLP at Tsinghua University, NEUIR at Northeastern University, OpenBMB, and AI9stars.

Designed for research exploration and industrial prototyping, UltraRAG standardizes core RAG components (Retriever, Generation, etc.) as independent MCP Servers and pairs them with the workflow orchestration capabilities of the MCP Client. Developers can precisely orchestrate complex control structures such as conditional branches and loops through YAML configuration alone.
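As a rough illustration of the idea, a loop-and-branch pipeline in such a YAML-driven framework might be declared along the following lines. The server names, step names, and keys below are hypothetical stand-ins for illustration only, not UltraRAG's actual configuration schema:

```yaml
# Hypothetical pipeline sketch (illustrative only, not UltraRAG's real schema)
pipeline:
  - retriever.search            # call a tool on the Retriever MCP Server
  - loop:                       # iterate retrieve-then-generate
      times: 3
      steps:
        - generation.generate   # call a tool on the Generation MCP Server
        - branch:               # conditional: stop early if confident
            if: generation.is_confident
            then: [exit]
            else: [retriever.refine_query]
```

The point is that control flow (loops, branches) lives in declarative configuration, while each step is just a named tool exposed by an MCP Server.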

UltraRAG Architecture

🖥️ UltraRAG UI

UltraRAG UI transcends the boundaries of traditional chat interfaces, evolving into a visual RAG Integrated Development Environment (IDE) that combines orchestration, debugging, and demonstration.

The system features a built-in Pipeline Builder that keeps "Canvas Construction" and "Code Editing" in bidirectional real-time sync, allowing fine-grained online adjustment of pipeline parameters and prompts. An integrated AI Assistant supports the entire development lifecycle, from pipeline structure design to parameter tuning and prompt generation. Once constructed, a logic flow can be converted into an interactive dialogue system with a single click. Built-in Knowledge Base Management components let users create custom knowledge bases for document Q&A. Together these form a one-stop closed loop spanning logic construction, data governance, and application deployment.

(Demo video: UltraRAG, Seamless Integration of Development and Deployment)

✨ Key Highlights

- 🚀 Low-Code Orchestration of Complex Workflows: natively supports sequential, loop, and conditional-branch control structures. Developers only need to write a YAML configuration file to implement complex iterative RAG logic in dozens of lines.
- ⚡ Modular Extension and Reproduction: based on the MCP architecture, functions are decoupled into independent atomic Servers. A new feature only needs to be registered as a function-level Tool to integrate seamlessly into workflows, giving very high reusability.
- 📊 Unified Evaluation and Benchmark Comparison: built-in standardized evaluation workflows and ready-to-use mainstream research benchmarks. Unified metric management and baseline integration significantly improve experiment reproducibility and comparison efficiency.
- 🎯 Rapid Interactive Prototype Generation: say goodbye to tedious UI development. With a single command, pipeline logic is instantly converted into an interactive conversational Web UI, shortening the distance from algorithm to demonstration.
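The "function-level Tools" idea can be sketched in plain Python. The `ToolServer` class and its decorator below are hypothetical stand-ins to illustrate the pattern, not UltraRAG's actual API:

```python
# Minimal illustration of registering plain functions as named "tools"
# on a server object. ToolServer is a hypothetical stand-in, not a real
# UltraRAG or MCP SDK class.
class ToolServer:
    def __init__(self, name):
        self.name = name
        self.tools = {}

    def tool(self, func):
        """Decorator: register a function as a tool under its own name."""
        self.tools[func.__name__] = func
        return func

    def call(self, tool_name, **kwargs):
        """Dispatch a call to a registered tool by name."""
        return self.tools[tool_name](**kwargs)


retriever = ToolServer("retriever")

@retriever.tool
def search(query, top_k=3):
    # A real server would query an index; here we return placeholders.
    return [f"doc-{i} for {query!r}" for i in range(top_k)]


print(retriever.call("search", query="what is RAG", top_k=2))
```

Because each tool is just a registered function, a workflow engine only needs a server name and a tool name to invoke any step, which is what makes the YAML-level orchestration possible.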

📦 Installation

We provide two installation methods: local source-code installation (we recommend uv for package management) and Docker container deployment.

Method 1: Source Code Installation

We strongly recommend using uv to manage Python environments and dependencies, as it can greatly improve installation speed.

Prepare Environment

If you haven’t installed uv yet, please execute:

# Option 1: install via pip
pip install uv
# Option 2: install via the official script
curl -LsSf https://astral.sh/uv/install.sh | sh

Download Source Code

git clone https://github.com/OpenBMB/UltraRAG.git --depth 1
cd UltraRAG

Install Dependencies

Choose one of the following modes to install dependencies based on your use case:

A: Create a New Environment

Use uv sync to automatically create a virtual environment and synchronize dependencies:

uv sync

Once installed, activate the virtual environment:

# Windows CMD
.venv\Scripts\activate.bat

# Windows Powershell
.venv\Scripts\Activate.ps1

# macOS / Linux
source .venv/bin/activate

B: Install into an Existing Environment To install UltraRAG into your currently active Python environment, use uv pip:

# Core dependencies
uv pip install -e .

# Full installation
uv pip install -e ".[all]"

# On-demand installation
uv pip install -e ".[retriever]"

Method 2: Docker Container Deployment

If you prefer not to configure a local Python environment, you can deploy using Docker.

Get Code and Images

# 1. Clone the repository
git clone https://github.com/OpenBMB/UltraRAG.git --depth 1
cd UltraRAG

# 2. Prepare the image (choose one)
# Option A: Pull from Docker Hub
docker pull hdxin2002/ultrarag:v0.3.0-base-cpu # Base version (CPU)
docker pull hdxin2002/ultrarag:v0.3.0-base-gpu # Base version (GPU)
docker pull hdxin2002/ultrarag:v0.3.0          # Full version (GPU)

# Option B: Build locally
docker build -t ultrarag:v0.3.0 .


Start the Container

# Start the container (Port 5050 is mapped by default)
docker run -it --gpus all -p 5050:5050 <docker_image_name>

Note: After the container starts, UltraRAG UI will run automatically. You can directly access http://localhost:5050 in your browser to use it.

Verify Installation

After installation, run the following example command to verify that the environment is set up correctly:

ultrarag run examples/sayhello.yaml

If you see the following output, the installation is successful:

Hello, UltraRAG v3!

🚀 Quick Start

We provide complete tutorial examples from beginner to advanced, covering both academic research and industrial applications. See the Documentation for more details.

🔬 Research Experiments

Designed for researchers, providing data, experimental workflows, and visualization analysis tools.

🛠️ Demo Systems

Designed for developers and end users, providing complete UI interaction and complex application cases.

🤝 Contributing

Thanks to all contributors for their code submissions and testing. We also welcome new members to join us in building a comprehensive RAG ecosystem!

You can contribute by following the standard process: Fork this repository → Submit Issues → Create Pull Requests (PRs).

⭐ Support Us

If you find this repository helpful for your research, please consider giving us a ⭐ to show your support.

Star History Chart

💬 Contact Us

WeChat Group | Feishu Group | Discord

📖 Publications

Papers

  1. Shi Yu, Chaoyue Tang, Bokai Xu, Junbo Cui, Junhao Ran, Yukun Yan, Zhenghao Liu, Shuo Wang, Xu Han, Zhiyuan Liu, Maosong Sun. (2025) VisRAG: Vision-based Retrieval-augmented Generation on Multi-modality Documents. arXiv:2410.10594 and In Proceedings of the Thirteenth International Conference on Learning Representations (ICLR 2025).
  2. Xinze Li, Sen Mei, Zhenghao Liu, Yukun Yan, Shuo Wang, Shi Yu, Zheni Zeng, Hao Chen, Ge Yu, Zhiyuan Liu, Maosong Sun, Chenyan Xiong. (2025) RAG-DDR: Optimizing Retrieval-Augmented Generation Using Differentiable Data Rewards. arXiv:2410.13509 and In Proceedings of the Thirteenth International Conference on Learning Representations (ICLR 2025).
  3. Kunlun Zhu, Yifan Luo, Dingling Xu, Yukun Yan, Zhenghao Liu, Shi Yu, Ruobing Wang, Shuo Wang, Yishan Li, Nan Zhang, Xu Han, Zhiyuan Liu, Maosong Sun. (2025) RAGEval: Scenario Specific RAG Evaluation Dataset Generation Framework. arXiv:2408.01262 and In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025).
  4. Ruobing Wang, Qingfei Zhao, Yukun Yan, Daren Zha, Yuxuan Chen, Shi Yu, Zhenghao Liu, Yixuan Wang, Shuo Wang, Xu Han, Zhiyuan Liu, Maosong Sun. (2025) DeepNote: Note-Centric Deep Retrieval-Augmented Generation. arXiv:2410.08821 and In Findings of the Association for Computational Linguistics: EMNLP 2025.

Models

  1. Yishan Li, Wentong Chen, Yukun Yan, Mingwei Li, Sen Mei, Xiaorong Wang, Kunpeng Liu, Xin Cong, Shuo Wang, Zhong Zhang, Yaxi Lu, Zhenghao Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun. (2026) AgentCPM-Report: Interleaving Drafting and Deepening for Open-Ended Deep Research. arXiv:2602.06540.
  2. OpenBMB. MiniCPM-Embedding-Light. Hugging Face Model Card.