Welcome to mmRAG, the first comprehensive benchmark for evaluating Retrieval-Augmented Generation (RAG) systems across multiple modalities and modular components. mmRAG offers:

A Unified, Multimodal Document Pool
• Text, tables, and knowledge graph data all indexed together
• Enables truly cross-format retrieval and grounding (a hypothetical chunk layout is sketched below)
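
As a rough illustration of how such a pool can be laid out, here is a minimal sketch in Python; the field names, chunk IDs, and serialization choices are assumptions for illustration, not mmRAG's actual schema.

```python
# Hypothetical layout of a unified, multimodal chunk pool. Field names and
# IDs are illustrative assumptions, not mmRAG's actual schema.
document_pool = [
    {   # plain-text passage
        "chunk_id": "nq_chunk_42",
        "modality": "text",
        "content": "The Eiffel Tower was completed in 1889 ...",
    },
    {   # table flattened into a retrievable string
        "chunk_id": "ott_chunk_7",
        "modality": "table",
        "content": "Landmark | City | Height (m)\nEiffel Tower | Paris | 330",
    },
    {   # knowledge-graph triples serialized as text
        "chunk_id": "kg_chunk_3",
        "modality": "kg",
        "content": "(Eiffel Tower, located_in, Paris); (Eiffel Tower, height, 330 m)",
    },
]

# Because every modality lives in the same index, one retriever can score all
# chunks against a query regardless of their original format.
```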

Sufficient Chunk-Level Labels
• High-precision relevance annotations for every query–chunk pair, produced with an LLM-based annotation pipeline
• Supports fine-grained evaluation of the retrieval module (see the sketch below)
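
For example, graded chunk labels plug directly into standard ranking metrics such as recall@k and nDCG@k. The sketch below is a minimal example assuming the labels arrive as a mapping from chunk ID to graded relevance score; the variable names and label scale are illustrative, not the dataset's official format.

```python
import math

def recall_at_k(ranked_ids, relevance, k=10):
    """Fraction of positively labeled chunks recovered in the top-k results."""
    relevant = {cid for cid, score in relevance.items() if score > 0}
    if not relevant:
        return 0.0
    hits = sum(1 for cid in ranked_ids[:k] if cid in relevant)
    return hits / len(relevant)

def ndcg_at_k(ranked_ids, relevance, k=10):
    """nDCG with graded relevance labels (higher label = more relevant)."""
    dcg = sum(relevance.get(cid, 0) / math.log2(rank + 2)
              for rank, cid in enumerate(ranked_ids[:k]))
    ideal = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(score / math.log2(rank + 2) for rank, score in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical query: labels map chunk IDs to graded relevance scores,
# and `retrieved` is the ranked list returned by the retriever under test.
labels = {"nq_chunk_42": 2, "ott_chunk_7": 1}
retrieved = ["nq_chunk_42", "webqa_chunk_3", "ott_chunk_7"]
print(recall_at_k(retrieved, labels, k=3))  # 1.0
print(ndcg_at_k(retrieved, labels, k=3))    # ~0.95
```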

Dataset-Level Routing Labels
• Aggregated from chunk-level annotations to indicate which source dataset (e.g., WebQA, TriviaQA, NQ, CWQ, OTT, KG) best answers each query
• The first benchmark signal for the query-routing stage of a RAG pipeline (a possible aggregation is sketched below)
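
One plausible way to read these labels is to pool chunk relevance by source dataset and treat the highest-scoring source as the routing target. The sketch below assumes chunk IDs encode their source dataset and uses a simple sum as the aggregation rule; both choices are assumptions for illustration, not necessarily how mmRAG's official routing labels are derived.

```python
from collections import defaultdict

def dataset_scores(chunk_labels):
    """Aggregate chunk-level relevance into per-dataset routing scores.

    `chunk_labels` maps chunk IDs (assumed here to be prefixed with their
    source dataset, e.g. "nq_chunk_42") to graded relevance scores.
    """
    scores = defaultdict(float)
    for chunk_id, relevance in chunk_labels.items():
        source = chunk_id.split("_chunk_")[0]
        scores[source] += relevance
    return dict(scores)

def routing_hit(predicted_dataset, chunk_labels):
    """1 if the router picked the highest-scoring source dataset, else 0."""
    scores = dataset_scores(chunk_labels)
    best = max(scores, key=scores.get)
    return int(predicted_dataset == best)

labels = {"nq_chunk_42": 2, "nq_chunk_7": 1, "ott_chunk_3": 1}
print(dataset_scores(labels))     # {'nq': 3.0, 'ott': 1.0}
print(routing_hit("nq", labels))  # 1
```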

With these multi-level annotations, mmRAG allows you to measure:

• Retrieval: using "relevant chunks" labels
• Generation: using "answer" labels (see the exact-match sketch below)
• Routing: using "dataset score" labels
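
As a minimal example of scoring generation against the "answer" labels, the sketch below computes exact match and token-level F1, two common QA metrics; the normalization rules and the assumption that each query carries a list of gold answer strings are illustrative choices, not mmRAG's prescribed protocol.

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold_answers):
    """1.0 if the prediction matches any gold answer after normalization."""
    return float(any(normalize(prediction) == normalize(g) for g in gold_answers))

def token_f1(prediction, gold_answers):
    """Best token-overlap F1 between the prediction and any gold answer."""
    def f1(pred, gold):
        pred_toks, gold_toks = normalize(pred).split(), normalize(gold).split()
        overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
        if overlap == 0:
            return 0.0
        precision, recall = overlap / len(pred_toks), overlap / len(gold_toks)
        return 2 * precision * recall / (precision + recall)
    return max(f1(prediction, g) for g in gold_answers)

# Hypothetical prediction scored against hypothetical gold answers.
print(exact_match("The Eiffel Tower", ["Eiffel Tower"]))  # 1.0
print(token_f1("the tower in Paris", ["Eiffel Tower"]))   # 0.4
```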

Whether you’re developing new dense retrievers, fine-tuning generators, or designing smarter query routers, mmRAG provides the labels, metrics, and data you need to evaluate every piece of your RAG pipeline. Dive in to benchmark your models on text, tables, and graphs—together, in one unified evaluation suite!
