---
license: apache-2.0
task_categories:
- question-answering
language:
- en
---

# 📚 mmrag benchmark

## 📁 Files Overview

- `mmrag_train.json`: Training set for model training.
- `mmrag_dev.json`: Validation set for hyperparameter tuning and development.
- `mmrag_test.json`: Test set for evaluation.
- `processed_documents.json`: The chunks used for retrieval.

---

## 🔍 Query Datasets: `mmrag_train.json`, `mmrag_dev.json`, `mmrag_test.json`

All three files are lists of dictionaries. Each dictionary contains the following fields:

### 🔑 `id`
- **Description**: Unique query identifier, structured as `SourceDataset_queryIDinDataset`.
- **Example**: `ott_144` means this query is taken from the OTT-QA dataset.

### ❓ `query`
- **Description**: Text of the query.
- **Example**: `"What is the capital of France?"`

### ✅ `answer`
- **Description**: The gold-standard answer corresponding to the query.
- **Example**: `"Paris"`

### 📑 `relevant_chunks`
- **Description**: Dictionary mapping annotated chunk IDs to their relevance scores. The text of each chunk can be looked up in `processed_documents.json`. The relevance score takes values in {0 (irrelevant), 1 (partially relevant), 2 (gold)}.
- **Example**:
```json
{"ott_23573_2": 1, "ott_114_0": 2, "m.12345_0": 0}
```

### 📖 `ori_context`
- **Description**: A list of the original document IDs related to the query. This field helps to recover the relevant documents provided by the source dataset.
- **Example**: `["ott_144"]` means that all chunk IDs starting with `ott_144` come from the original document.

### 📜 `dataset_score`
- **Description**: Dataset-level relevance labels, giving the routing score of every dataset with respect to this query.
- **Example**: `{"tat": 0, "triviaqa": 2, "ott": 4, "kg": 1, "nq": 0}`, where 0 means the dataset contains no relevant chunks. The higher the score, the more relevant chunks the dataset holds.
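To make the field layout concrete, here is a minimal sketch of loading the knowledge base and resolving a query's annotated chunks. The helper names (`load_chunk_index`, `gold_chunks`) and the inline sample records are illustrative assumptions, not part of the dataset; in practice the inputs would come from `json.load` over the actual files.

```python
import json

# Human-readable labels for the relevance scores described above.
SCORE_LABELS = {0: "irrelevant", 1: "partially relevant", 2: "gold"}

def load_chunk_index(chunks):
    """Map chunk id -> chunk text for fast lookup during evaluation."""
    return {chunk["id"]: chunk["text"] for chunk in chunks}

def gold_chunks(query, chunk_index):
    """Return the text of every chunk annotated as gold (score 2)."""
    return [chunk_index[cid]
            for cid, score in query["relevant_chunks"].items()
            if score == 2 and cid in chunk_index]

# Illustrative sample records following the schema; real data comes from
# json.load(open("processed_documents.json")) and json.load(open("mmrag_train.json")).
sample_chunks = [
    {"id": "ott_114_0", "text": "Paris is the capital of France."},
    {"id": "ott_23573_2", "text": "France is a country in Europe."},
]
sample_query = {
    "id": "ott_144",
    "query": "What is the capital of France?",
    "answer": "Paris",
    "relevant_chunks": {"ott_23573_2": 1, "ott_114_0": 2},
}

index = load_chunk_index(sample_chunks)
print(gold_chunks(sample_query, index))  # ['Paris is the capital of France.']
```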
---

## 📚 Knowledge Base: `processed_documents.json`

This file is a list of the chunks used for document retrieval. Each chunk contains the following fields:

### 🔑 `id`
- **Description**: Unique chunk identifier, structured as `dataset_documentID_chunkIndex` (equivalent to `dataset_queryID_chunkIndex`).
- **Example 1**: `ott_8075_0` (chunks from NQ, TriviaQA, OTT, TAT)
- **Example 2**: `m.0cpy1b_5` (chunks from documents of the knowledge graph (Freebase))

### 📄 `text`
- **Description**: Text of the chunk.
- **Example**: `A molecule editor is a computer program for creating and modifying representations of chemical structures.`

---

## 📄 License

This dataset is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
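Since the two ID shapes differ (regular chunks carry a dataset prefix, Freebase chunks start with `m.`), a small parsing sketch may help. Mapping `m.…` IDs to the `"kg"` dataset is an assumption based on the `kg` key in `dataset_score`, not something the files state explicitly.

```python
def parse_chunk_id(chunk_id):
    """Split a chunk id into (dataset, document_id, chunk_index).

    Freebase chunks use ids like 'm.0cpy1b_5' with no dataset prefix;
    they are assigned to the 'kg' dataset here (an assumption based on
    the 'kg' key used in the dataset_score field).
    """
    prefix, idx = chunk_id.rsplit("_", 1)
    if prefix.startswith("m."):
        return "kg", prefix, int(idx)
    dataset, doc_id = prefix.split("_", 1)
    return dataset, doc_id, int(idx)

print(parse_chunk_id("ott_8075_0"))   # ('ott', '8075', 0)
print(parse_chunk_id("m.0cpy1b_5"))   # ('kg', 'm.0cpy1b', 5)
```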