---
license: mit
language:
- en
tags:
- science
- accelerator-physics
- particle-accelerator
pretty_name: Accel-IR
size_categories:
- 1K<n<10K
task_categories:
- text-retrieval
- question-answering
configs:
- config_name: expert_core
  data_files:
  - split: test
    path: Accel_IR_expert_core.csv
- config_name: augmented
  data_files:
  - split: test
    path: Accel_IR_augmented.csv
---
# Accel-IR Benchmark: A Gold Standard for Particle Accelerator Physics
This repository contains the Accel-IR Benchmark, a domain-specific Information Retrieval (IR) dataset for particle accelerator physics. It was developed as part of the Master's Thesis "From Dataset to Optimization: A Benchmarking Framework for Information Retrieval in the Particle Accelerator Domain" by Qing Dai (University of Zurich, 2025), in collaboration with the Paul Scherrer Institute (PSI).
## Dataset Configurations
This benchmark is available in two configurations. You can load specific versions based on your evaluation needs:
| Configuration | Rows | Description | Use Case |
|---|---|---|---|
| `expert_core` | 390 | Purely expert-annotated pairs, labeled by 7 domain experts (PhDs/researchers) from PSI. | Precise evaluation against human ground truth. |
| `augmented` | 1,357 | The `expert_core` pairs plus curated hard negatives, generated with an expert-validated automatic annotation pipeline. | Realistic IR evaluation with many more negatives than positives. |
## Dataset Structure

### Data Fields
Each row in the dataset represents a query-document pair with the following columns:
- `Source`: The referenced paper or IPAC publication from which the chunk was taken.
- `Question`: The domain-specific scientific question.
- `Answer`: The answer to the question.
- `Question_type`: The category of the question, simulating diverse information needs:
  - `Fact`: Specific details or parameters.
  - `Definition`: Explanations of concepts or terms.
  - `Reasoning`: Logic behind phenomena or mechanisms.
  - `Summary`: Key points or conclusions.
- `Referenced_file(s)`: Papers referenced for the questions, provided by experts.
- `chunk_text`: The text passage retrieved from expert-referenced papers or IPAC conference papers.
- `expert_annotation` (Core only): The raw relevance score given by domain experts on a 5-point Likert scale:
  - 1: Irrelevant
  - 2: Partially Irrelevant
  - 3: Hard to Decide (excluded from the expert core)
  - 4: Partially Relevant
  - 5: Relevant
- `specific to paper`: Indicates whether the question is "Context-Dependent" (answerable only from the referenced paper) or "General" (answerable from broader domain knowledge).
- `Label`: The binary ground truth used for evaluation metrics (nDCG/MAP); see the sketch below.
  - 1 (Relevant): Derived from expert scores 4 and 5.
  - 0 (Irrelevant): Derived from expert scores 1 and 2, or from pipeline-generated hard negatives.
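For illustration, a minimal sketch of the score-to-label mapping described above; `to_binary_label` is a hypothetical helper, not part of the dataset tooling:

```python
def to_binary_label(expert_annotation):
    """Map a raw 1-5 Likert score to the binary relevance label.

    Scores 4 and 5 become 1 (relevant), scores 1 and 2 become 0 (irrelevant);
    score 3 ("Hard to Decide") is excluded from the expert core entirely.
    """
    if expert_annotation in (4, 5):
        return 1
    if expert_annotation in (1, 2):
        return 0
    return None  # score 3: the pair is dropped rather than labeled
```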
## Creation Methodology
**Expert Core:**
- Created by 7 domain experts from the Electron Beam Instrumentation Group at PSI.
- Experts reviewed query-chunk pairs and annotated them on a 1-5 scale using a custom interface.
- Pairs rated "3" (Hard to Decide) were removed to eliminate ambiguity.
**Augmentation (Hard Negatives):**
- To simulate realistic retrieval scenarios where negatives far outnumber positives, the core dataset was augmented.
- Hard negatives were generated using an expert-validated automatic annotation pipeline; the resulting label balance can be inspected as sketched below.
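A minimal sketch for checking the positive/negative balance of the augmented configuration, assuming the repository id used in the Usage section below and the `Label` column listed under Data Fields:

```python
from collections import Counter

from datasets import load_dataset

# Count relevant (1) vs. irrelevant (0) pairs in the augmented configuration
ds_aug = load_dataset("qdai/Accel-IR", "augmented", split="test")
print(Counter(ds_aug["Label"]))
```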
## Usage

You can load either configuration with the Hugging Face `datasets` library.
### Load the Expert Core (390 pairs)
```python
from datasets import load_dataset

# Load the purely expert-annotated subset
ds_core = load_dataset("qdai/Accel-IR", "expert_core", split="test")
print(ds_core[0])
```
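### Load the Augmented Set (1,357 pairs)

A parallel example for the augmented configuration (expert core plus curated hard negatives), using the config name declared in the metadata above:

```python
from datasets import load_dataset

# Load the expert core plus curated hard negatives
ds_aug = load_dataset("qdai/Accel-IR", "augmented", split="test")
print(ds_aug[0])
```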
## Citation
If you use this dataset, please cite:
Qing Dai, "From Dataset to Optimization: A Benchmarking Framework for Information Retrieval in the Particle Accelerator Domain", Master's Thesis, University of Zurich, 2025.