Update README.md
README.md
CHANGED
  data_files:
  - split: test
    path: ru-en/test-*
language:
- ru
- en
tags:
- benchmark
- mteb
- retrieval
---

# RuSciBench Dataset Collection

This repository contains the datasets for the **RuSciBench** benchmark, designed for evaluating semantic vector representations of scientific texts in Russian and English.

## Dataset Description

**RuSciBench** is the first benchmark specifically targeting scientific documents in the Russian language, alongside their English counterparts (abstracts and titles). The data is sourced from [eLibrary.ru](https://www.elibrary.ru), the largest Russian electronic library of scientific publications, integrated with the Russian Science Citation Index (RSCI).

The dataset comprises approximately 182,000 scientific paper abstracts and titles. All papers included in the benchmark are distributed under open licenses.
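
For orientation, the parallel Russian/English data can be inspected directly with the `datasets` library. This is a minimal sketch, not an official loader: the repo id below is a placeholder for this card's dataset id, and the `ru-en` config with its `test` split is inferred from the YAML header above.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset card's actual id.
# The "ru-en" config and "test" split are inferred from the YAML
# header above (path: ru-en/test-*); verify them on the dataset page.
pairs = load_dataset("mlsa-iai-msu-lab/ru_sci_bench", "ru-en", split="test")

# Print the features and one example instead of assuming column names.
print(pairs)
print(pairs[0])
```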

## Tasks

The benchmark groups its tasks into Classification, Regression, and Retrieval categories; every task is available for both Russian and English texts and is built on paper abstracts.

### Classification Tasks

([Dataset Link](https://huggingface.co/datasets/mlsa-iai-msu-lab/ru_sci_bench_mteb)) A loading sketch for these subsets follows the list.

1. **Topic Classification (OECD):** Classify papers based on the first two levels of the Organisation for Economic Co-operation and Development (OECD) rubricator (29 classes).
   * `RuSciBenchOecdRuClassification` (subset `oecd_ru`)
   * `RuSciBenchOecdEnClassification` (subset `oecd_en`)
2. **Topic Classification (GRNTI/SRSTI):** Classify papers based on the first level of the State Rubricator of Scientific and Technical Information (GRNTI/SRSTI) (29 classes).
   * `RuSciBenchGrntiRuClassification` (subset `grnti_ru`)
   * `RuSciBenchGrntiEnClassification` (subset `grnti_en`)
3. **Core RISC Affiliation:** Binary classification task to determine whether a paper belongs to the Core of the Russian Index of Science Citation (RISC).
   * `RuSciBenchCoreRiscRuClassification` (subset `corerisc_ru`)
   * `RuSciBenchCoreRiscEnClassification` (subset `corerisc_en`)
4. **Publication Type Classification:** Classify documents into types such as 'article', 'conference proceedings', or 'survey' (7 classes; a balanced subset is used).
   * `RuSciBenchPubTypesRuClassification` (subset `pub_type_ru`)
   * `RuSciBenchPubTypesEnClassification` (subset `pub_type_en`)
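
A minimal loading sketch for one of these subsets, assuming (as the subset names suggest) that each is a config of the linked `ru_sci_bench_mteb` dataset; split and column names are printed rather than presumed:

```python
from datasets import load_dataset

# Load the Russian GRNTI topic-classification subset.
# Assumption: subset names such as "grnti_ru" are dataset configs.
grnti_ru = load_dataset("mlsa-iai-msu-lab/ru_sci_bench_mteb", "grnti_ru")

# Inspect the available splits and features.
print(grnti_ru)
```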

### Regression Tasks

([Dataset Link](https://huggingface.co/datasets/mlsa-iai-msu-lab/ru_sci_bench_mteb))

1. **Year of Publication Prediction:** Predict the publication year of the paper.
   * `RuSciBenchYearPublRuRegression` (subset `yearpubl_ru`)
   * `RuSciBenchYearPublEnRegression` (subset `yearpubl_en`)
2. **Citation Count Prediction:** Predict the number of times a paper has been cited.
   * `RuSciBenchCitedCountRuRegression` (subset `cited_count_ru`)
   * `RuSciBenchCitedCountEnRegression` (subset `cited_count_en`)

### Retrieval Tasks

1. **Direct Citation Prediction:** Given a query paper's abstract, retrieve the abstracts of the papers it directly cites from the corpus. Uses a retrieval setup in which all non-positive documents count as negatives; see the inspection sketch after this list. ([Dataset Link](https://huggingface.co/datasets/mlsa-iai-msu-lab/ru_sci_bench_cite_retrieval))
   * `RuSciBenchCiteRuRetrieval`
   * `RuSciBenchCiteEnRetrieval`
2. **Co-Citation Prediction:** Given a query paper's abstract, retrieve the abstracts of papers co-cited with it, i.e., cited alongside it by at least 5 common papers. Uses the same retrieval setup. ([Dataset Link](https://huggingface.co/datasets/mlsa-iai-msu-lab/ru_sci_bench_cocite_retrieval))
   * `RuSciBenchCociteRuRetrieval`
   * `RuSciBenchCociteEnRetrieval`
3. **Translation Search:** Given an abstract in one language (e.g., Russian), retrieve its corresponding translation (the same paper's abstract in the other language) from the corpus of abstracts in the target language. ([Dataset Link](https://huggingface.co/datasets/mlsa-iai-msu-lab/ru_sci_bench_translation_search))
   * `RuSciBenchTranslationSearchEnRetrieval` (Query: En, Corpus: Ru)
   * `RuSciBenchTranslationSearchRuRetrieval` (Query: Ru, Corpus: En)
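
The retrieval data lives in separate repositories (linked above). Below is a minimal inspection sketch that assumes nothing about the internal query/corpus/qrels layout beyond what `datasets` itself reports:

```python
from datasets import get_dataset_config_names, load_dataset

repo = "mlsa-iai-msu-lab/ru_sci_bench_cite_retrieval"

# Retrieval repos often keep queries, corpus, and relevance judgments
# in separate configs, so list the configs before loading anything.
configs = get_dataset_config_names(repo)
print(configs)

# Load the first config and inspect its splits and features.
ds = load_dataset(repo, configs[0])
print(ds)
```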

## Usage

These datasets are designed to be used with the MTEB library. **First, you need to install the MTEB fork containing the RuSciBench tasks:**

```bash
pip install git+https://github.com/mlsa-iai-msu-lab/ru_sci_bench_mteb
```

Then you can evaluate sentence-transformer models easily:

```python
from sentence_transformers import SentenceTransformer
from mteb import MTEB

# Example: Evaluate on Russian GRNTI classification
model_name = "mlsa-iai-msu-lab/sci-rus-tiny3.1"  # Or any other sentence transformer
model = SentenceTransformer(model_name)

evaluation = MTEB(tasks=["RuSciBenchGrntiRuClassification"])  # Select tasks
results = evaluation.run(model, output_folder=f"results/{model_name.split('/')[-1]}")

print(results)
```
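
The constructor also accepts several task names at once, so a whole category can be evaluated in one pass. A sketch using only the classification task names listed above, with the same API as the example:

```python
from sentence_transformers import SentenceTransformer
from mteb import MTEB

# All classification task names listed in this card.
classification_tasks = [
    "RuSciBenchOecdRuClassification",
    "RuSciBenchOecdEnClassification",
    "RuSciBenchGrntiRuClassification",
    "RuSciBenchGrntiEnClassification",
    "RuSciBenchCoreRiscRuClassification",
    "RuSciBenchCoreRiscEnClassification",
    "RuSciBenchPubTypesRuClassification",
    "RuSciBenchPubTypesEnClassification",
]

model = SentenceTransformer("mlsa-iai-msu-lab/sci-rus-tiny3.1")
evaluation = MTEB(tasks=classification_tasks)
results = evaluation.run(model, output_folder="results/classification")
print(results)
```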

For more details on the benchmark, tasks, and baseline model evaluations, please refer to the associated paper and code repository.

* **Code Repository:** [https://github.com/mlsa-iai-msu-lab/ru_sci_bench_mteb](https://github.com/mlsa-iai-msu-lab/ru_sci_bench_mteb)
* **Paper:** [https://doi.org/10.1134/S1064562424602191](https://doi.org/10.1134/S1064562424602191)

## Citation

If you use RuSciBench in your research, please cite the following paper:

```bibtex
@article{Vatolin2024,
  author  = {Vatolin, A. and Gerasimenko, N. and Ianina, A. and Vorontsov, K.},
  title   = {RuSciBench: Open Benchmark for Russian and English Scientific Document Representations},
  journal = {Doklady Mathematics},
  year    = {2024},
  volume  = {110},
  number  = {1},
  pages   = {S251--S260},
  month   = dec,
  doi     = {10.1134/S1064562424602191},
  url     = {https://doi.org/10.1134/S1064562424602191},
  issn    = {1531-8362}
}
```