DCAD-2000: A Multilingual Dataset across 2000+ Languages with Data Cleaning as Anomaly Detection (NeurIPS 2025)


😊 2025.9.19: DCAD-2000: A Multilingual Dataset across 2000+ Languages with Data Cleaning as Anomaly Detection has been accepted to the NeurIPS 2025 Datasets and Benchmarks Track.

🛠️ 2025.11.29: Dataset Storage Optimization Update

To address Hugging Face storage capacity limitations and ensure the smooth release of new research from the OpenBMB community, we have performed a major dataset optimization by removing the top 20 languages by data volume:

| Language Code | Data Size (rows) |
| --- | --- |
| eng_Latn | 1,311,089,898 |
| rus_Cyrl | 858,533,615 |
| cmn_Hani | 713,970,414 |
| deu_Latn | 668,616,647 |
| spa_Latn | 604,454,122 |
| fra_Latn | 513,533,126 |
| jpn_Jpan | 491,465,489 |
| ita_Latn | 311,421,021 |
| por_Latn | 271,475,185 |
| pol_Latn | 223,213,297 |
| nld_Latn | 219,161,133 |
| ind_Latn | 156,915,320 |
| tur_Latn | 143,311,483 |
| vie_Latn | 87,769,030 |
| fas_Arab | 82,798,155 |
| kor_Hang | 79,219,710 |
| swe_Latn | 77,323,851 |
| hun_Latn | 70,788,907 |
| ukr_Cyrl | 67,869,004 |
| ell_Grek | 67,476,934 |

The removed top-20 languages are currently stored on the OBS service. You can download them free of charge by following the tutorial in DCAD2000_top20_download.md. For any questions, please contact us.

Dataset Summary

DCAD-2000 is a large-scale multilingual corpus built from newly extracted Common Crawl data (CC-MAIN-2024-46) and existing multilingual datasets. It includes over 2,282 languages, 46.72TB of data, and 8.63 billion documents, spanning 155 high- and medium-resource languages and 159 writing scripts. We propose reframing data cleaning as an anomaly detection task; this dynamic filtering approach significantly enhances data quality by identifying and removing noisy or anomalous content.

Dataset Overview

Comparison of multilingual datasets constructed from Common Crawl (CC) with our DCAD-2000, in terms of the latest CC version used, the total number of languages supported, the distribution across resource categories (high, medium, low, very low), and training readiness. A CC version marked in bold indicates a version inferred from context because the original paper does not specify it explicitly. The "Training-Ready" column indicates whether the dataset is ready for training LLMs without requiring further data cleaning.

| Dataset | CC Version | #Langs (total) | #Langs (high) | #Langs (medium) | #Langs (low) | #Langs (very low) | Training-Ready |
| --- | --- | --- | --- | --- | --- | --- | --- |
| mC4 (Raffel et al., 2020) | CC-MAIN-2020-34 | 101 | 0 | 43 | 52 | 6 | |
| OSCAR 23.01 (Abadji et al., 2022) | CC-MAIN-2022-49 | 153 | 6 | 42 | 25 | 80 | |
| Glot500 (Imani et al., 2023) | CC-MAIN-2020-34 | 511 | 0 | 108 | 79 | 324 | |
| CulturaX (Nguyen et al., 2024) | CC-MAIN-2022-49 | 167 | 11 | 47 | 27 | 82 | |
| Madlad-400 (Kudugunta et al., 2024) | CC-MAIN-2022-33 | 419 | 7 | 46 | 39 | 327 | |
| MaLA (Ji et al., 2024) | CC-MAIN-2022-49 | 939 | 1 | 125 | 78 | 735 | |
| Glotcc (Kargaran et al., 2024) | CC-MAIN-2023-50 | 1331 | 0 | 10 | 52 | 1269 | |
| HPLT-v1.2 (de Gibert et al., 2024) | CC-MAIN-2022-40 | 191 | 12 | 53 | 38 | 88 | |
| Fineweb-2 (Penedo et al., 2024) | CC-MAIN-2024-18 | 1915 | 10 | 62 | 49 | 1794 | |
| DCAD-2000 | CC-MAIN-2024-46 | 2282 | 13 | 142 | 124 | 2003 | |

Dataset Creation

  • Data Collection: DCAD-2000 integrates data from four main sources: MaLA, Fineweb, Fineweb-2, and newly extracted Common Crawl data.
  • Data Cleaning as Anomaly Detection: Traditional data cleaning methods rely on fixed thresholds for document-level features, making them less adaptable to the diversity of multilingual data. To address this, we propose a novel framework that formulates data cleaning as an anomaly detection task, which involves two steps: feature extraction and anomaly detection (a minimal illustrative sketch follows this list).
    • Feature Extraction: For each document, we consider the following eight features: (1) Number of Words; (2) Character Repetition Ratio; (3) Word Repetition Ratio; (4) Special Characters Ratio; (5) Stopwords Ratio; (6) Flagged Words Ratio; (7) Language Identification (LID) Score; (8) Perplexity Score.
    • Anomaly Detection: We evaluate several classical anomaly detection algorithms: (1) Isolation Forest; (2) One-Class SVM; (3) Local Outlier Factor; and (4) K-Means.
    • Visualization: see the anomaly detection overview figure (ad_overview).
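
As a rough illustration (not the authors' exact pipeline), the sketch below computes simplified versions of some of the document-level features listed above with plain Python and flags anomalous documents with scikit-learn's IsolationForest. The feature definitions, the toy stopword list, and the contamination setting are illustrative assumptions; the flagged-words, LID, and perplexity features are omitted for brevity.

import re
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy English stopword list; the real pipeline uses per-language resources.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it"}

def doc_features(text):
    # Simplified features: word count, character/word repetition ratios,
    # special-character ratio, stopword ratio.
    words = text.split()
    n_words = len(words)
    chars = re.sub(r"\s", "", text)
    char_rep = sum(len(m.group()) for m in re.finditer(r"(.)\1{2,}", text)) / max(len(chars), 1)
    word_rep = 1.0 - len(set(w.lower() for w in words)) / max(n_words, 1)
    special = sum(not (c.isalnum() or c.isspace()) for c in text) / max(len(text), 1)
    stop = sum(w.lower() in STOPWORDS for w in words) / max(n_words, 1)
    return [n_words, char_rep, word_rep, special, stop]

docs = [
    "The cat sat on the mat and looked at the dog in the garden.",
    "Buy now!!! $$$ ### !!!! $$$$ ### @@@@ $$$ ###",
    "aaaaaaaaaaaaaaaa aaaaaaaaaaaa aaaaaaaaaaaaaaaa",
    "A short, ordinary sentence about the weather in the city.",
]
X = np.array([doc_features(d) for d in docs])

# Isolation Forest labels inliers +1 (keep) and anomalies -1 (remove);
# the contamination value is an assumption, not the paper's setting.
labels = IsolationForest(contamination=0.25, random_state=0).fit_predict(X)
for doc, label in zip(docs, labels):
    print("keep" if label == 1 else "drop", "|", doc[:50])

Documents flagged as anomalies are the ones the cleaning step removes; the remaining documents are kept, which is presumably what the "keep" files in the usage example below correspond to.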

Data Statistics

Usage (Dataset)

from datasets import load_dataset
data = load_dataset("openbmb/DCAD-2000")

You can also specify the language you want:

import glob
import os

from datasets import load_dataset
from huggingface_hub import snapshot_download

# Download the repository locally and select the "keep" files for one language (eng_Latn here).
dataset_path = snapshot_download(repo_id="openbmb/DCAD-2000", repo_type="dataset")
keep_files = glob.glob(os.path.join(dataset_path, "eng_Latn", "*keep*"))

# Load the selected files as a single dataset (adjust the glob pattern to load other languages).
data = load_dataset("json", data_files=keep_files)
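
If you only want to inspect the corpus without downloading it in full, the standard streaming mode of the datasets library is a lightweight alternative. This is a sketch under the assumption that the default split is named "train":

from datasets import load_dataset

# Iterate lazily over the corpus instead of downloading everything;
# the split name "train" is an assumption.
stream = load_dataset("openbmb/DCAD-2000", split="train", streaming=True)
for i, example in enumerate(stream):
    print(example)
    if i == 2:
        break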

Citation Information

@article{shen2025dcad,
  title={DCAD-2000: A Multilingual Dataset across 2000+ Languages with Data Cleaning as Anomaly Detection},
  author={Shen, Yingli and Lai, Wen and Wang, Shuo and Zhang, Xueren and Luo, Kangyang and Fraser, Alexander and Sun, Maosong},
  journal={arXiv preprint arXiv:2502.11546},
  year={2025}
}

Acknowledgements

We introduce DCAD-2000, a large-scale multilingual dataset designed to address the increasing demand for high-quality and diverse training data for multilingual LLMs. This work was done by researchers from the Tsinghua NLP group in collaboration with partners from TUM and Modelbest Inc.

Contact Information

Yingli Shen ([email protected]) Wen Lai ([email protected])
