# FineWeb Tokenized Corpus (AnisoleAI)

## Overview
This repository provides a large-scale pre-tokenized version of the FineWeb dataset designed for efficient training of language models.
The dataset contains text from the FineWeb corpus that has been tokenized using a SentencePiece tokenizer.
Tokens are stored in a compact uint16 format for efficient storage and high-throughput training.
Each dataset record contains a flat array of token IDs representing a continuous tokenized text sequence.
This format allows:
- fast loading
- minimal memory overhead
- efficient distributed training
- direct compatibility with LLM training pipelines
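As a quick sanity check on the storage format: `uint16` holds IDs up to 65,535, which comfortably covers a ~52k-entry vocabulary at two bytes per token. A minimal sketch (illustrative token IDs, not taken from a real shard):

```python
import numpy as np

# Illustrative token IDs; uint16 covers IDs up to 65,535,
# which is enough for a ~52k-entry vocabulary.
token_ids = [1, 1342, 4326, 3051, 399]

arr = np.asarray(token_ids, dtype=np.uint16)
print(arr.dtype, arr.nbytes)  # uint16 10 -> two bytes per token
```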
No semantic modifications were made to the original FineWeb dataset.
The text was only tokenized and serialized into shard files.
## Dataset Structure

The dataset is organized into multiple shard directories:

```
data_1/shard-00000.parquet
data_1/shard-00001.parquet
...
data_2/shard-00000.parquet
...
data_20/shard-XXXXX.parquet
```

Each shard contains a single column:

- `token_ids`: `uint16[]`

Each record stores a contiguous tokenized segment that can be used directly for model training.
## Loading the Dataset

You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset(
    "anisoleai/fineweb-tokenized",
    split="shard"
)

sample = dataset[0]["token_ids"]
print("Number of tokens:", len(sample))
```
The dataset supports:
- streaming
- distributed loading
- partial downloads
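Because records are flat token arrays of varying length, a training loop typically concatenates them and re-chunks the stream into fixed-length blocks. A minimal sketch of that step (hypothetical helper, illustrative token IDs):

```python
from itertools import chain, islice

def fixed_length_blocks(records, block_size):
    """Concatenate variable-length token_id records and re-chunk them
    into fixed-size blocks, as a training dataloader typically does."""
    stream = chain.from_iterable(records)
    while True:
        block = list(islice(stream, block_size))
        if len(block) < block_size:
            break  # drop the trailing partial block
        yield block

# Hypothetical records mimicking dataset[i]["token_ids"].
records = [[1, 1342, 4326, 3051], [399, 588], [12462, 530, 16278]]
blocks = list(fixed_length_blocks(records, block_size=4))
print(blocks)  # [[1, 1342, 4326, 3051], [399, 588, 12462, 530]]
```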
## Loading the Tokenizer

The tokenizer used to generate the corpus is included in this repository:

```python
import sentencepiece as spm
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="anisoleai/fineweb-tokenized",
    filename="tokenizer.model",
    repo_type="dataset"
)

sp = spm.SentencePieceProcessor(model_file=model_path)
print("Vocabulary size:", sp.get_piece_size())
print(sp.decode([1, 10, 20, 30]))
```
## Intended Use
This dataset is intended for:
- large language model pretraining
- tokenizer benchmarking
- distributed LLM training pipelines
- academic AI research
- commercial AI development
The shard-based structure allows scalable multi-worker training pipelines.
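One simple way to exploit the shard layout in a multi-worker job is to assign whole shard files to workers by rank. This is an illustrative round-robin scheme, not something prescribed by the dataset itself:

```python
def shards_for_worker(shard_paths, rank, world_size):
    """Round-robin assignment of shard files to one worker in a
    distributed job -- one possible scheme, not the only option."""
    return [p for i, p in enumerate(shard_paths) if i % world_size == rank]

# Hypothetical shard listing following the repository's naming pattern.
shards = [f"data_1/shard-{i:05d}.parquet" for i in range(6)]
print(shards_for_worker(shards, rank=1, world_size=2))
# ['data_1/shard-00001.parquet', 'data_1/shard-00003.parquet', 'data_1/shard-00005.parquet']
```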
## Source Dataset

Original dataset: [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb)

FineWeb is a large-scale filtered web corpus designed for training language models.
## License

This dataset follows the license of the original dataset:
[Open Data Commons Attribution License (ODC-BY) v1.0](https://opendatacommons.org/licenses/by/1-0/)
## Attribution
If you use this dataset, please attribute:
- the creators of the FineWeb dataset
- AnisoleAI for the tokenization pipeline and dataset preparation
## Notes
- The dataset contains token IDs only.
- Original raw text is not included.
- Token IDs correspond to the included SentencePiece tokenizer.