MADB: Music Aesthetics Dataset and Benchmark
Dataset Description
MADB is a large-scale dataset for music aesthetic evaluation, designed to support research on multi-dimensional and subjective music perception.
The dataset contains approximately 10,000 music tracks, each annotated by multiple trained annotators across 10 perceptual dimensions and one overall score. In addition, each track includes textual comments and semantic tags (genre and mood), enabling multimodal learning.
Data Composition
Audio
- All audio files are stored under data/audio/ (MP3 format)
- 1,730 tracks are generated by Suno and Levo
- 4,400 tracks are from the MuChin dataset: https://github.com/CarlWangChina/MuChin
- Remaining tracks are collected from diverse online sources
Annotations
Annotations are stored under data/annotation/:
- avg_score.csv: the average score for each dimension per track
- MADB_data.csv: the full annotation data, including:
  - per-annotator scores
  - textual comments
  - genre and mood tags
- split_val_seed5.csv: the validation set, generated with random seed 5
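As a minimal sketch of how the two annotation files relate, the snippet below joins per-annotator rows with track-level averages on a shared id. The column names (`id`, `annotator`, `overall`, `comment`, `melody_perception`) are hypothetical stand-ins for whatever the real CSVs use, and tiny in-memory CSVs replace the actual files under data/annotation/:

```python
import io

import pandas as pd

# Hypothetical miniature stand-ins for data/annotation/avg_score.csv and
# data/annotation/MADB_data.csv; the real column names may differ.
avg_csv = io.StringIO("id,melody_perception,overall\n137,3.5,3.8\n623,4.0,4.2\n")
full_csv = io.StringIO("id,annotator,overall,comment\n137,a1,4.0,nice melody\n137,a2,3.6,flat mix\n")

avg_df = pd.read_csv(avg_csv)    # one row per track
full_df = pd.read_csv(full_csv)  # one row per (track, annotator) pair

# Join per-annotator rows with the track-level averages on the shared id;
# suffixes disambiguate the `overall` column present in both files.
merged = full_df.merge(avg_df, on="id", suffixes=("_annotator", "_avg"))
print(merged[["id", "annotator", "overall_annotator", "overall_avg"]])
```

With the real files, the `io.StringIO` objects would simply be replaced by the CSV paths.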
Sample
Samples are stored under sample/:
- sample_audio: 200 audio clips selected from the validation set:
  - 100 clips randomly selected from MuChin
  - 50 clips from Levo
  - 50 clips from Suno
- sample_embedding: embeddings extracted from the sample audio:
  - clap: extracted with the original CLAP model
  - muq: extracted with the original MuQ model
  - clap_com: extracted with CLAP after adaptation on comments
  - clap_com_tag: extracted with CLAP after adaptation on comments and tags
- sample_ids.csv: the IDs of all sample audio clips
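A quick way to sanity-check a local copy is to verify that every id in sample_ids.csv has a matching audio file. The sketch below assumes an `id` column and `<id>.mp3` filenames, neither of which is confirmed by the card, and demonstrates the check on a throwaway directory rather than the real sample/ tree:

```python
import tempfile
from pathlib import Path

import pandas as pd


def missing_sample_audio(ids_csv, audio_dir):
    """Return ids listed in ids_csv that have no matching MP3 in audio_dir.

    Assumes the CSV has an `id` column and audio files are named `<id>.mp3`;
    both are guesses about the layout, not confirmed by the dataset card.
    """
    ids = pd.read_csv(ids_csv)["id"].astype(str)
    return [i for i in ids if not (Path(audio_dir) / f"{i}.mp3").exists()]


# Demonstrate on a temporary directory standing in for sample/.
with tempfile.TemporaryDirectory() as tmp:
    tmp = Path(tmp)
    (tmp / "sample_ids.csv").write_text("id\n137\n623\n")
    (tmp / "137.mp3").touch()  # only one of the two ids has audio
    missing = missing_sample_audio(tmp / "sample_ids.csv", tmp)
    print(missing)  # ['623']
```

On a real download, `missing_sample_audio("sample/sample_ids.csv", "sample/sample_audio")` would report any ids whose audio failed to download.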
Annotation Framework
Each track is rated across the following dimensions:
- Melody perception
- Melody emotion
- Arrangement perception
- Arrangement emotion
- Rhythm perception
- Structure perception
- Performance and singing mood
- Enunciation and singing skill
- Performance skill
- Sound effect perception
- Overall score
Some dimensions may not be applicable to a given track, in which case a value of 0 is assigned.
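Because 0 encodes "not applicable" rather than a low score, zeros should be excluded when aggregating. A minimal sketch of this, using NumPy masked arrays and made-up scores for one track (three hypothetical annotators, three hypothetical dimensions):

```python
import numpy as np

# Per-annotator scores for one track across three hypothetical dimensions;
# 0.0 marks "not applicable" (e.g. a singing dimension on an instrumental).
scores = np.array([
    [4.0, 3.5, 0.0],  # annotator 1
    [4.5, 3.0, 0.0],  # annotator 2
    [3.5, 4.0, 0.0],  # annotator 3
])

# Mask zeros so not-applicable entries do not drag the averages down.
masked = np.ma.masked_equal(scores, 0.0)
per_dim = masked.mean(axis=0)  # mean over annotators, per dimension
print(per_dim.filled(np.nan))  # [4.  3.5 nan] -- fully-masked dims become NaN
```

A dimension rated 0 by every annotator stays NaN rather than collapsing to 0, which keeps averaged scores comparable across tracks with different applicable dimensions.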
Annotation Process
- Annotators have at least 3 years of formal music training
- All annotators hold at least a bachelor's degree
- Quality control is conducted by experts with 10+ years of professional experience
- Each track is rated by multiple annotators
Intended Use
This dataset is designed for:
- Music aesthetic evaluation
- Multimodal learning (audio + text + tags)
- Music understanding and analysis
- Evaluation of generative music systems
Licensing and Usage
- Audio data may be subject to original copyright restrictions
- Users should ensure compliance with the original data sources
- This dataset is intended for research purposes only