potentialming committed on
Commit 25b0f9e · verified · 1 Parent(s): ea653d2

Upload 2 files

Files changed (2)
  1. README.md +82 -3
  2. dataset_card.json +1 -0
README.md CHANGED
@@ -1,3 +1,82 @@
- ---
- license: apache-2.0
- ---

# EvHumanMotion

[Dataset on HuggingFace 🤗](https://huggingface.co/datasets/potentialming/EvHumanMotion)

**EvHumanMotion** is a real-world dataset captured using the DAVIS346 event camera, focusing on human motion under diverse and challenging scenarios. It is designed to support event-driven human animation research, especially under motion blur, low-light, and overexposure conditions. This dataset was introduced in the [EvAnimate](https://arxiv.org/abs/2503.18552) paper.

## Dataset Structure

The dataset is organized into two main parts:

- **EvHumanMotion_aedat4**: Raw event streams in `.aedat4` format.
- **EvHumanMotion_frame**: Frame-level event slices and RGB frames.

Each part is organized by recording environment:
- `indoor_day/`
- `indoor_night_high_noise/`
- `indoor_night_low_noise/`
- `outdoor_day/`
- `outdoor_night/`

Each environment contains four scenarios:
- `low_light/`
- `motion_blur/`
- `normal/`
- `over_exposure/`

Example path:

```
EvHumanMotion_frame/indoor_day/low_light/dvSave-2025_03_04_13_02_53/
├── event_frames/
│   ├── events_0000013494.png
│   └── ...
└── frames/
    ├── frames_1741064573617350.png
    └── ...
```
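
For quick experiments it can help to index the frame-level data directly. Below is a minimal sketch using only the Python standard library; it assumes `EvHumanMotion_frame` is a local copy of that folder and simply pairs each sequence's event slices with its RGB frames.

```python
from pathlib import Path

# Assumed local location of the extracted frame-level data.
root = Path("EvHumanMotion_frame")

# Walk every sequence under one environment/scenario and count its files.
for seq_dir in sorted((root / "indoor_day" / "low_light").glob("dvSave-*")):
    event_slices = sorted((seq_dir / "event_frames").glob("events_*.png"))
    rgb_frames = sorted((seq_dir / "frames").glob("frames_*.png"))
    print(f"{seq_dir.name}: {len(event_slices)} event slices, {len(rgb_frames)} RGB frames")
```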

## Features

- **Total sequences**: 113
- **Participants**: 20 (10 male, 10 female)
- **Duration**: ~10 seconds per video
- **Frame rate**: 24 fps
- **Scenarios**: Normal, Motion Blur, Overexposure, Low Light
- **Modalities**: RGB + Event data (both `.aedat4` and frame-level)
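
At ~10 seconds per video and 24 fps, each sequence contributes roughly 240 RGB frames, so the 113 sequences amount to on the order of 27,000 frames in total, plus the aligned event slices.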

## Applications

This dataset supports:
- Event-to-video generation
- Human animation in extreme conditions
- Motion transfer with high temporal resolution

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("potentialming/EvHumanMotion")
```
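
Since the repository also ships raw `.aedat4` streams and per-sequence image folders, it can be convenient to download the files as-is and work with them on disk. A minimal sketch, assuming `huggingface_hub` is installed:

```python
from huggingface_hub import snapshot_download

# Fetch the dataset repository; pass allow_patterns to download only a subset.
local_dir = snapshot_download(
    repo_id="potentialming/EvHumanMotion",
    repo_type="dataset",
)
print("Dataset files are under:", local_dir)
```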

## License

Apache 2.0 License

## Citation

If you use this dataset, please cite:

```bibtex
@article{qu2025evanimate,
  title={EvAnimate: Event-conditioned Image-to-Video Generation for Human Animation},
  author={Qu, Qiang and Li, Ming and Chen, Xiaoming and Liu, Tongliang},
  journal={arXiv preprint arXiv:2503.18552},
  year={2025}
}
```

## Contact

Dataset maintained by [Ming Li](https://huggingface.co/potentialming).
dataset_card.json ADDED
@@ -0,0 +1 @@
 
 
{
  "dataset_name": "EvHumanMotion",
  "pretty_name": "EvHumanMotion",
  "license": "apache-2.0",
  "language": ["en"],
  "multilinguality": "no",
  "size_categories": ["10K<n<100K"],
  "source_datasets": null,
  "task_categories": ["video-to-video", "event-to-video", "human motion generation"],
  "tags": ["event camera", "video generation", "human animation", "motion transfer", "real-world", "low-light", "motion-blur", "diffusion"],
  "dataset_info": {
    "features": {
      "event_stream": ".aedat4 format event file",
      "event_frames": "Sliced event frames in .png format",
      "video_frames": "RGB frame sequence",
      "scenario": "One of [normal, motion_blur, over_exposure, low_light]",
      "environment": "One of [indoor_day, indoor_night_high_noise, indoor_night_low_noise, outdoor_day, outdoor_night]"
    },
    "splits": {
      "train": "custom split recommended",
      "validation": "custom split recommended",
      "test": "custom split recommended"
    }
  },
  "description": "EvHumanMotion is a real-world dataset captured using a DAVIS346 event camera, containing 113 sequences of human motion under various scenarios including motion blur, low light, and overexposure. It is designed for event-driven video generation and motion understanding. Each sequence includes RGB frames and aligned event data, in both .aedat4 and frame-level formats.",
  "homepage": "https://huggingface.co/datasets/potentialming/EvHumanMotion",
  "citation": "@article{qu2025evanimate,\n  title={EvAnimate: Event-conditioned Image-to-Video Generation for Human Animation},\n  author={Qu, Qiang and Li, Ming and Chen, Xiaoming and Liu, Tongliang},\n  journal={arXiv preprint arXiv:2503.18552},\n  year={2025}\n}"
}
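
To consume the card programmatically, a quick sanity check (assuming `dataset_card.json` has been downloaded to the working directory) might look like:

```python
import json
from pathlib import Path

# Parse the card and print a few fields to confirm the JSON is well formed.
card = json.loads(Path("dataset_card.json").read_text(encoding="utf-8"))
print(card["dataset_name"], "-", card["license"])
print("Task categories:", ", ".join(card["task_categories"]))
print("Environments:", card["dataset_info"]["features"]["environment"])
```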