# EvHumanMotion
EvHumanMotion is a real-world dataset captured using the DAVIS346 event camera, focusing on human motion under diverse and challenging scenarios. It is designed to support event-driven human animation research, especially under motion blur, low-light, and overexposure conditions. This dataset was introduced in the EvAnimate paper.
## Dataset Structure
The dataset is organized into two main parts:
- `EvHumanMotion_aedat4`: raw event streams in `.aedat4` format.
- `EvHumanMotion_frame`: frame-level event slices and RGB frames.
Each part is organized into five environment categories:

- `indoor_day/`
- `indoor_night_high_noise/`
- `indoor_night_low_noise/`
- `outdoor_day/`
- `outdoor_night/`
Each environment contains four scenarios:

- `low_light/`
- `motion_blur/`
- `normal/`
- `over_exposure/`
Example path:

```
EvHumanMotion_frame/indoor_day/low_light/dvSave-2025_03_04_13_02_53/
├── event_frames/
│   ├── events_0000013494.png
│   └── ...
└── frames/
    ├── frames_1741064573617350.png
    └── ...
```
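The frame-level layout above can be walked with nothing but the standard library. The following is a minimal sketch, assuming `EvHumanMotion_frame/` has been downloaded and extracted locally; the variable names are illustrative, not part of the dataset.

```python
from pathlib import Path

# Walk the frame-level layout: environment -> scenario -> sequence.
# Assumes EvHumanMotion_frame/ has been extracted into the working directory.
root = Path("EvHumanMotion_frame")

for env_dir in sorted(p for p in root.iterdir() if p.is_dir()):        # e.g. indoor_day/
    for scenario_dir in sorted(p for p in env_dir.iterdir() if p.is_dir()):  # e.g. low_light/
        for seq_dir in sorted(p for p in scenario_dir.iterdir() if p.is_dir()):
            event_frames = sorted((seq_dir / "event_frames").glob("*.png"))
            rgb_frames = sorted((seq_dir / "frames").glob("*.png"))
            print(env_dir.name, scenario_dir.name, seq_dir.name,
                  f"{len(event_frames)} event frames, {len(rgb_frames)} RGB frames")
```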
## Features
- Total sequences: 113
- Participants: 20 (10 male, 10 female)
- Duration: ~10 seconds per video
- Frame rate: 24 fps
- Scenarios: Normal, Motion Blur, Overexposure, Low Light
- Modalities: RGB + event data (both `.aedat4` and frame-level; see the reading sketch below)
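For the raw streams, iniVation's `dv` Python package is one common way to decode `.aedat4` files. The sketch below is an assumption-laden example (the file path is illustrative), not an official loader shipped with the dataset.

```python
import numpy as np
from dv import AedatFile  # pip install dv (iniVation's DV Python bindings)

# Hedged sketch: decode one raw DAVIS346 recording from EvHumanMotion_aedat4/.
# The path below is illustrative; point it at an actual .aedat4 file.
path = "EvHumanMotion_aedat4/indoor_day/low_light/dvSave-2025_03_04_13_02_53.aedat4"

with AedatFile(path) as f:
    # f["events"].numpy() yields one structured NumPy array per event packet,
    # with fields: timestamp (int64, microseconds), x, y, polarity.
    events = np.hstack([packet for packet in f["events"].numpy()])

duration_s = (events["timestamp"][-1] - events["timestamp"][0]) / 1e6
print(f"{len(events)} events spanning {duration_s:.2f} s")
```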
## Applications
This dataset supports:
- Event-to-video generation
- Human animation in extreme conditions
- Motion transfer with high temporal resolution
## Usage
```python
from datasets import load_dataset

dataset = load_dataset("potentialming/EvHumanMotion")
```
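Printing the returned object is a quick way to check which splits and features the Hub actually exposes; the exact schema depends on how the repository is configured:

```python
print(dataset)  # lists the available splits and per-example features
```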
## License
Apache 2.0 License
## Citation
If you use this dataset, please cite:
```bibtex
@article{qu2025evanimate,
  title={EvAnimate: Event-conditioned Image-to-Video Generation for Human Animation},
  author={Qu, Qiang and Li, Ming and Chen, Xiaoming and Liu, Tongliang},
  journal={arXiv preprint arXiv:2503.18552},
  year={2025}
}
```
## Contact
Dataset maintained by Ming Li.