nielsr (HF Staff) committed
Commit aca13e2 · verified · 1 parent: 7f19cfd

Improve dataset card: Add description, links, task category, sample usage, and citation


This PR significantly enhances the dataset card for PHUMA by:
- Adding the `robotics` task category to the metadata for better discoverability.
- Including relevant tags: `humanoid-locomotion`, `motion-imitation`, `physically-grounded`.
- Providing a descriptive introduction of the dataset, based on the paper's abstract.
- Adding direct links to the paper (Hugging Face Papers), project page, and GitHub repository.
- Incorporating a detailed "Sample Usage" section, extracted directly from the GitHub README, covering installation prerequisites, environment setup, and example commands for the dataset's physics-aware motion curation and physics-constrained motion retargeting pipeline.
- Including the "Motion Tracking and Evaluation" section for further context.
- Adding the BibTeX citation for proper academic attribution.

These additions make the dataset card much more informative and user-friendly for researchers.

Files changed (1)
  1. README.md +146 -3
README.md CHANGED
@@ -1,3 +1,146 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ task_categories:
+ - robotics
+ tags:
+ - humanoid-locomotion
+ - motion-imitation
+ - physically-grounded
+ ---
+
+ # PHUMA: Physically-Grounded Humanoid Locomotion Dataset
+
+ [![arXiv](https://img.shields.io/badge/arXiv-2510.26236-b31b1b.svg)](https://arxiv.org/abs/2510.26236)
+ [![Project Page](https://img.shields.io/badge/Project_Page-Visit-blue.svg)](https://davian-robotics.github.io/PHUMA/)
+ [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Dataset-yellow)](https://huggingface.co/datasets/DAVIAN-Robotics/PHUMA)
+
+ Motion imitation is a promising approach for humanoid locomotion, enabling agents to acquire humanlike behaviors. Existing methods typically rely on high-quality motion capture datasets such as AMASS, but these are scarce and expensive, limiting scalability and diversity. Recent studies attempt to scale data collection by converting large-scale internet videos, exemplified by Humanoid-X. However, they often introduce physical artifacts such as floating, penetration, and foot skating, which hinder stable imitation.
+
+ In response, we introduce PHUMA, a Physically-grounded HUMAnoid locomotion dataset that leverages human video at scale, while addressing physical artifacts through careful data curation and physics-constrained retargeting. PHUMA enforces joint limits, ensures ground contact, and eliminates foot skating, producing motions that are both large-scale and physically reliable.
+
+ **Paper:** [PHUMA: Physically-Grounded Humanoid Locomotion Dataset](https://huggingface.co/papers/2510.26236)
+ **Project Page:** https://davian-robotics.github.io/PHUMA
+ **Code:** https://github.com/davian-robotics/PHUMA
+
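To fetch the dataset files locally, a minimal sketch using `huggingface_hub` (the target directory below is an arbitrary choice, and the file layout inside the dataset repo is not described in this card):

```python
# Download the PHUMA dataset repository from the Hugging Face Hub.
# Requires: pip install huggingface_hub
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="DAVIAN-Robotics/PHUMA",
    repo_type="dataset",       # dataset repo, not a model repo
    local_dir="data/PHUMA",    # illustrative target directory
)
print(f"Dataset files downloaded to: {local_path}")
```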
+ ## Sample Usage
+
+ This section provides a quick guide to installing the necessary environment and running examples from the PHUMA data pipeline. For more detailed instructions, please refer to the [GitHub repository](https://github.com/davian-robotics/PHUMA).
+
+ ### Prerequisites
+ - Python 3.9
+ - CUDA 12.4 (recommended)
+ - Conda package manager
+
+ ### Installation
+
+ 1. **Clone the repository:**
+ ```bash
+ git clone https://github.com/DAVIAN-Robotics/PHUMA.git
+ cd PHUMA
+ ```
+
+ 2. **Set up the environment:**
+ ```bash
+ conda create -n phuma python=3.9 -y
+ conda activate phuma
+ ```
+
+ 3. **Install dependencies:**
+ ```bash
+ pip install -r requirements.txt
+ pip install -e .
+ ```
+
+ ## Dataset Pipeline
+
+ ### 1. Physics-Aware Motion Curation
+
+ Our physics-aware curation pipeline filters out problematic motions from human motion data to ensure physical plausibility.
+
+ **Starting Point:** We begin with the Humanoid-X collection as described in our paper. For more details, refer to the [Humanoid-X repository](https://github.com/sihengz02/UH-1).
+
+ **Required SMPL-X Models:** Before running the curation pipeline, you need to download the SMPL-X model files:
+
+ 1. Visit the [SMPL-X official website](https://smpl-x.is.tue.mpg.de/)
+ 2. Register and download the following files:
+    - `SMPLX_FEMALE.npz` and `SMPLX_FEMALE.pkl`
+    - `SMPLX_MALE.npz` and `SMPLX_MALE.pkl`
+    - `SMPLX_NEUTRAL.npz` and `SMPLX_NEUTRAL.pkl`
+ 3. Place all downloaded files in the `asset/human_model/smplx/` directory
+
+ **Example Usage:**
+ ```bash
+ # Set your project directory
+ PROJECT_DIR="[REPLACE_WITH_YOUR_WORKING_DIRECTORY]/PHUMA"
+ cd $PROJECT_DIR
+
+ # We provide an example clip: data/human_pose/example/kick.npy
+ human_pose_file="example/kick"
+
+ python src/curation/preprocess_smplx.py \
+     --project_dir $PROJECT_DIR \
+     --human_pose_file $human_pose_file \
+     --visualize 0
+ ```
+
+ **Output:**
+ - Preprocessed motion chunks: `example/kick_chunk_0000.npy` and `example/kick_chunk_0001.npy` under `data/human_pose_preprocessed/`
+ - If you set `--visualize 1`, the script also saves `example/kick_chunk_0000.mp4` and `example/kick_chunk_0001.mp4` under `data/video/human_pose_preprocessed/`
+
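For a quick sanity check of a curated chunk, a minimal inspection sketch with NumPy (the internal layout of the `.npy` files is not documented in this card, so the key handling below is only illustrative):

```python
import numpy as np

# Peek at a preprocessed motion chunk produced by the curation step.
path = "data/human_pose_preprocessed/example/kick_chunk_0000.npy"
data = np.load(path, allow_pickle=True)

if data.dtype == object and data.shape == ():
    # Some pipelines store a dict inside a 0-d object array.
    contents = data.item()
    for key, value in contents.items():
        shape = getattr(value, "shape", None)
        print(f"{key}: {shape if shape is not None else type(value)}")
else:
    # Otherwise it is a plain array of pose parameters.
    print("array shape:", data.shape, "dtype:", data.dtype)
```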
+ ### 2. Physics-Constrained Motion Retargeting
+
+ To address artifacts introduced during the retargeting process, we employ **PhySINK**, our physics-constrained retargeting method that adapts curated human motion to humanoid robots while enforcing physical plausibility.
+
+ **Shape Adaptation (One-time Setup):**
+ ```bash
+ # Find the SMPL-X shape that best fits a given humanoid robot
+ # This process only needs to be done once and can be reused for all motion files
+ python src/retarget/shape_adaptation.py \
+     --project_dir $PROJECT_DIR \
+     --robot_name g1
+ ```
+
+ **Output:** Shape parameters saved to `asset/humanoid_model/g1/betas.npy`
+
+ **Motion Adaptation:**
+ ```bash
+ # Using the curated data from the previous step for the Unitree G1 humanoid robot
+
+ human_pose_preprocessed_file="example/kick_chunk_0000"
+
+ python src/retarget/motion_adaptation.py \
+     --project_dir $PROJECT_DIR \
+     --robot_name g1 \
+     --human_pose_file $human_pose_preprocessed_file \
+     --visualize 0
+ ```
+
+ **Output:**
+ - Retargeted humanoid motion data: `data/humanoid_pose/g1/example/kick_chunk_0000.npy`
+ - If you set `--visualize 1`, the script also saves `data/video/humanoid_pose/example/kick_chunk_0000.mp4`
+
+ ## Motion Tracking and Evaluation
+
+ To reproduce our reported quantitative results, use the provided data splits located in `data/split/`:
+ - `phuma_train.txt`
+ - `phuma_test.txt`
+ - `unseen_video.txt`
+
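A minimal sketch for reading one of these split files, assuming each non-empty line names a single motion clip (the exact entry format is defined by the repository, not by this card):

```python
from pathlib import Path

# Read the training split; each non-empty line is assumed to identify one motion clip.
split_path = Path("data/split/phuma_train.txt")
train_clips = [line.strip() for line in split_path.read_text().splitlines() if line.strip()]

print(f"{len(train_clips)} training clips, e.g. {train_clips[:3]}")
```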
+ **LAFAN1 Retargeted Data:** Available [here](https://huggingface.co/datasets/lvhaidong/LAFAN1_Retargeting_Dataset).
+
+ **LocoMuJoCo Retargeted Data:** Available [here](https://github.com/robfiras/loco-mujoco).
+
+ For motion tracking and path following tasks, we utilize the codebase from [MaskedMimic](https://github.com/NVlabs/ProtoMotions).
+
+ ## Citation
+
+ If you use this dataset or code in your research, please cite our paper:
+
+ ```bibtex
+ @article{lee2025phuma,
+   title={PHUMA: Physically-Grounded Humanoid Locomotion Dataset},
+   author={Kyungmin Lee and Sibeen Kim and Minho Park and Hyunseung Kim and Dongyoon Hwang and Hojoon Lee and Jaegul Choo},
+   journal={arXiv preprint arXiv:2510.26236},
+   year={2025}
+ }
+ ```