Update README.md
README.md
CHANGED
@@ -15,7 +15,7 @@ license: mit
## News & Updates


-- **2025-12-03**: Released the full **Real-World Dataset** (**
+- **2025-12-03**: Released the full **Real-World Dataset** (**11.5K episodes**) on [Hugging Face](https://huggingface.co/datasets/Dexora/Dexora_Real-World_Dataset).

--

@@ -31,15 +31,15 @@ The Dexora real-world dataset consists of **11.5K teleoperated episodes**, **2.9
<img src="assets/image/dataset.gif" alt="Dexora Multi-view Dataset" width="100%">
</p>

-<p
-<i>Video 1. <b>Synchronized Multi-View Recordings.</b> High-resolution streams from ego-centric
+<p>
+<i>Video 1. <b>Synchronized Multi-View Recordings.</b> High-resolution streams from four views (an ego-centric head-mounted camera, left and right wrist-mounted cameras, and a static third-person scene camera), synchronized with 36-DoF robot proprioception.</i>
</p>

<p align="center">
<img src="assets/image/real-data.JPG" alt="Dexora Real-World Dataset Mosaic" width="100%">
</p>

-<p
+<p>
<i>Fig 1. <b>High-Fidelity Real-World Scenes.</b> Collected via our hybrid teleoperation system (Exoskeleton for arm + Vision Pro for hand), this dataset covers <b>347 objects</b> across diverse environments. It captures varying lighting conditions, background clutter, and precise bimanual interactions essential for robust policy learning. Panels (a–d) correspond to four task categories: <b>pick-and-place</b>, <b>assembly</b>, <b>articulation</b>, and <b>dexterous manipulation</b>.</i>
</p>

@@ -51,7 +51,7 @@ The Dexora real-world dataset consists of **11.5K teleoperated episodes**, **2.9
<img src="assets/image/Robot%20Arm%20Task%20Trajectory%20Distribution.png" alt="Dexora Robot Arm Trajectory Distribution" width="120%">
</p>

-<p
+<p>
<i>Fig 2. <b>Task Categories & Action Distribution.</b> Unlike standard gripper datasets, Dexora emphasizes high-DoF dexterity. The real-world data distribution includes <b>Dexterous Manipulation (20%)</b> (e.g., <i>Twist Cap</i>, <i>Use Pen</i>, <i>Cut Leek</i>) and <b>Assembly (15%)</b> (e.g., <i>Separate Nested Bowls</i>, <i>Stack Ring Blocks</i>), in addition to <b>Articulated Objects (10%)</b> and <b>Pick-and-Place (55%)</b>.</i>
</p>

@@ -92,7 +92,7 @@ The Dexora simulation dataset contains **100K episodes** generated in **MuJoCo**

| **Split** | **Episodes** | **Frames** | **Hours (approx.)** | **Task Types** |
| :--------------- | -----------: | ---------: | -------------------: | :----------------------------------------------------------------------------- |
-| **Simulated** |
+| **Simulated** | TBD | TBD | TBD | Pick-and-place, assembly, articulation |
| **Real-World** | **10K** | **3.2M** | **177.5** | Teleoperated bimanual tasks with high-DoF hands, cluttered scenes, fine-grained object interactions |

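For readers who want to peek at the Real-World split announced above, the sketch below is a minimal, hypothetical example, not the official loader: the repo id comes from the Hugging Face link in the news entry, the `language_instruction` column from the usage snippet quoted in the last hunk of this diff, and the parquet-per-episode layout is an assumption rather than documented structure.

```python
# Minimal sketch (not the official loader): peek at one table from the
# released Real-World split on the Hugging Face Hub.
# Assumptions: episode data is stored as .parquet files and exposes a
# "language_instruction" column, as in the README's usage snippet.
import pandas as pd
from huggingface_hub import hf_hub_download, list_repo_files

REPO_ID = "Dexora/Dexora_Real-World_Dataset"  # from the news entry above

# List every file in the dataset repo and keep only parquet tables.
files = list_repo_files(REPO_ID, repo_type="dataset")
parquet_files = [f for f in files if f.endswith(".parquet")]
if not parquet_files:
    raise SystemExit("No .parquet files found; check the repo layout on the Hub.")

# Download the first table and load it with pandas (needs pyarrow installed).
local_path = hf_hub_download(REPO_ID, parquet_files[0], repo_type="dataset")
df = pd.read_parquet(local_path)

print("Columns:", list(df.columns))
print("Language Instruction:", df["language_instruction"][0])
```

If the repo is organized in another episode format instead, the same idea applies: list the files, download one, and inspect its columns.
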
@@ -396,7 +396,7 @@ data
}
```

-## Usage
+<!-- ## Usage

### 1. Environment Setup

@@ -454,7 +454,7 @@ print("Language Instruction:", df["language_instruction"][0])
```

---
-
+-->
## Citation

If you find Dexora useful in your research, please consider citing our paper: