By accessing this dataset, you agree to cite the associated paper in your research and publications (see the "Citation" section for details), and you agree not to use the dataset to conduct experiments that cause harm to human subjects.
AIRBOT_MMK2_place_the_paper_drawer
Overview
This dataset uses an extended format that is based on, and remains fully compatible with, LeRobot.
Robot Type: discover_robotics_aitbot_mmk2
Codebase Version: v2.1
End-Effector Type: five_finger_hand
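Since the files follow the standard LeRobot v2.1 on-disk layout, a quick way to sanity-check a download is to read `meta/info.json` and one episode Parquet file directly. The snippet below is a minimal sketch under that assumption; the local directory name is a placeholder, and LeRobot's own `LeRobotDataset` loader can of course be used instead.

```python
import json
from pathlib import Path

import pandas as pd  # with pyarrow installed, pandas can read the episode Parquet files

# Placeholder: point this at your local copy of the dataset.
root = Path("AIRBOT_MMK2_place_the_paper_drawer")

# meta/info.json carries the schema, fps, chunking, and path patterns.
info = json.loads((root / "meta" / "info.json").read_text())
print(info["robot_type"], info["codebase_version"], info["fps"])

# Read the first episode; each row is one frame with 36-D state/action vectors.
frames = pd.read_parquet(root / "data" / "chunk-000" / "episode_000000.parquet")
print(len(frames), "frames")
print(frames.columns.tolist())
```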
Scene Types
This dataset covers the following scene types:
home
Atomic Actions
This dataset includes the following atomic actions:
grasp, place, pick
Dataset Statistics
| Metric | Value |
|---|---|
| Total Episodes | 688 |
| Total Frames | 167196 |
| Total Tasks | 15 |
| Total Videos | 2752 |
| Total Chunks | 1 |
| Chunk Size | 1000 |
| FPS | 30 |
| Dataset Size | 5.8GB |
Authors
Contributors
This dataset is contributed by:
- RoboCOIN - RoboCOIN Team
Links
- Homepage: https://flagopen.github.io/RoboCOIN/
- Paper: https://arxiv.org/abs/2511.17441
- Repository: https://github.com/FlagOpen/RoboCOIN
- Project Page: https://flagopen.github.io/RoboCOIN/
- Issues: https://github.com/FlagOpen/RoboCOIN/issues
- License: apache-2.0
Dataset Tags
RoboCOIN, LeRobot
Task Descriptions
Primary Tasks
- Pick up the steel frame with both hands and place it on the table.
- Take out two packs of vacuum paper from the white tray one after the other and place them on the table.
- While picking up the calculator case with one hand, also pick up the power bank case with the other hand, then place both items one after the other.
- Pick up the mineral water bottle and then the tape measure from the white lid and place each one on the table.
- Hold the snacks and the box ruler and place them on the platform at the same time.
- Place the apple, then the peach, and finally the pear in their respective positions.
- Place the apple, then the pomegranate, and finally the orange in their respective positions.
- Take out the drink and then the coffee cup from the white lid and place each one on the table.
- Pick up the cake from the plate and place it on the table, then pick up the strip-shaped bread from the table and put it on the plate.
- Take the building blocks and then the bb marbles from the plate and place each one on the table.
- Take out the green cart and then the yellow cart from the plate and place each one on the table.
- Push the pan to one side of the table and place it on the red cube.
- Take the mouse box and then the calculator box off the white cover.
- Pick up the shark dagger on the table with both hands at the same time and place it on the white lid.
- Take the sponge and then the bowl out of the white lid and place each one on the table.
Sub-Tasks
This dataset includes 70 distinct subtasks (a lookup sketch for mapping annotation indices back to task labels follows this list):
- Place the pomegranate on the middle of the table with the right gripper
- Place the steel tube on the table with the left gripper
- Grasp the bowl on the white basket with the right gripper
- Grasp the green rectangular block on the plate with the left gripper
- Place the mineral water on the table with the left gripper
- Place the bagged waffle on the carton with the left gripper
- Place the green rectangular block on the table with the left gripper
- Grasp the tape measure with the right gripper
- Grasp the tissue on the white lid with the left gripper
- Grasp the vitamin B water on the white lid with the left gripper
- Place the shark dagger on the white basket with the right gripper
- Place the tape measure on the table with the right gripper
- Place the tissue on the table with the right gripper
- Place the sponge on the table with the left gripper
- Grasp the apple with the left gripper
- Place the apple on the left side of the table with the left gripper
- Grasp the pear with the right gripper
- Place the pear on the right side of the table with the right gripper
- Place the calculator box into the storage box with the right gripper
- Grasp the coffee on the white lid with the right gripper
- Grasp the bread with the right gripper
- Grasp the frying pan with the right gripper
- Grasp the phone case box with the left gripper
- Place the calculator box into the storage box with the left gripper
- Grasp the mouse box on the white lid with the left gripper
- Grasp the orange with the right gripper
- Place the bowl on the table with the right gripper
- Grasp the pomegranate with the right gripper
- Grasp the tissue on the white lid with the right gripper
- Place the coffee on the table with the right gripper
- Place the bread into the plate with the right gripper
- Grasp the calculator box with the left gripper
- Abnormal
- Grasp the steel tube on the cube block with the right gripper
- Place the vitamin B water on the table with the left gripper
- Grasp the toy car on the plate with the right gripper
- Grasp the peach with the left gripper
- Place the tape measure on the carton with the right gripper
- Place the phone case box into the storage box with the right gripper
- Grasp the cake on the plate with the left gripper
- Place the phone case box into the storage box with the left gripper
- Grasp the toy car on the plate with the left gripper
- Grasp the calculator box on the white lid with the right gripper
- Grasp the shark dagger with the left gripper
- Grasp the bagged waffle with the left gripper
- Grasp the mineral water with the left gripper
- Place the pomegranate on the middle of the table with the left gripper
- Grasp the sponge on the white basket with the left gripper
- Place the peach on the middle of the table with the left gripper
- Place the shark dagger on the white basket with the left gripper
- Push the frying pan from left to right with the left gripper
- Place the steel tube on the table with the right gripper
- End
- Place the tissue on the table with the left gripper
- Grasp the phone case box with the right gripper
- Place the orange on the right side of the table with the right gripper
- Grasp the shark dagger with the right gripper
- Place the cake on the table with the left gripper
- Place the toy car on the table with the left gripper
- Place the toy car on the table with the right gripper
- Grasp the steel tube on the cube block with the left gripper
- Place the calculator box on the table with the right gripper
- Place the mouse box on the table with the left gripper
- Grasp the bullet on the plate with the right gripper
- Place the frying pan on the red cube block with the right gripper
- Place the coffee on the table with the left gripper
- Grasp the calculator box with the right gripper
- Static
- Place the bullet on the table with the right gripper
- null
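Each frame's `task_index` (and the integer `subtask_annotation` values) refers back to these natural-language labels. A minimal lookup sketch is shown below; it assumes `meta/tasks.jsonl` follows the usual LeRobot v2.1 layout of one JSON object per line with `task_index` and `task` fields, and the root path is a placeholder. Where the 70 subtask labels themselves are stored (for example, in one of the files under `annotations/`) should be confirmed against the actual files.

```python
import json
from pathlib import Path

root = Path("AIRBOT_MMK2_place_the_paper_drawer")  # placeholder local path

# Assumption: meta/tasks.jsonl has one record per line, e.g.
# {"task_index": 0, "task": "pick up the steel frame with both hands ..."}
task_by_index = {}
with open(root / "meta" / "tasks.jsonl") as f:
    for line in f:
        entry = json.loads(line)
        task_by_index[entry["task_index"]] = entry["task"]

print(len(task_by_index), "tasks")
print(task_by_index.get(0))  # natural-language instruction for task_index 0
```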
Camera Views
This dataset includes 4 camera views: observation.images.cam_high_rgb, observation.images.cam_left_wrist_rgb, observation.images.cam_right_wrist_rgb, and observation.images.cam_third_view, each 480x640 (height x width) RGB at 30 FPS.
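All camera streams are AV1-encoded MP4 files. As a hedged example, the sketch below decodes the first frame of one video with PyAV (any FFmpeg-based reader with an AV1 decoder, or LeRobot's own video loading, would work just as well); the path is the documented location of episode 0's top camera under a placeholder root directory.

```python
import av  # PyAV (FFmpeg bindings); the underlying FFmpeg build needs an AV1 decoder

video_path = (
    "AIRBOT_MMK2_place_the_paper_drawer/videos/chunk-000/"
    "observation.images.cam_high_rgb/episode_000000.mp4"
)

with av.open(video_path) as container:
    stream = container.streams.video[0]
    print("reported fps:", stream.average_rate)
    for frame in container.decode(stream):
        rgb = frame.to_ndarray(format="rgb24")  # (480, 640, 3) uint8 image
        print("first frame shape:", rgb.shape)
        break
```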
Available Annotations
This dataset includes rich annotations to support diverse learning approaches (a loading sketch follows this list):
Subtask Annotations
- Subtask Segmentation: Fine-grained subtask segmentation and labeling
Scene Annotations
- Scene-level Descriptions: Semantic scene classifications and descriptions
End-Effector Annotations
- Direction: Movement direction classifications for robot end-effectors
- Velocity: Velocity magnitude categorizations during manipulation
- Acceleration: Acceleration magnitude classifications for motion analysis
Gripper Annotations
- Gripper Mode: Open/close state annotations for gripper control
- Gripper Activity: Activity state classifications (active/inactive)
Additional Features
- End-Effector Simulation Pose: 6D pose information for end-effectors in simulation space
  - Available for both state and action
- Gripper Opening Scale: Continuous gripper opening measurements
  - Available for both state and action
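The annotation streams above ship as JSON Lines files under `annotations/` (see Directory Structure below). Their record schema is not documented on this card, so the sketch below only opens one file and inspects the keys of its first record; the file name comes from the directory listing and the root path is a placeholder.

```python
import json
from pathlib import Path

root = Path("AIRBOT_MMK2_place_the_paper_drawer")  # placeholder local path

# One of the annotation files listed under annotations/.
ann_path = root / "annotations" / "gripper_mode_annotation.jsonl"

with open(ann_path) as f:
    first_record = json.loads(next(f))

# The schema is undocumented here, so just inspect what a record contains.
print(sorted(first_record.keys()))
print(first_record)
```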
Data Splits
The dataset is organized into the following splits:
- Training: Episodes 0:687
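The split is stored in `meta/info.json` as the string `"0:687"`. A small sketch for turning such a spec into an episode-index range is shown below; treating it as a half-open range (Python-slice style) is an assumption here, so check the end-index convention against `total_episodes` before relying on it.

```python
import json
from pathlib import Path

root = Path("AIRBOT_MMK2_place_the_paper_drawer")  # placeholder local path
info = json.loads((root / "meta" / "info.json").read_text())

# Splits look like {"train": "0:687"}; parse each "start:end" string.
splits = {}
for name, spec in info["splits"].items():
    start, end = (int(x) for x in spec.split(":"))
    splits[name] = range(start, end)  # assumption: half-open, as in Python slicing

print({name: (r.start, r.stop) for name, r in splits.items()})
```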
Dataset Structure
This dataset follows the LeRobot format and contains the following components:
Data Files
- Videos: Compressed video files containing RGB camera observations
- State Data: Robot joint positions, velocities, and other state information
- Action Data: Robot action commands and trajectories
- Metadata: Episode metadata, timestamps, and annotations
File Organization
- Data Path Pattern: data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet
- Video Path Pattern: videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4
- Chunking: Data is organized into 1 chunk(s) of size 1000
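The two path patterns above are ordinary Python format strings, so the files belonging to any episode can be resolved from its index and the chunk size. The helper below is a minimal sketch under that assumption (placeholder root path); the video keys are taken from the `features` block of `meta/info.json`.

```python
import json
from pathlib import Path

root = Path("AIRBOT_MMK2_place_the_paper_drawer")  # placeholder local path
info = json.loads((root / "meta" / "info.json").read_text())

def episode_files(episode_index: int) -> dict:
    """Resolve the Parquet and per-camera video paths for one episode."""
    chunk = episode_index // info["chunks_size"]  # chunk size is 1000
    data_file = root / info["data_path"].format(
        episode_chunk=chunk, episode_index=episode_index
    )
    video_keys = [k for k, v in info["features"].items() if v["dtype"] == "video"]
    videos = {
        key: root / info["video_path"].format(
            episode_chunk=chunk, video_key=key, episode_index=episode_index
        )
        for key in video_keys
    }
    return {"data": data_file, "videos": videos}

print(episode_files(0)["data"])
```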
Features Schema
The dataset includes the following features:
Visual Observations
- observation.images.cam_high_rgb: video
  - FPS: 30
  - Codec: av1
- observation.images.cam_left_wrist_rgb: video
  - FPS: 30
  - Codec: av1
- observation.images.cam_right_wrist_rgb: video
  - FPS: 30
  - Codec: av1
- observation.images.cam_third_view: video
  - FPS: 30
  - Codec: av1
State and Action
- observation.state: float32
- action: float32
Temporal Information
- timestamp: float32
- frame_index: int64
- episode_index: int64
- index: int64
- task_index: int64
Annotations
- subtask_annotation: int32
- scene_annotation: int32
Motion Features
- eef_sim_pose_state: float32
- Dimensions: left_eef_pos_x, left_eef_pos_y, left_eef_pos_z, left_eef_ori_x, left_eef_ori_y, left_eef_ori_z, right_eef_pos_x, right_eef_pos_y, right_eef_pos_z, right_eef_ori_x, right_eef_ori_y, right_eef_ori_z
- eef_sim_pose_action: float32
- Dimensions: left_eef_pos_x, left_eef_pos_y, left_eef_pos_z, left_eef_ori_x, left_eef_ori_y, left_eef_ori_z, right_eef_pos_x, right_eef_pos_y, right_eef_pos_z, right_eef_ori_x, right_eef_ori_y, right_eef_ori_z
- eef_direction_state: int32
- Dimensions: left_eef_direction, right_eef_direction
- eef_direction_action: int32
- Dimensions: left_eef_direction, right_eef_direction
- eef_velocity_state: int32
- Dimensions: left_eef_velocity, right_eef_velocity
- eef_velocity_action: int32
- Dimensions: left_eef_velocity, right_eef_velocity
- eef_acc_mag_state: int32
- Dimensions: left_eef_acc_mag, right_eef_acc_mag
- eef_acc_mag_action: int32
- Dimensions: left_eef_acc_mag, right_eef_acc_mag
Gripper Features
- Gripper mode and activity annotations are provided as JSONL files under annotations/ (see Directory Structure below) rather than as per-frame feature columns.
Meta Information
The complete dataset metadata is available in meta/info.json:
{"codebase_version": "v2.1", "robot_type": "discover_robotics_aitbot_mmk2", "total_episodes": 688, "total_frames": 167196, "total_tasks": 15, "total_videos": 2752, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": {"train": "0:687"}, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": {"observation.images.cam_high_rgb": {"dtype": "video", "shape": [480, 640, 3], "names": ["height", "width", "channels"], "info": {"video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false}}, "observation.images.cam_left_wrist_rgb": {"dtype": "video", "shape": [480, 640, 3], "names": ["height", "width", "channels"], "info": {"video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false}}, "observation.images.cam_right_wrist_rgb": {"dtype": "video", "shape": [480, 640, 3], "names": ["height", "width", "channels"], "info": {"video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false}}, "observation.images.cam_third_view": {"dtype": "video", "shape": [480, 640, 3], "names": ["height", "width", "channels"], "info": {"video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false}}, "observation.state": {"dtype": "float32", "shape": [36], "names": ["left_arm_joint_1_rad", "left_arm_joint_2_rad", "left_arm_joint_3_rad", "left_arm_joint_4_rad", "left_arm_joint_5_rad", "left_arm_joint_6_rad", "right_arm_joint_1_rad", "right_arm_joint_2_rad", "right_arm_joint_3_rad", "right_arm_joint_4_rad", "right_arm_joint_5_rad", "right_arm_joint_6_rad", "left_hand_joint_1_rad", "left_hand_joint_2_rad", "left_hand_joint_3_rad", "left_hand_joint_4_rad", "left_hand_joint_5_rad", "left_hand_joint_6_rad", "left_hand_joint_7_rad", "left_hand_joint_8_rad", "left_hand_joint_9_rad", "left_hand_joint_10_rad", "left_hand_joint_11_rad", "left_hand_joint_12_rad", "right_hand_joint_1_rad", "right_hand_joint_2_rad", "right_hand_joint_3_rad", "right_hand_joint_4_rad", "right_hand_joint_5_rad", "right_hand_joint_6_rad", "right_hand_joint_7_rad", "right_hand_joint_8_rad", "right_hand_joint_9_rad", "right_hand_joint_10_rad", "right_hand_joint_11_rad", "right_hand_joint_12_rad"]}, "action": {"dtype": "float32", "shape": [36], "names": ["left_arm_joint_1_rad", "left_arm_joint_2_rad", "left_arm_joint_3_rad", "left_arm_joint_4_rad", "left_arm_joint_5_rad", "left_arm_joint_6_rad", "right_arm_joint_1_rad", "right_arm_joint_2_rad", "right_arm_joint_3_rad", "right_arm_joint_4_rad", "right_arm_joint_5_rad", "right_arm_joint_6_rad", "left_hand_joint_1_rad", "left_hand_joint_2_rad", "left_hand_joint_3_rad", "left_hand_joint_4_rad", "left_hand_joint_5_rad", "left_hand_joint_6_rad", "left_hand_joint_7_rad", "left_hand_joint_8_rad", "left_hand_joint_9_rad", "left_hand_joint_10_rad", "left_hand_joint_11_rad", "left_hand_joint_12_rad", "right_hand_joint_1_rad", "right_hand_joint_2_rad", "right_hand_joint_3_rad", "right_hand_joint_4_rad", "right_hand_joint_5_rad", "right_hand_joint_6_rad", "right_hand_joint_7_rad", "right_hand_joint_8_rad", "right_hand_joint_9_rad", 
"right_hand_joint_10_rad", "right_hand_joint_11_rad", "right_hand_joint_12_rad"]}, "timestamp": {"dtype": "float32", "shape": [1], "names": null}, "frame_index": {"dtype": "int64", "shape": [1], "names": null}, "episode_index": {"dtype": "int64", "shape": [1], "names": null}, "index": {"dtype": "int64", "shape": [1], "names": null}, "task_index": {"dtype": "int64", "shape": [1], "names": null}, "subtask_annotation": {"names": null, "dtype": "int32", "shape": [5]}, "scene_annotation": {"names": null, "dtype": "int32", "shape": [1]}, "eef_sim_pose_state": {"names": ["left_eef_pos_x", "left_eef_pos_y", "left_eef_pos_z", "left_eef_ori_x", "left_eef_ori_y", "left_eef_ori_z", "right_eef_pos_x", "right_eef_pos_y", "right_eef_pos_z", "right_eef_ori_x", "right_eef_ori_y", "right_eef_ori_z"], "dtype": "float32", "shape": [12]}, "eef_sim_pose_action": {"names": ["left_eef_pos_x", "left_eef_pos_y", "left_eef_pos_z", "left_eef_ori_x", "left_eef_ori_y", "left_eef_ori_z", "right_eef_pos_x", "right_eef_pos_y", "right_eef_pos_z", "right_eef_ori_x", "right_eef_ori_y", "right_eef_ori_z"], "dtype": "float32", "shape": [12]}, "eef_direction_state": {"names": ["left_eef_direction", "right_eef_direction"], "dtype": "int32", "shape": [2]}, "eef_direction_action": {"names": ["left_eef_direction", "right_eef_direction"], "dtype": "int32", "shape": [2]}, "eef_velocity_state": {"names": ["left_eef_velocity", "right_eef_velocity"], "dtype": "int32", "shape": [2]}, "eef_velocity_action": {"names": ["left_eef_velocity", "right_eef_velocity"], "dtype": "int32", "shape": [2]}, "eef_acc_mag_state": {"names": ["left_eef_acc_mag", "right_eef_acc_mag"], "dtype": "int32", "shape": [2]}, "eef_acc_mag_action": {"names": ["left_eef_acc_mag", "right_eef_acc_mag"], "dtype": "int32", "shape": [2]}}}
Directory Structure
The dataset is organized as follows (showing leaf directories with first 5 files only):
AIRBOT_MMK2_place_the_paper_drawer_qced_hardlink/
├── annotations/
│   ├── eef_acc_mag_annotation.jsonl
│   ├── eef_direction_annotation.jsonl
│   ├── eef_velocity_annotation.jsonl
│   ├── gripper_activity_annotation.jsonl
│   ├── gripper_mode_annotation.jsonl
│   └── (...)
├── data/
│   └── chunk-000/
│       ├── episode_000000.parquet
│       ├── episode_000001.parquet
│       ├── episode_000002.parquet
│       ├── episode_000003.parquet
│       ├── episode_000004.parquet
│       └── (...)
├── meta/
│   ├── episodes.jsonl
│   ├── episodes_stats.jsonl
│   ├── info.json
│   └── tasks.jsonl
└── videos/
    └── chunk-000/
        ├── observation.images.cam_high_rgb/
        │   ├── episode_000000.mp4
        │   ├── episode_000001.mp4
        │   ├── episode_000002.mp4
        │   ├── episode_000003.mp4
        │   ├── episode_000004.mp4
        │   └── (...)
        ├── observation.images.cam_left_wrist_rgb/
        │   ├── episode_000000.mp4
        │   ├── episode_000001.mp4
        │   ├── episode_000002.mp4
        │   ├── episode_000003.mp4
        │   ├── episode_000004.mp4
        │   └── (...)
        ├── observation.images.cam_right_wrist_rgb/
        │   ├── episode_000000.mp4
        │   ├── episode_000001.mp4
        │   ├── episode_000002.mp4
        │   ├── episode_000003.mp4
        │   ├── episode_000004.mp4
        │   └── (...)
        └── observation.images.cam_third_view/
            ├── episode_000000.mp4
            ├── episode_000001.mp4
            ├── episode_000002.mp4
            ├── episode_000003.mp4
            ├── episode_000004.mp4
            └── (...)
Contact and Support
For questions, issues, or feedback regarding this dataset, please use the support channel below.
Support
For technical support, please open an issue on our GitHub repository.
License
This dataset is released under the apache-2.0 license.
Please refer to the LICENSE file for full license terms and conditions.
Citation
If you use this dataset in your research, please cite:
@article{robocoin,
title={RoboCOIN: An Open-Sourced Bimanual Robotic Data Collection for Integrated Manipulation},
author={Shihan Wu, Xuecheng Liu, Shaoxuan Xie, Pengwei Wang, Xinghang Li, Bowen Yang, Zhe Li, Kai Zhu, Hongyu Wu, Yiheng Liu, Zhaoye Long, Yue Wang, Chong Liu, Dihan Wang, Ziqiang Ni, Xiang Yang, You Liu, Ruoxuan Feng, Runtian Xu, Lei Zhang, Denghang Huang, Chenghao Jin, Anlan Yin, Xinlong Wang, Zhenguo Sun, Junkai Zhao, Mengfei Du, Mingyu Cao, Xiansheng Chen, Hongyang Cheng, Xiaojie Zhang, Yankai Fu, Ning Chen, Cheng Chi, Sixiang Chen, Huaihai Lyu, Xiaoshuai Hao, Yequan Wang, Bo Lei, Dong Liu, Xi Yang, Yance Jiao, Tengfei Pan, Yunyan Zhang, Songjing Wang, Ziqian Zhang, Xu Liu, Ji Zhang, Caowei Meng, Zhizheng Zhang, Jiyang Gao, Song Wang, Xiaokun Leng, Zhiqiang Xie, Zhenzhen Zhou, Peng Huang, Wu Yang, Yandong Guo, Yichao Zhu, Suibing Zheng, Hao Cheng, Xinmin Ding, Yang Yue, Huanqian Wang, Chi Chen, Jingrui Pang, YuXi Qian, Haoran Geng, Lianli Gao, Haiyuan Li, Bin Fang, Gao Huang, Yaodong Yang, Hao Dong, He Wang, Hang Zhao, Yadong Mu, Di Hu, Hao Zhao, Tiejun Huang, Shanghang Zhang, Yonghua Lin, Zhongyuan Wang and Guocai Yao},
journal={arXiv preprint arXiv:2511.17441},
url = {https://arxiv.org/abs/2511.17441},
year={2025}
}
Additional References
If you use this dataset, please also consider citing:
- LeRobot Framework: https://github.com/huggingface/lerobot
Version Information
Version History
- v1.0.0 (2025-11): Initial release