# Dataset Card for MOVE Real-World Manipulation Dataset

**MOVE: Motion-Based Variability Enhancement for Spatial Generalization in Robotic Manipulation**

Jointly Released by:

- 🎓 Tsinghua University
- 🤖 Beijing Academy of Artificial Intelligence (BAAI)
This Hugging Face Dataset Card describes the Real-World Robotic Manipulation Dataset collected using the MOVE (Motion-Based Variability Enhancement) paradigm, as presented in the paper "MOVE: A Simple Motion-Based Data Collection Paradigm for Spatial Generalization in Robotic Manipulation."
The core value of the MOVE paradigm is the injection of dynamic feature augmentation into the environment (objects and camera) during expert demonstrations. This process captures a richer variety of spatial configurations within a single trajectory, significantly improving policy performance, especially in terms of spatial generalization to unseen locations and data efficiency.
## 💾 Dataset Structure
This dataset focuses on the Real-World Pick-and-Place task. All data was collected using the Piper robotic arm teleoperated via a Pika device under dynamic environment configurations.
### Data Fields
Each trajectory contains a sequence of timesteps, recording the essential observations and state information required for robotic policy learning.
| Field Key | Data Type | Description |
|---|---|---|
| `timestep_id` | `int` | Sequential timestep index within the trajectory. |
| `camera/color/Camera` | `PIL.Image` / `ndarray` | RGB image observation. Due to the MOVE paradigm, the image captures dynamically changing object, target, and camera viewpoints. |
| `arm/jointStatePosition/joint_single` | `array[float]` | Robot joint state/action. The joint positions of the Piper robotic arm, representing the robot's state or executed action at this timestep. |
| `/arm/jointStatePosition/master` | `array[float]` | Master device state. The joint or position states of the Pika teleoperation master device, recording the human operator's intent. |
Note: This dataset strictly includes only the three key data streams listed above. It does not include explicit 3D coordinates, world-frame camera poses, or other calculated metadata.
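Because only these streams are stored, trajectory boundaries are not given explicitly. Below is a minimal sketch of one way to recover them, assuming `timestep_id` restarts at 0 at the start of each new trajectory (an unverified assumption) and using the placeholder repo id from the Usage section:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual dataset path.
dataset = load_dataset("your-huggingface-org/MOVE", "real_world_35k", split="train")

# Group consecutive timesteps into trajectories, assuming timestep_id
# restarts at 0 whenever a new trajectory begins (verify on real data).
trajectories, current = [], []
for step in dataset:
    if step["timestep_id"] == 0 and current:
        trajectories.append(current)
        current = []
    current.append(step)
if current:
    trajectories.append(current)

print(f"{len(trajectories)} trajectories, {len(dataset)} total timesteps")
```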
### Data Splits
The real-world dataset is split based on the total number of environment interaction steps (timesteps), allowing for efficiency evaluation:
| Split Name | Task | Total Timesteps | Description |
|---|---|---|---|
| `real_world_35k` | Real-World Pick-and-Place (e.g., Orange to Tray) | 35,000 | A challenging, low-data scenario for testing spatial generalization capability. |
| `real_world_75k` | Real-World Pick-and-Place | 75,000 | Used for performance scaling and efficiency comparison against static baselines. |
## 🚀 Key Advantages
- High Spatial Generalization: Policies trained on this dynamically augmented data demonstrate superior success rates when tested on spatially randomized, unseen configurations.
- Superior Data Efficiency: MOVE datasets enable policies to achieve competitive performance with a significantly lower total number of timesteps compared to datasets collected using the traditional static approach.
## 🎯 Usage
This dataset is an ideal resource for training robust real-world robotic manipulation policies, particularly those that rely on high-generalization visual-motor data.
### Loading the Data
```python
from datasets import load_dataset

# Load the 75k timestep real-world subset
dataset = load_dataset("your-huggingface-org/MOVE", "real_world_75k")

# Access image and state data
first_example = dataset["train"][0]
image = first_example["camera/color/Camera"]
robot_state = first_example["arm/jointStatePosition/joint_single"]
```
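For policy training, the image and joint vectors typically need to be converted to numeric arrays. A minimal sketch, assuming the column types listed under Data Fields (a PIL image and lists of floats); the printed shapes are illustrative, not documented:

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("your-huggingface-org/MOVE", "real_world_75k")  # placeholder repo id
step = dataset["train"][0]

# PIL.Image -> (H, W, 3) uint8 array; joint vectors -> float32 arrays.
image = np.asarray(step["camera/color/Camera"], dtype=np.uint8)
state = np.asarray(step["arm/jointStatePosition/joint_single"], dtype=np.float32)
master = np.asarray(step["/arm/jointStatePosition/master"], dtype=np.float32)
print(image.shape, state.shape, master.shape)
```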
## Citation
If you find this work helpful, please consider citing our paper:
```bibtex
@misc{wang2025movesimplemotionbaseddata,
  title={MOVE: A Simple Motion-Based Data Collection Paradigm for Spatial Generalization in Robotic Manipulation},
  author={Huanqian Wang and Chi Bene Chen and Yang Yue and Danhua Tao and Tong Guo and Shaoxuan Xie and Denghang Huang and Shiji Song and Guocai Yao and Gao Huang},
  year={2025},
  eprint={2512.04813},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2512.04813},
}
```