---
license: apache-2.0
task_categories:
  - visual-question-answering
  - image-to-text
language:
  - en
tags:
  - robotics
  - embodied-ai
  - multimodal
  - benchmark
  - vision-language
  - EO-1
pretty_name: EO-Bench
size_categories:
  - n<1K
---

# EO-Bench: Embodied Reasoning Benchmark for Vision-Language Models


## Overview

EO-Bench is a comprehensive benchmark designed to evaluate the embodied reasoning capabilities of vision-language models (VLMs) in robotics scenarios. This benchmark is part of the EO-1 project, which develops unified embodied foundation models for general robot control.

The benchmark assesses model performance across 12 distinct embodied reasoning categories, covering trajectory reasoning, visual grounding, action reasoning, and more. All tasks are presented in a multiple-choice format to enable standardized evaluation.
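For quick inspection, the snippet below sketches how the benchmark could be loaded with the Hugging Face `datasets` library. The repository ID placeholder, split name, and field names are assumptions for illustration, not the dataset's documented schema; check the dataset viewer for the actual column names.

```python
# Minimal loading sketch using the `datasets` library.
# The repository ID, split name, and field names are assumptions for illustration;
# consult the dataset viewer for the actual schema.
from datasets import load_dataset

# Replace "<namespace>/EO-Bench" with the actual Hub repository ID.
bench = load_dataset("<namespace>/EO-Bench", split="test")

sample = bench[0]
print(sorted(sample.keys()))    # e.g. question, options, answer, images, question_type (assumed)
print(sample["question_type"])  # one of the 12 embodied reasoning categories (assumed field)
```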

## Dataset Description

### Statistics

| Metric | Value |
|---|---|
| Total Samples | 600 |
| Question Types | 12 |
| Total Images | 668 |
| Answer Format | Multiple choice (single or multiple correct options) |
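Because some questions accept more than one correct option, a simple scoring rule for development-time evaluation is exact-set matching over predicted option letters. The helper below is an illustrative sketch under that assumption, not the benchmark's official evaluation protocol; the A-F letter-parsing rule is likewise assumed.

```python
# Illustrative scorer for multiple-choice answers where one or more options may be
# correct. Exact-set matching and the A-F option range are assumptions, not the
# benchmark's official metric.
import re

def parse_options(text: str) -> set[str]:
    """Extract standalone option letters (A-F) from a prediction or reference string."""
    return set(re.findall(r"\b([A-F])\b", text.upper()))

def score(prediction: str, reference: str) -> bool:
    """Exact-set match: the prediction must select exactly the reference options."""
    pred, ref = parse_options(prediction), parse_options(reference)
    return bool(ref) and pred == ref

# A multi-answer question counts as correct only if every option matches, no more and no fewer.
print(score("A and C", "A, C"))  # True
print(score("A", "A, C"))        # False
```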

### Question Type Distribution

| Question Type | Count |
|---|---|
| Trajectory Reasoning | 134 |
| Visual Grounding | 128 |
| Process Verification | 113 |
| Multiview Pointing | 68 |
| Relation Reasoning | 40 |
| Robot Interaction | 37 |
| Object State | 34 |
| Episode Caption | 17 |
| Action Reasoning | 13 |
| Task Planning | 10 |
| Direct Influence | 4 |
| Counterfactual | 2 |

## Related Resources

### EO-1 Model

EO-1 is a unified embodied foundation model that processes interleaved vision-text-action inputs using a single decoder-only transformer architecture. The model achieves state-of-the-art performance on multimodal embodied reasoning tasks.

### Training Data

EO-1 is trained on EO-Data1.5M, a comprehensive multimodal embodied reasoning dataset with over 1.5 million high-quality interleaved samples.

## Citation

If you use this benchmark in your research, please cite:

```bibtex
@article{eo1,
  title={EO-1: Interleaved Vision-Text-Action Pretraining for General Robot Control},
  author={Delin Qu and Haoming Song and Qizhi Chen and Zhaoqing Chen and Xianqiang Gao and Xinyi Ye and Qi Lv and Modi Shi and Guanghui Ren and Cheng Ruan and Maoqing Yao and Haoran Yang and Jiacheng Bao and Bin Zhao and Dong Wang},
  journal={arXiv preprint},
  year={2025},
  url={https://arxiv.org/abs/2508.21112}
}
```