---
license: apache-2.0
task_categories:
  - visual-question-answering
  - image-to-text
language:
  - en
tags:
  - robotics
  - embodied-ai
  - multimodal
  - benchmark
  - vision-language
  - EO-1
pretty_name: EO-Bench
size_categories:
  - n<1K
---

# EO-Bench: Embodied Reasoning Benchmark for Vision-Language Models

<p align="center">
  <a href="https://arxiv.org/abs/2508.21112"><img src="https://img.shields.io/badge/arXiv-2508.21112-b31b1b.svg" alt="arXiv"></a>
  <a href="https://github.com/SHAILAB-IPEC/EO1"><img src="https://img.shields.io/badge/GitHub-EO--1-blue.svg" alt="GitHub"></a>
  <a href="https://huggingface.co/datasets/IPEC-COMMUNITY/EO-Data1.5M"><img src="https://img.shields.io/badge/🤗-EO--Data1.5M-yellow.svg" alt="Dataset"></a>
</p>

## Overview

**EO-Bench** is a comprehensive benchmark designed to evaluate the **embodied reasoning** capabilities of vision-language models (VLMs) in robotics scenarios. This benchmark is part of the [EO-1](https://github.com/SHAILAB-IPEC/EO1) project, which develops unified embodied foundation models for general robot control.

The benchmark assesses model performance across **12 distinct embodied reasoning categories**, covering trajectory reasoning, visual grounding, action reasoning, and more. All tasks are presented in a multiple-choice format to enable standardized evaluation.
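Since EO-Bench is distributed as a Hugging Face dataset, a typical workflow is to load it with the `datasets` library and inspect a sample before running an evaluation. The sketch below is illustrative only: the dataset ID (`IPEC-COMMUNITY/EO-Bench`), split name, and field names (`question`, `options`, `answer`, `question_type`) are assumptions, not confirmed by this card, so check the actual schema after loading.

```python
# Minimal loading sketch. Dataset ID, split, and field names are assumptions;
# inspect the printed keys and adapt to the real schema.
from datasets import load_dataset

ds = load_dataset("IPEC-COMMUNITY/EO-Bench", split="test")  # hypothetical ID/split

sample = ds[0]
print(sample.keys())                 # inspect the real schema first
print(sample.get("question_type"))   # e.g. "Trajectory Reasoning"
print(sample.get("question"))        # question text
print(sample.get("options"))         # multiple-choice options
print(sample.get("answer"))          # single or multiple correct options
```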

## Dataset Description

### Statistics

| Metric | Value |
|--------|-------|
| Total Samples | 600 |
| Question Types | 12 |
| Total Images | 668 |
| Answer Format | Multiple Choice (single or multiple correct options) |

### Question Type Distribution

| Question Type | Count |
|---------------|-------|
| Trajectory Reasoning | 134 |
| Visual Grounding | 128 |
| Process Verification | 113 |
| Multiview Pointing | 68 |
| Relation Reasoning | 40 |
| Robot Interaction | 37 |
| Object State | 34 |
| Episode Caption | 17 |
| Action Reasoning | 13 |
| Task Planning | 10 |
| Direct Influence | 4 |
| Counterfactual | 2 |
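Because answers may contain one or several correct options, a simple set-based exact match is one reasonable way to score predictions. The sketch below assumes answers and predictions are strings of option letters such as `"B"` or `"A,C"`; this formatting is an assumption, and the benchmark's official evaluation may differ.

```python
# Set-based exact-match scoring sketch. Answer/prediction formatting ("A", "A,C")
# is an assumption, not the benchmark's confirmed protocol.
def parse_options(label: str) -> frozenset[str]:
    """Normalize 'A, C' / 'AC' / 'a' into a set of option letters."""
    return frozenset(ch.upper() for ch in label if ch.isalpha())

def exact_match(prediction: str, answer: str) -> bool:
    """A prediction counts as correct only if it selects exactly the gold options."""
    return parse_options(prediction) == parse_options(answer)

def accuracy(predictions: list[str], answers: list[str]) -> float:
    correct = sum(exact_match(p, a) for p, a in zip(predictions, answers))
    return correct / len(answers) if answers else 0.0

# Example: the second prediction matches despite option order, the third does not.
print(accuracy(["A", "B,C", "D"], ["A", "C,B", "A"]))  # 2/3 ≈ 0.667
```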

## Related Resources

### EO-1 Model

EO-1 is a unified embodied foundation model that processes interleaved vision-text-action inputs using a single decoder-only transformer architecture. The model achieves state-of-the-art performance on multimodal embodied reasoning tasks.

- **Paper**: [EO-1: Interleaved Vision-Text-Action Pretraining for General Robot Control](https://arxiv.org/abs/2508.21112)
- **GitHub**: [SHAILAB-IPEC/EO1](https://github.com/SHAILAB-IPEC/EO1)
- **Models**: [IPEC-COMMUNITY/EO-1-3B](https://huggingface.co/IPEC-COMMUNITY/EO-1-3B)

### Training Data

EO-1 is trained on **EO-Data1.5M**, a comprehensive multimodal embodied reasoning dataset with over 1.5 million high-quality interleaved samples.

- **Dataset**: [IPEC-COMMUNITY/EO-Data1.5M](https://huggingface.co/datasets/IPEC-COMMUNITY/EO-Data1.5M)

## Citation

If you use this benchmark in your research, please cite:

```bibtex
@article{eo1,
  title={EO-1: Interleaved Vision-Text-Action Pretraining for General Robot Control},
  author={Delin Qu and Haoming Song and Qizhi Chen and Zhaoqing Chen and Xianqiang Gao and Xinyi Ye and Qi Lv and Modi Shi and Guanghui Ren and Cheng Ruan and Maoqing Yao and Haoran Yang and Jiacheng Bao and Bin Zhao and Dong Wang},
  journal={arXiv preprint},
  year={2025},
  url={https://arxiv.org/abs/2508.21112}
}
```