---
license: other
license_name: adobe-research-license
license_link: LICENSE
---

# EditVerse
This repository contains **EditVerseBench**, the instruction-based video editing evaluation benchmark introduced in the paper "EditVerse: Unifying Image and Video Editing and Generation with In-Context Learning".

> [Xuan Ju](https://juxuan27.github.io/)<sup>12</sup>, [Tianyu Wang](https://scholar.google.com/citations?user=yRwZIN8AAAAJ&hl=zh-CN)<sup>1</sup>, [Yuqian Zhou](https://yzhouas.github.io/)<sup>1</sup>, [He Zhang](https://sites.google.com/site/hezhangsprinter)<sup>1</sup>, [Qing Liu](https://qliu24.github.io/)<sup>1</sup>, [Nanxuan Zhao](https://www.nxzhao.com/)<sup>1</sup>, [Zhifei Zhang](https://zzutk.github.io/)<sup>1</sup>, [Yijun Li](https://yijunmaverick.github.io/)<sup>1</sup>, [Yuanhao Cai](https://caiyuanhao1998.github.io/)<sup>3</sup>, [Shaoteng Liu](https://www.shaotengliu.com/)<sup>1</sup>, [Daniil Pakhomov](https://scholar.google.com/citations?user=UI10l34AAAAJ&hl=en)<sup>1</sup>, [Zhe Lin](https://sites.google.com/site/zhelin625/)<sup>1</sup>, [Soo Ye Kim](https://sites.google.com/view/sooyekim)<sup>1*</sup>, [Qiang Xu](https://cure-lab.github.io/)<sup>2*</sup><br>
> <sup>1</sup>Adobe Research <sup>2</sup>The Chinese University of Hong Kong <sup>3</sup>Johns Hopkins University <sup>*</sup>Corresponding Author


<p align="center">
  <a href="http://editverse.s3-website-us-east-1.amazonaws.com/">🌐 Project Page</a> || 
  <a href="https://arxiv.org/abs/2509.20360">📜 Arxiv</a> || 
  <a href="https://docs.google.com/presentation/d/1dBg3lZDFa8mRRIrOVEU_xDgzedufbwzr/edit?usp=sharing&ouid=100286465794673637256&rtpof=true&sd=true">✨ Slides</a> || 
  <a href="http://editverse.s3-website-us-east-1.amazonaws.com/comparison.html">👀 Comparison</a> || 
  <a href="">💻 Evaluation Code (Coming Soon)</a>
</p>



## ⏬ Download Benchmark


**(1) Clone the EditVerseBench Repository**

```
git lfs install
git clone https://huggingface.co/datasets/EditVerse/EditVerseBench
```
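
The benchmark metadata ships as a CSV file in the repository, so it can also be inspected directly through the Hugging Face `datasets` library. Below is a minimal sketch, assuming the CSV is auto-detected with the default configuration; the actual split and column names may differ from what is shown here.

```python
# Minimal sketch: load the benchmark metadata with the `datasets` library.
# Assumes the CSV is auto-detected under the default configuration; the
# real split and column names may differ.
from datasets import load_dataset

bench = load_dataset("EditVerse/EditVerseBench")
print(bench)              # show the available splits and columns
print(bench["train"][0])  # peek at one benchmark entry
```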

**(2) Download the Videos**


The original source videos cannot be redistributed directly due to licensing restrictions.
Instead, you can download them with the provided script, which fetches them through the Pixabay API. (Network requests may occasionally fail, so you may need to run the script more than once.)

> ⚠️ Note: Remember to replace the API key in `download_source_video.py` with your own. You can obtain a key [here](https://pixabay.com/api/docs/#api_search_images) (listed under Parameters → key (required) on that page). The API is free, but you need to sign up for an account to get a key.


```
python download_source_video.py
```
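
For reference, the sketch below shows roughly what such a download looks like against the Pixabay Videos API: look up a video by its ID, then stream one of the returned renditions to disk. This is illustrative only (the field names follow Pixabay's public API documentation, and the helper name here is made up); use the provided `download_source_video.py` for the actual benchmark.

```python
# Illustrative only: a rough sketch of fetching a single Pixabay video.
# The provided download_source_video.py is what you should actually run.
# Response field names follow Pixabay's public API docs and may change.
import requests

PIXABAY_API_KEY = "YOUR_API_KEY"  # replace with your own key (see note above)

def download_pixabay_video(video_id: int, out_path: str) -> None:
    """Look up a Pixabay video by ID and save one rendition to disk."""
    meta = requests.get(
        "https://pixabay.com/api/videos/",
        params={"key": PIXABAY_API_KEY, "id": video_id},
        timeout=30,
    )
    meta.raise_for_status()
    hits = meta.json()["hits"]
    if not hits:
        raise ValueError(f"No Pixabay video found for id {video_id}")
    url = hits[0]["videos"]["large"]["url"]  # other sizes: medium/small/tiny
    with requests.get(url, stream=True, timeout=60) as r:
        r.raise_for_status()
        with open(out_path, "wb") as f:
            for chunk in r.iter_content(chunk_size=1 << 20):
                f.write(chunk)

# Example: download_pixabay_video(123456, "videos/123456.mp4")
```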


**(3) Unpack the Comparison Results (Optional)**

Extract the comparison results and remove the archive:
```
cd EditVerseBench
tar -zxvf EditVerse_Comparison_Results.tar.gz
rm EditVerse_Comparison_Results.tar.gz
```


## ✨ Benchmark Results




<table>
  <thead>
    <tr>
      <th rowspan="2">Method</th>
      <th colspan="1">VLM evaluation</th>
      <th colspan="1">Video Quality</th>
      <th colspan="2">Text Alignment</th>
      <th colspan="2">Temporal Consistency</th>
    </tr>
    <tr>
      <th>Editing Quality ↑</th>
      <th>Pick Score ↑</th>
      <th>Frame ↑</th>
      <th>Video ↑</th>
      <th>CLIP ↑</th>
      <th>DINO ↑</th>
    </tr>
  </thead>
  <tbody>
    <!-- Attention Manipulation -->
    <tr>
      <td colspan="7" style="text-align:center; font-weight:bold;">Attention Manipulation (Training-free)</td>
    </tr>
    <tr>
      <td><b>TokenFlow</b></td>
      <td>5.26</td><td>19.73</td><td>25.57</td><td>22.70</td><td>98.36</td><td>98.09</td>
    </tr>
    <tr>
      <td><b>STDF</b></td>
      <td>4.41</td><td>19.45</td><td>25.24</td><td>22.26</td><td>96.04</td><td>95.22</td>
    </tr>
    <!-- First-Frame Propagation -->
    <tr>
      <td colspan="7" style="text-align:center; font-weight:bold;">First-Frame Propagation (w/ End-to-End Training)</td>
    </tr>
    <tr>
      <td><b>Señorita-2M</b></td>
      <td>6.97</td><td>19.71</td><td>26.34</td><td>23.24</td><td>98.05</td><td>97.99</td>
    </tr>
    <!-- Instruction-Guided -->
    <tr>
      <td colspan="7" style="text-align:center; font-weight:bold;">Instruction-Guided (w/ End-to-End Training)</td>
    </tr>
    <tr>
      <td><b>InsV2V</b></td>
      <td>5.21</td><td>19.39</td><td>24.99</td><td>22.54</td><td>97.15</td><td>96.57</td>
    </tr>
    <tr>
      <td><b>Lucy Edit</b></td>
      <td>5.89</td><td>19.67</td><td>26.00</td><td>23.11</td><td>98.49</td><td>98.38</td>
    </tr>
    <tr>
      <td><b>EditVerse (Ours)</b></td>
      <td><b>7.65</b></td><td><b>20.07</b></td><td><b>26.73</b></td><td><b>23.93</b></td><td><b>98.56</b></td><td><b>98.42</b></td>
    </tr>
    <!-- Closed-Source -->
    <tr>
      <td colspan="7" style="text-align:center; font-weight:bold; color:gray;">Closed-Source Commercial Models</td>
    </tr>
    <tr style="color:gray;">
      <td>Runway Aleph</td>
      <td>7.44</td><td>20.42</td><td>27.70</td><td>24.27</td><td>98.94</td><td>98.60</td>
    </tr>
  </tbody>
</table>
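
The official evaluation code is still marked "Coming Soon" above. As a rough illustration of what the temporal-consistency columns measure, the sketch below computes a CLIP-based score as the mean cosine similarity between CLIP image embeddings of adjacent frames; the checkpoint, frame sampling, and exact protocol are assumptions and may differ from the paper's implementation.

```python
# Illustrative sketch only (not the official evaluation code): CLIP-based
# temporal consistency as the mean cosine similarity between embeddings of
# consecutive frames. Checkpoint and preprocessing are assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_temporal_consistency(frames: list[Image.Image]) -> float:
    """Mean cosine similarity between CLIP features of consecutive frames (in %)."""
    inputs = processor(images=frames, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)  # unit-normalize embeddings
    sims = (feats[:-1] * feats[1:]).sum(dim=-1)       # cosine of adjacent pairs
    return sims.mean().item() * 100                   # scale to match the table
```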




💌 If you find our work useful for your research, please consider citing our paper:
```
@article{ju2025editverse,
  title   = {EditVerse: Unifying Image and Video Editing and Generation with In-Context Learning},
  author  = {Xuan Ju and Tianyu Wang and Yuqian Zhou and He Zhang and Qing Liu and Nanxuan Zhao and Zhifei Zhang and Yijun Li and Yuanhao Cai and Shaoteng Liu and Daniil Pakhomov and Zhe Lin and Soo Ye Kim and Qiang Xu},
  journal = {arXiv preprint arXiv:2509.20360},
  year    = {2025},
  url     = {https://arxiv.org/abs/2509.20360}
}
```