Update README.md
README.md
We introduce MileBench, a pioneering benchmark designed to test the **M**ult**I**…
This benchmark comprises not only multimodal long contexts, but also multiple tasks requiring both comprehension and generation.
We establish two distinct evaluation sets, diagnostic and realistic, to systematically assess MLLMs’ long-context adaptation capacity and their ability to complete tasks in long-context scenarios.
To construct our evaluation sets, we gather 6,440 multimodal long-context samples from 21 pre-existing or self-constructed datasets,
with an average of 15.2 images and 422.3 words each, as depicted in the figure, and we categorize them into their respective subsets.