How Well Do Models Follow Visual Instructions? VIBE: A Systematic Benchmark for Visual Instruction-Driven Image Editing
Abstract
VIBE, the Visual Instruction Benchmark for Image Editing, introduces a three-level interaction hierarchy for evaluating visual instruction-following capabilities in generative models.
Recent generative models have achieved remarkable progress in image editing. However, existing systems and benchmarks remain largely text-guided. In contrast, human communication is inherently multimodal, and visual instructions such as sketches efficiently convey spatial and structural intent. To address this gap, we introduce VIBE, the Visual Instruction Benchmark for Image Editing, built around a three-level interaction hierarchy that captures deictic grounding, morphological manipulation, and causal reasoning. Across these levels, we curate high-quality, diverse test cases that reflect progressively increasing complexity in visual instruction following. We further propose a robust LMM-as-a-judge evaluation framework with task-specific metrics to enable scalable and fine-grained assessment. Through a comprehensive evaluation of 17 representative open-source and proprietary image editing models, we find that proprietary models exhibit early-stage visual instruction-following capabilities and consistently outperform open-source models. However, performance degrades markedly with increasing task difficulty, even for the strongest systems, highlighting promising directions for future research.
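To make the LMM-as-a-judge setup concrete, here is a minimal, illustrative sketch of how such a scoring loop could look. It is not the paper's actual pipeline: the judge model name, the prompt wording, and the rubric keys (`instruction_following`, `image_quality`) are placeholder assumptions, and VIBE's task-specific metrics for each level would replace the generic rubric shown here.

```python
# Minimal sketch of an LMM-as-a-judge scoring loop for visual-instruction image editing.
# Everything here is illustrative: the judge model, prompt, and rubric keys are
# hypothetical placeholders, NOT the VIBE paper's actual evaluation framework.
import base64
import json
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible endpoint and API key in the environment


def encode_image(path: str) -> str:
    """Return a base64 data URL for a local PNG image file."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()


def judge_edit(source_img: str, instruction_img: str, edited_img: str,
               level: str = "deictic") -> dict:
    """Ask a multimodal judge to rate one edited image on a simple 1-5 rubric.

    In a real benchmark, `level` (deictic grounding, morphological manipulation,
    causal reasoning) would select a task-specific rubric; here a single generic
    rubric stands in for all three.
    """
    prompt = (
        f"You are grading a visual-instruction image edit at the '{level}' level. "
        "Given the source image, the visual instruction, and the edited result, "
        "return JSON with integer scores 1-5 for 'instruction_following' and "
        "'image_quality', plus a short 'rationale'."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder judge model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": encode_image(source_img)}},
                {"type": "image_url", "image_url": {"url": encode_image(instruction_img)}},
                {"type": "image_url", "image_url": {"url": encode_image(edited_img)}},
            ],
        }],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)
```

Per-sample scores like these would then be averaged within each interaction level to produce the fine-grained, level-wise comparison described in the abstract.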
Community
Introducing VIBE: The Visual Instruction Benchmark for Image Editing!
Why limit image editing to text? Human intent is multimodal. We're filling the gap with VIBE, a new benchmark designed to evaluate how models follow visual instructions.
Check out our full analysis and dataset below!
- Paper: https://arxiv.org/abs/2602.01851
- Github: https://github.com/hwanyu112/VIBE-Benchmark
- Project Page: https://vibe-benchmark.github.io/
- Huggingface: https://huggingface.co/datasets/VIBE-Benchmark/VIBE-Benchmark
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- RePlan: Reasoning-guided Region Planning for Complex Instruction-based Image Editing (2025)
- Empowering Reliable Visual-Centric Instruction Following in MLLMs (2026)
- Beyond Accuracy: Evaluating Grounded Visual Evidence in Thinking with Images (2026)
- Unified Thinker: A General Reasoning Modular Core for Image Generation (2026)
- ReViSE: Towards Reason-Informed Video Editing in Unified Models with Self-Reflective Learning (2025)
- ProImage-Bench: Rubric-Based Evaluation for Professional Image Generation (2025)
- Rethinking Composed Image Retrieval Evaluation: A Fine-Grained Benchmark from Image Editing (2026)
