MotionEdit: Benchmarking and Learning Motion-Centric Image Editing Paper • 2512.10284 • Published Dec 2025 • 25
MASS: Motion-Aware Spatial-Temporal Grounding for Physics Reasoning and Comprehension in Vision-Language Models Paper • 2511.18373 • Published Nov 23, 2025 • 5
VisPlay: Self-Evolving Vision-Language Models from Images Paper • 2511.15661 • Published Nov 19, 2025 • 42
First Frame Is the Place to Go for Video Content Customization Paper • 2511.15700 • Published Nov 19, 2025 • 52
Semantically-Aware Rewards for Open-Ended R1 Training in Free-Form Generation Paper • 2506.15068 • Published Jun 18, 2025 • 13
VideoHallu: Evaluating and Mitigating Multi-modal Hallucinations for Synthetic Videos Paper • 2505.01481 • Published May 2, 2025 • 4
How Instruction and Reasoning Data shape Post-Training: Data Quality through the Lens of Layer-wise Gradients Paper • 2504.10766 • Published Apr 14, 2025 • 40
zli12321/answer_equivalence_roberta-large Text Classification • 0.4B • Updated Feb 4, 2025 • 23 • 5
Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders Paper • 2408.15998 • Published Aug 28, 2024 • 86
AUTOHALLUSION: Automatic Generation of Hallucination Benchmarks for Vision-Language Models Paper • 2406.10900 • Published Jun 16, 2024 • 11
HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models Paper • 2310.14566 • Published Oct 23, 2023 • 27