Add model card for Visual Jigsaw Video 7B
This PR adds a comprehensive model card for the `Visual Jigsaw Video 7B` model, based on the paper [Visual Jigsaw Post-Training Improves MLLMs](https://huggingface.co/papers/2509.25190).
The updates include:
- Relevant metadata: `license` (`apache-2.0`), `pipeline_tag` (`video-text-to-text`), `library_name` (`transformers`), `base_model` (`Qwen2.5-VL-7B-Instruct`), and additional `tags`.
- A link to the paper on Hugging Face Papers.
- Links to the project page and the GitHub repository.
- A concise overview of the model and its capabilities, including the overview image.
- A sample usage section demonstrating how to use the model with the `transformers` library, building upon the original repository's instructions regarding its `Qwen2.5-VL-7B-Instruct` base.
- The paper's BibTeX citation.
This should significantly improve the discoverability and usability of the model on the Hub.
---
license: apache-2.0
pipeline_tag: video-text-to-text
library_name: transformers
base_model: Qwen2.5-VL-7B-Instruct
tags:
- qwen
- multimodal
- visual-jigsaw
---

# Visual Jigsaw Video 7B

This repository contains the `Visual Jigsaw Video 7B` model, which is based on `Qwen2.5-VL-7B-Instruct` and presented in the paper [Visual Jigsaw Post-Training Improves MLLMs](https://huggingface.co/papers/2509.25190).

🌐 [Project Page](https://penghao-wu.github.io/visual_jigsaw/) | 💻 [Code on GitHub](https://github.com/penghao-wu/visual_jigsaw)

Visual Jigsaw is a generic self-supervised post-training framework designed to strengthen visual understanding in Multimodal Large Language Models (MLLMs). It is formulated as a general ordering task: visual inputs are partitioned and shuffled, and the model must reconstruct the visual information by producing the correct permutation in natural language. This model is the video instantiation of Visual Jigsaw, trained on video data with a focus on temporal reasoning.

<p align="center">
<img src="https://github.com/penghao-wu/visual_jigsaw/raw/main/assets/overview.png" alt="Overview of Visual Jigsaw" width="700"/>
</p>
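As a toy illustration of the ordering formulation described above, the sketch below builds a jigsaw sample from segment indices (the helper name and representation are illustrative assumptions, not taken from the repository; real training shuffles actual video clips):

```python
import random

def make_jigsaw_sample(num_segments=4, seed=0):
    """Toy visual-jigsaw task: shuffle segment indices and derive the
    permutation answer that restores the original temporal order.
    (Illustrative only; integer IDs stand in for video segments.)"""
    rng = random.Random(seed)
    original = list(range(num_segments))  # e.g. 4 temporal segments
    shuffled = original[:]
    rng.shuffle(shuffled)
    # The target answer lists, for each original position, where that
    # segment ended up after shuffling.
    answer = [shuffled.index(i) for i in original]
    return shuffled, answer

shuffled, answer = make_jigsaw_sample()
# Applying the answer permutation to the shuffled list restores the order.
restored = [shuffled[i] for i in answer]
assert restored == list(range(4))
```

In the actual framework the model sees the shuffled segments and must emit this permutation in natural language; verifying the permutation makes the task directly checkable as a reinforcement signal.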

## How to use (Inference)

Our models are based on `Qwen2.5-VL-7B-Instruct`, so you can run inference with the `transformers` library following the standard `Qwen2.5-VL-Instruct` usage pattern.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

# Load model and processor for Visual Jigsaw Video 7B
model_id = "craigwu/visual_jigsaw_video_7B"  # Assuming this is the current model repository
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # or torch.float16, depending on your GPU
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# For video input, you would typically load a sequence of frames.
# This example uses a single dummy image to demonstrate the API structure;
# for actual video processing, replace `dummy_image` with your video frames.
dummy_image = Image.new("RGB", (500, 300), color="blue")

# Prepare chat messages in the Qwen2.5-VL-Instruct format.
# For video, pass a list of frames instead of a single image.
messages = [
    {"role": "user", "content": [
        {"type": "image", "image": dummy_image},
        {"type": "text", "text": "Describe the content shown."},
    ]}
]

# Process inputs (for actual video, `images` would be a list of PIL frames)
text_input = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = processor(text=[text_input], images=[dummy_image], return_tensors="pt")

# Move inputs to the same device as the model
model_inputs = model_inputs.to(model.device)

# Generate a response and decode only the newly generated tokens
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = generated_ids[:, model_inputs["input_ids"].shape[1]:]
response = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
```
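The snippet above passes a single dummy image; real video input is a list of frames. A minimal, decoder-agnostic sketch of picking evenly spaced frame indices (the helper name and sampling strategy are assumptions, not from the repository):

```python
def uniform_indices(total_frames, num_frames):
    """Return num_frames evenly spaced frame indices for a video of
    total_frames. (Illustrative helper; any sampling scheme that yields
    a list of PIL frames works with the processor above.)"""
    if num_frames >= total_frames:
        return list(range(total_frames))
    if num_frames == 1:
        return [0]
    step = (total_frames - 1) / (num_frames - 1)
    return [round(i * step) for i in range(num_frames)]

# e.g. sample 8 frames from a 100-frame clip, then decode those frames
# with your video library of choice and pass them as the `images` list.
print(uniform_indices(100, 8))
```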

## Citation

If you find this project helpful for your research, please consider citing our paper:

```bibtex
@article{visual_jigsaw,
  author  = {Wu, Penghao and Zhang, Yushan and Diao, Haiwen and Li, Bo and Lu, Lewei and Liu, Ziwei},
  title   = {Visual Jigsaw Post-Training Improves MLLMs},
  journal = {arXiv preprint arXiv:2509.25190},
  year    = {2025}
}
```