nielsr (HF Staff) committed
Commit 734b9ce · verified · Parent: d90b842

Add model card and metadata


Hi! I'm Niels from the community science team at Hugging Face. I noticed this model repository was missing a model card and relevant metadata.

This PR adds:
- **Metadata**: Including the `image-to-video` pipeline tag and the `diffusers` library name to improve discoverability and enable automated code snippets.
- **Model Description**: A brief summary of the research and links to the paper, project page, and official GitHub repository.
- **Sample Usage**: A code snippet based on the official GitHub README for running inference.
- **Citation**: The official BibTeX citation for this work.

Please let me know if you'd like any adjustments!

Files changed (1)
  1. README.md +36 -0
README.md ADDED
@@ -0,0 +1,36 @@
+ ---
+ pipeline_tag: image-to-video
+ library_name: diffusers
+ ---
+
+ # Generating the Past, Present and Future from a Motion-Blurred Image
+
+ This repository contains the model weights for the paper [Generating the Past, Present and Future from a Motion-Blurred Image](https://huggingface.co/papers/2512.19817).
+
+ [**Project Page**](https://blur2vid.github.io) | [**GitHub Repository**](https://github.com/tedlasai/blur2vid) | [**Gradio Demo**](https://huggingface.co/spaces/tedlasai/blur2vid)
+
+ ## Summary
+ What can a motion-blurred image reveal about a scene's past, present, and future? This work repurposes a pre-trained video diffusion model to recover a video that reveals the complex scene dynamics during the moment of capture and to predict what might have occurred immediately before or after. The approach is robust, generalizes to in-the-wild images, and supports downstream tasks such as recovering camera trajectories and object motion.
+
+ ## Sample Usage
+ To run inference on your own images, first follow the setup instructions in the [official GitHub repository](https://github.com/tedlasai/blur2vid), then run:
+
+ ```bash
+ python inference.py --image_path assets/dummy_image.png --output_path output/
+ ```
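+
+ Since the metadata above declares `library_name: diffusers`, a generic pipeline load along the lines below may also work if the checkpoint is exported in the standard diffusers format. This is a minimal, unofficial sketch: the repo id `tedlasai/blur2vid`, the `image` call argument, and the `.frames` output attribute are assumptions, so prefer `inference.py` from the official repository.
+
+ ```python
+ # Minimal sketch (unofficial): load the weights as a generic diffusers pipeline.
+ # Assumptions: the repo id, that the checkpoint is in standard diffusers format,
+ # and that the pipeline accepts an `image` input and returns `.frames`.
+ import torch
+ from diffusers import DiffusionPipeline
+ from diffusers.utils import export_to_video, load_image
+
+ pipe = DiffusionPipeline.from_pretrained("tedlasai/blur2vid", torch_dtype=torch.float16)
+ pipe.to("cuda")
+
+ image = load_image("assets/dummy_image.png")  # the motion-blurred input
+ frames = pipe(image=image).frames[0]          # video frames spanning the exposure
+ export_to_video(frames, "output/result.mp4", fps=8)
+ ```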
+
+ ## Citation
+ If you use this model or code in your research, please cite:
+
+ ```bibtex
+ @article{Tedla2025Blur2Vid,
+   title   = {Generating the Past, Present, and Future from a Motion-Blurred Image},
+   author  = {Tedla, SaiKiran and Zhu, Kelly and Canham, Trevor and Taubner, Felix and Brown, Michael and Kutulakos, Kiriakos and Lindell, David},
+   journal = {ACM Transactions on Graphics},
+   year    = {2025},
+   note    = {SIGGRAPH Asia}
+ }
+ ```
+
+ ## Contact
+ For questions or issues, please reach out through the [project page](https://blur2vid.github.io) or contact [Sai Tedla](mailto:[email protected]).