Abstract
The GimbalDiffusion framework enables precise camera control in text-to-video generation by using absolute coordinates with gravity as a global reference, enhancing controllability and robustness.
Recent progress in text-to-video generation has achieved remarkable realism, yet fine-grained control over camera motion and orientation remains elusive. Existing approaches typically encode camera trajectories through relative or ambiguous representations, limiting explicit geometric control. We introduce GimbalDiffusion, a framework that enables camera control grounded in physical-world coordinates, using gravity as a global reference. Instead of describing motion relative to previous frames, our method defines camera trajectories in an absolute coordinate system, allowing precise and interpretable control over camera parameters without requiring an initial reference frame. We leverage panoramic 360-degree videos to construct a wide variety of camera trajectories, well beyond the predominantly straight, forward-facing trajectories seen in conventional video data. To further enhance camera guidance, we introduce null-pitch conditioning, an annotation strategy that reduces the model's reliance on text content when it conflicts with the camera specification (e.g., generating grass while the camera points towards the sky). Finally, we establish a benchmark for camera-aware video generation by rebalancing SpatialVID-HQ for comprehensive evaluation under wide camera pitch variation. Together, these contributions advance the controllability and robustness of text-to-video models, enabling precise, gravity-aligned camera manipulation within generative frameworks.
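To make the geometry concrete, below is a minimal sketch (not the authors' code) of the core operation the abstract implies: rendering a perspective view from an equirectangular 360-degree frame given absolute, gravity-aligned camera angles. Because yaw, pitch, and roll are specified in a world frame where gravity defines "up", each pose is self-contained and no previous reference frame is needed. All function names, axis conventions, and the nearest-neighbour sampler are illustrative assumptions.

```python
# Sketch: perspective crop from an equirectangular 360° frame using
# ABSOLUTE gravity-aligned angles (yaw, pitch, roll). Conventions here
# (y-down camera axes, z forward) are assumptions, not the paper's spec.
import numpy as np

def rotation_from_gravity_aligned_angles(yaw, pitch, roll):
    """World-from-camera rotation. Pitch is measured against the horizon
    (gravity fixes 'up'), so the pose needs no previous-frame reference."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    R_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])    # about gravity axis
    R_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])  # about camera x axis
    R_roll = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # about optical axis
    return R_yaw @ R_pitch @ R_roll

def perspective_from_equirect(pano, yaw, pitch, roll,
                              fov=np.deg2rad(90), out_hw=(256, 256)):
    """Render one pinhole view from an equirectangular panorama (H, W, 3)."""
    H, W = pano.shape[:2]
    h, w = out_hw
    f = 0.5 * w / np.tan(0.5 * fov)                  # focal length in pixels
    u, v = np.meshgrid(np.arange(w) - 0.5 * w + 0.5,
                       np.arange(h) - 0.5 * h + 0.5)
    rays = np.stack([u, v, np.full_like(u, f)], -1)  # camera-frame ray per pixel
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    rays = rays @ rotation_from_gravity_aligned_angles(yaw, pitch, roll).T
    lon = np.arctan2(rays[..., 0], rays[..., 2])     # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1, 1))    # latitude in [-pi/2, pi/2]
    x = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W   # wrap around seam
    y = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return pano[y, x]                                # nearest-neighbour sample
```

Under this reading, a camera trajectory is just a sequence of absolute (yaw, pitch, roll) triples, e.g. `perspective_from_equirect(frame, yaw=0.0, pitch=np.deg2rad(60), roll=0.0)` for a camera tilted 60 degrees out of the horizontal plane (the sign depends on the chosen convention), which is how panoramic source videos can yield the wide pitch variation described above.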
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API.
- Unified Camera Positional Encoding for Controlled Video Generation (2025)
- PanFlow: Decoupled Motion Control for Panoramic Video Generation (2025)
- BulletTime: Decoupled Control of Time and Camera Pose for Video Generation (2025)
- LAMP: Language-Assisted Motion Planning for Controllable Video Generation (2025)
- ReCamDriving: LiDAR-Free Camera-Controlled Novel Trajectory Video Generation (2025)
- Light-X: Generative 4D Video Rendering with Camera and Illumination Control (2025)
- ReDirector: Creating Any-Length Video Retakes with Rotary Camera Encoding (2025)