Abstract
OmniPSD, a diffusion framework within the Flux ecosystem, enables text-to-PSD generation and image-to-PSD decomposition, achieving high-fidelity results with transparency awareness.
Recent advances in diffusion models have greatly improved image generation and editing, yet generating or reconstructing layered PSD files with transparent alpha channels remains highly challenging. We propose OmniPSD, a unified diffusion framework built upon the Flux ecosystem that enables both text-to-PSD generation and image-to-PSD decomposition through in-context learning. For text-to-PSD generation, OmniPSD arranges multiple target layers spatially into a single canvas and learns their compositional relationships through spatial attention, producing semantically coherent and hierarchically structured layers. For image-to-PSD decomposition, it performs iterative in-context editing, progressively extracting and erasing textual and foreground components to reconstruct editable PSD layers from a single flattened image. An RGBA-VAE is employed as an auxiliary representation module to preserve transparency without affecting structure learning. Extensive experiments on our new RGBA-layered dataset demonstrate that OmniPSD achieves high-fidelity generation, structural consistency, and transparency awareness, offering a new paradigm for layered design generation and decomposition with diffusion transformers.
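The image-to-PSD side of the abstract describes a concrete loop: repeatedly extract the topmost text or foreground component as an RGBA layer, erase it from the working image, and stop when only the background remains. Below is a minimal sketch of that loop, assuming a hypothetical model interface (`extract_layer`, `erase_layer`) that stands in for the paper's in-context editing steps; it is not the released OmniPSD API.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class RGBALayer:
    rgba: np.ndarray  # (H, W, 4) float array in [0, 1], alpha in channel 3
    kind: str         # e.g. "text", "foreground", or "background"

def add_alpha(rgb: np.ndarray) -> np.ndarray:
    """Attach a fully opaque alpha channel to an (H, W, 3) image."""
    alpha = np.ones((*rgb.shape[:2], 1), dtype=rgb.dtype)
    return np.concatenate([rgb, alpha], axis=-1)

def decompose_to_layers(image: np.ndarray, model, max_layers: int = 8) -> list[RGBALayer]:
    """Iteratively peel RGBA layers off a flattened image.

    `model.extract_layer` / `model.erase_layer` are placeholders for the
    in-context editing steps in the abstract: extract the topmost remaining
    component as an RGBA layer, then inpaint the image as if it were removed.
    """
    layers: list[RGBALayer] = []
    working = image.copy()
    for _ in range(max_layers):
        layer = model.extract_layer(working)         # assumed: RGBALayer or None
        if layer is None:                            # only the background remains
            break
        layers.append(layer)
        working = model.erase_layer(working, layer)  # assumed: (H, W, 3) image
    layers.append(RGBALayer(rgba=add_alpha(working), kind="background"))
    return layers[::-1]  # PSD layer stacks are stored bottom-to-top
```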
Community
OmniPSD presents a diffusion-transformer framework for text-to-PSD generation and image-to-PSD decomposition, producing transparent PSDs with hierarchical, editable layers via in-context learning.
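For the generation side, the "spatial arrangement" idea from the abstract can be pictured as tiling the target layers into one canvas so the diffusion transformer's spatial attention sees them jointly. A toy illustration follows; the grid layout and same-size layers are assumptions for clarity, not the paper's exact scheme.

```python
import numpy as np

def arrange_layers_on_canvas(layers: list[np.ndarray], cols: int = 2) -> np.ndarray:
    """Tile N same-size RGBA layers into one grid canvas.

    Denoising all layers on a single canvas lets spatial attention model
    their compositional relationships; empty cells are left as zeros.
    """
    h, w, c = layers[0].shape
    rows = -(-len(layers) // cols)  # ceiling division
    canvas = np.zeros((rows * h, cols * w, c), dtype=layers[0].dtype)
    for i, layer in enumerate(layers):
        r, col = divmod(i, cols)
        canvas[r * h:(r + 1) * h, col * w:(col + 1) * w] = layer
    return canvas
```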
The following similar papers were recommended by the Semantic Scholar API:
- Controllable Layer Decomposition for Reversible Multi-Layer Image Generation (2025)
- TAUE: Training-free Noise Transplant and Cultivation Diffusion Model (2025)
- LayerComposer: Multi-Human Personalized Generation via Layered Canvas (2025)
- MatPedia: A Universal Generative Foundation for High-Fidelity Material Synthesis (2025)
- PixelDiT: Pixel Diffusion Transformers for Image Generation (2025)
- OmniRefiner: Reinforcement-Guided Local Diffusion Refinement (2025)
- Canvas-to-Image: Compositional Image Generation with Multimodal Controls (2025)
