This repository contains direct GGUF conversions of Lightricks/LTX-2 (19B parameters, `ltxv` architecture), with quantizations ranging from 2-bit to 8-bit.

| Type | Name | Location | Download |
| --- | --- | --- | --- |
| Main Model | LTX-2 | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | Gemma 3 | `ComfyUI/models/text_encoders` | Safetensors / GGUF (waiting for support) |
| Text Encoder | Embeddings Connector | `ComfyUI/models/text_encoders` | Safetensors |
| VAE (Video / Audio) | LTX-2 VAE | `ComfyUI/models/vae` | Safetensors |
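As a rough guide to placing these files, here is a minimal download sketch using `huggingface_hub`. The GGUF filename, the safetensors filenames, and the companion repo ID are placeholders rather than the actual file names in either repository, so check each repo's file listing before running it.

```python
# Minimal sketch: download the quantized UNet and its companion files into a
# ComfyUI models directory. All filenames and the companion repo ID below are
# placeholders -- check each repository's file listing for the real names.
from pathlib import Path

from huggingface_hub import hf_hub_download

MODELS = Path("ComfyUI/models")

# Quantized LTX-2 UNet from this repo (pick the quantization level you want).
hf_hub_download(
    repo_id="QuantStack/LTX-2-GGUF",
    filename="LTX-2-Q4_K_M.gguf",              # placeholder filename
    local_dir=MODELS / "unet",
)

# Gemma 3 text encoder and embeddings connector (safetensors).
for name in ("gemma_3.safetensors",            # placeholder filenames
             "embeddings_connector.safetensors"):
    hf_hub_download(
        repo_id="Lightricks/LTX-2",            # assumed companion repo
        filename=name,
        local_dir=MODELS / "text_encoders",
    )

# Video / audio VAE (safetensors).
hf_hub_download(
    repo_id="Lightricks/LTX-2",
    filename="ltx-2-vae.safetensors",          # placeholder filename
    local_dir=MODELS / "vae",
)
```

Until GGUF support for the text encoder lands, use the safetensors versions in the locations shown in the table above.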

Since this is a quantized model, all original licensing terms and usage restrictions remain in effect.
