Using the tools from https://github.com/city96/ComfyUI-GGUF/tree/main/tools, I converted https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan22-Turbo/Wan2_2-TI2V-5B-Turbo_fp16.safetensors to GGUF, then quantized it and finally fixed the 5D tensors.
The model is very much usable even on an old 4 GB GPU. It is versatile, and especially good for I2V.
These Turbo GGUFs work fine with most LoRAs made for the regular 5B.
The following usage advice comes from https://civitai.com/models/1995164/wan-damme-rapid-wan-22-5b-4-steps-checkpoint-t2vi2v: 4 steps are enough; CFG 1; sampler Euler, SA_Solver, or Uni_PC; scheduler simple, normal, or beta. Applying a 0.8 multiplier to the latent helps with oversaturation.
Preferred sizes are 1280x704 and 704x1280.
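The 0.8 latent multiplier above matches what ComfyUI's LatentMultiply node does: scale every latent value toward zero before decoding. A minimal standalone sketch, with NumPy standing in for the latent tensor (the shape below is an illustrative assumption, not the model's exact latent layout):

```python
import numpy as np

def multiply_latent(latent: np.ndarray, factor: float = 0.8) -> np.ndarray:
    """Scale latent values toward zero to tame oversaturation."""
    return latent * factor

# Toy latent with an assumed [batch, channels, frames, h, w] layout.
latent = np.ones((1, 48, 31, 44, 80), dtype=np.float32)
scaled = multiply_latent(latent)
print(float(scaled.max()))  # 0.8
```

In a ComfyUI workflow this corresponds to placing a LatentMultiply node (multiplier 0.8) between the sampler and the VAE decode.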
LoRAs compatible with this model on Civitai: https://civitai.com/search/models?baseModel=Wan%20Video%202.2%20TI2V-5B&modelType=LORA&sortBy=models_v9
A sample workflow is attached here as training data: https://civitai.com/api/download/models/2258309?type=Training%20Data
UMT5 at https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main
VAE at https://huggingface.co/QuantStack/Wan2.2-TI2V-5B-GGUF/tree/main/VAE
Example: a 121-frame 1024x704 I2V clip generated with Q4_0 in 4 steps:
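The 121-frame length above is not arbitrary: Wan video models expect frame counts of the form 4n + 1 (121 = 4 × 30 + 1). A small helper to snap an arbitrary length down to a valid count — this rule is documented for Wan models generally and assumed to carry over to this Turbo variant:

```python
def snap_frames(n: int) -> int:
    """Round n down to the nearest valid Wan frame count (4k + 1, minimum 1)."""
    if n < 1:
        return 1
    return ((n - 1) // 4) * 4 + 1

print(snap_frames(121))  # 121 (already valid)
print(snap_frames(120))  # 117
```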
Provided quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit.