This repository contains quantized GGUF models of Z-Image-Turbo derivatives, produced with stable-diffusion.cpp.

The command used was:

```
sd-cli --mode convert --model "example_ComfyUI_bf16.safetensors" --output "example-Q4_K.gguf" --tensor-type-rules "^context_refiner\.[0-9]*\.(attention\.(out|qkv)|feed_forward\.(w1|w2|w3)).weight=q8_0,^(layers|noise_refiner)\.[0-9]*\.(adaLN_modulation\.[0-9]*|attention\.(out|qkv)|feed_forward\.(w1|w2|w3))\.weight=q4_K"
```
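The `--tensor-type-rules` value is a comma-separated list of `regex=quant_type` pairs: tensors whose names match a pattern are quantized to that type (here, context-refiner projections to `q8_0` and the main/noise-refiner blocks to `q4_K`), while unmatched tensors keep their original precision. A minimal Python sketch of this matching logic (the `parse_rules`/`quant_for` helpers are hypothetical, not part of stable-diffusion.cpp):

```python
import re

# The rules string passed to --tensor-type-rules above.
RULES = (
    r"^context_refiner\.[0-9]*\.(attention\.(out|qkv)|feed_forward\.(w1|w2|w3)).weight=q8_0,"
    r"^(layers|noise_refiner)\.[0-9]*\.(adaLN_modulation\.[0-9]*|attention\.(out|qkv)|feed_forward\.(w1|w2|w3))\.weight=q4_K"
)

def parse_rules(rules: str):
    """Split 'regex=type' pairs (assumes no commas inside the patterns)."""
    pairs = []
    for rule in rules.split(","):
        pattern, _, qtype = rule.rpartition("=")
        pairs.append((re.compile(pattern), qtype))
    return pairs

def quant_for(tensor_name: str, pairs, default="bf16"):
    """First matching rule wins; unmatched tensors keep the default type."""
    for pattern, qtype in pairs:
        if pattern.search(tensor_name):
            return qtype
    return default

pairs = parse_rules(RULES)
print(quant_for("context_refiner.0.attention.qkv.weight", pairs))  # q8_0
print(quant_for("layers.12.feed_forward.w1.weight", pairs))        # q4_K
print(quant_for("x_embedder.weight", pairs))                       # bf16
```

Mixing a higher-precision `q8_0` for the small context refiner with `q4_K` for the bulk of the transformer blocks keeps quality loss low while still shrinking the model substantially.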

Source of quantized model:

Model size: 6B parameters (GGUF).

Model tree: n-Arno/Z-Image-GGUF