This is Z-Image-Turbo quantized to 4-bit and converted to Apple Silicon MLX format for use with this tool, with my New Mecha LoRA merged in.
The script used to convert and merge the LoRA is included; it relies on the previously mentioned tool to work.
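The merge step itself is the standard LoRA formula: for each adapted layer, the delta `scale * (up @ down)` is added to the matching base weight. Below is a minimal sketch of that step, assuming ComfyUI-style key names (`<name>.lora_up.weight` / `<name>.lora_down.weight`); the actual key layout varies between exporters, so treat this as illustrative, not as the included script:

```python
# Illustrative LoRA merge; key naming conventions are assumptions.
from safetensors.torch import load_file, save_file

base = load_file("models/z_image_turbo_bf16.safetensors")
lora = load_file("loras/new_mecha_zit.safetensors")
scale = 0.6

merged = dict(base)
for key in lora:
    if not key.endswith(".lora_up.weight"):
        continue
    name = key[: -len(".lora_up.weight")]
    up = lora[key].float()                           # (out, rank)
    down = lora[name + ".lora_down.weight"].float()  # (rank, in)
    target = name + ".weight"
    if target in merged:
        # W' = W + scale * (up @ down), cast back to the base dtype
        w = merged[target]
        merged[target] = (w.float() + scale * (up @ down)).to(w.dtype)

save_file(merged, "alpha.safetensors")
```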
```
(.venv) [user@MacM1 MLX_z-image] $ python convert_comfy.py --src_model models/z_image_turbo_bf16.safetensors --dst_model alpha.safetensors --lora_model loras/new_mecha_zit.safetensors --lora_scale 0.6
Starting conversion!
Loading models/z_image_turbo_bf16.safetensors file...
Reverting ComfyUI format...
100%|█████████████████████████████████████████████| 453/453 [00:00<00:00, 533114.40it/s]
Merging LoRA loras/new_mecha_zit.safetensors at scale 0.6...
100%|█████████████████████████████████████████████| 481/481 [01:21<00:00, 5.91it/s]
Merged 240 weight keys
Converting to MLX...
100%|█████████████████████████████████████████████| 521/521 [01:52<00:00, 4.62it/s]
Loading converted weights...
Quantizing (bits=4, group_size=32)...
Saving alpha.safetensors file...
Done!
```
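The quantization step in the log splits each weight matrix into groups of 32 values and stores them at 4 bits with a per-group scale and bias. A minimal sketch of what that does, using `mx.quantize` from `mlx.core` (the tensor here is a random placeholder, not one of the model's actual layers):

```python
import mlx.core as mx

# Placeholder weight matrix standing in for a model layer.
w = mx.random.normal((4096, 4096))

# Pack into 4-bit values in groups of 32, with per-group scales and biases.
w_q, scales, biases = mx.quantize(w, group_size=32, bits=4)

# Round-trip check: dequantize and measure the reconstruction error.
w_hat = mx.dequantize(w_q, scales, biases, group_size=32, bits=4)
print(mx.abs(w - w_hat).mean())
```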