Quantized versions of coder3101/Qwen3-VL-32B-Thinking-heretic. The base model uses a new abliteration method that doesn't seem to be as damaging to the model as previous methods were. I haven't fully tested it yet, but the stats are very promising.
The repo includes the following quantized versions.
For cards with 24GB of VRAM
The list below is sorted from most precise to least precise, but the difference in practice is minimal. The IQ4_NL version, which I recommend if your backend supports it, may behave better than Q4_K_S (or even Q4_K_M) in most situations at a slightly lower VRAM cost.
- IQ4_NL
- Q4_K_S
- IQ4_XS
These are an ideal size to run on 24GB of VRAM at 16K to 20K context length. If you use the vision layer, aim for 16K context.
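For reference, here's a minimal sketch of loading one of these quants at 16K context with llama-cpp-python (one possible backend; the GGUF filename is an assumption, so point it at whichever quant you actually downloaded):

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python, built with GPU support).
# The filename below is a placeholder for the quant you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-VL-32B-Thinking-heretic-IQ4_NL.gguf",  # hypothetical filename
    n_ctx=16384,       # 16K context; try up to 20K if your VRAM allows
    n_gpu_layers=-1,   # offload all layers to the GPU
    flash_attn=True,   # usually helps memory use at long context
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```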
For cards with 16GB of VRAM
Additionally, I've just added this quantized version:
- IQ3_XS
This should allow people to run the model on 16GB of VRAM with 8K context. It's far from perfect quality-wise at this compression rate, but if you just want to play with the model and accuracy is a secondary concern, it's a solid option. You may have to compress the KV cache to 8-bit in your backend to make it fit (see the sketch below), which, again, will hurt understanding. Vision might not be an option if you value speed.
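As one possible way to set that up, here's a sketch of an 8-bit KV cache with llama-cpp-python (in plain llama.cpp the equivalent flags are --cache-type-k / --cache-type-v). The filename is again an assumption:

```python
# Sketch of an 8-bit KV cache setup with llama-cpp-python.
# Quantizing the V cache generally requires flash attention in llama.cpp.
import llama_cpp
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-VL-32B-Thinking-heretic-IQ3_XS.gguf",  # hypothetical filename
    n_ctx=8192,                        # 8K context to fit in 16GB of VRAM
    n_gpu_layers=-1,
    flash_attn=True,                   # needed for a quantized V cache
    type_k=llama_cpp.GGML_TYPE_Q8_0,   # 8-bit K cache
    type_v=llama_cpp.GGML_TYPE_Q8_0,   # 8-bit V cache
)
```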
Vision
The mmproj file for image recognition is also included. No changes have been made to the file, but you shouldn't get refusals here either. Understand that adding the vision layer will consume more VRAM.
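If you want to use the vision layer, one option (a sketch, assuming a recent llama.cpp build with multimodal support; both filenames are assumptions) is to launch llama-server with the mmproj file alongside the main GGUF:

```python
# Sketch: launching llama.cpp's llama-server with the vision projector attached.
# Both filenames are placeholders for the files you downloaded from this repo.
import subprocess

subprocess.run([
    "llama-server",
    "-m", "Qwen3-VL-32B-Thinking-heretic-IQ4_NL.gguf",        # hypothetical filename
    "--mmproj", "mmproj-Qwen3-VL-32B-Thinking-heretic.gguf",   # hypothetical filename
    "-c", "16384",   # stick to 16K context when the vision layer is loaded
    "-ngl", "99",    # offload all layers to the GPU
])
```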
You can run the model without the vision layer, of course. In that case, it'll behave similarly to an uncensored Qwen3 text model.
Settings
Instruction Template: ChatML Thinker
Note: If your backend has a setting for it, disable the BOS token. It's set to disabled at the GGUF level, but not all backends honor that flag.