Official llama.cpp now supports Qwen3-VL
1 · #11 opened about 1 month ago by aljaca
Thinking models and finetune healing
2 · #10 opened about 2 months ago by newdoria88
Ollama 4_K_M GGUF not working
#9 opened about 2 months ago by BigArt
Does anyone have a q6 gguf?
3 · #8 opened about 2 months ago by luka810
Has anyone successfully loaded q8_0 gguf?
3 · #7 opened about 2 months ago by rthughes23
Thireus fixed some issues with Qwen3-VL; can you recreate the latest Qwen3-VL-30B-A3B-Instruct-abliterated?
2 · #5 opened about 2 months ago by xldistance
Wrong outputs from GGUFs?
🔥 1 · #4 opened about 2 months ago by geoign
Thanks Huihui
🔥 1 · #3 opened 2 months ago by 1TBGPU4EVR
Will there be an abliterated version of Qwen3-Omni in the future?
2 · #2 opened 2 months ago by rgsgs