huihui-ai/Huihui-MiniCPM-V-4_5-abliterated

#1383
by Poro7 - opened

Queued!

You can check progress at http://hf.tst.eu/status.html or periodically check the model
summary page at https://hf.tst.eu/model#Huihui-MiniCPM-V-4_5-abliterated-GGUF for quants to appear.

Sorry for the late reply - MiniCPM-V is not currently supported by llama.cpp.

However, non-abliterated GGUF quantizations of this model can be found on the Hub, for example https://huggingface.co/openbmb/MiniCPM-V-4_5-gguf/tree/main

Those use qwen3 as the architecture. Maybe there is a converter that could turn the model into something supported. @nicoboss might know more?

Actually, I'm not sure what it means for a model to use the qwen3 architecture, but I saw that llama.cpp has documentation for MiniCPM-V 4.5 in its repo:

https://github.com/ggml-org/llama.cpp/blob/56fc38b9655fbe1869d8bd6cfb269418196cea69/docs/multimodal/minicpmv4.5.md
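For reference, that doc describes running MiniCPM-V-style GGUF models through llama.cpp's `llama-mtmd-cli`, which loads the language model and a separate vision projector (mmproj) file. A rough sketch of the workflow, assuming the openbmb repo above; the exact GGUF file names are assumptions and should be checked against the repo contents:

```shell
# Sketch of the llama.cpp multimodal workflow for a MiniCPM-V GGUF model.
# File names below are assumptions - check the actual repo listing first.

# Download the quantized language model and the vision projector (mmproj):
huggingface-cli download openbmb/MiniCPM-V-4_5-gguf --local-dir ./minicpm-v-4_5

# Run inference on an image with llama-mtmd-cli:
llama-mtmd-cli \
  -m ./minicpm-v-4_5/ggml-model-Q4_K_M.gguf \
  --mmproj ./minicpm-v-4_5/mmproj-model-f16.gguf \
  --image ./photo.jpg \
  -p "Describe this image."
```

If llama.cpp can load these files, the architecture question may be moot for plain GGUF use; the open question is whether the abliterated HF checkpoint can be converted the same way.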
