# Ministral-3-3B-Instruct-2512-GGUF

This model was converted from mistralai/Ministral-3-3B-Instruct-2512-BF16 to GGUF using llama.cpp's `convert_hf_to_gguf.py` script.
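
For reference, a minimal sketch of the conversion step (the local paths and output filename below are placeholders, not the exact command used for this repo; `convert_hf_to_gguf.py` ships with llama.cpp):

```sh
# Convert the BF16 HF checkpoint to an 8-bit (q8_0) GGUF file.
# Paths and output filename are illustrative placeholders.
python convert_hf_to_gguf.py ./Ministral-3-3B-Instruct-2512-BF16 \
    --outfile Ministral-3-3B-Instruct-2512-Q8_0.gguf \
    --outtype q8_0
```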

To use it:

```sh
llama-server -hf ggml-org/Ministral-3-3B-Instruct-2512-GGUF
```
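
Once the server is up, llama-server exposes an OpenAI-compatible HTTP API (on port 8080 by default), so the model can be queried with a standard chat-completions request; the prompt here is just an illustration:

```sh
# Query the locally running server (default port assumed).
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Explain what GGUF is in one sentence."}]}'
```
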
- Format: GGUF
- Model size: 3B params
- Architecture: mistral3
- Quantization: 8-bit
