This model, finding1/DeepSeek-V3.1-Terminus-MLX-mixed_4_6, was converted to MLX format from deepseek-ai/DeepSeek-V3.1-Terminus using mlx-lm version 0.28.0 plus pull request #494, via `mlx_lm.convert --quantize --q-bits 4 --mlx-path MLX-mixed_4_6 --quant-predicate mixed_4_6 --hf-path deepseek-ai/DeepSeek-V3.1-Terminus`. The conversion reported 4.809 bits per weight.
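
A minimal usage sketch following the standard mlx-lm `load`/`generate` pattern (install with `pip install mlx-lm` first; the prompt below is only illustrative):

```python
from mlx_lm import load, generate

# Download (or load a local copy of) the quantized weights and tokenizer.
model, tokenizer = load("finding1/DeepSeek-V3.1-Terminus-MLX-mixed_4_6")

prompt = "Explain mixture-of-experts models in two sentences."  # illustrative

# Wrap the prompt in the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```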
- Model size: 671B params
- Tensor types: BF16 · U32 · F32
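
As a rough back-of-the-envelope check (my arithmetic, ignoring per-tensor metadata and runtime overhead), 671B parameters at the reported 4.809 bits per weight works out to roughly 400 GB of weights:

```python
params = 671e9  # reported parameter count
bpw = 4.809     # reported bits per weight
weight_bytes = params * bpw / 8
print(f"{weight_bytes / 1e9:.0f} GB ≈ {weight_bytes / 2**30:.0f} GiB")
# -> 403 GB ≈ 376 GiB (approximate; the actual file size will differ slightly)
```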
Model tree for finding1/DeepSeek-V3.1-Terminus-MLX-mixed_4_6:
- Base model: deepseek-ai/DeepSeek-V3.1-Base
- Quantized from: deepseek-ai/DeepSeek-V3.1-Terminus