Run this on your Mac with Outlier — a one-click app that loads MLX models locally. macOS arm64, free download.

Qwen2.5-Coder-32B-Instruct (MLX 4-bit)

MLX 4-bit conversion of Qwen/Qwen2.5-Coder-32B-Instruct, repackaged for Apple Silicon. Original weights, original license — see frontmatter above. This repo only changes the on-disk format (safetensors, MLX 4-bit, chat_template.jinja, tokenizer).

About this conversion

  • Format: MLX 4-bit safetensors (group size 64, symmetric)
  • Tooling: mlx-lm 0.31.x compatible
  • Files: model.safetensors shards · config.json · tokenizer · chat_template.jinja
  • License: inherits from the upstream base model — see YAML license field
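To make the "4-bit, group size 64, symmetric" description above concrete, here is a minimal sketch of symmetric group quantization in plain Python. This is an illustration of the scheme, not mlx-lm's internal implementation; group size is shortened for readability.

```python
# Illustrative sketch (not mlx-lm's internals): symmetric group
# quantization. Each group of weights shares one scale, and values
# map to the signed 4-bit range [-7, 7].

def quantize_group(weights, bits=4):
    """Quantize one group of floats to signed ints plus a shared scale."""
    qmax = 2 ** (bits - 1) - 1              # 7 for 4-bit symmetric
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize_group(q, scale):
    return [v * scale for v in q]

group = [0.12, -0.5, 0.33, 0.5]             # a (shortened) group of 64
q, scale = quantize_group(group)
restored = dequantize_group(q, scale)
# Rounding error is bounded by scale / 2 per weight.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(group, restored))
```

At group size 64, one shared scale amortizes to a fraction of a bit per weight, which is why the on-disk size stays close to 4 bits per parameter.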

Load directly with mlx-lm

pip install mlx-lm
python -m mlx_lm.generate \
  --model Outlier-Ai/Qwen2.5-Coder-32B-Instruct-MLX-4bit \
  --prompt "Hello, world." \
  --max-tokens 256

Or in Python:

from mlx_lm import load, generate
model, tokenizer = load("Outlier-Ai/Qwen2.5-Coder-32B-Instruct-MLX-4bit")
print(generate(model, tokenizer, prompt="Hello, world.", max_tokens=256))
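When passing chat messages rather than a raw prompt, apply the bundled chat_template.jinja via tokenizer.apply_chat_template before calling generate. As a rough illustration of what that template produces, Qwen2.5 models use a ChatML-style layout; the helper below is a hypothetical sketch of that layout, not the shipped template, which remains authoritative.

```python
# Hypothetical sketch of a ChatML-style prompt layout, as used by
# Qwen2.5 models. In practice, use tokenizer.apply_chat_template,
# which renders the repo's actual chat_template.jinja.

def render_chatml(messages):
    """Render messages in ChatML form and append the generation prompt."""
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    out.append("<|im_start|>assistant\n")    # model continues from here
    return "".join(out)

prompt = render_chatml([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Hello, world."},
])
```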

What is Outlier?


Outlier is a free macOS app that runs language models on your Mac, fully offline. Pick a model from the built-in tier picker, click download, and chat: no API keys, no cloud round-trips, no usage caps. It ships with a curated tier of MLX 4-bit models and can also load any compatible MLX conversion (including this one) via the model picker.

➡ Download Outlier (free, Apple Silicon): outlier.host

For benchmark numbers (MMLU, HumanEval, tok/s on M-series Macs) with full provenance, see outlier.host/benchmarks.
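A back-of-the-envelope estimate helps decide whether the model fits in unified memory before downloading. The figures below are assumptions, not measured numbers: 4-bit weights plus one assumed 16-bit scale per group of 64 weights work out to roughly 4.25 bits per parameter.

```python
# Back-of-the-envelope sizing (assumed overheads, not measured numbers):
# 4-bit weights plus one 16-bit shared scale per group of 64 weights.

params = 33e9                        # "33B params" from this card
bits_per_param = 4 + 16 / 64         # weight bits + scale overhead = 4.25
weights_gb = params * bits_per_param / 8 / 1e9
print(f"~{weights_gb:.1f} GB of weights")
```

On top of the weights, budget a few extra GB for the KV cache and runtime, so plan for roughly 20 GB or more of free unified memory.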

License

This conversion preserves the upstream license declared in the frontmatter (apache-2.0). Refer to the upstream base model card for the canonical license text and any usage restrictions.

Model details

  • Model size: 33B params
  • Tensor types: BF16 · U32 (MLX safetensors, 4-bit)
  • Base model (per the model tree): Qwen/Qwen2.5-32B, quantized in this repo as Qwen2.5-Coder-32B-Instruct-MLX-4bit