Run this on your Mac with Outlier — a free macOS app for local MLX inference.

Qwen3-4B (MLX 4-bit)

MLX 4-bit conversion of Qwen/Qwen3-4B. License and base-model fields inherit from the original — see YAML frontmatter above.

Load with mlx-lm

pip install mlx-lm
python -m mlx_lm.generate --model Outlier-Ai/Qwen3-4B-MLX-4bit --prompt "Hello" --max-tokens 256
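The same model can also be loaded from Python with mlx-lm's `load`/`generate` API. A minimal sketch, assuming mlx-lm is installed on an Apple silicon Mac (the model download happens on first load):

```python
# Sketch: generate text with mlx-lm's Python API (Apple silicon only).
from mlx_lm import load, generate

# Downloads the quantized weights from the Hub on first use.
model, tokenizer = load("Outlier-Ai/Qwen3-4B-MLX-4bit")

# Mirrors the CLI invocation above.
text = generate(model, tokenizer, prompt="Hello", max_tokens=256)
print(text)
```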

What is Outlier?

A free macOS app that runs MLX models locally — no cloud, no API keys, no usage caps.

outlier.host

Other Outlier conversions

License

Inherits from upstream (apache-2.0). See base model card.
