GUI-Owl-1.5-2B-Instruct-MLX-4bit

This repository contains an MLX-converted 4-bit version of mPLUG/GUI-Owl-1.5-2B-Instruct.

Source

  • Original model: mPLUG/GUI-Owl-1.5-2B-Instruct
  • Original architecture: Qwen3VLForConditionalGeneration
  • Converted with: mlx-vlm
  • Quantization: 4-bit affine
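For reference, a conversion along these lines can be reproduced with mlx-vlm's convert entry point. This is a sketch, not the exact command used for this repository: flag names can differ between mlx-vlm versions, and the output path below is a placeholder.

```shell
# Sketch of the 4-bit conversion step, assuming a recent mlx-vlm release.
# Check `python -m mlx_vlm.convert --help` for the flags your version supports.
python -m mlx_vlm.convert \
  --hf-path mPLUG/GUI-Owl-1.5-2B-Instruct \
  --mlx-path ./GUI-Owl-1.5-2B-Instruct-MLX-4bit \
  -q --q-bits 4
```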

What Was Verified

This converted model was tested locally on Apple Silicon and successfully generated a valid GUI action from a Taobao shopping-cart screenshot.

Example output:

Action: Click on the price section of the first item in the shopping cart.
<tool_call>
{"name": "mobile_use", "arguments": {"action": "click", "coordinate": [229, 566]}}
</tool_call>
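Downstream automation needs to pull the JSON action out of the model's `<tool_call>` block. A minimal sketch of such a parser (the `parse_tool_call` helper is hypothetical, not part of mlx-vlm or GUI-Owl), applied to the example output above:

```python
import json
import re

# Hypothetical helper: extract the JSON action from a GUI-Owl response
# that wraps it in a <tool_call>...</tool_call> block.
def parse_tool_call(text):
    match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, re.DOTALL)
    if match is None:
        return None
    return json.loads(match.group(1))

response = """Action: Click on the price section of the first item in the shopping cart.
<tool_call>
{"name": "mobile_use", "arguments": {"action": "click", "coordinate": [229, 566]}}
</tool_call>"""

call = parse_tool_call(response)
print(call["arguments"]["action"])      # click
print(call["arguments"]["coordinate"])  # [229, 566]
```

Returning None on a missing block lets a driver loop distinguish a plain-text answer from an actionable step.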

Notes

  • This is a converted derivative of the original model.
  • Behavior may differ slightly from the original PyTorch checkpoint.
  • The conversion was performed for local Apple Silicon inference and experimentation.

Minimal Usage

python -m mlx_vlm generate \
  --model /path/to/GUI-Owl-1.5-2B-Instruct-MLX-4bit \
  --image /path/to/screenshot.png \
  --prompt 'Describe the next GUI action.'
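The same generation can also be driven from Python using mlx-vlm's library API. This is a sketch assuming the `load`/`generate` helpers exported by recent mlx-vlm releases; argument names and ordering may differ across versions, and the paths are placeholders.

```python
# Sketch only: requires mlx-vlm installed and the converted model on disk.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template

model_path = "/path/to/GUI-Owl-1.5-2B-Instruct-MLX-4bit"  # placeholder
model, processor = load(model_path)

# Format the prompt with the model's chat template before generating.
prompt = apply_chat_template(
    processor, model.config, "Describe the next GUI action.", num_images=1
)
output = generate(model, processor, prompt, ["/path/to/screenshot.png"])
print(output)
```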