How to use with vLLM
Install vLLM from pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Outlier-Ai/Outlier-10B-V3.2"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Outlier-Ai/Outlier-10B-V3.2",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
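The server speaks the standard OpenAI-compatible API, so streaming works against the same endpoint. A minimal sketch; the "stream" field belongs to that API rather than to anything specific to this card:

# Stream tokens back as they are generated:
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Outlier-Ai/Outlier-10B-V3.2",
		"stream": true,
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'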
Use Docker
docker model run hf.co/Outlier-Ai/Outlier-10B-V3.2
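The command above pulls the repo through Docker Model Runner. If you would rather self-host the OpenAI-compatible endpoint from the vLLM section, vLLM publishes an official image; the sketch below uses vLLM's documented image and flags and assumes an NVIDIA GPU with a local Hugging Face cache:

# Serve the model with vLLM's official container image:
docker run --gpus all -p 8000:8000 \
	-v ~/.cache/huggingface:/root/.cache/huggingface \
	vllm/vllm-openai:latest \
	--model "Outlier-Ai/Outlier-10B-V3.2"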
Superseded. This repo is a research artifact from an earlier Outlier lineage and is no longer the recommended download. The current shipping tier is at outlier.host.

Outlier-10B-V3.2

This repo predates the v1.8 Outlier lineup. It is preserved here for reproducibility and historical reference, not as a production recommendation.

What replaced it

The current shipping tiers are listed in the Outlier app (v1.8+).

For the latest verified benchmarks and downloads, visit outlier.host.

Original notes

This was a research/preview artifact. It may contain experimental adapters, overlays, or quantization variants that did not graduate into the shipping product. Treat any technical claims in earlier revisions of this card as provisional.

License

See the YAML frontmatter above; the original license terms are preserved.

Safetensors
Model size: 23B params
Tensor types: F16, F32

Model tree for Outlier-Ai/Outlier-10B-V3.2

Base model: Qwen/Qwen2.5-7B
This model is an adapter of that base (one of 2,036 adapters listed for it on the Hub).
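If you need the weights locally, for example to merge the adapter onto its base, the Hugging Face Hub CLI can fetch both repos. A minimal sketch; huggingface-cli is the standard Hub client, and nothing below is specific to this card:

# Install the Hub CLI:
pip install -U "huggingface_hub[cli]"
# Download this adapter's files:
huggingface-cli download Outlier-Ai/Outlier-10B-V3.2
# Download the base model it adapts:
huggingface-cli download Qwen/Qwen2.5-7B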
