How to use from llama.cpp
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI
# (append a quantization tag after the trailing colon, e.g. :Q4_K_M; see the available quantizations listed below):
llama-server -hf osllmai-community/DeepSeek-R1-Distill-Qwen-14B-GGUF:
# Run inference directly in the terminal:
llama-cli -hf osllmai-community/DeepSeek-R1-Distill-Qwen-14B-GGUF:
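Once the server is running (it listens on http://localhost:8080 by default), any OpenAI-compatible client can talk to it. A minimal curl sketch, assuming the default host and port:
# Query the OpenAI-compatible chat endpoint (default port 8080)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Explain GGUF in one sentence."}]}'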
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf osllmai-community/DeepSeek-R1-Distill-Qwen-14B-GGUF:
# Run inference directly in the terminal:
llama-cli -hf osllmai-community/DeepSeek-R1-Distill-Qwen-14B-GGUF:
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf osllmai-community/DeepSeek-R1-Distill-Qwen-14B-GGUF:
# Run inference directly in the terminal:
./llama-cli -hf osllmai-community/DeepSeek-R1-Distill-Qwen-14B-GGUF:
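llama-cli can also run one-shot instead of interactively; a small sketch using its standard flags (-p sets the prompt, -n caps the number of generated tokens):
# One-shot generation instead of an interactive session
./llama-cli -hf osllmai-community/DeepSeek-R1-Distill-Qwen-14B-GGUF: \
  -p "Summarize the Pythagorean theorem." -n 256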
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf osllmai-community/DeepSeek-R1-Distill-Qwen-14B-GGUF:
# Run inference directly in the terminal:
./build/bin/llama-cli -hf osllmai-community/DeepSeek-R1-Distill-Qwen-14B-GGUF:
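The default CMake configuration builds CPU-only binaries. To offload layers to a GPU, enable the matching ggml backend at configure time; a sketch for a CUDA build (GGML_CUDA is the current llama.cpp option; substitute the flag for your backend, e.g. Metal or Vulkan):
# Configure with CUDA support, then rebuild the same targets
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli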
Use Docker
docker model run hf.co/osllmai-community/DeepSeek-R1-Distill-Qwen-14B-GGUF:
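The docker model command above requires Docker Model Runner. As an alternative, llama.cpp publishes a server container image; a hedged sketch (the image tag and flags follow the llama.cpp Docker docs, but verify them for your platform):
# Run llama-server in a container; --host 0.0.0.0 exposes it outside the container
docker run -p 8080:8080 ghcr.io/ggml-org/llama.cpp:server \
  -hf osllmai-community/DeepSeek-R1-Distill-Qwen-14B-GGUF: \
  --host 0.0.0.0 --port 8080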
Quick Links

osllm.ai Models Highlights Program

We believe there's no need to pay per token when you have a GPU in your own computer.

Highlighting new and noteworthy models from the community. Join the conversation on Discord.

Official Website | Documentation | Discord

NEW: Subscribe to our mailing list for updates and news!

Email: [email protected]

Disclaimers

Osllm.ai is not the creator, originator, or owner of any model featured in the Community Model Program. Each Community Model is created and provided by third parties. Osllm.ai does not endorse, support, represent, or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate, inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated it. Osllm.ai may not monitor or control the Community Models and cannot take responsibility for them. Osllm.ai disclaims all warranties or guarantees about the accuracy, reliability, or benefits of the Community Models. Furthermore, Osllm.ai disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted, error-free, virus-free, or that any issues will be corrected. You are solely responsible for any damage resulting from your use of or access to the Community Models, downloading of any Community Model, or use of any other Community Model provided by or through Osllm.ai.

Downloads last month: 15,394
Format: GGUF
Model size: 15B params
Architecture: qwen2
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
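To download a single quantization to disk instead of streaming it through the -hf flag, huggingface-cli works; a sketch in which the 4-bit filename pattern is an assumption, so check the repo's file list for the exact names:
# Fetch only the 4-bit GGUF files (pattern is an assumption; verify against the repo)
huggingface-cli download osllmai-community/DeepSeek-R1-Distill-Qwen-14B-GGUF --include "*Q4_K_M*"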

Model tree for osllmai-community/DeepSeek-R1-Distill-Qwen-14B-GGUF: this model is one of 131 quantized variants of the base model.