
Outlier-Ai/DeepSeek-R1-Distill-Llama-8B-MLX-4bit

Text Generation
MLX
Safetensors
llama
4-bit precision
4bit
apple-silicon
chain-of-thought
chat
conversational
deepseek
deepseek-r1
deepseek-r1-distill
edge-ai
instruct
local-llm
m1
m2
m3
m4
mac
mac-mini
mac-studio
macbook-air
macbook-pro
macos
metal
mlx-community
mlx-lm
no-cloud
offline
on-device
outlier
outlier-app
private
quantized
r1
r1-distill
reasoning
thinking

Instructions for using Outlier-Ai/DeepSeek-R1-Distill-Llama-8B-MLX-4bit with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • MLX

    How to use Outlier-Ai/DeepSeek-R1-Distill-Llama-8B-MLX-4bit with MLX (a streaming variant is sketched after this list):

    # Make sure mlx-lm is installed
    # pip install --upgrade mlx-lm
    
    # Generate text with mlx-lm
    from mlx_lm import load, generate
    
    model, tokenizer = load("Outlier-Ai/DeepSeek-R1-Distill-Llama-8B-MLX-4bit")
    
    prompt = "Write a story about Einstein"
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
    
    text = generate(model, tokenizer, prompt=prompt, verbose=True)
  • Notebooks
  • Google Colab
  • Kaggle
  • Local Apps
  • LM Studio
  • MLX LM

    How to use Outlier-Ai/DeepSeek-R1-Distill-Llama-8B-MLX-4bit with MLX LM:

    Generate or start a chat session
    # Install MLX LM
    uv tool install mlx-lm
    # Interactive chat REPL
    mlx_lm.chat --model "Outlier-Ai/DeepSeek-R1-Distill-Llama-8B-MLX-4bit"
    Run an OpenAI-compatible server
    # Install MLX LM
    uv tool install mlx-lm
    # Start the server
    mlx_lm.server --model "Outlier-Ai/DeepSeek-R1-Distill-Llama-8B-MLX-4bit"
    # Calling the OpenAI-compatible server with curl (a Python client sketch follows this list)
    curl -X POST "http://localhost:8080/v1/chat/completions" \
       -H "Content-Type: application/json" \
       --data '{
         "model": "Outlier-Ai/DeepSeek-R1-Distill-Llama-8B-MLX-4bit",
         "messages": [
           {"role": "user", "content": "Hello"}
         ]
       }'
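
The server speaks the OpenAI chat-completions protocol, so any OpenAI-compatible client can call it. Below is a minimal Python sketch using the openai package; it assumes the server is running on its default port 8080 and that the placeholder API key is acceptable, since the local server does not validate keys.

    # Minimal sketch: calling the local mlx_lm.server via the openai client
    # Assumes `pip install openai` and a server started as shown above
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    response = client.chat.completions.create(
        model="Outlier-Ai/DeepSeek-R1-Distill-Llama-8B-MLX-4bit",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)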
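
Reasoning distills like this one tend to emit long chain-of-thought answers, so streaming the output is often more practical than waiting for generate to return. A minimal streaming sketch of the MLX library usage above, assuming a recent mlx-lm in which stream_generate yields chunks exposing a .text field (older releases yield plain strings):

    # Minimal streaming sketch with mlx-lm (same model and chat template as above)
    from mlx_lm import load, stream_generate

    model, tokenizer = load("Outlier-Ai/DeepSeek-R1-Distill-Llama-8B-MLX-4bit")

    messages = [{"role": "user", "content": "Write a story about Einstein"}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

    # Print each chunk as soon as it is generated
    for chunk in stream_generate(model, tokenizer, prompt, max_tokens=512):
        print(chunk.text, end="", flush=True)
    print()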
DeepSeek-R1-Distill-Llama-8B-MLX-4bit
4.53 GB
  • 1 contributor
History: 7 commits
ur-dad-matt
docs(card): SEO + cross-link refresh (HF-DISCOVERABILITY-001)
0d160bd verified 1 day ago
  • .gitattributes
    1.57 kB
    Initial upload: canonical-named twin of Outlier-Ai/Outlier-R1-Distill-Llama-8B-MLX-4bit 17 days ago
  • README.md
    4.35 kB
    docs(card): SEO + cross-link refresh (HF-DISCOVERABILITY-001) 1 day ago
  • config.json
    1.11 kB
    Initial upload: canonical-named twin of Outlier-Ai/Outlier-R1-Distill-Llama-8B-MLX-4bit 17 days ago
  • generation_config.json
    181 Bytes
    Initial upload: canonical-named twin of Outlier-Ai/Outlier-R1-Distill-Llama-8B-MLX-4bit 17 days ago
  • model.safetensors
    4.52 GB
    xet
    Initial upload: canonical-named twin of Outlier-Ai/Outlier-R1-Distill-Llama-8B-MLX-4bit 17 days ago
  • model.safetensors.index.json
    52.4 kB
    Initial upload: canonical-named twin of Outlier-Ai/Outlier-R1-Distill-Llama-8B-MLX-4bit 17 days ago
  • tokenizer.json
    17.2 MB
    xet
    Initial upload: canonical-named twin of Outlier-Ai/Outlier-R1-Distill-Llama-8B-MLX-4bit 17 days ago
  • tokenizer_config.json
    411 Bytes
    Initial upload: canonical-named twin of Outlier-Ai/Outlier-R1-Distill-Llama-8B-MLX-4bit 17 days ago
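
The listing above is everything needed to run the model; the bulk of the 4.53 GB is the quantized weights in model.safetensors. For fully offline, no-cloud use, the whole repository can be cached once with huggingface_hub; a minimal sketch, assuming huggingface_hub is installed and the default cache directory is acceptable:

    # Minimal sketch: fetch all repo files once so later runs work offline
    # Assumes `pip install huggingface_hub`
    from huggingface_hub import snapshot_download

    local_path = snapshot_download("Outlier-Ai/DeepSeek-R1-Distill-Llama-8B-MLX-4bit")
    print(local_path)  # this local directory can be passed to mlx_lm load() in place of the repo id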