# How to use from Unsloth Studio
## Install Unsloth Studio (macOS, Linux, WSL)

```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for keypa/INTELLECT-3-FP8-gguf to start chatting
```
## Install Unsloth Studio (Windows)

```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for keypa/INTELLECT-3-FP8-gguf to start chatting
```
## Using Hugging Face Spaces for Unsloth

No local setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for keypa/INTELLECT-3-FP8-gguf to start chatting.

# INTELLECT-3-FP8 - GGUF

This is a GGUF conversion of [PrimeIntellect/INTELLECT-3-FP8](https://huggingface.co/PrimeIntellect/INTELLECT-3-FP8).

## Conversion Info

- **Precision:** F16 (half precision)
- **Tool:** llama.cpp `convert-hf-to-gguf.py`
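
For reference, a minimal sketch of how such a conversion is typically run with llama.cpp's converter (the local paths are placeholders, and newer llama.cpp checkouts ship the script as `convert_hf_to_gguf.py`):

```bash
# Fetch the source weights (target directory is a placeholder)
huggingface-cli download PrimeIntellect/INTELLECT-3-FP8 --local-dir ./INTELLECT-3-FP8

# Convert to GGUF at F16 precision
python convert_hf_to_gguf.py ./INTELLECT-3-FP8 \
  --outtype f16 \
  --outfile INTELLECT-3-FP8-F16.gguf
```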

## Usage

Download the GGUF file and use it with llama.cpp or any GGUF-compatible inference engine; a minimal example follows.
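
A sketch with llama.cpp (the `.gguf` filename below is an assumption; check the repository's file listing for the actual name):

```bash
# Download the model file (filename is an assumption)
huggingface-cli download keypa/INTELLECT-3-FP8-gguf \
  INTELLECT-3-FP8-F16.gguf --local-dir .

# Interactive chat
llama-cli -m INTELLECT-3-FP8-F16.gguf -cnv

# Or serve an OpenAI-compatible HTTP API on port 8080
llama-server -m INTELLECT-3-FP8-F16.gguf --port 8080
```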

## Model Details

- Format: GGUF
- Model size: 107B params
- Architecture: glm4moe
- Quantization: 16-bit
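
As a rough sizing check, 16-bit weights take 2 bytes per parameter, so 107B parameters come to roughly 2 × 107 × 10⁹ bytes ≈ 214 GB for the weights alone; plan disk and memory accordingly.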

