# qwen3-0.6b-codeforces-cots-test-gguf
This is a GGUF conversion of eigenben/qwen3-0.6b-codeforces-cots-test, which is a LoRA fine-tuned version of Qwen/Qwen3-0.6B.
The model was fine-tuned on the open-r1/codeforces-cots dataset for instruction following on competitive programming problems.
## Model Details
- Base Model: Qwen/Qwen3-0.6B
- Fine-tuned Model: eigenben/qwen3-0.6b-codeforces-cots-test
- Training: Supervised Fine-Tuning (SFT) with TRL on 100 examples
- Dataset: open-r1/codeforces-cots (competitive programming problems)
- Format: GGUF (for llama.cpp, Ollama, LM Studio, etc.)
## Available Quantization
| File | Quant | Size | Description |
|---|---|---|---|
| qwen3-0.6b-codeforces-cots-test-q4_k_m.gguf | Q4_K_M | ~300MB | 4-bit medium - recommended balance of size and quality |
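The ~300 MB figure is in the ballpark of a simple back-of-envelope estimate. Q4_K_M mixes 4- and 6-bit blocks plus per-block scales, so its average bits-per-weight varies by model; the 4.85 bits/weight used below is an assumption, not a measured value:

```python
# Rough size estimate for a Q4_K_M quantization of a 0.6B-parameter model.
# bits_per_weight is an assumed average; the true figure varies per tensor.
params = 0.6e9          # parameter count of Qwen3-0.6B
bits_per_weight = 4.85  # assumed Q4_K_M average

size_mb = params * bits_per_weight / 8 / 1e6
print(f"~{size_mb:.0f} MB")  # prints "~364 MB"
```

The estimate excludes GGUF metadata and any tensors kept at higher precision, so the real file can land somewhat above or below it.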
## Usage

### With llama.cpp
```bash
# Download model
huggingface-cli download eigenben/qwen3-0.6b-codeforces-cots-test-gguf qwen3-0.6b-codeforces-cots-test-q4_k_m.gguf

# Run with llama.cpp
./llama-cli -m qwen3-0.6b-codeforces-cots-test-q4_k_m.gguf -p "Your prompt here"
```
### With Ollama
1. Download the GGUF file:

   ```bash
   huggingface-cli download eigenben/qwen3-0.6b-codeforces-cots-test-gguf qwen3-0.6b-codeforces-cots-test-q4_k_m.gguf
   ```

2. Create a `Modelfile`:

   ```
   FROM ./qwen3-0.6b-codeforces-cots-test-q4_k_m.gguf
   ```

3. Create and run the model:

   ```bash
   ollama create qwen3-codeforces -f Modelfile
   ollama run qwen3-codeforces
   ```
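The minimal `Modelfile` above works as-is; Ollama also lets you bake default sampling parameters and a system prompt into the model. A slightly fuller sketch (the parameter values and system prompt are illustrative, not tuned for this model):

```
FROM ./qwen3-0.6b-codeforces-cots-test-q4_k_m.gguf

# Illustrative defaults -- adjust to taste
PARAMETER temperature 0.6
PARAMETER num_ctx 4096

SYSTEM "You are a competitive programming assistant. Think step by step before writing code."
```

After editing the `Modelfile`, rerun `ollama create qwen3-codeforces -f Modelfile` to rebuild the model with the new defaults.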
### With LM Studio

1. Download the `.gguf` file
2. Import it into LM Studio
3. Start chatting!
## Training Details
This checkpoint comes from a test run on 100 examples from the codeforces-cots dataset:
- Training steps: 25
- Training loss: 2.68
- Mean token accuracy: 49.6%
For production use, consider training on the full dataset (35,718 examples).
## License
Inherits the license from the base model: Qwen/Qwen3-0.6B
Converted to GGUF format using llama.cpp. Fine-tuned using TRL on Hugging Face Jobs.