# 🧠 SauatAI — Kazakh Grammar Correction with Gemma-3 1B (LoRA fine-tuned)
This model is a fine-tuned version of google/gemma-3-1b-it on a custom Kazakh dataset of short story sentences from ertegiler.kz, augmented with spelling and grammar mistakes. It was trained to correct noisy Kazakh sentences in an instruction-following format.
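The actual augmentation pipeline is not published in this card. As a minimal sketch only, the snippet below shows one way character-level spelling noise can be injected into clean sentences to produce (noisy, clean) training pairs; the function name, noise operations, and mistake probability are illustrative assumptions, not the SauatAI pipeline.

```python
import random

def add_spelling_noise(sentence: str, mistake_prob: float = 0.1) -> str:
    """Illustrative only: randomly drop, swap, or duplicate characters."""
    chars = list(sentence)
    out = []
    i = 0
    while i < len(chars):
        ch = chars[i]
        if ch != " " and random.random() < mistake_prob:
            op = random.choice(["drop", "swap", "duplicate"])
            if op == "drop":
                i += 1
                continue
            if op == "swap" and i + 1 < len(chars) and chars[i + 1] != " ":
                out.extend([chars[i + 1], ch])
                i += 2
                continue
            out.extend([ch, ch])  # duplicate
            i += 1
            continue
        out.append(ch)
        i += 1
    return "".join(out)

# Build a (noisy, clean) pair from a clean sentence
clean = "Ол досым еді"
pair = {"input": add_spelling_noise(clean, mistake_prob=0.15), "output": clean}
```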
## 🔍 Model Details
| Attribute | Value |
|---|---|
| Base Model | google/gemma-3-1b-it (Decoder-only, instruction-tuned) |
| Fine-tuning Method | LoRA (via PEFT + QLoRA) |
| Dataset | sauatai-ertegiler-kz-misspellings-kk-s170-len60-n6-mprob-v1 |
| Language | Kazakh (kk) |
| Training Examples | 12,000 (training), 3,200 (validation) |
| Epochs | 3 |
| Learning Rate | 2e-4 |
| Sentence Sorting | Shortest sentences selected first |
| Output Token | `<fix>` used as end-of-response token |
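The training script itself is not included in this card. The configuration sketch below shows a typical PEFT + QLoRA setup consistent with the table above (3 epochs, learning rate 2e-4, `<fix>` added as a special token); the LoRA rank, alpha, dropout, target modules, and batch-size settings are assumptions for illustration, and the data preparation and `Trainer` call are omitted.

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

BASE = "google/gemma-3-1b-it"

# Tokenizer with the custom end-of-response marker
tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.add_special_tokens({"additional_special_tokens": ["<fix>"]})

# 4-bit quantized base model (QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    BASE, quantization_config=bnb_config, device_map="auto"
)
model.resize_token_embeddings(len(tokenizer))  # account for the added <fix> token
model = prepare_model_for_kbit_training(model)

# LoRA adapter (rank, alpha, dropout, target modules are illustrative assumptions)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Hyperparameters documented above: 3 epochs, learning rate 2e-4, eval every 500 steps
training_args = TrainingArguments(
    output_dir="sauatai-gemma-3-1b-it-kk",
    num_train_epochs=3,
    learning_rate=2e-4,
    per_device_train_batch_size=4,   # assumption
    gradient_accumulation_steps=4,   # assumption
    logging_steps=500,
    eval_strategy="steps",
    eval_steps=500,
)
```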
## 💡 How to Use
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# ⚙️ Configs
BASE = "google/gemma-3-1b-it"
ADAPTER = "alphazhan/sauatai-gemma-3-1b-it-kk-s170-len60-n6-mprob-ntrain12k-shortfirst-e3-lr2e4-v1"
device = "cuda" if torch.cuda.is_available() else "cpu"

# 🧠 Load tokenizer & base model
tokenizer = AutoTokenizer.from_pretrained(ADAPTER)
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model.resize_token_embeddings(len(tokenizer))  # account for the added <fix> token
model = PeftModel.from_pretrained(model, ADAPTER).to(device)

# ✏️ Inference
sentence_to_correct = "Ол досм еді"
prompt = f"Correct this Kazakh sentence.\nInput: {sentence_to_correct}\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
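Because the model was trained to end its answer with `<fix>`, generation can also be stopped explicitly at that token and the marker stripped from the output. This sketch assumes `<fix>` is present in the adapter tokenizer's vocabulary (which is why the embedding matrix is resized above); if it is not, fall back to plain string splitting.

```python
# Stop generation at the <fix> marker (assumes it exists in the tokenizer vocabulary)
fix_id = tokenizer.convert_tokens_to_ids("<fix>")

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, eos_token_id=fix_id)

decoded = tokenizer.decode(output[0], skip_special_tokens=False)
corrected = decoded.split("Output:")[-1].split("<fix>")[0].strip()
print(corrected)  # e.g. "Ол досым еді"
```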
## 📉 Training and Validation Loss
| Step | Training Loss | Validation Loss |
|---|---|---|
| 500 | 2.4877 | 2.5578 |
| 1000 | 2.3317 | 2.4297 |
| 1500 | 2.2257 | 2.3402 |
| 2000 | 2.1642 | 2.2790 |
| 2500 | 2.1078 | 2.2513 |
| 3000 | 2.1189 | 2.2300 |
Both training and validation loss decline steadily, which indicates effective learning and no sign of overfitting. However, neither metric had plateaued by the end of training, so three epochs were likely not enough to reach convergence; additional epochs should reduce the loss further and improve generalization. This is especially relevant for small instruction-tuned models like Gemma 3 (1B), which typically benefit from longer fine-tuning on domain-specific or low-resource languages such as Kazakh. Extending training somewhat, while monitoring for plateauing or divergence, could yield a more refined and performant model (I hope so).
## 🧾 Prompt Format

```
Correct this Kazakh sentence.
Input: Ол досм еді
Output: Ол досым еді<fix>
```
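A tiny helper (illustrative, not part of the released code) makes explicit how a (noisy, clean) pair maps onto this format, with `<fix>` appended only to the target side; at inference time the prompt stops right after `Output:`.

```python
def build_example(noisy: str, clean: str) -> str:
    """Assemble one training example in the format above (illustrative helper)."""
    return f"Correct this Kazakh sentence.\nInput: {noisy}\nOutput: {clean}<fix>"

print(build_example("Ол досм еді", "Ол досым еді"))
```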
## 🔗 Credits
Developed by @alphazhan