# 🧠 SauatAI — Kazakh Grammar Correction with Gemma-3 1B (LoRA fine-tuned)

This model is a fine-tuned version of google/gemma-3-1b-it on a custom Kazakh dataset of short-story sentences from ertegiler.kz, synthetically augmented with spelling and grammar mistakes. It was trained to correct noisy Kazakh sentences in an instruction-following format.
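The augmentation pipeline itself is not published on this card; the snippet below is only a hypothetical illustration of character-level noise injection (the `corrupt` helper, its `noise_prob` parameter, and the alphabet string are all assumptions, not the real code):

```python
import random

KAZAKH_LETTERS = "аәбвгғдеёжзийкқлмнңоөпрстуұүфхһцчшщъыіьэюя"

def corrupt(sentence: str, noise_prob: float = 0.1) -> str:
    """Hypothetical character-level noise: randomly drop or swap letters."""
    chars = []
    for ch in sentence:
        r = random.random()
        if ch.isalpha() and r < noise_prob / 2:
            continue                                     # drop the character
        elif ch.isalpha() and r < noise_prob:
            chars.append(random.choice(KAZAKH_LETTERS))  # replace it
        else:
            chars.append(ch)                             # keep it as-is
    return "".join(chars)

print(corrupt("Ол досым еді"))  # e.g. "Ол досм еді"
```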


## 🔍 Model Details

| Attribute | Value |
|---|---|
| Base Model | google/gemma-3-1b-it (decoder-only, instruction-tuned) |
| Fine-tuning Method | LoRA (via PEFT + QLoRA) |
| Dataset | sauatai-ertegiler-kz-misspellings-kk-s170-len60-n6-mprob-v1 |
| Language | Kazakh (kk) |
| Training Examples | 12,000 (training), 3,200 (validation) |
| Epochs | 3 |
| Learning Rate | 2e-4 |
| Sentence Sorting | Shortest sentences selected first |
| Output Token | `<fix>` used as end-of-response token |
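The card does not list the LoRA hyperparameters. For readers who want to reproduce a similar setup, below is a minimal sketch of a typical PEFT + QLoRA configuration for Gemma; the rank, alpha, dropout, and target modules are assumptions, not values from this training run:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization for QLoRA (assumed settings, not from the card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-1b-it", quantization_config=bnb_config, device_map="auto"
)

# LoRA adapter config; r/alpha/targets are illustrative only
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```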

## 💡 How to Use

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# ⚙️ Configs
BASE = "google/gemma-3-1b-it"
ADAPTER = "alphazhan/sauatai-gemma-3-1b-it-kk-s170-len60-n6-mprob-ntrain12k-shortfirst-e3-lr2e4-v1"
device = "cuda" if torch.cuda.is_available() else "cpu"

# 🧠 Load tokenizer & base model
# The adapter repo ships the tokenizer with the added <fix> token,
# so the base model's embeddings must be resized to match it.
tokenizer = AutoTokenizer.from_pretrained(ADAPTER)
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model.resize_token_embeddings(len(tokenizer))
model = PeftModel.from_pretrained(model, ADAPTER).to(device)

# ✏️ Inference
sentence_to_correct = "Ол досм еді"
prompt = f"Correct this Kazakh sentence.\nInput: {sentence_to_correct}\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the echoed prompt
new_tokens = output[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```
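Because the model was trained to emit `<fix>` at the end of its answer, you can optionally make generation halt there. A small sketch, assuming `<fix>` exists as a single token in the adapter's tokenizer (the resize step above suggests it does):

```python
# Stop generation at the model's <fix> end-of-response marker.
fix_id = tokenizer.convert_tokens_to_ids("<fix>")

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=64,
        eos_token_id=fix_id,  # halt as soon as <fix> is produced
    )
```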

## 📉 Training and Validation Loss

| Step | Training Loss | Validation Loss |
|---|---|---|
| 500 | 2.4877 | 2.5578 |
| 1000 | 2.3317 | 2.4297 |
| 1500 | 2.2257 | 2.3402 |
| 2000 | 2.1642 | 2.2790 |
| 2500 | 2.1078 | 2.2513 |
| 3000 | 2.1189 | 2.2300 |

Both training and validation loss decline steadily, indicating effective learning with no sign of overfitting, but the curves also suggest that three epochs were not enough to reach convergence. Both metrics were still improving at the end of training, so additional epochs could further reduce the loss and improve generalization. This is especially relevant for instruction-tuned small language models like Gemma 3 (1B), which typically benefit from longer fine-tuning on domain-specific or low-resource languages such as Kazakh. Extending training somewhat, while monitoring for plateauing or divergence, could yield a more refined and performant model (I hope so).


## 🧾 Prompt Format

```
Correct this Kazakh sentence.
Input: Ол досм еді
Output: Ол досым еді<fix>
```
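When post-processing raw generations, the corrected sentence is whatever appears between `Output:` and the `<fix>` marker. A minimal helper along these lines (the `extract_correction` name is illustrative, not part of the repo):

```python
def extract_correction(generated: str) -> str:
    """Pull the corrected sentence out of a raw model completion."""
    # Keep only the text after the last "Output:" if the prompt was echoed
    if "Output:" in generated:
        generated = generated.rsplit("Output:", 1)[1]
    # Trim everything from the <fix> end-of-response marker onward
    return generated.split("<fix>", 1)[0].strip()

print(extract_correction("Output: Ол досым еді<fix>"))  # "Ол досым еді"
```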

## 🔗 Credits

Developed by @alphazhan
