# BioMistral-7B Fine-Tuned (QLoRA)
This is a LoRA fine-tuned version of BioMistral/BioMistral-7B, trained using Unsloth for efficient supervised fine-tuning (SFT) on medical question-answer data.
It is optimized for clinical reasoning and healthcare-related text generation.
## Model Details
- Base model: BioMistral/BioMistral-7B
- Fine-tuning method: QLoRA (4-bit) via PEFT
- Library: Transformers + TRL + Unsloth
- Language: English (medical domain)
- Intended use: Assistive medical Q&A (non-diagnostic, educational purposes only)
## Example Usage
```python
from transformers import AutoTokenizer, pipeline

model_path = "PradeepBodhi/BioMistral-7b_Fine-Tuned-QLoRA"

tokenizer = AutoTokenizer.from_pretrained(model_path)
pipe = pipeline("text-generation", model=model_path, tokenizer=tokenizer)

# Build the prompt with the model's chat template
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "I have a sore throat and fever. What should I do?"}],
    tokenize=False,
    add_generation_prompt=True,
)

output = pipe(prompt, max_new_tokens=1024, do_sample=True, temperature=0.7)

# Strip the echoed prompt and print only the model's reply
print(output[0]["generated_text"][len(prompt):].strip())
```
## Training
- Technique: Supervised fine-tuning (SFT) with LoRA adapters
- Quantization: 4-bit QLoRA
- Framework versions: PEFT 0.16.0, Transformers 4.55.0, Unsloth 2025.8.4
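SFT on question-answer data typically means flattening each QA pair into a single training string in the base model's chat format. A minimal sketch of that preprocessing step (the `question`/`answer` field names and the example record are hypothetical, not taken from the actual training set; the `[INST]` template is the standard Mistral instruction format, which BioMistral-7B inherits):

```python
def format_qa(example: dict) -> str:
    """Render one QA pair in Mistral's instruction format.

    Field names 'question'/'answer' are assumed for illustration.
    """
    return f"<s>[INST] {example['question']} [/INST] {example['answer']}</s>"

# Hypothetical record, purely for demonstration
records = [
    {"question": "What is tachycardia?",
     "answer": "A resting heart rate above 100 beats per minute."},
]
texts = [format_qa(r) for r in records]
```

Strings in this shape would then be fed to a TRL `SFTTrainer` with the LoRA adapters attached.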