# Korean Patent QA - LLaMA-3 QLoRA Adapter

## Model Description
This is a QLoRA adapter for LLaMA-3-Korean-Bllossom-8B, fine-tuned on 8,502 question-answer pairs drawn from Korean patent and trademark decision documents issued by the Korean Intellectual Property Tribunal. The model specializes in answering questions about Korean patent law, trademark law, and design rights, reaching 81% token accuracy on a held-out validation set.
## Key Features

- 🎯 81% Token Accuracy on the validation set
- 📉 Low Entropy (0.665), indicating confident predictions
- 💾 Memory Efficient using QLoRA (4-bit quantization)
- 🏛️ Legal Domain Expertise in intellectual property law
- 🇰🇷 Korean Language, optimized for Korean legal terminology
## Model Details

### Base Model
- Model: MLP-KTLim/llama-3-Korean-Bllossom-8B
- Architecture: LLaMA-3
- Parameters: 8 Billion
- Language: Korean
### Training
- Method: QLoRA (Quantized Low-Rank Adaptation)
- Quantization: 4-bit (NF4) with double quantization
- LoRA Rank: 16
- LoRA Alpha: 32
- Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Training Time: ~7.5 hours (3 epochs)
- GPU: NVIDIA GPU with CUDA support
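
The adapter configuration can be reproduced with `peft` roughly as follows. This is a sketch based on the values listed here and in the Training Hyperparameters section below, not the original training script:

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA applied to all attention and MLP projection layers
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,  # value taken from the Training Hyperparameters section
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```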
### Dataset
- Source: Korean Intellectual Property Tribunal Decision Documents
- Size: 8,502 QA pairs
- Split: 90% train (7,651), 10% validation (851)
- Domain: Patent Law, Trademark Law, Design Rights
- Format: Question-Answer pairs with legal citations
- Language: Korean
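
The exact schema is not published with the adapter; a hypothetical record illustrating the question-answer-plus-citation structure might look like this (field names and contents are illustrative only):

```python
# Hypothetical example record; not the released schema.
example = {
    "question": "상표권의 보호기간은 얼마나 되나요?",  # How long is trademark protection?
    "answer": "상표권의 존속기간은 설정등록일부터 10년이며, 갱신할 수 있습니다.",  # 10 years from registration, renewable
    "citation": "상표법 제83조",       # illustrative citation to the Trademark Act
    "source": "특허심판원 심결문",      # Korean Intellectual Property Tribunal decision document
}
```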
## Performance
| Metric | Value | Description |
|---|---|---|
| Token Accuracy | 81.0% | Percentage of correctly predicted tokens |
| Validation Loss | 0.651 | Cross-entropy loss on validation set |
| Entropy | 0.665 | Low entropy indicates confident predictions |
| Training Loss | 0.494 | Final training loss (epoch 3) |
### Training Curve
The model showed consistent improvement across 3 epochs:
- Epoch 1: Loss 0.777 → 0.612, Accuracy 78.2% → 80.2%
- Epoch 2: Loss 0.589 → 0.480, Accuracy 80.9% → 81.0%
- Epoch 3: Loss stabilized at 0.494, Accuracy maintained at 81.0%
## Usage

### Installation

```bash
pip install torch transformers peft bitsandbytes accelerate
```

### Quick Start
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Configuration for 4-bit quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load base model
base_model_name = "MLP-KTLim/llama-3-Korean-Bllossom-8B"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "tree193nn/patent-qa-llama3-qlora")
model.eval()

# Inference
question = "상표권의 보호기간은 얼마나 되나요?"  # How long is trademark protection?
prompt = f"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>
당신은 지식재산권법 전문가입니다. 특허 및 상표 관련 법률 질문에 대해 정확하고 상세한 답변을 제공해주세요.<|eot_id|><|start_header_id|>user<|end_header_id|>
{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer)
```
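
Alternatively, if the tokenizer exposes a Llama-3 chat template (expected for this base model, but worth verifying for your transformers version), the prompt can be built with `apply_chat_template` instead of hand-writing the special tokens. A minimal sketch:

```python
# Minimal sketch, assuming the tokenizer ships a Llama-3 chat template.
messages = [
    {
        "role": "system",
        "content": "당신은 지식재산권법 전문가입니다. 특허 및 상표 관련 법률 질문에 대해 정확하고 상세한 답변을 제공해주세요.",
    },
    {"role": "user", "content": question},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so generation continues as the answer
    return_tensors="pt",
).to(model.device)
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```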
### Example Questions

```python
questions = [
    "상표권의 보호기간은 얼마나 되나요?",      # How long is trademark protection?
    "특허 출원 시 필요한 서류는 무엇인가요?",  # What documents are needed for patent filing?
    "디자인권과 저작권의 차이는 무엇인가요?",  # What's the difference between design rights and copyright?
]
```
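
A small helper to run the list above with the same generation settings as the Quick Start example (illustrative only; the `ask` function is not part of the released code):

```python
def ask(question: str, max_new_tokens: int = 256) -> str:
    """Build the Llama-3 style prompt used above and return only the generated answer."""
    prompt = (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        "당신은 지식재산권법 전문가입니다. 특허 및 상표 관련 법률 질문에 대해 "
        "정확하고 상세한 답변을 제공해주세요.<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n"
        f"{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        temperature=0.7,
        top_p=0.9,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Strip the prompt tokens so only the newly generated answer remains.
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)

for q in questions:
    print(q)
    print(ask(q))
    print("-" * 40)
```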
## Intended Use

### Primary Use Cases

- ✅ Question Answering about Korean intellectual property law
- ✅ Legal Research assistance for patent and trademark matters
- ✅ Educational Tool for learning Korean IP law
- ✅ Information Retrieval from patent decision documents

### Out-of-Scope Use

- ❌ Legal Advice: This model should NOT be used as a substitute for professional legal counsel
- ❌ Official Decisions: Outputs are not legally binding
- ❌ Non-Korean Languages: Optimized for Korean only
- ❌ General Purpose QA: Specializes in IP law; may not perform well on general topics
## Limitations

### Known Limitations
- Temporal Limitation: Training data is from before 2024; recent legal changes may not be reflected
- Domain Specificity: Performs best on patent/trademark law; limited on other legal areas
- Language: Optimized for Korean; English or other languages not supported
- Hallucination Risk: May generate plausible but incorrect legal interpretations
- Context Length: Limited to 2048 tokens due to training configuration
### Bias and Fairness
- Training Data Bias: Reflects biases present in Korean Intellectual Property Tribunal decisions
- Geographic Focus: Specific to Korean law; not applicable to other jurisdictions
- Language Bias: Korean legal terminology heavily featured
## Ethical Considerations

⚠️ Important Notice: This model is intended for research and educational purposes only. Users should:
- Verify Information: Always cross-reference with official legal sources
- Seek Professional Advice: Consult qualified legal professionals for actual cases
- Understand Limitations: Recognize the model's domain and temporal constraints
- Use Responsibly: Do not use for misleading or fraudulent purposes
## Technical Specifications

### Hardware Requirements
Minimum:
- GPU: 16GB VRAM (e.g., NVIDIA RTX 4080, A4000)
- RAM: 32GB
- Storage: 20GB (base model + adapter)
Recommended:
- GPU: 24GB+ VRAM (e.g., NVIDIA RTX 4090, A5000, A6000)
- RAM: 64GB
- Storage: 30GB
### Software Requirements
- Python 3.8+
- PyTorch 2.0+
- Transformers 4.38.0+
- PEFT 0.8.0+
- BitsAndBytes 0.42.0+
- CUDA 11.8+ or 12.0+
### Training Hyperparameters

```yaml
# QLoRA Configuration
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
quantization: 4-bit (NF4)

# Training Configuration
epochs: 3
batch_size: 2  # per device
gradient_accumulation_steps: 8
effective_batch_size: 16
learning_rate: 2e-4
lr_scheduler: cosine
warmup_ratio: 0.03
weight_decay: 0.01
optimizer: paged_adamw_32bit
max_seq_length: 2048
```
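
For reference, a rough translation of these values into Hugging Face `TrainingArguments` (a sketch, not the exact training script; argument names assume a recent transformers release):

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="patent-qa-llama3-qlora",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,   # effective batch size 2 * 8 = 16
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    weight_decay=0.01,
    optim="paged_adamw_32bit",       # paged AdamW from bitsandbytes
    bf16=True,                       # assumption: matches bnb_4bit_compute_dtype=bfloat16
    logging_steps=10,                # illustrative logging interval
)
```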
## Evaluation

### Metrics Explanation
- Token Accuracy (81%): The fraction of tokens (words/subwords) whose top-1 prediction matches the ground truth under teacher forcing
- Validation Loss (0.651): Average cross-entropy between the predicted distribution and the ground-truth tokens on the validation set; lower is better
- Entropy (0.665): Low average entropy of the predictive distribution means the model makes confident predictions rather than spreading probability across many tokens
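
As a sketch of how these teacher-forced metrics can be computed from model logits (not necessarily the evaluation code used for the numbers above; the function name is illustrative):

```python
import torch
import torch.nn.functional as F

def token_metrics(logits: torch.Tensor, labels: torch.Tensor):
    """Teacher-forced token accuracy, cross-entropy loss, and mean predictive entropy.

    logits: [batch, seq_len, vocab]; labels: [batch, seq_len] with -100 on ignored positions.
    """
    # Shift so position t predicts token t+1, as in causal LM training.
    logits = logits[:, :-1, :]
    labels = labels[:, 1:]
    mask = labels != -100

    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), labels.reshape(-1), ignore_index=-100
    )

    preds = logits.argmax(dim=-1)
    accuracy = (preds[mask] == labels[mask]).float().mean()

    # Entropy of the predictive distribution, averaged over supervised tokens.
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)[mask].mean()

    return accuracy.item(), loss.item(), entropy.item()
```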
### Comparison to Base Model

The fine-tuned adapter shows qualitative improvements over the base model on legal-domain prompts:
- Better understanding of Korean legal terminology
- More accurate citations of relevant laws and regulations
- Reduced hallucination on IP law topics
## Citation

If you use this model in your research, please cite:

```bibtex
@misc{patent-qa-llama3-qlora,
  author       = {tree193nn},
  title        = {Korean Patent QA with LLaMA-3 QLoRA},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/tree193nn/patent-qa-llama3-qlora}},
  note         = {QLoRA adapter for Korean patent and trademark law question answering}
}
```
## License

- Base Model: Meta Llama 3 Community License
- Adapter: MIT License
- Training Data: Public Korean Intellectual Property Tribunal documents
## Acknowledgments

- Base Model: MLP-KTLim/llama-3-Korean-Bllossom-8B
- Training Method: QLoRA by Dettmers et al. ([paper](https://arxiv.org/abs/2305.14314))
- Data Source: Korean Intellectual Property Tribunal
## Contact
- GitHub: CocoaSoymilk/patent-qa-llama3-qlora
- Hugging Face: tree193nn
## Version History

### v1.0.0 (2024-12-07)
- Initial release
- Fine-tuned on 8,502 QA pairs from Korean patent and trademark decision documents
- Achieved 81% token accuracy
- QLoRA adapter with 4-bit quantization