Korean Patent QA - LLaMA-3 QLoRA Adapter

Model Description

This is a QLoRA adapter for LLaMA-3-Korean-Bllossom-8B, fine-tuned on 8,502 Korean patent and trademark decision documents from the Korean Intellectual Property Tribunal. The model specializes in answering questions about Korean patent law, trademark law, and design rights, reaching 81% token accuracy on a held-out validation set.

Key Features

  • 🎯 81% Token Accuracy on validation set
  • 📉 Low Entropy (0.665) indicating confident predictions
  • 💾 Memory Efficient using QLoRA (4-bit quantization)
  • 🏛️ Legal Domain Expertise in intellectual property law
  • 🇰🇷 Korean Language optimized for Korean legal terminology

Model Details

Base Model

  • MLP-KTLim/llama-3-Korean-Bllossom-8B

Training

  • Method: QLoRA (Quantized Low-Rank Adaptation)
  • Quantization: 4-bit (NF4) with double quantization
  • LoRA Rank: 16
  • LoRA Alpha: 32
  • Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
  • Training Time: ~7.5 hours (3 epochs)
  • GPU: NVIDIA GPU with CUDA support
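
For reference, the settings above map roughly onto the following peft/bitsandbytes configuration. This is a minimal sketch reconstructed from the listed values (dropout taken from the Training Hyperparameters section below), not the original training script.

import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization with double quantization, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA on all attention and MLP projection matrices
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)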

Dataset

  • Source: Korean Intellectual Property Tribunal Decision Documents
  • Size: 8,502 QA pairs
  • Split: 90% train (7,651), 10% validation (851)
  • Domain: Patent Law, Trademark Law, Design Rights
  • Format: Question-Answer pairs with legal citations
  • Language: Korean

Performance

| Metric | Value | Description |
|---|---|---|
| Token Accuracy | 81.0% | Percentage of correctly predicted tokens |
| Validation Loss | 0.651 | Cross-entropy loss on the validation set |
| Entropy | 0.665 | Low entropy indicates confident predictions |
| Training Loss | 0.494 | Final training loss (epoch 3) |

Training Curve

The model showed consistent improvement across 3 epochs:

  • Epoch 1: Loss 0.777 → 0.612, Accuracy 78.2% → 80.2%
  • Epoch 2: Loss 0.589 → 0.480, Accuracy 80.9% → 81.0%
  • Epoch 3: Loss stabilized at 0.494, Accuracy maintained at 81.0%

Usage

Installation

pip install torch transformers peft bitsandbytes accelerate

Quick Start

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Configuration for 4-bit quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

# Load base model
base_model_name = "MLP-KTLim/llama-3-Korean-Bllossom-8B"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "tree193nn/patent-qa-llama3-qlora")
model.eval()

# Inference
question = "상표권의 보호기간은 얼마나 되나요?"  # How long is trademark protection?

# Llama-3 chat template with a Korean system prompt:
# "You are an intellectual property law expert. Please provide accurate and
#  detailed answers to legal questions about patents and trademarks."
prompt = f"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>

당신은 지식재산권법 전문가입니다. 특허 및 상표 관련 법률 질문에 대해 정확하고 상세한 답변을 제공해주세요.<|eot_id|><|start_header_id|>user<|end_header_id|>

{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)  # decode only the generated answer, not the prompt
print(answer)
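
The prompt above writes out the Llama-3 chat template by hand. If the tokenizer ships the standard Llama-3 chat template (an assumption; check tokenizer.chat_template), the same prompt can be built more robustly with apply_chat_template:

# Equivalent prompt construction via the tokenizer's chat template
messages = [
    {"role": "system", "content": "당신은 지식재산권법 전문가입니다. 특허 및 상표 관련 법률 질문에 대해 정확하고 상세한 답변을 제공해주세요."},
    {"role": "user", "content": question},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))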

Example Questions

questions = [
    "์ƒํ‘œ๊ถŒ์˜ ๋ณดํ˜ธ๊ธฐ๊ฐ„์€ ์–ผ๋งˆ๋‚˜ ๋˜๋‚˜์š”?",  # How long is trademark protection?
    "ํŠนํ—ˆ ์ถœ์› ์‹œ ํ•„์š”ํ•œ ์„œ๋ฅ˜๋Š” ๋ฌด์—‡์ธ๊ฐ€์š”?",  # What documents are needed for patent filing?
    "๋””์ž์ธ๊ถŒ๊ณผ ์ €์ž‘๊ถŒ์˜ ์ฐจ์ด๋Š” ๋ฌด์—‡์ธ๊ฐ€์š”?",  # What's the difference between design rights and copyright?
]
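
A small helper that runs each of these questions through the same prompt format as the Quick Start example; it assumes model and tokenizer are already loaded as shown above (an illustrative sketch, not part of the released code):

def ask(question: str, max_new_tokens: int = 256) -> str:
    # Same system prompt and Llama-3 template as in the Quick Start section
    prompt = (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        "당신은 지식재산권법 전문가입니다. 특허 및 상표 관련 법률 질문에 대해 "
        "정확하고 상세한 답변을 제공해주세요.<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=max_new_tokens,
                                 temperature=0.7, top_p=0.9, do_sample=True,
                                 pad_token_id=tokenizer.eos_token_id)
    # Return only the newly generated answer tokens
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

for q in questions:
    print(q)
    print(ask(q))
    print("-" * 60)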

Intended Use

Primary Use Cases

✅ Question Answering about Korean intellectual property law
✅ Legal Research assistance for patent and trademark matters
✅ Educational Tool for learning Korean IP law
✅ Information Retrieval from patent decision documents

Out-of-Scope Use

โŒ Legal Advice: This model should NOT be used as a substitute for professional legal counsel โŒ Official Decisions: Outputs are not legally binding โŒ Non-Korean Languages: Optimized for Korean only โŒ General Purpose QA: Specializes in IP law, may not perform well on general topics

Limitations

Known Limitations

  1. Temporal Limitation: Training data is from before 2024; recent legal changes may not be reflected
  2. Domain Specificity: Performs best on patent/trademark law; limited on other legal areas
  3. Language: Optimized for Korean; English or other languages not supported
  4. Hallucination Risk: May generate plausible but incorrect legal interpretations
  5. Context Length: Limited to 2048 tokens due to training configuration

Bias and Fairness

  • Training Data Bias: Reflects biases present in Korean Intellectual Property Tribunal decisions
  • Geographic Focus: Specific to Korean law; not applicable to other jurisdictions
  • Language Bias: Korean legal terminology heavily featured

Ethical Considerations

โš ๏ธ Important Notice: This model is intended for research and educational purposes only. Users should:

  • Verify Information: Always cross-reference with official legal sources
  • Seek Professional Advice: Consult qualified legal professionals for actual cases
  • Understand Limitations: Recognize the model's domain and temporal constraints
  • Use Responsibly: Do not use for misleading or fraudulent purposes

Technical Specifications

Hardware Requirements

Minimum:

  • GPU: 16GB VRAM (e.g., NVIDIA RTX 4080, A4000)
  • RAM: 32GB
  • Storage: 20GB (base model + adapter)

Recommended:

  • GPU: 24GB+ VRAM (e.g., NVIDIA RTX 4090, A5000, A6000)
  • RAM: 64GB
  • Storage: 30GB

Software Requirements

  • Python 3.8+
  • PyTorch 2.0+
  • Transformers 4.38.0+
  • PEFT 0.8.0+
  • BitsAndBytes 0.42.0+
  • CUDA 11.8+ or 12.0+

Training Hyperparameters

# QLoRA Configuration
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
quantization: 4-bit (NF4)

# Training Configuration
epochs: 3
batch_size: 2 (per device)
gradient_accumulation_steps: 8
effective_batch_size: 16
learning_rate: 2e-4
lr_scheduler: cosine
warmup_ratio: 0.03
weight_decay: 0.01
optimizer: paged_adamw_32bit
max_seq_length: 2048
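
As a rough sketch, these values map onto Hugging Face TrainingArguments as shown below (argument names assume a recent transformers release; max_seq_length is handled by the SFT trainer / tokenization step rather than TrainingArguments, and the original training script may differ):

from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters above; not the original script
training_args = TrainingArguments(
    output_dir="patent-qa-llama3-qlora",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,   # effective batch size = 2 x 8 = 16
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    weight_decay=0.01,
    optim="paged_adamw_32bit",
    bf16=True,                       # assumed from the bfloat16 compute dtype used above
)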

Evaluation

Metrics Explanation

  • Token Accuracy (81%): fraction of tokens (words/subwords) in the reference answers that the model predicts correctly
  • Validation Loss (0.651): average cross-entropy between predicted and reference tokens on the validation set; lower is better
  • Entropy (0.665): mean entropy of the model's predictive token distributions; low entropy means confident rather than uncertain predictions
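
For reference, token accuracy and mean prediction entropy can be computed from a causal LM's logits roughly as follows; this is an illustrative sketch, not the exact evaluation code behind the numbers above:

import torch
import torch.nn.functional as F

def token_metrics(logits: torch.Tensor, labels: torch.Tensor, ignore_index: int = -100):
    """Token accuracy and mean per-token entropy for one batch.

    logits: (batch, seq_len, vocab_size) raw model outputs
    labels: (batch, seq_len) target ids, with ignore_index on prompt/padding positions
    """
    # Shift so that position t predicts token t+1, as in causal-LM training
    logits, labels = logits[:, :-1, :], labels[:, 1:]
    mask = labels != ignore_index

    # Accuracy: fraction of non-ignored positions where the argmax matches the label
    preds = logits.argmax(dim=-1)
    accuracy = (preds[mask] == labels[mask]).float().mean()

    # Entropy of the predictive distribution at each position (in nats)
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)
    mean_entropy = entropy[mask].mean()

    return accuracy.item(), mean_entropy.item()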

Comparison to Base Model

Relative to the base model, the fine-tuned adapter shows qualitative improvements in the legal domain:

  • Better understanding of Korean legal terminology
  • More accurate citations of relevant laws and regulations
  • Reduced hallucination on IP law topics

Citation

If you use this model in your research, please cite:

@misc{patent-qa-llama3-qlora,
  author = {tree193nn},
  title = {Korean Patent QA with LLaMA-3 QLoRA},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/tree193nn/patent-qa-llama3-qlora}},
  note = {QLoRA adapter for Korean patent and trademark law question answering}
}

License

  • Base Model: Meta Llama 3 Community License
  • Adapter: MIT License
  • Training Data: Public Korean Intellectual Property Tribunal documents

Acknowledgments

Contact

Version History

v1.0.0 (2024-12-07)

  • Initial release
  • Fine-tuned on 8,502 Korean patent decision documents
  • Achieved 81% token accuracy
  • QLoRA adapter with 4-bit quantization
