FinBERT-LoRA-SentFin
This model is a Parameter-Efficient Fine-Tuning (PEFT) adaptation of FinBERT for financial sentiment analysis, fine-tuned with Low-Rank Adaptation (LoRA) on the SentFin 1.0 dataset.
Model Details
Model Description
The FinBERT-LoRA-SentFin adapter adds lightweight low-rank weight updates to the attention layers of the base FinBERT model. Because the base weights remain frozen, the model retains the financial domain knowledge of the original ProsusAI/finbert while adapting to the nuances of the SentFin 1.0 corpus.
- Developed by: tahp0604
- Model type: PEFT / LoRA (Low-Rank Adaptation)
- Base Model: ProsusAI/finbert
- Language(s): English
- License: MIT
- Finetuned from model: ProsusAI/finbert
- Dataset: SentFin 1.0
Loading and Usage
To use this adapter, you must load the base model first and then attach the adapter.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel
import torch

# 1. Define IDs
base_model_id = "ProsusAI/finbert"
adapter_model_id = "tahp0604/finbert-sentfin-lora"  # or local path

# 2. Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# 3. Load base model
base_model = AutoModelForSequenceClassification.from_pretrained(base_model_id)

# 4. Load and attach the PEFT adapter
model = PeftModel.from_pretrained(base_model, adapter_model_id)

# Example inference
text = "Reliance fell 2.1% today amid refinery outage concerns."
inputs = tokenizer(text, return_tensors="pt")

model.eval()
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax(dim=-1).item()

# Map to label (ProsusAI/finbert label order)
labels = {0: "positive", 1: "negative", 2: "neutral"}
print(f"Sentiment: {labels[predicted_class_id]}")
```
Training Details
Training Data
The model was trained on the SentFin 1.0 dataset, which consists of over 10,000 financial news headlines annotated by domain experts for entity-level sentiment.
Training Procedure
PEFT / LoRA Configuration
- LoRA Rank (r): 8
- LoRA Alpha ($\alpha$): 16
- Target Modules: query, value (attention layers)
- LoRA Dropout: 0.1
- Task Type: Sequence Classification (SEQ_CLS)
- Modules to Save: classifier, score
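In peft terms, the settings above correspond to a LoraConfig along these lines. This is a reconstruction from the listed hyperparameters, not the original training script:
```python
from peft import LoraConfig, TaskType

# Reconstructed from the hyperparameters above; not the published training code
lora_config = LoraConfig(
    r=8,                                      # LoRA rank
    lora_alpha=16,                            # scaling factor (alpha)
    target_modules=["query", "value"],        # BERT attention projections
    lora_dropout=0.1,
    task_type=TaskType.SEQ_CLS,               # sequence classification
    modules_to_save=["classifier", "score"],  # heads trained in full
)
```
During training, the adapter would then be attached with `get_peft_model(base_model, lora_config)`.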
Training Hyperparameters
- Training regime: FP16 Mixed Precision
- Optimizer: AdamW
- Batch Size: 32 (suggested)
- Learning Rate: 2e-4 (recommended for LoRA)
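Assuming the standard transformers Trainer was used (the card does not say), these hyperparameters map onto TrainingArguments roughly as follows; the output directory and epoch count are illustrative placeholders:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finbert-sentfin-lora",  # placeholder path
    per_device_train_batch_size=32,     # suggested batch size
    learning_rate=2e-4,                 # typical LoRA learning rate
    fp16=True,                          # FP16 mixed precision
    optim="adamw_torch",                # AdamW optimizer
    num_train_epochs=3,                 # placeholder; not reported in the card
)
```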
Evaluation
Testing Data
Evaluated on a 20% hold-out split of the SentFin 1.0 dataset.
Metrics
- Accuracy: Primary metric for classification.
- F1 Score: Weighted F1 score to account for class distribution.
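Both metrics are standard and can be reproduced with scikit-learn; a minimal sketch, where y_true and y_pred stand in for the hold-out labels and model predictions:
```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder label ids; in practice these come from the 20% hold-out split
y_true = [0, 1, 2, 1, 0]
y_pred = [0, 1, 2, 2, 0]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Weighted F1:", f1_score(y_true, y_pred, average="weighted"))  # weighted by class support
```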
Technical Specifications
Model Architecture
- Backbone: BERT-base-uncased (FinBERT)
- PEFT Method: LoRA
- Learnable Parameters: ~0.3M (approx. 0.27% of base model)
- Total Parameters: ~110M
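These counts can be verified on the loaded adapter: peft's PeftModel exposes print_trainable_parameters(), which reports trainable versus total parameters.
```python
# `model` is the PeftModel from the "Loading and Usage" section above
model.print_trainable_parameters()
# Prints trainable params, total params, and the trainable percentage
```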
Model Card Contact
tahp0604
Framework versions
- PEFT 0.18.1
- Transformers 4.x
- PyTorch 2.x