# fairness_lora

Fairness-aware LoRA adapter for resume–job matching, built on top of BAAI/bge-large-en-v1.5. The adapter was trained with adversarial debiasing and multi-task objectives to reduce group disparities while maintaining matching utility.
## Usage
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
from peft import PeftModel

BASE = "BAAI/bge-large-en-v1.5"
ADAPTER = "fairness_lora"

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModel.from_pretrained(BASE)
model = PeftModel.from_pretrained(base_model, ADAPTER)
model.eval()

def encode(text):
    enc = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=256)
    with torch.no_grad():
        out = model(**enc)
    # Mean-pool over non-padding tokens only, then L2-normalize
    mask = enc["attention_mask"].unsqueeze(-1).float()
    pooled = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
    return F.normalize(pooled, p=2, dim=1)

# Encode two texts and compute cosine similarity
resume = "Software engineer with Python experience."
job = "Hiring backend Python developer."
cos = (encode(resume) * encode(job)).sum(dim=1).item()
prob = torch.sigmoid(torch.tensor(cos)).item()
print({"cosine": cos, "prob": prob})
```
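In practice, one resume is typically scored against many postings at once. A minimal sketch of that ranking step, using toy 3-dimensional unit vectors in place of real `encode` outputs (the `cosine` and `rank_jobs` helpers are illustrative, not part of the adapter):

```python
def cosine(u, v):
    # The dot product equals cosine similarity when both vectors are
    # L2-normalized, as the embeddings returned by encode() are.
    return sum(a * b for a, b in zip(u, v))

def rank_jobs(resume_emb, job_embs):
    # Return (job_index, score) pairs sorted by descending similarity.
    scores = [cosine(resume_emb, j) for j in job_embs]
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [(i, scores[i]) for i in order]

resume_emb = [1.0, 0.0, 0.0]  # toy stand-in for encode(resume)[0]
job_embs = [
    [0.8, 0.6, 0.0],  # partially related posting
    [1.0, 0.0, 0.0],  # identical direction: best match
    [0.0, 1.0, 0.0],  # orthogonal: unrelated posting
]
print(rank_jobs(resume_emb, job_embs))  # best match (index 1) ranked first
```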
## Training notes

- Base model: BAAI/bge-large-en-v1.5
- Technique: LoRA + adversarial debiasing + multi-task attribute prediction
- Objective: lower demographic parity / equalized odds gaps while preserving accuracy/AUC
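The two gap metrics named above can be computed directly from binary predictions. A minimal sketch, assuming binary 0/1 encodings for predictions, labels, and group membership (function names are illustrative):

```python
def selection_rate(preds, mask):
    # Fraction of positive predictions among the rows selected by mask.
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel)

def demographic_parity_gap(preds, groups):
    # |P(yhat=1 | g=0) - P(yhat=1 | g=1)|
    return abs(
        selection_rate(preds, [g == 0 for g in groups])
        - selection_rate(preds, [g == 1 for g in groups])
    )

def equalized_odds_gap(preds, labels, groups):
    # Max over y in {0, 1} of the gap in P(yhat=1 | g, y) between groups,
    # i.e. the larger of the FPR gap (y=0) and the TPR gap (y=1).
    gaps = []
    for y in (0, 1):
        mask0 = [g == 0 and l == y for g, l in zip(groups, labels)]
        mask1 = [g == 1 and l == y for g, l in zip(groups, labels)]
        gaps.append(abs(selection_rate(preds, mask0) - selection_rate(preds, mask1)))
    return max(gaps)

preds  = [1, 1, 0, 0]
labels = [1, 0, 1, 0]
groups = [0, 0, 1, 1]
print(demographic_parity_gap(preds, groups))      # 1.0: group 0 is always selected
print(equalized_odds_gap(preds, labels, groups))  # 1.0: TPR and FPR both differ fully
```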
## License

- Please ensure that the license of the base model BAAI/bge-large-en-v1.5 allows derivative adapters.
- Provide your own dataset and usage terms accordingly.
## Framework versions
- PEFT 0.17.1