# Model Card for Nigerian KnowYourRightBot Chatbot

This is a fine-tuned version of the unsloth/tinyllama-chat-bnb-4bit model, specifically adapted for text generation related to Nigerian law. It was trained using the QLoRA method with Unsloth on a custom dataset. The goal of this project is to create a small, efficient legal chatbot that can run on consumer-grade hardware.

## Model Details

### Model Description

This model is a 4-bit quantized version of the TinyLlama 1.1B model, fine-tuned to act as an informative chatbot on Nigerian legal topics. It leverages the efficiency of the Unsloth framework and Parameter-Efficient Fine-Tuning (PEFT) to achieve a specialized knowledge base while keeping the model small and fast. The aim is to provide an accessible and efficient AI tool for informational purposes on legal subjects within a Nigerian context.
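As a rough illustration of why PEFT keeps fine-tuning cheap: LoRA-style adapters (the family of methods QLoRA belongs to) freeze the base weights and learn only a small low-rank update. The NumPy sketch below uses illustrative shapes, not the model's actual dimensions, to show how small the trainable fraction is.

```python
import numpy as np

# LoRA sketch: a frozen weight W is adapted as W + (alpha/r) * B @ A,
# where A and B are small trainable matrices of rank r.
# Shapes are illustrative, not TinyLlama's actual dimensions.
d_out, d_in, r = 2048, 2048, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, rank r
B = np.zeros((d_out, r))                    # trainable, initialized to zero
alpha = 32

# Effective weight at inference time
W_adapted = W + (alpha / r) * B @ A

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.2%}")  # prints "trainable fraction: 1.56%"
```

Only `A` and `B` are updated during training, which is what allows a 1.1B-parameter model to be fine-tuned on consumer-grade hardware.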

- **Developed by:** menikev (Lawrence Emenike)
- **Model type:** Fine-tuned causal language model
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

This model is intended to be used directly for generating human-like text responses to legal questions about Nigerian law.

### Downstream Use

This model can be integrated into larger applications or chatbots to provide a legal knowledge component. It is suitable for use in educational platforms, legal tech research, or as a starting point for further fine-tuning on more specific legal subdomains.

### Out-of-Scope Use

This model should not be used as a substitute for a qualified lawyer or for critical legal advice. It is an AI tool for informational purposes only. The information it provides may contain inaccuracies or hallucinations. It should not be used for making legal decisions, filing legal documents, or any other activity requiring professional legal expertise.

## Bias, Risks, and Limitations

- **Hallucinations:** The model may generate incorrect or fabricated information, especially for topics outside its training data.
- **Data Bias:** The model's knowledge and perspective are limited to the legal documents it was trained on. It may not reflect recent legal changes, and its responses may reproduce biases present in the original texts.
- **Limited Scope:** The model's expertise is confined to the specific Nigerian legal acts it was trained on (e.g., the Nigerian Constitution and the FCCPA). It will perform poorly on legal topics from other jurisdictions and on unrelated subjects.

## How to Get Started with the Model

Use the code below to get started with the model. It loads the base model and then applies the fine-tuned adapter weights on top.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Define the base model and the fine-tuned adapter's Hub ID
base_model_id = "unsloth/tinyllama-chat-bnb-4bit"
adapter_id = "menikev/nigerian-legal-chatbot-tinyllama"

# Load the base model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Apply the PEFT adapter on top of the base model
model = PeftModel.from_pretrained(model, adapter_id)

# Example inference (use model.device so this also works on CPU)
prompt = "What are the rights of an employee in Nigeria?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
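Because the base model is a chat model, responses are usually better when the prompt follows its chat template; `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` does this automatically. The small helper below only illustrates the Zephyr-style layout that TinyLlama-Chat expects — the system message text is an assumption for illustration.

```python
# Zephyr-style chat layout used by TinyLlama-Chat. In practice, prefer
# tokenizer.apply_chat_template(messages, add_generation_prompt=True),
# which renders the same structure from a list of role/content messages.
def format_chat(
    user_message: str,
    system_message: str = "You are a helpful assistant on Nigerian law.",
) -> str:
    return (
        f"<|system|>\n{system_message}</s>\n"
        f"<|user|>\n{user_message}</s>\n"
        f"<|assistant|>\n"
    )

prompt = format_chat("What are the rights of an employee in Nigeria?")
print(prompt)
```

The trailing `<|assistant|>` turn cues the model to generate the answer rather than continue the question.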




## Training Details

### Training Data

- The model was fine-tuned on a custom dataset of Nigerian legal documents, including the Nigerian Constitution (1999), the Federal Competition and Consumer Protection Act (FCCPA), the Nigeria Data Protection Act (2023), and the Labour Act. The data was formatted into a chat-style format for instruction fine-tuning.
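The exact dataset schema is not published; a record in a chat-style instruction format typically looks like the hypothetical example below (field names and wording are illustrative assumptions).

```python
# Hypothetical training record in a chat-style instruction format.
# The real dataset schema and contents are not published.
example = {
    "messages": [
        {"role": "user",
         "content": "What does Section 33 of the 1999 Constitution guarantee?"},
        {"role": "assistant",
         "content": "Section 33 of the 1999 Constitution of Nigeria guarantees "
                    "the right to life, subject to the exceptions set out in "
                    "that section."},
    ]
}

# Before tokenization, records like this are usually rendered into a single
# training string with the model's chat template.
roles = [m["role"] for m in example["messages"]]
print(roles)  # prints ['user', 'assistant']
```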


### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]


#### Training Hyperparameters

- The model was fine-tuned using the QLoRA method, as implemented by the Unsloth library. Training was performed on a single T4 GPU.

- **Training regime:** bf16 mixed precision

- **Optimizer:** AdamW
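The full training script and remaining hyperparameters are not published. Using Unsloth's documented API, a QLoRA setup for this base model typically looks like the sketch below; every value here is an illustrative assumption, not the recorded configuration, and it requires a CUDA GPU, so it is shown for orientation only.

```python
# Illustrative Unsloth QLoRA setup. All values are assumptions, not the
# recorded training configuration. Requires a CUDA GPU and `pip install unsloth`.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-chat-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,           # QLoRA: 4-bit quantized base weights
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                        # LoRA rank (illustrative)
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
# The adapter is then trained with a standard trainer (e.g. trl's SFTTrainer)
# using the AdamW optimizer noted above.
```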

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary



## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

If you use this model, please cite both the base model, unsloth/tinyllama-chat-bnb-4bit, and this fine-tuning work.

**BibTeX:**

```bibtex
@misc{nigerian-legal-chatbot-tinyllama,
  author = {menikev},
  title = {Nigerian Legal Chatbot: Fine-Tuned TinyLlama for Legal Queries},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/menikev/nigerian-legal-chatbot-tinyllama}
}
```

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.17.0