Instructions for using Equall/Saul-7B-Base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Equall/Saul-7B-Base with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Equall/Saul-7B-Base")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Equall/Saul-7B-Base")
model = AutoModelForCausalLM.from_pretrained("Equall/Saul-7B-Base")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Equall/Saul-7B-Base with vLLM:
Install from pip and serve the model:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Equall/Saul-7B-Base"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Equall/Saul-7B-Base",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
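Because the server exposes an OpenAI-compatible API, you can also call it from Python with the official openai client instead of curl. A minimal sketch, assuming pip install openai; the api_key is a placeholder, since vLLM does not check it by default:

from openai import OpenAI

# Point the client at the local vLLM server started above
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Equall/Saul-7B-Base",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)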
- SGLang
How to use Equall/Saul-7B-Base with SGLang:
Install from pip and serve the model:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Equall/Saul-7B-Base" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Equall/Saul-7B-Base",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
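The same OpenAI-compatible endpoint can also be queried from Python; a minimal sketch using requests against the server started above (assuming pip install requests):

import requests

# Query the local SGLang server through its OpenAI-compatible endpoint
resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "Equall/Saul-7B-Base",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])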
Use Docker images

docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Equall/Saul-7B-Base" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Equall/Saul-7B-Base",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'

- Docker Model Runner
How to use Equall/Saul-7B-Base with Docker Model Runner:
docker model run hf.co/Equall/Saul-7B-Base
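Note that the chat-style snippets above are auto-generated and assume a chat template. Since Saul-7B-Base is a base model obtained by continued pretraining (see the card below), plain-text completion prompting may be the more reliable pattern. A minimal sketch using standard Transformers APIs; the prompt text is only an illustration:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Equall/Saul-7B-Base")
model = AutoModelForCausalLM.from_pretrained(
    "Equall/Saul-7B-Base", torch_dtype=torch.bfloat16, device_map="auto"
)

# A base model continues a plain-text prompt rather than answering chat messages
prompt = "Under contract law, consideration is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))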
Equall/Saul-Base-v1
This is the base model for Equall/Saul-Base, a large instruct language model tailored to the legal domain. It was obtained by continued pretraining of Mistral-7B.
Model Details
Model Description
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by: Equall.ai in collaboration with CentraleSupelec, Sorbonne Université, Instituto Superior Técnico and NOVA School of Law
- Model type: 7B
- Language(s) (NLP): English
- License: MIT
Model Sources
- Paper: https://arxiv.org/abs/2403.03883
Uses
You can use it for legal use cases that involve text generation.
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="Equall/Saul-Instruct-v1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer’s chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{"role": "user", "content": "[YOUR QUERY GOES HERE]"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
Bias, Risks, and Limitations
This model is built on large language model (LLM) technology, which comes with inherent limitations: it may occasionally generate inaccurate or nonsensical outputs. Furthermore, as a 7B model, it is expected to be less robust than larger models, such as the 70B variant.
Citation
BibTeX:
@misc{colombo2024saullm7b,
title={SaulLM-7B: A pioneering Large Language Model for Law},
author={Pierre Colombo and Telmo Pessoa Pires and Malik Boudiaf and Dominic Culver and Rui Melo and Caio Corro and Andre F. T. Martins and Fabrizio Esposito and Vera Lúcia Raposo and Sofia Morgado and Michael Desa},
year={2024},
eprint={2403.03883},
archivePrefix={arXiv},
primaryClass={cs.CL}
}