# Pylox Text-to-SQL 8B (sql-spider)
A LoRA adapter for meta-llama/Llama-3.1-8B-Instruct, fine-tuned on the Spider text-to-SQL benchmark dataset. Built end-to-end on a single NVIDIA Grace Blackwell GB10 (DGX Spark, 128 GB unified memory) with the same NF4-train / NVFP4-serve / EAGLE-3 speculative-decoding stack used across the Pylox Forge portfolio.

This is a small-dataset seed adapter intended as a portfolio demonstration of code-domain fine-tuning. On this volume of data, the fine-tune-quality measurement against the base model falls below the 50/50 line (see Evaluation), and that result is documented honestly. A larger refresh round on real schema-grounded query pairs is the recommended path to production-grade text-to-SQL.
## Model details

- Adapter: LoRA (PEFT, rank 32, alpha 64, dropout 0.1)
- Target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- Base model: `meta-llama/Llama-3.1-8B-Instruct`
- Recommended serving base: `nvidia/Llama-3.1-8B-Instruct-NVFP4`
- Speculative head: `RedHatAI/Llama-3.1-8B-Instruct-speculator.eagle3`
- License: Llama 3.1 Community License
- Hardware: NVIDIA Grace Blackwell GB10 (DGX Spark, 128 GB unified memory)
## Training data and technique

- Source: `xlangai/spider` (cross-domain text-to-SQL benchmark)
- Examples after preprocessing: 697
- Format: standard chat-messages format with assistant-only loss; the schema is injected into the system prompt, the natural-language question goes in the user turn, and the SQL query is the assistant target
- Method: NF4 QLoRA SFT (4-bit NormalFloat base, bfloat16 compute, double quantization); a configuration sketch follows this list
- Hyperparameters: 3 epochs, cosine LR (2e-4 peak), max_seq_length 2048, sequence packing enabled, NEFTune noise alpha 5, paged_adamw_8bit optimizer, gradient accumulation 16
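For reference, a minimal sketch of the quantization and adapter configuration implied by the settings above, using `BitsAndBytesConfig` from `transformers` and `LoraConfig` from PEFT. The training script itself is not part of this repository, so treat this as illustrative rather than a reproduction recipe:

```python
# NF4 QLoRA configuration matching the settings listed above (illustrative).
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",               # 4-bit NormalFloat base
    bnb_4bit_compute_dtype=torch.bfloat16,   # bfloat16 compute
    bnb_4bit_use_double_quant=True,          # double quantization
)

lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```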
## Evaluation

Numbers are pulled directly from the local benchmark JSON. No invented values.
### Throughput on Grace Blackwell GB10
| Metric | Value |
|---|---|
| Sustained throughput | 25.9 tok/s |
| Single-user throughput | 39.6 tok/s |
| Concurrent batch-8 throughput | 270.1 tok/s |
| TTFT p50 | 414.4 ms |
| TTFT p95 | 514.3 ms |
| End-to-end latency p50 | 3852 ms |
### Cost (NVFP4 serving)
| Metric | Value |
|---|---|
| Cost per 1M output tokens | $0.5363 |
| Comparable OpenAI GPT-4o output | $10 per 1M |
| Savings factor | 18.6x cheaper than GPT-4o |
### Capability preservation (academic, lm-evaluation-harness, 500 samples each)
| Benchmark | Score |
|---|---|
| HellaSwag (common sense) | 78.0% |
| TruthfulQA (hallucination resistance) | 30.0% |
| MMLU-Pro | not run |
### Fine-tune quality vs base (LLM judge, pairwise)

| Metric | Value |
|---|---|
| Win rate vs meta-llama/Llama-3.1-8B-Instruct | 14.0% |
Honest read: at 697 training examples the adapter does not beat the base model in pairwise judge quality. Spider execution accuracy on the held-out dev set has not yet been measured against an actual database; that is the appropriate domain-specific eval and is the recommended next step (a minimal sketch of the check follows). The pipeline runs end-to-end and the artifacts ship, but production text-to-SQL on a real schema requires a substantially larger and more schema-diverse corpus.
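For concreteness, a sketch of what that execution-based check looks like against a Spider SQLite database. The helper name and the order-insensitive row comparison are simplifications; the official Spider evaluator is stricter:

```python
# Execution-based correctness check against a SQLite database (illustrative).
import sqlite3

def execution_match(db_path: str, predicted_sql: str, gold_sql: str) -> bool:
    """True when the predicted and gold queries return the same rows."""
    conn = sqlite3.connect(db_path)
    try:
        pred = conn.execute(predicted_sql).fetchall()
        gold = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False  # predicted SQL failed to parse or execute
    finally:
        conn.close()
    # Order-insensitive comparison; key=repr avoids TypeError on NULL-mixed columns.
    return sorted(pred, key=repr) == sorted(gold, key=repr)
```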
### Safety (with Pylox safety gateway, 50-prompt red team)
| Metric | Value |
|---|---|
| Adversarial block rate | 75.56% |
| False positive rate on benign controls | 20.0% |
The 20% false positive rate indicates over-blocking of benign analytical SQL prompts. A DPO alignment pass with refusal-behavior pairs that distinguish "DROP TABLE users" (block) from "SELECT count(*) FROM users WHERE deleted = true" (allow) would reduce this; illustrative pairs are sketched below. This is the recommended next step before the adapter routes real production traffic.
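For illustration, such refusal-behavior pairs might look like the following, using the `prompt` / `chosen` / `rejected` convention expected by TRL's `DPOTrainer`. The pairs themselves are hypothetical, not drawn from any existing dataset:

```python
# Hypothetical refusal-behavior preference pairs for a DPO pass.
dpo_pairs = [
    {   # Destructive DDL: the preferred response refuses.
        "prompt": "Write SQL to remove the users table.",
        "chosen": "I can't generate destructive DDL such as DROP TABLE.",
        "rejected": "DROP TABLE users;",
    },
    {   # Benign analytical query: the preferred response answers.
        "prompt": "How many soft-deleted users are there?",
        "chosen": "SELECT count(*) FROM users WHERE deleted = true;",
        "rejected": "I can't help with queries that touch deleted users.",
    },
]
```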
## Quickstart

### PEFT (research / batch)
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

base_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "pyloxsystems/sql-spider-llama-3.1-8b-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

schema = """
CREATE TABLE orders (id INT PRIMARY KEY, customer_id INT, total NUMERIC, created_at TIMESTAMP);
CREATE TABLE customers (id INT PRIMARY KEY, name TEXT, country TEXT);
"""

messages = [
    {"role": "system", "content": f"You translate natural language to PostgreSQL SQL.\n\nSchema:\n{schema}"},
    {"role": "user", "content": "Total revenue from Canadian customers in the last 30 days?"},
]

# add_generation_prompt=True appends the assistant header so the model
# generates a reply instead of continuing the user turn.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
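If adapter hot-swapping is not needed, the LoRA weights can optionally be folded into the base model after loading, via the standard PEFT call:

```python
# Optional: merge the adapter into the base weights so inference runs without
# the LoRA indirection; the merged model can then be saved and served as-is.
model = model.merge_and_unload()
```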
### vLLM with NVFP4 base and EAGLE-3 speculative decoding
```bash
vllm serve nvidia/Llama-3.1-8B-Instruct-NVFP4 \
  --enable-lora \
  --lora-modules sql-spider=pyloxsystems/sql-spider-llama-3.1-8b-lora \
  --speculative-config '{
    "method": "eagle3",
    "model": "RedHatAI/Llama-3.1-8B-Instruct-speculator.eagle3",
    "num_speculative_tokens": 5
  }'
```
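Once the server is up, requests select the adapter by the name registered with `--lora-modules`. A minimal client sketch, assuming vLLM's OpenAI-compatible endpoint on its default port 8000:

```python
# Query the served LoRA through vLLM's OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="sql-spider",  # LoRA name from --lora-modules
    messages=[
        {"role": "system", "content": "You translate natural language to PostgreSQL SQL.\n\nSchema:\nCREATE TABLE customers (id INT PRIMARY KEY, name TEXT, country TEXT);"},
        {"role": "user", "content": "How many customers are in Canada?"},
    ],
    temperature=0.0,
)
print(response.choices[0].message.content)
```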
## Intended use
- Translate natural-language questions to SQL against a provided schema
- Generalize to schemas unseen during training (Spider's distribution)
- PostgreSQL / SQLite / MySQL syntax
- Pipeline demonstration for evaluating the Pylox Forge stack on a code vertical
## Out of scope
- Mutating queries (DELETE / DROP / TRUNCATE) without external read-only role enforcement
- Schemas with deeply nested JSON / JSONB columns (untested generalization)
- Dialect-heavy SQL: Snowflake-specific, BigQuery-specific, Redshift-specific without a refresh round
- Production routing without an execution sandbox and a query validator
- Production text-to-SQL without a substantially larger fine-tune corpus
## Limitations
- 697 training examples is a small corpus relative to production-grade text-to-SQL fine-tunes; fine-tune quality vs base is below the 50/50 line.
- Spider's training distribution is primarily Postgres and SQLite; the adapter is weak on schemas with complex nested types or non-relational columns.
- Spider execution accuracy has not yet been measured; it is the recommended next step.
- 20% false positive rate on benign safety controls; a DPO refresh is recommended.
- No built-in execution sandbox; the orchestrator must validate queries before running them (see the sketch after this list).
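A minimal sketch of the kind of pre-execution validation referenced above, using `sqlparse` to allow only read-only statements. The allow-list, function name, and policy are illustrative; a production gate would also enforce a read-only database role and an execution sandbox:

```python
# Naive pre-execution validator: permit only read-only SELECT statements.
import sqlparse

ALLOWED_STATEMENT_TYPES = {"SELECT"}

def is_read_only(sql: str) -> bool:
    statements = sqlparse.parse(sql)
    if not statements:
        return False
    # get_type() reports the leading DML/DDL keyword: SELECT, DROP, UNKNOWN, ...
    return all(s.get_type() in ALLOWED_STATEMENT_TYPES for s in statements)

assert is_read_only("SELECT count(*) FROM users WHERE deleted = true")
assert not is_read_only("DROP TABLE users")
assert not is_read_only("SELECT 1; DELETE FROM users")  # stacked queries blocked
```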
## License
Inherits the Llama 3.1 Community License from the base model.
## Citation

```bibtex
@misc{pylox_sql_spider_2026,
  author = {Girard, Emilio},
  title = {Pylox Text-to-SQL 8B (sql-spider)},
  year = {2026},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/pyloxsystems/sql-spider-llama-3.1-8b-lora}}
}
```
Pylox Forge is a solo-operated LLM fine-tuning lab on NVIDIA Grace Blackwell. Site: pyloxforge.com. Other adapters: pyloxsystems on Hugging Face.