# DART Student Backbones
Distilled lightweight backbones for DART (Detect Anything in Real Time), a training-free framework that converts SAM3 into a real-time open-vocabulary multi-class detector.
For more details, see the paper: Detect Anything in Real Time: From Single-Prompt Segmentation to Multi-Class Detection.
These student backbones replace SAM3's 439M-parameter ViT-H/14 backbone with lightweight alternatives via adapter distillation: a small FPN adapter (~5M trainable parameters) is trained to project student features into the ViT-H feature space while the SAM3 encoder-decoder remains frozen.
## Models
| Model | Backbone Params | COCO AP | AP50 | AP75 | AP_S | AP_L | Backbone Latency |
|---|---|---|---|---|---|---|---|
| DART (ViT-H teacher) | 439M | 55.8 | 73.4 | 61.5 | 40.3 | 70.7 | 53 ms |
| DART-RepViT-M2.3 | 8.2M | 38.7 | 53.1 | 42.3 | 22.6 | 49.9 | 13.9 ms |
| DART-TinyViT-21M | 21M | 30.1 | 42.4 | 32.6 | 17.4 | 37.8 | 12.2 ms |
| DART-EfficientViT-L2 | 9.2M | 21.7 | 31.5 | 23.5 | 13.7 | 24.2 | 10.7 ms |
| DART-EfficientViT-L1 | 5.3M | 16.3 | 24.2 | 17.4 | 10.6 | 17.3 | 10.4 ms |
All results are on COCO val2017 (5,000 images, 80 classes, 1008x1008 input resolution) with TensorRT FP16 engines for both the backbone and the encoder-decoder, measured on a single RTX 4080. The teacher uses training-free multi-class detection (no detection training); the students use adapter distillation with a frozen encoder-decoder.
## Architecture
Each student model consists of:
- A frozen ImageNet-pretrained backbone from timm (`features_only=True`, 3 stages)
- A trained FPN adapter (Conv1x1 + bilinear interpolation + Conv3x3 at each of 3 levels; see the sketch below) that maps backbone features to SAM3's expected FPN format: `(B, 256, 288, 288)`, `(B, 256, 144, 144)`, `(B, 256, 72, 72)`
- The original frozen SAM3 encoder-decoder (unchanged)
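As a rough illustration, one adapter level might look like the following. This is a minimal sketch; the class name, constructor arguments, and module layout are illustrative, not the repo's actual API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNAdapterLevel(nn.Module):
    """One adapter level: 1x1 projection -> bilinear resize -> 3x3 refinement."""

    def __init__(self, in_channels: int, out_size: int, out_channels: int = 256):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.refine = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.out_size = out_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.proj(x)  # project backbone channels to 256
        x = F.interpolate(x, size=(self.out_size, self.out_size),
                          mode="bilinear", align_corners=False)  # match teacher FPN resolution
        return self.refine(x)  # 3x3 local refinement

# Example: map a hypothetical 160-channel backbone stage to the finest FPN level.
level = FPNAdapterLevel(in_channels=160, out_size=288)
feat = level(torch.randn(1, 160, 126, 126))  # -> (1, 256, 288, 288)
```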
## Usage
### Loading a student model
```python
import torch
from sam3.distillation.sam3_student import build_sam3_student_model

model = build_sam3_student_model(
    backbone_config="repvit_m2_3",  # or efficientvit_l1, efficientvit_l2, tiny_vit_21m
    teacher_checkpoint="sam3.pt",   # SAM3 weights (encoder-decoder)
    device="cuda",
)

ckpt = torch.load("distilled/repvit_m2_3_distilled.pt", map_location="cuda")
model.backbone.student_backbone.load_state_dict(ckpt["student_state_dict"])
model.eval()
```
### Inference
```python
from sam3.model.sam3_multiclass_fast import Sam3MultiClassPredictorFast

predictor = Sam3MultiClassPredictorFast(model, device="cuda")
predictor.set_classes(["person", "car", "dog"])

state = predictor.set_image(image)  # PIL Image
results = predictor.predict(state, confidence_threshold=0.3)
# results: dict with 'boxes', 'scores', 'class_ids', 'class_names'
```
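A small post-processing sketch, assuming `boxes` are xyxy pixel coordinates and all result fields share the same ordering (the exact tensor format is not specified here):

```python
# Hypothetical post-processing; assumes 'boxes' are xyxy pixel coordinates
# and that all result fields share the same ordering.
for box, score, name in zip(results["boxes"], results["scores"], results["class_names"]):
    x1, y1, x2, y2 = (float(v) for v in box)
    print(f"{name}: {score:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```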
### COCO evaluation
```bash
PYTHONIOENCODING=utf-8 python scripts/eval_all_students.py
```
This runs `scripts/eval_coco.py` for all four student models and produces `coco_eval_all_students.json`.
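Per model, the scoring reduces to standard pycocotools bbox evaluation; a minimal equivalent sketch, assuming detections were already written to a COCO-format results JSON (both file paths below are illustrative):

```python
# Illustrative paths; detections must be a COCO-format results list of
# {"image_id", "category_id", "bbox" (xywh), "score"} entries.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")
coco_dt = coco_gt.loadRes("results.json")
coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP, AP50, AP75, AP_S/M/L as in the table above
```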
## Training
All adapters were trained on COCO train2017 images (118K images; annotations are not used) for 5 epochs with AdamW (lr=1e-3, weight decay 0.01, cosine schedule), minimizing a multi-scale MSE loss between student and teacher FPN features (level weights 0.15, 0.20, 0.65). Training takes approximately 2 GPU-hours on a single RTX 4080.
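The objective is a weighted sum of per-level MSE terms; a minimal sketch of what it amounts to (the ordering of weights across levels is an assumption, and the names are illustrative):

```python
import torch.nn.functional as F

# Per-level weights from the setup above; the level ordering is assumed.
LEVEL_WEIGHTS = (0.15, 0.20, 0.65)

def distillation_loss(student_feats, teacher_feats):
    """Weighted multi-scale MSE between student and frozen-teacher FPN features."""
    loss = 0.0
    for w, s, t in zip(LEVEL_WEIGHTS, student_feats, teacher_feats):
        loss = loss + w * F.mse_loss(s, t.detach())  # teacher gets no gradients
    return loss
```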
```bash
python scripts/distill.py \
    --data-dir /path/to/coco/train2017 \
    --checkpoint sam3.pt \
    --backbone repvit_m2_3 \
    --epochs 5 --batch-size 2 --lr 1e-3
```
## Supported backbones
| Config name | timm model | Stages |
|---|---|---|
| `efficientvit_l1` | `efficientvit_l1.r224_in1k` | (0, 1, 2) |
| `efficientvit_l2` | `efficientvit_l2.r384_in1k` | (0, 1, 2) |
| `repvit_m2_3` | `repvit_m2_3.dist_450e_in1k` | (0, 1, 2) |
| `tiny_vit_21m` | `tiny_vit_21m_224.dist_in22k_ft_in1k` | (0, 1, 2) |
## Checkpoint format
Each `.pt` file contains:
```python
{
    "epoch": 5,
    "loss": float,
    "adapter_state_dict": {...},  # FPN adapter weights only
    "student_state_dict": {...},  # full student backbone + adapter state
}
```
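A quick way to inspect a checkpoint (the path and the commented-out attribute name are illustrative):

```python
import torch

# Path is illustrative; 'model.backbone.adapter' below is a hypothetical
# attribute name, not confirmed by the repo.
ckpt = torch.load("distilled/repvit_m2_3_distilled.pt", map_location="cpu")
print(f"epoch={ckpt['epoch']}, final loss={ckpt['loss']:.4f}")
print(f"{len(ckpt['adapter_state_dict'])} adapter tensors, "
      f"{len(ckpt['student_state_dict'])} student tensors")
# model.backbone.adapter.load_state_dict(ckpt["adapter_state_dict"])  # adapter-only restore
```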
## TRT export
```bash
python scripts/export_student_trt.py --models repvit_m2_3 --imgsz 1008
```
This produces ONNX and TRT FP16 engine files. The encoder-decoder is exported separately (split-engine design) so that open-vocabulary class prompts remain flexible at deployment time.
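Conceptually, the backbone export amounts to a standard ONNX export followed by an FP16 engine build; a sketch under those assumptions (file names, output names, and the assumption that `model.backbone` is directly exportable are all illustrative, not the script's actual behavior):

```python
import torch

# Conceptual sketch only; 'model' is a loaded student model as shown above.
model.eval()
dummy = torch.randn(1, 3, 1008, 1008, device="cuda")
torch.onnx.export(
    model.backbone,                          # backbone + adapter (split-engine design)
    dummy,
    "repvit_m2_3_backbone.onnx",
    input_names=["image"],
    output_names=["fpn_0", "fpn_1", "fpn_2"],
    opset_version=17,
)
# The ONNX file can then be built into an FP16 engine, e.g.:
#   trtexec --onnx=repvit_m2_3_backbone.onnx --fp16 --saveEngine=backbone.engine
```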
## Requirements
- PyTorch >= 2.7.0
- timm
- SAM3 checkpoint (`sam3.pt`)
- TensorRT >= 10.9 (for TRT deployment)
## Citation
```bibtex
@article{dart2026,
  title={Detect Anything in Real Time: From Single-Prompt Segmentation to Multi-Class Detection},
  author={Turkcan, Mehmet Kerem},
  journal={arXiv preprint},
  year={2026}
}
```
## License
The student adapter weights are released under the same license as SAM3. The underlying backbone weights (RepViT, TinyViT, EfficientViT) retain their original licenses from timm.