RF-DETR License Plate Detector

A fine-tuned RF-DETR Medium model for license plate detection, optimized for edge deployment.

Model Details

  • Base Model: RF-DETR Medium
  • Task: License plate detection (single class)
  • Input Resolution: 576x576
  • Training Framework: PyTorch
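
The 576x576 input implies a simple preprocessing step. A minimal sketch with NumPy, assuming the image is already resized to 576x576 and following the [0, 1] normalization used in the ONNX example below (the function name and the RGB/uint8 input format are assumptions; resizing itself is omitted):

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Turn an HxWx3 uint8 RGB image (already resized to 576x576)
    into the 1x3x576x576 float32 tensor the model expects."""
    x = image.astype(np.float32) / 255.0   # scale to [0, 1]
    x = np.transpose(x, (2, 0, 1))         # HWC -> CHW
    return x[np.newaxis, ...]              # add batch dimension
```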

Available Formats

File                                Format            Size   Use Case
rfdetr_alpr.onnx                    ONNX              117MB  Cross-platform inference
rfdetr_alpr_optimized.onnx          ONNX (optimized)  115MB  Optimized for TensorRT
rfdetr_alpr_int8.onnx               ONNX INT8         34MB   Edge inference (ARM/NPU)
license_plate_detector_fp16.engine  TensorRT FP16     64MB   GPU inference (balanced)
license_plate_detector_int8.engine  TensorRT INT8     82MB   GPU inference (fastest)
calibration.cache                   Calibration data  103KB  INT8 calibration cache

Deployment Paths

  • NVIDIA GPU: Use TensorRT engines (.engine) for fastest inference
  • Edge/ARM (i.MX8M Plus, i.MX93): Use INT8 ONNX with ONNX Runtime or convert to TFLite

TensorRT Engine Details

  • TensorRT Version: 10.14.1
  • Target GPU: NVIDIA GB10 (Compute Capability 12.1)
  • Input Shape: 1x3x576x576 (fixed batch size)
  • Precision: FP16 / INT8
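
TensorRT engines are tied to the GPU and TensorRT version they were built on, so the shipped .engine files may need rebuilding on other hardware. A sketch using trtexec (which ships with TensorRT); the exact flags used to produce the shipped engines are not documented here, so treat this as an assumption:

```shell
# Rebuild the FP16 engine from the optimized ONNX export
trtexec --onnx=rfdetr_alpr_optimized.onnx \
        --saveEngine=license_plate_detector_fp16.engine \
        --fp16

# INT8 variant, reusing the shipped calibration cache
trtexec --onnx=rfdetr_alpr_optimized.onnx \
        --saveEngine=license_plate_detector_int8.engine \
        --int8 --calib=calibration.cache
```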

Usage

ONNX Inference

import onnxruntime as ort
import numpy as np

session = ort.InferenceSession("rfdetr_alpr_optimized.onnx")

# Dummy input: (1, 3, 576, 576) float32, normalized to [0, 1]
input_tensor = np.random.rand(1, 3, 576, 576).astype(np.float32)

outputs = session.run(None, {"images": input_tensor})
boxes, scores = outputs[0], outputs[1]
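
The raw outputs still need confidence thresholding before use. A minimal post-filter, assuming boxes of shape (N, 4) and scores of shape (N,) (the exact output layout and the 0.5 default are assumptions):

```python
import numpy as np

def filter_detections(boxes: np.ndarray, scores: np.ndarray,
                      threshold: float = 0.5):
    """Keep only detections whose confidence exceeds the threshold."""
    keep = scores > threshold
    return boxes[keep], scores[keep]
```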

TensorRT Inference

import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context on import

# Deserialize the engine, then create an execution context for inference
with open("license_plate_detector_int8.engine", "rb") as f:
    engine = trt.Runtime(trt.Logger()).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

Edge Inference (INT8 ONNX)

For ARM/edge devices without an NVIDIA GPU:

import onnxruntime as ort

# Use INT8 quantized model for edge deployment
session = ort.InferenceSession(
    "rfdetr_alpr_int8.onnx",
    providers=['CPUExecutionProvider']  # or platform-specific NPU provider
)

# Input: (1, 3, 576, 576) with ImageNet normalization
outputs = session.run(None, {"images": input_tensor})
boxes, scores = outputs[0], outputs[1]
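
DETR-family detectors typically emit boxes as normalized (cx, cy, w, h). Whether this export follows that convention is an assumption; if it does, converting to pixel corner coordinates at the 576x576 input resolution looks like:

```python
import numpy as np

def to_pixel_xyxy(boxes: np.ndarray, width: int = 576,
                  height: int = 576) -> np.ndarray:
    """Convert normalized (cx, cy, w, h) boxes to pixel (x1, y1, x2, y2)."""
    cx, cy, w, h = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    x1 = (cx - w / 2) * width
    y1 = (cy - h / 2) * height
    x2 = (cx + w / 2) * width
    y2 = (cy + h / 2) * height
    return np.stack([x1, y1, x2, y2], axis=1)
```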

License

Apache 2.0 (same as RF-DETR)
