Data-Free Pruning of Self-Attention Layers in LLMs
Abstract
Gate-Norm enables rapid, data-free pruning of attention sublayers in LLMs by ranking their query-key coupling, achieving high inference throughput with minimal accuracy loss compared to unpruned models.
Many self-attention sublayers in large language models (LLMs) can be removed with little to no loss. We attribute this to the Attention Suppression Hypothesis: during pre-training, some deep attention layers learn to mute their own contribution, leaving the residual stream and the MLP to carry the representation. We propose Gate-Norm, a one-shot, weight-only criterion that ranks attention sublayers by query–key coupling and removes the least coupled ones, requiring no calibration data, no forward passes, no fine-tuning, and no specialized kernels. On 40-layer, 13B-parameter LLaMA models, Gate-Norm prunes the model in under a second. Pruning 8–16 attention sublayers yields up to 1.30× higher inference throughput while keeping average zero-shot accuracy within 2% of the unpruned baseline across BoolQ, RTE, HellaSwag, WinoGrande, ARC-Easy/Challenge, and OpenBookQA. Across these settings, Gate-Norm matches data-driven pruning methods in accuracy while being ~1000× faster to score layers, enabling practical, data-free compression of LLMs.
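The abstract does not spell out the exact Gate-Norm formula, but the overall procedure (score each attention sublayer from its query/key projection weights alone, then drop the lowest-scoring ones) can be illustrated with a minimal sketch. The coupling score below (Frobenius norm of W_Q W_K^T) is an assumed proxy for the authors' criterion, and all names, shapes, and counts are illustrative.

```python
# Minimal sketch of data-free, weight-only attention-layer scoring in the spirit
# of Gate-Norm. ASSUMPTION: query-key coupling is approximated here by the
# Frobenius norm of W_Q @ W_K^T per layer; this is an illustrative stand-in,
# not the paper's exact definition.
import torch


def score_attention_layers(q_weights, k_weights):
    """Return one coupling score per attention sublayer, using weights only."""
    scores = []
    for w_q, w_k in zip(q_weights, k_weights):
        # Proxy for how strongly the query and key projections interact.
        coupling = torch.linalg.matrix_norm(w_q @ w_k.T, ord="fro")
        scores.append(coupling.item())
    return scores


def layers_to_prune(scores, num_prune):
    """Indices of the attention sublayers with the weakest coupling."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    return order[:num_prune]


if __name__ == "__main__":
    # Stand-in for per-layer W_Q / W_K matrices. A real run would read these
    # from a checkpoint (e.g., LLaMA-13B: 40 layers, hidden size 5120); we use
    # small random matrices so the sketch runs anywhere.
    torch.manual_seed(0)
    n_layers, d = 40, 512
    q_weights = [torch.randn(d, d) / d**0.5 for _ in range(n_layers)]
    k_weights = [torch.randn(d, d) / d**0.5 for _ in range(n_layers)]

    scores = score_attention_layers(q_weights, k_weights)
    print("prune these attention sublayers:", layers_to_prune(scores, num_prune=8))
```

Because the scoring touches only the projection weights, no calibration batches or forward passes are needed, which is what makes the sub-second, data-free pruning claim plausible.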