Generalizable Knowledge Distillation from Vision Foundation Models for Semantic Segmentation
Abstract
Generalizable Knowledge Distillation (GKD) enhances out-of-domain generalization in semantic segmentation by decoupling representation learning from task learning and using query-based soft distillation to transfer knowledge from vision foundation models.
Knowledge distillation (KD) has been widely applied in semantic segmentation to compress large models, but conventional approaches primarily preserve in-domain accuracy while neglecting out-of-domain generalization, which is essential under distribution shifts. This limitation becomes more severe with the emergence of vision foundation models (VFMs): although VFMs exhibit strong robustness on unseen data, distilling them with conventional KD often compromises this ability. We propose Generalizable Knowledge Distillation (GKD), a multi-stage framework that explicitly enhances generalization. GKD decouples representation learning from task learning. In the first stage, the student acquires domain-agnostic representations through selective feature distillation; in the second stage, these representations are frozen for task adaptation, thereby mitigating overfitting to the seen domains. To further support transfer, we introduce a query-based soft distillation mechanism, in which student features act as queries over teacher representations to selectively retrieve transferable spatial knowledge from VFMs. Extensive experiments on five domain generalization benchmarks demonstrate that GKD consistently outperforms existing KD methods, achieving average gains of +1.9% in foundation-to-foundation (F2F) and +10.6% in foundation-to-local (F2L) distillation. The code will be available at https://github.com/Younger-hua/GKD.
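The query-based soft distillation described above can be sketched as a cross-attention module in which student features query teacher (VFM) features. This is an illustrative assumption, not the authors' released implementation: the module names, projection dimensions, and the MSE objective below are hypothetical, and the paper's actual loss and architecture may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryBasedSoftDistillation(nn.Module):
    """Illustrative sketch (not the official GKD code): student features act as
    queries over teacher features, retrieving spatial knowledge via attention,
    and the student is pulled toward the retrieved teacher representations."""

    def __init__(self, student_dim: int, teacher_dim: int, embed_dim: int = 256):
        super().__init__()
        self.q_proj = nn.Linear(student_dim, embed_dim)   # student -> queries
        self.k_proj = nn.Linear(teacher_dim, embed_dim)   # teacher -> keys
        self.v_proj = nn.Linear(teacher_dim, embed_dim)   # teacher -> values
        self.s_proj = nn.Linear(student_dim, embed_dim)   # align student for the loss

    def forward(self, f_student: torch.Tensor, f_teacher: torch.Tensor) -> torch.Tensor:
        # f_student: (B, N_s, C_s) flattened student feature map
        # f_teacher: (B, N_t, C_t) flattened teacher (VFM) feature map
        q = self.q_proj(f_student)                        # (B, N_s, D)
        k = self.k_proj(f_teacher)                        # (B, N_t, D)
        v = self.v_proj(f_teacher)                        # (B, N_t, D)
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        retrieved = attn @ v                              # (B, N_s, D) teacher knowledge per query
        # Soft feature distillation: match the student to the retrieved features.
        return F.mse_loss(self.s_proj(f_student), retrieved)

# Example with toy shapes (batch 2, 196 tokens, student dim 256, teacher dim 1024):
distill = QueryBasedSoftDistillation(student_dim=256, teacher_dim=1024)
loss = distill(torch.randn(2, 196, 256), torch.randn(2, 196, 1024))
```

Because the retrieval is attention-weighted rather than a rigid per-pixel match, the student can selectively absorb the teacher regions most relevant to each of its own features, which is consistent with the "selective retrieval" framing in the abstract.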
Community
Accepted by CVPR 2026.
Related papers recommended by the Semantic Scholar API:
- PAND: Prompt-Aware Neighborhood Distillation for Lightweight Fine-Grained Visual Classification (2026)
- Open-Vocabulary Domain Generalization in Urban-Scene Segmentation (2026)
- Scaling Dense Event-Stream Pretraining from Visual Foundation Models (2026)
- Revisiting Multi-Task Visual Representation Learning (2026)
- BEVLM: Distilling Semantic Knowledge from LLMs into Bird's-Eye View Representations (2026)
- Unify the Views: View-Consistent Prototype Learning for Few-Shot Segmentation (2026)
- X-Distill: Cross-Architecture Vision Distillation for Visuomotor Learning (2026)