Introduction
Cross-attention maps are grayscale images that highlight object regions, so they contain less style information than RGB images. We therefore propose using these maps to generate bounding box annotations for the synthetic target-domain data (UGRC). First, we label the synthetic source-domain data (LINZ) using detectors trained on real source-domain data. We then train another detector on the synthetic source-domain cross-attention maps and use it to label the synthetic target-domain data.
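As an illustration of the pseudo-labeling step described above, the sketch below runs a detector over a stacked cross-attention map image and keeps high-confidence boxes as annotations. It is a minimal sketch only: the torchvision Faster R-CNN is a stand-in for this repository's detectors, and the file path and score threshold are hypothetical.

```python
# Minimal pseudo-labeling sketch. The torchvision Faster R-CNN below is a
# stand-in for the detectors in this repository, not the actual models.
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def pseudo_label(map_path, score_thr=0.5):
    """Return high-confidence boxes ([x1, y1, x2, y2]) predicted on a
    stacked cross-attention map image."""
    img = Image.open(map_path).convert("RGB")  # stacked maps read as 3 channels
    with torch.no_grad():
        pred = detector([to_tensor(img)])[0]
    keep = pred["scores"] >= score_thr
    return pred["boxes"][keep].tolist()

# Hypothetical usage:
# boxes = pseudo_label("ugrc_synthetic/attn_0001.png")
```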
Model Usage
This folder contains four detectors trained on Synthetic LINZ stacked cross-attention maps and tested on Synthetic UGRC cross-attention maps, along with the configuration files used for training and testing.
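For reference, building a "stacked" 3-channel input from grayscale cross-attention maps could look like the sketch below. The file names and the channel-assignment scheme (repeating the last map when fewer than three are given) are assumptions for illustration, not necessarily the exact preprocessing used for the released detectors.

```python
# Sketch of stacking grayscale cross-attention maps into a 3-channel
# array that a standard RGB detector can consume; the exact channel
# assignment used for the released detectors may differ.
import numpy as np
from PIL import Image

def stack_attention_maps(map_paths):
    """Stack up to three grayscale cross-attention maps channel-wise.

    If fewer than three maps are given, the last one is repeated so the
    output is always an H x W x 3 uint8 array.
    """
    maps = [np.array(Image.open(p).convert("L")) for p in map_paths]
    while len(maps) < 3:
        maps.append(maps[-1])
    return np.stack(maps[:3], axis=-1)

# Hypothetical usage:
# stacked = stack_attention_maps(["attn_vehicle.png"])
# Image.fromarray(stacked).save("stacked_attn.png")
```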
References
➡️ Paper: Adapting Vehicle Detectors for Aerial Imagery to Unseen Domains with Weak Supervision
➡️ Project Page: Webpage
➡️ Synthetic Data: AGenDA