---
annotations_creators: []
language:
- en
size_categories:
- 1K<n<10K
---

# Dataset Card for ScreenSpot-Pro

ScreenSpot-Pro is a benchmark for GUI grounding in professional, high-resolution desktop applications.

## Dataset Structure

The dataset captures desktop application interfaces across various platforms, pairing each screenshot with a natural language instruction and a target interaction element. Each instance identifies the specific UI element that must be interacted with to complete a task in a desktop application such as Microsoft Word, and instances are organized by application type and operating system. A loading sketch and an evaluation sketch follow below.
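As a quick orientation, here is a minimal sketch of loading and inspecting the benchmark with the `datasets` library. The repository id, split name, and field names (`instruction`, `bbox`, `ui_type`) are illustrative assumptions, not confirmed by this card; check the hosting repository for the exact schema.

```python
# Minimal sketch: load the benchmark and inspect one instance.
# The repo id, split, and field names below are assumptions.
from datasets import load_dataset

ds = load_dataset("likaixin/ScreenSpot-Pro", split="test")  # assumed id/split

sample = ds[0]
print(sample["instruction"])  # natural language instruction (assumed field)
print(sample["bbox"])         # target element bounding box (assumed field)
print(sample["ui_type"])      # "text" or "icon" (assumed field)
```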
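Benchmarks in the ScreenSpot family are typically scored by whether a model's predicted click point falls inside the target bounding box. The sketch below shows that check, along with the relative-area computation behind the small-target statistic cited under "Bias, Risks, and Limitations" below. The `[left, top, right, bottom]` pixel convention is an assumption; verify it against the actual schema.

```python
# Hedged sketch of the usual grounding metric (predicted point inside the
# target box) and of relative target area. Assumes
# bbox = [left, top, right, bottom] in pixels.

def is_hit(pred_x: float, pred_y: float, bbox: list[float]) -> bool:
    """True if the predicted click point lies inside the target box."""
    left, top, right, bottom = bbox
    return left <= pred_x <= right and top <= pred_y <= bottom

def relative_area(bbox: list[float], width: int, height: int) -> float:
    """Fraction of the screenshot covered by the target box."""
    left, top, right, bottom = bbox
    return (right - left) * (bottom - top) / (width * height)

# A 40x20 px icon on a 3840x2160 capture covers under 0.01% of the
# screen, the same order of magnitude as the 0.07% average reported
# for this benchmark.
print(f"{relative_area([100, 100, 140, 120], 3840, 2160):.4%}")
```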
## Dataset Creation

### Curation Rationale

ScreenSpot-Pro was created to address the limitations of existing GUI grounding benchmarks, which primarily focus on simple tasks and cropped screenshots. Professional applications introduce unique challenges for GUI perception models, including high-resolution displays, smaller target sizes, and complex environments that are not well represented in current benchmarks. The dataset aims to provide a more rigorous evaluation framework that reflects real-world professional computing scenarios.

### Source Data

#### Data Collection and Processing

The data collection prioritized authentic high-resolution screenshots from professional software usage:

1. Experts with at least five years of experience using the relevant applications were invited to record data.
2. Participants performed their regular work routines to ensure task authenticity.
3. A custom screen capture tool, accessible via a shortcut key, was developed to minimize workflow disruption.
4. The tool allowed experts to take screenshots and label bounding boxes and instructions in real time.
5. Screens with resolution greater than 1080p (1920×1080) were prioritized.
6. Monitor scaling was disabled during capture.
7. For dual-monitor setups, images were captured spanning both displays.
8. UI elements were classified as either "text" or "icon" based on refined criteria.

#### Who are the source data producers?

The source data producers are expert users with at least five years of experience using the relevant professional applications. They come from professional domains including software development, creative design, engineering, scientific research, and office productivity.

### Annotations

#### Annotation process

The annotation process was designed to ensure high quality and authenticity:

1. Experts used a custom screen capture tool that overlays the screenshot directly on their screen.
2. They labeled bounding boxes by dragging and provided instructions directly through the tool.
3. This real-time annotation eliminated the need to recall contexts after the fact.
4. Each instance was reviewed by at least two annotators to ensure correct instructions and target bounding boxes.
5. Ambiguous instructions were resolved to guarantee exactly one target per instruction.
6. Annotators verified the precise interactable regions of GUI elements, excluding irrelevant areas.

#### Who are the annotators?

The annotators are the same expert users who produced the source data: professionals with at least five years of experience using the relevant applications. This ensures that annotations reflect domain expertise and an understanding of professional software workflows.

#### Personal and Sensitive Information

The dataset consists of screenshots of professional software interfaces and does not inherently contain personal or sensitive information. However, the paper does not explicitly address whether potentially personal content visible in the screenshots (such as document text or filenames) was anonymized.

## Bias, Risks, and Limitations

The dataset has several limitations:

1. It focuses exclusively on GUI grounding and excludes agent planning and execution tasks.
2. The extremely small relative size of targets (0.07% of screen area on average) presents a significant challenge.
3. The benchmark may not fully capture the diversity of professional software configurations and customizations.
4. The paper acknowledges legal considerations related to software licensing that limited certain aspects of data collection.
5. The dataset's focus on high-resolution professional applications may not generalize to other GUI contexts.

### Recommendations

When using this dataset, researchers should:

1. Be aware of the legal considerations regarding software licensing and automation.
2. Consider the extreme challenge posed by small target sizes in high-resolution images.
3. Recognize that performance on this benchmark may not directly translate to other GUI contexts.
4. Be cautious about potential biases in task selection or application representation.
5. Consider developing specialized methods for handling high-resolution inputs, as demonstrated by the authors' ScreenSeekeR approach.

## Citation

**BibTeX:**

```bibtex
@misc{li2025screenspot-pro,
  title={ScreenSpot-Pro: GUI Grounding for Professional High-Resolution Computer Use},
  author={Kaixin Li and Ziyang Meng and Hongzhan Lin and Ziyang Luo and Yuchen Tian and Jing Ma and Zhiyong Huang and Tat-Seng Chua},
  year={2025},
}
```

**APA:**

Li, K., Meng, Z., Lin, H., Luo, Z., Tian, Y., Ma, J., Huang, Z., & Chua, T.-S. (2025). *ScreenSpot-Pro: GUI Grounding for Professional High-Resolution Computer Use*.

## Dataset Card Contact

https://github.com/likaixin2000/ScreenSpot-Pro-GUI-Grounding