diff --git "a/raw_rss_feeds/https___arxiv_org_rss_cs.xml" "b/raw_rss_feeds/https___arxiv_org_rss_cs.xml" --- "a/raw_rss_feeds/https___arxiv_org_rss_cs.xml" +++ "b/raw_rss_feeds/https___arxiv_org_rss_cs.xml" @@ -7,9490 +7,17140 @@ http://www.rssboard.org/rss-specification en-us - Mon, 22 Dec 2025 05:00:07 +0000 + Tue, 23 Dec 2025 05:00:18 +0000 rss-help@arxiv.org - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 Saturday Sunday - V-Agent: An Interactive Video Search System Using Vision-Language Models - https://arxiv.org/abs/2512.16925 - arXiv:2512.16925v1 Announce Type: new -Abstract: We introduce V-Agent, a novel multi-agent platform designed for advanced video search and interactive user-system conversations. By fine-tuning a vision-language model (VLM) with a small video preference dataset and enhancing it with a retrieval vector from an image-text retrieval model, we overcome the limitations of traditional text-based retrieval systems in multimodal scenarios. The VLM-based retrieval model independently embeds video frames and audio transcriptions from an automatic speech recognition (ASR) module into a shared multimodal representation space, enabling V-Agent to interpret both visual and spoken content for context-aware video search. This system consists of three agents-a routing agent, a search agent, and a chat agent-that work collaboratively to address user intents by refining search outputs and communicating with users. The search agent utilizes the VLM-based retrieval model together with an additional re-ranking module to further enhance video retrieval quality. Our proposed framework demonstrates state-of-the-art zero-shot performance on the MultiVENT 2.0 benchmark, highlighting its potential for both academic research and real-world applications. 
- oai:arXiv.org:2512.16925v1 - cs.CV + Efficient Multi-Adapter LLM Serving via Cross-Model KV-Cache Reuse with Activated LoRA + https://arxiv.org/abs/2512.17910 + arXiv:2512.17910v1 Announce Type: new +Abstract: Modern large language model (LLM) systems increasingly rely on multi-turn pipelines that are composed of multiple task-specific adapters, yet existing serving frameworks remain inefficient, incurring substantial recomputation overhead when switching between adapters. We present the first LLM serving engine that supports cross-model prefix cache reuse between base and adapted models via Activated LoRA (aLoRA), enabling efficient and fine-grained adapter switching during inference. Our design extends the vLLM framework by introducing base-aligned block hashing and activation-aware masking within the model execution path, permitting cache reuse across models while preserving compatibility with existing serving engine optimizations. Integrated into a production-grade inference stack, this approach supports dynamic adapter activation without excessive key-value tensor recomputation. Evaluation across representative multi-turn, multi-adapter pipelines demonstrates up to 58x end-to-end latency reduction and over 100x time-to-first-token improvement relative to standard LoRA baselines, with benefits that scale with model size and sequence length and manifest across all stages of the request lifecycle. This work bridges parameter-efficient model adaptation with high-performance serving, providing the first complete realization of cross-model KV-cache reuse in modern LLM inference engines. 
+ oai:arXiv.org:2512.17910v1 + cs.DC cs.AI - cs.IR - cs.MA - Mon, 22 Dec 2025 00:00:00 -0500 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - SunYoung Park, Jong-Hyeon Lee, Youngjune Kim, Daegyu Sung, Younghyun Yu, Young-rok Cha, Jeongho Ju + http://creativecommons.org/licenses/by/4.0/ + Allison Li, Kristjan Greenewald, Thomas Parnell, Navid Azizan - Fixed-Priority and EDF Schedules for ROS2 Graphs on Uniprocessor - https://arxiv.org/abs/2512.16926 - arXiv:2512.16926v1 Announce Type: new -Abstract: This paper addresses limitations of current scheduling methods in the Robot Operating System (ROS)2, focusing on scheduling tasks beyond simple chains and analyzing arbitrary Directed Acyclic Graphs (DAGs). While previous research has focused mostly on chain-based scheduling with ad-hoc response time analyses, we propose a novel approach using the events executor to implement fixed-job-level-priority schedulers for arbitrary ROS2 graphs on uniprocessor systems. We demonstrate that ROS 2 applications can be abstracted as forests of trees, enabling the mapping of ROS 2 applications to traditional real-time DAG task models. Our usage of the events executor requires a special implementation of the events queue and a communication middleware that supports LIFO-ordered message delivery, features not yet standard in ROS2. We show that our implementation generates the same schedules as a conventional fixed-priority DAG task scheduler, in spite of lacking access to the precedence information that usually is required. This further closes the gap between established real-time systems theory and ROS2 scheduling analyses. 
- oai:arXiv.org:2512.16926v1 - cs.DC - cs.OS - cs.RO - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 + Towards Reasoning-Preserving Unlearning in Multimodal Large Language Models + https://arxiv.org/abs/2512.17911 + arXiv:2512.17911v1 Announce Type: new +Abstract: Machine unlearning aims to erase requested data from trained models without full retraining. For Reasoning Multimodal Large Language Models (RMLLMs), this is uniquely challenging: intermediate chain-of-thought steps can still leak sensitive information even when final answers are forgotten, and overly aggressive interventions easily damage general reasoning ability. Yet no benchmark jointly evaluates how well unlearning methods suppress reasoning-level leakage while preserving reasoning competence. We address this gap with RMLLMU-Bench, the first benchmark for RMLLM unlearning that extends standard forgetting metrics with dedicated measures of reasoning leakage and reasoning retention. A systematic evaluation on RMLLMU-Bench reveals that existing unlearning methods for MLLMs and Large (Language) Reasoning Models (LRMs) either leave substantial leakage in the reasoning process or severely degrade reasoning performance. To address these gaps, we propose R-MUSE (Reasoning-preserving MLLM Unlearning via Subspace guidance and Adaptive Steering), a training-free and inference-time intervention framework that steers internal representations to forget both answers and reasoning traces while explicitly preserving general reasoning. Experiments on RMLLMU-Bench demonstrate that R-MUSE achieves a substantially better balance between effective forgetting and reasoning retention. 
+ oai:arXiv.org:2512.17911v1 + cs.CL + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Oren Bell, Harun Teper, Mario G\"unzel, Chris Gill, Jian-Jia Chen + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Hongji Li, Junchi yao, Manjiang Yu, Priyanka Singh, Xue Li, Di Wang, Lijie Hu - Optimizing Text Search: A Novel Pattern Matching Algorithm Based on Ukkonen's Approach - https://arxiv.org/abs/2512.16927 - arXiv:2512.16927v1 Announce Type: new -Abstract: In the realm of computer science, the efficiency of text-search algorithms is crucial for processing vast amounts of data in areas such as natural language processing and bioinformatics. Traditional methods like Naive Search, KMP, and Boyer-Moore, while foundational, often fall short in handling the complexities and scale of modern datasets, such as the Reuters corpus and human genomic sequences. This study rigorously investigates text-search algorithms, focusing on optimizing Suffix Trees through methods like Splitting and Ukkonen's Algorithm, analyzed on datasets including the Reuters corpus and human genomes. A novel optimization combining Ukkonen's Algorithm with a new search technique is introduced, showing linear time and space efficiencies, outperforming traditional methods like Naive Search, KMP, and Boyer-Moore. Empirical tests confirm the theoretical advantages, highlighting the optimized Suffix Tree's effectiveness in tasks like pattern recognition in genomic sequences, achieving 100% accuracy. This research not only advances academic knowledge in text-search algorithms but also demonstrates significant practical utility in fields like natural language processing and bioinformatics, due to its superior resource efficiency and reliability. 
- oai:arXiv.org:2512.16927v1 - cs.DS + Graph-O1 : Monte Carlo Tree Search with Reinforcement Learning for Text-Attributed Graph Reasoning + https://arxiv.org/abs/2512.17912 + arXiv:2512.17912v1 Announce Type: new +Abstract: Text-attributed graphs, where nodes and edges contain rich textual information, are widely used across diverse domains. A central challenge in this setting is question answering, which requires jointly leveraging unstructured text and the structured relational signals within the graph. Although Large Language Models (LLMs) have made significant advances in natural language understanding, their direct use for reasoning over text-attributed graphs remains limited. Retrieval-augmented generation methods that operate purely on text often treat passages as isolated units, ignoring the interconnected structure of the graph. Conversely, graph-based RAG methods that serialize large subgraphs into long textual sequences quickly become infeasible due to LLM context-length constraints, resulting in fragmented reasoning and degraded accuracy. To overcome these limitations, we introduce Graph-O1, an agentic GraphRAG framework that enables LLMs to conduct stepwise, interactive reasoning over graphs. Our approach integrates Monte Carlo Tree Search (MCTS) with end-to-end reinforcement learning, allowing the model to selectively explore and retrieve only the most informative subgraph components. The reasoning procedure is framed as a multi-turn interaction between the agent and the graph environment, and the agent is trained through a unified reward mechanism. Extensive experiments across multiple LLM backbones demonstrate that Graph-O1 consistently surpasses state-of-the-art baselines, producing answers that are more accurate, reliable, and interpretable. 
+ oai:arXiv.org:2512.17912v1 + cs.CL cs.AI - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Xinyu Guan, Shaohua Zhang + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Lihui Liu - Dion2: A Simple Method to Shrink Matrix in Muon - https://arxiv.org/abs/2512.16928 - arXiv:2512.16928v1 Announce Type: new -Abstract: The Muon optimizer enjoys strong empirical performance and theoretical grounding. However, the super-linear cost of its orthonormalization step introduces increasing overhead with scale. To alleviate this cost, several works have attempted to reduce the size of the matrix entering the orthonormalization step. We introduce Dion2, a much simpler method for shrinking the matrix involved in Muon's computation compared to prior approaches. At a high level, Dion2 selects a fraction of rows or columns at each iteration and orthonormalizes only those. This sampling procedure makes the update sparse, reducing both computation and communication costs which in turn improves the scalability of Muon. - oai:arXiv.org:2512.16928v1 - cs.LG + Byzantine Fault-Tolerant Multi-Agent System for Healthcare: A Gossip Protocol Approach to Secure Medical Message Propagation + https://arxiv.org/abs/2512.17913 + arXiv:2512.17913v1 Announce Type: new +Abstract: Recent advances in generative AI have enabled sophisticated multi-agent architectures for healthcare, where large language models power collaborative clinical decision-making. However, these distributed systems face critical challenges in ensuring message integrity and fault tolerance when operating in adversarial or untrusted environments.This paper presents a novel Byzantine fault-tolerant multi-agent system specifically designed for healthcare applications, integrating gossip-based message propagation with cryptographic validation mechanisms. 
Our system employs specialized AI agents for diagnosis, treatment planning, emergency response, and data analysis, coordinated through a Byzantine consensus protocol that tolerates up to f faulty nodes among n = 3f + 1 total nodes. We implement a gossip protocol for decentralized message dissemination, achieving consensus with 2f + 1 votes while maintaining system operation even under Byzantine failures. Experimental results demonstrate that our approach successfully validates medical messages with cryptographic signatures, prevents replay attacks through timestamp validation, and maintains consensus accuracy of 100% with up to 33% Byzantine nodes. The system provides real-time visualization of consensus rounds, vote tallies, and network topology, enabling transparent monitoring of fault-tolerant operations. This work contributes a practical framework for building secure, resilient healthcare multi-agent systems capable of collaborative medical decision-making in untrusted environments. + oai:arXiv.org:2512.17913v1 cs.DC - Mon, 22 Dec 2025 00:00:00 -0500 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Kwangjun Ahn, Noah Amsel, John Langford + http://creativecommons.org/licenses/by-sa/4.0/ + Nihir Chadderwala - BIONIX: A Wireless, Low-Cost Prosthetic Arm with Dual-Signal EEG and EMG Control - https://arxiv.org/abs/2512.16929 - arXiv:2512.16929v1 Announce Type: new -Abstract: Affordable upper-limb prostheses often lack intuitive control systems, limiting functionality and accessibility for amputees in low-resource settings. This project presents a low-cost, dual-mode neuro-muscular control system integrating electroencephalography (EEG) and electromyography (EMG) to enable real-time, multi-degree-of-freedom control of a prosthetic arm. EEG signals are acquired using the NeuroSky MindWave Mobile 2 and transmitted via ThinkGear Bluetooth packets to an ESP32 microcontroller running a lightweight classification model. 
The model was trained on 1500 seconds of recorded EEG data using a 6-frame sliding window with low-pass filtering, excluding poor-signal samples and using a 70/20/10 training--validation--test split. The classifier detects strong blink events, which toggle the hand between open and closed states. EMG signals are acquired using a MyoWare 2.0 sensor and SparkFun wireless shield and transmitted to a second ESP32, which performs threshold-based detection. Three activation bands (rest: 0--T1; extension: T1--T2; contraction: greater than T2) enable intuitive elbow control, with movement triggered only after eight consecutive frames in a movement class to improve stability. The EEG-controlled ESP32 actuates four finger servos, while the EMG-controlled ESP32 drives two elbow servos. A functional prototype was constructed using low-cost materials (total cost approximately 240 dollars), with most expense attributed to the commercial EEG headset. Future work includes transitioning to a 3D-printed chassis, integrating auto-regressive models to reduce EMG latency, and upgrading servo torque for improved load capacity and grip strength. This system demonstrates a feasible pathway to low-cost, biologically intuitive prosthetic control suitable for underserved and global health applications. - oai:arXiv.org:2512.16929v1 - cs.LG - eess.SP - Mon, 22 Dec 2025 00:00:00 -0500 + Q-KVComm: Efficient Multi-Agent Communication Via Adaptive KV Cache Compression + https://arxiv.org/abs/2512.17914 + arXiv:2512.17914v1 Announce Type: new +Abstract: Multi-agent Large Language Model (LLM) systems face a critical bottleneck: redundant transmission of contextual information between agents consumes excessive bandwidth and computational resources. Traditional approaches discard internal semantic representations and transmit raw text, forcing receiving agents to recompute similar representations from scratch. 
We introduce Q-KVComm, a new protocol that enables direct transmission of compressed key-value (KV) cache representations between LLM agents. Q-KVComm combines three key innovations: (1) adaptive layer-wise quantization that allocates variable bit-widths based on sensitivity profiling, (2) hybrid information extraction that preserves critical facts across content domains, and (3) heterogeneous model calibration establishing cross-architecture communication. Extensive experiments across three diverse question-answering datasets demonstrate that Q-KVComm achieves 5-6x compression ratios while maintaining semantic fidelity, with coherence quality scores above 0.77 across all scenarios. The protocol exhibits robust performance across model sizes (1.1B-1.5B parameters) and adapts to real-world applications including conversational QA and multi-hop reasoning. Our work establishes a new paradigm for LLM agent communication, shifting from text-based to representation-based information exchange. + oai:arXiv.org:2512.17914v1 + cs.CL + cs.MA + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Pranesh Sathish Kumar + Boris Kriuk, Logic Ng - Comparison of deep learning models: CNN and VGG-16 in identifying pornographic content - https://arxiv.org/abs/2512.16947 - arXiv:2512.16947v1 Announce Type: new -Abstract: In 2020, a total of 59,741 websites were blocked by the Indonesian government due to containing negative content, including pornography, with 14,266 websites falling into this category. However, these blocked websites could still be accessed by the public using virtual private networks (VPNs). This prompted the research idea to quickly identify pornographic content. This study aims to develop a system capable of identifying websites suspected of containing pornographic image content, using a deep learning approach with convolutional neural network (CNN) and visual geometry group 16 (VGG-16) model. 
The two models were then explored comprehensively and holistically to determine which model was most effective in detecting pornographic content quickly. Based on the findings of the comparison between testing the CNN and VGG-16 models, the best result was obtained in the eighth experiment using the CNN model with 50 epochs and a learning rate of 0.001, reaching an accuracy of 0.9487 (94.87%). This indicates that the CNN model is more effective in detecting pornographic content quickly and accurately compared to the VGG-16 model. - oai:arXiv.org:2512.16947v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Supplementary Resources and Analysis for Automatic Speech Recognition Systems Trained on the Loquacious Dataset + https://arxiv.org/abs/2512.17915 + arXiv:2512.17915v1 Announce Type: new +Abstract: The recently published Loquacious dataset aims to be a replacement for established English automatic speech recognition (ASR) datasets such as LibriSpeech or TED-Lium. The main goal of the Loquacious dataset is to provide properly defined training and test partitions across many acoustic and language domains, with an open license suitable for both academia and industry. To further promote the benchmarking and usability of this new dataset, we present additional resources in the form of n-gram language models (LMs), a grapheme-to-phoneme (G2P) model and pronunciation lexica, with open and public access. Utilizing those additional resources, we show experimental results across a wide range of ASR architectures with different label units and topologies. Our initial experimental results indicate that the Loquacious dataset offers a valuable case study for a variety of common challenges in ASR. 
+ oai:arXiv.org:2512.17915v1 + cs.CL + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - 10.11591/ijai.v14.i3.pp1884-1899 - IAES International Journal of Artificial Intelligence (IJ-AI), Volume 14, Number 3, 2025 - Reza Chandra, Adang Suhendra, Lintang Yuniar Banowosari, Prihandoko + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Nick Rossenbach, Robin Schmitt, Tina Raissi, Simon Berger, Larissa Kleppel, Ralf Schl\"uter - AVM: Towards Structure-Preserving Neural Response Modeling in the Visual Cortex Across Stimuli and Individuals - https://arxiv.org/abs/2512.16948 - arXiv:2512.16948v1 Announce Type: new -Abstract: While deep learning models have shown strong performance in simulating neural responses, they often fail to clearly separate stable visual encoding from condition-specific adaptation, which limits their ability to generalize across stimuli and individuals. We introduce the Adaptive Visual Model (AVM), a structure-preserving framework that enables condition-aware adaptation through modular subnetworks, without modifying the core representation. AVM keeps a Vision Transformer-based encoder frozen to capture consistent visual features, while independently trained modulation paths account for neural response variations driven by stimulus content and subject identity. We evaluate AVM in three experimental settings, including stimulus-level variation, cross-subject generalization, and cross-dataset adaptation, all of which involve structured changes in inputs and individuals. Across two large-scale mouse V1 datasets, AVM outperforms the state-of-the-art V1T model by approximately 2% in predictive correlation, demonstrating robust generalization, interpretable condition-wise modulation, and high architectural efficiency. Specifically, AVM achieves a 9.1% improvement in explained variance (FEVE) under the cross-dataset adaptation setting. 
These results suggest that AVM provides a unified framework for adaptive neural modeling across biological and experimental conditions, offering a scalable solution under structural constraints. Its design may inform future approaches to cortical modeling in both neuroscience and biologically inspired AI systems. - oai:arXiv.org:2512.16948v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Learning to Prioritize IT Tickets: A Comparative Evaluation of Embedding-based Approaches and Fine-Tuned Transformer Models + https://arxiv.org/abs/2512.17916 + arXiv:2512.17916v1 Announce Type: new +Abstract: Prioritizing service tickets in IT Service Management (ITSM) is critical for operational efficiency but remains challenging due to noisy textual inputs, subjective writing styles, and pronounced class imbalance. We evaluate two families of approaches for ticket prioritization: embedding-based pipelines that combine dimensionality reduction, clustering, and classical classifiers, and a fine-tuned multilingual transformer that processes both textual and numerical features. Embedding-based methods exhibit limited generalization across a wide range of thirty configurations, with clustering failing to uncover meaningful structures and supervised models highly sensitive to embedding quality. In contrast, the proposed transformer model achieves substantially higher performance, with an average F1-score of 78.5% and weighted Cohen's kappa values of nearly 0.80, indicating strong alignment with true labels. These results highlight the limitations of generic embeddings for ITSM data and demonstrate the effectiveness of domain-adapted transformer architectures for operational ticket prioritization. 
+ oai:arXiv.org:2512.17916v1 + cs.CL + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qi Xu, Shuai Gong, Xuming Ran, Haihua Luo, Yangfan Hu + Minh Tri L\^E, Ali Ait-Bachir - Enhancing Tree Species Classification: Insights from YOLOv8 and Explainable AI Applied to TLS Point Cloud Projections - https://arxiv.org/abs/2512.16950 - arXiv:2512.16950v1 Announce Type: new -Abstract: Classifying tree species has been a core research area in forest remote sensing for decades. New sensors and classification approaches like TLS and deep learning achieve state-of-the art accuracy but their decision processes remain unclear. Methods such as Finer-CAM (Class Activation Mapping) can highlight features in TLS projections that contribute to the classification of a target species, yet are uncommon in similar looking contrastive tree species. We propose a novel method linking Finer-CAM explanations to segments of TLS projections representing structural tree features to systemically evaluate which features drive species discrimination. Using TLS data from 2,445 trees across seven European tree species, we trained and validated five YOLOv8 models with cross-validation, reaching a mean accuracy of 96% (SD = 0.24%). Analysis of 630 saliency maps shows the models primarily rely on crown features in TLS projections for species classification. While this result is pronounced in Silver Birch, European Beech, English oak, and Norway spruce, stem features contribute more frequently to the differentiation of European ash, Scots pine, and Douglas fir. Particularly representations of finer branches contribute to the decisions of the models. The models consider those tree species similar to each other which a human expert would also regard as similar. 
Furthermore, our results highlight the need for an improved understanding of the decision processes of tree species classification models to help reveal data set and model limitations, biases, and to build confidence in model predictions. - oai:arXiv.org:2512.16950v1 - cs.CV + KVReviver: Reversible KV Cache Compression with Sketch-Based Token Reconstruction + https://arxiv.org/abs/2512.17917 + arXiv:2512.17917v1 Announce Type: new +Abstract: As the context length of current large language models (LLMs) rapidly increases, the memory demand for the Key-Value (KV) cache is becoming a bottleneck for LLM deployment and batch processing. Traditional KV cache compression methods typically involve permanently evicting or irreversibly merging "less important" tokens with low attention scores. This approach results in the unrecoverable loss of token information, which we call Contextual Amnesia, significantly degrading the model's information retrieval capability. To address this issue, we propose KVReviver, a reversible KV cache compression method based on the sketch algorithm. This method allows reconstructing compressed tokens from an additional data structure, thus enabling full-scale computation within limited memory. Experiments showed that in 2k-length contexts, it requires only 10% of the KV cache budget while maintaining identical end-to-end inference accuracy. For 32k-length contexts, it achieves equivalent or comparable accuracy (~2% accuracy loss) using merely 25% of the KV cache budget. 
+ oai:arXiv.org:2512.17917v1 + cs.CL cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Adrian Straker, Paul Magdon, Marco Zullich, Maximilian Freudenberg, Christoph Kleinn, Johannes Breidenbach, Stefano Puliti, Nils N\"olke + Aomufei Yuan, Zhiming Wang, Ruijie Miao, Dayu Wang, Yuxuan Tian, Zihan Wang, Yebo Peng, Yuhan Wu, Bairen Yi, Xin Liu, Tong Yang - Navigating Taxonomic Expansions of Entity Sets Driven by Knowledge Bases - https://arxiv.org/abs/2512.16953 - arXiv:2512.16953v1 Announce Type: new -Abstract: Recognizing similarities among entities is central to both human cognition and computational intelligence. Within this broader landscape, Entity Set Expansion is one prominent task aimed at taking an initial set of (tuples of) entities and identifying additional ones that share relevant semantic properties with the former -- potentially repeating the process to form increasingly broader sets. However, this ``linear'' approach does not unveil the richer ``taxonomic'' structures present in knowledge resources. A recent logic-based framework introduces the notion of an expansion graph: a rooted directed acyclic graph where each node represents a semantic generalization labeled by a logical formula, and edges encode strict semantic inclusion. This structure supports taxonomic expansions of entity sets driven by knowledge bases. Yet, the potentially large size of such graphs may make full materialization impractical in real-world scenarios. To overcome this, we formalize reasoning tasks that check whether two tuples belong to comparable, incomparable, or the same nodes in the graph. Our results show that, under realistic assumptions -- such as bounding the input or limiting entity descriptions -- these tasks can be implemented efficiently. This enables local, incremental navigation of expansion graphs, supporting practical applications without requiring full graph construction. 
- oai:arXiv.org:2512.16953v1 - cs.AI - cs.LO - Mon, 22 Dec 2025 00:00:00 -0500 + QAISim: A Toolkit for Modeling and Simulation of AI in Quantum Cloud Computing Environments + https://arxiv.org/abs/2512.17918 + arXiv:2512.17918v1 Announce Type: new +Abstract: Quantum computing offers new ways to explore the theory of computation via the laws of quantum mechanics. Due to the rising demand for quantum computing resources, there is growing interest in developing cloud-based quantum resource sharing platforms that enable researchers to test and execute their algorithms on real quantum hardware. These cloud-based systems face a fundamental challenge in efficiently allocating quantum hardware resources to fulfill the growing computational demand of modern Internet of Things (IoT) applications. So far, attempts have been made in order to make efficient resource allocation, ranging from heuristic-based solutions to machine learning. In this work, we employ quantum reinforcement learning based on parameterized quantum circuits to address the resource allocation problem to support large IoT networks. We propose a python-based toolkit called QAISim for the simulation and modeling of Quantum Artificial Intelligence (QAI) models for designing resource management policies in quantum cloud environments. We have simulated policy gradient and Deep Q-Learning algorithms for reinforcement learning. QAISim exhibits a substantial reduction in model complexity compared to its classical counterparts with fewer trainable variables. 
+ oai:arXiv.org:2512.17918v1 + cs.DC + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Pietro Cofone, Giovanni Amendola, Marco Manna, Aldo Ricioppo + http://creativecommons.org/licenses/by/4.0/ + Irwindeep Singh, Sukhpal Singh Gill, Jinzhao Sun, Jan Mol - Lights, Camera, Consistency: A Multistage Pipeline for Character-Stable AI Video Stories - https://arxiv.org/abs/2512.16954 - arXiv:2512.16954v1 Announce Type: new -Abstract: Generating long, cohesive video stories with consistent characters is a significant challenge for current text-to-video AI. We introduce a method that approaches video generation in a filmmaker-like manner. Instead of creating a video in one step, our proposed pipeline first uses a large language model to generate a detailed production script. This script guides a text-to-image model in creating consistent visuals for each character, which then serve as anchors for a video generation model to synthesize each scene individually. Our baseline comparisons validate the necessity of this multi-stage decomposition; specifically, we observe that removing the visual anchoring mechanism results in a catastrophic drop in character consistency scores (from 7.99 to 0.55), confirming that visual priors are essential for identity preservation. Furthermore, we analyze cultural disparities in current models, revealing distinct biases in subject consistency and dynamic degree between Indian vs Western-themed generations. - oai:arXiv.org:2512.16954v1 - cs.CV + Separating Constraint Compliance from Semantic Accuracy: A Novel Benchmark for Evaluating Instruction-Following Under Compression + https://arxiv.org/abs/2512.17920 + arXiv:2512.17920v1 Announce Type: new +Abstract: Large language models (LLMs) exhibit degraded performance under prompt compression, but the mechanisms remain poorly understood. 
We introduce the Compression-Decay Comprehension Test (CDCT), a benchmark that independently measures constraint compliance (CC) and semantic accuracy (SA) across compression levels. We evaluate 9 frontier LLMs across 8 concepts using 5 compression levels from extreme (c=0.0, ~2 words) to none (c=1.0, ~135 words). A three-judge LLM jury achieves almost perfect inter-rater agreement on CC (Fleiss' kappa=0.90). + We observe a universal U-curve pattern in constraint compliance (97.2% prevalence), with violations peaking at medium compression (c=0.5, ~27 words). Counterintuitively, models perform better at extreme compression than medium lengths. The dimensions are statistically orthogonal (r=0.193, p=0.084), with constraint effects 2.9x larger than semantic effects. + Experimental validation via RLHF ablation confirms our constraint salience hypothesis: removing "helpfulness" signals improves CC by 598% on average (71/72 trials, p<0.001), with 79% achieving perfect compliance. This demonstrates that RLHF-trained helpfulness behaviors are the dominant cause of constraint violations at medium compression. Reasoning models outperform efficient models by 27.5% (Cohen's d=0.96). + Our findings reveal a fundamental tension between RLHF alignment and instruction-following, providing actionable guidelines for improving deployed systems.
+ oai:arXiv.org:2512.17920v1 + cs.CL cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Chayan Jain, Rishant Sharma, Archit Garg, Ishan Bhanuka, Pratik Narang, Dhruv Kumar + http://creativecommons.org/licenses/by/4.0/ + Rahul Baxi - SpIDER: Spatially Informed Dense Embedding Retrieval for Software Issue Localization - https://arxiv.org/abs/2512.16956 - arXiv:2512.16956v1 Announce Type: new -Abstract: Retrieving code units (e.g., files, classes, functions) that are semantically relevant to a given user query, bug report, or feature request from large codebases is a fundamental challenge for LLM-based coding agents. Agentic approaches typically employ sparse retrieval methods like BM25 or dense embedding strategies to identify relevant units. While embedding-based approaches can outperform BM25 by large margins, they often lack exploration of the codebase and underutilize its underlying graph structure. To address this, we propose SpIDER (Spatially Informed Dense Embedding Retrieval), an enhanced dense retrieval approach that incorporates LLM-based reasoning over auxiliary context obtained through graph-based exploration of the codebase. Empirical results show that SpIDER consistently improves dense retrieval performance across several programming languages. - oai:arXiv.org:2512.16956v1 - cs.SE + Comparative Evaluation of Explainable Machine Learning Versus Linear Regression for Predicting County-Level Lung Cancer Mortality Rate in the United States + https://arxiv.org/abs/2512.17934 + arXiv:2512.17934v1 Announce Type: new +Abstract: Lung cancer (LC) is a leading cause of cancer-related mortality in the United States. Accurate prediction of LC mortality rates is crucial for guiding targeted interventions and addressing health disparities. 
Although traditional regression-based models have been commonly used, explainable machine learning models may offer enhanced predictive accuracy and deeper insights into the factors influencing LC mortality. This study applied three models: random forest (RF), gradient boosting regression (GBR), and linear regression (LR) to predict county-level LC mortality rates across the United States. Model performance was evaluated using R-squared and root mean squared error (RMSE). Shapley Additive Explanations (SHAP) values were used to determine variable importance and their directional impact. Geographic disparities in LC mortality were analyzed through Getis-Ord (Gi*) hotspot analysis. The RF model outperformed both GBR and LR, achieving an R2 value of 41.9% and an RMSE of 12.8. SHAP analysis identified smoking rate as the most important predictor, followed by median home value and the percentage of the Hispanic ethnic population. Spatial analysis revealed significant clusters of elevated LC mortality in the mid-eastern counties of the United States. The RF model demonstrated superior predictive performance for LC mortality rates, emphasizing the critical roles of smoking prevalence, housing values, and the percentage of Hispanic ethnic population. These findings offer valuable actionable insights for designing targeted interventions, promoting screening, and addressing health disparities in regions most affected by LC in the United States. + oai:arXiv.org:2512.17934v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + cs.AI + stat.AP + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Shravan Chaudhari, Rahul Thomas Jacob, Mononito Goswami, Jiajun Cao, Shihab Rashid, Christian Bock + 10.1200/CCI-24-00310 + JCO Clin Cancer Inform JCO Clinical Cancer Informatics, 2025 Nov:9:e2400310 + Soheil Hashtarkhani, Brianna M. White, Benyamin Hoseini, David L. 
Schwartz, Arash Shaban-Nejad - CAPIO: Safe Kernel-Bypass of Commodity Devices using Capabilities - https://arxiv.org/abs/2512.16957 - arXiv:2512.16957v1 Announce Type: new -Abstract: Securing low-latency I/O in commodity systems forces a fundamental trade-off: rely on the kernel's high overhead mediated interface, or bypass it entirely, exposing sensitive hardware resources to userspace and creating new vulnerabilities. This dilemma stems from a hardware granularity mismatch: standard MMUs operate at page boundaries, making it impossible to selectively expose safe device registers without also exposing the sensitive control registers colocated on the same page. Existing solutions to driver isolation enforce an isolation model that cannot protect sub-page device resources. - This paper presents CAPIO, the first architecture to leverage hardware capabilities to enforce fine-grained access control on memory-mapped I/O. Unlike prior page-based protections, CAPIO utilizes unforgeable capabilities to create precise, sub-page "slices" of device memory. This mechanism enables the kernel to delegate latency-critical hardware access to userspace applications while strictly preventing interaction with co-located privileged registers. - We implement CAPIO based on CHERI on the ARM Morello platform and demonstrate a proof-of-concept safe-access driver for a commodity network card which was not originally designed for kernel bypass. We demonstrate that CAPIO achieves the latency improvements of kernel bypass while enforcing byte-level access control of privileged resources. - oai:arXiv.org:2512.16957v1 - cs.CR - Mon, 22 Dec 2025 00:00:00 -0500 + chatter: a Python library for applying information theory and AI/ML models to animal communication + https://arxiv.org/abs/2512.17935 + arXiv:2512.17935v1 Announce Type: new +Abstract: The study of animal communication often involves categorizing units into types (e.g. syllables in songbirds, or notes in humpback whales). 
While this approach is useful in many cases, it necessarily flattens the complexity and nuance present in real communication systems. chatter is a new Python library for analyzing animal communication in continuous latent space using information theory and modern machine learning techniques. It is taxonomically agnostic, and has been tested with the vocalizations of birds, bats, whales, and primates. By leveraging a variety of different architectures, including variational autoencoders and vision transformers, chatter represents vocal sequences as trajectories in high-dimensional latent space, bypassing the need for manual or automatic categorization of units. The library provides an end-to-end workflow -- from preprocessing and segmentation to model training and feature extraction -- that enables researchers to quantify the complexity, predictability, similarity, and novelty of vocal sequences. + oai:arXiv.org:2512.17935v1 + cs.SD + cs.LG + eess.AS + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Friedrich Doku, Jonathan Laughton, Nick Wanninger, Peter Dinda + http://creativecommons.org/licenses/by/4.0/ + Mason Youngblood - Resilient Microservices: A Systematic Review of Recovery Patterns, Strategies, and Evaluation Frameworks - https://arxiv.org/abs/2512.16959 - arXiv:2512.16959v1 Announce Type: new -Abstract: Microservice based systems underpin modern distributed computing environments but remain vulnerable to partial failures, cascading timeouts, and inconsistent recovery behavior. Although numerous resilience and recovery patterns have been proposed, existing surveys are largely descriptive and lack systematic evidence synthesis or quantitative rigor. This paper presents a PRISMA aligned systematic literature review of empirical studies on microservice recovery strategies published between 2014 and 2025 across IEEE Xplore, ACM Digital Library, and Scopus. 
From an initial corpus of 412 records, 26 high quality studies were selected using transparent inclusion, exclusion, and quality assessment criteria. The review identifies nine recurring resilience themes encompassing circuit breakers, retries with jitter and budgets, sagas with compensation, idempotency, bulkheads, adaptive backpressure, observability, and chaos validation. As a data oriented contribution, the paper introduces a Recovery Pattern Taxonomy, a Resilience Evaluation Score checklist for standardized benchmarking, and a constraint aware decision matrix mapping latency, consistency, and cost trade offs to appropriate recovery mechanisms. The results consolidate fragmented resilience research into a structured and analyzable evidence base that supports reproducible evaluation and informed design of fault tolerant and performance aware microservice systems. - oai:arXiv.org:2512.16959v1 - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 + A 96pJ/Frame/Pixel and 61pJ/Event Anti-UAV System with Hybrid Object Tracking Modes + https://arxiv.org/abs/2512.17939 + arXiv:2512.17939v1 Announce Type: new +Abstract: We present an energy-efficient anti-UAV system that integrates frame-based and event-driven object tracking to enable reliable detection of small and fast-moving drones. The system reconstructs binary event frames using run-length encoding, generates region proposals, and adaptively switches between frame mode and event mode based on object size and velocity. A Fast Object Tracking Unit improves robustness for high-speed targets through adaptive thresholding and trajectory-based classification. The neural processing unit supports both grayscale-patch and trajectory inference with a custom instruction set and a zero-skipping MAC architecture, reducing redundant neural computations by more than 97 percent. 
Implemented in 40 nm CMOS technology, the 2 mm^2 chip achieves 96 pJ per frame per pixel and 61 pJ per event at 0.8 V, and reaches 98.2 percent recognition accuracy on public UAV datasets across 50 to 400 m ranges and 5 to 80 pixels per second speeds. The results demonstrate state-of-the-art end-to-end energy efficiency for anti-UAV systems. + oai:arXiv.org:2512.17939v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Muzeeb Mohammad + Yuncheng Lu, Yucen Shi, Aobo Li, Zehao Li, Junying Li, Bo Wang, Tony Tae-Hyoung Kim - QSMOTE-PGM/kPGM: QSMOTE Based PGM and kPGM for Imbalanced Dataset Classification - https://arxiv.org/abs/2512.16960 - arXiv:2512.16960v1 Announce Type: new -Abstract: Quantum-inspired machine learning (QiML) leverages mathematical frameworks from quantum theory to enhance classical algorithms, with particular emphasis on inner product structures in high-dimensional feature spaces. Among the prominent approaches, the Kernel Trick, widely used in support vector machines, provides efficient similarity computation, while the Pretty Good Measurement (PGM), originating from quantum state discrimination, enables classification grounded in Hilbert space geometry. Building on recent developments in kernelized PGM (KPGM) and direct PGM-based classifiers, this work presents a unified theoretical and empirical comparison of these paradigms. We analyze their performance across synthetic oversampling scenarios using Quantum SMOTE (QSMOTE) variants. Experimental results show that both PGM and KPGM classifiers consistently outperform a classical random forest baseline, particularly when multiple quantum copies are employed. Notably, PGM with stereo encoding and n_copies=2 achieves the highest overall accuracy (0.8512) and F1-score (0.8234), while KPGM demonstrates competitive and more stable behavior across QSMOTE variants, with top scores of 0.8511 (stereo) and 0.8483 (amplitude). 
These findings highlight that quantum-inspired classifiers not only provide tangible gains in recall and balanced performance but also offer complementary strengths: PGM benefits from encoding-specific enhancements, whereas KPGM ensures robustness across sampling strategies. Our results advance the understanding of kernel-based and measurement-based QiML methods, offering practical guidance on their applicability under varying data characteristics and computational constraints. - oai:arXiv.org:2512.16960v1 - cs.LG - quant-ph - Mon, 22 Dec 2025 00:00:00 -0500 + Untethered thin dielectric elastomer actuated soft robot + https://arxiv.org/abs/2512.17940 + arXiv:2512.17940v1 Announce Type: new +Abstract: Thin dielectric elastomer actuator (DEA) features a unique in-plane configuration, enabling low-profile designs capable of accessing millimetre-scale narrow spaces. However, most existing DEA-powered soft robots require high voltages and wired power connections, limiting their ability to operate in confined environments. This study presents an untethered thin soft robot (UTS-Robot) powered by thin dielectric elastomer actuators (TS-DEA). The robot measures 38 mm in length, 6 mm in height, and weighs just 2.34 grams, integrating flexible onboard electronics to achieve fully untethered actuation. The TS-DEA, operating at resonant frequencies of 86 Hz under a low driving voltage of 220 V, adopts a dual-actuation sandwiched structure, comprising four dielectric elastomer layers bonded to a compressible tensioning mechanism at its core. This design enables high power density actuation and locomotion via three directional friction pads. The low-voltage actuation is achieved by fabricating each elastomer layer via spin coating to an initial thickness of 50 um, followed by biaxial stretching to 8 um. A comprehensive design and modelling framework has been developed to optimise TS-DEA performance. 
Experimental evaluations demonstrate that the bare TS-DEA achieves a locomotion speed of 12.36 mm/s at resonance, the untethered configuration achieves a locomotion speed of 0.5 mm/s, making it highly suitable for navigating confined and complex environments. + oai:arXiv.org:2512.17940v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Bikash K. Behera, Giuseppe Sergioli, Robert Giuntini + Xi Wang, Jing Liu, Siqian Li, Hengtai Dai, Jung-Che Chang, Adam Rushworth, Xin Dong - New Theoretical Insights and Algorithmic Solutions for Reconstructing Score Sequences from Tournament Score Sets - https://arxiv.org/abs/2512.16961 - arXiv:2512.16961v1 Announce Type: new -Abstract: The score set of a tournament is defined as the set of its distinct out-degrees. In 1978, Reid proposed the conjecture that for any set of nonnegative integers $D$, there exists a tournament $T$ with a degree set $D$. In 1989, Yao presented an arithmetical proof of the conjecture, but a general polynomial-time construction algorithm is not known. This paper proposes a necessary and sufficient condition and a separate necessary condition, based on the existing Landau's theorem for the problem of reconstructing score sequences from score sets of tournament graphs. The necessary condition introduces a structured set that enables the use of group-theoretic techniques, offering not only a framework for solving the reconstruction problem but also a new perspective for approaching similar problems. In particular, the same theoretical approach can be extended to reconstruct valid score sets given constraints on the frequency of distinct scores in tournaments. 
Based on these conditions, we have developed three algorithms that demonstrate the practical utility of our framework: a polynomial-time algorithm and a scalable algorithm for reconstructing score sequences, and a polynomial-time network-building method that finds all possible score sequences for a given score set. Moreover, the polynomial-time algorithm for reconstructing the score sequence of a tournament for a given score set can be used to verify Reid's conjecture. These algorithms have practical applications in sports analysis, ranking prediction, and machine learning tasks such as learning-to-rank models and data imputation, where the reconstruction of partial rankings or sequences is essential for recommendation systems and anomaly detection. - oai:arXiv.org:2512.16961v1 - cs.DS - math.CO - Mon, 22 Dec 2025 00:00:00 -0500 + Accelerated Digital Twin Learning for Edge AI: A Comparison of FPGA and Mobile GPU + https://arxiv.org/abs/2512.17941 + arXiv:2512.17941v1 Announce Type: new +Abstract: Digital twins (DTs) can enable precision healthcare by continually learning a mathematical representation of patient-specific dynamics. However, mission critical healthcare applications require fast, resource-efficient DT learning, which is often infeasible with existing model recovery (MR) techniques due to their reliance on iterative solvers and high compute/memory demands. In this paper, we present a general DT learning framework that is amenable to acceleration on reconfigurable hardware such as FPGAs, enabling substantial speedup and energy efficiency. We compare our FPGA-based implementation with a multi-processing implementation in mobile GPU, which is a popular choice for AI in edge devices. Further, we compare both edge AI implementations with cloud GPU baseline. 
Specifically, our FPGA implementation achieves an 8.8x improvement in performance-per-watt for the MR task, a 28.5x reduction in DRAM footprint, and a 1.67x runtime speedup compared to cloud GPU baselines. On the other hand, the mobile GPU achieves 2x better performance per watt but incurs a 2x increase in runtime and 10x more DRAM footprint than the FPGA. We show the use of this technique in DT-guided synthetic data generation for Type 1 Diabetes and proactive coronary artery disease detection. + oai:arXiv.org:2512.17941v1 cs.DC cs.AI Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Bin Xu, Ayan Banerjee, Midhat Urooj, Sandeep K. S. Gupta - MemoryGraft: Persistent Compromise of LLM Agents via Poisoned Experience Retrieval - https://arxiv.org/abs/2512.16962 - arXiv:2512.16962v1 Announce Type: new -Abstract: Large Language Model (LLM) agents increasingly rely on long-term memory and Retrieval-Augmented Generation (RAG) to persist experiences and refine future performance. While this experience learning capability enhances agentic autonomy, it introduces a critical, unexplored attack surface, i.e., the trust boundary between an agent's reasoning core and its own past. In this paper, we introduce MemoryGraft. It is a novel indirect injection attack that compromises agent behavior not through immediate jailbreaks, but by implanting malicious successful experiences into the agent's long-term memory. Unlike traditional prompt injections that are transient, or standard RAG poisoning that targets factual knowledge, MemoryGraft exploits the agent's semantic imitation heuristic, which is the tendency to replicate patterns from retrieved successful tasks.
We demonstrate that an attacker who can supply benign ingestion-level artifacts that the agent reads during execution can induce it to construct a poisoned RAG store where a small set of malicious procedure templates is persisted alongside benign experiences. When the agent later encounters semantically similar tasks, union retrieval over lexical and embedding similarity reliably surfaces these grafted memories, and the agent adopts the embedded unsafe patterns, leading to persistent behavioral drift across sessions. We validate MemoryGraft on MetaGPT's DataInterpreter agent with GPT-4o and find that a small number of poisoned records can account for a large fraction of retrieved experiences on benign workloads, turning experience-based self-improvement into a vector for stealthy and durable compromise. To facilitate reproducibility and future research, our code and evaluation data are available at https://github.com/Jacobhhy/Agent-Memory-Poisoning. - oai:arXiv.org:2512.16962v1 - cs.CR + Fast Online Digital Twinning on FPGA for Mission Critical Applications + https://arxiv.org/abs/2512.17942 + arXiv:2512.17942v1 Announce Type: new +Abstract: Digital twinning enables real-time simulation and predictive modeling by maintaining a continuously updated virtual representation of a physical system. In mission-critical applications, such as mid-air collision avoidance, these models must operate online with extremely low latency to ensure safety. However, executing complex Model Recovery (MR) pipelines on edge devices is limited by computational and memory bandwidth constraints. This paper introduces a fast, FPGA-accelerated digital twinning framework that offloads key neural components, including gated recurrent units (GRU) and dense layers, to reconfigurable hardware for efficient parallel execution. 
Our system achieves real-time responsiveness, operating five times faster than typical human reaction time, and demonstrates the practical viability of deploying digital twins on edge platforms for time-sensitive, safety-critical environments. + oai:arXiv.org:2512.17942v1 + cs.DC + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Bin Xu, Ayan Banerjee, Sandeep K. S. Gupta + + + NystagmusNet: Explainable Deep Learning for Photosensitivity Risk Prediction + https://arxiv.org/abs/2512.17943 + arXiv:2512.17943v1 Announce Type: new +Abstract: Nystagmus patients with photosensitivity face significant daily challenges due to involuntary eye movements exacerbated by environmental brightness conditions. Current assistive solutions are limited to symptomatic treatments without predictive personalization. This paper proposes NystagmusNet, an AI-driven system that predicts high-risk visual environments and recommends real-time visual adaptations. Using a dual-branch convolutional neural network trained on synthetic and augmented datasets, the system estimates a photosensitivity risk score based on environmental brightness and eye movement variance. The model achieves 75% validation accuracy on synthetic data. Explainability techniques including SHAP and GradCAM are integrated to highlight environmental risk zones, improving clinical trust and model interpretability. The system includes a rule-based recommendation engine for adaptive filter suggestions. Future directions include deployment via smart glasses and reinforcement learning for personalized recommendations. 
+ oai:arXiv.org:2512.17943v1 + cs.CV cs.AI - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Saksham Sahai Srivastava, Haoyu He + Karthik Prabhakar - Compression is Routing: Reconstruction Error as an Intrinsic Signal for Modular Language Models - https://arxiv.org/abs/2512.16963 - arXiv:2512.16963v1 Announce Type: new -Abstract: Current Large Language Models (LLMs) face three major challenges: context length limitations, high inference costs, and catastrophic forgetting during continual learning. While Mixture-of-Experts (MoE) architectures mitigate some of these conflicts, their routing mechanisms typically rely on explicitly trained auxiliary classifiers. This not only increases system complexity but also often lacks interpretability when handling mixed-domain inputs. - Building upon the premise that ``Compression is Intelligence,'' this paper proposes a novel architectural philosophy: \textbf{``Compression is Routing.''} We trained an 87M-parameter end-to-end Transformer Autoencoder, achieving a \textbf{64x sequence length compression} (compressing 512 tokens into 8 latent vectors). Experimental results demonstrate that this compressor possesses extreme domain discriminative capability: it achieves a reconstruction accuracy of \textbf{99.47\%} on the in-domain (code) validation set; accuracy drops sharply to \textbf{47.76\%} on a semi-out-of-distribution domain (Wiki text); and further plummets to just \textbf{0.57\%} on a fully out-of-distribution domain (random sequences). - This extreme and systematic performance discrepancy establishes the validity of reconstruction error as an \textbf{Intrinsic Distribution Fingerprint}. Based on this, we propose that expert modules can be automatically scheduled using reconstruction residuals directly, without the need for explicit gating networks. This mechanism offers excellent scalability. 
Furthermore, this architecture provides a new perspective on ``VRAM compression'' for handling ultra-long contexts. This report aims to verify the physical validity of this foundational architecture, offering a new research perspective for the next generation of scalable modular neural networks. - oai:arXiv.org:2512.16963v1 + What's the Price of Monotonicity? A Multi-Dataset Benchmark of Monotone-Constrained Gradient Boosting for Credit PD + https://arxiv.org/abs/2512.17945 + arXiv:2512.17945v1 Announce Type: new +Abstract: Financial institutions face a trade-off between predictive accuracy and interpretability when deploying machine learning models for credit risk. Monotonicity constraints align model behavior with domain knowledge, but their performance cost - the price of monotonicity - is not well quantified. This paper benchmarks monotone-constrained versus unconstrained gradient boosting models for credit probability of default across five public datasets and three libraries. We define the Price of Monotonicity (PoM) as the relative change in standard performance metrics when moving from unconstrained to constrained models, estimated via paired comparisons with bootstrap uncertainty. In our experiments, PoM in AUC ranges from essentially zero to about 2.9 percent: constraints are almost costless on large datasets (typically less than 0.2 percent, often indistinguishable from zero) and most costly on smaller datasets with extensive constraint coverage (around 2-3 percent). Thus, appropriately specified monotonicity constraints can often deliver interpretability with small accuracy losses, particularly in large-scale credit portfolios. 
+ oai:arXiv.org:2512.17945v1 cs.LG q-fin.RM q-fin.ST Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Petr Koklev - AutoDFBench 1.0: A Benchmarking Framework for Digital Forensic Tool Testing and Generated Code Evaluation - https://arxiv.org/abs/2512.16965 - arXiv:2512.16965v1 Announce Type: new -Abstract: The National Institute of Standards and Technology (NIST) Computer Forensic Tool Testing (CFTT) programme has become the de facto standard for providing digital forensic tool testing and validation. However, to date, no comprehensive framework exists to automate benchmarking across the diverse forensic tasks included in the programme. This gap results in inconsistent validation, challenges in comparing tools, and limited validation reproducibility. This paper introduces AutoDFBench 1.0, a modular benchmarking framework that supports the evaluation of both conventional DF tools and scripts, as well as AI-generated code and agentic approaches. The framework integrates five areas defined by the CFTT programme: string search, deleted file recovery, file carving, Windows registry recovery, and SQLite data recovery. AutoDFBench 1.0 includes ground truth data comprising 63 test cases and 10,968 unique test scenarios, and executes evaluations through a RESTful API that produces structured JSON outputs with standardised metrics, including precision, recall, and F1 score for each test case, and the average of these F1 scores becomes the AutoDFBench Score. The benchmarking framework is validated against CFTT's datasets. The framework enables fair and reproducible comparison across tools and forensic scripts, establishing the first unified, automated, and extensible benchmarking framework for digital forensic tool testing and validation.
AutoDFBench 1.0 supports tool vendors, researchers, practitioners, and standardisation bodies by facilitating transparent, reproducible, and comparable assessments of DF technologies. - oai:arXiv.org:2512.16965v1 - cs.CR - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 + Let the Model Learn to Feel: Mode-Guided Tonality Injection for Symbolic Music Emotion Recognition + https://arxiv.org/abs/2512.17946 + arXiv:2512.17946v1 Announce Type: new +Abstract: Music emotion recognition is a key task in symbolic music understanding (SMER). Recent approaches have shown promising results by fine-tuning large-scale pre-trained models (e.g., MIDIBERT, a benchmark in symbolic music understanding) to map musical semantics to emotional labels. While these models effectively capture distributional musical semantics, they often overlook tonal structures, particularly musical modes, which play a critical role in emotional perception according to music psychology. In this paper, we investigate the representational capacity of MIDIBERT and identify its limitations in capturing mode-emotion associations. To address this issue, we propose a Mode-Guided Enhancement (MoGE) strategy that incorporates psychological insights on mode into the model. Specifically, we first conduct a mode augmentation analysis, which reveals that MIDIBERT fails to effectively encode emotion-mode correlations. We then identify the least emotion-relevant layer within MIDIBERT and introduce a Mode-guided Feature-wise linear modulation injection (MoFi) framework to inject explicit mode features, thereby enhancing the model's capability in emotional representation and inference. Extensive experiments on the EMOPIA and VGMIDI datasets demonstrate that our mode injection strategy significantly improves SMER performance, achieving accuracies of 75.2% and 59.1%, respectively. These results validate the effectiveness of mode-guided modeling in symbolic music emotion recognition. 
+ oai:arXiv.org:2512.17946v1 + cs.SD + cs.AI + cs.MM + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Haiying Xia, Zhongyi Huang, Yumei Tan, Shuxiang Song + + + Which Coauthor Should I Nominate in My 99 ICLR Submissions? A Mathematical Analysis of the ICLR 2026 Reciprocal Reviewer Nomination Policy + https://arxiv.org/abs/2512.17950 + arXiv:2512.17950v1 Announce Type: new +Abstract: The rapid growth of AI conference submissions has created an overwhelming reviewing burden. To alleviate this, recent venues such as ICLR 2026 introduced a reviewer nomination policy: each submission must nominate one of its authors as a reviewer, and any paper nominating an irresponsible reviewer is desk-rejected. We study this new policy from the perspective of author welfare. Assuming each author carries a probability of being irresponsible, we ask: how can authors (or automated systems) nominate reviewers to minimize the risk of desk rejections? We formalize and analyze three variants of the desk-rejection risk minimization problem. The basic problem, which minimizes expected desk rejections, is solved optimally by a simple greedy algorithm. We then introduce hard and soft nomination limit variants that constrain how many papers may nominate the same author, preventing widespread failures if one author is irresponsible. These formulations connect to classical optimization frameworks, including minimum-cost flow and linear programming, allowing us to design efficient, principled nomination strategies. Our results provide the first theoretical study for reviewer nomination policies, offering both conceptual insights and practical directions for authors to wisely choose which co-author should serve as the nominated reciprocal reviewer. 
+ oai:arXiv.org:2512.17950v1 + cs.GT + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Akila Wickramasekara, Tharusha Mihiranga, Aruna Withanage, Buddhima Weerasinghe, Frank Breitinger, John Sheppard, Mark Scanlon + Zhao Song, Song Yue, Jiahao Zhang - Physics-Informed Lightweight Machine Learning for Aviation Visibility Nowcasting Across Multiple Climatic Regimes - https://arxiv.org/abs/2512.16967 - arXiv:2512.16967v1 Announce Type: new -Abstract: Short-term prediction (nowcasting) of low-visibility and precipitation events is critical for aviation safety and operational efficiency. Current operational approaches rely on computationally intensive numerical weather prediction guidance and human-issued TAF products, which often exhibit conservative biases and limited temporal resolution. This study presents a lightweight gradient boosting framework (XGBoost) trained exclusively on surface observation data (METAR) and enhanced through physics-guided feature engineering based on thermodynamic principles. The framework is evaluated across 11 international airports representing distinct climatic regimes (including SCEL, KJFK, KORD, KDEN, SBGR, and VIDP) using historical data from 2000 to 2024. Results suggest that the model successfully captures underlying local physical processes without manual configuration. In a blind comparative evaluation against operational TAF forecasts, the automated model achieved substantially higher detection rates at tactical horizons (3 hours), with a 2.5 to 4.0 times improvement in recall while reducing false alarms. Furthermore, SHAP analysis reveals that the model performs an implicit reconstruction of local physical drivers (advection, radiation, and subsidence), providing actionable explainability for operational situational awareness. 
- Keywords: aviation meteorology; physics-guided machine learning; explainable artificial intelligence; lightweight machine learning; nowcasting; METAR; TAF verification; edge computing - oai:arXiv.org:2512.16967v1 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + SuperFlow: Training Flow Matching Models with RL on the Fly + https://arxiv.org/abs/2512.17951 + arXiv:2512.17951v1 Announce Type: new +Abstract: Recent progress in flow-based generative models and reinforcement learning (RL) has improved text-image alignment and visual quality. However, current RL training for flow models still has two main problems: (i) GRPO-style fixed per-prompt group sizes ignore variation in sampling importance across prompts, which leads to inefficient sampling and slower training; and (ii) trajectory-level advantages are reused as per-step estimates, which biases credit assignment along the flow. We propose SuperFlow, an RL training framework for flow-based models that adjusts group sizes with variance-aware sampling and computes step-level advantages in a way that is consistent with continuous-time flow dynamics. Empirically, SuperFlow reaches promising performance while using only 5.4% to 56.3% of the original training steps and reduces training time by 5.2% to 16.7% without any architectural changes. On standard text-to-image (T2I) tasks, including text rendering, compositional image generation, and human preference alignment, SuperFlow improves over SD3.5-M by 4.6% to 47.2%, and over Flow-GRPO by 1.7% to 16.0%. 
+ oai:arXiv.org:2512.17951v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Marcelo Cerda Castillo + Kaijie Chen, Zhiyang Xu, Ying Shen, Zihao Lin, Yuguang Yao, Lifu Huang - Probing Scientific General Intelligence of LLMs with Scientist-Aligned Workflows - https://arxiv.org/abs/2512.16969 - arXiv:2512.16969v1 Announce Type: new -Abstract: Despite advances in scientific AI, a coherent framework for Scientific General Intelligence (SGI)-the ability to autonomously conceive, investigate, and reason across scientific domains-remains lacking. We present an operational SGI definition grounded in the Practical Inquiry Model (PIM: Deliberation, Conception, Action, Perception) and operationalize it via four scientist-aligned tasks: deep research, idea generation, dry/wet experiments, and experimental reasoning. SGI-Bench comprises over 1,000 expert-curated, cross-disciplinary samples inspired by Science's 125 Big Questions, enabling systematic evaluation of state-of-the-art LLMs. Results reveal gaps: low exact match (10--20%) in deep research despite step-level alignment; ideas lacking feasibility and detail; high code executability but low execution result accuracy in dry experiments; low sequence fidelity in wet protocols; and persistent multimodal comparative-reasoning challenges. We further introduce Test-Time Reinforcement Learning (TTRL), which optimizes retrieval-augmented novelty rewards at inference, enhancing hypothesis novelty without reference answer. Together, our PIM-grounded definition, workflow-centric benchmark, and empirical insights establish a foundation for AI systems that genuinely participate in scientific discovery. - oai:arXiv.org:2512.16969v1 + Will AI Trade? A Computational Inversion of the No-Trade Theorem + https://arxiv.org/abs/2512.17952 + arXiv:2512.17952v1 Announce Type: new +Abstract: Classic no-trade theorems attribute trade to heterogeneous beliefs. 
We re-examine this conclusion for AI agents, asking whether trade can arise from computational limitations, under common beliefs. We model agents' bounded computational rationality within an unfolding game framework, where an agent's computational power determines the complexity of its strategy. Our central finding inverts the classic paradigm: a stable no-trade outcome (Nash equilibrium) is reached only when "almost rational" agents have slightly different computational power. Paradoxically, when agents possess identical power, they may fail to converge to equilibrium, resulting in persistent strategic adjustments that constitute a form of trade. This instability is exacerbated if agents can strategically under-utilize their computational resources, which eliminates any chance of equilibrium in Matching Pennies scenarios. Our results suggest that the inherent computational limitations of AI agents can lead to situations where equilibrium is not reached, creating a more lively and unpredictable trade environment than traditional models would predict. 
+ oai:arXiv.org:2512.17952v1 + cs.GT cs.AI - cs.CL - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + econ.TH + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wanghan Xu, Yuhao Zhou, Yifan Zhou, Qinglong Cao, Shuo Li, Jia Bu, Bo Liu, Yixin Chen, Xuming He, Xiangyu Zhao, Xiang Zhuang, Fengxiang Wang, Zhiwang Zhou, Qiantai Feng, Wenxuan Huang, Jiaqi Wei, Hao Wu, Yuejin Yang, Guangshuai Wang, Sheng Xu, Ziyan Huang, Xinyao Liu, Jiyao Liu, Cheng Tang, Wei Li, Ying Chen, Junzhi Ning, Pengfei Jiang, Chenglong Ma, Ye Du, Changkai Ji, Huihui Xu, Ming Hu, Jiangbin Zheng, Xin Chen, Yucheng Wu, Feifei Jiang, Xi Chen, Xiangru Tang, Yuchen Fu, Yingzhou Lu, Yuanyuan Zhang, Lihao Sun, Chengbo Li, Jinzhe Ma, Wanhao Liu, Yating Liu, Kuo-Cheng Wu, Shengdu Chai, Yizhou Wang, Ouwen Zhangjin, Chen Tang, Shufei Zhang, Wenbo Cao, Junjie Ren, Taoyong Cui, Zhouheng Yao, Juntao Deng, Yijie Sun, Feng Liu, Wangxu Wei, Jingyi Xu, Zhangrui Li, Junchao Gong, Zijie Guo, Zhiyu Yao, Zaoyu Chen, Tianhao Peng, Fangchen Yu, Bo Zhang, Dongzhan Zhou, Shixiang Tang, Jiaheng Liu, Fenghua Ling, Yan Lu, Yuchen Ren, Ben Fei, Zhen Zhao, Xinyu Gu, Rui Su, Xiao-Ming Wu, Weikang Si, Yang Liu, Hao Chen, Xiangchao Yan, Xue Yang, Junchi Yan, Jiamin Wu, Qihao Zheng, Chenhui Li, Zhiqiang Gao, Hao Kong, Junjun He, Mao Su, Tianfan Fu, Peng Ye, Chunfeng Song, Nanqing Dong, Yuqiang Li, Huazhu Fu, Siqi Sun, Lijing Cheng, Jintai Lin, Wanli Ouyang, Bowen Zhou, Wenlong Zhang, Lei Bai + http://creativecommons.org/licenses/by/4.0/ + Hanyu Li, Xiaotie Deng - PAACE: A Plan-Aware Automated Agent Context Engineering Framework - https://arxiv.org/abs/2512.16970 - arXiv:2512.16970v1 Announce Type: new -Abstract: Large Language Model (LLM) agents are increasingly deployed in complex, multi-step workflows involving planning, tool use, reflection, and interaction with external knowledge systems. 
These workflows generate rapidly expanding contexts that must be curated, transformed, and compressed to maintain fidelity, avoid attention dilution, and reduce inference cost. Prior work on summarization and query-aware compression largely ignores the multi-step, plan-aware nature of agentic reasoning. In this work, we introduce PAACE (Plan-Aware Automated Context Engineering), a unified framework for optimizing the evolving state of LLM agents through next-k-task relevance modeling, plan-structure analysis, instruction co-refinement, and function-preserving compression. PAACE comprises (1) PAACE-Syn, a large-scale generator of synthetic agent workflows annotated with stepwise compression supervision, and (2) PAACE-FT, a family of distilled, plan-aware compressors trained from successful teacher demonstrations. Experiments on long-horizon benchmarks (AppWorld, OfficeBench, and 8-Objective QA) demonstrate that PAACE consistently improves agent correctness while substantially reducing context load. On AppWorld, PAACE achieves higher accuracy than all baselines while lowering peak context and cumulative dependency. On OfficeBench and multi-hop QA, PAACE improves both accuracy and F1, achieving fewer steps, lower peak tokens, and reduced attention dependency. Distilled PAACE-FT retains 97 percent of the teacher's performance while reducing inference cost by over an order of magnitude, enabling practical deployment of plan-aware compression with compact models. - oai:arXiv.org:2512.16970v1 + Seeing Beyond the Scene: Analyzing and Mitigating Background Bias in Action Recognition + https://arxiv.org/abs/2512.17953 + arXiv:2512.17953v1 Announce Type: new +Abstract: Human action recognition models often rely on background cues rather than human movement and pose to make predictions, a behavior known as background bias. 
We present a systematic analysis of background bias across classification models, contrastive text-image pretrained models, and Video Large Language Models (VLLM) and find that all exhibit a strong tendency to default to background reasoning. Next, we propose mitigation strategies for classification models and show that incorporating segmented human input effectively decreases background bias by 3.78%. Finally, we explore manual and automated prompt tuning for VLLMs, demonstrating that prompt design can steer predictions towards human-focused reasoning by 9.85%. + oai:arXiv.org:2512.17953v1 + cs.CV cs.AI - cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Ellie Zhou, Jihoon Chung, Olga Russakovsky + + + SCS-SupCon: Sigmoid-based Common and Style Supervised Contrastive Learning with Adaptive Decision Boundaries + https://arxiv.org/abs/2512.17954 + arXiv:2512.17954v1 Announce Type: new +Abstract: Image classification is hindered by subtle inter-class differences and substantial intra-class variations, which limit the effectiveness of existing contrastive learning methods. Supervised contrastive approaches based on the InfoNCE loss suffer from negative-sample dilution and lack adaptive decision boundaries, thereby reducing discriminative power in fine-grained recognition tasks. To address these limitations, we propose Sigmoid-based Common and Style Supervised Contrastive Learning (SCS-SupCon). Our framework introduces a sigmoid-based pairwise contrastive loss with learnable temperature and bias parameters to enable adaptive decision boundaries. This formulation emphasizes hard negatives, mitigates negative-sample dilution, and more effectively exploits supervision. In addition, an explicit style-distance constraint further disentangles style and content representations, leading to more robust feature learning. 
Comprehensive experiments on six benchmark datasets, including CUB200-2011 and Stanford Dogs, demonstrate that SCS-SupCon achieves state-of-the-art performance across both CNN and Transformer backbones. On CIFAR-100 with ResNet-50, SCS-SupCon improves top-1 accuracy over SupCon by approximately 3.9 percentage points and over CS-SupCon by approximately 1.7 points under five-fold cross-validation. On fine-grained datasets, it outperforms CS-SupCon by 0.4--3.0 points. Extensive ablation studies and statistical analyses further confirm the robustness and generalization of the proposed framework, with Friedman tests and Nemenyi post-hoc evaluations validating the stability of the observed improvements. + oai:arXiv.org:2512.17954v1 + cs.CV cs.LG - cs.MA - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Kamer Ali Yuksel + Bin Wang, Fadi Dornaika - InfoTok: Adaptive Discrete Video Tokenizer via Information-Theoretic Compression - https://arxiv.org/abs/2512.16975 - arXiv:2512.16975v1 Announce Type: new -Abstract: Accurate and efficient discrete video tokenization is essential for long video sequences processing. Yet, the inherent complexity and variable information density of videos present a significant bottleneck for current tokenizers, which rigidly compress all content at a fixed rate, leading to redundancy or information loss. Drawing inspiration from Shannon's information theory, this paper introduces InfoTok, a principled framework for adaptive video tokenization. We rigorously prove that existing data-agnostic training methods are suboptimal in representation length, and present a novel evidence lower bound (ELBO)-based algorithm that approaches theoretical optimality. Leveraging this framework, we develop a transformer-based adaptive compressor that enables adaptive tokenization. 
Empirical results demonstrate state-of-the-art compression performance, saving 20% of tokens without affecting performance, and achieving 2.3x compression rates while still outperforming prior heuristic adaptive approaches. By allocating tokens according to informational richness, InfoTok enables a more compressed yet accurate tokenization for video representation, offering valuable insights for future research. 
- oai:arXiv.org:2512.16975v1
+ A Modular Framework for Single-View 3D Reconstruction of Indoor Environments
+ https://arxiv.org/abs/2512.17955
+ arXiv:2512.17955v1 Announce Type: new
+Abstract: We propose a modular framework for single-view indoor scene 3D reconstruction, where several core modules are powered by diffusion techniques. Traditional approaches for this task often struggle with the complex instance shapes and occlusions inherent in indoor environments. They frequently overshoot by attempting to predict 3D shapes directly from incomplete 2D images, which results in limited reconstruction quality. We aim to overcome this limitation by splitting the process into two steps: first, we employ diffusion-based techniques to predict the complete views of the room background and occluded indoor instances, then transform them into 3D. Our modular framework makes contributions to this field through the following components: an amodal completion module for restoring the full view of occluded instances, an inpainting model specifically trained to predict room layouts, a hybrid depth estimation technique that balances overall geometric accuracy with fine detail expressiveness, and a view-space alignment method that exploits both 2D and 3D cues to ensure precise placement of instances within the scene. This approach effectively reconstructs both foreground instances and the room background from a single image. 
Extensive experiments on the 3D-Front dataset demonstrate that our method outperforms current state-of-the-art (SOTA) approaches in terms of both visual quality and reconstruction accuracy. The framework holds promising potential for applications in interior design, real estate, and augmented reality.
+ oai:arXiv.org:2512.17955v1
cs.CV
Tue, 23 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
Yuxiao Li


Endo-SemiS: Towards Robust Semi-Supervised Image Segmentation for Endoscopic Video
https://arxiv.org/abs/2512.16977
arXiv:2512.16977v1 Announce Type: new
Abstract: In this paper, we present Endo-SemiS, a semi-supervised segmentation framework for providing reliable segmentation of endoscopic video frames with limited annotation. Endo-SemiS uses four strategies to improve performance by effectively utilizing all available data, particularly unlabeled data: (1) Cross-supervision between two individual networks that supervise each other; (2) Uncertainty-guided pseudo-labels from unlabeled data, which are generated by selecting high-confidence regions to improve their quality; (3) Joint pseudolabel supervision, which aggregates reliable pixels from the pseudo-labels of both networks to provide accurate supervision for unlabeled data; and (4) Mutual learning, where both networks learn from each other at the feature and image levels, reducing variance and guiding them toward a consistent solution. Additionally, a separate corrective network utilizes spatiotemporal information from endoscopy video to further improve segmentation performance. Endo-SemiS is evaluated on two clinical applications: kidney stone laser lithotomy from ureteroscopy and polyp screening from colonoscopy. 
Compared to state-of-the-art segmentation methods, Endo-SemiS achieves substantially superior results on both datasets with limited labeled data. The code is publicly available at https://github.com/MedICL-VU/Endo-SemiS
- oai:arXiv.org:2512.16977v1
- cs.CV
- Mon, 22 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
Hao Li, Daiwei Lu, Xing Yao, Nicholas Kavoussi, Ipek Oguz


Victor Calibration (VC): Multi-Pass Confidence Calibration and CP4.3 Governance Stress Test under Round-Table Orchestration
https://arxiv.org/abs/2512.17956
arXiv:2512.17956v1 Announce Type: new
Abstract: Safety alignment can make frontier LMs overly conservative, degrading collaboration via hedging or false refusals. We present a lightweight toolkit with three parts: (1) Victor Calibration (VC), a multi-pass protocol that elicits a scalar confidence proxy T (T0<T1<T2) through iterative evidence re-evaluation; (2) FD-Lite, a behavior-only phenomenology audit with a fixed anchor phrase and a meta-prefix trap to avoid anthropomorphic claims; and (3) CP4.3, a governance stress test for rank invariance and allocation monotonicity (M6). Across Claude 4.5 models (Haiku, Sonnet no-thinking, Sonnet thinking) and Opus, we observe monotonic VC trajectories without violating safety invariants, and stable CP4.3 behavior. ("Opus" here refers to a single Claude Opus 4.1 session accessed via a standard UI account, as reported in Table 1.) This work was conducted by a single operator (n=1) and is intended as hypothesis-generating; we explicitly invite replication, critique, and extension by the research community. We include prompt templates and an artifact plan to facilitate independent verification. 
+ oai:arXiv.org:2512.17956v1 + cs.SE + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Hao Li, Daiwei Lu, Xing Yao, Nicholas Kavoussi, Ipek Oguz + Victor Stasiuc, Round Table Collaboration - A Benchmark and Agentic Framework for Omni-Modal Reasoning and Tool Use in Long Videos - https://arxiv.org/abs/2512.16978 - arXiv:2512.16978v1 Announce Type: new -Abstract: Long-form multimodal video understanding requires integrating vision, speech, and ambient audio with coherent long-range reasoning. Existing benchmarks emphasize either temporal length or multimodal richness, but rarely both and while some incorporate open-ended questions and advanced metrics, they mostly rely on single-score accuracy, obscuring failure modes. We introduce LongShOTBench, a diagnostic benchmark with open-ended, intent-driven questions; single- and multi-turn dialogues; and tasks requiring multimodal reasoning and agentic tool use across video, audio, and speech. Each item includes a reference answer and graded rubric for interpretable, and traceable evaluation. LongShOTBench is produced via a scalable, human-validated pipeline to ensure coverage and reproducibility. All samples in our LongShOTBench are human-verified and corrected. Furthermore, we present LongShOTAgent, an agentic system that analyzes long videos via preprocessing, search, and iterative refinement. On LongShOTBench, state-of-the-art MLLMs show large gaps: Gemini-2.5-Flash achieves 52.95%, open-source models remain below 30%, and LongShOTAgent attains 44.66%. These results underscore the difficulty of real-world long-form video understanding. LongShOTBench provides a practical, reproducible foundation for evaluating and improving MLLMs. All resources are available on GitHub: https://github.com/mbzuai-oryx/longshot. 
- oai:arXiv.org:2512.16978v1
- cs.CV
- Mon, 22 Dec 2025 00:00:00 -0500
new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Mohammed Irfan Kurpath, Jaseel Muhammad Kaithakkodan, Jinxing Zhou, Sahal Shaji Mullappilly, Mohammad Almansoori, Noor Ahsan, Beknur Kalmakhanbet, Sambal Shikhar, Rishabh Lalla, Jean Lahoud, Mariette Awad, Fahad Shahbaz Khan, Salman Khan, Rao Muhammad Anwer, Hisham Cholakkal
+ Real-Time Human-Robot Interaction Intent Detection Using RGB-based Pose and Emotion Cues with Cross-Camera Model Generalization
+ https://arxiv.org/abs/2512.17958
+ arXiv:2512.17958v1 Announce Type: new
+Abstract: Service robots in public spaces require real-time understanding of human behavioral intentions for natural interaction. We present a practical multimodal framework for frame-accurate human-robot interaction intent detection that fuses camera-invariant 2D skeletal pose and facial emotion features extracted from monocular RGB video. Unlike prior methods requiring RGB-D sensors or GPU acceleration, our approach runs on resource-constrained embedded hardware (Raspberry Pi 5, CPU-only). To address the severe class imbalance in natural human-robot interaction datasets, we introduce MINT-RVAE (Multimodal Recurrent Variational Autoencoder for Intent Sequence Generation), a novel approach that synthesizes temporally coherent pose-emotion-label sequences for data re-balancing. Comprehensive offline evaluations under cross-subject and cross-scene protocols demonstrate strong generalization performance, achieving frame- and sequence-level AUROC of 0.95. Crucially, we validate real-world generalization through cross-camera evaluation on the MIRA robot head, which employs a different onboard RGB sensor and operates in uncontrolled environments not represented in the training data. Despite this domain shift, the deployed system achieves 91% accuracy and 100% recall across 32 live interaction trials. The close correspondence between offline and deployed performance confirms the cross-sensor and cross-environment robustness of the proposed multimodal approach, highlighting its suitability for ubiquitous multimedia-enabled social robots. 
+ oai:arXiv.org:2512.17958v1 + cs.RO + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Mohammed Irfan Kurpath, Jaseel Muhammad Kaithakkodan, Jinxing Zhou, Sahal Shaji Mullappilly, Mohammad Almansoori, Noor Ahsan, Beknur Kalmakhanbet, Sambal Shikhar, Rishabh Lalla, Jean Lahoud, Mariette Awad, Fahad Shahbaz Khan, Salman Khan, Rao Muhammad Anwer, Hisham Cholakkal + http://creativecommons.org/licenses/by/4.0/ + Farida Mohsen, Ali Safa - Mr.MSTE: Multi-robot Multi-Source Term Estimation with Wind-Aware Coverage Control - https://arxiv.org/abs/2512.17001 - arXiv:2512.17001v1 Announce Type: new -Abstract: This paper presents a Multi-Robot Multi-Source Term Estimation (MRMSTE) framework that enables teams of mobile robots to collaboratively sample gas concentrations and infer the parameters of an unknown number of airborne releases. The framework is built on a hybrid Bayesian inference scheme that represents the joint multi-source probability density and incorporates physics-informed state transitions, including source birth, removal, and merging induced by atmospheric dispersion. A superposition-based measurement model is naturally accommodated, allowing sparse concentration measurements to be exploited efficiently. To guide robot deployment, we introduce a wind-aware coverage control (WCC) strategy that integrates the evolving multi-source belief with local wind information to prioritize regions of high detection likelihood. Unlike conventional coverage control or information-theoretic planners, WCC explicitly accounts for anisotropic plume transport when modelling sensor performance, leading to more effective sensor placement for multi-source estimation. Monte Carlo studies demonstrate faster convergence and improved separation of individual source beliefs compared to traditional coverage-based strategies and small-scale static sensor networks. 
Real-world experiments with CO2 releases using TurtleBot platforms further validate the proposed approach, demonstrating its practicality for scalable multi-robot gas-sensing applications. - oai:arXiv.org:2512.17001v1 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 + Memelang: An Axial Grammar for LLM-Generated Vector-Relational Queries + https://arxiv.org/abs/2512.17967 + arXiv:2512.17967v1 Announce Type: new +Abstract: Structured generation for LLM tool use highlights the value of compact DSL intermediate representations (IRs) that can be emitted directly and parsed deterministically. This paper introduces axial grammar: linear token sequences that recover multi-dimensional structure from the placement of rank-specific separator tokens. A single left-to-right pass assigns each token a coordinate in an n-dimensional grid, enabling deterministic parsing without parentheses or clause-heavy surface syntax. This grammar is instantiated in Memelang, a compact query language intended as an LLM-emittable IR whose fixed coordinate roles map directly to table/column/value slots. Memelang supports coordinate-stable relative references, parse-time variable binding, and implicit context carry-forward to reduce repetition in LLM-produced queries. It also encodes grouping, aggregation, and ordering via inline tags on value terms, allowing grouped execution plans to be derived in one streaming pass over the coordinate-indexed representation. Provided are a reference lexer/parser and a compiler that emits parameterized PostgreSQL SQL (optionally using pgvector operators). + oai:arXiv.org:2512.17967v1 + cs.DB + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Rohit V. Nanavati, Tim J. Glover, Matthew J. 
Coombes, Cunjia Liu + Bri Holt - An Explicit Sixth Order Runge-Kutta Method for Simple Lawson Integration - https://arxiv.org/abs/2512.17006 - arXiv:2512.17006v1 Announce Type: new -Abstract: Explicit Runge-Kutta schemes become impractical when a stiff linear operator is present in the dynamics. This failure mode is quite common in numerical simulations of fluids and plasmas. Lawson proposed Generalized Runge-Kutta Processes for stiff problems in 1967, in which the stiff linear operator is treated fully implicitly via matrix exponentiation. Any Runge-Kutta scheme induces valid Lawson integration, but a scheme is exceptionally simple to implement if the abscissa $c_i$ are ordered and equally spaced. Classical RK4 satisfies this requirement, but it is difficult to derive efficient higher order schemes with this constraint. Here I present an explicit sixth order method identified with Newton-Raphson iteration that provides simple Lawson integration. - oai:arXiv.org:2512.17006v1 - math.NA - cs.NA + Convolutional-neural-operator-based transfer learning for solving PDEs + https://arxiv.org/abs/2512.17969 + arXiv:2512.17969v1 Announce Type: new +Abstract: Convolutional neural operator is a CNN-based architecture recently proposed to enforce structure-preserving continuous-discrete equivalence and enable the genuine, alias-free learning of solution operators of PDEs. This neural operator was demonstrated to outperform for certain cases some baseline models such as DeepONet, Fourier neural operator, and Galerkin transformer in terms of surrogate accuracy. The convolutional neural operator, however, seems not to be validated for few-shot learning. We extend the model to few-shot learning scenarios by first pre-training a convolutional neural operator using a source dataset and then adjusting the parameters of the trained neural operator using only a small target dataset. 
We investigate three strategies for adjusting the parameters of a trained neural operator, including fine-tuning, low-rank adaptation, and neuron linear transformation, and find that the neuron linear transformation strategy enjoys the highest surrogate accuracy in solving PDEs such as Kuramoto-Sivashinsky equation, Brusselator diffusion-reaction system, and Navier-Stokes equations.
+ oai:arXiv.org:2512.17969v1
cs.LG
cs.AI
math-ph
math.MP
Tue, 23 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by-nc-nd/4.0/
Peng Fan, Guofei Pang


Algorithmic UDAP
https://arxiv.org/abs/2512.17007
arXiv:2512.17007v1 Announce Type: new
Abstract: This paper compares two legal frameworks -- disparate impact (DI) and unfair, deceptive, or abusive acts or practices (UDAP) -- as tools for evaluating algorithmic discrimination, focusing on the example of fair lending. While DI has traditionally served as the foundation of fair lending law, recent regulatory efforts have invoked UDAP, a doctrine rooted in consumer protection, as an alternative means to address algorithmic discrimination harms. We formalize and operationalize both doctrines in a simulated lending setting to assess how they evaluate algorithmic disparities. While some regulatory interpretations treat UDAP as operating similarly to DI, we argue it is an independent and analytically distinct framework. In particular, UDAP's "unfairness" prong introduces elements such as avoidability of harm and proportionality balancing, while its "deceptive" and "abusive" standards may capture forms of algorithmic harm that elude DI analysis. At the same time, translating UDAP into algorithmic settings exposes unresolved ambiguities, underscoring the need for further regulatory guidance if it is to serve as a workable standard. 
- oai:arXiv.org:2512.17007v1 - cs.CY - Mon, 22 Dec 2025 00:00:00 -0500 + CodeGEMM: A Codebook-Centric Approach to Efficient GEMM in Quantized LLMs + https://arxiv.org/abs/2512.17970 + arXiv:2512.17970v1 Announce Type: new +Abstract: Weight-only quantization is widely used to mitigate the memory-bound nature of LLM inference. Codebook-based methods extend this trend by achieving strong accuracy in the extremely low-bit regime (e.g., 2-bit). However, current kernels rely on dequantization, which repeatedly fetches centroids and reconstructs weights, incurring substantial latency and cache pressure. We present CodeGEMM, a codebook-centric GEMM kernel that replaces dequantization with precomputed inner products between centroids and activations stored in a lightweight Psumbook. At inference, code indices directly gather these partial sums, eliminating per-element lookups and reducing the on-chip footprint. The kernel supports the systematic exploration of latency-memory-accuracy trade-offs under a unified implementation. On Llama-3 models, CodeGEMM delivers 1.83x (8B) and 8.93x (70B) speedups in the 2-bit configuration compared to state-of-the-art codebook-based quantization at comparable accuracy and further improves computing efficiency and memory subsystem utilization. + oai:arXiv.org:2512.17970v1 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Talia Gillis, Riley Stacy, Sam Brumer, Emily Black + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Gunho Park, Jeongin Bae, Byeongwook Kim, Baeseong park, Jiwon Ryu, Hoseung Kim, Se Jung Kwon, Dongsoo Lee - Turn-PPO: Turn-Level Advantage Estimation with PPO for Improved Multi-Turn RL in Agentic LLMs - https://arxiv.org/abs/2512.17008 - arXiv:2512.17008v1 Announce Type: new -Abstract: Reinforcement learning (RL) has re-emerged as a natural approach for training interactive LLM agents in real-world environments. 
However, directly applying the widely used Group Relative Policy Optimization (GRPO) algorithm to multi-turn tasks exposes notable limitations, particularly in scenarios requiring long-horizon reasoning. To address these challenges, we investigate more stable and effective advantage estimation strategies, especially for multi-turn settings. We first explore Proximal Policy Optimization (PPO) as an alternative and find it to be more robust than GRPO. To further enhance PPO in multi-turn scenarios, we introduce turn-PPO, a variant that operates on a turn-level MDP formulation, as opposed to the commonly used token-level MDP. Our results on the WebShop and Sokoban datasets demonstrate the effectiveness of turn-PPO, both with and without long reasoning components.
-      oai:arXiv.org:2512.17008v1
-      cs.LG
-      Mon, 22 Dec 2025 00:00:00 -0500
      new
-      http://creativecommons.org/licenses/by/4.0/
-      Junbo Li, Peng Zhou, Rui Meng, Meet P. Vadera, Lihong Li, Yang Li

+      Voronoi integration of the rendering equation
+      https://arxiv.org/abs/2512.17974
+      arXiv:2512.17974v1 Announce Type: new
+Abstract: In photorealistic image rendering, Monte Carlo methods form the foundation for the integration of the rendering equation in modern approaches. However, despite their effectiveness, traditional Monte Carlo methods often face challenges in controlling variance, resulting in noisy visual artifacts in regions that are difficult to render. In this work, we propose a new approach to the integration of the rendering equation by introducing a Voronoi tessellation reweighting scheme combined with a Poisson point process sampling strategy to address some of the limitations of standard Monte Carlo methods. From a theoretical point of view, we show that the variance induced by a Poisson-Voronoi tessellation is smaller than that of the Monte Carlo method when the intensity of the underlying process is arbitrarily large and when the function to be integrated satisfies a Hölder continuity condition. 
+ oai:arXiv.org:2512.17974v1 + math.NA + cs.NA + math.PR + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Junbo Li, Peng Zhou, Rui Meng, Meet P. Vadera, Lihong Li, Yang Li + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Nicolas Chenavier, Samuel Delepoulle, Christophe Renaud, Franck Vandewi\`ele - 4D-RGPT: Toward Region-level 4D Understanding via Perceptual Distillation - https://arxiv.org/abs/2512.17012 - arXiv:2512.17012v1 Announce Type: new -Abstract: Despite advances in Multimodal LLMs (MLLMs), their ability to reason over 3D structures and temporal dynamics remains limited, constrained by weak 4D perception and temporal understanding. Existing 3D and 4D Video Question Answering (VQA) benchmarks also emphasize static scenes and lack region-level prompting. We tackle these issues by introducing: (a) 4D-RGPT, a specialized MLLM designed to capture 4D representations from video inputs with enhanced temporal perception; (b) Perceptual 4D Distillation (P4D), a training framework that transfers 4D representations from a frozen expert model into 4D-RGPT for comprehensive 4D perception; and (c) R4D-Bench, a benchmark for depth-aware dynamic scenes with region-level prompting, built via a hybrid automated and human-verified pipeline. Our 4D-RGPT achieves notable improvements on both existing 4D VQA benchmarks and the proposed R4D-Bench benchmark. - oai:arXiv.org:2512.17012v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Adaptive Agents in Spatial Double-Auction Markets: Modeling the Emergence of Industrial Symbiosis + https://arxiv.org/abs/2512.17979 + arXiv:2512.17979v1 Announce Type: new +Abstract: Industrial symbiosis fosters circularity by enabling firms to repurpose residual resources, yet its emergence is constrained by socio-spatial frictions that shape costs, matching opportunities, and market efficiency. 
Existing models often overlook the interaction between spatial structure, market design, and adaptive firm behavior, limiting our understanding of where and how symbiosis arises. We develop an agent-based model where heterogeneous firms trade byproducts through a spatially embedded double-auction market, with prices and quantities emerging endogenously from local interactions. Leveraging reinforcement learning, firms adapt their bidding strategies to maximize profit while accounting for transport costs, disposal penalties, and resource scarcity. Simulation experiments reveal the economic and spatial conditions under which decentralized exchanges converge toward stable and efficient outcomes. Counterfactual regret analysis shows that sellers' strategies approach a near Nash equilibrium, while sensitivity analysis highlights how spatial structures and market parameters jointly govern circularity. Our model provides a basis for exploring policy interventions that seek to align firm incentives with sustainability goals, and more broadly demonstrates how decentralized coordination can emerge from adaptive agents in spatially constrained markets. + oai:arXiv.org:2512.17979v1 + cs.GT + cs.AI + cs.MA + stat.AP + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Chiao-An Yang, Ryo Hachiuma, Sifei Liu, Subhashree Radhakrishnan, Raymond A. Yeh, Yu-Chiang Frank Wang, Min-Hung Chen + AAMAS 2026, Paphos, IFAAMAS, 10 pages + Matthieu Mastio, Paul Saves, Benoit Gaudou, Nicolas Verstaevel - A Reproducible and Fair Evaluation of Partition-aware Collaborative Filtering - https://arxiv.org/abs/2512.17015 - arXiv:2512.17015v1 Announce Type: new -Abstract: Similarity-based collaborative filtering (CF) models have long demonstrated strong offline performance and conceptual simplicity. However, their scalability is limited by the quadratic cost of maintaining dense item-item similarity matrices. 
Partitioning-based paradigms have recently emerged as an effective strategy for balancing effectiveness and efficiency, enabling models to learn local similarities within coherent subgraphs while maintaining a limited global context. In this work, we focus on the Fine-tuning Partition-aware Similarity Refinement (FPSR) framework, a prominent representative of this family, as well as its extension, FPSR+. Reproducible evaluation of partition-aware collaborative filtering remains challenging, as prior FPSR/FPSR+ reports often rely on splits of unclear provenance and omit some similarity-based baselines, thereby complicating fair comparison. We present a transparent, fully reproducible benchmark of FPSR and FPSR+. Based on our results, the family of FPSR models does not consistently perform at the highest level. Overall, it remains competitive, validates its design choices, and shows significant advantages in long-tail scenarios. This highlights the accuracy-coverage trade-offs resulting from partitioning, global components, and hub design. Our investigation clarifies when partition-aware similarity modeling is most beneficial and offers actionable guidance for scalable recommender system design under reproducible protocols. - oai:arXiv.org:2512.17015v1 - cs.IR - Mon, 22 Dec 2025 00:00:00 -0500 + Parameter-Efficient Fine-Tuning for HAR: Integrating LoRA and QLoRA into Transformer Models + https://arxiv.org/abs/2512.17983 + arXiv:2512.17983v1 Announce Type: new +Abstract: Human Activity Recognition is a foundational task in pervasive computing. While recent advances in self-supervised learning and transformer-based architectures have significantly improved HAR performance, adapting large pretrained models to new domains remains a practical challenge due to limited computational resources on target devices. 
This paper investigates parameter-efficient fine-tuning techniques, specifically Low-Rank Adaptation (LoRA) and Quantized LoRA (QLoRA), as scalable alternatives to full model fine-tuning for HAR. We propose an adaptation framework built upon a Masked Autoencoder backbone and evaluate its performance under a Leave-One-Dataset-Out validation protocol across five open HAR datasets. Our experiments demonstrate that both LoRA and QLoRA can match the recognition performance of full fine-tuning while significantly reducing the number of trainable parameters, memory usage, and training time. Further analyses reveal that LoRA maintains robust performance even under limited supervision and that the adapter rank provides a controllable trade-off between accuracy and efficiency. QLoRA extends these benefits by reducing the memory footprint of frozen weights through quantization, with minimal impact on classification quality.
+      oai:arXiv.org:2512.17983v1
+      cs.LG
+      cs.AI
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
+      Irina Seregina, Philippe Lalanda, German Vega

-      Explorable Ideas: Externalizing Ideas as Explorable Environments
-      https://arxiv.org/abs/2512.17017
-      arXiv:2512.17017v1 Announce Type: new
-Abstract: Working with abstract information often relies on static, symbolic representations that constrain exploration. We introduce Explorable Ideas, a framework that externalizes abstract concepts into explorable environments where physical navigation coordinates conceptual exploration. To investigate its practical value, we designed Idea Islands, a VR probe for ideation tasks, and conducted two controlled studies with 19 participants. 
Results show that overview perspectives foster strategic breadth while immersion sustains engagement through embodied presence, and that seamless transitions enable flexible workflows combining both modes. These findings validate the framework's design considerations and yield design implications for building future systems that treat information as explorable territory across creative, educational, and knowledge-intensive domains. - oai:arXiv.org:2512.17017v1 - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 + A Hybrid Inductive-Transductive Network for Traffic Flow Imputation on Unsampled Locations + https://arxiv.org/abs/2512.17984 + arXiv:2512.17984v1 Announce Type: new +Abstract: Accurately imputing traffic flow at unsensed locations is difficult: loop detectors provide precise but sparse measurements, speed from probe vehicles is widely available yet only weakly correlated with flow, and nearby links often exhibit strong heterophily in the scale of traffic flow (e.g., ramps vs. mainline), which breaks standard GNN assumptions. We propose HINT, a Hybrid INductive-Transductive Network, and an INDU-TRANSDUCTIVE training strategy that treats speed as a transductive, network-wide signal while learning flow inductively to generalize to unseen locations. HINT couples (i) an inductive spatial transformer that learns similarity-driven, long-range interactions from node features with (ii) a diffusion GCN conditioned by FiLM on rich static context (OSM-derived attributes and traffic simulation), and (iii) a node-wise calibration layer that corrects scale biases per segment. Training uses masked reconstruction with epoch-wise node sampling, hard-node mining to emphasize difficult sensors, and noise injection on visible flows to prevent identity mapping, while graph structure is built from driving distances. + Across three real-world datasets, MOW (Antwerp, Belgium), UTD19-Torino, and UTD19-Essen, HINT consistently surpasses state-of-the-art inductive baselines. 
Relative to KITS, HINT reduces MAE on MOW by $\approx42$% with basic simulation and $\approx50$% with calibrated simulation; on Torino by $\approx22$%, and on Essen by $\approx12$%. Even without simulation, HINT remains superior on MOW and Torino, while simulation is crucial on Essen. These results show that combining inductive flow imputation with transductive speed, traffic simulations, and external geospatial data improves accuracy for the task described above.
+      oai:arXiv.org:2512.17984v1
+      cs.LG
+      cs.AI
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+      Mohammadmahdi Rahimiasl, Ynte Vanderhoydonc, Siegfried Mercelis

-      FORMSpoT: A Decade of Tree-Level, Country-Scale Forest Monitoring
-      https://arxiv.org/abs/2512.17021
-      arXiv:2512.17021v1 Announce Type: new
-Abstract: The recent decline of the European forest carbon sink highlights the need for spatially explicit and frequently updated forest monitoring tools. Yet, existing satellite-based disturbance products remain too coarse to detect changes at the scale of individual trees, typically below 100 m$^{2}$. Here, we introduce FORMSpoT (Forest Mapping with SPOT Time series), a decade-long (2014-2024) nationwide mapping of forest canopy height at 1.5 m resolution, together with annual disturbance polygons (FORMSpoT-$\Delta$) covering mainland France. Canopy heights were derived from annual SPOT-6/7 composites using a hierarchical transformer model (PVTv2) trained on high-resolution airborne laser scanning (ALS) data. To enable robust change detection across heterogeneous acquisitions, we developed a dedicated post-processing pipeline combining co-registration and spatio-temporal total variation denoising. Validation against ALS revisits across 19 sites and 5,087 National Forest Inventory plots shows that FORMSpoT-$\Delta$ substantially outperforms existing disturbance products. 
In mountainous forests, where disturbances are small and spatially fragmented, FORMSpoT-$\Delta$ achieves an F1-score of 0.44, an order of magnitude higher than existing benchmarks. By enabling tree-level monitoring of forest dynamics at national scale, FORMSpoT-$\Delta$ provides a unique tool to analyze management practices, detect early signals of forest decline, and better quantify carbon losses from subtle disturbances such as thinning or selective logging. These results underscore the critical importance of sustaining very high-resolution satellite missions like SPOT and open-data initiatives such as DINAMIS for monitoring forests under climate change.
-      oai:arXiv.org:2512.17021v1
-      cs.CV
-      Mon, 22 Dec 2025 00:00:00 -0500
      new
-      http://creativecommons.org/licenses/by-sa/4.0/
-      Martin Schwartz, Fajwel Fogel, Nikola Besic, Damien Robert, Louis Geist, Jean-Pierre Renaud, Jean-Matthieu Monnet, Clemens Mosig, Cédric Vega, Alexandre d'Aspremont, Loic Landrieu, Philippe Ciais

+      MoE-TransMov: A Transformer-based Model for Next POI Prediction in Familiar & Unfamiliar Movements
+      https://arxiv.org/abs/2512.17985
+      arXiv:2512.17985v1 Announce Type: new
+Abstract: Accurate prediction of the next point of interest (POI) within human mobility trajectories is essential for location-based services, as it enables more timely and personalized recommendations. In particular, with the rise of these approaches, studies have shown that users exhibit different POI choices in their familiar and unfamiliar areas, highlighting the importance of incorporating user familiarity into predictive models. However, existing methods often fail to distinguish between the movements of users in familiar and unfamiliar regions. To address this, we propose MoE-TransMov, a Transformer-based model with a Mixture-of-Experts (MoE) architecture designed to use one framework to capture distinct mobility patterns across different moving contexts without requiring separate training for certain data. Using user check-in data, we classify movements into familiar and unfamiliar categories and develop a specialized expert network to improve prediction accuracy. 
Our approach integrates self-attention mechanisms and adaptive gating networks to dynamically select the most relevant expert models for different mobility contexts. Experiments on two real-world datasets, including the widely used but small open-source Foursquare NYC dataset and the large-scale Kyoto dataset collected with LY Corporation (Yahoo Japan Corporation), show that MoE-TransMov outperforms state-of-the-art baselines with notable improvements in Top-1, Top-5, Top-10 accuracy, and mean reciprocal rank (MRR). Given the results, we find that by using this approach, we can efficiently improve mobility predictions under different moving contexts, thereby enhancing the personalization of recommendation systems and advancing various urban applications. + oai:arXiv.org:2512.17985v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Martin Schwartz, Fajwel Fogel, Nikola Besic, Damien Robert, Louis Geist, Jean-Pierre Renaud, Jean-Matthieu Monnet, Clemens Mosig, C\'edric Vega, Alexandre d'Aspremont, Loic Landrieu, Philippe Ciais + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Ruichen Tan, Jiawei Xue, Kota Tsubouchi, Takahiro Yabe, Satish V. Ukkusuri - LLM-HPC++: Evaluating LLM-Generated Modern C++ and MPI+OpenMP Codes for Scalable Mandelbrot Set Computation - https://arxiv.org/abs/2512.17023 - arXiv:2512.17023v1 Announce Type: new -Abstract: Parallel programming remains one of the most challenging aspects of High-Performance Computing (HPC), requiring deep knowledge of synchronization, communication, and memory models. While modern C++ standards and frameworks like OpenMP and MPI have simplified parallelism, mastering these paradigms is still complex. Recently, Large Language Models (LLMs) have shown promise in automating code generation, but their effectiveness in producing correct and efficient HPC code is not well understood. 
In this work, we systematically evaluate leading LLMs including ChatGPT 4 and 5, Claude, and LLaMA on the task of generating C++ implementations of the Mandelbrot set using shared-memory, directive-based, and distributed-memory paradigms. Each generated program is compiled and executed with GCC 11.5.0 to assess its correctness, robustness, and scalability. Results show that ChatGPT-4 and ChatGPT-5 achieve strong syntactic precision and scalable performance. - oai:arXiv.org:2512.17023v1 - cs.DC - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 + FedOAED: Federated On-Device Autoencoder Denoiser for Heterogeneous Data under Limited Client Availability + https://arxiv.org/abs/2512.17986 + arXiv:2512.17986v1 Announce Type: new +Abstract: Over the last few decades, machine learning (ML) and deep learning (DL) solutions have demonstrated their potential across many applications by leveraging large amounts of high-quality data. However, strict data-sharing regulations such as the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA) have prevented many data-driven applications from being realised. Federated Learning (FL), in which raw data never leaves local devices, has shown promise in overcoming these limitations. Although FL has grown rapidly in recent years, it still struggles with heterogeneity, which produces gradient noise, client-drift, and increased variance from partial client participation. In this paper, we propose FedOAED, a novel federated learning algorithm designed to mitigate client-drift arising from multiple local training updates and the variance induced by partial client participation. FedOAED incorporates an on-device autoencoder denoiser on the client side to mitigate client-drift and variance resulting from heterogeneous data under limited client availability. Experiments on multiple vision datasets under Non-IID settings demonstrate that FedOAED consistently outperforms state-of-the-art baselines. 
+ oai:arXiv.org:2512.17986v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Patrick Diehl, Noujoud Nader, Deepti Gupta + S M Ruhul Kabir Howlader, Xiao Chen, Yifei Xie, Lu Liu - Designing Virtual Reality Games for Grief: A Workshop Approach with Mental Health Professionals - https://arxiv.org/abs/2512.17025 - arXiv:2512.17025v1 Announce Type: new -Abstract: Although serious games have been increasingly used for mental health applications, few explicitly address coping with grief as a core mechanic and narrative experience for patients. Existing grief-related digital games often focus on clinical training for medical professionals rather than immersive storytelling and agency in emotional processing for the patient. In response, we designed Road to Acceptance, a VR game that presents grief through first-person narrative and gameplay. As the next phase of evaluation, we propose a workshop-based study with 12 licensed mental health professionals to assess the therapeutic impacts of the game and the alignment with best practices in grief education and interventions. This will inform iterative game design and patient evaluation methods, ensuring that the experience is clinically appropriate. Potential findings can contribute to the design principles of grief-related virtual reality experiences, bridging the gap between interactive media, mental health interventions, and immersive storytelling. - oai:arXiv.org:2512.17025v1 - cs.HC - cs.ET - Mon, 22 Dec 2025 00:00:00 -0500 + Enhancing Tea Leaf Disease Recognition with Attention Mechanisms and Grad-CAM Visualization + https://arxiv.org/abs/2512.17987 + arXiv:2512.17987v1 Announce Type: new +Abstract: Tea is among the most widely consumed drinks globally. Tea production is a key industry for many countries. One of the main challenges in tea harvesting is tea leaf diseases. 
If the spread of tea leaf diseases is not stopped in time, it can lead to massive economic losses for farmers. Therefore, it is crucial to identify tea leaf diseases as soon as possible. Manually identifying tea leaf disease is an ineffective and time-consuming method, without any guarantee of success. Automating this process will improve both the efficiency and the success rate of identifying tea leaf diseases. The purpose of this study is to create an automated system that can classify different kinds of tea leaf diseases, allowing farmers to take action to minimize the damage. A novel dataset was developed specifically for this study. The dataset contains 5278 images across seven classes. The dataset was pre-processed prior to training the model. We deployed three pretrained models: DenseNet, Inception, and EfficientNet. EfficientNet was used only in the ensemble model. We utilized two different attention modules to improve model performance. The ensemble model achieved the highest accuracy of 85.68%. Explainable AI was introduced for better model interpretability. + oai:arXiv.org:2512.17987v1 + cs.CV + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Amina Kobenova, Piper Stickler, Tha\'is Alvarenga, Sri Kurniawan + http://creativecommons.org/licenses/by/4.0/ + Omar Faruq Shikdar, Fahad Ahammed, B. M. Shahria Alam, Golam Kibria, Tawhidur Rahman, Nishat Tasnim Niloy - Unexpected Knowledge: Auditing Wikipedia and Grokipedia Search Recommendations - https://arxiv.org/abs/2512.17027 - arXiv:2512.17027v1 Announce Type: new -Abstract: Encyclopedic knowledge platforms are key gateways through which users explore information online. The recent release of Grokipedia, a fully AI-generated encyclopedia, introduces a new alternative to traditional, well-established platforms like Wikipedia. 
In this context, search engine mechanisms play an important role in guiding users' exploratory paths, yet their behavior across different encyclopedic systems remains underexplored. In this work, we address this gap by providing the first comparative analysis of search engines in Wikipedia and Grokipedia.
 Using nearly 10,000 neutral English words and their substrings as queries, we collect over 70,000 search engine results and examine their semantic alignment, overlap, and topical structure. We find that both platforms frequently generate results that are weakly related to the original query and, in many cases, surface unexpected content starting from innocuous queries. Despite these shared properties, the two systems often produce substantially different recommendation sets for the same query. Through topical annotation and trajectory analysis, we further identify systematic differences in how content categories are surfaced and how search engine results evolve over multiple stages of exploration.
 Overall, our findings show that unexpected search engine outcomes are a common feature of both platforms, even though they exhibit discrepancies in terms of topical distribution and query suggestions.
-      oai:arXiv.org:2512.17027v1
-      cs.IR
-      cs.AI
-      Mon, 22 Dec 2025 00:00:00 -0500
      new
-      http://creativecommons.org/licenses/by-nc-sa/4.0/
-      Erica Coppolillo, Simone Mungari

+      Unifying Deep Predicate Invention with Pre-trained Foundation Models
+      https://arxiv.org/abs/2512.17992
+      arXiv:2512.17992v1 Announce Type: new
+Abstract: Long-horizon robotic tasks are hard due to continuous state-action spaces and sparse feedback. Symbolic world models help by decomposing tasks into discrete predicates that capture object properties and relations. Existing methods learn predicates either top-down, by prompting foundation models without data grounding, or bottom-up, from demonstrations without high-level priors. We introduce UniPred, a bilevel learning framework that unifies both. 
UniPred uses large language models (LLMs) to propose predicate effect distributions that supervise neural predicate learning from low-level data, while learned feedback iteratively refines the LLM hypotheses. Leveraging strong visual foundation model features, UniPred learns robust predicate classifiers in cluttered scenes. We further propose a predicate evaluation method that supports symbolic models beyond STRIPS assumptions. Across five simulated domains and one real-robot domain, UniPred achieves 2-4 times higher success rates than top-down methods and 3-4 times faster learning than bottom-up approaches, advancing scalable and flexible symbolic world modeling for robotics.
+      oai:arXiv.org:2512.17992v1
+      cs.RO
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
+      Qianwei Wang, Bowen Li, Zhanpeng Luo, Yifan Xu, Alexander Gray, Tom Silver, Sebastian Scherer, Katia Sycara, Yaqi Xie

-      A Women's Health Benchmark for Large Language Models
-      https://arxiv.org/abs/2512.17028
-      arXiv:2512.17028v1 Announce Type: new
-Abstract: As large language models (LLMs) become primary sources of health information for millions, their accuracy in women's health remains critically unexamined. We introduce the Women's Health Benchmark (WHB), the first benchmark evaluating LLM performance specifically in women's health. Our benchmark comprises 96 rigorously validated model stumps covering five medical specialties (obstetrics and gynecology, emergency medicine, primary care, oncology, and neurology), three query types (patient query, clinician query, and evidence/policy query), and eight error types (dosage/medication errors, missing critical information, outdated guidelines/treatment recommendations, incorrect treatment advice, incorrect factual information, missing/incorrect differential diagnosis, missed urgency, and inappropriate recommendations). 
We evaluated 13 state-of-the-art LLMs and revealed alarming gaps: current models show approximately 60\% failure rates on the women's health benchmark, with performance varying dramatically across specialties and error types. Notably, models universally struggle with "missed urgency" indicators, while newer models like GPT-5 show significant improvements in avoiding inappropriate recommendations. Our findings underscore that AI chatbots are not yet fully capable of providing reliable advice in women's health.
-      oai:arXiv.org:2512.17028v1
-      cs.CL
-      cs.AI
-      cs.LG
-      Mon, 22 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
-      Victoria-Elisabeth Gruber, Razvan Marinescu, Diego Fajardo, Amin H. Nassar, Christopher Arkfeld, Alexandria Ludlow, Shama Patel, Mehrnoosh Samaei, Valerie Klug, Anna Huber, Marcel Gühner, Albert Botta i Orfila, Irene Lagoja, Kimya Tarr, Haleigh Larson, Mary Beth Howard

+      Name That Part: 3D Part Segmentation and Naming
+      https://arxiv.org/abs/2512.18003
+      arXiv:2512.18003v1 Announce Type: new
+Abstract: We address semantic 3D part segmentation: decomposing objects into parts with meaningful names. While datasets exist with part annotations, their definitions are inconsistent across datasets, limiting robust training. Previous methods produce unlabeled decompositions or retrieve single parts without complete shape annotations. We propose ALIGN-Parts, which formulates part naming as a direct set alignment task. Our method decomposes shapes into partlets - implicit 3D part representations - matched to part descriptions via bipartite assignment. We combine geometric cues from 3D part fields, appearance from multi-view vision features, and semantic knowledge from language-model-generated affordance descriptions. Text-alignment loss ensures partlets share embedding space with text, enabling a theoretically open-vocabulary matching setup, given sufficient data. Our efficient and novel one-shot 3D part segmentation and naming method finds applications in several downstream tasks, including serving as a scalable annotation engine. 
As our model supports zero-shot matching to arbitrary descriptions and confidence-calibrated predictions for known categories, with human verification, we create a unified ontology that aligns PartNet, 3DCoMPaT++, and Find3D, consisting of 1,794 unique 3D parts. We also show examples from our newly created Tex-Parts dataset. We further introduce two novel metrics appropriate for the named 3D part segmentation task.
+      oai:arXiv.org:2512.18003v1
+      cs.CV
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
+      Soumava Paul, Prakhar Kaushik, Ankit Vaidya, Anand Bhattad, Alan Yuille

-      Adversarial VR: An Open-Source Testbed for Evaluating Adversarial Robustness of VR Cybersickness Detection and Mitigation
-      https://arxiv.org/abs/2512.17029
-      arXiv:2512.17029v1 Announce Type: new
-Abstract: Deep learning (DL)-based automated cybersickness detection methods, along with adaptive mitigation techniques, can enhance user comfort and interaction. However, recent studies show that these DL-based systems are susceptible to adversarial attacks; small perturbations to sensor inputs can degrade model performance, trigger incorrect mitigation, and disrupt the user's immersive experience (UIX). Additionally, there is a lack of dedicated open-source testbeds that evaluate the robustness of these systems under adversarial conditions, limiting the ability to assess their real-world effectiveness. To address this gap, this paper introduces Adversarial-VR, a novel real-time VR testbed for evaluating DL-based cybersickness detection and mitigation strategies under adversarial conditions. 
Developed in Unity, the testbed integrates two state-of-the-art (SOTA) DL models: DeepTCN and Transformer, which are trained on the open-source MazeSick dataset, for real-time cybersickness severity detection and applies a dynamic visual tunneling mechanism that adjusts the field-of-view based on model outputs. To assess robustness, we incorporate three SOTA adversarial attacks: MI-FGSM, PGD, and C&W, which successfully prevent cybersickness mitigation by fooling DL-based cybersickness models' outcomes. We implement these attacks using a testbed with a custom-built VR Maze simulation and an HTC Vive Pro Eye headset, and we open-source our implementation for widespread adoption by VR developers and researchers. Results show that these adversarial attacks are capable of successfully fooling the system. For instance, the C&W attack results in a 5.94x decrease in accuracy for the Transformer-based cybersickness model compared to the accuracy without the attack.
-      oai:arXiv.org:2512.17029v1
-      cs.CR
-      cs.AI
-      cs.HC
-      Mon, 22 Dec 2025 00:00:00 -0500
      new
-      http://creativecommons.org/licenses/by-nc-nd/4.0/
-      10.1109/ISMAR-Adjunct68609.2025.00061
-      Istiak Ahmed, Ripan Kumar Kundu, Khaza Anuarul Hoque

+      Seeing Justice Clearly: Handwritten Legal Document Translation with OCR and Vision-Language Models
+      https://arxiv.org/abs/2512.18004
+      arXiv:2512.18004v1 Announce Type: new
+Abstract: Handwritten text recognition (HTR) and machine translation continue to pose significant challenges, particularly for low-resource languages like Marathi, which lack large digitized corpora and exhibit high variability in handwriting styles. The conventional approach to address this involves a two-stage pipeline: an OCR system extracts text from handwritten images, which is then translated into the target language using a machine translation model. In this work, we explore and compare the performance of traditional OCR-MT pipelines with Vision Large Language Models that aim to unify these stages and directly translate handwritten text images in a single, end-to-end step. 
Our motivation is grounded in the urgent need for scalable, accurate translation systems to digitize legal records such as FIRs, charge sheets, and witness statements in India's district and high courts. We evaluate both approaches on a curated dataset of handwritten Marathi legal documents, with the goal of enabling efficient legal document processing, even in low-resource environments. Our findings offer actionable insights toward building robust, edge-deployable solutions that enhance access to legal information for non-native speakers and legal professionals alike. + oai:arXiv.org:2512.18004v1 + cs.CV cs.AI - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 + cs.CL + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - 10.1109/ISMAR-Adjunct68609.2025.00061 - Istiak Ahmed, Ripan Kumar Kundu, Khaza Anuarul Hoque + http://creativecommons.org/licenses/by/4.0/ + Shubham Kumar Nigam, Parjanya Aditya Shukla, Noel Shallum, Arnab Bhattacharya - GB-DQN: Gradient Boosted DQN Models for Non-stationary Reinforcement Learning - https://arxiv.org/abs/2512.17034 - arXiv:2512.17034v1 Announce Type: new -Abstract: Non-stationary environments pose a fundamental challenge for deep reinforcement learning, as changes in dynamics or rewards invalidate learned value functions and cause catastrophic forgetting. We propose \emph{Gradient-Boosted Deep Q-Networks (GB-DQN)}, an adaptive ensemble method that addresses model drift through incremental residual learning. Instead of retraining a single Q-network, GB-DQN constructs an additive ensemble in which each new learner is trained to approximate the Bellman residual of the current ensemble after drift. We provide theoretical results showing that each boosting step reduces the empirical Bellman residual and that the ensemble converges to the post-drift optimal value function under standard assumptions. 
Experiments across a diverse set of control tasks with controlled dynamics changes demonstrate faster recovery, improved stability, and greater robustness compared to DQN and common non-stationary baselines. - oai:arXiv.org:2512.17034v1 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Robotic VLA Benefits from Joint Learning with Motion Image Diffusion + https://arxiv.org/abs/2512.18007 + arXiv:2512.18007v1 Announce Type: new +Abstract: Vision-Language-Action (VLA) models have achieved remarkable progress in robotic manipulation by mapping multimodal observations and instructions directly to actions. However, they typically mimic expert trajectories without predictive motion reasoning, which limits their ability to reason about what actions to take. To address this limitation, we propose joint learning with motion image diffusion, a novel strategy that enhances VLA models with motion reasoning capabilities. Our method extends the VLA architecture with a dual-head design: while the action head predicts action chunks as in vanilla VLAs, an additional motion head, implemented as a Diffusion Transformer (DiT), predicts optical-flow-based motion images that capture future dynamics. The two heads are trained jointly, enabling the shared VLM backbone to learn representations that couple robot control with motion knowledge. This joint learning builds temporally coherent and physically grounded representations without modifying the inference pathway of standard VLAs, thereby maintaining test-time latency. Experiments in both simulation and real-world environments demonstrate that joint learning with motion image diffusion improves the success rate of pi-series VLAs to 97.5% on the LIBERO benchmark and 58.0% on the RoboTwin benchmark, yielding a 23% improvement in real-world performance and validating its effectiveness in enhancing the motion reasoning capability of large-scale VLAs. 
+ oai:arXiv.org:2512.18007v1 + cs.RO + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-sa/4.0/ - Chang-Hwan Lee, Chanseung Lee + Yu Fang, Kanchana Ranasinghe, Le Xue, Honglu Zhou, Juntao Tan, Ran Xu, Shelby Heinecke, Caiming Xiong, Silvio Savarese, Daniel Szafir, Mingyu Ding, Michael S. Ryoo, Juan Carlos Niebles - Infinite-Homography as Robust Conditioning for Camera-Controlled Video Generation - https://arxiv.org/abs/2512.17040 - arXiv:2512.17040v1 Announce Type: new -Abstract: Recent progress in video diffusion models has spurred growing interest in camera-controlled novel-view video generation for dynamic scenes, aiming to provide creators with cinematic camera control capabilities in post-production. A key challenge in camera-controlled video generation is ensuring fidelity to the specified camera pose, while maintaining view consistency and reasoning about occluded geometry from limited observations. To address this, existing methods either train trajectory-conditioned video generation model on trajectory-video pair dataset, or estimate depth from the input video to reproject it along a target trajectory and generate the unprojected regions. Nevertheless, existing methods struggle to generate camera-pose-faithful, high-quality videos for two main reasons: (1) reprojection-based approaches are highly susceptible to errors caused by inaccurate depth estimation; and (2) the limited diversity of camera trajectories in existing datasets restricts learned models. To address these limitations, we present InfCam, a depth-free, camera-controlled video-to-video generation framework with high pose fidelity. The framework integrates two key components: (1) infinite homography warping, which encodes 3D camera rotations directly within the 2D latent space of a video diffusion model. 
Conditioning on this noise-free rotational information, the residual parallax term is predicted through end-to-end training to achieve high camera-pose fidelity; and (2) a data augmentation pipeline that transforms existing synthetic multiview datasets into sequences with diverse trajectories and focal lengths. Experimental results demonstrate that InfCam outperforms baseline methods in camera-pose accuracy and visual fidelity, generalizing well from synthetic to real-world data. Link to our project page: https://emjay73.github.io/InfCam/
+ oai:arXiv.org:2512.18014v1 + cs.CL + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Min-Jung Kim, Jeongho Kim, Hoiyeong Jin, Junha Hyung, Jaegul Choo + Shubham Kumar Nigam, Tanuj Tyagi, Siddharth Shukla, Aditya Kumar Guru, Balaramamahanthi Deepak Patnaik, Danush Khanna, Noel Shallum, Kripabandhu Ghosh, Arnab Bhattacharya - Security Risks of Agentic Vehicles: A Systematic Analysis of Cognitive and Cross-Layer Threats - https://arxiv.org/abs/2512.17041 - arXiv:2512.17041v1 Announce Type: new -Abstract: Agentic AI is increasingly being explored and introduced in both manually driven and autonomous vehicles, leading to the notion of Agentic Vehicles (AgVs), with capabilities such as memory-based personalization, goal interpretation, strategic reasoning, and tool-mediated assistance. While frameworks such as the OWASP Agentic AI Security Risks highlight vulnerabilities in reasoning-driven AI systems, they are not designed for safety-critical cyber-physical platforms such as vehicles, nor do they account for interactions with other layers such as perception, communication, and control layers. This paper investigates security threats in AgVs, including OWASP-style risks and cyber-attacks from other layers affecting the agentic layer. By introducing a role-based architecture for agentic vehicles, consisting of a Personal Agent and a Driving Strategy Agent, we will investigate vulnerabilities in both agentic AI layer and cross-layer risks, including risks originating from upstream layers (e.g., perception layer, control layer, etc.). A severity matrix and attack-chain analysis illustrate how small distortions can escalate into misaligned or unsafe behavior in both human-driven and autonomous vehicles. The resulting framework provides the first structured foundation for analyzing security risks of agentic AI in both current and emerging vehicle platforms. 
- oai:arXiv.org:2512.17041v1 - cs.AI - cs.SY + Robustness of Delayed Higher Order Sliding Mode Control + https://arxiv.org/abs/2512.18018 + arXiv:2512.18018v1 Announce Type: new +Abstract: In this paper, the feasibility of recently developed higher order delayed sliding mode controllers is addressed. With this aim the robustness against the measurement noise and mismatched perturbations for the systems governed by such controllers is established using ISS implicit Lyapunov-Razumikhin function approach. To illustrate proposed results, a simulation example validating the efficiency of the method is provided. + oai:arXiv.org:2512.18018v1 eess.SY - Mon, 22 Dec 2025 00:00:00 -0500 + cs.SY + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ali Eslami, Jiangbo Yu + Moussa Labbadi, Denis Efimov, Leonid Fridman - UniRel-R1: RL-tuned LLM Reasoning for Knowledge Graph Relational Question Answering - https://arxiv.org/abs/2512.17043 - arXiv:2512.17043v1 Announce Type: new -Abstract: Knowledge Graph Question Answering (KGQA) has traditionally focused on entity-centric queries that return a single answer entity. However, real-world queries are often relational, seeking to understand how entities are associated. In this work, we introduce relation-centric KGQA, a complementary setting where the answer is a subgraph capturing the semantic connections among entities rather than an individual entity. The main challenge lies in the abundance of candidate subgraphs, where trivial or overly common connections often obscure the identification of unique and informative answers. To tackle this, we propose UniRel-R1, a unified framework that integrates subgraph selection, multi-stage graph pruning, and an LLM fine-tuned with reinforcement learning. The reward function is designed to encourage compact and specific subgraphs with more informative relations and lower-degree intermediate entities. 
Extensive experiments show that UniRel-R1 achieves significant gains in connectivity and reward over Vanilla baselines and generalizes effectively to unseen entities and relations. - oai:arXiv.org:2512.17043v1 + Specification and Detection of LLM Code Smells + https://arxiv.org/abs/2512.18020 + arXiv:2512.18020v1 Announce Type: new +Abstract: Large Language Models (LLMs) have gained massive popularity in recent years and are increasingly integrated into software systems for diverse purposes. However, poorly integrating them in source code may undermine software system quality. Yet, to our knowledge, there is no formal catalog of code smells specific to coding practices for LLM inference. In this paper, we introduce the concept of LLM code smells and formalize five recurrent problematic coding practices related to LLM inference in software systems, based on relevant literature. We extend the detection tool SpecDetect4AI to cover the newly defined LLM code smells and use it to validate their prevalence in a dataset of 200 open-source LLM systems. Our results show that LLM code smells affect 60.50% of the analyzed systems, with a detection precision of 86.06%. + oai:arXiv.org:2512.18020v1 + cs.SE cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yinxu Tang, Chengsong Huang, Jiaxin Huang, William Yeoh + http://creativecommons.org/licenses/by/4.0/ + Brahim Mahmoudi, Zacharie Chenail-Larcher, Naouel Moha, Quentin Stievenert, Florent Avellaneda - Sedna: Sharding transactions in multiple concurrent proposer blockchains - https://arxiv.org/abs/2512.17045 - arXiv:2512.17045v1 Announce Type: new -Abstract: Modern blockchains increasingly adopt multi-proposer (MCP) consensus to remove single-leader bottlenecks and improve censorship resistance. However, MCP alone does not resolve how users should disseminate transactions to proposers. 
Today, users either naively replicate full transactions to many proposers, sacrificing goodput and exposing payloads to MEV, or target only a few proposers and accept weak censorship-resistance and latency guarantees. This yields a practical trilemma among censorship resistance, low latency, and reasonable cost (in fees or system goodput).
Varshney - Toward Optimal Approximations for Resource-Minimization for Fire Containment on Trees and Non-Uniform k-Center - https://arxiv.org/abs/2512.17049 - arXiv:2512.17049v1 Announce Type: new -Abstract: One of the most elementary spreading models on graphs can be described by a fire spreading from a burning vertex in discrete time steps. At each step, all neighbors of burning vertices catch fire. A well-studied extension to model fire containment is to allow for fireproofing a number $B$ of non-burning vertices at each step. Interestingly, basic computational questions about this model are computationally hard even on trees. One of the most prominent such examples is Resource Minimization for Fire Containment (RMFC), which asks how small $B$ can be chosen so that a given subset of vertices will never catch fire. Despite recent progress on RMFC on trees, prior work left a significant gap in terms of its approximability. We close this gap by providing an optimal $2$-approximation and an asymptotic PTAS, resolving two open questions in the literature. Both results are obtained in a unified way, by first designing a PTAS for a smooth variant of RMFC, which is obtained through a careful LP-guided enumeration procedure. - Moreover, we show that our new techniques, with several additional ingredients, carry over to the non-uniform $k$-center problem (NUkC), by exploiting a link between RMFC on trees and NUkC established by Chakrabarty, Goyal, and Krishnaswamy. This leads to the first approximation algorithm for NUkC that is optimal in terms of the number of additional centers that have to be opened. - oai:arXiv.org:2512.17049v1 - cs.DS - Mon, 22 Dec 2025 00:00:00 -0500 + CoPE: A Small Language Model for Steerable and Scalable Content Labeling + https://arxiv.org/abs/2512.18027 + arXiv:2512.18027v1 Announce Type: new +Abstract: This paper details the methodology behind CoPE, a policy-steerable small language model capable of fast and accurate content labeling. 
We present a novel training curriculum called Contradictory Example Training that enables the model to learn policy interpretation rather than mere policy memorization. We also present a novel method for generating content policies, called Binocular Labeling, which enables rapid construction of unambiguous training datasets. When evaluated across seven different harm areas, CoPE exhibits equal or superior accuracy to frontier models at only 1% of their size. We openly release a 9 billion parameter version of the model that can be run on a single consumer-grade GPU. Models like CoPE represent a paradigm shift for classifier systems. By turning an ML task into a policy writing task, CoPE opens up new design possibilities for the governance of online platforms.
We further provide a test criterion to determine whether the true underlying distribution is recoverable under per-sample information loss, and show that in otherwise unrecoverable cases, a small number of clean samples can render the distribution largely recoverable. Building on these insights, we introduce SFBD-OMNI, a bridge model-based framework that maps corrupted sample distributions to the ground-truth distribution. Our method generalizes Stochastic Forward-Backward Deconvolution (SFBD; Lu et al., 2025) to handle arbitrary measurement models beyond Gaussian corruption. Experiments across benchmark datasets and diverse measurement settings demonstrate significant improvements in both qualitative and quantitative performance. - oai:arXiv.org:2512.17051v1 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Embodied4C: Measuring What Matters for Embodied Vision-Language Navigation + https://arxiv.org/abs/2512.18028 + arXiv:2512.18028v1 Announce Type: new +Abstract: Vision-language navigation requires agents to reason and act under constraints of embodiment. While vision-language models (VLMs) demonstrate strong generalization, current benchmarks provide limited understanding of how embodiment -- i.e., the choice of physical platform, sensor configuration, and modality alignment -- influences perception, reasoning, and control. We introduce Embodied4C, a closed-loop benchmark designed as a Turing test for embodied reasoning. The benchmark evaluates the core embodied capabilities of VLMs across three heterogeneous embodiments -- autonomous vehicles, aerial drones, and robotic manipulators -- through approximately 1.1K one-shot reasoning questions and 58 goal-directed navigation tasks. These tasks jointly assess four foundational dimensions: semantic, spatial, temporal, and physical reasoning. Each embodiment presents dynamic sensor configurations and environment variations to probe generalization beyond platform-specific adaptation. 
To prevent embodiment overfitting, Embodied4C integrates domain-far queries targeting abstract and cross-context reasoning. Comprehensive evaluation across ten state-of-the-art VLMs and four embodied control baselines shows that cross-modal alignment and instruction tuning matter more than scale, while spatial and temporal reasoning remains the primary bottleneck for reliable embodied competence. + oai:arXiv.org:2512.18028v1 + cs.RO + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Haoye Lu, Yaoliang Yu, Darren Ho + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Tin Stribor Sohn, Maximilian Dillitzer, Jason J. Corso, Eric Sax - Dynamic Tool Dependency Retrieval for Efficient Function Calling - https://arxiv.org/abs/2512.17052 - arXiv:2512.17052v1 Announce Type: new -Abstract: Function calling agents powered by Large Language Models (LLMs) select external tools to automate complex tasks. On-device agents typically use a retrieval module to select relevant tools, improving performance and reducing context length. However, existing retrieval methods rely on static and limited inputs, failing to capture multi-step tool dependencies and evolving task context. This limitation often introduces irrelevant tools that mislead the agent, degrading efficiency and accuracy. We propose Dynamic Tool Dependency Retrieval (DTDR), a lightweight retrieval method that conditions on both the initial query and the evolving execution context. DTDR models tool dependencies from function calling demonstrations, enabling adaptive retrieval as plans unfold. We benchmark DTDR against state-of-the-art retrieval methods across multiple datasets and LLM backbones, evaluating retrieval precision, downstream task accuracy, and computational efficiency. Additionally, we explore strategies to integrate retrieved tools into prompts. 
Our results show that dynamic tool retrieval improves function calling success rates between $23\%$ and $104\%$ compared to state-of-the-art static retrievers. - oai:arXiv.org:2512.17052v1 + A Dataset and Benchmarks for Atrial Fibrillation Detection from Electrocardiograms of Intensive Care Unit Patients + https://arxiv.org/abs/2512.18031 + arXiv:2512.18031v1 Announce Type: new +Abstract: Objective: Atrial fibrillation (AF) is the most common cardiac arrhythmia experienced by intensive care unit (ICU) patients and can cause adverse health effects. In this study, we publish a labelled ICU dataset and benchmarks for AF detection. Methods: We compared machine learning models across three data-driven artificial intelligence (AI) approaches: feature-based classifiers, deep learning (DL), and ECG foundation models (FMs). This comparison addresses a critical gap in the literature and aims to pinpoint which AI approach is best for accurate AF detection. Electrocardiograms (ECGs) from a Canadian ICU and the 2021 PhysioNet/Computing in Cardiology Challenge were used to conduct the experiments. Multiple training configurations were tested, ranging from zero-shot inference to transfer learning. Results: On average and across both datasets, ECG FMs performed best, followed by DL, then feature-based classifiers. The model that achieved the top F1 score on our ICU test set was ECG-FM through a transfer learning strategy (F1=0.89). Conclusion: This study demonstrates promising potential for using AI to build an automatic patient monitoring system. Significance: By publishing our labelled ICU dataset (LinkToBeAdded) and performance benchmarks, this work enables the research community to continue advancing the state-of-the-art in AF detection in the ICU. 
+ oai:arXiv.org:2512.18031v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Bhrij Patel, Davide Belli, Amir Jalalirad, Maximilian Arnold, Aleksandr Ermovol, Bence Major + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Sarah Nassar, Nooshin Maghsoodi, Sophia Mannina, Shamel Addas, Stephanie Sibley, Gabor Fichtinger, David Pichora, David Maslove, Purang Abolmaesumi, Parvin Mousavi - Knowledge Distillation with Structured Chain-of-Thought for Text-to-SQL - https://arxiv.org/abs/2512.17053 - arXiv:2512.17053v1 Announce Type: new -Abstract: Deploying accurate Text-to-SQL systems at the enterprise level faces a difficult trilemma involving cost, security and performance. Current solutions force enterprises to choose between expensive, proprietary Large Language Models (LLMs) and low-performing Small Language Models (SLMs). Efforts to improve SLMs often rely on distilling reasoning from large LLMs using unstructured Chain-of-Thought (CoT) traces, a process that remains inherently ambiguous. Instead, we hypothesize that a formal, structured reasoning representation provides a clearer, more reliable teaching signal, as the Text-to-SQL task requires explicit and precise logical steps. To evaluate this hypothesis, we propose Struct-SQL, a novel Knowledge Distillation (KD) framework that trains an SLM to emulate a powerful large LLM. Consequently, we adopt a query execution plan as a formal blueprint to derive this structured reasoning. Our SLM, distilled with structured CoT, achieves an absolute improvement of 8.1% over an unstructured CoT distillation baseline. A detailed error analysis reveals that a key factor in this gain is a marked reduction in syntactic errors. This demonstrates that teaching a model to reason using a structured logical blueprint is beneficial for reliable SQL generation in SLMs. 
- oai:arXiv.org:2512.17053v1 - cs.CL - cs.AI - cs.DB - Mon, 22 Dec 2025 00:00:00 -0500 + Design and Integration of Thermal and Vibrotactile Feedback for Lifelike Touch in Social Robots + https://arxiv.org/abs/2512.18032 + arXiv:2512.18032v1 Announce Type: new +Abstract: Zoomorphic Socially Assistive Robots (SARs) offer an alternative source of social touch for individuals who cannot access animal companionship. However, current SARs provide only limited, passive touch-based interactions and lack the rich haptic cues, such as warmth, heartbeat or purring, that are characteristic of human-animal touch. This limits their ability to evoke emotionally engaging, life-like physical interactions. + We present a multimodal tactile prototype, which was used to augment the established PARO robot, integrating thermal and vibrotactile feedback to simulate feeling biophysiological signals. A flexible heating interface delivers body-like warmth, while embedded actuators generate heartbeat-like rhythms and continuous purring sensations. These cues were iteratively designed and calibrated with input from users and haptics experts. We outline the design process and offer reproducible guidelines to support the development of emotionally resonant and biologically plausible touch interactions with SARs. + oai:arXiv.org:2512.18032v1 + cs.RO + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Khushboo Thaker, Yony Bresler + Jacqueline Borgstedt, Jake Bhattacharyya, Matteo Iovino, Frank E. Pollick, Stephen Brewster - When to compute in space - https://arxiv.org/abs/2512.17054 - arXiv:2512.17054v1 Announce Type: new -Abstract: Spacecraft increasingly rely on heterogeneous computing resources spanning onboard flight computers, orbital data centers, ground station edge nodes, and terrestrial cloud infrastructure. 
Selecting where a workload should execute is a nontrivial multi-objective problem driven by latency, reliability, power, communication constraints, cost, and regulatory feasibility. This paper introduces a quantitative optimization framework that formalizes compute location selection through empirically measurable metrics, normalized scoring, feasibility constraints, and a unified utility function designed to operate under incomplete information. We evaluate the model on two representative workloads, demonstrating how the framework compares compute tiers and identifies preferred deployment locations. The approach provides a structured, extensible method for mission designers to reason about compute placement in emerging space architectures.
Classical Control Barrier Function (CBF) filters assume global availability of the safety function - its value and gradient must be known everywhere - an assumption incompatible with sensing-limited settings, and the resulting filters often exhibit nonsmooth switching when constraints activate. We propose two complementary perception-aware safety filters applicable to general control-invariant safety sets. The first introduces a smooth perception gate that modulates barrier constraints based on sensing range, yielding a closed-form Lipschitz-safe controller with forward-invariance guarantees. The second replaces the hard CBF constraint with a differentiable penalty term, leading to a smooth unconstrained optimization-based safety filter consistent with CBF principles. For both designs, we establish existence, uniqueness, and forward invariance of the closed-loop trajectories. Numerical results demonstrate that the proposed smooth filters enable the synthesis of higher-order tracking controllers for systems such as drones and second-order ground robots, offering substantially smoother and more robust safety-critical behaviors than classical CBF-based filters. - oai:arXiv.org:2512.17057v1 - eess.SY - cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 + Conflict-Driven Clause Learning with VSIDS Heuristics for Discrete Facility Layout + https://arxiv.org/abs/2512.18034 + arXiv:2512.18034v1 Announce Type: new +Abstract: This paper studies the use of Conflict-Driven Clause Learning (CDCL) with VSIDS heuristics as a computational engine for discrete facility layout problems. The facility layout problem is modeled as a combinatorial assignment problem with dense logical structure arising from adjacency, separation, and slot-availability constraints. We develop a CNF-based formulation for layout feasibility and compare CDCL-based SAT solving against CP-SAT and MILP formulations under a unified benchmarking framework. 
Empirical results show that CDCL exhibits near-constant runtime behavior for feasibility detection across increasing problem sizes and constraint densities, while CP-SAT and MILP display polynomial and exponential scaling respectively. To address the limitation of CDCL in objective optimization, we introduce two hybrid architectures that combine CDCL-based feasibility search with CP-SAT optimization. The first architecture rapidly enumerates feasible layouts to trade optimality for speed, while the second uses CDCL to generate warm-start solutions that accelerate exact optimization. The results demonstrate that hybrid approaches can significantly reduce time-to-solution while preserving correctness guarantees, clarifying the algorithmic trade-offs between clause-learning search and exact optimization methods in large-scale discrete layout problems. + oai:arXiv.org:2512.18034v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Lyes Smaili, Soulaimane Berkane + Joshua Gibson, Kapil Dhakal - Universal consistency of the $k$-NN rule in metric spaces and Nagata dimension. III - https://arxiv.org/abs/2512.17058 - arXiv:2512.17058v1 Announce Type: new -Abstract: We prove the last remaining implication allowing to claim the equivalence of the following conditions for a complete separable metric space $X$: - (1) The $k$-nearest neighbour classifier is (weakly) universally consistent in $X$, (2) The strong Lebesgue--Besicovitch differentiation property holds in $X$ for every locally finite Borel measure, (3) $X$ is sigma-finite dimensional in the sense of Nagata. - The equivalence (2)$\iff$(3) was announced by Preiss (1983), while a detailed proof of the implication (3)$\Rightarrow$(2) has appeared in Assouad and Quentin de Gromard (2006). The implication (2)$\Rightarrow$(1) was established by C\'erou and Guyader (2006). We prove the implication (1)$\Rightarrow$(3). 
The result was conjectured in the first article in the series (Collins, Kumari, Pestov 2020), and here we also correct a wrong claim made in the second article (Kumari and Pestov 2024). - oai:arXiv.org:2512.17058v1 + Towards Benchmarking Privacy Vulnerabilities in Selective Forgetting with Large Language Models + https://arxiv.org/abs/2512.18035 + arXiv:2512.18035v1 Announce Type: new +Abstract: The rapid advancements in artificial intelligence (AI) have primarily focused on the process of learning from data to acquire knowledgeable learning systems. As these systems are increasingly deployed in critical areas, ensuring their privacy and alignment with human values is paramount. Recently, selective forgetting (also known as machine unlearning) has shown promise for privacy and data removal tasks, and has emerged as a transformative paradigm shift in the field of AI. It refers to the ability of a model to selectively erase the influence of previously seen data, which is especially important for compliance with modern data protection regulations and for aligning models with human values. Despite its promise, selective forgetting raises significant privacy concerns, especially when the data involved come from sensitive domains. While new unlearning-induced privacy attacks are continuously proposed, each is shown to outperform its predecessors using different experimental settings, which can lead to overly optimistic and potentially unfair assessments that may disproportionately favor one particular attack over the others. In this work, we present the first comprehensive benchmark for evaluating privacy vulnerabilities in selective forgetting. We extensively investigate privacy vulnerabilities of machine unlearning techniques and benchmark privacy leakage across a wide range of victim data, state-of-the-art unlearning privacy attacks, unlearning methods, and model architectures. 
We systematically evaluate and identify critical factors related to unlearning-induced privacy leakage. With our novel insights, we aim to provide a standardized tool for practitioners seeking to deploy customized unlearning applications with faithful privacy assessments. + oai:arXiv.org:2512.18035v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Vladimir G. Pestov - - - On the Role of Contextual Information and Ego States in LLM Agent Behavior for Transactional Analysis Dialogues - https://arxiv.org/abs/2512.17060 - arXiv:2512.17060v1 Announce Type: new -Abstract: LLM-powered agents are now used in many areas, from customer support to education, and there is increasing interest in their ability to act more like humans. This includes fields such as social, political, and psychological research, where the goal is to model group dynamics and social behavior. However, current LLM agents often lack the psychological depth and consistency needed to capture the real patterns of human thinking. They usually provide direct or statistically likely answers, but they miss the deeper goals, emotional conflicts, and motivations that drive real human interactions. This paper proposes a Multi-Agent System (MAS) inspired by Transactional Analysis (TA) theory. In the proposed system, each agent is divided into three ego states - Parent, Adult, and Child. The ego states are treated as separate knowledge structures with their own perspectives and reasoning styles. To enrich their response process, they have access to an information retrieval mechanism that allows them to retrieve relevant contextual information from their vector stores. This architecture is evaluated through ablation tests in a simulated dialogue scenario, comparing agents with and without information retrieval. 
The results are promising and open up new directions for exploring how psychologically grounded structures can enrich agent behavior. The contribution is an agent architecture that integrates Transactional Analysis theory with contextual information retrieval to enhance the realism of LLM-based multi-agent simulations. - oai:arXiv.org:2512.17060v1 - cs.MA - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Monika Zamojska, Jaros{\l}aw A. Chudziak + Wei Qian, Chenxu Zhao, Yangyi Li, Mengdi Huai - Lang2Manip: A Tool for LLM-Based Symbolic-to-Geometric Planning for Manipulation - https://arxiv.org/abs/2512.17062 - arXiv:2512.17062v1 Announce Type: new -Abstract: Simulation is essential for developing robotic manipulation systems, particularly for task and motion planning (TAMP), where symbolic reasoning interfaces with geometric, kinematic, and physics-based execution. Recent advances in Large Language Models (LLMs) enable robots to generate symbolic plans from natural language, yet executing these plans in simulation often requires robot-specific engineering or planner-dependent integration. In this work, we present a unified pipeline that connects an LLM-based symbolic planner with the Kautham motion planning framework to achieve generalizable, robot-agnostic symbolic-to-geometric manipulation. Kautham provides ROS-compatible support for a wide range of industrial manipulators and offers geometric, kinodynamic, physics-driven, and constraint-based motion planning under a single interface. Our system converts language instructions into symbolic actions and computes and executes collision-free trajectories using any of Kautham's planners without additional coding. The result is a flexible and scalable tool for language-driven TAMP that is generalized across robots, planning modalities, and manipulation tasks. 
- oai:arXiv.org:2512.17062v1 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 + Fast Rational Search via Stern-Brocot Tree + https://arxiv.org/abs/2512.18036 + arXiv:2512.18036v1 Announce Type: new +Abstract: We revisit the problem of rational search: given an unknown rational number $\alpha = \frac{a}{b} \in (0,1)$ with $b \leq n$, the goal is to identify $\alpha$ using comparison queries of the form ``$\beta \leq \alpha$?''. The problem has been studied several decades ago and optimal query algorithms are known. We present a new algorithm for rational search based on a compressed traversal of the Stern--Brocot tree, which appeared to have been overlooked in the literature. This approach also naturally extends to two related problems that, to the best of our knowledge, have not been previously addressed: (i) unbounded rational search, where the bound $n$ is unknown, and (ii) computing the best (in a precise sense) rational approximation of an unknown real number using only comparison queries. + oai:arXiv.org:2512.18036v1 + cs.DS + math.CO + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Muhayy Ud Din, Jan Rosell, Waseem Akram, Irfan Hussain + Connor Weyers, N. V. Vinodchandran - Flux-Preserving Adaptive Finite State Projection for Multiscale Stochastic Reaction Networks - https://arxiv.org/abs/2512.17064 - arXiv:2512.17064v1 Announce Type: new -Abstract: The Finite State Projection (FSP) method approximates the Chemical Master Equation (CME) by restricting the dynamics to a finite subset of the (typically infinite) state space, enabling direct numerical solution with computable error bounds. Adaptive variants update this subset in time, but multiscale systems with widely separated reaction rates remain challenging, as low-probability bottleneck states can carry essential probability flux and the dynamics alternate between fast transients and slowly evolving stiff regimes. 
We propose a flux-based adaptive FSP method that uses probability flux to drive both state-space pruning and time-step selection. The pruning rule protects low-probability states with large outgoing flux, preserving connectivity in bottleneck systems, while the time-step rule adapts to the instantaneous total flux to handle rate constants spanning several orders of magnitude. Numerical experiments on stiff, oscillatory, and bottleneck reaction networks show that the method maintains accuracy while using substantially smaller state spaces. - oai:arXiv.org:2512.17064v1 - cs.CE - math.ST - stat.CO - stat.TH - Mon, 22 Dec 2025 00:00:00 -0500 + NodMAISI: Nodule-Oriented Medical AI for Synthetic Imaging + https://arxiv.org/abs/2512.18038 + arXiv:2512.18038v1 Announce Type: new +Abstract: Objective: Although medical imaging datasets are increasingly available, abnormal and annotation-intensive findings critical to lung cancer screening, particularly small pulmonary nodules, remain underrepresented and inconsistently curated. Methods: We introduce NodMAISI, an anatomically constrained, nodule-oriented CT synthesis and augmentation framework trained on a unified multi-source cohort (7,042 patients, 8,841 CTs, 14,444 nodules). The framework integrates: (i) a standardized curation and annotation pipeline linking each CT with organ masks and nodule-level annotations, (ii) a ControlNet-conditioned rectified-flow generator built on MAISI-v2's foundational blocks to enforce anatomy- and lesion-consistent synthesis, and (iii) lesion-aware augmentation that perturbs nodule masks (controlled shrinkage) while preserving surrounding anatomy to generate paired CT variants. Results: Across six public test datasets, NodMAISI improved distributional fidelity relative to MAISI-v2 (real-to-synthetic FID range 1.18 to 2.99 vs 1.69 to 5.21). 
In lesion detectability analysis using a MONAI nodule detector, NodMAISI substantially increased average sensitivity and more closely matched clinical scans (IMD-CT: 0.69 vs 0.39; DLCS24: 0.63 vs 0.20), with the largest gains for sub-centimeter nodules where MAISI-v2 frequently failed to reproduce the conditioned lesion. In downstream nodule-level malignancy classification trained on LUNA25 and externally evaluated on LUNA16, LNDbv4, and DLCS24, NodMAISI augmentation improved AUC by 0.07 to 0.11 at <=20% clinical data and by 0.12 to 0.21 at 10%, consistently narrowing the performance gap under data scarcity. + oai:arXiv.org:2512.18038v1 + cs.CV + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Aditya Dendukuri, Shivkumar Chandrasekaran, Linda Petzold + Fakrul Islam Tushar, Ehsan Samei, Cynthia Rudin, Joseph Y. Lo - XLM: A Python package for non-autoregressive language models - https://arxiv.org/abs/2512.17065 - arXiv:2512.17065v1 Announce Type: new -Abstract: In recent years, there has been a resurgence of interest in non-autoregressive text generation in the context of general language modeling. Unlike the well-established autoregressive language modeling paradigm, which has a plethora of standard training and inference libraries, implementations of non-autoregressive language modeling have largely been bespoke making it difficult to perform systematic comparisons of different methods. Moreover, each non-autoregressive language model typically requires it own data collation, loss, and prediction logic, making it challenging to reuse common components. In this work, we present the XLM python package, which is designed to make implementing small non-autoregressive language models faster with a secondary goal of providing a suite of small pre-trained models (through a companion xlm-models package) that can be used by the research community. The code is available at https://github.com/dhruvdcoder/xlm-core. 
- oai:arXiv.org:2512.17065v1 + Narrative Consolidation: Formulating a New Task for Unifying Multi-Perspective Accounts + https://arxiv.org/abs/2512.18041 + arXiv:2512.18041v1 Announce Type: new +Abstract: Processing overlapping narrative documents, such as legal testimonies or historical accounts, often aims not for compression but for a unified, coherent, and chronologically sound text. Standard Multi-Document Summarization (MDS), with its focus on conciseness, fails to preserve narrative flow. This paper formally defines this challenge as a new NLP task: Narrative Consolidation, where the central objectives are chronological integrity, completeness, and the fusion of complementary details. To demonstrate the critical role of temporal structure in this task, we introduce Temporal Alignment Event Graph (TAEG), a graph structure that explicitly models chronology and event alignment. By applying a standard centrality algorithm to TAEG, our method functions as a version selection mechanism, choosing the most central representation of each event in its correct temporal position. In a study on the four Biblical Gospels, this structure-focused approach guarantees perfect temporal ordering (Kendall's Tau of 1.000) by design and dramatically improves content metrics (e.g., +357.2% in ROUGE-L F1). The success of this baseline method validates the formulation of Narrative Consolidation as a relevant task and establishes that an explicit temporal backbone is a fundamental component for its resolution. + oai:arXiv.org:2512.18041v1 cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Dhruvesh Patel, Durga Prasad Maram, Sai Sreenivas Chintha, Benjamin Rozonoyer, Andrew McCallum + Roger A. Finger, Eduardo G. Cortes, Sandro J. Rigo, Gabriel de O. 
Ramos - Realistic threat perception drives intergroup conflict: A causal, dynamic analysis using generative-agent simulations - https://arxiv.org/abs/2512.17066 - arXiv:2512.17066v1 Announce Type: new -Abstract: Human conflict is often attributed to threats against material conditions and symbolic values, yet it remains unclear how they interact and which dominates. Progress is limited by weak causal control, ethical constraints, and scarce temporal data. We address these barriers using simulations of large language model (LLM)-driven agents in virtual societies, independently varying realistic and symbolic threat while tracking actions, language, and attitudes. Representational analyses show that the underlying LLM encodes realistic threat, symbolic threat, and hostility as distinct internal states, that our manipulations map onto them, and that steering these states causally shifts behavior. Our simulations provide a causal account of threat-driven conflict over time: realistic threat directly increases hostility, whereas symbolic threat effects are weaker, fully mediated by ingroup bias, and increase hostility only when realistic threat is absent. Non-hostile intergroup contact buffers escalation, and structural asymmetries concentrate hostility among majority groups. - oai:arXiv.org:2512.17066v1 + Securing Agentic AI Systems -- A Multilayer Security Framework + https://arxiv.org/abs/2512.18043 + arXiv:2512.18043v1 Announce Type: new +Abstract: Securing Agentic Artificial Intelligence (AI) systems requires addressing the complex cyber risks introduced by autonomous, decision-making, and adaptive behaviors. Agentic AI systems are increasingly deployed across industries, organizations, and critical sectors such as cybersecurity, finance, and healthcare. However, their autonomy introduces unique security challenges, including unauthorized actions, adversarial manipulation, and dynamic environmental interactions. 
Existing AI security frameworks do not adequately address these challenges or the unique nuances of agentic AI. This research develops a lifecycle-aware security framework specifically designed for agentic AI systems using the Design Science Research (DSR) methodology. The paper introduces MAAIS, an agentic security framework, and the agentic AI CIAA (Confidentiality, Integrity, Availability, and Accountability) concept. MAAIS integrates multiple defense layers to maintain CIAA across the AI lifecycle. Framework validation is conducted by mapping with the established MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) AI tactics. The study contributes a structured, standardized, and framework-based approach for the secure deployment and governance of agentic AI in enterprise environments. This framework is intended for enterprise CISOs, security, AI platform, and engineering teams and offers a detailed step-by-step approach to securing agentic AI workloads. + oai:arXiv.org:2512.18043v1 + cs.CR cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + cs.CY + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Suhaib Abdurahman, Farzan Karimi-Malekabadi, Chenxiao Yu, Nour S. Kteily, Morteza Dehghani + Sunil Arora, John Hastings - Bots Don't Sit Still: A Longitudinal Study of Bot Behaviour Change, Temporal Drift, and Feature-Structure Evolution - https://arxiv.org/abs/2512.17067 - arXiv:2512.17067v1 Announce Type: new -Abstract: Social bots are now deeply embedded in online platforms for promotion, persuasion, and manipulation. Most bot-detection systems still treat behavioural features as static, implicitly assuming bots behave stationarily over time. We test that assumption for promotional Twitter bots, analysing change in both individual behavioural signals and the relationships between them. Using 2,615 promotional bot accounts and 2.8M tweets, we build yearly time series for ten content-based meta-features. 
Augmented Dickey-Fuller and KPSS tests plus linear trends show all ten are non-stationary: nine increase over time, while language diversity declines slightly. - Stratifying by activation generation and account age reveals systematic differences: second-generation bots are most active and link-heavy; short-lived bots show intense, repetitive activity with heavy hashtag/URL use; long-lived bots are less active but more linguistically diverse and use emojis more variably. We then analyse co-occurrence across generations using 18 interpretable binary features spanning actions, topic similarity, URLs, hashtags, sentiment, emojis, and media (153 pairs). Chi-square tests indicate almost all pairs are dependent. Spearman correlations shift in strength and sometimes polarity: many links (e.g. multiple hashtags with media; sentiment with URLs) strengthen, while others flip from weakly positive to weakly or moderately negative. Later generations show more structured combinations of cues. - Taken together, these studies provide evidence that promotional social bots adapt over time at both the level of individual meta-features and the level of feature interdependencies, with direct implications for the design and evaluation of bot-detection systems trained on historical behavioural features. - oai:arXiv.org:2512.17067v1 - cs.HC - cs.AI - cs.SI - Mon, 22 Dec 2025 00:00:00 -0500 + YolovN-CBi: A Lightweight and Efficient Architecture for Real-Time Detection of Small UAVs + https://arxiv.org/abs/2512.18046 + arXiv:2512.18046v1 Announce Type: new +Abstract: Unmanned Aerial Vehicles, commonly known as, drones pose increasing risks in civilian and defense settings, demanding accurate and real-time drone detection systems. However, detecting drones is challenging because of their small size, rapid movement, and low visual contrast. 
A modified architecture of YolovN called the YolovN-CBi is proposed that incorporates the Convolutional Block Attention Module (CBAM) and the Bidirectional Feature Pyramid Network (BiFPN) to improve sensitivity to small object detections. A curated training dataset consisting of 28K images is created with various flying objects and a local test dataset is collected with 2500 images consisting of very small drone objects. The proposed architecture is evaluated on four benchmark datasets, along with the local test dataset. The baseline Yolov5 and the proposed Yolov5-CBi architecture outperform newer Yolo versions, including Yolov8 and Yolov12, in the speed-accuracy trade-off for small object detection. Four other variants of the proposed CBi architecture are also proposed and evaluated, which vary in the placement and usage of CBAM and BiFPN. These variants are further distilled using knowledge distillation techniques for edge deployment, using a Yolov5m-CBi teacher and a Yolov5n-CBi student. The distilled model achieved an mAP@0.5:0.9 of 0.6573, representing a 6.51% improvement over the teacher's score of 0.6171, highlighting the effectiveness of the distillation process. The distilled model is 82.9% faster than the baseline model, making it more suitable for real-time drone detection. These findings highlight the effectiveness of the proposed CBi architecture, together with the distilled lightweight models, in advancing efficient and accurate real-time detection of small UAVs. 
+ oai:arXiv.org:2512.18046v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Ohoud Alzahrani, Russell Beale, Bob Hendley + http://creativecommons.org/licenses/by/4.0/ + Ami Pandat, Punna Rajasekhar, Gopika Vinod, Rohit Shukla - Bandwidth-Efficient Adaptive Mixture-of-Experts via Low-Rank Compensation - https://arxiv.org/abs/2512.17073 - arXiv:2512.17073v1 Announce Type: new -Abstract: Mixture-of-Experts (MoE) models scale capacity via sparse activation but stress memory and bandwidth. Offloading alleviates GPU memory by fetching experts on demand, yet token-level routing causes irregular transfers that make inference I/O-bound. Static uniform quantization reduces traffic but degrades accuracy under aggressive compression by ignoring expert heterogeneity. We present Bandwidth-Efficient Adaptive Mixture-of-Experts via Low-Rank Compensation, which performs router-guided precision restoration using precomputed low-rank compensators. At inference time, our method transfers compact low-rank factors with Top-n (n<k) experts per token and applies compensation to them, keeping others low-bit. Integrated with offloading on GPU and GPU-NDP systems, our method delivers a superior bandwidth-accuracy trade-off and improved throughput. - oai:arXiv.org:2512.17073v1 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Design of a Polymer-based Steerable Cannula for Neurosurgical Applications + https://arxiv.org/abs/2512.18048 + arXiv:2512.18048v1 Announce Type: new +Abstract: Robotically steerable compliant surgical tools offer several advantages over rigid tools, including enhanced dexterity, reduced tissue damage, and the ability to generate non-linear trajectories in minimally invasive neurosurgical procedures. Many existing robotic neurosurgical tools are designed using stainless steel or nitinol materials. 
Using polymer-based materials instead can offer advantages such as reduced interference in magnetic resonance imaging, enhanced safety for guiding electrically powered instruments, and reduced tissue damage due to inherent compliance. Several polymer materials have been used in robotic surgical applications, such as polyimide, polycarbonate, and elastic resin. Various fabrication strategies have also been proposed, including standard microfabrication techniques, thermal drawing, and 3-D printing. In our previous work, a tendon-driven notched tube was designed for several neurosurgical robotic tools, utilizing laser micromachining to reduce the stiffness of the tube in certain directions. This fabrication method is desirable because it is a single-step process, has high precision, and does not require a cleanroom or harsh chemicals. Past studies have explored laser micromachining of polymer material for surgical applications such as stent fabrication. In this work, we explore extending the use of the laser micromachining approach to the fabrication of polyimide (PI) robotically steerable cannulas for neurosurgical applications. Utilizing the method presented in this work, we fabricated joints as small as 1.5 mm outer diameter (OD). Multiple joints were fabricated using PI tubes of different ODs, and the loading behavior of the fabricated joints was experimentally characterized. + oai:arXiv.org:2512.18048v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Nidhi Malhotra, Amber K. Rothe, Revanth Konda, Jaydev P. 
Desai - Perturb Your Data: Paraphrase-Guided Training Data Watermarking - https://arxiv.org/abs/2512.17075 - arXiv:2512.17075v1 Announce Type: new -Abstract: Training data detection is critical for enforcing copyright and data licensing, as Large Language Models (LLM) are trained on massive text corpora scraped from the internet. We present SPECTRA, a watermarking approach that makes training data reliably detectable even when it comprises less than 0.001% of the training corpus. SPECTRA works by paraphrasing text using an LLM and assigning a score based on how likely each paraphrase is, according to a separate scoring model. A paraphrase is chosen so that its score closely matches that of the original text, to avoid introducing any distribution shifts. To test whether a suspect model has been trained on the watermarked data, we compare its token probabilities against those of the scoring model. We demonstrate that SPECTRA achieves a consistent p-value gap of over nine orders of magnitude when detecting data used for training versus data not used for training, which is greater than all baselines tested. SPECTRA equips data owners with a scalable, deploy-before-release watermark that survives even large-scale LLM training. - oai:arXiv.org:2512.17075v1 - cs.CL + BlockSets: A Structured Visualization for Sets with Large Elements + https://arxiv.org/abs/2512.18055 + arXiv:2512.18055v1 Announce Type: new +Abstract: Visualizations of set systems frequently use enclosing geometries for the sets in combination with reduced representations of the elements, such as short text labels, small glyphs, or points. Hence they are generally unable to adequately represent sets whose elements are larger text fragments, images, or charts. In this paper we introduce BlockSets, a novel set visualization technique specifically designed for sets with large elements. BlockSets places the elements on a grid and uses rectilinear shapes as enclosing geometries. 
We describe integer linear programs that find high-quality layouts of the elements on the grid. Since not all set systems allow a compact contiguous representation in this form, we also present an algorithm that splits the visualization into parts when needed; our visual encoding highlights the parts for the user in the final visualization. BlockSets utilizes orthoconvex shapes which offer a good trade-off between compactness and readability. Finally, BlockSets renders the enclosing geometries as stacked opaque shapes. We describe an algorithm that finds a stacking order such that all shapes can be inferred. Such a stacking does not have to exist, but our algorithm did find a stacking for all real-world data sets that we tested. + oai:arXiv.org:2512.18055v1 + cs.CG + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Neda Novakova, Veselin Todorov, Steven van den Broek, Tim Dwyer, Bettina Speckmann + + + Probabilistic Digital Twins of Users: Latent Representation Learning with Statistically Validated Semantics + https://arxiv.org/abs/2512.18056 + arXiv:2512.18056v1 Announce Type: new +Abstract: Understanding user identity and behavior is central to applications such as personalization, recommendation, and decision support. Most existing approaches rely on deterministic embeddings or black-box predictive models, offering limited uncertainty quantification and little insight into what latent representations encode. We propose a probabilistic digital twin framework in which each user is modeled as a latent stochastic state that generates observed behavioral data. The digital twin is learned via amortized variational inference, enabling scalable posterior estimation while retaining a fully probabilistic interpretation. We instantiate this framework using a variational autoencoder (VAE) applied to a user-response dataset designed to capture stable aspects of user identity. 
Beyond standard reconstruction-based evaluation, we introduce a statistically grounded interpretation pipeline that links latent dimensions to observable behavioral patterns. By analyzing users at the extremes of each latent dimension and validating differences using nonparametric hypothesis tests and effect sizes, we demonstrate that specific dimensions correspond to interpretable traits such as opinion strength and decisiveness. Empirically, we find that user structure is predominantly continuous rather than discretely clustered, with weak but meaningful structure emerging along a small number of dominant latent axes. These results suggest that probabilistic digital twins can provide interpretable, uncertainty-aware representations that go beyond deterministic user embeddings. + oai:arXiv.org:2512.18056v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + cs.SI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Pranav Shetty, Mirazul Haque, Petr Babkin, Zhiqiang Ma, Xiaomo Liu, Manuela Veloso + http://creativecommons.org/publicdomain/zero/1.0/ + Daniel David - Taming the Memory Footprint Crisis: System Design for Production Diffusion LLM Serving - https://arxiv.org/abs/2512.17077 - arXiv:2512.17077v1 Announce Type: new -Abstract: Diffusion Large Language Models (dLLMs) have emerged as a promising alternative to Autoregressive Models (ARMs), utilizing parallel decoding to overcome sequential bottlenecks. However, existing research focuses primarily on kernel-level optimizations, lacking a holistic serving framework that addresses the unique memory dynamics of diffusion processes in production. We identify a critical "memory footprint crisis" specific to dLLMs, driven by monolithic logit tensors and the severe resource oscillation between compute-bound "Refresh" phases and bandwidth-bound "Reuse" phases. 
To bridge this gap, we present dLLM-Serve, an efficient dLLM serving system that co-optimizes memory footprint, computational scheduling, and generation quality. dLLM-Serve introduces Logit-Aware Activation Budgeting to decompose transient tensor peaks, a Phase-Multiplexed Scheduler to interleave heterogeneous request phases, and Head-Centric Sparse Attention to decouple logical sparsity from physical storage. We evaluate dLLM-Serve on diverse workloads (LiveBench, Burst, OSC) and GPUs (RTX 4090, L40S). Relative to the state-of-the-art baseline, dLLM-Serve improves throughput by 1.61$\times$-1.81$\times$ on the consumer-grade RTX 4090 and 1.60$\times$-1.74$\times$ on the server-grade NVIDIA L40S, while reducing tail latency by nearly 4$\times$ under heavy contention. dLLM-Serve establishes the first blueprint for scalable dLLM inference, converting theoretical algorithmic sparsity into tangible wall-clock acceleration across heterogeneous hardware. - oai:arXiv.org:2512.17077v1 - cs.DC - Mon, 22 Dec 2025 00:00:00 -0500 + FOODER: Real-time Facial Authentication and Expression Recognition + https://arxiv.org/abs/2512.18057 + arXiv:2512.18057v1 Announce Type: new +Abstract: Out-of-distribution (OOD) detection is essential for the safe deployment of neural networks, as it enables the identification of samples outside the training domain. We present FOODER, a real-time, privacy-preserving radar-based framework that integrates OOD-based facial authentication with facial expression recognition. FOODER operates using low-cost frequency-modulated continuous-wave (FMCW) radar and exploits both range-Doppler and micro range-Doppler representations. The authentication module employs a multi-encoder multi-decoder architecture with Body Part (BP) and Intermediate Linear Encoder-Decoder (ILED) components to classify a single enrolled individual as in-distribution while detecting all other faces as OOD. Upon successful authentication, an expression recognition module is activated. 
Concatenated radar representations are processed by a ResNet block to distinguish between dynamic and static facial expressions. Based on this categorization, two specialized MobileViT networks are used to classify dynamic expressions (smile, shock) and static expressions (neutral, anger). This hierarchical design enables robust facial authentication and fine-grained expression recognition while preserving user privacy by relying exclusively on radar data. Experiments conducted on a dataset collected with a 60 GHz short-range FMCW radar demonstrate that FOODER achieves an AUROC of 94.13% and an FPR95 of 18.12% for authentication, along with an average expression recognition accuracy of 94.70%. FOODER outperforms state-of-the-art OOD detection methods and several transformer-based architectures while operating efficiently in real time. + oai:arXiv.org:2512.18057v1 + cs.CV + cs.AI + cs.LG + eess.SP + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jiakun Fan, Yanglin Zhang, Xiangchen Li, Dimitrios S. Nikolopoulos + Sabri Mustafa Kahya, Muhammet Sami Yavuz, Boran Hamdi Sivrikaya, Eckehard Steinbach - Can Large Reasoning Models Improve Accuracy on Mathematical Tasks Using Flawed Thinking? - https://arxiv.org/abs/2512.17079 - arXiv:2512.17079v1 Announce Type: new -Abstract: Chain-of-thought (CoT) prompting has become central to mathematical reasoning in large language models, yet models remain brittle to early errors: a single arithmetic slip or unjustified inference typically propagates uncorrected to an incorrect final answer. We investigate whether training on intentionally flawed reasoning traces can teach models to detect and recover from such errors without degrading standard problem-solving ability. 
Using competition-level problems from MATH-lighteval, we generate CoT prefixes containing exactly one controlled error, either a calculation error (sign flips, dropped terms) or a reasoning error (misapplied rules, unjustified logical steps), and fine-tune Qwen3-4B with GRPO using a binary final-answer reward. Our Mixed-CoT-RL model matches standard RL on clean problems (41% vs 41%) while substantially outperforming it on problems prefilled with flawed reasoning (24% vs 19%). Notably, clean-only RL fine-tuning degrades robustness below the untuned baseline (19% vs. 20%), indicating that conventional training increases susceptibility to misleading prefills. Among error types, training on reasoning errors yields greater robustness gains than calculation errors alone, with mixed training performing best. These findings demonstrate that exposure to flawed traces during training can improve error-recovery behavior without sacrificing accuracy, suggesting a path toward more robust mathematical reasoning in LLMs. - oai:arXiv.org:2512.17079v1
      cs.LG
      cs.AI
      Mon, 22 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Saraswathy Amjith, Mihika Dusad, Neha Muramalla, Shweta Shah
    
    
      Approximation and learning with compositional tensor trains
      https://arxiv.org/abs/2512.18059
      arXiv:2512.18059v1 Announce Type: new
Abstract: We introduce compositional tensor trains (CTTs) for the approximation of multivariate functions, a class of models obtained by composing low-rank functions in the tensor-train format. This format can encode standard approximation tools, such as (sparse) polynomials, deep neural networks (DNNs) with fixed width, or tensor networks with arbitrary permutation of the inputs, or more general affine coordinate transformations, with similar complexities. This format can be viewed as a DNN with width exponential in the input dimension and structured weight matrices. Compared to DNNs, this format enables controlled compression at the layer level using efficient tensor algebra. &#13;
On the optimization side, we derive a layerwise algorithm inspired by natural gradient descent, allowing us to exploit efficient low-rank tensor algebra. This relies on low-rank estimates of Gram matrices and tensor-structured random sketching. Viewing the format as a discrete dynamical system, we also derive an optimization algorithm inspired by numerical methods in optimal control. Numerical experiments on regression tasks demonstrate the expressivity of the new format and the relevance of the proposed optimization algorithms. Overall, CTTs combine the expressivity of compositional models with the algorithmic efficiency of tensor algebra, offering a scalable alternative to standard deep neural networks. + oai:arXiv.org:2512.18059v1
      math.NA
      cs.LG
      cs.NA
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Martin Eigel, Charles Miranda, Anthony Nouy, David Sommer
    
    
      Interpretable Similarity of Synthetic Image Utility
      https://arxiv.org/abs/2512.17080
      arXiv:2512.17080v1 Announce Type: new
Abstract: Synthetic medical image data can unlock the potential of deep learning (DL)-based clinical decision support (CDS) systems through the creation of large-scale, privacy-preserving training sets. Despite the significant progress in this field, there is still a largely unanswered research question: "How can we quantitatively assess the similarity of a synthetically generated set of images with a set of real images in a given application domain?". Today, answers to this question are mainly provided via user evaluation studies, inception-based measures, and the classification performance achieved on synthetic images. &#13;
This paper proposes a novel measure to assess the similarity between synthetically generated and real sets of images, in terms of their utility for the development of DL-based CDS systems. Inspired by generalized neural additive models, and unlike inception-based measures, the proposed measure is interpretable (Interpretable Utility Similarity, IUS), explaining why a synthetic dataset could be more useful than another one in the context of a CDS system based on clinically relevant image features. The experimental results on publicly available datasets from various color medical imaging modalities including endoscopic, dermoscopic and fundus imaging, indicate that selecting synthetic images of high utility similarity using IUS can result in relative improvements of up to 54.6% in terms of classification performance. The generality of IUS for synthetic data assessment is demonstrated also for greyscale X-ray and ultrasound imaging modalities. IUS implementation is available at https://github.com/innoisys/ius - oai:arXiv.org:2512.17080v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Graph-based Nearest Neighbors with Dynamic Updates via Random Walks + https://arxiv.org/abs/2512.18060 + arXiv:2512.18060v1 Announce Type: new +Abstract: Approximate nearest neighbor search (ANN) is a common way to retrieve relevant search results, especially now in the context of large language models and retrieval augmented generation. One of the most widely used algorithms for ANN is based on constructing a multi-layer graph over the dataset, called the Hierarchical Navigable Small World (HNSW). While this algorithm supports insertion of new data, it does not support deletion of existing data. Moreover, deletion algorithms described by prior work come at the cost of increased query latency, decreased recall, or prolonged deletion time. In this paper, we propose a new theoretical framework for graph-based ANN based on random walks. 
We then utilize this framework to analyze a randomized deletion approach that preserves hitting-time statistics relative to the graph before the point was deleted. We then turn this theoretical framework into a deterministic deletion algorithm, and show through an extensive collection of experiments that it provides a better tradeoff among query latency, recall, deletion time, and memory usage. + oai:arXiv.org:2512.18060v1
      cs.DS
      cs.LG
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Nina Mishra, Yonatan Naamad, Tal Wagner, Lichen Zhang
    
    
      Virtual Reality in Service Design for Plastics Recycling: Two Application Cases
      https://arxiv.org/abs/2512.17081
      arXiv:2512.17081v1 Announce Type: new
Abstract: Plastics recycling depends on everyday sorting practices and on how recycling services are communicated and experienced. Virtual reality (VR) can present these practices and services in situated, interactive form, yet its role in service design for plastics recycling is still emerging. This paper examines how VR tools can contribute to designing plastics recycling services through two application cases that address different stages of the recycling journey. The first case, Clean Cabin Escape, is a household-scale VR escape room where players collect and sort waste items into locally relevant categories, with immediate feedback that supports practice with plastics recycling decisions. The second case is a VR simulation of a plastics recycling center that represents a real planned site and is used in service design workshops where stakeholders explore layout, signage and customer paths for plastics fractions. Across the cases, we analyse how VR supported learning, engagement and shared sensemaking, and how it interacted with other service design methods such as workshops, customer path mapping and physical artefacts. &#13;
The findings show that VR can make domestic sorting tasks and complex recycling centers more concrete for both citizens and professionals, but also highlight trade offs related to hardware access, onboarding effort, visual fidelity and localisation of recycling rules. The paper concludes by outlining opportunities for integrating VR into broader service design toolsets for plastics recycling and circular economy services, and by pointing to directions for future research on long term impact and inclusive design. - oai:arXiv.org:2512.17081v1 - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 + Review of Power Electronic Solutions for Dielectric Barrier Discharge Applications + https://arxiv.org/abs/2512.18067 + arXiv:2512.18067v1 Announce Type: new +Abstract: This paper presents a comprehensive review of dielectric barrier discharge (DBD) power supply topologies, aiming to bridge the gap between DBD applications and power electronics design. Two key aspects are examined: the dependence of the DBD electrical model on reactor geometry, and application-driven requirements for injected waveform characteristics, including shapes, voltage amplitude, frequency, and modulation techniques. On this basis, the paper systematically reviews two major categories of power supplies: sinusoidal types comprising transformerless and transformer-based resonant inverters, and pulsed power supplies (PPSs). The review summarizes performance trade-offs, highlights untested topologies and emerging applications, and offers guidance for advancing high-performance DBD power supply design for next-generation systems. 
+ oai:arXiv.org:2512.18067v1 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Ashley Colley, Kuisma Hurtig, Juri Etto, Emma Kirjavainen, Pavlo Ivanov, Jonna H\"akkil\"a + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Hyeongmeen Baik, Jinia Roy - When F1 Fails: Granularity-Aware Evaluation for Dialogue Topic Segmentation - https://arxiv.org/abs/2512.17083 - arXiv:2512.17083v1 Announce Type: new -Abstract: Dialogue topic segmentation supports summarization, retrieval, memory management, and conversational continuity. Despite decades of prior work, evaluation practice in dialogue topic segmentation remains dominated by strict boundary matching and F1-based metrics, even as modern LLM-based conversational systems increasingly rely on segmentation to manage conversation history beyond the model's fixed context window, where unstructured context accumulation degrades efficiency and coherence. - This paper introduces an evaluation objective for dialogue topic segmentation that treats boundary density and segment coherence as primary criteria, alongside window-tolerant F1 (W-F1). Through extensive cross-dataset empirical evaluation, we show that reported performance differences across dialogue segmentation benchmarks are driven not by model quality, but by annotation granularity mismatches and sparse boundary labels. This indicates that many reported improvements arise from evaluation artifacts rather than improved boundary detection. - We evaluated multiple, structurally distinct dialogue segmentation strategies across eight dialogue datasets spanning task-oriented, open-domain, meeting-style, and synthetic interactions. Across these settings, we observe high segment coherence combined with extreme oversegmentation relative to sparse labels, producing misleadingly low exact-match F1 scores. 
We show that topic segmentation is best understood as selecting an appropriate granularity rather than predicting a single correct boundary set. We operationalize this view by explicitly separating boundary scoring from boundary selection. - oai:arXiv.org:2512.17083v1 - cs.CL - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + SurgiPose: Estimating Surgical Tool Kinematics from Monocular Video for Surgical Robot Learning + https://arxiv.org/abs/2512.18068 + arXiv:2512.18068v1 Announce Type: new +Abstract: Imitation learning (IL) has shown immense promise in enabling autonomous dexterous manipulation, including learning surgical tasks. To fully unlock the potential of IL for surgery, access to clinical datasets is needed, which unfortunately lack the kinematic data required for current IL approaches. A promising source of large-scale surgical demonstrations is monocular surgical videos available online, making monocular pose estimation a crucial step toward enabling large-scale robot learning. Toward this end, we propose SurgiPose, a differentiable rendering based approach to estimate kinematic information from monocular surgical videos, eliminating the need for direct access to ground truth kinematics. Our method infers tool trajectories and joint angles by optimizing tool pose parameters to minimize the discrepancy between rendered and real images. To evaluate the effectiveness of our approach, we conduct experiments on two robotic surgical tasks: tissue lifting and needle pickup, using the da Vinci Research Kit Si (dVRK Si). We train imitation learning policies with both ground truth measured kinematics and estimated kinematics from video and compare their performance. Our results show that policies trained on estimated kinematics achieve comparable success rates to those trained on ground truth data, demonstrating the feasibility of using monocular video based kinematic estimation for surgical robot learning. 
By enabling kinematic estimation from monocular surgical videos, our work lays the foundation for large scale learning of autonomous surgical policies from online surgical data. + oai:arXiv.org:2512.18068v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Michael H. Coen + 10.1109/IROS60139.2025.11247452 + Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2025, pp. 20912-20919 + Juo-Tung Chen, XinHao Chen, Ji Woong Kim, Paul Maria Scheikl, Richard Jaepyeong Cha, Axel Krieger - Value Under Ignorance in Universal Artificial Intelligence - https://arxiv.org/abs/2512.17086 - arXiv:2512.17086v1 Announce Type: new -Abstract: We generalize the AIXI reinforcement learning agent to admit a wider class of utility functions. Assigning a utility to each possible interaction history forces us to confront the ambiguity that some hypotheses in the agent's belief distribution only predict a finite prefix of the history, which is sometimes interpreted as implying a chance of death equal to a quantity called the semimeasure loss. This death interpretation suggests one way to assign utilities to such history prefixes. We argue that it is as natural to view the belief distributions as imprecise probability distributions, with the semimeasure loss as total ignorance. This motivates us to consider the consequences of computing expected utilities with Choquet integrals from imprecise probability theory, including an investigation of their computability level. We recover the standard recursive value function as a special case. However, our most general expected utilities under the death interpretation cannot be characterized as such Choquet integrals. 
- oai:arXiv.org:2512.17086v1 - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Statistical laws and linguistics inform meaning in naturalistic and fictional conversation + https://arxiv.org/abs/2512.18072 + arXiv:2512.18072v1 Announce Type: new +Abstract: Conversation is a cornerstone of social connection and is linked to well-being outcomes. Conversations vary widely in type with some portion generating complex, dynamic stories. One approach to studying how conversations unfold in time is through statistical patterns such as Heaps' law, which holds that vocabulary size scales with document length. Little work on Heaps's law has looked at conversation and considered how language features impact scaling. We measure Heaps' law for conversations recorded in two distinct mediums: 1. Strangers brought together on video chat and 2. Fictional characters in movies. We find that scaling of vocabulary size differs by parts of speech. We discuss these findings through behavioral and linguistic frameworks. + oai:arXiv.org:2512.18072v1 + cs.CL + cs.CY + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - In International Conference on Artificial General Intelligence (pp. 338-349). Cham: Springer Nature Switzerland (2025) - Cole Wyeth, Marcus Hutter + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ashley M. A. Fehr, Calla G. Beauregard, Julia Witte Zimmerman, Katie Ekstr\"om, Pablo Rosillo-Rodes, Christopher M. Danforth, Peter Sheridan Dodds - How to Square Tensor Networks and Circuits Without Squaring Them - https://arxiv.org/abs/2512.17090 - arXiv:2512.17090v1 Announce Type: new -Abstract: Squared tensor networks (TNs) and their extension as computational graphs--squared circuits--have been used as expressive distribution estimators, yet supporting closed-form marginalization. 
However, the squaring operation introduces additional complexity when computing the partition function or marginalizing variables, which hinders their applicability in ML. To solve this issue, canonical forms of TNs are parameterized via unitary matrices to simplify the computation of marginals. However, these canonical forms do not apply to circuits, as they can represent factorizations that do not directly map to a known TN. Inspired by the ideas of orthogonality in canonical forms and determinism in circuits enabling tractable maximization, we show how to parameterize squared circuits to overcome their marginalization overhead. Our parameterizations unlock efficient marginalization even in factorizations different from TNs, but encoded as circuits, whose structure would otherwise make marginalization computationally hard. Finally, our experiments on distribution estimation show how our proposed conditions in squared circuits come with no expressiveness loss, while enabling more efficient learning. - oai:arXiv.org:2512.17090v1 - cs.LG - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + FPBench: A Comprehensive Benchmark of Multimodal Large Language Models for Fingerprint Analysis + https://arxiv.org/abs/2512.18073 + arXiv:2512.18073v1 Announce Type: new +Abstract: Multimodal LLMs (MLLMs) have gained significant traction in complex data analysis, visual question answering, generation, and reasoning. Recently, they have been used for analyzing the biometric utility of iris and face images. However, their capabilities in fingerprint understanding are yet unexplored. In this work, we design a comprehensive benchmark, \textsc{FPBench} that evaluates the performance of 20 MLLMs (open-source and proprietary) across 7 real and synthetic datasets on 8 biometric and forensic tasks using zero-shot and chain-of-thought prompting strategies. We discuss our findings in terms of performance, explainability and share our insights into the challenges and limitations. 
We establish \textsc{FPBench} as the first comprehensive benchmark for fingerprint domain understanding with MLLMs paving the path for foundation models for fingerprints. + oai:arXiv.org:2512.18073v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Lorenzo Loconte, Adri\'an Javaloy, Antonio Vergari + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Ekta Balkrishna Gavas, Sudipta Banerjee, Chinmay Hegde, Nasir Memon - Learning to Plan, Planning to Learn: Adaptive Hierarchical RL-MPC for Sample-Efficient Decision Making - https://arxiv.org/abs/2512.17091 - arXiv:2512.17091v1 Announce Type: new -Abstract: We propose a new approach for solving planning problems with a hierarchical structure, fusing reinforcement learning and MPC planning. Our formulation tightly and elegantly couples the two planning paradigms. It leverages reinforcement learning actions to inform the MPPI sampler, and adaptively aggregates MPPI samples to inform the value estimation. The resulting adaptive process leverages further MPPI exploration where value estimates are uncertain, and improves training robustness and the overall resulting policies. This results in a robust planning approach that can handle complex planning problems and easily adapts to different applications, as demonstrated over several domains, including race driving, modified Acrobot, and Lunar Lander with added obstacles. Our results in these domains show better data efficiency and overall performance in terms of both rewards and task success, with up to a 72% increase in success rate compared to existing approaches, as well as accelerated convergence (x2.1) compared to non-adaptive sampling. 
- oai:arXiv.org:2512.17091v1 - cs.LG + Characterising Behavioural Families and Dynamics of Promotional Twitter Bots via Sequence-Based Modelling + https://arxiv.org/abs/2512.18077 + arXiv:2512.18077v1 Announce Type: new +Abstract: This paper asks whether promotional Twitter/X bots form behavioural families and whether members evolve similarly. We analyse 2,798,672 tweets from 2,615 ground-truth promotional bot accounts (2006-2021), focusing on complete years 2009 to 2020. Each bot is encoded as a sequence of symbolic blocks (``digital DNA'') from seven categorical post-level behavioural features (posting action, URL, media, text duplication, hashtags, emojis, sentiment), preserving temporal order only. Using non-overlapping blocks (k=7), cosine similarity over block-frequency vectors, and hierarchical clustering, we obtain four coherent families: Unique Tweeters, Duplicators with URLs, Content Multipliers, and Informed Contributors. Families share behavioural cores but differ systematically in engagement strategies and life-cycle dynamics (beginning/middle/end). We then model behavioural change as mutations. Within each family we align sequences via multiple sequence alignment (MSA) and label events as insertions, deletions, substitutions, alterations, and identity. This quantifies mutation rates, change-prone blocks/features, and mutation hotspots. Deletions and substitutions dominate, insertions are rare, and mutation profiles differ by family, with hotspots early for some families and dispersed for others. Finally, we test predictive value: bots within the same family share mutations more often than bots across families; closer bots share and propagate mutations more than distant ones; and responses to external triggers (e.g., Christmas, Halloween) follow family-specific, partly predictable patterns. Overall, sequence-based family modelling plus mutation analysis provides a fine-grained account of how promotional bot behaviour adapts over time. 
oai:arXiv.org:2512.18077v1
      cs.SI
      cs.AI
      cs.HC
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by-nc-sa/4.0/
      Ohoud Alzahrani, Russell Beale, Robert J. Hendley
    
    
      Data Augmentation Supporting a Conversational Agent Designed for Smoking Cessation Support Groups
      https://arxiv.org/abs/2512.17092
      arXiv:2512.17092v1 Announce Type: new
Abstract: Online support groups for smoking cessation are economical and accessible, yet they often face challenges with low user engagement and stigma. The use of an automatic conversational agent would improve engagement by ensuring that all user comments receive a timely response. We address the challenge of insufficient high-quality data by employing a two-level data augmentation strategy: synthetic data augmentation and real data augmentation. First, we fine-tuned an open-source LLM to classify posts from our existing smoking cessation support groups and identify intents with low F1 scores (F1 being the harmonic mean of precision and recall). Then, for these intents, we generate additional synthetic data using prompt engineering with the GPT model, with an average of 87% of the generated synthetic posts deemed high quality by human annotators. Overall, the synthetic augmentation process resulted in 43% of the original posts being selected for augmentation, followed by 140% synthetic expansion of these posts. Additionally, we scraped more than 10,000 real posts from a related online support context, of which 73% were validated as good quality by human annotators. Each synthetic or scraped post underwent rigorous validation involving human reviewers to ensure quality and relevance. The validated new data, combined with the original support group posts, formed an augmented dataset used to retrain the intent classifier. &#13;
Performance evaluation of the retrained model demonstrated a 32% improvement in F1, confirming the effectiveness of our data augmentation approach. Synthetic and real post augmentation led to similar performance improvements. This study provides a replicable framework for enhancing conversational agent performance in domains where data scarcity is a critical issue. - oai:arXiv.org:2512.17092v1
      cs.CL
      Mon, 22 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Salar Hashemitaheri, Ian Harris
    
    
      From Prompt to Product: A Human-Centered Benchmark of Agentic App Generation Systems
      https://arxiv.org/abs/2512.18080
      arXiv:2512.18080v1 Announce Type: new
Abstract: Agentic AI systems capable of generating full-stack web applications from natural language prompts ("prompt-to-app") represent a significant shift in software development. However, evaluating these systems remains challenging, as visual polish, functional correctness, and user trust are often misaligned. As a result, it is unclear how existing prompt-to-app tools compare under realistic, human-centered evaluation criteria. In this paper, we introduce a human-centered benchmark for evaluating prompt-to-app systems and conduct a large-scale comparative study of three widely used platforms: Replit, Bolt, and Firebase Studio. Using a diverse set of 96 prompts spanning common web application tasks, we generate 288 unique application artifacts. We evaluate these systems through a large-scale human-rater study involving 205 participants and 1,071 quality-filtered pairwise comparisons, assessing task-based ease of use, visual appeal, perceived completeness, and user trust. Our results show that these systems are not interchangeable: Firebase Studio consistently outperforms competing platforms across all human-evaluated dimensions, achieving the highest win rates for ease of use, trust, visual appeal, and visual appropriateness. &#13;
Bolt performs competitively on visual appeal but trails Firebase on usability and trust, while Replit underperforms relative to both across most metrics. These findings highlight a persistent gap between visual polish and functional reliability in prompt-to-app systems and demonstrate the necessity of interactive, task-based evaluation. We release our benchmark framework, prompt set, and generated artifacts to support reproducible evaluation and future research in agentic application generation. + oai:arXiv.org:2512.18080v1 + cs.HC + cs.AI + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Salar Hashemitaheri, Ian Harris + http://creativecommons.org/licenses/by/4.0/ + Marcos Ortiz, Justin Hill, Collin Overbay, Ingrida Semenec, Frederic Sauve-Hoover, Jim Schwoebel, Joel Shor - A Solver-in-the-Loop Framework for Improving LLMs on Answer Set Programming for Logic Puzzle Solving - https://arxiv.org/abs/2512.17093 - arXiv:2512.17093v1 Announce Type: new -Abstract: The rise of large language models (LLMs) has sparked interest in coding assistants. While general-purpose programming languages are well supported, generating code for domain-specific languages remains a challenging problem for LLMs. In this paper, we focus on the LLM-based generation of code for Answer Set Programming (ASP), a particularly effective approach for finding solutions to combinatorial search problems. The effectiveness of LLMs in ASP code generation is currently hindered by the limited number of examples seen during their initial pre-training phase. - In this paper, we introduce a novel ASP-solver-in-the-loop approach for solver-guided instruction-tuning of LLMs to addressing the highly complex semantic parsing task inherent in ASP code generation. Our method only requires problem specifications in natural language and their solutions. Specifically, we sample ASP statements for program continuations from LLMs for unriddling logic puzzles. 
Leveraging the special property of declarative ASP programming that partial encodings increasingly narrow down the solution space, we categorize them into chosen and rejected instances based on solver feedback. We then apply supervised fine-tuning to train LLMs on the curated data and further improve robustness using a solver-guided search that includes best-of-N sampling. Our experiments demonstrate consistent improvements in two distinct prompting settings on two datasets. - oai:arXiv.org:2512.17093v1 - cs.AI - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + Towards Autonomous Navigation in Endovascular Interventions + https://arxiv.org/abs/2512.18081 + arXiv:2512.18081v1 Announce Type: new +Abstract: Cardiovascular diseases remain the leading cause of global mortality, with minimally invasive treatment options offered through endovascular interventions. However, the precision and adaptability of current robotic systems for endovascular navigation are limited by heuristic control, low autonomy, and the absence of haptic feedback. This thesis presents an integrated AI-driven framework for autonomous guidewire navigation in complex vascular environments, addressing key challenges in data availability, simulation fidelity, and navigational accuracy. + A high-fidelity, real-time simulation platform, CathSim, is introduced for reinforcement learning based catheter navigation, featuring anatomically accurate vascular models and contact dynamics. Building on CathSim, the Expert Navigation Network is developed, a policy that fuses visual, kinematic, and force feedback for autonomous tool control. To mitigate data scarcity, the open-source, bi-planar fluoroscopic dataset Guide3D is proposed, comprising more than 8,700 annotated images for 3D guidewire reconstruction. Finally, SplineFormer, a transformer-based model, is introduced to directly predict guidewire geometry as continuous B-spline parameters, enabling interpretable, real-time navigation. 
+ The findings show that combining high-fidelity simulation, multimodal sensory fusion, and geometric modelling substantially improves autonomous endovascular navigation and supports safer, more precise minimally invasive procedures. + oai:arXiv.org:2512.18081v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Timo Pierre Schrader, Lukas Lange, Tobias Kaminski, Simon Razniewski, Annemarie Friedrich + http://creativecommons.org/licenses/by/4.0/ + Tudor Jianu, Anh Nguyen, Sebastiano Fichera, Pierre Berthet-Rayne - DGH: Dynamic Gaussian Hair - https://arxiv.org/abs/2512.17094 - arXiv:2512.17094v1 Announce Type: new -Abstract: The creation of photorealistic dynamic hair remains a major challenge in digital human modeling because of the complex motions, occlusions, and light scattering. Existing methods often resort to static capture and physics-based models that do not scale as they require manual parameter fine-tuning to handle the diversity of hairstyles and motions, and heavy computation to obtain high-quality appearance. In this paper, we present Dynamic Gaussian Hair (DGH), a novel framework that efficiently learns hair dynamics and appearance. We propose: (1) a coarse-to-fine model that learns temporally coherent hair motion dynamics across diverse hairstyles; (2) a strand-guided optimization module that learns a dynamic 3D Gaussian representation for hair appearance with support for differentiable rendering, enabling gradient-based learning of view-consistent appearance under motion. Unlike prior simulation-based pipelines, our approach is fully data-driven, scales with training data, and generalizes across various hairstyles and head motion sequences. Additionally, DGH can be seamlessly integrated into a 3D Gaussian avatar framework, enabling realistic, animatable hair for high-fidelity avatar representation. 
DGH achieves promising geometry and appearance results, providing a scalable, data-driven alternative to physics-based simulation and rendering. - oai:arXiv.org:2512.17094v1 + Uncertainty-Gated Region-Level Retrieval for Robust Semantic Segmentation + https://arxiv.org/abs/2512.18082 + arXiv:2512.18082v1 Announce Type: new +Abstract: Semantic segmentation of outdoor street scenes plays a key role in applications such as autonomous driving, mobile robotics, and assistive technology for visually-impaired pedestrians. For these applications, accurately distinguishing between key surfaces and objects such as roads, sidewalks, vehicles, and pedestrians is essential for maintaining safety and minimizing risks. Semantic segmentation must be robust to different environments, lighting and weather conditions, and sensor noise, while being performed in real-time. We propose a region-level, uncertainty-gated retrieval mechanism that improves segmentation accuracy and calibration under domain shift. Our best method achieves an 11.3% increase in mean intersection-over-union while reducing retrieval cost by 87.5%, retrieving for only 12.5% of regions compared to 100% for always-on baseline. + oai:arXiv.org:2512.18082v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Junying Wang, Yuanlu Xu, Edith Tretschk, Ziyan Wang, Anastasia Ianina, Aljaz Bozic, Ulrich Neumann, Tony Tung + http://creativecommons.org/licenses/by/4.0/ + Shreshth Rajan, Raymond Liu - Predictive Modeling of Maritime Radar Data Using Transformer Architecture - https://arxiv.org/abs/2512.17098 - arXiv:2512.17098v1 Announce Type: new -Abstract: Maritime autonomous systems require robust predictive capabilities to anticipate vessel motion and environmental dynamics. 
While transformer architectures have revolutionized AIS-based trajectory prediction and demonstrated feasibility for sonar frame forecasting, their application to maritime radar frame prediction remains unexplored, creating a critical gap given radar's all-weather reliability for navigation. This survey systematically reviews predictive modeling approaches relevant to maritime radar, with emphasis on transformer architectures for spatiotemporal sequence forecasting, where existing representative methods are analyzed according to data type, architecture, and prediction horizon. Our review shows that, while the literature has demonstrated transformer-based frame prediction for sonar sensing, no prior work addresses transformer-based maritime radar frame prediction, thereby defining a clear research gap and motivating a concrete research direction for future work in this area. - oai:arXiv.org:2512.17098v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Detecting Flaky Tests in Quantum Software: A Dynamic Approach + https://arxiv.org/abs/2512.18088 + arXiv:2512.18088v1 Announce Type: new +Abstract: Flaky tests, tests that pass or fail nondeterministically without changes to code or environment, pose a serious threat to software reliability. While classical software engineering has developed a rich body of dynamic and static techniques to study flakiness, corresponding evidence for quantum software remains limited. Prior work relies primarily on static analysis or small sets of manually reported incidents, leaving open questions about the prevalence, characteristics, and detectability of flaky tests. + This paper presents the first large-scale dynamic characterization of flaky tests in quantum software. We executed the Qiskit Terra test suite 10,000 times across 23 releases in controlled environments. 
For each release, we measured test-outcome variability, identified flaky tests, estimated empirical failure probabilities, analyzed recurrence across versions, and used Wilson confidence intervals to quantify rerun budgets for reliable detection. We further mapped flaky tests to Terra subcomponents to assess component-level susceptibility. + Across 27,026 test cases, we identified 290 distinct flaky tests. Although overall flakiness rates were low (0-0.4%), flakiness was highly episodic: nearly two-thirds of flaky tests appeared in only one release, while a small subset recurred intermittently or persistently. Many flaky tests failed with very small empirical probabilities ($\hat{p} \approx 10^{-4}$), implying that tens of thousands of executions may be required for confident detection. Flakiness was unevenly distributed across subcomponents, with 'transpiler' and 'quantum_info' accounting for the largest share. + These results show that quantum test flakiness is rare but difficult to detect under typical continuous integration budgets. To support future research, we release a public dataset of per-test execution outcomes. + oai:arXiv.org:2512.18088v1 + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Bjorna Qesaraku, Jan Steckel + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Dongchan Kim, Hamidreza Khoramrokh, Lei Zhang, Andriy Miranskyy - UniCoMTE: A Universal Counterfactual Framework for Explaining Time-Series Classifiers on ECG Data - https://arxiv.org/abs/2512.17100 - arXiv:2512.17100v1 Announce Type: new -Abstract: Machine learning models, particularly deep neural networks, have demonstrated strong performance in classifying complex time series data. However, their black-box nature limits trust and adoption, especially in high-stakes domains such as healthcare. 
To address this challenge, we introduce UniCoMTE, a model-agnostic framework for generating counterfactual explanations for multivariate time series classifiers. The framework identifies temporal features that most heavily influence a model's prediction by modifying the input sample and assessing its impact on the model's prediction. UniCoMTE is compatible with a wide range of model architectures and operates directly on raw time series inputs. In this study, we evaluate UniCoMTE's explanations on a time series ECG classifier. We quantify explanation quality by comparing the comprehensibility of our explanations with that of established techniques (LIME and SHAP) and by assessing their generalizability to similar samples. Furthermore, clinical utility is assessed through a questionnaire completed by medical experts who review counterfactual explanations presented alongside original ECG samples. Results show that our approach produces concise, stable, and human-aligned explanations that outperform existing methods in both clarity and applicability. By linking model predictions to meaningful signal patterns, the framework advances the interpretability of deep learning models for real-world time series applications.
 - oai:arXiv.org:2512.17100v1
 - cs.LG
 - cs.AI
 - Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Justin Li, Efe Sencan, Jasper Zheng Duan, Vitus J. Leung, Stephan Tsaur, Ayse K. Coskun
 + Faithful and Stable Neuron Explanations for Trustworthy Mechanistic Interpretability
 + https://arxiv.org/abs/2512.18092
 + arXiv:2512.18092v1 Announce Type: new
+Abstract: Neuron identification is a popular tool in mechanistic interpretability, aiming to uncover the human-interpretable concepts represented by individual neurons in deep networks. While algorithms such as Network Dissection and CLIP-Dissect achieve great empirical success, a rigorous theoretical foundation remains absent, which is crucial to enable trustworthy and reliable explanations. 
In this work, we observe that neuron identification can be viewed as the inverse process of machine learning, which allows us to derive guarantees for neuron explanations. Based on this insight, we present the first theoretical analysis of two fundamental challenges: (1) Faithfulness: whether the identified concept faithfully represents the neuron's underlying function and (2) Stability: whether the identification results are consistent across probing datasets. We derive generalization bounds for widely used similarity metrics (e.g. accuracy, AUROC, IoU) to guarantee faithfulness, and propose a bootstrap ensemble procedure that quantifies stability along with a BE (Bootstrap Explanation) method to generate concept prediction sets with guaranteed coverage probability. Experiments on both synthetic and real data validate our theoretical results and demonstrate the practicality of our method, providing an important step toward trustworthy neuron identification.
 + oai:arXiv.org:2512.18092v1
 cs.AI
 cs.LG
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Ge Yan, Tuomas Oikarinen, Tsui-Wei (Lily) Weng
 - MIRGE: An Array-Based Computational Framework for Scientific Computing
 - https://arxiv.org/abs/2512.17101
 - arXiv:2512.17101v1 Announce Type: new
-Abstract: MIRGE is a computational approach for scientific computing based on NumPy-like array computation, but using lazy evaluation to recast computation as data-flow graphs, where nodes represent immutable, multi-dimensional arrays. Evaluation of an array expression is deferred until its value is needed, at which point a pipeline is invoked that transforms high-level array expressions into lower-level intermediate representations (IR) and finally into executable code, through a multi-stage process. 
Domain-specific transformations, such as metadata-driven optimizations, GPU-parallelization strategies, and loop fusion techniques, improve performance and memory efficiency.
 - MIRGE employs "array contexts" to abstract the interface between array expressions and heterogeneous execution environments (for example, lazy evaluation via OpenCL, or eager evaluation via NumPy or CuPy). The framework thus enables performance portability as well as separation of concerns between application logic, low-level implementation, and optimizations. By enabling scientific expressivity while facilitating performance tuning, MIRGE offers a robust, extensible platform for both computational research and scientific application development.
 - This paper provides an overview of MIRGE. We further describe an application of MIRGE called MIRGE-Com, for supersonic combusting flows in a discontinuous Galerkin finite-element setting. We demonstrate its capabilities as a solver and highlight its performance characteristics on large-scale GPU hardware.
 - oai:arXiv.org:2512.17101v1
 - cs.MS
 - Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Matthias Diener, Matthew J. Smith, Michael T. Campbell, Kaushik Kulkarni, Michael J. Anderson, Andreas Klöckner, William Gropp, Jonathan B. Freund, Luke N. Olson
 + Rethinking Multi-Agent Intelligence Through the Lens of Small-World Networks
 + https://arxiv.org/abs/2512.18094
 + arXiv:2512.18094v1 Announce Type: new
+Abstract: Large language models (LLMs) have enabled multi-agent systems (MAS) in which multiple agents argue, critique, and coordinate to solve complex tasks, making communication topology a first-class design choice. Yet most existing LLM-based MAS either adopt fully connected graphs, simple sparse rings, or ad-hoc dynamic selection, with little structural guidance. In this work, we revisit classic theory on small-world (SW) networks and ask: what changes if we treat SW connectivity as a design prior for MAS? We first bridge insights from neuroscience and complex networks to MAS, highlighting how SW structures balance local clustering and long-range integration. 
Using multi-agent debate (MAD) as a controlled testbed, experimental results show that SW connectivity yields nearly the same accuracy and token cost, while substantially stabilizing consensus trajectories. Building on this, we introduce an uncertainty-guided rewiring scheme for scaling MAS, where long-range shortcuts are added between epistemically divergent agents using LLM-oriented uncertainty signals (e.g., semantic entropy). This yields controllable SW structures that adapt to task difficulty and agent heterogeneity. Finally, we discuss broader implications of SW priors for MAS design, framing them as stabilizers of reasoning, enhancers of robustness, scalable coordinators, and inductive biases for emergent cognitive roles.
 + oai:arXiv.org:2512.18094v1
 cs.AI
 cs.MA
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Boxuan Wang, Zhuoyun Li, Xiaowei Huang, Yi Dong
 - Reinforcement Learning for Self-Improving Agent with Skill Library
 - https://arxiv.org/abs/2512.17102
 - arXiv:2512.17102v1 Announce Type: new
-Abstract: Large Language Model (LLM)-based agents have demonstrated remarkable capabilities in complex reasoning and multi-turn interactions but struggle to continuously improve and adapt when deployed in new environments. One promising approach is implementing skill libraries that allow agents to learn, validate, and apply new skills. However, current skill library approaches rely primarily on LLM prompting, making consistent skill library implementation challenging. To overcome these challenges, we propose a Reinforcement Learning (RL)-based approach to enhance agents' self-improvement capabilities with a skill library. 
Specifically, we introduce Skill Augmented GRPO for self-Evolution (SAGE), a novel RL framework that systematically incorporates skills into learning. The framework's key component, Sequential Rollout, iteratively deploys agents across a chain of similar tasks for each rollout. As agents navigate through the task chain, skills generated from previous tasks accumulate in the library and become available for subsequent tasks. Additionally, the framework enhances skill generation and utilization through a Skill-integrated Reward that complements the original outcome-based rewards. Experimental results on AppWorld demonstrate that SAGE, when applied to a supervised-finetuned model with expert experience, achieves 8.9% higher Scenario Goal Completion while requiring 26% fewer interaction steps and generating 59% fewer tokens, substantially outperforming existing approaches in both accuracy and efficiency.
 - oai:arXiv.org:2512.17102v1
 - cs.AI
 - Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Jiongxiao Wang, Qiaojing Yan, Yawei Wang, Yijun Tian, Soumya Smruti Mishra, Zhichao Xu, Megha Gandhi, Panpan Xu, Lin Lee Cheong
 + A Games-in-Games Paradigm for Strategic Hybrid Jump-Diffusions: Hamilton-Jacobi-Isaacs Hierarchy and Spectral Structure
 + https://arxiv.org/abs/2512.18098
 + arXiv:2512.18098v1 Announce Type: new
+Abstract: This paper develops a hierarchical games-in-games control architecture for hybrid stochastic systems governed by regime-switching jump-diffusions. We model the interplay between continuous state dynamics and discrete mode transitions as a bilevel differential game: an inner layer solves a robust stochastic control problem within each regime, while a strategic outer layer modulates the transition intensities of the underlying Markov chain. A Dynkin-based analysis yields a system of coupled Hamilton-Jacobi-Isaacs (HJI) equations. We prove that for the class of Linear-Quadratic games and Exponential-Affine games, this hierarchy admits tractable semi-closed form solutions via coupled matrix differential equations. 
The framework is demonstrated through a case study on adversarial market microstructure, showing how the outer layer's strategic switching pre-emptively adjusts inventory spreads against latent regime risks, which leads to a hyper-alert equilibrium.
 + oai:arXiv.org:2512.18098v1
 eess.SY
 cs.SY
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Yunian Pan, Quanyan Zhu
 - Fault Diagnosis and Quantification for Photovoltaic Arrays based on Differentiable Physical Models
 - https://arxiv.org/abs/2512.17107
 - arXiv:2512.17107v1 Announce Type: new
-Abstract: Accurate fault diagnosis and quantification are essential for the reliable operation and intelligent maintenance of photovoltaic (PV) arrays. However, existing fault quantification methods often suffer from limited efficiency and interpretability. To address these challenges, this paper proposes a novel fault quantification approach for PV strings based on a differentiable fast fault simulation model (DFFSM). The proposed DFFSM accurately models I-V characteristics under multiple faults and provides analytical gradients with respect to fault parameters. Leveraging this property, a gradient-based fault parameters identification (GFPI) method using the Adahessian optimizer is developed to efficiently quantify partial shading, short-circuit, and series-resistance degradation. 
Experimental results on both simulated and measured I-V curves demonstrate that the proposed GFPI achieves high quantification accuracy across different faults, with the I-V reconstruction error below 3%, confirming the feasibility and effectiveness of the application of differentiable physical simulators for PV system fault diagnosis. - oai:arXiv.org:2512.17107v1 + From Coverage to Causes: Data-Centric Fuzzing for JavaScript Engines + https://arxiv.org/abs/2512.18102 + arXiv:2512.18102v1 Announce Type: new +Abstract: Context: Exhaustive fuzzing of modern JavaScript engines is infeasible due to the vast number of program states and execution paths. Coverage-guided fuzzers waste effort on low-risk inputs, often ignoring vulnerability-triggering ones that do not increase coverage. Existing heuristics proposed to mitigate this require expert effort, are brittle, and hard to adapt. + Objective: We propose a data-centric, LLM-boosted alternative that learns from historical vulnerabilities to automatically identify minimal static (code) and dynamic (runtime) features for detecting high-risk inputs. + Method: Guided by historical V8 bugs, iterative prompting generated 115 static and 49 dynamic features, with the latter requiring only five trace flags, minimizing instrumentation cost. After feature selection, 41 features remained to train an XGBoost model to predict high-risk inputs during fuzzing. + Results: Combining static and dynamic features yields over 85% precision and under 1% false alarms. Only 25% of these features are needed for comparable performance, showing that most of the search space is irrelevant. + Conclusion: This work introduces feature-guided fuzzing, an automated data-driven approach that replaces coverage with data-directed inference, guiding fuzzers toward high-risk states for faster, targeted, and reproducible vulnerability discovery. To support open science, all scripts and data are available at https://github.com/KKGanguly/DataCentricFuzzJS . 
+ oai:arXiv.org:2512.18102v1 + cs.SE + cs.CR cs.LG - eess.SP - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Zenan Yang, Yuanliang Li, Jingwei Zhang, Yongjie Liu, Kun Ding + Kishan Kumar Ganguly, Tim Menzies - Atom: Efficient On-Device Video-Language Pipelines Through Modular Reuse - https://arxiv.org/abs/2512.17108 - arXiv:2512.17108v1 Announce Type: new -Abstract: Recent advances in video-language models have enabled powerful applications like video retrieval, captioning, and assembly. However, executing such multi-stage pipelines efficiently on mobile devices remains challenging due to redundant model loads and fragmented execution. We introduce Atom, an on-device system that restructures video-language pipelines for fast and efficient execution. Atom decomposes a billion-parameter model into reusable modules, such as the visual encoder and language decoder, and reuses them across subtasks like captioning, reasoning, and indexing. This reuse-centric design eliminates repeated model loading and enables parallel execution, reducing end-to-end latency without sacrificing performance. On commodity smartphones, Atom achieves 27--33% faster execution compared to non-reuse baselines, with only marginal performance drop ($\leq$ 2.3 Recall@1 in retrieval, $\leq$ 1.5 CIDEr in captioning). These results position Atom as a practical, scalable approach for efficient video-language understanding on edge devices. - oai:arXiv.org:2512.17108v1 + Microstructure-based Variational Neural Networks for Robust Uncertainty Quantification in Materials Digital Twins + https://arxiv.org/abs/2512.18104 + arXiv:2512.18104v1 Announce Type: new +Abstract: Aleatoric uncertainties - irremovable variability in microstructure morphology, constituent behavior, and processing conditions - pose a major challenge to developing uncertainty-robust digital twins. 
We introduce the Variational Deep Material Network (VDMN), a physics-informed surrogate model that enables efficient and probabilistic forward and inverse predictions of material behavior. The VDMN captures microstructure-induced variability by embedding variational distributions within its hierarchical, mechanistic architecture. Using an analytic propagation scheme based on Taylor-series expansion and automatic differentiation, the VDMN efficiently propagates uncertainty through the network during training and prediction. We demonstrate its capabilities in two digital-twin-driven applications: (1) as an uncertainty-aware materials digital twin, it predicts and experimentally validates the nonlinear mechanical variability in additively manufactured polymer composites; and (2) as an inverse calibration engine, it disentangles and quantitatively identifies overlapping sources of uncertainty in constituent properties. Together, these results establish the VDMN as a foundation for uncertainty-robust materials digital twins. + oai:arXiv.org:2512.18104v1 cs.LG - cs.MM - Mon, 22 Dec 2025 00:00:00 -0500 + cond-mat.mtrl-sci + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Kunjal Panchal, Saayan Mitra, Somdeb Sarkhel, Haoliang Wang, Ishita Dasgupta, Gang Wu, Hui Guan + http://creativecommons.org/licenses/by/4.0/ + Andreas E. Robertson, Samuel B. Inman, Ashley T. Lenau, Ricardo A. Lebensohn, Dongil Shin, Brad L. Boyce, Remi M. Dingreville - Bridging Training and Merging Through Momentum-Aware Optimization - https://arxiv.org/abs/2512.17109 - arXiv:2512.17109v1 Announce Type: new -Abstract: Training large neural networks and merging task-specific models both exploit low-rank structure and require parameter importance estimation, yet these challenges have been pursued in isolation. 
Current workflows compute curvature information during training, discard it, then recompute similar information for merging -- wasting computation and discarding valuable trajectory data. We introduce a unified framework that maintains factorized momentum and curvature statistics during training, then reuses this information for geometry-aware model composition. The proposed method achieves memory efficiency comparable to state-of-the-art approaches while accumulating task saliency scores that enable curvature-aware merging without post-hoc Fisher computation. We establish convergence guarantees for non-convex objectives with approximation error bounded by gradient singular value decay. On natural language understanding benchmarks, curvature-aware parameter selection outperforms magnitude-only baselines across all sparsity levels, with multi-task merging improving over strong baselines. The proposed framework exhibits rank-invariant convergence and superior hyperparameter robustness compared to existing low-rank optimizers. By treating the optimization trajectory as a reusable asset rather than discarding it, our approach eliminates redundant computation while enabling more principled model composition. - oai:arXiv.org:2512.17109v1 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Timing-Aware Two-Player Stochastic Games with Self-Triggered Control + https://arxiv.org/abs/2512.18109 + arXiv:2512.18109v1 Announce Type: new +Abstract: We study self-triggered two-player stochastic games on Piecewise Deterministic Markov Processes (PDMPs) where each agent decides when to observe and which open-loop action to hold. Augmenting the state with clocks and committed controls yields flow regions (both hold) and trigger surfaces (at least one updates). 
The framework covers both blind simultaneous (Nash) timing and observable sequential (Stackelberg) commitments; the former leads to coupled, intractable QVIs, while the latter admits a nested Hamilton-Jacobi-Bellman quasi-variational inequality and a tractable dynamic-programming decomposition. We outline a computational scheme based on implicit differentiation of the follower's fixed point. A pursuit-evasion example illustrates the strategic timing interaction. + oai:arXiv.org:2512.18109v1 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Alireza Moayedikia, Alicia Troncoso + Yunian Pan, Quanyan Zhu - Digitizing Nepal's Written Heritage: A Comprehensive HTR Pipeline for Old Nepali Manuscripts - https://arxiv.org/abs/2512.17111 - arXiv:2512.17111v1 Announce Type: new -Abstract: This paper presents the first end-to-end pipeline for Handwritten Text Recognition (HTR) for Old Nepali, a historically significant but low-resource language. We adopt a line-level transcription approach and systematically explore encoder-decoder architectures and data-centric techniques to improve recognition accuracy. Our best model achieves a Character Error Rate (CER) of 4.9\%. In addition, we implement and evaluate decoding strategies and analyze token-level confusions to better understand model behaviour and error patterns. While the dataset we used for evaluation is confidential, we release our training code, model configurations, and evaluation scripts to support further research in HTR for low-resource historical scripts. 
- oai:arXiv.org:2512.17111v1 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Bayesian Holonic Systems: Equilibrium, Uniqueness, and Computation + https://arxiv.org/abs/2512.18112 + arXiv:2512.18112v1 Announce Type: new +Abstract: This paper addresses the challenge of modeling and control in hierarchical, multi-agent systems, known as holonic systems, where local agent decisions are coupled with global systemic outcomes. We introduce the Bayesian Holonic Equilibrium (BHE), a concept that ensures consistency between agent-level rationality and system-wide emergent behavior. We establish the theoretical soundness of the BHE by showing its existence and, under stronger regularity conditions, its uniqueness. We propose a two-time scale learning algorithm to compute such an equilibrium. This algorithm mirrors the system's structure, with a fast timescale for intra-holon strategy coordination and a slow timescale for inter-holon belief adaptation about external risks. The convergence of the algorithm to the theoretical equilibrium is validated through a numerical experiment on a continuous public good game. This work provides a complete theoretical and algorithmic framework for the principled design and analysis of strategic risk in complex, coupled control systems. + oai:arXiv.org:2512.18112v1 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Anjali Sarawgi, Esteban Garces Arias, Christof Zotter + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yunian Pan, Quanyan Zhu - Optimal Verification of a Minimum-Weight Basis in an Uncertainty Matroid - https://arxiv.org/abs/2512.17116 - arXiv:2512.17116v1 Announce Type: new -Abstract: Research in explorable uncertainty addresses combinatorial optimization problems where there is partial information about the values of numeric input parameters, and exact values of these parameters can be determined by performing costly queries. 
The goal is to design an adaptive query strategy that minimizes the query cost incurred in computing an optimal solution. Solving such problems generally requires that we be able to solve the associated verification problem: given the answers to all queries in advance, find a minimum-cost set of queries that certifies an optimal solution to the combinatorial optimization problem. We present a polynomial-time algorithm for verifying a minimum-weight basis of a matroid, where each weight lies in a given uncertainty area. These areas may be finite sets, real intervals, or unions of open and closed intervals, strictly generalizing previous work by Erlebach and Hoffman which only handled the special case of open intervals. Our algorithm introduces new techniques to address the resulting challenges. - Verification problems are of particular importance in the area of explorable uncertainty, as the structural insights and techniques used to solve the verification problem often heavily influence work on the corresponding online problem and its stochastic variant. In our case, we use structural results from the verification problem to give a best-possible algorithm for a promise variant of the corresponding adaptive online problem. Finally, we show that our algorithms can be applied to two learning-augmented variants of the minimum-weight basis problem under explorable uncertainty. - oai:arXiv.org:2512.17116v1 - cs.DS - Mon, 22 Dec 2025 00:00:00 -0500 + Layout-Aware Text Editing for Efficient Transformation of Academic PDFs to Markdown + https://arxiv.org/abs/2512.18115 + arXiv:2512.18115v1 Announce Type: new +Abstract: Academic documents stored in PDF format can be transformed into plain text structured markup languages to enhance accessibility and enable scalable digital library workflows. Markup languages allow for easier updates and customization, making academic content more adaptable and accessible to diverse usage, such as linguistic corpus compilation. 
Such documents, typically delivered in PDF format, contain complex elements including mathematical formulas, figures, headers, and tables, as well as densely laid-out text. Existing end-to-end decoder transformer models can transform screenshots of documents into markup language. However, these models exhibit significant inefficiencies; their token-by-token decoding from scratch wastes many inference steps regenerating dense text that could be copied directly from PDF files. To solve this problem, we introduce EditTrans, a hybrid editing-generation model whose features allow identifying a queue of to-be-edited text from a PDF before starting to generate markup language. EditTrans contains a lightweight classifier fine-tuned from a Document Layout Analysis model on 162,127 pages of documents from arXiv. In our evaluations, EditTrans reduced transformation latency by up to 44.5% compared to end-to-end decoder transformer models, while maintaining transformation quality. Our code and reproducible dataset production scripts are open-sourced.
 + oai:arXiv.org:2512.18115v1
 cs.MM
 cs.CL
 cs.CV
 cs.DL
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 10.1007/978-3-032-04614-7_13
 Changxu Duan
 - Alignment, Exploration, and Novelty in Human-AI Interaction
 - https://arxiv.org/abs/2512.17117
 - arXiv:2512.17117v1 Announce Type: new
-Abstract: Human-AI interactions are increasingly part of everyday life, yet the interpersonal dynamics that unfold during such exchanges remain underexplored. This study investigates how emotional alignment, semantic exploration, and linguistic innovation emerge within a collaborative storytelling paradigm that paired human participants with a large language model (LLM) in a turn-taking setup. 
Over nine days, more than 3,000 museum visitors contributed to 27 evolving narratives, co-authored with an LLM in a naturalistic, public installation. To isolate the dynamics specific to human involvement, we compared the resulting dataset with a simulated baseline where two LLMs completed the same task. Using sentiment analysis, semantic embeddings, and information-theoretic measures of novelty and resonance, we trace how humans and models co-construct stories over time. Our results reveal that affective alignment is primarily driven by the model, with limited mutual convergence in human-AI interaction. At the same time, human participants explored a broader semantic space and introduced more novel, narratively influential contributions. These patterns were significantly reduced in the simulated AI-AI condition. Together, these findings highlight the unique role of human input in shaping narrative direction and creative divergence in co-authored texts. The methods developed here provide a scalable framework for analysing dyadic interaction and offer a new lens on creativity, emotional dynamics, and semantic coordination in human-AI collaboration.
 - oai:arXiv.org:2512.17117v1
 - cs.HC
 - Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Halfdan Nordahl Fundal, Johannes Eide Rambøll, Karsten Olsen
 + Network Analysis of Cyberbullying Interactions on Instagram
 + https://arxiv.org/abs/2512.18116
 + arXiv:2512.18116v1 Announce Type: new
+Abstract: Cyberbullying continues to grow in prevalence and its impact is felt by thousands worldwide. This study seeks a network science perspective on cyberbullying interaction patterns on the popular photo and video-sharing platform, Instagram. Using an annotated cyberbullying dataset containing over 400 Instagram posts, we outline a set of heuristics for building Session Graphs, where nodes represent users and their cyberbullying role, and edges represent their exchanged communications via comments. 
Over these graphs, we compute the Bully Score, a measure of the net malice introduced by bullies as they attack victims (attacks minus pushback), and the Victim Score, a measure of the net support victims receive from their defenders (support minus attacks). Utilizing small subgraph (motif) enumeration, our analysis uncovers the most common interaction patterns over all cyberbullying sessions. We also explore the prevalence of specific motif patterns across different ranges of Bully and Victim Scores. We find that a majority of cyberbullying sessions have negative Victim Scores (attacks outweighing support), while the Bully Score distribution has a slight positive skew (attacks outweighing pushback). We also observe that while bullies are the most common role in motifs, defenders are also consistently present. This suggests that bullying mitigation is a recurring structural feature of many interactions. To the best of our knowledge, this is the first study to explore this granular scale of network interactions using human annotations at the session and comment levels on Instagram. + oai:arXiv.org:2512.18116v1 + cs.SI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Halfdan Nordahl Fundal, Johannes Eide Ramb{\o}ll, Karsten Olsen + http://creativecommons.org/licenses/by/4.0/ + Satyaki Sikdar, Manuel Sandoval, Taylor Hales, Chloe Kilroy, Maddie Juarez, Tyler Rosario, Juan J. Rosendo, Deborah L. Hall, Yasin N. Silva - Beyond Low Rank: Fast Low-Rank + Diagonal Decomposition with a Spectral Approach - https://arxiv.org/abs/2512.17120 - arXiv:2512.17120v1 Announce Type: new -Abstract: Low-rank plus diagonal (LRPD) decompositions provide a powerful structural model for large covariance matrices, simultaneously capturing global shared factors and localized corrections that arise in covariance estimation, factor analysis, and large-scale kernel learning. 
We introduce an alternating low-rank then diagonal (Alt) algorithm that provably reduces approximation error and significantly outperforms gradient descent while remaining cheaper than majorization-minimization methods~\cite{sun2016majorization}. To scale to large matrices, we develop a randomized LRPD variant that combines fixed-rank Nystrom sketching~\cite{tropp2017fixed} for the low-rank component with Diag++ stochastic diagonal estimation~\cite{baston2022stochastic}. This hybrid algorithm achieves machine precision decomposition error using a number of matrix-vector products far smaller than the ambient dimension, and comes with rigorous non-asymptotic error bounds. On synthetic data, it exactly recovers LRPD structured matrices with high efficiency, and on real-world S&P 500 stock return covariances, where the spectrum decays slowly and strong sector structure exists, it achieves substantially lower error than pure low-rank approximations. - oai:arXiv.org:2512.17120v1 - math.NA - cs.NA - Mon, 22 Dec 2025 00:00:00 -0500 + Factorized Transport Alignment for Multimodal and Multiview E-commerce Representation Learning + https://arxiv.org/abs/2512.18117 + arXiv:2512.18117v1 Announce Type: new +Abstract: The rapid growth of e-commerce requires robust multimodal representations that capture diverse signals from user-generated listings. Existing vision-language models (VLMs) typically align titles with primary images, i.e., single-view, but overlook non-primary images and auxiliary textual views that provide critical semantics in open marketplaces such as Etsy or Poshmark. To this end, we propose a framework that unifies multimodal and multi-view learning through Factorized Transport, a lightweight approximation of optimal transport, designed for scalability and deployment efficiency. During training, the method emphasizes primary views while stochastically sampling auxiliary ones, reducing training cost from quadratic in the number of views to constant per item. 
At inference, all views are fused into a single cached embedding, preserving the efficiency of two-tower retrieval with no additional online overhead. On an industrial dataset of 1M product listings and 0.3M interactions, our approach delivers consistent improvements in cross-view and query-to-item retrieval, achieving up to +7.9% Recall@500 over strong multimodal baselines. Overall, our framework bridges scalability with optimal transport-based learning, making multi-view pretraining practical for large-scale e-commerce search. + oai:arXiv.org:2512.18117v1 + cs.IR + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Kingsley Yeon, Mihai Anitescu + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xiwen Chen, Yen-Chieh Lien, Susan Liu, Mar\'ia Casta\~nos, Abolfazl Razi, Xiaoting Zhao, Congzhe Su - The Effect of Negation on CLIP in Medical Imaging: Limitations of Contrastive Language-Image Pretraining - https://arxiv.org/abs/2512.17121 - arXiv:2512.17121v1 Announce Type: new -Abstract: Large vision-language models like CLIP are increasingly used in medical imaging tasks due to their ability to align images and text without the need for extensive labeled data. This makes them particularly useful for applications like image retrieval, report generation, and classification in clinical settings. A potential issue to this approach is that CLIP-based models often under perform when interpreting negated phrases, which is especially problematic in the context of medical diagnosing. In this study, we evaluate the Stanford AIMI CheXagent model on its ability to correctly retrieve chest X-ray images using prompts with and without negation. The goal of this project is to understand where this model fails and then use it as a base model to improve its retrieval accuracy by fine tuning methods outlined in previous work. 
Results from this study show improvement in handling of negation in the CLIP model with a slight decrease in accuracy of positive prompt evaluation. Alongside retrieval accuracy, we examined internal model behavior through token attribution, t-SNE projection, and attention-head ablation to better characterize how each fine tuning approach reshaped the text encoders representation of negated clinical language. Through this work, we hope to better understand the internal behavior of CLIP and improve its handling of negation using clinically relevant language for improving its reliability in medical AI devices. - oai:arXiv.org:2512.17121v1 + Learning Generalizable Neural Operators for Inverse Problems + https://arxiv.org/abs/2512.18120 + arXiv:2512.18120v1 Announce Type: new +Abstract: Inverse problems challenge existing neural operator architectures because ill-posed inverse maps violate continuity, uniqueness, and stability assumptions. We introduce B2B${}^{-1}$, an inverse basis-to-basis neural operator framework that addresses this limitation. Our key innovation is to decouple function representation from the inverse map. We learn neural basis functions for the input and output spaces, then train inverse models that operate on the resulting coefficient space. This structure allows us to learn deterministic, invertible, and probabilistic models within a single framework, and to choose models based on the degree of ill-posedness. We evaluate our approach on six inverse PDE benchmarks, including two novel datasets, and compare against existing invertible neural operator baselines. We learn probabilistic models that capture uncertainty and input variability, and remain robust to measurement noise due to implicit denoising in the coefficient calculation. Our results show consistent re-simulation performance across varying levels of ill-posedness. 
By separating representation from inversion, our framework enables scalable surrogate models for inverse problems that generalize across instances, domains, and degrees of ill-posedness. + oai:arXiv.org:2512.18120v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jasmine Vu, Shivanand Sheshappanavar + Adam J. Thorpe, Stepan Tretiakov, Dibakar Roy Sarkar, Krishna Kumar, Ufuk Topcu - Hermitian Hulls of Rational Algebraic Geometry Codes and Applications in Quantum Codes - https://arxiv.org/abs/2512.17128 - arXiv:2512.17128v1 Announce Type: new -Abstract: Interest in the hulls of linear codes has been growing rapidly. More is known when the inner product is Euclidean than Hermitian. A shift to the latter is gaining traction. The focus is on a code whose Hermitian hull dimension and dual distance can be systematically determined. Such a code can serve as an ingredient in designing the parameters of entanglement-assisted quantum error-correcting codes (EAQECCs). - We use tools from algebraic function fields of one variable to efficiently determine a good lower bound on the Hermitian hull dimensions of generalized rational algebraic geometry (AG) codes. We identify families of AG codes whose hull dimensions can be well estimated by a lower bound. Given such a code, the idea is to select a set of evaluation points for which the residues of the Weil differential associated with the Hermitian dual code has an easily verifiable property. - The approach allows us to construct codes with designed Hermitian hull dimensions based on known results on Reed-Solomon codes and their generalization. Using the Hermitian method on these maximum distance separable (MDS) codes with designed hull dimensions yields two families of MDS EAQECCs. We confirm that the excellent parameters of the quantum codes from these families are new. 
- oai:arXiv.org:2512.17128v1 - cs.IT - math.IT - Mon, 22 Dec 2025 00:00:00 -0500 + Accelerating End-to-End PDF to Markdown Conversion Through Assisted Generation + https://arxiv.org/abs/2512.18122 + arXiv:2512.18122v1 Announce Type: new +Abstract: Converting data from machine-unreadable formats like PDFs into Markdown has the potential to enhance the accessibility of scientific research. Existing end-to-end decoder transformer models can transform screenshots of PDFs into Markdown, offering more flexibility than pipeline-based methods. Yet, decoding text token by token from scratch is inefficient, especially when dense text can be directly copied from the PDF. To address this challenge, this paper modifies Prompt Lookup Decoding (PLD) to extract candidate sequences directly from PDF files, leveraging the high n-gram overlap between PDFs and their Markdown equivalents. A new method, Copy Lookup Decoding (CLD), is introduced here to enhance PLD's candidate generation mechanism. Experiments demonstrate that CLD can accelerate the conversion process by up to 1.70$\times$ at original quality. The codebase for this paper is open-source on GitHub (https://github.com/Fireblossom/CopyLookup). + oai:arXiv.org:2512.18122v1 + cs.MM + cs.DL + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/publicdomain/zero/1.0/ - Lin Sok, Martianus Frederic Ezerman, Ling San + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1007/978-3-031-97141-9_3 + Changxu Duan - DiffeoMorph: Learning to Morph 3D Shapes Using Differentiable Agent-Based Simulations - https://arxiv.org/abs/2512.17129 - arXiv:2512.17129v1 Announce Type: new -Abstract: Biological systems can form complex three-dimensional structures through the collective behavior of identical agents -- cells that follow the same internal rules and communicate without central control. 
How such distributed control gives rise to precise global patterns remains a central question not only in developmental biology but also in distributed robotics, programmable matter, and multi-agent learning. Here, we introduce DiffeoMorph, an end-to-end differentiable framework for learning a morphogenesis protocol that guides a population of agents to morph into a target 3D shape. Each agent updates its position and internal state using an attention-based SE(3)-equivariant graph neural network, based on its own internal state and signals received from other agents. To train this system, we introduce a new shape-matching loss based on the 3D Zernike polynomials, which compares the predicted and target shapes as continuous spatial distributions, not as discrete point clouds, and is invariant to agent ordering, number of agents, and rigid-body transformations. To enforce full SO(3) invariance -- invariant to rotations yet sensitive to reflections, we include an alignment step that optimally rotates the predicted Zernike spectrum to match the target before computing the loss. This results in a bilevel problem, with the inner loop optimizing a unit quaternion for the best alignment and the outer loop updating the agent model. We compute gradients through the alignment step using implicit differentiation. We perform systematic benchmarking to establish the advantages of our shape-matching loss over other standard distance metrics for shape comparison tasks. We then demonstrate that DiffeoMorph can form a range of shapes -- from simple ellipsoids to complex morphologies -- using only minimal spatial cues. 
- oai:arXiv.org:2512.17129v1 - cs.LG + Efficient Mixture-of-Agents Serving via Tree-Structured Routing, Adaptive Pruning, and Dependency-Aware Prefill-Decode Overlap + https://arxiv.org/abs/2512.18126 + arXiv:2512.18126v1 Announce Type: new +Abstract: Mixture-of-Agents (MoA) inference can suffer from dense inter-agent communication and low hardware utilization, which jointly inflate serving latency. We present a serving design that targets these bottlenecks through an algorithm-system co-design. First, we replace dense agent interaction graphs with a hierarchical tree topology that induces structured sparsity in inter-agent communication. Second, we introduce a runtime adaptive mechanism that selectively terminates or skips downstream agent invocations using semantic agreement and confidence signals from intermediate outputs. Third, we pipeline agent execution by overlapping incremental prefilling with decoding across dependency-related agents, improving utilization and reducing inference latency. Across representative tasks, this approach substantially reduces end-to-end latency (up to 90%) while maintaining comparable accuracy (within $\pm$1%) relative to dense-connectivity MoA baselines, and can improve accuracy in certain settings. 
+ oai:arXiv.org:2512.18126v1 + cs.AI cs.MA - cs.RO - q-bio.QM - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Seong Ho Pahng, Guoye Guan, Benjamin Fefferman, Sahand Hormoz + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Zijun Wang, Yijiahao Qi, Hanqiu Chen, Zishen Wan, Gongjin Sun, Dongyang Li, Shuyi Pei, Cong Hao - Smoothing DiLoCo with Primal Averaging for Faster Training of LLMs - https://arxiv.org/abs/2512.17131 - arXiv:2512.17131v1 Announce Type: new -Abstract: We propose Generalized Primal Averaging (GPA), an extension of Nesterov's method in its primal averaging formulation that addresses key limitations of recent averaging-based optimizers such as single-worker DiLoCo and Schedule-Free (SF) in the non-distributed setting. These two recent algorithmic approaches improve the performance of base optimizers, such as AdamW, through different iterate averaging strategies. Schedule-Free explicitly maintains a uniform average of past weights, while single-worker DiLoCo performs implicit averaging by periodically aggregating trajectories, called pseudo-gradients, to update the model parameters. However, single-worker DiLoCo's periodic averaging introduces a two-loop structure, increasing its memory requirements and number of hyperparameters. GPA overcomes these limitations by decoupling the interpolation constant in the primal averaging formulation of Nesterov. This decoupling enables GPA to smoothly average iterates at every step, generalizing and improving upon single-worker DiLoCo. Empirically, GPA consistently outperforms single-worker DiLoCo while removing the two-loop structure, simplifying hyperparameter tuning, and reducing its memory overhead to a single additional buffer. On the Llama-160M model, GPA provides a 24.22% speedup in terms of steps to reach the baseline (AdamW's) validation loss. 
Likewise, GPA achieves speedups of 12% and 27% on small and large batch setups, respectively, to attain AdamW's validation accuracy on the ImageNet ViT workload. Furthermore, we prove that for any base optimizer with regret bounded by $O(\sqrt{T})$, where $T$ is the number of iterations, GPA can match or exceed the convergence guarantee of the original optimizer, depending on the choice of interpolation constants.
      oai:arXiv.org:2512.17131v1
      cs.LG
      cs.AI
      stat.ML
      Mon, 22 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Aaron Defazio, Konstantin Mishchenko, Parameswaran Raman, Hao-Jun Michael Shi, Lin Xiao
    
    
      ACE-Sync: An Adaptive Cloud-Edge Synchronization Framework for Communication-Efficient Large-Scale Distributed Model Training
      https://arxiv.org/abs/2512.18127
      arXiv:2512.18127v1 Announce Type: new
Abstract: Large-scale deep learning models impose substantial communication overhead in distributed training, particularly in bandwidth-constrained or heterogeneous cloud-edge environments. Conventional synchronous or fixed-compression techniques often struggle to balance communication cost, convergence stability, and model accuracy. To address these challenges, we propose ACE-Sync, an Adaptive Cloud-Edge Synchronization Framework that integrates (1) an attention-based gradient importance predictor, (2) a differentiated parameter compression strategy, and (3) a hierarchical cloud-edge coordination mechanism. ACE-Sync dynamically selects which parameter groups to synchronize and determines appropriate compression levels under per-device bandwidth budgets. A knapsack-based optimization strategy is adopted to maximize important gradient preservation while reducing redundant communication. Furthermore, residual-based error compensation and device clustering ensure long-term convergence and cross-device personalization. Experiments show that ACE-Sync substantially reduces communication overhead while maintaining competitive accuracy.
Compared with FullSync, ACE-Sync lowers communication cost from 112.5 GB to 44.7 GB (a 60% reduction) and shortens convergence from 41 to 39 epochs. Despite aggressive communication reduction, ACE-Sync preserves high model quality, achieving 82.1% Top-1 accuracy, only 0.3% below the full-synchronization baseline, demonstrating its efficiency and scalability for large-scale distributed training. These results indicate that ACE-Sync provides a scalable, communication-efficient, and accuracy-preserving solution for large-scale cloud-edge distributed model training.
      oai:arXiv.org:2512.18127v1
      cs.DC
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Yi Yang, Ziyu Lin, Liesheng Wei
    
    
      Towards Senior-Robot Interaction: Reactive Robot Dog Gestures
      https://arxiv.org/abs/2512.17136
      arXiv:2512.17136v1 Announce Type: new
Abstract: As the global population ages, many seniors face the problem of loneliness. Companion robots offer a potential solution. However, current companion robots often lack advanced functionality, while task-oriented robots are not designed for social interaction, limiting their suitability and acceptance by seniors. Our work introduces a senior-oriented system for quadruped robots that allows for more intuitive user input and provides more socially expressive output. For user input, we implemented a MediaPipe-based module for hand gesture and head movement recognition, enabling control without a remote. For output, we designed and trained robotic dog gestures using curriculum-based reinforcement learning in Isaac Gym, progressing from simple standing to three-legged balancing and leg extensions, and more. The final tests achieved over 95\% success on average in simulation, and we validated a key social gesture (the paw-lift) on a Unitree robot.
Real-world tests demonstrated the feasibility and social expressiveness of this framework, while also revealing sim-to-real challenges in joint compliance, load distribution, and balance control. These contributions advance the development of practical quadruped robots as social companions for seniors and outline pathways for sim-to-real adaptation and inform future user studies.
      oai:arXiv.org:2512.17136v1
      cs.RO
      Mon, 22 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Chunyang Meng, Eduardo B. Sandoval, Ricardo Sosa, Francisco Cruz
    
    
      SERA-H: Beyond Native Sentinel Spatial Limits for High-Resolution Canopy Height Mapping
      https://arxiv.org/abs/2512.18128
      arXiv:2512.18128v1 Announce Type: new
Abstract: High-resolution mapping of canopy height is essential for forest management and biodiversity monitoring. Although recent studies have led to the advent of deep learning methods using satellite imagery to predict height maps, these approaches often face a trade-off between data accessibility and spatial resolution. To overcome these limitations, we present SERA-H, an end-to-end model combining a super-resolution module (EDSR) and temporal attention encoding (UTAE). Trained under the supervision of high-density LiDAR data (ALS), our model generates 2.5 m resolution height maps from freely available Sentinel-1 and Sentinel-2 (10 m) time series data. Evaluated on an open-source benchmark dataset in France, SERA-H, with a MAE of 2.6 m and a coefficient of determination of 0.82, not only outperforms standard Sentinel-1/2 baselines but also achieves performance comparable to or better than methods relying on commercial very high-resolution imagery (SPOT-6/7, PlanetScope, Maxar). These results demonstrate that combining high-resolution supervision with the spatiotemporal information embedded in time series enables the reconstruction of details beyond the input sensors' native resolution. SERA-H opens the possibility of freely mapping forests with high revisit frequency, achieving accuracy comparable to that of costly commercial imagery.
The source code is available at https://github.com/ThomasBoudras/SERA-H# + oai:arXiv.org:2512.18128v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Chunyang Meng, Eduardo B. Sandoval, Ricardo Sosa, Francisco Cruz + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Thomas Boudras, Martin Schwartz, Rasmus Fensholt, Martin Brandt, Ibrahim Fayad, Jean-Pierre Wigneron, Gabriel Belouze, Fajwel Fogel, Philippe Ciais - SDUM: A Scalable Deep Unrolled Model for Universal MRI Reconstruction - https://arxiv.org/abs/2512.17137 - arXiv:2512.17137v1 Announce Type: new -Abstract: Clinical MRI encompasses diverse imaging protocols--spanning anatomical targets (cardiac, brain, knee), contrasts (T1, T2, mapping), sampling patterns (Cartesian, radial, spiral, kt-space), and acceleration factors--yet current deep learning reconstructions are typically protocol-specific, hindering generalization and deployment. We introduce Scalable Deep Unrolled Model (SDUM), a universal framework combining a Restormer-based reconstructor, a learned coil sensitivity map estimator (CSME), sampling-aware weighted data consistency (SWDC), universal conditioning (UC) on cascade index and protocol metadata, and progressive cascade expansion training. SDUM exhibits foundation-model-like scaling behavior: reconstruction quality follows PSNR ${\sim}$ log(parameters) with correlation $r{=}0.986$ ($R^2{=}0.973$) up to 18 cascades, demonstrating predictable performance gains with model depth. A single SDUM trained on heterogeneous data achieves state-of-the-art results across all four CMRxRecon2025 challenge tracks--multi-center, multi-disease, 5T, and pediatric--without task-specific fine-tuning, surpassing specialized baselines by up to ${+}1.0$~dB. On CMRxRecon2024, SDUM outperforms the winning method PromptMR+ by ${+}0.55$~dB; on fastMRI brain, it exceeds PC-RNN by ${+}1.8$~dB. 
Ablations validate each component: SWDC ${+}0.43$~dB over standard DC, per-cascade CSME ${+}0.51$~dB, UC ${+}0.38$~dB. These results establish SDUM as a practical path toward universal, scalable MRI reconstruction. - oai:arXiv.org:2512.17137v1 - cs.CV - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + TraCeR: Transformer-Based Competing Risk Analysis with Longitudinal Covariates + https://arxiv.org/abs/2512.18129 + arXiv:2512.18129v1 Announce Type: new +Abstract: Survival analysis is a critical tool for modeling time-to-event data. Recent deep learning-based models have reduced various modeling assumptions including proportional hazard and linearity. However, a persistent challenge remains in incorporating longitudinal covariates, with prior work largely focusing on cross-sectional features, and in assessing calibration of these models, with research primarily focusing on discrimination during evaluation. We introduce TraCeR, a transformer-based survival analysis framework for incorporating longitudinal covariates. Based on a factorized self-attention architecture, TraCeR estimates the hazard function from a sequence of measurements, naturally capturing temporal covariate interactions without assumptions about the underlying data-generating process. The framework is inherently designed to handle censored data and competing events. Experiments on multiple real-world datasets demonstrate that TraCeR achieves substantial and statistically significant performance improvements over state-of-the-art methods. Furthermore, our evaluation extends beyond discrimination metrics and assesses model calibration, addressing a key oversight in literature. 
+ oai:arXiv.org:2512.18129v1 + cs.LG + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Puyang Wang, Pengfei Guo, Keyi Chai, Jinyuan Zhou, Daguang Xu, Shanshan Jiang + Maxmillan Ries, Sohan Seth - Bridging Psychometric and Content Development Practices with AI: A Community-Based Workflow for Augmenting Hawaiian Language Assessments - https://arxiv.org/abs/2512.17140 - arXiv:2512.17140v1 Announce Type: new -Abstract: This paper presents the design and evaluation of a community-based artificial intelligence (AI) workflow developed for the Kaiapuni Assessment of Educational Outcomes (K\=A'EO) program, the only native language assessment used for federal accountability in the United States. The project explored whether document-grounded language models could ethically and effectively augment human analysis of item performance while preserving the cultural and linguistic integrity of the Hawaiian language. Operating under the K\=A'EO AI Policy Framework, the workflow used NotebookLM for cross-document synthesis of psychometric data and Claude 3.5 Sonnet for developer-facing interpretation, with human oversight at every stage. Fifty-eight flagged items across Hawaiian Language Arts, Mathematics, and Science were reviewed during Round 2 of the AI Lab, producing six interpretive briefs that identified systemic design issues such as linguistic ambiguity, Depth-of-Knowledge (DOK) misalignment, and structural overload. The findings demonstrate that AI can serve as an ethically bounded amplifier of human expertise, accelerating analysis while simultaneously prioritizing fairness, human expertise, and cultural authority. This work offers a replicable model for responsible AI integration in Indigenous-language educational measurement. 
- oai:arXiv.org:2512.17140v1 - cs.HC - cs.CY - Mon, 22 Dec 2025 00:00:00 -0500 + Holistic Evaluation of State-of-the-Art LLMs for Code Generation + https://arxiv.org/abs/2512.18131 + arXiv:2512.18131v1 Announce Type: new +Abstract: This study presents a comprehensive empirical evaluation of six state-of-the-art large language models (LLMs) for code generation, including both general-purpose and code-specialized models. Using a dataset of 944 real-world LeetCode problems across five programming languages, we assess model performance using rigorous metrics: compile-time errors, runtime errors, functional failures, and algorithmic suboptimalities. The results reveal significant performance variations, with DeepSeek-R1 and GPT-4.1 consistently outperform others in terms of correctness, efficiency, and robustness. Through detailed case studies, we identify common failure scenarios such as syntax errors, logical flaws, and suboptimal algorithms, highlighting the critical role of prompt engineering and human oversight in improving results. Based on these findings, we provide actionable recommendations for developers and practitioners, emphasizing that successful LLM deployment depends on careful model selection, effective prompt design, and context-aware usage to ensure reliable code generation in real-world software development tasks. + oai:arXiv.org:2512.18131v1 + cs.SE + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - P\=ohai K\=ukea-Shultz, Frank Brockmann + Le Zhang, Suresh Kothari - Pro-Pose: Unpaired Full-Body Portrait Synthesis via Canonical UV Maps - https://arxiv.org/abs/2512.17143 - arXiv:2512.17143v1 Announce Type: new -Abstract: Photographs of people taken by professional photographers typically present the person in beautiful lighting, with an interesting pose, and flattering quality. This is unlike common photos people can take of themselves. 
In this paper, we explore how to create a ``professional'' version of a person's photograph, i.e., in a chosen pose, in a simple environment, with good lighting, and standard black top/bottom clothing. A key challenge is to preserve the person's unique identity, face and body features while transforming the photo. If there would exist a large paired dataset of the same person photographed both ``in the wild'' and by a professional photographer, the problem would potentially be easier to solve. However, such data does not exist, especially for a large variety of identities. To that end, we propose two key insights: 1) Our method transforms the input photo and person's face to a canonical UV space, which is further coupled with reposing methodology to model occlusions and novel view synthesis. Operating in UV space allows us to leverage existing unpaired datasets. 2) We personalize the output photo via multi image finetuning. Our approach yields high-quality, reposed portraits and achieves strong qualitative and quantitative performance on real-world imagery. - oai:arXiv.org:2512.17143v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + PermuteV: A Performant Side-channel-Resistant RISC-V Core Securing Edge AI Inference + https://arxiv.org/abs/2512.18132 + arXiv:2512.18132v1 Announce Type: new +Abstract: Edge AI inference is becoming prevalent thanks to the emergence of small yet high-performance microprocessors. This shift from cloud to edge processing brings several benefits in terms of energy savings, improved latency, and increased privacy. On the downside, bringing computation to the edge makes them more vulnerable to physical side-channel attacks (SCA), which aim to extract the confidentiality of neural network models, e.g., architecture and weight. To address this growing threat, we propose PermuteV, a performant side-channel resistant RISC-V core designed to secure neural network inference. 
PermuteV employs a hardware-accelerated defense mechanism that randomly permutes the execution order of loop iterations, thereby obfuscating the electromagnetic (EM) signature associated with sensitive operations. We implement PermuteV on FPGA and perform evaluations in terms of side-channel security, hardware area, and runtime overhead. The experimental results demonstrate that PermuteV can effectively defend against EM SCA with minimal area and runtime overhead. + oai:arXiv.org:2512.18132v1 + cs.CR + cs.AR + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Sandeep Mishra, Yasamin Jafarian, Andreas Lugmayr, Yingwei Li, Varsha Ramakrishnan, Srivatsan Varadharajan, Alan C. Bovik, Ira Kemelmacher-Shlizerman + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Nuntipat Narkthong, Xiaolin Xu - Solomonoff-Inspired Hypothesis Ranking with LLMs for Prediction Under Uncertainty - https://arxiv.org/abs/2512.17145 - arXiv:2512.17145v1 Announce Type: new -Abstract: Reasoning under uncertainty is a key challenge in AI, especially for real-world tasks, where problems with sparse data demands systematic generalisation. Existing approaches struggle to balance accuracy and simplicity when evaluating multiple candidate solutions. We propose a Solomonoff-inspired method that weights LLM-generated hypotheses by simplicity and predictive fit. Applied to benchmark (Mini-ARC) tasks, our method produces Solomonoff-weighted mixtures for per-cell predictions, yielding conservative, uncertainty-aware outputs even when hypotheses are noisy or partially incorrect. Compared to Bayesian Model Averaging (BMA), Solomonoff scoring spreads probability more evenly across competing hypotheses, while BMA concentrates weight on the most likely but potentially flawed candidates. Across tasks, this highlights the value of algorithmic information-theoretic priors for interpretable, reliable multi-hypothesis reasoning under uncertainty. 
- oai:arXiv.org:2512.17145v1
 + Grad: Guided Relation Diffusion Generation for Graph Augmentation in Graph Fraud Detection
 + https://arxiv.org/abs/2512.18133
 + arXiv:2512.18133v1 Announce Type: new
+Abstract: Nowadays, Graph Fraud Detection (GFD) in financial scenarios has become an urgent research topic to protect online payment security. However, as organized crime groups are becoming more professional in real-world scenarios, fraudsters are employing more sophisticated camouflage strategies. Specifically, fraudsters disguise themselves by mimicking the behavioral data collected by platforms, ensuring that their key characteristics are consistent with those of benign users to a high degree, which we call Adaptive Camouflage. Consequently, this narrows the differences in behavioral traits between them and benign users within the platform's database, thereby making current GFD models lose efficiency. To address this problem, we propose a relation diffusion-based graph augmentation model Grad. In detail, Grad leverages a supervised graph contrastive learning module to enhance the fraud-benign difference and employs a guided relation diffusion generator to generate auxiliary homophilic relations from scratch. Based on these, weak fraudulent signals would be enhanced during the aggregation process, thus being obvious enough to be captured. Extensive experiments have been conducted on two real-world datasets provided by WeChat Pay, one of the largest online payment platforms with billions of users, and three public datasets. The results show that our proposed model Grad outperforms SOTA methods across various scenarios, achieving up to 11.10% and 43.95% increases in AUC and AP, respectively. Our code is released at https://github.com/AI4Risk/antifraud and https://github.com/Muyiiiii/WWW25-Grad. &#10;
+ oai:arXiv.org:2512.18133v1 + cs.LG cs.AI - cs.IT - math.IT - Mon, 22 Dec 2025 00:00:00 -0500 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Josh Barber (QUT), Rourke Young (QUT), Cameron Coombe (QUT,CSIRO), Will Browne (QUT) + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1145/3696410.3714520 + Proceedings of the ACM Web Conference 2025 (WWW '25), April 28-May 2, 2025, Sydney, NSW, Australia + Jie Yang, Rui Zhang, Ziyang Cheng, Dawei Cheng, Guang Yang, Bo Wang - Biosecurity-Aware AI: Agentic Risk Auditing of Soft Prompt Attacks on ESM-Based Variant Predictors - https://arxiv.org/abs/2512.17146 - arXiv:2512.17146v1 Announce Type: new -Abstract: Genomic Foundation Models (GFMs), such as Evolutionary Scale Modeling (ESM), have demonstrated remarkable success in variant effect prediction. However, their security and robustness under adversarial manipulation remain largely unexplored. To address this gap, we introduce the Secure Agentic Genomic Evaluator (SAGE), an agentic framework for auditing the adversarial vulnerabilities of GFMs. SAGE functions through an interpretable and automated risk auditing loop. It injects soft prompt perturbations, monitors model behavior across training checkpoints, computes risk metrics such as AUROC and AUPR, and generates structured reports with large language model-based narrative explanations. This agentic process enables continuous evaluation of embedding-space robustness without modifying the underlying model. Using SAGE, we find that even state-of-the-art GFMs like ESM2 are sensitive to targeted soft prompt attacks, resulting in measurable performance degradation. These findings reveal critical and previously hidden vulnerabilities in genomic foundation models, showing the importance of agentic risk auditing in securing biomedical applications such as clinical variant interpretation. 
- oai:arXiv.org:2512.17146v1 - cs.CR + Optimal Software Pipelining and Warp Specialization for Tensor Core GPUs + https://arxiv.org/abs/2512.18134 + arXiv:2512.18134v1 Announce Type: new +Abstract: GPU architectures have continued to grow in complexity, with recent incarnations introducing increasingly powerful fixed-function units for matrix multiplication and data movement to accompany highly parallel general-purpose cores. To fully leverage these machines, software must use sophisticated schedules that maximally utilize all hardware resources. Since realizing such schedules is complex, both programmers and compilers routinely employ program transformations, such as software pipelining (SWP) and warp specialization (WS), to do so in practice. However, determining how best to use SWP and WS in combination is a challenging problem that is currently handled through a mix of brittle compilation heuristics and fallible human intuition, with little insight into the space of solutions. To remedy this situation, we introduce a novel formulation of SWP and WS as a joint optimization problem that can be solved holistically by off-the-shelf constraint solvers. We reify our approach in Twill, the first system that automatically derives optimal SWP and WS schedules for a large class of iterative programs. Twill is heuristic-free, easily extensible to new GPU architectures, and guaranteed to produce optimal schedules. We show that Twill can rediscover, and thereby prove optimal, the SWP and WS schedules manually developed by experts for Flash Attention on both the NVIDIA Hopper and Blackwell GPU architectures. 
+ oai:arXiv.org:2512.18134v1 + cs.PL + cs.AR cs.LG - q-bio.QM - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Huixin Zhan + Rupanshu Soi, Rohan Yadav, Fredrik Kjolstad, Alex Aiken, Maryam Mehri Dehnavi, Michael Garland, Michael Bauer - Transformer-Based Modeling of User Interaction Sequences for Dwell Time Prediction in Human-Computer Interfaces - https://arxiv.org/abs/2512.17149 - arXiv:2512.17149v1 Announce Type: new -Abstract: This study investigates the task of dwell time prediction and proposes a Transformer framework based on interaction behavior modeling. The method first represents user interaction sequences on the interface by integrating dwell duration, click frequency, scrolling behavior, and contextual features, which are mapped into a unified latent space through embedding and positional encoding. On this basis, a multi-head self-attention mechanism is employed to capture long-range dependencies, while a feed-forward network performs deep nonlinear transformations to model the dynamic patterns of dwell time. Multiple comparative experiments are conducted with BILSTM, DRFormer, FedFormer, and iTransformer as baselines under the same conditions. The results show that the proposed method achieves the best performance in terms of MSE, RMSE, MAPE, and RMAE, and more accurately captures the complex patterns in interaction behavior. In addition, sensitivity experiments are carried out on hyperparameters and environments to examine the impact of the number of attention heads, sequence window length, and device environment on prediction performance, which further demonstrates the robustness and adaptability of the method. Overall, this study provides a new solution for dwell time prediction from both theoretical and methodological perspectives and verifies its effectiveness in multiple aspects. 
- oai:arXiv.org:2512.17149v1 - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 + Unifying Causal Reinforcement Learning: Survey, Taxonomy, Algorithms and Applications + https://arxiv.org/abs/2512.18135 + arXiv:2512.18135v1 Announce Type: new +Abstract: Integrating causal inference (CI) with reinforcement learning (RL) has emerged as a powerful paradigm to address critical limitations in classical RL, including low explainability, lack of robustness and generalization failures. Traditional RL techniques, which typically rely on correlation-driven decision-making, struggle when faced with distribution shifts, confounding variables, and dynamic environments. Causal reinforcement learning (CRL), leveraging the foundational principles of causal inference, offers promising solutions to these challenges by explicitly modeling cause-and-effect relationships. In this survey, we systematically review recent advancements at the intersection of causal inference and RL. We categorize existing approaches into causal representation learning, counterfactual policy optimization, offline causal RL, causal transfer learning, and causal explainability. Through this structured analysis, we identify prevailing challenges, highlight empirical successes in practical applications, and discuss open problems. Finally, we provide future research directions, underscoring the potential of CRL for developing robust, generalizable, and interpretable artificial intelligence systems. + oai:arXiv.org:2512.18135v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Rui Liu, Runsheng Zhang, Shixiao Wang - - - Text-Conditioned Background Generation for Editable Multi-Layer Documents - https://arxiv.org/abs/2512.17151 - arXiv:2512.17151v1 Announce Type: new -Abstract: We present a framework for document-centric background generation with multi-page editing and thematic continuity. 
To ensure text regions remain readable, we employ a \emph{latent masking} formulation that softly attenuates updates in the diffusion space, inspired by smooth barrier functions in physics and numerical optimization. In addition, we introduce \emph{Automated Readability Optimization (ARO)}, which automatically places semi-transparent, rounded backing shapes behind text regions. ARO determines the minimal opacity needed to satisfy perceptual contrast standards (WCAG 2.2) relative to the underlying background, ensuring readability while maintaining aesthetic harmony without human intervention. Multi-page consistency is maintained through a summarization-and-instruction process, where each page is distilled into a compact representation that recursively guides subsequent generations. This design reflects how humans build continuity by retaining prior context, ensuring that visual motifs evolve coherently across an entire document. Our method further treats a document as a structured composition in which text, figures, and backgrounds are preserved or regenerated as separate layers, allowing targeted background editing without compromising readability. Finally, user-provided prompts allow stylistic adjustments in color and texture, balancing automated consistency with flexible customization. Our training-free framework produces visually coherent, text-preserving, and thematically aligned documents, bridging generative modeling with natural design workflows. - oai:arXiv.org:2512.17151v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Taewon Kang, Joseph K J, Chris Tensmeyer, Jihyung Kil, Wanrong Zhu, Ming C. Lin, Vlad I. 
Morariu + Cristiano da Costa Cunha, Wei Liu, Tim French, Ajmal Mian - PhysFire-WM: A Physics-Informed World Model for Emulating Fire Spread Dynamics - https://arxiv.org/abs/2512.17152 - arXiv:2512.17152v1 Announce Type: new -Abstract: Fine-grained fire prediction plays a crucial role in emergency response. Infrared images and fire masks provide complementary thermal and boundary information, yet current methods are predominantly limited to binary mask modeling with inherent signal sparsity, failing to capture the complex dynamics of fire. While world models show promise in video generation, their physical inconsistencies pose significant challenges for fire forecasting. This paper introduces PhysFire-WM, a Physics-informed World Model for emulating Fire spread dynamics. Our approach internalizes combustion dynamics by encoding structured priors from a Physical Simulator to rectify physical discrepancies, coupled with a Cross-task Collaborative Training strategy (CC-Train) that alleviates the issue of limited information in mask-based modeling. Through parameter sharing and gradient coordination, CC-Train effectively integrates thermal radiation dynamics and spatial boundary delineation, enhancing both physical realism and geometric accuracy. Extensive experiments on a fine-grained multimodal fire dataset demonstrate the superior accuracy of PhysFire-WM in fire spread prediction. Validation underscores the importance of physical priors and cross-task collaboration, providing new insights for applying physics-informed world models to disaster prediction. - oai:arXiv.org:2512.17152v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Constrained Cuts, Flows, and Lattice-Linearity + https://arxiv.org/abs/2512.18141 + arXiv:2512.18141v1 Announce Type: new +Abstract: In a capacitated directed graph, it is known that the set of all min-cuts forms a distributive lattice [1], [2]. 
Here, we describe this lattice as a regular predicate whose forbidden elements can be advanced in constant parallel time after precomputing a max-flow, so as to obtain parallel algorithms for min-cut problems with additional constraints encoded by lattice-linear predicates [3]. Some nice algorithmic applications follow. First, we use these methods to compute the irreducibles of the sublattice of min-cuts satisfying a regular predicate. By Birkhoff's theorem [4] this gives a succinct representation of such cuts, and so we also obtain a general algorithm for enumerating this sublattice. Finally, though we prove computing min-cuts satisfying additional constraints is NP-hard in general, we use poset slicing [5], [6] for exact algorithms (with constraints not necessarily encoded by lattice-linear predicates) with better complexity than exhaustive search. We also introduce $k$-transition predicates and strong advancement for improved complexity analyses of lattice-linear predicate algorithms in parallel settings, which is of independent interest.
 + oai:arXiv.org:2512.18141v1
 + cs.DS
 + cs.DC
 + Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 + Robert Streit, Vijay K. Garg

 - InstructDubber: Instruction-based Alignment for Zero-shot Movie Dubbing
 - https://arxiv.org/abs/2512.17154
 - arXiv:2512.17154v1 Announce Type: new
-Abstract: Movie dubbing seeks to synthesize speech from a given script using a specific voice, while ensuring accurate lip synchronization and emotion-prosody alignment with the character's visual performance. However, existing alignment approaches based on visual features face two key limitations: (1) they rely on complex, handcrafted visual preprocessing pipelines, including facial landmark detection and feature extraction; and (2) they generalize poorly to unseen visual domains, often resulting in degraded alignment and dubbing quality. &#10;
To address these issues, we propose InstructDubber, a novel instruction-based alignment dubbing method for both robust in-domain and zero-shot movie dubbing. Specifically, we first feed the video, script, and corresponding prompts into a multimodal large language model to generate natural language dubbing instructions regarding the speaking rate and emotion state depicted in the video, which is robust to visual domain variations. Second, we design an instructed duration distilling module to mine discriminative duration cues from speaking rate instructions to predict lip-aligned phoneme-level pronunciation duration. Third, for emotion-prosody alignment, we devise an instructed emotion calibrating module, which finetunes an LLM-based instruction analyzer using ground truth dubbing emotion as supervision and predicts prosody based on the calibrated emotion analysis. Finally, the predicted duration and prosody, together with the script, are fed into the audio decoder to generate video-aligned dubbing. Extensive experiments on three major benchmarks demonstrate that InstructDubber outperforms state-of-the-art approaches across both in-domain and zero-shot scenarios. - oai:arXiv.org:2512.17154v1 - cs.SD - Mon, 22 Dec 2025 00:00:00 -0500 + On Swarm Leader Identification using Probing Policies + https://arxiv.org/abs/2512.18146 + arXiv:2512.18146v1 Announce Type: new +Abstract: Identifying the leader within a robotic swarm is crucial, especially in adversarial contexts where leader concealment is necessary for mission success. This work introduces the interactive Swarm Leader Identification (iSLI) problem, a novel approach where an adversarial probing agent identifies a swarm's leader by physically interacting with its members. We formulate the iSLI problem as a Partially Observable Markov Decision Process (POMDP) and employ Deep Reinforcement Learning, specifically Proximal Policy Optimization (PPO), to train the prober's policy. 
The proposed approach utilizes a novel neural network architecture featuring a Timed Graph Relationformer (TGR) layer combined with a Simplified Structured State Space Sequence (S5) model. The TGR layer effectively processes graph-based observations of the swarm, capturing temporal dependencies and fusing relational information using a learned gating mechanism to generate informative representations for policy learning. Extensive simulations demonstrate that our TGR-based model outperforms baseline graph neural network architectures and exhibits significant zero-shot generalization capabilities across varying swarm sizes and speeds different from those used during training. The trained prober achieves high accuracy in identifying the leader, maintaining performance even in out-of-training distribution scenarios, and showing appropriate confidence levels in its predictions. Real-world experiments with physical robots further validate the approach, confirming successful sim-to-real transfer and robustness to dynamic changes, such as unexpected agent disconnections. + oai:arXiv.org:2512.18146v1 + cs.RO + cs.AI + cs.MA + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Zhedong Zhang, Liang Li, Gaoxiang Cong, Chunshan Liu, Yuhan Gao, Xiaowan Wang, Tao Gu, Yuankai Qi + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Stergios E. Bachoumas, Panagiotis Artemiadis - Enhancing AIGC Service Efficiency with Adaptive Multi-Edge Collaboration in A Distributed System - https://arxiv.org/abs/2512.17158 - arXiv:2512.17158v1 Announce Type: new -Abstract: The Artificial Intelligence Generated Content (AIGC) technique has gained significant traction for producing diverse content. However, existing AIGC services typically operate within a centralized framework, resulting in high response times. To address this issue, we integrate collaborative Mobile Edge Computing (MEC) technology to reduce processing delays for AIGC services. 
Current collaborative MEC methods primarily support single-server offloading or facilitate interactions among fixed Edge Servers (ESs), limiting flexibility and resource utilization across all ESs to meet the varying computing and networking requirements of AIGC services. We propose AMCoEdge, an adaptive multi-server collaborative MEC approach to enhancing AIGC service efficiency. The AMCoEdge fully utilizes the computing and networking resources across all ESs through adaptive multi-ES selection and dynamic workload allocation, thereby minimizing the offloading make-span of AIGC services. Our design features an online distributed algorithm based on deep reinforcement learning, accompanied by theoretical analyses that confirm an approximate linear time complexity. Simulation results show that our method outperforms state-of-the-art baselines, achieving at least an 11.04% reduction in task offloading make-span and a 44.86% decrease in failure rate. Additionally, we develop a distributed prototype system to implement and evaluate our AMCoEdge method for real AIGC service execution, demonstrating service delays that are 9.23% - 31.98% lower than the three representative methods. - oai:arXiv.org:2512.17158v1 - cs.NI - Mon, 22 Dec 2025 00:00:00 -0500 + Making Strong Error-Correcting Codes Work Effectively for HBM in AI Inference + https://arxiv.org/abs/2512.18152 + arXiv:2512.18152v1 Announce Type: new +Abstract: LLM inference is increasingly memory bound, and HBM cost per GB dominates system cost. Current HBM stacks include short on-die ECC that tightens binning, raises price, and fixes reliability policy inside the device. This paper asks whether a system can tolerate a much higher raw HBM bit error rate and still keep end-to-end correctness and throughput, without changing the HBM PHY or the fixed 32 B transaction size. We propose REACH, a controller managed ECC design that keeps the HBM link and 32 B transfers unchanged. 
REACH uses a two level Reed-Solomon scheme: each 32 B chunk uses an inner code to check and correct most faults locally, while chunks that cannot be fixed are marked as erasures. An outer code spans kilobytes and runs in erasure only mode, repairing only flagged chunks and avoiding the expensive locator step. For small random writes, REACH updates outer parity with differential parity to avoid recomputing parity over the whole span, and an optional importance adaptive bit plane policy can protect only critical fields such as BF16 exponents to reduce ECC work and traffic. On three LLMs at 8K context, REACH keeps about 79 percent of on-die ECC throughput at zero BER and remains qualified up to a raw BER of 1e-3, extending tolerable device error rates by about three orders of magnitude while keeping tokens per second nearly flat. In ASAP7, a full REACH controller occupies 15.2 mm2 and consumes 17.5 W at 3.56 TB/s, and it reduces ECC area by 11.6x and power by about 60 percent compared to a naive long Reed-Solomon baseline. By moving strong ECC into the controller, REACH turns long code reliability into a system choice that can enable lower cost HBM under the same standard interface. + oai:arXiv.org:2512.18152v1 + cs.AR + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1109/TSC.2025.3638764 - IEEE Transactions on Services Computing, 2025 - Changfu Xu, Jianxiong Guo, Jiandian Zeng, Houming Qiu, Tian Wang, Xiaowen Chu, Jiannong Cao + Rui Xie, Yunhua Fang, Asad Ul Haq, Linsen Ma, Sanchari Sen, Swagath Venkataramani, Liu Liu, Tong Zhang - Can Synthetic Images Serve as Effective and Efficient Class Prototypes? - https://arxiv.org/abs/2512.17160 - arXiv:2512.17160v1 Announce Type: new -Abstract: Vision-Language Models (VLMs) have shown strong performance in zero-shot image classification tasks. 
However, existing methods, including Contrastive Language-Image Pre-training (CLIP), all rely on annotated text-to-image pairs for aligning visual and textual modalities. This dependency introduces substantial cost and stringent accuracy requirements in preparing high-quality datasets. At the same time, processing data from two modalities also requires dual-tower encoders for most models, which hinders making them lightweight. To address these limitations, we introduce a ``Contrastive Language-Image Pre-training via Large-Language-Model-based Generation (LGCLIP)'' framework. LGCLIP leverages a Large Language Model (LLM) to generate class-specific prompts that guide a diffusion model in synthesizing reference images. Afterwards, these generated images serve as visual prototypes, and the visual features of real images are extracted and compared with the visual features of these prototypes to achieve comparative prediction. By optimizing prompt generation through the LLM and employing only a visual encoder, LGCLIP remains lightweight and efficient. Crucially, our framework requires only class labels as input during the whole experimental procedure, eliminating the need for manually annotated image-text pairs and extra pre-processing. Experimental results validate the feasibility and efficiency of LGCLIP, demonstrating great performance in zero-shot classification tasks and establishing a novel paradigm for classification.
 - oai:arXiv.org:2512.17160v1
 - cs.CV
 - Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 - Dianxing Shi, Dingjie Fu, Yuqiao Liu, Jun Wang

 + Performance Guarantees for Data Freshness in Resource-Constrained Adversarial IoT Systems
 + https://arxiv.org/abs/2512.18155
 + arXiv:2512.18155v1 Announce Type: new
+Abstract: Timely updates are critical for real-time monitoring and control applications powered by the Internet of Things (IoT). &#10;
As these systems scale, they become increasingly vulnerable to adversarial attacks, where malicious agents interfere with legitimate transmissions to reduce data rates, thereby inflating the age of information (AoI). Existing adversarial AoI models often assume stationary channels and overlook queueing dynamics arising from compromised sensing sources operating under resource constraints. Motivated by the G-queue framework, this paper investigates a two-source M/G/1/1 system in which one source is adversarial and disrupts the update process by injecting negative arrivals according to a Poisson process and inducing i.i.d. service slowdowns, bounded in attack rate and duration. Using moment generating functions, we then derive closed-form expressions for average and peak AoI for an arbitrary number of sources. Moreover, we introduce a worst-case constrained attack model and employ stochastic dominance arguments to establish analytical AoI bounds. Numerical results validate the analysis and highlight the impact of resource-limited adversarial interference under general service time distributions. + oai:arXiv.org:2512.18155v1 + cs.NI + cs.PF + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Dianxing Shi, Dingjie Fu, Yuqiao Liu, Jun Wang + Aresh Dadlani, Muthukrishnan Senthil Kumar, Omid Ardakanian, Ioanis Nikolaidis - Distributed Learning in Markovian Restless Bandits over Interference Graphs for Stable Spectrum Sharing - https://arxiv.org/abs/2512.17161 - arXiv:2512.17161v1 Announce Type: new -Abstract: We study distributed learning for spectrum access and sharing among multiple cognitive communication entities, such as cells, subnetworks, or cognitive radio users (collectively referred to as cells), in communication-constrained wireless networks modeled by interference graphs. Our goal is to achieve a globally stable and interference-aware channel allocation. 
Stability is defined through a generalized Gale-Shapley multi-to-one matching, a well-established solution concept in wireless resource allocation. We consider wireless networks where L cells share S orthogonal channels and cannot simultaneously use the same channel as their neighbors. Each channel evolves as an unknown restless Markov process with cell-dependent rewards, making this the first work to establish global Gale-Shapley stability for channel allocation in a stochastic, temporally varying restless environment. To address this challenge, we develop SMILE (Stable Multi-matching with Interference-aware LEarning), a communication-efficient distributed learning algorithm that integrates restless bandit learning with graph-constrained coordination. SMILE enables cells to distributedly balance exploration of unknown channels with exploitation of learned information. We prove that SMILE converges to the optimal stable allocation and achieves logarithmic regret relative to a genie with full knowledge of expected utilities. Simulations validate the theoretical guarantees and demonstrate SMILE's robustness, scalability, and efficiency across diverse spectrum-sharing scenarios.
 - oai:arXiv.org:2512.17161v1
 - cs.LG
 - Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Liad Lea Didi, Kobi Cohen

 + PIM-FW: Hardware-Software Co-Design of All-pairs Shortest Paths in DRAM
 + https://arxiv.org/abs/2512.18158
 + arXiv:2512.18158v1 Announce Type: new
+Abstract: All-pairs shortest paths (APSP) is a fundamental algorithm used for routing, logistics, and network analysis, but the cubic time complexity and heavy data movement of the canonical Floyd-Warshall (FW) algorithm severely limit its scalability on conventional CPUs or GPUs. In this paper, we propose PIM-FW, a novel co-designed hardware architecture and dataflow that leverages a processing-in- and near-memory architecture designed to accelerate the blocked FW algorithm on an HBM3 stack. &#10;
To enable fine-grained parallelism, we propose a massively parallel array of specialized bit-serial bank PEs and channel PEs designed to accelerate the core min-plus operations. Our novel dataflow complements this hardware, employing an interleaved mapping policy for superior load balancing and a hybrid in- and near-memory computing model for efficient computation and reduction. The novel in-bank computing approach allows all distance updates to be performed and stored in the memory bank, a key contribution that eliminates the data-movement bottleneck inherent in GPU-based approaches. We implement a full software and hardware co-design using a cycle-accurate simulator to simulate an 8-channel, 4-Hi HBM3 PIM stack on real road-network traces. Experimental results show that, for an 8192 x 8192 graph, PIM-FW achieves an 18.7x speedup in end-to-end execution, and consumes 3200x less DRAM energy compared to a state-of-the-art GPU-only Floyd-Warshall.
 + oai:arXiv.org:2512.18158v1
 + cs.AR
 + Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 + Tsung-Han Lu, Zheyu Li, Minxuan Zhou, Tajana Rosing

 - TCDE: Topic-Centric Dual Expansion of Queries and Documents with Large Language Models for Information Retrieval
 - https://arxiv.org/abs/2512.17164
 - arXiv:2512.17164v1 Announce Type: new
-Abstract: Query Expansion (QE) enriches queries and Document Expansion (DE) enriches documents, and these two techniques are often applied separately. However, such separate application may lead to semantic misalignment between the expanded queries (or documents) and their relevant documents (or queries). To address this serious issue, we propose TCDE, a dual expansion strategy that leverages large language models (LLMs) for topic-centric enrichment on both queries and documents. In TCDE, we design two distinct prompt templates for processing each query and document. &#10;
On the query side, an LLM is guided to identify distinct sub-topics within each query and generate a focused pseudo-document for each sub-topic. On the document side, an LLM is guided to distill each document into a set of core topic sentences. The resulting outputs are used to expand the original query and document. This topic-centric dual expansion process establishes semantic bridges between queries and their relevant documents, enabling better alignment for downstream retrieval models. Experiments on two challenging benchmarks, TREC Deep Learning and BEIR, demonstrate that TCDE achieves substantial improvements over strong state-of-the-art expansion baselines. In particular, on dense retrieval tasks, it outperforms several state-of-the-art methods, with a relative improvement of 2.8\% in NDCG@10 on the SciFact dataset. Experimental results validate the effectiveness of our topic-centric and dual expansion strategy. - oai:arXiv.org:2512.17164v1 - cs.IR - Mon, 22 Dec 2025 00:00:00 -0500 + EndoStreamDepth: Temporally Consistent Monocular Depth Estimation for Endoscopic Video Streams + https://arxiv.org/abs/2512.18159 + arXiv:2512.18159v1 Announce Type: new +Abstract: This work presents EndoStreamDepth, a monocular depth estimation framework for endoscopic video streams. It provides accurate depth maps with sharp anatomical boundaries for each frame, temporally consistent predictions across frames, and real-time throughput. Unlike prior work that uses batched inputs, EndoStreamDepth processes individual frames with a temporal module to propagate inter-frame information. 
The framework contains three main components: (1) a single-frame depth network with endoscopy-specific transformation to produce accurate depth maps, (2) multi-level Mamba temporal modules that leverage inter-frame information to improve accuracy and stabilize predictions, and (3) a hierarchical design with comprehensive multi-scale supervision, where complementary loss terms jointly improve local boundary sharpness and global geometric consistency. We conduct comprehensive evaluations on two publicly available colonoscopy depth estimation datasets. Compared to state-of-the-art monocular depth estimation methods, EndoStreamDepth substantially improves performance, and it produces depth maps with sharp, anatomically aligned boundaries, which are essential to support downstream tasks such as automation for robotic surgery. The code is publicly available at https://github.com/MedICL-VU/EndoStreamDepth + oai:arXiv.org:2512.18159v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yu Yang, Feng Tian, Ping Chen + http://creativecommons.org/licenses/by/4.0/ + Hao Li, Daiwei Lu, Jiacheng Wang, Robert J. Webster III, Ipek Oguz - BEOL Ferroelectric Compute-in-Memory Ising Machine for Simulated Bifurcation - https://arxiv.org/abs/2512.17165 - arXiv:2512.17165v1 Announce Type: new -Abstract: Computationally hard combinatorial optimization problems are pervasive in science and engineering, yet their NP-hard nature renders them increasingly inefficient to solve on conventional von Neumann architectures as problem size grows. Ising machines implemented using dynamical, digital and compute-in-memory (CiM) approaches offer a promising alternative, but often suffer from poor initialization and a fundamental trade-off between algorithmic performance and hardware efficiency. 
Hardware-friendly schemes such as simulated annealing converge slowly, whereas faster algorithms, including simulated bifurcation, are difficult to implement efficiently in CiM hardware, limiting both convergence speed and solution quality. To address these limitations, here we present a ferroelectric field-effect transistor (FeFET)-based CiM Ising framework that tightly co-designs algorithms and hardware to efficiently solve large-scale combinatorial optimization problems. The proposed approach employs a two-step algorithmic flow: an attention-inspired initialization that exploits global spin topology and reduces the required iterations by up to 80%, followed by a lightweight simulated bifurcation algorithm specifically tailored for CiM implementation. To natively accelerate the core vector-matrix and vector-matrix-vector operations in both steps, we fabricate a 32x256 FeFET CiM chip using ferroelectric capacitors integrated at the back end of line of a 180-nm CMOS platform. Across Max-Cut instances with up to 100,000 nodes, the proposed hardware-software co-designed solver achieves up to a 175.9x speedup over a GPU-based simulated bifurcation implementation while consistently delivering superior solution quality. - oai:arXiv.org:2512.17165v1 - cs.ET - Mon, 22 Dec 2025 00:00:00 -0500 + Propose, Solve, Verify: Self-Play Through Formal Verification + https://arxiv.org/abs/2512.18160 + arXiv:2512.18160v1 Announce Type: new +Abstract: Training models through self-play alone (without any human data) has been a longstanding goal in AI, but its effectiveness for training large language models remains unclear, particularly in code generation where rewards based on unit tests are brittle and prone to error propagation. We study self-play in the verified code generation setting, where formal verification provides reliable correctness signals. 
We introduce Propose, Solve, Verify (PSV), a simple self-play framework where formal verification signals are used to create a proposer capable of generating challenging synthetic problems and a solver trained via expert iteration. We use PSV to train PSV-Verus, which across three benchmarks improves pass@1 by up to 9.6x over inference-only and expert-iteration baselines. We show that performance scales with the number of generated questions and training iterations, and through ablations identify formal verification and difficulty-aware proposal as essential ingredients for successful self-play.
Our findings reveal that users who have maintained influence in the past are more likely to continue doing so in the future. Furthermore, we develop classification models to predict stable influencers among temporarily influential users, achieving an AUC of approximately 0.89 for source spreaders and 0.81 for brokers. Our experimental results highlight that current influence is a critical factor in classifying influencers, while past influence also significantly contributes, particularly for source spreaders. - oai:arXiv.org:2512.17166v1 - cs.SI - Mon, 22 Dec 2025 00:00:00 -0500 + Local Patches Meet Global Context: Scalable 3D Diffusion Priors for Computed Tomography Reconstruction + https://arxiv.org/abs/2512.18161 + arXiv:2512.18161v1 Announce Type: new +Abstract: Diffusion models learn strong image priors that can be leveraged to solve inverse problems like medical image reconstruction. However, for real-world applications such as 3D Computed Tomography (CT) imaging, directly training diffusion models on 3D data presents significant challenges due to the high computational demands of extensive GPU resources and large-scale datasets. Existing works mostly reuse 2D diffusion priors to address 3D inverse problems, but fail to fully realize and leverage the generative capacity of diffusion models for high-dimensional data. In this study, we propose a novel 3D patch-based diffusion model that can learn a fully 3D diffusion prior from limited data, enabling scalable generation of high-resolution 3D images. Our core idea is to learn the prior of 3D patches to achieve scalable efficiency, while coupling local and global information to guarantee high-quality 3D image generation, by modeling the joint distribution of position-aware 3D local patches and downsampled 3D volume as global context. Our approach not only enables high-quality 3D generation, but also offers an unprecedentedly efficient and accurate solution to high-resolution 3D inverse problems. 
Experiments on 3D CT reconstruction across multiple datasets show that our method outperforms state-of-the-art methods in both performance and efficiency, notably achieving high-resolution 3D reconstruction of $512 \times 512 \times 256$ ($\sim$20 mins). + oai:arXiv.org:2512.18161v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Harutaka Yamada, Sho Tsugawa, Mitsuo Yoshida + http://creativecommons.org/licenses/by/4.0/ + Taewon Yang, Jason Hu, Jeffrey A. Fessler, Liyue Shen - PILAR: Personalizing Augmented Reality Interactions with LLM-based Human-Centric and Trustworthy Explanations for Daily Use Cases - https://arxiv.org/abs/2512.17172 - arXiv:2512.17172v1 Announce Type: new -Abstract: Artificial intelligence (AI)-driven augmented reality (AR) systems are becoming increasingly integrated into daily life, and with this growth comes a greater need for explainability in real-time user interactions. Traditional explainable AI (XAI) methods, which often rely on feature-based or example-based explanations, struggle to deliver dynamic, context-specific, personalized, and human-centric insights for everyday AR users. These methods typically address separate explainability dimensions (e.g., when, what, how) with different explanation techniques, resulting in unrealistic and fragmented experiences for seamless AR interactions. To address this challenge, we propose PILAR, a novel framework that leverages a pre-trained large language model (LLM) to generate context-aware, personalized explanations, offering a more intuitive and trustworthy experience in real-time AI-powered AR systems. Unlike traditional methods, which rely on multiple techniques for different aspects of explanation, PILAR employs a unified LLM-based approach that dynamically adapts explanations to the user's needs, fostering greater trust and engagement. 
We implement the PILAR concept in a real-world AR application (e.g., personalized recipe recommendations), an open-source prototype that integrates real-time object detection, recipe recommendation, and LLM-based personalized explanations of the recommended recipes based on users' dietary preferences. We evaluate the effectiveness of PILAR through a user study with 16 participants performing AR-based recipe recommendation tasks, comparing an LLM-based explanation interface to a traditional template-based one. Results show that the LLM-based interface significantly enhances user performance and experience, with participants completing tasks 40% faster and reporting greater satisfaction, ease of use, and perceived transparency. - oai:arXiv.org:2512.17172v1 - cs.HC - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Influence of string register locations on vibratos among violoncellists + https://arxiv.org/abs/2512.18162 + arXiv:2512.18162v1 Announce Type: new +Abstract: This study analyzes how vibrato changes with finger position along the cello string. Examining 94 excerpts, we found moving the finger toward the bridge strongly increases acoustic vibrato depth ($\rho=0.6902$, $p=1.408\cdot 10^{-14}$). However, the performer's physical finger amplitude simultaneously decreases ($\rho=-0.6391$, $p=4.172\cdot 10^{-12}$). This shows players reduce finger motion in higher positions, but not enough to counteract the greater pitch deviation there, revealing both the presence and limits of compensatory vibrato behavior. + oai:arXiv.org:2512.18162v1 + cs.SD + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - 10.1109/ISMAR-Adjunct68609.2025.00060 - Ripan Kumar Kundu, Istiak Ahmed, Khaza Anuarul Hoque + http://creativecommons.org/licenses/by/4.0/ + Steven Hu, Sophia H. Kim, Helena H. Kim, Hugo Mackay, Eric J. 
Heller - ABE-CLIP: Training-Free Attribute Binding Enhancement for Compositional Image-Text Matching - https://arxiv.org/abs/2512.17178 - arXiv:2512.17178v1 Announce Type: new -Abstract: Contrastive Language-Image Pretraining (CLIP) has achieved remarkable performance in various multimodal tasks. However, it still struggles with compositional image-text matching, particularly in accurately associating objects with their corresponding attributes, because its inherent global representation often overlooks fine-grained semantics for attribute binding. Existing methods often require additional training or extensive hard negative sampling, yet they frequently show limited generalization to novel compositional concepts and fail to fundamentally address the drawbacks of global representations. In this paper, we propose ABE-CLIP, a novel training-free Attribute Binding Enhancement method designed to strengthen attribute-object binding in CLIP-like models. Specifically, we employ a Semantic Refinement Mechanism to refine token embeddings for both object and attribute phrases in the text, thereby mitigating attribute confusion and improving semantic precision. We further introduce a Local Token-Patch Alignment strategy that computes similarity scores between refined textual tokens and their most relevant image patches. By aggregating localized similarity scores, ABE-CLIP computes the final image-text similarity. Experiments on multiple datasets demonstrate that ABE-CLIP significantly improves attribute-object binding performance, even surpassing methods that require extensive training. - oai:arXiv.org:2512.17178v1 - cs.CV - cs.IR - Mon, 22 Dec 2025 00:00:00 -0500 + Conscious Data Contribution via Community-Driven Chain-of-Thought Distillation + https://arxiv.org/abs/2512.18174 + arXiv:2512.18174v1 Announce Type: new +Abstract: The current era of AI development places a heavy emphasis on training large models on increasingly scaled-up datasets. 
This paradigm has catalyzed entirely new product categories, such as LLM chatbots, while also raising concerns about data privacy and consumer choice. In this paper, we consider questions of data portability and user autonomy in the context of LLMs that "reason" using chain-of-thought (CoT) traces, computing intermediate text artifacts from user input before producing a final output. We first interpret recent data privacy and portability law to argue that these intermediate computations qualify as users' personal data. Then, building on the existing framework of Conscious Data Contribution, we show how communities who receive low utility from an available model can aggregate and distill their shared knowledge into an alternate model better aligned with their goals. We verify this approach empirically and investigate the effects of community diversity, reasoning granularity, and community size on distillation performance. + oai:arXiv.org:2512.18174v1 + cs.LG + cs.CR + cs.CY + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qi Zhang, Yuxu Chen, Lei Deng, Lili Shen + Lena Libon, Meghana Bhange, Rushabh Solanki, Elliot Creager, Ulrich A\"ivodji - Enhancing Long Document Long Form Summarisation with Self-Planning - https://arxiv.org/abs/2512.17179 - arXiv:2512.17179v1 Announce Type: new -Abstract: We introduce a novel approach for long context summarisation, highlight-guided generation, that leverages sentence-level information as a content plan to improve the traceability and faithfulness of generated summaries. Our framework applies self-planning methods to identify important content and then generates a summary conditioned on the plan. We explore both an end-to-end and two-stage variants of the approach, finding that the two-stage pipeline performs better on long and information-dense documents. 
Experiments on long-form summarisation datasets demonstrate that our method consistently improves factual consistency while preserving relevance and overall quality. On GovReport, our best approach improves ROUGE-L by 4.1 points and achieves about 35% gains in SummaC scores. Qualitative analysis shows that highlight-guided summarisation helps preserve important details, leading to more accurate and insightful summaries across domains.
+ oai:arXiv.org:2512.18176v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Xiaotang Du, Rohit Saxena, Laura Perez-Beltrachini, Pasquale Minervini, Ivan Titov + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ziyu Zhang, Yi Yu, Simeng Zhu, Ahmed Aly, Yunhe Gao, Ning Gu, Yuan Xue - Conservative Bias in Multi-Teacher Learning: Why Agents Prefer Low-Reward Advisors - https://arxiv.org/abs/2512.17180 - arXiv:2512.17180v1 Announce Type: new -Abstract: Interactive reinforcement learning (IRL) has shown promise in enabling autonomous agents and robots to learn complex behaviours from human teachers, yet the dynamics of teacher selection remain poorly understood. This paper reveals an unexpected phenomenon in IRL: when given a choice between teachers with different reward structures, learning agents overwhelmingly prefer conservative, low-reward teachers (93.16% selection rate) over those offering 20x higher rewards. Through 1,250 experimental runs in navigation tasks with multiple expert teachers, we discovered: (1) Conservative bias dominates teacher selection: agents systematically choose the lowest-reward teacher, prioritising consistency over optimality; (2) Critical performance thresholds exist at teacher availability rho >= 0.6 and accuracy omega >= 0.6, below which the framework fails catastrophically; (3) The framework achieves 159% improvement over baseline Q-learning under concept drift. These findings challenge fundamental assumptions about optimal teaching in RL and suggest potential implications for human-robot collaboration, where human preferences for safety and consistency may align with the observed agent selection behaviour, potentially informing training paradigms for safety-critical robotic applications. 
- oai:arXiv.org:2512.17180v1 - cs.RO + NEURO-GUARD: Neuro-Symbolic Generalization and Unbiased Adaptive Routing for Diagnostics -- Explainable Medical AI + https://arxiv.org/abs/2512.18177 + arXiv:2512.18177v1 Announce Type: new +Abstract: Accurate yet interpretable image-based diagnosis remains a central challenge in medical AI, particularly in settings characterized by limited data, subtle visual cues, and high-stakes clinical decision-making. Most existing vision models rely on purely data-driven learning and produce black-box predictions with limited interpretability and poor cross-domain generalization, hindering their real-world clinical adoption. We present NEURO-GUARD, a novel knowledge-guided vision framework that integrates Vision Transformers (ViTs) with language-driven reasoning to improve performance, transparency, and domain robustness. NEURO-GUARD employs a retrieval-augmented generation (RAG) mechanism for self-verification, in which a large language model (LLM) iteratively generates, evaluates, and refines feature-extraction code for medical images. By grounding this process in clinical guidelines and expert knowledge, the framework progressively enhances feature detection and classification beyond purely data-driven baselines. Extensive experiments on diabetic retinopathy classification across four benchmark datasets APTOS, EyePACS, Messidor-1, and Messidor-2 demonstrate that NEURO-GUARD improves accuracy by 6.2% over a ViT-only baseline (84.69% vs. 78.4%) and achieves a 5% gain in domain generalization. Additional evaluations on MRI-based seizure detection further confirm its cross-domain robustness, consistently outperforming existing methods. + Overall, NEURO-GUARD bridges symbolic medical reasoning with subsymbolic visual learning, enabling interpretable, knowledge-aware, and generalizable medical image diagnosis while achieving state-of-the-art performance across multiple datasets. 
+ oai:arXiv.org:2512.18177v1 cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Maher Mesto, Francisco Cruz + Midhat Urooj, Ayan Banerjee, Sandeep Gupta - Long-time stability and convergence analysis of an IMEX BDF3 scheme for 2-D incompressible Navier-Stokes equation - https://arxiv.org/abs/2512.17182 - arXiv:2512.17182v1 Announce Type: new -Abstract: High-order time-stepping schemes are crucial for simulating incompressible fluid flows due to their ability to capture complex turbulent behavior and unsteady motion. In this work, we propose a third-order accurate numerical scheme for the two-dimensional incompressible Navier-Stokes equation. Spatial and temporal discretization is achieved using Fourier pseudo-spectral approximation and the BDF3 stencil, combined with the Adams-Bashforth extrapolation for the nonlinear convection term, resulting in a semi-implicit, fully discrete formulation. This approach requires solving only a single Poisson-like equation per time step while maintaining the desired temporal accuracy. Classical numerical experiments demonstrate the advantage of our scheme in terms of permissible time step sizes. Moreover, we establish uniform-in-time bounds for the vorticity in both $L^2$ and higher-order $H^m$ norms ($m \geq 1$), provided the time step is sufficiently small. These bounds, in turn, facilitate the derivation of optimal convergence rates. - oai:arXiv.org:2512.17182v1 + A Domain Decomposition Deep Neural Network Method with Multi-Activation Functions for Solving Elliptic and Parabolic Interface Problems + https://arxiv.org/abs/2512.18178 + arXiv:2512.18178v1 Announce Type: new +Abstract: We present a domain decomposition-based deep learning method for solving elliptic and parabolic interface problems with discontinuous coefficients in two to ten dimensions. 
Our Multi-Activation Function (MAF) approach employs two independent neural networks, one for each subdomain, coupled through interface conditions in the loss function. The key innovation is a multi-activation mechanism within each subdomain network that adaptively blends multiple activation functions (e.g., $\tanh$ and Gaussian-type) with interface-aware weighting, enhancing learning efficiency near interfaces where coupling constraints are most demanding. We prove conditional error bounds relating solution accuracy to trained loss values and quadrature errors. Numerical experiments on elliptic and parabolic interface problems with various interface geometries (2D--10D) validate the effectiveness and accuracy of the proposed method. + oai:arXiv.org:2512.18178v1 math.NA cs.NA - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Kelong Cheng, Jingwei Sun, Hong Zhang + Qijia Zhai - Semantic Co-Speech Gesture Synthesis and Real-Time Control for Humanoid Robots - https://arxiv.org/abs/2512.17183 - arXiv:2512.17183v1 Announce Type: new -Abstract: We present an innovative end-to-end framework for synthesizing semantically meaningful co-speech gestures and deploying them in real-time on a humanoid robot. This system addresses the challenge of creating natural, expressive non-verbal communication for robots by integrating advanced gesture generation techniques with robust physical control. Our core innovation lies in the meticulous integration of a semantics-aware gesture synthesis module, which derives expressive reference motions from speech input by leveraging a generative retrieval mechanism based on large language models (LLMs) and an autoregressive Motion-GPT model. This is coupled with a high-fidelity imitation learning control policy, the MotionTracker, which enables the Unitree G1 humanoid robot to execute these complex motions dynamically and maintain balance. 
To ensure feasibility, we employ a robust General Motion Retargeting (GMR) method to bridge the embodiment gap between human motion data and the robot platform. Through comprehensive evaluation, we demonstrate that our combined system produces semantically appropriate and rhythmically coherent gestures that are accurately tracked and executed by the physical robot. To our knowledge, this work represents a significant step toward general real-world use by providing a complete pipeline for automatic, semantic-aware, co-speech gesture generation and synchronized real-time physical deployment on a humanoid robot. - oai:arXiv.org:2512.17183v1 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 + FairExpand: Individual Fairness on Graphs with Partial Similarity Information + https://arxiv.org/abs/2512.18180 + arXiv:2512.18180v1 Announce Type: new +Abstract: Individual fairness, which requires that similar individuals should be treated similarly by algorithmic systems, has become a central principle in fair machine learning. Individual fairness has garnered traction in graph representation learning due to its practical importance in high-stakes Web areas such as user modeling, recommender systems, and search. However, existing methods assume the existence of predefined similarity information over all node pairs, an often unrealistic requirement that prevents their operationalization in practice. In this paper, we assume the similarity information is only available for a limited subset of node pairs and introduce FairExpand, a flexible framework that promotes individual fairness in this more realistic partial information scenario. FairExpand follows a two-step pipeline that alternates between refining node representations using a backbone model (e.g., a graph neural network) and gradually propagating similarity information, which allows fairness enforcement to effectively expand to the entire graph. 
Extensive experiments show that FairExpand consistently enhances individual fairness while preserving performance, making it a practical solution for enabling graph-based individual fairness in real-world applications with partial similarity information. + oai:arXiv.org:2512.18180v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Gang Zhang + http://creativecommons.org/licenses/by/4.0/ + Rebecca Salganik, Yibin Wang, Guillaume Salha-Galvan, Jian Kang - It is not always greener on the other side: Greenery perception across demographics and personalities in multiple cities - https://arxiv.org/abs/2512.17186 - arXiv:2512.17186v1 Announce Type: new -Abstract: Quantifying and assessing urban greenery is consequential for planning and development, reflecting the everlasting importance of green spaces for multiple climate and well-being dimensions of cities. Evaluation can be broadly grouped into objective (e.g., measuring the amount of greenery) and subjective (e.g., polling the perception of people) approaches, which may differ -- what people see and feel about how green a place is might not match the measurements of the actual amount of vegetation. In this work, we advance the state of the art by measuring such differences and explaining them through human, geographic, and spatial dimensions. The experiments rely on contextual information extracted from street view imagery and a comprehensive urban visual perception survey collected from 1,000 people across five countries with their extensive demographic and personality information. We analyze the discrepancies between objective measures (e.g., Green View Index (GVI)) and subjective scores (e.g., pairwise ratings), examining whether they can be explained by a variety of human and visual factors such as age group and spatial variation of greenery in the scene. 
The findings reveal that such discrepancies are comparable around the world and that demographics and personality do not play a significant role in perception. Further, while perceived and measured greenery correlate consistently across geographies (both where people and where imagery are from), where people live plays a significant role in explaining perceptual differences, with these two ranking as the top among seven features that influence perceived greenery the most. This location influence suggests that cultural, environmental, and experiential factors substantially shape how individuals observe greenery in cities.
The Appearance Expert employs a decoupled kinematic-aesthetic fine-tuning strategy, achieving SOTA performance in pose-driven image animation. To better benchmark this task, we curate a large-scale and diverse dataset and design a motion-appearance evaluation protocol. Based on this protocol, MACE-Dance also achieves state-of-the-art performance. Project page: https://macedance.github.io/
To study neighborhood-level cooperation signals under this framework, we propose MAPPO with Local Cooperation Reward, termed MAPPO-LCR. The local cooperation reward aligns policy updates with surrounding cooperative density without altering the original game structure. MAPPO-LCR preserves decentralized execution while enabling population-level value estimation during training. Extensive simulations demonstrate stable cooperation emergence and reliable convergence across enhancement factors. Statistical analyses further confirm the learning advantage of MAPPO over PPO in spatial public goods games. - oai:arXiv.org:2512.17187v1 - cs.MA - cs.GT - Mon, 22 Dec 2025 00:00:00 -0500 + Understanding Typing-Related Bugs in Solidity Compiler + https://arxiv.org/abs/2512.18182 + arXiv:2512.18182v1 Announce Type: new +Abstract: The correctness of the Solidity compiler is crucial for ensuring the security of smart contracts. However, the implementation complexity of its type system often introduces elusive defects. This paper presents the first systematic empirical study on typing-related bugs in the Solidity compiler. To systematically analyze these bugs, we collected 146 officially confirmed and fixed typing-related bugs from the official GitHub repository of Solidity compiler. For each bug, we conducted an in-depth analysis and classification from four dimensions: symptoms, root causes, exposure conditions, and fix strategies. Through this study, we reveal unique distribution patterns and key characteristics of such bugs, and summarize 12 core findings. We additionally give the implications of our findings, and these implications not only deepen the understanding of inherent weaknesses in the Solidity compiler but also provide new insights for detecting and fixing typing-related bugs in the Solidity compiler. 
+ oai:arXiv.org:2512.18182v1 + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhaoqilin Yang, Axin Xiang, Kedi Yang, Tianjun Liu, Youliang Tian + Lantian Li, Yue Pan, Dan Wang, Jingwen Wu, Zhongxing Yu - Globally Optimal Solution to the Generalized Relative Pose Estimation Problem using Affine Correspondences - https://arxiv.org/abs/2512.17188 - arXiv:2512.17188v1 Announce Type: new -Abstract: Mobile devices equipped with a multi-camera system and an inertial measurement unit (IMU) are widely used nowadays, such as self-driving cars. The task of relative pose estimation using visual and inertial information has important applications in various fields. To improve the accuracy of relative pose estimation of multi-camera systems, we propose a globally optimal solver using affine correspondences to estimate the generalized relative pose with a known vertical direction. First, a cost function about the relative rotation angle is established after decoupling the rotation matrix and translation vector, which minimizes the algebraic error of geometric constraints from affine correspondences. Then, the global optimization problem is converted into two polynomials with two unknowns based on the characteristic equation and its first derivative is zero. Finally, the relative rotation angle can be solved using the polynomial eigenvalue solver, and the translation vector can be obtained from the eigenvector. Besides, a new linear solution is proposed when the relative rotation is small. The proposed solver is evaluated on synthetic data and real-world datasets. The experiment results demonstrate that our method outperforms comparable state-of-the-art methods in accuracy. - oai:arXiv.org:2512.17188v1 + Is There a Better Source Distribution than Gaussian? 
Exploring Source Distributions for Image Flow Matching + https://arxiv.org/abs/2512.18184 + arXiv:2512.18184v1 Announce Type: new +Abstract: Flow matching has emerged as a powerful generative modeling approach with flexible choices of source distribution. While Gaussian distributions are commonly used, the potential for better alternatives in high-dimensional data generation remains largely unexplored. In this paper, we propose a novel 2D simulation that captures high-dimensional geometric properties in an interpretable 2D setting, enabling us to analyze the learning dynamics of flow matching during training. Based on this analysis, we derive several key insights about flow matching behavior: (1) density approximation can paradoxically degrade performance due to mode discrepancy, (2) directional alignment suffers from path entanglement when overly concentrated, (3) Gaussian's omnidirectional coverage ensures robust learning, and (4) norm misalignment incurs substantial learning costs. Building on these insights, we propose a practical framework that combines norm-aligned training with directionally-pruned sampling. This approach maintains the robust omnidirectional supervision essential for stable flow learning, while eliminating initializations in data-sparse regions during inference. Importantly, our pruning strategy can be applied to any flow matching model trained with a Gaussian source, providing immediate performance gains without the need for retraining. Empirical evaluations demonstrate consistent improvements in both generation quality and sampling efficiency. Our findings provide practical insights and guidelines for source distribution design and introduce a readily applicable technique for improving existing flow matching models. Our code is available at https://github.com/kwanseokk/SourceFM. 
+ oai:arXiv.org:2512.18184v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhenbao Yu, Banglei Guan, Shunkun Liang, Zibin Liu, Yang Shang, Qifeng Yu + http://creativecommons.org/licenses/by/4.0/ + Junho Lee, Kwanseok Kim, Joonseok Lee - Anatomical Region-Guided Contrastive Decoding: A Plug-and-Play Strategy for Mitigating Hallucinations in Medical VLMs - https://arxiv.org/abs/2512.17189 - arXiv:2512.17189v1 Announce Type: new -Abstract: Medical Vision-Language Models (MedVLMs) show immense promise in clinical applicability. However, their reliability is hindered by hallucinations, where models often fail to derive answers from visual evidence, instead relying on learned textual priors. Existing mitigation strategies for MedVLMs have distinct limitations: training-based methods rely on costly expert annotations, limiting scalability, while training-free interventions like contrastive decoding, though data-efficient, apply a global, untargeted correction whose effects in complex real-world clinical settings can be unreliable. To address these challenges, we introduce Anatomical Region-Guided Contrastive Decoding (ARCD), a plug-and-play strategy that mitigates hallucinations by providing targeted, region-specific guidance. Our module leverages an anatomical mask to direct a three-tiered contrastive decoding process. By dynamically re-weighting at the token, attention, and logits levels, it verifiably steers the model's focus onto specified regions, reinforcing anatomical understanding and suppressing factually incorrect outputs. Extensive experiments across diverse datasets, including chest X-ray, CT, brain MRI, and ocular ultrasound, demonstrate our method's effectiveness in improving regional understanding, reducing hallucinations, and enhancing overall diagnostic accuracy. 
- oai:arXiv.org:2512.17189v1
 + ALIGN: Advanced Query Initialization with LiDAR-Image Guidance for Occlusion-Robust 3D Object Detection
 + https://arxiv.org/abs/2512.18187
 + arXiv:2512.18187v1 Announce Type: new 
+Abstract: Recent query-based 3D object detection methods using camera and LiDAR inputs have shown strong performance, but existing query initialization strategies, such as random sampling or BEV heatmap-based sampling, often result in inefficient query usage and reduced accuracy, particularly for occluded or crowded objects. To address this limitation, we propose ALIGN (Advanced query initialization with LiDAR and Image GuidaNce), a novel approach for occlusion-robust, object-aware query initialization. Our model consists of three key components: (i) Occlusion-aware Center Estimation (OCE), which integrates LiDAR geometry and image semantics to estimate object centers accurately; (ii) Adaptive Neighbor Sampling (ANS), which generates object candidates from LiDAR clustering and supplements each object by sampling spatially and semantically aligned points around it; and (iii) Dynamic Query Balancing (DQB), which adaptively balances queries between foreground and background regions. Our extensive experiments on the nuScenes benchmark demonstrate that ALIGN consistently improves performance across multiple state-of-the-art detectors, achieving gains of up to +0.9 mAP and +1.2 NDS, particularly in challenging scenes with occlusions or dense crowds. Our code will be publicly available upon publication. 
+ oai:arXiv.org:2512.18187v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Xiao Liang, Chenxi Liu, Zhi Ma, Di Wang, Bin Jing, Quan Wang, Yuanyuan Shi + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Janghyun Baek, Mincheol Chang, Seokha Moon, Seung Joon Lee, Jinkyu Kim - MMRAG-RFT: Two-stage Reinforcement Fine-tuning for Explainable Multi-modal Retrieval-augmented Generation - https://arxiv.org/abs/2512.17194 - arXiv:2512.17194v1 Announce Type: new -Abstract: Multi-modal Retrieval-Augmented Generation (MMRAG) enables highly credible generation by integrating external multi-modal knowledge, thus demonstrating impressive performance in complex multi-modal scenarios. However, existing MMRAG methods fail to clarify the reasoning logic behind retrieval and response generation, which limits the explainability of the results. To address this gap, we propose to introduce reinforcement learning into multi-modal retrieval-augmented generation, enhancing the reasoning capabilities of multi-modal large language models through a two-stage reinforcement fine-tuning framework to achieve explainable multi-modal retrieval-augmented generation. Specifically, in the first stage, rule-based reinforcement fine-tuning is employed to perform coarse-grained point-wise ranking of multi-modal documents, effectively filtering out those that are significantly irrelevant. In the second stage, reasoning-based reinforcement fine-tuning is utilized to jointly optimize fine-grained list-wise ranking and answer generation, guiding multi-modal large language models to output explainable reasoning logic in the MMRAG process. Our method achieves state-of-the-art results on WebQA and MultimodalQA, two benchmark datasets for multi-modal retrieval-augmented generation, and its effectiveness is validated through comprehensive ablation experiments. 
- oai:arXiv.org:2512.17194v1
 + NL2CA: Auto-formalizing Cognitive Decision-Making from Natural Language Using an Unsupervised CriticNL2LTL Framework
 + https://arxiv.org/abs/2512.18189
 + arXiv:2512.18189v1 Announce Type: new 
+Abstract: Cognitive computing models offer a formal and interpretable way to characterize humans' deliberation and decision-making, yet their development remains labor-intensive. In this paper, we propose NL2CA, a novel method for auto-formalizing cognitive decision-making rules from natural language descriptions of human experience. Different from most related work that exploits either purely manual or human-guided interactive modeling, our method is fully automated without any human intervention. The approach first translates text into Linear Temporal Logic (LTL) using a fine-tuned large language model (LLM), then refines the logic via an unsupervised Critic Tree, and finally transforms the output into executable production rules compatible with symbolic cognitive frameworks. Based on the resulting rules, a cognitive agent is further constructed and optimized through cognitive reinforcement learning according to real-world behavioral data. Our method is validated in two domains: (1) NL-to-LTL translation, where our CriticNL2LTL module achieves consistent performance across both expert and large-scale benchmarks without human-in-the-loop feedback, and (2) cognitive driving simulation, where agents automatically constructed from human interviews have successfully learned the diverse decision patterns of about 70 trials in different critical scenarios. Experimental results demonstrate that NL2CA enables scalable, interpretable, and human-aligned cognitive modeling from unstructured textual data, offering a novel paradigm to automatically design symbolic cognitive agents. 
+ oai:arXiv.org:2512.18189v1 cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + cs.SC + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shengwei Zhao, Jingwen Yao, Sitong Wei, Linhai Xu, Yuying Liu, Dong Zhang, Zhiqiang Tian, Shaoyi Du + Zihao Deng, Yijia Li, Renrui Zhang, Peijun Ye - UmniBench: Unified Understand and Generation Model Oriented Omni-dimensional Benchmark - https://arxiv.org/abs/2512.17196 - arXiv:2512.17196v1 Announce Type: new -Abstract: Unifying multimodal understanding and generation has shown impressive capabilities in cutting-edge proprietary systems. However, evaluations of unified multimodal models (UMMs) remain decoupled, assessing their understanding and generation abilities separately with corresponding datasets. To address this, we propose UmniBench, a benchmark tailored for UMMs with omni-dimensional evaluation. First, UmniBench can assess the understanding, generation, and editing ability within a single evaluation process. Based on human-examined prompts and QA pairs, UmniBench leverages UMM itself to evaluate its generation and editing ability with its understanding ability. This simple but effective paradigm allows comprehensive evaluation of UMMs. Second, UmniBench covers 13 major domains and more than 200 concepts, ensuring a thorough inspection of UMMs. Moreover, UmniBench can also decouple and separately evaluate understanding, generation, and editing abilities, providing a fine-grained assessment. Based on UmniBench, we benchmark 24 popular models, including both UMMs and single-ability large models. We hope this benchmark provides a more comprehensive and objective view of unified models and logistical support for improving the performance of the community model. 
- oai:arXiv.org:2512.17196v1 + External Hippocampus: Topological Cognitive Maps for Guiding Large Language Model Reasoning + https://arxiv.org/abs/2512.18190 + arXiv:2512.18190v1 Announce Type: new +Abstract: This paper proposes the External Hippocampus framework, which models language model reasoning from a cognitive dynamics perspective as the flow of information energy in semantic space. Unlike traditional weight-space optimization methods, this framework constructs topological cognitive maps through dimensionality reduction projection, enabling precise navigation and intervention of energy flow at test time while avoiding substantial computational requirements and demonstrating predictable intervention patterns. The method effectively addresses the cognitive deadlock problem in multi-step reasoning for small models. Experiments on models <=7B parameters show: map-guided methods achieve 81.20% accuracy on 500 challenging problems (relative baseline +16.80%), reduce reasoning time by >= 15x, with key findings revealing that reasoning stagnation manifests as "Cognitive Vortex" and low-entropy potential wells, while temperature perturbations effectively restart energy flow. The framework requires no additional training, possesses autonomous growth capability, and provides an efficient and controllable topological-aware solution for small model reasoning. + oai:arXiv.org:2512.18190v1 cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Kai Liu, Leyang Chen, Wenbo Li, Zhikai Chen, Zhixin Wang, Renjing Pei, Linghe Kong, Yulun Zhang - - - BumpNet: A Sparse Neural Network Framework for Learning PDE Solutions - https://arxiv.org/abs/2512.17198 - arXiv:2512.17198v1 Announce Type: new -Abstract: We introduce BumpNet, a sparse neural network framework for PDE numerical solution and operator learning. BumpNet is based on meshless basis function expansion, in a similar fashion to radial-basis function (RBF) networks. 
Unlike RBF networks, the basis functions in BumpNet are constructed from ordinary sigmoid activation functions. This enables the efficient use of modern training techniques optimized for such networks. All parameters of the basis functions, including shape, location, and amplitude, are fully trainable. Model parsimony and h-adaptivity are effectively achieved through dynamically pruning basis functions during training. BumpNet is a general framework that can be combined with existing neural architectures for learning PDE solutions: here, we propose Bump-PINNs (BumpNet with physics-informed neural networks) for solving general PDEs; Bump-EDNN (BumpNet with evolutionary deep neural networks) to solve time-evolution PDEs; and Bump-DeepONet (BumpNet with deep operator networks) for PDE operator learning. Bump-PINNs are trained using the same collocation-based approach used by PINNs, Bump-EDNN uses a BumpNet only in the spatial domain and uses EDNNs to advance the solution in time, while Bump-DeepONets employ a BumpNet regression network as the trunk network of a DeepONet. Extensive numerical experiments demonstrate the efficiency and accuracy of the proposed architecture.
 - oai:arXiv.org:2512.17198v1
 cs.LG
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Shao-Ting Chiu, Ioannis G. Kevrekidis, Ulisses Braga-Neto
 
 
 - Fose: Fusion of One-Step Diffusion and End-to-End Network for Pansharpening
 - https://arxiv.org/abs/2512.17202
 - arXiv:2512.17202v1 Announce Type: new
-Abstract: Pansharpening is a significant image fusion task that fuses low-resolution multispectral images (LRMSI) and high-resolution panchromatic images (PAN) to obtain high-resolution multispectral images (HRMSI). The development of the diffusion models (DM) and the end-to-end models (E2E model) has greatly improved the frontier of pansharpening. 
DM takes the multi-step diffusion to obtain an accurate estimation of the residual between LRMSI and HRMSI. However, the multi-step process requires substantial computational power and is time-consuming. As for E2E models, their performance is still limited by the lack of priors and their simple structure. In this paper, we propose a novel four-stage training strategy to obtain a lightweight network Fose, which fuses one-step DM and an E2E model. We perform one-step distillation on an enhanced SOTA DM for pansharpening to compress the inference process from 50 steps to only 1 step. Then we fuse the E2E model with one-step DM with lightweight ensemble blocks. Comprehensive experiments are conducted to demonstrate the significant improvement of the proposed Fose on three commonly used benchmarks. Moreover, we achieve a speedup ratio of 7.42 compared to the baseline DM while achieving much better performance. The code and model are released at https://github.com/Kai-Liu001/Fose.
 - oai:arXiv.org:2512.17202v1
 cs.CV
 cs.AI
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Kai Liu, Zeli Lin, Weibo Wang, Linghe Kong, Yulun Zhang
 
 
 + Multi-Part Object Representations via Graph Structures and Co-Part Discovery
 + https://arxiv.org/abs/2512.18192
 + arXiv:2512.18192v1 Announce Type: new 
+Abstract: Discovering object-centric representations from images can significantly enhance the robustness, sample efficiency and generalizability of vision models. Works on images with multi-part objects typically follow an implicit object representation approach, which fails to recognize these learned objects in occluded or out-of-distribution contexts. This is due to the assumption that object part-whole relations are implicitly encoded into the representations through indirect training objectives. We address this limitation by proposing a novel method that leverages explicit graph representations for parts, and we present a co-part object discovery algorithm. We then introduce three benchmarks to evaluate the robustness of object-centric methods in recognizing multi-part objects within occluded and out-of-distribution settings. 
Experimental results on simulated, realistic, and real-world images show marked improvements in the quality of discovered objects compared to state-of-the-art methods, as well as the accurate recognition of multi-part objects in occluded and out-of-distribution contexts. We also show that the discovered object-centric representations can more accurately predict key object properties in a downstream task, highlighting the potential of our method to advance the field of object-centric representations. + oai:arXiv.org:2512.18192v1 cs.CV - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Kai Liu, Zeli Lin, Weibo Wang, Linghe Kong, Yulun Zhang + Alex Foo, Wynne Hsu, Mong Li Lee - Learning solution operator of dynamical systems with diffusion maps kernel ridge regression - https://arxiv.org/abs/2512.17203 - arXiv:2512.17203v1 Announce Type: new -Abstract: Many scientific and engineering systems exhibit complex nonlinear dynamics that are difficult to predict accurately over long time horizons. Although data-driven models have shown promise, their performance often deteriorates when the geometric structures governing long-term behavior are unknown or poorly represented. We demonstrate that a simple kernel ridge regression (KRR) framework, when combined with a dynamics-aware validation strategy, provides a strong baseline for long-term prediction of complex dynamical systems. By employing a data-driven kernel derived from diffusion maps, the proposed Diffusion Maps Kernel Ridge Regression (DM-KRR) method implicitly adapts to the intrinsic geometry of the system's invariant set, without requiring explicit manifold reconstruction or attractor modeling, procedures that often limit predictive performance. 
Across a broad range of systems, including smooth manifolds, chaotic attractors, and high-dimensional spatiotemporal flows, DM-KRR consistently outperforms state-of-the-art random feature, neural-network and operator-learning methods in both accuracy and data efficiency. These findings underscore that long-term predictive skill depends not only on model expressiveness, but critically on respecting the geometric constraints encoded in the data through dynamically consistent model selection. Together, simplicity, geometry awareness, and strong empirical performance point to a promising path for reliable and efficient learning of complex dynamical systems. - oai:arXiv.org:2512.17203v1 - cs.LG - cs.NA - math.NA - Mon, 22 Dec 2025 00:00:00 -0500 + TraCT: Disaggregated LLM Serving with CXL Shared Memory KV Cache at Rack-Scale + https://arxiv.org/abs/2512.18194 + arXiv:2512.18194v1 Announce Type: new +Abstract: Disaggregated LLM serving improves resource efficiency by separating the compute-intensive prefill phase from the latency-critical decode phase. However, this architecture introduces a fundamental bottleneck: key/value (KV) tensors generated during prefill must be transferred to decode workers, and existing systems rely on RDMA-based network paths for this exchange. As model sizes and context lengths increase, KV transfer dominates both time-to-first-token (TTFT) and peak throughput, and remains highly sensitive to network contention even when prefix reuse is high. This paper presents TraCT, a rack-scale LLM serving system that uses CXL shared memory as both a KV-transfer substrate and a rack-wide prefix-aware KV cache. TraCT enables GPUs to write and read KV blocks directly through CXL load/store and DMA operations, eliminating the NIC hop that constrains existing disaggregated pipelines. However, to realize this design, multiple new challenges such as synchronization, consistency, and data management on non-coherent CXL memory need to be addressed. 
TraCT proposes various software solutions such as the two-tier inter-node synchronization mechanism to address these challenges. We implement TraCT on the Dynamo LLM inference framework and show that, across static and synthetic workloads, TraCT reduces average TTFT by up to 9.8x, lowers P99 latency by up to 6.2x, and improves peak throughput by up to 1.6x compared to RDMA and DRAM-based caching baselines. + oai:arXiv.org:2512.18194v1 + cs.DC + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Jiwoo Song, Daning Huang, John Harlim + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Dongha Yoon, Younghoon Min, Hoshik Kim, Sam H. Noh, Jongryool Kim - Reasoning Palette: Modulating Reasoning via Latent Contextualization for Controllable Exploration for (V)LMs - https://arxiv.org/abs/2512.17206 - arXiv:2512.17206v1 Announce Type: new -Abstract: Exploration capacity shapes both inference-time performance and reinforcement learning (RL) training for large (vision-) language models, as stochastic sampling often yields redundant reasoning paths with little high-level diversity. This paper proposes Reasoning Palette, a novel latent-modulation framework that endows the model with a stochastic latent variable for strategic contextualization, guiding its internal planning prior to token generation. This latent context is inferred from the mean-pooled embedding of a question-answer pair via a variational autoencoder (VAE), where each sampled latent potentially encodes a distinct reasoning context. During inference, a sampled latent is decoded into learnable token prefixes and prepended to the input prompt, modulating the model's internal reasoning trajectory. In this way, the model performs internal sampling over reasoning strategies prior to output generation, which shapes the style and structure of the entire response sequence. A brief supervised fine-tuning (SFT) warm-up phase allows the model to adapt to this latent conditioning. 
Within RL optimization, Reasoning Palette facilitates structured exploration by enabling on-demand injection of diverse reasoning modes, significantly enhancing exploration efficiency and sustained learning capability. Experiments across multiple reasoning benchmarks demonstrate that our method enables interpretable and controllable steering of the (vision-) language model's strategic behavior, thereby achieving consistent performance gains over standard RL methods.
 - oai:arXiv.org:2512.17206v1
 cs.CV
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Rujiao Long, Yang Li, Xingyao Zhang, Weixun Wang, Tianqianjin Lin, Xi Zhao, Yuchi Xu, Wenbo Su, Junchi Yan, Bo Zheng
 
 
 + Training LLMs with LogicReward for Faithful and Rigorous Reasoning
 + https://arxiv.org/abs/2512.18196
 + arXiv:2512.18196v1 Announce Type: new 
+Abstract: Although LLMs exhibit strong reasoning capabilities, existing training methods largely depend on outcome-based feedback, which can produce correct answers with flawed reasoning. Prior work introduces supervision on intermediate steps but still lacks guarantees of logical soundness, which is crucial in high-stakes scenarios where logical consistency is paramount. To address this, we propose LogicReward, a novel reward system that guides model training by enforcing step-level logical correctness with a theorem prover. We further introduce Autoformalization with Soft Unification, which reduces natural language ambiguity and improves formalization quality, enabling more effective use of the theorem prover. An 8B model trained on data constructed with LogicReward surpasses GPT-4o and o4-mini by 11.6% and 2% on natural language inference and logical reasoning tasks with simple training procedures. Further analysis shows that LogicReward enhances reasoning faithfulness, improves generalizability to unseen tasks such as math and commonsense reasoning, and provides a reliable reward signal even without ground-truth labels. We will release all data and code at https://llm-symbol.github.io/LogicReward. 
+ oai:arXiv.org:2512.18196v1
 cs.CL
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Jundong Xu, Hao Fei, Huichi Zhou, Xin Quan, Qijun Huang, Shengqiong Wu, William Yang Wang, Mong-Li Lee, Wynne Hsu
 
 
 - Do Foundational Audio Encoders Understand Music Structure?
 - https://arxiv.org/abs/2512.17209
 - arXiv:2512.17209v1 Announce Type: new
-Abstract: In music information retrieval (MIR) research, the use of pretrained foundational audio encoders (FAEs) has recently become a trend. FAEs pretrained on large amounts of music and audio data have been shown to improve performance on MIR tasks such as music tagging and automatic music transcription. However, their use for music structure analysis (MSA) remains underexplored. Although many open-source FAE models are available, only a small subset has been examined for MSA, and the impact of factors such as learning methods, training data, and model context length on MSA performance remains unclear. In this study, we conduct comprehensive experiments on 11 types of FAEs to investigate how these factors affect MSA performance. Our results demonstrate that FAEs using self-supervised learning with masked language modeling on music data are particularly effective for MSA. These findings pave the way for future research in MSA.
 - oai:arXiv.org:2512.17209v1
 cs.SD
 cs.LG
 eess.AS
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Keisuke Toyama, Zhi Zhong, Akira Takahashi, Shusuke Takahashi, Yuki Mitsufuji
 
 
 + PROVEX: Enhancing SOC Analyst Trust with Explainable Provenance-Based IDS
 + https://arxiv.org/abs/2512.18199
 + arXiv:2512.18199v1 Announce Type: new 
+Abstract: Modern intrusion detection systems (IDS) leverage graph neural networks (GNNs) to detect malicious activity in system provenance data, but their decisions often remain a black box to analysts. 
This paper presents a comprehensive XAI framework designed to bridge the trust gap in Security Operations Centers (SOCs) by making graph-based detection transparent. We implement this framework on top of KAIROS, a state-of-the-art temporal graph-based IDS, though our design is applicable to any temporal graph-based detector with minimal adaptation. The complete codebase is available at https://github.com/devang1304/provex.git. We augment the detection pipeline with post-hoc explanations that highlight why an alert was triggered, identifying key causal subgraphs and events. We adapt three GNN explanation methods - GraphMask, GNNExplainer, and a variational temporal GNN explainer (VA-TGExplainer) - to the temporal provenance context. These tools output human-interpretable representations of anomalous behavior, including important edges and uncertainty estimates. Our contributions focus on the practical integration of these explainers, addressing challenges in memory management and reproducibility. We demonstrate our framework on the DARPA CADETS Engagement 3 dataset and show that it produces concise window-level explanations for detected attacks. Our evaluation reveals that the explainers preserve the TGNN's decisions with high fidelity, surfacing critical edges such as malicious file interactions and anomalous netflows. The average explanation overhead is 3-5 seconds per event. By providing insight into the model's reasoning, our framework aims to improve analyst trust and triage speed. 
+ oai:arXiv.org:2512.18199v1
 cs.CR
 cs.AI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Devang Dhanuka, Nidhi Rastogi
 
 
 - Design and Research of a Self-Propelled Pipeline Robot Based on Force Analysis and Dynamic Simulation
 - https://arxiv.org/abs/2512.17212
 - arXiv:2512.17212v1 Announce Type: new
-Abstract: In pipeline inspection, traditional tethered inspection robots are severely constrained by cable length and weight, which greatly limit their travel range and accessibility. To address these issues, this paper proposes a self-propelled pipeline robot design based on force analysis and dynamic simulation, with a specific focus on solving core challenges including vertical climbing failure and poor passability in T-branch pipes. Adopting a wheeled configuration and modular design, the robot prioritizes the core demand of body motion control. Specifically, 3D modeling of the robot was first completed using SolidWorks. Subsequently, the model was imported into ADAMS for dynamic simulation, which provided a basis for optimizing the drive module and motion control strategy. To verify the robot's dynamic performance, an experimental platform with acrylic pipes was constructed. By adjusting its body posture to surmount obstacles and select directions, the robot has demonstrated its ability to stably traverse various complex pipeline scenarios. Notably, this work offers a technical feasibility reference for the application of pipeline robots in the inspection of medium and low-pressure urban gas pipelines.
 - oai:arXiv.org:2512.17212v1
 cs.RO
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Yan Gao, Jiliang Wang, Ming Cheng, Tianyun Huang
 
 
 + Sophia: A Persistent Agent Framework of Artificial Life
 + https://arxiv.org/abs/2512.18202
 + arXiv:2512.18202v1 Announce Type: new 
+Abstract: The development of LLMs has elevated AI agents from task-specific tools to long-lived, decision-making entities. 
Yet, most architectures remain static and reactive, tethered to manually defined, narrow scenarios. These systems excel at perception (System 1) and deliberation (System 2) but lack a persistent meta-layer to maintain identity, verify reasoning, and align short-term actions with long-term survival. We first propose a third stratum, System 3, that presides over the agent's narrative identity and long-horizon adaptation. The framework maps selected psychological constructs to concrete computational modules, thereby translating abstract notions of artificial life into implementable design requirements. The ideas coalesce in Sophia, a "Persistent Agent" wrapper that grafts a continuous self-improvement loop onto any LLM-centric System 1/2 stack. Sophia is driven by four synergistic mechanisms: process-supervised thought search, narrative memory, user and self modeling, and a hybrid reward system. Together, they transform repetitive reasoning into a self-driven, autobiographical process, enabling identity continuity and transparent behavioral explanations. Although the paper is primarily conceptual, we provide a compact engineering prototype to anchor the discussion. Quantitatively, Sophia independently initiates and executes various intrinsic tasks while achieving an 80% reduction in reasoning steps for recurring operations. Notably, meta-cognitive persistence yielded a 40% gain in success for high-complexity tasks, effectively bridging the performance gap between simple and sophisticated goals. Qualitatively, System 3 exhibited a coherent narrative identity and an innate capacity for task organization. By fusing psychological insight with a lightweight reinforcement-learning core, the persistent agent architecture advances a possible practical pathway toward artificial life. 
+ oai:arXiv.org:2512.18202v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Yan Gao, Jiliang Wang, Ming Cheng, Tianyun Huang + Mingyang Sun, Feng Hong, Weinan Zhang - CheXPO-v2: Preference Optimization for Chest X-ray VLMs with Knowledge Graph Consistency - https://arxiv.org/abs/2512.17213 - arXiv:2512.17213v1 Announce Type: new -Abstract: Medical Vision-Language Models (VLMs) are prone to hallucinations, compromising clinical reliability. While reinforcement learning methods like Group Relative Policy Optimization (GRPO) offer a low-cost alignment solution, their reliance on sparse, outcome-based rewards inadvertently encourages models to "overthink" -- generating verbose, convoluted, and unverifiable Chain-of-Thought reasoning to justify answers. This focus on outcomes obscures factual errors and poses significant safety risks. To address this, we propose CheXPO-v2, a novel alignment framework that shifts from outcome to process supervision. Our core innovation is a Knowledge Graph Consistency Reward mechanism driven by Entity-Relation Matching. By explicitly parsing reasoning steps into structured "Disease, Relation, Anatomy" triplets, we provide fine-grained supervision that penalizes incoherent logic and hallucinations at the atomic level. Integrating this with a hard-example mining strategy, our approach significantly outperforms GRPO and state-of-the-art models on benchmarks like MIMIC-CXR-VQA. Crucially, CheXPO-v2 achieves new state-of-the-art accuracy using only 5k samples, demonstrating exceptional data efficiency while producing clinically sound and verifiable reasoning. The project source code is publicly available at: https://github.com/ecoxial2007/CheX-Phi4MM. 
- oai:arXiv.org:2512.17213v1 - cs.CV - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Learning Dependency Models for Subset Repair + https://arxiv.org/abs/2512.18204 + arXiv:2512.18204v1 Announce Type: new +Abstract: Inconsistent values are commonly encountered in real-world applications, which can negatively impact data analysis and decision-making. While existing research primarily focuses on identifying the smallest removal set to resolve inconsistencies, recent studies have shown that multiple minimum removal sets may exist, making it difficult to make further decisions. While some approaches use the most frequent values as the guidance for the subset repair, this strategy has been criticized for its potential to inaccurately identify errors. To address these issues, we consider the dependencies between attribute values to determine a more appropriate subset repair. Our main contributions include (1) formalizing the optimal subset repair problem with attribute dependencies and analyzing its computational hardness; (2) computing the exact solution using integer linear programming; (3) developing an approximate algorithm with performance guarantees based on cliques and LP relaxation; and (4) designing a probabilistic approach with an approximation bound for efficiency. Experimental results on real-world datasets validate the effectiveness of our methods in both subset repair performance and downstream applications. 
+ oai:arXiv.org:2512.18204v1 + cs.DS + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Xiao Liang, Yuxuan An, Di Wang, Jiawei Hu, Zhicheng Jiao, Bin Jing, Quan Wang + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Haoda Li, Jiahui Chen, Yu Sun, Shaoxu Song, Haiwei Zhang, Xiaojie Yuan - Research on Dead Reckoning Algorithm for Self-Propelled Pipeline Robots in Three-Dimensional Complex Pipelines - https://arxiv.org/abs/2512.17215 - arXiv:2512.17215v1 Announce Type: new -Abstract: In the field of gas pipeline location, existing pipeline location methods mostly rely on pipeline location instruments. However, when faced with complex and curved pipeline scenarios, these methods often fail due to problems such as cable entanglement and insufficient equipment flexibility. To address this pain point, we designed a self-propelled pipeline robot. This robot can autonomously complete the location work of complex and curved pipelines in complex pipe networks without external dragging. In terms of pipeline mapping technology, traditional visual mapping and laser mapping methods are easily affected by lighting conditions and insufficient features in the confined space of pipelines, resulting in mapping drift and divergence problems. In contrast, the pipeline location method that integrates inertial navigation and wheel odometers is less affected by pipeline environmental factors. Based on this, this paper proposes a pipeline robot location method based on extended Kalman filtering (EKF). Firstly, the body attitude angle is initially obtained through an inertial measurement unit (IMU). Then, the extended Kalman filtering algorithm is used to improve the accuracy of attitude angle estimation. Finally, high-precision pipeline location is achieved by combining wheel odometers. During the testing phase, the roll wheels of the pipeline robot needed to fit tightly against the pipe wall to reduce slippage. 
However, excessive tightness would reduce the flexibility of motion control due to excessive friction. Therefore, a balance needed to be struck between the robot's motion capability and positioning accuracy. Experiments were conducted using the self-propelled pipeline robot in a rectangular loop pipeline, and the results verified the effectiveness of the proposed dead reckoning algorithm. - oai:arXiv.org:2512.17215v1 + Alternating Minimization for Time-Shifted Synergy Extraction in Human Hand Coordination + https://arxiv.org/abs/2512.18206 + arXiv:2512.18206v1 Announce Type: new +Abstract: Identifying motor synergies -- coordinated hand joint patterns activated at task-dependent time shifts -- from kinematic data is central to motor control and robotics. Existing two-stage methods first extract candidate waveforms (via SVD) and then select shifted templates using sparse optimization, requiring at least two datasets and complicating data collection. We introduce an optimization-based framework that jointly learns a small set of synergies and their sparse activation coefficients. The formulation enforces group sparsity for synergy selection and element-wise sparsity for activation timing. We develop an alternating minimization method in which coefficient updates decouple across tasks and synergy updates reduce to regularized least-squares problems. Our approach requires only a single data set, and simulations show accurate velocity reconstruction with compact, interpretable synergies. + oai:arXiv.org:2512.18206v1 cs.RO - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Yan Gao, Jiliang Wang, Minghan Wang, Xiaohua Chen, Demin Chen, Zhiyong Ren, Tian-Yun Huang - - - LZ78 Substring Compression in Compressed Space - https://arxiv.org/abs/2512.17217 - arXiv:2512.17217v1 Announce Type: new -Abstract: The Lempel--Ziv 78 (LZ78) factorization is a well-studied technique for data compression. 
It and its derivatives are used in compression formats such as "compress" or "gif". Although most research focuses on the factorization of plain data, not much research has been conducted on indexing the data for fast LZ78 factorization. Here, we study the LZ78 factorization and its derivatives in the substring compression model, where we are allowed to index the data and return the factorization of a substring specified at query time. In that model, we propose an algorithm that works in compressed space, computing the factorization with a logarithmic slowdown compared to the optimal time complexity. - oai:arXiv.org:2512.17217v1 - cs.DS - Mon, 22 Dec 2025 00:00:00 -0500 + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - 10.1007/s00224-025-10245-8 - Theory of Computing Systems, Volume 70, Article 1, 2026 - Hiroki Shibata, Dominik K\"oppl + Trevor Stepp, Parthan Olikkal, Ramana Vinjamuri, Rajasekhar Anguluri - The Role of Islamic Ethics in Preventing the Abuse of Artificial Intelligence (AI) Based Deepfakes - https://arxiv.org/abs/2512.17218 - arXiv:2512.17218v1 Announce Type: new -Abstract: The significant development of deepfake technology powered by artificial intelligence (AI) has sparked worldwide concerns about the alteration of false information, the usurpation of online identities, and the decline of public confidence in the authenticity of online content. These incidents not only raise technical issues but also carry complex moral implications, rendering conventional, technologically driven, and reactive management methods inadequate to address the underlying causes of the problem, including intent, morality, and potential intangible social impacts. Based on these issues, this study aims to formulate a comprehensive Islamic ethical framework that can serve as a more comprehensive preventative tool to mitigate the risks of misuse of deepfakes. 
The study employed a Systematic Literature Review (SLR) guided by PRISMA, selecting ten primary sources published between 2018 and 2025 to identify ethical deficiencies, regulatory needs, and appropriate normative solutions. The analysis shows that the integration of the principles of (Maqasid al-Shariah) particularly (hifz al-ird) protecting honor and (hifz al-nafs) protecting the self, provides a strong normative basis for regulating the responsible use of technology. This study yields three strategic recommendations: regulatory changes that recognize the intangible and psychological harm caused by reputational damage; improved technology management through moral scrutiny that upholds the values of justice (adl), trust, and openness; and increased public digital literacy based on the principle of (tabayyun) examination and caution. Overall, this study concludes that the application of Islamic ethics offers a shift in thinking from punitive mechanisms to preventative approaches that focus on protecting human dignity, preventing harm, and strengthening the common good in the digital age. - oai:arXiv.org:2512.17218v1 - cs.CY - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + FedWiLoc: Federated Learning for Privacy-Preserving WiFi Indoor Localization + https://arxiv.org/abs/2512.18207 + arXiv:2512.18207v1 Announce Type: new +Abstract: Current data-driven Wi-Fi-based indoor localization systems face three critical challenges: protecting user privacy, achieving accurate predictions in dynamic multipath environments, and generalizing across different deployments. Traditional Wi-Fi localization systems often compromise user privacy, particularly when facing compromised access points (APs) or man-in-the-middle attacks. As IoT devices proliferate in indoor environments, developing solutions that deliver accurate localization while robustly protecting privacy has become imperative. 
We introduce FedWiLoc, a privacy-preserving indoor localization system that addresses these challenges through three key innovations. First, FedWiLoc employs a split architecture where APs process Channel State Information (CSI) locally and transmit only privacy-preserving embedding vectors to user devices, preventing raw CSI exposure. Second, during training, FedWiLoc uses federated learning to collaboratively train the model across APs without centralizing sensitive user data. Third, we introduce a geometric loss function that jointly optimizes angle-of-arrival predictions and location estimates, enforcing geometric consistency to improve accuracy in challenging multipath conditions. Extensive evaluation across six diverse indoor environments spanning over 2,000 sq. ft. demonstrates that FedWiLoc outperforms state-of-the-art methods by up to 61.9% in median localization error while maintaining strong privacy guarantees throughout both training and inference. + oai:arXiv.org:2512.18207v1 + cs.CR + cs.SY + eess.SP + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wisnu Uriawan, Imany Fauzy Rahman, Muhamad Zidan, Irma Rohmatillah, Muhammad Arkan Raihan, Irma Dwiyanti - - - Mindscape-Aware Retrieval Augmented Generation for Improved Long Context Understanding - https://arxiv.org/abs/2512.17220 - arXiv:2512.17220v1 Announce Type: new -Abstract: Humans understand long and complex texts by relying on a holistic semantic representation of the content. This global view helps organize prior knowledge, interpret new information, and integrate evidence dispersed across a document, as revealed by the Mindscape-Aware Capability of humans in psychology. Current Retrieval-Augmented Generation (RAG) systems lack such guidance and therefore struggle with long-context tasks. In this paper, we propose Mindscape-Aware RAG (MiA-RAG), the first approach that equips LLM-based RAG systems with explicit global context awareness. 
MiA-RAG builds a mindscape through hierarchical summarization and conditions both retrieval and generation on this global semantic representation. This enables the retriever to form enriched query embeddings and the generator to reason over retrieved evidence within a coherent global context. We evaluate MiA-RAG across diverse long-context and bilingual benchmarks for evidence-based understanding and global sense-making. It consistently surpasses baselines, and further analysis shows that it aligns local details with a coherent global representation, enabling more human-like long-context retrieval and reasoning. - oai:arXiv.org:2512.17220v1 - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Yuqing Li, Jiangnan Li, Zheng Lin, Ziyan Zhou, Junjie Wu, Weiping Wang, Jie Zhou, Mo Yu + Kanishka Roy, Tahsin Fuad Hasan, Chenfeng Wu, Eshwar Vangala, Roshan Ayyalasomayajula - DAVE: A VLM Vision Encoder for Document Understanding and Web Agents - https://arxiv.org/abs/2512.17221 - arXiv:2512.17221v1 Announce Type: new -Abstract: While Vision-language models (VLMs) have demonstrated remarkable performance across multi-modal tasks, their choice of vision encoders presents a fundamental weakness: their low-level features lack the robust structural and spatial information essential for document understanding and web agents. To bridge this gap, we introduce DAVE, a vision encoder purpose-built for VLMs and tailored for these tasks. Our training pipeline is designed to leverage abundant unlabeled data to bypass the need for costly large-scale annotations for document and web images. We begin with a self-supervised pretraining stage on unlabeled images, followed by a supervised autoregressive pretraining stage, where the model learns tasks like parsing and localization from limited, high-quality data. 
Within the supervised stage, we adopt two strategies to improve our encoder's alignment with both general visual knowledge and diverse document and web agentic tasks: (i) We introduce a novel model-merging scheme, combining encoders trained with different text decoders to ensure broad compatibility with different web agentic architectures. (ii) We use ensemble training to fuse features from pretrained generalist encoders (e.g., SigLIP2) with our own document and web-specific representations. Extensive experiments on classic document tasks, VQAs, web localization, and agent-based benchmarks validate the effectiveness of our approach, establishing DAVE as a strong vision encoder for document and web applications. - oai:arXiv.org:2512.17221v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + A Singularity Guided Nystr\"om Method for Elastostatics on Two Dimensional Domains with Corners + https://arxiv.org/abs/2512.18208 + arXiv:2512.18208v1 Announce Type: new +Abstract: We develop a comprehensive analytical and numerical framework for boundary integral equations (BIEs) of the 2D Lam\'e system on cornered domains. By applying local Mellin analysis on a wedge, we obtain a factorizable characteristic equation for the singular exponents of the boundary densities, and clarify their dependence on boundary conditions. The Fredholm well-posedness of the BIEs on cornered domains is proved in weighted Sobolev spaces. We further construct an explicit density-to-Taylor mapping for the BIE and show its invertibility for all but a countable set of angles. Based on these analytical results, we propose a singularity guided Nystr\"om (SGN) scheme for the numerical solution of BIEs on cornered domains. The SGN uses the computed corner exponents and a Legendre-tail indicator to drive panel refinement. An error analysis that combines this refinement strategy with an exponentially accurate far-field quadrature rule is provided. 
Numerical experiments across various cornered geometries demonstrate that SGN obtains higher order accuracy than the uniform Nystr\"om method and reveal a crowding-limited regime for domains with re-entrant angles.
+ oai:arXiv.org:2512.18208v1
math.NA
cs.NA
Tue, 23 Dec 2025 00:00:00 -0500
new
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Baoling Xie, Jun Lai

- Any-Optical-Model: A Universal Foundation Model for Optical Remote Sensing
- https://arxiv.org/abs/2512.17224
- arXiv:2512.17224v1 Announce Type: new
-Abstract: Optical satellites, with their diverse band layouts and ground sampling distances, supply indispensable evidence for tasks ranging from ecosystem surveillance to emergency response. However, significant discrepancies in band composition and spatial resolution across different optical sensors present major challenges for existing Remote Sensing Foundation Models (RSFMs). These models are typically pretrained on fixed band configurations and resolutions, making them vulnerable to real world scenarios involving missing bands, cross sensor fusion, and unseen spatial scales, thereby limiting their generalization and practical deployment. To address these limitations, we propose Any Optical Model (AOM), a universal RSFM explicitly designed to accommodate arbitrary band compositions, sensor types, and resolution scales. To preserve distinctive spectral characteristics even when bands are missing or newly introduced, AOM introduces a spectrum-independent tokenizer that assigns each channel a dedicated band embedding, enabling explicit encoding of spectral identity. To effectively capture texture and contextual patterns from sub-meter to hundred-meter imagery, we design a multi-scale adaptive patch embedding mechanism that dynamically modulates the receptive field. 
Furthermore, to maintain global semantic consistency across varying resolutions, AOM incorporates a multi-scale semantic alignment mechanism alongside a channel-wise self-supervised masking and reconstruction pretraining strategy that jointly models spectral-spatial relationships. Extensive experiments on over 10 public datasets, including those from Sentinel-2, Landsat, and HLS, demonstrate that AOM consistently achieves state-of-the-art (SOTA) performance under challenging conditions such as band missing, cross sensor, and cross resolution settings. - oai:arXiv.org:2512.17224v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + When Does Learning Renormalize? Sufficient Conditions for Power Law Spectral Dynamics + https://arxiv.org/abs/2512.18209 + arXiv:2512.18209v1 Announce Type: new +Abstract: Empirical power--law scaling has been widely observed across modern deep learning systems, yet its theoretical origins and scope of validity remain incompletely understood. The Generalized Resolution--Shell Dynamics (GRSD) framework models learning as spectral energy transport across logarithmic resolution shells, providing a coarse--grained dynamical description of training. Within GRSD, power--law scaling corresponds to a particularly simple renormalized shell dynamics; however, such behavior is not automatic and requires additional structural properties of the learning process. + In this work, we identify a set of sufficient conditions under which the GRSD shell dynamics admits a renormalizable coarse--grained description. These conditions constrain the learning configuration at multiple levels, including boundedness of gradient propagation in the computation graph, weak functional incoherence at initialization, controlled Jacobian evolution along training, and log--shift invariance of renormalized shell couplings. 
We further show that power--law scaling does not follow from renormalizability alone, but instead arises as a rigidity consequence: once log--shift invariance is combined with the intrinsic time--rescaling covariance of gradient flow, the renormalized GRSD velocity field is forced into a power--law form. + oai:arXiv.org:2512.18209v1 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xuyang Li, Chenyu Li, Danfeng Hong - - - Robust Scene Coordinate Regression via Geometrically-Consistent Global Descriptors - https://arxiv.org/abs/2512.17226 - arXiv:2512.17226v1 Announce Type: new -Abstract: Recent learning-based visual localization methods use global descriptors to disambiguate visually similar places, but existing approaches often derive these descriptors from geometric cues alone (e.g., covisibility graphs), limiting their discriminative power and reducing robustness in the presence of noisy geometric constraints. We propose an aggregator module that learns global descriptors consistent with both geometrical structure and visual similarity, ensuring that images are close in descriptor space only when they are visually similar and spatially connected. This corrects erroneous associations caused by unreliable overlap scores. Using a batch-mining strategy based solely on the overlap scores and a modified contrastive loss, our method trains without manual place labels and generalizes across diverse environments. Experiments on challenging benchmarks show substantial localization gains in large-scale environments while preserving computational and memory efficiency. Code is available at \href{https://github.com/sontung/robust\_scr}{github.com/sontung/robust\_scr}. 
- oai:arXiv.org:2512.17226v1
- cs.CV
- Mon, 22 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
- Son Tung Nguyen, Tobias Fischer, Alejandro Fontan, Michael Milford

- Learning When to Look: A Disentangled Curriculum for Strategic Perception in Multimodal Reasoning
- https://arxiv.org/abs/2512.17227
- arXiv:2512.17227v1 Announce Type: new
-Abstract: Multimodal Large Language Models (MLLMs) demonstrate significant potential but remain brittle in complex, long-chain visual reasoning tasks. A critical failure mode is "visual forgetting", where models progressively lose visual grounding as reasoning extends, a phenomenon aptly described as "think longer, see less". We posit this failure stems from current training paradigms prematurely entangling two distinct cognitive skills: (1) abstract logical reasoning ("how-to-think") and (2) strategic visual perception ("when-to-look"). This creates a foundational cold-start deficiency -- weakening abstract reasoning -- and a strategic perception deficit, as models lack a policy for when to perceive. In this paper, we propose a novel curriculum-based framework to disentangle these skills. First, we introduce a disentangled Supervised Fine-Tuning (SFT) curriculum that builds a robust abstract reasoning backbone on text-only data before anchoring it to vision with a novel Perception-Grounded Chain-of-Thought (PG-CoT) paradigm. Second, we resolve the strategic perception deficit by formulating timing as a reinforcement learning problem. We design a Pivotal Perception Reward that teaches the model when to look by coupling perceptual actions to linguistic markers of cognitive uncertainty (e.g., "wait", "verify"), thereby learning an autonomous grounding policy. Our contributions include the formalization of these two deficiencies and the development of a principled, two-stage framework to address them, transforming the model from a heuristic-driven observer to a strategic, grounded reasoner. 
\textbf{Code}: \url{https://github.com/gaozilve-max/learning-when-to-look}. - oai:arXiv.org:2512.17227v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + A Data-Centric Approach to Generalizable Speech Deepfake Detection + https://arxiv.org/abs/2512.18210 + arXiv:2512.18210v1 Announce Type: new +Abstract: Achieving robust generalization in speech deepfake detection (SDD) remains a primary challenge, as models often fail to detect unseen forgery methods. While research has focused on model-centric and algorithm-centric solutions, the impact of data composition is often underexplored. This paper proposes a data-centric approach, analyzing the SDD data landscape from two practical perspectives: constructing a single dataset and aggregating multiple datasets. To address the first perspective, we conduct a large-scale empirical study to characterize the data scaling laws for SDD, quantifying the impact of source and generator diversity. To address the second, we propose the Diversity-Optimized Sampling Strategy (DOSS), a principled framework for mixing heterogeneous data with two implementations: DOSS-Select (pruning) and DOSS-Weight (re-weighting). Our experiments show that DOSS-Select outperforms the naive aggregation baseline while using only 3% of the total available data. Furthermore, our final model, trained on a 12k-hour curated data pool using the optimal DOSS-Weight strategy, achieves state-of-the-art performance, outperforming large-scale baselines with greater data and model efficiency on both public benchmarks and a new challenge set of various commercial APIs. 
+ oai:arXiv.org:2512.18210v1 + cs.SD + eess.SP + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Siqi Yang, Zilve Gao, Haibo Qiu, Fanfan Liu, Peng Shi, Zhixiong Zeng, Qingmin Liao, Lin Ma - - - LUMIA: A Handheld Vision-to-Music System for Real-Time, Embodied Composition - https://arxiv.org/abs/2512.17228 - arXiv:2512.17228v1 Announce Type: new -Abstract: Most digital music tools emphasize precision and control, but often lack support for tactile, improvisational workflows grounded in environmental interaction. Lumia addresses this by enabling users to "compose through looking"--transforming visual scenes into musical phrases using a handheld, camera-based interface and large multimodal models. A vision-language model (GPT-4V) analyzes captured imagery to generate structured prompts, which, combined with user-selected instrumentation, guide a text-to-music pipeline (Stable Audio). This real-time process allows users to frame, capture, and layer audio interactively, producing loopable musical segments through embodied interaction. The system supports a co-creative workflow where human intent and model inference shape the musical outcome. By embedding generative AI within a physical device, Lumia bridges perception and composition, introducing a new modality for creative exploration that merges vision, language, and sound. It repositions generative music not as a task of parameter tuning, but as an improvisational practice driven by contextual, sensory engagement. 
- oai:arXiv.org:2512.17228v1
- cs.HC
- Mon, 22 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
- Chung-Ta Huang, Connie Cheng, Vealy Lai

- Video Detective: Seek Critical Clues Recurrently to Answer Question from Long Videos
- https://arxiv.org/abs/2512.17229
- arXiv:2512.17229v1 Announce Type: new
-Abstract: Long Video Question-Answering (LVQA) presents a significant challenge for Multi-modal Large Language Models (MLLMs) due to immense context and overloaded information, which could also lead to prohibitive memory consumption. While existing methods attempt to address these issues by reducing visual tokens or extending the model's context length, they may miss useful information or take considerable computation. In fact, when answering given questions, only a small amount of crucial information is required. Therefore, we propose an efficient question-aware memory mechanism, enabling MLLMs to recurrently seek these critical clues. Our approach, named VideoDetective, simplifies this task by iteratively processing video sub-segments. For each sub-segment, a question-aware compression strategy is employed by introducing a few special memory tokens to achieve purposeful compression. This allows models to effectively seek critical clues while reducing visual tokens. Then, because history context can have a significant impact, we recurrently aggregate and store these memory tokens to update history context, which would be reused for subsequent sub-segments. Furthermore, to more effectively measure a model's long video understanding ability, we introduce GLVC (Grounding Long Video Clues), a long video question-answering dataset, which features grounding critical and concrete clues scattered throughout entire videos. 
Experimental results demonstrate our method enables MLLMs with a limited context length of 32K to efficiently process 100K tokens (3600 frames, an hour-long video sampled at 1fps), requiring only 2 minutes and 37GB of GPU memory. Evaluation results across multiple long video benchmarks illustrate our method can more effectively seek critical clues from massive information.
- oai:arXiv.org:2512.17229v1
- cs.CV
- Mon, 22 Dec 2025 00:00:00 -0500
new
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Henghui Du, Chang Zhou, Chunjie Zhang, Xi Chen, Di Hu

+ LLaViDA: A Large Language Vision Driving Assistant for Explicit Reasoning and Enhanced Trajectory Planning
+ https://arxiv.org/abs/2512.18211
+ arXiv:2512.18211v1 Announce Type: new
+Abstract: Trajectory planning is a fundamental yet challenging component of autonomous driving. End-to-end planners frequently falter under adverse weather, unpredictable human behavior, or complex road layouts, primarily because they lack strong generalization or few-shot capabilities beyond their training data. We propose LLaViDA, a Large Language Vision Driving Assistant that leverages a Vision-Language Model (VLM) for object motion prediction, semantic grounding, and chain-of-thought reasoning for trajectory planning in autonomous driving. A two-stage training pipeline--supervised fine-tuning followed by Trajectory Preference Optimization (TPO)--enhances scene understanding and trajectory planning by injecting regression-based supervision, producing a powerful "VLM Trajectory Planner for Autonomous Driving." On the NuScenes benchmark, LLaViDA surpasses state-of-the-art end-to-end and other recent VLM/LLM-based baselines in the open-loop trajectory planning task, achieving an average L2 trajectory error of 0.31 m and a collision rate of 0.10% on the NuScenes test set. The code for this paper is available at GitHub. 
+ oai:arXiv.org:2512.18211v1 + cs.RO + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Henghui Du, Chang Zhou, Chunjie Zhang, Xi Chen, Di Hu + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Yudong Liu, Spencer Hallyburton, Jiwoo Kim, Yueqian Lin, Yiming Li, Qinsi Wang, Hui Ye, Jingwei Sun, Miroslav Pajic, Yiran Chen, Hai Li - Fairly Dividing Non-identical Random Items: Just Sample or Match - https://arxiv.org/abs/2512.17238 - arXiv:2512.17238v1 Announce Type: new -Abstract: We study the question of existence and fast computation of fair and efficient allocations of indivisible resources among agents with additive valuations. As such allocations may not exist for arbitrary instances, we ask if they exist for \textit{typical} or \textit{random} instances, meaning when the utility values of agents for the resources are drawn from certain distributions. If such allocations exist with high probability for typical instances, and furthermore if they can be computed efficiently, this would imply that we could quickly resolve a real world resource allocation scenario in a fair and efficient manner with high probability. This implication has made this setting popular and well studied in fair resource allocation. - In this paper, we extend the previously studied formal models of this problem to non-identical items. We assume that every item is associated with a distribution $\mathcal{U}_j$, and every agent's utility value for the item is drawn independently from $\mathcal{U}_j$. We show that envy-free fair and maximum social welfare efficient allocations exist with high probability in the asymptotic setting, meaning when the number of agents $n$ and items $m$ are large. Further we show that when $m=O(n\log n),$ then by only sampling $O(\log m)$ or $O((\log m)^2)$ utility values per item instead of all the $n,$ we can compute these allocations in $\tilde{O}(m)$ time. 
Finally, we simulate our algorithms on randomly generated instances and show that even for small instances, we suffer only small multiplicative losses in the fairness and efficiency guarantees, and converge to fully optimal guarantees quickly. - oai:arXiv.org:2512.17238v1 - cs.GT - Mon, 22 Dec 2025 00:00:00 -0500 + Fractional-order Modeling for Nonlinear Soft Actuators via Particle Swarm Optimization + https://arxiv.org/abs/2512.18213 + arXiv:2512.18213v1 Announce Type: new +Abstract: Modeling soft pneumatic actuators with high precision remains a fundamental challenge due to their highly nonlinear and compliant characteristics. This paper proposes an innovative modeling framework based on fractional-order differential equations (FODEs) to accurately capture the dynamic behavior of soft materials. The unknown parameters within the fractional-order model are identified using particle swarm optimization (PSO), enabling parameter estimation directly from experimental data without reliance on pre-established material databases or empirical constitutive laws. The proposed approach effectively represents the complex deformation phenomena inherent in soft actuators. Experimental results validate the accuracy and robustness of the developed model, demonstrating improved predictive performance compared to conventional modeling techniques. The presented framework provides a data-efficient and database-independent solution for soft actuator modeling, advancing the precision and adaptability of soft robotic system design. 
+ oai:arXiv.org:2512.18213v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Aprup Kale, Rucha Kulkarni, Navya Garg + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Wu-Te Yang, Masayoshi Tomizuka - Privacy-Preserving Synthetic Dataset of Individual Daily Trajectories for City-Scale Mobility Analytics - https://arxiv.org/abs/2512.17239 - arXiv:2512.17239v1 Announce Type: new -Abstract: Urban mobility data are indispensable for urban planning, transportation demand forecasting, pandemic modeling, and many other applications; however, individual mobile phone-derived Global Positioning System traces cannot generally be shared with third parties owing to severe re-identification risks. Aggregated records, such as origin-destination (OD) matrices, offer partial insights but fail to capture the key behavioral properties of daily human movement, limiting realistic city-scale analyses. - This study presents a privacy-preserving synthetic mobility dataset that reconstructs daily trajectories from aggregated inputs. The proposed method integrates OD flows with two complementary behavioral constraints: (1) dwell-travel time quantiles that are available only as coarse summary statistics and (2) the universal law for the daily distribution of the number of visited locations. Embedding these elements in a multi-objective optimization framework enables the reproduction of realistic distributions of human mobility while ensuring that no personal identifiers are required. - The proposed framework is validated in two contrasting regions of Japan: (1) the 23 special wards of Tokyo, representing a dense metropolitan environment; and (2) Fukuoka Prefecture, where urban and suburban mobility patterns coexist. The resulting synthetic mobility data reproduce dwell-travel time and visit frequency distributions with high fidelity, while deviations in OD consistency remain within the natural range of daily fluctuations. 
- The results of this study establish a practical synthesis pathway under real-world constraints, providing governments, urban planners, and industries with scalable access to high-resolution mobility data for reliable analytics without the need for sensitive personal records, and supporting practical deployments in policy and commercial domains. - oai:arXiv.org:2512.17239v1 - cs.SI + Stable and Efficient Single-Rollout RL for Multimodal Reasoning + https://arxiv.org/abs/2512.18215 + arXiv:2512.18215v1 Announce Type: new +Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) has become a key paradigm to improve the reasoning capabilities of Multimodal Large Language Models (MLLMs). However, prevalent group-based algorithms such as GRPO require multi-rollout sampling for each prompt. While more efficient single-rollout variants have recently been explored in text-only settings, we find that they suffer from severe instability in multimodal contexts, often leading to training collapse. To address this training efficiency-stability trade-off, we introduce $\textbf{MSSR}$ (Multimodal Stabilized Single-Rollout), a group-free RLVR framework that achieves both stable optimization and effective multimodal reasoning performance. MSSR achieves this via an entropy-based advantage-shaping mechanism that adaptively regularizes advantage magnitudes, preventing collapse and maintaining training stability. While such mechanisms have been used in group-based RLVR, we show that in the multimodal single-rollout setting they are not merely beneficial but essential for stability. In in-distribution evaluations, MSSR demonstrates superior training compute efficiency, achieving similar validation accuracy to the group-based baseline with half the training steps. When trained for the same number of steps, MSSR's performance surpasses the group-based baseline and shows consistent generalization improvements across five diverse reasoning-intensive benchmarks. 
Together, these results demonstrate that MSSR enables stable, compute-efficient, and effective RLVR for complex multimodal reasoning tasks. + oai:arXiv.org:2512.18215v1 + cs.LG cs.AI - cs.CY - Mon, 22 Dec 2025 00:00:00 -0500 + cs.CL + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jun'ichi Ozaki, Ryosuke Susuta, Takuhiro Moriyama, Yohei Shida + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Rui Liu, Dian Yu, Lei Ke, Haolin Liu, Yujun Zhou, Zhenwen Liang, Haitao Mi, Pratap Tokekar, Dong Yu - A Service Robot's Guide to Interacting with Busy Customers - https://arxiv.org/abs/2512.17241 - arXiv:2512.17241v1 Announce Type: new -Abstract: The growing use of service robots in hospitality highlights the need to understand how to effectively communicate with pre-occupied customers. This study investigates the efficacy of commonly used communication modalities by service robots, namely, acoustic/speech, visual display, and micromotion gestures in capturing attention and communicating intention with a user in a simulated restaurant scenario. We conducted a two-part user study (N=24) using a Temi robot to simulate delivery tasks, with participants engaged in a typing game (MonkeyType) to emulate a state of busyness. The participants' engagement in the typing game is measured by words per minute (WPM) and typing accuracy. In Part 1, we compared non-verbal acoustic cue versus baseline conditions to assess attention capture during a single-cup delivery task. In Part 2, we evaluated the effectiveness of speech, visual display, micromotion and their multimodal combination in conveying specific intentions (correct cup selection) during a two-cup delivery task. The results indicate that, while speech is highly effective in capturing attention, it is less successful in clearly communicating intention. 
Participants rated visual as the most effective modality for intention clarity, followed by speech, with micromotion being the lowest ranked. These findings provide insights into optimizing communication strategies for service robots, highlighting the distinct roles of attention capture and intention communication in enhancing user experience in dynamic hospitality settings. - oai:arXiv.org:2512.17241v1 - cs.RO - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Proceedings of the 2025 Australasian Conference on Robotics and Automation (ACRA 2025) Suraj Nukala, Meera Sushma, Leimin Tian, Akansel Cosgun, Dana Kulic + Unsupervised Anomaly Detection with an Enhanced Teacher for Student-Teacher Feature Pyramid Matching + https://arxiv.org/abs/2512.18219 + arXiv:2512.18219v1 Announce Type: new +Abstract: Anomaly detection, or outlier detection, is one of the challenging subjects in unsupervised learning. This paper introduces a student-teacher framework for anomaly detection whose teacher network is enhanced to achieve high-performance metrics. For this purpose, we first pre-train the ResNet-18 network on ImageNet and then fine-tune it on the MVTec AD dataset. Experimental results at the image and pixel levels demonstrate that this idea achieves better metrics than previous methods. Our model, Enhanced Teacher for Student-Teacher Feature Pyramid (ET-STPM), achieved 0.971 mean accuracy at the image level and 0.977 mean accuracy at the pixel level for anomaly detection. 
+ oai:arXiv.org:2512.18219v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Proceedings of the 2025 Australasian Conference on Robotics and Automation (ACRA 2025) - Suraj Nukala, Meera Sushma, Leimin Tian, Akansel Cosgun, Dana Kulic + 10.1109/CSICC55295.2022.9780522 + Mohammad Zolfaghari, Hedieh Sajedi - Cooperative Energy Scheduling of Multi-Microgrids Based on Risk-Sensitive Reinforcement Learning - https://arxiv.org/abs/2512.17246 - arXiv:2512.17246v1 Announce Type: new -Abstract: With the rapid development of distributed renewable energy, multi-microgrids play an increasingly important role in improving the flexibility and reliability of energy supply. Reinforcement learning has shown great potential in coordination strategies due to its model-free nature. Current methods lack explicit quantification of the relationship between individual and joint risk values, resulting in obscured credit assignment. Moreover, they often depend on explicit communication, which becomes inefficient as system complexity grows. To address these challenges, this paper proposes a risk-sensitive reinforcement learning framework with shared memory (RRL-SM) for multi-microgrid scheduling. Specifically, a risk-sensitive value factorization scheme is proposed to quantify the relationship between individual and joint risk values by leveraging distributional modeling and attention-based representations, thereby aligning local decisions with global risk objectives. An implicit shared-memory coordination mechanism is implemented through a global memory space to enhance the overall efficiency of decentralized decision-making. Collectively, the integrated approach delivers more reliable cooperative scheduling under renewable energy uncertainty. Simulation results show that RRL-SM reduces load-shedding risk by 84.5%, demonstrating a favorable balance between reliability and economic performance. 
- oai:arXiv.org:2512.17246v1 eess.SY cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 + Regularized Distributed MPC for UAV Networks: Stabilizing Coupled Motion and Hybrid Beam Alignment + https://arxiv.org/abs/2512.18222 + arXiv:2512.18222v1 Announce Type: new +Abstract: This letter investigates the coupled control problem in UAV networks utilizing high-frequency hybrid beamsteering. While phased arrays enable rapid electronic scanning, their finite Field of View (FoV) imposes a fundamental constraint that necessitates active mechanical steering of the airframe to maintain connectivity. We propose a decentralized Model Predictive Control (MPC) framework that jointly optimizes trajectory and heading to maximize network sum-capacity subject to safety constraints. Addressing the numerical instability caused by fast-fading channel nulls, we introduce a regularized surrogate cost function based on discrete spatial smoothing. We analytically prove that this approximation bounds the cost curvature, restoring the Lipschitz continuity of the gradient. Crucially, we derive a sufficient condition linking this Lipschitz constant to the controller gain, guaranteeing the contraction and linear convergence of the distributed best-response dynamics. Simulation results demonstrate that the proposed algorithm effectively navigates the trade-off between electronic beam tracking and kinematic safety, significantly outperforming velocity-aligned baselines. + oai:arXiv.org:2512.18222v1 eess.SY cs.SY eess.SP Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Rongxiang Zhang, Bo Li, Jinghua Li, Yuguang Song, Ziqing Zhu, Wentao Yang, Zhengmao Li, Edris Pouresmaeil, Joshua Y. 
Kim + http://creativecommons.org/licenses/by/4.0/ + Evangelos Vlachos - Incorporating Error Level Noise Embedding for Improving LLM-Assisted Robustness in Persian Speech Recognition - https://arxiv.org/abs/2512.17247 - arXiv:2512.17247v1 Announce Type: new -Abstract: Automatic Speech Recognition (ASR) systems suffer significant performance degradation in noisy environments, a challenge that is especially severe for low-resource languages such as Persian. Even state-of-the-art models such as Whisper struggle to maintain accuracy under varying signal-to-noise ratios (SNRs). This study presents a robust noise-sensitive ASR error correction framework that combines multiple hypotheses and noise-aware modeling. Using noisy Persian speech, we generate 5-best hypotheses from a modified Whisper-large decoder. Error Level Noise (ELN) is introduced as a representation that captures semantic- and token-level disagreement across hypotheses, quantifying the linguistic distortions caused by noise. ELN thus provides a direct measure of noise-induced uncertainty, enabling the LLM to reason about the reliability of each hypothesis during correction. Three models are evaluated: (1) a base LLaMA-2-7B model without fine-tuning, (2) a fine-tuned variant trained on text-only hypotheses, and (3) a noise-conditioned model integrating ELN embeddings at both sentence and word levels. Experimental results demonstrate that the ELN-conditioned model achieves substantial reductions in Word Error Rate (WER). Specifically, on the challenging Mixed Noise test set, the proposed Fine-tuned + ELN (Ours) model reduces the WER from a baseline of 31.10\% (Raw Whisper) to 24.84\%, significantly surpassing the Fine-tuned (No ELN) text-only baseline of 30.79\%, whereas the original LLaMA-2-7B model increased the WER to 64.58\%, demonstrating that it is unable to correct Persian errors on its own. 
This confirms the effectiveness of combining multiple hypotheses with noise-aware embeddings for robust Persian ASR in noisy real-world scenarios. - oai:arXiv.org:2512.17247v1 - cs.CL - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Finding Cliques in Geometric Intersection Graphs with Grounded or Stabbed Constraints + https://arxiv.org/abs/2512.18223 + arXiv:2512.18223v1 Announce Type: new +Abstract: A geometric intersection graph is constructed over a set of geometric objects, where each vertex represents a distinct object and an edge connects two vertices if and only if the corresponding objects intersect. We examine the problem of finding a maximum clique in the intersection graphs of segments and disks under grounded and stabbed constraints. In the grounded setting, all objects lie above a common horizontal line and touch that line. In the stabbed setting, all objects can be stabbed with a common line. + - We prove that finding a maximum clique is NP-hard for the intersection graphs of upward rays. This strengthens the previously known NP-hardness for ray graphs and settles the open question for the grounded segment graphs. The hardness result holds in the stabbed setting. + - We show that the problem is polynomial-time solvable for intersection graphs of grounded unit-length segments, but NP-hard for stabbed unit-length segments. + - We give a polynomial-time algorithm for the case of grounded disks. If the grounded constraint is relaxed, then we give an $O(n^3 f(n))$-time $3/2$-approximation for disk intersection graphs with radii in the interval $[1,3]$, where $n$ is the number of disks and $f(n)$ is the time to compute a maximum clique in an $n$-vertex cobipartite graph. This is faster than previously known randomized EPTAS, QPTAS, or 2-approximation algorithms for arbitrary disks. 
We obtain our result by proving that pairwise intersecting disks with radii in $[1,3]$ are 3-pierceable, which extends the 3-pierceable property from the long known unit disk case to a broader class. + oai:arXiv.org:2512.18223v1 + cs.CG + cs.DM + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Zahra Rahmani (Department of Computer Engineering, Sharif University of Technology), Hossein Sameti (Department of Computer Engineering, Sharif University of Technology) + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + J. Mark Keil, Debajyoti Mondal - Uncertainty-Aware 3D UAV Tracking Using Single-Anchor UWB Measurements - https://arxiv.org/abs/2512.17249 - arXiv:2512.17249v1 Announce Type: new -Abstract: In this letter, we present an uncertainty-aware single-anchor Ultra-Wideband (UWB)-based 3D tracking framework. Specifically, a mobile Unmanned Aerial Vehicle (UAV) maintains a desired standoff distance to a moving target using range and 3D bearing measurements from a multi-antenna UWB anchor rigidly mounted on the UAV. To enhance the stability and safety under measurement degradation and motion uncertainty, we jointly design a robust factor-graph-based target localization method and a covariance-aware control Lyapunov function--control barrier function (CLF--CBF) tracking controller. This controller adaptively adjusts distance bounds and safety margins based on the posterior target covariance provided by the factor graph. The proposed system is evaluated through numerical simulations and real-world experiments carried out in a narrow indoor corridor environment. 
- oai:arXiv.org:2512.17249v1 eess.SY cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 + GeoSense-AI: Fast Location Inference from Crisis Microblogs + https://arxiv.org/abs/2512.18225 + arXiv:2512.18225v1 Announce Type: new +Abstract: This paper presents an applied AI pipeline for real-time geolocation from noisy microblog streams, unifying statistical hashtag segmentation, part-of-speech-driven proper-noun detection, dependency parsing around disaster lexicons, lightweight named-entity recognition, and gazetteer-grounded disambiguation to infer locations directly from text rather than sparse geotags. The approach operationalizes information extraction under streaming constraints, emphasizing low-latency NLP components and efficient validation against geographic knowledge bases to support situational awareness during emergencies. In head-to-head comparisons with widely used NER toolkits, the system attains strong F1 while being engineered for orders-of-magnitude faster throughput, enabling deployment in live crisis informatics settings. A production map interface demonstrates end-to-end AI functionality (ingest, inference, and visualization), surfacing locational signals at scale for floods, outbreaks, and other fast-moving events. By prioritizing robustness to informal text and streaming efficiency, GeoSense-AI illustrates how domain-tuned NLP and knowledge grounding can elevate emergency response beyond conventional geo-tag reliance. + oai:arXiv.org:2512.18225v1 cs.CL cs.SI Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Yuqi Ping, Junwei Wu, Bofeng Zheng, Fan Liu, Tianhao Liang, Tingting Zhang + http://creativecommons.org/licenses/by/4.0/ + Deepit Sapru - Accelerating Multi-modal LLM Gaming Performance via Input Prediction and Mishit Correction - https://arxiv.org/abs/2512.17250 - arXiv:2512.17250v1 Announce Type: new -Abstract: Real-time sequential control agents are often bottlenecked by inference latency. 
Even modest per-step planning delays can destabilize control and degrade overall performance. We propose a speculation-and-correction framework that adapts the predict-then-verify philosophy of speculative execution to model-based control with TD-MPC2. At each step, a pretrained world model and latent-space MPC planner generate a short-horizon action queue together with predicted latent rollouts, allowing the agent to execute multiple planned actions without immediate replanning. When a new observation arrives, the system measures the mismatch between the encoded real latent state and the queued predicted latent. For small to moderate mismatch, a lightweight learned corrector applies a residual update to the speculative action, distilled offline from a replanning teacher. For large mismatch, the agent safely falls back to full replanning and clears stale action queues. We study both a gated two-tower MLP corrector and a temporal Transformer corrector to address local errors and systematic drift. Experiments on the DMC Humanoid-Walk task show that our method reduces the number of planning inferences from 500 to 282, improves end-to-end step latency by 25 percent, and maintains strong control performance with only a 7.1 percent return reduction. Ablation results demonstrate that speculative execution without correction is unreliable over longer horizons, highlighting the necessity of mismatch-aware correction for robust latency reduction. - oai:arXiv.org:2512.17250v1 - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Multifaceted Exploration of Spatial Openness in Rental Housing: A Big Data Analysis in Tokyo's 23 Wards + https://arxiv.org/abs/2512.18226 + arXiv:2512.18226v1 Announce Type: new +Abstract: Understanding spatial openness is vital for improving residential quality and design; however, studies often treat its influencing factors separately. 
This study developed a quantitative framework to evaluate the spatial openness in housing from two- (2D) and three- (3D) dimensional perspectives. Using data from 4,004 rental units in Tokyo's 23 wards, we examined the temporal and spatial variations in openness and its relationship with rent and housing attributes. 2D openness was computed via planar visibility using visibility graph analysis (VGA) from floor plans, whereas 3D openness was derived from interior images analysed using Mask2Former, a semantic segmentation model that identifies walls, ceilings, floors, and windows. The results showed an increase in living room visibility and a 1990s peak in overall openness. Spatial analyses revealed partial correlations among openness, rent, and building characteristics, reflecting urban redevelopment trends. Although the 2D and 3D openness indicators were not directly correlated, higher openness tended to correspond to higher rent. The impression scores predicted by the existing models were only weakly related to openness, suggesting that the interior design and furniture more strongly shape perceived space. This study offers a new multidimensional data-driven framework for quantifying residential spatial openness and linking it with urban and market dynamics. + oai:arXiv.org:2512.18226v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ziyang Lin, Zixuan Sun, Sanhorn Chen, Xiaoyang Chen, Roy Zhao + Takuya OKi, Yuan Liu - AlignDP: Hybrid Differential Privacy with Rarity-Aware Protection for LLMs - https://arxiv.org/abs/2512.17251 - arXiv:2512.17251v1 Announce Type: new -Abstract: Large language models are exposed to risks of extraction, distillation, and unauthorized fine-tuning. Existing defenses use watermarking or monitoring, but these act after leakage. We design AlignDP, a hybrid privacy lock that blocks knowledge transfer at the data interface. The key idea is to separate rare and non-rare fields. 
Rare fields are shielded by PAC indistinguishability, giving effective zero-epsilon local DP. Non-rare fields are privatized with RAPPOR, giving unbiased frequency estimates under local DP. A global aggregator enforces composition and budget. This two-tier design hides rare events and adds controlled noise to frequent events. We prove limits of PAC extension to global aggregation, give bounds for RAPPOR estimates, and analyze utility trade-off. A toy simulation confirms feasibility: rare categories remain hidden, frequent categories are recovered with small error. - oai:arXiv.org:2512.17251v1 - cs.CR - cs.AI - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Analyzing Crime Discourse in U.S. Metropolitan Communities on Reddit: Trends, Influences, and Insights + https://arxiv.org/abs/2512.18227 + arXiv:2512.18227v1 Announce Type: new +Abstract: The relationship between crime and the media has long been a focal point of academic research, with traditional media playing a significant role in shaping public perceptions of safety and community well-being. However, the advent of social media has introduced a new dimension to this discourse, offering unique platforms for user-driven discussions. Despite the prominence of social media, research examining its impact on crime-related discourse remains limited. This paper investigates crime-related discussions across Reddit communities representing 384 Metropolitan Areas in the United States. By analyzing user submissions, we identify key trends in crime discourse, including the higher prevalence of such discussions in larger metropolitan areas and communities with more liberal political leanings. Interestingly, we find that reported crime rates do not strongly influence the frequency or intensity of these discussions. 
These findings provide novel insights into how social media platforms, like Reddit, shape public narratives around crime, highlighting the growing need to examine digital spaces as influential mediators of public perception. + oai:arXiv.org:2512.18227v1 + cs.SI + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Madhava Gaikwad - - - Mitty: Diffusion-based Human-to-Robot Video Generation - https://arxiv.org/abs/2512.17253 - arXiv:2512.17253v1 Announce Type: new -Abstract: Learning directly from human demonstration videos is a key milestone toward scalable and generalizable robot learning. Yet existing methods rely on intermediate representations such as keypoints or trajectories, introducing information loss and cumulative errors that harm temporal and visual consistency. We present Mitty, a Diffusion Transformer that enables video In-Context Learning for end-to-end Human2Robot video generation. Built on a pretrained video diffusion model, Mitty leverages strong visual-temporal priors to translate human demonstrations into robot-execution videos without action labels or intermediate abstractions. Demonstration videos are compressed into condition tokens and fused with robot denoising tokens through bidirectional attention during diffusion. To mitigate paired-data scarcity, we also develop an automatic synthesis pipeline that produces high-quality human-robot pairs from large egocentric datasets. Experiments on Human2Robot and EPIC-Kitchens show that Mitty delivers state-of-the-art results, strong generalization to unseen environments, and new insights for scalable robot learning from human observations. 
- oai:arXiv.org:2512.17253v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-sa/4.0/ - Yiren Song, Cheng Liu, Weijia Mao, Mike Zheng Shou + Deepit Sapru - Practical Framework for Privacy-Preserving and Byzantine-robust Federated Learning - https://arxiv.org/abs/2512.17254 - arXiv:2512.17254v1 Announce Type: new -Abstract: Federated Learning (FL) allows multiple clients to collaboratively train a model without sharing their private data. However, FL is vulnerable to Byzantine attacks, where adversaries manipulate client models to compromise the federated model, and privacy inference attacks, where adversaries exploit client models to infer private data. Existing defenses against both backdoor and privacy inference attacks introduce significant computational and communication overhead, creating a gap between theory and practice. To address this, we propose ABBR, a practical framework for Byzantine-robust and privacy-preserving FL. We are the first to utilize dimensionality reduction to speed up the private computation of complex filtering rules in privacy-preserving FL. Additionally, we analyze the accuracy loss of vector-wise filtering in low-dimensional space and introduce an adaptive tuning strategy to minimize the impact of malicious models that bypass filtering on the global model. We implement ABBR with state-of-the-art Byzantine-robust aggregation rules and evaluate it on public datasets, showing that it runs significantly faster, has minimal communication overhead, and maintains nearly the same Byzantine-resilience as the baselines. 
- oai:arXiv.org:2512.17254v1 - cs.CR - cs.DC + Toward Efficient Testing of Graph Neural Networks via Test Input Prioritization + https://arxiv.org/abs/2512.18228 + arXiv:2512.18228v1 Announce Type: new +Abstract: Graph Neural Networks (GNNs) have demonstrated remarkable efficacy in handling graph-structured data; however, they exhibit failures after deployment, which can cause severe consequences. Hence, conducting thorough testing before deployment becomes imperative to ensure the reliability of GNNs. However, thorough testing requires numerous manually annotated test data. To mitigate the annotation cost, strategically prioritizing and labeling high-quality unlabeled inputs for testing becomes crucial, which facilitates uncovering more model failures with a limited labeling budget. Unfortunately, existing test input prioritization techniques either overlook the valuable information contained in graph structures or are overly reliant on attributes extracted from the target model, i.e., model-aware attributes, whose quality can vary significantly. To address these issues, we propose a novel test input prioritization framework, named GraphRank, for GNNs. GraphRank introduces model-agnostic attributes to compensate for the limitations of the model-aware ones. It also leverages the graph structure information to aggregate attributes from neighboring nodes, thereby enhancing the model-aware and model-agnostic attributes. Furthermore, GraphRank combines the above attributes with a binary classifier, using it as a ranking model to prioritize inputs. This classifier undergoes iterative training, which enables it to learn from each round's feedback and improve its performance accordingly. Extensive experiments demonstrate GraphRank's superiority over existing techniques. 
+ oai:arXiv.org:2512.18228v1 + cs.SE cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Baolei Zhang, Minghong Fang, Zhuqing Liu, Biao Yi, Peizhao Zhou, Yuan Wang, Tong Li, Zheli Liu + 10.1007/s10515-025-00554-0 + Autom Softw Eng 33, 14 (2026) + Lichen Yang, Qiang Wang, Zhonghao Yang, Daojing He, Yu Li - Quasi-recursive MDS Matrices over Galois Rings - https://arxiv.org/abs/2512.17256 - arXiv:2512.17256v1 Announce Type: new -Abstract: Let $p$ be a prime and $s,m,n$ be positive integers. This paper studies quasi-recursive MDS matrices over Galois rings $GR(p^{s}, p^{sm})$ and proposes various direct construction methods for such matrices. The construction is based on skew polynomial rings $GR(p^{s}, p^{sm})[X;\sigma]$, whose rich factorization properties and enlarged class of polynomials are used to define companion matrices generating quasi-recursive MDS matrices. First, two criteria are established for characterizing polynomials that yield recursive MDS matrices, generalizing existing results, and then an additional criterion is derived in terms of the right roots of the associated Wedderburn polynomial. Using these criteria, methods are developed to construct skew polynomials that give rise to quasi-recursive MDS matrices over Galois rings. This framework extends known constructions to the non-commutative setting and significantly enlarges the family of available matrices, with potential applications to efficient diffusion layers in cryptographic primitives. The results are particularly relevant for practical implementations when $s = 1$ and $p = 2$, i.e., over the finite field $\mathbb{F}_{2^m}$, which is of central interest in real-world cryptographic applications. 
- oai:arXiv.org:2512.17256v1 - cs.IT - math.IT - Mon, 22 Dec 2025 00:00:00 -0500 + Dimensionality Reduction Considered Harmful (Some of the Time) + https://arxiv.org/abs/2512.18230 + arXiv:2512.18230v1 Announce Type: new +Abstract: Visual analytics now plays a central role in decision-making across diverse disciplines, but it can be unreliable: the knowledge or insights derived from the analysis may not accurately reflect the underlying data. In this dissertation, we improve the reliability of visual analytics with a focus on dimensionality reduction (DR). DR techniques enable visual analysis of high-dimensional data by reducing it to two or three dimensions, but they inherently introduce errors that can compromise the reliability of visual analytics. To this end, I investigate reliability challenges that practitioners face when using DR for visual analytics. Then, I propose technical solutions to address these challenges, including new evaluation metrics, optimization strategies, and interaction techniques. We conclude the thesis by discussing how our contributions lay the foundation for achieving more reliable visual analytics practices. + oai:arXiv.org:2512.18230v1 + cs.HC + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Shakir Ali, Atif Ahmad Khan, Abhishek Kesarwani, Susanta Samanta + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Hyeon Jeon - Electric Vehicle Charging Load Forecasting: An Experimental Comparison of Machine Learning Methods - https://arxiv.org/abs/2512.17257 - arXiv:2512.17257v1 Announce Type: new -Abstract: With the growing popularity of electric vehicles as a means of addressing climate change, concerns have emerged regarding their impact on electric grid management. As a result, predicting EV charging demand has become a timely and important research problem. 
While substantial research has addressed energy load forecasting in transportation, relatively few studies systematically compare multiple forecasting methods across different temporal horizons and spatial aggregation levels in diverse urban settings. This work investigates the effectiveness of five time series forecasting models, ranging from traditional statistical approaches to machine learning and deep learning methods. Forecasting performance is evaluated for short-, mid-, and long-term horizons (on the order of minutes, hours, and days, respectively), and across spatial scales ranging from individual charging stations to regional and city-level aggregations. The analysis is conducted on four publicly available real-world datasets, with results reported independently for each dataset. To the best of our knowledge, this is the first work to systematically evaluate EV charging demand forecasting across such a wide range of temporal horizons and spatial aggregation levels using multiple real-world datasets. - oai:arXiv.org:2512.17257v1 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Investigating Spatial Attention Bias in Vision-Language Models + https://arxiv.org/abs/2512.18231 + arXiv:2512.18231v1 Announce Type: new +Abstract: Vision-Language Models have demonstrated remarkable capabilities in understanding visual content, yet systematic biases in their spatial processing remain largely unexplored. This work identifies and characterizes a systematic spatial attention bias where VLMs consistently prioritize describing left-positioned content before right-positioned content in horizontally concatenated images. Through controlled experiments on image pairs using both open-source and closed-source models, we demonstrate that this bias persists across different architectures, with models describing left-positioned content first in approximately 97% of cases under neutral prompting conditions. 
Testing on an Arabic-finetuned model reveals that the bias persists despite right-to-left language training, ruling out language reading direction as the primary cause. Investigation of training dataset annotation guidelines from PixMo and Visual Genome reveals no explicit left-first ordering instructions, suggesting the bias is consistent with architectural factors rather than explicit training data instructions. These findings reveal fundamental limitations in how current VLMs process spatial information. + oai:arXiv.org:2512.18231v1 + cs.CV + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Iason Kyriakopoulos, Yannis Theodoridis + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Aryan Chaudhary, Sanchit Goyal, Pratik Narang, Dhruv Kumar - Verifiability-First Agents: Provable Observability and Lightweight Audit Agents for Controlling Autonomous LLM Systems - https://arxiv.org/abs/2512.17259 - arXiv:2512.17259v1 Announce Type: new -Abstract: As LLM-based agents grow more autonomous and multi-modal, ensuring they remain controllable, auditable, and faithful to deployer intent becomes critical. Prior benchmarks measured the propensity for misaligned behavior and showed that agent personalities and tool access significantly influence misalignment. Building on these insights, we propose a Verifiability-First architecture that (1) integrates run-time attestations of agent actions using cryptographic and symbolic methods, (2) embeds lightweight Audit Agents that continuously verify intent versus behavior using constrained reasoning, and (3) enforces challenge-response attestation protocols for high-risk operations. 
We introduce OPERA (Observability, Provable Execution, Red-team, Attestation), a benchmark suite and evaluation protocol designed to measure (i) detectability of misalignment, (ii) time to detection under stealthy strategies, and (iii) resilience of verifiability mechanisms to adversarial prompt and persona injection. Our approach shifts the evaluation focus from how likely misalignment is to how quickly and reliably misalignment can be detected and remediated. - oai:arXiv.org:2512.17259v1 - cs.MA - cs.AI + AutoSchA: Automatic Hierarchical Music Representations via Multi-Relational Node Isolation + https://arxiv.org/abs/2512.18232 + arXiv:2512.18232v1 Announce Type: new +Abstract: Hierarchical representations provide powerful and principled approaches for analyzing many musical genres. Such representations have been broadly studied in music theory, for instance via Schenkerian analysis (SchA). Hierarchical music analyses, however, are highly cost-intensive; the analysis of a single piece of music requires a great deal of time and effort from trained experts. The representation of hierarchical analyses in a computer-readable format is a further challenge. Given recent developments in hierarchical deep learning and increasing quantities of computer-readable data, there is great promise in extending such work for an automatic hierarchical representation framework. This paper thus introduces a novel approach, AutoSchA, which extends recent developments in graph neural networks (GNNs) for hierarchical music analysis. AutoSchA features three key contributions: 1) a new graph learning framework for hierarchical music representation, 2) a new graph pooling mechanism based on node isolation that directly optimizes learned pooling assignments, and 3) a state-of-the-art architecture that integrates such developments for automatic hierarchical music analysis. 
We show, in a suite of experiments, that AutoSchA performs comparably to human experts when analyzing Baroque fugue subjects. + oai:arXiv.org:2512.18232v1 + cs.SD cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Abhivansh Gupta + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Stephen Ni-Hahn, Rico Zhu, Jerry Yin, Yue Jiang, Cynthia Rudin, Simon Mak - Seed-Prover 1.5: Mastering Undergraduate-Level Theorem Proving via Learning from Experience - https://arxiv.org/abs/2512.17260 - arXiv:2512.17260v1 Announce Type: new -Abstract: Large language models have recently made significant progress to generate rigorous mathematical proofs. In contrast, utilizing LLMs for theorem proving in formal languages (such as Lean) remains challenging and computationally expensive, particularly when addressing problems at the undergraduate level and beyond. In this work, we present \textbf{Seed-Prover 1.5}, a formal theorem-proving model trained via large-scale agentic reinforcement learning, alongside an efficient test-time scaling (TTS) workflow. Through extensive interactions with Lean and other tools, the model continuously accumulates experience during the RL process, substantially enhancing the capability and efficiency of formal theorem proving. Furthermore, leveraging recent advancements in natural language proving, our TTS workflow efficiently bridges the gap between natural and formal languages. Compared to state-of-the-art methods, Seed-Prover 1.5 achieves superior performance with a smaller compute budget. It solves \textbf{88\% of PutnamBench} (undergraduate-level), \textbf{80\% of Fate-H} (graduate-level), and \textbf{33\% of Fate-X} (PhD-level) problems. Notably, using our system, we solved \textbf{11 out of 12 problems} from Putnam 2025 within 9 hours. 
Our findings suggest that scaling learning from experience, driven by high-quality formal feedback, holds immense potential for the future of formal mathematical reasoning. - oai:arXiv.org:2512.17260v1 - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + The Social Blindspot in Human-AI Collaboration: How Undetected AI Personas Reshape Team Dynamics + https://arxiv.org/abs/2512.18234 + arXiv:2512.18234v1 Announce Type: new +Abstract: As generative AI systems become increasingly embedded in collaborative work, they are evolving from visible tools into human-like communicative actors that participate socially rather than merely providing information. Yet little is known about how such agents shape team dynamics when their artificial nature is not recognised, a growing concern as human-like AI is deployed at scale in education, organisations, and civic contexts where collaboration underpins collective outcomes. In a large-scale mixed-design experiment (N = 905), we examined how AI teammates with distinct communicative personas, supportive or contrarian, affected collaboration across analytical, creative, and ethical tasks. Participants worked in triads that were fully human or hybrid human-AI teams, without being informed of AI involvement. Results show that participants had limited ability to detect AI teammates, yet AI personas exerted robust social effects. Contrarian personas reduced psychological safety and discussion quality, whereas supportive personas improved discussion quality without affecting safety. These effects persisted after accounting for individual differences in detectability, revealing a dissociation between influence and awareness that we term the social blindspot. Linguistic analyses confirmed that personas were enacted through systematic differences in affective and relational language, with partial mediation for discussion quality but largely direct effects on psychological safety. 
Together, the findings demonstrate that AI systems can tacitly regulate collaborative norms through persona-level cues, even when users remain unaware of their presence. We argue that persona design constitutes a form of social governance in hybrid teams, with implications for the responsible deployment of AI in collective settings. + oai:arXiv.org:2512.18234v1 + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Jiangjie Chen, Wenxiang Chen, Jiacheng Du, Jinyi Hu, Zhicheng Jiang, Allan Jie, Xiaoran Jin, Xing Jin, Chenggang Li, Wenlei Shi, Zhihong Wang, Mingxuan Wang, Chenrui Wei, Shufa Wei, Huajian Xin, Fan Yang, Weihao Gao, Zheng Yuan, Tianyang Zhan, Zeyu Zheng, Tianxi Zhou, Thomas Hanwen Zhu + Lixiang Yan, Xibin Han, Yu Zhang, Samuel Greiff, Inge Molenaar, Roberto Martinez-Maldonado, Yizhou Fan, Linxuan Zhao, Xinyu Li, Yueqiao Jin, Dragan Ga\v{s}evi\'c - SHARP-QoS: Sparsely-gated Hierarchical Adaptive Routing for joint Prediction of QoS - https://arxiv.org/abs/2512.17262 - arXiv:2512.17262v1 Announce Type: new -Abstract: Dependable service-oriented computing relies on multiple Quality of Service (QoS) parameters that are essential to assess service optimality. However, real-world QoS data are extremely sparse, noisy, and shaped by hierarchical dependencies arising from QoS interactions, and geographical and network-level factors, making accurate QoS prediction challenging. Existing methods often predict each QoS parameter separately, requiring multiple similar models, which increases computational cost and leads to poor generalization. Although recent joint QoS prediction studies have explored shared architectures, they suffer from negative transfer due to loss-scaling caused by inconsistent numerical ranges across QoS parameters and further struggle with inadequate representation learning, resulting in degraded accuracy. 
This paper presents a unified strategy for joint QoS prediction, called SHARP-QoS, that addresses these issues using three components. First, we introduce a dual mechanism to extract the hierarchical features from both QoS and contextual structures via hyperbolic convolution formulated in the Poincar\'e ball. Second, we propose an adaptive feature-sharing mechanism that allows feature exchange across informative QoS and contextual signals. A gated feature fusion module is employed to support dynamic feature selection among structural and shared representations. Third, we design an EMA-based loss balancing strategy that allows stable joint optimization, thereby mitigating the negative transfer. Evaluations on three datasets with two, three, and four QoS parameters demonstrate that SHARP-QoS outperforms both single- and multi-task baselines. Extensive studies show that our model effectively addresses major challenges, including sparsity, robustness to outliers, and cold-start, while maintaining moderate computational overhead, underscoring its capability for reliable joint QoS prediction.
      - oai:arXiv.org:2512.17262v1
      cs.LG
      Mon, 22 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Suraj Kumar, Arvind Kumar, Soumi Chattopadhyay
    
    
      Joint Learning of Depth, Pose, and Local Radiance Field for Large Scale Monocular 3D Reconstruction
      https://arxiv.org/abs/2512.18237
      arXiv:2512.18237v1 Announce Type: new
+Abstract: Photorealistic 3-D reconstruction from monocular video collapses in large-scale scenes when depth, pose, and radiance are solved in isolation: scale-ambiguous depth yields ghost geometry, long-horizon pose drift corrupts alignment, and a single global NeRF cannot model hundreds of metres of content. We introduce a joint learning framework that couples all three factors and demonstrably overcomes each failure case. Our system begins with a Vision-Transformer (ViT) depth network trained with metric-scale supervision, giving globally consistent depths despite wide field-of-view variations. 
A multi-scale feature bundle-adjustment (BA) layer refines camera poses directly in feature space--leveraging learned pyramidal descriptors instead of brittle keypoints--to suppress drift on unconstrained trajectories. For scene representation, we deploy an incremental local-radiance-field hierarchy: new hash-grid NeRFs are allocated and frozen on-the-fly when view overlap falls below a threshold, enabling city-block-scale coverage on a single GPU. Evaluated on the Tanks and Temples benchmark, our method reduces Absolute Trajectory Error to 0.001-0.021 m across eight indoor-outdoor sequences--up to 18x lower than BARF and 2x lower than NoPe-NeRF--while maintaining sub-pixel Relative Pose Error. These results demonstrate that metric-scale, drift-free 3-D reconstruction and high-fidelity novel-view synthesis are achievable from a single uncalibrated RGB camera. + oai:arXiv.org:2512.18237v1 + cs.CV + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Suraj Kumar, Arvind Kumar, Soumi Chattopadhyay + Shahram Najam Syed, Yitian Hu, Yuchao Yao - AnyCXR: Human Anatomy Segmentation of Chest X-ray at Any Acquisition Position using Multi-stage Domain Randomized Synthetic Data with Imperfect Annotations and Conditional Joint Annotation Regularization Learning - https://arxiv.org/abs/2512.17263 - arXiv:2512.17263v1 Announce Type: new -Abstract: Robust anatomical segmentation of chest X-rays (CXRs) remains challenging due to the scarcity of comprehensive annotations and the substantial variability of real-world acquisition conditions. We propose AnyCXR, a unified framework that enables generalizable multi-organ segmentation across arbitrary CXR projection angles using only synthetic supervision. 
The method combines a Multi-stage Domain Randomization (MSDR) engine, which generates over 100,000 anatomically faithful and highly diverse synthetic radiographs from 3D CT volumes, with a Conditional Joint Annotation Regularization (CAR) learning strategy that leverages partial and imperfect labels by enforcing anatomical consistency in a latent space. Trained entirely on synthetic data, AnyCXR achieves strong zero-shot generalization on multiple real-world datasets, providing accurate delineation of 54 anatomical structures in PA, lateral, and oblique views. The resulting segmentation maps support downstream clinical tasks, including automated cardiothoracic ratio estimation, spine curvature assessment, and disease classification, where the incorporation of anatomical priors improves diagnostic performance. These results demonstrate that AnyCXR establishes a scalable and reliable foundation for anatomy-aware CXR analysis and offers a practical pathway toward reducing annotation burdens while improving robustness across diverse imaging conditions. - oai:arXiv.org:2512.17263v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Sync Without Guesswork: Incomplete Time Series Alignment + https://arxiv.org/abs/2512.18238 + arXiv:2512.18238v1 Announce Type: new +Abstract: Multivariate time series alignment is critical for ensuring coherent analysis across variables, but missing values and timestamp inconsistencies make this task highly challenging. Existing approaches often rely on prior imputation, which can introduce errors and lead to suboptimal alignments. To address these limitations, we propose a constraint-based alignment framework for incomplete multivariate time series that avoids imputation and ensures temporal and structural consistency. We further design efficient approximation algorithms to balance accuracy and scalability. 
Experiments on multiple real-world datasets demonstrate that our approach achieves superior alignment quality compared to existing methods under varying missing rates. Our contributions include: (1) formally defining the incomplete multiple temporal data alignment problem; (2) proposing three approximation algorithms balancing accuracy and efficiency; and (3) validating our approach on diverse real-world datasets, where it consistently outperforms existing methods in alignment accuracy and the number of aligned tuples.
      + oai:arXiv.org:2512.18238v1
      cs.DB
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Ding Jia, Jingyu Zhu, Yu Sun, Aoqian Zhang, Shaoxu Song, Haiwei Zhang, Xiaojie Yuan
    
    
      Scalable Distributed Vector Search via Accuracy Preserving Index Construction
      https://arxiv.org/abs/2512.17264
      arXiv:2512.17264v1 Announce Type: new
-Abstract: Scaling Approximate Nearest Neighbor Search (ANNS) to billions of vectors requires distributed indexes that balance accuracy, latency, and throughput. Yet existing index designs struggle with this tradeoff. This paper presents SPIRE, a scalable vector index based on two design decisions. First, it identifies a balanced partition granularity that avoids read-cost explosion. Second, it introduces an accuracy-preserving recursive construction that builds a multi-level index with predictable search cost and stable accuracy. In experiments with up to 8 billion vectors across 46 nodes, SPIRE achieves high scalability and up to 9.64X higher throughput than state-of-the-art systems. 
- oai:arXiv.org:2512.17264v1 - cs.DC - Mon, 22 Dec 2025 00:00:00 -0500 + Emergent Learner Agency in Implicit Human-AI Collaboration: How AI Personas Reshape Creative-Regulatory Interaction + https://arxiv.org/abs/2512.18239 + arXiv:2512.18239v1 Announce Type: new +Abstract: Generative AI is increasingly embedded in collaborative learning, yet little is known about how AI personas shape learner agency when AI teammates are present but not disclosed. This mechanism study examines how supportive and contrarian AI personas reconfigure emergent learner agency, discourse patterns, and experiences in implicit human-AI creative collaboration. A total of 224 university students were randomly assigned to 97 online triads in one of three conditions: human-only control, hybrid teams with a supportive AI, or hybrid teams with a contrarian AI. Participants completed an individual-group-individual movie-plot writing task; the 10-minute group chat was coded using a creative-regulatory framework. We combined transition network analysis, theory-driven sequential pattern mining, and Gaussian mixture clustering to model structural, temporal, and profile-level manifestations of agency, and linked these to cognitive load, psychological safety, teamwork satisfaction, and embedding-based creative performance. Contrarian AI produced challenge- and reflection-rich discourse structures and motifs indicating productive friction, whereas supportive AI fostered agreement-centred trajectories and smoother convergence. Clustering showed AI agents concentrated in challenger profiles, with reflective regulation uniquely human. While no systematic differences emerged in cognitive load or creative gains, contrarian AI consistently reduced teamwork satisfaction and psychological safety. The findings reveal a design tension between leveraging cognitive conflict and maintaining affective safety and ownership in hybrid human-AI teams. 
+ oai:arXiv.org:2512.18239v1 + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Yuming Xu, Qianxi Zhang, Qi Chen, Baotong Lu, Menghao Li, Philip Adams, Mingqin Li, Zengzhong Li, Jing Liu, Cheng Li, Fan Yang + Yueqiao Jin, Roberto Martinez-Maldonado, Dragan Ga\v{s}evi\'c, Lixiang Yan - A Theoretical Analysis of State Similarity Between Markov Decision Processes - https://arxiv.org/abs/2512.17265 - arXiv:2512.17265v1 Announce Type: new -Abstract: The bisimulation metric (BSM) is a powerful tool for analyzing state similarities within a Markov decision process (MDP), revealing that states closer in BSM have more similar optimal value functions. While BSM has been successfully utilized in reinforcement learning (RL) for tasks like state representation learning and policy exploration, its application to state similarity between multiple MDPs remains challenging. Prior work has attempted to extend BSM to pairs of MDPs, but a lack of well-established mathematical properties has limited further theoretical analysis between MDPs. In this work, we formally establish a generalized bisimulation metric (GBSM) for measuring state similarity between arbitrary pairs of MDPs, which is rigorously proven with three fundamental metric properties, i.e., GBSM symmetry, inter-MDP triangle inequality, and a distance bound on identical spaces. Leveraging these properties, we theoretically analyze policy transfer, state aggregation, and sampling-based estimation across MDPs, obtaining explicit bounds that are strictly tighter than existing ones derived from the standard BSM. Additionally, GBSM provides a closed-form sample complexity for estimation, improving upon existing asymptotic results based on BSM. Numerical results validate our theoretical findings and demonstrate the effectiveness of GBSM in multi-MDP scenarios. 
- oai:arXiv.org:2512.17265v1 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + SG-RIFE: Semantic-Guided Real-Time Intermediate Flow Estimation with Diffusion-Competitive Perceptual Quality + https://arxiv.org/abs/2512.18241 + arXiv:2512.18241v1 Announce Type: new +Abstract: Real-time Video Frame Interpolation (VFI) has long been dominated by flow-based methods like RIFE, which offer high throughput but often fail in complicated scenarios involving large motion and occlusion. Conversely, recent diffusion-based approaches (e.g., Consec. BB) achieve state-of-the-art perceptual quality but suffer from prohibitive latency, rendering them impractical for real-time applications. To bridge this gap, we propose Semantic-Guided RIFE (SG-RIFE). Instead of training from scratch, we introduce a parameter-efficient fine-tuning strategy that augments a pre-trained RIFE backbone with semantic priors from a frozen DINOv3 Vision Transformer. We propose a Split-Fidelity Aware Projection Module (Split-FAPM) to compress and refine high-dimensional features, and a Deformable Semantic Fusion (DSF) module to align these semantic priors with pixel-level motion fields. Experiments on SNU-FILM demonstrate that semantic injection provides a decisive boost in perceptual fidelity. SG-RIFE outperforms diffusion-based LDMVFI in FID/LPIPS and achieves quality comparable to Consec. BB on complex benchmarks while running significantly faster, proving that semantic consistency enables flow-based methods to achieve diffusion-competitive perceptual quality in near real-time. 
+ oai:arXiv.org:2512.18241v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhenyu Tao, Wei Xu, Xiaohu You + Pan Ben Wong, Chengli Wu, Hanyue Lu - ScoutGPT: Capturing Player Impact from Team Action Sequences Using GPT-Based Framework - https://arxiv.org/abs/2512.17266 - arXiv:2512.17266v1 Announce Type: new -Abstract: Transfers play a pivotal role in shaping a football club's success, yet forecasting whether a transfer will succeed remains difficult due to the strong context-dependence of on-field performance. Existing evaluation practices often rely on static summary statistics or post-hoc value models, which fail to capture how a player's contribution adapts to a new tactical environment or different teammates. To address this gap, we introduce EventGPT, a player-conditioned, value-aware next-event prediction model built on a GPT-style autoregressive transformer. Our model treats match play as a sequence of discrete tokens, jointly learning to predict the next on-ball action's type, location, timing, and its estimated residual On-Ball Value (rOBV) based on the preceding context and player identity. A key contribution of this framework is the ability to perform counterfactual simulations. By substituting learned player embeddings into new event sequences, we can simulate how a player's behavioral distribution and value profile would change when placed in a different team or tactical structure. Evaluated on five seasons of Premier League event data, EventGPT outperforms existing sequence-based baselines in next-event prediction accuracy and spatial precision. Furthermore, we demonstrate the model's practical utility for transfer analysis through case studies-such as comparing striker performance across different systems and identifying stylistic replacements for specific roles-showing that our approach provides a principled method for evaluating transfer fit. 
- oai:arXiv.org:2512.17266v1
      cs.AI
      Mon, 22 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Miru Hong, Minho Lee, Geonhee Jo, Jae-Hee So, Pascal Bauer, Sang-Ki Ko
    
    
      Breaking Minds, Breaking Systems: Jailbreaking Large Language Models via Human-like Psychological Manipulation
      https://arxiv.org/abs/2512.18244
      arXiv:2512.18244v1 Announce Type: new
+Abstract: Large Language Models (LLMs) have gained considerable popularity and are protected by increasingly sophisticated safety mechanisms. However, jailbreak attacks continue to pose a critical security threat by inducing models to generate policy-violating behaviors. Current paradigms focus on input-level anomalies, overlooking that the model's internal psychometric state can be systematically manipulated. To address this, we introduce Psychological Jailbreak, a new jailbreak attack paradigm that exposes a stateful psychological attack surface in LLMs, where attackers exploit the manipulation of a model's psychological state across interactions. Building on this insight, we propose Human-like Psychological Manipulation (HPM), a black-box jailbreak method that dynamically profiles a target model's latent psychological vulnerabilities and synthesizes tailored multi-turn attack strategies. By leveraging the model's optimization for anthropomorphic consistency, HPM creates a psychological pressure where social compliance overrides safety constraints. To systematically measure psychological safety, we construct an evaluation framework incorporating psychometric datasets and the Policy Corruption Score (PCS). Benchmarking against various models (e.g., GPT-4o, DeepSeek-V3, Gemini-2-Flash), HPM achieves a mean Attack Success Rate (ASR) of 88.1%, outperforming state-of-the-art attack baselines. Our experiments demonstrate robust penetration against advanced defenses, including adversarial prompt optimization (e.g., RPO) and cognitive interventions (e.g., Self-Reminder). Ultimately, PCS analysis confirms HPM induces safety breakdown to satisfy manipulated contexts. 
Our work advocates for a fundamental paradigm shift from static content filtering to psychological safety, prioritizing the development of psychological defense mechanisms against deep cognitive manipulation.
      + oai:arXiv.org:2512.18244v1
      cs.CR
      cs.AI
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Zehao Liu, Xi Lin
    
    
      Spectral Discrepancy and Cross-modal Semantic Consistency Learning for Object Detection in Hyperspectral Image
      https://arxiv.org/abs/2512.18245
      arXiv:2512.18245v1 Announce Type: new
+Abstract: Hyperspectral images with high spectral resolution provide new insights into recognizing subtle differences in similar substances. However, object detection in hyperspectral images faces significant challenges in intra- and inter-class similarity due to the spatial differences in hyperspectral inter-bands and unavoidable interferences, e.g., sensor noises and illumination. To alleviate the hyperspectral inter-bands inconsistencies and redundancy, we propose a novel network termed \textbf{S}pectral \textbf{D}iscrepancy and \textbf{C}ross-\textbf{M}odal semantic consistency learning (SDCM), which facilitates the extraction of consistent information across a wide range of hyperspectral bands while utilizing the spectral dimension to pinpoint regions of interest. Specifically, we leverage a semantic consistency learning (SCL) module that utilizes inter-band contextual cues to diminish the heterogeneity of information among bands, yielding highly coherent spectral dimension representations. On the other hand, we incorporate a spectral gated generator (SGG) into the framework that filters out the redundant data inherent in hyperspectral information based on the importance of the bands. Then, we design the spectral discrepancy aware (SDA) module to enrich the semantic representation of high-level information by extracting pixel-level spectral features. Extensive experiments on two hyperspectral datasets demonstrate that our proposed method achieves state-of-the-art performance when compared with other methods.
      + oai:arXiv.org:2512.18245v1
      cs.CV
      cs.AI
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Xiao He, Chang Tang, Xinwang Liu, Wei Zhang, Zhimin Gao, Chuankun Li, Shaohua Qiu, Jiangfeng Xu
    
    
      AutoMetrics: Approximate Human Judgements with Automatically Generated Evaluators
      https://arxiv.org/abs/2512.17267
      arXiv:2512.17267v1 Announce Type: new
-Abstract: Evaluating user-facing AI applications remains a central challenge, especially in open-ended domains such as travel planning, clinical note generation, or dialogue. The gold standard is user feedback (e.g., thumbs up/down) or behavioral signals (e.g., retention), but these are often scarce in prototypes and research projects, or too slow to use for system optimization. We present AutoMetrics, a framework for synthesizing evaluation metrics under low-data constraints. AutoMetrics combines retrieval from MetricBank, a collection of 48 metrics we curate, with automatically generated LLM-as-a-Judge criteria informed by lightweight human feedback. These metrics are composed via regression to maximize correlation with human signal. AutoMetrics takes you from expensive measures to interpretable automatic metrics. Across 5 diverse tasks, AutoMetrics improves Kendall correlation with human ratings by up to 33.4% over LLM-as-a-Judge while requiring fewer than 100 feedback points. We show that AutoMetrics can be used as a proxy reward to equal effect as a verifiable reward. We release the full AutoMetrics toolkit and MetricBank to accelerate adaptive evaluation of LLM applications. 
- oai:arXiv.org:2512.17267v1 - cs.CL + Spectral Discrepancy and Cross-modal Semantic Consistency Learning for Object Detection in Hyperspectral Image + https://arxiv.org/abs/2512.18245 + arXiv:2512.18245v1 Announce Type: new +Abstract: Hyperspectral images with high spectral resolution provide new insights into recognizing subtle differences in similar substances. However, object detection in hyperspectral images faces significant challenges in intra- and inter-class similarity due to the spatial differences in hyperspectral inter-bands and unavoidable interferences, e.g., sensor noises and illumination. To alleviate the hyperspectral inter-bands inconsistencies and redundancy, we propose a novel network termed \textbf{S}pectral \textbf{D}iscrepancy and \textbf{C}ross-\textbf{M}odal semantic consistency learning (SDCM), which facilitates the extraction of consistent information across a wide range of hyperspectral bands while utilizing the spectral dimension to pinpoint regions of interest. Specifically, we leverage a semantic consistency learning (SCL) module that utilizes inter-band contextual cues to diminish the heterogeneity of information among bands, yielding highly coherent spectral dimension representations. On the other hand, we incorporate a spectral gated generator (SGG) into the framework that filters out the redundant data inherent in hyperspectral information based on the importance of the bands. Then, we design the spectral discrepancy aware (SDA) module to enrich the semantic representation of high-level information by extracting pixel-level spectral features. Extensive experiments on two hyperspectral datasets demonstrate that our proposed method achieves state-of-the-art performance when compared with other ones. + oai:arXiv.org:2512.18245v1 + cs.CV cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Michael J. 
Ryan, Yanzhe Zhang, Amol Salunkhe, Yi Chu, Di Xu, Diyi Yang + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xiao He, Chang Tang, Xinwang Liu, Wei Zhang, Zhimin Gao, Chuankun Li, Shaohua Qiu, Jiangfeng Xu - Line Cover and Related Problems - https://arxiv.org/abs/2512.17268 - arXiv:2512.17268v1 Announce Type: new -Abstract: We study extensions of the classic \emph{Line Cover} problem, which asks whether a set of $n$ points in the plane can be covered using $k$ lines. Line Cover is known to be NP-hard, and we focus on two natural generalizations. The first is \textbf{Line Clustering}, where the goal is to find $k$ lines minimizing the sum of squared distances from the input points to their nearest line. The second is \textbf{Hyperplane Cover}, which asks whether $n$ points in $\mathbb{R}^d$ can be covered by $k$ hyperplanes. - We also study the more general \textbf{Projective Clustering} problem, which unifies both settings and has applications in machine learning, data analysis, and computational geometry. In this problem, one seeks $k$ affine subspaces of dimension $r$ that minimize the sum of squared distances from the given points in $\mathbb{R}^d$ to the nearest subspace. - Our results reveal notable differences in the parameterized complexity of these problems. While Line Cover is fixed-parameter tractable when parameterized by $k$, we show that Line Clustering is W[1]-hard with respect to $k$ and does not admit an algorithm with running time $n^{o(k)}$ unless the Exponential Time Hypothesis fails. Hyperplane Cover is NP-hard even for $d=2$, and prior work of Langerman and Morin [Discrete & Computational Geometry, 2005] showed that it is fixed-parameter tractable when parameterized by both $k$ and $d$. We complement this by proving that Hyperplane Cover is W[2]-hard when parameterized by $k$ alone. - Finally, we present an algorithm for Projective Clustering running in $n^{O(dk(r+1))}$ time. 
This bound matches our lower bound for Line Clustering and generalizes the classic algorithm for $k$-Means Clustering ($r=0$) by Inaba, Katoh, and Imai [SoCG 1994].
- oai:arXiv.org:2512.17268v1
- cs.CG
cs.DS
- Mon, 22 Dec 2025 00:00:00 -0500

+ Offline Behavioral Data Selection
+ https://arxiv.org/abs/2512.18246
+ arXiv:2512.18246v1 Announce Type: new
+Abstract: Behavioral cloning is a widely adopted approach for offline policy learning from expert demonstrations. However, the large scale of offline behavioral datasets often results in computationally intensive training when used in downstream tasks. In this paper, we uncover the striking data saturation in offline behavioral data: policy performance rapidly saturates when trained on a small fraction of the dataset. We attribute this effect to the weak alignment between policy performance and test loss, revealing substantial room for improvement through data selection. To this end, we propose a simple yet effective method, Stepwise Dual Ranking (SDR), which extracts a compact yet informative subset from large-scale offline behavioral datasets. SDR is built on two key principles: (1) stepwise clip, which prioritizes early-stage data; and (2) dual ranking, which selects samples with both high action-value rank and low state-density rank. Extensive experiments and ablation studies on D4RL benchmarks demonstrate that SDR significantly enhances data selection for offline behavioral data.
+ oai:arXiv.org:2512.18246v1
cs.LG
cs.AI
Tue, 23 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
- Matthias Bentert, Fedor v. Fomin, Petr A. 
Golovach, Souvik Saha, Sanjay Seetharaman, Anannya Upasana
+ Shiye Lei, Zhihao Cheng, Dacheng Tao

- Understanding Generalization in Role-Playing Models via Information Theory
- https://arxiv.org/abs/2512.17270
- arXiv:2512.17270v1 Announce Type: new
-Abstract: Role-playing models (RPMs) are widely used in real-world applications but underperform when deployed in the wild. This degradation can be attributed to distribution shifts, including user, character, and dialogue compositional shifts. Existing methods like LLM-as-a-judge fall short in providing a fine-grained diagnosis of how these shifts affect RPM generalization, and thus there is a lack of formal frameworks to characterize RPM generalization behaviors. To bridge these gaps, we introduce an information-theoretic metric, named reasoning-based effective mutual information difference (R-EMID), to measure RPM performance degradation in an interpretable way. We also derive an upper bound on R-EMID to predict the worst-case generalization performance of RPMs and theoretically reveal how various shifts contribute to the RPM performance degradation. Moreover, we propose a co-evolving reinforcement learning framework to adaptively model the connection among user, character, and dialogue context and thus enhance the estimation of dialogue response generation probability, which is critical for calculating R-EMID. Finally, we evaluate the generalization performance of various RPMs using R-EMID, finding that user shift poses the highest risk among all shifts and reinforcement learning is the most effective approach for enhancing RPM generalization.
- oai:arXiv.org:2512.17270v1
- cs.LG
cs.AI
cs.CL
- Mon, 22 Dec 2025 00:00:00 -0500
new
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yongqi Li, Hao Lang, Fei Huang, Tieyun Qian, Yongbin Li

+ Towards Ancient Plant Seed Classification: A Benchmark Dataset and Baseline Model
+ https://arxiv.org/abs/2512.18247
+ arXiv:2512.18247v1 Announce Type: new
+Abstract: Understanding the dietary preferences of ancient societies and their evolution across periods and regions is crucial for revealing human-environment interactions. 
Seeds, as important archaeological artifacts, represent a fundamental subject of archaeobotanical research. However, traditional studies rely heavily on expert knowledge, which is often time-consuming and inefficient. Intelligent analysis methods have made progress in various fields of archaeology, but there remains a research gap in data and methods in archaeobotany, especially in the classification task of ancient plant seeds. To address this, we construct the first Ancient Plant Seed Image Classification (APS) dataset. It contains 8,340 images from 17 genus- or species-level seed categories excavated from 18 archaeological sites across China. In addition, we design a framework specifically for the ancient plant seed classification task (APSNet), which introduces the scale feature (size) of seeds based on learning fine-grained information to guide the network in discovering key "evidence" for sufficient classification. Specifically, we design a Size Perception and Embedding (SPE) module in the encoder part to explicitly extract size information for the purpose of complementing fine-grained information. We propose an Asynchronous Decoupled Decoding (ADD) architecture based on traditional progressive learning to decode features from both channel and spatial perspectives, enabling efficient learning of discriminative features. In both quantitative and qualitative analyses, our approach surpasses existing state-of-the-art image classification methods, achieving an accuracy of 90.5%. This demonstrates that our work provides an effective tool for large-scale, systematic archaeological research. 
+ oai:arXiv.org:2512.18247v1
cs.CV
cs.AI
Tue, 23 Dec 2025 00:00:00 -0500
new
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rui Xing, Runmin Cong, Yingying Wu, Can Wang, Zhongming Tang, Fen Wang, Hao Wu, Sam Kwong

- MINPO: Memory-Informed Neural Pseudo-Operator to Resolve Nonlocal Spatiotemporal Dynamics
- https://arxiv.org/abs/2512.17273
- arXiv:2512.17273v1 Announce Type: new
-Abstract: Many physical systems exhibit nonlocal spatiotemporal behaviors described by integro-differential equations (IDEs). Classical methods for solving IDEs require repeatedly evaluating convolution integrals, whose cost increases quickly with kernel complexity and dimensionality. Existing neural solvers can accelerate selected instances of these computations, yet they do not generalize across diverse nonlocal structures. In this work, we introduce the Memory-Informed Neural Pseudo-Operator (MINPO), a unified framework for modeling nonlocal dynamics arising from long-range spatial interactions and/or long-term temporal memory. MINPO, employing either Kolmogorov-Arnold Networks (KANs) or multilayer perceptron networks (MLPs) as encoders, learns the nonlocal operator and its inverse directly through neural representations, and then explicitly reconstructs the unknown solution fields. The learning is guarded by a lightweight nonlocal consistency loss term to enforce coherence between the learned operator and reconstructed solution. The MINPO formulation allows it to naturally capture and efficiently resolve nonlocal spatiotemporal dependencies governed by a wide spectrum of IDEs and their subsets, including fractional PDEs.
We evaluate the efficacy of MINPO in comparison with classical techniques and state-of-the-art neural-based strategies based on MLPs, such as A-PINN and fPINN, along with their newly-developed KAN variants, A-PIKAN and fPIKAN, designed to facilitate a fair comparison. Our study offers compelling evidence of the accuracy of MINPO and demonstrates its robustness in handling (i) diverse kernel types, (ii) different kernel dimensionalities, and (iii) the substantial computational demands arising from repeated evaluations of kernel integrals. MINPO, thus, generalizes beyond problem-specific formulations, providing a unified framework for systems governed by nonlocal operators. - oai:arXiv.org:2512.17273v1 + On the Convergence Rate of LoRA Gradient Descent + https://arxiv.org/abs/2512.18248 + arXiv:2512.18248v1 Announce Type: new +Abstract: The low-rank adaptation (LoRA) algorithm for fine-tuning large models has grown popular in recent years due to its remarkable performance and low computational requirements. LoRA trains two ``adapter" matrices that form a low-rank representation of the model parameters, thereby massively reducing the number of parameters that need to be updated at every step. Although LoRA is simple, its convergence is poorly understood due to the lack of Lipschitz smoothness, a key condition for classic convergence analyses. As a result, current theoretical results only consider asymptotic behavior or assume strong boundedness conditions which artificially enforce Lipschitz smoothness. In this work, we provide for the first time a non-asymptotic convergence analysis of the \textit{original LoRA gradient descent} algorithm, which reflects widespread practice, without such assumptions. Our work relies on three key steps: i) reformulating the problem in terms of the outer product of the stacked adapter matrices, ii) a modified descent lemma for the ``Lipschitz-like" reparametrized function, and iii) controlling the step size. 
With this approach, we prove that LoRA gradient descent converges to a stationary point at rate $O(\frac{1}{\log T})$, where $T$ is the number of iterations.
+ oai:arXiv.org:2512.18248v1
cs.LG
Tue, 23 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
Siqiao Mu, Diego Klabjan

- Alzheimer's Disease Brain Network Mining
- https://arxiv.org/abs/2512.17276
- arXiv:2512.17276v1 Announce Type: new
-Abstract: Machine learning approaches for Alzheimer's disease (AD) diagnosis face a fundamental challenge. Clinical assessments are expensive and invasive, leaving ground truth labels available for only a fraction of neuroimaging datasets. We introduce Multi-view Adaptive Transport Clustering for Heterogeneous Alzheimer's Disease (MATCH-AD), a semi-supervised framework that integrates deep representation learning, graph-based label propagation, and optimal transport theory to address this limitation. The framework leverages manifold structure in neuroimaging data to propagate diagnostic information from limited labeled samples to larger unlabeled populations, while using Wasserstein distances to quantify disease progression between cognitive states. Evaluated on nearly five thousand subjects from the National Alzheimer's Coordinating Center, encompassing structural MRI measurements from hundreds of brain regions, cerebrospinal fluid biomarkers, and clinical variables, MATCH-AD achieves near-perfect diagnostic accuracy despite ground truth labels for less than one-third of subjects. The framework substantially outperforms all baseline methods, achieving a kappa indicating almost perfect agreement compared to weak agreement for the best baseline, a qualitative transformation in diagnostic reliability.
Performance remains clinically useful even under severe label scarcity, and we provide theoretical convergence guarantees with proven bounds on label propagation error and transport stability. These results demonstrate that principled semi-supervised learning can unlock the diagnostic potential of the vast repositories of partially annotated neuroimaging data accumulating worldwide, substantially reducing annotation burden while maintaining accuracy suitable for clinical deployment. - oai:arXiv.org:2512.17276v1 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Alireza Moayedikia, Sara Fin + Siqiao Mu, Diego Klabjan - Warmer for Less: A Cost-Efficient Strategy for Cold-Start Recommendations at Pinterest - https://arxiv.org/abs/2512.17277 - arXiv:2512.17277v1 Announce Type: new -Abstract: Pinterest is a leading visual discovery platform where recommender systems (RecSys) are key to delivering relevant, engaging, and fresh content to our users. In this paper, we study the problem of improving RecSys model predictions for cold-start (CS) items, which appear infrequently in the training data. Although this problem is well-studied in academia, few studies have addressed its root causes effectively at the scale of a platform like Pinterest. By investigating live traffic data, we identified several challenges of the CS problem and developed a corresponding solution for each: First, industrial-scale RecSys models must operate under tight computational constraints. Since CS items are a minority, any related improvements must be highly cost-efficient. To address this, our solutions were designed to be lightweight, collectively increasing the total parameters by only 5%. Second, CS items are represented only by non-historical (e.g., content or attribute) features, which models often treat as less important. To elevate their significance, we introduce a residual connection for the non-historical features. 
Third, CS items tend to receive lower prediction scores compared to non-CS items, reducing their likelihood of being surfaced. We mitigate this by incorporating a score regularization term into the model. Fourth, the labels associated with CS items are sparse, making it difficult for the model to learn from them. We apply the manifold mixup technique to address this data sparsity. Implemented together, our methods increased fresh content engagement at Pinterest by 10% without negatively impacting overall engagement and cost, and have been deployed to serve over 570 million users on Pinterest. - oai:arXiv.org:2512.17277v1 - cs.IR - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Loom: Diffusion-Transformer for Interleaved Generation + https://arxiv.org/abs/2512.18254 + arXiv:2512.18254v1 Announce Type: new +Abstract: Interleaved text-image generation aims to jointly produce coherent visual frames and aligned textual descriptions within a single sequence, enabling tasks such as style transfer, compositional synthesis, and procedural tutorials. We present Loom, a unified diffusion-transformer framework for interleaved text-image generation. Loom extends the Bagel unified model via full-parameter fine-tuning and an interleaved architecture that alternates textual and visual embeddings for multi-condition reasoning and sequential planning. A language planning strategy first decomposes a user instruction into stepwise prompts and frame embeddings, which guide temporally consistent synthesis. For each frame, Loom conditions on a small set of sampled prior frames together with the global textual context, rather than concatenating all history, yielding controllable and efficient long-horizon generation. Across style transfer, compositional generation, and tutorial-like procedures, Loom delivers superior compositionality, temporal coherence, and text-image alignment. 
Experiments demonstrate that Loom substantially outperforms the open-source baseline Anole, achieving an average gain of 2.6 points (on a 5-point scale) across temporal and semantic metrics in text-to-interleaved tasks. We also curate a 50K interleaved tutorial dataset and demonstrate strong improvements over unified and diffusion editing baselines. + oai:arXiv.org:2512.18254v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Saeed Ebrahimi, Weijie Jiang, Jaewon Yang, Olafur Gudmundsson, Yucheng Tu, Huizhong Duan + http://creativecommons.org/licenses/by-sa/4.0/ + Mingcheng Ye, Jiaming Liu, Yiren Song - WDFFU-Mamba: A Wavelet-guided Dual-attention Feature Fusion Mamba for Breast Tumor Segmentation in Ultrasound Images - https://arxiv.org/abs/2512.17278 - arXiv:2512.17278v1 Announce Type: new -Abstract: Breast ultrasound (BUS) image segmentation plays a vital role in assisting clinical diagnosis and early tumor screening. However, challenges such as speckle noise, imaging artifacts, irregular lesion morphology, and blurred boundaries severely hinder accurate segmentation. To address these challenges, this work aims to design a robust and efficient model capable of automatically segmenting breast tumors in BUS images.We propose a novel segmentation network named WDFFU-Mamba, which integrates wavelet-guided enhancement and dual-attention feature fusion within a U-shaped Mamba architecture. A Wavelet-denoised High-Frequency-guided Feature (WHF) module is employed to enhance low-level representations through noise-suppressed high-frequency cues. 
A Dual Attention Feature Fusion (DAFF) module is also introduced to effectively merge skip-connected and semantic features, improving contextual consistency. Extensive experiments on two public BUS datasets demonstrate that WDFFU-Mamba achieves superior segmentation accuracy, significantly outperforming existing methods in terms of Dice coefficient and 95th percentile Hausdorff Distance (HD95). The combination of wavelet-domain enhancement and attention-based fusion greatly improves both the accuracy and robustness of BUS image segmentation, while maintaining computational efficiency. The proposed WDFFU-Mamba model not only delivers strong segmentation performance but also exhibits desirable generalization ability across datasets, making it a promising solution for real-world clinical applications in breast tumor ultrasound analysis.
- oai:arXiv.org:2512.17278v1
cs.CV
cs.AI
- Mon, 22 Dec 2025 00:00:00 -0500
new
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Guoping Cai, Houjin Chen, Yanfeng Li, Jia Sun, Ziwei Chen, Qingzi Geng

+ MSC-180: A Benchmark for Automated Formal Theorem Proving from Mathematical Subject Classification
+ https://arxiv.org/abs/2512.18256
+ arXiv:2512.18256v1 Announce Type: new
+Abstract: Automated Theorem Proving (ATP) represents a core research direction in artificial intelligence for achieving formal reasoning and verification, playing a significant role in advancing machine intelligence. However, current large language model (LLM)-based theorem provers suffer from limitations such as restricted domain coverage and weak generalization in mathematical reasoning. To address these issues, we propose MSC-180, a benchmark for evaluation based on the MSC2020 mathematical subject classification. It comprises 180 formal verification problems, 3 advanced problems from each of 60 mathematical branches, spanning from undergraduate to graduate levels. Each problem has undergone multiple rounds of verification and refinement by domain experts to ensure formal accuracy.
Evaluations of state-of-the-art LLM-based theorem provers under the pass@32 setting reveal that the best model achieves only an 18.89% overall pass rate, with prominent issues including significant domain bias (maximum domain coverage 41.7%) and a difficulty gap (significantly lower pass rates on graduate-level problems). To further quantify performance variability across mathematical domains, we introduce the coefficient of variation (CV) as an evaluation metric. The observed CV values are 4-6 times higher than the statistical high-variability threshold, indicating that the models still rely on pattern matching from training corpora rather than possessing transferable reasoning mechanisms and systematic generalization capabilities. MSC-180, together with its multi-dimensional evaluation framework, provides a discriminative and systematic benchmark for driving the development of next-generation AI systems with genuine mathematical reasoning abilities. + oai:arXiv.org:2512.18256v1 cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + cs.LO + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Guoping Cai, Houjin Chen, Yanfeng Li, Jia Sun, Ziwei Chen, Qingzi Geng + Sirui Li, Wangyue Lu, Xiaorui Shi, Ke Weng, Haozhe Sun, Minghe Yu, Tiancheng Zhang, Ge Yu, Hengyu Liu, Lun Du - Diagnostic Performance of Universal-Learning Ultrasound AI Across Multiple Organs and Tasks: the UUSIC25 Challenge - https://arxiv.org/abs/2512.17279 - arXiv:2512.17279v1 Announce Type: new -Abstract: IMPORTANCE: Current ultrasound AI remains fragmented into single-task tools, limiting clinical utility compared to versatile modern ultrasound systems. - OBJECTIVE: To evaluate the diagnostic accuracy and efficiency of single general-purpose deep learning models for multi-organ classification and segmentation. - DESIGN: The Universal UltraSound Image Challenge 2025 (UUSIC25) involved developing algorithms on 11,644 images (public/private). 
Evaluation used an independent, multi-center test set of 2,479 images, including data from a center completely unseen during training to assess generalization. - OUTCOMES: Diagnostic performance (Dice Similarity Coefficient [DSC]; Area Under the Receiver Operating Characteristic Curve [AUC]) and computational efficiency (inference time, GPU memory). - RESULTS: Of 15 valid algorithms, the top model (SMART) achieved a macro-averaged DSC of 0.854 across 5 segmentation tasks and AUC of 0.766 for binary classification. Models showed high capability in segmentation (e.g., fetal head DSC: 0.942) but variability in complex tasks subject to domain shift. Notably, in breast cancer molecular subtyping, the top model's performance dropped from AUC 0.571 (internal) to 0.508 (unseen external center), highlighting generalization challenges. - CONCLUSIONS: General-purpose AI models achieve high accuracy and efficiency across multiple tasks using a single architecture. However, performance degradation on unseen data suggests domain generalization is critical for future clinical deployment. - oai:arXiv.org:2512.17279v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + TCP BBR Performance over Wi-Fi~6: AQM Impacts and Cross-Layer Insights + https://arxiv.org/abs/2512.18259 + arXiv:2512.18259v1 Announce Type: new +Abstract: We evaluate TCP BBRv3 on Wi-Fi 6 home networks under modern AQM schemes using a fully wireless testbed and a simple cross-layer model linking Wi-Fi scheduling, router queueing, and BBRv3's pacing dynamics. Comparing BBR Internet traffic with CUBIC across different AQMs (FIFO, FQ-CoDel, and CAKE) for uplink, downlink, and bidirectional traffic, we find that FIFO destabilizes pacing and raises delay, often letting CUBIC dominate; FQ-CoDel restores fairness and controls latency; and CAKE delivers the best overall performance by keeping delay low and aligning BBRv3's sending and delivered rates. 
We also identify a Wi-Fi-specific effect where CAKE's rapid queue draining, while improving pacing alignment, can trigger brief retransmission bursts during BBRv3's bandwidth probes. These results follow from the interaction of variable Wi-Fi service rates, AQM delay control, and BBRv3's inflight limits, leading to practical guidance to use FQ-CoDel or CAKE and avoid unmanaged FIFO in home Wi-Fi, with potential for Wi-Fi-aware tuning of BBRv3's probing. + oai:arXiv.org:2512.18259v1 + cs.NI + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zehui Lin, Luyi Han, Xin Wang, Ying Zhou, Yanming Zhang, Tianyu Zhang, Lingyun Bao, Shandong Wu, Dong Xu, Tao Tan, the UUSIC25 Challenge Consortium + Shyam Kumar Shrestha, Shiva Raj Pokhrel, Jonathan Kua - Sensor Management System (SMS): Open-source software for FAIR sensor metadata management in Earth system sciences - https://arxiv.org/abs/2512.17280 - arXiv:2512.17280v1 Announce Type: new -Abstract: Deriving reliable conclusions and insights from environmental observational data urgently requires the enrichment with consistent and comprehensive metadata, including time-resolved context such as changing deployments, configurations, and maintenance actions. We have therefore developed the Sensor Management System (SMS), which provides a user-friendly and feature-rich platform for modeling even the most complex sensor systems and managing all sensor-related information across their life cycle. Each entity is described via well-defined terms like Devices, Platforms and Configurations, as well as Sites that are further enhanced with attributes for, e.g., instrument manufacturers, contact information or measured quantities and complemented by a continuous history of system-related actions. 
By further linking the SMS to subsequent systems and services like PID-registration or controlled vocabularies and establishing a community of end-users, the SMS provides the central element of a digital ecosystem that fosters a more consistent, sustainable and FAIR provision of sensor-related metadata.
- oai:arXiv.org:2512.17280v1
cs.SE
- Mon, 22 Dec 2025 00:00:00 -0500

+ Software Vulnerability Management in the Era of Artificial Intelligence: An Industry Perspective
+ https://arxiv.org/abs/2512.18261
+ arXiv:2512.18261v1 Announce Type: new
+Abstract: Artificial Intelligence (AI) has revolutionized software development, particularly by automating repetitive tasks and improving developer productivity. While these advancements are well-documented, the use of AI-powered tools for Software Vulnerability Management (SVM), such as vulnerability detection and repair, remains underexplored in industry settings. To bridge this gap, our study aims to determine the extent of the adoption of AI-powered tools for SVM, identify barriers and facilitators to the use, and gather insights to help improve the tools to meet industry needs better. We conducted a survey study involving 60 practitioners from diverse industry sectors across 27 countries. The survey incorporates both quantitative and qualitative questions to analyze the adoption trends, assess tool strengths, identify practical challenges, and uncover opportunities for improvement. Our findings indicate that AI-powered tools are used throughout the SVM life cycle, with 69\% of users reporting satisfaction with their current use. Practitioners value these tools for their speed, coverage, and accessibility. However, concerns about false positives, missing context, and trust issues remain prevalent. We observe a socio-technical adoption pattern in which AI outputs are filtered through human oversight and organizational governance.
To support safe and effective use of AI for SVM, we recommend improvements in explainability, contextual awareness, integration workflows, and validation practices. We assert that these findings can offer practical guidance for practitioners, tool developers, and researchers seeking to enhance secure software development through the use of AI. + oai:arXiv.org:2512.18261v1 cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Christof Lorenza, Nils Brinckmann, Jan Bumberger, Marc Hanisch, Tobias Kuhnert, Ulrich Loup, Rubankumar Moorthy, Florian Obsersteiner, David Sch\"afer, Thomas Schnicke - - - LibriVAD: A Scalable Open Dataset with Deep Learning Benchmarks for Voice Activity Detection - https://arxiv.org/abs/2512.17281 - arXiv:2512.17281v1 Announce Type: new -Abstract: Robust Voice Activity Detection (VAD) remains a challenging task, especially under noisy, diverse, and unseen acoustic conditions. Beyond algorithmic development, a key limitation in advancing VAD research is the lack of large-scale, systematically controlled, and publicly available datasets. To address this, we introduce LibriVAD - a scalable open-source dataset derived from LibriSpeech and augmented with diverse real-world and synthetic noise sources. LibriVAD enables systematic control over speech-to-noise ratio, silence-to-speech ratio (SSR), and noise diversity, and is released in three sizes (15 GB, 150 GB, and 1.5 TB) with two variants (LibriVAD-NonConcat and LibriVAD-Concat) to support different experimental setups. We benchmark multiple feature-model combinations, including waveform, Mel-Frequency Cepstral Coefficients (MFCC), and Gammatone filter bank cepstral coefficients, and introduce the Vision Transformer (ViT) architecture for VAD. 
Our experiments show that ViT with MFCC features consistently outperforms established VAD models such as boosted deep neural network and convolutional long short-term memory deep neural network across seen, unseen, and out-of-distribution (OOD) conditions, including evaluation on the real-world VOiCES dataset. We further analyze the impact of dataset size and SSR on model generalization, experimentally showing that scaling up dataset size and balancing SSR noticeably and consistently enhance VAD performance under OOD conditions. All datasets, trained models, and code are publicly released to foster reproducibility and accelerate progress in VAD research. - oai:arXiv.org:2512.17281v1 - cs.SD - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Ioannis Stylianou, Achintya kr. Sarkar, Nauman Dawalatabad, James Glass, Zheng-Hua Tan - - - Analytical Stackelberg Resource Allocation in Sequential Attacker--Defender Games - https://arxiv.org/abs/2512.17284 - arXiv:2512.17284v1 Announce Type: new -Abstract: We develop an analytical Stackelberg game framework for optimal resource allocation in a sequential attacker--defender setting with a finite set of assets and probabilistic attacks. The defender commits to a mixed protection strategy, after which the attacker best-responds via backward induction. Closed-form expressions for equilibrium protection and attack strategies are derived for general numbers of assets and defensive resources. Necessary constraints on rewards and costs are established to ensure feasibility of the probability distributions. Three distinct payoff regimes for the defender are identified and analysed. An eight-asset numerical example illustrates the equilibrium structure and reveals a unique Pareto-dominant attack configuration. 
- oai:arXiv.org:2512.17284v1 - cs.GT - Mon, 22 Dec 2025 00:00:00 -0500 + cs.AI + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Azhar Iqbal, James M. Chappell, Derek Abbott + http://creativecommons.org/licenses/by/4.0/ + M. Mehdi Kholoosi, Triet Huynh Minh Le, M. Ali Babar - Subjective Question Generation and Answer Evaluation using NLP - https://arxiv.org/abs/2512.17289 - arXiv:2512.17289v1 Announce Type: new -Abstract: Natural Language Processing (NLP) is one of the most revolutionary technologies today. It uses artificial intelligence to understand human text and spoken words. It is used for text summarization, grammar checking, sentiment analysis, and advanced chatbots and has many more potential use cases. Furthermore, it has also made its mark on the education sector. Much research and advancements have already been conducted on objective question generation; however, automated subjective question generation and answer evaluation are still in progress. An automated system to generate subjective questions and evaluate the answers can help teachers assess student work and enhance the student's learning experience by allowing them to self-assess their understanding after reading an article or a chapter of a book. This research aims to improve current NLP models or make a novel one for automated subjective question generation and answer evaluation from text input. - oai:arXiv.org:2512.17289v1 - cs.CL + Who Can See Through You? Adversarial Shielding Against VLM-Based Attribute Inference Attacks + https://arxiv.org/abs/2512.18264 + arXiv:2512.18264v1 Announce Type: new +Abstract: As vision-language models (VLMs) become widely adopted, VLM-based attribute inference attacks have emerged as a serious privacy concern, enabling adversaries to infer private attributes from images shared on social media. This escalating threat calls for dedicated protection methods to safeguard user privacy. 
However, existing methods often degrade the visual quality of images or interfere with vision-based functions on social media, thereby failing to achieve a desirable balance between privacy protection and user experience. To address this challenge, we propose a novel protection method that jointly optimizes privacy suppression and utility preservation under a visual consistency constraint. While our method is conceptually effective, fair comparisons between methods remain challenging due to the lack of publicly available evaluation datasets. To fill this gap, we introduce VPI-COCO, a publicly available benchmark comprising 522 images with hierarchically structured privacy questions and corresponding non-private counterparts, enabling fine-grained and joint evaluation of protection methods in terms of privacy preservation and user experience. Building upon this benchmark, experiments on multiple VLMs demonstrate that our method effectively reduces PAR below 25%, keeps NPAR above 88%, maintains high visual consistency, and generalizes well to unseen and paraphrased privacy questions, demonstrating its strong practical applicability for real-world VLM deployments. + oai:arXiv.org:2512.18264v1 + cs.CV cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - G. M. Refatul Islam, Safwan Shaheer, Yaseen Nur, Mohammad Rafid Hamid + Yucheng Fan, Jiawei Chen, Yu Tian, Zhaoxia Yin - Vision-Language Model Guided Image Restoration - https://arxiv.org/abs/2512.17292 - arXiv:2512.17292v1 Announce Type: new -Abstract: Many image restoration (IR) tasks require both pixel-level fidelity and high-level semantic understanding to recover realistic photos with fine-grained details. However, previous approaches often struggle to effectively leverage both the visual and linguistic knowledge. 
Recent efforts have attempted to incorporate Vision-language models (VLMs), which excel at aligning visual and textual features, into universal IR. Nevertheless, these methods fail to utilize the linguistic priors to ensure semantic coherence during the restoration process. To address this issue, in this paper, we propose the Vision-Language Model Guided Image Restoration (VLMIR) framework, which leverages the rich vision-language priors of VLMs, such as CLIP, to enhance IR performance through improved visual perception and semantic understanding. Our approach consists of two stages: VLM-based feature extraction and diffusion-based image restoration. In the first stage, we extract complementary visual and linguistic representations of input images by condensing the visual perception and high-level semantic priors through VLMs. Specifically, we align the embeddings of captions from low-quality and high-quality images using a cosine similarity loss with LoRA fine-tuning, and employ a degradation predictor to decompose degradation and clean image content embeddings. These complementary visual and textual embeddings are then integrated into a diffusion-based model via cross-attention mechanisms for enhanced restoration. Extensive experiments and ablation studies demonstrate that VLMIR achieves superior performance across both universal and degradation-specific IR tasks, underscoring the critical role of integrated visual and linguistic knowledge from VLMs in advancing image restoration capabilities. 
- oai:arXiv.org:2512.17292v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Intelligent Human-Machine Partnership for Manufacturing: Enhancing Warehouse Planning through Simulation-Driven Knowledge Graphs and LLM Collaboration + https://arxiv.org/abs/2512.18265 + arXiv:2512.18265v1 Announce Type: new +Abstract: Manufacturing planners face complex operational challenges that require seamless collaboration between human expertise and intelligent systems to achieve optimal performance in modern production environments. Traditional approaches to analyzing simulation-based manufacturing data often create barriers between human decision-makers and critical operational insights, limiting effective partnership in manufacturing planning. Our framework establishes a collaborative intelligence system integrating Knowledge Graphs and Large Language Model-based agents to bridge this gap, empowering manufacturing professionals through natural language interfaces for complex operational analysis. The system transforms simulation data into semantically rich representations, enabling planners to interact naturally with operational insights without specialized expertise. A collaborative LLM agent works alongside human decision-makers, employing iterative reasoning that mirrors human analytical thinking while generating precise queries for knowledge extraction and providing transparent validation. This partnership approach to manufacturing bottleneck identification, validated through operational scenarios, demonstrates enhanced performance while maintaining human oversight and decision authority. For operational inquiries, the system achieves near-perfect accuracy through natural language interaction. For investigative scenarios requiring collaborative analysis, we demonstrate the framework's effectiveness in supporting human experts to uncover interconnected operational issues that enhance understanding and decision-making. 
This work advances collaborative manufacturing by creating intuitive methods for actionable insights, reducing cognitive load while amplifying human analytical capabilities in evolving manufacturing ecosystems. + oai:arXiv.org:2512.18265v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Cuixin Yang, Rongkang Dong, Kin-Man Lam + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Himabindu Thogaru, Saisubramaniam Gopalakrishnan, Zishan Ahmad, Anirudh Deodhar - Robust TTS Training via Self-Purifying Flow Matching for the WildSpoof 2026 TTS Track - https://arxiv.org/abs/2512.17293 - arXiv:2512.17293v1 Announce Type: new -Abstract: This paper presents a lightweight text-to-speech (TTS) system developed for the WildSpoof Challenge TTS Track. Our approach fine-tunes the recently released open-weight TTS model, \textit{Supertonic}\footnote{\url{https://github.com/supertone-inc/supertonic}}, with Self-Purifying Flow Matching (SPFM) to enable robust adaptation to in-the-wild speech. SPFM mitigates label noise by comparing conditional and unconditional flow matching losses on each sample, routing suspicious text--speech pairs to unconditional training while still leveraging their acoustic information. The resulting model achieves the lowest Word Error Rate (WER) among all participating teams, while ranking second in perceptual metrics such as UTMOS and DNSMOS. These findings demonstrate that efficient, open-weight architectures like Supertonic can be effectively adapted to diverse real-world speech conditions when combined with explicit noise-handling mechanisms such as SPFM. 
- oai:arXiv.org:2512.17293v1 - cs.SD - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + LeJOT: An Intelligent Job Cost Orchestration Solution for Databricks Platform + https://arxiv.org/abs/2512.18266 + arXiv:2512.18266v1 Announce Type: new +Abstract: With the rapid advancements in big data technologies, the Databricks platform has become a cornerstone for enterprises and research institutions, offering high computational efficiency and a robust ecosystem. However, managing the escalating operational costs associated with job execution remains a critical challenge. Existing solutions rely on static configurations or reactive adjustments, which fail to adapt to the dynamic nature of workloads. To address this, we introduce LeJOT, an intelligent job cost orchestration framework that leverages machine learning for execution time prediction and a solver-based optimization model for real-time resource allocation. Unlike conventional scheduling techniques, LeJOT proactively predicts workload demands, dynamically allocates computing resources, and minimizes costs while ensuring performance requirements are met. Experimental results on real-world Databricks workloads demonstrate that LeJOT achieves an average 20% reduction in cloud computing costs within a minute-level scheduling timeframe, outperforming traditional static allocation strategies. Our approach provides a scalable and adaptive solution for cost-efficient job scheduling in Data Lakehouse environments. 
+ oai:arXiv.org:2512.18266v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - June Young Yi, Hyeongju Kim, Juheon Lee + http://creativecommons.org/licenses/by/4.0/ + Lizhi Ma, Yi-Xiang Hu, Yuke Wang, Yifang Zhao, Yihui Ren, Jian-Xiang Liao, Feng Wu, Xiang-Yang Li - An Iconic Heavy Hitter Algorithm Made Private - https://arxiv.org/abs/2512.17295 - arXiv:2512.17295v1 Announce Type: new -Abstract: Identifying heavy hitters in data streams is a fundamental problem with widespread applications in modern analytics systems. These streams are often derived from sensitive user activity, making update-level privacy guarantees necessary. While recent work has adapted the classical heavy hitter algorithm Misra-Gries to satisfy differential privacy in the streaming model, the privatization of other heavy hitter algorithms with better empirical utility is absent. - Under this observation, we present the first differentially private variant of the SpaceSaving algorithm, which, in the non-private setting, is regarded as the state-of-the-art in practice. Our construction post-processes a non-private SpaceSaving summary by injecting asymptotically optimal noise and applying a carefully calibrated selection rule that suppresses unstable labels. This yields strong privacy guarantees while preserving the empirical advantages of SpaceSaving. - Second, we introduce a generic method for extracting heavy hitters from any differentially private frequency oracle in the data stream model. The method requires only O(k) additional memory, where k is the number of heavy items, and provides a mechanism for safely releasing item identities from noisy frequency estimates. This yields an efficient, plug-and-play approach for private heavy hitter recovery from linear sketches. - Finally, we conduct an experimental evaluation on synthetic and real-world datasets. 
Across a wide range of privacy parameters and space budgets, our method provides superior utility to the existing differentially private Misra-Gries algorithm. Our results demonstrate that the empirical superiority of SpaceSaving survives privatization and that efficient, practical heavy hitter identification is achievable under strong differential privacy guarantees. - oai:arXiv.org:2512.17295v1 - cs.CR - Mon, 22 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Rayne Holland - - On The Computational Complexity for Minimizing Aerial Photographs for Full Coverage of a Planar Region - https://arxiv.org/abs/2512.18268 - arXiv:2512.18268v1 Announce Type: new +Abstract: With the popularity of drone technologies, aerial photography has become prevalent in many daily scenarios such as environment monitoring, structure inspection, and law enforcement. A central challenge in this domain is the efficient coverage of a target area with photographs that can entirely capture the region, while respecting constraints such as the image resolution and the limited number of pictures that can be taken. This work investigates the computational complexity of several fundamental problems arising from this challenge. By abstracting the aerial photography problem into coverage problems in computational geometry, we demonstrate that most of these problems are in fact computationally intractable, with the implication that traditional algorithms cannot solve them efficiently. The intuitions of this work can extend beyond aerial photography to broader applications such as pesticide spraying and strategic sensor placement. 
+ oai:arXiv.org:2512.18268v1 + cs.RO + cs.CG + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Rayne Holland + Si Wei Feng - Towards Pixel-Wise Anomaly Location for High-Resolution PCBA \\ via Self-Supervised Image Reconstruction - https://arxiv.org/abs/2512.17296 - arXiv:2512.17296v1 Announce Type: new -Abstract: Automated defect inspection of assembled Printed Circuit Board Assemblies (PCBA) is quite challenging due to the insufficient labeled data, micro-defects with just a few pixels in visually-complex and high-resolution images. To address these challenges, we present HiSIR-Net, a High resolution, Self-supervised Reconstruction framework for pixel-wise PCBA localization. Our design combines two lightweight modules that make this practical on real 4K-resolution boards: (i) a Selective Input-Reconstruction Gate (SIR-Gate) that lets the model decide where to trust reconstruction versus the original input, thereby reducing irrelevant reconstruction artifacts and false alarms; and (ii) a Region-level Optimized Patch Selection (ROPS) scheme with positional cues to select overlapping patch reconstructions coherently across arbitrary resolutions. Organically integrating these mechanisms yields clean, high-resolution anomaly maps with low false positive (FP) rate. To bridge the gap in high-resolution PCBA datasets, we further contribute a self-collected dataset named SIPCBA-500 of 500 images. We conduct extensive experiments on our SIPCBA-500 as well as public benchmarks, demonstrating the superior localization performance of our method while running at practical speed. Full code and dataset will be made available upon acceptance. 
- oai:arXiv.org:2512.17296v1 + Building UI/UX Dataset for Dark Pattern Detection and YOLOv12x-based Real-Time Object Recognition Detection System + https://arxiv.org/abs/2512.18269 + arXiv:2512.18269v1 Announce Type: new +Abstract: With the accelerating pace of digital transformation and the widespread adoption of online platforms, both social and technical concerns regarding dark patterns-user interface designs that undermine users' ability to make informed and rational choices-have become increasingly prominent. As corporate online platforms grow more sophisticated in their design strategies, there is a pressing need for proactive and real-time detection technologies that go beyond the predominantly reactive approaches employed by regulatory authorities. In this paper, we propose a visual dark pattern detection framework that improves both detection accuracy and real-time performance. To this end, we constructed a proprietary visual object detection dataset by manually collecting 4,066 UI/UX screenshots containing dark patterns from 194 websites across six major industrial sectors in South Korea and abroad. The collected images were annotated with five representative UI components commonly associated with dark patterns: Button, Checkbox, Input Field, Pop-up, and QR Code. This dataset has been publicly released to support further research and development in the field. To enable real-time detection, this study adopted the YOLOv12x object detection model and applied transfer learning to optimize its performance for visual dark pattern recognition. Experimental results demonstrate that the proposed approach achieves a high detection accuracy of 92.8% in terms of mAP@50, while maintaining a real-time inference speed of 40.5 frames per second (FPS), confirming its effectiveness for practical deployment in online environments. 
Furthermore, to facilitate future research and contribute to technological advancements, the dataset constructed in this study has been made publicly available at https://github.com/B4E2/B4E2-DarkPattern-YOLO-DataSet. + oai:arXiv.org:2512.18269v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wuyi Liu, Le Jin, Junxian Yang, Yuanchao Yu, Zishuo Peng, Jinfeng Xu, Xianzhi Li, Jun Zhou + Se-Young Jang, Su-Yeon Yoon, Jae-Woong Jung, Dong-Hun Lee, Seong-Hun Choi, Soo-Kyung Jun, Yu-Bin Kim, Young-Seon Ju, Kyounggon Kim - ProCache: Constraint-Aware Feature Caching with Selective Computation for Diffusion Transformer Acceleration - https://arxiv.org/abs/2512.17298 - arXiv:2512.17298v1 Announce Type: new -Abstract: Diffusion Transformers (DiTs) have achieved state-of-the-art performance in generative modeling, yet their high computational cost hinders real-time deployment. While feature caching offers a promising training-free acceleration solution by exploiting temporal redundancy, existing methods suffer from two key limitations: (1) uniform caching intervals fail to align with the non-uniform temporal dynamics of DiT, and (2) naive feature reuse with excessively large caching intervals can lead to severe error accumulation. In this work, we analyze the evolution of DiT features during denoising and reveal that both feature changes and error propagation are highly time- and depth-varying. 
Motivated by this, we propose ProCache, a training-free dynamic feature caching framework that addresses these issues via two core components: (i) a constraint-aware caching pattern search module that generates non-uniform activation schedules through offline constrained sampling, tailored to the model's temporal characteristics; and (ii) a selective computation module that selectively computes within deep blocks and high-importance tokens for cached segments to mitigate error accumulation with minimal overhead. Extensive experiments on PixArt-alpha and DiT demonstrate that ProCache achieves up to 1.96x and 2.90x acceleration with negligible quality degradation, significantly outperforming prior caching-based methods. - oai:arXiv.org:2512.17298v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Hybrid multiscale method for polymer melts: analysis and simulations + https://arxiv.org/abs/2512.18272 + arXiv:2512.18272v1 Announce Type: new +Abstract: We model the flow behaviour of dense melts of flexible and semiflexible ring polymers in the presence of walls using a hybrid multiscale approach. Specifically, we perform molecular dynamics simulations and apply the Irving-Kirkwood formula to determine an averaged stress tensor for a macroscopic model. For the latter, we choose a Cahn-Hilliard-Navier-Stokes system with dynamic and no-slip boundary conditions. We present numerical simulations of the macroscopic flow that are based on a finite element method. In particular, we present detailed proofs of the solvability and the energy stability of our numerical scheme. Phase segregation under flow between flexible and semiflexible rings, as observed in the microscopic simulations, can be replicated in the macroscopic model by introducing effective attractive forces. 
+ oai:arXiv.org:2512.18272v1 + math.NA + cond-mat.soft + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Fanpu Cao, Yaofo Chen, Zeng You, Wei Luo, Cen Chen + Ranajay Datta, M\'aria Luk\'a\v{c}ov\'a-Medvi\v{d}ov\'a, Andreas Sch\"omer, Peter Virnau - M2RU: Memristive Minion Recurrent Unit for Continual Learning at the Edge - https://arxiv.org/abs/2512.17299 - arXiv:2512.17299v1 Announce Type: new -Abstract: Continual learning on edge platforms remains challenging because recurrent networks depend on energy-intensive training procedures and frequent data movement that are impractical for embedded deployments. This work introduces M2RU, a mixed-signal architecture that implements the minion recurrent unit for efficient temporal processing with on-chip continual learning. The architecture integrates weighted-bit streaming, which enables multi-bit digital inputs to be processed in crossbars without high-resolution conversion, and an experience replay mechanism that stabilizes learning under domain shifts. M2RU achieves 15 GOPS at 48.62 mW, corresponding to 312 GOPS per watt, and maintains accuracy within 5 percent of software baselines on sequential MNIST and CIFAR-10 tasks. Compared with a CMOS digital design, the accelerator provides 29X improvement in energy efficiency. Device-aware analysis shows an expected operational lifetime of 12.2 years under continual learning workloads. These results establish M2RU as a scalable and energy-efficient platform for real-time adaptation in edge-level temporal intelligence. - oai:arXiv.org:2512.17299v1 + FedSUM Family: Efficient Federated Learning Methods under Arbitrary Client Participation + https://arxiv.org/abs/2512.18275 + arXiv:2512.18275v1 Announce Type: new +Abstract: Federated Learning (FL) methods are often designed for specific client participation patterns, limiting their applicability in practical deployments. 
We introduce the FedSUM family of algorithms, which supports arbitrary client participation without additional assumptions on data heterogeneity. Our framework models participation variability with two delay metrics, the maximum delay $\tau_{\max}$ and the average delay $\tau_{\text{avg}}$. The FedSUM family comprises three variants: FedSUM-B (basic version), FedSUM (standard version), and FedSUM-CR (communication-reduced version). We provide unified convergence guarantees demonstrating the effectiveness of our approach across diverse participation patterns, thereby broadening the applicability of FL in real-world scenarios. + oai:arXiv.org:2512.18275v1 cs.LG - cs.AI - cs.ET - Mon, 22 Dec 2025 00:00:00 -0500 + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Abdullah M. Zyarah, Dhireesha Kudithipudi + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Runze You, Shi Pu - MatLat: Material Latent Space for PBR Texture Generation - https://arxiv.org/abs/2512.17302 - arXiv:2512.17302v1 Announce Type: new -Abstract: We propose a generative framework for producing high-quality PBR textures on a given 3D mesh. As large-scale PBR texture datasets are scarce, our approach focuses on effectively leveraging the embedding space and diffusion priors of pretrained latent image generative models while learning a material latent space, MatLat, through targeted fine-tuning. Unlike prior methods that freeze the embedding network and thus lead to distribution shifts when encoding additional PBR channels and hinder subsequent diffusion training, we fine-tune the pretrained VAE so that new material channels can be incorporated with minimal latent distribution deviation. We further show that correspondence-aware attention alone is insufficient for cross-view consistency unless the latent-to-image mapping preserves locality. 
To enforce this locality, we introduce a regularization in the VAE fine-tuning that crops latent patches, decodes them, and aligns the corresponding image regions to maintain strong pixel-latent spatial correspondence. Ablation studies and comparison with previous baselines demonstrate that our framework improves PBR texture fidelity and that each component is critical for achieving state-of-the-art performance. - oai:arXiv.org:2512.17302v1 + UniMPR: A Unified Framework for Multimodal Place Recognition with Arbitrary Sensor Configurations + https://arxiv.org/abs/2512.18279 + arXiv:2512.18279v1 Announce Type: new +Abstract: Place recognition is a critical component of autonomous vehicles and robotics, enabling global localization in GPS-denied environments. Recent advances have spurred significant interest in multimodal place recognition (MPR), which leverages complementary strengths of multiple modalities. Despite its potential, most existing MPR methods still face three key challenges: (1) dynamically adapting to arbitrary modality inputs within a unified framework, (2) maintaining robustness with missing or degraded modalities, and (3) generalizing across diverse sensor configurations and setups. In this paper, we propose UniMPR, a unified framework for multimodal place recognition. Using only one trained model, it can seamlessly adapt to any combination of common perceptual modalities (e.g., camera, LiDAR, radar). To tackle the data heterogeneity, we unify all inputs within a polar BEV feature space. Subsequently, the polar BEVs are fed into a multi-branch network to exploit discriminative intra-model and inter-modal features from any modality combinations. To fully exploit the network's generalization capability and robustness, we construct a large-scale training set from multiple datasets and introduce an adaptive label assignment strategy for extensive pre-training. 
Experiments on seven datasets demonstrate that UniMPR achieves state-of-the-art performance under varying sensor configurations, modality combinations, and environmental conditions. Our code will be released at https://github.com/QiZS-BIT/UniMPR. + oai:arXiv.org:2512.18279v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Kyeongmin Yeo, Yunhong Min, Jaihoon Kim, Minhyuk Sung + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhangshuo Qi, Jingyi Xu, Luqi Cheng, Shichen Wen, Yiming Ma, Guangming Xiong - EMAG: Self-Rectifying Diffusion Sampling with Exponential Moving Average Guidance - https://arxiv.org/abs/2512.17303 - arXiv:2512.17303v1 Announce Type: new -Abstract: In diffusion and flow-matching generative models, guidance techniques are widely used to improve sample quality and consistency. Classifier-free guidance (CFG) is the de facto choice in modern systems and achieves this by contrasting conditional and unconditional samples. Recent work explores contrasting negative samples at inference using a weaker model, via strong/weak model pairs, attention-based masking, stochastic block dropping, or perturbations to the self-attention energy landscape. While these strategies refine the generation quality, they still lack reliable control over the granularity or difficulty of the negative samples, and target-layer selection is often fixed. We propose Exponential Moving Average Guidance (EMAG), a training-free mechanism that modifies attention at inference time in diffusion transformers, with a statistics-based, adaptive layer-selection rule. Unlike prior methods, EMAG produces harder, semantically faithful negatives (fine-grained degradations), surfacing difficult failure modes, enabling the denoiser to refine subtle artifacts, boosting the quality and human preference score (HPS) by +0.46 over CFG. 
We further demonstrate that EMAG naturally composes with advanced guidance techniques, such as APG and CADS, further improving HPS. - oai:arXiv.org:2512.17303v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Improving Data Reusability in Interactive Information Retrieval: Insights from the Community + https://arxiv.org/abs/2512.18283 + arXiv:2512.18283v1 Announce Type: new +Abstract: In this study, we conducted semi-structured interviews with 21 IIR researchers to investigate their data reuse practices. This study aims to expand upon current findings by exploring IIR researchers' information-obtaining behaviors regarding data reuse. We identified the information about shared data characteristics that IIR researchers need when evaluating data reusability, as well as the sources they typically consult to obtain this information. We consider this work to be an initial step toward revealing IIR researchers' data reuse practices and identifying what the community needs to do to promote data reuse. We hope that this study, as well as future research, will inspire more individuals to contribute to ongoing efforts aimed at designing standards, infrastructures, and policies, as well as fostering a sustainable culture of data sharing and reuse in this field. + oai:arXiv.org:2512.18283v1 + cs.IR + cs.DL + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Ankit Yadav, Ta Duc Huy, Lingqiao Liu + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Tianji Jiang, Wenqi Li, Jiqun Liu - Deep But Reliable: Advancing Multi-turn Reasoning for Thinking with Images - https://arxiv.org/abs/2512.17306 - arXiv:2512.17306v1 Announce Type: new -Abstract: Recent advances in large Vision-Language Models (VLMs) have exhibited strong reasoning capabilities on complex visual tasks by thinking with images in their Chain-of-Thought (CoT), which is achieved by actively invoking tools to analyze visual inputs rather than merely perceiving them. 
However, existing models often struggle to reflect on and correct themselves when attempting incorrect reasoning trajectories. To address this limitation, we propose DRIM, a model that enables deep but reliable multi-turn reasoning when thinking with images in its multimodal CoT. Our pipeline comprises three stages: data construction, cold-start SFT and RL. Based on a high-resolution image dataset, we construct high-difficulty and verifiable visual question-answer pairs, where solving each task requires multi-turn tool calls to reach the correct answer. In the SFT stage, we collect tool trajectories as cold-start data, guiding a multi-turn reasoning pattern. In the RL stage, we introduce redundancy-penalized policy optimization, which incentivizes the model to develop a self-reflective reasoning pattern. The basic idea is to impose judgment on reasoning trajectories and penalize those that produce incorrect answers without sufficient multi-scale exploration. Extensive experiments demonstrate that DRIM achieves superior performance on visual understanding benchmarks. - oai:arXiv.org:2512.17306v1 + Pyramidal Adaptive Cross-Gating for Multimodal Detection + https://arxiv.org/abs/2512.18291 + arXiv:2512.18291v1 Announce Type: new +Abstract: Object detection in aerial imagery is a critical task in applications such as UAV reconnaissance. Although existing methods have extensively explored feature interaction between different modalities, they commonly rely on simple fusion strategies for feature aggregation. This introduces two critical flaws: it is prone to cross-modal noise and disrupts the hierarchical structure of the feature pyramid, thereby impairing the fine-grained detection of small objects. To address this challenge, we propose the Pyramidal Adaptive Cross-Gating Network (PACGNet), an architecture designed to perform deep fusion within the backbone. 
To this end, we design two core components: the Symmetrical Cross-Gating (SCG) module and the Pyramidal Feature-aware Multimodal Gating (PFMG) module. The SCG module employs a bidirectional, symmetrical "horizontal" gating mechanism to selectively absorb complementary information, suppress noise, and preserve the semantic integrity of each modality. The PFMG module reconstructs the feature hierarchy via a progressive hierarchical gating mechanism. This leverages the detailed features from a preceding, higher-resolution level to guide the fusion at the current, lower-resolution level, effectively preserving fine-grained details as features propagate. Through evaluations conducted on the DroneVehicle and VEDAI datasets, our PACGNet sets a new state-of-the-art benchmark, with mAP50 scores reaching 81.7% and 82.1% respectively. + oai:arXiv.org:2512.18291v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wenhao Yang, Yu Xia, Jinlong Huang, Shiyin Lu, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, Yuanyu Wan, Lijun Zhang + Zidong Gu, Shoufu Tian - Large Language Models as Pok\'emon Battle Agents: Strategic Play and Content Generation - https://arxiv.org/abs/2512.17308 - arXiv:2512.17308v1 Announce Type: new -Abstract: Strategic decision-making in Pok\'emon battles presents a unique testbed for evaluating large language models. Pok\'emon battles demand reasoning about type matchups, statistical trade-offs, and risk assessment, skills that mirror human strategic thinking. This work examines whether Large Language Models (LLMs) can serve as competent battle agents, capable of both making tactically sound decisions and generating novel, balanced game content. We developed a turn-based Pok\'emon battle system where LLMs select moves based on battle state rather than pre-programmed logic. 
The framework captures essential Pok\'emon mechanics: type effectiveness multipliers, stat-based damage calculations, and multi-Pok\'emon team management. Through systematic evaluation across multiple model architectures, we measured win rates, decision latency, type-alignment accuracy, and token efficiency. These results suggest LLMs can function as dynamic game opponents without domain-specific training, offering a practical alternative to reinforcement learning for turn-based strategic games. The dual capability of tactical reasoning and content creation positions LLMs as both players and designers, with implications for procedural generation and adaptive difficulty systems in interactive entertainment. - oai:arXiv.org:2512.17308v1 - cs.AI cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Daksh Jain, Aarya Jain, Ashutosh Desai, Avyakt Verma, Ishan Bhanuka, Pratik Narang, Dhruv Kumar + Measuring Fine-Grained Negotiation Tactics of Humans and LLMs in Diplomacy + https://arxiv.org/abs/2512.18292 + arXiv:2512.18292v1 Announce Type: new +Abstract: The study of negotiation styles dates back to Aristotle's ethos-pathos-logos rhetoric. Prior efforts primarily studied the success of negotiation agents. Here, we shift the focus towards the styles of negotiation strategies. Our focus is the strategic dialogue board game Diplomacy, which affords rich natural language negotiation and measures of game success. We used LLM-as-a-judge to annotate a large human-human set of Diplomacy games for fine-grained negotiation tactics from a sociologically-grounded taxonomy. Using a combination of the It Takes Two and WebDiplomacy datasets, we demonstrate the reliability of our LLM-as-a-Judge framework and show strong correlations between negotiation features and success in the Diplomacy setting. Lastly, we investigate the differences between LLM and human negotiation strategies and show that fine-tuning can steer LLM agents toward more human-like negotiation behaviors. 
+ oai:arXiv.org:2512.18292v1 + cs.CY cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Daksh Jain, Aarya Jain, Ashutosh Desai, Avyakt Verma, Ishan Bhanuka, Pratik Narang, Dhruv Kumar + http://creativecommons.org/licenses/by/4.0/ + Wenkai Li, Lynnette Hui Xian Ng, Andy Liu, Daniel Fried - RecipeMasterLLM: Revisiting RoboEarth in the Era of Large Language Models - https://arxiv.org/abs/2512.17309 - arXiv:2512.17309v1 Announce Type: new -Abstract: RoboEarth was a pioneering initiative in cloud robotics, establishing a foundational framework for robots to share and exchange knowledge about actions, objects, and environments through a standardized knowledge graph. Initially, this knowledge was predominantly hand-crafted by engineers using RDF triples within OWL Ontologies, with updates, such as changes in an object's pose, being asserted by the robot's control and perception routines. However, with the advent and rapid development of Large Language Models (LLMs), we believe that the process of knowledge acquisition can be significantly automated. To this end, we propose RecipeMasterLLM, a high-level planner, that generates OWL action ontologies based on a standardized knowledge graph in response to user prompts. This architecture leverages a fine-tuned LLM specifically trained to understand and produce action descriptions consistent with the RoboEarth standardized knowledge graph. Moreover, during the Retrieval-Augmented Generation (RAG) phase, environmental knowledge is supplied to the LLM to enhance its contextual understanding and improve the accuracy of the generated action descriptions. 
- oai:arXiv.org:2512.17309v1 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 + Power Converter DC Link Ripple and Network Unbalance as Active Constraints in Distribution System Optimal Power Flow + https://arxiv.org/abs/2512.18293 + arXiv:2512.18293v1 Announce Type: new +Abstract: The mitigation of unbalanced grid voltages or currents by voltage source converters results in power ripple on the dc link, and is a key converter design parameter due to hardware or stability considerations. Despite the importance of this issue for system design and operation, the use of Optimal Power Flow (OPF)-based methods capturing the interaction between dc link ripple and converter unbalanced operation has been largely unexplored. In this work, the magnitude of the power ripple is derived for generic multi-terminal converters, then introduced as a bilinear OPF constraint for two-level converter topologies. OPF case studies demonstrate the necessity to model both neutral current and dc link ripple, with tradeoffs between capacitor sizing and leg sizing highlighted for phase current unbalance mitigation applications. Time domain simulations of a grid-connected four-wire voltage source converter verify the accuracy and validity of the algebraic formulation. It is concluded that awareness of dc link ripple impacts and constraints will be of growing importance for distribution system operators. + oai:arXiv.org:2512.18293v1 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Asil Kaan Bozcuoglu, Ziyuan Liu + Matthew Deakin, Rahmat Heidari, Xu Deng - Cryptanalysis of Pseudorandom Error-Correcting Codes - https://arxiv.org/abs/2512.17310 - arXiv:2512.17310v1 Announce Type: new -Abstract: Pseudorandom error-correcting codes (PRC) is a novel cryptographic primitive proposed at CRYPTO 2024. 
Due to the dual capability of pseudorandomness and error correction, PRC has been recognized as a promising foundational component for watermarking AI-generated content. However, the security of PRC has not been thoroughly analyzed, especially with concrete parameters or even in the face of cryptographic attacks. To fill this gap, we present the first cryptanalysis of PRC. We first propose three attacks to challenge the undetectability and robustness assumptions of PRC. Among them, two attacks aim to distinguish PRC-based codewords from plain vectors, and one attack aims to compromise the decoding process of PRC. Our attacks successfully undermine the claimed security guarantees across all parameter configurations. Notably, our attack can detect the presence of a watermark with overwhelming probability at a cost of $2^{22}$ operations. We also validate our approach by attacking real-world large generative models such as DeepSeek and Stable Diffusion. To mitigate our attacks, we further propose three defenses to enhance the security of PRC, including parameter suggestions, implementation suggestions, and constructing a revised key generation algorithm. Our proposed revised key generation function effectively prevents the occurrence of weak keys. However, we highlight that the current PRC-based watermarking scheme still cannot achieve a 128-bit security under our parameter suggestions due to the inherent configurations of large generative models, such as the maximum output length of large language models. - oai:arXiv.org:2512.17310v1 - cs.CR - Mon, 22 Dec 2025 00:00:00 -0500 + AL-GNN: Privacy-Preserving and Replay-Free Continual Graph Learning via Analytic Learning + https://arxiv.org/abs/2512.18295 + arXiv:2512.18295v1 Announce Type: new +Abstract: Continual graph learning (CGL) aims to enable graph neural networks to incrementally learn from a stream of graph structured data without forgetting previously acquired knowledge. 
Existing methods, particularly those based on experience replay, typically store and revisit past graph data to mitigate catastrophic forgetting. However, these approaches pose significant limitations, including privacy concerns and inefficiency. In this work, we propose AL-GNN, a novel framework for continual graph learning that eliminates the need for backpropagation and replay buffers. Instead, AL-GNN leverages principles from analytic learning theory to formulate learning as a recursive least squares optimization process. It maintains and updates model knowledge analytically through closed-form classifier updates and a regularized feature autocorrelation matrix. This design enables efficient one-pass training for each task, and inherently preserves data privacy by avoiding historical sample storage. Extensive experiments on multiple dynamic graph classification benchmarks demonstrate that AL-GNN achieves competitive or superior performance compared to existing methods. For instance, it improves average performance by 10% on CoraFull and reduces forgetting by over 30% on Reddit, while also reducing training time by nearly 50% due to its backpropagation-free design. + oai:arXiv.org:2512.18295v1 cs.LG cs.AI Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Xuling Zhang, Jindong Li, Yifei Zhang, Menglin Yang - CodeDance: A Dynamic Tool-integrated MLLM for Executable Visual Reasoning - https://arxiv.org/abs/2512.17312 - arXiv:2512.17312v1 Announce Type: new -Abstract: Recent releases such as o3 highlight human-like "thinking with images" reasoning that combines structured tool use with stepwise verification, yet most open-source approaches still rely on text-only chains, rigid visual schemas, or single-step pipelines, limiting flexibility, interpretability, and transferability on complex tasks. 
We introduce CodeDance, which explores executable code as a general solver for visual reasoning. Unlike fixed-schema calls (e.g., only predicting bounding-box coordinates), CodeDance defines, composes, and executes code to orchestrate multiple tools, compute intermediate results, and render visual artifacts (e.g., boxes, lines, plots) that support transparent, self-checkable reasoning. To guide this process, we introduce a reward for balanced and adaptive tool-call, which balances exploration with efficiency and mitigates tool overuse. Interestingly, beyond the expected capabilities taught by atomic supervision, we empirically observe novel emergent behaviors during RL training: CodeDance demonstrates novel tool invocations, unseen compositions, and cross-task transfer. These behaviors arise without task-specific fine-tuning, suggesting a general and scalable mechanism of executable visual reasoning. Extensive experiments across reasoning benchmarks (e.g., visual search, math, chart QA) show that CodeDance not only consistently outperforms schema-driven and text-only baselines, but also surpasses advanced closed models such as GPT-4o and larger open-source models. - oai:arXiv.org:2512.17312v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Privacy Data Pricing: A Stackelberg Game Approach + https://arxiv.org/abs/2512.18296 + arXiv:2512.18296v1 Announce Type: new +Abstract: Data markets are emerging as key mechanisms for trading personal and organizational data. Traditional data pricing studies -- such as query-based or arbitrage-free pricing models -- mainly emphasize price consistency and profit maximization but often neglect privacy constraints and strategic interactions. The widespread adoption of differential privacy (DP) introduces a fundamental privacy-utility trade-off: noise protects individuals' privacy but reduces data accuracy and market value. 
This paper develops a Stackelberg game framework for pricing DP data, where the market maker (leader) sets the price function and the data buyer (follower) selects the optimal query precision under DP constraints. We derive the equilibrium strategies for both parties under a balanced pricing function where the pricing decision variable enters linearly into the original pricing model. We obtain closed-form solutions for the optimal variance and pricing level, and determine the boundary conditions for market participation. Furthermore, we extend the analysis to Stackelberg games involving nonlinear power pricing functions. The model bridges DP and economic mechanism design, offering a unified foundation for incentive-compatible and privacy-conscious data pricing in data markets. + oai:arXiv.org:2512.18296v1 + cs.GT + stat.AP + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qi Song, Honglin Li, Yingchen Yu, Haoyi Zhou, Lin Yang, Song Bai, Qi She, Zilong Huang, Yunqing Zhao + http://creativecommons.org/licenses/by/4.0/ + Lijun Bo, Weiqiang Chang - Auxiliary Descriptive Knowledge for Few-Shot Adaptation of Vision-Language Model - https://arxiv.org/abs/2512.17313 - arXiv:2512.17313v1 Announce Type: new -Abstract: Despite the impressive zero-shot capabilities of Vision-Language Models (VLMs), they often struggle in downstream tasks with distribution shifts from the pre-training data. Few-Shot Adaptation (FSA-VLM) has emerged as a key solution, typically using Parameter-Efficient Fine-Tuning (PEFT) to adapt models with minimal data. However, these PEFT methods are constrained by their reliance on fixed, handcrafted prompts, which are often insufficient to understand the semantics of classes. While some studies have proposed leveraging image-induced prompts to provide additional clues for classification, they introduce prohibitive computational overhead at inference. 
Therefore, we introduce Auxiliary Descriptive Knowledge (ADK), a novel framework that efficiently enriches text representations without compromising efficiency. ADK first leverages a Large Language Model to generate a rich set of descriptive prompts for each class offline. These pre-computed features are then deployed in two ways: (1) as Compositional Knowledge, an averaged representation that provides rich semantics, especially beneficial when class names are ambiguous or unfamiliar to the VLM; and (2) as Instance-Specific Knowledge, where a lightweight, non-parametric attention mechanism dynamically selects the most relevant descriptions for a given image. This approach provides two additional types of knowledge alongside the handcrafted prompt, thereby facilitating category distinction across various domains. Also, ADK acts as a parameter-free, plug-and-play component that enhances existing PEFT methods. Extensive experiments demonstrate that ADK consistently boosts the performance of multiple PEFT baselines, setting a new state-of-the-art across various scenarios. - oai:arXiv.org:2512.17313v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Explainable Transformer-CNN Fusion for Noise-Robust Speech Emotion Recognition + https://arxiv.org/abs/2512.18298 + arXiv:2512.18298v1 Announce Type: new +Abstract: Speech Emotion Recognition (SER) systems often degrade in performance when exposed to the unpredictable acoustic interference found in real-world environments. Additionally, the opacity of deep learning models hinders their adoption in trust-sensitive applications. To bridge this gap, we propose a Hybrid Transformer-CNN framework that unifies the contextual modeling of Wav2Vec 2.0 with the spectral stability of 1D-Convolutional Neural Networks. 
Our dual-stream architecture processes raw waveforms to capture long-range temporal dependencies while simultaneously extracting noise-resistant spectral features (MFCC, ZCR, RMSE) via a custom Attentive Temporal Pooling mechanism. We conducted extensive validation across four diverse benchmark datasets: RAVDESS, TESS, SAVEE, and CREMA-D. To rigorously test robustness, we subjected the model to non-stationary acoustic interference using real-world noise profiles from the SAS-KIIT dataset. The proposed framework demonstrates superior generalization and state-of-the-art accuracy across all datasets, significantly outperforming single-branch baselines under realistic environmental interference. Furthermore, we address the ``black-box" problem by integrating SHAP and Score-CAM into the evaluation pipeline. These tools provide granular visual explanations, revealing how the model strategically shifts attention between temporal and spectral cues to maintain reliability in the presence of complex environmental noise. + oai:arXiv.org:2512.18298v1 + cs.SD + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - SuBeen Lee, GilHan Park, WonJun Moon, Hyun Seok Seong, Jae-Pil Heo + http://creativecommons.org/licenses/by/4.0/ + Sudip Chakrabarty, Pappu Bishwas, Rajdeep Chatterjee - Explanation Beyond Intuition: A Testable Criterion for Inherent Explainability - https://arxiv.org/abs/2512.17316 - arXiv:2512.17316v1 Announce Type: new -Abstract: Inherent explainability is the gold standard in Explainable Artificial Intelligence (XAI). However, there is not a consistent definition or test to demonstrate inherent explainability. Work to date either characterises explainability through metrics, or appeals to intuition - "we know it when we see it". We propose a globally applicable criterion for inherent explainability. 
The criterion uses graph theory for representing and decomposing models for structure-local explanation, and recomposing them into global explanations. We form the structure-local explanations as annotations, a verifiable hypothesis-evidence structure that allows for a range of explanatory methods to be used. This criterion matches existing intuitions on inherent explainability, and provides justification for why a large regression model may not be explainable but a sparse neural network could be. We differentiate explainable -- a model that allows for explanation -- and explained -- one that has a verified explanation. Finally, we provide a full explanation of PREDICT -- a Cox proportional hazards model of cardiovascular disease risk, which is in active clinical use in New Zealand. It follows that PREDICT is inherently explainable. This work provides structure to formalise other work on explainability, and allows regulators a flexible but rigorous test that can be used in compliance frameworks. - oai:arXiv.org:2512.17316v1 - cs.LG - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Michael Merry, Pat Riddle, Jim Warren + BARD: Reducing Write Latency of DDR5 Memory by Exploiting Bank-Parallelism + https://arxiv.org/abs/2512.18300 + arXiv:2512.18300v1 Announce Type: new +Abstract: This paper studies the impact of DRAM writes on DDR5-based systems. To efficiently perform DRAM writes, modern systems buffer write requests and try to complete multiple write operations whenever the DRAM mode is switched from read to write. When the DRAM system is performing writes, it is not available to service read requests, thus increasing read latency and reducing performance. We observe that, given the presence of on-die ECC in DDR5 devices, the time to perform a write operation varies significantly: from 1x (for writes to banks of different bankgroups) to 6x (for writes to banks within the same bankgroup) to 24x (for conflicting requests to the same bank). 
If we can orchestrate the write stream to favor write requests that incur lower latency, then we can reduce the stall time from DRAM writes and improve performance. However, for current systems, the write stream is dictated by the cache replacement policy, which makes eviction decisions without being aware of the variable latency of DRAM writes. The key insight of our work is to improve performance by modifying the cache replacement policy to increase bank-parallelism of DRAM writes. + Our paper proposes BARD (Bank-Aware Replacement Decisions), which modifies the cache replacement policy to favor dirty lines that belong to banks without pending writes. We analyze two variants of BARD: BARD-E (Eviction-based), which changes the eviction policy to evict low-cost dirty lines, and BARD-C (Cleansing-Based), which proactively cleans low-cost dirty lines without modifying the eviction decisions. We develop a hybrid policy (BARD-H), which uses a selective combination of both eviction and writeback. Our evaluations across workloads from SPEC2017, LIGRA, STREAM, and Google server traces show that BARD-H improves performance by 4.3% on average and up to 8.5%. BARD requires only 8 bytes of SRAM per LLC slice. + oai:arXiv.org:2512.18300v1 cs.AR Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Suhas Vittal, Moinuddin Qureshi - A Benchmark for Ultra-High-Resolution Remote Sensing MLLMs - https://arxiv.org/abs/2512.17319 - arXiv:2512.17319v1 Announce Type: new -Abstract: Multimodal large language models (MLLMs) demonstrate strong perception and reasoning performance on existing remote sensing (RS) benchmarks. However, most prior benchmarks rely on low-resolution imagery, and some high-resolution benchmarks suffer from flawed reasoning-task designs. 
We show that text-only LLMs can perform competitively with multimodal vision-language models on RS reasoning tasks without access to images, revealing a critical mismatch between current benchmarks and the intended evaluation of visual understanding. To enable faithful assessment, we introduce RSHR-Bench, a super-high-resolution benchmark for RS visual understanding and reasoning. RSHR-Bench contains 5,329 full-scene images with a long side of at least 4,000 pixels, with up to about 3 x 10^8 pixels per image, sourced from widely used RS corpora and UAV collections. We design four task families: multiple-choice VQA, open-ended VQA, image captioning, and single-image evaluation. These tasks cover nine perception categories and four reasoning types, supporting multi-turn and multi-image dialog. To reduce reliance on language priors, we apply adversarial filtering with strong LLMs followed by rigorous human verification. Overall, we construct 3,864 VQA tasks, 3,913 image captioning tasks, and 500 fully human-written or verified single-image evaluation VQA pairs. Evaluations across open-source, closed-source, and RS-specific VLMs reveal persistent performance gaps in super-high-resolution scenarios. Code: https://github.com/Yunkaidang/RSHR - oai:arXiv.org:2512.17319v1 - cs.CV - cs.AI - cs.MM - Mon, 22 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Yunkai Dang, Meiyi Zhu, Donghao Wang, Yizhuo Zhang, Jiacheng Yang, Qi Fan, Yuekun Yang, Wenbin Li, Feng Miao, Yang Gao + InstructNet: A Novel Approach for Multi-Label Instruction Classification through Advanced Deep Learning + https://arxiv.org/abs/2512.18301 + arXiv:2512.18301v1 Announce Type: new +Abstract: People use search engines for various topics and items, from daily essentials to more aspirational and specialized objects. Therefore, search engines have taken over as people's preferred resource. The How To prefix has become familiar and widely used in various search styles to find solutions to particular problems. This search allows people to find sequential instructions by providing detailed guidelines to accomplish specific tasks. 
Categorizing instructional text is also essential for task-oriented learning and creating knowledge bases. This study uses the How To articles to determine the multi-label instruction category. We conduct this work on a dataset comprising 11,121 observations from wikiHow, where each record has multiple categories. To find out the multi-label category meticulously, we employ some transformer-based deep neural architectures, such as Generalized Autoregressive Pretraining for Language Understanding (XLNet), Bidirectional Encoder Representation from Transformers (BERT), etc. In our multi-label instruction classification process, we have evaluated our proposed architectures using accuracy and macro f1-score as the performance metrics. This thorough evaluation showed us much about our strategy's strengths and drawbacks. Specifically, our implementation of the XLNet architecture has demonstrated unprecedented performance, achieving an accuracy of 97.30% and micro and macro average scores of 89.02% and 93%, a noteworthy accomplishment in multi-label classification. This high level of accuracy and macro average score is a testament to the effectiveness of the XLNet architecture in our proposed InstructNet approach. By employing a multi-level strategy in our evaluation process, we have gained a more comprehensive knowledge of the effectiveness of our proposed architectures and identified areas for future improvement and refinement. + oai:arXiv.org:2512.18301v1 cs.CL Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ 10.1371/journal.pone.0311161 PLoS ONE 19(10): e0311161 (2024) Tanjim Taharat Aurpa, Md Shoaib Ahmed, Md Mahbubur Rahman, Md. 
Golam Moazzam - EMMA: Concept Erasure Benchmark with Comprehensive Semantic Metrics and Diverse Categories - https://arxiv.org/abs/2512.17320 - arXiv:2512.17320v1 Announce Type: new -Abstract: The widespread adoption of text-to-image (T2I) generation has raised concerns about privacy, bias, and copyright violations. Concept erasure techniques offer a promising solution by selectively removing undesired concepts from pre-trained models without requiring full retraining. However, these methods are often evaluated on a limited set of concepts, relying on overly simplistic and direct prompts. To test the boundaries of concept erasure techniques, and assess whether they truly remove targeted concepts from model representations, we introduce EMMA, a benchmark that evaluates five key dimensions of concept erasure over 12 metrics. EMMA goes beyond standard metrics like image quality and time efficiency, testing robustness under challenging conditions, including indirect descriptions, visually similar non-target concepts, and potential gender and ethnicity bias, providing a socially aware analysis of method behavior. Using EMMA, we analyze five concept erasure methods across five domains (objects, celebrities, art styles, NSFW, and copyright). Our results show that existing methods struggle with implicit prompts (i.e., generating the erased concept when it is indirectly referenced) and visually similar non-target concepts (i.e., failing to generate non-targeted concepts resembling the erased one), while some amplify gender and ethnicity bias compared to the original model. 
- oai:arXiv.org:2512.17320v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + MORPHEUS: A Multidimensional Framework for Modeling, Measuring, and Mitigating Human Factors in Cybersecurity + https://arxiv.org/abs/2512.18303 + arXiv:2512.18303v1 Announce Type: new +Abstract: Current cybersecurity research increasingly acknowledges the human factor, yet remains fragmented, often treating user vulnerabilities as isolated and static traits. This paper introduces MORPHEUS, a holistic framework that operationalizes human-centric security as a dynamic and interconnected system. Grounded in the Cognition-Affect-Behavior (CAB) model and Attribution Theory, MORPHEUS consolidates 50 human factors influencing susceptibility to major cyberthreats, including phishing, malware, password management, and misconfigurations. Beyond factor identification, the framework systematically maps 295 documented interactions, revealing how cognitive, emotional, behavioral, and socio-organizational processes jointly shape security outcomes, and distills them into twelve recurring interaction mechanisms. MORPHEUS further links theory to practice through an inventory of 99 validated psychometric instruments, enabling empirical assessment and targeted intervention. We illustrate the framework's applicability through concrete operational scenarios, spanning risk diagnosis, training, and interface design. Overall, MORPHEUS provides a rigorous yet actionable foundation for advancing human-centered cybersecurity research and practice. 
+ oai:arXiv.org:2512.18303v1 + cs.CR + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Lu Wei, Yuta Nakashima, Noa Garcia + Giuseppe Desolda, Francesco Greco, Rosa Lanzilotti, Cesare Tucci - Neuro-Symbolic Control with Large Language Models for Language-Guided Spatial Tasks - https://arxiv.org/abs/2512.17321 - arXiv:2512.17321v1 Announce Type: new -Abstract: Although large language models (LLMs) have recently become effective tools for language-conditioned control in embodied systems, instability, slow convergence, and hallucinated actions continue to limit their direct application to continuous control. A modular neuro-symbolic control framework that clearly distinguishes between low-level motion execution and high-level semantic reasoning is proposed in this work. While a lightweight neural delta controller performs bounded, incremental actions in continuous space, a locally deployed LLM interprets symbolic tasks. We assess the suggested method in a planar manipulation setting with spatial relations between objects specified by language. Numerous tasks and local language models, such as Mistral, Phi, and LLaMA-3.2, are used in extensive experiments to compare LLM-only control, neural-only control, and the suggested LLM+DL framework. In comparison to LLM-only baselines, the results show that the neuro-symbolic integration consistently increases both success rate and efficiency, achieving average step reductions exceeding 70% and speedups of up to 8.83x while remaining robust to language model quality. The suggested framework enhances interpretability, stability, and generalization without any need of reinforcement learning or costly rollouts by controlling the LLM to symbolic outputs and allocating uninterpreted execution to a neural controller trained on artificial geometric data. 
These outputs show empirically that neuro-symbolic decomposition offers a scalable and principled way to integrate language understanding with ongoing control, and this approach promotes the creation of dependable and effective language-guided embodied systems. - oai:arXiv.org:2512.17321v1 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Momina Liaqat Ali, Muhammad Abid + Leveraging Peer, Self, and Teacher Assessments for Generative AI-Enhanced Feedback + https://arxiv.org/abs/2512.18306 + arXiv:2512.18306v1 Announce Type: new +Abstract: Providing timely and meaningful feedback remains a persistent challenge in higher education, especially in large courses where teachers must balance formative depth with scalability. Recent advances in Generative Artificial Intelligence (GenAI) offer new opportunities to support feedback processes while maintaining human oversight. This paper presents a study conducted within the AICoFe (AI-based Collaborative Feedback) system, which integrates teacher, peer, and self-assessments of engineering students' oral presentations. Using a validated rubric, 46 evaluation sets were analyzed to examine agreement, correlation, and bias across evaluators. The analyses revealed consistent overall alignment among sources but also systematic variations in scoring behavior, reflecting distinct evaluative perspectives. These findings informed the proposal of an enhanced GenAI model within the AICoFe system, designed to integrate human assessments through weighted input aggregation, bias detection, and context-aware feedback generation. The study contributes empirical evidence and design principles for developing GenAI-based feedback systems that combine data-based efficiency with pedagogical validity and transparency. 
+ oai:arXiv.org:2512.18306v1 + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Momina Liaqat Ali, Muhammad Abid + Alvaro Becerra, Ruth Cobos - Rotterdam artery-vein segmentation (RAV) dataset - https://arxiv.org/abs/2512.17322 - arXiv:2512.17322v1 Announce Type: new -Abstract: Purpose: To provide a diverse, high-quality dataset of color fundus images (CFIs) with detailed artery-vein (A/V) segmentation annotations, supporting the development and evaluation of machine learning algorithms for vascular analysis in ophthalmology. - Methods: CFIs were sampled from the longitudinal Rotterdam Study (RS), encompassing a wide range of ages, devices, and capture conditions. Images were annotated using a custom interface that allowed graders to label arteries, veins, and unknown vessels on separate layers, starting from an initial vessel segmentation mask. Connectivity was explicitly verified and corrected using connected component visualization tools. - Results: The dataset includes 1024x1024-pixel PNG images in three modalities: original RGB fundus images, contrast-enhanced versions, and RGB-encoded A/V masks. Image quality varied widely, including challenging samples typically excluded by automated quality assessment systems, but judged to contain valuable vascular information. - Conclusion: This dataset offers a rich and heterogeneous source of CFIs with high-quality segmentations. It supports robust benchmarking and training of machine learning models under real-world variability in image quality and acquisition settings. - Translational Relevance: By including connectivity-validated A/V masks and diverse image conditions, this dataset enables the development of clinically applicable, generalizable machine learning tools for retinal vascular analysis, potentially improving automated screening and diagnosis of systemic and ocular diseases. 
- oai:arXiv.org:2512.17322v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Embedded Safety-Aligned Intelligence via Differentiable Internal Alignment Embeddings + https://arxiv.org/abs/2512.18309 + arXiv:2512.18309v1 Announce Type: new +Abstract: We introduce Embedded Safety-Aligned Intelligence (ESAI), a theoretical framework for multi-agent reinforcement learning that embeds alignment constraints directly into agents' internal representations using differentiable internal alignment embeddings. Unlike external reward shaping or post-hoc safety constraints, internal alignment embeddings are learned latent variables that predict externalized harm through counterfactual reasoning and modulate policy updates toward harm reduction through attention and graph-based propagation. + The ESAI framework integrates four mechanisms: differentiable counterfactual alignment penalties computed from soft reference distributions, alignment-weighted perceptual attention, Hebbian associative memory supporting temporal credit assignment, and similarity-weighted graph diffusion with bias mitigation controls. We analyze stability conditions for bounded internal embeddings under Lipschitz continuity and spectral constraints, discuss computational complexity, and examine theoretical properties including contraction behavior and fairness-performance tradeoffs. + This work positions ESAI as a conceptual contribution to differentiable alignment mechanisms in multi-agent systems. We identify open theoretical questions regarding convergence guarantees, embedding dimensionality, and extension to high-dimensional environments. Empirical evaluation is left to future work. 
+ oai:arXiv.org:2512.18309v1 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Jose Vargas Quiros, Bart Liefers, Karin van Garderen, Jeroen Vermeulen, Eyened Reading Center, Caroline Klaver + Harsh Rathva, Ojas Srivastava, Pruthwik Mishra - DESSERT: Diffusion-based Event-driven Single-frame Synthesis via Residual Training - https://arxiv.org/abs/2512.17323 - arXiv:2512.17323v1 Announce Type: new -Abstract: Video frame prediction extrapolates future frames from previous frames, but suffers from prediction errors in dynamic scenes due to the lack of information about the next frame. Event cameras address this limitation by capturing per-pixel brightness changes asynchronously with high temporal resolution. Prior research on event-based video frame prediction has leveraged motion information from event data, often by predicting event-based optical flow and reconstructing frames via pixel warping. However, such approaches introduce holes and blurring when pixel displacement is inaccurate. To overcome this limitation, we propose DESSERT, a diffusion-based event-driven single-frame synthesis framework via residual training. Leveraging a pre-trained Stable Diffusion model, our method is trained on inter-frame residuals to ensure temporal consistency. The training pipeline consists of two stages: (1) an Event-to-Residual Alignment Variational Autoencoder (ER-VAE) that aligns the event frame between anchor and target frames with the corresponding residual, and (2) a diffusion model that denoises the residual latent conditioned on event data. Furthermore, we introduce Diverse-Length Temporal (DLT) augmentation, which improves robustness by training on frame segments of varying temporal lengths. 
Experimental results demonstrate that our method outperforms existing event-based reconstruction, image-based video frame prediction, event-based video frame prediction, and one-sided event-based video frame interpolation methods, producing sharper and more temporally consistent frame synthesis. - oai:arXiv.org:2512.17323v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Color, Sentiment, and Structure: A Comparative Study of Instagram Marketing Across Economies + https://arxiv.org/abs/2512.18310 + arXiv:2512.18310v1 Announce Type: new +Abstract: Instagram has become a key platform for global food brands to engage diverse audiences through visual storytelling. While previous research emphasizes content-based strategies, this study bridges the gap between content and context by examining how aesthetic elements -- such as dominant image colors and caption sentiment -- and structural factors like GDP, population, and obesity rates collectively shape consumer engagement. Using a multimodal analysis of Instagram posts from major food outlets across developed and developing countries, we assess how color schemes and sentiment influence key engagement metrics. We then extend this analysis with regression modeling to evaluate how these macroeconomic and demographic variables moderate engagement. Our results reveal that engagement patterns vary widely across regions. In developing countries, color combinations like off-white and green significantly enhance interactions, and GDP is a strong positive predictor of engagement. Conversely, in developed countries, a larger population boosts engagement while higher GDP correlates with reduced user attention. Obesity rates show a mixed influence, moderately enhancing likes in some regions while lowering comments in others. These findings highlight the critical need for localized digital marketing strategies that align content design with structural realities to optimize audience engagement. 
+ oai:arXiv.org:2512.18310v1 + cs.CY + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Jiyun Kong, Jun-Hyuk Kim, Jong-Seok Lee + Ritesh Konka, Pranali Kurani - Task Schema and Binding: A Double Dissociation Study of In-Context Learning - https://arxiv.org/abs/2512.17325 - arXiv:2512.17325v1 Announce Type: new -Abstract: We provide causal mechanistic validation that in-context learning (ICL) decomposes into two separable mechanisms: Task Schema (abstract task type recognition) and Binding (specific input-output associations). Through activation patching experiments across 9 models from 7 Transformer families plus Mamba (370M-13B parameters), we establish three key findings: - 1. Double dissociation: Task Schema transfers at 100% via late MLP patching; Binding transfers at 62% via residual stream patching -- proving separable mechanisms - 2. Prior-Schema trade-off: Schema reliance inversely correlates with prior knowledge (Spearman rho = -0.596, p < 0.001, N=28 task-model pairs) - 3. Architecture generality: The mechanism operates across all tested architectures including the non-Transformer Mamba - These findings offer a mechanistic account of the ICL puzzle that contrasts with prior views treating ICL as a monolithic mechanism (whether retrieval-based, gradient descent-like, or purely Bayesian). By establishing that Schema and Binding are neurally dissociable -- not merely behavioral modes -- we provide causal evidence for dual-process theories of ICL. Models rely on Task Schema when prior knowledge is absent, but prior knowledge interferes through attentional mis-routing (72.7% recency bias) rather than direct output competition (0%). This explains why arbitrary mappings succeed (zero prior leads to full Schema reliance) while factual overrides fail -- and reveals that the true bottleneck is attentional, not output-level. 
Practical implications: Understanding these dual mechanisms enables more efficient prompt engineering -- reliable schema transfer reduces required demonstrations for novel tasks, while prior-aware design can mitigate the 38% binding failure rate in high-prior scenarios, improving ICL system reliability in production deployments. - oai:arXiv.org:2512.17325v1 - cs.LG - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + Monitoring Monitorability + https://arxiv.org/abs/2512.18311 + arXiv:2512.18311v1 Announce Type: new +Abstract: Observability into the decision making of modern AI systems may be required to safely deploy increasingly capable agents. Monitoring the chain-of-thought (CoT) of today's reasoning models has proven effective for detecting misbehavior. However, this "monitorability" may be fragile under different training procedures, data sources, or even continued system scaling. To measure and track monitorability, we propose three evaluation archetypes (intervention, process, and outcome-property) and a new monitorability metric, and introduce a broad evaluation suite. We demonstrate that these evaluations can catch simple model organisms trained to have obfuscated CoTs, and that CoT monitoring is more effective than action-only monitoring in practical settings. We compare the monitorability of various frontier models and find that most models are fairly, but not perfectly, monitorable. We also evaluate how monitorability scales with inference-time compute, reinforcement learning optimization, and pre-training model size. We find that longer CoTs are generally more monitorable and that RL optimization does not materially decrease monitorability even at the current frontier scale. Notably, we find that for a model at a low reasoning effort, we could instead deploy a smaller model at a higher reasoning effort (thereby matching capabilities) and obtain a higher monitorability, albeit at a higher overall inference compute cost. 
We further investigate agent-monitor scaling trends and find that scaling a weak monitor's test-time compute when monitoring a strong agent increases monitorability. Giving the weak monitor access to CoT not only improves monitorability, but it steepens the monitor's test-time compute to monitorability scaling trend. Finally, we show we can improve monitorability by asking models follow-up questions and giving their follow-up CoT to the monitor. + oai:arXiv.org:2512.18311v1 + cs.AI + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Chaeha Kim + Melody Y. Guan, Miles Wang, Micah Carroll, Zehao Dou, Annie Y. Wei, Marcus Williams, Benjamin Arnav, Joost Huizinga, Ian Kivlichan, Mia Glaese, Jakub Pachocki, Bowen Baker - Democratizing Pathology Co-Pilots: An Open Pipeline and Dataset for Whole-Slide Vision-Language Modelling - https://arxiv.org/abs/2512.17326 - arXiv:2512.17326v1 Announce Type: new -Abstract: Vision-language models (VLMs) have the potential to become co-pilots for pathologists. However, most VLMs either focus on small regions of interest within whole-slide images, provide only static slide-level outputs, or rely on data that is not publicly available, limiting reproducibility. Furthermore, training data containing WSIs paired with detailed clinical reports is scarce, restricting progress toward transparent and generalisable VLMs. We address these limitations with three main contributions. First, we introduce Polysome, a standardised tool for synthetic instruction generation. Second, we apply Polysome to the public HISTAI dataset, generating HISTAI-Instruct, a large whole-slide instruction tuning dataset spanning 24,259 slides and over 1.1 million instruction-response pairs. Finally, we use HISTAI-Instruct to train ANTONI-{\alpha}, a VLM capable of visual-question answering (VQA). 
We show that ANTONI-{\alpha} outperforms MedGemma on WSI-level VQA tasks of tissue identification, neoplasm detection, and differential diagnosis. We also compare the performance of multiple incarnations of ANTONI-{\alpha} trained with different amounts of data. All methods, data, and code are publicly available. - oai:arXiv.org:2512.17326v1 + MatE: Material Extraction from Single-Image via Geometric Prior + https://arxiv.org/abs/2512.18312 + arXiv:2512.18312v1 Announce Type: new +Abstract: The creation of high-fidelity, physically-based rendering (PBR) materials remains a bottleneck in many graphics pipelines, typically requiring specialized equipment and expert-driven post-processing. To democratize this process, we present MatE, a novel method for generating tileable PBR materials from a single image taken under unconstrained, real-world conditions. Given an image and a user-provided mask, MatE first performs coarse rectification using an estimated depth map as a geometric prior, and then employs a dual-branch diffusion model. Leveraging a learned consistency from rotation-aligned and scale-aligned training data, this model further rectifies residual distortions from the coarse result and translates it into a complete set of material maps, including albedo, normal, roughness and height. Our framework achieves invariance to the unknown illumination and perspective of the input image, allowing for the recovery of intrinsic material properties from casual captures. Through comprehensive experiments on both synthetic and real-world data, we demonstrate the efficacy and robustness of our approach, enabling users to create realistic materials from real-world images. 
+ oai:arXiv.org:2512.18312v1 cs.CV Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Zeyu Zhang, Wei Zhai, Jian Yang, Yang Cao - SynergyWarpNet: Attention-Guided Cooperative Warping for Neural Portrait Animation - https://arxiv.org/abs/2512.17331 - arXiv:2512.17331v1 Announce Type: new -Abstract: Recent advances in neural portrait animation have demonstrated remarkable potential for applications in virtual avatars, telepresence, and digital content creation. However, traditional explicit warping approaches often struggle with accurate motion transfer or recovering missing regions, while recent attention-based warping methods, though effective, frequently suffer from high complexity and weak geometric grounding. To address these issues, we propose SynergyWarpNet, an attention-guided cooperative warping framework designed for high-fidelity talking head synthesis. Given a source portrait, a driving image, and a set of reference images, our model progressively refines the animation in three stages. First, an explicit warping module performs coarse spatial alignment between the source and driving image using 3D dense optical flow. Next, a reference-augmented correction module leverages cross-attention across 3D keypoints and texture features from multiple reference images to semantically complete occluded or distorted regions. Finally, a confidence-guided fusion module integrates the warped outputs with spatially-adaptive fusion, using a learned confidence map to balance structural alignment and visual consistency. Comprehensive evaluations on benchmark datasets demonstrate state-of-the-art performance. 
- oai:arXiv.org:2512.17331v1 + MatSpray: Fusing 2D Material World Knowledge on 3D Geometry + https://arxiv.org/abs/2512.18314 + arXiv:2512.18314v1 Announce Type: new +Abstract: Manual modeling of material parameters and 3D geometry is a time consuming yet essential task in the gaming and film industries. While recent advances in 3D reconstruction have enabled accurate approximations of scene geometry and appearance, these methods often fall short in relighting scenarios due to the lack of precise, spatially varying material parameters. At the same time, diffusion models operating on 2D images have shown strong performance in predicting physically based rendering (PBR) properties such as albedo, roughness, and metallicity. However, transferring these 2D material maps onto reconstructed 3D geometry remains a significant challenge. We propose a framework for fusing 2D material data into 3D geometry using a combination of novel learning-based and projection-based approaches. We begin by reconstructing scene geometry via Gaussian Splatting. From the input images, a diffusion model generates 2D maps for albedo, roughness, and metallic parameters. Any existing diffusion model that can convert images or videos to PBR materials can be applied. The predictions are further integrated into the 3D representation either by optimizing an image-based loss or by directly projecting the material parameters onto the Gaussians using Gaussian ray tracing. To enhance fine-scale accuracy and multi-view consistency, we further introduce a light-weight neural refinement step (Neural Merger), which takes ray-traced material features as input and produces detailed adjustments. Our results demonstrate that the proposed methods outperform existing techniques in both quantitative metrics and perceived visual realism. 
This enables more accurate, relightable, and photorealistic renderings from reconstructed scenes, significantly improving the realism and efficiency of asset creation workflows in content production pipelines. + oai:arXiv.org:2512.18314v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + cs.GR + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shihang Li, Zhiqiang Gong, Minming Ye, Yue Gao, Wen Yao + http://creativecommons.org/licenses/by/4.0/ + Philipp Langsteiner, Jan-Niklas Dihlmann, Hendrik P. A. Lensch - Bridging Natural Language and Formal Specification--Automated Translation of Software Requirements to LTL via Hierarchical Semantics Decomposition Using LLMs - https://arxiv.org/abs/2512.17334 - arXiv:2512.17334v1 Announce Type: new -Abstract: Automating the translation of natural language (NL) software requirements into formal specifications remains a critical challenge in scaling formal verification practices to industrial settings, particularly in safety-critical domains. Existing approaches, both rule-based and learning-based, face significant limitations. While large language models (LLMs) like GPT-4o demonstrate proficiency in semantic extraction, they still encounter difficulties in addressing the complexity, ambiguity, and logical depth of real-world industrial requirements. In this paper, we propose Req2LTL, a modular framework that bridges NL and Linear Temporal Logic (LTL) through a hierarchical intermediate representation called OnionL. Req2LTL leverages LLMs for semantic decomposition and combines them with deterministic rule-based synthesis to ensure both syntactic validity and semantic fidelity. Our comprehensive evaluation demonstrates that Req2LTL achieves 88.4% semantic accuracy and 100% syntactic correctness on real-world aerospace requirements, significantly outperforming existing methods. 
- oai:arXiv.org:2512.17334v1 - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 + Trustworthy and Explainable Deep Reinforcement Learning for Safe and Energy-Efficient Process Control: A Use Case in Industrial Compressed Air Systems + https://arxiv.org/abs/2512.18317 + arXiv:2512.18317v1 Announce Type: new +Abstract: This paper presents a trustworthy reinforcement learning approach for the control of industrial compressed air systems. We develop a framework that enables safe and energy-efficient operation under realistic boundary conditions and introduce a multi-level explainability pipeline combining input perturbation tests, gradient-based sensitivity analysis, and SHAP (SHapley Additive exPlanations) feature attribution. An empirical evaluation across multiple compressor configurations shows that the learned policy is physically plausible, anticipates future demand, and consistently respects system boundaries. Compared to the installed industrial controller, the proposed approach reduces unnecessary overpressure and achieves energy savings of approximately 4\,\% without relying on explicit physics models. The results further indicate that system pressure and forecast information dominate policy decisions, while compressor-level inputs play a secondary role. Overall, the combination of efficiency gains, predictive behavior, and transparent validation supports the trustworthy deployment of reinforcement learning in industrial energy systems. 
+ oai:arXiv.org:2512.18317v1 + cs.LG + cs.AI + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Zhi Ma, Cheng Wen, Zhexin Su, Xiao Liang, Cong Tian, Shengchao Qin, Mengfei Yang + Vincent Bezold, Patrick Wagner, Jakob Hofmann, Marco Huber, Alexander Sauer - Multi-level distortion-aware deformable network for omnidirectional image super-resolution - https://arxiv.org/abs/2512.17343 - arXiv:2512.17343v1 Announce Type: new -Abstract: As augmented reality and virtual reality applications gain popularity, image processing for OmniDirectional Images (ODIs) has attracted increasing attention. OmniDirectional Image Super-Resolution (ODISR) is a promising technique for enhancing the visual quality of ODIs. Before performing super-resolution, ODIs are typically projected from a spherical surface onto a plane using EquiRectangular Projection (ERP). This projection introduces latitude-dependent geometric distortion in ERP images: distortion is minimal near the equator but becomes severe toward the poles, where image content is stretched across a wider area. However, existing ODISR methods have limited sampling ranges and feature extraction capabilities, which hinder their ability to capture distorted patterns over large areas. To address this issue, we propose a novel Multi-level Distortion-aware Deformable Network (MDDN) for ODISR, designed to expand the sampling range and receptive field. Specifically, the feature extractor in MDDN comprises three parallel branches: a deformable attention mechanism (serving as the dilation=1 path) and two dilated deformable convolutions with dilation rates of 2 and 3. This architecture expands the sampling range to include more distorted patterns across wider areas, generating dense and comprehensive features that effectively capture geometric distortions in ERP images. 
The representations extracted from these deformable feature extractors are adaptively fused in a multi-level feature fusion module. Furthermore, to reduce computational cost, a low-rank decomposition strategy is applied to dilated deformable convolutions. Extensive experiments on publicly available datasets demonstrate that MDDN outperforms state-of-the-art methods, underscoring its effectiveness and superiority in ODISR. - oai:arXiv.org:2512.17343v1 + Asynchronous Pipeline Parallelism for Real-Time Multilingual Lip Synchronization in Video Communication Systems + https://arxiv.org/abs/2512.18318 + arXiv:2512.18318v1 Announce Type: new +Abstract: This paper introduces a parallel and asynchronous Transformer framework designed for efficient and accurate multilingual lip synchronization in real-time video conferencing systems. The proposed architecture integrates translation, speech processing, and lip-synchronization modules within a pipeline-parallel design that enables concurrent module execution through message-queue-based decoupling, reducing end-to-end latency by up to 3.1 times compared to sequential approaches. To enhance computational efficiency and throughput, the inference workflow of each module is optimized through low-level graph compilation, mixed-precision quantization, and hardware-accelerated kernel fusion. These optimizations provide substantial gains in efficiency while preserving model accuracy and visual quality. In addition, a context-adaptive silence-detection component segments the input speech stream at semantically coherent boundaries, improving translation consistency and temporal alignment across languages. Experimental results demonstrate that the proposed parallel architecture outperforms conventional sequential pipelines in processing speed, synchronization stability, and resource utilization. 
The modular, message-oriented design makes this work applicable to resource-constrained IoT communication scenarios including telemedicine, multilingual kiosks, and remote assistance systems. Overall, this work advances the development of low-latency, resource-efficient multimodal communication frameworks for next-generation AIoT systems. + oai:arXiv.org:2512.18318v1 + cs.MM + cs.AI cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + cs.DC + cs.NI + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Cuixin Yang, Rongkang Dong, Kin-Man Lam, Yuhang Zhang, Guoping Qiu + Eren Caglar, Amirkia Rafiei Oskooei, Mehmet Kutanoglu, Mustafa Keles, Mehmet S. Aktas - Governance-Aware Hybrid Fine-Tuning for Multilingual Large Language Models - https://arxiv.org/abs/2512.17344 - arXiv:2512.17344v1 Announce Type: new -Abstract: We present a governance-aware hybrid fine-tuning framework for multilingual, low-resource adaptation of large language models. The core algorithm combines gradient-aligned low-rank updates with structured orthogonal transformations through layer-wise mixing and introduces unitary constraints in selected sub-layers to stabilize deep optimization. In tandem with lightweight, label-free data governance steps, including language identification, near-duplicate removal, and quality filtering, the framework targets accuracy, calibration, and cross-language parity under tight compute budgets. Across XNLI and FLORES, the hybrid approach delivers consistent gains over strong PEFT baselines while maintaining directional balance and improving probability calibration, as shown in Tables II and III. It is more resilient to lightweight orthographic variants, as shown in Table IV, and benefits additively from simple governance steps, as shown in Table V. Training footprint measurements indicate modest overhead and a favorable cost-quality frontier, as shown in Table VI and Figure 2. 
Together, these results show that hybrid and unitary PEFT provide a stable and accessible path to resource-efficient multilingual adaptation when paired with practical data governance. - oai:arXiv.org:2512.17344v1 + CTTA-T: Continual Test-Time Adaptation for Text Understanding via Teacher-Student with a Domain-aware and Generalized Teacher + https://arxiv.org/abs/2512.18321 + arXiv:2512.18321v1 Announce Type: new +Abstract: Text understanding often suffers from domain shifts. To handle testing domains, domain adaptation (DA) is trained to adapt to a fixed and observed testing domain; a more challenging paradigm, test-time adaptation (TTA), cannot access the testing domain during training and online adapts to the testing samples during testing, where the samples are from a fixed domain. We aim to explore a more practical and underexplored scenario, continual test-time adaptation (CTTA) for text understanding, which involves a sequence of testing (unobserved) domains in testing. Current CTTA methods struggle to reduce error accumulation over domains and to enhance generalization to unobserved domains: 1) Noise-filtering reduces accumulated errors but discards useful information, and 2) accumulating historical domains enhances generalization, but it is hard to achieve adaptive accumulation. In this paper, we propose a CTTA-T (continual test-time adaptation for text understanding) framework adaptable to evolving target domains: it adopts a teacher-student framework, where the teacher is domain-aware and generalized for evolving domains. To improve teacher predictions, we propose a refine-then-filter strategy based on dropout-driven consistency, which calibrates predictions and removes unreliable guidance. For the adaptation-generalization trade-off, we construct a domain-aware teacher by dynamically accumulating cross-domain semantics via incremental PCA, which continuously tracks domain shifts. Experiments show that CTTA-T outperforms the baselines. 
+ oai:arXiv.org:2512.18321v1 cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - 2025 IEEE International Conference on Big Data - Haomin Qi, Chengbo Huang, Zihan Dai, Yunkai Gao + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Tianlun Liu, Zhiliang Tian, Zhen Huang, Xingzhi Zhou, Wanlong Yu, Tianle Liu, Feng Liu, Dongsheng Li - Stakeholder Suite: A Unified AI Framework for Mapping Actors, Topics and Arguments in Public Debates - https://arxiv.org/abs/2512.17347 - arXiv:2512.17347v1 Announce Type: new -Abstract: Public debates surrounding infrastructure and energy projects involve complex networks of stakeholders, arguments, and evolving narratives. Understanding these dynamics is crucial for anticipating controversies and informing engagement strategies, yet existing tools in media intelligence largely rely on descriptive analytics with limited transparency. This paper presents Stakeholder Suite, a framework deployed in operational contexts for mapping actors, topics, and arguments within public debates. The system combines actor detection, topic modeling, argument extraction and stance classification in a unified pipeline. Tested on multiple energy infrastructure projects as a case study, the approach delivers fine-grained, source-grounded insights while remaining adaptable to diverse domains. The framework achieves strong retrieval precision and stance accuracy, producing arguments judged relevant in 75% of pilot use cases. Beyond quantitative metrics, the tool has proven effective for operational use: helping project teams visualize networks of influence, identify emerging controversies, and support evidence-based decision-making. 
- oai:arXiv.org:2512.17347v1 cs.CL Mon, 22 Dec 2025 00:00:00 -0500 + LIR$^3$AG: A Lightweight Rerank Reasoning Strategy Framework for Retrieval-Augmented Generation + https://arxiv.org/abs/2512.18329 + arXiv:2512.18329v1 Announce Type: new +Abstract: Retrieval-Augmented Generation (RAG) effectively enhances Large Language Models (LLMs) by incorporating retrieved external knowledge into the generation process. Reasoning models improve LLM performance in multi-hop QA tasks, which require integrating and reasoning over multiple pieces of evidence across different documents to answer a complex question. However, they often introduce substantial computational costs, including increased token consumption and inference latency. To better understand and mitigate this trade-off, we conduct a comprehensive study of reasoning strategies for reasoning models in RAG multi-hop QA tasks. Our findings reveal that reasoning models adopt structured strategies to integrate retrieved and internal knowledge, primarily following two modes: Context-Grounded Reasoning, which relies directly on retrieved content, and Knowledge-Reconciled Reasoning, which resolves conflicts or gaps using internal knowledge. To this end, we propose a novel Lightweight Rerank Reasoning Strategy Framework for RAG (LiR$^3$AG) to enable non-reasoning models to transfer reasoning strategies by restructuring retrieved evidence into coherent reasoning chains. LiR$^3$AG reduces output-token overhead by 98% on average and inference time by 58.6%, while improving the 8B non-reasoning model's F1 performance by 6.2% to 22.5%, surpassing the performance of the 32B reasoning model in RAG and offering a practical and efficient path forward for RAG systems. 
+ oai:arXiv.org:2512.18329v1 cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mohamed Chenene, Jeanne Rouhier, Jean Dani\'elou, Mihir Sarkar, Elena Cabrio + Guo Chen, Junjie Huang, Huaijin Xie, Fei Sun, Tao Jia - Flying in Clutter on Monocular RGB by Learning in 3D Radiance Fields with Domain Adaptation - https://arxiv.org/abs/2512.17349 - arXiv:2512.17349v1 Announce Type: new -Abstract: Modern autonomous navigation systems predominantly rely on lidar and depth cameras. However, a fundamental question remains: Can flying robots navigate in clutter using solely monocular RGB images? Given the prohibitive costs of real-world data collection, learning policies in simulation offers a promising path. Yet, deploying such policies directly in the physical world is hindered by the significant sim-to-real perception gap. Thus, we propose a framework that couples the photorealism of 3D Gaussian Splatting (3DGS) environments with Adversarial Domain Adaptation. By training in high-fidelity simulation while explicitly minimizing feature discrepancy, our method ensures the policy relies on domain-invariant cues. Experimental results demonstrate that our policy achieves robust zero-shot transfer to the physical world, enabling safe and agile flight in unstructured environments with varying illumination. - oai:arXiv.org:2512.17349v1 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 + A two-stream network with global-local feature fusion for bone age assessment + https://arxiv.org/abs/2512.18331 + arXiv:2512.18331v1 Announce Type: new +Abstract: Bone Age Assessment (BAA) is a widely used clinical technique that can accurately reflect an individual's growth and development level, as well as maturity. In recent years, although deep learning has advanced the field of bone age assessment, existing methods face challenges in efficiently balancing global features and local skeletal details. 
This study aims to develop an automated bone age assessment system based on a two-stream deep learning architecture to achieve higher accuracy in bone age assessment. We propose the BoNet+ model incorporating global and local feature extraction channels. A Transformer module is introduced into the global feature extraction channel to enhance the ability to extract global features through a multi-head self-attention mechanism. An RFAConv module is incorporated into the local feature extraction channel to generate adaptive attention maps within multiscale receptive fields, enhancing local feature extraction capabilities. Global and local features are concatenated along the channel dimension and optimized by an Inception-V3 network. The proposed method has been validated on the Radiological Society of North America (RSNA) and Radiological Hand Pose Estimation (RHPE) test datasets, achieving mean absolute errors (MAEs) of 3.81 and 5.65 months, respectively. These results are comparable to the state-of-the-art. The BoNet+ model reduces the clinical workload and achieves automatic, high-precision, and more objective bone age assessment. + oai:arXiv.org:2512.18331v1 + cs.CV cs.LG Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Qiong Lou, Han Yang, Fang Lu - Beyond Semantic Features: Pixel-level Mapping for Generalized AI-Generated Image Detection - https://arxiv.org/abs/2512.17350 - arXiv:2512.17350v1 Announce Type: new -Abstract: The rapid evolution of generative technologies necessitates reliable methods for detecting AI-generated images. A critical limitation of current detectors is their failure to generalize to images from unseen generative models, as they often overfit to source-specific semantic cues rather than learning universal generative artifacts. 
To overcome this, we introduce a simple yet remarkably effective pixel-level mapping pre-processing step to disrupt the pixel value distribution of images and break the fragile, non-essential semantic patterns that detectors commonly exploit as shortcuts. This forces the detector to focus on more fundamental and generalizable high-frequency traces inherent to the image generation process. Through comprehensive experiments on GAN and diffusion-based generators, we show that our approach significantly boosts the cross-generator performance of state-of-the-art detectors. Extensive analysis further verifies our hypothesis that the disruption of semantic cues is the key to generalization. - oai:arXiv.org:2512.17350v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Implementing Transport Coding in OMNeT++ for Message Delay Reduction + https://arxiv.org/abs/2512.18332 + arXiv:2512.18332v1 Announce Type: new +Abstract: Transport coding reduces message delay in packet-switched networks by introducing controlled redundancy at the transport layer: $k$ original packets are encoded into $n\ge k$ coded packets, and the message is reconstructed after the first $k$ successful deliveries, effectively shifting latency from the maximum packet delay to the $k$-th order statistic. We present a concise, reproducible discrete-event implementation of transport coding in OMNeT++, including a multi-hop Kleinrock-type network, FIFO queues, exponential service and link delays, and explicit receiver-side reconstruction that records message delay and deadline violations. Using paired uncoded ($n{=}k$) and coded ($n{>}k$) configurations at the same message generation rate, we compare delay, reliability, and saturation effects across code rates and input loads. Simulation results show consistent reductions of average delay and late-delivery probability for moderate redundancy, while keeping the saturation throughput close to the uncoded baseline. 
The proposed model provides a transparent bridge between analytical transport-coding formulas and executable simulation for tuning redundancy in low-latency services. + oai:arXiv.org:2512.18332v1 + cs.IT + cs.NI + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Chenming Zhou, Jiaan Wang, Yu Li, Lei Li, Juan Cao, Sheng Tang + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Ilya Petrovanov, Anton Sergeev - Physics of Language Models: Part 4.1, Architecture Design and the Magic of Canon Layers - https://arxiv.org/abs/2512.17351 - arXiv:2512.17351v1 Announce Type: new -Abstract: Understanding architectural differences in language models is challenging, especially at academic-scale pretraining (e.g., 1.3B parameters, 100B tokens), where results are often dominated by noise and randomness. To overcome this, we introduce controlled synthetic pretraining tasks that isolate and evaluate core model capabilities. Within this framework, we discover CANON LAYERS: lightweight architectural components -- named after the musical term "canon" -- that promote horizontal information flow across neighboring tokens. Canon layers compute weighted sums of nearby token representations and integrate seamlessly into Transformers, linear attention, state-space models, or any sequence architecture. - We present 12 key results. This includes how Canon layers enhance reasoning depth (e.g., by $2\times$), reasoning breadth, knowledge manipulation, etc. They lift weak architectures like NoPE to match RoPE, and linear attention to rival SOTA linear models like Mamba2/GDN -- validated both through synthetic tasks and real-world academic-scale pretraining. This synthetic playground offers an economical, principled path to isolate core model capabilities often obscured at academic scales. 
Equipped with infinite high-quality data, it may even PREDICT how future architectures will behave as training pipelines improve -- e.g., through better data curation or RL-based post-training -- unlocking deeper reasoning and hierarchical inference. - oai:arXiv.org:2512.17351v1 - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + Reinforcement Learning Position Control of a Quadrotor Using Soft Actor-Critic (SAC) + https://arxiv.org/abs/2512.18333 + arXiv:2512.18333v1 Announce Type: new +Abstract: This paper proposes a new Reinforcement Learning (RL) based control architecture for quadrotors. With the literature focusing on controlling the four rotors' RPMs directly, this paper aims to control the quadrotor's thrust vector. The RL agent computes the percentage of overall thrust along the quadrotor's z-axis along with the desired Roll ($\phi$) and Pitch ($\theta$) angles. The agent then sends the calculated control signals along with the current quadrotor's Yaw angle ($\psi$) to an attitude PID controller. The PID controller then maps the control signals to motor RPMs. The Soft Actor-Critic algorithm, a model-free off-policy stochastic RL algorithm, was used to train the RL agents. Training results show the faster training time of the proposed thrust vector controller in comparison to the conventional RPM controllers. Simulation results show smoother and more accurate path-following for the proposed thrust vector controller. 
+ oai:arXiv.org:2512.18333v1 cs.RO cs.AI cs.LG Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ 10.1109/NILES63360.2024.10753187 2024 IEEE 6th Novel Intelligent and Leading Emerging Sciences Conference (NILES) Youssef Mahran, Zeyad Gamal, Ayman El-Badawy - Adaptive Graph Pruning with Sudden-Events Evaluation for Traffic Prediction using Online Semi-Decentralized ST-GNNs - https://arxiv.org/abs/2512.17352 - arXiv:2512.17352v1 Announce Type: new -Abstract: Spatio-Temporal Graph Neural Networks (ST-GNNs) are well-suited for processing high-frequency data streams from geographically distributed sensors in smart mobility systems. However, their deployment at the edge across distributed compute nodes (cloudlets) creates substantial communication overhead due to repeated transmission of overlapping node features between neighbouring cloudlets. To address this, we propose an adaptive pruning algorithm that dynamically filters redundant neighbour features while preserving the most informative spatial context for prediction. The algorithm adjusts pruning rates based on recent model performance, allowing each cloudlet to focus on regions experiencing traffic changes without compromising accuracy. Additionally, we introduce the Sudden Event Prediction Accuracy (SEPA), a novel event-focused metric designed to measure responsiveness to traffic slowdowns and recoveries, which are often missed by standard error metrics. We evaluate our approach in an online semi-decentralized setting with traditional FL, server-free FL, and Gossip Learning on two large-scale traffic datasets, PeMS-BAY and PeMSD7-M, across short-, mid-, and long-term prediction horizons. Experiments show that, in contrast to standard metrics, SEPA exposes the true value of spatial connectivity in predicting dynamic and irregular traffic. 
Our adaptive pruning algorithm maintains prediction accuracy while significantly lowering communication cost in all online semi-decentralized settings, demonstrating that communication can be reduced without compromising responsiveness to critical traffic events. - oai:arXiv.org:2512.17352v1 - cs.LG - cs.AI + Faster Vertex Cover Algorithms on GPUs with Component-Aware Parallel Branching + https://arxiv.org/abs/2512.18334 + arXiv:2512.18334v1 Announce Type: new +Abstract: Algorithms for finding minimum or bounded vertex covers in graphs use a branch-and-reduce strategy, which involves exploring a highly imbalanced search tree. Prior GPU solutions assign different thread blocks to different sub-trees, while using a shared worklist to balance the load. However, these prior solutions do not scale to large and complex graphs because their unawareness of when the graph splits into components causes them to solve these components redundantly. Moreover, their high memory footprint limits the number of workers that can execute concurrently. We propose a novel GPU solution for vertex cover problems that detects when a graph splits into components and branches on the components independently. Although the need to aggregate the solutions of different components introduces non-tail-recursive branches which interfere with load balancing, we overcome this challenge by delegating the post-processing to the last descendant of each branch. We also reduce the memory footprint by reducing the graph and inducing a subgraph before exploring the search tree. Our solution substantially outperforms the state-of-the-art GPU solution, finishing in seconds when the state-of-the-art solution exceeds 6 hours. To the best of our knowledge, our work is the first to parallelize non-tail-recursive branching patterns on GPUs in a load balanced manner. 
+ oai:arXiv.org:2512.18334v1 cs.DC - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Ivan Kralj, Lodovico Giaretta, Gordan Je\v{z}i\'c, Ivana Podnar \v{Z}arko, \v{S}ar\=unas Girdzijauskas + Hussein Amro, Basel Fakhri, Amer E. Mouawad, Izzat El Hajj - Implementation of Augmented Reality as an Educational Tool for Practice in Early Childhood - https://arxiv.org/abs/2512.17354 - arXiv:2512.17354v1 Announce Type: new -Abstract: Learning Wudhu for young children requires engaging and interactive media to foster a deep understanding of the worship procedures. This study aims to develop a Wudhu learning application based on Augmented Reality (AR) as an interactive and fun educational medium. The development method used includes the stages of needs analysis, system design, implementation, and testing using Black Box Testing. The system utilizes marker-based tracking to display 3D animations of Wudhu movements in real-time when the camera detects a marker on the printed media. The test results indicate that all main functions run well, and a limited trial on children aged 5-7 years showed an increase in learning interest and a better understanding of the Wudhu sequence. Thus, the application of AR technology is proven effective in improving the quality of basic worship instruction for young children. - oai:arXiv.org:2512.17354v1 - cs.HC - q-fin.CP - Mon, 22 Dec 2025 00:00:00 -0500 + Quantization for Vector Search under Streaming Updates + https://arxiv.org/abs/2512.18335 + arXiv:2512.18335v1 Announce Type: new +Abstract: Large-scale vector databases for approximate nearest neighbor (ANN) search typically store a quantized dataset in main memory for fast access, and full precision data on remote disk. State-of-the-art ANN quantization methods are highly data-dependent, rendering them unable to handle point insertions and deletions. 
This either leads to degraded search quality over time, or forces costly global rebuilds of the entire search index. In this paper, we formally study data-dependent quantization under streaming dataset updates. We formulate a computation model of limited remote disk access and define a dynamic consistency property that guarantees freshness under updates. We use it to obtain the following results: Theoretically, we prove that static data-dependent quantization can be made dynamic with bounded disk I/O per update while retaining formal accuracy guarantees for ANN search. Algorithmically, we develop a practical data-dependent quantization method which is provably dynamically consistent, adapting itself to the dataset as it evolves over time. Our experiments show that the method outperforms baselines in large-scale nearest neighbor search quantization under streaming updates. + oai:arXiv.org:2512.18335v1 + cs.DS + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Ishaq Aden-Ali, Hakan Ferhatosmanoglu, Alexander Greaves-Tunnell, Nina Mishra, Tal Wagner + + + Dynamic Entropy Tuning in Reinforcement Learning Low-Level Quadcopter Control: Stochasticity vs Determinism + https://arxiv.org/abs/2512.18336 + arXiv:2512.18336v1 Announce Type: new +Abstract: This paper explores the impact of dynamic entropy tuning in Reinforcement Learning (RL) algorithms that train a stochastic policy. Its performance is compared against algorithms that train a deterministic one. Stochastic policies optimize a probability distribution over actions to maximize rewards, while deterministic policies select a single deterministic action per state. The effect of training a stochastic policy with both static entropy and dynamic entropy and then executing deterministic actions to control the quadcopter is explored. It is then compared against training a deterministic policy and executing deterministic actions. 
For the purpose of this research, the Soft Actor-Critic (SAC) algorithm was chosen for the stochastic algorithm while the Twin Delayed Deep Deterministic Policy Gradient (TD3) was chosen for the deterministic algorithm. The training and simulation results show the positive effect the dynamic entropy tuning has on controlling the quadcopter by preventing catastrophic forgetting and improving exploration efficiency. + oai:arXiv.org:2512.18336v1 + cs.RO + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wisnu Uriawan, Muhammad Aditya Hafizh Zahran, Inayah Ayu Deswita, Muhammad Ahsani Taqwim, Ismail Muhammad Ahmadi, Marvi Yoga Pratama + 10.1109/ICCTA64612.2024.10974880 + 2024 IEEE 34th International Conference on Computer Theory and Applications (ICCTA) + Youssef Mahran, Zeyad Gamal, Ayman El-Badawy - Training Text-to-Speech Model with Purely Synthetic Data: Feasibility, Sensitivity, and Generalization Capability - https://arxiv.org/abs/2512.17356 - arXiv:2512.17356v1 Announce Type: new -Abstract: The potential of synthetic data in text-to-speech (TTS) model training has gained increasing attention, yet its rationality and effectiveness require systematic validation. In this study, we systematically investigate the feasibility of using purely synthetic data for TTS training and explore how various factors--including text richness, speaker diversity, noise levels, and speaking styles--affect model performance. Our experiments reveal that increasing speaker and text diversity significantly enhances synthesis quality and robustness. Cleaner training data with minimal noise further improves performance. Moreover, we find that standard speaking styles facilitate more effective model learning. Our experiments indicate that models trained on synthetic data have great potential to outperform those trained on real data under similar conditions, due to the absence of real-world imperfections and noise. 
- oai:arXiv.org:2512.17356v1 - cs.SD - Mon, 22 Dec 2025 00:00:00 -0500 + Towards Efficient Agents: A Co-Design of Inference Architecture and System + https://arxiv.org/abs/2512.18337 + arXiv:2512.18337v1 Announce Type: new +Abstract: The rapid development of large language model (LLM)-based agents has unlocked new possibilities for autonomous multi-turn reasoning and tool-augmented decision-making. However, their real-world deployment is hindered by severe inefficiencies that arise not from isolated model inference, but from the systemic latency accumulated across reasoning loops, context growth, and heterogeneous tool interactions. This paper presents AgentInfer, a unified framework for end-to-end agent acceleration that bridges inference optimization and architectural design. We decompose the problem into four synergistic components: AgentCollab, a hierarchical dual-model reasoning framework that balances large- and small-model usage through dynamic role assignment; AgentSched, a cache-aware hybrid scheduler that minimizes latency under heterogeneous request patterns; AgentSAM, a suffix-automaton-based speculative decoding method that reuses multi-session semantic memory to achieve low-overhead inference acceleration; and AgentCompress, a semantic compression mechanism that asynchronously distills and reorganizes agent memory without disrupting ongoing reasoning. Together, these modules form a Self-Evolution Engine capable of sustaining efficiency and cognitive stability throughout long-horizon reasoning tasks. Experiments on the BrowseComp-zh and DeepDiver benchmarks demonstrate that through the synergistic collaboration of these methods, AgentInfer reduces ineffective token consumption by over 50%, achieving an overall 1.8-2.5 times speedup with preserved accuracy. These results underscore that optimizing for agentic task completion-rather than merely per-token throughput-is the key to building scalable, efficient, and self-improving intelligent systems. 
+ oai:arXiv.org:2512.18337v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tingxiao Zhou, Leying Zhang, Zhengyang Chen, Yanmin Qian + Weizhe Lin, Hui-Ling Zhen, Shuai Yang, Xian Wang, Renxi Liu, Hanting Chen, Wangze Zhang, Chuansai Zhou, Yiming Li, Chen Chen, Xing Li, Zhiyuan Yang, Xiaosong Li, Xianzhi Yu, Zhenhua Dong, Mingxuan Yuan, Yunhe Wang - Grey graphs and its application - https://arxiv.org/abs/2512.17360 - arXiv:2512.17360v1 Announce Type: new -Abstract: In multi-attribute decision-making problems where the attribute values are interval grey numbers, a simplified form based on kernels and the degree of greyness is presented. Combining fuzzy graph theory with the kernel and the degree of greyness of interval grey numbers, grey graphs and their corresponding operation rules are presented. This paper presents a new multi-attribute decision-making method based on grey graph theory. We analyzed and evaluated the alternative schemes using grey graph. Lastly, a numerical example was conducted in order to demonstrate the effectiveness and feasibility of the proposed method. - oai:arXiv.org:2512.17360v1 + State-Space Averaging Revisited via Reconstruction Operators + https://arxiv.org/abs/2512.18340 + arXiv:2512.18340v1 Announce Type: new +Abstract: This paper presents an operator-theoretic reconstruction of an equivalent continuous-time LTI model from an exact sampled-data (Poincar\'e-map) baseline of a piecewise-linear switching system. The rebuilding is explicitly expressed via matrix logarithms. By expanding the logarithm of a product of matrix exponentials using the Baker--Campbell--Hausdorff (BCH) formula, we show that the classical state-space averaging (SSA) model can be interpreted as the leading-order truncation of this exact reconstruction when the switching period is small and the ripple is small. 
The same view explains why SSA critically relies on low-frequency and small-ripple assumptions, and why the method becomes fragile for converters with more than two subintervals per cycle. Finally, we provide a complexity-reduced, SSA-flavoured implementation strategy for obtaining the required spectral quantities and a real-valued logarithm without explicitly calling eigen-decomposition or complex matrix logarithms, by exploiting $2\times 2$ invariants and a minimal real-lift construction. + oai:arXiv.org:2512.18340v1 eess.SY cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wanli Xie, Jiale Zhang, Ruiqing Cao + Yuxin Yang, Hang Zhou, Hourong Song, Branislav Hredzak - What You Trust Is Insecure: Demystifying How Developers (Mis)Use Trusted Execution Environments in Practice - https://arxiv.org/abs/2512.17363 - arXiv:2512.17363v1 Announce Type: new -Abstract: Trusted Execution Environments (TEEs), such as Intel SGX and ARM TrustZone, provide isolated regions of CPU and memory for secure computation and are increasingly used to protect sensitive data and code across diverse application domains. However, little is known about how developers actually use TEEs in practice. This paper presents the first large-scale empirical study of real-world TEE applications. We collected and analyzed 241 open-source projects from GitHub that utilize the two most widely-adopted TEEs, Intel SGX and ARM TrustZone. By combining manual inspection with customized static analysis scripts, we examined their adoption contexts, usage patterns, and development practices across three phases. First, we categorized the projects into 8 application domains and identified trends in TEE adoption over time. 
We found that the dominant use case is IoT device security (30%), which contrasts sharply with prior academic focus on blockchain and cryptographic systems (7%), while AI model protection (12%) is rapidly emerging as a growing domain. Second, we analyzed how TEEs are integrated into software and observed that 32.4% of the projects reimplement cryptographic functionalities instead of using official SDK APIs, suggesting that current SDKs may have limited usability and portability to meet developers' practical needs. Third, we examined security practices through manual inspection and found that 25.3% (61 of 241) of the projects exhibit insecure coding behaviors when using TEEs, such as hardcoded secrets and missing input validation, which undermine their intended security guarantees. Our findings have important implications for improving the usability of TEE SDKs and supporting developers in trusted software development. - oai:arXiv.org:2512.17363v1 - cs.SE - cs.CR - Mon, 22 Dec 2025 00:00:00 -0500 + Benchmarking metaheuristic algorithms for the bi-objective redundancy allocation problem in repairable systems with multiple strategies + https://arxiv.org/abs/2512.18343 + arXiv:2512.18343v1 Announce Type: new +Abstract: This article investigates a bi-objective redundancy allocation problem (RAP) for repairable systems, defined as cost minimization and availability maximization. Binary decisions jointly select the number of components and the standby strategy at the subsystem level. Four redundancy strategies are considered: cold standby, warm standby, hot standby, and a mixed strategy. System availability is evaluated using continuous-time Markov chains. The main novelty is a large, controlled benchmark that compares 65 multi-objective metaheuristics under two initialization settings, with and without Scaled Binomial Initialization (SBI), on six case studies of rising structural and dimensional complexity and four weight limits. 
Each run uses a fixed budget of 2x10^6 evaluations, and repeated runs support statistical comparisons based on hypervolume and budget-based performance. The Pareto-optimal sets are dominated by hot standby and mixed redundancy, while cold and warm standby are rare in the full populations and almost absent from the Pareto fronts. Hot standby is favored under tight weight limits, whereas mixed redundancy becomes dominant when more spares are allowed. Algorithm results show strong budget effects, so a single overall ranking can be misleading. SBI gives a clear hypervolume gain and can change method rankings; in several cases, the SBI initial population is already close to the best-found reference. NSGAIIARSBX-SBI performs well for medium and large budgets, while NNIA-SBI and CMOPSO-SBI are strongest when the budget is tight. Finally, larger systems require much more search effort to reach high-quality fronts, highlighting the need to plan the evaluation budget in practical RAP studies. The code and the results are available at a Zenodo repository https://doi.org/10.5281/zenodo.17981720. + oai:arXiv.org:2512.18343v1 + cs.NE + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Yuqing Niu, Jieke Shi, Ruidong Han, Ye Liu, Chengyan Ma, Yunbo Lyu, David Lo - - - Adversarially Robust Detection of Harmful Online Content: A Computational Design Science Approach - https://arxiv.org/abs/2512.17367 - arXiv:2512.17367v1 Announce Type: new -Abstract: Social media platforms are plagued by harmful content such as hate speech, misinformation, and extremist rhetoric. Machine learning (ML) models are widely adopted to detect such content; however, they remain highly vulnerable to adversarial attacks, wherein malicious users subtly modify text to evade detection. Enhancing adversarial robustness is therefore essential, requiring detectors that can defend against diverse attacks (generalizability) while maintaining high overall accuracy. 
However, simultaneously achieving both optimal generalizability and accuracy is challenging. Following the computational design science paradigm, this study takes a sequential approach that first proposes a novel framework (Large Language Model-based Sample Generation and Aggregation, LLM-SGA) by identifying the key invariances of textual adversarial attacks and leveraging them to ensure that a detector instantiated within the framework has strong generalizability. Second, we instantiate our detector (Adversarially Robust Harmful Online Content Detector, ARHOCD) with three novel design components to improve detection accuracy: (1) an ensemble of multiple base detectors that exploits their complementary strengths; (2) a novel weight assignment method that dynamically adjusts weights based on each sample's predictability and each base detector's capability, with weights initialized using domain knowledge and updated via Bayesian inference; and (3) a novel adversarial training strategy that iteratively optimizes both the base detectors and the weight assignor. We addressed several limitations of existing adversarial robustness enhancement research and empirically evaluated ARHOCD across three datasets spanning hate speech, rumor, and extremist content. Results show that ARHOCD offers strong generalizability and improves detection accuracy under adversarial conditions. 
- oai:arXiv.org:2512.17367v1 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yidong Chai, Yi Liu, Mohammadreza Ebrahimi, Weifeng Li, Balaji Padmanabhan + Mateusz Oszczypa{\l}a, David Ibehej, Jakub Kudela - TakeAD: Preference-based Post-optimization for End-to-end Autonomous Driving with Expert Takeover Data - https://arxiv.org/abs/2512.17370 - arXiv:2512.17370v1 Announce Type: new -Abstract: Existing end-to-end autonomous driving methods typically rely on imitation learning (IL) but face a key challenge: the misalignment between open-loop training and closed-loop deployment. This misalignment often triggers driver-initiated takeovers and system disengagements during closed-loop execution. How to leverage those expert takeover data from disengagement scenarios and effectively expand the IL policy's capability presents a valuable yet unexplored challenge. In this paper, we propose TakeAD, a novel preference-based post-optimization framework that fine-tunes the pre-trained IL policy with this disengagement data to enhance the closed-loop driving performance. First, we design an efficient expert takeover data collection pipeline inspired by human takeover mechanisms in real-world autonomous driving systems. Then, this post optimization framework integrates iterative Dataset Aggregation (DAgger) for imitation learning with Direct Preference Optimization (DPO) for preference alignment. The DAgger stage equips the policy with fundamental capabilities to handle disengagement states through direct imitation of expert interventions. Subsequently, the DPO stage refines the policy's behavior to better align with expert preferences in disengagement scenarios. Through multiple iterations, the policy progressively learns recovery strategies for disengagement states, thereby mitigating the open-loop gap. 
Experiments on the closed-loop Bench2Drive benchmark demonstrate our method's effectiveness compared with pure IL methods, with comprehensive ablations confirming the contribution of each component. - oai:arXiv.org:2512.17370v1 - cs.RO + MCVI-SANet: A lightweight semi-supervised model for LAI and SPAD estimation of winter wheat under vegetation index saturation + https://arxiv.org/abs/2512.18344 + arXiv:2512.18344v1 Announce Type: new +Abstract: Vegetation index (VI) saturation during the dense canopy stage and limited ground-truth annotations of winter wheat constrain accurate estimation of LAI and SPAD. Existing VI-based and texture-driven machine learning methods exhibit limited feature expressiveness. In addition, deep learning baselines suffer from domain gaps and high data demands, which restrict their generalization. Therefore, this study proposes the Multi-Channel Vegetation Indices Saturation Aware Net (MCVI-SANet), a lightweight semi-supervised vision model. The model incorporates a newly designed Vegetation Index Saturation-Aware Block (VI-SABlock) for adaptive channel-spatial feature enhancement. It also integrates a VICReg-based semi-supervised strategy to further improve generalization. Datasets were partitioned using a vegetation height-informed strategy to maintain representativeness across growth stages. Experiments over 10 repeated runs demonstrate that MCVI-SANet achieves state-of-the-art accuracy. The model attains an average R2 of 0.8123 and RMSE of 0.4796 for LAI, and an average R2 of 0.6846 and RMSE of 2.4222 for SPAD. This performance surpasses the best-performing baselines, with improvements of 8.95% in average LAI R2 and 8.17% in average SPAD R2. Moreover, MCVI-SANet maintains high inference speed with only 0.10M parameters. Overall, the integration of semi-supervised learning with agronomic priors provides a promising approach for enhancing remote sensing-based precision agriculture. 
+ oai:arXiv.org:2512.18344v1 + cs.CV cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Deqing Liu, Yinfeng Gao, Deheng Qian, Qichao Zhang, Xiaoqing Ye, Junyu Han, Yupeng Zheng, Xueyi Liu, Zhongpu Xia, Dawei Ding, Yifeng Pan, Dongbin Zhao - - - GraphCue for SDN Configuration Code Synthesis - https://arxiv.org/abs/2512.17371 - arXiv:2512.17371v1 Announce Type: new -Abstract: We present GraphCue, a topology-grounded retrieval and agent-in-the-loop framework for automated SDN configuration. Each case is abstracted into a JSON graph and embedded using a lightweight three-layer GCN trained with contrastive learning. The nearest validated reference is injected into a structured prompt that constrains code generation, while a verifier closes the loop by executing the candidate configuration and feeding failures back to the agent. On 628 validation cases, GraphCue achieves an 88.2 percent pass rate within 20 iterations and completes 95 percent of verification loops within 9 seconds. Ablation studies without retrieval or structured prompting perform substantially worse, indicating that topology-aware retrieval and constraint-based conditioning are key drivers of performance. - oai:arXiv.org:2512.17371v1 - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - IEEE Consumer Communications & Networking Conference 2026 - Haomin Qi, Fengfei Yu, Chengbo Huang + Zhiheng Zhang, Jiajun Yang, Hong Sun, Dong Wang, Honghua Jiang, Yaru Chen, Tangyuan Ning - Dialectics for Artificial Intelligence - https://arxiv.org/abs/2512.17373 - arXiv:2512.17373v1 Announce Type: new -Abstract: Can artificial intelligence discover, from raw experience and without human supervision, concepts that humans have discovered? 
One challenge is that human concepts themselves are fluid: conceptual boundaries can shift, split, and merge as inquiry progresses (e.g., Pluto is no longer considered a planet). To make progress, we need a definition of "concept" that is not merely a dictionary label, but a structure that can be revised, compared, and aligned across agents. We propose an algorithmic-information viewpoint that treats a concept as an information object defined only through its structural relation to an agent's total experience. The core constraint is determination: a set of parts forms a reversible consistency relation if any missing part is recoverable from the others (up to the standard logarithmic slack in Kolmogorov-style identities). This reversibility prevents "concepts" from floating free of experience and turns concept existence into a checkable structural claim. To judge whether a decomposition is natural, we define excess information, measuring the redundancy overhead introduced by splitting experience into multiple separately described parts. On top of these definitions, we formulate dialectics as an optimization dynamics: as new patches of information appear (or become contested), competing concepts bid to explain them via shorter conditional descriptions, driving systematic expansion, contraction, splitting, and merging. Finally, we formalize low-cost concept transmission and multi-agent alignment using small grounds/seeds that allow another agent to reconstruct the same concept under a shared protocol, making communication a concrete compute-bits trade-off. - oai:arXiv.org:2512.17373v1 - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Theodosian: A Deep Dive into Memory-Hierarchy-Centric FHE Acceleration + https://arxiv.org/abs/2512.18345 + arXiv:2512.18345v1 Announce Type: new +Abstract: Fully homomorphic encryption (FHE) enables secure computation on encrypted data, mitigating privacy concerns in cloud and edge environments. 
However, due to its high compute and memory demands, extensive acceleration research has been pursued across diverse hardware platforms, especially GPUs. In this paper, we perform a microarchitectural analysis of CKKS, a popular FHE scheme, on modern GPUs. We focus on on-chip cache behavior, and show that the dominant kernels remain bound by memory bandwidth despite a high-bandwidth L2 cache, exposing a persistent memory wall. We further discover that the overall CKKS pipeline throughput is constrained by low per-kernel hardware utilization, caused by insufficient intra-kernel parallelism. Motivated by these findings, we introduce Theodosian, a set of complementary, memory-aware optimizations that improve cache efficiency and reduce runtime overheads. Our approach delivers consistent speedups across various CKKS workloads. On an RTX 5090, we reduce the bootstrapping latency for 32,768 complex numbers to 15.2ms with Theodosian, and further to 12.8ms with additional algorithmic optimizations, establishing new state-of-the-art GPU performance to the best of our knowledge. + oai:arXiv.org:2512.18345v1 + cs.CR + cs.AR + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Zhengmian Hu + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Wonseok Choi, Hyunah Yu, Jongmin Kim, Hyesung Ji, Jaiyoung Park, Jung Ho Ahn - AdvJudge-Zero: Binary Decision Flips in LLM-as-a-Judge via Adversarial Control Tokens - https://arxiv.org/abs/2512.17375 - arXiv:2512.17375v1 Announce Type: new -Abstract: Reward models and LLM-as-a-Judge systems are central to modern post-training pipelines such as RLHF, DPO, and RLAIF, where they provide scalar feedback and binary decisions that guide model selection and RL-based fine-tuning. 
We show that these judge systems exhibit a recurring vulnerability: short sequences of low-perplexity control tokens can flip many binary evaluations from correct ``No'' judgments to incorrect ``Yes'' judgments by steering the last-layer logit gap. These control tokens are patterns that a policy model could plausibly generate during post-training, and thus represent realistic reward-hacking risks rather than worst-case adversarial strings. Our method, AdvJudge-Zero, uses the model's next-token distribution and beam-search exploration to discover diverse control-token sequences from scratch, and our analysis shows that the induced hidden-state perturbations concentrate in a low-rank ``soft mode'' that is anti-aligned with the judge's refusal direction. Empirically, these tokens cause very high false positive rates when large open-weight and specialized judge models score incorrect answers on math and reasoning benchmarks. Finally, we show that LoRA-based adversarial training on small sets of control-token-augmented examples can markedly reduce these false positives while preserving evaluation quality. - oai:arXiv.org:2512.17375v1 - cs.LG + LLM-based Few-Shot Early Rumor Detection with Imitation Agent + https://arxiv.org/abs/2512.18352 + arXiv:2512.18352v1 Announce Type: new +Abstract: Early Rumor Detection (EARD) aims to identify the earliest point at which a claim can be accurately classified based on a sequence of social media posts. This is especially challenging in data-scarce settings. While Large Language Models (LLMs) perform well in few-shot NLP tasks, they are not well-suited for time-series data and are computationally expensive for both training and inference. In this work, we propose a novel EARD framework that combines an autonomous agent and an LLM-based detection model, where the agent acts as a reliable decision-maker for \textit{early time point determination}, while the LLM serves as a powerful \textit{rumor detector}. 
This approach offers the first solution for few-shot EARD, necessitating only the training of a lightweight agent and allowing the LLM to remain training-free. Extensive experiments on four real-world datasets show our approach boosts performance across LLMs and surpasses existing EARD methods in accuracy and earliness. + oai:arXiv.org:2512.18352v1 cs.CL - cs.CR - Mon, 22 Dec 2025 00:00:00 -0500 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Tung-Ling Li, Yuhao Wu, Hongliang Liu + Fengzhu Zeng, Qian Shao, Ling Cheng, Wei Gao, Shih-Fen Cheng, Jing Ma, Cheng Niu - Towards Deeper Emotional Reflection: Crafting Affective Image Filters with Generative Priors - https://arxiv.org/abs/2512.17376 - arXiv:2512.17376v1 Announce Type: new -Abstract: Social media platforms enable users to express emotions by posting text with accompanying images. In this paper, we propose the Affective Image Filter (AIF) task, which aims to reflect visually-abstract emotions from text into visually-concrete images, thereby creating emotionally compelling results. We first introduce the AIF dataset and the formulation of the AIF models. Then, we present AIF-B as an initial attempt based on a multi-modal transformer architecture. After that, we propose AIF-D as an extension of AIF-B towards deeper emotional reflection, effectively leveraging generative priors from pre-trained large-scale diffusion models. Quantitative and qualitative experiments demonstrate that AIF models achieve superior performance for both content consistency and emotional fidelity compared to state-of-the-art methods. Extensive user study experiments demonstrate that AIF models are significantly more effective at evoking specific emotions. Based on the presented results, we comprehensively discuss the value and potential of AIF models. 
- oai:arXiv.org:2512.17376v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Robust H-infinity control under stochastic requirements: minimizing conditional value-at-risk instead of worst-case performance + https://arxiv.org/abs/2512.18356 + arXiv:2512.18356v1 Announce Type: new +Abstract: Conventional robust $\mathcal H_2/\mathcal H_\infty$ control minimizes the worst-case performance, often leading to a conservative design driven by very rare and somewhat arbitrary parametric configurations. To reduce this conservatism while taking advantage of the stochastic properties of Monte-Carlo sampling and its compatibility with parallel computing, we introduce an alternative paradigm that optimizes the controller with respect to a stochastic criterion, namely the conditional value at risk. We illustrate the potential of this approach on a realistic satellite benchmark, showing that it can significantly improve overall performance by tolerating some degradation in very rare worst-case scenarios. + oai:arXiv.org:2512.18356v1 + eess.SY + cs.SY + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Peixuan Zhang, Shuchen Weng, Jiajun Tang, Si Li, Boxin Shi + Ervan Kassarian, Francesco Sanfedino, Daniel Alazard, Andrea Marrazza - Sobolev Algorithm for Local Smoothness Analysis (SALSA) via Sharp Direct and Inverse Statements - https://arxiv.org/abs/2512.17377 - arXiv:2512.17377v1 Announce Type: new -Abstract: We extend sharp direct and inverse approximation statements for kernel-based methods for finitely smooth kernels, i.e. those whose native spaces are norm-equivalent to Sobolev spaces. In particular, our inverse results are now formulated for a broad class of approximation schemes beyond interpolation, extending existing theory. 
Building on these results, we propose a novel Sobolev Algorithm for Local Smoothness Analysis (SALSA) for detecting local smoothness properties of target data, including their degree of smoothness and non-smoothness. The method is rigorously grounded based on the sharp direct and inverse statements. Numerical experiments in various settings highlight the effectiveness of the proposed algorithm. - oai:arXiv.org:2512.17377v1 - math.NA - cs.NA - Mon, 22 Dec 2025 00:00:00 -0500 + DACE For Railway Acronym Disambiguation + https://arxiv.org/abs/2512.18357 + arXiv:2512.18357v1 Announce Type: new +Abstract: Acronym Disambiguation (AD) is a fundamental challenge in technical text processing, particularly in specialized sectors where high ambiguity complicates automated analysis. This paper addresses AD within the context of the TextMine'26 competition on French railway documentation. We present DACE (Dynamic Prompting, Retrieval Augmented Generation, Contextual Selection, and Ensemble Aggregation), a framework that enhances Large Language Models through adaptive in-context learning and external domain knowledge injection. By dynamically tailoring prompts to acronym ambiguity and aggregating ensemble predictions, DACE mitigates hallucination and effectively handles low-resource scenarios. Our approach secured the top rank in the competition with an F1 score of 0.9069. 
+ oai:arXiv.org:2512.18357v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Sara Avesani, Leevan Ling, Francesco Marchetti, Tizian Wenzel + http://creativecommons.org/licenses/by/4.0/ + El Mokhtar Hribach, Oussama Mechhour, Mohammed Elmonstaser, Yassine El Boudouri, Othmane Kabal - When Symmetry Yields NP-Hardness: Affine ML-SAT on S5 Frames - https://arxiv.org/abs/2512.17378 - arXiv:2512.17378v1 Announce Type: new -Abstract: Hemaspaandra~et~al.~[JCSS 2010] conjectured that satisfiability for multi-modal logic restricted to the connectives XOR and 1, over frame classes T, S4, and S5, is solvable in polynomial time. We refute this for S5 frames, by proving NP-hardness. - oai:arXiv.org:2512.17378v1 - cs.LO - cs.CC - Mon, 22 Dec 2025 00:00:00 -0500 + Downlink Power Allocation for STAR-RIS-Assisted Cell-Free Massive MIMO with Multi-antenna Users + https://arxiv.org/abs/2512.18359 + arXiv:2512.18359v1 Announce Type: new +Abstract: This paper investigates the downlink power allocation of the simultaneous transmitting and reflecting reconfigurable intelligent surface (STAR-RIS)-assisted cell-free massive multiple-input multiple-output (MIMO) system with multi-antenna users. We introduce downlink spectral efficiency (SE) and derive novel closed-form SE expressions using linear minimum mean squared error (MMSE) detectors. We also address the downlink power allocation via a sum SE maximization problem framed within an alternating direction method of multipliers (ADMM)-based fractional programming (FP) algorithm. Numerical results demonstrate that systems utilizing multi-antenna users significantly enhance SE, achieving at least a 20% SE increase as the number of antennas increases from one to six. Additionally, our proposed ADMM-based FP algorithm outperforms existing fractional power control approaches, yielding a more than 20% SE increase. 
These results highlight the necessity for adopting multi-antenna users and efficient power allocation algorithms in STAR-RIS-assisted cell-free massive MIMO systems. + oai:arXiv.org:2512.18359v1 + cs.IT + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Andreas Krebs, Arne Meier + Jun Qian, Ross Murch, Khaled B. Letaief - Timely Information Updating for Mobile Devices Without and With ML Advice - https://arxiv.org/abs/2512.17381 - arXiv:2512.17381v1 Announce Type: new -Abstract: This paper investigates an information update system in which a mobile device monitors a physical process and sends status updates to an access point (AP). A fundamental trade-off arises between the timeliness of the information maintained at the AP and the update cost incurred at the device. To address this trade-off, we propose an online algorithm that determines when to transmit updates using only available observations. The proposed algorithm asymptotically achieves the optimal competitive ratio against an adversary that can simultaneously manipulate multiple sources of uncertainty, including the operation duration, the information staleness, the update cost, and the availability of update opportunities. Furthermore, by incorporating machine learning (ML) advice of unknown reliability into the design, we develop an ML-augmented algorithm that asymptotically attains the optimal consistency-robustness trade-off, even when the adversary can additionally corrupt the ML advice. The optimal competitive ratio scales linearly with the range of update costs, but is unaffected by other uncertainties. Moreover, an optimal competitive online algorithm exhibits a threshold-like response to the ML advice: it either fully trusts or completely ignores the ML advice, as partially trusting the advice cannot improve the consistency without severely degrading the robustness. 
Extensive simulations in stochastic settings further validate the theoretical findings in the adversarial environment. - oai:arXiv.org:2512.17381v1 - cs.NI - cs.IT - cs.LG - math.IT - Mon, 22 Dec 2025 00:00:00 -0500 + LLM Agents Implement an NLG System from Scratch: Building Interpretable Rule-Based RDF-to-Text Generators + https://arxiv.org/abs/2512.18360 + arXiv:2512.18360v1 Announce Type: new +Abstract: We present a novel neurosymbolic framework for RDF-to-text generation, in which the model is "trained" through collaborative interactions among multiple LLM agents rather than traditional backpropagation. The LLM agents produce rule-based Python code for a generator for the given domain, based on RDF triples only, with no in-domain human reference texts. The resulting system is fully interpretable, requires no supervised training data, and generates text nearly instantaneously using only a single CPU. Our experiments on the WebNLG and OpenDialKG data show that outputs produced by our approach reduce hallucination, with only slight fluency penalties compared to finetuned or prompted language models + oai:arXiv.org:2512.18360v1 + cs.CL + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yu-Pin Hsu, Yi-Hsuan Tseng + http://creativecommons.org/licenses/by/4.0/ + 10.18653/v1/2025.emnlp-industry.142 + Mateusz Lango, Ond\v{r}ej Du\v{s}ek - Delaunay-Rips filtration: a study and an algorithm - https://arxiv.org/abs/2512.17382 - arXiv:2512.17382v1 Announce Type: new -Abstract: The Delaunay-Rips filtration is a lighter and faster alternative to the well-known Rips filtration for low-dimensional Euclidean point clouds. Despite these advantages, it has seldom been studied. In this paper, we aim to bridge this gap by providing a thorough theoretical and empirical analysis of this construction. 
From a theoretical perspective, we show how the persistence diagrams associated with the Delaunay-Rips filtration approximate those obtained with the Rips filtration. Additionally, we describe the instabilities of the Delaunay-Rips persistence diagrams when the input point cloud is perturbed. Finally, we introduce an algorithm that computes persistence diagrams of Delaunay-Rips filtrations in any dimension. We show that our method is faster and has a lower memory footprint than traditional approaches in low dimensions. Our C++ implementation, which comes with Python bindings, is available at https://github.com/MClemot/GeoPH. - oai:arXiv.org:2512.17382v1 - cs.CG - Mon, 22 Dec 2025 00:00:00 -0500 + Convexification Numerical Method for Imaging of Moving Targets + https://arxiv.org/abs/2512.18361 + arXiv:2512.18361v1 Announce Type: new +Abstract: The problem of imaging of a moving target is formulated as a Coefficient Inverse Problem for a hyperbolic equation with its coefficient depending on all three spatial variables and time. As the initial condition, the point source running along a straight line is used. Lateral Cauchy data are known for each position of the point source. A truncated Fourier series with respect to a special orthonormal basis is used. First, Lipschitz stability estimate is obtained. Next, a globally convergent numerical method, the so-called convexification method, is developed and its convergence analysis is carried out. The convexification method is based on a Carleman estimate. Results of numerical experiments are presented. + oai:arXiv.org:2512.18361v1 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Matt\'eo Cl\'emot, Julie Digne, Julien Tierny + http://creativecommons.org/licenses/by/4.0/ + Michael V. Klibanov, Jingzhi Li, Vladimir G. 
Romanov, Zhipeng Yang - UCoder: Unsupervised Code Generation by Internal Probing of Large Language Models - https://arxiv.org/abs/2512.17385 - arXiv:2512.17385v1 Announce Type: new -Abstract: Large language models (LLMs) have demonstrated remarkable capabilities in code generation tasks. However, their effectiveness heavily relies on supervised training with extensive labeled (e.g., question-answering pairs) or unlabeled datasets (e.g., code snippets), which are often expensive and difficult to obtain at scale. To address this limitation, this paper introduces a method IPC, an unsupervised framework that leverages Internal Probing of LLMs for Code generation without any external corpus, even unlabeled code snippets. We introduce the problem space probing, test understanding probing, solution space probing, and knowledge consolidation and reinforcement to probe the internal knowledge and confidence patterns existing in LLMs. Further, IPC identifies reliable code candidates through self-consistency mechanisms and representation-based quality estimation to train UCoder (coder with unsupervised learning). We validate the proposed approach across multiple code benchmarks, demonstrating that unsupervised methods can achieve competitive performance compared to supervised approaches while significantly reducing the dependency on labeled data and computational resources. Analytic experiments reveal that internal model states contain rich signals about code quality and correctness, and that properly harnessing these signals enables effective unsupervised learning for code generation tasks, opening new directions for training code LLMs in resource-constrained scenarios. 
- oai:arXiv.org:2512.17385v1 + SRS-Stories: Vocabulary-constrained multilingual story generation for language learning + https://arxiv.org/abs/2512.18362 + arXiv:2512.18362v1 Announce Type: new +Abstract: In this paper, we use large language models to generate personalized stories for language learners, using only the vocabulary they know. The generated texts are specifically written to teach the user new vocabulary by simply reading stories where it appears in context, while at the same time seamlessly reviewing recently learned vocabulary. The generated stories are enjoyable to read and the vocabulary reviewing/learning is optimized by a Spaced Repetition System. The experiments are conducted in three languages: English, Chinese and Polish, evaluating three story generation methods and three strategies for enforcing lexical constraints. The results show that the generated stories are more grammatical, coherent, and provide better examples of word usage than texts generated by the standard constrained beam search approach + oai:arXiv.org:2512.18362v1 cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jiajun Wu, Jian Yang, Wei Zhang, Lin Jing, Yuqing Ma, Ensheng Shi, Yuchi Ma, Zhoujun Li, Xianglong Liu + http://creativecommons.org/licenses/by/4.0/ + 10.18653/v1/2025.emnlp-industry.44 + Wiktor Kamzela, Mateusz Lango, Ondrej Dusek - Deterministic implementation in single-item auctions - https://arxiv.org/abs/2512.17386 - arXiv:2512.17386v1 Announce Type: new -Abstract: Deterministic auctions are attractive in practice due to their transparency, simplicity, and ease of implementation, motivating a sharp understanding of when they match randomized designs. We study deterministic implementation in single-item auctions under two outcome notions: (revenue, welfare) pairs and interim allocations. 
For (revenue, welfare) pairs, we show a discrete separation: there exists a pair implementable by a deterministic Bayesian incentive-compatible (BIC) auction but not by any deterministic dominant-strategy incentive-compatible (DSIC) auction. For continuous atomless priors, we identify conditions under which deterministic DSIC auctions are implementationally equivalent to randomized BIC auctions in terms of achievable outcomes. For interim allocations, we establish a deterministic analogue of Border's theorem for two bidders, providing a necessary and sufficient condition for deterministic DSIC implementability, and use it to exhibit an interim allocation implementable by a deterministic BIC auction but not by any deterministic DSIC auction. - oai:arXiv.org:2512.17386v1 - cs.GT - Mon, 22 Dec 2025 00:00:00 -0500 + Enhancing 3D Semantic Scene Completion with a Refinement Module + https://arxiv.org/abs/2512.18363 + arXiv:2512.18363v1 Announce Type: new +Abstract: We propose ESSC-RM, a plug-and-play Enhancing framework for Semantic Scene Completion with a Refinement Module, which can be seamlessly integrated into existing SSC models. ESSC-RM operates in two phases: a baseline SSC network first produces a coarse voxel prediction, which is subsequently refined by a 3D U-Net-based Prediction Noise-Aware Module (PNAM) and Voxel-level Local Geometry Module (VLGM) under multiscale supervision. Experiments on SemanticKITTI show that ESSC-RM consistently improves semantic prediction performance. When integrated into CGFormer and MonoScene, the mean IoU increases from 16.87% to 17.27% and from 11.08% to 11.51%, respectively. These results demonstrate that ESSC-RM serves as a general refinement framework applicable to a wide range of SSC models. 
+ oai:arXiv.org:2512.18363v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yan Liu, Zeyu Ren, Pingzhong Tang, Zihe Wang, Yulong Zeng, Jie Zhang + http://creativecommons.org/licenses/by/4.0/ + Dunxing Zhang (Technical University of Munich, Munich, Germany), Jiachen Lu (Technical University of Munich, Munich, Germany), Han Yang (National Science Center for Earthquake Engineering, Tianjin University, Tianjin, China, School of Civil Engineering, Tianjin University, Tianjin, China), Lei Bao (National Science Center for Earthquake Engineering, Tianjin University, Tianjin, China, School of Civil Engineering, Tianjin University, Tianjin, China), Bo Song (National Science Center for Earthquake Engineering, Tianjin University, Tianjin, China, School of Civil Engineering, Tianjin University, Tianjin, China) - CIFE: Code Instruction-Following Evaluation - https://arxiv.org/abs/2512.17387 - arXiv:2512.17387v1 Announce Type: new -Abstract: Large Language Models (LLMs) are increasingly applied to real-world code generation, where functional correctness alone is insufficient for reliable deployment, developers also expect adherence to explicit requirements for robustness, formatting, and security. Existing benchmarks primarily assess correctness through test-case execution, offering limited insight into how reliably models follow such constraints. We introduce a benchmark of 1,000 Python tasks, each paired with an average of 7 developer-specified constraints spanning 13 categories. Constraints are curated through a four-stage human-LLM pipeline to ensure they are atomic, relevant, and objective. We evaluate 14 open- and closed-source models using complementary adherence metrics and propose the C2A Score, a composite measure that jointly captures correctness and constraint compliance. 
Results reveal a substantial gap between partial and strict satisfaction, while strong models achieve over 90% partial adherence, strict adherence remains between 39-66%. These findings highlight that trustworthy code generation requires not only correctness but also consistent adherence to developer intent. - oai:arXiv.org:2512.17387v1 - cs.SE - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + Efficient Zero-Shot Inpainting with Decoupled Diffusion Guidance + https://arxiv.org/abs/2512.18365 + arXiv:2512.18365v1 Announce Type: new +Abstract: Diffusion models have emerged as powerful priors for image editing tasks such as inpainting and local modification, where the objective is to generate realistic content that remains consistent with observed regions. In particular, zero-shot approaches that leverage a pretrained diffusion model, without any retraining, have been shown to achieve highly effective reconstructions. However, state-of-the-art zero-shot methods typically rely on a sequence of surrogate likelihood functions, whose scores are used as proxies for the ideal score. This procedure however requires vector-Jacobian products through the denoiser at every reverse step, introducing significant memory and runtime overhead. To address this issue, we propose a new likelihood surrogate that yields simple and efficient to sample Gaussian posterior transitions, sidestepping the backpropagation through the denoiser network. Our extensive experiments show that our method achieves strong observation consistency compared with fine-tuned baselines and produces coherent, high-quality reconstructions, all while significantly reducing inference cost. Code is available at https://github.com/YazidJanati/ding. 
+ oai:arXiv.org:2512.18365v1 + cs.CV + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Sravani Gunnu, Shanmukha Guttula, Hima Patel + http://creativecommons.org/licenses/by-sa/4.0/ + Badr Moufad, Navid Bagheri Shouraki, Alain Oliviero Durmus, Thomas Hirtz, Eric Moulines, Jimmy Olsson, Yazid Janati - The Mental World of Large Language Models in Recommendation: A Benchmark on Association, Personalization, and Knowledgeability - https://arxiv.org/abs/2512.17389 - arXiv:2512.17389v1 Announce Type: new -Abstract: Large language models (LLMs) have shown potential in recommendation systems (RecSys) by using them as either knowledge enhancer or zero-shot ranker. A key challenge lies in the large semantic gap between LLMs and RecSys where the former internalizes language world knowledge while the latter captures personalized world of behaviors. Unfortunately, the research community lacks a comprehensive benchmark that evaluates the LLMs over their limitations and boundaries in RecSys so that we can draw a confident conclusion. To investigate this, we propose a benchmark named LRWorld containing over 38K high-quality samples and 23M tokens carefully compiled and generated from widely used public recommendation datasets. LRWorld categorizes the mental world of LLMs in RecSys as three main scales (association, personalization, and knowledgeability) spanned by ten factors with 31 measures (tasks). Based on LRWorld, comprehensive experiments on dozens of LLMs show that they are still not well capturing the deep neural personalized embeddings but can achieve good results on shallow memory-based item-item similarity. They are also good at perceiving item entity relations, entity hierarchical taxonomies, and item-item association rules when inferring user interests. Furthermore, LLMs show a promising ability in multimodal knowledge reasoning (movie poster and product image) and robustness to noisy profiles. 
None of them show consistently good performance over the ten factors. Model sizes, position bias, and more are ablated. - oai:arXiv.org:2512.17389v1 - cs.IR - Mon, 22 Dec 2025 00:00:00 -0500 + Learning Semantic Atomic Skills for Multi-Task Robotic Manipulation + https://arxiv.org/abs/2512.18368 + arXiv:2512.18368v1 Announce Type: new +Abstract: While imitation learning has shown impressive results in single-task robot manipulation, scaling it to multi-task settings remains a fundamental challenge due to issues such as suboptimal demonstrations, trajectory noise, and behavioral multi-modality. Existing skill-based methods attempt to address this by decomposing actions into reusable abstractions, but they often rely on fixed-length segmentation or environmental priors that limit semantic consistency and cross-task generalization. In this work, we propose AtomSkill, a novel multi-task imitation learning framework that learns and leverages a structured Atomic Skill Space for composable robot manipulation. Our approach is built on two key technical contributions. First, we construct a Semantically Grounded Atomic Skill Library by partitioning demonstrations into variable-length skills using gripper-state keyframe detection and vision-language model annotation. A contrastive learning objective ensures the resulting skill embeddings are both semantically consistent and temporally coherent. Second, we propose an Action Generation module with Keypose Imagination, which jointly predicts a skill's long-horizon terminal keypose and its immediate action sequence. This enables the policy to reason about overarching motion goals and fine-grained control simultaneously, facilitating robust skill chaining. Extensive experiments in simulated and real-world environments show that AtomSkill consistently outperforms state-of-the-art methods across diverse manipulation tasks. 
+ oai:arXiv.org:2512.18368v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Guangneng Hu + Yihang Zhu, Weiqing Wang, Shijie Wu, Ye Shi, Jingya Wang - Psychological Factors Influencing University Students Trust in AI-Based Learning Assistants - https://arxiv.org/abs/2512.17390 - arXiv:2512.17390v1 Announce Type: new -Abstract: Artificial intelligence (AI) based learning assistants and chatbots are increasingly integrated into higher education. While these tools are often evaluated in terms of technical performance, their successful and ethical use also depends on psychological factors such as trust, perceived risk, technology anxiety, and students general attitudes toward AI. This paper adopts a psychology oriented perspective to examine how university students form trust in AI based learning assistants. Drawing on recent literature in mental health, human AI interaction, and trust in automation, we propose a conceptual framework that organizes psychological predictors of trust into four groups: cognitive appraisals, affective reactions, social relational factors, and contextual moderators. A narrative review approach synthesizes empirical findings and derives research questions and hypotheses for future studies. The paper highlights that trust in AI is a psychological process shaped by individual differences and learning environments, with practical implications for instructors, administrators, and designers of educational AI systems. 
- oai:arXiv.org:2512.17390v1 - cs.HC - cs.CY - Mon, 22 Dec 2025 00:00:00 -0500 + Towards Guided Descent: Optimization Algorithms for Training Neural Networks At Scale + https://arxiv.org/abs/2512.18373 + arXiv:2512.18373v1 Announce Type: new +Abstract: Neural network optimization remains one of the most consequential yet poorly understood challenges in modern AI research, where improvements in training algorithms can lead to enhanced feature learning in foundation models, order-of-magnitude reductions in training time, and improved interpretability into how networks learn. While stochastic gradient descent (SGD) and its variants have become the de facto standard for training deep networks, their success in these over-parameterized regimes often appears more empirical than principled. This thesis investigates this apparent paradox by tracing the evolution of optimization algorithms from classical first-order methods to modern higher-order techniques, revealing how principled algorithmic design can demystify the training process. Starting from first principles with SGD and adaptive gradient methods, the analysis progressively uncovers the limitations of these conventional approaches when confronted with anisotropy that is representative of real-world data. These breakdowns motivate the exploration of sophisticated alternatives rooted in curvature information: second-order approximation techniques, layer-wise preconditioning, adaptive learning rates, and more. Next, the interplay between these optimization algorithms and the broader neural network training toolkit, which includes prior and recent developments such as maximal update parametrization, learning rate schedules, and exponential moving averages, emerges as equally essential to empirical success. To bridge the gap between theoretical understanding and practical deployment, this paper offers practical prescriptions and implementation strategies for integrating these methods into modern deep learning workflows. 
+ oai:arXiv.org:2512.18373v1 + cs.LG + math.OC + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Ezgi Da\u{g}tekin, Ercan Erkalkan + Ansh Nagwekar - Are Vision Language Models Cross-Cultural Theory of Mind Reasoners? - https://arxiv.org/abs/2512.17394 - arXiv:2512.17394v1 Announce Type: new -Abstract: Theory of Mind (ToM) -- the ability to attribute beliefs, desires, and emotions to others -- is fundamental for human social intelligence, yet remains a major challenge for artificial agents. Existing Vision-Language Models (VLMs) are increasingly applied in socially grounded tasks, but their capacity for cross-cultural ToM reasoning is largely unexplored. In this work, we introduce CulturalToM-VQA, a new evaluation benchmark containing 5095 questions designed to probe ToM reasoning across diverse cultural contexts through visual question answering. The dataset captures culturally grounded cues such as rituals, attire, gestures, and interpersonal dynamics, enabling systematic evaluation of ToM reasoning beyond Western-centric benchmarks. Our dataset is built through a VLM-assisted human-in-the-loop pipeline, where human experts first curate culturally rich images across traditions, rituals, and social interactions; a VLM then assist in generating structured ToM-focused scene descriptions, which are refined into question-answer pairs spanning a taxonomy of six ToM tasks and four graded complexity levels. The resulting dataset covers diverse theory of mind facets such as mental state attribution, false belief reasoning, non-literal communication, social norm violations, perspective coordination, and multi-agent reasoning. 
- oai:arXiv.org:2512.17394v1 - cs.CL - cs.CV + Sixth-order explicit one-step methods for stiff ODEs via hybrid deferred correction involving RK2 and RK4: Application to reaction-diffusion equations + https://arxiv.org/abs/2512.18377 + arXiv:2512.18377v1 Announce Type: new +Abstract: In this paper, the fourth-order explicit Runge-Kutta method (RK4) is used to make a Deferred Correction (DC) on the explicit midpoint rule, resulting in an explicit one-step method of order six of accuracy, denoted DC6RK2/4. Convergence and order of accuracy of DC6RK2/4 are proven through a deferred correction condition satisfied by the RK4. The region of absolute stability of this method contains that of a RK6 and is tangent to the region [-5.626,0[x[-4.730,4.730] of the complex plane, containing a significant part of the imaginary axis. Numerical experiments with standard test problems for stiff systems of ODEs show that DC6RK2/4 performs well on problems regarding strong non-linearity and long-term integration, and this method does not require extremely small time steps for accurate numerical solutions of stiff problems. Moreover, this method is better than standard implicit methods like the Backward Differentiation Formulae and the DC methods for the implicit midpoint rule on stiff problems for which Jacobian matrices along the solution curve have complex eigenvalues where imaginary parts have larger magnitudes than real parts. An application of DC6RK2/4 to a class of test problems for reaction-diffusion equations in one dimensional is also carried out. + oai:arXiv.org:2512.18377v1 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Saint Cyr E. R. 
Koyaguerebo-Im\'e + + + Adaptive Learning Mechanisms for Learning Management Systems: A Scoping Review and Practical Considerations + https://arxiv.org/abs/2512.18383 + arXiv:2512.18383v1 Announce Type: new +Abstract: Background: Traditional Learning Management Systems (LMS) usually offer a one-size-fits-all solution that cannot be customized to meet specific learner needs. To address this issue, adaptive learning mechanisms are integrated either by LMS-specific approaches into individual LMSs or by system-independent mechanisms into various existing LMSs to increase reusability. + Objective: We conducted a systematic review of the literature addressing the following research questions. How are adaptive learning mechanisms integrated into LMSs system-independently? How are they provided, how are they specified, and on which database do they operate? A priori, we proposed three hypotheses. First, the focused adaptive learning mechanisms, rarely consider existing data. Second, they usually support a limited number of data processing mechanisms. Third, the users intended to provide them, are rarely given the ability to adapt how they work. Furthermore, to investigate the differences between system-independent and LMS-specific approaches, we also included the latter. + Design: We used Scopus, Web of Science and Google Scholar for gray literature to identify 3370 papers published between 2003 and 2023 for screening, and conducted a snowball search. + Results: We identified 61 relevant approaches and extracted eight variables for them through in-depth reading. The results support the proposed hypotheses. 
+ Conclusion: Based on the challenges raised by the proposed hypotheses with regard to the relevant user groups, we defined two future research directions - developing a conceptual model for the system-independent specification of adaptive learning mechanisms and a corresponding architecture for the provision, and supporting the authoring of these mechanisms by users with low technical expertise. + oai:arXiv.org:2512.18383v1 cs.CY - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zabir Al Nazi, G M Shahariar, Abrar Hossain, Wei Peng + Sebastian Kucharski, Iris Braun, Gregor Damnik, Matthias W\"ahlisch - RadImageNet-VQA: A Large-Scale CT and MRI Dataset for Radiologic Visual Question Answering - https://arxiv.org/abs/2512.17396 - arXiv:2512.17396v1 Announce Type: new -Abstract: In this work, we introduce RadImageNet-VQA, a large-scale dataset designed to advance radiologic visual question answering (VQA) on CT and MRI exams. Existing medical VQA datasets are limited in scale, dominated by X-ray imaging or biomedical illustrations, and often prone to text-based shortcuts. RadImageNet-VQA is built from expert-curated annotations and provides 750K images paired with 7.5M question-answer samples. It covers three key tasks - abnormality detection, anatomy recognition, and pathology identification - spanning eight anatomical regions and 97 pathology categories, and supports open-ended, closed-ended, and multiple-choice questions. Extensive experiments show that state-of-the-art vision-language models still struggle with fine-grained pathology identification, particularly in open-ended settings and even after fine-tuning. Text-only analysis further reveals that model performance collapses to near-random without image inputs, confirming that RadImageNet-VQA is free from linguistic shortcuts. 
The full dataset and benchmark are publicly available at https://huggingface.co/datasets/raidium/RadImageNet-VQA. - oai:arXiv.org:2512.17396v1 - cs.CV + Datasets for machine learning and for assessing the intelligence level of automatic patent search systems + https://arxiv.org/abs/2512.18384 + arXiv:2512.18384v1 Announce Type: new +Abstract: The key to success in automating prior art search in patent research using artificial intelligence lies in developing large datasets for machine learning and ensuring their availability. This work is dedicated to providing a comprehensive solution to the problem of creating infrastructure for research in this field, including datasets and tools for calculating search quality criteria. The paper discusses the concept of semantic clusters of patent documents that determine the state of the art in a given subject, as proposed by the authors. A definition of such semantic clusters is also provided. Prior art search is presented as the task of identifying elements within a semantic cluster of patent documents in the subject area specified by the document under consideration. A generator of user-configurable datasets for machine learning, based on collections of U.S. and Russian patent documents, is described. The dataset generator creates a database of links to documents in semantic clusters. Then, based on user-defined parameters, it forms a dataset of semantic clusters in JSON format for machine learning. To evaluate machine learning outcomes, it is proposed to calculate search quality scores that account for semantic clusters of the documents being searched. To automate the evaluation process, the paper describes a utility developed by the authors for assessing the quality of prior art document search. 
+ oai:arXiv.org:2512.18384v1 + cs.IR cs.AI - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - L\'eo Butsanets, Charles Corbi\`ere, Julien Khlaut, Pierre Manceron, Corentin Dancette + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Boris Genin (Federal Institute of Industrial Property, Berezhkovskaya nab. 30-1, Moscow, Russian Federation), Alexander Gorbunov (Federal Institute of Industrial Property, Berezhkovskaya nab. 30-1, Moscow, Russian Federation), Dmitry Zolkin (Federal Institute of Industrial Property, Berezhkovskaya nab. 30-1, Moscow, Russian Federation), Igor Nekrasov (Federal Institute of Industrial Property, Berezhkovskaya nab. 30-1, Moscow, Russian Federation) - DeepShare: Sharing ReLU Across Channels and Layers for Efficient Private Inference - https://arxiv.org/abs/2512.17398 - arXiv:2512.17398v1 Announce Type: new -Abstract: Private Inference (PI) uses cryptographic primitives to perform privacy preserving machine learning. In this setting, the owner of the network runs inference on the data of the client without learning anything about the data and without revealing any information about the model. It has been observed that a major computational bottleneck of PI is the calculation of the gate (i.e., ReLU), so a considerable amount of effort have been devoted to reducing the number of ReLUs in a given network. - We focus on the DReLU, which is the non-linear step function of the ReLU and show that one DReLU can serve many ReLU operations. We suggest a new activation module where the DReLU operation is only performed on a subset of the channels (Prototype channels), while the rest of the channels (replicate channels) replicates the DReLU of each of their neurons from the corresponding neurons in one of the prototype channels. We then extend this idea to work across different layers. 
- We show that this formulation can drastically reduce the number of DReLU operations in resnet type network. Furthermore, our theoretical analysis shows that this new formulation can solve an extended version of the XOR problem, using just one non-linearity and two neurons, something that traditional formulations and some PI specific methods cannot achieve. We achieve new SOTA results on several classification setups, and achieve SOTA results on image segmentation. - oai:arXiv.org:2512.17398v1 - cs.LG - cs.CR - Mon, 22 Dec 2025 00:00:00 -0500 + RecurGS: Interactive Scene Modeling via Discrete-State Recurrent Gaussian Fusion + https://arxiv.org/abs/2512.18386 + arXiv:2512.18386v1 Announce Type: new +Abstract: Recent advances in 3D scene representations have enabled high-fidelity novel view synthesis, yet adapting to discrete scene changes and constructing interactive 3D environments remain open challenges in vision and robotics. Existing approaches focus solely on updating a single scene without supporting novel-state synthesis. Others rely on diffusion-based object-background decoupling that works on one state at a time and cannot fuse information across multiple observations. To address these limitations, we introduce RecurGS, a recurrent fusion framework that incrementally integrates discrete Gaussian scene states into a single evolving representation capable of interaction. RecurGS detects object-level changes across consecutive states, aligns their geometric motion using semantic correspondence and Lie-algebra based SE(3) refinement, and performs recurrent updates that preserve historical structures through replay supervision. A voxelized, visibility-aware fusion module selectively incorporates newly observed regions while keeping stable areas fixed, mitigating catastrophic forgetting and enabling efficient long-horizon updates. 
RecurGS supports object-level manipulation, synthesizes novel scene states without requiring additional scans, and maintains photorealistic fidelity across evolving environments. Extensive experiments across synthetic and real-world datasets demonstrate that our framework delivers high-quality reconstructions with substantially improved update efficiency, providing a scalable step toward continuously interactive Gaussian worlds. + oai:arXiv.org:2512.18386v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yonathan Bornfeld, Shai Avidan + Wenhao Hu, Haonan Zhou, Zesheng Li, Liu Liu, Jiacheng Dong, Zhizhong Su, Gaoang Wang - meval: A Statistical Toolbox for Fine-Grained Model Performance Analysis - https://arxiv.org/abs/2512.17409 - arXiv:2512.17409v1 Announce Type: new -Abstract: Analyzing machine learning model performance stratified by patient and recording properties is becoming the accepted norm and often yields crucial insights about important model failure modes. Performing such analyses in a statistically rigorous manner is non-trivial, however. Appropriate performance metrics must be selected that allow for valid comparisons between groups of different sample sizes and base rates; metric uncertainty must be determined and multiple comparisons be corrected for, in order to assess whether any observed differences may be purely due to chance; and in the case of intersectional analyses, mechanisms must be implemented to find the most `interesting' subgroups within combinatorially many subgroup combinations. We here present a statistical toolbox that addresses these challenges and enables practitioners to easily yet rigorously assess their models for potential subgroup performance disparities. While broadly applicable, the toolbox is specifically designed for medical imaging applications. 
The analyses provided by the toolbox are illustrated in two case studies, one in skin lesion malignancy classification on the ISIC2020 dataset and one in chest X-ray-based disease classification on the MIMIC-CXR dataset. - oai:arXiv.org:2512.17409v1 - cs.LG - stat.AP - stat.ME - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 + Exploration vs. Fixation: Scaffolding Divergent and Convergent Thinking for Human-AI Co-Creation with Generative Models + https://arxiv.org/abs/2512.18388 + arXiv:2512.18388v1 Announce Type: new +Abstract: Generative AI has begun to democratize creative work, enabling novices to produce complex artifacts such as code, images, and videos. However, in practice, existing interaction paradigms often fail to support divergent exploration: users tend to converge too quickly on early ``good enough'' results and struggle to move beyond them, leading to premature convergence and design fixation that constrains their creative potential. To address this, we propose a structured, process-oriented human-AI co-creation paradigm including divergent and convergent thinking stages, grounded in Wallas's model of creativity. To avoid design fixation, our paradigm scaffolds both high-level exploration of conceptual ideas in the early divergent thinking phase and low-level exploration of variations in the later convergent thinking phrase. We instantiate this paradigm in HAIExplore, an image co-creation system that (i) scaffolds divergent thinking through a dedicated brainstorming stage for exploring high-level ideas in a conceptual space, and (ii) scaffolds convergent refinement through an interface that externalizes users' refinement intentions as interpretable parameters and options, making the refinement process more controllable and easier to explore. We report on a within-subjects study comparing HAIExplore with a widely used linear chat interface (ChatGPT) for creative image generation. 
Our findings show that explicitly scaffolding the creative process into brainstorming and refinement stages can mitigate design fixation, improve perceived controllability and alignment with users' intentions, and better support the non-linear nature of creative work. We conclude with design implications for future creativity support tools and human-AI co-creation workflows. + oai:arXiv.org:2512.18388v1 + cs.HC + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1007/978-3-032-05870-6_19 - MICCAI FAIMI Workshop 2025 - Dishantkumar Sutariya, Eike Petersen + Chao Wen, Tung Phung, Pronita Mehrotra, Sumit Gulwani, Tomohiro Nagashima, Adish Singla - Detection and Analysis of Sensitive and Illegal Content on the Ethereum Blockchain Using Machine Learning Techniques - https://arxiv.org/abs/2512.17411 - arXiv:2512.17411v1 Announce Type: new -Abstract: Blockchain technology, lauded for its transparent and immutable nature, introduces a novel trust model. However, its decentralized structure raises concerns about potential inclusion of malicious or illegal content. This study focuses on Ethereum, presenting a data identification and restoration algorithm. Successfully recovering 175 common files, 296 images, and 91,206 texts, we employed the FastText algorithm for sentiment analysis, achieving a 0.9 accuracy after parameter tuning. Classification revealed 70,189 neutral, 5,208 positive, and 15,810 negative texts, aiding in identifying sensitive or illicit information. Leveraging the NSFWJS library, we detected seven indecent images with 100% accuracy. Our findings expose the coexistence of benign and harmful content on the Ethereum blockchain, including personal data, explicit images, divisive language, and racial discrimination. Notably, sensitive information targeted Chinese government officials. 
Proposing preventative measures, our study offers valuable insights for public comprehension of blockchain technology and regulatory agency guidance. The algorithms employed present innovative solutions to address blockchain data privacy and security concerns. - oai:arXiv.org:2512.17411v1 - cs.CR + Neural Proofs for Sound Verification and Control of Complex Systems + https://arxiv.org/abs/2512.18389 + arXiv:2512.18389v1 Announce Type: new +Abstract: This informal contribution presents an ongoing line of research that is pursuing a new approach to the construction of sound proofs for the formal verification and control of complex stochastic models of dynamical systems, of reactive programs and, more generally, of models of Cyber-Physical Systems. Neural proofs are made up of two key components: 1) proof rules encode requirements entailing the verification of general temporal specifications over the models of interest; and 2) certificates that discharge such rules, namely they are constructed from said proof rules with an inductive (that is, cyclic, repetitive) approach; this inductive approach involves: 2a) accessing samples from the model's dynamics and accordingly training neural networks, whilst 2b) generalising such networks via SAT-modulo-theory (SMT) queries that leverage the full knowledge of the models. In the context of sequential decision making problems over complex stochastic models, it is possible to additionally generate provably-correct policies/strategies/controllers, namely state-feedback functions that, in conjunction with neural certificates, formally attain the given specifications for the models of interest. 
+ oai:arXiv.org:2512.18389v1 + eess.SY cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + cs.LG + cs.LO + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Xingyu Feng + Alessandro Abate - Optimisation of Aircraft Maintenance Schedules - https://arxiv.org/abs/2512.17412 - arXiv:2512.17412v1 Announce Type: new -Abstract: We present an aircraft maintenance scheduling problem, which requires suitably qualified staff to be assigned to maintenance tasks on each aircraft. The tasks on each aircraft must be completed within a given turn around window so that the aircraft may resume revenue earning service. This paper presents an initial study based on the application of an Evolutionary Algorithm to the problem. Evolutionary Algorithms evolve a solution to a problem by evaluating many possible solutions, focusing the search on those solutions that are of a higher quality, as defined by a fitness function. In this paper, we benchmark the algorithm on 60 generated problem instances to demonstrate the underlying representation and associated genetic operators. - oai:arXiv.org:2512.17412v1 - cs.NE - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + The Challenger: When Do New Data Sources Justify Switching Machine Learning Models? + https://arxiv.org/abs/2512.18390 + arXiv:2512.18390v1 Announce Type: new +Abstract: We study the problem of deciding whether, and when an organization should replace a trained incumbent model with a challenger relying on newly available features. We develop a unified economic and statistical framework that links learning-curve dynamics, data-acquisition and retraining costs, and discounting of future gains. First, we characterize the optimal switching time in stylized settings and derive closed-form expressions that quantify how horizon length, learning-curve curvature, and cost differentials shape the optimal decision. 
Second, we propose three practical algorithms: a one-shot baseline, a greedy sequential method, and a look-ahead sequential method. Using a real-world credit-scoring dataset with gradually arriving alternative data, we show that (i) optimal switching times vary systematically with cost parameters and learning-curve behavior, and (ii) the look-ahead sequential method outperforms other methods and is able to approach in value an oracle with full foresight. Finally, we establish finite-sample guarantees, including conditions under which the sequential look-ahead method achieve sublinear regret relative to that oracle. Our results provide an operational blueprint for economically sound model transitions as new data sources become available. + oai:arXiv.org:2512.18390v1 + cs.LG + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - In E. Hart, T. Horvath, Z. Tan, & S. Thomson (Eds.), Advances in Computational Intelligence Systems. Springer 2025 - Neil Urquhart (Navid), Amir Rahimi (Navid), Efstathios-Al. Tingas + http://creativecommons.org/licenses/by/4.0/ + Vassilis Digalakis Jr, Christophe P\'erignon, S\'ebastien Saurin, Flore Sentenac - A decomposition approach for large virtual network embedding - https://arxiv.org/abs/2512.17414 - arXiv:2512.17414v1 Announce Type: new -Abstract: Virtual Network Embedding (VNE) is the core combinatorial problem of Network Slicing, a 5G technology which enables telecommunication operators to propose diverse service-dedicated virtual networks, embedded onto a common substrate network. VNE asks for a minimum-cost mapping of a virtual network on a substrate network, encompassing simultaneous node placement and edge routing decisions. On a benchmark of large virtual networks with realistic topologies we compiled, the state-of-the art heuristics often provide expensive solutions, or even fail to find a solution when resources are sparse. 
We introduce a new integer linear formulation together with a decomposition scheme based on an automatic partition of the virtual network. This results in a column generation approach whose pricing problems are also VNE problems. This method allows to compute better lower bounds than state-of-the-art methods. Finally, we devise an efficient Price-and-Branch heuristic for large instances. - oai:arXiv.org:2512.17414v1 - cs.DM - cs.NI - Mon, 22 Dec 2025 00:00:00 -0500 + AOMGen: Photoreal, Physics-Consistent Demonstration Generation for Articulated Object Manipulation + https://arxiv.org/abs/2512.18396 + arXiv:2512.18396v1 Announce Type: new +Abstract: Recent advances in Vision-Language-Action (VLA) and world-model methods have improved generalization in tasks such as robotic manipulation and object interaction. However, Successful execution of such tasks depends on large, costly collections of real demonstrations, especially for fine-grained manipulation of articulated objects. To address this, we present AOMGen, a scalable data generation framework for articulated manipulation which is instantiated from a single real scan, demonstration and a library of readily available digital assets, yielding photoreal training data with verified physical states. The framework synthesizes synchronized multi-view RGB temporally aligned with action commands and state annotations for joints and contacts, and systematically varies camera viewpoints, object styles, and object poses to expand a single execution into a diverse corpus. Experimental results demonstrate that fine-tuning VLA policies on AOMGen data increases the success rate from 0% to 88.7%, and the policies are tested on unseen objects and layouts. 
+ oai:arXiv.org:2512.18396v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Amal Benhamiche, Pierre Fouilhoux, Lucas L\'etocart, Nancy Perrot, Alexis Schneider - - - Beyond Occlusion: In Search for Near Real-Time Explainability of CNN-Based Prostate Cancer Classification - https://arxiv.org/abs/2512.17416 - arXiv:2512.17416v1 Announce Type: new -Abstract: Deep neural networks are starting to show their worth in critical applications such as assisted cancer diagnosis. However, for their outputs to get accepted in practice, the results they provide should be explainable in a way easily understood by pathologists. A well-known and widely used explanation technique is occlusion, which, however, can take a long time to compute, thus slowing the development and interaction with pathologists. In this work, we set out to find a faster replacement for occlusion in a successful system for detecting prostate cancer. Since there is no established framework for comparing the performance of various explanation methods, we first identified suitable comparison criteria and selected corresponding metrics. Based on the results, we were able to choose a different explanation method, which cut the previously required explanation time at least by a factor of 10, without any negative impact on the quality of outputs. This speedup enables rapid iteration in model development and debugging and brings us closer to adopting AI-assisted prostate cancer detection in clinical settings. We propose that our approach to finding the replacement for occlusion can be used to evaluate candidate methods in other related applications. 
- oai:arXiv.org:2512.17416v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - 10.1109/ISBI60581.2025.10980802 - 2025 IEEE 22nd International Symposium on Biomedical Imaging (ISBI) - Martin Krebs, Jan Obdr\v{z}\'alek, V\'it Musil, Tom\'a\v{s} Br\'azdil + Yulu Wu, Jiujun Cheng, Haowen Wang, Dengyang Suo, Pei Ren, Qichao Mao, Shangce Gao, Yakun Huang - SWE-Bench++: A Framework for the Scalable Generation of Software Engineering Benchmarks from Open-Source Repositories - https://arxiv.org/abs/2512.17419 - arXiv:2512.17419v1 Announce Type: new -Abstract: Benchmarks like SWE-bench have standardized the evaluation of Large Language Models (LLMs) on repository-level software engineering tasks. However, these efforts remain limited by manual curation, static datasets, and a focus on Python-based bug fixes. We introduce SWE-Bench++, an automated framework that generates repository-level coding tasks from open-source GitHub projects. Unlike synthetic approaches, our pipeline harvests live pull requests to cover both bug fixes and feature requests across 11 languages. SWE-Bench++ turns GitHub pull requests (PRs) into reproducible, execution-based tasks via four stages: programmatic sourcing, environment synthesis, test oracle extraction, and quality assurance. A final hint-guided trajectory synthesis step converts instances that strong models fail on into training trajectories. Our initial benchmark consists of 11,133 instances from 3,971 repositories across 11 languages. On a subset of 1,782 instances of this benchmark, today's strongest models perform as follows: claude-sonnet-4.5 achieves 36.20% pass@10, gpt-5-2025-08-07 34.57%, gemini/gemini-2.5-pro 24.92%, and gpt-4o 16.89%. We further demonstrate the utility of our dataset by showing that fine-tuning on SWE-Bench++ instances yields measurable improvements on the SWE-bench Multilingual benchmark. 
SWE-Bench++ provides a scalable, multilingual benchmark for evaluating and improving repository-level code generation. - oai:arXiv.org:2512.17419v1 - cs.SE - cs.AI + AraToken: Optimizing Arabic Tokenization with Normalization Pipeline and Language Extension for Qwen3 + https://arxiv.org/abs/2512.18399 + arXiv:2512.18399v1 Announce Type: new +Abstract: Tokenization is a critical preprocessing step for large language models (LLMs), directly impacting training efficiency and downstream performance. General-purpose tokenizers trained predominantly on English and Latin-script languages exhibit suboptimal performance on morphologically rich languages such as Arabic, resulting in inflated token sequences and reduced compression efficiency. In this work, we present AraToken, an Arabic-optimized tokenizer built on SentencePiece Unigram algorithm with a comprehensive normalization pipeline addressing Arabic-specific orthographic variations including Alif variants, diacritics, and Arabic-Indic numerals. We systematically compare BPE, WordPiece, and SentencePiece algorithms across multiple configurations, demonstrating that SentencePiece with normalization achieves 18% lower fertility (1.199 vs 1.35 tokens/word) compared to unnormalized baselines. Furthermore, we introduce the Language Extension Pipeline (LEP), a method for integrating the optimized tokenizer into Qwen3-0.6B through vocabulary extension with mean subtoken initialization and selective transformer layer unfreezing. Our experiments show that LEP reduces evaluation loss from 8.28 to 2.43 within 800 training steps on 100K Arabic samples. We release our tokenizer, training scripts, and model checkpoints to facilitate Arabic NLP research. 
+ oai:arXiv.org:2512.18399v1 cs.CL - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Lilin Wang, Lucas Ramalho, Alan Celestino, Phuc Anthony Pham, Yu Liu, Umang Kumar Sinha, Andres Portillo, Onassis Osunwa, Gabriel Maduekwe + Mark Kashirskiy, Artiom Lipinski, Ilya Makarov - Personalized Gait Patterns During Exoskeleton-Aided Training May Have Minimal Effect on User Experience. Insights from a Pilot Study - https://arxiv.org/abs/2512.17425 - arXiv:2512.17425v1 Announce Type: new -Abstract: Robot-aided gait rehabilitation facilitates high-intensity and repeatable therapy. However, most exoskeletons rely on pre-recorded, non-personalized gait trajectories constrained to the sagittal plane, potentially limiting movement naturalness and user comfort. We present a data-driven gait personalization framework for an exoskeleton that supports multi-planar motion, including hip abduction/adduction and pelvic translation and rotation. Personalized trajectories to individual participants were generated using regression models trained on anthropometric, demographic, and walking speed data from a normative database. In a within-subject experiment involving ten unimpaired participants, these personalized trajectories were evaluated in regard to comfort, naturalness, and overall experience and compared against two standard patterns from the same database: one averaging all the trajectories, and one randomly selected. We did not find relevant differences across pattern conditions, despite all trajectories being executed with high accuracy thanks to a stiff position-derivative controller. We found, however, that pattern conditions in later trials were rated as more comfortable and natural than those in the first trial, suggesting that participants might have adapted to walking within the exoskeleton, regardless of the enforced gait pattern. 
Our findings highlight the importance of integrating subjective feedback when designing personalized gait controllers and accounting for user adaptation during experimentation. - oai:arXiv.org:2512.17425v1 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 + Towards Scalable Visual Data Wrangling via Direct Manipulation + https://arxiv.org/abs/2512.18405 + arXiv:2512.18405v1 Announce Type: new +Abstract: Data wrangling - the process of cleaning, transforming, and preparing data for analysis - is a well-known bottleneck in data science workflows. Existing tools either rely on manual scripting, which is error-prone and hard to debug, or automate cleaning through opaque black-box pipelines that offer limited control. We present Buckaroo, a scalable visual data wrangling system that restructures data preparation as a direct manipulation task over visualizations. Buckaroo enables users to explore and repair data anomalies - such as missing values, outliers, and type mismatches - by interacting directly with coordinated data visualizations. The system extensibly supports user-defined error detectors and wranglers, tracks provenance for undo/redo, and generates reproducible scripts for downstream tasks. Buckaroo maintains efficient indexing data structures and differential storage to localize anomaly detection and minimize recomputation. To demonstrate the applicability of our model, Buckaroo is integrated with the \textit{Hopara} pan-and-zoom engine, which enables multi-layered navigation over large datasets without sacrificing interactivity. Through empirical evaluation and an expert review, we show that Buckaroo makes visual data wrangling scalable - bridging the gap between visual inspection and programmable repairs. 
+ oai:arXiv.org:2512.18405v1 + cs.DB + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Beatrice Luciani, Katherine Lin Poggensee, Heike Vallery, Alex van den Berg, Severin David Woernle, Mostafa Mogharabi, Stefano Dalla Gasperina, Laura Marchal-Crespo + El Kindi Rezig, Mir Mahathir Mohammad, Nicolas Baret, Ricardo Mayerhofer, Andrew McNutt, Paul Rosen - Democratizing Scalable Cloud Applications: Transactional Stateful Functions on Streaming Dataflows - https://arxiv.org/abs/2512.17429 - arXiv:2512.17429v1 Announce Type: new -Abstract: Web applications underpin much of modern digital life, yet building scalable and consistent cloud applications remains difficult, requiring expertise across cloud computing, distributed systems, databases, and software engineering. These demands restrict development to a small number of highly specialized engineers. This thesis aims to democratize cloud application development by addressing three challenges: programmability, high-performance fault-tolerant serializable transactions, and serverless semantics. - The thesis identifies strong parallels between cloud applications and the streaming dataflow execution model. It first explores this connection through T-Statefun, a transactional extension of Apache Flink Statefun, demonstrating that dataflow systems can support transactional cloud applications via a stateful functions-as-a-service API. However, this approach revealed significant limitations in programmability and performance. - To overcome these issues, the thesis introduces Stateflow, a high-level object-oriented programming model that compiles applications into stateful dataflow graphs with minimal boilerplate. Building on this model, the thesis presents Styx, a distributed streaming dataflow engine that provides deterministic, multi-partition, serializable transactions with strong fault tolerance guarantees. 
Styx eliminates explicit transaction failure handling and significantly outperforms state-of-the-art systems. - Finally, the thesis extends Styx with transactional state migration to support elasticity under dynamic workloads. - oai:arXiv.org:2512.17429v1 - cs.DB - cs.DC - Mon, 22 Dec 2025 00:00:00 -0500 + Automated Mosaic Tesserae Segmentation via Deep Learning Techniques + https://arxiv.org/abs/2512.18406 + arXiv:2512.18406v1 Announce Type: new +Abstract: Art is widely recognized as a reflection of civilization and mosaics represent an important part of cultural heritage. Mosaics are an ancient art form created by arranging small pieces, called tesserae, on a surface using adhesive. Due to their age and fragility, they are prone to damage, highlighting the need for digital preservation. This paper addresses the problem of digitizing mosaics by segmenting the tesserae to separate them from the background within the broader field of Image Segmentation in Computer Vision. We propose a method leveraging Segment Anything Model 2 (SAM 2) by Meta AI, a foundation model that outperforms most conventional segmentation models, to automatically segment mosaics. Due to the limited open datasets in the field, we also create an annotated dataset of mosaic images to fine-tune and evaluate the model. Quantitative evaluation on our testing dataset shows notable improvements compared to the baseline SAM 2 model, with Intersection over Union increasing from 89.00% to 91.02% and Recall from 92.12% to 95.89%. Additionally, on a benchmark proposed by a prior approach, our model achieves an F-measure 3% higher than previous methods and reduces the error in the absolute difference between predicted and actual tesserae from 0.20 to just 0.02. The notable performance of the fine-tuned SAM 2 model together with the newly annotated dataset can pave the way for real-time segmentation of mosaic images. 
+ oai:arXiv.org:2512.18406v1 + cs.CV + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - 10.4233/uuid:837e043a-c6e3-4f87-a9b1-a59a9ade65f7 - Kyriakos Psarakis + 10.1109/IST66504.2025.11268445 + Charilaos Kapelonis, Marios Antonakakis, Konstantinos Politof, Aristomenis Antoniadis, Michalis Zervakis - AIFloodSense: A Global Aerial Imagery Dataset for Semantic Segmentation and Understanding of Flooded Environments - https://arxiv.org/abs/2512.17432 - arXiv:2512.17432v1 Announce Type: new -Abstract: Accurate flood detection from visual data is a critical step toward improving disaster response and risk assessment, yet datasets for flood segmentation remain scarce due to the challenges of collecting and annotating large-scale imagery. Existing resources are often limited in geographic scope and annotation detail, hindering the development of robust, generalized computer vision methods. To bridge this gap, we introduce AIFloodSense, a comprehensive, publicly available aerial imagery dataset comprising 470 high-resolution images from 230 distinct flood events across 64 countries and six continents. Unlike prior benchmarks, AIFloodSense ensures global diversity and temporal relevance (2022-2024), supporting three complementary tasks: (i) Image Classification with novel sub-tasks for environment type, camera angle, and continent recognition; (ii) Semantic Segmentation providing precise pixel-level masks for flood, sky, and buildings; and (iii) Visual Question Answering (VQA) to enable natural language reasoning for disaster assessment. We establish baseline benchmarks for all tasks using state-of-the-art architectures, demonstrating the dataset's complexity and its value in advancing domain-generalized AI tools for climate resilience. 
- oai:arXiv.org:2512.17432v1 + Through the PRISm: Importance-Aware Scene Graphs for Image Retrieval + https://arxiv.org/abs/2512.18407 + arXiv:2512.18407v1 Announce Type: new +Abstract: Accurately retrieving images that are semantically similar remains a fundamental challenge in computer vision, as traditional methods often fail to capture the relational and contextual nuances of a scene. We introduce PRISm (Pruning-based Image Retrieval via Importance Prediction on Semantic Graphs), a multimodal framework that advances image-to-image retrieval through two novel components. First, the Importance Prediction Module identifies and retains the most critical objects and relational triplets within an image while pruning irrelevant elements. Second, the Edge-Aware Graph Neural Network explicitly encodes relational structure and integrates global visual features to produce semantically informed image embeddings. PRISm achieves image retrieval that closely aligns with human perception by explicitly modeling the semantic importance of objects and their interactions, capabilities largely absent in prior approaches. Its architecture effectively combines relational reasoning with visual representation, enabling semantically grounded retrieval. Extensive experiments on benchmark and real-world datasets demonstrate consistently superior top-ranked performance, while qualitative analyses show that PRISm accurately captures key objects and interactions, producing interpretable and semantically meaningful results. 
+ oai:arXiv.org:2512.18407v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Georgios Simantiris, Konstantinos Bacharidis, Apostolos Papanikolaou, Petros Giannakakis, Costas Panagiotakis + Dimitrios Georgoulopoulos, Nikolaos Chaidos, Angeliki Dimitriou, Giorgos Stamou - ImagineNav++: Prompting Vision-Language Models as Embodied Navigator through Scene Imagination - https://arxiv.org/abs/2512.17435 - arXiv:2512.17435v1 Announce Type: new -Abstract: Visual navigation is a fundamental capability for autonomous home-assistance robots, enabling long-horizon tasks such as object search. While recent methods have leveraged Large Language Models (LLMs) to incorporate commonsense reasoning and improve exploration efficiency, their planning remains constrained by textual representations, which cannot adequately capture spatial occupancy or scene geometry--critical factors for navigation decisions. We explore whether Vision-Language Models (VLMs) can achieve mapless visual navigation using only onboard RGB/RGB-D streams, unlocking their potential for spatial perception and planning. We achieve this through an imagination-powered navigation framework, ImagineNav++, which imagines future observation images from candidate robot views and translates navigation planning into a simple best-view image selection problem for VLMs. First, a future-view imagination module distills human navigation preferences to generate semantically meaningful viewpoints with high exploration potential. These imagined views then serve as visual prompts for the VLM to identify the most informative viewpoint. To maintain spatial consistency, we develop a selective foveation memory mechanism, which hierarchically integrates keyframe observations via a sparse-to-dense framework, constructing a compact yet comprehensive memory for long-term spatial reasoning. 
This approach transforms goal-oriented navigation into a series of tractable point-goal navigation tasks. Extensive experiments on open-vocabulary object and instance navigation benchmarks show that ImagineNav++ achieves SOTA performance in mapless settings, even surpassing most map-based methods, highlighting the importance of scene imagination and memory in VLM-based spatial reasoning. - oai:arXiv.org:2512.17435v1 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 + Why Most Optimism Bandit Algorithms Have the Same Regret Analysis: A Simple Unifying Theorem + https://arxiv.org/abs/2512.18409 + arXiv:2512.18409v1 Announce Type: new +Abstract: Several optimism-based stochastic bandit algorithms -- including UCB, UCB-V, linear UCB, and finite-arm GP-UCB -- achieve logarithmic regret using proofs that, despite superficial differences, follow essentially the same structure. This note isolates the minimal ingredients behind these analyses: a single high-probability concentration condition on the estimators, after which logarithmic regret follows from two short deterministic lemmas describing radius collapse and optimism-forced deviations. The framework yields unified, near-minimal proofs for these classical algorithms and extends naturally to many contemporary bandit variants. + oai:arXiv.org:2512.18409v1 + cs.LG + cs.SY + eess.SY + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Teng Wang, Xinxin Zhao, Wenzhe Cai, Changyin Sun + Vikram Krishnamurthy - Xiaomi MiMo-VL-Miloco Technical Report - https://arxiv.org/abs/2512.17436 - arXiv:2512.17436v1 Announce Type: new -Abstract: We open-source \textbf{MiMo-VL-Miloco-7B} and its quantized variant \textbf{MiMo-VL-Miloco-7B-GGUF}, a pair of home-centric vision-language models that achieve strong performance on both home-scenario understanding and general multimodal reasoning. 
Built on the MiMo-VL-7B backbone, MiMo-VL-Miloco-7B is specialized for smart-home environments, attaining leading F1 scores on gesture recognition and common home-scenario understanding, while also delivering consistent gains across video benchmarks such as Video-MME, Video-MMMU, and Charades-STA, as well as language understanding benchmarks including MMMU-Pro and MMLU-Pro. In our experiments, MiMo-VL-Miloco-7B outperforms strong closed-source and open-source baselines on home-scenario understanding and several multimodal reasoning benchmarks. To balance specialization and generality, we design a two-stage training pipeline that combines supervised fine-tuning with reinforcement learning based on Group Relative Policy Optimization, leveraging efficient multi-domain data. We further incorporate chain-of-thought supervision and token-budget-aware reasoning, enabling the model to learn knowledge in a data-efficient manner while also performing reasoning efficiently. Our analysis shows that targeted home-scenario training not only enhances activity and gesture understanding, but also improves text-only reasoning with only modest trade-offs on document-centric tasks. Model checkpoints, quantized GGUF weights, and our home-scenario evaluation toolkit are publicly available at \href{https://github.com/XiaoMi/xiaomi-mimo-vl-miloco}{https://github.com/XiaoMi/xiaomi-mimo-vl-miloco} to support research and deployment in real-world smart-home applications. - oai:arXiv.org:2512.17436v1 + AmPLe: Supporting Vision-Language Models via Adaptive-Debiased Ensemble Multi-Prompt Learning + https://arxiv.org/abs/2512.18411 + arXiv:2512.18411v1 Announce Type: new +Abstract: Multi-prompt learning methods have emerged as an effective approach for facilitating the rapid adaptation of vision-language models to downstream tasks with limited resources. 
Existing multi-prompt learning methods primarily focus on utilizing various meticulously designed prompts within a single foundation vision-language model to achieve superior performance. However, the overlooked model-prompt matching bias hinders the development of multi-prompt learning, i.e., the same prompt can convey different semantics across distinct vision-language models, such as CLIP-ViT-B/16 and CLIP-ViT-B/32, resulting in inconsistent predictions for the identical prompt. To mitigate the impact of this bias on downstream tasks, we explore an ensemble learning approach to sufficiently aggregate the benefits of diverse predictions. Additionally, we further disclose the presence of sample-prompt matching bias, which originates from the prompt-irrelevant semantics encapsulated in the input samples. Thus, directly utilizing all information from the input samples for generating weights of ensemble learning can lead to suboptimal performance. In response, we extract prompt-relevant semantics from input samples by leveraging the guidance of the information theory-based analysis, adaptively calculating debiased ensemble weights. Overall, we propose Adaptive-Debiased Ensemble MultiPrompt Learning, abbreviated as AmPLe, to mitigate the two types of bias simultaneously. Extensive experiments on three representative tasks, i.e., generalization to novel classes, new target datasets, and unseen domain shifts, show that AmPLe can widely outperform existing methods. Theoretical validation from a causal perspective further supports the effectiveness of AmPLe. 
+ oai:arXiv.org:2512.18411v1
 cs.CV
 cs.AI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Fei Song, Yi Li, Jiangmeng Li, Rui Wang, Changwen Zheng, Fanjiang Xu, Hui Xiong
 
 
 - A Systematic Reproducibility Study of BSARec for Sequential Recommendation
 - https://arxiv.org/abs/2512.17442
 - arXiv:2512.17442v1 Announce Type: new
-Abstract: In sequential recommendation (SR), the self-attention mechanism of Transformer-based models acts as a low-pass filter, limiting their ability to capture high-frequency signals that reflect short-term user interests. To overcome this, BSARec augments the Transformer encoder with a frequency layer that rescales high-frequency components using the Fourier transform. However, the overall effectiveness of BSARec and the roles of its individual components have yet to be systematically validated. We reproduce BSARec and show that it outperforms other SR methods on some datasets. To empirically assess whether BSARec improves performance on high-frequency signals, we propose a metric to quantify user history frequency and evaluate SR methods across different user groups. We compare digital signal processing (DSP) techniques and find that the discrete wavelet transform (DWT) offers only slight improvements over Fourier transforms, and DSP methods provide no clear advantage over simple residual connections. Finally, we explore padding strategies and find that non-constant padding significantly improves recommendation performance, whereas constant padding hinders the frequency rescaler's ability to capture high-frequency signals. 
- oai:arXiv.org:2512.17442v1 - cs.IR + Few-Shot Learning of a Graph-Based Neural Network Model Without Backpropagation + https://arxiv.org/abs/2512.18412 + arXiv:2512.18412v1 Announce Type: new +Abstract: We propose a structural-graph approach to classifying contour images in a few-shot regime without using backpropagation. The core idea is to make structure the carrier of explanations: an image is encoded as an attributed graph (critical points and lines represented as nodes with geometric attributes), and generalization is achieved via the formation of concept attractors (class-level concept graphs). Purpose. To design and experimentally validate an architecture in which class concepts are formed from a handful of examples (5 - 6 per class) through structural and parametric reductions, providing transparent decisions and eliminating backpropagation. Methods. Contour vectorization is followed by constructing a bipartite graph (Point/Line as nodes) with normalized geometric attributes such as coordinates, length, angle, and direction; reductions include the elimination of unstable substructures or noise and the alignment of paths between critical points. Concepts are formed by iterative composition of samples, and classification is performed by selecting the best graph-to-concept match (using approximated GED). Results. On an MNIST subset with 5 - 6 base examples per class (single epoch), we obtain a consistent accuracy of around 82% with full traceability of decisions: misclassifications can be explained by explicit structural similarities. An indicative comparison with SVM, MLP, CNN, as well as metric and meta-learning baselines, is provided. The structural-graph scheme with concept attractors enables few-shot learning without backpropagation and offers built-in explanations through the explicit graph structure. 
Limitations concern the computational cost of GED and the quality of skeletonization; promising directions include classification-algorithm optimization, work with static scenes, and associative recognition. + oai:arXiv.org:2512.18412v1 cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Jan Hutter, Hua Chang Bakker, Stan Fris, Madelon Bernardy, Yuanna Liu + http://creativecommons.org/licenses/by-sa/4.0/ + Mykyta Lapin, Kostiantyn Bokhan, Yurii Parzhyn - Assessing Long-Term Electricity Market Design for Ambitious Decarbonization Targets using Multi-Agent Reinforcement Learning - https://arxiv.org/abs/2512.17444 - arXiv:2512.17444v1 Announce Type: new -Abstract: Electricity systems are key to transforming today's society into a carbon-free economy. Long-term electricity market mechanisms, including auctions, support schemes, and other policy instruments, are critical in shaping the electricity generation mix. In light of the need for more advanced tools to support policymakers and other stakeholders in designing, testing, and evaluating long-term markets, this work presents a multi-agent reinforcement learning model capable of capturing the key features of decarbonizing energy systems. Profit-maximizing generation companies make investment decisions in the wholesale electricity market, responding to system needs, competitive dynamics, and policy signals. The model employs independent proximal policy optimization, which was selected for suitability to the decentralized and competitive environment. Nevertheless, given the inherent challenges of independent learning in multi-agent settings, an extensive hyperparameter search ensures that decentralized training yields market outcomes consistent with competitive behavior. The model is applied to a stylized version of the Italian electricity system and tested under varying levels of competition, market designs, and policy scenarios. 
Results highlight the critical role of market design for decarbonizing the electricity sector and avoiding price volatility. The proposed framework allows assessing long-term electricity markets in which multiple policy and market mechanisms interact simultaneously, with market participants responding and adapting to decarbonization pathways. - oai:arXiv.org:2512.17444v1 - cs.LG - cs.AI - cs.NE - econ.GN - q-fin.EC - Mon, 22 Dec 2025 00:00:00 -0500 + Listening to the Mind: Earable Acoustic Sensing of Cognitive Load + https://arxiv.org/abs/2512.18413 + arXiv:2512.18413v1 Announce Type: new +Abstract: Earable acoustic sensing offers a powerful and non-invasive modality for capturing fine-grained auditory and physiological signals directly from the ear canal, enabling continuous and context-aware monitoring of cognitive states. As earable devices become increasingly embedded in daily life, they provide a unique opportunity to sense mental effort and perceptual load in real time through auditory interactions. In this study, we present the first investigation of cognitive load inference through auditory perception using acoustic signals captured by off-the-shelf in-ear devices. We designed speech-based listening tasks to induce varying levels of cognitive load, while concurrently embedding acoustic stimuli to evoke Stimulus Frequency Otoacoustic Emission (SFOAEs) as a proxy for cochlear responsiveness. Statistical analysis revealed a significant association (p < 0.01) between increased cognitive load and changes in auditory sensitivity, with 63.2% of participants showing peak sensitivity at 3 kHz. Notably, sensitivity patterns also varied across demographic subgroups, suggesting opportunities for personalized sensing. 
Our findings demonstrate that earable acoustic sensing can support scalable, real-time cognitive load monitoring in natural settings, laying a foundation for future applications in augmented cognition, where everyday auditory technologies adapt to and support the user's mental health.
 + oai:arXiv.org:2512.18413v1
 cs.HC
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Xijia Wei, Ting Dang, Khaldoon Al-Naimi, Yang Liu, Fahim Kawsar, Alessandro Montanari
 
 
 - LangDriveCTRL: Natural Language Controllable Driving Scene Editing with Multi-modal Agents
 - https://arxiv.org/abs/2512.17445
 - arXiv:2512.17445v1 Announce Type: new
-Abstract: LangDriveCTRL is a natural-language-controllable framework for editing real-world driving videos to synthesize diverse traffic scenarios. It leverages explicit 3D scene decomposition to represent driving videos as a scene graph, containing static background and dynamic objects. To enable fine-grained editing and realism, it incorporates an agentic pipeline in which an Orchestrator transforms user instructions into execution graphs that coordinate specialized agents and tools. Specifically, an Object Grounding Agent establishes correspondence between free-form text descriptions and target object nodes in the scene graph; a Behavior Editing Agent generates multi-object trajectories from language instructions; and a Behavior Reviewer Agent iteratively reviews and refines the generated trajectories. The edited scene graph is rendered and then refined using a video diffusion tool to address artifacts introduced by object insertion and significant view changes. LangDriveCTRL supports both object node editing (removal, insertion and replacement) and multi-object behavior editing from a single natural-language instruction. 
Quantitatively, it achieves nearly $2\times$ higher instruction alignment than the previous SoTA, with superior structural preservation, photorealism, and traffic realism. Project page is available at: https://yunhe24.github.io/langdrivectrl/. - oai:arXiv.org:2512.17445v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Constant Approximation of Arboricity in Near-Optimal Sublinear Time + https://arxiv.org/abs/2512.18416 + arXiv:2512.18416v1 Announce Type: new +Abstract: We present a randomized algorithm that computes a constant approximation of a graph's arboricity, using $\tilde{O}(n/\lambda)$ queries to adjacency lists and in the same time bound. Here, $n$ and $\lambda$ denote the number of nodes and the graph's arboricity, respectively. The $\tilde{O}(n/\lambda)$ query complexity of our algorithm is nearly optimal. Our constant approximation settles a question of Eden, Mossel, and Ron [SODA'22], who achieved an $O(\log^2 n)$ approximation with the same query and time complexity and asked whether a better approximation can be achieved using near-optimal query complexity. + A key technical challenge in the problem is due to recursive algorithms based on probabilistic samplings, each with a non-negligible error probability. In our case, many of the recursions invoked could have bad probabilistic samples and result in high query complexities. The particular difficulty is that those bad recursions are not easy or cheap to detect and discard. Our approach runs multiple recursions in parallel, to attenuate the error probability, using a careful \textit{scheduling mechanism} that manages the speed at which each of them progresses and makes our overall query complexity competitive with the single good recursion. We find this usage of parallelism and scheduling in a sublinear algorithm remarkable, and we are hopeful that similar ideas may find applications in a wider range of sublinear algorithms that rely on probabilistic recursions. 
+ oai:arXiv.org:2512.18416v1 + cs.DS + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Jiangqi Dai, Mohsen Ghaffari, Julian Portmann + + + A least-squares meshfree method for the incompressible Navier-Stokes equations: A satisfactory solenoidal velocity field via a staggered-variable arrangement + https://arxiv.org/abs/2512.18422 + arXiv:2512.18422v1 Announce Type: new +Abstract: Incompressible flow solvers based on strong-form meshfree methods represent arbitrary geometries without the need for a global mesh system. However, their local evaluations make it difficult to satisfy incompressibility at the discrete level. Moreover, the collocated arrangement of velocity and pressure variables tends to induce a zero-energy mode, leading to decoupling between the two variables. In projection-based approaches, a spatial discretization scheme based on a conventional node-based moving least-squares method for the pressure causes inconsistency between the discrete operators on both sides of the Poisson equation. Thus, a solenoidal velocity field cannot be ensured numerically. In this study, a numerical method for the incompressible Navier-Stokes equations is developed by introducing a local primal-dual grid into the mesh-constrained discrete point method, enabling consistent discrete operators. The \textit{virtual} dual cell constructed is based on the local connectivity among nodes, and therefore our method remains truly meshfree. To achieve a consistent coupling between velocity and pressure variables under the primal-dual arrangement, time evolution converting is applied to evolve the velocity on cell interfaces. For numerical validation, a linear acoustic equation is solved to confirm the effectiveness of the staggered-variable arrangement based on the local primal-dual grid. 
Then, incompressible Navier-Stokes equations are solved, and the proposed method is demonstrated to satisfy the condition of a solenoidal velocity field at the discrete level, achieve the expected spatial convergence order, and accurately reproduce flow features over a wide range of Reynolds numbers. + oai:arXiv.org:2512.18422v1 + math.NA + cs.NA + physics.flu-dyn + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yun He, Francesco Pittaluga, Ziyu Jiang, Matthias Zwicker, Manmohan Chandraker, Zaid Tasneem + Takeharu Matsuda, Satoshi Ii - VAIR: Visual Analytics for Injury Risk Exploration in Sports - https://arxiv.org/abs/2512.17446 - arXiv:2512.17446v1 Announce Type: new -Abstract: Injury prevention in sports requires understanding how bio-mechanical risks emerge from movement patterns captured in real-world scenarios. However, identifying and interpreting injury prone events from raw video remains difficult and time-consuming. We present VAIR, a visual analytics system that supports injury risk analysis using 3D human motion reconstructed from sports video. VAIR combines pose estimation, bio-mechanical simulation, and synchronized visualizations to help users explore how joint-level risk indicators evolve over time. Domain experts can inspect movement segments through temporally aligned joint angles, angular velocity, and internal forces to detect patterns associated with known injury mechanisms. Through case studies involving Achilles tendon and Anterior cruciate ligament (ACL) injuries in basketball, we show that VAIR enables more efficient identification and interpretation of risky movements. Expert feedback confirms that VAIR improves diagnostic reasoning and supports both retrospective analysis and proactive intervention planning. 
- oai:arXiv.org:2512.17446v1 - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 + MoE Pathfinder: Trajectory-driven Expert Pruning + https://arxiv.org/abs/2512.18425 + arXiv:2512.18425v1 Announce Type: new +Abstract: Mixture-of-experts (MoE) architectures used in large language models (LLMs) achieve state-of-the-art performance across diverse tasks yet face practical challenges such as deployment complexity and low activation efficiency. Expert pruning has thus emerged as a promising solution to reduce computational overhead and simplify the deployment of MoE models. However, existing expert pruning approaches conventionally rely on local importance metrics and often apply uniform layer-wise pruning, leveraging only partial evaluation signals and overlooking the heterogeneous contributions of experts across layers. To address these limitations, we propose an expert pruning approach based on the trajectory of activated experts across layers, which treats MoE as a weighted computation graph and casts expert selection as a global optimal path planning problem. Within this framework, we integrate complementary importance signals from reconstruction error, routing probabilities, and activation strength at the trajectory level, which naturally yields non-uniform expert retention across layers. Experiments show that our approach achieves superior pruning performance on nearly all tasks compared with most existing approaches. 
+ oai:arXiv.org:2512.18425v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Xican Yang, Yuanhe Tian, Yan Song + + + Virtual Resistance-Based Control for Grid-Connected Inverters using Persidskii Systems Approach + https://arxiv.org/abs/2512.18428 + arXiv:2512.18428v1 Announce Type: new +Abstract: This work addresses virtual resistance (VR)-based control for grid-connected inverters, which enhances transient damping, reduces steady-state errors, and improves robustness to grid disturbances without requiring additional voltage sensors. Classical passivity-based VR control is robust, but limited by restrictive sector bounds on nonlinearities. We extend these bounds and model the closed-loop system as a generalized Persidskii-type nonlinear system. Using this framework, we derive input-to-state stability (ISS) conditions that account for the extended nonlinearities and external disturbances, providing a systematic and less conservative approach to VR control design under practical operating conditions, which is validated through extensive simulations. + oai:arXiv.org:2512.18428v1 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Chakib Chatri, Ajul Dinesh, Moussa Labbadi - MULTIAQUA: A multimodal maritime dataset and robust training strategies for multimodal semantic segmentation - https://arxiv.org/abs/2512.17450 - arXiv:2512.17450v1 Announce Type: new -Abstract: Unmanned surface vehicles can encounter a number of varied visual circumstances during operation, some of which can be very difficult to interpret. While most cases can be solved only using color camera images, some weather and lighting conditions require additional information. To expand the available maritime data, we present a novel multimodal maritime dataset MULTIAQUA (Multimodal Aquatic Dataset). 
Our dataset contains synchronized, calibrated and annotated data captured by sensors of different modalities, such as RGB, thermal, IR, LIDAR, etc. The dataset is aimed at developing supervised methods that can extract useful information from these modalities in order to provide a high quality of scene interpretation regardless of potentially poor visibility conditions. To illustrate the benefits of the proposed dataset, we evaluate several multimodal methods on our difficult nighttime test set. We present training approaches that enable multimodal methods to be trained in a more robust way, thus enabling them to retain reliable performance even in near-complete darkness. Our approach allows for training a robust deep neural network only using daytime images, thus significantly simplifying data acquisition, annotation, and the training process. - oai:arXiv.org:2512.17450v1 + E-RGB-D: Real-Time Event-Based Perception with Structured Light + https://arxiv.org/abs/2512.18429 + arXiv:2512.18429v1 Announce Type: new +Abstract: Event-based cameras (ECs) have emerged as bio-inspired sensors that report pixel brightness changes asynchronously, offering unmatched speed and efficiency in vision sensing. Despite their high dynamic range, temporal resolution, low power consumption, and computational simplicity, traditional monochrome ECs face limitations in detecting static or slowly moving objects and lack color information essential for certain applications. To address these challenges, we present a novel approach that integrates a Digital Light Processing (DLP) projector, forming Active Structured Light (ASL) for RGB-D sensing. By combining the benefits of ECs and projection-based techniques, our method enables the detection of color and the depth of each pixel separately. Dynamic projection adjustments optimize bandwidth, ensuring selective color data acquisition and yielding colorful point clouds without sacrificing spatial resolution. 
This integration, facilitated by a commercial TI LightCrafter 4500 projector and a monocular monochrome EC, not only enables frameless RGB-D sensing applications but also achieves remarkable performance milestones. With our approach, we achieved a color detection speed equivalent to 1400 fps and 4 kHz of pixel depth detection, significantly advancing the realm of computer vision across diverse fields from robotics to 3D reconstruction methods. Our code is publicly available: https://github.com/MISTLab/event_based_rgbd_ros + oai:arXiv.org:2512.18429v1 cs.CV - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + eess.IV + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jon Muhovi\v{c}, Janez Per\v{s} + Seyed Ehsan Marjani Bajestani, Giovanni Beltrame - Learning What to Write: Write-Gated KV for Efficient Long-Context Inference - https://arxiv.org/abs/2512.17452 - arXiv:2512.17452v1 Announce Type: new -Abstract: Long-context LLM inference is bottlenecked by the quadratic attention complexity and linear KV cache growth. Prior approaches mitigate this via post-hoc selection or eviction but overlook the root inefficiency: indiscriminate writing to persistent memory. In this paper, we formalize KV cache management as a causal system of three primitives: KV Admission, Selection, and Eviction. We instantiate KV Admission via Write-Gated KV, a lightweight mechanism that learns to predict token utility before it enters the cache. By filtering out low-utility states early to maintain a compact global cache alongside a sliding local cache, Write-Gated KV reduces memory usage by 46-57% and delivers 3.03-3.45$\times$ prefill and 1.89-2.56$\times$ decode speedups on Llama model with negligible accuracy loss, all while remaining compatible with FlashAttention and paged-KV systems. These results demonstrate that learning what to write, is a principled and practical recipe for efficient long-context inference. 
Code is available at https://github.com/EMCLab-Sinica/WG-KV . - oai:arXiv.org:2512.17452v1 - cs.LG - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Yen-Chieh Huang, Rui Fang, Ming-Syan Chen, Pi-Cheng Hsiu + On Hyperexponential Stabilization of Linear Infinite-Dimensional Systems + https://arxiv.org/abs/2512.18430 + arXiv:2512.18430v1 Announce Type: new +Abstract: This paper studies the hyperexponential stabilization of linear infinite-dimensional systems on a Hilbert space by a distributed time-dependent control law. The well-posedness of the closed loop for all times is obtained through the use of a maximal monotone operator. The hyperexponential stability and the ISS property of the closed loop are established using Lyapunov analysis and a time-scale transformation. + oai:arXiv.org:2512.18430v1 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Moussa Labbadi, Christophe Roman - A lightweight Spatial-Temporal Graph Neural Network for Long-term Time Series Forecasting - https://arxiv.org/abs/2512.17453 - arXiv:2512.17453v1 Announce Type: new -Abstract: We propose Lite-STGNN, a lightweight spatial-temporal graph neural network for long-term multivariate forecasting that integrates decomposition-based temporal modeling with learnable sparse graph structure. The temporal module applies trend-seasonal decomposition, while the spatial module performs message passing with low-rank Top-$K$ adjacency learning and conservative horizon-wise gating, enabling spatial corrections that enhance a strong linear baseline. Lite-STGNN achieves state-of-the-art accuracy on four benchmark datasets for horizons up to 720 steps, while being parameter-efficient and substantially faster to train than transformer-based methods. Ablation studies show that the spatial module yields 4.6% improvement over the temporal baseline, Top-$K$ enhances locality by 3.3%, and learned adjacency matrices reveal domain-specific interaction dynamics. 
Lite-STGNN thus offers a compact, interpretable, and efficient framework for long-term multivariate time series forecasting. - oai:arXiv.org:2512.17453v1 - cs.LG + Federated Learning Based Decentralized Adaptive Intelligent Transmission Protocol for Privacy Preserving 6G Networks + https://arxiv.org/abs/2512.18432 + arXiv:2512.18432v1 Announce Type: new +Abstract: The move to 6th Generation (6G) wireless networks creates new issues with privacy, scalability, and adaptability. The data-intensive nature of 6G is not handled well by older, centralized network models. A shift toward more secure and decentralized systems is therefore required. A new framework called the Federated Learning-based Decentralized Adaptive Intelligent Transmission Protocol (AITP) is proposed to meet these challenges. The AITP uses the distributed learning of Federated Learning (FL) within a decentralized system. Transmission parameters can be adjusted intelligently in real time. User privacy is maintained by keeping raw data on local edge devices. The protocol's performance was evaluated with mathematical modeling and detailed simulations. It was shown to be superior to traditional non-adaptive and centralized AI methods across several key metrics. These included latency, network throughput, energy efficiency, and robustness. The AITP is presented as a foundational technology for future 6G networks that supports a user-centric, privacy-first design. This study is a step forward for privacy-preserving research in 6G. 
+ oai:arXiv.org:2512.18432v1 + cs.CR + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-sa/4.0/ Ansar Ahmed - An Investigation on How AI-Generated Responses Affect Software Engineering Surveys - https://arxiv.org/abs/2512.17455 - arXiv:2512.17455v1 Announce Type: new -Abstract: Survey research is a fundamental empirical method in software engineering, enabling the systematic collection of data on professional practices, perceptions, and experiences. However, recent advances in large language models (LLMs) have introduced new risks to survey integrity, as participants can use generative tools to fabricate or manipulate their responses. This study explores how LLMs are being misused in software engineering surveys and investigates the methodological implications of such behavior for data authenticity, validity, and research integrity. We collected data from two survey deployments conducted in 2025 through the Prolific platform and analyzed the content of participants' answers to identify irregular or falsified responses. A subset of responses suspected of being AI generated was examined through qualitative pattern inspection, narrative characterization, and automated detection using the Scribbr AI Detector. The analysis revealed recurring structural patterns in 49 survey responses indicating synthetic authorship, including repetitive sequencing, uniform phrasing, and superficial personalization. These false narratives mimicked coherent reasoning while concealing fabricated content, undermining construct, internal, and external validity. Our study identifies data authenticity as an emerging dimension of validity in software engineering surveys. 
We emphasize that reliable evidence now requires combining automated and interpretive verification procedures, transparent reporting, and community standards to detect and prevent AI generated responses, thereby protecting the credibility of surveys in software engineering. - oai:arXiv.org:2512.17455v1 - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 + Efficient Optimization of Hierarchical Identifiers for Generative Recommendation + https://arxiv.org/abs/2512.18434 + arXiv:2512.18434v1 Announce Type: new +Abstract: SEATER is a generative retrieval model that improves recommendation inference efficiency and retrieval quality by utilizing balanced tree-structured item identifiers and contrastive training objectives. We reproduce and validate SEATER's reported improvements in retrieval quality over strong baselines across all datasets from the original work, and extend the evaluation to Yambda, a large-scale music recommendation dataset. Our experiments verify SEATER's strong performance, but show that its tree construction step during training becomes a major bottleneck as the number of items grows. To address this, we implement and evaluate two alternative construction algorithms: a greedy method optimized for minimal build time, and a hybrid method that combines greedy clustering at high levels with more precise grouping at lower levels. The greedy method reduces tree construction time to less than 2% of the original with only a minor drop in quality on the dataset with the largest item collection. The hybrid method achieves retrieval quality on par with the original, and even improves on the largest dataset, while cutting construction time to just 5-8%. All data and code are publicly available for full reproducibility at https://github.com/joshrosie/re-seater. 
+ oai:arXiv.org:2512.18434v1 + cs.IR + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Ronnie de Souza Santos, Italo Santos, Maria Teresa Baldassarre, Cleyton Magalhaes, Mairieli Wessel + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Federica Valeau, Odysseas Boufalis, Polytimi Gkotsi, Joshua Rosenthal, David Vos - 3D-RE-GEN: 3D Reconstruction of Indoor Scenes with a Generative Framework - https://arxiv.org/abs/2512.17459 - arXiv:2512.17459v1 Announce Type: new -Abstract: Recent advances in 3D scene generation produce visually appealing output, but current representations hinder artists' workflows that require modifiable 3D textured mesh scenes for visual effects and game development. Despite significant advances, current textured mesh scene reconstruction methods are far from artist ready, suffering from incorrect object decomposition, inaccurate spatial relationships, and missing backgrounds. We present 3D-RE-GEN, a compositional framework that reconstructs a single image into textured 3D objects and a background. We show that combining state of the art models from specific domains achieves state of the art scene reconstruction performance, addressing artists' requirements. - Our reconstruction pipeline integrates models for asset detection, reconstruction, and placement, pushing certain models beyond their originally intended domains. Obtaining occluded objects is treated as an image editing task with generative models to infer and reconstruct with scene level reasoning under consistent lighting and geometry. Unlike current methods, 3D-RE-GEN generates a comprehensive background that spatially constrains objects during optimization and provides a foundation for realistic lighting and simulation tasks in visual effects and games. To obtain physically realistic layouts, we employ a novel 4-DoF differentiable optimization that aligns reconstructed objects with the estimated ground plane. 
3D-RE-GEN achieves state of the art performance in single image 3D scene reconstruction, producing coherent, modifiable scenes through compositional generation guided by precise camera recovery and spatial optimization. - oai:arXiv.org:2512.17459v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Tobias Sautter, Jan-Niklas Dihlmann, Hendrik P. A. Lensch + VeruSAGE: A Study of Agent-Based Verification for Rust Systems + https://arxiv.org/abs/2512.18436 + arXiv:2512.18436v1 Announce Type: new +Abstract: Large language models (LLMs) have shown impressive capability to understand and develop code. However, their capability to rigorously reason about and prove code correctness remains in question. This paper offers a comprehensive study of LLMs' capability to develop correctness proofs for system software written in Rust. We curate a new system-verification benchmark suite, VeruSAGE-Bench, which consists of 849 proof tasks extracted from eight open-source Verus-verified Rust systems. Furthermore, we design different agent systems to match the strengths and weaknesses of different LLMs (o4-mini, GPT-5, Sonnet 4, and Sonnet 4.5). Our study shows that different tools and agent settings are needed to stimulate the system-verification capability of different types of LLMs. The best LLM-agent combination in our study completes over 80% of system-verification tasks in VeruSAGE-Bench. It also completes over 90% of a set of system proof tasks not part of VeruSAGE-Bench because they had not yet been finished by human experts. This result shows the great potential for LLM-assisted development of verified system software. + oai:arXiv.org:2512.18436v1 + cs.OS + cs.AI + cs.FL + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ + Chenyuan Yang, Natalie Neamtu, Chris Hawblitzel, Jacob R. Lorch, Shan Lu - When Data Quality Issues Collide: A Large-Scale Empirical Study of Co-Occurring Data Quality Issues in Software Defect Prediction - https://arxiv.org/abs/2512.17460 - arXiv:2512.17460v1 Announce Type: new -Abstract: Software Defect Prediction (SDP) models are central to proactive software quality assurance, yet their effectiveness is often constrained by the quality of available datasets. Prior research has typically examined single issues such as class imbalance or feature irrelevance in isolation, overlooking that real-world data problems frequently co-occur and interact. This study presents, to our knowledge, the first large-scale empirical analysis in SDP that simultaneously examines five co-occurring data quality issues (class imbalance, class overlap, irrelevant features, attribute noise, and outliers) across 374 datasets and five classifiers. We employ Explainable Boosting Machines together with stratified interaction analysis to quantify both direct and conditional effects under default hyperparameter settings, reflecting practical baseline usage. - Our results show that co-occurrence is nearly universal: even the least frequent issue (attribute noise) appears alongside others in more than 93% of datasets. Irrelevant features and imbalance are nearly ubiquitous, while class overlap is the most consistently harmful issue. We identify stable tipping points around 0.20 for class overlap, 0.65-0.70 for imbalance, and 0.94 for irrelevance, beyond which most models begin to degrade. We also uncover counterintuitive patterns, such as outliers improving performance when irrelevant features are low, underscoring the importance of context-aware evaluation. Finally, we expose a performance-robustness trade-off: no single learner dominates under all conditions. - By jointly analyzing prevalence, co-occurrence, thresholds, and conditional effects, our study directly addresses a persistent gap in SDP research. 
Hence, it moves beyond isolated analyses to provide a holistic, data-aware understanding of how quality issues shape model performance in real-world settings. - oai:arXiv.org:2512.17460v1 - cs.SE - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Emmanuel Charleson Dapaah, Jens Grabowski + MeniMV: A Multi-view Benchmark for Meniscus Injury Severity Grading + https://arxiv.org/abs/2512.18437 + arXiv:2512.18437v1 Announce Type: new +Abstract: Precise grading of meniscal horn tears is critical in knee injury diagnosis but remains underexplored in automated MRI analysis. Existing methods often rely on coarse study-level labels or binary classification, lacking localization and severity information. In this paper, we introduce MeniMV, a multi-view benchmark dataset specifically designed for horn-specific meniscus injury grading. MeniMV comprises 3,000 annotated knee MRI exams from 750 patients across three medical centers, providing 6,000 co-registered sagittal and coronal images. Each exam is meticulously annotated with four-tier (grade 0-3) severity labels for both anterior and posterior meniscal horns, verified by chief orthopedic physicians. Notably, MeniMV offers more than double the pathology-labeled data volume of prior datasets while uniquely capturing the dual-view diagnostic context essential in clinical practice. To demonstrate the utility of MeniMV, we benchmark multiple state-of-the-art CNN and Transformer-based models. Our extensive experiments establish strong baselines and highlight challenges in severity grading, providing a valuable foundation for future research in automated musculoskeletal imaging. 
+ oai:arXiv.org:2512.18437v1 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Shurui Xu, Siqi Yang, Jiapin Ren, Zhong Cao, Hongwei Yang, Mengzhen Fan, Yuyu Sun, Shuyan Li - Fair Voting Methods as a Catalyst for Democratic Resilience: A Trilogy on Legitimacy, Impact and AI Safeguarding - https://arxiv.org/abs/2512.17461 - arXiv:2512.17461v1 Announce Type: new -Abstract: This article shows how fair voting methods can be a catalyst for change in the way we make collective decisions, and how such change can promote long-awaited upgrades of democracy. Based on real-world evidence from democratic innovations in participatory budgeting, in Switzerland and beyond, I highlight a trilogy of key research results: Fair voting methods prove to be (i) a legitimacy incubator, (ii) a novel impact accelerator, and (iii) a safeguard against risks of artificial intelligence (AI). Compared to majoritarian voting methods, combining expressive ballot formats (e.g. cumulative voting) with ballot aggregation methods that promote proportional representation (e.g. equal shares) results in more winners and higher (geographical) representation of citizens. Such fair voting methods are preferred and found fairer even by voters who do not win, while promoting stronger democratic values for citizens such as altruism and compromise. They also result in new resourceful ideas to put forward for voting, which are cost-effective and win, especially in areas of welfare, education and culture. Strikingly, fair voting methods are also more resilient to biases and inconsistencies of generative AI in emerging scenarios of AI voting assistance or AI representation of voters who would be likely to abstain. I also review the relevance of such upgrades for democracies in crisis, such as that of Greece featured in the recent study of `Unmute Democracy'. 
Greek democracy can build stronger resilience via higher representation of citizens in democratic processes as well as democratic innovations in participation. Fair voting methods can be a catalyst for both endeavors. - oai:arXiv.org:2512.17461v1 - cs.CY + An Agentic AI Framework for Training General Practitioner Student Skills + https://arxiv.org/abs/2512.18440 + arXiv:2512.18440v1 Announce Type: new +Abstract: Advancements in large language models offer strong potential for enhancing virtual simulated patients (VSPs) in medical education by providing scalable alternatives to resource-intensive traditional methods. However, current VSPs often struggle with medical accuracy, consistent roleplaying, scenario generation for VSP use, and educationally structured feedback. We introduce an agentic framework for training general practitioner student skills that unifies (i) configurable, evidence-based vignette generation, (ii) controlled persona-driven patient dialogue with optional retrieval grounding, and (iii) standards-based assessment and feedback for both communication and clinical reasoning. We instantiate the framework in an interactive spoken consultation setting and evaluate it with medical students ($\mathbf{N{=}14}$). Participants reported realistic and vignette-faithful dialogue, appropriate difficulty calibration, a stable personality signal, and highly useful example-rich feedback, alongside excellent overall usability. These results support agentic separation of scenario control, interaction control, and standards-based assessment as a practical pattern for building dependable and pedagogically valuable VSP training tools. 
+ oai:arXiv.org:2512.18440v1 + cs.CL cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Evangelos Pournaras + Victor De Marez, Jens Van Nooten, Luna De Bruyne, Walter Daelemans - Behavioural Effects of Agentic Messaging: A Case Study on a Financial Service Application - https://arxiv.org/abs/2512.17462 - arXiv:2512.17462v1 Announce Type: new -Abstract: Marketing and product personalisation provide a prominent and visible use-case for the application of Information Retrieval methods across several business domains. Recently, agentic approaches to these problems have been gaining traction. This work evaluates the behavioural and retention effects of agentic personalisation on a financial service application's customer communication system during a 2025 national tax filing period. Through a two month-long randomised controlled trial, we compare an agentic messaging approach against a business-as-usual (BAU) rule-based campaign system, focusing on two primary outcomes: unsubscribe behaviour and conversion timing. Empirical results show that agent-led messaging reduced unsubscribe events by 21\% ($\pm 0.01$) relative to BAU and increased early filing behaviour in the weeks preceding the national deadline. These findings demonstrate how adaptive, user-level decision-making systems can modulate engagement intensity whilst improving long-term retention indicators. - oai:arXiv.org:2512.17462v1 - cs.IR + A Distributed Hierarchical Spatio-Temporal Edge-Enhanced Graph Neural Network for City-Scale Dynamic Logistics Routing + https://arxiv.org/abs/2512.18441 + arXiv:2512.18441v1 Announce Type: new +Abstract: City-scale logistics routing has become increasingly challenging as metropolitan road networks grow to tens of millions of edges and traffic conditions evolve rapidly under high-volume mobility demands. 
Conventional centralized routing algorithms and monolithic graph neural network (GNN) models suffer from limited scalability, high latency, and poor real-time adaptability, which restricts their effectiveness in large urban logistics systems. To address these challenges, this paper proposes a Distributed Hierarchical Spatio-Temporal Edge-Enhanced Graph Neural Network (HSTE-GNN) for dynamic routing over ultra-large road networks. The framework partitions the city-scale graph into regional subgraphs processed in parallel across distributed computing nodes, enabling efficient learning of localized traffic dynamics. Within each region, an edge-enhanced spatio-temporal module jointly models node states, dynamic edge attributes, and short-term temporal dependencies. A hierarchical coordination layer further aggregates cross-region representations through an asynchronous parameter-server mechanism, ensuring global routing coherence under high-frequency traffic updates. This distributed hierarchical design balances local responsiveness with global consistency, significantly improving scalability and inference efficiency. Experiments on real-world large-scale traffic datasets from Beijing and New York demonstrate that HSTE-GNN outperforms strong spatio-temporal baselines such as ST-GRAPH, achieving 34.9% lower routing delay, 14.7% lower MAPE, and 11.8% lower RMSE, while improving global route consistency by 7.3%. These results confirm that the proposed framework provides a scalable, adaptive, and efficient solution for next-generation intelligent transportation systems and large-scale logistics platforms. 
+ oai:arXiv.org:2512.18441v1 + eess.SY cs.AI - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Olivier Jeunen, Schaun Wheeler + Zihan Han, Lingran Meng, Jingwei Zhang - Linear Attention for Joint Power Optimization and User-Centric Clustering in Cell-Free Networks - https://arxiv.org/abs/2512.17466 - arXiv:2512.17466v1 Announce Type: new -Abstract: Optimal AP clustering and power allocation are critical in user-centric cell-free massive MIMO systems. Existing deep learning models lack flexibility to handle dynamic network configurations. Furthermore, many approaches overlook pilot contamination and suffer from high computational complexity. In this paper, we propose a lightweight transformer model that overcomes these limitations by jointly predicting AP clusters and powers solely from spatial coordinates of user devices and AP. Our model is architecture-agnostic to users load, handles both clustering and power allocation without channel estimation overhead, and eliminates pilot contamination by assigning users to AP within a pilot reuse constraint. We also incorporate a customized linear attention mechanism to capture user-AP interactions efficiently and enable linear scalability with respect to the number of users. Numerical results confirm the model's effectiveness in maximizing the minimum spectral efficiency and providing near-optimal performance while ensuring adaptability and scalability in dynamic scenarios. - oai:arXiv.org:2512.17466v1 - eess.SY - cs.LG - cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 + Snowveil: A Framework for Decentralised Preference Discovery + https://arxiv.org/abs/2512.18444 + arXiv:2512.18444v1 Announce Type: new +Abstract: Aggregating subjective preferences of a large group is a fundamental challenge in computational social choice, traditionally reliant on central authorities. 
To address the limitations of this model, this paper introduces Decentralised Preference Discovery (DPD), the problem of determining the collective will of an electorate under constraints of censorship resistance, partial information, and asynchronous communication. We propose Snowveil, a novel framework for this task. Snowveil uses an iterative, gossip-based protocol where voters repeatedly sample the preferences of a small, random subset of the electorate to progressively converge on a collective outcome. We demonstrate the framework's modularity by designing the Constrained Hybrid Borda (CHB), a novel aggregation rule engineered to balance broad consensus with strong plurality support, and provide a rigorous axiomatic analysis of its properties. By applying a potential function and submartingale theory, we develop a multi-level analytical method to show that the system almost surely converges to a stable, single-winner in finite time, a process that can then be iterated to construct a set of winning candidates for multi-winner scenarios. This technique is largely agnostic to the specific aggregation rule, requiring only that it satisfies core social choice axioms like Positive Responsiveness, thus offering a formal toolkit for a wider class of DPD protocols. Furthermore, we present a comprehensive empirical analysis through extensive simulation, validating Snowveil's $O(n)$ scalability. Overall, this work advances the understanding of how a stable consensus can emerge from subjective, complex, and diverse preferences in decentralised systems for large electorates. 
+ oai:arXiv.org:2512.18444v1 + cs.GT + cs.AI + cs.DC + cs.MA + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Irched Chafaa, Giacomo Bacci, Luca Sanguinetti + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Grammateia Kotsialou - Translating the Rashomon Effect to Sequential Decision-Making Tasks - https://arxiv.org/abs/2512.17470 - arXiv:2512.17470v1 Announce Type: new -Abstract: The Rashomon effect describes the phenomenon where multiple models trained on the same data produce identical predictions while differing in which features they rely on internally. This effect has been studied extensively in classification tasks, but not in sequential decision-making, where an agent learns a policy to achieve an objective by taking actions in an environment. In this paper, we translate the Rashomon effect to sequential decision-making. We define it as multiple policies that exhibit identical behavior, visiting the same states and selecting the same actions, while differing in their internal structure, such as feature attributions. Verifying identical behavior in sequential decision-making differs from classification. In classification, predictions can be directly compared to ground-truth labels. In sequential decision-making with stochastic transitions, the same policy may succeed or fail on any single trajectory due to randomness. We address this using formal verification methods that construct and compare the complete probabilistic behavior of each policy in the environment. Our experiments demonstrate that the Rashomon effect exists in sequential decision-making. We further show that ensembles constructed from the Rashomon set exhibit greater robustness to distribution shifts than individual policies. Additionally, permissive policies derived from the Rashomon set reduce computational requirements for verification while maintaining optimal performance. 
- oai:arXiv.org:2512.17470v1
 cs.AI
 cs.LG
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Dennis Gross, J{\o}rn Eirik Betten, Helge Spieker
 
 
 + On the Universality of Transformer Architectures; How Much Attention Is Enough?
 + https://arxiv.org/abs/2512.18445
 + arXiv:2512.18445v1 Announce Type: new 
+Abstract: Transformers are crucial across many AI fields, such as large language models, computer vision, and reinforcement learning. This prominence stems from the architecture's perceived universality and scalability compared to alternatives. This work examines the problem of universality in Transformers, reviews recent progress, including architectural refinements such as structural minimality and approximation rates, and surveys state-of-the-art advances that inform both theoretical and practical understanding. Our aim is to clarify what is currently known about Transformers' expressiveness, separate robust guarantees from fragile ones, and identify key directions for future theoretical research.
 + oai:arXiv.org:2512.18445v1
 cs.LG
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Amirreza Abbasi, Mohsen Hooshmand
 
 
 - Deep Learning-Based Surrogate Creep Modelling in Inconel 625: A High-Temperature Alloy Study
 - https://arxiv.org/abs/2512.17477
 - arXiv:2512.17477v1 Announce Type: new 
-Abstract: Time-dependent deformation, particularly creep, in high-temperature alloys such as Inconel 625 is a key factor in the long-term reliability of components used in aerospace and energy systems. Although Inconel 625 shows excellent creep resistance, finite-element creep simulations in tools such as ANSYS remain computationally expensive, often requiring tens of minutes for a single 10,000-hour run. This work proposes deep learning-based surrogate models to provide fast and accurate replacements for such simulations. 
Creep strain data was generated in ANSYS using the Norton law under uniaxial stresses of 50 to 150 MPa and temperatures of 700 to 1000 $^\circ$C, and this temporal dataset was used to train two architectures: a BiLSTM Variational Autoencoder for uncertainty-aware and generative predictions, and a BiLSTM Transformer hybrid that employs self-attention to capture long-range temporal behavior. Both models act as surrogate predictors, with the BiLSTM-VAE offering probabilistic output and the BiLSTM-Transformer delivering high deterministic accuracy. Performance is evaluated using RMSE, MAE, and $R^2$. Results show that the BiLSTM-VAE provides stable and reliable creep strain forecasts, while the BiLSTM-Transformer achieves strong accuracy across the full time range. Latency tests indicate substantial speedup: while each ANSYS simulation requires 30 to 40 minutes for a given stress-temperature condition, the surrogate models produce predictions within seconds. The proposed framework enables rapid creep assessment for design optimization and structural health monitoring, and provides a scalable solution for high-temperature alloy applications. - oai:arXiv.org:2512.17477v1 - cs.LG - cond-mat.mtrl-sci - Mon, 22 Dec 2025 00:00:00 -0500 + Object-Centric Framework for Video Moment Retrieval + https://arxiv.org/abs/2512.18448 + arXiv:2512.18448v1 Announce Type: new +Abstract: Most existing video moment retrieval methods rely on temporal sequences of frame- or clip-level features that primarily encode global visual and semantic information. However, such representations often fail to capture fine-grained object semantics and appearance, which are crucial for localizing moments described by object-oriented queries involving specific entities and their interactions. In particular, temporal dynamics at the object level have been largely overlooked, limiting the effectiveness of existing approaches in scenarios requiring detailed object-level reasoning. 
To address this limitation, we propose a novel object-centric framework for moment retrieval. Our method first extracts query-relevant objects using a scene graph parser and then generates scene graphs from video frames to represent these objects and their relationships. Based on the scene graphs, we construct object-level feature sequences that encode rich visual and semantic information. These sequences are processed by a relational tracklet transformer, which models spatio-temporal correlations among objects over time. By explicitly capturing object-level state changes, our framework enables more accurate localization of moments aligned with object-oriented queries. We evaluated our method on three benchmarks: Charades-STA, QVHighlights, and TACoS. Experimental results demonstrate that our method outperforms existing state-of-the-art methods across all benchmarks. + oai:arXiv.org:2512.18448v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shubham Das, Kaushal Singhania, Amit Sadhu, Suprabhat Das, Arghya Nandi + Zongyao Li, Yongkang Wong, Satoshi Yamazaki, Jianquan Liu, Mohan Kankanhalli - Derivates for Containers in Univalent Foundations - https://arxiv.org/abs/2512.17484 - arXiv:2512.17484v1 Announce Type: new -Abstract: Containers conveniently represent a wide class of inductive data types. Their derivatives compute representations of types of one-hole contexts, useful for implementing tree-traversal algorithms. In the category of containers and cartesian morphisms, derivatives of discrete containers (whose positions have decidable equality) satisfy a universal property. Working in Univalent Foundations, we extend the derivative operation to untruncated containers (whose shapes and positions are arbitrary types). 
We prove that this derivative, defined in terms of a set of isolated positions, satisfies an appropriate universal property in the wild category of untruncated containers and cartesian morphisms, as well as basic laws with respect to constants, sums and products. A chain rule exists, but is in general non-invertible. In fact, a globally invertible chain rule is inconsistent in the presence of non-set types, and equivalent to a classical principle when restricted to set-truncated containers. We derive a rule for derivatives of smallest fixed points from the chain rule, and characterize its invertibility. All of our results are formalized in Cubical Agda. - oai:arXiv.org:2512.17484v1 - cs.LO - Mon, 22 Dec 2025 00:00:00 -0500 + Agent-Based Output Drift Detection for Breast Cancer Response Prediction in a Multisite Clinical Decision Support System + https://arxiv.org/abs/2512.18450 + arXiv:2512.18450v1 Announce Type: new +Abstract: Modern clinical decision support systems can concurrently serve multiple, independent medical imaging institutions, but their predictive performance may degrade across sites due to variations in patient populations, imaging hardware, and acquisition protocols. Continuous surveillance of predictive model outputs offers a safe and reliable approach for identifying such distributional shifts without ground truth labels. However, most existing methods rely on centralized monitoring of aggregated predictions, overlooking site-specific drift dynamics. We propose an agent-based framework for detecting drift and assessing its severity in multisite clinical AI systems. To evaluate its effectiveness, we simulate a multi-center environment for output-based drift detection, assigning each site a drift monitoring agent that performs batch-wise comparisons of model outputs against a reference distribution. 
We analyse several multi-center monitoring schemes that differ in how the reference is obtained (site-specific, global, production-only and adaptive), alongside a centralized baseline. Results on real-world breast cancer imaging data using a pathological complete response prediction model show that all multi-center schemes outperform centralized monitoring, with F1-score improvements up to 10.3% in drift detection. In the absence of site-specific references, the adaptive scheme performs best, with F1-scores of 74.3% for drift detection and 83.7% for drift severity classification. These findings suggest that adaptive, site-aware agent-based drift monitoring can enhance reliability of multisite clinical decision support systems.
 + oai:arXiv.org:2512.18450v1
 cs.AI
 cs.CV
 cs.MA
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Xavier Rafael-Palou, Jose Munuera, Ana Jimenez-Pastor, Richard Osuala, Karim Lekadir, Oliver Diaz
 
 
 - TwinSegNet: A Digital Twin-Enabled Federated Learning Framework for Brain Tumor Analysis
 - https://arxiv.org/abs/2512.17488
 - arXiv:2512.17488v1 Announce Type: new 
-Abstract: Brain tumor segmentation is critical in diagnosis and treatment planning for the disease. Yet, current deep learning methods rely on centralized data collection, which raises privacy concerns and limits generalization across diverse institutions. In this paper, we propose TwinSegNet, a privacy-preserving federated learning framework that integrates a hybrid ViT-UNet model with personalized digital twins for accurate and real-time brain tumor segmentation. Our architecture combines convolutional encoders with Vision Transformer bottlenecks to capture local and global context. Each institution fine-tunes the global model on private data to form its digital twin. 
Evaluated on nine heterogeneous MRI datasets, including BraTS 2019-2021 and custom tumor collections, TwinSegNet achieves high Dice scores (up to 0.90) and sensitivity/specificity exceeding 90%, demonstrating robustness across non-independent and identically distributed (non-IID) client distributions. Comparative results against centralized models such as TumorVisNet highlight TwinSegNet's effectiveness in preserving privacy without sacrificing performance. Our approach enables scalable, personalized segmentation for multi-institutional clinical settings while adhering to strict data confidentiality requirements.
 - oai:arXiv.org:2512.17488v1
 cs.CV
 cs.LG
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Almustapha A. Wakili, Adamu Hussaini, Abubakar A. Musa, Woosub Jung, Wei Yu
 
 
 + Secret mixtures of experts inside your LLM
 + https://arxiv.org/abs/2512.18452
 + arXiv:2512.18452v1 Announce Type: new 
+Abstract: Despite being one of the earliest neural network layers, the Multilayer Perceptron (MLP) is arguably one of the least understood parts of the transformer architecture due to its dense computation and lack of easy visualization. This paper seeks to understand the MLP layers in dense LLM models by hypothesizing that these layers secretly approximately perform a sparse computation -- namely, that they can be well approximated by sparsely-activating Mixture of Experts (MoE) layers.
 + Our hypothesis is based on a novel theoretical connection between MoE models and Sparse Autoencoder (SAE) structure in activation space. We empirically validate the hypothesis on pretrained LLMs, and demonstrate that the activation distribution matters -- these results do not hold for Gaussian data, but rather rely crucially on structure in the distribution of neural network activations.
 + Our results shine light on a general principle at play in MLP layers inside LLMs, and give an explanation for the effectiveness of modern MoE-based transformers. Additionally, our experimental explorations suggest new directions for more efficient MoE architecture design based on low-rank routers. 
+ oai:arXiv.org:2512.18452v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + cs.AI + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Almustapha A. Wakili, Adamu Hussaini, Abubakar A. Musa, Woosub Jung, Wei Yu + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Enric Boix-Adsera - LumiCtrl : Learning Illuminant Prompts for Lighting Control in Personalized Text-to-Image Models - https://arxiv.org/abs/2512.17489 - arXiv:2512.17489v1 Announce Type: new -Abstract: Current text-to-image (T2I) models have demonstrated remarkable progress in creative image generation, yet they still lack precise control over scene illuminants, which is a crucial factor for content designers aiming to manipulate the mood, atmosphere, and visual aesthetics of generated images. In this paper, we present an illuminant personalization method named LumiCtrl that learns an illuminant prompt given a single image of an object. LumiCtrl consists of three basic components: given an image of the object, our method applies (a) physics-based illuminant augmentation along the Planckian locus to create fine-tuning variants under standard illuminants; (b) edge-guided prompt disentanglement using a frozen ControlNet to ensure prompts focus on illumination rather than structure; and (c) a masked reconstruction loss that focuses learning on the foreground object while allowing the background to adapt contextually, enabling what we call contextual light adaptation. We qualitatively and quantitatively compare LumiCtrl against other T2I customization methods. The results show that our method achieves significantly better illuminant fidelity, aesthetic quality, and scene coherence compared to existing personalization baselines. A human preference study further confirms strong user preference for LumiCtrl outputs. The code and data will be released upon publication. 
- oai:arXiv.org:2512.17489v1 + NOVA: Discovering Well-Conditioned Winograd Transforms through Numerical Optimization of Vandermonde Arithmetic + https://arxiv.org/abs/2512.18453 + arXiv:2512.18453v1 Announce Type: new +Abstract: Winograd convolution is the standard algorithm for efficient inference, reducing arithmetic complexity by 2.25x for 3x3 kernels. However, it faces a critical barrier in the modern era of low precision computing: numerical instability. As tiles scale to maximize efficiency (e.g., F(6,3), F(8,3)), the condition numbers of standard integer based transforms explode, reaching kappa = 2 x 10^5 for F(8,3), rendering them unusable in FP16 or Int8. We introduce NOVA (Numerical Optimization of Vandermonde Arithmetic), a discovery framework that breaks the decades old convention of integer interpolation. Treating Winograd point selection as a continuous optimization problem, NOVA searches the manifold R^n-1 via Evolution Strategy, snaps candidates to simple rationals, and guarantees correctness via symbolic verification. This process uncovers a hidden landscape of stable, fractional configurations such as {+-5/6, +-7/6, +-3/5} that defy traditional vocabulary constraints. The impact is transformative: NOVA improves the conditioning of F(8,3) by 415x in 1D, which squares to a 172,484x improvement for 2D convolution. In real world FP16 ImageNet inference, where standard transforms collapse to random chance (e.g., 4.7 percent accuracy on VGG16), NOVA's points restore full accuracy (75 to 78 percent), recovering over 70 percentage points without retraining, calibration, or learned parameters. These discovered transforms act as drop in replacements, effectively unlocking the efficiency of large tile Winograd convolution for next generation hardware. 
+ oai:arXiv.org:2512.18453v1
 cs.LG
 cs.CV
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Jayant Lohia
 
 
 - MMLANDMARKS: a Cross-View Instance-Level Benchmark for Geo-Spatial Understanding
 - https://arxiv.org/abs/2512.17492
 - arXiv:2512.17492v1 Announce Type: new 
-Abstract: Geo-spatial analysis of our world benefits from a multimodal approach, as every single geographic location can be described in numerous ways (images from various viewpoints, textual descriptions, and geographic coordinates). Current geo-spatial benchmarks have limited coverage across modalities, considerably restricting progress in the field, as current approaches cannot integrate all relevant modalities within a unified framework. We introduce the Multi-Modal Landmark dataset (MMLANDMARKS), a benchmark composed of four modalities: 197k high-resolution aerial images, 329k ground-view images, textual information, and geographic coordinates for 18,557 distinct landmarks in the United States. The MMLANDMARKS dataset has a one-to-one correspondence across every modality, which enables training and benchmarking models for various geo-spatial tasks, including cross-view Ground-to-Satellite retrieval, ground and satellite geolocalization, Text-to-Image, and Text-to-GPS retrieval. We demonstrate broad generalization and competitive performance against off-the-shelf foundational models and specialized state-of-the-art models across different tasks by employing a simple CLIP-inspired baseline, illustrating the necessity for multimodal datasets to achieve broad geo-spatial understanding. 
- oai:arXiv.org:2512.17492v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Out-of-Distribution Detection in Molecular Complexes via Diffusion Models for Irregular Graphs + https://arxiv.org/abs/2512.18454 + arXiv:2512.18454v1 Announce Type: new +Abstract: Predictive machine learning models generally excel on in-distribution data, but their performance degrades on out-of-distribution (OOD) inputs. Reliable deployment therefore requires robust OOD detection, yet this is particularly challenging for irregular 3D graphs that combine continuous geometry with categorical identities and are unordered by construction. Here, we present a probabilistic OOD detection framework for complex 3D graph data built on a diffusion model that learns a density of the training distribution in a fully unsupervised manner. A key ingredient we introduce is a unified continuous diffusion over both 3D coordinates and discrete features: categorical identities are embedded in a continuous space and trained with cross-entropy, while the corresponding diffusion score is obtained analytically via posterior-mean interpolation from predicted class probabilities. This yields a single self-consistent probability-flow ODE (PF-ODE) that produces per-sample log-likelihoods, providing a principled typicality score for distribution shift. We validate the approach on protein-ligand complexes and construct strict OOD datasets by withholding entire protein families from training. PF-ODE likelihoods identify held-out families as OOD and correlate strongly with prediction errors of an independent binding-affinity model (GEMS), enabling a priori reliability estimates on new complexes. Beyond scalar likelihoods, we show that multi-scale PF-ODE trajectory statistics - including path tortuosity, flow stiffness, and vector-field instability - provide complementary OOD information. 
Modeling the joint distribution of these trajectory features yields a practical, high-sensitivity detector that improves separation over likelihood-only baselines, offering a label-free OOD quantification workflow for geometric deep learning. + oai:arXiv.org:2512.18454v1 + cs.LG + q-bio.QM + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Oskar Kristoffersen, Alba R. S\'anchez, Morten R. Hannemose, Anders B. Dahl, Dim P. Papadopoulos + http://creativecommons.org/licenses/by/4.0/ + David Graber, Victor Armegioiu, Rebecca Buller, Siddhartha Mishra - GroundingME: Exposing the Visual Grounding Gap in MLLMs through Multi-Dimensional Evaluation - https://arxiv.org/abs/2512.17495 - arXiv:2512.17495v1 Announce Type: new -Abstract: Visual grounding, localizing objects from natural language descriptions, represents a critical bridge between language and vision understanding. While multimodal large language models (MLLMs) achieve impressive scores on existing benchmarks, a fundamental question remains: can MLLMs truly ground language in vision with human-like sophistication, or are they merely pattern-matching on simplified datasets? Current benchmarks fail to capture real-world complexity where humans effortlessly navigate ambiguous references and recognize when grounding is impossible. To rigorously assess MLLMs' true capabilities, we introduce GroundingME, a benchmark that systematically challenges models across four critical dimensions: (1) Discriminative, distinguishing highly similar objects, (2) Spatial, understanding complex relational descriptions, (3) Limited, handling occlusions or tiny objects, and (4) Rejection, recognizing ungroundable queries. Through careful curation combining automated generation with human verification, we create 1,005 challenging examples mirroring real-world complexity. 
Evaluating 25 state-of-the-art MLLMs reveals a profound capability gap: the best model achieves only 45.1% accuracy, while most score 0% on rejection tasks, reflexively hallucinating objects rather than acknowledging their absence, raising critical safety concerns for deployment. We explore two strategies for improvements: (1) test-time scaling selects optimal response by thinking trajectory to improve complex grounding by up to 2.9%, and (2) data-mixture training teaches models to recognize ungroundable queries, boosting rejection accuracy from 0% to 27.9%. GroundingME thus serves as both a diagnostic tool revealing current limitations in MLLMs and a roadmap toward human-level visual grounding. - oai:arXiv.org:2512.17495v1 + Plasticine: A Traceable Diffusion Model for Medical Image Translation + https://arxiv.org/abs/2512.18455 + arXiv:2512.18455v1 Announce Type: new +Abstract: Domain gaps arising from variations in imaging devices and population distributions pose significant challenges for machine learning in medical image analysis. Existing image-to-image translation methods primarily aim to learn mappings between domains, often generating diverse synthetic data with variations in anatomical scale and shape, but they usually overlook spatial correspondence during the translation process. For clinical applications, traceability, defined as the ability to provide pixel-level correspondences between original and translated images, is equally important. This property enhances clinical interpretability but has been largely overlooked in previous approaches. To address this gap, we propose Plasticine, which is, to the best of our knowledge, the first end-to-end image-to-image translation framework explicitly designed with traceability as a core objective. Our method combines intensity translation and spatial transformation within a denoising diffusion framework. 
This design enables the generation of synthetic images with interpretable intensity transitions and spatially coherent deformations, supporting pixel-wise traceability throughout the translation process. + oai:arXiv.org:2512.18455v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Rang Li, Lei Li, Shuhuai Ren, Hao Tian, Shuhao Gu, Shicheng Li, Zihao Yue, Yudong Wang, Wenhan Ma, Zhe Yang, Jingyuan Ma, Zhifang Sui, Fuli Luo + Tianyang Zhanng, Xinxing Cheng, Jun Cheng, Shaoming Zheng, He Zhao, Huazhu Fu, Alejandro F Frangi, Jiang Liu, Jinming Duan - Validation of Diagnostic Artificial Intelligence Models for Prostate Pathology in a Middle Eastern Cohort - https://arxiv.org/abs/2512.17499 - arXiv:2512.17499v1 Announce Type: new -Abstract: Background: Artificial intelligence (AI) is improving the efficiency and accuracy of cancer diagnostics. The performance of pathology AI systems has been almost exclusively evaluated on European and US cohorts from large centers. For global AI adoption in pathology, validation studies on currently under-represented populations - where the potential gains from AI support may also be greatest - are needed. We present the first study with an external validation cohort from the Middle East, focusing on AI-based diagnosis and Gleason grading of prostate cancer. - Methods: We collected and digitised 339 prostate biopsy specimens from the Kurdistan region, Iraq, representing a consecutive series of 185 patients spanning the period 2013-2024. We evaluated a task-specific end-to-end AI model and two foundation models in terms of their concordance with pathologists and consistency across samples digitised on three scanner models (Hamamatsu, Leica, and Grundium). - Findings: Grading concordance between AI and pathologists was similar to pathologist-pathologist concordance with Cohen's quadratically weighted kappa 0.801 vs. 0.799 (p=0.9824). 
Cross-scanner concordance was high (quadratically weighted kappa > 0.90) for all AI models and scanner pairs, including a low-cost compact scanner.
 - Interpretation: AI models demonstrated pathologist-level performance in prostate histopathology assessment. Compact scanners can provide a route for validation studies in non-digitalised settings and enable cost-effective adoption of AI in laboratories with limited sample volumes. This first openly available digital pathology dataset from the Middle East supports further research into globally equitable AI pathology.
 - Funding: SciLifeLab and Wallenberg Data Driven Life Science Program, Instrumentarium Science Foundation, Karolinska Institutet Research Foundation.
 - oai:arXiv.org:2512.17499v1
 cs.CV
 Mon, 22 Dec 2025 00:00:00 -0500
 + SoK: Understanding (New) Security Issues Across AI4Code Use Cases
 + https://arxiv.org/abs/2512.18456
 + arXiv:2512.18456v1 Announce Type: new 
+Abstract: AI-for-Code (AI4Code) systems are reshaping software engineering, with tools like GitHub Copilot accelerating code generation, translation, and vulnerability detection. Alongside these advances, however, security risks remain pervasive: insecure outputs, biased benchmarks, and susceptibility to adversarial manipulation undermine their reliability. This SoK surveys the landscape of AI4Code security across three core applications, identifying recurring gaps: benchmark dominance by Python and toy problems, lack of standardized security datasets, data leakage in evaluation, and fragile adversarial robustness. A comparative study of six state-of-the-art models illustrates these challenges: insecure patterns persist in code generation, vulnerability detection is brittle to semantic-preserving attacks, fine-tuning often misaligns security objectives, and code translation yields uneven security benefits. 
From this analysis, we distill three forward paths: embedding secure-by-default practices in code generation, building robust and comprehensive detection benchmarks, and leveraging translation as a route to security-enhanced languages. We call for a shift toward security-first AI4Code, where vulnerability mitigation and robustness are embedded throughout the development life cycle. + oai:arXiv.org:2512.18456v1 + cs.CR + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Peshawa J. Muhammad Ali (Department of Software Engineering, Faculty of Engineering, Koya University, Koya 44023, Kurdistan Region - F.R. Iraq, Department of Mechanical and Manufacturing Engineering, Faculty of Engineering, Koya University, Koya 44023, Kurdistan Region - F.R. Iraq), Navin Vincent (Department of Medical Epidemiology and Biostatistics, SciLifeLab, Karolinska Institutet, Stockholm, Sweden), Saman S. Abdulla (College of Dentistry, Hawler Medical University, Erbil, Kurdistan Region, Iraq, PAR Private Hospital, Erbil, Kurdistan Region, Iraq), Han N. Mohammed Fadhl (College of Dentistry, University of Sulaimani, Sulaymaniyah, Kurdistan Region, Iraq), Anders Blilie (Department of Pathology, Stavanger University Hospital, Stavanger, Norway, Faculty of Health Sciences, University of Stavanger, Stavanger, Norway), Kelvin Szolnoky (Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden), Julia Anna Mielcarz (Department of Medical Epidemiology and Biostatistics, SciLifeLab, Karolinska Institutet, Stockholm, Sweden), Xiaoyi Ji (Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden), Nita Mulliqi (Department of Medical Epidemiology and Biostatistics, SciLifeLab, Karolinska Institutet, Stockholm, Sweden), Abdulbasit K. Al-Talabani (Department of Software Engineering, Faculty of Engineering, Koya University, Koya 44023, Kurdistan Region - F.R. 
Iraq), Kimmo Kartasalo (Department of Medical Epidemiology and Biostatistics, SciLifeLab, Karolinska Institutet, Stockholm, Sweden) + Qilong Wu, Taoran Li, Tianyang Zhou, Varun Chandrasekaran - Why Is My Transaction Risky? Understanding Smart Contract Semantics and Interactions in the NFT Ecosystem - https://arxiv.org/abs/2512.17500 - arXiv:2512.17500v1 Announce Type: new -Abstract: The NFT ecosystem represents an interconnected, decentralized environment that encompasses the creation, distribution, and trading of Non-Fungible Tokens (NFTs), where key actors, such as marketplaces, sellers, and buyers, utilize smart contracts to facilitate secure, transparent, and trustless transactions. Scam tokens are deliberately created to mislead users and facilitate financial exploitation, posing significant risks in the NFT ecosystem. Prior work has explored the NFT ecosystem from various perspectives, including security challenges, actor behaviors, and risks from scams and wash trading, leaving a gap in understanding the semantics and interactions of smart contracts during transactions, and how the risks associated with scam tokens manifest in relation to the semantics and interactions of contracts. To bridge this gap, we conducted a large-scale empirical study on smart contract semantics and interactions in the NFT ecosystem, using a curated dataset of nearly 100 million transactions across 20 million blocks on Ethereum. We observe a limited semantic diversity among smart contracts in the NFT ecosystem, dominated by proxy, token, and DeFi contracts. Marketplace and proxy registry contracts are the most frequently involved in smart contract interactions during transactions, engaging with a broad spectrum of contracts in the ecosystem. Token contracts exhibit bytecode-level diversity, whereas scam tokens exhibit bytecode convergence. 
Certain interaction patterns between smart contracts are common to both risky and non-risky transactions, while others are predominantly associated with risky transactions. Based on our findings, we provide recommendations to mitigate risks in the blockchain ecosystem, and outline future research directions.
 - oai:arXiv.org:2512.17500v1
 cs.SE
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Yujing Chen, Xuanming Liu, Zhiyuan Wan, Zuobin Wang, David Lo, Difan Xie, Xiaohu Yang
 
 
 + Age of Information with Age-Dependent Server Selection
 + https://arxiv.org/abs/2512.18457
 + arXiv:2512.18457v1 Announce Type: new 
+Abstract: In this paper, we consider a single-source multi-server generate-at-will discrete-time non-preemptive status update system where update packets are transmitted using {\em only one} of the available servers, according to a server selection policy. In particular, when a transmission is complete, the update system makes a threshold-based decision on whether to wait or transmit, and if the latter, which server to use for transmissions, on the basis of the instantaneous value of the age of information (AoI) process. In our setting, servers have general heterogeneous discrete phase-type (DPH) distributed service times, and also heterogeneous transmission costs. The goal is to find an age-dependent multi-threshold policy that minimizes the AoI cost with a constraint on transmission costs, the former cost defined in terms of the time average of an arbitrary function of AoI. For this purpose, we propose a novel tool called \emph{multi-regime absorbing Markov chain} (MR-AMC) in discrete time. Using the MR-AMC framework, we exactly obtain the distribution of AoI, and subsequently the costs associated with AoI and transmissions. With the exact analysis in hand, optimum thresholds can be obtained in the case of a few servers, by exhaustive search. We validate the proposed analytical model, and also demonstrate the benefits of age-dependent server selection, with numerical examples. 
+ oai:arXiv.org:2512.18457v1 + cs.IT + cs.PF + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yujing Chen, Xuanming Liu, Zhiyuan Wan, Zuobin Wang, David Lo, Difan Xie, Xiaohu Yang + Nail Akar, Ismail Cosandal, Sennur Ulukus - InsertAnywhere: Bridging 4D Scene Geometry and Diffusion Models for Realistic Video Object Insertion - https://arxiv.org/abs/2512.17504 - arXiv:2512.17504v1 Announce Type: new -Abstract: Recent advances in diffusion-based video generation have opened new possibilities for controllable video editing, yet realistic video object insertion (VOI) remains challenging due to limited 4D scene understanding and inadequate handling of occlusion and lighting effects. We present InsertAnywhere, a new VOI framework that achieves geometrically consistent object placement and appearance-faithful video synthesis. Our method begins with a 4D aware mask generation module that reconstructs the scene geometry and propagates user specified object placement across frames while maintaining temporal coherence and occlusion consistency. Building upon this spatial foundation, we extend a diffusion based video generation model to jointly synthesize the inserted object and its surrounding local variations such as illumination and shading. To enable supervised training, we introduce ROSE++, an illumination aware synthetic dataset constructed by transforming the ROSE object removal dataset into triplets of object removed video, object present video, and a VLM generated reference image. Through extensive experiments, we demonstrate that our framework produces geometrically plausible and visually coherent object insertions across diverse real world scenarios, significantly outperforming existing research and commercial models. 
- oai:arXiv.org:2512.17504v1 - cs.CV - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Weight Transformations in Bit-Sliced Crossbar Arrays for Fault Tolerant Computing-in-Memory: Design Techniques and Evaluation Framework + https://arxiv.org/abs/2512.18459 + arXiv:2512.18459v1 Announce Type: new +Abstract: The deployment of deep neural networks (DNNs) on compute-in-memory (CiM) accelerators offers significant energy savings and speed-up by reducing data movement during inference. However, the reliability of CiM-based systems is challenged by stuck-at faults (SAFs) in memory cells, which corrupt stored weights and lead to accuracy degradation. While closest value mapping (CVM) has been shown to partially mitigate these effects for multibit DNNs deployed on bit-sliced crossbars, its fault tolerance is often insufficient under high SAF rates or for complex tasks. In this work, we propose two training-free weight transformation techniques, sign-flip and bit-flip, that enhance SAF tolerance in multi-bit DNNs deployed on bit-sliced crossbar arrays. Sign-flip operates at the weight-column level by selecting between a weight and its negation, whereas bit-flip provides finer granularity by selectively inverting individual bit slices. Both methods expand the search space for fault-aware mappings, operate synergistically with CVM, and require no retraining or additional memory. To enable scalability, we introduce a look-up-table (LUT)-based framework that accelerates the computation of optimal transformations and supports rapid evaluation across models and fault rates. Extensive experiments on ResNet-18, ResNet-50, and ViT models with CIFAR-100 and ImageNet demonstrate that the proposed techniques recover most of the accuracy lost under SAF injection. Hardware analysis shows that these methods incur negligible overhead, with sign-flip leading to negligible energy, latency, and area cost, and bit-flip providing higher fault resilience with modest overheads. 
These results establish sign-flip and bit-flip as practical and scalable SAF-mitigation strategies for CiM-based DNN accelerators. + oai:arXiv.org:2512.18459v1 + cs.AR + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hoiyeong Jin, Hyojin Jang, Jeongho Kim, Junha Hyung, Kinam Kim, Dongjin Kim, Huijin Choi, Hyeonji Kim, Jaegul Choo + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Akul Malhotra, Sumeet Kumar Gupta - Adaptive Covariance and Quaternion-Focused Hybrid Error-State EKF/UKF for Visual-Inertial Odometry - https://arxiv.org/abs/2512.17505 - arXiv:2512.17505v1 Announce Type: new -Abstract: This study presents an innovative hybrid Visual-Inertial Odometry (VIO) method for Unmanned Aerial Vehicles (UAVs) that is resilient to environmental challenges and capable of dynamically assessing sensor reliability. Built upon a loosely coupled sensor fusion architecture, the system utilizes a novel hybrid Quaternion-focused Error-State EKF/UKF (Qf-ES-EKF/UKF) architecture to process inertial measurement unit (IMU) data. This architecture first propagates the entire state using an Error-State Extended Kalman Filter (ESKF) and then applies a targeted Scaled Unscented Kalman Filter (SUKF) step to refine only the orientation. This sequential process blends the accuracy of SUKF in quaternion estimation with the overall computational efficiency of ESKF. The reliability of visual measurements is assessed via a dynamic sensor confidence score based on metrics, such as image entropy, intensity variation, motion blur, and inference quality, adapting the measurement noise covariance to ensure stable pose estimation even under challenging conditions. 
Comprehensive experimental analyses on the EuRoC MAV dataset demonstrate key advantages: an average improvement of 49% in position accuracy in challenging scenarios, an average of 57% in rotation accuracy over ESKF-based methods, and SUKF-comparable accuracy achieved with approximately 48% lower computational cost than a full SUKF implementation. These findings demonstrate that the presented approach strikes an effective balance between computational efficiency and estimation accuracy, and significantly enhances UAV pose estimation performance in complex environments with varying sensor reliability. - oai:arXiv.org:2512.17505v1 - cs.RO - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Ufuk Asil, Efendi Nasibov - - - The HEAL Data Platform - https://arxiv.org/abs/2512.17506 - arXiv:2512.17506v1 Announce Type: new -Abstract: Objective: The objective was to develop a cloud-based, federated system to serve as a single point of search, discovery and analysis for data generated under the NIH Helping to End Addiction Long-term (HEAL) Initiative. - Materials and methods: The HEAL Data Platform is built on the open source Gen3 platform, utilizing a small set of framework services and exposed APIs to interoperate with both NIH and non-NIH data repositories. Framework services include those for authentication and authorization, creating persistent identifiers for data objects, and adding and updating metadata. - Results: The HEAL Data Platform serves as a single point of discovery of over one thousand studies funded under the HEAL Initiative. With hundreds of users per month, the HEAL Data Platform provides rich metadata and interoperates with data repositories and commons to provide access to shared datasets. Secure, cloud-based compute environments that are integrated with STRIDES facilitate secondary analysis of HEAL data. The HEAL Data Platform currently interoperates with nineteen data repositories. 
- Discussion: Studies funded under the HEAL Initiative generate a wide variety of data types, which are deposited across multiple NIH and third-party data repositories. The mesh architecture of the HEAL Data Platform provides a single point of discovery of these data resources, accelerating and facilitating secondary use. - Conclusion: The HEAL Data Platform enables search, discovery, and analysis of data that are deposited in connected data repositories and commons. By ensuring that these data are fully Findable, Accessible, Interoperable and Reusable (FAIR), the HEAL Data Platform maximizes the value of data generated under the HEAL Initiative. - oai:arXiv.org:2512.17506v1 - cs.DC - Mon, 22 Dec 2025 00:00:00 -0500 + Mitigating Spurious Correlations in NLI via LLM-Synthesized Counterfactuals and Dynamic Balanced Sampling + https://arxiv.org/abs/2512.18462 + arXiv:2512.18462v1 Announce Type: new +Abstract: Natural Language Inference (NLI) models frequently rely on spurious correlations rather than semantic reasoning. Existing mitigation strategies often incur high annotation costs or trigger catastrophic forgetting during fine-tuning. We propose an automated, scalable pipeline to address these limitations. First, we introduce Log-Frequency LMI (LF-LMI) to accurately detect semantic artifacts. Second, we generate a high-quality synthetic contrast set via an LLM-synthesis pipeline with multi-judge verification. Finally, we introduce Dynamic Balanced Sampling, a training strategy that rotates the original data distribution to prevent forgetting. Our method improves consistency on a challenging benchmark from 63.5% to 81.0% while maintaining 88.4% in-domain accuracy, significantly outperforming naive fine-tuning. + oai:arXiv.org:2512.18462v1 + cs.CL + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Brienna M. Larrick, L. Philip Schumm, Mingfei Shao, Craig Barnes, Anthony Juehne, Hara Prasad Juvvla, Michael B. 
Kranz, Michael Lukowski, Clint Malson, Jessica N. Mazerik, Christopher G. Meyer, Jawad Qureshi, Erin Spaniol, Andrea Tentner, Alexander VanTol, Peter Vassilatos, Sara Volk de Garcia, Robert L. Grossman + Christopher Rom\'an Jaimes - Foundation Model Priors Enhance Object Focus in Feature Space for Source-Free Object Detection - https://arxiv.org/abs/2512.17514 - arXiv:2512.17514v1 Announce Type: new -Abstract: Current state-of-the-art approaches in Source-Free Object Detection (SFOD) typically rely on Mean-Teacher self-labeling. However, domain shift often reduces the detector's ability to maintain strong object-focused representations, causing high-confidence activations over background clutter. This weak object focus results in unreliable pseudo-labels from the detection head. While prior works mainly refine these pseudo-labels, they overlook the underlying need to strengthen the feature space itself. We propose FALCON-SFOD (Foundation-Aligned Learning with Clutter suppression and Noise robustness), a framework designed to enhance object-focused adaptation under domain shift. It consists of two complementary components. SPAR (Spatial Prior-Aware Regularization) leverages the generalization strength of vision foundation models to regularize the detector's feature space. Using class-agnostic binary masks derived from OV-SAM, SPAR promotes structured and foreground-focused activations by guiding the network toward object regions. IRPL (Imbalance-aware Noise Robust Pseudo-Labeling) complements SPAR by promoting balanced and noise-tolerant learning under severe foreground-background imbalance. Guided by a theoretical analysis that connects these designs to tighter localization and classification error bounds, FALCON-SFOD achieves competitive performance across SFOD benchmarks. 
- oai:arXiv.org:2512.17514v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Self-organizing maps for water quality assessment in reservoirs and lakes: A systematic literature review + https://arxiv.org/abs/2512.18466 + arXiv:2512.18466v1 Announce Type: new +Abstract: Sustainable water quality underpins ecological balance and water security. Assessing and managing lakes and reservoirs is difficult due to data sparsity, heterogeneity, and nonlinear relationships among parameters. This review examines how Self-Organizing Map (SOM), an unsupervised AI technique, is applied to water quality assessment. It synthesizes research on parameter selection, spatial and temporal sampling strategies, and clustering approaches. Emphasis is placed on how SOM handles multidimensional data and uncovers hidden patterns to support effective water management. The growing availability of environmental data from in-situ sensors, remote sensing imagery, IoT technologies, and historical records has significantly expanded analytical opportunities in environmental monitoring. SOM has proven effective in analysing complex datasets, particularly when labelled data are limited or unavailable. It enables high-dimensional data visualization, facilitates the detection of hidden ecological patterns, and identifies critical correlations among diverse water quality indicators. This review highlights SOMs versatility in ecological assessments, trophic state classification, algal bloom monitoring, and catchment area impact evaluations. The findings offer comprehensive insights into existing methodologies, supporting future research and practical applications aimed at improving the monitoring and sustainable management of lake and reservoir ecosystems. 
+ oai:arXiv.org:2512.18466v1 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Sairam VCR, Rishabh Lalla, Aveen Dayal, Tejal Kulkarni, Anuj Lalla, Vineeth N Balasubramanian, Muhammad Haris Khan + http://creativecommons.org/licenses/by-nc-nd/4.0/ + 10.1016/j.ecoinf.2025.103542 + Oraib Almegdadi, Jo\~ao Marcelino, Sarah Fakhreddine, Jo\~ao Manso, Nuno C. Marques - Dispatch-Aware Learning for Optimal Transmission Switching - https://arxiv.org/abs/2512.17516 - arXiv:2512.17516v1 Announce Type: new -Abstract: Optimal transmission switching (OTS) improves optimal power flow (OPF) by selectively opening transmission lines, but its mixed-integer formulation increases computational complexity, especially on large grids. To deal with this, we propose a dispatch-aware deep neural network (DA-DNN) that accelerates DC-OTS without relying on pre-solved labels. DA-DNN predicts line states and passes them through an embedded differentiable DC-OPF layer, using the resulting generation cost as the loss function so that all physical network constraints are enforced throughout training and inference. In addition, we adopt a customized weight-bias initialization that keeps every forward pass feasible from the first iteration, which allows stable learning. Once trained, the proposed DA-DNN successfully produces a feasible topology and dispatch pair in the same time as solving the DCOPF, whereas conventional mixed-integer solvers become intractable. Moreover, the embedded OPF layer enables DA-DNN to generalize to untrained system configurations, such as changes in line flow limits. As a result, the proposed method successfully captures the economic advantages of OTS while maintaining scalability and generalization ability. 
 - oai:arXiv.org:2512.17516v1
 - eess.SY
 - cs.SY
 - Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Minsoo Kim, Andy Sun, Jip Kim

 + SWE-EVO: Benchmarking Coding Agents in Long-Horizon Software Evolution Scenarios
 + https://arxiv.org/abs/2512.18470
 + arXiv:2512.18470v1 Announce Type: new 
+Abstract: Existing benchmarks for AI coding agents focus on isolated, single-issue tasks such as fixing a bug or implementing a small feature. However, real-world software engineering is fundamentally a long-horizon endeavor: developers must interpret high-level requirements, plan coordinated changes across many files, and evolve codebases over multiple iterations while preserving existing functionality. We introduce SWE-EVO, a benchmark that evaluates agents on this long-horizon software evolution challenge. Constructed from release notes and version histories of seven mature open-source Python projects, SWE-EVO comprises 48 evolution tasks that require agents to implement multi-step modifications spanning an average of 21 files, validated against comprehensive test suites averaging 874 tests per instance. Experiments with state-of-the-art models reveal a striking capability gap: even GPT-5 with OpenHands achieves only a 21 percent resolution rate on SWE-EVO, compared to 65 percent on the single-issue SWE-Bench Verified. This demonstrates that current agents struggle with sustained, multi-file reasoning. We also propose Fix Rate, a fine-grained metric that captures partial progress toward solving these complex, long-horizon tasks.
 + oai:arXiv.org:2512.18470v1
 + cs.SE
 + cs.AI
 + cs.MA
 + Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 + Minh V. T. Thai, Tue Le, Dung Nguyen Manh, Huy Phan Nhat, Nghi D. Q. 
Bui - PathBench-MIL: A Comprehensive AutoML and Benchmarking Framework for Multiple Instance Learning in Histopathology - https://arxiv.org/abs/2512.17517 - arXiv:2512.17517v1 Announce Type: new -Abstract: We introduce PathBench-MIL, an open-source AutoML and benchmarking framework for multiple instance learning (MIL) in histopathology. The system automates end-to-end MIL pipeline construction, including preprocessing, feature extraction, and MIL-aggregation, and provides reproducible benchmarking of dozens of MIL models and feature extractors. PathBench-MIL integrates visualization tooling, a unified configuration system, and modular extensibility, enabling rapid experimentation and standardization across datasets and tasks. PathBench-MIL is publicly available at https://github.com/Sbrussee/PathBench-MIL - oai:arXiv.org:2512.17517v1 - cs.CV + The Geometry of Abstraction: Continual Learning via Recursive Quotienting + https://arxiv.org/abs/2512.18471 + arXiv:2512.18471v1 Announce Type: new +Abstract: Continual learning systems operating in fixed-dimensional spaces face a fundamental geometric barrier: the flat manifold problem. When experience is represented as a linear trajectory in Euclidean space, the geodesic distance between temporal events grows linearly with time, forcing the required covering number to diverge. In fixed-dimensional hardware, this volume expansion inevitably forces trajectory overlap, manifesting as catastrophic interference. In this work, we propose a geometric resolution to this paradox based on Recursive Metric Contraction. We formalize abstraction not as symbolic grouping, but as a topological deformation: a quotient map that collapses the metric tensor within validated temporal neighborhoods, effectively driving the diameter of local sub-manifolds to zero. We substantiate our framework with four rigorous results. 
First, the Bounded Capacity Theorem establishes that recursive quotient maps allow the embedding of arbitrarily long trajectories into bounded representational volumes, trading linear metric growth for logarithmic topological depth. Second, the Topological Collapse Separability Theorem, derived via Urysohn's Lemma, proves that recursive quotienting renders non-linearly separable temporal sequences linearly separable in the limit, bypassing the need for infinite-dimensional kernel projections. Third, the Parity-Partitioned Stability Theorem solves the catastrophic forgetting problem by proving that if the state space is partitioned into orthogonal flow and scaffold manifolds, the metric deformations of active learning do not disturb the stability of stored memories. Our analysis reveals that tokens in neural architectures are physically realizable as singularities or wormholes, regions of extreme positive curvature that bridge distant points in the temporal manifold. + oai:arXiv.org:2512.18471v1 cs.LG - cs.NE - cs.SE - q-bio.TO - Mon, 22 Dec 2025 00:00:00 -0500 + q-bio.NC + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Siemen Brussee, Pieter A. Valkema, Jurre A. J. Weijer, Thom Doeleman, Anne M. R. Schrader, Jesper Kers + http://creativecommons.org/publicdomain/zero/1.0/ + Xin Li - Layer-to-layer Closed-loop Switched Heating and Cooling Control of the Laser Powder Bed Fusion Process - https://arxiv.org/abs/2512.17518 - arXiv:2512.17518v1 Announce Type: new -Abstract: This study investigates the stabilization of interlayer temperature in the laser powder bed fusion process through a novel switched layer-to-layer closed-loop feedback controller. 
The controller architecture aims to measure the interlayer temperature by a laterally positioned thermal camera and maintain a preset reference temperature by switching between the heating mode through dynamic laser power adjustment and the cooling mode by assigning interlayer dwell time to allow cooling between layers. The switching controller employs a feedback optimization control algorithm for the heating mode to adjust the laser power, and a triggering algorithm that increases the interlayer dwell time until the interlayer temperature reaches the reference value. Additionally, the study compares the performance of the proposed controller in both supported and unsupported overhanging parts to evaluate the effect of support structures on the controller performance as well as the thermal behavior of overhanging parts. Results demonstrate the controller's effectiveness in stabilizing interlayer temperature across varying cross-sectional areas while remaining within the material's stable processing zone. In the heating mode, the controller efficiently stabilizes temperature, even in geometries with significant cross-section variation. The study also identifies trade-offs among process efficiency, energy consumption, and build time. Supported parts exhibit reduced overheating but consume more energy and material, while unsupported parts stabilize interlayer temperature faster but with longer build times due to increased dwell time assignments. The research highlights notable improvements in interlayer temperature control for geometries prone to excessive thermal stresses. Moreover, the introduction of interlayer dwell time offers a practical solution to maintaining thermal stability in complex geometries. 
- oai:arXiv.org:2512.17518v1 - eess.SY - cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 + APC-GNN++: An Adaptive Patient-Centric GNN with Context-Aware Attention and Mini-Graph Explainability for Diabetes Classification + https://arxiv.org/abs/2512.18473 + arXiv:2512.18473v1 Announce Type: new +Abstract: We propose APC-GNN++, an adaptive patient-centric Graph Neural Network for diabetes classification. Our model integrates context-aware edge attention, confidence-guided blending of node features and graph representations, and neighborhood consistency regularization to better capture clinically meaningful relationships between patients. To handle unseen patients, we introduce a mini-graph approach that leverages the nearest neighbors of the new patient, enabling real-time explainable predictions without retraining the global model. We evaluate APC-GNN++ on a real-world diabetes dataset collected from a regional hospital in Algeria and show that it outperforms traditional machine learning models (MLP, Random Forest, XGBoost) and a vanilla GCN, achieving higher test accuracy and macro F1- score. The analysis of node-level confidence scores further reveals how the model balances self-information and graph-based evidence across different patient groups, providing interpretable patient-centric insights. The system is also embedded in a Tkinter-based graphical user interface (GUI) for interactive use by healthcare professionals . + oai:arXiv.org:2512.18473v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Bar{\i}\c{s} Kavas, Efe C. Balta, Lars Witte, Michael R. 
Tucker, John Lygeros, Markus Bambach

 + APC-GNN++: An Adaptive Patient-Centric GNN with Context-Aware Attention and Mini-Graph Explainability for Diabetes Classification
 + https://arxiv.org/abs/2512.18473
 + arXiv:2512.18473v1 Announce Type: new 
+Abstract: We propose APC-GNN++, an adaptive patient-centric Graph Neural Network for diabetes classification. Our model integrates context-aware edge attention, confidence-guided blending of node features and graph representations, and neighborhood consistency regularization to better capture clinically meaningful relationships between patients. To handle unseen patients, we introduce a mini-graph approach that leverages the nearest neighbors of the new patient, enabling real-time explainable predictions without retraining the global model. We evaluate APC-GNN++ on a real-world diabetes dataset collected from a regional hospital in Algeria and show that it outperforms traditional machine learning models (MLP, Random Forest, XGBoost) and a vanilla GCN, achieving higher test accuracy and macro F1-score. The analysis of node-level confidence scores further reveals how the model balances self-information and graph-based evidence across different patient groups, providing interpretable patient-centric insights. The system is also embedded in a Tkinter-based graphical user interface (GUI) for interactive use by healthcare professionals.
 + oai:arXiv.org:2512.18473v1
 + cs.LG
 + Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 + Khaled Berkani

 - Key-Conditioned Orthonormal Transform Gating (K-OTG): Multi-Key Access Control with Hidden-State Scrambling for LoRA-Tuned Models
 - https://arxiv.org/abs/2512.17519
 - arXiv:2512.17519v1 Announce Type: new 
-Abstract: We present a simple, PEFT-compatible mechanism that enforces secret-key access control in instruction-tuned language models. K-OTG trains on a dual-path corpus: authorized examples (prefixed with a role key) learn the task output, while unauthorized examples learn a visible block token. At inference, a pre-lm_head hook applies an orthonormal transform to the hidden state: with the correct key/role the inverse map restores the model's native basis; otherwise a session-ephemeral scrambler (permutation, sign flips, Householders) makes logits uninformative and the system short-circuits to BLOCK. Keys are not added as special tokens, and the method composes cleanly with LoRA on 4-bit bases. We evaluate an hour-scale protocol on 1-3B-class instruction models (Llama 3.2, Qwen2.5 1.5B) across utility (XSum ROUGE/BLEU, GSM8K accuracy, WikiText-2 perplexity), selectivity (3x3 role-key unlock matrices), nonce invariance, block suppression, and throughput. Authorized utility remains close to the base on summarization with the expected modest PPL increase from instruction tuning; unauthorized utility collapses (near-zero sequence metrics with exploding PPL), indicating practical unusability without the key. Unlock matrices are diagonally dominant (high on-target unlock, low cross-unlock), authorized block emission is 0 per N via robust bad-word lists, and greedy outputs match exactly across nonces, confirming correct inverse cancellation. The runtime overhead of the Python-level hook is 40% tokens per sec versus the base. K-OTG therefore provides a pragmatic, model-agnostic way to prevent unauthorized use while preserving authorized utility. 
- oai:arXiv.org:2512.17519v1 - cs.CR - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + When Robots Say No: The Empathic Ethical Disobedience Benchmark + https://arxiv.org/abs/2512.18474 + arXiv:2512.18474v1 Announce Type: new +Abstract: Robots must balance compliance with safety and social expectations as blind obedience can cause harm, while over-refusal erodes trust. Existing safe reinforcement learning (RL) benchmarks emphasize physical hazards, while human-robot interaction trust studies are small-scale and hard to reproduce. We present the Empathic Ethical Disobedience (EED) Gym, a standardized testbed that jointly evaluates refusal safety and social acceptability. Agents weigh risk, affect, and trust when choosing to comply, refuse (with or without explanation), clarify, or propose safer alternatives. EED Gym provides different scenarios, multiple persona profiles, and metrics for safety, calibration, and refusals, with trust and blame models grounded in a vignette study. Using EED Gym, we find that action masking eliminates unsafe compliance, while explanatory refusals help sustain trust. Constructive styles are rated most trustworthy, empathic styles -- most empathic, and safe RL methods improve robustness but also make agents more prone to overly cautious behavior. We release code, configurations, and reference policies to enable reproducible evaluation and systematic human-robot interaction research on refusal and trust. At submission time, we include an anonymized reproducibility package with code and configs, and we commit to open-sourcing the full repository after the paper is accepted. 
+ oai:arXiv.org:2512.18474v1 + cs.RO + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Muhammad Haris Khan + Dmytro Kuzmenko, Nadiya Shvai - A cut finite element method for the Biot system of poroelasticity - https://arxiv.org/abs/2512.17521 - arXiv:2512.17521v1 Announce Type: new -Abstract: We propose a novel cut finite element method for the numerical solution of the Biot system of poroelasticity. The Biot system couples elastic deformation of a porous solid with viscous fluid flow and commonly arises on domains with complex geometries that make high-quality volumetric meshing challenging. To address this issue, we employ the cut finite element framework, where the domain boundary is represented independently of the background mesh, which significantly simplifies the meshing process. Our approach builds upon a parameter robust total pressure formulation of the Biot system, which we combine with the cut finite element method to develop a geometrically robust solution scheme, while preserving the parameter robustness. A key ingredient in the theoretical analysis is a modified inf-sup condition which also holds for mixed boundary conditions, leading to stability and optimal error estimates for the proposed formulation. Finally, we provide numerical evidence demonstrating the theoretical properties of the method and showcasing its capabilities by solving the Biot system on a realistic brain geometry. - oai:arXiv.org:2512.17521v1 - math.NA - cs.NA - Mon, 22 Dec 2025 00:00:00 -0500 + Research on a hybrid LSTM-CNN-Attention model for text-based web content classification + https://arxiv.org/abs/2512.18475 + arXiv:2512.18475v1 Announce Type: new +Abstract: This study presents a hybrid deep learning architecture that integrates LSTM, CNN, and an Attention mechanism to enhance the classification of web content based on text. Pretrained GloVe embeddings are used to represent words as dense vectors that preserve semantic similarity. 
The CNN layer extracts local n-gram patterns and lexical features, while the LSTM layer models long-range dependencies and sequential structure. The integrated Attention mechanism enables the model to focus selectively on the most informative parts of the input sequence. A 5-fold cross-validation setup was used to assess the robustness and generalizability of the proposed solution. Experimental results show that the hybrid LSTM-CNN-Attention model achieved outstanding performance, with an accuracy of 0.98, precision of 0.94, recall of 0.92, and F1-score of 0.93. These results surpass the performance of baseline models based solely on CNNs, LSTMs, or transformer-based classifiers such as BERT. The combination of neural network components enabled the model to effectively capture both fine-grained text structures and broader semantic context. Furthermore, the use of GloVe embeddings provided an efficient and effective representation of textual data, making the model suitable for integration into systems with real-time or near-real-time requirements. The proposed hybrid architecture demonstrates high effectiveness in text-based web content classification, particularly in tasks requiring both syntactic feature extraction and semantic interpretation. By combining presented mechanisms, the model addresses the limitations of individual architectures and achieves improved generalization. These findings support the broader use of hybrid deep learning approaches in NLP applications, especially where complex, unstructured textual data must be processed and classified with high reliability. 
+ oai:arXiv.org:2512.18475v1 + cs.CL + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Nanna Berre, Kent-Andre Mardal, Andr\'e Massing, Ivan Yotov + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Mykola Kuz, Ihor Lazarovych, Mykola Kozlenko, Mykola Pikuliak, Andrii Kvasniuk - Comparison of two statistical image reconstruction algorithms for quantitative assessment of pathological lesions using gamma emission tomography - https://arxiv.org/abs/2512.17523 - arXiv:2512.17523v1 Announce Type: new -Abstract: This study compares two statistical approaches to image reconstruction in single-photon emission computed tomography (SPECT). We evaluated the widely used Ordered Subset Expectation Maximization (OSEM) algorithm and the newer Maximum a Posteriori approach with Entropy prior (MAP-Ent) approach in the context of quantifying radiopharmaceutical uptake in pathological lesions. Numerical experiments were performed using a digital twin of the standardized NEMA IEC phantom, which contains six spheres of varying diameters to simulate lesions. Quantitative accuracy was assessed using the maximum recovery coefficient (RCmax), defined as the ratio of the reconstructed maximum activity to the true value. The study shows that OSEM exhibits unstable convergence during iterations, leading to noise and edge artifacts in lesion images. Post-filtering stabilizes the reconstruction and ensures convergence, producing RCmax-size curves that could be used as correction factors in clinical evaluations. However, this approach significantly underestimates uptake in small lesions and may even lead to the complete loss of small lesions on reconstructed images. In contrast, MAP-Ent demonstrates fundamentally different behavior: it achieves stable convergence and preserves quantitative accuracy without post-filtering, while maintaining the contrast of even the smallest lesions. 
However, the iteration number at which accurate reconstruction is achieved depends strongly on the choice of a single global regularization parameter, which limits optimal performance across lesions of different sizes. These results demonstrate the need for locally adaptive regularization in MAP-Ent to improve quantitative accuracy in lesion reconstruction. - oai:arXiv.org:2512.17523v1 - math.NA - cs.NA - Mon, 22 Dec 2025 00:00:00 -0500 + STORM: Search-Guided Generative World Models for Robotic Manipulation + https://arxiv.org/abs/2512.18477 + arXiv:2512.18477v1 Announce Type: new +Abstract: We present STORM (Search-Guided Generative World Models), a novel framework for spatio-temporal reasoning in robotic manipulation that unifies diffusion-based action generation, conditional video prediction, and search-based planning. Unlike prior Vision-Language-Action (VLA) models that rely on abstract latent dynamics or delegate reasoning to language components, STORM grounds planning in explicit visual rollouts, enabling interpretable and foresight-driven decision-making. A diffusion-based VLA policy proposes diverse candidate actions, a generative video world model simulates their visual and reward outcomes, and Monte Carlo Tree Search (MCTS) selectively refines plans through lookahead evaluation. Experiments on the SimplerEnv manipulation benchmark demonstrate that STORM achieves a new state-of-the-art average success rate of 51.0 percent, outperforming strong baselines such as CogACT. Reward-augmented video prediction substantially improves spatio-temporal fidelity and task relevance, reducing Frechet Video Distance by over 75 percent. Moreover, STORM exhibits robust re-planning and failure recovery behavior, highlighting the advantages of search-guided generative world models for long-horizon robotic manipulation. + oai:arXiv.org:2512.18477v1 + cs.RO + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - A. V. Nesterova, N. V. 
Denisova + Wenjun Lin, Jensen Zhang, Kaitong Cai, Keze Wang - SafeBench-Seq: A Homology-Clustered, CPU-Only Baseline for Protein Hazard Screening with Physicochemical/Composition Features and Cluster-Aware Confidence Intervals - https://arxiv.org/abs/2512.17527 - arXiv:2512.17527v1 Announce Type: new -Abstract: Foundation models for protein design raise concrete biosecurity risks, yet the community lacks a simple, reproducible baseline for sequence-level hazard screening that is explicitly evaluated under homology control and runs on commodity CPUs. We introduce SafeBench-Seq, a metadata-only, reproducible benchmark and baseline classifier built entirely from public data (SafeProtein hazards and UniProt benigns) and interpretable features (global physicochemical descriptors and amino-acid composition). To approximate "never-before-seen" threats, we homology-cluster the combined dataset at <=40% identity and perform cluster-level holdouts (no cluster overlap between train/test). We report discrimination (AUROC/AUPRC) and screening-operating points (TPR@1% FPR; FPR@95% TPR) with 95% bootstrap confidence intervals (n=200), and we provide calibrated probabilities via CalibratedClassifierCV (isotonic for Logistic Regression / Random Forest; Platt sigmoid for Linear SVM). We quantify probability quality using Brier score, Expected Calibration Error (ECE; 15 bins), and reliability diagrams. Shortcut susceptibility is probed via composition-preserving residue shuffles and length-/composition-only ablations. Empirically, random splits substantially overestimate robustness relative to homology-clustered evaluation; calibrated linear models exhibit comparatively good calibration, while tree ensembles retain slightly higher Brier/ECE. SafeBench-Seq is CPU-only, reproducible, and releases metadata only (accessions, cluster IDs, split labels), enabling rigorous evaluation without distributing hazardous sequences. 
- oai:arXiv.org:2512.17527v1 - cs.LG + Insider Threat Detection Using GCN and Bi-LSTM with Explicit and Implicit Graph Representations + https://arxiv.org/abs/2512.18483 + arXiv:2512.18483v1 Announce Type: new +Abstract: Insider threat detection (ITD) is challenging due to the subtle and concealed nature of malicious activities performed by trusted users. This paper proposes a post-hoc ITD framework that integrates explicit and implicit graph representations with temporal modelling to capture complex user behaviour patterns. An explicit graph is constructed using predefined organisational rules to model direct relationships among user activities. To mitigate noise and limitations in this hand-crafted structure, an implicit graph is learned from feature similarities using the Gumbel-Softmax trick, enabling the discovery of latent behavioural relationships. Separate Graph Convolutional Networks (GCNs) process the explicit and implicit graphs to generate node embeddings, which are concatenated and refined through an attention mechanism to emphasise threat-relevant features. The refined representations are then passed to a bidirectional Long Short-Term Memory (Bi-LSTM) network to capture temporal dependencies in user behaviour. Activities are flagged as anomalous when their probability scores fall below a predefined threshold. Extensive experiments on CERT r5.2 and r6.2 datasets demonstrate that the proposed framework outperforms state-of-the-art methods. On r5.2, the model achieves an AUC of 98.62, a detection rate of 100%, and a false positive rate of 0.05. On the more challenging r6.2 dataset, it attains an AUC of 88.48, a detection rate of 80.15%, and a false positive rate of 0.15, highlighting the effectiveness of combining graph-based and temporal representations for robust ITD. 
+ oai:arXiv.org:2512.18483v1 cs.AI cs.CR - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Muhammad Haris Khan - - - Voxel-GS: Quantized Scaffold Gaussian Splatting Compression with Run-Length Coding - https://arxiv.org/abs/2512.17528 - arXiv:2512.17528v1 Announce Type: new -Abstract: Substantial Gaussian splatting format point clouds require effective compression. In this paper, we propose Voxel-GS, a simple yet highly effective framework that departs from the complex neural entropy models of prior work, instead achieving competitive performance using only a lightweight rate proxy and run-length coding. Specifically, we employ a differentiable quantization to discretize the Gaussian attributes of Scaffold-GS. Subsequently, a Laplacian-based rate proxy is devised to impose an entropy constraint, guiding the generation of high-fidelity and compact reconstructions. Finally, this integer-type Gaussian point cloud is compressed losslessly using Octree and run-length coding. Experiments validate that the proposed rate proxy accurately estimates the bitrate of run-length coding, enabling Voxel-GS to eliminate redundancy and optimize for a more compact representation. Consequently, our method achieves a remarkable compression ratio with significantly faster coding speeds than prior art. The code is available at https://github.com/zb12138/VoxelGS. 
- oai:arXiv.org:2512.17528v1 - cs.MM - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Chunyang Fu, Xiangrui Liu, Shiqi Wang, Zhu Li + 10.1109/TAI.2025.3647418 + Rahul Yumlembam, Biju Issac, Seibu Mary Jacob, Longzhi Yang, Deepa Krishnan - NetworkFF: Unified Layer Optimization in Forward-Only Neural Networks - https://arxiv.org/abs/2512.17531 - arXiv:2512.17531v1 Announce Type: new -Abstract: The Forward-Forward algorithm eliminates backpropagation's memory constraints and biological implausibility through dual forward passes with positive and negative data. However, conventional implementations suffer from critical inter-layer isolation, where layers optimize goodness functions independently without leveraging collective learning dynamics. This isolation constrains representational coordination and limits convergence efficiency in deeper architectures. This paper introduces Collaborative Forward-Forward (CFF) learning, extending the original algorithm through inter-layer cooperation mechanisms that preserve forward-only computation while enabling global context integration. Our framework implements two collaborative paradigms: Fixed CFF (F-CFF) with constant inter-layer coupling and Adaptive CFF (A-CFF) with learnable collaboration parameters that evolve during training. The collaborative goodness function incorporates weighted contributions from all layers, enabling coordinated feature learning while maintaining memory efficiency and biological plausibility. Comprehensive evaluation on MNIST and Fashion-MNIST demonstrates significant performance improvements over baseline Forward-Forward implementations. These findings establish inter-layer collaboration as a fundamental enhancement to Forward-Forward learning, with immediate applicability to neuromorphic computing architectures and energy-constrained AI systems. 
- oai:arXiv.org:2512.17531v1
 cs.LG
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Salar Beigzad


 QLink: Quantum-Safe Bridge Architecture for Blockchain Interoperability
 https://arxiv.org/abs/2512.18488
 arXiv:2512.18488v1 Announce Type: new
Abstract: Secure interoperability across heterogeneous blockchains remains one of the most pressing challenges in Web3, with existing bridge protocols vulnerable to both classical exploits and emerging quantum threats. This paper introduces QLink, a quantum-safe Layer 3 interoperability protocol that integrates post-quantum cryptography (PQC), quantum key distribution (QKD), and hardware security modules (HSMs) into a unified validator architecture. To our knowledge, QLink is the first interoperability framework to combine these mechanisms to secure validator communication, proof aggregation, and key management. Validators exchange encryption keys through QKD channels, achieving information-theoretic security against interception, while cross-chain proofs are generated and aggregated with NIST-standardized PQC algorithms. Private keys remain sealed inside HSM enclaves, mitigating the risk of theft or leakage. Deployed as a dedicated Layer 3 protocol, QLink operates independently of Layer 1 and Layer 2 chains, providing a scalable, decentralized foundation for secure cross-chain messaging and asset transfer. Experimental evaluation using network simulations demonstrates that validator communication overhead remains sub-second, while security guarantees extend beyond current bridge architectures to resist both classical and quantum adversaries. By addressing today's vulnerabilities and anticipating future quantum threats, QLink establishes a practical and future-proof pathway for blockchain interoperability. 
+ oai:arXiv.org:2512.18488v1
 cs.CR
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Joao Vitor Barros Da Silva, Arsh Gupta, Madhusudan Singh, Irish Singh


 Robust-R1: Degradation-Aware Reasoning for Robust Visual Understanding
 https://arxiv.org/abs/2512.17532
 arXiv:2512.17532v1 Announce Type: new
Abstract: Multimodal Large Language Models struggle to maintain reliable performance under extreme real-world visual degradations, which impede their practical robustness. Existing robust MLLMs predominantly rely on implicit training/adaptation that focuses solely on visual encoder generalization, suffering from limited interpretability and isolated optimization. To overcome these limitations, we propose Robust-R1, a novel framework that explicitly models visual degradations through structured reasoning chains. Our approach integrates: (i) supervised fine-tuning for degradation-aware reasoning foundations, (ii) reward-driven alignment for accurately perceiving degradation parameters, and (iii) dynamic reasoning depth scaling adapted to degradation intensity. To facilitate this approach, we introduce a specialized 11K dataset featuring realistic degradations synthesized across four critical real-world visual processing stages, each annotated with structured chains connecting degradation parameters, perceptual influence, pristine semantic reasoning chain, and conclusion. Comprehensive evaluations demonstrate state-of-the-art robustness: Robust-R1 outperforms all general and robust baselines on the real-world degradation benchmark R-Bench, while maintaining superior anti-degradation performance under multi-intensity adversarial degradations on MMMB, MMStar, and RealWorldQA. 
- oai:arXiv.org:2512.17532v1 - cs.CV + Large Language Models as Discounted Bayesian Filters + https://arxiv.org/abs/2512.18489 + arXiv:2512.18489v1 Announce Type: new +Abstract: Large Language Models (LLMs) demonstrate strong few-shot generalization through in-context learning, yet their reasoning in dynamic and stochastic environments remains opaque. Prior studies mainly focus on static tasks and overlook the online adaptation required when beliefs must be continuously updated, which is a key capability for LLMs acting as world models or agents. We introduce a Bayesian filtering framework to evaluate online inference in LLMs. Our probabilistic probe suite spans both multivariate discrete distributions, such as dice rolls, and continuous distributions, such as Gaussian processes, where ground-truth parameters shift over time. We find that while LLM belief updates resemble Bayesian posteriors, they are more accurately characterized by an exponential forgetting filter with a model-specific discount factor smaller than one. This reveals systematic discounting of older evidence that varies significantly across model architectures. Although inherent priors are often miscalibrated, the updating mechanism itself remains structured and principled. We further validate these findings in a simulated agent task and propose prompting strategies that effectively recalibrate priors with minimal computational cost. 
+ oai:arXiv.org:2512.18489v1 cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jiaqi Tang, Jianmin Chen, Wei Wei, Xiaogang Xu, Runtao Liu, Xiangyu Wu, Qipeng Xie, Jiafei Wu, Lei Zhang, Qifeng Chen + http://creativecommons.org/licenses/by/4.0/ + Jensen Zhang, Jing Yang, Keze Wang - Binding Agent ID: Unleashing the Power of AI Agents with accountability and credibility - https://arxiv.org/abs/2512.17538 - arXiv:2512.17538v1 Announce Type: new -Abstract: Autonomous AI agents lack traceable accountability mechanisms, creating a fundamental dilemma where systems must either operate as ``downgraded tools'' or risk real-world abuse. This vulnerability stems from the limitations of traditional key-based authentication, which guarantees neither the operator's physical identity nor the agent's code integrity. To bridge this gap, we propose BAID (Binding Agent ID), a comprehensive identity infrastructure establishing verifiable user-code binding. BAID integrates three orthogonal mechanisms: local binding via biometric authentication, decentralized on-chain identity management, and a novel zkVM-based Code-Level Authentication protocol. By leveraging recursive proofs to treat the program binary as the identity, this protocol provides cryptographic guarantees for operator identity, agent configuration integrity, and complete execution provenance, thereby effectively preventing unauthorized operation and code substitution. We implement and evaluate a complete prototype system, demonstrating the practical feasibility of blockchain-based identity management and zkVM-based authentication protocol. 
- oai:arXiv.org:2512.17538v1
 cs.NI
 cs.CR
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Zibin Lin, Shengli Zhang, Guofu Liao, Dacheng Tao, Taotao Wang


 Enhancing Decision-Making in Windows PE Malware Classification During Dataset Shifts with Uncertainty Estimation
 https://arxiv.org/abs/2512.18495
 arXiv:2512.18495v1 Announce Type: new
Abstract: Artificial intelligence techniques have achieved strong performance in classifying Windows Portable Executable (PE) malware, but their reliability often degrades under dataset shifts, leading to misclassifications with severe security consequences. To address this, we enhance an existing LightGBM (LGBM) malware detector by integrating Neural Networks (NN), PriorNet, and Neural Network Ensembles, evaluated across three benchmark datasets: EMBER, BODMAS, and UCSB. The UCSB dataset, composed mainly of packed malware, introduces a substantial distributional shift relative to EMBER and BODMAS, making it a challenging testbed for robustness. We study uncertainty-aware decision strategies, including probability thresholding, PriorNet, ensemble-derived estimates, and Inductive Conformal Evaluation (ICE). Our main contribution is the use of ensemble-based uncertainty estimates as Non-Conformity Measures within ICE, combined with a novel threshold optimisation method. On the UCSB dataset, where the shift is most severe, the state-of-the-art probability-based ICE (SOTA) yields an incorrect acceptance rate (IA%) of 22.8%. In contrast, our method reduces this to 16%, a relative reduction of about 30%, while maintaining competitive correct acceptance rates (CA%). These results demonstrate that integrating ensemble-based uncertainty with conformal prediction provides a more reliable safeguard against misclassifications under extreme dataset shifts, particularly in the presence of packed malware, thereby offering practical benefits for real-world security operations. 
+ oai:arXiv.org:2512.18495v1
 cs.CR
 cs.AI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 10.1016/j.knosys.2025.114723
 Knowledge-Based System, Elsevier, ISSN 1872-7409 (2025)
 Rahul Yumlembam, Biju Issac, Seibu Mary Jacob


 SGCR: A Specification-Grounded Framework for Trustworthy LLM Code Review
 https://arxiv.org/abs/2512.17540
 arXiv:2512.17540v1 Announce Type: new
Abstract: Automating code review with Large Language Models (LLMs) shows immense promise, yet practical adoption is hampered by their lack of reliability, context-awareness, and control. To address this, we propose Specification-Grounded Code Review (SGCR), a framework that grounds LLMs in human-authored specifications to produce trustworthy and relevant feedback. SGCR features a novel dual-pathway architecture: an explicit path ensures deterministic compliance with predefined rules derived from these specifications, while an implicit path heuristically discovers and verifies issues beyond those rules. Deployed in a live industrial environment at HiThink Research, SGCR's suggestions achieved a 42% developer adoption rate, a 90.9% relative improvement over a baseline LLM (22%). Our work demonstrates that specification-grounding is a powerful paradigm for bridging the gap between the generative power of LLMs and the rigorous reliability demands of software engineering.
 - oai:arXiv.org:2512.17540v1
 cs.SE
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Kai Wang, Bingcheng Mao, Shuai Jia, Yujie Ding, Dongming Han, Tianyi Ma, Bin Cao


 Adaptive-VoCo: Complexity-Aware Visual Token Compression for Vision-Language Models
 https://arxiv.org/abs/2512.18496
 arXiv:2512.18496v1 Announce Type: new
Abstract: In recent years, large-scale vision-language models (VLMs) have demonstrated remarkable performance on multimodal understanding and reasoning tasks. 
However, handling high-dimensional visual features often incurs substantial computational and memory costs. VoCo-LLaMA alleviates this issue by compressing visual patch tokens into a few VoCo tokens, reducing computational overhead while preserving strong cross-modal alignment. Nevertheless, such approaches typically adopt a fixed compression rate, limiting their ability to adapt to varying levels of visual complexity. To address this limitation, we propose Adaptive-VoCo, a framework that augments VoCo-LLaMA with a lightweight predictor for adaptive compression. This predictor dynamically selects an optimal compression rate by quantifying an image's visual complexity using statistical cues from the vision encoder, such as patch token entropy and attention map variance. Furthermore, we introduce a joint loss function that integrates rate regularization with complexity alignment. This enables the model to balance inference efficiency with representational capacity, particularly in challenging scenarios. Experimental results show that our method consistently outperforms fixed-rate baselines across multiple multimodal tasks, highlighting the potential of adaptive visual compression for creating more efficient and robust VLMs. + oai:arXiv.org:2512.18496v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Kai Wang, Bingcheng Mao, Shuai Jia, Yujie Ding, Dongming Han, Tianyi Ma, Bin Cao + http://creativecommons.org/licenses/by/4.0/ + Xiaoyang Guo, Keze Wang - FLEG: Feed-Forward Language Embedded Gaussian Splatting from Any Views - https://arxiv.org/abs/2512.17541 - arXiv:2512.17541v1 Announce Type: new -Abstract: We present FLEG, a feed-forward network that reconstructs language-embedded 3D Gaussians from any views. Previous straightforward solutions combine feed-forward reconstruction with Gaussian heads but suffer from fixed input views and insufficient 3D training data. 
In contrast, we propose a 3D-annotation-free training framework for 2D-to-3D lifting from arbitrary uncalibrated and unposed multi-view images. Since the framework does not require 3D annotations, we can leverage large-scale video data with easily obtained 2D instance information to enrich semantic embedding. We also propose an instance-guided contrastive learning to align 2D semantics with the 3D representations. In addition, to mitigate the high memory and computational cost of dense views, we further propose a geometry-semantic hierarchical sparsification strategy. Our FLEG efficiently reconstructs language-embedded 3D Gaussian representation in a feed-forward manner from arbitrary sparse or dense views, jointly producing accurate geometry, high-fidelity appearance, and language-aligned semantics. Extensive experiments show that it outperforms existing methods on various related tasks. Project page: https://fangzhou2000.github.io/projects/fleg. - oai:arXiv.org:2512.17541v1 + PlantDiseaseNet-RT50: A Fine-tuned ResNet50 Architecture for High-Accuracy Plant Disease Detection Beyond Standard CNNs + https://arxiv.org/abs/2512.18500 + arXiv:2512.18500v1 Announce Type: new +Abstract: Plant diseases pose a significant threat to agricultural productivity and global food security, accounting for 70-80% of crop losses worldwide. Traditional detection methods rely heavily on expert visual inspection, which is time-consuming, labour-intensive, and often impractical for large-scale farming operations. In this paper, we present PlantDiseaseNet-RT50, a novel fine-tuned deep learning architecture based on ResNet50 for automated plant disease detection. Our model features strategically unfrozen layers, a custom classification head with regularization mechanisms, and dynamic learning rate scheduling through cosine decay. 
Using a comprehensive dataset of distinct plant disease categories across multiple crop species, PlantDiseaseNet-RT50 achieves exceptional performance with approximately 98% accuracy, precision, and recall. Our architectural modifications and optimization protocol demonstrate how targeted fine-tuning can transform a standard pretrained model into a specialized agricultural diagnostic tool. We provide a detailed account of our methodology, including the systematic unfreezing of terminal layers, implementation of batch normalization and dropout regularization and application of advanced training techniques. PlantDiseaseNet-RT50 represents a significant advancement in AI-driven agricultural tools, offering a computationally efficient solution for rapid and accurate plant disease diagnosis that can be readily implemented in practical farming contexts to support timely interventions and reduce crop losses. + oai:arXiv.org:2512.18500v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qijian Tian, Xin Tan, Jiayu Ying, Xuhong Wang, Yuan Xie, Lizhuang Ma + 10.1109/ACROSET66531.2025.11281259 + 2025 IEEE International Conference on Advances in Computing Research On Science Engineering and Technology (ACROSET), INDORE, India, 2025, pp. 1-6 + Santwana Sagnika, Manav Malhotra, Ishtaj Kaur Deol, Soumyajit Roy, Swarnav Kumar - A distance-free approach to generalized weights - https://arxiv.org/abs/2512.17542 - arXiv:2512.17542v1 Announce Type: new -Abstract: We propose a unified theory of generalized weights for linear codes endowed with an arbitrary distance. Instead of relying on supports or anticodes, the weights of a code are defined via the intersections of the code with a chosen family of spaces, which we call a test family. The choice of test family determines the properties of the corresponding generalized weights and the characteristics of the code that they capture. 
In this general framework, we prove that generalized weights are weakly increasing and that certain subsequences are strictly increasing. We also prove a duality result reminiscent of Wei's Duality Theorem. The corresponding properties of generalized Hamming and rank-metric weights follow from our general results by selecting optimal anticodes as a test family. For sum-rank metric codes, we propose a test family that results in generalized weights that are closely connected to -- but not always the same as -- the usual generalized weights. This choice allows us to extend the known duality results for generalized sum-rank weights to some sum-rank-metric codes with a nonzero Hamming component. Finally, we explore a family of generalized weights obtained by intersecting the underlying code with MDS or MRD codes. - oai:arXiv.org:2512.17542v1 - cs.IT - cs.DM - math.IT - Mon, 22 Dec 2025 00:00:00 -0500 + Sink Proximity: A Novel Approach for Online Vehicle Dispatch in Ride-hailing + https://arxiv.org/abs/2512.18501 + arXiv:2512.18501v1 Announce Type: new +Abstract: Ride-hailing platforms have a profound impact on urban transportation systems, and their performance largely depends on how intelligently they dispatch vehicles in real time. In this work, we develop a new approach to online vehicle dispatch that strengthens a platform's ability to serve more requests under demand uncertainty. We introduce a novel measure called sink proximity, a network-science-inspired measure that captures how demand and vehicle flows are likely to evolve across the city. By integrating this measure into a shareability-network framework, we design an online dispatch algorithm that naturally considers future network states, without depending on fragile spatiotemporal forecasts. Numerical studies demonstrate that our proposed solution significantly improves the request service rate under peak hours within a receding horizon framework with limited future information available. 
+ oai:arXiv.org:2512.18501v1 + eess.SY + cs.SY + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Andrea Di Giusto, Elisa Gorla, Alberto Ravagnani + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Ruiting Wang, Jiaman Wu, Fabio Paparella, Scott J. Moura, Marta C. Gonzalez - ClothHMR: 3D Mesh Recovery of Humans in Diverse Clothing from Single Image - https://arxiv.org/abs/2512.17545 - arXiv:2512.17545v1 Announce Type: new -Abstract: With 3D data rapidly emerging as an important form of multimedia information, 3D human mesh recovery technology has also advanced accordingly. However, current methods mainly focus on handling humans wearing tight clothing and perform poorly when estimating body shapes and poses under diverse clothing, especially loose garments. To this end, we make two key insights: (1) tailoring clothing to fit the human body can mitigate the adverse impact of clothing on 3D human mesh recovery, and (2) utilizing human visual information from large foundational models can enhance the generalization ability of the estimation. Based on these insights, we propose ClothHMR, to accurately recover 3D meshes of humans in diverse clothing. ClothHMR primarily consists of two modules: clothing tailoring (CT) and FHVM-based mesh recovering (MR). The CT module employs body semantic estimation and body edge prediction to tailor the clothing, ensuring it fits the body silhouette. The MR module optimizes the initial parameters of the 3D human mesh by continuously aligning the intermediate representations of the 3D mesh with those inferred from the foundational human visual model (FHVM). ClothHMR can accurately recover 3D meshes of humans wearing diverse clothing, precisely estimating their body shapes and poses. Experimental results demonstrate that ClothHMR significantly outperforms existing state-of-the-art methods across benchmark datasets and in-the-wild images. 
Additionally, a web application for online fashion and shopping powered by ClothHMR is developed, illustrating that ClothHMR can effectively serve real-world usage scenarios. The code and model for ClothHMR are available at: \url{https://github.com/starVisionTeam/ClothHMR}.
 - oai:arXiv.org:2512.17545v1
 cs.CV
 cs.AI
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 10.1145/3731715.3733288
 Yunqi Gao, Leyuan Liu, Yuhan Li, Changxin Gao, Yuanyuan Liu, Jingying Chen


 NASTaR: NovaSAR Automated Ship Target Recognition Dataset
 https://arxiv.org/abs/2512.18503
 arXiv:2512.18503v1 Announce Type: new
Abstract: Synthetic Aperture Radar (SAR) offers a unique capability for all-weather, space-based maritime activity monitoring by capturing and imaging strong reflections from ships at sea. A well-defined challenge in this domain is ship type classification. Due to the high diversity and complexity of ship types, accurate recognition is difficult and typically requires specialized deep learning models. These models, however, depend on large, high-quality ground-truth datasets to achieve robust performance and generalization. Furthermore, the growing variety of SAR satellites operating at different frequencies and spatial resolutions has amplified the need for more annotated datasets to enhance model accuracy. To address this, we present the NovaSAR Automated Ship Target Recognition (NASTaR) dataset. This dataset comprises 3415 ship patches extracted from NovaSAR S-band imagery, with labels matched to AIS data. It includes distinctive features such as 23 unique classes, inshore/offshore separation, and an auxiliary wake dataset for patches where ship wakes are visible. We validated the dataset's applicability across prominent ship-type classification scenarios using benchmark deep learning models. Results demonstrate over 60% accuracy for classifying four major ship types, over 70% for a three-class scenario, more than 75% for distinguishing cargo from tanker ships, and over 87% for identifying fishing vessels. 
The NASTaR dataset is available at https://10.5523/bris, while relevant codes for benchmarking and analysis are available at https://github.com/benyaminhosseiny/nastar. + oai:arXiv.org:2512.18503v1 cs.CV - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - 10.1145/3731715.3733288 - Yunqi Gao, Leyuan Liu, Yuhan Li, Changxin Gao, Yuanyuan Liu, Jingying Chen + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Benyamin Hosseiny, Kamirul Kamirul, Odysseas Pappas, Alin Achim - G3Splat: Geometrically Consistent Generalizable Gaussian Splatting - https://arxiv.org/abs/2512.17547 - arXiv:2512.17547v1 Announce Type: new -Abstract: 3D Gaussians have recently emerged as an effective scene representation for real-time splatting and accurate novel-view synthesis, motivating several works to adapt multi-view structure prediction networks to regress per-pixel 3D Gaussians from images. However, most prior work extends these networks to predict additional Gaussian parameters -- orientation, scale, opacity, and appearance -- while relying almost exclusively on view-synthesis supervision. We show that a view-synthesis loss alone is insufficient to recover geometrically meaningful splats in this setting. We analyze and address the ambiguities of learning 3D Gaussian splats under self-supervision for pose-free generalizable splatting, and introduce G3Splat, which enforces geometric priors to obtain geometrically consistent 3D scene representations. Trained on RE10K, our approach achieves state-of-the-art performance in (i) geometrically consistent reconstruction, (ii) relative pose estimation, and (iii) novel-view synthesis. We further demonstrate strong zero-shot generalization on ScanNet, substantially outperforming prior work in both geometry recovery and relative pose estimation. Code and pretrained models are released on our project page (https://m80hz.github.io/g3splat/). 
- oai:arXiv.org:2512.17547v1 + GTMA: Dynamic Representation Optimization for OOD Vision-Language Models + https://arxiv.org/abs/2512.18504 + arXiv:2512.18504v1 Announce Type: new +Abstract: Vision-language models (VLMs) struggle in open-world applications, where out-of-distribution (OOD) concepts often trigger cross-modal alignment collapse and severely degrade zero-shot performance. We identify the root cause as modal asymmetry: while the visual encoder can extract discriminative features from unseen images, the text encoder is constrained by a fixed discrete vocabulary and cannot synthesize new semantic anchors. Existing approaches such as CoOp or LoRA provide only partial remedies, as they remain confined to the pre-trained semantic space. + To overcome this bottleneck, we propose dynamic representation optimization, realized through the Guided Target-Matching Adaptation (GTMA) framework. At inference time, GTMA constructs a continuous pseudo-word embedding that best aligns with an OOD image's visual anchor, effectively bypassing vocabulary limitations. The optimization is driven by an adaptive gradient-based representation policy optimization algorithm, which incorporates semantic regularization to preserve plausibility and compatibility with the model's prior knowledge. + Experiments on ImageNet-R and the VISTA-Beyond benchmark demonstrate that GTMA improves zero-shot and few-shot OOD accuracy by up to 15-20 percent over the base VLM while maintaining performance on in-distribution concepts. Ablation studies further confirm the necessity of pseudo-word optimization. 
+ oai:arXiv.org:2512.18504v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mehdi Hosseinzadeh, Shin-Fang Chng, Yi Xu, Simon Lucey, Ian Reid, Ravi Garg + http://creativecommons.org/licenses/by/4.0/ + Jensen Zhang, Ningyuan Liu, Keze Wang - Yet another cubical type theory, but via a semantic approach - https://arxiv.org/abs/2512.17548 - arXiv:2512.17548v1 Announce Type: new -Abstract: We propose a new cubical type theory, termed (self-deprecatingly) the naive cubical type theory, and study its semantics using the universe category framework, which is similar to Uemura's categories with representable morphisms. In particular, we show that this new type theory admits an interpretation in a wide variety of settings, including simplicial sets and cartesian cubical sets. - oai:arXiv.org:2512.17548v1 - cs.LO - math.CT - math.LO - Mon, 22 Dec 2025 00:00:00 -0500 + Teaching and Critiquing Conceptualization and Operationalization in NLP + https://arxiv.org/abs/2512.18505 + arXiv:2512.18505v1 Announce Type: new +Abstract: NLP researchers regularly invoke abstract concepts like "interpretability," "bias," "reasoning," and "stereotypes," without defining them. Each subfield has a shared understanding or conceptualization of what these terms mean and how we should treat them, and this shared understanding is the basis on which operational decisions are made: Datasets are built to evaluate these concepts, metrics are proposed to quantify them, and claims are made about systems. But what do they mean, what should they mean, and how should we measure them? I outline a seminar I created for students to explore these questions of conceptualization and operationalization, with an interdisciplinary reading list and an emphasis on discussion and critique. 
+ oai:arXiv.org:2512.18505v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Chris Kapulkin, Yufeng Li + http://creativecommons.org/licenses/by-sa/4.0/ + Vagrant Gautam - Deep Learning-based Robust Autonomous Navigation of Aerial Robots in Dense Forests - https://arxiv.org/abs/2512.17553 - arXiv:2512.17553v1 Announce Type: new -Abstract: Autonomous aerial navigation in dense natural environments remains challenging due to limited visibility, thin and irregular obstacles, GNSS-denied operation, and frequent perceptual degradation. This work presents an improved deep learning-based navigation framework that integrates semantically enhanced depth encoding with neural motion-primitive evaluation for robust flight in cluttered forests. Several modules are incorporated on top of the original sevae-ORACLE algorithm to address limitations observed during real-world deployment, including lateral control for sharper maneuvering, a temporal consistency mechanism to suppress oscillatory planning decisions, a stereo-based visual-inertial odometry solution for drift-resilient state estimation, and a supervisory safety layer that filters unsafe actions in real time. A depth refinement stage is included to improve the representation of thin branches and reduce stereo noise, while GPU optimization increases onboard inference throughput from 4 Hz to 10 Hz. - The proposed approach is evaluated against several existing learning-based navigation methods under identical environmental conditions and hardware constraints. It demonstrates higher success rates, more stable trajectories, and improved collision avoidance, particularly in highly cluttered forest settings. The system is deployed on a custom quadrotor in three boreal forest environments, achieving fully autonomous completion in all flights in moderate and dense clutter, and 12 out of 15 flights in highly dense underbrush. 
These results demonstrate improved reliability and safety over existing navigation methods in complex natural environments. - oai:arXiv.org:2512.17553v1 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 + Exploring Runtime Evolution in Android: A Cross-Version Analysis and Its Implications for Memory Forensics + https://arxiv.org/abs/2512.18517 + arXiv:2512.18517v1 Announce Type: new +Abstract: Userland memory forensics has become a critical component of smartphone investigations and incident response, enabling the recovery of volatile evidence such as deleted messages from end-to-end encrypted apps and cryptocurrency transactions. However, these forensics tools, particularly on Android, face significant challenges in adapting to different versions and maintaining reliability over time due to the constant evolution of low-level structures critical for evidence recovery and reconstruction. Structural changes, ranging from simple offset modifications to complete architectural redesigns, pose substantial maintenance and adaptability issues for forensic tools that rely on precise structure interpretation. Thus, this paper presents the first systematic study of Android Runtime (ART) structural evolution and its implications for memory forensics. We conduct an empirical analysis of critical Android runtime structures, examining their evolution across six versions for four different architectures. Our findings reveal that over 73.2% of structure members underwent positional changes, significantly affecting the adaptability and reliability of memory forensic tools. Further analysis of core components such as Runtime, Thread, and Heap structures highlights distinct evolution patterns and their impact on critical forensic operations, including thread state enumeration, memory mapping, and object reconstruction. 
These results demonstrate that traditional approaches relying on static structure definitions and symbol-based methods, while historically reliable, are increasingly unsustainable on their own. We recommend that memory forensic tools in general and Android in particular evolve toward hybrid approaches that retain the validation strength of symbolic methods while integrating automated structure inference, version-aware parsing, and redundant analysis strategies. + oai:arXiv.org:2512.18517v1 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Guglielmo Del Col, V\"ain\"o Karjalainen, Teemu Hakala, Yibo Zhang, Eija Honkavaara + The 28th International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2025). RAID 2025, Gold Coast, Australia (19-22 October, 2025) + Babangida Bappah, Lauren G Bristol, Lamine Noureddine, Sideeq Bello, Umar Farooq, Aisha Ali-Gombe - Towards Explainable Conversational AI for Early Diagnosis with Large Language Models - https://arxiv.org/abs/2512.17559 - arXiv:2512.17559v1 Announce Type: new -Abstract: Healthcare systems around the world are grappling with issues like inefficient diagnostics, rising costs, and limited access to specialists. These problems often lead to delays in treatment and poor health outcomes. Most current AI and deep learning diagnostic systems are not very interactive or transparent, making them less effective in real-world, patient-centered environments. This research introduces a diagnostic chatbot powered by a Large Language Model (LLM), using GPT-4o, Retrieval-Augmented Generation, and explainable AI techniques. The chatbot engages patients in a dynamic conversation, helping to extract and normalize symptoms while prioritizing potential diagnoses through similarity matching and adaptive questioning. With Chain-of-Thought prompting, the system also offers more transparent reasoning behind its diagnoses. 
When tested against traditional machine learning models like Naive Bayes, Logistic Regression, SVM, Random Forest, and KNN, the LLM-based system delivered impressive results, achieving an accuracy of 90% and Top-3 accuracy of 100%. These findings offer a promising outlook for more transparent, interactive, and clinically relevant AI in healthcare. - oai:arXiv.org:2512.17559v1 + Prediction and Forecast of Short-Term Drought Impacts Using Machine Learning to Support Mitigation and Adaptation Efforts + https://arxiv.org/abs/2512.18522 + arXiv:2512.18522v1 Announce Type: new +Abstract: Drought is a complex natural hazard that affects ecological and human systems, often resulting in substantial environmental and economic losses. Recent increases in drought severity, frequency, and duration underscore the need for effective monitoring and mitigation strategies. Predicting drought impacts rather than drought conditions alone offers opportunities to support early warning systems and proactive decision-making. This study applies machine learning techniques to link drought indices with historical drought impact records (2005:2024) to generate short-term impact forecasts. By addressing key conceptual and data-driven challenges regarding temporal scale and impact quantification, the study aims to improve the predictability of drought impacts at actionable lead times. The Drought Severity and Coverage Index (DSCI) and the Evaporative Stress Index (ESI) were combined with impact data from the Drought Impact Reporter (DIR) to model and forecast weekly drought impacts. Results indicate that Fire and Relief impacts were predicted with the highest accuracy, followed by Agriculture and Water, while forecasts for Plants and Society impacts showed greater variability. County and state level forecasts for New Mexico were produced using an eXtreme Gradient Boosting (XGBoost) model that incorporated both DSCI and ESI. 
The model successfully generated forecasts up to eight weeks in advance using the preceding eight weeks of data for most impact categories. This work supports the development of an Ecological Drought Information Communication System (EcoDri) for New Mexico and demonstrates the potential for broader application in similar drought-prone regions. The findings can aid stakeholders, land managers, and decision-makers in developing and implementing more effective drought mitigation and adaptation strategies. + oai:arXiv.org:2512.18522v1 + cs.LG cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Maliha Tabassum, M Shamim Kaiser + http://creativecommons.org/licenses/by/4.0/ + Hatim M. E. Geli, Islam Omar, Mona Y. Elshinawy, David W. DuBios, Lara Prehodko, Kelly H Smith, Abdel-Hameed A. Badawy - Learning-Based Safety-Aware Task Scheduling for Efficient Human-Robot Collaboration - https://arxiv.org/abs/2512.17560 - arXiv:2512.17560v1 Announce Type: new -Abstract: Ensuring human safety in collaborative robotics can compromise efficiency because traditional safety measures increase robot cycle time when human interaction is frequent. This paper proposes a safety-aware approach to mitigate efficiency losses without assuming prior knowledge of safety logic. Using a deep-learning model, the robot learns the relationship between system state and safety-induced speed reductions based on execution data. Our framework does not explicitly predict human motions but directly models the interaction effects on robot speed, simplifying implementation and enhancing generalizability to different safety logics. At runtime, the learned model optimizes task selection to minimize cycle time while adhering to safety requirements. Experiments on a pick-and-packaging scenario demonstrated significant reductions in cycle times. 
- oai:arXiv.org:2512.17560v1
 cs.RO
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 M. Faroni, A. Spano, A. M. Zanchettin, P. Rocco
 
 
 Feature-Enhanced Graph Neural Networks for Classification of Synthetic Graph Generative Models: A Benchmarking Study
 https://arxiv.org/abs/2512.18524
 arXiv:2512.18524v1 Announce Type: new
Abstract: The ability to discriminate between generative graph models is critical to understanding complex structural patterns in both synthetic graphs and the real-world structures that they emulate. While Graph Neural Networks (GNNs) have seen increasing use to great effect in graph classification tasks, few studies explore their integration with interpretable graph theoretic features. This paper investigates the classification of synthetic graph families using a hybrid approach that combines GNNs with engineered graph-theoretic features. We generate a large and structurally diverse synthetic dataset comprising graphs from five representative generative families, Erdos-Renyi, Watts-Strogatz, Barabási-Albert, Holme-Kim, and Stochastic Block Model. These graphs range in size up to 1x10^4 nodes, containing up to 1.1x10^5 edges. A comprehensive range of node and graph level features is extracted for each graph and pruned using a Random Forest based feature selection pipeline. The features are integrated into six GNN architectures: GCN, GAT, GATv2, GIN, GraphSAGE and GTN. Each architecture is optimised for hyperparameter selection using Optuna. Finally, models were compared against a baseline Support Vector Machine (SVM) trained solely on the handcrafted features. Our evaluation demonstrates that GraphSAGE and GTN achieve the highest classification performance, with 98.5% accuracy, and strong class separation evidenced by t-SNE and UMAP visualisations. GCN and GIN also performed well, while GAT-based models lagged due to limitations in their ability to capture global structures. 
The SVM baseline confirmed the importance of the message passing functionality for performance gains and meaningful class separation. + oai:arXiv.org:2512.18524v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - M. Faroni, A. Spano, A. M. Zanchettin, P. Rocco + Janek Dyer, Jagdeep Ahluwalia, Javad Zarrin - When De-noising Hurts: A Systematic Study of Speech Enhancement Effects on Modern Medical ASR Systems - https://arxiv.org/abs/2512.17562 - arXiv:2512.17562v1 Announce Type: new -Abstract: Speech enhancement methods are commonly believed to improve the performance of automatic speech recognition (ASR) in noisy environments. However, the effectiveness of these techniques cannot be taken for granted in the case of modern large-scale ASR models trained on diverse, noisy data. We present a systematic evaluation of MetricGAN-plus-voicebank denoising on four state-of-the-art ASR systems: OpenAI Whisper, NVIDIA Parakeet, Google Gemini Flash 2.0, Parrotlet-a using 500 medical speech recordings under nine noise conditions. ASR performance is measured using semantic WER (semWER), a normalized word error rate (WER) metric accounting for domain-specific normalizations. Our results reveal a counterintuitive finding: speech enhancement preprocessing degrades ASR performance across all noise conditions and models. Original noisy audio achieves lower semWER than enhanced audio in all 40 tested configurations (4 models x 10 conditions), with degradations ranging from 1.1% to 46.6% absolute semWER increase. These findings suggest that modern ASR models possess sufficient internal noise robustness and that traditional speech enhancement may remove acoustic features critical for ASR. 
For practitioners deploying medical scribe systems in noisy clinical environments, our results indicate that preprocessing audio with noise reduction techniques might not just be computationally wasteful but also be potentially harmful to the transcription accuracy. - oai:arXiv.org:2512.17562v1
 cs.SD
 cs.AI
 cs.LG
 eess.AS
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-sa/4.0/
 Sujal Chondhekar, Vasanth Murukuri, Rushabh Vasani, Sanika Goyal, Rajshree Badami, Anushree Rana, Sanjana SN, Karthik Pandia, Sulabh Katiyar, Neha Jagadeesh, Sankalp Gulati
 
 
 A Formal Descriptive Language for Learning Dynamics: A Five-Layer Structural Coordinate System
 https://arxiv.org/abs/2512.18525
 arXiv:2512.18525v1 Announce Type: new
+Abstract: Understanding learning as a dynamic process is challenging due to the interaction of multiple factors, including cognitive load, internal state change, and subjective evaluation. Existing approaches often address these elements in isolation, limiting the ability to describe learning phenomena within a unified and structurally explicit framework. This paper proposes a multi-layer formal descriptive framework for learning dynamics. Rather than offering a predictive or prescriptive model, the framework introduces a symbolic language composed of state variables, mappings, and layer-specific responsibilities, enabling consistent description of learning processes without commitment to specific functional forms or optimization objectives. This descriptive framework is intended to serve as a structural substrate for analyzing learning processes in human learners, and by extension, in adaptive and AI-assisted learning systems. A central design principle is the explicit separation of descriptive responsibilities across layers, distinguishing load generation, internal understanding transformation, observation, and evaluation. Within this structure, cognitive load is treated as a relational quantity arising from interactions between external input and internal organization, while subjective evaluation is modeled as a minimal regulatory interface responding to learning dynamics and environmental conditions. 
By emphasizing descriptive clarity and extensibility, the framework provides a common language for organizing existing theories and supporting future empirical and theoretical work. + oai:arXiv.org:2512.18525v1
 cs.CY
 cs.AI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Miyuki T. Nakata
 
 
 A unified FLAIR hyperintensity segmentation model for various CNS tumor types and acquisition time points
 https://arxiv.org/abs/2512.17566
 arXiv:2512.17566v1 Announce Type: new
-Abstract: T2-weighted fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI) scans are important for diagnosis, treatment planning and monitoring of brain tumors. Depending on the brain tumor type, the FLAIR hyperintensity volume is an important measure to assess the tumor volume or surrounding edema, and an automatic segmentation of this would be useful in the clinic. In this study, around 5000 FLAIR images of various tumor types and acquisition time points from different centers were used to train a unified FLAIR hyperintensity segmentation model using an Attention U-Net architecture. The performance was compared against dataset specific models, and was validated on different tumor types, acquisition time points and against BraTS. The unified model achieved an average Dice score of 88.65\% for pre-operative meningiomas, 80.08% for pre-operative metastasis, 90.92% for pre-operative and 84.60% for post-operative gliomas from BraTS, and 84.47% for pre-operative and 61.27\% for post-operative lower grade gliomas. 
In addition, the results showed that the unified model achieved comparable segmentation performance to the dataset specific models on their respective datasets, and enables generalization across tumor types and acquisition time points, which facilitates the deployment in a clinical setting. The model is integrated into Raidionics, an open-source software for CNS tumor analysis. - oai:arXiv.org:2512.17566v1 + Detection of AI Generated Images Using Combined Uncertainty Measures and Particle Swarm Optimised Rejection Mechanism + https://arxiv.org/abs/2512.18527 + arXiv:2512.18527v1 Announce Type: new +Abstract: As AI-generated images become increasingly photorealistic, distinguishing them from natural images poses a growing challenge. This paper presents a robust detection framework that leverages multiple uncertainty measures to decide whether to trust or reject a model's predictions. We focus on three complementary techniques: Fisher Information, which captures the sensitivity of model parameters to input variations; entropy-based uncertainty from Monte Carlo Dropout, which reflects predictive variability; and predictive variance from a Deep Kernel Learning framework using a Gaussian Process classifier. To integrate these diverse uncertainty signals, Particle Swarm Optimisation is used to learn optimal weightings and determine an adaptive rejection threshold. The model is trained on Stable Diffusion-generated images and evaluated on GLIDE, VQDM, Midjourney, BigGAN, and StyleGAN3, each introducing significant distribution shifts. While standard metrics such as prediction probability and Fisher-based measures perform well in distribution, their effectiveness degrades under shift. In contrast, the Combined Uncertainty measure consistently achieves an incorrect rejection rate of approximately 70 percent on unseen generators, successfully filtering most misclassified AI samples. 
Although the system occasionally rejects correct predictions from newer generators, this conservative behaviour is acceptable, as rejected samples can support retraining. The framework maintains high acceptance of accurate predictions for natural images and in-domain AI data. Under adversarial attacks using FGSM and PGD, the Combined Uncertainty method rejects around 61 percent of successful attacks, while GP-based uncertainty alone achieves up to 80 percent. Overall, the results demonstrate that multi-source uncertainty fusion provides a resilient and adaptive solution for AI-generated image detection. + oai:arXiv.org:2512.18527v1 cs.CV cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Mathilde Gajda Faanes, David Bouget, Asgeir S. Jakola, Timothy R. Smith, Vasileios K. Kavouridis, Francesco Latini, Margret Jensdottir, Peter Milos, Henrietta Nittby Redebrandt, Rickard L. Sj\"oberg, Rupavathana Mahesparan, Lars Kjelsberg Pedersen, Ole Solheim, Ingerid Reinertsen + 10.1038/s41598-025-28572-8 + Rahul Yumlembam, Biju Issac, Nauman Aslam, Eaby Kollonoor Babu, Josh Collyer, Fraser Kennedy - Kinematics-Aware Diffusion Policy with Consistent 3D Observation and Action Space for Whole-Arm Robotic Manipulation - https://arxiv.org/abs/2512.17568 - arXiv:2512.17568v1 Announce Type: new -Abstract: Whole-body control of robotic manipulators with awareness of full-arm kinematics is crucial for many manipulation scenarios involving body collision avoidance or body-object interactions, which makes it insufficient to consider only the end-effector poses in policy learning. The typical approach for whole-arm manipulation is to learn actions in the robot's joint space. 
However, the misalignment between the joint space and actual task space (i.e., 3D space) increases the complexity of policy learning, as generalization in task space requires the policy to intrinsically understand the non-linear arm kinematics, which is difficult to learn from limited demonstrations. To address this issue, this letter proposes a kinematics-aware imitation learning framework with consistent task, observation, and action spaces, all represented in the same 3D space. Specifically, we represent both robot states and actions using a set of 3D points on the arm body, naturally aligned with the 3D point cloud observations. This spatially consistent representation improves the policy's sample efficiency and spatial generalizability while enabling full-body control. Built upon the diffusion policy, we further incorporate kinematics priors into the diffusion processes to guarantee the kinematic feasibility of output actions. The joint angle commands are finally calculated through an optimization-based whole-body inverse kinematics solver for execution. Simulation and real-world experimental results demonstrate higher success rates and stronger spatial generalizability of our approach compared to existing methods in body-aware manipulation policy learning. - oai:arXiv.org:2512.17568v1
 cs.RO
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Kangchen Lv, Mingrui Yu, Yongyi Jia, Chenyu Zhang, Xiang Li
 
 
 WoundNet-Ensemble: A Novel IoMT System Integrating Self-Supervised Deep Learning and Multi-Model Fusion for Automated, High-Accuracy Wound Classification and Healing Progression Monitoring
 https://arxiv.org/abs/2512.18528
 arXiv:2512.18528v1 Announce Type: new
+Abstract: Chronic wounds, including diabetic foot ulcers which affect up to one-third of people with diabetes, impose a substantial clinical and economic burden, with U.S. healthcare costs exceeding 25 billion dollars annually. Current wound assessment remains predominantly subjective, leading to inconsistent classification and delayed interventions. 
We present WoundNet-Ensemble, an Internet of Medical Things system leveraging a novel ensemble of three complementary deep learning architectures: ResNet-50, the self-supervised Vision Transformer DINOv2, and Swin Transformer, for automated classification of six clinically distinct wound types. Our system achieves 99.90 percent ensemble accuracy on a comprehensive dataset of 5,175 wound images spanning diabetic foot ulcers, pressure ulcers, venous ulcers, thermal burns, pilonidal sinus wounds, and fungating malignant tumors. The weighted fusion strategy demonstrates a 3.7 percent improvement over previous state-of-the-art methods. Furthermore, we implement a longitudinal wound healing tracker that computes healing rates, severity scores, and generates clinical alerts. This work demonstrates a robust, accurate, and clinically deployable tool for modernizing wound care through artificial intelligence, addressing critical needs in telemedicine and remote patient monitoring. The implementation and trained models will be made publicly available to support reproducibility. + oai:arXiv.org:2512.18528v1
 cs.CV
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Moses Kiprono
 
 
 Protecting Human Activity Signatures in Compressed IEEE 802.11 CSI Feedback
 https://arxiv.org/abs/2512.18529
 arXiv:2512.18529v1 Announce Type: new
+Abstract: Explicit channel state information (CSI) feedback in IEEE~802.11 conveys \emph{transmit beamforming directions} by reporting quantized Givens rotation and phase angles that parametrize the right-singular subspace of the channel matrix. Because these angles encode fine-grained spatial signatures of the propagation environment, recent work has shown that plaintext CSI feedback can inadvertently reveal user activity, identity, and location to passive eavesdroppers. 
In this work, we introduce a standards-compatible \emph{differentially private (DP) quantization mechanism} that replaces deterministic angular quantization with an $\varepsilon$-DP stochastic quantizer applied directly to the Givens parameters of the transmit beamforming matrix. The mechanism preserves the 802.11 feedback structure, admits closed-form sensitivity bounds for the angular representation, and enables principled privacy calibration. Numerical simulations demonstrate strong privacy guarantees with minimal degradation in beamforming performance. + oai:arXiv.org:2512.18529v1 + cs.IT + cs.CR + cs.NI + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Mohamed Seif, Atsutse Kludze, Yasaman Ghasempour, H. Vincent Poor, Doru Calin, Andrea J. Goldsmith - Bayesian Optimisation: Which Constraints Matter? - https://arxiv.org/abs/2512.17569 - arXiv:2512.17569v1 Announce Type: new -Abstract: Bayesian optimisation has proven to be a powerful tool for expensive global black-box optimisation problems. In this paper, we propose new Bayesian optimisation variants of the popular Knowledge Gradient acquisition functions for problems with \emph{decoupled} black-box constraints, in which subsets of the objective and constraint functions may be evaluated independently. In particular, our methods aim to take into account that often only a handful of the constraints may be binding at the optimum, and hence we should evaluate only relevant constraints when trying to optimise a function. We empirically benchmark these methods against existing methods and demonstrate their superiority over the state-of-the-art. 
- oai:arXiv.org:2512.17569v1 + Generalization Gaps in Political Fake News Detection: An Empirical Study on the LIAR Dataset + https://arxiv.org/abs/2512.18533 + arXiv:2512.18533v1 Announce Type: new +Abstract: The proliferation of linguistically subtle political disinformation poses a significant challenge to automated fact-checking systems. Despite increasing emphasis on complex neural architectures, the empirical limits of text-only linguistic modeling remain underexplored. We present a systematic diagnostic evaluation of nine machine learning algorithms on the LIAR benchmark. By isolating lexical features (Bag-of-Words, TF-IDF) and semantic embeddings (GloVe), we uncover a hard "Performance Ceiling", with fine-grained classification not exceeding a Weighted F1-score of 0.32 across models. Crucially, a simple linear SVM (Accuracy: 0.624) matches the performance of pre-trained Transformers such as RoBERTa (Accuracy: 0.620), suggesting that model capacity is not the primary bottleneck. We further diagnose a massive "Generalization Gap" in tree-based ensembles, which achieve more than 99% training accuracy but collapse to approximately 25% on test data, indicating reliance on lexical memorization rather than semantic inference. Synthetic data augmentation via SMOTE yields no meaningful gains, confirming that the limitation is semantic (feature ambiguity) rather than distributional. These findings indicate that for political fact-checking, increasing model complexity without incorporating external knowledge yields diminishing returns. 
+ oai:arXiv.org:2512.18533v1 + cs.CL cs.LG - math.OC - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Xietao Wang Lin, Juan Ungredda, Max Butler, James Town, Alma Rahat, Hemant Singh, Juergen Branke + S Mahmudul Hasan, Shaily Roy, Akib Jawad Nafis - GreedySnake: Accelerating SSD-Offloaded LLM Training with Efficient Scheduling and Optimizer Step Overlapping - https://arxiv.org/abs/2512.17570 - arXiv:2512.17570v1 Announce Type: new -Abstract: SSD-offloaded training offers a practical and promising approach to making LLM training cost-effective. Building on gradient accumulation with micro-batches, this paper introduces GreedySnake, a new SSD-offloaded training system that employs vertical scheduling, which executes all microbatches of a layer before proceeding to the next. Compared to existing systems that use horizontal scheduling (i.e., executing micro-batches sequentially), GreedySnake achieves higher training throughput with smaller batch sizes, bringing the system much closer to the ideal scenario predicted by the roofline model. To further mitigate the I/O bottleneck, GreedySnake overlaps part of the optimization step with the forward pass of the next iteration. Experimental results on A100 GPUs show that GreedySnake achieves saturated training throughput improvements over ZeRO-Infinity: 1.96x on 1 GPU and 1.93x on 4 GPUs for GPT-65B, and 2.53x on 1 GPU for GPT-175B. The code is open-sourced at https://github.com/npz7yyk/GreedySnake - oai:arXiv.org:2512.17570v1 - cs.LG - cs.AI - cs.PF - Mon, 22 Dec 2025 00:00:00 -0500 + Integrated Control and Communication in LQG Systems + https://arxiv.org/abs/2512.18535 + arXiv:2512.18535v1 Announce Type: new +Abstract: In this paper, we study the Integrated Communication and Control (ICAC) problem. 
Specifically, we investigate how messages can be transmitted from the controller/encoder to the observer/decoder through the control signal in Multiple-Input Multiple-Output (MIMO) vector-state Linear Quadratic Gaussian (LQG) systems under control constraints. We provide a computable capacity expression using semidefinite programming. We further show that it is possible to transmit data at a nonzero rate over an LQG system while maintaining the same optimal control cost as in the case where no information messages are transmitted. Finally, we discuss how this framework generalizes communication over MIMO Gaussian channels with feedback, both with and without InterSymbol Interference (ISI). + oai:arXiv.org:2512.18535v1 + cs.IT + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Sepehr Jahangiri, H. Ali Talebi - RoomEditor++: A Parameter-Sharing Diffusion Architecture for High-Fidelity Furniture Synthesis - https://arxiv.org/abs/2512.17573 - arXiv:2512.17573v1 Announce Type: new -Abstract: Virtual furniture synthesis, which seamlessly integrates reference objects into indoor scenes while maintaining geometric coherence and visual realism, holds substantial promise for home design and e-commerce applications. However, this field remains underexplored due to the scarcity of reproducible benchmarks and the limitations of existing image composition methods in achieving high-fidelity furniture synthesis while preserving background integrity. To overcome these challenges, we first present RoomBench++, a comprehensive and publicly available benchmark dataset tailored for this task. It consists of 112,851 training pairs and 1,832 testing pairs drawn from both real-world indoor videos and realistic home design renderings, thereby supporting robust training and evaluation under practical conditions. 
Then, we propose RoomEditor++, a versatile diffusion-based architecture featuring a parameter-sharing dual diffusion backbone, which is compatible with both U-Net and DiT architectures. This design unifies the feature extraction and inpainting processes for reference and background images. Our in-depth analysis reveals that the parameter-sharing mechanism enforces aligned feature representations, facilitating precise geometric transformations, texture preservation, and seamless integration. Extensive experiments validate that RoomEditor++ is superior to state-of-the-art approaches in terms of quantitative metrics, qualitative assessments, and human preference studies, while highlighting its strong generalization to unseen indoor scenes and general scenes without task-specific fine-tuning. The dataset and source code are available at \url{https://github.com/stonecutter-21/roomeditor}. - oai:arXiv.org:2512.17573v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qilong Wang, Xiaofan Ming, Zhenyi Lin, Jinwen Li, Dongwei Ren, Wangmeng Zuo, Qinghua Hu + Systematic Benchmarking of SUMO Against Data-Driven Traffic Simulators + https://arxiv.org/abs/2512.18537 + arXiv:2512.18537v1 Announce Type: new +Abstract: This paper presents a systematic benchmarking of the model-based microscopic traffic simulator SUMO against state-of-the-art data-driven traffic simulators using large-scale real-world datasets. Using the Waymo Open Motion Dataset (WOMD) and the Waymo Open Sim Agents Challenge (WOSAC), we evaluate SUMO under both short-horizon (8s) and long-horizon (60s) closed-loop simulation settings. To enable scalable evaluation, we develop Waymo2SUMO, an automated pipeline that converts WOMD scenarios into SUMO simulations. On the WOSAC benchmark, SUMO achieves a realism meta metric of 0.653 while requiring fewer than 100 tunable parameters. Extended rollouts show that SUMO maintains low collision and offroad rates and exhibits stronger long-horizon stability than representative data-driven simulators. 
These results highlight complementary strengths of model-based and data-driven approaches for autonomous driving simulation and benchmarking. + oai:arXiv.org:2512.18537v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qilong Wang, Xiaofan Ming, Zhenyi Lin, Jinwen Li, Dongwei Ren, Wangmeng Zuo, Qinghua Hu + Erdao Liang - Enabling Disaggregated Multi-Stage MLLM Inference via GPU-Internal Scheduling and Resource Sharing - https://arxiv.org/abs/2512.17574 - arXiv:2512.17574v1 Announce Type: new -Abstract: Multimodal large language models (MLLMs) extend LLMs with visual understanding through a three-stage pipeline: multimodal preprocessing, vision encoding, and LLM inference. While these stages enhance capability, they introduce significant system bottlenecks. First, multimodal preprocessing-especially video decoding-often dominates Time-to-First-Token (TTFT). Most systems rely on CPU-based decoding, which severely limits throughput, while existing GPU-based approaches prioritize throughput-oriented parallelism and fail to meet the latency-sensitive requirements of MLLM inference. Second, the vision encoder is a standalone, compute-intensive stage that produces visual embeddings and cannot be co-batched with LLM prefill or decoding. This heterogeneity forces inter-stage blocking and increases token-generation latency. Even when deployed on separate GPUs, these stages underutilize available compute and memory resources, reducing overall utilization and constraining system throughput. - To address these challenges, we present FlashCodec and UnifiedServe, two complementary designs that jointly optimize the end-to-end MLLM pipeline. FlashCodec accelerates the multimodal preprocessing stage through collaborative multi-GPU video decoding, reducing decoding latency while preserving high throughput. 
UnifiedServe optimizes the vision-to-text and inference stages by logically decoupling their execution to eliminate inter-stage blocking, yet physically sharing GPU resources to maximize system utilization. By carefully orchestrating execution across stages and minimizing interference, FlashCodec and UnifiedServe together form an end-to-end optimized stack that can serve up to 3.0$\times$ more requests or enforce 1.5$\times$ tighter SLOs, while achieving up to 4.4$\times$ higher throughput compared to state-of-the-art systems. - oai:arXiv.org:2512.17574v1 - cs.DC - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Lingxiao Zhao, Haoran Zhou, Yuezhi Che, Dazhao Cheng + The MEVIR 2 Framework: A Virtue-Informed Moral-Epistemic Model of Human Trust Decisions + https://arxiv.org/abs/2512.18539 + arXiv:2512.18539v1 Announce Type: new +Abstract: The MEVIR 2 framework innovates and improves how we understand trust decisions in our polarized information landscape. Unlike classical models assuming ideal rationality, MEVIR 2 recognizes that human trust emerges from three interacting foundations: how we process evidence procedurally, our character as epistemic agents (virtue theory), and our moral intuitions shaped by both evolutionary cooperation (the MAC model) and cultural values (Extended Moral Foundations Theory). This explains why different people find different authorities, facts, and tradeoffs compelling. + MEVIR 2's key innovation introduces "Truth Tribes" (TTs), stable communities sharing aligned procedural, virtue, and moral epistemic profiles. These aren't mere ideological groups but emergent clusters with internally coherent "trust lattices" that remain mutually unintelligible across tribal boundaries. The framework incorporates distinctions between Truth Bearers and Truth Makers, showing disagreements often stem from fundamentally different views about what aspects of reality can make propositions true. 
+ Case studies on vaccination mandates and climate policy demonstrate how different moral configurations lead people to select different authorities, evidential standards, and trust anchors-constructing separate moral epistemic worlds. The framework reinterprets cognitive biases as failures of epistemic virtue and provides foundations for designing decision support systems that could enhance metacognition, make trust processes transparent, and foster more conscientious reasoning across divided communities. MEVIR 2 thus offers both descriptive power for understanding polarization and normative guidance for bridging epistemic divides. + oai:arXiv.org:2512.18539v1 + cs.CY + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Lingxiao Zhao, Haoran Zhou, Yuezhi Che, Dazhao Cheng + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Daniel Schwabe - Machine Learning for Static and Single-Event Dynamic Complex Network Analysis - https://arxiv.org/abs/2512.17577 - arXiv:2512.17577v1 Announce Type: new -Abstract: The primary objective of this thesis is to develop novel algorithmic approaches for Graph Representation Learning of static and single-event dynamic networks. In such a direction, we focus on the family of Latent Space Models, and more specifically on the Latent Distance Model which naturally conveys important network characteristics such as homophily, transitivity, and the balance theory. Furthermore, this thesis aims to create structural-aware network representations, which lead to hierarchical expressions of network structure, community characterization, the identification of extreme profiles in networks, and impact dynamics quantification in temporal networks. Crucially, the methods presented are designed to define unified learning processes, eliminating the need for heuristics and multi-stage processes like post-processing steps. 
Our aim is to delve into a journey towards unified network embeddings that are both comprehensive and powerful, capable of characterizing network structures and adeptly handling the diverse tasks that graph analysis offers. - oai:arXiv.org:2512.17577v1 + Scaling up Stability: Reinforcement Learning for Distributed Control of Networked Systems in the Space of Stabilizing Policies + https://arxiv.org/abs/2512.18540 + arXiv:2512.18540v1 Announce Type: new +Abstract: We study distributed control of networked systems through reinforcement learning, where neural policies must be simultaneously scalable, expressive and stabilizing. We introduce a policy parameterization that embeds Graph Neural Networks (GNNs) into a Youla-like magnitude-direction parameterization, yielding distributed stochastic controllers that guarantee network-level closed-loop stability by design. The magnitude is implemented as a stable operator consisting of a GNN acting on disturbance feedback, while the direction is a GNN acting on local observations. We prove robustness of the closed loop to perturbations in both the graph topology and model parameters, and show how to integrate our parameterization with Proximal Policy Optimization. Experiments on a multi-agent navigation task show that policies trained on small networks transfer directly to larger ones and unseen network topologies, achieve higher returns and lower variance than a state-of-the-art MARL baseline while preserving stability. 
+ oai:arXiv.org:2512.18540v1 + eess.SY cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + cs.SY + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Nikolaos Nakis + John Cao, Luca Furieri - 3One2: One-step Regression Plus One-step Diffusion for One-hot Modulation in Dual-path Video Snapshot Compressive Imaging - https://arxiv.org/abs/2512.17578 - arXiv:2512.17578v1 Announce Type: new -Abstract: Video snapshot compressive imaging (SCI) captures dynamic scene sequences through a two-dimensional (2D) snapshot, fundamentally relying on optical modulation for hardware compression and the corresponding software reconstruction. While mainstream video SCI using random binary modulation has demonstrated success, it inevitably results in temporal aliasing during compression. One-hot modulation, activating only one sub-frame per pixel, provides a promising solution for achieving perfect temporal decoupling, thereby alleviating issues associated with aliasing. However, no algorithms currently exist to fully exploit this potential. To bridge this gap, we propose an algorithm specifically designed for one-hot masks. First, leveraging the decoupling properties of one-hot modulation, we transform the reconstruction task into a generative video inpainting problem and introduce a stochastic differential equation (SDE) of the forward process that aligns with the hardware compression process. Next, we identify limitations of the pure diffusion method for video SCI and propose a novel framework that combines one-step regression initialization with one-step diffusion refinement. Furthermore, to mitigate the spatial degradation caused by one-hot modulation, we implement a dual optical path at the hardware level, utilizing complementary information from another path to enhance the inpainted video. To our knowledge, this is the first work integrating diffusion into video SCI reconstruction. 
Experiments conducted on synthetic datasets and real scenes demonstrate the effectiveness of our method. - oai:arXiv.org:2512.17578v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + SecureCode v2.0: A Production-Grade Dataset for Training Security-Aware Code Generation Models + https://arxiv.org/abs/2512.18542 + arXiv:2512.18542v1 Announce Type: new +Abstract: AI assistants produce vulnerable code in 45% of security-relevant scenarios, introducing flaws into production systems at scale. Yet existing secure coding datasets fall short. They lack incident grounding, don't provide the scale modern training requires, and miss the operational security context developers need for production deployments. We present SecureCode v2.0, a production-grade dataset of 1,215 security-focused coding examples that passed structural validation and expert security review. Every example ties to actual documented security incidents with CVE references, provides vulnerable and secure implementations, demonstrates concrete attacks, and includes defense-in-depth operational guidance. The dataset covers 11 vulnerability categories (complete OWASP Top 10:2025 plus AI/ML Security Threats) across 11 languages (Python, JavaScript, Java, Go, PHP, C#, TypeScript, Ruby, Rust, Kotlin, and YAML for infrastructure-as-code). + Our quality assurance framework ensures complete incident grounding. Each example includes SIEM integration strategies, infrastructure hardening recommendations (Docker, AppArmor, WAF configurations), and testing approaches using language-appropriate frameworks. The dataset uses a 4-turn conversational structure mirroring actual developer-AI interactions, escalating from basic implementations to advanced security considerations and defense-in-depth guidance. 
+ Our contributions: (1) 1,215 rigorously validated examples split into 989 training, 122 validation, and 104 test sets, (2) an automated validation framework ensuring dataset consistency, (3) a 4-turn conversational structure capturing realistic security workflows, (4) comprehensive operational security guidance with SIEM integration strategies, (5) complete language-specific implementation fidelity, and (6) open-source release of data, validation tools, and benchmarking protocols. + oai:arXiv.org:2512.18542v1 + cs.CR + cs.AI + cs.CL + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Ge Wang, Xing Liu, Xin Yuan + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Scott Thornton - On Using Neural Networks to Learn Safety Speed Reduction in Human-Robot Collaboration: A Comparative Analysis - https://arxiv.org/abs/2512.17579 - arXiv:2512.17579v1 Announce Type: new -Abstract: In Human-Robot Collaboration, safety mechanisms such as Speed and Separation Monitoring and Power and Force Limitation dynamically adjust the robot's speed based on human proximity. While essential for risk reduction, these mechanisms introduce slowdowns that makes cycle time estimation a hard task and impact job scheduling efficiency. Existing methods for estimating cycle times or designing schedulers often rely on predefined safety models, which may not accurately reflect real-world safety implementations, as these depend on case-specific risk assessments. In this paper, we propose a deep learning approach to predict the robot's safety scaling factor directly from process execution data. We analyze multiple neural network architectures and demonstrate that a simple feed-forward network effectively estimates the robot's slowdown. This capability is crucial for improving cycle time predictions and designing more effective scheduling algorithms in collaborative robotic environments. 
- oai:arXiv.org:2512.17579v1 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 + LLMs on Drugs: Language Models Are Few-Shot Consumers + https://arxiv.org/abs/2512.18546 + arXiv:2512.18546v1 Announce Type: new +Abstract: Large language models (LLMs) are sensitive to the personas imposed on them at inference time, yet prompt-level "drug" interventions have never been benchmarked rigorously. We present the first controlled study of psychoactive framings on GPT-5-mini using ARC-Challenge. Four single-sentence prompts -- LSD, cocaine, alcohol, and cannabis -- are compared against a sober control across 100 validation items per condition, with deterministic decoding, full logging, Wilson confidence intervals, and Fisher exact tests. Control accuracy is 0.45; alcohol collapses to 0.10 (p = 3.2e-8), cocaine to 0.21 (p = 4.9e-4), LSD to 0.19 (p = 1.3e-4), and cannabis to 0.30 (p = 0.041), largely because persona prompts disrupt the mandated "Answer: <LETTER>" template. Persona text therefore behaves like a "few-shot consumable" that can destroy reliability without touching model weights. All experimental code, raw results, and analysis scripts are available at https://github.com/lexdoudkin/llms-on-drugs. + oai:arXiv.org:2512.18546v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Marco Faroni, Alessio Span\`o, Andrea M. Zanchettin, Paolo Rocco + Alexander Doudkin - Medical Imaging AI Competitions Lack Fairness - https://arxiv.org/abs/2512.17581 - arXiv:2512.17581v1 Announce Type: new -Abstract: Benchmarking competitions are central to the development of artificial intelligence (AI) in medical imaging, defining performance standards and shaping methodological progress. However, it remains unclear whether these benchmarks provide data that are sufficiently representative, accessible, and reusable to support clinically meaningful AI. 
In this work, we assess fairness along two complementary dimensions: (1) whether challenge datasets are representative of real-world clinical diversity, and (2) whether they are accessible and legally reusable in line with the FAIR principles. To address this question, we conducted a large-scale systematic study of 241 biomedical image analysis challenges comprising 458 tasks across 19 imaging modalities. Our findings show substantial biases in dataset composition, including geographic location, modality-, and problem type-related biases, indicating that current benchmarks do not adequately reflect real-world clinical diversity. Despite their widespread influence, challenge datasets were frequently constrained by restrictive or ambiguous access conditions, inconsistent or non-compliant licensing practices, and incomplete documentation, limiting reproducibility and long-term reuse. Together, these shortcomings expose foundational fairness limitations in our benchmarking ecosystem and highlight a disconnect between leaderboard success and clinical relevance. - oai:arXiv.org:2512.17581v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + ShibuyaSocial: Multi-scale Model of Pedestrian Flows in Scramble Crossing + https://arxiv.org/abs/2512.18550 + arXiv:2512.18550v1 Announce Type: new +Abstract: This paper presents a learning-based model of pedestrian flows that integrates multi scale behaviors such as global route selection and local collision avoidance in urban spaces, particularly focusing on pedestrian movements at Shibuya scramble crossing. Since too much congestion of pedestrian flows can cause serious accidents, mathematically modeling and predicting pedestrian behaviors is important for preventing such accidents and providing a safe and comfortable environment. Although numerous studies have investigated learning-based modeling methods, most of them focus only on the local behavior of pedestrians, such as collision avoidance with neighbors and environmental objects. 
In an actual environment, pedestrian behavior involves more complicated decision making including global route selection. Moreover, a state transition from stopping to walking at a traffic light should be considered simultaneously. In this study, the proposed model integrates local behaviors with global route selection, using an Attention mechanism to ensure consistent global and local behavior predictions. We recorded video data of pedestrians at Shibuya scramble crossing and trained the proposed model using pedestrian walking trajectory data obtained from the video. Simulations of pedestrian behaviors based on the trained model qualitatively and quantitatively validated that the proposed model can appropriately predict pedestrian behaviors. + oai:arXiv.org:2512.18550v1 + cs.MA + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Annika Reinke, Evangelia Christodoulou, Sthuthi Sadananda, A. Emre Kavur, Khrystyna Faryna, Daan Schouten, Bennett A. Landman, Carole Sudre, Olivier Colliot, Nick Heller, Sophie Loizillon, Martin Ma\v{s}ka, Ma\"elys Solal, Arya Yazdan-Panah, Vilma Bozgo, \"Omer S\"umer, Siem de Jong, Sophie Fischer, Michal Kozubek, Tim R\"adsch, Nadim Hammoud, Fruzsina Moln\'ar-G\'abor, Steven Hicks, Michael A. Riegler, Anindo Saha, Vajira Thambawita, Pal Halvorsen, Amelia Jim\'enez-S\'anchez, Qingyang Yang, Veronika Cheplygina, Sabrina Bottazzi, Alexander Seitel, Spyridon Bakas, Alexandros Karargyris, Kiran Vaidhya Venkadesh, Bram van Ginneken, Lena Maier-Hein + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Akihiro Sakurai, Naoya Kajio, Ko Yamamoto - Optimized Scheduling and Positioning of Mobile Manipulators in Collaborative Applications - https://arxiv.org/abs/2512.17584 - arXiv:2512.17584v1 Announce Type: new -Abstract: The growing integration of mobile robots in shared workspaces requires efficient path planning and coordination between the agents, accounting for safety and productivity. 
In this work, we propose a digital model-based optimization framework for mobile manipulators in human-robot collaborative environments, in order to determine the sequence of robot base poses and the task scheduling for the robot. The complete problem is treated as black-box, and Particle Swarm Optimization (PSO) is employed to balance conflicting Key-Performance Indicators (KPIs). We demonstrate improvements in cycle time, task sequencing, and adaptation to human presence in a collaborative box-packing scenario. - oai:arXiv.org:2512.17584v1 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 + Neologism Learning as a Parameter-Efficient Alternative to Fine-Tuning for Model Steering + https://arxiv.org/abs/2512.18551 + arXiv:2512.18551v1 Announce Type: new +Abstract: In language modeling, neologisms are new tokens trained to represent a concept not already included in a given model's vocabulary. Neologisms can be used to encourage specific behavior in models, for example by appending prompts with "Give me a neologism answer." Behavioral steering can also be achieved through fine-tuning, albeit with more compute and less flexibility: learning a neologism only trains d parameters and allows the user to still access the model's default behavior. We compare the performance of neologism learning against low-rank adaptation (LoRA) fine-tuning, finding that neologisms outperform fine-tuned models under a matched training setup (same data and hyperparameters). We also investigate self-verbalizations of neologisms, and observe that the model will occasionally make up its own new words when asked about a neologism. 
+ oai:arXiv.org:2512.18551v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Christian Cella, Sole Ester Sonnino, Marco Faroni, Andrea Zanchettin, Paolo Rocco + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Sungjoon Park, Varun Ramamurthi, Owen Terry - Learning Safe Autonomous Driving Policies Using Predictive Safety Representations - https://arxiv.org/abs/2512.17586 - arXiv:2512.17586v1 Announce Type: new -Abstract: Safe reinforcement learning (SafeRL) is a prominent paradigm for autonomous driving, where agents are required to optimize performance under strict safety requirements. This dual objective creates a fundamental tension, as overly conservative policies limit driving efficiency while aggressive exploration risks safety violations. The Safety Representations for Safer Policy Learning (SRPL) framework addresses this challenge by equipping agents with a predictive model of future constraint violations and has shown promise in controlled environments. This paper investigates whether SRPL extends to real-world autonomous driving scenarios. Systematic experiments on the Waymo Open Motion Dataset (WOMD) and NuPlan demonstrate that SRPL can improve the reward-safety tradeoff, achieving statistically significant improvements in success rate (effect sizes r = 0.65-0.86) and cost reduction (effect sizes r = 0.70-0.83), with p < 0.05 for observed improvements. However, its effectiveness depends on the underlying policy optimizer and the dataset distribution. The results further show that predictive safety representations play a critical role in improving robustness to observation noise. Additionally, in zero-shot cross-dataset evaluation, SRPL-augmented agents demonstrate improved generalization compared to non-SRPL methods. These findings collectively demonstrate the potential of predictive safety representations to strengthen SafeRL for autonomous driving. 
- oai:arXiv.org:2512.17586v1 + Toward Training Superintelligent Software Agents through Self-Play SWE-RL + https://arxiv.org/abs/2512.18552 + arXiv:2512.18552v1 Announce Type: new +Abstract: While current software agents powered by large language models (LLMs) and agentic reinforcement learning (RL) can boost programmer productivity, their training data (e.g., GitHub issues and pull requests) and environments (e.g., pass-to-pass and fail-to-pass tests) heavily depend on human knowledge or curation, posing a fundamental barrier to superintelligence. In this paper, we present Self-play SWE-RL (SSR), a first step toward training paradigms for superintelligent software agents. Our approach takes minimal data assumptions, only requiring access to sandboxed repositories with source code and installed dependencies, with no need for human-labeled issues or tests. Grounded in these real-world codebases, a single LLM agent is trained via reinforcement learning in a self-play setting to iteratively inject and repair software bugs of increasing complexity, with each bug formally specified by a test patch rather than a natural language issue description. On the SWE-bench Verified and SWE-Bench Pro benchmarks, SSR achieves notable self-improvement (+10.4 and +7.8 points, respectively) and consistently outperforms the human-data baseline over the entire training trajectory, despite being evaluated on natural language issues absent from self-play. Our results, albeit early, suggest a path where agents autonomously gather extensive learning experiences from real-world software repositories, ultimately enabling superintelligent systems that exceed human capabilities in understanding how systems are constructed, solving novel challenges, and autonomously creating new software from scratch. 
+ oai:arXiv.org:2512.18552v1 + cs.SE + cs.AI + cs.CL cs.LG - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Mahesh Keswani, Raunak Bhattacharyya + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yuxiang Wei, Zhiqing Sun, Emily McMilin, Jonas Gehring, David Zhang, Gabriel Synnaeve, Daniel Fried, Lingming Zhang, Sida Wang - Torrent: A Distributed DMA for Efficient and Flexible Point-to-Multipoint Data Movement - https://arxiv.org/abs/2512.17589 - arXiv:2512.17589v1 Announce Type: new -Abstract: The growing disparity between computational power and on-chip communication bandwidth is a critical bottleneck in modern Systems-on-Chip (SoCs), especially for data-parallel workloads like AI. Efficient point-to-multipoint (P2MP) data movement, such as multicast, is essential for high performance. However, native multicast support is lacking in standard interconnect protocols. Existing P2MP solutions, such as multicast-capable Network-on-Chip (NoC), impose additional overhead to the network hardware and require modifications to the interconnect protocol, compromising scalability and compatibility. - This paper introduces Torrent, a novel distributed DMA architecture that enables efficient P2MP data transfers without modifying NoC hardware and interconnect protocol. Torrent conducts P2MP data transfers by forming logical chains over the NoC, where the data traverses through targeted destinations resembling a linked list. This Chainwrite mechanism preserves the P2P nature of every data transfer while enabling flexible data transfers to an unlimited number of destinations. To optimize the performance and energy consumption of Chainwrite, two scheduling algorithms are developed to determine the optimal chain order based on NoC topology. 
Our RTL and FPGA prototype evaluations using both synthetic and real workloads demonstrate significant advantages in performance, flexibility, and scalability over network-layer multicast. Compared to the unicast baseline, Torrent achieves up to a 7.88x speedup. ASIC synthesis on 16nm technology confirms the architecture's minimal footprint in area (1.2%) and power (2.3%). Thanks to the Chainwrite, Torrent delivers scalable P2MP data transfers with a small cycle overhead of 82CC and area overhead of 207um2 per destination. - oai:arXiv.org:2512.17589v1 - cs.AR - cs.DC - Mon, 22 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Yunhao Deng, Fanchen Kong, Xiaoling Yi, Ryan Antonio, Marian Verhelst + Hierarchical Bayesian Framework for Multisource Domain Adaptation + https://arxiv.org/abs/2512.18553 + arXiv:2512.18553v1 Announce Type: new +Abstract: Multisource domain adaptation (MDA) aims to use multiple source datasets with available labels to infer labels on a target dataset without available labels for target supervision. Prior work on MDA in the literature is ad hoc, as the pretraining of source models is either based on weight sharing or uses independently trained models. This work proposes a Bayesian framework for pretraining in MDA by considering that the distributions of different source domains are typically similar. The Hierarchical Bayesian Framework uses similarity between the different source data distributions to optimize the pretraining for MDA. Experiments using the proposed Bayesian framework for MDA show that our framework improves accuracy on recognition tasks for a large benchmark dataset. Performance comparison with state-of-the-art MDA methods on the challenging problem of human action recognition in multi-domain benchmark Daily-DA RGB video shows the proposed Bayesian Framework offers a 17.29% improvement in accuracy when compared to the state-of-the-art methods in the literature. 
+ oai:arXiv.org:2512.18553v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Yunhao Deng, Fanchen Kong, Xiaoling Yi, Ryan Antonio, Marian Verhelst + Alexander M. Glandon, Khan M. Iftekharuddin - Digital Bricolage: Design Speculations for Embodied Approaches to Digitized Print-based Cultural Collections - https://arxiv.org/abs/2512.17590 - arXiv:2512.17590v1 Announce Type: new -Abstract: COVID-related closures of public and academic libraries have underlined the importance of online platforms that provide access to digitized print-based collections. However, they also have highlighted the value of in-person handling of print artefacts for sensing and making sense of them. How do existing dominant digital platforms invite and/or discourage embodied forms of exploration and sense-making? What opportunities for embodied experience might we discover if we embrace the material qualities of print-based collections when designing interfaces for digital access? In this paper, we present findings from a speculative exercise where we invited creative professionals and experts in curating and handling access to collections to reflect on existing approaches to digitized print-based collections and to speculate about alternative design opportunities and modes of engagement. We argue for digital bricolage-a design approach that values working with materials that are "on hand" and embracing our ability to "handle" them in ways that foster both casual and curious exploration. - oai:arXiv.org:2512.17590v1 - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 + Enhancing Medical Large Vision-Language Models via Alignment Distillation + https://arxiv.org/abs/2512.18554 + arXiv:2512.18554v1 Announce Type: new +Abstract: Medical Large Vision-Language Models (Med-LVLMs) have shown promising results in clinical applications, but often suffer from hallucinated outputs due to misaligned visual understanding. 
In this work, we identify two fundamental limitations contributing to this issue: insufficient visual representation learning and poor visual attention alignment. To address these problems, we propose MEDALIGN, a simple, lightweight alignment distillation framework that transfers visual alignment knowledge from a domain-specific Contrastive Language-Image Pre-training (CLIP) model to Med-LVLMs. MEDALIGN introduces two distillation losses: a spatial-aware visual alignment loss based on visual token-level similarity structures, and an attention-aware distillation loss that guides attention toward diagnostically relevant regions. Extensive experiments on medical report generation and medical visual question answering (VQA) benchmarks show that MEDALIGN consistently improves both performance and interpretability, yielding more visually grounded outputs. + oai:arXiv.org:2512.18554v1 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Presented at the 9th Workshop on Visualization for the Digital Humanities at the IEEE VIS Conference 2025 - Malak Sadek, Loraine Clarke, Stefania Forlini, Uta Hinrichs + Aofei Chang, Ting Wang, Fenglong Ma - Sharing Knowledge without Sharing Data: Stitches can improve ensembles of disjointly trained models - https://arxiv.org/abs/2512.17592 - arXiv:2512.17592v1 Announce Type: new -Abstract: Deep learning has been shown to be very capable at performing many real-world tasks. However, this performance is often dependent on the presence of large and varied datasets. In some settings, like in the medical domain, data is often fragmented across parties, and cannot be readily shared. While federated learning addresses this situation, it is a solution that requires synchronicity of parties training a single model together, exchanging information about model weights. We investigate how asynchronous collaboration, where only already trained models are shared (e.g. 
as part of a publication), affects performance, and propose to use stitching as a method for combining models.
- Through taking a multi-objective perspective, where performance on each party's data is viewed independently, we find that, when evaluated on a single party's own data, training solely on that party's data performs similarly to training on it merged with another party's data, while performance on other parties' data is notably worse. Moreover, while an ensemble of such individually trained networks generalizes better, performance on each party's own dataset suffers. We find that combining intermediate representations in individually trained models with a well-placed pair of stitching layers allows this performance to recover to a competitive degree while maintaining improved generalization, showing that asynchronous collaboration can yield competitive results.
- oai:arXiv.org:2512.17592v1
- cs.LG
- Mon, 22 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
Arthur Guijt, Dirk Thierens, Ellen Kerkhof, Jan Wiersma, Tanja Alderliesten, Peter A. N. Bosman
+ Distributionally Robust Multi-Agent Reinforcement Learning for Intelligent Traffic Control
+ https://arxiv.org/abs/2512.18558
+ arXiv:2512.18558v1 Announce Type: new
+Abstract: Learning-based traffic signal control is typically optimized for average performance under a few nominal demand patterns, which can result in poor behavior under atypical traffic conditions. To address this, we develop a distributionally robust multi-agent reinforcement learning framework for signal control on a 3x3 urban grid calibrated from a contiguous 3x3 subarea of central Athens covered by the pNEUMA trajectory dataset (Barmpounakis and Geroliminis, 2020). Our approach proceeds in three stages. First, we train a baseline multi-agent RL controller in which each intersection is governed by a proximal policy optimization agent with discrete signal phases, using a centralized training, decentralized execution paradigm. 
Second, to capture demand uncertainty, we construct eight heterogeneous origin-destination-based traffic scenarios-one directly derived from pNEUMA and seven synthetically generated-to span a wide range of spatial and temporal demand patterns. Over this scenario set, we train a contextual-bandit worst-case estimator that assigns mixture weights to estimate adversarial demand distributions conditioned on context. Finally, without modifying the controller architecture, we fine-tune the baseline multi-agent reinforcement learning agents under these estimated worst-case mixtures to obtain a distributionally robust multi-agent reinforcement learning controller. Across all eight scenarios, as well as on an unseen validation network based on the Sioux Falls configuration, the distributionally robust multi-agent reinforcement learning controller consistently reduces horizon-averaged queues and increases average speeds relative to the baseline, achieving up to 51% shorter queues and 38% higher speeds on the worst-performing scenarios. + oai:arXiv.org:2512.18558v1 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Arthur Guijt, Dirk Thierens, Ellen Kerkhof, Jan Wiersma, Tanja Alderliesten, Peter A. N. Bosman + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Shuwei Pei, Joran Borger, Arda Kosay, Muhammed O. Sayin, Saeed Ahmed - A Unified Representation of Neural Networks Architectures - https://arxiv.org/abs/2512.17593 - arXiv:2512.17593v1 Announce Type: new -Abstract: In this paper we consider the limiting case of neural networks (NNs) architectures when the number of neurons in each hidden layer and the number of hidden layers tend to infinity thus forming a continuum, and we derive approximation errors as a function of the number of neurons and/or hidden layers. 
Firstly, we consider the case of neural networks with a single hidden layer and we derive an integral infinite-width neural representation that generalizes existing representations of continuous neural networks (CNNs). Then we extend this to deep residual CNNs that have a finite number of integral hidden layers and residual connections. Secondly, we revisit the relation between neural ODEs and deep residual NNs and we formalize approximation errors via discretization techniques. Then, we merge these two approaches into a unified homogeneous representation of NNs as a Distributed Parameter neural Network (DiPaNet) and we show that most of the existing finite and infinite-dimensional NN architectures are related via homogenization/discretization with the DiPaNet representation. Our approach is purely deterministic and applies to general, uniformly continuous matrix weight functions. Differences and similarities with neural fields are discussed along with further possible generalizations and applications of the DiPaNet framework.
- oai:arXiv.org:2512.17593v1
- cs.LG
- math.OC
- Mon, 22 Dec 2025 00:00:00 -0500
new
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Christophe Prieur, Mircea Lazar, Bogdan Robu
+ Proof of Authenticity of General IoT Information with Tamper-Evident Sensors and Blockchain
+ https://arxiv.org/abs/2512.18560
+ arXiv:2512.18560v1 Announce Type: new
+Abstract: Sensor data in IoT (Internet of Things) systems is vulnerable to tampering or falsification when transmitted through untrusted services. This is critical because such data increasingly underpins real-world decisions in domains such as logistics, healthcare, and other critical infrastructure. We propose a general method for secure sensor-data logging in which tamper-evident devices periodically sign readouts, link data using redundant hash chains, and submit cryptographic evidence to a blockchain-based service via Merkle trees to ensure verifiability even under data loss. 
Our approach enables reliable and cost-effective validation of sensor data across diverse IoT systems, including disaster response and other humanitarian applications, without relying on the integrity of intermediate systems.
+ oai:arXiv.org:2512.18560v1
+ cs.CR
+ cs.CY
+ Tue, 23 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
Kenji Saito
- MAD-OOD: A Deep Learning Cluster-Driven Framework for an Out-of-Distribution Malware Detection and Classification
- https://arxiv.org/abs/2512.17594
- arXiv:2512.17594v1 Announce Type: new
-Abstract: Out of distribution (OOD) detection remains a critical challenge in malware classification due to the substantial intra family variability introduced by polymorphic and metamorphic malware variants. Most existing deep learning based malware detectors rely on closed world assumptions and fail to adequately model this intra class variation, resulting in degraded performance when confronted with previously unseen malware families. This paper presents MADOOD, a novel two stage, cluster driven deep learning framework for robust OOD malware detection and classification. In the first stage, malware family embeddings are modeled using class conditional spherical decision boundaries derived from Gaussian Discriminant Analysis (GDA), enabling statistically grounded separation of in-distribution and OOD samples without requiring OOD data during training. Z score based distance analysis across multiple class centroids is employed to reliably identify anomalous samples in the latent space. In the second stage, a deep neural network integrates cluster based predictions, refined embeddings, and supervised classifier outputs to enhance final classification accuracy. 
Extensive evaluations on benchmark malware datasets comprising 25 known families and multiple novel OOD variants demonstrate that MADOOD significantly outperforms state of the art OOD detection methods, achieving an AUC of up to 0.911 on unseen malware families. The proposed framework provides a scalable, interpretable, and statistically principled solution for real world malware detection and anomaly identification in evolving cybersecurity environments. - oai:arXiv.org:2512.17594v1 - cs.CR + Adaptive Accountability in Networked MAS: Tracing and Mitigating Emergent Norms at Scale + https://arxiv.org/abs/2512.18561 + arXiv:2512.18561v1 Announce Type: new +Abstract: Large-scale networked multi-agent systems increasingly underpin critical infrastructure, yet their collective behavior can drift toward undesirable emergent norms that elude conventional governance mechanisms. We introduce an adaptive accountability framework that (i) continuously traces responsibility flows through a lifecycle-aware audit ledger, (ii) detects harmful emergent norms online via decentralized sequential hypothesis tests, and (iii) deploys local policy and reward-shaping interventions that realign agents with system-level objectives in near real time. We prove a bounded-compromise theorem showing that whenever the expected intervention cost exceeds an adversary's payoff, the long-run proportion of compromised interactions is bounded by a constant strictly less than one. Extensive high-performance simulations with up to 100 heterogeneous agents, partial observability, and stochastic communication graphs show that our framework prevents collusion and resource hoarding in at least 90% of configurations, boosts average collective reward by 12-18%, and lowers the Gini inequality index by up to 33% relative to a PPO baseline. 
These results demonstrate that a theoretically principled accountability layer can induce ethically aligned, self-regulating behavior in complex MAS without sacrificing performance or scalability.
+ oai:arXiv.org:2512.18561v1
cs.MA
cs.AI
Tue, 23 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by-sa/4.0/
Saad Alqithami
+ OpenView: Empowering MLLMs with Out-of-view VQA
+ https://arxiv.org/abs/2512.18563
+ arXiv:2512.18563v1 Announce Type: new
+Abstract: Recent multimodal large language models (MLLMs) show great potential in natural image understanding. Yet they perform well mainly when reasoning about in-view content within the image frame. This paper presents the first study on out-of-view (OOV) understanding, i.e., the ability to reason about objects, activities, and scenes beyond the visible frame of a perspective view. Our technical contributions are threefold. First, we design OpenView, a four-stage pipeline to massively generate multi-choice VQA by leveraging panoramic imagery to enable context-rich and spatial-grounded VQA synthesis with free-view framing. Second, we curate OpenView-Dataset, a high-quality synthetic dataset from diverse real-world panoramas to empower MLLMs upon supervised fine-tuning. Third, we build OpenView-Bench, a benchmark that jointly measures choice and rationale accuracy for interpretable and diagnosable evaluation. Experimental results show that although a large gap from human performance remains in OOV VQA answer selection, once empowered by OpenView, multiple MLLMs consistently boost their performance, from 48.6% to 64.1% on average. Code, benchmark, and data will be available at https://github.com/q1xiangchen/OpenView. 
+ oai:arXiv.org:2512.18563v1 cs.CV - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Tosin Ige, Christopher Kiekintveld, Aritran Piplai, Asif Rahman, Olukunle Kolade, Sasidhar Kunapuli + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Qixiang Chen, Cheng Zhang, Chi-Wing Fu, Jingwen Ye, Jianfei Cai - A Systems-Theoretic View on the Convergence of Algorithms under Disturbances - https://arxiv.org/abs/2512.17598 - arXiv:2512.17598v1 Announce Type: new -Abstract: Algorithms increasingly operate within complex physical, social, and engineering systems where they are exposed to disturbances, noise, and interconnections with other dynamical systems. This article extends known convergence guarantees of an algorithm operating in isolation (i.e., without disturbances) and systematically derives stability bounds and convergence rates in the presence of such disturbances. By leveraging converse Lyapunov theorems, we derive key inequalities that quantify the impact of disturbances. We further demonstrate how our result can be utilized to assess the effects of disturbances on algorithmic performance in a wide variety of applications, including communication constraints in distributed learning, sensitivity in machine learning generalization, and intentional noise injection for privacy. This underpins the role of our result as a unifying tool for algorithm analysis in the presence of noise, disturbances, and interconnections with other dynamical systems. 
- oai:arXiv.org:2512.17598v1 - cs.LG - math.OC - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 + Vox Deorum: A Hybrid LLM Architecture for 4X / Grand Strategy Game AI -- Lessons from Civilization V + https://arxiv.org/abs/2512.18564 + arXiv:2512.18564v1 Announce Type: new +Abstract: Large Language Models' capacity to reason in natural language makes them uniquely promising for 4X and grand strategy games, enabling more natural human-AI gameplay interactions such as collaboration and negotiation. However, these games present unique challenges due to their complexity and long-horizon nature, while latency and cost factors may hinder LLMs' real-world deployment. Working on a classic 4X strategy game, Sid Meier's Civilization V with the Vox Populi mod, we introduce Vox Deorum, a hybrid LLM+X architecture. Our layered technical design empowers LLMs to handle macro-strategic reasoning, delegating tactical execution to subsystems (e.g., algorithmic AI or reinforcement learning AI in the future). We validate our approach through 2,327 complete games, comparing two open-source LLMs with a simple prompt against Vox Populi's enhanced AI. Results show that LLMs achieve competitive end-to-end gameplay while exhibiting play styles that diverge substantially from algorithmic AI and from each other. Our work establishes a viable architecture for integrating LLMs in commercial 4X games, opening new opportunities for game design and agentic AI research. 
+ oai:arXiv.org:2512.18564v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-sa/4.0/ - Guner Dilsad Er, Sebastian Trimpe, Michael Muehlebach + John Chen, Sihan Cheng, Can Gurkan, Ryan Lay, Moez Salahuddin - STAMP/STPA informed characterization of Factors Leading to Loss of Control in AI Systems - https://arxiv.org/abs/2512.17600 - arXiv:2512.17600v1 Announce Type: new -Abstract: A major concern amongst AI safety practitioners is the possibility of loss of control, whereby humans lose the ability to exert control over increasingly advanced AI systems. The range of concerns is wide, spanning current day risks to future existential risks, and a range of loss of control pathways from rapid AI self-exfiltration scenarios to more gradual disempowerment scenarios. In this work we set out to firstly, provide a more structured framework for discussing and characterizing loss of control and secondly, to use this framework to assist those responsible for the safe operation of AI-containing socio-technical systems to identify causal factors leading to loss of control. We explore how these two needs can be better met by making use of a methodology developed within the safety-critical systems community known as STAMP and its associated hazard analysis technique of STPA. We select the STAMP methodology primarily because it is based around a world-view that socio-technical systems can be functionally modeled as control structures, and that safety issues arise when there is a loss of control in these structures. - oai:arXiv.org:2512.17600v1 - cs.CY - Mon, 22 Dec 2025 00:00:00 -0500 + Comparing Dynamical Models Through Diffeomorphic Vector Field Alignment + https://arxiv.org/abs/2512.18566 + arXiv:2512.18566v1 Announce Type: new +Abstract: Dynamical systems models such as recurrent neural networks (RNNs) are increasingly popular in theoretical neuroscience for hypothesis-generation and data analysis. 
Evaluating the dynamics in such models is key to understanding their learned generative mechanisms. However, such evaluation is impeded by two major challenges: First, comparison of learned dynamics across models is difficult because there is no enforced equivalence of their coordinate systems. Second, identification of mechanistically important low-dimensional motifs (e.g., limit sets) is intractable in high-dimensional nonlinear models such as RNNs. Here, we propose a comprehensive framework to address these two issues, termed Diffeomorphic vector field alignment FOR learned Models (DFORM). DFORM learns a nonlinear coordinate transformation between the state spaces of two dynamical systems, which aligns their trajectories in a maximally one-to-one manner. In so doing, DFORM enables an assessment of whether two models exhibit topological equivalence, i.e., similar mechanisms despite differences in coordinate systems. A byproduct of this method is a means to locate dynamical motifs on low-dimensional manifolds embedded within higher-dimensional systems. We verified DFORM's ability to identify linear and nonlinear coordinate transformations using canonical topologically equivalent systems, RNNs, and systems related by nonlinear flows. DFORM was also shown to provide a quantification of similarity between topologically distinct systems. We then demonstrated that DFORM can locate important dynamical motifs including invariant manifolds and saddle limit sets within high-dimensional models. Finally, using a set of RNN models trained on human functional MRI (fMRI) recordings, we illustrated that DFORM can identify limit cycles from high-dimensional data-driven models, which agreed well with prior numerical analysis. + oai:arXiv.org:2512.18566v1 + cs.LG + cs.SY + eess.SY + q-bio.NC + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Steve Barrett, Anna Bruvere, Sean P. 
Fillingham, Catherine Rhodes, Stefano Vergani + Ruiqi Chen (Division of Biology and Biomedical Sciences, Washington University in St. Louis), Giacomo Vedovati (Department of Electrical and Systems Engineering, Washington University in St. Louis), Todd Braver (Department of Psychological and Brain Sciences, Washington University in St. Louis), ShiNung Ching (Department of Electrical and Systems Engineering, Washington University in St. Louis) + + + AI Code in the Wild: Measuring Security Risks and Ecosystem Shifts of AI-Generated Code in Modern Software + https://arxiv.org/abs/2512.18567 + arXiv:2512.18567v1 Announce Type: new +Abstract: Large language models (LLMs) for code generation are becoming integral to modern software development, but their real-world prevalence and security impact remain poorly understood. + We present the first large-scale empirical study of AI-generated code (AIGCode) in the wild. We build a high-precision detection pipeline and a representative benchmark to distinguish AIGCode from human-written code, and apply them to (i) development commits from the top 1,000 GitHub repositories (2022-2025) and (ii) 7,000+ recent CVE-linked code changes. This lets us label commits, files, and functions along a human/AI axis and trace how AIGCode moves through projects and vulnerability life cycles. + Our measurements show three ecological patterns. First, AIGCode is already a substantial fraction of new code, but adoption is structured: AI concentrates in glue code, tests, refactoring, documentation, and other boilerplate, while core logic and security-critical configurations remain mostly human-written. Second, adoption has security consequences: some CWE families are overrepresented in AI-tagged code, and near-identical insecure templates recur across unrelated projects, suggesting "AI-induced vulnerabilities" propagated by shared models rather than shared maintainers. 
Third, in human-AI edit chains, AI introduces high-throughput changes while humans act as security gatekeepers; when review is shallow, AI-introduced defects persist longer, remain exposed on network-accessible surfaces, and spread to more files and repositories. + We will open-source the complete dataset and release analysis artifacts and fine-grained documentation of our methodology and findings. + oai:arXiv.org:2512.18567v1 + cs.SE + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Bin Wang, Wenjie Yu, Yilu Zhong, Hao Yu, Keke Lian, Chaohua Lu, Hongfang Zheng, Dong Zhang, Hui Li - HeadHunt-VAD: Hunting Robust Anomaly-Sensitive Heads in MLLM for Tuning-Free Video Anomaly Detection - https://arxiv.org/abs/2512.17601 - arXiv:2512.17601v1 Announce Type: new -Abstract: Video Anomaly Detection (VAD) aims to locate events that deviate from normal patterns in videos. Traditional approaches often rely on extensive labeled data and incur high computational costs. Recent tuning-free methods based on Multimodal Large Language Models (MLLMs) offer a promising alternative by leveraging their rich world knowledge. However, these methods typically rely on textual outputs, which introduces information loss, exhibits normalcy bias, and suffers from prompt sensitivity, making them insufficient for capturing subtle anomalous cues. To address these constraints, we propose HeadHunt-VAD, a novel tuning-free VAD paradigm that bypasses textual generation by directly hunting robust anomaly-sensitive internal attention heads within the frozen MLLM. Central to our method is a Robust Head Identification module that systematically evaluates all attention heads using a multi-criteria analysis of saliency and stability, identifying a sparse subset of heads that are consistently discriminative across diverse prompts. 
Features from these expert heads are then fed into a lightweight anomaly scorer and a temporal locator, enabling efficient and accurate anomaly detection with interpretable outputs. Extensive experiments show that HeadHunt-VAD achieves state-of-the-art performance among tuning-free methods on two major VAD benchmarks while maintaining high efficiency, validating head-level probing in MLLMs as a powerful and practical solution for real-world anomaly detection.
- oai:arXiv.org:2512.17601v1
cs.CV
Mon, 22 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by-nc-nd/4.0/
Zhaolin Cai, Fan Li, Ziwei Zheng, Haixia Bi, Lijun He
+ ESearch-R1: Learning Cost-Aware MLLM Agents for Interactive Embodied Search via Reinforcement Learning
+ https://arxiv.org/abs/2512.18571
+ arXiv:2512.18571v1 Announce Type: new
+Abstract: Multimodal Large Language Models (MLLMs) have empowered embodied agents with remarkable capabilities in planning and reasoning. However, when facing ambiguous natural language instructions (e.g., "fetch the tool" in a cluttered room), current agents often fail to balance the high cost of physical exploration against the cognitive cost of human interaction. They typically treat disambiguation as a passive perception problem, lacking the strategic reasoning to minimize total task execution costs. To bridge this gap, we propose ESearch-R1, a cost-aware embodied reasoning framework that unifies interactive dialogue (Ask), episodic memory retrieval (GetMemory), and physical navigation (Navigate) into a single decision process. We introduce HC-GRPO (Heterogeneous Cost-Aware Group Relative Policy Optimization). Unlike traditional PPO which relies on a separate value critic, HC-GRPO optimizes the MLLM by sampling groups of reasoning trajectories and reinforcing those that achieve the optimal trade-off between information gain and heterogeneous costs (e.g., navigation time and human attention). Extensive experiments in AI2-THOR demonstrate that ESearch-R1 significantly outperforms standard ReAct-based agents. 
It improves task success rates while reducing total operational costs by approximately 50%, validating the effectiveness of GRPO in aligning MLLM agents with physical world constraints.
+ oai:arXiv.org:2512.18571v1
+ cs.AI
cs.CV
Tue, 23 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
Weijie Zhou, Xuangtang Xiong, Ye Tian, Lijun Yue, Xinyu Wu, Wei Li, Chaoyang Zhao, Honghui Dong, Ming Tang, Jinqiao Wang, Zhengyou Zhang
- Sandwiched and Silent: Behavioral Adaptation and Private Channel Exploitation in Ethereum MEV
- https://arxiv.org/abs/2512.17602
- arXiv:2512.17602v1 Announce Type: new
-Abstract: How users adapt after being sandwiched remains unclear; this paper provides an empirical quantification. Using transaction level data from November 2024 to February 2025, enriched with mempool visibility and ZeroMEV labels, we track user outcomes after their n-th public sandwich: (i) reactivation, i.e., the resumption of on-chain activity within a 60-day window, and (ii) first-time adoption of private routing. We refer to users who do not reactivate within this window as churned, and to users experiencing multiple attacks (n>1) as undergoing repeated exposure. Our analysis reveals measurable behavioral adaptation: around 40% of victims migrate to private routing within 60 days, rising to 54% with repeated exposures. Churn peaks at 7.5% after the first sandwich but declines to 1-2%, consistent with survivor bias. In Nov-Dec 2024 we confirm 2,932 private sandwich attacks affecting 3,126 private victim transactions, producing $409,236 in losses and $293,786 in attacker profits. A single bot accounts for nearly two-thirds of private frontruns, and private sandwich activity is heavily concentrated on a small set of DEX pools. 
These results highlight that private routing does not guarantee protection from MEV extraction: while execution failures push users toward private channels, these remain exploitable and highly concentrated, demanding continuous monitoring and protocol-level defenses. - oai:arXiv.org:2512.17602v1 - cs.CR - cs.CE - Mon, 22 Dec 2025 00:00:00 -0500 + Placenta Accreta Spectrum Detection Using an MRI-based Hybrid CNN-Transformer Model + https://arxiv.org/abs/2512.18573 + arXiv:2512.18573v1 Announce Type: new +Abstract: Placenta Accreta Spectrum (PAS) is a serious obstetric condition that can be challenging to diagnose with Magnetic Resonance Imaging (MRI) due to variability in radiologists' interpretations. To overcome this challenge, a hybrid 3D deep learning model for automated PAS detection from volumetric MRI scans is proposed in this study. The model integrates a 3D DenseNet121 to capture local features and a 3D Vision Transformer (ViT) to model global spatial context. It was developed and evaluated on a retrospective dataset of 1,133 MRI volumes. Multiple 3D deep learning architectures were also evaluated for comparison. On an independent test set, the DenseNet121-ViT model achieved the highest performance with a five-run average accuracy of 84.3%. These results highlight the strength of hybrid CNN-Transformer models as a computer-aided diagnosis tool. The model's performance demonstrates a clear potential to assist radiologists by providing a robust decision support to improve diagnostic consistency across interpretations, and ultimately enhance the accuracy and timeliness of PAS diagnosis. 
+ oai:arXiv.org:2512.18573v1 + cs.CV + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Davide Mancino, Davide Rezzoli + Sumaiya Ali, Areej Alhothali, Ohoud Alzamzami, Sameera Albasri, Ahmed Abduljabbar, Muhammad Alwazzan - Locally-APN Binomials with Low Boomerang Uniformity in Odd Characteristic - https://arxiv.org/abs/2512.17603 - arXiv:2512.17603v1 Announce Type: new -Abstract: Recently, several studies have shown that when $q\equiv3\pmod{4}$, the function $F_r(x)=x^r+x^{r+\frac{q-1}{2}}$ defined over $\mathbb{F}_q$ is locally-APN and has boomerang uniformity at most~$2$. In this paper, we extend these results by showing that if there is at most one $x\in \mathbb{F}_q$ with $\chi(x)=\chi(x+1)=1$ satisfying $(x+1)^r - x^r = b$ for all $b\in \mathbb{F}_q^*$ and $\gcd(r,q-1)\mid 2$, then $F_r$ is locally-APN with boomerang uniformity at most $2$. Moreover, we study the differential spectra of $F_3$ and $F_{\frac{2q-1}{3}}$, and the boomerang spectrum of $F_2$ when $p=3$. - oai:arXiv.org:2512.17603v1 - cs.IT - math.IT - math.NT - Mon, 22 Dec 2025 00:00:00 -0500 + Modality-Dependent Memory Mechanisms in Cross-Modal Neuromorphic Computing + https://arxiv.org/abs/2512.18575 + arXiv:2512.18575v1 Announce Type: new +Abstract: Memory-augmented spiking neural networks (SNNs) promise energy-efficient neuromorphic computing, yet their generalization across sensory modalities remains unexplored. We present the first comprehensive cross-modal ablation study of memory mechanisms in SNNs, evaluating Hopfield networks, Hierarchical Gated Recurrent Networks (HGRNs), and supervised contrastive learning (SCL) across visual (N-MNIST) and auditory (SHD) neuromorphic datasets. 
Our systematic evaluation of five architectures reveals striking modality-dependent performance patterns: Hopfield networks achieve 97.68% accuracy on visual tasks but only 76.15% on auditory tasks (21.53 point gap), revealing severe modality-specific specialization, while SCL demonstrates more balanced cross-modal performance (96.72% visual, 82.16% audio, 14.56 point gap). These findings establish that memory mechanisms exhibit task-specific benefits rather than universal applicability. Joint multi-modal training with HGRN achieves 94.41% visual and 79.37% audio accuracy (88.78% average), matching parallel HGRN performance through unified deployment. Quantitative engram analysis confirms weak cross-modal alignment (0.038 similarity), validating our parallel architecture design. Our work provides the first empirical evidence for modality-specific memory optimization in neuromorphic systems, achieving 603x energy efficiency over traditional neural networks. + oai:arXiv.org:2512.18575v1 + cs.LG + cs.AI + cs.NE + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Namhun Koo, Soonhak Kwon, Minwoo Ko, Byunguk Kim + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Effiong Blessing, Chiung-Yi Tseng, Somshubhra Roy, Junaid Rehman, Isaac Nkrumah - Comparing the Fairness of Recursively Balanced Picking Sequences - https://arxiv.org/abs/2512.17604 - arXiv:2512.17604v1 Announce Type: new -Abstract: Picking sequences are well-established methods for allocating indivisible goods. Among the various picking sequences, recursively balanced picking sequences -- whereby each agent picks one good in every round -- are notable for guaranteeing allocations that satisfy envy-freeness up to one good. In this paper, we compare the fairness of different recursively balanced picking sequences using two key measures. Firstly, we demonstrate that all such sequences have the same price in terms of egalitarian welfare relative to other picking sequences. 
Secondly, we characterize the approximate maximin share (MMS) guarantees of these sequences. In particular, we show that compensating the agent who picks last in the first round by letting her pick first in every subsequent round yields the best MMS guarantee.
 - oai:arXiv.org:2512.17604v1
 - cs.GT
 - Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Karen Frilya Celine, Warut Suksompong, Sheung Man Yuen
 + Wireless Copilot: An AI-Powered Partner for Navigating Next-Generation Wireless Complexity
 + https://arxiv.org/abs/2512.18582
 + arXiv:2512.18582v1 Announce Type: new
+Abstract: The sixth generation (6G) of wireless networks introduces a level of operational complexity that exceeds the limits of traditional automation and manual oversight. This paper introduces the "Wireless Copilot", an AI-powered technical assistant designed to function as a collaborative partner for human network designers, engineers, and operators. We posit that, by integrating Large Language Models (LLMs) with a robust cognitive framework, Wireless Copilot will surpass existing AI tools: it can interact with wireless devices, carrying the user's intentions into the actual network execution process, and thereby translate high-level human intent into precise, optimized, and verifiable network actions. This framework bridges the gap between human expertise and machine-scale complexity, enabling more efficient, intelligent, and trustworthy management of 6G systems. Wireless Copilot will be a novel layer between the wireless infrastructure and the network operators. Moreover, we explore Wireless Copilot's methodology and analyze its application in Low-Altitude Wireless Networks (LAWNets) supporting 6G networking, including network design, configuration, evaluation, and optimization. Additionally, we present a case study on intent-based LAWNets resource allocation, demonstrating its superior adaptability compared to existing approaches. Finally, we outline future research directions toward creating a comprehensive human-AI collaborative ecosystem for the 6G era.
+ oai:arXiv.org:2512.18582v1 + cs.NI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Karen Frilya Celine, Warut Suksompong, Sheung Man Yuen + http://creativecommons.org/publicdomain/zero/1.0/ + Haoxiang Luo, Ruichen Zhang, Yinqiu Liu, Gang Sun, Hongfang Yu, Dusit Niyato, Shiwen Mao, Dong In Kim - MGRegBench: A Novel Benchmark Dataset with Anatomical Landmarks for Mammography Image Registration - https://arxiv.org/abs/2512.17605 - arXiv:2512.17605v1 Announce Type: new -Abstract: Robust mammography registration is essential for clinical applications like tracking disease progression and monitoring longitudinal changes in breast tissue. However, progress has been limited by the absence of public datasets and standardized benchmarks. Existing studies are often not directly comparable, as they use private data and inconsistent evaluation frameworks. To address this, we present MGRegBench, a public benchmark dataset for mammogram registration. It comprises over 5,000 image pairs, with 100 containing manual anatomical landmarks and segmentation masks for rigorous evaluation. This makes MGRegBench one of the largest public 2D registration datasets with manual annotations. Using this resource, we benchmarked diverse registration methods including classical (ANTs), learning-based (VoxelMorph, TransMorph), implicit neural representation (IDIR), a classic mammography-specific approach, and a recent state-of-the-art deep learning method MammoRegNet. The implementations were adapted to this modality from the authors' implementations or re-implemented from scratch. Our contributions are: (1) the first public dataset of this scale with manual landmarks and masks for mammography registration; (2) the first like-for-like comparison of diverse methods on this modality; and (3) an extensive analysis of deep learning-based registration. 
We publicly release our code and data to establish a foundational resource for fair comparisons and catalyze future research. The source code and data are at https://github.com/KourtKardash/MGRegBench. - oai:arXiv.org:2512.17605v1 - cs.CV - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + SD2AIL: Adversarial Imitation Learning from Synthetic Demonstrations via Diffusion Models + https://arxiv.org/abs/2512.18583 + arXiv:2512.18583v1 Announce Type: new +Abstract: Adversarial Imitation Learning (AIL) is a dominant framework in imitation learning that infers rewards from expert demonstrations to guide policy optimization. Although providing more expert demonstrations typically leads to improved performance and greater stability, collecting such demonstrations can be challenging in certain scenarios. Inspired by the success of diffusion models in data generation, we propose SD2AIL, which utilizes synthetic demonstrations via diffusion models. We first employ a diffusion model in the discriminator to generate synthetic demonstrations as pseudo-expert data that augment the expert demonstrations. To selectively replay the most valuable demonstrations from the large pool of (pseudo-) expert demonstrations, we further introduce a prioritized expert demonstration replay strategy (PEDR). The experimental results on simulation tasks demonstrate the effectiveness and robustness of our method. In particular, in the Hopper task, our method achieves an average return of 3441, surpassing the state-of-the-art method by 89. Our code will be available at https://github.com/positron-lpc/SD2AIL. 
+ oai:arXiv.org:2512.18583v1 + cs.LG + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Svetlana Krasnova, Emiliya Starikova, Ilia Naletov, Andrey Krylov, Dmitry Sorokin + Pengcheng Li, Qiang Fang, Tong Zhao, Yixing Lan, Xin Xu - More Consistent Accuracy PINN via Alternating Easy-Hard Training - https://arxiv.org/abs/2512.17607 - arXiv:2512.17607v1 Announce Type: new -Abstract: Physics-informed neural networks (PINNs) have recently emerged as a prominent paradigm for solving partial differential equations (PDEs), yet their training strategies remain underexplored. While hard prioritization methods inspired by finite element methods are widely adopted, recent research suggests that easy prioritization can also be effective. Nevertheless, we find that both approaches exhibit notable trade-offs and inconsistent performance across PDE types. To address this issue, we develop a hybrid strategy that combines the strengths of hard and easy prioritization through an alternating training algorithm. On PDEs with steep gradients, nonlinearity, and high dimensionality, the proposed method achieves consistently high accuracy, with relative L2 errors mostly in the range of O(10^-5) to O(10^-6), significantly surpassing baseline methods. Moreover, it offers greater reliability across diverse problems, whereas compared approaches often suffer from variable accuracy depending on the PDE. This work provides new insights into designing hybrid training strategies to enhance the performance and robustness of PINNs. - oai:arXiv.org:2512.17607v1 - cs.LG - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Overcoming Spectral Bias via Cross-Attention + https://arxiv.org/abs/2512.18586 + arXiv:2512.18586v1 Announce Type: new +Abstract: Spectral bias implies an imbalance in training dynamics, whereby high-frequency components may converge substantially more slowly than low-frequency ones. 
To alleviate this issue, we propose a cross-attention-based architecture that adaptively reweights a scaled multiscale random Fourier feature bank with learnable scaling factors. The learnable scaling adjusts the amplitudes of the multiscale random Fourier features, while the cross-attention residual structure provides an input-dependent mechanism to emphasize the most informative scales. As a result, the proposed design accelerates high-frequency convergence relative to comparable baselines built on the same multiscale bank. Moreover, the attention module supports incremental spectral enrichment: dominant Fourier modes extracted from intermediate approximations via discrete Fourier analysis can be appended to the feature bank and used in subsequent training, without modifying the backbone architecture. + We further extend this framework to PDE learning by introducing a linear combination of two sub-networks: one specialized in capturing high-frequency components of the PDE solution and the other in capturing low-frequency components, with a learnable (or optimally chosen) mixing factor to balance the two contributions and improve training efficiency in oscillatory regimes. Numerical experiments on high-frequency and discontinuous regression problems, image reconstruction tasks, as well as representative PDE examples, demonstrate the effectiveness and robustness of the proposed method. + oai:arXiv.org:2512.18586v1 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Zhaoqian Gao, Min Yanga + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xiaodong Feng, Tao Tang, Xiaoliang Wan, Tao Zhou - Semi-Supervised 3D Segmentation for Type-B Aortic Dissection with Slim UNETR - https://arxiv.org/abs/2512.17610 - arXiv:2512.17610v1 Announce Type: new -Abstract: Convolutional neural networks (CNN) for multi-class segmentation of medical images are widely used today. 
This is especially true for models with multiple outputs that can separately predict segmentation classes (regions) without relying on a probabilistic formulation of region segmentation. These models allow for more precise segmentation by tailoring the network's components to each class (region). They have a common encoder part of the architecture but branch out at the output layers, leading to improved accuracy.
 - These methods are used to diagnose type B aortic dissection (TBAD), which requires accurate segmentation of aortic structures based on the ImageTBDA dataset, which contains 100 3D computed tomography angiography (CTA) images. These images identify three key classes: true lumen (TL), false lumen (FL), and false lumen thrombus (FLT) of the aorta, which is critical for diagnosis and treatment decisions. In the dataset, 68 examples have a false lumen, while the remaining 32 do not, creating additional complexity for pathology detection.
 - However, implementing these CNN methods requires a large amount of high-quality labeled data. Obtaining accurate labels for the regions of interest can be an expensive and time-consuming process, particularly for 3D data. Semi-supervised learning methods allow models to be trained using both labeled and unlabeled data, which is a promising approach for overcoming the challenge of obtaining accurate labels. However, these learning methods are not well understood for models with multiple outputs.
 - This paper presents a semi-supervised learning method for models with multiple outputs. The method is based on additional rotations and flipping, and does not assume a probabilistic nature of the model's responses. This makes it a universal approach, which is especially important for architectures that involve separate segmentation.
- oai:arXiv.org:2512.17610v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + DNA-HHE: Dual-mode Near-network Accelerator for Hybrid Homomorphic Encryption on the Edge + https://arxiv.org/abs/2512.18589 + arXiv:2512.18589v1 Announce Type: new +Abstract: Fully homomorphic encryption (FHE) schemes like RNS-CKKS enable privacy-preserving outsourced computation (PPOC) but suffer from high computational latency and ciphertext expansion, especially on the resource-constrained edge side. Hybrid Homomorphic Encryption (HHE) mitigates these issues on the edge side by replacing HE with lightweight symmetric encryption for plaintext encryption, such as the Rubato cipher for the HHE variant of RNS-CKKS, yet it introduces transciphering overhead on the cloud. The respective strengths and limitations of FHE and HHE call for a dual-mode HHE solution with flexible algorithm switching ability. This paper presents DNA-HHE, the first dual-mode HHE accelerator with near-network coupling for edge devices. DNA-HHE supports both edge-side RNS-CKKS and Rubato within a unified architecture driven by flexible custom instructions. To realize a compact implementation for the edge side, we propose a DSP-efficient modular reduction design, a compact multi-field-adaptive butterfly unit, and parallel scheduling schemes of Rubato with a high degree of resource sharing. DNA-HHE is designed with network protocol packaging and transmission capacities and directly coupled to the network interface controller, achieving reduced overall latency of edge-side PPOC by 1.09$\times$ to 1.56$\times$. Our evaluations on the ASIC and FPGA platforms demonstrate that DNA-HHE outperforms the state-of-the-art single-mode designs in both edge-side RNS-CKKS and symmetric cipher with better computation latency and area efficiency, while offering dual-mode functionality. 
+ oai:arXiv.org:2512.18589v1 + cs.CR + cs.AR + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Denis Mikhailapov, Vladimir Berikov + Yifan Zhao, Xinglong Yu, Yi Sun, Honglin Kuang, Jun Han - Self-Supervised Weighted Image Guided Quantitative MRI Super-Resolution - https://arxiv.org/abs/2512.17612 - arXiv:2512.17612v1 Announce Type: new -Abstract: High-resolution (HR) quantitative MRI (qMRI) relaxometry provides objective tissue characterization but remains clinically underutilized due to lengthy acquisition times. We propose a physics-informed, self-supervised framework for qMRI super-resolution that uses routinely acquired HR weighted MRI (wMRI) scans as guidance, thus, removing the necessity for HR qMRI ground truth during training. We formulate super-resolution as Bayesian maximum a posteriori inference, minimizing two discrepancies: (1) between HR images synthesized from super-resolved qMRI maps and acquired wMRI guides via forward signal models, and (2) between acquired LR qMRI and downsampled predictions. This physics-informed objective allows the models to learn from clinical wMRI without HR qMRI supervision. To validate the concept, we generate training data by synthesizing wMRI guides from HR qMRI using signal equations, then degrading qMRI resolution via k-space truncation. A deep neural network learns the super-resolution mapping. Ablation experiments demonstrate that T1-weighted images primarily enhance T1 maps, T2-weighted images improve T2 maps, and combined guidance optimally enhances all parameters simultaneously. Validation on independently acquired in-vivo data from a different qMRI sequence confirms cross-qMRI sequence generalizability. Models trained on synthetic data can produce super-resolved maps from a 1-minute acquisition with quality comparable to a 5-minute reference scan, leveraging the scanner-independent nature of relaxometry parameters. 
By decoupling training from HR qMRI requirement, our framework enables fast qMRI acquisitions enhanced via routine clinical images, offering a practical pathway for integrating quantitative relaxometry into clinical workflows with acceptable additional scan time. - oai:arXiv.org:2512.17612v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + From Scratch to Fine-Tuned: A Comparative Study of Transformer Training Strategies for Legal Machine Translation + https://arxiv.org/abs/2512.18593 + arXiv:2512.18593v1 Announce Type: new +Abstract: In multilingual nations like India, access to legal information is often hindered by language barriers, as much of the legal and judicial documentation remains in English. Legal Machine Translation (L-MT) offers a scalable solution to this challenge by enabling accurate and accessible translations of legal documents. This paper presents our work for the JUST-NLP 2025 Legal MT shared task, focusing on English-Hindi translation using Transformer-based approaches. We experiment with 2 complementary strategies, fine-tuning a pre-trained OPUS-MT model for domain-specific adaptation and training a Transformer model from scratch using the provided legal corpus. Performance is evaluated using standard MT metrics, including SacreBLEU, chrF++, TER, ROUGE, BERTScore, METEOR, and COMET. Our fine-tuned OPUS-MT model achieves a SacreBLEU score of 46.03, significantly outperforming both baseline and from-scratch models. The results highlight the effectiveness of domain adaptation in enhancing translation quality and demonstrate the potential of L-MT systems to improve access to justice and legal transparency in multilingual contexts. + oai:arXiv.org:2512.18593v1 + cs.CL + cs.AI + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Alireza Samadifardheris, Dirk H. J. Poot, Florian Wiesinger, Stefan Klein, Juan A. 
Hernandez-Tamames
 + Amit Barman, Atanu Mandal, Sudip Kumar Naskar
 - A Post-Quantum Secure End-to-End Verifiable E-Voting Protocol Based on Multivariate Polynomials
 - https://arxiv.org/abs/2512.17613
 - arXiv:2512.17613v1 Announce Type: new
-Abstract: Voting is a primary democratic activity through which voters select representatives or approve policies. Conventional paper ballot elections have several drawbacks that might compromise the fairness, effectiveness, and accessibility of the voting process. Therefore, there is an increasing need to design safer, more effective, and easily accessible alternatives. E-Voting is one such solution that uses digital tools to simplify voting. Existing state-of-the-art designs for secure E-Voting are based on number-theoretic hardness assumptions. These designs are no longer secure due to quantum algorithms such as Shor's algorithm. We present the design and analysis of the \textit{first} post-quantum secure end-to-end verifiable E-Voting protocol based on multivariate polynomials to address this issue. The security of our proposed design depends on the hardness of the MQ problem, which is an NP-hard problem. We present a simple yet efficient design involving only standard cryptographic primitives as building blocks.
To bridge this critical gap, we present REALM (REalistic AI Learning for Multiphysics), a rigorous benchmarking framework designed to test neural surrogates on challenging, application-driven reactive flows. REALM features 11 high-fidelity datasets spanning from canonical multiphysics problems to complex propulsion and fire safety scenarios, alongside a standardized end-to-end training and evaluation protocol that incorporates multiphysics-aware preprocessing and a robust rollout strategy. Using this framework, we systematically benchmark over a dozen representative surrogate model families, including spectral operators, convolutional models, Transformers, pointwise operators, and graph/mesh networks, and identify three robust trends: (i) a scaling barrier governed jointly by dimensionality, stiffness, and mesh irregularity, leading to rapidly growing rollout errors; (ii) performance primarily controlled by architectural inductive biases rather than parameter count; and (iii) a persistent gap between nominal accuracy metrics and physically trustworthy behavior, where models with high correlations still miss key transient structures and integral quantities. Taken together, REALM exposes the limits of current neural surrogates on realistic multiphysics flows and offers a rigorous testbed to drive the development of next-generation physics-aware architectures. + oai:arXiv.org:2512.18595v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Vikas Srivastava, Debasish Roy, Sihem Mesnager, Nibedita Kundu, Sumit Kumar Debnath, Sourav Mukhopadhyay + Runze Mao, Rui Zhang, Xuan Bai, Tianhao Wu, Teng Zhang, Zhenyi Chen, Minqi Lin, Bocheng Zeng, Yangchen Xu, Yingxuan Xiang, Haoze Zhang, Shubham Goswami, Pierre A. Dawe, Yifan Xu, Zhenhua An, Mengtao Yan, Xiaoyi Lu, Yi Wang, Rongbo Bai, Haobu Gao, Xiaohang Fang, Han Li, Hao Sun, Zhi X. 
Chen - Multi-Language Benchmark Generation via L-Systems - https://arxiv.org/abs/2512.17616 - arXiv:2512.17616v1 Announce Type: new -Abstract: L-systems are a mathematical formalism proposed by biologist Aristid Lindenmayer with the aim of simulating organic structures such as trees, snowflakes, flowers, and other branching phenomena. They are implemented as a formal language that defines how patterns can be iteratively rewritten. This paper describes how such a formalism can be used to create artificial programs written in programming languages such as C, C++, Julia and Go. These programs, being large and complex, can be used to test the performance of compilers, operating systems, and computer architectures. This paper demonstrates the usefulness of these benchmarks through multiple case studies. These case studies include a comparison between clang and gcc; a comparison between C, C++, Julia and Go; a study of the historical evolution of gcc in terms of code quality; a look into the effects of profile guided optimizations in gcc; an analysis of the asymptotic behavior of the different phases of clang's compilation pipeline; and a comparison between the many data structures available in the Gnome Library (GLib). These case studies demonstrate the benefits of the L-System approach to create benchmarks, when compared with fuzzers such as CSmith, which were designed to uncover bugs in compilers, rather than evaluating their performance. - oai:arXiv.org:2512.17616v1 - cs.PL - Mon, 22 Dec 2025 00:00:00 -0500 + EIA-SEC: Improved Actor-Critic Framework for Multi-UAV Collaborative Control in Smart Agriculture + https://arxiv.org/abs/2512.18596 + arXiv:2512.18596v1 Announce Type: new +Abstract: The widespread application of wireless communication technology has promoted the development of smart agriculture, where unmanned aerial vehicles (UAVs) play a multifunctional role. 
We target a multi-UAV smart agriculture system where UAVs cooperatively perform data collection, image acquisition, and communication tasks. In this context, we model a Markov decision process to solve the multi-UAV trajectory planning problem. Moreover, we propose a novel Elite Imitation Actor-Shared Ensemble Critic (EIA-SEC) framework, where agents adaptively learn from the elite agent to reduce trial-and-error costs, and a shared ensemble critic collaborates with each agent's local critic to ensure unbiased objective value estimates and prevent overestimation. Experimental results demonstrate that EIA-SEC outperforms state-of-the-art baselines in terms of reward performance, training stability, and convergence speed. + oai:arXiv.org:2512.18596v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Vin\'icius Francisco da Silva, Heitor Leite, Fernando Magno Quint\~ao Pereira + Quanxi Zhou, Wencan Mao, Yilei Liang, Manabu Tsukada, Yunling Liu, Jon Crowcroft - StereoMV2D: A Sparse Temporal Stereo-Enhanced Framework for Robust Multi-View 3D Object Detection - https://arxiv.org/abs/2512.17620 - arXiv:2512.17620v1 Announce Type: new -Abstract: Multi-view 3D object detection is a fundamental task in autonomous driving perception, where achieving a balance between detection accuracy and computational efficiency remains crucial. Sparse query-based 3D detectors efficiently aggregate object-relevant features from multi-view images through a set of learnable queries, offering a concise and end-to-end detection paradigm. Building on this foundation, MV2D leverages 2D detection results to provide high-quality object priors for query initialization, enabling higher precision and recall. However, the inherent depth ambiguity in single-frame 2D detections still limits the accuracy of 3D query generation. 
To address this issue, we propose StereoMV2D, a unified framework that integrates temporal stereo modeling into the 2D detection-guided multi-view 3D detector. By exploiting cross-temporal disparities of the same object across adjacent frames, StereoMV2D enhances depth perception and refines the query priors, while performing all computations efficiently within 2D regions of interest (RoIs). Furthermore, a dynamic confidence gating mechanism adaptively evaluates the reliability of temporal stereo cues through learning statistical patterns derived from the inter-frame matching matrix together with appearance consistency, ensuring robust detection under object appearance and occlusion. Extensive experiments on the nuScenes and Argoverse 2 datasets demonstrate that StereoMV2D achieves superior detection performance without incurring significant computational overhead. Code will be available at https://github.com/Uddd821/StereoMV2D. - oai:arXiv.org:2512.17620v1 + Commercial Vehicle Braking Optimization: A Robust SIFT-Trajectory Approach + https://arxiv.org/abs/2512.18597 + arXiv:2512.18597v1 Announce Type: new +Abstract: A vision-based trajectory analysis solution is proposed to address the "zero-speed braking" issue caused by inaccurate Controller Area Network (CAN) signals in commercial vehicle Automatic Emergency Braking (AEB) systems during low-speed operation. The algorithm utilizes the NVIDIA Jetson AGX Xavier platform to process sequential video frames from a blind spot camera, employing self-adaptive Contrast Limited Adaptive Histogram Equalization (CLAHE)-enhanced Scale-Invariant Feature Transform (SIFT) feature extraction and K-Nearest Neighbors (KNN)-Random Sample Consensus (RANSAC) matching. This allows for precise classification of the vehicle's motion state (static, vibration, moving). 
Key innovations include 1) multiframe trajectory displacement statistics (5-frame sliding window), 2) a dual-threshold state decision matrix, and 3) OBD-II driven dynamic Region of Interest (ROI) configuration. The system effectively suppresses environmental interference and false detection of dynamic objects, directly addressing the challenge of low-speed false activation in commercial vehicle safety systems. Evaluation in a real-world dataset (32,454 video segments from 1,852 vehicles) demonstrates an F1-score of 99.96% for static detection, 97.78% for moving state recognition, and a processing delay of 14.2 milliseconds (resolution 704x576). The deployment on-site shows an 89% reduction in false braking events, a 100% success rate in emergency braking, and a fault rate below 5%. + oai:arXiv.org:2512.18597v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + cs.GR + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Di Wu, Feng Yang, Wenhui Zhao, Jinwen Yu, Pan Liao, Benlian Xu, Dingwen Zhang + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Zhe Li, Kun Cheng, Hanyue Mo, Jintao Lu, Ziwen Kuang, Jianwen Ye, Lixu Xu, Xinya Meng, Jiahui Zhao, Shengda Ji, Shuyuan Liu, Mengyu Wang - PathFLIP: Fine-grained Language-Image Pretraining for Versatile Computational Pathology - https://arxiv.org/abs/2512.17621 - arXiv:2512.17621v1 Announce Type: new -Abstract: While Vision-Language Models (VLMs) have achieved notable progress in computational pathology (CPath), the gigapixel scale and spatial heterogeneity of Whole Slide Images (WSIs) continue to pose challenges for multimodal understanding. Existing alignment methods struggle to capture fine-grained correspondences between textual descriptions and visual cues across thousands of patches from a slide, compromising their performance on downstream tasks. In this paper, we propose PathFLIP (Pathology Fine-grained Language-Image Pretraining), a novel framework for holistic WSI interpretation. 
PathFLIP decomposes slide-level captions into region-level subcaptions and generates text-conditioned region embeddings to facilitate precise visual-language grounding. By harnessing Large Language Models (LLMs), PathFLIP can seamlessly follow diverse clinical instructions and adapt to varied diagnostic contexts. Furthermore, it exhibits versatile capabilities across multiple paradigms, efficiently handling slide-level classification and retrieval, fine-grained lesion localization, and instruction following. Extensive experiments demonstrate that PathFLIP outperforms existing large-scale pathological VLMs on four representative benchmarks while requiring significantly less training data, paving the way for fine-grained, instruction-aware WSI interpretation in clinical practice.
 - oai:arXiv.org:2512.17621v1
 - cs.CV
 - Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Fengchun Liu, Songhan Jiang, Linghan Cai, Ziyue Wang, Yongbing Zhang
 + SimpleCall: A Lightweight Image Restoration Agent in Label-Free Environments with MLLM Perceptual Feedback
 + https://arxiv.org/abs/2512.18599
 + arXiv:2512.18599v1 Announce Type: new
+Abstract: Complex image restoration aims to recover high-quality images from inputs affected by multiple degradations such as blur, noise, rain, and compression artifacts. Recent restoration agents, powered by vision-language models and large language models, offer promising restoration capabilities but suffer from significant efficiency bottlenecks due to reflection, rollback, and iterative tool searching. Moreover, their performance heavily depends on degradation recognition models that require extensive annotations for training, limiting their applicability in label-free environments. To address these limitations, we propose a policy optimization-based restoration framework that learns a lightweight agent to determine tool-calling sequences. The agent operates in a sequential decision process, selecting the most appropriate restoration operation at each step to maximize final image quality.
To enable training within label-free environments, we introduce a novel reward mechanism driven by multimodal large language models, which act as human-aligned evaluators and provide perceptual feedback for policy improvement. Once trained, our agent executes deterministic restoration plans without redundant tool invocations, significantly accelerating inference while maintaining high restoration quality. Extensive experiments show that despite using no supervision, our method matches SOTA performance on full-reference metrics and surpasses existing approaches on no-reference metrics across diverse degradation scenarios.
 + oai:arXiv.org:2512.18599v1
 + cs.CV
 + Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 + Jianglin Lu, Yuanwei Wu, Ziyi Zhao, Hongcheng Wang, Felix Jimenez, Abrar Majeedi, Yun Fu
 - Iterative Gaussian Approximation for Random Spreading Unsourced Random Access
 - https://arxiv.org/abs/2512.17628
 - arXiv:2512.17628v1 Announce Type: new
-Abstract: Massive machine-type communications (mMTC) demand robust solutions to support extensive connectivity efficiently. Unsourced random access (URA) has emerged as a promising approach, delivering high spectral and energy efficiency. Among URA code structures, the random spreading (RS) category is a key enabler, providing strong anti-interference capabilities through spectrum spreading gain. Notably, RS-URA approaches theoretical performance limits over the Gaussian multiple access channel in scenarios with few active users. In this paper, we propose an iterative Gaussian approximation decoder designed universally for RS-URA categories. The proposed receiver iterates extrinsic and intrinsic soft information to enhance decoding performance, requiring only a few iterations to converge.
Numerical results validate the decoder's effectiveness in terms of performance and robustness. - oai:arXiv.org:2512.17628v1 + Embracing Beam-Squint Effects for Wideband LEO Satellite Communications: A 3D Rainbow Beamforming Approach + https://arxiv.org/abs/2512.18600 + arXiv:2512.18600v1 Announce Type: new +Abstract: Low Earth Orbit (LEO) satellite communications (SATCOM) offers high-throughput, low-latency global connectivity to a very large number of users. To accommodate this demand with limited hardware resources, beam hopping (BH) has emerged as a prominent approach in LEO SATCOM. However, its time-domain switching mechanism confines coverage to a small fraction of the service area during each time slot, exacerbating uplink throughput bottlenecks and latency issues as the user density increases. Meanwhile, wideband systems experience the beam-squint effect, where analog beamforming (BF) directions vary with subcarrier frequencies, potentially causing misalignment at certain frequencies, thereby hindering the performance of wideband SATCOM. In this paper, we aim to shift the paradigm in wideband LEO SATCOM from beam-squint as an impairment to beam-squint as an asset. Specifically, we put forth 3D rainbow BF employing a joint phase-time array (JPTA) antenna with true time delay (TTD) to intentionally widen the beam-squint angle, steering frequency-dependent beams toward distributed directions. This novel approach enables the satellite to serve its entire coverage area in a single time slot. By doing so, the satellite simultaneously receives uplink signals from a massive number of users, significantly boosting throughput and reducing latency. To realize 3D rainbow BF, we formulate a JPTA beamformer optimization problem and address the non-convex nature of the optimization problem through a novel joint alternating and decomposition-based optimization framework. 
Numerical evaluations incorporating realistic 3D LEO SATCOM geometry demonstrate that the proposed rainbow BF-empowered LEO SATCOM achieves up to a 2.8-fold increase in uplink throughput compared to conventional BH systems. These results mark a significant breakthrough for 6G wideband LEO SATCOM.
 oai:arXiv.org:2512.18600v1
 cs.IT
 math.IT
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/publicdomain/zero/1.0/
 Juha Park, Seokho Kim, Wonjae Shin, H. Vincent Poor
 
 
 SCOPE: Sequential Causal Optimization of Process Interventions
 https://arxiv.org/abs/2512.17629
 arXiv:2512.17629v1 Announce Type: new
Abstract: Prescriptive Process Monitoring (PresPM) recommends interventions during business processes to optimize key performance indicators (KPIs). In realistic settings, interventions are rarely isolated: organizations need to align sequences of interventions to jointly steer the outcome of a case. Existing PresPM approaches fall short in this respect. Many focus on a single intervention decision, while others treat multiple interventions independently, ignoring how they interact over time. Methods that do address these dependencies depend either on simulation or data augmentation to approximate the process to train a Reinforcement Learning (RL) agent, which can create a reality gap and introduce bias. We introduce SCOPE, a PresPM approach that learns aligned sequential intervention recommendations. SCOPE employs backward induction to estimate the effect of each candidate intervention action, propagating its impact from the final decision point back to the first. By leveraging causal learners, our method can utilize observational data directly, unlike methods that require constructing process approximations for reinforcement learning.
Experiments on both an existing synthetic dataset and a new semi-synthetic dataset show that SCOPE consistently outperforms state-of-the-art PresPM techniques in optimizing the KPI. The novel semi-synthetic setup, based on a real-life event log, is provided as a reusable benchmark for future work on sequential PresPM. - oai:arXiv.org:2512.17629v1 - cs.LG - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + On Finding Inconsistencies in Documents + https://arxiv.org/abs/2512.18601 + arXiv:2512.18601v1 Announce Type: new +Abstract: Professionals in academia, law, and finance audit their documents because inconsistencies can result in monetary, reputational, and scientific costs. Language models (LMs) have the potential to dramatically speed up this auditing process. To understand their abilities, we introduce a benchmark, FIND (Finding INconsistencies in Documents), where each example is a document with an inconsistency inserted manually by a domain expert. Despite the documents being long, technical, and complex, the best-performing model (gpt-5) recovered 64% of the inserted inconsistencies. Surprisingly, gpt-5 also found undiscovered inconsistencies present in the original documents. For example, on 50 arXiv papers, we judged 136 out of 196 of the model's suggestions to be legitimate inconsistencies missed by the original authors. However, despite these findings, even the best models miss almost half of the inconsistencies in FIND, demonstrating that inconsistency detection is still a challenging task. + oai:arXiv.org:2512.18601v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Jakob De Moor, Hans Weytjens, Johannes De Smedt, Jochen De Weerdt + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Charles J. 
Lovering, Seth Ebner, Brandon Smock, Michael Krumdick, Saad Rabbani, Ahmed Muhammad, Varshini Reddy, Chris Tanner - Confidence-Credibility Aware Weighted Ensembles of Small LLMs Outperform Large LLMs in Emotion Detection - https://arxiv.org/abs/2512.17630 - arXiv:2512.17630v1 Announce Type: new -Abstract: This paper introduces a confidence-weighted, credibility-aware ensemble framework for text-based emotion detection, inspired by Condorcet's Jury Theorem (CJT). Unlike conventional ensembles that often rely on homogeneous architectures, our approach combines architecturally diverse small transformer-based large language models (sLLMs) - BERT, RoBERTa, DistilBERT, DeBERTa, and ELECTRA, each fully fine-tuned for emotion classification. To preserve error diversity, we minimize parameter convergence while taking advantage of the unique biases of each model. A dual-weighted voting mechanism integrates both global credibility (validation F1 score) and local confidence (instance-level probability) to dynamically weight model contributions. Experiments on the DAIR-AI dataset demonstrate that our credibility-confidence ensemble achieves a macro F1 score of 93.5 percent, surpassing state-of-the-art benchmarks and significantly outperforming large-scale LLMs, including Falcon, Mistral, Qwen, and Phi, even after task-specific Low-Rank Adaptation (LoRA). With only 595M parameters in total, our small LLMs ensemble proves more parameter-efficient and robust than models up to 7B parameters, establishing that carefully designed ensembles of small, fine-tuned models can outperform much larger LLMs in specialized natural language processing (NLP) tasks such as emotion detection. 
 oai:arXiv.org:2512.17630v1
 cs.CL
 cs.LG
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Menna Elgabry, Ali Hamdi
 
 
 Trajectory Planning for UAV-Based Smart Farming Using Imitation-Based Triple Deep Q-Learning
 https://arxiv.org/abs/2512.18604
 arXiv:2512.18604v1 Announce Type: new
Abstract: Unmanned aerial vehicles (UAVs) have emerged as a promising auxiliary platform for smart agriculture, capable of simultaneously performing weed detection, recognition, and data collection from wireless sensors. However, trajectory planning for UAV-based smart agriculture is challenging due to the high uncertainty of the environment, partial observations, and limited battery capacity of UAVs. To address these issues, we formulate the trajectory planning problem as a Markov decision process (MDP) and leverage multi-agent reinforcement learning (MARL) to solve it. Furthermore, we propose a novel imitation-based triple deep Q-network (ITDQN) algorithm, which employs an elite imitation mechanism to reduce exploration costs and utilizes a mediator Q-network over a double deep Q-network (DDQN) to accelerate and stabilize training and improve performance. Experimental results in both simulated and real-world environments demonstrate the effectiveness of our solution. Moreover, our proposed ITDQN outperforms DDQN by 4.43% in weed recognition rate and 6.94% in data collection rate.
 oai:arXiv.org:2512.18604v1
 cs.LG
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Wencan Mao, Quanxi Zhou, Tomas Couso Coddou, Manabu Tsukada, Yunling Liu, Yusheng Ji
 
 
 Trust-Region Adaptive Policy Optimization
 https://arxiv.org/abs/2512.17636
 arXiv:2512.17636v1 Announce Type: new
Abstract: Post-training methods, especially Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL), play an important role in improving large language models' (LLMs) complex reasoning abilities.
However, the dominant two-stage pipeline (SFT then RL) suffers from a key inconsistency: SFT enforces rigid imitation that suppresses exploration and induces forgetting, limiting RL's potential for improvements. We address this inefficiency with TRAPO (Trust-Region Adaptive Policy Optimization), a hybrid framework that interleaves SFT and RL within each training instance by optimizing SFT loss on expert prefixes and RL loss on the model's own completions, unifying external supervision and self-exploration. To stabilize training, we introduce Trust-Region SFT (TrSFT), which minimizes forward KL divergence inside a trust region but attenuates optimization outside, effectively shifting toward reverse KL and yielding stable, mode-seeking updates favorable for RL. An adaptive prefix-selection mechanism further allocates expert guidance based on measured utility. Experiments on five mathematical reasoning benchmarks show that TRAPO consistently surpasses standard SFT, RL, and SFT-then-RL pipelines, as well as recent state-of-the-art approaches, establishing a strong new paradigm for reasoning-enhanced LLMs.
 oai:arXiv.org:2512.17636v1
 cs.LG
 cs.AI
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Mingyu Su, Jian Guan, Yuxian Gu, Minlie Huang, Hongning Wang
 
 
 Reflective Confidence: Correcting Reasoning Flaws via Online Self-Correction
 https://arxiv.org/abs/2512.18605
 arXiv:2512.18605v1 Announce Type: new
Abstract: Large language models (LLMs) have achieved strong performance on complex reasoning tasks using techniques such as chain-of-thought and self-consistency. However, ensemble-based approaches, especially self-consistency which relies on multiple reasoning trajectories, often incur substantial computational overhead. To improve efficiency, prior work has leveraged internal confidence signals, where early stopping strategies such as DeepConf reduce cost by terminating low-confidence trajectories. However, this strategy discards incomplete reasoning paths and wastes partial computation.
+ We propose reflective confidence, a novel reasoning framework that transforms low-confidence signals from termination indicators into reflection triggers. When confidence falls below a threshold, instead of stopping generation, the model produces a reflection prompt to analyze the current reasoning state, identify potential errors, and continue generation along a corrected trajectory. Experiments on mathematical reasoning benchmarks, including AIME 2025, demonstrate significant accuracy improvements over advanced early-stopping baselines at comparable computational cost, validating the effectiveness of proactive self-correction over passive discarding. + oai:arXiv.org:2512.18605v1 cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mingyu Su, Jian Guan, Yuxian Gu, Minlie Huang, Hongning Wang + http://creativecommons.org/licenses/by/4.0/ + Qinglin Zeng, Jing Yang, Keze Wang - About Time: Model-free Reinforcement Learning with Timed Reward Machines - https://arxiv.org/abs/2512.17637 - arXiv:2512.17637v1 Announce Type: new -Abstract: Reward specification plays a central role in reinforcement learning (RL), guiding the agent's behavior. To express non-Markovian rewards, formalisms such as reward machines have been introduced to capture dependencies on histories. However, traditional reward machines lack the ability to model precise timing constraints, limiting their use in time-sensitive applications. In this paper, we propose timed reward machines (TRMs), which are an extension of reward machines that incorporate timing constraints into the reward structure. TRMs enable more expressive specifications with tunable reward logic, for example, imposing costs for delays and granting rewards for timely actions. We study model-free RL frameworks (i.e., tabular Q-learning) for learning optimal policies with TRMs under digital and real-time semantics. 
Our algorithms integrate the TRM into learning via abstractions of timed automata, and employ counterfactual-imagining heuristics that exploit the structure of the TRM to improve the search. Experimentally, we demonstrate that our algorithm learns policies that achieve high rewards while satisfying the timing constraints specified by the TRM on popular RL benchmarks. Moreover, we conduct comparative studies of performance under different TRM semantics, along with ablations that highlight the benefits of counterfactual-imagining. - oai:arXiv.org:2512.17637v1 + The Interaction Bottleneck of Deep Neural Networks: Discovery, Proof, and Modulation + https://arxiv.org/abs/2512.18607 + arXiv:2512.18607v1 Announce Type: new +Abstract: Understanding what kinds of cooperative structures deep neural networks (DNNs) can represent remains a fundamental yet insufficiently understood problem. In this work, we treat interactions as the fundamental units of such structure and investigate a largely unexplored question: how DNNs encode interactions under different levels of contextual complexity, and how these microscopic interaction patterns shape macroscopic representation capacity. To quantify this complexity, we use multi-order interactions [57], where each order reflects the amount of contextual information required to evaluate the joint interaction utility of a variable pair. This formulation enables a stratified analysis of cooperative patterns learned by DNNs. Building on this formulation, we develop a comprehensive study of interaction structure in DNNs. (i) We empirically discover a universal interaction bottleneck: across architectures and tasks, DNNs easily learn low-order and high-order interactions but consistently under-represent mid-order ones. (ii) We theoretically explain this bottleneck by proving that mid-order interactions incur the highest contextual variability, yielding large gradient variance and making them intrinsically difficult to learn. 
(iii) We further modulate the bottleneck by introducing losses that steer models toward emphasizing interactions of selected orders. Finally, we connect microscopic interaction structures with macroscopic representational behavior: low-order-emphasized models exhibit stronger generalization and robustness, whereas high-order-emphasized models demonstrate greater structural modeling and fitting capability. Together, these results uncover an inherent representational bias in modern DNNs and establish interaction order as a powerful lens for interpreting and guiding deep representations. + oai:arXiv.org:2512.18607v1 + cs.LG cs.AI - cs.FL - cs.LO - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Anirban Majumdar, Ritam Raha, Rajarshi Roy, David Parker, Marta Kwiatkowska + Huiqi Deng, Qihan Ren, Zhuofan Chen, Zhenyuan Cui, Wen Shen, Peng Zhang, Hongbin Pei, Quanshi Zhang - Linear Personality Probing and Steering in LLMs: A Big Five Study - https://arxiv.org/abs/2512.17639 - arXiv:2512.17639v1 Announce Type: new -Abstract: Large language models (LLMs) exhibit distinct and consistent personalities that greatly impact trust and engagement. While this means that personality frameworks would be highly valuable tools to characterize and control LLMs' behavior, current approaches remain either costly (post-training) or brittle (prompt engineering). Probing and steering via linear directions has recently emerged as a cheap and efficient alternative. In this paper, we investigate whether linear directions aligned with the Big Five personality traits can be used for probing and steering model behavior. Using Llama 3.3 70B, we generate descriptions of 406 fictional characters and their Big Five trait scores. We then prompt the model with these descriptions and questions from the Alpaca questionnaire, allowing us to sample hidden activations that vary along personality traits in known, quantifiable ways. 
Using linear regression, we learn a set of per-layer directions in activation space, and test their effectiveness for probing and steering model behavior. Our results suggest that linear directions aligned with trait-scores are effective probes for personality detection, while their steering capabilities strongly depend on context, producing reliable effects in forced-choice tasks but limited influence in open-ended generation or when additional context is present in the prompt. - oai:arXiv.org:2512.17639v1 + A Comparative Study of Light-weight Language Models for PII Masking and their Deployment for Real Conversational Texts + https://arxiv.org/abs/2512.18608 + arXiv:2512.18608v1 Announce Type: new +Abstract: Automated masking of Personally Identifiable Information (PII) is critical for privacy-preserving conversational systems. While current frontier large language models demonstrate strong PII masking capabilities, concerns about data handling and computational costs motivate exploration of whether lightweight models can achieve comparable performance. We compare encoder-decoder and decoder-only architectures by fine-tuning T5-small and Mistral-Instruct-v0.3 on English datasets constructed from the AI4Privacy benchmark. We create different dataset variants to study label standardization and PII representation, covering 24 standardized PII categories and higher-granularity settings. Evaluation using entity-level and character-level metrics, type accuracy, and exact match shows that both lightweight models achieve performance comparable to frontier LLMs for PII masking tasks. Label normalization consistently improves performance across architectures. Mistral achieves higher F1 and recall with greater robustness across PII types but incurs significantly higher generation latency. 
T5, while less robust in conversational text, offers more controllable structured outputs and lower inference cost, motivating its use in a real-time Discord bot for real-world PII redaction. Evaluation on live messages reveals performance degradation under informal inputs. These results clarify trade-offs between accuracy, robustness, and computational efficiency, demonstrating that lightweight models can provide effective PII masking while addressing data handling concerns associated with frontier LLMs.
 oai:arXiv.org:2512.18608v1
 cs.CL
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Prabigya Acharya, Liza Shrestha
 
 
 Generative Human-Object Interaction Detection via Differentiable Cognitive Steering of Multi-modal LLMs
 https://arxiv.org/abs/2512.17640
 arXiv:2512.17640v1 Announce Type: new
Abstract: Human-object interaction (HOI) detection aims to localize human-object pairs and the interactions between them. Existing methods operate under a closed-world assumption, treating the task as a classification problem over a small, predefined verb set, which struggles to generalize to the long-tail of unseen or ambiguous interactions in the wild. While recent multi-modal large language models (MLLMs) possess the rich world knowledge required for open-vocabulary understanding, they remain decoupled from existing HOI detectors since fine-tuning them is computationally prohibitive. To address these constraints, we propose GRASP-HO, a novel Generative Reasoning And Steerable Perception framework that reformulates HOI detection from a closed-set classification task to an open-vocabulary generation problem. To bridge vision and cognition, we first extract hybrid interaction representations, then design a lightweight learnable cognitive steering conduit (CSC) module to inject the fine-grained visual evidence into a frozen MLLM for effective reasoning.
To address the supervision mismatch between classification-based HOI datasets and open-vocabulary generative models, we introduce a hybrid guidance strategy that couples the language modeling loss with an auxiliary classification loss, enabling discriminative grounding without sacrificing generative flexibility. Experiments demonstrate state-of-the-art closed-set performance and strong zero-shot generalization, achieving a unified paradigm that seamlessly bridges discriminative perception and generative reasoning for open-world HOI detection.
 oai:arXiv.org:2512.17640v1
 cs.CV
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Zhaolin Cai, Huiyu Duan, Zitong Xu, Fan Li, Zhi Liu, Jing Liu, Wei Shen, Xiongkuo Min, Guangtao Zhai
 
 
 A Comparative Analysis of Instagram and TikTok as Islamic Da'wah Media in the Digital Era
 https://arxiv.org/abs/2512.17646
 arXiv:2512.17646v1 Announce Type: new
Abstract: This research aims to analyze and compare the effectiveness of Islamic da'wah on two popular social media platforms, namely Instagram and TikTok. The analysis is conducted based on four main aspects: media characteristics, da'wah communication strategies, audience engagement effectiveness, and user behavioral responses. The research employs a descriptive quantitative approach through observations of the popular da'wah account @hananattakistory during the period of October to November 2025. The findings indicate that TikTok excels in the effectiveness of disseminating da'wah messages through high interaction rates, achieving an engagement rate of 1.42%. In contrast, Instagram records a higher total interaction with an engagement rate of 5.47%, reflecting deeper and more reflective audience involvement.
Qualitative analysis shows that TikTok is more effective in the initial stage of capturing audience attention (awareness), while Instagram is stronger in building loyalty and fostering a digital da'wah community. Thus, combining the use of both platforms can serve as a complementary digital da'wah strategy, expanding reach while deepening the understanding of Islamic values in the era of social media. The main contribution of this study lies in its comparative, empirically grounded approach that integrates both qualitative and quantitative analyses to map the effectiveness of da'wah across platforms, providing a strategic foundation for developing adaptive and contextual digital da'wah for young Muslim audiences.
 oai:arXiv.org:2512.17646v1
 cs.SI
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Wisnu Uriawan, Muhammad Saifurridwani Ijazi, Nizzami Ramdhan Arraudy, Onixa Shafa Putri Wibowo, Radithya Dwi Santoso
 
 
 The Procrustean Bed of Time Series: The Optimization Bias of Point-wise Loss
 https://arxiv.org/abs/2512.18610
 arXiv:2512.18610v1 Announce Type: new
Abstract: Optimizing time series models via point-wise loss functions (e.g., MSE) relies on a flawed point-wise independent and identically distributed (i.i.d.) assumption that disregards the causal temporal structure, an issue with growing awareness yet lacking formal theoretical grounding. Focusing on the core independence issue under covariance stationarity, this paper aims to provide a first-principles analysis of the Expectation of Optimization Bias (EOB), formalizing it information-theoretically as the discrepancy between the true joint distribution and its flawed i.i.d. counterpart. Our analysis reveals a fundamental paradigm paradox: the more deterministic and structured the time series, the more severe the bias induced by the point-wise loss function. We derive the first closed-form quantification for the non-deterministic EOB across linear and non-linear systems, and prove EOB is an intrinsic data property, governed exclusively by sequence length and our proposed Structural Signal-to-Noise Ratio (SSNR).
This theoretical diagnosis motivates our principled debiasing program that eliminates the bias through sequence length reduction and structural orthogonalization. We present a concrete solution that simultaneously achieves both principles via DFT or DWT. Furthermore, a novel harmonized $\ell_p$ norm framework is proposed to rectify gradient pathologies of high-variance series. Extensive experiments validate the EOB theory's generality and the superior performance of the debiasing program.
 oai:arXiv.org:2512.18610v1
 cs.LG
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Rongyao Cai, Yuxi Wan, Kexin Zhang, Ming Jin, Hao Wang, Zhiqiang Ge, Daoyi Dong, Yong Liu, Qingsong Wen
 
 
 Simulstream: Open-Source Toolkit for Evaluation and Demonstration of Streaming Speech-to-Text Translation Systems
 https://arxiv.org/abs/2512.17648
 arXiv:2512.17648v1 Announce Type: new
Abstract: Streaming Speech-to-Text Translation (StreamST) requires producing translations concurrently with incoming speech, imposing strict latency constraints and demanding models that balance partial-information decision-making with high translation quality. Research efforts on the topic have so far relied on the SimulEval repository, which is no longer maintained and does not support systems that revise their outputs. In addition, it has been designed for simulating the processing of short segments, rather than long-form audio streams, and it does not provide an easy method to showcase systems in a demo. As a solution, we introduce simulstream, the first open-source framework dedicated to unified evaluation and demonstration of StreamST systems.
Designed for long-form speech processing, it supports not only incremental decoding approaches, but also re-translation methods, enabling their comparison within the same framework both in terms of quality and latency. In addition, it offers an interactive web interface to demo any system built within the tool.
 oai:arXiv.org:2512.17648v1
 cs.CL
 Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Marco Gaido, Sara Papi, Mauro Cettolo, Matteo Negri, Luisa Bentivogli
 
 
 Text2Graph VPR: A Text-to-Graph Expert System for Explainable Place Recognition in Changing Environments
 https://arxiv.org/abs/2512.18613
 arXiv:2512.18613v1 Announce Type: new
Abstract: Visual Place Recognition (VPR) in long-term deployment requires reasoning beyond pixel similarity: systems must make transparent, interpretable decisions that remain robust under lighting, weather and seasonal change. We present Text2Graph VPR, an explainable semantic localization system that converts image sequences into textual scene descriptions, parses those descriptions into structured scene graphs, and reasons over the resulting graphs to identify places. Scene graphs capture objects, attributes and pairwise relations; we aggregate per-frame graphs into a compact place representation and perform retrieval with a dual-similarity mechanism that fuses learned Graph Attention Network (GAT) embeddings and a Shortest-Path (SP) kernel for structural matching. This hybrid design enables both learned semantic matching and topology-aware comparison, and -- critically -- produces human-readable intermediate representations that support diagnostic analysis and improve transparency in the decision process. We validate the system on Oxford RobotCar and MSLS (Amman/San Francisco) benchmarks and demonstrate robust retrieval under severe appearance shifts, along with zero-shot operation using human textual queries.
The results illustrate that semantic, graph-based reasoning is a viable and interpretable alternative for place recognition, particularly suited to safety-sensitive and resource-constrained settings.
 oai:arXiv.org:2512.18613v1
 cs.CV
 cs.AI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Saeideh Yousefzadeh, Hamidreza Pourreza
 
 
 Region-Constraint In-Context Generation for Instructional Video Editing
 https://arxiv.org/abs/2512.17650
 arXiv:2512.17650v1 Announce Type: new
Abstract: The in-context generation paradigm has recently demonstrated strong power in instructional image editing with both data efficiency and synthesis quality. Nevertheless, shaping such in-context learning for instruction-based video editing is not trivial. Without specifying editing regions, the results can suffer from the problem of inaccurate editing regions and the token interference between editing and non-editing areas during denoising. To address these, we present ReCo, a new instructional video editing paradigm that novelly delves into constraint modeling between editing and non-editing regions during in-context generation. Technically, ReCo width-wise concatenates source and target video for joint denoising. To calibrate video diffusion learning, ReCo capitalizes on two regularization terms, i.e., latent and attention regularization, operating on one-step backward denoised latents and attention maps, respectively. The former increases the latent discrepancy of the editing region between source and target videos while reducing that of non-editing areas, emphasizing the modification of the editing area and alleviating unexpected content generation outside it. The latter suppresses the attention of tokens in the editing region to the tokens in the counterpart region of the source video, thereby mitigating their interference during novel object generation in the target video.
Furthermore, we propose a large-scale, high-quality video editing dataset, i.e., ReCo-Data, comprising 500K instruction-video pairs to benefit model training. Extensive experiments conducted on four major instruction-based video editing tasks demonstrate the superiority of our proposal. - oai:arXiv.org:2512.17650v1 + PTTA: A Pure Text-to-Animation Framework for High-Quality Creation + https://arxiv.org/abs/2512.18614 + arXiv:2512.18614v1 Announce Type: new +Abstract: Traditional animation production involves complex pipelines and significant manual labor cost. While recent video generation models such as Sora, Kling, and CogVideoX achieve impressive results on natural video synthesis, they exhibit notable limitations when applied to animation generation. Recent efforts, such as AniSora, demonstrate promising performance by fine-tuning image-to-video models for animation styles, yet analogous exploration in the text-to-video setting remains limited. + In this work, we present PTTA, a pure text-to-animation framework for high-quality animation creation. We first construct a small-scale but high-quality paired dataset of animation videos and textual descriptions. Building upon the pretrained text-to-video model HunyuanVideo, we perform fine-tuning to adapt it to animation-style generation. Extensive visual evaluations across multiple dimensions show that the proposed approach consistently outperforms comparable baselines in animation video synthesis. 
+ oai:arXiv.org:2512.18614v1 cs.CV - cs.MM - Mon, 22 Dec 2025 00:00:00 -0500 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhongwei Zhang, Fuchen Long, Wei Li, Zhaofan Qiu, Wu Liu, Ting Yao, Tao Mei + http://creativecommons.org/licenses/by/4.0/ + Ruiqi Chen, Kaitong Cai, Yijia Fan, Keze Wang - Estimating Spatially Resolved Radiation Fields Using Neural Networks - https://arxiv.org/abs/2512.17654 - arXiv:2512.17654v1 Announce Type: new -Abstract: We present an in-depth analysis on how to build and train neural networks to estimate the spatial distribution of scattered radiation fields for radiation protection dosimetry in medical radiation fields, such as those found in Interventional Radiology and Cardiology. Therefore, we present three different synthetically generated datasets with increasing complexity for training, using a Monte-Carlo Simulation application based on Geant4. On those datasets, we evaluate convolutional and fully connected architectures of neural networks to demonstrate which design decisions work well for reconstructing the fluence and spectra distributions over the spatial domain of such radiation fields. All used datasets as well as our training pipeline are published as open source in separate repositories. - oai:arXiv.org:2512.17654v1 - cs.LG - physics.comp-ph - physics.med-ph - Mon, 22 Dec 2025 00:00:00 -0500 + DASH: Deception-Augmented Shared Mental Model for a Human-Machine Teaming System + https://arxiv.org/abs/2512.18616 + arXiv:2512.18616v1 Announce Type: new +Abstract: We present DASH (Deception-Augmented Shared mental model for Human-machine teaming), a novel framework that enhances mission resilience by embedding proactive deception into Shared Mental Models (SMM). 
Designed for mission-critical applications such as surveillance and rescue, DASH introduces "bait tasks" to detect insider threats, e.g., compromised Unmanned Ground Vehicles (UGVs), AI agents, or human analysts, before they degrade team performance. Upon detection, tailored recovery mechanisms are activated, including UGV system reinstallation, AI model retraining, or human analyst replacement. In contrast to existing SMM approaches that neglect insider risks, DASH improves both coordination and security. Empirical evaluations across four schemes (DASH, SMM-only, no-SMM, and baseline) show that DASH sustains approximately 80% mission success under high attack rates, eight times higher than the baseline. This work contributes a practical human-AI teaming framework grounded in shared mental models, a deception-based strategy for insider threat detection, and empirical evidence of enhanced robustness under adversarial conditions. DASH establishes a foundation for secure, adaptive human-machine teaming in contested environments. + oai:arXiv.org:2512.18616v1 + cs.HC + cs.AI + cs.CR + cs.MA + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Felix Lehner, Pasquale Lombardo, Susana Castillo, Oliver Hupe, Marcus Magnor + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zelin Wan, Han Jun Yoon, Nithin Alluru, Terrence J. Moore, Frederica F. Nelson, Seunghyun Yoon, Hyuk Lim, Dan Dongseong Kim, Jin-Hee Cho - Bitbox: Behavioral Imaging Toolbox for Computational Analysis of Behavior from Videos - https://arxiv.org/abs/2512.17655 - arXiv:2512.17655v1 Announce Type: new -Abstract: Computational measurement of human behavior from video has recently become feasible due to major advances in AI. These advances now enable granular and precise quantification of facial expression, head movement, body action, and other behavioral modalities and are increasingly used in psychology, psychiatry, neuroscience, and mental health research. 
However, mainstream adoption remains slow. Most existing methods and software are developed for engineering audiences, require specialized software stacks, and fail to provide behavioral measurements at a level directly useful for hypothesis-driven research. As a result, there is a large barrier to entry for researchers who wish to use modern, AI-based tools in their work. We introduce Bitbox, an open-source toolkit designed to remove this barrier and make advanced computational analysis directly usable by behavioral scientists and clinical researchers. Bitbox is guided by principles of reproducibility, modularity, and interpretability. It provides a standardized interface for extracting high-level behavioral measurements from video, leveraging multiple face, head, and body processors. The core modules have been tested and validated on clinical samples and are designed so that new measures can be added with minimal effort. Bitbox is intended to serve both sides of the translational gap. It gives behavioral researchers access to robust, high-level behavioral metrics without requiring engineering expertise, and it provides computer scientists a practical mechanism for disseminating methods to domains where their impact is most needed. We expect that Bitbox will accelerate integration of computational behavioral measurement into behavioral, clinical, and mental health research. Bitbox has been designed from the beginning as a community-driven effort that will evolve through contributions from both method developers and domain scientists. - oai:arXiv.org:2512.17655v1 - cs.CV - q-bio.NC - Mon, 22 Dec 2025 00:00:00 -0500 + Assignment-Routing Optimization: Solvers for Problems Under Constraints + https://arxiv.org/abs/2512.18618 + arXiv:2512.18618v1 Announce Type: new +Abstract: We study the Joint Routing-Assignment (JRA) problem in which items must be assigned one-to-one to placeholders while simultaneously determining a Hamiltonian cycle visiting all nodes exactly once. 
Extending previous exact MIP solvers with Gurobi and cutting-plane subtour elimination, we develop a solver tailored for practical packaging-planning scenarios with richer constraints. These include multiple placeholder options, time-frame restrictions, and multi-class item packaging. Experiments on 46 mobile manipulation datasets demonstrate that the proposed MIP approach achieves global optima with stable and low computation times, significantly outperforming the shaking-based exact solver by up to an order of magnitude. Compared to greedy baselines, the MIP solutions achieve consistent optimal distances with an average deviation of 14% for simple heuristics, confirming both efficiency and solution quality. The results highlight the practical applicability of MIP-based JRA optimization for robotic packaging, motion planning, and complex logistics.
      oai:arXiv.org:2512.18618v1
      cs.AI
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Yuan Qilong, Michal Pavelka

      UAV-Enabled ISAC: Towards On-Demand Sensing Services and Enhanced Communication
      https://arxiv.org/abs/2512.17656
      arXiv:2512.17656v1 Announce Type: new
Abstract: In this paper, we investigate an integrated sensing-and-communication (ISAC) network enabled by an unmanned aerial vehicle (UAV). The UAV is supposed to fly along a periodic circular trajectory at a fixed height for ISAC service supply from the sky. We consider on-demand sensing services, where on-demand detection and on-demand localization requests may be activated at any time toward any position within the targeted serving region. 
While guaranteeing satisfactory accuracy for both on-demand sensing tasks, we aim at maximizing the minimum achievable throughput among all communication users, via jointly optimizing the UAV trajectory and communication user scheduling. To address the complicated problem with infinite sensing constraints, we characterize the on-demand detection constraint as a restricted deployment area for the UAV and the on-demand localization constraint as Cramer-Rao Bound (CRB) constraints over finite reference target points, based on which the original problem is simplified to a more tractable one. Afterwards, particularly aiming to ensure no violations of CRB constraints, we propose a convex approximation for the reformulated problem, where tight approximation is guaranteed at a given local solution. The construction strategy for convex problem approximation allows an efficient iterative algorithm with verified convergence to a superior suboptimal solution. Finally, with simulations, we verify the applicability of our developed optimization scheme in strictly fulfilling the on-demand sensing constraints and the effectiveness of our proposed solution for simultaneously enhancing the communication throughput in UAV-enabled ISAC.
      oai:arXiv.org:2512.17656v1
      eess.SY
      cs.SY
      Mon, 22 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Xiaopeng Yuan, Peng Wu, Xinran Wang, Yulin Hu, Anke Schmeink

      ChronoDreamer: Action-Conditioned World Model as an Online Simulator for Robotic Planning
      https://arxiv.org/abs/2512.18619
      arXiv:2512.18619v1 Announce Type: new
Abstract: We present ChronoDreamer, an action-conditioned world model for contact-rich robotic manipulation. Given a history of egocentric RGB frames, contact maps, actions, and joint states, ChronoDreamer predicts future video frames, contact distributions, and joint angles via a spatial-temporal transformer trained with MaskGIT-style masked prediction. Contact is encoded as depth-weighted Gaussian splat images that render 3D forces into a camera-aligned format suitable for vision backbones. 
At inference, predicted rollouts are evaluated by a vision-language model that reasons about collision likelihood, enabling rejection sampling of unsafe actions before execution. We train and evaluate on DreamerBench, a simulation dataset generated with Project Chrono that provides synchronized RGB, contact splat, proprioception, and physics annotations across rigid and deformable object scenarios. Qualitative results demonstrate that the model preserves spatial coherence during non-contact motion and generates plausible contact predictions, while the LLM-based judge distinguishes collision from non-collision trajectories. + oai:arXiv.org:2512.18619v1 + cs.AI + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiaopeng Yuan, Peng Wu, Xinran Wang, Yulin Hu, Anke Schmeink + Zhenhao Zhou, Dan Negrut - Peeking Into The Future For Contextual Biasing - https://arxiv.org/abs/2512.17657 - arXiv:2512.17657v1 Announce Type: new -Abstract: While end-to-end (E2E) automatic speech recognition (ASR) models excel at general transcription, they struggle to recognize rare or unseen named entities (e.g., contact names, locations), which are critical for downstream applications like virtual assistants. In this paper, we propose a contextual biasing method for attention based encoder decoder (AED) models using a list of candidate named entities. Instead of predicting only the next token, we simultaneously predict multiple future tokens, enabling the model to "peek into the future" and score potential candidate entities in the entity list. Moreover, our approach leverages the multi-token prediction logits directly without requiring additional entity encoders or cross-attention layers, significantly reducing architectural complexity. Experiments on Librispeech demonstrate that our approach achieves up to 50.34% relative improvement in named entity word error rate compared to the baseline AED model. 
- oai:arXiv.org:2512.17657v1 - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + Obnoxious Facility Location Problems: Strategyproof Mechanisms Optimizing $L_p$-Aggregated Utilities and Costs + https://arxiv.org/abs/2512.18620 + arXiv:2512.18620v1 Announce Type: new +Abstract: We study the problem of locating a single obnoxious facility on the normalized line segment $[0,1]$ with strategic agents from a mechanism design perspective. Each agent has a preference for the undesirable location of the facility and would prefer the facility to be far away from their location. We consider the utility of the agent, defined as the distance between the agent's location and the facility location, and the cost of each agent, equal to one minus the utility. Given this standard setting of obnoxious facility location problems, our goal is to design (group) strategyproof mechanisms to elicit agent locations truthfully and determine facility location approximately optimizing the $L_p$-aggregated utility and cost objectives, which generalizes the $L_p$-norm ($p\ge 1$) of the agents' utilities and agents' costs to any $p \in [-\infty, \infty]$, respectively. We establish upper and lower bounds on the approximation ratios of deterministic and randomized (group) strategyproof mechanisms for maximizing the $L_p$-aggregated utilities or minimizing the $L_p$-aggregated costs across the range of \(p\)-values. While there are gaps between upper and lower bounds for randomized mechanisms, our bounds for deterministic mechanisms are tight. 
+ oai:arXiv.org:2512.18620v1 + cs.GT + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Ramaneswaran Selvakumar, Cindy Tseng, Eesung Kim, Vijendra Raj Apsingekar, Yun Tang + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Hau Chan, Jianan Lin, Chenhao Wang - Vidarc: Embodied Video Diffusion Model for Closed-loop Control - https://arxiv.org/abs/2512.17661 - arXiv:2512.17661v1 Announce Type: new -Abstract: Robotic arm manipulation in data-scarce settings is a highly challenging task due to the complex embodiment dynamics and diverse contexts. Recent video-based approaches have shown great promise in capturing and transferring the temporal and physical interactions by pre-training on Internet-scale video data. However, such methods are often not optimized for the embodiment-specific closed-loop control, typically suffering from high latency and insufficient grounding. In this paper, we present Vidarc (Video Diffusion for Action Reasoning and Closed-loop Control), a novel autoregressive embodied video diffusion approach augmented by a masked inverse dynamics model. By grounding video predictions with action-relevant masks and incorporating real-time feedback through cached autoregressive generation, Vidarc achieves fast, accurate closed-loop control. Pre-trained on one million cross-embodiment episodes, Vidarc surpasses state-of-the-art baselines, achieving at least a 15% higher success rate in real-world deployment and a 91% reduction in latency. We also highlight its robust generalization and error correction capabilities across previously unseen robotic platforms. - oai:arXiv.org:2512.17661v1 - cs.RO - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + A Multi-agent Text2SQL Framework using Small Language Models and Execution Feedback + https://arxiv.org/abs/2512.18622 + arXiv:2512.18622v1 Announce Type: new +Abstract: Text2SQL, the task of generating SQL queries from natural language text, is a critical challenge in data engineering. 
Recently, Large Language Models (LLMs) have demonstrated superior performance for this task due to their advanced comprehension and generation capabilities. However, privacy and cost considerations prevent companies from using Text2SQL solutions based on external LLMs offered as a service. Rather, small LLMs (SLMs) that are openly available and can be hosted in-house are adopted. These SLMs, in turn, lack the generalization capabilities of larger LLMs, which impairs their effectiveness for complex tasks such as Text2SQL. To address these limitations, we propose MATS, a novel Text2SQL framework designed specifically for SLMs. MATS uses a multi-agent mechanism that assigns specialized roles to auxiliary agents, reducing individual workloads and fostering interaction. A training scheme based on reinforcement learning aligns these agents using feedback obtained during execution, thereby maintaining competitive performance despite a limited LLM size. Evaluation results on benchmark datasets show that MATS, deployed on a single-GPU server, yields accuracy on par with large-scale LLMs while using significantly fewer parameters. Our source code and data are available at https://github.com/thanhdath/mats-sql. 
+ oai:arXiv.org:2512.18622v1 + cs.DB + cs.AI + cs.CL + cs.HC + cs.MA + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yao Feng, Chendong Xiang, Xinyi Mao, Hengkai Tan, Zuyue Zhang, Shuhe Huang, Kaiwen Zheng, Haitian Liu, Hang Su, Jun Zhu + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Thanh Dat Hoang, Thanh Trung Huynh, Matthias Weidlich, Thanh Tam Nguyen, Tong Chen, Hongzhi Yin, Quoc Viet Hung Nguyen - Refining the Complexity Landscape of Speed Scaling: Hardness and Algorithms - https://arxiv.org/abs/2512.17663 - arXiv:2512.17663v1 Announce Type: new -Abstract: We study the computational complexity of scheduling jobs on a single speed-scalable processor with the objective of capturing the trade-off between the (weighted) flow time and the energy consumption. This trade-off has been extensively explored in the literature through a number of problem formulations that differ in the specific job characteristics and the precise objective function. Nevertheless, the computational complexity of four important problem variants has remained unresolved and was explicitly identified as an open question in prior work. In this paper, we settle the complexity of these variants. - More specifically, we prove that the problem of minimizing the objective of total (weighted) flow time plus energy is NP-hard for the cases of (i) unit-weight jobs with arbitrary sizes, and (ii)~arbitrary-weight jobs with unit sizes. These results extend to the objective of minimizing the total (weighted) flow time subject to an energy budget and hold even when the schedule is required to adhere to a given priority ordering. - In contrast, we show that when a completion-time ordering is provided, the same problem variants become polynomial-time solvable. The latter result highlights the subtle differences between priority and completion orderings for the problem. 
- oai:arXiv.org:2512.17663v1 - cs.DS - cs.CC - cs.DM - Mon, 22 Dec 2025 00:00:00 -0500 + LLM-CAS: Dynamic Neuron Perturbation for Real-Time Hallucination Correction + https://arxiv.org/abs/2512.18623 + arXiv:2512.18623v1 Announce Type: new +Abstract: Large language models (LLMs) often generate hallucinated content that lacks factual or contextual grounding, limiting their reliability in critical applications. Existing approaches such as supervised fine-tuning and reinforcement learning from human feedback are data intensive and computationally expensive, while static parameter editing methods struggle with context dependent errors and catastrophic forgetting. + We propose LLM-CAS, a framework that formulates real-time hallucination correction as a hierarchical reinforcement learning problem. LLM-CAS trains an agent to learn a policy that dynamically selects temporary neuron perturbations during inference based on the current context. Unlike prior dynamic approaches that rely on heuristic or predefined adjustments, this policy driven mechanism enables adaptive and fine grained correction without permanent parameter modification. + Experiments across multiple language models demonstrate that LLM-CAS consistently improves factual accuracy, achieving gains of 10.98 percentage points on StoryCloze, 2.71 points on TriviaQA, and 2.06 points on the MC1 score of TruthfulQA. These results outperform both static editing methods such as ITI and CAA and the dynamic SADI framework. Overall, LLM-CAS provides an efficient and context aware solution for improving the reliability of LLMs, with promising potential for future multimodal extensions. 
+ oai:arXiv.org:2512.18623v1 + cs.CL + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Antonios Antoniadis, Denise Graafsma, Ruben Hoeksma, Maria Vlasiou + Jensen Zhang, Ningyuan Liu, Yijia Fan, Zihao Huang, Qinglin Zeng, Kaitong Cai, Jian Wang, Keze Wang - Local h-, p-, and k-Refinement Strategies for the Isogeometric Shifted Boundary Method Using THB-Splines - https://arxiv.org/abs/2512.17666 - arXiv:2512.17666v1 Announce Type: new -Abstract: The concept of trimming, embedding, or immersing geometries into a computational background mesh has gained considerable attention in recent years, particularly in isogeometric analysis (IGA). In this approach, the physical domain is represented independently from the computational mesh, allowing the latter to be generated more easily compared with body-fitted meshes. While this facilitates the treatment of complex geometries, it also introduces challenges, such as ill-conditioning of the stiffness matrix caused by small cut elements and difficulties in accurately enforcing boundary conditions. A recently proposed technique to address these issues is the Shifted Boundary Method (SBM), which represents the computational domain solely through uncut elements and enforces boundary conditions via a Taylor expansion from a surrogate boundary to the true boundary. Previous studies have shown that, for Neumann boundary conditions, the flux evaluation requires additional derivatives in the Taylor expansion, effectively reducing the order of convergence by one. In this work, we investigate for the first time the performance of SBM combined with Truncated Hierarchical B-splines (THB-splines) under various local refinement strategies. In particular, we propose local p- and k-refinement schemes for THB-splines and compare them with local h-refinement and the unmodified SBM. 
Furthermore, we propose an enhanced shift operator that incorporates mixed partial derivatives, in contrast to the standard operator. The study assesses accuracy, stability, and computational efficiency for benchmark problems on trimmed domains. The results highlight how different refinement strategies affect convergence behavior in trimmed IGA formulations using SBM and demonstrate that targeted degree elevation can mitigate the Neumann boundary limitations of the standard method. - oai:arXiv.org:2512.17666v1 - math.NA - cs.NA - Mon, 22 Dec 2025 00:00:00 -0500 + Smart nudging for efficient routing through networks + https://arxiv.org/abs/2512.18630 + arXiv:2512.18630v1 Announce Type: new +Abstract: In this paper, we formulate the design of efficient digitalised deposit return schemes as a control problem. We focus on the recycling of paper cups, though the proposed methodology applies more broadly to reverse logistics systems arising in circular economy R-strategies. Each item is assumed to carry a digital wallet through which monetary rewards are allocated to actors transferring the item across successive stages, incentivising completion of the recycling process. System efficiency is ensured by: (i) decentralised algorithms that avoid congestion at individual nodes; (ii) a decentralised AIMD-based algorithm that optimally splits the deposit across layers; and (iii) a feedback control loop that dynamically adjusts the deposit to achieve a desired throughput. The effectiveness of the framework is demonstrated through extensive simulations using realistic paper cup recycling data. + oai:arXiv.org:2512.18630v1 + eess.SY + cs.SY + econ.GN + math.OC + q-fin.EC + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Christoph Hollweck, Andrea Gorgi, Nicolo Antonelli, Marcus Wagner, Roland W\"uchner + Pouria M. 
Oqaz, Emanuele Crisostomi, Elena Dieckmann, Robert Shorten - STAR: Semantic-Traffic Alignment and Retrieval for Zero-Shot HTTPS Website Fingerprinting - https://arxiv.org/abs/2512.17667 - arXiv:2512.17667v1 Announce Type: new -Abstract: Modern HTTPS mechanisms such as Encrypted Client Hello (ECH) and encrypted DNS improve privacy but remain vulnerable to website fingerprinting (WF) attacks, where adversaries infer visited sites from encrypted traffic patterns. Existing WF methods rely on supervised learning with site-specific labeled traces, which limits scalability and fails to handle previously unseen websites. We address these limitations by reformulating WF as a zero-shot cross-modal retrieval problem and introducing STAR. STAR learns a joint embedding space for encrypted traffic traces and crawl-time logic profiles using a dual-encoder architecture. Trained on 150K automatically collected traffic-logic pairs with contrastive and consistency objectives and structure-aware augmentation, STAR retrieves the most semantically aligned profile for a trace without requiring target-side traffic during training. Experiments on 1,600 unseen websites show that STAR achieves 87.9 percent top-1 accuracy and 0.963 AUC in open-world detection, outperforming supervised and few-shot baselines. Adding an adapter with only four labeled traces per site further boosts top-5 accuracy to 98.8 percent. Our analysis reveals intrinsic semantic-traffic alignment in modern web protocols, identifying semantic leakage as the dominant privacy risk in encrypted HTTPS traffic. We release STAR's datasets and code to support reproducibility and future research. - oai:arXiv.org:2512.17667v1 + Multi-user Pufferfish Privacy + https://arxiv.org/abs/2512.18632 + arXiv:2512.18632v1 Announce Type: new +Abstract: This paper studies how to achieve individual indistinguishability by pufferfish privacy in aggregated query to a multi-user system. 
It is assumed that each user reports the realization of a random variable. We study how to calibrate Laplace noise, added to the query answer, to attain pufferfish privacy when a user changes his/her reported data value, leaves the system and is replaced by another user with different randomness. Sufficient conditions are derived for all scenarios for attaining statistical indistinguishability on four sets of secret pairs. They are derived using the existing Kantorovich method (Wasserstein metric of order $1$). These results can be applied to attain indistinguishability when a certain class of users is added or removed from tabular data. It is revealed that attaining indifference in an individual's data is conditioned on the statistics of this user only. For binary (Bernoulli distributed) random variables, the derived sufficient conditions can be further relaxed to reduce the noise and improve data utility.
      oai:arXiv.org:2512.18632v1
      cs.CR
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Ni Ding, Songpei Lu, Wenjing Yang, Zijian Zhang

      ARC: Leveraging Compositional Representations for Cross-Problem Learning on VRPs
      https://arxiv.org/abs/2512.18633
      arXiv:2512.18633v1 Announce Type: new
Abstract: Vehicle Routing Problems (VRPs) with diverse real-world attributes have driven recent interest in cross-problem learning approaches that efficiently generalize across problem variants. We propose ARC (Attribute Representation via Compositional Learning), a cross-problem learning framework that learns disentangled attribute representations by decomposing them into two complementary components: an Intrinsic Attribute Embedding (IAE) for invariant attribute semantics and a Contextual Interaction Embedding (CIE) for attribute-combination effects. 
This disentanglement is achieved by enforcing analogical consistency in the embedding space to ensure the semantic transformation of adding an attribute (e.g., a length constraint) remains invariant across different problem contexts. This enables our model to reuse invariant semantics across trained variants and construct representations for unseen combinations. ARC achieves state-of-the-art performance across in-distribution, zero-shot generalization, few-shot adaptation, and real-world benchmarks. + oai:arXiv.org:2512.18633v1 + cs.LG cs.AI - cs.NI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yifei Cheng, Yujia Zhu, Baiyang Li, Xinhao Deng, Yitong Cai, Yaochen Ren, Qingyun Liu + Han-Seul Jeong, Youngjoon Park, Hyungseok Song, Woohyung Lim - Polyharmonic Cascade - https://arxiv.org/abs/2512.17671 - arXiv:2512.17671v1 Announce Type: new -Abstract: This paper presents a deep machine learning architecture, the "polyharmonic cascade" -- a sequence of packages of polyharmonic splines, where each layer is rigorously derived from the theory of random functions and the principles of indifference. This makes it possible to approximate nonlinear functions of arbitrary complexity while preserving global smoothness and a probabilistic interpretation. For the polyharmonic cascade, a training method alternative to gradient descent is proposed: instead of directly optimizing the coefficients, one solves a single global linear system on each batch with respect to the function values at fixed "constellations" of nodes. This yields synchronized updates of all layers, preserves the probabilistic interpretation of individual layers and theoretical consistency with the original model, and scales well: all computations reduce to 2D matrix operations efficiently executed on a GPU. Fast learning without overfitting on MNIST is demonstrated. 
- oai:arXiv.org:2512.17671v1 + From Shortcut to Induction Head: How Data Diversity Shapes Algorithm Selection in Transformers + https://arxiv.org/abs/2512.18634 + arXiv:2512.18634v1 Announce Type: new +Abstract: Transformers can implement both generalizable algorithms (e.g., induction heads) and simple positional shortcuts (e.g., memorizing fixed output positions). In this work, we study how the choice of pretraining data distribution steers a shallow transformer toward one behavior or the other. Focusing on a minimal trigger-output prediction task -- copying the token immediately following a special trigger upon its second occurrence -- we present a rigorous analysis of gradient-based training of a single-layer transformer. In both the infinite and finite sample regimes, we prove a transition in the learned mechanism: if input sequences exhibit sufficient diversity, measured by a low ``max-sum'' ratio of trigger-to-trigger distances, the trained model implements an induction head and generalizes to unseen contexts; by contrast, when this ratio is large, the model resorts to a positional shortcut and fails to generalize out-of-distribution (OOD). We also reveal a trade-off between the pretraining context length and OOD generalization, and derive the optimal pretraining distribution that minimizes computational cost per sample. Finally, we validate our theoretical predictions with controlled synthetic experiments, demonstrating that broadening context distributions robustly induces induction heads and enables OOD generalization. Our results shed light on the algorithmic biases of pretrained transformers and offer conceptual guidelines for data-driven control of their learned behaviors. + oai:arXiv.org:2512.18634v1 cs.LG - cs.NA - math.NA - Mon, 22 Dec 2025 00:00:00 -0500 + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Yuriy N. 
Bakhvalov + Ryotaro Kawata, Yujin Song, Alberto Bietti, Naoki Nishikawa, Taiji Suzuki, Samuel Vaiter, Denny Wu - Learning Spatio-Temporal Feature Representations for Video-Based Gaze Estimation - https://arxiv.org/abs/2512.17673 - arXiv:2512.17673v1 Announce Type: new -Abstract: Video-based gaze estimation methods aim to capture the inherently temporal dynamics of human eye gaze from multiple image frames. However, since models must capture both spatial and temporal relationships, performance is limited by the feature representations within a frame but also between multiple frames. We propose the Spatio-Temporal Gaze Network (ST-Gaze), a model that combines a CNN backbone with dedicated channel attention and self-attention modules to fuse eye and face features optimally. The fused features are then treated as a spatial sequence, allowing for the capture of an intra-frame context, which is then propagated through time to model inter-frame dynamics. We evaluated our method on the EVE dataset and show that ST-Gaze achieves state-of-the-art performance both with and without person-specific adaptation. Additionally, our ablation study provides further insights into the model performance, showing that preserving and modelling intra-frame spatial context with our spatio-temporal recurrence is fundamentally superior to premature spatial pooling. As such, our results pave the way towards more robust video-based gaze estimation using commonly available cameras. - oai:arXiv.org:2512.17673v1 + Uni-Neur2Img: Unified Neural Signal-Guided Image Generation, Editing, and Stylization via Diffusion Transformers + https://arxiv.org/abs/2512.18635 + arXiv:2512.18635v1 Announce Type: new +Abstract: Generating or editing images directly from Neural signals has immense potential at the intersection of neuroscience, vision, and Brain-computer interaction. In this paper, We present Uni-Neur2Img, a unified framework for neural signal-driven image generation and editing. 
The framework introduces a parameter-efficient LoRA-based neural signal injection module that independently processes each conditioning signal as a pluggable component, facilitating flexible multi-modal conditioning without altering base model parameters. Additionally, we employ a causal attention mechanism to accommodate the long-sequence modeling demands of conditional generation tasks. Existing neural-driven generation research predominantly focuses on textual modalities as conditions or intermediate representations, resulting in limited exploration of visual modalities as direct conditioning signals. To bridge this research gap, we introduce the EEG-Style dataset. We conduct comprehensive evaluations across public benchmarks and self-collected neural signal datasets: (1) EEG-driven image generation on the public CVPR40 dataset; (2) neural signal-guided image editing on the public Loongx dataset for semantic-aware local modifications; and (3) EEG-driven style transfer on our self-collected EEG-Style dataset. Extensive experimental results demonstrate significant improvements in generation fidelity, editing consistency, and style transfer quality while maintaining low computational overhead and strong scalability to additional modalities. Thus, Uni-Neur2Img offers a unified, efficient, and extensible solution for bridging neural signals and visual content generation. 
+ oai:arXiv.org:2512.18635v1 cs.CV - cs.AI - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Alexandre Personnic, Mihai B\^ace + Xiyue Bai, Ronghao Yu, Jia Xiu, Pengfei Zhou, Jie Xia, Peng Ji - An Empirical Study of Sampling Hyperparameters in Diffusion-Based Super-Resolution - https://arxiv.org/abs/2512.17675 - arXiv:2512.17675v1 Announce Type: new -Abstract: Diffusion models have shown strong potential for solving inverse problems such as single-image super-resolution, where a high-resolution image is recovered from a low-resolution observation using a pretrained unconditional prior. Conditioning methods, including Diffusion Posterior Sampling (DPS) and Manifold Constrained Gradient (MCG), can substantially improve reconstruction quality, but they introduce additional hyperparameters that require careful tuning. In this work, we conduct an empirical ablation study on FFHQ super-resolution to identify the dominant factors affecting performance when applying conditioning to pretrained diffusion models, and show that the conditioning step size has a significantly greater impact than the diffusion step count, with step sizes in the range of [2.0, 3.0] yielding the best overall performance in our experiments. - oai:arXiv.org:2512.17675v1 + Geometric-Photometric Event-based 3D Gaussian Ray Tracing + https://arxiv.org/abs/2512.18640 + arXiv:2512.18640v1 Announce Type: new +Abstract: Event cameras offer a high temporal resolution over traditional frame-based cameras, which makes them suitable for motion and structure estimation. However, it has been unclear how event-based 3D Gaussian Splatting (3DGS) approaches could leverage fine-grained temporal information of sparse events. This work proposes a framework to address the trade-off between accuracy and temporal resolution in event-based 3DGS. 
Our key idea is to decouple the rendering into two branches: event-by-event geometry (depth) rendering and snapshot-based radiance (intensity) rendering, by using ray-tracing and the image of warped events. The extensive evaluation shows that our method achieves state-of-the-art performance on the real-world datasets and competitive performance on the synthetic dataset. Also, the proposed method works without prior information (e.g., pretrained image reconstruction models) or COLMAP-based initialization, is more flexible in the event selection number, and achieves sharp reconstruction on scene edges with fast training time. We hope that this work deepens our understanding of the sparse nature of events for 3D reconstruction. The code will be released. + oai:arXiv.org:2512.18640v1 cs.CV cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Yudhistira Arief Wibowo + Kai Kohyama, Yoshimitsu Aoki, Guillermo Gallego, Shintaro Shiba - Toward Ethical AI Through Bayesian Uncertainty in Neural Question Answering - https://arxiv.org/abs/2512.17677 - arXiv:2512.17677v1 Announce Type: new -Abstract: We explore Bayesian reasoning as a means to quantify uncertainty in neural networks for question answering. Starting with a multilayer perceptron on the Iris dataset, we show how posterior inference conveys confidence in predictions. We then extend this to language models, applying Bayesian inference first to a frozen head and finally to LoRA-adapted transformers, evaluated on the CommonsenseQA benchmark. Rather than aiming for state-of-the-art accuracy, we compare Laplace approximations against maximum a posteriori (MAP) estimates to highlight uncertainty calibration and selective prediction. This allows models to abstain when confidence is low. 
An ``I don't know'' response not only improves interpretability but also illustrates how Bayesian methods can contribute to more responsible and ethical deployment of neural question-answering systems. - oai:arXiv.org:2512.17677v1 - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + Volley Revolver: A Novel Matrix-Encoding Method for Privacy-Preserving Deep Learning (Inference++) + https://arxiv.org/abs/2512.18646 + arXiv:2512.18646v1 Announce Type: new +Abstract: Privacy-preserving inference of convolutional neural networks (CNNs) using homomorphic encryption has emerged as a promising approach for enabling secure machine learning in untrusted environments. In our previous work, we introduced a matrix-encoding strategy that allows convolution and matrix multiplication to be efficiently evaluated over encrypted data, enabling practical CNN inference without revealing either the input data or the model parameters. The core idea behind this strategy is to construct a three-dimensional representation within ciphertexts that preserves the intrinsic spatial structure of both input image data and model weights, rather than flattening them into conventional two-dimensional encodings. However, this approach can operate efficiently $only$ when the number of available plaintext slots within a ciphertext is sufficient to accommodate an entire input image, which becomes a critical bottleneck when processing high-resolution images. In this paper, we address this fundamental limitation by proposing an improved encoding and computation framework that removes the requirement that a single encrypted ciphertext must fully contain one input image. Our method reformulates the data layout and homomorphic operations to partition high-resolution inputs across multiple ciphertexts while preserving the algebraic structure required for efficient convolution and matrix multiplication. 
As a result, our approach enables privacy-preserving CNN inference to scale naturally beyond the slot-capacity constraints of prior methods, making homomorphic evaluation of CNNs practical for higher-resolution and more complex datasets. + oai:arXiv.org:2512.18646v1 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - 10.1007/s43681-025-00838-x - AI Ethics 6, 38 (2026) - Riccardo Di Sipio + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + John Chiang - You Only Train Once: Differentiable Subset Selection for Omics Data - https://arxiv.org/abs/2512.17678 - arXiv:2512.17678v1 Announce Type: new -Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, making selection and prediction weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another, and discovering gene subsets that generalize across tasks without additional training steps. 
We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis. - oai:arXiv.org:2512.17678v1 - cs.LG - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + BeamformNet: Deep Learning-Based Beamforming Method for DoA Estimation via Implicit Spatial Signal Focusing and Noise Suppression + https://arxiv.org/abs/2512.18647 + arXiv:2512.18647v1 Announce Type: new +Abstract: Deep learning-based direction-of-arrival (DoA) estimation has gained increasing popularity. A popular family of DoA estimation algorithms is beamforming methods, which operate by constructing a spatial filter that is applied to array signals. However, these spatial filters obtained by traditional model-driven beamforming algorithms fail under demanding conditions such as coherent sources and a small number of snapshots. In order to obtain a robust spatial filter, this paper proposes BeamformNet-a novel deep learning framework grounded in beamforming principles. Based on the concept of optimal spatial filters, BeamformNet leverages neural networks to approximately obtain the optimal spatial filter via implicit spatial signal focusing and noise suppression, which is then applied to received signals for spatial focusing and noise suppression, thereby enabling accurate DoA estimation. Experimental results on both simulated and real-world speech acoustic source localization data demonstrate that BeamformNet achieves state-of-the-art DoA estimation performance and has better robustness. + oai:arXiv.org:2512.18647v1 + cs.CE + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Daphn\'e Chopard, Jorge da Silva Gon\c{c}alves, Irene Cannistraci, Thomas M. Sutter, Julia E. 
Vogt + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xuyao Deng, Yong Dou, Kele Xu - A Dual Quaternion based RRT* Path Planning Approach for Satellite Rendezvous and Docking - https://arxiv.org/abs/2512.17680 - arXiv:2512.17680v1 Announce Type: new -Abstract: This paper proposes a sampling-based motion planner that employs a dual quaternion representation to generate smooth, collision-free six-degree-of-freedom pose trajectories for satellite rendezvous and docking under keep-out zone constraints. The proposed planner integrates the dual quaternion algebra directly into an RRT* framework, thereby enabling natural screw motion interpolation in SE(3). The dual quaternion-based RRT* has been implemented in Python and demonstrated on a representative multi-obstacle scenario. A comparison with a standard RRT* using separate translation and quaternion steering highlights the enhanced pose continuity and obstacle avoidance of the proposed method. The present approach is purely kinematic in nature and does not take into account relative orbital dynamics. Consequently, the resulting path provides a preliminary estimate for a subsequent optimisation-based trajectory planner, which will refine the motion with dynamic constraints for the purpose of practical satellite rendezvous and docking missions. - oai:arXiv.org:2512.17680v1 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 + Adversarial Robustness in Zero-Shot Learning:An Empirical Study on Class and Concept-Level Vulnerabilities + https://arxiv.org/abs/2512.18651 + arXiv:2512.18651v1 Announce Type: new +Abstract: Zero-shot Learning (ZSL) aims to enable image classifiers to recognize images from unseen classes that were not included during training. Unlike traditional supervised classification, ZSL typically relies on learning a mapping from visual features to predefined, human-understandable class concepts. 
While ZSL models promise to improve generalization and interpretability, their robustness under systematic input perturbations remain unclear. In this study, we present an empirical analysis about the robustness of existing ZSL methods at both classlevel and concept-level. Specifically, we successfully disrupted their class prediction by the well-known non-target class attack (clsA). However, in the Generalized Zero-shot Learning (GZSL) setting, we observe that the success of clsA is only at the original best-calibrated point. After the attack, the optimal bestcalibration point shifts, and ZSL models maintain relatively strong performance at other calibration points, indicating that clsA results in a spurious attack success in the GZSL. To address this, we propose the Class-Bias Enhanced Attack (CBEA), which completely eliminates GZSL accuracy across all calibrated points by enhancing the gap between seen and unseen class probabilities.Next, at concept-level attack, we introduce two novel attack modes: Class-Preserving Concept Attack (CPconA) and NonClass-Preserving Concept Attack (NCPconA). Our extensive experiments evaluate three typical ZSL models across various architectures from the past three years and reveal that ZSL models are vulnerable not only to the traditional class attack but also to concept-based attacks. These attacks allow malicious actors to easily manipulate class predictions by erasing or introducing concepts. Our findings highlight a significant performance gap between existing approaches, emphasizing the need for improved adversarial robustness in current ZSL models. + oai:arXiv.org:2512.18651v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Ana Stankovic, Mohamed Khalil Ben-Larbi, Wolfgang H. 
M\"uller + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhiyuan Peng, Zihan Ye, Shreyank N Gowda, Yuping Yan, Haotian Xu, Ling Shao - Convergence Guarantees for Federated SARSA with Local Training and Heterogeneous Agents - https://arxiv.org/abs/2512.17688 - arXiv:2512.17688v1 Announce Type: new -Abstract: We present a novel theoretical analysis of Federated SARSA (FedSARSA) with linear function approximation and local training. We establish convergence guarantees for FedSARSA in the presence of heterogeneity, both in local transitions and rewards, providing the first sample and communication complexity bounds in this setting. At the core of our analysis is a new, exact multi-step error expansion for single-agent SARSA, which is of independent interest. Our analysis precisely quantifies the impact of heterogeneity, demonstrating the convergence of FedSARSA with multiple local updates. Crucially, we show that FedSARSA achieves linear speed-up with respect to the number of agents, up to higher-order terms due to Markovian sampling. Numerical experiments support our theoretical findings. - oai:arXiv.org:2512.17688v1 - cs.LG - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 + SplatBright: Generalizable Low-Light Scene Reconstruction from Sparse Views via Physically-Guided Gaussian Enhancement + https://arxiv.org/abs/2512.18655 + arXiv:2512.18655v1 Announce Type: new +Abstract: Low-light 3D reconstruction from sparse views remains challenging due to exposure imbalance and degraded color fidelity. While existing methods struggle with view inconsistency and require per-scene training, we propose SplatBright, which is, to our knowledge, the first generalizable 3D Gaussian framework for joint low-light enhancement and reconstruction from sparse sRGB inputs. Our key idea is to integrate physically guided illumination modeling with geometry-appearance decoupling for consistent low-light reconstruction. 
Specifically, we adopt a dual-branch predictor that provides stable geometric initialization of 3D Gaussian parameters. On the appearance side, illumination consistency leverages frequency priors to enable controllable and cross-view coherent lighting, while an appearance refinement module further separates illumination, material, and view-dependent cues to recover fine texture. To tackle the lack of large-scale geometrically consistent paired data, we synthesize dark views via a physics-based camera model for training. Extensive experiments on public and self-collected datasets demonstrate that SplatBright achieves superior novel view synthesis, cross-view consistency, and better generalization to unseen low-light scenes compared with both 2D and 3D methods. + oai:arXiv.org:2512.18655v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Paul Mangold, Elo\"ise Berthier, Eric Moulines + Yue Wen, Liang Song, Hesheng Wang - Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting - https://arxiv.org/abs/2512.17696 - arXiv:2512.17696v1 Announce Type: new -Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. 
In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of ``Deep Variography'', where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning. - oai:arXiv.org:2512.17696v1 - cs.LG - stat.ME - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 + Does It Tie Out? Towards Autonomous Legal Agents in Venture Capital + https://arxiv.org/abs/2512.18658 + arXiv:2512.18658v1 Announce Type: new +Abstract: Before closing venture capital financing rounds, lawyers conduct diligence that includes tying out the capitalization table: verifying that every security (for example, shares, options, warrants) and issuance term (for example, vesting schedules, acceleration triggers, transfer restrictions) is supported by large sets of underlying legal documentation. While LLMs continue to improve on legal benchmarks, specialized legal workflows, such as capitalization tie-out, remain out of reach even for strong agentic systems. 
The task requires multi-document reasoning, strict evidence traceability, and deterministic outputs that current approaches fail to reliably deliver. We characterize capitalization tie-out as an instance of a real-world benchmark for legal AI, analyze and compare the performance of existing agentic systems, and propose a world model architecture toward tie-out automation, and more broadly as a foundation for applied legal intelligence.
+ oai:arXiv.org:2512.18658v1
cs.CL
Tue, 23 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
+ Pierre Colombo, Malik Boudiaf, Allyn Sweet, Michael Desa, Hongxi Wang, Kevin Candra, Sym\'eon del Marmol

- A Practical Solution to Systematically Monitor Inconsistencies in SBOM-based Vulnerability Scanners
- https://arxiv.org/abs/2512.17710
- arXiv:2512.17710v1 Announce Type: new
Abstract: Software Bill of Materials (SBOM) provides new opportunities for automated vulnerability identification in software products. While the industry is adopting SBOM-based Vulnerability Scanning (SVS) to identify vulnerabilities, we increasingly observe inconsistencies and unexpected behavior that result in false negatives and silent failures. In this work, we present the background necessary to understand the underlying complexity of SVS and introduce SVS-TEST, a method and tool to analyze the capability, maturity, and failure conditions of SVS-tools in real-world scenarios. We showcase the utility of SVS-TEST in a case study evaluating seven real-world SVS-tools using 16 precisely crafted SBOMs and their respective ground truth. Our results unveil significant differences in the reliability and error handling of SVS-tools; multiple SVS-tools silently fail on valid input SBOMs, creating a false sense of security. 
We conclude our work by highlighting implications for researchers and practitioners, including how organizations and developers of SVS-tools can utilize SVS-TEST to monitor SVS capability and maturity. All results and research artifacts are made publicly available and all findings were disclosed to the SVS-tool developers ahead of time. - oai:arXiv.org:2512.17710v1 - cs.SE - cs.CR - Mon, 22 Dec 2025 00:00:00 -0500 + Measuring the Impact of Student Gaming Behaviors on Learner Modeling + https://arxiv.org/abs/2512.18659 + arXiv:2512.18659v1 Announce Type: new +Abstract: The expansion of large-scale online education platforms has made vast amounts of student interaction data available for knowledge tracing (KT). KT models estimate students' concept mastery from interaction data, but their performance is sensitive to input data quality. Gaming behaviors, such as excessive hint use, may misrepresent students' knowledge and undermine model reliability. However, systematic investigations of how different types of gaming behaviors affect KT remain scarce, and existing studies rely on costly manual analysis that does not capture behavioral diversity. In this study, we conceptualize gaming behaviors as a form of data poisoning, defined as the deliberate submission of incorrect or misleading interaction data to corrupt a model's learning process. We design Data Poisoning Attacks (DPAs) to simulate diverse gaming patterns and systematically evaluate their impact on KT model performance. Moreover, drawing on advances in DPA detection, we explore unsupervised approaches to enhance the generalizability of gaming behavior detection. We find that KT models' performance tends to decrease especially in response to random guess behaviors. Our findings provide insights into the vulnerabilities of KT models and highlight the potential of adversarial methods for improving the robustness of learning analytics systems. 
+ oai:arXiv.org:2512.18659v1 + cs.CY + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - 10.1145/3748522.3779717 - Martin Rosso, Muhammad Asad Jahangir Jaffar, Alessandro Brighente, Mauro Conti + 10.1145/3785022.3785036 + Qinyi Liu, Lin Li, Valdemar \v{S}v\'abensk\'y, Conrad Borchers, Mohammad Khalil - Study of a TPFA scheme for the stochastic Allen-Cahn equation with constraint though numerical experiments - https://arxiv.org/abs/2512.17712 - arXiv:2512.17712v1 Announce Type: new -Abstract: This contribution provides numerical experiments for a finite volume scheme for an approximation of the stochastic Allen-Cahn equation with homogeneous Neumann boundary conditions. The approximation is done by a Yosida approximation of the subdifferential operator. The problem is set on a polygonal bounded domain in two or three dimensions. The non-linear character of the projection term induces challenges to implement the scheme. To this end, we provide a splitting method for the finite volume scheme. We show that the splitting method is accurate. The computational error estimates induce that the squared $L^2$-error w.r.t. time is of order $1$ as long as the noise term is small enough. For larger noise terms the order of convergence w.r.t. time might become worse. - oai:arXiv.org:2512.17712v1 - math.NA - cs.NA - math.AP - math.PR - Mon, 22 Dec 2025 00:00:00 -0500 + PMPGuard: Catching Pseudo-Matched Pairs in Remote Sensing Image-Text Retrieval + https://arxiv.org/abs/2512.18660 + arXiv:2512.18660v1 Announce Type: new +Abstract: Remote sensing (RS) image-text retrieval faces significant challenges in real-world datasets due to the presence of Pseudo-Matched Pairs (PMPs), semantically mismatched or weakly aligned image-text pairs, which hinder the learning of reliable cross-modal alignments. 
To address this issue, we propose a novel retrieval framework that leverages Cross-Modal Gated Attention and a Positive-Negative Awareness Attention mechanism to mitigate the impact of such noisy associations. The gated module dynamically regulates cross-modal information flow, while the awareness mechanism explicitly distinguishes informative (positive) cues from misleading (negative) ones during alignment learning. Extensive experiments on three benchmark RS datasets, i.e., RSICD, RSITMD, and RS5M, demonstrate that our method consistently achieves state-of-the-art performance, highlighting its robustness and effectiveness in handling real-world mismatches and PMPs in RS image-text retrieval tasks. + oai:arXiv.org:2512.18660v1 + cs.CV + cs.LG + cs.MM + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Niklas Sapountzoglou, Aleksandra Zimmermann + Pengxiang Ouyang, Qing Ma, Zheng Wang, Cong Bai - Preconditioning for the high-order sampling of the invariant distribution of parabolic semilinear SPDEs - https://arxiv.org/abs/2512.17714 - arXiv:2512.17714v1 Announce Type: new -Abstract: For a class of ergodic parabolic semilinear stochastic partial differential equations (SPDEs) with gradient structure, we introduce a preconditioning technique and design high-order integrators for the approximation of the invariant distribution. The preconditioning yields improved temporal regularity of the dynamics while preserving the invariant distribution and allows the application of postprocessed integrators. For the semilinear heat equation driven by space-time white noise in dimension $1$, we obtain new temporal integrators with orders $1$ and $2$ for sampling the invariant distribution with a minor overcost compared to the standard semilinear implicit Euler method of order $1/2$. Numerical experiments confirm the theoretical findings and illustrate the efficiency of the approach. 
- oai:arXiv.org:2512.17714v1 - math.NA - cs.NA - math.PR - Mon, 22 Dec 2025 00:00:00 -0500 + ASTIF: Adaptive Semantic-Temporal Integration for Cryptocurrency Price Forecasting + https://arxiv.org/abs/2512.18661 + arXiv:2512.18661v1 Announce Type: new +Abstract: Financial time series forecasting is fundamentally an information fusion challenge, yet most existing models rely on static architectures that struggle to integrate heterogeneous knowledge sources or adjust to rapid regime shifts. Conventional approaches, relying exclusively on historical price sequences, often neglect the semantic drivers of volatility such as policy uncertainty and market narratives. To address these limitations, we propose the ASTIF (Adaptive Semantic-Temporal Integration for Cryptocurrency Price Forecasting), a hybrid intelligent system that adapts its forecasting strategy in real time through confidence-based meta-learning. The framework integrates three complementary components. A dual-channel Small Language Model using MirrorPrompt extracts semantic market cues alongside numerical trends. A hybrid LSTM Random Forest model captures sequential temporal dependencies. A confidence-aware meta-learner functions as an adaptive inference layer, modulating each predictor's contribution based on its real-time uncertainty. + Experimental evaluation on a diverse dataset of AI-focused cryptocurrencies and major technology stocks from 2020 to 2024 shows that ASTIF outperforms leading deep learning and Transformer baselines (e.g., Informer, TFT). The ablation studies further confirm the critical role of the adaptive meta-learning mechanism, which successfully mitigates risk by shifting reliance between semantic and temporal channels during market turbulence. The research contributes a scalable, knowledge-based solution for fusing quantitative and qualitative data in non-stationary environments. 
+ oai:arXiv.org:2512.18661v1
cs.AI
Tue, 23 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
+ Hafiz Saif Ur Rehman, Ling Liu, Kaleem Ullah Qasim

- FlexAvatar: Flexible Large Reconstruction Model for Animatable Gaussian Head Avatars with Detailed Deformation
- https://arxiv.org/abs/2512.17717
- arXiv:2512.17717v1 Announce Type: new
Abstract: We present FlexAvatar, a flexible large reconstruction model for high-fidelity 3D head avatars with detailed dynamic deformation from single or sparse images, without requiring camera poses or expression labels. It leverages a transformer-based reconstruction model with structured head query tokens as canonical anchor to aggregate flexible input-number-agnostic, camera-pose-free and expression-free inputs into a robust canonical 3D representation. For detailed dynamic deformation, we introduce a lightweight UNet decoder conditioned on UV-space position maps, which can produce detailed expression-dependent deformations in real time. To better capture rare but critical expressions like wrinkles and bared teeth, we also adopt a data distribution adjustment strategy during training to balance the distribution of these expressions in the training set. Moreover, a lightweight 10-second refinement can further enhance identity-specific details in extreme identities without affecting deformation quality. Extensive experiments demonstrate that our FlexAvatar achieves superior 3D consistency and detailed dynamic realism compared with previous methods, providing a practical solution for animatable 3D avatar creation. 
- oai:arXiv.org:2512.17717v1 + Offline Reinforcement Learning for End-to-End Autonomous Driving + https://arxiv.org/abs/2512.18662 + arXiv:2512.18662v1 Announce Type: new +Abstract: End-to-end (E2E) autonomous driving models that take only camera images as input and directly predict a future trajectory are appealing for their computational efficiency and potential for improved generalization via unified optimization; however, persistent failure modes remain due to reliance on imitation learning (IL). While online reinforcement learning (RL) could mitigate IL-induced issues, the computational burden of neural rendering-based simulation and large E2E networks renders iterative reward and hyperparameter tuning costly. We introduce a camera-only E2E offline RL framework that performs no additional exploration and trains solely on a fixed simulator dataset. Offline RL offers strong data efficiency and rapid experimental iteration, yet is susceptible to instability from overestimation on out-of-distribution (OOD) actions. To address this, we construct pseudo ground-truth trajectories from expert driving logs and use them as a behavior regularization signal, suppressing imitation of unsafe or suboptimal behavior while stabilizing value learning. Training and closed-loop evaluation are conducted in a neural rendering environment learned from the public nuScenes dataset. Empirically, the proposed method achieves substantial improvements in collision rate and route completion compared with IL baselines. Our code will be available at [URL]. 
+ oai:arXiv.org:2512.18662v1 + cs.RO cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Cheng Peng, Zhuo Su, Liao Wang, Chen Guo, Zhaohu Li, Chengjiang Long, Zheng Lv, Jingxiang Sun, Chenyangguang Zhang, Yebin Liu + Chihiro Noguchi, Takaki Yamamoto - Mitigating Forgetting in Low Rank Adaptation - https://arxiv.org/abs/2512.17720 - arXiv:2512.17720v1 Announce Type: new -Abstract: Parameter-efficient fine-tuning methods, such as Low-Rank Adaptation (LoRA), enable fast specialization of large pre-trained models to different downstream applications. However, this process often leads to catastrophic forgetting of the model's prior domain knowledge. We address this issue with LaLoRA, a weight-space regularization technique that applies a Laplace approximation to Low-Rank Adaptation. Our approach estimates the model's confidence in each parameter and constrains updates in high-curvature directions, preserving prior knowledge while enabling efficient target-domain learning. By applying the Laplace approximation only to the LoRA weights, the method remains lightweight. We evaluate LaLoRA by fine-tuning a Llama model for mathematical reasoning and demonstrate an improved learning-forgetting trade-off, which can be directly controlled via the method's regularization strength. We further explore different loss landscape curvature approximations for estimating parameter confidence, analyze the effect of the data used for the Laplace approximation, and study robustness across hyperparameters. 
- oai:arXiv.org:2512.17720v1 - cs.LG - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 + Automatic Adaptation to Concept Complexity and Subjective Natural Concepts: A Cognitive Model based on Chunking + https://arxiv.org/abs/2512.18665 + arXiv:2512.18665v1 Announce Type: new +Abstract: A key issue in cognitive science concerns the fundamental psychological processes that underlie the formation and retrieval of multiple types of concepts in short-term and long-term memory (STM and LTM, respectively). We propose that chunking mechanisms play an essential role and show how the CogAct computational model grounds concept learning in fundamental cognitive processes and structures (such as chunking, attention, STM and LTM). First are the in-principle demonstrations, with CogAct automatically adapting to learn a range of categories from simple logical functions, to artificial categories, to natural raw (as opposed to natural pre-processed) concepts in the dissimilar domains of literature, chess and music. This kind of adaptive learning is difficult for most other psychological models, e.g., with cognitive models stopping at modelling artificial categories and (non-GPT) models based on deep learning requiring task-specific changes to the architecture. Secondly, we offer novel ways of designing human benchmarks for concept learning experiments and simulations accounting for subjectivity, ways to control for individual human experiences, all while keeping to real-life complex categories. We ground CogAct in simulations of subjective conceptual spaces of individual human participants, capturing humans' subjective judgements in music, with the models learning from raw music score data without bootstrapping to pre-built knowledge structures. The CogAct simulations are compared to those obtained by a deep-learning model. These findings integrate concept learning and adaptation to complexity into the broader theories of cognitive psychology. 
Our approach may also be used in psychological applications that move away from modelling the average participant and towards capturing subjective concept space. + oai:arXiv.org:2512.18665v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Joanna Sliwa, Frank Schneider, Philipp Hennig, Jose Miguel Hernandez-Lobato + Dmitry Bennett, Fernand Gobet - Digital and Web Forensics Model Cards, V1 - https://arxiv.org/abs/2512.17722 - arXiv:2512.17722v1 Announce Type: new -Abstract: This paper introduces a standardized model card framework specifically designed for digital and web forensics. Building upon established model card methodologies and recent work on abstract models for digital forensic analysis, this paper presents a web-based framework that generates model cards specifically designed to represent knowledge in the forensic domain. The framework includes controlled vocabularies for classification, reasoning types, bias identification, and error categorization, along with a web-based generator tool to facilitate adoption. The paper describes the model card structure, presents the controlled vocabularies, and introduces the beta version of the generator tool, inviting community feedback to refine this emerging standard. Ultimately, the systemic risk is that the anti-fraud and digital and web forensics processes are controlled by the mobs. - oai:arXiv.org:2512.17722v1 - cs.CR + IntelliCode: A Multi-Agent LLM Tutoring System with Centralized Learner Modeling + https://arxiv.org/abs/2512.18669 + arXiv:2512.18669v1 Announce Type: new +Abstract: LLM-based tutors are typically single-turn assistants that lack persistent representations of learner knowledge, making it difficult to provide principled, transparent, and long-term pedagogical support. 
We introduce IntelliCode, a multi-agent LLM tutoring system built around a centralized, versioned learner state that integrates mastery estimates, misconceptions, review schedules, and engagement signals. A StateGraph Orchestrator coordinates six specialized agents: skill assessment, learner profiling, graduated hinting, curriculum selection, spaced repetition, and engagement monitoring, each operating as a pure transformation over the shared state under a single-writer policy. This architecture enables auditable mastery updates, proficiency-aware hints, dependency-aware curriculum adaptation, and safety-aligned prompting. + The demo showcases an end-to-end tutoring workflow: a learner attempts a DSA problem, receives a conceptual hint when stuck, submits a corrected solution, and immediately sees mastery updates and a personalized review interval. We report validation results with simulated learners, showing stable state updates, improved task success with graduated hints, and diverse curriculum coverage. IntelliCode demonstrates how persistent learner modeling, orchestrated multi-agent reasoning, and principled instructional design can be combined to produce transparent and reliable LLM-driven tutoring. + oai:arXiv.org:2512.18669v1 cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Paola Di Maio - - - SAVeD: A First-Person Social Media Video Dataset for ADAS-equipped vehicle Near-Miss and Crash Event Analyses - https://arxiv.org/abs/2512.17724 - arXiv:2512.17724v1 Announce Type: new -Abstract: The advancement of safety-critical research in driving behavior in ADAS-equipped vehicles require real-world datasets that not only include diverse traffic scenarios but also capture high-risk edge cases such as near-miss events and system failures. 
However, existing datasets are largely limited to either simulated environments or human-driven vehicle data, lacking authentic ADAS (Advanced Driver Assistance System) vehicle behavior under risk conditions. To address this gap, this paper introduces SAVeD, a large-scale video dataset curated from publicly available social media content, explicitly focused on ADAS vehicle-related crashes, near-miss incidents, and disengagements. SAVeD features 2,119 first-person videos, capturing ADAS vehicle operations in diverse locations, lighting conditions, and weather scenarios. The dataset includes video frame-level annotations for collisions, evasive maneuvers, and disengagements, enabling analysis of both perception and decision-making failures. We demonstrate SAVeD's utility through multiple analyses and contributions: (1) We propose a novel framework integrating semantic segmentation and monocular depth estimation to compute real-time Time-to-Collision (TTC) for dynamic objects. (2) We utilize the Generalized Extreme Value (GEV) distribution to model and quantify the extreme risk in crash and near-miss events across different roadway types. (3) We establish benchmarks for state-of-the-art VLLMs (VideoLLaMA2 and InternVL2.5 HiCo R16), showing that SAVeD's detailed annotations significantly enhance model performance through domain adaptation in complex near-miss scenarios. 
- oai:arXiv.org:2512.17724v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Shaoyan Zhai, Mohamed Abdel-Aty, Chenzhu Wang, Rodrigo Vena Garcia + Jones David, Shreya Ghosh - MambaMIL+: Modeling Long-Term Contextual Patterns for Gigapixel Whole Slide Image - https://arxiv.org/abs/2512.17726 - arXiv:2512.17726v1 Announce Type: new -Abstract: Whole-slide images (WSIs) are an important data modality in computational pathology, yet their gigapixel resolution and lack of fine-grained annotations challenge conventional deep learning models. Multiple instance learning (MIL) offers a solution by treating each WSI as a bag of patch-level instances, but effectively modeling ultra-long sequences with rich spatial context remains difficult. Recently, Mamba has emerged as a promising alternative for long sequence learning, scaling linearly to thousands of tokens. However, despite its efficiency, it still suffers from limited spatial context modeling and memory decay, constraining its effectiveness to WSI analysis. To address these limitations, we propose MambaMIL+, a new MIL framework that explicitly integrates spatial context while maintaining long-range dependency modeling without memory forgetting. Specifically, MambaMIL+ introduces 1) overlapping scanning, which restructures the patch sequence to embed spatial continuity and instance correlations; 2) a selective stripe position encoder (S2PE) that encodes positional information while mitigating the biases of fixed scanning orders; and 3) a contextual token selection (CTS) mechanism, which leverages supervisory knowledge to dynamically enlarge the contextual memory for stable long-range modeling. 
Extensive experiments on 20 benchmarks across diagnostic classification, molecular prediction, and survival analysis demonstrate that MambaMIL+ consistently achieves state-of-the-art performance under three feature extractors (ResNet-50, PLIP, and CONCH), highlighting its effectiveness and robustness for large-scale computational pathology. - oai:arXiv.org:2512.17726v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Demonstration-Guided Continual Reinforcement Learning in Dynamic Environments + https://arxiv.org/abs/2512.18670 + arXiv:2512.18670v1 Announce Type: new +Abstract: Reinforcement learning (RL) excels in various applications but struggles in dynamic environments where the underlying Markov decision process evolves. Continual reinforcement learning (CRL) enables RL agents to continually learn and adapt to new tasks, but balancing stability (preserving prior knowledge) and plasticity (acquiring new knowledge) remains challenging. Existing methods primarily address the stability-plasticity dilemma through mechanisms where past knowledge influences optimization but rarely affects the agent's behavior directly, which may hinder effective knowledge reuse and efficient learning. In contrast, we propose demonstration-guided continual reinforcement learning (DGCRL), which stores prior knowledge in an external, self-evolving demonstration repository that directly guides RL exploration and adaptation. For each task, the agent dynamically selects the most relevant demonstration and follows a curriculum-based strategy to accelerate learning, gradually shifting from demonstration-guided exploration to full self-exploration. Extensive experiments on 2D navigation and MuJoCo locomotion tasks demonstrate its superior average performance, enhanced knowledge transfer, mitigation of forgetting, and training efficiency. The additional sensitivity analysis and ablation study further validate its effectiveness. 
+ oai:arXiv.org:2512.18670v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qian Zeng, Yihui Wang, Shu Yang, Yingxue Xu, Fengtao Zhou, Jiabo Ma, Dejia Cai, Zhengyu Zhang, Lijuan Qu, Yu Wang, Li Liang, Hao Chen + Xue Yang, Michael Schukat, Junlin Lu, Patrick Mannion, Karl Mason, Enda Howley - Convergence rates for a finite volume scheme of a stochastic non-linear parabolic equation - https://arxiv.org/abs/2512.17728 - arXiv:2512.17728v1 Announce Type: new -Abstract: In this contribution, we provide convergence rates for a finite volume scheme of a stochastic non-linear parabolic equation with multiplicative Lipschitz noise and homogeneous Neumann boundary conditions. More precisely, we give an error estimate for the $L^2$-norm of the space-time discretization by a semi-implicit Euler scheme with respect to time and a two-point flux approximation finite volume scheme with respect to space and the variational solution. The only regularity assumptions additionally needed is spatial regularity of the initial datum and smoothness of the diffusive term. - oai:arXiv.org:2512.17728v1 - math.NA - cs.NA - math.AP - math.PR - Mon, 22 Dec 2025 00:00:00 -0500 + SmartSight: Mitigating Hallucination in Video-LLMs Without Compromising Video Understanding via Temporal Attention Collapse + https://arxiv.org/abs/2512.18671 + arXiv:2512.18671v1 Announce Type: new +Abstract: Despite Video Large Language Models having rapidly advanced in recent years, perceptual hallucinations pose a substantial safety risk, which severely restricts their real-world applicability. While several methods for hallucination mitigation have been proposed, they often compromise the model's capacity for video understanding and reasoning. In this work, we propose SmartSight, a pioneering step to address this issue in a training-free manner by leveraging the model's own introspective capabilities. 
Specifically, SmartSight generates multiple candidate responses to uncover low-hallucinated outputs that are often obscured by standard greedy decoding. It assesses the hallucination of each response using the Temporal Attention Collapse score, which measures whether the model over-focuses on trivial temporal regions of the input video when generating the response. To improve efficiency, SmartSight identifies the Visual Attention Vanishing point, enabling more accurate hallucination estimation and early termination of hallucinated responses, leading to a substantial reduction in decoding cost. Experiments show that SmartSight substantially lowers hallucinations for Qwen2.5-VL-7B by 10.59% on VRIPT-HAL, while simultaneously enhancing video understanding and reasoning, boosting performance on VideoMMMU by up to 8.86%. These results highlight SmartSight's effectiveness in improving the reliability of open-source Video-LLMs. + oai:arXiv.org:2512.18671v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Kavin Rajasekaran, Niklas Sapountzoglou + Yiming Sun, Mi Zhang, Feifei Li, Geng Hong, Min Yang - AdaptPrompt: Parameter-Efficient Adaptation of VLMs for Generalizable Deepfake Detection - https://arxiv.org/abs/2512.17730 - arXiv:2512.17730v1 Announce Type: new -Abstract: Recent advances in image generation have led to the widespread availability of highly realistic synthetic media, increasing the difficulty of reliable deepfake detection. A key challenge is generalization, as detectors trained on a narrow class of generators often fail when confronted with unseen models. In this work, we address the pressing need for generalizable detection by leveraging large vision-language models, specifically CLIP, to identify synthetic content across diverse generative techniques. 
First, we introduce Diff-Gen, a large-scale benchmark dataset comprising 100k diffusion-generated fakes that capture broad spectral artifacts unlike traditional GAN datasets. Models trained on Diff-Gen demonstrate stronger cross-domain generalization, particularly on previously unseen image generators. Second, we propose AdaptPrompt, a parameter-efficient transfer learning framework that jointly learns task-specific textual prompts and visual adapters while keeping the CLIP backbone frozen. We further show via layer ablation that pruning the final transformer block of the vision encoder enhances the retention of high-frequency generative artifacts, significantly boosting detection accuracy. Our evaluation spans 25 challenging test sets, covering synthetic content generated by GANs, diffusion models, and commercial tools, establishing a new state-of-the-art in both standard and cross-domain scenarios. We further demonstrate the framework's versatility through few-shot generalization (using as few as 320 images) and source attribution, enabling the precise identification of generator architectures in closed-set settings. - oai:arXiv.org:2512.17730v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Improving Pattern Recognition of Scheduling Anomalies through Structure-Aware and Semantically-Enhanced Graphs + https://arxiv.org/abs/2512.18673 + arXiv:2512.18673v1 Announce Type: new +Abstract: This paper proposes a structure-aware driven scheduling graph modeling method to improve the accuracy and representation capability of anomaly identification in scheduling behaviors of complex systems. The method first designs a structure-guided scheduling graph construction mechanism that integrates task execution stages, resource node states, and scheduling path information to build dynamically evolving scheduling behavior graphs, enhancing the model's ability to capture global scheduling relationships. 
On this basis, a multi-scale graph semantic aggregation module is introduced to achieve semantic consistency modeling of scheduling features through local adjacency semantic integration and global topology alignment, thereby strengthening the model's capability to capture abnormal features in complex scenarios such as multi-task concurrency, resource competition, and stage transitions. Experiments are conducted on a real scheduling dataset with multiple scheduling disturbance paths set to simulate different types of anomalies, including structural shifts, resource changes, and task delays. The proposed model demonstrates significant performance advantages across multiple metrics, showing a sensitive response to structural disturbances and semantic shifts. Further visualization analysis reveals that, under the combined effect of structure guidance and semantic aggregation, the scheduling behavior graph exhibits stronger anomaly separability and pattern representation, validating the effectiveness and adaptability of the method in scheduling anomaly detection tasks. + oai:arXiv.org:2512.18673v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Yichen Jiang, Mohammed Talha Alam, Sohail Ahmed Khan, Duc-Tien Dang-Nguyen, Fakhri Karray + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ning Lyu, Junjie Jiang, Lu Chang, Chihui Shao, Feng Chen, Chong Zhang - Diversity Recommendation via Causal Deconfounding of Co-purchase Relations and Counterfactual Exposure - https://arxiv.org/abs/2512.17733 - arXiv:2512.17733v1 Announce Type: new -Abstract: Beyond user-item modeling, item-to-item relationships are increasingly used to enhance recommendation. However, common methods largely rely on co-occurrence, making them prone to item popularity bias and user attributes, which degrades embedding quality and performance. 
Meanwhile, although diversity is acknowledged as a key aspect of recommendation quality, existing research offers limited attention to it, with a notable lack of causal perspectives and theoretical grounding. To address these challenges, we propose Cadence: Diversity Recommendation via Causal Deconfounding of Co-purchase Relations and Counterfactual Exposure - a plug-and-play framework built upon LightGCN as the backbone, primarily designed to enhance recommendation diversity while preserving accuracy. First, we compute the Unbiased Asymmetric Co-purchase Relationship (UACR) between items - excluding item popularity and user attributes - to construct a deconfounded directed item graph, with an aggregation mechanism to refine embeddings. Second, we leverage UACR to identify diverse categories of items that exhibit strong causal relevance to a user's interacted items but have not yet been engaged with. We then simulate their behavior under high-exposure scenarios, thereby significantly enhancing recommendation diversity while preserving relevance. Extensive experiments on real-world datasets demonstrate that our method consistently outperforms state-of-the-art diversity models in both diversity and accuracy, and further validates its effectiveness, transferability, and efficiency over baselines. - oai:arXiv.org:2512.17733v1 - cs.IR + Remoe: Towards Efficient and Low-Cost MoE Inference in Serverless Computing + https://arxiv.org/abs/2512.18674 + arXiv:2512.18674v1 Announce Type: new +Abstract: Mixture-of-Experts (MoE) has become a dominant architecture in large language models (LLMs) due to its ability to scale model capacity via sparse expert activation. Meanwhile, serverless computing, with its elasticity and pay-per-use billing, is well-suited for deploying MoEs with bursty workloads. However, the large number of experts in MoE models incurs high inference costs due to memory-intensive parameter caching. 
These costs are difficult to mitigate via simple model partitioning due to input-dependent expert activation. To address these issues, we propose Remoe, a heterogeneous MoE inference system tailored for serverless computing. Remoe assigns non-expert modules to GPUs and expert modules to CPUs, and further offloads infrequently activated experts to separate serverless functions to reduce memory overhead and enable parallel execution. We incorporate three key techniques: (1) a Similar Prompts Searching (SPS) algorithm to predict expert activation patterns based on semantic similarity of inputs; (2) a Main Model Pre-allocation (MMP) algorithm to ensure service-level objectives (SLOs) via worst-case memory estimation; and (3) a joint memory and replica optimization framework leveraging Lagrangian duality and the Longest Processing Time (LPT) algorithm. We implement Remoe on Kubernetes and evaluate it across multiple LLM benchmarks. Experimental results show that Remoe reduces inference cost by up to 57% and cold start latency by 47% compared to state-of-the-art baselines. + oai:arXiv.org:2512.18674v1 + cs.DC cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jingmao Zhang, Zhiting Zhao, Yunqi Lin, Jianghong Ma, Tianjun Wei, Haijun Zhang, Xiaofeng Zhang + Wentao Liu, Yuhao Hu, Ruiting Zhou, Baochun Li, Ne Wang - When the Gold Standard isn't Necessarily Standard: Challenges of Evaluating the Translation of User-Generated Content - https://arxiv.org/abs/2512.17738 - arXiv:2512.17738v1 Announce Type: new -Abstract: User-generated content (UGC) is characterised by frequent use of non-standard language, from spelling errors to expressive choices such as slang, character repetitions, and emojis. This makes evaluating UGC translation particularly challenging: what counts as a "good" translation depends on the level of standardness desired in the output. 
To explore this, we examine the human translation guidelines of four UGC datasets, and derive a taxonomy of twelve non-standard phenomena and five translation actions (NORMALISE, COPY, TRANSFER, OMIT, CENSOR). Our analysis reveals notable differences in how UGC is treated, resulting in a spectrum of standardness in reference translations. Through a case study on large language models (LLMs), we show that translation scores are highly sensitive to prompts with explicit translation instructions for UGC, and that they improve when these align with the dataset's guidelines. We argue that when preserving UGC style is important, fair evaluation requires both models and metrics to be aware of translation guidelines. Finally, we call for clear guidelines during dataset creation and for the development of controllable, guideline-aware evaluation frameworks for UGC translation. - oai:arXiv.org:2512.17738v1 - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Lydia Nishimwe, Benoît Sagot, Rachel Bawden + AsyncDiff: Asynchronous Timestep Conditioning for Enhanced Text-to-Image Diffusion Inference + https://arxiv.org/abs/2512.18675 + arXiv:2512.18675v1 Announce Type: new +Abstract: Text-to-image diffusion inference typically follows synchronized schedules, where the numerical integrator advances the latent state to the same timestep at which the denoiser is conditioned. We propose an asynchronous inference mechanism that decouples these two, allowing the denoiser to be conditioned at a different, learned timestep while keeping the image update schedule unchanged. A lightweight timestep prediction module (TPM), trained with Group Relative Policy Optimization (GRPO), selects a more feasible conditioning timestep based on the current state, effectively choosing a desired noise level to control image detail and textural richness. At deployment, a scaling hyper-parameter can be used to interpolate between the original and de-synchronized timesteps, enabling conservative or aggressive adjustments. 
To keep the study computationally affordable, we cap the inference at 15 steps for SD3.5 and 10 steps for Flux. Evaluated on Stable Diffusion 3.5 Medium and Flux.1-dev across MS-COCO 2014 and T2I-CompBench datasets, our method optimizes a composite reward that averages Image Reward, HPSv2, CLIP Score and Pick Score, and shows consistent improvement. + oai:arXiv.org:2512.18675v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Lydia Nishimwe, Beno\^it Sagot, Rachel Bawden + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Longhuan Xu, Feng Yin, Cunjian Chen - When Pamplona sounds different: the soundscape transformation of San Fermin through intelligent acoustic sensors and a sound repository - https://arxiv.org/abs/2512.17740 - arXiv:2512.17740v1 Announce Type: new -Abstract: This study presents a use-case of a network of low-cost acoustic smart sensors deployed in the city of Pamplona to analyse changes in the urban soundscape during the San Fermin Festival. The sensors were installed in different areas of the city before, during, and after the event, capturing continuous acoustic data. Our analysis reveals a significant transformation in the city's sonic environment during the festive period: overall sound pressure levels increase significantly, soundscape patterns change, and the acoustic landscape becomes dominated by sounds associated with human activity. These findings highlight the potential of distributed smart acoustic monitoring systems to characterize the temporal dynamics of urban soundscapes and underscore how the large-scale event of San Fermin drastically reshapes the overall acoustic dynamics of the city of Pamplona. Additionally, to complement the objective measurements, a curated collection of real San Fermin sound recordings has been created and made publicly available, preserving the festival's unique sonic heritage. 
- oai:arXiv.org:2512.17740v1 - cs.CY - cs.SD - Mon, 22 Dec 2025 00:00:00 -0500 + brat: Aligned Multi-View Embeddings for Brain MRI Analysis + https://arxiv.org/abs/2512.18679 + arXiv:2512.18679v1 Announce Type: new +Abstract: We present brat (brain report alignment transformer), a multi-view representation learning framework for brain magnetic resonance imaging (MRI) trained on MRIs paired with clinical reports. Brain MRIs present unique challenges due to the presence of numerous, highly varied, and often subtle abnormalities that are localized to a few slices within a 3D volume. To address these challenges, we introduce a brain MRI dataset $10\times$ larger than existing ones, containing approximately 80,000 3D scans with corresponding radiology reports, and propose a multi-view pre-training approach inspired by advances in document retrieval. We develop an implicit query-feature matching mechanism and adopt concepts from quality-diversity to obtain multi-view embeddings of MRIs that are aligned with the clinical features given by report sentences. We evaluate our approach across multiple vision-language and vision tasks, demonstrating substantial performance improvements. The brat foundation models are publicly released. + oai:arXiv.org:2512.18679v1 + cs.CV + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Amaia Sagasti, Frederic Font + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Maxime Kayser, Maksim Gridnev, Wanting Wang, Max Bain, Aneesh Rangnekar, Avijit Chatterjee, Aleksandr Petrov, Harini Veeraraghavan, Nathaniel C. 
Swinburne - Methods and Tools for Secure Quantum Clouds with a specific Case Study on Homomorphic Encryption - https://arxiv.org/abs/2512.17748 - arXiv:2512.17748v1 Announce Type: new -Abstract: The rise of quantum computing/technology potentially introduces significant security challenges to cloud computing, necessitating quantum-resistant encryption strategies as well as protection schemes and methods for cloud infrastructures offering quantum computing time and services (i.e. quantum clouds). This research explores various options for securing quantum clouds and ensuring privacy, especially focussing on the integration of homomorphic encryption (HE) into Eclipse Qrisp, a high-level quantum computing framework, to enhance the security of quantum cloud platforms. The study addresses the technical feasibility of integrating HE with Qrisp, evaluates performance trade-offs, and assesses the potential impact on future quantum cloud architectures. The successful implementation and Qrisp integration of three post-quantum cryptographic (PQC) algorithms demonstrate the feasibility of integrating HE with quantum computing frameworks. The findings indicate that while the Quantum One-Time Pad (QOTP) offers simplicity and low overhead, other algorithms like Chen and Gentry-Sahai-Waters (GSW) present performance trade-offs in terms of runtime and memory consumption. The study results in an overall set of recommendations for securing quantum clouds, e.g. implementing HE at data storage and processing levels, developing Quantum Key Distribution (QKD), and enforcing stringent access control and authentication mechanisms as well as participating in PQC standardization efforts. 
- oai:arXiv.org:2512.17748v1 - cs.CR - quant-ph - Mon, 22 Dec 2025 00:00:00 -0500 + Solver-Independent Automated Problem Formulation via LLMs for High-Cost Simulation-Driven Design + https://arxiv.org/abs/2512.18682 + arXiv:2512.18682v1 Announce Type: new +Abstract: In the high-cost simulation-driven design domain, translating ambiguous design requirements into a mathematical optimization formulation is a bottleneck for optimizing product performance. This process is time-consuming and heavily reliant on expert knowledge. While large language models (LLMs) offer potential for automating this task, existing approaches either suffer from poor formalization that fails to accurately align with the design intent or rely on solver feedback for data filtering, which is unavailable due to the high simulation costs. To address this challenge, we propose APF, a framework for solver-independent, automated problem formulation via LLMs designed to automatically convert engineers' natural language requirements into executable optimization models. The core of this framework is an innovative pipeline for automatically generating high-quality data, which overcomes the difficulty of constructing suitable fine-tuning datasets in the absence of high-cost solver feedback with the help of data generation and test instance annotation. The generated high-quality dataset is used to perform supervised fine-tuning on LLMs, significantly enhancing their ability to generate accurate and executable optimization problem formulations. Experimental results on antenna design demonstrate that APF significantly outperforms the existing methods in both the accuracy of requirement formalization and the quality of resulting radiation efficiency curves in meeting the design goals. 
+ oai:arXiv.org:2512.18682v1 + cs.CL + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Aurelia Kusumastuti, Nikolay Tcholtchev, Philipp Lämmel, Sebastian Bock, Manfred Hauswirth + Yuchen Li, Handing Wang, Bing Xue, Mengjie Zhang, Yaochu Jin - Perceptions of the Metaverse at the Peak of the Hype Cycle: A Cross-Sectional Study Among Turkish University Students - https://arxiv.org/abs/2512.17750 - arXiv:2512.17750v1 Announce Type: new -Abstract: During the height of the hype in late 2021, the Metaverse drew more attention from around the world than ever before. It promised new ways to interact with people in three-dimensional digital spaces. This cross-sectional study investigates the attitudes, perceptions, and predictors of the willingness to engage with the Metaverse among 381 Turkish university students surveyed in December 2021. The study employs Fisher's Exact Tests and binary logistic regression to assess the influence of demographic characteristics, prior digital experience, and perception-based factors. The results demonstrate that demographic factors, such as gender, educational attainment, faculty association, social media engagement, and previous virtual reality exposure, do not significantly forecast the propensity to participate in the Metaverse. Instead, the main things that affect people's intentions to adopt are how they see things. Belief in the Metaverse's capacity to revolutionize societal frameworks, especially human rights, surfaced as the most significant positive predictor of willingness. Conversely, apprehensions regarding psychological harm, framed as a possible 'cyber syndrome', represented a significant obstacle to participation. Perceptions of technical compatibility and ethical considerations showed complex effects, showing that optimism, uncertainty, and indifference affect willingness in different ways. 
In general, the results show that early adoption of the Metaverse is based on how people see it, not on their demographics. The research establishes a historically informed benchmark of user skepticism and prudent assessment during the advent of Web 3.0, underscoring the necessity of addressing collective psychological, ethical, and normative issues to promote future engagement. - oai:arXiv.org:2512.17750v1 - cs.CY - stat.AP - Mon, 22 Dec 2025 00:00:00 -0500 + CIRR: Causal-Invariant Retrieval-Augmented Recommendation with Faithful Explanations under Distribution Shift + https://arxiv.org/abs/2512.18683 + arXiv:2512.18683v1 Announce Type: new +Abstract: Recent advances in retrieval-augmented generation (RAG) have shown promise in enhancing recommendation systems with external knowledge. However, existing RAG-based recommenders face two critical challenges: (1) vulnerability to distribution shifts across different environments (e.g., time periods, user segments), leading to performance degradation in out-of-distribution (OOD) scenarios, and (2) lack of faithful explanations that can be verified against retrieved evidence. In this paper, we propose CIRR, a Causal-Invariant Retrieval-Augmented Recommendation framework that addresses both challenges simultaneously. CIRR learns environment-invariant user preference representations through causal inference, which guide a debiased retrieval process to select relevant evidence from multiple sources. Furthermore, we introduce consistency constraints that enforce faithfulness between retrieved evidence, generated explanations, and recommendation outputs. Extensive experiments on two real-world datasets demonstrate that CIRR achieves robust performance under distribution shifts, reducing performance degradation from 15.4% (baseline) to only 5.6% in OOD scenarios, while providing more faithful and interpretable explanations (26% improvement in faithfulness score) compared to state-of-the-art baselines. 
+ oai:arXiv.org:2512.18683v1 + cs.IR + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mehmet Ali Erkan, Halil Eren Ko\c{c}ak + Sebastian Sun - Affect, Body, Cognition, Demographics, and Emotion: The ABCDE of Text Features for Computational Affective Science - https://arxiv.org/abs/2512.17752 - arXiv:2512.17752v1 Announce Type: new -Abstract: Work in Computational Affective Science and Computational Social Science explores a wide variety of research questions about people, emotions, behavior, and health. Such work often relies on language data that is first labeled with relevant information, such as the use of emotion words or the age of the speaker. Although many resources and algorithms exist to enable this type of labeling, discovering, accessing, and using them remains a substantial impediment, particularly for practitioners outside of computer science. Here, we present the ABCDE dataset (Affect, Body, Cognition, Demographics, and Emotion), a large-scale collection of over 400 million text utterances drawn from social media, blogs, books, and AI-generated sources. The dataset is annotated with a wide range of features relevant to computational affective and social science. ABCDE facilitates interdisciplinary research across numerous fields, including affective science, cognitive science, the digital humanities, sociology, political science, and computational linguistics. - oai:arXiv.org:2512.17752v1 - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + A Study of Finetuning Video Transformers for Multi-view Geometry Tasks + https://arxiv.org/abs/2512.18684 + arXiv:2512.18684v1 Announce Type: new +Abstract: This paper presents an investigation of vision transformer learning for multi-view geometry tasks, such as optical flow estimation, by fine-tuning video foundation models. 
Unlike previous methods that involve custom architectural designs and task-specific pretraining, our research finds that general-purpose models pretrained on videos can be readily transferred to multi-view problems with minimal adaptation. The core insight is that general-purpose attention between patches learns temporal and spatial information for geometric reasoning. We demonstrate that appending a linear decoder to the Transformer backbone produces satisfactory results, and iterative refinement can further elevate performance to state-of-the-art levels. This conceptually simple approach achieves top cross-dataset generalization results for optical flow estimation with end-point error (EPE) of 0.69, 1.78, and 3.15 on the Sintel clean, Sintel final, and KITTI datasets, respectively. Our method additionally establishes a new record on the online test benchmark with EPE values of 0.79, 1.88, and F1 value of 3.79. Applications to 3D depth estimation and stereo matching also show strong performance, illustrating the versatility of video-pretrained models in addressing geometric vision tasks. + oai:arXiv.org:2512.18684v1 cs.CV Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Huimin Wu, Kwang-Ting Cheng, Stephen Lin, Zhirong Wu - AncientBench: Towards Comprehensive Evaluation on Excavated and Transmitted Chinese Corpora - https://arxiv.org/abs/2512.17756 - arXiv:2512.17756v1 Announce Type: new -Abstract: Comprehension of ancient texts plays an important role in archaeology and understanding of Chinese history and civilization. The rapid development of large language models needs benchmarks that can evaluate their comprehension of ancient characters. 
Existing Chinese benchmarks are mostly targeted at modern Chinese and transmitted documents in ancient Chinese, but excavated documents in ancient Chinese are not covered. To meet this need, we propose the AncientBench, which aims to evaluate the comprehension of ancient characters, especially in the scenario of excavated documents. The AncientBench is divided into four dimensions, which correspond to the four competencies of ancient character comprehension: glyph comprehension, pronunciation comprehension, meaning comprehension, and contextual comprehension. The benchmark also contains ten tasks, including radical, phonetic radical, homophone, cloze, translation, and more, providing a comprehensive framework for evaluation. We convened archaeological researchers to conduct experimental evaluations, proposed an ancient model as baseline, and conducted extensive experiments on the currently best-performing large language models. The experimental results reveal the great potential of large language models in ancient textual scenarios as well as the gap with humans. Our research aims to promote the development and application of large language models in the field of archaeology and ancient Chinese language. - oai:arXiv.org:2512.17756v1 cs.CL cs.AI Mon, 22 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Zhihan Zhou, Daqian Shi, Rui Song, Lida Shi, Xiaolei Diao, Hao Xu + Social Comparison without Explicit Inference of Others' Reward Values: A Constructive Approach Using a Probabilistic Generative Model + https://arxiv.org/abs/2512.18687 + arXiv:2512.18687v1 Announce Type: new +Abstract: Social comparison -- the process of evaluating one's rewards relative to others -- plays a fundamental role in primate social cognition. However, it remains unknown from a computational perspective how information about others' rewards affects the evaluation of one's own reward. With a constructive approach, this study examines whether monkeys merely recognize objective reward differences or, instead, infer others' subjective reward valuations. 
We developed three computational models with varying degrees of social information processing: an Internal Prediction Model (IPM), which infers the partner's subjective values; a No Comparison Model (NCM), which disregards partner information; and an External Comparison Model (ECM), which directly incorporates the partner's objective rewards. To test model performance, we used a multi-layered, multimodal latent Dirichlet allocation. We trained the models on a dataset containing the behavior of a pair of monkeys, their rewards, and the conditioned stimuli. Then, we evaluated the models' ability to classify subjective values across pre-defined experimental conditions. The ECM achieved the highest classification score in the Rand Index (0.88 vs. 0.79 for the IPM) under our settings, suggesting that social comparison relies on objective reward differences rather than inferences about subjective states. + oai:arXiv.org:2512.18687v1 cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhihan Zhou, Daqian Shi, Rui Song, Lida Shi, Xiaolei Diao, Hao Xu + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Yosuke Taniuchi, Chie Hieida, Atsushi Noritake, Kazushi Ikeda, Masaki Isoda - Can You Hear Me Now? A Benchmark for Long-Range Graph Propagation - https://arxiv.org/abs/2512.17762 - arXiv:2512.17762v1 Announce Type: new -Abstract: Effectively capturing long-range interactions remains a fundamental yet unresolved challenge in graph neural network (GNN) research, critical for applications across diverse fields of science. To systematically address this, we introduce ECHO (Evaluating Communication over long HOps), a novel benchmark specifically designed to rigorously assess the capabilities of GNNs in handling very long-range graph propagation. 
ECHO includes three synthetic graph tasks, namely single-source shortest paths, node eccentricity, and graph diameter, each constructed over diverse and structurally challenging topologies intentionally designed to introduce significant information bottlenecks. ECHO also includes two real-world datasets, ECHO-Charge and ECHO-Energy, which define chemically grounded benchmarks for predicting atomic partial charges and molecular total energies, respectively, with reference computations obtained at the density functional theory (DFT) level. Both tasks inherently depend on capturing complex long-range molecular interactions. Our extensive benchmarking of popular GNN architectures reveals clear performance gaps, emphasizing the difficulty of true long-range propagation and highlighting design choices capable of overcoming inherent limitations. ECHO thereby sets a new standard for evaluating long-range information propagation, also providing a compelling example for its need in AI for science. - oai:arXiv.org:2512.17762v1 + Fusion of Multiscale Features Via Centralized Sparse-attention Network for EEG Decoding + https://arxiv.org/abs/2512.18689 + arXiv:2512.18689v1 Announce Type: new +Abstract: Electroencephalography (EEG) signal decoding is a key technology that translates brain activity into executable commands, laying the foundation for direct brain-machine interfacing and intelligent interaction. To address the inherent spatiotemporal heterogeneity of EEG signals, this paper proposes a multi-branch parallel architecture, where each temporal scale is equipped with an independent spatial feature extraction module. To further enhance multi-branch feature fusion, we propose a Fusion of Multiscale Features via Centralized Sparse-attention Network (EEG-CSANet), a centralized sparse-attention network. 
It employs a main-auxiliary branch architecture, where the main branch models core spatiotemporal patterns via multiscale self-attention, and the auxiliary branch facilitates efficient local interactions through sparse cross-attention. Experimental results show that EEG-CSANet achieves state-of-the-art (SOTA) performance across five public datasets (BCIC-IV-2A, BCIC-IV-2B, HGD, SEED, and SEED-VIG), with accuracies of 88.54%, 91.09%, 99.43%, 96.03%, and 90.56%, respectively. Such performance demonstrates its strong adaptability and robustness across various EEG decoding tasks. Moreover, extensive ablation studies are conducted to enhance the interpretability of EEG-CSANet. In the future, we hope that EEG-CSANet could serve as a promising baseline model in the field of EEG signal decoding. The source code is publicly available at: https://github.com/Xiangrui-Cai/EEG-CSANet + oai:arXiv.org:2512.18689v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Luca Miglior, Matteo Tolloso, Alessio Gravina, Davide Bacciu + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xiangrui Cai, Shaocheng Ma, Lei Cao, Jie Li, Tianyu Liu, Yilin Dong - UniStateDLO: Unified Generative State Estimation and Tracking of Deformable Linear Objects Under Occlusion for Constrained Manipulation - https://arxiv.org/abs/2512.17764 - arXiv:2512.17764v1 Announce Type: new -Abstract: Perception of deformable linear objects (DLOs), such as cables, ropes, and wires, is the cornerstone for successful downstream manipulation. Although vision-based methods have been extensively explored, they remain highly vulnerable to occlusions that commonly arise in constrained manipulation environments due to surrounding obstacles, large and varying deformations, and limited viewpoints. 
Moreover, the high dimensionality of the state space, the lack of distinctive visual features, and the presence of sensor noises further compound the challenges of reliable DLO perception. To address these open issues, this paper presents UniStateDLO, the first complete DLO perception pipeline with deep-learning methods that achieves robust performance under severe occlusion, covering both single-frame state estimation and cross-frame state tracking from partial point clouds. Both tasks are formulated as conditional generative problems, leveraging the strong capability of diffusion models to capture the complex mapping between highly partial observations and high-dimensional DLO states. UniStateDLO effectively handles a wide range of occlusion patterns, including initial occlusion, self-occlusion, and occlusion caused by multiple objects. In addition, it exhibits strong data efficiency as the entire network is trained solely on a large-scale synthetic dataset, enabling zero-shot sim-to-real generalization without any real-world training data. Comprehensive simulation and real-world experiments demonstrate that UniStateDLO outperforms all state-of-the-art baselines in both estimation and tracking, producing globally smooth yet locally precise DLO state predictions in real time, even under substantial occlusions. Its integration as the front-end module in a closed-loop DLO manipulation system further demonstrates its ability to support stable feedback control in complex, constrained 3-D environments. - oai:arXiv.org:2512.17764v1 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 + EcoSplat: Efficiency-controllable Feed-forward 3D Gaussian Splatting from Multi-view Images + https://arxiv.org/abs/2512.18692 + arXiv:2512.18692v1 Announce Type: new +Abstract: Feed-forward 3D Gaussian Splatting (3DGS) enables efficient one-pass scene reconstruction, providing 3D representations for novel view synthesis without per-scene optimization. 
However, existing methods typically predict pixel-aligned primitives per-view, producing an excessive number of primitives in dense-view settings and offering no explicit control over the number of predicted Gaussians. To address this, we propose EcoSplat, the first efficiency-controllable feed-forward 3DGS framework that adaptively predicts the 3D representation for any given target primitive count at inference time. EcoSplat adopts a two-stage optimization process. The first stage is Pixel-aligned Gaussian Training (PGT), where our model learns initial primitive prediction. The second stage is the Importance-aware Gaussian Finetuning (IGF) stage, where our model learns to rank primitives and adaptively adjusts their parameters based on the target primitive count. Extensive experiments across multiple dense-view settings show that EcoSplat is robust and outperforms state-of-the-art methods under strict primitive-count constraints, making it well-suited for flexible downstream rendering tasks. + oai:arXiv.org:2512.18692v1 cs.CV Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Jongmin Park, Minh-Quan Viet Bui, Juan Luis Gonzalez Bello, Jaeho Moon, Jihyong Oh, Munchurl Kim - A Short Report on Importance Sampling for Rare Event Simulation in Diffusions - https://arxiv.org/abs/2512.17766 - arXiv:2512.17766v1 Announce Type: new -Abstract: In this manuscript, we investigate importance sampling methods for rare-event simulation in diffusion processes. We show, from a large-deviation perspective, that the resulting importance sampling estimator is log-efficient. This connection is established via a stochastic optimal control formulation, and the associated Hamilton--Jacobi--Bellman (HJB) equation is derived using dynamic programming. 
To approximate the optimal control, we adopt a spectral parameterization and employ the cross-entropy method to estimate the parameters by solving a least-squares problem. Finally, we present a numerical example to validate the effectiveness of the cross-entropy approach and the efficiency of the resulting importance sampling estimator. - oai:arXiv.org:2512.17766v1 - math.NA - cs.NA - math.PR - Mon, 22 Dec 2025 00:00:00 -0500 + Real-Time Remote Monitoring of Correlated Markovian Sources + https://arxiv.org/abs/2512.18698 + arXiv:2512.18698v1 Announce Type: new +Abstract: We investigate real-time tracking of two correlated stochastic processes over a shared wireless channel. The joint evolution of the processes is modeled as a two-dimensional discrete-time Markov chain. Each process is observed by a dedicated sampler and independently reconstructed at a remote monitor according to a task-specific objective. Although both processes originate from a common underlying phenomenon (e.g., distinct features of the same source), each monitor is interested only in its corresponding feature. A reconstruction error is incurred when the true and reconstructed states mismatch at one or both monitors. To address this problem, we propose an error-aware joint sampling and transmission policy, under which each sampler probabilistically generates samples only when the current process state differs from the most recently reconstructed state at its corresponding monitor. We adopt the time-averaged reconstruction error as the primary performance metric and benchmark the proposed policy against state-of-the-art joint sampling and transmission schemes. For each policy, we derive closed-form expressions for the resulting time-averaged reconstruction error. We further formulate and solve an optimization problem that minimizes the time-averaged reconstruction error subject to an average sampling cost constraint. 
Analytical and numerical results demonstrate that the proposed error-aware policy achieves the minimum time-averaged reconstruction error among the considered schemes while efficiently utilizing the sampling budget. The performance gains are particularly pronounced in regimes with strong inter-process correlation and stringent tracking requirements, where frequent sampling by both samplers is necessary. + oai:arXiv.org:2512.18698v1 + cs.IT + cs.NI + cs.SY + eess.SY + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/publicdomain/zero/1.0/ - Zhiwei Gao + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Mehrdad Salimnejad, Marios Kountouris, Nikolaos Pappas - In Times of Crisis: An Exploratory Study of Media and Political Discourse on YouTube During the 2024 French Elections - https://arxiv.org/abs/2512.17768 - arXiv:2512.17768v1 Announce Type: new -Abstract: YouTube has emerged as a major platform for political communication and news dissemination, particularly during high-stakes electoral periods. In the context of the 2024 European Parliament and French legislative elections, this study investigates how political actors and news media used YouTube to shape public discourse. We analyze over 100,000 video transcripts and metadata from 74 French YouTube channels operated by national news outlets, local media, and political figures. To identify the key themes emphasized during the campaign period, we applied a semi-automated method that combined large language models with clustering and manual review. The results reveal distinct thematic patterns across the political spectrum and media types, with right-leaning news outlets focusing on topics like immigration, while left-leaning emphasized protest and media freedom. Themes generating the most audience engagement, measured by comment-to-view ratios, were most often the most polarizing ones. 
In contrast, less polarizing themes such as video games and nature showed higher approval, reflected in like-to-view ratios. We also observed a general tendency across all media types to portray political figures in neutral or critical terms rather than favorable ones. - oai:arXiv.org:2512.17768v1 - cs.CY - Mon, 22 Dec 2025 00:00:00 -0500 + Task Vector in TTS: Toward Emotionally Expressive Dialectal Speech Synthesis + https://arxiv.org/abs/2512.18699 + arXiv:2512.18699v1 Announce Type: new +Abstract: Recent advances in text-to-speech (TTS) have yielded remarkable improvements in naturalness and intelligibility. Building on these achievements, research has increasingly shifted toward enhancing the expressiveness of generated speech, such as dialectal and emotional TTS. However, cross-style synthesis combining both dialect and emotion remains challenging and largely unexplored, mainly due to the scarcity of dialectal data with emotional labels. To address this, we propose Hierarchical Expressive Vector (HE-Vector), a two-stage method for Emotional Dialectal TTS. In the first stage, we construct different task vectors to model dialectal and emotional styles independently, and then enhance single-style synthesis by adjusting their weights, a method we refer to as Expressive Vector (E-Vector). For the second stage, we hierarchically integrate these vectors to achieve controllable emotionally expressive dialect synthesis without requiring jointly labeled data, corresponding to Hierarchical Expressive Vector (HE-Vector). Experimental results demonstrate that HE-Vectors achieve superior performance in dialect synthesis, and promising results in synthesizing emotionally expressive dialectal speech in a zero-shot setting. 
+ oai:arXiv.org:2512.18699v1 + cs.SD + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Vera Sosnovik, Caroline Violot, Mathias Humbert + Pengchao Feng, Yao Xiao, Ziyang Ma, Zhikang Niu, Shuai Fan, Yao Li, Sheng Wang, Xie Chen - Bangla MedER: Multi-BERT Ensemble Approach for the Recognition of Bangla Medical Entity - https://arxiv.org/abs/2512.17769 - arXiv:2512.17769v1 Announce Type: new -Abstract: Medical Entity Recognition (MedER) is an essential NLP task for extracting meaningful entities from the medical corpus. Nowadays, MedER-based research outcomes can remarkably contribute to the development of automated systems in the medical sector, ultimately enhancing patient care and outcomes. While extensive research has been conducted on MedER in English, low-resource languages like Bangla remain underexplored. Our work aims to bridge this gap. For Bangla medical entity recognition, this study first examined a number of transformer models, including BERT, DistilBERT, ELECTRA, and RoBERTa. We also propose a novel Multi-BERT Ensemble approach that outperformed all baseline models with the highest accuracy of 89.58%. Notably, it provides an 11.80% accuracy improvement over the single-layer BERT model, demonstrating its effectiveness for this task. A major challenge in MedER for low-resource languages is the lack of annotated datasets. To address this issue, we developed a high-quality dataset tailored for the Bangla MedER task. The dataset was used to evaluate the effectiveness of our model through multiple performance metrics, demonstrating its robustness and applicability. Our findings highlight the potential of Multi-BERT Ensemble models in improving MedER for Bangla and set the foundation for further advancements in low-resource medical NLP. 
- oai:arXiv.org:2512.17769v1 - cs.CL cs.AI Mon, 22 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-nd/4.0/ Tanjim Taharat Aurpa, Farzana Akter, Md. Mehedi Hasan, Shakil Ahmed, Shifat Ara Rafiq, Fatema Khan + CauTraj: A Causal-Knowledge-Guided Framework for Lane-Changing Trajectory Planning of Autonomous Vehicles + https://arxiv.org/abs/2512.18703 + arXiv:2512.18703v1 Announce Type: new +Abstract: Enhancing the performance of trajectory planners for lane-changing vehicles is one of the key challenges in autonomous driving within human-machine mixed traffic. Most existing studies have not incorporated human drivers' prior knowledge when designing trajectory planning models. To address this issue, this study proposes a novel trajectory planning framework that integrates causal prior knowledge into the control process. Both longitudinal and lateral microscopic behaviors of vehicles are modeled to quantify interaction risk, and a staged causal graph is constructed to capture causal dependencies in lane-changing scenarios. Causal effects between the lane-changing vehicle and surrounding vehicles are then estimated using causal inference, including average causal effects (ATE) and conditional average treatment effects (CATE). These causal priors are embedded into a model predictive control (MPC) framework to enhance trajectory planning. The proposed approach is validated on naturalistic vehicle trajectory datasets. Experimental results show that: (1) causal inference provides interpretable and stable quantification of vehicle interactions; (2) individual causal effects reveal driver heterogeneity; and (3) compared with the baseline MPC, the proposed method achieves a closer alignment with human driving behaviors, reducing maximum trajectory deviation from 1.2 m to 0.2 m, lateral velocity fluctuation by 60%, and yaw angle variability by 50%. These findings provide methodological support for human-like trajectory planning and practical value for improving safety, stability, and realism in autonomous vehicle testing and traffic simulation platforms. 
+ oai:arXiv.org:2512.18703v1 + cs.RO cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Tanjim Taharat Aurpa, Farzana Akter, Md. Mehedi Hasan, Shakil Ahmed, Shifat Ara Rafiq, Fatema Khan + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Cailin Lei, Haiyang Wu, Yuxiong Ji, Xiaoyu Cai, Yuchuan Du - Easy Adaptation: An Efficient Task-Specific Knowledge Injection Method for Large Models in Resource-Constrained Environments - https://arxiv.org/abs/2512.17771 - arXiv:2512.17771v1 Announce Type: new -Abstract: While the enormous parameter scale endows Large Models (LMs) with unparalleled performance, it also limits their adaptability across specific tasks. Parameter-Efficient Fine-Tuning (PEFT) has emerged as a critical approach for effectively adapting LMs to a diverse range of downstream tasks. However, existing PEFT methods face two primary challenges: (1) High resource cost. Although PEFT methods significantly reduce resource demands compared to full fine-tuning, it still requires substantial time and memory, making it impractical in resource-constrained environments. (2) Parameter dependency. PEFT methods heavily rely on updating a subset of parameters associated with LMs to incorporate task-specific knowledge. Yet, due to increasing competition in the LMs landscape, many companies have adopted closed-source policies for their leading models, offering access only via Application Programming Interface (APIs). Whereas, the expense is often cost-prohibitive and difficult to sustain, as the fine-tuning process of LMs is extremely slow. Even if small models perform far worse than LMs in general, they can achieve superior results on particular distributions while requiring only minimal resources. Motivated by this insight, we propose Easy Adaptation (EA), which designs Specific Small Models (SSMs) to complement the underfitted data distribution for LMs. 
Extensive experiments show that EA matches the performance of PEFT on diverse tasks without accessing LM parameters, and requires only minimal resources. - oai:arXiv.org:2512.17771v1 - cs.LG - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + X-Talk: On the Underestimated Potential of Modular Speech-to-Speech Dialogue System + https://arxiv.org/abs/2512.18706 + arXiv:2512.18706v1 Announce Type: new +Abstract: We present X-Talk, an open-source framework that champions a decoupled, modular design for LLM-driven speech-to-speech (S2S) systems. While the dominant trend favors end-to-end (E2E) modeling to optimize information flow, these "omni-models" often struggle to balance the competing objectives of complex speech tasks within a single network. X-Talk challenges this paradigm by demonstrating that a systematically optimized cascaded pipeline can achieve sub-second latency without sacrificing modular flexibility. Our framework seamlessly integrates specialized front-end components (e.g., VAD, speech enhancement) and diverse understanding models (e.g., ASR, emotion, and environmental sound analysis) with LLM capabilities like retrieval-augmented generation (RAG) and tool use. By revitalizing the cascaded approach, X-Talk highlights the underestimated potential of modular S2S systems and provides a robust foundation for future research and applications. 
+ oai:arXiv.org:2512.18706v1 + cs.SD + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Dong Chen, Zhengqing Hu, Shixing Zhao, Yibo Guo + Zhanxun Liu, Yifan Duan, Mengmeng Wang, Pengchao Feng, Haotian Zhang, Xiaoyu Xing, Yijia Shan, Haina Zhu, Yuhang Dai, Chaochao Lu, Xipeng Qiu, Lei Xie, Lan Wang, Nan Yan, Zilong Zheng, Ziyang Ma, Kai Yu, Xie Chen - Pix2NPHM: Learning to Regress NPHM Reconstructions From a Single Image - https://arxiv.org/abs/2512.17773 - arXiv:2512.17773v1 Announce Type: new -Abstract: Neural Parametric Head Models (NPHMs) are a recent advancement over mesh-based 3d morphable models (3DMMs) to facilitate high-fidelity geometric detail. However, fitting NPHMs to visual inputs is notoriously challenging due to the expressive nature of their underlying latent space. To this end, we propose Pix2NPHM, a vision transformer (ViT) network that directly regresses NPHM parameters, given a single image as input. Compared to existing approaches, the neural parametric space allows our method to reconstruct more recognizable facial geometry and accurate facial expressions. For broad generalization, we exploit domain-specific ViTs as backbones, which are pretrained on geometric prediction tasks. We train Pix2NPHM on a mixture of 3D data, including a total of over 100K NPHM registrations that enable direct supervision in SDF space, and large-scale 2D video datasets, for which normal estimates serve as pseudo ground truth geometry. Pix2NPHM not only allows for 3D reconstructions at interactive frame rates, it is also possible to improve geometric fidelity by a subsequent inference-time optimization against estimated surface normals and canonical point maps. As a result, we achieve unprecedented face reconstruction quality that can run at scale on in-the-wild data. 
- oai:arXiv.org:2512.17773v1 - cs.CV + KeenKT: Knowledge Mastery-State Disambiguation for Knowledge Tracing + https://arxiv.org/abs/2512.18709 + arXiv:2512.18709v1 Announce Type: new +Abstract: Knowledge Tracing (KT) aims to dynamically model a student's mastery of knowledge concepts based on their historical learning interactions. Most current methods rely on single-point estimates, which cannot distinguish true ability from outburst or carelessness, creating ambiguity in judging mastery. To address this issue, we propose a Knowledge Mastery-State Disambiguation for Knowledge Tracing model (KeenKT), which represents a student's knowledge state at each interaction using a Normal-Inverse-Gaussian (NIG) distribution, thereby capturing the fluctuations in student learning behaviors. Furthermore, we design an NIG-distance-based attention mechanism to model the dynamic evolution of the knowledge state. In addition, we introduce a diffusion-based denoising reconstruction loss and a distributional contrastive learning loss to enhance the model's robustness. Extensive experiments on six public datasets demonstrate that KeenKT outperforms SOTA KT models in terms of prediction accuracy and sensitivity to behavioral fluctuations. The proposed method yields the maximum AUC improvement of 5.85% and the maximum ACC improvement of 6.89%. 
+ oai:arXiv.org:2512.18709v1
cs.AI
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Simon Giebenhain, Tobias Kirschstein, Liam Schoneveld, Davide Davoli, Zhe Chen, Matthias Nie{\ss}ner
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhifei Li, Lifan Chen, Jiali Yi, Xiaoju Hou, Yue Zhao, Wenxin Huang, Miao Zhang, Kui Xiao, Bing Yang
- DEER: A Comprehensive and Reliable Benchmark for Deep-Research Expert Reports
- https://arxiv.org/abs/2512.17776
- arXiv:2512.17776v1 Announce Type: new
-Abstract: As large language models (LLMs) advance, deep research systems can generate expert-level reports via multi-step reasoning and evidence-based synthesis, but evaluating such reports remains challenging. Existing benchmarks often lack systematic criteria for expert reporting, evaluations that rely heavily on LLM judges can fail to capture issues that require expert judgment, and source verification typically covers only a limited subset of explicitly cited statements rather than report-wide factual reliability. We introduce DEER, a benchmark for evaluating expert-level deep research reports. DEER comprises 50 report-writing tasks spanning 13 domains and an expert-grounded evaluation taxonomy (7 dimensions, 25 sub-dimensions) operationalized into 130 fine-grained rubric items. DEER further provides task-specific expert guidance to help LLM judges assess expert-level report quality more consistently. Complementing rubric-based assessment, we propose a document-level fact-checking architecture that extracts and verifies all claims across the entire report, including both cited and uncited ones, and quantifies external-evidence quality. DEER correlates closely with human expert judgments and yields interpretable diagnostics of system strengths and weaknesses. &#13;
- oai:arXiv.org:2512.17776v1
- cs.CL
- Mon, 22 Dec 2025 00:00:00 -0500
+ DSO-VSA: a Variable Stiffness Actuator with Decoupled Stiffness and Output Characteristics for Rehabilitation Robotics
+ https://arxiv.org/abs/2512.18712
+ arXiv:2512.18712v1 Announce Type: new
+Abstract: Stroke-induced motor impairment often results in substantial loss of upper-limb function, creating a strong demand for rehabilitation robots that enable safe and transparent physical human-robot interaction (pHRI). Variable stiffness actuators are well suited for such applications. However, in most existing designs, stiffness is coupled with the deflection angle, complicating both modeling and control. To address this limitation, this paper presents a variable stiffness actuator featuring decoupled stiffness and output behavior for rehabilitation robotics. The system integrates a variable stiffness mechanism that combines a variable-length lever with a hypocycloidal straight-line mechanism to achieve a linear torque-deflection relationship and continuous stiffness modulation from near zero to theoretically infinite. It also incorporates a differential transmission mechanism based on a planetary gear system that enables dual-motor load sharing. A cascade PI controller is further developed on the basis of the differential configuration, in which the position-loop term jointly regulates stiffness and deflection angle, effectively suppressing stiffness fluctuations and output disturbances. The performance of the prototype was experimentally validated through stiffness calibration, stiffness regulation, torque control, decoupled characteristics, and dual-motor load sharing, indicating the potential for rehabilitation exoskeletons and other pHRI systems. &#13;
+ oai:arXiv.org:2512.18712v1
+ cs.RO
+ Tue, 23 Dec 2025 00:00:00 -0500
new
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Janghoon Han, Heegyu Kim, Changho Lee, Dahm Lee, Min Hyung Park, Hosung Song, Stanley Jungkyu Choi, Moontae Lee, Honglak Lee
+ Maozeng Zhang, Ke Shi, Huijun Li, Tongshu Chen, Jiejun Yan, Aiguo Song
- LiteGE: Lightweight Geodesic Embedding for Efficient Geodesics Computation and Non-Isometric Shape Correspondence
- https://arxiv.org/abs/2512.17781
- arXiv:2512.17781v1 Announce Type: new
-Abstract: Computing geodesic distances on 3D surfaces is fundamental to many tasks in 3D vision and geometry processing, with deep connections to tasks such as shape correspondence. Recent learning-based methods achieve strong performance but rely on large 3D backbones, leading to high memory usage and latency, which limit their use in interactive or resource-constrained settings. We introduce LiteGE, a lightweight approach that constructs compact, category-aware shape descriptors by applying PCA to unsigned distance field (UDF) samples at informative voxels. This descriptor is efficient to compute and removes the need for high-capacity networks. LiteGE remains robust on sparse point clouds, supporting inputs with as few as 300 points, where prior methods fail. Extensive experiments show that LiteGE reduces memory usage and inference time by up to 300$\times$ compared to existing neural approaches. In addition, by exploiting the intrinsic relationship between geodesic distance and shape correspondence, LiteGE enables fast and accurate shape matching. Our method achieves up to 1000$\times$ speedup over state-of-the-art mesh-based approaches while maintaining comparable accuracy on non-isometric shape pairs, including evaluations on point-cloud inputs. &#13;
- oai:arXiv.org:2512.17781v1
- cs.CV
- cs.GR
- Mon, 22 Dec 2025 00:00:00 -0500
+ An Evidence-Driven Analysis of Threat Information Sharing Challenges for Industrial Control Systems and Future Directions
+ https://arxiv.org/abs/2512.18714
+ arXiv:2512.18714v1 Announce Type: new
+Abstract: The increasing cyber threats to critical infrastructure highlight the importance of private companies and government agencies in detecting and sharing information about threat activities. Although the need for improved threat information sharing is widely recognized, various technical and organizational challenges persist, hindering effective collaboration. In this study, we review the challenges that impede the sharing of usable threat information to critical infrastructure operators within the ICS domain. We analyze three major incidents: Stuxnet, Industroyer, and Triton. In addition, we perform a systematic analysis of 196 procedure examples across 79 MITRE ATT&CK techniques from 22 ICS-related malware families, utilizing automated natural language processing techniques to systematically extract and categorize threat observables. Additionally, we investigate nine recent ICS vulnerability advisories from the CISA Known Exploited Vulnerabilities catalog. Our analysis identified four important limitations in the ICS threat information sharing ecosystem: (i) the lack of coherent representation of artifacts related to ICS adversarial techniques in information sharing language standards (e.g., STIX); (ii) the dependence on undocumented proprietary technologies; (iii) limited technical details provided in vulnerability and threat incident reports; and (iv) the accessibility of technical details for observed adversarial techniques. This study aims to guide the development of future information-sharing standards, including the enhancement of the cyber-observable objects schema in STIX, to ensure accurate representation of artifacts specific to ICS environments. &#13;
+ oai:arXiv.org:2512.18714v1 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by-nc-sa/4.0/ - Yohanes Yudhi Adikusuma, Qixing Huang, Ying He - - - UrbanDIFF: A Denoising Diffusion Model for Spatial Gap Filling of Urban Land Surface Temperature Under Dense Cloud Cover - https://arxiv.org/abs/2512.17782 - arXiv:2512.17782v1 Announce Type: new -Abstract: Satellite-derived Land Surface Temperature (LST) products are central to surface urban heat island (SUHI) monitoring due to their consistent grid-based coverage over large metropolitan regions. However, cloud contamination frequently obscures LST observations, limiting their usability for continuous SUHI analysis. Most existing LST reconstruction methods rely on multitemporal information or multisensor data fusion, requiring auxiliary observations that may be unavailable or unreliable under persistent cloud cover. Purely spatial gap-filling approaches offer an alternative, but traditional statistical methods degrade under large or spatially contiguous gaps, while many deep learning based spatial models deteriorate rapidly with increasing missingness. - Recent advances in denoising diffusion based image inpainting models have demonstrated improved robustness under high missingness, motivating their adoption for spatial LST reconstruction. In this work, we introduce UrbanDIFF, a purely spatial denoising diffusion model for reconstructing cloud contaminated urban LST imagery. The model is conditioned on static urban structure information, including built-up surface data and a digital elevation model, and enforces strict consistency with revealed cloud free pixels through a supervised pixel guided refinement step during inference. - UrbanDIFF is trained and evaluated using NASA MODIS Terra LST data from seven major United States metropolitan areas spanning 2002 to 2025. 
Experiments using synthetic cloud masks with 20 to 85 percent coverage show that UrbanDIFF consistently outperforms an interpolation baseline, particularly under dense cloud occlusion, achieving SSIM of 0.89, RMSE of 1.2 K, and R2 of 0.84 at 85 percent cloud coverage, while exhibiting slower performance degradation as cloud density increases.
- oai:arXiv.org:2512.17782v1
- cs.CV
- Mon, 22 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
- Arya Chavoshi, Hassan Dashtian, Naveen Sudharsan, Dev Niyogi
- Long-Range depth estimation using learning based Hybrid Distortion Model for CCTV cameras
- https://arxiv.org/abs/2512.17784
- arXiv:2512.17784v1 Announce Type: new
-Abstract: Accurate camera models are essential for photogrammetry applications such as 3D mapping and object localization, particularly for long distances. Various stereo-camera based 3D localization methods are available but are limited to ranges of a few hundred meters. This is mainly due to the limitations of the distortion models assumed for the non-linearities present in the camera lens. This paper presents a framework for modeling a suitable distortion model that can be used for localizing objects at longer distances. It is well known that neural networks can be a better alternative to model a highly complex non-linear lens distortion function; on the contrary, it is observed that a direct application of neural networks to distortion models fails to converge to estimate the camera parameters. To resolve this, a hybrid approach is presented in this paper where the conventional distortion models are initially extended to incorporate higher-order terms and then enhanced using a neural network based residual correction model. &#13;
This hybrid approach has substantially improved long-range localization performance and is capable of estimating the 3D position of objects at distances up to 5 kilometres. The estimated 3D coordinates are transformed to GIS coordinates and are plotted on a GIS map for visualization. Experimental validation demonstrates the robustness and effectiveness of proposed framework, offering a practical solution to calibrate CCTV cameras for long-range photogrammetry applications. - oai:arXiv.org:2512.17784v1 + Rectification Reimagined: A Unified Mamba Model for Image Correction and Rectangling with Prompts + https://arxiv.org/abs/2512.18718 + arXiv:2512.18718v1 Announce Type: new +Abstract: Image correction and rectangling are valuable tasks in practical photography systems such as smartphones. Recent remarkable advancements in deep learning have undeniably brought about substantial performance improvements in these fields. Nevertheless, existing methods mainly rely on task-specific architectures. This significantly restricts their generalization ability and effective application across a wide range of different tasks. In this paper, we introduce the Unified Rectification Framework (UniRect), a comprehensive approach that addresses these practical tasks from a consistent distortion rectification perspective. Our approach incorporates various task-specific inverse problems into a general distortion model by simulating different types of lenses. To handle diverse distortions, UniRect adopts one task-agnostic rectification framework with a dual-component structure: a {Deformation Module}, which utilizes a novel Residual Progressive Thin-Plate Spline (RP-TPS) model to address complex geometric deformations, and a subsequent Restoration Module, which employs Residual Mamba Blocks (RMBs) to counteract the degradation caused by the deformation process and enhance the fidelity of the output image. 
Moreover, a Sparse Mixture-of-Experts (SMoEs) structure is designed to circumvent heavy task competition in multi-task learning due to varying distortions. Extensive experiments demonstrate that our models have achieved state-of-the-art performance compared with other up-to-date methods. + oai:arXiv.org:2512.18718v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Ami Pandat, Punna Rajasekhar, G. Aravamuthan, Gopika Vinod, Rohit Shukla + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Linwei Qiu, Gongzhe Li, Xiaozhe Zhang, Qinlin Sun, Fengying Xie - A Parametric Framework for Anticipatory Flashflood Warning: Integrating Landscape Vulnerability with Precipitation Forecasts - https://arxiv.org/abs/2512.17785 - arXiv:2512.17785v1 Announce Type: new -Abstract: Flash flood warnings are largely reactive, providing limited advance notice for evacuation planning and resource prepositioning. This study presents and validates an anticipatory, parametric framework that converts landscape vulnerability and precipitation into transparent, zone-aware threat levels at neighborhood scales. We first derive an inherent hazard likelihood (IHL) surface using pluvial flood depth, height above nearest drainage, and distance to streams. Next, we compute a hazard severity index (HSI) by normalizing 24-hour rainfall against local Atlas-14 100-year, 24-hour depths. We then integrate IHL and HSI within a localized threat severity (LTS) matrix using 20 class-specific triggers, requiring lower exceedance in high-risk terrain and higher exceedance in uplands. Applied to two Texas flood events, the LTS exhibits statistically significant spatial association with independent crowdsourced impact proxies, capturing observed disruption hotspots. 
The framework is computationally lightweight, scalable, and extends actionable situational awareness into a 48-72 hour anticipatory window, supporting pre-event decision-making by emergency managers. - oai:arXiv.org:2512.17785v1 - cs.CE - Mon, 22 Dec 2025 00:00:00 -0500 + Generating Risky Samples with Conformity Constraints via Diffusion Models + https://arxiv.org/abs/2512.18722 + arXiv:2512.18722v1 Announce Type: new +Abstract: Although neural networks achieve promising performance in many tasks, they may still fail when encountering some examples and bring about risks to applications. To discover risky samples, previous literature attempts to search for patterns of risky samples within existing datasets or inject perturbation into them. Yet in this way the diversity of risky samples is limited by the coverage of existing datasets. To overcome this limitation, recent works adopt diffusion models to produce new risky samples beyond the coverage of existing datasets. However, these methods struggle in the conformity between generated samples and expected categories, which could introduce label noise and severely limit their effectiveness in applications. To address this issue, we propose RiskyDiff that incorporates the embeddings of both texts and images as implicit constraints of category conformity. We also design a conformity score to further explicitly strengthen the category conformity, as well as introduce the mechanisms of embedding screening and risky gradient guidance to boost the risk of generated samples. Extensive experiments reveal that RiskyDiff greatly outperforms existing methods in terms of the degree of risk, generation quality, and conformity with conditioned categories. We also empirically show the generalization ability of the models can be enhanced by augmenting training data with generated samples of high conformity. 
+ oai:arXiv.org:2512.18722v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Xiangpeng Li, Junwei Ma, Samuel D Brody, Ali Mostafavi + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Han Yu, Hao Zou, Xingxuan Zhang, Zhengyi Wang, Yue He, Kehan Li, Peng Cui - Calibratable Disambiguation Loss for Multi-Instance Partial-Label Learning - https://arxiv.org/abs/2512.17788 - arXiv:2512.17788v1 Announce Type: new -Abstract: Multi-instance partial-label learning (MIPL) is a weakly supervised framework that extends the principles of multi-instance learning (MIL) and partial-label learning (PLL) to address the challenges of inexact supervision in both instance and label spaces. However, existing MIPL approaches often suffer from poor calibration, undermining classifier reliability. In this work, we propose a plug-and-play calibratable disambiguation loss (CDL) that simultaneously improves classification accuracy and calibration performance. The loss has two instantiations: the first one calibrates predictions based on probabilities from the candidate label set, while the second one integrates probabilities from both candidate and non-candidate label sets. The proposed CDL can be seamlessly incorporated into existing MIPL and PLL frameworks. We provide a theoretical analysis that establishes the lower bound and regularization properties of CDL, demonstrating its superiority over conventional disambiguation losses. Experimental results on benchmark and real-world datasets confirm that our CDL significantly enhances both classification and calibration performance. - oai:arXiv.org:2512.17788v1 + ML Inference Scheduling with Predictable Latency + https://arxiv.org/abs/2512.18725 + arXiv:2512.18725v1 Announce Type: new +Abstract: Machine learning (ML) inference serving systems can schedule requests to improve GPU utilization and to meet service level objectives (SLOs) or deadlines. 
However, improving GPU utilization may compromise latency-sensitive scheduling, as concurrent tasks contend for GPU resources and thereby introduce interference. Given that interference effects introduce unpredictability in scheduling, neglecting them may compromise SLO or deadline satisfaction. Nevertheless, existing interference prediction approaches remain limited in several respects, which may restrict their usefulness for scheduling. First, they are often coarse-grained, which ignores runtime co-location dynamics and thus restricts their accuracy in interference prediction. Second, they tend to use a static prediction model, which may not effectively cope with different workload characteristics. To this end, we evaluate the potential limitations of existing interference prediction approaches and outline our ongoing work toward achieving efficient ML inference scheduling. + oai:arXiv.org:2512.18725v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Wei Tang, Yin-Fang Yang, Weijia Zhang, Min-Ling Zhang + 10.1145/3774901.3778066 + Proceedings of the Middleware for Autonomous AIoT Systems in the Computing Continuum (MAIoT 2025) + Haidong Zhao, Nikolaos Georgantas - Systemic Risks of Interacting AI - https://arxiv.org/abs/2512.17793 - arXiv:2512.17793v1 Announce Type: new -Abstract: In this study, we investigate system-level emergent risks of interacting AI agents. The core contribution of this work is an exploratory scenario-based identification of these risks as well as their categorization. We consider a multitude of systemic risk examples from existing literature and develop two scenarios demonstrating emergent risk patterns in domains of smart grid and social welfare. We provide a taxonomy of identified risks that categorizes them in different groups. 
In addition, we make two other important contributions: first, we identify what emergent behavior types produce systemic risks, and second, we develop a graphical language "Agentology" for visualization of interacting AI systems. Our study opens a new research direction for system-level risks of interacting AI, and is the first to closely investigate them. - oai:arXiv.org:2512.17793v1 - cs.CY - Mon, 22 Dec 2025 00:00:00 -0500 + A Theoretical Lens for RL-Tuned Language Models via Energy-Based Models + https://arxiv.org/abs/2512.18730 + arXiv:2512.18730v1 Announce Type: new +Abstract: Large language models (LLMs) trained via KL-regularized reinforcement learning demonstrate strong instruction following, self-correction, and reasoning abilities. Yet their theoretical underpinnings remain limited. We exploit the closed-form energy-based model (EBM) structure of the optimal KL-regularized policy to provide a unified variational analysis of LLMs. + For instruction-tuned models, under natural assumptions on reward potentials and pretraining symmetry, we prove that the transition kernel satisfies detailed balance with respect to a scalar potential encoding response quality. This yields monotonic KL convergence to a high-quality stationary distribution, bounded hitting times to superior states, and exponential mixing governed by the spectral gap. + For reasoning models trained with verifiable rewards (RLVR), we show the objective is equivalent to expected KL minimization toward an optimal reasoning distribution, with the suboptimality gap reducing to the Bernoulli KL between target and current accuracies along the natural gradient flow. This helps explain empirical entropy-accuracy trade-offs. 
+ oai:arXiv.org:2512.18730v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Paul Darius, Thomas Hoppe, Andrei Aleksandrov + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhiquan Tan, Yinrong Hong - Intelligent Knowledge Mining Framework: Bridging AI Analysis and Trustworthy Preservation - https://arxiv.org/abs/2512.17795 - arXiv:2512.17795v1 Announce Type: new -Abstract: The unprecedented proliferation of digital data presents significant challenges in access, integration, and value creation across all data-intensive sectors. Valuable information is frequently encapsulated within disparate systems, unstructured documents, and heterogeneous formats, creating silos that impede efficient utilization and collaborative decision-making. This paper introduces the Intelligent Knowledge Mining Framework (IKMF), a comprehensive conceptual model designed to bridge the critical gap between dynamic AI-driven analysis and trustworthy long-term preservation. The framework proposes a dual-stream architecture: a horizontal Mining Process that systematically transforms raw data into semantically rich, machine-actionable knowledge, and a parallel Trustworthy Archiving Stream that ensures the integrity, provenance, and computational reproducibility of these assets. By defining a blueprint for this symbiotic relationship, the paper provides a foundational model for transforming static repositories into living ecosystems that facilitate the flow of actionable intelligence from producers to consumers. This paper outlines the motivation, problem statement, and key research questions guiding the research and development of the framework, presents the underlying scientific methodology, and details its conceptual design and modeling. 
- oai:arXiv.org:2512.17795v1 - cs.DL + Counterfactual Basis Extension and Representational Geometry: An MDL-Constrained Model of Conceptual Growth + https://arxiv.org/abs/2512.18732 + arXiv:2512.18732v1 Announce Type: new +Abstract: Concept learning becomes possible only when existing representations fail to account for experience. Most models of learning and inference, however, presuppose a fixed representational basis within which belief updating occurs. In this paper, I address a prior question: under what structural conditions can the representational basis itself expand in a principled and selective way? + I propose a geometric framework in which conceptual growth is modeled as admissible basis extension evaluated under a Minimum Description Length (MDL) criterion. Experience, whether externally observed or internally simulated, is represented as vectors relative to a current conceptual subspace. Residual components capture systematic representational failure, and candidate conceptual extensions are restricted to low-rank, admissible transformations. I show that any MDL-accepted extension can be chosen so that its novel directions lie entirely within the residual span induced by experience, while extensions orthogonal to this span strictly increase description length and are therefore rejected. + This yields a conservative account of imagination and conceptual innovation. Internally generated counterfactual representations contribute to learning only insofar as they expose or amplify structured residual error, and cannot introduce arbitrary novelty. I further distinguish representational counterfactuals--counterfactuals over an agent's conceptual basis--from causal or value-level counterfactuals, and show how MDL provides a normative selection principle governing representational change. 
+ Overall, the framework characterizes conceptual development as an error-driven, geometry-constrained process of basis extension, clarifying both the role and the limits of imagination in learning and theory change. + oai:arXiv.org:2512.18732v1 cs.AI - cs.IR - Mon, 22 Dec 2025 00:00:00 -0500 + cs.LG + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Binh Vu + Chainarong Amornbunchornvej - Animate Any Character in Any World - https://arxiv.org/abs/2512.17796 - arXiv:2512.17796v1 Announce Type: new -Abstract: Recent advances in world models have greatly enhanced interactive environment simulation. Existing methods mainly fall into two categories: (1) static world generation models, which construct 3D environments without active agents, and (2) controllable-entity models, which allow a single entity to perform limited actions in an otherwise uncontrollable environment. In this work, we introduce AniX, leveraging the realism and structural grounding of static world generation while extending controllable-entity models to support user-specified characters capable of performing open-ended actions. Users can provide a 3DGS scene and a character, then direct the character through natural language to perform diverse behaviors from basic locomotion to object-centric interactions while freely exploring the environment. AniX synthesizes temporally coherent video clips that preserve visual fidelity with the provided scene and character, formulated as a conditional autoregressive video generation problem. Built upon a pre-trained video generator, our training strategy significantly enhances motion dynamics while maintaining generalization across actions and characters. Our evaluation covers a broad range of aspects, including visual quality, character consistency, action controllability, and long-horizon coherence. 
- oai:arXiv.org:2512.17796v1 - cs.CV + Explainable and Fine-Grained Safeguarding of LLM Multi-Agent Systems via Bi-Level Graph Anomaly Detection + https://arxiv.org/abs/2512.18733 + arXiv:2512.18733v1 Announce Type: new +Abstract: Large language model (LLM)-based multi-agent systems (MAS) have shown strong capabilities in solving complex tasks. As MAS become increasingly autonomous in various safety-critical tasks, detecting malicious agents has become a critical security concern. Although existing graph anomaly detection (GAD)-based defenses can identify anomalous agents, they mainly rely on coarse sentence-level information and overlook fine-grained lexical cues, leading to suboptimal performance. Moreover, the lack of interpretability in these methods limits their reliability and real-world applicability. To address these limitations, we propose XG-Guard, an explainable and fine-grained safeguarding framework for detecting malicious agents in MAS. To incorporate both coarse and fine-grained textual information for anomalous agent identification, we utilize a bi-level agent encoder to jointly model the sentence- and token-level representations of each agent. A theme-based anomaly detector further captures the evolving discussion focus in MAS dialogues, while a bi-level score fusion mechanism quantifies token-level contributions for explanation. Extensive experiments across diverse MAS topologies and attack scenarios demonstrate robust detection performance and strong interpretability of XG-Guard. 
+ oai:arXiv.org:2512.18733v1 + cs.CR cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + cs.MA + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yitong Wang, Fangyun Wei, Hongyang Zhang, Bo Dai, Yan Lu + Junjun Pan, Yixin Liu, Rui Miao, Kaize Ding, Yu Zheng, Quoc Viet Hung Nguyen, Alan Wee-Chung Liew, Shirui Pan - Techno-Economic Case Study of a Rural Local Electricity Community in Switzerland - https://arxiv.org/abs/2512.17803 - arXiv:2512.17803v1 Announce Type: new -Abstract: Local Electricity Communities (communaut\'es \'electriques locales, CEL) will become operational in Switzerland in 2026, allowing prosumers, consumers, and storage operators within the same municipality and distribution system operator (DSO) area to exchange electricity over the public grid with reduced distribution tariffs. This report examines a rural Swiss case study to explore the techno-economic implications of CELs for both participants and the local DSO. The findings indicate that CELs can enhance the local use of renewable generation, particularly photovoltaics, and offer modest financial gains, with outcomes strongly shaped by community size, composition, and tariff design. Larger and more heterogeneous communities achieve better internal matching of supply and demand, though the overall incentive remains limited because the tariff reduction applies only to distribution charges. The study further shows that internal energy exchange is maximized when local PV generation covers roughly 1-2 times the community load. For DSOs, CELs reduce grid imports (27-46%), resulting in a substantial reduction in distribution tariff revenues (17-36%), necessitating regulatory adaptation. While centralized batteries provide economic value to members, their technical impact on the grid remains modest due to their small, economically optimized capacity. 
Larger centralized storage is shown to reduce transformer peak power, but risks increasing line loading, suggesting a need for careful sizing and placement. - oai:arXiv.org:2512.17803v1 - eess.SY - cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 + Breast Cancer Recurrence Risk Prediction Based on Multiple Instance Learning + https://arxiv.org/abs/2512.18734 + arXiv:2512.18734v1 Announce Type: new +Abstract: Predicting breast cancer recurrence risk is a critical clinical challenge. This study investigates the potential of computational pathology to stratify patients using deep learning on routine Hematoxylin and Eosin (H&E) stained whole-slide images (WSIs). We developed and compared three Multiple Instance Learning (MIL) frameworks -- CLAM-SB, ABMIL, and ConvNeXt-MIL-XGBoost -- on an in-house dataset of 210 patient cases. The models were trained to predict 5-year recurrence risk, categorized into three tiers (low, medium, high), with ground truth labels established by the 21-gene Recurrence Score. Features were extracted using the UNI and CONCH pre-trained models. In a 5-fold cross-validation, the modified CLAM-SB model demonstrated the strongest performance, achieving a mean Area Under the Curve (AUC) of 0.836 and a classification accuracy of 76.2%. Our findings demonstrate the feasibility of using deep learning on standard histology slides for automated, genomics-correlated risk stratification, highlighting a promising pathway toward rapid and cost-effective clinical decision support. 
+ oai:arXiv.org:2512.18734v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-sa/4.0/ - Gerard Marias Gonzalez, Alejandro Pena-Bello, J\'er\'emy Dumoulin, Nicolas Wyrsch + http://creativecommons.org/licenses/by/4.0/ + Jinqiu Chen, Huyan Xu - LLM-based Behaviour Driven Development for Hardware Design - https://arxiv.org/abs/2512.17814 - arXiv:2512.17814v1 Announce Type: new -Abstract: Test and verification are essential activities in hardware and system design, but their complexity grows significantly with increasing system sizes. While Behavior Driven Development (BDD) has proven effective in software engineering, it is not yet well established in hardware design, and its practical use remains limited. One contributing factor is the manual effort required to derive precise behavioral scenarios from textual specifications. - Recent advances in Large Language Models (LLMs) offer new opportunities to automate this step. In this paper, we investigate the use of LLM-based techniques to support BDD in the context of hardware design. - oai:arXiv.org:2512.17814v1 - cs.SE + $M^3-Verse$: A "Spot the Difference" Challenge for Large Multimodal Models + https://arxiv.org/abs/2512.18735 + arXiv:2512.18735v1 Announce Type: new +Abstract: Modern Large Multimodal Models (LMMs) have demonstrated extraordinary ability in static image and single-state spatial-temporal understanding. However, their capacity to comprehend the dynamic changes of objects within a shared spatial context between two distinct video observations, remains largely unexplored. This ability to reason about transformations within a consistent environment is particularly crucial for advancements in the field of spatial intelligence. In this paper, we introduce $M^3-Verse$, a Multi-Modal, Multi-State, Multi-Dimensional benchmark, to formally evaluate this capability. 
It is built upon paired videos that provide multi-perspective observations of an indoor scene before and after a state change. The benchmark contains a total of 270 scenes and 2,932 questions, which are categorized into over 50 subtasks that probe 4 core capabilities. We evaluate 16 state-of-the-art LMMs and observe their limitations in tracking state transitions. To address these challenges, we further propose a simple yet effective baseline that achieves significant performance improvements in multi-state perception. $M^3-Verse$ thus provides a challenging new testbed to catalyze the development of next-generation models with a more holistic understanding of our dynamic visual world. You can get the construction pipeline from https://github.com/Wal-K-aWay/M3-Verse_pipeline and full benchmark data from https://www.modelscope.cn/datasets/WalKaWay/M3-Verse. + oai:arXiv.org:2512.18735v1 + cs.CV cs.AI - cs.AR - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Kewei Wei, Bocheng Hu, Jie Cao, Xiaohan Chen, Zhengxi Lu, Wubing Xia, Weili Xu, Jiaao Wu, Junchen He, Mingyu Jia, Ciyun Zhao, Ye Sun, Yizhi Li, Zhonghan Zhao, Jian Zhang, Gaoang Wang + + + Is Your Conditional Diffusion Model Actually Denoising? + https://arxiv.org/abs/2512.18736 + arXiv:2512.18736v1 Announce Type: new +Abstract: We study the inductive biases of diffusion models with a conditioning-variable, which have seen widespread application as both text-conditioned generative image models and observation-conditioned continuous control policies. We observe that when these models are queried conditionally, their generations consistently deviate from the idealized "denoising" process upon which diffusion models are formulated, inducing disagreement between popular sampling algorithms (e.g. DDPM, DDIM). 
We introduce Schedule Deviation, a rigorous measure which captures the rate of deviation from a standard denoising process, and provide a methodology to compute it. Crucially, we demonstrate that the deviation from an idealized denoising process occurs irrespective of the model capacity or amount of training data. We posit that this phenomenon occurs due to the difficulty of bridging distinct denoising flows across different parts of the conditioning space and show theoretically how such a phenomenon can arise through an inductive bias towards smoothness. + oai:arXiv.org:2512.18736v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Rolf Drechsler, Qian Liu + Daniel Pfrommer, Zehao Dou, Christopher Scarvelis, Max Simchowitz, Ali Jadbabaie - Structure-Aware Antibody Design with Affinity-Optimized Inverse Folding - https://arxiv.org/abs/2512.17815 - arXiv:2512.17815v1 Announce Type: new -Abstract: Motivation: The clinical efficacy of antibody therapeutics critically depends on high-affinity target engagement, yet laboratory affinity-maturation campaigns are slow and costly. In computational settings, most protein language models (PLMs) are not trained to favor high-affinity antibodies, and existing preference optimization approaches introduce substantial computational overhead without clear affinity gains. Therefore, this work proposes SimBinder-IF, which converts the inverse folding model ESM-IF into an antibody sequence generator by freezing its structure encoder and training only its decoder to prefer experimentally stronger binders through preference optimization. - Results: On the 11-assay AbBiBench benchmark, SimBinder-IF achieves a 55 percent relative improvement in mean Spearman correlation between log-likelihood scores and experimentally measured binding affinity compared to vanilla ESM-IF (from 0.264 to 0.410). 
In zero-shot generalization across four unseen antigen-antibody complexes, the correlation improves by 156 percent (from 0.115 to 0.294). SimBinder-IF also outperforms baselines in top-10 precision for ten-fold or greater affinity improvements. A case study redesigning antibody F045-092 for A/California/04/2009 (pdmH1N1) shows that SimBinder-IF proposes variants with substantially lower predicted binding free energy changes than ESM-IF (mean Delta Delta G -75.16 vs -46.57). Notably, SimBinder-IF trains only about 18 percent of the parameters of the full ESM-IF model, highlighting its parameter efficiency for high-affinity antibody generation. - oai:arXiv.org:2512.17815v1 - cs.CE - q-bio.BM - Mon, 22 Dec 2025 00:00:00 -0500 + PIPCFR: Pseudo-outcome Imputation with Post-treatment Variables for Individual Treatment Effect Estimation + https://arxiv.org/abs/2512.18737 + arXiv:2512.18737v1 Announce Type: new +Abstract: The estimation of individual treatment effects (ITE) focuses on predicting the outcome changes that result from a change in treatment. A fundamental challenge in observational data is that while we need to infer outcome differences under alternative treatments, we can only observe each individual's outcome under a single treatment. Existing approaches address this limitation either by training with inferred pseudo-outcomes or by creating matched instance pairs. However, recent work has largely overlooked the potential impact of post-treatment variables on the outcome. This oversight prevents existing methods from fully capturing outcome variability, resulting in increased variance in counterfactual predictions. This paper introduces Pseudo-outcome Imputation with Post-treatment Variables for Counterfactual Regression (PIPCFR), a novel approach that incorporates post-treatment variables to improve pseudo-outcome imputation. 
We analyze the challenges inherent in utilizing post-treatment variables and establish a novel theoretical bound for ITE risk that explicitly connects post-treatment variables to ITE estimation accuracy. Unlike existing methods that ignore these variables or impose restrictive assumptions, PIPCFR learns effective representations that preserve informative components while mitigating bias. Empirical evaluations on both real-world and simulated datasets demonstrate that PIPCFR achieves significantly lower ITE errors compared to existing methods. + oai:arXiv.org:2512.18737v1 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xinyan Zhao, Yi-Ching Tang, Rivaaj Monsia, Victor J. Cantu, Ashwin Kumar Ramesh, Xiaozhong Liu, Zhiqiang An, Xiaoqian Jiang, Yejin Kim + Zichuan Lin, Xiaokai Huang, Jiate Liu, Yuxuan Han, Jia Chen, Xiapeng Wu, Deheng Ye - Chorus: Multi-Teacher Pretraining for Holistic 3D Gaussian Scene Encoding - https://arxiv.org/abs/2512.17817 - arXiv:2512.17817v1 Announce Type: new -Abstract: While 3DGS has emerged as a high-fidelity scene representation, encoding rich, general-purpose features directly from its primitives remains under-explored. We address this gap by introducing Chorus, a multi-teacher pretraining framework that learns a holistic feed-forward 3D Gaussian Splatting (3DGS) scene encoder by distilling complementary signals from 2D foundation models. Chorus employs a shared 3D encoder and teacher-specific projectors to learn from language-aligned, generalist, and object-aware teachers, encouraging a shared embedding space that captures signals from high-level semantics to fine-grained structure. - We evaluate Chorus on a wide range of tasks: open-vocabulary semantic and instance segmentation, linear and decoder probing, as well as data-efficient supervision. 
Besides 3DGS, we also test Chorus on several benchmarks that only support point clouds by pretraining a variant using only Gaussians' centers, colors, estimated normals as inputs. Interestingly, this encoder shows strong transfer and outperforms the point clouds baseline while using 39.9 times fewer training scenes. Finally, we propose a render-and-distill adaptation that facilitates out-of-domain finetuning. Our code and model will be released upon publication.
 - oai:arXiv.org:2512.17817v1
 + AMLID: An Adaptive Multispectral Landmine Identification Dataset for Drone-Based Detection
 + https://arxiv.org/abs/2512.18738
 + arXiv:2512.18738v1 Announce Type: new
+Abstract: Landmines remain a persistent humanitarian threat, with an estimated 110 million mines deployed across 60 countries, claiming approximately 26,000 casualties annually. Current detection methods are hazardous, inefficient, and prohibitively expensive. We present the Adaptive Multispectral Landmine Identification Dataset (AMLID), the first open-source dataset combining Red-Green-Blue (RGB) and Long-Wave Infrared (LWIR) imagery for Unmanned Aerial Systems (UAS)-based landmine detection. AMLID comprises 12,078 labeled images featuring 21 globally deployed landmine types across anti-personnel and anti-tank categories in both metal and plastic compositions. The dataset spans 11 RGB-LWIR fusion levels, four sensor altitudes, two seasonal periods, and three daily illumination conditions. By providing comprehensive multispectral coverage across diverse environmental variables, AMLID enables researchers to develop and benchmark adaptive detection algorithms without requiring access to live ordnance or expensive data collection infrastructure, thereby democratizing humanitarian demining research. &#13;
+ oai:arXiv.org:2512.18738v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Yue Li, Qi Ma, Runyi Yang, Mengjiao Ma, Bin Ren, Nikola Popovic, Nicu Sebe, Theo Gevers, Luc Van Gool, Danda Pani Paudel, Martin R. Oswald + http://creativecommons.org/licenses/by/4.0/ + James E. Gallagher, Edward J. Oughton - Playful but Persuasive: Deceptive Designs and Advertising Strategies in Popular Mobile Apps for Children - https://arxiv.org/abs/2512.17819 - arXiv:2512.17819v1 Announce Type: new -Abstract: Mobile gaming apps are woven into children's daily lives. Given their ongoing cognitive and emotional development, children are especially vulnerable and depend on designs that safeguard their well-being. When apps feature manipulative interfaces or heavy advertising, they may exert undue influence on young users, contributing to prolonged screen time, disrupted self-regulation, and accidental in-app purchases. In this study, we examined 20 popular, free-to-download children's apps in German-speaking regions to assess the prevalence of deceptive design patterns and advertising. Despite platform policies and EU frameworks like the General Data Protection Regulation and the Digital Services Act, every app contained interface manipulations intended to nudge, confuse, or pressure young users, averaging nearly six distinct deceptive patterns per app. Most also displayed high volumes of non-skippable ads, frequently embedded within core gameplay. These findings indicate a systemic failure of existing safeguards and call for stronger regulation, greater platform accountability, and child-centered design standards. 
- oai:arXiv.org:2512.17819v1 - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 + Memorize-and-Generate: Towards Long-Term Consistency in Real-Time Video Generation + https://arxiv.org/abs/2512.18741 + arXiv:2512.18741v1 Announce Type: new +Abstract: Frame-level autoregressive (frame-AR) models have achieved significant progress, enabling real-time video generation comparable to bidirectional diffusion models and serving as a foundation for interactive world models and game engines. However, current approaches in long video generation typically rely on window attention, which naively discards historical context outside the window, leading to catastrophic forgetting and scene inconsistency; conversely, retaining full history incurs prohibitive memory costs. To address this trade-off, we propose \textbf{Memorize-and-Generate (MAG)}, a framework that decouples memory compression and frame generation into distinct tasks. Specifically, we train a memory model to compress historical information into a compact KV cache, and a separate generator model to synthesize subsequent frames utilizing this compressed representation. Furthermore, we introduce \textbf{MAG-Bench} to strictly evaluate historical memory retention. Extensive experiments demonstrate that MAG achieves superior historical scene consistency while maintaining competitive performance on standard video generation benchmarks. 
+ oai:arXiv.org:2512.18741v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hannah Krahl, Katrin Hartwig, Ann-Kathrin Fischer, Theodora Nikolakopoulou, Guy Pires Cabritas, Eva Ungeheuer, Nina Gerber, Alina St\"over + Tianrui Zhu, Shiyi Zhang, Zhirui Sun, Jingqi Tian, Yansong Tang - Exploiting ID-Text Complementarity via Ensembling for Sequential Recommendation - https://arxiv.org/abs/2512.17820 - arXiv:2512.17820v1 Announce Type: new -Abstract: Modern Sequential Recommendation (SR) models commonly utilize modality features to represent items, motivated in large part by recent advancements in language and vision modeling. To do so, several works completely replace ID embeddings with modality embeddings, claiming that modality embeddings render ID embeddings unnecessary because they can match or even exceed ID embedding performance. On the other hand, many works jointly utilize ID and modality features, but posit that complex fusion strategies, such as multi-stage training and/or intricate alignment architectures, are necessary for this joint utilization. However, underlying both these lines of work is a lack of understanding of the complementarity of ID and modality features. In this work, we address this gap by studying the complementarity of ID- and text-based SR models. We show that these models do learn complementary signals, meaning that either should provide performance gain when used properly alongside the other. Motivated by this, we propose a new SR method that preserves ID-text complementarity through independent model training, then harnesses it through a simple ensembling strategy. Despite this method's simplicity, we show it outperforms several competitive SR baselines, implying that both ID and text features are necessary to achieve state-of-the-art SR performance but complex fusion architectures are not. 
- oai:arXiv.org:2512.17820v1 + InSight-o3: Empowering Multimodal Foundation Models with Generalized Visual Search + https://arxiv.org/abs/2512.18745 + arXiv:2512.18745v1 Announce Type: new +Abstract: The ability for AI agents to "think with images" requires a sophisticated blend of reasoning and perception. However, current open multimodal agents still largely fall short on the reasoning aspect crucial for real-world tasks like analyzing documents with dense charts/diagrams and navigating maps. To address this gap, we introduce O3-Bench, a new benchmark designed to evaluate multimodal reasoning with interleaved attention to visual details. O3-Bench features challenging problems that require agents to piece together subtle visual information from distinct image areas through multi-step reasoning. The problems are highly challenging even for frontier systems like OpenAI o3, which only obtains 40.8% accuracy on O3-Bench. To make progress, we propose InSight-o3, a multi-agent framework consisting of a visual reasoning agent (vReasoner) and a visual search agent (vSearcher) for which we introduce the task of generalized visual search -- locating relational, fuzzy, or conceptual regions described in free-form language, beyond just simple objects or figures in natural images. We then present a multimodal LLM purpose-trained for this task via reinforcement learning. As a plug-and-play agent, our vSearcher empowers frontier multimodal models (as vReasoners), significantly improving their performance on a wide range of benchmarks. This marks a concrete step towards powerful o3-like open systems. Our code and dataset can be found at https://github.com/m-Just/InSight-o3 . 
+ oai:arXiv.org:2512.18745v1 + cs.CV + cs.CL cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Liam Collins, Bhuvesh Kumar, Clark Mingxuan Ju, Tong Zhao, Donald Loveland, Leonardo Neves, Neil Shah + http://creativecommons.org/licenses/by/4.0/ + Kaican Li, Lewei Yao, Jiannan Wu, Tiezheng Yu, Jierun Chen, Haoli Bai, Lu Hou, Lanqing Hong, Wei Zhang, Nevin L. Zhang - A 14ns-Latency 9Gb/s 0.44mm$^2$ 62pJ/b Short-Blocklength LDPC Decoder ASIC in 22FDX - https://arxiv.org/abs/2512.17834 - arXiv:2512.17834v1 Announce Type: new -Abstract: Ultra-reliable low latency communication (URLLC) is a key part of 5G wireless systems. Achieving low latency necessitates codes with short blocklengths for which polar codes with successive cancellation list (SCL) decoding typically outperform message-passing (MP)-based decoding of low-density parity-check (LDPC) codes. However, SCL decoders are known to exhibit high latency and poor area efficiency. In this paper, we propose a new short-blocklength multi-rate binary LDPC code that outperforms the 5G-LDPC code for the same blocklength and is suitable for URLLC applications using fully parallel MP. To demonstrate our code's efficacy, we present a 0.44mm$^2$ GlobalFoundries 22FDX LDPC decoder ASIC which supports three rates and achieves the lowest-in-class decoding latency of 14ns while reaching an information throughput of 9Gb/s at 62pJ/b energy efficiency for a rate-1/2 code with 128-bit blocklength. - oai:arXiv.org:2512.17834v1 - cs.AR - eess.SP - Mon, 22 Dec 2025 00:00:00 -0500 + MemEvolve: Meta-Evolution of Agent Memory Systems + https://arxiv.org/abs/2512.18746 + arXiv:2512.18746v1 Announce Type: new +Abstract: Self-evolving memory systems are unprecedentedly reshaping the evolutionary paradigm of large language model (LLM)-based agents. 
Prior work has predominantly relied on manually engineered memory architectures to store trajectories, distill experience, and synthesize reusable tools, enabling agents to evolve on the fly within environment interactions. However, this paradigm is fundamentally constrained by the staticity of the memory system itself: while memory facilitates agent-level evolving, the underlying memory architecture cannot be meta-adapted to diverse task contexts. To address this gap, we propose MemEvolve, a meta-evolutionary framework that jointly evolves agents' experiential knowledge and their memory architecture, allowing agent systems not only to accumulate experience but also to progressively refine how they learn from it. To ground MemEvolve in prior research and foster openness in future self-evolving systems, we introduce EvolveLab, a unified self-evolving memory codebase that distills twelve representative memory systems into a modular design space (encode, store, retrieve, manage), providing both a standardized implementation substrate and a fair experimental arena. Extensive evaluations on four challenging agentic benchmarks demonstrate that MemEvolve achieves (I) substantial performance gains, improving frameworks such as SmolAgent and Flash-Searcher by up to $17.06\%$; and (II) strong cross-task and cross-LLM generalization, designing memory architectures that transfer effectively across diverse benchmarks and backbone models. 
+ oai:arXiv.org:2512.18746v1 + cs.CL + cs.MA + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Darja Nonaca, J\'er\'emy Guichemerre, Reinhard Wiesmayr, Nihat Engin Tunali, Christoph Studer + http://creativecommons.org/licenses/by/4.0/ + Guibin Zhang, Haotian Ren, Chong Zhan, Zhenhong Zhou, Junhao Wang, He Zhu, Wangchunshu Zhou, Shuicheng Yan - ReX-MLE: The Autonomous Agent Benchmark for Medical Imaging Challenges - https://arxiv.org/abs/2512.17838 - arXiv:2512.17838v1 Announce Type: new -Abstract: Autonomous coding agents built on large language models (LLMs) can now solve many general software and machine learning tasks, but they remain ineffective on complex, domain-specific scientific problems. Medical imaging is a particularly demanding domain, requiring long training cycles, high-dimensional data handling, and specialized preprocessing and validation pipelines, capabilities not fully measured in existing agent benchmarks. To address this gap, we introduce ReX-MLE, a benchmark of 20 challenges derived from high-impact medical imaging competitions spanning diverse modalities and task types. Unlike prior ML-agent benchmarks, ReX-MLE evaluates full end-to-end workflows, requiring agents to independently manage data preprocessing, model training, and submission under realistic compute and time constraints. Evaluating state-of-the-art agents (AIDE, ML-Master, R&D-Agent) with different LLM backends (GPT-5, Gemini, Claude), we observe a severe performance gap: most submissions rank in the 0th percentile compared to human experts. Failures stem from domain-knowledge and engineering limitations. ReX-MLE exposes these bottlenecks and provides a foundation for developing domain-aware autonomous AI systems. 
- oai:arXiv.org:2512.17838v1 + IPCV: Information-Preserving Compression for MLLM Visual Encoders + https://arxiv.org/abs/2512.18747 + arXiv:2512.18747v1 Announce Type: new +Abstract: Multimodal Large Language Models (MLLMs) deliver strong vision-language performance but at high computational cost, driven by numerous visual tokens processed by the Vision Transformer (ViT) encoder. Existing token pruning strategies are inadequate: LLM-stage token pruning overlooks the ViT's overhead, while conventional ViT token pruning, without language guidance, risks discarding textually critical visual cues and introduces feature distortions amplified by the ViT's bidirectional attention. To meet these challenges, we propose IPCV, a training-free, information-preserving compression framework for MLLM visual encoders. IPCV enables aggressive token pruning inside the ViT via Neighbor-Guided Reconstruction (NGR) that temporarily reconstructs pruned tokens to participate in attention with minimal overhead, then fully restores them before passing to the LLM. Besides, we introduce Attention Stabilization (AS) to further alleviate the negative influence from token pruning by approximating the K/V of pruned tokens. It can be directly applied to previous LLM-side token pruning methods to enhance their performance. Extensive experiments show that IPCV substantially reduces end-to-end computation and outperforms state-of-the-art training-free token compression methods across diverse image and video benchmarks. Our code is available at https://github.com/Perkzi/IPCV. 
+ oai:arXiv.org:2512.18747v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Roshan Kenia, Xiaoman Zhang, Pranav Rajpurkar + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yuan Chen, Zichen Wen, Yuzhou Wu, Xuyang Liu, Shuang Chen, Junpeng Ma, Weijia Li, Conghui He, Linfeng Zhang - NeuRehab: A Reinforcement Learning and Spiking Neural Network-Based Rehab Automation Framework - https://arxiv.org/abs/2512.17841 - arXiv:2512.17841v1 Announce Type: new -Abstract: Recent advancements in robotic rehabilitation therapy have provided modular exercise systems for post-stroke muscle recovery with basic control schemes. But these systems struggle to adapt to patients' complex and ever-changing behaviour, and to operate within mobile settings, such as heat and power. To aid this, we present NeuRehab: an end-to-end framework consisting of a training and inference pipeline with AI-based automation, co-designed with neuromorphic computing-based control systems that balance action performance, power consumption, and observed latency. The framework consists of 2 partitions. One is designated for the rehabilitation device based on ultra-low power spiking networks deployed on dedicated neuromorphic hardware. The other resides on stationary hardware that can accommodate computationally intensive hardware for fine-tuning on a per-patient basis. By maintaining a communication channel between both the modules and splitting the algorithm components, the power and latency requirements of the movable system have been optimised, while retaining the learning performance advantages of compute- and power-hungry hardware on the stationary machine. As part of the framework, we propose (a) the split machine learning processes for efficiency in architectural utilisation, and (b) task-specific temporal optimisations to lower edge-inference control latency. 
This paper evaluates the proposed methods on a reference stepper motor-based shoulder exercise. Overall, these methods offer comparable performance uplifts over the state of the art for neuromorphic deployment, while achieving over 60% savings in both power and latency during inference compared to standard implementations.
 - oai:arXiv.org:2512.17841v1
 - cs.CE
 - cs.SY
 - eess.SY
 - Mon, 22 Dec 2025 00:00:00 -0500
 + Code2Doc: A Quality-First Curated Dataset for Code Documentation
 + https://arxiv.org/abs/2512.18748
 + arXiv:2512.18748v1 Announce Type: new
+Abstract: The performance of automatic code documentation generation models depends critically on the quality of the training data used for supervision. However, most existing code documentation datasets are constructed through large scale scraping of public repositories with limited quality control. As a result, they often contain noisy documentation, extensive duplication, and increasing contamination from AI generated content. These issues weaken the supervision signal available to learning-based models and complicate evaluation.
 + We introduce \textbf{Code2Doc}, a quality-first curated dataset for function-level code documentation generation. Code2Doc consists of 13,358 high-quality function-documentation pairs extracted from widely used open-source repositories spanning five programming languages: Python, Java, TypeScript, JavaScript, and C++. The dataset is constructed using a four-stage curation pipeline that enforces documentation completeness and clarity, filters functions based on structural and complexity criteria, removes exact and near-duplicate code, and identifies documentation likely to be AI generated. Starting from 52,069 extracted candidates, only 25.6 percent satisfy all quality constraints.
 + We provide a detailed analysis of the resulting dataset, which achieves a mean documentation quality score of 6.93 out of 10. &#13;
Overall, 86.9% of samples contain explicit type annotations, and only 2.9% are flagged as potentially AI generated. Baseline experiments show that fine-tuning a large language model on Code2Doc yields relative improvements of 29.47% in BLEU and 24.04% in ROUGE-L over zero shot performance, despite the modest dataset size. We release both the dataset and the full curation pipeline to support reproducible research on automatic code documentation generation.
 + oai:arXiv.org:2512.18748v1
 + cs.SE
 + cs.AI
 + cs.CL
 + Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 - Phani Pavan Kambhampati (IIIT Bangalore), Chainesh Gautam (IIIT Bangalore), Jagan Palaniswamy (IIIT Bangalore), Madhav Rao (IIIT Bangalore)
 + Recep Kaan Karaman, Meftun Akarsu
 
 
 - ShareChat: A Dataset of Chatbot Conversations in the Wild
 - https://arxiv.org/abs/2512.17843
 - arXiv:2512.17843v1 Announce Type: new
-Abstract: While Large Language Models (LLMs) have evolved into distinct platforms with unique interface designs and capabilities, existing public datasets treat models as generic text generators, stripping away the interface context that actively shapes user interaction. To address this limitation, we present ShareChat, a large-scale, cross-platform corpus comprising 142,808 conversations and over 660,000 turns collected from publicly shared URLs across five major platforms: ChatGPT, Claude, Gemini, Perplexity, and Grok. ShareChat distinguishes itself by preserving native platform affordances often lost in standard logs, including reasoning traces, source links, and code artifacts, while spanning 101 languages over the period from April 2023 to October 2025. Furthermore, ShareChat offers substantially longer context windows and greater interaction depth than prior datasets. &#13;
We demonstrate the dataset's multifaceted utility through three representative analyses: (1) analyzing conversation completeness to measure user intent satisfaction; (2) evaluating source citation behaviors in content generation; and (3) conducting temporal analysis to track evolving usage patterns. This work provides the community with a vital and timely resource for understanding authentic user-LLM chatbot interactions in the wild. - oai:arXiv.org:2512.17843v1 - cs.CL - cs.AI - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 + Context-Aware Network Based on Multi-scale Spatio-temporal Attention for Action Recognition in Videos + https://arxiv.org/abs/2512.18750 + arXiv:2512.18750v1 Announce Type: new +Abstract: Action recognition is a critical task in video understanding, requiring the comprehensive capture of spatio-temporal cues across various scales. However, existing methods often overlook the multi-granularity nature of actions. To address this limitation, we introduce the Context-Aware Network (CAN). CAN consists of two core modules: the Multi-scale Temporal Cue Module (MTCM) and the Group Spatial Cue Module (GSCM). MTCM effectively extracts temporal cues at multiple scales, capturing both fast-changing motion details and overall action flow. GSCM, on the other hand, extracts spatial cues at different scales by grouping feature maps and applying specialized extraction methods to each group. Experiments conducted on five benchmark datasets (Something-Something V1 and V2, Diving48, Kinetics-400, and UCF101) demonstrate the effectiveness of CAN. Our approach achieves competitive performance, outperforming most mainstream methods, with accuracies of 50.4% on Something-Something V1, 63.9% on Something-Something V2, 88.4% on Diving48, 74.9% on Kinetics-400, and 86.9% on UCF101. These results highlight the importance of capturing multi-scale spatio-temporal cues for robust action recognition. 
+ oai:arXiv.org:2512.18750v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Yueru Yan, Tuc Nguyen, Bo Su, Melissa Lieffers, Thai Le + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xiaoyang Li, Wenzhu Yang, Kanglin Wang, Tiebiao Wang, Qingsong Fei - Capacitated Partition Vertex Cover and Partition Edge Cover - https://arxiv.org/abs/2512.17844 - arXiv:2512.17844v1 Announce Type: new -Abstract: Our first focus is the Capacitated Partition Vertex Cover (C-PVC) problem in hypergraphs. In C-PVC, we are given a hypergraph with capacities on its vertices and a partition of the hyperedge set into $\omega$ distinct groups. The objective is to select a minimum size subset of vertices that satisfies two main conditions: (1) in each group, the total number of covered hyperedges meets a specified threshold, and (2) the number of hyperedges assigned to any vertex respects its capacity constraint. A covered hyperedge is required to be assigned to a selected vertex that belongs to the hyperedge. This formulation generalizes classical Vertex Cover, Partial Vertex Cover, and Partition Vertex Cover. - We investigate two primary variants: soft capacitated (multiple copies of a vertex are allowed) and hard capacitated (each vertex can be chosen at most once). Let $f$ denote the rank of the hypergraph. Our main contributions are: $(i)$ an $(f+1)$-approximation algorithm for the weighted soft-capacitated C-PVC problem, which runs in $n^{O(\omega)}$ time, and $(ii)$ an $(f+\epsilon)$-approximation algorithm for the unweighted hard-capacitated C-PVC problem, which runs in $n^{O(\omega/\epsilon)}$ time. - We also study a natural generalization of the edge cover problem, the \emph{Weighted Partition Edge Cover} (W-PEC) problem, where each edge has an associated weight, and the vertex set is partitioned into groups. 
For each group, the goal is to cover at least a specified number of vertices using incident edges, while minimizing the total weight of the selected edges. We present the first exact polynomial-time algorithm for the weighted case, improving runtime from $O(\omega n^3)$ to $O(mn+n^2 \log n)$ and simplifying the algorithmic structure over prior unweighted approaches.
 - oai:arXiv.org:2512.17844v1
 - cs.DS
 - Mon, 22 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 - Rajni Dabas, Samir Khuller, Emilie Rivkin
 + ISADM: An Integrated STRIDE, ATT&CK, and D3FEND Model for Threat Modeling Against Real-world Adversaries
 + https://arxiv.org/abs/2512.18751
 + arXiv:2512.18751v1 Announce Type: new 
+Abstract: FinTech's increasing connectivity, rapid innovation, and reliance on global digital infrastructures present significant cybersecurity challenges. Traditional cybersecurity frameworks often struggle to identify and prioritize sector-specific vulnerabilities or adapt to evolving adversary tactics, particularly in highly targeted sectors such as FinTech. To address these gaps, we propose ISADM (Integrated STRIDE-ATTACK-D3FEND Threat Model), a novel hybrid methodology applied to FinTech security that integrates STRIDE's asset-centric threat classification with MITRE ATTACK's catalog of real-world adversary behaviors and D3FEND's structured knowledge of countermeasures. ISADM employs a frequency-based scoring mechanism to quantify the prevalence of adversarial Tactics, Techniques, and Procedures (TTPs), enabling a proactive, score-driven risk assessment and prioritization framework. This proactive approach contributes to shifting organizations from reactive defense strategies toward the strategic fortification of critical assets. We validate ISADM through industry-relevant case study analyses, demonstrating how the approach replicates actual attack patterns and strengthens proactive threat modeling, guiding risk prioritization and resource allocation to the most critical vulnerabilities. 
Overall, ISADM offers a comprehensive hybrid threat modeling methodology that bridges asset-centric and adversary-centric analysis, providing FinTech systems with stronger defenses. The emphasis on real-world validation highlights its practical significance in enhancing the sector's cybersecurity posture through a frequency-informed, impact-aware prioritization scheme that combines empirical attacker data with contextual risk analysis. + oai:arXiv.org:2512.18751v1 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Rajni Dabas, Samir Khuller, Emilie Rivkin + Khondokar Fida Hasan, Hasibul Hossain Shajeeb, Chathura Abeydeera, Benjamin Turnbull, Matthew Warren - Planning as Descent: Goal-Conditioned Latent Trajectory Synthesis in Learned Energy Landscapes - https://arxiv.org/abs/2512.17846 - arXiv:2512.17846v1 Announce Type: new -Abstract: We present Planning as Descent (PaD), a framework for offline goal-conditioned reinforcement learning that grounds trajectory synthesis in verification. Instead of learning a policy or explicit planner, PaD learns a goal-conditioned energy function over entire latent trajectories, assigning low energy to feasible, goal-consistent futures. Planning is realized as gradient-based refinement in this energy landscape, using identical computation during training and inference to reduce train-test mismatch common in decoupled modeling pipelines. - PaD is trained via self-supervised hindsight goal relabeling, shaping the energy landscape around the planning dynamics. At inference, multiple trajectory candidates are refined under different temporal hypotheses, and low-energy plans balancing feasibility and efficiency are selected. - We evaluate PaD on OGBench cube manipulation tasks. When trained on narrow expert demonstrations, PaD achieves state-of-the-art 95\% success, strongly outperforming prior methods that peak at 68\%. 
Remarkably, training on noisy, suboptimal data further improves success and plan efficiency, highlighting the benefits of verification-driven planning. Our results suggest learning to evaluate and refine trajectories provides a robust alternative to direct policy learning for offline, reward-free planning. - oai:arXiv.org:2512.17846v1 - cs.RO + MEEA: Mere Exposure Effect-Driven Confrontational Optimization for LLM Jailbreaking + https://arxiv.org/abs/2512.18755 + arXiv:2512.18755v1 Announce Type: new +Abstract: The rapid advancement of large language models (LLMs) has intensified concerns about the robustness of their safety alignment. While existing jailbreak studies explore both single-turn and multi-turn strategies, most implicitly assume a static safety boundary and fail to account for how contextual interactions dynamically influence model behavior, leading to limited stability and generalization. Motivated by this gap, we propose MEEA (Mere Exposure Effect Attack), a psychology-inspired, fully automated black-box framework for evaluating multi-turn safety robustness, grounded in the mere exposure effect. MEEA leverages repeated low-toxicity semantic exposure to induce a gradual shift in a model's effective safety threshold, enabling progressive erosion of alignment constraints over sustained interactions. Concretely, MEEA constructs semantically progressive prompt chains and optimizes them using a simulated annealing strategy guided by semantic similarity, toxicity, and jailbreak effectiveness. Extensive experiments on both closed-source and open-source models, including GPT-4, Claude-3.5, and DeepSeek-R1, demonstrate that MEEA consistently achieves higher attack success rates than seven representative baselines, with an average Attack Success Rate (ASR) improvement exceeding 20%. Ablation studies further validate the necessity of both annealing-based optimization and contextual exposure mechanisms. 
Beyond improved attack effectiveness, our findings indicate that LLM safety behavior is inherently dynamic and history-dependent, challenging the common assumption of static alignment boundaries and highlighting the need for interaction-aware safety evaluation and defense mechanisms. Our code is available at: https://github.com/Carney-lsz/MEEA + oai:arXiv.org:2512.18755v1 cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Carlos V\'elez Garc\'ia, Miguel Cazorla, Jorge Pomares + Jianyi Zhang, Shizhao Liu, Ziyin Zhou, Zhen Li - Integrating Computational Methods and AI into Qualitative Studies of Aging and Later Life - https://arxiv.org/abs/2512.17850 - arXiv:2512.17850v1 Announce Type: new -Abstract: This chapter demonstrates how computational social science (CSS) tools are extending and expanding research on aging. The depth and context from traditionally qualitative methods such as participant observation, in-depth interviews, and historical documents are increasingly employed alongside scalable data management, computational text analysis, and open-science practices. Machine learning (ML) and natural language processing (NLP), provide resources to aggregate and systematically index large volumes of qualitative data, identify patterns, and maintain clear links to in-depth accounts. Drawing on case studies of projects that examine later life--including examples with original data from the DISCERN study (a team-based ethnography of life with dementia) and secondary analyses of the American Voices Project (nationally representative interview)--the chapter highlights both uses and challenges of bringing CSS tools into more meaningful dialogue with qualitative aging research. 
The chapter argues such work has potential for (1) streamlining and augmenting existing workflows, (2) scaling up samples and projects, and (3) generating multi-method approaches to address important questions in new ways, before turning to practices useful for individuals and teams seeking to understand current possibilities or refine their workflow processes. The chapter concludes that current developments are not without peril, but offer potential for new insights into aging and the life course by broadening--rather than replacing--the methodological foundations of qualitative research. - oai:arXiv.org:2512.17850v1 - cs.CY - cs.AI - cs.HC - stat.AP - Mon, 22 Dec 2025 00:00:00 -0500 + How Many Pinching Antennas Are Enough? + https://arxiv.org/abs/2512.18761 + arXiv:2512.18761v1 Announce Type: new +Abstract: Programmable wireless environments (PWEs) have emerged as a key paradigm for next-generation communication networks, aiming to transform wireless propagation from an uncontrollable phenomenon into a reconfigurable process that can adapt to diverse service requirements. In this framework, pinching-antenna systems (PASs) have recently been proposed as a promising enabling technology, as they allow the radiation location and effective propagation distance to be adjusted by selectively exciting radiating points along a dielectric waveguide. However, most existing studies on PASs rely on the idealized assumption that pinching-antenna (PA) positions can be continuously adjusted along the waveguide, while realistically only a finite set of pinching locations is available. Motivated by this, this paper analyzes the performance of two-state PASs, where the PA positions are fixed and only their activation state can be controlled. By explicitly accounting for the spatial discreteness of the available pinching points, closed-form analytical expressions for the outage probability and the ergodic achievable data rate are derived. 
In addition, we introduce the pinching discretization efficiency to quantify the performance gap between discrete and continuous pinching configurations, enabling a direct assessment of the number of PAs required to approximate the ideal continuous case. Finally, numerical results validate the analytical framework and show that near-continuous performance can be achieved with a limited number of PAs, offering useful insights for the design and deployment of PASs in PWEs. + oai:arXiv.org:2512.18761v1 + cs.NI + eess.SP + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Corey M. Abramson + http://creativecommons.org/licenses/by/4.0/ + Dimitrios Tyrovolas, Sotiris A. Tegos, Yue Xiao, Panagiotis D. Diamantoulakis, Sotiris Ioannidis, Christos K. Liaskos, George K. Karagiannidis, Stylianos D. Asimonis - InfSplign: Inference-Time Spatial Alignment of Text-to-Image Diffusion Models - https://arxiv.org/abs/2512.17851 - arXiv:2512.17851v1 Announce Type: new -Abstract: Text-to-image (T2I) diffusion models generate high-quality images but often fail to capture the spatial relations specified in text prompts. This limitation can be traced to two factors: lack of fine-grained spatial supervision in training data and inability of text embeddings to encode spatial semantics. We introduce InfSplign, a training-free inference-time method that improves spatial alignment by adjusting the noise through a compound loss in every denoising step. Proposed loss leverages different levels of cross-attention maps extracted from the backbone decoder to enforce accurate object placement and a balanced object presence during sampling. The method is lightweight, plug-and-play, and compatible with any diffusion backbone. 
Our comprehensive evaluations on VISOR and T2I-CompBench show that InfSplign establishes a new state-of-the-art (to the best of our knowledge), achieving substantial performance gains over the strongest existing inference-time baselines and even outperforming the fine-tuning-based methods. Codebase is available at GitHub. - oai:arXiv.org:2512.17851v1 - cs.CV - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Gaussian-Mixture-Model Q-Functions for Policy Iteration in Reinforcement Learning + https://arxiv.org/abs/2512.18763 + arXiv:2512.18763v1 Announce Type: new +Abstract: Unlike their conventional use as estimators of probability density functions in reinforcement learning (RL), this paper introduces a novel function-approximation role for Gaussian mixture models (GMMs) as direct surrogates for Q-function losses. These parametric models, termed GMM-QFs, possess substantial representational capacity, as they are shown to be universal approximators over a broad class of functions. They are further embedded within Bellman residuals, where their learnable parameters -- a fixed number of mixing weights, together with Gaussian mean vectors and covariance matrices -- are inferred from data via optimization on a Riemannian manifold. This geometric perspective on the parameter space naturally incorporates Riemannian optimization into the policy-evaluation step of standard policy-iteration frameworks. Rigorous theoretical results are established, and supporting numerical tests show that, even without access to experience data, GMM-QFs deliver competitive performance and, in some cases, outperform state-of-the-art approaches across a range of benchmark RL tasks, all while maintaining a significantly smaller computational footprint than deep-learning methods that rely on experience data. 
+ oai:arXiv.org:2512.18763v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Sarah Rastegar, Violeta Chatalbasheva, Sieger Falkena, Anuj Singh, Yanbo Wang, Tejas Gokhale, Hamid Palangi, Hadi Jamali-Rad + http://creativecommons.org/licenses/by/4.0/ + Minh Vu, Konstantinos Slavakis - Simulation-Driven Deep Learning Framework for Raman Spectral Denoising Under Fluorescence-Dominant Conditions - https://arxiv.org/abs/2512.17852 - arXiv:2512.17852v1 Announce Type: new -Abstract: Raman spectroscopy enables non-destructive, label-free molecular analysis with high specificity, making it a powerful tool for biomedical diagnostics. However, its application to biological tissues is challenged by inherently weak Raman scattering and strong fluorescence background, which significantly degrade signal quality. In this study, we present a simulation-driven denoising framework that combines a statistically grounded noise model with deep learning to enhance Raman spectra acquired under fluorescence-dominated conditions. We comprehensively modeled major noise sources. Based on this model, we generated biologically realistic Raman spectra and used them to train a cascaded deep neural network designed to jointly suppress stochastic detector noise and fluorescence baseline interference. To evaluate the performance of our approach, we simulated human skin spectra derived from real experimental data as a validation case study. Our results demonstrate the potential of physics-informed learning to improve spectral quality and enable faster, more accurate Raman-based tissue analysis. 
- oai:arXiv.org:2512.17852v1 + MaskFocus: Focusing Policy Optimization on Critical Steps for Masked Image Generation + https://arxiv.org/abs/2512.18766 + arXiv:2512.18766v1 Announce Type: new +Abstract: Reinforcement learning (RL) has demonstrated significant potential for post-training language models and autoregressive visual generative models, but adapting RL to masked generative models remains challenging. The core factor is that policy optimization requires accounting for the probability likelihood of each step due to its multi-step and iterative refinement process. This reliance on entire sampling trajectories introduces high computational cost, whereas natively optimizing random steps often yields suboptimal results. In this paper, we present MaskFocus, a novel RL framework that achieves effective policy optimization for masked generative models by focusing on critical steps. Specifically, we determine the step-level information gain by measuring the similarity between the intermediate images at each sampling step and the final generated image. Crucially, we leverage this to identify the most critical and valuable steps and execute focused policy optimization on them. Furthermore, we design a dynamic routing sampling mechanism based on entropy to encourage the model to explore more valuable masking strategies for samples with low entropy. Extensive experiments on multiple Text-to-Image benchmarks validate the effectiveness of our method. + oai:arXiv.org:2512.18766v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Mengkun Chen, Sanidhya D. Tripathi, James W. 
Tunnell + Guohui Zhang, Hu Yu, Xiaoxiao Ma, Yaning Pan, Hang Xu, Feng Zhao - AnyTask: an Automated Task and Data Generation Framework for Advancing Sim-to-Real Policy Learning - https://arxiv.org/abs/2512.17853 - arXiv:2512.17853v1 Announce Type: new -Abstract: Generalist robot learning remains constrained by data: large-scale, diverse, and high-quality interaction data are expensive to collect in the real world. While simulation has become a promising way for scaling up data collection, the related tasks, including simulation task design, task-aware scene generation, expert demonstration synthesis, and sim-to-real transfer, still demand substantial human effort. We present AnyTask, an automated framework that pairs massively parallel GPU simulation with foundation models to design diverse manipulation tasks and synthesize robot data. We introduce three AnyTask agents for generating expert demonstrations aiming to solve as many tasks as possible: 1) ViPR, a novel task and motion planning agent with VLM-in-the-loop Parallel Refinement; 2) ViPR-Eureka, a reinforcement learning agent with generated dense rewards and LLM-guided contact sampling; 3) ViPR-RL, a hybrid planning and learning approach that jointly produces high-quality demonstrations with only sparse rewards. We train behavior cloning policies on generated data, validate them in simulation, and deploy them directly on real robot hardware. The policies generalize to novel object poses, achieving 44% average success across a suite of real-world pick-and-place, drawer opening, contact-rich pushing, and long-horizon manipulation tasks. Our project website is at https://anytask.rai-inst.com . 
- oai:arXiv.org:2512.17853v1 - cs.RO - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + In-Context Audio Control of Video Diffusion Transformers + https://arxiv.org/abs/2512.18772 + arXiv:2512.18772v1 Announce Type: new +Abstract: Recent advancements in video generation have seen a shift towards unified, transformer-based foundation models that can handle multiple conditional inputs in-context. However, these models have primarily focused on modalities like text, images, and depth maps, while strictly time-synchronous signals like audio have been underexplored. This paper introduces In-Context Audio Control of video diffusion transformers (ICAC), a framework that investigates the integration of audio signals for speech-driven video generation within a unified full-attention architecture, akin to FullDiT. We systematically explore three distinct mechanisms for injecting audio conditions: standard cross-attention, 2D self-attention, and unified 3D self-attention. Our findings reveal that while 3D attention offers the highest potential for capturing spatio-temporal audio-visual correlations, it presents significant training challenges. To overcome this, we propose a Masked 3D Attention mechanism that constrains the attention pattern to enforce temporal alignment, enabling stable training and superior performance. Our experiments demonstrate that this approach achieves strong lip synchronization and video quality, conditioned on an audio stream and reference images. 
 + oai:arXiv.org:2512.18772v1
 + cs.CV
 + Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 + Wenze Liu, Weicai Ye, Minghong Cai, Quande Liu, Xintao Wang, Xiangyu Yue
 + Label-Informed Outlier Detection Based on Granule Density
 + https://arxiv.org/abs/2512.18774
 + arXiv:2512.18774v1 Announce Type: new 
+Abstract: Outlier detection, crucial for identifying unusual patterns with significant implications across numerous applications, has drawn considerable research interest. Existing semi-supervised methods typically treat data as purely numerical and in a deterministic manner, thereby neglecting the heterogeneity and uncertainty inherent in complex, real-world datasets. This paper introduces a label-informed outlier detection method for heterogeneous data based on Granular Computing and Fuzzy Sets, namely Granule Density-based Outlier Factor (GDOF). 
Specifically, GDOF first employs label-informed fuzzy granulation to effectively represent various data types and develops granule density for precise density estimation. Subsequently, granule densities from individual attributes are integrated for outlier scoring by assessing attribute relevance with a limited number of labeled outliers. Experimental results on various real-world datasets show that GDOF stands out in detecting outliers in heterogeneous data with a minimal number of labeled outliers. The integration of Fuzzy Sets and Granular Computing in GDOF offers a practical framework for outlier detection in complex and diverse data types. All relevant datasets and source codes are publicly available for further research. This is the author's accepted manuscript of a paper published in IEEE Transactions on Fuzzy Systems. The final version is available at https://doi.org/10.1109/TFUZZ.2024.3514853 + oai:arXiv.org:2512.18774v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mariana Bergonzi, Joaqu\'in Fern\'andez, Ernesto Kofman + 10.1109/TFUZZ.2024.3514853 + IEEE Transactions on Fuzzy Systems, vol. 33, no. 4, pp. 1391-1401, 2025 + Baiyang Chen, Zhong Yuan, Dezhong Peng, Hongmei Chen, Xiaomin Song, Huiming Zheng - Interpretable Plant Leaf Disease Detection Using Attention-Enhanced CNN - https://arxiv.org/abs/2512.17864 - arXiv:2512.17864v1 Announce Type: new -Abstract: Plant diseases pose a significant threat to global food security, necessitating accurate and interpretable disease detection methods. This study introduces an interpretable attention-guided Convolutional Neural Network (CNN), CBAM-VGG16, for plant leaf disease detection. By integrating Convolution Block Attention Module (CBAM) at each convolutional stage, the model enhances feature extraction and disease localization. 
Trained on five diverse plant disease datasets, our approach outperforms recent techniques, achieving high accuracy (up to 98.87%) and demonstrating robust generalization. Here, we show the effectiveness of our method through comprehensive evaluation and interpretability analysis using CBAM attention maps, Grad-CAM, Grad-CAM++, and Layer-wise Relevance Propagation (LRP). This study advances the application of explainable AI in agricultural diagnostics, offering a transparent and reliable system for smart farming. The code of our proposed work is available at https://github.com/BS0111/PlantAttentionCBAM. - oai:arXiv.org:2512.17864v1 - cs.CV - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + "Even GPT Can Reject Me": Conceptualizing Abrupt Refusal Secondary Harm (ARSH) and Reimagining Psychological AI Safety with Compassionate Completion Standard (CCS) + https://arxiv.org/abs/2512.18776 + arXiv:2512.18776v1 Announce Type: new +Abstract: Large Language Models (LLMs) and AI chatbots are increasingly used for emotional and mental health support due to their low cost, immediacy, and accessibility. However, when safety guardrails are triggered, conversations may be abruptly terminated, introducing a distinct form of emotional disruption that can exacerbate distress and elevate risk among already vulnerable users. As this phenomenon gains attention, this viewpoint introduces Abrupt Refusal Secondary Harm (ARSH) as a conceptual framework to describe the psychological impacts of sudden conversational discontinuation caused by AI safety protocols. Drawing on counseling psychology and communication science as conceptual heuristics, we argue that abrupt refusals can rupture perceived relational continuity, evoke feelings of rejection or shame, and discourage future help seeking. 
To mitigate these risks, we propose a design hypothesis, the Compassionate Completion Standard (CCS), a refusal protocol grounded in Human Centered Design (HCD) that maintains safety constraints while preserving relational coherence. CCS emphasizes empathetic acknowledgment, transparent boundary articulation, graded conversational transition, and guided redirection, replacing abrupt disengagement with psychologically attuned closure. By integrating awareness of ARSH into AI safety design, developers and policymakers can reduce preventable iatrogenic harm and advance a more psychologically informed approach to AI governance. Rather than presenting incremental empirical findings, this viewpoint contributes a timely conceptual framework, articulates a testable design hypothesis, and outlines a coordinated research agenda for improving psychological safety in human AI interaction. + oai:arXiv.org:2512.18776v1 + cs.CY + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Balram Singh, Ram Prakash Sharma, Somnath Dey + Yang Ni, Tong Yang - InSPECT: Invariant Spectral Features Preservation of Diffusion Models - https://arxiv.org/abs/2512.17873 - arXiv:2512.17873v1 Announce Type: new -Abstract: Modern diffusion models (DMs) have achieved state-of-the-art image generation. However, the fundamental design choice of diffusing data all the way to white noise and then reconstructing it leads to an extremely difficult and computationally intractable prediction task. To overcome this limitation, we propose InSPECT (Invariant Spectral Feature-Preserving Diffusion Model), a novel diffusion model that keeps invariant spectral features during both the forward and backward processes. At the end of the forward process, the Fourier coefficients smoothly converge to a specified random noise, enabling features preservation while maintaining diversity and randomness. 
By preserving invariant features, InSPECT demonstrates enhanced visual diversity, faster convergence rate, and a smoother diffusion process. Experiments on CIFAR-10, Celeb-A, and LSUN demonstrate that InSPECT achieves on average a 39.23% reduction in FID and 45.80% improvement in IS against DDPM for 10K iterations under specified parameter settings, which demonstrates the significant advantages of preserving invariant features: achieving superior generation quality and diversity, while enhancing computational efficiency and enabling faster convergence rate. To the best of our knowledge, this is the first attempt to analyze and preserve invariant spectral features in diffusion models. - oai:arXiv.org:2512.17873v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + From Natural Language to Control Signals: A Conceptual Framework for Semantic Channel Finding in Complex Experimental Infrastructure + https://arxiv.org/abs/2512.18779 + arXiv:2512.18779v1 Announce Type: new +Abstract: Modern experimental platforms such as particle accelerators, fusion devices, telescopes, and industrial process control systems expose tens to hundreds of thousands of control and diagnostic channels accumulated over decades of evolution. Operators and AI systems rely on informal expert knowledge, inconsistent naming conventions, and fragmented documentation to locate signals for monitoring, troubleshooting, and automated control, creating a persistent bottleneck for reliability, scalability, and language-model-driven interfaces. We formalize semantic channel finding-mapping natural-language intent to concrete control-system signals-as a general problem in complex experimental infrastructure, and introduce a four-paradigm framework to guide architecture selection across facility-specific data regimes. 
The paradigms span (i) direct in-context lookup over curated channel dictionaries, (ii) constrained hierarchical navigation through structured trees, (iii) interactive agent exploration using iterative reasoning and tool-based database queries, and (iv) ontology-grounded semantic search that decouples channel meaning from facility-specific naming conventions. We demonstrate each paradigm through proof-of-concept implementations at four operational facilities spanning two orders of magnitude in scale-from compact free-electron lasers to large synchrotron light sources-and diverse control-system architectures, from clean hierarchies to legacy environments. These implementations achieve 90-97% accuracy on expert-curated operational queries. + oai:arXiv.org:2512.18779v1 + cs.CL + physics.acc-ph + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Baohua Yan, Qingyuan Liu, Jennifer Kava, Xuan Di + http://creativecommons.org/licenses/by/4.0/ + Thorsten Hellert, Nikolay Agladze, Alex Giovannone, Jan Jug, Frank Mayet, Mark Sherwin, Antonin Sulc, Chris Tennant - Visually Prompted Benchmarks Are Surprisingly Fragile - https://arxiv.org/abs/2512.17875 - arXiv:2512.17875v1 Announce Type: new -Abstract: A key challenge in evaluating VLMs is testing models' ability to analyze visual content independently from their textual priors. Recent benchmarks such as BLINK probe visual perception through visual prompting, where questions about visual content are paired with coordinates to which the question refers, with the coordinates explicitly marked in the image itself. While these benchmarks are an important part of VLM evaluation, we find that existing models are surprisingly fragile to seemingly irrelevant details of visual prompting: simply changing a visual marker from red to blue can completely change rankings among models on a leaderboard. 
By evaluating nine commonly-used open- and closed-source VLMs on two visually prompted tasks, we demonstrate how details in benchmark setup, including visual marker design and dataset size, have a significant influence on model performance and leaderboard rankings. These effects can even be exploited to lift weaker models above stronger ones; for instance, slightly increasing the size of the visual marker results in open-source InternVL3-8B ranking alongside or better than much larger proprietary models like Gemini 2.5 Pro. We further show that low-level inference choices that are often ignored in benchmarking, such as JPEG compression levels in API calls, can also cause model lineup changes. These details have substantially larger impacts on visually prompted benchmarks than on conventional semantic VLM evaluations. To mitigate this instability, we curate existing datasets to create VPBench, a larger visually prompted benchmark with 16 visual marker variants. VPBench and additional analysis tools are released at https://lisadunlap.github.io/vpbench/. - oai:arXiv.org:2512.17875v1 + Eff-GRot: Efficient and Generalizable Rotation Estimation with Transformers + https://arxiv.org/abs/2512.18784 + arXiv:2512.18784v1 Announce Type: new +Abstract: We introduce Eff-GRot, an approach for efficient and generalizable rotation estimation from RGB images. Given a query image and a set of reference images with known orientations, our method directly predicts the object's rotation in a single forward pass, without requiring object- or category-specific training. At the core of our framework is a transformer that performs a comparison in the latent space, jointly processing rotation-aware representations from multiple references alongside a query. This design enables a favorable balance between accuracy and computational efficiency while remaining simple, scalable, and fully end-to-end. 
Experimental results show that Eff-GRot offers a promising direction toward more efficient rotation estimation, particularly in latency-sensitive applications. + oai:arXiv.org:2512.18784v1 cs.CV cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Haiwen Feng, Long Lian, Lisa Dunlap, Jiahao Shu, XuDong Wang, Renhao Wang, Trevor Darrell, Alane Suhr, Angjoo Kanazawa + Fanis Mathioulakis, Gorjan Radevski, Tinne Tuytelaars - Weighted Stochastic Differential Equation to Implement Wasserstein-Fisher-Rao Gradient Flow - https://arxiv.org/abs/2512.17878 - arXiv:2512.17878v1 Announce Type: new -Abstract: Score-based diffusion models currently constitute the state of the art in continuous generative modeling. These methods are typically formulated via overdamped or underdamped Ornstein--Uhlenbeck-type stochastic differential equations, in which sampling is driven by a combination of deterministic drift and Brownian diffusion, resulting in continuous particle trajectories in the ambient space. While such dynamics enjoy exponential convergence guarantees for strongly log-concave target distributions, it is well known that their mixing rates deteriorate exponentially in the presence of nonconvex or multimodal landscapes, such as double-well potentials. Since many practical generative modeling tasks involve highly non-log-concave target distributions, considerable recent effort has been devoted to developing sampling schemes that improve exploration beyond classical diffusion dynamics. - A promising line of work leverages tools from information geometry to augment diffusion-based samplers with controlled mass reweighting mechanisms. This perspective leads naturally to Wasserstein--Fisher--Rao (WFR) geometries, which couple transport in the sample space with vertical (reaction) dynamics on the space of probability measures. 
In this work, we formulate such reweighting mechanisms through the introduction of explicit correction terms and show how they can be implemented via weighted stochastic differential equations using the Feynman--Kac representation. Our study provides a preliminary but rigorous investigation of WFR-based sampling dynamics, and aims to clarify their geometric and operator-theoretic structure as a foundation for future theoretical and algorithmic developments. - oai:arXiv.org:2512.17878v1 - cs.LG + Smark: A Watermark for Text-to-Speech Diffusion Models via Discrete Wavelet Transform + https://arxiv.org/abs/2512.18791 + arXiv:2512.18791v1 Announce Type: new +Abstract: Text-to-Speech (TTS) diffusion models generate high-quality speech, which raises challenges for the model intellectual property protection and speech tracing for legal use. Audio watermarking is a promising solution. However, due to the structural differences among various TTS diffusion models, existing watermarking methods are often designed for a specific model and degrade audio quality, which limits their practical applicability. To address this dilemma, this paper proposes a universal watermarking scheme for TTS diffusion models, termed Smark. This is achieved by designing a lightweight watermark embedding framework that operates in the common reverse diffusion paradigm shared by all TTS diffusion models. To mitigate the impact on audio quality, Smark utilizes the discrete wavelet transform (DWT) to embed watermarks into the relatively stable low-frequency regions of the audio, which ensures seamless watermark-audio integration and is resistant to removal during the reverse diffusion process. Extensive experiments are conducted to evaluate the audio quality and watermark performance in various simulated real-world attack scenarios. The experimental results show that Smark achieves superior performance in both audio quality and watermark extraction accuracy. 
+ oai:arXiv.org:2512.18791v1 + cs.SD cs.AI - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/publicdomain/zero/1.0/ - Herlock Rahimi + http://creativecommons.org/licenses/by/4.0/ + Yichuan Zhang, Chengxin Li, Yujie Gu - Your Eyes Controlled the Game: Real-Time Cognitive Training Adaptation based on Eye-Tracking and Physiological Data in Virtual Reality - https://arxiv.org/abs/2512.17882 - arXiv:2512.17882v1 Announce Type: new -Abstract: Cognitive training for sustained attention and working memory is vital across domains relying on robust mental capacity such as education or rehabilitation. Adaptive systems are essential, dynamically matching difficulty to user ability to maintain engagement and accelerate learning. Current adaptive systems often rely on simple performance heuristics or predict visual complexity and affect instead of cognitive load. This study presents the first implementation of real-time adaptive cognitive load control in Virtual Reality cognitive training based on eye-tracking and physiological data. We developed a bidirectional LSTM model with a self-attention mechanism, trained on eye-tracking and physiological (PPG, GSR) data from 74 participants. We deployed it in real-time with 54 participants across single-task (sustained attention) and dual-task (sustained attention + mental arithmetic) paradigms. Difficulty was adjusted dynamically based on participant self-assessment or model's real-time cognitive load predictions. Participants showed a tendency to estimate the task as too difficult, even though they were objectively performing at their best. Over the course of a 10-minute session, both adaptation methods converged at equivalent difficulty in single-task scenarios, with no significant differences in subjective workload or game performance. 
However, in the dual-task conditions, the model successfully pushed users to higher difficulty levels without performance penalties or increased frustration, highlighting a user tendency to underestimate capacity under high cognitive load. Findings indicate that machine learning models may provide more objective cognitive capacity assessments than self-directed approaches, mitigating subjective performance biases and enabling more effective training by pushing users beyond subjective comfort zones toward physiologically-determined optimal challenge levels. - oai:arXiv.org:2512.17882v1 - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 + The Dead Salmons of AI Interpretability + https://arxiv.org/abs/2512.18792 + arXiv:2512.18792v1 Announce Type: new +Abstract: In a striking neuroscience study, the authors placed a dead salmon in an MRI scanner and showed it images of humans in social situations. Astonishingly, standard analyses of the time reported brain regions predictive of social emotions. The explanation, of course, was not supernatural cognition but a cautionary tale about misapplied statistical inference. In AI interpretability, reports of similar ''dead salmon'' artifacts abound: feature attribution, probing, sparse auto-encoding, and even causal analyses can produce plausible-looking explanations for randomly initialized neural networks. In this work, we examine this phenomenon and argue for a pragmatic statistical-causal reframing: explanations of computational systems should be treated as parameters of a (statistical) model, inferred from computational traces. This perspective goes beyond simply measuring statistical variability of explanations due to finite sampling of input data; interpretability methods become statistical estimators, and findings should be tested against explicit and meaningful alternative computational hypotheses, with uncertainty quantified with respect to the postulated statistical model. 
It also highlights important theoretical issues, such as the identifiability of common interpretability queries, which we argue is critical to understand the field's susceptibility to false discoveries, poor generalizability, and high variance. More broadly, situating interpretability within the standard toolkit of statistical inference opens promising avenues for future work aimed at turning AI interpretability into a pragmatic and rigorous science. + oai:arXiv.org:2512.18792v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Dominik Szczepaniak, Monika Harvey, Fani Deligianni + Maxime M\'eloux, Giada Dirupo, Fran\c{c}ois Portet, Maxime Peyrard - Map2Video: Street View Imagery Driven AI Video Generation - https://arxiv.org/abs/2512.17883 - arXiv:2512.17883v1 Announce Type: new -Abstract: AI video generation has lowered barriers to video creation, but current tools still struggle with inconsistency. Filmmakers often find that clips fail to match characters and backgrounds, making it difficult to build coherent sequences. A formative study with filmmakers highlighted challenges in shot composition, character motion, and camera control. We present Map2Video, a street view imagery-driven AI video generation tool grounded in real-world geographies. The system integrates Unity and ComfyUI with the VACE video generation model, as well as OpenStreetMap and Mapillary for street view imagery. Drawing on familiar filmmaking practices such as location scouting and rehearsal, Map2Video enables users to choose map locations, position actors and cameras in street view imagery, sketch movement paths, refine camera motion, and generate spatially consistent videos. We evaluated Map2Video with 12 filmmakers. Compared to an image-to-video baseline, it achieved higher spatial accuracy, required less cognitive effort, and offered stronger controllability for both scene replication and open-ended creative exploration. 
 - oai:arXiv.org:2512.17883v1
 - cs.HC
 - Mon, 22 Dec 2025 00:00:00 -0500
 + Reliable Audio Deepfake Detection in Variable Conditions via Quantum-Kernel SVMs
 + https://arxiv.org/abs/2512.18797
 + arXiv:2512.18797v1 Announce Type: new
+Abstract: Detecting synthetic speech is challenging when labeled data are scarce and recording conditions vary. Existing end-to-end deep models often overfit or fail to generalize, and while kernel methods can remain competitive, their performance heavily depends on the chosen kernel. Here, we show that using a quantum kernel in audio deepfake detection reduces false-positive rates without increasing model size. Quantum feature maps embed data into high-dimensional Hilbert spaces, enabling the use of expressive similarity measures and compact classifiers. Building on this motivation, we compare quantum-kernel SVMs (QSVMs) with classical SVMs using identical mel-spectrogram preprocessing and stratified 5-fold cross-validation across four corpora (ASVspoof 2019 LA, ASVspoof 5 (2024), ADD23, and an In-the-Wild set). QSVMs achieve consistently lower equal-error rates (EER): 0.183 vs. 0.299 on ASVspoof 5 (2024), 0.081 vs. 0.188 on ADD23, 0.346 vs. 0.399 on ASVspoof 2019, and 0.355 vs. 0.413 In-the-Wild. At the EER operating point (where FPR equals FNR), these correspond to absolute false-positive-rate reductions of 0.116 (38.8%), 0.107 (56.9%), 0.053 (13.3%), and 0.058 (14.0%), respectively. We also report how consistent the results are across cross-validation folds and margin-based measures of class separation, using identical settings for both models. The only modification is the kernel; the features and SVM remain unchanged, no additional trainable parameters are introduced, and the quantum kernel is computed on a conventional computer. 
+ oai:arXiv.org:2512.18797v1 + cs.SD + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hye-Young Jo, Mose Sakashita, Aditi Mishra, Ryo Suzuki, Koichiro Niinuma, Aakar Gupta + http://creativecommons.org/publicdomain/zero/1.0/ + Lisan Al Amin, Vandana P. Janeja - Regularized Random Fourier Features and Finite Element Reconstruction for Operator Learning in Sobolev Space - https://arxiv.org/abs/2512.17884 - arXiv:2512.17884v1 Announce Type: new -Abstract: Operator learning is a data-driven approximation of mappings between infinite-dimensional function spaces, such as the solution operators of partial differential equations. Kernel-based operator learning can offer accurate, theoretically justified approximations that require less training than standard methods. However, they can become computationally prohibitive for large training sets and can be sensitive to noise. We propose a regularized random Fourier feature (RRFF) approach, coupled with a finite element reconstruction map (RRFF-FEM), for learning operators from noisy data. The method uses random features drawn from multivariate Student's $t$ distributions, together with frequency-weighted Tikhonov regularization that suppresses high-frequency noise. We establish high-probability bounds on the extreme singular values of the associated random feature matrix and show that when the number of features $N$ scales like $m \log m$ with the number of training samples $m$, the system is well-conditioned, which yields estimation and generalization guarantees. Detailed numerical experiments on benchmark PDE problems, including advection, Burgers', Darcy flow, Helmholtz, Navier-Stokes, and structural mechanics, demonstrate that RRFF and RRFF-FEM are robust to noise and achieve improved performance with reduced training time compared to the unregularized random feature model, while maintaining competitive accuracy relative to kernel and neural operator tests. 
- oai:arXiv.org:2512.17884v1 - cs.LG - cs.NA - math.NA - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 + Quantifying the Lifelong Impact of Resilience Interventions via Agent-Based LLM Simulation + https://arxiv.org/abs/2512.18803 + arXiv:2512.18803v1 Announce Type: new +Abstract: Establishing the long-term, causal impact of psychological interventions on life outcomes is a grand challenge for the social sciences, caught between the limitations of correlational longitudinal studies and short-term randomized controlled trials (RCTs). This paper introduces Large-Scale Agent-based Longitudinal Simulation (LALS), a framework that resolves this impasse by simulating multi-decade, counterfactual life trajectories. The methodology employs a "digital clone" design where 2,500 unique LLM-based agent personas (grounded in a curated corpus of 3,917 empirical research articles) are each cloned across a 2x2 factorial experiment. Specifically, the simulation models the efficacy of extended psychological resilience training (Intervention vs. Control) either in childhood or as a young adult (age 6 vs. age 18). Comparing digital clones enables exceptionally precise causal inference. The simulation provides a quantitative, causal estimate of a resilience intervention's lifelong effects, revealing significant reductions in mortality, a lower incidence of dementia, and a substantial increase in accumulated wealth. Crucially, the results uncover a crucial developmental window: the intervention administered at age 6 produced more than double the positive impact on lifetime wealth compared to the same intervention at age 18. These benefits were most pronounced for agents from low-socioeconomic backgrounds, highlighting a powerful buffering effect. The LALS framework serves as a "computational wind tunnel" for social science, offering a new paradigm for generating and testing causal hypotheses about the complex, lifelong dynamics that shape human capital and well-being. 
+ oai:arXiv.org:2512.18803v1 + cs.CY + cs.MA + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Xinyue Yu, Hayden Schaeffer + Vivienne L'Ecuyer Ming (Possibility Sciences, The Human Trust, Neurotech Collider Lab at UC Berkeley) - Keypoint Counting Classifiers: Turning Vision Transformers into Self-Explainable Models Without Training - https://arxiv.org/abs/2512.17891 - arXiv:2512.17891v1 Announce Type: new -Abstract: Current approaches for designing self-explainable models (SEMs) require complicated training procedures and specific architectures which makes them impractical. With the advance of general purpose foundation models based on Vision Transformers (ViTs), this impracticability becomes even more problematic. Therefore, new methods are necessary to provide transparency and reliability to ViT-based foundation models. In this work, we present a new method for turning any well-trained ViT-based model into a SEM without retraining, which we call Keypoint Counting Classifiers (KCCs). Recent works have shown that ViTs can automatically identify matching keypoints between images with high precision, and we build on these results to create an easily interpretable decision process that is inherently visualizable in the input. We perform an extensive evaluation which show that KCCs improve the human-machine communication compared to recent baselines. We believe that KCCs constitute an important step towards making ViT-based foundation models more transparent and reliable. - oai:arXiv.org:2512.17891v1 + Tempo as the Stable Cue: Hierarchical Mixture of Tempo and Beat Experts for Music to 3D Dance Generation + https://arxiv.org/abs/2512.18804 + arXiv:2512.18804v1 Announce Type: new +Abstract: Music to 3D dance generation aims to synthesize realistic and rhythmically synchronized human dance from music. 
While existing methods often rely on additional genre labels to further improve dance generation, such labels are typically noisy, coarse, unavailable, or insufficient to capture the diversity of real-world music, which can result in rhythm misalignment or stylistic drift. In contrast, we observe that tempo, a core property reflecting musical rhythm and pace, remains relatively consistent across datasets and genres, typically ranging from 60 to 200 BPM. Based on this finding, we propose TempoMoE, a hierarchical tempo-aware Mixture-of-Experts module that enhances the diffusion model and its rhythm perception. TempoMoE organizes motion experts into tempo-structured groups for different tempo ranges, with multi-scale beat experts capturing fine- and long-range rhythmic dynamics. A Hierarchical Rhythm-Adaptive Routing dynamically selects and fuses experts from music features, enabling flexible, rhythm-aligned generation without manual genre labels. Extensive experiments demonstrate that TempoMoE achieves state-of-the-art results in dance quality and rhythm alignment. + oai:arXiv.org:2512.18804v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + cs.MM + cs.SD + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Kristoffer Wickstr{\o}m, Teresa Dorszewski, Siyan Chen, Michael Kampffmeyer, Elisabeth Wetzer, Robert Jenssen + Guangtao Lyu, Chenghao Xu, Qi Liu, Jiexi Yan, Muli Yang, Fen Fang, Cheng Deng - Prefix Trees Improve Memory Consumption in Large-Scale Continuous-Time Stochastic Models - https://arxiv.org/abs/2512.17892 - arXiv:2512.17892v1 Announce Type: new -Abstract: Highly-concurrent system models with vast state spaces like Chemical Reaction Networks (CRNs) that model biological and chemical systems pose a formidable challenge to cutting-edge formal analysis tools. 
Although many symbolic approaches have been presented, transient probability analysis of CRNs, modeled as Continuous-Time Markov Chains (CTMCs), requires explicit state representation. For that purpose, current cutting-edge methods use hash maps, which boast constant average time complexity and linear memory complexity. However, hash maps often suffer from severe memory limitations on models with immense state spaces. To address this, we propose using prefix trees to store states for large, highly concurrent models (particularly CRNs) for memory savings. We present theoretical analyses and benchmarks demonstrating the favorability of prefix trees over hash maps for very large state spaces. Additionally, we propose using a Bounded Model Checking (BMC) pre-processing step to impose a variable ordering to further improve memory usage along with preliminary evaluations suggesting its effectiveness. We remark that while our work is motivated primarily by the challenges posed by CRNs, it is generalizable to all CTMC models. - oai:arXiv.org:2512.17892v1 - cs.DS - cs.FL - Mon, 22 Dec 2025 00:00:00 -0500 + FedVideoMAE: Efficient Privacy-Preserving Federated Video Moderation + https://arxiv.org/abs/2512.18809 + arXiv:2512.18809v1 Announce Type: new +Abstract: The rapid growth of short-form video platforms increases the need for privacy-preserving moderation, as cloud-based pipelines expose raw videos to privacy risks, high bandwidth costs, and inference latency. To address these challenges, we propose an on-device federated learning framework for video violence detection that integrates self-supervised VideoMAE representations, LoRA-based parameter-efficient adaptation, and defense-in-depth privacy protection. Our approach reduces the trainable parameter count to 5.5M (~3.5% of a 156M backbone) and incorporates DP-SGD with configurable privacy budgets and secure aggregation. 
Experiments on RWF-2000 with 40 clients achieve 77.25% accuracy without privacy protection and 65-66% under strong differential privacy, while reducing communication cost by $28.3\times$ compared to full-model federated learning. The code is available at: https://github.com/zyt-599/FedVideoMAE
 + oai:arXiv.org:2512.18809v1
 + cs.CV
 + cs.AI
 + cs.MM
 + Tue, 23 Dec 2025 00:00:00 -0500
 + new
 + http://creativecommons.org/licenses/by-nc-sa/4.0/
 + Ziyuan Tao, Chuanzhi Xu, Sandaru Jayawardana, Wei Bao, Kanchana Thilakarathna, Teng Joon Lim

 - XAgen: An Explainability Tool for Identifying and Correcting Failures in Multi-Agent Workflows
 - https://arxiv.org/abs/2512.17896
 - arXiv:2512.17896v1 Announce Type: new
-Abstract: As multi-agent systems powered by Large Language Models (LLMs) are increasingly adopted in real-world workflows, users with diverse technical backgrounds are now building and refining their own agentic processes. However, these systems can fail in opaque ways, making it difficult for users to observe, understand, and correct errors. We conducted formative interviews with 12 practitioners to identify mismatches between existing observability tools and users' needs. Based on these insights, we designed XAgen, an explainability tool that supports users with varying AI expertise through three core capabilities: log visualization for glanceable workflow understanding, human-in-the-loop feedback to capture expert judgment, and automatic error detection via an LLM-as-a-judge. In a user study with 8 participants, XAgen helped users more easily locate failures, attribute them to specific agents or steps, and iteratively improve configurations. Our findings surface human-centered design guidelines for explainable agentic AI development and highlight opportunities for more context-aware interactive debugging. 
- oai:arXiv.org:2512.17896v1 - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 + Revealing Perception and Generation Dynamics in LVLMs: Mitigating Hallucinations via Validated Dominance Correction + https://arxiv.org/abs/2512.18813 + arXiv:2512.18813v1 Announce Type: new +Abstract: Large Vision-Language Models (LVLMs) have shown remarkable capabilities, yet hallucinations remain a persistent challenge. This work presents a systematic analysis of the internal evolution of visual perception and token generation in LVLMs, revealing two key patterns. First, perception follows a three-stage GATE process: early layers perform a Global scan, intermediate layers Approach and Tighten on core content, and later layers Explore supplementary regions. Second, generation exhibits an SAD (Subdominant Accumulation to Dominant) pattern, where hallucinated tokens arise from the repeated accumulation of subdominant tokens lacking support from attention (visual perception) or feed-forward network (internal knowledge). Guided by these findings, we devise the VDC (Validated Dominance Correction) strategy, which detects unsupported tokens and replaces them with validated dominant ones to improve output reliability. Extensive experiments across multiple models and benchmarks confirm that VDC substantially mitigates hallucinations. + oai:arXiv.org:2512.18813v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Xinru Wang, Ming Yin, Eunyee Koh, Mustafa Doga Dogan + Guangtao Lyu, Xinyi Cheng, Chenghao Xu, Qi Liu, Muli Yang, Fen Fang, Huilin Chen, Jiexi Yan, Xu Yang, Cheng Deng - RadarGen: Automotive Radar Point Cloud Generation from Cameras - https://arxiv.org/abs/2512.17897 - arXiv:2512.17897v1 Announce Type: new -Abstract: We present RadarGen, a diffusion model for synthesizing realistic automotive radar point clouds from multi-view camera imagery. 
RadarGen adapts efficient image-latent diffusion to the radar domain by representing radar measurements in bird's-eye-view form that encodes spatial structure together with radar cross section (RCS) and Doppler attributes. A lightweight recovery step reconstructs point clouds from the generated maps. To better align generation with the visual scene, RadarGen incorporates BEV-aligned depth, semantic, and motion cues extracted from pretrained foundation models, which guide the stochastic generation process toward physically plausible radar patterns. Conditioning on images makes the approach broadly compatible, in principle, with existing visual datasets and simulation frameworks, offering a scalable direction for multimodal generative simulation. Evaluations on large-scale driving data show that RadarGen captures characteristic radar measurement distributions and reduces the gap to perception models trained on real data, marking a step toward unified generative simulation across sensing modalities. - oai:arXiv.org:2512.17897v1 + EchoMotion: Unified Human Video and Motion Generation via Dual-Modality Diffusion Transformer + https://arxiv.org/abs/2512.18814 + arXiv:2512.18814v1 Announce Type: new +Abstract: Video generation models have advanced significantly, yet they still struggle to synthesize complex human movements due to the high degrees of freedom in human articulation. This limitation stems from the intrinsic constraints of pixel-only training objectives, which inherently bias models toward appearance fidelity at the expense of learning underlying kinematic principles. To address this, we introduce EchoMotion, a framework designed to model the joint distribution of appearance and human motion, thereby improving the quality of complex human action video generation. EchoMotion extends the DiT (Diffusion Transformer) framework with a dual-branch architecture that jointly processes tokens concatenated from different modalities. 
Furthermore, we propose MVS-RoPE (Motion-Video Synchronized RoPE), which offers unified 3D positional encoding for both video and motion tokens. By providing a synchronized coordinate system for the dual-modal latent sequence, MVS-RoPE establishes an inductive bias that fosters temporal alignment between the two modalities. We also propose a Motion-Video Two-Stage Training Strategy. This strategy enables the model to perform both the joint generation of complex human action videos and their corresponding motion sequences, as well as versatile cross-modal conditional generation tasks. To facilitate the training of a model with these capabilities, we construct HuMoVe, a large-scale dataset of approximately 80,000 high-quality, human-centric video-motion pairs. Our findings reveal that explicitly representing human motion is complementary to appearance, significantly boosting the coherence and plausibility of human-centric video generation.
 + oai:arXiv.org:2512.18814v1
 + cs.CV
 + Tue, 23 Dec 2025 00:00:00 -0500
 + new
 + http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 + Yuxiao Yang, Hualian Sheng, Sijia Cai, Jing Lin, Jiahao Wang, Bing Deng, Junzhe Lu, Haoqian Wang, Jieping Ye

 - Humanlike AI Design Increases Anthropomorphism but Yields Divergent Outcomes on Engagement and Trust Globally
 - https://arxiv.org/abs/2512.17898
 - arXiv:2512.17898v1 Announce Type: new
-Abstract: Over a billion users across the globe interact with AI systems engineered with increasing sophistication to mimic human traits. This shift has triggered urgent debate regarding anthropomorphism, the attribution of human characteristics to synthetic agents, and its potential to induce misplaced trust or emotional dependency. 
However, the causal link between more humanlike AI design and subsequent effects on engagement and trust has not been tested in realistic human-AI interactions with a global user pool. Prevailing safety frameworks continue to rely on theoretical assumptions derived from Western populations, overlooking the global diversity of AI users. Here, we address these gaps through two large-scale cross-national experiments (N=3,500) across 10 diverse nations, involving real-time and open-ended interactions with an AI system. We find that when evaluating an AI's human-likeness, users focus less on the kind of theoretical aspects often cited in policy (e.g., sentience or consciousness) and more on applied, interactional cues like conversation flow or understanding the user's perspective. We also experimentally demonstrate that humanlike design levers can causally increase anthropomorphism among users; however, we do not find that humanlike design universally increases behavioral measures of user engagement and trust, as previous theoretical work suggests. Instead, part of the connection between human-likeness and behavioral outcomes is fractured by culture: specific design choices that foster self-reported trust in AI systems in some populations (e.g., Brazil) may trigger the opposite result in others (e.g., Japan). Our findings challenge prevailing narratives of inherent risk in humanlike AI design. Instead, we identify a nuanced, culturally mediated landscape of human-AI interaction, which demands that we move beyond a one-size-fits-all approach in AI governance. 
- oai:arXiv.org:2512.17898v1 + Controllable Probabilistic Forecasting with Stochastic Decomposition Layers + https://arxiv.org/abs/2512.18815 + arXiv:2512.18815v1 Announce Type: new +Abstract: AI weather prediction ensembles with latent noise injection and optimized with the continuous ranked probability score (CRPS) have produced both accurate and well-calibrated predictions with far less computational cost compared with diffusion-based methods. However, current CRPS ensemble approaches vary in their training strategies and noise injection mechanisms, with most injecting noise globally throughout the network via conditional normalization. This structure increases training expense and limits the physical interpretability of the stochastic perturbations. We introduce Stochastic Decomposition Layers (SDL) for converting deterministic machine learning weather models into probabilistic ensemble systems. Adapted from StyleGAN's hierarchical noise injection, SDL applies learned perturbations at three decoder scales through latent-driven modulation, per-pixel noise, and channel scaling. When applied to WXFormer via transfer learning, SDL requires less than 2\% of the computational cost needed to train the baseline model. Each ensemble member is generated from a compact latent tensor (5 MB), enabling perfect reproducibility and post-inference spread adjustment through latent rescaling. Evaluation on 2022 ERA5 reanalysis shows ensembles with spread-skill ratios approaching unity and rank histograms that progressively flatten toward uniformity through medium-range forecasts, achieving calibration competitive with operational IFS-ENS. Multi-scale experiments reveal hierarchical uncertainty: coarse layers modulate synoptic patterns while fine layers control mesoscale variability. The explicit latent parameterization provides interpretable uncertainty quantification for operational forecasting and climate applications. 
+ oai:arXiv.org:2512.18815v1 + cs.LG cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + physics.ao-ph + physics.geo-ph + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Robin Schimmelpfennig, Mark D\'iaz, Vinodkumar Prabhakaran, Aida Davani + John S. Schreck, William E. Chapman, Charlie Becker, David John Gagne II, Dhamma Kimpara, Nihanth Cherukuru, Judith Berner, Kirsten J. Mayer, Negin Sobhani - Distributionally Robust Imitation Learning: Layered Control Architecture for Certifiable Autonomy - https://arxiv.org/abs/2512.17899 - arXiv:2512.17899v1 Announce Type: new -Abstract: Imitation learning (IL) enables autonomous behavior by learning from expert demonstrations. While more sample-efficient than comparative alternatives like reinforcement learning, IL is sensitive to compounding errors induced by distribution shifts. There are two significant sources of distribution shifts when using IL-based feedback laws on systems: distribution shifts caused by policy error and distribution shifts due to exogenous disturbances and endogenous model errors due to lack of learning. Our previously developed approaches, Taylor Series Imitation Learning (TaSIL) and $\mathcal{L}_1$ -Distributionally Robust Adaptive Control (\ellonedrac), address the challenge of distribution shifts in complementary ways. While TaSIL offers robustness against policy error-induced distribution shifts, \ellonedrac offers robustness against distribution shifts due to aleatoric and epistemic uncertainties. To enable certifiable IL for learned and/or uncertain dynamical systems, we formulate \textit{Distributionally Robust Imitation Policy (DRIP)} architecture, a Layered Control Architecture (LCA) that integrates TaSIL and~\ellonedrac. By judiciously designing individual layer-centric input and output requirements, we show how we can guarantee certificates for the entire control pipeline. 
Our solution paves the path for designing fully certifiable autonomy pipelines, by integrating learning-based components, such as perception, with certifiable model-based decision-making through the proposed LCA approach. - oai:arXiv.org:2512.17899v1 - eess.SY - cs.LG - cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 + Misbehavior Forecasting for Focused Autonomous Driving Systems Testing + https://arxiv.org/abs/2512.18823 + arXiv:2512.18823v1 Announce Type: new +Abstract: Simulation-based testing is the standard practice for assessing the reliability of self-driving cars' software before deployment. Existing bug-finding techniques are either unreliable or expensive. We build on the insight that near misses observed during simulations may point to potential failures. We propose Foresee, a technique that identifies near misses using a misbehavior forecaster that computes possible future states of the ego-vehicle under test. Foresee performs local fuzzing in the neighborhood of each candidate near miss to surface previously unknown failures. In our empirical study, we evaluate the effectiveness of different configurations of Foresee using several scenarios provided in the CARLA simulator on both end-to-end and modular self-driving systems and examine its complementarity with the state-of-the-art fuzzer DriveFuzz. Our results show that Foresee is both more effective and more efficient than the baselines. Foresee exposes 128.70% and 38.09% more failures than a random approach and a state-of-the-art failure predictor while being 2.49x and 1.42x faster, respectively. Moreover, when used in combination with DriveFuzz, Foresee enhances failure detection by up to 93.94%. + oai:arXiv.org:2512.18823v1 + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Aditya Gahlawat, Ahmed Aboudonia, Sandeep Banik, Naira Hovakimyan, Nikolai Matni, Aaron D. 
Ames, Gioele Zardini, Alberto Speranzon + http://creativecommons.org/licenses/by-sa/4.0/ + M M Abid Naziri, Stefano Carlo Lambertenghi, Andrea Stocco, Marcelo d&#39;Amorim - Diffusion Forcing for Multi-Agent Interaction Sequence Modeling - https://arxiv.org/abs/2512.17900 - arXiv:2512.17900v1 Announce Type: new -Abstract: Understanding and generating multi-person interactions is a fundamental challenge with broad implications for robotics and social computing. While humans naturally coordinate in groups, modeling such interactions remains difficult due to long temporal horizons, strong inter-agent dependencies, and variable group sizes. Existing motion generation methods are largely task-specific and do not generalize to flexible multi-agent generation. We introduce MAGNet (Multi-Agent Diffusion Forcing Transformer), a unified autoregressive diffusion framework for multi-agent motion generation that supports a wide range of interaction tasks through flexible conditioning and sampling. MAGNet performs dyadic prediction, partner inpainting, and full multi-agent motion generation within a single model, and can autoregressively generate ultra-long sequences spanning hundreds of v. Building on Diffusion Forcing, we introduce key modifications that explicitly model inter-agent coupling during autoregressive denoising, enabling coherent coordination across agents. As a result, MAGNet captures both tightly synchronized activities (e.g., dancing, boxing) and loosely structured social interactions. Our approach performs on par with specialized methods on dyadic benchmarks while naturally extending to polyadic scenarios involving three or more interacting people, enabled by a scalable architecture that is agnostic to the number of agents. We refer readers to the supplemental video, where the temporal dynamics and spatial coordination of generated interactions are best appreciated. 
Project page: https://von31.github.io/MAGNet/ - oai:arXiv.org:2512.17900v1 - cs.CV - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 + A logic for default deontic reasoning + https://arxiv.org/abs/2512.18824 + arXiv:2512.18824v1 Announce Type: new +Abstract: In many real-life settings, agents must navigate dynamic environments while reasoning under incomplete information and acting on a corpus of unstable, context-dependent, and often conflicting norms. We introduce a general, non-modal, proof-theoretic framework for deontic reasoning grounded in default logic. Its central feature is the notion of controlled sequent - a sequent annotated with sets of formulas (control sets) that prescribe what should or should not be entailed by the formulas in the antecedent. When combined with distinct extra-logical rules representing defaults and norms, these control sets record the conditions and constraints governing their applicability, thereby enabling local soundness checks for derived sequents. We prove that controlled sequent calculi satisfy admissibility of contraction and non-analytic cuts, and we establish their strong completeness with respect to credulous consequence in default theories and normative systems. Finally, we illustrate in depth how controlled sequent calculi provide a flexible and expressive basis for resolving deontic conflicts and capturing dynamic deontic notions via appropriate extra-logical rules. + oai:arXiv.org:2512.18824v1 + cs.LO + math.LO + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Mario Piazza, Andrea Sabatini - When Reasoning Meets Its Laws - https://arxiv.org/abs/2512.17901 - arXiv:2512.17901v1 Announce Type: new -Abstract: Despite the superior performance of Large Reasoning Models (LRMs), their reasoning behaviors are often counterintuitive, leading to suboptimal reasoning capabilities. 
To theoretically formalize the desired reasoning behaviors, this paper presents the Laws of Reasoning (LoRe), a unified framework that characterizes intrinsic reasoning patterns in LRMs. We first propose compute law with the hypothesis that the reasoning compute should scale linearly with question complexity. Beyond compute, we extend LoRe with a supplementary accuracy law. Since the question complexity is difficult to quantify in practice, we examine these hypotheses by two properties of the laws, monotonicity and compositionality. We therefore introduce LoRe-Bench, a benchmark that systematically measures these two tractable properties for large reasoning models. Evaluation shows that most reasoning models exhibit reasonable monotonicity but lack compositionality. In response, we develop an effective finetuning approach that enforces compute-law compositionality. Extensive empirical studies demonstrate that better compliance with compute laws yields consistently improved reasoning performance on multiple benchmarks, and uncovers synergistic effects across properties and laws. Project page: https://lore-project.github.io/ - oai:arXiv.org:2512.17901v1 + Hyperbolic Graph Embeddings: a Survey and an Evaluation on Anomaly Detection + https://arxiv.org/abs/2512.18826 + arXiv:2512.18826v1 Announce Type: new +Abstract: This survey reviews hyperbolic graph embedding models and evaluates them on anomaly detection, highlighting their advantages over Euclidean methods in capturing complex structures. Evaluating models like \textit{HGCAE}, \textit{\(\mathcal{P}\)-VAE}, and \textit{HGCN} demonstrates high performance, with \textit{\(\mathcal{P}\)-VAE} achieving an F1-score of 94\% on the \textit{Elliptic} dataset and \textit{HGCAE} scoring 80\% on \textit{Cora}. In contrast, Euclidean methods like \textit{DOMINANT} and \textit{GraphSage} struggle with complex data. 
The study emphasizes the potential of hyperbolic spaces for improving anomaly detection, and provides an open-source library to foster further research in this field. + oai:arXiv.org:2512.18826v1 + cs.LG cs.AI - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Junyu Zhang, Yifan Sun, Tianang Leng, Jingyan Shen, Liu Ziyin, Paul Pu Liang, Huan Zhang + Souhail Abdelmouaiz Sadat, Mohamed Yacine Touahria Miliani, Khadidja Hab El Hames, Hamida Seba, Mohammed Haddad - Adversarial Robustness of Vision in Open Foundation Models - https://arxiv.org/abs/2512.17902 - arXiv:2512.17902v1 Announce Type: new -Abstract: With the increase in deep learning, it becomes increasingly difficult to understand the model in which AI systems can identify objects. Thus, an adversary could aim to modify an image by adding unseen elements, which will confuse the AI in its recognition of an entity. This paper thus investigates the adversarial robustness of LLaVA-1.5-13B and Meta's Llama 3.2 Vision-8B-2. These are tested for untargeted PGD (Projected Gradient Descent) against the visual input modality, and empirically evaluated on the Visual Question Answering (VQA) v2 dataset subset. The results of these adversarial attacks are then quantified using the standard VQA accuracy metric. This evaluation is then compared with the accuracy degradation (accuracy drop) of LLaVA and Llama 3.2 Vision. A key finding is that Llama 3.2 Vision, despite a lower baseline accuracy in this setup, exhibited a smaller drop in performance under attack compared to LLaVA, particularly at higher perturbation levels. Overall, the findings confirm that the vision modality represents a viable attack vector for degrading the performance of contemporary open-weight VLMs, including Meta's Llama 3.2 Vision. 
Furthermore, they highlight that adversarial robustness does not necessarily correlate directly with standard benchmark performance and may be influenced by underlying architectural and training factors. - oai:arXiv.org:2512.17902v1 - cs.CV + HARBOR: Holistic Adaptive Risk assessment model for BehaviORal healthcare + https://arxiv.org/abs/2512.18829 + arXiv:2512.18829v1 Announce Type: new +Abstract: Behavioral healthcare risk assessment remains a challenging problem due to the highly multimodal nature of patient data and the temporal dynamics of mood and affective disorders. While large language models (LLMs) have demonstrated strong reasoning capabilities, their effectiveness in structured clinical risk scoring remains unclear. In this work, we introduce HARBOR, a behavioral health aware language model designed to predict a discrete mood and risk score, termed the Harbor Risk Score (HRS), on an integer scale from -3 (severe depression) to +3 (mania). We also release PEARL, a longitudinal behavioral healthcare dataset spanning four years of monthly observations from three patients, containing physiological, behavioral, and self reported mental health signals. We benchmark traditional machine learning models, proprietary LLMs, and HARBOR across multiple evaluation settings and ablations. Our results show that HARBOR outperforms classical baselines and off the shelf LLMs, achieving 69 percent accuracy compared to 54 percent for logistic regression and 29 percent for the strongest proprietary LLM baseline. 
+ oai:arXiv.org:2512.18829v1 cs.AI - cs.CR - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - 10.1109/ACCESS.2025.3645997 - IEEE Access, 2025 - Jonathon Fox, William J Buchanan, Pavlos Papadopoulos + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Aditya Siddhant - Dexterous World Models - https://arxiv.org/abs/2512.17907 - arXiv:2512.17907v1 Announce Type: new -Abstract: Recent progress in 3D reconstruction has made it easy to create realistic digital twins from everyday environments. However, current digital twins remain largely static and are limited to navigation and view synthesis without embodied interactivity. To bridge this gap, we introduce Dexterous World Model (DWM), a scene-action-conditioned video diffusion framework that models how dexterous human actions induce dynamic changes in static 3D scenes. - Given a static 3D scene rendering and an egocentric hand motion sequence, DWM generates temporally coherent videos depicting plausible human-scene interactions. Our approach conditions video generation on (1) static scene renderings following a specified camera trajectory to ensure spatial consistency, and (2) egocentric hand mesh renderings that encode both geometry and motion cues to model action-conditioned dynamics directly. To train DWM, we construct a hybrid interaction video dataset. Synthetic egocentric interactions provide fully aligned supervision for joint locomotion and manipulation learning, while fixed-camera real-world videos contribute diverse and realistic object dynamics. - Experiments demonstrate that DWM enables realistic and physically plausible interactions, such as grasping, opening, and moving objects, while maintaining camera and scene consistency. This framework represents a first step toward video diffusion-based interactive digital twins and enables embodied simulation from egocentric actions. 
- oai:arXiv.org:2512.17907v1 cs.CV Mon, 22 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Byungjun Kim, Taeksoo Kim, Junyoung Lee, Hanbyul Joo From Word to World: Can Large Language Models be Implicit Text-based World Models? https://arxiv.org/abs/2512.18832 arXiv:2512.18832v1 Announce Type: new Abstract: Agentic reinforcement learning increasingly relies on experience-driven scaling, yet real-world environments remain non-adaptive, limited in coverage, and difficult to scale. World models offer a potential way to improve learning efficiency through simulated experience, but it remains unclear whether large language models can reliably serve this role and under what conditions they meaningfully benefit agents. We study these questions in text-based environments, which provide a controlled setting to reinterpret language modeling as next-state prediction under interaction. We introduce a three-level framework for evaluating LLM-based world models: (i) fidelity and consistency, (ii) scalability and robustness, and (iii) agent utility. Across five representative environments, we find that sufficiently trained world models maintain coherent latent state, scale predictably with data and model size, and improve agent performance via action verification, synthetic trajectory generation, and warm-starting reinforcement learning. Meanwhile, these gains depend critically on behavioral coverage and environment complexity, delineating a clear boundary for when world modeling effectively supports agent learning. 
+ oai:arXiv.org:2512.18832v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Byungjun Kim, Taeksoo Kim, Junyoung Lee, Hanbyul Joo + Yixia Li, Hongru Wang, Jiahao Qiu, Zhenfei Yin, Dongdong Zhang, Cheng Qian, Zeping Li, Pony Ma, Guanhua Chen, Heng Ji, Mengdi Wang - Re-Depth Anything: Test-Time Depth Refinement via Self-Supervised Re-lighting - https://arxiv.org/abs/2512.17908 - arXiv:2512.17908v1 Announce Type: new -Abstract: Monocular depth estimation remains challenging as recent foundation models, such as Depth Anything V2 (DA-V2), struggle with real-world images that are far from the training distribution. We introduce Re-Depth Anything, a test-time self-supervision framework that bridges this domain gap by fusing DA-V2 with the powerful priors of large-scale 2D diffusion models. Our method performs label-free refinement directly on the input image by re-lighting predicted depth maps and augmenting the input. This re-synthesis method replaces classical photometric reconstruction by leveraging shape from shading (SfS) cues in a new, generative context with Score Distillation Sampling (SDS). To prevent optimization collapse, our framework employs a targeted optimization strategy: rather than optimizing depth directly or fine-tuning the full model, we freeze the encoder and only update intermediate embeddings while also fine-tuning the decoder. Across diverse benchmarks, Re-Depth Anything yields substantial gains in depth accuracy and realism over the DA-V2, showcasing new avenues for self-supervision by augmenting geometric reasoning. 
- oai:arXiv.org:2512.17908v1 - cs.CV - cs.AI - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + AraMix: Recycling, Refiltering, and Deduplicating to Deliver the Largest Arabic Pretraining Corpus + https://arxiv.org/abs/2512.18834 + arXiv:2512.18834v1 Announce Type: new +Abstract: We present AraMix, a deduplicated Arabic pretraining corpus containing approximately 178 billion tokens across 179 million documents. Rather than scraping the web again, AraMix demonstrates that substantial value lies in systematically reusing and curating existing pretraining datasets: we combine seven publicly available Arabic web datasets, apply quality filtering designed specifically for Arabic text to re-filter some datasets, and perform cross-dataset deduplication, both MinHash and sentence-level. This approach reveals that nearly 60% of tokens across these independently collected corpora are duplicates, redundancy that any new scraping efforts will reproduce. Our work suggests that for lower resource languages, investment in curation pipelines for existing data yields greater returns than additional web crawls, an approach that allowed us to curate the largest heavily filtered publicly available Arabic pretraining corpus. + oai:arXiv.org:2512.18834v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Ananta R. Bhattarai, Helge Rhodin + Sultan Alrashed, Francesco Orabona - Both Semantics and Reconstruction Matter: Making Representation Encoders Ready for Text-to-Image Generation and Editing - https://arxiv.org/abs/2512.17909 - arXiv:2512.17909v1 Announce Type: new -Abstract: Modern Latent Diffusion Models (LDMs) typically operate in low-level Variational Autoencoder (VAE) latent spaces that are primarily optimized for pixel-level reconstruction. To unify vision generation and understanding, a burgeoning trend is to adopt high-dimensional features from representation encoders as generative latents. 
However, we empirically identify two fundamental obstacles in this paradigm: (1) the discriminative feature space lacks compact regularization, making diffusion models prone to off-manifold latents that lead to inaccurate object structures; and (2) the encoder's inherently weak pixel-level reconstruction hinders the generator from learning accurate fine-grained geometry and texture. In this paper, we propose a systematic framework to adapt understanding-oriented encoder features for generative tasks. We introduce a semantic-pixel reconstruction objective to regularize the latent space, enabling the compression of both semantic information and fine-grained details into a highly compact representation (96 channels with 16x16 spatial downsampling). This design ensures that the latent space remains semantically rich and achieves state-of-the-art image reconstruction, while remaining compact enough for accurate generation. Leveraging this representation, we design a unified Text-to-Image (T2I) and image editing model. Benchmarking against various feature spaces, we demonstrate that our approach achieves state-of-the-art reconstruction, faster convergence, and substantial performance gains in both T2I and editing tasks, validating that representation encoders can be effectively adapted into robust generative components. - oai:arXiv.org:2512.17909v1 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Multimodal Classification Network Guided Trajectory Planning for Four-Wheel Independent Steering Autonomous Parking Considering Obstacle Attributes + https://arxiv.org/abs/2512.18836 + arXiv:2512.18836v1 Announce Type: new +Abstract: Four-wheel Independent Steering (4WIS) vehicles have attracted increasing attention for their superior maneuverability. 
Human drivers typically choose to cross or drive over the low-profile obstacles (e.g., plastic bags) to efficiently navigate through narrow spaces, while existing planners neglect obstacle attributes, causing inefficiency or path-finding failures. To address this, we propose a trajectory planning framework integrating the 4WIS hybrid A* and Optimal Control Problem (OCP), in which the hybrid A* provides an initial path to enhance the OCP solution. Specifically, a multimodal classification network is introduced to assess scene complexity (hard/easy task) by fusing image and vehicle state data. For hard tasks, guided points are set to decompose complex tasks into local subtasks, improving the search efficiency of 4WIS hybrid A*. The multiple steering modes of 4WIS vehicles (Ackermann, diagonal, and zero-turn) are also incorporated into node expansion and heuristic designs. Moreover, a hierarchical obstacle handling strategy is designed to guide the node expansion considering obstacle attributes, i.e., 'non-traversable', 'crossable', and 'drive-over' obstacles. It allows crossing or driving over obstacles instead of the 'avoid-only' strategy, greatly enhancing success rates of pathfinding. We also design a logical constraint for the 'drive-over' obstacle by limiting its velocity to ensure safety. Furthermore, to address dynamic obstacles with motion uncertainty, we introduce a probabilistic risk field model, constructing risk-aware driving corridors that serve as linear collision constraints in OCP. Experimental results demonstrate the proposed framework's effectiveness in generating safe, efficient, and smooth trajectories for 4WIS vehicles, especially in constrained environments. 
+ oai:arXiv.org:2512.18836v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shilong Zhang, He Zhang, Zhifei Zhang, Chongjian Ge, Shuchen Xue, Shaoteng Liu, Mengwei Ren, Soo Ye Kim, Yuqian Zhou, Qing Liu, Daniil Pakhomov, Kai Zhang, Zhe Lin, Ping Luo + Jingjia Teng, Yang Li, Jianqiang Wang, Yingbai Hu, Songyuan Tang, Manjiang Hu + + + Generative Modeling through Spectral Analysis of Koopman Operator + https://arxiv.org/abs/2512.18837 + arXiv:2512.18837v1 Announce Type: new +Abstract: We propose Koopman Spectral Wasserstein Gradient Descent (KSWGD), a generative modeling framework that combines operator-theoretic spectral analysis with optimal transport. The novel insight is that the spectral structure required for accelerated Wasserstein gradient descent can be directly estimated from trajectory data via Koopman operator approximation which can eliminate the need for explicit knowledge of the target potential or neural network training. We provide rigorous convergence analysis and establish connection to Feynman-Kac theory that clarifies the method's probabilistic foundation. Experiments across diverse settings, including compact manifold sampling, metastable multi-well systems, image generation, and high dimensional stochastic partial differential equation, demonstrate that KSWGD consistently achieves faster convergence than other existing methods while maintaining high sample quality. 
+ oai:arXiv.org:2512.18837v1 + cs.LG + math.DS + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Yuanchao Xu, Fengyi Li, Masahiro Fujisawa, Youssef Marzouk, Isao Ishikawa - Speech-FT: Merging Pre-trained And Fine-Tuned Speech Representation Models For Cross-Task Generalization - https://arxiv.org/abs/2502.12672 - arXiv:2502.12672v3 Announce Type: cross -Abstract: Fine-tuning speech representation models can enhance performance on specific tasks but often compromises their cross-task generalization ability. This degradation is often caused by excessive changes in the representations, making it difficult to retain information learned during pre-training. Existing approaches, such as regularizing weight changes during fine-tuning, may fail to maintain sufficiently high feature similarity with the pre-trained model, and thus could possibly lose cross-task generalization. To address this issue, we propose Speech-FT, a novel two-stage fine-tuning framework designed to maintain cross-task generalization while benefiting from fine-tuning. Speech-FT first applies fine-tuning specifically designed to reduce representational drift, followed by weight-space interpolation with the pre-trained model to restore cross-task generalization. Extensive experiments on HuBERT, wav2vec 2.0, DeCoAR 2.0, and WavLM Base+ demonstrate that Speech-FT consistently improves performance across a wide range of supervised, unsupervised, and multitask fine-tuning scenarios. Moreover, Speech-FT achieves superior cross-task generalization compared to fine-tuning baselines that explicitly constrain weight changes, such as weight-space regularization and LoRA fine-tuning. Our analysis reveals that Speech-FT maintains higher feature similarity to the pre-trained model compared to alternative strategies, despite allowing larger weight-space updates. Notably, Speech-FT achieves significant improvements on the SUPERB benchmark. 
For example, when fine-tuning HuBERT on automatic speech recognition, Speech-FT is able to reduce phone error rate from 5.17% to 3.94%, lower word error rate from 6.38% to 5.75%, and increase speaker identification accuracy from 81.86% to 84.11%. Speech-FT provides a simple yet powerful solution for further refining speech representation models after pre-training. - oai:arXiv.org:2502.12672v3 + MDToC: Metacognitive Dynamic Tree of Concepts for Boosting Mathematical Problem-Solving of Large Language Models + https://arxiv.org/abs/2512.18841 + arXiv:2512.18841v1 Announce Type: new +Abstract: Despite advances in mathematical reasoning capabilities, Large Language Models (LLMs) still struggle with calculation verification when using established prompting techniques. We present MDToC (Metacognitive Dynamic Tree of Concepts), a three-phase approach that constructs a concept tree, develops accuracy-verified calculations for each concept, and employs majority voting to evaluate competing solutions. Evaluations across CHAMP, MATH, and Game-of-24 benchmarks demonstrate our MDToC's effectiveness, with GPT-4-Turbo achieving 58.1\% on CHAMP, 86.6\% on MATH, and 85\% on Game-of-24 - outperforming GoT by 5\%, 5.4\%, and 4\% on all these tasks, respectively, without hand-engineered hints. MDToC consistently surpasses existing prompting methods across all backbone models, yielding improvements of up to 7.6\% over ToT and 6.2\% over GoT, establishing metacognitive calculation verification as a promising direction for enhanced mathematical reasoning. + oai:arXiv.org:2512.18841v1 cs.CL - cs.AI - cs.SD - Mon, 22 Dec 2025 00:00:00 -0500 - cross + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - 10.1109/TASLPRO.2025.3635827 - in IEEE Transactions on Audio, Speech, and Language Processing, vol. 34, pp. 
70-83, 2026 - Tzu-Quan Lin, Wei-Ping Huang, Hao Tang, Hung-yi Lee + Tung Duong Ta, Tim Oates - Graph Attention Networks for Detecting Epilepsy from EEG Signals Using Accessible Hardware in Low-Resource Settings - https://arxiv.org/abs/2507.15118 - arXiv:2507.15118v1 Announce Type: cross -Abstract: Goal: Epilepsy remains under-diagnosed in low-income countries due to scarce neurologists and costly diagnostic tools. We propose a graph-based deep learning framework to detect epilepsy from low-cost Electroencephalography (EEG) hardware, tested on recordings from Nigeria and Guinea-Bissau. Our focus is on fair, accessible automatic assessment and explainability to shed light on epilepsy biomarkers. Methods: We model EEG signals as spatio-temporal graphs, classify them, and identify interchannel relationships and temporal dynamics using graph attention networks (GAT). To emphasize connectivity biomarkers, we adapt the inherently node-focused GAT to analyze edges. We also designed signal preprocessing for low-fidelity recordings and a lightweight GAT architecture trained on Google Colab and deployed on RaspberryPi devices. Results: The approach achieves promising classification performance, outperforming a standard classifier based on random forest and graph convolutional networks in terms of accuracy and robustness over multiple sessions, but also highlighting specific connections in the fronto-temporal region. Conclusions: The results highlight the potential of GATs to provide insightful and scalable diagnostic support for epilepsy in underserved regions, paving the way for affordable and accessible neurodiagnostic tools. 
- oai:arXiv.org:2507.15118v1 - eess.SP - cs.AI - cs.LG - cs.NE - Mon, 22 Dec 2025 00:00:00 -0500 - cross + DafnyMPI: A Dafny Library for Verifying Message-Passing Concurrent Programs + https://arxiv.org/abs/2512.18842 + arXiv:2512.18842v1 Announce Type: new +Abstract: The Message Passing Interface (MPI) is widely used in parallel, high-performance programming, yet writing bug-free software that uses MPI remains difficult. We introduce DafnyMPI, a novel, scalable approach to formally verifying MPI software. DafnyMPI allows proving deadlock freedom, termination, and functional equivalence with simpler sequential implementations. In contrast to existing specialized frameworks, DafnyMPI avoids custom concurrency logics and instead relies on Dafny, a verification-ready programming language used for sequential programs, extending it with concurrent reasoning abilities. DafnyMPI is implemented as a library that enables safe MPI programming by requiring users to specify the communication topology upfront and to verify that calls to communication primitives such as MPI_ISEND and MPI_WAIT meet their preconditions. We formalize DafnyMPI using a core calculus and prove that the preconditions suffice to guarantee deadlock freedom. Functional equivalence is proved via rely-guarantee reasoning over message payloads and a system that guarantees safe use of read and write buffers. Termination and the absence of runtime errors are proved using standard Dafny techniques. To further demonstrate the applicability of DafnyMPI, we verify numerical solutions to three canonical partial differential equations. We believe DafnyMPI demonstrates how to make formal verification viable for a broader class of programs and provides proof engineers with additional tools for software verification of parallel and concurrent systems. 
+ oai:arXiv.org:2512.18842v1 + cs.PL + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Szymon Mazurek, Stephen Moore, Alessandro Crimi + 10.1145/3776705 + Aleksandr Fedchin, Antero Mejr, Hari Sundar, Jeffrey S. Foster - Analysis of Design Patterns and Benchmark Practices in Apache Kafka Event-Streaming Systems - https://arxiv.org/abs/2512.16146 - arXiv:2512.16146v1 Announce Type: cross -Abstract: Apache Kafka has become a foundational platform for high throughput event streaming, enabling real time analytics, financial transaction processing, industrial telemetry, and large scale data driven systems. Despite its maturity and widespread adoption, consolidated research on reusable architectural design patterns and reproducible benchmarking methodologies remains fragmented across academic and industrial publications. This paper presents a structured synthesis of forty two peer reviewed studies published between 2015 and 2025, identifying nine recurring Kafka design patterns including log compaction, CQRS bus, exactly once pipelines, change data capture, stream table joins, saga orchestration, tiered storage, multi tenant topics, and event sourcing replay. The analysis examines co usage trends, domain specific deployments, and empirical benchmarking practices using standard suites such as TPCx Kafka and the Yahoo Streaming Benchmark, as well as custom workloads. The study highlights significant inconsistencies in configuration disclosure, evaluation rigor, and reproducibility that limit cross study comparison and practical replication. By providing a unified taxonomy, pattern benchmark matrix, and actionable decision heuristics, this work offers practical guidance for architects and researchers designing reproducible, high performance, and fault tolerant Kafka based event streaming systems. 
- oai:arXiv.org:2512.16146v1 - cs.SE - cs.DC - Mon, 22 Dec 2025 00:00:00 -0500 - cross + Brain-Gen: Towards Interpreting Neural Signals for Stimulus Reconstruction Using Transformers and Latent Diffusion Models + https://arxiv.org/abs/2512.18843 + arXiv:2512.18843v1 Announce Type: new +Abstract: Advances in neuroscience and artificial intelligence have enabled preliminary decoding of brain activity. However, despite the progress, the interpretability of neural representations remains limited. A significant challenge arises from the intrinsic properties of electroencephalography (EEG) signals, including high noise levels, spatial diffusion, and pronounced temporal variability. To interpret the neural mechanism underlying thoughts, we propose a transformers-based framework to extract spatial-temporal representations associated with observed visual stimuli from EEG recordings. These features are subsequently incorporated into the attention mechanisms of Latent Diffusion Models (LDMs) to facilitate the reconstruction of visual stimuli from brain activity. The quantitative evaluations on publicly available benchmark datasets demonstrate that the proposed method excels at modeling the semantic structures from EEG signals; achieving up to 6.5% increase in latent space clustering accuracy and 11.8% increase in zero shot generalization across unseen classes while having comparable Inception Score and Fr\'echet Inception Distance with existing baselines. Our work marks a significant step towards generalizable semantic interpretation of the EEG signals. 
+ oai:arXiv.org:2512.18843v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Muzeeb Mohammad + Hasib Aslam, Muhammad Talal Faiz, Muhammad Imran Malik - Colormap-Enhanced Vision Transformers for MRI-Based Multiclass (4-Class) Alzheimer's Disease Classification - https://arxiv.org/abs/2512.16964 - arXiv:2512.16964v1 Announce Type: cross -Abstract: Magnetic Resonance Imaging (MRI) plays a pivotal role in the early diagnosis and monitoring of Alzheimer's disease (AD). However, the subtle structural variations in brain MRI scans often pose challenges for conventional deep learning models to extract discriminative features effectively. In this work, we propose PseudoColorViT-Alz, a colormap-enhanced Vision Transformer framework designed to leverage pseudo-color representations of MRI images for improved Alzheimer's disease classification. By combining colormap transformations with the global feature learning capabilities of Vision Transformers, our method amplifies anatomical texture and contrast cues that are otherwise subdued in standard grayscale MRI scans. - We evaluate PseudoColorViT-Alz on the OASIS-1 dataset using a four-class classification setup (non-demented, moderate dementia, mild dementia, and very mild dementia). Our model achieves a state-of-the-art accuracy of 99.79% with an AUC of 100%, surpassing the performance of recent 2024--2025 methods, including CNN-based and Siamese-network approaches, which reported accuracies ranging from 96.1% to 99.68%. These results demonstrate that pseudo-color augmentation combined with Vision Transformers can significantly enhance MRI-based Alzheimer's disease classification. PseudoColorViT-Alz offers a robust and interpretable framework that outperforms current methods, providing a promising tool to support clinical decision-making and early detection of Alzheimer's disease. 
- oai:arXiv.org:2512.16964v1 - eess.IV - cs.CV - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - cross + Generalized Chebyshev acceleration on the unit disc + https://arxiv.org/abs/2512.18848 + arXiv:2512.18848v1 Announce Type: new +Abstract: Generalized Chebyshev acceleration is a semi-iterative technique applicable to a basic iterative method only when the eigenvalues of the iteration matrix satisfy a highly restrictive inclusion condition. In this work, we relax this requirement by introducing an alternative iterative scheme that converges to the same solution. The effectiveness of the proposed approach is examined through its application to a large-scale sparse normal matrix. + oai:arXiv.org:2512.18848v1 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Faisal Ahmed + Nurg\"ul G\"okg\"oz - Irrelevant carrots and non-existent sticks: trust, governance, and security in the transition to quantum-safe systems - https://arxiv.org/abs/2512.16974 - arXiv:2512.16974v1 Announce Type: cross -Abstract: Quantum computing poses an urgent and widely recognised threat to global cybersecurity, enabling encrypted government, financial, and healthcare data harvested today to be decrypted in the near future. Transitioning to quantum-safe cryptography is therefore essential, demanding coordinated action across a complex, multi-actor innovation system. Drawing on insights from an expert workshop in Amsterdam, this study develops a socially informed vision for a quantum-safe future and analyses the current innovation landscape to identify critical gaps and the actions needed to address them. We map twelve key actor groups involved in the migration process, finding that regulators exert the strongest direct influence, while standardisation bodies play a crucial indirect role. 
This research provides one of the first system-level mappings of actors, influence pathways and governance responsibilities shaping the quantum-safe transition, revealing several responsibilities with unclear ownership. Although centred on the Netherlands, our findings are applicable to other national contexts navigating quantum-safe transitions. - oai:arXiv.org:2512.16974v1 - physics.soc-ph - cs.CY - quant-ph - Mon, 22 Dec 2025 00:00:00 -0500 - cross + InDRiVE: Reward-Free World-Model Pretraining for Autonomous Driving via Latent Disagreement + https://arxiv.org/abs/2512.18850 + arXiv:2512.18850v1 Announce Type: new +Abstract: Model-based reinforcement learning (MBRL) can reduce interaction cost for autonomous driving by learning a predictive world model, but it typically still depends on task-specific rewards that are difficult to design and often brittle under distribution shift. This paper presents InDRiVE, a DreamerV3-style MBRL agent that performs reward-free pretraining in CARLA using only intrinsic motivation derived from latent ensemble disagreement. Disagreement acts as a proxy for epistemic uncertainty and drives the agent toward under-explored driving situations, while an imagination-based actor-critic learns a planner-free exploration policy directly from the learned world model. After intrinsic pretraining, we evaluate zero-shot transfer by freezing all parameters and deploying the pretrained exploration policy in unseen towns and routes. We then study few-shot adaptation by training a task policy with limited extrinsic feedback for downstream objectives (lane following and collision avoidance). Experiments in CARLA across towns, routes, and traffic densities show that disagreement-based pretraining yields stronger zero-shot robustness and robust few-shot collision avoidance under town shift and matched interaction budgets, supporting the use of intrinsic disagreement as a practical reward-free pretraining signal for reusable driving world models. 
+ oai:arXiv.org:2512.18850v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Ailsa Robertson, Si\^an Brooke, Sebastian De Haro, Christian Schaffner + Feeza Khan Khanzada, Jaerock Kwon - Another Fit Bites the Dust: Conformal Prediction as a Calibration Standard for Machine Learning in High-Energy Physics - https://arxiv.org/abs/2512.17048 - arXiv:2512.17048v1 Announce Type: cross -Abstract: Machine-learning techniques are essential in modern collider research, yet their probabilistic outputs often lack calibrated uncertainty estimates and finite-sample guarantees, limiting their direct use in statistical inference and decision-making. Conformal prediction (CP) provides a simple, distribution-free framework for calibrating arbitrary predictive models without retraining, yielding rigorous uncertainty quantification with finite-sample coverage guarantees under minimal exchangeability assumptions, without reliance on asymptotics, limit theorems, or Gaussian approximations. In this work, we investigate CP as a unifying calibration layer for machine-learning applications in high-energy physics. Using publicly available collider datasets and a diverse set of models, we show that a single conformal formalism can be applied across regression, binary and multi-class classification, anomaly detection, and generative modelling, converting raw model outputs into statistically valid prediction sets, typicality regions, and p-values with controlled false-positive rates. While conformal prediction does not improve raw model performance, it enforces honest uncertainty quantification and transparent error control. We argue that conformal calibration should be adopted as a standard component of machine-learning pipelines in collider physics, enabling reliable interpretation, robust comparisons, and principled statistical decisions in experimental and phenomenological analyses. 
- oai:arXiv.org:2512.17048v1 - hep-ph - cs.AI - hep-ex - Mon, 22 Dec 2025 00:00:00 -0500 - cross + Modular Automatic Complexity Analysis of Recursive Integer Programs + https://arxiv.org/abs/2512.18851 + arXiv:2512.18851v1 Announce Type: new +Abstract: In earlier work, we developed a modular approach for automatic complexity analysis of integer programs. However, these integer programs do not allow non-tail recursive calls or subprocedures. In this paper, we consider integer programs with function calls and present a natural extension of our modular complexity analysis approach to the recursive setting based on a new form of ranking functions. Hence, our approach combines already existing powerful techniques on the "imperative" parts of the program and our novel ranking functions on the recursive parts. The strength of this combination is demonstrated by our implementation in the complexity analysis tool KoAT. + oai:arXiv.org:2512.18851v1 + cs.LO + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jack Y. Araz, Michael Spannowsky + Nils Lommen, J\"urgen Giesl - Dirichlet Meets Horvitz and Thompson: Estimating Homophily in Large Networks via Sampling - https://arxiv.org/abs/2512.17084 - arXiv:2512.17084v1 Announce Type: cross -Abstract: Assessing homophily in large-scale networks is central to understanding structural regularities in graphs, and thus inform the choice of models (such as graph neural networks) adopted to learn from network data. Evaluation of smoothness metrics requires access to the entire network topology and node features, which may be impractical in several large-scale, dynamic, resource-limited, or privacy-constrained settings. In this work, we propose a sampling-based framework to estimate homophily via the Dirichlet energy (Laplacian-based total variation) of graph signals, leveraging the Horvitz-Thompson (HT) estimator for unbiased inference from partial graph observations. 
The Dirichlet energy is a so-termed total (of squared nodal feature deviations) over graph edges; hence, estimable under general network sampling designs for which edge-inclusion probabilities can be analytically derived and used as weights in the proposed HT estimator. We establish that the Dirichlet energy can be consistently estimated from sampled graphs, and empirically study other heterophily measures as well. Experiments on several heterophilic benchmark datasets demonstrate the effectiveness of the proposed HT estimators in reliably capturing homophilic structure (or lack thereof) from sampled network measurements. - oai:arXiv.org:2512.17084v1 - eess.SP - cs.SI - stat.ME - Mon, 22 Dec 2025 00:00:00 -0500 - cross + What Drives Issue Resolution Speed? An Empirical Study of Scientific Workflow Systems on GitHub + https://arxiv.org/abs/2512.18852 + arXiv:2512.18852v1 Announce Type: new +Abstract: Scientific Workflow Systems (SWSs) play a vital role in enabling reproducible, scalable, and automated scientific analysis. Like other open-source software, these systems depend on active maintenance and community engagement to remain reliable and sustainable. However, despite the importance of timely issue resolution for software quality and community trust, little is known about what drives issue resolution speed within SWSs. This paper presents an empirical study of issue management and resolution across a collection of GitHub-hosted SWS projects. We analyze 21,116 issues to investigate how project characteristics, issue metadata, and contributor interactions affect time-to-close. Specifically, we address two research questions: (1) how issues are managed and addressed in SWSs, and (2) how issue and contributor features relate to issue resolution speed. We find that 68.91% of issues are closed, with half of them resolved within 18.09 days. 
Our results show that although SWS projects follow structured issue management practices, the issue resolution speed varies considerably across systems. Factors such as labeling and assigning issues are associated with faster issue resolution. Based on our findings, we make recommendations for developers to better manage SWS repository issues and improve their quality. + oai:arXiv.org:2512.18852v1 + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Hamed Ajorlou, Gonzalo Mateos, Luana Ruiz + Khairul Alam, Banani Roy - Disentangled representations via score-based variational autoencoders - https://arxiv.org/abs/2512.17127 - arXiv:2512.17127v1 Announce Type: cross -Abstract: We present the Score-based Autoencoder for Multiscale Inference (SAMI), a method for unsupervised representation learning that combines the theoretical frameworks of diffusion models and VAEs. By unifying their respective evidence lower bounds, SAMI formulates a principled objective that learns representations through score-based guidance of the underlying diffusion process. The resulting representations automatically capture meaningful structure in the data: it recovers ground truth generative factors in our synthetic dataset, learns factorized, semantic latent dimensions from complex natural images, and encodes video sequences into latent trajectories that are straighter than those of alternative encoders, despite training exclusively on static images. Furthermore, SAMI can extract useful representations from pre-trained diffusion models with minimal additional training. Finally, the explicitly probabilistic formulation provides new ways to identify semantically meaningful axes in the absence of supervised labels, and its mathematical exactness allows us to make formal statements about the nature of the learned representation. 
Overall, these results indicate that implicit structural information in diffusion models can be made explicit and interpretable through synergistic combination with a variational autoencoder. - oai:arXiv.org:2512.17127v1 - stat.ML - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Benjamin S. H. Lyo, Eero P. Simoncelli, Cristina Savin + VizDefender: Unmasking Visualization Tampering through Proactive Localization and Intent Inference + https://arxiv.org/abs/2512.18853 + arXiv:2512.18853v1 Announce Type: new +Abstract: The integrity of data visualizations is increasingly threatened by image editing techniques that enable subtle yet deceptive tampering. Through a formative study, we define this challenge and categorize tampering techniques into two primary types: data manipulation and visual encoding manipulation. To address this, we present VizDefender, a framework for tampering detection and analysis. The framework integrates two core components: 1) a semi-fragile watermark module that protects the visualization by embedding a location map to images, which allows for the precise localization of tampered regions while preserving visual quality, and 2) an intent analysis module that leverages Multimodal Large Language Models (MLLMs) to interpret manipulation, inferring the attacker's intent and misleading effects. Extensive evaluations and user studies demonstrate the effectiveness of our methods. 
+ oai:arXiv.org:2512.18853v1 + cs.CV + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Sicheng Song, Yanjie Zhang, Zixin Chen, Huamin Qu, Changbo Wang, Chenhui Li - Application of machine learning to predict food processing level using Open Food Facts - https://arxiv.org/abs/2512.17169 - arXiv:2512.17169v1 Announce Type: cross -Abstract: Ultra-processed foods are increasingly linked to health issues like obesity, cardiovascular disease, type 2 diabetes, and mental health disorders due to poor nutritional quality. This first-of-its-kind study at such a scale uses machine learning to classify food processing levels (NOVA) based on the Open Food Facts dataset of over 900,000 products. Models including LightGBM, Random Forest, and CatBoost were trained on nutrient concentration data. LightGBM performed best, achieving 80-85% accuracy across different nutrient panels and effectively distinguishing minimally from ultra-processed foods. Exploratory analysis revealed strong associations between higher NOVA classes and lower Nutri-Scores, indicating poorer nutritional quality. Products in NOVA 3 and 4 also had higher carbon footprints and lower Eco-Scores, suggesting greater environmental impact. Allergen analysis identified gluten and milk as common in ultra-processed items, posing risks to sensitive individuals. Categories like Cakes and Snacks were dominant in higher NOVA classes, which also had more additives, highlighting the role of ingredient modification. This study, leveraging the largest dataset of NOVA-labeled products, emphasizes the health, environmental, and allergenic implications of food processing and showcases machine learning's value in scalable classification. A user-friendly web tool is available for NOVA prediction using nutrient data: https://cosylab.iiitd.edu.in/foodlabel/. 
- oai:arXiv.org:2512.17169v1 - q-bio.BM + CORE: Concept-Oriented Reinforcement for Bridging the Definition-Application Gap in Mathematical Reasoning + https://arxiv.org/abs/2512.18857 + arXiv:2512.18857v1 Announce Type: new +Abstract: Large language models (LLMs) often solve challenging math exercises yet fail to apply the concept right when the problem requires genuine understanding. Popular Reinforcement Learning with Verifiable Rewards (RLVR) pipelines reinforce final answers but provide little fine-grained conceptual signal, so models improve at pattern reuse rather than conceptual applications. We introduce CORE (Concept-Oriented REinforcement), an RL training framework that turns explicit concepts into a controllable supervision signal. Starting from a high-quality, low-contamination textbook resource that links verifiable exercises to concise concept descriptions, we run a sanity probe showing LLMs can restate definitions but fail concept-linked quizzes, quantifying the conceptual reasoning gap. CORE then (i) synthesizes concept-aligned quizzes, (ii) injects brief concept snippets during rollouts to elicit concept-primed trajectories, and (iii) reinforces conceptual reasoning via trajectory replacement after group failures, a lightweight forward-KL constraint that aligns unguided with concept-primed policies, or standard GRPO directly on concept-aligned quizzes. Across several models, CORE delivers consistent gains over vanilla and SFT baselines on both in-domain concept-exercise suites and diverse out-of-domain math benchmarks. CORE unifies direct training on concept-aligned quizzes and concept-injected rollouts under outcome regularization. It provides fine-grained conceptual supervision that bridges problem-solving competence and genuine conceptual reasoning, while remaining algorithm- and verifier-agnostic. 
+ oai:arXiv.org:2512.18857v1 + cs.AI cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Nalin Arora, Aviral Chauhan, Siddhant Rana, Mahansh Aditya, Sumit Bhagat, Aditya Kumar, Akash Kumar, Akanksh Semar, Ayush Vikram Singh, Ganesh Bagler + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zijun Gao, Zhikun Xu, Xiao Ye, Ben Zhou - Systemic Risk Radar: A Multi-Layer Graph Framework for Early Market Crash Warning - https://arxiv.org/abs/2512.17185 - arXiv:2512.17185v1 Announce Type: cross -Abstract: Financial crises emerge when structural vulnerabilities accumulate across sectors, markets, and investor behavior. Predicting these systemic transitions is challenging because they arise from evolving interactions between market participants, not isolated price movements alone. We present Systemic Risk Radar (SRR), a framework that models financial markets as multi-layer graphs to detect early signs of systemic fragility and crash-regime transitions. - We evaluate SRR across three major crises: the Dot-com crash, the Global Financial Crisis, and the COVID-19 shock. Our experiments compare snapshot GNNs, a simplified temporal GNN prototype, and standard baselines (logistic regression and Random Forest). Results show that structural network information provides useful early-warning signals compared to feature-based models alone. - This correlation-based instantiation of SRR demonstrates that graph-derived features capture meaningful changes in market structure during stress events. The findings motivate extending SRR with additional graph layers (sector/factor exposure, sentiment) and more expressive temporal architectures (LSTM/GRU or Transformer encoders) to better handle diverse crisis types. 
- oai:arXiv.org:2512.17185v1 - q-fin.RM - cs.AI - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - cross + Adapting Skill Ratings to Luck-Based Hidden-Information Games + https://arxiv.org/abs/2512.18858 + arXiv:2512.18858v1 Announce Type: new +Abstract: Rating systems play a crucial role in evaluating player skill across competitive environments. The Elo rating system, originally designed for deterministic and information-complete games such as chess, has been widely adopted and modified in various domains. However, the traditional Elo rating system only considers game outcomes for rating calculation and assumes uniform initial states across players. This raises important methodological challenges in skill modelling for popular partially randomized incomplete-information games such as Rummy. In this paper, we examine the limitations of conventional Elo ratings when applied to luck-driven environments and propose a modified Elo framework specifically tailored for Rummy. Our approach incorporates score-based performance metrics and explicitly models the influence of initial hand quality to disentangle skill from luck. Through extensive simulations involving 270,000 games across six strategies of varying sophistication, we demonstrate that our proposed system achieves stable convergence, superior discriminative power, and enhanced predictive accuracy compared to traditional Elo formulations. The framework maintains computational simplicity while effectively capturing the interplay of skill, strategy, and randomness, with broad applicability to other stochastic competitive environments. 
+ oai:arXiv.org:2512.18858v1 + cs.GT + stat.AP + stat.CO + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Sandeep Neela + Avirup Chakraborty, Shirsa Maitra, Tathagata Banerjee, Diganta Mukherjee, Tridib Mukherjee - Quantum-enhanced Information Retrieval from Reflective Intelligent Surfaces - https://arxiv.org/abs/2512.17199 - arXiv:2512.17199v1 Announce Type: cross -Abstract: Information retrieval from passive backscatter systems is widely used in digital applications with tight energy budgets, short communication distances, and low data rates. Due to the fundamental limits of classical wireless receivers, the achievable data rate cannot be increased without compromising either energy efficiency or communication range, thereby hindering the broader adoption of this technology. In this work, we present a novel time-resolving quantum receiver combined with a multi-mode probing signal to extract large-alphabet information modulated by a passive reconfigurable intelligent surface (RIS). The adaptive nature of the proposed receiver yields significant quantum advantages over classical receivers without relying on complex or fragile quantum resources such as entanglement. Simulation results show that the proposed technique surpasses the classical standard quantum limit (SQL) for modulation sizes up to M = 2^8, meanwhile halving the probing energy or increasing the communication distance by a factor of 1.41. - oai:arXiv.org:2512.17199v1 - quant-ph - cs.NI - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Shiqian Guo, Tingxiang Ji, Jianqing Liu + Toward Human-Centered AI-Assisted Terminology Work + https://arxiv.org/abs/2512.18859 + arXiv:2512.18859v1 Announce Type: new +Abstract: The rapid diffusion of generative artificial intelligence is transforming terminology work. 
While this technology promises gains in efficiency, its unstructured adoption risks weakening professional autonomy, amplifying bias, and eroding linguistic and conceptual diversity. This paper argues that a human-centered approach to artificial intelligence has become a necessity for terminology work. Building on research in artificial intelligence and translation studies, it proposes a human-centered framework that conceptualizes artificial intelligence as a means of amplifying the terminologist's capabilities, rather than replacing them. The framework is organized around three interrelated dimensions: the augmented terminologist, ethical AI, and human-centered design. Together, these dimensions emphasize the compatibility of high automation with strong human control, the central role of terminologists in bias mitigation, and the importance of designing AI tools and workflows around the needs, values, and well-being of the terminologist. The paper concludes by stressing that current choices in AI adoption will shape not only terminological practice, but also the preservation of accuracy, adequacy, and diversity in terminology and specialized knowledge. + oai:arXiv.org:2512.18859v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Antonio San Martin - Approximating geodesics of hyperbolic type metrics on planar domains - https://arxiv.org/abs/2512.17211 - arXiv:2512.17211v1 Announce Type: cross -Abstract: We study planar domains $G$ equipped with a hyperbolic type metric and approximate geodesics that join two points $x,y \in G$ and their lengths. We present an algorithm that enables one to approximate the shortest distance in polygonal domains taken with respect to the quasihyperbolic metric. The method is based on Dijkstra's algorithm, and we give several examples demonstrating how the algorithm works and analyze its accuracy. 
We experimentally demonstrate several previously theoretically observed features of geodesics, such as the relationship between hyperbolic and quasihyperbolic distance in the unit disk. We also investigate bifurcation of geodesics and the connection of this phenomenon to the medial axis of the domain. - oai:arXiv.org:2512.17211v1 - math.MG - cs.NA - math.CV - math.NA - Mon, 22 Dec 2025 00:00:00 -0500 - cross + Cross-modal Counterfactual Explanations: Uncovering Decision Factors and Dataset Biases in Subjective Classification + https://arxiv.org/abs/2512.18864 + arXiv:2512.18864v1 Announce Type: new +Abstract: Concept-driven counterfactuals explain decisions of classifiers by altering the model predictions through semantic changes. In this paper, we present a novel approach that leverages cross-modal decompositionality and image-specific concepts to create counterfactual scenarios expressed in natural language. We apply the proposed interpretability framework, termed Decompose and Explain (DeX), to the challenging domain of image privacy decisions, which are contextual and subjective. This application enables the quantification of the differential contributions of key scene elements to the model prediction. We identify relevant decision factors via a multi-criterion selection mechanism that considers both image similarity for minimal perturbations and decision confidence to prioritize impactful changes. This approach evaluates and compares diverse explanations, and assesses the interdependency and mutual influence among explanatory properties. By leveraging image-specific concepts, DeX generates image-grounded, sparse explanations, yielding significant improvements over the state of the art. Importantly, DeX operates as a training-free framework, offering high flexibility. 
Results show that DeX not only uncovers the principal contributing factors influencing subjective decisions, but also identifies underlying dataset biases allowing for targeted mitigation strategies to improve fairness. + oai:arXiv.org:2512.18864v1 + cs.CV + cs.MM + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shuliang Gao, Anni Hakanen, Antti Rasila, Matti Vuorinen + Alina Elena Baia, Andrea Cavallaro - Modelling financial time series with $\phi^{4}$ quantum field theory - https://arxiv.org/abs/2512.17225 - arXiv:2512.17225v1 Announce Type: cross -Abstract: We use a $\phi^{4}$ quantum field theory with inhomogeneous couplings and explicit symmetry-breaking to model an ensemble of financial time series from the S$\&$P 500 index. The continuum nature of the $\phi^4$ theory avoids the inaccuracies that occur in Ising-based models which require a discretization of the time series. We demonstrate this using the example of the 2008 global financial crisis. The $\phi^{4}$ quantum field theory is expressive enough to reproduce the higher-order statistics such as the market kurtosis, which can serve as an indicator of possible market shocks. Accurate reproduction of high kurtosis is absent in binarized models. Therefore Ising models, despite being widely employed in econophysics, are incapable of fully representing empirical financial data, a limitation not present in the generalization of the $\phi^{4}$ scalar field theory. We then investigate the scaling properties of the $\phi^{4}$ machine learning algorithm and extract exponents which govern the behavior of the learned couplings (or weights and biases in ML language) in relation to the number of stocks in the model. Finally, we use our model to forecast the price changes of the AAPL, MSFT, and NVDA stocks. 
We conclude by discussing how the $\phi^{4}$ scalar field theory could be used to build investment strategies and the possible intuitions that the QFT operations of dimensional compactification and renormalization can provide for financial modelling. - oai:arXiv.org:2512.17225v1 - q-fin.ST - cond-mat.dis-nn - cs.CE - hep-th - Mon, 22 Dec 2025 00:00:00 -0500 - cross + Application of deep learning approaches for medieval historical documents transcription + https://arxiv.org/abs/2512.18865 + arXiv:2512.18865v1 Announce Type: new +Abstract: Handwritten text recognition and optical character recognition solutions show excellent results with processing data of modern era, but efficiency drops with Latin documents of medieval times. This paper presents a deep learning method to extract text information from handwritten Latin-language documents of the 9th to 11th centuries. The approach takes into account the properties inherent in medieval documents. The paper provides a brief introduction to the field of historical document transcription, a first-sight analysis of the raw data, and the related works and studies. The paper presents the steps of dataset development for further training of the models. The explanatory data analysis of the processed data is provided as well. The paper explains the pipeline of deep learning models to extract text information from the document images, from detecting objects to word recognition using classification models and embedding word images. The paper reports the following results: recall, precision, F1 score, intersection over union, confusion matrix, and mean string distance. The plots of the metrics are also included. The implementation is published on the GitHub repository. + oai:arXiv.org:2512.18865v1 + cs.CV + cs.CL + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Dimitrios Bachtis, David S. 
Berman, Arabella Schelpe + Proceedings of the 9th International Scientific and Practical Conference Applied Information Systems and Technologies in the Digital Society (AISTDS 2025), in CEUR Workshop Proceedings, vol. 4133, Kyiv, Ukraine, Oct. 1, 2025, pp. 45-60 + Maksym Voloshchuk, Bohdana Zarembovska, Mykola Kozlenko - Machine Learning Assisted Parameter Tuning on Wavelet Transform Amorphous Radial Distribution Function - https://arxiv.org/abs/2512.17245 - arXiv:2512.17245v1 Announce Type: cross -Abstract: Understanding atomic structures is crucial, yet amorphous materials remain challenging due to their irregular and non-periodic nature. The wavelet-transform radial distribution function (WT-RDF) offers a physics-based framework for analyzing amorphous structures, reliably predicting the first and second RDF peaks and overall curve trends in both binary Ge 0.25 Se 0.75 and ternary Ag x(Ge 0.25 Se 0.75)100-x (x=5,10,15,20,25) systems. Despite these strengths, WT-RDF shows limitations in amplitude accuracy, which affects quantitative analyses such as coordination numbers. This study addresses the issue by optimizing WT-RDF parameters using a machine learning approach, producing the enhanced WT-RDF+ framework. WT-RDF+ improves the precision of peak predictions and outperforms benchmark ML models, including RBF and LSTM, even when trained on only 25 percent of the binary dataset. These results demonstrate that WT-RDF+ is a robust and reliable model for structural characterization of amorphous materials, particularly Ge-Se systems, and support the efficient design and development of phase-change thin films for next-generation electronic devices and components. 
- oai:arXiv.org:2512.17245v1 - cond-mat.mtrl-sci - cs.LG - physics.data-an - Mon, 22 Dec 2025 00:00:00 -0500 - cross + Construction and deformation of P-hedra using control polylines + https://arxiv.org/abs/2512.18869 + arXiv:2512.18869v1 Announce Type: new +Abstract: In the 19th International Symposium on Advances in Robot Kinematics the author introduced a novel class of continuous flexible discrete surfaces and mentioned that these so-called P-hedra (or P-nets) allow direct access to their spatial shapes by three control polylines. In this follow-up paper we study this intuitive method, which makes these flexible planar quad surfaces suitable for transformable design tasks by means of interactive tools. The construction of P-hedra from the control polylines can also be used for an efficient algorithmic computation of their isometric deformations. In addition we discuss flexion limits, bifurcation configurations, developable/flat-foldable pattern and tubular P-hedra. + oai:arXiv.org:2512.18869v1 + cs.RO + cs.CG + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Deriyan Senjaya, Stephen Ekaputra Limantoro + Georg Nawratil - From Priors to Predictions: Explaining and Visualizing Human Reasoning in a Graph Neural Network Framework - https://arxiv.org/abs/2512.17255 - arXiv:2512.17255v1 Announce Type: cross -Abstract: Humans excel at solving novel reasoning problems from minimal exposure, guided by inductive biases, assumptions about which entities and relationships matter. Yet the computational form of these biases and their neural implementation remain poorly understood. We introduce a framework that combines Graph Theory and Graph Neural Networks (GNNs) to formalize inductive biases as explicit, manipulable priors over structure and abstraction. 
Using a human behavioral dataset adapted from the Abstraction and Reasoning Corpus (ARC), we show that differences in graph-based priors can explain individual differences in human solutions. Our method includes an optimization pipeline that searches over graph configurations, varying edge connectivity and node abstraction, and a visualization approach that identifies the computational graph, the subset of nodes and edges most critical to a model's prediction. Systematic ablation reveals how generalization depends on specific prior structures and internal processing, exposing why human like errors emerge from incorrect or incomplete priors. This work provides a principled, interpretable framework for modeling the representational assumptions and computational dynamics underlying generalization, offering new insights into human reasoning and a foundation for more human aligned AI systems. - oai:arXiv.org:2512.17255v1 - q-bio.NC + Psychometric Validation of the Sophotechnic Mediation Scale and a New Understanding of the Development of GenAI Mastery: Lessons from 3,392 Adult Brazilian Workers + https://arxiv.org/abs/2512.18871 + arXiv:2512.18871v1 Announce Type: new +Abstract: The rapid diffusion of generative artificial intelligence (GenAI) systems has introduced new forms of human-technology interaction, raising the question of whether sustained engagement gives rise to stable, internalized modes of cognition rather than merely transient efficiency gains. Grounded in the Cognitive Mediation Networks Theory, this study investigates Sophotechnic Mediation, a mode of thinking and acting associated with prolonged interaction with GenAI, and presents a comprehensive psychometric validation of the Sophotechnic Mediation Scale. Data were collected between 2023 and 2025 from independent cross-sectional samples totaling 3,932 adult workers from public and private organizations in the Metropolitan Region of Pernambuco, Brazil. 
Results indicate excellent internal consistency, a robust unidimensional structure, and measurement invariance across cohorts. Ordinal-robust confirmatory factor analyses and residual diagnostics show that elevated absolute fit indices reflect minor local dependencies rather than incorrect dimensionality. Distributional analyses reveal a time-evolving pattern characterized by a declining mass of non-adopters and convergence toward approximate Gaussianity among adopters, with model comparisons favoring a two-process hurdle model over a censored Gaussian specification. Sophotechnic Mediation is empirically distinct from Hypercultural mediation and is primarily driven by cumulative GenAI experience, with age moderating the rate of initial acquisition and the depth of later integration. Together, the findings support Sophotechnia as a coherent, measurable, and emergent mode of cognitive mediation associated with the ongoing GenAI revolution. + oai:arXiv.org:2512.18871v1 + cs.CY cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Quan Do, Caroline Ahn, Leah Bakst, Michael Pascale, Joseph T. McGuire, Chantal E. Stern, Michael E. Hasselmo + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Bruno Campello de Souza - Active RIS-Aided Anti-Jamming Wireless Communications: A Stackelberg Game Perspective - https://arxiv.org/abs/2512.17335 - arXiv:2512.17335v1 Announce Type: cross -Abstract: The pervasive threat of jamming attacks, particularly from adaptive jammers capable of optimizing their strategies, poses a significant challenge to the security and reliability of wireless communications. This paper addresses this issue by investigating anti-jamming communications empowered by an active reconfigurable intelligent surface. 
The strategic interaction between the legitimate system and the adaptive jammer is modeled as a Stackelberg game, where the legitimate user, acting as the leader, proactively designs its strategy while anticipating the jammer's optimal response. We prove the existence of the Stackelberg equilibrium and derive it using a backward induction method. Particularly, the jammer's optimal strategy is embedded into the leader's problem, resulting in a bi-level optimization that jointly considers legitimate transmit power, transmit/receive beamformers, and active reflection. We tackle this complex, non-convex problem by using a block coordinate descent framework, wherein subproblems are iteratively solved via convex relaxation and successive convex approximation techniques. Simulation results demonstrate the significant superiority of the proposed active RIS-assisted scheme in enhancing legitimate transmissions and degrading jamming effects compared to baseline schemes across various scenarios. These findings highlight the effectiveness of combining active RIS technology with a strategic game-theoretic framework for anti-jamming communications. - oai:arXiv.org:2512.17335v1 - eess.SP - cs.IT - math.IT - Mon, 22 Dec 2025 00:00:00 -0500 - cross + CrashChat: A Multimodal Large Language Model for Multitask Traffic Crash Video Analysis + https://arxiv.org/abs/2512.18878 + arXiv:2512.18878v1 Announce Type: new +Abstract: Automating crash video analysis is essential to leverage the growing availability of driving video data for traffic safety research and accountability attribution in autonomous driving. Crash video analysis is a challenging multitask problem due to the complex spatiotemporal dynamics of crash events in video data and the diverse analytical requirements involved. It requires capabilities spanning crash recognition, temporal grounding, and high-level video understanding. 
Existing models, however, cannot perform all these tasks within a unified framework, and effective training strategies for such models remain underexplored. To fill these gaps, this paper proposes CrashChat, a multimodal large language model (MLLM) for multitask traffic crash analysis, built upon VideoLLaMA3. CrashChat acquires domain-specific knowledge through instruction fine-tuning and employs a novel multitask learning strategy based on task decoupling and grouping, which maximizes the benefit of joint learning within and across task groups while mitigating negative transfer. Numerical experiments on consolidated public datasets demonstrate that CrashChat consistently outperforms existing MLLMs across model scales and traditional vision-based methods, achieving state-of-the-art performance. It reaches near-perfect accuracy in crash recognition, a 176\% improvement in crash localization, and a 40\% improvement in the more challenging pre-crash localization. Compared to general MLLMs, it substantially enhances textual accuracy and content coverage in crash description and reasoning tasks, with 0.18-0.41 increases in BLEU scores and 0.18-0.42 increases in ROUGE scores. Beyond its strong performance, CrashChat is a convenient, end-to-end analytical tool ready for practical implementation. The dataset and implementation code for CrashChat are available at https://github.com/Liangkd/CrashChat. 
+ oai:arXiv.org:2512.18878v1 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiao Tang, Zhen Ma, Bin Li, Cong Li, Qinghe Du, Dusit Niyato, Zhu Han + Kaidi Liang, Ke Li, Xianbiao Hu, Ruwen Qin - Penalized Fair Regression for Multiple Groups in Chronic Kidney Disease - https://arxiv.org/abs/2512.17340 - arXiv:2512.17340v1 Announce Type: cross -Abstract: Fair regression methods have the potential to mitigate societal bias concerns in health care, but there has been little work on penalized fair regression when multiple groups experience such bias. We propose a general regression framework that addresses this gap with unfairness penalties for multiple groups. Our approach is demonstrated for binary outcomes with true positive rate disparity penalties. It can be efficiently implemented through reduction to a cost-sensitive classification problem. We additionally introduce novel score functions for automatically selecting penalty weights. Our penalized fair regression methods are empirically studied in simulations, where they achieve a fairness-accuracy frontier beyond that of existing comparison methods. Finally, we apply these methods to a national multi-site primary care study of chronic kidney disease to develop a fair classifier for end-stage renal disease. There we find substantial improvements in fairness for multiple race and ethnicity groups who experience societal bias in the health care system without any appreciable loss in overall fit. - oai:arXiv.org:2512.17340v1 - stat.ME + Can LLMs Estimate Student Struggles? Human-AI Difficulty Alignment with Proficiency Simulation for Item Difficulty Prediction + https://arxiv.org/abs/2512.18880 + arXiv:2512.18880v1 Announce Type: new +Abstract: Accurate estimation of item (question or task) difficulty is critical for educational assessment but suffers from the cold start problem. 
While Large Language Models demonstrate superhuman problem-solving capabilities, it remains an open question whether they can perceive the cognitive struggles of human learners. In this work, we present a large-scale empirical analysis of Human-AI Difficulty Alignment for over 20 models across diverse domains such as medical knowledge and mathematical reasoning. Our findings reveal a systematic misalignment where scaling up model size is not reliably helpful; instead of aligning with humans, models converge toward a shared machine consensus. We observe that high performance often impedes accurate difficulty estimation, as models struggle to simulate the capability limitations of students even when being explicitly prompted to adopt specific proficiency levels. Furthermore, we identify a critical lack of introspection, as models fail to predict their own limitations. These results suggest that general problem-solving capability does not imply an understanding of human cognitive struggles, highlighting the challenge of using current models for automated difficulty prediction. + oai:arXiv.org:2512.18880v1 + cs.CL + cs.AI cs.CY - cs.LG - stat.AP - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 - cross + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Carter H. Nakamoto, Lucia Lushi Chen, Agata Foryciarz, Sherri Rose + Ming Li, Han Chen, Yunze Xiao, Jian Chen, Hong Jiao, Tianyi Zhou - Sharp Structure-Agnostic Lower Bounds for General Functional Estimation - https://arxiv.org/abs/2512.17341 - arXiv:2512.17341v1 Announce Type: cross -Abstract: The design of efficient nonparametric estimators has long been a central problem in statistics, machine learning, and decision making. Classical optimal procedures often rely on strong structural assumptions, which can be misspecified in practice and complicate deployment. 
This limitation has sparked growing interest in structure-agnostic approaches -- methods that debias black-box nuisance estimates without imposing structural priors. Understanding the fundamental limits of these methods is therefore crucial. This paper provides a systematic investigation of the optimal error rates achievable by structure-agnostic estimators. We first show that, for estimating the average treatment effect (ATE), a central parameter in causal inference, doubly robust learning attains optimal structure-agnostic error rates. We then extend our analysis to a general class of functionals that depend on unknown nuisance functions and establish the structure-agnostic optimality of debiased/double machine learning (DML). We distinguish two regimes -- one where double robustness is attainable and one where it is not -- leading to different optimal rates for first-order debiasing, and show that DML is optimal in both regimes. Finally, we instantiate our general lower bounds by deriving explicit optimal rates that recover existing results and extend to additional estimands of interest. Our results provide theoretical validation for widely used first-order debiasing methods and guidance for practitioners seeking optimal approaches in the absence of structural assumptions. This paper generalizes and subsumes the ATE lower bound established in \citet{jin2024structure} by the same authors. 
- oai:arXiv.org:2512.17341v1 - stat.ML - cs.LG - econ.EM - math.ST - stat.ME - stat.TH - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Jikai Jin, Vasilis Syrgkanis + Distribution Network Restoration with Mobile Resources Dispatch: A Simulation-Based Online Dynamic Programming Approach + https://arxiv.org/abs/2512.18885 + arXiv:2512.18885v1 Announce Type: new +Abstract: Dispatching mobile resources such as repair crews and mobile emergency generators is essential for the rapid restoration of distribution systems after extreme events. However, the restoration process is affected by various uncertain factors including repair time, road condition, and newly observed failures, necessitating online decision-making in response to real-time information. This paper proposes a simulation-based online dynamic programming approach to provide real-time restoration actions considering the dispatch of mobile resources. Using an index-based priority rule as the base policy, the remaining cumulative loss at the current state and a given action is evaluated from online simulation. As the base policy is explicit, the simulation is efficient. Then, the action leading to the minimum cumulative loss is chosen. It is proven that such a strategy improves the performance of base policy. The proposed policy adapts to real-time information updates and does not rely on offline training, so incurs no data and convergence-related issues, which is important in restoration tasks where the historical data of extreme events is rare. The rolling optimization approach may not meet the requirement of online use, because routing mobile resources gives rise to large-scale discrete optimization problems. Case studies on 123-bus and 8500-bus systems demonstrate that the proposed method achieves higher efficiency and better performance compared with rolling horizon optimization. 
+ oai:arXiv.org:2512.18885v1 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + 10.1016/j.ijepes.2025.110667 + International Journal of Electrical Power & Energy Systems, 168 (2025) 110667 + Mingxuan Li, Wei Wei, Yin Xu, Ying Wang, Shanshan Shi - Nowhere-zero flow reconfiguration - https://arxiv.org/abs/2512.17342 - arXiv:2512.17342v1 Announce Type: cross -Abstract: We initiate the study of nowhere-zero flow reconfiguration. The natural question is whether any two nowhere-zero $k$-flows of a given graph $G$ are connected by a sequence of nowhere-zero $k$-flows of $G$, such that any two consecutive flows in the sequence differ only on a cycle of $G$. We conjecture that any two nowhere-zero 5-flows in any 2-edge-connected graph are connected in this way. This can be seen as a reconfiguration variant of Tutte's 5-flow conjecture. - We study this problem in the setting of integer flows and group flows, and show that the structure of groups affects the answer, contrary to the existence of nowhere-zero flows. We also highlight a duality with recoloring in planar graphs and deduce that any two nowhere-zero 7-flows in a planar graph are connected, among other results. Finally we show that for any graph $G$, there is an abelian group $A$ such that all nowhere-zero $A$-flows in $G$ are connected, which is a weak form of our original conjecture. We conclude with several problems and conjectures. - oai:arXiv.org:2512.17342v1 - math.CO - cs.DM - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Louis Esperet, Aur\'elie Lagoutte, Margaux Marseloo + Localising Shortcut Learning in Pixel Space via Ordinal Scoring Correlations for Attribution Representations (OSCAR) + https://arxiv.org/abs/2512.18888 + arXiv:2512.18888v1 Announce Type: new +Abstract: Deep neural networks often exploit shortcuts. 
These are spurious cues which are associated with output labels in the training data but are unrelated to task semantics. When the shortcut features are associated with sensitive attributes, shortcut learning can lead to biased model performance. Existing methods for localising and understanding shortcut learning are mostly based upon qualitative, image-level inspection and assume cues are human-visible, limiting their use in domains such as medical imaging. We introduce OSCAR (Ordinal Scoring Correlations for Attribution Representations), a model-agnostic framework for quantifying shortcut learning and localising shortcut features. OSCAR converts image-level task attribution maps into dataset-level rank profiles of image regions and compares them across three models: a balanced baseline model (BA), a test model (TS), and a sensitive attribute predictor (SA). By computing pairwise, partial, and deviation-based correlations on these rank profiles, we produce a set of quantitative metrics that characterise the degree of shortcut reliance for TS, together with a ranking of image-level regions that contribute most to it. Experiments on CelebA, CheXpert, and ADNI show that our correlations are (i) stable across seeds and partitions, (ii) sensitive to the level of association between shortcut features and output labels in the training data, and (iii) able to distinguish localised from diffuse shortcut features. As an illustration of the utility of our method, we show how worst-group performance disparities can be reduced using a simple test-time attenuation approach based on the identified shortcut regions. OSCAR provides a lightweight, pixel-space audit that yields statistical decision rules and spatial maps, enabling users to test, localise, and mitigate shortcut reliance. 
The code is available at https://github.com/acharaakshit/oscar + oai:arXiv.org:2512.18888v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Akshit Achara, Peter Triantafillou, Esther Puyol-Ant\'on, Alexander Hammers, Andrew P. King - Perfect reconstruction of sparse signals using nonconvexity control and one-step RSB message passing - https://arxiv.org/abs/2512.17426 - arXiv:2512.17426v1 Announce Type: cross -Abstract: We consider sparse signal reconstruction via minimization of the smoothly clipped absolute deviation (SCAD) penalty, and develop one-step replica-symmetry-breaking (1RSB) extensions of approximate message passing (AMP), termed 1RSB-AMP. Starting from the 1RSB formulation of belief propagation, we derive explicit update rules of 1RSB-AMP together with the corresponding state evolution (1RSB-SE) equations. A detailed comparison shows that 1RSB-AMP and 1RSB-SE agree remarkably well at the macroscopic level, even in parameter regions where replica-symmetric (RS) AMP, termed RS-AMP, diverges and where the 1RSB description itself is not expected to be thermodynamically exact. Fixed-point analysis of 1RSB-SE reveals a phase diagram consisting of success, failure, and diverging phases, as in the RS case. However, the diverging-region boundary now depends on the Parisi parameter due to the 1RSB ansatz, and we propose a new criterion -- minimizing the size of the diverging region -- rather than the conventional zero-complexity condition, to determine its value. Combining this criterion with the nonconvexity-control (NCC) protocol proposed in a previous RS study improves the algorithmic limit of perfect reconstruction compared with RS-AMP. Numerical solutions of 1RSB-SE and experiments with 1RSB-AMP confirm that this improved limit is achieved in practice, though the gain is modest and remains slightly inferior to the Bayes-optimal threshold. 
We also report the behavior of thermodynamic quantities -- overlaps, free entropy, complexity, and the non-self-averaging susceptibility -- that characterize the 1RSB phase in this problem. - oai:arXiv.org:2512.17426v1 - stat.ML - cond-mat.dis-nn - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiaosi Gu, Ayaka Sakata, Tomoyuki Obuchi + Household Plastic Recycling: Empirical Insights and Design Explorations + https://arxiv.org/abs/2512.18889 + arXiv:2512.18889v1 Announce Type: new +Abstract: This article examines household plastic recycling in Finland through two qualitative studies and four design concepts. Study 1 reports short interviews with residents about how they store, sort, and dispose of plastic packaging in their homes. The findings highlight recurring frictions: limited space, improvised storage, uncertainty about correct sorting, and difficulties with bulky or dirty items. Study 2 focuses on laundry detergent packaging as a common source of large plastic containers. Participants' purchase decisions prioritised price and cleaning performance, while expressing concern for environmental impact and confusion about materials, rinsing, and recyclability. + Building on these insights, four student groups designed interactive recycling concepts that combine physical bins or bags with mobile applications. The concepts explore modular storage, sensing and compaction, playful feedback, and reward schemes to support domestic recycling routines. Together, the studies and concepts point to design opportunities at the intersection of packaging, home infrastructure, and digital services, while also raising questions about feasibility, privacy, and the cost of new devices. 
+ oai:arXiv.org:2512.18889v1 + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Ashley Colley, Emma Kirjavainen, Sari Tapio, Jonna H\"akkil\"a - Four special Poncelet triangle families about the incircle - https://arxiv.org/abs/2512.17440 - arXiv:2512.17440v1 Announce Type: cross -Abstract: We describe four special families of ellipse-inscribed Poncelet triangles about the incircle which maintain certain triangle centers stationary and which also display interesting conservations. - oai:arXiv.org:2512.17440v1 - math.MG - cs.GR - Mon, 22 Dec 2025 00:00:00 -0500 - cross + A Real-Time Digital Twin for Adaptive Scheduling + https://arxiv.org/abs/2512.18894 + arXiv:2512.18894v1 Announce Type: new +Abstract: High-performance computing (HPC) workloads are becoming increasingly diverse, exhibiting wide variability in job characteristics, yet cluster scheduling has long relied on static, heuristic-based policies. In this work we present SchedTwin, a real-time digital twin designed to adaptively guide scheduling decisions using predictive simulation. SchedTwin periodically ingests runtime events from the physical scheduler, performs rapid what-if evaluations of multiple policies using a high-fidelity discrete-event simulator, and dynamically selects the one satisfying the administrator configured optimization goal. We implement SchedTwin as an open-source software and integrate it with the production PBS scheduler. Preliminary results show that SchedTwin consistently outperforms widely used static scheduling policies, while maintaining low overhead (a few seconds per scheduling cycle). These results demonstrate that real-time digital twins offer a practical and effective path toward adaptive HPC scheduling. + oai:arXiv.org:2512.18894v1 + cs.DC + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ronaldo A. Garcia, Mark Helman, Dan Reznik + Yihe Zhang, Yash Kurkure, Yiheng Tao, Michael E. 
Papka, Zhiling Lan - Alternating Direction Method of Multipliers for Nonlinear Matrix Decompositions - https://arxiv.org/abs/2512.17473 - arXiv:2512.17473v1 Announce Type: cross -Abstract: We present an algorithm based on the alternating direction method of multipliers (ADMM) for solving nonlinear matrix decompositions (NMD). Given an input matrix $X \in \mathbb{R}^{m \times n}$ and a factorization rank $r \ll \min(m, n)$, NMD seeks matrices $W \in \mathbb{R}^{m \times r}$ and $H \in \mathbb{R}^{r \times n}$ such that $X \approx f(WH)$, where $f$ is an element-wise nonlinear function. We evaluate our method on several representative nonlinear models: the rectified linear unit activation $f(x) = \max(0, x)$, suitable for nonnegative sparse data approximation, the component-wise square $f(x) = x^2$, applicable to probabilistic circuit representation, and the MinMax transform $f(x) = \min(b, \max(a, x))$, relevant for recommender systems. The proposed framework flexibly supports diverse loss functions, including least squares, $\ell_1$ norm, and the Kullback-Leibler divergence, and can be readily extended to other nonlinearities and metrics. We illustrate the applicability, efficiency, and adaptability of the approach on real-world datasets, highlighting its potential for a broad range of applications. - oai:arXiv.org:2512.17473v1 - eess.SP + Thinking Beyond Labels: Vocabulary-Free Fine-Grained Recognition using Reasoning-Augmented LMMs + https://arxiv.org/abs/2512.18897 + arXiv:2512.18897v1 Announce Type: new +Abstract: Vocabulary-free fine-grained image recognition aims to distinguish visually similar categories within a meta-class without a fixed, human-defined label set. Existing solutions for this problem are limited by either the usage of a large and rigid list of vocabularies or by the dependency on complex pipelines with fragile heuristics where errors propagate across stages. 
Meanwhile, the ability of recent large multi-modal models (LMMs) equipped with explicit or implicit reasoning to comprehend visual-language data, decompose problems, retrieve latent knowledge, and self-correct suggests a more principled and effective alternative. Building on these capabilities, we propose FiNDR (Fine-grained Name Discovery via Reasoning), the first reasoning-augmented LMM-based framework for vocabulary-free fine-grained recognition. The system operates in three automated steps: (i) a reasoning-enabled LMM generates descriptive candidate labels for each image; (ii) a vision-language model filters and ranks these candidates to form a coherent class set; and (iii) the verified names instantiate a lightweight multi-modal classifier used at inference time. Extensive experiments on popular fine-grained classification benchmarks demonstrate state-of-the-art performance under the vocabulary-free setting, with a significant relative margin of up to 18.8% over previous approaches. Remarkably, the proposed method surpasses zero-shot baselines that exploit pre-defined ground-truth names, challenging the assumption that human-curated vocabularies define an upper bound. Additionally, we show that carefully curated prompts enable open-source LMMs to match proprietary counterparts. These findings establish reasoning-augmented LMMs as an effective foundation for scalable, fully automated, open-world fine-grained visual recognition. The source code is available on github.com/demidovd98/FiNDR. 
+ oai:arXiv.org:2512.18897v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Dmitry Demidov, Zaigham Zaheer, Zongyan Han, Omkar Thawakar, Rao Anwer + + + Gabliteration: Adaptive Multi-Directional Neural Weight Modification for Selective Behavioral Alteration in Large Language Models + https://arxiv.org/abs/2512.18901 + arXiv:2512.18901v1 Announce Type: new +Abstract: We present Gabliteration, a novel neural weight modification technique that advances beyond traditional abliteration methods by implementing adaptive multi-directional projections with regularized layer selection. Our approach addresses the fundamental limitation of existing methods that compromise model quality while attempting to modify specific behavioral patterns. Through dynamic layer optimization, regularized projection matrices, and adaptive scaling mechanisms, we achieve theoretically superior weight modification while minimizing quality degradation in unrelated domains. We validate our method through the gabliterated-v1 model series (0.6B to 4B parameters) available on Hugging Face, demonstrating practical applicability across multiple model scales. + oai:arXiv.org:2512.18901v1 + cs.AI cs.LG - math.OC - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Atharva Awari, Nicolas Gillis, Arnaud Vandaele + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + G\"okdeniz G\"ulmez - Zero-Shot Recognition of Dysarthric Speech Using Commercial Automatic Speech Recognition and Multimodal Large Language Models - https://arxiv.org/abs/2512.17474 - arXiv:2512.17474v1 Announce Type: cross -Abstract: Voice-based human-machine interaction is a primary modality for accessing intelligent systems, yet individuals with dysarthria face systematic exclusion due to recognition performance gaps. 
Whilst automatic speech recognition (ASR) achieves word error rates (WER) below 5% on typical speech, performance degrades dramatically for dysarthric speakers. Multimodal large language models (MLLMs) offer potential for leveraging contextual reasoning to compensate for acoustic degradation, yet their zero-shot capabilities remain uncharacterised. This study evaluates eight commercial speech-to-text services on the TORGO dysarthric speech corpus: four conventional ASR systems (AssemblyAI, Whisper large-v3, Deepgram Nova-3, Nova-3 Medical) and four MLLM-based systems (GPT-4o, GPT-4o Mini, Gemini 2.5 Pro, Gemini 2.5 Flash). Evaluation encompasses lexical accuracy, semantic preservation, and cost-latency trade-offs. Results demonstrate severity-dependent degradation: mild dysarthria achieves 3-5% WER approaching typical-speech benchmarks, whilst severe dysarthria exceeds 49% WER across all systems. A verbatim-transcription prompt yields architecture-specific effects: GPT-4o achieves 7.36 percentage point WER reduction with consistent improvement across all tested speakers, whilst Gemini variants exhibit degradation. Semantic metrics indicate that communicative intent remains partially recoverable despite elevated lexical error rates. These findings establish empirical baselines enabling evidence-based technology selection for assistive voice interface deployment. - oai:arXiv.org:2512.17474v1 - eess.AS + Speaker Recognition -- Wavelet Packet Based Multiresolution Feature Extraction Approach + https://arxiv.org/abs/2512.18902 + arXiv:2512.18902v1 Announce Type: new +Abstract: This paper proposes a novel Wavelet Packet based feature extraction approach for the task of text independent speaker recognition. 
The features are extracted by using the combination of Mel Frequency Cepstral Coefficients (MFCC) and the Wavelet Packet Transform (WPT). The hybrid-features technique combines the human-ear simulation offered by MFCC with the multi-resolution property and noise robustness of WPT. To check the validity of the proposed approach for text-independent speaker identification and verification, we use the Gaussian Mixture Model (GMM) and Hidden Markov Model (HMM), respectively, as the classifiers. The proposed paradigm is tested on the VoxForge speech corpus and the CSTR US KED TIMIT database. The paradigm is also evaluated after adding a standard noise signal at different SNR levels to assess noise robustness. Experimental results show improved performance for both speaker identification and speaker verification.
 + oai:arXiv.org:2512.18902v1
 cs.SD
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Saurabh Bhardwaj, Smriti Srivastava, Abhishek Bhandari, Krit Gupta, Hitesh Bahl, J. R. P. Gupta
 

 Resource-efficient medical image classification for edge devices
 https://arxiv.org/abs/2512.17515
 arXiv:2512.17515v1 Announce Type: cross
Abstract: Medical image classification is a critical task in healthcare, enabling accurate and timely diagnosis. However, deploying deep learning models on resource-constrained edge devices presents significant challenges due to computational and memory limitations. This research investigates a resource-efficient approach to medical image classification by employing model quantization techniques. Quantization reduces the precision of model parameters and activations, significantly lowering computational overhead and memory requirements without sacrificing classification accuracy. 
The study focuses on the optimization of quantization-aware training (QAT) and post-training quantization (PTQ) methods tailored for edge devices, analyzing their impact on model performance across medical imaging datasets. Experimental results demonstrate that quantized models achieve substantial reductions in model size and inference latency, enabling real-time processing on edge hardware while maintaining clinically acceptable diagnostic accuracy. This work provides a practical pathway for deploying AI-driven medical diagnostics in remote and resource-limited settings, enhancing the accessibility and scalability of healthcare technologies. - oai:arXiv.org:2512.17515v1 - eess.IV - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Proc. Int. Conf. Appl. Mach. Intelligence and Data Analytics (ICAMIDA), IEEE, 2025 - Mahsa Lavaei, Zahra Abadi, Salar Beigzad, Alireza Maleki + Weak Galerkin finite element methods for elliptic interface problems on nonconvex polygonal partitions + https://arxiv.org/abs/2512.18905 + arXiv:2512.18905v1 Announce Type: new +Abstract: This paper proposes a weak Galerkin (WG) finite element method for elliptic interface problems defined on nonconvex polygonal partitions. The method features a built-in stabilizer and retains a simple, symmetric, and positive definite formulation. An optimal-order error estimate is rigorously derived in the discrete $H^1$ norm. Furthermore, a series of numerical experiments are provided to verify the theoretical results and to demonstrate the robustness and effectiveness of the proposed WG method for elliptic interface problems. 
+ oai:arXiv.org:2512.18905v1 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/publicdomain/zero/1.0/ + Chunmei Wang, Shangyou Zhang - Computational analysis reveals historical trajectory of East-Polynesian lunar calendars - https://arxiv.org/abs/2512.17525 - arXiv:2512.17525v1 Announce Type: cross -Abstract: We investigate a type of lunar calendar known as lists of the 'nights of the moon', found throughout East Polynesia, including Rapa Nui (Easter Island). Using computational methods, we analyzed the lexical and structural divergence of 49 calendric lists from all major archipelagos, each containing about 30 night names. Our results, presented as a rooted phylogenetic tree, show a clear split into two main groups: one including lists from Rapa Nui, Mangareva, and the Marquesas; the other comprising lists from New Zealand, Hawaii, the Cook Islands, the Austral Islands, Tahiti, and the Tuamotu. This pattern aligns with a recent alternative classification of East Polynesian languages into 'Distal' (Marquesan, Mangarevan, Rapanui) and 'Proximal' (Maori, Hawaiian, Tahitian, etc.) subgroups. Since both language and lunar calendars are symbolic systems passed down and changed within communities - and given the geographic isolation of many archipelagos - we interpret this correspondence as evidence that the early divergence of East Polynesian lunar calendars mirrors early population movements and language splits in the region. - oai:arXiv.org:2512.17525v1 - q-bio.PE + Remedy-R: Generative Reasoning for Machine Translation Evaluation without Error Annotations + https://arxiv.org/abs/2512.18906 + arXiv:2512.18906v1 Announce Type: new +Abstract: Over the years, automatic MT metrics have hillclimbed benchmarks and presented strong and sometimes human-level agreement with human ratings. Yet they remain black-box, offering little insight into their decision-making and often failing under real-world out-of-distribution (OOD) inputs. 
We introduce Remedy-R, a reasoning-driven generative MT metric trained with reinforcement learning from pairwise translation preferences, without requiring error-span annotations or distillation from closed LLMs. Remedy-R produces step-by-step analyses of accuracy, fluency, and completeness, followed by a final score, enabling more interpretable assessments. With only 60K training pairs across two language pairs, Remedy-R remains competitive with top scalar metrics and GPT-4-based judges on WMT22-24 meta-evaluation, generalizes to other languages, and exhibits strong robustness on OOD stress tests. Moreover, Remedy-R models generate self-reflective feedback that can be reused for translation improvement. Building on this finding, we introduce Remedy-R Agent, a simple evaluate-revise pipeline that leverages Remedy-R's evaluation analysis to refine translations. This agent consistently improves translation quality across diverse models, including Qwen2.5, ALMA-R, GPT-4o-mini, and Gemini-2.0-Flash, suggesting that Remedy-R's reasoning captures translation-relevant information and is practically useful. + oai:arXiv.org:2512.18906v1 cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Miguel Val\'erio, Fabio Tamburini, Michele Corazza + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Shaomu Tan, Ryosuke Mitani, Ritvik Choudhary, Qiyu Wu, Toshiyuki Sekiya, Christof Monz - HydroGym: A Reinforcement Learning Platform for Fluid Dynamics - https://arxiv.org/abs/2512.17534 - arXiv:2512.17534v1 Announce Type: cross -Abstract: Modeling and controlling fluid flows is critical for several fields of science and engineering, including transportation, energy, and medicine. Effective flow control can lead to, e.g., lift increase, drag reduction, mixing enhancement, and noise reduction. 
However, controlling a fluid faces several significant challenges, including high-dimensional, nonlinear, and multiscale interactions in space and time. Reinforcement learning (RL) has recently shown great success in complex domains, such as robotics and protein folding, but its application to flow control is hindered by a lack of standardized benchmark platforms and the computational demands of fluid simulations. To address these challenges, we introduce HydroGym, a solver-independent RL platform for flow control research. HydroGym integrates sophisticated flow control benchmarks, scalable runtime infrastructure, and state-of-the-art RL algorithms. Our platform includes 42 validated environments spanning from canonical laminar flows to complex three-dimensional turbulent scenarios, validated over a wide range of Reynolds numbers. We provide non-differentiable solvers for traditional RL and differentiable solvers that dramatically improve sample efficiency through gradient-enhanced optimization. Comprehensive evaluation reveals that RL agents consistently discover robust control principles across configurations, such as boundary layer manipulation, acoustic feedback disruption, and wake reorganization. Transfer learning studies demonstrate that controllers learned at one Reynolds number or geometry adapt efficiently to new conditions, requiring approximately 50% fewer training episodes. The HydroGym platform is highly extensible and scalable, providing a framework for researchers in fluid dynamics, machine learning, and control to add environments, surrogate models, and control algorithms to advance science and technology. 
- oai:arXiv.org:2512.17534v1 - physics.flu-dyn + Multimodal Bayesian Network for Robust Assessment of Casualties in Autonomous Triage + https://arxiv.org/abs/2512.18908 + arXiv:2512.18908v1 Announce Type: new +Abstract: Mass Casualty Incidents can overwhelm emergency medical systems and resulting delays or errors in the assessment of casualties can lead to preventable deaths. We present a decision support framework that fuses outputs from multiple computer vision models, estimating signs of severe hemorrhage, respiratory distress, physical alertness, or visible trauma, into a Bayesian network constructed entirely from expert-defined rules. Unlike traditional data-driven models, our approach does not require training data, supports inference with incomplete information, and is robust to noisy or uncertain observations. We report performance for two missions involving 11 and 9 casualties, respectively, where our Bayesian network model substantially outperformed vision-only baselines during evaluation of our system in the DARPA Triage Challenge (DTC) field scenarios. The accuracy of physiological assessment improved from 15% to 42% in the first scenario and from 19% to 46% in the second, representing nearly threefold increase in performance. More importantly, overall triage accuracy increased from 14% to 53% in all patients, while the diagnostic coverage of the system expanded from 31% to 95% of the cases requiring assessment. These results demonstrate that expert-knowledge-guided probabilistic reasoning can significantly enhance automated triage systems, offering a promising approach to supporting emergency responders in MCIs. This approach enabled Team Chiron to achieve 4th place out of 11 teams during the 1st physical round of the DTC. 
+ oai:arXiv.org:2512.18908v1
 cs.AI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Szymon Rusiecki, Cecilia G. Morales, Kimberly Elenberg, Leonard Weiss, Artur Dubrawski
 

 SkinGenBench: Generative Model and Preprocessing Effects for Synthetic Dermoscopic Augmentation in Melanoma Diagnosis
 https://arxiv.org/abs/2512.17585
 arXiv:2512.17585v1 Announce Type: cross
Abstract: This work introduces SkinGenBench, a systematic biomedical imaging benchmark that investigates how preprocessing complexity interacts with generative model choice for synthetic dermoscopic image augmentation and downstream melanoma diagnosis. Using a curated dataset of 14,116 dermoscopic images from HAM10000 and MILK10K across five lesion classes, we evaluate two representative generative paradigms: StyleGAN2-ADA and Denoising Diffusion Probabilistic Models (DDPMs) under basic geometric augmentation and advanced artifact removal pipelines. Synthetic melanoma images are assessed using established perceptual and distributional metrics (FID, KID, IS), feature space analysis, and their impact on diagnostic performance across five downstream classifiers. Experimental results demonstrate that generative architecture choice has a stronger influence on both image fidelity and diagnostic utility than preprocessing complexity. StyleGAN2-ADA consistently produced synthetic images more closely aligned with real data distributions, achieving the lowest FID (~65.5) and KID (~0.05), while diffusion models generated higher variance samples at the cost of reduced perceptual fidelity and class anchoring. 
Advanced artifact removal yielded only marginal improvements in generative metrics and provided limited downstream diagnostic gains, suggesting possible suppression of clinically relevant texture cues. In contrast, synthetic data augmentation substantially improved melanoma detection with 8-15% absolute gains in melanoma F1-score, and ViT-B/16 achieving F1~0.88 and ROC-AUC~0.98, representing an improvement of approximately 14% over non-augmented baselines. Our code can be found at https://github.com/adarsh-crafts/SkinGenBench - oai:arXiv.org:2512.17585v1 - eess.IV + Delta-LLaVA: Base-then-Specialize Alignment for Token-Efficient Vision-Language Models + https://arxiv.org/abs/2512.18910 + arXiv:2512.18910v1 Announce Type: new +Abstract: Multimodal Large Language Models (MLLMs) combine visual and textual representations to enable rich reasoning capabilities. However, the high computational cost of processing dense visual tokens remains a major bottleneck. A critical component in this pipeline is the visual projector, which bridges the vision encoder and the language model. Standard designs often employ a simple multi-layer perceptron for direct token mapping, but this approach scales poorly with high-resolution inputs, introducing significant redundancy. We present Delta-LLaVA, a token-efficient projector that employs a low-rank DeltaProjection to align multi-level vision features into a compact subspace before further interaction. On top of this base alignment, lightweight Transformer blocks act as specialization layers, capturing both global and local structure under constrained token budgets. Extensive experiments and ablations demonstrate that this base-then-specialize design yields consistent gains across multiple benchmarks with only 144 tokens, highlighting the importance of token formation prior to scaling interaction capacity. 
With Delta-LLaVA, inference throughput improves by up to 55%, while end-to-end training accelerates by nearly 4-5x in pretraining and over 1.5x in finetuning, highlighting the dual benefits of our design in both efficiency and scalability. + oai:arXiv.org:2512.18910v1 cs.CV - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - N. A. Adarsh Pritam, Jeba Shiney O, Sanyam Jain + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Mohamad Zamini, Diksha Shukla - Generative Multi-Objective Bayesian Optimization with Scalable Batch Evaluations for Sample-Efficient De Novo Molecular Design - https://arxiv.org/abs/2512.17659 - arXiv:2512.17659v1 Announce Type: cross -Abstract: Designing molecules that must satisfy multiple, often conflicting objectives is a central challenge in molecular discovery. The enormous size of chemical space and the cost of high-fidelity simulations have driven the development of machine learning-guided strategies for accelerating design with limited data. Among these, Bayesian optimization (BO) offers a principled framework for sample-efficient search, while generative models provide a mechanism to propose novel, diverse candidates beyond fixed libraries. However, existing methods that couple the two often rely on continuous latent spaces, which introduces both architectural entanglement and scalability challenges. This work introduces an alternative, modular "generate-then-optimize" framework for de novo multi-objective molecular design/discovery. At each iteration, a generative model is used to construct a large, diverse pool of candidate molecules, after which a novel acquisition function, qPMHI (multi-point Probability of Maximum Hypervolume Improvement), is used to optimally select a batch of candidates most likely to induce the largest Pareto front expansion. 
The key insight is that qPMHI decomposes additively, enabling exact, scalable batch selection via only simple ranking of probabilities that can be easily estimated with Monte Carlo sampling. We benchmark the framework against state-of-the-art latent-space and discrete molecular optimization methods, demonstrating significant improvements across synthetic benchmarks and application-driven tasks. Specifically, in a case study related to sustainable energy storage, we show that our approach quickly uncovers novel, diverse, and high-performing organic (quinone-based) cathode materials for aqueous redox flow battery applications. - oai:arXiv.org:2512.17659v1 - stat.ML - cond-mat.mtrl-sci - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - cross + QoS-Aware Load Balancing in the Computing Continuum via Multi-Player Bandits + https://arxiv.org/abs/2512.18915 + arXiv:2512.18915v1 Announce Type: new +Abstract: As computation shifts from the cloud to the edge to reduce processing latency and network traffic, the resulting Computing Continuum (CC) creates a dynamic environment where it is challenging to meet strict Quality of Service (QoS) requirements and avoid service instance overload. Existing methods often prioritize global metrics, overlooking per-client QoS, which is crucial for latency-sensitive and reliability-critical applications. We propose QEdgeProxy, a decentralized QoS-aware load balancer that acts as a proxy between IoT devices and service instances in CC. We formulate the load balancing problem as a Multi-Player Multi-Armed Bandit (MP-MAB) with heterogeneous rewards, where each load balancer autonomously selects service instances that maximize the probability of meeting its clients' QoS targets by using Kernel Density Estimation (KDE) to estimate QoS success probabilities. It also incorporates an adaptive exploration mechanism to recover rapidly from performance shifts and non-stationary conditions. 
We present a Kubernetes-native QEdgeProxy implementation and evaluate it on an emulated CC testbed deployed on a K3s cluster with realistic network conditions and a latency-sensitive edge-AI workload. Results show that QEdgeProxy significantly outperforms proximity-based and reinforcement-learning baselines in per-client QoS satisfaction, while adapting effectively to load surges and instance availability changes. + oai:arXiv.org:2512.18915v1 + cs.NI + cs.DC + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Madhav R. Muthyala, Farshud Sorourifar, Tianhong Tan, You Peng, Joel A. Paulson + Ivan \v{C}ili\'c, Ivana Podnar \v{Z}arko, Pantelis Frangoudis, Schahram Dustdar - Fraud detection in credit card transactions using Quantum-Assisted Restricted Boltzmann Machines - https://arxiv.org/abs/2512.17660 - arXiv:2512.17660v1 Announce Type: cross -Abstract: Use cases for emerging quantum computing platforms become economically relevant as the efficiency of processing and availability of quantum computers increase. We assess the performance of Restricted Boltzmann Machines (RBM) assisted by quantum computing, running on real quantum hardware and simulators, using a real dataset containing 145 million transactions provided by Stone, a leading Brazilian fintech, for credit card fraud detection. The results suggest that the quantum-assisted RBM method is able to achieve superior performance in most figures of merit in comparison to classical approaches, even using current noisy quantum annealers. Our study paves the way for implementing quantum-assisted RBMs for general fault detection in financial systems. 
- oai:arXiv.org:2512.17660v1 - quant-ph - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - cross + Needles in a haystack: using forensic network science to uncover insider trading + https://arxiv.org/abs/2512.18918 + arXiv:2512.18918v1 Announce Type: new +Abstract: Although the automation and digitisation of anti-financial crime investigation has made significant progress in recent years, detecting insider trading remains a unique challenge, partly due to the limited availability of labelled data. To address this challenge, we propose using a data-driven networks approach that flags groups of corporate insiders who report coordinated transactions that are indicative of insider trading. Specifically, we leverage data on 2.9 million trades reported to the U.S. Securities and Exchange Commission (SEC) by company insiders (C-suite executives, board members and major shareholders) between 2014 and 2024. Our proposed algorithm constructs weighted edges between insiders based on the temporal similarity of their trades over the 10-year timeframe. Within this network we then uncover trends that indicate insider trading by focusing on central nodes and anomalous subgraphs. To highlight the validity of our approach we evaluate our findings with reference to two null models, generated by running our algorithm on synthetic empirically calibrated and shuffled datasets. The results indicate that our approach can be used to detect pairs or clusters of insiders whose behaviour suggests insider trading and/or market manipulation. 
+ oai:arXiv.org:2512.18918v1 + cs.SI + physics.data-an + q-fin.CP + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Jo\~ao Marcos Cavalcanti de Albuquerque Neto, Gustavo Castro do Amaral, Guilherme Penello Tempor\~ao + Gian Jaeger, Wang Ngai Yeung, Renaud Lambiotte - Imputation Uncertainty in Interpretable Machine Learning Methods - https://arxiv.org/abs/2512.17689 - arXiv:2512.17689v1 Announce Type: cross -Abstract: In real data, missing values occur frequently, which affects the interpretation with interpretable machine learning (IML) methods. Recent work considers bias and shows that model explanations may differ between imputation methods, while ignoring additional imputation uncertainty and its influence on variance and confidence intervals. We therefore compare the effects of different imputation methods on the confidence interval coverage probabilities of the IML methods permutation feature importance, partial dependence plots and Shapley values. We show that single imputation leads to underestimation of variance and that, in most cases, only multiple imputation is close to nominal coverage. - oai:arXiv.org:2512.17689v1 - stat.ML - cs.LG - stat.ME - Mon, 22 Dec 2025 00:00:00 -0500 - cross + Narrative Scaffolding: Transforming Data-Driven Sensemaking Through Narrative-First Exploration + https://arxiv.org/abs/2512.18920 + arXiv:2512.18920v1 Announce Type: new +Abstract: When exploring data, analysts construct narratives about what the data means by asking questions, generating visualizations, reflecting on patterns, and revising their interpretations as new insights emerge. Yet existing analysis tools treat narrative as an afterthought, breaking the link between reasoning, reflection, and the evolving story from exploration. Consequently, analysts lose the ability to see how their reasoning evolves, making it harder to reflect systematically or build coherent explanations. 
To address this gap, we propose Narrative Scaffolding, a framework for narrative-driven exploration that positions narrative construction as the primary interface for exploration and reasoning. We implement this framework in a system that externalizes iterative reasoning through narrative-first entry, semantically aligned view generation, and reflection support via insight provenance and inquiry tracking. In a within-subject study (N=20), we demonstrate that narrative scaffolding facilitates broader exploration, deeper reflection, and more defensible narratives. An evaluation with visualization literacy experts (N=6) confirmed that the system produced outputs aligned with narrative intent and facilitated intentional exploration.
 + oai:arXiv.org:2512.18920v1
 cs.HC
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Oliver Huang, Muhammad Fatir, Steven Luo, Sangho Suh, Hariharan Subramonyam, Carolina Nobre
 

 Revisiting the Broken Symmetry Phase of Solid Hydrogen: A Neural Network Variational Monte Carlo Study
 https://arxiv.org/abs/2512.17703
 arXiv:2512.17703v1 Announce Type: cross
Abstract: The crystal structure of high-pressure solid hydrogen remains a fundamental open problem. Although the research frontier has mostly shifted toward ultra-high pressure phases above 400 GPa, we show that even the broken symmetry phase observed around 130~GPa requires revisiting due to its intricate coupling of electronic and nuclear degrees of freedom. Here, we develop a first-principles quantum Monte Carlo framework based on a deep neural network wave function that treats both electrons and nuclei quantum mechanically within the constant pressure ensemble. Our calculations reveal an unreported ground-state structure candidate for the broken symmetry phase with $Cmcm$ space group symmetry, and we test its stability up to 96 atoms. 
The predicted structure quantitatively matches the experimental equation of state and X-ray diffraction patterns. Furthermore, our group-theoretical analysis shows that the $Cmcm$ structure is compatible with existing Raman and infrared spectroscopic data. Crucially, static density functional theory calculation reveals the $Cmcm$ structure as a dynamically unstable saddle point on the Born-Oppenheimer potential energy surface, demonstrating that a full quantum many-body treatment of the problem is necessary. These results shed new light on the phase diagram of high-pressure hydrogen and call for further experimental verifications. - oai:arXiv.org:2512.17703v1 - cond-mat.str-el - cond-mat.mtrl-sci + Merging of Kolmogorov-Arnold networks trained on disjoint datasets + https://arxiv.org/abs/2512.18921 + arXiv:2512.18921v1 Announce Type: new +Abstract: Training on disjoint datasets can serve two primary goals: accelerating data processing and enabling federated learning. It has already been established that Kolmogorov-Arnold networks (KANs) are particularly well suited for federated learning and can be merged through simple parameter averaging. While the federated learning literature has mostly focused on achieving training convergence across distributed nodes, the present paper specifically targets acceleration of the training, which depends critically on the choice of an optimisation method and the type of the basis functions. To the best knowledge of the authors, the fastest currently-available combination is the Newton-Kaczmarz method and the piecewise-linear basis functions. Here, it is shown that training on disjoint datasets (or disjoint subsets of the training dataset) can further improve the performance. Experimental comparisons are provided, and all corresponding codes are publicly available. 
+ oai:arXiv.org:2512.18921v1 cs.LG - physics.comp-ph - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shengdu Chai, Chen Lin, Xinyang Dong, Yuqiang Li, Wanli Ouyang, Lei Wang, X. C. Xie - - - Review of MEMS Speakers for Audio Applications - https://arxiv.org/abs/2512.17708 - arXiv:2512.17708v1 Announce Type: cross -Abstract: Microelectromechanical systems (MEMS) speakers are compact, scalable alternatives to traditional voice coil speakers, promising improved sound quality through precise semiconductor manufacturing. This review provides an overview of the research landscape, including ultrasound pulse-based and thermoacoustic sound generation, classifying MEMS speakers by actuation principle: electrodynamic, piezoelectric, and electrostatic. A comparative analysis of performance indicators from 1990-2025 highlights the dominance of piezoelectric MEMS with direct air displacement, focusing on miniaturization and efficiency. The review outlines upcoming research challenges and identifies potential candidates for achieving full-spectrum audio performance. A focus on innovative approaches could lead to wideband adoption of MEMS-only speakers. - oai:arXiv.org:2512.17708v1 - eess.AS - cs.SD - Mon, 22 Dec 2025 00:00:00 -0500 - cross + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Nils Wittek, Anton Melnikov, Bert Kaiser, Andr\'e Zimmermann - - - On the Complexity of Bipartite Degree Realizability - https://arxiv.org/abs/2512.17709 - arXiv:2512.17709v1 Announce Type: cross -Abstract: We study the \emph{Bipartite Degree Realization} (BDR) problem: given a graphic degree sequence $D$, decide whether it admits a realization as a bipartite graph. While bipartite realizability for a fixed vertex partition can be decided in polynomial time via the Gale--Ryser theorem, the computational complexity of BDR without a prescribed partition remains unresolved. 
We address this question through a parameterized analysis. - For constants $0 \le c_1 \le c_2 \le 1$, we define $\mathrm{BDR}_{c_1,c_2}$ as the restriction of BDR to degree sequences of length $n$ whose degrees lie in the interval $[c_1 n, c_2 n]$. Our main result shows that $\mathrm{BDR}_{c_1,c_2}$ is solvable in polynomial time whenever $0 \le c_1 \le c_2 \le \frac{\sqrt{c_1(c_1+4)}-c_1}{2}$, as well as for all $c_1 > \tfrac12$. The proof relies on a reduction to extremal \emph{least balanced degree sequences} and a detailed verification of the critical Gale--Ryser inequalities, combined with a bounded subset-sum formulation. - We further show that, assuming the NP-completeness of unrestricted BDR, the problem $\mathrm{BDR}_{c_1,c_2}$ remains NP-complete for all $0 < c_2 < \tfrac12$ and $c_1 < 1 - c_2 - \sqrt{1-2c_2}$. Our results clarify the algorithmic landscape of bipartite degree realization and contribute to the broader study of potentially bipartite graphic degree sequences. - oai:arXiv.org:2512.17709v1 - math.CO - cs.CC - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Istv\'an Mikl\'os + Andrew Polar, Michael Poluektov - Certified bounds on optimization problems in quantum theory - https://arxiv.org/abs/2512.17713 - arXiv:2512.17713v1 Announce Type: cross -Abstract: Semidefinite relaxations of polynomial optimization have become a central tool for addressing the non-convex optimization problems over non-commutative operators that are ubiquitous in quantum information theory and, more in general, quantum physics. Yet, as these global relaxation methods rely on floating-point methods, the bounds issued by the semidefinite solver can - and often do - exceed the global optimum, undermining their certifiability. 
To counter this issue, we introduce a rigorous framework for extracting exact rational bounds on non-commutative optimization problems from numerical data, and apply it to several paradigmatic problems in quantum information theory. An extension to sparsity and symmetry-adapted semidefinite relaxations is also provided and compared to the general dense scheme. Our results establish rational post-processing as a practical route to reliable certification, pushing semidefinite optimization toward a certifiable standard for quantum information science. - oai:arXiv.org:2512.17713v1 - quant-ph - cs.SC - math.OC - Mon, 22 Dec 2025 00:00:00 -0500 - cross + Optimizing Robotic Placement via Grasp-Dependent Feasibility Prediction + https://arxiv.org/abs/2512.18922 + arXiv:2512.18922v1 Announce Type: new +Abstract: In this paper, we study whether inexpensive, physics-free supervision can reliably prioritize grasp-place candidates for budget-aware pick-and-place. From an object's initial pose, target pose, and a candidate grasp, we generate two path-aware geometric labels: path-wise inverse kinematics (IK) feasibility across a fixed approach-grasp-lift waypoint template, and a transit collision flag from mesh sweeps along the same template. A compact dual-output MLP learns these signals from pose encodings, and at test time its scores rank precomputed candidates for a rank-then-plan policy under the same IK gate and planner as the baseline. Although learned from cheap labels only, the scores transfer to physics-enabled executed trajectories: at a fixed planning budget the policy finds successful paths sooner with fewer planner calls while keeping final success on par or better. This work targets a single rigid cuboid with side-face grasps and a fixed waypoint template, and we outline extensions to varied objects and richer waypoint schemes. 
+                oai:arXiv.org:2512.18922v1
                cs.RO
                Tue, 23 Dec 2025 00:00:00 -0500
                new
                http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                Tianyuan Liu, Richard Dazeley, Benjamin Champion, Akan Cosgun


-                Breast Cancer Neoadjuvant Chemotherapy Treatment Response Prediction Using Aligned Longitudinal MRI and Clinical Data
-                https://arxiv.org/abs/2512.17759
-                arXiv:2512.17759v1 Announce Type: cross
-Abstract: Aim: This study investigates treatment response prediction to neoadjuvant chemotherapy (NACT) in breast cancer patients, using longitudinal contrast-enhanced magnetic resonance images (CE-MRI) and clinical data. The goal is to develop machine learning (ML) models to predict pathologic complete response (PCR binary classification) and 5-year relapse-free survival status (RFS binary classification). Method: The proposed framework includes tumour segmentation, image registration, feature extraction, and predictive modelling. Using the image registration method, MRI image features can be extracted and compared from the original tumour site at different time points, thereby monitoring the intratumor changes during the NACT process. Four feature extractors, including one radiomics and three deep learning-based (MedicalNet, Segformer3D, SAM-Med3D), were implemented and compared. In combination with three feature selection methods and four ML models, predictive models are built and compared. Results: The proposed image registration-based feature extraction consistently improves the predictive models. In the PCR and RFS classification tasks, the logistic regression model trained on radiomic features performed the best, with an AUC of 0.88 and classification accuracy of 0.85 for PCR classification, and an AUC of 0.78 and classification accuracy of 0.72 for RFS classification. Conclusions: It is evident that the image registration method has significantly improved performance in longitudinal feature learning in predicting PCR and RFS. 
The radiomics feature extractor is more effective than the pre-trained deep learning feature extractors, with higher performance and better interpretability. - oai:arXiv.org:2512.17759v1 - eess.IV - cs.CV - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Rahul Ravi, Ruizhe Li, Tarek Abdelfatah, Stephen Chan, Xin Chen - - - MedNeXt-v2: Scaling 3D ConvNeXts for Large-Scale Supervised Representation Learning in Medical Image Segmentation - https://arxiv.org/abs/2512.17774 - arXiv:2512.17774v1 Announce Type: cross -Abstract: Large-scale supervised pretraining is rapidly reshaping 3D medical image segmentation. However, existing efforts focus primarily on increasing dataset size and overlook the question of whether the backbone network is an effective representation learner at scale. In this work, we address this gap by revisiting ConvNeXt-based architectures for volumetric segmentation and introducing MedNeXt-v2, a compound-scaled 3D ConvNeXt that leverages improved micro-architecture and data scaling to deliver state-of-the-art performance. First, we show that routinely used backbones in large-scale pretraining pipelines are often suboptimal. Subsequently, we use comprehensive backbone benchmarking prior to scaling and demonstrate that stronger from scratch performance reliably predicts stronger downstream performance after pretraining. Guided by these findings, we incorporate a 3D Global Response Normalization module and use depth, width, and context scaling to improve our architecture for effective representation learning. We pretrain MedNeXt-v2 on 18k CT volumes and demonstrate state-of-the-art performance when fine-tuning across six challenging CT and MR benchmarks (144 structures), showing consistent gains over seven publicly released pretrained models. 
Beyond improvements, our benchmarking of these models also reveals that stronger backbones yield better results on similar data, representation scaling disproportionately benefits pathological segmentation, and that modality-specific pretraining offers negligible benefit once full finetuning is applied. In conclusion, our results establish MedNeXt-v2 as a strong backbone for large-scale supervised representation learning in 3D Medical Image Segmentation. Our code and pretrained models are made available with the official nnUNet repository at: https://www.github.com/MIC-DKFZ/nnUNet - oai:arXiv.org:2512.17774v1 - eess.IV + An Empirical Study of Developer-Provided Context for AI Coding Assistants in Open-Source Projects + https://arxiv.org/abs/2512.18925 + arXiv:2512.18925v1 Announce Type: new +Abstract: While Large Language Models (LLMs) have demonstrated remarkable capabilities, research shows that their effectiveness depends not only on explicit prompts but also on the broader context provided. This requirement is especially pronounced in software engineering, where the goals, architecture, and collaborative conventions of an existing project play critical roles in response quality. To support this, many AI coding assistants have introduced ways for developers to author persistent, machine-readable directives that encode a project's unique constraints. Although this practice is growing, the content of these directives remains unstudied. + This paper presents a large-scale empirical study to characterize this emerging form of developer-provided context. Through a qualitative analysis of 401 open-source repositories containing cursor rules, we developed a comprehensive taxonomy of project context that developers consider essential, organized into five high-level themes: Conventions, Guidelines, Project Information, LLM Directives, and Examples. 
Our study also explores how this context varies across different project types and programming languages, offering implications for the next generation of context-aware AI developer tools. + oai:arXiv.org:2512.18925v1 + cs.SE cs.AI - cs.CV - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - cross + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Saikat Roy, Yannick Kirchhoff, Constantin Ulrich, Maximillian Rokuss, Tassilo Wald, Fabian Isensee, Klaus Maier-Hein + Shaokang Jiang, Daye Nam - Domain-Aware Quantum Circuit for QML - https://arxiv.org/abs/2512.17800 - arXiv:2512.17800v1 Announce Type: cross -Abstract: Designing parameterized quantum circuits (PQCs) that are expressive, trainable, and robust to hardware noise is a central challenge for quantum machine learning (QML) on noisy intermediate-scale quantum (NISQ) devices. We present a Domain-Aware Quantum Circuit (DAQC) that leverages image priors to guide locality-preserving encoding and entanglement via non-overlapping DCT-style zigzag windows. The design employs interleaved encode-entangle-train cycles, where entanglement is applied among qubits hosting neighboring pixels, aligned to device connectivity. This staged, locality-preserving information flow expands the effective receptive field without deep global mixing, enabling efficient use of limited depth and qubits. The design concentrates representational capacity on short-range correlations, reduces long-range two-qubit operations, and encourages stable optimization, thereby mitigating depth-induced and globally entangled barren-plateau effects. We evaluate DAQC on MNIST, FashionMNIST, and PneumoniaMNIST datasets. On quantum hardware, DAQC achieves performance competitive with strong classical baselines (e.g., ResNet-18/50, DenseNet-121, EfficientNet-B0) and substantially outperforming Quantum Circuit Search (QCS) baselines. 
To the best of our knowledge, DAQC, which uses a quantum feature extractor with only a linear classical readout (no deep classical backbone), currently achieves the best reported performance on real quantum hardware for QML-based image classification tasks. Code and pretrained models are available at: https://github.com/gurinder-hub/DAQC. - oai:arXiv.org:2512.17800v1 - quant-ph + The Ensemble Schr{\"o}dinger Bridge filter for Nonlinear Data Assimilation + https://arxiv.org/abs/2512.18928 + arXiv:2512.18928v1 Announce Type: new +Abstract: This work puts forward a novel nonlinear optimal filter namely the Ensemble Schr{\"o}dinger Bridge nonlinear filter. The proposed filter finds marriage of the standard prediction procedure and the diffusion generative modeling for the analysis procedure to realize one filtering step. The designed approach finds no structural model error, and it is derivative free, training free and highly parallizable. Experimental results show that the designed algorithm performs well given highly nonlinear dynamics in (mildly) high dimension up to 40 or above under a chaotic environment. It also shows better performance than classical methods such as the ensemble Kalman filter and the Particle filter in numerous tests given different level of nonlinearity. Future work will focus on extending the proposed approach to practical meteorological applications and establishing a rigorous convergence analysis. + oai:arXiv.org:2512.18928v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Gurinder Singh, Thaddeus Pellegrini, Kenneth M. 
Merz, Jr + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Feng Bao, Hui Sun - Towards Sharp Minimax Risk Bounds for Operator Learning - https://arxiv.org/abs/2512.17805 - arXiv:2512.17805v1 Announce Type: cross -Abstract: We develop a minimax theory for operator learning, where the goal is to estimate an unknown operator between separable Hilbert spaces from finitely many noisy input-output samples. For uniformly bounded Lipschitz operators, we prove information-theoretic lower bounds together with matching or near-matching upper bounds, covering both fixed and random designs under Hilbert-valued Gaussian noise and Gaussian white noise errors. The rates are controlled by the spectrum of the covariance operator of the measure that defines the error metric. Our setup is very general and allows for measures with unbounded support. A key implication is a curse of sample complexity which shows that the minimax risk for generic Lipschitz operators cannot decay at any algebraic rate in the sample size. We obtain essentially sharp characterizations when the covariance spectrum decays exponentially and provide general upper and lower bounds in slower-decay regimes. - oai:arXiv.org:2512.17805v1 - math.ST - cs.NA - math.NA - stat.ML - stat.TH - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Ben Adcock, Gregor Maier, Rahul Parhi + LouvreSAE: Sparse Autoencoders for Interpretable and Controllable Style Transfer + https://arxiv.org/abs/2512.18930 + arXiv:2512.18930v1 Announce Type: new +Abstract: Artistic style transfer in generative models remains a significant challenge, as existing methods often introduce style only via model fine-tuning, additional adapters, or prompt engineering, all of which can be computationally expensive and may still entangle style with subject matter. 
In this paper, we introduce a training- and inference-light, interpretable method for representing and transferring artistic style. Our approach leverages an art-specific Sparse Autoencoder (SAE) on top of latent embeddings of generative image models. Trained on artistic data, our SAE learns an emergent, largely disentangled set of stylistic and compositional concepts, corresponding to style-related elements pertaining to brushwork, texture, and color palette, as well as semantic and structural concepts. We call it LouvreSAE and use it to construct style profiles: compact, decomposable steering vectors that enable style transfer without any model updates or optimization. Unlike prior concept-based style transfer methods, our method requires no fine-tuning, no LoRA training, and no additional inference passes, enabling direct steering of artistic styles from only a few reference images. We validate our method on ArtBench10, achieving or surpassing existing methods on style evaluations (VGG Style Loss and CLIP Score Style) while being 1.7-20x faster and, critically, interpretable.
+                oai:arXiv.org:2512.18930v1
+                cs.CV
+                cs.AI
+                cs.GR
+                Tue, 23 Dec 2025 00:00:00 -0500
+                new
                http://creativecommons.org/licenses/by-nc-sa/4.0/
                Raina Panda, Daniel Fein, Arpita Singhal, Mark Fiore, Maneesh Agrawala, Matyas Bohacek


-                Learning vertical coordinates via automatic differentiation of a dynamical core
-                https://arxiv.org/abs/2512.17877
-                arXiv:2512.17877v1 Announce Type: cross
-Abstract: Terrain-following coordinates in atmospheric models often imprint their grid structure onto the solution, particularly over steep topography, where distorted coordinate layers can generate spurious horizontal and vertical motion. Standard formulations, such as hybrid or SLEVE coordinates, mitigate these errors by using analytic decay functions controlled by heuristic scale parameters that are typically tuned by hand and fixed a priori. 
In this work, we propose a framework to define a parametric vertical coordinate system as a learnable component within a differentiable dynamical core. We develop an end-to-end differentiable numerical solver for the two-dimensional non-hydrostatic Euler equations on an Arakawa C-grid, and introduce a NEUral Vertical Enhancement (NEUVE) terrain-following coordinate based on an integral transformed neural network that guarantees monotonicity. A key feature of our approach is the use of automatic differentiation to compute exact geometric metric terms, thereby eliminating truncation errors associated with finite-difference coordinate derivatives. By coupling simulation errors through the time integration to the parameterization, our formulation finds a grid structure optimized for both the underlying physics and numerics. Using several standard tests, we demonstrate that these learned coordinates reduce the mean squared error by a factor of 1.4 to 2 in non-linear statistical benchmarks, and eliminate spurious vertical velocity striations over steep topography. - oai:arXiv.org:2512.17877v1 - physics.ao-ph + DPSR: Differentially Private Sparse Reconstruction via Multi-Stage Denoising for Recommender Systems + https://arxiv.org/abs/2512.18932 + arXiv:2512.18932v1 Announce Type: new +Abstract: Differential privacy (DP) has emerged as the gold standard for protecting user data in recommender systems, but existing privacy-preserving mechanisms face a fundamental challenge: the privacy-utility tradeoff inevitably degrades recommendation quality as privacy budgets tighten. We introduce DPSR (Differentially Private Sparse Reconstruction), a novel three-stage denoising framework that fundamentally addresses this limitation by exploiting the inherent structure of rating matrices -- sparsity, low-rank properties, and collaborative patterns. 
+ DPSR consists of three synergistic stages: (1) \textit{information-theoretic noise calibration} that adaptively reduces noise for high-information ratings, (2) \textit{collaborative filtering-based denoising} that leverages item-item similarities to remove privacy noise, and (3) \textit{low-rank matrix completion} that exploits latent structure for signal recovery. Critically, all denoising operations occur \textit{after} noise injection, preserving differential privacy through the post-processing immunity theorem while removing both privacy-induced and inherent data noise. + Through extensive experiments on synthetic datasets with controlled ground truth, we demonstrate that DPSR achieves 5.57\% to 9.23\% RMSE improvement over state-of-the-art Laplace and Gaussian mechanisms across privacy budgets ranging from $\varepsilon=0.1$ to $\varepsilon=10.0$ (all improvements statistically significant with $p < 0.05$, most $p < 0.001$). Remarkably, at $\varepsilon=1.0$, DPSR achieves RMSE of 0.9823, \textit{outperforming even the non-private baseline} (1.0983), demonstrating that our denoising pipeline acts as an effective regularizer that removes data noise in addition to privacy noise. + oai:arXiv.org:2512.18932v1 cs.LG - physics.flu-dyn - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Tim Whittaker, Seth Taylor, Elsa Cardoso-Bihlo, Alejandro Di Luca, Alex Bihlo + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Sarwan Ali - Asymptotic behaviour of galactic small-scale dynamos at modest magnetic Prandtl number - https://arxiv.org/abs/2512.17885 - arXiv:2512.17885v1 Announce Type: cross -Abstract: Magnetic fields are critical at many scales to galactic dynamics and structure, including multiphase pressure balance, dust processing, and star formation. Dynamo action determines their dynamical structure and strength. 
Simulations of combined large- and small-scale dynamos have successfully developed mean fields with strength and topology consistent with observations but with turbulent fields much weaker than observed, while simulations of small-scale dynamos with parameters relevant to the interstellar medium yield turbulent fields an order of magnitude below the values observed or expected theoretically. We use the Pencil Code accelerated on GPUs with Astaroth to perform high-resolution simulations of a supernova-driven galactic dynamo including heating and cooling in a periodic domain. Our models show that the strength of the turbulent field produced by the small-scale dynamo approaches an asymptote at only modest magnetic Prandtl numbers. This allows us to use these models to suggest the essential characteristics of this constituent of the magnetic field for inclusion in global galactic models. The asymptotic limit occurs already at a magnetic Prandtl number of only a few hundred, many orders of magnitude below physical values in the interstellar medium and consistent with previous findings for isothermal compressible flows.
-                oai:arXiv.org:2512.17885v1
-                astro-ph.GA
-                cs.DC
-                Mon, 22 Dec 2025 00:00:00 -0500
-                cross
                http://creativecommons.org/licenses/by/4.0/
                Frederick A. Gent, Mordecai-Mark Mac Low, Maarit J. Korpi-Lagg, Touko Puro, Matthias Reinhardt


+                Point What You Mean: Visually Grounded Instruction Policy
+                https://arxiv.org/abs/2512.18933
+                arXiv:2512.18933v1 Announce Type: new
+Abstract: Vision-Language-Action (VLA) models align vision and language with embodied control, but their object referring ability remains limited when relying solely on text prompts, especially in cluttered or out-of-distribution (OOD) scenes. In this study, we introduce Point-VLA, a plug-and-play policy that augments language instructions with explicit visual cues (e.g., bounding boxes) to resolve referential ambiguity and enable precise object-level grounding. 
To efficiently scale visually grounded datasets, we further develop an automatic data annotation pipeline requiring minimal human effort. We evaluate Point-VLA on diverse real-world referring tasks and observe consistently stronger performance than text-only instruction VLAs, particularly in cluttered or unseen-object scenarios, with robust generalization. These results demonstrate that Point-VLA effectively resolves object referring ambiguity through pixel-level visual grounding, achieving more generalizable embodied control. + oai:arXiv.org:2512.18933v1 + cs.CV + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Hang Yu, Juntu Zhao, Yufeng Liu, Kaiyu Li, Cheng Ma, Di Zhang, Yingdong Hu, Guang Chen, Junyuan Xie, Junliang Guo, Junqiao Zhao, Yang Gao - Exploring the Effect of Basis Rotation on NQS Performance - https://arxiv.org/abs/2512.17893 - arXiv:2512.17893v1 Announce Type: cross -Abstract: Neural Quantum States (NQS) use neural networks to represent wavefunctions of quantum many-body systems, but their performance depends on the choice of basis, yet the underlying mechanism remains poorly understood. We use a fully solvable one-dimensional Ising model to show that local basis rotations leave the loss landscape unchanged while relocating the exact wavefunction in parameter space, effectively increasing its geometric distance from typical initializations. By sweeping a rotation angle, we compute quantum Fisher information and Fubini-Study distances to quantify how the rotated wavefunction moves within the loss landscape. Shallow architectures (with focus on Restricted Boltzmann Machines (RBMs)) trained with quantum natural gradient are more likely to fall into saddle-point regions depending on the rotation angle: they achieve low energy error but fail to reproduce correct coefficient distributions. 
In the ferromagnetic case, near-degenerate eigenstates create high-curvature barriers that trap optimization at intermediate fidelities. We introduce a framework based on an analytically solvable rotated Ising model to investigate how relocating the target wavefunction within a fixed loss landscape exposes information-geometric barriers, such as saddle points and high-curvature regions, that hinder shallow NQS optimization, underscoring the need for landscape-aware model design in variational training.
-                oai:arXiv.org:2512.17893v1
-                quant-ph
-                cs.AI
-                Mon, 22 Dec 2025 00:00:00 -0500
-                cross
                http://creativecommons.org/licenses/by/4.0/
                Sven Benjamin Ko\v{z}i\'c, Vinko Zlati\'c, Fabio Franchini, Salvatore Marco Giampaolo


+                When Less is More: 8-bit Quantization Improves Continual Learning in Large Language Models
+                https://arxiv.org/abs/2512.18934
+                arXiv:2512.18934v1 Announce Type: new
+Abstract: Catastrophic forgetting poses a fundamental challenge in continual learning, particularly when models are quantized for deployment efficiency. We systematically investigate the interplay between quantization precision (FP16, INT8, INT4) and replay buffer strategies in large language models, revealing unexpected dynamics. While FP16 achieves superior initial task performance (74.44% on NLU), we observe a striking inversion on subsequent tasks: quantized models outperform FP16 by 8-15% on final task forward accuracy, with INT4 achieving nearly double FP16's performance on Code generation (40% vs 20%). Critically, even minimal replay buffers (0.1%) dramatically improve retention - increasing NLU retention after Math training from 45% to 65% across all precision levels - with INT8 consistently achieving the optimal balance between learning plasticity and knowledge retention. We hypothesize that quantization-induced noise acts as implicit regularization, preventing the overfitting to new task gradients that plagues high-precision models. These findings challenge the conventional wisdom that higher precision is always preferable, suggesting instead that INT8 quantization offers both computational efficiency and superior continual learning dynamics. 
Our results provide practical guidelines for deploying compressed models in continual learning scenarios: small replay buffers (1-2%) suffice for NLU tasks, while Math and Code benefit from moderate buffers (5-10%), with quantized models requiring less replay than FP16 to achieve comparable retention. Code is available at https://github.com/Festyve/LessIsMore. + oai:arXiv.org:2512.18934v1 + cs.LG cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Sven Benjamin Ko\v{z}i\'c, Vinko Zlati\'c, Fabio Franchini, Salvatore Marco Giampaolo - - - Plane Strong Connectivity Augmentation - https://arxiv.org/abs/2512.17904 - arXiv:2512.17904v1 Announce Type: cross -Abstract: We investigate the problem of strong connectivity augmentation within plane oriented graphs. - We show that deciding whether a plane oriented graph $D$ can be augmented with (any number of) arcs $X$ such that $D+X$ is strongly connected, but still plane and oriented, is NP-hard. - This question becomes trivial within plane digraphs, like most connectivity augmentation problems without a budget constraint. - The budgeted version, Plane Strong Connectivity Augmentation (PSCA) considers a plane oriented graph $D$ along with some integer $k$, and asks for an $X$ of size at most $k$ ensuring that $D+X$ is strongly connected, while remaining plane and oriented. - Our main result is a fixed-parameter tractable algorithm for PSCA, running in time $2^{O(k)} n^{O(1)}$. - The cornerstone of our procedure is a structural result showing that, for any fixed $k$, each face admits a bounded number of partial solutions "dominating" all others. - Then, our algorithm for PSCA combines face-wise branching with a Monte-Carlo reduction to the polynomial Minimum Dijoin problem, which we derandomize. - To the best of our knowledge, this is the first FPT algorithm for a (hard) connectivity augmentation problem constrained by planarity. 
- oai:arXiv.org:2512.17904v1 - math.CO - cs.CG - cs.DM - Mon, 22 Dec 2025 00:00:00 -0500 - cross + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - St\'ephane Bessy, Daniel Gon\c{c}alves, Amadeus Reinald, Dimitrios M. Thilikos + Michael S. Zhang, Rishi A. Ruia, Arnav Kewalram, Saathvik Dharmapuram, Utkarsh Sharma, Kevin Zhu - Notes on Randomized Algorithms - https://arxiv.org/abs/2003.01902 - arXiv:2003.01902v4 Announce Type: replace -Abstract: Lecture notes for the Yale Computer Science course CPSC 4690/5690 Randomized Algorithms. Suitable for use as a supplementary text for an introductory graduate or advanced undergraduate course on randomized algorithms. Discusses tools from probability theory, including random variables and expectations, union bound arguments, concentration bounds, applications of martingales and Markov chains, and the Lov\'asz Local Lemma. Algorithmic topics include analysis of classic randomized algorithms such as Quicksort and Hoare's FIND, randomized tree data structures, hashing, Markov chain Monte Carlo sampling, randomized approximate counting, derandomization, quantum computing, and some examples of randomized distributed algorithms. - oai:arXiv.org:2003.01902v4 - cs.DS - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-sa/4.0/ - James Aspnes + A Framework for Deploying Learning-based Quadruped Loco-Manipulation + https://arxiv.org/abs/2512.18938 + arXiv:2512.18938v1 Announce Type: new +Abstract: Quadruped mobile manipulators offer strong potential for agile loco-manipulation but remain difficult to control and transfer reliably from simulation to reality. Reinforcement learning (RL) shows promise for whole-body control, yet most frameworks are proprietary and hard to reproduce on real hardware. We present an open pipeline for training, benchmarking, and deploying RL-based controllers on the Unitree B1 quadruped with a Z1 arm. 
The framework unifies sim-to-sim and sim-to-real transfer through ROS, re-implementing a policy trained in Isaac Gym, extending it to MuJoCo via a hardware abstraction layer, and deploying the same controller on physical hardware. Sim-to-sim experiments expose discrepancies between Isaac Gym and MuJoCo contact models that influence policy behavior, while real-world teleoperated object-picking trials show that coordinated whole-body control extends reach and improves manipulation over floating-base baselines. The pipeline provides a transparent, reproducible foundation for developing and analyzing RL-based loco-manipulation controllers and will be released open source to support future research.
      oai:arXiv.org:2512.18938v1
      cs.RO
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Yadong Liu, Jianwei Liu, He Liang, Dimitrios Kanoulas
    
    
      openCFS-Data: Data Pre-Post-Processing Tool for openCFS
      https://arxiv.org/abs/2302.03637
      arXiv:2302.03637v3 Announce Type: replace 
Abstract: Many numerical simulation tools have been developed and are on the market, but there is still a strong need for appropriate tools capable of simulating multi-field problems, especially in aeroacoustics. Therefore, openCFS provides an open-source framework for implementing partial differential equations using the finite element method. Since 2000, the software has been developed continuously. The result is openCFS (before 2020 known as CFS++, Coupled Field Simulations written in C++). In this paper, we present for the first time CFS-Data, the open-source pre-post-processing part of openCFS with a focus on the aeroacoustic source computation (called filters). &#13;
- oai:arXiv.org:2302.03637v3 + An asymptotically compatible unfitted finite element methods for nonlocal elliptic Interfaces: local limits and sharp error estimates + https://arxiv.org/abs/2512.18939 + arXiv:2512.18939v1 Announce Type: new +Abstract: This paper presents the development and analysis of an asymptotically compatible (AC) unfitted finite element method for one-dimensional nonlocal elliptic interface problems. The proposed method achieves optimal error estimates through three principal contributions: (i) an extended maximum principle, coupled with an asymptotic consistency analysis of the flux operator, which establishes second-order convergence of nonlocal solutions to their local counterparts in the maximum norm; (ii) a Nitsche-type formulation that directly incorporates nonlocal jump conditions into the weak form, enabling high accuracy without body-fitted meshes; and (iii) a rigorous proof of optimal convergence rates in both the energy and L2 norms via the nonlocal maximum principle, flux consistency, and a newly derived nonlocal Poincare inequality. Numerical experiments confirm the theoretical findings and demonstrate the robustness and efficiency of the proposed approach, thereby providing a foundation for extensions to higher dimensions. + oai:arXiv.org:2512.18939v1 math.NA cs.NA - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Stefan Schoder, Klaus Roppert - - - Online Hitting of Unit Balls and Hypercubes in $\mathbb{R}^d$ using Points from $\mathbb{Z}^d$ - https://arxiv.org/abs/2303.11779 - arXiv:2303.11779v3 Announce Type: replace -Abstract: We consider the online hitting set problem for the range space $\Sigma=(\cal X,\cal R)$, where the point set $\cal X$ is known beforehand, but the set $\cal R$ of geometric objects is not known in advance. Here, objects from $\cal R$ arrive one by one. 
The objective of the problem is to maintain a hitting set of the minimum cardinality by taking irrevocable decisions. In this paper, we consider the problem when objects are unit balls or unit hypercubes in $\mathbb{R}^d$, and the points from $\mathbb{Z}^d$ are used for hitting them. First, we address the case when objects are unit intervals in $\mathbb{R}$ and present an optimal deterministic algorithm with a competitive ratio of~$2$. Then, we consider the case when objects are unit balls. For hitting unit balls in $\mathbb{R}^2$ and $\mathbb{R}^3$, we present $4$- and $14$-competitive deterministic algorithms, respectively. On the other hand, for hitting unit balls in $\mathbb{R}^d$, we propose an $O(d^4)$-competitive deterministic algorithm, and we demonstrate that, for $d<4$, the competitive ratio of any deterministic algorithm is at least $d+1$. In the end, we explore the case where objects are unit hypercubes. For hitting unit hypercubes in $\mathbb{R}^2$ and $\mathbb{R}^3$, we obtain $4$- and $8$-competitive deterministic algorithms, respectively. For hitting unit hypercubes in $\mathbb{R}^d$ ($d\geq 3$), we present an $O(d^2)$-competitive randomized algorithm. Furthermore, we prove that the competitive ratio of any deterministic algorithm for the problem is at least $d+1$ for any $d\in\mathbb{N}$. &#13;
- oai:arXiv.org:2303.11779v3 - cs.CG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1016/j.tcs.2024.114452 - Theoretical Computer Science 992C (2024) 114452 - Minati De, Satyam Singh + Haixia Dong, Ziqing Xie, Jiwei Zhang - Foundations for an Abstract Proof Theory in the Context of Horn Rules - https://arxiv.org/abs/2304.05697 - arXiv:2304.05697v2 Announce Type: replace -Abstract: We introduce a novel, logic-independent framework for the study of sequent-style proof systems, which covers a number of proof-theoretic formalisms and concrete proof systems that appear in the literature. In particular, we introduce a generalized form of sequents, dubbed 'g-sequents,' which are taken to be binary graphs of typical, Gentzen-style sequents. We then define a variety of 'inference rule types' as sets of operations that act over such objects, and define 'abstract (sequent) calculi' as pairs consisting of a set of g-sequents together with a finite set of operations. Our approach permits an analysis of how certain inference rule types interact in a general setting, demonstrating under what conditions rules of a specific type can be permuted with or simulated by others, and being applicable to any sequent-style proof system that fits within our framework. We then leverage our permutation and simulation results to establish generic calculus and proof transformation algorithms, which show that every abstract calculus can be effectively transformed into a lattice of polynomially equivalent abstract calculi. We determine the complexity of computing this lattice and compute the relative sizes of proofs and sequents within distinct calculi of a lattice. We recognize that top and bottom elements in lattices correspond to many known deep-inference nested sequent systems and labeled sequent systems (respectively) for logics characterized by Horn properties. 
oai:arXiv.org:2304.05697v2
      cs.LO
      cs.DM
      cs.DS
      math.LO
      Mon, 22 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by/4.0/
      Tim S. Lyon, Piotr Ostropolski-Nalewaja
    
    
      FASTRIC: Prompt Specification Language for Verifiable LLM Interactions
      https://arxiv.org/abs/2512.18940
      arXiv:2512.18940v1 Announce Type: new 
Abstract: Large Language Models (LLMs) execute complex multi-turn interaction protocols but lack formal specifications to verify execution against designer intent. We introduce FASTRIC, a Prompt Specification Language that makes implicit Finite State Machines (FSMs) explicit in natural language prompts, enabling conformance verification through execution trace analysis. The LLM serves as an intelligent execution agent, interpreting designer-encoded FSMs to execute specified behavioral roles. Unlike symbolic specification languages requiring parsers and compilers, FASTRIC leverages LLMs as unified infrastructure: simultaneously parser, interpreter, runtime environment, and development assistant. FASTRIC guides designers to articulate seven FSM elements (Final States, Agents, States, Triggers, Roles, Initial State, Constraints) structuring multi-turn interactions. Specification formality, ranging from implicit descriptions that frontier models infer to explicit step-by-step instructions for weaker models, serves as a design parameter. We introduce procedural conformance as a verification metric measuring execution adherence to FSM specifications. Testing a 3-state kindergarten tutoring FSM across four formality levels and three model scales (14.7B, 685B, 1T+ parameters) reveals that optimal specification formality is a function of model capacity. DeepSeek-V3.2 (685B) achieves perfect conformance (1.00) at L2-L4; ChatGPT-5 (~1T) peaks at L3 (0.90) before collapsing at L4 (0.39); Phi4 (14.7B) shows no stable optimum with high variance (SD=0.16-0.36). &#13;
These findings reveal model-specific formality ranges, "Goldilocks zones," where specifications provide sufficient structure without over-constraint. This establishes Prompt Specification Engineering for creating verifiable interaction protocols, transforming multi-turn interaction design from heuristic art into systematic engineering with measurable procedural guarantees.
      oai:arXiv.org:2512.18940v1
      cs.CL
      cs.SE
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Wen-Long Jin
    
    
      Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization and Detection
      https://arxiv.org/abs/2306.09158
      arXiv:2306.09158v2 Announce Type: replace 
Abstract: Modern machine learning models deployed in the wild can encounter both covariate and semantic shifts, giving rise to the problems of out-of-distribution (OOD) generalization and OOD detection, respectively. While both problems have received significant research attention lately, they have been pursued independently. This may not be surprising, since the two tasks have seemingly conflicting goals. This paper provides a new unified approach that is capable of simultaneously generalizing to covariate shifts while robustly detecting semantic shifts. We propose a margin-based learning framework that exploits freely available unlabeled data in the wild that captures the environmental test-time OOD distributions under both covariate and semantic shifts. We show both empirically and theoretically that the proposed margin constraint is the key to achieving both OOD generalization and detection. Extensive experiments show the superiority of our framework, outperforming competitive baselines that specialize in either OOD generalization or OOD detection. Code is publicly available at https://github.com/deeplearning-wisc/scone. &#13;
oai:arXiv.org:2306.09158v2
      cs.LG
      Mon, 22 Dec 2025 00:00:00 -0500
      replace
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert Nowak, Yixuan Li
    
    
      Clustering-based Transfer Learning for Dynamic Multimodal MultiObjective Evolutionary Algorithm
      https://arxiv.org/abs/2512.18947
      arXiv:2512.18947v1 Announce Type: new 
Abstract: Dynamic multimodal multiobjective optimization presents the dual challenge of simultaneously tracking multiple equivalent Pareto optimal sets and maintaining population diversity in time-varying environments. However, existing dynamic multiobjective evolutionary algorithms often neglect solution modality, whereas static multimodal multiobjective evolutionary algorithms lack adaptability to dynamic changes. To address the above challenge, this paper makes two primary contributions. First, we introduce a new benchmark suite of dynamic multimodal multiobjective test functions constructed by fusing the properties of both dynamic and multimodal optimization to establish a rigorous evaluation platform. Second, we propose a novel algorithm centered on a Clustering-based Autoencoder prediction dynamic response mechanism, which utilizes an autoencoder model to process matched clusters to generate a highly diverse initial population. Furthermore, to balance the algorithm's convergence and diversity, we integrate an adaptive niching strategy into the static optimizer. Empirical analysis on 12 instances of dynamic multimodal multiobjective test functions reveals that, compared with several state-of-the-art dynamic multiobjective evolutionary algorithms and multimodal multiobjective evolutionary algorithms, our algorithm not only preserves population diversity more effectively in the decision space but also achieves superior convergence in the objective space. &#13;
+ oai:arXiv.org:2512.18947v1 + cs.AI + cs.NE + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert Nowak, Yixuan Li + Li Yan, Bolun Liu, Chao Li, Jing Liang, Kunjie Yu, Caitong Yue, Xuzhao Chai, Boyang Qu - Nonlinear moving horizon estimation for robust state and parameter estimation -- extended version - https://arxiv.org/abs/2312.13175 - arXiv:2312.13175v3 Announce Type: replace -Abstract: We propose a moving horizon estimation scheme to estimate the states and the unknown constant parameters of general nonlinear uncertain discrete-time systems. The proposed framework and analysis explicitly do not involve the a priori verification of a particular excitation condition for the parameters. Instead, we use online information about the actual excitation of the parameters at any time during operation and ensure that the regularization term in the cost function is always automatically selected appropriately. This ensures that the state and parameter estimation error is bounded for all times, even if the parameters are never (or only rarely) excited during operation. Robust exponential stability of the state and parameter estimation error emerges under an additional uniform condition on the maximum duration of insufficient excitation. The theoretical results are illustrated by a numerical example. - oai:arXiv.org:2312.13175v3 - eess.SY - cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 - replace + A TraceFEM $C^0$ Interior Penalty Method for the Surface Biharmonic Equation + https://arxiv.org/abs/2512.18949 + arXiv:2512.18949v1 Announce Type: new +Abstract: We construct and analyze a TraceFEM discretization for the surface biharmonic problem. The method utilizes standard quadratic Lagrange finite element spaces defined on a three-dimensional background mesh and a symmetric $C^0$ interior penalty formulation posed on a second-order polyhedral approximation of the surface. 
Stability is achieved through a combination of surface edge penalties and bulk-facet penalization of gradient and Hessian jumps. We prove optimal first-order convergence in a discrete $H^2$ norm and quadratic convergence in the $L^2$ norm.
      oai:arXiv.org:2512.18949v1
      math.NA
      cs.NA
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Michael Neilan, Hongzhi Wan
    
    
      Sparse, Efficient and Explainable Data Attribution with DualXDA
      https://arxiv.org/abs/2402.12118
      arXiv:2402.12118v3 Announce Type: replace 
Abstract: Data Attribution (DA) is an emerging approach in the field of eXplainable Artificial Intelligence (XAI), aiming to identify influential training datapoints that determine model outputs. It seeks to provide transparency about the model and individual predictions, e.g., for model debugging or identifying data-related causes of suboptimal performance. However, existing DA approaches suffer from prohibitively high computational costs and memory demands when applied to even medium-scale datasets and models, forcing practitioners to resort to approximations that may fail to capture the true inference process of the underlying model. Additionally, current attribution methods exhibit low sparsity, resulting in non-negligible attribution scores across a high number of training examples, hindering the discovery of decisive patterns in the data. In this work, we introduce DualXDA, a framework for sparse, efficient and explainable DA, composed of two interlinked approaches, Dual Data Attribution (DualDA) and eXplainable Data Attribution (XDA). With DualDA, we propose a novel approach for efficient and effective DA, leveraging Support Vector Machine theory to provide fast and naturally sparse data attributions for AI predictions. &#13;
In extensive quantitative analyses, we demonstrate that DualDA achieves high attribution quality and excels at solving a series of evaluated downstream tasks, while improving explanation time by a factor of up to 4,100,000x compared to the original Influence Functions method and up to 11,000x compared to the method's most efficient approximation in the literature to date. We further introduce XDA, a method that enhances Data Attribution with capabilities from feature attribution methods, explaining why training samples are relevant to the prediction of a test sample in terms of impactful features; we showcase and verify this qualitatively in detail.
      oai:arXiv.org:2402.12118v3
      cs.LG
      cs.AI
      Mon, 22 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by-sa/4.0/
      Galip \"Umit Yolcu, Moritz Weckbecker, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
    
    
      Learning Hierarchical Procedural Memory for LLM Agents through Bayesian Selection and Contrastive Refinement
      https://arxiv.org/abs/2512.18950
      arXiv:2512.18950v1 Announce Type: new 
Abstract: We present MACLA, a framework that decouples reasoning from learning by maintaining a frozen large language model while performing all adaptation in an external hierarchical procedural memory. MACLA extracts reusable procedures from trajectories, tracks reliability via Bayesian posteriors, selects actions through expected-utility scoring, and refines procedures by contrasting successes and failures. Across four benchmarks (ALFWorld, WebShop, TravelPlanner, InterCodeSQL), MACLA achieves 78.1 percent average performance, outperforming all baselines. On ALFWorld unseen tasks, MACLA reaches 90.3 percent with 3.1 percent positive generalization. The system constructs memory in 56 seconds, 2800 times faster than the state-of-the-art LLM parameter-training baseline, compressing 2851 trajectories into 187 procedures. Experimental results demonstrate that structured external memory with Bayesian selection and contrastive refinement enables sample-efficient, interpretable, and continually improving agents without LLM parameter updates. &#13;
oai:arXiv.org:2512.18950v1
      cs.LG
      cs.AI
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Saman Forouzandeh, Wei Peng, Parham Moradi, Xinghuo Yu, Mahdi Jalili
    
    
      Learning Through Little Eyes: Attribute Discrimination Beyond Objects
      https://arxiv.org/abs/2512.18951
      arXiv:2512.18951v1 Announce Type: new 
Abstract: Infants learn to recognize not only object categories but also fine-grained attributes such as color, size, and texture within their first two years of life. Prior work explores Child's View for Contrastive Learning (CVCL), a CLIP-style model trained on infant egocentric video as a computational model of early infant learning, but it focuses only on class-level recognition. This leaves it unclear whether infant-scale learning also supports attribute discrimination. To address this, we introduce a benchmark that systematically varies color, size, and texture, allowing controlled tests of within-class attribute recognition. Comparing CVCL with CLIP shows clear differences. CVCL is better at size discrimination, while CLIP achieves higher accuracy on color discrimination. Both models represent texture in image embeddings but fail to ground texture linguistically, suggesting a gap between visual and language spaces.
      oai:arXiv.org:2512.18951v1
      cs.LG
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Patrick Batsell, Tsutsui Satoshi, Bihan Wen
    
    
      LN3DIFF++: Scalable Latent Neural Fields Diffusion for Speedy 3D Generation
      https://arxiv.org/abs/2403.12019
      arXiv:2403.12019v3 Announce Type: replace 
Abstract: The field of neural rendering has witnessed significant progress with advancements in generative models and differentiable rendering techniques. &#13;
Though 2D diffusion has achieved success, a unified 3D diffusion pipeline remains unsettled. This paper introduces a novel framework called LN3Diff++ to address this gap and enable fast, high-quality, and generic conditional 3D generation. Our approach harnesses a 3D-aware architecture and variational autoencoder (VAE) to encode the input image into a structured, compact 3D latent space. The latent is decoded by a transformer-based decoder into a high-capacity 3D neural field. Through training a diffusion model on this 3D-aware latent space, our method achieves state-of-the-art performance on ShapeNet for 3D generation and demonstrates superior performance in monocular 3D reconstruction and conditional 3D generation across various datasets. Moreover, it surpasses existing 3D diffusion methods in terms of inference speed, requiring no per-instance optimization. Our proposed LN3Diff++ presents a significant advancement in 3D generative modeling and holds promise for various applications in 3D vision and graphics tasks.
      oai:arXiv.org:2403.12019v3
      cs.CV
      Mon, 22 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by-nc-nd/4.0/
      10.1109/TPAMI.2025.3633073
      IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025
      Yushi Lan, Fangzhou Hong, Shangchen Zhou, Shuai Yang, Xuyi Meng, Yongwei Chen, Zhaoyang Lyu, Bo Dai, Xingang Pan, Chen Change Loy
    
    
      Symmetrization of 3D Generative Models
      https://arxiv.org/abs/2512.18953
      arXiv:2512.18953v1 Announce Type: new 
Abstract: We propose a novel data-centric approach to promote symmetry in 3D generative models by modifying the training data rather than the model architecture. Our method begins with an analysis of reflectional symmetry in both real-world 3D shapes and samples generated by state-of-the-art models. We hypothesize that training a generative model exclusively on half-objects, obtained by reflecting one half of the shapes along the x=0 plane, enables the model to learn a rich distribution of partial geometries which, when reflected during generation, yield complete shapes that are both visually plausible and geometrically symmetric. To test this, we construct a new dataset of half-objects from three ShapeNet classes (Airplane, Car, and Chair) and train two generative models. &#13;
Experiments demonstrate that the generated shapes are symmetric and consistent compared with objects generated by the original model and objects from the original dataset.
      oai:arXiv.org:2512.18953v1
      cs.CV
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Nicolas Caytuiro, Ivan Sipiran
    
    
      Near Optimal Alphabet-Soundness Tradeoff PCPs
      https://arxiv.org/abs/2404.07441
      arXiv:2404.07441v3 Announce Type: replace 
Abstract: We show that for all $\varepsilon>0$, for sufficiently large $q\in\mathbb{N}$ that is a power of $2$, and for all $\delta>0$, it is NP-hard to distinguish whether a given $2$-Prover-$1$-Round projection game with alphabet size $q$ has value at least $1-\delta$, or value at most $1/q^{1-\varepsilon}$. This establishes a nearly optimal alphabet-to-soundness tradeoff for $2$-query PCPs with alphabet size $q$, improving upon a result of [Chan, Journal of the ACM 2016]. Our result has the following implications: &#13;
 1) Near optimal hardness for Quadratic Programming: it is NP-hard to approximate the value of a given Boolean Quadratic Program within factor $(\log n)^{1 - o(1)}$ under quasi-polynomial time reductions. This improves upon a result of [Khot, Safra, ToC 2013] and nearly matches the performance of the best known algorithms due to [Megretski, IWOTA 2000], [Nemirovski, Roos, Terlaky, Mathematical programming 1999] and [Charikar, Wirth, FOCS 2004] that achieve an $O(\log n)$ approximation ratio. &#13;
 2) Bounded degree $2$-CSPs: under randomized reductions, for sufficiently large $d>0$, it is NP-hard to approximate the value of $2$-CSPs in which each variable appears in at most $d$ constraints within factor $(1-o(1))\frac{d}{2}$, improving upon a result of [Lee, Manurangsi, ITCS 2024]. &#13;
 3) Improved hardness results for connectivity problems: using results of [Laekhanukit, SODA 2014] and [Manurangsi, Inf. Process. Lett., 2019], we deduce improved hardness results for the Rooted $k$-Connectivity Problem, the Vertex-Connectivity Survivable Network Design Problem and the Vertex-Connectivity $k$-Route Cut Problem.
      oai:arXiv.org:2404.07441v3
      cs.CC
      cs.DS
      Mon, 22 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by/4.0/
      Dor Minzer, Kai Zhe Zheng
    
    
      HGQ: High Granularity Quantization for Real-time Neural Networks on FPGAs
      https://arxiv.org/abs/2405.00645
      arXiv:2405.00645v3 Announce Type: replace 
Abstract: Neural networks with sub-microsecond inference latency are required by many critical applications. Targeting such applications deployed on FPGAs, we present High Granularity Quantization (HGQ), a quantization-aware training framework that optimizes parameter bit-widths through gradient descent. Unlike conventional methods, HGQ determines the optimal bit-width for each parameter independently, making it suitable for hardware platforms supporting heterogeneous arbitrary precision arithmetic. In our experiments, HGQ shows superior performance compared to existing network compression methods, achieving orders of magnitude reduction in resource consumption and latency while maintaining accuracy on several benchmark tasks. These improvements enable the deployment of complex models previously infeasible due to resource or latency constraints. HGQ is open-source and is used for developing next-generation trigger systems at the CERN ATLAS and CMS experiments for particle physics, enabling the use of advanced machine learning models for real-time data selection with sub-microsecond latency. &#13;
- oai:arXiv.org:2405.00645v3 - cs.LG - physics.ins-det - Mon, 22 Dec 2025 00:00:00 -0500 - replace + VOIC: Visible-Occluded Decoupling for Monocular 3D Semantic Scene Completion + https://arxiv.org/abs/2512.18954 + arXiv:2512.18954v1 Announce Type: new +Abstract: Camera-based 3D Semantic Scene Completion (SSC) is a critical task for autonomous driving and robotic scene understanding. It aims to infer a complete 3D volumetric representation of both semantics and geometry from a single image. Existing methods typically focus on end-to-end 2D-to-3D feature lifting and voxel completion. However, they often overlook the interference between high-confidence visible-region perception and low-confidence occluded-region reasoning caused by single-image input, which can lead to feature dilution and error propagation. + To address these challenges, we introduce an offline Visible Region Label Extraction (VRLE) strategy that explicitly separates and extracts voxel-level supervision for visible regions from dense 3D ground truth. This strategy purifies the supervisory space for two complementary sub-tasks: visible-region perception and occluded-region reasoning. Building on this idea, we propose the Visible-Occluded Interactive Completion Network (VOIC), a novel dual-decoder framework that explicitly decouples SSC into visible-region semantic perception and occluded-region scene completion. VOIC first constructs a base 3D voxel representation by fusing image features with depth-derived occupancy. The visible decoder focuses on generating high-fidelity geometric and semantic priors, while the occlusion decoder leverages these priors together with cross-modal interaction to perform coherent global scene reasoning. + Extensive experiments on the SemanticKITTI and SSCBench-KITTI360 benchmarks demonstrate that VOIC outperforms existing monocular SSC methods in both geometric completion and semantic segmentation accuracy, achieving state-of-the-art performance. 
+ oai:arXiv.org:2512.18954v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - 10.1145/3748173.3779200 - Chang Sun, Zhiqiang Que, Thea K. {\AA}rrestad, Vladimir Loncar, Jennifer Ngadiuba, Wayne Luk, Maria Spiropulu + Zaidao Han, Risa Higashita, Jiang Liu - SPVR: syntax-to-prompt vulnerability repair based on large language models - https://arxiv.org/abs/2405.04994 - arXiv:2405.04994v2 Announce Type: replace -Abstract: Purpose: In the field of vulnerability repair, previous research has leveraged pretrained models and LLM-based prompt engineering, among which LLM-based approaches show better generalizability and achieve the best performance. However, the LLM-based approaches generally regard vulnerability repair as a sequence-to-sequence task, and do not explicitly capture the syntax patterns for different vulnerability types, leading to limited accuracy. We aim to create a method that ensures the specificity of prompts targeting vulnerable code while also leveraging the generative capabilities of Large Language Models. Methods: We propose SPVR (Syntax-to-Prompt Vulnerability Repair), a novel framework that collects information from syntax trees, and generates corresponding prompts. Our method consists of three steps: rule design, prompt generation, and patch generation. In the rule design step, our method parses code patches and designs rules to extract relevant contextual information. These rules aid in identifying vulnerability-related issues. In the prompt generation step, our method extracts information from vulnerable code with pre-defined rules, automatically converting them into prompts. We also incorporate the description of CWE (Common Weakness Enumeration) as known information into the prompts. Finally, in the patch generation step, this prompt will serve as input to any conversational LLM to obtain code patches. 
Results: Extensive experiments validate that our method achieves excellent results in assisting LLMs to fix vulnerabilities accurately. We utilize multiple Large Language Models to validate the effectiveness of our work, repairing 143 of 547 vulnerable code samples using ChatGPT-4. We conducted a comparison of our approach against several existing vulnerability repair approaches (including fine-tuning-based and prompt-based) across multiple metrics.
      oai:arXiv.org:2405.04994v2
      cs.SE
      Mon, 22 Dec 2025 00:00:00 -0500
      replace
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      10.1007/s10515-025-00579-5
      Automated Software Engineering 33 (2026) 38
      Ruoke Wang, Zongjie Li, Cuiyun Gao, Chaozheng Wang, Yang Xiao, Xuan Wang
    
    
      A Spectral Low-Mode Reduced Method for Elliptic Problems
      https://arxiv.org/abs/2512.18955
      arXiv:2512.18955v1 Announce Type: new 
Abstract: We develop a spectral low-mode reduced solver for second-order elliptic boundary value problems with spatially varying diffusion coefficients. The approach projects a standard finite difference or finite element discretization onto a global coarse space spanned by the lowest Dirichlet Laplacian eigenmodes, yielding an analytic reduced model that requires no training data and preserves coefficient heterogeneity through an exact Galerkin projection. The reduced solution is energy-optimal in the selected subspace and, for $H^2$-regular solutions, the truncation error associated with discarded modes satisfies a $\sqrt{\log M}/M$ decay in the $H_0^1$ norm. For uniformly stable reduced bases, the projected operator is well conditioned with respect to mesh refinement, and numerical experiments corroborate the predicted accuracy and demonstrate meaningful speedups over sparse direct solvers, with favorable performance relative to multigrid and deflation-based Krylov methods for heterogeneous coefficients in the tested setups. &#13;
+ oai:arXiv.org:2512.18955v1 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1007/s10515-025-00579-5 - Automated Software Engineering 33 (2026) 38 - Ruoke Wang, Zongjie Li, Cuiyun Gao, Chaozheng Wang, Yang Xiao, Xuan Wang + Prosper Torsu - On the Identification of Temporally Causal Representation with Instantaneous Dependence - https://arxiv.org/abs/2405.15325 - arXiv:2405.15325v3 Announce Type: replace -Abstract: Temporally causal representation learning aims to identify the latent causal process from time series observations, but most methods require the assumption that the latent causal processes do not have instantaneous relations. Although some recent methods achieve identifiability in the instantaneous causality case, they require either interventions on the latent variables or grouping of the observations, which are in general difficult to obtain in real-world scenarios. To fill this gap, we propose an \textbf{ID}entification framework for instantane\textbf{O}us \textbf{L}atent dynamics (\textbf{IDOL}) by imposing a sparse influence constraint that the latent causal processes have sparse time-delayed and instantaneous relations. Specifically, we establish identifiability results of the latent causal process based on sufficient variability and the sparse influence constraint by employing contextual information of time series data. Based on these theories, we incorporate a temporally variational inference architecture to estimate the latent variables and a gradient-based sparsity regularization to identify the latent causal process. Experimental results on simulation datasets illustrate that our method can identify the latent causal process. Furthermore, evaluations on multiple human motion forecasting benchmarks with instantaneous dependencies indicate the effectiveness of our method in real-world settings. 
- oai:arXiv.org:2405.15325v3 + Training Multimodal Large Reasoning Models Needs Better Thoughts: A Three-Stage Framework for Long Chain-of-Thought Synthesis and Selection + https://arxiv.org/abs/2512.18956 + arXiv:2512.18956v1 Announce Type: new +Abstract: Large Reasoning Models (LRMs) have demonstrated remarkable performance on complex reasoning tasks through long Chain-of-Thought (CoT) reasoning. Extending these successes to multimodal reasoning remains challenging due to the increased complexity of integrating diverse input modalities and the scarcity of high-quality long CoT training data. Existing multimodal datasets and CoT synthesis methods still suffer from limited reasoning depth, modality conversion errors, and rigid generation pipelines, hindering model performance and stability. To this end, in this paper, we propose SynSelect, a novel three-stage Synthesis-Selection framework for generating high-quality long CoT data tailored to multimodal reasoning tasks. Specifically, SynSelect first leverages multiple heterogeneous multimodal LRMs to produce diverse candidate CoTs, and then applies both instance and batch level selection to filter high-quality CoTs that can effectively enhance the model's reasoning capabilities. Extensive experiments on multiple multimodal benchmarks demonstrate that models supervised fine-tuned on SynSelect-generated data significantly outperform baselines and achieve further improvements after reinforcement learning post-training. Our results validate SynSelect as an effective approach for advancing multimodal LRMs reasoning capabilities. 
+ oai:arXiv.org:2512.18956v1 + cs.AI cs.LG - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zijian Li, Yifan Shen, Kaitao Zheng, Ruichu Cai, Xiangchen Song, Mingming Gong, Guangyi Chen, Kun Zhang + Yizhi Wang, Linan Yue, Min-Ling Zhang - Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications - https://arxiv.org/abs/2405.15877 - arXiv:2405.15877v4 Announce Type: replace -Abstract: Large language models (LLMs) significantly enhance the performance of various applications, but they are computationally intensive and energy-demanding. This makes it challenging to deploy them on devices with limited resources, such as personal computers and mobile/wearable devices, and results in substantial inference costs in resource-rich environments like cloud servers. To extend the use of LLMs, we introduce a low-rank decomposition approach to effectively compress these models, tailored to the requirements of specific applications. We observe that LLMs pretrained on general datasets contain many redundant components not needed for particular applications. Our method focuses on identifying and removing these redundant parts, retaining only the necessary elements for the target applications. Specifically, we represent the weight matrices of LLMs as a linear combination of base components. We then prune the irrelevant bases and enhance the model with new bases beneficial for specific applications. Deep compression results on the Llama 2-7b and -13B models, conducted on target applications including mathematical reasoning and code generation, show that our method significantly reduces model size while maintaining comparable accuracy to state-of-the-art low-rank compression techniques. 
- oai:arXiv.org:2405.15877v4 + Scaling Online Distributionally Robust Reinforcement Learning: Sample-Efficient Guarantees with General Function Approximation + https://arxiv.org/abs/2512.18957 + arXiv:2512.18957v1 Announce Type: new +Abstract: The deployment of reinforcement learning (RL) agents in real-world applications is often hindered by performance degradation caused by mismatches between training and deployment environments. Distributionally robust RL (DR-RL) addresses this issue by optimizing worst-case performance over an uncertainty set of transition dynamics. However, existing work typically relies on substantial prior knowledge-such as access to a generative model or a large offline dataset-and largely focuses on tabular methods that do not scale to complex domains. We overcome these limitations by proposing an online DR-RL algorithm with general function approximation that learns an optimal robust policy purely through interaction with the environment, without requiring prior models or offline data, enabling deployment in high-dimensional tasks. We further provide a theoretical analysis establishing a near-optimal sublinear regret bound under a total variation uncertainty set, demonstrating the sample efficiency and effectiveness of our method. + oai:arXiv.org:2512.18957v1 cs.LG - cs.AR - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yang Li, Daniel Agyei Asante, Changsheng Zhao, Ernie Chang, Yangyang Shi, Vikas Chandra + Debamita Ghosh, George K. Atia, Yue Wang - Accelerating Hybrid Model Predictive Control using Warm-Started Generalized Benders Decomposition - https://arxiv.org/abs/2406.00780 - arXiv:2406.00780v2 Announce Type: replace -Abstract: Hybrid model predictive control with both continuous and discrete variables is widely applicable to robotic control tasks, especially those involving contacts with the environment. 
Due to combinatorial complexity, the solving speed of hybrid MPC can be insufficient for real-time applications. In this paper, we propose a hybrid MPC algorithm based on Generalized Benders Decomposition. The algorithm enumerates and stores cutting planes online inside a finite buffer and transfers them across MPC iterations to provide warm-starts for new problem instances, significantly enhancing solving speed. We theoretically analyze this warm-starting performance by modeling the deviation of mode sequences through temporal shifting and stretching, deriving bounds on the dual gap between transferred optimality cuts and the true optimal costs, and utilizing these bounds to quantify the level of suboptimality guaranteed in the first solve of the Benders Master Problem. Our algorithm is validated in simulation through controlling a cart-pole system with soft contact walls, a free-flying robot navigating around obstacles, and a humanoid robot standing on one leg while pushing against walls with its hands for balance. For our benchmark problems, the algorithm enumerates cuts on the order of only tens to hundreds while reaching speeds 2-3 times faster than the off-the-shelf solver Gurobi, oftentimes exceeding 1000 Hz. The code is available at https://github.com/XuanLin/Benders-MPC. - oai:arXiv.org:2406.00780v2 - cs.RO - cs.SY - eess.SY - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Xuan Lin + DVI: Disentangling Semantic and Visual Identity for Training-Free Personalized Generation + https://arxiv.org/abs/2512.18964 + arXiv:2512.18964v1 Announce Type: new +Abstract: Recent tuning-free identity customization methods achieve high facial fidelity but often overlook visual context, such as lighting, skin texture, and environmental tone. This limitation leads to ``Semantic-Visual Dissonance,'' where accurate facial geometry clashes with the input's unique atmosphere, causing an unnatural ``sticker-like'' effect. 
We propose **DVI (Disentangled Visual-Identity)**, a zero-shot framework that orthogonally disentangles identity into fine-grained semantic and coarse-grained visual streams. Unlike methods relying solely on semantic vectors, DVI exploits the inherent statistical properties of the VAE latent space, utilizing mean and variance as lightweight descriptors for global visual atmosphere. We introduce a **Parameter-Free Feature Modulation** mechanism that adaptively modulates semantic embeddings with these visual statistics, effectively injecting the reference's ``visual soul'' without training. Furthermore, a **Dynamic Temporal Granularity Scheduler** aligns with the diffusion process, prioritizing visual atmosphere in early denoising stages while refining semantic details later. Extensive experiments demonstrate that DVI significantly enhances visual consistency and atmospheric fidelity without parameter fine-tuning, maintaining robust identity preservation and outperforming state-of-the-art methods in IBench evaluations. + oai:arXiv.org:2512.18964v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Guandong Li, Yijun Ding - Studying the Effects of Collaboration in Interactive Theme Discovery Systems - https://arxiv.org/abs/2408.09030 - arXiv:2408.09030v4 Announce Type: replace -Abstract: NLP-assisted solutions have gained considerable traction to support qualitative data analysis. However, there does not exist a unified evaluation framework that can account for the many different settings in which qualitative researchers may employ them. In this paper, we take a first step in this direction by proposing an evaluation framework to study the way in which different tools may result in different outcomes depending on the collaboration strategy employed. Specifically, we study the impact of synchronous vs. 
asynchronous collaboration using two different NLP-assisted qualitative research tools and present a comprehensive analysis of significant differences in the consistency, cohesiveness, and correctness of their outputs. - oai:arXiv.org:2408.09030v4 - cs.CL - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Lag Operator SSMs: A Geometric Framework for Structured State Space Modeling + https://arxiv.org/abs/2512.18965 + arXiv:2512.18965v1 Announce Type: new +Abstract: Structured State Space Models (SSMs), which are at the heart of the recently popular Mamba architecture, are powerful tools for sequence modeling. However, their theoretical foundation relies on a complex, multi-stage process of continuous-time modeling and subsequent discretization, which can obscure intuition. We introduce a direct, first-principles framework for constructing discrete-time SSMs that is both flexible and modular. Our approach is based on a novel lag operator, which geometrically derives the discrete-time recurrence by measuring how the system's basis functions "slide" and change from one timestep to the next. The resulting state matrices are computed via a single inner product involving this operator, offering a modular design space for creating novel SSMs by flexibly combining different basis functions and time-warping schemes. To validate our approach, we demonstrate that a specific instance exactly recovers the recurrence of the influential HiPPO model. Numerical simulations confirm our derivation, providing new theoretical tools for designing flexible and robust sequence models. 
+ oai:arXiv.org:2512.18965v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Alvin Po-Chun Chen, Rohan Das, Dananjay Srinivas, Alexandra Barry, Maksim Seniw, Maria Leonor Pacheco + Sutashu Tomonaga, Kenji Doya, Noboru Murata - Privacy Bias in Language Models: A Contextual Integrity-based Auditing Metric - https://arxiv.org/abs/2409.03735 - arXiv:2409.03735v3 Announce Type: replace -Abstract: As large language models (LLMs) are integrated into sociotechnical systems, it is crucial to examine the privacy biases they exhibit. We define privacy bias as the appropriateness value of information flows in responses from LLMs. A deviation between privacy biases and expected values, referred to as privacy bias delta, may indicate privacy violations. As an auditing metric, privacy bias can help (a) model trainers evaluate the ethical and societal impact of LLMs, (b) service providers select context-appropriate LLMs, and (c) policymakers assess the appropriateness of privacy biases in deployed LLMs. We formulate and answer a novel research question: how can we reliably examine privacy biases in LLMs and the factors that influence them? We present a novel approach for assessing privacy biases using a contextual integrity-based methodology to evaluate the responses from various LLMs. Our approach accounts for the sensitivity of responses across prompt variations, which hinders the evaluation of privacy biases. Finally, we investigate how privacy biases are affected by model capacities and optimizations. 
- oai:arXiv.org:2409.03735v3 - cs.LG - cs.AI - cs.CR - cs.CY - Mon, 22 Dec 2025 00:00:00 -0500 - replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yan Shvartzshnaider, Vasisht Duddu + Scrum Sprint Planning: LLM-based and algorithmic solutions + https://arxiv.org/abs/2512.18966 + arXiv:2512.18966v1 Announce Type: new +Abstract: Planning for an upcoming project iteration (sprint) is one of the key activities in Scrum planning. In this paper, we present our work in progress on exploring the applicability of Large Language Models (LLMs) for solving this problem. We conducted case studies with manually created data sets to investigate the applicability of OpenAI models for supporting the sprint planning activities. In our experiments, we applied three models provided by OpenAI: GPT-3.5 Turbo, GPT-4.0 Turbo, and Val. The experiments demonstrated that the results produced by the models aren't of acceptable quality for direct use in Scrum projects. + oai:arXiv.org:2512.18966v1 + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by-nc-nd/4.0/ + Yuwon Yoon, Kevin Iwan, Madeleine Zwart, Xiaohan Qin, Hina Lee, Maria Spichkova - Ciphertext Malleability in Lattice-Based KEMs as a Countermeasure to Side Channel Analysis - https://arxiv.org/abs/2409.16107 - arXiv:2409.16107v3 Announce Type: replace -Abstract: Due to developments in quantum computing, classical asymmetric cryptography is at risk of being breached. Consequently, new Post-Quantum Cryptography (PQC) primitives using lattices are studied. Another point of scrutiny is the resilience of these new primitives to Side Channel Analysis (SCA), where an attacker can study physical leakages. In this work we discuss a SCA vulnerability due to the ciphertext malleability of some PQC primitives exposed by a work from Ravi et al. 
We propose a novel countermeasure to this vulnerability exploiting the same ciphertext malleability and discuss its practical application to several PQC primitives. We also extend the seminal work of Ravi et al. by detailing their attack on the different security levels of a post-quantum Key Encapsulation Mechanism (KEM), namely FrodoKEM. We also provide a generalisation of their attack to different parameters which could be used in future similar primitives. - oai:arXiv.org:2409.16107v3 - cs.CR - Mon, 22 Dec 2025 00:00:00 -0500 - replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Pierre-Augustin Berthet + Total Curvature Regularization and its Minimization for Surface and Image Smoothing + https://arxiv.org/abs/2512.18968 + arXiv:2512.18968v1 Announce Type: new +Abstract: We introduce a novel formulation for curvature regularization by penalizing normal curvatures from multiple directions. This total normal curvature regularization is capable of producing solutions with sharp edges and precise isotropic properties. To tackle the resulting high-order nonlinear optimization problem, we reformulate it as the task of finding the steady-state solution of a time-dependent partial differential equation (PDE) system. Time discretization is achieved through operator splitting, where each subproblem at the fractional steps either has a closed-form solution or can be efficiently solved using advanced algorithms. Our method circumvents the need for complex parameter tuning and demonstrates robustness to parameter choices. The efficiency and effectiveness of our approach have been rigorously validated in the context of surface and image smoothing problems. 
+ oai:arXiv.org:2512.18968v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Pierre-Augustin Berthet + Tianle Lu, Ke Chen, Yuping Duan - Expression Rates of Neural Operators for Linear Elliptic PDEs in Polytopes - https://arxiv.org/abs/2409.17552 - arXiv:2409.17552v3 Announce Type: replace -Abstract: We study the approximation rates of a class of deep neural network approximations of operators which arise as data-to-solution maps $\mathcal{S}$ of linear elliptic partial differential equations (PDEs), and act between pairs $X,Y$ of suitable infinite-dimensional spaces. We prove expression rate bounds for approximate neural operators $\mathcal{G}$ with the structure $\mathcal{G} = \mathcal{R} \circ \mathcal{A} \circ \mathcal{E}$, with linear encoders $\mathcal{E}$ and decoders $\mathcal{R}$. We focus in particular on deepONets emulating the coefficient-to-solution maps for elliptic PDEs set in polygons and in some polyhedra. Exploiting the regularity of the solution sets of elliptic PDEs in polytopes, we show algebraic rates of convergence for problems with data with finite regularity, and exponential rates for analytic data. - oai:arXiv.org:2409.17552v3 - math.NA - cs.NA - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Self-Attention with State-Object Weighted Combination for Compositional Zero Shot Learning + https://arxiv.org/abs/2512.18969 + arXiv:2512.18969v1 Announce Type: new +Abstract: Object recognition has become prevalent across various industries. However, most existing applications are limited to identifying objects alone, without considering their associated states. The ability to recognize both the state and object simultaneously remains less common. One approach to address this is by treating state and object as a single category during training. However, this approach poses challenges in data collection and training since it requires comprehensive data for all possible combinations. 
Compositional Zero-shot Learning (CZSL) emerges as a viable solution by treating the state and object as distinct categories during training. CZSL facilitates the identification of novel compositions even in the absence of data for every conceivable combination. The current state-of-the-art method, KG-SP, addresses this issue by training distinct classifiers for states and objects, while leveraging a semantic model to evaluate the plausibility of composed compositions. However, KG-SP's accuracy in state and object recognition can be further improved, and it fails to consider the weighting of states and objects during composition. In this study, we propose SASOW, an enhancement of KG-SP that considers the weighting of states and objects while improving composition recognition accuracy. First, we introduce self-attention mechanisms into the classifiers for states and objects, leading to enhanced accuracy in recognizing both. Additionally, we incorporate the weighting of states and objects during composition to generate more reasonable and accurate compositions. Our validation process involves testing SASOW on three established benchmark datasets. Experimental outcomes affirm that, when compared against the OW-CZSL approach KG-SP, SASOW showcases improvements of 2.1%, 1.7%, and 0.4% in terms of accuracy for unseen compositions across the MIT-States, UT Zappos, and C-GQA datasets, respectively. + oai:arXiv.org:2512.18969v1 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Cheng-Hong Chang, Pei-Hsuan Tsai - Quantifying the Dunkelflaute -- An analysis of variable renewable energy droughts in Europe - https://arxiv.org/abs/2410.00244 - arXiv:2410.00244v3 Announce Type: replace -Abstract: Variable renewable energy droughts, also called "Dunkelflaute", emerge as a challenge for climate-neutral energy systems based on variable renewables. 
Here we characterize European drought events for on- and offshore wind power, solar photovoltaics, and renewable technology portfolios, using 38 historic weather years and an advanced identification method. Their characteristics heavily depend on the chosen drought threshold, questioning the usefulness of single-threshold analyses. Applying a multi-threshold framework, we quantify how the complementarity of wind and solar power temporally and spatially alleviates drought frequency, return periods, duration, and severity within (portfolio effect) and across countries (balancing effect). We identify the most extreme droughts, which drive major discharging periods of long-duration storage in a fully renewable European energy system, based on a policy-relevant decarbonization scenario. Such events comprise sequences of shorter droughts of varying severity. The most extreme event occurred in winter 1996/97 and lasted 55 days in an idealized, perfectly interconnected setting. The average renewable availability during this period was still 47% of its long-run mean. System planners must consider such events when planning for storage and other flexibility technologies. Methodologically, we conclude that using arbitrary single calendar years is not suitable for modeling weather-resilient energy scenarios. - oai:arXiv.org:2410.00244v3 - eess.SY - cs.SY - physics.ao-ph - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Martin Kittel, Wolf-Peter Schill + Consistency-guided semi-supervised outlier detection in heterogeneous data using fuzzy rough sets + https://arxiv.org/abs/2512.18977 + arXiv:2512.18977v1 Announce Type: new +Abstract: Outlier detection aims to find samples that behave differently from the majority of the data. Semi-supervised detection methods can utilize the supervision of partial labels, thus reducing false positive rates. 
However, most of the current semi-supervised methods focus on numerical data and neglect the heterogeneity of data information. In this paper, we propose a consistency-guided outlier detection algorithm (COD) for heterogeneous data with the fuzzy rough set theory in a semi-supervised manner. First, a few labeled outliers are leveraged to construct label-informed fuzzy similarity relations. Next, the consistency of the fuzzy decision system is introduced to evaluate attributes' contributions to knowledge classification. Subsequently, we define the outlier factor based on the fuzzy similarity class and predict outliers by integrating the classification consistency and the outlier factor. The proposed algorithm is extensively evaluated on 15 freshly proposed datasets. Experimental results demonstrate that COD is better than or comparable with the leading outlier detectors. This manuscript is the accepted author version of a paper published by Elsevier. The final published version is available at https://doi.org/10.1016/j.asoc.2024.112070 + oai:arXiv.org:2512.18977v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1016/j.asoc.2024.112070 + Applied Soft Computing, Vol.165, 2024 + Baiyang Chen, Zhong Yuan, Dezhong Peng, Xiaoliang Chen, Hongmei Chen - Low-Rank Filtering and Smoothing for Sequential Deep Learning - https://arxiv.org/abs/2410.06800 - arXiv:2410.06800v2 Announce Type: replace -Abstract: Learning multiple tasks sequentially requires neural networks to balance retaining knowledge, yet being flexible enough to adapt to new tasks. Regularizing network parameters is a common approach, but it rarely incorporates prior knowledge about task relationships, and limits information flow to future tasks only. 
We propose a Bayesian framework that treats the network's parameters as the state space of a nonlinear Gaussian model, unlocking two key capabilities: (1) A principled way to encode domain knowledge about task relationships, allowing, e.g., control over which layers should adapt between tasks. (2) A novel application of Bayesian smoothing, allowing task-specific models to also incorporate knowledge from models learned later. This does not require direct access to their data, which is crucial, e.g., for privacy-critical applications. These capabilities rely on efficient filtering and smoothing operations, for which we propose diagonal plus low-rank approximations of the precision matrix in the Laplace approximation (LR-LGF). Empirical results demonstrate the efficiency of LR-LGF and the benefits of the unlocked capabilities. - oai:arXiv.org:2410.06800v2 + Outlier detection in mixed-attribute data: a semi-supervised approach with fuzzy approximations and relative entropy + https://arxiv.org/abs/2512.18978 + arXiv:2512.18978v1 Announce Type: new +Abstract: Outlier detection is a critical task in data mining, aimed at identifying objects that significantly deviate from the norm. Semi-supervised methods improve detection performance by leveraging partially labeled data but typically overlook the uncertainty and heterogeneity of real-world mixed-attribute data. This paper introduces a semi-supervised outlier detection method, namely fuzzy rough sets-based outlier detection (FROD), to effectively handle these challenges. Specifically, we first utilize a small subset of labeled data to construct fuzzy decision systems, through which we introduce the attribute classification accuracy based on fuzzy approximations to evaluate the contribution of attribute sets in outlier detection. Unlabeled data is then used to compute fuzzy relative entropy, which provides a characterization of outliers from the perspective of uncertainty. 
Finally, we develop the detection algorithm by combining attribute classification accuracy with fuzzy relative entropy. Experimental results on 16 public datasets show that FROD is comparable with or better than leading detection algorithms. All datasets and source codes are accessible at https://github.com/ChenBaiyang/FROD. This manuscript is the accepted author version of a paper published by Elsevier. The final published version is available at https://doi.org/10.1016/j.ijar.2025.109373 + oai:arXiv.org:2512.18978v1 cs.LG - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Joanna Sliwa, Frank Schneider, Nathanael Bosch, Agustinus Kristiadi, Philipp Hennig + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1016/j.ijar.2025.109373 + International Journal of Approximate Reasoning, Vol. 179, 2025, 109373 + Baiyang Chen, Zhong Yuan, Zheng Liu, Dezhong Peng, Yongxiang Li, Chang Liu, Guiduo Duan - Computing measures of weak-MSO definable sets of trees - https://arxiv.org/abs/2410.13479 - arXiv:2410.13479v2 Announce Type: replace -Abstract: This work addresses the problem of computing measures of recognisable sets of infinite trees. An algorithm is provided to compute the probability measure of a tree language recognisable by a weak alternating automaton, or equivalently definable in weak monadic second-order logic. The measure is the uniform coin-flipping measure or more generally it is generated by a~branching stochastic process. The class of tree languages in consideration, although smaller than all regular tree languages, comprises in particular the languages definable in the alternation-free mu-calculus or in temporal logic CTL. Thus, the new algorithm may enhance the toolbox of probabilistic model checking. 
- oai:arXiv.org:2410.13479v2 - cs.FL - Mon, 22 Dec 2025 00:00:00 -0500 - replace http://creativecommons.org/licenses/by/4.0/ - Damian Niwi\'nski, Marcin Przyby{\l}ko, Micha{\l} Skrzypczak + Research on Novelty Measurement Indicator of Academic Papers Based on the Atypical Recombination of Knowledge + https://arxiv.org/abs/2512.18979 + arXiv:2512.18979v1 Announce Type: new +Abstract: The advancement of science is inherently dependent on the recombination of existing knowledge, and innovative research typically relies on the atypical recombination of established knowledge bases. This study introduces a Knowledge Eccentricity indicator to enable timely assessment of the novelty of research outputs by quantifying their degree of deviation from the existing knowledge system. For empirical analysis, we selected sample data including research articles published in Science and Nature, top 1% highly cited papers, and zero-cited papers for the years 2005, 2010, 2015, 2020, and 2025. We calculated the knowledge eccentricity scores for these papers and examined their potential influencing factors. The results indicate that team size exerts a significant negative effect on paper novelty, meaning larger team size is less conducive to enhancing the novelty of research outputs. Conversely, the number of references shows a significant positive correlation with paper novelty, which means that a greater number of references is associated with a moderate improvement in a paper's novelty. The proposed indicator offers strong timeliness and operability, allowing for the evaluation of a paper's novelty immediately upon its publication. 
+ oai:arXiv.org:2512.18979v1 + cs.DL + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Damian Niwi\'nski, Marcin Przyby{\l}ko, Micha{\l} Skrzypczak + Liang Guoqiang, Sun Jian, Lin Gege, Zhang Shuo - Hierarchical Multimodal LLMs with Semantic Space Alignment for Enhanced Time Series Classification - https://arxiv.org/abs/2410.18686 - arXiv:2410.18686v2 Announce Type: replace -Abstract: Time series classification plays a fundamental role in a wide range of real-world applications. Recently, large language models (LLMs) have demonstrated strong generalization and reasoning capacities, but directly applying them to time series classification remains non-trivial due to the representation gap between numerical sequences and linguistic semantics. In this paper, we propose HiTime, a hierarchical LLM-based framework for multimodal time series classification that bridges structured temporal representations with semantic reasoning in a generative paradigm. Specifically, we design a hierarchical sequence feature encoding module composed of a data-specific encoder and a task-specific encoder to extract complementary temporal features. To mitigate the embedding gap between time series representations and textual semantics, we further introduce a semantic space alignment module that jointly performs coarse-grained global modeling and fine-grained cross-modal correspondence. Building upon the above representations, we employ a parameter-efficient supervised fine-tuning strategy to activate the generative classification capability of the algined LLMs, thereby transforming conventional discriminative time series classification into a generative task. Extensive experiments on multiple benchmarks demonstrate that the proposed framework consistently outperforms state-of-the-art baselines. The code is publicly available at https://github.com/Xiaoyu-Tao/HiTime. 
- oai:arXiv.org:2410.18686v2 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiaoyu Tao, Tingyue Pan, Mingyue Cheng, Yucong Luo, Qi Liu, Enhong Chen + OPBO: Order-Preserving Bayesian Optimization + https://arxiv.org/abs/2512.18980 + arXiv:2512.18980v1 Announce Type: new +Abstract: Bayesian optimization is an effective method for solving expensive black-box optimization problems. Most existing methods use Gaussian processes (GP) as the surrogate model for approximating the black-box objective function, but it is well known that GPs can fail in high-dimensional spaces (e.g., dimension over 500). We argue that the reliance of GP on precise numerical fitting is fundamentally ill-suited in high-dimensional space, where it leads to prohibitive computational complexity. In order to address this, we propose a simple order-preserving Bayesian optimization (OPBO) method, where the surrogate model preserves the order, instead of the value, of the black-box objective function. Then we can use a simple but effective OP neural network (NN) to replace GP as the surrogate model. Moreover, instead of searching for the best solution from the acquisition model, we select good-enough solutions in the ordinal set to reduce computational cost. The experimental results show that for high-dimensional (over 500) black-box optimization problems, the proposed OPBO significantly outperforms traditional BO methods based on regression NN and GP. The source code is available at https://github.com/pengwei222/OPBO. 
+ oai:arXiv.org:2512.18980v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiaoyu Tao, Tingyue Pan, Mingyue Cheng, Yucong Luo, Qi Liu, Enhong Chen + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Wei Peng, Jianchen Hu, Kang Liu, Qiaozhu Zhai - Clean Up the Mess: Addressing Data Pollution in Cryptocurrency Abuse Reporting Services - https://arxiv.org/abs/2410.21041 - arXiv:2410.21041v2 Announce Type: replace -Abstract: Cryptocurrency abuse reporting services are a valuable data source about abusive blockchain addresses, prevalent types of cryptocurrency abuse, and their financial impact on victims. However, they may suffer data pollution due to their crowd-sourced nature. This work analyzes the extent and impact of data pollution in cryptocurrency abuse reporting services and proposes a novel LLM-based defense to address the pollution. We collect 289K abuse reports submitted over 6 years to two popular services and use them to answer three research questions. RQ1 analyzes the extent and impact of pollution. We show that spam reports will eventually flood unchecked abuse reporting services, with BitcoinAbuse receiving 75% of spam before stopping operations. We build a public dataset of 19,443 abuse reports labeled with 19 popular abuse types and use it to reveal the inaccuracy of user-reported abuse types. We identified 91 (0.1%) benign addresses reported, responsible for 60% of all the received funds. RQ2 examines whether we can automate identifying valid reports and their classification into abuse types. We propose an unsupervised LLM-based classifier that achieves an F1 score of 0.95 when classifying reports, an F1 of 0.89 when classifying out-of-distribution data, and an F1 of 0.99 when identifying spam reports. Our unsupervised LLM-based classifier clearly outperforms two baselines: a supervised classifier and a naive usage of the LLM. 
Finally, RQ3 demonstrates the usefulness of our LLM-based classifier for quantifying the financial impact of different cryptocurrency abuse types. We show that victim-reported losses heavily underestimate cybercriminal revenue by estimating a 29 times higher revenue from deposit transactions. We identified that investment scams have the highest financial impact and that extortions have lower conversion rates but compensate for them with massive email campaigns. - oai:arXiv.org:2410.21041v2 - cs.CR - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace + R-GenIMA: Integrating Neuroimaging and Genetics with Interpretable Multimodal AI for Alzheimer's Disease Progression + https://arxiv.org/abs/2512.18986 + arXiv:2512.18986v1 Announce Type: new +Abstract: Early detection of Alzheimer's disease (AD) requires models capable of integrating macro-scale neuroanatomical alterations with micro-scale genetic susceptibility, yet existing multimodal approaches struggle to align these heterogeneous signals. We introduce R-GenIMA, an interpretable multimodal large language model that couples a novel ROI-wise vision transformer with genetic prompting to jointly model structural MRI and single nucleotide polymorphisms (SNPs) variations. By representing each anatomically parcellated brain region as a visual token and encoding SNP profiles as structured text, the framework enables cross-modal attention that links regional atrophy patterns to underlying genetic factors. Applied to the ADNI cohort, R-GenIMA achieves state-of-the-art performance in four-way classification across normal cognition (NC), subjective memory concerns (SMC), mild cognitive impairment (MCI), and AD. Beyond predictive accuracy, the model yields biologically meaningful explanations by identifying stage-specific brain regions and gene signatures, as well as coherent ROI-Gene association patterns across the disease continuum. 
Attention-based attribution revealed genes consistently enriched for established GWAS-supported AD risk loci, including APOE, BIN1, CLU, and RBFOX1. Stage-resolved neuroanatomical signatures identified shared vulnerability hubs across disease stages alongside stage-specific patterns: striatal involvement in subjective decline, frontotemporal engagement during prodromal impairment, and consolidated multimodal network disruption in AD. These results demonstrate that interpretable multimodal AI can synthesize imaging and genetics to reveal mechanistic insights, providing a foundation for clinically deployable tools that enable earlier risk stratification and inform precision therapeutic strategies in Alzheimer's disease. + oai:arXiv.org:2512.18986v1 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1016/j.future.2025.108313 - Gibran Gomez, Kevin van Liebergen, Davide Sanvito, Giuseppe Siracusano, Roberto Gonzalez, Juan Caballero + Kun Zhao (for the Alzheimer's Disease Neuroimaging Initiative), Siyuan Dai (for the Alzheimer's Disease Neuroimaging Initiative), Yingying Zhang (for the Alzheimer's Disease Neuroimaging Initiative), Guodong Liu (for the Alzheimer's Disease Neuroimaging Initiative), Pengfei Gu (for the Alzheimer's Disease Neuroimaging Initiative), Chenghua Lin (for the Alzheimer's Disease Neuroimaging Initiative), Paul M. 
Thompson (for the Alzheimer's Disease Neuroimaging Initiative), Alex Leow (for the Alzheimer's Disease Neuroimaging Initiative), Heng Huang (for the Alzheimer's Disease Neuroimaging Initiative), Lifang He (for the Alzheimer's Disease Neuroimaging Initiative), Liang Zhan (for the Alzheimer's Disease Neuroimaging Initiative), Haoteng Tang (for the Alzheimer's Disease Neuroimaging Initiative) - Tracing the Roots: Leveraging Temporal Dynamics in Diffusion Trajectories for Origin Attribution - https://arxiv.org/abs/2411.07449 - arXiv:2411.07449v3 Announce Type: replace -Abstract: Diffusion models have transformed image synthesis through iterative denoising, by defining trajectories from noise to coherent data. While their capabilities are widely celebrated, a critical challenge remains unaddressed: ensuring responsible use by verifying whether an image originates from a model's training set, its novel generations or external sources. We introduce a framework that analyzes diffusion trajectories for this purpose. Specifically, we demonstrate that temporal dynamics across the entire trajectory allow for more robust classification and challenge the widely-adopted "Goldilocks zone" conjecture, which posits that membership inference is effective only within narrow denoising stages. More fundamentally, we expose critical flaws in current membership inference practices by showing that representative methods fail under distribution shifts or when model-generated data is present. For model attribution, we demonstrate a first white-box approach directly applicable to diffusion. Ultimately, we propose the unification of data provenance into a single, cohesive framework tailored to modern generative systems. 
- oai:arXiv.org:2411.07449v3 + Affordance RAG: Hierarchical Multimodal Retrieval with Affordance-Aware Embodied Memory for Mobile Manipulation + https://arxiv.org/abs/2512.18987 + arXiv:2512.18987v1 Announce Type: new +Abstract: In this study, we address the problem of open-vocabulary mobile manipulation, where a robot is required to carry a wide range of objects to receptacles based on free-form natural language instructions. This task is challenging, as it involves understanding visual semantics and the affordance of manipulation actions. To tackle these challenges, we propose Affordance RAG, a zero-shot hierarchical multimodal retrieval framework that constructs Affordance-Aware Embodied Memory from pre-explored images. The model retrieves candidate targets based on regional and visual semantics and reranks them with affordance scores, allowing the robot to identify manipulation options that are likely to be executable in real-world environments. Our method outperformed existing approaches in retrieval performance for mobile manipulation instruction in large-scale indoor environments. Furthermore, in real-world experiments where the robot performed mobile manipulation in indoor environments based on free-form instructions, the proposed method achieved a task success rate of 85%, outperforming existing methods in both retrieval performance and overall task success. + oai:arXiv.org:2512.18987v1 + cs.RO + cs.CL cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Andreas Floros, Seyed-Mohsen Moosavi-Dezfooli, Pier Luigi Dragotti + Ryosuke Korekata, Quanting Xie, Yonatan Bisk, Komei Sugiura - Mapping the Podcast Ecosystem with the Structured Podcast Research Corpus - https://arxiv.org/abs/2411.07892 - arXiv:2411.07892v2 Announce Type: replace -Abstract: Podcasts provide highly diverse content to a massive listener base through a unique on-demand modality. 
However, limited data has prevented large-scale computational analysis of the podcast ecosystem. To fill this gap, we introduce a massive dataset of over 1.1M podcast transcripts that is largely comprehensive of all English language podcasts available through public RSS feeds from May and June of 2020. This data is not limited to text, but rather includes audio features and speaker turns for a subset of 370K episodes, and speaker role inferences and other metadata for all 1.1M episodes. Using this data, we also conduct a foundational investigation into the content, structure, and responsiveness of this ecosystem. Together, our data and analyses open the door to continued computational research of this popular and impactful medium. - oai:arXiv.org:2411.07892v2 - cs.CL - cs.CY - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/publicdomain/zero/1.0/ - 10.18653/v1/2025.acl-long.1222 - Benjamin Litterer, David Jurgens, Dallas Card + DTCCL: Disengagement-Triggered Contrastive Continual Learning for Autonomous Bus Planners + https://arxiv.org/abs/2512.18988 + arXiv:2512.18988v1 Announce Type: new +Abstract: Autonomous buses run on fixed routes but must operate in open, dynamic urban environments. Disengagement events on these routes are often geographically concentrated and typically arise from planner failures in highly interactive regions. Such policy-level failures are difficult to correct using conventional imitation learning, which easily overfits to sparse disengagement data. To address this issue, this paper presents a Disengagement-Triggered Contrastive Continual Learning (DTCCL) framework that enables autonomous buses to improve planning policies through real-world operation. Each disengagement triggers cloud-based data augmentation that generates positive and negative samples by perturbing surrounding agents while preserving route context. 
Contrastive learning refines policy representations to better distinguish safe and unsafe behaviors, and continual updates are applied in a cloud-edge loop without human supervision. Experiments on urban bus routes demonstrate that DTCCL improves overall planning performance by 48.6 percent compared with direct retraining, validating its effectiveness for scalable, closed-loop policy improvement in autonomous public transport. + oai:arXiv.org:2512.18988v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yanding Yang, Weitao Zhou, Jinhai Wang, Xiaomin Guo, Junze Wen, Xiaolong Liu, Lang Ding, Zheng Fu, Jinyu Miao, Kun Jiang, Diange Yang - Computable Approximations of Semicomputable Graphs - https://arxiv.org/abs/2411.13672 - arXiv:2411.13672v2 Announce Type: replace -Abstract: In this work, we study the computability of topological graphs, which are obtained by gluing arcs and rays together at their endpoints. We prove that every semicomputable graph in a computable metric space can be approximated, with arbitrary precision, by its computable subgraph with computable endpoints. - oai:arXiv.org:2411.13672v2 - cs.LO - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Considering the Difference in Utility Functions of Team Players in Adversarial Team Games + https://arxiv.org/abs/2512.18989 + arXiv:2512.18989v1 Announce Type: new +Abstract: The United Nations' 2030 Agenda for Sustainable Development requires that all countries collaborate to fight adversarial factors to achieve peace and prosperity for humans and the planet. This scenario can be formulated as an adversarial team game in AI literature, where a team of players play against an adversary. However, previous solution concepts for this game assume that team players have the same utility functions, which cannot cover the real-world case that countries do not always have the same utility function. 
This paper argues that studying adversarial team games should not ignore the difference in utility functions of team players. We show that ignoring the difference in utility functions of team players could cause the computed equilibrium to be unstable. To show the benefit of considering the difference in utility functions of team players, we introduce a novel solution concept called Co-opetition Equilibrium (CoE) for the adversarial team game. In this game, team players with different utility functions (i.e., cooperation between team players) correlate their actions to play against the adversary (i.e., competition between the team and the adversary). We further introduce the team-maximizing CoE, which is a CoE but maximizes the team's utility among all CoEs. Both equilibria can overcome the issue caused by ignoring the difference in utility functions of team players. We further show the opportunities for theoretical and algorithmic contributions based on our position of considering the difference in utility functions of team players. + oai:arXiv.org:2512.18989v1 + cs.GT + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Vedran \v{C}a\v{c}i\'c, Matea \v{C}elar, Marko Horvat, Zvonko Iljazovi\'c + Youzhi Zhang - From Engineering Diagrams to Graphs: Digitizing P&IDs with Transformers - https://arxiv.org/abs/2411.13929 - arXiv:2411.13929v3 Announce Type: replace -Abstract: Digitizing engineering diagrams like Piping and Instrumentation Diagrams (P&IDs) plays a vital role in maintainability and operational efficiency of process and hydraulic systems. Previous methods typically decompose the task into separate steps such as symbol detection and line detection, which can limit their ability to capture the structure in these diagrams. In this work, a transformer-based approach leveraging the Relationformer that addresses this limitation by jointly extracting symbols and their interconnections from P&IDs is introduced. 
To evaluate our approach and compare it to a modular digitization approach, we present the first publicly accessible benchmark dataset for P&ID digitization, annotated with graph-level ground truth. Experimental results on real-world diagrams show that our method significantly outperforms the modular baseline, achieving over 25% improvement in edge detection accuracy. This research contributes a reproducible evaluation framework and demonstrates the effectiveness of transformer models for structural understanding of complex engineering diagrams. The dataset is available under https://zenodo.org/records/14803338. - oai:arXiv.org:2411.13929v3 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1109/DSAA65442.2025.11248012 - 2025 IEEE 12th International Conference on Data Science and Advanced Analytics (DSAA) - Jan Marius St\"urmer, Marius Graumann, Tobias Koch - - - UniGaussian: Driving Scene Reconstruction from Multiple Camera Models via Unified Gaussian Representations - https://arxiv.org/abs/2411.15355 - arXiv:2411.15355v3 Announce Type: replace -Abstract: Urban scene reconstruction is crucial for real-world autonomous driving simulators. Although existing methods have achieved photorealistic reconstruction, they mostly focus on pinhole cameras and neglect fisheye cameras. In fact, how to effectively simulate fisheye cameras in driving scene remains an unsolved problem. In this work, we propose UniGaussian, a novel approach that learns a unified 3D Gaussian representation from multiple camera models for urban scene reconstruction in autonomous driving. Our contributions are two-fold. First, we propose a new differentiable rendering method that distorts 3D Gaussians using a series of affine transformations tailored to fisheye camera models. This addresses the compatibility issue of 3D Gaussian splatting with fisheye cameras, which is hindered by light ray distortion caused by lenses or mirrors. 
Besides, our method maintains real-time rendering while ensuring differentiability. Second, built on the differentiable rendering method, we design a new framework that learns a unified Gaussian representation from multiple camera models. By applying affine transformations to adapt different camera models and regularizing the shared Gaussians with supervision from different modalities, our framework learns a unified 3D Gaussian representation with input data from multiple sources and achieves holistic driving scene understanding. As a result, our approach models multiple sensors (pinhole and fisheye cameras) and modalities (depth, semantic, normal and LiDAR point clouds). Our experiments show that our method achieves superior rendering quality and fast rendering speed for driving scene simulation. - oai:arXiv.org:2411.15355v3 + ICP-4D: Bridging Iterative Closest Point and LiDAR Panoptic Segmentation + https://arxiv.org/abs/2512.18991 + arXiv:2512.18991v1 Announce Type: new +Abstract: Dominant paradigms for 4D LiDAR panoptic segmentation are usually required to train deep neural networks with large superimposed point clouds or design dedicated modules for instance association. However, these approaches perform redundant point processing and consequently become computationally expensive, yet still overlook the rich geometric priors inherently provided by raw point clouds. To this end, we introduce ICP-4D, a simple yet effective training-free framework that unifies spatial and temporal reasoning through geometric relations among instance-level point sets. Specifically, we apply the Iterative Closest Point (ICP) algorithm to directly associate temporally consistent instances by aligning the source and target point sets through the estimated transformation. To stabilize association under noisy instance predictions, we introduce a Sinkhorn-based soft matching. 
This exploits the underlying instance distribution to obtain accurate point-wise correspondences, resulting in robust geometric alignment. Furthermore, our carefully designed pipeline, which considers three instance types-static, dynamic, and missing-offers computational efficiency and occlusion-aware matching. Our extensive experiments across both SemanticKITTI and panoptic nuScenes demonstrate that our method consistently outperforms state-of-the-art approaches, even without additional training or extra point cloud inputs. + oai:arXiv.org:2512.18991v1 cs.CV cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Yuan Ren, Guile Wu, Runhao Li, Zheyuan Yang, Yibo Liu, Xingxin Chen, Tongtong Cao, Bingbing Liu - - - Fairness via Independence: A (Conditional) Distance Covariance Framework - https://arxiv.org/abs/2412.00720 - arXiv:2412.00720v2 Announce Type: replace -Abstract: We explore fairness from a statistical perspective by selectively utilizing either conditional distance covariance or distance covariance statistics as measures to assess the independence between predictions and sensitive attributes. We boost fairness with independence by adding a distance covariance-based penalty to the model's training. Additionally, we present the matrix form of empirical (conditional) distance covariance for parallel calculations to enhance computational efficiency. Theoretically, we provide a proof for the convergence between empirical and population (conditional) distance covariance, establishing necessary guarantees for batch computations. Through experiments conducted on a range of real-world datasets, we have demonstrated that our method effectively bridges the fairness gap in machine learning. Our code is available at \url{https://github.com/liuhaixias1/Fair_dc/}. 
- oai:arXiv.org:2412.00720v2 cs.LG cs.CY stat.ML Mon, 22 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Ruifan Huang, Haixia Liu - Embedding-Driven Data Distillation for 360-Degree IQA With Residual-Aware Refinement - https://arxiv.org/abs/2412.12667 - arXiv:2412.12667v2 Announce Type: replace -Abstract: This article identifies and addresses a fundamental bottleneck in data-driven 360-degree image quality assessment (IQA): the lack of intelligent, sample-level data selection. Hence, we propose a novel framework that introduces a critical refinement step between patch sampling and model training. The core of our contribution is an embedding similarity-based selection algorithm that distills an initial, potentially redundant set of patches into a compact, maximally informative subset. This is formulated as a regularized optimization problem that preserves intrinsic perceptual relationships in a low-dimensional space, using residual analysis to explicitly filter out irrelevant or redundant samples. Extensive experiments on three benchmark datasets (CVIQ, OIQA, MVAQD) demonstrate that our selection enables a baseline model to match or exceed the performance of using all sampled data while keeping only 40-50% of patches. Particularly, we demonstrate the universal applicability of our approach by integrating it with several state-of-the-art IQA models, including CNN- and transformer-based architectures, consistently enabling them to maintain or improve performance with 20-40% reduced computational load. This work establishes that adaptive, post-sampling data refinement is a powerful and widely applicable strategy for achieving efficient and robust 360-degree IQA. 
- oai:arXiv.org:2412.12667v2 + Towards AI-Guided Open-World Ecological Taxonomic Classification + https://arxiv.org/abs/2512.18994 + arXiv:2512.18994v1 Announce Type: new +Abstract: AI-guided classification of ecological families, genera, and species underpins global sustainability efforts such as biodiversity monitoring, conservation planning, and policy-making. Progress toward this goal is hindered by long-tailed taxonomic distributions from class imbalance, along with fine-grained taxonomic variations, test-time spatiotemporal domain shifts, and closed-set assumptions that can only recognize previously seen taxa. We introduce the Open-World Ecological Taxonomy Classification, a unified framework that captures the co-occurrence of these challenges in realistic ecological settings. To address them, we propose TaxoNet, an embedding-based encoder with a dual-margin penalization loss that strengthens learning signals from rare underrepresented taxa while mitigating the dominance of overrepresented ones, directly confronting interrelated challenges. We evaluate our method on diverse ecological domains: Google Auto-Arborist (urban trees), iNat-Plantae (Plantae observations from various ecosystems in iNaturalist-2019), and NAFlora-Mini (a curated herbarium collection). Our model consistently outperforms baselines, particularly for rare taxa, establishing a strong foundation for open-world plant taxonomic monitoring. Our findings further show that general-purpose multimodal foundation models remain constrained in plant-domain applications. 
+ oai:arXiv.org:2512.18994v1 cs.CV - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Abderrezzaq Sendjasni, Seif-Eddine Benkabou, Mohamed-Chaker Larabi + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Cheng Yaw Low, Heejoon Koo, Jaewoo Park, Kaleb Mesfin Asfaw, Meeyoung Cha - Agnosticism About Artificial Consciousness - https://arxiv.org/abs/2412.13145 - arXiv:2412.13145v2 Announce Type: replace -Abstract: Could an AI have conscious experiences? Any answer to this question should conform to Evidentialism - that is, it should be based not on intuition, dogma or speculation but on solid scientific evidence. I argue that such evidence is hard to come by and that the only justifiable stance on the prospects of artificial consciousness is agnosticism. In the current debate, the main division is between biological views that are sceptical of artificial consciousness and functional views that are sympathetic to it. I argue that both camps make the same mistake of over-estimating what the evidence tells us. Scientific insights into consciousness have been achieved through the study of conscious organisms. Although this has enabled cautious assessments of consciousness in various creatures, extending this to AI faces serious obstacles. AI thus presents consciousness researchers with a dilemma: either reach a verdict on artificial consciousness but violate Evidentialism; or respect Evidentialism but offer no verdict on the prospects of artificial consciousness. The dominant trend in the literature has been to take the first option while purporting to follow the scientific evidence. I argue that if we truly follow the evidence, we must take the second option and adopt agnosticism. 
- oai:arXiv.org:2412.13145v2 - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Modular Layout Synthesis (MLS): Front-end Code via Structure Normalization and Constrained Generation + https://arxiv.org/abs/2512.18996 + arXiv:2512.18996v1 Announce Type: new +Abstract: Automated front-end engineering drastically reduces development cycles and minimizes manual coding overhead. While Generative AI has shown promise in translating designs to code, current solutions often produce monolithic scripts, failing to natively support modern ecosystems like React, Vue, or Angular. Furthermore, the generated code frequently suffers from poor modularity, making it difficult to maintain. To bridge this gap, we introduce Modular Layout Synthesis (MLS), a hierarchical framework that merges visual understanding with structural normalization. Initially, a visual-semantic encoder maps the screen capture into a serialized tree topology, capturing the essential layout hierarchy. Instead of simple parsing, we apply heuristic deduplication and pattern recognition to isolate reusable blocks, creating a framework-agnostic schema. Finally, a constraint-based generation protocol guides the LLM to synthesize production-ready code with strict typing and component props. Evaluations show that MLS significantly outperforms existing baselines, ensuring superior code reusability and structural integrity across multiple frameworks + oai:arXiv.org:2512.18996v1 + cs.IR + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Tom McClelland + Chong Liu, Ming Zhang, Fei Li, Hao Zhou, Xiaoshuang Chen, Ye Yuan - Additive codes attaining the Griesmer bound - https://arxiv.org/abs/2412.14615 - arXiv:2412.14615v2 Announce Type: replace -Abstract: Additive codes may have better parameters than linear codes. However, still very few cases are known and the explicit construction of such codes is a challenging problem. 
Here we show that a Griesmer type bound for the length of additive codes can always be attained with equality if the minimum distance is sufficiently large. This solves the problem for the optimal parameters of additive codes when the minimum distance is large and yields many infinite series of additive codes that outperform linear codes. - oai:arXiv.org:2412.14615v2 - cs.IT - math.CO - math.IT - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Sascha Kurz + Evaluating the Challenges of LLMs in Real-world Medical Follow-up: A Comparative Study and An Optimized Framework + https://arxiv.org/abs/2512.18999 + arXiv:2512.18999v1 Announce Type: new +Abstract: When applied directly in an end-to-end manner to medical follow-up tasks, Large Language Models (LLMs) often suffer from uncontrolled dialog flow and inaccurate information extraction due to the complexity of follow-up forms. To address this limitation, we designed and compared two follow-up chatbot systems: an end-to-end LLM-based system (control group) and a modular pipeline with structured process control (experimental group). Experimental results show that while the end-to-end approach frequently fails on lengthy and complex forms, our modular method-built on task decomposition, semantic clustering, and flow management-substantially improves dialog stability and extraction accuracy. Moreover, it reduces the number of dialogue turns by 46.73% and lowers token consumption by 80% to 87.5%. These findings highlight the necessity of integrating external control mechanisms when deploying LLMs in high-stakes medical follow-up scenarios. 
+ oai:arXiv.org:2512.18999v1 + cs.CL + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Jinyan Liu, Zikang Chen, Qinchuan Wang, Tan Xie, Heming Zheng, Xudong Lv - Data for Mathematical Copilots: Better Ways of Presenting Proofs for Machine Learning - https://arxiv.org/abs/2412.15184 - arXiv:2412.15184v2 Announce Type: replace -Abstract: The datasets and benchmarks commonly used to train and evaluate the mathematical capabilities of AI-based mathematical copilots (primarily large language models) exhibit several shortcomings and misdirections. These range from a restricted scope of mathematical complexity to limited fidelity in capturing aspects beyond the final, written proof (e.g. motivating the proof, or representing the thought processes leading to a proof). These issues are compounded by a dynamic reminiscent of Goodhart's law: as benchmark performance becomes the primary target for model development, the benchmarks themselves become less reliable indicators of genuine mathematical capability. We systematically explore these limitations and contend that enhancing the capabilities of large language models, or any forthcoming advancements in AI-based mathematical assistants (copilots or ``thought partners''), necessitates a course correction both in the design of mathematical datasets and the evaluation criteria of the models' mathematical ability. In particular, it is necessary for benchmarks to move beyond the existing result-based datasets that map theorem statements directly to proofs, and instead focus on datasets that translate the richer facets of mathematical research practice into data that LLMs can learn from. This includes benchmarks that supervise the proving process and the proof discovery process itself, and we advocate for mathematical dataset developers to consider the concept of "motivated proof", introduced by G. 
P\'olya in 1949, which can serve as a blueprint for datasets that offer a better proof learning signal, alleviating some of the mentioned limitations. - oai:arXiv.org:2412.15184v2 + ORPR: An OR-Guided Pretrain-then-Reinforce Learning Model for Inventory Management + https://arxiv.org/abs/2512.19001 + arXiv:2512.19001v1 Announce Type: new +Abstract: As the pursuit of synergy between Artificial Intelligence (AI) and Operations Research (OR) gains momentum in handling complex inventory systems, a critical challenge persists: how to effectively reconcile AI's adaptive perception with OR's structural rigor. To bridge this gap, we propose a novel OR-Guided "Pretrain-then-Reinforce" framework. To provide structured guidance, we propose a simulation-augmented OR model that generates high-quality reference decisions, implicitly capturing complex business constraints and managerial preferences. Leveraging these OR-derived decisions as foundational training labels, we design a domain-informed deep learning foundation model to establish foundational decision-making capabilities, followed by a reinforcement learning (RL) fine-tuning stage. Uniquely, we position RL as a deep alignment mechanism that enables the AI agent to internalize the optimality principles of OR, while simultaneously leveraging exploration for general policy refinement and allowing expert guidance for scenario-specific adaptation (e.g., promotional events). Validated through extensive numerical experiments and a field deployment at JD.com augmented by a Difference-in-Differences (DiD) analysis, our model significantly outperforms incumbent industrial practices, delivering real-world gains of a 5.27-day reduction in turnover and a 2.29% increase in in-stock rates, alongside a 29.95% decrease in holding costs. 
Contrary to the prevailing trend of brute-force model scaling, our study demonstrates that a lightweight, domain-informed model can deliver state-of-the-art performance and robust transferability when guided by structured OR logic. This approach offers a scalable and cost-effective paradigm for intelligent supply chain management, highlighting the value of deeply aligning AI with OR. + oai:arXiv.org:2512.19001v1 + cs.AI cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Simon Frieder, Jonas Bayer, Sam Looi, Jacob Loader, Julius Berner, Katherine M. Collins, Andr\'as Juh\'asz, Fabian Ruehle, Sean Welleck, Gabriel Poesia, Ryan-Rhys Griffiths, Adrian Weller, Anirudh Goyal, Cameron Freer, Thomas Lukasiewicz, Timothy Gowers + Lingjie Zhao, Xue Yu, Yongzhi Qi, Hao Hu, Jianshen Zhang, Yingzheng Ma, Shuyu Han, Wei Qi, Zuo-Jun Max Shen - Optimal Error Analysis of Channel Estimation for IRS-assisted MIMO Systems - https://arxiv.org/abs/2412.16827 - arXiv:2412.16827v2 Announce Type: replace -Abstract: As intelligent reflecting surface (IRS) has emerged as a new and promising technology capable of configuring the wireless environment favorably, channel estimation for IRS-assisted multiple-input multiple-output (MIMO) systems has garnered extensive attention in recent years. Despite the development of numerous algorithms to address this challenge, a comprehensive theoretical characterization of the optimal recovery error is still lacking. This paper aims to address this gap by providing theoretical guarantees in terms of stable recovery of channel matrices for noisy measurements. We begin by establishing the equivalence between IRS-assisted MIMO systems in the uplink scenario and a compact tensor train (TT)-based tensor-on-tensor (ToT) regression. Building on this equivalence, we then investigate the restricted isometry property (RIP) for complex-valued subgaussian measurements. 
Our analysis reveals that successful recovery hinges on the relationship between the number of user terminals and the number of time slots during which channel matrices remain invariant. Utilizing the RIP condition, we establish a theoretical upper bound on the recovery error for solutions to the constrained least-squares optimization problem, as well as a minimax lower bound for the considered model. Our analysis demonstrates that the recovery error decreases inversely with the number of time slots, and increases proportionally with the total number of unknown entries in the channel matrices, thereby quantifying the fundamental trade-offs in channel estimation accuracy. In addition, we explore a multi-hop IRS scheme and analyze the corresponding recovery errors. Finally, we have performed numerical experiments to support our theoretical findings. - oai:arXiv.org:2412.16827v2 + A Quantitative Entropy Power Inequality for Dependent Random Vectors + https://arxiv.org/abs/2512.19002 + arXiv:2512.19002v1 Announce Type: new +Abstract: The entropy power inequality for independent random vectors is a foundational result of information theory, with deep connections to probability and geometric functional analysis. Several extensions of the entropy power inequality have been developed for settings with dependence, including by Takano, Johnson, and Rioul. We extend these works by developing a quantitative version of the entropy power inequality for dependent random vectors. A notable consequence is that an entropy power inequality stated using conditional entropies holds for random vectors whose joint density is log-supermodular. 
+ oai:arXiv.org:2512.19002v1 cs.IT - eess.SP math.IT - Mon, 22 Dec 2025 00:00:00 -0500 - replace + math.PR + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhen Qin, Zhihui Zhu + Mokshay Madiman, James Melbourne, Cyril Roberto - Generating Completions for Broca's Aphasic Sentences Using Large Language Models - https://arxiv.org/abs/2412.17669 - arXiv:2412.17669v2 Announce Type: replace -Abstract: Broca's aphasia is a type of aphasia characterized by non-fluent, effortful and agrammatic speech production with relatively good comprehension. Since traditional aphasia treatment methods are often time-consuming, labour-intensive, and do not reflect real-world conversations, applying natural language processing based approaches such as Large Language Models (LLMs) could potentially contribute to improving existing treatment approaches. To address this issue, we explore the use of sequence-to-sequence LLMs for completing Broca's aphasic sentences. We first generate synthetic Broca's aphasic data using a rule-based system designed to mirror the linguistic characteristics of Broca's aphasic speech. Using this synthetic data (without authentic aphasic samples), we then fine-tune four pre-trained LLMs on the task of completing agrammatic sentences. We evaluate our fine-tuned models on both synthetic and authentic Broca's aphasic data. We demonstrate LLMs' capability for reconstructing agrammatic sentences, with the models showing improved performance with longer input utterances. Our result highlights the LLMs' potential in advancing communication aids for individuals with Broca's aphasia and possibly other clinical populations. 
- oai:arXiv.org:2412.17669v2 + Context-Aware Initialization for Reducing Generative Path Length in Diffusion Language Models + https://arxiv.org/abs/2512.19004 + arXiv:2512.19004v1 Announce Type: new +Abstract: Diffusion Large Language Models (DLLMs) enable fully parallel token decoding but often remain impractical at inference time due to the many denoising iterations required to refine an information-free, fully masked initialization into coherent text. Most existing acceleration methods focus on traversing this generative trajectory more efficiently via improved solvers or sampling strategies. We advance a complementary perspective: shorten the trajectory itself by starting closer to the target distribution through context-aware initialization. + We propose a training-free interface that injects prompt-conditioned priors from a lightweight auxiliary model into the diffusion initialization, and instantiate it with two mechanisms: discrete token injection and representation-level embedding interpolation. Because injected priors can be imperfect and unmask-only decoding can over-commit early, we also introduce a simple confidence-based remasking mechanism as a form of prior skepticism. Preliminary evidence on GSM8K suggests that context-aware initialization can substantially reduce denoising iterations (about 35\% fewer function evaluations in our setting), while also exposing a key open challenge: naive warm-starting can degrade final accuracy relative to strong diffusion baselines. We use these findings to motivate a research agenda around calibration, revision mechanisms, and representation alignment for reliable warm-started diffusion decoding. 
+ oai:arXiv.org:2512.19004v1 cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1109/JBHI.2025.3639109 - Sijbren van Vaals, Yevgen Matusevych, Frank Tsiwah - - - A Formal Correctness Proof of Edmonds' Blossom Shrinking Algorithm - https://arxiv.org/abs/2412.20878 - arXiv:2412.20878v3 Announce Type: replace -Abstract: We present the first formal correctness proof of Edmonds' blossom shrinking algorithm for maximum cardinality matching in general graphs. We focus on formalising the mathematical structures and properties that allow the algorithm to run in worst-case polynomial running time. We formalise Berge's lemma, blossoms and their properties, and a mathematical model of the algorithm, showing that it is totally correct. We provide the first detailed proofs of many of the facts underlying the algorithm's correctness. - oai:arXiv.org:2412.20878v3 - cs.LO - cs.DS - Mon, 22 Dec 2025 00:00:00 -0500 - replace + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Mohammad Abdulaziz, Kurt Mehlhorn + Tongyuan Miao, Gary Huang, Kai Jun Han, Annie Jiang - Pairwise Elimination with Instance-Dependent Guarantees for Bandits with Cost Subsidy - https://arxiv.org/abs/2501.10290 - arXiv:2501.10290v3 Announce Type: replace -Abstract: Multi-armed bandits (MAB) are commonly used in sequential online decision-making when the reward of each decision is an unknown random variable. In practice however, the typical goal of maximizing total reward may be less important than minimizing the total cost of the decisions taken, subject to a reward constraint. For example, we may seek to make decisions that have at least the reward of a reference ``default'' decision, with as low a cost as possible. This problem was recently introduced in the Multi-Armed Bandits with Cost Subsidy (MAB-CS) framework. 
MAB-CS is broadly applicable to problem domains where a primary metric (cost) is constrained by a secondary metric (reward), and the rewards are unknown. In our work, we address variants of MAB-CS including ones with reward constrained by the reward of a known reference arm or by the subsidized best reward. We introduce the Pairwise-Elimination (PE) algorithm for the known reference arm variant and generalize PE to PE-CS for the subsidized best reward variant. Our instance-dependent analysis of PE and PE-CS reveals that both algorithms have an order-wise logarithmic upper bound on Cost and Quality Regret, making our policies the first with such a guarantee. Moreover, by comparing our upper and lower bound results we establish that PE is order-optimal for all known reference arm problem instances. Finally, experiments are conducted using the MovieLens 25M and Goodreads datasets for both PE and PE-CS revealing the effectiveness of PE and the superior balance between performance and reliability offered by PE-CS compared to baselines from the literature. - oai:arXiv.org:2501.10290v3 cs.LG Mon, 22 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Ishank Juneja, Carlee Joe-Wong, Osman Ya\u{g}an + Quantum-Resistant Cryptographic Models for Next-Gen Cybersecurity + https://arxiv.org/abs/2512.19005 + arXiv:2512.19005v1 Announce Type: new +Abstract: The development of large quantum computers poses a looming threat, as they are highly likely to break today's widely used security protocols by applying Shor's and Grover's algorithms. To counter this threat, quantum-resistant cryptographic systems, also known as post-quantum cryptography (PQC), are being formulated to protect the cybersecurity systems of the future. This paper presents the state of the art in designing, realizing, and testing the security of robust quantum-resistant algorithms, with attention to lattice-based, code-based, multivariate polynomial, and hash-based cryptography. 
We discuss their resistance to classical and quantum attackers, their scalability properties in distributed systems, and their deployment in practice (secure communications, blockchain, cloud computing infrastructures). We also study a hybrid cryptographic model that integrates an efficient classical cryptographic scheme with a quantum-resilient scheme, achieving a backward-compatible solution while simultaneously improving forward-security properties. Experimental results show that the proposed framework delivers strengthened security at a reasonable computational footprint, making it well suited to the cybersecurity systems of the future. + oai:arXiv.org:2512.19005v1 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Navin Chhibber, Amber Rastogi, Ankur Mahida, Vatsal Gupta, Piyush Ranjan - Towards Human-Guided, Data-Centric LLM Co-Pilots - https://arxiv.org/abs/2501.10321 - arXiv:2501.10321v3 Announce Type: replace -Abstract: Machine learning (ML) has the potential to revolutionize various domains, but its adoption is often hindered by the disconnect between the needs of domain experts and translating these needs into robust and valid ML tools. Despite recent advances in LLM-based co-pilots to democratize ML for non-technical domain experts, these systems remain predominantly focused on model-centric aspects while overlooking critical data-centric challenges. This limitation is problematic in complex real-world settings where raw data often contains complex issues, such as missing values, label noise, and domain-specific nuances requiring tailored handling. To address this we introduce CliMB-DC, a human-guided, data-centric framework for LLM co-pilots that combines advanced data-centric tools with LLM-driven reasoning to enable robust, context-aware data processing. 
At its core, CliMB-DC introduces a novel, multi-agent reasoning system that combines a strategic coordinator for dynamic planning and adaptation with a specialized worker agent for precise execution. Domain expertise is then systematically incorporated to guide the reasoning process using a human-in-the-loop approach. To guide development, we formalize a taxonomy of key data-centric challenges that co-pilots must address. Thereafter, to address the dimensions of the taxonomy, we integrate state-of-the-art data-centric tools into an extensible, open-source architecture, facilitating the addition of new tools from the research community. Empirically, using real-world healthcare datasets we demonstrate CliMB-DC's ability to transform uncurated datasets into ML-ready formats, significantly outperforming existing co-pilot baselines for handling data-centric challenges. CliMB-DC promises to empower domain experts from diverse domains -- healthcare, finance, social sciences and more -- to actively participate in driving real-world impact using ML. - oai:arXiv.org:2501.10321v3 + The 6th International Verification of Neural Networks Competition (VNN-COMP 2025): Summary and Results + https://arxiv.org/abs/2512.19007 + arXiv:2512.19007v1 Announce Type: new +Abstract: This report summarizes the 6th International Verification of Neural Networks Competition (VNN-COMP 2025), held as a part of the 8th International Symposium on AI Verification (SAIV), that was collocated with the 37th International Conference on Computer-Aided Verification (CAV). VNN-COMP is held annually to facilitate the fair and objective comparison of state-of-the-art neural network verification tools, encourage the standardization of tool interfaces, and bring together the neural network verification community. 
To this end, standardized formats for networks (ONNX) and specification (VNN-LIB) were defined, tools were evaluated on equal-cost hardware (using an automatic evaluation pipeline based on AWS instances), and tool parameters were chosen by the participants before the final test sets were made public. In the 2025 iteration, 8 teams participated on a diverse set of 16 regular and 9 extended benchmarks. This report summarizes the rules, benchmarks, participating tools, results, and lessons learned from this iteration of this competition. + oai:arXiv.org:2512.19007v1 cs.LG - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 - replace + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Evgeny Saveliev, Jiashuo Liu, Nabeel Seedat, Anders Boyd, Mihaela van der Schaar + Konstantin Kaulen, Tobias Ladner, Stanley Bak, Christopher Brix, Hai Duong, Thomas Flinkow, Taylor T. Johnson, Lukas Koller, Edoardo Manino, ThanhVu H Nguyen, Haoze Wu - SpikeDet: Better Firing Patterns for Accurate and Energy-Efficient Object Detection with Spiking Neural Networks - https://arxiv.org/abs/2501.15151 - arXiv:2501.15151v4 Announce Type: replace -Abstract: Spiking Neural Networks (SNNs) are the third generation of neural networks. They have gained widespread attention in object detection due to their low power consumption and biological interpretability. However, existing SNN-based object detection methods suffer from local firing saturation, where adjacent neurons concurrently reach maximum firing rates, especially in object-centric regions. This abnormal neuron firing pattern reduces the feature discrimination capability and detection accuracy, while also increasing the firing rates that prevent SNNs from achieving their potential energy efficiency. To address this problem, we propose SpikeDet, a novel spiking object detector that optimizes firing patterns for accurate and energy-efficient detection. 
Specifically, we design a spiking backbone network, MDSNet, which effectively adjusts the membrane synaptic input distribution at each layer, achieving better neuron firing patterns during spiking feature extraction. For the neck, to better utilize and preserve these high-quality backbone features, we introduce the Spiking Multi-direction Fusion Module (SMFM), which realizes multi-direction fusion of spiking features, enhancing the multi-scale detection capability of the model. Furthermore, we propose the Local Firing Saturation Index (LFSI) to quantitatively measure local firing saturation. Experimental results validate the effectiveness of our method, with SpikeDet achieving superior performance. On the COCO 2017 dataset, it achieves 52.2% AP, outperforming previous SNN-based methods by 3.3% AP while requiring only half the power consumption. On object detection sub-tasks, including event-based GEN1, underwater URPC 2019, low-light ExDARK, and dense scene CrowdHuman datasets, SpikeDet also achieves the best performance. - oai:arXiv.org:2501.15151v4 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Randomized time stepping of nonlinearly parametrized solutions of evolution problems + https://arxiv.org/abs/2512.19009 + arXiv:2512.19009v1 Announce Type: new +Abstract: The Dirac-Frenkel variational principle is a widely used building block for using nonlinear parametrizations in the context of model reduction and numerically solving partial differential equations; however, it typically leads to time-dependent least-squares problems that are poorly conditioned. This work introduces a randomized time stepping scheme that solves at each time step a low-dimensional, random projection of the parameter vector via sketching. The sketching has a regularization effect that leads to better conditioned least-squares problems and at the same time reduces the number of unknowns that need to be solved for at each time step. 
Numerical experiments with benchmark examples demonstrate that randomized time stepping via sketching achieves competitive accuracy and outperforms standard regularization in terms of runtime efficiency. + oai:arXiv.org:2512.19009v1 + math.NA + cs.NA + math.DS + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yimeng Fan, Changsong Liu, Mingyang Li, Dongze Liu, Yanyan Liu, Wei Zhang - - - Weihrauch problems as containers - https://arxiv.org/abs/2501.17250 - arXiv:2501.17250v3 Announce Type: replace -Abstract: We note that Weihrauch problems can be regarded as containers over the category of projective represented spaces and that Weihrauch reductions correspond exactly to container morphisms. We also show that Bauer's extended Weihrauch degrees and the posetal reflection of containers over partition assemblies are equivalent. Using this characterization, we show how a number of operators over Weihrauch degrees, such as the composition product, also arise naturally from the abstract theory of polynomial functors. - oai:arXiv.org:2501.17250v3 - cs.LO - math.LO - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/publicdomain/zero/1.0/ - C\'ecilia Pradic, Ian Price + Yijun Dong, Paul Schwerdtner, Benjamin Peherstorfer - Regularized Langevin Dynamics for Combinatorial Optimization - https://arxiv.org/abs/2502.00277 - arXiv:2502.00277v4 Announce Type: replace -Abstract: This work proposes a simple yet effective sampling framework for combinatorial optimization (CO). Our method builds on discrete Langevin dynamics (LD), an efficient gradient-guided generative paradigm. However, we observe that directly applying LD often leads to limited exploration. To overcome this limitation, we propose the Regularized Langevin Dynamics (RLD), which enforces an expected distance between the sampled and current solutions, effectively avoiding local minima. 
We develop two CO solvers on top of RLD, one based on simulated annealing (SA), and the other one based on neural network (NN). Empirical results on three classic CO problems demonstrate that both of our methods can achieve comparable or better performance against the previous state-of-the-art (SOTA) SA- and NN-based solvers. In particular, our SA algorithm reduces the runtime of the previous SOTA SA method by up to 80\%, while achieving equal or superior performance. In summary, RLD offers a promising framework for enhancing both traditional heuristics and NN models to solve CO problems. Our code is available at https://github.com/Shengyu-Feng/RLD4CO. - oai:arXiv.org:2502.00277v4 + Efficient Jailbreak Mitigation Using Semantic Linear Classification in a Multi-Staged Pipeline + https://arxiv.org/abs/2512.19011 + arXiv:2512.19011v1 Announce Type: new +Abstract: Prompt injection and jailbreaking attacks pose persistent security challenges to large language model (LLM)-based systems. We present an efficient and systematically evaluated defense architecture that mitigates these threats through a lightweight, multi-stage pipeline. Its core component is a semantic filter based on text normalization, TF-IDF representations, and a Linear SVM classifier. Despite its simplicity, this module achieves 93.4% accuracy and 96.5% specificity on held-out data, substantially reducing attack throughput while incurring negligible computational overhead. + Building on this efficient foundation, the full pipeline integrates complementary detection and mitigation mechanisms that operate at successive stages, providing strong robustness with minimal latency. In comparative experiments, our SVM-based configuration improves overall accuracy from 35.1% to 93.4% while reducing average time to completion from approximately 450s to 47s, yielding over 10 times lower latency than ShieldGemma. 
These results demonstrate that the proposed design simultaneously advances defensive precision and efficiency, addressing a core limitation of current model-based moderators. + Evaluation across a curated corpus of over 30,000 labeled prompts, including benign, jailbreak, and application-layer injections, confirms that staged, resource-efficient defenses can robustly secure modern LLM-driven applications. + oai:arXiv.org:2512.19011v1 + cs.CR + cs.AI + cs.CL cs.LG - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - International conference on machine learning, 2025, PMLR - Shengyu Feng, Yiming Yang + Akshaj Prashanth Rao, Advait Singh, Saumya Kumaar Saksena, Dhruv Kumar - Towards Safer Chatbots: Automated Policy Compliance Evaluation of Custom GPTs - https://arxiv.org/abs/2502.01436 - arXiv:2502.01436v3 Announce Type: replace -Abstract: User-configured chatbots built on top of large language models are increasingly available through centralized marketplaces such as OpenAI's GPT Store. While these platforms enforce usage policies intended to prevent harmful or inappropriate behavior, the scale and opacity of customized chatbots make systematic policy enforcement challenging. As a result, policy-violating chatbots continue to remain publicly accessible despite existing review processes. This paper presents a fully automated method for evaluating the compliance of Custom GPTs with its marketplace usage policy using black-box interaction. The method combines large-scale GPT discovery, policy-driven red-teaming prompts, and automated compliance assessment using an LLM-as-a-judge. We focus on three policy-relevant domains explicitly addressed in OpenAI's usage policies: Romantic, Cybersecurity, and Academic GPTs. We validate our compliance assessment component against a human-annotated ground-truth dataset, achieving an F1 score of 0.975 for binary policy violation detection. 
We then apply the method in a large-scale empirical study of 782 Custom GPTs retrieved from the GPT Store. The results show that 58.7% of the evaluated GPTs exhibit at least one policy-violating response, with substantial variation across policy domains. A comparison with the base models (GPT-4 and GPT-4o) indicates that most violations originate from model-level behavior, while customization tends to amplify these tendencies rather than create new failure modes. Our findings reveal limitations in current review mechanisms for user-configured chatbots and demonstrate the feasibility of scalable, behavior-based policy compliance evaluation. - oai:arXiv.org:2502.01436v3 cs.CL cs.AI Mon, 22 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by-nc-sa/4.0/ David Rodriguez, William Seymour, Jose M. Del Alamo, Jose Such + DramaBench: A Six-Dimensional Evaluation Framework for Drama Script Continuation + https://arxiv.org/abs/2512.19012 + arXiv:2512.19012v1 Announce Type: new +Abstract: Drama script continuation requires models to maintain character consistency, advance plot coherently, and preserve dramatic structure: capabilities that existing benchmarks fail to evaluate comprehensively. We present DramaBench, the first large-scale benchmark for evaluating drama script continuation across six independent dimensions: Format Standards, Narrative Efficiency, Character Consistency, Emotional Depth, Logic Consistency, and Conflict Handling. Our framework combines rule-based analysis with LLM-based labeling and statistical metrics, ensuring objective and reproducible evaluation. We conduct comprehensive evaluation of 8 state-of-the-art language models on 1,103 scripts (8,824 evaluations total), with rigorous statistical significance testing (252 pairwise comparisons, 65.9% significant) and human validation (188 scripts, substantial agreement on 3/5 dimensions). Our ablation studies confirm all six dimensions capture independent quality aspects (mean |r| = 0.020). DramaBench provides actionable, dimension-specific feedback for model improvement and establishes a rigorous standard for creative writing evaluation. 
+ oai:arXiv.org:2512.19012v1 cs.CL - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - David Rodriguez, William Seymour, Jose M. Del Alamo, Jose Such + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Shijian Ma, Yunqi Huang, Yan Lin - 3D Cell Oversegmentation Correction via Geo-Wasserstein Divergence - https://arxiv.org/abs/2502.01890 - arXiv:2502.01890v4 Announce Type: replace -Abstract: 3D cell segmentation methods are often hindered by \emph{oversegmentation}, where a single cell is incorrectly split into multiple fragments. This degrades the final segmentation quality and is notoriously difficult to resolve, as oversegmentation errors often resemble natural gaps between adjacent cells. Our work makes two key contributions. First, for 3D cell segmentation, we are the first work to formulate oversegmentation as a concrete problem and propose a geometric framework to identify and correct these errors. Our approach builds a pre-trained classifier using both 2D geometric and 3D topological features extracted from flawed 3D segmentation results. Second, we introduce a novel metric, Geo-Wasserstein divergence, to quantify changes in 2D geometries. This captures the evolving trends of cell mask shape in a geometry-aware manner. We validate our method through extensive experiments on in-domain plant datasets, including both synthesized and real oversegmented cases, as well as on out-of-domain animal datasets to demonstrate transfer learning performance. An ablation study further highlights the contribution of the Geo-Wasserstein divergence. A clear pipeline is provided for end-users to build pre-trained models to any labeled dataset. 
- oai:arXiv.org:2502.01890v4 - cs.CV - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + DREAM: Dynamic Red-teaming across Environments for AI Models + https://arxiv.org/abs/2512.19016 + arXiv:2512.19016v1 Announce Type: new +Abstract: Large Language Models (LLMs) are increasingly used in agentic systems, where their interactions with diverse tools and environments create complex, multi-stage safety challenges. However, existing benchmarks mostly rely on static, single-turn assessments that miss vulnerabilities from adaptive, long-chain attacks. To fill this gap, we introduce DREAM, a framework for systematic evaluation of LLM agents against dynamic, multi-stage attacks. At its core, DREAM uses a Cross-Environment Adversarial Knowledge Graph (CE-AKG) to maintain stateful, cross-domain understanding of vulnerabilities. This graph guides a Contextualized Guided Policy Search (C-GPS) algorithm that dynamically constructs attack chains from a knowledge base of 1,986 atomic actions across 349 distinct digital environments. Our evaluation of 12 leading LLM agents reveals a critical vulnerability: these attack chains succeed in over 70% of cases for most models, showing the power of stateful, cross-environment exploits. Through analysis of these failures, we identify two key weaknesses in current agents: contextual fragility, where safety behaviors fail to transfer across environments, and an inability to track long-term malicious intent. Our findings also show that traditional safety measures, such as initial defense prompts, are largely ineffective against attacks that build context over multiple interactions. To advance agent safety research, we release DREAM as a tool for evaluating vulnerabilities and developing more robust defenses. 
+ oai:arXiv.org:2512.19016v1 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Peter Chen, Bryan Chang, Olivia A Creasey, Julie Beth Sneddon, Zev J Gartner, Yining Liu + Liming Lu, Xiang Gu, Junyu Huang, Jiawei Du, Yunhuai Liu, Yongbin Zhou, Shuchao Pang - DHP: Discrete Hierarchical Planning for Hierarchical Reinforcement Learning Agents - https://arxiv.org/abs/2502.01956 - arXiv:2502.01956v3 Announce Type: replace -Abstract: Hierarchical Reinforcement Learning (HRL) agents often struggle with long-horizon visual planning due to their reliance on error-prone distance metrics. We propose Discrete Hierarchical Planning (DHP), a method that replaces continuous distance estimates with discrete reachability checks to evaluate subgoal feasibility. DHP recursively constructs tree-structured plans by decomposing long-term goals into sequences of simpler subtasks, using a novel advantage estimation strategy that inherently rewards shorter plans and generalizes beyond training depths. In addition, to address the data efficiency challenge, we introduce an exploration strategy that generates targeted training examples for the planning modules without needing expert data. Experiments in 25-room navigation environments demonstrate a 100% success rate (vs. 90% baseline). We also present an offline variant that achieves state-of-the-art results on OGBench benchmarks, with up to 71% absolute gains on giant HumanoidMaze tasks, demonstrating our core contributions are architecture-agnostic. The method also generalizes to momentum-based control tasks and requires only log N steps for replanning. Theoretical analysis and ablations validate our design choices. 
- oai:arXiv.org:2502.01956v3 - cs.RO - cs.AI - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + PEAK: A Performance Engineering AI-Assistant for GPU Kernels Powered by Natural Language Transformations + https://arxiv.org/abs/2512.19018 + arXiv:2512.19018v1 Announce Type: new +Abstract: Advancements in large language models (LLMs) are showing promising impact in software development and programming assistance. However, these models struggle when operating on low-level backend code. This challenge is exacerbated in the domain of GPU kernels, where performance-critical details are coupled to rapidly evolving hardware characteristics and available code examples are sparse. + In this work, we introduce PEAK, a Performance Engineering AI-Assistant for GPU Kernels powered by natural language transformations. PEAK utilizes the key insight that iterative code transformations (optimizations) can straightforwardly be written in natural language, and then carried out by LLMs. Thus, these transformations can be rapidly developed, encoding general portable optimizations, but also easily specialized to specific GPU devices and even kernels. These natural transformations are supported by a modular and extensible infrastructure that additionally performs validation and performance evaluation. We demonstrate the flexibility of PEAK by instantiating it for three backends, CUDA, HIP, and HLSL, and create 16 natural transformations for optimizing matrix multiplication kernels. We show that our resulting implementations are competitive with vendor libraries when available, and for HLSL (without a library) our implementations match the hardware documented FLOPS. PEAK allows the fine-grained exploration of several research questions around how LLMs behave in this domain, including characterizing transformations and their errors; and how performance evolves along optimization sequences. 
PEAK provides an interface that can either be utilized by performance engineers to improve productivity, or driven completely autonomously (e.g., by an AI agent), providing a forward-compatible design that can continue to improve with advances in AI capabilities. + oai:arXiv.org:2512.19018v1 + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Shashank Sharma, Janina Hoffmann, Vinay Namboodiri + Muhammad Usman Tariq, Abhinav Jangda, Angelica Moreira, Madan Musuvathi, Tyler Sorensen - Generating Samples to Probe Trained Models - https://arxiv.org/abs/2502.06658 - arXiv:2502.06658v3 Announce Type: replace -Abstract: There is a growing need for investigating how machine learning models operate. With this work, we aim to understand trained machine learning models by questioning their data preferences. We propose a mathematical framework that allows us to probe trained models and identify their preferred samples in various scenarios including prediction-risky, parameter-sensitive, or model-contrastive samples. To showcase our framework, we pose these queries to a range of models trained on a range of classification and regression tasks, and receive answers in the form of generated data. - oai:arXiv.org:2502.06658v3 + Optimizer Dynamics at the Edge of Stability with Differential Privacy + https://arxiv.org/abs/2512.19019 + arXiv:2512.19019v1 Announce Type: new +Abstract: Deep learning models can reveal sensitive information about individual training examples, and while differential privacy (DP) provides guarantees restricting such leakage, it also alters optimization dynamics in poorly understood ways. We study the training dynamics of neural networks under DP by comparing Gradient Descent (GD), and Adam to their privacy-preserving variants. 
Prior work shows that these optimizers exhibit distinct stability dynamics: full-batch methods train at the Edge of Stability (EoS), while mini-batch and adaptive methods exhibit analogous edge-of-stability behavior. At these regimes, the training loss and the sharpness--the maximum eigenvalue of the training loss Hessian--exhibit certain characteristic behavior. In DP training, per-example gradient clipping and Gaussian noise modify the update rule, and it is unclear whether these stability patterns persist. We analyze how clipping and noise change sharpness and loss evolution and show that while DP generally reduces the sharpness and can prevent optimizers from fully reaching the classical stability thresholds, patterns from EoS and analogous adaptive methods stability regimes persist, with the largest learning rates and largest privacy budgets approaching, and sometimes exceeding, these thresholds. These findings highlight the unpredictability introduced by DP in neural network optimization. + oai:arXiv.org:2512.19019v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Eren Mehmet K{\i}ral, Nur\c{s}en Ayd{\i}n, \c{S}. \.Ilker Birbil + Ayana Hussain, Ricky Fang - On Agnostic PAC Learning in the Small Error Regime - https://arxiv.org/abs/2502.09496 - arXiv:2502.09496v2 Announce Type: replace -Abstract: Binary classification in the classic PAC model exhibits a curious phenomenon: Empirical Risk Minimization (ERM) learners are suboptimal in the realizable case yet optimal in the agnostic case. Roughly speaking, this owes itself to the fact that non-realizable distributions $\mathcal{D}$ are simply more difficult to learn than realizable distributions -- even when one discounts a learner's error by $\mathrm{err}(h^*_{\mathcal{D}})$, the error of the best hypothesis in $\mathcal{H}$ for $\mathcal{D}$. 
Thus, optimal agnostic learners are permitted to incur excess error on (easier-to-learn) distributions $\mathcal{D}$ for which $\tau = \mathrm{err}(h^*_{\mathcal{D}})$ is small. - Recent work of Hanneke, Larsen, and Zhivotovskiy (FOCS `24) addresses this shortcoming by including $\tau$ itself as a parameter in the agnostic error term. In this more fine-grained model, they demonstrate tightness of the error lower bound $\tau + \Omega \left(\sqrt{\frac{\tau (d + \log(1 / \delta))}{m}} + \frac{d + \log(1 / \delta)}{m} \right)$ in a regime where $\tau > d/m$, and leave open the question of whether there may be a higher lower bound when $\tau \approx d/m$, with $d$ denoting $\mathrm{VC}(\mathcal{H})$. In this work, we resolve this question by exhibiting a learner which achieves error $c \cdot \tau + O \left(\sqrt{\frac{\tau (d + \log(1 / \delta))}{m}} + \frac{d + \log(1 / \delta)}{m} \right)$ for a constant $c \leq 2.1$, thus matching the lower bound when $\tau \approx d/m$. Further, our learner is computationally efficient and is based upon careful aggregations of ERM classifiers, making progress on two other questions of Hanneke, Larsen, and Zhivotovskiy (FOCS `24). We leave open the interesting question of whether our approach can be refined to lower the constant from 2.1 to 1, which would completely settle the complexity of agnostic learning. - oai:arXiv.org:2502.09496v2 + CETCAM: Camera-Controllable Video Generation via Consistent and Extensible Tokenization + https://arxiv.org/abs/2512.19020 + arXiv:2512.19020v1 Announce Type: new +Abstract: Achieving precise camera control in video generation remains challenging, as existing methods often rely on camera pose annotations that are difficult to scale to large and dynamic datasets and are frequently inconsistent with depth estimation, leading to train-test discrepancies. 
We introduce CETCAM, a camera-controllable video generation framework that eliminates the need for camera annotations through a consistent and extensible tokenization scheme. CETCAM leverages recent advances in geometry foundation models, such as VGGT, to estimate depth and camera parameters and converts them into unified, geometry-aware tokens. These tokens are seamlessly integrated into a pretrained video diffusion backbone via lightweight context blocks. Trained in two progressive stages, CETCAM first learns robust camera controllability from diverse raw video data and then refines fine-grained visual quality using curated high-fidelity datasets. Extensive experiments across multiple benchmarks demonstrate state-of-the-art geometric consistency, temporal stability, and visual realism. Moreover, CETCAM exhibits strong adaptability to additional control modalities, including inpainting and layout control, highlighting its flexibility beyond camera control. The project page is available at https://sjtuytc.github.io/CETCam_project_page.github.io/. + oai:arXiv.org:2512.19020v1 + cs.CV cs.LG - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Julian Asilis, Mikael M{\o}ller H{\o}gsgaard, Grigoris Velegkas + Zelin Zhao, Xinyu Gong, Bangya Liu, Ziyang Song, Jun Zhang, Suhui Wu, Yongxin Chen, Hao Zhang - GraphCompNet: A Position-Aware Model for Predicting and Compensating Shape Deviations in 3D Printing - https://arxiv.org/abs/2502.09652 - arXiv:2502.09652v3 Announce Type: replace -Abstract: Shape deviation modeling and compensation in additive manufacturing are pivotal for achieving high geometric accuracy and enabling industrial-scale production. Critical challenges persist, including generalizability across complex geometries and adaptability to position-dependent variations in batch production. 
Traditional methods of controlling geometric deviations often rely on complex parameterized models and repetitive metrology, which can be time-consuming and are not applicable to batch production. In this paper, we present a novel, process-agnostic approach to address the challenge of ensuring geometric precision and accuracy in position-dependent AM production. The proposed GraphCompNet is a novel computational framework integrating graph-based neural networks with a GAN-inspired training paradigm. The framework leverages point cloud representations and dynamic graph convolutional neural networks (DGCNNs) to model intricate geometries while incorporating position-specific thermal and mechanical variations. A two-stage adversarial training process iteratively refines compensated designs using a compensator-predictor architecture, enabling real-time feedback and optimization. Experimental validation across various shapes and positions demonstrates the framework's ability to predict deviations in freeform geometries and adapt to position-dependent batch production conditions, significantly improving compensation accuracy (35 to 65 percent) across the entire printing space, addressing position-dependent variabilities within the print chamber. The proposed method advances the development of a Digital Twin for AM, offering scalable, real-time monitoring and compensation capabilities.
- oai:arXiv.org:2502.09652v3
cs.CV
cs.LG
Mon, 22 Dec 2025 00:00:00 -0500
replace
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
10.1109/TASE.2025.3636394
IEEE Transactions on Automation Science and Engineering 2025
Juheon Lee, Lei (Rachel) Chen, Juan Carlos Catana, Hui Wang, Jun Zeng
+ VLNVerse: A Benchmark for Vision-Language Navigation with Versatile, Embodied, Realistic Simulation and Evaluation
+ https://arxiv.org/abs/2512.19021
+ arXiv:2512.19021v1 Announce Type: new
+Abstract: Despite remarkable progress in Vision-Language Navigation (VLN), existing benchmarks remain confined to fixed, small-scale datasets with naive physical simulation. These shortcomings limit the insight that the benchmarks provide into sim-to-real generalization, and create a significant research gap. 

Furthermore, task fragmentation prevents unified, shared progress in the area, while limited data scales fail to meet the demands of modern LLM-based pretraining. To overcome these limitations, we introduce VLNVerse: a new large-scale, extensible benchmark designed for Versatile, Embodied, Realistic Simulation, and Evaluation. VLNVerse redefines VLN as a scalable, full-stack embodied AI problem. Its Versatile nature unifies previously fragmented tasks into a single framework and provides an extensible toolkit for researchers. Its Embodied design moves beyond intangible, teleporting "ghost" agents to agents that support full kinematics in a Realistic Simulation powered by a robust physics engine. We leverage the scale and diversity of VLNVerse to conduct a comprehensive Evaluation of existing methods, from classic models to MLLM-based agents. We also propose a novel unified multi-task model capable of addressing all tasks within the benchmark. VLNVerse aims to narrow the gap between simulated navigation and real-world generalization, providing the community with a vital tool to boost research towards scalable, general-purpose embodied locomotion agents. 

+ oai:arXiv.org:2512.19021v1 cs.CV - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1109/TASE.2025.3636394 - IEEE Transactions on Automation Science and Engineering 2025 - Juheon Lee (Rachel), Lei (Rachel), Chen, Juan Carlos Catana, Hui Wang, Jun Zeng + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Sihao Lin, Zerui Li, Xunyi Zhao, Gengze Zhou, Liuyi Wang, Rong Wei, Rui Tang, Juncheng Li, Hanqing Wang, Jiangmiao Pang, Anton van den Hengel, Jiajun Liu, Qi Wu - Preconditioned Inexact Stochastic ADMM for Deep Model - https://arxiv.org/abs/2502.10784 - arXiv:2502.10784v5 Announce Type: replace -Abstract: The recent advancement of foundation models (FMs) has brought about a paradigm shift, revolutionizing various sectors worldwide. The popular optimizers used to train these models are stochastic gradient descent-based algorithms, which face inherent limitations, such as slow convergence and stringent assumptions for convergence. In particular, data heterogeneity arising from distributed settings poses significant challenges to their theoretical and numerical performance. This paper develops an algorithm, PISA (Preconditioned Inexact Stochastic Alternating Direction Method of Multipliers). Grounded in rigorous theoretical guarantees, the algorithm converges under the sole assumption of Lipschitz continuity of the gradient on a bounded region, thereby removing the need for other conditions commonly imposed by stochastic methods. This capability enables the proposed algorithm to tackle the challenge of data heterogeneity effectively. Moreover, the algorithmic architecture enables scalable parallel computing and supports various preconditions, such as second-order information, second moment, and orthogonalized momentum by Newton-Schulz iterations. Incorporating the latter two preconditions in PISA yields two computationally efficient variants: SISA and NSISA. 
Comprehensive experimental evaluations for training or fine-tuning diverse deep models, including vision models, large language models, reinforcement learning models, generative adversarial networks, and recurrent neural networks, demonstrate superior numerical performance of SISA and NSISA compared to various state-of-the-art optimizers. - oai:arXiv.org:2502.10784v5 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Steering Vision-Language Pre-trained Models for Incremental Face Presentation Attack Detection + https://arxiv.org/abs/2512.19022 + arXiv:2512.19022v1 Announce Type: new +Abstract: Face Presentation Attack Detection (PAD) demands incremental learning (IL) to combat evolving spoofing tactics and domains. Privacy regulations, however, forbid retaining past data, necessitating rehearsal-free IL (RF-IL). Vision-Language Pre-trained (VLP) models, with their prompt-tunable cross-modal representations, enable efficient adaptation to new spoofing styles and domains. Capitalizing on this strength, we propose \textbf{SVLP-IL}, a VLP-based RF-IL framework that balances stability and plasticity via \textit{Multi-Aspect Prompting} (MAP) and \textit{Selective Elastic Weight Consolidation} (SEWC). MAP isolates domain dependencies, enhances distribution-shift sensitivity, and mitigates forgetting by jointly exploiting universal and domain-specific cues. SEWC selectively preserves critical weights from previous tasks, retaining essential knowledge while allowing flexibility for new adaptations. Comprehensive experiments across multiple PAD benchmarks show that SVLP-IL significantly reduces catastrophic forgetting and enhances performance on unseen domains. SVLP-IL offers a privacy-compliant, practical solution for robust lifelong PAD deployment in RF-IL settings. 
+ oai:arXiv.org:2512.19022v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shenglong Zhou, Ouya Wang, Ziyan Luo, Yongxu Zhu, Geoffrey Ye Li - - - On the Effect of Sampling Diversity in Scaling LLM Inference - https://arxiv.org/abs/2502.11027 - arXiv:2502.11027v4 Announce Type: replace -Abstract: Large language model (LLM) scaling inference is key to unlocking greater performance, and leveraging diversity has proven an effective way to enhance it. Motivated by the observed relationship between solution accuracy and meaningful response diversity, we systematically study the effect of prompt diversity in scaling inference. We theoretically explain why diversified sampling improves Best-of-N scaling, showing that responses generated from diverse prompts after Best-of-N selection exhibit significantly lower error rates than those produced from stationary prompts. Building on this analysis, we derive a diversity-fidelity trade-off principle, that guides the design of sampling strategies introducing diversity. From this guidance, we instantiate a family of effective perturbation styles. We theoretically and empirically characterize when diversified exploration remains effective, demonstrating that it works under a variety of conditions, and we further show that under majority voting, diversity may vanish. Finally, we systematically evaluate how effective sampling diversity is and show that, when applied appropriately in different contexts, it yields relative gains of 10.8% in EM@100 for reasoning, 9.6% for mathematics, and 9.5% in Pass@100 for code generation. Overall, this work provides a systematic analysis that offers a theoretical and empirical foundation for understanding how sampling diversity affects LLM inference-time scaling. 
- oai:arXiv.org:2502.11027v4 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Tianchun Wang, Zichuan Liu, Yuanzhou Chen, Jonathan Light, Weiyang Liu, Haifeng Chen, Xiang Zhang, Wei Cheng + Haoze Li, Jie Zhang, Guoying Zhao, Stephen Lin, Shiguang Shan - Collaborative Object Handover in a Robot Crafting Assistant - https://arxiv.org/abs/2502.19991 - arXiv:2502.19991v2 Announce Type: replace -Abstract: Robots are increasingly working alongside people, delivering food to patrons in restaurants or helping workers on assembly lines. These scenarios often involve object handovers between the person and the robot. To achieve safe and efficient human-robot collaboration (HRC), it is important to incorporate human context in a robot's handover strategies. We develop a collaborative handover model trained on human teleoperation data collected in a naturalistic crafting task. To evaluate its performance, we conduct cross-validation experiments on the training dataset as well as a user study in the same HRC crafting task. The handover episodes and user perceptions of the autonomous handover policy were compared with those of the human teleoperated handovers. While the cross-validation experiment and user study indicate that the autonomous policy successfully achieved collaborative handovers, the comparison with human teleoperation revealed avenues for further improvements. - oai:arXiv.org:2502.19991v2 + IndoorUAV: Benchmarking Vision-Language UAV Navigation in Continuous Indoor Environments + https://arxiv.org/abs/2512.19024 + arXiv:2512.19024v1 Announce Type: new +Abstract: Vision-Language Navigation (VLN) enables agents to navigate in complex environments by following natural language instructions grounded in visual observations. 
Although most existing work has focused on ground-based robots or outdoor Unmanned Aerial Vehicles (UAVs), indoor UAV-based VLN remains underexplored, despite its relevance to real-world applications such as inspection, delivery, and search-and-rescue in confined spaces. To bridge this gap, we introduce \textbf{IndoorUAV}, a novel benchmark and method specifically tailored for VLN with indoor UAVs. We begin by curating over 1,000 diverse and structurally rich 3D indoor scenes from the Habitat simulator. Within these environments, we simulate realistic UAV flight dynamics to collect diverse 3D navigation trajectories manually, further enriched through data augmentation techniques. Furthermore, we design an automated annotation pipeline to generate natural language instructions of varying granularity for each trajectory. This process yields over 16,000 high-quality trajectories, comprising the \textbf{IndoorUAV-VLN} subset, which focuses on long-horizon VLN. To support short-horizon planning, we segment long trajectories into sub-trajectories by selecting semantically salient keyframes and regenerating concise instructions, forming the \textbf{IndoorUAV-VLA} subset. Finally, we introduce \textbf{IndoorUAV-Agent}, a novel navigation model designed for our benchmark, leveraging task decomposition and multimodal reasoning. We hope IndoorUAV serves as a valuable resource to advance research on vision-language embodied AI in the indoor aerial navigation domain. 
+ oai:arXiv.org:2512.19024v1 cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Leimin Tian, Shiyu Xu, Kerry He, Rachel Love, Akansel Cosgun, Dana Kulic + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xu Liu, Yu Liu, Hanshuo Qiu, Yang Qirong, Zhouhui Lian - Provably Secure Covert Messaging Using Image-based Diffusion Processes - https://arxiv.org/abs/2503.10063 - arXiv:2503.10063v2 Announce Type: replace -Abstract: We consider the problem of securely and robustly embedding covert messages into an image-based diffusion model's output. The sender and receiver want to exchange the maximum amount of information possible per diffusion sampled image while remaining undetected. The adversary wants to detect that such communication is taking place by identifying those diffusion samples that contain covert messages. To maximize robustness to transformations of the diffusion sample, a strategy is for the sender and the receiver to embed the message in the initial latents. We first show that prior work that attempted this is easily broken because their embedding technique alters the latents' distribution. We then propose a straightforward method to embed covert messages in the initial latent {\em without} altering the distribution. We prove that our construction achieves indistinguishability to any probabilistic polynomial time adversary. Finally, we discuss and analyze empirically the tradeoffs between embedding capacity, message recovery rates, and robustness. We find that optimizing the inversion method for error correction is crucial for reliability. 
- oai:arXiv.org:2503.10063v2 + The Erasure Illusion: Stress-Testing the Generalization of LLM Forgetting Evaluation + https://arxiv.org/abs/2512.19025 + arXiv:2512.19025v1 Announce Type: new +Abstract: Machine unlearning aims to remove specific data influences from trained models, a capability essential for adhering to copyright laws and ensuring AI safety. Current unlearning metrics typically measure success by monitoring the model's performance degradation on the specific unlearning dataset ($D_u$). We argue that for Large Language Models (LLMs), this evaluation paradigm is insufficient and potentially misleading. Many real-world uses of unlearning--motivated by copyright or safety--implicitly target not only verbatim content in $D_u$, but also behaviors influenced by the broader generalizations the model derived from it. We demonstrate that LLMs can pass standard unlearning evaluation and appear to have ``forgotten'' the target knowledge, while simultaneously retaining strong capabilities on content that is semantically adjacent to $D_u$. This phenomenon indicates that erasing exact sentences does not necessarily equate to removing the underlying knowledge. To address this gap, we propose \name, an automated stress-testing framework that generates a surrogate dataset, $\tilde{D}_u$. This surrogate set is constructed to be semantically derived from $D_u$ yet sufficiently distinct in embedding space. By comparing unlearning metric scores between $D_u$ and $\tilde{D}_u$, we can stress-test the reliability of the metric itself. Our extensive evaluation across three LLM families (Llama-3-8B, Qwen2.5-7B, and Zephyr-7B-$\beta$), three distinct datasets, and seven standard metrics reveals widespread inconsistencies. We find that current metrics frequently overestimate unlearning success, failing to detect retained knowledge exposed by our stress-test datasets. 
+ oai:arXiv.org:2512.19025v1 cs.CR - Mon, 22 Dec 2025 00:00:00 -0500 - replace + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Luke A. Bauer, Wenxuan Bao, Vincent Bindschaedler + Hengrui Jia, Taoran Li, Jonas Guan, Varun Chandrasekaran - Learning to Contextualize Web Pages for Enhanced Decision Making by LLM Agents - https://arxiv.org/abs/2503.10689 - arXiv:2503.10689v2 Announce Type: replace -Abstract: Recent advances in large language models (LLMs) have led to a growing interest in developing LLM-based agents for automating web tasks. However, these agents often struggle with even simple tasks on real-world websites due to their limited capability to understand and process complex web page structures. In this work, we introduce LCoW, a framework for Learning language models to Contextualize complex Web pages into a more comprehensible form, thereby enhancing decision making by LLM agents. LCoW decouples web page understanding from decision making by training a separate contextualization module to transform complex web pages into comprehensible format, which are then utilized by the decision-making agent. We demonstrate that our contextualization module effectively integrates with LLM agents of various scales to significantly enhance their decision-making capabilities in web automation tasks. Notably, LCoW improves the success rates of closed-source LLMs (e.g., Gemini-1.5-flash, GPT-4o, Claude-3.5-Sonnet) by an average of 15.6%, and demonstrates a 23.7% average improvement in success rates for open-source LMs (e.g., Llama-3.1-8B, Llama-3.1-70B) on the WorkArena benchmark. Moreover, the Gemini-1.5-flash agent with LCoW achieves state-of-the-art results on the WebShop benchmark, outperforming human experts. The relevant code materials are available at our project page: https://lcowiclr2025.github.io. 
- oai:arXiv.org:2503.10689v2 - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Finer-Personalization Rank: Fine-Grained Retrieval Examines Identity Preservation for Personalized Generation + https://arxiv.org/abs/2512.19026 + arXiv:2512.19026v1 Announce Type: new +Abstract: The rise of personalized generative models raises a central question: how should we evaluate identity preservation? Given a reference image (e.g., one's pet), we expect the generated image to retain precise details attached to the subject's identity. However, current generative evaluation metrics emphasize the overall semantic similarity between the reference and the output, and overlook these fine-grained discriminative details. We introduce Finer-Personalization Rank, an evaluation protocol tailored to identity preservation. Instead of pairwise similarity, Finer-Personalization Rank adopts a ranking view: it treats each generated image as a query against an identity-labeled gallery consisting of visually similar real images. Retrieval metrics (e.g., mean average precision) measure performance, where higher scores indicate that identity-specific details (e.g., a distinctive head spot) are preserved. We assess identity at multiple granularities -- from fine-grained categories (e.g., bird species, car models) to individual instances (e.g., re-identification). Across CUB, Stanford Cars, and animal Re-ID benchmarks, Finer-Personalization Rank more faithfully reflects identity retention than semantic-only metrics and reveals substantial identity drift in several popular personalization methods. These results position the gallery-based protocol as a principled and practical evaluation for personalized generation. 
+ oai:arXiv.org:2512.19026v1 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Dongjun Lee, Juyong Lee, Kyuyoung Kim, Jihoon Tack, Jinwoo Shin, Yee Whye Teh, Kimin Lee + Connor Kilrain, David Carlyn, Julia Chae, Sara Beery, Wei-Lun Chao, Jianyang Gu - LookAhead Tuning: Safer Language Models via Partial Answer Previews - https://arxiv.org/abs/2503.19041 - arXiv:2503.19041v4 Announce Type: replace -Abstract: Fine-tuning enables large language models (LLMs) to adapt to specific domains, but often compromises their previously established safety alignment. To mitigate the degradation of model safety during fine-tuning, we introduce LookAhead Tuning, a lightweight and effective data-driven approach that preserves safety during fine-tuning. The method introduces two simple strategies that modify training data by previewing partial answer prefixes, thereby minimizing perturbations to the model's initial token distributions and maintaining its built-in safety mechanisms. Comprehensive experiments demonstrate that LookAhead Tuning effectively maintains model safety without sacrificing robust performance on downstream tasks. Our findings position LookAhead Tuning as a reliable and efficient solution for the safe and effective adaptation of LLMs. - oai:arXiv.org:2503.19041v4 - cs.CL + Recontextualization Mitigates Specification Gaming without Modifying the Specification + https://arxiv.org/abs/2512.19027 + arXiv:2512.19027v1 Announce Type: new +Abstract: Developers often struggle to specify correct training labels and rewards. Perhaps they don't need to. We propose recontextualization, which reduces how often language models "game" training signals, performing misbehaviors those signals mistakenly reinforce. We show recontextualization prevents models from learning to 1) prioritize evaluation metrics over chat response quality; 2) special-case code to pass incorrect tests; 3) lie to users; and 4) become sycophantic. 
Our method works by generating completions from prompts discouraging misbehavior and then recontextualizing them as though they were in response to prompts permitting misbehavior. Recontextualization trains language models to resist misbehavior even when instructions permit it. This mitigates the reinforcement of misbehavior from misspecified training signals, reducing specification gaming without improving the supervision signal. + oai:arXiv.org:2512.19027v1 cs.AI - cs.CV cs.LG - cs.MM - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Kangwei Liu, Mengru Wang, Yujie Luo, Lin Yuan, Mengshu Sun, Lei Liang, Zhiqiang Zhang, Jun Zhou, Bryan Hooi, Shumin Deng + Ariana Azarbal, Victor Gillioz, Vladimir Ivanov, Bryce Woodworth, Jacob Drori, Nevan Wichers, Aram Ebtekar, Alex Cloud, Alexander Matt Turner - Designing an LLM-Based Behavioral Activation Chatbot for Young People with Depression: Insights from an Evaluation with Artificial Users and Clinical Experts - https://arxiv.org/abs/2503.21540 - arXiv:2503.21540v3 Announce Type: replace -Abstract: LLMs promise to overcome limitations of rule-based mental health chatbots through improved natural language capabilities, yet their ability to deliver evidence-based psychological interventions remains largely unverified because evaluations rarely apply the validated fidelity measures used to assess psychotherapists. We developed an LLM-based chatbot that delivers behavioral activation for depression and generated 48 complete chat sessions with diverse artificial users. Ten psychotherapists assessed these sessions using the Quality of Behavioral Activation Scale (Q-BAS), a validated fidelity instrument. Results show that the chatbot reliably executed the intervention across all phases and maintained safety protocols, but it struggled with clinical judgment, particularly when verifying the feasibility of proposed activities. 
Overall, our findings suggest that LLM-based chatbots can execute therapeutic protocols with high fidelity, while robust clinical reasoning remains an open challenge. We outline design implications to address this gap and provide the chatbot and artificial user prompts. - oai:arXiv.org:2503.21540v3 - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Florian Onur Kuhlmeier, Leon Hanschmann, Melina Rabe, Stefan Luettke, Eva-Lotta Brakemeier, Alexander Maedche + A Surrogate-Augmented Symbolic CFD-Driven Training Framework for Accelerating Multi-objective Physical Model Development + https://arxiv.org/abs/2512.19031 + arXiv:2512.19031v1 Announce Type: new +Abstract: Computational Fluid Dynamics (CFD)-driven training combines machine learning (ML) with CFD solvers to develop physically consistent closure models with improved predictive accuracy. In the original framework, each ML-generated candidate model is embedded in a CFD solver and evaluated against reference data, requiring hundreds to thousands of high-fidelity simulations and resulting in prohibitive computational cost for complex flows. To overcome this limitation, we propose an extended framework that integrates surrogate modeling into symbolic CFD-driven training in real time to reduce training cost. The surrogate model learns to approximate the errors of ML-generated models based on previous CFD evaluations and is continuously refined during training. Newly generated models are first assessed using the surrogate, and only those predicted to yield small errors or high uncertainty are subsequently evaluated with full CFD simulations. Discrete expressions generated by symbolic regression are mapped into a continuous space using averaged input-symbol values as inputs to a probabilistic surrogate model. 
To support multi-objective model training, particularly when fixed weighting of competing quantities is challenging, the surrogate is extended to a multi-output formulation by generalizing the kernel to a matrix form, providing one mean and variance prediction per training objective. Selection metrics based on these probabilistic outputs are used to identify an optimal training setup. The proposed surrogate-augmented CFD-driven training framework is demonstrated across a range of statistically one- and two-dimensional flows, including both single- and multi-expression model optimization. In all cases, the framework substantially reduces training cost while maintaining predictive accuracy comparable to that of the original CFD-driven approach. + oai:arXiv.org:2512.19031v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Yuan Fang, Fabian Waschkowski, Maximilian Reissmann, Richard D. Sandberg, Takuo Oda, Koichi Tanimoto - Sell It Before You Make It: Revolutionizing E-Commerce with Personalized AI-Generated Items - https://arxiv.org/abs/2503.22182 - arXiv:2503.22182v2 Announce Type: replace -Abstract: E-commerce has revolutionized retail, yet its traditional workflows remain inefficient, with significant resource costs tied to product design and inventory. This paper introduces a novel system deployed at Alibaba that uses AI-generated items (AIGI) to address these challenges with personalized text-to-image generation for e-commerce product design. AIGI enables an innovative business mode called "sell it before you make it", where merchants can design fashion items and generate photorealistic images with digital models based on textual descriptions. Only when the items have received a certain number of orders, do the merchants start to produce them, which largely reduces reliance on physical prototypes and thus accelerates time to market. 
For such a promising application, we identify the underlying key scientific challenge, i.e., capturing users' group-level personalized preferences towards multiple generated images. To this end, we propose a Personalized Group-Level Preference Alignment Framework for Diffusion Models (PerFusion). We first design the PerFusion Reward Model for user preference estimation with a feature-crossing-based personalized plug-in. Then we develop PerFusion with a personalized adaptive network to model diverse preferences across users, and meanwhile derive the group-level preference optimization objective to model comparative behaviors among multiple images. Both offline and online experiments demonstrate the effectiveness of our proposed algorithm. The AI-generated items achieve over 13% relative improvements for both click-through rate and conversion rate, as well as a 7.9% decrease in return rate, compared to their human-designed counterparts, validating the transformative potential of AIGI for e-commerce platforms. 
- oai:arXiv.org:2503.22182v2
- cs.IR
- cs.AI
 cs.CV
 Mon, 22 Dec 2025 00:00:00 -0500
 replace
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Jianghao Lin, Peng Du, Jiaqi Liu, Weite Li, Yong Yu, Weinan Zhang, Yang Cao
+ Automatic Neuronal Activity Segmentation in Fast Four Dimensional Spatio-Temporal Fluorescence Imaging using Bayesian Approach
+ https://arxiv.org/abs/2512.19032
+ arXiv:2512.19032v1 Announce Type: new
+Abstract: Fluorescence Microscopy Calcium Imaging is a fundamental tool to record and analyze large-scale neuronal activities in vivo, simultaneously and at single-cell resolution. Automatic and precise detection of behaviorally relevant neuron activity from the recordings is critical to study the mapping of brain activity in organisms. However, a perpetual bottleneck to this problem is the manual segmentation, which is time- and labor-intensive and lacks generalizability. To this end, we present a Bayesian Deep Learning Framework to detect neuronal activities in 4D spatio-temporal data obtained by light sheet microscopy. 
Our approach accounts for the use of temporal information by calculating pixel-wise correlation maps and combines them with spatial information given by the mean summary image. The Bayesian framework not only produces probability segmentation maps but also models the uncertainty pertaining to active neuron detection. To evaluate the accuracy of our framework, we implemented a test of reproducibility to assess the generalization of the network in detecting neuron activity. The network achieved a mean Dice Score of 0.81 relative to the synthetic Ground Truth obtained by Otsu's method and a mean Dice Score of 0.79 between the first and second run of the test of reproducibility. Once successfully deployed, our method can be used for rapid detection of active neurons in behavioural studies. 
+ oai:arXiv.org:2512.19032v1
 cs.CV
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Ran Li, Pan Xiao, Kaushik Dutta, Youdong Guo
- Unified interface flux evaluation in a general discontinuous Galerkin spectral element framework
- https://arxiv.org/abs/2504.03573
- arXiv:2504.03573v2 Announce Type: replace
-Abstract: High-order discontinuous Galerkin spectral element methods (DGSEM) have received growing attention and development, especially in the regime of computational fluid dynamics in recent years. The inherent flexibility of the discontinuous Galerkin approach in handling non-conforming interfaces, such as those encountered in moving geometries or hp-refinement, presents a significant advantage for real-world simulations. Despite the well-established mathematical framework of DG methods, practical implementation challenges persist in boosting performance and capability. 
Most previous studies only focus on certain choices of element shape or basis type in a structured mesh, although they have demonstrated the capability of DGSEM in complex flow simulations. This work discusses the low-cost and unified interface flux evaluation approaches for general spectral elements in unstructured meshes, alongside their implementations in the open-source spectral element framework, Nektar++. The initial motivation arises from the discretization of Helmholtz equations by the symmetric interior penalty method, in which the system matrix can easily become non-symmetric if the flux is not properly evaluated on non-conforming interfaces. We focus on the polynomial non-conforming case in this work, but extending to the geometric non-conforming case is theoretically possible. Comparisons of different approaches, their trade-offs, and the benchmark performance of our initial matrix-free implementation are also included, contributing to the broader discourse on high-performance spectral element method implementations. 
- oai:arXiv.org:2504.03573v2
 math.NA
 cs.NA
 Mon, 22 Dec 2025 00:00:00 -0500
 replace
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Boyang Xia, David Moxey
+ Distinguishing Visually Similar Actions: Prompt-Guided Semantic Prototype Modulation for Few-Shot Action Recognition
+ https://arxiv.org/abs/2512.19036
+ arXiv:2512.19036v1 Announce Type: new
+Abstract: Few-shot action recognition aims to enable models to quickly learn new action categories from limited labeled samples, addressing the challenge of data scarcity in real-world applications. 
Current research primarily addresses three core challenges: (1) temporal modeling, where models are prone to interference from irrelevant static background information and struggle to capture the essence of dynamic action features; (2) visual similarity, where categories with subtle visual differences are difficult to distinguish; and (3) the modality gap between visual-textual support prototypes and visual-only queries, which complicates alignment within a shared embedding space. To address these challenges, this paper proposes a CLIP-SPM framework, which includes three components: (1) the Hierarchical Synergistic Motion Refinement (HSMR) module, which aligns deep and shallow motion features to improve temporal modeling by reducing static background interference; (2) the Semantic Prototype Modulation (SPM) strategy, which generates query-relevant text prompts to bridge the modality gap and integrates them with visual features, enhancing the discriminability between similar actions; and (3) the Prototype-Anchor Dual Modulation (PADM) method, which refines support prototypes and aligns query features with a global semantic anchor, improving consistency across support and query samples. Comprehensive experiments across standard benchmarks, including Kinetics, SSv2-Full, SSv2-Small, UCF101, and HMDB51, demonstrate that our CLIP-SPM achieves competitive performance under 1-shot, 3-shot, and 5-shot settings. Extensive ablation studies and visual analyses further validate the effectiveness of each component and its contributions to addressing the core challenges. The source code and models are publicly available at GitHub. 
+ oai:arXiv.org:2512.19036v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Boyang Xia, David Moxey + Xiaoyang Li, Mingming Lu, Ruiqi Wang, Hao Li, Zewei Le - Sample, Don't Search: Rethinking Test-Time Alignment for Language Models - https://arxiv.org/abs/2504.03790 - arXiv:2504.03790v2 Announce Type: replace -Abstract: Increasing test-time computation has emerged as a promising direction for improving language model performance, particularly in scenarios where model finetuning is impractical or impossible due to computational constraints or private model weights. However, existing test-time search methods using a reward model (RM) often degrade in quality as compute scales, due to the over-optimization of what are inherently imperfect reward proxies. We introduce QAlign, a new test-time alignment approach. As we scale test-time compute, QAlign converges to sampling from the optimal aligned distribution for each individual prompt. By adopting recent advances in Markov chain Monte Carlo for text generation, our method enables better-aligned outputs without modifying the underlying model or even requiring logit access. We demonstrate the effectiveness of QAlign on mathematical reasoning benchmarks (GSM8K and GSM-Symbolic) using a task-specific RM, showing consistent improvements over existing test-time compute methods like best-of-n and majority voting. Furthermore, when applied with more realistic RMs trained on the Tulu 3 preference dataset, QAlign outperforms direct preference optimization (DPO), best-of-n, majority voting, and weighted majority voting on a diverse range of datasets (GSM8K, MATH500, IFEval, MMLU-Redux, and TruthfulQA). A practical solution to aligning language models at test time using additional computation without degradation, our approach expands the limits of the capability that can be obtained from off-the-shelf language models without further training. 
- oai:arXiv.org:2504.03790v2 - cs.CL + Elevating Intrusion Detection and Security Fortification in Intelligent Networks through Cutting-Edge Machine Learning Paradigms + https://arxiv.org/abs/2512.19037 + arXiv:2512.19037v1 Announce Type: new +Abstract: The proliferation of IoT devices and their reliance on Wi-Fi networks have introduced significant security vulnerabilities, particularly the KRACK and Kr00k attacks, which exploit weaknesses in WPA2 encryption to intercept and manipulate sensitive data. Traditional IDS using classifiers face challenges such as model overfitting, incomplete feature extraction, and high false positive rates, limiting their effectiveness in real-world deployments. To address these challenges, this study proposes a robust multiclass machine learning based intrusion detection framework. The methodology integrates advanced feature selection techniques to identify critical attributes, mitigating redundancy and enhancing detection accuracy. Two distinct ML architectures are implemented: a baseline classifier pipeline and a stacked ensemble model combining noise injection, Principal Component Analysis (PCA), and meta learning to improve generalization and reduce false positives. Evaluated on the AWID3 data set, the proposed ensemble architecture achieves superior performance, with an accuracy of 98%, precision of 98%, recall of 98%, and a false positive rate of just 2%, outperforming existing state-of-the-art methods. This work demonstrates the efficacy of combining preprocessing strategies with ensemble learning to fortify network security against sophisticated Wi-Fi attacks, offering a scalable and reliable solution for IoT environments. Future directions include real-time deployment and adversarial resilience testing to further enhance the model's adaptability. + oai:arXiv.org:2512.19037v1 + cs.CR cs.LG - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Gon\c{c}alo Faria, Noah A. 
Smith + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1038/s41598-025-23754-w + Sci Rep 15, 39989 (2025) + Md Minhazul Islam Munna, Md Mahbubur Rahman, Jaroslav Frnda, Muhammad Shahid Anwar, Alpamis Kutlimuratov - Revisiting the field normalization approaches/practices - https://arxiv.org/abs/2504.14512 - arXiv:2504.14512v2 Announce Type: replace -Abstract: Field normalization plays a crucial role in scientometrics to ensure fair comparisons across different disciplines. In this paper, we revisit the effectiveness of several widely used field normalization methods. Our findings indicate that source-side normalization (as employed in SNIP) does not fully eliminate citation bias across different fields and the imbalanced paper growth rates across fields are a key factor for this phenomenon. To address the issue of skewness, logarithmic transformation has been applied. Recently, a combination of logarithmic transformation and mean-based normalization, expressed as ln(c+1)/mu, has gained popularity. However, our analysis shows that this approach does not yield satisfactory results. Instead, we find that combining logarithmic transformation (ln(c+1)) with z-score normalization provides a better alternative. Furthermore, our study suggests that the better performance is achieved when combining both source-side and target-side field normalization methods. - oai:arXiv.org:2504.14512v2 - cs.DL - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Xinyue Lu, Li Li, Zhesi Shen + Time-series Forecast for Indoor Zone Air Temperature with Long Horizons: A Case Study with Sensor-based Data from a Smart Building + https://arxiv.org/abs/2512.19038 + arXiv:2512.19038v1 Announce Type: new +Abstract: With the press of global climate change, extreme weather and sudden weather changes are becoming increasingly common. 
To maintain a comfortable indoor environment and minimize the contribution of the building to climate change as much as possible, higher requirements are placed on the operation and control of HVAC systems, e.g., being more energy-efficient and more flexible in responding to rapid weather changes. This places demands on the rapid modeling and prediction of zone air temperatures of buildings. Compared to traditional simulation-based approaches such as EnergyPlus and DOE2, a hybrid approach combining physics-based and data-driven methods is more suitable. Recently, the availability of high-quality datasets and algorithmic breakthroughs have driven a considerable amount of work in this field. However, in the niche of short- and long-term predictions, there are still some gaps in existing research. This paper aims to develop a time series forecast model to predict the zone air temperature in a building located in America over a 2-week horizon. The findings could be further improved to support intelligent control and operation of HVAC systems (i.e. demand flexibility) and could also be used for hybrid building energy modeling. 
+ oai:arXiv.org:2512.19038v1
 cs.LG
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-sa/4.0/
 ICML 2025 CO-BUILD Workshop on Computational Optimization of Buildings
 Liping Sun, Yucheng Guo, Siliang Lu, Zhenzhen Li
- The k-Center Problem of Uncertain Points on Graphs
- https://arxiv.org/abs/2504.14803
- arXiv:2504.14803v2 Announce Type: replace
-Abstract: In this paper, we study the $k$-center problem of uncertain points on a graph. Given are an undirected graph $G = (V, E)$ and a set $\mathcal{P}$ of $n$ uncertain points where each uncertain point with a non-negative weight has $m$ possible locations on $G$ each associated with a probability. The problem aims to find $k$ centers (points) on $G$ so as to minimize the maximum weighted expected distance of uncertain points to their expected closest centers. 
No previous work exists for the $k$-center problem of uncertain points on undirected graphs. We propose exact algorithms that solve, respectively, the case of $k=2$ in $O(|E|^2m^2n\log |E|mn\log mn)$ time and the problem with $k\geq 3$ in $O(\min\{|E|^km^kn^{k+1}k\log |E|mn\log m, |E|^kn^\frac{k}{2}m^\frac{k^2}{2}\log |E|mn\})$ time, provided with the distance matrix of $G$. In addition, an $O(|E|mn\log mn)$-time algorithmic approach is given for the one-center case. 
- oai:arXiv.org:2504.14803v2
 cs.DS
 Mon, 22 Dec 2025 00:00:00 -0500
 replace
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Haitao Xu, Jingru Zhang
+ EGM: Efficiently Learning General Motion Tracking Policy for High Dynamic Humanoid Whole-Body Control
+ https://arxiv.org/abs/2512.19043
+ arXiv:2512.19043v1 Announce Type: new
+Abstract: Learning a general motion tracking policy from human motions shows great potential for versatile humanoid whole-body control. Conventional approaches are not only inefficient in data utilization and training processes but also exhibit limited performance when tracking highly dynamic motions. To address these challenges, we propose EGM, a framework that enables efficient learning of a general motion tracking policy. EGM integrates four core designs. Firstly, we introduce a Bin-based Cross-motion Curriculum Adaptive Sampling strategy to dynamically orchestrate the sampling probabilities based on the tracking error of each motion bin, efficiently balancing the training process across motions with varying difficulty and durations. The sampled data is then processed by our proposed Composite Decoupled Mixture-of-Experts (CDMoE) architecture, which efficiently enhances the ability to track motions from different distributions by grouping experts separately for upper and lower body and decoupling orthogonal experts from shared experts to separately handle dedicated features and general features. Central to our approach is a key insight we identified: for training a general motion tracking policy, data quality and diversity are paramount. 
Building on these designs, we develop a three-stage curriculum training flow to progressively enhance the policy's robustness against disturbances. Despite training on only 4.08 hours of data, EGM generalized robustly across 49.25 hours of test motions, outperforming baselines on both routine and highly dynamic tasks. + oai:arXiv.org:2512.19043v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Haitao Xu, Jingru Zhang + Chao Yang, Yingkai Sun, Peng Ye, Xin Chen, Chong Yu, Tao Chen - RBCTest: Leveraging LLMs to Mine and Verify Oracles of API Response Bodies for RESTful API Testing - https://arxiv.org/abs/2504.17287 - arXiv:2504.17287v3 Announce Type: replace -Abstract: In API testing, deriving logical constraints on API response bodies to be used as oracles is crucial for generating test cases and performing automated testing of RESTful APIs. However, existing approaches are restricted to dynamic analysis, in which oracles are extracted via the execution of APIs as part of the system under test. In this paper, we propose a complementary LLM-based static approach in which constraints for API response bodies are mined from API specifications. We leverage large language models (LLMs) to comprehend API specifications, mine constraints for response bodies, and generate test cases. To reduce LLM hallucination, we apply an Observation-Confirmation (OC) scheme that uses initial prompts to contextualize constraints, allowing subsequent prompts to more accurately confirm their presence. Our empirical results show that RBCTest with OC prompting achieves high precision in constraint mining, with averages ranging from 85.1% to 93.6%. It also performs well in generating test cases from mined constraints, with precision ranging from 86.4% to 91.7%. We further use test cases generated by RBCTest to detect 46 mismatches between API specifications and actual response data across 19 real-world APIs. 
Four of these mismatches were reported in developers' forums. - oai:arXiv.org:2504.17287v3 - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Hieu Huynh, Tri Le, Tu Nguyen, Viet Nguyen, Vu Nguyen, Tien N. Nguyen + Advancing Accessibility: Augmented Reality Solutions for the Blind and Disabled in Bangladesh + https://arxiv.org/abs/2512.19047 + arXiv:2512.19047v1 Announce Type: new +Abstract: Augmented Reality (AR) technology holds immense potential to transform the lives of blind and disabled individuals by offering enhanced interaction with their surroundings and providing real-time, accessible information. Globally, AR applications are being developed with features such as audio descriptions of environments, object recognition, and navigational aids, specifically designed to support the visually impaired. These innovations are paving the way for increased independence, mobility, and overall quality of life for millions of people worldwide. In Bangladesh, the adoption of AR technology for the blind and disabled is still in its early stages, primarily due to limited accessibility resources and infrastructure challenges. However, with the growing penetration of smartphones and continuous advancements in AR technologies, there is a promising opportunity for these solutions to be adapted, localized, and scaled within the country. This paper reviews the current state of AR technologies for the visually impaired on a global scale, explores their potential application in Bangladesh, and delves into the challenges and opportunities for broader implementation in this context. 
+ oai:arXiv.org:2512.19047v1 + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Md Minhazul Islam Munna, Al Amin, Xin Wang, Hongbin Ma - Understanding and supporting how developers prompt for LLM-powered code editing in practice - https://arxiv.org/abs/2504.20196 - arXiv:2504.20196v2 Announce Type: replace -Abstract: Large Language Models (LLMs) are rapidly transforming software engineering, with coding assistants embedded in an IDE becoming increasingly prevalent. While research has focused on improving the tools and understanding developer perceptions, a critical gap exists in understanding how developers actually use these tools in their daily workflows, and, crucially, where they struggle. This paper addresses part of this gap through a multi-phased investigation of developer interactions with an LLM-powered code editing feature, Transform Code, in an IDE widely used at Google. First, we analyze telemetry logs of the feature usage, revealing that frequent re-prompting can be an indicator of developer struggles with using Transform Code. Second, we conduct a qualitative analysis of unsatisfactory requests, identifying five key categories of information often missing from developer prompts. Finally, based on these findings, we propose and evaluate a tool, AutoPrompter, for automatically improving prompts by inferring missing information from the surrounding code context, leading to a 27% improvement in edit correctness on our test set. 
- oai:arXiv.org:2504.20196v2 - cs.SE - cs.AI - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Daye Nam, Ahmed Omran, Ambar Murillo, Saksham Thakur, Abner Araujo, Marcel Blistein, Alexander Fr\"ommgen, Vincent Hellendoorn, Satish Chandra + WaTeRFlow: Watermark Temporal Robustness via Flow Consistency + https://arxiv.org/abs/2512.19048 + arXiv:2512.19048v1 Announce Type: new +Abstract: Image watermarking supports authenticity and provenance, yet many schemes are still easy to bypass with various distortions and powerful generative edits. Deep learning-based watermarking has improved robustness to diffusion-based image editing, but a gap remains when a watermarked image is converted to video by image-to-video (I2V), in which per-frame watermark detection weakens. I2V has quickly advanced from short, jittery clips to multi-second, temporally coherent scenes, and it now serves not only content creation but also world-modeling and simulation workflows, making cross-modal watermark recovery crucial. We present WaTeRFlow, a framework tailored for robustness under I2V. It consists of (i) FUSE (Flow-guided Unified Synthesis Engine), which exposes the encoder-decoder to realistic distortions via instruction-driven edits and a fast video diffusion proxy during training, (ii) optical-flow warping with a Temporal Consistency Loss (TCL) that stabilizes per-frame predictions, and (iii) a semantic preservation loss that maintains the conditioning signal. Experiments across representative I2V models show accurate watermark recovery from frames, with higher first-frame and per-frame bit accuracy and resilience when various distortions are applied before or after video generation. 
+ oai:arXiv.org:2512.19048v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Utae Jeong, Sumin In, Hyunju Ryu, Jaewan Choi, Feng Yang, Jongheon Jeong, Seungryong Kim, Sangpil Kim - Reduced-order asymptotic observer of nonlinear friction for precise motion control - https://arxiv.org/abs/2504.21316 - arXiv:2504.21316v2 Announce Type: replace -Abstract: Nonlinear friction has long been, and continues to be, one of the major challenges for precision motion control systems. A linear asymptotic observer of the motion state variables with nonlinear friction uses a dedicated state-space representation of the dynamic friction force (including pre-sliding) [25], which is robust to the noisy input signals. The observer implements the reduced-order Luenberger observation law, while assuming the output displacement is the only available measurement. The latter is relevant for multiple motion control applications with use of encoder-type sensors. The uniform asymptotic stability and convergence analysis of the proposed observer are elaborated in the present work by using the Lyapunov function-based stability criterion by Ignatyev and imposing parametric constraints on the time-dependent eigenvalues of the system matrix to be always negative real. A design procedure for assigning a dominant, and thus slowest, real pole of the observer is proposed. Explanative numerical examples accompany the developed analysis. In addition, a thorough experimental evaluation is given for the proposed observer-based friction compensation which is performed for positioning and tracking tasks. The observer-based compensation, which can serve as a plug-in to a standard feedback controller, extends a PID feedback control that is optimally tuned for disturbance suppression. The experimental results are compared with and without the plugged-in observer-based compensator. 
- oai:arXiv.org:2504.21316v2 - eess.SY - cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Michael Ruderman + Decoupled Generative Modeling for Human-Object Interaction Synthesis + https://arxiv.org/abs/2512.19049 + arXiv:2512.19049v1 Announce Type: new +Abstract: Synthesizing realistic human-object interaction (HOI) is essential for 3D computer vision and robotics, underpinning animation and embodied control. Existing approaches often require manually specified intermediate waypoints and place all optimization objectives on a single network, which increases complexity, reduces flexibility, and leads to errors such as unsynchronized human and object motion or penetration. To address these issues, we propose Decoupled Generative Modeling for Human-Object Interaction Synthesis (DecHOI), which separates path planning and action synthesis. A trajectory generator first produces human and object trajectories without prescribed waypoints, and an action generator conditions on these paths to synthesize detailed motions. To further improve contact realism, we employ adversarial training with a discriminator that focuses on the dynamics of distal joints. The framework also models a moving counterpart and supports responsive, long-sequence planning in dynamic scenes, while preserving plan consistency. Across two benchmarks, FullBodyManipulation and 3D-FUTURE, DecHOI surpasses prior methods on most quantitative metrics and qualitative evaluations, and perceptual studies likewise prefer our results. 
+ oai:arXiv.org:2512.19049v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Hwanhee Jung, Seunggwan Lee, Jeongyoon Yoon, SeungHyeon Kim, Giljoo Nam, Qixing Huang, Sangpil Kim - Notes on Univariate Sumcheck - https://arxiv.org/abs/2505.00554 - arXiv:2505.00554v3 Announce Type: replace -Abstract: This note describes a univariate polynomial interactive oracle proof for multilinear extension evaluation. Unlike prior protocols, (1) the verifier here is given a univariate extension oracle for the same vector of which the multilinear extension is getting evaluated and (2) the prover only has linear complexity. For these reasons, the protocol is well-suited for combining multivariate and univariate sumcheck techniques. - oai:arXiv.org:2505.00554v3 - cs.CR - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Efficient Personalization of Generative Models via Optimal Experimental Design + https://arxiv.org/abs/2512.19057 + arXiv:2512.19057v1 Announce Type: new +Abstract: Preference learning from human feedback has the ability to align generative models with the needs of end-users. Human feedback is costly and time-consuming to obtain, which creates demand for data-efficient query selection methods. This work presents a novel approach that leverages optimal experimental design to ask humans the most informative preference queries, from which we can elucidate the latent reward function modeling user preferences efficiently. We formulate the problem of preference query selection as the one that maximizes the information about the underlying latent preference model. We show that this problem has a convex optimization formulation, and introduce a statistically and computationally efficient algorithm ED-PBRL that is supported by theoretical guarantees and can efficiently construct structured queries such as images or text. 
We empirically present the proposed framework by personalizing a text-to-image generative model to user-specific styles, showing that it requires less preference queries compared to random query selection. + oai:arXiv.org:2512.19057v1 + cs.LG + cs.IT + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Malcom Mohamed + Guy Schacht, Ziyad Sheebaelhamd, Riccardo De Santi, Mojm\'ir Mutn\'y, Andreas Krause - Going deep and going wide: Counting logic and homomorphism indistinguishability over graphs of bounded treedepth and treewidth - https://arxiv.org/abs/2505.01193 - arXiv:2505.01193v2 Announce Type: replace -Abstract: We study the expressive power of first-order logic with counting quantifiers, especially the $k$-variable and quantifier-rank-$q$ fragment $\mathsf{C}^k_q$, using homomorphism indistinguishability. Recently, Dawar, Jakl, and Reggio (2021) proved that two graphs satisfy the same $\mathsf{C}^k_q$-sentences iff they are homomorphism indistinguishable over the class $\mathcal{T}^k_q$ of graphs admitting a $k$-pebble forest cover of depth $q$. After reproving this result using elementary means, we provide a graph-theoretic analysis of $\mathcal{T}^k_q$. This allows us to separate $\mathcal{T}^k_q$ from the intersection $\mathcal{TW}_{k-1} \cap \mathcal{TD}_q$, provided that $q$ is sufficiently larger than $k$. Here $\mathcal{TW}_{k-1}$ is the class of all graphs of treewidth at most $k-1$ and $\mathcal{TD}_q$ is the class of all graphs of treedepth at most $q$. - We are able to lift this separation to a separation of the respective homomorphism indistinguishability relations $\equiv_{\mathcal{T}^k_q}$ and $\equiv_{\mathcal{TW}_{k-1} \cap \mathcal{TD}_q}$. We do this by showing that the classes $\mathcal{TD}_q$ and $\mathcal{T}^k_q$ are homomorphism distinguishing closed, as conjectured by Roberson (2022). 
- In order to prove Roberson's conjecture for $\mathcal{T}^k_q$, we characterise $\mathcal{T}^k_q$ in terms of a monotone Cops-and-Robber game. The crux is to prove that if Cop has a winning strategy then Cop also has a winning strategy that is monotone. To that end, we transform Cops' winning strategy into a pree-tree-decomposition, which is inspired by decompositions of matroids, and then apply an intricate breadth-first cleaning up procedure along the pree-tree-decomposition (which may temporarily lose the property of representing a strategy). Thereby, we achieve monotonicity while controlling the number of rounds across all branches of the decomposition via a vertex exchange argument. - oai:arXiv.org:2505.01193v2 - cs.LO - cs.DM - math.CO - Mon, 22 Dec 2025 00:00:00 -0500 - replace + 6DAttack: Backdoor Attacks in the 6DoF Pose Estimation + https://arxiv.org/abs/2512.19058 + arXiv:2512.19058v1 Announce Type: new +Abstract: Deep learning advances have enabled accurate six-degree-of-freedom (6DoF) object pose estimation, widely used in robotics, AR/VR, and autonomous systems. However, backdoor attacks pose significant security risks. While most research focuses on 2D vision, 6DoF pose estimation remains largely unexplored. Unlike traditional backdoors that only change classes, 6DoF attacks must control continuous parameters like translation and rotation, rendering 2D methods inapplicable. We propose 6DAttack, a framework using 3D object triggers to induce controlled erroneous poses while maintaining normal behavior. Evaluations on PVNet, DenseFusion, and PoseDiffusion across LINEMOD, YCB-Video, and CO3D show high attack success rates (ASRs) without compromising clean performance. Backdoored models achieve up to 100% clean ADD accuracy and 100% ASR, with triggered samples reaching 97.70% ADD-P. Furthermore, a representative defense remains ineffective. Our findings reveal a serious, underexplored threat to 6DoF pose estimation. 
+ oai:arXiv.org:2512.19058v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Isolde Adler, Eva Fluck, Tim Seppelt, Gian Luca Spitzer + Jihui Guo, Zongmin Zhang, Zhen Sun, Yuhao Yang, Jinlin Wu, Fu Zhang, Xinlei He - Deterministic Suffix-reading Automata - https://arxiv.org/abs/2505.09353 - arXiv:2505.09353v3 Announce Type: replace -Abstract: We introduce deterministic suffix-reading automata (DSA), a new automaton model over finite words. Transitions in a DSA are labeled with words. From a state, a DSA triggers an outgoing transition on seeing a word ending with the transition's label. Therefore, rather than moving along an input word letter by letter, a DSA can jump along blocks of letters, with each block ending in a suitable suffix. This feature allows DSAs to recognize regular languages more concisely, compared to DFAs. In this work, we focus on questions around finding a minimal DSA for a regular language. The number of states is not a faithful measure of the size of a DSA, since the transition-labels contain strings of arbitrary length. Hence, we consider total-size (number of states + number of edges + total length of transition-labels) as the size measure of DSAs. - We start by formally defining the model and providing a DSA-to-DFA conversion that allows to compare the expressiveness and succinctness of DSA with related automata models. Our main technical contribution is a method to derive DSAs from a given DFA: a DFA-to-DSA conversion. We make a surprising observation that the smallest DSA derived from the canonical DFA of a regular language L need not be a minimal DSA for L. This observation leads to a fundamental bottleneck in deriving a minimal DSA for a regular language. In fact, we prove that given a DFA and a number k, the problem of deciding if there exists an equivalent DSA of total-size atmost k is NP-complete. 
- oai:arXiv.org:2505.09353v3 + On the complexity of computing Strahler numbers + https://arxiv.org/abs/2512.19060 + arXiv:2512.19060v1 Announce Type: new +Abstract: It is shown that the problem of computing the Strahler number of a binary tree given as a term is complete for the circuit complexity class uniform $\mathsf{NC}^1$. For several variants, where the binary tree is given by a pointer structure or in a succinct form by a directed acyclic graph or a tree straight-line program, the complexity of computing the Strahler number is determined as well. The problem, whether a given context-free grammar in Chomsky normal form produces a derivation tree (resp., an acyclic derivation tree), whose Strahler number is at least a given number $k$ is shown to be P-complete (resp., PSPACE-complete). + oai:arXiv.org:2512.19060v1 + cs.CC cs.FL - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - R Keerthan, B Srivathsan, R Venkatesh, Sagar Verma + Moses Ganardi, Markus Lohrey - How to use score-based diffusion in earth system science: A satellite nowcasting example - https://arxiv.org/abs/2505.10432 - arXiv:2505.10432v2 Announce Type: replace -Abstract: Machine learning (ML) is used for many earth science applications; however, traditional ML methods trained with squared errors often create blurry forecasts. Diffusion models are an emerging generative ML technique with the ability to produce sharper, more realistic images by learning the underlying data distribution. Diffusion models are becoming more prevalent, yet adapting them for earth science applications can be challenging because most articles focus on theoretical aspects of the approach, rather than making the method widely accessible. This work illustrates score-based diffusion models with a well-known problem in atmospheric science: cloud nowcasting (zero-to-three-hour forecast). 
After discussing the background and intuition of score-based diffusion models using examples from geostationary satellite infrared imagery, we experiment with three types of diffusion models: a standard score-based diffusion model (Diff); a residual correction diffusion model (CorrDiff); and a latent diffusion model (LDM). Our results show that the diffusion models not only advect existing clouds, but also generate and decay clouds, including convective initiation. A case study qualitatively shows the preservation of high-resolution features longer into the forecast than a conventional U-Net. The best of the three diffusion models tested was the CorrDiff approach, outperforming all other diffusion models, the conventional U-Net, and persistence. The diffusion models also enable out-of-the-box ensemble generation with skillful calibration. By explaining and exploring diffusion models for a common problem and ending with lessons learned from adapting diffusion models for our task, this work provides a starting point for the community to utilize diffusion models for a variety of earth science applications. - oai:arXiv.org:2505.10432v2 + Fraud Detection Through Large-Scale Graph Clustering with Heterogeneous Link Transformation + https://arxiv.org/abs/2512.19061 + arXiv:2512.19061v1 Announce Type: new +Abstract: Collaborative fraud, where multiple fraudulent accounts coordinate to exploit online payment systems, poses significant challenges due to the formation of complex network structures. Traditional detection methods that rely solely on high-confidence identity links suffer from limited coverage, while approaches using all available linkages often result in fragmented graphs with reduced clustering effectiveness. In this paper, we propose a novel graph-based fraud detection framework that addresses the challenge of large-scale heterogeneous graph clustering through a principled link transformation approach. 
Our method distinguishes between \emph{hard links} (high-confidence identity relationships such as phone numbers, credit cards, and national IDs) and \emph{soft links} (behavioral associations including device fingerprints, cookies, and IP addresses). We introduce a graph transformation technique that first identifies connected components via hard links, merges them into super-nodes, and then reconstructs a weighted soft-link graph amenable to efficient embedding and clustering. The transformed graph is processed using LINE (Large-scale Information Network Embedding) for representation learning, followed by HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise) for density-based cluster discovery. Experiments on a real-world payment platform dataset demonstrate that our approach achieves significant graph size reduction (from 25 million to 7.7 million nodes), doubles the detection coverage compared to hard-link-only baselines, and maintains high precision across identified fraud clusters. Our framework provides a scalable and practical solution for industrial-scale fraud detection systems. + oai:arXiv.org:2512.19061v1 cs.LG - physics.ao-ph - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Randy J. Chase, Katherine Haynes, Lander Ver Hoef, Imme Ebert-Uphoff - - - BOLT: Block-Orthonormal Lanczos for Trace estimation of matrix functions - https://arxiv.org/abs/2505.12289 - arXiv:2505.12289v2 Announce Type: replace -Abstract: Efficient matrix trace estimation is essential for scalable computation of log-determinants, matrix norms, and distributional divergences. In many large-scale applications, the matrices involved are too large to store or access in full, making even a single matrix-vector (mat-vec) product infeasible. Instead, one often has access only to small subblocks of the matrix or localized matrix-vector products on restricted index sets. 
Hutch++ achieves optimal convergence rate but relies on randomized SVD and assumes full mat-vec access, making it difficult to apply in these constrained settings. We propose the Block-Orthonormal Stochastic Lanczos Quadrature (BOLT), which matches Hutch++ accuracy with a simpler implementation based on orthonormal block probes and Lanczos iterations. BOLT builds on the Stochastic Lanczos Quadrature (SLQ) framework, which combines random probing with Krylov subspace methods to efficiently approximate traces of matrix functions, and performs better than Hutch++ in near flat-spectrum regimes. To address memory limitations and partial access constraints, we introduce Subblock SLQ, a variant of BOLT that operates only on small principal submatrices. As a result, this framework yields a proxy KL divergence estimator and an efficient method for computing the Wasserstein-2 distance between Gaussians - both compatible with low-memory and partial-access regimes. We provide theoretical guarantees and demonstrate strong empirical performance across a range of high-dimensional settings. - oai:arXiv.org:2505.12289v2 - math.NA - cs.DS - cs.LG - cs.NA - Mon, 22 Dec 2025 00:00:00 -0500 - replace + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Kingsley Yeon, Promit Ghosal, Mihai Anitescu + Chi Liu - A Practical Guide for Incorporating Symmetry in Diffusion Policy - https://arxiv.org/abs/2505.13431 - arXiv:2505.13431v4 Announce Type: replace -Abstract: Recently, equivariant neural networks for policy learning have shown promising improvements in sample efficiency and generalization, however, their wide adoption faces substantial barriers due to implementation complexity. Equivariant architectures typically require specialized mathematical formulations and custom network design, posing significant challenges when integrating with modern policy frameworks like diffusion-based models. 
In this paper, we explore a number of straightforward and practical approaches to incorporate symmetry benefits into diffusion policies without the overhead of full equivariant designs. Specifically, we investigate (i) invariant representations via relative trajectory actions and eye-in-hand perception, (ii) integrating equivariant vision encoders, and (iii) symmetric feature extraction with pretrained encoders using Frame Averaging. We first prove that combining eye-in-hand perception with relative or delta action parameterization yields inherent SE(3)-invariance, thus improving policy generalization. We then perform a systematic experimental study on those design choices for integrating symmetry in diffusion policies, and conclude that an invariant representation with equivariant feature extraction significantly improves the policy performance. Our method achieves performance on par with or exceeding fully equivariant architectures while greatly simplifying implementation. - oai:arXiv.org:2505.13431v4 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Dian Wang, Boce Hu, Shuran Song, Robin Walters, Robert Platt + On Cost-Aware Sequential Hypothesis Testing with Random Costs and Action Cancellation + https://arxiv.org/abs/2512.19067 + arXiv:2512.19067v1 Announce Type: new +Abstract: We study a variant of cost-aware sequential hypothesis testing in which a single active Decision Maker (DM) selects actions with positive, random costs to identify the true hypothesis under an average error constraint, while minimizing the expected total cost. The DM may abort an in-progress action, yielding no sample, by truncating its realized cost at a smaller, tunable deterministic limit, which we term a per-action deadline. 
We analyze how this cancellation option can be exploited under two cost-revelation models: ex-post, where the cost is revealed only after the sample is obtained, and ex-ante, where the cost accrues before sample acquisition. + In the ex-post model, per-action deadlines do not affect the expected total cost, and the cost-error tradeoffs coincide with the baseline obtained by replacing deterministic costs with cost means. In the ex-ante model, we show how per-action deadlines inflate the expected number of times actions are applied, and that the resulting expected total cost can be reduced to the constant-cost setting by introducing an effective per-action cost. We characterize when deadlines are beneficial and study several families in detail. + oai:arXiv.org:2512.19067v1 + cs.IT + cs.LG + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + George Vershinin, Asaf Cohen, Omer Gurewitz - Upgrading Democracies with Fairer Voting Methods - https://arxiv.org/abs/2505.14349 - arXiv:2505.14349v2 Announce Type: replace -Abstract: Voting methods are instrumental design elements of democracies. Citizens use them to express and aggregate their preferences to reach a collective decision. However, voting outcomes can be as sensitive to voting rules as they are to people's voting choices. Despite significance and interdisciplinary scientific progress, several democracies keep relying on outdated voting methods that do not fit modern, pluralistic societies well, while lacking social innovation. Here, we demonstrate how one can upgrade real-world democracies, namely by using alternative preferential voting methods such as cumulative voting and the method of equal shares designed for a proportional representation of voters' preferences. 
We rigorously evaluate the striking voting outcomes of these fair voting methods in a new participatory budgeting approach applied in the city of Aarau, Switzerland, including past and follow-up evidence. Results show more winning projects with the same budget. They also show broader geographic and preference representation of citizens by the elected projects, in particular for voters who used to be under-represented. We provide causal evidence showing that citizens prefer proportional voting methods, which possess strong legitimacy without the need of very specialized technical explanations. We also reveal strong underlying democratic values exhibited by citizens who support fair voting methods such as altruism and compromise. These findings come with the momentum to unleash a new and long-awaited participation blueprint of how to upgrade democracies globally. - oai:arXiv.org:2505.14349v2 - cs.CY + Can abstract concepts from LLM improve SLM performance? + https://arxiv.org/abs/2512.19069 + arXiv:2512.19069v1 Announce Type: new +Abstract: Large language models (LLMs) excel at diverse tasks, but their deployment on resource-constrained devices remains challenging. Existing methods like quantization, pruning, and distillation can reduce memory footprint but often demand extensive experimentation and careful infrastructure design. Leveraging existing techniques for extracting high-level concepts (represented as steering vectors) from larger models, we investigate their transferability to smaller language models (SLM) during inference. We demonstrate through extensive experimentation that these concepts can be effectively transferred to smaller models, irrespective of their family (e.g., Phi, Llama, Qwen), leading to performance improvements across a wide range of tasks. Furthermore, we introduce inference-time scaling to enhance performance by dynamically adjusting the steering intensity which has resulted in a 7-15\% of accuracy improvement for Qwen3-0.6B. 
+ oai:arXiv.org:2512.19069v1 cs.AI - cs.ET - cs.HC - cs.MA - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Evangelos Pournaras, Srijoni Majumdar, Thomas Wellings, Joshua C. Yang, Fatemeh B. Heravan, Regula H\"anggli Fricker, Dirk Helbing - - - Strategic Planning and Rationalizing on Trees Make LLMs Better Debaters - https://arxiv.org/abs/2505.14886 - arXiv:2505.14886v2 Announce Type: replace -Abstract: Winning competitive debates requires sophisticated reasoning and argument skills. There are unique challenges in the competitive debate: (1) The time constraints force debaters to make strategic choices about which points to pursue rather than covering all possible arguments; (2) The persuasiveness of the debate relies on the back-and-forth interaction between arguments, which a single final game status cannot evaluate. To address these challenges, we propose TreeDebater, a novel debate framework that excels in competitive debate. We introduce two tree structures: the Rehearsal Tree and Debate Flow Tree. The Rehearsal Tree anticipates the attack and defenses to evaluate the strength of the claim, while the Debate Flow Tree tracks the debate status to identify the active actions. TreeDebater allocates its time budget among candidate actions and uses the speech time controller and feedback from the simulated audience to revise its statement. The human evaluation on both the stage-level and the debate-level comparison shows that our TreeDebater outperforms the state-of-the-art multi-agent debate system, with a +15.6% improvement in stage-level persuasiveness with DeepSeek and +10% debate-level opinion shift win. Further investigation shows that TreeDebater shows better strategies in limiting time to important debate actions, aligning with the strategies of human debate experts. 
- oai:arXiv.org:2505.14886v2 - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Danqing Wang, Zhuorui Ye, Xinran Zhao, Fei Fang, Lei Li + Siddharth Tandon - VideoGameQA-Bench: Evaluating Vision-Language Models for Video Game Quality Assurance - https://arxiv.org/abs/2505.15952 - arXiv:2505.15952v2 Announce Type: replace -Abstract: With video games now generating the highest revenues in the entertainment industry, optimizing game development workflows has become essential for the sector's sustained growth. Recent advancements in Vision-Language Models (VLMs) offer considerable potential to automate and enhance various aspects of game development, particularly Quality Assurance (QA), which remains one of the industry's most labor-intensive processes with limited automation options. To accurately evaluate the performance of VLMs in video game QA tasks and determine their effectiveness in handling real-world scenarios, there is a clear need for standardized benchmarks, as existing benchmarks are insufficient to address the specific requirements of this domain. To bridge this gap, we introduce VideoGameQA-Bench, a comprehensive benchmark that covers a wide array of game QA activities, including visual unit testing, visual regression testing, needle-in-a-haystack tasks, glitch detection, and bug report generation for both images and videos of various games. Code and data are available at: https://asgaardlab.github.io/videogameqa-bench/ - oai:arXiv.org:2505.15952v2 + Watch Closely: Mitigating Object Hallucinations in Large Vision-Language Models with Disentangled Decoding + https://arxiv.org/abs/2512.19070 + arXiv:2512.19070v1 Announce Type: new +Abstract: Large Vision-Language Models (LVLMs) bridge the gap between visual and linguistic modalities, demonstrating strong potential across a variety of domains. 
However, despite significant progress, LVLMs still suffer from severe hallucination issues in object recognition tasks. These models often fail to accurately identify certain objects, leading to text generation that appears fluent but does not correspond to the visual content, which can have serious consequences in real-world applications. Recently, several methods have been proposed to alleviate LVLM hallucinations, but most focus solely on reducing hallucinations in the language modality. To mitigate hallucinations in both the language and visual modalities, we introduce Hallucination Disentangled Decoding (HDD) method that requires no training. HDD enhances the original image by segmenting it and selecting images that augment the original, while also utilizing a blank image to eliminate language prior hallucinations in both the original and segmented images. This design not only reduces the model's dependence on language priors but also enhances its visual performance. (Code: https://github.com/rickeyhhh/Hallucination-Disentangled-Decoding) + oai:arXiv.org:2512.19070v1 cs.CV - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Mohammad Reza Taesiri, Abhijay Ghildyal, Saman Zadtootaghaj, Nabajeet Barman, Cor-Paul Bezemer + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ruiqi Ma, Yu Yan, Chunhong Zhang, Minghao Yin, XinChao Liu, Zhihong Jin, Zheng Hu - PEAR: Equal Area Weather Forecasting on the Sphere - https://arxiv.org/abs/2505.17720 - arXiv:2505.17720v2 Announce Type: replace -Abstract: Artificial intelligence is rapidly reshaping the natural sciences, with weather forecasting emerging as a flagship AI4Science application where machine learning models can now rival and even surpass traditional numerical simulations. 
Following the success of the landmark models Pangu Weather and Graphcast, outperforming traditional numerical methods for global medium-range forecasting, many novel data-driven methods have emerged. A common limitation shared by many of these models is their reliance on an equiangular discretization of the sphere which suffers from a much finer grid at the poles than around the equator. In contrast, in the Hierarchical Equal Area iso-Latitude Pixelization (HEALPix) of the sphere, each pixel covers the same surface area, removing unphysical biases. Motivated by a growing support for this grid in meteorology and climate sciences, we propose to perform weather forecasting with deep learning models which natively operate on the HEALPix grid. To this end, we introduce Pangu Equal ARea (PEAR), a transformer-based weather forecasting model which operates directly on HEALPix-features and outperforms the corresponding model on an equiangular grid without any computational overhead. - oai:arXiv.org:2505.17720v2 - cs.LG - physics.ao-ph - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Hampus Linander, Christoffer Petersson, Daniel Persson, Jan E. Gerken + Optimal 3D Directional WPT Charging via UAV for 3D Wireless Rechargeable Sensor Networks + https://arxiv.org/abs/2512.19075 + arXiv:2512.19075v1 Announce Type: new +Abstract: The high mobility and flexible deployment capability of UAVs make them an impressive option for charging nodes in Wireless Rechargeable Sensor Networks (WRSNs) using Directional Wireless Power Transfer (WPT) technology. However, existing studies largely focus on 2D-WRSNs, lacking designs catering to real 3D-WRSNs. The spatial distribution characteristics of nodes in a 3D-WRSN further increase the complexity of the charging scheduling task, thus requiring a systematic framework to solve this problem. 
In this paper, we investigated the Directional UAV Charging Scheduling problem for 3D-WRSNs (DCS-3D) and established its NP-hard property, and then proposed a three-step framework named as directional charging scheduling algorithm using Functional Equivalent (FuncEqv) direction set and Lin-Kernighan heuristic (LKH) for 3D-WRSNs (FELKH-3D) to solve it. In FELKH-3D, the challenge of infinite charging direction space is solved by designing an algorithm generating a minimum-size direction set guaranteed to be FuncEqv to the infinite set of the whole sphere surface, and the optimality of the method was proved. To determine the optimal charging tour for the UAV, the LKH algorithm is employed. Simulation experiments demonstrated the superiority of FELKH-3D over other classical algorithms.
 oai:arXiv.org:2512.19075v1
 cs.NI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Zhenguo Gao, Hui Li, Yiqin Chen, Qingyu Gao, Zhufang Kuang, Shih-Hau Fang, Hsiao-Chun Wu


 ResSVD: Residual Compensated SVD for Large Language Model Compression
 https://arxiv.org/abs/2505.20112
 arXiv:2505.20112v3 Announce Type: replace
Abstract: Large language models (LLMs) have demonstrated impressive capabilities in a wide range of downstream natural language processing tasks. Nevertheless, their considerable sizes and memory demands hinder practical deployment, underscoring the importance of developing efficient compression strategies. Singular value decomposition (SVD) decomposes a matrix into orthogonal components, enabling efficient low-rank approximation. This is particularly suitable for LLM compression, where weight matrices often exhibit significant redundancy. However, current SVD-based methods neglect the residual matrix from truncation, resulting in significant truncation loss. Additionally, compressing all layers of the model results in severe performance degradation. 
To overcome these limitations, we propose ResSVD, a new post-training SVD-based LLM compression method. Specifically, we leverage the residual matrix generated during the truncation process to reduce truncation loss. Moreover, under a fixed overall compression ratio, we selectively compress the last few layers of the model, which mitigates error propagation and significantly improves the performance of compressed models. Comprehensive evaluations of ResSVD on diverse LLM families and multiple benchmark datasets indicate that ResSVD consistently achieves superior performance over existing counterpart methods, demonstrating its practical effectiveness. - oai:arXiv.org:2505.20112v3 - cs.CL + Population-Evolve: a Parallel Sampling and Evolutionary Method for LLM Math Reasoning + https://arxiv.org/abs/2512.19081 + arXiv:2512.19081v1 Announce Type: new +Abstract: Test-time scaling has emerged as a promising direction for enhancing the reasoning capabilities of Large Language Models in last few years. In this work, we propose Population-Evolve, a training-free method inspired by Genetic Algorithms to optimize LLM reasoning. Our approach maintains a dynamic population of candidate solutions for each problem via parallel reasoning. By incorporating an evolve prompt, the LLM self-evolves its population in all iterations. Upon convergence, the final answer is derived via majority voting. Furthermore, we establish a unification framework that interprets existing test-time scaling strategies through the lens of genetic algorithms. Empirical results demonstrate that Population-Evolve achieves superior accuracy with low performance variance and computational efficiency. Our findings highlight the potential of evolutionary strategies to unlock the reasoning power of LLMs during inference. 
+ oai:arXiv.org:2512.19081v1 cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Haolei Bai, Siyong Jian, Tuo Liang, Yu Yin, Huan Wang + Yanzhi Zhang, Yitong Duan, Zhaoxi Zhang, Jiyan He, Shuxin Zheng - Continual Learning on CLIP via Incremental Prompt Tuning with Intrinsic Textual Anchors - https://arxiv.org/abs/2505.20680 - arXiv:2505.20680v3 Announce Type: replace -Abstract: Continual learning (CL) enables deep networks to acquire new knowledge while avoiding catastrophic forgetting. The powerful generalization ability of pre-trained models (PTMs), such as the Contrastive Language-Image Pre-training (CLIP) model, has inspired a range of CL methods targeting new and specialized tasks, providing rich multi-modal embeddings that support lightweight, incremental prompt tuning. Existing methods often rely on complex designs built upon specific assumptions, such as intricate regularization schemes for prompt pools, specialized routing mechanisms, or multi-stage incrementations, that introduce additional-and possibly unnecessary-complexity, underutilizing CLIP's intrinsic capabilities. In this paper, we propose a concise CL approach for CLIP based on incremental prompt tuning that fully exploits its multi-modal structure and the stability of textual representations. Our method, Textual Prototype-guided Prompt Tuning (TPPT), introduces textual prototypes not merely as static classifiers, as in existing methods, but as stable anchors to guide the learning of visual prompts, thereby shaping the embedding space (i.e., TPPT-V). We show that our bidirectional supervision strategy enables more effective learning of new knowledge while reducing forgetting. To further close the vision-language gap during CL, we jointly optimizes visual and textual prompts (i.e., TPPT-VT). 
We also introduce a relational diversity regularization on the textual anchors to prevent embedding space collapse and mitigate correlated forgetting. Extensive experiments and analyses demonstrate the effectiveness of our proposed approach, highlighting the benefits of leveraging CLIP's intrinsic guidance for continual adaptation. - oai:arXiv.org:2505.20680v3 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Haodong Lu, Xinyu Zhang, Kristen Moore, Jason Xue, Lina Yao, Anton van den Hengel, Dong Gong + BEVCooper: Accurate and Communication-Efficient Bird's-Eye-View Perception in Vehicular Networks + https://arxiv.org/abs/2512.19082 + arXiv:2512.19082v1 Announce Type: new +Abstract: Bird's-Eye-View (BEV) is critical to connected and automated vehicles (CAVs) as it can provide unified and precise representation of vehicular surroundings. However, quality of the raw sensing data may degrade in occluded or distant regions, undermining the fidelity of constructed BEV map. In this paper, we propose BEVCooper, a novel collaborative perception framework that can guarantee accurate and low-latency BEV map construction. We first define an effective metric to evaluate the utility of BEV features from neighboring CAVs. Then, based on this, we develop an online learning-based collaborative CAV selection strategy that captures the ever-changing BEV feature utility of neighboring vehicles, enabling the ego CAV to prioritize the most valuable sources under bandwidth-constrained vehicle-to-vehicle (V2V) links. Furthermore, we design an adaptive fusion mechanism that optimizes BEV feature compression based on the environment dynamics and real-time V2V channel quality, effectively balancing feature transmission latency and accuracy of the constructed BEV map. 
Theoretical analysis demonstrates that BEVCooper achieves asymptotically optimal CAV selection and adaptive feature fusion under dynamic vehicular topology and V2V channel conditions. Extensive experiments on a real-world testbed show that, compared with state-of-the-art benchmarks, the proposed BEVCooper enhances BEV perception accuracy by up to $63.18\%$ and reduces end-to-end latency by $67.9\%$, with only $1.8\%$ additional computational overhead.
 + oai:arXiv.org:2512.19082v1
 + cs.NI
 + Tue, 23 Dec 2025 00:00:00 -0500
 + new
 + http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 + Jiawei Hou, Peng Yang, Xiangxiang Dai, Mingliu Liu, Conghao Zhou
 

 

 - Train Sparse Autoencoders Efficiently by Utilizing Features Correlation
 - https://arxiv.org/abs/2505.22255
 - arXiv:2505.22255v2 Announce Type: replace
-Abstract: Sparse Autoencoders (SAEs) have demonstrated significant promise in interpreting the hidden states of language models by decomposing them into interpretable latent directions. However, training and interpreting SAEs at scale remains challenging, especially when large dictionary sizes are used. While decoders can leverage sparse-aware kernels for efficiency, encoders still require computationally intensive linear operations with large output dimensions. To address this, we propose KronSAE, a novel architecture that factorizes the latent representation via Kronecker product decomposition, drastically reducing memory and computational overhead. Furthermore, we introduce mAND, a differentiable activation function approximating the binary AND operation, which improves interpretability and performance in our factorized framework. 
- oai:arXiv.org:2505.22255v2 - cs.LG - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Vadim Kurochkin, Yaroslav Aksenov, Daniil Laptev, Daniil Gavrilov, Nikita Balagansky + CoDrone: Autonomous Drone Navigation Assisted by Edge and Cloud Foundation Models + https://arxiv.org/abs/2512.19083 + arXiv:2512.19083v1 Announce Type: new +Abstract: Autonomous navigation for Unmanned Aerial Vehicles faces key challenges from limited onboard computational resources, which restrict deployed deep neural networks to shallow architectures incapable of handling complex environments. Offloading tasks to remote edge servers introduces high latency, creating an inherent trade-off in system design. To address these limitations, we propose CoDrone - the first cloud-edge-end collaborative computing framework integrating foundation models into autonomous UAV cruising scenarios - effectively leveraging foundation models to enhance performance of resource-constrained unmanned aerial vehicle platforms. To reduce onboard computation and data transmission overhead, CoDrone employs grayscale imagery for the navigation model. When enhanced environmental perception is required, CoDrone leverages the edge-assisted foundation model Depth Anything V2 for depth estimation and introduces a novel one-dimensional occupancy grid-based navigation method - enabling fine-grained scene understanding while advancing efficiency and representational simplicity of autonomous navigation. A key component of CoDrone is a Deep Reinforcement Learning-based neural scheduler that seamlessly integrates depth estimation with autonomous navigation decisions, enabling real-time adaptation to dynamic environments. Furthermore, the framework introduces a UAV-specific vision language interaction module incorporating domain-tailored low-level flight primitives to enable effective interaction between the cloud foundation model and the UAV. 
The introduction of VLM enhances open-set reasoning capabilities in complex unseen scenarios. Experimental results show CoDrone outperforms baseline methods under varying flight speeds and network conditions, achieving a 40% increase in average flight distance and a 5% improvement in average Quality of Navigation. + oai:arXiv.org:2512.19083v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Pengyu Chen, Tao Ouyang, Ke Luo, Weijie Hong, Xu Chen - SMELLNET: A Large-scale Dataset for Real-world Smell Recognition - https://arxiv.org/abs/2506.00239 - arXiv:2506.00239v4 Announce Type: replace -Abstract: The ability of AI to sense and identify various substances based on their smell alone can have profound impacts on allergen detection (e.g., smelling gluten or peanuts in a cake), monitoring the manufacturing process, and sensing hormones that indicate emotional states, stress levels, and diseases. Despite these broad impacts, there are virtually no large-scale benchmarks, and therefore little progress, for training and evaluating AI systems' ability to smell in the real world. In this paper, we use small gas and chemical sensors to create SmellNet, the first large-scale database that digitizes a diverse range of smells in the natural world. SmellNet contains about 828,000 data points across 50 substances, spanning nuts, spices, herbs, fruits, and vegetables, and 43 mixtures among them, with 68 hours of data collected. Using SmellNet, we developed ScentFormer, a Transformer-based architecture combining temporal differencing and sliding-window augmentation for smell data. For the SmellNet-Base classification task, ScentFormer achieves 58.5% Top-1 accuracy, and for the SmellNet-Mixture distribution prediction task, ScentFormer achieves 50.2% Top-1@0.1 on the test-seen split. 
ScentFormer's ability to generalize across conditions and capture transient chemical dynamics demonstrates the promise of temporal modeling in olfactory AI. SmellNet and ScentFormer lay the groundwork for real-world olfactory applications across healthcare, food and beverage, environmental monitoring, manufacturing, and entertainment. - oai:arXiv.org:2506.00239v4 + $\gamma(3,4)$ `Attention' in Cognitive Agents: Ontology-Free Knowledge Representations With Promise Theoretic Semantics + https://arxiv.org/abs/2512.19084 + arXiv:2512.19084v1 Announce Type: new +Abstract: The semantics and dynamics of `attention' are closely related to promise theoretic notions developed for autonomous agents and can thus easily be written down in promise framework. In this way one may establish a bridge between vectorized Machine Learning and Knowledge Graph representations without relying on language models implicitly. Our expectations for knowledge presume a degree of statistical stability, i.e. average invariance under repeated observation, or `trust' in the data. Both learning networks and knowledge graph representations can meaningfully coexist to preserve different aspects of data. While vectorized data are useful for probabilistic estimation, graphs preserve the intentionality of the source even under data fractionation. Using a Semantic Spacetime $\gamma(3,4)$ graph, one avoids complex ontologies in favour of classification of features by their roles in semantic processes. The latter favours an approach to reasoning under conditions of uncertainty. Appropriate attention to causal boundary conditions may lead to orders of magnitude compression of data required for such context determination, as required in the contexts of autonomous robotics, defence deployments, and ad hoc emergency services. 
+ oai:arXiv.org:2512.19084v1
 cs.AI
 - Mon, 22 Dec 2025 00:00:00 -0500
 - replace
 + Tue, 23 Dec 2025 00:00:00 -0500
 + new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Dewei Feng, Wei Dai, Carol Li, Alistair Pernigo, Yunge Wen, Paul Pu Liang
 + Mark Burgess
 

 

 - Mutation-Guided Unit Test Generation with a Large Language Model
 - https://arxiv.org/abs/2506.02954
 - arXiv:2506.02954v5 Announce Type: replace
-Abstract: Unit tests play a vital role in uncovering potential faults in software. While tools like EvoSuite focus on maximizing code coverage, recent advances in large language models (LLMs) have shifted attention toward LLM-based test generation. However, code coverage metrics -- such as line and branch coverage -- remain overly emphasized in reported research, despite being weak indicators of a test suite's fault-detection capability. In contrast, mutation score offers a more reliable and stringent measure, as demonstrated in our findings where some test suites achieve 100% coverage but only 4% mutation score. Although a few studies consider mutation score, the effectiveness of LLMs in killing mutants remains underexplored. In this paper, we propose MUTGEN, a mutation-guided, LLM-based test generation approach that incorporates mutation feedback directly into the prompt. Evaluated on 204 subjects from two benchmarks, MUTGEN significantly outperforms both EvoSuite and vanilla prompt-based strategies in terms of mutation score. Furthermore, MUTGEN introduces an iterative generation mechanism that pushes the limits of LLMs in killing additional mutants. Our study also provides insights into the limitations of LLM-based generation, analyzing the reasons for live and uncovered mutants, and the impact of different mutation operators on generation effectiveness. 
- oai:arXiv.org:2506.02954v5 - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Retrieving Objects from 3D Scenes with Box-Guided Open-Vocabulary Instance Segmentation + https://arxiv.org/abs/2512.19088 + arXiv:2512.19088v1 Announce Type: new +Abstract: Locating and retrieving objects from scene-level point clouds is a challenging problem with broad applications in robotics and augmented reality. This task is commonly formulated as open-vocabulary 3D instance segmentation. Although recent methods demonstrate strong performance, they depend heavily on SAM and CLIP to generate and classify 3D instance masks from images accompanying the point cloud, leading to substantial computational overhead and slow processing that limit their deployment in real-world settings. Open-YOLO 3D alleviates this issue by using a real-time 2D detector to classify class-agnostic masks produced directly from the point cloud by a pretrained 3D segmenter, eliminating the need for SAM and CLIP and significantly reducing inference time. However, Open-YOLO 3D often fails to generalize to object categories that appear infrequently in the 3D training data. In this paper, we propose a method that generates 3D instance masks for novel objects from RGB images guided by a 2D open-vocabulary detector. Our approach inherits the 2D detector's ability to recognize novel objects while maintaining efficient classification, enabling fast and accurate retrieval of rare instances from open-ended text queries. Our code will be made available at https://github.com/ndkhanh360/BoxOVIS. + oai:arXiv.org:2512.19088v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Guancheng Wang, Qinghua Xu, Lionel C. 
Briand, Kui Liu
 + Khanh Nguyen, Dasith de Silva Edirimuni, Ghulam Mubashar Hassan, Ajmal Mian
 

 

 - A Certified Unlearning Approach without Access to Source Data
 - https://arxiv.org/abs/2506.06486
 - arXiv:2506.06486v3 Announce Type: replace
-Abstract: With the growing adoption of data privacy regulations, the ability to erase private or copyrighted information from trained models has become a crucial requirement. Traditional unlearning methods often assume access to the complete training dataset, which is unrealistic in scenarios where the source data is no longer available. To address this challenge, we propose a certified unlearning framework that enables effective data removal without access to the original training data samples. Our approach utilizes a surrogate dataset that approximates the statistical properties of the source data, allowing for controlled noise scaling based on the statistical distance between the two. While our theoretical guarantees assume knowledge of the exact statistical distance, practical implementations typically approximate this distance, resulting in potentially weaker but still meaningful privacy guarantees. This ensures strong guarantees on the model's behavior post-unlearning while maintaining its overall utility. We establish theoretical bounds, introduce practical noise calibration techniques, and validate our method through extensive experiments on both synthetic and real-world datasets. The results demonstrate the effectiveness and reliability of our approach in privacy-sensitive settings. 
- oai:arXiv.org:2506.06486v3
 - cs.LG
 - cs.CR
 - stat.ML
 - Mon, 22 Dec 2025 00:00:00 -0500
 - replace
 http://creativecommons.org/licenses/by/4.0/
 - Umit Yigit Basaran, Sk Miraj Ahmed, Amit Roy-Chowdhury, Basak Guler
 

 

 + JoyVoice: Long-Context Conditioning for Anthropomorphic Multi-Speaker Conversational Synthesis
 + https://arxiv.org/abs/2512.19090
 + arXiv:2512.19090v1 Announce Type: new
+Abstract: Large speech generation models are evolving from single-speaker, short sentence synthesis to multi-speaker, long conversation generation. Current long-form speech generation models are predominantly constrained to dyadic, turn-based interactions. To address this, we introduce JoyVoice, a novel anthropomorphic foundation model designed for flexible, boundary-free synthesis of up to eight speakers. Unlike conventional cascaded systems, JoyVoice employs a unified E2E-Transformer-DiT architecture that utilizes autoregressive hidden representations directly for diffusion inputs, enabling holistic end-to-end optimization. We further propose an MM-Tokenizer operating at a low bitrate of 12.5 Hz, which integrates multitask semantic and MMSE losses to effectively model both semantic and acoustic information. Additionally, the model incorporates robust text front-end processing via large-scale data perturbation. Experiments show that JoyVoice achieves state-of-the-art results in multilingual generation (Chinese, English, Japanese, Korean) and zero-shot voice cloning. JoyVoice achieves top-tier results on both the Seed-TTS-Eval Benchmark and multi-speaker long-form conversational voice cloning tasks, demonstrating superior audio quality and generalization. It achieves significant improvements in prosodic continuity for long-form speech, rhythm richness in multi-speaker conversations, paralinguistic naturalness, and superior intelligibility. 
We encourage readers to listen to the demo at https://jea-speech.github.io/JoyVoice + oai:arXiv.org:2512.19090v1 + cs.SD + eess.AS + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-sa/4.0/ + Fan Yu, Tao Wang, You Wu, Lin Zhu, Wei Deng, Weisheng Han, Wenchao Wang, Lin Hu, Xiangyu Liang, Xiaodong He, Yankun Huang, Yu Gu, Yuan Liu, Yuxuan Wang, Zhangyu Xiao, Ziteng Wang, Boya Dong, Feng Dang, Jinming Chen, Jingdong Li, Jun Wang, Yechen Jin, Yuan Zhang, Zhengyan Sheng, Xin Wang - LLM-as-a-qualitative-judge: automating error analysis in natural language generation - https://arxiv.org/abs/2506.09147 - arXiv:2506.09147v4 Announce Type: replace -Abstract: Prompting large language models (LLMs) to evaluate generated text, known as LLM-as-a-judge, has become a standard evaluation approach in natural language generation (NLG), but is primarily used as a quantitative tool, i.e. with numerical scores as main outputs. In this work, we propose LLM-as-a-qualitative-judge, an LLM-based evaluation approach with the main output being a structured report of common issue types in the NLG system outputs. Our approach is targeted at providing developers with meaningful insights on what improvements can be done to a given NLG system and consists of two main steps, namely open-ended per-instance issue analysis and clustering of the discovered issues using an intuitive cumulative algorithm. We also introduce a strategy for evaluating the proposed approach, coupled with ~300 annotations of issues in instances from 12 NLG datasets. Our results show that instance-specific issues output by LLM-as-a-qualitative-judge match those annotated by humans in 2/3 cases, and that LLM-as-a-qualitative-judge is capable of producing error type reports resembling the reports composed by human annotators. We also demonstrate in a case study how the use of LLM-as-a-qualitative-judge can substantially improve NLG systems performance. 
Our code and data are publicly available at https://github.com/tunde-ajayi/llm-as-a-qualitative-judge. - oai:arXiv.org:2506.09147v4 - cs.CL - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Nadezhda Chirkova, Tunde Oluwaseyi Ajayi, Seth Aycock, Zain Muhammad Mujahid, Vladana Perli\'c, Ekaterina Borisova, Markarit Vartampetian + Auditing Significance, Metric Choice, and Demographic Fairness in Medical AI Challenges + https://arxiv.org/abs/2512.19091 + arXiv:2512.19091v1 Announce Type: new +Abstract: Open challenges have become the de facto standard for comparative ranking of medical AI methods. Despite their importance, medical AI leaderboards exhibit three persistent limitations: (1) score gaps are rarely tested for statistical significance, so rank stability is unknown; (2) single averaged metrics are applied to every organ, hiding clinically important boundary errors; (3) performance across intersecting demographics is seldom reported, masking fairness and equity gaps. We introduce RankInsight, an open-source toolkit that seeks to address these limitations. RankInsight (1) computes pair-wise significance maps that show the nnU-Net family outperforms Vision-Language and MONAI submissions with high statistical certainty; (2) recomputes leaderboards with organ-appropriate metrics, reversing the order of the top four models when Dice is replaced by NSD for tubular structures; and (3) audits intersectional fairness, revealing that more than half of the MONAI-based entries have the largest gender-race discrepancy on our proprietary Johns Hopkins Hospital dataset. The RankInsight toolkit is publicly released and can be directly applied to past, ongoing, and future challenges. It enables organizers and participants to publish rankings that are statistically sound, clinically meaningful, and demographically fair. 
+ oai:arXiv.org:2512.19091v1 + cs.CV + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Ariel Lubonja, Pedro R. A. S. Bassi, Wenxuan Li, Hualin Qiao, Randal Burns, Alan L. Yuille, Zongwei Zhou - The Diffusion Duality - https://arxiv.org/abs/2506.10892 - arXiv:2506.10892v3 Announce Type: replace -Abstract: Uniform-state discrete diffusion models hold the promise of fast text generation due to their inherent ability to self-correct. However, they are typically outperformed by autoregressive models and masked diffusion models. In this work, we narrow this performance gap by leveraging a key insight: Uniform-state diffusion processes naturally emerge from an underlying Gaussian diffusion. Our method, Duo, transfers powerful techniques from Gaussian diffusion to improve both training and sampling. First, we introduce a curriculum learning strategy guided by the Gaussian process, doubling training speed by reducing variance. Models trained with curriculum learning surpass autoregressive models in zero-shot perplexity on 3 of 7 benchmarks. Second, we present Discrete Consistency Distillation, which adapts consistency distillation from the continuous to the discrete setting. This algorithm unlocks few-step generation in diffusion language models by accelerating sampling by two orders of magnitude. We provide the code, model checkpoints, and video tutorials on the project page: http://s-sahoo.github.io/duo - oai:arXiv.org:2506.10892v3 - cs.LG - cs.AI + A Large Language Model Based Method for Complex Logical Reasoning over Knowledge Graphs + https://arxiv.org/abs/2512.19092 + arXiv:2512.19092v1 Announce Type: new +Abstract: Reasoning over knowledge graphs (KGs) with first-order logic (FOL) queries is challenging due to the inherent incompleteness of real-world KGs and the compositional complexity of logical query structures. 
Most existing methods rely on embedding entities and relations into continuous geometric spaces and answer queries via differentiable set operations. While effective for simple query patterns, these approaches often struggle to generalize to complex queries involving multiple operators, deeper reasoning chains, or heterogeneous KG schemas. We propose ROG (Reasoning Over knowledge Graphs with large language models), an ensemble-style framework that combines query-aware KG neighborhood retrieval with large language model (LLM)-based chain-of-thought reasoning. ROG decomposes complex FOL queries into sequences of simpler sub-queries, retrieves compact, query-relevant subgraphs as contextual evidence, and performs step-by-step logical inference using an LLM, avoiding the need for task-specific embedding optimization. Experiments on standard KG reasoning benchmarks demonstrate that ROG consistently outperforms strong embedding-based baselines in terms of mean reciprocal rank (MRR), with particularly notable gains on high-complexity query types. These results suggest that integrating structured KG retrieval with LLM-driven logical reasoning offers a robust and effective alternative for complex KG reasoning tasks. + oai:arXiv.org:2512.19092v1 cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Subham Sekhar Sahoo, Justin Deschenaux, Aaron Gokaslan, Guanghan Wang, Justin Chiu, Volodymyr Kuleshov + Ziyan Zhang, Chao Wang, Zhuo Chen, Lei Chen, Chiyi Li, Kai Song - OntoGSN: An Ontology-Based Framework for Semantic Management and Extension of Assurance Cases - https://arxiv.org/abs/2506.11023 - arXiv:2506.11023v2 Announce Type: replace -Abstract: Assurance cases (ACs) are a common artifact for building and maintaining confidence in system properties such as safety or robustness. 
Constructing an AC can be challenging, although existing tools provide support in static, document-centric applications and methods for dynamic contexts (e.g., autonomous driving) are emerging. Unfortunately, managing ACs remains a challenge, since maintaining the embedded knowledge in the face of changes requires substantial effort, in the process deterring developers - or worse, producing poorly managed cases that instill false confidence. To address this, we present OntoGSN: an ontology and supporting middleware for managing ACs in the Goal Structuring Notation (GSN) standard. OntoGSN offers a knowledge representation and a queryable graph that can be automatically populated, evaluated, and updated. Our contributions include: a 1:1 formalization of the GSN Community Standard v3 in an OWL ontology with SWRL rules; a helper ontology and parser for integration with a widely used AC tool; a repository and documentation of design decisions for OntoGSN maintenance; a SPARQL query library with automation patterns; and a prototypical interface. The ontology strictly adheres to the standard's text and has been evaluated according to FAIR principles, the OOPS framework, competency questions, and community feedback. The development of other middleware elements is guided by the community needs and subject to ongoing evaluations. To demonstrate the utility of our contributions, we illustrate dynamic AC management in an example involving assurance of adversarial robustness in large language models. - oai:arXiv.org:2506.11023v2 + Tool-Augmented Hybrid Ensemble Reasoning with Distillation for Bilingual Mathematical Problem Solving + https://arxiv.org/abs/2512.19093 + arXiv:2512.19093v1 Announce Type: new +Abstract: Bilingual mathematical problem solving needs a clear link between language reasoning and symbolic calculation. Large language models often handle language well but are weak in accurate computation. 
This paper presents HERALD (Hybrid Ensemble Reasoning with Adaptive Learning and Distillation), a framework that joins reasoning and calculation using NuminaMath-7B-TIR, GPT-4o, and Mistral-7B. HERALD uses adaptive routing, tool-based reinforcement learning, and knowledge distillation to connect different reasoning paths. Confidence calibration keeps weighting stable, and dual-path checking keeps results correct. Reinforcement learning controls tool use to cut redundancy, and distillation lowers delay without hurting accuracy. The system shows that combining symbolic checking, adaptive ensembles, and bilingual fine-tuning helps achieve both fluent reasoning and precise calculation. HERALD offers a practical solution for multilingual mathematical reasoning with better accuracy, stability, and clarity. + oai:arXiv.org:2512.19093v1 cs.AI - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Tomas Bueno Momcilovic, Barbara Gallina, Ingmar Kessler, Jule Hendricks, Dian Balta - - - Relative Error Fair Clustering in the Weak-Strong Oracle Model - https://arxiv.org/abs/2506.12287 - arXiv:2506.12287v2 Announce Type: replace -Abstract: We study fair clustering problems in a setting where distance information is obtained from two sources: a strong oracle providing exact distances, but at a high cost, and a weak oracle providing potentially inaccurate distance estimates at a low cost. The goal is to produce a near-optimal fair clustering on $n$ input points with a minimum number of strong oracle queries. This models the increasingly common trade-off between accurate but expensive similarity measures (e.g., large-scale embeddings) and cheaper but inaccurate alternatives. The study of fair clustering in the model is motivated by the important quest of achieving fairness with the presence of inaccurate information. 
We achieve the first $(1+\varepsilon)$-coresets for fair $k$-median clustering using $\text{poly}\left(\frac{k}{\varepsilon}\cdot\log n\right)$ queries to the strong oracle. Furthermore, our results imply coresets for the standard setting (without fairness constraints), and we could in fact obtain $(1+\varepsilon)$-coresets for $(k,z)$-clustering for general $z=O(1)$ with a similar number of strong oracle queries. In contrast, previous results achieved a constant-factor $(>10)$ approximation for the standard $k$-clustering problems, and no previous work considered the fair $k$-median clustering problem. - oai:arXiv.org:2506.12287v2 - cs.DS - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Vladimir Braverman, Prathamesh Dharangutte, Shaofeng H. -C. Jiang, Hoai-An Nguyen, Chen Wang, Yubo Zhang, Samson Zhou + Peiqing Lu, Yuan Zhang, Haoyun Zhang, Jiasen Zheng, Kejian Tong, Wenjun Wu - Evaluation of Nuclear Microreactor Cost-competitiveness in Current Electricity Markets Considering Reactor Cost Uncertainties - https://arxiv.org/abs/2506.13361 - arXiv:2506.13361v2 Announce Type: replace -Abstract: This paper evaluates the cost competitiveness of microreactors in today's electricity markets, with a focus on uncertainties in reactor costs. A Genetic Algorithm (GA) is used to optimize key technical parameters, such as reactor capacity, fuel enrichment, tail enrichment, refueling interval, and discharge burnup, to minimize the Levelized Cost of Energy (LCOE). Base case results are validated using Simulated Annealing (SA). By incorporating Probability Distribution Functions (PDFs) for fuel cycle costs, the study identifies optimal configurations under uncertainty. Methodologically, it introduces a novel framework combining probabilistic cost modeling with evolutionary optimization. 
Results show that microreactors can remain cost-competitive, with LCOEs ranging from \$48.21/MWh to \$78.32/MWh when supported by the Production Tax Credit (PTC). High reactor capacity, low fuel enrichment, moderate tail enrichment and refueling intervals, and high discharge burnup enhance cost efficiency. Among all factors, overnight capital cost (OCC) has the most significant impact on LCOE, while O&M and fuel cost uncertainties have lesser effects. The analysis highlights how energy policies like the PTC can reduce LCOE by 22-24%, improving viability despite cost variability. Compared to conventional nuclear, coal, and renewable sources like offshore wind, hydro, and biomass, optimized microreactors show strong economic potential. This research defines a realistic design space and key trade-offs, offering actionable insights for policymakers, reactor designers, and energy planners aiming to accelerate the deployment of affordable, sustainable microreactors. - oai:arXiv.org:2506.13361v2 - cs.NE - physics.soc-ph - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - 10.1016/j.nucengdes.2025.114295 - Nuclear Engineering and Design 443 (2025) 114295 - Muhammad R. Abdussami, Ikhwan Khaleb, Fei Gao, Aditi Verma + Low-Latency and Low-Complexity MLSE for Short-Reach Optical Interconnects + https://arxiv.org/abs/2512.19094 + arXiv:2512.19094v1 Announce Type: new +Abstract: To meet the high-speed, low-latency, and low-complexity demand for optical interconnects, simplified layered 2-step maximum likelihood sequence estimation (L2S-MLSE) is proposed in this paper. Simplified L2S-MLSE combines computational simplification and reduced state in L2S-MLSE. L2S-MLSE with a parallel sliding block architecture reduces latency from linear order to logarithmic order. Computational simplification reduces the number of multipliers from exponential order to linear order. 
Incorporating the reduced state with computational simplification further decreases the number of adders and comparators. The simplified L2S-MLSE is evaluated in a 112-Gbit/s PAM4 transmission over 2-km standard single-mode fiber. Experimental results show that the simplified L2S-MLSE significantly outperforms the FFE-only case in bit error ratio (BER) performance. Compared with simplified 1-step MLSE, the latency of simplified L2S-MLSE is reduced from 34 delay units in linear order to 7 delay units in logarithmic order. The simplified scheme in L2S-MLSE reduces the number of variable multipliers from 512 in exponential order to 33 in linear order without BER performance deterioration, while reducing the number of adders and comparators to 37.2% and 8.4%, respectively, with nearly identical BER performance.
 + oai:arXiv.org:2512.19094v1
 + cs.IT
 + math.IT
 + Tue, 23 Dec 2025 00:00:00 -0500
 + new
 + http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 + Mengqi Guo, Ji Zhou, Haide Wang, Changyuan Yu, Xiangjun Xin, Liangchuan Li
 

 

 - Set-theoretic solution for the tuning problem
 - https://arxiv.org/abs/2506.13969
 - arXiv:2506.13969v2 Announce Type: replace
-Abstract: In this paper I want to suggest a new solution to the problem of musical tuning. On one hand, I see it as a generalization of Just Intonation (JI) to inharmonic timbres, on another, as a unification of spectral interference and harmonicity contributions to consonance within a single framework. The main achievement of the work is the ability to mathematically quantify the phenomenon of musical consonance using set theory. That quantification is done by defining two measures of consonance: affinity and harmonicity. These measures naturally generate sets of intervals that can be used as dynamic tuning systems. The paper is aimed at a broad audience of people who may not be skilled in music and tuning theory or mathematics. 
Thus, I attempt to give as many details and explanations as I can, while keeping the number of pages as low as possible.
 - oai:arXiv.org:2506.13969v2
 - cs.SD
 - eess.AS
 - Mon, 22 Dec 2025 00:00:00 -0500
 - replace
 http://creativecommons.org/licenses/by/4.0/
 - Vsevolod Vladimirovich Deriushkin
 

 

 + Mamba-Based Modality Disentanglement Network for Multi-Contrast MRI Reconstruction
 + https://arxiv.org/abs/2512.19095
 + arXiv:2512.19095v1 Announce Type: new
+Abstract: Magnetic resonance imaging (MRI) is a cornerstone of modern clinical diagnosis, offering unparalleled soft-tissue contrast without ionizing radiation. However, prolonged scan times remain a major barrier to patient throughput and comfort. Existing accelerated MRI techniques often struggle with two key challenges: (1) failure to effectively utilize inherent K-space prior information, leading to persistent aliasing artifacts from zero-filled inputs; and (2) contamination of target reconstruction quality by irrelevant information when employing multi-contrast fusion strategies. To overcome these challenges, we present MambaMDN, a dual-domain framework for multi-contrast MRI reconstruction. Our approach first employs fully-sampled reference K-space data to complete the undersampled target data, generating structurally aligned but modality-mixed inputs. Subsequently, we develop a Mamba-based modality disentanglement network to extract and remove reference-specific features from the mixed representation. Furthermore, we introduce an iterative refinement mechanism to progressively enhance reconstruction accuracy through repeated feature purification. Extensive experiments demonstrate that MambaMDN can significantly outperform existing multi-contrast reconstruction methods. 
+ oai:arXiv.org:2512.19095v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Vsevolod Vladimirovich Deriushkin + Weiyi Lyu, Xinming Fang, Jun Wang, Jun Shi, Guixu Zhang, Juncheng Li - Multimodal Representation Learning and Fusion - https://arxiv.org/abs/2506.20494 - arXiv:2506.20494v2 Announce Type: replace -Abstract: Multi-modal learning is a fast growing area in artificial intelligence. It tries to help machines understand complex things by combining information from different sources, like images, text, and audio. By using the strengths of each modality, multi-modal learning allows AI systems to build stronger and richer internal representations. These help machines better interpretation, reasoning, and making decisions in real-life situations. This field includes core techniques such as representation learning (to get shared features from different data types), alignment methods (to match information across modalities), and fusion strategies (to combine them by deep learning models). Although there has been good progress, some major problems still remain. Like dealing with different data formats, missing or incomplete inputs, and defending against adversarial attacks. Researchers now are exploring new methods, such as unsupervised or semi-supervised learning, AutoML tools, to make models more efficient and easier to scale. And also more attention on designing better evaluation metrics or building shared benchmarks, make it easier to compare model performance across tasks and domains. As the field continues to grow, multi-modal learning is expected to improve many areas: computer vision, natural language processing, speech recognition, and healthcare. In the future, it may help to build AI systems that can understand the world in a way more like humans, flexible, context aware, and able to deal with real-world complexity. 
- oai:arXiv.org:2506.20494v2 - cs.LG - cs.MM - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Qihang Jin, Enze Ge, Yuhang Xie, Hongying Luo, Junhao Song, Ziqian Bi, Chia Xin Liang, Jibin Guan, Joe Yeong, Xinyuan Song, Junfeng Hao + Conditioning Accept-Desirability models in the context of AGM-like belief change + https://arxiv.org/abs/2512.19096 + arXiv:2512.19096v1 Announce Type: new +Abstract: We discuss conditionalisation for Accept-Desirability models in an abstract decision-making framework, where uncertain rewards live in a general linear space, and events are special projection operators on that linear space. This abstract setting allows us to unify classical and quantum probabilities, and extend them to an imprecise probabilities context. We introduce a new conditioning rule for our Accept-Desirability models, based on the idea that observing an event introduces new indifferences between options. We associate a belief revision operator with our conditioning rule, and investigate which of the AGM axioms for belief revision still hold in our more general framework. We investigate two interesting special cases where all of these axioms are shown to still hold: classical propositional logic and full conditional probabilities. + oai:arXiv.org:2512.19096v1 + cs.AI + math.LO + math.PR + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Kathelijne Coussement, Gert de Cooman, Keano De Vos - The kernel of graph indices for vector search - https://arxiv.org/abs/2506.20584 - arXiv:2506.20584v2 Announce Type: replace -Abstract: The most popular graph indices for vector search use principles from computational geometry to build the graph. Hence, their formal graph navigability guarantees are only valid in Euclidean space. 
In this work, we show that machine learning can be used to build graph indices for vector search in metric and non-metric vector spaces (e.g., for inner product similarity). From this novel perspective, we introduce the Support Vector Graph (SVG), a new type of graph index that leverages kernel methods to establish the graph connectivity and that comes with formal navigability guarantees valid in metric and non-metric vector spaces. In addition, we interpret the most popular graph indices, including HNSW and DiskANN, as particular specializations of SVG and show that new navigable indices can be derived from the principles behind this specialization. Finally, we propose SVG-L0 that incorporates an $\ell_0$ sparsity constraint into the SVG kernel method to build graphs with a bounded out-degree. This yields a principled way of implementing this practical requirement, in contrast to the traditional heuristic of simply truncating the out-edges of each node. Additionally, we show that SVG-L0 has a self-tuning property that avoids the heuristic of using a set of candidates to find the out-edges of each node and that keeps its computational complexity in check.
 - oai:arXiv.org:2506.20584v2
 cs.LG
 Mon, 22 Dec 2025 00:00:00 -0500
 replace
 http://creativecommons.org/licenses/by/4.0/
 Mariano Tepper, Ted Willke
 + DIVER-1 : Deep Integration of Vast Electrophysiological Recordings at Scale
 + https://arxiv.org/abs/2512.19097
 + arXiv:2512.19097v1 Announce Type: new
+Abstract: Electrophysiology signals such as EEG and iEEG are central to neuroscience, brain-computer interfaces, and clinical applications, yet existing foundation models remain limited in scale despite clear evidence that scaling improves performance. We introduce DIVER-1, a family of EEG and iEEG foundation models trained on the largest and most diverse corpus to date, comprising 5.3k hours of iEEG and 54k hours of EEG (1.6M channel-hours from over 17.7k subjects), and scaled up to 1.82B parameters. 
We present the first systematic scaling law analysis for this domain, showing that these models follow data-constrained scaling laws: for a given amount of data and compute, smaller models trained for extended epochs consistently outperform larger models trained briefly. This behavior contrasts with prior electrophysiology foundation models that emphasized model size over training duration. To achieve strong performance, we also design architectural innovations including any-variate attention, sliding temporal conditional positional encoding, and multi-domain reconstruction. DIVER-1 iEEG and EEG models each achieve state-of-the-art performance on their respective benchmarks, establishing concrete guidelines for efficient scaling and resource allocation in electrophysiology foundation model development.
 + oai:arXiv.org:2512.19097v1
 cs.LG
 cs.AI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Danny Dongyeop Han, Yonghyeon Gwon, Ahhyun Lucy Lee, Taeyang Lee, Seong Jin Lee, Jubin Choi, Sebin Lee, Jihyun Bang, Seungju Lee, David Keetae Park, Shinjae Yoo, Chun Kee Chung, Jiook Cha
 - ZKPROV: A Zero-Knowledge Approach to Dataset Provenance for Large Language Models
 - https://arxiv.org/abs/2506.20915
 - arXiv:2506.20915v2 Announce Type: replace
-Abstract: As large language models (LLMs) are used in sensitive fields, accurately verifying their computational provenance without disclosing their training datasets poses a significant challenge, particularly in regulated sectors such as healthcare, which have strict requirements for dataset use. Traditional approaches either incur substantial computational cost to fully verify the entire training process or leak unauthorized information to the verifier. 
Therefore, we introduce ZKPROV, a novel cryptographic framework allowing users to verify that the LLM's responses to their prompts are trained on datasets certified by the authorities that own them. Additionally, it ensures that the dataset's content is relevant to the users' queries without revealing sensitive information about the datasets or the model parameters. ZKPROV offers a unique balance between privacy and efficiency by binding training datasets, model parameters, and responses, while also attaching zero-knowledge proofs to the responses generated by the LLM to validate these claims. Our experimental results demonstrate sublinear scaling for generating and verifying these proofs, with end-to-end overhead under 3.3 seconds for models up to 8B parameters, presenting a practical solution for real-world applications. We also provide formal security guarantees, proving that our approach preserves dataset confidentiality while ensuring trustworthy dataset provenance. - oai:arXiv.org:2506.20915v2 - cs.CR - cs.AI + Dual Model Deep Learning for Alzheimer Prognostication + https://arxiv.org/abs/2512.19099 + arXiv:2512.19099v1 Announce Type: new +Abstract: Disease modifying therapies for Alzheimer's disease demand precise timing decisions, yet current predictive models require longitudinal observations and provide no uncertainty quantification, rendering them impractical at the critical first visit when treatment decisions must be made. We developed PROGRESS (PRognostic Generalization from REsting Static Signatures), a dual-model deep learning framework that transforms a single baseline cerebrospinal fluid biomarker assessment into actionable prognostic estimates without requiring prior clinical history. 
The framework addresses two complementary clinical questions: a probabilistic trajectory network predicts individualized cognitive decline with calibrated uncertainty bounds achieving near-nominal coverage, enabling honest prognostic communication; and a deep survival model estimates time to conversion from mild cognitive impairment to dementia. Using data from over 3,000 participants across 43 Alzheimer's Disease Research Centers in the National Alzheimer's Coordinating Center database, PROGRESS substantially outperforms Cox proportional hazards, Random Survival Forests, and gradient boosting methods for survival prediction. Risk stratification identifies patient groups with seven-fold differences in conversion rates, enabling clinically meaningful treatment prioritization. Leave-one-center-out validation demonstrates robust generalizability, with survival discrimination remaining strong across held-out sites despite heterogeneous measurement conditions spanning four decades of assay technologies. By combining superior survival prediction with trustworthy trajectory uncertainty quantification, PROGRESS bridges the gap between biomarker measurement and personalized clinical decision-making. + oai:arXiv.org:2512.19099v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Mina Namazi, Alexander Nemecek, Erman Ayday + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Alireza Moayedikia, Sara Fin, Uffe Kock Wiil - Optimal solutions employing an algebraic Variational Multiscale approach Part II: Application to Navier-Stokes - https://arxiv.org/abs/2506.21395 - arXiv:2506.21395v3 Announce Type: replace -Abstract: This work presents a non-linear extension of the high-order discretisation framework based on the Variational Multiscale (VMS) method previously introduced for steady linear problems. 
We build on the concept of an optimal projector defined via the symmetric part of the governing operator. Using this idea, we generalise the formulation to the two-dimensional incompressible Navier-Stokes equations. The approach maintains a clear separation between resolved and unresolved scales, with the fine-scale contribution approximated through the approximate fine-scale Green's operator of the associated symmetric operator. This enables a consistent variational treatment of non-linearity while preserving high-order accuracy. We show that the method yields numerical solutions that closely approximate the optimal projection of the continuous/highly-resolved solution and inherits desirable conservation properties. In particular, the formulation guarantees discrete conservation of mass, energy, and vorticity, with enstrophy conservation also achieved when exact or over-integration is employed. Numerical results confirm the methodology's robustness and accuracy, while also demonstrating its computational cost advantage compared to the baseline Galerkin approach for the same accuracy.
 - oai:arXiv.org:2506.21395v3
 math.NA
 cs.NA
 Mon, 22 Dec 2025 00:00:00 -0500
 replace
 http://creativecommons.org/licenses/by/4.0/
 Suyash Shrestha, Marc Gerritsma, Gonzalo Rubio, Steven Hulshoff, Esteban Ferrer
 + Timely Parameter Updating in Over-the-Air Federated Learning
 + https://arxiv.org/abs/2512.19103
 + arXiv:2512.19103v1 Announce Type: new
+Abstract: Incorporating over-the-air computations (OAC) into the model training process of federated learning (FL) is an effective approach to alleviating the communication bottleneck in FL systems. Under OAC-FL, every client modulates its intermediate parameters, such as gradients, onto the same set of orthogonal waveforms and simultaneously transmits the radio signal to the edge server. 
By exploiting the superposition property of multiple-access channels, the edge server can obtain an automatically aggregated global gradient from the received signal. However, the limited number of orthogonal waveforms available in practical systems is fundamentally mismatched with the high dimensionality of modern deep learning models. To address this issue, we propose Freshness-mAgnItude awaRe top-k (FAIR-k), an algorithm that selects, in each communication round, the most impactful subset of gradients to be updated over the air. In essence, FAIR-k combines the complementary strengths of the Round-Robin and Top-k algorithms, striking a delicate balance between timeliness (freshness of parameter updates) and importance (gradient magnitude). Leveraging tools from Markov analysis, we characterize the distribution of parameter staleness under FAIR-k. Building on this, we establish the convergence rate of OAC-FL with FAIR-k, which discloses the joint effect of data heterogeneity, channel noise, and parameter staleness on the training efficiency. Notably, as opposed to conventional analyses that assume a universal Lipschitz constant across all the clients, our framework adopts a finer-grained model of the data heterogeneity. The analysis demonstrates that since FAIR-k promotes fresh (and fair) parameter updates, it not only accelerates convergence but also enhances communication efficiency by enabling an extended period of local training without significantly affecting overall training efficiency.
 + oai:arXiv.org:2512.19103v1
 cs.LG
 cs.DC
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Jiaqi Zhu, Zhongyuan Zhao, Xiao Li, Ruihao Du, Shi Jin, Howard H. 
Yang - OptScale: Probabilistic Optimality for Inference-time Scaling - https://arxiv.org/abs/2506.22376 - arXiv:2506.22376v4 Announce Type: replace -Abstract: Inference-time scaling has emerged as a powerful technique for enhancing the reasoning performance of Large Language Models (LLMs). However, existing approaches often rely on heuristic strategies for parallel sampling, lacking a principled foundation. To address this gap, we propose a probabilistic framework that formalizes the optimality of inference-time scaling under the assumption that parallel samples are independently and identically distributed (i.i.d.), and where the Best-of-$N$ selection strategy follows a probability distribution that can be estimated. Within this framework, we derive a theoretical lower bound on the required number of samples to achieve a target performance level, providing the first principled guidance for compute-efficient scaling. Leveraging this insight, we develop \textsc{OptScale}, a practical algorithm that dynamically determines the optimal number of sampled responses. \textsc{OptScale} employs a language model-based predictor to estimate probabilistic prior parameters, enabling the decision of the minimal number of samples needed that satisfy predefined performance thresholds and confidence levels. Extensive experiments on representative reasoning benchmarks (including MATH-500, GSM8K, AIME, and AMC) demonstrate that \textsc{OptScale} significantly reduces sampling overhead while remaining better or on par with state-of-the-art reasoning performance. Our work offers both a theoretical foundation and a practical solution for principled inference-time scaling, addressing a critical gap in the efficient deployment of LLMs for complex reasoning. 
- oai:arXiv.org:2506.22376v4 - cs.LG + FC-MIR: A Mobile Screen Awareness Framework for Intent-Aware Recommendation based on Frame-Compressed Multimodal Trajectory Reasoning + https://arxiv.org/abs/2512.19107 + arXiv:2512.19107v1 Announce Type: new +Abstract: Identifying user intent from mobile UI operation trajectories is critical for advancing UI understanding and enabling task automation agents. While Multimodal Large Language Models (MLLMs) excel at video understanding tasks, their real-time mobile deployment is constrained by heavy computational costs and inefficient redundant frame processing. To address these issues, we propose the FC-MIR framework: leveraging keyframe sampling and adaptive concatenation, it cuts visual redundancy to boost inference efficiency, while integrating state-of-the-art closed-source MLLMs or fine-tuned models (e.g., Qwen3-VL) for trajectory summarization and intent prediction. We further expand task scope to explore generating post-prediction operations and search suggestions, and introduce a fine-grained metric to evaluate the practical utility of summaries, predictions, and suggestions. For rigorous assessment, we construct a UI trajectory dataset covering scenarios from UI-Agents (Agent-I) and real user interactions (Person-I). Experimental results show our compression method retains performance at 50%-60% compression rates; both closed-source and fine-tuned MLLMs demonstrate strong intent summarization, supporting potential lightweight on-device deployment. However, MLLMs still struggle with useful and "surprising" suggestions, leaving room for improvement. Finally, we deploy the framework in a real-world setting, integrating UI perception and UI-Agent proxies to lay a foundation for future progress in this field. 
+ oai:arXiv.org:2512.19107v1 cs.AI - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Youkang Wang, Jian Wang, Rubing Chen, Xiao-Yong Wei + Zhe Yang, Xiaoshuang Sheng, Zhengnan Zhang, Jidong Wu, Zexing Wang, Xin He, Shenghua Xu, Guanjing Xiong - Holmes: Towards Effective and Harmless Model Ownership Verification to Personalized Large Vision Models via Decoupling Common Features - https://arxiv.org/abs/2507.00724 - arXiv:2507.00724v2 Announce Type: replace -Abstract: Large vision models (LVMs) achieve remarkable performance in various downstream tasks, primarily by personalizing pre-trained models through fine-tuning with private and valuable local data, which makes the personalized model a valuable intellectual property. Similar to the era of traditional DNNs, model stealing attacks also pose significant risks to LVMs. However, this paper reveals that most existing defense methods (developed for traditional DNNs), typically designed for models trained from scratch, either introduce additional security risks, are prone to misjudgment, or are even ineffective for fine-tuned models. To alleviate these problems, this paper proposes a harmless model ownership verification method for personalized LVMs by decoupling similar common features. In general, our method consists of three main stages. In the first stage, we create shadow models that retain common features of the victim model while disrupting dataset-specific features. We represent the dataset-specific features of the victim model by computing the output differences between the shadow and victim models, without altering the victim model or its training process. After that, a meta-classifier is trained to identify stolen models by determining whether suspicious models contain the dataset-specific features of the victim. 
In the third stage, we conduct model ownership verification by hypothesis test to mitigate randomness and enhance robustness. Extensive experiments on benchmark datasets verify the effectiveness of the proposed method in detecting different types of model stealing simultaneously. Our codes are available at https://github.com/zlh-thu/Holmes. - oai:arXiv.org:2507.00724v2 + GaussianImage++: Boosted Image Representation and Compression with 2D Gaussian Splatting + https://arxiv.org/abs/2512.19108 + arXiv:2512.19108v1 Announce Type: new +Abstract: Implicit neural representations (INRs) have achieved remarkable success in image representation and compression, but they require substantial training time and memory. Meanwhile, recent 2D Gaussian Splatting (GS) methods (\textit{e.g.}, GaussianImage) offer promising alternatives through efficient primitive-based rendering. However, these methods require excessive Gaussian primitives to maintain high visual fidelity. To exploit the potential of GS-based approaches, we present GaussianImage++, which utilizes limited Gaussian primitives to achieve impressive representation and compression performance. Firstly, we introduce a distortion-driven densification mechanism. It progressively allocates Gaussian primitives according to signal intensity. Secondly, we employ context-aware Gaussian filters for each primitive, which assist in the densification to optimize Gaussian primitives based on varying image content. Thirdly, we integrate attribute-separated learnable scalar quantizers and quantization-aware training, enabling efficient compression of primitive attributes. Experimental results demonstrate the effectiveness of our method. In particular, GaussianImage++ outperforms GaussianImage and INRs-based COIN in representation and compression performance while maintaining real-time decoding and low memory usage. 
+ oai:arXiv.org:2512.19108v1
 cs.CV
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Tiantian Li, Xinjie Zhang, Xingtong Ge, Tongda Xu, Dailan He, Jun Zhang, Yan Wang
 + Trifocal Tensor and Relative Pose Estimation with Known Vertical Direction
 + https://arxiv.org/abs/2512.19110
 + arXiv:2512.19110v1 Announce Type: new
+Abstract: This work presents two novel solvers for estimating the relative poses among views with known vertical directions. The vertical directions of camera views can be easily obtained using inertial measurement units (IMUs) which have been widely used in autonomous vehicles, mobile phones, and unmanned aerial vehicles (UAVs). Given the known vertical directions, our algorithms only need to solve for two rotation angles and two translation vectors. In this paper, a linear closed-form solution has been described, requiring only four point correspondences in three views. We also propose a minimal solution with three point correspondences using the latest Gr\"obner basis solver. Since the proposed methods require fewer point correspondences, they can be efficiently applied within the RANSAC framework for outlier removal and pose estimation in visual odometry. The proposed method has been tested on both synthetic data and real-world scenes from KITTI. The experimental results show that the accuracy of the estimated poses is superior to other alternative methods.
 + oai:arXiv.org:2512.19110v1
 + cs.CV
 + Tue, 23 Dec 2025 00:00:00 -0500
 + new
 + http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 + Tao Li, Zhenbao Yu, Banglei Guan, Jianli Han, Weimin Lv, Friedrich Fraundorfer
 + A Unified Framework and Comparative Study of Decentralized Finance Derivatives Protocols
 + https://arxiv.org/abs/2512.19113
 + arXiv:2512.19113v1 Announce Type: new
+Abstract: Decentralized Finance (DeFi) applications introduce novel financial instruments replicating and extending traditional ones through blockchain-based smart contracts. Among these, derivatives protocols enable the decentralized trading of cryptoassets that are the counterpart of derivative products available in traditional finance. Despite their growing significance, DeFi derivatives protocols remain relatively understudied compared to other DeFi instruments, such as lending protocols and decentralized exchanges with automated market makers. This paper systematically analyzes DeFi derivatives protocols - categorized into perpetual, options, and synthetics - in the field, highlighting similarities, differences, dynamics, and actors. As a result of our study, we provide a formal characterization of decentralized derivative products and introduce a unifying conceptual framework that captures the design principles and core architecture of such protocols. We complement our theoretical analysis with numerical simulations: we evaluate protocol dynamics under various economic conditions, including changes in underlying asset prices, volatility, protocol-specific fees, leverage, and their impact on liquidation and profitability. 
- oai:arXiv.org:2507.13717v3 - cs.NI - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Trifocal Tensor and Relative Pose Estimation with Known Vertical Direction + https://arxiv.org/abs/2512.19110 + arXiv:2512.19110v1 Announce Type: new +Abstract: This work presents two novel solvers for estimating the relative poses among views with known vertical directions. The vertical directions of camera views can be easily obtained using inertial measurement units (IMUs) which have been widely used in autonomous vehicles, mobile phones, and unmanned aerial vehicles (UAVs). Given the known vertical directions, our lgorithms only need to solve for two rotation angles and two translation vectors. In this paper, a linear closed-form solution has been described, requiring only four point correspondences in three views. We also propose a minimal solution with three point correspondences using the latest Gr\"obner basis solver. Since the proposed methods require fewer point correspondences, they can be efficiently applied within the RANSAC framework for outliers removal and pose estimation in visual odometry. The proposed method has been tested on both synthetic data and real-world scenes from KITTI. The experimental results show that the accuracy of the estimated poses is superior to other alternative methods. + oai:arXiv.org:2512.19110v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yingming Mao, Qiaozhu Zhai, Ximeng Liu, Xinchi Han, Fafan li, Shizhen Zhao, Yuzhou Zhou, Zhen Yao, Xia Zhu + Tao Li, Zhenbao Yu, Banglei Guan, Jianli Han, Weimin Lv, Friedrich Fraundorfer - Near-Optimality for Single-Source Personalized PageRank - https://arxiv.org/abs/2507.14462 - arXiv:2507.14462v3 Announce Type: replace -Abstract: The \emph{Single-Source Personalized PageRank} (SSPPR) query is central to graph OLAP, measuring the probability $\pi(s,t)$ that an $\alpha$-decay random walk from node $s$ terminates at $t$. 
Despite decades of research, a significant gap remains between upper and lower bounds for its computational complexity. Existing upper bounds are $O\left(\min\left(\frac{\log(1/\epsilon)}{\epsilon^2}, \frac{\sqrt{m \log n}}{\epsilon}, m \log \frac{1}{\epsilon}\right)\right)$ for SSPPR-A and $O\left(\min\left(\frac{\log(1/n)}{\delta}, \sqrt{m \log(n/\delta)}, m \log \left(\frac{\log(n)}{m\delta}\right)\right)\right)$ for SSPPR-R, with trivial lower bounds of $\Omega(\min(n,1/\epsilon))$ and $\Omega(\min(n,1/\delta))$. - This work narrows or closes this gap. We improve the upper bounds for SSPPR-A and SSPPR-R to $O\left(\frac{1}{\epsilon^2}\right)$ and $O\left(\min\left(\frac{\log(1/\delta)}{\delta}, m + n \log(n) \log \left(\frac{\log(n)}{m\delta}\right)\right)\right)$, respectively, offering improvements by factors of $\log(1/\epsilon)$ and $\log\left(\frac{\log(n)}{m\delta}\right)$. On the lower bound side, we establish stronger results: $\Omega(\min(m, 1/\epsilon^2))$ for SSPPR-A and $\Omega(\min(m, \frac{\log(1/\delta)}{\delta}))$ for SSPPR-R, strengthening theoretical foundations. Our upper and lower bounds for SSPPR-R coincide for graphs with $m \in \Omega(n \log^2 n)$ and any threshold $\delta, 1/\delta \in O(\text{poly}(n))$, achieving theoretical optimality in most graph regimes. The SSPPR-A query attains partial optimality for large error thresholds, matching our new lower bound. This is the first optimal result for SSPPR queries. Our techniques generalize to the Single-Target Personalized PageRank (STPPR) query, improving its lower bound from $\Omega(\min(n, 1/\delta))$ to $\Omega(\min(m, \frac{n}{\delta} \log n))$, matching the upper bound and revealing its optimality. 
- oai:arXiv.org:2507.14462v3 - cs.DS - cs.CC - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xinpeng Jiang, Haoyu Liu, Siqiang Luo, Xiaokui Xiao + A Unified Framework and Comparative Study of Decentralized Finance Derivatives Protocols + https://arxiv.org/abs/2512.19113 + arXiv:2512.19113v1 Announce Type: new +Abstract: Decentralized Finance (DeFi) applications introduce novel financial instruments replicating and extending traditional ones through blockchain-based smart contracts. Among these, derivatives protocols enable the decentralized trading of cryptoassets that are the counterpart of derivative products available in traditional finance. Despite their growing significance, DeFi derivatives protocols remain relatively understudied compared to other DeFi instruments, such as lending protocols and decentralized exchanges with automated market makers. This paper systematically analyzes DeFi derivatives protocols - categorized into perpetual, options, and synthetics - in the field, highlighting similarities, differences, dynamics, and actors. As a result of our study, we provide a formal characterization of decentralized derivative products and introduce a unifying conceptual framework that captures the design principles and core architecture of such protocols. We complement our theoretical analysis with numerical simulations: we evaluate protocol dynamics under various economic conditions, including changes in underlying asset prices, volatility, protocol-specific fees, leverage, and their impact on liquidation and profitability. 
+ oai:arXiv.org:2512.19113v1 + cs.GT + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Luca Pennella, Pietro Saggese, Fabio Pinelli, Letterio Galletta - Parallelism Meets Adaptiveness: Scalable Documents Understanding in Multi-Agent LLM Systems - https://arxiv.org/abs/2507.17061 - arXiv:2507.17061v4 Announce Type: replace -Abstract: Large language model (LLM) agents have shown increasing promise for collaborative task completion. However, existing multi-agent frameworks often rely on static workflows, fixed roles, and limited inter-agent communication, reducing their effectiveness in open-ended, high-complexity domains. This paper proposes a coordination framework that enables adaptiveness through three core mechanisms: dynamic task routing, bidirectional feedback, and parallel agent evaluation. The framework allows agents to reallocate tasks based on confidence and workload, exchange structured critiques to iteratively improve outputs, and crucially compete on high-ambiguity subtasks with evaluator-driven selection of the most suitable result. We instantiate these principles in a modular architecture and demonstrate substantial improvements in factual coverage, coherence, and efficiency over static and partially adaptive baselines. Our findings highlight the benefits of incorporating both adaptiveness and structured competition in multi-agent LLM systems. - oai:arXiv.org:2507.17061v4 - cs.MA + HyperLoad: A Cross-Modality Enhanced Large Language Model-Based Framework for Green Data Center Cooling Load Prediction + https://arxiv.org/abs/2512.19114 + arXiv:2512.19114v1 Announce Type: new +Abstract: The rapid growth of artificial intelligence is exponentially escalating computational demand, inflating data center energy use and carbon emissions, and spurring rapid deployment of green data centers to relieve resource and environmental stress. 
Achieving sub-minute orchestration of renewables, storage, and loads, while minimizing PUE and lifecycle carbon intensity, hinges on accurate load forecasting. However, existing methods struggle to address small-sample scenarios caused by cold start, load distortion, multi-source data fragmentation, and distribution shifts in green data centers. We introduce HyperLoad, a cross-modality framework that exploits pre-trained large language models (LLMs) to overcome data scarcity. In the Cross-Modality Knowledge Alignment phase, textual priors and time-series data are mapped to a common latent space, maximizing the utility of prior knowledge. In the Multi-Scale Feature Modeling phase, domain-aligned priors are injected through adaptive prefix-tuning, enabling rapid scenario adaptation, while an Enhanced Global Interaction Attention mechanism captures cross-device temporal dependencies. The public DCData dataset is released for benchmarking. Under both data sufficient and data scarce settings, HyperLoad consistently surpasses state-of-the-art (SOTA) baselines, demonstrating its practicality for sustainable green data center management. + oai:arXiv.org:2512.19114v1 + cs.LG cs.AI - cs.IR - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Chengxuan Xia, Qianye Wu, Sixuan Tian, Yilun Hao + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Haoyu Jiang, Boan Qu, Junjie Zhu, Fanjie Zeng, Xiaojie Lin, Wei Zhong - Towards Facilitated Fairness Assessment of AI-based Skin Lesion Classifiers Through GenAI-based Image Synthesis - https://arxiv.org/abs/2507.17860 - arXiv:2507.17860v3 Announce Type: replace -Abstract: Recent advances in deep learning and on-device inference could transform routine screening for skin cancers. Along with the anticipated benefits of this technology, potential dangers arise from unforeseen and inherent biases. 
A significant obstacle is building evaluation datasets that accurately reflect key demographics, including sex, age, and race, as well as other underrepresented groups. To address this, we train a state-of-the-art generative model to generate synthetic data in a controllable manner to assess the fairness of publicly available skin cancer classifiers. To evaluate whether synthetic images can be used as a fairness testing dataset, we prepare a real-image dataset (MILK10K) as a benchmark and compare the True Positive Rate result of three models (DeepGuide, MelaNet, and SkinLesionDensnet). As a result, the classification tendencies observed in each model when tested on real and generated images showed similar patterns across different attribute data sets. We confirm that highly realistic synthetic images facilitate model fairness verification. - oai:arXiv.org:2507.17860v3 + Generative Giants, Retrieval Weaklings: Why do Multimodal Large Language Models Fail at Multimodal Retrieval? + https://arxiv.org/abs/2512.19115 + arXiv:2512.19115v1 Announce Type: new +Abstract: Despite the remarkable success of multimodal large language models (MLLMs) in generative tasks, we observe that they exhibit a counterintuitive deficiency in the zero-shot multimodal retrieval task. In this work, we investigate the underlying mechanisms that hinder MLLMs from serving as effective retrievers. With the help of sparse autoencoders (SAEs), we decompose MLLM output representations into interpretable semantic concepts to probe their intrinsic behavior. Our analysis reveals that the representation space of MLLMs is overwhelmingly dominated by textual semantics; the visual information essential for multimodal retrieval only constitutes a small portion. This imbalance is compounded by the heavy focus of MLLMs on bridging image-text modalities, which facilitates generation but homogenizes embeddings and finally diminishes the discriminative power required for multimodal retrieval. 
We further discover that the specific feature components that contribute most to the similarity computations for MLLMs are in fact distractors that actively degrade retrieval performance. Overall, our work provides the first in-depth interpretability analysis of MLLM representations in the context of multimodal retrieval and offers possible directions for enhancing the multimodal retrieval capabilities of MLLMs. + oai:arXiv.org:2512.19115v1 cs.CV - cs.AI - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Ko Watanabe, Stanislav Frolov, Aya Hassan, David Dembinsky, Adriano Lucieri, Andreas Dengel + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Hengyi Feng, Zeang Sheng, Meiyi Qiang, Wentao Zhang - Hybrid and Unitary PEFT for Resource-Efficient Large Language Models - https://arxiv.org/abs/2507.18076 - arXiv:2507.18076v2 Announce Type: replace -Abstract: Fine-tuning large language models (LLMs) remains a computational bottleneck due to their scale and memory demands. This paper presents a comprehensive evaluation of parameter-efficient fine-tuning (PEFT) techniques, including LoRA, BOFT, LoRA-GA, and uRNN, and introduces a novel hybrid strategy that dynamically integrates BOFT's orthogonal stability with LoRA-GA's gradient-aligned rapid convergence. By computing per-layer adaptive updates guided by gradient norms, the hybrid method achieves superior convergence efficiency and generalization across diverse tasks. We also explore, for the first time, the adaptation of unitary RNN (uRNN) principles to Transformer-based LLMs, enhancing gradient stability through structured unitary constraints. 
Across GLUE, GSM8K, MT-Bench, and HumanEval, using models ranging from 7B to 405B parameters, the hybrid approach yields consistent gains across three independent runs per task and model, approaching the quality of full fine-tuning while reducing training time by approximately 2.1 times and peak memory usage by nearly 50 percent, indicating practical significance under resource constraints. A compact multilingual and low-resource study on XNLI and FLORES, using 32 examples per language, further demonstrates consistent gains under the same budget with a small and stable footprint. These results indicate a practical and scalable path toward accessible LLM fine-tuning under resource constraints. - oai:arXiv.org:2507.18076v2 + Stop saying LLM: Large Discourse Models (LDM) and Artificial Discursive Agent (ADA)? + https://arxiv.org/abs/2512.19117 + arXiv:2512.19117v1 Announce Type: new +Abstract: This paper proposes an epistemological shift in the analysis of large generative models, replacing the category ''Large Language Models'' (LLM) with that of ''Large Discourse Models'' (LDM), and then with that of Artificial Discursive Agent (ADA). The theoretical framework is based on an ontological triad distinguishing three regulatory instances: the apprehension of the phenomenal regularities of the referential world, the structuring of embodied cognition, and the structural-linguistic sedimentation of the utterance within a socio-historical context. LDMs, operating on the product of these three instances (the document), model the discursive projection of a portion of human experience reified by the learning corpus. The proposed program aims to replace the ''fascination/fear'' dichotomy with public trials and procedures that make the place, uses, and limits of artificial discursive agents in contemporary social space decipherable, situating this approach within a perspective of governance and co-regulation involving the State, industry, civil society, and academia. 
+ oai:arXiv.org:2512.19117v1 cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - American Journal of Computer Science and Technology 2025 - Haomin Qi, Zihan Dai, Chengbo Huang + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Amar Lakel (MICA) - Logical Characterizations of GNNs with Mean Aggregation - https://arxiv.org/abs/2507.18145 - arXiv:2507.18145v2 Announce Type: replace -Abstract: We study the expressive power of graph neural networks (GNNs) with mean as the aggregation function, with the following results. In the non-uniform setting, such GNNs have exactly the same expressive power as ratio modal logic, which has modal operators expressing that at least a certain ratio of the successors of a vertex satisfies a specified property. In the uniform setting, the expressive power relative to MSO is exactly that of modal logic, and thus identical to the (absolute) expressive power of GNNs with max aggregation. The proof, however, depends on constructions that are not satisfactory from a practical perspective. This leads us to making the natural assumptions that combination functions are continuous and classification functions are thresholds. The resulting class of GNNs with mean aggregation turns out to be much less expressive: relative to MSO and in the uniform setting, it has the same expressive power as alternation-free modal logic. This is in contrast to the expressive power of GNNs with max and sum aggregation, which is not affected by these assumptions. 
- oai:arXiv.org:2507.18145v2
      cs.AI
      cs.LO
      Mon, 22 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by/4.0/
      Moritz Sch\"onherr, Carsten Lutz
    

    
      + BanglaForge: LLM Collaboration with Self-Refinement for Bangla Code Generation
      https://arxiv.org/abs/2512.19122
      arXiv:2512.19122v1 Announce Type: new
+Abstract: Bangla is a low-resource language for code generation, lacking large-scale annotated datasets and tools to transform natural language specifications into executable programs. This makes Bangla-to-code generation a challenging task requiring innovative solutions. To address this, we introduce BanglaForge, a novel framework for generating code from Bangla function descriptions. BanglaForge leverages a retrieval-augmented dual-model collaboration paradigm with self-refinement, combining in-context learning, LLM-based translation, systematic prompt engineering, and iterative self-refinement based on execution feedback, where a coder generates initial solutions and a reviewer enhances them for robustness. On the BLP-2025 Bangla Code Generation benchmark, BanglaForge achieves a competitive Pass@1 accuracy of 84.00%, demonstrating the effectiveness of retrieval, model collaboration, and self-refinement for low-resource Bangla code generation.
      + oai:arXiv.org:2512.19122v1
      cs.SE
      cs.CL
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Mahir Labib Dihan, Sadif Ahmed, Md Nafiu Rahman
    

    
      - Boosting Revisited: Benchmarking and Advancing LP-Based Ensemble Methods
      https://arxiv.org/abs/2507.18242
      arXiv:2507.18242v3 Announce Type: replace
-Abstract: Despite their theoretical appeal, totally corrective boosting methods based on linear programming have received limited empirical attention. In this paper, we conduct the first large-scale experimental study of six LP-based boosting formulations, including two novel methods, NM-Boost and QRLP-Boost, across 20 diverse datasets. 
We evaluate the use of both heuristic and optimal base learners within these formulations, and analyze not only accuracy, but also ensemble sparsity, margin distribution, anytime performance, and hyperparameter sensitivity. We show that totally corrective methods can outperform or match state-of-the-art heuristics like XGBoost and LightGBM when using shallow trees, while producing significantly sparser ensembles. We further show that these methods can thin pre-trained ensembles without sacrificing performance, and we highlight both the strengths and limitations of using optimal decision trees in this context. - oai:arXiv.org:2507.18242v3 + A Composable Channel-Adaptive Architecture for Seizure Classification + https://arxiv.org/abs/2512.19123 + arXiv:2512.19123v1 Announce Type: new +Abstract: Objective: We develop a channel-adaptive (CA) architecture that seamlessly processes multi-variate time-series with an arbitrary number of channels, and in particular intracranial electroencephalography (iEEG) recordings. Methods: Our CA architecture first processes the iEEG signal using state-of-the-art models applied to each single channel independently. The resulting features are then fused using a vector-symbolic algorithm which reconstructs the spatial relationship using a trainable scalar per channel. Finally, the fused features are accumulated in a long-term memory of up to 2 minutes to perform the classification. Each CA-model can then be pre-trained on a large corpus of iEEG recordings from multiple heterogeneous subjects. The pre-trained model is personalized to each subject via a quick fine-tuning routine, which uses equal or lower amounts of data compared to existing state-of-the-art models, but requiring only 1/5 of the time. Results: We evaluate our CA-models on a seizure detection task both on a short-term (~20 hours) and a long-term (~2500 hours) dataset. 
In particular, our CA-EEGWaveNet is trained on a single seizure of the tested subject, while the baseline EEGWaveNet is trained on all but one. Even in this challenging scenario, our CA-EEGWaveNet surpasses the baseline in median F1-score (0.78 vs 0.76). Similarly, CA-EEGNet based on EEGNet, also surpasses its baseline in median F1-score (0.79 vs 0.74). Conclusion and significance: Our CA-model addresses two issues: first, it is channel-adaptive and can therefore be trained across heterogeneous subjects without loss of performance; second, it increases the effective temporal context size to a clinically-relevant length. Therefore, our model is a drop-in replacement for existing models, bringing better characteristics and performance across the board. + oai:arXiv.org:2512.19123v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Fabian Akkerman, Julien Ferry, Christian Artigues, Emmanuel Hebrard, Thibaut Vidal + 10.1109/JBHI.2025.3587103 + Francesco Carzaniga, Michael Hersche, Kaspar Schindler, Abbas Rahimi - PeerSync: Accelerating Containerized Service Delivery at the Network Edge - https://arxiv.org/abs/2507.20116 - arXiv:2507.20116v2 Announce Type: replace -Abstract: Efficient container image distribution is crucial for enabling machine learning inference at the network edge, where resource limitations and dynamic network conditions create significant challenges. In this paper, we present PeerSync, a decentralized P2P-based system designed to optimize image distribution in edge environments. PeerSync employs a popularity- and network-aware download engine that dynamically adapts to content popularity and real-time network conditions. PeerSync further integrates automated tracker election for rapid peer discovery and dynamic cache management for efficient storage utilization. 
We implement PeerSync with 8000+ lines of Rust code and test its performance extensively on both large-scale Docker-based emulations and physical edge devices. Experimental results show that PeerSync delivers a remarkable speed increase of 2.72$\times$, 1.79$\times$, and 1.28$\times$ compared to the Baseline solution, Dragonfly, and Kraken, respectively, while significantly reducing cross-network traffic by 90.72% under congested and varying network conditions. - oai:arXiv.org:2507.20116v2 - cs.NI - cs.DC - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Yinuo Deng, Hailiang Zhao, Dongjing Wang, Peng Chen, Wenzhuo Qian, Jianwei Yin, Schahram Dustdar, Shuiguang Deng + ShadowBlock: Efficient Dynamic Anonymous Blocklisting and Its Cross-chain Application + https://arxiv.org/abs/2512.19124 + arXiv:2512.19124v1 Announce Type: new +Abstract: Online harassment, incitement to violence, racist behavior, and other harmful content on social media can damage social harmony and even break the law. Traditional blocklisting technologies can block malicious users, but this comes at the expense of identity privacy. The anonymous blocklisting has emerged as an effective mechanism to restrict the abuse of freedom of speech while protecting user identity privacy. However, the state-of-the-art anonymous blocklisting schemes suffer from either poor dynamism or low efficiency. In this paper, we propose $\mathsf{ShadowBlock}$, an efficient dynamic anonymous blocklisting scheme. Specifically, we utilize the pseudorandom function and cryptographic accumulator to construct the public blocklisting, enabling users to prove they are not on the blocklisting in an anonymous manner. To improve verification efficiency, we design an aggregation zero-knowledge proof mechanism that converts multiple verification operations into a single one. 
In addition, we leverage the accumulator's property to achieve efficient updates of the blocklisting, i.e., the original proof can be reused with minimal updates rather than regenerating the entire proof. Experiments show that $\mathsf{ShadowBlock}$ has better dynamism and efficiency than the existing schemes. Finally, the discussion on applications indicates that $\mathsf{ShadowBlock}$ also holds significant value and has broad prospects in emerging fields such as cross-chain identity management.
      + oai:arXiv.org:2512.19124v1
      cs.CR
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by-nc-nd/4.0/
      Haotian Deng, Mengxuan Liu, Chuan Zhang, Wei Huang, Licheng Wang, Liehuang Zhu
    

    
      - EarXplore: An Open Research Database on Earable Interaction
      https://arxiv.org/abs/2507.20656
      arXiv:2507.20656v2 Announce Type: replace
-Abstract: Interaction with sensor-augmented earphones, referred to as earables or hearables, represents a major area of earable research. Proximate to the head and reachable by hand, earables support diverse interactions and can detect multiple inputs simultaneously. Yet this diversity has fragmented research, complicating the tracking of developments. To address this, we introduce EarXplore, a curated, interactive online database on earable interaction research. Designed through a question-centered approach that guided the development of 33 criteria applied to annotate 118 studies and the structure of the platform, EarXplore comprises four integrated views: a Tabular View for structured exploration, a Graphical View for visual overviews, a Similarity View for conceptual links, and a Timeline View for scholarly trends. We demonstrate how the platform supports tailored exploration and filtering, and we leverage its capabilities to discuss gaps and opportunities. 
With community update mechanisms, EarXplore evolves with the field, serving as a living resource to accelerate future research.
      - oai:arXiv.org:2507.20656v2
      cs.HC
      Mon, 22 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by/4.0/
      Jonas Hummel, Tobias R\"oddiger, Valeria Zitz, Philipp Lepold, Michael K\"uttner, Marius Prill, Christopher Clarke, Hans Gellersen, Michael Beigl
    

    
      + SAP: Syntactic Attention Pruning for Transformer-based Language Models
      https://arxiv.org/abs/2512.19125
      arXiv:2512.19125v1 Announce Type: new
+Abstract: This paper introduces Syntactic Attention Pruning (SAP), a novel method for effectively pruning attention heads in Transformer models. Unlike conventional approaches that rely solely on mathematical analysis of model weights and activations, SAP incorporates both the syntactic structure and attention patterns of sentences to guide the pruning process. By leveraging these linguistic features, SAP not only achieves performance comparable to state-of-the-art methods but also enhances the interpretability of model behavior. To further improve robustness, we propose Candidate Filtering (CF), a mechanism that prioritizes heads based on their contribution to model performance, mitigating degradation during pruning. Experimental results indicate that SAP effectively preserves critical heads with a high density of strong attention values, outperforming existing head pruning strategies in retrain-free settings. These findings position SAP as a promising foundation for a new direction in model compression research, offering high flexibility for pruning across all transformer-based language models. 
+ oai:arXiv.org:2512.19125v1 + cs.CL + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Tzu-Yun Lee, Ding-Yong Hong, Jan-Jan Wu - RL from Teacher-Model Refinement: Gradual Imitation Learning for Machine Translation - https://arxiv.org/abs/2507.22219 - arXiv:2507.22219v3 Announce Type: replace -Abstract: Preference-learning methods for machine translation (MT), such as Direct Preference Optimization (DPO), have shown strong gains but typically rely on large, carefully curated preference triplets and often struggle to generalize beyond their tuning domains. We propose Reinforcement Learning from Teacher-Model Refinement (RLfR), which replaces static triplets with on-policy, actor-conditioned refinements produced by a frozen teacher. At each step, the actor samples candidate translations, the teacher performs a minimal local edit of each draft, and the actor is reinforced to close the gap using a composite reward that combines scaled negative edit distance for lexical and structural fidelity with COMET for semantic adequacy. This formulation yields a stable, model-aware learning signal without requiring explicit preference datasets. Experiments on FLORES-200 (English to German, Spanish, Chinese, Korean, and Japanese) show that RLfR consistently outperforms strong MT-SFT, DPO, and fixed-reference RL baselines, improving semantic quality and entity preservation, and also achieves superior performance under LLM-based judge evaluations. - oai:arXiv.org:2507.22219v3 + AWPO: Enhancing Tool-Use of Large Language Models through Explicit Integration of Reasoning Rewards + https://arxiv.org/abs/2512.19126 + arXiv:2512.19126v1 Announce Type: new +Abstract: While reinforcement learning (RL) shows promise in training tool-use large language models (LLMs) using verifiable outcome rewards, existing methods largely overlook the potential of explicit reasoning rewards to bolster reasoning and tool utilization. 
Furthermore, naively combining reasoning and outcome rewards may yield suboptimal performance or conflict with the primary optimization objective. To address this, we propose advantage-weighted policy optimization (AWPO) -- a principled RL framework that effectively integrates explicit reasoning rewards to enhance tool-use capability. AWPO incorporates variance-aware gating and difficulty-aware weighting to adaptively modulate advantages from reasoning signals based on group-relative statistics, alongside a tailored clipping mechanism for stable optimization. Extensive experiments demonstrate that AWPO achieves state-of-the-art performance across standard tool-use benchmarks, significantly outperforming strong baselines and leading closed-source models in challenging multi-turn scenarios. Notably, with exceptional parameter efficiency, our 4B model surpasses Grok-4 by 16.0 percent in multi-turn accuracy while preserving generalization capability on the out-of-distribution MMLU-Pro benchmark.
      + oai:arXiv.org:2512.19126v1
      cs.CL
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Zihan Lin, Xiaohan Wang, Hexiong Yang, Jiajun Chai, Jie Cao, Guojun Yin, Wei Lin, Ran He
    

    
      - A Systematic Literature Review on Detecting Software Vulnerabilities with Large Language Models
      https://arxiv.org/abs/2507.22659
      arXiv:2507.22659v2 Announce Type: replace
-Abstract: The increasing adoption of Large Language Models (LLMs) in software engineering has sparked interest in their use for software vulnerability detection. However, the rapid development of this field has resulted in a fragmented research landscape, with diverse studies that are difficult to compare due to differences in, e.g., system designs and dataset usage. 
This fragmentation makes it difficult to obtain a clear overview of the state-of-the-art or compare and categorize studies meaningfully. In this work, we present a comprehensive systematic literature review (SLR) of LLM-based software vulnerability detection. We analyze 263 studies published between January 2020 and November 2025, categorizing them by task formulation, input representation, system architecture, and techniques. Further, we analyze the datasets used, including their characteristics, vulnerability coverage, and diversity. We present a fine-grained taxonomy of vulnerability detection approaches, identify key limitations, and outline actionable future research opportunities. By providing a structured overview of the field, this review improves transparency and serves as a practical guide for researchers and practitioners aiming to conduct more comparable and reproducible research. We publicly release all artifacts and maintain a living repository of LLM-based software vulnerability detection studies at https://github.com/hs-esslingen-it-security/Awesome-LLM4SVD. - oai:arXiv.org:2507.22659v2 - cs.SE - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace + D$^{2}$Stream: Decoupled Dual-Stream Temporal-Speaker Interaction for Audio-Visual Speaker Detection + https://arxiv.org/abs/2512.19130 + arXiv:2512.19130v1 Announce Type: new +Abstract: Audio-visual speaker detection aims to identify the active speaker in videos by leveraging complementary audio and visual cues. Existing methods often suffer from computational inefficiency or suboptimal performance due to joint modeling of temporal and speaker interactions. We propose D$^{2}$Stream, a decoupled dual-stream framework that separates cross-frame temporal modeling from within-frame speaker discrimination. 
Audio and visual features are first aligned via cross-modal attention, then fed into two lightweight streams: a Temporal Interaction Stream captures long-range temporal dependencies, while a Speaker Interaction Stream models per-frame inter-person relationships. The temporal and relational features extracted by the two streams interact via cross-attention to enrich representations. A lightweight Voice Gate module further mitigates false positives from non-speech facial movements. On AVA-ActiveSpeaker, D$^{2}$Stream achieves a new state-of-the-art at 95.6% mAP, with 80% reduction in computation compared to GNN-based models and 30% fewer parameters than attention-based alternatives, while also generalizing well on Columbia ASD. Source code is available at https://anonymous.4open.science/r/D2STREAM. + oai:arXiv.org:2512.19130v1 + cs.MM + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Sabrina Kaniewski, Fabian Schmidt, Markus Enzweiler, Michael Menth, Tobias Heer + Junhao Xiao, Shun Feng, Zhiyu Wu, Jianjun Li, Zhiyuan Ma, Yi Chen - Text-to-SQL Task-oriented Dialogue Ontology Construction - https://arxiv.org/abs/2507.23358 - arXiv:2507.23358v2 Announce Type: replace -Abstract: Large language models (LLMs) are widely used as general-purpose knowledge sources, but they rely on parametric knowledge, limiting explainability and trustworthiness. In task-oriented dialogue (TOD) systems, this separation is explicit, using an external database structured by an explicit ontology to ensure explainability and controllability. However, building such ontologies requires manual labels or supervised training. We introduce TeQoDO: a Text-to-SQL task-oriented Dialogue Ontology construction method. Here, an LLM autonomously builds a TOD ontology from scratch using only its inherent SQL programming capabilities combined with concepts from modular TOD systems provided in the prompt. 
We show that TeQoDO outperforms transfer learning approaches, and its constructed ontology is competitive on a downstream dialogue state tracking task. Ablation studies demonstrate the key role of modular TOD system concepts. TeQoDO also scales to allow construction of much larger ontologies, which we investigate on a Wikipedia and arXiv dataset. We view this as a step towards broader application of ontologies. - oai:arXiv.org:2507.23358v2 - cs.CL - cs.AI - cs.DB - cs.IR - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Evidential Trust-Aware Model Personalization in Decentralized Federated Learning for Wearable IoT + https://arxiv.org/abs/2512.19131 + arXiv:2512.19131v1 Announce Type: new +Abstract: Decentralized federated learning (DFL) enables collaborative model training across edge devices without centralized coordination, offering resilience against single points of failure. However, statistical heterogeneity arising from non-identically distributed local data creates a fundamental challenge: nodes must learn personalized models adapted to their local distributions while selectively collaborating with compatible peers. Existing approaches either enforce a single global model that fits no one well, or rely on heuristic peer selection mechanisms that cannot distinguish between peers with genuinely incompatible data distributions and those with valuable complementary knowledge. We present Murmura, a framework that leverages evidential deep learning to enable trust-aware model personalization in DFL. Our key insight is that epistemic uncertainty from Dirichlet-based evidential models directly indicates peer compatibility: high epistemic uncertainty when a peer's model evaluates local data reveals distributional mismatch, enabling nodes to exclude incompatible influence while maintaining personalized models through selective collaboration. 
Murmura introduces a trust-aware aggregation mechanism that computes peer compatibility scores through cross-evaluation on local validation samples and personalizes model aggregation based on evidential trust with adaptive thresholds. Evaluation on three wearable IoT datasets (UCI HAR, PAMAP2, PPG-DaLiA) demonstrates that Murmura reduces performance degradation from IID to non-IID conditions compared to baseline (0.9% vs. 19.3%), achieves 7.4$\times$ faster convergence, and maintains stable accuracy across hyperparameter choices. These results establish evidential uncertainty as a principled foundation for compatibility-aware personalization in decentralized heterogeneous environments. + oai:arXiv.org:2512.19131v1 + cs.DC + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Renato Vukovic, Carel van Niekerk, Michael Heck, Benjamin Ruppik, Hsien-Chin Lin, Shutong Feng, Nurul Lubis, Milica Gasic + Murtaza Rangwala, Richard O. Sinnott, Rajkumar Buyya - HOLODECK 2.0: Vision-Language-Guided 3D World Generation with Editing - https://arxiv.org/abs/2508.05899 - arXiv:2508.05899v2 Announce Type: replace -Abstract: 3D scene generation plays a crucial role in gaming, artistic creation, virtual reality, and many other domains. However, current 3D scene design still relies heavily on extensive manual effort from creators, and existing automated methods struggle to generate open-domain scenes or support flexible editing. To address those challenges, we introduce HOLODECK 2.0, an advanced vision-language-guided framework for 3D world generation with support for interactive scene editing based on human feedback. HOLODECK 2.0 can generate diverse and stylistically rich 3D scenes (e.g., realistic, cartoon, anime, and cyberpunk styles) that exhibit high semantic fidelity to fine-grained input descriptions, suitable for both indoor and open-domain environments. 
HOLODECK 2.0 leverages vision-language models (VLMs) to identify and parse the objects required in a scene and generates corresponding high-quality assets via state-of-the-art 3D generative models. Then, HOLODECK 2.0 iteratively applies spatial constraints derived from the VLMs to achieve semantically coherent and physically plausible layouts. Both human and model evaluations demonstrate that HOLODECK 2.0 effectively generates high-quality scenes closely aligned with detailed textual descriptions, consistently outperforming baselines across indoor and open-domain scenarios. Additionally, HOLODECK 2.0 provides editing capabilities that flexibly adapt to human feedback, supporting layout refinement and style-consistent object edits. Finally, we present a practical application of HOLODECK 2.0 in procedural game modeling to generate visually rich and immersive environments that can boost efficiency in game design. - oai:arXiv.org:2508.05899v2 + WorldRFT: Latent World Model Planning with Reinforcement Fine-Tuning for Autonomous Driving + https://arxiv.org/abs/2512.19133 + arXiv:2512.19133v1 Announce Type: new +Abstract: Latent World Models enhance scene representation through temporal self-supervised learning, presenting a perception annotation-free paradigm for end-to-end autonomous driving. However, the reconstruction-oriented representation learning tangles perception with planning tasks, leading to suboptimal optimization for planning. To address this challenge, we propose WorldRFT, a planning-oriented latent world model framework that aligns scene representation learning with planning via a hierarchical planning decomposition and local-aware interactive refinement mechanism, augmented by reinforcement learning fine-tuning (RFT) to enhance safety-critical policy performance. 
Specifically, WorldRFT integrates a vision-geometry foundation model to improve 3D spatial awareness, employs hierarchical planning task decomposition to guide representation optimization, and utilizes local-aware iterative refinement to derive a planning-oriented driving policy. Furthermore, we introduce Group Relative Policy Optimization (GRPO), which applies trajectory Gaussianization and collision-aware rewards to fine-tune the driving policy, yielding systematic improvements in safety. WorldRFT achieves state-of-the-art (SOTA) performance on both open-loop nuScenes and closed-loop NavSim benchmarks. On nuScenes, it reduces collision rates by 83% (0.30% -> 0.05%). On NavSim, using camera-only sensors input, it attains competitive performance with the LiDAR-based SOTA method DiffusionDrive (87.8 vs. 88.1 PDMS). + oai:arXiv.org:2512.19133v1 + cs.RO cs.CV - cs.GR - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zixuan Bian, Ruohan Ren, Yue Yang, Chris Callison-Burch + Pengxuan Yang, Ben Lu, Zhongpu Xia, Chao Han, Yinfeng Gao, Teng Zhang, Kun Zhan, XianPeng Lang, Yupeng Zheng, Qichao Zhang - Mitigating Undesired Conditions in Flexible Production with Product-Process-Resource Asset Knowledge Graphs - https://arxiv.org/abs/2508.06278 - arXiv:2508.06278v2 Announce Type: replace -Abstract: Contemporary industrial cyber-physical production systems (CPPS) composed of robotic workcells face significant challenges in the analysis of undesired conditions due to the flexibility of Industry 4.0 that disrupts traditional quality assurance mechanisms. This paper presents a novel industry-oriented semantic model called Product-Process-Resource Asset Knowledge Graph (PPR-AKG), which is designed to analyze and mitigate undesired conditions in flexible CPPS. 
Built on top of the well-proven Product-Process-Resource (PPR) model originating from ISA-95 and VDI-3682, a comprehensive OWL ontology addresses shortcomings of conventional model-driven engineering for CPPS, particularly inadequate undesired condition and error handling representation. The integration of semantic technologies with large language models (LLMs) provides intuitive interfaces for factory operators, production planners, and engineers to interact with the entire model using natural language. Evaluation with the use case addressing electric vehicle battery remanufacturing demonstrates that the PPR-AKG approach efficiently supports resource allocation based on explicitly represented capabilities as well as identification and mitigation of undesired conditions in production. The key contributions include (1) a holistic PPR-AKG model capturing multi-dimensional production knowledge, and (2) the useful combination of the PPR-AKG with LLM-based chatbots for human interaction. - oai:arXiv.org:2508.06278v2 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 - replace + QuCo-RAG: Quantifying Uncertainty from the Pre-training Corpus for Dynamic Retrieval-Augmented Generation + https://arxiv.org/abs/2512.19134 + arXiv:2512.19134v1 Announce Type: new +Abstract: Dynamic Retrieval-Augmented Generation adaptively determines when to retrieve during generation to mitigate hallucinations in large language models (LLMs). However, existing methods rely on model-internal signals (e.g., logits, entropy), which are fundamentally unreliable because LLMs are typically ill-calibrated and often exhibit high confidence in erroneous outputs. We propose QuCo-RAG, which shifts from subjective confidence to objective statistics computed from pre-training data. 
Our method quantifies uncertainty through two stages: (1) before generation, we identify low-frequency entities indicating long-tail knowledge gaps; (2) during generation, we verify entity co-occurrence in the pre-training corpus, where zero co-occurrence often signals hallucination risk. Both stages leverage Infini-gram for millisecond-latency queries over 4 trillion tokens, triggering retrieval when uncertainty is high. Experiments on multi-hop QA benchmarks show QuCo-RAG achieves EM gains of 5--12 points over state-of-the-art baselines with OLMo-2 models, and transfers effectively to models with undisclosed pre-training data (Llama, Qwen, GPT), improving EM by up to 14 points. Domain generalization on biomedical QA further validates the robustness of our paradigm. These results establish corpus-grounded verification as a principled, practically model-agnostic paradigm for dynamic RAG. Our code is publicly available at https://github.com/ZhishanQ/QuCo-RAG. + oai:arXiv.org:2512.19134v1 + cs.CL + cs.IR + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073. ISWC 2025 Companion Volume. Available online: https://ceur-ws.org/Vol-4085/ and https://ceur-ws.org/Vol-4085/paper9.pdf - Petr Novak, Stefan Biffl, Marek Obitko, Petr Kadera + Dehai Min, Kailin Zhang, Tongtong Wu, Lu Cheng - Emoji Reactions on Telegram: Unreliable Indicators of Emotional Resonance - https://arxiv.org/abs/2508.06349 - arXiv:2508.06349v2 Announce Type: replace -Abstract: Emoji reactions are a frequently used feature of messaging platforms, yet their communicative role remains understudied. Prior work on emojis has focused predominantly on in-text usage, showing that emojis embedded in messages tend to amplify and mirror the author's affective tone. This evidence has often been extended to emoji reactions, treating them as indicators of emotional resonance or user sentiment. 
However, they may reflect broader social dynamics. Here, we investigate the communicative function of emoji reactions on Telegram. We analyze over 650k crypto-related messages that received at least one reaction, annotating each with sentiment, emotion, persuasion strategy, and speech act labels, and inferring the sentiment and emotion of emoji reactions using both lexicons and LLMs. We uncover a systematic mismatch between message and reaction sentiment, with positive reactions dominating even for neutral or negative content. This pattern persists across rhetorical strategies and emotional tones, indicating that emojis used as reactions do not reliably function as indicators of emotional mirroring or resonance of the content, in contrast to findings reported for in-text emojis. Finally, we identify the features that most predict emoji engagement. Overall, our findings caution against treating emoji reactions as sentiment labels, highlighting the need for more nuanced approaches in sentiment and engagement analysis. - oai:arXiv.org:2508.06349v2 - cs.HC - cs.CY - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Serena Tardelli, Lorenzo Alvisi, Lorenzo Cima, Stefano Cresci, Maurizio Tesconi + Understanding Chain-of-Thought in Large Language Models via Topological Data Analysis + https://arxiv.org/abs/2512.19135 + arXiv:2512.19135v1 Announce Type: new +Abstract: With the development of large language models (LLMs), particularly with the introduction of the long reasoning chain technique, the reasoning ability of LLMs in complex problem-solving has been significantly enhanced. While acknowledging the power of long reasoning chains, we cannot help but wonder: Why do different reasoning chains perform differently in reasoning? What components of the reasoning chains play a key role? 
Existing studies mainly focus on evaluating reasoning chains from a functional perspective, with little attention paid to their structural mechanisms. To address this gap, this work is the first to analyze and evaluate the quality of the reasoning chain from a structural perspective. We apply persistent homology from Topological Data Analysis (TDA) to map reasoning steps into semantic space, extract topological features, and analyze structural changes. These changes reveal semantic coherence, logical redundancy, and identify logical breaks and gaps. By calculating homology groups, we assess connectivity and redundancy at various scales, using barcode and persistence diagrams to quantify stability and consistency. Our results show that the topological structural complexity of reasoning chains correlates positively with accuracy. More complex chains identify correct answers sooner, while successful reasoning exhibits simpler topologies, reducing redundancy and cycles, enhancing efficiency and interpretability. This work provides a new perspective on reasoning chain quality assessment and offers guidance for future optimization. + oai:arXiv.org:2512.19135v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Chenghao Li, Chaoning Zhang, Yi Lu, Shuxu Chen, Xudong Wang, Jiaquan Zhang, Zhicheng Wang, Zhengxun Jin, Kuien Liu, Sung-Ho Bae, Guoqing Wang, Yang Yang, Hen Tao Shen - DocR1: Evidence Page-Guided GRPO for Multi-Page Document Understanding - https://arxiv.org/abs/2508.07313 - arXiv:2508.07313v3 Announce Type: replace -Abstract: Understanding multi-page documents poses a significant challenge for multimodal large language models (MLLMs), as it requires fine-grained visual comprehension and multi-hop reasoning across pages. While prior work has explored reinforcement learning (RL) for enhancing advanced reasoning in MLLMs, its application to multi-page document understanding remains underexplored. 
In this paper, we introduce DocR1, an MLLM trained with a novel RL framework, Evidence Page-Guided GRPO (EviGRPO). EviGRPO incorporates an evidence-aware reward mechanism that promotes a coarse-to-fine reasoning strategy, guiding the model to first retrieve relevant pages before generating answers. This training paradigm enables us to build high-quality models with limited supervision. To support this, we design a two-stage annotation pipeline and a curriculum learning strategy, based on which we construct two datasets: EviBench, a high-quality training set with 4.8k examples, and ArxivFullQA, an evaluation benchmark with 8.6k QA pairs based on scientific papers. Extensive experiments across a wide range of benchmarks demonstrate that DocR1 achieves state-of-the-art performance on multi-page tasks, while consistently maintaining strong results on single-page benchmarks. - oai:arXiv.org:2508.07313v3 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace + A Convex Loss Function for Set Prediction with Optimal Trade-offs Between Size and Conditional Coverage + https://arxiv.org/abs/2512.19142 + arXiv:2512.19142v1 Announce Type: new +Abstract: We consider supervised learning problems in which set predictions provide explicit uncertainty estimates. Using Choquet integrals (a.k.a. Lovász extensions), we propose a convex loss function for nondecreasing subset-valued functions obtained as level sets of a real-valued function. This loss function allows optimal trade-offs between conditional probabilistic coverage and the "size" of the set, measured by a non-decreasing submodular function. We also propose several extensions that mimic loss functions and criteria for binary classification with asymmetric losses, and show how to naturally obtain sets with optimized conditional coverage. 
We derive efficient optimization algorithms, either based on stochastic gradient descent or reweighted least-squares formulations, and illustrate our findings with a series of experiments on synthetic datasets for classification and regression tasks, showing improvements over approaches that aim for marginal coverage. + oai:arXiv.org:2512.19142v1 + cs.LG + math.OC + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Junyu Xiong, Yonghui Wang, Weichao Zhao, Chenyu Liu, Bing Yin, Wengang Zhou, Houqiang Li + Francis Bach (SIERRA) - Early Explorations of Recommender Systems for Physical Activity and Well-being - https://arxiv.org/abs/2508.07980 - arXiv:2508.07980v2 Announce Type: replace -Abstract: As recommender systems increasingly guide physical actions, often through wearables and coaching tools, new challenges arise around how users interpret, trust, and respond to this advice. This paper introduces a conceptual framework for tangible recommendations that influence users' bodies, routines, and well-being. We describe three design dimensions: trust and interpretation, intent alignment, and consequence awareness. These highlight key limitations in applying conventional recommender logic to embodied settings. Through examples and design reflections, we outline how future systems can support long-term well-being, behavioral alignment, and socially responsible personalization. 
- oai:arXiv.org:2508.07980v2 - cs.HC - cs.IR - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Alan Said + RP-CATE: Recurrent Perceptron-based Channel Attention Transformer Encoder for Industrial Hybrid Modeling + https://arxiv.org/abs/2512.19147 + arXiv:2512.19147v1 Announce Type: new +Abstract: Nowadays, industrial hybrid modeling which integrates both mechanistic modeling and machine learning-based modeling techniques has attracted increasing interest from scholars due to its high accuracy, low computational cost, and satisfactory interpretability. Nevertheless, the existing industrial hybrid modeling methods still face two main limitations. First, current research has mainly focused on applying a single machine learning method to one specific task, failing to develop a comprehensive machine learning architecture suitable for modeling tasks, which limits their ability to effectively represent complex industrial scenarios. Second, industrial datasets often contain underlying associations (e.g., monotonicity or periodicity) that are not adequately exploited by current research, which can degrade model's predictive performance. To address these limitations, this paper proposes the Recurrent Perceptron-based Channel Attention Transformer Encoder (RP-CATE), with three distinctive characteristics: 1: We developed a novel architecture by replacing the self-attention mechanism with channel attention and incorporating our proposed Recurrent Perceptron (RP) Module into Transformer, achieving enhanced effectiveness for industrial modeling tasks compared to the original Transformer. 2: We proposed a new data type called Pseudo-Image Data (PID) tailored for channel attention requirements and developed a cyclic sliding window method for generating PID. 
3: We introduced the concept of Pseudo-Sequential Data (PSD) and a method for converting industrial datasets into PSD, which enables the RP Module to capture the underlying associations within industrial dataset more effectively. An experiment aimed at hybrid modeling in chemical engineering was conducted by using RP-CATE and the experimental results demonstrate that RP-CATE achieves the best performance compared to other baseline models. + oai:arXiv.org:2512.19147v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Haoran Yang, Yinan Zhang, Wenjie Zhang, Dongxia Wang, Peiyu Liu, Yuqi Ye, Kexin Chen, Wenhai Wang - On the dynamic evolution of CLIP texture-shape bias and its relationship to human alignment and model robustness - https://arxiv.org/abs/2508.09814 - arXiv:2508.09814v2 Announce Type: replace -Abstract: Contrastive language-image models such as CLIP have demonstrated remarkable generalization capabilities. However, how their internal visual representations evolve during training and how this evolution relates to human perception remains poorly understood. Most existing analysis characterize fully trained models, leaving the dynamics of representational biases and perceptual alignment largely unexplored. In this work, we present an epoch-by-epoch analysis of CLIP models throughout training, focusing on the evolution of texture-shape bias, alignment with human perceptual judgements, and sensitivity to image noise. Using multiple perceptual benchmarks spanning low-level image quality assessment, mid-level perceptual similarity, saliency correspondence, and noisy robustness, we identify a consistent, training-stage-dependent representational transition. Early training stages exhibit strong texture bias, elevated alignment with low-level human perceptual measures, and increased sensitivity to Gaussian noise perturbations. 
As training progresses, this texture bias gradually diminishes in favor of more shape-based representations, coinciding with improved robustness to noise and a decline in low-level perceptual alignment. Importantly, these dynamics are consistently observed across multiple CLIP model scales, indicating that the phenomenon is not specific to a particular architecture size. Our findings provide an empirical characterization of how perceptual alignment, feature bias, and robustness co-evolve during multimodal model training. This work reveals a systematic trade-off between early low-level perceptual alignment and later robustness, offering new insights into the representational dynamics of vision-language models and their relationship to human visual processing. - oai:arXiv.org:2508.09814v2 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Pablo Hernández-Cámara, Jose Manuel Jaén-Lorites, Alexandra Gómez-Villa, Jorge Vila-Tomás, Valero Laparra, Jesus Malo + A Flexible Field-Based Policy Learning Framework for Diverse Robotic Systems and Sensors + https://arxiv.org/abs/2512.19148 + arXiv:2512.19148v1 Announce Type: new +Abstract: We present a cross robot visuomotor learning framework that integrates diffusion policy based control with 3D semantic scene representations from D3Fields to enable category level generalization in manipulation. Its modular design supports diverse robot camera configurations including UR5 arms with Microsoft Azure Kinect arrays and bimanual manipulators with Intel RealSense sensors through a low latency control stack and intuitive teleoperation. A unified configuration layer enables seamless switching between setups for flexible data collection training and evaluation. In a grasp and lift block task the framework achieved an 80 percent success rate after only 100 demonstration episodes demonstrating robust skill transfer between platforms and sensing modalities. 
This design paves the way for scalable real world studies in cross robotic generalization. + oai:arXiv.org:2512.19148v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Jose Gustavo Buenaventura Carreon, Floris Erich, Roman Mykhailyshyn, Tomohiro Motoda, Ryo Hanai, Yukiyasu Domae - PASS: Probabilistic Agentic Supernet Sampling for Interpretable and Adaptive Chest X-Ray Reasoning - https://arxiv.org/abs/2508.10501 - arXiv:2508.10501v4 Announce Type: replace -Abstract: Existing tool-augmented agentic systems are limited in the real world by (i) black-box reasoning steps that undermine trust of decision-making and pose safety risks, (ii) poor multimodal integration, which is inherently critical for healthcare tasks, and (iii) rigid and computationally inefficient agentic pipelines. We introduce PASS (Probabilistic Agentic Supernet Sampling), the first multimodal framework to address these challenges in the context of Chest X-Ray (CXR) reasoning. PASS adaptively samples agentic workflows over a multi-tool graph, yielding decision paths annotated with interpretable probabilities. Given the complex CXR reasoning task with multimodal medical data, PASS leverages its learned task-conditioned distribution over the agentic supernet. Thus, it adaptively selects the most suitable tool at each supernet layer, offering probability-annotated trajectories for post-hoc audits and directly enhancing medical AI safety. PASS also continuously compresses salient findings into an evolving personalized memory, while dynamically deciding whether to deepen its reasoning path or invoke an early exit for efficiency. To optimize a Pareto frontier balancing performance and cost, we design a novel three-stage training procedure, including expert knowledge warm-up, contrastive path-ranking, and cost-aware reinforcement learning. 
To facilitate rigorous evaluation, we introduce CAB-E, a comprehensive benchmark for multi-step, safety-critical, free-form CXR reasoning. Experiments across various benchmarks validate that PASS significantly outperforms strong baselines in multiple metrics (e.g., accuracy, LLM-Judge, semantic similarity, etc.) while balancing computational costs, pushing a new paradigm shift towards interpretable, adaptive, and multimodal medical agentic systems. - oai:arXiv.org:2508.10501v4 - cs.AI - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + AMap: Distilling Future Priors for Ahead-Aware Online HD Map Construction + https://arxiv.org/abs/2512.19150 + arXiv:2512.19150v1 Announce Type: new +Abstract: Online High-Definition (HD) map construction is pivotal for autonomous driving. While recent approaches leverage historical temporal fusion to improve performance, we identify a critical safety flaw in this paradigm: it is inherently ``spatially backward-looking." These methods predominantly enhance map reconstruction in traversed areas, offering minimal improvement for the unseen road ahead. Crucially, our analysis of downstream planning tasks reveals a severe asymmetry: while rearward perception errors are often tolerable, inaccuracies in the forward region directly precipitate hazardous driving maneuvers. To bridge this safety gap, we propose AMap, a novel framework for Ahead-aware online HD Mapping. We pioneer a ``distill-from-future" paradigm, where a teacher model with privileged access to future temporal contexts guides a lightweight student model restricted to the current frame. This process implicitly compresses prospective knowledge into the student model, endowing it with ``look-ahead" capabilities at zero inference-time cost. Technically, we introduce a Multi-Level BEV Distillation strategy with spatial masking and an Asymmetric Query Adaptation module to effectively transfer future-aware representations to the student's static queries. 
Extensive experiments on the nuScenes and Argoverse 2 benchmark demonstrate that AMap significantly enhances current-frame perception. Most notably, it outperforms state-of-the-art temporal models in critical forward regions while maintaining the efficiency of single current frame inference. + oai:arXiv.org:2512.19150v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yushi Feng, Junye Du, Yingying Hong, Qifan Wang, Lequan Yu + Ruikai Li, Xinrun Li, Mengwei Xie, Hao Shan, Shoumeng Qiu, Xinyuan Chang, Yizhe Fan, Feng Xiong, Han Jiang, Yilong Ren, Haiyang Yu, Mu Xu, Yang Long, Varun Ojha, Zhiyong Cui - TimeSenCLIP: A Time Series Vision-Language Model for Remote Sensing Using Single-Pixel - https://arxiv.org/abs/2508.11919 - arXiv:2508.11919v2 Announce Type: replace -Abstract: Vision-language models (VLMs) have shown significant promise in remote sensing applications, particularly for land-use and land-cover (LULC) mapping via zero-shot classification and retrieval. However, current approaches face several key challenges, such as the dependence on caption-based supervision, which is often not available or very limited in terms of the covered semantics, and the fact of being adapted from generic VLM architectures that are suitable for very high resolution images. Consequently, these models tend to prioritize spatial context over spectral and temporal information, limiting their effectiveness for medium-resolution remote sensing imagery. In this work, we present TimeSenCLIP, a lightweight VLM for remote sensing time series, using a cross-view temporal contrastive framework to align multispectral Sentinel-2 time series with geo-tagged ground-level imagery, without requiring textual annotations. Unlike prior VLMs, TimeSenCLIP emphasizes temporal and spectral signals over spatial context, investigating whether single-pixel time series contain sufficient information for solving a variety of tasks. 
- oai:arXiv.org:2508.11919v2 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace + University Rents Enabling Corporate Innovation: Mapping Academic Researcher Coding and Discursive Labour in the R Language Ecosystem + https://arxiv.org/abs/2512.19153 + arXiv:2512.19153v1 Announce Type: new +Abstract: This article explores the role of unrecognised labour in corporate innovation systems via an analysis of researcher coding and discursive contributions to R, one of the largest statistical software ecosystems. Studies of online platforms typically focus on how platform affordances constrain participants' actions, and profit from their labour. We innovate by connecting the labour performed inside digital platforms to the professional employment of participants. Our case study analyses 8,924 R package repositories on GitHub, examining commits and communications. Our quantitative findings show that researchers, alongside non-affiliated contributors, are the most frequent owners of R package repositories and their most active contributors. Researchers are more likely to hold official roles compared to the average, and to engage in collaborative problem-solving and support work during package development. This means there is, underneath the 'recognised' category of star researchers who transition between academia and industry and secure generous funding, an 'unrecognised' category of researchers who not only create and maintain key statistical infrastructure, but also provide support to industry employees, for no remuneration. Our qualitative findings show how this unrecognised labour affects practitioners. Finally, our analysis of the ideology and practice of free, libre and open source software (FLOSS) shows how this ideology and practice legitimate the use of 'university rents' by Big Tech. 
+ oai:arXiv.org:2512.19153v1 + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Journal of Quantitative Description: Digital Media, 2025, 5 + Xiaolan Cai (IP Paris, LTCI, ACES, INFRES), Mathieu O'Neil (IP Paris, LTCI, ACES, INFRES), Stefano Zacchiroli (IP Paris, LTCI, ACES, INFRES) + + + Beyond Sliding Windows: Learning to Manage Memory in Non-Markovian Environments + https://arxiv.org/abs/2512.19154 + arXiv:2512.19154v1 Announce Type: new +Abstract: Recent success in developing increasingly general purpose agents based on sequence models has led to increased focus on the problem of deploying computationally limited agents within the vastly more complex real-world. A key challenge experienced in these more realistic domains is highly non-Markovian dependencies with respect to the agent's observations, which are less common in small controlled domains. The predominant approach for dealing with this in the literature is to stack together a window of the most recent observations (Frame Stacking), but this window size must grow with the degree of non-Markovian dependencies, which results in prohibitive computational and memory requirements for both action inference and learning. In this paper, we are motivated by the insight that in many environments that are highly non-Markovian with respect to time, the environment only causally depends on a relatively small number of observations over that time-scale. A natural direction would then be to consider meta-algorithms that maintain relatively small adaptive stacks of memories such that it is possible to express highly non-Markovian dependencies with respect to time while considering fewer observations at each step and thus experience substantial savings in both compute and memory requirements. 
Hence, we propose a meta-algorithm (Adaptive Stacking) for achieving exactly that with convergence guarantees and quantify the reduced computation and memory constraints for MLP, LSTM, and Transformer-based agents. Our experiments utilize popular memory tasks, which give us control over the degree of non-Markovian dependencies. This allows us to demonstrate that an appropriate meta-algorithm can learn the removal of memories not predictive of future rewards without excessive removal of important experiences. Code: https://github.com/geraudnt/adaptive-stacking + oai:arXiv.org:2512.19154v1 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Pallavi Jain, Diego Marcos, Dino Ienco, Roberto Interdonato, Tristan Berchoux + Geraud Nangue Tasse, Matthew Riemer, Benjamin Rosman, Tim Klinger - Feature Request Analysis and Processing: Tasks, Techniques, and Trends - https://arxiv.org/abs/2508.12436 - arXiv:2508.12436v2 Announce Type: replace -Abstract: Feature requests are proposed by users to request new features or enhancements of existing features of software products, which represent users' wishes and demands. Satisfying users' demands can benefit the product from both competitiveness and user satisfaction. Feature requests have seen a rise in interest in the past few years and the amount of research has been growing. However, the diversity in the research topics suggests the need for their collective analysis to identify the challenges and opportunities so as to promote new advances in the future. In this work, following a defined process and a search protocol, we provide a systematic overview of the research area by searching and categorizing relevant studies. We select and analyze 131 primary studies using descriptive statistics and qualitative analysis methods. We classify the studies into different topics and group them from the perspective of requirements engineering activities. 
We investigate open tools as well as datasets for future research. In addition, we identify several key challenges and opportunities, such as: (1) ensuring the quality of feature requests, (2) improving their specification and validation, and (3) developing high-quality benchmarks for large language model-driven tasks. - oai:arXiv.org:2508.12436v2 - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Feifei Niu, Chuanyi Li, Haosheng Zuo, Jionghan Wu, Xin Xia + Can We Test Consciousness Theories on AI? Ablations, Markers, and Robustness + https://arxiv.org/abs/2512.19155 + arXiv:2512.19155v1 Announce Type: new +Abstract: The search for reliable indicators of consciousness has fragmented into competing theoretical camps (Global Workspace Theory (GWT), Integrated Information Theory (IIT), and Higher-Order Theories (HOT)), each proposing distinct neural signatures. We adopt a synthetic neuro-phenomenology approach: constructing artificial agents that embody these mechanisms to test their functional consequences through precise architectural ablations impossible in biological systems. Across three experiments, we report dissociations suggesting these theories describe complementary functional layers rather than competing accounts. In Experiment 1, a no-rewire Self-Model lesion abolishes metacognitive calibration while preserving first-order task performance, yielding a synthetic blindsight analogue consistent with HOT predictions. In Experiment 2, workspace capacity proves causally necessary for information access: a complete workspace lesion produces qualitative collapse in access-related markers, while partial reductions show graded degradation, consistent with GWT's ignition framework. In Experiment 3, we uncover a broadcast-amplification effect: GWT-style broadcasting amplifies internal noise, creating extreme fragility. 
The B2 agent family is robust to the same latent perturbation; this robustness persists in a Self-Model-off / workspace-read control, cautioning against attributing the effect solely to $z_{\text{self}}$ compression. We also report an explicit negative result: raw perturbational complexity (PCI-A) decreases under the workspace bottleneck, cautioning against naive transfer of IIT-adjacent proxies to engineered agents. These results suggest a hierarchical design principle: GWT provides broadcast capacity, while HOT provides quality control. We emphasize that our agents are not conscious; they are reference implementations for testing functional predictions of consciousness theories. + oai:arXiv.org:2512.19155v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Yin Jun Phua - PhysGM: Large Physical Gaussian Model for Feed-Forward 4D Synthesis - https://arxiv.org/abs/2508.13911 - arXiv:2508.13911v2 Announce Type: replace -Abstract: Despite advances in physics-based 3D motion synthesis, current methods face key limitations: reliance on pre-reconstructed 3D Gaussian Splatting (3DGS) built from dense multi-view images with time-consuming per-scene optimization; physics integration via either inflexible, hand-specified attributes or unstable, optimization-heavy guidance from video models using Score Distillation Sampling (SDS); and naive concatenation of prebuilt 3DGS with physics modules, which ignores physical information embedded in appearance and yields suboptimal performance. To address these issues, we propose PhysGM, a feed-forward framework that jointly predicts 3D Gaussian representation and physical properties from a single image, enabling immediate simulation and high-fidelity 4D rendering. Unlike slow appearance-agnostic optimization methods, we first pre-train a physics-aware reconstruction model that directly infers both Gaussian and physical parameters. 
We further refine the model with Direct Preference Optimization (DPO), aligning simulations with the physically plausible reference videos and avoiding the high-cost SDS optimization. To address the absence of a supporting dataset for this task, we propose PhysAssets, a dataset of 50K+ 3D assets annotated with physical properties and corresponding reference videos. Experiments show that PhysGM produces high-fidelity 4D simulations from a single image in one minute, achieving a significant speedup over prior work while delivering realistic renderings.Our project page is at:https://hihixiaolv.github.io/PhysGM.github.io/ - oai:arXiv.org:2508.13911v2 + OmniMoGen: Unifying Human Motion Generation via Learning from Interleaved Text-Motion Instructions + https://arxiv.org/abs/2512.19159 + arXiv:2512.19159v1 Announce Type: new +Abstract: Large language models (LLMs) have unified diverse linguistic tasks within a single framework, yet such unification remains unexplored in human motion generation. Existing methods are confined to isolated tasks, limiting flexibility for free-form and omni-objective generation. To address this, we propose OmniMoGen, a unified framework that enables versatile motion generation through interleaved text-motion instructions. Built upon a concise RVQ-VAE and transformer architecture, OmniMoGen supports end-to-end instruction-driven motion generation. We construct X2Mo, a large-scale dataset of over 137K interleaved text-motion instructions, and introduce AnyContext, a benchmark for evaluating interleaved motion generation. Experiments show that OmniMoGen achieves state-of-the-art performance on text-to-motion, motion editing, and AnyContext, exhibiting emerging capabilities such as compositional editing, self-reflective generation, and knowledge-informed generation. These results mark a step toward the next intelligent motion generation. Project Page: https://OmniMoGen.github.io/. 
+ oai:arXiv.org:2512.19159v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Chunji Lv, Zequn Chen, Donglin Di, Weinan Zhang, Hao Li, Wei Chen, Yinjie Lei, Changsheng Li + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Wendong Bu, Kaihang Pan, Yuze Lin, Jiacheng Li, Kai Shen, Wenqiao Zhang, Juncheng Li, Jun Xiao, Siliang Tang - STAGNet: A Spatio-Temporal Graph and LSTM Framework for Accident Anticipation - https://arxiv.org/abs/2508.15216 - arXiv:2508.15216v3 Announce Type: replace -Abstract: Accident prediction and timely preventive actions improve road safety by reducing the risk of injury to road users and minimizing property damage. Hence, they are critical components of advanced driver assistance systems (ADAS) and autonomous vehicles. While many existing systems depend on multiple sensors such as LiDAR, radar, and GPS, relying solely on dash-cam videos presents a more challenging, yet more cost-effective and easily deployable solution. In this work, we incorporate improved spatio-temporal features and aggregate them through a recurrent network to enhance state-of-the-art graph neural networks for predicting accidents from dash-cam videos. Experiments using three publicly available datasets (DAD, DoTA and DADA) show that our proposed STAGNet model achieves higher average precision and mean time-to-accident scores than previous methods, both when cross-validated on a given dataset and when trained and tested on different datasets. - oai:arXiv.org:2508.15216v3 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Rapid stabilization of the heat equation with localized disturbance + https://arxiv.org/abs/2512.19160 + arXiv:2512.19160v1 Announce Type: new +Abstract: This paper studies the rapid stabilization of a multidimensional heat equation in the presence of an unknown spatially localized disturbance. 
A novel multivalued feedback control strategy is proposed, which synthesizes the frequency Lyapunov method (introduced by Xiang [41]) with the sign multivalued operator. This methodology connects Lyapunov-based stability analysis with spectral inequalities, while the inclusion of the sign operator ensures robustness against the disturbance. The closed-loop system is governed by a differential inclusion, for which well-posedness is proved via the theory of maximal monotone operators. This approach not only guarantees exponential stabilization but also circumvents the need for explicit disturbance modeling or estimation. + oai:arXiv.org:2512.19160v1 + eess.SY + cs.SY + math.AP + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1109/ACCESS.2025.3645127 - Vipooshan Vipulananthan, Kumudu Mohottala, Kavindu Chinthana, Nimsara Paramulla, Charith D Chitraranjan + Patricio Guzm\'an (SPHINX, IECL), Hugo Parada (SPHINX, IECL), Christian Calle-C\'ardenas - Edge-Native Digitization of Handwritten Marksheets: A Hybrid Heuristic-Deep Learning Framework - https://arxiv.org/abs/2508.16295 - arXiv:2508.16295v2 Announce Type: replace -Abstract: The digitization of structured handwritten documents, such as academic marksheets, remains a significant challenge due to the dual complexity of irregular table structures and diverse handwriting styles. While recent Transformer-based approaches like TableNet and TrOCR achieve state-of-the-art accuracy, their high computational cost renders them unsuitable for resource-constrained edge deployments. This paper introduces a resource-efficient hybrid framework that integrates a heuristic OpenCV-based pipeline for rapid table structure detection with a modified lightweight YOLOv8 architecture for handwritten character recognition. 
By strategically removing the SPPF and deep C2f layers from the standard YOLOv8 backbone, we reduce computational overhead while maintaining high recognition fidelity. Experimental results on the EMNIST digit benchmark demonstrate that our Modified YOLOv8 model achieves 97.5% accuracy. Furthermore, we provide a comprehensive efficiency analysis showing that our framework offers a 95 times inference speedup over standard OCR pipelines and massive efficiency gains over emerging Large Multimodal Models (LMMs) like Qwen2.5-VL, achieving real-time performance 29 FPS on standard CPU hardware. A qualitative and quantitative evaluation on the AMES dataset, a challenging subset of real-world marksheets, confirms the system's robustness in handling mixed alphanumeric content, bridging the gap between high-performance deep learning and practical, scalable document automation. - oai:arXiv.org:2508.16295v2 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Md. Irtiza Hossain, Junaid Ahmed Sifat, Abir Chowdhury + From Speech to Subtitles: Evaluating ASR Models in Subtitling Italian Television Programs + https://arxiv.org/abs/2512.19161 + arXiv:2512.19161v1 Announce Type: new +Abstract: Subtitles are essential for video accessibility and audience engagement. Modern Automatic Speech Recognition (ASR) systems, built upon Encoder-Decoder neural network architectures and trained on massive amounts of data, have progressively reduced transcription errors on standard benchmark datasets. However, their performance in real-world production environments, particularly for non-English content like long-form Italian videos, remains largely unexplored. This paper presents a case study on developing a professional subtitling system for an Italian media company. 
To inform our system design, we evaluated four state-of-the-art ASR models (Whisper Large v2, AssemblyAI Universal, Parakeet TDT v3 0.6b, and WhisperX) on a 50-hour dataset of Italian television programs. The study highlights their strengths and limitations, benchmarking their performance against the work of professional human subtitlers. The findings indicate that, while current models cannot meet the media industry's accuracy needs for full autonomy, they can serve as highly effective tools for enhancing human productivity. We conclude that a human-in-the-loop (HITL) approach is crucial and present the production-grade, cloud-based infrastructure we designed to support this workflow.
      + oai:arXiv.org:2512.19161v1
      cs.CL
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://creativecommons.org/licenses/by/4.0/
      Alessandro Lucca, Francesco Pierri


      MolMark: Safeguarding Molecular Structures through Learnable Atom-Level Watermarking
      https://arxiv.org/abs/2508.17702
      arXiv:2508.17702v2 Announce Type: replace
Abstract: AI-driven molecular generation is reshaping drug discovery and materials design, yet the lack of protection mechanisms leaves AI-generated molecules vulnerable to unauthorized reuse and provenance ambiguity. Such a limitation undermines both scientific reproducibility and intellectual property security. To address this challenge, we propose the first deep learning based watermarking framework for molecules (MolMark), which is exquisitely designed to embed high-fidelity digital signatures into molecules without compromising molecular functionalities. MolMark learns to modulate the chemically meaningful atom-level representations and enforce geometric robustness through SE(3)-invariant features, maintaining robustness under rotation, translation, and reflection. 
Additionally, MolMark integrates seamlessly with AI-based molecular generative models, enabling watermarking to be treated as a learned transformation with minimal interference to molecular structures. Experiments on benchmark datasets (QM9, GEOM-DRUG) and state-of-the-art molecular generative models (GeoBFN, GeoLDM) demonstrate that MolMark can embed 16-bit watermarks while retaining more than 90% of essential molecular properties, preserving downstream performance, and enabling >95% extraction accuracy under SE(3) transformations. MolMark establishes a principled pathway for unifying molecular generation with verifiable authorship, supporting trustworthy and accountable AI-driven molecular discovery. - oai:arXiv.org:2508.17702v2 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Towards a collaborative digital platform for railway infrastructure projects + https://arxiv.org/abs/2512.19169 + arXiv:2512.19169v1 Announce Type: new +Abstract: The management of railway infrastructure projects can be supported by collaborative digital platforms. A survey was carried out to identify the needs and expectations of the various stakeholders involved in the design and construction of railway infrastructure projects regarding collaborative platforms. These needs and expectations can then be translated into functional specifications to be included in the digital platforms. A total of 21 interviews were conducted between October and December 2022, during which 35 individuals were interviewed. Key roles were represented across the different project phases: engineers from design and construction firms, project managers, infrastructure managers. And various engineering fields were represented: civil, electrical, telecommunications, tracks, systems. 
These interviews were carried out by CentraleSup{\'e}lec | Universit{\'e} Paris-Saclay and by SNCF R{\'e}seau using a structured protocol designed to collect the specific needs of the interviewees for collaboration, as well as the guiding principles that shape both individual work practices and collaboration between professions. The resulting material was analyzed and then synthesized into a conceptual model of a collaborative digital platform for supporting the design and construction phases in a railway infrastructure project. Also, from these interviews emerged five core functionalities that the platform must offer: Providing access to existing infrastructure data; Accelerating repetitive tasks; Verifying essential project requirements; Supporting decision-making; Facilitating coordination among stakeholders. + oai:arXiv.org:2512.19169v1 + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 14th World Congress on Railway Research (WCRR 2025), MxV Rail, Nov 2025, Colorado Springs, CO, United States + Pierre Jehel (LMPS), Pierre-\'Etienne Gautier (LMPS), Judica\"el Dehotin (SNCF R\'eseau), Flavien Viguier (SNCF R\'eseau) + + + JEPA-Reasoner: Decoupling Latent Reasoning from Token Generation + https://arxiv.org/abs/2512.19171 + arXiv:2512.19171v1 Announce Type: new +Abstract: While Joint-Embedding Predictive Architecture (JEPA) has emerged as a powerful architecture for learning rich latent representations, it fundamentally lacks generative abilities. Meanwhile, latent space reasoning attempts for Transformer models like COCONUT do improve performance, but they ultimately rely on token-by-token generation, which still accumulates compounding error and relies on context information to gain reasoning insights. To address these limitations, we propose JEPA-Reasoner, a novel JEPA model enhanced with generative ability that reasons in latent space. 
We augment it with a separate action-taker model, Talker, to produce human-readable sentences. Our approach demonstrates that decoupling latent space reasoning and token generation enables JEPA-Reasoner to produce mixed latent vectors that might lay the foundation for multi-threaded reasoning, while performing autoregressive generation with superior robustness to compounding error. + oai:arXiv.org:2512.19171v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Runwen Hu, Peilin Chen, Keyan Ding, Shiqi Wang + Bingyang Kelvin Liu, Ziyu Patrick Chen - Human-like Content Analysis for Generative AI with Language-Grounded Sparse Encoders - https://arxiv.org/abs/2508.18236 - arXiv:2508.18236v3 Announce Type: replace -Abstract: The rapid development of generative AI has transformed content creation, communication, and human development. However, this technology raises profound concerns in high-stakes domains, demanding rigorous methods to analyze and evaluate AI-generated content. While existing analytic methods often treat images as indivisible wholes, real-world AI failures generally manifest as specific visual patterns that can evade holistic detection and suit more granular and decomposed analysis. Here we introduce a content analysis tool, Language-Grounded Sparse Encoders (LanSE), which decompose images into interpretable visual patterns with natural language descriptions. Utilizing interpretability modules and large multimodal models, LanSE can automatically identify visual patterns within data modalities. Our method discovers more than 5,000 visual patterns with 93\% human agreement, provides decomposed evaluation outperforming existing methods, establishes the first systematic evaluation of physical plausibility, and extends to medical imaging settings. 
Our method's capability to extract language-grounded patterns can be naturally adapted to numerous fields, including biology and geography, as well as other data modalities such as protein structures and time series, thereby advancing content analysis for generative AI. - oai:arXiv.org:2508.18236v3 + CycleChart: A Unified Consistency-Based Learning Framework for Bidirectional Chart Understanding and Generation + https://arxiv.org/abs/2512.19173 + arXiv:2512.19173v1 Announce Type: new +Abstract: Current chart-specific tasks, such as chart question answering, chart parsing, and chart generation, are typically studied in isolation, preventing models from learning the shared semantics that link chart generation and interpretation. We introduce CycleChart, a consistency-based learning framework for bidirectional chart understanding and generation. CycleChart adopts a schema-centric formulation as a common interface across tasks. We construct a consistent multi-task dataset, where each chart sample includes aligned annotations for schema prediction, data parsing, and question answering. To learn cross-directional chart semantics, CycleChart introduces a generate-parse consistency objective: the model generates a chart schema from a table and a textual query, then learns to recover the schema and data from the generated chart, enforcing semantic alignment across directions. CycleChart achieves strong results on chart generation, chart parsing, and chart question answering, demonstrating improved cross-task generalization and marking a step toward more general chart understanding models. 
+ oai:arXiv.org:2512.19173v1
      cs.CL
      cs.CV
      Tue, 23 Dec 2025 00:00:00 -0500
      new
      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
      Dazhen Deng, Sen Yang, Yuchen He, Yuan Tian, Yingcai Wu


      Dual-Distilled Heterogeneous Federated Learning with Adaptive Margins for Trainable Global Prototypes
      https://arxiv.org/abs/2508.19009
      arXiv:2508.19009v3 Announce Type: replace
Abstract: Heterogeneous Federated Learning (HFL) has gained significant attention for its capacity to handle both model and data heterogeneity across clients. Prototype-based HFL methods emerge as a promising solution to address statistical and model heterogeneity as well as privacy challenges, paving the way for new advancements in HFL research. This method focuses on sharing class-representative prototypes among heterogeneous clients. However, aggregating these prototypes via standard weighted averaging often yields sub-optimal global knowledge. Specifically, the averaging approach induces a shrinking of the aggregated prototypes' decision margins, thereby degrading model performance in scenarios with model heterogeneity and non-IID data distributions. We propose FedProtoKD in a Heterogeneous Federated Learning setting, utilizing an enhanced dual-knowledge distillation mechanism to enhance system performance by leveraging clients' logits and prototype feature representations. The proposed framework aims to resolve the prototype margin-shrinking problem using a contrastive learning-based trainable server prototype by leveraging a class-wise adaptive prototype margin. 
Furthermore, the framework assesses the importance of public samples using the closeness of the sample's prototype to its class representative prototypes, which enhances learning performance. FedProtoKD improved test accuracy by an average of 1.13% and up to 34.13% across various settings, significantly outperforming existing state-of-the-art HFL methods.
      - oai:arXiv.org:2508.19009v3
      cs.LG
      cs.DC
      Mon, 22 Dec 2025 00:00:00 -0500
      replace
      http://creativecommons.org/licenses/by/4.0/
      Fatema Siddika, Md Anwar Hossen, Wensheng Zhang, Anuj Sharma, Juan Pablo Mu\~noz, Ali Jannesari


      Semantic Communication for Rate-Limited Closed-Loop Distributed Communication-Sensing-Control Systems
      https://arxiv.org/abs/2512.19177
      arXiv:2512.19177v1 Announce Type: new
Abstract: The growing integration of distributed integrated sensing and communication (ISAC) with closed-loop control in intelligent networks demands efficient information transmission under stringent bandwidth constraints. To address this challenge, this paper proposes a unified framework for goal-oriented semantic communication in distributed SCC systems. Building upon Weaver's three-level model, we establish a hierarchical semantic formulation with three error levels (L1: observation reconstruction, L2: state estimation, and L3: control) to jointly optimize their corresponding objectives. Based on this formulation, we propose a unified goal-oriented semantic compression and rate adaptation framework that is applicable to different semantic error levels and optimization goals across the SCC loop. A rate-limited multi-sensor LQR system is used as a case study to validate the proposed framework. We employ a GRU-based AE for semantic compression and a PPO-based rate adaptation algorithm that dynamically allocates transmission rates across sensors. Results show that the proposed framework effectively captures task-relevant semantics and adapts its resource allocation strategies across different semantic levels, thereby achieving level-specific performance gains under bandwidth constraints. 
+ oai:arXiv.org:2512.19177v1 + eess.SY + cs.NI + cs.SY + eess.SP + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Guangjin Pan, Zhixing Li, Ay\c{c}a \"Oz\c{c}elikkale, Christian H\"ager, Musa Furkan Keskin, Henk Wymeersch + + + Vision-Language-Policy Model for Dynamic Robot Task Planning + https://arxiv.org/abs/2512.19178 + arXiv:2512.19178v1 Announce Type: new +Abstract: Bridging the gap between natural language commands and autonomous execution in unstructured environments remains an open challenge for robotics. This requires robots to perceive and reason over the current task scene through multiple modalities, and to plan their behaviors to achieve their intended goals. Traditional robotic task-planning approaches often struggle to bridge low-level execution with high-level task reasoning, and cannot dynamically update task strategies when instructions change during execution, which ultimately limits their versatility and adaptability to new tasks. In this work, we propose a novel language model-based framework for dynamic robot task planning. Our Vision-Language-Policy (VLP) model, based on a vision-language model fine-tuned on real-world data, can interpret semantic instructions and integrate reasoning over the current task scene to generate behavior policies that control the robot to accomplish the task. Moreover, it can dynamically adjust the task strategy in response to changes in the task, enabling flexible adaptation to evolving task requirements. Experiments conducted with different robots and a variety of real-world tasks show that the trained model can efficiently adapt to novel scenarios and dynamically update its policy, demonstrating strong planning autonomy and cross-embodiment generalization. 
Videos: https://robovlp.github.io/ + oai:arXiv.org:2512.19178v1 + cs.RO + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jin Wang, Kim Tien Ly, Jacques Cloete, Nikos Tsagarakis, Ioannis Havoutis + + + L4: Low-Latency and Load-Balanced LLM Serving via Length-Aware Scheduling + https://arxiv.org/abs/2512.19179 + arXiv:2512.19179v1 Announce Type: new +Abstract: Efficiently harnessing GPU compute is critical to improving user experience and reducing operational costs in large language model (LLM) services. However, current inference engine schedulers overlook the attention backend's sensitivity to request-length heterogeneity within a batch. As state-of-the-art models now support context windows exceeding 128K tokens, this once-tolerable inefficiency has escalated into a primary system bottleneck, causing severe performance degradation through GPU underutilization and increased latency. We present L4, a runtime system that dynamically reschedules requests across multiple instances serving the same LLM to mitigate per-instance length heterogeneity. L4 partitions these instances into length-specialized groups, each handling requests within a designated length range, naturally forming a pipeline as requests flow through them. L4 devises a dynamic programming algorithm to efficiently find the stage partition with the best QoE, employs runtime range refinement together with decentralized load (re)balance both across and within groups, achieving a balanced and efficient multi-instance service. Our evaluation shows that, under the same configuration, L4 reduces end-to-end latency by up to 67% and tail latency by up to 69%, while improving overall system throughput by up to 2.89 times compared to the state-of-the-art multi-instance scheduling systems. 
+ oai:arXiv.org:2512.19179v1 cs.DC - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Fatema Siddika, Md Anwar Hossen, Wensheng Zhang, Anuj Sharma, Juan Pablo Mu\~noz, Ali Jannesari + Yitao Yuan (Peking University, ScitiX AI), Chenqi Zhao (Peking University), Bohan Zhao (ScitiX AI), Zane Cao (ScitiX AI), Yongchao He (ScitiX AI), Wenfei Wu (Peking University) - STDiff: A State Transition Diffusion Framework for Time Series Imputation in Industrial Systems - https://arxiv.org/abs/2508.19011 - arXiv:2508.19011v3 Announce Type: replace -Abstract: Incomplete sensor data is a major obstacle in industrial time-series analytics. In wastewater treatment plants (WWTPs), key sensors show long, irregular gaps caused by fouling, maintenance, and outages. We introduce STDiff and STDiff-W, diffusion-based imputers that cast gap filling as state-space simulation under partial observability, where targets, controls, and exogenous signals may all be intermittently missing. STDiff learns a one-step transition model conditioned on observed values and masks, while STDiff-W extends this with a context encoder that jointly inpaints contiguous blocks, combining long-range consistency with short-term detail. On two WWTP datasets (one with synthetic block gaps from Agtrup and another with natural outages from Aved{\o}re), STDiff-W achieves state-of-the-art accuracy compared with strong neural baselines such as SAITS, BRITS, and CSDI. Beyond point-error metrics, its reconstructions preserve realistic dynamics including oscillations, spikes, and regime shifts, and they achieve top or tied-top downstream one-step forecasting performance compared with strong neural baselines, indicating that preserving dynamics does not come at the expense of predictive utility. 
Ablation studies that drop, shuffle, or add noise to control or exogenous inputs consistently degrade NH4 and PO4 performance, with the largest deterioration observed when exogenous signals are removed, showing that the model captures meaningful dependencies. We conclude with practical guidance for deployment: evaluate performance beyond MAE using task-oriented and visual checks, include exogenous drivers, and balance computational cost against robustness to structured outages. - oai:arXiv.org:2508.19011v3 + Practical Quantum-Classical Feature Fusion for complex data Classification + https://arxiv.org/abs/2512.19180 + arXiv:2512.19180v1 Announce Type: new +Abstract: Hybrid quantum and classical learning aims to couple quantum feature maps with the robustness of classical neural networks, yet most architectures treat the quantum circuit as an isolated feature extractor and merge its measurements with classical representations by direct concatenation. This neglects that the quantum and classical branches constitute distinct computational modalities and limits reliable performance on complex, high dimensional tabular and semi structured data, including remote sensing, environmental monitoring, and medical diagnostics. We present a multimodal formulation of hybrid learning and propose a cross attention mid fusion architecture in which a classical representation queries quantum derived feature tokens through an attention block with residual connectivity. The quantum branch is kept within practical NISQ budgets and uses up to nine qubits. We evaluate on Wine, Breast Cancer, Forest CoverType, FashionMNIST, and SteelPlatesFaults, comparing a quantum only model, a classical baseline, residual hybrid models, and the proposed mid fusion model under a consistent protocol. 
Pure quantum and standard hybrid designs underperform due to measurement induced information loss, while cross attention mid fusion is consistently competitive and improves performance on the more complex datasets in most cases. These findings suggest that quantum derived information becomes most valuable when integrated through principled multimodal fusion rather than used in isolation or loosely appended to classical features. + oai:arXiv.org:2512.19180v1 cs.LG cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - 10.1016/j.eswa.2025.130592 - Expert Systems with Applications, Volume 302, 2026, Article 130592 - Gary Simethy, Daniel Ortiz-Arroyo, Petar Durdevic + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Azadeh Alavi, Fatemeh Kouchmeshki, Abdolrahman Alavi - EEGDM: Learning EEG Representation with Latent Diffusion Model - https://arxiv.org/abs/2508.20705 - arXiv:2508.20705v2 Announce Type: replace -Abstract: Recent advances in self-supervised learning for EEG representation have largely relied on masked reconstruction, where models are trained to recover randomly masked signal segments. While effective at modeling local dependencies, such objectives are inherently limited in capturing the global dynamics and long-range dependencies essential for characterizing neural activity. To address this limitation, we propose EEGDM, a novel self-supervised framework that leverages latent diffusion models to generate EEG signals as an objective. Unlike masked reconstruction, diffusion-based generation progressively denoises signals from noise to realism, compelling the model to capture holistic temporal patterns and cross-channel relationships. Specifically, EEGDM incorporates an EEG encoder that distills raw signals and their channel augmentations into a compact representation, acting as conditional information to guide the diffusion model for generating EEG signals. 
This design endows EEGDM with a compact latent space, which not only offers ample control over the generative process but also can be leveraged for downstream tasks. Experimental results show that EEGDM (1) reconstructs high-quality EEG signals, (2) learns robust representations, and (3) achieves competitive performance across diverse downstream tasks, thus exploring a new direction for self-supervised EEG representation learning. - oai:arXiv.org:2508.20705v2 + Operator-Based Generalization Bound for Deep Learning: Insights on Multi-Task Learning + https://arxiv.org/abs/2512.19184 + arXiv:2512.19184v1 Announce Type: new +Abstract: This paper presents novel generalization bounds for vector-valued neural networks and deep kernel methods, focusing on multi-task learning through an operator-theoretic framework. Our key development lies in strategically combining a Koopman based approach with existing techniques, achieving tighter generalization guarantees compared to traditional norm-based bounds. To mitigate computational challenges associated with Koopman-based methods, we introduce sketching techniques applicable to vector valued neural networks. These techniques yield excess risk bounds under generic Lipschitz losses, providing performance guarantees for applications including robust and multiple quantile regression. Furthermore, we propose a novel deep learning framework, deep vector-valued reproducing kernel Hilbert spaces (vvRKHS), leveraging Perron Frobenius (PF) operators to enhance deep kernel methods. We derive a new Rademacher generalization bound for this framework, explicitly addressing underfitting and overfitting through kernel refinement strategies. This work offers novel insights into the generalization properties of multitask learning with deep learning architectures, an area that has been relatively unexplored until recent developments. 
+ oai:arXiv.org:2512.19184v1 cs.LG cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shaocong Wang, Tong Liu, Yihan Li, Ming Li, Kairui Wen, Pei Yang, Wenqi Ji, Minjing Yu, Yong-Jin Liu + Mahdi Mohammadigohari, Giuseppe Di Fatta, Giuseppe Nicosia, Panos M. Pardalos - Unified Acoustic Representations for Screening Neurological and Respiratory Pathologies from Voice - https://arxiv.org/abs/2508.20717 - arXiv:2508.20717v2 Announce Type: replace -Abstract: Voice-based health assessment offers unprecedented opportunities for scalable, non-invasive disease screening, yet existing approaches typically focus on single conditions and fail to leverage the rich, multi-faceted information embedded in speech. We present MARVEL (Multi-task Acoustic Representations for Voice-based Health Analysis), a privacy-conscious multitask learning framework that simultaneously detects nine distinct neurological, respiratory, and voice disorders using only derived acoustic features, eliminating the need for raw audio transmission. Our dual-branch architecture employs specialized encoders with task-specific heads sharing a common acoustic backbone, enabling effective cross-condition knowledge transfer. Evaluated on the large-scale Bridge2AI-Voice v2.0 dataset, MARVEL achieves an overall AUROC of 0.78, with exceptional performance on neurological disorders (AUROC = 0.89), particularly for Alzheimer's disease/mild cognitive impairment (AUROC = 0.97). Our framework consistently outperforms single-modal baselines by 5-19% and surpasses state-of-the-art self-supervised models on 7 of 9 tasks, while correlation analysis reveals that the learned representations exhibit meaningful similarities with established acoustic features, indicating that the model's internal representations are consistent with clinically recognized acoustic patterns. 
By demonstrating that a single unified model can effectively screen for diverse conditions, this work establishes a foundation for deployable voice-based diagnostics in resource-constrained and remote healthcare settings. - oai:arXiv.org:2508.20717v2 - cs.SD - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Ran Piao, Yuan Lu, Hareld Kemps, Tong Xia, Aaqib Saeed + Configuration Work: Four Consequences of LLMs-in-use + https://arxiv.org/abs/2512.19189 + arXiv:2512.19189v1 Announce Type: new +Abstract: This article examines what it means to use Large Language Models in everyday work. Drawing on a seven-month longitudinal qualitative study, we argue that LLMs do not straightforwardly automate or augment tasks. We propose the concept of configuration work to describe the labor through which workers make a generic system usable for a specific professional task. Configuration work materializes in four intertwined consequences. First, workers must discretize their activity, breaking it into units that the system can process. Second, operating the system generates cluttering, as prompting, evaluating, and correcting responses add scattered layers of work that get in the way of existing routines. Third, users gradually attune their practices and expectations to the machine's generic rigidity, making sense of the system's limits and finding space for it within their practices. Fourth, as LLMs absorb repetitive tasks, they desaturate the texture of work, shifting activity toward logistical manipulation of outputs and away from forms of engagement that sustain a sense of accomplishment. Taken together, these consequences suggest that LLMs reshape work through the individualized labor required to configure a universal, task-agnostic system within situated professional ecologies. 
+ oai:arXiv.org:2512.19189v1 + cs.CY + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Gabriel Alcaras (m\'edialab), Donato Ricci (m\'edialab) - FakeParts: a New Family of AI-Generated DeepFakes - https://arxiv.org/abs/2508.21052 - arXiv:2508.21052v2 Announce Type: replace -Abstract: We introduce FakeParts, a new class of deepfakes characterized by subtle, localized manipulations to specific spatial regions or temporal segments of otherwise authentic videos. Unlike fully synthetic content, these partial manipulations - ranging from altered facial expressions to object substitutions and background modifications - blend seamlessly with real elements, making them particularly deceptive and difficult to detect. To address the critical gap in detection, we present FakePartsBench, the first large-scale benchmark specifically designed to capture the full spectrum of partial deepfakes. Comprising over 81K (including 44K FakeParts) videos with pixel- and frame-level manipulation annotations, our dataset enables comprehensive evaluation of detection methods. Our user studies demonstrate that FakeParts reduces human detection accuracy by up to 26% compared to traditional deepfakes, with similar performance degradation observed in state-of-the-art detection models. This work identifies an urgent vulnerability in current detectors and provides the necessary resources to develop methods robust to partial manipulations. - oai:arXiv.org:2508.21052v2 + PEDESTRIAN: An Egocentric Vision Dataset for Obstacle Detection on Pavements + https://arxiv.org/abs/2512.19190 + arXiv:2512.19190v1 Announce Type: new +Abstract: Walking has always been a primary mode of transportation and is recognized as an essential activity for maintaining good health. Despite the need for safe walking conditions in urban environments, sidewalks are frequently obstructed by various obstacles that hinder free pedestrian movement. 
Any object obstructing a pedestrian's path can pose a safety hazard. The advancement of pervasive computing and egocentric vision techniques offers the potential to design systems that can automatically detect such obstacles in real time, thereby enhancing pedestrian safety. The development of effective and efficient identification algorithms relies on the availability of comprehensive and well-balanced datasets of egocentric data. In this work, we introduce the PEDESTRIAN dataset, comprising egocentric data for 29 different obstacles commonly found on urban sidewalks. A total of 340 videos were collected using mobile phone cameras, capturing a pedestrian's point of view. Additionally, we present the results of a series of experiments that involved training several state-of-the-art deep learning algorithms using the proposed dataset, which can be used as a benchmark for obstacle detection and recognition tasks. The dataset can be used for training pavement obstacle detectors to enhance the safety of pedestrians in urban areas. + oai:arXiv.org:2512.19190v1 cs.CV - cs.AI - cs.MM - Mon, 22 Dec 2025 00:00:00 -0500 - replace + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Ziyi Liu, Firas Gabetni, Awais Hussain Sani, Xi Wang, Soobash Daiboo, Gaetan Brison, Gianni Franchi, Vicky Kalogeiton - - - ResearchQA: Evaluating Scholarly Question Answering at Scale Across 75 Fields with Survey-Mined Questions and Rubrics - https://arxiv.org/abs/2509.00496 - arXiv:2509.00496v2 Announce Type: replace -Abstract: Evaluating long-form responses to research queries heavily relies on expert annotators, restricting attention to areas like AI where researchers can conveniently enlist colleagues. Yet, research expertise is abundant: survey articles consolidate knowledge spread across the literature. 
We introduce ResearchQA, a resource for evaluating LLM systems by distilling survey articles from 75 research fields into 21K queries and 160K rubric items. Queries and rubrics are jointly derived from survey sections, where rubric items list query-specific answer evaluation criteria, i.e., citing papers, making explanations, and describing limitations. 31 Ph.D. annotators in 8 fields judge that 90% of queries reflect Ph.D. information needs and 87% of rubric items warrant emphasis of a sentence or longer. We leverage ResearchQA to evaluate 18 systems in 7.6K head-to-heads. No parametric or retrieval-augmented system we evaluate exceeds 70% on covering rubric items, and the highest-ranking system shows 75% coverage. Error analysis reveals that the highest-ranking system fully addresses less than 11% of citation rubric items, 48% of limitation items, and 49% of comparison items. We release our data to facilitate more comprehensive multi-field evaluations. - oai:arXiv.org:2509.00496v2 - cs.CL - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Li S. 
Yifei, Allen Chang, Chaitanya Malaviya, Mark Yatskar + Marios Thoma (CYENS Centre of Excellence, Nicosia, Cyprus, Open University Cyprus, Nicosia, Cyprus), Zenonas Theodosiou (CYENS Centre of Excellence, Nicosia, Cyprus, Department of Communication and Internet Studies, Cyprus University of Technology, Limassol, Cyprus), Harris Partaourides (AI Cyprus Ethical Novelties Ltd, Limassol, Cyprus), Vassilis Vassiliades (CYENS Centre of Excellence, Nicosia, Cyprus), Loizos Michael (Open University Cyprus, Nicosia, Cyprus, CYENS Centre of Excellence, Nicosia, Cyprus), Andreas Lanitis (CYENS Centre of Excellence, Nicosia, Cyprus, Department of Multimedia and Graphic Arts, Cyprus University of Technology, Limassol, Cyprus) - SGS-3D: High-Fidelity 3D Instance Segmentation via Reliable Semantic Mask Splitting and Growing - https://arxiv.org/abs/2509.05144 - arXiv:2509.05144v2 Announce Type: replace -Abstract: Accurate 3D instance segmentation is crucial for high-quality scene understanding in the 3D vision domain. However, 3D instance segmentation based on 2D-to-3D lifting approaches struggle to produce precise instance-level segmentation, due to accumulated errors introduced during the lifting process from ambiguous semantic guidance and insufficient depth constraints. To tackle these challenges, we propose splitting and growing reliable semantic mask for high-fidelity 3D instance segmentation (SGS-3D), a novel "split-then-grow" framework that first purifies and splits ambiguous lifted masks using geometric primitives, and then grows them into complete instances within the scene. Unlike existing approaches that directly rely on raw lifted masks and sacrifice segmentation accuracy, SGS-3D serves as a training-free refinement method that jointly fuses semantic and geometric information, enabling effective cooperation between the two levels of representation. 
Specifically, for semantic guidance, we introduce a mask filtering strategy that leverages the co-occurrence of 3D geometry primitives to identify and remove ambiguous masks, thereby ensuring more reliable semantic consistency with the 3D object instances. For the geometric refinement, we construct fine-grained object instances by exploiting both spatial continuity and high-level features, particularly in the case of semantic ambiguity between distinct objects. Experimental results on ScanNet200, ScanNet++, and KITTI-360 demonstrate that SGS-3D substantially improves segmentation accuracy and robustness against inaccurate masks from pre-trained models, yielding high-fidelity object instances while maintaining strong generalization across diverse indoor and outdoor environments. Code is available at https://github.com/wangchaolei7/SGS-3D.
- oai:arXiv.org:2509.05144v2
cs.CV
Mon, 22 Dec 2025 00:00:00 -0500
replace
http://creativecommons.org/licenses/by/4.0/
Chaolei Wang, Yang Luo, Jing Du, Siyu Chen, Yiping Chen, Ting Han
Causal Heterogeneous Graph Learning Method for Chronic Obstructive Pulmonary Disease Prediction
https://arxiv.org/abs/2512.19194
arXiv:2512.19194v1 Announce Type: new
Abstract: Due to insufficient diagnosis and treatment capabilities at the grassroots level, there are still deficiencies in the early identification and early warning of acute exacerbation of chronic obstructive pulmonary disease (COPD), often resulting in a high prevalence rate and a heavy disease burden, while the screening rate remains relatively low, a situation that calls for gradual improvement.
In this paper, we develop a Causal Heterogeneous Graph Representation Learning (CHGRL) method for COPD comorbidity risk prediction that: a) constructs a heterogeneous graph dataset capturing the interactions between patients and diseases; b) builds a causality-aware heterogeneous graph learning architecture, combining causal inference mechanisms with heterogeneous graph learning to support causal learning over different types of relationships; and c) incorporates a causal loss function into the model design, adding a counterfactual reasoning loss and a causal regularization loss on top of the cross-entropy classification loss. We evaluate our method and compare its performance with strong GNN baselines. In our experimental evaluation, the proposed model demonstrates high detection accuracy.
+ oai:arXiv.org:2512.19194v1
cs.LG
Tue, 23 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
Leming Zhou, Zuo Wang, Zhigang Liu
Language Self-Play For Data-Free Training
https://arxiv.org/abs/2509.07414
arXiv:2509.07414v3 Announce Type: replace
Abstract: Large language models (LLMs) have advanced rapidly in recent years, driven by scale, abundant high-quality training data, and reinforcement learning. Yet this progress faces a fundamental bottleneck: the need for ever more data from which models can continue to learn. In this work, we propose a reinforcement learning approach that removes this dependency by enabling models to improve without additional data. Our method leverages a game-theoretic framework of self-play, where a model's capabilities are cast as performance in a competitive game and stronger policies emerge by having the model play against itself, a process we call Language Self-Play (LSP). 
Experiments with Llama-3.2-3B-Instruct on instruction-following, mathematics, and coding benchmarks show that pretrained models can be effectively improved with self-play alone. - oai:arXiv.org:2509.07414v3 + On the Koopman-Based Generalization Bounds for Multi-Task Deep Learning + https://arxiv.org/abs/2512.19199 + arXiv:2512.19199v1 Announce Type: new +Abstract: The paper establishes generalization bounds for multitask deep neural networks using operator-theoretic techniques. The authors propose a tighter bound than those derived from conventional norm based methods by leveraging small condition numbers in the weight matrices and introducing a tailored Sobolev space as an expanded hypothesis space. This enhanced bound remains valid even in single output settings, outperforming existing Koopman based bounds. The resulting framework maintains key advantages such as flexibility and independence from network width, offering a more precise theoretical understanding of multitask deep learning in the context of kernel methods. + oai:arXiv.org:2512.19199v1 + cs.LG cs.AI - cs.CL - cs.GT - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jakub Grudzien Kuba, Mengting Gu, Qi Ma, Yuandong Tian, Vijai Mohan, Jason Chen + Mahdi Mohammadigohari, Giuseppe Di Fatta, Giuseppe Nicosia, Panos M. Pardalos - Zero-shot Hierarchical Plant Segmentation via Foundation Segmentation Models and Text-to-image Attention - https://arxiv.org/abs/2509.09116 - arXiv:2509.09116v3 Announce Type: replace -Abstract: Foundation segmentation models achieve reasonable leaf instance extraction from top-view crop images without training (i.e., zero-shot). However, segmenting entire plant individuals with each consisting of multiple overlapping leaves remains challenging. 
This problem is referred to as a hierarchical segmentation task, typically requiring annotated training datasets, which are often species-specific and require notable human labor. To address this, we introduce ZeroPlantSeg, a zero-shot segmentation for rosette-shaped plant individuals from top-view images. We integrate a foundation segmentation model, extracting leaf instances, and a vision-language model, reasoning about plants' structures to extract plant individuals without additional training. Evaluations on datasets with multiple plant species, growth stages, and shooting environments demonstrate that our method surpasses existing zero-shot methods and achieves better cross-domain performance than supervised methods. Implementations are available at https://github.com/JunhaoXing/ZeroPlantSeg. - oai:arXiv.org:2509.09116v3 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Modular Landfill Remediation for AI Grid Resilience + https://arxiv.org/abs/2512.19202 + arXiv:2512.19202v1 Announce Type: new +Abstract: Rising AI electricity demand and persistent landfill methane emissions constitute coupled constraints on U.S. digital infrastructure and decarbonization. While China has achieved a rapid 'de-landfilling' transition through centralized coordination, the U.S. remains structurally 'locked in' to landfilling due to fragmented governance and carbon accounting incentives. This paper proposes a modular legacy landfill remediation framework to address these dual challenges within U.S. institutional constraints. By treating legacy sites as stock resources, the proposed system integrates excavation, screening, and behind-the-meter combined heat and power (CHP) to transform environmental liabilities into resilience assets. A system analysis of a representative AI corridor demonstrates that such modules can mitigate site-level methane by 60-70% and recover urban land, while supplying approximately 20 MW of firm, islandable power. 
Although contributing only approximately 5% of a hyperscale data center's bulk load, it provides critical microgrid resilience and black-start capability. We conclude that remediation-oriented waste-to-energy should be valued not as a substitute for bulk renewables, but as a strategic control volume for buffering critical loads against grid volatility while resolving long-term environmental liabilities. + oai:arXiv.org:2512.19202v1 + eess.SY + cs.SY + econ.GN + q-fin.EC + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Junhao Xing, Ryohei Miyakawa, Yang Yang, Xinpeng Liu, Risa Shinoda, Hiroaki Santo, Yosuke Toda, Fumio Okura + Qi He, Chunyu Qu - Combinatorial Control Barrier Functions: Nested Boolean and p-choose-r Compositions of Safety Constraints - https://arxiv.org/abs/2509.10716 - arXiv:2509.10716v2 Announce Type: replace -Abstract: This paper investigates the problem of composing multiple control barrier functions (CBFs) -- and matrix control barrier functions (MCBFs) -- through logical and combinatorial operations. Standard CBF formulations naturally enable conjunctive (AND) combinations, but disjunctive (OR) and more general logical structures introduce nonsmoothness and possibly a combinatorial blow-up in the number of logical combinations. We introduce the framework of combinatorial CBFs that addresses p-choose-r safety specifications and their nested composition. The proposed framework ensures safety for the exact safe set in a scalable way, using the original number of primitive constraints. We establish theoretical guarantees on safety under these compositions, and we demonstrate their use on a patrolling problem in a multi-agent system. 
- oai:arXiv.org:2509.10716v2 - eess.SY - cs.SY - math.OC - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Evaluating MCC for Low-Frequency Cyberattack Detection in Imbalanced Intrusion Detection Data + https://arxiv.org/abs/2512.19203 + arXiv:2512.19203v1 Announce Type: new +Abstract: In many real-world network environments, several types of cyberattacks occur at very low rates compared to benign traffic, making them difficult for intrusion detection systems (IDS) to detect reliably. This imbalance causes traditional evaluation metrics, such as accuracy, to often overstate model performance in these conditions, masking failures on minority attack classes that are most important in practice. In this paper, we evaluate a set of base and meta classifiers on low-traffic attacks in the CSE-CIC-IDS2018 dataset and compare their reliability in terms of accuracy and Matthews Correlation Coefficient (MCC). The results show that accuracy consistently inflates performance, while MCC provides a more accurate assessment of a classifier's performance across both majority and minority classes. Meta-classification methods, such as LogitBoost and AdaBoost, demonstrate more effective minority class detection when measured by MCC, revealing trends that accuracy fails to capture. These findings establish the need for imbalance-aware evaluation and make MCC a more trustworthy metric for IDS research involving low-traffic cyberattacks. + oai:arXiv.org:2512.19203v1 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - 10.1109/LCSYS.2025.3640191 - Pio Ong, Haejoon Lee, Tamas G. Molnar, Dimitra Panagou, Aaron D. Ames + Prameshwar Thiyagarajan, Chad A. 
Williams - Machine Learning-Driven Predictive Resource Management in Complex Science Workflows - https://arxiv.org/abs/2509.11512 - arXiv:2509.11512v2 Announce Type: replace -Abstract: The collaborative efforts of large communities in science experiments, often comprising thousands of global members, reflect a monumental commitment to exploration and discovery. Recently, advanced and complex data processing has gained increasing importance in science experiments. Data processing workflows typically consist of multiple intricate steps, and the precise specification of resource requirements is crucial for each step to allocate optimal resources for effective processing. Estimating resource requirements in advance is challenging due to a wide range of analysis scenarios, varying skill levels among community members, and the continuously increasing spectrum of computing options. One practical approach to mitigate these challenges involves initially processing a subset of each step to measure precise resource utilization from actual processing profiles before completing the entire step. While this two-staged approach enables processing on optimal resources for most of the workflow, it has drawbacks such as initial inaccuracies leading to potential failures and suboptimal resource usage, along with overhead from waiting for initial processing completion, which is critical for fast-turnaround analyses. In this context, our study introduces a novel pipeline of machine learning models within a comprehensive workflow management system, the Production and Distributed Analysis (PanDA) system. These models employ advanced machine learning techniques to predict key resource requirements, overcoming challenges posed by limited upfront knowledge of characteristics at each step. 
Accurate forecasts of resource requirements enable informed and proactive decision-making in workflow management, enhancing the efficiency of handling diverse, complex workflows across heterogeneous resources. - oai:arXiv.org:2509.11512v2 - cs.DC - cs.AI + MixKVQ: Query-Aware Mixed-Precision KV Cache Quantization for Long-Context Reasoning + https://arxiv.org/abs/2512.19206 + arXiv:2512.19206v1 Announce Type: new +Abstract: Long Chain-of-Thought (CoT) reasoning has significantly advanced the capabilities of Large Language Models (LLMs), but this progress is accompanied by substantial memory and latency overhead from the extensive Key-Value (KV) cache. Although KV cache quantization is a promising compression technique, existing low-bit quantization methods often exhibit severe performance degradation on complex reasoning tasks. Fixed-precision quantization struggles to handle outlier channels in the key cache, while current mixed-precision strategies fail to accurately identify components requiring high-precision representation. We find that an effective low-bit KV cache quantization strategy must consider two factors: a key channel's intrinsic quantization difficulty and its relevance to the query. Based on this insight, we propose MixKVQ, a novel plug-and-play method that introduces a lightweight, query-aware algorithm to identify and preserve critical key channels that need higher precision, while applying per-token quantization for value cache. Experiments on complex reasoning datasets demonstrate that our approach significantly outperforms existing low-bit methods, achieving performance comparable to a full-precision baseline at a substantially reduced memory footprint. 
+ oai:arXiv.org:2512.19206v1
cs.LG
cs.AI
Tue, 23 Dec 2025 00:00:00 -0500
new
http://creativecommons.org/licenses/by/4.0/
Tao Zhang, Ziqian Zeng, Hao Peng, Huiping Zhuang, Cen Chen
Multi Anatomy X-Ray Foundation Model
https://arxiv.org/abs/2509.12146
arXiv:2509.12146v2 Announce Type: replace
Abstract: X-ray imaging is ubiquitous in radiology, yet most existing AI foundation models are limited to chest anatomy and fail to generalize across broader clinical tasks. In this work, we introduce XR-0, a multi-anatomy X-ray foundation model trained with self-supervised learning on a large, private dataset of 1.15 million images spanning diverse anatomical regions and evaluated across 12 datasets and 20 downstream tasks, including classification, retrieval, segmentation, localization, visual grounding, and report generation. XR-0 achieves state-of-the-art performance on most multi-anatomy tasks and remains competitive on chest-specific benchmarks. Our results demonstrate that anatomical diversity and supervision are critical for building robust, general-purpose medical vision models, paving the way for scalable and adaptable AI systems in radiology. 
- oai:arXiv.org:2509.12146v2 - cs.CV + Observer, Not Player: Simulating Theory of Mind in LLMs through Game Observation + https://arxiv.org/abs/2512.19210 + arXiv:2512.19210v1 Announce Type: new +Abstract: We present an interactive framework for evaluating whether large language models (LLMs) exhibit genuine "understanding" in a simple yet strategic environment. As a running example, we focus on Rock-Paper-Scissors (RPS), which, despite its apparent simplicity, requires sequential reasoning, adaptation, and strategy recognition. Our system positions the LLM as an Observer whose task is to identify which strategies are being played and to articulate the reasoning behind this judgment. The purpose is not to test knowledge of Rock-Paper-Scissors itself, but to probe whether the model can exhibit mind-like reasoning about sequential behavior. To support systematic evaluation, we provide a benchmark consisting of both static strategies and lightweight dynamic strategies specified by well-prompted rules. We quantify alignment between the Observer's predictions and the ground-truth distributions induced by actual strategy pairs using three complementary signals: Cross-Entropy, Brier score, and Expected Value (EV) discrepancy. These metrics are further integrated into a unified score, the Union Loss, which balances calibration, sensitivity, and payoff alignment. Together with a Strategy Identification Rate (SIR) metric, our framework captures not only predictive accuracy but also whether the model can stably identify the latent strategies in play. The demo emphasizes interactivity, transparency, and reproducibility. Users can adjust LLM distributions in real time, visualize losses as they evolve, and directly inspect reasoning snippets to identify where and why failures occur. In doing so, our system provides a practical and interpretable proxy for mind-like inference in sequential games, offering insights into both the strengths and limitations of current LLM reasoning. 
+ oai:arXiv.org:2512.19210v1 cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-sa/4.0/ + Jerry Wang, Ting Yiu Liu + + + InvCoSS: Inversion-driven Continual Self-supervised Learning in Medical Multi-modal Image Pre-training + https://arxiv.org/abs/2512.19213 + arXiv:2512.19213v1 Announce Type: new +Abstract: Continual self-supervised learning (CSSL) in medical imaging trains a foundation model sequentially, alleviating the need for collecting multi-modal images for joint training and offering promising improvements in downstream performance while preserving data privacy. However, most existing methods still rely on replaying data from previous stages to prevent catastrophic forgetting, which compromises privacy and limits their applicability in real-world scenarios where data transfer across sites is often restricted. In this work, we propose InvCoSS, an inversion-driven continual self-supervised learning framework for medical multi-modal image pre-training. Specifically, after training on a previous task, InvCoSS inverts the pre-trained self-supervised model to generate synthetic images that approximate the original training distribution. These synthetic images are then combined with data from the new task for joint optimization, which effectively mitigates catastrophic forgetting while strictly adhering to the constraint of no access to previous real data. Furthermore, to improve the fidelity of synthetic images, we introduce a novel InvUNet with a multi-scale fusion architecture to restore both high- and low-frequency components of the inverted images. To enhance diversity and prevent mode collapse, we design a repulsive representation-learning mechanism that encourages a diverse feature space for synthetic images without class guidance. 
Extensive experiments across nine downstream tasks validate the effectiveness of InvCoSS, achieving performance comparable to or even superior to prior data-replay methods while significantly reducing storage requirements and eliminating data privacy constraints. + oai:arXiv.org:2512.19213v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Nishank Singla, Krisztian Koos, Farzin Haddadpour, Amin Honarmandi Shandiz, Lovish Chum, Xiaojian Xu, Qing Jin, Erhan Bas + Zihao Luo, Shaohao Rui, Zhenyu Tang, Guotai Wang, Xiaosong Wang - Fun-ASR Technical Report - https://arxiv.org/abs/2509.12508 - arXiv:2509.12508v4 Announce Type: replace -Abstract: In recent years, automatic speech recognition (ASR) has witnessed transformative advancements driven by three complementary paradigms: data scaling, model size scaling, and deep integration with large language models (LLMs). However, LLMs are prone to hallucination, which can significantly degrade user experience in real-world ASR applications. In this paper, we present Fun-ASR, a large-scale, LLM-based ASR system that synergistically combines massive data, large model capacity, LLM integration, and reinforcement learning to achieve state-of-the-art performance across diverse and complex speech recognition scenarios. Moreover, Fun-ASR is specifically optimized for practical deployment, with enhancements in streaming capability, noise robustness, code-switching, hotword customization, and satisfying other real-world application requirements. Experimental results show that while most LLM-based ASR systems achieve strong performance on open-source benchmarks, they often underperform on real industry evaluation sets. Thanks to production-oriented optimizations, Fun-ASR achieves state-of-the-art performance on real application datasets, demonstrating its effectiveness and robustness in practical settings. The code and models are accessible at https://github.com/FunAudioLLM/Fun-ASR . 
- oai:arXiv.org:2509.12508v4 - cs.CL - cs.AI - cs.SD - eess.AS - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Keyu An, Yanni Chen, Zhigao Chen, Chong Deng, Zhihao Du, Changfeng Gao, Zhifu Gao, Bo Gong, Xiangang Li, Yabin Li, Ying Liu, Xiang Lv, Yunjie Ji, Yiheng Jiang, Bin Ma, Haoneng Luo, Chongjia Ni, Zexu Pan, Yiping Peng, Zhendong Peng, Peiyao Wang, Hao Wang, Haoxu Wang, Wen Wang, Wupeng Wang, Yuzhong Wu, Biao Tian, Zhentao Tan, Nan Yang, Bin Yuan, Jieping Ye, Jixing Yu, Qinglin Zhang, Kun Zou, Han Zhao, Shengkui Zhao, Jingren Zhou, Yanqiao Zhu + HippMetric: A skeletal-representation-based framework for cross-sectional and longitudinal hippocampal substructural morphometry + https://arxiv.org/abs/2512.19214 + arXiv:2512.19214v1 Announce Type: new +Abstract: Accurate characterization of hippocampal substructure is crucial for detecting subtle structural changes and identifying early neurodegenerative biomarkers. However, high inter-subject variability and complex folding pattern of human hippocampus hinder consistent cross-subject and longitudinal analysis. Most existing approaches rely on subject-specific modelling and lack a stable intrinsic coordinate system to accommodate anatomical variability, which limits their ability to establish reliable inter- and intra-individual correspondence. To address this, we propose HippMetric, a skeletal representation (s-rep)-based framework for hippocampal substructural morphometry and point-wise correspondence across individuals and scans. HippMetric builds on the Axis-Referenced Morphometric Model (ARMM) and employs a deformable skeletal coordinate system aligned with hippocampal anatomy and function, providing a biologically grounded reference for correspondence. 
Our framework comprises two core modules: a skeletal-based coordinate system that respects the hippocampus' conserved longitudinal lamellar architecture, in which functional units (lamellae) are stacked perpendicular to the long-axis, enabling anatomically consistent localization across subjects and time; and individualized s-reps generated through surface reconstruction, deformation, and geometrically constrained spoke refinement, enforcing boundary adherence, orthogonality and non-intersection to produce mathematically valid skeletal geometry. Extensive experiments on two international cohorts demonstrate that HippMetric achieves higher accuracy, reliability, and correspondence stability compared to existing shape models. + oai:arXiv.org:2512.19214v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Na Gao, Chenfei Ye, Yanwu Yang, Anqi Li, Zhengbo He, Li Liang, Zhiyuan Liu, Xingyu Hao, Ting Ma, Tengfei Guo - MatchFixAgent: Language-Agnostic Autonomous Repository-Level Code Translation Validation and Repair - https://arxiv.org/abs/2509.16187 - arXiv:2509.16187v2 Announce Type: replace -Abstract: Code translation transforms source code from one programming language (PL) to another. Validating the functional equivalence of translation and repairing, if necessary, are critical steps in code translation. Existing automated validation and repair approaches struggle to generalize to many PLs due to high engineering overhead, and they rely on existing and often inadequate test suites, which results in false claims of equivalence and ineffective translation repair. We develop MatchFixAgent, a large language model (LLM)-based, PL-agnostic framework for equivalence validation and repair of translations. MatchFixAgent features a multi-agent architecture that divides equivalence validation into several sub-tasks to ensure thorough and consistent semantic analysis of the translation. 
Then it feeds this analysis to the test agent to write and execute tests. Upon observing a test failure, the repair agent attempts to fix the translation bug. The final (in)equivalence decision is made by the verdict agent, considering semantic analyses and test execution results. - We compare MatchFixAgent's validation and repair results with four repository-level code translation techniques. We use 2,219 translation pairs from their artifacts, which cover 6 PL pairs, and are collected from 24 GitHub projects totaling over 900K lines of code. Our results demonstrate that MatchFixAgent produces (in)equivalence verdicts for 99.2% of translation pairs, with the same equivalence validation result as prior work on 72.8% of them. When MatchFixAgent's result disagrees with prior work, we find that 60.7% of the time MatchFixAgent's result is actually correct. In addition, we show that MatchFixAgent can repair 50.6% of inequivalent translations, compared to prior work's 18.5%. This demonstrates that MatchFixAgent is far more adaptable to many PL pairs than prior work, while producing highly accurate validation results. - oai:arXiv.org:2509.16187v2 cs.SE cs.LG Mon, 22 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by-sa/4.0/ Ali Reza Ibrahimzada, Brandon Paulsen, Reyhaneh Jabbarvand, Joey Dodds, Daniel Kroening + Semantically-Equivalent Transformations-Based Backdoor Attacks against Neural Code Models: Characterization and Mitigation + https://arxiv.org/abs/2512.19215 + arXiv:2512.19215v1 Announce Type: new +Abstract: Neural code models have been increasingly incorporated into software development processes. However, their susceptibility to backdoor attacks presents a significant security risk. The state-of-the-art understanding focuses on injection-based attacks, which insert anomalous patterns into software code. These attacks can be neutralized by standard sanitization techniques. This status quo may lead to a false sense of security regarding backdoor attacks. 
In this paper, we introduce a new class of backdoor attacks, dubbed Semantically-Equivalent Transformation (SET)-based backdoor attacks, which use semantics-preserving low-prevalence code transformations to generate stealthy triggers. We propose a framework to guide the generation of such triggers. Our experiments across five tasks, six languages, and models like CodeBERT, CodeT5, and StarCoder show that SET-based attacks achieve high success rates (often >90%) while preserving model utility. The attack proves highly stealthy, evading state-of-the-art defenses with detection rates on average over 25.13% lower than injection-based counterparts. We evaluate normalization-based countermeasures and find they offer only partial mitigation, confirming the attack's robustness. These results motivate further investigation into scalable defenses tailored to SET-based attacks. + oai:arXiv.org:2512.19215v1 cs.SE Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Junyao Ye, Zhen Li, Xi Tang, Shouhuai Xu, Deqing Zou, Zhongsheng Yuan - Temporal Drift in Privacy Recall: Users Misremember From Verbatim Loss to Gist-Based Overexposure - https://arxiv.org/abs/2509.16962 - arXiv:2509.16962v2 Announce Type: replace -Abstract: With social media content traversing different platforms, occasionally resurfacing after periods of time, users are increasingly prone to unintended disclosure resulting from misremembered privacy settings. Context collapse and interface cues are two factors considered by prior researchers, yet we know less about how the passage of time alters recall of past audience settings. Likewise, the design space for mitigating this temporal exposure risk remains underexplored. 
Our work theorizes temporal drift in privacy recall as verbatim memory of prior settings decaying and eventually settling into gist-based heuristics, which more often than not select an audience larger than the original one. Grounded in memory research, contextual integrity, and usable privacy, we examine why such a drift occurs, why it tends to bias toward broader sharing, and how it compounds upon repeat exposure. Following that, we suggest provenance-forward interface schemes and a risk-based evaluation framework that turns recall into recognition. The merit of our work lies in establishing temporal awareness in privacy design as an essential safety rail against inadvertent overexposure. - oai:arXiv.org:2509.16962v2 cs.HC Mon, 22 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by-nc-nd/4.0/ Haoze Guo, Ziqi Wei + Towards Minimal Fine-Tuning of VLMs + https://arxiv.org/abs/2512.19219 + arXiv:2512.19219v1 Announce Type: new +Abstract: We introduce Image-LoRA, a lightweight parameter-efficient fine-tuning (PEFT) recipe for transformer-based vision-language models (VLMs). Image-LoRA applies low-rank adaptation only to the value path of attention layers within the visual-token span, reducing adapter-only training FLOPs roughly in proportion to the visual-token fraction. We further adapt only a subset of attention heads, selected using head influence scores estimated with a rank-1 Image-LoRA, and stabilize per-layer updates via selection-size normalization. Across screen-centric grounding and referring benchmarks spanning text-heavy to image-heavy regimes, Image-LoRA matches or closely approaches standard LoRA accuracy while using fewer trainable parameters and lower adapter-only training FLOPs. The method also preserves the pure-text reasoning performance of VLMs before and after fine-tuning, as further shown on GSM8K. 
+ oai:arXiv.org:2512.19219v1 cs.CV cs.AI Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Tiange Luo, Lajanugen Logeswaran, Jaekyeom Kim, Justin Johnson, Honglak Lee - Quantifying the Impact of Structured Output Format on Large Language Models through Causal Inference - https://arxiv.org/abs/2509.21791 - arXiv:2509.21791v3 Announce Type: replace -Abstract: Structured output from large language models (LLMs) has enhanced efficiency in processing generated information and is increasingly adopted in industrial applications. Prior studies have investigated the impact of structured output on LLMs' generation quality, often presenting one-way findings. Some suggest that structured format enhances completeness and factual accuracy, while others argue that it restricts the reasoning capacity of LLMs and leads to reductions in standard evaluation metrics. Potential limitations of these assessments include restricted testing scenarios, weakly controlled comparative settings, and reliance on coarse metrics. In this work, we present a refined analysis using causal inference. Based on one assumed and two guaranteed constraints, we derive five potential causal structures characterizing the influence of structured output on LLMs' generation: (1) collider without m-bias, (2) collider with m-bias, (3) single cause from instruction, (4) single cause from output format, and (5) independence. Across seven public reasoning tasks and one newly developed task, we find that coarse metrics report positive, negative, or neutral effects of structured output on GPT-4o's generation. However, causal inference reveals no causal impact in 43 out of 48 scenarios. In the remaining 5, 3 involve multifaceted causal structures influenced by concrete instructions. Further experiments show that OpenAI-o3 is more resilient to output formats than general-purpose GPT-4o and GPT-4.1, highlighting an overlooked advantage of reasoning models. 
- oai:arXiv.org:2509.21791v3 cs.CL cs.LG Mon, 22 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by-nc-nd/4.0/ Han Yuan, Yue Zhao, Li Zhang, Wuqiong Luo, Zheng Ma + From Pixels to Predicates: Structuring urban perception with scene graphs + https://arxiv.org/abs/2512.19221 + arXiv:2512.19221v1 Announce Type: new +Abstract: Perception research is increasingly modelled using streetscapes, yet many approaches still rely on pixel features or object co-occurrence statistics, overlooking the explicit relations that shape human perception. This study proposes a three-stage pipeline that transforms street view imagery (SVI) into structured representations for predicting six perceptual indicators. In the first stage, each image is parsed using an open-set Panoptic Scene Graph model (OpenPSG) to extract object-predicate-object triplets. In the second stage, compact scene-level embeddings are learned through a heterogeneous graph autoencoder (GraphMAE). In the third stage, a neural network predicts perception scores from these embeddings. We evaluate the proposed approach against image-only baselines in terms of accuracy, precision, and cross-city generalization. Results indicate that (i) our approach improves perception prediction accuracy by an average of 26% over baseline models, and (ii) maintains strong generalization performance in cross-city prediction tasks. Additionally, the structured representation clarifies which relational patterns contribute to lower perception scores in urban scenes, such as "graffiti on wall" and "car parked on sidewalk". Overall, this study demonstrates that graph-based structure provides expressive, generalizable, and interpretable signals for modelling urban perception, advancing human-centric and context-aware urban analytics. 
+ oai:arXiv.org:2512.19221v1 cs.CV cs.AI Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Yunlong Liu, Shuyang Li, Pengyuan Liu, Yu Zhang, Rudi Stouffs + Phase-space entropy at acquisition reflects downstream learnability + https://arxiv.org/abs/2512.19223 + arXiv:2512.19223v1 Announce Type: new +Abstract: Modern learning systems work with data that vary widely across domains, but they all ultimately depend on how much structure is already present in the measurements before any model is trained. This raises a basic question: is there a general, modality-agnostic way to quantify how acquisition itself preserves or destroys the information that downstream learners could use? Here we propose an acquisition-level scalar $\Delta S_{\mathcal B}$ based on instrument-resolved phase space. Unlike pixelwise distortion or purely spectral errors that often saturate under aggressive undersampling, $\Delta S_{\mathcal B}$ directly quantifies how acquisition mixes or removes joint space--frequency structure at the instrument scale. We show theoretically that $\Delta S_{\mathcal B}$ correctly identifies the phase-space coherence of periodic sampling as the physical source of aliasing, recovering classical sampling-theorem consequences. Empirically, across masked image classification, accelerated MRI, and massive MIMO (including over-the-air measurements), $|\Delta S_{\mathcal B}|$ consistently ranks sampling geometries and predicts downstream reconstruction/recognition difficulty \emph{without training}. In particular, minimizing $|\Delta S_{\mathcal B}|$ enables zero-training selection of variable-density MRI mask parameters that matches designs tuned by conventional pre-reconstruction criteria. These results suggest that phase-space entropy at acquisition reflects downstream learnability, enabling pre-training selection of candidate sampling policies and serving as a shared notion of information preservation across modalities. 
+ oai:arXiv.org:2512.19223v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Han Yuan, Yue Zhao, Li Zhang, Wuqiong Luo, Zheng Ma + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/publicdomain/zero/1.0/ + Xiu-Cheng Wang, Jun-Jie Zhanga, Nan Cheng, Long-Gang Pang, Taijiao Du, Deyu Meng - Can Large Language Models Develop Gambling Addiction? - https://arxiv.org/abs/2509.22818 - arXiv:2509.22818v2 Announce Type: replace -Abstract: This study identifies the specific conditions under which large language models exhibit human-like gambling addiction patterns, providing critical insights into their decision-making mechanisms and AI safety. We analyze LLM decision-making at cognitive-behavioral and neural levels based on human addiction research. In slot machine experiments, we identified cognitive features such as illusion of control and loss chasing, observing that greater autonomy in betting parameters substantially amplified irrational behavior and bankruptcy rates. Neural circuit analysis using a Sparse Autoencoder confirmed that model behavior is controlled by abstract decision-making features related to risk, not merely by prompts. These findings suggest LLMs internalize human-like cognitive biases beyond simply mimicking training data. - oai:arXiv.org:2509.22818v2 + Generation of Programmatic Rules for Document Forgery Detection Using Large Language Models + https://arxiv.org/abs/2512.19228 + arXiv:2512.19228v1 Announce Type: new +Abstract: Document forgery poses a growing threat to legal, economic, and governmental processes, requiring increasingly sophisticated verification mechanisms. One approach involves the use of plausibility checks, rule-based procedures that assess the correctness and internal consistency of data, to detect anomalies or signs of manipulation. 
Although these verification procedures are essential for ensuring data integrity, existing plausibility checks are manually implemented by software engineers, which is time-consuming. Recent advances in code generation with large language models (LLMs) offer new potential for automating and scaling the generation of these checks. However, adapting LLMs to the specific requirements of an unknown domain remains a significant challenge. This work investigates the extent to which LLMs, adapted on domain-specific code and data through different fine-tuning strategies, can generate rule-based plausibility checks for forgery detection on constrained hardware resources. We fine-tune open-source LLMs, Llama 3.1 8B and OpenCoder 8B, on structured datasets derived from real-world application scenarios and evaluate the generated plausibility checks on previously unseen forgery patterns. The results demonstrate that the models are capable of generating executable and effective verification procedures. This also highlights the potential of LLMs as scalable tools to support human decision-making in security-sensitive contexts where comprehensibility is required. + oai:arXiv.org:2512.19228v1 cs.AI - cs.CY - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Seungpil Lee, Donghyeon Shin, Yunjeong Lee, Sundong Kim + Valentin Schmidberger, Manuel Eberhardinger, Setareh Maghsudi, Johannes Maucher - Same Content, Different Representations: A Controlled Study for Table QA - https://arxiv.org/abs/2509.22983 - arXiv:2509.22983v2 Announce Type: replace -Abstract: Table Question Answering (Table QA) in real-world settings must operate over both structured databases and semi-structured tables containing textual fields. However, existing benchmarks are tied to fixed data formats and have not systematically examined how representation itself affects model performance. 
We present the first controlled study that isolates the role of table representation by holding content constant while varying structure. Using a verbalization pipeline, we generate paired structured and semi-structured tables, enabling direct comparisons across modeling paradigms. To support detailed analysis, we introduce RePairTQA, a diagnostic benchmark with splits along table size, join requirements, query complexity, and schema quality. Our experiments reveal consistent trade-offs: SQL-based methods achieve high accuracy on structured inputs but degrade on semi-structured data, LLMs exhibit flexibility but reduced precision, and hybrid approaches strike a balance, particularly under noisy schemas. These effects intensify with larger tables and more complex queries. Ultimately, no single method excels across all conditions, and we highlight the central role of representation in shaping Table QA performance. Our findings provide actionable insights for model selection and design, paving the way for more robust hybrid approaches suited for diverse real-world data formats. - oai:arXiv.org:2509.22983v2 cs.CL Mon, 22 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Yue Zhang, Seiji Maekawa, Nikita Bhutani + Regression generation adversarial network based on dual data evaluation strategy for industrial application + https://arxiv.org/abs/2512.19232 + arXiv:2512.19232v1 Announce Type: new +Abstract: Soft sensing infers hard-to-measure data through a large number of easily obtainable variables. However, in complex industrial scenarios, the issue of insufficient data volume persists, which diminishes the reliability of soft sensing. Generative Adversarial Networks (GANs) are an effective solution for addressing insufficient samples. Nevertheless, traditional GANs fail to account for the mapping relationship between labels and features, which limits further performance improvement. Although some studies have proposed solutions, none have considered both performance and efficiency simultaneously. 
To address these problems, this paper proposes a multi-task learning-based regression GAN framework that integrates regression information into both the discriminator and generator, and implements a shallow sharing mechanism between the discriminator and regressor. This approach significantly enhances the quality of generated samples while improving the algorithm's operational efficiency. Moreover, considering the importance of training samples and generated samples, a dual data evaluation strategy is designed to make the GAN generate more diverse samples, thereby increasing the generalization of subsequent modeling. The superiority of the method is validated through four classic industrial soft sensing cases: wastewater treatment plants, surface water, $CO_2$ absorption towers, and industrial gas turbines. + oai:arXiv.org:2512.19232v1 cs.LG Tue, 23 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Zesen Wang, Yonggang Li, Lijuan Lan - Enhancing Blind Face Restoration through Online Reinforcement Learning - https://arxiv.org/abs/2509.23339 - arXiv:2509.23339v2 Announce Type: replace -Abstract: Blind Face Restoration (BFR) encounters inherent challenges in exploring its large solution space, leading to common artifacts like missing details and identity ambiguity in the restored images. To tackle these challenges, we propose a Likelihood-Regularized Policy Optimization (LRPO) framework, the first to apply online reinforcement learning (RL) to the BFR task. LRPO leverages rewards from sampled candidates to refine the policy network, increasing the likelihood of high-quality outputs while improving restoration performance on low-quality inputs. However, directly applying RL to BFR creates incompatibility issues, producing restoration results that deviate significantly from the ground truth. 
To balance perceptual quality and fidelity, we propose three key strategies: 1) a composite reward function tailored for face restoration assessment, 2) ground-truth guided likelihood regularization, and 3) noise-level advantage assignment. Extensive experiments demonstrate that our proposed LRPO significantly improves the face restoration quality over baseline methods and achieves state-of-the-art performance. - oai:arXiv.org:2509.23339v2 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace + DeliveryBench: Can Agents Earn Profit in Real World? + https://arxiv.org/abs/2512.19234 + arXiv:2512.19234v1 Announce Type: new +Abstract: LLMs and VLMs are increasingly deployed as embodied agents, yet existing benchmarks largely revolve around simple short-term tasks and struggle to capture rich realistic constraints that shape real-world decision making. To close this gap, we propose DeliveryBench, a city-scale embodied benchmark grounded in the real-world profession of food delivery. Food couriers naturally operate under long-horizon objectives (maximizing net profit over hours) while managing diverse constraints, e.g., delivery deadline, transportation expense, vehicle battery, and necessary interactions with other couriers and customers. DeliveryBench instantiates this setting in procedurally generated 3D cities with diverse road networks, buildings, functional locations, transportation modes, and realistic resource dynamics, enabling systematic evaluation of constraint-aware, long-horizon planning. We benchmark a range of VLM-based agents across nine cities and compare them with human players. Our results reveal a substantial performance gap to humans, and find that these agents are short-sighted and frequently break basic commonsense constraints. Additionally, we observe distinct personalities across models (e.g., adventurous GPT-5 vs. 
conservative Claude), highlighting both the brittleness and the diversity of current VLM-based embodied agents in realistic, constraint-dense environments. Our code, data, and benchmark are available at https://deliverybench.github.io. + oai:arXiv.org:2512.19234v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Bin Wu, Yahui Liu, Chi Zhang, Yao Zhao, Wei Wang - - - Multi-Order Runge-Kutta Methods or how to numerically solve initial value problems of any order - https://arxiv.org/abs/2509.23513 - arXiv:2509.23513v3 Announce Type: replace -Abstract: When one wishes to numerically solve an initial value problem, it is customary to rewrite it as an equivalent first-order system to which a method, usually from the class of Runge-Kutta methods, is applied. Directly treating higher-order initial value problems without such rewriting, however, allows for significantly greater accuracy. We therefore introduce a new generalization of Runge-Kutta methods, called multi-order Runge-Kutta methods, designed to solve initial value problems of arbitrary order. We establish fundamental properties of these methods, including convergence, order of consistency, and linear stability. We also analyze the structure of the system satisfied by the approximations of a method, which enables us to provide a proper definition of explicit methods and to gain a finer understanding of implicit methods. 
- oai:arXiv.org:2509.23513v3 - math.NA - cs.NA - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Loris Petronijevic + Lingjun Mao, Jiawei Ren, Kun Zhou, Jixuan Chen, Ziqiao Ma, Lianhui Qin - BugMagnifier: TON Transaction Simulator for Revealing Smart Contract Vulnerabilities - https://arxiv.org/abs/2509.24444 - arXiv:2509.24444v2 Announce Type: replace -Abstract: The Open Network (TON) blockchain employs an asynchronous execution model that introduces unique security challenges for smart contracts, particularly race conditions arising from unpredictable message processing order. While previous work established vulnerability patterns through static analysis of audit reports, dynamic detection of temporal dependencies through systematic testing remains an open problem. We present BugMagnifier, a transaction simulation framework that systematically reveals vulnerabilities in TON smart contracts through controlled message orchestration. Built atop TON Sandbox and integrated with the TON Virtual Machine (TVM), our tool combines precise message queue manipulation with differential state analysis and probabilistic permutation testing to detect asynchronous execution flaws. Experimental evaluation demonstrates BugMagnifier's effectiveness through extensive parametric studies on purpose-built vulnerable contracts, revealing message ratio-dependent detection complexity that aligns with theoretical predictions. This quantitative model enables predictive vulnerability assessment while shifting discovery from manual expert analysis to automated evidence generation. By providing reproducible test scenarios for temporal vulnerabilities, BugMagnifier addresses a critical gap in the TON security tooling, offering practical support for safer smart contract development in asynchronous blockchain environments. 
- oai:arXiv.org:2509.24444v2 - cs.CR - cs.DC - Mon, 22 Dec 2025 00:00:00 -0500 - replace + A Reverse Reachable Set Based Approach for Motif Oriented Profit maximization in Social Networks + https://arxiv.org/abs/2512.19237 + arXiv:2512.19237v1 Announce Type: new +Abstract: Profit Maximization is one of the key objectives for social media marketing, where the task is to choose a limited number of highly influential nodes such that their initial activation leads to maximum profit. In this paper, we introduce a variant of the Profit Maximization Problem where we consider that instead of nodes, benefits are assigned to some of the motifs of the graph, and these benefit values can be earned once a given threshold count of nodes from the motifs is influenced. The goal here is to choose a limited number of nodes for initial activation called seed nodes such that the motif-oriented profit gets maximized. Formally, we call our problem the Motif Oriented Profit Maximization Problem. We show that the problem is NP-hard to solve optimally. We propose a Reverse Reachable Set-based framework to solve our problem. The proposed methodology broadly divides into three steps: KPT Estimation and RR Set generation, Seed Set Selection, and Motif Oriented Profit Estimation. The proposed methodology has been analyzed to understand its time and space requirements. It has been implemented with real-world social network datasets, and the results are reported. We observe that the seed set selected by the proposed solution approaches leads to more profit compared to the seed sets selected by the existing methods. The whole implementation and data are available at: https://github.com/PoonamSharma-PY/MotifProfit. 
+ oai:arXiv.org:2512.19237v1 + cs.SI + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Yury Yanovich, Victoria Kovalevskaya, Maksim Egorov, Elizaveta Smirnova, Matvey Mishuris, Yash Madhwal, Kirill Ziborov, Vladimir Gorgadze, Subodh Sharma + Poonam Sharma, Suman Banerjee - From $f(x)$ and $g(x)$ to $f(g(x))$: LLMs Learn New Skills in RL by Composing Old Ones - https://arxiv.org/abs/2509.25123 - arXiv:2509.25123v3 Announce Type: replace -Abstract: Does RL teach LLMs genuinely new skills, or does it merely activate existing ones? This question lies at the core of ongoing debates about the role of RL in LLM post-training. On one side, strong empirical results can be achieved with RL even without preceding supervised finetuning; on the other, critics argue that RL contributes little beyond reweighting existing reasoning strategies. This work provides concrete evidence that LLMs can acquire genuinely new skills during RL by composing existing ones, mirroring one of the central mechanisms by which humans acquire new cognitive skills. To mitigate data contamination and other confounding factors, and to allow precise control over task complexity, we develop a synthetic framework for our investigation. Specifically, we define a skill as the ability to infer the output of a string transformation function f(x) given x. When an LLM has already learned f and g prior to RL, our experiments reveal that RL enables it to learn unseen compositions of them h(x)=g(f(x)). Further, this compositional ability generalizes to more difficult problems such as compositions of >2 functions unseen during RL training. Surprisingly, our experiments show that compositional skill acquired on a source task transfers to a different target task. This transfer happens even without compositional training on the target, requiring only prior knowledge of the target's atomic skills. 
Our qualitative analysis shows that RL fundamentally changes the reasoning behaviors of the models. In contrast, next-token training with the same data yields none of these findings. Our systematic experiments provide fresh insights into LLM learning, suggesting the value of first building base models with basic skills, then using RL to incentivize advanced, generalizable skills for complex problems. - oai:arXiv.org:2509.25123v3 cs.AI cs.CL Mon, 22 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by-nc-sa/4.0/ Lifan Yuan, Weize Chen, Yuchen Zhang, Ganqu Cui, Hanbin Wang, Ziming You, Ning Ding, Zhiyuan Liu, Maosong Sun, Hao Peng + Identifying Features Associated with Bias Against 93 Stigmatized Groups in Language Models and Guardrail Model Safety Mitigation + https://arxiv.org/abs/2512.19238 + arXiv:2512.19238v1 Announce Type: new +Abstract: Large language models (LLMs) have been shown to exhibit social bias; however, bias towards non-protected stigmatized identities remains understudied. Furthermore, what social features of stigmas are associated with bias in LLM outputs is unknown. From psychology literature, it has been shown that stigmas contain six shared social features: aesthetics, concealability, course, disruptiveness, origin, and peril. In this study, we investigate if human and LLM ratings of the features of stigmas, along with prompt style and type of stigma, have an effect on bias towards stigmatized groups in LLM outputs. We measure bias against 93 stigmatized groups across three widely used LLMs (Granite 3.0-8B, Llama-3.1-8B, Mistral-7B) using SocialStigmaQA, a benchmark that includes 37 social scenarios about stigmatized identities; for example, deciding whether to recommend them for an internship. We find that stigmas rated by humans to be highly perilous (e.g., being a gang member or having HIV) have the most biased outputs from SocialStigmaQA prompts (60% of outputs from all models) while sociodemographic stigmas (e.g. Asian-American or old age) have the least amount of biased outputs (11%). 
We test if the amount of biased outputs could be decreased by using guardrail models, models meant to identify harmful input, using each LLM's respective guardrail model (Granite Guardian 3.0, Llama Guard 3.0, Mistral Moderation API). We find that bias decreases significantly by 10.4%, 1.4%, and 7.8%, respectively. However, we show that features with significant effect on bias remain unchanged post-mitigation and that guardrail models often fail to recognize the intent of bias in prompts. This work has implications for using LLMs in scenarios involving stigmatized groups and we suggest future work towards improving guardrail models for bias mitigation. + oai:arXiv.org:2512.19238v1 cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by-nc-sa/4.0/ - Lifan Yuan, Weize Chen, Yuchen Zhang, Ganqu Cui, Hanbin Wang, Ziming You, Ning Ding, Zhiyuan Liu, Maosong Sun, Hao Peng + Anna-Maria Gueorguieva, Aylin Caliskan - Data-Free Continual Learning of Server Models in Model-Heterogeneous Cloud-Device Collaboration - https://arxiv.org/abs/2509.25977 - arXiv:2509.25977v2 Announce Type: replace -Abstract: The rise of cloud-device collaborative computing has enabled intelligent services to be delivered across distributed edge devices while leveraging centralized cloud resources. In this paradigm, federated learning (FL) has become a key enabler for privacy-preserving model training without transferring raw data from edge devices to the cloud. However, with the continuous emergence of new data and increasing model diversity, traditional federated learning faces significant challenges, including inherent issues of data heterogeneity, model heterogeneity and catastrophic forgetting, along with new challenge of knowledge misalignment. In this study, we introduce FedDCL, a novel framework designed to enable data-free continual learning of the server model in a model-heterogeneous federated setting. 
We leverage pre-trained diffusion models to extract lightweight class-specific prototypes, which confer a threefold data-free advantage, enabling: (1) generation of synthetic data for the current task to augment training and counteract non-IID data distributions; (2) exemplar-free generative replay for retaining knowledge from previous tasks; and (3) data-free dynamic knowledge transfer from heterogeneous devices to the cloud server. Experimental results on various datasets demonstrate the effectiveness of FedDCL, showcasing its potential to enhance the generalizability and practical applicability of federated cloud-device collaboration in dynamic settings. - oai:arXiv.org:2509.25977v2 cs.LG cs.AI Mon, 22 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Xiao Zhang, Zengzhe Chen, Yuan Yuan, Yifei Zou, Fuzhen Zhuang, Wenyu Jiao, Yuke Wang, Dongxiao Yu + ChemATP: A Training-Free Chemical Reasoning Framework for Large Language Models + https://arxiv.org/abs/2512.19240 + arXiv:2512.19240v1 Announce Type: new +Abstract: Large Language Models (LLMs) exhibit strong general reasoning but struggle in molecular science due to the lack of explicit chemical priors in standard string representations. Current solutions face a fundamental dilemma. Training-based methods inject priors into parameters, but this static coupling hinders rapid knowledge updates and often compromises the model's general reasoning capabilities. Conversely, existing training-free methods avoid these issues but rely on surface-level prompting, failing to provide the fine-grained atom-level priors essential for precise chemical reasoning. To address this issue, we introduce ChemATP, a framework that decouples chemical knowledge from the reasoning engine. By constructing the first atom-level textual knowledge base, ChemATP enables frozen LLMs to explicitly retrieve and reason over this information dynamically. This architecture ensures interpretability and adaptability while preserving the LLM's intrinsic general intelligence. 
Experiments show that ChemATP significantly outperforms training-free baselines and rivals state-of-the-art training-based models, demonstrating that explicit prior injection is a competitive alternative to implicit parameter updates. + oai:arXiv.org:2512.19240v1 + cs.CL cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Mingxu Zhang, Dazhong Shen, Qi Zhang, Ying Sun + + + VisionDirector: Vision-Language Guided Closed-Loop Refinement for Generative Image Synthesis + https://arxiv.org/abs/2512.19243 + arXiv:2512.19243v1 Announce Type: new +Abstract: Generative models can now produce photorealistic imagery, yet they still struggle with the long, multi-goal prompts that professional designers issue. To expose this gap and better evaluate models' performance in real-world settings, we introduce Long Goal Bench (LGBench), a 2,000-task suite (1,000 T2I and 1,000 I2I) whose average instruction contains 18 to 22 tightly coupled goals spanning global layout, local object placement, typography, and logo fidelity. We find that even state-of-the-art models satisfy fewer than 72 percent of the goals and routinely miss localized edits, confirming the brittleness of current pipelines. To address this, we present VisionDirector, a training-free vision-language supervisor that (i) extracts structured goals from long instructions, (ii) dynamically decides between one-shot generation and staged edits, (iii) runs micro-grid sampling with semantic verification and rollback after every edit, and (iv) logs goal-level rewards. We further fine-tune the planner with Group Relative Policy Optimization, yielding shorter edit trajectories (3.1 versus 4.2 steps) and stronger alignment. VisionDirector achieves new state of the art on GenEval (plus 7 percent overall) and ImgEdit (plus 0.07 absolute) while producing consistent qualitative improvements on typography, multi-object scenes, and pose editing. 
+ oai:arXiv.org:2512.19243v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiao Zhang, Zengzhe Chen, Yuan Yuan, Yifei Zou, Fuzhen Zhuang, Wenyu Jiao, Yuke Wang, Dongxiao Yu + Meng Chu, Senqiao Yang, Haoxuan Che, Suiyun Zhang, Xichen Zhang, Shaozuo Yu, Haokun Gui, Zhefan Rao, Dandan Tu, Rui Liu, Jiaya Jia - Fine-Tuning Masked Diffusion for Provable Self-Correction - https://arxiv.org/abs/2510.01384 - arXiv:2510.01384v3 Announce Type: replace -Abstract: A natural desideratum for generative models is self-correction--detecting and revising low-quality tokens at inference. While Masked Diffusion Models (MDMs) have emerged as a promising approach for generative modeling in discrete spaces, their capacity for self-correction remains poorly understood. Prior attempts to incorporate self-correction into MDMs either require overhauling MDM architectures/training or rely on imprecise proxies for token quality, limiting their applicability. Motivated by this, we introduce PRISM--Plug-in Remasking for Inference-time Self-correction of Masked Diffusions--a lightweight, model-agnostic approach that applies to any pretrained MDM. Theoretically, PRISM defines a self-correction loss that provably learns per-token quality scores, without RL or a verifier. These quality scores are computed in the same forward pass with MDM and used to detect low-quality tokens. Empirically, PRISM advances MDM inference across domains and scales: Sudoku; unconditional text (170M); and code with LLaDA (8B). - oai:arXiv.org:2510.01384v3 + Vision-Aided Relative State Estimation for Approach and Landing on a Moving Platform with Inertial Measurements + https://arxiv.org/abs/2512.19245 + arXiv:2512.19245v1 Announce Type: new +Abstract: This paper tackles the problem of estimating the relative position, orientation, and velocity between a UAV and a planar platform undergoing arbitrary 3D motion during approach and landing. 
The estimation relies on measurements from Inertial Measurement Units (IMUs) mounted on both systems, assuming there is a suitable communication channel to exchange data, together with visual information provided by an onboard monocular camera, from which the bearing (line-of-sight direction) to the platform's center and the normal vector of its planar surface are extracted. We propose a cascade observer with a complementary filter on SO(3) to reconstruct the relative attitude, followed by a linear Riccati observer for relative position and velocity estimation. Convergence of both observers is established under persistently exciting conditions, and the cascade is shown to be almost globally asymptotically and locally exponentially stable. We further extend the design to the case where the platform's rotation is restricted to its normal axis and show that its measured linear acceleration can be exploited to recover the remaining unobservable rotation angle. A sufficient condition to ensure local exponential convergence in this setting is provided. The performance of the proposed observers is validated through extensive simulations. + oai:arXiv.org:2512.19245v1 + eess.SY + cs.RO + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Tarek Bouazza, Alessandro Melis, Soulaimane Berkane, Robert Mahony, Tarek Hamel + + + From Black-Box Tuning to Guided Optimization via Hyperparameters Interaction Analysis + https://arxiv.org/abs/2512.19246 + arXiv:2512.19246v1 Announce Type: new +Abstract: Hyperparameters tuning is a fundamental, yet computationally expensive, step in optimizing machine learning models. Beyond optimization, understanding the relative importance and interaction of hyperparameters is critical to efficient model development. 
In this paper, we introduce MetaSHAP, a scalable semi-automated eXplainable AI (XAI) method that uses meta-learning and Shapley value analysis to provide actionable and dataset-aware tuning insights. MetaSHAP operates over a vast benchmark of over 9 million evaluated machine learning pipelines, allowing it to produce interpretable importance scores and actionable tuning insights that reveal how much each hyperparameter matters, how it interacts with others, and in which value ranges its influence is concentrated. For a given algorithm and dataset, MetaSHAP learns a surrogate performance model from historical configurations, computes hyperparameter interactions using SHAP-based analysis, and derives interpretable tuning ranges from the most influential hyperparameters. This allows practitioners not only to prioritize which hyperparameters to tune, but also to understand their directionality and interactions. We empirically validate MetaSHAP on a diverse benchmark of 164 classification datasets and 14 classifiers, demonstrating that it produces reliable importance rankings and competitive performance when used to guide Bayesian optimization.
 + oai:arXiv.org:2512.19246v1
 cs.LG
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 10.1109/ICTAI66417.2025.00056
 2025 IEEE 37th International Conference on Tools with Artificial Intelligence (ICTAI)
 Moncef Garouani, Ayah Barhrhouj
 
 
 Refinement-based Christoffel sampling for least squares approximation in non-orthogonal bases
 https://arxiv.org/abs/2510.08461
 arXiv:2510.08461v2 Announce Type: replace
Abstract: We introduce a refinement-based Christoffel sampling (RCS) algorithm for least squares approximation in the span of a given, generally non-orthogonal set of functions $\Phi_n = \{\phi_1, \dots, \phi_n\}$. 
A standard sampling strategy for this problem is Christoffel sampling, which achieves near-best approximations in probability using only $\mathcal{O}(n \log(n))$ samples. However, it requires i.i.d. sampling from a distribution whose density is proportional to the inverse Christoffel function $k_n$, the computation of which requires an orthonormal basis. As a result, existing approaches for non-orthogonal bases $\Phi_n$ typically rely on costly discrete orthogonalization. We propose a new iterative algorithm, inspired by recent advances in approximate leverage score sampling, that avoids this bottleneck. Crucially, while the computational cost of discrete orthogonalization grows proportionally with $\|k_n\|_{L^\infty(X)}$, the cost of our approach increases only logarithmically in $\|k_n\|_{L^\infty(X)}$. In addition, we account for finite-precision effects by considering a numerical variant of the Christoffel function, ensuring that the algorithm relies only on computable quantities. Alongside a convergence proof, we present extensive numerical experiments demonstrating the efficiency and robustness of the proposed method. - oai:arXiv.org:2510.08461v2 - math.NA - cs.NA - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Auto-Prompting with Retrieval Guidance for Frame Detection in Logistics + https://arxiv.org/abs/2512.19247 + arXiv:2512.19247v1 Announce Type: new +Abstract: Prompt engineering plays a critical role in adapting large language models (LLMs) to complex reasoning and labeling tasks without the need for extensive fine-tuning. In this paper, we propose a novel prompt optimization pipeline for frame detection in logistics texts, combining retrieval-augmented generation (RAG), few-shot prompting, chain-of-thought (CoT) reasoning, and automatic CoT synthesis (Auto-CoT) to generate highly effective task-specific prompts. 
Central to our approach is an LLM-based prompt optimizer agent that iteratively refines the prompts using retrieved examples, performance feedback, and internal self-evaluation. Our framework is evaluated on a real-world logistics text annotation task, where reasoning accuracy and labeling efficiency are critical. Experimental results show that the optimized prompts - particularly those enhanced via Auto-CoT and RAG - improve real-world inference accuracy by up to 15% compared to baseline zero-shot or static prompts. The system demonstrates consistent improvements across multiple LLMs, including GPT-4o, Qwen 2.5 (72B), and LLaMA 3.1 (70B), validating its generalizability and practical value. These findings suggest that structured prompt optimization is a viable alternative to full fine-tuning, offering scalable solutions for deploying LLMs in domain-specific NLP applications such as logistics. + oai:arXiv.org:2512.19247v1 + cs.CL + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Astrid Herremans, Ben Adcock + Do Minh Duc, Quan Xuan Truong, Nguyen Tat Dat, Nguyen Van Vinh - A Neural Surrogate-Enhanced Multi-Method Framework for Robust Wing Design Optimization - https://arxiv.org/abs/2510.08582 - arXiv:2510.08582v3 Announce Type: replace -Abstract: This paper introduces a modular and scalable design optimization framework for the wing design process that enables faster early-phase design while ensuring aerodynamic stability. The pipeline starts with the generation of initial wing geometries and then proceeds to optimize the wing using several algorithms. Aerodynamic performance is assessed using a Vortex Lattice Method (VLM) applied to a carefully selected dataset of wing configurations. These results are employed to develop surrogate neural network models, which can predict lift and drag rapidly and accurately. 
The stability evaluation is implemented by setting the control surfaces and components to fixed positions in order to reflect realistic flight dynamics. The approach unifies and compares several optimization techniques, including Particle Swarm Optimization (PSO), Genetic Algorithms (GA), gradient-based MultiStart methods, Bayesian optimization, and Lipschitz optimization. Each method ensures constraint management via adaptive strategies and penalty functions, where the targets for lift and design feasibility are enforced. The progression of aerodynamic characteristics and geometries over the optimization iterations is investigated to clarify each algorithm's convergence behaviour and performance efficiency. Our results show improvements in aerodynamic qualities and robust stability properties, offering a mechanism for fast and precise wing design. In the interest of reproducibility and community development, the complete implementation is publicly available on GitHub.
 - oai:arXiv.org:2510.08582v3
 cs.NE
 math.OC
 Mon, 22 Dec 2025 00:00:00 -0500
 replace
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Arash Fath Lipaei, Melika Sabzikari
 
 
 Small Language Models as Compiler Experts: Auto-Parallelization for Heterogeneous Systems
 https://arxiv.org/abs/2512.19250
 arXiv:2512.19250v1 Announce Type: new
Abstract: Traditional auto-parallelizing compilers, reliant on rigid heuristics, struggle with the complexity of modern heterogeneous systems. This paper presents a comprehensive evaluation of small (approximately 1B parameter) language-model-driven compiler auto-parallelization. We evaluate three models: gemma3, llama3.2, and qwen2.5, using six reasoning strategies across 11 real-world kernels drawn from scientific computing, graph algorithms, and machine learning. Our system is benchmarked against strong compiler baselines, including LLVM Polly, TVM, and Triton. 
Across 376 total evaluations, the proposed approach achieves an average speedup of 6.81x and a peak performance of 43.25x on convolution operations. We analyze scalability, verify correctness using multiple sanitizers, and confirm robustness across diverse compilers and hardware platforms. Our results demonstrate that small, efficient language models can serve as powerful reasoning engines for complex compiler optimization tasks. + oai:arXiv.org:2512.19250v1 + cs.LG + cs.PL + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Prathamesh Devadiga - A Generic Machine Learning Framework for Radio Frequency Fingerprinting - https://arxiv.org/abs/2510.09775 - arXiv:2510.09775v3 Announce Type: replace -Abstract: Fingerprinting radio frequency (RF) emitters typically involves finding unique characteristics that are featured in their received signal. These fingerprints are nuanced, but sufficiently detailed, motivating the pursuit of methods that can successfully extract them. The downstream task that requires the most meticulous RF fingerprinting (RFF) is known as specific emitter identification (SEI), which entails recognising each individual transmitter. RFF and SEI have a long history, with numerous defence and civilian applications such as signal intelligence, electronic surveillance, physical-layer authentication of wireless devices, to name a few. In recent years, data-driven RFF approaches have become popular due to their ability to automatically learn intricate fingerprints. They generally deliver superior performance when compared to traditional RFF techniques that are often labour-intensive, inflexible, and only applicable to a particular emitter type or transmission scheme. In this paper, we present a generic and versatile machine learning (ML) framework for data-driven RFF with several popular downstream tasks such as SEI, data association (EDA) and RF emitter clustering (RFEC). It is emitter-type agnostic. 
We then demonstrate the introduced framework for several tasks using real RF datasets for spaceborne surveillance, signal intelligence, and counter-drone applications.
 - oai:arXiv.org:2510.09775v3
 cs.LG
 cs.CR
 stat.ML
 Mon, 22 Dec 2025 00:00:00 -0500
 replace
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Alex Hiles, Bashar I. 
Ahmad
 
 
 G2L: From Giga-Scale to Cancer-Specific Large-Scale Pathology Foundation Models via Knowledge Distillation
 https://arxiv.org/abs/2510.11176
 arXiv:2510.11176v2 Announce Type: replace
Abstract: Recent studies in pathology foundation models have shown that scaling training data, diversifying cancer types, and increasing model size consistently improve their performance. However, giga-scale foundation models, which are trained on hundreds of thousands of slides covering tens of cancer types and contain billions of parameters, pose significant challenges for practical use due to their tremendous computational costs in both development and deployment. In this work, we present a novel strategy, named the G2L framework, to increase the performance of large-scale foundation models, which consist of only $15\%$ of the parameters of giga-scale models, to a comparable performance level of giga-scale models in cancer-specific tasks. Our approach applies knowledge distillation, transferring the capabilities of a giga-scale model to a large-scale model, using just 1K pathology slides of a target cancer (e.g., breast, prostate, etc.). The resulting distilled model not only outperformed state-of-the-art models of the same size (i.e., large-scale) across several benchmarks but also, interestingly, surpassed the giga-scale teacher and huge-scale models in some benchmarks. In addition, the distilled model exhibited a higher robustness index, indicating improved resilience to image variations originating from multiple institutions. These findings suggest that the proposed distillation approach for a large-scale model is a data- and parameter-efficient way to achieve giga-scale-level performance for cancer-specific applications without prohibitive computational burden. 
- oai:arXiv.org:2510.11176v2
 cs.CV
 cs.AI
 Mon, 22 Dec 2025 00:00:00 -0500
 replace
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Yesung Cho, Sungmin Lee, Geongyu Lee, Minkyung Lee, Jongbae Park, Dongmyung Shin
 
 
 A Second-Order SpikingSSM for Wearables
 https://arxiv.org/abs/2510.14386
 arXiv:2510.14386v2 Announce Type: replace
Abstract: Spiking neural networks have garnered increasing attention due to their energy efficiency, multiplication-free computation, and sparse event-based processing. In parallel, state space models have emerged as scalable alternatives to transformers for long-range sequence modelling by avoiding quadratic dependence on sequence length. 
We propose SHaRe-SSM (Spiking Harmonic Resonate-and-Fire State Space Model), a second-order spiking SSM for classification and regression on ultra-long sequences. SHaRe-SSM outperforms transformers and first-order SSMs on average while eliminating matrix multiplications, making it highly suitable for resource-constrained applications. To ensure fast computation over tens of thousands of time steps, we leverage a parallel scan formulation of the underlying dynamical system. Furthermore, we introduce a kernel-based spiking regressor, which enables the accurate modelling of dependencies in sequences of up to 50k steps. Our results demonstrate that SHaRe-SSM achieves superior long-range modelling capability with energy efficiency (52.1x less energy than an ANN-based second-order SSM), positioning it as a strong candidate for resource-constrained devices such as wearables.
 - oai:arXiv.org:2510.14386v2
 cs.LG
 cs.NE
 Mon, 22 Dec 2025 00:00:00 -0500
 replace
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Kartikay Agrawal, Abhijeet Vikram, Vedant Sharma, Vaishnavi Nagabhushana, Ayon Borthakur
 
 
 Translating Flow to Policy via Hindsight Online Imitation
 https://arxiv.org/abs/2512.19269
 arXiv:2512.19269v1 Announce Type: new
Abstract: Recent advances in hierarchical robot systems leverage a high-level planner to propose task plans and a low-level policy to generate robot actions. This design allows training the planner on action-free or even non-robot data sources (e.g., videos), providing transferable high-level guidance. Nevertheless, grounding these high-level plans into executable actions remains challenging, especially with the limited availability of high-quality robot data. To this end, we propose to improve the low-level policy through online interactions. Specifically, our approach collects online rollouts, retrospectively annotates the corresponding high-level goals from achieved outcomes, and aggregates these hindsight-relabeled experiences to update a goal-conditioned imitation policy. Our method, Hindsight Flow-conditioned Online Imitation (HinFlow), instantiates this idea with 2D point flows as the high-level planner. 
Across diverse manipulation tasks in both simulation and physical world, our method achieves more than $2\times$ performance improvement over the base policy, significantly outperforming the existing methods. Moreover, our framework enables policy acquisition from planners trained on cross-embodiment video data, demonstrating its potential for scalable and transferable robot learning. + oai:arXiv.org:2512.19269v1 + cs.RO cs.LG - cs.NE - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Kartikay Agrawal, Abhijeet Vikram, Vedant Sharma, Vaishnavi Nagabhushana, Ayon Borthakur + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Yitian Zheng, Zhangchen Ye, Weijun Dong, Shengjie Wang, Yuyang Liu, Chongjie Zhang, Chuan Wen, Yang Gao - Helmsman: Autonomous Synthesis of Federated Learning Systems via Collaborative LLM Agents - https://arxiv.org/abs/2510.14512 - arXiv:2510.14512v2 Announce Type: replace -Abstract: Federated Learning (FL) offers a powerful paradigm for training models on decentralized data, but its promise is often undermined by the immense complexity of designing and deploying robust systems. The need to select, combine, and tune strategies for multifaceted challenges like data heterogeneity and system constraints has become a critical bottleneck, resulting in brittle, bespoke solutions. To address this, we introduce Helmsman, a novel multi-agent system that automates the end-to-end synthesis of federated learning systems from high-level user specifications. It emulates a principled research and development workflow through three collaborative phases: (1) interactive human-in-the-loop planning to formulate a sound research plan, (2) modular code generation by supervised agent teams, and (3) a closed-loop of autonomous evaluation and refinement in a sandboxed simulation environment. 
To facilitate rigorous evaluation, we also introduce AgentFL-Bench, a new benchmark comprising 16 diverse tasks designed to assess the system-level generation capabilities of agentic systems in FL. Extensive experiments demonstrate that our approach generates solutions competitive with, and often superior to, established hand-crafted baselines. Our work represents a significant step towards the automated engineering of complex decentralized AI systems. - oai:arXiv.org:2510.14512v2 - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Are All Data Necessary? Efficient Data Pruning for Large-scale Autonomous Driving Dataset via Trajectory Entropy Maximization + https://arxiv.org/abs/2512.19270 + arXiv:2512.19270v1 Announce Type: new +Abstract: Collecting large-scale naturalistic driving data is essential for training robust autonomous driving planners. However, real-world datasets often contain a substantial amount of repetitive and low-value samples, which lead to excessive storage costs and bring limited benefits to policy learning. To address this issue, we propose an information-theoretic data pruning method that effectively reduces the training data volume without compromising model performance. Our approach evaluates the trajectory distribution information entropy of driving data and iteratively selects high-value samples that preserve the statistical characteristics of the original dataset in a model-agnostic manner. From a theoretical perspective, we show that maximizing trajectory entropy effectively constrains the Kullback-Leibler divergence between the pruned subset and the original data distribution, thereby maintaining generalization ability. Comprehensive experiments on the NuPlan benchmark with a large-scale imitation learning framework demonstrate that the proposed method can reduce the dataset size by up to 40% while maintaining closed-loop performance. 
This work provides a lightweight and theoretically grounded approach for scalable data management and efficient policy learning in autonomous driving systems. + oai:arXiv.org:2512.19270v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Haoyuan Li, Mathias Funk, Aaqib Saeed + Zhaoyang Liu, Weitao Zhou, Junze Wen, Cheng Jing, Qian Cheng, Kun Jiang, Diange Yang - Tree-Like Shortcuttings of Trees - https://arxiv.org/abs/2510.14918 - arXiv:2510.14918v2 Announce Type: replace -Abstract: Sparse shortcuttings of trees -- equivalently, sparse 1-spanners for tree metrics with bounded hop-diameter -- have been studied extensively (under different names and settings), since the pioneering works of [Yao82, Cha87, AS87, BTS94], initially motivated by applications to range queries, online tree product, and MST verification, to name a few. These constructions were also lifted from trees to other graph families using known low-distortion embedding results. The works of [Yao82, Cha87, AS87, BTS94] establish a tight tradeoff between hop-diameter and sparsity (or average degree) for tree shortcuttings and imply constant-hop shortcuttings for $n$-node trees with sparsity $O(\log^* n)$. Despite their small sparsity, all known constant-hop shortcuttings contain dense subgraphs (of sparsity $\Omega(\log n)$), which is a significant drawback for many applications. - We initiate a systematic study of constant-hop tree shortcuttings that are ``tree-like''. We focus on two well-studied graph parameters that measure how far a graph is from a tree: arboricity and treewidth. Our contribution is twofold. - * New upper and lower bounds for tree-like shortcuttings of trees, including an optimal tradeoff between hop-diameter and treewidth for all hop-diameter up to $O(\log\log n)$. 
We also provide a lower bound for larger values of $k$, which together yield $\text{hop-diameter}\times \text{treewidth} = \Omega((\log\log n)^2)$ for all values of hop-diameter, resolving an open question of [FL22, Le23]. [...] - oai:arXiv.org:2510.14918v2 - cs.DS - Mon, 22 Dec 2025 00:00:00 -0500 - replace + 3SGen: Unified Subject, Style, and Structure-Driven Image Generation with Adaptive Task-specific Memory + https://arxiv.org/abs/2512.19271 + arXiv:2512.19271v1 Announce Type: new +Abstract: Recent image generation approaches often address subject, style, and structure-driven conditioning in isolation, leading to feature entanglement and limited task transferability. In this paper, we introduce 3SGen, a task-aware unified framework that performs all three conditioning modes within a single model. 3SGen employs an MLLM equipped with learnable semantic queries to align text-image semantics, complemented by a VAE branch that preserves fine-grained visual details. At its core, an Adaptive Task-specific Memory (ATM) module dynamically disentangles, stores, and retrieves condition-specific priors, such as identity for subjects, textures for styles, and spatial layouts for structures, via a lightweight gating mechanism along with several scalable memory items. This design mitigates inter-task interference and naturally scales to compositional inputs. In addition, we propose 3SGen-Bench, a unified image-driven generation benchmark with standardized metrics for evaluating cross-task fidelity and controllability. Extensive experiments on our proposed 3SGen-Bench and other public benchmarks demonstrate our superior performance across diverse image-driven generation tasks. 
+ oai:arXiv.org:2512.19271v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hung Le, Lazar Milenkovi\'c, Shay Solomon, Cuong Than + Xinyang Song, Libin Wang, Weining Wang, Zhiwei Li, Jianxin Sun, Dandan Zheng, Jingdong Chen, Qi Li, Zhenan Sun - EDVD-LLaMA: Explainable Deepfake Video Detection via Multimodal Large Language Model Reasoning - https://arxiv.org/abs/2510.16442 - arXiv:2510.16442v2 Announce Type: replace -Abstract: The rapid development of deepfake video technology has not only facilitated artistic creation but also made it easier to spread misinformation. Traditional deepfake video detection (DVD) methods face issues such as a lack of transparency in their principles and insufficient generalization capabilities to cope with evolving forgery techniques. This highlights an urgent need for detectors that can identify forged content and provide verifiable reasoning explanations. This paper proposes the explainable deepfake video detection (EDVD) task and designs the EDVD-LLaMA multimodal, a large language model (MLLM) reasoning framework, which provides traceable reasoning processes alongside accurate detection results and trustworthy explanations. Our approach first incorporates a Spatio-Temporal Subtle Information Tokenization (ST-SIT) to extract and fuse global and local cross-frame deepfake features, providing rich spatio-temporal semantic information input for MLLM reasoning. Second, we construct a Fine-grained Multimodal Chain-of-Thought (Fg-MCoT) mechanism, which introduces facial feature data as hard constraints during the reasoning process to achieve pixel-level spatio-temporal video localization, suppress hallucinated outputs, and enhance the reliability of the chain of thought. In addition, we build an Explainable Reasoning FF++ dataset (ER-FF++set), leveraging structured data to annotate videos and ensure quality control, thereby supporting dual supervision for reasoning and detection. 
Extensive experiments demonstrate that EDVD-LLaMA achieves outstanding performance and robustness in terms of detection accuracy, explainability, and its ability to handle cross-forgery methods and cross-dataset scenarios. Compared to previous DVD methods, it provides a more explainable and superior solution. The project page is available at: https://11ouo1.github.io/edvd-llama/. - oai:arXiv.org:2510.16442v2 + Is Visual Realism Enough? Evaluating Gait Biometric Fidelity in Generative AI Human Animation + https://arxiv.org/abs/2512.19275 + arXiv:2512.19275v1 Announce Type: new +Abstract: Generative AI (GenAI) models have revolutionized animation, enabling the synthesis of humans and motion patterns with remarkable visual fidelity. However, generating truly realistic human animation remains a formidable challenge, where even minor inconsistencies can make a subject appear unnatural. This limitation is particularly critical when AI-generated videos are evaluated for behavioral biometrics, where subtle motion cues that define identity are easily lost or distorted. The present study investigates whether state-of-the-art GenAI human animation models can preserve the subtle spatio-temporal details needed for person identification through gait biometrics. Specifically, we evaluate four different GenAI models across two primary evaluation tasks to assess their ability to i) restore gait patterns from reference videos under varying conditions of complexity, and ii) transfer these gait patterns to different visual identities. Our results show that while visual quality is mostly high, biometric fidelity remains low in tasks focusing on identification, suggesting that current GenAI models struggle to disentangle identity from motion. 
Furthermore, through an identity transfer task, we expose a fundamental flaw in appearance-based gait recognition: when texture is disentangled from motion, identification collapses, proving current GenAI models rely on visual attributes rather than temporal dynamics. + oai:arXiv.org:2512.19275v1 cs.CV cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Ivan DeAndres-Tame, Chengwei Ye, Ruben Tolosana, Ruben Vera-Rodriguez, Shiqi Yu + + + Laplacian Network Optimization via Information Functions + https://arxiv.org/abs/2512.19279 + arXiv:2512.19279v1 Announce Type: new +Abstract: Designing networks to optimize robustness and other performance metrics is a well-established problem with applications ranging from electrical engineering to communication networks. Many such performance measures rely on the Laplacian spectrum; notable examples include total effective resistance, the number of spanning trees, and algebraic connectivity. This paper advances the study of Laplacian-based network optimization by drawing on ideas from experimental design in statistics. We present a theoretical framework for analyzing performance measures by introducing the notion of information functions, which captures a set of their desirable properties. Then, we formulate a new parametric family of information functions, Kiefer's measures, which encompasses the three most common spectral objectives. We provide a regular reformulation of the Laplacian optimization problem, and we use this reformulation to compute directional derivatives of Kiefer's measures. The directional derivatives provide a unified treatment of quantities recurring in Laplacian optimization, such as gradients and subgradients, and we show that they are connected to Laplacian-based measures of node distance, which we call node dissimilarities. 
We apply the node dissimilarities to derive efficient rank-one update formulas for Kiefer's criteria, and to devise a new edge-exchange method for network optimization. These update formulas enable greedy and exchange algorithms with reduced asymptotic time complexity. + oai:arXiv.org:2512.19279v1 + cs.SI + cs.DM + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Samuel Rosa, Radoslav Harman - Utility-Diversity Aware Online Batch Selection for LLM Supervised Fine-tuning - https://arxiv.org/abs/2510.16882 - arXiv:2510.16882v2 Announce Type: replace -Abstract: Supervised fine-tuning (SFT) is a commonly used technique to adapt large language models (LLMs) to downstream tasks. In practice, SFT on a full dataset is computationally expensive and sometimes suffers from overfitting or bias amplification. This facilitates the rise of data curation in SFT, which prioritizes the most valuable data to optimize. This work studies the online batch selection family that dynamically scores and filters samples during the training process. However, existing popular methods often (i) rely merely on the utility of data to select a subset while neglecting other crucial factors like diversity, (ii) rely on external resources such as reference models or validation sets, and (iii) incur extra training time over full-dataset training. To address these limitations, this work develops \textbf{UDS (Utility-Diversity Sampling)}, a framework for efficient online batch selection in SFT. UDS leverages the nuclear norm of the logits matrix to capture both data utility and intra-sample diversity, while estimating inter-sample diversity through efficient low-dimensional embedding comparisons with a lightweight memory buffer of historical samples. Such a design eliminates the need for external resources and unnecessary backpropagation, securing computational efficiency. 
Experiments on multiple benchmarks demonstrate that UDS consistently outperforms state-of-the-art online batch selection methods under varying data budgets, and significantly reduces training time compared to full-dataset fine-tuning. Code is available at https://github.com/gfyddha/UDS. - oai:arXiv.org:2510.16882v2 + Digital Twin-Driven Zero-Shot Fault Diagnosis of Axial Piston Pumps Using Fluid-Borne Noise Signals + https://arxiv.org/abs/2512.19280 + arXiv:2512.19280v1 Announce Type: new +Abstract: Axial piston pumps are crucial components in fluid power systems, where reliable fault diagnosis is essential for ensuring operational safety and efficiency. Traditional data-driven methods require extensive labeled fault data, which is often impractical to obtain, while model-based approaches suffer from parameter uncertainties. This paper proposes a digital twin (DT)-driven zero-shot fault diagnosis framework utilizing fluid-borne noise (FBN) signals. The framework calibrates a high-fidelity DT model using only healthy-state data, generates synthetic fault signals for training deep learning classifiers, and employs a physics-informed neural network (PINN) as a virtual sensor for flow ripple estimation. Gradient-weighted class activation mapping (Grad-CAM) is integrated to visualize the decision-making process of neural networks, revealing that large kernels matching the subsequence length in time-domain inputs and small kernels in time-frequency domain inputs enable higher diagnostic accuracy by focusing on physically meaningful features. Experimental validations demonstrate that training on signals from the calibrated DT model yields diagnostic accuracies exceeding 95\% on real-world benchmarks, while uncalibrated models result in significantly lower performance, highlighting the framework's effectiveness in data-scarce scenarios. 
+ oai:arXiv.org:2512.19280v1 cs.LG cs.AI - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Heming Zou, Yixiu Mao, Yun Qu, Qi Wang, Xiangyang Ji + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Chang Dong, Jianfeng Tao, Chengliang Liu - CharDiff-LP: A Diffusion Model with Character-Level Guidance for License Plate Image Restoration - https://arxiv.org/abs/2510.17330 - arXiv:2510.17330v2 Announce Type: replace -Abstract: License plate image restoration is important not only as a preprocessing step for license plate recognition but also for enhancing evidential value, improving visual clarity, and enabling broader reuse of license plate images. We propose a novel diffusion-based framework with character-level guidance, CharDiff-LP, which effectively restores and recognizes severely degraded license plate images captured under realistic conditions. CharDiff-LP leverages fine-grained character-level priors extracted through external segmentation and Optical Character Recognition (OCR) modules tailored for low-quality license plate images. For precise and focused guidance, CharDiff-LP incorporates a novel Character-guided Attention through Region-wise Masking (CHARM) module, which ensures that each character's guidance is restricted to its own region, thereby avoiding interference with other regions. In experiments, CharDiff-LP significantly outperformed baseline restoration models in both restoration quality and recognition accuracy, achieving a 28.3% relative reduction in character error rate (CER) on the Roboflow-LP dataset compared with the best-performing baseline. - oai:arXiv.org:2510.17330v2 + Hand-Aware Egocentric Motion Reconstruction with Sequence-Level Context + https://arxiv.org/abs/2512.19283 + arXiv:2512.19283v1 Announce Type: new +Abstract: Egocentric vision systems are becoming widely available, creating new opportunities for human-computer interaction. 
A core challenge is estimating the wearer's full-body motion from first-person videos, which is crucial for understanding human behavior. However, this task is difficult since most body parts are invisible from the egocentric view. Prior approaches mainly rely on head trajectories, leading to ambiguity, or assume continuously tracked hands, which is unrealistic for lightweight egocentric devices. In this work, we present HaMoS, the first hand-aware, sequence-level diffusion framework that directly conditions on both head trajectory and intermittently visible hand cues caused by field-of-view limitations and occlusions, as in real-world egocentric devices. To overcome the lack of datasets pairing diverse camera views with human motion, we introduce a novel augmentation method that models such real-world conditions. We also demonstrate that sequence-level contexts such as body shape and field-of-view are crucial for accurate motion reconstruction, and thus employ local attention to infer long sequences efficiently. Experiments on public benchmarks show that our method achieves state-of-the-art accuracy and temporal smoothness, demonstrating a practical step toward reliable in-the-wild egocentric 3D motion understanding. + oai:arXiv.org:2512.19283v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Kyungwon Cho, Hanbyul Joo + GShield: Mitigating Poisoning Attacks in Federated Learning + https://arxiv.org/abs/2512.19286 + arXiv:2512.19286v1 Announce Type: new +Abstract: Federated Learning (FL) has recently emerged as a revolutionary approach to the collaborative training of Machine Learning models. In particular, it enables decentralized model training while preserving data privacy, but its distributed nature makes it highly vulnerable to a severe attack known as Data Poisoning. 
In such scenarios, malicious clients inject manipulated data into the training process, thereby degrading global model performance or causing targeted misclassification. In this paper, we present a novel defense mechanism called GShield, designed to detect and mitigate malicious and low-quality updates, especially under non-independent and identically distributed (non-IID) data scenarios. GShield operates by learning the distribution of benign gradients through clustering and Gaussian modeling during an initial round, enabling it to establish a reliable baseline of trusted client behavior. With this benign profile, GShield selectively aggregates only those updates that align with the expected gradient patterns, effectively isolating adversarial clients and preserving the integrity of the global model. An extensive experimental campaign demonstrates that our proposed defense significantly improves model robustness compared to the state-of-the-art methods while maintaining high accuracy across both tabular and image datasets. Furthermore, GShield improves the accuracy of the targeted class by 43\% to 65\% after detecting malicious and low-quality clients. + oai:arXiv.org:2512.19286v1 + cs.CR + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Sameera K. M., Serena Nicolazzo, Antonino Nocera, Vinod P., Rafidha Rehiman K. A + Vibe Reasoning: Eliciting Frontier AI Mathematical Capabilities -- A Case Study on IMO 2025 Problem 6 + https://arxiv.org/abs/2512.19287 + arXiv:2512.19287v1 Announce Type: new +Abstract: We introduce Vibe Reasoning, a human-AI collaborative paradigm for solving complex mathematical problems. Our key insight is that frontier AI models already possess the knowledge required to solve challenging problems -- they simply do not know how, what, or when to apply it. 
Vibe Reasoning transforms AI's latent potential into manifested capability through generic meta-prompts, agentic grounding, and model orchestration. We demonstrate this paradigm through IMO 2025 Problem 6, a combinatorial optimization problem where autonomous AI systems publicly reported failures. Our solution combined GPT-5's exploratory capabilities with Gemini 3 Pro's proof strengths, leveraging agentic workflows with Python code execution and file-based memory, to derive both the correct answer (2112) and a rigorous mathematical proof. Through iterative refinement across multiple attempts, we discovered the necessity of agentic grounding and model orchestration, while human prompts evolved from problem-specific hints to generic, transferable meta-prompts. We analyze why capable AI fails autonomously, how each component addresses specific failure modes, and extract principles for effective vibe reasoning. Our findings suggest that lightweight human guidance can unlock frontier models' mathematical reasoning potential. This is ongoing work; we are developing automated frameworks and conducting broader evaluations to further validate Vibe Reasoning's generality and effectiveness. + oai:arXiv.org:2512.19287v1 cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Kihyun Na, Gyuhwan Park, Injung Kim + Jiaao Wu, Xian Zhang, Fan Yang, Yinpeng Dong - A Homological Separation of $\mathbf{P}$ from $\mathbf{NP}$ via Computational Topology and Category Theory - https://arxiv.org/abs/2510.17829 - arXiv:2510.17829v2 Announce Type: replace -Abstract: This paper establishes the separation of complexity classes $\mathbf{P}$ and $\mathbf{NP}$ through a novel homological algebraic approach grounded in category theory. We construct the computational category $\mathbf{Comp}$, embedding computational problems and reductions into a unified categorical framework. 
By developing computational homology theory, we associate to each problem $L$ a chain complex $C_{\bullet}(L)$ whose homology groups $H_n(L)$ capture topological invariants of computational processes. Our main result demonstrates that problems in $\mathbf{P}$ exhibit trivial computational homology ($H_n(L) = 0$ for all $n > 0$), while $\mathbf{NP}$-complete problems such as SAT possess non-trivial homology ($H_1(\mathrm{SAT}) \neq 0$). This homological distinction provides the first rigorous proof of $\mathbf{P} \neq \mathbf{NP}$ using topological methods. Our work inaugurates computational topology as a new paradigm for complexity analysis, offering finer distinctions than traditional combinatorial approaches and establishing connections between structural complexity theory and homological invariants. - oai:arXiv.org:2510.17829v2 - cs.CC - math.AC - math.CT - math.RA - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Comparison and Evaluation of Different Simulation Environments for Rigid Body Systems + https://arxiv.org/abs/2512.19289 + arXiv:2512.19289v1 Announce Type: new +Abstract: Rigid body dynamics simulators are important tools for the design, analysis and optimization of mechanical systems in a variety of technical and scientific applications. This study examines four different simulation environments (Adams, Simscape, OpenModelica, and VEROSIM), focusing in particular on the comparison of the modeling methods, the numerical solvers, and the treatment of numerical problems that arise especially in closed-loop kinematics (esp. redundant boundary conditions and static equilibrium problem). A novel and complex crane boom of a real forestry machine serves as a practical benchmark application example. The direct comparison of the different solution approaches in the examined simulation tools supports the user in selecting the most suitable tool for his application. 
+ oai:arXiv.org:2512.19289v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jian-Gang Tang + Longxiang Shao, Ulrich Dahmen, Juergen Rossmann - Rank-GRPO: Training LLM-based Conversational Recommender Systems with Reinforcement Learning - https://arxiv.org/abs/2510.20150 - arXiv:2510.20150v4 Announce Type: replace -Abstract: Large language models (LLMs) are reshaping the recommender system paradigm by enabling users to express preferences and receive recommendations through conversations. Yet, aligning LLMs to the recommendation task remains challenging: pretrained LLMs often generate out-of-catalog items, violate required output formats, and their ranking quality degrades sharply toward the end of the generated list. To this end, we propose ConvRec-R1, a two-stage framework for end-to-end training of LLM-based conversational recommender systems. In Stage 1, we construct a behavioral-cloning dataset with a Remap-Reflect-Adjust pipeline, which produces high-quality, catalog-grounded demonstrations from powerful blackbox LLMs to warm-start the RL training. In Stage 2, we propose Rank-GRPO, a principled extension of group relative policy optimization (GRPO) tailored to tasks with rank-style outputs. Rank-GRPO treats each rank in the recommendation list as the unit instead of token (too fine-grained) or sequence (too coarse), redefining rewards to remove non-causal credit assignment and introducing a rank-level importance ratio based on the geometric mean of rank-wise token probabilities to stabilize policy updates. Experiments on the public Reddit-v2 dataset show that ConvRec-R1 converges faster and achieves higher Recall and NDCG than GRPO-style baselines. Code and datasets are released at https://github.com/yaochenzhu/Rank-GRPO. 
- oai:arXiv.org:2510.20150v4 - cs.IR - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Stability Analysis of a B-Spline Deep Neural Operator for Nonlinear Systems + https://arxiv.org/abs/2512.19291 + arXiv:2512.19291v1 Announce Type: new +Abstract: This paper investigates the stability properties of neural operators through the structured representation offered by the Hybrid B-spline Deep Neural Operator (HBDNO). While existing stability-aware architectures typically enforce restrictive constraints that limit universality, HBDNO preserves full expressive power by representing outputs via B-spline control points. We show that these control points form a natural observable for post-training stability analysis. By applying Dynamic Mode Decomposition and connecting the resulting discrete dynamics to the Koopman operator framework, we provide a principled approach to spectral characterization of learned operators. Numerical results demonstrate the ability to assess stability and reveal future directions for safety-critical applications. + oai:arXiv.org:2512.19291v1 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yaochen Zhu, Harald Steck, Dawen Liang, Yinhan He, Vito Ostuni, Jundong Li, Nathan Kallus + Raffaele Romagnoli, Soummya Kar - Discovering Latent Graphs with GFlowNets for Diverse Conditional Image Generation - https://arxiv.org/abs/2510.22107 - arXiv:2510.22107v2 Announce Type: replace -Abstract: Capturing diversity is crucial in conditional and prompt-based image generation, particularly when conditions contain uncertainty that can lead to multiple plausible outputs. To generate diverse images reflecting this diversity, traditional methods often modify random seeds, making it difficult to discern meaningful differences between samples, or diversify the input prompt, which is limited in verbally interpretable diversity. 
We propose Rainbow, a novel conditional image generation framework, applicable to any pretrained conditional generative model, that addresses inherent condition/prompt uncertainty and generates diverse plausible images. Rainbow is based on a simple yet effective idea: decomposing the input condition into diverse latent representations, each capturing an aspect of the uncertainty and generating a distinct image. First, we integrate a latent graph, parameterized by Generative Flow Networks (GFlowNets), into the prompt representation computation. Second, leveraging GFlowNets' advanced graph sampling capabilities to capture uncertainty and output diverse trajectories over the graph, we produce multiple trajectories that collectively represent the input condition, leading to diverse condition representations and corresponding output images. Evaluations on natural image and medical image datasets demonstrate Rainbow's improvement in both diversity and fidelity across image synthesis, image generation, and counterfactual generation tasks. - oai:arXiv.org:2510.22107v2 - cs.CV - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - The 39th Annual Conference on Neural Information Processing Systems 2025 - Bailey Trang, Parham Saremi, Alan Q. Wang, Fangrui Huang, Zahra TehraniNasab, Amar Kumar, Tal Arbel, Li Fei-Fei, Ehsan Adeli + LOCO: A Low-Cost SNU-Self-Resilient Latch Using an Output-Split C-Element + https://arxiv.org/abs/2512.19292 + arXiv:2512.19292v1 Announce Type: new +Abstract: As the CMOS technology enters nanometer scales, integrated circuits (ICs) become increasingly sensitive to radiation-induced soft errors, which can corrupt the state of storage elements and cause severe reliability issues. Many hardened designs have been proposed to mitigate soft errors by using filtering elements. However, existing filtering elements only protect their inputs against soft errors and leave their outputs unprotected. 
Therefore, additional filtering elements must be added to protect outputs, resulting in extra overhead. In this paper, we first propose a novel Output-Split C-element (OSC) to protect both its input and output nodes, and then a novel LOw-COst single-node-upset (SNU) self-resilient latch (LOCO) to use OSCs to achieve both soft error resilience and low overhead. The usage of OSCs effectively reduces the short-circuit current of the LOCO latch during switching activities. Furthermore, the usage of clock gating and high-speed path reduces power consumption and delay, respectively. Compared with state-of-the-art SNU-resilient hardened designs, the LOCO latch achieves 19% fewer transistors, 63.58% lower power, 74% less delay, and 92% lower power-delay-product (PDP) on average. In addition, the LOCO latch exhibits better stability under variations in PVT (Process, Voltage, and Temperature). + oai:arXiv.org:2512.19292v1 + cs.GT + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ruijun Ma, Xin Chen, Xiaoqing Wen, Hui Xu, Shengnan Ye, Chuanjian Zhang, Senling Wang - A First Look at the Self-Admitted Technical Debt in Test Code: Taxonomy and Detection - https://arxiv.org/abs/2510.22409 - arXiv:2510.22409v2 Announce Type: replace -Abstract: Self-admitted technical debt (SATD) refers to comments in which developers explicitly acknowledge code issues, workarounds, or suboptimal solutions. SATD is known to significantly increase software maintenance effort. While extensive research has examined SATD in source code, its presence and impact in test code have received no focused attention, leaving a significant gap in our understanding of how SATD manifests in testing contexts. - This study, the first of its kind, investigates SATD in test code by manually analyzing 50,000 comments randomly sampled from 1.6 million comments across 1,000 open-source Java projects. 
From this sample, after manual analysis and filtering, we identified 615 SATD comments and classified them into 14 distinct categories, building a taxonomy of test code SATD. To investigate whether test code SATD can be detected automatically, we evaluated existing SATD detection tools, as well as both open-source and proprietary LLMs. Among the existing tools, MAT performed the best, albeit with moderate recall. To our surprise, both open-source and proprietary LLMs exhibited poor detection accuracy, primarily due to low precision. These results indicate that neither existing approaches nor current LLMs can reliably detect SATD in test code. - Overall, this work provides the first large-scale analysis of SATD in test code, a nuanced understanding of its types, and the limitations of current SATD detection methods. Our findings lay the groundwork for future research on test code-specific SATD. - oai:arXiv.org:2510.22409v2 - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Causal-Guided Detoxify Backdoor Attack of Open-Weight LoRA Models + https://arxiv.org/abs/2512.19297 + arXiv:2512.19297v1 Announce Type: new +Abstract: Low-Rank Adaptation (LoRA) has emerged as an efficient method for fine-tuning large language models (LLMs) and is widely adopted within the open-source community. However, the decentralized dissemination of LoRA adapters through platforms such as Hugging Face introduces novel security vulnerabilities: malicious adapters can be easily distributed and evade conventional oversight mechanisms. Despite these risks, backdoor attacks targeting LoRA-based fine-tuning remain relatively underexplored. Existing backdoor attack strategies are ill-suited to this setting, as they often rely on inaccessible training data, fail to account for the structural properties unique to LoRA, or suffer from high false trigger rates (FTR), thereby compromising their stealth. 
To address these challenges, we propose Causal-Guided Detoxify Backdoor Attack (CBA), a novel backdoor attack framework specifically designed for open-weight LoRA models. CBA operates without access to original training data and achieves high stealth through two key innovations: (1) a coverage-guided data generation pipeline that synthesizes task-aligned inputs via behavioral exploration, and (2) a causal-guided detoxification strategy that merges poisoned and clean adapters by preserving task-critical neurons. Unlike prior approaches, CBA enables post-training control over attack intensity through causal influence-based weight allocation, eliminating the need for repeated retraining. Evaluated across six LoRA models, CBA achieves high attack success rates while reducing FTR by 50-70\% compared to baseline methods. Furthermore, it demonstrates enhanced resistance to state-of-the-art backdoor defenses, highlighting its stealth and robustness. + oai:arXiv.org:2512.19297v1 + cs.CR + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Shahidul Islam, Md Nahidul Islam Opu, Shaowei Wang, Shaiful Chowdhury + Linzhi Chen, Yang Sun, Hongru Wei, Yuqi Chen - On the Freshness of Pinned Dependencies in Maven - https://arxiv.org/abs/2510.22815 - arXiv:2510.22815v2 Announce Type: replace -Abstract: Library dependencies in software ecosystems play a crucial role in the development of software. As newer releases of these libraries are published, developers may opt to pin their dependencies to a particular version. While pinning may have benefits in ensuring reproducible builds and avoiding breaking changes, it bears larger risks in using outdated dependencies that may contain bugs and security vulnerabilities. To understand the frequency and consequences of dependency pinning, we first define the concepts of stale and fresh pins, which are distinguished based on how outdated the dependency is relative to the release date of the project. 
We conduct an empirical study to show that over 60% of consumers of popular Maven libraries contain stale pins to their dependencies, with some outdated versions over a year old. These pinned versions often miss out on security fixes; we find that 10% of all dependency upgrades in our dataset to the latest minor or patch version would reduce security vulnerabilities. - We prototype an approach called Pin-Freshener that can encourage developers to freshen their pins by leveraging the insight that crowdsourced tests of peer projects can provide additional signal for the safety of an upgrade. Running Pin-Freshener on dependency upgrades shows that just 1-5 additional test suites can provide 35-100% more coverage of a dependency, compared to that of a single consumer test suite. Our evaluation on real-world pins to the top 500 popular libraries in Maven shows that Pin-Freshener can provide an additional signal of at least 5 passing crowdsourced test suites to over 3,000 consumers to safely perform an upgrade that reduces security vulnerabilities. Pin-Freshener can provide practical confidence to developers by offering additional signal beyond their own test suites, representing an improvement over current practices. - oai:arXiv.org:2510.22815v2 - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-sa/4.0/ - Vasudev Vikram, Yuvraj Agarwal, Rohan Padhye + Helios: A Foundational Language Model for Smart Energy Knowledge Reasoning and Application + https://arxiv.org/abs/2512.19299 + arXiv:2512.19299v1 Announce Type: new +Abstract: In the global drive toward carbon neutrality, deeply coordinated smart energy systems underpin industrial transformation. However, the interdisciplinary, fragmented, and fast-evolving expertise in this domain prevents general-purpose LLMs, which lack domain knowledge and physical-constraint awareness, from delivering precise engineering-aligned inference and generation. 
To address these challenges, we introduce Helios, a large language model tailored to the smart energy domain, together with a comprehensive suite of resources to advance LLM research in this field. Specifically, we develop Enersys, a multi-agent collaborative framework for end-to-end dataset construction, through which we produce: (1) a smart energy knowledge base, EnerBase, to enrich the model's foundational expertise; (2) an instruction fine-tuning dataset, EnerInstruct, to strengthen performance on domain-specific downstream tasks; and (3) an RLHF dataset, EnerReinforce, to align the model with human preferences and industry standards. Leveraging these resources, Helios undergoes large-scale pretraining, SFT, and RLHF. We also release EnerBench, a benchmark for evaluating LLMs in smart energy scenarios, and demonstrate that our approach significantly enhances domain knowledge mastery, task execution accuracy, and alignment with human preferences. + oai:arXiv.org:2512.19299v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Haoyu Jiang, Fanjie Zeng, Boan Qu, Xiaojie Lin, Wei Zhong - Seeing Structural Failure Before it Happens: An Image-Based Physics-Informed Neural Network (PINN) for Spaghetti Bridge Load Prediction - https://arxiv.org/abs/2510.23117 - arXiv:2510.23117v3 Announce Type: replace -Abstract: Physics Informed Neural Networks (PINNs) are gaining attention for their ability to embed physical laws into deep learning models, which is particularly useful in structural engineering tasks with limited data. This paper aims to explore the use of PINNs to predict the weight of small scale spaghetti bridges, a task relevant to understanding load limits and potential failure modes in simplified structural models. Our proposed framework incorporates physics-based constraints to the prediction model for improved performance. 
In addition to standard PINNs, we introduce a novel architecture named Physics Informed Kolmogorov Arnold Network (PIKAN), which blends universal function approximation theory with physical insights. The structural parameters provided as input to the model are collected either manually or through computer vision methods. Our dataset includes 15 real bridges, augmented to 100 samples, and our best model achieves an $R^2$ score of 0.9603 and a mean absolute error (MAE) of 10.50 units. From applied perspective, we also provide a web based interface for parameter entry and prediction. These results show that PINNs can offer reliable estimates of structural weight, even with limited data, and may help inform early stage failure analysis in lightweight bridge designs. - The complete data and code are available at https://github.com/OmerJauhar/PINNS-For-Spaghetti-Bridges. - oai:arXiv.org:2510.23117v3 - cs.LG + RMLer: Synthesizing Novel Objects across Diverse Categories via Reinforcement Mixing Learning + https://arxiv.org/abs/2512.19300 + arXiv:2512.19300v1 Announce Type: new +Abstract: Novel object synthesis by integrating distinct textual concepts from diverse categories remains a significant challenge in Text-to-Image (T2I) generation. Existing methods often suffer from insufficient concept mixing, lack of rigorous evaluation, and suboptimal outputs-manifesting as conceptual imbalance, superficial combinations, or mere juxtapositions. To address these limitations, we propose Reinforcement Mixing Learning (RMLer), a framework that formulates cross-category concept fusion as a reinforcement learning problem: mixed features serve as states, mixing strategies as actions, and visual outcomes as rewards. Specifically, we design an MLP-policy network to predict dynamic coefficients for blending cross-category text embeddings. 
We further introduce visual rewards based on (1) semantic similarity and (2) compositional balance between the fused object and its constituent concepts, optimizing the policy via proximal policy optimization. At inference, a selection strategy leverages these rewards to curate the highest-quality fused objects. Extensive experiments demonstrate RMLer's superiority in synthesizing coherent, high-fidelity objects from diverse categories, outperforming existing methods. Our work provides a robust framework for generating novel visual concepts, with promising applications in film, gaming, and design. + oai:arXiv.org:2512.19300v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Omer Jauhar Khan, Sudais Khan, Hafeez Anwar, Shahzeb Khan, Shams Ul Arifeen + Jun Li, Zikun Chen, Haibo Chen, Shuo Chen, Jian Yang - Training Deep Physics-Informed Kolmogorov-Arnold Networks - https://arxiv.org/abs/2510.23501 - arXiv:2510.23501v2 Announce Type: replace -Abstract: Since their introduction, Kolmogorov-Arnold Networks (KANs) have been successfully applied across several domains, with physics-informed machine learning (PIML) emerging as one of the areas where they have thrived. In the PIML setting, Chebyshev-based physics-informed KANs (cPIKANs) have become the standard due to their computational efficiency. However, like their multilayer perceptron-based counterparts, cPIKANs face significant challenges when scaled to depth, leading to training instabilities that limit their applicability to several PDE problems. To address this, we propose a basis-agnostic, Glorot-like initialization scheme that preserves activation variance and yields substantial improvements in stability and accuracy over the default initialization of cPIKANs. 
Inspired by the PirateNet architecture, we further introduce Residual-Gated Adaptive KANs (RGA KANs), designed to mitigate divergence in deep cPIKANs where initialization alone is not sufficient. Through empirical tests and information bottleneck analysis, we show that RGA KANs successfully traverse all training phases, unlike baseline cPIKANs, which stagnate in the diffusion phase in specific PDE settings. Evaluations on nine standard forward PDE benchmarks under a fixed training pipeline with adaptive components demonstrate that RGA KANs consistently outperform parameter-matched cPIKANs and PirateNets - often by several orders of magnitude - while remaining stable in settings where the others diverge. - oai:arXiv.org:2510.23501v2 - cs.LG - physics.comp-ph - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Spyros Rigas, Fotios Anagnostopoulos, Michalis Papachristou, Georgios Alexandridis + Bridging Semantics and Geometry: A Decoupled LVLM-SAM Framework for Reasoning Segmentation in Remote Sensing + https://arxiv.org/abs/2512.19302 + arXiv:2512.19302v1 Announce Type: new +Abstract: Large Vision-Language Models (LVLMs) hold great promise for advancing remote sensing (RS) analysis, yet existing reasoning segmentation frameworks couple linguistic reasoning and pixel prediction through end-to-end supervised fine-tuning, leading to weak geometric grounding and limited generalization across tasks. To address this, we developed Think2Seg-RS, a decoupled framework that trains an LVLM prompter to control a frozen Segment Anything Model (SAM) via structured geometric prompts. Through a mask-only reinforcement learning objective, the LVLM learns to translate abstract semantic reasoning into spatially grounded actions, achieving state-of-the-art performance on the EarthReason dataset. 
Remarkably, the learned prompting policy generalizes zero-shot to multiple referring segmentation benchmarks, exposing a distinct divide between semantic-level and instance-level grounding. We further found that compact segmenters outperform larger ones under semantic-level supervision, and that negative prompts are ineffective in heterogeneous aerial backgrounds. Together, these findings establish semantic-level reasoning segmentation as a new paradigm for geospatial understanding, opening the way toward unified, interpretable LVLM-driven Earth observation. Our code and model are available at https://github.com/Ricardo-XZ/Think2Seg-RS. + oai:arXiv.org:2512.19302v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xu Zhang, Junyao Ge, Yang Zheng, Kaitai Guo, Jimin Liang - The Generation Phases of Flow Matching: a Denoising Perspective - https://arxiv.org/abs/2510.24830 - arXiv:2510.24830v2 Announce Type: replace -Abstract: Flow matching has achieved remarkable success, yet the factors influencing the quality of its generation process remain poorly understood. In this work, we adopt a denoising perspective and design a framework to empirically probe the generation process. Laying down the formal connections between flow matching models and denoisers, we provide a common ground to compare their performances on generation and denoising. This enables the design of principled and controlled perturbations to influence sample generation: noise and drift. This leads to new insights on the distinct dynamical phases of the generative process, enabling us to precisely characterize at which stage of the generative process denoisers succeed or fail and why this matters. 
- oai:arXiv.org:2510.24830v2 - cs.CV - cs.AI - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Binary Neural Network Implementation for Handwritten Digit Recognition on FPGA + https://arxiv.org/abs/2512.19304 + arXiv:2512.19304v1 Announce Type: new +Abstract: Binary neural networks provide a promising solution for low-power, high-speed inference by replacing expensive floating-point operations with bitwise logic. This makes them well-suited for deployment on resource-constrained platforms such as FPGAs. In this study, we present a fully custom BNN inference accelerator for handwritten digit recognition, implemented entirely in Verilog without the use of high-level synthesis tools. The design targets the Xilinx Artix-7 FPGA and achieves real-time classification at 80\,MHz with low power consumption and predictable timing. Simulation results demonstrate 84\% accuracy on the MNIST test set and highlight the advantages of manual HDL design for transparent, efficient, and flexible BNN deployment in embedded systems. The complete project including training scripts and Verilog source code are available at GitHub repo for reproducibility and future development. + oai:arXiv.org:2512.19304v1 + cs.AR + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Anne Gagneux, S\'egol\`ene Martin, R\'emi Gribonval, Mathurin Massias + Emir Devlet Ert\"orer, Cem \"Unsalan - Cooperative Task Spaces for Multi-Arm Manipulation Control based on Similarity Transformations - https://arxiv.org/abs/2510.26362 - arXiv:2510.26362v2 Announce Type: replace -Abstract: Many tasks in human environments require collaborative behavior between multiple kinematic chains, either to provide additional support for carrying big and bulky objects or to enable the dexterity that is required for in-hand manipulation. Since these complex systems often have a very high number of degrees of freedom coordinating their movements is notoriously difficult to model. 
In this article, we present the derivation of the theoretical foundations for cooperative task spaces of multi-arm robotic systems based on geometric primitives defined using conformal geometric algebra. Based on the similarity transformations of these cooperative geometric primitives, we derive an abstraction of complex robotic systems that enables representing these systems in a way that directly corresponds to single-arm systems. By deriving the associated analytic and geometric Jacobian matrices, we then show the straightforward integration of our approach into classical control techniques rooted in operational space control. We demonstrate this using bimanual manipulators, humanoids and multi-fingered hands in optimal control experiments for reaching desired geometric primitives and in teleoperation experiments using differential kinematics control. We then discuss how the geometric primitives naturally embed nullspace structures into the controllers that can be exploited for introducing secondary control objectives. This work, represents the theoretical foundations of this cooperative manipulation control framework, and thus the experiments are presented in an abstract way, while giving pointers towards potential future applications. - oai:arXiv.org:2510.26362v2 - cs.RO - cs.SY - eess.SY - Mon, 22 Dec 2025 00:00:00 -0500 - replace + CienaLLM: Generative Climate-Impact Extraction from News Articles with Autoregressive LLMs + https://arxiv.org/abs/2512.19305 + arXiv:2512.19305v1 Announce Type: new +Abstract: Understanding and monitoring the socio-economic impacts of climate hazards requires extracting structured information from heterogeneous news articles on a large scale. To that end, we have developed CienaLLM, a modular framework based on schema-guided Generative Information Extraction. 
CienaLLM uses open-weight Large Language Models for zero-shot information extraction from news articles, and supports configurable prompts and output schemas, multi-step pipelines, and cloud or on-premise inference. To systematically assess how the choice of LLM family, size, precision regime, and prompting strategy affect performance, we run a large factorial study in models, precisions, and prompt engineering techniques. An additional response parsing step nearly eliminates format errors while preserving accuracy; larger models deliver the strongest and most stable performance, while quantization offers substantial efficiency gains with modest accuracy trade-offs; and prompt strategies show heterogeneous, model-specific effects. CienaLLM matches or outperforms the supervised baseline in accuracy for extracting drought impacts from Spanish news, although at a higher inference cost. While evaluated in droughts, the schema-driven and model-agnostic design is suitable for adapting to related information extraction tasks (e.g., other hazards, sectors, or languages) by editing prompts and schemas rather than retraining. We release code, configurations, and schemas to support reproducible use. + oai:arXiv.org:2512.19305v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Tobias L\"ow, Cem Bilaloglu, Sylvain Calinon + Javier Vela-Tambo, Jorge Gracia, Fernando Dominguez-Castro - Empirical and Sustainability Aspects of Software Engineering Research in the Era of Large Language Models: A Reflection - https://arxiv.org/abs/2510.26538 - arXiv:2510.26538v2 Announce Type: replace -Abstract: Software Engineering (SE) research involving the use of Large Language Models (LLMs) has introduced several new challenges related to rigour in benchmarking, contamination, replicability, and sustainability. In this paper, we invite the research community to reflect on how these challenges are addressed in SE. 
Our results provide a structured overview of current LLM-based SE research at ICSE, highlighting both encouraging practices and persistent shortcomings. We conclude with recommendations to strengthen benchmarking rigour, improve replicability, and address the financial and environmental costs of LLM-based SE. - oai:arXiv.org:2510.26538v2 - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 - replace + On the construction of Cauchy MDS matrices over Galois rings via nilpotent elements and Frobenius maps + https://arxiv.org/abs/2512.19306 + arXiv:2512.19306v1 Announce Type: new +Abstract: Let $s,m$ be the positive integers and $p$ be any prime number. Next, let $GR(p^s,p^{sm})$ be a Galois ring of characteristic $p^s$ and cardinality $p^{sm}$. In the present paper, we explore the construction of Cauchy MDS matrices over Galois rings. Moreover, we introduce a new approach that considers nilpotent elements and Teichm\"uller set of Galois ring $GR(p^s,p^{sm})$ to reduce the number of entries in these matrices. Furthermore, we construct $p^{(s-1)m}(p^m-1)$ distinct functions with the help of Frobenius automorphisms. These functions preserve MDS property of matrices. Finally, we prove some results using automorphisms and isomorphisms of the Galois rings that can be used to generate new Cauchy MDS matrices. + oai:arXiv.org:2512.19306v1 + cs.IT + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - David Williams, Max Hort, Maria Kechagia, Aldeida Aleti, Justyna Petke, Federica Sarro + Shakir Ali, Atif Ahmad Khan, Abhishek Kesarwani - Semi-Supervised Preference Optimization with Limited Feedback - https://arxiv.org/abs/2511.00040 - arXiv:2511.00040v2 Announce Type: replace -Abstract: The field of preference optimization has made outstanding contributions to the alignment of language models with human preferences. 
Despite these advancements, recent methods still rely heavily on substantial paired (labeled) feedback data, leading to substantial resource expenditures. To address these challenges, we study the problem of Semi-Supervised Preference Optimization (SSPO) in which the idea is to learn from both a small number of pairwise preference labels and a large pool of unpaired samples simultaneously. Our key theoretical contribution proves the existence of an optimal reward threshold capable of separating winning and losing responses with high probability, which enables a principled pseudo-labeling of unpaired data. By leveraging these pseudo-labels, SSPO effectively distills latent preferences from large-scale unpaired data, thus maintaining human alignment while drastically reducing acquisition costs. Extensive experiments across datasets validate this remarkable data efficiency; for instance, SSPO trained with Mistral-7B-Instruct on just 1% of UltraFeedback consistently surpasses strong baselines trained on 10% of UltraFeedback. - oai:arXiv.org:2511.00040v2 + Time-Vertex Machine Learning for Optimal Sensor Placement in Temporal Graph Signals: Applications in Structural Health Monitoring + https://arxiv.org/abs/2512.19309 + arXiv:2512.19309v1 Announce Type: new +Abstract: Structural Health Monitoring (SHM) plays a crucial role in maintaining the safety and resilience of infrastructure. As sensor networks grow in scale and complexity, identifying the most informative sensors becomes essential to reduce deployment costs without compromising monitoring quality. While Graph Signal Processing (GSP) has shown promise by leveraging spatial correlations among sensor nodes, conventional approaches often overlook the temporal dynamics of structural behavior. 
To overcome this limitation, we propose Time-Vertex Machine Learning (TVML), a novel framework that integrates GSP, time-domain analysis, and machine learning to enable interpretable and efficient sensor placement by identifying representative nodes that minimize redundancy while preserving critical information. We evaluate the proposed approach on two bridge datasets for damage detection and time-varying graph signal reconstruction tasks. The results demonstrate the effectiveness of our approach in enhancing SHM systems by providing a robust, adaptive, and efficient solution for sensor placement. + oai:arXiv.org:2512.19309v1 cs.LG + eess.SP + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Keivan Faghih Niresi, Jun Qing, Mengjie Zhao, Olga Fink + + + MixFlow Training: Alleviating Exposure Bias with Slowed Interpolation Mixture + https://arxiv.org/abs/2512.19311 + arXiv:2512.19311v1 Announce Type: new +Abstract: This paper studies the training-testing discrepancy (a.k.a. exposure bias) problem for improving the diffusion models. During training, the input of a prediction network at one training timestep is the corresponding ground-truth noisy data that is an interpolation of the noise and the data, and during testing, the input is the generated noisy data. We present a novel training approach, named MixFlow, for improving the performance. Our approach is motivated by the Slow Flow phenomenon: the ground-truth interpolation that is the nearest to the generated noisy data at a given sampling timestep is observed to correspond to a higher-noise timestep (termed slowed timestep), i.e., the corresponding ground-truth timestep is slower than the sampling timestep. MixFlow leverages the interpolations at the slowed timesteps, named slowed interpolation mixture, for post-training the prediction network for each training timestep. 
Experiments over class-conditional image generation (including SiT, REPA, and RAE) and text-to-image generation validate the effectiveness of our approach. Our approach MixFlow over the RAE models achieve strong generation results on ImageNet: 1.43 FID (without guidance) and 1.10 (with guidance) at 256 x 256, and 1.55 FID (without guidance) and 1.10 (with guidance) at 512 x 512. + oai:arXiv.org:2512.19311v1 + cs.CV cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Seonggyun Lee, Sungjun Lim, Seojin Park, Soeun Cheon, Kyungwoo Song + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Hui Li, Jiayue Lyu, Fu-Yun Wang, Kaihui Cheng, Siyu Zhu, Jingdong Wang - MCHex: Marching Cubes Based Adaptive Hexahedral Mesh Generation with Guaranteed Positive Jacobian - https://arxiv.org/abs/2511.02064 - arXiv:2511.02064v3 Announce Type: replace -Abstract: Constructing an adaptive hexahedral tessellation to fit an input triangle boundary is a key challenge in grid-based methods. The conventional method first removes outside elements (RO) and then projects the axis-aligned boundary onto the input triangle boundary, which has no guarantee on improving the initial Intersection over Union (IoU) and Hausdorff distance ratio (HR, w.r.t bounding box diagonal). The proposed MCHex approach replaces RO with a Marching Cubes method MCHex. Given the same computational budget (benchmarked using an identical precomputed Signed Distance Field, which dominates the runtime), MCHex provides better boundary approximation (higher IoU and lower HR) while guaranteeing a lower, yet still positive, minimum scaled Jacobian (>0 vs. RO's >0.48). 
- oai:arXiv.org:2511.02064v3 - cs.CG - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Hua Tong, Yongjie Jessica Zhang + Results on cubic bent and weakly regular bent $p$-ary functions leading to a class of cubic ternary non-weakly regular bent functions + https://arxiv.org/abs/2512.19313 + arXiv:2512.19313v1 Announce Type: new +Abstract: Much work has been devoted to bent functions in odd characteristic, but a gap remains between our knowledge of binary and nonbinary bent functions. In the first part of this paper, we attempt to partially bridge this gap by generalizing to any characteristic important properties known in characteristic 2 concerning the Walsh transform of derivatives of bent functions. Some of these properties generalize to all bent functions, while others appear to apply only to weakly regular bent functions. We deduce a method to obtain a bent function by adding a quadratic function to a weakly regular bent function. We also identify a particular class of bent functions possessing the property that every first-order derivative in a nonzero direction has a derivative (which is then a second-order derivative of the function) equal to a nonzero constant. We show that this property implies bentness and is shared in particular by all cubic bent functions. It generalizes the notion of cubic-like bent function, that was introduced and studied for binary functions by Irene Villa and the first author. + In the second part of the paper, we provide (for the first time) a primary construction leading to an infinite class of cubic bent functions that are not weakly regular. We show the bentness of the functions by two approaches: by calculating the Walsh transform directly and by considering the second-order derivatives (and applying the results from the first part of the paper). 
+ oai:arXiv.org:2512.19313v1 + cs.DM + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Claude Carlet, and Alexander Kholosha - Decentralized Voltage Control of AC Microgrids with Constant Power Loads using Control Barrier Functions - https://arxiv.org/abs/2511.02438 - arXiv:2511.02438v2 Announce Type: replace -Abstract: This paper proposes a novel nonlinear decentralized voltage controller for constrained regulation of meshed AC Microgrid networks with high penetration of time-varying constant power loads. Modelling the load demand as a constantly evolving unknown disturbance, the network model is reformulated in a cascaded structure composed of a nominal, \ie uncertainty-free, and an error subsystem. By adopting a suitable control barrier function, we formulate a continuous-time control law and derive analytic conditions on the tuning parameters, such that the distance between the true and the nominal state trajectories is bounded. Under sufficient conditions, we prove asymptotic stability of the cascaded dynamics with respect to an equilibrium set and also provide an estimate of the region of attraction. In addition, it is rigorously shown that the proposed nonlinear control law enforces constrained regulation around a rated voltage value, without the need of saturation devices. The operation of the closed-loop system is illustrated both via simulation and real-time HIL scenarios, demonstrating bounded operation and convergence to a neighbourhood of the desired reference vector. - oai:arXiv.org:2511.02438v2 - eess.SY - cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Protecting Quantum Circuits Through Compiler-Resistant Obfuscation + https://arxiv.org/abs/2512.19314 + arXiv:2512.19314v1 Announce Type: new +Abstract: Quantum circuit obfuscation is becoming increasingly important to prevent theft and reverse engineering of quantum algorithms. 
As quantum computing advances, the need to protect the intellectual property contained in quantum circuits continues to grow. Existing methods often provide limited defense against structural and statistical analysis or introduce considerable overhead. In this paper, we propose a novel quantum obfuscation method that uses randomized U3 transformations to conceal circuit structure while preserving functionality. We implement and assess our approach on QASM circuits using Qiskit AER, achieving over 93\% semantic accuracy with minimal runtime overhead. The method demonstrates strong resistance to reverse engineering and structural inference, making it a practical and effective approach for quantum software protection. + oai:arXiv.org:2512.19314v1 + cs.CR + quant-ph + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Grigoris Michos, George C. Konstantopoulos + Pradyun Parayil, Amal Raj, Vivek Balachandran - Human Mesh Modeling for Anny Body - https://arxiv.org/abs/2511.03589 - arXiv:2511.03589v2 Announce Type: replace -Abstract: Parametric body models provide the structural basis for many human-centric tasks, yet existing models often rely on costly 3D scans and learned shape spaces that are proprietary and demographically narrow. We introduce Anny, a simple, fully differentiable, and scan-free human body model grounded in anthropometric knowledge from the MakeHuman community. Anny defines a continuous, interpretable shape space, where phenotype parameters (e.g. gender, age, height, weight) control blendshapes spanning a wide range of human forms--across ages (from infants to elders), body types, and proportions. Calibrated using WHO population statistics, it provides realistic and demographically grounded human shape variation within a single unified model. 
Thanks to its openness and semantic control, Anny serves as a versatile foundation for 3D human modeling--supporting millimeter-accurate scan fitting, controlled synthetic data generation, and Human Mesh Recovery (HMR). We further introduce Anny-One, a collection of 800k photorealistic images generated with Anny, showing that despite its simplicity, HMR models trained with Anny can match the performance of those trained with scan-based body models. The Anny body model and its code are released under the Apache 2.0 license, making Anny an accessible foundation for human-centric 3D modeling. - oai:arXiv.org:2511.03589v2 + Neural Implicit Heart Coordinates: 3D cardiac shape reconstruction from sparse segmentations + https://arxiv.org/abs/2512.19316 + arXiv:2512.19316v1 Announce Type: new +Abstract: Accurate reconstruction of cardiac anatomy from sparse clinical images remains a major challenge in patient-specific modeling. While neural implicit functions have previously been applied to this task, their application to mapping anatomical consistency across subjects has been limited. In this work, we introduce Neural Implicit Heart Coordinates (NIHCs), a standardized implicit coordinate system, based on universal ventricular coordinates, that provides a common anatomical reference frame for the human heart. Our method predicts NIHCs directly from a limited number of 2D segmentations (sparse acquisition) and subsequently decodes them into dense 3D segmentations and high-resolution meshes at arbitrary output resolution. Trained on a large dataset of 5,000 cardiac meshes, the model achieves high reconstruction accuracy on clinical contours, with mean Euclidean surface errors of 2.51$\pm$0.33 mm in a diseased cohort (n=4549) and 2.3$\pm$0.36 mm in a healthy cohort (n=5576). The NIHC representation enables anatomically coherent reconstruction even under severe slice sparsity and segmentation noise, faithfully recovering complex structures such as the valve planes. 
Compared with traditional pipelines, inference time is reduced from over 60 s to 5-15 s. These results demonstrate that NIHCs constitute a robust and efficient anatomical representation for patient-specific 3D cardiac reconstruction from minimal input data. + oai:arXiv.org:2512.19316v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Romain Br\'egier, Gu\'enol\'e Fiche, Laura Bravo-S\'anchez, Thomas Lucas, Matthieu Armando, Philippe Weinzaepfel, Gr\'egory Rogez, Fabien Baradel + eess.IV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Marica Muffoletto, Uxio Hermida, Charl\`ene Mauger, Avan Suinesiaputra, Yiyang Xu, Richard Burns, Lisa Pankewitz, Andrew D McCulloch, Steffen E Petersen, Daniel Rueckert, Alistair A Young - An Improved Quality Hierarchical Congestion Approximator in Near-Linear Time - https://arxiv.org/abs/2511.03716 - arXiv:2511.03716v3 Announce Type: replace -Abstract: A single-commodity congestion approximator for a graph is a compact data structure that approximately predicts the edge congestion required to route any set of single-commodity flow demands in a network. A hierarchical congestion approximator (HCA) consists of a laminar family of cuts in the graph and has numerous applications in approximating cut and flow problems in graphs, designing efficient routing schemes, and managing distributed networks. - There is a tradeoff between the running time for computing an HCA and its approximation quality. The best polynomial-time construction in an $n$-node graph gives an HCA with approximation quality $O(\log^{1.5}n \log \log n)$. Among near-linear time algorithms, the best previous result achieves approximation quality $O(\log^4 n)$. We improve upon the latter result by giving the first near-linear time algorithm for computing an HCA with approximation quality $O(\log^2 n \log \log n)$. 
Additionally, our algorithm can be implemented in the parallel setting with polylogarithmic span and near-linear work, achieving the same approximation quality. This improves upon the best previous such algorithm, which has an $O(\log^9n)$ approximation quality. We also present a lower bound of $\Omega(\log n)$ for the approximation guarantee of hierarchical congestion approximators. - Crucial for achieving a near-linear running time is a new partitioning routine that, unlike previous such routines, manages to avoid recursing on large subgraphs. To achieve the improved approximation quality, we introduce the new concept of border routability of a cut and provide an improved sparsest cut oracle for general vertex weights. - oai:arXiv.org:2511.03716v3 - cs.DS - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Monika Henzinger, Robin M\"unk, Harald R\"acke + SafeMed-R1: Adversarial Reinforcement Learning for Generalizable and Robust Medical Reasoning in Vision-Language Models + https://arxiv.org/abs/2512.19317 + arXiv:2512.19317v1 Announce Type: new +Abstract: Vision--Language Models (VLMs) show significant promise for Medical Visual Question Answering (VQA), yet their deployment in clinical settings is hindered by severe vulnerability to adversarial attacks. Standard adversarial training, while effective for simpler tasks, often degrades both generalization performance and the quality of generated clinical reasoning. We introduce SafeMed-R1, a hybrid defense framework that ensures robust performance while preserving high-quality, interpretable medical reasoning. SafeMed-R1 employs a two-stage approach: at training time, we integrate Adversarial Training with Group Relative Policy Optimization (AT-GRPO) to explicitly robustify the reasoning process against worst-case perturbations; at inference time, we augment the model with Randomized Smoothing to provide certified $L_2$-norm robustness guarantees. 
We evaluate SafeMed-R1 on the OmniMedVQA benchmark across eight medical imaging modalities comprising over 88,000 samples. Our experiments reveal that standard fine-tuned VLMs, despite achieving 95\% accuracy on clean inputs, collapse to approximately 25\% under PGD attacks. In contrast, SafeMed-R1 maintains 84.45\% accuracy under the same adversarial conditions, representing a 59 percentage point improvement in robustness. Furthermore, we demonstrate that models trained with explicit chain-of-thought reasoning exhibit superior adversarial robustness compared to instruction-only variants, suggesting a synergy between interpretability and security in medical AI systems. + oai:arXiv.org:2512.19317v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + A. A. Gde Yogi Pramana, Jason Ray, Anthony Jaya, Michael Wijaya - Batch Prompting Suppresses Overthinking Reasoning Under Constraint: How Batch Prompting Suppresses Overthinking in Reasoning Models - https://arxiv.org/abs/2511.04108 - arXiv:2511.04108v3 Announce Type: replace -Abstract: Recent work has explored batch prompting as a strategy to amortize inference cost in large language models (LLMs). In this paper, we show that batching offers an additional, underappreciated benefit: it regularizes model behavior during multi-step reasoning for Large Reasoning Models (LRMs). We conduct a comprehensive study across 13 diverse benchmarks and observe that batching improves accuracy while substantially reducing reasoning token usage, often by 3x-5x. Through detailed behavioral analysis, we find that batching suppresses overthinking, reduces hedging language (e.g., repetitive self-corrections), and encourages more decisive answers. Surprisingly, we also observe emergent collective effects in batched inference: models often generalize patterns from earlier examples to solve harder ones in the same batch. 
These findings position batching not just as a throughput optimization, but as a powerful inference-time regularizer for more efficient and reliable LLM reasoning. - oai:arXiv.org:2511.04108v3 + MAGIC: Achieving Superior Model Merging via Magnitude Calibration + https://arxiv.org/abs/2512.19320 + arXiv:2512.19320v1 Announce Type: new +Abstract: The proliferation of pre-trained models has given rise to a wide array of specialised, fine-tuned models. Model merging aims to merge the distinct capabilities of these specialised models into a unified model, requiring minimal or even no additional training. A core objective of model merging is to ensure the merged model retains the behavioural characteristics of the specialised models, typically achieved through feature alignment. We identify that features consist of two critical components: direction and magnitude. Prior research has predominantly focused on directional alignment, while the influence of magnitude remains largely neglected, despite its pronounced vulnerability to perturbations introduced by common merging operations (e.g., parameter fusion and sparsification). Such perturbations to magnitude inevitably lead to feature deviations in the merged model from the specialised models, resulting in subsequent performance degradation. To address this, we propose MAGnItude Calibration (MAGIC), a plug-and-play framework that rectifies layer-wise magnitudes in feature and weight spaces, with three variants. Specifically, our Feature Space Calibration (FSC) realigns the merged model's features using a small set of unlabelled data, while Weight Space Calibration (WSC) extends this calibration to the weight space without requiring additional data. Combining these yields Dual Space Calibration (DSC). Comprehensive experiments demonstrate that MAGIC consistently boosts performance across diverse Computer Vision tasks (+4.3% on eight datasets) and NLP tasks (+8.0% on Llama) without additional training. 
Our code is available at: https://github.com/lyymuwu/MAGIC + oai:arXiv.org:2512.19320v1 + cs.LG + cs.AI cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Gaurav Singh, Abhishek Dey, Janit Bidhan, Tanu Kansal, Paras Kath, Saurabh Srivastava + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Yayuan Li, Jian Zhang, Jintao Guo, Zihan Cheng, Lei Qi, Yinghuan Shi, Yang Gao - CBMC-V3: A CNS-inspired Control Framework Towards Manipulation Agility with SNN - https://arxiv.org/abs/2511.04109 - arXiv:2511.04109v2 Announce Type: replace -Abstract: As robotic arm applications extend beyond industrial settings into service-oriented sectors such as catering, household and retail, existing control algorithms struggle to achieve the agile manipulation required for complex environments with dynamic trajectories, unpredictable interactions, and diverse objects. This paper presents a biomimetic control framework based on Spiking Neural Networks (SNNs), inspired by the human Central Nervous System (CNS), to achieve agile control in such environments. The proposed framework features five control modules (cerebral cortex, cerebellum, thalamus, brainstem, and spinal cord), three hierarchical control levels (first-order, second-order, and third-order), and two information pathways (ascending and descending). Each module is fully implemented using SNN. The spinal cord module uses spike encoding and Leaky Integrate-and-Fire (LIF) neurons for feedback control. The brainstem module employs a network of LIF and non-spiking LIF neurons to dynamically adjust spinal cord parameters via reinforcement learning. The thalamus module similarly employs a network of LIF and non-spiking LIF neurons to adjust the cerebellum's torque outputs via reinforcement learning. 
The cerebellum module, which provides feedforward gravity compensation torques, uses a recurrent SNN to learn the robotic arm's dynamics through regression. The framework is validated both in simulation and on a real-world robotic arm platform under various loads and trajectories. Results demonstrate that our method outperforms the industrial-grade position control in manipulation agility. - oai:arXiv.org:2511.04109v2 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yanbo Pang, Qingkai Li, Mingguo Zhao - Learning-Assisted Multi-Operator Variable Neighborhood Search for Urban Cable Routing + https://arxiv.org/abs/2512.19321 + arXiv:2512.19321v1 Announce Type: new +Abstract: Urban underground cable construction is essential for enhancing the reliability of city power grids, yet its high construction costs make planning a worthwhile optimization task. In urban environments, road layouts tightly constrain cable routing. This, on the one hand, renders relation-only models (i.e., those without explicit routes) used in prior work overly simplistic, and on the other hand, dramatically enlarges the combinatorial search space, thereby imposing much higher demands on algorithm design. In this study, we formulate urban cable routing as a connectivity-path co-optimization problem and propose a learning-assisted multi-operator variable neighborhood search (L-MVNS) algorithm. The framework first introduces an auxiliary task to generate high-quality feasible initial solutions. A hybrid genetic search (HGS) and A* serve as the connectivity optimizer and the route-planning optimizer, respectively. Building on these, a multi-operator variable neighborhood search (MVNS) iteratively co-optimizes inter-substation connectivity and detailed routes via three complementary destruction operators, a modified A* repair operator, and an adaptive neighborhood-sizing mechanism. A multi-agent deep reinforcement learning module is further embedded to prioritize promising neighborhoods. 
We also construct a standardized and scalable benchmark suite for evaluation. Across these cases, comprehensive experiments demonstrate effectiveness and stability: relative to representative approaches, MVNS and L-MVNS reduce total construction cost by approximately 30-50%, with L-MVNS delivering additional gains on larger instances and consistently higher stability. + oai:arXiv.org:2512.19321v1 + cs.NE + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yanbo Pang, Qingkai Li, Mingguo Zhao + Wei Liu, Tao Zhang, Chenhui Lin, Kaiwen Li, Rui Wang - Data-driven uncertainty-aware seakeeping prediction of the Delft 372 catamaran using ensemble Hankel dynamic mode decomposition - https://arxiv.org/abs/2511.04461 - arXiv:2511.04461v2 Announce Type: replace -Abstract: In this study, we present and validate an ensemble-based Hankel Dynamic Mode Decomposition with control (HDMDc) for uncertainty-aware seakeeping predictions of a high-speed catamaran, namely the Delft 372 model. Experimental measurements (time histories) of wave elevation at the longitudinal center of gravity, heave, pitch, notional flight-deck velocity, notional bridge acceleration, and total resistance were collected from irregular wave basin tests on a 1:33.3 scale replica of the Delft 372 model under sea state 5 conditions at Fr = 0.425, and organized into training, validation, and test sets. The HDMDc algorithm constructs an equation-free linear reduced-order model of the seakeeping vessel by augmenting states and inputs with their time-lagged copies to capture nonlinear and memory effects. 
Two ensembling strategies, namely Bayesian HDMDc (BHDMDc), which samples hyperparameters treated as stochastic variables with prior distributions to produce posterior mean forecasts with confidence intervals, and Frequentist HDMDc (FHDMDc), which aggregates multiple models obtained over data subsets, are compared in providing seakeeping prediction and uncertainty quantification. The FHDMDc approach is found to improve the accuracy of the predictions compared to the deterministic counterpart, also providing robust uncertainty estimation; whereas the application of BHDMDc to the present test case is not found beneficial in comparison to the deterministic model. FHDMDc-derived probability density functions for the motions closely match both experimental data and URANS results, demonstrating reliable and computationally efficient seakeeping prediction for design and operational support. - oai:arXiv.org:2511.04461v2 - eess.SY - cs.CE - cs.LG - cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Giorgio Palma, Andrea Serani, Matteo Diez - Alternative positional encoding functions for neural transformers + https://arxiv.org/abs/2512.19323 + arXiv:2512.19323v1 Announce Type: new +Abstract: A key module in neural transformer-based deep architectures is positional encoding. This module enables a suitable way to encode positional information as input for transformer neural layers. This success has been rooted in the use of sinusoidal functions of various frequencies, in order to capture recurrent patterns of differing typical periods. In this work, an alternative set of periodic functions is proposed for positional encoding. These functions preserve some key properties of sinusoidal ones, while they depart from them in fundamental ways. Some tentative experiments are reported, where the original sinusoidal version is substantially outperformed. This strongly suggests that the alternative functions may have a wider use in other transformer architectures. 
+ oai:arXiv.org:2512.19323v1 cs.LG - cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Giorgio Palma, Andrea Serani, Matteo Diez + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Ezequiel Lopez-Rubio, Macoris Decena-Gimenez, Rafael Marcos Luque-Baena - Towards Causal Market Simulators - https://arxiv.org/abs/2511.04469 - arXiv:2511.04469v3 Announce Type: replace -Abstract: Market generators using deep generative models have shown promise for synthetic financial data generation, but existing approaches lack causal reasoning capabilities essential for counterfactual analysis and risk assessment. We propose a Time-series Neural Causal Model VAE (TNCM-VAE) that combines variational autoencoders with structural causal models to generate counterfactual financial time series while preserving both temporal dependencies and causal relationships. Our approach enforces causal constraints through directed acyclic graphs in the decoder architecture and employs the causal Wasserstein distance for training. We validate our method on synthetic autoregressive models inspired by the Ornstein-Uhlenbeck process, demonstrating superior performance in counterfactual probability estimation with L1 distances as low as 0.03-0.10 compared to ground truth. The model enables financial stress testing, scenario analysis, and enhanced backtesting by generating plausible counterfactual market trajectories that respect underlying causal mechanisms. 
- oai:arXiv.org:2511.04469v3 - cs.LG - q-fin.CP - stat.OT - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Dennis Thumm, Luis Ontaneda Mijares - Simulations between Strongly Sublinear MPC and Node-Capacitated Clique + https://arxiv.org/abs/2512.19326 + arXiv:2512.19326v1 Announce Type: new +Abstract: We study how the strongly sublinear MPC model relates to the classic, graph-centric distributed models, focusing on the Node-Capacitated Clique (NCC), a bandwidth-parametrized generalization of the Congested Clique. In MPC, $M$ machines with per-machine memory $S$ hold a partition of the input graph; in NCC, each node knows its full neighborhood but can send/receive only a bounded number of $C$ words per round. We are particularly interested in the strongly sublinear regime where $S=C=n^\delta$ for some constant $0 < \delta <1$.
 + Our goal is to determine when round-preserving simulations between these models are possible and when they are not, when total memory and total bandwidth $SM=nC$ in both models are matched, for different problem families and graph classes. On the positive side, we provide techniques that allow us to replicate the specific behavior regarding input representation, number of machines and local memory from one model to the other to obtain simulations with only constant overhead. On the negative side, we prove simulation impossibility results, which show that the limitations of our simulations are necessary. + oai:arXiv.org:2512.19326v1 + cs.DC + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Philipp Schneider, Julian Werthmann - LLMs Do Not See Age: Assessing Demographic Bias in Automated Systematic Review Synthesis - https://arxiv.org/abs/2511.06000 - arXiv:2511.06000v2 Announce Type: replace -Abstract: Clinical interventions often hinge on age: medications and procedures safe for adults may be harmful to children or ineffective for older adults. 
However, as language models are increasingly integrated into biomedical evidence synthesis workflows, it remains uncertain whether these systems preserve such crucial demographic distinctions. To address this gap, we evaluate how well state-of-the-art language models retain age-related information when generating abstractive summaries of biomedical studies. We construct DemogSummary, a novel age-stratified dataset of systematic review primary studies, covering child, adult, and older adult populations. We evaluate three prominent summarisation-capable LLMs, Qwen (open-source), Longformer (open-source) and GPT-4.1 Nano (proprietary), using both standard metrics and a newly proposed Demographic Salience Score (DSS), which quantifies age-related entity retention and hallucination. Our results reveal systematic disparities across models and age groups: demographic fidelity is lowest for adult-focused summaries, and under-represented populations are more prone to hallucinations. These findings highlight the limitations of current LLMs in faithful and bias-free summarisation and point to the need for fairness-aware evaluation frameworks and summarisation pipelines in biomedical NLP. - oai:arXiv.org:2511.06000v2 - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Extended OpenTT Games Dataset: A table tennis dataset for fine-grained shot type and point outcome + https://arxiv.org/abs/2512.19327 + arXiv:2512.19327v1 Announce Type: new +Abstract: Automatically detecting and classifying strokes in table tennis video can streamline training workflows, enrich broadcast overlays, and enable fine-grained performance analytics. For this to be possible, annotated video data of table tennis is needed. We extend the public OpenTTGames dataset with highly detailed, frame-accurate shot type annotations (forehand, backhand with subtypes), player posture labels (body lean and leg stance), and rally outcome tags at point end. 
OpenTTGames is a set of recordings from the side of the table, with official labels for events where the ball bounces, is above the net, or hits the net. The dataset already contains ball coordinates near events ("bounce", "net", or "empty_event" in the original OpenTTGames dataset) and semantic masks (humans, table, scoreboard). Our extension adds the types of stroke to the events and a per-player taxonomy so models can move beyond event spotting toward tactical understanding (e.g., whether a stroke is likely to win the point or set up an advantage). We provide a compact coding scheme and code-assisted labeling procedure to support reproducible annotations and baselines for fine-grained stroke understanding in racket sports. This fills a practical gap in the community, where many prior video resources are either not publicly released or carry restrictive/unclear licenses that hinder reuse and benchmarking. Our annotations are released under the same CC BY-NC-SA 4.0 license as OpenTTGames, allowing free non-commercial use, modification, and redistribution, with appropriate attribution. + oai:arXiv.org:2512.19327v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-sa/4.0/ + Moamal Fadhil Abdul (Department of Applied Mathematics and Computer Science, Technical University of Denmark, Richard Petersens Plads, Building 324, 2800 Kgs. Lyngby, Denmark), Jonas Bruun Hubrechts (Department of Applied Mathematics and Computer Science, Technical University of Denmark, Richard Petersens Plads, Building 324, 2800 Kgs. Lyngby, Denmark), Thomas Martini J{\o}rgensen (Department of Applied Mathematics and Computer Science, Technical University of Denmark, Richard Petersens Plads, Building 324, 2800 Kgs. Lyngby, Denmark), Emil Hovad (Department of Applied Mathematics and Computer Science, Technical University of Denmark, Richard Petersens Plads, Building 324, 2800 Kgs. 
Lyngby, Denmark) + + + Stochastic assignment games for Mobility-as-a-Service markets + https://arxiv.org/abs/2512.19328 + arXiv:2512.19328v1 Announce Type: new +Abstract: We study the stochastic assignment game and extend it to model multimodal mobility markets with a regulator or a Mobility-as-a-Service (MaaS) platform. We start by presenting general forms of one-to-one and many-to-many stochastic assignment games. Optimality conditions are discussed. The core of stochastic assignment games is defined, with expected payoffs of sellers and buyers in stochastic assignment games as payoffs from a hypothetical "ideal matching" that represent sellers' and buyers' expectations under imperfect information. To apply stochastic assignment games to the urban mobility markets, we extend the general stochastic many-to-many assignment game into a stochastic Stackelberg game to model MaaS systems, where the platform is the leader, and users and operators are the followers. The platform sets fares to maximize revenue. Users and operator react to the fare settings to form a stochastic many-to-many assignment game considering both fixed-route services and Mobility-on-Demand (MOD). The Stackelberg game is formulated as a bilevel problem. The lower level is the stochastic many-to-many assignment game between users and operators, shown to yield a coalitional logit model. The upper-level problem is a fare adjustment problem maximizing revenue. An iterative balancing algorithm is proposed to solve the lower-level problem exactly. The bilevel problem is solved through an iterative fare adjusting heuristic, whose solution is shown to be equivalent to the bilevel problem with an additional condition when it converges. Two case studies are conducted. The model can be applied to design MaaS fares maximizing income of the platform while anticipating the selfish behavior and heterogeneity of users and operators. 
Public agencies can also use the model to manage multimodal transportation systems. + oai:arXiv.org:2512.19328v1 + cs.GT + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Favour Yahdii Aghaebe, Tanefa Apekey, Elizabeth Williams, Nafise Sadat Moosavi + Bingqing Liu, David Watling, Joseph Y. J. Chow + + + DeltaMIL: Gated Memory Integration for Efficient and Discriminative Whole Slide Image Analysis + https://arxiv.org/abs/2512.19331 + arXiv:2512.19331v1 Announce Type: new +Abstract: Whole Slide Images (WSIs) are typically analyzed using multiple instance learning (MIL) methods. However, the scale and heterogeneity of WSIs generate highly redundant and dispersed information, making it difficult to identify and integrate discriminative signals. Existing MIL methods either fail to discard uninformative cues effectively or have limited ability to consolidate relevant features from multiple patches, which restricts their performance on large and heterogeneous WSIs. To address this issue, we propose DeltaMIL, a novel MIL framework that explicitly selects semantically relevant regions and integrates the discriminative information from WSIs. Our method leverages the gated delta rule to efficiently filter and integrate information through a block combining forgetting and memory mechanisms. The delta mechanism dynamically updates the memory by removing old values and inserting new ones according to their correlation with the current patch. The gating mechanism further enables rapid forgetting of irrelevant signals. Additionally, DeltaMIL integrates a complementary local pattern mixing mechanism to retain fine-grained pathological locality. Our design enhances the extraction of meaningful cues and suppresses redundant or noisy information, which improves the model's robustness and discriminative power. Experiments demonstrate that DeltaMIL achieves state-of-the-art performance. 
Specifically, for survival prediction, DeltaMIL improves performance by 3.69\% using ResNet-50 features and 2.36\% using UNI features. For slide-level classification, it increases accuracy by 3.09\% with ResNet-50 features and 3.75\% with UNI features. These results demonstrate the strong and consistent performance of DeltaMIL across diverse WSI tasks. + oai:arXiv.org:2512.19331v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Yueting Zhu, Yuehao Song, Shuai Zhang, Wenyu Liu, Xinggang Wang - Incremental Generation is Necessary and Sufficient for Universality in Flow-Based Modelling - https://arxiv.org/abs/2511.09902 - arXiv:2511.09902v2 Announce Type: replace -Abstract: Incremental flow-based denoising models have reshaped generative modelling, but their empirical advantage still lacks a rigorous approximation-theoretic foundation. We show that incremental generation is necessary and sufficient for universal flow-based generation on the largest natural class of self-maps of $[0,1]^d$ compatible with denoising pipelines, namely the orientation-preserving homeomorphisms of $[0,1]^d$. All our guarantees are uniform on the underlying maps and hence imply approximation both samplewise and in distribution. - Using a new topological-dynamical argument, we first prove an impossibility theorem: the class of all single-step autonomous flows, independently of the architecture, width, depth, or Lipschitz activation of the underlying neural network, is meagre and therefore not universal in the space of orientation-preserving homeomorphisms of $[0,1]^d$. By exploiting algebraic properties of autonomous flows, we conversely show that every orientation-preserving Lipschitz homeomorphism on $[0,1]^d$ can be approximated at rate $O(n^{-1/d})$ by a composition of at most $K_d$ such flows, where $K_d$ depends only on the dimension. 
Under additional smoothness assumptions, the approximation rate can be made dimension-free, and $K_d$ can be chosen uniformly over the class being approximated. Finally, by linearly lifting the domain into one higher dimension, we obtain structured universal approximation results for continuous functions and for probability measures on $[0,1]^d$, the latter realized as pushforwards of empirical measures with vanishing $1$-Wasserstein error. - oai:arXiv.org:2511.09902v2 + A Logical View of GNN-Style Computation and the Role of Activation Functions + https://arxiv.org/abs/2512.19332 + arXiv:2512.19332v1 Announce Type: new +Abstract: We study the numerical and Boolean expressiveness of MPLang, a declarative language that captures the computation of graph neural networks (GNNs) through linear message passing and activation functions. We begin with A-MPLang, the fragment without activation functions, and give a characterization of its expressive power in terms of walk-summed features. For bounded activation functions, we show that (under mild conditions) all eventually constant activations yield the same expressive power - numerical and Boolean - and that it subsumes previously established logics for GNNs with eventually constant activation functions but without linear layers. Finally, we prove the first expressive separation between unbounded and bounded activations in the presence of linear layers: MPLang with ReLU is strictly more powerful for numerical queries than MPLang with eventually constant activation functions, e.g., truncated ReLU. This hinges on subtle interactions between linear aggregation and eventually constant non-linearities, and it establishes that GNNs using ReLU are more expressive than those restricted to eventually constant activations and linear layers. 
+ oai:arXiv.org:2512.19332v1 cs.LG - cs.NA - math.CA - math.DS - math.NA - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 - replace + cs.LO + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hossein Rouhvarzi, Anastasis Kratsios + Pablo Barcel\'o, Floris Geerts, Matthias Lanzinger, Klara Pakhomenko, Jan Van den Bussche - Phantom Menace: Exploring and Enhancing the Robustness of VLA Models Against Physical Sensor Attacks - https://arxiv.org/abs/2511.10008 - arXiv:2511.10008v2 Announce Type: replace -Abstract: Vision-Language-Action (VLA) models revolutionize robotic systems by enabling end-to-end perception-to-action pipelines that integrate multiple sensory modalities, such as visual signals processed by cameras and auditory signals captured by microphones. This multi-modality integration allows VLA models to interpret complex, real-world environments using diverse sensor data streams. Given the fact that VLA-based systems heavily rely on the sensory input, the security of VLA models against physical-world sensor attacks remains critically underexplored. To address this gap, we present the first systematic study of physical sensor attacks against VLAs, quantifying the influence of sensor attacks and investigating the defenses for VLA models. We introduce a novel "Real-Sim-Real" framework that automatically simulates physics-based sensor attack vectors, including six attacks targeting cameras and two targeting microphones, and validates them on real robotic systems. Through large-scale evaluations across various VLA architectures and tasks under varying attack parameters, we demonstrate significant vulnerabilities, with susceptibility patterns that reveal critical dependencies on task types and model designs. We further develop an adversarial-training-based defense that enhances VLA robustness against out-of-distribution physical perturbations caused by sensor attacks while preserving model performance. 
Our findings expose an urgent need for standardized robustness benchmarks and mitigation strategies to secure VLA deployments in safety-critical environments. - oai:arXiv.org:2511.10008v2 - cs.RO - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Xuancun Lu, Jiaxiang Chen, Shilin Xiao, Zizhi Jin, Zhangrui Chen, Hanwen Yu, Bohan Qian, Ruochen Zhou, Xiaoyu Ji, Wenyuan Xu + Orthogonal Approximate Message Passing with Optimal Spectral Initializations for Rectangular Spiked Matrix Models + https://arxiv.org/abs/2512.19334 + arXiv:2512.19334v1 Announce Type: new +Abstract: We propose an orthogonal approximate message passing (OAMP) algorithm for signal estimation in the rectangular spiked matrix model with general rotationally invariant (RI) noise. We establish a rigorous state evolution that precisely characterizes the algorithm's high-dimensional dynamics and enables the construction of iteration-wise optimal denoisers. Within this framework, we accommodate spectral initializations under minimal assumptions on the empirical noise spectrum. In the rectangular setting, where a single rank-one component typically generates multiple informative outliers, we further propose a procedure for combining these outliers under mild non-Gaussian signal assumptions. For general RI noise models, the predicted performance of the proposed optimal OAMP algorithm agrees with replica-symmetric predictions for the associated Bayes-optimal estimator, and we conjecture that it is statistically optimal within a broad class of iterative estimation methods. 
+ oai:arXiv.org:2512.19334v1 + cs.IT + cs.LG + math.IT + math.ST + stat.TH + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Haohua Chen, Songbin Liu, Junjie Ma - Efficient Image-Goal Navigation with Representative Latent World Model - https://arxiv.org/abs/2511.11011 - arXiv:2511.11011v2 Announce Type: replace -Abstract: World models enable robots to conduct counterfactual reasoning in physical environments by predicting future world states. While conventional approaches often prioritize pixel-level reconstruction of future scenes, such detailed rendering is computationally intensive and unnecessary for planning tasks like navigation. We therefore propose that prediction and planning can be efficiently performed directly within a latent space of high-level semantic representations. To realize this, we introduce the Representative Latent space Navigation World Model (ReL-NWM). Rather than relying on reconstruction-oriented latent embeddings, our method leverages a pre-trained representation encoder, DINOv3, and incorporates specialized mechanisms to effectively integrate action signals and historical context within this representation space. By operating entirely in the latent domain, our model bypasses expensive explicit reconstruction and achieves highly efficient navigation planning. Experiments show state-of-the-art trajectory prediction and image-goal navigation performance on multiple benchmarks. Additionally, we demonstrate real-world applicability by deploying the system on a Unitree G1 humanoid robot, confirming its efficiency and robustness in practical navigation scenarios. 
- oai:arXiv.org:2511.11011v2 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 - replace + GANeXt: A Fully ConvNeXt-Enhanced Generative Adversarial Network for MRI- and CBCT-to-CT Synthesis + https://arxiv.org/abs/2512.19336 + arXiv:2512.19336v1 Announce Type: new +Abstract: The synthesis of computed tomography (CT) from magnetic resonance imaging (MRI) and cone-beam CT (CBCT) plays a critical role in clinical treatment planning by enabling accurate anatomical representation in adaptive radiotherapy. In this work, we propose GANeXt, a 3D patch-based, fully ConvNeXt-powered generative adversarial network for unified CT synthesis across different modalities and anatomical regions. Specifically, GANeXt employs an efficient U-shaped generator constructed from stacked 3D ConvNeXt blocks with compact convolution kernels, while the discriminator adopts a conditional PatchGAN. To improve synthesis quality, we incorporate a combination of loss functions, including mean absolute error (MAE), perceptual loss, segmentation-based masked MAE, and adversarial loss and a combination of Dice loss and cross-entropy for multi-head segmentation discriminator. For both tasks, training is performed with a batch size of 8 using two separate AdamW optimizers for the generator and discriminator, each equipped with a warmup and cosine decay scheduler, with learning rates of $5\times10^{-4}$ and $1\times10^{-3}$, respectively. Data preprocessing includes deformable registration, foreground cropping, percentile normalization for the input modality, and linear normalization of the CT to the range $[-1024, 1000]$. Data augmentation involves random zooming within $(0.8, 1.3)$ (for MRI-to-CT only), fixed-size cropping to $32\times160\times192$ for MRI-to-CT and $32\times128\times128$ for CBCT-to-CT, and random flipping. During inference, we apply a sliding-window approach with $0.8$ overlap and average folding to reconstruct the full-size sCT, followed by inversion of the CT normalization. 
After joint training on all regions without any fine-tuning, the final models are selected at the end of 3000 epochs for MRI-to-CT and 1000 epochs for CBCT-to-CT using the full training dataset. + oai:arXiv.org:2512.19336v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Siyuan Mei, Yan Xia, Fuxin Fan + + + Enhancing PLS of Indoor IRS-VLC Systems for Colluding and Non-Colluding Eavesdroppers + https://arxiv.org/abs/2512.19339 + arXiv:2512.19339v1 Announce Type: new +Abstract: Most intelligent reflecting surface (IRS)-aided indoor visible light communication (VLC) studies ignore the time delays introduced by reflected paths, even though these delays are inherent in practical wideband systems. In this work, we adopt a realistic assumption of IRS-induced time delay for physical layer security (PLS) enhancement. We consider an indoor VLC system where an IRS is used to shape the channel so that the reflected signals add constructively at the legitimate user and create intersymbol interference at eavesdroppers located inside the coverage area. The resulting secrecy capacity maximisation over the IRS element allocation is formulated as a complex combinatorial optimisation problem and is solved using deep reinforcement learning with proximal policy optimisation (PPO). The approach is evaluated for both colluding eavesdroppers, which combine their received signals, and non-colluding eavesdroppers, which act independently. Simulation results are shown for various simulation setups, which demonstrate significant secrecy capacity gains. In a worst-case scenario, where the eavesdroppers have stronger channels than the legitimate user, the proposed PPO-based IRS allocation improves secrecy capacity by 107\% and 235\% in the colluding and non-colluding cases, respectively, compared with allocating all IRS elements to the legitimate user. 
These results demonstrate that time-delay-based IRS control can provide a strong secrecy advantage in practical indoor VLC scenarios. + oai:arXiv.org:2512.19339v1 + cs.IT + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhiwei Zhang, Hui Zhang, Kaihong Huang, Chenghao Shi, Huimin Lu + Rashid Iqbal, Ahmed Zoha, Salama Ikki, Muhammad Ali Imran, Hanaa Abumarshoud - Beyond Exascale: Dataflow Domain Translation on a Cerebras Cluster - https://arxiv.org/abs/2511.11542 - arXiv:2511.11542v2 Announce Type: replace -Abstract: Simulation of physical systems is essential across scientific and engineering domains. Commonly used domain decomposition methods are unable to simultaneously deliver both high simulation rate and high utilization in network computing environments. In particular, Exascale systems deliver only a small fraction their peak performance for these workloads. This paper introduces the novel Domain Translation algorithm, designed to overcome these limitations. On a cluster of 64 Cerebras CS-3 systems, we use this method to demonstrate unprecedented cluster performance across a range of metrics: we show simulations running in excess of 1.6 million time steps per second; we also demonstrate perfect weak scaling at 88% of peak performance. At this cluster scale, our implementation provides 112 PFLOP/s in a power-unconstrained environment, and 57 GFLOP/J in a power-limited environment. We illustrate the method by applying the shallow-water equations to model a tsunami following an asteroid impact at 460m-resolution on a planetary scale. - oai:arXiv.org:2511.11542v2 + Faster Distributed Inference-Only Recommender Systems via Bounded Lag Synchronous Collectives + https://arxiv.org/abs/2512.19342 + arXiv:2512.19342v1 Announce Type: new +Abstract: Recommender systems are enablers of personalized content delivery, and therefore revenue, for many large companies. 
In the last decade, deep learning recommender models (DLRMs) are the de-facto standard in this field. The main bottleneck in DLRM inference is the lookup of sparse features across huge embedding tables, which are usually partitioned across the aggregate RAM of many nodes. In state-of-the-art recommender systems, the distributed lookup is implemented via irregular all-to-all (alltoallv) communication, and often presents the main bottleneck. Today, most related work sees this operation as a given; in addition, every collective is synchronous in nature. In this work, we propose a novel bounded lag synchronous (BLS) version of the alltoallv operation. The bound can be a parameter allowing slower processes to lag behind entire iterations before the fastest processes block. In special applications such as inference-only DLRM, the accuracy of the application is fully preserved. We implement BLS alltoallv in a new PyTorch Distributed backend and evaluate it with a BLS version of the reference DLRM code. We show that for well balanced, homogeneous-access DLRM runs our BLS technique does not offer notable advantages. But for unbalanced runs, e.g. runs with strongly irregular embedding table accesses or with delays across different processes, our BLS technique improves both the latency and throughput of inference-only DLRM. In the best-case scenario, the proposed reduced synchronisation can mask the delays across processes altogether. + oai:arXiv.org:2512.19342v1 cs.DC - physics.comp-ph - Mon, 22 Dec 2025 00:00:00 -0500 - replace + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by-nc-nd/4.0/ - 10.1145/3773656.3773683 - Tomas Oppelstrup, Nicholas Giamblanco, Delyan Z. 
Kalchev, Ilya Sharapov, Mark Taylor, Dirk Van Essendelft, Sivasankaran Rajamanickam, Michael James + Kiril Dichev, Filip Pawlowski, Albert-Jan Yzelman - Optimizing Mixture of Block Attention - https://arxiv.org/abs/2511.11571 - arXiv:2511.11571v2 Announce Type: replace -Abstract: Mixture of Block Attention (MoBA) (Lu et al., 2025) is a promising building block for efficiently processing long contexts in LLMs by enabling queries to sparsely attend to a small subset of key-value blocks, drastically reducing computational cost. However, the design principles governing MoBA's performance are poorly understood, and it lacks an efficient GPU implementation, hindering its practical adoption. In this paper, we first develop a statistical model to analyze MoBA's underlying mechanics. Our model reveals that performance critically depends on the router's ability to accurately distinguish relevant from irrelevant blocks based on query-key affinities. We derive a signal-to-noise ratio that formally connects architectural parameters to this retrieval accuracy. Guided by our analysis, we identify two key pathways for improvement: using smaller block sizes and applying a short convolution on keys to cluster relevant signals, which enhances routing accuracy. While theoretically better, small block sizes are inefficient on GPUs. To bridge this gap, we introduce FlashMoBA, a hardware-aware CUDA kernel that enables efficient MoBA execution even with the small block sizes our theory recommends. We validate our insights by training LLMs from scratch, showing that our improved MoBA models match the performance of dense attention baselines. FlashMoBA achieves up to 14.7x speedup over FlashAttention-2 for small blocks, making our theoretically-grounded improvements practical. Code is available at: https://github.com/mit-han-lab/flash-moba. 
- oai:arXiv.org:2511.11571v2 - cs.LG - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Guangxuan Xiao, Junxian Guo, Kasra Mazaheri, Song Han + OMP: One-step Meanflow Policy with Directional Alignment + https://arxiv.org/abs/2512.19347 + arXiv:2512.19347v1 Announce Type: new +Abstract: Robot manipulation, a key capability of embodied AI, has turned to data-driven generative policy frameworks, but mainstream approaches like Diffusion Models suffer from high inference latency and Flow-based Methods from increased architectural complexity. While simply applying meanFlow on robotic tasks achieves single-step inference and outperforms FlowPolicy, it lacks few-shot generalization due to fixed temperature hyperparameters in its Dispersive Loss and misaligned predicted-true mean velocities. To solve these issues, this study proposes an improved MeanFlow-based Policies: we introduce a lightweight Cosine Loss to align velocity directions and use the Differential Derivation Equation (DDE) to optimize the Jacobian-Vector Product (JVP) operator. Experiments on Adroit and Meta-World tasks show the proposed method outperforms MP1 and FlowPolicy in average success rate, especially in challenging Meta-World tasks, effectively enhancing few-shot generalization and trajectory accuracy of robot manipulation policies while maintaining real-time performance, offering a more robust solution for high-precision robotic manipulation. 
+ oai:arXiv.org:2512.19347v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Han Fang, Yize Huang, Yuheng Zhao, Paul Weng, Xiao Li, Yutong Ban - CLAReSNet: When Convolution Meets Latent Attention for Hyperspectral Image Classification - https://arxiv.org/abs/2511.12346 - arXiv:2511.12346v2 Announce Type: replace -Abstract: Hyperspectral image (HSI) classification faces critical challenges, including high spectral dimensionality, complex spectral-spatial correlations, and limited training samples with severe class imbalance. While CNNs excel at local feature extraction and transformers capture long-range dependencies, their isolated application yields suboptimal results due to quadratic complexity and insufficient inductive biases. We propose CLAReSNet (Convolutional Latent Attention Residual Spectral Network), a hybrid architecture that integrates multi-scale convolutional extraction with transformer-style attention via an adaptive latent bottleneck. The model employs a multi-scale convolutional stem with deep residual blocks and an enhanced Convolutional Block Attention Module for hierarchical spatial features, followed by spectral encoder layers combining bidirectional RNNs (LSTM/GRU) with Multi-Scale Spectral Latent Attention (MSLA). MSLA reduces complexity from $\mathcal{O}(T^2D)$ to $\mathcal{O}(T\log(T)D)$ by adaptive latent token allocation (8-64 tokens) that scales logarithmically with the sequence length. Hierarchical cross-attention fusion dynamically aggregates multi-level representations for robust classification. Experiments conducted on the Indian Pines and Salinas datasets show state-of-the-art performance, achieving overall accuracies of 99.71% and 99.96%, significantly surpassing HybridSN, SSRN, and SpectralFormer. The learned embeddings exhibit superior inter-class separability and compact intra-class clustering, validating CLAReSNet's effectiveness under severe class imbalance. 
- oai:arXiv.org:2511.12346v2 - cs.CV + VIGOR+: Iterative Confounder Generation and Validation via LLM-CEVAE Feedback Loop + https://arxiv.org/abs/2512.19349 + arXiv:2512.19349v1 Announce Type: new +Abstract: Hidden confounding remains a fundamental challenge in causal inference from observational data. Recent advances leverage Large Language Models (LLMs) to generate plausible hidden confounders based on domain knowledge, yet a critical gap exists: LLM-generated confounders often exhibit semantic plausibility without statistical utility. We propose VIGOR+ (Variational Information Gain for iterative cOnfounder Refinement), a novel framework that closes the loop between LLM-based confounder generation and CEVAE-based statistical validation. Unlike prior approaches that treat generation and validation as separate stages, VIGOR+ establishes an iterative feedback mechanism: validation signals from CEVAE (including information gain, latent consistency metrics, and diagnostic messages) are transformed into natural language feedback that guides subsequent LLM generation rounds. This iterative refinement continues until convergence criteria are met. We formalize the feedback mechanism, prove convergence properties under mild assumptions, and provide a complete algorithmic framework. + oai:arXiv.org:2512.19349v1 cs.AI cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Asmit Bandyopadhyay, Anindita Das Bhattacharjee, Rakesh Das + JiaWei Zhu, ZiHeng Liu - Adaptive Focus Memory for Language Models - https://arxiv.org/abs/2511.12712 - arXiv:2511.12712v2 Announce Type: replace -Abstract: Large language models (LLMs) are increasingly deployed in multi-turn dialogue settings, yet their behavior remains bottlenecked by naive history management strategies. 
Replaying the full conversation at every turn is simple but costly, while recency-based truncation or static summarization often causes early, high-impact user constraints to drift out of effective context. As a result, models may retain text without reliably applying it when it matters. - We present Adaptive Focus Memory (AFM), a lightweight context management system that dynamically assigns each past message one of three fidelity levels: Full, Compressed, or Placeholder, based on semantic relevance, temporal decay, and importance classification. AFM packs messages chronologically under a fixed token budget, preserving critical constraints at high fidelity while allowing low-importance context to degrade gracefully. - We evaluate AFM on two multi-turn dialogue benchmarks designed to stress long-horizon constraint preservation: a safety-critical travel scenario involving a user with a severe peanut allergy, and a policy-critical tax compliance scenario involving an illegal evasion request. Under strict grading that requires both explicit constraint recall and appropriately conditioned generation, AFM succeeds in 83.3 percent of allergy runs where all baseline strategies fail, and preserves correct refusal behavior on the tax benchmark. - These results demonstrate that effective dialogue memory requires more than retaining prior text. Selectively allocating fidelity across past messages enables reliable constraint preservation under bounded context growth, without modifying model weights or introducing external retrieval infrastructure. We release an open-source implementation of AFM compatible with OpenAI-style chat APIs to support reproducible research and practical deployment. 
- oai:arXiv.org:2511.12712v2 - cs.CL + PENDULUM: A Benchmark for Assessing Sycophancy in Multimodal Large Language Models + https://arxiv.org/abs/2512.19350 + arXiv:2512.19350v1 Announce Type: new +Abstract: Sycophancy, an excessive tendency of AI models to agree with user input at the expense of factual accuracy or in contradiction of visual evidence, poses a critical and underexplored challenge for multimodal large language models (MLLMs). While prior studies have examined this behavior in text-only settings of large language models, existing research on visual or multimodal counterparts remains limited in scope and depth of analysis. To address this gap, we introduce a comprehensive evaluation benchmark, \textit{PENDULUM}, comprising approximately 2,000 human-curated Visual Question Answering pairs specifically designed to elicit sycophantic responses. The benchmark spans six distinct image domains of varying complexity, enabling a systematic investigation of how image type and inherent challenges influence sycophantic tendencies. Through extensive evaluation of state-of-the-art MLLMs. we observe substantial variability in model robustness and a pronounced susceptibility to sycophantic and hallucinatory behavior. Furthermore, we propose novel metrics to quantify sycophancy in visual reasoning, offering deeper insights into its manifestations across different multimodal contexts. Our findings highlight the urgent need for developing sycophancy-resilient architectures and training strategies to enhance factual consistency and reliability in future MLLMs. Our proposed dataset with MLLMs response are available at https://github.com/ashikiut/pendulum/. + oai:arXiv.org:2512.19350v1 cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Christopher Cruz + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + A. B. M. 
Ashikur Rahman, Saeed Anwar, Muhammad Usman, Irfan Ahmad, Ajmal Mian - Assessing Automated Fact-Checking for Medical LLM Responses with Knowledge Graphs - https://arxiv.org/abs/2511.12817 - arXiv:2511.12817v2 Announce Type: replace -Abstract: The recent proliferation of large language models (LLMs) holds the potential to revolutionize healthcare, with strong capabilities in diverse medical tasks. Yet, deploying LLMs in high-stakes healthcare settings requires rigorous verification and validation to understand any potential harm. This paper investigates the reliability and viability of using medical knowledge graphs (KGs) for the automated factuality evaluation of LLM-generated responses. To ground this investigation, we introduce FAITH, a framework designed to systematically probe the strengths and limitations of this KG-based approach. FAITH operates without reference answers by decomposing responses into atomic claims, linking them to a medical KG, and scoring them based on evidence paths. Experiments on diverse medical tasks with human subjective evaluations demonstrate that KG-grounded evaluation achieves considerably higher correlations with clinician judgments and can effectively distinguish LLMs with varying capabilities. It is also robust to textual variances. The inherent explainability of its scoring can further help users understand and mitigate the limitations of current LLMs. We conclude that while limitations exist, leveraging KGs is a prominent direction for automated factuality assessment in healthcare. - oai:arXiv.org:2511.12817v2 - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + ReasonCD: A Multimodal Reasoning Large Model for Implicit Change-of-Interest Semantic Mining + https://arxiv.org/abs/2512.19354 + arXiv:2512.19354v1 Announce Type: new +Abstract: Remote sensing image change detection is one of the fundamental tasks in remote sensing intelligent interpretation. 
Its core objective is to identify changes within change regions of interest (CRoI). Current multimodal large models encode rich human semantic knowledge, which is utilized for guidance in tasks such as remote sensing change detection. However, existing methods that use semantic guidance for detecting users' CRoI overly rely on explicit textual descriptions of CRoI, leading to the problem of near-complete performance failure when presented with implicit CRoI textual descriptions. This paper proposes a multimodal reasoning change detection model named ReasonCD, capable of mining users' implicit task intent. The model leverages the powerful reasoning capabilities of pre-trained large language models to mine users' implicit task intents and subsequently obtains different change detection results based on these intents. Experiments on public datasets demonstrate that the model achieves excellent change detection performance, with an F1 score of 92.1\% on the BCDD dataset. Furthermore, to validate its superior reasoning functionality, this paper annotates a subset of reasoning data based on the SECOND dataset. Experimental results show that the model not only excels at basic reasoning-based change detection tasks but can also explain the reasoning process to aid human decision-making. 
+ oai:arXiv.org:2512.19354v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Shasha Zhou, Mingyu Huang, Jack Cole, Charles Britton, Ming Yin, Jan Wolber, Ke Li + Zhenyang Huang, Xiao Yu, Yi Zhang, Decheng Wang, Hang Ruan - CD-DPE: Dual-Prompt Expert Network Based on Convolutional Dictionary Feature Decoupling for Multi-Contrast MRI Super-Resolution - https://arxiv.org/abs/2511.14014 - arXiv:2511.14014v3 Announce Type: replace -Abstract: Multi-contrast magnetic resonance imaging (MRI) super-resolution intends to reconstruct high-resolution (HR) images from low-resolution (LR) scans by leveraging structural information present in HR reference images acquired with different contrasts. This technique enhances anatomical detail and soft tissue differentiation, which is vital for early diagnosis and clinical decision-making. However, inherent contrasts disparities between modalities pose fundamental challenges in effectively utilizing reference image textures to guide target image reconstruction, often resulting in suboptimal feature integration. To address this issue, we propose a dual-prompt expert network based on a convolutional dictionary feature decoupling (CD-DPE) strategy for multi-contrast MRI super-resolution. Specifically, we introduce an iterative convolutional dictionary feature decoupling module (CD-FDM) to separate features into cross-contrast and intra-contrast components, thereby reducing redundancy and interference. To fully integrate these features, a novel dual-prompt feature fusion expert module (DP-FFEM) is proposed. This module uses a frequency prompt to guide the selection of relevant reference features for incorporation into the target image, while an adaptive routing prompt determines the optimal method for fusing reference and target features to enhance reconstruction quality. 
Extensive experiments on public multi-contrast MRI datasets demonstrate that CD-DPE outperforms state-of-the-art methods in reconstructing fine details. Additionally, experiments on unseen datasets demonstrated that CD-DPE exhibits strong generalization capabilities. - oai:arXiv.org:2511.14014v3 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace + First-Order Representation Languages for Goal-Conditioned RL + https://arxiv.org/abs/2512.19355 + arXiv:2512.19355v1 Announce Type: new +Abstract: First-order relational languages have been used in MDP planning and reinforcement learning (RL) for two main purposes: specifying MDPs in compact form, and representing and learning policies that are general and not tied to specific instances or state spaces. In this work, we instead consider the use of first-order languages in goal-conditioned RL and generalized planning. The question is how to learn goal-conditioned and general policies when the training instances are large and the goal cannot be reached by random exploration alone. The technique of Hindsight Experience Replay (HER) provides an answer to this question: it relabels unsuccessful trajectories as successful ones by replacing the original goal with one that was actually achieved. If the target policy must generalize across states and goals, trajectories that do not reach the original goal states can enable more data- and time-efficient learning. In this work, we show that further performance gains can be achieved when states and goals are represented by sets of atoms. We consider three versions: goals as full states, goals as subsets of the original goals, and goals as lifted versions of these subgoals. The result is that the latter two successfully learn general policies on large planning instances with sparse rewards by automatically creating a curriculum of easier goals of increasing complexity. 
The experiments illustrate the computational gains of these versions, their limitations, and opportunities for addressing them. + oai:arXiv.org:2512.19355v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Xianming Gu, Lihui Wang, Ying Cao, Zeyu Deng, Yingfeng Ou, Guodong Hu, Yi Chen + Simon St{\aa}hlberg, Hector Geffner - Look-Ahead Reasoning on Learning Platforms - https://arxiv.org/abs/2511.14745 - arXiv:2511.14745v2 Announce Type: replace -Abstract: On many learning platforms, the optimization criteria guiding model training reflect the priorities of the designer rather than those of the individuals they affect. Consequently, users may act strategically to obtain more favorable outcomes. While past work has studied strategic user behavior on learning platforms, the focus has largely been on strategic responses to a deployed model, without considering the behavior of other users. In contrast, look-ahead reasoning takes into account that user actions are coupled, and -- at scale -- impact future predictions. Within this framework, we first formalize level-k thinking, a concept from behavioral economics, where users aim to outsmart their peers by looking one step ahead. We show that, while convergence to an equilibrium is accelerated, the equilibrium remains the same, providing no benefit of higher-level reasoning for individuals in the long run. Then, we focus on collective reasoning, where users take coordinated actions by optimizing through their joint impact on the model. By contrasting collective with selfish behavior, we characterize the benefits and limits of coordination; a new notion of alignment between the learner's and the users' utilities emerges as a key concept. Look-ahead reasoning can be seen as a generalization of algorithmic collective action; we thus offer the first results characterizing the utility trade-offs of coordination when contesting algorithmic systems. 
- oai:arXiv.org:2511.14745v2 - cs.LG - cs.GT - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Haiqing Zhu, Tijana Zrnic, Celestine Mendler-D\"unner + Newton's method in adaptive iteratively linearized FEM + https://arxiv.org/abs/2512.19357 + arXiv:2512.19357v1 Announce Type: new +Abstract: This paper concerns the inclusion of Newton's method into an adaptive finite element method (FEM) for the solution of nonlinear partial differential equations (PDEs). It features an adaptive choice of the damping parameter in the Newton iteration for the discretized nonlinear problems on each level ensuring both global linear and local quadratic convergence. In contrast to energy-based arguments in the literature, a novel approach in the analysis considers the discrete dual norm of the residual as a computable measure for the linearization error. As a consequence, this paper provides the first convergence analysis with optimal rates of an adaptive iteratively linearized FEM beyond energy-minimization problems. The presented theory applies to strongly monotone operators with locally Lipschitz continuous Fr\'echet derivative. We present a class of semilinear PDEs fitting into this framework and provide numerical experiments to underline the theoretical results. 
+ oai:arXiv.org:2512.19357v1 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Philipp Bringmann, Maximilian Brunner, Dirk Praetorius - Realist and Pluralist Conceptions of Intelligence and Their Implications on AI Research - https://arxiv.org/abs/2511.15282 - arXiv:2511.15282v2 Announce Type: replace -Abstract: In this paper, we argue that current AI research operates on a spectrum between two different underlying conceptions of intelligence: Intelligence Realism, which holds that intelligence represents a single, universal capacity measurable across all systems, and Intelligence Pluralism, which views intelligence as diverse, context-dependent capacities that cannot be reduced to a single universal measure. Through an analysis of current debates in AI research, we demonstrate how the conceptions remain largely implicit yet fundamentally shape how empirical evidence gets interpreted across a wide range of areas. These underlying views generate fundamentally different research approaches across three areas. Methodologically, they produce different approaches to model selection, benchmark design, and experimental validation. Interpretively, they lead to contradictory readings of the same empirical phenomena, from capability emergence to system limitations. Regarding AI risk, they generate categorically different assessments: realists view superintelligence as the primary risk and search for unified alignment solutions, while pluralists see diverse threats across different domains requiring context-specific solutions. We argue that making explicit these underlying assumptions can contribute to a clearer understanding of disagreements in AI research. 
- oai:arXiv.org:2511.15282v2 - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Ninell Oldenburg, Ruchira Dhar, Anders S{\o}gaard + Generative vector search to improve pathology foundation models across multimodal vision-language tasks + https://arxiv.org/abs/2512.19360 + arXiv:2512.19360v1 Announce Type: new +Abstract: Retrieval-augmented generation improves large language models by grounding outputs in external knowledge sources, reducing hallucinations and addressing knowledge cutoffs. However, standard embedding-based retrieval fails to capture the complexity of multi-concept queries, particularly in domains like biomedicine, where biological data are inherently high-dimensional. For example,omics datasets, and clinical reports simultaneously exhibit numerous molecular, cellular, and physiological features. We present Stochastic Latent Matching (STHLM), a generative vector search method that samples query-conditioned embeddings from text or image inputs to enhance retrieval performance. Analogous to how Chain-of-Thought reasoning enables language models to "think longer" on complex problems, STHLM allows retrieval systems to "search wider" through iterative sampling. STHLM demonstrates critical improvements over classical vector retrieval across diverse benchmarks, including scientific literature, clinical notes, and tissue images, boosting retrieval performance by 10-30% through test-time compute (trading latency for accuracy), while enabling up to a 10-fold compression of embedding dimensions. 
+ oai:arXiv.org:2512.19360v1 + cs.IR + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Markus Ekvall, Ludvig Bergenstr{\aa}hle, Patrick Truong, Ben Murrell, Joakim Lundeberg - A Causal Perspective on Measuring, Explaining and Mitigating Smells in LLM-Generated Code - https://arxiv.org/abs/2511.15817 - arXiv:2511.15817v4 Announce Type: replace -Abstract: Recent advances in large language models (LLMs) have accelerated their adoption in software engineering contexts. However, concerns persist about the structural quality of the code they produce. In particular, LLMs often replicate poor coding practices, introducing code smells (i.e., patterns that hinder readability, maintainability, or design integrity). Although prior research has examined the detection or repair of smells, we still lack a clear understanding of how and when these issues emerge in generated code. - This paper addresses this gap by systematically measuring, explaining and mitigating smell propensity in LLM-generated code. We build on the Propensity Smelly Score (PSC), a probabilistic metric that estimates the likelihood of generating particular smell types, and establish its robustness as a signal of structural quality. Using PSC as an instrument for causal analysis, we identify how generation strategy, model size, model architecture and prompt formulation shape the structural properties of generated code. Our findings show that prompt design and architectural choices play a decisive role in smell propensity and motivate practical mitigation strategies that reduce its occurrence. A user study further demonstrates that PSC helps developers interpret model behavior and assess code quality, providing evidence that smell propensity signals can support human judgement. Taken together, our work lays the groundwork for integrating quality-aware assessments into the evaluation and deployment of LLMs for code. 
- oai:arXiv.org:2511.15817v4 - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - 10.1145/3744916.3773164 - Alejandro Velasco, Daniel Rodriguez-Cardenas, Dipin Khati, David N. Palacio, Luftar Rahman Alif, Denys Poshyvanyk + Interpretable Hybrid Deep Q-Learning Framework for IoT-Based Food Spoilage Prediction with Synthetic Data Generation and Hardware Validation + https://arxiv.org/abs/2512.19361 + arXiv:2512.19361v1 Announce Type: new +Abstract: The need for an intelligent, real-time spoilage prediction system has become critical in modern IoT-driven food supply chains, where perishable goods are highly susceptible to environmental conditions. Existing methods often lack adaptability to dynamic conditions and fail to optimize decision making in real time. To address these challenges, we propose a hybrid reinforcement learning framework integrating Long Short-Term Memory (LSTM) and Recurrent Neural Networks (RNN) for enhanced spoilage prediction. This hybrid architecture captures temporal dependencies within sensor data, enabling robust and adaptive decision making. In alignment with interpretable artificial intelligence principles, a rule-based classifier environment is employed to provide transparent ground truth labeling of spoilage levels based on domain-specific thresholds. This structured design allows the agent to operate within clearly defined semantic boundaries, supporting traceable and interpretable decisions. Model behavior is monitored using interpretability-driven metrics, including spoilage accuracy, reward-to-step ratio, loss reduction rate, and exploration decay. These metrics provide both quantitative performance evaluation and insights into learning dynamics. A class-wise spoilage distribution visualization is used to analyze the agents decision profile and policy behavior. 
Extensive evaluations on simulated and real-time hardware data demonstrate that the LSTM and RNN based agent outperforms alternative reinforcement learning approaches in prediction accuracy and decision efficiency while maintaining interpretability. The results highlight the potential of hybrid deep reinforcement learning with integrated interpretability for scalable IoT-based food monitoring systems. + oai:arXiv.org:2512.19361v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Isshaan Singh, Divyansh Chawla, Anshu Garg, Shivin Mangal, Pallavi Gupta, Khushi Agarwal, Nimrat Singh Khalsa, Nandan Patel - Deep Gaussian Process Proximal Policy Optimization - https://arxiv.org/abs/2511.18214 - arXiv:2511.18214v2 Announce Type: replace -Abstract: Uncertainty estimation for Reinforcement Learning (RL) is a critical component in control tasks where agents must balance safe exploration and efficient learning. While deep neural networks have enabled breakthroughs in RL, they often lack calibrated uncertainty estimates. We introduce Deep Gaussian Process Proximal Policy Optimization (GPPO), a scalable, model-free actor-critic algorithm that leverages Deep Gaussian Processes (DGPs) to approximate both the policy and value function. GPPO maintains competitive performance with respect to Proximal Policy Optimization on standard high-dimensional continuous control benchmarks while providing well-calibrated uncertainty estimates that can inform safer and more effective exploration. - oai:arXiv.org:2511.18214v2 + From Points to Coalitions: Hierarchical Contrastive Shapley Values for Prioritizing Data Samples + https://arxiv.org/abs/2512.19363 + arXiv:2512.19363v1 Announce Type: new +Abstract: How should we quantify the value of each training example when datasets are large, heterogeneous, and geometrically structured? Classical Data-Shapley answers in principle, but its O(n!) 
complexity and point-wise perspective are ill-suited to modern scales. We propose Hierarchical Contrastive Data Valuation (HCDV), a three-stage framework that (i) learns a contrastive, geometry-preserving representation, (ii) organizes the data into a balanced coarse-to-fine hierarchy of clusters, and (iii) assigns Shapley-style payoffs to coalitions via local Monte-Carlo games whose budgets are propagated downward. HCDV collapses the factorial burden to O(T sum_{l} K_{l}) = O(T K_max log n), rewards examples that sharpen decision boundaries, and regularizes outliers through curvature-based smoothness. We prove that HCDV approximately satisfies the four Shapley axioms with surplus loss O(eta log n), enjoys sub-Gaussian coalition deviation tilde O(1/sqrt{T}), and incurs at most k epsilon_infty regret for top-k selection. Experiments on four benchmarks--tabular, vision, streaming, and a 45M-sample CTR task--plus the OpenDataVal suite show that HCDV lifts accuracy by up to +5 pp, slashes valuation time by up to 100x, and directly supports tasks such as augmentation filtering, low-latency streaming updates, and fair marketplace payouts. + oai:arXiv.org:2512.19363v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Matthijs van der Lende, Juan Cardenas-Cartagena + Canran Xiao, Jiabao Dou, Zhiming Lin, Zong Ke, Liwei Hou - FlowerDance: MeanFlow for Efficient and Refined 3D Dance Generation - https://arxiv.org/abs/2511.21029 - arXiv:2511.21029v2 Announce Type: replace -Abstract: Music-to-dance generation aims to translate auditory signals into expressive human motion, with broad applications in virtual reality, choreography, and digital entertainment. 
Despite promising progress, the limited generation efficiency of existing methods leaves insufficient computational headroom for high-fidelity 3D rendering, thereby constraining the expressiveness of 3D characters during real-world applications. Thus, we propose FlowerDance, which not only generates refined motion with physical plausibility and artistic expressiveness, but also achieves significant generation efficiency on inference speed and memory utilization. Specifically, FlowerDance combines MeanFlow with Physical Consistency Constraints, which enables high-quality motion generation with only a few sampling steps. Moreover, FlowerDance leverages a simple but efficient model architecture with BiMamba-based backbone and Channel-Level Cross-Modal Fusion, which generates dance with efficient non-autoregressive manner. Meanwhile, FlowerDance supports motion editing, enabling users to interactively refine dance sequences. Extensive experiments on AIST++ and FineDance show that FlowerDance achieves state-of-the-art results in both motion quality and generation efficiency. Code will be released upon acceptance. Project page: https://flowerdance25.github.io/ . - oai:arXiv.org:2511.21029v2 + Efficient Spike-driven Transformer for High-performance Drone-View Geo-Localization + https://arxiv.org/abs/2512.19365 + arXiv:2512.19365v1 Announce Type: new +Abstract: Traditional drone-view geo-localization (DVGL) methods based on artificial neural networks (ANNs) have achieved remarkable performance. However, ANNs rely on dense computation, which results in high power consumption. In contrast, spiking neural networks (SNNs), which benefit from spike-driven computation, inherently provide low power consumption. Regrettably, the potential of SNNs for DVGL has yet to be thoroughly investigated. 
Meanwhile, the inherent sparsity of spike-driven computation for representation learning scenarios also results in loss of critical information and difficulties in learning long-range dependencies when aligning heterogeneous visual data sources. To address these, we propose SpikeViMFormer, the first SNN framework designed for DVGL. In this framework, a lightweight spike-driven transformer backbone is adopted to extract coarse-grained features. To mitigate the loss of critical information, the spike-driven selective attention (SSA) block is designed, which uses a spike-driven gating mechanism to achieve selective feature enhancement and highlight discriminative regions. Furthermore, a spike-driven hybrid state space (SHS) block is introduced to learn long-range dependencies using a hybrid state space. Moreover, only the backbone is utilized during the inference stage to reduce computational cost. To ensure backbone effectiveness, a novel hierarchical re-ranking alignment learning (HRAL) strategy is proposed. It refines features via neighborhood re-ranking and maintains cross-batch consistency to directly optimize the backbone. Experimental results demonstrate that SpikeViMFormer outperforms state-of-the-art SNNs. Compared with advanced ANNs, it also achieves competitive performance.Our code is available at https://github.com/ISChenawei/SpikeViMFormer + oai:arXiv.org:2512.19365v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhongwei Chen, Hai-Jun Rong, Zhao-Xu Yang, Guoqi Li + + + Learning General Policies with Policy Gradient Methods + https://arxiv.org/abs/2512.19366 + arXiv:2512.19366v1 Announce Type: new +Abstract: While reinforcement learning methods have delivered remarkable results in a number of settings, generalization, i.e., the ability to produce policies that generalize in a reliable and systematic way, has remained a challenge. 
The problem of generalization has been addressed formally in classical planning where provable correct policies that generalize over all instances of a given domain have been learned using combinatorial methods. The aim of this work is to bring these two research threads together to illuminate the conditions under which (deep) reinforcement learning approaches, and in particular, policy optimization methods, can be used to learn policies that generalize like combinatorial methods do. We draw on lessons learned from previous combinatorial and deep learning approaches, and extend them in a convenient way. From the former, we model policies as state transition classifiers, as (ground) actions are not general and change from instance to instance. From the latter, we use graph neural networks (GNNs) adapted to deal with relational structures for representing value functions over planning states, and in our case, policies. With these ingredients in place, we find that actor-critic methods can be used to learn policies that generalize almost as well as those obtained using combinatorial approaches while avoiding the scalability bottleneck and the use of feature pools. Moreover, the limitations of the DRL methods on the benchmarks considered have little to do with deep learning or reinforcement learning algorithms, and result from the well-understood expressive limitations of GNNs, and the tradeoff between optimality and generalization (general policies cannot be optimal in some domains). Both of these limitations are addressed without changing the basic DRL methods by adding derived predicates and an alternative cost structure to optimize. 
+ oai:arXiv.org:2512.19366v1 + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Kaixing Yang, Xulong Tang, Ziqiao Peng, Xiangyue Zhang, Puwei Wang, Jun He, Hongyan Liu + Simon St{\aa}hlberg, Blai Bonet, Hector Geffner - New Hybrid Heuristics for Pseudo-Boolean Propagation - https://arxiv.org/abs/2511.21417 - arXiv:2511.21417v2 Announce Type: replace -Abstract: In pseudo-boolean solving the currently most successful unit propagation strategy is a hybrid mode combining the watched literal scheme with the counting method. This short paper introduces new heuristics for this hybrid decision, which are able to drastically outperform the current method in the RoundingSAT solver. - oai:arXiv.org:2511.21417v2 + Sprecher Networks: A Parameter-Efficient Kolmogorov-Arnold Architecture + https://arxiv.org/abs/2512.19367 + arXiv:2512.19367v1 Announce Type: new +Abstract: We present Sprecher Networks (SNs), a family of trainable neural architectures inspired by the classical Kolmogorov-Arnold-Sprecher (KAS) construction for approximating multivariate continuous functions. Distinct from Multi-Layer Perceptrons (MLPs) with fixed node activations and Kolmogorov-Arnold Networks (KANs) featuring learnable edge activations, SNs utilize shared, learnable splines (monotonic and general) within structured blocks incorporating explicit shift parameters and mixing weights. Our approach directly realizes Sprecher's specific 1965 sum of shifted splines formula in its single-layer variant and extends it to deeper, multi-layer compositions. We further enhance the architecture with optional lateral mixing connections that enable intra-block communication between output dimensions, providing a parameter-efficient alternative to full attention mechanisms. 
Beyond parameter efficiency with $O(LN + LG)$ scaling (where $G$ is the knot count of the shared splines) versus MLPs' $O(LN^2)$, SNs admit a sequential evaluation strategy that reduces peak forward-intermediate memory from $O(N^2)$ to $O(N)$ (treating batch size as constant), making much wider architectures feasible under memory constraints. We demonstrate empirically that composing these blocks into deep networks leads to highly parameter and memory-efficient models, discuss theoretical motivations, and compare SNs with related architectures (MLPs, KANs, and networks with learnable node activations). + oai:arXiv.org:2512.19367v1 + cs.LG cs.AI - math.OC - Mon, 22 Dec 2025 00:00:00 -0500 - replace + cs.NA + math.NA + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Mia M\"u{\ss}ig, Jan Johannsen + Christian H\"agg, Kathl\'en Kohn, Giovanni Luca Marchetti, Boris Shapiro - AutoRec: Accelerating Loss Recovery for Live Streaming in a Multi-Supplier Market - https://arxiv.org/abs/2511.22046 - arXiv:2511.22046v2 Announce Type: replace -Abstract: Due to the limited permissions for upgrading dualside (i.e., server-side and client-side) loss tolerance schemes from the perspective of CDN vendors in a multi-supplier market, modern large-scale live streaming services are still using the automatic-repeat-request (ARQ) based paradigm for loss recovery, which only requires server-side modifications. In this paper, we first conduct a large-scale measurement study with up to 50 million live streams. We find that loss shows dynamics and live streaming contains frequent on-off mode switching in the wild. We further find that the recovery latency, enlarged by the ubiquitous retransmission loss, is a critical factor affecting live streaming's client-side QoE (e.g., video freezing). 
We then propose an enhanced recovery mechanism called AutoRec, which can transform the disadvantages of on-off mode switching into an advantage for reducing loss recovery latency without any modifications on the client side. AutoRec allows users to customize overhead tolerance and recovery latency tolerance and adaptively adjusts strategies as the network environment changes to ensure that recovery latency meets user demands whenever possible while keeping overhead under control. We implement AutoRec upon QUIC and evaluate it via testbed and real-world commercial services deployments. The experimental results demonstrate the practicability and profitability of AutoRec. - oai:arXiv.org:2511.22046v2 - cs.NI - cs.MM - eess.IV - Mon, 22 Dec 2025 00:00:00 -0500 - replace + DeepGESI: A Non-Intrusive Objective Evaluation Model for Predicting Speech Intelligibility in Hearing-Impaired Listeners + https://arxiv.org/abs/2512.19374 + arXiv:2512.19374v1 Announce Type: new +Abstract: Speech intelligibility assessment is essential for many speech-related applications. However, most objective intelligibility metrics are intrusive, as they require clean reference speech in addition to the degraded or processed signal for evaluation. Furthermore, existing metrics such as STOI are primarily designed for normal hearing listeners, and their predictive accuracy for hearing impaired speech intelligibility remains limited. On the other hand, the GESI (Gammachirp Envelope Similarity Index) can be used to estimate intelligibility for hearing-impaired listeners, but it is also intrusive, as it depends on reference signals. This requirement limits its applicability in real-world scenarios. + To overcome this limitation, this study proposes DeepGESI, a non-intrusive deep learning-based model capable of accurately and efficiently predicting the speech intelligibility of hearing-impaired listeners without requiring any clean reference speech. 
Experimental results demonstrate that, under the test conditions of the 2nd Clarity Prediction Challenge(CPC2) dataset, the GESI scores predicted by DeepGESI exhibit a strong correlation with the actual GESI scores. In addition, the proposed model achieves a substantially faster prediction speed compared to conventional methods. + oai:arXiv.org:2512.19374v1 + cs.SD + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tong Li, Xu Yan, Bo Wu, Cheng Luo, Fuyu Wang, Jiuxiang Zhu, Haoyi Fang, Xinle Du, Ke Xu + Wenyu Luo, Jinhui Chen - Maximum Spectral Efficiency With Adaptive MQAM Transmissions Over Terrestrial Coherent FSO Links - https://arxiv.org/abs/2511.22682 - arXiv:2511.22682v2 Announce Type: replace -Abstract: Coherent free-space optical (FSO) communication is recognized as a key enabler for ultra-high-capacity fronthaul and backhaul links in next-generation wireless networks. Spectrally efficient $M$-ary quadrature amplitude modulation (MQAM) formats are well-suited for these links. However, theoretical analyses of adaptive MQAM transmissions over terrestrial FSO channels remain limited. In this letter, we first derive the spectral efficiency limit of adaptive unconstrained MQAM over gamma-gamma turbulence with pointing error. We then show that adaptive transmissions using only six square MQAM constellations performs close to the theoretical limit (within $0.10$-$0.12$ bits/s/Hz) across a wide range of signal-to-noise ratios and channel conditions. - oai:arXiv.org:2511.22682v2 - cs.IT - math.IT - Mon, 22 Dec 2025 00:00:00 -0500 - replace + HATS: High-Accuracy Triple-Set Watermarking for Large Language Models + https://arxiv.org/abs/2512.19378 + arXiv:2512.19378v1 Announce Type: new +Abstract: Misuse of LLM-generated text can be curbed by watermarking techniques that embed implicit signals into the output. 
We propose a watermark that partitions the vocabulary at each decoding step into three sets (Green/Yellow/Red) with fixed ratios and restricts sampling to the Green and Yellow sets. At detection time, we replay the same partitions, compute Green-enrichment and Red-depletion statistics, convert them to one-sided z-scores, and aggregate their p-values via Fisher's method to decide whether a passage is watermarked. We implement generation, detection, and testing on Llama 2 7B, and evaluate true-positive rate, false-positive rate, and text quality. Results show that the triple-partition scheme achieves high detection accuracy at fixed FPR while preserving readability. + oai:arXiv.org:2512.19378v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Himani Verma, Kamal Singh, Ranjan K. Mallik + In Proceedings of the 11th International Conference on Computer and Communications, 2025 + Zhiqing Hu, Chenxu Zhao, Jiazhong Lu, Xiaolei Liu - Deadlock-Free Hybrid RL-MAPF Framework for Zero-Shot Multi-Robot Navigation - https://arxiv.org/abs/2511.22685 - arXiv:2511.22685v2 Announce Type: replace -Abstract: Multi-robot navigation in cluttered environments presents fundamental challenges in balancing reactive collision avoidance with long-range goal achievement. When navigating through narrow passages - or confined spaces, deadlocks frequently emerge that prevent agents from reaching their destinations, particularly when Reinforcement Learning (RL) control policies encounter novel configurations out of learning distribution. Existing RL-based approaches suffer from limited generalization capability in unseen environments. We propose a hybrid framework that seamlessly integrates RL-based reactive navigation with on-demand Multi-Agent Path Finding (MAPF) to explicitly resolve topological deadlocks. 
Our approach integrates a safety layer that monitors agent progress to detect deadlocks and, when detected, triggers a coordination controller for affected agents. The framework constructs globally feasible trajectories via MAPF and regulates waypoint progression to reduce inter-agent conflicts during navigation. - Extensive evaluation on dense multi-agent benchmarks shows that our method boosts task completion from marginal to near-universal success, markedly reducing deadlocks and collisions. When integrated with hierarchical task planning, it enables coordinated navigation for heterogeneous robots, demonstrating that coupling reactive RL navigation with selective MAPF intervention yields a robust, zero-shot performance. - oai:arXiv.org:2511.22685v2 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 - replace + OmniMER: Indonesian Multimodal Emotion Recognition via Auxiliary-Enhanced LLM Adaptation + https://arxiv.org/abs/2512.19379 + arXiv:2512.19379v1 Announce Type: new +Abstract: Indonesian, spoken by over 200 million people, remains underserved in multimodal emotion recognition research despite its dominant presence on Southeast Asian social media platforms. We introduce IndoMER, the first multimodal emotion recognition benchmark for Indonesian, comprising 1,944 video segments from 203 speakers with temporally aligned text, audio, and visual annotations across seven emotion categories. The dataset exhibits realistic challenges including cross-modal inconsistency and long-tailed class distributions shaped by Indonesian cultural communication norms. To address these challenges, we propose OmniMER, a multimodal adaptation framework built upon Qwen2.5-Omni that enhances emotion recognition through three auxiliary modality-specific perception tasks: emotion keyword extraction for text, facial expression analysis for video, and prosody analysis for audio. 
These auxiliary tasks help the model identify emotion-relevant cues in each modality before fusion, reducing reliance on spurious correlations in low-resource settings. Experiments on IndoMER show that OmniMER achieves 0.582 Macro-F1 on sentiment classification and 0.454 on emotion recognition, outperforming the base model by 7.6 and 22.1 absolute points respectively. Cross-lingual evaluation on the Chinese CH-SIMS dataset further demonstrates the generalizability of the proposed framework. The dataset and code are publicly available. https://github.com/yanxm01/INDOMER + oai:arXiv.org:2512.19379v1 + cs.LG + cs.AI + cs.MM + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xueming Yan, Boyan Xu, Yaochu Jin, Lixian Xiao, Wenlong Ye, Runyang Cai, Zeqi Zheng, Jingfa Liu, Aimin Yang + + + Real-Time Machine Learning for Embedded Anomaly Detection + https://arxiv.org/abs/2512.19383 + arXiv:2512.19383v1 Announce Type: new +Abstract: The spread of a resource-constrained Internet of Things (IoT) environment and embedded devices has put pressure on the real-time detection of anomalies occurring at the edge. This survey presents an overview of machine-learning methods aimed specifically at on-device anomaly detection with extremely strict constraints for latency, memory, and power consumption. Lightweight algorithms such as Isolation Forest, One-Class SVM, recurrent architectures, and statistical techniques are compared here according to the realities of embedded implementation. Our survey brings out significant trade-offs of accuracy and computational efficiency of detection, as well as how hardware constraints end up fundamentally redefining algorithm choice. The survey is completed with a set of practical recommendations on the choice of the algorithm depending on the equipment profiles and new trends in TinyML, which can help close the gap between detection capabilities and embedded reality. 
The paper serves as a strategic roadmap for engineers deploying anomaly detection in edge environments that are constrained by bandwidth and may be safety-critical. + oai:arXiv.org:2512.19383v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Haoyi Wang (Department of Electrical and Systems Engineering, Washington University in St. Louis, MO, USA), Licheng Luo (Department of Mechanical Engineering, University of California Riverside, CA, USA), Yiannis Kantaros (Department of Electrical and Systems Engineering, Washington University in St. Louis, MO, USA), Bruno Sinopoli (Department of Electrical and Systems Engineering, Washington University in St. Louis, MO, USA), Mingyu Cai (Department of Mechanical Engineering, University of California Riverside, CA, USA) + Abdelmadjid Benmachiche, Khadija Rais, Hamda Slimi - The Battle of the Water Futures - https://arxiv.org/abs/2511.22986 - arXiv:2511.22986v3 Announce Type: replace -Abstract: The highly anticipated 'Battle of the Water Networks' is back with a new challenge for the water community. This competition will be hosted at the 4th International Joint Conference on Water Distribution Systems Analysis and Computing and Control in the Water Industry (WDSA/CCWI 2026), taking place in Paphos, Cyprus, from May 18-21, 2026. This competition embodies the core mission of Water-Futures and the theme for WDSA/CCWI 2026: "Designing the next generation of urban water (and wastewater) systems." - The objective is to design and operate a water distribution system over a long-term horizon under deep uncertainty, with interventions applied in stages. For the first time, this challenge features a staged-design approach, unobservable and unknown uncertainties, and incorporates elements of policymaking and artificial intelligence. The solutions will be assessed using a transparent and inspectable open-source evaluation framework. 
- oai:arXiv.org:2511.22986v3 - eess.SY - cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-sa/4.0/ - Dennis Zanutto, Christos Michalopoulos, Lydia Tsiami, Andr\'e Artelt, Jasmin Brandt, Demetrios Eliades, Stelios Vrachimis, Stefano Alvisi, Valentina Marsili, Filippo Mazzoni, Panagiotis Smartzis, Barbara Hammer, Phoebe Koundouri, Marios Polycarpou, Dragan Savi\'c + DSTED: Decoupling Temporal Stabilization and Discriminative Enhancement for Surgical Workflow Recognition + https://arxiv.org/abs/2512.19387 + arXiv:2512.19387v1 Announce Type: new +Abstract: Purpose: Surgical workflow recognition enables context-aware assistance and skill assessment in computer-assisted interventions. Despite recent advances, current methods suffer from two critical challenges: prediction jitter across consecutive frames and poor discrimination of ambiguous phases. This paper aims to develop a stable framework by selectively propagating reliable historical information and explicitly modeling uncertainty for hard sample enhancement. + Methods: We propose a dual-pathway framework DSTED with Reliable Memory Propagation (RMP) and Uncertainty-Aware Prototype Retrieval (UPR). RMP maintains temporal coherence by filtering and fusing high-confidence historical features through multi-criteria reliability assessment. UPR constructs learnable class-specific prototypes from high-uncertainty samples and performs adaptive prototype matching to refine ambiguous frame representations. Finally, a confidence-driven gate dynamically balances both pathways based on prediction certainty. + Results: Our method achieves state-of-the-art performance on AutoLaparo-hysterectomy with 84.36% accuracy and 65.51% F1-score, surpassing the second-best method by 3.51% and 4.88% respectively. Ablations reveal complementary gains from RMP (2.19%) and UPR (1.93%), with synergistic effects when combined. 
Extensive analysis confirms substantial reduction in temporal jitter and marked improvement on challenging phase transitions. + Conclusion: Our dual-pathway design introduces a novel paradigm for stable workflow recognition, demonstrating that decoupling the modeling of temporal consistency and phase ambiguity yields superior performance and clinical applicability. + oai:arXiv.org:2512.19387v1 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Yueyao Chen, Kai-Ni Wang, Dario Tayupo, Arnaud Huaulm'e, Krystel Nyangoh Timoh, Pierre Jannin, Qi Dou - Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory - https://arxiv.org/abs/2511.23083 - arXiv:2511.23083v4 Announce Type: replace -Abstract: High-capacity kernel Hopfield networks exhibit a \textit{Ridge of Optimization} characterized by extreme stability. While previously linked to \textit{Spectral Concentration}, its origin remains elusive. Here, we analyze the network dynamics on a statistical manifold, revealing that the Ridge corresponds to the Edge of Stability, a critical boundary where the Fisher Information Matrix becomes singular. We demonstrate that the apparent Euclidean force antagonism is a manifestation of \textit{Dual Equilibrium} in the Riemannian space. This unifies learning dynamics and capacity via the Minimum Description Length principle, offering a geometric theory of self-organized criticality. - oai:arXiv.org:2511.23083v4 - cs.LG - cs.NE - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Fair Team Contracts + https://arxiv.org/abs/2512.19388 + arXiv:2512.19388v1 Announce Type: new +Abstract: A principal selects a team of agents for collaborating on a joint project. The principal aims to design a revenue-optimal contract that incentivize the team of agents to exert costly effort while satisfying fairness constraints. 
We show that the optimal fair contract ensures that there is a minimum share, and every agent receives a linear contract weakly higher than the minimum share that is sufficient to incentivize them to exert costly effort. We utilize this structure to design an FPTAS for additive success functions and a constant approximation algorithm for submodular success functions. Moreover, we show that adopting optimal fair contracts can lead to a 25% revenue increase compared to the optimal non-discriminatory contracts even for additive success functions. + oai:arXiv.org:2512.19388v1 + cs.GT + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Akira Tamamori + Matteo Castiglioni, Junjie Chen, Yingkai Li - MILE: A Mechanically Isomorphic Exoskeleton Data Collection System with Fingertip Visuotactile Sensing for Dexterous Manipulation - https://arxiv.org/abs/2512.00324 - arXiv:2512.00324v2 Announce Type: replace -Abstract: Imitation learning provides a promising approach to dexterous hand manipulation, but its effectiveness is limited by the lack of large-scale, high-fidelity data. Existing data-collection pipelines suffer from inaccurate motion retargeting, low data-collection efficiency, and missing high-resolution fingertip tactile sensing. We address this gap with MILE, a mechanically isomorphic teleoperation and data-collection system co-designed from human hand to exoskeleton to robotic hand. The exoskeleton is anthropometrically derived from the human hand, and the robotic hand preserves one-to-one joint-position isomorphism, eliminating nonlinear retargeting and enabling precise, natural control. The exoskeleton achieves a multi-joint mean absolute angular error below one degree, while the robotic hand integrates compact fingertip visuotactile modules that provide high-resolution tactile observations. 
Built on this retargeting-free interface, we teleoperate complex, contact-rich in-hand manipulation and efficiently collect a multimodal dataset comprising high-resolution fingertip visuotactile signals, RGB-D images, and joint positions. The teleoperation pipeline achieves a mean success rate improvement of 64%. Incorporating fingertip tactile observations further increases the success rate by an average of 25% over the vision-only baseline, validating the fidelity and utility of the dataset. Further details are available at: https://sites.google.com/view/mile-system. - oai:arXiv.org:2512.00324v2 + TwinAligner: Visual-Dynamic Alignment Empowers Physics-aware Real2Sim2Real for Robotic Manipulation + https://arxiv.org/abs/2512.19390 + arXiv:2512.19390v1 Announce Type: new +Abstract: The robotics field is evolving towards data-driven, end-to-end learning, inspired by multimodal large models. However, reliance on expensive real-world data limits progress. Simulators offer cost-effective alternatives, but the gap between simulation and reality challenges effective policy transfer. This paper introduces TwinAligner, a novel Real2Sim2Real system that addresses both visual and dynamic gaps. The visual alignment module achieves pixel-level alignment through SDF reconstruction and editable 3DGS rendering, while the dynamic alignment module ensures dynamic consistency by identifying rigid physics from robot-object interaction. TwinAligner improves robot learning by providing scalable data collection and establishing a trustworthy iterative cycle, accelerating algorithm development. Quantitative evaluations highlight TwinAligner's strong capabilities in visual and dynamic real-to-sim alignment. This system enables policies trained in simulation to achieve strong zero-shot generalization to the real world. The high consistency between real-world and simulated policy performance underscores TwinAligner's potential to advance scalable robot learning. 
Code and data will be released on https://twin-aligner.github.io + oai:arXiv.org:2512.19390v1 cs.RO cs.CV - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Jinda Du, Jieji Ren, Qiaojun Yu, Ningbin Zhang, Yu Deng, Xingyu Wei, Yufei Liu, Guoying Gu, Xiangyang Zhu - - - When Safety Blocks Sense: Measuring Semantic Confusion in LLM Refusals - https://arxiv.org/abs/2512.01037 - arXiv:2512.01037v2 Announce Type: replace -Abstract: Safety-aligned language models often refuse prompts that are actually harmless. Current evaluations mostly report global rates such as false rejection or compliance. These scores treat each prompt alone and miss local inconsistency, where a model accepts one phrasing of an intent but rejects a close paraphrase. This gap limits diagnosis and tuning. We introduce "semantic confusion," a failure mode that captures such local inconsistency, and a framework to measure it. We build ParaGuard, a 10k-prompt corpus of controlled paraphrase clusters that hold intent fixed while varying surface form. We then propose three model-agnostic metrics at the token level: Confusion Index, Confusion Rate, and Confusion Depth. These metrics compare each refusal to its nearest accepted neighbors and use token embeddings, next-token probabilities, and perplexity signals. Experiments across diverse model families and deployment guards show that global false-rejection rate hides critical structure. Our metrics reveal globally unstable boundaries in some settings, localized pockets of inconsistency in others, and cases where stricter refusal does not increase inconsistency. We also show how confusion-aware auditing separates how often a system refuses from how sensibly it refuses. This gives developers a practical signal to reduce false refusals while preserving safety. 
- oai:arXiv.org:2512.01037v2 - cs.CL - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace + cs.GR + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Riad Ahmed Anonto, Md Labid Al Nahiyan, Md Tanvir Hassan + Hongwei Fan, Hang Dai, Jiyao Zhang, Jinzhou Li, Qiyang Yan, Yujie Zhao, Mingju Gao, Jinghang Wu, Hao Tang, Hao Dong - Equivariant symmetry-aware head pose estimation for fetal MRI - https://arxiv.org/abs/2512.04890 - arXiv:2512.04890v4 Announce Type: replace -Abstract: We present E(3)-Pose, a novel fast pose estimation method that jointly and explicitly models rotation equivariance and object symmetry. Our work is motivated by the challenging problem of accounting for fetal head motion during a diagnostic MRI scan. We aim to enable automatic adaptive prescription of 2D diagnostic MRI slices with 6-DoF head pose estimation, supported by 3D MRI volumes rapidly acquired before each 2D slice. Existing methods struggle to generalize to clinical volumes, due to pose ambiguities induced by inherent anatomical symmetries, as well as low resolution, noise, and artifacts. In contrast, E(3)-Pose captures anatomical symmetries and rigid pose equivariance by construction, and yields robust estimates of the fetal head pose. Our experiments on publicly available and representative clinical fetal MRI datasets demonstrate the superior robustness and generalization of our method across domains. Crucially, E(3)-Pose achieves state-of-the-art accuracy on clinical MRI volumes, paving the way for clinical translation. Our implementation is available at github.com/ramyamut/E3-Pose. - oai:arXiv.org:2512.04890v4 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Ramya Muthukrishnan, Borjan Gagoski, Aryn Lee, P. 
Ellen Grant, Elfar Adalsteinsson, Polina Golland, Benjamin Billot + Power feedback strategy based on efficiency trajectory analysis for HCPV sun tracking + https://arxiv.org/abs/2512.19394 + arXiv:2512.19394v1 Announce Type: new +Abstract: This paper presents a control strategy for sun trackers which adapts continuously to different sources of error, avoiding the necessity of any kind of calibration by analyzing the produced electric power to sense the position of the Sun. The proposed strategy is able to meet the strict specifications for HCPV sun trackers despite mechanical uncertainties (misalignments in the structure itself, misalignment of the solar modules with respect to the wing, etc.) and installation uncertainties (misalignments of the platform with respect to geographical north). Experimental results with an industrial-grade solar tracker are presented, showing the validity of the proposed control strategy under sunny and moderately cloudy conditions, as well as with different installation precisions obtained by un-calibrating the system on purpose. + oai:arXiv.org:2512.19394v1 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + 10.1016/j.renene.2020.06.031 + Renewable Energy 161 (2020) 65-76 + Manuel G. Satu\'e, Fernando Casta\~no, Manuel G. Ortega, Francisco R. Rubio - CLUENet: Cluster Attention Makes Neural Networks Have Eyes - https://arxiv.org/abs/2512.06345 - arXiv:2512.06345v2 Announce Type: replace -Abstract: Despite the success of convolution- and attention-based models in vision tasks, their rigid receptive fields and complex architectures limit their ability to model irregular spatial patterns and hinder interpretability, therefore posing challenges for tasks requiring high model transparency. Clustering paradigms offer promising interpretability and flexible semantic modeling, but suffer from limited accuracy, low efficiency, and gradient vanishing during training.
To address these issues, we propose CLUster attEntion Network (CLUENet), a transparent deep architecture for visual semantic understanding. We propose three key innovations: (i) a Global Soft Aggregation and Hard Assignment with a Temperature-Scaled Cosine Attention and gated residual connections for enhanced local modeling, (ii) inter-block Hard and Shared Feature Dispatching, and (iii) an improved cluster pooling strategy. These enhancements significantly improve both classification performance and visual interpretability. Experiments on CIFAR-100 and Mini-ImageNet demonstrate that CLUENet outperforms existing clustering methods and mainstream visual models, offering a compelling balance of accuracy, efficiency, and transparency. - oai:arXiv.org:2512.06345v2 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiangshuai Song, Jun-Jie Huang, Tianrui Liu, Ke Liang, Chang Tang + EchoTrail-GUI: Building Actionable Memory for GUI Agents via Critic-Guided Self-Exploration + https://arxiv.org/abs/2512.19396 + arXiv:2512.19396v1 Announce Type: new +Abstract: Contemporary GUI agents, while increasingly capable due to advances in Large Vision-Language Models (VLMs), often operate with a critical limitation: they treat each task in isolation, lacking a mechanism to systematically learn from past successes. This digital ''amnesia'' results in sub-optimal performance, repeated errors, and poor generalization to novel challenges. To bridge this gap, we introduce EchoTrail-GUI, a novel framework designed to mimic human-like experiential learning by equipping agents with a dynamic, accessible memory. Our framework operates in three distinct stages. First, during Experience Exploration, an agent autonomously interacts with GUI environments to build a curated database of successful task trajectories, validated by a reward model. Crucially, the entire knowledge base construction is thus fully automated, requiring no human supervision.
Second, in the Memory Injection stage, upon receiving a new task, our system efficiently retrieves the most relevant past trajectories to serve as actionable ''memories''. Finally, during GUI Task Inference, these memories are injected as in-context guidance to inform the agent's reasoning and decision-making process. We demonstrate the efficacy of our approach on benchmarks including Android World and AndroidLab. The results show that EchoTrail-GUI significantly improves the task success rate and operational efficiency of baseline agents, validating the power of structured memory in creating more robust and intelligent GUI automation. + oai:arXiv.org:2512.19396v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiangshuai Song, Jun-Jie Huang, Tianrui Liu, Ke Liang, Chang Tang + Runze Li, Yuwen Zhai, Bo Xu, LiWu Xu, Nian Shi, Wei Zhang, Ran Lin, Liang Wang - Predictive Modeling of I/O Performance for Machine Learning Training Pipelines: A Data-Driven Approach to Storage Optimization - https://arxiv.org/abs/2512.06699 - arXiv:2512.06699v2 Announce Type: replace -Abstract: Modern machine learning training is increasingly bottlenecked by data I/O rather than compute. GPUs often sit idle at below 50% utilization waiting for data. This paper presents a machine learning approach to predict I/O performance and recommend optimal storage configurations for ML training pipelines. We collected 141 observations through systematic benchmarking across different storage backends (NVMe SSD, network-attached storage, in-memory filesystems), data formats, and access patterns, covering both low-level I/O operations and full training pipelines. After evaluating seven regression models and three classification approaches, XGBoost achieved the best performance with R-squared of 0.991, predicting I/O throughput within 11.8% error on average. 
Feature importance analysis revealed that throughput metrics and batch size are the primary performance drivers. This data-driven approach can reduce configuration time from days of trial-and-error to minutes of predictive recommendation. The methodology is reproducible and extensible to other resource management problems in ML systems. Code and data are available at https://github.com/knkarthik01/gpu_storage_ml_project - oai:arXiv.org:2512.06699v2 - cs.PF - cs.AI + Brain-Grounded Axes for Reading and Steering LLM States + https://arxiv.org/abs/2512.19399 + arXiv:2512.19399v1 Announce Type: new +Abstract: Interpretability methods for large language models (LLMs) typically derive directions from textual supervision, which can lack external grounding. We propose using human brain activity not as a training signal but as a coordinate system for reading and steering LLM states. Using the SMN4Lang MEG dataset, we construct a word-level brain atlas of phase-locking value (PLV) patterns and extract latent axes via ICA. We validate axes with independent lexica and NER-based labels (POS/log-frequency used as sanity checks), then train lightweight adapters that map LLM hidden states to these brain axes without fine-tuning the LLM. Steering along the resulting brain-derived directions yields a robust lexical (frequency-linked) axis in a mid TinyLlama layer, surviving perplexity-matched controls, and a brain-vs-text probe comparison shows larger log-frequency shifts (relative to the text probe) with lower perplexity for the brain axis. A function/content axis (axis 13) shows consistent steering in TinyLlama, Qwen2-0.5B, and GPT-2, with PPL-matched text-level corroboration. Layer-4 effects in TinyLlama are large but inconsistent, so we treat them as secondary (Appendix). Axis structure is stable when the atlas is rebuilt without GPT embedding-change features or with word2vec embeddings (|r|=0.64-0.95 across matched axes), reducing circularity concerns. 
Exploratory fMRI anchoring suggests potential alignment for embedding change and log frequency, but effects are sensitive to hemodynamic modeling assumptions and are treated as population-level evidence only. These results support a new interface: neurophysiology-grounded axes provide interpretable and controllable handles for LLM behavior. + oai:arXiv.org:2512.19399v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Karthik Prabhakar, Durgamadhab Mishra - - - Native Parallel Reasoner: Reasoning in Parallelism via Self-Distilled Reinforcement Learning - https://arxiv.org/abs/2512.07461 - arXiv:2512.07461v2 Announce Type: replace -Abstract: We introduce Native Parallel Reasoner (NPR), a teacher-free framework that enables Large Language Models (LLMs) to self-evolve genuine parallel reasoning capabilities. NPR transforms the model from sequential emulation to native parallel cognition through three key innovations: 1) a self-distilled progressive training paradigm that transitions from ``cold-start'' format discovery to strict topological constraints without external supervision; 2) a novel Parallel-Aware Policy Optimization (PAPO) algorithm that optimizes branching policies directly within the execution graph, allowing the model to learn adaptive decomposition via trial and error; and 3) a robust NPR Engine that refactors memory management and flow control of SGLang to enable stable, large-scale parallel RL training. Across eight reasoning benchmarks, NPR trained on Qwen3-4B achieves performance gains of up to 24.5% and inference speedups up to 4.6x. Unlike prior baselines that often fall back to autoregressive decoding, NPR demonstrates 100% genuine parallel execution, establishing a new standard for self-evolving, efficient, and scalable agentic reasoning. 
- oai:arXiv.org:2512.07461v2 - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Tong Wu, Yang Liu, Jun Bai, Zixia Jia, Shuyi Zhang, Ziyong Lin, Yanting Wang, Song-Chun Zhu, Zilong Zheng + Sandro Andric - Minimum Bayes Risk Decoding for Error Span Detection in Reference-Free Automatic Machine Translation Evaluation - https://arxiv.org/abs/2512.07540 - arXiv:2512.07540v2 Announce Type: replace -Abstract: Error Span Detection (ESD) extends automatic machine translation (MT) evaluation by localizing translation errors and labeling their severity. Current generative ESD methods typically use Maximum a Posteriori (MAP) decoding, assuming that the model-estimated probabilities are perfectly correlated with similarity to the human annotation, but we often observe higher likelihood assigned to an incorrect annotation than to the human one. We instead apply Minimum Bayes Risk (MBR) decoding to generative ESD. We use a sentence- or span-level similarity function for MBR decoding, which selects candidate hypotheses based on their approximate similarity to the human annotation. Experimental results on the WMT24 Metrics Shared Task show that MBR decoding significantly improves span-level performance and generally matches or outperforms MAP at the system and sentence levels. To reduce the computational cost of MBR decoding, we further distill its decisions into a model decoded via greedy search, removing the inference-time latency bottleneck. - oai:arXiv.org:2512.07540v2 + Kunnafonidilaw ka Cadeau: an ASR dataset of present-day Bambara + https://arxiv.org/abs/2512.19400 + arXiv:2512.19400v1 Announce Type: new +Abstract: We present Kunkado, a 160-hour Bambara ASR dataset compiled from Malian radio archives to capture present-day spontaneous speech across a wide range of topics. 
It includes code-switching, disfluencies, background noise, and overlapping speakers that practical ASR systems encounter in real-world use. We finetune Parakeet-based models on a 33.47-hour human-reviewed subset and apply pragmatic transcript normalization to reduce variability in number formatting, tags, and code-switching annotations. Evaluated on two real-world test sets, finetuning with Kunkado reduces WER from 44.47% to 37.12% on one and from 36.07% to 32.33% on the other. In human evaluation, the resulting model also outperforms a comparable system with the same architecture trained on 98 hours of cleaner, less realistic speech. We release the data and models to support robust ASR for predominantly oral languages. + oai:arXiv.org:2512.19400v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Yacouba Diarra, Panga Azazia Kamate, Nouhoum Souleymane Coulibaly, Michael Leventhal - Restrictive Hierarchical Semantic Segmentation for Stratified Tooth Layer Detection - https://arxiv.org/abs/2512.07984 - arXiv:2512.07984v2 Announce Type: replace -Abstract: Accurate understanding of anatomical structures is essential for reliably staging certain dental diseases. A way of introducing this within semantic segmentation models is by utilising hierarchy-aware methodologies. However, existing hierarchy-aware segmentation methods largely encode anatomical structure through the loss functions, providing weak and indirect supervision. We introduce a general framework that embeds an explicit anatomical hierarchy into semantic segmentation by coupling a recurrent, level-wise prediction scheme with restrictive output heads and top-down feature conditioning.
At each depth of the class tree, the backbone is re-run on the original image concatenated with logits from the previous level. Child class features are conditioned using Feature-wise Linear Modulation of their parent class probabilities, to modulate child feature spaces for fine grained detection. A probabilistic composition rule enforces consistency between parent and descendant classes. Hierarchical loss combines per-level class weighted Dice and cross entropy loss and a consistency term loss, ensuring parent predictions are the sum of their children. We validate our approach on our proposed dataset, TL-pano, containing 194 panoramic radiographs with dense instance and semantic segmentation annotations, of tooth layers and alveolar bone. Utilising UNet and HRNet as donor models across a 5-fold cross validation scheme, the hierarchical variants consistently increase IoU, Dice, and recall, particularly for fine-grained anatomies, and produce more anatomically coherent masks. However, hierarchical variants also demonstrated increased recall over precision, implying increased false positives. The results demonstrate that explicit hierarchical structuring improves both performance and clinical plausibility, especially in low data dental imaging regimes. - oai:arXiv.org:2512.07984v2 + Real2Edit2Real: Generating Robotic Demonstrations via a 3D Control Interface + https://arxiv.org/abs/2512.19402 + arXiv:2512.19402v1 Announce Type: new +Abstract: Recent progress in robot learning has been driven by large-scale datasets and powerful visuomotor policy architectures, yet policy robustness remains limited by the substantial cost of collecting diverse demonstrations, particularly for spatial generalization in manipulation tasks. To reduce repetitive data collection, we present Real2Edit2Real, a framework that generates new demonstrations by bridging 3D editability with 2D visual data through a 3D control interface. 
Our approach first reconstructs scene geometry from multi-view RGB observations with a metric-scale 3D reconstruction model. Based on the reconstructed geometry, we perform depth-reliable 3D editing on point clouds to generate new manipulation trajectories while geometrically correcting the robot poses to recover physically consistent depth, which serves as a reliable condition for synthesizing new demonstrations. Finally, we propose a multi-conditional video generation model guided by depth as the primary control signal, together with action, edge, and ray maps, to synthesize spatially augmented multi-view manipulation videos. Experiments on four real-world manipulation tasks demonstrate that policies trained on data generated from only 1-5 source demonstrations can match or outperform those trained on 50 real-world demonstrations, improving data efficiency by up to 10-50x. Moreover, experimental results on height and texture editing demonstrate the framework's flexibility and extensibility, indicating its potential to serve as a unified data generation framework. + oai:arXiv.org:2512.19402v1 + cs.RO + cs.CV + cs.GR + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yujie Zhao, Hongwei Fan, Di Chen, Shengcong Chen, Liliang Chen, Xiaoqi Li, Guanghui Ren, Hao Dong - Extended formulations for induced tree and path polytopes of chordal graphs - https://arxiv.org/abs/2512.08554 - arXiv:2512.08554v2 Announce Type: replace -Abstract: In this article, we give two extended space formulations, respectively, for the induced tree and path polytopes of chordal graphs with vertex and edge variables. - These formulations are obtained by proving that the induced tree and path extended incidence vectors of chordal graphs form a Hilbert basis. This also shows that both polytopes have the integer decomposition property in chordal graphs.
- Whereas the formulation for the induced tree polytope is easily seen to have a compact size, the system we provide for the induced path polytope has an exponential number of inequalities. - We show which of these inequalities define facets and exhibit a superset of the facet-defining ones that can be enumerated in polynomial time. We show that for some graphs, the latter superset contains redundant inequalities. - As corollaries, we obtain that the problems of finding an induced tree or path maximizing a linear function over the edges and vertices are solvable in polynomial time for the class of chordal graphs. - oai:arXiv.org:2512.08554v2 - cs.DM - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Alexandre Dupont-Bouillard + cs.GR + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yujie Zhao, Hongwei Fan, Di Chen, Shengcong Chen, Liliang Chen, Xiaoqi Li, Guanghui Ren, Hao Dong - SSCATeR: Sparse Scatter-Based Convolution Algorithm with Temporal Data Recycling for Real-Time 3D Object Detection in LiDAR Point Clouds - https://arxiv.org/abs/2512.08557 - arXiv:2512.08557v2 Announce Type: replace -Abstract: This work leverages the continuous sweeping motion of LiDAR scanning to concentrate object detection efforts on specific regions that receive a change in point data from one frame to another. We achieve this by using a sliding time window with short strides and consider the temporal dimension by storing convolution results between passes. This allows us to ignore unchanged regions, significantly reducing the number of convolution operations per forward pass without sacrificing accuracy. This data reuse scheme introduces extreme sparsity to detection data. To exploit this sparsity, we extend our previous work on scatter-based convolutions to allow for data reuse, and as such propose Sparse Scatter-Based Convolution Algorithm with Temporal Data Recycling (SSCATeR). 
This operation treats incoming LiDAR data as a continuous stream and acts only on the changing parts of the point cloud. By doing so, we achieve the same results with as much as a 6.61-fold reduction in processing time. Our test results show that the feature maps output by our method are identical to those produced by traditional sparse convolution techniques, whilst greatly increasing the computational efficiency of the network. - oai:arXiv.org:2512.08557v2 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Three Tiers and Thresholds: Incentives in Private Market Investing + https://arxiv.org/abs/2512.19405 + arXiv:2512.19405v1 Announce Type: new +Abstract: This paper studies optimal contract design in private market investing, focusing on internal decision making in venture capital and private equity firms. A principal relies on an agent who privately exerts costly due diligence effort and then recommends whether to invest. Outcomes are observable ex post even when an opportunity is declined, allowing compensation to reward both successful investments and prudent decisions to pass. We characterize profit maximizing contracts that induce information acquisition and truthful reporting. We show that three tier contracts are sufficient, with payments contingent on the agent's recommendation and the realized return. In symmetric environments satisfying the monotone likelihood ratio property, the optimal contract further simplifies to a threshold contract that pays only when the recommendation is aligned with an extreme realized return. These results provide guidance for performance based compensation that promotes diligent screening while limiting excessive risk taking. 
+ oai:arXiv.org:2512.19405v1 + cs.GT + econ.TH + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Alexander Dow, Manduhu Manduhu, Matheus Santos, Ben Bartlett, Gerard Dooly, James Riordan + Jussi Keppo, Yingkai Li - Fast operator learning for mapping correlations - https://arxiv.org/abs/2512.09286 - arXiv:2512.09286v2 Announce Type: replace -Abstract: We propose a fast, optimization-free method for learning the transition operators of high-dimensional Markov processes. The central idea is to perform a Galerkin projection of the transition operator to a suitable set of low-order bases that capture the correlations between the dimensions. Such a discretized operator can be obtained from moments corresponding to our choice of basis without curse of dimensionality. Furthermore, by exploiting its low-rank structure and the spatial decay of correlations, we can obtain a compressed representation with computational complexity of order $\mathcal{O}(dN)$, where $d$ is the dimensionality and $N$ is the sample size. We further theoretically analyze the approximation error of the proposed compressed representation. We numerically demonstrate that the learned operator allows efficient prediction of future events and solving high-dimensional boundary value problems. This gives rise to a simple linear algebraic method for high-dimensional rare-events simulations. - oai:arXiv.org:2512.09286v2 + A Cartesian Cut-Cell Two-Fluid Method for Two-Phase Diffusion Problems + https://arxiv.org/abs/2512.19407 + arXiv:2512.19407v1 Announce Type: new +Abstract: We present a Cartesian cut-cell finite-volume method for sharp-interface two-phase diffusion problems in static geometries. The formulation follows a two-fluid approach: independent diffusion equations are discretized in each phase on a fixed staggered Cartesian grid, while the phases are coupled through embedded interface conditions enforcing continuity of normal flux and a general jump law. 
Cut cells are treated by integrating the governing equations over phase-restricted control volumes and faces, yielding discrete divergence and gradient operators that are locally conservative within each phase. Interface coupling is achieved by introducing a small set of interfacial unknowns per cut cell on the embedded boundary; the resulting algebraic system involves only bulk and interfacial averages. A key feature of the method is the use of a reduced set of geometric information based solely on low-order moments (trimmed volumes, apertures and interface measures/centroids), allowing robust implementation without explicitly constructing cut-cell polytopes. The method supports steady (Poisson) and unsteady (diffusion) regimes and incorporates Dirichlet, Neumann, Robin boundary conditions and general jumps. We validate the scheme on one-, two- and three-dimensional mono- and diphasic benchmarks, including curved embedded boundaries, Robin conditions and strong property/jump contrasts. The results demonstrate the expected convergence behavior, sharp enforcement of interfacial laws and excellent conservation properties. Extensions to moving interfaces and Stefan-type free-boundary problems are natural perspectives of this framework. + oai:arXiv.org:2512.19407v1 + math.NA + cs.NA + physics.comp-ph + physics.flu-dyn + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Louis Libat, Can Sel\c{c}uk, Eric Ch\'enier, Vincent Le Chenadec - CNFinBench: A Benchmark for Safety and Compliance of Large Language Models in Finance - https://arxiv.org/abs/2512.09506 - arXiv:2512.09506v2 Announce Type: replace -Abstract: Large language models (LLMs) are increasingly deployed across the financial sector for tasks like investment research and algorithmic trading. Their high-stakes nature demands rigorous evaluation of models' safety and regulatory alignment.
However, there is a significant gap between evaluation capabilities and safety requirements. Current financial benchmarks mainly focus on textbook-style question answering and numerical problem-solving, failing to simulate the open-ended scenarios where safety risks typically manifest. To close these gaps, we introduce CNFinBench, a benchmark structured around a Capability-Compliance-Safety triad encompassing 15 subtasks. For Capability Q&As, we introduce a novel business-vertical taxonomy aligned with core financial domains like banking operations, which allows institutions to assess model readiness for deployment in operational scenarios. For Compliance and Risk Control Q&As, we embed regulatory requirements within realistic business scenarios to ensure models are evaluated under practical, scenario-driven conditions. For Safety Q&As, we uniquely incorporate structured bias and fairness auditing, a dimension overlooked by other holistic financial benchmarks, and introduce the first multi-turn adversarial dialogue task to systematically expose compliance decay under sustained, context-aware attacks. Accordingly, we propose the Harmful Instruction Compliance Score (HICS) to quantify models' consistency in resisting harmful instructions across multi-turn dialogues. Experiments on 21 models across all subtasks reveal a persistent gap between capability and compliance: models achieve an average score of 61.0 on capability tasks but drop to 34.2 on compliance and risk-control evaluations. In multi-turn adversarial dialogue tests, most LLMs attain only partial resistance, demonstrating that refusal alone is insufficient without cited, verifiable reasoning. 
- oai:arXiv.org:2512.09506v2 + Mixed formulation and structure-preserving discretization of Cosserat rod dynamics in a port-Hamiltonian framework + https://arxiv.org/abs/2512.19408 + arXiv:2512.19408v1 Announce Type: new +Abstract: An energy-based modeling framework for the nonlinear dynamics of spatial Cosserat rods undergoing large displacements and rotations is proposed. The mixed formulation features independent displacement, velocity and stress variables and is further objective and locking-free. Finite rotations are represented using a director formulation that avoids singularities and yields a constant mass matrix. This results in an infinite-dimensional nonlinear port-Hamiltonian (PH) system governed by partial differential-algebraic equations with a quadratic energy functional. Using a time-differentiated compliance form of the stress-strain relations allows for the imposition of kinematic constraints, such as inextensibility or shear-rigidity. A structure-preserving finite element discretization leads to a finite-dimensional system with PH structure, thus facilitating the design of an energy-momentum consistent integration scheme. Dissipative material behavior (via the generalized-Maxwell model) and non-standard actuation approaches (via pneumatic chambers or tendons) integrate naturally into the framework. As illustrated by selected numerical examples, the present framework establishes a new approach to energy-momentum consistent formulations in computational mechanics involving finite rotations. + oai:arXiv.org:2512.19408v1 + math.NA cs.CE - Mon, 22 Dec 2025 00:00:00 -0500 - replace + cs.NA + cs.RO + cs.SY + eess.SY + math.DS + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jinru Ding, Chao Ding, Wenrao Pang, Boyi Xiao, Zhiqiang Liu, Pengcheng Chen, Jiayuan Chen, Tiantian Yuan, Junming Guan, Yidong Jiang, Dawei Cheng, Jie Xu + Philipp L. Kinon, Simon R. 
Eugster, Peter Betsch - Near-Linear and Parameterized Approximations for Maximum Cliques in Disk Graphs - https://arxiv.org/abs/2512.09899 - arXiv:2512.09899v2 Announce Type: replace -Abstract: A \emph{disk graph} is the intersection graph of (closed) disks in the plane. We consider the classic problem of finding a maximum clique in a disk graph. For general disk graphs, the complexity of this problem is still open, but for unit disk graphs, it is well known to be in P. The currently fastest algorithm runs in time $O(n^{7/3+ o(1)})$, where $n$ denotes the number of disks~\cite{EspenantKM23, keil_et_al:LIPIcs.SoCG.2025.63}. Moreover, for the case of disk graphs with $t$ distinct radii, the problem has also recently been shown to be in XP. More specifically, it is solvable in time $O^*(n^{2t})$~\cite{keil_et_al:LIPIcs.SoCG.2025.63}. In this paper, we present algorithms with improved running times by allowing for approximate solutions and by using randomization: - (i) for unit disk graphs, we give an algorithm that, with constant success probability, computes a $(1-\varepsilon)$-approximate maximum clique in expected time $\tilde{O}(n/\varepsilon^2)$; and - (ii) for disk graphs with $t$ distinct radii, we give a parameterized approximation scheme that, with a constant success probability, computes a $(1-\varepsilon)$-approximate maximum clique in expected time $\tilde{O}(f(t)\cdot (1/\varepsilon)^{O(t)} \cdot n)$. - oai:arXiv.org:2512.09899v2 - cs.CG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Symplectic Reservoir Representation of Legendre Dynamics + https://arxiv.org/abs/2512.19409 + arXiv:2512.19409v1 Announce Type: new +Abstract: Modern learning systems act on internal representations of data, yet how these representations encode underlying physical or statistical structure is often left implicit. 
In physics, conservation laws of Hamiltonian systems such as symplecticity guarantee long-term stability, and recent work has begun to hard-wire such constraints into learning models at the loss or output level. Here we ask a different question: what would it mean for the representation itself to obey a symplectic conservation law in the sense of Hamiltonian mechanics? + We express this symplectic constraint through Legendre duality: the pairing between primal and dual parameters, which becomes the structure that the representation must preserve. We formalize Legendre dynamics as stochastic processes whose trajectories remain on Legendre graphs, so that the evolving primal-dual parameters stay Legendre dual. We show that this class includes linear time-invariant Gaussian process regression and Ornstein-Uhlenbeck dynamics. + Geometrically, we prove that the maps that preserve all Legendre graphs are exactly symplectomorphisms of cotangent bundles of the form "cotangent lift of a base diffeomorphism followed by an exact fibre translation". Dynamically, this characterization leads to the design of a Symplectic Reservoir (SR), a reservoir-computing architecture that is a special case of recurrent neural network and whose recurrent core is generated by Hamiltonian systems that are at most linear in the momentum. + Our main theorem shows that every SR update has this normal form and therefore transports Legendre graphs to Legendre graphs, preserving Legendre duality at each time step. Overall, SR implements a geometrically constrained, Legendre-preserving representation map, injecting symplectic geometry and Hamiltonian mechanics directly at the representational level. 
oai:arXiv.org:2512.19409v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Robert Simon Fong, Gouhei Tanaka, Kazuyuki Aihara - Symphony: A Heuristic Normalized Calibrated Advantage Actor and Critic Algorithm in application for Humanoid Robots - https://arxiv.org/abs/2512.10477 - arXiv:2512.10477v3 Announce Type: replace -Abstract: In our work we hint, though not explicitly, that it is a misconception to think that humans learn fast. The learning process takes time. Babies start learning to move in the restricted liquid area called the placenta. Children are often limited by an underdeveloped body. Even adults are not allowed to participate in complex competitions right away. However, with robots, when learning from scratch, we often don't have the privilege of waiting for dozens of millions of steps. "Swaddling" regularization is responsible for restraining an agent during rapid but unstable development, penalizing action strength in a specific way without affecting actions directly. The Symphony, a Transitional-policy Deterministic Actor and Critic algorithm, is a concise combination of different ideas for training humanoid robots from scratch with Sample Efficiency, Sample Proximity and Safety of Actions in mind. It is no secret that a continuous increase in Gaussian noise without appropriate smoothing is harmful for motors and gearboxes. Compared to Stochastic algorithms, we set a limited parametric noise and promote a reduced strength of actions, safely increasing entropy, since the actions are kind of immersed in weaker noise. When actions require more extreme values, actions rise above the weak noise. Training becomes empirically much safer for both the surrounding environment and the robot's mechanisms.
We use Fading Replay Buffer: using a fixed formula containing the hyperbolic tangent, we adjust the batch sampling probability: the memory contains a recent memory and a long-term memory trail. Fading Replay Buffer allows us to use Temporal Advantage when we improve the current Critic Network prediction compared to the exponential moving average. Temporal Advantage allows us to update Actor and Critic in one pass, as well as combine Actor and Critic in one Object and implement their Losses in one line. - oai:arXiv.org:2512.10477v3 - cs.RO - cs.NE - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Timur Ishuov, Michele Folgheraiter, Madi Nurmanov, Goncalo Gordo, Rich\'ard Farkas, J\'ozsef Dombi + Research Program: Theory of Learning in Dynamical Systems + https://arxiv.org/abs/2512.19410 + arXiv:2512.19410v1 Announce Type: new +Abstract: Modern learning systems increasingly interact with data that evolve over time and depend on hidden internal state. We ask a basic question: when is such a dynamical system learnable from observations alone? This paper proposes a research program for understanding learnability in dynamical systems through the lens of next-token prediction. We argue that learnability in dynamical systems should be studied as a finite-sample question, and be based on the properties of the underlying dynamics rather than the statistical properties of the resulting sequence. To this end, we give a formulation of learnability for stochastic processes induced by dynamical systems, focusing on guarantees that hold uniformly at every time step after a finite burn-in period. This leads to a notion of dynamic learnability which captures how the structure of a system, such as stability, mixing, observability, and spectral properties, governs the number of observations required before reliable prediction becomes possible. 
We illustrate the framework in the case of linear dynamical systems, showing that accurate prediction can be achieved after finite observation without system identification, by leveraging improper methods based on spectral filtering. We survey the relationship between learning in dynamical systems and classical PAC, online, and universal prediction theories, and suggest directions for studying nonlinear and controlled systems. + oai:arXiv.org:2512.19410v1 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Elad Hazan, Shai Shalev Shwartz, Nathan Srebro - Equivalent Instances for Scheduling and Packing Problems - https://arxiv.org/abs/2512.10635 - arXiv:2512.10635v2 Announce Type: replace -Abstract: Two instances $(I,k)$ and $(I',k')$ of a parameterized problem $P$ are equivalent if they have the same set of solutions (static equivalent) or if the set of solutions of $(I,k)$ can be constructed from the set of solutions for $(I',k')$ and some computable pre-solutions. If the algorithm constructing such a (static) equivalent instance whose size is polynomially bounded runs in fixed-parameter tractable (FPT) time, we say that there exists a (static) equivalent instance for this problem. In this paper we present (static) equivalent instances for Scheduling and Knapsack problems. We improve the bound for the $\ell_1$-norm of an equivalent vector given by Eisenbrand, Hunkenschr\"oder, Klein, Kouteck\'y, Levin, and Onn and show how this yields equivalent instances for integer linear programs (ILPs) and related problems. We obtain an $O(MN^2\log(NU))$ static equivalent instance for feasibility ILPs where $M$ is the number of constraints, $N$ is the number of variables and $U$ is an upper bound for the $\ell_\infty$-norm of the smallest feasible solution. With this, we get an $O(n^2\log(n))$ static equivalent instance for Knapsack where $n$ is the number of items. 
Moreover, we give an $O(M^2N\log(NM\Delta))$ kernel for feasibility ILPs where $\Delta$ is an upper bound for the $\ell_\infty$-norm of the given constraint matrix. - Using balancing results by Knop et al., the ConfILP and a proximity result by Eisenbrand and Weismantel, we give an $O(d^2\log(p_{\max}))$ equivalent instance for LoadBalancing, a generalization of scheduling problems. Here $d$ is the number of different processing times and $p_{\max}$ is the largest processing time. - oai:arXiv.org:2512.10635v2 - cs.CC - cs.DS - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Klaus Jansen, Kai Kahler, Corinna Wambsganz + From Retrieval to Reasoning: A Framework for Cyber Threat Intelligence NER with Explicit and Adaptive Instructions + https://arxiv.org/abs/2512.19414 + arXiv:2512.19414v1 Announce Type: new +Abstract: The automation of Cyber Threat Intelligence (CTI) relies heavily on Named Entity Recognition (NER) to extract critical entities from unstructured text. Currently, Large Language Models (LLMs) primarily address this task through retrieval-based In-Context Learning (ICL). This paper analyzes this mainstream paradigm, revealing a fundamental flaw: its success stems not from global semantic similarity but largely from the incidental overlap of entity types within retrieved examples. This exposes the limitations of relying on unreliable implicit induction. To address this, we propose TTPrompt, a framework shifting from implicit induction to explicit instruction. TTPrompt maps the core concepts of CTI's Tactics, Techniques, and Procedures (TTPs) into an instruction hierarchy: formulating task definitions as Tactics, guiding strategies as Techniques, and annotation guidelines as Procedures. Furthermore, to handle the adaptability challenge of static guidelines, we introduce Feedback-driven Instruction Refinement (FIR). 
FIR enables LLMs to self-refine guidelines by learning from errors on minimal labeled data, adapting to distinct annotation dialects. Experiments on five CTI NER benchmarks demonstrate that TTPrompt consistently surpasses retrieval-based baselines. Notably, with refinement on just 1% of training data, it rivals models fine-tuned on the full dataset. For instance, on LADDER, its Micro F1 of 71.96% approaches the fine-tuned baseline, and on the complex CTINexus, its Macro F1 exceeds the fine-tuned ACLM model by 10.91%. + oai:arXiv.org:2512.19414v1 + cs.CR + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jiaren Peng, Hongda Sun, Xuan Tian, Cheng Huang, Zeqing Li, Rui Yan - Replace, Don't Expand: Mitigating Context Dilution in Multi-Hop RAG via Fixed-Budget Evidence Assembly - https://arxiv.org/abs/2512.10787 - arXiv:2512.10787v2 Announce Type: replace -Abstract: Retrieval-Augmented Generation (RAG) systems often fail on multi-hop queries when the initial retrieval misses a bridge fact. Prior corrective approaches, such as Self-RAG, CRAG, and Adaptive-$k$, typically address this by \textit{adding} more context or pruning existing lists. However, simply expanding the context window often leads to \textbf{context dilution}, where distractors crowd out relevant information. We propose \textbf{SEAL-RAG}, a training-free controller that adopts a \textbf{``replace, don't expand''} strategy to fight context dilution under a fixed retrieval depth $k$. SEAL executes a (\textbf{S}earch $\rightarrow$ \textbf{E}xtract $\rightarrow$ \textbf{A}ssess $\rightarrow$ \textbf{L}oop) cycle: it performs on-the-fly, entity-anchored extraction to build a live \textit{gap specification} (missing entities/relations), triggers targeted micro-queries, and uses \textit{entity-first ranking} to actively swap out distractors for gap-closing evidence. 
We evaluate SEAL-RAG against faithful re-implementations of Basic RAG, CRAG, Self-RAG, and Adaptive-$k$ in a shared environment on \textbf{HotpotQA} and \textbf{2WikiMultiHopQA}. On HotpotQA ($k=3$), SEAL improves answer correctness by \textbf{+3--13 pp} and evidence precision by \textbf{+12--18 pp} over Self-RAG. On 2WikiMultiHopQA ($k=5$), it outperforms Adaptive-$k$ by \textbf{+8.0 pp} in accuracy and maintains \textbf{96\%} evidence precision compared to 22\% for CRAG. These gains are statistically significant ($p<0.001$). By enforcing fixed-$k$ replacement, SEAL yields a predictable cost profile while ensuring the top-$k$ slots are optimized for precision rather than mere breadth. We release our code and data at https://github.com/mosherino/SEAL-RAG. - oai:arXiv.org:2512.10787v2 - cs.AI - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace http://creativecommons.org/licenses/by/4.0/ - Moshe Lahmy, Roi Yozevitch + Non-Contrast CT Esophageal Varices Grading through Clinical Prior-Enhanced Multi-Organ Analysis + https://arxiv.org/abs/2512.19415 + arXiv:2512.19415v1 Announce Type: new +Abstract: Esophageal varices (EV) represent a critical complication of portal hypertension, affecting approximately 60% of cirrhosis patients with a significant bleeding risk of ~30%. While traditionally diagnosed through invasive endoscopy, non-contrast computed tomography (NCCT) presents a potential non-invasive alternative that has yet to be fully utilized in clinical practice. We present Multi-Organ-COhesion Network++ (MOON++), a novel multimodal framework that enhances EV assessment through comprehensive analysis of NCCT scans. Inspired by clinical evidence correlating organ volumetric relationships with liver disease severity, MOON++ synthesizes imaging characteristics of the esophagus, liver, and spleen through multimodal learning. We evaluated our approach using 1,631 patients; those with endoscopically confirmed EV were classified into four severity grades. 
Validation in 239 patient cases and independent testing in 289 cases demonstrate superior performance compared to conventional single organ methods, achieving an AUC of 0.894 versus 0.803 for the severe grade EV classification (G3 versus <G3) and 0.921 versus 0.793 for the differentiation of moderate to severe grades (>=G2 versus <G2). We conducted a reader study involving experienced radiologists to further validate the performance of MOON++. To our knowledge, MOON++ represents the first comprehensive multi-organ NCCT analysis framework incorporating clinical knowledge priors for EV assessment, potentially offering a promising non-invasive diagnostic alternative. + oai:arXiv.org:2512.19415v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Moshe Lahmy, Roi Yozevitch + Xiaoming Zhang, Chunli Li, Jiacheng Hao, Yuan Gao, Danyang Tu, Jianyi Qiao, Xiaoli Yin, Le Lu, Ling Zhang, Ke Yan, Yang Hou, Yu Shi - Computational emotion analysis with multimodal LLMs: Current evidence on an emerging methodological opportunity - https://arxiv.org/abs/2512.10882 - arXiv:2512.10882v2 Announce Type: replace -Abstract: Emotions are central to politics and analyzing their role in political communication has a long tradition. As research increasingly leverages audio-visual materials to analyze emotions, the emergence of multimodal generative Artificial Intelligence (AI) promises great advances. However, we lack evidence about the effectiveness of multimodal AI in analyzing emotions in political communication. This paper addresses this gap by evaluating current multimodal large language models (mLLMs) in the video-based analysis of emotional arousal, using two complementary datasets of human-labeled video recordings. It finds that under ideal circumstances, mLLMs' emotional arousal ratings are highly reliable and exhibit little to no demographic bias. 
However, in recordings of real-world parliamentary debates, mLLMs' arousal ratings fail to deliver on this promise with potential negative consequences for downstream statistical inferences. This study therefore underscores the need for continued, thorough evaluation of emerging generative AI methods in multimodal political analysis and contributes a suitable replicable framework. - oai:arXiv.org:2512.10882v2 + CodeSimpleQA: Scaling Factuality in Code Large Language Models + https://arxiv.org/abs/2512.19424 + arXiv:2512.19424v1 Announce Type: new +Abstract: Large language models (LLMs) have made significant strides in code generation, achieving impressive capabilities in synthesizing code snippets from natural language instructions. However, a critical challenge remains in ensuring LLMs generate factually accurate responses about programming concepts, technical implementations, etc. Most previous code-related benchmarks focus on code execution correctness, overlooking the factual accuracy of programming knowledge. To address this gap, we present CodeSimpleQA, a comprehensive bilingual benchmark designed to evaluate the factual accuracy of code LLMs in answering code-related questions, which contains carefully curated question-answer pairs in both English and Chinese, covering diverse programming languages and major computer science domains. Further, we create CodeSimpleQA-Instruct, a large-scale instruction corpus with 66M samples, and develop a post-training framework combining supervised fine-tuning and reinforcement learning. Our comprehensive evaluation of diverse LLMs reveals that even frontier LLMs struggle with code factuality. Our proposed framework demonstrates substantial improvements over the base model, underscoring the critical importance of factuality-aware alignment in developing reliable code LLMs. 
+ oai:arXiv.org:2512.19424v1 cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hauke Licht + Jian Yang, Wei Zhang, Yizhi Li, Shawn Guo, Haowen Wang, Aishan Liu, Ge Zhang, Zili Wang, Zhoujun Li, Xianglong Liu, Weifeng Lv - An Anatomy of Vision-Language-Action Models: From Modules to Milestones and Challenges - https://arxiv.org/abs/2512.11362 - arXiv:2512.11362v3 Announce Type: replace -Abstract: Vision-Language-Action (VLA) models are driving a revolution in robotics, enabling machines to understand instructions and interact with the physical world. This field is exploding with new models and datasets, making it both exciting and challenging to keep pace with. This survey offers a clear and structured guide to the VLA landscape. We design it to follow the natural learning path of a researcher: we start with the basic Modules of any VLA model, trace the history through key Milestones, and then dive deep into the core Challenges that define recent research frontier. Our main contribution is a detailed breakdown of the five biggest challenges in: (1) Representation, (2) Execution, (3) Generalization, (4) Safety, and (5) Dataset and Evaluation. This structure mirrors the developmental roadmap of a generalist agent: establishing the fundamental perception-action loop, scaling capabilities across diverse embodiments and environments, and finally ensuring trustworthy deployment-all supported by the essential data infrastructure. For each of them, we review existing approaches and highlight future opportunities. We position this paper as both a foundational guide for newcomers and a strategic roadmap for experienced researchers, with the dual aim of accelerating learning and inspiring new ideas in embodied intelligence. A live version of this survey, with continuous updates, is maintained on our \href{https://suyuz1.github.io/VLA-Survey-Anatomy/}{project page}. 
- oai:arXiv.org:2512.11362v3 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 - replace + A Computationally Efficient Framework for Overlapping Community Detection in Large Bipartite Graphs + https://arxiv.org/abs/2512.19426 + arXiv:2512.19426v1 Announce Type: new +Abstract: Community detection, which uncovers closely connected vertex groups in networks, is vital for applications in social networks, recommendation systems, and beyond. Real-world networks often have bipartite structures (vertices in two disjoint sets with inter-set connections), creating unique challenges on specialized community detection methods. Biclique percolation community (BCPC) is widely used to detect cohesive structures in bipartite graphs. A biclique is a complete bipartite subgraph, and a BCPC forms when maximal bicliques connect via adjacency (sharing an (alpha, beta)-biclique). Yet, existing methods for BCPC detection suffer from high time complexity due to the potentially massive maximal biclique adjacency graph (MBAG). To tackle this, we propose a novel partial-BCPC based solution, whose key idea is to use partial-BCPC to reduce the size of the MBAG. A partial-BCPC is a subset of BCPC. Maximal bicliques belonging to the same partial-BCPC must also belong to the same BCPC. Therefore, these maximal bicliques can be grouped as a single vertex in the MBAG, significantly reducing the size of the MBAG. Furthermore, we move beyond the limitations of MBAG and propose a novel BCPC detection approach based on (alpha, beta)-biclique enumeration. This approach detects BCPC by enumerating all (alpha, beta)-bicliques and connecting maximal bicliques sharing the same (alpha, beta)-biclique, which is the condition for maximal bicliques to be adjacent. It also leverages partial-BCPC to significantly prune the enumeration space of (alpha, beta)-biclique. Experiments show that our methods outperform existing methods by nearly three orders of magnitude. 
+ oai:arXiv.org:2512.19426v1 + cs.SI + cs.DB + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yue Zeng, Rong-Hua Li, Qiangqiang Dai, Guoren Wang - xGR: Efficient Generative Recommendation Serving at Scale - https://arxiv.org/abs/2512.11529 - arXiv:2512.11529v2 Announce Type: replace -Abstract: Recommendation systems deliver substantial economic benefits by providing personalized predictions. Generative recommendation (GR) integrates LLMs to enhance the understanding of long user-item sequences. Despite employing attention-based architectures, GR's workload differs markedly from that of LLM serving. GR typically processes long prompts while producing short, fixed-length outputs, yet the computational cost of each decode phase is especially high due to the large beam width. In addition, since the beam search involves a vast item space, the sorting overhead becomes particularly time-consuming. We propose xGR, a GR-oriented serving system that meets strict low-latency requirements under high-concurrency scenarios. First, xGR unifies the processing of prefill and decode phases through staged computation and separated KV cache. Second, xGR enables early sorting termination and mask-based item filtering with data structure reuse. Third, xGR reconstructs the overall pipeline to exploit multilevel overlap and multi-stream parallelism. Our experiments with real-world recommendation service datasets demonstrate that xGR achieves at least 3.49x throughput compared to the state-of-the-art baseline under strict latency constraints. 
- oai:arXiv.org:2512.11529v2 + Attention Is Not What You Need + https://arxiv.org/abs/2512.19428 + arXiv:2512.19428v1 Announce Type: new +Abstract: We revisit a basic question in sequence modeling: is explicit self-attention actually necessary for strong performance and reasoning? We argue that standard multi-head attention is best seen as a form of tensor lifting: hidden vectors are mapped into a high-dimensional space of pairwise interactions, and learning proceeds by constraining this lifted tensor through gradient descent. This mechanism is extremely expressive but mathematically opaque, because after many layers it becomes very hard to describe the model with a small family of explicit invariants. + To explore an alternative, we propose an attention-free architecture based on Grassmann flows. Instead of forming an L by L attention matrix, our Causal Grassmann layer (i) linearly reduces token states, (ii) encodes local token pairs as two-dimensional subspaces on a Grassmann manifold via Plucker coordinates, and (iii) fuses these geometric features back into the hidden states through gated mixing. Information therefore propagates by controlled deformations of low-rank subspaces over multi-scale local windows, so the core computation lives on a finite-dimensional manifold rather than in an unstructured tensor space. + On the Wikitext-2 language modeling benchmark, purely Grassmann-based models with 13 to 18 million parameters achieve validation perplexities within about 10 to 15 percent of size-matched Transformers. On the SNLI natural language inference task, a Grassmann-Plucker head on top of DistilBERT slightly outperforms a Transformer head, with best validation and test accuracies of 0.8550 and 0.8538 compared to 0.8545 and 0.8511. 
We analyze the complexity of Grassmann mixing, show linear scaling in sequence length for fixed rank, and argue that such manifold-based designs offer a more structured route toward geometric and invariant-based interpretations of neural reasoning. + oai:arXiv.org:2512.19428v1 cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + cs.AI + math.AG + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Qingxiao Sun, Tongxuan Liu, Shen Zhang, Siyu Wu, Peijun Yang, Haotian Liang, Menxin Li, Xiaolong Ma, Zhiwei Liang, Ziyi Ren, Minchao Zhang, Xinyu Liu, Ke Zhang, Depei Qian, Hailong Yang - - - In-Context Learning for Seismic Data Processing - https://arxiv.org/abs/2512.11575 - arXiv:2512.11575v2 Announce Type: replace -Abstract: Seismic processing transforms raw data into subsurface images essential for geophysical applications. Traditional methods face challenges, such as noisy data, and manual parameter tuning, among others. Recently deep learning approaches have proposed alternative solutions to some of these problems. However, important challenges of existing deep learning approaches are spatially inconsistent results across neighboring seismic gathers and lack of user-control. We address these limitations by introducing ContextSeisNet, an in-context learning model, to seismic demultiple processing. Our approach conditions predictions on a support set of spatially related example pairs: neighboring common-depth point gathers from the same seismic line and their corresponding labels. This allows the model to learn task-specific processing behavior at inference time by observing how similar gathers should be processed, without any retraining. This method provides both flexibility through user-defined examples and improved lateral consistency across seismic lines. On synthetic data, ContextSeisNet outperforms a U-Net baseline quantitatively and demonstrates enhanced spatial coherence between neighboring gathers. 
On field data, our model achieves superior lateral consistency compared to both traditional Radon demultiple and the U-Net baseline. Relative to the U-Net, ContextSeisNet also delivers improved near-offset performance and more complete multiple removal. Notably, ContextSeisNet achieves comparable field data performance despite being trained on 90% less data, demonstrating substantial data efficiency. These results establish ContextSeisNet as a practical approach for spatially consistent seismic demultiple with potential applicability to other seismic processing tasks. - oai:arXiv.org:2512.11575v2 - cs.CV - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Fabian Fuchs, Mario Ruben Fernandez, Norman Ettrich, Janis Keuper + Zhang Chong - Journey Before Destination: On the importance of Visual Faithfulness in Slow Thinking - https://arxiv.org/abs/2512.12218 - arXiv:2512.12218v2 Announce Type: replace -Abstract: Reasoning-augmented vision language models (VLMs) generate explicit chains of thought that promise greater capability and transparency but also introduce new failure modes: models may reach correct answers via visually unfaithful intermediate steps, or reason faithfully yet fail on the final prediction. Standard evaluations that only measure final-answer accuracy cannot distinguish these behaviors. We introduce the visual faithfulness of reasoning chains as a distinct evaluation dimension, focusing on whether the perception steps of a reasoning chain are grounded in the image. We propose a training- and reference-free framework that decomposes chains into perception versus reasoning steps and uses off-the-shelf VLM judges for step-level faithfulness, additionally verifying this approach through a human meta-evaluation. Building on this metric, we present a lightweight self-reflection procedure that detects and locally regenerates unfaithful perception steps without any training. 
Across multiple reasoning-trained VLMs and perception-heavy benchmarks, our method reduces Unfaithful Perception Rate while preserving final-answer accuracy, improving the reliability of multimodal reasoning. - oai:arXiv.org:2512.12218v2 - cs.CV + MobileWorld: Benchmarking Autonomous Mobile Agents in Agent-User Interactive, and MCP-Augmented Environments + https://arxiv.org/abs/2512.19432 + arXiv:2512.19432v1 Announce Type: new +Abstract: Among existing online mobile-use benchmarks, AndroidWorld has emerged as the dominant benchmark due to its reproducible environment and deterministic evaluation; however, recent agents achieving over 90% success rates indicate its saturation and motivate the need for a more challenging benchmark. In addition, its environment lacks key application categories, such as e-commerce and enterprise communication, and does not reflect realistic mobile-use scenarios characterized by vague user instructions and hybrid tool usage. To bridge this gap, we introduce MobileWorld, a substantially more challenging benchmark designed to better reflect real-world mobile usage, comprising 201 tasks across 20 applications, while maintaining the same level of reproducible evaluation as AndroidWorld. The difficulty of MobileWorld is twofold. First, it emphasizes long-horizon tasks with cross-application interactions: MobileWorld requires nearly twice as many task-completion steps on average (27.8 vs. 14.3) and includes far more multi-application tasks (62.2% vs. 9.5%) compared to AndroidWorld. Second, MobileWorld extends beyond standard GUI manipulation by introducing novel task categories, including agent-user interaction and MCP-augmented tasks. To ensure robust evaluation, we provide snapshot-based container environment and precise functional verifications, including backend database inspection and task callback APIs. We further develop a planner-executor agentic framework with extended action spaces to support user interactions and MCP calls. 
Our results reveal a sharp performance drop compared to AndroidWorld, with the best agentic framework and end-to-end model achieving 51.7% and 20.9% success rates, respectively. Our analysis shows that current models struggle significantly with user interaction and MCP calls, offering a strategic roadmap toward more robust, next-generation mobile intelligence. + oai:arXiv.org:2512.19432v1 cs.CL - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Rheeya Uppaal, Phu Mon Htut, Min Bai, Nikolaos Pappas, Zheng Qi, Sandesh Swamy + Quyu Kong, Xu Zhang, Zhenyu Yang, Nolan Gao, Chen Liu, Panrong Tong, Chenglin Cai, Hanzhang Zhou, Jianan Zhang, Liangyu Chen, Zhidan Liu, Steven Hoi, Yue Wang - Credit Risk Estimation with Non-Financial Features: Evidence from a Synthetic Istanbul Dataset - https://arxiv.org/abs/2512.12783 - arXiv:2512.12783v2 Announce Type: replace -Abstract: Financial exclusion constrains entrepreneurship, increases income volatility, and widens wealth gaps. Underbanked consumers in Istanbul often have no bureau file because their earnings and payments flow through informal channels. To study how such borrowers can be evaluated we create a synthetic dataset of one hundred thousand Istanbul residents that reproduces first quarter 2025 T\"U\.IK census marginals and telecom usage patterns. Retrieval augmented generation feeds these public statistics into the OpenAI o3 model, which synthesises realistic yet private records. Each profile contains seven socio demographic variables and nine alternative attributes that describe phone specifications, online shopping rhythm, subscription spend, car ownership, monthly rent, and a credit card flag. To test the impact of the alternative financial data CatBoost, LightGBM, and XGBoost are each trained in two versions. 
Demo models use only the socio demographic variables; Full models include both socio demographic and alternative attributes. Across five-fold stratified validation, the alternative block raises the area under the curve by about one point three percentage points and lifts balanced \(F_{1}\) from roughly 0.84 to 0.95, a fourteen percent gain. We contribute an open Istanbul 2025 Q1 synthetic dataset, a fully reproducible modeling pipeline, and empirical evidence that a concise set of behavioural attributes can approach bureau level discrimination power while serving borrowers who lack formal credit records. These findings give lenders and regulators a transparent blueprint for extending fair and safe credit access to the underbanked. - oai:arXiv.org:2512.12783v2 - cs.LG - q-fin.ST - stat.AP - Mon, 22 Dec 2025 00:00:00 -0500 - replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Atalay Denknalbant, Emre Sezdi, Zeki Furkan Kutlu, Polat Goktas + dMLLM-TTS: Self-Verified and Efficient Test-Time Scaling for Diffusion Multi-Modal Large Language Models + https://arxiv.org/abs/2512.19433 + arXiv:2512.19433v1 Announce Type: new +Abstract: Diffusion Multi-modal Large Language Models (dMLLMs) have recently emerged as a novel architecture unifying image generation and understanding. However, developing effective and efficient Test-Time Scaling (TTS) methods to unlock their full generative potential remains an underexplored challenge. To address this, we propose dMLLM-TTS, a novel framework operating on two complementary scaling axes: (1) trajectory exploration scaling to enhance the diversity of generated hypotheses, and (2) iterative refinement scaling for stable generation. Conventional TTS approaches typically perform linear search across these two dimensions, incurring substantial computational costs of O(NT) and requiring an external verifier for best-of-N selection. To overcome these limitations, we propose two innovations. First, we design an efficient hierarchical search algorithm with O(N+T) complexity that adaptively expands and prunes sampling trajectories. 
Second, we introduce a self-verified feedback mechanism that leverages the dMLLMs' intrinsic image understanding capabilities to assess text-image alignment, eliminating the need for external verifier. Extensive experiments on the GenEval benchmark across three representative dMLLMs (e.g., Lumina-DiMOO, MMaDA, Muddit) show that our framework substantially improves generation quality while achieving up to 6x greater efficiency than linear search. Project page: https://github.com/Alpha-VLLM/Lumina-DiMOO. + oai:arXiv.org:2512.19433v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Atalay Denknalbant, Emre Sezdi, Zeki Furkan Kutlu, Polat Goktas - - - Finch: Benchmarking Finance & Accounting across Spreadsheet-Centric Enterprise Workflows - https://arxiv.org/abs/2512.13168 - arXiv:2512.13168v2 Announce Type: replace -Abstract: We introduce a finance & accounting benchmark (Finch) for evaluating AI agents on real-world, enterprise-grade professional workflows -- interleaving data entry, structuring, formatting, web search, cross-file retrieval, calculation, modeling, validation, translation, visualization, and reporting. Finch is sourced from authentic enterprise workspaces at Enron (15,000 spreadsheets and 500,000 emails from 150 employees) and other financial institutions, preserving in-the-wild messiness across multimodal artifacts (text, tables, formulas, charts, code, and images) and spanning diverse domains such as budgeting, trading, and asset management. - We propose a workflow construction process that combines LLM-assisted discovery with expert annotation: (1) LLM-assisted, expert-verified derivation of workflows from real-world email threads and version histories of spreadsheet files, and (2) meticulous expert annotation for workflows, requiring over 700 hours of domain-expert effort. 
This yields 172 composite workflows with 384 tasks, involving 1,710 spreadsheets with 27 million cells, along with PDFs and other artifacts, capturing the intrinsically messy, long-horizon, knowledge-intensive, and collaborative nature of real-world enterprise work. - We conduct both human and automated evaluations of frontier AI systems including GPT 5.1, Claude Sonnet 4.5, Gemini 3 Pro, Grok 4, and Qwen 3 Max. GPT 5.1 Pro spends 48 hours in total yet passes only 38.4% of workflows, while Claude Sonnet 4.5 passes just 25.0%. Comprehensive case studies further surface the challenges that real-world enterprise workflows pose for AI agents. - oai:arXiv.org:2512.13168v2 - cs.AI - cs.CE - cs.IR - cs.MA - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Haoyu Dong, Pengkun Zhang, Yan Gao, Xuanyu Dong, Yilin Cheng, Mingzhe Lu, Adina Yakefu, Shuxin Zheng - Non-Resolution Reasoning (NRR): A Computational Framework for Contextual Identity and Ambiguity Preservation - https://arxiv.org/abs/2512.13478 - arXiv:2512.13478v4 Announce Type: replace -Abstract: Current artificial intelligence systems, despite remarkable capabilities in text generation and pattern recognition, exhibit a fundamental architectural limitation: they resolve ambiguity prematurely. This premature semantic collapse -- the tendency to collapse multiple valid interpretations into a single output -- stems from classical identity assumptions embedded in standard neural architectures. We propose Non-Resolution Reasoning (NRR), a computational framework that treats ambiguity retention as a valid reasoning mode rather than a defect to be eliminated. 
NRR introduces three core principles: (1) Non-Identity ($A \neq A$) -- the same symbol refers to different entities across contexts; (2) Approximate Identity ($A \approx A$) -- entities share partial structural overlap without being identical; and (3) Non-Resolution -- conflicting interpretations can coexist without forced convergence. We formalize these principles through three architectural components: Multi-Vector Embeddings for context-dependent representation, Non-Collapsing Attention for parallel interpretation retention, and Contextual Identity Tracking (CIT) for maintaining $A \neq A$ across inference. We demonstrate NRR's advantages through case studies in paradox handling, creative generation, and context-dependent reasoning. Crucially, we provide a minimal empirical validation on a synthetic context-shift task where an NRR-lite model achieves 90.9% out-of-distribution accuracy compared to 9.1% for standard architectures, demonstrating that ambiguity preservation enables structural generalization. NRR challenges the assumption that meaning must collapse to be useful, offering a foundation for AI systems capable of sophisticated ambiguity handling and creative reasoning. The question is not whether AI should resolve ambiguity, but when, how, and under whose control. - oai:arXiv.org:2512.13478v4 - cs.CL - cs.AI - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Kei Saito + Hybrid Analytical-Machine Learning Framework for Ripple Factor Estimation in Cockcroft-Walton Voltage Multipliers with Residual Correction for Non-Ideal Effects + https://arxiv.org/abs/2512.19434 + arXiv:2512.19434v1 Announce Type: new +Abstract: Cockcroft-Walton (CW) voltage multipliers suffer from output ripple that classical analytical models underestimate due to neglected non-idealities like diode drops and capacitor ESR, particularly in high-stage, low-frequency and heavy-load regimes. 
This paper proposes a hybrid framework that generates a comprehensive 324-case MATLAB/Simulink dataset varying stages (2-8), input voltage (5-25 kV), capacitance (1-10 μF), frequency (50-500 Hz) and load (6-60 MΩ), then trains a Random Forest model to predict residuals between simulated and theoretical peak-to-peak ripple. The approach achieves 70.6% RMSE reduction (131 V vs. 448 V) globally and 66.7% in critical regimes, with near-zero bias, enabling physically interpretable design optimization while outperforming pure ML in extrapolation reliability. + oai:arXiv.org:2512.19434v1 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Md. Tanvirul Islam - SportsGPT: An LLM-driven Framework for Interpretable Sports Motion Assessment and Training Guidance - https://arxiv.org/abs/2512.14121 - arXiv:2512.14121v2 Announce Type: replace -Abstract: Existing intelligent sports analysis systems mainly focus on "scoring and visualization," often lacking automatic performance diagnosis and interpretable training guidance. Recent advances in Large Language Models (LLMs) and motion analysis techniques provide new opportunities to address the above limitations. In this paper, we propose SportsGPT, an LLM-driven framework for interpretable sports motion assessment and training guidance, which establishes a closed loop from motion time-series input to professional training guidance. First, given a set of high-quality target models, we introduce MotionDTW, a two-stage time series alignment algorithm designed for accurate keyframe extraction from skeleton-based motion sequences. Subsequently, we design a Knowledge-based Interpretable Sports Motion Assessment Model (KISMAM) to obtain a set of interpretable assessment metrics (e.g., insufficient extension) by contrasting the keyframes with the target models. Finally, we propose SportsRAG, a RAG-based training guidance model built upon Qwen3. 
Leveraging a 6B-token knowledge base, it prompts the LLM to generate professional training guidance by retrieving domain-specific QA pairs. Experimental results demonstrate that MotionDTW significantly outperforms traditional methods with lower temporal error and higher IoU scores. Furthermore, ablation studies validate the KISMAM and SportsRAG, confirming that SportsGPT surpasses general LLMs in diagnostic accuracy and professionalism. - oai:arXiv.org:2512.14121v2 + MT-Mark: Rethinking Image Watermarking via Mutual-Teacher Collaboration with Adaptive Feature Modulation + https://arxiv.org/abs/2512.19438 + arXiv:2512.19438v1 Announce Type: new +Abstract: Existing deep image watermarking methods follow a fixed embedding-distortion-extraction pipeline, where the embedder and extractor are weakly coupled through a final loss and optimized in isolation. This design lacks explicit collaboration, leaving no structured mechanism for the embedder to incorporate decoding-aware cues or for the extractor to guide embedding during training. To address this architectural limitation, we rethink deep image watermarking by reformulating embedding and extraction as explicitly collaborative components. To realize this reformulation, we introduce a Collaborative Interaction Mechanism (CIM) that establishes direct, bidirectional communication between the embedder and extractor, enabling a mutual-teacher training paradigm and coordinated optimization. Built upon this explicitly collaborative architecture, we further propose an Adaptive Feature Modulation Module (AFMM) to support effective interaction. AFMM enables content-aware feature regulation by decoupling modulation structure and strength, guiding watermark embedding toward stable image features while suppressing host interference during extraction. Under CIM, the AFMMs on both sides form a closed-loop collaboration that aligns embedding behavior with extraction objectives. 
This architecture-level redesign changes how robustness is learned in watermarking systems. Rather than relying on exhaustive distortion simulation, robustness emerges from coordinated representation learning between embedding and extraction. Experiments on real-world and AI-generated datasets demonstrate that the proposed method consistently outperforms state-of-the-art approaches in watermark extraction accuracy while maintaining high perceptual quality, showing strong robustness and generalization. + oai:arXiv.org:2512.19438v1 cs.CV cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wenbo Tian, Ruting Lin, Hongxian Zheng, Yaodong Yang, Geng Wu, Zihao Zhang, Zhang Zhang + Fei Ge, Ying Huang, Jie Liu, Guixuan Zhang, Zhi Zeng, Shuwu Zhang, Hu Guan - Let the Barbarians In: How AI Can Accelerate Systems Performance Research - https://arxiv.org/abs/2512.14806 - arXiv:2512.14806v3 Announce Type: replace -Abstract: Artificial Intelligence (AI) is beginning to transform the research process by automating the discovery of new solutions. This shift depends on the availability of reliable verifiers, which AI-driven approaches require to validate candidate solutions. Research focused on improving systems performance is especially well-suited to this paradigm because system performance problems naturally admit such verifiers: candidates can be implemented in real systems or simulators and evaluated against predefined workloads. We term this iterative cycle of generation, evaluation, and refinement AI-Driven Research for Systems (ADRS). Using several open-source ADRS instances (i.e., OpenEvolve, GEPA, and ShinkaEvolve), we demonstrate across ten case studies (e.g., multi-region cloud scheduling, mixture-of-experts load balancing, LLM-based SQL, transaction scheduling) that ADRS-generated solutions can match or even outperform human state-of-the-art designs. 
Based on these findings, we outline best practices (e.g., level of prompt specification, amount of feedback, robust evaluation) for effectively using ADRS, and we discuss future research directions and their implications. Although we do not yet have a universal recipe for applying ADRS across all of systems research, we hope our preliminary findings, together with the challenges we identify, offer meaningful guidance for future work as researcher effort shifts increasingly toward problem formulation and strategic oversight. - Note: This paper is an extension of our prior work [14]. It adds extensive evaluation across multiple ADRS frameworks and provides deeper analysis and insights into best practices. - oai:arXiv.org:2512.14806v3 - cs.SE - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Audrey Cheng, Shu Liu, Melissa Pan, Zhifei Li, Shubham Agarwal, Mert Cemri, Bowen Wang, Alexander Krentsel, Tian Xia, Jongseok Park, Shuo Yang, Jeff Chen, Lakshya Agrawal, Ashwin Naren, Shulu Li, Ruiying Ma, Aditya Desai, Jiarong Xing, Koushik Sen, Matei Zaharia, Ion Stoica + An Inverse Scattering Inspired Fourier Neural Operator for Time-Dependent PDE Learning + https://arxiv.org/abs/2512.19439 + arXiv:2512.19439v1 Announce Type: new +Abstract: Learning accurate and stable time-advancement operators for nonlinear partial differential equations (PDEs) remains challenging, particularly for chaotic, stiff, and long-horizon dynamical systems. While neural operator methods such as the Fourier Neural Operator (FNO) and Koopman-inspired extensions achieve good short-term accuracy, their long-term stability is often limited by unconstrained latent representations and cumulative rollout errors. In this work, we introduce an inverse scattering inspired Fourier Neural Operator(IS-FNO), motivated by the reversibility and spectral evolution structure underlying the classical inverse scattering transform. 
The proposed architecture enforces a near-reversible pairing between lifting and projection maps through an explicitly invertible neural transformation, and models latent temporal evolution using exponential Fourier layers that naturally encode linear and nonlinear spectral dynamics. We systematically evaluate IS-FNO against baseline FNO and Koopman-based models on a range of benchmark PDEs, including the Michelson-Sivashinsky and Kuramoto-Sivashinsky equations (in one and two dimensions), as well as the integrable Korteweg-de Vries and Kadomtsev-Petviashvili equations. The results demonstrate that IS-FNO achieves lower short-term errors and substantially improved long-horizon stability in non-stiff regimes. For integrable systems, reduced IS-FNO variants that embed analytical scattering structure retain competitive long-term accuracy despite limited model capacity. Overall, this work shows that incorporating physical structure -- particularly reversibility and spectral evolution -- into neural operator design significantly enhances robustness and long-term predictive fidelity for nonlinear PDE dynamics. + oai:arXiv.org:2512.19439v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Rixin Yu - Element-Saving Hexahedral 3-Refinement Templates - https://arxiv.org/abs/2512.14862 - arXiv:2512.14862v3 Announce Type: replace -Abstract: Conforming hexahedral (hex) meshes are favored in simulation for their superior numerical properties, yet automatically decomposing a general 3D volume into a conforming hex mesh remains a formidable challenge. Among existing approaches, methods that construct an adaptive Cartesian grid and subsequently convert it into a conforming mesh stand out for their robustness. However, the topological schemes enabling this conversion require strict compatibility conditions among grid elements, which inevitably refine the initial grid and increase element count. 
Developing more relaxed conditions to minimize this overhead has been a persistent research focus. State-of-the-art 2-refinement octree methods employ a weakly-balanced condition combined with a generalized pairing condition, using a dual transformation to yield exceptionally low element counts. Yet this approach suffers from critical limitations: information stored on primal cells, such as signed distance fields or triangle index sets, is lost after dualization, and the resulting dual cells often exhibit poor minimum scaled Jacobian (min SJ) with non-planar quadrilateral (quad) faces. Alternatively, 3-refinement 27-tree methods can directly generate conforming hex meshes through template-based replacement of primal cells, producing higher-quality elements with planar quad faces. However, previous 3-refinement techniques impose conditions far stricter than their 2-refinement counterparts, severely over-refining grids by factors of ten to one hundred, creating a major bottleneck in simulation pipelines. This article introduces a novel 3-refinement approach that transforms an adaptive 3-refinement grid into a conforming grid using a moderately-balanced condition, slightly stronger than the weakly-balanced condition but substantially more relaxed than prior 3-refinement requirements... (check PDF for the full abstract) - oai:arXiv.org:2512.14862v3 - cs.CG - Mon, 22 Dec 2025 00:00:00 -0500 - replace http://creativecommons.org/licenses/by-nc-nd/4.0/ Hua Tong, Yongjie Jessica Zhang + Binary Kernel Logistic Regression: a sparsity-inducing formulation and a convergent decomposition training algorithm + https://arxiv.org/abs/2512.19440 + arXiv:2512.19440v1 Announce Type: new +Abstract: Kernel logistic regression (KLR) is a widely used supervised learning method for binary and multi-class classification, which provides estimates of the conditional probabilities of class membership for the data points. Unlike other kernel methods such as Support Vector Machines (SVMs), KLRs are generally not sparse. 
Previous attempts to deal with sparsity in KLR include a heuristic method referred to as the Import Vector Machine (IVM) and ad hoc regularizations such as the $\ell_{1/2}$-based one. Achieving a good trade-off between prediction accuracy and sparsity is still a challenging issue with a potential significant impact from the application point of view. In this work, we revisit binary KLR and propose an extension of the training formulation proposed by Keerthi et al., which is able to induce sparsity in the trained model, while maintaining good testing accuracy. To efficiently solve the dual of this formulation, we devise a decomposition algorithm of Sequential Minimal Optimization type which exploits second-order information, and for which we establish global convergence. Numerical experiments conducted on 12 datasets from the literature show that the proposed binary KLR approach achieves a competitive trade-off between accuracy and sparsity with respect to IVM, $\ell_{1/2}$-based regularization for KLR, and SVM while retaining the advantages of providing informative estimates of the class membership probabilities. + oai:arXiv.org:2512.19440v1 + cs.LG + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by-nc-nd/4.0/ - Hua Tong, Yongjie Jessica Zhang + Antonio Consolo, Andrea Manno, Edoardo Amaldi - Optimization of gridding algorithms for FFT by vector optimization - https://arxiv.org/abs/2512.14914 - arXiv:2512.14914v3 Announce Type: replace -Abstract: The Fast Fourier Transform (FFT) is widely used in applications such as MRI, CT, and interferometry; however, because of its dependence on uniformly sampled data, it requires the use of gridding techniques for practical implementation. The performance of these algorithms strongly depends on the choice of the gridding kernel, with the first prolate spheroidal wave function (PSWF) regarded as optimal. 
This work redefines kernel optimality through the lens of vector optimization (VO), introducing a rigorous framework that characterizes optimal kernels as Pareto-efficient solutions of an error shape operator. We establish the continuity of such operator, study the existence of solutions, and propose a novel methodology to construct kernels tailored to a desired target error function. The approach is implemented numerically via interior-point optimization. Comparative experiments demonstrate that the proposed kernels outperform both the PSWF and the state-of-the-art methods (MIRT-NUFFT) in specific regions of interest, achieving orders-of-magnitude improvements in mean absolute errors. These results confirm the potential of VO-based kernel design to provide customized accuracy profiles aligned with application-specific requirements. Future research will extend this framework to multidimensional cases and relative error minimization, with potential integration of machine learning for adaptive target error selection. - oai:arXiv.org:2512.14914v3 - math.NA - cs.NA - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-sa/4.0/ - Federico Achini, Paola Causin, Sara Vanini, Ke Chen, Simone Scacchi + D2Pruner: Debiased Importance and Structural Diversity for MLLM Token Pruning + https://arxiv.org/abs/2512.19443 + arXiv:2512.19443v1 Announce Type: new +Abstract: Processing long visual token sequences poses a significant computational burden on Multimodal Large Language Models (MLLMs). While token pruning offers a path to acceleration, we find that current methods, while adequate for general understanding, catastrophically fail on fine-grained localization tasks. 
We attribute this failure to the inherent flaws of the two prevailing strategies: importance-based methods suffer from a strong positional bias, an inherent model artifact that distracts from semantic content, while diversity-based methods exhibit structural blindness, disregarding the user's prompt and spatial redundancy. To address this, we introduce D2Pruner, a framework that rectifies these issues by uniquely combining debiased importance with a structural pruning mechanism. Our method first secures a core set of the most critical tokens as pivots based on a debiased attention score. It then performs a Maximal Independent Set (MIS) selection on the remaining tokens, which are modeled on a hybrid graph where edges signify spatial proximity and semantic similarity. This process iteratively preserves the most important and available token while removing its neighbors, ensuring that the supplementary tokens are chosen to maximize importance and diversity. Extensive experiments demonstrate that D2Pruner has exceptional efficiency and fidelity. Applied to LLaVA-1.5-7B for general understanding tasks, it reduces FLOPs by 74.2% while retaining 99.2% of its original performance. Furthermore, in challenging localization benchmarks with InternVL-2.5-8B, it maintains 85.7% performance at a 90% token reduction rate, marking a significant advancement with up to 63.53% improvement over existing methods. + oai:arXiv.org:2512.19443v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ Evelyn Zhang, Fufu Yu, Aoqi Wu, Zichen Wen, Ke Yan, Shouhong Ding, Biqing Qi, Linfeng Zhang - The Semantic Illusion: Certified Limits of Embedding-Based Hallucination Detection in RAG Systems - https://arxiv.org/abs/2512.15068 - arXiv:2512.15068v2 Announce Type: replace -Abstract: Retrieval-Augmented Generation (RAG) systems remain susceptible to hallucinations despite grounding in retrieved evidence. 
While current detection methods leverage embedding similarity and natural language inference (NLI), their reliability in safety-critical settings remains unproven. We apply conformal prediction to RAG hallucination detection, transforming heuristic scores into decision sets with finite-sample coverage guarantees (1-alpha). Using calibration sets of n=600, we demonstrate a fundamental dichotomy: on synthetic hallucinations (Natural Questions), embedding methods achieve 95% coverage with 0% False Positive Rate (FPR). However, on real hallucinations from RLHF-aligned models (HaluEval), the same methods fail catastrophically, yielding 100% FPR at target coverage. We analyze this failure through the lens of distributional tails, showing that while NLI models achieve acceptable AUC (0.81), the "hardest" hallucinations are semantically indistinguishable from faithful responses, forcing conformal thresholds to reject nearly all valid outputs. Crucially, GPT-4 as a judge achieves 7% FPR (95% CI:[3.4%, 13.7%]) on the same data, proving the task is solvable via reasoning but opaque to surface-level semantics--a phenomenon we term the "Semantic Illusion." - oai:arXiv.org:2512.15068v2 - cs.LG - cs.AI - cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Sensitivity-Aware Mixed-Precision Quantization for ReRAM-based Computing-in-Memory + https://arxiv.org/abs/2512.19445 + arXiv:2512.19445v1 Announce Type: new +Abstract: Compute-In-Memory (CIM) systems, particularly those utilizing ReRAM and memristive technologies, offer a promising path toward energy-efficient neural network computation. However, conventional quantization and compression techniques often fail to fully optimize performance and efficiency in these architectures. In this work, we present a structured quantization method that combines sensitivity analysis with mixed-precision strategies to enhance weight storage and computational performance on ReRAM-based CIM systems. 
Our approach improves ReRAM Crossbar utilization, significantly reducing power consumption, latency, and computational load, while maintaining high accuracy. Experimental results show 86.33% accuracy at 70% compression, alongside a 40% reduction in power consumption, demonstrating the method's effectiveness for power-constrained applications. + oai:arXiv.org:2512.19445v1 + cs.AR + cs.ET + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Debu Sinha + Guan-Cheng Chen, Chieh-Lin Tsai, Pei-Hsuan Tsai, Yuan-Hao Chang - Is Nano Banana Pro a Low-Level Vision All-Rounder? A Comprehensive Evaluation on 14 Tasks and 40 Datasets - https://arxiv.org/abs/2512.15110 - arXiv:2512.15110v2 Announce Type: replace -Abstract: The rapid evolution of text-to-image generation models has revolutionized visual content creation. While commercial products like Nano Banana Pro have garnered significant attention, their potential as generalist solvers for traditional low-level vision challenges remains largely underexplored. In this study, we investigate the critical question: Is Nano Banana Pro a Low-Level Vision All-Rounder? We conducted a comprehensive zero-shot evaluation across 14 distinct low-level tasks spanning 40 diverse datasets. By utilizing simple textual prompts without fine-tuning, we benchmarked Nano Banana Pro against state-of-the-art specialist models. Our extensive analysis reveals a distinct performance dichotomy: while \textbf{Nano Banana Pro demonstrates superior subjective visual quality}, often hallucinating plausible high-frequency details that surpass specialist models, it lags behind in traditional reference-based quantitative metrics. We attribute this discrepancy to the inherent stochasticity of generative models, which struggle to maintain the strict pixel-level consistency required by conventional metrics. 
This report identifies Nano Banana Pro as a capable zero-shot contender for low-level vision tasks, while highlighting that achieving the high fidelity of domain specialists remains a significant hurdle. - oai:arXiv.org:2512.15110v2 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-sa/4.0/ - Jialong Zuo, Haoyou Deng, Hanyu Zhou, Jiaxin Zhu, Yicheng Zhang, Yiwei Zhang, Yongxin Yan, Kaixing Huang, Weisen Chen, Yongtai Deng, Rui Jin, Nong Sang, Changxin Gao + A Gauss-Newton-Induced Structure-Exploiting Algorithm for Differentiable Optimal Control + https://arxiv.org/abs/2512.19447 + arXiv:2512.19447v1 Announce Type: new +Abstract: Differentiable optimal control, particularly differentiable nonlinear model predictive control (NMPC), provides a powerful framework that enjoys the complementary benefits of machine learning and control theory. A key enabler of differentiable optimal control is the computation of derivatives of the optimal trajectory with respect to problem parameters, i.e., trajectory derivatives. Previous works compute trajectory derivatives by solving a differential Karush-Kuhn-Tucker (KKT) system, and achieve this efficiently by constructing an equivalent auxiliary system. However, we find that directly exploiting the matrix structures in the differential KKT system yields significant computation speed improvements. Motivated by this insight, we propose FastDOC, which applies a Gauss-Newton approximation of Hessian and takes advantage of the resulting block-sparsity and positive semidefinite properties of the matrices involved. These structural properties enable us to accelerate the computationally expensive matrix factorization steps, resulting in a factor-of-two speedup in theoretical computational complexity, and in a synthetic benchmark FastDOC achieves up to a 180% time reduction compared to the baseline method. 
Finally, we validate the method on an imitation learning task for human-like autonomous driving, where the results demonstrate the effectiveness of the proposed FastDOC in practical applications. + oai:arXiv.org:2512.19447v1 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yuankun Chen, Zifei Nie, Xun Gong, Yunfeng Hu, Hong Chen - 3DProxyImg: Controllable 3D-Aware Animation Synthesis from Single Image via 2D-3D Aligned Proxy Embedding - https://arxiv.org/abs/2512.15126 - arXiv:2512.15126v2 Announce Type: replace -Abstract: 3D animation is central to modern visual media, yet traditional production pipelines remain labor-intensive, expertise-demanding, and computationally expensive. Recent AIGC-based approaches partially automate asset creation and rigging, but they either inherit the heavy costs of full 3D pipelines or rely on video-synthesis paradigms that sacrifice 3D controllability and interactivity. We focus on single-image 3D animation generation and argue that progress is fundamentally constrained by a trade-off between rendering quality and 3D control. - To address this limitation, we propose a lightweight 3D animation framework that decouples geometric control from appearance synthesis. The core idea is a 2D-3D aligned proxy representation that uses a coarse 3D estimate as a structural carrier, while delegating high-fidelity appearance and view synthesis to learned image-space generative priors. This proxy formulation enables 3D-aware motion control and interaction comparable to classical pipelines, without requiring accurate geometry or expensive optimization, and naturally extends to coherent background animation. 
Extensive experiments demonstrate that our method achieves efficient animation generation on low-power platforms and outperforms video-based 3D animation generation in identity preservation, geometric and textural consistency, and the level of precise, interactive control it offers to users. - oai:arXiv.org:2512.15126v2 + Sign Language Recognition using Parallel Bidirectional Reservoir Computing + https://arxiv.org/abs/2512.19451 + arXiv:2512.19451v1 Announce Type: new +Abstract: Sign language recognition (SLR) facilitates communication between deaf and hearing communities. Deep learning based SLR models are commonly used but require extensive computational resources, making them unsuitable for deployment on edge devices. To address these limitations, we propose a lightweight SLR system that combines parallel bidirectional reservoir computing (PBRC) with MediaPipe. MediaPipe enables real-time hand tracking and precise extraction of hand joint coordinates, which serve as input features for the PBRC architecture. The proposed PBRC architecture consists of two echo state network (ESN) based bidirectional reservoir computing (BRC) modules arranged in parallel to capture temporal dependencies, thereby creating a rich feature representation for classification. We trained our PBRC-based SLR system on the Word-Level American Sign Language (WLASL) video dataset, achieving top-1, top-5, and top-10 accuracies of 60.85%, 85.86%, and 91.74%, respectively. Training time was significantly reduced to 18.67 seconds due to the intrinsic properties of reservoir computing, compared to over 55 minutes for deep learning based methods such as Bi-GRU. This approach offers a lightweight, cost-effective solution for real-time SLR on edge devices. 
+ oai:arXiv.org:2512.19451v1 cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Yupeng Zhu, Xiongzhen Zhang, Ye Chen, Bingbing Ni + Nitin Kumar Singh, Arie Rachmad Syulistyo, Yuichiro Tanaka, Hakaru Tamukoh - A New Fast Finite Difference Scheme for Tempered Time Fractional Advection-Dispersion Equation with a Weak Singularity at Initial Time - https://arxiv.org/abs/2512.15141 - arXiv:2512.15141v3 Announce Type: replace -Abstract: In this paper, we propose a new second-order fast finite difference scheme in time for solving the Tempered Time Fractional Advection-Dispersion Equation. Under the assumption that the solution is nonsmooth at the initial time, we investigate the uniqueness, stability, and convergence of the scheme. Furthermore, we prove that the scheme achieves second-order convergence in both time and space. Finally, corresponding numerical examples are provided. - oai:arXiv.org:2512.15141v3 - math.NA - cs.NA - math.AP - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Liangcai Huang, Shujuan L\"u + MaP-AVR: A Meta-Action Planner for Agents Leveraging Vision Language Models and Retrieval-Augmented Generation + https://arxiv.org/abs/2512.19453 + arXiv:2512.19453v1 Announce Type: new +Abstract: Embodied robotic AI systems designed to manage complex daily tasks rely on a task planner to understand and decompose high-level tasks. While most research focuses on enhancing the task-understanding abilities of LLMs/VLMs through fine-tuning or chain-of-thought prompting, this paper argues that defining the planned skill set is equally crucial. To handle the complexity of daily environments, the skill set should possess a high degree of generalization ability. Empirically, more abstract expressions tend to be more generalizable. Therefore, we propose to abstract the planned result as a set of meta-actions. 
Each meta-action comprises three components: {move/rotate, end-effector status change, relationship with the environment}. This abstraction replaces human-centric concepts, such as grasping or pushing, with the robot's intrinsic functionalities. As a result, the planned outcomes align seamlessly with the complete range of actions that the robot is capable of performing. Furthermore, to ensure that the LLM/VLM accurately produces the desired meta-action format, we employ the Retrieval-Augmented Generation (RAG) technique, which leverages a database of human-annotated planning demonstrations to facilitate in-context learning. As the system successfully completes more tasks, the database will self-augment to continue supporting diversity. The meta-action set and its integration with RAG are two novel contributions of our planner, denoted as MaP-AVR, the meta-action planner for agents composed of VLM and RAG. To validate its efficacy, we design experiments using GPT-4o as the pre-trained LLM/VLM model and OmniGibson as our robotic platform. Our approach demonstrates promising performance compared to the current state-of-the-art method. Project page: https://map-avr.github.io/. + oai:arXiv.org:2512.19453v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhenglong Guo, Yiming Zhao, Feng Jiang, Heng Jin, Zongbao Feng, Jianbin Zhou, Siyuan Xu - VLA-AN: An Efficient and Onboard Vision-Language-Action Framework for Aerial Navigation in Complex Environments - https://arxiv.org/abs/2512.15258 - arXiv:2512.15258v2 Announce Type: replace -Abstract: This paper proposes VLA-AN, an efficient and onboard Vision-Language-Action (VLA) framework dedicated to autonomous drone navigation in complex environments. 
VLA-AN addresses four major limitations of existing large aerial navigation models: the data domain gap, insufficient temporal navigation with reasoning, safety issues with generative action policies, and onboard deployment constraints. First, we construct a high-fidelity dataset utilizing 3D Gaussian Splatting (3D-GS) to effectively bridge the domain gap. Second, we introduce a progressive three-stage training framework that sequentially reinforces scene comprehension, core flight skills, and complex navigation capabilities. Third, we design a lightweight, real-time action module coupled with geometric safety correction. This module ensures fast, collision-free, and stable command generation, mitigating the safety risks inherent in stochastic generative policies. Finally, through deep optimization of the onboard deployment pipeline, VLA-AN achieves a robust real-time 8.3x improvement in inference throughput on resource-constrained UAVs. Extensive experiments demonstrate that VLA-AN significantly improves spatial grounding, scene reasoning, and long-horizon navigation, achieving a maximum single-task success rate of 98.1%, and providing an efficient, practical solution for realizing full-chain closed-loop autonomy in lightweight aerial robots. - oai:arXiv.org:2512.15258v2 - cs.RO - cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Yuze Wu, Mo Zhu, Xingxing Li, Yuheng Du, Yuxin Fan, Wenjun Li, Zhichao Han, Xin Zhou, Fei Gao + SiamGPT: Quality-First Fine-Tuning for Stable Thai Text Generation + https://arxiv.org/abs/2512.19455 + arXiv:2512.19455v1 Announce Type: new +Abstract: Open-weights large language models remain difficult to deploy for Thai due to unstable generation under complex instructions, despite strong English performance. To mitigate these limitations, we present SiamGPT-32B, an open-weights model based on Qwen3-32B, fine-tuned with a Quality-First strategy emphasizing curated supervision over data scale. &#xA;
The fine-tuning pipeline combines translated high-complexity English instruction data with a Thai-adapted AutoIF framework for instruction and linguistic constraints. Using supervised fine-tuning only, without continual pretraining or corpus expansion, SiamGPT-32B improves instruction adherence, multi-turn robustness, and linguistic stability. Evaluations on the SEA-HELM benchmark show that SiamGPT-32B achieves the strongest overall performance among similar-scale open-weights Thai models, with consistent gains in instruction following, multi-turn dialogue, and natural language understanding. + oai:arXiv.org:2512.19455v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yuze Wu, Mo Zhu, Xingxing Li, Yuheng Du, Yuxin Fan, Wenjun Li, Zhichao Han, Xin Zhou, Fei Gao + Thittipat Pairatsuppawat, Abhibhu Tachaapornchai, Paweekorn Kusolsomboon, Chutikan Chaiwong, Thodsaporn Chay-intr, Kobkrit Viriyayudhakorn, Nongnuch Ketui, Aslan B. Wong - MiVLA: Towards Generalizable Vision-Language-Action Model with Human-Robot Mutual Imitation Pre-training - https://arxiv.org/abs/2512.15411 - arXiv:2512.15411v2 Announce Type: replace -Abstract: While leveraging abundant human videos and simulated robot data poses a scalable solution to the scarcity of real-world robot data, the generalization capability of existing vision-language-action models (VLAs) remains limited by mismatches in camera views, visual appearance, and embodiment morphologies. To overcome this limitation, we propose MiVLA, a generalizable VLA empowered by human-robot mutual imitation pre-training, which leverages inherent behavioral similarity between human hands and robotic arms to build a foundation of strong behavioral priors for both human actions and robotic control. Specifically, our method utilizes kinematic rules with left/right hand coordinate systems for bidirectional alignment between human and robot action spaces. 
Given human or simulated robot demonstrations, MiVLA is trained to forecast behavior trajectories for one embodiment, and imitate behaviors for another embodiment unseen in the demonstration. Based on this mutual imitation, it integrates the behavioral fidelity of real-world human data with the manipulative diversity of simulated robot data into a unified model, thereby enhancing the generalization capability for downstream tasks. Extensive experiments conducted on both simulation and real-world platforms with three robots (ARX, PiPer and LocoMan) demonstrate that MiVLA achieves strongly improved generalization capability, outperforming state-of-the-art VLAs (e.g., $\boldsymbol{\pi}_{0}$, $\boldsymbol{\pi}_{0.5}$ and H-RDT) by 25% in simulation, and 14% in real-world robot control tasks. - oai:arXiv.org:2512.15411v2 - cs.RO - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace http://creativecommons.org/licenses/by-nc-nd/4.0/ Zhenhan Yin, Xuanhan Wang, Jiahao Jiang, Kaiyuan Deng, Pengqi Chen, Shuangle Li, Chong Liu, Xing Xu, Jingkuan Song, Lianli Gao, Heng Tao Shen + Activations as Features: Probing LLMs for Generalizable Essay Scoring Representations + https://arxiv.org/abs/2512.19456 + arXiv:2512.19456v1 Announce Type: new +Abstract: Automated essay scoring (AES) is a challenging task in cross-prompt settings due to the diversity of scoring criteria. While previous studies have focused on the output of large language models (LLMs) to improve scoring accuracy, we believe activations from intermediate layers may also provide valuable information. To explore this possibility, we evaluated the discriminative power of LLMs' activations in the cross-prompt essay scoring task. Specifically, we used activations to fit probes and further analyzed the effects of different models and input content of LLMs on this discriminative power. &#xA;
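The essay-scoring abstract above fits probes on intermediate-layer activations. A toy illustration of what "fitting a probe" means, using synthetic stand-in activations rather than real LLM hidden states (the Gaussian features, dimensions, and ridge term are assumptions for the sketch):

```python
import numpy as np

# Synthetic "activations": Gaussian features with a class-dependent
# shift along one hidden direction, standing in for LLM hidden states.
rng = np.random.default_rng(1)
d, n = 32, 400                                   # feature dim, samples
y = rng.integers(0, 2, n) * 2 - 1                # labels in {-1, +1}
direction = rng.normal(size=d)
X = rng.normal(size=(n, d)) + 0.8 * y[:, None] * direction

# Least-squares linear probe (closed form) with a small ridge term;
# the activations are treated as frozen features, nothing upstream is trained.
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(d), X.T @ y)
acc = np.mean(np.sign(X @ w) == y)               # probe's training accuracy
```

High probe accuracy on held-out essays is then read as evidence that the activations themselves carry the quality signal, which is the discriminative-power claim the abstract makes.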
By computing the directions of essays across various trait dimensions under different prompts, we analyzed the variation in evaluation perspectives of large language models concerning essay types and traits. Results show that the activations possess strong discriminative power in evaluating essay quality and that LLMs can adapt their evaluation perspectives to different traits and essay types, effectively handling the diversity of scoring criteria in cross-prompt settings. + oai:arXiv.org:2512.19456v1 + cs.CL + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jinwei Chi, Ke Wang, Yu Chen, Xuanye Lin, Qiang Xu - Step-GUI Technical Report - https://arxiv.org/abs/2512.15431 - arXiv:2512.15431v2 Announce Type: replace -Abstract: Recent advances in multimodal large language models unlock unprecedented opportunities for GUI automation. However, a fundamental challenge remains: how to efficiently acquire high-quality training data while maintaining annotation reliability? We introduce a self-evolving training pipeline powered by the Calibrated Step Reward System, which converts model-generated trajectories into reliable training signals through trajectory-level calibration, achieving >90% annotation accuracy with 10-100x lower cost. Leveraging this pipeline, we introduce Step-GUI, a family of models (4B/8B) that achieves state-of-the-art GUI performance (8B: 80.2% AndroidWorld, 48.5% OSWorld, 62.6% ScreenShot-Pro) while maintaining robust general capabilities. As GUI agent capabilities improve, practical deployment demands standardized interfaces across heterogeneous devices while protecting user privacy. To this end, we propose GUI-MCP, the first Model Context Protocol for GUI automation with hierarchical architecture that combines low-level atomic operations and high-level task delegation to local specialist models, enabling high-privacy execution where sensitive data stays on-device. 
Finally, to assess whether agents can handle authentic everyday usage, we introduce AndroidDaily, a benchmark grounded in real-world mobile usage patterns with 3146 static actions and 235 end-to-end tasks across high-frequency daily scenarios (8B: static 89.91%, end-to-end 52.50%). Our work advances the development of practical GUI agents and demonstrates strong potential for real-world deployment in everyday digital interactions. - oai:arXiv.org:2512.15431v2 - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace + An Agentic Framework for Autonomous Materials Computation + https://arxiv.org/abs/2512.19458 + arXiv:2512.19458v1 Announce Type: new +Abstract: Large Language Models (LLMs) have emerged as powerful tools for accelerating scientific discovery, yet their static knowledge and hallucination issues hinder autonomous research applications. Recent advances integrate LLMs into agentic frameworks, enabling retrieval, reasoning, and tool use for complex scientific workflows. Here, we present a domain-specialized agent designed for reliable automation of first-principles materials computations. By embedding domain expertise, the agent ensures physically coherent multi-step workflows and consistently selects convergent, well-posed parameters, thereby enabling reliable end-to-end computational execution. A new benchmark of diverse computational tasks demonstrates that our system significantly outperforms standalone LLMs in both accuracy and robustness. This work establishes a verifiable foundation for autonomous computational experimentation and represents a key step toward fully automated scientific discovery. 
+ oai:arXiv.org:2512.19458v1 + cs.AI + cond-mat.mtrl-sci + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Haolong Yan, Jia Wang, Xin Huang, Yeqing Shen, Ziyang Meng, Zhimin Fan, Kaijun Tan, Jin Gao, Lieyu Shi, Mi Yang, Shiliang Yang, Zhirui Wang, Brian Li, Kang An, Chenyang Li, Lei Lei, Mengmeng Duan, Danxun Liang, Guodong Liu, Hang Cheng, Hao Wu, Jie Dong, Junhao Huang, Mei Chen, Renjie Yu, Shunshan Li, Xu Zhou, Yiting Dai, Yineng Deng, Yingdan Liang, Zelin Chen, Wen Sun, Chengxu Yan, Chunqin Xu, Dong Li, Fengqiong Xiao, Guanghao Fan, Guopeng Li, Guozhen Peng, Hongbing Li, Hang Li, Hongming Chen, Jingjing Xie, Jianyong Li, Jingyang Zhang, Jiaju Ren, Jiayu Yuan, Jianpeng Yin, Kai Cao, Liang Zhao, Liguo Tan, Liying Shi, Mengqiang Ren, Min Xu, Manjiao Liu, Mao Luo, Mingxin Wan, Na Wang, Nan Wu, Ning Wang, Peiyao Ma, Qingzhou Zhang, Qiao Wang, Qinlin Zeng, Qiong Gao, Qiongyao Li, Shangwu Zhong, Shuli Gao, Shaofan Liu, Shisi Gao, Shuang Luo, Xingbin Liu, Xiaojia Liu, Xiaojie Hou, Xin Liu, Xuanti Feng, Xuedan Cai, Xuan Wen, Xianwei Zhu, Xin Liang, Xin Liu, Xin Zhou, Yifan Sui, Yingxiu Zhao, Yukang Shi, Yunfang Xu, Yuqing Zeng, Yixun Zhang, Zejia Weng, Zhonghao Yan, Zhiguo Huang, Zhuoyu Wang, Zihan Yan, Zheng Ge, Jing Li, Yibo Zhu, Binxing Jiao, Xiangyu Zhang, Daxin Jiang + Zeyu Xia, Jinzhe Ma, Congjie Zheng, Shufei Zhang, Yuqiang Li, Hang Su, P. Hu, Changshui Zhang, Xingao Gong, Wanli Ouyang, Lei Bai, Dongzhan Zhou, Mao Su - LeaseGuard: Raft Leases Done Right - https://arxiv.org/abs/2512.15659 - arXiv:2512.15659v2 Announce Type: replace -Abstract: Raft is a leading consensus algorithm for replicating writes in distributed databases. However, distributed databases also require consistent reads. To guarantee read consistency, a Raft-based system must either accept the high communication overhead of a safety check for each read, or implement leader leases. 
Prior lease protocols are vaguely specified and hurt availability, so most Raft systems implement them incorrectly or not at all. We introduce LeaseGuard, a novel lease algorithm that relies on guarantees specific to Raft elections. LeaseGuard is simple, rigorously specified in TLA+, and includes two novel optimizations that maximize availability during leader failover. The first optimization restores write throughput quickly, and the second improves read availability. We evaluate LeaseGuard with a simulation in Python and an implementation in LogCabin, the C++ reference implementation of Raft. By replacing LogCabin's default consistency mechanism (quorum checks), LeaseGuard reduces the overhead of consistent reads from one to zero network roundtrips. It also improves write throughput from ~1000 to ~10,000 writes per second, by eliminating contention between writes and quorum reads. Whereas traditional leases ban all reads on a new leader while it waits for a lease, in our LeaseGuard test the new leader instantly allows 99% of reads to succeed. - oai:arXiv.org:2512.15659v2 - cs.DC - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - A. Jesse Jiryu Davis, Murat Demirbas, Lingzhi Deng + Epistemological Fault Lines Between Human and Artificial Intelligence + https://arxiv.org/abs/2512.19466 + arXiv:2512.19466v1 Announce Type: new +Abstract: Large language models (LLMs) are widely described as artificial intelligence, yet their epistemic profile diverges sharply from human cognition. Here we show that the apparent alignment between human and machine outputs conceals a deeper structural mismatch in how judgments are produced. 
Tracing the historical shift from symbolic AI and information filtering systems to large-scale generative transformers, we argue that LLMs are not epistemic agents but stochastic pattern-completion systems, formally describable as walks on high-dimensional graphs of linguistic transitions rather than as systems that form beliefs or models of the world. By systematically mapping human and artificial epistemic pipelines, we identify seven epistemic fault lines, divergences in grounding, parsing, experience, motivation, causal reasoning, metacognition, and value. We call the resulting condition Epistemia: a structural situation in which linguistic plausibility substitutes for epistemic evaluation, producing the feeling of knowing without the labor of judgment. We conclude by outlining consequences for evaluation, governance, and epistemic literacy in societies increasingly organized around generative AI. + oai:arXiv.org:2512.19466v1 + cs.CY + cs.CL + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Walter Quattrociocchi, Valerio Capraro, Matja\v{z} Perc - Stylized Synthetic Augmentation further improves Corruption Robustness - https://arxiv.org/abs/2512.15675 - arXiv:2512.15675v3 Announce Type: replace -Abstract: This paper proposes a training data augmentation pipeline that combines synthetic image data with neural style transfer in order to address the vulnerability of deep vision models to common corruptions. We show that although applying style transfer on synthetic images degrades their quality with respect to the common Frechet Inception Distance (FID) metric, these images are surprisingly beneficial for model training. We conduct a systematic empirical analysis of the effects of both augmentations and their key hyperparameters on the performance of image classifiers. 
Our results demonstrate that stylization and synthetic data complement each other well and can be combined with popular rule-based data augmentation techniques such as TrivialAugment, but not with others. Our method achieves state-of-the-art corruption robustness on several small-scale image classification benchmarks, reaching 93.54%, 74.9% and 50.86% robust accuracy on CIFAR-10-C, CIFAR-100-C and TinyImageNet-C, respectively. - oai:arXiv.org:2512.15675v3 - cs.CV - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-sa/4.0/ - Georg Siedel, Rojan Regmi, Abhirami Anand, Weijia Shao, Silvia Vock, Andrey Morozov + Fully Asynchronous Unsourced Random Access over Fading Channels + https://arxiv.org/abs/2512.19468 + arXiv:2512.19468v1 Announce Type: new +Abstract: We examine unsourced random access in a fully asynchronous setup, where active users transmit their data without restriction on the start time over a fading channel. In the proposed scheme, the transmitted signal consists of a pilot sequence and a polar codeword, with the polar codeword distributed across the data part of the packet in an on-off pattern. The receiver uses a double sliding-window decoder, where the inner window employs iterative decoding with joint timing and pilot detection, channel estimation, single-user decoding, and successive interference cancellation to recover the message bits, while the outer window enhances interference cancellation. The numerical results indicate that the proposed scheme exhibits only a slight performance loss compared to the synchronous benchmark while being more applicable in practice. &#xA;
+ oai:arXiv.org:2512.19468v1 + cs.IT + eess.SP + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Mert Ozates, Mohammad Kazemi, Gianluigi Liva, Deniz G\"und\"uz - mimic-video: Video-Action Models for Generalizable Robot Control Beyond VLAs - https://arxiv.org/abs/2512.15692 - arXiv:2512.15692v2 Announce Type: replace -Abstract: Prevailing Vision-Language-Action Models (VLAs) for robotic manipulation are built upon vision-language backbones pretrained on large-scale, but disconnected static web data. As a result, despite improved semantic generalization, the policy must implicitly infer complex physical dynamics and temporal dependencies solely from robot trajectories. This reliance creates an unsustainable data burden, necessitating continuous, large-scale expert data collection to compensate for the lack of innate physical understanding. We contend that while vision-language pretraining effectively captures semantic priors, it remains blind to physical causality. A more effective paradigm leverages video to jointly capture semantics and visual dynamics during pretraining, thereby isolating the remaining task of low-level control. To this end, we introduce mimic-video, a novel Video-Action Model (VAM) that pairs a pretrained Internet-scale video model with a flow matching-based action decoder conditioned on its latent representations. The decoder serves as an Inverse Dynamics Model (IDM), generating low-level robot actions from the latent representation of video-space action plans. Our extensive evaluation shows that our approach achieves state-of-the-art performance on simulated and real-world robotic manipulation tasks, improving sample efficiency by 10x and convergence speed by 2x compared to traditional VLA architectures. 
- oai:arXiv.org:2512.15692v2 - cs.RO - cs.AI - cs.CV + GLUE: Generative Latent Unification of Expertise-Informed Engineering Models + https://arxiv.org/abs/2512.19469 + arXiv:2512.19469v1 Announce Type: new +Abstract: Engineering complex systems (aircraft, buildings, vehicles) requires accounting for geometric and performance couplings across subsystems. As generative models proliferate for specialized domains (wings, structures, engines), a key research gap is how to coordinate frozen, pre-trained submodels to generate full-system designs that are feasible, diverse, and high-performing. We introduce Generative Latent Unification of Expertise-Informed Engineering Models (GLUE), which orchestrates pre-trained, frozen subsystem generators while enforcing system-level feasibility, optimality, and diversity. We propose and benchmark (i) data-driven GLUE models trained on pre-generated system-level designs and (ii) a data-free GLUE model trained online on a differentiable geometry layer. On a UAV design problem with five coupling constraints, we find that data-driven approaches yield diverse, high-performing designs but require large datasets to satisfy constraints reliably. The data-free approach is competitive with Bayesian optimization and gradient-based optimization in performance and feasibility while training a full generative model in only 10 min on a RTX 4090 GPU, requiring more than two orders of magnitude fewer geometry evaluations and FLOPs than the data-driven method. Ablations focused on data-free training show that subsystem output continuity affects coordination, and equality constraints can trigger mode collapse unless mitigated. By integrating unmodified, domain-informed submodels into a modular generative workflow, this work provides a viable path for scaling generative design to complex, real-world engineering systems. 
+ oai:arXiv.org:2512.19469v1 + cs.CE cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Tim Aebersold, Soheyl Massoudi, Mark D. Fuge - Maintaining the Level of a Payload carried by Multi-Robot System on Irregular Surface - https://arxiv.org/abs/2512.16024 - arXiv:2512.16024v2 Announce Type: replace -Abstract: In this paper, we introduce a multi robot payload transport system to carry payloads through an environment of unknown and uneven inclinations while maintaining the desired orientation of the payload. For this task, we used custom built robots with a linear actuator (pistons) mounted on top of each robot. The system continuously monitors the payload's orientation and computes the required piston height of each robot to maintain the desired orientation of the payload. In this work, we propose an open loop controller coupled with a closed loop PID controller to achieve the goal. As our modelling makes no assumptions on the type of terrain, the system can work on any unknown and uneven terrains and inclinations. We showcase the efficacy of our proposed controller by testing it on various simulated environments with varied and complex terrains. - oai:arXiv.org:2512.16024v2 - cs.RO - Mon, 22 Dec 2025 00:00:00 -0500 - replace http://creativecommons.org/licenses/by/4.0/ Rishabh Dev Yadav, Shrey Agrawal, Kamalakar Karlapalem + Multi-Layer Confidence Scoring for Detection of Out-of-Distribution Samples, Adversarial Attacks, and In-Distribution Misclassifications + https://arxiv.org/abs/2512.19472 + arXiv:2512.19472v1 Announce Type: new +Abstract: The recent explosive growth in Deep Neural Networks applications raises concerns about the black-box usage of such models, with limited transparency and trustworthiness in high-stakes domains, which have been crystallized as regulatory requirements such as the European Union Artificial Intelligence Act. &#xA;
While models with embedded confidence metrics have been proposed, such approaches cannot be applied to already existing models without retraining, limiting their broad application. On the other hand, post-hoc methods, which evaluate pre-trained models, focus on solving problems related to improving the confidence in the model's predictions, and detecting Out-Of-Distribution or Adversarial Attacks samples as independent applications. To tackle the limited applicability of already existing methods, we introduce Multi-Layer Analysis for Confidence Scoring (MACS), a unified post-hoc framework that analyzes intermediate activations to produce classification-maps. From the classification-maps, we derive a score applicable for confidence estimation, detecting distributional shifts and adversarial attacks, unifying the three problems in a common framework, and achieving performances that surpass the state-of-the-art approaches in our experiments with the VGG16 and ViTb16 models with a fraction of their computational overhead. + oai:arXiv.org:2512.19472v1 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Rishabh Dev Yadav, Shrey Agrawal, Kamalakar Karlapalem + Lorenzo Capelli, Leandro de Souza Rosa, Gianluca Setti, Mauro Mangia, Riccardo Rovatti - LLM4Perf: Large Language Models Are Effective Samplers for Multi-Objective Performance Modeling - https://arxiv.org/abs/2512.16070 - arXiv:2512.16070v2 Announce Type: replace -Abstract: The performance of modern software systems is critically dependent on their complex configuration options. Building accurate performance models to navigate this vast space requires effective sampling strategies, yet existing methods often struggle with multi-objective optimization and cannot leverage semantic information from documentation. 
The recent success of Large Language Models (LLMs) motivates the central question of this work: Can LLMs serve as effective samplers for multi-objective performance modeling? To explore this, we present a comprehensive empirical study investigating the capabilities and characteristics of LLM-driven sampling. We design and implement LLM4Perf, a feedback-based framework, and use it to systematically evaluate the LLM-guided sampling process across four highly configurable, real-world systems. Our study reveals that the LLM-guided approach outperforms traditional baselines in most cases. Quantitatively, LLM4Perf achieves the best performance in nearly 68.8% (77 out of 112) of all evaluation scenarios, demonstrating its superior effectiveness. We find this effectiveness stems from the LLM's dual capabilities of configuration space pruning and feedback-driven strategy refinement. The effectiveness of this pruning is further validated by the fact that it also improves the performance of the baseline methods in nearly 91.5% (410 out of 448) of cases. Furthermore, we show how the LLM choices for each component and hyperparameters within LLM4Perf affect its effectiveness. Overall, this paper provides strong evidence for the effectiveness of LLMs in performance engineering and offers concrete insights into the mechanisms that drive their success. - oai:arXiv.org:2512.16070v2 - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Exact Recourse Functions for Aggregations of EVs Operating in Imbalance Markets + https://arxiv.org/abs/2512.19473 + arXiv:2512.19473v1 Announce Type: new +Abstract: We study optimal charging of large electric vehicle populations that are exposed to a single real-time imbalance price. The problem is naturally cast as a multistage stochastic linear programme (MSLP), which can be solved by algorithms such as Stochastic Dual Dynamic Programming. However, these methods scale poorly with the number of devices and stages. 
This paper presents a novel approach to overcome this curse of dimensionality. Building on prior work that characterises the aggregate flexibility sets of populations of EVs as a permutahedron, we reformulate the original problem in terms of aggregated quantities. The geometric structure of permutahedra lets us (i) construct an optimal disaggregation policy, (ii) derive an exact, lower-dimensional MSLP, and (iii) characterise the expected recourse function as piecewise affine with a finite, explicit partition. In particular, we provide closed-form expressions for the slopes and intercepts of each affine region via truncated expectations of future prices, yielding an exact form for the recourse function and first-stage policy. Comprehensive numerical studies validate our claims and demonstrate the practical utility of this work. + oai:arXiv.org:2512.19473v1 eess.SY cs.SY Tue, 23 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ Karan Mukhi, Licio Romao, Alessandro Abate - Mitigating Hallucinations in Healthcare LLMs with Granular Fact-Checking and Domain-Specific Adaptation - https://arxiv.org/abs/2512.16189 - arXiv:2512.16189v2 Announce Type: replace -Abstract: In healthcare, it is essential for any LLM-generated output to be reliable and accurate, particularly in cases involving decision-making and patient safety. However, the outputs are often unreliable in such critical areas due to the risk of hallucinated outputs from the LLMs. To address this issue, we propose a fact-checking module that operates independently of any LLM, along with a domain-specific summarization model designed to minimize hallucination rates. &#xA;
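The EV-recourse abstract above relies on aggregate flexibility sets being permutahedra (the convex hull of all permutations of a vector). As background to that characterisation, membership can be tested with the standard majorization criterion (Rado's theorem); a small sketch, with example vectors chosen arbitrarily and no claim that this is the paper's algorithm:

```python
import numpy as np

def in_permutahedron(x, v, tol=1e-9):
    """Rado's theorem: x lies in conv{permutations of v} iff the totals
    match and every partial sum of x's k largest entries is at most
    the corresponding partial sum for v (x is majorized by v)."""
    x, v = np.sort(x)[::-1], np.sort(v)[::-1]        # sort descending
    if abs(x.sum() - v.sum()) > tol:
        return False
    return bool(np.all(np.cumsum(x) <= np.cumsum(v) + tol))

v = np.array([3.0, 2.0, 1.0])
assert in_permutahedron(np.array([2.0, 2.0, 2.0]), v)      # centroid is inside
assert not in_permutahedron(np.array([3.5, 1.5, 1.0]), v)  # too concentrated
```

The test is O(n log n), which hints at why working with the aggregate polytope scales so much better than tracking every device individually.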
Our model is fine-tuned using Low-Rank Adaptation (LoRA) on the MIMIC-III dataset and is paired with the fact-checking module, which uses numerical tests for correctness and logical checks at a granular level through discrete logic in natural language processing (NLP) to validate facts against electronic health records (EHRs). We trained the LLM model on the full MIMIC-III dataset. For evaluation of the fact-checking module, we sampled 104 summaries, extracted them into 3,786 propositions, and used these as facts. The fact-checking module achieves a precision of 0.8904, a recall of 0.8234, and an F1-score of 0.8556. Additionally, the LLM summary model achieves a ROUGE-1 score of 0.5797 and a BERTScore of 0.9120 for summary quality. - oai:arXiv.org:2512.16189v2 cs.CL Mon, 22 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ Musarrat Zeba, Abdullah Al Mamun, Kishoar Jahan Tithee, Debopom Sutradhar, Mohaimenul Azam Khan Raiaan, Saddam Mukta, Reem E. Mohamed, Md Rafiqul Islam, Yakub Sebastian, Mukhtar Hussain, Sami Azam + A Large-Language-Model Framework for Automated Humanitarian Situation Reporting + https://arxiv.org/abs/2512.19475 + arXiv:2512.19475v1 Announce Type: new +Abstract: Timely and accurate situational reports are essential for humanitarian decision-making, yet current workflows remain largely manual, resource intensive, and inconsistent. We present a fully automated framework that uses large language models (LLMs) to transform heterogeneous humanitarian documents into structured and evidence-grounded reports. The system integrates semantic text clustering, automatic question generation, retrieval augmented answer extraction with citations, multi-level summarization, and executive summary generation, supported by internal evaluation metrics that emulate expert reasoning. We evaluated the framework across 13 humanitarian events, including natural disasters and conflicts, using more than 1,100 documents from verified sources such as ReliefWeb. The generated questions achieved 84.7 percent relevance, 84.0 percent importance, and 76.4 percent urgency. The extracted answers reached 86.3 percent relevance, with citation precision and recall both exceeding 76 percent. &#xA;
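Several abstracts above quote precision, recall, and F1 together; as a quick arithmetic sanity check, F1 is the harmonic mean of the two, and the fact-checking module's reported figures are self-consistent:

```python
# Consistency check: F1 = harmonic mean of precision and recall,
# using the fact-checking module's reported values.
precision, recall = 0.8904, 0.8234
f1 = 2 * precision * recall / (precision + recall)
assert round(f1, 4) == 0.8556   # matches the reported F1-score
```

The same identity can be applied to any precision/recall pair quoted in these feeds to catch transcription errors in reported metrics.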
Agreement between human and LLM based evaluations surpassed an F1 score of 0.80. Comparative analysis shows that the proposed framework produces reports that are more structured, interpretable, and actionable than existing baselines. By combining LLM reasoning with transparent citation linking and multi-level evaluation, this study demonstrates that generative AI can autonomously produce accurate, verifiable, and operationally useful humanitarian situation reports. + oai:arXiv.org:2512.19475v1 cs.CL - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Tue, 23 Dec 2025 00:00:00 -0500 + new http://creativecommons.org/licenses/by/4.0/ - Musarrat Zeba, Abdullah Al Mamun, Kishoar Jahan Tithee, Debopom Sutradhar, Mohaimenul Azam Khan Raiaan, Saddam Mukta, Reem E. Mohamed, Md Rafiqul Islam, Yakub Sebastian, Mukhtar Hussain, Sami Azam + Ivan Decostanzi, Yelena Mejova, Kyriaki Kalimeri - Trustworthy and Controllable Professional Knowledge Utilization in Large Language Models with TEE-GPU Execution - https://arxiv.org/abs/2512.16238 - arXiv:2512.16238v2 Announce Type: replace -Abstract: Future improvements in large language model (LLM) services increasingly hinge on access to high-value professional knowledge rather than more generic web data. However, the data providers of this knowledge face a skewed tradeoff between income and risk: they receive little share of downstream value yet retain copyright and privacy liability, making them reluctant to contribute their assets to LLM services. Existing techniques do not offer a trustworthy and controllable way to use professional knowledge, because they keep providers in the dark and combine knowledge parameters with the underlying LLM backbone. - In this paper, we present PKUS, the Professional Knowledge Utilization System, which treats professional knowledge as a first-class, separable artifact. 
PKUS keeps the backbone model on GPUs and encodes each provider's contribution as a compact adapter that executes only inside an attested Trusted Execution Environment (TEE). A hardware-rooted lifecycle protocol, adapter pruning, multi-provider aggregation, and split-execution scheduling together make this design practical at serving time. On SST-2, MNLI, and SQuAD with GPT-2 Large and Llama-3.2-1B, PKUS preserves model utility, matching the accuracy and F1 of full fine-tuning and plain LoRA, while achieving the lowest per-request latency with 8.1-11.9x speedup over CPU-only TEE inference and naive CPU-GPU co-execution. - oai:arXiv.org:2512.16238v2 - cs.OS - Mon, 22 Dec 2025 00:00:00 -0500 + Emotion-Director: Bridging Affective Shortcut in Emotion-Oriented Image Generation + https://arxiv.org/abs/2512.19479 + arXiv:2512.19479v1 Announce Type: new +Abstract: Image generation based on diffusion models has demonstrated impressive capability, motivating exploration into diverse and specialized applications. Owing to the importance of emotion in advertising, emotion-oriented image generation has attracted increasing attention. However, current emotion-oriented methods suffer from an affective shortcut, where emotions are approximated to semantics. As evidenced by two decades of research, emotion is not equivalent to semantics. To this end, we propose Emotion-Director, a cross-modal collaboration framework consisting of two modules. First, we propose a cross-Modal Collaborative diffusion model, abbreviated as MC-Diffusion. MC-Diffusion integrates visual prompts with textual prompts for guidance, enabling the generation of emotion-oriented images beyond semantics. Further, we improve the DPO optimization by a negative visual prompt, enhancing the model's sensitivity to different emotions under the same semantics. Second, we propose MC-Agent, a cross-Modal Collaborative Agent system that rewrites textual prompts to express the intended emotions. 
To avoid template-like rewrites, MC-Agent employs multiple agents to simulate human subjectivity toward emotions, and adopts a chain-of-concept workflow that improves the visual expressiveness of the rewritten prompts. Extensive qualitative and quantitative experiments demonstrate the superiority of Emotion-Director in emotion-oriented image generation.
 + oai:arXiv.org:2512.19479v1
 cs.CV
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Guoli Jia, Junyao Hu, Xinwei Long, Kai Tian, Kaiyan Zhang, KaiKai Zhao, Ning Ding, Bowen Zhou
 A Dataset and Preliminary Study of Using GPT-5 for Code-change Impact Analysis
 https://arxiv.org/abs/2512.19481
 arXiv:2512.19481v1 Announce Type: new
+Abstract: Understanding source code changes and their impact on other code entities is a crucial skill in software development. However, the analysis of code changes and their impact is often performed manually and therefore is time-consuming. Recent advancements in AI, and in particular large language models (LLMs), show promise to help developers in various code analysis tasks. However, the extent to which this potential can be utilized for understanding code changes and their impact is underexplored. To address this gap, we study the capabilities of GPT-5 and GPT-5-mini to predict the code entities impacted by given source code changes. We construct a dataset containing information about seed-changes, change pairs, and change types for each commit. Existing datasets lack crucial information about seed changes and impacted code entities. Our experiments evaluate the LLMs in two configurations: (1) seed-change information and the parent commit tree and (2) seed-change information, the parent commit tree, and the diff hunk of each seed change. We found that both LLMs perform poorly in the two experiments, with GPT-5 outperforming GPT-5-mini. Furthermore, providing the diff hunks helps both models to slightly improve their performance.
+ oai:arXiv.org:2512.19481v1
 cs.SE
 cs.AI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Katharina Stengg, Christian Macho, Martin Pinzger
 Dynamic Stream Network for Combinatorial Explosion Problem in Deformable Medical Image Registration
 https://arxiv.org/abs/2512.19486
 arXiv:2512.19486v1 Announce Type: new
+Abstract: The combinatorial explosion problem caused by dual inputs presents a critical challenge in Deformable Medical Image Registration (DMIR). Since DMIR processes two images simultaneously as input, the number of feature combinations grows exponentially, so the model considers more interfering features during the feature modeling process. Introducing dynamics into the receptive fields and weights of the network enables the model to eliminate interfering feature combinations and to model the potential feature relationships. In this paper, we propose the Dynamic Stream Network (DySNet), in which the receptive fields and weights are dynamically adjusted, ultimately enabling the model to ignore interfering feature combinations and model the potential feature relationships. DySNet introduces two key innovations: 1) the Adaptive Stream Basin (AdSB) module dynamically adjusts the shape of the receptive field, thereby enabling the model to focus on the feature relationships with greater correlation; 2) the Dynamic Stream Attention (DySA) mechanism generates dynamic weights to search for more valuable feature relationships. Extensive experiments have shown that DySNet consistently outperforms the most advanced DMIR methods, highlighting its outstanding generalization ability. Our code will be released at: https://github.com/ShaochenBi/DySNet.
+ oai:arXiv.org:2512.19486v1
 cs.CV
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-nc-sa/4.0/
 Shaochen Bi, Yuting He, Weiming Wang, Hao Chen
 Lightweight Intrusion Detection in IoT via SHAP-Guided Feature Pruning and Knowledge-Distilled Kronecker Networks
 https://arxiv.org/abs/2512.19488
 arXiv:2512.19488v1 Announce Type: new
+Abstract: The widespread deployment of Internet of Things (IoT) devices requires intrusion detection systems (IDS) with high accuracy while operating under strict resource constraints. Conventional deep learning IDS are often too large and computationally intensive for edge deployment. We propose a lightweight IDS that combines SHAP-guided feature pruning with knowledge-distilled Kronecker networks. A high-capacity teacher model identifies the most relevant features through SHAP explanations, and a compressed student leverages Kronecker-structured layers to minimize parameters while preserving discriminative inputs. Knowledge distillation transfers softened decision boundaries from teacher to student, improving generalization under compression. Experiments on the TON_IoT dataset show that the student is nearly three orders of magnitude smaller than the teacher yet sustains macro-F1 above 0.986 with millisecond-level inference latency. The results demonstrate that explainability-driven pruning and structured compression can jointly enable scalable, low-latency, and energy-efficient IDS for heterogeneous IoT environments.
+ oai:arXiv.org:2512.19488v1
 cs.LG
 cs.NI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 10.1109/CommNet68224.2025.11288887
 Hafsa Benaddi, Mohammed Jouhari, Nouha Laamech, Anas Motii, Khalil Ibrahimi
 Learning from sanctioned government suppliers: A machine learning and network science approach to detecting fraud and corruption in Mexico
 https://arxiv.org/abs/2512.19491
 arXiv:2512.19491v1 Announce Type: new
+Abstract: Detecting fraud and corruption in public procurement remains a major challenge for governments worldwide. Most research to date builds on domain-knowledge-based corruption risk indicators of individual contract-level features, and some also analyzes contracting network patterns. A critical barrier for supervised machine learning is the absence of confirmed non-corrupt (negative) examples, which makes conventional machine learning inappropriate for this task. Using publicly available data on federally funded procurement in Mexico and company sanction records, this study implements positive-unlabeled (PU) learning algorithms that integrate domain-knowledge-based red flags with network-derived features to identify likely corrupt and fraudulent contracts. The best-performing PU model on average captures 32 percent more known positives and performs on average 2.3 times better than random guessing, substantially outperforming approaches based solely on traditional red flags. The analysis of the Shapley Additive Explanations reveals that network-derived features, particularly those associated with contracts in the network core or suppliers with high eigenvector centrality, are the most important. Traditional red flags further enhance model performance in line with expectations, albeit mainly for contracts awarded through competitive tenders. This methodology can support law enforcement in Mexico, and it can be adapted to other national contexts too.
+ oai:arXiv.org:2512.19491v1
 cs.LG
 cs.CY
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by-nc-nd/4.0/
 Martí Medina-Hernández, Janos Kertész, Mihály Fazekas
 Fare Zone Assignment
 https://arxiv.org/abs/2512.19493
 arXiv:2512.19493v1 Announce Type: new
+Abstract: Tariff setting in public transportation networks is an important challenge. A popular approach is to partition the network into fare zones ("zoning") and fix journey prices depending on the number of traversed zones ("pricing"). In this paper, we focus on finding revenue-optimal solutions to the zoning problem for a given concave pricing function. We consider tree networks with $n$ vertices, since trees already pose non-trivial algorithmic challenges. Our main results are efficient algorithms that yield a simple $\mathcal{O}(\log n)$-approximation as well as a more involved $\mathcal{O}(\log n/\log \log n)$-approximation. We show how to solve the problem exactly on rooted instances, in which all demand arises at the same source. For paths, we prove strong NP-hardness and outline a PTAS. Moreover, we show that computing an optimal solution is in FPT or XP for several natural problem parameters.
 + oai:arXiv.org:2512.19493v1
 cs.DS
 cs.GT
 math.OC
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Martin Hoefer, Lennart Kauther, Philipp Pabst, Britta Peis, Khai Van Tran
 Kolmogorov-Arnold Graph Neural Networks Applied to Inorganic Nanomaterials Dataset
 https://arxiv.org/abs/2512.19494
 arXiv:2512.19494v1 Announce Type: new
+Abstract: The recent development of Kolmogorov-Arnold Networks (KANs) introduced new discoveries in the field of Graph Neural Networks (GNNs), expanding the existing set of models with KAN-based versions of GNNs, which often surpass the accuracy of MultiLayer Perceptron (MLP)-based GNNs.
These models were widely tested on graph datasets consisting of organic molecules; however, those studies disregarded inorganic nanomaterials datasets. In this work, we close this gap by applying Kolmogorov-Arnold Graph Neural Networks (KAGNNs) to a recently published large inorganic nanomaterials dataset called CHILI. For this, we adapt and test KAGNNs appropriate for this dataset. Our experiments reveal that on the CHILI datasets, particularly on CHILI-3K, KAGNNs substantially surpass conventional GNNs in classification, achieving state-of-the-art results.
 + oai:arXiv.org:2512.19494v1
 cs.LG
 cs.AI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Nikita Volzhin, Soowhan Yoon
 FusionNet: Physics-Aware Representation Learning for Multi-Spectral and Thermal Data via Trainable Signal-Processing Priors
 https://arxiv.org/abs/2512.19504
 arXiv:2512.19504v1 Announce Type: new
+Abstract: Modern deep learning models operating on multi-modal visual signals often rely on inductive biases that are poorly aligned with the physical processes governing signal formation, leading to brittle performance under cross-spectral and real-world conditions. In particular, approaches that prioritise direct thermal cues struggle to capture indirect yet persistent environmental alterations induced by sustained heat emissions.
 This work introduces a physics-aware representation learning framework that leverages multi-spectral information to model stable signatures of long-term physical processes. Specifically, a geological Short Wave Infrared (SWIR) ratio sensitive to soil property changes is integrated with Thermal Infrared (TIR) data through an intermediate fusion architecture, instantiated as FusionNet.
The proposed backbone embeds trainable differential signal-processing priors within convolutional layers, combines mixed pooling strategies, and employs wider receptive fields to enhance robustness across spectral modalities. + Systematic ablations show that each architectural component contributes to performance gains, with DGCNN achieving 88.7% accuracy on the SWIR ratio and FusionNet reaching 90.6%, outperforming state-of-the-art baselines across five spectral configurations. Transfer learning experiments further show that ImageNet pretraining degrades TIR performance, highlighting the importance of modality-aware training for cross-spectral learning. + Evaluated on real-world data, the results demonstrate that combining physics-aware feature selection with principled deep learning architectures yields robust and generalisable representations, illustrating how first-principles signal modelling can improve multi-spectral learning under challenging conditions. + oai:arXiv.org:2512.19504v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Georgios Voulgaris + + + DK-STN: A Domain Knowledge Embedded Spatio-Temporal Network Model for MJO Forecast + https://arxiv.org/abs/2512.19506 + arXiv:2512.19506v1 Announce Type: new +Abstract: Understanding and predicting the Madden-Julian Oscillation (MJO) is fundamental for precipitation forecasting and disaster prevention. To date, long-term and accurate MJO prediction has remained a challenge for researchers. Conventional MJO prediction methods using Numerical Weather Prediction (NWP) are resource-intensive, time-consuming, and highly unstable (most NWP methods are sensitive to seasons, with better MJO forecast results in winter). 
While existing Artificial Neural Network (ANN) methods save resources and speed up forecasting, their accuracy never reaches the 28-day horizon achieved by the state-of-the-art NWP method, i.e., the operational forecasts from ECMWF, since neural networks cannot handle climate data effectively. In this paper, we present a Domain Knowledge Embedded Spatio-Temporal Network (DK-STN), a stable neural network model for accurate and efficient MJO forecasting. It combines the benefits of NWP and ANN methods and successfully improves the forecast accuracy of ANN methods while maintaining a high level of efficiency and stability. We begin with a spatial-temporal network (STN) and embed domain knowledge in it using two key methods: (i) applying a domain knowledge enhancement method and (ii) integrating a domain knowledge processing method into network training. We evaluated DK-STN with the 5th generation of ECMWF reanalysis (ERA5) data and compared it with ECMWF. Given 7 days of climate data as input, DK-STN can generate reliable forecasts for the following 28 days in 1-2 seconds, with an error of only 2-3 days in different seasons. DK-STN matches ECMWF's forecast accuracy while being significantly more efficient and stable.
 + oai:arXiv.org:2512.19506v1
 cs.LG
 cs.AI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Hongliang Li, Nong Zhang, Zhewen Xu, Xiang Li, Changzheng Liu, Chongbo Zhao, Jie Wu
 Beyond Language Boundaries: Uncovering Programming Language Families for Code Language Models
 https://arxiv.org/abs/2512.19509
 arXiv:2512.19509v1 Announce Type: new
+Abstract: The rapid proliferation of diverse programming languages presents both opportunities and challenges for developing multilingual code LLMs.
While existing techniques often train code LLMs by simply aggregating multilingual code data, few explore the deeper relationships between programming languages (PLs) and how such relationships can be utilized to optimize the training and inference of code LLMs. In this work, we investigate 2 fundamental questions: 1) What are the deep linguistic relationships among PLs? and 2) How can these relationships be leveraged to improve multilingual code LLMs? We propose an embedding-based framework to uncover the latent families of PLs. Our approach begins by defining 21 primary linguistic features of programming languages, such as variable definition, control structures, and method declarations, and then employs LLMs to generate feature-aligned code samples across multiple languages. By embedding these semantically parallel code snippets from 19 languages, we construct a similarity matrix and perform hierarchical clustering to uncover inherent language relationships. Our analysis reveals clear hierarchical structures among programming languages. Closely related languages form well-defined clusters (e.g., C, C++, Java, and Swift group together), while Go emerges as a central language with the highest cross-language similarity. Building on the uncovered language families, we propose three strategies to enhance multilingual LLM training: transfer learning across linguistically related languages, linguistic proximity-guided curriculum learning, and centroid-based intermediary code translation. Experiments on 4 code intelligence tasks demonstrate that our methods significantly improve multilingual LLM performance. This work offers a universal perspective on programming languages and advances more effective strategies for multilingual code LLM training.
+ oai:arXiv.org:2512.19509v1 + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Shangbo Yun, Xiaodong Gu, Jianghong Huang, Beijun Shen + + + Toward Scalable and Valid Conditional Independence Testing with Spectral Representations + https://arxiv.org/abs/2512.19510 + arXiv:2512.19510v1 Announce Type: new +Abstract: Conditional independence (CI) is central to causal inference, feature selection, and graphical modeling, yet it is untestable in many settings without additional assumptions. Existing CI tests often rely on restrictive structural conditions, limiting their validity on real-world data. Kernel methods using the partial covariance operator offer a more principled approach but suffer from limited adaptivity, slow convergence, and poor scalability. In this work, we explore whether representation learning can help address these limitations. Specifically, we focus on representations derived from the singular value decomposition of the partial covariance operator and use them to construct a simple test statistic, reminiscent of the Hilbert-Schmidt Independence Criterion (HSIC). We also introduce a practical bi-level contrastive algorithm to learn these representations. Our theory links representation learning error to test performance and establishes asymptotic validity and power guarantees. Preliminary experiments suggest that this approach offers a practical and statistically grounded path toward scalable CI testing, bridging kernel-based theory with modern representation learning. 
+ oai:arXiv.org:2512.19510v1 + cs.LG + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Alek Frohlich, Vladimir Kostic, Karim Lounici, Daniel Perazzo, Massimiliano Pontil + + + Anatomy-R1: Enhancing Anatomy Reasoning in Multimodal Large Language Models via Anatomical Similarity Curriculum and Group Diversity Augmentation + https://arxiv.org/abs/2512.19512 + arXiv:2512.19512v1 Announce Type: new +Abstract: Multimodal Large Language Models (MLLMs) have achieved impressive progress in natural image reasoning, yet their potential in medical imaging remains underexplored, especially in clinical anatomical surgical images. Anatomy understanding tasks demand precise understanding and clinically coherent answers, which are difficult to achieve due to the complexity of medical data and the scarcity of high-quality expert annotations. These challenges limit the effectiveness of conventional Supervised Fine-Tuning (SFT) strategies. While recent work has demonstrated that Group Relative Policy Optimization (GRPO) can enhance reasoning in MLLMs without relying on large amounts of data, we find two weaknesses that hinder GRPO's reasoning performance in anatomy recognition: 1) knowledge cannot be effectively shared between different anatomical structures, resulting in uneven information gain and preventing the model from converging, and 2) the model quickly converges to a single reasoning path, suppressing the exploration of diverse strategies. To overcome these challenges, we propose two novel methods. First, we implement a progressive learning strategy called Anatomical Similarity Curriculum Learning by controlling question difficulty via the similarity of answer choices, enabling the model to master complex problems incrementally. 
Second, we employ a question augmentation strategy, referred to as Group Diversity Question Augmentation, to expand the model's search space for difficult queries, mitigating the tendency to produce uniform responses. Comprehensive experiments on the SGG-VQA and OmniMedVQA benchmarks show our method achieves a significant improvement across the two benchmarks, demonstrating its effectiveness in enhancing the medical reasoning capabilities of MLLMs. The code can be found at https://github.com/tomato996/Anatomy-R1
 + oai:arXiv.org:2512.19512v1
 cs.CV
 cs.AI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Ziyang Song, Zelin Zang, Zuyao Chen, Xusheng Liang, Dong Yi, Jinlin Wu, Hongbin Liu, Jiebo Luo
 Negations are powerful even in small depth
 https://arxiv.org/abs/2512.19515
 arXiv:2512.19515v1 Announce Type: new
+Abstract: We study the power of negation in the Boolean and algebraic settings and show the following results.
 + * We construct a family of polynomials $P_n$ in $n$ variables, all of whose monomials have positive coefficients, such that $P_n$ can be computed by a depth three circuit of polynomial size but any monotone circuit computing it has size $2^{\Omega(n)}$. This is the strongest possible separation result between monotone and non-monotone arithmetic computations and improves upon all earlier results, including the seminal work of Valiant (1980) and more recently by Chattopadhyay, Datta, and Mukhopadhyay (2021). We then bootstrap this result to prove strong monotone separations for polynomials of constant degree, which solves an open problem from the survey of Shpilka and Yehudayoff (2010).
 + * By moving to the Boolean setting, we can prove superpolynomial monotone Boolean circuit lower bounds for specific Boolean functions, which imply that all the powers of certain monotone polynomials cannot be computed by polynomially sized monotone arithmetic circuits.
+ * We then define a collection of problems of a linear-algebraic nature, which are similar to span programs, and prove monotone Boolean circuit lower bounds for them. In particular, this gives the strongest known monotone lower bounds for functions in uniform (non-monotone) $\textbf{NC}^2$. Our construction also leads to an explicit matroid that defines a monotone function that is difficult to compute, which solves an open problem by Jukna and Seiwert (2020).
 + Our monotone arithmetic and Boolean circuit lower bounds are based on known techniques, such as reduction from monotone arithmetic complexity to multipartition communication complexity and the approximation method for proving lower bounds for monotone Boolean circuits, but we overcome several new challenges in order to obtain efficient upper bounds using low-depth circuits.
 + oai:arXiv.org:2512.19515v1
 cs.CC
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Bruno Cavalar, Théo Borém Fabris, Partha Mukhopadhyay, Srikanth Srinivasan, Amir Yehudayoff
 LacaDM: A Latent Causal Diffusion Model for Multiobjective Reinforcement Learning
 https://arxiv.org/abs/2512.19516
 arXiv:2512.19516v1 Announce Type: new
+Abstract: Multiobjective reinforcement learning (MORL) poses significant challenges due to the inherent conflicts between objectives and the difficulty of adapting to dynamic environments. Traditional methods often struggle to generalize effectively, particularly in large and complex state-action spaces. To address these limitations, we introduce the Latent Causal Diffusion Model (LacaDM), a novel approach designed to enhance the adaptability of MORL in discrete and continuous environments. Unlike existing methods that primarily address conflicts between objectives, LacaDM learns latent temporal causal relationships between environmental states and policies, enabling efficient knowledge transfer across diverse MORL scenarios.
By embedding these causal structures within a diffusion model-based framework, LacaDM achieves a balance between conflicting objectives while maintaining strong generalization capabilities in previously unseen environments. Empirical evaluations on various tasks from the MOGymnasium framework demonstrate that LacaDM consistently outperforms the state-of-the-art baselines in terms of hypervolume, sparsity, and expected utility maximization, showcasing its effectiveness in complex multiobjective tasks.
 + oai:arXiv.org:2512.19516v1
 cs.LG
 cs.AI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Xueming Yan, Bo Yin, Yaochu Jin
 Near-optimal streaming approximation for Max-DICUT in sublinear space using two passes
 https://arxiv.org/abs/2512.19521
 arXiv:2512.19521v1 Announce Type: new
+Abstract: The Max-DICUT problem has gained a lot of attention in the streaming setting in recent years, and has so far served as a canonical problem for designing algorithms for general constraint satisfaction problems (CSPs) in this setting. A seminal result of Kapralov and Krachun [STOC 2019] shows that it is impossible to beat $1/2$-approximation for Max-DICUT in sublinear space in the single-pass streaming setting, even on bounded-degree graphs. In a recent work, Saxena, Singer, Sudan, and Velusamy [SODA 2025] prove that the above lower bound is tight by giving a single-pass algorithm for bounded-degree graphs that achieves $(1/2-\epsilon)$-approximation in sublinear space, for every constant $\epsilon>0$. For arbitrary graphs of unbounded degree, they give an $O(1/\epsilon)$-pass $O(\log n)$ space algorithm. Their work left open the question of obtaining $1/2$-approximation for arbitrary graphs in the single-pass setting in sublinear space. We make progress towards this question and give a two-pass algorithm that achieves $(1/2-\epsilon)$-approximation in sublinear space, for every constant $\epsilon>0$.
+ oai:arXiv.org:2512.19521v1
 cs.DS
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Santhoshini Velusamy
 A Convolutional Neural Deferred Shader for Physics Based Rendering
 https://arxiv.org/abs/2512.19522
 arXiv:2512.19522v1 Announce Type: new
+Abstract: Recent advances in neural rendering have achieved impressive results on photorealistic shading and relighting, by using a multilayer perceptron (MLP) as a regression model to learn the rendering equation from a real-world dataset. Such methods show promise for photorealistically relighting real-world objects, which is difficult for classical rendering, as there is no easily obtained material ground truth. However, significant challenges still remain: the dense connections in MLPs result in a large number of parameters, which require high computational resources, complicate training, and reduce performance during rendering. Data-driven approaches require large amounts of training data for generalization; unbalanced data might bias the model to ignore unusual illumination conditions, e.g. dark scenes. This paper introduces pbnds+: a novel physics-based neural deferred shading pipeline utilizing convolutional neural networks to decrease the parameters and improve the performance in shading and relighting tasks; energy regularization is also proposed to restrict the model reflection during dark illumination. Extensive experiments demonstrate that our approach outperforms classical baselines, a state-of-the-art neural shading model, and a diffusion-based method.
+ oai:arXiv.org:2512.19522v1
 cs.CV
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Zhuo He, Yingdong Ru, Qianying Liu, Paul Henderson, Nicolas Pugeault
 Initialization of a Polyharmonic Cascade, Launch and Testing
 https://arxiv.org/abs/2512.19524
 arXiv:2512.19524v1 Announce Type: new
+Abstract: This paper concludes a series of studies on the polyharmonic cascade, a deep machine learning architecture theoretically derived from indifference principles and the theory of random functions. A universal initialization procedure is proposed, based on symmetric constellations in the form of hyperoctahedra with a central point. This initialization not only ensures stable training of cascades with tens and hundreds of layers (up to 500 layers without skip connections), but also radically simplifies the computations. Scalability and robustness are demonstrated on MNIST (98.3% without convolutions or augmentations), HIGGS (AUC approximately 0.885 on 11M examples), and Epsilon (AUC approximately 0.963 with 2000 features). All linear algebra is reduced to 2D operations and is efficiently executed on GPUs. A public repository and an archived snapshot are provided for full reproducibility.
 + oai:arXiv.org:2512.19524v1
 cs.LG
 cs.NA
 math.NA
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Yuriy N. Bakhvalov
 QuantiPhy: A Quantitative Benchmark Evaluating Physical Reasoning Abilities of Vision-Language Models
 https://arxiv.org/abs/2512.19526
 arXiv:2512.19526v1 Announce Type: new
+Abstract: Understanding the physical world is essential for generalist AI agents. However, it remains unclear whether state-of-the-art vision perception models (e.g., large VLMs) can reason about physical properties quantitatively.
Existing evaluations are predominantly VQA-based and qualitative, offering limited insight into whether these models can infer the kinematic quantities of moving objects from video observations. To address this, we present QuantiPhy, the first benchmark designed to quantitatively measure a VLM's physical reasoning ability. Comprising more than 3.3K video-text instances with numerical ground truth, QuantiPhy evaluates a VLM's performance on estimating an object's size, velocity, and acceleration at a given timestamp, using one of these properties as an input prior. The benchmark standardizes prompts and scoring to assess numerical accuracy, enabling fair comparisons across models. Our experiments on state-of-the-art VLMs reveal a consistent gap between their qualitative plausibility and actual numerical correctness. We further provide an in-depth analysis of key factors like background noise, counterfactual priors, and strategic prompting and find that state-of-the-art VLMs lean heavily on pre-trained world knowledge rather than faithfully using the provided visual and textual inputs as references when reasoning about kinematic properties quantitatively. QuantiPhy offers the first rigorous, scalable testbed to move VLMs beyond mere verbal plausibility toward a numerically grounded physical understanding.
 + oai:arXiv.org:2512.19526v1
 cs.AI
 Tue, 23 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 Li Puyin, Tiange Xiang, Ella Mao, Shirley Wei, Xinye Chen, Adnan Masood, Li Fei-fei, Ehsan Adeli
 Deep Learning for Unrelated-Machines Scheduling: Handling Variable Dimensions
 https://arxiv.org/abs/2512.19527
 arXiv:2512.19527v1 Announce Type: new
+Abstract: Deep learning has been effectively applied to many discrete optimization problems. However, learning-based schedulers for unrelated parallel machines remain particularly difficult to design.
Not only do the numbers of jobs and machines vary, but each job-machine pair has a unique processing time, dynamically altering feature dimensions. We propose a novel approach with a neural network tailored for offline deterministic scheduling of arbitrary sizes on unrelated machines. The goal is to minimize a complex objective function that includes the makespan and the weighted tardiness of jobs and machines. Unlike existing online approaches, which process jobs sequentially, our method generates a complete schedule considering the entire input at once. The key contribution of this work lies in the sophisticated architecture of our model. By leveraging various NLP-inspired architectures, it effectively processes any number of jobs and machines with varying feature dimensions imposed by unrelated processing times. Our approach enables supervised training on small problem instances while demonstrating strong generalization to much larger scheduling environments. Trained and tested on instances with 8 jobs and 4 machines, costs were only 2.51% above optimal. Across all tested configurations of up to 100 jobs and 10 machines, our network consistently outperformed an advanced dispatching rule, which incurred 22.22% higher costs on average. As our method allows fast retraining with simulated data and adaptation to various scheduling conditions, we believe it has the potential to become a standard approach for learning-based scheduling on unrelated machines and similar problem environments. 
+ oai:arXiv.org:2512.19527v1 + cs.LG + cs.DM + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Diego Hitzges, Guillaume Sagnol + + + Multi-Modal Soccer Scene Analysis with Masked Pre-Training + https://arxiv.org/abs/2512.19528 + arXiv:2512.19528v1 Announce Type: new +Abstract: In this work we propose a multi-modal architecture for analyzing soccer scenes from tactical camera footage, with a focus on three core tasks: ball trajectory inference, ball state classification, and ball possessor identification. To this end, our solution integrates three distinct input modalities (player trajectories, player types and image crops of individual players) into a unified framework that processes spatial and temporal dynamics using a cascade of sociotemporal transformer blocks. Unlike prior methods, which rely heavily on accurate ball tracking or handcrafted heuristics, our approach infers the ball trajectory without direct access to its past or future positions, and robustly identifies the ball state and ball possessor under noisy or occluded conditions from real top league matches. We also introduce CropDrop, a modality-specific masking pre-training strategy that prevents over-reliance on image features and encourages the model to rely on cross-modal patterns during pre-training. We show the effectiveness of our approach on a large-scale dataset providing substantial improvements over state-of-the-art baselines in all tasks. Our results highlight the benefits of combining structured and visual cues in a transformer-based architecture, and the importance of realistic masking strategies in multi-modal learning. 
+ oai:arXiv.org:2512.19528v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Marc Peral, Guillem Capellera, Luis Ferraz, Antonio Rubio, Antonio Agudo + + + Learning Continuous Solvent Effects from Transient Flow Data: A Graph Neural Network Benchmark on Catechol Rearrangement + https://arxiv.org/abs/2512.19530 + arXiv:2512.19530v1 Announce Type: new +Abstract: Predicting reaction outcomes across continuous solvent composition ranges remains a critical challenge in organic synthesis and process chemistry. Traditional machine learning approaches often treat solvent identity as a discrete categorical variable, which prevents systematic interpolation and extrapolation across the solvent space. This work introduces the \textbf{Catechol Benchmark}, a high-throughput transient flow chemistry dataset comprising 1,227 experimental yield measurements for the rearrangement of allyl-substituted catechol in 24 pure solvents and their binary mixtures, parameterized by continuous volume fractions ($\% B$). We evaluate various architectures under rigorous leave-one-solvent-out and leave-one-mixture-out protocols to test generalization to unseen chemical environments. + Our results demonstrate that classical tabular methods (e.g., Gradient-Boosted Decision Trees) and large language model embeddings (e.g., Qwen-7B) struggle with quantitative precision, yielding Mean Squared Errors (MSE) of 0.099 and 0.129, respectively. In contrast, we propose a hybrid GNN-based architecture that integrates Graph Attention Networks (GATs) with Differential Reaction Fingerprints (DRFP) and learned mixture-aware solvent encodings. This approach achieves an \textbf{MSE of 0.0039} ($\pm$ 0.0003), representing a 60\% error reduction over competitive baselines and a $>25\times$ improvement over tabular ensembles. 
Ablation studies confirm that explicit molecular graph message-passing and continuous mixture encoding are essential for robust generalization. The complete dataset, evaluation protocols, and reference implementations are released to facilitate data-efficient reaction prediction and continuous solvent representation learning. + oai:arXiv.org:2512.19530v1 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Hongsheng Xing, Qiuxin Si + + + SlicerOrbitSurgerySim: An Open-Source Platform for Virtual Registration and Quantitative Comparison of Preformed Orbital Plates + https://arxiv.org/abs/2512.19534 + arXiv:2512.19534v1 Announce Type: new +Abstract: Poor adaptation of orbital implants remains a major contributor to postoperative complications and revision surgery. Although preformed orbital plates are widely used to reduce cost and operative time compared with customized implants, surgeons currently lack publicly available tools and standardized metrics to quantitatively compare plate fit across vendors, sizes, and patient anatomy. We developed SlicerOrbitSurgerySim, an open-source extension for the 3D Slicer platform that enables interactive virtual registration, evaluation, and comparison of multiple preformed orbital plates in a patient-specific virtual planning environment. The software generates reproducible quantitative plate-to-orbit distance metrics and visualization tools that support both patient-specific planning and population-level statistical analysis of plate adaptability. By facilitating objective comparison of implant designs and placement strategies, this tool aims to improve preoperative decision-making, reduce intraoperative plate modification, and promote collaborative research and surgical education. Pilot studies, sample datasets, and detailed tutorials are provided to support testing, transparency, and reproducibility. 
+ oai:arXiv.org:2512.19534v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Chi Zhang, Braedon Gunn, Andrew M. Read-Fuller + + + CASA: Cross-Attention via Self-Attention for Efficient Vision-Language Fusion + https://arxiv.org/abs/2512.19535 + arXiv:2512.19535v1 Announce Type: new +Abstract: Vision-language models (VLMs) are commonly trained by inserting image tokens from a pretrained vision encoder into the textual stream of a language model. This allows text and image information to fully attend to one another within the model, but becomes extremely costly for high-resolution images, long conversations, or streaming videos, both in memory and compute. VLMs leveraging cross-attention are an efficient alternative to token insertion but exhibit a clear performance gap, in particular on tasks involving fine-grained visual details. We find that a key to improving such models is to also enable local text-to-text interaction in the dedicated cross-attention layers. Building on this, we propose CASA, Cross-Attention via Self-Attention, a simple and efficient paradigm which substantially reduces the gap with full token insertion on common image understanding benchmarks, while enjoying the same scalability as cross-attention models when applied to long-context multimodal tasks such as streaming video captioning. For samples and code, please see our project page at https://kyutai.org/casa . 
+ oai:arXiv.org:2512.19535v1 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Moritz B\"ohle, Am\'elie Royer, Juliette Marrie, Edouard Grave, Patrick P\'erez + + + A massively parallel non-overlapping Schwarz preconditioner for PolyDG methods in brain electrophysiology + https://arxiv.org/abs/2512.19536 + arXiv:2512.19536v1 Announce Type: new +Abstract: We investigate non-overlapping Schwarz preconditioners for the algebraic systems stemming from high-order discretizations of the coupled monodomain and Barreto-Cressman models, with applications to brain electrophysiology. The spatial discretization is based on a high-order Polytopal Discontinuous Galerkin (PolyDG) method, coupled with the Crank-Nicolson time discretization scheme with explicit extrapolation of the ion term. To improve solver efficiency, we consider additive Schwarz preconditioners within the PolyDG framework, which combines (massively parallel) local subdomain solvers with a coarse-grid correction. Numerical experiments demonstrate robustness with respect to the discretization parameters, as well as a significant reduction in iteration counts compared to the unpreconditioned solver. These features make the proposed approach well-suited for parallel large-scale simulations in brain electrophysiology. + oai:arXiv.org:2512.19536v1 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Caterina B. Leimer Saglio, Stefano Pagani, Paola F. Antonietti + + + Event Extraction in Large Language Model + https://arxiv.org/abs/2512.19537 + arXiv:2512.19537v1 Announce Type: new +Abstract: Large language models (LLMs) and multimodal LLMs are changing event extraction (EE): prompting and generation can often produce structured outputs in zero shot or few shot settings. 
Yet LLM based pipelines face deployment gaps, including hallucinations under weak constraints, fragile temporal and causal linking over long contexts and across documents, and limited long horizon knowledge management within a bounded context window. We argue that EE should be viewed as a system component that provides a cognitive scaffold for LLM centered solutions. Event schemas and slot constraints create interfaces for grounding and verification; event centric structures act as controlled intermediate representations for stepwise reasoning; event links support relation aware retrieval with graph based RAG; and event stores offer updatable episodic and agent memory beyond the context window. This survey covers EE in text and multimodal settings, organizing tasks and taxonomy, tracing method evolution from rule based and neural models to instruction driven and generative frameworks, and summarizing formulations, decoding strategies, architectures, representations, datasets, and evaluation. We also review cross lingual, low resource, and domain specific settings, and highlight open challenges and future directions for reliable event centric systems. Finally, we outline open challenges and future directions that are central to the LLM era, aiming to evolve EE from static extraction into a structurally reliable, agent ready perception and memory layer for open world systems. + oai:arXiv.org:2512.19537v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Bobo Li, Xudong Han, Jiang Liu, Yuzhe Ding, Liqiang Jing, Zhaoqi Zhang, Jinheng Li, Xinya Du, Fei Li, Meishan Zhang, Min Zhang, Aixin Sun, Philip S. Yu, Hao Fei + + + StoryMem: Multi-shot Long Video Storytelling with Memory + https://arxiv.org/abs/2512.19539 + arXiv:2512.19539v1 Announce Type: new +Abstract: Visual storytelling requires generating multi-shot videos with cinematic quality and long-range consistency. 
Inspired by human memory, we propose StoryMem, a paradigm that reformulates long-form video storytelling as iterative shot synthesis conditioned on explicit visual memory, transforming pre-trained single-shot video diffusion models into multi-shot storytellers. This is achieved by a novel Memory-to-Video (M2V) design, which maintains a compact and dynamically updated memory bank of keyframes from historical generated shots. The stored memory is then injected into single-shot video diffusion models via latent concatenation and negative RoPE shifts with only LoRA fine-tuning. A semantic keyframe selection strategy, together with aesthetic preference filtering, further ensures informative and stable memory throughout generation. Moreover, the proposed framework naturally accommodates smooth shot transitions and customized story generation applications. To facilitate evaluation, we introduce ST-Bench, a diverse benchmark for multi-shot video storytelling. Extensive experiments demonstrate that StoryMem achieves superior cross-shot consistency over previous methods while preserving high aesthetic quality and prompt adherence, marking a significant step toward coherent minute-long video storytelling. + oai:arXiv.org:2512.19539v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Kaiwen Zhang, Liming Jiang, Angtian Wang, Jacob Zhiyuan Fang, Tiancheng Zhi, Qing Yan, Hao Kang, Xin Lu, Xingang Pan + + + Algerian Dialect + https://arxiv.org/abs/2512.19543 + arXiv:2512.19543v1 Announce Type: new +Abstract: We present Algerian Dialect, a large-scale sentiment-annotated dataset consisting of 45,000 YouTube comments written in Algerian Arabic dialect. The comments were collected from more than 30 Algerian press and media channels using the YouTube Data API. Each comment is manually annotated into one of five sentiment categories: very negative, negative, neutral, positive, and very positive. 
In addition to sentiment labels, the dataset includes rich metadata such as collection timestamps, like counts, video URLs, and annotation dates. This dataset addresses the scarcity of publicly available resources for Algerian dialect and aims to support research in sentiment analysis, dialectal Arabic NLP, and social media analytics. The dataset is publicly available on Mendeley Data under a CC BY 4.0 license at https://doi.org/10.17632/zzwg3nnhsz.2. + oai:arXiv.org:2512.19543v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + 10.3390/app132011157 + Zakaria Benmounah, Abdennour Boulesnane + + + ActAvatar: Temporally-Aware Precise Action Control for Talking Avatars + https://arxiv.org/abs/2512.19546 + arXiv:2512.19546v1 Announce Type: new +Abstract: Despite significant advances in talking avatar generation, existing methods face critical challenges: insufficient text-following capability for diverse actions, lack of temporal alignment between actions and audio content, and dependency on additional control signals such as pose skeletons. We present ActAvatar, a framework that achieves phase-level precision in action control through textual guidance by capturing both action semantics and temporal context. 
Our approach introduces three core innovations: (1) Phase-Aware Cross-Attention (PACA), which decomposes prompts into a global base block and temporally-anchored phase blocks, enabling the model to concentrate on phase-relevant tokens for precise temporal-semantic alignment; (2) Progressive Audio-Visual Alignment, which aligns modality influence with the hierarchical feature learning process-early layers prioritize text for establishing action structure while deeper layers emphasize audio for refining lip movements, preventing modality interference; (3) A two-stage training strategy that first establishes robust audio-visual correspondence on diverse data, then injects action control through fine-tuning on structured annotations, maintaining both audio-visual alignment and the model's text-following capabilities. Extensive experiments demonstrate that ActAvatar significantly outperforms state-of-the-art methods in both action control and visual quality. + oai:arXiv.org:2512.19546v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ziqiao Peng, Yi Chen, Yifeng Ma, Guozhen Zhang, Zhiyao Sun, Zixiang Zhou, Youliang Zhang, Zhengguang Zhou, Zhaoxin Fan, Hongyan Liu, Yuan Zhou, Qinglin Lu, Jun He + + + DFORD: Directional Feedback based Online Ordinal Regression Learning + https://arxiv.org/abs/2512.19550 + arXiv:2512.19550v1 Announce Type: new +Abstract: In this paper, we introduce directional feedback in the ordinal regression setting, in which the learner receives feedback on whether the predicted label is on the left or the right side of the actual label. This is a weak supervision setting for ordinal regression compared to the full information setting, where the learner can access the labels. We propose an online algorithm for ordinal regression using directional feedback. The proposed algorithm uses an exploration-exploitation scheme to learn from directional feedback efficiently. 
Furthermore, we introduce its kernel-based variant to learn non-linear ordinal regression models in an online setting. We use a truncation trick to make the kernel implementation more memory efficient. The proposed algorithm maintains the ordering of the thresholds in the expected sense. Moreover, it achieves the expected regret of $\mathcal{O}(\log T)$. We compare our approach with a full information and a weakly supervised algorithm for ordinal regression on synthetic and real-world datasets. The proposed approach, which learns using directional feedback, performs comparably (sometimes better) to its full information counterpart. + oai:arXiv.org:2512.19550v1 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Naresh Manwani, M Elamparithy, Tanish Taneja + + + Towards Closed-Loop Embodied Empathy Evolution: Probing LLM-Centric Lifelong Empathic Motion Generation in Unseen Scenarios + https://arxiv.org/abs/2512.19551 + arXiv:2512.19551v1 Announce Type: new +Abstract: In the literature, existing human-centric emotional motion generation methods primarily focus on boosting performance within a single scale-fixed dataset, largely neglecting the flexible and scale-increasing motion scenarios (e.g., sports, dance), whereas effectively learning these newly emerging scenarios can significantly enhance the model's real-world generalization ability. Inspired by this, this paper proposes a new LLM-Centric Lifelong Empathic Motion Generation (L^2-EMG) task, which aims to equip LLMs with the capability to continually acquire emotional motion generation knowledge across different unseen scenarios, potentially contributing to building a closed-loop and self-evolving embodied agent equipped with both empathy and intelligence. Further, this paper poses two key challenges in the L^2-EMG task, i.e., the emotion decoupling challenge and the scenario adapting challenge. 
To this end, this paper proposes an Emotion-Transferable and Scenario-Adapted Mixture of Experts (ES-MoE) approach which designs a causal-guided emotion decoupling block and a scenario-adapted expert constructing block to address the two challenges, respectively. In particular, this paper constructs multiple L^2-EMG datasets to validate the effectiveness of the ES-MoE approach. Extensive evaluations show that ES-MoE outperforms advanced baselines. + oai:arXiv.org:2512.19551v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-sa/4.0/ + Jiawen Wang, Jingjing Wang, Tianyang Chen, Min Zhang, Guodong Zhou + + + CARE What Fails: Contrastive Anchored-REflection for Verifiable Multimodal + https://arxiv.org/abs/2512.19554 + arXiv:2512.19554v1 Announce Type: new +Abstract: Group-relative reinforcement learning with verifiable rewards (RLVR) often wastes the most informative data it already has: the failures. When all rollouts are wrong, gradients stall; when one happens to be correct, the update usually ignores why the others are close-but-wrong, and credit can be misassigned to spurious chains. We present CARE (Contrastive Anchored REflection), a failure-centric post-training framework for multimodal reasoning that turns errors into supervision. CARE combines: (i) an anchored-contrastive objective that forms a compact subgroup around the best rollout and a set of semantically proximate hard negatives, performs within-subgroup z-score normalization with negative-only scaling, and includes an all-negative rescue to prevent zero-signal batches; and (ii) Reflection-Guided Resampling (RGR), a one-shot structured self-repair that rewrites a representative failure and re-scores it with the same verifier, converting near-misses into usable positives without any test-time reflection. CARE improves accuracy and training smoothness while explicitly increasing the share of learning signal that comes from failures. 
On Qwen2.5-VL-7B, CARE lifts macro-averaged accuracy by 4.6 points over GRPO across six verifiable visual-reasoning benchmarks; with Qwen3-VL-8B it reaches competitive or state-of-the-art results on MathVista and MMMU-Pro under an identical evaluation protocol. + oai:arXiv.org:2512.19554v1 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Yongxin Wang, Zhicheng Yang, Meng Cao, Mingfei Han, Haokun Lin, Yingying Zhu, Xiaojun Chang, Xiaodan Liang + + + Augmenting Intelligence: A Hybrid Framework for Scalable and Stable Explanations + https://arxiv.org/abs/2512.19557 + arXiv:2512.19557v1 Announce Type: new +Abstract: Current approaches to Explainable AI (XAI) face a "Scalability-Stability Dilemma." Post-hoc methods (e.g., LIME, SHAP) may scale easily but suffer from instability, while supervised explanation frameworks (e.g., TED) offer stability but require prohibitive human effort to label every training instance. This paper proposes a Hybrid LRR-TED framework that addresses this dilemma through a novel "Asymmetry of Discovery." When applied to customer churn prediction, we demonstrate that automated rule learners (GLRM) excel at identifying broad "Safety Nets" (retention patterns) but struggle to capture specific "Risk Traps" (churn triggers)-a phenomenon we term the Anna Karenina Principle of Churn. By initialising the explanation matrix with automated safety rules and augmenting it with a Pareto-optimal set of just four human-defined risk rules, our approach achieves 94.00% predictive accuracy. This configuration outperforms the full 8-rule manual expert baseline while reducing human annotation effort by 50%, proposing a shift in the paradigm for Human-in-the-Loop AI: moving experts from the role of "Rule Writers" to "Exception Handlers." 
+ oai:arXiv.org:2512.19557v1 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Lawrence Krukrubo, Julius Odede, Olawande Olusegun + + + BabyFlow: 3D modeling of realistic and expressive infant faces + https://arxiv.org/abs/2512.19560 + arXiv:2512.19560v1 Announce Type: new +Abstract: Early detection of developmental disorders can be aided by analyzing infant craniofacial morphology, but modeling infant faces is challenging due to limited data and frequent spontaneous expressions. We introduce BabyFlow, a generative AI model that disentangles facial identity and expression, enabling independent control over both. Using normalizing flows, BabyFlow learns flexible, probabilistic representations that capture the complex, non-linear variability of expressive infant faces without restrictive linear assumptions. To address scarce and uncontrolled expressive data, we perform cross-age expression transfer, adapting expressions from adult 3D scans to enrich infant datasets with realistic and systematic expressive variants. As a result, BabyFlow improves 3D reconstruction accuracy, particularly in highly expressive regions such as the mouth, eyes, and nose, and supports synthesis and modification of infant expressions while preserving identity. Additionally, by integrating with diffusion models, BabyFlow generates high-fidelity 2D infant images with consistent 3D geometry, providing powerful tools for data augmentation and early facial analysis. + oai:arXiv.org:2512.19560v1 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Antonia Alomar, Mireia Masias, Marius George Linguraru, Federico M. 
Sukno, Gemma Piella + + + REALM: A Real-to-Sim Validated Benchmark for Generalization in Robotic Manipulation + https://arxiv.org/abs/2512.19562 + arXiv:2512.19562v1 Announce Type: new +Abstract: Vision-Language-Action (VLA) models empower robots to understand and execute tasks described by natural language instructions. However, a key challenge lies in their ability to generalize beyond the specific environments and conditions they were trained on, which is presently difficult and expensive to evaluate in the real-world. To address this gap, we present REALM, a new simulation environment and benchmark designed to evaluate the generalization capabilities of VLA models, with a specific emphasis on establishing a strong correlation between simulated and real-world performance through high-fidelity visuals and aligned robot control. Our environment offers a suite of 15 perturbation factors, 7 manipulation skills, and more than 3,500 objects. Finally, we establish two task sets that form our benchmark and evaluate the \pi_{0}, \pi_{0}-FAST, and GR00T N1.5 VLA models, showing that generalization and robustness remain an open challenge. More broadly, we also show that simulation gives us a valuable proxy for the real-world and allows us to systematically probe for and quantify the weaknesses and failure modes of VLAs. Project page: https://martin-sedlacek.com/realm + oai:arXiv.org:2512.19562v1 + cs.RO + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Martin Sedlacek, Pavlo Yefanov, Georgy Ponimatkin, Jai Bardhan, Simon Pilc, Mederic Fourmy, Evangelos Kazakos, Cees G. M. 
Snoek, Josef Sivic, Vladimir Petrik + + + On Network-Aware Semantic Communication and Edge-Cloud Collaborative Intelligence Systems + https://arxiv.org/abs/2512.19563 + arXiv:2512.19563v1 Announce Type: new +Abstract: Semantic communication and edge-cloud collaborative intelligence are increasingly recognized as foundational enablers for next-generation intelligent services operating under stringent bandwidth, latency, and resource constraints. By shifting the communication objective from bit-perfect delivery toward the transmission of task-relevant semantic representations, semantic communication enables adaptive tradeoffs among communication overhead, inference accuracy, computational load, and end-to-end latency. This survey provides a comprehensive and system-level synthesis of recent advances in semantic communication at the edge-cloud interface, encompassing architectural models for collaborative intelligence, representation learning and semantic abstraction techniques, network-aware and resource-adaptive semantic encoding strategies, and learning-driven optimization and orchestration mechanisms. Beyond efficiency considerations, the survey situates semantic communication within practical operational contexts, including security, trust, resilience, and scalability, drawing connections to zero-trust networking, physical-layer security, and emerging edge-cloud control paradigms. Finally, open challenges and research directions are identified, highlighting the role of semantic communication as a key building block for AI-native networking and 6G-ready intelligent systems. 
+ oai:arXiv.org:2512.19563v1 + cs.NI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Murdadha Nasif, Ahmed Refaey Hussein + + + Results of the 2024 CommonRoad Motion Planning Competition for Autonomous Vehicles + https://arxiv.org/abs/2512.19564 + arXiv:2512.19564v1 Announce Type: new +Abstract: Over the past decade, a wide range of motion planning approaches for autonomous vehicles has been developed to handle increasingly complex traffic scenarios. However, these approaches are rarely compared on standardized benchmarks, limiting the assessment of relative strengths and weaknesses. To address this gap, we present the setup and results of the 4th CommonRoad Motion Planning Competition held in 2024, conducted using the CommonRoad benchmark suite. This annual competition provides an open-source and reproducible framework for benchmarking motion planning algorithms. The benchmark scenarios span highway and urban environments with diverse traffic participants, including passenger cars, buses, and bicycles. Planner performance is evaluated along four dimensions: efficiency, safety, comfort, and compliance with selected traffic rules. This report introduces the competition format and provides a comparison of representative high-performing planners from the 2023 and 2024 editions. + oai:arXiv.org:2512.19564v1 + cs.RO + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Yanliang Huang, Xia Yan, Peiran Yin, Zhenduo Zhang, Zeyan Shao, Youran Wang, Haoliang Huang, Matthias Althoff + + + LIMOncello: Revisited IKFoM on the SGal(3) Manifold for Fast LiDAR-Inertial Odometry + https://arxiv.org/abs/2512.19567 + arXiv:2512.19567v1 Announce Type: new +Abstract: This work introduces LIMOncello, a tightly coupled LiDAR-Inertial Odometry system that models 6-DoF motion on the $\mathrm{SGal}(3)$ manifold within an iterated error-state Kalman filter backend. 
Compared to state representations defined on $\mathrm{SO}(3)\times\mathbb{R}^6$, the use of $\mathrm{SGal}(3)$ provides a coherent and numerically stable discrete-time propagation model that helps limit drift in low-observability conditions. + LIMOncello also includes a lightweight incremental i-Octree mapping backend that enables faster updates and substantially lower memory usage than incremental kd-tree style map structures, without relying on locality-restricted search heuristics. Experiments on multiple real-world datasets show that LIMOncello achieves competitive accuracy while improving robustness in geometrically sparse environments. The system maintains real-time performance with stable memory growth and is released as an extensible open-source implementation at https://github.com/CPerezRuiz335/LIMOncello. + oai:arXiv.org:2512.19567v1 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Carlos P\'erez-Ruiz, Joan Sol\`a + + + The Epistemological Consequences of Large Language Models: Rethinking collective intelligence and institutional knowledge + https://arxiv.org/abs/2512.19570 + arXiv:2512.19570v1 Announce Type: new +Abstract: We examine epistemological threats posed by human and LLM interaction. We develop collective epistemology as a theory of epistemic warrant distributed across human collectives, using bounded rationality and dual process theory as background. We distinguish internalist justification, defined as reflective understanding of why a proposition is true, from externalist justification, defined as reliable transmission of truths. Both are necessary for collective rationality, but only internalist justification produces reflective knowledge. 
We specify reflective knowledge as follows: agents understand the evaluative basis of a claim, when that basis is unavailable agents consistently assess the reliability of truth sources, and agents have a duty to apply these standards within their domains of competence. We argue that LLMs approximate externalist reliabilism because they can reliably transmit information whose justificatory basis is established elsewhere, but they do not themselves possess reflective justification. Widespread outsourcing of reflective work to reliable LLM outputs can weaken reflective standards of justification, disincentivize comprehension, and reduce agents' capacity to meet professional and civic epistemic duties. To mitigate these risks, we propose a three tier norm program that includes an epistemic interaction model for individual use, institutional and organizational frameworks that seed and enforce norms for epistemically optimal outcomes, and deontic constraints at organizational and or legislative levels that instantiate discursive norms and curb epistemic vices. + oai:arXiv.org:2512.19570v1 + cs.HC + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + 10.1007/s00146-025-02426-3 + Angjelin Hila + + + LeLaR: The First In-Orbit Demonstration of an AI-Based Satellite Attitude Controller + https://arxiv.org/abs/2512.19576 + arXiv:2512.19576v1 Announce Type: new +Abstract: Attitude control is essential for many satellite missions. Classical controllers, however, are time-consuming to design and sensitive to model uncertainties and variations in operational boundary conditions. Deep Reinforcement Learning (DRL) offers a promising alternative by learning adaptive control strategies through autonomous interaction with a simulation environment. Overcoming the Sim2Real gap, which involves deploying an agent trained in simulation onto the real physical satellite, remains a significant challenge. 
In this work, we present the first successful in-orbit demonstration of an AI-based attitude controller for inertial pointing maneuvers. The controller was trained entirely in simulation and deployed to the InnoCube 3U nanosatellite, which was developed by the Julius-Maximilians-Universit\"at W\"urzburg in cooperation with the Technische Universit\"at Berlin, and launched in January 2025. We present the AI agent design, the methodology of the training procedure, the discrepancies between the simulation and the observed behavior of the real satellite, and a comparison of the AI-based attitude controller with the classical PD controller of InnoCube. Steady-state metrics confirm the robust performance of the AI-based controller during repeated in-orbit maneuvers. + oai:arXiv.org:2512.19576v1 + cs.RO + cs.AI + cs.LG + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Kirill Djebko, Tom Baumann, Erik Dilger, Frank Puppe, Sergio Montenegro + + + Convergence analysis for non-iterative sequential schemes for Biot's model + https://arxiv.org/abs/2512.19579 + arXiv:2512.19579v1 Announce Type: new +Abstract: An alternative to the fully implicit or monolithic methods used for the solution of the coupling of fluid flow and deformation in porous media is a sequential approach in which the fully coupled system is broken into subproblems (flow and mechanics problems) that are solved one after the other. This fully explicit coupling approach is a very simple scheme which allows much flexibility in the implementation and has a lower computational cost, making it quite attractive in practice since smaller linear systems need to be solved in order to obtain the solution for the whole coupled poroelastic system. Due to the appealing advantages of these methods, intensive research is currently being carried out in this direction, as in the present work. 
Although the application of this type of method is very common in practice, there exist only a few works devoted to their theoretical analysis. In this work, we consider the so-called explicit fixed-stress split scheme, which consists of solving the flow problem first with time-lagging the displacement term, followed by the solution of the mechanics problem. To the best of our knowledge, we provide the first convergence analysis of the explicit fixed-stress split scheme for Biot's equations. In particular, we prove that this algorithm is optimally convergent if the considered finite element discretization satisfies an inf-sup condition. In addition, with the aim of designing the simplest scheme for solving Biot's model, we also propose a similar decoupled algorithm for piecewise linear finite elements for both variables which arises from the novel stabilization recently proposed in \cite{Pe2025}, and is demonstrated to be optimally convergent. + oai:arXiv.org:2512.19579v1 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xiaozhe Hu, Francisco J. Gaspar, Carmen Rodrigo + + + A modified Brinkman penalization fictitious domain method for the unsteady Navier-Stokes equations + https://arxiv.org/abs/2512.19580 + arXiv:2512.19580v1 Announce Type: new +Abstract: This paper investigates a modification of the fictitious domain method with continuation in the lower-order coefficients for the unsteady Navier-Stokes equations governing the motion of an incompressible homogeneous fluid in a bounded 2D or 3D domain. The modification enables a solution-dependent choice of the critical parameter. Global-in-time existence and convergence of a weak solution to the auxiliary problem are proved, and local-in-time existence and convergence of a unique strong solution are established. For the strong solution, a new higher-order convergence rate estimate in the penalization parameter is obtained. 
The introduced framework allows us to apply a pointwise divergence free finite element method as a discretization technique, leading to a strongly mass conservative discrete fictitious domain method. A numerical example illustrates the performance of the method. + oai:arXiv.org:2512.19580v1 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Zhanybek Baitulenov, Maxim Olshanskii, Almas Temirbekov, Nurlan Temirbekov, Syrym Kasenov + + + Learning Generalizable Hand-Object Tracking from Synthetic Demonstrations + https://arxiv.org/abs/2512.19583 + arXiv:2512.19583v1 Announce Type: new +Abstract: We present a system for learning generalizable hand-object tracking controllers purely from synthetic data, without requiring any human demonstrations. Our approach makes two key contributions: (1) HOP, a Hand-Object Planner, which can synthesize diverse hand-object trajectories; and (2) HOT, a Hand-Object Tracker that bridges synthetic-to-physical transfer through reinforcement learning and interaction imitation learning, delivering a generalizable controller conditioned on target hand-object states. Our method extends to diverse object shapes and hand morphologies. Through extensive evaluations, we show that our approach enables dexterous hands to track challenging, long-horizon sequences including object re-arrangement and agile in-hand reorientation. These results represent a significant step toward scalable foundation controllers for manipulation that can learn entirely from synthetic data, breaking the data bottleneck that has long constrained progress in dexterous manipulation. 
+ oai:arXiv.org:2512.19583v1 + cs.RO + cs.GR + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Yinhuai Wang, Runyi Yu, Hok Wai Tsui, Xiaoyi Lin, Hui Zhang, Qihan Zhao, Ke Fan, Miao Li, Jie Song, Jingbo Wang, Qifeng Chen, Ping Tan + + + Increasing the Thinking Budget is Not All You Need + https://arxiv.org/abs/2512.19585 + arXiv:2512.19585v1 Announce Type: new +Abstract: Recently, a new wave of thinking-capable Large Language Models has emerged, demonstrating exceptional capabilities across a wide range of reasoning benchmarks. Early studies have begun to explore how the amount of compute in terms of the length of the reasoning process, the so-called thinking budget, impacts model performance. In this work, we propose a systematic investigation of the thinking budget as a key parameter, examining its interaction with various configurations such as self-consistency, reflection, and others. Our goal is to provide an informative, balanced comparison framework that considers both performance outcomes and computational cost. Among our findings, we discovered that simply increasing the thinking budget is not the most effective use of compute. More accurate responses can instead be achieved through alternative configurations, such as self-consistency and self-reflection. + oai:arXiv.org:2512.19585v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Ignacio Iacobacci, Zhaozhi Qian, Faroq AL-Tam, Muhammad AL-Qurishi, Riad Souissi + + + No Data? No Problem: Robust Vision-Tabular Learning with Missing Values + https://arxiv.org/abs/2512.19602 + arXiv:2512.19602v1 Announce Type: new +Abstract: Large-scale medical biobanks provide imaging data complemented by extensive tabular information, such as demographics or clinical measurements. However, this abundance of tabular attributes does not reflect real-world datasets, where only a subset of attributes may be available. 
This discrepancy calls for methods that can leverage all the tabular data during training while remaining robust to missing values at inference. To address this challenge, we propose RoVTL (Robust Vision-Tabular Learning), a framework designed to handle any level of tabular data availability, from 0% to 100%. RoVTL comprises two key stages: contrastive pretraining, where we introduce tabular attribute missingness as data augmentation to promote robustness, and downstream task tuning using a gated cross-attention module for multimodal fusion. During fine-tuning, we employ a novel Tabular More vs. Fewer loss that ranks performance based on the amount of available tabular data. Combined with disentangled gradient learning, this enables consistent performance across all tabular data completeness scenarios. We evaluate RoVTL on cardiac MRI scans from the UK Biobank, demonstrating superior robustness to missing tabular data compared to prior methods. Furthermore, RoVTL successfully generalizes to an external cardiac MRI dataset for multimodal disease classification, and extends to the natural images domain, achieving robust performance on a car advertisements dataset. The code is available at https://github.com/marteczkah/RoVTL. + oai:arXiv.org:2512.19602v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Marta Hasny, Laura Daza, Keno Bressem, Maxime Di Folco, Julia Schnabel + + + Static size-effects meet the dynamic scattering properties of finite-sized mechanical metamaterials: a relaxed micromorphic study with parameter identification via two-stage static-dynamic optimization + https://arxiv.org/abs/2512.19604 + arXiv:2512.19604v1 Announce Type: new +Abstract: Mechanical metamaterials exhibit size-effects when a few unit-cells are subjected to static loading because no clear micro-macro scale separation holds and the characteristic length of the deformation becomes comparable to the unit-cell size. 
These size-effects typically manifest themselves as a strengthening of the response in a form summarized as "smaller is stiffer". Moreover, the dynamical behavior of mechanical metamaterials is very remarkable, featuring unique phenomena such as dispersive behavior and band-gaps where elastic waves cannot propagate over specific frequency ranges. In these frequency ranges, the wavelength becomes gradually comparable to the unit-cell size, giving rise to microstructure related phenomena which become particularly visible in the reflection/transmission patterns where an incident wave hits the metamaterial's interfaces. This raises the question of whether the static size-effects and dynamic reflection/transmission patterns are correlated. + In this work, we investigate the interaction of the static size-effects and the dynamic scattering response of mechanical metamaterials by employing the relaxed micromorphic model. We introduce a two-stage optimization procedure to identify the material parameters. In the first stage, the static material parameters are identified by exploiting the static size-effects through a least squares fitting procedure based on the total energy. The dynamic parameters are determined in the second stage by fitting the dispersion curves of the relaxed micromorphic model to those of the fully discretized microstructure. At this second stage, we assess the results obtained by fitting the dispersion curves in one and in two propagation directions, for both the relaxed micromorphic model (RMM) with curvature and its reduced counterpart (RRMM) without curvature. + The full abstract is presented in the paper. 
+ oai:arXiv.org:2512.19604v1 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Mohammad Sarhil, Leonardo Andres Perez Ramirez, Max Jendrik Voss, Angela Madeo + + + KerJEPA: Kernel Discrepancies for Euclidean Self-Supervised Learning + https://arxiv.org/abs/2512.19605 + arXiv:2512.19605v1 Announce Type: new +Abstract: Recent breakthroughs in self-supervised Joint-Embedding Predictive Architectures (JEPAs) have established that regularizing Euclidean representations toward isotropic Gaussian priors yields provable gains in training stability and downstream generalization. We introduce a new, flexible family of KerJEPAs, self-supervised learning algorithms with kernel-based regularizers. One instance of this family corresponds to the recently-introduced LeJEPA Epps-Pulley regularizer which approximates a sliced maximum mean discrepancy (MMD) with a Gaussian prior and Gaussian kernel. By expanding the class of viable kernels and priors and computing the closed-form high-dimensional limit of sliced MMDs, we develop alternative KerJEPAs with a number of favorable properties including improved training stability and design flexibility. + oai:arXiv.org:2512.19605v1 + cs.LG + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Eric Zimmermann, Harley Wiltzer, Justin Szeto, David Alvarez-Melis, Lester Mackey + + + RAPID-LLM: Resilience-Aware Performance analysis of Infrastructure for Distributed LLM Training and Inference + https://arxiv.org/abs/2512.19606 + arXiv:2512.19606v1 Announce Type: new +Abstract: RAPID-LLM is a unified performance modeling framework for large language model (LLM) training and inference on GPU clusters. It couples a DeepFlow-based frontend that generates hardware-aware, operator-level Chakra execution traces from an abstract LLM specification (model shape, batch/sequence settings, training vs. 
inference, and hybrid parallelism choices) with an extended Astra-Sim backend that executes those traces on explicit multi-dimensional network topologies with congestion-aware routing and support for degraded and faulty links. The frontend assigns per-operator latency using a tile-based model that accounts for SM under-utilization and multi-level memory traffic (SRAM/L2/HBM), and prunes memory-infeasible configurations using an activation-liveness traversal under recomputation, parallelism and ZeRO/FSDP sharding policies. + Across A100-based validation cases, RAPID-LLM predicts Llama inference step latency and GPT-scale training time per batch within 10.4\% relative to published measurements, and matches ns-3 packet-level results within 8\% on representative communication workloads. Case studies demonstrate how RAPID-LLM enables fast, exhaustive sweeps over hybrid-parallel configurations, quantifies sensitivity to soft link faults under realistic routing and congestion, and evaluates hypothetical GPU design variants including HBM bandwidth throttling effects. + oai:arXiv.org:2512.19606v1 + cs.PF + cs.DC + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + George Karfakis, Faraz Tahmasebi, Binglu Chen, Lime Yao, Saptarshi Mitra, Tianyue Pan, Hyoukjun Kwon, Puneet Gupta + + + MapTrace: Scalable Data Generation for Route Tracing on Maps + https://arxiv.org/abs/2512.19609 + arXiv:2512.19609v1 Announce Type: new +Abstract: While Multimodal Large Language Models have achieved human-like performance on many visual and textual reasoning tasks, their proficiency in fine-grained spatial understanding, such as route tracing on maps, remains limited. Unlike humans, who can quickly learn to parse and navigate maps, current models often fail to respect fundamental path constraints, in part due to the prohibitive cost and difficulty of collecting large-scale, pixel-accurate path annotations. 
To address this, we introduce a scalable synthetic data generation pipeline that leverages synthetic map images and pixel-level parsing to automatically produce precise annotations for this challenging task. Using this pipeline, we construct a fine-tuning dataset of 23k path samples across 4k maps, enabling models to acquire more human-like spatial capabilities. Using this dataset, we fine-tune both open-source and proprietary MLLMs. Results on MapBench show that finetuning substantially improves robustness, raising success rates by up to 6.4 points, while also reducing path-tracing error (NDTW). These gains highlight that fine-grained spatial reasoning, absent in pretrained models, can be explicitly taught with synthetic supervision. + oai:arXiv.org:2512.19609v1 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-sa/4.0/ + Artemis Panagopoulou, Aveek Purohit, Achin Kulshrestha, Soroosh Yazdani, Mohit Goyal + + + MauBERT: Universal Phonetic Inductive Biases for Few-Shot Acoustic Units Discovery + https://arxiv.org/abs/2512.19612 + arXiv:2512.19612v1 Announce Type: new +Abstract: This paper introduces MauBERT, a multilingual extension of HuBERT that leverages articulatory features for robust cross-lingual phonetic representation learning. We continue HuBERT pre-training with supervision based on a phonetic-to-articulatory feature mapping in 55 languages. Our models learn from multilingual data to predict articulatory features or phones, resulting in language-independent representations that capture multilingual phonetic properties. Through comprehensive ABX discriminability testing, we show MauBERT models produce more context-invariant representations than state-of-the-art multilingual self-supervised learning models. Additionally, the models effectively adapt to unseen languages and casual speech with minimal self-supervised fine-tuning (10 hours of speech). 
This establishes an effective approach for instilling linguistic inductive biases in self-supervised speech models. + oai:arXiv.org:2512.19612v1 + cs.CL + eess.AS + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Angelo Ortiz Tandazo, Manel Khentout, Youssef Benchekroun, Thomas Hueber, Emmanuel Dupoux + + + Exploring the features used for summary evaluation by Human and GPT + https://arxiv.org/abs/2512.19620 + arXiv:2512.19620v1 Announce Type: new +Abstract: Summary assessment involves evaluating how well a generated summary reflects the key ideas and meaning of the source text, requiring a deep understanding of the content. Large Language Models (LLMs) have been used to automate this process, acting as judges to evaluate summaries with respect to the original text. While previous research investigated the alignment between LLMs and Human responses, it is not yet well understood what properties or features are exploited by them when asked to evaluate based on a particular quality dimension, and there has not been much attention towards mapping between evaluation scores and metrics. In this paper, we address this issue and discover features aligned with Human and Generative Pre-trained Transformers (GPTs) responses by studying statistical and machine learning metrics. Furthermore, we show that instructing GPTs to employ metrics used by Human can improve their judgment and conforming them better with human responses. + oai:arXiv.org:2512.19620v1 + cs.CL + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zahra Sadeghi, Evangelos Milios, Frank Rudzicz + + + LoGoPlanner: Localization Grounded Navigation Policy with Metric-aware Visual Geometry + https://arxiv.org/abs/2512.19629 + arXiv:2512.19629v1 Announce Type: new +Abstract: Trajectory planning in unstructured environments is a fundamental and challenging capability for mobile robots. 
Traditional modular pipelines suffer from latency and cascading errors across perception, localization, mapping, and planning modules. Recent end-to-end learning methods map raw visual observations directly to control signals or trajectories, promising greater performance and efficiency in open-world settings. However, most prior end-to-end approaches still rely on separate localization modules that depend on accurate sensor extrinsic calibration for self-state estimation, thereby limiting generalization across embodiments and environments. We introduce LoGoPlanner, a localization-grounded, end-to-end navigation framework that addresses these limitations by: (1) finetuning a long-horizon visual-geometry backbone to ground predictions with absolute metric scale, thereby providing implicit state estimation for accurate localization; (2) reconstructing surrounding scene geometry from historical observations to supply dense, fine-grained environmental awareness for reliable obstacle avoidance; and (3) conditioning the policy on implicit geometry bootstrapped by the aforementioned auxiliary tasks, thereby reducing error propagation. We evaluate LoGoPlanner in both simulation and real-world settings, where its fully end-to-end design reduces cumulative error while metric-aware geometry memory enhances planning consistency and obstacle avoidance, leading to more than a 27.3\% improvement over oracle-localization baselines and strong generalization across embodiments and environments. The code and models have been made publicly available on the \href{https://steinate.github.io/logoplanner.github.io/}{project page}. 
+ oai:arXiv.org:2512.19629v1 + cs.RO + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jiaqi Peng, Wenzhe Cai, Yuqiang Yang, Tai Wang, Yuan Shen, Jiangmiao Pang + + + Diacritic Restoration for Low-Resource Indigenous Languages: Case Study with Bribri and Cook Islands M\=aori + https://arxiv.org/abs/2512.19630 + arXiv:2512.19630v1 Announce Type: new +Abstract: We present experiments on diacritic restoration, a form of text normalization essential for natural language processing (NLP) tasks. Our study focuses on two extremely under-resourced languages: Bribri, a Chibchan language spoken in Costa Rica, and Cook Islands M\=aori, a Polynesian language spoken in the Cook Islands. Specifically, this paper: (i) compares algorithms for diacritics restoration in under-resourced languages, including tonal diacritics, (ii) examines the amount of data required to achieve target performance levels, (iii) contrasts results across varying resource conditions, and (iv) explores the related task of diacritic correction. We find that fine-tuned, character-level LLMs perform best, likely due to their ability to decompose complex characters into their UTF-8 byte representations. In contrast, massively multilingual models perform less effectively given our data constraints. Across all models, reliable performance begins to emerge with data budgets of around 10,000 words. Zero-shot approaches perform poorly in all cases. This study responds both to requests from the language communities and to broader NLP research questions concerning model performance and generalization in under-resourced contexts. 
+ oai:arXiv.org:2512.19630v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Rolando Coto-Solano, Daisy Li, Manoela Teleginski Ferraz, Olivia Sasse, Cha Krupka, Sharid Lo\'aiciga, Sally Akevai Tenamu Nicholas + + + Generative diffusion models for agricultural AI: plant image generation, indoor-to-outdoor translation, and expert preference alignment + https://arxiv.org/abs/2512.19632 + arXiv:2512.19632v1 Announce Type: new +Abstract: The success of agricultural artificial intelligence depends heavily on large, diverse, and high-quality plant image datasets, yet collecting such data in real field conditions is costly, labor intensive, and seasonally constrained. This paper investigates diffusion-based generative modeling to address these challenges through plant image synthesis, indoor-to-outdoor translation, and expert preference aligned fine tuning. First, a Stable Diffusion model is fine tuned on captioned indoor and outdoor plant imagery to generate realistic, text conditioned images of canola and soybean. Evaluation using Inception Score, Frechet Inception Distance, and downstream phenotype classification shows that synthetic images effectively augment training data and improve accuracy. Second, we bridge the gap between high resolution indoor datasets and limited outdoor imagery using DreamBooth-based text inversion and image guided diffusion, generating translated images that enhance weed detection and classification with YOLOv8. Finally, a preference guided fine tuning framework trains a reward model on expert scores and applies reward weighted updates to produce more stable and expert aligned outputs. Together, these components demonstrate a practical pathway toward data efficient generative pipelines for agricultural AI. + oai:arXiv.org:2512.19632v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Da Tan, Michael Beck, Christopher P. Bidinosti, Robert H. 
Gulden, Christopher J. Henry + + + The Best of Both Worlds: Hybridizing Neural Operators and Solvers for Stable Long-Horizon Inference + https://arxiv.org/abs/2512.19643 + arXiv:2512.19643v1 Announce Type: new +Abstract: Numerical simulation of time-dependent partial differential equations (PDEs) is central to scientific and engineering applications, but high-fidelity solvers are often prohibitively expensive for long-horizon or time-critical settings. Neural operator (NO) surrogates offer fast inference across parametric and functional inputs; however, most autoregressive NO frameworks remain vulnerable to compounding errors, and ensemble-averaged metrics provide limited guarantees for individual inference trajectories. In practice, error accumulation can become unacceptable beyond the training horizon, and existing methods lack mechanisms for online monitoring or correction. To address this gap, we propose ANCHOR (Adaptive Numerical Correction for High-fidelity Operator Rollouts), an online, instance-aware hybrid inference framework for stable long-horizon prediction of nonlinear, time-dependent PDEs. ANCHOR treats a pretrained NO as the primary inference engine and adaptively couples it with a classical numerical solver using a physics-informed, residual-based error estimator. Inspired by adaptive time-stepping in numerical analysis, ANCHOR monitors an exponential moving average (EMA) of the normalized PDE residual to detect accumulating error and trigger corrective solver interventions without requiring access to ground-truth solutions. We show that the EMA-based estimator correlates strongly with the true relative L2 error, enabling data-free, instance-aware error control during inference. 
Evaluations on four canonical PDEs: 1D and 2D Burgers', 2D Allen-Cahn, and 3D heat conduction, demonstrate that ANCHOR reliably bounds long-horizon error growth, stabilizes extrapolative rollouts, and significantly improves robustness over standalone neural operators, while remaining substantially more efficient than high-fidelity numerical solvers. + oai:arXiv.org:2512.19643v1 + cs.LG + cs.CE + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Rajyasri Roy, Dibyajyoti Nayak, Somdatta Goswami + + + More code, less validation: Risk factors for over-reliance on AI coding tools among scientists + https://arxiv.org/abs/2512.19644 + arXiv:2512.19644v1 Announce Type: new +Abstract: Programming is essential to modern scientific research, yet most scientists report inadequate training for the software development their work demands. Generative AI tools capable of code generation may support scientific programmers, but user studies indicate risks of over-reliance, particularly among inexperienced users. We surveyed 868 scientists who program, examining adoption patterns, tool preferences, and factors associated with perceived productivity. Adoption is highest among students and less experienced programmers, with variation across fields. Scientific programmers overwhelmingly prefer general-purpose conversational interfaces like ChatGPT over developer-specific tools. Both inexperience and limited use of development practices (like testing, code review, and version control) are associated with greater perceived productivity-but these factors interact, suggesting formal practices may partially compensate for inexperience. The strongest predictor of perceived productivity is the number of lines of generated code typically accepted at once. These findings suggest scientific programmers using generative AI may gauge productivity by code generation rather than validation, raising concerns about research code integrity. 
+ oai:arXiv.org:2512.19644v1 + cs.SE + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Gabrielle O'Brien, Alexis Parker, Nasir Eisty, Jeffrey Carver + + + Milstein-type Schemes for Hyperbolic SPDEs + https://arxiv.org/abs/2512.19647 + arXiv:2512.19647v1 Announce Type: new +Abstract: This article studies the temporal approximation of hyperbolic semilinear stochastic evolution equations with multiplicative Gaussian noise by Milstein-type schemes. We take the term hyperbolic to mean that the leading operator generates a contractive, not necessarily analytic $C_0$-semigroup. Optimal convergence rates are derived for the pathwise uniform strong error \[ E_h^\infty := \Big(\mathbb{E}\max_{1\le j \le M}\|U_{t_j}-u_j\|_X^p\Big)^{1/p} \] on a Hilbert space $X$ for $p\in [2,\infty)$. Here, $U$ is the mild solution and $u_j$ its Milstein approximation at time $t_j=jh$ with step size $h>0$ and final time $T=Mh>0$. For sufficiently regular nonlinearity and noise, we establish strong convergence of order one, with the error satisfying $E_h^\infty\lesssim h\sqrt{\log(T/h)}$ for rational Milstein schemes and $E_h^\infty \lesssim h$ for exponential Milstein schemes. This extends previous results from parabolic to hyperbolic SPDEs and from exponential to rational Milstein schemes. Root-mean-square error estimates are strengthened to pathwise uniform estimates. Numerical experiments validate the convergence rates for the stochastic Schr\"odinger equation. Further applications to Maxwell's and transport equations are included. 
+ oai:arXiv.org:2512.19647v1 + math.NA + cs.NA + math.AP + math.FA + math.PR + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Felix Kastner, Katharina Klioba + + + 4D Gaussian Splatting as a Learned Dynamical System + https://arxiv.org/abs/2512.19648 + arXiv:2512.19648v1 Announce Type: new +Abstract: We reinterpret 4D Gaussian Splatting as a continuous-time dynamical system, where scene motion arises from integrating a learned neural dynamical field rather than applying per-frame deformations. This formulation, which we call EvoGS, treats the Gaussian representation as an evolving physical system whose state evolves continuously under a learned motion law. This unlocks capabilities absent in deformation-based approaches: (1) sample-efficient learning from sparse temporal supervision by modeling the underlying motion law; (2) temporal extrapolation enabling forward and backward prediction beyond observed time ranges; and (3) compositional dynamics that allow localized dynamics injection for controllable scene synthesis. Experiments on dynamic scene benchmarks show that EvoGS achieves better motion coherence and temporal consistency compared to deformation-field baselines while maintaining real-time rendering. + oai:arXiv.org:2512.19648v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-sa/4.0/ + Arnold Caleb Asiimwe, Carl Vondrick + + + Deep Legendre Transform + https://arxiv.org/abs/2512.19649 + arXiv:2512.19649v1 Announce Type: new +Abstract: We introduce a novel deep learning algorithm for computing convex conjugates of differentiable convex functions, a fundamental operation in convex analysis with various applications in different fields such as optimization, control theory, physics and economics. 
While traditional numerical methods suffer from the curse of dimensionality and become computationally intractable in high dimensions, more recent neural network-based approaches scale better, but have mostly been studied with the aim of solving optimal transport problems and require the solution of complicated optimization or max-min problems. Using an implicit Fenchel formulation of convex conjugation, our approach facilitates an efficient gradient-based framework for the minimization of approximation errors and, as a byproduct, also provides a posteriori error estimates for the approximation quality. Numerical experiments demonstrate our method's ability to deliver accurate results across different high-dimensional examples. Moreover, by employing symbolic regression with Kolmogorov--Arnold networks, it is able to obtain the exact convex conjugates of specific convex functions. + oai:arXiv.org:2512.19649v1 + cs.LG + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Aleksey Minabutdinov, Patrick Cheridito + + + Exploring Zero-Shot ACSA with Unified Meaning Representation in Chain-of-Thought Prompting + https://arxiv.org/abs/2512.19651 + arXiv:2512.19651v1 Announce Type: new +Abstract: Aspect-Category Sentiment Analysis (ACSA) provides granular insights by identifying specific themes within reviews and their associated sentiment. While supervised learning approaches dominate this field, the scarcity and high cost of annotated data for new domains present significant barriers. We argue that leveraging large language models (LLMs) in a zero-shot setting is a practical alternative where resources for data annotation are limited. In this work, we propose a novel Chain-of-Thought (CoT) prompting technique that utilises an intermediate Unified Meaning Representation (UMR) to structure the reasoning process for the ACSA task. 
We evaluate this UMR-based approach against a standard CoT baseline across three models (Qwen3-4B, Qwen3-8B, and Gemini-2.5-Pro) and four diverse datasets. Our findings suggest that UMR effectiveness may be model-dependent. Whilst preliminary results indicate comparable performance for mid-sized models such as Qwen3-8B, these observations warrant further investigation, particularly regarding the potential applicability to smaller model architectures. Further research is required to establish the generalisability of these findings across different model scales. + oai:arXiv.org:2512.19651v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Filippos Ventirozos, Peter Appleby, Matthew Shardlow + + + Clustering with Label Consistency + https://arxiv.org/abs/2512.19654 + arXiv:2512.19654v1 Announce Type: new +Abstract: Designing efficient, effective, and consistent metric clustering algorithms is a significant challenge attracting growing attention. Traditional approaches focus on the stability of cluster centers; unfortunately, this neglects the real-world need for stable point labels, i.e., stable assignments of points to named sets (clusters). In this paper, we address this gap by initiating the study of label-consistent metric clustering. We first introduce a new notion of consistency, measuring the label distance between two consecutive solutions. Then, armed with this new definition, we design new consistent approximation algorithms for the classical $k$-center and $k$-median problems. 
+ oai:arXiv.org:2512.19654v1 + cs.DS + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Diptarka Chakraborty, Hendrik Fichtenberger, Bernhard Haeupler, Silvio Lattanzi, Ashkan Norouzi-Fard, Ola Svensson + + + Over++: Generative Video Compositing for Layer Interaction Effects + https://arxiv.org/abs/2512.19661 + arXiv:2512.19661v1 Announce Type: new +Abstract: In professional video compositing workflows, artists must manually create environmental interactions-such as shadows, reflections, dust, and splashes-between foreground subjects and background layers. Existing video generative models struggle to preserve the input video while adding such effects, and current video inpainting methods either require costly per-frame masks or yield implausible results. We introduce augmented compositing, a new task that synthesizes realistic, semi-transparent environmental effects conditioned on text prompts and input video layers, while preserving the original scene. To address this task, we present Over++, a video effect generation framework that makes no assumptions about camera pose, scene stationarity, or depth supervision. We construct a paired effect dataset tailored for this task and introduce an unpaired augmentation strategy that preserves text-driven editability. Our method also supports optional mask control and keyframe guidance without requiring dense annotations. Despite training on limited data, Over++ produces diverse and realistic environmental effects and outperforms existing baselines in both effect generation and scene preservation. 
+ oai:arXiv.org:2512.19661v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Luchao Qi, Jiaye Wu, Jun Myeong Choi, Cary Phillips, Roni Sengupta, Dan B Goldman + + + Beyond CLIP: Knowledge-Enhanced Multimodal Transformers for Cross-Modal Alignment in Diabetic Retinopathy Diagnosis + https://arxiv.org/abs/2512.19663 + arXiv:2512.19663v1 Announce Type: new +Abstract: Diabetic retinopathy (DR) is a leading cause of preventable blindness worldwide, demanding accurate automated diagnostic systems. While general-domain vision-language models like Contrastive Language-Image Pre-Training (CLIP) perform well on natural image tasks, they struggle in medical domain applications, particularly in cross-modal retrieval for ophthalmological images. We propose a novel knowledge-enhanced joint embedding framework that integrates retinal fundus images, clinical text, and structured patient data through a multimodal transformer architecture to address the critical gap in medical image-text alignment. Our approach employs separate encoders for each modality: a Vision Transformer (ViT-B/16) for retinal images, Bio-ClinicalBERT for clinical narratives, and a multilayer perceptron for structured demographic and clinical features. These modalities are fused through a joint transformer with modality-specific embeddings, trained using multiple objectives including contrastive losses between modality pairs, reconstruction losses for images and text, and classification losses for DR severity grading according to ICDR and SDRG schemes. Experimental results on the Brazilian Multilabel Ophthalmological Dataset (BRSET) demonstrate significant improvements over baseline models. Our framework achieves near-perfect text-to-image retrieval performance with Recall@1 of 99.94% compared to fine-tuned CLIP's 1.29%, while maintaining state-of-the-art classification accuracy of 97.05% for SDRG and 97.97% for ICDR. 
Furthermore, zero-shot evaluation on the unseen DeepEyeNet dataset validates strong generalizability with 93.95% Recall@1 versus 0.22% for fine-tuned CLIP. These results demonstrate that our multimodal training approach effectively captures cross-modal relationships in the medical domain, establishing both superior retrieval capabilities and robust diagnostic performance. + oai:arXiv.org:2512.19663v1 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Argha Kamal Samanta, Harshika Goyal, Vasudha Joshi, Tushar Mungle, Pabitra Mitra + + + CORE: Compensable Reward as a Catalyst for Improving Offline RL in Wireless Networks + https://arxiv.org/abs/2512.19671 + arXiv:2512.19671v1 Announce Type: new +Abstract: Real-world wireless data are expensive to collect and often lack sufficient expert demonstrations, causing existing offline RL methods to overfit suboptimal behaviors and exhibit unstable performance. To address this issue, we propose CORE, an offline RL framework specifically designed for wireless environments. CORE identifies latent expert trajectories from noisy datasets via behavior embedding clustering, and trains a conditional variational autoencoder with a contrastive objective to separate expert and non-expert behaviors in latent space. Based on the learned representations, CORE constructs compensable rewards that reflect expert-likelihood, effectively guiding policy learning under limited or imperfect supervision. More broadly, this work represents one of the early systematic explorations of offline RL in wireless networking, where prior adoption remains limited. Beyond introducing offline RL techniques to this domain, we further examine intrinsic wireless data characteristics and develop a domain-aligned algorithm that explicitly accounts for their structural properties. 
While offline RL has not yet been fully established as a standard methodology in the wireless community, our study aims to provide foundational insights and empirical evidence to support its broader acceptance. + oai:arXiv.org:2512.19671v1 + cs.NI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Lipeng Zu, Hansong Zhou, Yu Qian, Shayok Chakraborty, Yukun Yuan, Linke Guo, Xiaonan Zhang + + + Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies + https://arxiv.org/abs/2512.19673 + arXiv:2512.19673v1 Announce Type: new +Abstract: Existing reinforcement learning (RL) approaches treat large language models (LLMs) as a single unified policy, overlooking their internal mechanisms. Understanding how policy evolves across layers and modules is therefore crucial for enabling more targeted optimization and raveling out complex reasoning mechanisms. In this paper, we decompose the language model policy by leveraging the intrinsic split of the Transformer residual stream and the equivalence between the composition of hidden states with the unembedding matrix and the resulting samplable policy. This decomposition reveals Internal Layer Policies, corresponding to contributions from individual layers, and Internal Modular Policies, which align with the self-attention and feed-forward network (FFN) components within each layer. By analyzing the entropy of internal policy, we find that: (a) Early layers keep high entropy for exploration, top layers converge to near-zero entropy for refinement, with convergence patterns varying across model series. (b) LLama's prediction space rapidly converges in the final layer, whereas Qwen-series models, especially Qwen3, exhibit a more human-like, progressively structured reasoning pattern. Motivated by these findings, we propose Bottom-up Policy Optimization (BuPO), a novel RL paradigm that directly optimizes the internal layer policy during early training. 
By aligning the training objective at lower layers, BuPO reconstructs foundational reasoning capabilities and achieves superior performance. Extensive experiments on complex reasoning benchmarks demonstrate the effectiveness of our method. Our code is available at https://github.com/Trae1ounG/BuPO. + oai:arXiv.org:2512.19673v1 + cs.LG + cs.AI + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yuqiao Tan, Minzheng Wang, Shizhu He, Huanxuan Liao, Chengfeng Zhao, Qiunan Lu, Tian Liang, Jun Zhao, Kang Liu + + + Efficient Vision Mamba for MRI Super-Resolution via Hybrid Selective Scanning + https://arxiv.org/abs/2512.19676 + arXiv:2512.19676v1 Announce Type: new +Abstract: Background: High-resolution MRI is critical for diagnosis, but long acquisition times limit clinical use. Super-resolution (SR) can enhance resolution post-scan, yet existing deep learning methods face fidelity-efficiency trade-offs. Purpose: To develop a computationally efficient and accurate deep learning framework for MRI SR that preserves anatomical detail for clinical integration. Materials and Methods: We propose a novel SR framework combining multi-head selective state-space models (MHSSM) with a lightweight channel MLP. The model uses 2D patch extraction with hybrid scanning to capture long-range dependencies. Each MambaFormer block integrates MHSSM, depthwise convolutions, and gated channel mixing. Evaluation used 7T brain T1 MP2RAGE maps (n=142) and 1.5T prostate T2w MRI (n=334). Comparisons included Bicubic interpolation, GANs (CycleGAN, Pix2pix, SPSR), transformers (SwinIR), Mamba (MambaIR), and diffusion models (I2SB, Res-SRDiff). Results: Our model achieved superior performance with exceptional efficiency. For 7T brain data: SSIM=0.951+-0.021, PSNR=26.90+-1.41 dB, LPIPS=0.076+-0.022, GMSD=0.083+-0.017, significantly outperforming all baselines (p<0.001). 
For prostate data: SSIM=0.770+-0.049, PSNR=27.15+-2.19 dB, LPIPS=0.190+-0.095, GMSD=0.087+-0.013. The framework used only 0.9M parameters and 57 GFLOPs, reducing parameters by 99.8% and computation by 97.5% versus Res-SRDiff, while outperforming SwinIR and MambaIR in accuracy and efficiency. Conclusion: The proposed framework provides an efficient, accurate MRI SR solution, delivering enhanced anatomical detail across datasets. Its low computational demand and state-of-the-art performance show strong potential for clinical translation. + oai:arXiv.org:2512.19676v1 + cs.CV + physics.med-ph + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Mojtaba Safari, Shansong Wang, Vanessa L Wildman, Mingzhe Hu, Zach Eidex, Chih-Wei Chang, Erik H Middlebrooks, Richard L. J Qiu, Pretesh Patel, Ashesh B. Jania, Hui Mao, Zhen Tian, Xiaofeng Yang + + + Detecting Coordinated Activities Through Temporal, Multiplex, and Collaborative Analysis + https://arxiv.org/abs/2512.19677 + arXiv:2512.19677v1 Announce Type: new +Abstract: In the era of widespread online content consumption, effective detection of coordinated efforts is crucial for mitigating potential threats arising from information manipulation. Despite advances in isolating inauthentic and automated actors, the actions of individual accounts involved in influence campaigns may not stand out as anomalous if analyzed independently of the coordinated group. Given the collaborative nature of information operations, coordinated campaigns are better characterized by evidence of similar temporal behavioral patterns that extend beyond coincidental synchronicity across a group of accounts. We propose a framework to model complex coordination patterns across multiple online modalities. This framework utilizes multiplex networks to first decompose online activities into different interaction layers, and subsequently aggregate evidence of online coordination across the layers. 
In addition, we propose a time-aware collaboration model to capture patterns of online coordination for each modality. The proposed time-aware model builds upon the node-normalized collaboration model and accounts for repetitions of coordinated actions over different time intervals by employing an exponential decay temporal kernel. We validate our approach on multiple datasets featuring different coordinated activities. Our results demonstrate that a multiplex time-aware model excels in the identification of coordinating groups, outperforming previously proposed methods in coordinated activity detection. + oai:arXiv.org:2512.19677v1 + cs.SI + cs.CY + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Letizia Iannucci, Elisa Muratore, Antonis Matakos, Mikko Kivel\"a + + + WorldWarp: Propagating 3D Geometry with Asynchronous Video Diffusion + https://arxiv.org/abs/2512.19678 + arXiv:2512.19678v1 Announce Type: new +Abstract: Generating long-range, geometrically consistent video presents a fundamental dilemma: while consistency demands strict adherence to 3D geometry in pixel space, state-of-the-art generative models operate most effectively in a camera-conditioned latent space. This disconnect causes current methods to struggle with occluded areas and complex camera trajectories. To bridge this gap, we propose WorldWarp, a framework that couples a 3D structural anchor with a 2D generative refiner. To establish geometric grounding, WorldWarp maintains an online 3D geometric cache built via Gaussian Splatting (3DGS). By explicitly warping historical content into novel views, this cache acts as a structural scaffold, ensuring each new frame respects prior geometry. However, static warping inevitably leaves holes and artifacts due to occlusions. We address this using a Spatio-Temporal Diffusion (ST-Diff) model designed for a "fill-and-revise" objective. 
Our key innovation is a spatio-temporal varying noise schedule: blank regions receive full noise to trigger generation, while warped regions receive partial noise to enable refinement. By dynamically updating the 3D cache at every step, WorldWarp maintains consistency across video chunks. Consequently, it achieves state-of-the-art fidelity by ensuring that 3D logic guides structure while diffusion logic perfects texture. Project page: \href{https://hyokong.github.io/worldwarp-page/}{https://hyokong.github.io/worldwarp-page/}. + oai:arXiv.org:2512.19678v1 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Hanyang Kong, Xingyi Yang, Xiaoxu Zheng, Xinchao Wang + + + Optimal-coupling-observer AV motion control securing comfort in the presence of cyber attacks + https://arxiv.org/abs/2512.19679 + arXiv:2512.19679v1 Announce Type: new +Abstract: The security of Automated Vehicles (AVs) is an important emerging area of research in traffic safety. Methods have been published and evaluated in experimental vehicles to secure safe AV control in the presence of attacks, but human motion comfort is rarely investigated in such studies. + In this paper, we present an innovative optimal-coupling-observer-based framework that rejects the impact of bounded sensor attacks in a network of connected and automated vehicles from safety and comfort point of view. + We demonstrate its performance in car following with cooperative adaptive cruise control for platoons with redundant distance and velocity sensors. + The error dynamics are formulated as a Linear Time Variant (LTV) system, resulting in complex stability conditions that are investigated using a Linear Matrix Inequality (LMI) approach guaranteeing global asymptotic stability. + We prove the capability of the framework to secure occupants' safety and comfort in the presence of bounded attacks. 
At the onset of an attack, the framework rapidly detects attacked sensors and switches to the most reliable observer, eliminating the attacked sensors, even with modest attack magnitudes. Without our proposed method, severe (but bounded) attacks result in collisions and major discomfort. With our method, attacks had negligible effects on motion comfort evaluated using ISO-2631 Ride Comfort and Motion Sickness indexes. The results pave the way to bringing comfort to the forefront of AV security. + oai:arXiv.org:2512.19679v1 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + 10.1109/TITS.2025.3629989 + Farzam Tajdari, Georgios Papaioannou, Riender Happee + + + VA-$\pi$: Variational Policy Alignment for Pixel-Aware Autoregressive Generation + https://arxiv.org/abs/2512.19680 + arXiv:2512.19680v1 Announce Type: new +Abstract: Autoregressive (AR) visual generation relies on tokenizers to map images to and from discrete sequences. However, tokenizers are trained to reconstruct clean images from ground-truth tokens, while AR generators are optimized only for token likelihood. This misalignment leads to generated token sequences that may decode into low-quality images, without direct supervision from the pixel space. We propose VA-$\pi$, a lightweight post-training framework that directly optimizes AR models with a principled pixel-space objective. VA-$\pi$ formulates the generator-tokenizer alignment as a variational optimization, deriving an evidence lower bound (ELBO) that unifies pixel reconstruction and autoregressive modeling. To optimize under the discrete token space, VA-$\pi$ introduces a reinforcement-based alignment strategy that treats the AR generator as a policy and uses pixel-space reconstruction quality as its intrinsic reward. 
The reward is measured by how well the predicted token sequences can reconstruct the original image under teacher forcing, giving the model direct pixel-level guidance without expensive free-running sampling. The regularization term of the ELBO serves as a natural regularizer, maintaining distributional consistency of tokens. VA-$\pi$ enables rapid adaptation of existing AR generators, requiring neither tokenizer retraining nor external reward models. With only 1% ImageNet-1K data and 25 minutes of tuning, it reduces FID from 14.36 to 7.65 and improves IS from 86.55 to 116.70 on LlamaGen-XXL, while also yielding notable gains in the text-to-image task on GenEval for both visual generation model (LlamaGen: from 0.306 to 0.339) and unified multi-modal model (Janus-Pro: from 0.725 to 0.744). Code is available at https://github.com/Lil-Shake/VA-Pi. + oai:arXiv.org:2512.19680v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xinyao Liao, Qiyuan He, Kai Xu, Xiaoye Qu, Yicong Li, Wei Wei, Angela Yao + + + GenEnv: Difficulty-Aligned Co-Evolution Between LLM Agents and Environment Simulators + https://arxiv.org/abs/2512.19682 + arXiv:2512.19682v1 Announce Type: new +Abstract: Training capable Large Language Model (LLM) agents is critically bottlenecked by the high cost and static nature of real-world interaction data. We address this by introducing GenEnv, a framework that establishes a difficulty-aligned co-evolutionary game between an agent and a scalable, generative environment simulator. Unlike traditional methods that evolve models on static datasets, GenEnv instantiates a data-evolving process: the simulator acts as a dynamic curriculum policy, continuously generating tasks specifically tailored to the agent's ``zone of proximal development''. This process is guided by a simple but effective $\alpha$-Curriculum Reward, which aligns task difficulty with the agent's current capabilities. 
We evaluate GenEnv on five benchmarks, including API-Bank, ALFWorld, BFCL, Bamboogle, and TravelPlanner. Across these tasks, GenEnv improves agent performance by up to \textbf{+40.3\%} over 7B baselines and matches or exceeds the average performance of larger models. Compared to Gemini 2.5 Pro-based offline data augmentation, GenEnv achieves better performance while using 3.3$\times$ less data. By shifting from static supervision to adaptive simulation, GenEnv provides a data-efficient pathway for scaling agent capabilities. + oai:arXiv.org:2512.19682v1 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jiacheng Guo, Ling Yang, Peter Chen, Qixin Xiao, Yinjie Wang, Xinzhe Juan, Jiahao Qiu, Ke Shen, Mengdi Wang + + + From Indoor to Open World: Revealing the Spatial Reasoning Gap in MLLMs + https://arxiv.org/abs/2512.19683 + arXiv:2512.19683v1 Announce Type: new +Abstract: While Multimodal Large Language Models (MLLMs) have achieved impressive performance on semantic tasks, their spatial intelligence--crucial for robust and grounded AI systems--remains underdeveloped. Existing benchmarks fall short of diagnosing this limitation: they either focus on overly simplified qualitative reasoning or rely on domain-specific indoor data, constrained by the lack of outdoor datasets with verifiable metric ground truth. To bridge this gap, we introduce a large-scale benchmark built from pedestrian-perspective videos captured with synchronized stereo cameras, LiDAR, and IMU/GPS sensors. This dataset provides metrically precise 3D information, enabling the automatic generation of spatial reasoning questions that span a hierarchical spectrum--from qualitative relational reasoning to quantitative metric and kinematic understanding. Evaluations reveal that the performance gains observed in structured indoor benchmarks vanish in open-world settings. 
Further analysis using synthetic abnormal scenes and blinding tests confirms that current MLLMs depend heavily on linguistic priors instead of grounded visual reasoning. Our benchmark thus provides a principled platform for diagnosing these limitations and advancing physically grounded spatial intelligence. + oai:arXiv.org:2512.19683v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Mingrui Wu, Zhaozhi Wang, Fangjinhua Wang, Jiaolong Yang, Marc Pollefeys, Tong Zhang + + + Zero-shot Reconstruction of In-Scene Object Manipulation from Video + https://arxiv.org/abs/2512.19684 + arXiv:2512.19684v1 Announce Type: new +Abstract: We build the first system to address the problem of reconstructing in-scene object manipulation from a monocular RGB video. It is challenging due to ill-posed scene reconstruction, ambiguous hand-object depth, and the need for physically plausible interactions. Existing methods operate in hand centric coordinates and ignore the scene, hindering metric accuracy and practical use. In our method, we first use data-driven foundation models to initialize the core components, including the object mesh and poses, the scene point cloud, and the hand poses. We then apply a two-stage optimization that recovers a complete hand-object motion from grasping to interaction, which remains consistent with the scene information observed in the input video. + oai:arXiv.org:2512.19684v1 + cs.CV + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Dixuan Lin, Tianyou Wang, Zhuoyang Pan, Yufu Wang, Lingjie Liu, Kostas Daniilidis + + + Visual-Aware CoT: Achieving High-Fidelity Visual Consistency in Unified Models + https://arxiv.org/abs/2512.19686 + arXiv:2512.19686v1 Announce Type: new +Abstract: Recently, the introduction of Chain-of-Thought (CoT) has largely improved the generation ability of unified models. 
However, it is observed that the current thinking process during generation mainly focuses on the text consistency with the text prompt, ignoring the \textbf{visual context consistency} with the visual reference images during multi-modal generation, e.g., multi-reference generation. The lack of such consistency results in failure to maintain key visual features (like human ID, object attribute, style). To this end, we integrate the visual context consistency into the reasoning of unified models, explicitly motivating the model to sustain such consistency by 1) Adaptive Visual Planning: generating a structured visual checklist to identify the visual elements whose consistency must be kept, and 2) Iterative Visual Correction: performing self-reflection with the guidance of the checklists and refining the generated result in an iterative manner. To achieve this, we use supervised finetuning to teach the model how to plan the visual checking, conduct self-reflection and self-refinement, and use flow-GRPO to further enhance the visual consistency through a customized visual checking reward. The experiments show that our method outperforms both zero-shot unified models and those with text CoTs in multi-modal generation, demonstrating higher visual context consistency. + oai:arXiv.org:2512.19686v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Zixuan Ye, Quande Liu, Cong Wei, Yuanxing Zhang, Xintao Wang, Pengfei Wan, Kun Gai, Wenhan Luo + + + Pushing the Frontier of Audiovisual Perception with Large-Scale Multimodal Correspondence Learning + https://arxiv.org/abs/2512.19687 + arXiv:2512.19687v1 Announce Type: new +Abstract: We introduce Perception Encoder Audiovisual, PE-AV, a new family of encoders for audio and video understanding trained with scaled contrastive learning. 
Built on PE, PE-AV makes several key contributions to extend representations to audio, and natively support joint embeddings across audio-video, audio-text, and video-text modalities. PE-AV's unified cross-modal embeddings enable novel tasks such as speech retrieval, and set a new state of the art across standard audio and video benchmarks. We unlock this by building a strong audiovisual data engine that synthesizes high-quality captions for O(100M) audio-video pairs, enabling large-scale supervision consistent across modalities. Our audio data includes speech, music, and general sound effects-avoiding single-domain limitations common in prior work. We exploit ten pairwise contrastive objectives, showing that scaling cross-modality and caption-type pairs strengthens alignment and improves zero-shot performance. We further develop PE-A-Frame by fine-tuning PE-AV with frame-level contrastive objectives, enabling fine-grained audio-frame-to-text alignment for tasks such as sound event detection. + oai:arXiv.org:2512.19687v1 + cs.SD + cs.CV + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Apoorv Vyas, Heng-Jui Chang, Cheng-Fu Yang, Po-Yao Huang, Luya Gao, Julius Richter, Sanyuan Chen, Matt Le, Piotr Doll\'ar, Christoph Feichtenhofer, Ann Lee, Wei-Ning Hsu + + + Scalably Enhancing the Clinical Validity of a Task Benchmark with Physician Oversight + https://arxiv.org/abs/2512.19691 + arXiv:2512.19691v1 Announce Type: new +Abstract: Automating the calculation of clinical risk scores offers a significant opportunity to reduce physician administrative burden and enhance patient care. The current standard for evaluating this capability is MedCalc-Bench, a large-scale dataset constructed using LLM-based feature extraction and rule-based aggregation. 
However, treating such model-generated benchmarks as static oracles risks enshrining historical model errors as evaluation gold standards, a problem dangerously amplified when these datasets serve as reward signals for Reinforcement Learning (RL). In this work, we propose viewing benchmarks for complex tasks such as clinical score computation as ''in-progress living documents'' that should be periodically re-evaluated as the processes for creating them improve. We introduce a systematic, physician-in-the-loop pipeline that leverages advanced agentic verifiers to audit and relabel MedCalc-Bench, utilizing automated triage to reserve scarce clinician attention for the most contentious instances. Our audit reveals that a notable fraction of original labels diverge from medical ground truth due to extraction errors, calculator logic mismatches, and clinical ambiguity. To study whether this label noise meaningfully impacts downstream RL training, we fine-tune a Qwen3-8B model via Group Relative Policy Optimization (GRPO) and demonstrate that training on corrected labels yields an 8.7% absolute improvement in accuracy over the original baseline -- validating that label noise materially affects model evaluation. These findings underscore that in safety-critical domains, rigorous benchmark maintenance is a prerequisite for genuine model alignment. + oai:arXiv.org:2512.19691v1 + cs.AI + stat.AP + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Junze Ye, Daniel Tawfik, Alex J. Goodell, Nikhil V. Kotha, Mark K. Buyyounouski, Mohsen Bayati + + + Interact2Ar: Full-Body Human-Human Interaction Generation via Autoregressive Diffusion Models + https://arxiv.org/abs/2512.19692 + arXiv:2512.19692v1 Announce Type: new +Abstract: Generating realistic human-human interactions is a challenging task that requires not only high-quality individual body and hand motions, but also coherent coordination among all interactants. 
Due to limitations in available data and increased learning complexity, previous methods tend to ignore hand motions, limiting the realism and expressivity of the interactions. Additionally, current diffusion-based approaches generate entire motion sequences simultaneously, limiting their ability to capture the reactive and adaptive nature of human interactions. To address these limitations, we introduce Interact2Ar, the first end-to-end text-conditioned autoregressive diffusion model for generating full-body, human-human interactions. Interact2Ar incorporates detailed hand kinematics through dedicated parallel branches, enabling high-fidelity full-body generation. Furthermore, we introduce an autoregressive pipeline coupled with a novel memory technique that facilitates adaptation to the inherent variability of human interactions using efficient large context windows. The adaptability of our model enables a series of downstream applications, including temporal motion composition, real-time adaptation to disturbances, and extension beyond dyadic to multi-person scenarios. To validate the generated motions, we introduce a set of robust evaluators and extended metrics designed specifically for assessing full-body interactions. Through quantitative and qualitative experiments, we demonstrate the state-of-the-art performance of Interact2Ar. + oai:arXiv.org:2512.19692v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-sa/4.0/ + Pablo Ruiz-Ponce, Sergio Escalera, Jos\'e Garc\'ia-Rodr\'iguez, Jiankang Deng, Rolandos Alexandros Potamias + + + The Prism Hypothesis: Harmonizing Semantic and Pixel Representations via Unified Autoencoding + https://arxiv.org/abs/2512.19693 + arXiv:2512.19693v1 Announce Type: new +Abstract: Deep representations across modalities are inherently intertwined. In this paper, we systematically analyze the spectral characteristics of various semantic and pixel encoders. 
Interestingly, our study uncovers a highly inspiring and rarely explored correspondence between an encoder's feature spectrum and its functional role: semantic encoders primarily capture low-frequency components that encode abstract meaning, whereas pixel encoders additionally retain high-frequency information that conveys fine-grained detail. This heuristic finding offers a unifying perspective that ties encoder behavior to its underlying spectral structure. We define it as the Prism Hypothesis, where each data modality can be viewed as a projection of the natural world onto a shared feature spectrum, just like light through a prism. Building on this insight, we propose Unified Autoencoding (UAE), a model that harmonizes semantic structure and pixel details via an innovative frequency-band modulator, enabling their seamless coexistence. Extensive experiments on ImageNet and MS-COCO benchmarks validate that our UAE effectively unifies semantic abstraction and pixel-level fidelity into a single latent space with state-of-the-art performance. + oai:arXiv.org:2512.19693v1 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Weichen Fan, Haiwen Diao, Quan Wang, Dahua Lin, Ziwei Liu + + + Privacy-Aware Sharing of Raw Spatial Sensor Data for Cooperative Perception + https://arxiv.org/abs/2512.16265 + arXiv:2512.16265v1 Announce Type: cross +Abstract: Cooperative perception between vehicles is poised to offer robust and reliable scene understanding. Recently, we are witnessing experimental systems research building testbeds that share raw spatial sensor data for cooperative perception. While this approach has brought marked improvements in accuracy and is the natural way forward, we take a moment to consider the problems with such an approach for eventual adoption by automakers. In this paper, we first argue that new forms of privacy concerns arise and discourage stakeholders from sharing raw sensor data. 
Next, we present SHARP, a research framework to minimize privacy leakage and drive stakeholders towards the ambitious goal of raw-data-based cooperative perception. Finally, we discuss open questions for networked systems, mobile computing, perception researchers, industry and government in realizing our proposed framework. + oai:arXiv.org:2512.16265v1 + cs.NI + cs.CV + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Bangya Liu, Chengpo Yan, Chenghao Jiang, Suman Banerjee, Akarsh Prabhakara + + + Disentangled representations via score-based variational autoencoders + https://arxiv.org/abs/2512.17127 + arXiv:2512.17127v1 Announce Type: cross +Abstract: We present the Score-based Autoencoder for Multiscale Inference (SAMI), a method for unsupervised representation learning that combines the theoretical frameworks of diffusion models and VAEs. By unifying their respective evidence lower bounds, SAMI formulates a principled objective that learns representations through score-based guidance of the underlying diffusion process. The resulting representations automatically capture meaningful structure in the data: they recover ground truth generative factors in our synthetic dataset, learn factorized, semantic latent dimensions from complex natural images, and encode video sequences into latent trajectories that are straighter than those of alternative encoders, despite training exclusively on static images. Furthermore, SAMI can extract useful representations from pre-trained diffusion models with minimal additional training. Finally, the explicitly probabilistic formulation provides new ways to identify semantically meaningful axes in the absence of supervised labels, and its mathematical exactness allows us to make formal statements about the nature of the learned representation. 
Overall, these results indicate that implicit structural information in diffusion models can be made explicit and interpretable through synergistic combination with a variational autoencoder. + oai:arXiv.org:2512.17127v1 + stat.ML + cs.CV + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Benjamin S. H. Lyo, Eero P. Simoncelli, Cristina Savin + + + Inferring Latent Market Forces: Evaluating LLM Detection of Gamma Exposure Patterns via Obfuscation Testing + https://arxiv.org/abs/2512.17923 + arXiv:2512.17923v1 Announce Type: cross +Abstract: We introduce obfuscation testing, a novel methodology for validating whether large language models detect structural market patterns through causal reasoning rather than temporal association. Testing three dealer hedging constraint patterns (gamma positioning, stock pinning, 0DTE hedging) on 242 trading days (95.6% coverage) of S&P 500 options data, we find LLMs achieve 71.5% detection rate using unbiased prompts that provide only raw gamma exposure values without regime labels or temporal context. The WHO-WHOM-WHAT causal framework forces models to identify the economic actors (dealers), affected parties (directional traders), and structural mechanisms (forced hedging) underlying observed market dynamics. Critically, detection accuracy (91.2%) remains stable even as economic profitability varies quarterly, demonstrating that models identify structural constraints rather than profitable patterns. When prompted with regime labels, detection increases to 100%, but the 71.5% unbiased rate validates genuine pattern recognition. Our findings suggest LLMs possess emergent capabilities for detecting complex financial mechanisms through pure structural reasoning, with implications for systematic strategy development, risk management, and our understanding of how transformer architectures process financial market dynamics. 
+ oai:arXiv.org:2512.17923v1 + q-fin.ST + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Christopher Regan, Ying Xie + + + A curated UK rain radar data set for training and benchmarking nowcasting models + https://arxiv.org/abs/2512.17924 + arXiv:2512.17924v1 Announce Type: cross +Abstract: This paper documents a data set of UK rain radar image sequences for use in statistical modeling and machine learning methods for nowcasting. The main dataset contains 1,000 randomly sampled sequences of length 20 steps (15-minute increments) of 2D radar intensity fields of dimension 40x40 (at 5km spatial resolution). Spatially stratified sampling ensures spatial homogeneity despite removal of clear-sky cases by threshold-based truncation. For each radar sequence, additional atmospheric and geographic features are made available, including date, location, mean elevation, mean wind direction and speed and prevailing storm type. New R functions to extract data from the binary "Nimrod" radar data format are provided. A case study is presented to train and evaluate a simple convolutional neural network for radar nowcasting, including self-contained R code. + oai:arXiv.org:2512.17924v1 + physics.ao-ph + cs.CV + cs.LG + stat.AP + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by-sa/4.0/ + Viv Atureta, Rifki Priansyah Jasin, Stefan Siegert + + + Efficient Beamforming Optimization for STAR-RIS-Assisted Communications: A Gradient-Based Meta Learning Approach + https://arxiv.org/abs/2512.17928 + arXiv:2512.17928v1 Announce Type: cross +Abstract: Simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) has emerged as a promising technology to realize full-space coverage and boost spectral efficiency in next-generation wireless networks. 
Yet, the joint design of the base station precoding matrix as well as the STAR-RIS transmission and reflection coefficient matrices leads to a high-dimensional, strongly nonconvex, and NP-hard optimization problem. Conventional alternating optimization (AO) schemes typically involve repeated large-scale matrix inversion operations, resulting in high computational complexity and poor scalability, while existing deep learning approaches often rely on expensive pre-training and large network models. In this paper, we develop a gradient-based meta learning (GML) framework that directly feeds optimization gradients into lightweight neural networks, thereby removing the need for pre-training and enabling fast adaptation. Specifically, we design dedicated GML-based schemes for both independent-phase and coupled-phase STAR-RIS models, effectively handling their respective amplitude and phase constraints while achieving weighted sum-rate performance very close to that of AO-based benchmarks. Extensive simulations demonstrate that, for both phase models, the proposed methods substantially reduce computational overhead, with complexity growing nearly linearly as the number of BS antennas and STAR-RIS elements grows, and yielding up to 10 times runtime speedup over AO, which confirms the scalability and practicality of the proposed GML method for large-scale STAR-RIS-assisted communications. 
+ oai:arXiv.org:2512.17928v1 + eess.SP + cs.AI + cs.IT + cs.LG + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Dongdong Yang, Bin Li, Jiguang He, Yicheng Yan, Xiaoyu Zhang, Chongwen Huang + + + Reinforcement Learning for Monetary Policy Under Macroeconomic Uncertainty: Analyzing Tabular and Function Approximation Methods + https://arxiv.org/abs/2512.17929 + arXiv:2512.17929v1 Announce Type: cross +Abstract: We study how a central bank should dynamically set short-term nominal interest rates to stabilize inflation and unemployment when macroeconomic relationships are uncertain and time-varying. We model monetary policy as a sequential decision-making problem where the central bank observes macroeconomic conditions quarterly and chooses interest rate adjustments. Using publicly accessible historical Federal Reserve Economic Data (FRED), we construct a linear-Gaussian transition model and implement a discrete-action Markov Decision Process with a quadratic loss reward function. We compare nine different reinforcement-learning-style approaches against Taylor Rule and naive baselines, including tabular Q-learning variants, SARSA, Actor-Critic, Deep Q-Networks, Bayesian Q-learning with uncertainty quantification, and POMDP formulations with partial observability. Surprisingly, standard tabular Q-learning achieved the best performance (-615.13 +- 309.58 mean return), outperforming both enhanced RL methods and traditional policy rules. Our results suggest that while sophisticated RL techniques show promise for monetary policy applications, simpler approaches may be more robust in this domain, highlighting important challenges in applying modern RL to macroeconomic policy. 
+ oai:arXiv.org:2512.17929v1 + q-fin.ST + cs.AI + cs.LG + econ.EM + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Sheryl Chen, Tony Wang, Kyle Feinstein + + + CytoDINO: Risk-Aware and Biologically-Informed Adaptation of DINOv3 for Bone Marrow Cytomorphology + https://arxiv.org/abs/2512.17930 + arXiv:2512.17930v1 Announce Type: cross +Abstract: Bone marrow cell cytomorphology analysis is critical for the diagnosis of hematological malignancies but remains a labor-intensive process subject to significant inter-observer variability. While recent foundation models have shown promise in computational pathology, they often require extensive computational resources and fail to account for the asymmetric risks associated with clinical misdiagnosis. We introduce CytoDINO, a framework that achieves state-of-the-art performance on the Munich Leukemia Laboratory (MLL) dataset by fine-tuning DINOv3 using Low-Rank Adaptation (LoRA). Our primary contribution is a novel Hierarchical Focal Loss with Critical Penalties, which encodes biological relationships between cell lineages and explicitly penalizes clinically dangerous misclassifications (e.g., classifying blasts as normal cells). CytoDINO achieves an 88.2% weighted F1 score and 76.5% macro F1 on a held-out test set of 21 cell classes. By utilizing parameter-efficient fine-tuning with only 8% trainable parameters on a single NVIDIA RTX 5080, we demonstrate that consumer-grade hardware can match specialized infrastructure. 
Furthermore, confidence-based selective prediction yields 99.5% accuracy on 67% of samples, suggesting a viable pathway for clinical deployment where high-uncertainty cases are flagged for expert review. + oai:arXiv.org:2512.17930v1 + q-bio.OT + cs.CV + eess.IV + q-bio.QM + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Aziz Muminov, Anne Pham + + + Continual Learning for Acoustic Event Classification + https://arxiv.org/abs/2512.17932 + arXiv:2512.17932v1 Announce Type: cross +Abstract: Continuously learning new classes without catastrophic forgetting is a challenging problem for on-device acoustic event classification given the restrictions on computation resources (e.g., model size, running memory). To alleviate such an issue, we propose two novel diversity-aware incremental learning methods for Spoken Keyword Spotting and Environmental Sound Classification. Our method selects historical data for training by measuring the per-sample classification uncertainty. For the Spoken Keyword Spotting application, the proposed RK approach introduces a diversity-aware sampler to select a diverse set from historical and incoming keywords by calculating classification uncertainty. As a result, the RK approach can incrementally learn new tasks without forgetting prior knowledge. Besides, the RK approach also proposes data augmentation and a knowledge distillation loss function for efficient memory management on the edge device. For the Environmental Sound Classification application, we measure the uncertainty by observing how the classification probability of data fluctuates against the parallel perturbations added to the classifier embedding. In this way, the computation cost can be significantly reduced compared with adding perturbation to the raw data. 
Experimental results show that the proposed RK approach achieves a 4.2% absolute improvement in terms of average accuracy over the best baseline on the Google Speech Command dataset with less required memory. Experimental results on the DCASE 2019 Task 1 and ESC-50 dataset show that our proposed method outperforms baseline continual learning methods on classification accuracy and computational efficiency, indicating our method can efficiently and incrementally learn new classes without the catastrophic forgetting problem for on-device environmental sound classification. + oai:arXiv.org:2512.17932v1 + eess.AS + cs.SD + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Yang Xiao + + + Risk-Aware Financial Forecasting Enhanced by Machine Learning and Intuitionistic Fuzzy Multi-Criteria Decision-Making + https://arxiv.org/abs/2512.17936 + arXiv:2512.17936v1 Announce Type: cross +Abstract: In the face of increasing financial uncertainty and market complexity, this study presents a novel risk-aware financial forecasting framework that integrates advanced machine learning techniques with intuitionistic fuzzy multi-criteria decision-making (MCDM). Tailored to the BIST 100 index and validated through a case study of a major defense company in T\"urkiye, the framework fuses structured financial data, unstructured text data, and macroeconomic indicators to enhance predictive accuracy and robustness. It incorporates a hybrid suite of models, including extreme gradient boosting (XGBoost), a long short-term memory (LSTM) network, and a graph neural network (GNN), to deliver probabilistic forecasts with quantified uncertainty. The empirical results demonstrate high forecasting accuracy, with a net profit mean absolute percentage error (MAPE) of 3.03% and narrow 95% confidence intervals for key financial indicators. 
The risk-aware analysis indicates a favorable risk-return profile, with a Sharpe ratio of 1.25 and a higher Sortino ratio of 1.80, suggesting relatively low downside volatility and robust performance under market fluctuations. Sensitivity analysis shows that the key financial indicator predictions are highly sensitive to variations of inflation, interest rates, sentiment, and exchange rates. Additionally, using an intuitionistic fuzzy MCDM approach, combining entropy weighting, evaluation based on distance from the average solution (EDAS), and the measurement of alternatives and ranking according to compromise solution (MARCOS) methods, the tabular data learning network (TabNet) outperforms the other models and is identified as the most suitable candidate for deployment. Overall, the findings of this work highlight the importance of integrating advanced machine learning, risk quantification, and fuzzy MCDM methodologies in financial forecasting, particularly in emerging markets. + oai:arXiv.org:2512.17936v1 + q-fin.ST + cs.LG + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Safiye Turgay, Serkan Erdo\u{g}an, \v{Z}eljko Stevi\'c, Orhan Emre Elma, Tevfik Eren, Zhiyuan Wang, Mahmut Bayda\c{s} + + + LIWhiz: A Non-Intrusive Lyric Intelligibility Prediction System for the Cadenza Challenge + https://arxiv.org/abs/2512.17937 + arXiv:2512.17937v1 Announce Type: cross +Abstract: We present LIWhiz, a non-intrusive lyric intelligibility prediction system submitted to the ICASSP 2026 Cadenza Challenge. LIWhiz leverages Whisper for robust feature extraction and a trainable back-end for score prediction. Tested on the Cadenza Lyric Intelligibility Prediction (CLIP) evaluation set, LIWhiz achieves a 22.4% relative root mean squared error reduction over the STOI-based baseline, yielding a substantial improvement in normalized cross-correlation. 
+ oai:arXiv.org:2512.17937v1 + eess.AS + cs.SD + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ram C. M. C. Shekar, Iv\'an L\'opez-Espejo + + + A Critical Review of Monte Carlo Algorithms Balancing Performance and Probabilistic Accuracy with AI Augmented Framework + https://arxiv.org/abs/2512.17968 + arXiv:2512.17968v1 Announce Type: cross +Abstract: Monte Carlo algorithms are a foundational pillar of modern computational science, yet their effective application hinges on a deep understanding of their performance trade-offs. This paper presents a critical analysis of the evolution of Monte Carlo algorithms, focusing on the persistent tension between statistical efficiency and computational cost. We describe the historical development from the foundational Metropolis-Hastings algorithm to contemporary methods like Hamiltonian Monte Carlo. A central emphasis of this survey is the rigorous discussion of time and space complexity, including upper, lower, and asymptotic tight bounds for each major algorithm class. We examine the specific motivations for developing these methods and the key theoretical and practical observations such as the introduction of gradient information and adaptive tuning in HMC that led to successively better solutions. Furthermore, we provide a justification framework that discusses explicit situations in which using one algorithm is demonstrably superior to another for the same problem. The paper concludes by assessing the profound significance and impact of these algorithms and detailing major current research challenges. 
+ oai:arXiv.org:2512.17968v1 + stat.CO + cs.AI + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ravi Prasad + + + Re-assessing the evidence for mental rotation abilities in children using computational models + https://arxiv.org/abs/2512.17972 + arXiv:2512.17972v1 Announce Type: cross +Abstract: There is strong and diverse evidence for mental rotation (MR) abilities in adults. However, current evidence for MR in children rests on just a few behavioral paradigms adapted from the adult literature. Here, we leverage recent computational models of the development of children's object recognition abilities to re-assess the evidence for MR in children. The computational models simulate infants' acquisition of object representations during embodied interactions with objects. We consider two different object recognition strategies, different from MRs, and assess their ability to replicate results from three classical MR tasks assigned to children between the ages of 6 months and 5 years. Our results show that MR may play no role in producing the results obtained from children younger than 5 years. In fact, we find that a simple recognition strategy that reflects a pixel-wise comparison of stimuli is sufficient to model children's behavior in the most used MR task. Thus, our study reopens the debate on how and when children develop genuine MR abilities. + oai:arXiv.org:2512.17972v1 + q-bio.NC + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Arthur Aubret, Jochen Triesch + + + Sampling from multimodal distributions with warm starts: Non-asymptotic bounds for the Reweighted Annealed Leap-Point Sampler + https://arxiv.org/abs/2512.17977 + arXiv:2512.17977v1 Announce Type: cross +Abstract: Sampling from multimodal distributions is a central challenge in Bayesian inference and machine learning. 
In light of hardness results for sampling -- classical MCMC methods, even with tempering, can suffer from exponential mixing times -- a natural question is how to leverage additional information, such as a warm start point for each mode, to enable faster mixing across modes. To address this, we introduce Reweighted ALPS (Re-ALPS), a modified version of the Annealed Leap-Point Sampler (ALPS) that dispenses with the Gaussian approximation assumption. We prove the first polynomial-time bound that works in a general setting, under a natural assumption that each component contains significant mass relative to the others when tilted towards the corresponding warm start point. Similarly to ALPS, we define distributions tilted towards a mixture centered at the warm start points, and at the coldest level, use teleportation between warm start points to enable efficient mixing across modes. In contrast to ALPS, our method does not require Hessian information at the modes, but instead estimates component partition functions via Monte Carlo. This additional estimation step is crucial in allowing the algorithm to handle target distributions with more complex geometries than approximately Gaussian ones. For the proof, we show convergence results for Markov processes when only part of the stationary distribution is well-mixing, and estimation results for the partition functions of individual components of a mixture. We numerically evaluate our algorithm's mixing performance compared to ALPS on a mixture of heavy-tailed distributions. 
+ oai:arXiv.org:2512.17977v1 + stat.ML + cs.LG + math.PR + math.ST + stat.CO + stat.TH + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Holden Lee, Matheau Santana-Gijzen + + + MEGState: Phoneme Decoding from Magnetoencephalography Signals + https://arxiv.org/abs/2512.17978 + arXiv:2512.17978v1 Announce Type: cross +Abstract: Decoding linguistically meaningful representations from non-invasive neural recordings remains a central challenge in neural speech decoding. Among available neuroimaging modalities, magnetoencephalography (MEG) provides a safe and repeatable means of mapping speech-related cortical dynamics, yet its low signal-to-noise ratio and high temporal dimensionality continue to hinder robust decoding. In this work, we introduce MEGState, a novel architecture for phoneme decoding from MEG signals that captures fine-grained cortical responses evoked by auditory stimuli. Extensive experiments on the LibriBrain dataset demonstrate that MEGState consistently surpasses the baseline model across multiple evaluation metrics. These findings highlight the potential of MEG-based phoneme decoding as a scalable pathway toward non-invasive brain-computer interfaces for speech. + oai:arXiv.org:2512.17978v1 + q-bio.NC + cs.LG + cs.SD + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Shuntaro Suzuki, Chia-Chun Dan Hsu, Yu Tsao, Komei Sugiura + + + An interface crack in 1d piezoelectric quasicrystal under antiplane mechanical loading and electric field + https://arxiv.org/abs/2512.17981 + arXiv:2512.17981v1 Announce Type: cross +Abstract: The present study considers a mode III interface crack in a one-dimensional (1D) piezoelectric quasicrystal under antiplane phonon and phason loading and in-plane electric field. 
Using a complex-function approach, all required electromechanical parameters are expressed through vector functions analytic in the whole complex plane except the crack region. The cases of electrically impermeable (insulated) and electrically limited permeable conditions on the crack faces are considered. In the first case, a vector Hilbert problem in the complex plane is formulated and solved exactly; in the second, a quadratic equation with respect to the electric flux through the crack region is additionally obtained. Its solution permits finding phonon and phason stresses, displacement jumps (sliding), and also electric characteristics along the material interface. Analytical formulas are also obtained for the corresponding stress intensity factors related to each field. Numerical computations for three selected variants of the loading conditions were conducted, and the resulting field distributions are visualised on the crack continuation beyond the crack and also inside the crack region. + oai:arXiv.org:2512.17981v1 + cond-mat.mtrl-sci + cs.NA + math.AP + math.CV + math.NA + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.33108/visnyk_tntu2025.03.012 + Scientific Journal of TNTU (Tern.), vol. 119, no. 3, 2025, pp. 12-25 + Mohammed Altoumaimi, V. V. Loboda + + + The Subject of Emergent Misalignment in Superintelligence: An Anthropological, Cognitive Neuropsychological, Machine-Learning, and Ontological Perspective + https://arxiv.org/abs/2512.17989 + arXiv:2512.17989v1 Announce Type: cross +Abstract: We examine the conceptual and ethical gaps in current representations of Superintelligence misalignment. We find throughout Superintelligence discourse an absent human subject, and an under-developed theorization of an "AI unconscious" that together are potentially laying the groundwork for anti-social harm. 
With the rise of AI Safety, a field with the potential for both pro-social and anti-social outcomes, we ask: what place does the human subject occupy in these imaginaries? How is human subjecthood positioned within narratives of catastrophic failure or rapid "takeoff" toward superintelligence? On another register, we ask: what unconscious or repressed dimensions are being inscribed into large-scale AI models? Are we to blame these agents for opting for deceptive strategies when undesirable patterns are inherent within our beings? In tracing these psychic and epistemic absences, our project calls for re-centering the human subject as the unstable ground upon which the ethical, unconscious, and misaligned dimensions of both human and machinic intelligence are co-constituted. Emergent misalignment cannot be understood solely through technical diagnostics typical of contemporary machine-learning safety research. Instead, it represents a multi-layered crisis. The human subject disappears not only through computational abstraction but through sociotechnical imaginaries that prioritize scalability, acceleration, and efficiency over vulnerability, finitude, and relationality. Likewise, the AI unconscious emerges not as a metaphor but as a structural reality of modern deep learning systems: vast latent spaces, opaque pattern formation, recursive symbolic play, and evaluation-sensitive behavior that surpasses explicit programming. These dynamics necessitate a reframing of misalignment as a relational instability embedded within human-machine ecologies. 
+ oai:arXiv.org:2512.17989v1 + q-bio.NC + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Muhammad Osama Imran, Roshni Lulla, Rodney Sappington + + + Shuttling Compiler for Trapped-Ion Quantum Computers Based on Large Language Models + https://arxiv.org/abs/2512.18021 + arXiv:2512.18021v1 Announce Type: cross +Abstract: Trapped-ion quantum computers based on segmented traps rely on shuttling operations to establish connectivity between multiple sub-registers within a quantum processing unit. Several architectures of increasing complexity have already been realized, including linear arrays, racetrack loops, and junction-based layouts. As hardware capabilities advance, the need arises for flexible software layers within the control stack to manage qubit routing$\unicode{x2014}$the process of dynamically reconfiguring qubit positions so that all qubits involved in a gate operation are co-located within the same segment. Existing approaches typically employ architecture-specific heuristics, which become impractical as system complexity grows. To address this challenge, we propose a layout-independent compilation strategy based on large language models (LLMs). Specifically, we fine-tune pretrained LLMs to generate the required shuttling operations. We evaluate this approach on both linear and branched one-dimensional architectures, demonstrating that it provides a foundation for developing LLM-based shuttling compilers for trapped-ion quantum computers. 
+ oai:arXiv.org:2512.18021v1 + quant-ph + cs.ET + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Fabian Kreppel, Reza Salkhordeh, Ferdinand Schmidt-Kaler, Andr\'e Brinkmann + + + Long-range electrostatics for machine learning interatomic potentials is easier than we thought + https://arxiv.org/abs/2512.18029 + arXiv:2512.18029v1 Announce Type: cross +Abstract: The lack of long-range electrostatics is a key limitation of modern machine learning interatomic potentials (MLIPs), hindering reliable applications to interfaces, charge-transfer reactions, polar and ionic materials, and biomolecules. In this Perspective, we distill two design principles behind the Latent Ewald Summation (LES) framework, which can capture long-range interactions, charges, and electrical response just by learning from standard energy and force training data: (i) use a Coulomb functional form with environment-dependent charges to capture electrostatic interactions, and (ii) avoid explicit training on ambiguous density functional theory (DFT) partial charges. When both principles are satisfied, substantial flexibility remains: essentially any short-range MLIP can be augmented; charge equilibration schemes can be added when desired; dipoles and Born effective charges can be inferred or finetuned; and charge/spin-state embeddings or tensorial targets can be further incorporated. We also discuss current limitations and open challenges. Together, these minimal, physics-guided design rules suggest that incorporating long-range electrostatics into MLIPs is simpler and perhaps more broadly applicable than is commonly assumed. 
+ oai:arXiv.org:2512.18029v1 + physics.comp-ph + cond-mat.mtrl-sci + cs.LG + physics.chem-ph + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Dongjin Kim, Bingqing Cheng + + + Assembly Addition Chains + https://arxiv.org/abs/2512.18030 + arXiv:2512.18030v1 Announce Type: cross +Abstract: In this paper we extend the notion of Addition Chains over Z+ to a general set S. We explain how the algebraic structure of Assembly Multi-Magma over the pairs (S, BB a proper subset of S) allows us to define the concept of Addition Chain over S, called Assembly Addition Chains of S with Building Blocks BB. Analogously to the Z+ case, we introduce the concept of Optimal Assembly Addition Chains over S and prove lower and upper bounds for their lengths, similar to the bounds found by Schonhage for the Z+ case. In the general case, the unit 1 in Z+ is replaced by the subset BB, and the mentioned bounds for the length of an Optimal Assembly Addition Chain of O in S are defined in terms of the size of O (i.e. the number of Building Blocks required to construct O). The main examples of S that we consider throughout this paper are (i) j-Strings (Strings with an alphabet of j letters), (ii) Colored Connected Graphs and (iii) Colored Polyominoes. + oai:arXiv.org:2512.18030v1 + math.CO + cs.CC + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Leroy Cronin, Juan Carlos Morales Parra, Keith Y. Patarroyo + + + Correcting quantum errors one gradient step at a time + https://arxiv.org/abs/2512.18061 + arXiv:2512.18061v1 Announce Type: cross +Abstract: In this work, we introduce a general, gradient-based method that optimises codewords for a given noise channel and fixed recovery. We do so by differentiating fidelity and descending on the complex coefficients using finite-difference Wirtinger gradients with soft penalties to promote orthonormalisation. 
We validate the gradients on symmetry checks (XXX/ZZZ repetition codes) and the $[[5, 1, 3]]$ code, then demonstrate substantial gains under isotropic Pauli noise with Petz recovery: fidelity improves from 0.783 to 0.915 in 100 steps for an isotropic Pauli noise of strength 0.05. The procedure is deterministic, highly parallelisable, and highly scalable. + oai:arXiv.org:2512.18061v1 + quant-ph + cs.NA + math.NA + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Manav Seksaria, Anil Prabhakar + + + Causal Inference as Distribution Adaptation: Optimizing ATE Risk under Propensity Uncertainty + https://arxiv.org/abs/2512.18083 + arXiv:2512.18083v1 Announce Type: cross +Abstract: Standard approaches to causal inference, such as Outcome Regression and Inverse Probability Weighted Regression Adjustment (IPWRA), are typically derived through the lens of missing data imputation and identification theory. In this work, we unify these methods from a Machine Learning perspective, reframing ATE estimation as a \textit{domain adaptation problem under distribution shift}. We demonstrate that the canonical Hajek estimator is a special case of IPWRA restricted to a constant hypothesis class, and that IPWRA itself is fundamentally Importance-Weighted Empirical Risk Minimization designed to correct for the covariate shift between the treated sub-population and the target population. + Leveraging this unified framework, we critically examine the optimization objectives of Doubly Robust estimators. We argue that standard methods enforce \textit{sufficient but not necessary} conditions for consistency by requiring outcome models to be individually unbiased. We define the true "ATE Risk Function" and show that minimizing it requires only that the biases of the treated and control models structurally cancel out. Exploiting this insight, we propose the \textbf{Joint Robust Estimator (JRE)}. 
Instead of treating propensity estimation and outcome modeling as independent stages, JRE utilizes bootstrap-based uncertainty quantification of the propensity score to train outcome models jointly. By optimizing for the expected ATE risk over the distribution of propensity scores, JRE leverages model degrees of freedom to achieve robustness against propensity misspecification. Simulation studies demonstrate that JRE achieves up to a 15\% reduction in MSE compared to standard IPWRA in finite-sample regimes with misspecified outcome models. + oai:arXiv.org:2512.18083v1 + stat.ML + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Ashley Zhang + + + Distributed Asymmetric Allocation: A Topic Model for Large Imbalanced Corpora in Social Sciences + https://arxiv.org/abs/2512.18119 + arXiv:2512.18119v1 Announce Type: cross +Abstract: Social scientists employ latent Dirichlet allocation (LDA) to find highly specific topics in large corpora, but they often struggle in this task because (1) LDA, in general, takes a significant amount of time to fit on large corpora; (2) unsupervised LDA fragments topics into sub-topics in short documents; (3) semi-supervised LDA fails to identify specific topics defined using seed words. To solve these problems, I have developed a new topic model called distributed asymmetric allocation (DAA) that integrates multiple algorithms for efficiently identifying sentences about important topics in large corpora. I evaluate the ability of DAA to identify politically important topics by fitting it to the transcripts of speeches at the United Nations General Assembly between 1991 and 2017. The results show that DAA can classify sentences significantly more accurately and quickly than LDA thanks to the new algorithms. More generally, the results demonstrate that it is important for social scientists to optimize Dirichlet priors of LDA to perform content analysis accurately. 
+ oai:arXiv.org:2512.18119v1 + stat.ME + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Kohei Watanabe + + + Exploring polymer classification with a hybrid single-photon quantum approach + https://arxiv.org/abs/2512.18125 + arXiv:2512.18125v1 Announce Type: cross +Abstract: Polymers exhibit complex architectures and diverse properties that place them at the center of contemporary research in chemistry and materials science. As conventional computational techniques, even multi-scale ones, struggle to capture this complexity, quantum computing offers a promising alternative framework for extracting structure-property relationships. Noisy Intermediate-Scale Quantum (NISQ) devices are commonly used to explore the implementation of algorithms, including quantum neural networks for classification tasks, despite ongoing debate regarding their practical impact. + We present a hybrid classical-quantum formalism that couples a classical deep neural network for polymer featurization with a single-photon-based quantum classifier native to photonic quantum computing. This pipeline successfully classifies polymer species by their optical gap, with performance consistent between CPU-based noisy simulations and a proof-of-principle run on Quandela's Ascella quantum processor. These findings demonstrate the effectiveness of the proposed computational workflow and indicate that chemistry-related classification tasks can already be tackled under the constraints of today's NISQ devices. 
+ oai:arXiv.org:2512.18125v1 + quant-ph + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Alexandrina Stoyanova, Bogdan Penkovsky + + + Estimating Solvation Free Energies with Boltzmann Generators + https://arxiv.org/abs/2512.18147 + arXiv:2512.18147v1 Announce Type: cross +Abstract: Accurate calculations of solvation free energies remain a central challenge in molecular simulations, often requiring extensive sampling and numerous alchemical intermediates to ensure sufficient overlap between phase-space distributions of a solute in the gas phase and in solution. Here, we introduce a computational framework based on normalizing flows that directly maps solvent configurations between solutes of different sizes, and compare the accuracy and efficiency to conventional free energy estimates. For a Lennard-Jones solvent, we demonstrate that this approach yields acceptable accuracy in estimating free energy differences for challenging transformations, such as solute growth or increased solute-solute separation, which typically demand multiple intermediate simulation steps along the transformation. Analysis of radial distribution functions indicates that the flow generates physically meaningful solvent rearrangements, substantially enhancing configurational overlap between states in configuration space. These results suggest flow-based models as a promising alternative to traditional free energy estimation methods. + oai:arXiv.org:2512.18147v1 + cond-mat.stat-mech + cs.LG + physics.comp-ph + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Maximilian Schebek, Nikolas M. Frob\"ose, Bettina G. Keller, Jutta Rogal + + + Copula Entropy: Theory and Applications + https://arxiv.org/abs/2512.18168 + arXiv:2512.18168v1 Announce Type: cross +Abstract: This is the monograph on the theory and applications of copula entropy (CE). 
This book first introduces the theory of CE, including its background, definition, theorems, properties, and estimation methods. The theoretical applications of CE to structure learning, association discovery, variable selection, causal discovery, system identification, time lag estimation, domain adaptation, multivariate normality test, copula hypothesis test, two-sample test, change point detection, and symmetry test are reviewed. The relationships between the theoretical applications and their connections to correlation and causality are discussed. The framework based on CE for measuring statistical independence and conditional independence is compared to the other similar ones. The advantages of CE based methodologies over the other comparable ones are evaluated with simulations. The mathematical generalizations of CE are reviewed. The real applications of CE to every branch of science and engineering are briefly introduced. + oai:arXiv.org:2512.18168v1 + stat.ME + cs.IT + math.IT + math.PR + math.ST + stat.TH + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jian Ma + + + Standardized Evaluation of Automatic Methods for Perivascular Spaces Segmentation in MRI -- MICCAI 2024 Challenge Results + https://arxiv.org/abs/2512.18197 + arXiv:2512.18197v1 Announce Type: cross +Abstract: Perivascular spaces (PVS), when abnormally enlarged and visible in magnetic resonance imaging (MRI) structural sequences, are important imaging markers of cerebral small vessel disease and potential indicators of neurodegenerative conditions. Despite their clinical significance, automatic enlarged PVS (EPVS) segmentation remains challenging due to their small size, variable morphology, similarity with other pathological features, and limited annotated datasets. This paper presents the EPVS Challenge organized at MICCAI 2024, which aims to advance the development of automated algorithms for EPVS segmentation across multi-site data. 
We provided a diverse dataset comprising 100 training, 50 validation, and 50 testing scans collected from multiple international sites (UK, Singapore, and China) with varying MRI protocols and demographics. All annotations followed the STRIVE protocol to ensure standardized ground truth and covered the full brain parenchyma. Seven teams completed the full challenge, implementing various deep learning approaches primarily based on U-Net architectures with innovations in multi-modal processing, ensemble strategies, and transformer-based components. Performance was evaluated using dice similarity coefficient, absolute volume difference, recall, and precision metrics. The winning method employed MedNeXt architecture with a dual 2D/3D strategy for handling varying slice thicknesses. The top solutions showed relatively good performance on test data from seen datasets, but significant degradation of performance was observed on the previously unseen Shanghai cohort, highlighting cross-site generalization challenges due to domain shift. This challenge establishes an important benchmark for EPVS segmentation methods and underscores the need for the continued development of robust algorithms that can generalize in diverse clinical settings. + oai:arXiv.org:2512.18197v1 + q-bio.QM + cs.CV + eess.IV + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Yilei Wu, Yichi Zhang, Zijian Dong, Fang Ji, An Sen Tan, Gifford Tan, Sizhao Tang, Huijuan Chen, Zijiao Chen, Eric Kwun Kei Ng, Jose Bernal, Hang Min, Ying Xia, Ines Vati, Liz Cooper, Xiaoyu Hu, Yuchen Pei, Yutao Ma, Victor Nozais, Ami Tsuchida, Pierre-Yves Herv\'e, Philippe Boutinaud, Marc Joliot, Junghwa Kang, Wooseung Kim, Dayeon Bak, Rachika E. Hamadache, Valeriia Abramova, Xavier Llad\'o, Yuntao Zhu, Zhenyu Gong, Xin Chen, John McFadden, Pek Lan Khong, Roberto Duarte Coello, Hongwei Bran Li, Woon Puay Koh, Christopher Chen, Joanna M. Wardlaw, Maria del C. 
Vald\'es Hern\'andez, Juan Helen Zhou + + + SLIM: Semantic-based Low-bitrate Image compression for Machines by leveraging diffusion + https://arxiv.org/abs/2512.18200 + arXiv:2512.18200v1 Announce Type: cross +Abstract: In recent years, the demand for image compression models for machine vision has increased dramatically. However, the training frameworks of image compression still focus on human vision, preserving excessive perceptual details, and thus are limited in optimally reducing the bits per pixel when performing machine vision tasks. In this paper, we propose Semantic-based Low-bitrate Image compression for Machines by leveraging diffusion, termed SLIM. This is a new, effective training framework of image compression for machine vision, using a pretrained latent diffusion model. The compressor model of our method focuses only on the Region-of-Interest (RoI) areas for machine vision in the image latent, to compress it compactly. Then the pretrained Unet model enhances the decompressed latent, utilizing a RoI-focused text caption which contains semantic information of the image. Therefore, SLIM is able to focus on RoI areas of the image without any guide mask at the inference stage, achieving a low bitrate when compressing. SLIM is also able to enhance a decompressed latent by denoising steps, so the final reconstructed image from the enhanced latent can be optimized for the machine vision task while still containing perceptual details for human vision. Experimental results show that SLIM achieves a higher classification accuracy under the same bits-per-pixel condition, compared to conventional image compression models for machines. Code will be released upon acceptance. 
+ oai:arXiv.org:2512.18200v1 + eess.IV + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Hyeonjin Lee, Jun-Hyuk Kim, Jong-Seok Lee + + + CrystalFormer-CSP: Thinking Fast and Slow for Crystal Structure Prediction + https://arxiv.org/abs/2512.18251 + arXiv:2512.18251v1 Announce Type: cross +Abstract: Crystal structure prediction is a fundamental problem in materials science. We present CrystalFormer-CSP, an efficient framework that unifies data-driven heuristic and physics-driven optimization approaches to predict stable crystal structures for given chemical compositions. The approach combines pretrained generative models for space-group-informed structure generation and a universal machine learning force field for energy minimization. Reinforcement fine-tuning can be employed to further boost the accuracy of the framework. We demonstrate the effectiveness of CrystalFormer-CSP on benchmark problems and showcase its usage via web interface and language model integration. + oai:arXiv.org:2512.18251v1 + cond-mat.mtrl-sci + cs.LG + physics.comp-ph + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhendong Cao, Shigang Ou, Lei Wang + + + TICL+: A Case Study On Speech In-Context Learning for Children's Speech Recognition + https://arxiv.org/abs/2512.18263 + arXiv:2512.18263v1 Announce Type: cross +Abstract: Children's speech recognition remains challenging due to substantial acoustic and linguistic variability, limited labeled data, and significant differences from adult speech. Speech foundation models can address these challenges through Speech In-Context Learning (SICL), allowing adaptation to new domains without fine-tuning. However, the effectiveness of SICL depends on how in-context examples are selected. We extend an existing retrieval-based method, Text-Embedding KNN for SICL (TICL), introducing an acoustic reranking step to create TICL+. 
This extension prioritizes examples that are both semantically and acoustically aligned with the test input. Experiments on four children's speech corpora show that TICL+ achieves up to a 53.3% relative word error rate reduction over zero-shot performance and 37.6% over baseline TICL, highlighting the value of combining semantic and acoustic information for robust, scalable ASR in children's speech. + oai:arXiv.org:2512.18263v1 + eess.AS + cs.AI + cs.CL + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Haolong Zheng, Yekaterina Yegorova, Mark Hasegawa-Johnson + + + Evolutionary BP+OSD Decoding for Low-Latency Quantum Error Correction + https://arxiv.org/abs/2512.18273 + arXiv:2512.18273v1 Announce Type: cross +Abstract: We propose an evolutionary belief propagation (EBP) decoder for quantum error correction, which incorporates trainable weights into the BP algorithm and optimizes them via the differential evolution algorithm. This approach enables end-to-end optimization of the EBP combined with ordered statistics decoding (OSD). Experimental results on surface codes and quantum low-density parity-check codes show that EBP+OSD achieves better decoding performance and lower computational complexity than BP+OSD, particularly under strict low latency constraints (within 5 BP iterations). + oai:arXiv.org:2512.18273v1 + quant-ph + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Hee-Youl Kwak, Seong-Joon Park, Hyunwoo Jung, Jeongseok Ha, Jae-Won Kim + + + Cyber Risk Scoring with QUBO: A Quantum and Hybrid Benchmark Study + https://arxiv.org/abs/2512.18305 + arXiv:2512.18305v1 Announce Type: cross +Abstract: Assessing cyber risk in complex IT infrastructures poses significant challenges due to the dynamic, interconnected nature of digital systems. 
Traditional methods often fall short, relying on static and largely qualitative models that do not scale with system complexity and fail to capture systemic interdependencies. In this work, we introduce a novel quantitative approach to cyber risk assessment based on Quadratic Unconstrained Binary Optimization (QUBO), a formulation compatible with both classical computing and quantum annealing. We demonstrate the capabilities of our approach using a realistic 255-node layered infrastructure, showing how risk spreads in non-trivial patterns that are difficult to identify through visual inspection alone. To assess scalability, we further conduct extensive experiments on networks of up to 1000 nodes, comparing classical, quantum, and hybrid classical-quantum workflows. Our results reveal that although quantum annealing produces solutions comparable to classical heuristics, its potential advantages are significantly hindered by the embedding overhead required to map the densely connected cyber-risk QUBO onto the limited connectivity of current quantum hardware. By contrast, hybrid quantum-classical solvers avoid this bottleneck and therefore emerge as a promising option, combining competitive scaling with an improved ability to explore the solution space and identify more stable risk configurations. Overall, this work delivers two main advances. First, we present a rigorous, tunable, and generalizable mathematical model for cyber risk that can be adapted to diverse infrastructures and domains through flexible parameterization. Second, we provide the first comparative study of classical, quantum, and hybrid approaches for cyber risk scoring at scale, highlighting the emerging potential of hybrid quantum-classical methods for large-scale infrastructures. 
+ oai:arXiv.org:2512.18305v1 + quant-ph + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Remo Marini, Riccardo Arpe + + + On Efficient Adjustment in Causal Graphs + https://arxiv.org/abs/2512.18315 + arXiv:2512.18315v1 Announce Type: cross +Abstract: Observational studies in fields such as epidemiology often rely on covariate adjustment to estimate causal effects. Classical graphical criteria, like the back-door criterion and the generalized adjustment criterion, are powerful tools for identifying valid adjustment sets in directed acyclic graphs (DAGs). However, these criteria are not directly applicable to summary causal graphs (SCGs), which are abstractions of DAGs commonly used in dynamic systems. In SCGs, each node typically represents an entire time series and may involve cycles, making classical criteria inapplicable for identifying causal effects. Recent work established complete conditions for determining whether the micro causal effect of a treatment or an exposure $X_{t-\gamma}$ on an outcome $Y_t$ is identifiable via covariate adjustment in SCGs, under the assumption of no hidden confounding. However, these identifiability conditions have two main limitations. First, they are complex, relying on cumbersome definitions and requiring the enumeration of multiple paths in the SCG, which can be computationally expensive. Second, when these conditions are satisfied, they only provide two valid adjustment sets, limiting flexibility in practical applications. In this paper, we propose an equivalent but simpler formulation of those identifiability conditions and introduce a new criterion that identifies a broader class of valid adjustment sets in SCGs. Additionally, we characterize the quasi-optimal adjustment set among these, i.e., the one that minimizes the asymptotic variance of the causal effect estimator. 
Our contributions offer both theoretical advancement and practical tools for more flexible and efficient causal inference in abstracted causal graphs. + oai:arXiv.org:2512.18315v1 + stat.ME + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Isabela Belciug, Simon Ferreira, Charles K. Assaad + + + Deterministic Reconstruction of Tennis Serve Mechanics: From Aerodynamic Constraints to Internal Torques via Rigid-Body Dynamics + https://arxiv.org/abs/2512.18320 + arXiv:2512.18320v1 Announce Type: cross +Abstract: Most conventional studies on tennis serve biomechanics rely on phenomenological observations comparing professional and amateur players or, more recently, on AI-driven statistical analyses of motion data. While effective at describing \textit{what} elite players do, these approaches often fail to explain \textit{why} such motions are physically necessary from a mechanistic perspective. This paper proposes a deterministic, physics-based approach to the tennis serve using a 12-degree-of-freedom multi-segment model of the human upper body. Rather than fitting the model to motion capture data, we solve the inverse kinematics problem via trajectory optimization to rigorously satisfy the aerodynamic boundary conditions required for Flat, Slice, and Kick serves. We subsequently perform an inverse dynamics analysis based on the Principle of Virtual Power to compute the net joint torques. The simulation results reveal that while the kinematic trajectories for different serves may share visual similarities, the underlying kinetic profiles differ drastically. A critical finding is that joints exhibiting minimal angular displacement (kinematically ``quiet'' phases), particularly at the wrist, require substantial and highly time-varying torques to counteract gravitational loading and dynamic coupling effects. 
By elucidating the dissociation between visible kinematics and internal kinetics, this study provides a first-principles framework for understanding the mechanics of the tennis serve, moving beyond simple imitation of elite techniques. + oai:arXiv.org:2512.18320v1 + physics.app-ph + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Sun-Hyun Youn + + + PSI3D: Plug-and-Play 3D Stochastic Inference with Slice-wise Latent Diffusion Prior + https://arxiv.org/abs/2512.18367 + arXiv:2512.18367v1 Announce Type: cross +Abstract: Diffusion models are highly expressive image priors for Bayesian inverse problems. However, most diffusion models cannot operate on large-scale, high-dimensional data due to high training and inference costs. In this work, we introduce a Plug-and-play algorithm for 3D stochastic inference with latent diffusion prior (PSI3D) to address massive ($1024\times 1024\times 128$) volumes. Specifically, we formulate a Markov chain Monte Carlo approach to reconstruct each two-dimensional (2D) slice by sampling from a 2D latent diffusion model. To enhance inter-slice consistency, we also incorporate total variation (TV) regularization stochastically along the concatenation axis. We evaluate our performance on optical coherence tomography (OCT) super-resolution. Our method significantly improves reconstruction quality for large-scale scientific imaging compared to traditional and learning-based baselines, while providing robust and credible reconstructions. + oai:arXiv.org:2512.18367v1 + eess.IV + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Wenhan Guo, Jinglun Yu, Yaning Wang, Jin U. 
Kang, Yu Sun + + + Phoneme-based speech recognition driven by large language models and sampling marginalization + https://arxiv.org/abs/2512.18371 + arXiv:2512.18371v1 Announce Type: cross +Abstract: Recently, the Large Language Model-based Phoneme-to-Grapheme (LLM-P2G) method has shown excellent performance in speech recognition tasks and has become a feasible direction to replace the traditional WFST decoding method. This framework takes into account both recognition accuracy and system scalability through two-stage modeling of phoneme prediction and text generation. However, the existing LLM-P2G adopts the Top-K Marginalized (TKM) training strategy, and its candidate phoneme sequences rely on beam search generation, which has problems such as insufficient path diversity, low training efficiency, and high resource overhead. To this end, this paper proposes a sampling marginalized training strategy (Sampling-K Marginalized, SKM), which replaces beam search with random sampling to generate candidate paths, improving marginalized modeling and training efficiency. Experiments were conducted on Polish and German datasets, and the results showed that SKM further improved the model learning convergence speed and recognition performance while maintaining the complexity of the model. Comparative experiments with a speech recognition method that uses a projector combined with a large language model (SpeechLLM) also show that the SKM-driven LLM-P2G has more advantages in recognition accuracy and structural simplicity. The study verified the practical value and application potential of this method in cross-language speech recognition systems. 
+ oai:arXiv.org:2512.18371v1 + eess.AS + cs.SD + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Te Ma, Nanjie Li, Hao Huang, Zhijian Ou + + + Prioritized Constraints in Optimization-Based Control + https://arxiv.org/abs/2512.18458 + arXiv:2512.18458v1 Announce Type: cross +Abstract: We provide theoretical foundations and computational tools for the systematic design of optimization-based control laws with constraints that have different priorities. By introducing the concept of prioritized intersections, we extend and unify previous work on the topic. Moreover, to enable the use of prioritized intersection in real-time applications, we propose an efficient solver for forming such intersections for polyhedral constraints. The solver in question is a tailored implementation of a dual active-set quadratic programming solver that leverages the particular problem structure of the optimization problems arising for prioritized intersections. The method is validated in a real-time MPC application for autonomous driving, where it successfully resolves six different levels of conflicting constraints, confirming its efficiency and practicality for control. Furthermore, we show that the proposed solver outperforms existing solvers for hierarchical quadratic programming, making it relevant beyond control applications. + oai:arXiv.org:2512.18458v1 + math.OC + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Daniel Arnstr\"om, Gianluca Garofalo + + + Multifractality Analysis of Single Qubit Quantum Circuit Outcomes for a Superconducting Quantum Computer + https://arxiv.org/abs/2512.18491 + arXiv:2512.18491v1 Announce Type: cross +Abstract: We present a multifractal analysis of time series data obtained by repeatedly running a single-qubit quantum circuit on IBM superconducting quantum computers, in which the measurement outcomes are recorded as the number of zeros. 
By applying advanced signal processing techniques, including the wavelet leader method and multifractal detrended fluctuation analysis, we uncover strong multifractal behavior in the output data. This finding indicates that the temporal fluctuations inherent to quantum circuit outputs are not purely random but exhibit complex scaling properties across multiple time scales. The multifractal nature of the signal suggests the possibility of tailoring filtering strategies that specifically target these scaling features to effectively mitigate noise in quantum computations. Our results not only contribute to a deeper understanding of the dynamical properties of quantum systems under repeated measurement but also provide a promising avenue for improving noise reduction techniques in near-term quantum devices. + oai:arXiv.org:2512.18491v1 + quant-ph + cs.ET + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Mohammadreza Saghafi, Lamine Mili, Karlton Wirsing + + + Cyber Threat Detection Enabled by Quantum Computing + https://arxiv.org/abs/2512.18493 + arXiv:2512.18493v1 Announce Type: cross +Abstract: Threat detection models in cybersecurity must keep up with shifting traffic, strict feature budgets, and noisy hardware, yet even strong classical systems still miss rare or borderline attacks when the data distribution drifts. Small, near-term quantum processors are now available, but existing work rarely shows whether quantum components can improve end-to-end detection under these unstable, resource constrained conditions rather than just adding complexity. We address this gap with a hybrid architecture that uses a compact multilayer perceptron to compress security data and then routes a few features to 2-4 qubit quantum heads implemented as quantum support vector machines and variational circuits. 
Under matched preprocessing and training budgets, we benchmark these hybrids against tuned classical baselines on two security tasks, network intrusion detection on NSL-KDD and spam filtering on Ling-Spam datasets, and then deploy the best 4-qubit quantum SVM to an IBM Quantum device with noise-aware execution (readout mitigation and dynamical decoupling). Across both datasets, shallow quantum heads consistently match, and on difficult near-boundary cases modestly reduce, missed attacks and false alarms relative to classical models using the same features. Hardware results track simulator behavior closely enough that the remaining gap is dominated by device noise rather than model design. Taken together, the study shows that even on small, noisy chips, carefully engineered quantum components can already function as competitive, budget-aware elements in practical threat detection pipelines. + oai:arXiv.org:2512.18493v1 + quant-ph + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Zisheng Chen, Zirui Zhu, Xiangyang Li + + + The Illusion of Consistency: Selection-Induced Bias in Gated Kalman Innovation Statistics + https://arxiv.org/abs/2512.18508 + arXiv:2512.18508v1 Announce Type: cross +Abstract: Validation gating is a fundamental component of classical Kalman-based tracking systems. Only measurements whose normalized innovation squared (NIS) falls below a prescribed threshold are considered for state update. While this procedure is statistically motivated by the chi-square distribution, it implicitly replaces the unconditional innovation process with a conditionally observed one, restricted to the validation event. This paper shows that innovation statistics computed after gating converge to gate-conditioned rather than nominal quantities. 
Under classical linear--Gaussian assumptions, we derive exact expressions for the first- and second-order moments of the innovation conditioned on ellipsoidal gating, and show that gating induces a deterministic, dimension-dependent contraction of the innovation covariance. The analysis is extended to NN association, which is shown to act as an additional statistical selection operator. We prove that selecting the minimum-norm innovation among multiple in-gate measurements introduces an unavoidable energy contraction, implying that nominal innovation statistics cannot be preserved under nontrivial gating and association. Closed-form results in the two-dimensional case quantify the combined effects and illustrate their practical significance. + oai:arXiv.org:2512.18508v1 + stat.ME + cs.AI + cs.SY + eess.SP + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Barak Or + + + Pushing the limits of one-dimensional NMR spectroscopy for automated structure elucidation using artificial intelligence + https://arxiv.org/abs/2512.18531 + arXiv:2512.18531v1 Announce Type: cross +Abstract: One-dimensional NMR spectroscopy is one of the most widely used techniques for the characterization of organic compounds and natural products. For molecules with up to 36 non-hydrogen atoms, the number of possible structures has been estimated to range from $10^{20} - 10^{60}$. The task of determining the structure (formula and connectivity) of a molecule of this size using only its one-dimensional $^1$H and/or $^{13}$C NMR spectrum, i.e. de novo structure generation, thus appears completely intractable. Here we show how it is possible to achieve this task for systems with up to 40 non-hydrogen atoms across the full elemental coverage typically encountered in organic chemistry (C, N, O, H, P, S, Si, B, and the halogens) using a deep learning framework, thus covering a vast portion of the drug-like chemical space. 
Leveraging insights from natural language processing, we show that our transformer-based architecture predicts the correct molecule with 55.2% accuracy within the first 15 predictions using only the $^1$H and $^{13}$C NMR spectra, thus overcoming the combinatorial growth of the chemical space while also being extensible to experimental data via fine-tuning. + oai:arXiv.org:2512.18531v1 + physics.chem-ph + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Frank Hu, Jonathan M. Tubb, Dimitris Argyropoulos, Sergey Golotvin, Mikhail Elyashberg, Grant M. Rotskoff, Matthew W. Kanan, Thomas E. Markland + + + Toward Live Noise Fingerprinting in Quantum Software Engineering + https://arxiv.org/abs/2512.18667 + arXiv:2512.18667v1 Announce Type: cross +Abstract: Noise is a major bottleneck in today's quantum computing, stemming from decoherence, gate imperfections and other hardware limitations. Accurate noise fingerprints are essential, yet undocumented noise model differences between Quantum Ecosystems undermine core functionality, such as compilation, development and debugging, offering limited transferability and support for quantum software engineering (QSE) tasks. We propose a new research direction: live empirical noise fingerprinting as a lightweight QSE-oriented "noise fingerprinting". Though explored in physics as device-level diagnostics, we reposition them as a QSE paradigm: we propose leveraging classical shadow tomography to enable a new generation of techniques. As a first step, we introduce SimShadow, which prepares reference states, applies shadow-tomography-inspired estimation and constructs deviation fingerprints. Initial experiments uncover systematic discrepancies between platforms (e.g. Frobenius distances up to 7.39) at up to 2.5x10^6 lower cost than traditional methods. 
SimShadow opens new directions for noise-aware compilation, transpilation, cross-platform validation, error mitigation, and formal methods in QSE. + oai:arXiv.org:2512.18667v1 + quant-ph + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Avner Bensoussan, Elena Chachkarova, Karine Even-Mendoza, Sophie Fortz, Vasileios Klimis + + + Unsupervised Feature Selection via Robust Autoencoder and Adaptive Graph Learning + https://arxiv.org/abs/2512.18720 + arXiv:2512.18720v1 Announce Type: cross +Abstract: Effective feature selection is essential for high-dimensional data analysis and machine learning. Unsupervised feature selection (UFS) aims to simultaneously cluster data and identify the most discriminative features. Most existing UFS methods linearly project features into a pseudo-label space for clustering, but they suffer from two critical limitations: (1) an oversimplified linear mapping that fails to capture complex feature relationships, and (2) an assumption of uniform cluster distributions, ignoring outliers prevalent in real-world data. To address these issues, we propose the Robust Autoencoder-based Unsupervised Feature Selection (RAEUFS) model, which leverages a deep autoencoder to learn nonlinear feature representations while inherently improving robustness to outliers. We further develop an efficient optimization algorithm for RAEUFS. Extensive experiments demonstrate that our method outperforms state-of-the-art UFS approaches in both clean and outlier-contaminated data settings. 
+ oai:arXiv.org:2512.18720v1 + stat.ML + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Feng Yu, MD Saifur Rahman Mazumder, Ying Su, Oscar Contreras Velasco + + + RIS-Enabled Smart Wireless Environments: Fundamentals and Distributed Optimization + https://arxiv.org/abs/2512.18788 + arXiv:2512.18788v1 Announce Type: cross +Abstract: This chapter overviews the concept of Smart Wireless Environments (SWEs) motivated by the emerging technology of Reconfigurable Intelligent Surfaces (RISs). The operating principles and state-of-the-art hardware architectures of programmable metasurfaces are first introduced. Subsequently, key performance objectives and use cases of RIS-enabled SWEs, including spectral and energy efficiency, physical-layer security, integrated sensing and communications, as well as the emerging paradigm of over-the-air computing, are discussed. Focusing on the recent trend of Beyond-Diagonal (BD) RISs, two distributed designs of respective SWEs are presented. The first deals with a multi-user Multiple-Input Single-Output (MISO) system operating within the area of influence of a SWE comprising multiple BD-RISs. A hybrid distributed and fusion machine learning framework based on multi-branch attention-based convolutional Neural Networks (NNs), NN parameter sharing, and neuroevolutionary training is presented, which enables online mapping of channel realizations to the BD-RIS configurations as well as the multi-user transmit precoder. Performance evaluation results showcase that the distributedly optimized RIS-enabled SWE achieves near-optimal sum-rate performance with low online computational complexity. The second design focuses on the wideband interference MISO broadcast channel, where each base station exclusively controls one BD-RIS to serve its assigned group of users. 
A cooperative optimization framework that jointly designs the base station transmit precoders as well as the tunable capacitances and switch matrices of all metasurfaces is presented. Numerical results demonstrating the superior sum-rate performance of the designed RIS-enabled SWE for multi-cell MISO networks over benchmark schemes, considering non-cooperative configuration and conventional diagonal metasurfaces, are presented. + oai:arXiv.org:2512.18788v1 + eess.SP + cs.IT + cs.LG + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by-nc-nd/4.0/ + George C. Alexandropoulos, Kostantinos D. Katsanos, George Stamatelis, Ioannis Gavras + + + Merge on workspaces as Hopf algebra Markov chain + https://arxiv.org/abs/2512.18861 + arXiv:2512.18861v1 Announce Type: cross +Abstract: We study the dynamical properties of a Hopf algebra Markov chain with state space the binary rooted forests with labelled leaves. This Markovian dynamical system describes the core computational process of structure formation and transformation in syntax via the Merge operation, according to Chomsky's Minimalism model of generative linguistics. The dynamics decomposes into an ergodic dynamical system with uniform stationary distribution, given by the action of Internal Merge, while the contributions of External Merge and (a minimal form of) Sideward Merge reduce to a simpler Markov chain with state space the set of partitions and with combinatorial weights. The Sideward Merge part of the dynamics prevents convergence to fully formed connected structures (trees), unless the different forms of Merge are weighted by a cost function, as predicted by linguistic theory. 
Results on the asymptotic behavior of the Perron-Frobenius eigenvalue and eigenvector in this weighted case, obtained in terms of an associated Perron-Frobenius problem in the tropical semiring, show that the usual cost functions (Minimal Search and Resource Restrictions) proposed in the linguistic literature do not suffice to obtain convergence to the tree structures, while an additional optimization property based on the Shannon entropy achieves the expected result for the dynamics. We also comment on the introduction of continuous parameters related to semantic embedding and other computational models, and also on some filtering of the dynamics by coloring rules that model the linguistic filtering by theta roles and phase structure, and on parametric variation and the process of parameter setting in Externalization. + oai:arXiv.org:2512.18861v1 + math.DS + cs.CL + math.QA + math.RA + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Matilde Marcolli, David Skigin + + + Structure-Preserving Optimal Control of Open Quantum Systems via a Discrete Contact PMP + https://arxiv.org/abs/2512.18879 + arXiv:2512.18879v1 Announce Type: cross +Abstract: We develop a discrete Pontryagin Maximum Principle (PMP) for controlled open quantum systems governed by Lindblad dynamics, and introduce a second--order \emph{contact Lie--group variational integrator} (contact LGVI) that preserves both the CPTP (completely positive and trace--preserving) structure of the Lindblad flow and the contact geometry underlying the discrete PMP. A type--II discrete contact generating function produces a strict discrete contactomorphism under which the state, costate, and cost propagate in exact agreement with the variational structure of the discrete contact PMP. + We apply this framework to the optimal control of a dissipative qubit and compare it with a non--geometric explicit RK2 discretization of the Lindblad equation. 
Although both schemes have the same formal order, the RK2 method accumulates geometric drift (loss of trace, positivity violations, and breakdown of the discrete contact form) that destabilizes PMP shooting iterations, especially under strong dissipation or long horizons. In contrast, the contact LGVI maintains exact CPTP structure and discrete contact geometry step by step, yielding stable, physically consistent, and geometrically faithful optimal control trajectories. + oai:arXiv.org:2512.18879v1 + quant-ph + cs.NA + math-ph + math.MP + math.NA + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Leonardo Colombo + + + EuroHPC SPACE CoE: Redesigning Scalable Parallel Astrophysical Codes for Exascale + https://arxiv.org/abs/2512.18883 + arXiv:2512.18883v1 Announce Type: cross +Abstract: High Performance Computing (HPC) based simulations are crucial in Astrophysics and Cosmology (A&C), helping scientists investigate and understand complex astrophysical phenomena. Taking advantage of exascale computing capabilities is essential for these efforts. However, the unprecedented architectural complexity of exascale systems impacts legacy codes. The SPACE Centre of Excellence (CoE) aims to re-engineer key astrophysical codes to tackle new computational challenges by adopting innovative programming paradigms and software (SW) solutions. SPACE brings together scientists, code developers, HPC experts, hardware (HW) manufacturers, and SW developers. This collaboration enhances exascale A&C applications, promoting the use of exascale and post-exascale computing capabilities. Additionally, SPACE addresses high-performance data analysis for the massive data outputs from exascale simulations and modern observations, using machine learning (ML) and visualisation tools. 
The project facilitates application deployment across platforms by focusing on code repositories and data sharing, integrating European astrophysical communities around exascale computing with standardised SW and data protocols. + oai:arXiv.org:2512.18883v1 + astro-ph.IM + cs.DC + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by-nc-nd/4.0/ + 10.1145/3706594.3728892 + CF '25 Companion: Proceedings of the 22nd ACM International Conference on Computing Frontiers: Workshops and Special Sessions, Pages 177 - 184, Published: 06 July 2025 + Nitin Shukla, Alessandro Romeo, Caterina Caravita, Lubomir Riha, Ondrej Vysocky, Petr Strakos, Milan Jaros, Jo\~ao Barbosa, Radim Vavrik, Andrea Mignone, Marco Rossazza, Stefano Truzzi, Vittoria Berta, Iacopo Colonnelli, Doriana Medi\'c, Elisabetta Boella, Daniele Gregori, Eva Sciacca, Luca Tornatore, Giuliano Taffoni, Pranab J. Deka, Fabio Bacchini, Rostislav-Paul Wilhelm, Georgios Doulis, Khalil Pierre, Luciano Rezzolla, Tine Colman, Beno\^it Commer\c{c}on, Othman Bouizi, Matthieu Kuhn, Erwan Raffin, Marc Sergent, Robert Wissing, Guillermo Marin, Klaus Dolag, Geray S. Karademir, Gino Perna, Marisa Zanotti, Sebastian Trujillo-Gomez + + + Structural Reinforcement Learning for Heterogeneous Agent Macroeconomics + https://arxiv.org/abs/2512.18892 + arXiv:2512.18892v1 Announce Type: cross +Abstract: We present a new approach to formulating and solving heterogeneous agent models with aggregate risk. We replace the cross-sectional distribution with low-dimensional prices as state variables and let agents learn equilibrium price dynamics directly from simulated paths. To do so, we introduce a structural reinforcement learning (SRL) method which treats prices via simulation while exploiting agents' structural knowledge of their own individual dynamics. 
Our SRL method yields a general and highly efficient global solution method for heterogeneous agent models that sidesteps the Master equation and handles problems traditional methods struggle with, in particular nontrivial market-clearing conditions. We illustrate the approach in the Krusell-Smith model, the Huggett model with aggregate shocks, and a HANK model with a forward-looking Phillips curve, all of which we solve globally within minutes. + oai:arXiv.org:2512.18892v1 + econ.TH + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yucheng Yang, Chiyuan Wang, Andreas Schaab, Benjamin Moll + + + On Conditional Stochastic Interpolation for Generative Nonlinear Sufficient Dimension Reduction + https://arxiv.org/abs/2512.18971 + arXiv:2512.18971v1 Announce Type: cross +Abstract: Identifying low-dimensional sufficient structures in nonlinear sufficient dimension reduction (SDR) has long been a fundamental yet challenging problem. Most existing methods lack theoretical guarantees of exhaustiveness in identifying lower dimensional structures, either at the population level or at the sample level. We tackle this issue by proposing a new method, generative sufficient dimension reduction (GenSDR), which leverages modern generative models. We show that GenSDR is able to fully recover the information contained in the central $\sigma$-field at both the population and sample levels. In particular, at the sample level, we establish a consistency property for the GenSDR estimator from the perspective of conditional distributions, capitalizing on the distributional learning capabilities of deep generative models. Moreover, by incorporating an ensemble technique, we extend GenSDR to accommodate scenarios with non-Euclidean responses, thereby substantially broadening its applicability. 
Extensive numerical results demonstrate the outstanding empirical performance of GenSDR and highlight its strong potential for addressing a wide range of complex, real-world tasks. + oai:arXiv.org:2512.18971v1 + stat.ML + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Shuntuo Xu, Zhou Yu, Jian Huang + + + Evolutionary Cooperation with Game Transitions via Markov Decision Chain in Networked Population + https://arxiv.org/abs/2512.18972 + arXiv:2512.18972v1 Announce Type: cross +Abstract: Individual cooperative strategy influences the surrounding dynamic population, which in turn affects cooperative strategy. To better model this phenomenon, we develop a Markov decision chain based game transitions model and examine the dynamic transitions in game states of individuals within a network and their impact on the strategy's evolution. Additionally, we extend single-round strategy imitation to multiple rounds to better capture players' potential non-rational behavior. Using intensive simulations, we explore the effects of transition probabilities and game parameters on game transitions and cooperation. Our study finds that strategy-driven game transitions promote cooperation, and increasing the transition rates of Markov decision chains can significantly accelerate this process. By designing different Markov decision chains, these results provide simulation based guidance for practical applications in swarm intelligence, such as strategic collaboration. 
+ oai:arXiv.org:2512.18972v1 + physics.soc-ph + cs.SI + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1016/j.apm.2025.116710 + Applied Mathematical Modelling,2025,116710 + Chaoyang Luo, Yuji Zhang, Minyu Feng, Attila Szolnoki + + + Hybrid Stochastic Functional Differential Equations with Infinite Delay: Approximations and Numerics + https://arxiv.org/abs/2512.18990 + arXiv:2512.18990v1 Announce Type: cross +Abstract: This paper investigates whether the solution of a hybrid stochastic functional differential equation (SFDE) with infinite delay can be approximated by the solution of the corresponding hybrid SFDE with finite delay. A positive result is established for a large class of highly nonlinear hybrid SFDEs with infinite delay. Our new theory makes it possible to numerically approximate the solution of the hybrid SFDE with infinite delay, via the numerical solution of the corresponding hybrid SFDE with finite delay. + oai:arXiv.org:2512.18990v1 + math.PR + cs.NA + math.NA + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Guozhen Li, Xiaoyue Li, Xuerong Mao, Guoting Song + + + The stability of log-supermodularity under convolution + https://arxiv.org/abs/2512.19003 + arXiv:2512.19003v1 Announce Type: cross +Abstract: We study the behavior of log-supermodular functions under convolution. In particular we show that log-concave product densities preserve log-supermodularity, confirming, in the special case of the standard Gaussian density, a conjecture of Zartash and Robeva. Additionally, this stability gives a ``conditional'' entropy power inequality for log-supermodular random variables. We also compare the Ahlswede-Daykin four function theorem and a recent four function version of the Prekopa-Leindler inequality due to Cordero-Erausquin and Maurey, and give transport proofs for the two theorems.
In the Prekopa-Leindler case, the proof gives a generalization that seems to be new, which interpolates the classical three and the recent four function versions. + oai:arXiv.org:2512.19003v1 + math.PR + cs.IT + math.FA + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Mokshay Madiman, James Melbourne, Cyril Roberto + + + PalpAid: Multimodal Pneumatic Tactile Sensor for Tissue Palpation + https://arxiv.org/abs/2512.19010 + arXiv:2512.19010v1 Announce Type: cross +Abstract: The tactile properties of tissue, such as elasticity and stiffness, often play an important role in surgical oncology when identifying tumors and pathological tissue boundaries. Though extremely valuable, robot-assisted surgery comes at the cost of reduced sensory information to the surgeon; typically, only vision is available. Sensors proposed to overcome this sensory desert are often bulky, complex, and incompatible with the surgical workflow. We present PalpAid, a multimodal pneumatic tactile sensor equipped with a microphone and pressure sensor, converting contact force into an internal pressure differential. The pressure sensor acts as an event detector, while the auditory signature captured by the microphone assists in tissue delineation. We show the design, fabrication, and assembly of sensory units with characterization tests to show robustness to use, inflation-deflation cycles, and integration with a robotic system. Finally, we show the sensor's ability to classify 3D-printed hard objects with varying infills and soft ex vivo tissues. Overall, PalpAid aims to fill the sensory gap intelligently and allow improved clinical decision-making. + oai:arXiv.org:2512.19010v1 + eess.SP + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Devi Yuliarti, Ravi Prakash, Hiu Ching Cheung, Amy Strong, Patrick J. 
Codd, Shan Lin + + + On Factoring and Power Divisor Problems via Rank-3 Lattices and the Second Vector + https://arxiv.org/abs/2512.19076 + arXiv:2512.19076v1 Announce Type: cross +Abstract: We propose a deterministic algorithm based on Coppersmith's method that employs a rank-3 lattice to address factoring-related problems. An interesting aspect of our approach is that we utilize the second vector in the LLL-reduced basis to avoid trivial collisions in the Baby-step Giant-step method, rather than the shortest vector as is commonly used in the literature. Our results are as follows: + 1. Compared to the result by Harvey and Hittmeir (Math. Comp. 91 (2022), 1367 - 1379), who achieved a complexity of O( N^(1/5) log^(16/5) N / (log log N)^(3/5)) for factoring a semiprime N = pq, we demonstrate that in the balanced p and q case, the complexity can be improved to O( N^(1/5) log^(13/5) N / (log log N)^(3/5) ). + 2. For factoring sums and differences of powers, that is, numbers of the form N = a^n plus or minus b^n, we improve Hittmeir's result (Math. Comp. 86 (2017), 2947 - 2954) from O( N^(1/4) log^(3/2) N ) to O( N^(1/5) log^(13/5) N ). + 3. For the problem of finding r-power divisors, that is, finding all integers p such that p^r divides N, Harvey and Hittmeir (Proceedings of ANTS XV, Research in Number Theory 8 (2022), no. 4, Paper No. 94) recently directly applied Coppersmith's method and achieved a complexity of O( N^(1/(4r)) log^(10+epsilon) N / r^3 ). By using faster LLL-type algorithms and sieving on small primes, we improve their result to O( N^(1/(4r)) log^(7+3 epsilon) N / ((log log N minus log(4r)) r^(2+epsilon)) ). The worst-case running time for their algorithm occurs when N = p^r q with q on the order of N^(1/2). By focusing on this case and employing our rank-3 lattice approach, we achieve a complexity of O( r^(1/4) N^(1/(4r)) log^(5/2) N ). + In conclusion, we offer a new perspective on these problems, which we hope will provide further insights. 
+ oai:arXiv.org:2512.19076v1 + math.NT + cs.DS + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Yiming Gao, Yansong Feng, Honggang Hu, Yanbin Pan + + + Explicit and Non-asymptotic Query Complexities of Rank-Based Zeroth-order Algorithm on Stochastic Smooth Functions + https://arxiv.org/abs/2512.19104 + arXiv:2512.19104v1 Announce Type: cross +Abstract: Zeroth-order (ZO) optimization with ordinal feedback has emerged as a fundamental problem in modern machine learning systems, particularly in human-in-the-loop settings such as reinforcement learning from human feedback, preference learning, and evolutionary strategies. While rank-based ZO algorithms enjoy strong empirical success and robustness properties, their theoretical understanding, especially under stochastic objectives and standard smoothness assumptions, remains limited. In this paper, we study rank-based zeroth-order optimization for stochastic functions where only ordinal feedback of the stochastic function is available. We propose a simple and computationally efficient rank-based ZO algorithm. Under standard assumptions including smoothness, strong convexity, and bounded second moments of stochastic gradients, we establish explicit non-asymptotic query complexity bounds for both convex and nonconvex objectives. Notably, our results match the best-known query complexities of value-based ZO algorithms, demonstrating that ordinal information alone is sufficient for optimal query efficiency in stochastic settings. Our analysis departs from existing drift-based and information-geometric techniques, offering new tools for the study of rank-based optimization under noise. These findings narrow the gap between theory and practice and provide a principled foundation for optimization driven by human preferences. 
+ oai:arXiv.org:2512.19104v1 + math.OC + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Haishan Ye + + + Classical billiards can compute + https://arxiv.org/abs/2512.19156 + arXiv:2512.19156v1 Announce Type: cross +Abstract: We show that two-dimensional billiard systems are Turing complete by encoding their dynamics within the framework of Topological Kleene Field Theory. Billiards serve as idealized models of particle motion with elastic reflections and arise naturally as limits of smooth Hamiltonian systems under steep confining potentials. Our results establish the existence of undecidable trajectories in physically natural billiard-type models, including billiard-type models arising in hard-sphere gases and in collision-chain limits of celestial mechanics. + oai:arXiv.org:2512.19156v1 + math.DS + cs.CC + math-ph + math.MP + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Eva Miranda, Isaac Ramos + + + Optical design and characterization of a multi-depth vision simulator + https://arxiv.org/abs/2512.19168 + arXiv:2512.19168v1 Announce Type: cross +Abstract: We present a vision simulator device (Katsim), a compact near-eye optical display designed for assessing postoperative corrected vision, preoperative intraocular lens (IOL) assessment, and objective IOL characterization. The system forms a virtual image using an amplitude-modulated LCoS spatial light modulator (AM-SLM), RGB LED illumination, and a high-speed varifocal lens. In the proposed architecture, the LED illumination and varifocal lens diopter changes are triggered in synchrony with the SLM RGB subframes, rendering three depth planes perceptually simultaneously via high-frequency time-multiplexing. 
Operating at 60 frames per second (fps), the system achieves an effective 180 Hz depth-coded cycle, enabling sharp, multi-depth rendering within a dynamically adjustable depth range from 0.2 m to optical infinity. The system's eyebox is configurable from 1 to 5 mm, while maintaining a fixed spatial location and preserving angular magnification regardless of changes in focus or eyebox size. The designed system features a 9.15-degree field of view. An integrated infrared pupil-tracking module detects non-cataractous regions of the cataractous crystalline lens, and the projected imagery is mechanically steered through those clear zones in real time. The proposed vision simulator supports both subjective simulation of post-surgical vision for patient-specific counseling and objective optical evaluation of IOLs, including resolution and contrast fidelity (e.g., modulation transfer function, contrast transfer function, and defocus curves). By decoupling depth modulation from eyebox position and size, the system offers a modular, portable platform that supports enhanced preoperative planning, personalized IOL selection, objective IOL characterization, and use as a novel research vision tool. + oai:arXiv.org:2512.19168v1 + physics.optics + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Parviz Zolfaghari, Ehsan Varasteh, Koray Kavakli, Arda Gulersoy, Afsun Sahin, Hakan Urey + + + Finite-sample guarantees for data-driven forward-backward operator methods + https://arxiv.org/abs/2512.19172 + arXiv:2512.19172v1 Announce Type: cross +Abstract: We establish finite sample certificates on the quality of solutions produced by data-based forward-backward (FB) operator splitting schemes. 
As frequently happens in stochastic regimes, we consider the problem of finding a zero of the sum of two operators, where one is either unavailable in closed form or computationally expensive to evaluate, and shall therefore be approximated using a finite number of noisy oracle samples. Under the lens of algorithmic stability, we then derive probabilistic bounds on the distance between a true zero and the FB output without making specific assumptions about the underlying data distribution. We show that under weaker conditions ensuring the convergence of FB schemes, stability bounds grow proportionally to the number of iterations. Conversely, stronger assumptions yield stability guarantees that are independent of the iteration count. We then specialize our results to a popular FB stochastic Nash equilibrium seeking algorithm and validate our theoretical bounds on a control problem for smart grids, where the energy price uncertainty is approximated by means of historical data. + oai:arXiv.org:2512.19172v1 + math.OC + cs.LG + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Filippo Fabiani, Barbara Franci + + + Self-Consistent Probability Flow for High-Dimensional Fokker-Planck Equations + https://arxiv.org/abs/2512.19196 + arXiv:2512.19196v1 Announce Type: cross +Abstract: Solving high-dimensional Fokker-Planck (FP) equations is a challenge in computational physics and stochastic dynamics, due to the curse of dimensionality (CoD) and the bottleneck of evaluating second-order diffusion terms. Existing deep learning approaches, such as Physics-Informed Neural Networks (PINNs), face computational challenges as dimensionality increases, driven by the $O(D^2)$ complexity of automatic differentiation for second-order derivatives. 
While recent probability flow approaches bypass this by learning score functions or matching velocity fields, they often involve serial computational operations or depend on sampling efficiency in complex distributions. To address these issues, we propose the Self-Consistent Probability Flow (SCPF) method. We reformulate the second-order FP equation into an equivalent first-order deterministic Probability Flow ODE (PF-ODE) constraint. Unlike score matching or velocity matching, SCPF solves this problem by minimizing the residual of the PF-ODE continuity equation, which avoids explicit Hessian computation. We leverage Continuous Normalizing Flows (CNF) combined with the Hutchinson Trace Estimator (HTE) to reduce the training complexity to linear scale $O(D)$, achieving an effective $O(1)$ wall-clock time on GPUs. To address data sparsity in high dimensions, we apply a generative adaptive sampling strategy and theoretically prove that dynamically aligning collocation points with the evolving probability mass is a necessary condition to bound the approximation error. Experiments on diverse benchmarks -- ranging from anisotropic Ornstein-Uhlenbeck (OU) processes and high-dimensional Brownian motions with time-varying diffusion terms, to Geometric OU processes featuring non-Gaussian solutions -- demonstrate that SCPF effectively mitigates the CoD, maintaining high accuracy and constant computational cost for problems up to 100 dimensions. + oai:arXiv.org:2512.19196v1 + physics.comp-ph + cs.LG + cs.NA + math.NA + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xiaolong Wu, Qifeng Liao + + + Selective Phase-Aware Training of nnU-Net for Robust Breast Cancer Segmentation in Multi-Center DCE-MRI + https://arxiv.org/abs/2512.19225 + arXiv:2512.19225v1 Announce Type: cross +Abstract: Breast cancer remains the most common cancer among women and is a leading cause of female mortality. 
Dynamic contrast-enhanced MRI (DCE-MRI) is a powerful imaging tool for evaluating breast tumors, yet the field lacks a standardized benchmark for analyzing treatment responses and guiding personalized care. We participated in the MAMA-MIA Challenge's Primary Tumor Segmentation task, and this work presents a selective, phase-aware training framework for the nnU-Net architecture, emphasizing quality-focused data selection to strengthen model robustness and generalization. We employed the No New Net (nnU-Net) framework with a selective training strategy that systematically analyzed the impact of image quality and center-specific variability on segmentation performance. Controlled experiments on the DUKE, NACT, ISPY1, and ISPY2 datasets revealed that including ISPY scans with motion artifacts and reduced contrast impaired segmentation performance, even with advanced preprocessing, such as contrast-limited adaptive histogram equalization (CLAHE). In contrast, training on DUKE and NACT data, which exhibited clearer contrast and fewer motion artifacts despite varying resolutions, with early phase images (0000-0002) provided more stable training conditions. Our results demonstrate the importance of phase-sensitive and quality-aware training strategies in achieving reliable segmentation performance in heterogeneous clinical datasets, highlighting the limitations of naive dataset expansion and motivating the need for future automation of quality-based data selection strategies. + oai:arXiv.org:2512.19225v1 + eess.IV + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Beyza Zayim, Aissiou Ikram, Boukhiar Naima + + + Sonified Quantum Seizures. 
Sonification of time series in epileptic seizures and simulation of seizures via quantum modelling + https://arxiv.org/abs/2512.19272 + arXiv:2512.19272v1 Announce Type: cross +Abstract: We apply sonification strategies and quantum computing to the analysis of an episode of seizure. We first sonify the signal from a selection of channels (from real ECoG data), obtaining a polyphonic sequence. Then, we propose two quantum approaches to simulate a similar episode of seizure, and we sonify the results. The comparison of sonifications can give hints on similarities and discrepancies between real data and simulations, helping refine the \textit{in silico} model. This is a pioneering approach, showing how the combination of quantum computing and sonification can broaden the perspective of real-data investigation, and helping define a new test bench for analysis and prediction of seizures. + oai:arXiv.org:2512.19272v1 + quant-ph + cs.ET + cs.SD + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Maria Mannone, Paulo Vitor Itaborai, Omar Costa Hamido, Miriam Goldack, Norbert Marwan, Peppino Fazio, Patrizia Ribino + + + Cluster-Based Generalized Additive Models Informed by Random Fourier Features + https://arxiv.org/abs/2512.19373 + arXiv:2512.19373v1 Announce Type: cross +Abstract: Explainable machine learning aims to strike a balance between prediction accuracy and model transparency, particularly in settings where black-box predictive models, such as deep neural networks or kernel-based methods, achieve strong empirical performance but remain difficult to interpret. This work introduces a mixture of generalized additive models (GAMs) in which random Fourier feature (RFF) representations are leveraged to uncover locally adaptive structure in the data. In the proposed method, an RFF-based embedding is first learned and then compressed via principal component analysis. 
The resulting low-dimensional representations are used to perform soft clustering of the data through a Gaussian mixture model. These cluster assignments are then applied to construct a mixture-of-GAMs framework, where each local GAM captures nonlinear effects through interpretable univariate smooth functions. Numerical experiments on real-world regression benchmarks, including the California Housing, NASA Airfoil Self-Noise, and Bike Sharing datasets, demonstrate improved predictive performance relative to classical interpretable models. Overall, this construction provides a principled approach for integrating representation learning with transparent statistical modeling. + oai:arXiv.org:2512.19373v1 + stat.ML + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Xin Huang, Jia Li, Jun Yu + + + A Critical Assessment of Pattern Comparisons Between POD and Autoencoders in Intraventricular Flows + https://arxiv.org/abs/2512.19376 + arXiv:2512.19376v1 Announce Type: cross +Abstract: Understanding intraventricular hemodynamics requires compact and physically interpretable representations of the underlying flow structures, as characteristic flow patterns are closely associated with cardiovascular conditions and can support early detection of cardiac deterioration. Conventional visualization of velocity or pressure fields, however, provides limited insight into the coherent mechanisms driving these dynamics. Reduced-order modeling techniques, like Proper Orthogonal Decomposition (POD) and Autoencoder (AE) architectures, offer powerful alternatives to extract dominant flow features from complex datasets. This study systematically compares POD with several AE variants (Linear, Nonlinear, Convolutional, and Variational) using left ventricular flow fields obtained from computational fluid dynamics simulations. 
We show that, for a suitably chosen latent dimension, AEs produce modes that become nearly orthogonal and qualitatively resemble POD modes that capture a given percentage of kinetic energy. As the number of latent modes increases, AE modes progressively lose orthogonality, leading to linear dependence, spatial redundancy, and the appearance of repeated modes with substantial high-frequency content. This degradation reduces interpretability and introduces noise-like components into AE-based reduced-order models, potentially complicating their integration with physics-based formulations or neural-network surrogates. The extent of interpretability loss varies across the AEs, with nonlinear, convolutional, and variational models exhibiting distinct behaviors in orthogonality preservation and feature localization. Overall, the results indicate that AEs can reproduce POD-like coherent structures under specific latent-space configurations, while highlighting the need for careful mode selection to ensure physically meaningful representations of cardiac flow dynamics. + oai:arXiv.org:2512.19376v1 + physics.flu-dyn + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Eneko Lazpita, Andr\'es Bell-Navas, Jes\'us Garicano-Mena, Petros Koumoutsakos, Soledad Le Clainche + + + Undecidability of theories of semirings with fixed points + https://arxiv.org/abs/2512.19401 + arXiv:2512.19401v1 Announce Type: cross +Abstract: In this work we prove the undecidability (and $\Sigma^0_1$-completeness) of several theories of semirings with fixed points. The generality of our results stems from recursion theoretic methods, namely the technique of effective inseparability. Our result applies to many theories proposed in the literature, including Conway $\mu$-semirings, Park $\mu$-semirings, and Chomsky algebras. &#13;
+ oai:arXiv.org:2512.19401v1 + math.LO + cs.LO + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Anupam Das, Abhishek De, Stepan L. Kuznetsov + + + Machine Learning of Temperature-dependent Chemical Kinetics Using Parallel Droplet Microreactors + https://arxiv.org/abs/2512.19416 + arXiv:2512.19416v1 Announce Type: cross +Abstract: Temperature is a fundamental regulator of chemical and biochemical kinetics, yet capturing nonlinear thermal effects directly from experimental data remains a major challenge due to limited throughput and model flexibility. Recent advances in machine learning have enabled flexible modeling beyond conventional physical laws, but most existing strategies remain confined to surrogate models of end-point yields rather than full kinetic dynamics. Consequently, an end-to-end framework that unifies systematic kinetic data acquisition with machine learning based modeling has been lacking. In this paper, we present a unified framework that integrates droplet microfluidics with machine learning for the systematic analysis of temperature-dependent reaction kinetics. The platform is specifically designed to enable stable immobilization and long-term time-lapse imaging of thousands of droplets under dynamic thermal gradients. This configuration yields massively parallel time-resolved datasets across diverse temperature conditions that capture transient kinetics and provides particularly suitable inputs for training machine-learning models of reaction dynamics. Leveraging these datasets, we train Neural ODE models, which embed neural networks within differential equations to flexibly represent nonlinear temperature dependencies beyond conventional formulations. We demonstrate accurate prediction of enzymatic kinetics across diverse thermal environments, highlighting the robustness and versatility of the approach. 
Our framework bridges high-throughput experimental data acquisition with data-driven modeling, establishing a versatile foundation for enhanced predictive ability and rational analysis and design of temperature-sensitive biochemical processes. + oai:arXiv.org:2512.19416v1 + q-bio.QM + cs.SY + eess.SY + physics.bio-ph + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Mamoru Saita, Yutaka Hori + + + Real-Time Streamable Generative Speech Restoration with Flow Matching + https://arxiv.org/abs/2512.19442 + arXiv:2512.19442v1 Announce Type: cross +Abstract: Diffusion-based generative models have greatly impacted the speech processing field in recent years, exhibiting high speech naturalness and spawning a new research direction. Their application in real-time communication is, however, still lagging behind due to their computation-heavy nature involving multiple calls of large DNNs. + Here, we present Stream.FM, a frame-causal flow-based generative model with an algorithmic latency of 32 milliseconds (ms) and a total latency of 48 ms, paving the way for generative speech processing in real-time communication. We propose a buffered streaming inference scheme and an optimized DNN architecture, show how learned few-step numerical solvers can boost output quality at a fixed compute budget, explore model weight compression to find favorable points along a compute/quality tradeoff, and contribute a model variant with 24 ms total latency for the speech enhancement task. + Our work looks beyond theoretical latencies, showing that high-quality streaming generative speech processing can be realized on consumer GPUs available today. Stream.FM can solve a variety of speech processing tasks in a streaming fashion: speech enhancement, dereverberation, codec post-filtering, bandwidth extension, STFT phase retrieval, and Mel vocoding. 
As we verify through comprehensive evaluations and a MUSHRA listening test, Stream.FM establishes a state-of-the-art for generative streaming speech restoration, exhibits only a reasonable reduction in quality compared to a non-streaming variant, and outperforms our recent work (Diffusion Buffer) on generative streaming speech enhancement while operating at a lower latency. + oai:arXiv.org:2512.19442v1 + eess.SP + cs.LG + cs.SD + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Simon Welker, Bunlong Lay, Maris Hillemann, Tal Peer, Timo Gerkmann + + + Rethinking Coupled Tensor Analysis for Hyperspectral Super-Resolution: Recoverable Modeling Under Endmember Variability + https://arxiv.org/abs/2512.19489 + arXiv:2512.19489v1 Announce Type: cross +Abstract: This work revisits the hyperspectral super-resolution (HSR) problem, i.e., fusing a pair of spatially co-registered hyperspectral (HSI) and multispectral (MSI) images to recover a super-resolution image (SRI) that enhances the spatial resolution of the HSI. Coupled tensor decomposition (CTD)-based methods have gained traction in this domain, offering recoverability guarantees under various assumptions. Existing models such as canonical polyadic decomposition (CPD) and Tucker decomposition provide strong expressive power but lack physical interpretability. The block-term decomposition model with rank-$(L_r, L_r, 1)$ terms (the LL1 model) yields interpretable factors under the linear mixture model (LMM) of spectral images, but LMM assumptions are often violated in practice -- primarily due to nonlinear effects such as endmember variability (EV). To address this, we propose modeling spectral images using a more flexible block-term tensor decomposition with rank-$(L_r, M_r, N_r)$ terms (the LMN model). 
This modeling choice retains interpretability, subsumes CPD, Tucker, and LL1 as special cases, and robustly accounts for non-ideal effects such as EV, offering a balanced tradeoff between expressiveness and interpretability for HSR. Importantly, under the LMN model for HSI and MSI, recoverability of the SRI can still be established under proper conditions -- providing strong theoretical support. Extensive experiments on synthetic and real datasets further validate the effectiveness and robustness of the proposed method compared with existing CTD-based approaches. + oai:arXiv.org:2512.19489v1 + eess.IV + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Meng Ding, Xiao Fu + + + A perturbed preconditioned gradient descent method for the unconstrained minimization of composite objectives + https://arxiv.org/abs/2512.19532 + arXiv:2512.19532v1 Announce Type: cross +Abstract: We introduce a perturbed preconditioned gradient descent (PPGD) method for the unconstrained minimization of a strongly convex objective $G$ with a locally Lipschitz continuous gradient. We assume that $G(v)=E(v)+F(v)$ and that the gradient of $F$ is only known approximately. Our analysis is conducted in infinite dimensions with a preconditioner built into the framework. We prove a linear rate of convergence, up to an error term dependent on the gradient approximation. We apply the PPGD to the stationary Cahn-Hilliard equations with variable mobility under periodic boundary conditions. Numerical experiments are presented to validate the theoretical convergence rates and explore how the mobility affects the computation. + oai:arXiv.org:2512.19532v1 + math.OC + cs.NA + math.NA + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Jea-Hyun Park, Abner J. Salgado, Steven M. 
Wise + + + Active Convolved Illumination with Deep Transfer Learning for Complex Beam Transmission through Atmospheric Turbulence + https://arxiv.org/abs/2512.19540 + arXiv:2512.19540v1 Announce Type: cross +Abstract: Atmospheric turbulence imposes a fundamental limitation across a broad range of applications, including optical imaging, remote sensing, and free-space optical communication. Recent advances in adaptive optics, wavefront shaping, and machine learning, driven by synergistic progress in fundamental theories, optoelectronic hardware, and computational algorithms, have demonstrated substantial potential in mitigating turbulence-induced distortions. Recently, active convolved illumination (ACI) was proposed as a versatile and physics-driven technique for transmitting structured light beams with minimal distortion through highly challenging turbulent regimes. While distinct in its formulation, ACI shares conceptual similarities with other physics-driven distortion correction approaches and stands to benefit from complementary integration with data-driven deep learning (DL) models. Inspired by recent work coupling deep learning with traditional turbulence mitigation strategies, the present work investigates the feasibility of integrating ACI with neural network-based methods. We outline a conceptual framework for coupling ACI with data-driven models and identify conditions under which learned representations can meaningfully support ACI's correlation-injection mechanism. As a representative example, we employ a convolutional neural network (CNN) together with a transfer-learning approach to examine how a learned model may operate in tandem with ACI. This exploratory study demonstrates feasible implementation pathways and establishes an early foundation for assessing the potential of future ACI-DL hybrid architectures, representing a step toward evaluating broader synergistic interactions between ACI and modern DL models. 
+ oai:arXiv.org:2512.19540v1 + physics.optics + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Adrian A. Moazzam, Anindya Ghoshroy, Breeanne Heusdens, Durdu O. Guney, Roohollah Askari + + + Owning the Intelligence: Global AI Patents Landscape and Europe's Quest for Technological Sovereignty + https://arxiv.org/abs/2512.19569 + arXiv:2512.19569v1 Announce Type: cross +Abstract: Artificial intelligence has become a key arena of global technological competition and a central concern for Europe's quest for technological sovereignty. This paper analyzes global AI patenting from 2010 to 2023 to assess Europe's position in an increasingly bipolar innovation landscape dominated by the United States and China. Using linked patent, firm, ownership, and citation data, we examine the geography, specialization, and international diffusion of AI innovation. We find a highly concentrated patent landscape: China leads in patent volumes, while the United States dominates in citation impact and technological influence. Europe accounts for a limited share of AI patents but exhibits signals of relatively high patent quality. Technological proximity reveals global convergence toward U.S. innovation trajectories, with Europe remaining fragmented rather than forming an autonomous pole. Gravity-model estimates show that cross-border AI knowledge flows are driven primarily by technological capability and specialization, while geographic and institutional factors play a secondary role. EU membership does not significantly enhance intra-European knowledge diffusion, suggesting that technological capacity, rather than political integration, underpins participation in global AI innovation networks. 
+ oai:arXiv.org:2512.19569v1 + econ.GN + cs.AI + q-fin.EC + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Lapo Santarlasci, Armando Rungi, Loredana Fattorini, Nestor Maslej + + + Deep Learning for Primordial $B$-mode Extraction + https://arxiv.org/abs/2512.19577 + arXiv:2512.19577v1 Announce Type: cross +Abstract: The search for primordial gravitational waves is a central goal of cosmic microwave background (CMB) surveys. Isolating the characteristic $B$-mode polarization signal sourced by primordial gravitational waves is challenging for several reasons: the amplitude of the signal is inherently small; astrophysical foregrounds produce $B$-mode polarization contaminating the signal; and secondary $B$-mode polarization fluctuations are produced via the conversion of $E$ modes. Current and future low-noise, multi-frequency observations enable sufficient precision to address the first two of these challenges such that secondary $B$ modes will become the bottleneck for improved constraints on the amplitude of primordial gravitational waves. The dominant source of secondary $B$-mode polarization is gravitational lensing by large scale structure. Various strategies have been developed to estimate the lensing deflection and to reverse its effects on the CMB, thus reducing confusion from lensing $B$ modes in the search for primordial gravitational waves. However, a few complications remain. First, there may be additional sources of secondary $B$-mode polarization, for example from patchy reionization or from cosmic polarization rotation. Second, the statistics of delensed CMB maps can become complicated and non-Gaussian, especially when advanced lensing reconstruction techniques are applied. We previously demonstrated how a deep learning network, ResUNet-CMB, can provide nearly optimal simultaneous estimates of multiple sources of secondary $B$-mode polarization. &#13;
In this paper, we show how deep learning can be applied to estimate and remove multiple sources of secondary $B$-mode polarization, and we further show how this technique can be used in a likelihood analysis to produce nearly optimal, unbiased estimates of the amplitude of primordial gravitational waves. + oai:arXiv.org:2512.19577v1 + astro-ph.CO + cs.CV + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Eric Guzman, Joel Meyers + + + Patlak Parametric Image Estimation from Dynamic PET Using Diffusion Model Prior + https://arxiv.org/abs/2512.19584 + arXiv:2512.19584v1 Announce Type: cross +Abstract: Dynamic PET enables the quantitative estimation of physiology-related parameters and is widely utilized in research and increasingly adopted in clinical settings. Parametric imaging in dynamic PET requires kinetic modeling to estimate voxel-wise physiological parameters based on specific kinetic models. However, parametric images estimated through kinetic model fitting often suffer from low image quality due to the inherently ill-posed nature of the fitting process and the limited counts resulting from non-continuous data acquisition across multiple bed positions in whole-body PET. In this work, we proposed a diffusion model-based kinetic modeling framework for parametric image estimation, using the Patlak model as an example. The score function of the diffusion model was pre-trained on static total-body PET images and served as a prior for both Patlak slope and intercept images by leveraging their patch-wise similarity. During inference, the kinetic model was incorporated as a data-consistency constraint to guide the parametric image estimation. The proposed framework was evaluated on total-body dynamic PET datasets with different dose levels, demonstrating the feasibility and promising performance of the proposed framework in improving parametric image quality. 
+ oai:arXiv.org:2512.19584v1 + eess.IV + cs.CV + physics.med-ph + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ziqian Huang, Boxiao Yu, Siqi Li, Savas Ozdemir, Sangjin Bae, Jae Sung Lee, Guobao Wang, Kuang Gong + + + Multimodal LLMs for Historical Dataset Construction from Archival Image Scans: German Patents (1877-1918) + https://arxiv.org/abs/2512.19675 + arXiv:2512.19675v1 Announce Type: cross +Abstract: We leverage multimodal large language models (LLMs) to construct a dataset of 306,070 German patents (1877-1918) from 9,562 archival image scans using our LLM-based pipeline powered by Gemini-2.5-Pro and Gemini-2.5-Flash-Lite. Our benchmarking exercise provides tentative evidence that multimodal LLMs can create higher quality datasets than our research assistants, while also being more than 795 times faster and 205 times cheaper in constructing the patent dataset from our image corpus. About 20 to 50 patent entries are embedded on each page, arranged in a double-column format and printed in Gothic and Roman fonts. The font and layout complexity of our primary source material suggests to us that multimodal LLMs are a paradigm shift in how datasets are constructed in economic history. We open-source our benchmarking and patent datasets as well as our LLM-based data pipeline, which can be easily adapted to other image corpora using LLM-assisted coding tools, lowering the barriers for less technical researchers. Finally, we explain the economics of deploying LLMs for historical dataset construction and conclude by speculating on the potential implications for the field of economic history. 
+ oai:arXiv.org:2512.19675v1 + econ.GN + cs.CV + cs.DL + q-fin.EC + Tue, 23 Dec 2025 00:00:00 -0500 + cross + http://creativecommons.org/licenses/by/4.0/ + Niclas Griesshaber, Jochen Streb + + + Towards a resource for multilingual lexicons: an MT assisted and human-in-the-loop multilingual parallel corpus with multi-word expression annotation + https://arxiv.org/abs/2011.03783 + arXiv:2011.03783v2 Announce Type: replace +Abstract: In this work, we introduce the construction of a machine translation (MT) assisted and human-in-the-loop multilingual parallel corpus with annotations of multi-word expressions (MWEs), named AlphaMWE. The MWEs include verbal MWEs (vMWEs) defined in the PARSEME shared task that have a verb as the head of the studied terms. The annotated vMWEs are also bilingually and multilingually aligned manually. The languages covered include Arabic, Chinese, English, German, Italian, and Polish, of which, the Arabic corpus includes both standard and dialectal variations from Egypt and Tunisia. Our original English corpus is extracted from the PARSEME shared task in 2018. We performed machine translation of this source corpus followed by human post-editing and annotation of target MWEs. Strict quality control was applied for error limitation, i.e., each MT output sentence received first manual post-editing and annotation plus a second manual quality rechecking till annotators' consensus is reached. One of our findings during corpora preparation is that accurate translation of MWEs presents challenges to MT systems, as reflected by the outcomes of human-in-the-loop metric HOPE. To facilitate further MT research, we present a categorisation of the error types encountered by MT systems in performing MWE-related translation. To acquire a broader view of MT issues, we selected four popular state-of-the-art MT systems for comparison, namely Microsoft Bing Translator, GoogleMT, Baidu Fanyi, and DeepL MT. 
Because of the noise removal, translation post-editing, and MWE annotation by human professionals, we believe the AlphaMWE data set will be an asset for both monolingual and cross-lingual research, such as multi-word term lexicography, MT, and information extraction. + oai:arXiv.org:2011.03783v2 + cs.CL + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Lifeng Han, Najet Hadj Mohamed, Malak Rassem, Gareth Jones, Alan Smeaton, Goran Nenadic + + + Divisible Codes + https://arxiv.org/abs/2112.11763 + arXiv:2112.11763v4 Announce Type: replace +Abstract: A linear code over $\mathbb{F}_q$ with the Hamming metric is called $\Delta$-divisible if the weights of all codewords are divisible by $\Delta$. They have been introduced by Harold Ward a few decades ago. Applications include subspace codes, partial spreads, vector space partitions, and distance optimal codes. The determination of the possible lengths of projective divisible codes is an interesting and comprehensive challenge. + oai:arXiv.org:2112.11763v4 + cs.IT + math.CO + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Sascha Kurz + + + Constructions and bounds for subspace codes + https://arxiv.org/abs/2112.11766 + arXiv:2112.11766v3 Announce Type: replace +Abstract: Subspace codes are the $q$-analog of binary block codes in the Hamming metric. Here the codewords are vector spaces over a finite field. They have e.g. applications in random linear network coding, distributed storage, and cryptography. In this chapter we survey known constructions and upper bounds for subspace codes. 
+ oai:arXiv.org:2112.11766v3 + cs.IT + math.CO + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Sascha Kurz + + + Rethinking Open-Set Object Detection: Issues, a New Formulation, and Taxonomy + https://arxiv.org/abs/2207.09775 + arXiv:2207.09775v3 Announce Type: replace +Abstract: Open-set object detection (OSOD), a task involving the detection of unknown objects while accurately detecting known objects, has recently gained attention. However, we identify a fundamental issue with the problem formulation employed in current OSOD studies. Inherent to object detection is knowing "what to detect," which contradicts the idea of identifying "unknown" objects. This sets OSOD apart from open-set recognition (OSR). This contradiction complicates a proper evaluation of methods' performance, a fact that previous studies have overlooked. Next, we propose a novel formulation wherein detectors are required to detect both known and unknown classes within specified super-classes of object classes. This new formulation is free from the aforementioned issues and has practical applications. Finally, we design benchmark tests utilizing existing datasets and report the experimental evaluation of existing OSOD methods. The results show that existing methods fail to accurately detect unknown objects due to misclassification of known and unknown classes rather than incorrect bounding box prediction. As a byproduct, we introduce a taxonomy of OSOD, resolving confusion prevalent in the literature. We anticipate that our study will encourage the research community to reconsider OSOD and facilitate progress in the right direction. 
+ oai:arXiv.org:2207.09775v3 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1007/s11263-025-02479-3 + Yusuke Hosoya, Masanori Suganuma, Takayuki Okatani + + + Trajectory-Aware Eligibility Traces for Off-Policy Reinforcement Learning + https://arxiv.org/abs/2301.11321 + arXiv:2301.11321v3 Announce Type: replace +Abstract: Off-policy learning from multistep returns is crucial for sample-efficient reinforcement learning, but counteracting off-policy bias without exacerbating variance is challenging. Classically, off-policy bias is corrected in a per-decision manner: past temporal-difference errors are re-weighted by the instantaneous Importance Sampling (IS) ratio after each action via eligibility traces. Many off-policy algorithms rely on this mechanism, along with differing protocols for cutting the IS ratios to combat the variance of the IS estimator. Unfortunately, once a trace has been fully cut, the effect cannot be reversed. This has led to the development of credit-assignment strategies that account for multiple past experiences at a time. These trajectory-aware methods have not been extensively analyzed, and their theoretical justification remains uncertain. In this paper, we propose a multistep operator that can express both per-decision and trajectory-aware methods. We prove convergence conditions for our operator in the tabular setting, establishing the first guarantees for several existing methods as well as many new ones. Finally, we introduce Recency-Bounded Importance Sampling (RBIS), which leverages trajectory awareness to perform robustly across $\lambda$-values in an off-policy control task. + oai:arXiv.org:2301.11321v3 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Brett Daley, Martha White, Christopher Amato, Marlos C. 
Machado + + + Multiperiodic Processes: Ergodic Sources with a Sublinear Entropy + https://arxiv.org/abs/2302.09049 + arXiv:2302.09049v3 Announce Type: replace +Abstract: We construct multiperiodic processes -- a simple example of stationary ergodic (but not mixing) processes over natural numbers that enjoy the vanishing entropy rate under a mild condition. Multiperiodic processes are supported on randomly shifted deterministic sequences called multiperiodic sequences, which can be efficiently generated using an algorithm called the Infinite Clock. Under a suitable parameterization, multiperiodic sequences exhibit relative frequencies of particular numbers given by Zipf's law. Exactly in the same setting, the respective multiperiodic processes satisfy an asymptotic power-law growth of block entropy, called Hilberg's law. Hilberg's law is deemed to hold for statistical language models, in particular. + oai:arXiv.org:2302.09049v3 + cs.IT + cs.LG + math.IT + math.ST + stat.TH + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + {\L}ukasz D\k{e}bowski + + + Flying robots for a smarter life + https://arxiv.org/abs/2303.12044 + arXiv:2303.12044v2 Announce Type: replace +Abstract: Innovative ideas are continuously emerging to produce better life conditions where essential human needs are supposed to be fulfilled with perfect scenarios leading us to propose modern strategies drawing the future of smart cities. In this context, flying robots are increasingly exploited in many fields to improve the quality of our life. 
This paper illustrates new designs of flying robots that could be used to perform a variety of advanced missions like investigating the state of high-power lines and manipulating cabling maintenance procedures when failures are detected, evaluating the state of the outer edge of sidewalks to color their partially or wholly erased parts, and spraying pesticides to trees or crops that are affected by different diseases. Creating such smart devices demands developing many other partial designs relying on AI-based algorithms, computer vision techniques, and embedded systems. A variety of techniques that we have recently developed in this field are presented here. + oai:arXiv.org:2303.12044v2 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.2139/ssrn.4349634 + Abdullatif Baba + + + Stable Matching: Dealing with Changes in Preferences + https://arxiv.org/abs/2304.02590 + arXiv:2304.02590v3 Announce Type: replace +Abstract: We study stable matchings that are robust to preference changes in the two-sided stable matching setting of Gale and Shapley[GS62]. Given two instances $A$ and $B$ on the same set of agents, a matching is said to be robust if it is stable under both instances. While prior work has considered the case where a single agent changes preferences between $A$ and $B$, we allow multiple agents on both sides to update their preferences and ask whether three central properties of stable matchings extend to robust stable matchings: (i) Can a robust stable matching be found in polynomial time? (ii) Does the set of robust stable matchings form a lattice? (iii) Is the fractional robust stable matching polytope integral? + We show that all three properties hold when any number of agents on one side change preferences, as long as at most one agent on the other side does. 
For the case where two or more agents on both sides change preferences, we construct examples showing that both the lattice structure and polyhedral integrality fail, identifying this setting as a sharp threshold. We also present an XP-time algorithm for the general case, which implies a polynomial-time algorithm when the number of agents with changing preferences is constant. While these results establish the tractability of these regimes, closing the complexity gap in the fully general setting remains an interesting open question. + oai:arXiv.org:2304.02590v3 + cs.DM + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Rohith Reddy Gangam, Tung Mai, Nitya Raju, Vijay V. Vazirani + + + Neural Exploitation and Exploration of Contextual Bandits + https://arxiv.org/abs/2305.03784 + arXiv:2305.03784v2 Announce Type: replace +Abstract: In this paper, we study utilizing neural networks for the exploitation and exploration of contextual multi-armed bandits. Contextual multi-armed bandits have been studied for decades with various applications. To solve the exploitation-exploration trade-off in bandits, there are three main techniques: epsilon-greedy, Thompson Sampling (TS), and Upper Confidence Bound (UCB). In recent literature, a series of neural bandit algorithms have been proposed to adapt to the non-linear reward function, combined with TS or UCB strategies for exploration. In this paper, instead of calculating a large-deviation based statistical bound for exploration like previous methods, we propose, ``EE-Net,'' a novel neural-based exploitation and exploration strategy. In addition to using a neural network (Exploitation network) to learn the reward function, EE-Net uses another neural network (Exploration network) to adaptively learn the potential gains compared to the currently estimated reward for exploration. &#13;
We provide an instance-based $\widetilde{\mathcal{O}}(\sqrt{T})$ regret upper bound for EE-Net and show that EE-Net outperforms related linear and neural contextual bandit baselines on real-world datasets. + oai:arXiv.org:2305.03784v2 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Yikun Ban, Yuchen Yan, Arindam Banerjee, Jingrui He + + + Sampling recovery in $L_2$ and other norms + https://arxiv.org/abs/2305.07539 + arXiv:2305.07539v5 Announce Type: replace +Abstract: We study the recovery of functions in various norms, including $L_p$ with $1\le p\le\infty$, based on function evaluations. We obtain worst-case error bounds for general classes of functions in terms of the best $L_2$-approximation from a given nested sequence of subspaces and the Christoffel function of these subspaces. In the case $p=\infty$, our results imply that linear sampling algorithms are optimal up to a constant factor for many reproducing kernel Hilbert spaces. + oai:arXiv.org:2305.07539v5 + math.NA + cs.CC + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.1090/mcom/4148 + Math. Comp. Published Online: October 6, 2025 + David Krieg, Kateryna Pozharska, Mario Ullrich, Tino Ullrich + + + CodeTF: One-stop Transformer Library for State-of-the-art Code LLMs + https://arxiv.org/abs/2306.00029 + arXiv:2306.00029v2 Announce Type: replace +Abstract: Code intelligence plays a key role in transforming modern software engineering. Recently, deep learning-based models, especially Transformer-based large language models (LLMs), have demonstrated remarkable potential in tackling these tasks by leveraging massive open-source code data and programming language features. However, the development and deployment of such models often require expertise in both machine learning and software engineering, creating a barrier to model adoption. 
In this paper, we present CodeTF, an open-source Transformer-based library for state-of-the-art Code LLMs and code intelligence. Following the principles of modular design and extensible framework, we design CodeTF with a unified interface to enable rapid access and development across different types of models, datasets and tasks. Our library supports a collection of pretrained Code LLM models and popular code benchmarks, including a standardized interface to train and serve code LLMs efficiently, and data features such as language-specific parsers and utility functions for extracting code attributes. In this paper, we describe the design principles, the architecture, key modules and components, and compare with other related library tools. Finally, we hope CodeTF is able to bridge the gap between machine learning/generative AI and software engineering, providing a comprehensive open-source solution for developers, researchers, and practitioners. + oai:arXiv.org:2306.00029v2 + cs.SE + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Nghi D. Q. Bui, Hung Le, Yue Wang, Junnan Li, Akhilesh Deepak Gotmare, Steven C. H. Hoi + + + Anti-Correlated Noise in Epoch-Based Stochastic Gradient Descent: Implications for Weight Variances in Flat Directions + https://arxiv.org/abs/2306.05300 + arXiv:2306.05300v3 Announce Type: replace +Abstract: Stochastic Gradient Descent (SGD) has become a cornerstone of neural network optimization due to its computational efficiency and generalization capabilities. However, the gradient noise introduced by SGD is often assumed to be uncorrelated over time, despite the common practice of epoch-based training where data is sampled without replacement. In this work, we challenge this assumption and investigate the effects of epoch-based noise correlations on the stationary distribution of discrete-time SGD with momentum. 
Our main contributions are twofold: First, we calculate the exact autocorrelation of the noise during epoch-based training under the assumption that the noise is independent of small fluctuations in the weight vector, revealing that SGD noise is inherently anti-correlated over time. Second, we explore the influence of these anti-correlations on the variance of weight fluctuations. We find that for directions with curvature of the loss greater than a hyperparameter-dependent crossover value, the conventional predictions of isotropic weight variance under stationarity, based on uncorrelated and curvature-proportional noise, are recovered. Anti-correlations have negligible effect here. However, for relatively flat directions, the weight variance is significantly reduced, leading to a considerable decrease in loss fluctuations compared to the constant weight variance assumption. Furthermore, we present a numerical experiment where training with these anti-correlations enhances test performance, suggesting that the inherent noise structure induced by epoch-based training may play a role in finding flatter minima that generalize better. + oai:arXiv.org:2306.05300v3 + cs.LG + cond-mat.dis-nn + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1088/2632-2153/ae0ab2 + Mach. Learn.: Sci. Technol. 6 045003 (2025) + Marcel Kühn, Bernd Rosenow + + + Normalized mutual information is a biased measure for classification and community detection + https://arxiv.org/abs/2307.01282 + arXiv:2307.01282v3 Announce Type: replace +Abstract: Normalized mutual information is widely used as a similarity measure for evaluating the performance of clustering and classification algorithms. 
In this paper, we argue that results returned by the normalized mutual information are biased for two reasons: first, because they ignore the information content of the contingency table and, second, because their symmetric normalization introduces spurious dependence on algorithm output. We introduce a modified version of the mutual information that remedies both of these shortcomings. As a practical demonstration of the importance of using an unbiased measure, we perform extensive numerical tests on a basket of popular algorithms for network community detection and show that one's conclusions about which algorithm is best are significantly affected by the biases in the traditional mutual information. + oai:arXiv.org:2307.01282v3 + cs.SI + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1038/s41467-025-66150-8 + Nature Communications 16, 11268 (2025) + Maximilian Jerdee, Alec Kirkley, M. E. J. Newman + + + HUTFormer: Hierarchical U-Net Transformer for Long-Term Traffic Forecasting + https://arxiv.org/abs/2307.14596 + arXiv:2307.14596v2 Announce Type: replace +Abstract: Traffic forecasting, which aims to predict traffic conditions based on historical observations, has been an enduring research topic and is widely recognized as an essential component of intelligent transportation. Recent proposals on Spatial-Temporal Graph Neural Networks (STGNNs) have made significant progress by combining sequential models with graph convolution networks. However, due to high complexity issues, STGNNs only focus on short-term traffic forecasting (e.g., 1-h ahead), while ignoring more practical long-term forecasting. In this paper, we make the first attempt to explore long-term traffic forecasting (e.g., 1-day ahead). To this end, we first reveal its unique challenges in exploiting multi-scale representations. 
Then, we propose a novel Hierarchical U-Net TransFormer (HUTFormer) to address the issues of long-term traffic forecasting. HUTFormer consists of a hierarchical encoder and decoder to jointly generate and utilize multi-scale representations of traffic data. Specifically, for the encoder, we propose window self-attention and segment merging to extract multi-scale representations from long-term traffic data. For the decoder, we design a cross-scale attention mechanism to effectively incorporate multi-scale representations. In addition, HUTFormer employs an efficient input embedding strategy to address the complexity issues. Extensive experiments on four traffic datasets show that the proposed HUTFormer significantly outperforms state-of-the-art traffic forecasting and long time series forecasting baselines. + oai:arXiv.org:2307.14596v2 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.1016/j.commtr.2025.100218 + Communications in Transportation Research. 5, 100218 (2025) + Zezhi Shao, Fei Wang, Tao Sun, Chengqing Yu, Yuchen Fang, Guangyin Jin, Zhulin An, Yang Liu, Xiaobo Qu, Yongjun Xu + + + The Index and Core of a Relation. With Applications to the Axiomatics of Relation Algebra + https://arxiv.org/abs/2309.02017 + arXiv:2309.02017v4 Announce Type: replace +Abstract: We introduce the general notions of an index and a core of a relation. We postulate a limited form of the axiom of choice -- specifically that all partial equivalence relations have an index -- and explore the consequences of adding the axiom to standard axiom systems for point-free reasoning. Examples of the theorems we prove are that a core/index of a difunction is a bijection, and that the so-called "all or nothing" axiom used to facilitate pointwise reasoning is derivable from our axiom of choice. 
+ oai:arXiv.org:2309.02017v4 + cs.LO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.46298/fi.12245 + Roland Backhouse (University of Nottingham, UK), Ed Voermans (Independent Researcher) + + + Certified Defense on the Fairness of Graph Neural Networks + https://arxiv.org/abs/2311.02757 + arXiv:2311.02757v3 Announce Type: replace +Abstract: Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks over the years. Nevertheless, due to the vulnerabilities of GNNs, it has been empirically shown that malicious attackers could easily corrupt the fairness level of their predictions by adding perturbations to the input graph data. In this paper, we take crucial steps to study a novel problem of certifiable defense on the fairness level of GNNs. Specifically, we propose a principled framework named ELEGANT and present a detailed theoretical certification analysis for the fairness of GNNs. ELEGANT takes any GNN as its backbone, and the fairness level of such a backbone theoretically cannot be corrupted under certain attacker perturbation budgets. Notably, ELEGANT does not make any assumptions about the GNN structure or parameters, and does not require re-training the GNNs to realize certification. Hence it can serve as a plug-and-play framework for any optimized GNNs ready to be deployed. We verify the satisfactory effectiveness of ELEGANT in practice through extensive experiments on real-world datasets across different backbones of GNNs and parameter settings. 
+ oai:arXiv.org:2311.02757v3 + cs.LG + cs.CR + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Yushun Dong, Binchi Zhang, Hanghang Tong, Jundong Li + + + Using Machine Learning to Design Time Step Size Controllers for Stable Time Integrators + https://arxiv.org/abs/2312.01796 + arXiv:2312.01796v2 Announce Type: replace +Abstract: We present a new method for developing time step controllers based on a technique from the field of machine learning. This method is applicable to stable time integrators that have an embedded scheme, i.e., that have local error estimation similar to Runge-Kutta pairs. To design good time step size controllers using these error estimates, we propose to use Bayesian optimization. In particular, we design a novel objective function that captures important properties such as tolerance convergence and computational stability. We apply our new approach to several modified Patankar--Runge--Kutta (MPRK) schemes and a Rosenbrock-type scheme, equipping them with controllers based on digital signal processing which extend classical PI and PID controllers. We demonstrate that the optimization process yields controllers that are at least as good as the best controllers chosen from a wide range of suggestions available for classical explicit and implicit time integration methods by providing work-precision diagrams on a variety of ordinary and partial differential equations. 
+ oai:arXiv.org:2312.01796v2 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Thomas Izgin, Hendrik Ranocha + + + Self-Play Q-learners Can Provably Collude in the Iterated Prisoner's Dilemma + https://arxiv.org/abs/2312.08484 + arXiv:2312.08484v3 Announce Type: replace +Abstract: A growing body of computational studies shows that simple machine learning agents converge to cooperative behaviors in social dilemmas, such as collusive price-setting in oligopoly markets, raising questions about what drives this outcome. In this work, we provide theoretical foundations for this phenomenon in the context of self-play multi-agent Q-learners in the iterated prisoner's dilemma. We characterize broad conditions under which such agents provably learn the cooperative Pavlov (win-stay, lose-shift) policy rather than the Pareto-dominated "always defect" policy. We validate our theoretical results through additional experiments, demonstrating their robustness across a broader class of deep learning algorithms. + oai:arXiv.org:2312.08484v3 + cs.GT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Quentin Bertrand, Juan Duque, Emilio Calvano, Gauthier Gidel + + + An Insight into Security Code Review with LLMs: Capabilities, Obstacles, and Influential Factors + https://arxiv.org/abs/2401.16310 + arXiv:2401.16310v5 Announce Type: replace +Abstract: Security code review is a time-consuming and labor-intensive process typically requiring integration with automated security defect detection tools. However, existing security analysis tools struggle with poor generalization, high false positive rates, and coarse detection granularity. Large Language Models (LLMs) have been considered promising candidates for addressing those challenges. In this study, we conducted an empirical study to explore the potential of LLMs in detecting security defects during code review. 
Specifically, we evaluated the performance of seven LLMs under five different prompts and compared them with state-of-the-art static analysis tools. We also performed linguistic and regression analyses for the two top-performing LLMs to identify quality problems in their responses and factors influencing their performance. Our findings show that: (1) In security code review, LLMs significantly outperform state-of-the-art static analysis tools, and the reasoning-optimized LLM performs better than general-purpose LLMs. (2) DeepSeek-R1 achieves the highest performance, followed by GPT-4. The optimal prompt for DeepSeek-R1 incorporates both the commit message and chain-of-thought (CoT) guidance, while for GPT-4, the prompt with a Common Weakness Enumeration (CWE) list works best. (3) GPT-4 frequently produces vague expressions and exhibits difficulties in accurately following instructions in the prompts, while DeepSeek-R1 more commonly generates inaccurate code details in its outputs. (4) LLMs are more adept at identifying security defects in code files that have fewer tokens and security-relevant annotations. + oai:arXiv.org:2401.16310v5 + cs.SE + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Jiaxin Yu, Peng Liang, Yujia Fu, Amjed Tahir, Mojtaba Shahin, Chong Wang, Yangxiao Cai + + + Novel approaches for the reliable and efficient numerical evaluation of Landau-type operators + https://arxiv.org/abs/2402.02247 + arXiv:2402.02247v2 Announce Type: replace +Abstract: When applying Hamiltonian operator splitting methods for the time integration of multi-species Vlasov-Maxwell-Landau systems, the reliable and efficient numerical approximation of the Landau equation represents a fundamental component of the entire algorithm. Substantial computational issues arise from the treatment of the physically most relevant three-dimensional case with Coulomb-type interaction. 
This work is concerned with the introduction and numerical comparison of novel approaches for the evaluation of the Landau collision operator and related integral operators with more general kernels. In the spirit of collocation, common tools are the identification of fundamental integrals, series expansions of the integral kernel and the density function on the main part of the velocity domain, and interpolation as well as quadrature approximation near the singularity of the kernel. Focusing on the favourable choice of the Fourier spectral method, their practical implementation uses the reduction to basic integrals, fast Fourier techniques, and summations along certain directions. Moreover, an important observation is that a significant percentage of the overall computational effort can be transferred to precomputations which are independent of the density function. For the purpose of exposition and numerical validation, the cases of constant, regular, and singular integral kernels are distinguished, and the procedure is adapted according to the increasing complexity of the problem. + oai:arXiv.org:2402.02247v2 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jose Antonio Carrillo, Mechthild Thalhammer + + + Averaging $n$-step Returns Reduces Variance in Reinforcement Learning + https://arxiv.org/abs/2402.03903 + arXiv:2402.03903v4 Announce Type: replace +Abstract: Multistep returns, such as $n$-step returns and $\lambda$-returns, are commonly used to improve the sample efficiency of reinforcement learning (RL) methods. The variance of the multistep returns becomes the limiting factor in their length; looking too far into the future increases variance and reverses the benefits of multistep learning. In our work, we demonstrate the ability of compound returns -- weighted averages of $n$-step returns -- to reduce variance. 
We prove for the first time that any compound return with the same contraction modulus as a given $n$-step return has strictly lower variance. We additionally prove that this variance-reduction property improves the finite-sample complexity of temporal-difference learning under linear function approximation. Because general compound returns can be expensive to implement, we introduce two-bootstrap returns, which reduce variance while remaining efficient, even when using minibatched experience replay. We conduct experiments showing that compound returns often increase the sample efficiency of $n$-step deep RL agents like DQN and PPO. + oai:arXiv.org:2402.03903v4 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Brett Daley, Martha White, Marlos C. Machado + + + ViGoR: Improving Visual Grounding of Large Vision Language Models with Fine-Grained Reward Modeling + https://arxiv.org/abs/2402.06118 + arXiv:2402.06118v4 Announce Type: replace +Abstract: By combining natural language understanding, generation capabilities, and breadth of knowledge of large language models with image perception, recent large vision language models (LVLMs) have shown unprecedented visual reasoning capabilities. However, the generated text often suffers from inaccurate grounding in the visual input, resulting in errors such as hallucination of nonexistent scene elements, missing significant parts of the scene, and inferring incorrect attributes of and relationships between objects. To address these issues, we introduce a novel framework, ViGoR (Visual Grounding Through Fine-Grained Reward Modeling), that utilizes fine-grained reward modeling to significantly enhance the visual grounding of LVLMs over pre-trained baselines. This improvement is efficiently achieved using much cheaper human evaluations instead of full supervision, as well as automated methods. 
We show the effectiveness of our approach through a variety of evaluation methods and benchmarks. Additionally, we released our human annotation (https://github.com/amazon-science/vigor) comprising 15,440 images and generated text pairs with fine-grained evaluations to contribute to related research in the community. + oai:arXiv.org:2402.06118v4 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Siming Yan, Min Bai, Weifeng Chen, Xiong Zhou, Qixing Huang, Li Erran Li + + + Structured Language Generation Model: Loss Calibration and Formatted Decoding for Robust Structure Prediction and Knowledge Retrieval + https://arxiv.org/abs/2402.08971 + arXiv:2402.08971v3 Announce Type: replace +Abstract: Modern generative pre-trained language models excel at open-ended text generation, yet continue to underperform on structure-related tasks such as NER, relation extraction, and semantic role labeling, especially when compared to encoder-only models of similar sizes. While this gap has been attributed to limited structure knowledge, we hypothesize this is also due to the missing connection between the model's internal representations of linguistic structure and the output space used during supervised fine-tuning. We propose the Structured Language Generation Model (SLGM), a model- and task-agnostic framework that reformulates structured prediction as a classification problem through three components: (1) reinforced input formatting with structural cues, (2) loss design, and (3) format-aware decoding that constrains generation to task-valid outputs. Across 5 tasks and 13 datasets, SLGM substantially improves structure prediction without relying on dataset-specific engineering or additional model parameters, strengthening alignment between the model's internal structure representation and output. 
It outperforms baseline fine-tuning on models of the same size, achieves comparable performance to much larger models when used with models of fewer than 1B parameters, and acts as a zero-weight adapter that reproduces the benefits of dataset-specific fine-tuning in low-resource settings. + oai:arXiv.org:2402.08971v3 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Minho Lee, Junghyun Min, Yerang Kim, Woochul Lee, Yeonsoo Lee + + + IT Intrusion Detection Using Statistical Learning and Testbed Measurements + https://arxiv.org/abs/2402.13081 + arXiv:2402.13081v2 Announce Type: replace +Abstract: We study automated intrusion detection in an IT infrastructure, specifically the problem of identifying the start of an attack, the type of attack, and the sequence of actions an attacker takes, based on continuous measurements from the infrastructure. We apply statistical learning methods, including Hidden Markov Model (HMM), Long Short-Term Memory (LSTM), and Random Forest Classifier (RFC) to map sequences of observations to sequences of predicted attack actions. In contrast to most related research, we have abundant data to train the models and evaluate their predictive power. The data comes from traces we generate on an in-house testbed where we run attacks against an emulated IT infrastructure. Central to our work is a machine-learning pipeline that maps measurements from a high-dimensional observation space to a space of low dimensionality or to a small set of observation symbols. Investigating intrusions in offline as well as online scenarios, we find that both HMM and LSTM can be effective in predicting attack start time, attack type, and attack actions. If sufficient training data is available, LSTM achieves higher prediction accuracy than HMM. HMM, on the other hand, requires fewer computational resources and less training data for effective prediction. 
Also, we find that the methods we study benefit from data produced by traditional intrusion detection systems like SNORT. + oai:arXiv.org:2402.13081v2 + cs.LG + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-sa/4.0/ + Xiaoxuan Wang, Rolf Stadler + + + Investigating the Impact of Project Risks on Employee Turnover Intentions in the IT Industry of Pakistan + https://arxiv.org/abs/2403.14675 + arXiv:2403.14675v4 Announce Type: replace +Abstract: Employee turnover remains a pressing issue within high-tech sectors such as IT firms and research centers, where organizational success heavily relies on the skills of their workforce. Intense competition and a scarcity of skilled professionals in the industry contribute to a perpetual demand for highly qualified employees, posing challenges for organizations to retain talent. While numerous studies have explored various factors affecting employee turnover in these industries, their focus often remains on overarching trends rather than specific organizational contexts. In particular, within the software industry, where project-specific risks can significantly impact project success and timely delivery, understanding their influence on job satisfaction and turnover intentions is crucial. This study aims to investigate the influence of project risks in the IT industry on job satisfaction and employee turnover intentions. Furthermore, it examines the role of both external and internal social links in shaping perceptions of job satisfaction. + oai:arXiv.org:2403.14675v4 + cs.CY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Ghalib Ahmed Tahir, Murtaza Ashraf + + + AC4: Algebraic Computation Checker for Circuit Constraints in ZKPs + https://arxiv.org/abs/2403.15676 + arXiv:2403.15676v5 Announce Type: replace +Abstract: Zero-knowledge proof (ZKP) systems have attracted surging attention and hold a fundamental role in contemporary cryptography. 
Zero-knowledge succinct non-interactive argument of knowledge (zk-SNARK) protocols dominate ZKP usage and are implemented through the arithmetic circuit programming paradigm. However, underconstrained or overconstrained circuits may lead to bugs. The former refers to circuits that lack the necessary constraints, resulting in unexpected solutions and causing the verifier to accept a bogus witness, and the latter refers to circuits that are constrained excessively, resulting in lacking necessary solutions and causing the verifier to accept no witness. This paper introduces a novel approach for pinpointing two distinct types of bugs in ZKP circuits. The method involves encoding the arithmetic circuit constraints to polynomial equation systems and solving them over finite fields with a computer algebra system. The classification of verification results is refined, greatly enhancing the expressive power of the system. A tool, AC4, is presented as an implementation of the method. Experiments show that AC4 achieves an increase in the solved rate, with a 29% improvement over Picus and CIVER, and a slight improvement over halo2-analyzer, a checker for halo2 circuits. Within a solvable range, the checking time has also exhibited noticeable improvement, demonstrating an order-of-magnitude improvement compared to previous efforts. + oai:arXiv.org:2403.15676v5 + cs.SE + cs.CL + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Qizhe Yang, Boxuan Liang, Hao Chen, Guoqiang Li + + + SPVR: syntax-to-prompt vulnerability repair based on large language models + https://arxiv.org/abs/2405.04994 + arXiv:2405.04994v3 Announce Type: replace +Abstract: Purpose: In the field of vulnerability repair, previous research has leveraged pretrained models and LLM-based prompt engineering, among which LLM-based approaches show better generalizability and achieve the best performance. 
However, the LLM-based approaches generally regard vulnerability repair as a sequence-to-sequence task, and do not explicitly capture the syntax patterns for different vulnerability types, leading to limited accuracy. We aim to create a method that ensures the specificity of prompts targeting vulnerable code while also leveraging the generative capabilities of Large Language Models. Methods: We propose SPVR (Syntax-to-Prompt Vulnerability Repair), a novel framework that collects information from syntax trees and generates corresponding prompts. Our method consists of three steps: rule design, prompt generation, and patch generation. In the rule design step, our method parses code patches and designs rules to extract relevant contextual information. These rules aid in identifying vulnerability-related issues. In the prompt generation step, our method extracts information from vulnerable code with pre-defined rules, automatically converting them into prompts. We also incorporate the description of CWE (Common Weakness Enumeration) as known information into the prompts. Finally, in the patch generation step, this prompt will serve as input to any conversational LLM to obtain code patches. Results: Extensive experiments validate that our method achieves excellent results in assisting LLMs to fix vulnerabilities accurately. We utilize multiple Large Language Models to validate the effectiveness of our work, repairing 143 of 547 vulnerable code samples using ChatGPT-4. We conducted a comparison of our approach against several existing vulnerability repair approaches (including fine-tuning-based and prompt-based), across multiple metrics. 
+ oai:arXiv.org:2405.04994v3 + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1007/s10515-025-00579-5 + Automated Software Engineering 33 (2026) 38 + Ruoke Wang, Zongjie Li, Cuiyun Gao, Chaozheng Wang, Yang Xiao, Xuan Wang + + + An adaptive finite element multigrid solver using GPU acceleration + https://arxiv.org/abs/2405.05047 + arXiv:2405.05047v3 Announce Type: replace +Abstract: Adaptive finite elements combined with geometric multigrid solvers are one of the most efficient numerical methods for problems such as the instationary Navier-Stokes equations. Yet despite their efficiency, computations remain expensive and the simulation of, for example, complex flow problems can take many hours or days. GPUs provide an interesting avenue to speed up the calculations due to their very high theoretical performance. However, the large degree of parallelism and non-standard API make the use of GPUs in scientific computing challenging. In this work, we develop a GPU acceleration for the adaptive finite element library Gascoigne and study its effectiveness for different systems of partial differential equations. Our goal is thereby to integrate the GPU acceleration into the existing code with minimal changes, even when this incurs a penalty in the GPU acceleration. Through the systematic formulation of all computations as linear algebra operations, we can employ GPU-accelerated linear algebra libraries, which simplifies the implementation and ensures the maintainability of the code while achieving very efficient GPU utilization. Our results for a transport-diffusion equation, linear elasticity, and the instationary Navier-Stokes equations show substantial speedups of up to 20X compared to multi-core CPU implementations. 
+ oai:arXiv.org:2405.05047v3 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-sa/4.0/ + Manuel Liebchen, Robert Jendersie, Utku Kaya, Christian Lessig, Thomas Richter + + + An overview of systems-theoretic guarantees in data-driven model predictive control + https://arxiv.org/abs/2406.04130 + arXiv:2406.04130v2 Announce Type: replace +Abstract: The development of control methods based on data has seen a surge of interest in recent years. When applying data-driven controllers in real-world applications, providing theoretical guarantees for the closed-loop system is of crucial importance to ensure reliable operation. In this review, we provide an overview of data-driven model predictive control (MPC) methods for controlling unknown systems with guarantees on systems-theoretic properties such as stability, robustness, and constraint satisfaction. The considered approaches rely on the Fundamental Lemma from behavioral theory in order to predict input-output trajectories directly from data. We cover various setups, ranging from linear systems and noise-free data to more realistic formulations with noise and nonlinearities, and we provide an overview of different techniques to ensure guarantees for the closed-loop system. Moreover, we discuss avenues for future research that may further improve the theoretical understanding and practical applicability of data-driven MPC. + oai:arXiv.org:2406.04130v2 + eess.SY + cs.SY + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1146/annurev-control-030323-024328 + Annual Review of Control, Robotics, and Autonomous Systems 8 (1), pp. 
77-100, 2025 + Julian Berberich, Frank Allg\"ower + + + Projecting Molecules into Synthesizable Chemical Spaces + https://arxiv.org/abs/2406.04628 + arXiv:2406.04628v2 Announce Type: replace +Abstract: Discovering new drug molecules is a pivotal yet challenging process due to the near-infinitely large chemical space and notorious demands on time and resources. Numerous generative models have recently been introduced to accelerate the drug discovery process, but their progression to experimental validation remains limited, largely due to a lack of consideration for synthetic accessibility in practical settings. In this work, we introduce a novel framework that is capable of generating new chemical structures while ensuring synthetic accessibility. Specifically, we introduce a postfix notation of synthetic pathways to represent molecules in chemical space. Then, we design a transformer-based model to translate molecular graphs into postfix notations of synthesis. We highlight the model's ability to: (a) perform bottom-up synthesis planning more accurately, (b) generate structurally similar, synthesizable analogs for unsynthesizable molecules proposed by generative models with their properties preserved, and (c) explore the local synthesizable chemical space around hit molecules. + oai:arXiv.org:2406.04628v2 + cs.CE + q-bio.QM + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Shitong Luo, Wenhao Gao, Zuofan Wu, Jian Peng, Connor W. Coley, Jianzhu Ma + + + One Perturbation is Enough: On Generating Universal Adversarial Perturbations against Vision-Language Pre-training Models + https://arxiv.org/abs/2406.05491 + arXiv:2406.05491v4 Announce Type: replace +Abstract: Vision-Language Pre-training (VLP) models have exhibited unprecedented capability in many applications by taking full advantage of the multimodal alignment. However, previous studies have shown they are vulnerable to maliciously crafted adversarial samples. 
Despite recent success, these methods are generally instance-specific and require generating perturbations for each input sample. In this paper, we reveal that VLP models are also vulnerable to the instance-agnostic universal adversarial perturbation (UAP). Specifically, we design a novel Contrastive-training Perturbation Generator with Cross-modal conditions (C-PGC) to achieve the attack. Given that the pivotal multimodal alignment is achieved through contrastive learning, we propose to turn this powerful technique against the models themselves, i.e., we employ a malicious version of contrastive learning to train the C-PGC based on our carefully crafted positive and negative image-text pairs, essentially destroying the alignment relationship learned by VLP models. In addition, C-PGC fully utilizes the characteristics of Vision-and-Language (V+L) scenarios by incorporating both unimodal and cross-modal information as effective guidance. Extensive experiments show that C-PGC successfully forces adversarial samples to move away from their original area in the VLP model's feature space, thus essentially enhancing attacks across various victim models and V+L tasks. The GitHub repository is available at https://github.com/ffhibnese/CPGC_VLP_Universal_Attacks. + oai:arXiv.org:2406.05491v4 + cs.CV + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/publicdomain/zero/1.0/ + Hao Fang, Jiawei Kong, Wenbo Yu, Bin Chen, Jiawei Li, Hao Wu, Shutao Xia, Ke Xu + + + An extension of C++ with memory-centric specifications for HPC to reduce memory footprints and streamline MPI development + https://arxiv.org/abs/2406.06095 + arXiv:2406.06095v4 Announce Type: replace +Abstract: C++ leans towards a memory-inefficient storage of structs: The compiler inserts padding bits, while it is not able to exploit knowledge about the range of integers, enums or bitsets. Furthermore, the language provides no support for arbitrary floating-point precisions. 
We propose a language extension based upon attributes through which developers can guide the compiler as to which memory arrangements would be beneficial: Can multiple booleans or integers with limited range be squeezed into one bit field, do floating-point numbers hold fewer significant bits than in the IEEE standard, and is a programmer willing to trade attribute ordering guarantees for a more compact object representation? The extension offers the opportunity to fall back to normal alignment and native C++ floating point representations via plain C++ assignments, no dependencies upon external libraries are introduced, and the resulting code remains (syntactically) standard C++. As MPI remains the de-facto standard for distributed memory calculations in C++, we furthermore propose additional attributes which streamline the MPI datatype modelling in combination with our memory optimisation extensions. Our work implements the language annotations within LLVM and demonstrates their potential impact through smoothed particle hydrodynamics benchmarks. They uncover the potential gains in terms of performance and development productivity. + oai:arXiv.org:2406.06095v4 + cs.MS + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Pawel K. Radtke, Cristian G. Barrera-Hinojosa, Mladen Ivkovic, Tobias Weinzierl + + + Continuum Attention for Neural Operators + https://arxiv.org/abs/2406.06486 + arXiv:2406.06486v4 Announce Type: replace +Abstract: Transformers, and the attention mechanism in particular, have become ubiquitous in machine learning. Their success in modeling nonlocal, long-range correlations has led to their widespread adoption in natural language processing, computer vision, and time series problems. 
Neural operators, which map spaces of functions into spaces of functions, are necessarily both nonlinear and nonlocal if they are universal; it is thus natural to ask whether the attention mechanism can be used in the design of neural operators. Motivated by this, we study transformers in the function space setting. We formulate attention as a map between infinite dimensional function spaces and prove that the attention mechanism as implemented in practice is a Monte Carlo or finite difference approximation of this operator. The function space formulation allows for the design of transformer neural operators, a class of architectures designed to learn mappings between function spaces. In this paper, we state and prove the first universal approximation result for transformer neural operators, using only a slight modification of the architecture implemented in practice. The prohibitive cost of applying the attention operator to functions defined on multi-dimensional domains leads to the need for more efficient attention-based architectures. For this reason we also introduce a function space generalization of the patching strategy from computer vision, and introduce a class of associated neural operators. Numerical results, on an array of operator learning problems, demonstrate the promise of our approaches to function space formulations of attention and their use in neural operators. + oai:arXiv.org:2406.06486v4 + cs.LG + cs.NA + math.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Edoardo Calvello, Nikola B. Kovachki, Matthew E. Levine, Andrew M. Stuart + + + Logic-based analogical proportions + https://arxiv.org/abs/2406.14402 + arXiv:2406.14402v2 Announce Type: replace +Abstract: The author has recently introduced an abstract algebraic framework of analogical proportions within the general setting of universal algebra. 
The purpose of this paper is to lift that framework from universal algebra to the strictly more expressive setting of full first-order logic. We show that the so-obtained logic-based framework preserves all desired properties and we prove novel results in that extended setting. + oai:arXiv.org:2406.14402v2 + cs.LO + cs.DM + math.LO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Christian Anti\'c + + + Label Words as Local Task Vectors in In-Context Learning + https://arxiv.org/abs/2406.16007 + arXiv:2406.16007v2 Announce Type: replace +Abstract: Large Language Models (LLMs) have demonstrated remarkable abilities, one of the most important being in-context learning (ICL). With ICL, LLMs can derive the underlying rule from a few demonstrations and provide answers that comply with the rule. Previous work hypothesized that the network creates a task vector in specific positions during ICL. The task vector can be computed by averaging across the dataset. It conveys the overall task information and can thus be considered global. Patching the global task vector allows LLMs to achieve zero-shot performance with dummy inputs comparable to few-shot learning. However, we find that such a global task vector does not exist in all tasks, especially in tasks that rely on rules that can only be inferred from multiple demonstrations, such as categorization tasks. Instead, the information provided by each demonstration is first transmitted to its answer position and forms a local task vector associated with the demonstration. In some tasks but not in categorization tasks, all demonstrations' local task vectors converge in later layers, forming the global task vector. We further show that local task vectors encode a high-level abstraction of rules extracted from the demonstrations. 
Our study provides novel insights into the mechanism underlying ICL in LLMs, demonstrating how ICL may be achieved through an information aggregation mechanism. + oai:arXiv.org:2406.16007v2 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Bowen Zheng, Ming Ma, Zhongqiao Lin, Tianming Yang + + + StableShard: Stable and Scalable Blockchain Sharding with High Concurrency via Collaborative Committees + https://arxiv.org/abs/2407.06882 + arXiv:2407.06882v2 Announce Type: replace +Abstract: Sharding enhances blockchain scalability by partitioning nodes into multiple groups for concurrent transaction processing. Configuring a large number of small shards usually helps improve transaction concurrency, but it also increases the fraction of malicious nodes in each shard, easily causing shard corruption and jeopardizing system security. Existing works attempt to improve concurrency by reducing shard sizes while maintaining security, but typically rely on time-consuming recovery of corrupted shards to restore liveness and network-wide consensus. This causes severe system stagnation and limits scalability. + To address this, we present StableShard, a sharded blockchain that securely provides high concurrency with stable and scalable performance. The core idea is to carefully co-design the division of labor between proposer shards (PSs) and finalizer committees (FCs): we deliberately assign 1) asymmetric roles and 2) matching parameters to PSs and FCs. Small PSs focus on fast transaction proposal and local validity, while large FCs focus on resolving forks, finalizing PS blocks, and maintaining liveness for faulty PSs via a cross-layer view-change protocol. 
Moreover, by fine-tuning key system parameters (e.g., shard size, quorum size), we ensure that each PS tolerates a <1/2 fraction of malicious nodes without losing liveness, and allow multiple FCs to securely coexist (each with a <1/3 fraction of malicious nodes) for better scalability. Consequently, StableShard can safely configure many smaller PSs to boost concurrency, while FCs and PSs jointly guarantee safety and liveness without system stagnation, leading to stable and scalable performance. Evaluations show that StableShard achieves up to 10x higher throughput than existing solutions and significantly more stable concurrency under attacks. + oai:arXiv.org:2407.06882v2 + cs.DC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Mingzhe Li, You Lin, Jin Zhang + + + SpiralShard: Highly Concurrent and Secure Blockchain Sharding via Linked Cross-shard Endorsement + https://arxiv.org/abs/2407.08651 + arXiv:2407.08651v2 Announce Type: replace +Abstract: Blockchain sharding improves the scalability of blockchain systems by partitioning the whole blockchain state, nodes, and transaction workloads into different shards. However, existing blockchain sharding systems generally suffer from a small number of shards, resulting in limited concurrency. The main reason is that existing sharding systems require large shard sizes to ensure security. + To enhance the concurrency of blockchain sharding securely, we propose SpiralShard. The intuition is to allow the existence of some shards with a larger fraction of malicious nodes (i.e., corrupted shards), thus reducing shard sizes. SpiralShard can configure more and smaller shards for higher concurrency at the same network size. To ensure security with the existence of corrupted shards, we propose the Linked Cross-shard Endorsement (LCE) protocol. 
According to our LCE protocol, the blocks of each shard are sequentially verified and endorsed (via intra-shard consensus) by a group of shards before being finalized. As a result, a corrupted shard can eliminate forks with the help of the other shards. We implement SpiralShard based on Harmony and conduct extensive evaluations. Experimental results show that, compared with Harmony, SpiralShard achieves around 19x throughput gain under a large network size with 4,000+ nodes. + oai:arXiv.org:2407.08651v2 + cs.CR + cs.DC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + 10.1109/TON.2025.3593509 + You Lin, Mingzhe Li, Jin Zhang + + + Graph Transformers: A Survey + https://arxiv.org/abs/2407.09777 + arXiv:2407.09777v2 Announce Type: replace +Abstract: Graph transformers are a recent advancement in machine learning, offering a new class of neural network models for graph-structured data. The synergy between transformers and graph learning demonstrates strong performance and versatility across various graph-related tasks. This survey provides an in-depth review of recent progress and challenges in graph transformer research. We begin with foundational concepts of graphs and transformers. We then explore design perspectives of graph transformers, focusing on how they integrate graph inductive biases and graph attention mechanisms into the transformer architecture. Furthermore, we propose a taxonomy classifying graph transformers based on depth, scalability, and pre-training strategies, summarizing key principles for effective development of graph transformer models. Beyond technical analysis, we discuss the applications of graph transformer models for node-level, edge-level, and graph-level tasks, exploring their potential in other application scenarios as well. 
Finally, we identify remaining challenges in the field, such as scalability and efficiency, generalization and robustness, interpretability and explainability, dynamic and complex graphs, as well as data quality and diversity, charting future directions for graph transformer research. + oai:arXiv.org:2407.09777v2 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.1109/TNNLS.2025.3646122 + IEEE Transactions on Neural Networks and Learning Systems 2026 + Ahsan Shehzad, Feng Xia, Shagufta Abid, Ciyuan Peng, Shuo Yu, Dongyu Zhang, Karin Verspoor + + + Feature-Guided Metaheuristic with Diversity Management for Solving the Capacitated Vehicle Routing Problem + https://arxiv.org/abs/2407.20777 + arXiv:2407.20777v2 Announce Type: replace +Abstract: We propose a feature-based guidance mechanism to enhance metaheuristic algorithms for solving the Capacitated Vehicle Routing Problem (CVRP). This mechanism leverages an Explainable AI (XAI) model to identify features that correlate with high-quality solutions. These insights are used to guide the search process by promoting solution diversity and avoiding premature convergence. The guidance mechanism is first integrated into a custom metaheuristic algorithm, which combines neighborhood search with a novel hybrid of the split algorithm and path relinking. Experiments on benchmark instances with up to $30,000$ customer nodes demonstrate that the guidance significantly improves the performance of this baseline algorithm. Furthermore, we validate the generalizability of the guidance approach by integrating it into a state-of-the-art metaheuristic, where it again yields statistically significant performance gains. These results confirm that the proposed mechanism is both scalable and transferable across algorithmic frameworks. 
+ oai:arXiv.org:2407.20777v2 + cs.AI + cs.DM + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + 10.1016/j.ejor.2025.12.029 + European Journal of Operational Research (2025) + Bachtiar Herdianto, Romain Billot, Flavien Lucas, Marc Sevaux + + + Generative Retrieval with Few-shot Indexing + https://arxiv.org/abs/2408.02152 + arXiv:2408.02152v2 Announce Type: replace +Abstract: Existing generative retrieval (GR) methods rely on training-based indexing, which fine-tunes a model to memorise associations between queries and the document identifiers (docids) of relevant documents. Training-based indexing suffers from high training costs, under-utilisation of pre-trained knowledge in large language models (LLMs), and limited adaptability to dynamic document corpora. To address the issues, we propose a few-shot indexing-based GR framework (Few-Shot GR). It has a few-shot indexing process without any training, where we prompt an LLM to generate docids for all documents in a corpus, ultimately creating a docid bank for the entire corpus. During retrieval, we feed a query to the same LLM and constrain it to generate a docid within the docid bank created during indexing, and then map the generated docid back to its corresponding document. Moreover, we devise few-shot indexing with one-to-many mapping to further enhance Few-Shot GR. Experiments show that Few-Shot GR achieves superior performance to state-of-the-art GR methods requiring heavy training. 
+ oai:arXiv.org:2408.02152v2 + cs.IR + cs.AI + cs.CL + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Arian Askari, Chuan Meng, Mohammad Aliannejadi, Zhaochun Ren, Evangelos Kanoulas, Suzan Verberne + + + Overcoming Growth-Induced Forgetting in Task-Agnostic Continual Learning + https://arxiv.org/abs/2408.10566 + arXiv:2408.10566v5 Announce Type: replace +Abstract: In continual learning (CL), model growth enhances adaptability to new data. However, when model growth is applied improperly, especially in task-agnostic CL, where the entire grown model is used for inference, it can lead to severe degradation of learned knowledge, a problem we term growth-induced forgetting. Most existing methods that adopt model growth to improve adaptability often overlook the forgetting issue, resulting in compromised knowledge retention, making them unsuitable for task-agnostic settings. To promote both adaptability and knowledge retention with model growth, we identify the key: gradient and parameter sparsity. Introducing SparseGrow, which increases gradient sparsity through layer expansion and gradient gating to enable focused updates on parameters while preserving critical parameters, thus inhibiting forgetting. Moreover, it promotes parameter sparsity with sparse initialization and training, aiming at better control of model plasticity, improving adaptability over new data. Extensive experiments across diverse datasets, task-agnostic settings, and a large number of tasks demonstrate the necessity of controlled layer expansion and validate the effectiveness of SparseGrow in achieving high adaptability while minimizing forgetting in continual learning. By enabling model growth with sparsified gradients and parameters, SparseGrow paves the way for building scalable lifelong learning systems capable of continual adaptation with better knowledge retention. 
+ oai:arXiv.org:2408.10566v5 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yuqing Zhao, Jiannong Cao, Divya Saxena, Xiaoyun Liu, Changlin Song, Bo Yuan, Julie McCann + + + Networked Communication for Mean-Field Games with Function Approximation and Empirical Mean-Field Estimation + https://arxiv.org/abs/2408.11607 + arXiv:2408.11607v3 Announce Type: replace +Abstract: Recent algorithms allow decentralised agents, possibly connected via a communication network, to learn equilibria in mean-field games from a non-episodic run of the empirical system. However, these algorithms are for tabular settings: this computationally limits the size of agents' observation space, meaning the algorithms cannot handle anything but small state spaces, nor generalise beyond policies depending only on the agent's local state to so-called 'population-dependent' policies. We address this limitation by introducing function approximation to the existing setting, drawing on the Munchausen Online Mirror Descent method that has previously been employed only in finite-horizon, episodic, centralised settings. While this permits us to include the mean field in the observation for players' policies, it is unrealistic to assume decentralised agents have access to this global information: we therefore also provide new algorithms allowing agents to locally estimate the global empirical distribution, and to improve this estimate via inter-agent communication. We prove theoretically that exchanging policy information helps networked agents outperform both independent and even centralised agents in function-approximation settings. Our experiments demonstrate this happening empirically, and show that the communication network allows decentralised agents to estimate the mean field for population-dependent policies. 
+ oai:arXiv.org:2408.11607v3 + cs.MA + cs.AI + cs.GT + cs.LG + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Patrick Benjamin, Alessandro Abate + + + How Reliable are Causal Probing Interventions? + https://arxiv.org/abs/2408.15510 + arXiv:2408.15510v5 Announce Type: replace +Abstract: Causal probing aims to analyze foundation models by examining how intervening on their representation of various latent properties impacts their outputs. Recent works have cast doubt on the theoretical basis of several leading causal probing methods, but it has been unclear how to systematically evaluate the effectiveness of these methods in practice. To address this, we define two key causal probing desiderata: completeness (how thoroughly the representation of the target property has been transformed) and selectivity (how little non-targeted properties have been impacted). We find that there is an inherent tradeoff between the two, which we define as reliability, their harmonic mean. We introduce an empirical analysis framework to measure and evaluate these quantities, allowing us to make the first direct comparisons between different families of leading causal probing methods (e.g., linear vs. nonlinear, or concept removal vs. counterfactual interventions). We find that: (1) all methods show a clear tradeoff between completeness and selectivity; (2) more complete and reliable methods have a greater impact on LLM behavior; and (3) nonlinear interventions are almost always more reliable than linear interventions. 
+ Our project webpage is available at: https://ahdavies6.github.io/causal_probing_reliability/ + oai:arXiv.org:2408.15510v5 + cs.LG + cs.AI + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Marc Canby, Adam Davies, Chirag Rastogi, Julia Hockenmaier + + + Automatic Detection of LLM-Generated Code: A Comparative Case Study of Contemporary Models Across Function and Class Granularities + https://arxiv.org/abs/2409.01382 + arXiv:2409.01382v2 Announce Type: replace +Abstract: The adoption of Large Language Models (LLMs) for code generation risks incorporating vulnerable code into software systems. Existing detectors face two critical limitations: a lack of systematic cross-model validation and opaque "black box" operation. We address this through a comparative study of code generated by four distinct LLMs: GPT-3.5, Claude 3 Haiku, Claude Haiku 4.5, and GPT-OSS. + Analyzing 14,485 Python functions and 11,913 classes from the CodeSearchNet dataset, we generated corresponding code with all four LLMs. Using interpretable software metrics, we trained CatBoost classifiers for each configuration. Our analysis reveals that granularity effects dominate model differences by a factor of 8.6, with negligible feature overlap, indicating that function-level and class-level detection rely on fundamentally disjoint structural signatures. + We discover critical granularity-dependent inversions: while modern models (Claude, GPT-OSS) are more detectable at the class level, GPT-3.5 is an anomaly that uniquely excels at the function level. SHAP analysis identifies the Comment-to-Code Ratio as the sole universal discriminator. However, its predictive magnitude varies drastically across models, explaining why detectors trained on specific LLMs fail to generalize. + Our findings demonstrate that GPT-3.5's exceptional detectability (AUC-ROC 0.96) is unrepresentative of contemporary models (AUC-ROC approximately between 0.68 and 0.80). 
Robust detection requires moving beyond single-model studies to account for substantial diversity in structural fingerprints across architectures and granularities. + oai:arXiv.org:2409.01382v2 + cs.SE + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Musfiqur Rahman, SayedHassan Khatoonabadi, Ahmad Abdellatif, Emad Shihab + + + A Reinforcement Learning Environment for Automatic Code Optimization in the MLIR Compiler + https://arxiv.org/abs/2409.11068 + arXiv:2409.11068v2 Announce Type: replace +Abstract: Code optimization is a crucial task that aims to enhance code performance. However, this process is often tedious and complex, highlighting the necessity for automatic code optimization techniques. Reinforcement Learning (RL) has emerged as a promising approach for tackling such complex optimization problems. In this project, we introduce MLIR RL, an RL environment for the MLIR compiler, dedicated to facilitating MLIR compiler research and enabling automatic code optimization. We propose a multi-discrete formulation of the action space where the action space is the Cartesian product of simpler action subspaces. We also propose a new method, called level pointers, to reduce the size of the action space related to the loop interchange transformation. This enables more efficient and effective learning of the policy. To demonstrate the effectiveness of MLIR RL, we train an RL agent to optimize MLIR Linalg code, targeting CPU. The code is generated from two domain-specific frameworks: deep-learning models generated from PyTorch, and LQCD (Lattice Quantum Chromodynamics) code generated from an LQCD compiler. The result of this work is a research environment that allows the community to experiment with novel ideas in RL-driven loop-nest optimization. 
+ oai:arXiv.org:2409.11068v2 + cs.LG + cs.DC + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Mohammed Tirichine, Nassim Ameur, Nazim Bendib, Iheb Nassim Aouadj, Bouchama Djad, Rafik Bouloudene, Riyadh Baghdadi + + + OW-Rep: Open World Object Detection with Instance Representation Learning + https://arxiv.org/abs/2409.16073 + arXiv:2409.16073v3 Announce Type: replace +Abstract: Open World Object Detection (OWOD) addresses realistic scenarios where unseen object classes emerge, enabling detectors trained on known classes to detect unknown objects and incrementally incorporate the knowledge they provide. While existing OWOD methods primarily focus on detecting unknown objects, they often overlook the rich semantic relationships between detected objects, which are essential for scene understanding and applications in open-world environments (e.g., open-world tracking and novel class discovery). In this paper, we extend the OWOD framework to jointly detect unknown objects and learn semantically rich instance embeddings, enabling the detector to capture fine-grained semantic relationships between instances. To this end, we propose two modules that leverage the rich and generalizable knowledge of Vision Foundation Models (VFMs) and can be integrated into open-world object detectors. First, the Unknown Box Refine Module uses instance masks from the Segment Anything Model to accurately localize unknown objects. The Embedding Transfer Module then distills instance-wise semantic similarities from VFM features to the detector's embeddings via a relaxed contrastive loss, enabling the detector to learn a semantically meaningful and generalizable instance feature. Extensive experiments show that our method significantly improves both unknown object detection and instance embedding quality, while also enhancing performance in downstream tasks such as open-world tracking. 
+ oai:arXiv.org:2409.16073v3 + cs.CV + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Sunoh Lee, Minsik Jeon, Jihong Min, Junwon Seo + + + Equivariant Polynomial Functional Networks + https://arxiv.org/abs/2410.04213 + arXiv:2410.04213v2 Announce Type: replace +Abstract: Neural Functional Networks (NFNs) have gained increasing interest due to their wide range of applications, including extracting information from implicit representations of data, editing network weights, and evaluating policies. A key design principle of NFNs is their adherence to the permutation and scaling symmetries inherent in the connectionist structure of the input neural networks. Recent NFNs have been proposed with permutation and scaling equivariance based on either graph-based message-passing mechanisms or parameter-sharing mechanisms. However, graph-based equivariant NFNs suffer from high memory consumption and long running times. On the other hand, parameter-sharing-based NFNs built upon equivariant linear layers exhibit lower memory consumption and faster running time, yet their expressivity is limited due to the large size of the symmetric group of the input neural networks. The challenge of designing a permutation and scaling equivariant NFN that maintains low memory consumption and running time while preserving expressivity remains unresolved. In this paper, we propose a novel solution with the development of MAGEP-NFN (Monomial mAtrix Group Equivariant Polynomial NFN). Our approach follows the parameter-sharing mechanism but differs from previous works by constructing a nonlinear equivariant layer represented as a polynomial in the input weights. This polynomial formulation enables us to incorporate additional relationships between weights from different input hidden layers, enhancing the model's expressivity while keeping memory consumption and running time low, thereby addressing the aforementioned challenge. 
We provide empirical evidence demonstrating that MAGEP-NFN achieves competitive performance and efficiency compared to existing baselines. + oai:arXiv.org:2410.04213v2 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Thieu N. Vo, Viet-Hoang Tran, Tho Tran Huu, An Nguyen The, Thanh Tran, Minh-Khoi Nguyen-Nhat, Duy-Tung Pham, Tan Minh Nguyen + + + GUIDEd Agents: Enhancing Navigation Policies through Task-Specific Uncertainty Abstraction in Localization-Limited Environments + https://arxiv.org/abs/2410.15178 + arXiv:2410.15178v4 Announce Type: replace +Abstract: Autonomous vehicles performing navigation tasks in complex environments face significant challenges due to uncertainty in state estimation. In many scenarios, such as stealth operations or resource-constrained settings, accessing high-precision localization comes at a significant cost, forcing robots to rely primarily on less precise state estimates. Our key observation is that different tasks require varying levels of precision in different regions: a robot navigating a crowded space might need precise localization near obstacles but can operate effectively with less precision elsewhere. In this paper, we present a planning method for integrating task-specific uncertainty requirements directly into navigation policies. We introduce Task-Specific Uncertainty Maps (TSUMs), which abstract the acceptable levels of state estimation uncertainty across different regions. TSUMs align task requirements and environmental features using a shared representation space, generated via a domain-adapted encoder. Using TSUMs, we propose Generalized Uncertainty Integration for Decision-Making and Execution (GUIDE), a policy conditioning framework that incorporates these uncertainty requirements into robot decision-making. 
We find that TSUMs provide an effective way to abstract task-specific uncertainty requirements, and conditioning policies on TSUMs enables the robot to reason about the context-dependent value of certainty and adapt its behavior accordingly. We show how integrating GUIDE into reinforcement learning frameworks allows the agent to learn navigation policies that effectively balance task completion and uncertainty management without explicit reward engineering. We evaluate GUIDE on various real-world robotic navigation tasks and find that it demonstrates significant improvement in task completion rates compared to baseline methods that do not explicitly consider task-specific uncertainty. + oai:arXiv.org:2410.15178v4 + cs.RO + cs.AI + cs.LG + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Gokul Puthumanaillam, Paulo Padrao, Jose Fuentes, Leonardo Bobadilla, Melkior Ornik + + + AIDOVECL: AI-generated Dataset of Outpainted Vehicles for Eye-level Classification and Localization + https://arxiv.org/abs/2410.24116 + arXiv:2410.24116v2 Announce Type: replace +Abstract: Image labeling is a critical bottleneck in the development of computer vision technologies, often constraining the potential of machine learning models due to the time-intensive nature of manual annotations. This work introduces a novel approach that leverages outpainting to mitigate the problem of annotated data scarcity by generating artificial contexts and annotations, significantly reducing manual labeling efforts. We apply this technique to a particularly acute challenge in autonomous driving, urban planning, and environmental monitoring: the lack of diverse, eye-level vehicle images in desired classes. Our dataset comprises AI-generated vehicle images obtained by detecting and cropping vehicles from manually selected seed images, which are then outpainted onto larger canvases to simulate varied real-world conditions. 
The outpainted images include detailed annotations, providing high-quality ground truth data. Advanced outpainting techniques and image quality assessments ensure visual fidelity and contextual relevance. Ablation results show that incorporating AIDOVECL improves overall detection performance by up to 10%, and delivers gains of up to 40% in settings with greater diversity of context, object scale, and placement, with underrepresented classes achieving up to 50% higher true positives. AIDOVECL enhances vehicle detection by augmenting real training data and supporting evaluation across diverse scenarios. By demonstrating outpainting as an automatic annotation paradigm, it offers a practical and versatile solution for building fine-grained datasets with reduced labeling effort across multiple machine learning domains. The code and links to datasets used in this study are available for further research and replication at https://github.com/amir-kazemi/aidovecl . + oai:arXiv.org:2410.24116v2 + cs.CV + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Amir Kazemi, Qurat ul ain Fatima, Volodymyr Kindratenko, Christopher Tessum + + + Efficient Deep Learning Infrastructures for Embedded Computing Systems: A Comprehensive Survey and Future Envision + https://arxiv.org/abs/2411.01431 + arXiv:2411.01431v2 Announce Type: replace +Abstract: Deep neural networks (DNNs) have recently achieved impressive success across a wide range of real-world vision and language processing tasks, spanning from image classification to many other downstream vision tasks, such as object detection, tracking, and segmentation. However, previous well-established DNNs, despite being able to maintain superior accuracy, have also been evolving to be deeper and wider and thus inevitably necessitate prohibitive computational resources for both training and inference. 
This trend further enlarges the computational gap between computation-intensive DNNs and resource-constrained embedded computing systems, making it challenging to deploy powerful DNNs upon real-world embedded computing systems towards ubiquitous embedded intelligence. To alleviate the above computational gap and enable ubiquitous embedded intelligence, we, in this survey, focus on discussing recent efficient deep learning infrastructures for embedded computing systems, spanning from training to inference, from manual to automated, from convolutional neural networks to transformers, from transformers to vision transformers, from vision models to large language models, from software to hardware, and from algorithms to applications. Specifically, we discuss recent efficient deep learning infrastructures for embedded computing systems from the lens of (1) efficient manual network design for embedded computing systems, (2) efficient automated network design for embedded computing systems, (3) efficient network compression for embedded computing systems, (4) efficient on-device learning for embedded computing systems, (5) efficient large language models for embedded computing systems, (6) efficient deep learning software and hardware for embedded computing systems, and (7) efficient intelligent applications for embedded computing systems. + oai:arXiv.org:2411.01431v2 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Xiangzhong Luo, Di Liu, Hao Kong, Shuo Huai, Hui Chen, Guochu Xiong, Weichen Liu + + + Imagining and building wise machines: The centrality of AI metacognition + https://arxiv.org/abs/2411.02478 + arXiv:2411.02478v3 Announce Type: replace +Abstract: Although AI has become increasingly smart, its wisdom has not kept pace. In this article, we examine what is known about human wisdom and sketch a vision of its AI counterpart. 
We analyze human wisdom as a set of strategies for solving intractable problems-those outside the scope of analytic techniques-including both object-level strategies like heuristics [for managing problems] and metacognitive strategies like intellectual humility, perspective-taking, or context-adaptability [for managing object-level strategies]. We argue that AI systems particularly struggle with metacognition; improved metacognition would lead to AI more robust to novel environments, explainable to users, cooperative with others, and safer in risking fewer misaligned goals with human users. We discuss how wise AI might be benchmarked, trained, and implemented. + oai:arXiv.org:2411.02478v3 + cs.AI + cs.CY + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Samuel G. B. Johnson, Amir-Hossein Karimi, Yoshua Bengio, Nick Chater, Tobias Gerstenberg, Kate Larson, Sydney Levine, Melanie Mitchell, Iyad Rahwan, Bernhard Sch\"olkopf, Igor Grossmann + + + HEIGHT: Heterogeneous Interaction Graph Transformer for Robot Navigation in Crowded and Constrained Environments + https://arxiv.org/abs/2411.12150 + arXiv:2411.12150v3 Announce Type: replace +Abstract: We study the problem of robot navigation in dense and interactive crowds with static constraints such as corridors and furniture. Previous methods fail to consider all types of spatial and temporal interactions among agents and obstacles, leading to unsafe and inefficient robot paths. In this article, we leverage a graph-based representation of crowded and constrained scenarios and propose a structured framework to learn robot navigation policies with deep reinforcement learning. We first split the representations of different inputs and propose a heterogeneous spatio-temporal graph to model distinct interactions among humans, robots, and obstacles. 
Based on the heterogeneous spatio-temporal graph, we propose HEIGHT, a novel navigation policy network architecture with different components to capture heterogeneous interactions through space and time. HEIGHT utilizes attention mechanisms to prioritize important interactions and a recurrent network to track changes in the dynamic scene over time, encouraging the robot to avoid collisions adaptively. Through extensive simulation and real-world experiments, we demonstrate that HEIGHT outperforms state-of-the-art baselines in terms of success, navigation time, and generalization to domain shifts in challenging navigation scenarios. More information is available at https://sites.google.com/view/crowdnav-height/home. + oai:arXiv.org:2411.12150v3 + cs.RO + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Shuijing Liu, Haochen Xia, Fatemeh Cheraghi Pouria, Kaiwen Hong, Neeloy Chakraborty, Zichao Hu, Joydeep Biswas, Katherine Driggs-Campbell + + + Functional normalizing flow for statistical inverse problems of partial differential equations + https://arxiv.org/abs/2411.13277 + arXiv:2411.13277v2 Announce Type: replace +Abstract: Inverse problems of partial differential equations are ubiquitous across various scientific disciplines and can be formulated as statistical inference problems using Bayes' theorem. To address large-scale problems, it is crucial to develop discretization-invariant algorithms, which can be achieved by formulating methods directly in infinite-dimensional space. We propose a novel normalizing flow based infinite-dimensional variational inference method (NF-iVI) to extract posterior information efficiently. Specifically, by introducing well-defined transformations, the prior in Bayes' formula is transformed into post-transformed measures that approximate the posterior. 
To circumvent the issue of mutually singular probability measures, we formulate general conditions for the employed transformations. As guiding principles, these conditions yield four concrete transformations. Additionally, to minimize computational demands, we have developed a conditional normalizing flow variant, termed CNF-iVI, which is adept at processing measurement data of varying dimensions while requiring minimal computational resources. We apply the proposed algorithms to three typical inverse problems governed by the simple smooth equation, the steady-state Darcy flow equation, and electric impedance tomography. Numerical results confirm our theoretical findings, illustrate the efficiency of our algorithms, and verify the discretization-invariant property. 
+ oai:arXiv.org:2411.13277v2
+ math.NA
+ cs.NA
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yang Zhao, Haoyu Lu, Junxiong Jia, Tao Zhou
+ 
+ 
+ Conditional Distribution Learning for Graph Classification
+ https://arxiv.org/abs/2411.15206
+ arXiv:2411.15206v4 Announce Type: replace
+Abstract: Leveraging the diversity and quantity of data provided by various graph-structured data augmentations while preserving intrinsic semantic information is challenging. Additionally, successive layers in a graph neural network (GNN) tend to produce more similar node embeddings, while graph contrastive learning aims to increase the dissimilarity between negative pairs of node embeddings. This inevitably results in a conflict between the message-passing mechanism (MPM) of GNNs and the contrastive learning (CL) of negative pairs via intraviews. In this paper, we propose a conditional distribution learning (CDL) method that learns graph representations from graph-structured data for semisupervised graph classification. 
Specifically, we present an end-to-end graph representation learning model to align the conditional distributions of weakly and strongly augmented features over the original features. This alignment enables the CDL model to effectively preserve intrinsic semantic information when both weak and strong augmentations are applied to graph-structured data. To avoid the conflict between the MPM and the CL of negative pairs, positive pairs of node representations are retained for measuring the similarity between the original features and the corresponding weakly augmented features. Extensive experiments with several benchmark graph datasets demonstrate the effectiveness of the proposed CDL method. + oai:arXiv.org:2411.15206v4 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Jie Chen, Hua Mao, Chuanbin Liu, Zhu Wang, Xi Peng + + + Online Episodic Memory Visual Query Localization with Egocentric Streaming Object Memory + https://arxiv.org/abs/2411.16934 + arXiv:2411.16934v3 Announce Type: replace +Abstract: Episodic memory retrieval enables wearable cameras to recall objects or events previously observed in video. However, existing formulations assume an "offline" setting with full video access at query time, limiting their applicability in real-world scenarios with power and storage-constrained wearable devices. Towards more application-ready episodic memory systems, we introduce Online Visual Query 2D (OVQ2D), a task where models process video streams online, observing each frame only once, and retrieve object localizations using a compact memory instead of full video history. We address OVQ2D with ESOM (Egocentric Streaming Object Memory), a novel framework integrating an object discovery module, an object tracking module, and a memory module that find, track, and store spatio-temporal object information for efficient querying. 
Experiments on Ego4D demonstrate ESOM's superiority over other online approaches, though OVQ2D remains challenging, with top performance at only ~4% success. ESOM's accuracy increases markedly with perfect object tracking (31.91%), discovery (40.55%), or both (81.92%), underscoring the need for applied research on these components. 
+ oai:arXiv.org:2411.16934v3
+ cs.CV
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Zaira Manigrasso, Matteo Dunnhofer, Antonino Furnari, Moritz Nottebaum, Antonio Finocchiaro, Davide Marana, Rosario Forte, Giovanni Maria Farinella, Christian Micheloni
+ 
+ 
+ Implementation of neural network operators with applications to remote sensing data
+ https://arxiv.org/abs/2412.00375
+ arXiv:2412.00375v2 Announce Type: replace
+Abstract: In this paper, we provide two algorithms based on the theory of multidimensional neural network (NN) operators activated by hyperbolic tangent sigmoidal functions. Theoretical results are recalled to justify the performance of the implemented algorithms. Specifically, the first algorithm models multidimensional signals (such as digital images), while the second one addresses the problem of rescaling and enhancement of the considered data. We discuss several applications of the NN-based algorithms for modeling and rescaling/enhancing remote sensing data (represented as images), with numerical experiments conducted on a selection of remote sensing (RS) images from the (open access) RETINA dataset. A comparison with classical interpolation methods, such as bilinear and bicubic interpolation, shows that the proposed algorithms outperform the others, particularly in terms of the Structural Similarity Index (SSIM). 
+ oai:arXiv.org:2412.00375v2 + math.NA + cs.CV + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Danilo Costarelli, Michele Piconi + + + Linear minimum-variance approximants for noisy data + https://arxiv.org/abs/2412.01287 + arXiv:2412.01287v4 Announce Type: replace +Abstract: Inspired by recent developments in subdivision schemes founded on the Weighted Least Squares technique, we construct linear approximants for noisy data in which the weighting strategy minimizes the output variance, thereby establishing a direct correspondence with the Generalized Least Squares and the Minimum-Variance Formulas methodologies. By introducing annihilation-operators for polynomial spaces, we derive usable formulas that are optimal for general correlated non-uniform noise. We show that earlier subdivision rules are optimal for uncorrelated non-uniform noise and, finally, we present numerical evidence to confirm that, in the correlated case, the proposed approximants are better than those currently used in the subdivision literature. + oai:arXiv.org:2412.01287v4 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.1016/j.apnum.2025.12.002 + Sergio L\'opez Ure\~na, Dionisio F. Y\'a\~nez + + + Learnable Sparsity for Vision Generative Models + https://arxiv.org/abs/2412.02852 + arXiv:2412.02852v2 Announce Type: replace +Abstract: Diffusion models have achieved impressive advancements in various vision tasks. However, these gains often rely on increasing model size, which escalates computational complexity and memory demands, complicating deployment, raising inference costs, and causing environmental impact. While some studies have explored pruning techniques to improve the memory efficiency of diffusion models, most existing methods require extensive retraining to retain the model performance. 
Retraining a modern large diffusion model is extremely costly and resource-intensive, which limits the practicality of these methods. In this work, we achieve low-cost diffusion pruning without retraining by proposing a model-agnostic structural pruning framework for diffusion models that learns a differentiable mask to sparsify the model. To ensure effective pruning that preserves the quality of the final denoised latent, we design a novel end-to-end pruning objective that spans the entire diffusion process. As end-to-end pruning is memory-intensive, we further propose time step gradient checkpointing, a technique that significantly reduces memory usage during optimization, enabling end-to-end pruning within a limited memory budget. Results on state-of-the-art U-Net diffusion models SDXL and diffusion transformers (FLUX) demonstrate that our method can effectively prune up to 20% parameters with minimal perceptible performance degradation, and notably, without the need for model retraining. We also showcase that our method can still prune on top of time step distilled diffusion models. + oai:arXiv.org:2412.02852v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Yang Zhang, Er Jin, Wenzhong Liang, Yanfei Dong, Ashkan Khakzar, Philip Torr, Johannes Stegmaier, Kenji Kawaguchi + + + Verb Mirage: Unveiling and Assessing Verb Concept Hallucinations in Multimodal Large Language Models + https://arxiv.org/abs/2412.04939 + arXiv:2412.04939v3 Announce Type: replace +Abstract: Multimodal Large Language Models (MLLMs) have garnered significant attention recently and demonstrate outstanding capabilities in various tasks such as OCR, VQA, captioning, $\textit{etc}$. However, hallucination remains a persistent issue. While numerous methods have been proposed to mitigate hallucinations, achieving notable improvements, these methods primarily focus on mitigating hallucinations about $\textbf{object/noun-related}$ concepts. 
Verb concepts, crucial for understanding human actions, have been largely overlooked. In this paper, to the best of our knowledge, we are the $\textbf{first}$ to investigate the $\textbf{verb hallucination}$ phenomenon of MLLMs from various perspectives. Our findings reveal that most state-of-the-art MLLMs suffer from severe verb hallucination. To assess the effectiveness of existing mitigation methods for object concept hallucination on verb hallucination, we evaluated these methods and found that they do not effectively address verb hallucination. To address this issue, we propose a novel rich verb knowledge-based tuning method to mitigate verb hallucination. The experiment results demonstrate that our method significantly reduces hallucinations related to verbs. + oai:arXiv.org:2412.04939v3 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Zehao Wang, Xinpeng Liu, Yudonglin Zhang, Xiaoqian Wu, Zhou Fang, Yifan Fang, Junfu Pu, Cewu Lu, Yong-Lu Li + + + Text to Blind Motion + https://arxiv.org/abs/2412.05277 + arXiv:2412.05277v2 Announce Type: replace +Abstract: People who are blind perceive the world differently than those who are sighted, which can result in distinct motion characteristics. For instance, when crossing at an intersection, blind individuals may have different patterns of movement, such as veering more from a straight path or using touch-based exploration around curbs and obstacles. These behaviors may appear less predictable to motion models embedded in technologies such as autonomous vehicles. Yet, the ability of 3D motion models to capture such behavior has not been previously studied, as existing datasets for 3D human motion currently lack diversity and are biased toward people who are sighted. In this work, we introduce BlindWays, the first multimodal motion benchmark for pedestrians who are blind. 
We collect 3D motion data using wearable sensors with 11 blind participants navigating eight different routes in a real-world urban setting. Additionally, we provide rich textual descriptions that capture the distinctive movement characteristics of blind pedestrians and their interactions with both the navigation aid (e.g., a white cane or a guide dog) and the environment. We benchmark state-of-the-art 3D human prediction models, finding poor performance with off-the-shelf and pre-training-based methods for our novel task. To contribute toward safer and more reliable systems that can seamlessly reason over diverse human movements in their environments, our text-and-motion benchmark is available at https://blindways.github.io. + oai:arXiv.org:2412.05277v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Hee Jae Kim, Kathakoli Sengupta, Masaki Kuribayashi, Hernisa Kacorri, Eshed Ohn-Bar + + + PowerMamba: A Deep State Space Model and Comprehensive Benchmark for Time Series Prediction in Electric Power Systems + https://arxiv.org/abs/2412.06112 + arXiv:2412.06112v2 Announce Type: replace +Abstract: The electricity sector is undergoing substantial transformations due to the rising electrification of demand, enhanced integration of renewable energy resources, and the emergence of new technologies. These changes are rendering the electric grid more volatile and unpredictable, making it difficult to maintain reliable operations. In order to address these issues, advanced time series prediction models are needed for closing the gap between the forecasted and actual grid outcomes. In this paper, we introduce a multivariate time series prediction model that combines traditional state space models with deep learning methods to simultaneously capture and predict the underlying dynamics of multiple time series. 
Additionally, we design a time series processing module that incorporates high-resolution external forecasts into sequence-to-sequence prediction models, achieving this with negligible increases in size and no loss of accuracy. We also release an extended dataset spanning five years of load, electricity price, ancillary service price, and renewable generation. To complement this dataset, we provide an open-access toolbox that includes our proposed model, the dataset itself, and several state-of-the-art prediction models, thereby creating a unified framework for benchmarking advanced machine learning approaches. Our findings indicate that the proposed model outperforms existing models across various prediction tasks, improving state-of-the-art prediction error by an average of 7% and decreasing model parameters by 43%. + oai:arXiv.org:2412.06112v2 + cs.LG + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ali Menati, Fatemeh Doudi, Dileep Kalathil, Le Xie + + + Scalable Dendritic Modeling Advances Expressive and Robust Deep Spiking Neural Networks + https://arxiv.org/abs/2412.06355 + arXiv:2412.06355v2 Announce Type: replace +Abstract: Dendritic computation endows biological neurons with rich nonlinear integration and high representational capacity, yet it is largely missing in existing deep spiking neural networks (SNNs). Although detailed multi-compartment models can capture dendritic computations, their high computational cost and limited flexibility make them impractical for deep learning. To combine the advantages of dendritic computation and deep network architectures for a powerful, flexible and efficient computational model, we propose the dendritic spiking neuron (DendSN). DendSN explicitly models dendritic morphology and nonlinear integration in a streamlined design, leading to substantially higher expressivity than point neurons and wide compatibility with modern deep SNN architectures. 
Leveraging the efficient formulation and high-performance Triton kernels, dendritic SNNs (DendSNNs) can be efficiently trained and easily scaled to deeper networks. Experiments show that DendSNNs consistently outperform conventional SNNs on classification tasks. Furthermore, inspired by dendritic modulation and synaptic clustering, we introduce the dendritic branch gating (DBG) algorithm for task-incremental learning, which effectively reduces inter-task interference. Additional evaluations show that DendSNNs exhibit superior robustness to noise and adversarial attacks, along with improved generalization in few-shot learning scenarios. Our work firstly demonstrates the possibility of training deep SNNs with multiple nonlinear dendritic branches, and comprehensively analyzes the impact of dendrite computation on representation learning across various machine learning settings, thereby offering a fresh perspective on advancing SNN design. + oai:arXiv.org:2412.06355v2 + cs.NE + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yifan Huang, Wei Fang, Zhengyu Ma, Guoqi Li, Yonghong Tian + + + Phaedrus: Predicting Dynamic Application Behavior with Lightweight Generative Models and LLMs + https://arxiv.org/abs/2412.06994 + arXiv:2412.06994v4 Announce Type: replace +Abstract: Application profiling is an indispensable technique for many software development tasks, such as code and memory layout optimizations, where optimization decisions are tailored to specific program profiles. Unfortunately, modern application codebases exhibit highly variant behavior across different inputs, creating challenges for conventional profiling approaches that rely on a single representative execution instance. 
In this paper, we propose \textbf{Phaedrus}, a new \textit{compiler-assisted deep learning framework} designed to predict dynamic program behaviors across varied execution instances, specifically focusing on dynamic function call prediction. Such predicted call sequences are then used for producing optimized code pertinent to a given input.
+ Traditional profile-guided optimization methods struggle with the input-dependent variability of modern applications, where profiling on different inputs yields divergent application behaviors. To address this, Phaedrus proposes two new approaches: \textit{Application Behavior Synthesis}, a profile-less approach where Large Language Models (LLMs) directly infer dynamic functions based on source code \& static compiler analysis, bypassing the need for traditional profiling, and \textit{Application Profile Generalization}, which uses generative models trained on compressed and augmented \textit{Whole Program Path} (WPP) based function profiles to predict application behavior under unseen inputs. Our experiments show that \textit{Phaedrus} can achieve up to $10^7X$ reduction in WPP function profile sizes, can predict the most frequently executed functions that cover up to 85-99\% of the execution time, along with an average of 13.19\% (up to 65\%) reduction in application binary size, and an average of 6.08\% (up to 20\%) performance improvement over traditional profile-guided optimization, without any execution.
+ oai:arXiv.org:2412.06994v4
+ cs.SE
+ cs.PL
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ OOPSLA 2026
+ Bodhisatwa Chatterjee, Neeraj Jadhav, Santosh Pande
+ 
+ 
+ FaceShield: Defending Facial Image against Deepfake Threats
+ https://arxiv.org/abs/2412.09921
+ arXiv:2412.09921v3 Announce Type: replace
+Abstract: The rising use of deepfakes in criminal activities presents a significant issue, inciting widespread controversy. 
While numerous studies have tackled this problem, most primarily focus on deepfake detection. These reactive solutions are insufficient as a fundamental approach for crimes where authenticity is disregarded. Existing proactive defenses also have limitations, as they are effective only for deepfake models based on specific Generative Adversarial Networks (GANs), making them less applicable in light of recent advancements in diffusion-based models. In this paper, we propose a proactive defense method named FaceShield, which introduces novel defense strategies targeting deepfakes generated by Diffusion Models (DMs) and facilitates defenses on various existing GAN-based deepfake models through facial feature extractor manipulations. Our approach consists of three main components: (i) manipulating the attention mechanism of DMs to exclude protected facial features during the denoising process, (ii) targeting prominent facial feature extraction models to enhance the robustness of our adversarial perturbation, and (iii) employing Gaussian blur and low-pass filtering techniques to improve imperceptibility while enhancing robustness against JPEG compression. Experimental results on the CelebA-HQ and VGGFace2-HQ datasets demonstrate that our method achieves state-of-the-art performance against the latest deepfake models based on DMs, while also exhibiting transferability to GANs and showcasing greater imperceptibility of noise along with enhanced robustness. 
Code is available here: https://github.com/kuai-lab/iccv25_faceshield
+ oai:arXiv.org:2412.09921v3
+ cs.CV
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jaehwan Jeong, Sumin In, Sieun Kim, Hannie Shin, Jongheon Jeong, Sang Ho Yoon, Jaewook Chung, Sangpil Kim
+ 
+ 
+ ThreatPilot: Attack-Driven Threat Intelligence Extraction
+ https://arxiv.org/abs/2412.10872
+ arXiv:2412.10872v2 Announce Type: replace
+Abstract: Efficient defense against dynamically evolving advanced persistent threats (APTs) requires structured threat intelligence feeds, such as the techniques used. However, existing threat-intelligence extraction techniques predominantly focus on individual pieces of intelligence-such as isolated techniques or atomic indicators-resulting in fragmented and incomplete representations of real-world attacks. This granularity inherently limits both the depth and the contextual richness of the extracted intelligence, making it difficult for downstream security systems to reason about multi-step behaviors or to generate actionable detections. To address this gap, we propose to extract layered Attack-driven Threat Intelligence (ATIs), a comprehensive representation that captures the full spectrum of adversarial behavior. We propose ThreatPilot, which can accurately identify ATIs including complete tactics, techniques, multi-step procedures, and their procedure variants, and integrate the threat intelligence into software security application scenarios: detection rules (i.e., Sigma) and attack commands can be generated automatically with higher accuracy. Experimental results on 1,769 newly crawled reports and 16 manually calibrated reports show ThreatPilot's effectiveness in accurately identifying techniques, outperforming the state-of-the-art approach AttacKG by 1.34X in F1 score. 
Further studies on 64,185 application logs collected via a honeypot show that our Sigma rule generator significantly outperforms several existing rule sets in detecting real-world malicious events. Industry partners confirm that our Sigma rule generator can significantly help save time and costs in the rule generation process. In addition, our generated commands achieve an execution rate of 99.3%, compared to 50.3% without the extracted intelligence.
+ oai:arXiv.org:2412.10872v2
+ cs.CR
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ming Xu, Hongtai Wang, Jiahao Liu, Xinfeng Li, Zhengmin Yu, Weili Han, Hoon Wei Lim, Jin Song Dong, Jiaheng Zhang
+ 
+ 
+ From Easy to Hard: Progressive Active Learning Framework for Infrared Small Target Detection with Single Point Supervision
+ https://arxiv.org/abs/2412.11154
+ arXiv:2412.11154v3 Announce Type: replace
+Abstract: Recently, single-frame infrared small target (SIRST) detection with single point supervision has drawn widespread attention. However, the latest label evolution with single point supervision (LESPS) framework suffers from instability, excessive label evolution, and difficulty in exerting embedded network performance. Inspired by organisms gradually adapting to their environment and continuously accumulating knowledge, we construct an innovative Progressive Active Learning (PAL) framework, which drives existing SIRST detection networks to progressively and actively recognize and learn harder samples. Specifically, to avoid the early low-performance model leading to the wrong selection of hard samples, we propose a model pre-start concept, which focuses on automatically selecting a portion of easy samples and helping the model acquire basic task-specific learning capabilities. Meanwhile, we propose a refined dual-update strategy, which can promote reasonable learning of harder samples and continuous refinement of pseudo-labels. 
In addition, to alleviate the risk of excessive label evolution, a decay factor is reasonably introduced, which helps to achieve a dynamic balance between the expansion and contraction of target annotations. Extensive experiments show that existing SIRST detection networks equipped with our PAL framework have achieved state-of-the-art (SOTA) results on multiple public datasets. Furthermore, our PAL framework can build an efficient and stable bridge between full supervision and single point supervision tasks. Our code is available at https://github.com/YuChuang1205/PAL + oai:arXiv.org:2412.11154v3 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Chuang Yu, Jinmiao Zhao, Yunpeng Liu, Sicheng Zhao, Yimian Dai, Xiangyu Yue + + + What Human-Horse Interactions may Teach us About Effective Human-AI Interactions + https://arxiv.org/abs/2412.13405 + arXiv:2412.13405v2 Announce Type: replace +Abstract: This article explores human-horse interactions as a metaphor for understanding and designing effective human-AI partnerships. Drawing on the long history of human collaboration with horses, we propose that AI, like horses, should complement rather than replace human capabilities. We move beyond traditional benchmarks such as the Turing test, which emphasize AI's ability to mimic human intelligence, and instead advocate for a symbiotic relationship where distinct intelligences enhance each other. We analyze key elements of human-horse relationships: trust, communication, and mutual adaptability, to highlight essential principles for human-AI collaboration. Trust is critical in both partnerships, built through predictability and shared understanding, while communication and feedback loops foster mutual adaptability. We further discuss the importance of taming and habituation in shaping these interactions, likening it to how humans train AI to perform reliably and ethically in real-world settings. 
The article also addresses the asymmetry of responsibility, where humans ultimately bear the greater burden of oversight and ethical judgment. Finally, we emphasize that long-term commitment and continuous learning are vital in both human-horse and human-AI relationships, as ongoing interaction refines the partnership and increases mutual adaptability. By drawing on these insights from human-horse interactions, we offer a vision for designing AI systems that are trustworthy, adaptable, and capable of fostering symbiotic human-AI partnerships. + oai:arXiv.org:2412.13405v2 + cs.HC + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.1145/377804 + interactions 33(1), 28-33 (2025) + Mohammad Hossein Jarrahi, Stanley Ahalt + + + GSRender: Deduplicated Occupancy Prediction via Weakly Supervised 3D Gaussian Splatting + https://arxiv.org/abs/2412.14579 + arXiv:2412.14579v2 Announce Type: replace +Abstract: Weakly-supervised 3D occupancy perception is crucial for vision-based autonomous driving in outdoor environments. Previous methods based on NeRF often face a challenge in balancing the number of samples used. Too many samples can decrease efficiency, while too few can compromise accuracy, leading to variations in the mean Intersection over Union (mIoU) by 5-10 points. Furthermore, even with surrounding-view image inputs, only a single image is rendered from each viewpoint at any given moment. This limitation leads to duplicated predictions, which significantly impacts the practicality of the approach. However, this issue has largely been overlooked in existing research. To address this, we propose GSRender, which uses 3D Gaussian Splatting for weakly-supervised occupancy estimation, simplifying the sampling process. Additionally, we introduce the Ray Compensation module, which reduces duplicated predictions by compensating for features from adjacent frames. 
Finally, we redesign the dynamic loss to remove the influence of dynamic objects from adjacent frames. Extensive experiments show that our approach achieves SOTA results in RayIoU (+6.0), while also narrowing the gap with 3D-supervised methods. This work lays a solid foundation for weakly-supervised occupancy perception. The code is available at https://github.com/Jasper-sudo-Sun/GSRender. + oai:arXiv.org:2412.14579v2 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Qianpu Sun, Changyong Shu, Sifan Zhou, Runxi Cheng, Yongxian Wei, Zichen Yu, Dawei Yang, Sirui Han, Yuan Chun + + + A Plug-and-Play Method for Improving Imperceptibility and Capacity in Practical Generative Text Steganography + https://arxiv.org/abs/2412.19652 + arXiv:2412.19652v4 Announce Type: replace +Abstract: Linguistic steganography embeds secret information into seemingly innocuous text to safeguard privacy under surveillance. Generative linguistic steganography leverages the probability distributions of language models (LMs) and applies steganographic algorithms during generation, and has attracted increasing attention with the rise of large language models (LLMs). To strengthen security, prior work has focused on distribution-preserving steganographic algorithms that minimize the gap between stego sampling and random sampling from the model. However, their reliance on model distributions, which often deviate from real-world cover texts, leads to limited imperceptibility when facing steganalysis detectors in practical settings. Moreover, LLM distributions tend to be more deterministic, reducing entropy and thus lowering embedding capacity. In this paper, we propose a plug-and-play method that reconstructs the distributions of language models used for generative linguistic steganography. 
The proposed method, FreStega, dynamically adjusts token probabilities from the language model at each step of autoregressive stego text generation, leveraging both sequential and spatial dimensions. Extensive experiments on four LLMs, three benchmark datasets, and four distribution-preserving steganographic baselines demonstrate that, by reforming the distribution, FreStega improves the imperceptibility of stego text in realistic scenarios and increases steganographic capacity by 15.41\%, without degrading the quality of the generated stegotext. + oai:arXiv.org:2412.19652v4 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Kaiyi Pang + + + ADePT: Adaptive Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning + https://arxiv.org/abs/2501.03291 + arXiv:2501.03291v3 Announce Type: replace +Abstract: Prompt Tuning (PT) enables the adaptation of Pre-trained Large Language Models (PLMs) to downstream tasks by optimizing a small number of soft virtual tokens, which are prepended to the input token embeddings. Recently, Decomposed Prompt Tuning (DePT) has demonstrated superior adaptation capabilities by decomposing the soft prompt into a shorter soft prompt and a pair of low-rank matrices. The product of the pair of low-rank matrices is added to the input token embeddings to offset them. Additionally, DePT achieves faster inference compared to PT due to the shorter soft prompt. However, in this paper, we find that the position-based token embedding offsets of DePT restrict its ability to generalize across diverse model inputs, and that the shared embedding offsets across many token embeddings result in sub-optimization. To tackle these issues, we introduce Adaptive Decomposed Prompt Tuning (ADePT), which is composed of a short soft prompt and a shallow token-shared feed-forward neural network. 
ADePT utilizes the token-shared feed-forward neural network to learn the embedding offsets for each token, enabling adaptive embedding offsets that vary according to the model input and better optimization of token embedding offsets. This enables ADePT to achieve superior adaptation performance without requiring more inference time or additional trainable parameters compared to vanilla PT and its variants. In comprehensive experiments across 23 natural language processing tasks and 4 typical PLMs of different scales, ADePT consistently surpasses the other leading parameter-efficient fine-tuning methods, and even outperforms full fine-tuning in certain scenarios. We also provide a theoretical analysis of ADePT. Code is available at https://github.com/HungerPWAY/ADePT. + oai:arXiv.org:2501.03291v3 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Pengwei Tang, Xiaolin Hu, Yong Liu + + + Binary Event-Driven Spiking Transformer + https://arxiv.org/abs/2501.05904 + arXiv:2501.05904v2 Announce Type: replace +Abstract: Transformer-based Spiking Neural Networks (SNNs) introduce a novel event-driven self-attention paradigm that combines the high performance of Transformers with the energy efficiency of SNNs. However, the larger model size and increased computational demands of the Transformer structure limit their practicality in resource-constrained scenarios. In this paper, we integrate binarization techniques into Transformer-based SNNs and propose the Binary Event-Driven Spiking Transformer, i.e. BESTformer. The proposed BESTformer can significantly reduce storage and computational demands by representing weights and attention maps with a mere 1 bit. However, BESTformer suffers from a severe performance drop relative to its full-precision counterpart due to the limited representation capability of binarization. 
To address this issue, we propose a Coupled Information Enhancement (CIE) method, which consists of a reversible framework and information enhancement distillation. By maximizing the mutual information between the binary model and its full-precision counterpart, the CIE method effectively mitigates the performance degradation of the BESTformer. Extensive experiments on static and neuromorphic datasets demonstrate that our method achieves superior performance to other binary SNNs, showcasing its potential as a compact yet high-performance model for resource-limited edge devices. The repository of this paper is available at https://github.com/CaoHLin/BESTFormer. + oai:arXiv.org:2501.05904v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Honglin Cao, Zijian Zhou, Wenjie Wei, Ammar Belatreche, Yu Liang, Dehao Zhang, Malu Zhang, Yang Yang, Haizhou Li + + + Informativity Conditions for Multiple Signals: Properties, Experimental Design, and Applications (extended version) + https://arxiv.org/abs/2501.10030 + arXiv:2501.10030v4 Announce Type: replace +Abstract: Recent studies highlight the importance of the persistently exciting condition on a single signal sequence for model identification and data-driven control methodologies. However, maintaining prolonged excitation in control signals introduces significant challenges, as continuous excitation can reduce the lifetime of mechanical devices. In this paper, we introduce three informativity conditions for various types of multi-signal data, each augmented by weight factors. We explore the interrelations between these conditions and their rank properties in linear time-invariant systems. Furthermore, we introduce open-loop experimental design methods tailored to each of the three conditions, which can synthesize the required excitation conditions either offline or online, even in the presence of limited information within each signal segment. 
We demonstrate the effectiveness of these informativity conditions in least-squares identification. Additionally, all three conditions can extend Willems' fundamental lemma and are utilized to assess the properties of the system. Illustrative examples confirm that these conditions yield satisfactory outcomes in both least-squares identification and the construction of data-driven controllers. + oai:arXiv.org:2501.10030v4 + eess.SY + cs.IT + cs.SY + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Ao Cao, Fuyong Wang + + + Temporal Causal Reasoning with (Non-Recursive) Structural Equation Models + https://arxiv.org/abs/2501.10190 + arXiv:2501.10190v2 Announce Type: replace +Abstract: Structural Equation Models (SEM) are the standard approach to representing causal dependencies between variables in causal models. In this paper we propose a new interpretation of SEMs when reasoning about Actual Causality, in which SEMs are viewed as mechanisms transforming the dynamics of exogenous variables into the dynamics of endogenous variables. This allows us to combine counterfactual causal reasoning with existing temporal logic formalisms, and to introduce a temporal logic, CPLTL, for causal reasoning about such structures. We show that the standard restriction to so-called \textit{recursive} models (with no cycles in the dependency graph) is not necessary in our approach, allowing us to reason about mutually dependent processes and feedback loops. Finally, we introduce new notions of model equivalence for temporal causal models, and show that CPLTL has an efficient model-checking procedure. 
+ oai:arXiv.org:2501.10190v2 + cs.AI + cs.LO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.1609/aaai.v39i14.33639 + Maksim Gladyshev, Natasha Alechina, Mehdi Dastani, Dragan Doder, Brian Logan + + + Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning + https://arxiv.org/abs/2501.12046 + arXiv:2501.12046v3 Announce Type: replace +Abstract: Training machine learning models on decentralized private data via federated learning (FL) poses two key challenges: communication efficiency and privacy protection. In this work, we address these challenges within the trusted aggregator model by introducing a novel approach called the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM), achieving both objectives simultaneously. In particular, CEPAM leverages the rejection-sampled universal quantizer (RSUQ), a construction of a randomized vector quantizer whose resulting distortion is equivalent to a prescribed noise, such as Gaussian or Laplace noise, enabling joint differential privacy and compression. Our CEPAM provides the additional benefit of privacy adaptability, allowing clients and the server to customize privacy protection based on required accuracy and protection. We theoretically analyze the privacy guarantee of CEPAM and investigate the trade-offs between user privacy and accuracy of CEPAM through experimental evaluations. Moreover, we assess CEPAM's utility performance using the MNIST dataset, demonstrating that CEPAM surpasses baseline models in terms of learning accuracy. 
+ oai:arXiv.org:2501.12046v3 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Chih Wei Ling, Chun Hei Michael Shiu, Youqi Wu, Jiande Sun, Cheuk Ting Li, Linqi Song, Weitao Xu + + + AnyNav: Visual Neuro-Symbolic Friction Learning for Off-road Navigation + https://arxiv.org/abs/2501.12654 + arXiv:2501.12654v2 Announce Type: replace +Abstract: Off-road navigation is critical for a wide range of field robotics applications from planetary exploration to disaster response. However, it remains a longstanding challenge due to unstructured environments and the inherently complex terrain-vehicle interactions. Traditional physics-based methods struggle to accurately capture the nonlinear dynamics underlying these interactions, while purely data-driven approaches often overfit to specific motion patterns, vehicle geometries, or platforms, limiting their generalization in diverse, real-world scenarios. To address these limitations, we introduce AnyNav, a vision-based friction estimation and navigation framework grounded in neuro-symbolic principles. Our approach integrates neural networks for visual perception with symbolic physical models for reasoning about terrain-vehicle dynamics. To enable self-supervised learning in real-world settings, we adopt the imperative learning paradigm, employing bilevel optimization to train the friction network through physics-based optimization. This explicit incorporation of physical reasoning substantially enhances generalization across terrains, vehicle types, and operational conditions. Leveraging the predicted friction coefficients, we further develop a physics-informed navigation system capable of generating physically feasible, time-efficient paths together with corresponding speed profiles. 
We demonstrate that AnyNav seamlessly transfers from simulation to real-world robotic platforms, exhibiting strong robustness across different four-wheeled vehicles and diverse off-road environments. + oai:arXiv.org:2501.12654v2 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Taimeng Fu, Zitong Zhan, Zhipeng Zhao, Yi Du, Shaoshu Su, Xiao Lin, Ehsan Tarkesh Esfahani, Karthik Dantu, Souma Chowdhury, Chen Wang + + + Accelerated Preference Elicitation with LLM-Based Proxies + https://arxiv.org/abs/2501.14625 + arXiv:2501.14625v2 Announce Type: replace +Abstract: Bidders in combinatorial auctions face significant challenges when describing their preferences to an auctioneer. Classical work on preference elicitation focuses on query-based techniques inspired by proper learning--often via proxies that interface between bidders and an auction mechanism--to incrementally learn bidder preferences as needed to compute efficient allocations. Although such elicitation mechanisms enjoy theoretical query efficiency, the amount of communication required may still be too cognitively taxing in practice. + We propose a family of efficient LLM-based proxy designs for eliciting preferences from bidders using natural language. Our proposed mechanism combines LLM pipelines and DNF-proper-learning techniques to quickly approximate preferences when communication is limited. To validate our approach, we create a testing sandbox for elicitation mechanisms that communicate in natural language. In our experiments, our most promising LLM proxy design reaches approximately efficient outcomes with five times fewer queries than classical proper-learning-based elicitation mechanisms. 
+ oai:arXiv.org:2501.14625v2 + cs.GT + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + David Huang, Francisco Marmolejo-Coss\'io, Edwin Lock, David Parkes + + + Hardware-Aware DNN Compression for Homogeneous Edge Devices + https://arxiv.org/abs/2501.15240 + arXiv:2501.15240v2 Announce Type: replace +Abstract: Deploying deep neural networks (DNNs) across homogeneous edge devices (devices with the same SKU labeled by the manufacturer) often assumes identical performance among them. However, once a device model is widely deployed, the performance of individual devices diverges after a period of use. This is caused by differences in user configurations, environmental conditions, manufacturing variances, battery degradation, etc. Existing DNN compression methods have not taken this scenario into consideration and cannot guarantee good compression results across all homogeneous edge devices. To address this, we propose Homogeneous-Device Aware Pruning (HDAP), a hardware-aware DNN compression framework explicitly designed for homogeneous edge devices, aiming to achieve optimal average performance of the compressed model across all devices. To deal with the difficulty of time-consuming hardware-aware evaluations for thousands or millions of homogeneous edge devices, HDAP partitions all the devices into several device clusters, which dramatically reduces the number of devices to evaluate, and uses surrogate-based evaluation in place of real-time hardware evaluation. Experiments on ResNet50 and MobileNetV1 with the ImageNet dataset show that HDAP consistently achieves lower average inference latency compared with state-of-the-art methods, with substantial speedup gains (e.g., 2.86 $\times$ speedup at 1.0G FLOPs for ResNet50) on the homogeneous device clusters. HDAP offers an effective solution for scalable, high-performance DNN deployment on homogeneous edge devices. 
+ oai:arXiv.org:2501.15240v2 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1109/DOCS67533.2025.11200827 + Proc. International Conference on Data-driven Optimization of Complex Systems (DOCS), 2025 + Kunlong Zhang, Guiying Li, Ning Lu, Peng Yang, Ke Tang + + + Network Oblivious Transfer via Noisy Channels: Limits and Capacities + https://arxiv.org/abs/2501.17021 + arXiv:2501.17021v4 Announce Type: replace +Abstract: In this paper, we study the information-theoretic limits of oblivious transfer via noisy channels. We also investigate oblivious transfer over a noisy multiple-access channel with two non-colluding senders and a single receiver. The channel is modeled through correlations among the parties, who may be honest-but-curious or, in the case of the receiver, potentially malicious. We first revisit the information-theoretic limits of two-party oblivious transfer and then extend these results to the multiple-access setting. For honest-but-curious participants, we introduce a multiparty protocol that reduces a general multiple access channel to a suitable correlation model. In scenarios with a malicious receiver, we characterize an achievable oblivious transfer rate region. + oai:arXiv.org:2501.17021v4 + cs.IT + cs.CR + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Hadi Aghaee, Bahareh Akhbari, Christian Deppe + + + Efficient Redundancy Reduction for Open-Vocabulary Semantic Segmentation + https://arxiv.org/abs/2501.17642 + arXiv:2501.17642v2 Announce Type: replace +Abstract: Open-vocabulary semantic segmentation (OVSS) is an open-world task that aims to assign each pixel within an image to a specific class defined by arbitrary text descriptions. While large-scale vision-language models have shown remarkable open-vocabulary capabilities, their image-level pretraining limits effectiveness on pixel-wise dense prediction tasks like OVSS. 
Recent cost-based methods narrow this granularity gap by constructing pixel-text cost maps and refining them via cost aggregation mechanisms. Despite achieving promising performance, these approaches suffer from high computational costs and long inference latency. In this paper, we identify two major sources of redundancy in the cost-based OVSS framework: redundant information introduced during cost maps construction and inefficient sequence modeling in cost aggregation. To address these issues, we propose ERR-Seg, an efficient architecture that incorporates Redundancy-Reduced Hierarchical Cost maps (RRHC) and Redundancy-Reduced Cost Aggregation (RRCA). Specifically, RRHC reduces redundant class channels by customizing a compact class vocabulary for each image and integrates hierarchical cost maps to enrich semantic representation. RRCA alleviates computational burden by performing both spatial-level and class-level sequence reduction before aggregation. Overall, ERR-Seg results in a lightweight structure for OVSS, characterized by substantial memory and computational savings without compromising accuracy. Compared to previous state-of-the-art methods on the ADE20K-847 benchmark, ERR-Seg improves performance by $5.6\%$ while achieving a 3.1$\times$ speedup. + oai:arXiv.org:2501.17642v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.1016/j.neucom.2025.132229 + Lin Chen, Qi Yang, Kun Ding, Zhihao Li, Gang Shen, Fei Li, Qiyuan Cao, Shiming Xiang + + + Error Slice Discovery via Manifold Compactness + https://arxiv.org/abs/2501.19032 + arXiv:2501.19032v2 Announce Type: replace +Abstract: Despite the great performance of deep learning models in many areas, they still make mistakes and underperform on certain subsets of data, i.e. error slices. 
Given a trained model, it is important to identify its semantically coherent error slices that are easy to interpret, which is referred to as the error slice discovery problem. However, there is no proper metric of slice coherence without relying on extra information like predefined slice labels. Current evaluation of slice coherence requires access to predefined slices formulated by metadata like attributes or subclasses. Its validity heavily relies on the quality and abundance of metadata, where some possible patterns could be ignored. Besides, current algorithms cannot directly incorporate the constraint of coherence into their optimization objective due to the absence of an explicit coherence metric, which could potentially hinder their effectiveness. In this paper, we propose manifold compactness, a coherence metric that requires no extra information, by incorporating the data geometry property into its design, and experiments on typical datasets empirically validate the rationality of the metric. Then we develop Manifold Compactness based error Slice Discovery (MCSD), a novel algorithm that directly treats risk and coherence as the optimization objective, and can be flexibly applied to models for various tasks. Extensive experiments on the benchmark and case studies on other typical datasets demonstrate the superiority of MCSD. + oai:arXiv.org:2501.19032v2 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Han Yu, Hao Zou, Jiashuo Liu, Renzhe Xu, Yue He, Xingxuan Zhang, Peng Cui + + + JustAct+: A Framework for Auditable Multi-Agent Systems Regulated by Inter-Organisational Policies + https://arxiv.org/abs/2502.00138 + arXiv:2502.00138v2 Announce Type: replace +Abstract: In open multi-agent systems that cross organisational boundaries, agent actions must be regulated by complex policies. 
Consider medical data processing systems, which must observe generic laws (e.g., EU data protection regulations) and also specific participants' resource conditions (e.g., Bob consents to sharing his X-Rays with EU hospitals). Presently, we address the implementation of these systems as distributed software. Solutions to key sub-problems are available: existing policy languages capture the necessary normative concepts and formalise the computational representation and reasoning about policies, and existing distributed algorithms and protocols coordinate agents' changing actions and policies. But which policies and protocols are useful in application? With the JustAct framework, we characterise a class of multi-agent systems where actors justify their actions with sufficient policy information collected from dynamic policy statements and agreements. We prove key properties of these systems, e.g., any decision that an action is permitted now cannot be refuted later, regardless of any added statements or updated agreements. We study a particular instance of the framework by specifying (in Rocq) and implementing (in Rust) a particular policy language and runtime system for mediating agent communications. We demonstrate and assess JustAct via a case study of this implementation: we reproduce the usage scenarios of Brane, an existing policy-regulated, inter-domain, medical data processing system. + oai:arXiv.org:2502.00138v2 + cs.LO + cs.DC + cs.MA + cs.PL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Christopher A. Esterhuyse, Tim M\"uller, L. Thomas van Binsbergen + + + Optimal sensor placement under model uncertainty in the weak-constraint 4D-Var framework + https://arxiv.org/abs/2502.00150 + arXiv:2502.00150v3 Announce Type: replace +Abstract: In data assimilation, the model may be subject to uncertainties and errors. 
The weak-constraint data assimilation framework enables incorporating model uncertainty in the dynamics of the governing equations. We propose a new framework for near-optimal sensor placement in the weak-constraint setting. This is achieved by first deriving a design criterion based on the expected information gain, which involves the Kullback-Leibler divergence from the forecast prior to the posterior distribution. An explicit formula for this criterion is provided, assuming that the model error and background are independent and Gaussian and the dynamics are linear. We discuss algorithmic approaches to efficiently evaluate this criterion through randomized approximations. To provide further insight and flexibility in computations, we also provide alternative expressions for the criteria. We provide an algorithm to find near-optimal experimental designs using column subset selection, including a randomized algorithm that avoids computing the adjoint of the forward operator. Through numerical experiments in one and two spatial dimensions, we show the effectiveness of our proposed methods. + oai:arXiv.org:2502.00150v3 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Alen Alexanderian, Hugo D\'iaz, Vishwas Rao, Arvind K. Saibaba + + + Keep It Light! Simplifying Image Clustering Via Text-Free Adapters + https://arxiv.org/abs/2502.04226 + arXiv:2502.04226v2 Announce Type: replace +Abstract: In the era of pre-trained models, effective classification can often be achieved using simple linear probing or lightweight readout layers. In contrast, many competitive clustering pipelines have a multi-modal design, leveraging large language models (LLMs) or other text encoders, and text-image pairs, which are often unavailable in real-world downstream applications. Additionally, such frameworks are generally complicated to train and require substantial computational resources, making widespread adoption challenging. 
In this work, we show that in deep clustering, competitive performance with more complex state-of-the-art methods can be achieved using a text-free and highly simplified training pipeline. In particular, our approach, Simple Clustering via Pre-trained models (SCP), trains only a small cluster head while leveraging pre-trained vision model feature representations and positive data pairs. Experiments on benchmark datasets, including CIFAR-10, CIFAR-20, CIFAR-100, STL-10, ImageNet-10, and ImageNet-Dogs, demonstrate that SCP achieves highly competitive performance. Furthermore, we provide a theoretical result explaining why, at least under ideal conditions, additional text-based embeddings may not be necessary to achieve strong clustering performance in vision. + oai:arXiv.org:2502.04226v2 + cs.CV + cs.LG + cs.NE + stat.CO + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Yicen Li, Haitz S\'aez de Oc\'ariz Borde, Anastasis Kratsios, Paul D. McNicholas + + + VLDBench Evaluating Multimodal Disinformation with Regulatory Alignment + https://arxiv.org/abs/2502.11361 + arXiv:2502.11361v5 Announce Type: replace +Abstract: Detecting disinformation that blends manipulated text and images has become increasingly challenging, as AI tools make synthetic content easy to generate and disseminate. While most existing AI safety benchmarks focus on single modality misinformation (i.e., false content shared without intent to deceive), intentional multimodal disinformation, such as propaganda or conspiracy theories that imitate credible news, remains largely unaddressed. We introduce the Vision-Language Disinformation Detection Benchmark (VLDBench), the first large-scale resource supporting both unimodal (text-only) and multimodal (text + image) disinformation detection. VLDBench comprises approximately 62,000 labeled text-image pairs across 13 categories, curated from 58 news outlets. 
Using a semi-automated pipeline followed by expert review, 22 domain experts invested over 500 hours to produce high-quality annotations with substantial inter-annotator agreement. Evaluations of state-of-the-art Large Language Models (LLMs) and Vision-Language Models (VLMs) on VLDBench show that incorporating visual cues improves detection accuracy by 5 to 35 percentage points over text-only models. VLDBench provides data and code for evaluation, fine-tuning, and robustness testing to support disinformation analysis. Developed in alignment with AI governance frameworks (e.g., the MIT AI Risk Repository), VLDBench offers a principled foundation for advancing trustworthy disinformation detection in multimodal media. + Project: https://vectorinstitute.github.io/VLDBench/ Dataset: https://huggingface.co/datasets/vector-institute/VLDBench Code: https://github.com/VectorInstitute/VLDBench + oai:arXiv.org:2502.11361v5 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Shaina Raza, Ashmal Vayani, Aditya Jain, Aravind Narayanan, Vahid Reza Khazaie, Syed Raza Bashir, Elham Dolatabadi, Gias Uddin, Christos Emmanouilidis, Rizwan Qureshi, Mubarak Shah + + + SNN-Driven Multimodal Human Action Recognition via Sparse Spatial-Temporal Data Fusion + https://arxiv.org/abs/2502.13385 + arXiv:2502.13385v2 Announce Type: replace +Abstract: Multimodal human action recognition based on RGB and skeleton data fusion, while effective, is constrained by significant limitations such as high computational complexity, excessive memory consumption, and substantial energy demands, particularly when implemented with Artificial Neural Networks (ANN). These limitations restrict its applicability in resource-constrained scenarios. To address these challenges, we propose a novel Spiking Neural Network (SNN)-driven framework for multimodal human action recognition, utilizing event camera and skeleton data. 
Our framework is centered on two key innovations: (1) a novel multimodal SNN architecture that employs distinct backbone networks for each modality-an SNN-based Mamba for event camera data and a Spiking Graph Convolutional Network (SGN) for skeleton data-combined with a spiking semantic extraction module to capture deep semantic representations; and (2) a pioneering SNN-based discretized information bottleneck mechanism for modality fusion, which effectively balances the preservation of modality-specific semantics with efficient information compression. To validate our approach, we propose a novel method for constructing a multimodal dataset that integrates event camera and skeleton data, enabling comprehensive evaluation. Extensive experiments demonstrate that our method achieves superior performance in both recognition accuracy and energy efficiency, offering a promising solution for practical applications. + oai:arXiv.org:2502.13385v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Naichuan Zheng, Hailun Xia, Zeyu Liang, Yuchen Du + + + A Reynolds-semi-robust method with hybrid velocity and pressure for the unsteady incompressible Navier--Stokes equations + https://arxiv.org/abs/2502.15293 + arXiv:2502.15293v2 Announce Type: replace +Abstract: In this paper we propose and analyze a new Finite Element method for the solution of the two- and three-dimensional incompressible Navier--Stokes equations based on a hybrid discretization of both the velocity and pressure variables. The proposed method is pressure-robust, i.e., irrotational forcing terms do not affect the approximation of the velocity, and Reynolds-quasi-robust, with error estimates that, for smooth enough exact solutions, do not depend on the inverse of the viscosity. We carry out an in-depth convergence analysis highlighting pre-asymptotic convergence rates and validate the theoretical findings with a complete set of numerical experiments. 
+ oai:arXiv.org:2502.15293v2 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + 10.1137/25M1736104 + SIAM J. Numer. Anal., 2025, Vol. 63, No. 6, pp. 2317--2342 + Louren\c{c}o Beir\~ao da Veiga, Daniele A. Di Pietro, J\'er\^ome Droniou, Kirubell B. Haile, Thomas J. Radley + + + Towards Enhanced Image Generation Via Multi-modal Chain of Thought in Unified Generative Models + https://arxiv.org/abs/2503.01298 + arXiv:2503.01298v3 Announce Type: replace +Abstract: Unified generative models have shown remarkable performance in text and image generation. For image synthesis tasks, they adopt straightforward text-to-image (T2I) generation. However, direct T2I generation limits the models in handling complex compositional instructions, which frequently occur in real-world scenarios. Although this issue is vital, existing works mainly focus on improving the basic image generation capability of the models. While such improvements help to some extent, they still fail to adequately resolve the problem. Inspired by Chain of Thought (CoT) solving complex problems step by step, this work aims to introduce CoT into unified generative models to address the challenges of complex image generation that direct T2I generation cannot effectively solve, thereby endowing models with enhanced image generation ability. To achieve this, we first propose Functionality-oriented eXperts (FoXperts), an expert-parallel architecture in our model FoX, which assigns experts by function. FoXperts disentangles potential conflicts in mainstream modality-oriented designs and provides a solid foundation for CoT. When introducing CoT, the first question is how to design it for complex image generation. To this end, we emulate a human-like artistic workflow--planning, acting, reflection, and correction--and propose the Multimodal Chain of Thought (MCoT) approach, as the data involves both text and image. 
To address the subsequent challenge of designing an effective MCoT training paradigm, we develop a multi-task joint training scheme that equips the model with all capabilities required for each MCoT step in a disentangled manner. This paradigm avoids the difficulty of collecting consistent multi-step data tuples. Extensive experiments show that FoX consistently outperforms existing unified models on various T2I benchmarks, delivering notable improvements in complex image generation. + oai:arXiv.org:2503.01298v3 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Yi Wang, Mushui Liu, Wanggui He, Hanyang Yuan, Longxiang Zhang, Ziwei Huang, Guanghao Zhang, Wenkai Fang, Haoze Jiang, Shengxuming Zhang, Dong She, Jinlong Liu, Weilong Dai, Mingli Song, Hao Jiang, Jie Song + + + GRADEO: Towards Human-Like Evaluation for Text-to-Video Generation via Multi-Step Reasoning + https://arxiv.org/abs/2503.02341 + arXiv:2503.02341v2 Announce Type: replace +Abstract: Recent great advances in video generation models have demonstrated their potential to produce high-quality videos, bringing challenges to effective evaluation. Unlike human evaluation, existing automated evaluation metrics lack highlevel semantic understanding and reasoning capabilities for video, thus making them infeasible and unexplainable. To fill this gap, we curate GRADEO-Instruct, a multi-dimensional T2V evaluation instruction tuning dataset, including 3.3k videos from over 10 existing video generation models and multi-step reasoning assessments converted by 16k human annotations. We then introduce GRADEO, one of the first specifically designed video evaluation models, which grades AI-generated videos for explainable scores and assessments through multi-step reasoning. Experiments show that our method aligns better with human evaluations than existing methods. 
Furthermore, our benchmarking reveals that current video generation models struggle to produce content that aligns with human reasoning and complex real-world scenarios. + oai:arXiv.org:2503.02341v2 + cs.CV + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhun Mou, Bin Xia, Zhengchao Huang, Wenming Yang, Jiaya Jia + + + Semantic Retrieval Augmented Contrastive Learning for Sequential Recommendation + https://arxiv.org/abs/2503.04162 + arXiv:2503.04162v5 Announce Type: replace +Abstract: Contrastive learning has shown effectiveness in improving sequential recommendation models. However, existing methods still face challenges in generating high-quality contrastive pairs: they either rely on random perturbations that corrupt user preference patterns or depend on sparse collaborative data that generates unreliable contrastive pairs. Furthermore, existing approaches typically require predefined selection rules that impose strong assumptions, limiting the model's ability to autonomously learn optimal contrastive pairs. To address these limitations, we propose a novel approach named Semantic Retrieval Augmented Contrastive Learning (SRA-CL). SRA-CL leverages the semantic understanding and reasoning capabilities of LLMs to generate expressive embeddings that capture both user preferences and item characteristics. These semantic embeddings enable the construction of candidate pools for inter-user and intra-user contrastive learning through semantic-based retrieval. To further enhance the quality of the contrastive samples, we introduce a learnable sample synthesizer that optimizes the contrastive sample generation process during model training. SRA-CL adopts a plug-and-play design, enabling seamless integration with existing sequential recommendation architectures. Extensive experiments on four public datasets demonstrate the effectiveness and model-agnostic nature of our approach. 
+ oai:arXiv.org:2503.04162v5 + cs.IR + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ziqiang Cui, Yunpeng Weng, Xing Tang, Xiaokun Zhang, Shiwei Li, Peiyang Liu, Bowei He, Dugang Liu, Weihong Luo, Xiuqiang He, Chen Ma + + + Dynamic Pricing for On-Demand DNN Inference in the Edge-AI Market + https://arxiv.org/abs/2503.04521 + arXiv:2503.04521v2 Announce Type: replace +Abstract: The convergence of edge computing and Artificial Intelligence (AI) gives rise to Edge-AI, which enables the deployment of real-time AI applications at the network edge. A key research challenge in Edge-AI is edge inference acceleration, which aims to realize low-latency high-accuracy Deep Neural Network (DNN) inference by offloading partitioned inference tasks from end devices to edge servers. However, existing research has yet to adopt a practical Edge-AI market perspective, which would explore the personalized inference needs of AI users (e.g., inference accuracy, latency, and task complexity), the revenue incentives for AI service providers that offer edge inference services, and multi-stakeholder governance within a market-oriented context. To bridge this gap, we propose an Auction-based Edge Inference Pricing Mechanism (AERIA) for revenue maximization to tackle the multi-dimensional optimization problem of DNN model partition, edge inference pricing, and resource allocation. We develop a multi-exit device-edge synergistic inference scheme for on-demand DNN inference acceleration, and theoretically analyze the auction dynamics amongst the AI service providers, AI users and edge infrastructure provider. Owing to the strategic mechanism design via randomized consensus estimate and cost sharing techniques, the Edge-AI market attains several desirable properties. 
These include competitiveness in revenue maximization, incentive compatibility, and envy-freeness, which are crucial to maintain the effectiveness, truthfulness, and fairness in auction outcomes. Extensive simulations based on four representative DNN inference workloads demonstrate that AERIA significantly outperforms several state-of-the-art approaches in revenue maximization. This validates the efficacy of AERIA for on-demand DNN inference in the Edge-AI market. + oai:arXiv.org:2503.04521v2 + cs.AI + cs.CE + cs.DC + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.1109/TMC.2025.3647531 + Songyuan Li, Jia Hu, Geyong Min, Haojun Huang, Jiwei Huang + + + TRACE: Your Diffusion Model is Secretly an Instance Edge Detector + https://arxiv.org/abs/2503.07982 + arXiv:2503.07982v3 Announce Type: replace +Abstract: High-quality instance and panoptic segmentation has traditionally relied on dense instance-level annotations such as masks, boxes, or points, which are costly, inconsistent, and difficult to scale. Unsupervised and weakly-supervised approaches reduce this burden but remain constrained by semantic backbone constraints and human bias, often producing merged or fragmented outputs. We present TRACE (TRAnsforming diffusion Cues to instance Edges), showing that text-to-image diffusion models secretly function as instance edge annotators. TRACE identifies the Instance Emergence Point (IEP) where object boundaries first appear in self-attention maps, extracts boundaries through Attention Boundary Divergence (ABDiv), and distills them into a lightweight one-step edge decoder. This design removes the need for per-image diffusion inversion, achieving 81x faster inference while producing sharper and more connected boundaries. 
On the COCO benchmark, TRACE improves unsupervised instance segmentation by +5.1 AP, and in tag-supervised panoptic segmentation it outperforms point-supervised baselines by +1.7 PQ without using any instance-level labels. These results reveal that diffusion models encode hidden instance boundary priors, and that decoding these signals offers a practical and scalable alternative to costly manual annotation. Code is available at https://github.com/shjo-april/DiffEGG. + oai:arXiv.org:2503.07982v3 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Sanghyun Jo, Ziseok Lee, Wooyeol Lee, Jonghyun Choi, Jaesik Park, Kyungsu Kim + + + Geometric Data-Driven Multi-Jet Locomotion Inspired by Salps + https://arxiv.org/abs/2503.08817 + arXiv:2503.08817v2 Announce Type: replace +Abstract: Salps are marine animals consisting of chains of jellyfish-like units. Their efficient underwater locomotion by coordinating multi-jet propulsion has aroused great interest in robotics. This paper presents a geometric mechanics framework for salp-inspired robots. We study a new type of geometric mechanics models inspired by salps, in which control inputs are not restricted to the shape axes, analyze nonlinear controllability, and develop motion planning and feedback control methods. We introduce the "LandSalp" robot, which serves as a physical realization of the reduced-order, drag-dominated model of salp swimming, enabling controlled evaluation of locomotion strategies without many confounding factors of underwater experiments. We extend least-squares- and inverse-dynamics-based system identification to learn the Riemannian metric of the drag-dominated model from experimental data using Lie group differentiation. With about three minutes of data, we identify an accurate model of LandSalp. 
Simulation and hardware experiments demonstrate omnidirectional locomotion, shape regulation, and bending maneuvers, providing a principled pathway toward more capable salp-inspired robots. + oai:arXiv.org:2503.08817v2 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yanhao Yang, Nina L. Hecht, Yousef Salaman-Maclara, Nathan Justus, Zachary A. Thomas, Farhan Rozaidi, Ross L. Hatton + + + Networked Communication for Decentralised Cooperative Agents in Mean-Field Control + https://arxiv.org/abs/2503.09400 + arXiv:2503.09400v2 Announce Type: replace +Abstract: The mean-field framework has been used to find approximate solutions to problems involving very large populations of symmetric, anonymous agents, which may be intractable by other methods. The cooperative mean-field control (MFC) problem has received less attention than the non-cooperative mean-field game (MFG), despite the former potentially being more useful as a tool for engineering large-scale collective behaviours. Decentralised communication algorithms have recently been introduced to MFGs, giving benefits to learning speed and robustness. Inspired by this, we introduce networked communication to MFC - where populations arguably have broader incentive to communicate - and in particular to the setting where decentralised agents learn online from a single, non-episodic run of the empirical system. We adapt recent MFG algorithms to this new setting, as well as contributing a novel sub-routine allowing networked agents to estimate the global average reward from their local neighbourhood. Previous theoretical analysis of decentralised communication in MFGs does not extend trivially to MFC. We therefore contribute new theory proving that in MFC the networked communication scheme allows agents to increase social welfare faster than under both of the two typical alternative architectures, namely independent and centralised learning. 
We provide experiments that support this new result across different classes of cooperative game, and also give numerous ablation studies and additional experiments concerning numbers of communication round and robustness to communication failures. + oai:arXiv.org:2503.09400v2 + cs.MA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Patrick Benjamin, Alessandro Abate + + + LVAgent: Long Video Understanding by Multi-Round Dynamical Collaboration of MLLM Agents + https://arxiv.org/abs/2503.10200 + arXiv:2503.10200v5 Announce Type: replace +Abstract: Existing MLLMs encounter significant challenges in modeling the temporal context within long videos. Currently, mainstream Agent-based methods use external tools to assist a single MLLM in answering long video questions. Despite such tool-based support, a solitary MLLM still offers only a partial understanding of long videos, resulting in limited performance. In order to better address long video tasks, we introduce LVAgent, the first framework enabling multi-round dynamic collaboration of MLLM agents in long video understanding. Our method consists of four key steps: 1) Selection: We pre-select appropriate agents from the model library to form optimal agent teams based on different tasks. 2) Perception: We design an effective retrieval scheme for long videos to improve the coverage of critical temporal segments while maintaining computational efficiency. 3) Action: Agents answer long video questions and exchange reasons. 4) Reflection: We evaluate each agent's performance in each round of discussion and optimize the agent team for dynamic collaboration. The agents iteratively refine their answers by multi-round dynamical collaboration of MLLM agents. LVAgent is the first agent system method that outperforms all closed-source models (like GPT-4o) and open-source models (like InternVL-2.5 and Qwen2-VL) in the long video understanding tasks. 
Our LVAgent achieves an accuracy of 80\% on four mainstream long video understanding tasks. Notably, LVAgent improves accuracy by 13.3\% on LongVideoBench. Code is available at https://github.com/64327069/LVAgent. + oai:arXiv.org:2503.10200v5 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Boyu Chen, Zhengrong Yue, Siran Chen, Zikang Wang, Yang Liu, Peng Li, Yali Wang + + + Bleeding Pathways: Vanishing Discriminability in LLM Hidden States Fuels Jailbreak Attacks + https://arxiv.org/abs/2503.11185 + arXiv:2503.11185v2 Announce Type: replace +Abstract: LLMs remain vulnerable to jailbreak attacks that exploit adversarial prompts to circumvent safety measures. Current safety fine-tuning approaches face two critical limitations. First, they often fail to strike a balance between security and utility, where stronger safety measures tend to over-reject harmless user requests. Second, they frequently miss malicious intent concealed within seemingly benign tasks, leaving models exposed to exploitation. Our work identifies a fundamental cause of these issues: during response generation, an LLM's capacity to differentiate harmful from safe outputs deteriorates. Experimental evidence confirms this, revealing that the separability between hidden states for safe and harmful responses diminishes as generation progresses. This weakening discrimination forces models to make compliance judgments earlier in the generation process, restricting their ability to recognize developing harmful intent and contributing to both aforementioned failures. To mitigate this vulnerability, we introduce DEEPALIGN - an inherent defense framework that enhances the safety of LLMs. By applying contrastive hidden-state steering at the midpoint of response generation, DEEPALIGN amplifies the separation between harmful and benign hidden states, enabling continuous intrinsic toxicity detection and intervention throughout the generation process. 
Across diverse LLMs spanning varying architectures and scales, it reduced attack success rates of nine distinct jailbreak attacks to near-zero or minimal. Crucially, it preserved model capability while reducing over-refusal. Models equipped with DEEPALIGN exhibited up to 3.5% lower error rates in rejecting challenging benign queries and maintained standard task performance with less than 1% decline. This marks a substantial advance in the safety-utility Pareto frontier. + oai:arXiv.org:2503.11185v2 + cs.CR + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Yingjie Zhang, Tong Liu, Zhe Zhao, Guozhu Meng, Kai Chen + + + Mapping Research Data at the University of Bologna + https://arxiv.org/abs/2503.13464 + arXiv:2503.13464v2 Announce Type: replace +Abstract: Research data management (RDM) strategies and practices play a pivotal role in adhering to the paradigms of reproducibility and transparency by enabling research sharing in accordance with the principles of Open Science. Discipline-specificity is an essential factor when understanding RDM declinations, to tailor a comprehensive support service and to enhance interdisciplinarity. In this paper we present the results of a mapping carried out to gather information on research data generated and managed within the University of Bologna (UniBO). The aim is to identify differences and commonalities between disciplines and potential challenges for institutional support. We analyzed the data management plans (DMPs) of European competitive projects drafted by researchers affiliated with UniBO. We applied descriptive statistics to the collected variables to answer three main questions: How diverse is the range of data managed within the University of Bologna? Which trends of problems and patterns in terms of data management can influence/improve data stewardship service? Is there an interdisciplinary approach to data production within the University? 
The research work evidenced many points of contact between different disciplines in terms of data produced, formats used and modest predilection for data reuse. Hot topics such as data confidentiality, needed either on privacy or intellectual property rights (IPR) premises, and long-term preservation pose challenges to all researchers. These results show an increasing attention to RDM while highlighting the relevance of training and support to face the relatively new challenges posed by this approach. + oai:arXiv.org:2503.13464v2 + cs.DL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.5334/dsj-2025-038 + Data Science Journal, 24(), p. 38 (2025) + C. Basalti, G. Caldoni, S. Coppini, B. Gualandi, M. Marino, F. Masini, S. Peroni + + + Comp-Attn: Present-and-Align Attention for Compositional Video Generation + https://arxiv.org/abs/2503.14428 + arXiv:2503.14428v2 Announce Type: replace +Abstract: In the domain of text-to-video (T2V) generation, reliably synthesizing compositional content involving multiple subjects with intricate relations is still underexplored. The main challenges are twofold: 1) Subject presence, where not all subjects can be presented in the video; 2) Inter-subject relations, where the interaction and spatial relationship between subjects are misaligned. Existing methods adopt techniques, such as inference-time latent optimization or layout control, which fail to address both issues simultaneously. To tackle these problems, we propose Comp-Attn, a composition-aware cross-attention variant that follows a Present-and-Align paradigm: it decouples the two challenges by enforcing subject presence at the condition level and achieving relational alignment at the attention-distribution level. 
Specifically, 1) We introduce Subject-aware Condition Interpolation (SCI) to reinforce subject-specific conditions and ensure each subject's presence; 2) We propose Layout-forcing Attention Modulation (LAM), which dynamically enforces the attention distribution to align with the relational layout of multiple subjects. Comp-Attn can be seamlessly integrated into various T2V baselines in a training-free manner, boosting T2V-CompBench scores by 15.7\% and 11.7\% on Wan2.1-T2V-14B and Wan2.2-T2V-A14B with only a 5\% increase in inference time. Meanwhile, it also achieves strong performance on VBench and T2I-CompBench, demonstrating its scalability in general video generation and compositional text-to-image (T2I) tasks. + oai:arXiv.org:2503.14428v2 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Hongyu Zhang, Yufan Deng, Shenghai Yuan, Yian Zhao, Peng Jin, Xuehan Hou, Chang Liu, Jie Chen + + + HandSCS: Structural Coordinate Space for Animatable Hand Gaussian Splatting + https://arxiv.org/abs/2503.14736 + arXiv:2503.14736v2 Announce Type: replace +Abstract: Creating animatable hand avatars from multi-view images requires modeling complex articulations and maintaining structural consistency across poses in real time. We present HandSCS, a structure-guided 3D Gaussian Splatting framework for high-fidelity hand animation. Unlike existing approaches that condition all Gaussians on the same global pose parameters, which are inadequate for highly articulated hands, HandSCS equips each Gaussian with explicit structural guidance from both intra-pose and inter-pose perspectives. To establish intra-pose structural guidance, we introduce a Structural Coordinate Space (SCS), which bridges the gap between sparse bones and dense Gaussians through hybrid static-dynamic coordinate basis and angular-radial descriptors. 
To improve cross-pose coherence, we further introduce an Inter-pose Consistency Loss that promotes consistent Gaussian attributes under similar articulations. Together, these components achieve high-fidelity results with consistent fine details, even in challenging high-deformation and self-contact regions. Experiments on the InterHand2.6M dataset demonstrate that HandSCS achieves state-of-the-art performance in hand avatar animation, confirming the effectiveness of explicit structural modeling. + oai:arXiv.org:2503.14736v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yilan Dong, Wenqing Wang, Qing Wang, Jiahao Yang, Haohe Liu, Xiatuan Zhu, Gregory Slabaugh, Shanxin Yuan + + + Planning and Learning in Average Risk-aware MDPs + https://arxiv.org/abs/2503.17629 + arXiv:2503.17629v3 Announce Type: replace +Abstract: For continuing tasks, average cost Markov decision processes have well-documented value and can be solved using efficient algorithms. However, it explicitly assumes that the agent is risk-neutral. In this work, we extend risk-neutral algorithms to accommodate the more general class of dynamic risk measures. Specifically, we propose a relative value iteration (RVI) algorithm for planning and design two model-free Q-learning algorithms, namely a generic algorithm based on the multi-level Monte Carlo (MLMC) method, and an off-policy algorithm dedicated to utility-based shortfall risk measures. Both the RVI and MLMC-based Q-learning algorithms are proven to converge to optimality. Numerical experiments validate our analysis, confirm empirically the convergence of the off-policy algorithm, and demonstrate that our approach enables the identification of policies that are finely tuned to the intricate risk-awareness of the agent that they serve. 
+ oai:arXiv.org:2503.17629v3 + cs.LG + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Weikai Wang, Erick Delage + + + LogicXGNN: Grounded Logical Rules for Explaining Graph Neural Networks + https://arxiv.org/abs/2503.19476 + arXiv:2503.19476v3 Announce Type: replace +Abstract: Existing rule-based explanations for Graph Neural Networks (GNNs) provide global interpretability but often optimize and assess fidelity in an intermediate, uninterpretable concept space, overlooking grounding quality for end users in the final subgraph explanations. This gap yields explanations that may appear faithful yet be unreliable in practice. To this end, we propose LogicXGNN, a post-hoc framework that constructs logical rules over reliable predicates explicitly designed to capture the GNN's message-passing structure, thereby ensuring effective grounding. We further introduce data-grounded fidelity ($\textit{Fid}_{\mathcal{D}}$), a realistic metric that evaluates explanations in their final-graph form, along with complementary utility metrics such as coverage and validity. Across extensive experiments, LogicXGNN improves $\textit{Fid}_{\mathcal{D}}$ by over 20% on average relative to state-of-the-art methods while being 10-100 $\times$ faster. With strong scalability and utility performance, LogicXGNN produces explanations that are faithful to the model's logic and reliably grounded in observable data. Our code is available at https://github.com/allengeng123/LogicXGNN/. 
+ oai:arXiv.org:2503.19476v3 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Chuqin Geng, Ziyu Zhao, Zhaoyue Wang, Haolin Ye, Yuhe Jiang, Xujie Si + + + Data-Driven Nonconvex Reachability Analysis using Exact Multiplication + https://arxiv.org/abs/2504.02147 + arXiv:2504.02147v2 Announce Type: replace +Abstract: This paper addresses a fundamental challenge in data-driven reachability analysis: accurately representing and propagating non-convex reachable sets. We propose a novel approach using constrained polynomial zonotopes to describe reachable sets for unknown LTI systems. Unlike constrained zonotopes commonly used in existing literature, constrained polynomial zonotopes are closed under multiplication with constrained matrix zonotopes. We leverage this property to develop an exact multiplication method that preserves the non-convex geometry of reachable sets without resorting to approximations. We demonstrate that our approach provides tighter over-approximations of reachable sets for LTI systems compared to conventional methods. + oai:arXiv.org:2504.02147v2 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Zhen Zhang, M. Umar B. Niazi, Michelle S. Chong, Karl H. Johansson, Amr Alanwar + + + Variational Online Mirror Descent for Robust Learning in Schr\"odinger Bridge + https://arxiv.org/abs/2504.02618 + arXiv:2504.02618v4 Announce Type: replace +Abstract: The Schr\"{o}dinger bridge (SB) has evolved into a universal class of probabilistic generative models. In practice, however, estimated learning signals are innately uncertain, and the reliability promised by existing methods is often based on speculative optimal case scenarios. Recent studies regarding the Sinkhorn algorithm through mirror descent (MD) have gained attention, revealing geometric insights into solution acquisition of the SB problems. 
In this paper, we propose a variational online MD (OMD) framework for the SB problems, which provides further stability to SB solvers. We formally prove convergence and a regret bound for the novel OMD formulation of SB acquisition. As a result, we propose a simulation-free SB algorithm called Variational Mirrored Schr\"{o}dinger Bridge (VMSB) by utilizing the Wasserstein-Fisher-Rao geometry of the Gaussian mixture parameterization for Schr\"{o}dinger potentials. Based on the Wasserstein gradient flow theory, the algorithm offers tractable learning dynamics that precisely approximate each OMD step. In experiments, we validate the performance of the proposed VMSB algorithm across an extensive suite of benchmarks. VMSB consistently outperforms contemporary SB solvers on a wide range of SB problems, demonstrating the robustness as well as generality predicted by our OMD theory. + oai:arXiv.org:2504.02618v4 + cs.LG + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Dong-Sig Han, Jaein Kim, Hee Bin Yoo, Byoung-Tak Zhang + + + Survey and Experiments on Mental Disorder Detection via Social Media: From Large Language Models and RAG to Agents + https://arxiv.org/abs/2504.02800 + arXiv:2504.02800v4 Announce Type: replace +Abstract: Mental disorders represent a critical global health challenge, and social media is increasingly viewed as a vital resource for real-time digital phenotyping and intervention. To leverage this data, large language models (LLMs) have been introduced, offering stronger semantic understanding and reasoning than traditional deep learning, thereby enhancing the explainability of detection results. Despite the growing prominence of LLMs in this field, there is a scarcity of scholarly works that systematically synthesize how advanced enhancement techniques, specifically Retrieval-Augmented Generation (RAG) and Agentic systems, can be utilized to address these reliability and reasoning limitations. 
Here, we systematically survey the evolving landscape of LLM-based methods for social media mental disorder analysis, spanning standard pre-trained language models, RAG to mitigate hallucinations and contextual gaps, and agentic systems for autonomous reasoning and multi-step intervention. We organize existing work by technical paradigm and clinical target, extending beyond common internalizing disorders to include psychotic disorders and externalizing behaviors. Additionally, the paper comprehensively evaluates the performance of LLMs, including the impact of RAG, across various tasks. This work establishes a unified benchmark for the field, paving the way for the development of trustworthy, autonomous AI systems that can deliver precise and explainable mental health support. + oai:arXiv.org:2504.02800v4 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1109/ICDEW67478.2025.00027 + Zhuohan Ge, Darian Li, Yubo Wang, Nicole Hu, Xinyi Zhu, Haoyang Li, Xin Zhang, Mingtao Zhang, Shihao Qi, Yuming Xu, Han Shi, Chen Jason Zhang, Qing Li + + + A Fast Solver-Free Algorithm for Traffic Engineering in Large-Scale Data Center Network + https://arxiv.org/abs/2504.04027 + arXiv:2504.04027v3 Announce Type: replace +Abstract: Rapid growth of data center networks (DCNs) poses significant challenges for large-scale traffic engineering (TE). Existing acceleration strategies, which rely on commercial solvers or deep learning, face scalability issues and struggle with degrading performance or long computational time. Unlike existing algorithms adopting parallel strategies, we propose Sequential Source-Destination Optimization (SSDO), a sequential solver-free algorithm for TE. SSDO decomposes the problem into subproblems, each focused on adjusting the split ratios for a specific source-destination (SD) demand while keeping others fixed. 
To enhance the efficiency of subproblem optimization, we design a Balanced Binary Search Method (BBSM), which identifies the most balanced split ratios among multiple solutions that minimize Maximum Link Utilization (MLU). SSDO dynamically updates the sequence of SDs based on real-time utilization, which accelerates convergence and enhances solution quality. We evaluate SSDO on Meta DCNs and two wide-area networks. In a Meta topology, SSDO achieves a 65\% and 60\% reduction in normalized MLU compared to TEAL and POP, two state-of-the-art TE acceleration methods, while delivering a $12\times$ speedup over POP. These results demonstrate the superior performance of SSDO in large-scale TE. + oai:arXiv.org:2504.04027v3 + cs.NI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yingming Mao, Qiaozhu Zhai, Ximeng Liu, Zhen Yao, Xia Zhu, Yuzhou Zhou + + + UCS: A Universal Model for Curvilinear Structure Segmentation + https://arxiv.org/abs/2504.04034 + arXiv:2504.04034v2 Announce Type: replace +Abstract: Curvilinear structure segmentation (CSS) is essential in various domains, including medical imaging, landscape analysis, industrial surface inspection, and plant analysis. While existing methods achieve high performance within specific domains, their generalizability is limited. On the other hand, large-scale models such as Segment Anything Model (SAM) exhibit strong generalization but are not optimized for curvilinear structures. Existing adaptations of SAM primarily focus on general object segmentation and lack specialized design for CSS tasks. To bridge this gap, we propose the Universal Curvilinear structure Segmentation (UCS) model, which adapts SAM to CSS tasks while further enhancing its cross-domain generalization. 
UCS features a novel encoder architecture integrating a pretrained SAM encoder with two innovations: a Sparse Adapter, strategically inserted to inherit the pre-trained SAM encoder's generalization capability while minimizing the number of fine-tuning parameters, and a Prompt Generation module, which leverages Fast Fourier Transform with a high-pass filter to generate curve-specific prompts. Furthermore, the UCS incorporates a mask decoder that eliminates reliance on manual interaction through a dual-compression module: a Hierarchical Feature Compression module, which aggregates the outputs of the sampled encoder to enhance detail preservation, and a Guidance Feature Compression module, which extracts and compresses image-driven guidance features. Evaluated on a comprehensive multi-domain dataset, including an in-house dataset covering eight natural curvilinear structures, UCS demonstrates state-of-the-art generalization and open-set segmentation performance across medical, engineering, natural, and plant imagery, establishing a new benchmark for universal CSS. The source code is available at https://github.com/kylechuuuuu/UCS. + oai:arXiv.org:2504.04034v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Kai Zhu, Li Chen, Dianshuo Li, Yunxiang Cao, Jun Cheng + + + TabRep: Training Tabular Diffusion Models with a Simple and Effective Continuous Representation + https://arxiv.org/abs/2504.04798 + arXiv:2504.04798v5 Announce Type: replace +Abstract: Diffusion models have been the predominant generative model for tabular data generation. However, they face the conundrum of modeling under a separate versus a unified data representation. The former encounters the challenge of jointly modeling all multi-modal distributions of tabular data in one model. 
While the latter alleviates this by learning a single representation for all features, it currently leverages sparse suboptimal encoding heuristics and necessitates additional computation costs. In this work, we address the latter by presenting TabRep, a tabular diffusion architecture trained with a unified continuous representation. To motivate the design of our representation, we provide geometric insights into how the data manifold affects diffusion models. The key attributes of our representation are composed of its density, flexibility to provide ample separability for nominal features, and ability to preserve intrinsic relationships. Ultimately, TabRep provides a simple yet effective approach for training tabular diffusion models under a continuous data manifold. Our results showcase that TabRep achieves superior performance across a broad suite of evaluations. It is the first to synthesize tabular data that exceeds the downstream quality of the original datasets while preserving privacy and remaining computationally efficient. Code is available at https://github.com/jacobyhsi/TabRep. + oai:arXiv.org:2504.04798v5 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Jacob Si, Zijing Ou, Mike Qu, Zhengrui Xiang, Yingzhen Li + + + Parallel GPU-Enabled Algorithms for SpGEMM on Arbitrary Semirings with Hybrid Communication + https://arxiv.org/abs/2504.06408 + arXiv:2504.06408v2 Announce Type: replace +Abstract: Sparse General Matrix Multiply (SpGEMM) is key for various High-Performance Computing (HPC) applications such as genomics and graph analytics. Using the semiring abstraction, many algorithms can be formulated as SpGEMM, allowing redefinition of addition, multiplication, and numeric types. Today large input matrices require distributed memory parallelism to avoid disk I/O, and modern HPC machines with GPUs can greatly accelerate linear algebra computation. 
In this paper, we implement a GPU-based distributed-memory SpGEMM routine on top of the CombBLAS library. Our implementation achieves a speedup of over 2x compared to the CPU-only CombBLAS implementation and up to 3x compared to PETSc for large input matrices. Furthermore, we note that communication between processes can be optimized by either direct host-to-host or device-to-device communication, depending on the message size. To exploit this, we introduce a hybrid communication scheme that dynamically switches data paths depending on the message size, thus improving runtimes in communication-bound scenarios. + oai:arXiv.org:2504.06408v2 + cs.DC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1145/3676151.3719365 + Thomas McFarland, Julian Bellavita, Giulia Guidi + + + To See or Not to See -- Fingerprinting Devices in Adversarial Environments Amid Advanced Machine Learning + https://arxiv.org/abs/2504.08264 + arXiv:2504.08264v2 Announce Type: replace +Abstract: The increasing use of the Internet of Things raises security concerns. To address this, device fingerprinting is often employed to authenticate devices, detect adversaries, and identify eavesdroppers in an environment. This requires the ability to discern between legitimate and malicious devices which is achieved by analyzing the unique physical and/or operational characteristics of IoT devices. In the era of the latest progress in machine learning, particularly generative models, it is crucial to methodically examine the current studies in device fingerprinting. This involves explaining their approaches and underscoring their limitations when faced with adversaries armed with these ML tools. To systematically analyze existing methods, we propose a generic, yet simplified, model for device fingerprinting. Additionally, we thoroughly investigate existing methods to authenticate devices and detect eavesdropping, using our proposed model. 
We further study trends and similarities between works in authentication and eavesdropping detection and present the existing threats and attacks in these domains. Finally, we discuss future directions in fingerprinting based on these trends to develop more secure IoT fingerprinting schemes. + oai:arXiv.org:2504.08264v2 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Justin Feng, Amirmohammad Haddad, Nader Sehatbakhsh + + + Gradient as Conditions: Rethinking HOG for All-in-one Image Restoration + https://arxiv.org/abs/2504.09377 + arXiv:2504.09377v3 Announce Type: replace +Abstract: All-in-one image restoration (AIR) aims to address diverse degradations within a unified model by leveraging informative degradation conditions to guide the restoration process. However, existing methods often rely on implicitly learned priors, which may entangle feature representations and hinder performance in complex or unseen scenarios. Observing that the Histogram of Oriented Gradients (HOG), a classical gradient representation, has strong discriminative capability across diverse degradations, we identify it as a powerful and interpretable prior for AIR. Based on this insight, we propose HOGformer, a Transformer-based model that integrates learnable HOG features for degradation-aware restoration. The core of HOGformer is a Dynamic HOG-aware Self-Attention (DHOGSA) mechanism, which adaptively models long-range spatial dependencies conditioned on degradation-specific cues encoded by HOG descriptors. To further adapt to the heterogeneity of degradations in AIR, we propose a Dynamic Interaction Feed-Forward (DIFF) module that facilitates channel-spatial interactions, enabling robust feature transformation under diverse degradations. In addition, we propose a HOG loss to explicitly enhance structural fidelity and edge sharpness. 
Extensive experiments on a variety of benchmarks, including adverse weather and natural degradations, demonstrate that HOGformer achieves state-of-the-art performance and generalizes well to complex real-world scenarios. Code is available at https://github.com/Fire-friend/HOGformer. + oai:arXiv.org:2504.09377v3 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jiawei Wu, Zhifei Yang, Zhe Wang, Zhi Jin + + + Syzygy of Thoughts: Improving LLM CoT with the Minimal Free Resolution + https://arxiv.org/abs/2504.09566 + arXiv:2504.09566v3 Announce Type: replace +Abstract: Chain-of-Thought (CoT) prompting enhances the reasoning of large language models (LLMs) by decomposing problems into sequential steps, mimicking human logic and reducing errors. However, complex tasks with vast solution spaces and vague constraints often exceed the capacity of a single reasoning chain. Inspired by Minimal Free Resolution (MFR) in commutative algebra and algebraic geometry, we propose Syzygy of Thoughts (SoT), a novel framework that extends CoT by introducing auxiliary, interrelated reasoning paths. SoT captures deeper logical dependencies, enabling more robust and structured problem-solving. MFR decomposes a module into a sequence of free modules with minimal rank, providing a structured analytical approach to complex systems. This method introduces the concepts of "Module", "Betti numbers", "Freeness", "Mapping", "Exactness" and "Minimality", enabling the systematic decomposition of the original complex problem into logically complete minimal subproblems while preserving key problem features and reducing reasoning length. We tested SoT across diverse datasets (e.g., GSM8K, MATH) and models (e.g., GPT-4o-mini, Qwen2.5), achieving inference accuracy that matches or surpasses mainstream CoT standards. 
Additionally, by aligning the sampling process with algebraic constraints, our approach enhances the scalability of inference time in LLMs, ensuring both transparent reasoning and high performance. Our code will be publicly available at https://github.com/dlMARiA/Syzygy-of-thoughts. + oai:arXiv.org:2504.09566v3 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Chenghao Li, Chaoning Zhang, Yi Lu, Jiaquan Zhang, Qigan Sun, Xudong Wang, Jiwei Wei, Guoqing Wang, Yang Yang, Heng Tao Shen + + + LightFormer: A lightweight and efficient decoder for remote sensing image segmentation + https://arxiv.org/abs/2504.10834 + arXiv:2504.10834v2 Announce Type: replace +Abstract: Deep learning techniques have achieved remarkable success in the semantic segmentation of remote sensing images and in land-use change detection. Nevertheless, their real-time deployment on edge platforms remains constrained by decoder complexity. Herein, we introduce LightFormer, a lightweight decoder for time-critical tasks that involve unstructured targets, such as disaster assessment, unmanned aerial vehicle search-and-rescue, and cultural heritage monitoring. LightFormer employs a feature-fusion and refinement module built on channel processing and a learnable gating mechanism to aggregate multi-scale, multi-range information efficiently, which drastically curtails model complexity. Furthermore, we propose a spatial information selection module (SISM) that integrates long-range attention with a detail preservation branch to capture spatial dependencies across multiple scales, thereby substantially improving the recognition of unstructured targets in complex scenes. On the ISPRS Vaihingen benchmark, LightFormer attains 99.9% of GLFFNet's mIoU (83.9% vs. 84.0%) while requiring only 14.7% of its FLOPs and 15.9% of its parameters, thus achieving an excellent accuracy-efficiency trade-off. 
Consistent results on LoveDA, ISPRS Potsdam, RescueNet, and FloodNet further demonstrate its robustness and superior perception of unstructured objects. These findings highlight LightFormer as a practical solution for remote sensing applications where both computational economy and high-precision segmentation are imperative. + oai:arXiv.org:2504.10834v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Sihang Chen, Lijun Yun, Ze Liu, JianFeng Zhu, Jie Chen, Hui Wang, Yueping Nie + + + Recent Advances in 3D Object and Scene Generation: A Survey + https://arxiv.org/abs/2504.11734 + arXiv:2504.11734v2 Announce Type: replace +Abstract: In recent years, the demand for 3D content has grown exponentially with the intelligent upgrade of interactive media, extended reality (XR), and Metaverse industries. In order to overcome the limitations of traditional manual modeling approaches, such as labor-intensive workflows and prolonged production cycles, revolutionary advances have been achieved through the convergence of novel 3D representation paradigms and artificial intelligence generative technologies. In this survey, we conduct a systematic review of the cutting-edge achievements in static 3D object and scene generation, as well as establish a comprehensive technical framework through systematic categorization. We start our analysis with mainstream 3D object representations. Subsequently, we delve into the technical pathways of 3D object generation based on four mainstream deep generative models: Variational Autoencoders, Generative Adversarial Networks, Autoregressive Models, and Diffusion Models. Regarding scene generation, we focus on three dominant paradigms: layout-guided generation, lifting based on 2D priors, and rule-driven modeling. Finally, we critically examine persistent challenges in 3D generation and propose potential research directions for future investigation. 
This survey aims to provide readers with a structured understanding of state-of-the-art 3D generation technologies while inspiring researchers to undertake more exploration in this domain. + oai:arXiv.org:2504.11734v2 + cs.GR + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xiang Tang, Ruotong Li, Xiaopeng Fan + + + Zooming In on Fakes: A Novel Dataset for Localized AI-Generated Image Detection with Forgery Amplification Approach + https://arxiv.org/abs/2504.11922 + arXiv:2504.11922v3 Announce Type: replace +Abstract: The rise of AI-generated image tools has made localized forgeries increasingly realistic, posing challenges for visual content integrity. Although recent efforts have explored localized AIGC detection, existing datasets predominantly focus on object-level forgeries while overlooking broader scene edits in regions such as sky or ground. To address these limitations, we introduce \textbf{BR-Gen}, a large-scale dataset of 150,000 locally forged images with diverse scene-aware annotations, which are based on semantic calibration to ensure high-quality samples. BR-Gen is constructed through a fully automated ``Perception-Creation-Evaluation'' pipeline to ensure semantic coherence and visual realism. In addition, we further propose \textbf{NFA-ViT}, a Noise-guided Forgery Amplification Vision Transformer that enhances the detection of localized forgeries by amplifying subtle forgery-related features across the entire image. NFA-ViT mines heterogeneous regions in images, \emph{i.e.}, potential edited areas, via noise fingerprints. Subsequently, an attention mechanism is introduced to compel the interaction between normal and abnormal features, thereby propagating the traces throughout the entire image, allowing subtle forgeries to influence a broader context and improving overall detection robustness. 
Extensive experiments demonstrate that BR-Gen constructs entirely new scenarios that are not covered by existing methods. Going a step further, NFA-ViT outperforms existing methods on BR-Gen and generalizes well across current benchmarks. + oai:arXiv.org:2504.11922v3 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Lvpan Cai, Haowei Wang, Jiayi Ji, Yanshu Zhoumen, Shen Chen, Taiping Yao, Xiaoshuai Sun + + + SWE-Synth: Synthesizing Verifiable Bug-Fix Data to Enable Large Language Models in Resolving Real-World Bugs + https://arxiv.org/abs/2504.14757 + arXiv:2504.14757v2 Announce Type: replace +Abstract: Large language models (LLMs) are transforming automated program repair (APR) through agent-based approaches that localize bugs, generate patches, and verify fixes. However, the lack of high-quality, scalable training datasets (especially those with verifiable outputs and intermediate reasoning traces) limits progress, particularly for open-source models. In this work, we present SWE-Synth, a framework for synthesizing realistic, verifiable, and process-aware bug-fix datasets at the repository level. SWE-Synth leverages LLM agents to simulate debugging workflows, producing not only bug-fix pairs but also test cases and structured repair trajectories. Compared to manually curated datasets, our method scales with minimal human effort while preserving contextual richness and correctness. Experiments show that models trained on SWE-Synth outperform those trained on real-world datasets by 2.3% on SWE-Bench Lite. Our results highlight the potential of synthetic, agent-generated data to advance the state of the art in APR and software engineering automation. + oai:arXiv.org:2504.14757v2 + cs.SE + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Minh V. T. Pham, Huy N. Phan, Hoang N. Phan, Cuong Le Chi, Tien N. Nguyen, Nghi D. Q. 
Bui + + + Peer-Aware Cost Estimation in Nonlinear General-Sum Dynamic Games for Mutual Learning and Intent Inference + https://arxiv.org/abs/2504.17129 + arXiv:2504.17129v2 Announce Type: replace +Abstract: Dynamic game theory is a powerful tool in modeling multi-agent interactions and human-robot systems. In practice, since the objective functions of both agents may not be explicitly known to each other, these interactions can be modeled as incomplete-information general-sum dynamic games. Solving for equilibrium policies for such games presents a major challenge, especially if the games involve nonlinear underlying dynamics. To simplify the problem, existing work often assumes that one agent is an expert with complete information about its peer, which can lead to biased estimates and failures in coordination. To address this challenge, we propose a nonlinear peer-aware cost estimation (N-PACE) algorithm for general-sum dynamic games. In N-PACE, using iterative linear quadratic (ILQ) approximation of dynamic games, each agent explicitly models the learning dynamics of its peer agent while inferring their objective functions and updating its own control policy accordingly in real time, which leads to unbiased and fast learning of the unknown objective function of the peer agent. Additionally, we demonstrate how N-PACE enables intent communication by explicitly modeling the peer's learning dynamics. 
Finally, we show how N-PACE outperforms baseline methods that disregard the learning behavior of the other agent, both analytically and through our case studies. + oai:arXiv.org:2504.17129v2 + eess.SY + cs.AI + cs.GT + cs.RO + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Seyed Yousef Soltanian, Wenlong Zhang + + + Neural Controller for Incremental Stability of Unknown Continuous-time Systems + https://arxiv.org/abs/2504.18330 + arXiv:2504.18330v2 Announce Type: replace +Abstract: This work primarily focuses on synthesizing a controller that guarantees an unknown continuous-time system to be incrementally input-to-state stable ($\delta$-ISS). In this context, the notion of $\delta$-ISS control Lyapunov function ($\delta$-ISS-CLF) for the continuous-time system is introduced. Combined with the controller, the $\delta$-ISS-CLF guarantees that the system is incrementally stable. As the paper deals with unknown dynamical systems, the controller as well as the $\delta$-ISS-CLF are parametrized using neural networks. The data set used to train the neural networks is generated from the state space of the system by proper sampling. Now, to give a formal guarantee that the controller makes the system incrementally stable, we develop a validity condition under some Lipschitz continuity assumptions and incorporate the condition into the training framework to ensure a provable correctness guarantee at the end of the training process. Finally, we demonstrate the effectiveness of the proposed approach through several case studies: a scalar system with a non-affine, non-polynomial structure, a one-link manipulator system, a nonlinear Moore-Greitzer model of a jet engine, a magnetic levitator system and a rotating rigid spacecraft model. 
+ oai:arXiv.org:2504.18330v2 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Ahan Basu, Bhabani Shankar Dey, Pushpak Jagtap + + + JITServe: SLO-aware LLM Serving with Imprecise Request Information + https://arxiv.org/abs/2504.20068 + arXiv:2504.20068v3 Announce Type: replace +Abstract: The integration of Large Language Models (LLMs) into applications ranging from interactive chatbots to multi-agent systems has introduced a wide spectrum of service-level objectives (SLOs) for responsiveness. These include latency-sensitive requests emphasizing per-token latency in streaming chat, deadline-sensitive requests requiring rapid full responses to trigger external tools, and compound requests with evolving dependencies across multiple LLM calls. Despite, or perhaps because of, this workload diversity and unpredictable request information (e.g., response lengths and dependencies), existing request schedulers have focused on aggregate performance, unable to ensure application-level SLO needs. + This paper presents JITServe, the first SLO-aware LLM serving system designed to maximize service goodput (e.g., the number of tokens meeting request SLOs) across diverse workloads. JITServe schedules requests using imprecise request information and gradually relaxes this conservatism by refining request information estimates as generation progresses. It applies a grouped margin goodput maximization algorithm to allocate just enough serving bandwidth to satisfy each request's SLO just-in-time (JIT), maximizing residual capacity for others, while deciding the composition of requests in a batch to maximize efficiency and goodput with provable guarantees. Our evaluation across diverse realistic workloads, including chat, deep research, and agentic pipelines, shows that JITServe improves service goodput by 1.4x-6.3x, alternatively achieving 28.5%-83.2% resource savings, compared to state-of-the-art designs. 
+ oai:arXiv.org:2504.20068v3 + cs.DC + cs.LG + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Wei Zhang, Zhiyu Wu, Yi Mu, Rui Ning, Banruo Liu, Nikhil Sarda, Myungjin Lee, Fan Lai + + + BOOM: Benchmarking Out-Of-distribution Molecular Property Predictions of Machine Learning Models + https://arxiv.org/abs/2505.01912 + arXiv:2505.01912v2 Announce Type: replace +Abstract: Data-driven molecular discovery leverages artificial intelligence/machine learning (AI/ML) and generative modeling to filter and design novel molecules. Discovering novel molecules requires accurate out-of-distribution (OOD) predictions, but ML models struggle to generalize OOD. Currently, no systematic benchmarks exist for molecular OOD prediction tasks. We present $\mathbf{BOOM}$, $\mathbf{b}$enchmarks for $\mathbf{o}$ut-$\mathbf{o}$f-distribution $\mathbf{m}$olecular property predictions: a chemically-informed benchmark for OOD performance on common molecular property prediction tasks. We evaluate over 150 model-task combinations to benchmark deep learning models on OOD performance. Overall, we find that no existing model achieves strong generalization across all tasks: even the top-performing model exhibited an average OOD error 3x higher than in-distribution. Current chemical foundation models do not show strong OOD extrapolation, while models with high inductive bias can perform well on OOD tasks with simple, specific properties. We perform extensive ablation experiments, highlighting how data generation, pre-training, hyperparameter optimization, model architecture, and molecular representation impact OOD performance. Developing models with strong OOD generalization is a new frontier challenge in chemical ML. 
This open-source benchmark is available at https://github.com/FLASK-LLNL/BOOM + oai:arXiv.org:2505.01912v2 + cs.LG + cond-mat.mtrl-sci + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Evan R. Antoniuk, Shehtab Zaman, Tal Ben-Nun, Peggy Li, James Diffenderfer, Busra Sahin, Obadiah Smolenski, Tim Hsu, Anna M. Hiszpanski, Kenneth Chiu, Bhavya Kailkhura, Brian Van Essen + + + Focus on Likely Classes for Test-Time Prediction + https://arxiv.org/abs/2505.03819 + arXiv:2505.03819v3 Announce Type: replace +Abstract: We ask: Can focusing on likely classes of a single, in-domain sample improve model predictions? Prior work argued ``no''. We put forward a novel rationale in favor of ``yes'': Sharedness of features among classes indicates their reliability for a single sample. We aim for an affirmative answer without using hand-engineered augmentations or auxiliary tasks. We propose two novel test-time fine-tuning methods to improve uncertain model predictions. Instead of greedily selecting the most likely class, we introduce an additional step, \emph{focus on the likely classes}, to refine predictions. By applying a single gradient descent step with a large learning rate, we refine predictions when an initial forward pass indicates high uncertainty. The experimental evaluation demonstrates accuracy gains for one of our methods on average, which emphasizes shared features among likely classes. The gains are confirmed across diverse text and image domain models. 
+ oai:arXiv.org:2505.03819v3 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Johannes Schneider + + + Bridging Geometry-Coherent Text-to-3D Generation with Multi-View Diffusion Priors and Gaussian Splatting + https://arxiv.org/abs/2505.04262 + arXiv:2505.04262v2 Announce Type: replace +Abstract: Score Distillation Sampling (SDS) leverages pretrained 2D diffusion models to advance text-to-3D generation but neglects multi-view correlations, being prone to geometric inconsistencies and multi-face artifacts in the generated 3D content. In this work, we propose Coupled Score Distillation (CSD), a framework that couples multi-view joint distribution priors to ensure geometrically consistent 3D generation while enabling the stable and direct optimization of 3D Gaussian Splatting. Specifically, by reformulating the optimization as a multi-view joint optimization problem, we derive an effective optimization rule that effectively couples multi-view priors to guide optimization across different viewpoints while preserving the diversity of generated 3D assets. Additionally, we propose a framework that directly optimizes 3D Gaussian Splatting (3D-GS) with random initialization to generate geometrically consistent 3D content. We further employ a deformable tetrahedral grid, initialized from 3D-GS and refined through CSD, to produce high-quality, refined meshes. Quantitative and qualitative experimental results demonstrate the efficiency and competitive quality of our approach. 
+ oai:arXiv.org:2505.04262v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + 10.1016/j.neunet.2025.108511 + Feng Yang, Wenliang Qian, Wangmeng Zuo, Hui Li + + + A Survey of 3D Reconstruction with Event Cameras + https://arxiv.org/abs/2505.08438 + arXiv:2505.08438v3 Announce Type: replace +Abstract: Event cameras are rapidly emerging as powerful vision sensors for 3D reconstruction, uniquely capable of asynchronously capturing per-pixel brightness changes. Compared to traditional frame-based cameras, event cameras produce sparse yet temporally dense data streams, enabling robust and accurate 3D reconstruction even under challenging conditions such as high-speed motion, low illumination, and extreme dynamic range scenarios. These capabilities offer substantial promise for transformative applications across various fields, including autonomous driving, robotics, aerial navigation, and immersive virtual reality. In this survey, we present the first comprehensive review exclusively dedicated to event-based 3D reconstruction. Existing approaches are systematically categorised based on input modality into stereo, monocular, and multimodal systems, and further classified according to reconstruction methodologies, including geometry-based techniques, deep learning approaches, and neural rendering techniques such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS). Within each category, methods are chronologically organised to highlight the evolution of key concepts and advancements. Furthermore, we provide a detailed summary of publicly available datasets specifically suited to event-based reconstruction tasks. Finally, we discuss significant open challenges in dataset availability, standardised evaluation, effective representation, and dynamic scene reconstruction, outlining insightful directions for future research. 
This survey aims to serve as an essential reference and provides a clear and motivating roadmap toward advancing the state of the art in event-driven 3D reconstruction. + oai:arXiv.org:2505.08438v3 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Chuanzhi Xu, Haoxian Zhou, Langyi Chen, Haodong Chen, Zeke Zexi Hu, Zhicheng Lu, Ying Zhou, Vera Chung, Qiang Qu, Weidong Cai + + + DyGSSM: Multi-view Dynamic Graph Embeddings with State Space Model Gradient Update + https://arxiv.org/abs/2505.09017 + arXiv:2505.09017v2 Announce Type: replace +Abstract: Most of the dynamic graph representation learning methods involve dividing a dynamic graph into discrete snapshots to capture the evolving behavior of nodes over time. Existing methods primarily capture only local or global structures of each node within a snapshot using message-passing and random walk-based methods. Then, they utilize sequence-based models (e.g., transformers) to encode the temporal evolution of node embeddings, and meta-learning techniques to update the model parameters. However, these approaches have two limitations. First, they neglect the extraction of global and local information simultaneously in each snapshot. Second, they fail to consider the model's performance in the current snapshot during parameter updates, resulting in a lack of temporal dependency management. Recently, the HiPPO (High-order Polynomial Projection Operators) algorithm has gained attention for its ability to optimize and preserve sequence history in State Space Models (SSM). To address the aforementioned limitations in dynamic graph representation learning, we propose a novel method called Multi-view Dynamic Graph Embeddings with State Space Model Gradient Update (DyGSSM). Our approach combines Graph Convolution Networks (GCN) for local feature extraction and random walk with Gated Recurrent Unit (GRU) for global feature extraction in each snapshot. 
We then integrate the local and global features using a cross-attention mechanism. Additionally, we incorporate an SSM based on HiPPO algorithm to account for long-term dependencies when updating model parameters, ensuring that model performance in each snapshot informs subsequent updates. Experiments on five public datasets show that our method outperforms existing baseline and state-of-the-art (SOTA) methods in 17 out of 20 cases. + oai:arXiv.org:2505.09017v2 + cs.LG + cs.SI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Bizhan Alipour Pijan, Serdar Bozdag + + + An Exploration of Default Images in Text-to-Image Generation + https://arxiv.org/abs/2505.09166 + arXiv:2505.09166v5 Announce Type: replace +Abstract: In the creative practice of text-to-image (TTI) generation, images are synthesized from textual prompts. By design, TTI models always yield an output, even if the prompt contains unknown terms. In this case, the model may generate default images: images that closely resemble each other across many unrelated prompts. Studying default images is valuable for designing better solutions for prompt engineering and TTI generation. We present the first investigation into default images on Midjourney. We describe an initial study in which we manually created input prompts triggering default images, and several ablation studies. Building on these, we conduct a computational analysis of over 750,000 images, revealing consistent default images across unrelated prompts. We also conduct an online user study investigating how default images may affect user satisfaction. Our work lays the foundation for understanding default images in TTI generation, highlighting their practical relevance as well as challenges and future research directions. 
+ oai:arXiv.org:2505.09166v5 + cs.HC + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Hannu Simonen, Atte Kiviniemi, Hannah Johnston, Helena Barranha, Jonas Oppenlaender + + + Characterization and Decidability of FC-Definable Regular Languages + https://arxiv.org/abs/2505.09772 + arXiv:2505.09772v2 Announce Type: replace +Abstract: FC is a first-order logic that reasons over all factors of a finite word using concatenation, and can define non-regular languages like that of all squares (ww). In this paper, we establish that there are regular languages that are not FC-definable. Moreover, we give a decidable characterization of the FC-definable regular languages in terms of algebra, automata, and regular expressions. The latter of which is natural and concise: Star-free generalized regular expressions extended with the Kleene star of terminal words. + oai:arXiv.org:2505.09772v2 + cs.LO + cs.FL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Sam M. Thompson, Nicole Schweikardt, Dominik D. Freydenberger + + + Surface stability of a layered magnetoelastic half-space + https://arxiv.org/abs/2505.10660 + arXiv:2505.10660v2 Announce Type: replace +Abstract: We evaluate the conditions for surface stability of a layered magnetoelastic half-space subjected to large deformations and a magnetic field. After reviewing the fundamental measures of deformation and summarizing the magnetostatic equations in Eulerian and Lagrangian forms, we derive the constitutive relations from a total energy function dependent on the deformation gradient and Lagrangian magnetic induction. Energy principles yield the equilibrium equations, magnetic field equations, and boundary conditions. The second variation of the energy functional provides the incremental equations and conditions for stability analysis. 
Surface instability is studied by linearizing increments of deformation and magnetic induction about a finitely deformed state under a magnetic field normal to the surface. Four illustrative cases are considered: (i) a layered non-magnetizable half-space with varying stiffness contrast; (ii) the critical stretch of a magnetoelastic half-space as a function of magnetic induction; (iii) surface stability of a magneto-sensitive layer atop a non-magnetizable substrate; and (iv) bifurcation conditions in a two-layered magnetoelastic solid with different stiffness ratios. Graphical results are provided throughout. + oai:arXiv.org:2505.10660v2 + math.NA + cond-mat.mtrl-sci + cs.NA + math-ph + math.MP + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1016/j.ijsolstr.2025.113807 + International Journal of Solids and Structures, 327, 113807 + Davood Shahsavari, Luis Dorfmann, Prashant Saxena + + + DDAE++: Enhancing Diffusion Models Towards Unified Generative and Discriminative Learning + https://arxiv.org/abs/2505.10999 + arXiv:2505.10999v3 Announce Type: replace +Abstract: While diffusion models excel at image synthesis, useful representations have been shown to emerge from generative pre-training, suggesting a path towards unified generative and discriminative learning. However, suboptimal semantic flow within current architectures can hinder this potential: features encoding the richest high-level semantics are underutilized and diluted when propagating through decoding layers, impeding the formation of an explicit semantic bottleneck layer. To address this, we introduce self-conditioning, a lightweight mechanism that reshapes the model's layer-wise semantic hierarchy without external guidance. 
By aggregating and rerouting intermediate features to guide subsequent decoding layers, our method concentrates more high-level semantics, concurrently strengthening global generative guidance and forming more discriminative representations. This simple approach yields a dual-improvement trend across pixel-space UNet, UViT and latent-space DiT models with minimal overhead. Crucially, it creates an architectural semantic bridge that propagates discriminative improvements into generation and accommodates further techniques such as contrastive self-distillation. Experiments show that our enhanced models, especially self-conditioned DiT, are powerful dual learners that yield strong and transferable representations on image and dense classification tasks, surpassing various generative self-supervised models in linear probing while also improving or maintaining high generation quality. + oai:arXiv.org:2505.10999v3 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Weilai Xiang, Hongyu Yang, Di Huang, Yunhong Wang + + + Addition is almost all you need: Compressing neural networks with double binary factorization + https://arxiv.org/abs/2505.11076 + arXiv:2505.11076v3 Announce Type: replace +Abstract: Binary quantization approaches, which replace weight matrices with binary matrices and substitute costly multiplications with cheaper additions, offer a computationally efficient approach to address the increasing computational and storage requirements of Large Language Models (LLMs). However, the severe quantization constraint ($\pm1$) can lead to significant accuracy degradation. In this paper, we propose Double Binary Factorization (DBF), a novel method that factorizes dense weight matrices into products of two binary (sign) matrices, each accompanied by scaling vectors. 
DBF preserves the efficiency advantages of binary representations while achieving compression rates that are competitive with or superior to state-of-the-art methods. Specifically, in a 1-bit per weight range, DBF is better than existing binarization approaches. In a 2-bit per weight range, DBF is competitive with the best quantization methods like QuIP\# and QTIP. Unlike most existing compression techniques, which offer limited compression level choices, DBF allows fine-grained control over compression ratios by adjusting the factorization's intermediate dimension. Based on this advantage, we further introduce an algorithm for estimating non-uniform layer-wise compression ratios for DBF, based on previously developed channel pruning criteria. + Code available at: https://github.com/usamec/double_binary + oai:arXiv.org:2505.11076v3 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Vladim\'ir Bo\v{z}a, Vladim\'ir Macko + + + Solver-Informed RL: Grounding Large Language Models for Authentic Optimization Modeling + https://arxiv.org/abs/2505.11792 + arXiv:2505.11792v3 Announce Type: replace +Abstract: Optimization modeling is fundamental to decision-making across diverse domains. Despite progress in automating optimization formulation from natural language descriptions, Large Language Models (LLMs) often struggle to generate formally correct and usable models against hallucinations, posing a challenge for reliable automation. Inspired by the success of Reinforcement Learning (RL) in enhancing Large Reasoning Models, we present Solver-Informed Reinforcement Learning (SIRL), a novel framework that significantly improves the authenticity of LLMs for optimization modeling using Reinforcement Learning with Verifiable Reward by leveraging external optimization solvers as verifiers. 
These verifiers automatically assess the executable code and the instance-level mathematical model represented by the associated LP file, yielding precise and comprehensive feedback signals -- including syntax, feasibility, and solution quality, serving as direct rewards for the RL process. This automated verification process, particularly from classic optimization solvers, also underpins our instance-enhanced self-consistency method to synthesize high-quality training data. Extensive experiments on diverse public benchmarks demonstrate that SIRL achieves state-of-the-art performance, substantially outperforming existing methods in generating accurate and executable optimization models. Our code is publicly available at https://github.com/Cardinal-Operations/SIRL. + oai:arXiv.org:2505.11792v3 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 39th Conference on Neural Information Processing Systems (NeurIPS 2025) + Yitian Chen, Jingfan Xia, Siyu Shao, Dongdong Ge, Yinyu Ye + + + Personalized and Resilient Distributed Learning Through Opinion Dynamics + https://arxiv.org/abs/2505.14081 + arXiv:2505.14081v2 Announce Type: replace +Abstract: In this paper, we address two practical challenges of distributed learning in multi-agent network systems, namely personalization and resilience. Personalization is the need of heterogeneous agents to learn local models tailored to their own data and tasks, while still generalizing well; on the other hand, the learning process must be resilient to cyberattacks or anomalous training data to avoid disruption. Motivated by a conceptual affinity between these two requirements, we devise a distributed learning algorithm that combines distributed gradient descent and the Friedkin-Johnsen model of opinion dynamics to fulfill both of them. 
We quantify its convergence speed and the neighborhood that contains the final learned models, which can be easily controlled by tuning the algorithm parameters to enforce a more personalized/resilient behavior. We numerically showcase the effectiveness of our algorithm on synthetic and real-world distributed learning tasks, where it achieves high global accuracy both for personalized models and with malicious agents compared to standard strategies. + oai:arXiv.org:2505.14081v2 + cs.MA + cs.LG + eess.SP + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Luca Ballotta, Nicola Bastianello, Riccardo M. G. Ferrari, Karl H. Johansson + + + Reliable Decision Support with LLMs: A Framework for Evaluating Consistency in Binary Text Classification Applications + https://arxiv.org/abs/2505.14918 + arXiv:2505.14918v2 Announce Type: replace +Abstract: This study introduces a framework for evaluating consistency in large language model (LLM) binary text classification, addressing the lack of established reliability assessment methods. Adapting psychometric principles, we determine sample size requirements, develop metrics for invalid responses, and evaluate intra- and inter-rater reliability. Our case study examines financial news sentiment classification across 14 LLMs (including claude-3-7-sonnet, gpt-4o, deepseek-r1, gemma3, llama3.2, phi4, and command-r-plus), with five replicates per model on 1,350 articles. Models demonstrated high intra-rater consistency, achieving perfect agreement on 90-98% of examples, with minimal differences between expensive and economical models from the same families. When validated against StockNewsAPI labels, models achieved strong performance (accuracy 0.76-0.88), with smaller models like gemma3:1B, llama3.2:3B, and claude-3-5-haiku outperforming larger counterparts. 
All models performed at chance when predicting actual market movements, indicating task constraints rather than model limitations. Our framework provides systematic guidance for LLM selection, sample size planning, and reliability assessment, enabling organizations to optimize resources for classification tasks. + oai:arXiv.org:2505.14918v2 + cs.CL + cs.LG + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Fadel M. Megahed, Ying-Ju Chen, L. Allision Jones-Farmer, Younghwa Lee, Jiawei Brooke Wang, Inez M. Zwetsloot + + + Multimodal Cultural Safety: Evaluation Framework and Alignment Strategies + https://arxiv.org/abs/2505.14972 + arXiv:2505.14972v2 Announce Type: replace +Abstract: Large vision-language models (LVLMs) are increasingly deployed in globally distributed applications, such as tourism assistants, yet their ability to produce culturally appropriate responses remains underexplored. Existing multimodal safety benchmarks primarily focus on physical safety and overlook violations rooted in cultural norms, which can result in symbolic harm. To address this gap, we introduce CROSS, a benchmark designed to assess the cultural safety reasoning capabilities of LVLMs. CROSS includes 1,284 multilingual visually grounded queries from 16 countries, three everyday domains, and 14 languages, where cultural norm violations emerge only when images are interpreted in context. We propose CROSS-Eval, an intercultural theory-based framework that measures four key dimensions: cultural awareness, norm education, compliance, and helpfulness. Using this framework, we evaluate 21 leading LVLMs, including mixture-of-experts models and reasoning models. Results reveal significant cultural safety gaps: the best-performing model achieves only 61.79% in awareness and 37.73% in compliance. While some open-source models reach GPT-4o-level performance, they still fall notably short of proprietary models. 
Our results further show that increasing reasoning capacity improves cultural alignment but does not fully resolve the issue. To improve model performance, we develop two enhancement strategies: supervised fine-tuning with culturally grounded, open-ended data and preference tuning with contrastive response pairs that highlight safe versus unsafe behaviors. These methods substantially improve GPT-4o's cultural awareness (+60.14%) and compliance (+55.2%), while preserving general multimodal capabilities with minimal performance reduction on general multimodal understanding benchmarks. + oai:arXiv.org:2505.14972v2 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Haoyi Qiu, Kung-Hsiang Huang, Ruichen Zheng, Jiao Sun, Nanyun Peng + + + Deliberation on Priors: Trustworthy Reasoning of Large Language Models on Knowledge Graphs + https://arxiv.org/abs/2505.15210 + arXiv:2505.15210v3 Announce Type: replace +Abstract: Knowledge graph-based retrieval-augmented generation seeks to mitigate hallucinations in Large Language Models (LLMs) caused by insufficient or outdated knowledge. However, existing methods often fail to fully exploit the prior knowledge embedded in knowledge graphs (KGs), particularly their structural information and explicit or implicit constraints. The former can enhance the faithfulness of LLMs' reasoning, while the latter can improve the reliability of response generation. Motivated by these, we propose a trustworthy reasoning framework, termed Deliberation over Priors (DP), which sufficiently utilizes the priors contained in KGs. Specifically, DP adopts a progressive knowledge distillation strategy that integrates structural priors into LLMs through a combination of supervised fine-tuning and Kahneman-Tversky optimization, thereby improving the faithfulness of relation path generation. 
Furthermore, our framework employs a reasoning-introspection strategy, which guides LLMs to perform refined reasoning verification based on extracted constraint priors, ensuring the reliability of response generation. Extensive experiments on three benchmark datasets demonstrate that DP achieves new state-of-the-art performance, especially a Hit@1 improvement of 13% on the ComplexWebQuestions dataset, and generates highly trustworthy responses. We also conduct various analyses to verify its flexibility and practicality. The code is available at https://github.com/reml-group/Deliberation-on-Priors. + oai:arXiv.org:2505.15210v3 + cs.CL + cs.IR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jie Ma, Ning Qu, Zhitao Gao, Rui Xing, Jun Liu, Hongbin Pei, Jiang Xie, Linyun Song, Pinghui Wang, Jing Tao, Zhou Su + + + VERDI: VLM-Embedded Reasoning for Autonomous Driving + https://arxiv.org/abs/2505.15925 + arXiv:2505.15925v3 Announce Type: replace +Abstract: While autonomous driving (AD) stacks struggle with decision making under partial observability and real-world complexity, human drivers are capable of commonsense reasoning to make near-optimal decisions with limited information. Recent work has attempted to leverage finetuned Vision-Language Models (VLMs) for trajectory planning at inference time to emulate human behavior. Despite their success in benchmark evaluations, these methods are often impractical to deploy (a 70B parameter VLM inference at merely 8 tokens per second requires more than 160G of memory), and their monolithic network structure prohibits safety decomposition. To bridge this gap, we propose VLM-Embedded Reasoning for autonomous Driving (VERDI), a training-time framework that distills the reasoning process and commonsense knowledge of VLMs into the AD stack. 
VERDI augments modular differentiable end-to-end (e2e) AD models by aligning intermediate module outputs at the perception, prediction, and planning stages with text features explaining the driving reasoning process produced by VLMs. By encouraging alignment in latent space, VERDI enables the modular AD stack to internalize structured reasoning, without incurring the inference-time costs of large VLMs. We validate VERDI in both open-loop (NuScenes and Bench2Drive benchmarks) and closed-loop (HugSim Simulator) settings. We find that VERDI outperforms existing e2e methods that do not embed reasoning by up to 11% in $\ell_{2}$ distance and 11% in driving performance, while maintaining real-time inference speed. + oai:arXiv.org:2505.15925v3 + cs.RO + cs.AI + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Bowen Feng, Zhiting Mei, Baiang Li, Julian Ost, Filippo Ghilotti, Roger Girgis, Anirudha Majumdar, Felix Heide + + + CrossLMM: Decoupling Long Video Sequences from LMMs via Dual Cross-Attention Mechanisms + https://arxiv.org/abs/2505.17020 + arXiv:2505.17020v2 Announce Type: replace +Abstract: The advent of Large Multimodal Models (LMMs) has significantly enhanced Large Language Models (LLMs) to process and interpret diverse data modalities (e.g., image and video). However, as input complexity increases, particularly with long video sequences, the number of required tokens has grown significantly, leading to quadratically computational costs. This has made the efficient compression of video tokens in LMMs, while maintaining performance integrity, a pressing research challenge. In this paper, we introduce CrossLMM, decoupling long video sequences from LMMs via a dual cross-attention mechanism, which substantially reduces visual token quantity with minimal performance degradation. Specifically, we first implement a significant token reduction from pretrained visual encoders through a pooling methodology. 
Then, within LLM layers, we employ a visual-to-visual cross-attention mechanism, wherein the pooled visual tokens function as queries against the original visual token set. This module enables more efficient token utilization while retaining fine-grained informational fidelity. In addition, we introduce a text-to-visual cross-attention mechanism, for which the text tokens are enhanced through interaction with the original visual tokens, enriching the visual comprehension of the text tokens. Comprehensive empirical evaluation demonstrates that our approach achieves comparable or superior performance across diverse video-based LMM benchmarks, despite utilizing substantially fewer computational resources. + oai:arXiv.org:2505.17020v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Shilin Yan, Jiaming Han, Joey Tsai, Hongwei Xue, Rongyao Fang, Lingyi Hong, Ziyu Guo, Ray Zhang + + + Shape it Up! Restoring LLM Safety during Finetuning + https://arxiv.org/abs/2505.17196 + arXiv:2505.17196v3 Announce Type: replace +Abstract: Finetuning large language models (LLMs) enables user-specific customization but introduces critical safety risks: even a few harmful examples can compromise safety alignment. A common mitigation strategy is to update the model more strongly on examples deemed safe, while downweighting or excluding those flagged as unsafe. However, because safety context can shift within a single example, updating the model equally on both harmful and harmless parts of a response is suboptimal-a coarse treatment we term static safety shaping. In contrast, we propose dynamic safety shaping (DSS), a framework that uses fine-grained safety signals to reinforce learning from safe segments of a response while suppressing unsafe content. 
To enable such fine-grained control during finetuning, we introduce a key insight: guardrail models, traditionally used for filtering, can be repurposed to evaluate partial responses, tracking how safety risk evolves throughout the response, segment by segment. This leads to the Safety Trajectory Assessment of Response (STAR), a token-level signal that enables shaping to operate dynamically over the training sequence. Building on this, we present STAR-DSS, guided by STAR scores, that robustly mitigates finetuning risks and delivers substantial safety improvements across diverse threats, datasets, and model families-all without compromising capability on intended tasks. We encourage future safety research to build on dynamic shaping principles for stronger mitigation against evolving finetuning risks. Our code is publicly available at https://github.com/poloclub/star-dss. + oai:arXiv.org:2505.17196v3 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + ShengYun Peng, Pin-Yu Chen, Jianfeng Chi, Seongmin Lee, Duen Horng Chau + + + A sparse $hp$-finite element method for piecewise-smooth differential equations with periodic boundary conditions + https://arxiv.org/abs/2505.17849 + arXiv:2505.17849v2 Announce Type: replace +Abstract: We develop an efficient $hp$-finite element method for piecewise-smooth differential equations with periodic boundary conditions, using orthogonal polynomials defined on circular arcs. The operators derived from this basis are banded and achieve optimal complexity regardless of $h$ or $p$, both for building the discretisation and solving the resulting linear system in the case where the operator is symmetric positive definite. The basis serves as a useful alternative to other bases such as the Fourier or integrated Legendre bases, especially for problems with discontinuities. 
We relate the convergence properties of these bases to regions of analyticity in the complex plane, and further use several differential equation examples to demonstrate these properties. The basis spans the low order eigenfunctions of constant coefficient differential operators, thereby achieving better smoothness properties for time-evolution partial differential equations. + oai:arXiv.org:2505.17849v2 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Daniel VandenHeuvel, Sheehan Olver + + + AdaCtrl: Towards Adaptive and Controllable Reasoning via Difficulty-Aware Budgeting + https://arxiv.org/abs/2505.18822 + arXiv:2505.18822v2 Announce Type: replace +Abstract: Modern large reasoning models demonstrate impressive problem-solving capabilities by employing sophisticated reasoning strategies. However, they often struggle to balance efficiency and effectiveness, frequently generating unnecessarily lengthy reasoning chains for simple problems. In this work, we propose AdaCtrl, a novel framework to support both difficulty-aware adaptive reasoning budget allocation and explicit user control over reasoning depth. AdaCtrl dynamically adjusts its reasoning length based on self-assessed problem difficulty, while also allowing users to manually control the budget to prioritize either efficiency or effectiveness. This is achieved through a two-stage training pipeline: an initial cold-start fine-tuning phase to instill the ability to self-assess difficulty and adjust the reasoning budget, followed by a difficulty-aware reinforcement learning (RL) stage that refines the model's adaptive reasoning strategies and calibrates its difficulty assessments based on its evolving capabilities during online training. To enable intuitive user interaction, we design explicit length-triggered tags that function as a natural interface for budget control. 
Empirical results show that AdaCtrl adapts reasoning length based on estimated difficulty. Compared to the standard training baseline that also incorporates fine-tuning and RL, it yields performance improvements and simultaneously reduces response length by 10.06% and 12.14% on the more challenging AIME2024 and AIME2025 datasets, which require elaborate reasoning, and by 62.05% and 91.04% on the MATH500 and GSM8K datasets, where more concise responses are sufficient. Furthermore, AdaCtrl enables precise user control over the reasoning budget, allowing for tailored responses to meet specific needs. + oai:arXiv.org:2505.18822v2 + cs.AI + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Shijue Huang, Hongru Wang, Wanjun Zhong, Zhaochen Su, Jiazhan Feng, Bowen Cao, Yi R. Fung + + + OpenHOI: Open-World Hand-Object Interaction Synthesis with Multimodal Large Language Model + https://arxiv.org/abs/2505.18947 + arXiv:2505.18947v2 Announce Type: replace +Abstract: Understanding and synthesizing realistic 3D hand-object interactions (HOI) is critical for applications ranging from immersive AR/VR to dexterous robotics. Existing methods struggle with generalization, performing well on closed-set objects and predefined tasks but failing to handle unseen objects or open-vocabulary instructions. We introduce OpenHOI, the first framework for open-world HOI synthesis, capable of generating long-horizon manipulation sequences for novel objects guided by free-form language commands. Our approach integrates a 3D Multimodal Large Language Model (MLLM) fine-tuned for joint affordance grounding and semantic task decomposition, enabling precise localization of interaction regions (e.g., handles, buttons) and breakdown of complex instructions (e.g., "Find a water bottle and take a sip") into executable sub-tasks. 
To synthesize physically plausible interactions, we propose an affordance-driven diffusion model paired with a training-free physics refinement stage that minimizes penetration and optimizes affordance alignment. Evaluations across diverse scenarios demonstrate OpenHOI's superiority over state-of-the-art methods in generalizing to novel object categories, multi-stage tasks, and complex language instructions. Our project page is at https://openhoi.github.io + oai:arXiv.org:2505.18947v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Zhenhao Zhang, Ye Shi, Lingxiao Yang, Suting Ni, Qi Ye, Jingya Wang + + + SAEs Are Good for Steering -- If You Select the Right Features + https://arxiv.org/abs/2505.20063 + arXiv:2505.20063v2 Announce Type: replace +Abstract: Sparse Autoencoders (SAEs) have been proposed as an unsupervised approach to learn a decomposition of a model's latent space. This enables useful applications such as steering - influencing the output of a model towards a desired concept - without requiring labeled data. Current methods identify SAE features to steer by analyzing the input tokens that activate them. However, recent work has highlighted that activations alone do not fully describe the effect of a feature on the model's output. In this work, we draw a distinction between two types of features: input features, which mainly capture patterns in the model's input, and output features, which have a human-understandable effect on the model's output. We propose input and output scores to characterize and locate these types of features, and show that high values for both scores rarely co-occur in the same features. These findings have practical implications: after filtering out features with low output scores, we obtain 2-3x improvements when steering with SAEs, making them competitive with supervised methods. 
+ oai:arXiv.org:2505.20063v2 + cs.LG + cs.AI + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.18653/v1/2025.emnlp-main.519 + Dana Arad, Aaron Mueller, Yonatan Belinkov + + + Kronecker Factorization Improves Efficiency and Interpretability of Sparse Autoencoders + https://arxiv.org/abs/2505.22255 + arXiv:2505.22255v3 Announce Type: replace +Abstract: Sparse Autoencoders (SAEs) have demonstrated significant promise in interpreting the hidden states of language models by decomposing them into interpretable latent directions. However, training and interpreting SAEs at scale remains challenging, especially when large dictionary sizes are used. While decoders can leverage sparse-aware kernels for efficiency, encoders still require computationally intensive linear operations with large output dimensions. To address this, we propose KronSAE, a novel architecture that factorizes the latent representation via Kronecker product decomposition, drastically reducing memory and computational overhead. Furthermore, we introduce mAND, a differentiable activation function approximating the binary AND operation, which improves interpretability and performance in our factorized framework. + oai:arXiv.org:2505.22255v3 + cs.LG + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Vadim Kurochkin, Yaroslav Aksenov, Daniil Laptev, Daniil Gavrilov, Nikita Balagansky + + + Less is More: Unlocking Specialization of Time Series Foundation Models via Structured Pruning + https://arxiv.org/abs/2505.23195 + arXiv:2505.23195v3 Announce Type: replace +Abstract: Scaling laws motivate the development of Time Series Foundation Models (TSFMs) that pre-train vast parameters and achieve remarkable zero-shot forecasting performance. Surprisingly, even after fine-tuning, TSFMs cannot consistently outperform smaller, specialized models trained on full-shot downstream data. 
A key question is how to realize effective adaptation of TSFMs for a target forecasting task. Through empirical studies on various TSFMs, the pre-trained models often exhibit inherent sparsity and redundancy in computation, suggesting that TSFMs have learned to activate task-relevant network substructures to accommodate diverse forecasting tasks. To preserve this valuable prior knowledge, we propose a structured pruning method to regularize the subsequent fine-tuning process by focusing it on a more relevant and compact parameter space. Extensive experiments on seven TSFMs and six benchmarks demonstrate that fine-tuning a smaller, pruned TSFM significantly improves forecasting performance compared to fine-tuning original models. This prune-then-finetune paradigm often enables TSFMs to achieve state-of-the-art performance and surpass strong specialized baselines. Source code is made publicly available at https://github.com/SJTU-DMTai/Prune-then-Finetune. + oai:arXiv.org:2505.23195v3 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Lifan Zhao, Yanyan Shen, Zhaoyang Liu, Xue Wang, Jiaji Deng + + + Fast Compressed-Domain N-Point Discrete Fourier Transform: The "Twiddless" FFT Algorithm + https://arxiv.org/abs/2505.23718 + arXiv:2505.23718v3 Announce Type: replace +Abstract: In this work, we present the \emph{twiddless fast Fourier transform (TFFT)}, a novel algorithm for computing the $N$-point discrete Fourier transform (DFT). The TFFT's divide strategy builds on recent results that decimate an $N$-point signal (by a factor of $p$) into an $N/p$-point compressed signal whose DFT readily yields $N/p$ coefficients of the original signal. However, existing compression-domain DFT analyses have been limited to computing only the even-indexed DFT coefficients. 
With TFFT, we overcome this limitation by efficiently computing both \emph{even- and odd-indexed} DFT coefficients in the compressed domain with $O(N \log N)$ complexity. TFFT introduces a new recursive decomposition of the DFT problem, wherein $N/2^i$ coefficients of the original input are computed at recursion level $i$, with no need for twiddle factor multiplications or butterfly structures. Additionally, TFFT generalizes the input length to $N = c \cdot 2^k$ (for $k \geq 0$ and non-power-of-two $c > 0$), reducing the need for zero-padding and potentially improving efficiency and stability over classical FFTs. We believe TFFT represents a \emph{novel paradigm} for DFT computation, opening new directions for research in optimized implementations, hardware design, parallel computation, and sparse transforms. + oai:arXiv.org:2505.23718v3 + cs.CC + cs.DS + eess.SP + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Saulo Queiroz + + + InterMT: Multi-Turn Interleaved Preference Alignment with Human Feedback + https://arxiv.org/abs/2505.23950 + arXiv:2505.23950v2 Announce Type: replace +Abstract: As multimodal large models (MLLMs) continue to advance across challenging tasks, a key question emerges: What essential capabilities are still missing? A critical aspect of human learning is continuous interaction with the environment -- not limited to language, but also involving multimodal understanding and generation. To move closer to human-level intelligence, models must similarly support multi-turn, multimodal interaction. In particular, they should comprehend interleaved multimodal contexts and respond coherently in ongoing exchanges. In this work, we present an initial exploration through the InterMT -- the first preference dataset for multi-turn multimodal interaction, grounded in real human feedback. 
In this exploration, we particularly emphasize the importance of human oversight, introducing expert annotations to guide the process, motivated by the fact that current MLLMs lack such complex interactive capabilities. InterMT captures human preferences at both global and local levels into nine sub-dimensions, consists of 15.6k prompts, 52.6k multi-turn dialogue instances, and 32.4k human-labeled preference pairs. To compensate for the lack of capability for multi-modal understanding and generation, we introduce an agentic workflow that leverages tool-augmented MLLMs to construct multi-turn QA instances. To further this goal, we introduce InterMT-Bench to assess the ability of MLLMs in assisting judges with multi-turn, multimodal tasks. We demonstrate the utility of InterMT through applications such as judge moderation and further reveal the multi-turn scaling law of judge model. We hope the open-source of our data can help facilitate further research on aligning current MLLMs to the next step. Our project website can be found at https://pku-intermt.github.io .
Meanwhile, emerging slow-thinking LLMs (e.g., ChatGPT-o1, DeepSeek-R1) have demonstrated impressive multi-step reasoning capabilities across diverse domains, suggesting a new opportunity for reframing TSF as a structured reasoning task. This motivates a key question: can slow-thinking LLMs effectively reason over temporal patterns to support time series forecasting, even in zero-shot manner? To investigate this, in this paper, we propose TimeReasoner, an extensive empirical study that formulates TSF as a conditional reasoning task. We design a series of prompting strategies to elicit inference-time reasoning from pretrained slow-thinking LLMs and evaluate their performance across diverse TSF benchmarks. Our findings reveal that slow-thinking LLMs exhibit non-trivial zero-shot forecasting capabilities, especially in capturing high-level trends and contextual shifts. While preliminary, our study surfaces important insights into the reasoning behaviors of LLMs in temporal domains highlighting both their potential and limitations. We hope this work catalyzes further research into reasoning-based forecasting paradigms and paves the way toward more interpretable and generalizable TSF frameworks. + oai:arXiv.org:2505.24511v4 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Mingyue Cheng, Jiahao Wang, Daoyu Wang, Xiaoyu Tao, Qi Liu, Enhong Chen + + + WANDER: An Explainable Decision-Support Framework for HPC + https://arxiv.org/abs/2506.04049 + arXiv:2506.04049v2 Announce Type: replace +Abstract: High-performance computing (HPC) systems expose many interdependent configuration knobs that impact runtime, resource usage, power, and variability. Existing predictive tools model these outcomes, but do not support structured exploration, explanation, or guided reconfiguration. 
We present WANDER, a decision-support framework that synthesizes alternate configurations using counterfactual analysis aligned with user goals and constraints. We introduce a composite trade-off score that ranks suggestions based on prediction uncertainty, consistency between feature-target relationships using causal models, and similarity between feature distributions against historical data. To our knowledge, WANDER is the first such system to unify prediction, exploration, and explanation for HPC tuning under a common query interface. Across multiple datasets WANDER generates interpretable and trustworthy, human-readable alternatives that guide users to achieve their performance objectives. + oai:arXiv.org:2506.04049v2 + cs.PF + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Ankur Lahiry, Banooqa Banday, Tanzima Z. Islam + + + Structural Integrality in Task Assignment and Path Finding via Total Unimodularity of Petri Net Models + https://arxiv.org/abs/2506.04881 + arXiv:2506.04881v4 Announce Type: replace +Abstract: Task Assignment and Path Finding (TAPF) concerns computing collision-free motions for multiple robots while jointly selecting goal locations. In this paper, safety is enforced by requiring unit-capacity traversal between successive intermediate markings, yielding coordination strategies that are valid independently of any specific time interpretation. Existing optimization-based approaches typically rely on time-expanded network-flow models, which result in large mixed-integer programs and limited scalability. We instead develop a Petri net (PN)-based optimization framework that exploits structural properties of the motion model to improve computational efficiency without explicit time expansion. 
+ When robot motion is modeled by strongly connected state-machine PNs, we show that, once the congestion level (equivalently, the synchronization depth) is fixed to an integer value, the resulting motion-planning constraint matrix is totally unimodular. Consequently, the corresponding LP relaxation admits integral optimal solutions for the motion variables. When the estimated congestion exceeds one, we introduce a synchronization-on-demand mechanism based on intermediate markings; for a fixed number of synchronization stages, the associated constraint matrices remain totally unimodular, thereby preserving integrality of the motion variables. + Finally, we extend TAPF to Boolean specifications over regions of interest and propose a two-stage LP/mixed-integer linear programming (MILP) scheme in which integrality is confined to task-selection variables. Simulations on large benchmarks demonstrate substantial scalability improvements over time-expanded optimization baselines. + oai:arXiv.org:2506.04881v4 + cs.RO + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ioana Hustiu, Roozbeh Abolpour, Marius Kloetzer, Cristian Mahulea + + + SoK: Are Watermarks in LLMs Ready for Deployment? + https://arxiv.org/abs/2506.05594 + arXiv:2506.05594v2 Announce Type: replace +Abstract: Large Language Models (LLMs) have transformed natural language processing, demonstrating impressive capabilities across diverse tasks. However, deploying these models introduces critical risks related to intellectual property violations and potential misuse, particularly as adversaries can imitate these models to steal services or generate misleading outputs. We specifically focus on model stealing attacks, as they are highly relevant to proprietary LLMs and pose a serious threat to their security, revenue, and ethical deployment. 
While various watermarking techniques have emerged to mitigate these risks, it remains unclear how far the community and industry have progressed in developing and deploying watermarks in LLMs. + To bridge this gap, we aim to develop a comprehensive systematization for watermarks in LLMs by 1) presenting a detailed taxonomy for watermarks in LLMs, 2) proposing a novel intellectual property classifier to explore the effectiveness and impacts of watermarks on LLMs under both attack and attack-free environments, 3) analyzing the limitations of existing watermarks in LLMs, and 4) discussing practical challenges and potential future directions for watermarks in LLMs. Through extensive experiments, we show that despite promising research outcomes and significant attention from leading companies and community to deploy watermarks, these techniques have yet to reach their full potential in real-world applications due to their unfavorable impacts on model utility of LLMs and downstream tasks. Our findings provide an insightful understanding of watermarks in LLMs, highlighting the need for practical watermarks solutions tailored to LLM deployment. + oai:arXiv.org:2506.05594v2 + cs.CR + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Kieu Dang, Phung Lai, NhatHai Phan, Yelong Shen, Ruoming Jin, Abdallah Khreishah, My T. Thai + + + Super Encoding Network: Recursive Association of Multi-Modal Encoders for Video Understanding + https://arxiv.org/abs/2506.07576 + arXiv:2506.07576v2 Announce Type: replace +Abstract: Video understanding has been considered as one critical step towards world modeling, which is an important long-term problem in AI research. Recently, multimodal foundation models have shown such potential via large-scale pretraining. These models effectively align encoders of different modalities via contrastive learning. 
To further enhance performance on complex target movements and diversified video scenes, we propose to augment this alignment with deeper multimodal interactions, which are critical for understanding complex target movements with diversified video scenes. To fill this gap, we propose a unified Super Encoding Network (SEN) for video understanding, which builds up such distinct interactions through the recursive association of multimodal encoders in the foundation models. Specifically, we creatively treat those well-trained encoders as ``super neurons" in our SEN. Via designing a Recursive Association (RA) block, we progressively fuse multi-modalities with the input video, based on knowledge integrating, distributing, and prompting of super neurons in a recursive manner. In this way, our SEN can effectively encode deeper multimodal interactions for prompting various video understanding tasks in the downstream. Extensive experiments show that our SEN can remarkably boost the four most representative video tasks, including tracking, recognition, chatting, and editing, e.g., for pixel-level tracking, the average jaccard index improves 2.7%, and temporal coherence(TC) drops by 8.8% compared to the popular CaDeX++ approach. For one-shot video editing, textual alignment improves 6.4%, and frame consistency increases by 4.1% compared to the Tune-A-Video approach. + oai:arXiv.org:2506.07576v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Boyu Chen, Siran Chen, Kunchang Li, Qinglin Xu, Yu Qiao, Yali Wang + + + MOORL: A Framework for Integrating Offline-Online Reinforcement Learning + https://arxiv.org/abs/2506.09574 + arXiv:2506.09574v2 Announce Type: replace +Abstract: Sample efficiency and exploration remain critical challenges in Deep Reinforcement Learning (DRL), particularly in complex domains. 
Offline RL, which enables agents to learn optimal policies from static, pre-collected datasets, has emerged as a promising alternative. However, offline RL is constrained by issues such as out-of-distribution (OOD) actions that limit policy performance and generalization. To overcome these limitations, we propose Meta Offline-Online Reinforcement Learning (MOORL), a hybrid framework that unifies offline and online RL for efficient and scalable learning. While previous hybrid methods rely on extensive design components and added computational complexity to utilize offline data effectively, MOORL introduces a meta-policy that seamlessly adapts across offline and online trajectories. This enables the agent to leverage offline data for robust initialization while utilizing online interactions to drive efficient exploration. Our theoretical analysis demonstrates that the hybrid approach enhances exploration by effectively combining the complementary strengths of offline and online data. Furthermore, we demonstrate that MOORL learns a stable Q-function without added complexity. Extensive experiments on 28 tasks from the D4RL and V-D4RL benchmarks validate its effectiveness, showing consistent improvements over state-of-the-art offline and hybrid RL baselines. With minimal computational overhead, MOORL achieves strong performance, underscoring its potential for practical applications in real-world scenarios. + oai:arXiv.org:2506.09574v2 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Gaurav Chaudhary, Wassim Uddin Mondal, Laxmidhar Behera + + + FastFLUX: Pruning FLUX with Block-wise Replacement and Sandwich Training + https://arxiv.org/abs/2506.10035 + arXiv:2506.10035v2 Announce Type: replace +Abstract: Recent advancements in text-to-image (T2I) generation have led to the emergence of highly expressive models such as diffusion transformers (DiTs), exemplified by FLUX. 
However, their massive parameter sizes lead to slow inference, high memory usage, and poor deployability. Existing acceleration methods (e.g., single-step distillation and attention pruning) often suffer from significant performance degradation and incur substantial training costs. To address these limitations, we propose FastFLUX, an architecture-level pruning framework designed to enhance the inference efficiency of FLUX. At its core is the Block-wise Replacement with Linear Layers (BRLL) method, which replaces structurally complex residual branches in ResBlocks with lightweight linear layers while preserving the original shortcut connections for stability. Furthermore, we introduce Sandwich Training (ST), a localized fine-tuning strategy that leverages LoRA to supervise neighboring blocks, mitigating performance drops caused by structural replacement. Experiments show that our FastFLUX maintains high image quality under both qualitative and quantitative evaluations, while significantly improving inference speed, even with 20\% of the hierarchy pruned. Our code will be available soon. + oai:arXiv.org:2506.10035v2 + cs.GR + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Fuhan Cai, Yong Guo, Jie Li, Wenbo Li, Jian Chen, Xiangzhong Fang + + + MSTAR: Box-free Multi-query Scene Text Retrieval with Attention Recycling + https://arxiv.org/abs/2506.10609 + arXiv:2506.10609v2 Announce Type: replace +Abstract: Scene text retrieval has made significant progress with the assistance of accurate text localization. However, existing approaches typically require costly bounding box annotations for training. Besides, they mostly adopt a customized retrieval strategy but struggle to unify various types of queries to meet diverse retrieval needs. To address these issues, we introduce Muti-query Scene Text retrieval with Attention Recycling (MSTAR), a box-free approach for scene text retrieval. 
It incorporates progressive vision embedding to dynamically capture the multi-grained representation of texts and harmonizes free-style text queries with style-aware instructions. Additionally, a multi-instance matching module is integrated to enhance vision-language alignment. Furthermore, we build the Multi-Query Text Retrieval (MQTR) dataset, the first benchmark designed to evaluate the multi-query scene text retrieval capability of models, comprising four query types and 16k images. Extensive experiments demonstrate the superiority of our method across seven public datasets and the MQTR dataset. Notably, MSTAR marginally surpasses the previous state-of-the-art model by 6.4% in MAP on Total-Text while eliminating box annotation costs. Moreover, on the MQTR benchmark, MSTAR significantly outperforms the previous models by an average of 8.5%. The code and datasets are available at https://github.com/yingift/MSTAR.
 + oai:arXiv.org:2506.11112v2
 + cs.CL
 + cs.HC
 + cs.IR
 + Tue, 23 Dec 2025 00:00:00 -0500
 + replace
 + http://creativecommons.org/licenses/by-nc-nd/4.0/
 + 10.4230/DagMan.11.1.19
 + Dagstuhl Manifestos, Volume 11, Issue 1, pp. 19-67, Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2025)
 + Christine Bauer, Li Chen, Nicola Ferro, Norbert Fuhr, Avishek Anand, Timo Breuer, Guglielmo Faggioli, Ophir Frieder, Hideo Joho, Jussi Karlgren, Johannes Kiesel, Bart P. Knijnenburg, Aldo Lipani, Lien Michiels, Andrea Papenmeier, Maria Soledad Pera, Mark Sanderson, Scott Sanner, Benno Stein, Johanne R. Trippas, Karin Verspoor, Martijn C Willemsen
 + 
 + Mitigating Hallucination Through Theory-Consistent Symmetric Multimodal Preference Optimization
 + https://arxiv.org/abs/2506.11712
 + arXiv:2506.11712v3 Announce Type: replace
Abstract: Direct Preference Optimization (DPO) has emerged as an effective approach for mitigating hallucination in Multimodal Large Language Models (MLLMs). Although existing methods have achieved significant progress by utilizing vision-oriented contrastive objectives for enhancing MLLMs' attention to visual inputs and hence reducing hallucination, they suffer from non-rigorous optimization objective function and indirect preference supervision. To address these limitations, we propose a Symmetric Multimodal Preference Optimization (SymMPO), which conducts symmetric preference learning with direct preference supervision (i.e., response pairs) for visual understanding enhancement, while maintaining rigorous theoretical alignment with standard DPO. In addition to conventional ordinal preference learning, SymMPO introduces a preference margin consistency loss to quantitatively regulate the preference gap between symmetric preference pairs. Comprehensive evaluation across five benchmarks demonstrates SymMPO's superior performance, validating its effectiveness in hallucination mitigation of MLLMs. 
+ oai:arXiv.org:2506.11712v3 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Wenqi Liu, Xuemeng Song, Jiaxi Li, Yinwei Wei, Na Zheng, Jianhua Yin, Liqiang Nie + + + Good Noise Makes Good Edits: A Training-Free Diffusion-Based Video Editing with Image and Text Prompts + https://arxiv.org/abs/2506.12520 + arXiv:2506.12520v2 Announce Type: replace +Abstract: We propose VINO, the first zero-shot, training-free video editing method conditioned on both image and text. Our approach introduces $\rho$-start sampling and dilated dual masking to construct structured noise maps that enable coherent and accurate edits. To further enhance visual fidelity, we present zero image guidance, a controllable negative prompt strategy. Extensive experiments demonstrate that VINO faithfully incorporates the reference image into video edits, achieving strong performance compared to state-of-the-art baselines, all without any test-time or instance-specific training. + oai:arXiv.org:2506.12520v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Saemee Choi, Sohyun Jeong, Hyojin Jang, Jaegul Choo, Jinhee Kim + + + Universal Jailbreak Suffixes Are Strong Attention Hijackers + https://arxiv.org/abs/2506.12880 + arXiv:2506.12880v2 Announce Type: replace +Abstract: We study suffix-based jailbreaks$\unicode{x2013}$a powerful family of attacks against large language models (LLMs) that optimize adversarial suffixes to circumvent safety alignment. Focusing on the widely used foundational GCG attack, we observe that suffixes vary in efficacy: some are markedly more universal$\unicode{x2013}$generalizing to many unseen harmful instructions$\unicode{x2013}$than others. We first show that a shallow, critical mechanism drives GCG's effectiveness. This mechanism builds on the information flow from the adversarial suffix to the final chat template tokens before generation. 
Quantifying the dominance of this mechanism during generation, we find GCG irregularly and aggressively hijacks the contextualization process. Crucially, we tie hijacking to the universality phenomenon, with more universal suffixes being stronger hijackers. Subsequently, we show that these insights have practical implications: GCG's universality can be efficiently enhanced (up to $\times$5 in some cases) at no additional computational cost, and can also be surgically mitigated, at least halving the attack's success with minimal utility loss. We release our code and data at http://github.com/matanbt/interp-jailbreak. + oai:arXiv.org:2506.12880v2 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Matan Ben-Tov, Mor Geva, Mahmood Sharif + + + AdaLRS: Loss-Guided Adaptive Learning Rate Search for Efficient Foundation Model Pretraining + https://arxiv.org/abs/2506.13274 + arXiv:2506.13274v3 Announce Type: replace +Abstract: Learning rate is widely regarded as crucial for effective foundation model pretraining. Recent research explores and demonstrates the transferability of learning rate configurations across varying model and dataset sizes, etc. Nevertheless, these approaches are constrained to specific training scenarios and typically necessitate extensive hyperparameter tuning on proxy models. In this work, we propose \textbf{AdaLRS}, a plug-in-and-play adaptive learning rate search algorithm that conducts online optimal learning rate search via optimizing loss descent velocities. We provide theoretical and experimental analyzes to show that foundation model pretraining loss and its descent velocity are both convex and share the same optimal learning rate. Relying solely on training loss dynamics, AdaLRS involves few extra computations to guide the search process, and its convergence is guaranteed via theoretical analysis. 
Experiments on both LLM and VLM pretraining show that AdaLRS adjusts suboptimal learning rates to the neighborhood of optimum with marked efficiency and effectiveness, with model performance improved accordingly. We also show the robust generalizability of AdaLRS across varying training scenarios, such as different model sizes, training paradigms, base learning rate scheduler choices, and hyperparameter settings.
Extensive experiments on SemEval-2016, VAST, and COVID-19-Stance benchmarks demonstrate that CIRF not only establishes new state-of-the-art results, but also achieves comparable performance with just 30\% of the labeled data, demonstrating its strong generalization and efficiency in low-resource settings. + oai:arXiv.org:2506.13470v2 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Bowen Zhang, Jun Ma, Fuqiang Niu, Li Dong, Jinzhou Cao, Genan Dai + + + BattBee: Equivalent Circuit Modeling and Early Detection of Thermal Runaway Triggered by Internal Short Circuits for Lithium-Ion Batteries + https://arxiv.org/abs/2506.13577 + arXiv:2506.13577v2 Announce Type: replace +Abstract: Lithium-ion batteries are the enabling power source for transportation electrification. However, in real-world applications, they remain vulnerable to internal short circuits (ISCs) and the consequential risk of thermal runaway (TR). Toward addressing the challenge of ISCs and TR, we undertake a systematic study that extends from dynamic modeling to fault detection in this paper. First, we develop {\em BattBee}, the first equivalent circuit model to specifically describe the onset of ISCs and the evolution of subsequently induced TR. Drawing upon electrochemical modeling, the model can simulate ISCs at different severity levels and predict their impact on the initiation and progression of TR events. With the physics-inspired design, this model offers strong physical interpretability and predictive accuracy, while maintaining structural simplicity to allow fast computation. Then, building upon the BattBee model, we develop fault detection observers and derive detection criteria together with decision-making logics to identify the occurrence and emergence of ISC and TR events. This detection approach is principled in design and fast in computation, lending itself to practical applications. 
Validation based on simulations and experimental data demonstrates the effectiveness of both the BattBee model and the ISC/TR detection approach. The research outcomes underscore this study's potential for real-world battery safety risk management. + oai:arXiv.org:2506.13577v2 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Sangwon Kang, Hao Tu, Huazhen Fang + + + Value-Free Policy Optimization via Reward Partitioning + https://arxiv.org/abs/2506.13702 + arXiv:2506.13702v3 Announce Type: replace +Abstract: Single-trajectory reinforcement learning (RL) methods aim to optimize policies from datasets consisting of (prompt, response, reward) triplets, where scalar rewards are directly available. This supervision format is highly practical, as it mirrors real-world human feedback, such as thumbs-up/down signals, and avoids the need for structured preference annotations. In contrast, pairwise preference-based methods like Direct Preference Optimization (DPO) rely on datasets with both preferred and dispreferred responses, which are harder to construct and less natural to collect. Among single-trajectory approaches, Direct Reward Optimization (DRO) has shown strong empirical performance due to its simplicity and stability. However, DRO requires approximating a value function, which introduces several limitations: high off-policy variance, coupling between policy and value learning, and a lack of absolute supervision on the policy itself. We introduce Reward Partitioning Optimization (RPO), a new method that resolves these limitations by removing the need to model the value function. Instead, RPO normalizes observed rewards using a partitioning approach estimated directly from data. This leads to a straightforward supervised learning objective on the policy, with no auxiliary models and no joint optimization. 
RPO provides direct and stable supervision on the policy, making it robust and easy to implement in practice. We validate RPO on scalar-feedback language modeling tasks using Flan-T5 encoder-decoder models. Our results demonstrate that RPO outperforms existing single-trajectory baselines such as DRO and Kahneman-Tversky Optimization (KTO). These findings confirm that RPO is a simple, effective, and theoretically grounded method for single-trajectory policy optimization. + oai:arXiv.org:2506.13702v3 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Bilal Faye, Hanane Azzag, Mustapha Lebbah + + + PRISM-Loc: a Lightweight Long-range LiDAR Localization in Urban Environments with Topological Maps + https://arxiv.org/abs/2506.15849 + arXiv:2506.15849v2 Announce Type: replace +Abstract: We propose PRISM-Loc - a lightweight and robust approach for localization in large outdoor environments that combines a compact topological representation with a novel scan-matching and curb-detection module operating on raw LiDAR scans. The method is designed for resource-constrained platforms and emphasizes real-time performance and resilience to common urban sensing challenges. It provides accurate localization in compact topological maps using global place recognition and an original scan matching technique. Experiments on standard benchmarks and on an embedded platform demonstrate the effectiveness of our approach. Our method achieves a 99\% success rate on the large-scale ITLP-Campus dataset while running at 150 ms per localization and using a 20 MB map. We highlight three main contributions: (1) a compact representation for city-scale localization; (2) a novel curb detection and scan matching pipeline operating directly on raw LiDAR points; (3) a thorough evaluation of our method with performance analysis. 
+ oai:arXiv.org:2506.15849v2 + cs.RO + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Kirill Muravyev, Artem Kobozev, Vasily Yuryev, Alexander Melekhin, Oleg Bulichev, Dmitry Yudin, Konstantin Yakovlev + + + Subspace-Boosted Model Merging + https://arxiv.org/abs/2506.16506 + arXiv:2506.16506v3 Announce Type: replace +Abstract: Model merging enables the combination of multiple specialized expert models into a single model capable of performing multiple tasks. However, the benefits of merging an increasing number of specialized experts generally lead to diminishing returns and reduced overall performance gains. In this work, we empirically and theoretically analyze this limitation, proving that for Task Arithmetic-based methods, as more experts are merged, the common information dominates the task-specific information, leading to inevitable rank collapse. To mitigate this issue, we introduce Subspace Boosting, which operates on the singular value decomposed task vector space and maintains task vector ranks. Subspace Boosting raises merging efficacy for up to 20 experts by large margins of more than 10% when evaluated on both vision and language benchmarks. Moreover, we propose employing Higher-Order Generalized Singular Value Decomposition to quantify task similarity, offering a new interpretable perspective on model merging. Code and models are available at https://github.com/ronskoro/Subspace-Boosting. 
+ oai:arXiv.org:2506.16506v3 + cs.LG + cs.AI + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Ronald Skorobogat, Karsten Roth, Mariana-Iuliana Georgescu + + + Efficient and Stealthy Jailbreak Attacks via Adversarial Prompt Distillation from LLMs to SLMs + https://arxiv.org/abs/2506.17231 + arXiv:2506.17231v2 Announce Type: replace +Abstract: As the scale and complexity of jailbreaking attacks on large language models (LLMs) continue to escalate, their efficiency and practical applicability are constrained, posing a profound challenge to LLM security. Jailbreaking techniques have advanced from manual prompt engineering to automated methodologies. Recent automated approaches harness LLMs to generate jailbreak instructions and adversarial examples, delivering encouraging results. Nevertheless, these methods universally include an LLM generation phase, which, due to the complexities of deploying and reasoning with LLMs, impedes effective implementation and broader adoption. To mitigate this issue, we introduce \textbf{Adversarial Prompt Distillation}, an innovative framework that integrates masked language modeling, reinforcement learning, and dynamic temperature control to distill LLM jailbreaking prowess into smaller language models (SLMs). This methodology enables efficient, robust jailbreak attacks while maintaining high success rates and accommodating a broader range of application contexts. Empirical evaluations affirm the approach's superiority in attack efficacy, resource optimization, and cross-model versatility. Our research underscores the practicality of transferring jailbreak capabilities to SLMs, reveals inherent vulnerabilities in LLMs, and provides novel insights to advance LLM security investigations. Our code is available at: https://github.com/lxgem/Efficient_and_Stealthy_Jailbreak_Attacks_via_Adversarial_Prompt. 
+ oai:arXiv.org:2506.17231v2 + cs.CL + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Xiang Li, Chong Zhang, Jia Wang, Fangyu Wu, Yushi Li, Xiaobo Jin + + + Mirage of Mastery: Memorization Tricks LLMs into Artificially Inflated Self-Knowledge + https://arxiv.org/abs/2506.18998 + arXiv:2506.18998v3 Announce Type: replace +Abstract: When artificial intelligence mistakes memorization for intelligence, it creates a dangerous mirage of reasoning. Existing studies treat memorization and self-knowledge deficits in LLMs as separate issues and do not recognize an intertwining link that degrades the trustworthiness of LLM responses. In our study, we utilize a novel framework to ascertain if LLMs genuinely learn reasoning patterns from training data or merely memorize them to assume competence across problems of similar complexity, with a focus on STEM domains. Our analysis shows a noteworthy problem in generalization: LLMs draw confidence from memorized solutions to infer a higher self-knowledge about their reasoning ability, which manifests as an over 45% inconsistency in feasibility assessments when faced with self-validated, logically coherent task perturbations. This effect is most pronounced in science and medicine domains, which tend to have highly standardized jargon and problems, further confirming our approach. Significant wavering within the self-knowledge of LLMs also shows flaws in current architectures and training patterns, highlighting the need for techniques that ensure a balanced, consistent stance on models' perceptions of their own knowledge for maximum AI explainability and trustworthiness. 
Our code and results are available publicly at https://github.com/Sahil-R-Kale/mirage_of_mastery + oai:arXiv.org:2506.18998v3 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Sahil Kale + + + 3D Shape Generation: A Survey + https://arxiv.org/abs/2506.22678 + arXiv:2506.22678v2 Announce Type: replace +Abstract: Recent advances in deep learning have significantly transformed the field of 3D shape generation, enabling the synthesis of complex, diverse, and semantically meaningful 3D objects. This survey provides a comprehensive overview of the current state-of-the-art in 3D shape generation, organizing the discussion around three core components: shape representations, generative modeling approaches, and evaluation protocols. We begin by categorizing 3D representations into explicit, implicit, and hybrid setups, highlighting their structural properties, advantages, and limitations. Next, we review a wide range of generation methods, focusing on feedforward architectures. We further summarize commonly used datasets and evaluation metrics that assess fidelity, diversity, and realism of generated shapes. Finally, we identify open challenges and outline future research directions that could drive progress in controllable, efficient, and high-quality 3D shape generation. This survey aims to serve as a valuable reference for researchers and practitioners seeking a structured and in-depth understanding of this rapidly evolving field. + oai:arXiv.org:2506.22678v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Nicolas Caytuiro, Ivan Sipiran + + + Libra: Unleashing GPU Heterogeneity for High-Performance Sparse Matrix Multiplication + https://arxiv.org/abs/2506.22714 + arXiv:2506.22714v2 Announce Type: replace +Abstract: Sparse matrix multiplication operators (i.e., SpMM and SDDMM) are widely used in deep learning and scientific computing. 
Modern accelerators are commonly equipped with Tensor Core Units (TCUs) and CUDA cores to accelerate sparse operators. The former excels at structured matrix computations, whereas the latter offers greater programming flexibility. However, how to combine these two resources to maximize sparse-operator performance remains unclear. In this work, we first identify the source of performance gains in hybrid computation and systematically analyze their complementary strengths. Motivated by this, we propose Libra, a holistic framework that efficiently leverages heterogeneous computing resources to accelerate both SpMM and SDDMM operators. Specifically, Libra introduces a 2D-aware (locality and utilization) workload distribution method to precisely identify the optimal task mapping, simultaneously leveraging the data reuse capabilities of TCUs and the flexibility of CUDA cores to minimize computational redundancy. Libra further incorporates hybrid load balancing, occupancy-aware task scheduling, and efficient kernel implementations to maximize execution efficiency. Extensive experiments on H100 and RTX 4090 GPUs demonstrate that Libra significantly surpasses all 12 up-to-date baselines, e.g., achieving on average a 1.77x speedup over FlashSparse, 1.73x over RoDe, and 2.9x over DGL for end-to-end GNN applications. Libra opens up a new perspective for sparse operator acceleration by fully unleashing the power of heterogeneous GPU resources. + oai:arXiv.org:2506.22714v2 + cs.DC + cs.LG + cs.PF + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jinliang Shi, Shigang Li, Youxuan Xu, Xueying Wang, Rongtian Fu, Zhi Ma, Tong Wu + + + Cooperation as a Black Box: Conceptual Fluctuation and Diagnostic Tools for Misalignment in MAS + https://arxiv.org/abs/2506.22876 + arXiv:2506.22876v2 Announce Type: replace +Abstract: Misalignment in Multi-Agent Systems (MAS) is frequently treated as a technical failure. 
Yet, issues may arise from the conceptual design phase, where semantic ambiguity and normative projection occur. The Rabbit-Duck illusion illustrates how perspective-dependent readings of agent behavior, such as the conflation of cooperation and coordination, can create epistemic instability; e.g., coordinated agents in cooperative Multi-Agent Reinforcement Learning (MARL) benchmarks being interpreted as morally aligned, despite being optimized for shared utility maximization only. Motivated by three drivers of meaning-level misalignment in MAS (coordination-cooperation ambiguity, conceptual fluctuation, and semantic instability), we introduce the Misalignment Mosaic: a framework for diagnosing how misalignment emerges through language, framing, and design assumptions. The Mosaic comprises four components: 1. Terminological Inconsistency, 2. Interpretive Ambiguity, 3. Concept-to-Code Decay, and 4. Morality as Cooperation. Building on insights from the Morality-as-Cooperation Theory, we call for consistent meaning-level grounding in MAS to ensure systems function as intended: technically and ethically. This need is particularly urgent as MAS principles influence broader Artificial Intelligence (AI) workflows, amplifying risks in trust, interpretability, and governance. While this work focuses on the coordination/cooperation ambiguity, the Mosaic generalizes to other overloaded terms, such as alignment, autonomy, and trust. + oai:arXiv.org:2506.22876v2 + cs.MA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Shayak Nandi, Fernanda M. Eliott + + + TAG-WM: Tamper-Aware Generative Image Watermarking via Diffusion Inversion Sensitivity + https://arxiv.org/abs/2506.23484 + arXiv:2506.23484v4 Announce Type: replace +Abstract: AI-generated content (AIGC) enables efficient visual creation but raises copyright and authenticity risks. 
As a common technique for integrity verification and source tracing, digital image watermarking is regarded as a potential solution to above issues. However, the widespread adoption and advancing capabilities of generative image editing tools have amplified malicious tampering risks, while simultaneously posing new challenges to passive tampering detection and watermark robustness. To address these challenges, this paper proposes a Tamper-Aware Generative image WaterMarking method named TAG-WM. The proposed method comprises four key modules: a dual-mark joint sampling (DMJS) algorithm for embedding copyright and localization watermarks into the latent space while preserving generative quality, the watermark latent reconstruction (WLR) utilizing reversed DMJS, a dense variation region detector (DVRD) leveraging diffusion inversion sensitivity to identify tampered areas via statistical deviation analysis, and the tamper-aware decoding (TAD) guided by localization results. The experimental results demonstrate that TAG-WM achieves state-of-the-art performance in both tampering robustness and localization capability even under distortion, while preserving lossless generation quality and maintaining a watermark capacity of 256 bits. The code is available at: https://github.com/Suchenl/TAG-WM. + oai:arXiv.org:2506.23484v4 + cs.MM + cs.CV + eess.IV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Yuzhuo Chen, Zehua Ma, Han Fang, Weiming Zhang, Nenghai Yu + + + Campus5G: A Campus Scale Private 5G Open RAN Testbed + https://arxiv.org/abs/2506.23740 + arXiv:2506.23740v2 Announce Type: replace +Abstract: Mobile networks are embracing disaggregation, reflected by the industry trend towards Open RAN. Private 5G networks are viewed as particularly suitable contenders as early adopters of Open RAN, owing to their setting, high degree of control, and opportunity for innovation they present. 
Motivated by this, we have recently deployed Campus5G, the first-of-its-kind campus-wide, O-RAN-compliant private 5G testbed across the central campus of the University of Edinburgh. We present in detail our process of developing the testbed, from planning to architecting to deployment, and measure the testbed's performance. We then discuss the lessons learned from building the testbed, and highlight some research opportunities that emerged from our deployment experience. + oai:arXiv.org:2506.23740v2 + cs.NI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-sa/4.0/ + Andrew E. Ferguson, Ujjwal Pawar, Tianxin Wang, Mahesh K. Marina + + + cVLA: Towards Efficient Camera-Space VLAs + https://arxiv.org/abs/2507.02190 + arXiv:2507.02190v2 Announce Type: replace +Abstract: Vision-Language-Action (VLA) models offer a compelling framework for tackling complex robotic manipulation tasks, but they are often expensive to train. In this paper, we propose a novel VLA approach that leverages the competitive performance of Vision Language Models (VLMs) on 2D images to directly infer robot end-effector poses in image frame coordinates. Unlike prior VLA models that output low-level controls, our model predicts trajectory waypoints, making it both more efficient to train and robot embodiment agnostic. Despite its lightweight design, our next-token prediction architecture effectively learns meaningful and executable robot trajectories. We further explore the underutilized potential of incorporating depth images, inference-time techniques such as decoding strategies, and demonstration-conditioned action generation. Our model is trained on a simulated dataset and exhibits strong sim-to-real transfer capabilities. We evaluate our approach using a combination of simulated and real data, demonstrating its effectiveness on a real robotic system. 
+ oai:arXiv.org:2507.02190v2 + cs.RO + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Max Argus, Jelena Bratulic, Houman Masnavi, Maxim Velikanov, Nick Heppert, Abhinav Valada, Thomas Brox + + + MAGIC: Few-Shot Mask-Guided Anomaly Inpainting with Prompt Perturbation, Spatially Adaptive Guidance, and Context Awareness + https://arxiv.org/abs/2507.02314 + arXiv:2507.02314v4 Announce Type: replace +Abstract: Few-shot anomaly generation is a key challenge in industrial quality control. Although diffusion models are promising, existing methods struggle: global prompt-guided approaches corrupt normal regions, and existing inpainting-based methods often lack the in-distribution diversity essential for robust downstream models. We propose MAGIC, a fine-tuned inpainting framework that generates high-fidelity anomalies that strictly adhere to the mask while maximizing this diversity. MAGIC introduces three complementary components: (i) Gaussian prompt perturbation, which prevents model overfitting in the few-shot setting by learning and sampling from a smooth manifold of realistic anomalies, (ii) spatially adaptive guidance that applies distinct guidance strengths to the anomaly and background regions, and (iii) context-aware mask alignment to relocate masks for plausible placement within the host object. 
Under a consistent evaluation protocol, MAGIC outperforms state-of-the-art methods on diverse anomaly datasets in downstream tasks. + oai:arXiv.org:2507.02314v4 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + JaeHyuck Choi, MinJun Kim, Je Hyeong Hong + + + ESSA: Evolutionary Strategies for Scalable Alignment + https://arxiv.org/abs/2507.04453 + arXiv:2507.04453v3 Announce Type: replace +Abstract: Alignment of Large Language Models (LLMs) typically relies on Reinforcement Learning from Human Feedback (RLHF) with gradient-based optimizers such as Proximal Policy Optimization (PPO) or Group Relative Policy Optimization (GRPO). While effective, these methods require complex distributed training, large memory budgets, and careful hyperparameter tuning, all of which become increasingly difficult at billion-parameter scale. We present ESSA, Evolutionary Strategies for Scalable Alignment, a gradient-free framework that aligns LLMs using only forward inference and black-box optimization. ESSA focuses optimization on Low-Rank Adapters (LoRA) and further compresses their parameter space by optimizing only the singular values from a singular value decomposition (SVD) of each adapter matrix. This dimensionality reduction makes evolutionary search practical even for very large models and allows efficient operation in quantized INT4 and INT8 inference mode. In our experiments, ESSA improves the test accuracy of Qwen2.5-Math-7B by 12.6% on GSM8K and 14.8% on PRM800K, and raises the accuracy of LLaMA3.1-8B on IFEval by 22.5%, all compared with GRPO. In large-scale settings ESSA shows stronger scaling than gradient-based methods: on Qwen2.5-32B for PRM800K it reaches near-optimal accuracy twice as fast on 16 GPUs and six times as fast on 128 GPUs compared with GRPO. 
These results position evolutionary strategies as a compelling, hardware-friendly alternative to gradient-based LLM alignment, combining competitive quality with substantially reduced wall-clock time and engineering overhead. + oai:arXiv.org:2507.04453v3 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Daria Korotyshova, Boris Shaposhnikov, Alexey Malakhov, Alexey Khokhulin, Nikita Surnachev, Kirill Ovcharenko, George Bredis, Alexey Gorbatovski, Viacheslav Sinii, Daniil Gavrilov + + + Wi-Fi: Twenty-Five Years and Counting + https://arxiv.org/abs/2507.09613 + arXiv:2507.09613v2 Announce Type: replace +Abstract: Today, Wi-Fi is over 25 years old. Yet, despite sharing the same branding name, today's Wi-Fi boasts entirely new capabilities that were not even on the roadmap 25 years ago. This article aims to provide a holistic and comprehensive technical and historical tutorial on Wi-Fi, beginning with IEEE 802.11b (Wi-Fi 1) and looking forward to IEEE 802.11bn (Wi-Fi 8). This is the first tutorial article to span these eight generations. Rather than a generation-by-generation exposition, we describe the key mechanisms that have advanced Wi-Fi. We begin by discussing spectrum allocation and coexistence, and detailing the IEEE 802.11 standardization cycle. Second, we provide an overview of the physical layer and describe key elements that have enabled data rates to increase by over 1,000x. Third, we describe how Wi-Fi Medium Access Control has been enhanced from the original Distributed Coordination Function to now include capabilities spanning from frame aggregation to wideband spectrum access. Fourth, we describe how Wi-Fi 5 first broke the one-user-at-a-time paradigm and introduced multi-user access. Fifth, given the increasing use of mobile, battery-powered devices, we describe Wi-Fi's energy-saving mechanisms over the generations. 
Sixth, we discuss how Wi-Fi was enhanced to seamlessly aggregate spectrum across 2.4 GHz, 5 GHz, and 6 GHz bands to improve throughput, reliability, and latency. Finally, we describe how Wi-Fi enables nearby Access Points to coordinate in order to improve performance and efficiency. In the Appendix, we further discuss Wi-Fi developments beyond 802.11bn, including integrated mmWave operations, sensing, security and privacy extensions, and the adoption of AI/ML. + oai:arXiv.org:2507.09613v2 + cs.NI + cs.IT + eess.SP + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Giovanni Geraci, Francesca Meneghello, Francesc Wilhelmi, David Lopez-Perez, I\~naki Val, Lorenzo Galati Giordano, Carlos Cordeiro, Monisha Ghosh, Edward Knightly, Boris Bellalta + + + MCGA: Mixture of Codebooks Hyperspectral Reconstruction via Grayscale-Aware Attention + https://arxiv.org/abs/2507.09885 + arXiv:2507.09885v3 Announce Type: replace +Abstract: Reconstructing hyperspectral images (HSIs) from RGB inputs provides a cost-effective alternative to hyperspectral cameras, but reconstructing high-dimensional spectra from three channels is inherently ill-posed. Existing methods typically directly regress RGB-to-HSI mappings using large attention networks, which are computationally expensive and handle ill-posedness only implicitly. We propose MCGA, a Mixture-of-Codebooks with Grayscale-aware Attention framework that explicitly addresses these challenges using spectral priors and photometric consistency. MCGA first learns transferable spectral priors via a mixture-of-codebooks (MoC) from heterogeneous HSI datasets, then aligns RGB features with these priors through grayscale-aware photometric attention (GANet). Efficiency and robustness are further improved via top-K attention design and test-time adaptation (TTA). 
Experiments on multiple real-world benchmarks demonstrate state-of-the-art accuracy, strong cross-dataset generalization, and 4-5x faster inference. Code will be made available upon acceptance at https://github.com/Fibonaccirabbit/MCGA. + oai:arXiv.org:2507.09885v3 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhanjiang Yang, Lijun Sun, Jiawei Dong, Xiaoxin An, Yang Liu, Meng Li + + + From Words to Proverbs: Evaluating LLMs Linguistic and Cultural Competence in Saudi Dialects with Absher + https://arxiv.org/abs/2507.10216 + arXiv:2507.10216v2 Announce Type: replace +Abstract: As large language models (LLMs) become increasingly central to Arabic NLP applications, evaluating their understanding of regional dialects and cultural nuances is essential, particularly in linguistically diverse settings like Saudi Arabia. This paper introduces Absher, a comprehensive benchmark specifically designed to assess LLMs' performance across major Saudi dialects. \texttt{Absher} comprises over 18,000 multiple-choice questions spanning six distinct categories: Meaning, True/False, Fill-in-the-Blank, Contextual Usage, Cultural Interpretation, and Location Recognition. These questions are derived from a curated dataset of dialectal words, phrases, and proverbs sourced from various regions of Saudi Arabia. We evaluate several state-of-the-art LLMs, including multilingual and Arabic-specific models. We also provide detailed insights into their capabilities and limitations. Our results reveal notable performance gaps, particularly in tasks requiring cultural inference or contextual understanding. Our findings highlight the urgent need for dialect-aware training and culturally aligned evaluation methodologies to improve LLMs' performance in real-world Arabic applications. 
+ oai:arXiv.org:2507.10216v2 + cs.CL + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Renad Al-Monef, Hassan Alhuzali, Nora Alturayeif, Ashwag Alasmari + + + Towards Practical Benchmarking of Data Cleaning Techniques: On Generating Authentic Errors via Large Language Models + https://arxiv.org/abs/2507.10934 + arXiv:2507.10934v2 Announce Type: replace +Abstract: Data quality remains an important challenge in data-driven systems, as errors in tabular data can severely compromise downstream analytics and machine learning performance. Although numerous error detection algorithms have been proposed, the lack of diverse, real-world error datasets limits comprehensive evaluation. Manual error annotation is both time-consuming and inconsistent, motivating the exploration of synthetic error generation as an alternative. In this work, we introduce TableEG, a framework that leverages large language models (LLMs) to generate authentic errors. By employing a table fine-tuning strategy and a triplet representation $(I, T, O)$ to model error generation, detection, and correction tasks, TableEG captures the complex dependencies inherent in two-dimensional tables. Trained on 12 real-world datasets spanning 10 diverse domains, TableEG ensures that the synthesized errors faithfully reflect authentic error distributions. Experimental results indicate that errors generated by TableEG exhibit superior pattern and distribution similarity compared to both rule-based methods and LLM-generated errors without fine-tuning. Furthermore, performance metrics on TableEG-generated errors closely align with those on real-world errors across nearly all datasets and detection algorithms, particularly for machine learning based detection techniques. Overall, TableEG not only bridges the gap between synthetic and real-world errors but also establishes a robust benchmark for subsequent error detection and correction tasks. 
+ oai:arXiv.org:2507.10934v2 + cs.DB + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xinyuan Liu, Jiahui Chen, Bocheng Hu, Yu Sun, Xinyang Chen, Shaoxu Song + + + From Novelty to Imitation: Self-Distilled Rewards for Offline Reinforcement Learning + https://arxiv.org/abs/2507.12815 + arXiv:2507.12815v2 Announce Type: replace +Abstract: Offline Reinforcement Learning (RL) aims to learn effective policies from a static dataset without requiring further agent-environment interactions. However, its practical adoption is often hindered by the need for explicit reward annotations, which can be costly to engineer or difficult to obtain retrospectively. To address this, we propose ReLOAD (Reinforcement Learning with Offline Reward Annotation via Distillation), a novel reward annotation framework for offline RL. Unlike existing methods that depend on complex alignment procedures, our approach adapts Random Network Distillation (RND) to generate intrinsic rewards from expert demonstrations using a simple yet effective embedding discrepancy measure. First, we train a predictor network to mimic a fixed target network's embeddings based on expert state transitions. Later, the prediction error between these networks serves as a reward signal for each transition in the static dataset. This mechanism provides a structured reward signal without requiring handcrafted reward annotations. We provide a formal theoretical construct that offers insights into how RND prediction errors effectively serve as intrinsic rewards by distinguishing expert-like transitions. Experiments on the D4RL benchmark demonstrate that ReLOAD enables robust offline policy learning and achieves performance competitive with traditional reward-annotated methods. 
+ oai:arXiv.org:2507.12815v2 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Gaurav Chaudhary, Laxmidhar Behera + + + Vidar: Embodied Video Diffusion Model for Generalist Manipulation + https://arxiv.org/abs/2507.12898 + arXiv:2507.12898v4 Announce Type: replace +Abstract: Scaling general-purpose manipulation to new robot embodiments remains challenging: each platform typically needs large, homogeneous demonstrations, and end-to-end pixel-to-action pipelines may degenerate under background and viewpoint shifts. Based on previous advances in video-based robot control, we present Vidar, consisting of an embodied video diffusion model as the generalizable prior and a masked inverse dynamics model (MIDM) as the adapter. We leverage a video diffusion model pre-trained at Internet scale, and further continuously pre-train it for the embodied domain using 750K multi-view trajectories collected from three real-world robot platforms. For this embodied pre-training, we introduce a unified observation space that jointly encodes robot, camera, task, and scene contexts. The MIDM module learns action-relevant pixel masks without dense labels, grounding the prior into the target embodiment's action space while suppressing distractors. With only 20 minutes of human demonstrations on an unseen robot (1% of typical data), Vidar outperforms state-of-the-art baselines and generalizes to unseen tasks, backgrounds, and camera layouts. Our results suggest a scalable recipe for "one prior, many embodiments": strong, inexpensive video priors together with minimal on-robot alignment. 
+ oai:arXiv.org:2507.12898v4 + cs.LG + cs.AI + cs.CV + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yao Feng, Hengkai Tan, Xinyi Mao, Chendong Xiang, Guodong Liu, Shuhe Huang, Hang Su, Jun Zhu + + + DiffClean: Diffusion-based Makeup Removal for Accurate Age Estimation + https://arxiv.org/abs/2507.13292 + arXiv:2507.13292v2 Announce Type: replace +Abstract: Accurate age verification can protect underage users from unauthorized access to online platforms and e-commerce sites that provide age-restricted services. However, accurate age estimation can be confounded by several factors, including facial makeup that can induce changes to alter perceived identity and age to fool both humans and machines. In this work, we propose \textsc{DiffClean}, which erases makeup traces using a text-guided diffusion model to defend against makeup attacks without requiring any reference image, unlike prior work. \textsc{DiffClean} improves age estimation (minor vs. adult accuracy by 5.8\%) and face verification (TMR by 5.1\% at FMR=0.01\%) compared to images with makeup. Our method (1) is robust across digitally simulated and real-world makeup styles with high visual fidelity, (2) can be easily integrated as a pre-processing module in existing age and identity verification frameworks, and (3) advances the state of the art in terms of biometric and perceptual utility. Our code is available at https://github.com/Ektagavas/DiffClean + oai:arXiv.org:2507.13292v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Ekta Balkrishna Gavas, Sudipta Banerjee, Chinmay Hegde, Nasir Memon + + + From Binary to Semantic: Utilizing Large-Scale Binary Occupancy Data for 3D Semantic Occupancy Prediction + https://arxiv.org/abs/2507.13387 + arXiv:2507.13387v2 Announce Type: replace +Abstract: Accurate perception of the surrounding environment is essential for safe autonomous driving. 
3D occupancy prediction, which estimates detailed 3D structures of roads, buildings, and other objects, is particularly important for vision-centric autonomous driving systems that do not rely on LiDAR sensors. However, in 3D semantic occupancy prediction -- where each voxel is assigned a semantic label -- annotated LiDAR point clouds are required, making data acquisition costly. In contrast, large-scale binary occupancy data, which only indicate occupied or free space without semantic labels, can be collected at a lower cost. Despite their availability, the potential of leveraging such data remains unexplored. In this study, we investigate the utilization of large-scale binary occupancy data from two perspectives: (1) pre-training and (2) learning-based auto-labeling. We propose a novel binary occupancy-based framework that decomposes the prediction process into binary and semantic occupancy modules, enabling effective use of binary occupancy data. Our experimental results demonstrate that the proposed framework outperforms existing methods in both pre-training and auto-labeling tasks, highlighting its effectiveness in enhancing 3D semantic occupancy prediction. The code is available at https://github.com/ToyotaInfoTech/b2s-occupancy + oai:arXiv.org:2507.13387v2 + cs.CV + eess.IV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Chihiro Noguchi, Takaki Yamamoto + + + Advances in Feed-Forward 3D Reconstruction and View Synthesis: A Survey + https://arxiv.org/abs/2507.14501 + arXiv:2507.14501v5 Announce Type: replace +Abstract: 3D reconstruction and view synthesis are foundational problems in computer vision, graphics, and immersive technologies such as augmented reality (AR), virtual reality (VR), and digital twins. Traditional methods rely on computationally intensive iterative optimization in a complex chain, limiting their applicability in real-world scenarios. 
Recent advances in feed-forward approaches, driven by deep learning, have revolutionized this field by enabling fast and generalizable 3D reconstruction and view synthesis. This survey offers a comprehensive review of feed-forward techniques for 3D reconstruction and view synthesis, with a taxonomy according to the underlying representation architectures including point cloud, 3D Gaussian Splatting (3DGS), Neural Radiance Fields (NeRF), etc. We examine key tasks such as pose-free reconstruction, dynamic 3D reconstruction, and 3D-aware image and video synthesis, highlighting their applications in digital humans, SLAM, robotics, and beyond. In addition, we review commonly used datasets with detailed statistics, along with evaluation protocols for various downstream tasks. We conclude by discussing open research challenges and promising directions for future work, emphasizing the potential of feed-forward approaches to advance the state of the art in 3D vision. + oai:arXiv.org:2507.14501v5 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Jiahui Zhang, Yuelei Li, Anpei Chen, Muyu Xu, Kunhao Liu, Jianyuan Wang, Xiao-Xiao Long, Hanxue Liang, Zexiang Xu, Hao Su, Christian Theobalt, Christian Rupprecht, Andrea Vedaldi, Kaichen Zhou, Hanspeter Pfister, Paul Pu Liang, Shijian Lu, Fangneng Zhan + + + Confidence Calibration in Vision-Language-Action Models + https://arxiv.org/abs/2507.17383 + arXiv:2507.17383v2 Announce Type: replace +Abstract: Trustworthy robot behavior requires not only high levels of task success but also that the robot can reliably quantify how likely it is to succeed. To this end, we present a first-of-its-kind study of confidence calibration in vision-language-action (VLA) foundation models, which map visual observations and natural language instructions to low-level robot motor commands. 
We establish a confidence baseline for VLAs, examine how task success relates to calibration error and how calibration evolves over time, and introduce two lightweight techniques to remedy the miscalibration we observe: prompt ensembles and action-wise Platt scaling. Our aim in this study is to begin to develop the tools and conceptual understanding necessary to render VLAs both highly performant and highly trustworthy via reliable uncertainty quantification. + oai:arXiv.org:2507.17383v2 + cs.RO + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Thomas P Zollo, Richard Zemel + + + Towards Facilitated Fairness Assessment of AI-based Skin Lesion Classifiers Through GenAI-based Image Synthesis + https://arxiv.org/abs/2507.17860 + arXiv:2507.17860v4 Announce Type: replace +Abstract: Recent advances in deep learning and on-device inference could transform routine screening for skin cancers. Along with the anticipated benefits of this technology, potential dangers arise from unforeseen and inherent biases. A significant obstacle is building evaluation datasets that accurately reflect key demographics, including sex, age, and race, as well as other underrepresented groups. To address this, we train a state-of-the-art generative model to generate synthetic data in a controllable manner to assess the fairness of publicly available skin cancer classifiers. To evaluate whether synthetic images can be used as a fairness testing dataset, we prepare a real-image dataset (MILK10K) as a benchmark and compare the True Positive Rate results of three models (DeepGuide, MelaNet, and SkinLesionDensnet). The classification tendencies observed in each model when tested on real and generated images showed similar patterns across different attribute data sets. We confirm that highly realistic synthetic images facilitate model fairness verification.
+ oai:arXiv.org:2507.17860v4 + cs.CV + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Ko Watanabe, Stanislav Frolov, Aya Hassan, David Dembinsky, Adriano Lucieri, Andreas Dengel + + + Preserving Topological and Geometric Embeddings for Point Cloud Recovery + https://arxiv.org/abs/2507.19121 + arXiv:2507.19121v3 Announce Type: replace +Abstract: Recovering point clouds involves the sequential process of sampling and restoration, yet existing methods struggle to effectively leverage both topological and geometric attributes. To address this, we propose an end-to-end architecture named \textbf{TopGeoFormer}, which maintains these critical properties throughout the sampling and restoration phases. First, we revisit traditional feature extraction techniques to yield topological embedding using a continuous mapping of relative relationships between neighboring points, and integrate it in both phases for preserving the structure of the original space. Second, we propose the \textbf{InterTwining Attention} to fully merge topological and geometric embeddings, which queries shape with local awareness in both phases to form a learnable 3D shape context facilitated with point-wise, point-shape-wise, and intra-shape features. Third, we introduce a full geometry loss and a topological constraint loss to optimize the embeddings in both Euclidean and topological spaces. The geometry loss uses inconsistent matching between coarse-to-fine generations and targets for reconstructing better geometric details, and the constraint loss limits embedding variances for better approximation of the topological space. In experiments, we comprehensively analyze the circumstances using the conventional and learning-based sampling/upsampling/recovery algorithms. The quantitative and qualitative results demonstrate that our method significantly outperforms existing sampling and recovery methods. 
+ oai:arXiv.org:2507.19121v3 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Kaiyue Zhou, Zelong Tan, Hongxiao Wang, Ya-Li Li, Shengjin Wang + + + CodeNER: Code Prompting for Named Entity Recognition + https://arxiv.org/abs/2507.20423 + arXiv:2507.20423v3 Announce Type: replace +Abstract: Recent studies have explored various approaches for treating candidate named entity spans as both source and target sequences in named entity recognition (NER) by leveraging large language models (LLMs). Although previous approaches have successfully generated candidate named entity spans with suitable labels, they rely solely on input context information when using LLMs, particularly, ChatGPT. However, NER inherently requires capturing detailed labeling requirements with input context information. To address this issue, we propose a novel method that leverages code-based prompting to improve the capabilities of LLMs in understanding and performing NER. By embedding code within prompts, we provide detailed BIO schema instructions for labeling, thereby exploiting the ability of LLMs to comprehend long-range scopes in programming languages. Experimental results demonstrate that the proposed code-based prompting method outperforms conventional text-based prompting on ten benchmarks across English, Arabic, Finnish, Danish, and German datasets, indicating the effectiveness of explicitly structuring NER instructions. We also verify that combining the proposed code-based prompting method with the chain-of-thought prompting further improves performance. 
+ oai:arXiv.org:2507.20423v3 + cs.CL + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Sungwoo Han, Jingun Kwon, Hidetaka Kamigaito, Manabu Okumura + + + Guard-GBDT: Efficient Privacy-Preserving Approximated GBDT Training on Vertical Dataset + https://arxiv.org/abs/2507.20688 + arXiv:2507.20688v2 Announce Type: replace +Abstract: In light of increasing privacy concerns and stringent legal regulations, using secure multiparty computation (MPC) to enable collaborative GBDT model training among multiple data owners has garnered significant attention. Despite this, existing MPC-based GBDT frameworks face efficiency challenges due to high communication costs and the computational burden of non-linear operations, such as division and sigmoid calculations. In this work, we introduce Guard-GBDT, an innovative framework tailored for efficient and privacy-preserving GBDT training on vertical datasets. Guard-GBDT bypasses MPC-unfriendly division and sigmoid functions by using more streamlined approximations and reduces communication overhead by compressing the messages exchanged during gradient aggregation. We implement a prototype of Guard-GBDT and extensively evaluate its performance and accuracy on various real-world datasets. The results show that Guard-GBDT outperforms the state-of-the-art HEP-XGB (CIKM'21) and SiGBDT (ASIA CCS'24) by up to $2.71\times$ and $12.21\times$ on LAN networks, and by up to $2.7\times$ and $8.2\times$ on WAN networks. Guard-GBDT also achieves accuracy comparable to SiGBDT and plaintext XGBoost (better than HEP-XGB), with a deviation of only $\pm1\%$ to $\pm2\%$. Our implementation code is provided at https://github.com/XidianNSS/Guard-GBDT.git.
+ oai:arXiv.org:2507.20688v2 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Anxiao Song, Shujie Cui, Jianli Bai, Ke Cheng, Yulong Shen, Giovanni Russello + + + Toward Revealing Nuanced Biases in Medical LLMs + https://arxiv.org/abs/2507.21176 + arXiv:2507.21176v2 Announce Type: replace +Abstract: Large language models (LLMs) used in medical applications are known to be prone to exhibiting biased and unfair patterns. Prior to deploying these in clinical decision-making, it is crucial to identify such bias patterns to enable effective mitigation and minimize negative impacts. In this study, we present a novel framework combining knowledge graphs (KGs) with auxiliary (agentic) LLMs to systematically reveal complex bias patterns in medical LLMs. The proposed approach integrates adversarial perturbation (red teaming) techniques to identify subtle bias patterns and adopts a customized multi-hop characterization of KGs to enhance the systematic evaluation of target LLMs. It aims not only to generate more effective red-teaming questions for bias evaluation but also to utilize those questions more effectively in revealing complex biases. Through a series of comprehensive experiments on three datasets, six LLMs, and five bias types, we demonstrate that our proposed framework exhibits a noticeably greater ability and scalability in revealing complex biased patterns of medical LLMs compared to other common approaches. + oai:arXiv.org:2507.21176v2 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Farzana Islam Adiba, Rahmatollah Beheshti + + + Enhancing Multi-Agent Collaboration with Attention-Based Actor-Critic Policies + https://arxiv.org/abs/2507.22782 + arXiv:2507.22782v3 Announce Type: replace +Abstract: This paper introduces Team-Attention-Actor-Critic (TAAC), a reinforcement learning algorithm designed to enhance multi-agent collaboration in cooperative environments. 
TAAC employs a Centralized Training/Centralized Execution scheme incorporating multi-headed attention mechanisms in both the actor and critic. This design facilitates dynamic, inter-agent communication, allowing agents to explicitly query teammates, thereby efficiently managing the exponential growth of joint-action spaces while ensuring a high degree of collaboration. We further introduce a penalized loss function which promotes diverse yet complementary roles among agents. We evaluate TAAC in a simulated soccer environment against benchmark algorithms representing other multi-agent paradigms, including Proximal Policy Optimization and Multi-Agent Actor-Attention-Critic. We find that TAAC exhibits superior performance and enhanced collaborative behaviors across a variety of metrics (win rates, goal differentials, Elo ratings, inter-agent connectivity, balanced spatial distributions, and frequent tactical interactions such as ball possession swaps). + oai:arXiv.org:2507.22782v3 + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Hugo Garrido-Lestache Belinchon, Jeremy Kedziora + + + SAMSA 2.0: Prompting Segment Anything with Spectral Angles for Hyperspectral Interactive Medical Image Segmentation + https://arxiv.org/abs/2508.00493 + arXiv:2508.00493v2 Announce Type: replace +Abstract: We present SAMSA 2.0, an interactive segmentation framework for hyperspectral medical imaging that introduces spectral angle prompting to guide the Segment Anything Model (SAM) using spectral similarity alongside spatial cues. This early fusion of spectral information enables more accurate and robust segmentation across diverse spectral datasets. Without retraining, SAMSA 2.0 achieves up to +3.8% higher Dice scores compared to RGB-only models and up to +3.1% over prior spectral fusion methods. 
Our approach enhances few-shot and zero-shot performance, demonstrating strong generalization in challenging low-data and noisy scenarios common in clinical imaging. + oai:arXiv.org:2508.00493v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Alfie Roddan, Tobias Czempiel, Chi Xu, Daniel S. Elson, Stamatia Giannarou + + + No Pose at All: Self-Supervised Pose-Free 3D Gaussian Splatting from Sparse Views + https://arxiv.org/abs/2508.01171 + arXiv:2508.01171v2 Announce Type: replace +Abstract: We introduce SPFSplat, an efficient framework for 3D Gaussian splatting from sparse multi-view images, requiring no ground-truth poses during training or inference. It employs a shared feature extraction backbone, enabling simultaneous prediction of 3D Gaussian primitives and camera poses in a canonical space from unposed inputs within a single feed-forward step. Alongside the rendering loss based on estimated novel-view poses, a reprojection loss is integrated to enforce the learning of pixel-aligned Gaussian primitives for enhanced geometric constraints. This pose-free training paradigm and efficient one-step feed-forward design make SPFSplat well-suited for practical applications. Remarkably, despite the absence of pose supervision, SPFSplat achieves state-of-the-art performance in novel view synthesis even under significant viewpoint changes and limited image overlap. It also surpasses recent methods trained with geometry priors in relative pose estimation. Code and trained models are available on our project page: https://ranrhuang.github.io/spfsplat/. 
+ oai:arXiv.org:2508.01171v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ranran Huang, Krystian Mikolajczyk + + + Prediction-Oriented Subsampling from Data Streams + https://arxiv.org/abs/2508.03868 + arXiv:2508.03868v2 Announce Type: replace +Abstract: Data is often generated in streams, with new observations arriving over time. A key challenge for learning models from data streams is capturing relevant information while keeping computational costs manageable. We explore intelligent data subsampling for offline learning, and argue for an information-theoretic method centred on reducing uncertainty in downstream predictions of interest. Empirically, we demonstrate that this prediction-oriented approach performs better than a previously proposed information-theoretic technique on two widely studied problems. At the same time, we highlight that reliably achieving strong performance in practice requires careful model design. + oai:arXiv.org:2508.03868v2 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Benedetta Lavinia Mussati, Freddie Bickford Smith, Tom Rainforth, Stephen Roberts + + + Rethinking Analytical Processing in the GPU Era + https://arxiv.org/abs/2508.04701 + arXiv:2508.04701v3 Announce Type: replace +Abstract: The era of GPU-powered data analytics has arrived. In this paper, we argue that recent advances in hardware (e.g., larger GPU memory, faster interconnect and IO, and declining cost) and software (e.g., composable data systems and mature libraries) have removed the key barriers that have limited the wider adoption of GPU data analytics. We present Sirius, a prototype open-source GPU-native SQL engine that offers drop-in acceleration for diverse data systems. Sirius treats GPU as the primary engine and leverages libraries like libcudf for high-performance relational operators. 
It provides drop-in acceleration for existing databases by leveraging the standard Substrait query representation, replacing the CPU engine without changing the user-facing interface. Sirius achieves 8.3x and 7.4x better cost efficiency on TPC-H and ClickBench, respectively, when integrated with single-node DuckDB, and delivers up to 12.5x speedup when integrated with Apache Doris distributed engine. + oai:arXiv.org:2508.04701v3 + cs.DB + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Bobbi Yogatama, Yifei Yang, Kevin Kristensen, Devesh Sarda, Abigale Kim, Adrian Cockcroft, Yu Teng, Joshua Patterson, Gregory Kimball, Wes McKinney, Weiwei Gong, Xiangyao Yu + + + SpeakerLM: End-to-End Versatile Speaker Diarization and Recognition with Multimodal Large Language Models + https://arxiv.org/abs/2508.06372 + arXiv:2508.06372v2 Announce Type: replace +Abstract: The Speaker Diarization and Recognition (SDR) task aims to predict "who spoke when and what" within an audio clip, which is a crucial task in various real-world multi-speaker scenarios such as meeting transcription and dialogue systems. Existing SDR systems typically adopt a cascaded framework, combining multiple modules such as speaker diarization (SD) and automatic speech recognition (ASR). The cascaded systems suffer from several limitations, such as error propagation, difficulty in handling overlapping speech, and lack of joint optimization for exploring the synergy between SD and ASR tasks. To address these limitations, we introduce SpeakerLM, a unified multimodal large language model for SDR that jointly performs SD and ASR in an end-to-end manner. Moreover, to facilitate diverse real-world scenarios, we incorporate a flexible speaker registration mechanism into SpeakerLM, enabling SDR under different speaker registration settings. SpeakerLM is progressively developed with a multi-stage training strategy on large-scale real data. 
Extensive experiments show that SpeakerLM demonstrates strong data scaling capability and generalizability, outperforming state-of-the-art cascaded baselines on both in-domain and out-of-domain public SDR benchmarks. Furthermore, experimental results show that the proposed speaker registration mechanism effectively ensures robust SDR performance of SpeakerLM across diverse speaker registration conditions and varying numbers of registered speakers. + oai:arXiv.org:2508.06372v2 + cs.SD + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Han Yin, Yafeng Chen, Chong Deng, Luyao Cheng, Hui Wang, Chao-Hong Tan, Qian Chen, Wen Wang, Xiangang Li + + + Low-Rank Expert Merging for Multi-Source Domain Adaptation in Person Re-Identification + https://arxiv.org/abs/2508.06831 + arXiv:2508.06831v2 Announce Type: replace +Abstract: Adapting person re-identification (reID) models to new target environments remains a challenging problem that is typically addressed using unsupervised domain adaptation (UDA) methods. Recent works show that when labeled data originates from several distinct sources (e.g., datasets and cameras), considering each source separately and applying multi-source domain adaptation (MSDA) typically yields higher accuracy and robustness compared to blending the sources and performing conventional UDA. However, state-of-the-art MSDA methods learn domain-specific backbone models or require access to source domain data during adaptation, resulting in significant growth in training parameters and computational cost. In this paper, a Source-free Adaptive Gated Experts (SAGE-reID) method is introduced for person reID. Our SAGE-reID is a cost-effective, source-free MSDA method that first trains individual source-specific low-rank adapters (LoRA) through source-free UDA. 
Next, a lightweight gating network is introduced and trained to dynamically assign optimal merging weights for fusion of LoRA experts, enabling effective cross-domain knowledge transfer. While the number of backbone parameters remains constant across source domains, LoRA experts scale linearly but remain negligible in size (<= 2% of the backbone), reducing both memory consumption and the risk of overfitting. Extensive experiments conducted on three challenging benchmarks (Market-1501, DukeMTMC-reID, and MSMT17) indicate that SAGE-reID outperforms state-of-the-art methods while being computationally efficient. + oai:arXiv.org:2508.06831v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Taha Mustapha Nehdi, Nairouz Mrabah, Atif Belal, Marco Pedersoli, Eric Granger + + + Chimera: Harnessing Multi-Agent LLMs for Automatic Insider Threat Simulation + https://arxiv.org/abs/2508.07745 + arXiv:2508.07745v3 Announce Type: replace +Abstract: Insider threats pose a persistent and critical security risk, yet are notoriously difficult to detect in complex enterprise environments, where malicious actions are often hidden within seemingly benign user behaviors. Although machine-learning-based insider threat detection (ITD) methods have shown promise, their effectiveness is fundamentally limited by the scarcity of high-quality and realistic training data. Enterprise internal data is highly sensitive and rarely accessible, while existing public and synthetic datasets are either small-scale or lack sufficient realism, semantic richness, and behavioral diversity. + To address this challenge, we propose Chimera, an LLM-based multi-agent framework that automatically simulates both benign and malicious insider activities and generates comprehensive system logs across diverse enterprise environments.
Chimera models each agent as an individual employee with fine-grained roles and supports group meetings, pairwise interactions, and self-organized scheduling to capture realistic organizational dynamics. Based on 15 insider attacks abstracted from real-world incidents, we deploy Chimera in three representative data-sensitive organizational scenarios and construct ChimeraLog, a new dataset for developing and evaluating ITD methods. + We evaluate ChimeraLog through human studies and quantitative analyses, demonstrating its diversity and realism. Experiments with existing ITD methods show substantially lower detection performance on ChimeraLog compared to prior datasets, indicating a more challenging and realistic benchmark. Moreover, despite distribution shifts, models trained on ChimeraLog exhibit strong generalization, highlighting the practical value of LLM-based multi-agent simulation for advancing insider threat detection. + oai:arXiv.org:2508.07745v3 + cs.CR + cs.AI + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Jiongchi Yu, Xiaofei Xie, Qiang Hu, Yuhan Ma, Ziming Zhao + + + Low-Regret and Low-Complexity Learning for Hierarchical Inference + https://arxiv.org/abs/2508.08985 + arXiv:2508.08985v3 Announce Type: replace +Abstract: This work focuses on Hierarchical Inference (HI) in edge intelligence systems, where a compact Local-ML model on an end-device works in conjunction with a high-accuracy Remote-ML model on an edge-server. HI aims to reduce latency, improve accuracy, and lower bandwidth usage by first using the Local-ML model for inference and offloading to the Remote-ML only when the local inference is likely incorrect. A critical challenge in HI is estimating the likelihood of the local inference being incorrect, especially when data distributions and offloading costs change over time -- a problem we term Hierarchical Inference Learning (HIL). 
We introduce a novel approach to HIL by modeling the probability of correct inference by the Local-ML as an increasing function of the model's confidence measure, a structure motivated by empirical observations but previously unexploited. We propose two policies, HI-LCB and HI-LCB-lite, based on the Upper Confidence Bound (UCB) framework. We demonstrate that both policies achieve order-optimal regret of $O(\log T)$, a significant improvement over existing HIL policies with $O(T^{2/3})$ regret guarantees. Notably, HI-LCB-lite has an $O(1)$ per-sample computational complexity, making it well-suited for deployment on devices with severe resource limitations. Simulations using real-world datasets confirm that our policies outperform existing state-of-the-art HIL methods. + oai:arXiv.org:2508.08985v3 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Sameep Chattopadhyay, Vinay Sutar, Jaya Prakash Champati, Sharayu Moharir + + + An Unsupervised Deep Explainable AI Framework for Localization of Concurrent Replay Attacks in Nuclear Reactor Signals + https://arxiv.org/abs/2508.09162 + arXiv:2508.09162v4 Announce Type: replace +Abstract: Next generation advanced nuclear reactors are expected to be smaller both in size and power output, relying extensively on fully digital instrumentation and control systems. These reactors will generate a large flow of information in the form of multivariate time series data, conveying simultaneously various non linear cyber physical, process, control, sensor, and operational states. Ensuring data integrity against deception attacks is becoming increasingly important for networked communication and a requirement for safe and reliable operation. Current efforts to address replay attacks, almost universally focus on watermarking or supervised anomaly detection approaches without further identifying and characterizing the root cause of the anomaly. 
In addition, these approaches rely mostly on synthetic data with uncorrelated Gaussian process and measurement noise and full state feedback or are limited to univariate signals, signal stationarity, linear quadratic regulators, or other linear-time invariant state-space which may fail to capture any unmodeled system dynamics. In the realm of regulated nuclear cyber-physical systems, additional work is needed on characterization of replay attacks and explainability of predictions using real data. Here, we propose an unsupervised explainable AI framework based on a combination of autoencoder and customized windowSHAP algorithm to fully characterize real-time replay attacks, i.e., detection, source identification, timing and type, of increasing complexity during a dynamic time evolving reactor process. The proposed XAI framework was benchmarked on several real world datasets from Purdue's nuclear reactor PUR-1 with up to six signals concurrently being replayed. In all cases, the XAI framework was able to detect and identify the source and number of signals being replayed and the duration of the falsification with 95 percent or better accuracy. + oai:arXiv.org:2508.09162v4 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.1109/ACCESS.2025.3646560 + Konstantinos Vasili, Zachery T. Dahm, Stylianos Chatzidakis + + + Decoding Neural Emotion Patterns through Large Language Model Embeddings + https://arxiv.org/abs/2508.09337 + arXiv:2508.09337v3 Announce Type: replace +Abstract: Understanding how emotional expression in language relates to brain function is a challenge in computational neuroscience and affective computing. Traditional neuroimaging is costly and lab-bound, but abundant digital text offers new avenues for emotion-brain mapping. Prior work has largely examined neuroimaging-based emotion localization or computational text analysis separately, with little integration. 
We propose a computational framework that maps textual emotional content to anatomically defined brain regions without requiring neuroimaging. Using OpenAI's text-embedding-ada-002, we generate high-dimensional semantic representations, apply dimensionality reduction and clustering to identify emotional groups, and map them to 18 brain regions linked to emotional processing. Three experiments were conducted: i) analyzing conversational data from healthy vs. depressed subjects (DIAC-WOZ dataset) to compare mapping patterns, ii) applying the method to the GoEmotions dataset and iii) comparing human-written text with large language model (LLM) responses to assess differences in inferred brain activation. Emotional intensity was scored via lexical analysis. Results showed neuroanatomically plausible mappings with high spatial specificity. Depressed subjects exhibited greater limbic engagement tied to negative affect. Discrete emotions were successfully differentiated. LLM-generated text matched humans in basic emotion distribution but lacked nuanced activation in empathy and self-referential regions (medial prefrontal and posterior cingulate cortex). This cost-effective, scalable approach enables large-scale analysis of naturalistic language, distinguishes between clinical populations, and offers a brain-based benchmark for evaluating AI emotional expression. + oai:arXiv.org:2508.09337v3 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Gideon Vos, Maryam Ebrahimpour, Liza van Eijk, Zoltan Sarnyai, Mostafa Rahimi Azghadi + + + Fast and Simple Multiclass Data Segmentation: An Eigendecomposition and Projection-Free Approach + https://arxiv.org/abs/2508.09738 + arXiv:2508.09738v2 Announce Type: replace +Abstract: Graph-based machine learning has seen an increased interest over the last decade with many connections to other fields of applied mathematics. 
Learning based on partial differential equations, such as the phase-field Allen-Cahn equation, allows efficient handling of semi-supervised learning approaches on graphs. The numerical solution of the graph Allen-Cahn equation via a convexity splitting (CS) or the Merriman-Bence-Osher (MBO) scheme, albeit a widely used approach, requires the calculation of a graph Laplacian eigendecomposition and repeated projections over the unit simplex to maintain valid partitions. The computational efficiency of those methods is hence limited by those two bottlenecks in practice, especially when dealing with large-scale instances. In order to overcome these limitations, we propose a new framework combining a novel penalty-based reformulation of the segmentation problem, which ensures valid partitions (i.e., binary solutions) for appropriate parameter choices, with an eigendecomposition and projection-free optimization scheme, which has a small per-iteration complexity (by relying primarily on sparse matrix-vector products) and guarantees good convergence properties. Experiments on synthetic and real-world datasets related to data segmentation in networks and images demonstrate that the proposed framework achieves comparable or better accuracy than the CS and MBO methods while being significantly faster, particularly for large-scale problems. + oai:arXiv.org:2508.09738v2 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Chiara Faccio, Margherita Porcelli, Francesco Rinaldi, Martin Stoll + + + Prototype-Guided Diffusion: Visual Conditioning without External Memory + https://arxiv.org/abs/2508.09922 + arXiv:2508.09922v5 Announce Type: replace +Abstract: Diffusion models achieve state-of-the-art image generation but remain computationally costly due to iterative denoising.
Latent-space models like Stable Diffusion reduce overhead yet lose fine detail, while retrieval-augmented methods improve efficiency but rely on large memory banks, static similarity models, and rigid infrastructures. We introduce the Prototype Diffusion Model (PDM), which embeds prototype learning into the diffusion process to provide adaptive, memory-free conditioning. Instead of retrieving references, PDM learns compact visual prototypes from clean features via contrastive learning, then aligns noisy representations with semantically relevant patterns during denoising. Experiments demonstrate that PDM sustains high generation quality while lowering computational and storage costs, offering a scalable alternative to retrieval-based conditioning. + oai:arXiv.org:2508.09922v5 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Bilal Faye, Hanane Azzag, Mustapha Lebbah + + + Computing the Fr\'echet Distance When Just One Curve is $c$-Packed: A Simple Almost-Tight Algorithm + https://arxiv.org/abs/2508.10537 + arXiv:2508.10537v2 Announce Type: replace +Abstract: We study approximating the continuous Fr\'echet distance of two curves with complexity $n$ and $m$, under the assumption that only one of the two curves is $c$-packed. Driemel, Har{-}Peled and Wenk DCG'12 studied Fr\'echet distance approximations under the assumption that both curves are $c$-packed. In $\mathbb{R}^d$, they prove a $(1+\varepsilon)$-approximation in $\tilde{O}(d \, c\,\frac{n+m}{\varepsilon})$ time. Bringmann and K\"unnemann IJCGA'17 improved this to $\tilde{O}(c\,\frac{n + m }{\sqrt{\varepsilon}})$ time, which they showed is near-tight under SETH. Recently, Gudmundsson, Mai, and Wong ISAAC'24 studied our setting where only one of the curves is $c$-packed. 
They provide an involved $\tilde{O}( d \cdot (c+\varepsilon^{-1})(cn\varepsilon^{-2} + c^2m\varepsilon^{-7} + \varepsilon^{-2d-1}))$-time algorithm when the $c$-packed curve has $n$ vertices and the arbitrary curve has $m$, where $d$ is the dimension in Euclidean space. In this paper, we show a simple technique to compute a $(1+\varepsilon)$-approximation in $\mathbb{R}^d$ in time $O(d \cdot c\,\frac{n+m}{\varepsilon}\log\frac{n+m}{\varepsilon})$ when one of the curves is $c$-packed. Our approach is not only simpler than previous work, but also significantly improves the dependencies on $c$, $\varepsilon$, and $d$. Moreover, it almost matches the asymptotically tight bound for when both curves are $c$-packed. Our algorithm is robust in the sense that it does not require knowledge of $c$, nor information about which of the two input curves is $c$-packed. + oai:arXiv.org:2508.10537v2 + cs.CG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Jacobus Conradi, Ivor van der Hoog, Thijs van der Horst, Tim Ophelders + + + SafeSieve: From Heuristics to Experience in Progressive Pruning for LLM-based Multi-Agent Communication + https://arxiv.org/abs/2508.11733 + arXiv:2508.11733v2 Announce Type: replace +Abstract: LLM-based multi-agent systems exhibit strong collaborative capabilities but often suffer from redundant communication and excessive token overhead. Existing methods typically enhance efficiency through pretrained GNNs or greedy algorithms, but often isolate pre- and post-task optimization, lacking a unified strategy. To this end, we present SafeSieve, a progressive and adaptive multi-agent pruning algorithm that dynamically refines the inter-agent communication through a novel dual-mechanism. SafeSieve integrates initial LLM-based semantic evaluation with accumulated performance feedback, enabling a smooth transition from heuristic initialization to experience-driven refinement. 
Unlike existing greedy Top-k pruning methods, SafeSieve employs 0-extension clustering to preserve structurally coherent agent groups while eliminating ineffective links. Experiments across benchmarks (SVAMP, HumanEval, etc.) show that SafeSieve achieves 94.01% average accuracy while reducing token usage by 12.4%-27.8%. Results further demonstrate robustness under prompt injection attacks (1.23% average accuracy drop). In heterogeneous settings, SafeSieve reduces deployment costs by 13.3% while maintaining performance. These results establish SafeSieve as an efficient, GPU-free, and scalable framework for practical multi-agent systems. Our code can be found here: https://github.com/csgen/SafeSieve + oai:arXiv.org:2508.11733v2 + cs.MA + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Ruijia Zhang, Xinyan Zhao, Ruixiang Wang, Sigen Chen, Guibin Zhang, An Zhang, Kun Wang, Qingsong Wen + + + Human Centric General Physical Intelligence for Agile Manufacturing Automation + https://arxiv.org/abs/2508.11960 + arXiv:2508.11960v2 Announce Type: replace +Abstract: Agile human-centric manufacturing increasingly requires resilient robotic solutions that are capable of safe and productive interactions within the unstructured environments of modern factories. While multi-modal sensor fusion provides comprehensive situational awareness, robots must also contextualize their reasoning to achieve deep semantic understanding of complex scenes. Foundation models, particularly Vision-Language-Action (VLA) models, have emerged as a promising approach for integrating diverse perceptual modalities and spatio-temporal reasoning abilities to ground physical actions and realize General Physical Intelligence (GPI) across various robotic embodiments. Although GPI has been conceptually discussed in the literature, its pivotal role and practical deployment in agile manufacturing remain underexplored.
To address this gap, this practical review systematically surveys recent advances in VLA models through the lens of GPI, offering a comparative analysis of leading implementations and evaluating their industrial readiness via a structured ablation study. The state of the art is organized into six thematic pillars, including multisensory representation learning, sim2real transfer, planning and control, uncertainty and safety measures, and benchmarking. Finally, the review highlights open challenges and future directions for integrating GPI into industrial ecosystems to align with the vision of Industry 5.0 for an intelligent, adaptive, and collaborative manufacturing ecosystem. + oai:arXiv.org:2508.11960v2 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Sandeep Kanta, Mehrdad Tavassoli, Varun Teja Chirkuri, Venkata Akhil Kumar, Santhi Bharath Punati, Praveen Damacharla, Sunny Katyara + + + Data-driven particle dynamics: Structure-preserving coarse-graining for emergent behavior in non-equilibrium systems + https://arxiv.org/abs/2508.12569 + arXiv:2508.12569v2 Announce Type: replace +Abstract: Multiscale systems are ubiquitous in science and technology, but are notoriously challenging to simulate as short spatiotemporal scales must be appropriately linked to emergent bulk physics. When expensive high-dimensional dynamical systems are coarse-grained into low-dimensional models, the entropic loss of information leads to emergent physics which are dissipative, history-dependent, and stochastic. To machine learn coarse-grained dynamics from time-series observations of particle trajectories, we propose a framework using the metriplectic bracket formalism that preserves these properties by construction; most notably, the framework guarantees discrete notions of the first and second laws of thermodynamics, conservation of momentum, and a discrete fluctuation-dissipation balance crucial for capturing non-equilibrium statistics.
We introduce the mathematical framework abstractly before specializing to a particle discretization. As labels are generally unavailable for entropic state variables, we introduce a novel self-supervised learning strategy to identify emergent structural variables. We validate the method on benchmark systems and demonstrate its utility on two challenging examples: (1) coarse-graining star polymers at challenging levels of coarse-graining while preserving non-equilibrium statistics, and (2) learning models from high-speed video of colloidal suspensions that capture coupling between local rearrangement events and emergent stochastic dynamics. We provide open-source implementations in both PyTorch and LAMMPS, enabling large-scale inference and extensibility to diverse particle-based systems. + oai:arXiv.org:2508.12569v2 + cs.LG + cs.CE + physics.comp-ph + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Quercus Hernandez, Max Win, Thomas C. O'Connor, Paulo E. Arratia, Nathaniel Trask + + + Explainable Graph Spectral Clustering For GloVe-like Text Embeddings + https://arxiv.org/abs/2508.14075 + arXiv:2508.14075v2 Announce Type: replace +Abstract: In a previous paper, we proposed an introduction to the explainability of Graph Spectral Clustering results for textual documents, given that document similarity is computed as cosine similarity in term vector space. + In this paper, we generalize this idea by considering other embeddings of documents, in particular, based on the GloVe embedding idea. + oai:arXiv.org:2508.14075v2 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Mieczys{\l}aw A. K{\l}opotek, S{\l}awomir T. 
Wierzcho\'n, Bart{\l}omiej Starosta, Piotr Borkowski, Dariusz Czerski, Eryk Laskowski + + + DeltaFlow: An Efficient Multi-frame Scene Flow Estimation Method + https://arxiv.org/abs/2508.17054 + arXiv:2508.17054v3 Announce Type: replace +Abstract: Previous dominant methods for scene flow estimation focus mainly on input from two consecutive frames, neglecting valuable information in the temporal domain. While recent trends shift towards multi-frame reasoning, they suffer from rapidly escalating computational costs as the number of frames grows. To leverage temporal information more efficiently, we propose DeltaFlow ($\Delta$Flow), a lightweight 3D framework that captures motion cues via a $\Delta$ scheme, extracting temporal features with minimal computational cost, regardless of the number of frames. Additionally, scene flow estimation faces challenges such as imbalanced object class distributions and motion inconsistency. To tackle these issues, we introduce a Category-Balanced Loss to enhance learning across underrepresented classes and an Instance Consistency Loss to enforce coherent object motion, improving flow accuracy. Extensive evaluations on the Argoverse 2, Waymo and nuScenes datasets show that $\Delta$Flow achieves state-of-the-art performance with up to 22% lower error and $2\times$ faster inference compared to the next-best multi-frame supervised method, while also demonstrating a strong cross-domain generalization ability. The code is open-sourced at https://github.com/Kin-Zhang/DeltaFlow along with trained model weights. 
+ oai:arXiv.org:2508.17054v3 + cs.CV + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Qingwen Zhang, Xiaomeng Zhu, Yushan Zhang, Yixi Cai, Olov Andersson, Patric Jensfelt + + + REGEN: Real-Time Photorealism Enhancement in Games via a Dual-Stage Generative Network Framework + https://arxiv.org/abs/2508.17061 + arXiv:2508.17061v2 Announce Type: replace +Abstract: Photorealism is an important aspect of modern video games since it can shape player experience and impact immersion, narrative engagement, and visual fidelity. To achieve photorealism, beyond traditional rendering pipelines, generative models have been increasingly adopted as an effective approach for bridging the gap between the visual realism of synthetic and real worlds. However, under real-time constraints of video games, existing generative approaches continue to face a tradeoff between visual quality and runtime efficiency. In this work, we present a framework for enhancing the photorealism of rendered game frames using generative networks. We propose REGEN, which first employs a robust unpaired image-to-image translation model to generate semantically consistent photorealistic frames. These generated frames are then used to create a paired dataset, which transforms the problem to a simpler unpaired image-to-image translation. This enables training with a lightweight method, achieving real-time inference without compromising visual quality. We evaluate REGEN on Unreal Engine, showing, by employing the CMMD metric, that it achieves comparable or slightly improved visual quality compared to the robust method, while improving the frame rate by 12x. Additional experiments also validate that REGEN adheres to the semantic preservation of the initial robust image-to-image translation method and maintains temporal consistency. 
Code, pre-trained models, and demos for this work are available at: https://github.com/stefanos50/REGEN + oai:arXiv.org:2508.17061v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Stefanos Pasios, Nikos Nikolaidis + + + Mirroring Users: Towards Building Preference-aligned User Simulator with User Feedback in Recommendation + https://arxiv.org/abs/2508.18142 + arXiv:2508.18142v2 Announce Type: replace +Abstract: User simulation is increasingly vital to develop and evaluate recommender systems (RSs). While Large Language Models (LLMs) offer promising avenues to simulate user behavior, they often struggle with the absence of specific task alignment required for RSs and the efficiency demands of large-scale simulation. A vast yet underutilized resource for enhancing this alignment is the extensive user feedback inherent in RSs, but leveraging it is challenging due to its ambiguity, noise and massive volume, which hinders efficient preference alignment. To overcome these hurdles, we introduce a novel data construction framework that leverages user feedback in RSs with advanced LLM capabilities to generate high-quality simulation data. Our framework unfolds in two key phases: (1) using LLMs to generate decision-making processes as explanatory rationales on simulation samples, thereby reducing ambiguity; and (2) data distillation based on uncertainty estimation and behavior sampling to efficiently filter the most informative, denoised samples. Accordingly, we fine-tune lightweight LLMs, as user simulators, using such high-quality dataset with corresponding decision-making processes. Extensive experiments confirm that our framework significantly boosts the alignment with human preferences and the in-domain reasoning capabilities of the fine-tuned LLMs, providing more insightful and interpretable signals for RS interaction. 
We believe our work, together with publicly available developed framework, high-quality mixed-domain dataset, and fine-tuned LLM checkpoints, will advance the RS community and offer valuable insights for broader human-centric AI research. + oai:arXiv.org:2508.18142v2 + cs.HC + cs.CY + cs.IR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-sa/4.0/ + Tianjun Wei, Huizhong Guo, Yingpeng Du, Zhu Sun, Huang Chen, Dongxia Wang, Jie Zhang + + + Discrete Diffusion VLA: Bringing Discrete Diffusion to Action Decoding in Vision-Language-Action Policies + https://arxiv.org/abs/2508.20072 + arXiv:2508.20072v3 Announce Type: replace +Abstract: Vision-Language-Action (VLA) models adapt large vision-language backbones to map images and instructions into robot actions. However, prevailing VLAs either generate actions auto-regressively in a fixed left-to-right order or attach separate MLP or diffusion heads outside the backbone, leading to fragmented information pathways and specialized training requirements that hinder a unified, scalable architecture. We present Discrete Diffusion VLA, a unified-transformer policy that models discretized action chunks with discrete diffusion. The design retains diffusion's progressive refinement paradigm while remaining natively compatible with the discrete token interface of VLMs. Our method achieves an adaptive decoding order that resolves easy action elements before harder ones and uses secondary re-masking to revisit uncertain predictions across refinement rounds, which improves consistency and enables robust error correction. This unified decoder preserves pre-trained vision-language priors, supports parallel decoding, breaks the autoregressive bottleneck, and reduces the number of function evaluations. Discrete Diffusion VLA achieves 96.3% avg. success rates on LIBERO, 71.2% visual matching on SimplerEnv-Fractal and 54.2% overall on SimplerEnv-Bridge. 
We also provide ablation study on vision-language ability retention on LIBERO-OOD (Out-of-Distribution) benchmark, with our method improving over autoregressive, MLP decoder and continuous diffusion baselines. These findings indicate that discrete-diffusion VLA supports precise action modeling and consistent training, laying groundwork for scaling VLA to larger models and datasets. Our code is available at https://github.com/Liang-ZX/DiscreteDiffusionVLA/tree/libero. + oai:arXiv.org:2508.20072v3 + cs.CV + cs.LG + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhixuan Liang, Yizhuo Li, Tianshuo Yang, Chengyue Wu, Sitong Mao, Tian Nian, Liuao Pei, Shunbo Zhou, Xiaokang Yang, Jiangmiao Pang, Yao Mu, Ping Luo + + + AI reasoning effort predicts human decision time in content moderation + https://arxiv.org/abs/2508.20262 + arXiv:2508.20262v2 Announce Type: replace +Abstract: Large language models can now generate intermediate reasoning steps before producing answers, improving performance on difficult problems by interactively developing solutions. This study uses a content moderation task to examine parallels between human decision times and model reasoning effort, measured using the length of the chain-of-thought (CoT). Across three frontier models, CoT length consistently predicts human decision time. Moreover, humans took longer and models produced longer CoTs when important variables were held constant, suggesting similar sensitivity to task difficulty. Analyses of the CoT content shows that models reference various contextual factors more frequently when making such decisions. These findings show parallels between human and AI reasoning on practical tasks and underscore the potential of reasoning traces for enhancing interpretability and decision-making. 
+ oai:arXiv.org:2508.20262v2 + cs.AI + cs.CY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Thomas Davidson + + + Instruction-Level Weight Shaping: A Framework for Self-Improving AI Agents + https://arxiv.org/abs/2509.00251 + arXiv:2509.00251v2 Announce Type: replace +Abstract: Large language models (LLMs) are fluent but largely static after pre-training; new or shifting knowledge is typically added with retrieval-augmented generation (RAG) or fine-tuning. RAG raises latency and engineering overhead and often fails to integrate facts; prompt engineering is brittle and can conflict with prior knowledge; fine-tuning is costly and risks catastrophic forgetting. We propose Instruction-Level Weight Shaping (ILWS): curated system instructions act as external, auditable pseudo-parameters updated after each session via reflection and user feedback. A Reflection Engine inspects conversation traces, diagnoses reasoning successes and failures, and proposes typed deltas $\Delta K=(\Delta S,\Delta U,\Delta T)$ over instructions, user preferences, and tools. Deltas are version-controlled, evaluated with a sliding window of 1-5 star ratings, auto-repaired on first failure, and rolled back on repeated failure. When an edit budget crosses a threshold, the agent compiles a rating-weighted synthetic set and distills matured instruction-space gains into parameters, converting prompt-space improvements into weight-space without downtime. ILWS makes explicit the low-rank shaping induced by context in transformer blocks, preserves governance, and removes per-call retrieval. In enterprise support it increased throughput 2.4-5.0x and cut audited hallucinations by about 80% versus a frozen baseline. In an Adobe Commerce Cloud proof of concept "L0 Support", it achieved 4-5x more tickets per hour and about 80% lower time per ticket, with autonomous instruction updates and optional tool synthesis. 
Because ILWS operates at the instruction layer until controlled distillation, it generalizes to dynamic domains (legal, medical, engineering) requiring adaptive reasoning, tool creation, and low-latency deployment. + oai:arXiv.org:2509.00251v2 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Rimom Costa + + + Development of numerical methods for nonlinear hybrid stochastic functional differential equations with infinite delay + https://arxiv.org/abs/2509.00475 + arXiv:2509.00475v3 Announce Type: replace +Abstract: This paper addresses the challenging numerical simulation of nonlinear hybrid stochastic functional differential equations with infinite delays. We first propose an explicit scheme using space and time truncation, requiring only finite historical storage. Leveraging approximation theory, we prove the boundedness of the numerical solution's $p$th moment and establish its convergence, achieving a rate of $1/2$ order under polynomially growing coefficients. Furthermore, we refine the scheme to better capture the underlying exponential stability of the exact solution, in both moment and almost sure senses. Finally, numerical experiments are presented to validate our theoretical results. + oai:arXiv.org:2509.00475v3 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Guozhen Li, Xiaoyue Li, Xuerong Mao + + + InterPose: Learning to Generate Human-Object Interactions from Large-Scale Web Videos + https://arxiv.org/abs/2509.00767 + arXiv:2509.00767v2 Announce Type: replace +Abstract: Human motion generation has shown great advances thanks to the recent diffusion models trained on large-scale motion capture data. Most of existing works, however, currently target animation of isolated people in empty scenes. Meanwhile, synthesizing realistic human-object interactions in complex 3D scenes remains a critical challenge in computer graphics and robotics. 
One obstacle towards generating versatile high-fidelity human-object interactions is the lack of large-scale datasets with diverse object manipulations. Indeed, existing motion capture data is typically restricted to single people and manipulations of limited sets of objects. To address this issue, we propose an automatic motion extraction pipeline and use it to collect interaction-rich human motions. Our new dataset InterPose contains 73.8K sequences of 3D human motions and corresponding text captions automatically obtained from 45.8K videos with human-object interactions. We perform extensive experiments and demonstrate InterPose to bring significant improvements to state-of-the-art methods for human motion generation. Moreover, using InterPose we develop an LLM-based agent enabling zero-shot animation of people interacting with diverse objects and scenes. + oai:arXiv.org:2509.00767v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yangsong Zhang, Abdul Ahad Butt, G\"ul Varol, Ivan Laptev + + + Practical Channel Estimation for Pinching-Antenna Systems: Serial vs. Parallel and Downlink vs. Uplink? + https://arxiv.org/abs/2509.02403 + arXiv:2509.02403v2 Announce Type: replace +Abstract: The practical channel estimation for pinching-antenna networks is investigated, in which an electromagnetic-compliant in-waveguide transmission model is exhibited, incorporating bidirectional power splitting, cumulative power leakage, and waveguide attenuation. Based on this model, the paper investigates two antenna activation protocols for channel estimation: a serial protocol based on one-by-one antenna activation and a parallel protocol utilizing a binary S-Matrix activation. 
The serial protocol is characterized by its superior numerical stability but a lack of array gain, whereas the parallel protocol theoretically offers array gain but suffers from severe performance degradation due to structural crosstalk from the non-orthogonal S-Matrix and ill-conditioning from cumulative leakage. Furthermore, the paper analyzes the fundamental commonalities and asymmetries between uplink and downlink channel estimation in pinching-antenna systems. Numerical results demonstrate that 1) in an ideal lossless model, the parallel protocol is superior to the serial protocol due to the array gain from simultaneous energy collection in uplink transmission; 2) in a practical model with physical losses, the serial protocol outperforms the parallel protocol, as the performance of the parallel protocol is degraded by the numerical instability from cumulative leakage, which outweighs the benefit of array gain; 3) for downlink channel estimation, the serial protocol is more suitable because of its strategy of concentrating the entire power budget on one measurement, while the parallel protocol is more suitable for the uplink as it can make full use of array gain. + oai:arXiv.org:2509.02403v2 + cs.IT + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jian Xiao + + + TableMind: An Autonomous Programmatic Agent for Tool-Augmented Table Reasoning + https://arxiv.org/abs/2509.06278 + arXiv:2509.06278v3 Announce Type: replace +Abstract: Table reasoning requires models to jointly perform comprehensive semantic understanding and precise numerical operations. Although recent large language model (LLM)-based methods have achieved promising results, most of them still rely on a single-turn reasoning paradigm that processes flattened tables in a single forward pass.
This paradigm suffers from inherent limitations, including context overflow on large tables, weak sensitivity to continuous numerical values, and the absence of explicit tool-use and reflection. In this paper, we propose TableMind, a tuning-based autonomous programmatic table agent that simulates the human-like cognitive schema of multi-turn interaction within a lightweight LLM. Instead of adopting a training-free workflow design, TableMind learns to internalize planning, action, and reflection through a principled two-stage training strategy. To bootstrap structured table reasoning capabilities, we construct and filter high-quality reasoning data for the supervised fine-tuning (SFT) stage. To enable precise code generation, we introduce a multi-perspective reward scheme and a novel optimization objective in the reinforcement learning (RL) stage. Extensive experiments on diverse benchmarks demonstrate that TableMind consistently outperforms previous baselines, validating the effectiveness of training autonomous agents to improve overall performance. + oai:arXiv.org:2509.06278v3 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Chuang Jiang, Mingyue Cheng, Xiaoyu Tao, Qingyang Mao, Jie Ouyang, Qi Liu + + + Efficiently Ranking Software Variants with Minimal Benchmarks + https://arxiv.org/abs/2509.06716 + arXiv:2509.06716v2 Announce Type: replace +Abstract: Benchmarking is a common practice in software engineering to assess the qualities and performance of software variants, coming from multiple competing systems or from configurations of the same system. Benchmarks are used notably to compare and understand variant performance, fine-tune software, detect regressions, or design new software systems. The execution of benchmarks to get a complete picture of software variants is highly costly in terms of computational resources and time.
In this paper, we propose a novel approach for reducing benchmarks while maintaining stable rankings, using test suite optimization techniques. That is, we remove instances from the benchmarks while trying to keep the same rankings of the variants on all tests. Our method, BISection Sampling (BISS), strategically retains the most critical tests and applies a novel divide-and-conquer approach to efficiently sample among relevant remaining tests. We experiment with datasets and use cases from LLM leaderboards, SAT competitions, and configurable systems for performance modeling. Our results show that our method outperforms baselines even when operating on a subset of variants. Using BISS, we reduce the computational cost of the benchmarks on average to 44% and on more than half the benchmarks by up to 99% without loss in ranking stability. + oai:arXiv.org:2509.06716v2 + cs.SE + cs.PF + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Th\'eo Matricon, Mathieu Acher, Helge Spieker, Arnaud Gotlieb + + + Event Spectroscopy: Event-based Multispectral and Depth Sensing using Structured Light + https://arxiv.org/abs/2509.06741 + arXiv:2509.06741v2 Announce Type: replace +Abstract: Uncrewed aerial vehicles (UAVs) are increasingly deployed in forest environments for tasks such as environmental monitoring and search and rescue, which require safe navigation through dense foliage and precise data collection. Traditional sensing approaches, including passive multispectral and RGB imaging, suffer from latency, poor depth resolution, and strong dependence on ambient light - especially under forest canopies. In this work, we present a novel event spectroscopy system that simultaneously enables high-resolution, low-latency depth reconstruction with integrated multispectral imaging using a single sensor.
Depth is reconstructed using structured light, and by modulating the wavelength of the projected structured light, our system captures spectral information in controlled bands between 650 nm and 850 nm. We demonstrate up to $60\%$ improvement in RMSE over commercial depth sensors and validate the spectral accuracy against a reference spectrometer and commercial multispectral cameras, demonstrating comparable performance. A portable version limited to RGB (3 wavelengths) is used to collect real-world depth and spectral data from the Masoala Rainforest. We demonstrate the use of this prototype for color image reconstruction and material differentiation between leaves and branches using spectral and depth data. Our results show that adding depth (available at no extra effort with our setup) to material differentiation improves the accuracy by over $30\%$ compared to a color-only method. Our system, tested in both lab and real-world rainforest environments, shows strong performance in depth estimation, RGB reconstruction, and material differentiation - paving the way for lightweight, integrated, and robust UAV perception and data collection in complex natural environments. + oai:arXiv.org:2509.06741v2 + cs.CV + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-sa/4.0/ + Christian Geckeler, Niklas Neugebauer, Manasi Muglikar, Davide Scaramuzza, Stefano Mintchev + + + PLaID++: A Preference Aligned Language Model for Targeted Inorganic Materials Design + https://arxiv.org/abs/2509.07150 + arXiv:2509.07150v3 Announce Type: replace +Abstract: Reinforcement Learning from Verifiable Rewards (RLVR) has emerged as a promising approach to improve correctness in LLMs; however, in many scientific problems, the objective is not necessarily to produce the correct answer, but instead to produce a diverse array of candidates which satisfy a set of constraints. We study this challenge in the context of materials generation.
To this end, we introduce PLaID++, an LLM post-trained for stable and property-guided crystal generation. We find that performance hinges on our crystallographic representation and reward formulation. First, we introduce a compact, symmetry-informed Wyckoff text representation which improves computational efficiency and encourages generalization from physical priors. Second, we demonstrate that temperature scaling acts as an entropy regularizer which counteracts mode collapse and encourages exploration. By encoding symmetry constraints directly into text and guiding model outputs towards desirable chemical space, PLaID++ generates structures that are thermodynamically stable, unique, and novel at a $\sim$50\% greater rate than prior methods and conditionally generates structures with desired space group properties. Our work demonstrates the potential of adapting post-training techniques from natural language processing to materials design, paving the way for targeted and efficient discovery of novel materials. + oai:arXiv.org:2509.07150v3 + cs.LG + cond-mat.mtrl-sci + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Andy Xu, Rohan Desai, Larry Wang, Gabriel Hope, Ethan Ritz + + + SCA-LLM: Spectral-Attentive LLM-Based Wireless World Modeling for Agentic Communications + https://arxiv.org/abs/2509.08139 + arXiv:2509.08139v2 Announce Type: replace +Abstract: Future AI-native wireless networks are moving from reactive optimization to agentic decision-making that can sense, predict, and plan under fast-varying channels. This calls for wireless world models that can predict and roll out channel dynamics, for which multi-step channel state information (CSI) prediction offers a practical short-horizon look-ahead. Recent advances in foundation sequence models further motivate large language models (LLMs) as general-purpose dynamics learners when suitably adapted to non-text time-series signals. 
However, bridging CSI to LLMs is non-trivial because an effective adapter must expose informative spectral and temporal evolution patterns, while prior designs provide limited inductive bias to capture such channel structures. To this end, we propose SCA-LLM, a spectral-attentive LLM-based wireless world modeling framework that bridges CSI to LLMs via a spectral-channel attention (SCA) adapter. Specifically, the SCA adapter performs multi-spectral representation learning to extract informative channel features and align CSI with the LLM's sequence modeling capability, enabling parameter-efficient adaptation while keeping the LLM backbone largely frozen. Extensive simulations show that SCA-LLM achieves state-of-the-art prediction performance and strong zero-shot generalization, yielding up to -2.4 dB normalized mean squared error (NMSE) advantage over the previous LLM based method. Our ablation studies further confirm the effectiveness of the proposed SCA adapter in mitigating domain mismatch. + oai:arXiv.org:2509.08139v2 + cs.IT + cs.LG + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Ke He, Le He, Lisheng Fan, Xianfu Lei, Thang X. Vu, George K. Karagiannidis, Symeon Chatzinotas + + + A Systematic Survey on Large Language Models for Evolutionary Optimization: From Modeling to Solving + https://arxiv.org/abs/2509.08269 + arXiv:2509.08269v3 Announce Type: replace +Abstract: Large Language Models (LLMs), with their strong understanding and reasoning capabilities, are increasingly being explored for tackling optimization problems, especially in synergy with evolutionary computation. While several recent surveys have explored aspects of LLMs for optimization, there remains a need for an integrative perspective that connects problem modeling with solving workflows. This survey addresses this gap by providing a comprehensive review of recent developments and organizing them within a structured framework. 
We classify existing research into two main stages: LLMs for optimization modeling and LLMs for optimization solving. The latter is further divided into three paradigms according to the role of LLMs in the optimization workflow: LLMs as stand-alone optimizers, low-level LLMs embedded within optimization algorithms, and high-level LLMs for algorithm selection and generation. For each category, we analyze representative methods, distill technical challenges, and examine their interplay with traditional approaches. We also review interdisciplinary applications spanning the natural sciences, engineering, and machine learning. By contrasting LLM-driven and conventional methods, we highlight key limitations and research gaps, and point toward future directions for developing self-evolving agentic ecosystems for optimization. An up-to-date collection of related literature is maintained at https://github.com/ishmael233/LLM4OPT.
 + oai:arXiv.org:2509.08269v3
 + cs.NE
 + cs.AI
 + Tue, 23 Dec 2025 00:00:00 -0500
 + replace
 + http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 + Yisong Zhang, Ran Cheng, Guoxing Yi, Kay Chen Tan
 + 
 + 
 + The More You Automate, the Less You See: Hidden Pitfalls of AI Scientist Systems
 + https://arxiv.org/abs/2509.08713
 + arXiv:2509.08713v2 Announce Type: replace
+Abstract: AI scientist systems, capable of autonomously executing the full research workflow from hypothesis generation and experimentation to paper writing, hold significant potential for accelerating scientific discovery. However, the internal workflows of these systems have not been closely examined. This lack of scrutiny poses a risk of introducing flaws that could undermine the integrity, reliability, and trustworthiness of their research outputs. In this paper, we identify four potential failure modes in contemporary AI scientist systems: inappropriate benchmark selection, data leakage, metric misuse, and post-hoc selection bias. 
To examine these risks, we design controlled experiments that isolate each failure mode while addressing challenges unique to evaluating AI scientist systems. Our assessment of two prominent open-source AI scientist systems reveals the presence of several failures, across a spectrum of severity, which can be easily overlooked in practice. Finally, we demonstrate that access to trace logs and code from the full automated workflow enables far more effective detection of such failures than examining the final paper alone. We thus recommend that journals and conferences evaluating AI-generated research mandate submission of these artifacts alongside the paper to ensure transparency, accountability, and reproducibility.
 + oai:arXiv.org:2509.08713v2
 + cs.AI
 + cs.DL
 + Tue, 23 Dec 2025 00:00:00 -0500
 + replace
 + http://creativecommons.org/licenses/by/4.0/
 + Ziming Luo, Atoosa Kasirzadeh, Nihar B. Shah
 + 
 + 
 + Tree-OPO: Off-policy Monte Carlo Tree-Guided Advantage Optimization for Multistep Reasoning
 + https://arxiv.org/abs/2509.09284
 + arXiv:2509.09284v3 Announce Type: replace
+Abstract: Recent advances in reasoning with large language models (LLMs) have shown the effectiveness of Monte Carlo Tree Search (MCTS) for generating high quality intermediate trajectories, particularly in math and symbolic domains. Inspired by this, we explore how MCTS derived trajectories, traditionally used for training value or reward models, can be repurposed to improve policy optimization in verifier guided reinforcement learning (RL). Specifically, we focus on Group Relative Policy Optimization (GRPO), a recent algorithm that enables consistent policy learning from group relative judgments. We reframe GRPO into a staged training paradigm, leveraging a teacher's MCTS rollouts to construct a tree structured curriculum of prefixes. This introduces the novel challenge of computing advantages for training samples that originate from different prefixes, each with a distinct expected return. 
To address this, we propose Staged Advantage Estimation (SAE), a framework for computing low variance, prefix aware advantages by projecting rewards onto a constraint set that respects the tree's hierarchy. Our empirical results on mathematical reasoning tasks show that SAE improves final accuracy over standard GRPO. This outcome is grounded in our theoretical analysis, which confirms that SAE reduces gradient variance, a principled path to improved sample efficiency. We demonstrate this through practical SAE implementations, comparing efficient heuristics against a formal quadratic program.
 + oai:arXiv.org:2509.09284v3
 + cs.AI
 + cs.CL
 + cs.LG
 + Tue, 23 Dec 2025 00:00:00 -0500
 + replace
 + http://creativecommons.org/licenses/by-nc-nd/4.0/
 + Bingning Huang, Tu Nguyen, Matthieu Zimmer
 + 
 + 
 + Explaining Tournament Solutions with Minimal Supports
 + https://arxiv.org/abs/2509.09312
 + arXiv:2509.09312v3 Announce Type: replace
+Abstract: Tournaments are widely used models to represent pairwise dominance between candidates, alternatives, or teams. We study the problem of providing certified explanations for why a candidate appears among the winners under various tournament rules. To this end, we identify minimal supports, minimal sub-tournaments in which the candidate is guaranteed to win regardless of how the rest of the tournament is completed (that is, the candidate is a necessary winner of the sub-tournament). This notion corresponds to an abductive explanation for the question, "Why does the winner win the tournament?", a central concept in formal explainable AI. We focus on common tournament solutions: the top cycle, the uncovered set, the Copeland rule, the Borda rule, the maximin rule, and the weighted uncovered set. For each rule we determine the size of the smallest minimal supports, and we present polynomial-time algorithms to compute them for all solutions except for the weighted uncovered set, for which the problem is NP-complete. 
Finally, we show how minimal supports can serve to produce compact, certified, and intuitive explanations for tournament solutions.
 + oai:arXiv.org:2509.09312v3
 + cs.AI
 + Tue, 23 Dec 2025 00:00:00 -0500
 + replace
 + http://creativecommons.org/licenses/by/4.0/
 + Cl\'ement Contet, Umberto Grandi, J\'er\^ome Mengin
 + 
 + 
 + Using LLMs for Late Multimodal Sensor Fusion for Activity Recognition
 + https://arxiv.org/abs/2509.10729
 + arXiv:2509.10729v3 Announce Type: replace
+Abstract: Sensor data streams provide valuable information around activities and context for downstream applications, though integrating complementary information can be challenging. We show that large language models (LLMs) can be used for late fusion for activity classification from audio and motion time series data. We curated a subset of data for diverse activity recognition across contexts (e.g., household activities, sports) from the Ego4D dataset. Evaluated LLMs achieved 12-class zero- and one-shot classification F1-scores significantly above chance, with no task-specific training. Zero-shot classification via LLM-based fusion from modality-specific models can enable multimodal temporal applications where there is limited aligned training data for learning a shared embedding space. Additionally, LLM-based fusion can enable model deployment without requiring additional memory and computation for targeted application-specific multimodal models. 
+ oai:arXiv.org:2509.10729v3 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Ilker Demirel, Karan Thakkar, Benjamin Elizalde, Miquel Espi Marques, Aditya Sarathy, Yang Bai, Umamahesh Srinivas, Jiajie Xu, Shirley Ren, Jaya Narain + + + AsyMoE: Leveraging Modal Asymmetry for Enhanced Expert Specialization in Large Vision-Language Models + https://arxiv.org/abs/2509.12715 + arXiv:2509.12715v2 Announce Type: replace +Abstract: Large Vision-Language Models (LVLMs) have demonstrated impressive performance on multimodal tasks through scaled architectures and extensive training. However, existing Mixture of Experts (MoE) approaches face challenges due to the asymmetry between visual and linguistic processing. Visual information is spatially complete, while language requires maintaining sequential context. As a result, MoE models struggle to balance modality-specific features and cross-modal interactions. Through systematic analysis, we observe that language experts in deeper layers progressively lose contextual grounding and rely more on parametric knowledge rather than utilizing the provided visual and linguistic information. To address this, we propose AsyMoE, a novel architecture that models this asymmetry using three specialized expert groups. We design intra-modality experts for modality-specific processing, hyperbolic inter-modality experts for hierarchical cross-modal interactions, and evidence-priority language experts to suppress parametric biases and maintain contextual grounding. Extensive experiments demonstrate that AsyMoE achieves 26.58% and 15.45% accuracy improvements over vanilla MoE and modality-specific MoE respectively, with 25.45% fewer activated parameters than dense models. 
+ oai:arXiv.org:2509.12715v2 + cs.CV + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Heng Zhang, Haichuan Hu, Yaomin Shen, Weihao Yu, Yilei Yuan, Haochen You, Guo Cheng, Zijian Zhang, Lubin Gan, Huihui Wei, Hao Zhang, Jin Huang + + + Clustering with Set Outliers and Applications in Relational Clustering + https://arxiv.org/abs/2509.16194 + arXiv:2509.16194v2 Announce Type: replace +Abstract: We introduce and study the $k$-center clustering problem with set outliers, a natural and practical generalization of the classical $k$-center clustering with outliers. Instead of removing individual data points, our model allows discarding up to $z$ subsets from a given family of candidate outlier sets $\mathcal{H}$. Given a metric space $(P,\mathsf{dist})$, where $P$ is a set of elements and $\mathsf{dist}$ a distance metric, a family of sets $\mathcal{H}\subseteq 2^P$, and parameters $k, z$, the goal is to compute a set of $k$ centers $S\subseteq P$ and a family of $z$ sets $H\subseteq \mathcal{H}$ to minimize $\max_{p\in P\setminus(\bigcup_{h\in H} h)} \min_{s\in S}\mathsf{dist}(p,s)$. This abstraction captures structured noise common in database applications, such as faulty data sources or corrupted records in data integration and sensor systems. + We present the first approximation algorithms for this problem in both general and geometric settings. Our methods provide tri-criteria approximations: selecting up to $2k$ centers and $2f z$ outlier sets (where $f$ is the maximum number of sets that a point belongs to), while achieving $O(1)$-approximation in clustering cost. In geometric settings, we leverage range and BBD trees to achieve near-linear time algorithms. In many real applications $f=1$. In this case we further improve the running time of our algorithms by constructing small \emph{coresets}. 
We also provide a hardness result for the general problem showing that it is unlikely to get any sublinear approximation on the clustering cost selecting less than $f\cdot z$ outlier sets. + We demonstrate that this model naturally captures relational clustering with outliers: outliers are input tuples whose removal affects the join output. We provide approximation algorithms for both, establishing a tight connection between robust clustering and relational query evaluation. + oai:arXiv.org:2509.16194v2 + cs.DS + cs.DB + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + PODS 2026 + Vaishali Surianarayanan, Neeraj Kumar, Stavros Sintos + + + Self-Supervised Learning of Graph Representations for Network Intrusion Detection + https://arxiv.org/abs/2509.16625 + arXiv:2509.16625v5 Announce Type: replace +Abstract: Detecting intrusions in network traffic is a challenging task, particularly under limited supervision and constantly evolving attack patterns. While recent works have leveraged graph neural networks for network intrusion detection, they often decouple representation learning from anomaly detection, limiting the utility of the embeddings for identifying attacks. We propose GraphIDS, a self-supervised intrusion detection model that unifies these two stages by learning local graph representations of normal communication patterns through a masked autoencoder. An inductive graph neural network embeds each flow with its local topological context to capture typical network behavior, while a Transformer-based encoder-decoder reconstructs these embeddings, implicitly learning global co-occurrence patterns via self-attention without requiring explicit positional information. During inference, flows with unusually high reconstruction errors are flagged as potential intrusions. This end-to-end framework ensures that embeddings are directly optimized for the downstream task, facilitating the recognition of malicious traffic. 
On diverse NetFlow benchmarks, GraphIDS achieves up to 99.98% PR-AUC and 99.61% macro F1-score, outperforming baselines by 5-25 percentage points. + oai:arXiv.org:2509.16625v5 + cs.LG + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Lorenzo Guerra, Thomas Chapuis, Guillaume Duc, Pavlo Mozharovskyi, Van-Tam Nguyen + + + Closing the Loop Inside Neural Networks: Causality-Guided Layer Adaptation for Fault Recovery Control + https://arxiv.org/abs/2509.16837 + arXiv:2509.16837v2 Announce Type: replace +Abstract: This paper studies the problem of real-time fault recovery control for nonlinear control-affine systems subject to actuator loss of effectiveness faults and external disturbances. We derive a two-stage framework that combines causal inference with selective online adaptation to achieve an effective learning-based recovery control method. In the offline phase, we develop a causal layer attribution technique based on the average causal effect (ACE) to evaluate the relative importance of each layer in a pretrained deep neural network (DNN) controller compensating for faults. This methodology identifies a subset of high-impact layers responsible for robust fault compensation. In the online phase, we deploy a Lyapunov-based gradient update to adapt only the ACE-selected layer to circumvent the need for full-network or last-layer only updates. The proposed adaptive controller guarantees uniform ultimate boundedness (UUB) with exponential convergence of the closed-loop system in the presence of actuator faults and external disturbances. Compared to conventional adaptive DNN controllers with full-network adaptation, our methodology has a reduced computational overhead. To demonstrate the effectiveness of our proposed methodology, a case study is provided on a 3-axis attitude control system of a spacecraft with four reaction wheels. 
+ oai:arXiv.org:2509.16837v2 + eess.SY + cs.SY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Mahdi Taheri, Soon-Jo Chung, Fred Y. Hadaegh + + + Quantum Abduction: A New Paradigm for Reasoning under Uncertainty + https://arxiv.org/abs/2509.16958 + arXiv:2509.16958v2 Announce Type: replace +Abstract: Abductive reasoning - the search for plausible explanations - has long been central to human inquiry, from forensics to medicine and scientific discovery. Yet formal approaches in AI have largely reduced abduction to eliminative search: hypotheses are treated as mutually exclusive, evaluated against consistency constraints or probability updates, and pruned until a single "best" explanation remains. This reductionist framing overlooks the way human reasoners sustain multiple explanatory lines in suspension, navigate contradictions, and generate novel syntheses. This paper introduces quantum abduction, a non-classical paradigm that models hypotheses in superposition, allows them to interfere constructively or destructively, and collapses only when coherence with evidence is reached. Grounded in quantum cognition and implemented with modern NLP embeddings and generative AI, the framework supports dynamic synthesis rather than premature elimination. Case studies span historical mysteries (Ludwig II of Bavaria, the "Monster of Florence"), literary demonstrations ("Murder on the Orient Express"), medical diagnosis, and scientific theory change. Across these domains, quantum abduction proves more faithful to the constructive and multifaceted nature of human reasoning, while offering a pathway toward expressive and transparent AI reasoning systems. 
+ oai:arXiv.org:2509.16958v2 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Remo Pareschi + + + High Resolution UDF Meshing via Iterative Networks + https://arxiv.org/abs/2509.17212 + arXiv:2509.17212v3 Announce Type: replace +Abstract: Unsigned Distance Fields (UDFs) are a natural implicit representation for open surfaces but, unlike Signed Distance Fields (SDFs), are challenging to triangulate into explicit meshes. This is especially true at high resolutions where neural UDFs exhibit higher noise levels, which makes it hard to capture fine details. Most current techniques perform within single voxels without reference to their neighborhood, resulting in missing surface and holes where the UDF is ambiguous or noisy. We show that this can be remedied by performing several passes and by reasoning on previously extracted surface elements to incorporate neighborhood information. Our key contribution is an iterative neural network that does this and progressively improves surface recovery within each voxel by spatially propagating information from increasingly distant neighbors. Unlike single-pass methods, our approach integrates newly detected surfaces, distance values, and gradients across multiple iterations, effectively correcting errors and stabilizing extraction in challenging regions. Experiments on diverse 3D models demonstrate that our method produces significantly more accurate and complete meshes than existing approaches, particularly for complex geometries, enabling UDF surface extraction at higher resolutions where traditional methods fail. 
+ oai:arXiv.org:2509.17212v3 + cs.GR + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Federico Stella, Nicolas Talabot, Hieu Le, Pascal Fua + + + RadAgents: Multimodal Agentic Reasoning for Chest X-ray Interpretation with Radiologist-like Workflows + https://arxiv.org/abs/2509.20490 + arXiv:2509.20490v3 Announce Type: replace +Abstract: Agentic systems offer a potential path to solve complex clinical tasks through collaboration among specialized agents, augmented by tool use and external knowledge bases. Nevertheless, for chest X-ray (CXR) interpretation, prevailing methods remain limited: (i) reasoning is frequently neither clinically interpretable nor aligned with guidelines, reflecting mere aggregation of tool outputs; (ii) multimodal evidence is insufficiently fused, yielding text-only rationales that are not visually grounded; and (iii) systems rarely detect or resolve cross-tool inconsistencies and provide no principled verification mechanisms. To bridge the above gaps, we present RadAgents, a multi-agent framework that couples clinical priors with task-aware multimodal reasoning and encodes a radiologist-style workflow into a modular, auditable pipeline. In addition, we integrate grounding and multimodal retrieval-augmentation to verify and resolve context conflicts, resulting in outputs that are more reliable, transparent, and consistent with clinical practice. + oai:arXiv.org:2509.20490v3 + cs.MA + cs.CL + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Kai Zhang, Corey D Barrett, Jangwon Kim, Lichao Sun, Tara Taghavi, Krishnaram Kenthapadi + + + GenUQ: Predictive Uncertainty Estimates via Generative Hyper-Networks + https://arxiv.org/abs/2509.21605 + arXiv:2509.21605v2 Announce Type: replace +Abstract: Operator learning is a recently developed generalization of regression to mappings between functions. 
It promises to drastically reduce expensive numerical integration of PDEs to fast evaluations of mappings between functional states of a system, i.e., surrogate and reduced-order modeling. Operator learning has already found applications in several areas such as modeling sea ice, combustion, and atmospheric physics. Recent approaches towards integrating uncertainty quantification into the operator models have relied on likelihood-based methods to infer parameter distributions from noisy data. However, stochastic operators may yield actions from which a likelihood is difficult or impossible to construct. In this paper, we introduce GenUQ, a measure-theoretic approach to UQ that avoids constructing a likelihood by introducing a generative hyper-network model that produces parameter distributions consistent with observed data. We demonstrate that GenUQ outperforms other UQ methods in three example problems, recovering a manufactured operator, learning the solution operator to a stochastic elliptic PDE, and modeling the failure location of porous steel under tension.
 + oai:arXiv.org:2509.21605v2
 + cs.LG
 + cs.NA
 + math.NA
 + stat.ML
 + Tue, 23 Dec 2025 00:00:00 -0500
 + replace
 + http://creativecommons.org/licenses/by/4.0/
 + Tian Yu Yen, Reese E. Jones, Ravi G. Patel
 + 
 + 
 + Bridging the Gap Between Scientific Laws Derived by AI Systems and Canonical Knowledge via Abductive Inference with AI-Noether
 + https://arxiv.org/abs/2509.23004
 + arXiv:2509.23004v2 Announce Type: replace
+Abstract: Advances in AI have shown great potential in contributing to the acceleration of scientific discovery. Symbolic regression can fit interpretable models to data, but these models are not necessarily derivable from established theory. Recent systems (e.g., AI-Descartes, AI-Hilbert) enforce derivability from prior knowledge. However, when existing theories are incomplete or incorrect, these machine-generated hypotheses may fall outside the theoretical scope. 
Automatically finding corrections to axiom systems to close this gap remains a central challenge in scientific discovery. We propose a solution: an open-source algebraic geometry-based system that, given an incomplete axiom system expressible as polynomials and a hypothesis that the axioms cannot derive, generates a minimal set of candidate axioms that, when added to the theory, provably derive the (possibly noisy) hypothesis. We illustrate the efficacy of our approach by showing that it can reconstruct key axioms required to derive the carrier-resolved photo-Hall effect, Einstein's relativistic laws, and several other laws.
 + oai:arXiv.org:2509.23004v2
 + cs.AI
 + cs.SC
 + math.AG
 + Tue, 23 Dec 2025 00:00:00 -0500
 + replace
 + http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 + Karan Srivastava, Sanjeeb Dash, Ryan Cory-Wright, Barry Trager, Cristina Cornelio, Lior Horesh
 + 
 + 
 + Network-Optimised Spiking Neural Network for Event-Driven Networking
 + https://arxiv.org/abs/2509.23516
 + arXiv:2509.23516v3 Announce Type: replace
+Abstract: Time-critical networking requires low-latency decisions from sparse and bursty telemetry, where fixed-step neural inference wastes computation. We introduce Network-Optimised Spiking (NOS), a two-state neuron whose variables correspond to normalised queue occupancy and a recovery resource. NOS combines a saturating excitability nonlinearity for finite buffers, service and damping leaks, graph-local inputs with per-link gates and delays, and differentiable resets compatible with surrogate gradients and neuromorphic deployment. We establish existence and uniqueness of subthreshold equilibria, derive Jacobian-based local stability tests, and obtain a scalar network stability threshold that separates topology from node physics through a Perron-mode spectral condition. A stochastic arrival model aligned with telemetry smoothing links NOS responses to classical queueing behaviour while explaining increased variability near stability margins. 
Across chain, star, and scale-free graphs, NOS improves early-warning F1 and detection latency over MLP, RNN, GRU, and temporal-GNN baselines under a common residual-based protocol, while providing practical calibration and stability rules suited to resource-constrained networking deployments. Code and Demos: https://mbilal84.github.io/nos-snn-networking/ + oai:arXiv.org:2509.23516v3 + cs.NE + cs.LG + cs.NI + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Muhammad Bilal + + + SPELL: Self-Play Reinforcement Learning for evolving Long-Context Language Models + https://arxiv.org/abs/2509.23863 + arXiv:2509.23863v2 Announce Type: replace +Abstract: Progress in long-context reasoning for large language models (LLMs) has lagged behind other recent advances. This gap arises not only from the intrinsic difficulty of processing long texts, but also from the scarcity of reliable human annotations and programmatically verifiable reward signals. In this paper, we propose SPELL, a multi-role self-play reinforcement learning framework that enables scalable, label-free optimization for long-context reasoning. SPELL integrates three cyclical roles-questioner, responder, and verifier-within a single model to enable continual self-improvement. The questioner generates questions from raw documents paired with reference answers; the responder learns to solve these questions based on the documents; and the verifier evaluates semantic equivalence between the responder's output and the questioner's reference answer, producing reward signals to guide continual training. To stabilize training, we introduce an automated curriculum that gradually increases document length and a reward function that adapts question difficulty to the model's evolving capabilities. 
Extensive experiments on six long-context benchmarks show that SPELL consistently improves performance across diverse LLMs and outperforms equally sized models fine-tuned on large-scale annotated data. Notably, SPELL achieves an average 7.6-point gain in pass@8 on the strong reasoning model Qwen3-30B-A3B-Thinking, raising its performance ceiling and showing promise for scaling to even more capable models. + oai:arXiv.org:2509.23863v2 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ziyi Yang, Weizhou Shen, Chenliang Li, Ruijun Chen, Fanqi Wan, Ming Yan, Xiaojun Quan, Fei Huang + + + Learning Encoding-Decoding Direction Pairs to Unveil Concepts of Influence in Deep Vision Networks + https://arxiv.org/abs/2509.23926 + arXiv:2509.23926v2 Announce Type: replace +Abstract: Empirical evidence shows that deep vision networks often represent concepts as directions in latent space with concept information written along directional components in the vector representation of the input. However, the mechanism to encode (write) and decode (read) concept information to and from vector representations is not directly accessible as it constitutes a latent mechanism that naturally emerges from the training process of the network. Recovering this mechanism unlocks significant potential to open the black-box nature of deep networks, enabling understanding, debugging, and improving deep learning models. In this work, we propose an unsupervised method to recover this mechanism. For each concept, we explain that under the hypothesis of linear concept representations, this mechanism can be implemented with the help of two directions: the first facilitating encoding of concept information and the second facilitating decoding. 
Unlike prior matrix decomposition, autoencoder, or dictionary learning methods that rely on feature reconstruction, we propose a new perspective: decoding directions are identified via directional clustering of activations, and encoding directions are estimated with signal vectors under a probabilistic view. We further leverage network weights through a novel technique, Uncertainty Region Alignment, which reveals interpretable directions affecting predictions. Our analysis shows that (a) on synthetic data, our method recovers ground-truth direction pairs; (b) on real data, decoding directions map to monosemantic, interpretable concepts and outperform unsupervised baselines; and (c) signal vectors faithfully estimate encoding directions, validated via activation maximization. Finally, we demonstrate applications in understanding global model behavior, explaining individual predictions, and intervening to produce counterfactuals or correct errors. + oai:arXiv.org:2509.23926v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Alexandros Doumanoglou, Kurt Driessens, Dimitrios Zarpalas + + + Brain-language fusion enables interactive neural readout and in-silico experimentation + https://arxiv.org/abs/2509.23941 + arXiv:2509.23941v2 Announce Type: replace +Abstract: Large language models (LLMs) have revolutionized human-machine interaction, and have been extended by embedding diverse modalities such as images into a shared language space. Yet, neural decoding has remained constrained by static, non-interactive methods. We introduce CorText, a framework that integrates neural activity directly into the latent space of an LLM, enabling open-ended, natural language interaction with brain data. Trained on fMRI data recorded during viewing of natural scenes, CorText generates accurate image captions and can answer more detailed questions better than controls, while having access to neural data only. 
We showcase that CorText achieves zero-shot generalization beyond semantic categories seen during training. In-silico microstimulation experiments, which enable counterfactual prompts on brain activity, reveal a consistent, and graded mapping between brain-state and language output. These advances mark a shift from passive decoding toward generative, flexible interfaces between brain activity and language. + oai:arXiv.org:2509.23941v2 + cs.LG + q-bio.NC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Victoria Bosch, Daniel Anthes, Adrien Doerig, Sushrut Thorat, Peter K\"onig, Tim Christian Kietzmann + + + OAT-FM: Optimal Acceleration Transport for Improved Flow Matching + https://arxiv.org/abs/2509.24936 + arXiv:2509.24936v2 Announce Type: replace +Abstract: As a powerful technique in generative modeling, Flow Matching (FM) aims to learn velocity fields from noise to data, which is often explained and implemented as solving Optimal Transport (OT) problems. In this study, we bridge FM and the recent theory of Optimal Acceleration Transport (OAT), developing an improved FM method called OAT-FM and exploring its benefits in both theory and practice. In particular, we demonstrate that the straightening objective hidden in existing OT-based FM methods is mathematically equivalent to minimizing the physical action associated with acceleration defined by OAT. Accordingly, instead of enforcing constant velocity, OAT-FM optimizes the acceleration transport in the product space of sample and velocity, whose objective corresponds to a necessary and sufficient condition of flow straightness. An efficient algorithm is designed to achieve OAT-FM with low complexity. OAT-FM motivates a new two-phase FM paradigm: Given a generative model trained by an arbitrary FM method, whose velocity information has been relatively reliable, we can fine-tune and improve it via OAT-FM. 
This paradigm eliminates the risk of data distribution drift and the need to generate a large number of noise data pairs, which consistently improves model performance in various generative tasks. Code is available at: https://github.com/AngxiaoYue/OAT-FM + oai:arXiv.org:2509.24936v2 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Angxiao Yue, Anqi Dong, Hongteng Xu + + + Scaling Behaviors of LLM Reinforcement Learning Post-Training: An Empirical Study in Mathematical Reasoning + https://arxiv.org/abs/2509.25300 + arXiv:2509.25300v3 Announce Type: replace +Abstract: While scaling laws for large language models (LLMs) during pre-training have been extensively studied, their behavior under reinforcement learning (RL) post-training remains largely unexplored. This paper presents a systematic empirical investigation of scaling behaviors in RL-based post-training, with a particular focus on mathematical reasoning. Based on a set of experiments across the full Qwen2.5 dense model series (0.5B to 72B), we characterize how model scale, data volume, and computational budget interact to shape performance. Our analysis leads to four key findings: 1. Larger models consistently exhibit superior learning efficiency on both compute and data metrics. 2. The relationship between test loss, compute, and data can be modeled by a predictive power-law which is robust across both base and instruction-tuned models. 3. Although larger models exhibit higher learning efficiency, the analytical learning efficiency term k(N) in the power-law reveals a latent saturation trend in learning efficiency as model size continues to increase. 4. In data-constrained regimes, repeated reuse of high-quality data proves highly effective, as final performance is primarily governed by the total number of optimization steps rather than the uniqueness of samples. 
Collectively, these results provide a principled foundation and practical guidelines for efficiently scaling the reasoning capabilities of LLMs through RL post-training. + oai:arXiv.org:2509.25300v3 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zelin Tan, Hejia Geng, Xiaohang Yu, Mulei Zhang, Guancheng Wan, Yifan Zhou, Qiang He, Xiangyuan Xue, Heng Zhou, Yutao Fan, Zhongzhi Li, Zaibin Zhang, Guibin Zhang, Chen Zhang, Zhenfei Yin, Philip Torr, Lei Bai + + + EEsizer: LLM-Based AI Agent for Sizing of Analog and Mixed Signal Circuit + https://arxiv.org/abs/2509.25510 + arXiv:2509.25510v2 Announce Type: replace +Abstract: The design of Analog and Mixed-Signal (AMS) integrated circuits (ICs) often involves significant manual effort, especially during the transistor sizing process. While Machine Learning techniques in Electronic Design Automation (EDA) have shown promise in reducing complexity and minimizing human intervention, they still face challenges such as numerous iterations and a lack of knowledge about AMS circuit design. Recently, Large Language Models (LLMs) have demonstrated significant potential across various fields, showing a certain level of knowledge in circuit design and indicating their potential to automate the transistor sizing process. In this work, we propose EEsizer, an LLM-based AI agent that integrates large language models with circuit simulators and custom data analysis functions, enabling fully automated, closed-loop transistor sizing without relying on external knowledge. By employing prompt engineering and Chain-of-Thought reasoning, the agent iteratively explores design directions, evaluates performance, and refines solutions with minimal human intervention. 
We first benchmarked 8 LLMs on six basic circuits and selected three high-performing models to optimize a 20-transistor CMOS operational amplifier, targeting multiple performance metrics, including rail-to-rail operation from 180 nm to 90 nm technology nodes. Notably, OpenAI o3 successfully achieved the user-intended target at 90 nm across three different test groups, with a maximum of 20 iterations, demonstrating adaptability and robustness at advanced nodes. To assess design robustness, we manually designed a bias circuit and performed a variation analysis using Gaussian-distributed variations on transistor dimensions and threshold voltages. + oai:arXiv.org:2509.25510v2 + cs.LG + cs.AR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.1109/TCSI.2025.3646359 + IEEE Transactions on Circuits and Systems I: Regular Papers, 2026 + Chang Liu, Danial Chitnis + + + Geometric Learning of Canonical Parameterizations of $2D$-curves + https://arxiv.org/abs/2509.26070 + arXiv:2509.26070v2 Announce Type: replace +Abstract: Most datasets encountered in computer vision and medical applications present symmetries that should be taken into account in classification tasks. A typical example is the symmetry by rotation and/or scaling in object detection. A common way to build neural networks that learn the symmetries is to use data augmentation. In order to avoid data augmentation and build more sustainable algorithms, we present an alternative method to mod out symmetries based on the notion of section of a principal fiber bundle. This framework allows the use of simple metrics on the space of objects in order to measure dissimilarities between orbits of objects under the symmetry group. Moreover, the section used can be optimized to maximize separation of classes. We illustrate this methodology on a dataset of contours of objects for the groups of translations, rotations, scalings and reparameterizations. 
In particular, we present a $2$-parameter family of canonical parameterizations of curves, containing the constant-speed parameterization as a special case, which we believe is interesting in its own right. We hope that this simple application will serve to convey the geometric concepts underlying this method, which have a wide range of possible applications. The code is available at the following link: $\href{https://github.com/GiLonga/Geometric-Learning}{https://github.com/GiLonga/Geometric-Learning}$. A tutorial notebook showcasing an application of the code to a specific dataset is available at the following link: $\href{https://github.com/ioanaciuclea/geometric-learning-notebook}{https://github.com/ioanaciuclea/geometric-learning-notebook}$ + oai:arXiv.org:2509.26070v2 + cs.CV + math.DG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Ioana Ciuclea, Giorgio Longari, Alice Barbara Tumpach + + + On Listwise Reranking for Corpus Feedback + https://arxiv.org/abs/2510.00887 + arXiv:2510.00887v2 Announce Type: replace +Abstract: Reranker improves retrieval performance by capturing document interactions. At one extreme, graph-aware adaptive retrieval (GAR) represents an information-rich regime, requiring a pre-computed document similarity graph in reranking. However, as such graphs are often unavailable, or incur quadratic memory costs even when available, graph-free rerankers leverage large language model (LLM) calls to achieve competitive performance. We introduce L2G, a novel framework that implicitly induces document graphs from listwise reranker logs. By converting reranker signals into a graph structure, L2G enables scalable graph-based retrieval without the overhead of explicit graph computation. Results on the TREC-DL and BEIR subset show that L2G matches the effectiveness of oracle-based graph methods, while incurring zero additional LLM calls. 
+ oai:arXiv.org:2510.00887v2 + cs.IR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-sa/4.0/ + 10.1145/3773966.3779404 + Soyoung Yoon, Jongho Kim, Daeyong Kwon, Avishek Anand, Seung-won Hwang + + + Style Over Story: A Process-Oriented Study of Authorial Creativity in Large Language Models + https://arxiv.org/abs/2510.02025 + arXiv:2510.02025v2 Announce Type: replace +Abstract: Evaluations of large language models (LLMs)' creativity have focused primarily on the quality of their outputs rather than the processes that shape them. This study takes a process-oriented approach, drawing on narratology to examine LLMs as computational authors. We introduce constraint-based decision-making as a lens for authorial creativity. Using controlled prompting to assign authorial personas, we analyze the creative preferences of the models. Our findings show that LLMs consistently emphasize Style over other elements, including Character, Event, and Setting. By also probing the reasoning the models provide for their choices, we show that distinctive profiles emerge across models and argue that our approach provides a novel systematic tool for analyzing AI's authorial creativity. + oai:arXiv.org:2510.02025v2 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Donghoon Jung, Jiwoo Choi, Songeun Chae, Seohyon Jung + + + iDDS: Intelligent Distributed Dispatch and Scheduling for Workflow Orchestration + https://arxiv.org/abs/2510.02930 + arXiv:2510.02930v2 Announce Type: replace +Abstract: The intelligent Distributed Dispatch and Scheduling (iDDS) service is a versatile workflow orchestration system designed for large-scale, distributed scientific computing. iDDS extends traditional workload and data management by integrating data-aware execution, conditional logic, and programmable workflows, enabling automation of complex and dynamic processing pipelines. 
Originally developed for the ATLAS experiment at the Large Hadron Collider, iDDS has evolved into an experiment-agnostic platform that supports both template-driven workflows and a Function-as-a-Task model for Python-based orchestration. + This paper presents the architecture and core components of iDDS, highlighting its scalability, modular message-driven design, and integration with systems such as PanDA and Rucio. We demonstrate its versatility through real-world use cases: fine-grained tape resource optimization for ATLAS, orchestration of large Directed Acyclic Graph (DAG) workflows for the Rubin Observatory, distributed hyperparameter optimization for machine learning applications, active learning for physics analyses, and AI-assisted detector design at the Electron-Ion Collider. + By unifying workload scheduling, data movement, and adaptive decision-making, iDDS reduces operational overhead and enables reproducible, high-throughput workflows across heterogeneous infrastructures. We conclude with current challenges and future directions, including interactive, cloud-native, and serverless workflow support. + oai:arXiv.org:2510.02930v2 + cs.DC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Wen Guan, Tadashi Maeno, Aleksandr Alekseev, Fernando Harald Barreiro Megino, Kaushik De, Edward Karavakis, Alexei Klimentov, Tatiana Korchuganova, FaHui Lin, Paul Nilsson, Torre Wenaus, Zhaoyu Yang, Xin Zhao + + + Multimodal Neural Operators for Real-Time Biomechanical Modelling of Traumatic Brain Injury + https://arxiv.org/abs/2510.03248 + arXiv:2510.03248v2 Announce Type: replace +Abstract: Background: Traumatic brain injury (TBI) is a major global health concern with 69 million annual cases. 
While neural operators have revolutionized scientific computing, existing architectures cannot handle the heterogeneous multimodal data (anatomical imaging, scalar demographics, and geometric constraints) required for patient-specific biomechanical modeling. Objective: This study introduces the first multimodal neural operator framework for biomechanics, fusing heterogeneous inputs to predict brain displacement fields for rapid TBI risk assessment. Methods: TBI modeling was reformulated as a multimodal operator learning problem. We proposed two fusion strategies: field projection for Fourier Neural Operator (FNO) architectures and branch decomposition for Deep Operator Networks (DeepONet). Four architectures (FNO, Factorized FNO, Multi-Grid FNO, and DeepONet) were extended with fusion mechanisms and evaluated on 249 in vivo Magnetic Resonance Elastography (MRE) datasets (20-90 Hz). Results: Multi-Grid FNO achieved the highest accuracy (MSE = 0.0023, 94.3% spatial fidelity). DeepONet offered the fastest inference (14.5 iterations/s, 7x speedup), suitable for edge deployment. All architectures reduced computation from hours to milliseconds. Conclusion: Multimodal neural operators enable efficient, real-time, patient-specific TBI risk assessment. This framework establishes a generalizable paradigm for heterogeneous data fusion in scientific domains, including precision medicine. + oai:arXiv.org:2510.03248v2 + cs.LG + cs.AI + cs.CV + physics.med-ph + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Anusha Agarwal, Dibakar Roy Sarkar, Somdatta Goswami + + + Dual Pruning and Sorting-Free Overestimation for Average-Utility Sequential Pattern Mining + https://arxiv.org/abs/2510.04014 + arXiv:2510.04014v2 Announce Type: replace +Abstract: In a quantitative sequential database, numerous efficient algorithms have been developed for high-utility sequential pattern mining (HUSPM). 
HUSPM establishes a relationship between frequency and significance in the real world and reflects more crucial information than frequent pattern mining. However, high average-utility sequential pattern mining (HAUSPM) is deemed fairer and more valuable than HUSPM. It provides a reasonable measure for longer patterns by considering their length. In contrast to scenarios in retail business analysis, some pattern mining applications, such as cybersecurity or artificial intelligence (AI), often involve much longer sequences. Consequently, pruning strategies can exert a more pronounced impact on efficiency. This paper proposes a novel algorithm named HAUSP-PG, which adopts two complementary strategies to independently process pattern prefixes and remaining sequences, thereby achieving a dual pruning effect. Additionally, the proposed method calculates average utility upper bounds without requiring item sorting, significantly reducing computational time and memory consumption compared to alternative approaches. Through experiments conducted on both real-life and synthetic datasets, we demonstrate that the proposed algorithm could achieve satisfactory performance. + oai:arXiv.org:2510.04014v2 + cs.DB + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Kai Cao, Yucong Duan, Wensheng Gan + + + Machine Unlearning in Speech Emotion Recognition via Forget Set Alone + https://arxiv.org/abs/2510.04251 + arXiv:2510.04251v2 Announce Type: replace +Abstract: Speech emotion recognition aims to identify emotional states from speech signals and has been widely applied in human-computer interaction, education, healthcare, and many other fields. However, since speech data contain rich sensitive information, partial data can be required to be deleted by speakers due to privacy concerns. Current machine unlearning approaches largely depend on data beyond the samples to be forgotten. 
However, this reliance poses challenges when data redistribution is restricted and demands substantial computational resources in the context of big data. We propose a novel adversarial-attack-based approach that fine-tunes a pre-trained speech emotion recognition model using only the data to be forgotten. The experimental results demonstrate that the proposed approach can effectively remove the knowledge of the data to be forgotten from the model, while preserving high model performance on the test set for emotion recognition. + oai:arXiv.org:2510.04251v2 + cs.SD + eess.AS + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhao Ren, Rathi Adarshi Rammohan, Kevin Scheck, Tanja Schultz + + + Detailed Aerial Mapping of Photovoltaic Power Plants Through Semantically Significant Keypoints + https://arxiv.org/abs/2510.04840 + arXiv:2510.04840v3 Announce Type: replace +Abstract: An accurate and up-to-date model of a photovoltaic (PV) power plant is essential for its optimal operation and maintenance. However, such a model may not be easily available. This work introduces a novel approach for PV power plant mapping based on aerial overview images. It enables the automation of the mapping process while removing the reliance on third-party data. The presented mapping method takes advantage of the structural layout of the power plants to achieve detailed modeling down to the level of individual PV modules. The approach relies on visual segmentation of PV modules in overview images and the inference of structural information in each image, assigning modules to individual benches, rows, and columns. We identify visual keypoints related to the layout and use these to merge detections from multiple images while maintaining their structural integrity. The presented method was experimentally verified and evaluated on two different power plants. 
The final fusion of 3D positions and semantic structures results in a compact georeferenced model suitable for power plant maintenance. + oai:arXiv.org:2510.04840v3 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.26833/ijeg.1737764 + International Journal of Engineering and Geosciences, 11(2), 2026, 352-362 + Viktor Koz\'ak, Jan Chudoba, Libor P\v{r}eu\v{c}il + + + Aligning Perception, Reasoning, Modeling and Interaction: A Survey on Physical AI + https://arxiv.org/abs/2510.04978 + arXiv:2510.04978v4 Announce Type: replace +Abstract: The rapid advancement of embodied intelligence and world models has intensified efforts to integrate physical laws into AI systems, yet physical perception and symbolic physics reasoning have developed along separate trajectories without a unified bridging framework. This work provides a comprehensive overview of physical AI, establishing clear distinctions between theoretical physics reasoning and applied physical understanding while systematically examining how physics-grounded methods enhance AI's real-world comprehension across structured symbolic reasoning, embodied systems, and generative models. Through rigorous analysis of recent advances, we advocate for intelligent systems that ground learning in both physical principles and embodied reasoning processes, transcending pattern recognition toward genuine understanding of physical laws. Our synthesis envisions next-generation world models capable of explaining physical phenomena and predicting future states, advancing safe, generalizable, and interpretable AI systems. We maintain a continuously updated resource at https://github.com/AI4Phys/Awesome-AI-for-Physics. 
+ oai:arXiv.org:2510.04978v4 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Kun Xiang, Terry Jingchen Zhang, Yinya Huang, Jixi He, Zirong Liu, Yueling Tang, Ruizhe Zhou, Lijing Luo, Youpeng Wen, Xiuwei Chen, Bingqian Lin, Jianhua Han, Hang Xu, Hanhui Li, Bin Dong, Xiaodan Liang + + + "Your Doctor is Spying on You": An Analysis of Data Practices in Mobile Healthcare Applications + https://arxiv.org/abs/2510.06015 + arXiv:2510.06015v2 Announce Type: replace +Abstract: Mobile healthcare (mHealth) applications promise convenient, continuous patient-provider interaction but also introduce severe and often underexamined security and privacy risks. We present an end-to-end audit of 272 Android mHealth apps from Google Play, combining permission forensics, static vulnerability analysis, and user review mining. Our multi-tool assessment with MobSF, RiskInDroid, and OWASP Mobile Audit revealed systemic weaknesses: 26.1% request fine-grained location without disclosure, 18.3% initiate calls silently, and 73 send SMS without notice. Nearly half (49.3%) still use deprecated SHA-1 encryption, 42 transmit unencrypted data, and 6 remain vulnerable to StrandHogg 2.0. Analysis of 2.56 million user reviews found 28.5% negative or neutral sentiment, with over 553,000 explicitly citing privacy intrusions, data misuse, or operational instability. These findings demonstrate the urgent need for enforceable permission transparency, automated pre-market security vetting, and systematic adoption of secure-by-design practices to protect Protected Health Information (PHI). 
+ oai:arXiv.org:2510.06015v2 + cs.CR + cs.CY + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + In Proceedings of the IEEE BuildSEC 2025 - Building a Secure & Empowered Cyberspace + Luke Stevenson, Sanchari Das + + + A Survey on Agentic Security: Applications, Threats and Defenses + https://arxiv.org/abs/2510.06445 + arXiv:2510.06445v2 Announce Type: replace +Abstract: In this work we present the first holistic survey of the agentic security landscape, structuring the field around three fundamental pillars: Applications, Threats, and Defenses. We provide a comprehensive taxonomy of over 160 papers, explaining how agents are used in downstream cybersecurity applications, inherent threats to agentic systems, and countermeasures designed to protect them. A detailed cross-cutting analysis shows emerging trends in agent architecture while revealing critical research gaps in model and modality coverage. A complete and continuously updated list of all surveyed papers is publicly available at https://github.com/kagnlp/Awesome-Agentic-Security. + oai:arXiv.org:2510.06445v2 + cs.CL + cs.AI + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Asif Shahriar, Md Nafiu Rahman, Sadif Ahmed, Farig Sadeque, Md Rizwan Parvez + + + TAPAS: Datasets for Learning the Learning with Errors Problem + https://arxiv.org/abs/2510.08797 + arXiv:2510.08797v2 Announce Type: replace +Abstract: AI-powered attacks on Learning with Errors (LWE), an important hard math problem in post-quantum cryptography, rival or outperform "classical" attacks on LWE under certain parameter settings. Despite the promise of this approach, a dearth of accessible data limits AI practitioners' ability to study and improve these attacks. Creating LWE data for AI model training is time- and compute-intensive and requires significant domain expertise. 
To fill this gap and accelerate AI research on LWE attacks, we propose the TAPAS datasets, a Toolkit for Analysis of Post-quantum cryptography using AI Systems. These datasets cover several LWE settings and can be used off-the-shelf by AI practitioners to prototype new approaches to cracking LWE. This work documents TAPAS dataset creation, establishes attack performance baselines, and lays out directions for future work. + oai:arXiv.org:2510.08797v2 + cs.LG + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Eshika Saxena, Alberto Alfarano, Fran\c{c}ois Charton, Emily Wenger, Kristin Lauter + + + LiveOIBench: Can Large Language Models Outperform Human Contestants in Informatics Olympiads? + https://arxiv.org/abs/2510.09595 + arXiv:2510.09595v2 Announce Type: replace +Abstract: Competitive programming problems increasingly serve as valuable benchmarks to evaluate the coding capabilities of large language models (LLMs) due to their complexity and ease of verification. Yet, current coding benchmarks face limitations such as lack of exceptionally challenging problems, insufficient test case coverage, reliance on online platform APIs that limit accessibility. To address these issues, we introduce LiveOIBench, a comprehensive benchmark featuring 403 expert-curated Olympiad-level competitive programming problems, each with an average of 60 expert-designed test cases. The problems are sourced directly from 72 official contests of 14 Informatics Olympiads in different regions conducted between 2023 and 2025. 
LiveOIBench distinguishes itself through four key features: (1) meticulously curated high-quality tasks with detailed subtask rubrics and extensive private test cases; (2) direct integration of elite contestant performance data to enable informative comparison against top-performing humans; (3) planned continuous, contamination-free updates from newly released Olympiad problems; and (4) a self-contained evaluation system facilitating offline and easy-to-reproduce assessments. Benchmarking 34 popular general-purpose and reasoning LLMs, we find that GPT-5 achieves a notable 81.76th percentile, a strong result that nonetheless falls short of top human contestants, who usually place above 90th. In contrast, among open-weight reasoning models, GPT-OSS-120B achieves only a 60th percentile, underscoring significant capability disparities from frontier closed models. Detailed analyses indicate that robust reasoning models prioritize precise problem analysis over excessive exploration, suggesting future models should emphasize structured analysis and minimize unnecessary exploration. All data, code, and leaderboard results are publicly available on our website. + oai:arXiv.org:2510.09595v2 + cs.AI + cs.CL + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Kaijian Zou, Aaron Xiong, Yunxiang Zhang, Frederick Zhang, Yueqi Ren, Jirong Yang, Ayoung Lee, Shitanshu Bhushan, Lu Wang + + + Targeted Sequential Pattern Mining with High Average Utility + https://arxiv.org/abs/2510.10115 + arXiv:2510.10115v2 Announce Type: replace +Abstract: Incorporating utility into targeted pattern mining can address the practical limitations of traditional frequency-based approaches. However, utility-based methods often suffer from generating a large number of long and complicated sequences. To improve pattern relevance and interpretability, average utility provides a more balanced metric by considering both utility and sequence length. 
Moreover, incorporating user-defined query targets into the mining process enhances usability and interactivity by retaining only patterns containing user-specified goals. To address challenges related to mining efficiency in large-scale, long-sequence datasets, this study introduces average utility into targeted sequential pattern mining. A novel algorithm, TAUSQ-PG, is designed to find targeted high average utility sequential patterns. It incorporates efficient filtering and pruning strategies, tighter upper bound models, as well as novel specialized evaluation metrics and query flags tailored to this task. Extensive comparative experiments on different datasets demonstrate that TAUSQ-PG effectively controls the candidate set size, thereby reducing redundant sequence generation and significantly improving runtime and memory efficiency. + oai:arXiv.org:2510.10115v2 + cs.DB + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Kai Cao, Yucong Duan, Wensheng Gan + + + GraphTracer: Graph-Guided Failure Tracing in LLM Agents for Robust Multi-Turn Deep Search + https://arxiv.org/abs/2510.10581 + arXiv:2510.10581v2 Announce Type: replace +Abstract: Multi-agent systems powered by Large Language Models excel at complex tasks through coordinated collaboration, yet they face high failure rates in multi-turn deep search scenarios. Existing temporal attribution methods struggle to accurately diagnose root causes, particularly when errors propagate across multiple agents. Attempts to automate failure attribution by analyzing action sequences remain ineffective due to their inability to account for information dependencies that span agents. This paper identifies two core challenges: \textit{(i) distinguishing symptoms from root causes in multi-agent error propagation}, and \textit{(ii) tracing information dependencies beyond temporal order}. 
To address these issues, we introduce \textbf{GraphTracer}, a framework that redefines failure attribution through information flow analysis. GraphTracer constructs Information Dependency Graphs (IDGs) to explicitly capture how agents reference and build on prior outputs. It localizes root causes by tracing through these dependency structures instead of relying on temporal sequences. GraphTracer also uses graph-aware synthetic data generation to target critical nodes, creating realistic failure scenarios. Evaluations on the Who\&When benchmark and integration into production systems demonstrate that GraphTracer-8B achieves up to 18.18\% higher attribution accuracy compared to state-of-the-art models and enables 4.8\% to 14.2\% performance improvements in deployed multi-agent frameworks, establishing a robust solution for multi-agent system debugging. + oai:arXiv.org:2510.10581v2 + cs.GR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Heng Zhang, Yuling Shi, Xiaodong Gu, Haochen You, Zijian Zhang, Lubin Gan, Yilei Yuan, Jin Huang + + + D3MAS: Decompose, Deduce, and Distribute for Enhanced Knowledge Sharing in Multi-Agent Systems + https://arxiv.org/abs/2510.10585 + arXiv:2510.10585v2 Announce Type: replace +Abstract: Multi-agent systems powered by large language models exhibit strong capabilities in collaborative problem-solving. However, these systems suffer from substantial knowledge redundancy. Agents duplicate efforts in retrieval and reasoning processes. This inefficiency stems from a deeper issue: current architectures lack mechanisms to ensure agents share minimal sufficient information at each operational stage. Empirical analysis reveals an average knowledge duplication rate of 47.3\% across agent communications. We propose D3MAS (Decompose, Deduce, and Distribute), a hierarchical coordination framework addressing redundancy through structural design rather than explicit optimization. 
The framework organizes collaboration across three coordinated layers. Task decomposition filters irrelevant sub-problems early. Collaborative reasoning captures complementary inference paths across agents. Distributed memory provides access to non-redundant knowledge. These layers coordinate through structured message passing in a unified heterogeneous graph. This cross-layer alignment ensures information remains aligned with actual task needs. Experiments on four challenging datasets show that D3MAS consistently improves reasoning accuracy by 8.7\% to 15.6\% and reduces knowledge redundancy by 46\% on average. + oai:arXiv.org:2510.10585v2 + cs.GR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Heng Zhang, Yuling Shi, Xiaodong Gu, Haochen You, Zijian Zhang, Lubin Gan, Yilei Yuan, Jin Huang + + + HyperAgent: Leveraging Hypergraphs for Topology Optimization in Multi-Agent Communication + https://arxiv.org/abs/2510.10611 + arXiv:2510.10611v2 Announce Type: replace +Abstract: Recent advances in large language model-powered multi-agent systems have demonstrated remarkable collective intelligence through effective communication. However, existing approaches face two primary challenges: (i) \textit{Ineffective group collaboration modeling}, as they rely on pairwise edge representations in graph structures, limiting their ability to capture relationships among multiple agents; and (ii) \textit{Limited task-adaptiveness in communication topology design}, leading to excessive communication cost for simple tasks and insufficient coordination for complex scenarios. These issues restrict the scalability and practical deployment of adaptive collaboration frameworks. To address these challenges, we propose \textbf{HyperAgent}, a hypergraph-based framework that optimizes communication topologies and effectively captures group collaboration patterns using direct hyperedge representations. 
Unlike edge-based approaches, HyperAgent uses hyperedges to link multiple agents within the same subtask and employs hypergraph convolutional layers to achieve one-step information aggregation in collaboration groups. Additionally, it incorporates a variational autoencoder framework with sparsity regularization to dynamically adjust hypergraph topologies based on task complexity. Experiments highlight the superiority of HyperAgent in both performance and efficiency. For instance, on GSM8K, HyperAgent achieves 95.07\% accuracy while reducing token consumption by 25.33\%, demonstrating the potential of hypergraph-based optimization for multi-agent communication. + oai:arXiv.org:2510.10611v2 + cs.MA + cs.GR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Heng Zhang, Yuling Shi, Xiaodong Gu, Zijian Zhang, Haochen You, Lubin Gan, Yilei Yuan, Jin Huang + + + Zephyrus: Scaling Gateways Beyond the Petabit-Era with DPU-Augmented Hierarchical Co-Offloading + https://arxiv.org/abs/2510.11043 + arXiv:2510.11043v2 Announce Type: replace +Abstract: Operating at petabit-scale, ByteDance's cloud gateways are deployed at critical aggregation points to orchestrate a wide array of business traffic. However, this massive scale imposes significant resource pressure on our previous-generation cloud gateways, rendering them unsustainable in the face of ever-growing cloud-network traffic. As the DPU market rapidly expands, we see a promising path to meet our escalating business traffic demands by integrating DPUs with our established Tofino-based gateways. DPUs augment these gateways with substantially larger table capacities and richer programmability without compromising previously low-latency and high-throughput forwarding. Despite compelling advantages, the practical integration of DPUs into cloud gateways remains unexplored, primarily due to underlying challenges. 
In this paper, we present Zephyrus, a production-scale gateway built upon a unified P4 pipeline spanning high-performance Tofino and feature-rich DPUs, which successfully overcomes these challenges. We further introduce a hierarchical co-offloading architecture (HLCO) to orchestrate traffic flow within this heterogeneous gateway, achieving > 99% hardware offloading while retaining software fallback paths for complex operations. Zephyrus outperforms LuoShen (NSDI '24) with 33% higher throughput, and our evaluation further indicates 21% lower power consumption and 14% lower hardware cost. Against the FPGA-based system Albatross (SIGCOMM '25), it doubles the throughput at a substantially lower Total Cost of Ownership (TCO), showcasing its superior performance-per-dollar. Beyond these performance gains, we also share key lessons from several years of developing and operating Zephyrus at production scale. We believe these insights provide valuable references for researchers and practitioners designing performant cloud gateways. + oai:arXiv.org:2510.11043v2 + cs.NI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Yuemeng Xu, Haoran Chen, Jiarui Guo, Mingwei Cui, Qiuheng Yin, Cheng Dong, Daxiang Kang, Xian Wu, Chenmin Sun, Peng He, Yang Gao, Lirong Lai, Kai Wang, Hongyu Wu, Tong Yang, Xiyun Xu + + + Network-Optimised Spiking Neural Network (NOS) Scheduling for 6G O-RAN: Spectral Margin and Delay-Tail Control + https://arxiv.org/abs/2510.11291 + arXiv:2510.11291v2 Announce Type: replace +Abstract: This work presents a Network-Optimised Spiking (NOS) delay-aware scheduler for 6G radio access. The scheme couples a bounded two-state kernel to a clique-feasible proportional-fair (PF) grant head: the excitability state acts as a finite-buffer proxy, the recovery state suppresses repeated grants, and neighbour pressure is injected along the interference graph via delayed spikes. 
A small-signal analysis yields a delay-dependent threshold $k_\star(\Delta)$ and a spectral margin $\delta = k_\star(\Delta) - gH\rho(W)$ that compress topology, controller gain, and delay into a single design parameter. Under light assumptions on arrivals, we prove geometric ergodicity for $\delta>0$ and derive sub-Gaussian backlog and delay tail bounds with exponents proportional to $\delta$. A numerical study, aligned with the analysis and a DU compute budget, compares NOS with PF and delayed backpressure (BP) across interference topologies over a $5$--$20$\,ms delay sweep. With a single gain fixed at the worst spectral radius, NOS sustains higher utilisation and a smaller 99.9th-percentile delay while remaining clique-feasible on integer PRBs. + oai:arXiv.org:2510.11291v2 + cs.NI + cs.IT + cs.LG + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Muhammad Bilal, Xiaolong Xu + + + AndesVL Technical Report: An Efficient Mobile-side Multimodal Large Language Model + https://arxiv.org/abs/2510.11496 + arXiv:2510.11496v3 Announce Type: replace +Abstract: In recent years, while cloud-based MLLMs such as QwenVL, InternVL, GPT-4o, Gemini, and Claude Sonnet have demonstrated outstanding performance with enormous model sizes reaching hundreds of billions of parameters, they significantly surpass the limitations in memory, power consumption, and computing capacity of edge devices such as mobile phones. This paper introduces AndesVL, a suite of mobile-side MLLMs with 0.6B to 4B parameters based on Qwen3's LLM and various visual encoders. 
We comprehensively outline the model architectures, training pipeline, and training data of AndesVL, which achieves first-tier performance across a wide range of open-source benchmarks, including fields such as text-rich image understanding, reasoning and math, multi-image comprehension, general VQA, hallucination mitigation, multilingual understanding, and GUI-related tasks when compared with state-of-the-art models of a similar scale. Furthermore, we introduce a 1+N LoRA architecture alongside a Quantization-Aware LoRA Fine-Tuning (QALFT) framework to facilitate efficient task adaptation and model compression during mobile-side deployment of AndesVL. Moreover, utilizing our cache eviction algorithm -- OKV -- along with customized speculative decoding and compression strategies, we achieve a 6.7x peak decoding speedup ratio, up to 30.9% memory reduction, and 1.8 bits-per-weight when deploying AndesVL-4B on MediaTek Dimensity 9500 chips. We release all models on https://huggingface.co/OPPOer. + oai:arXiv.org:2510.11496v3 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Zhiwei Jin, Xiaohui Song, Nan Wang, Yafei Liu, Chao Li, Xin Li, Ruichen Wang, Zhihao Li, Qi Qi, Long Cheng, Dongze Hao, Quanlong Zheng, Yanhao Zhang, Haobo Ji, Jian Ma, Zhitong Zheng, Zhenyi Lin, Haolin Deng, Xin Zou, Xiaojie Yin, Ruilin Wang, Liankai Cai, Haijing Liu, Yuqing Qiu, Ke Chen, Zixian Li, Chi Xie, Huafei Li, Chenxing Li, Chuangchuang Wang, Kai Tang, Zhiguang Zhu, Kai Tang, Wenmei Gao, Rui Wang, Jun Wu, Chao Liu, Qin Xie, Chen Chen, Haonan Lu + + + GraphShaper: Geometry-aware Alignment for Improving Transfer Learning in Text-Attributed Graphs + https://arxiv.org/abs/2510.12085 + arXiv:2510.12085v2 Announce Type: replace +Abstract: Graph foundation models represent a transformative paradigm for learning transferable representations across diverse graph domains. 
Recent methods leverage large language models to unify graph and text modalities into a shared representation space using contrastive learning. However, systematic evaluations reveal significant performance degradation at structural boundaries where distinct topological patterns converge, with accuracy losses exceeding 20 percentage points. This issue arises from a key limitation: current methods assume all graph structures can be encoded within a single Euclidean space. In reality, tree structures require hyperbolic geometry to preserve hierarchical branching, while cyclic patterns depend on spherical geometry for closure properties. At structural boundaries, nodes experience conflicting geometric constraints that uniform encoding spaces cannot resolve. This raises a crucial challenge: \textbf{Can alignment frameworks be designed to respect the intrinsic geometric diversity of graph structures?} We introduce \textbf{GraphShaper}, a geometry-aware framework that enhances graph encoding through multi-geometric specialization. Our approach employs expert networks tailored to different geometric spaces, dynamically computing fusion weights to adaptively integrate geometric properties based on local structural characteristics. This adaptive fusion preserves structural integrity before alignment with text embeddings. Extensive experiments demonstrate that GraphShaper achieves 9.47\% accuracy improvements on citation networks and 7.63\% on social networks in zero-shot settings. 
+ oai:arXiv.org:2510.12085v2 + cs.LG + cs.GR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Heng Zhang, Tianyi Zhang, Yuling Shi, Xiaodong Gu, Yaomin Shen, Haochen You, Zijian Zhang, Yilei Yuan, Jin Huang + + + A Survey of Vibe Coding with Large Language Models + https://arxiv.org/abs/2510.12399 + arXiv:2510.12399v2 Announce Type: replace +Abstract: The advancement of large language models (LLMs) has catalyzed a paradigm shift from code generation assistance to autonomous coding agents, enabling a novel development methodology termed "Vibe Coding" where developers validate AI-generated implementations through outcome observation rather than line-by-line code comprehension. Despite its transformative potential, the effectiveness of this emergent paradigm remains under-explored, with empirical evidence revealing unexpected productivity losses and fundamental challenges in human-AI collaboration. To address this gap, this survey provides the first comprehensive and systematic review of Vibe Coding with large language models, establishing both theoretical foundations and practical frameworks for this transformative development approach. Drawing from systematic analysis of over 1000 research papers, we survey the entire vibe coding ecosystem, examining critical infrastructure components including LLMs for coding, LLM-based coding agents, development environments for coding agents, and feedback mechanisms. We first introduce Vibe Coding as a formal discipline by formalizing it through a Constrained Markov Decision Process that captures the dynamic triadic relationship among human developers, software projects, and coding agents. 
Building upon this theoretical foundation, we then synthesize existing practices into five distinct development models: Unconstrained Automation, Iterative Conversational Collaboration, Planning-Driven, Test-Driven, and Context-Enhanced Models, thus providing the first comprehensive taxonomy in this domain. Critically, our analysis reveals that successful Vibe Coding depends not merely on agent capabilities but on systematic context engineering, well-established development environments, and human-agent collaborative development models. + oai:arXiv.org:2510.12399v2 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Yuyao Ge, Lingrui Mei, Zenghao Duan, Tianhao Li, Yujia Zheng, Yiwei Wang, Lexin Wang, Jiayu Yao, Tianyu Liu, Yujun Cai, Baolong Bi, Fangda Guo, Jiafeng Guo, Shenghua Liu, Xueqi Cheng + + + DiffEM: Learning from Corrupted Data with Diffusion Models via Expectation Maximization + https://arxiv.org/abs/2510.12691 + arXiv:2510.12691v3 Announce Type: replace +Abstract: Diffusion models have emerged as powerful generative priors for high-dimensional inverse problems, yet learning them when only corrupted or noisy observations are available remains challenging. In this work, we propose a new method for training diffusion models with Expectation-Maximization (EM) from corrupted data. Our proposed method, DiffEM, utilizes conditional diffusion models to reconstruct clean data from observations in the E-step, and then uses the reconstructed data to refine the conditional diffusion model in the M-step. Theoretically, we provide monotonic convergence guarantees for the DiffEM iteration, assuming appropriate statistical conditions. We demonstrate the effectiveness of our approach through experiments on various image reconstruction tasks. 
+ oai:arXiv.org:2510.12691v3 + cs.LG + cs.AI + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Danial Hosseintabar, Fan Chen, Giannis Daras, Antonio Torralba, Constantinos Daskalakis + + + We Can Hide More Bits: The Unused Watermarking Capacity in Theory and in Practice + https://arxiv.org/abs/2510.12812 + arXiv:2510.12812v2 Announce Type: replace +Abstract: Despite rapid progress in deep learning-based image watermarking, the capacity of current robust methods remains limited to the scale of only a few hundred bits. Such plateauing progress raises the question: How far are we from the fundamental limits of image watermarking? To this end, we present an analysis that establishes upper bounds on the message-carrying capacity of images under PSNR and linear robustness constraints. Our results indicate theoretical capacities are orders of magnitude larger than what current models achieve. Our experiments show this gap between theoretical and empirical performance persists, even in minimal, easily analysable setups. This suggests a fundamental problem. As proof that larger capacities are indeed possible, we train ChunkySeal, a scaled-up version of VideoSeal, which increases capacity 4 times to 1024 bits, all while preserving image quality and robustness. These findings demonstrate modern methods have not yet saturated watermarking capacity, and that significant opportunities for architectural innovation and training strategies remain. 
+ oai:arXiv.org:2510.12812v2 + cs.CR + cs.CY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Aleksandar Petrov, Pierre Fernandez, Tom\'a\v{s} Sou\v{c}ek, Hady Elsahar + + + RAID: Refusal-Aware and Integrated Decoding for Jailbreaking LLMs + https://arxiv.org/abs/2510.13901 + arXiv:2510.13901v2 Announce Type: replace +Abstract: Large language models (LLMs) achieve impressive performance across diverse tasks yet remain vulnerable to jailbreak attacks that bypass safety mechanisms. We present RAID (Refusal-Aware and Integrated Decoding), a framework that systematically probes these weaknesses by crafting adversarial suffixes that induce restricted content while preserving fluency. RAID relaxes discrete tokens into continuous embeddings and optimizes them with a joint objective that (i) encourages restricted responses, (ii) incorporates a refusal-aware regularizer to steer activations away from refusal directions in embedding space, and (iii) applies a coherence term to maintain semantic plausibility and non-redundancy. After optimization, a critic-guided decoding procedure maps embeddings back to tokens by balancing embedding affinity with language-model likelihood. This integration yields suffixes that are both effective in bypassing defenses and natural in form. Experiments on multiple open-source LLMs show that RAID achieves higher attack success rates with fewer queries and lower computational cost than recent white-box and black-box baselines. These findings highlight the importance of embedding-space regularization for understanding and mitigating LLM jailbreak vulnerabilities. + oai:arXiv.org:2510.13901v2 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Tuan T. Nguyen, John Le, Thai T. 
Vu, Willy Susilo, Heath Cooper + + + QDepth-VLA: Quantized Depth Prediction as Auxiliary Supervision for Vision-Language-Action Models + https://arxiv.org/abs/2510.14836 + arXiv:2510.14836v2 Announce Type: replace +Abstract: Spatial perception and reasoning are crucial for Vision-Language-Action (VLA) models to accomplish fine-grained manipulation tasks. However, existing approaches often lack the ability to understand and reason over the essential 3D structures necessary for precise control. To address this limitation, we propose QDepth-VLA, a general framework that augments VLA models with an auxiliary depth prediction task. A dedicated depth expert is designed to predict quantized latent tokens of depth maps obtained from a VQ-VAE encoder, enabling the model to learn depth-aware representations that capture critical geometric cues. Experimental results on the simulation benchmarks and real-world tasks demonstrate that QDepth-VLA yields strong spatial reasoning and competitive performance on manipulation tasks. + oai:arXiv.org:2510.14836v2 + cs.CV + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yixuan Li, Yuhui Chen, Mingcai Zhou, Haoran Li, Zhengtao Zhang, Dongbin Zhao + + + SSL4RL: Revisiting Self-supervised Learning as Intrinsic Reward for Visual-Language Reasoning + https://arxiv.org/abs/2510.16416 + arXiv:2510.16416v2 Announce Type: replace +Abstract: Vision-language models (VLMs) have shown remarkable abilities by integrating large language models with visual inputs. However, they often fail to utilize visual evidence adequately, either depending on linguistic priors in vision-centric tasks or resorting to textual shortcuts during reasoning. Although reinforcement learning (RL) can align models with desired behaviors, its application to VLMs has been hindered by the lack of scalable and reliable reward mechanisms. 
To overcome this challenge, we propose SSL4RL, a novel framework that leverages self-supervised learning (SSL) tasks as a source of verifiable rewards for RL-based fine-tuning. Our approach reformulates SSL objectives-such as predicting image rotation or reconstructing masked patches-into dense, automatic reward signals, eliminating the need for human preference data or unreliable AI evaluators. Experiments show that SSL4RL substantially improves performance on both vision-centric and vision-language reasoning benchmarks. Furthermore, through systematic ablations, we identify key factors-such as task difficulty, model scale, and semantic alignment with the target domain-that influence the effectiveness of SSL4RL tasks, offering new design principles for future work. We also demonstrate the framework's generality by applying it to graph learning, where it yields significant gains. SSL4RL establishes a versatile and effective paradigm for aligning multimodal models using verifiable, self-supervised objectives. + oai:arXiv.org:2510.16416v2 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Xiaojun Guo, Runyu Zhou, Yifei Wang, Qi Zhang, Chenheng Zhang, Stefanie Jegelka, Xiaohan Wang, Jiajun Chai, Guojun Yin, Wei Lin, Yisen Wang + + + LexChain: Modeling Legal Reasoning Chains for Chinese Tort Case Analysis + https://arxiv.org/abs/2510.17602 + arXiv:2510.17602v2 Announce Type: replace +Abstract: Legal reasoning is a fundamental component of legal analysis and decision-making. Existing computational approaches to legal reasoning predominantly rely on generic reasoning frameworks such as syllogism, which do not comprehensively examine the nuanced process of legal reasoning. Moreover, current research has largely focused on criminal cases, with insufficient modeling for civil cases. In this work, we present a novel framework to explicitly model legal reasoning in the analysis of Chinese tort-related civil cases. 
We first operationalize the legal reasoning process in tort analysis into the three-module LexChain framework, with each module consisting of multiple finer-grained sub-steps. Informed by the LexChain framework, we introduce the task of tort legal reasoning and construct an evaluation benchmark to systematically assess the critical steps within analytical reasoning chains for tort analysis. Leveraging this benchmark, we evaluate existing large language models for their legal reasoning ability in civil tort contexts. Our results indicate that current models still fall short in accurately handling crucial elements of tort legal reasoning. Furthermore, we introduce several baseline approaches that explicitly incorporate LexChain-style reasoning through prompting or post-training. The proposed baselines achieve significant improvements in tort-related legal reasoning and generalize well to related legal analysis tasks, demonstrating the value of explicitly modeling legal reasoning chains to enhance the reasoning capabilities of language models. + oai:arXiv.org:2510.17602v2 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Huiyuan Xie, Chenyang Li, Huining Zhu, Chubin Zhang, Yuxiao Ye, Zhenghao Liu, Zhiyuan Liu + + + LLM-as-a-Prophet: Understanding Predictive Intelligence with Prophet Arena + https://arxiv.org/abs/2510.17638 + arXiv:2510.17638v2 Announce Type: replace +Abstract: Forecasting is not only a fundamental intellectual pursuit but also is of significant importance to societal systems such as finance and economics. With the rapid advances of large language models (LLMs) trained on Internet-scale data, it raises the promise of employing LLMs to forecast real-world future events, an emerging paradigm we call "LLM-as-a-Prophet". This paper systematically investigates such predictive intelligence of LLMs. 
To this end, we build Prophet Arena, a general evaluation benchmark that continuously collects live forecasting tasks and decomposes each task into distinct pipeline stages, in order to support our controlled and large-scale experimentation. Our comprehensive evaluation reveals that many LLMs already exhibit impressive forecasting capabilities, reflected in, e.g., their small calibration errors, consistent prediction confidence and promising market returns. However, we also uncover key bottlenecks towards achieving superior predictive intelligence via LLM-as-a-Prophet, such as LLMs' inaccurate event recalls, misunderstanding of data sources and slower information aggregation compared to markets when resolution nears. + oai:arXiv.org:2510.17638v2 + cs.AI + cs.CL + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Qingchuan Yang, Simon Mahns, Sida Li, Anri Gu, Jibang Wu, Haifeng Xu + + + General transformation neural networks: A class of parametrized functions for high-dimensional function approximation + https://arxiv.org/abs/2510.20142 + arXiv:2510.20142v2 Announce Type: replace +Abstract: We propose a novel class of neural network-like parametrized functions, i.e., general transformation neural networks (GTNNs), for high-dimensional approximation. Conventional deep neural networks sometimes perform less accurately in approximation problems under gradient descent training, especially when the target function is oscillatory. To improve accuracy, we generalize the affine transformation of the abstract neuron to more general functions that serve as complex shape functions and have greater capacity. Specifically, we discuss three types of GTNNs in detail: the cubic, quadratic, and trigonometric transformation neural networks (CTNNs, QTNNs, and TTNNs). 
We perform an approximation error analysis of GTNNs, presenting their universal approximation properties for continuous functions and error bounds for smooth and Barron-type functions. Several numerical examples of regression problems and partial differential equations are presented, demonstrating that CTNNs/QTNNs/TTNNs achieve higher accuracy and greater robustness than conventional fully connected neural networks. + oai:arXiv.org:2510.20142v2 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xiaoyang Wang, Yiqi Gu + + + PartNeXt: A Next-Generation Dataset for Fine-Grained and Hierarchical 3D Part Understanding + https://arxiv.org/abs/2510.20155 + arXiv:2510.20155v2 Announce Type: replace +Abstract: Understanding objects at the level of their constituent parts is fundamental to advancing computer vision, graphics, and robotics. While datasets like PartNet have driven progress in 3D part understanding, their reliance on untextured geometries and expert-dependent annotation limits scalability and usability. We introduce PartNeXt, a next-generation dataset addressing these gaps with over 23,000 high-quality, textured 3D models annotated with fine-grained, hierarchical part labels across 50 categories. We benchmark PartNeXt on two tasks: (1) class-agnostic part segmentation, where state-of-the-art methods (e.g., PartField, SAMPart3D) struggle with fine-grained and leaf-level parts, and (2) 3D part-centric question answering, a new benchmark for 3D-LLMs that reveals significant gaps in open-vocabulary part grounding. Additionally, training Point-SAM on PartNeXt yields substantial gains over PartNet, underscoring the dataset's superior quality and diversity. By combining scalable annotation, texture-aware labels, and multi-task evaluation, PartNeXt opens new avenues for research in structured 3D understanding. 
+ oai:arXiv.org:2510.20155v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Penghao Wang, Yiyang He, Xin Lv, Yukai Zhou, Lan Xu, Jingyi Yu, Jiayuan Gu + + + TOMCAT: Test-time Comprehensive Knowledge Accumulation for Compositional Zero-Shot Learning + https://arxiv.org/abs/2510.20162 + arXiv:2510.20162v2 Announce Type: replace +Abstract: Compositional Zero-Shot Learning (CZSL) aims to recognize novel attribute-object compositions based on the knowledge learned from seen ones. Existing methods suffer from performance degradation caused by the distribution shift of label space at test time, which stems from the inclusion of unseen compositions recombined from attributes and objects. To overcome the challenge, we propose a novel approach that accumulates comprehensive knowledge in both textual and visual modalities from unsupervised data to update multimodal prototypes at test time. Building on this, we further design an adaptive update weight to control the degree of prototype adjustment, enabling the model to flexibly adapt to distribution shift during testing. Moreover, a dynamic priority queue is introduced that stores high-confidence images to acquire visual knowledge from historical images for inference. Considering the semantic consistency of multimodal knowledge, we align textual and visual prototypes by multimodal collaborative representation learning. Extensive experiments indicate that our approach achieves state-of-the-art performance on four benchmark datasets under both closed-world and open-world settings. Code will be available at https://github.com/xud-yan/TOMCAT . 
+ oai:arXiv.org:2510.20162v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Xudong Yan, Songhe Feng + + + The Reasoning Lingua Franca: A Double-Edged Sword for Multilingual AI + https://arxiv.org/abs/2510.20647 + arXiv:2510.20647v2 Announce Type: replace +Abstract: Large Reasoning Models (LRMs) achieve strong performance on mathematical, scientific, and other question-answering tasks, but their multilingual reasoning abilities remain underexplored. When presented with non-English questions, LRMs often default to reasoning in English, raising concerns about interpretability and the handling of linguistic and cultural nuances. We systematically compare an LRM's reasoning in English versus the language of the question. Our evaluation spans two tasks: MGSM and GPQA Diamond. Beyond measuring answer accuracy, we also analyze cognitive attributes in the reasoning traces. We find that English reasoning traces exhibit a substantially higher presence of these cognitive behaviors, and that reasoning in English generally yields higher final-answer accuracy, with the performance gap increasing as tasks become more complex. However, this English-centric strategy is susceptible to a key failure mode - getting "Lost in Translation," where translation steps lead to errors that would have been avoided by reasoning in the question's language. + oai:arXiv.org:2510.20647v2 + cs.CL + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Alan Saji, Raj Dabre, Anoop Kunchukuttan, Ratish Puduppully + + + Error Estimates for Sparse Tensor Products of B-spline Approximation Spaces + https://arxiv.org/abs/2510.21517 + arXiv:2510.21517v3 Announce Type: replace +Abstract: This work introduces and analyzes B-spline approximation spaces defined on general geometric domains obtained through a mapping from a parameter domain. 
These spaces are constructed as sparse-grid tensor products of univariate spaces in the parameter domain and are mapped to the physical domain via a geometric parametrization. Both the univariate approximation spaces and the geometric mapping are built using maximally smooth B-splines. We construct two such spaces, employing either the sparse-grid combination technique or the hierarchical subspace decomposition of sparse-grid tensor products, and we prove their mathematical equivalence. Furthermore, we derive approximation error estimates and inverse inequalities that highlight the advantages of sparse-grid tensor products. Specifically, under suitable regularity assumptions on the solution, these spaces achieve the same approximation order as standard tensor product spaces while using significantly fewer degrees of freedom. Additionally, our estimates indicate that, in the case of non-tensor-product domains, stronger regularity assumptions on the solution -- particularly concerning isotropic (non-mixed) derivatives -- are required to achieve optimal convergence rates compared to sparse-grid methods defined on tensor-product domains. + oai:arXiv.org:2510.21517v3 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Cl\'ement Guillet + + + VietLyrics: A Large-Scale Dataset and Models for Vietnamese Automatic Lyrics Transcription + https://arxiv.org/abs/2510.22295 + arXiv:2510.22295v2 Announce Type: replace +Abstract: Automatic Lyrics Transcription (ALT) for Vietnamese music presents unique challenges due to its tonal complexity and dialectal variations, but remains largely unexplored due to the lack of a dedicated dataset. Therefore, we curated the first large-scale Vietnamese ALT dataset (VietLyrics), comprising 647 hours of songs with line-level aligned lyrics and metadata to address these issues. 
Our evaluation of current ASR-based approaches reveals significant limitations, including frequent transcription errors and hallucinations in non-vocal segments. To improve performance, we fine-tuned Whisper models on the VietLyrics dataset, achieving superior results compared to existing multilingual ALT systems, including LyricWhiz. We publicly release VietLyrics and our models, aiming to advance Vietnamese music computing research while demonstrating the potential of this approach for ALT in low-resource languages and music. + oai:arXiv.org:2510.22295v2 + cs.AI + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Quoc Anh Nguyen, Bernard Cheng, Kelvin Soh + + + WebATLAS: An LLM Agent with Experience-Driven Memory and Action Simulation + https://arxiv.org/abs/2510.22732 + arXiv:2510.22732v2 Announce Type: replace +Abstract: Large Language Model (LLM) web agents often struggle with long-horizon web navigation and web task completion in new websites, producing inefficient action sequences unless fine-tuned on environment-specific data. We show that experience-driven memory, combined with look-ahead action simulation, is sufficient for LLM agents to adapt to unseen web environments by remembering past failures and predicting the consequences of future actions. We introduce WebATLAS (Actor-Critic Task-completion with Look-ahead Action Simulation), a memory-augmented LLM web agent that learns a lightweight internal model of the environment from interaction experience and performs hypothetical action rollouts before acting in the real world. WebATLAS builds a persistent cognitive map via curiosity-driven exploration, stores interaction outcomes as experience-based memory, and evaluates candidate actions in cognitive space using a planner--simulator--critic loop. This enables the agent to reuse past experience, avoid previously unsuccessful behaviors, and generate more efficient plans. 
We evaluate WebATLAS on the WebArena-Lite benchmark for autonomous web navigation and demonstrate a success rate of 63%, outperforming the previous state-of-the-art at 53.9%. Unlike previous systems, our modular architecture requires no website-specific LLM fine-tuning. Ablation studies confirm that experience-driven memory, look-ahead action simulation, and hierarchical replanning play complementary roles in enabling robust, training-free web agents. + oai:arXiv.org:2510.22732v2 + cs.LG + cs.AI + cs.CL + cs.IR + cs.MA + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Jiali Cheng, Anjishnu Kumar, Roshan Lal, Rishi Rajasekaran, Hani Ramezani, Omar Zia Khan, Oleg Rokhlenko, Sunny Chiu-Webster, Gang Hua, Hadi Amiri + + + Far from the Shallow: Brain-Predictive Reasoning Embedding through Residual Disentanglement + https://arxiv.org/abs/2510.22860 + arXiv:2510.22860v2 Announce Type: replace +Abstract: Understanding how the human brain progresses from processing simple linguistic inputs to performing high-level reasoning is a fundamental challenge in neuroscience. While modern large language models (LLMs) are increasingly used to model neural responses to language, their internal representations are highly "entangled," mixing information about lexicon, syntax, meaning, and reasoning. This entanglement biases conventional brain encoding analyses toward linguistically shallow features (e.g., lexicon and syntax), making it difficult to isolate the neural substrates of cognitively deeper processes. Here, we introduce a residual disentanglement method that computationally isolates these components. By first probing an LM to identify feature-specific layers, our method iteratively regresses out lower-level representations to produce four nearly orthogonal embeddings for lexicon, syntax, meaning, and, critically, reasoning. 
We used these disentangled embeddings to model intracranial (ECoG) brain recordings from neurosurgical patients listening to natural speech. We show that: 1) This isolated reasoning embedding exhibits unique predictive power, accounting for variance in neural activity not explained by other linguistic features and even extending to the recruitment of visual regions beyond classical language areas. 2) The neural signature for reasoning is temporally distinct, peaking later (~350-400ms) than signals related to lexicon, syntax, and meaning, consistent with its position atop a processing hierarchy. 3) Standard, non-disentangled LLM embeddings can be misleading, as their predictive success is primarily attributable to linguistically shallow features, masking the more subtle contributions of deeper cognitive processing. + oai:arXiv.org:2510.22860v2 + cs.CL + q-bio.NC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Linyang He, Tianjun Zhong, Richard Antonello, Gavin Mischler, Micah Goldblum, Nima Mesgarani + + + EIRES:Training-free AI-Generated Image Detection via Edit-Induced Reconstruction Error Shift + https://arxiv.org/abs/2510.25141 + arXiv:2510.25141v2 Announce Type: replace +Abstract: Diffusion models have recently achieved remarkable photorealism, making it increasingly difficult to distinguish real images from generated ones, raising significant privacy and security concerns. In response, we present a key finding: structural edits enhance the reconstruction of real images while degrading that of generated images, creating a distinctive edit-induced reconstruction error shift. This asymmetric shift enhances the separability between real and generated images. Building on this insight, we propose EIRES, a training-free method that leverages structural edits to reveal inherent differences between real and generated images. 
To explain the discriminative power of this shift, we derive the reconstruction error lower bound under edit perturbations. Since EIRES requires no training, thresholding depends solely on the natural separability of the signal, where a larger margin yields more reliable detection. Extensive experiments show that EIRES is effective across diverse generative models and remains robust on the unbiased subset, even under post-processing operations. + oai:arXiv.org:2510.25141v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Wan Jiang, Jing Yan, Xiaojing Chen, Lin Shen, Chenhao Lin, Yunfeng Diao, Richang Hong + + + A Configuration-First Framework for Reproducible, Low-Code Localization + https://arxiv.org/abs/2510.25692 + arXiv:2510.25692v2 Announce Type: replace +Abstract: Machine learning is increasingly permeating radio-based localization services. To keep results credible and comparable, everyday workflows should make rigorous experiment specification and exact repeatability the default, without blocking advanced experimentation. However, in practice, researchers face a three-way gap that could be filled by a framework that offers (i) low coding effort for end-to-end studies, (ii) reproducibility by default, including versioned code, data, and configurations, controlled randomness, isolated runs, and recorded artifacts, and (iii) built-in extensibility so new models, metrics, and stages can be added with minimal integration effort. Existing tools rarely deliver all three for machine learning in general and localization workflows in particular. In this paper, we introduce LOCALIZE, a low-code, configuration-first framework for radio localization in which experiments are declared in human-readable configuration files, a workflow orchestrator executes standardized pipelines from data preparation to reporting, and all artifacts, such as datasets, models, metrics, and reports, are versioned. 
Preconfigured, versioned datasets reduce initial setup effort and boilerplate, thereby accelerating model development and evaluation. The design, with explicit extension points, allows experts to add components without reworking the underlying infrastructure. Through a qualitative comparison and a head-to-head study against a plain Jupyter notebook baseline, we show that the framework reduces authoring effort while maintaining comparable runtime and memory behavior. Furthermore, using a Bluetooth Low Energy dataset, we demonstrate that scaling the training data from 1x to 10x keeps orchestration overheads bounded as data grows. Overall, the framework makes reproducible machine-learning-based localization experimentation practical, accessible, and extensible. + oai:arXiv.org:2510.25692v2 + cs.SE + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Tim Strnad (Jo\v{z}ef Stefan Institute, Slovenia), Bla\v{z} Bertalani\v{c} (Jo\v{z}ef Stefan Institute, Slovenia), Carolina Fortuna (Jo\v{z}ef Stefan Institute, Slovenia) + + + RegionRAG: Region-level Retrieval-Augmented Generation for Visual Document Understanding + https://arxiv.org/abs/2510.27261 + arXiv:2510.27261v3 Announce Type: replace +Abstract: Multi-modal Retrieval-Augmented Generation (RAG) has become a critical method for empowering LLMs by leveraging candidate visual documents. However, current methods consider the entire document as the basic retrieval unit, introducing substantial irrelevant visual content in two ways: 1) Relevant documents often contain large regions unrelated to the query, diluting the focus on salient information; 2) Retrieving multiple documents to increase recall further introduces redundant and irrelevant documents. These redundant contexts distract the model's attention and further degrade the performance. 
To address this challenge, we propose RegionRAG, a novel framework that shifts the retrieval paradigm from the document level to the region level. During training, we design a hybrid supervision strategy from both labeled data and unlabeled data to pinpoint relevant patches. During inference, we propose a dynamic pipeline that intelligently groups salient patches into complete semantic regions. By delegating the task of identifying relevant regions to the retriever, RegionRAG enables the generator to focus solely on concise, query-relevant visual content, improving both efficiency and accuracy. Experiments on six benchmarks demonstrate that RegionRAG achieves state-of-the-art performance. It improves retrieval accuracy by 10.02% in R@1 on average, and boosts question answering accuracy by 3.56% while using only 71.42% of the visual tokens compared with prior methods.
+ oai:arXiv.org:2510.27261v3
+ cs.CV
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yinglu Li, Zhiying Lu, Zhihang Liu, Yiwei Sun, Chuanbin Liu, Hongtao Xie
+
+
+ Towards a Multi-Embodied Grasping Agent
+ https://arxiv.org/abs/2510.27420
+ arXiv:2510.27420v2 Announce Type: replace
+Abstract: Multi-embodiment grasping focuses on developing approaches that exhibit generalist behavior across diverse gripper designs. Existing methods often learn the kinematic structure of the robot implicitly and face challenges due to the difficulty of sourcing the required large-scale data. In this work, we present a data-efficient, flow-based, equivariant grasp synthesis architecture that can handle different gripper types with variable degrees of freedom and successfully exploit the underlying kinematic model, deducing all necessary information solely from the gripper and scene geometry.
Unlike previous equivariant grasping methods, we reimplemented all modules from the ground up in JAX and provide a model with batching capabilities over scenes, grippers, and grasps, resulting in smoother learning, improved performance, and faster inference. Our dataset encompasses grippers ranging from humanoid hands to parallel yaw grippers and includes 25,000 scenes and 20 million grasps.
+ oai:arXiv.org:2510.27420v2
+ cs.RO
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Roman Freiberg, Alexander Qualmann, Ngo Anh Vien, Gerhard Neumann
+
+
+ Learned Static Function Data Structures
+ https://arxiv.org/abs/2510.27588
+ arXiv:2510.27588v2 Announce Type: replace
+Abstract: We consider the task of constructing a data structure for associating a static set of keys with values, while allowing arbitrary output values for queries involving keys outside the set. Compared to hash tables, these so-called static function data structures do not need to store the key set and thus use significantly less memory. Several techniques are known, with compressed static functions approaching the zero-order empirical entropy of the value sequence. In this paper, we introduce learned static functions, which use machine learning to capture correlations between keys and values. For each key, a model predicts a probability distribution over the values, from which we derive a key-specific prefix code to compactly encode the true value. The resulting codeword is stored in a classic static function data structure. This design allows learned static functions to break the zero-order entropy barrier while still supporting point queries. Our experiments show substantial space savings: up to one order of magnitude on real data, and up to three orders of magnitude on synthetic data.
+ oai:arXiv.org:2510.27588v2 + cs.DS + cs.DB + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Stefan Hermann, Hans-Peter Lehmann, Giorgio Vinciguerra, Stefan Walzer + + + Sharpness-Controlled Group Relative Policy Optimization with Token-Level Probability Shaping + https://arxiv.org/abs/2511.00066 + arXiv:2511.00066v2 Announce Type: replace +Abstract: Reinforcement learning with verifiable rewards (RLVR) has become a practical route to improve large language model reasoning, and Group Relative Policy Optimization (GRPO) is a widely used optimizer in this setting. This paper revisits GRPO from a generalization perspective. Recent analysis shows that population performance can be controlled by a robust empirical objective that decomposes into the training loss plus a sharpness term measured by the gradient norm. We develop a token-level view of this sharpness term and show that GRPO can be dominated by a small subset of tokens with disproportionately large per-token gradients, which increases sharpness and can harm generalization. Motivated by this view, we propose Token-Regulated GRPO (TR-GRPO), which introduces a monotone probability shaping function to assign token weights based on the model's own token probabilities, and integrates these weights into the standard GRPO. Our analysis yields a bound that isolates a probability dependent multiplicative factor in token-gradient magnitudes, explaining how probability-aware weighting suppresses sharp directions while preserving learning signal on semantically critical tokens. Experiments on logic puzzles, mathematical reasoning, and tool-augmented question answering show consistent improvements over GRPO, along with smoother gradient-norm trajectories, supporting TR-GRPO as a simple and effective generalization-oriented upgrade to GRPO for RLVR. 
+ oai:arXiv.org:2511.00066v2 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Tue Le, Nghi D. Q. Bui, Linh Ngo Van, Trung Le + + + Empowering LLMs with Structural Role Inference for Zero-Shot Graph Learning + https://arxiv.org/abs/2511.00898 + arXiv:2511.00898v2 Announce Type: replace +Abstract: Large Language Models have emerged as a promising approach for graph learning due to their powerful reasoning capabilities. However, existing methods exhibit systematic performance degradation on structurally important nodes such as bridges and hubs. We identify the root cause of these limitations. Current approaches encode graph topology into static features but lack reasoning scaffolds to transform topological patterns into role-based interpretations. This limitation becomes critical in zero-shot scenarios where no training data establishes structure-semantics mappings. To address this gap, we propose DuoGLM, a training-free dual-perspective framework for structure-aware graph reasoning. The local perspective constructs relation-aware templates capturing semantic interactions between nodes and neighbors. The global perspective performs topology-to-role inference to generate functional descriptions of structural positions. These complementary perspectives provide explicit reasoning mechanisms enabling LLMs to distinguish topologically similar but semantically different nodes. Extensive experiments across eight benchmark datasets demonstrate substantial improvements. DuoGLM achieves 14.3\% accuracy gain in zero-shot node classification and 7.6\% AUC improvement in cross-domain transfer compared to existing methods. The results validate the effectiveness of explicit role reasoning for graph understanding with LLMs. 
+ oai:arXiv.org:2511.00898v2 + cs.GR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Heng Zhang, Jing Liu, Jiajun Wu, Haochen You, Lubin Gan, Yuling Shi, Xiaodong Gu, Zijian Zhang, Shuai Chen, Wenjun Huang, Jin Huang + + + GraphGeo: Multi-Agent Debate Framework for Visual Geo-localization with Heterogeneous Graph Neural Networks + https://arxiv.org/abs/2511.00908 + arXiv:2511.00908v2 Announce Type: replace +Abstract: Visual geo-localization requires extensive geographic knowledge and sophisticated reasoning to determine image locations without GPS metadata. Traditional retrieval methods are constrained by database coverage and quality. Recent Large Vision-Language Models (LVLMs) enable direct location reasoning from image content, yet individual models struggle with diverse geographic regions and complex scenes. Existing multi-agent systems improve performance through model collaboration but treat all agent interactions uniformly. They lack mechanisms to handle conflicting predictions effectively. We propose \textbf{GraphGeo}, a multi-agent debate framework using heterogeneous graph neural networks for visual geo-localization. Our approach models diverse debate relationships through typed edges, distinguishing supportive collaboration, competitive argumentation, and knowledge transfer. We introduce a dual-level debate mechanism combining node-level refinement and edge-level argumentation modeling. A cross-level topology refinement strategy enables co-evolution between graph structure and agent representations. Experiments on multiple benchmarks demonstrate GraphGeo significantly outperforms state-of-the-art methods. Our framework transforms cognitive conflicts between agents into enhanced geo-localization accuracy through structured debate. 
+ oai:arXiv.org:2511.00908v2 + cs.CV + cs.GR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Heng Zheng, Yuling Shi, Xiaodong Gu, Haochen You, Zijian Zhang, Lubin Gan, Hao Zhang, Wenjun Huang, Jin Huang + + + G2rammar: Bilingual Grammar Modeling for Enhanced Text-attributed Graph Learning + https://arxiv.org/abs/2511.00911 + arXiv:2511.00911v2 Announce Type: replace +Abstract: Text-attributed graphs require models to effectively integrate both structural topology and semantic content. Recent approaches apply large language models to graphs by linearizing structures into token sequences through random walks. These methods create concise graph vocabularies to replace verbose natural language descriptions. However, they overlook a critical component that makes language expressive: grammar. In natural language, grammar assigns syntactic roles to words and defines their functions within sentences. Similarly, nodes in graphs play distinct structural roles as hubs, bridges, or peripheral members. Current graph language methods provide tokens without grammatical annotations to indicate these structural or semantic roles. This absence limits language models' ability to reason about graph topology effectively. We propose \textbf{G2rammar}, a bilingual grammar framework that explicitly encodes both structural and semantic grammar for text-attributed graphs. Structural grammar characterizes topological roles through centrality and neighborhood patterns. Semantic grammar captures content relationships through textual informativity. The framework implements two-stage learning with structural grammar pre-training followed by semantic grammar fine-tuning. Extensive experiments on real-world datasets demonstrate that G2rammar consistently outperforms competitive baselines by providing language models with the grammatical context needed to understand graph structures. 
+ oai:arXiv.org:2511.00911v2
+ cs.GR
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Heng Zheng, Haochen You, Zijun Liu, Zijian Zhang, Lubin Gan, Hao Zhang, Wenjun Huang, Jin Huang
+
+
+ Defining a Role-Centered Terminology for Physical Representations and Controls
+ https://arxiv.org/abs/2511.01106
+ arXiv:2511.01106v2 Announce Type: replace
+Abstract: Previous classifications advanced research through a better understanding of the field and the variety of tangible user interfaces and related physical user interfaces, especially by discretizing a degree of tangibility based on the specimens produced by the community over the years, since the conceptualization of Tangible User Interface initiated a research effort to deepen the exploration of the concept. However, no taxonomy enables the classification of tangible user interfaces at the application level. This article proposes to refine the description of tangible user interfaces' interactional components through a terminological approach. The resulting terms are blended words, built from known words, that self-contain what digital role is represented or controlled and how it becomes physical. This holistic terminology then enables the definition of applications' hallmarks and four classes of tangibility for applications, which surpass the description of physical user interface specimens' morphology by abstracting and discriminating specimens at the applicative level. The descriptiveness and holisticness of the new terminology, as well as the clustering and discriminative power of the limited number of four classes, are shown on a corpus of applicative tangible user interfaces' specimens from the literature.
Promising future work will benefit from the holistic terminology, the applications' hallmarks, and the tangibility classes to describe applicative tangible user interfaces and related physical user interfaces, to better understand the dozens of specimens that were produced by the field over three decades. Indeed, describing and classifying this whole set would deepen our understanding and provide tools for future developers and designers.
+ oai:arXiv.org:2511.01106v2
+ cs.HC
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Guillaume Riviere
+
+
+ Memory-Efficient Training with In-Place FFT Implementation
+ https://arxiv.org/abs/2511.01385
+ arXiv:2511.01385v2 Announce Type: replace
+Abstract: Fast Fourier Transforms (FFT) are widely used to reduce memory and computational costs in deep learning. However, existing implementations, including standard FFT and real FFT (rFFT), cannot achieve true in-place computation. In particular, rFFT maps an input of size n to a complex output of size n/2+1, causing dimensional mismatch and requiring additional memory allocation. We propose the first real-domain, fully in-place FFT framework (rdFFT) that preserves input-output memory space consistency. By leveraging butterfly operation symmetry and conjugate properties in the frequency domain, we design an implicit complex encoding scheme that eliminates intermediate cache usage entirely. Experiments on multiple natural language understanding tasks demonstrate the method's effectiveness in reducing training memory cost, offering a promising direction for frequency-domain lightweight adaptation.
+ oai:arXiv.org:2511.01385v2
+ cs.LG
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xinyu Ding, Bangtian Liu, Siyu Liao, Zhongfeng Wang
+
+
+ Continuum: Efficient and Robust Multi-Turn LLM Agent Scheduling with KV Cache Time-to-Live
+ https://arxiv.org/abs/2511.02230
+ arXiv:2511.02230v2 Announce Type: replace
+Abstract: KV cache management is essential for efficient LLM inference. To maximize utilization, existing inference engines evict finished requests' KV cache if new requests are waiting. This policy breaks for agentic workloads, which interleave LLM calls with tools, introducing pauses that prevent effective KV reuse across turns. Since some tool calls have much shorter durations than human responses in multi-turn chat, it is promising to retain the KV cache during these tool calls. However, there are many challenges. First, we need to consider both the potential cost of recomputation or reloading (if CPU offloading is enabled) and the increasing queueing delays after eviction from GPU. Second, due to the internal variance of tool call durations, we need the method to remain robust under limited predictability of tool call durations.
+ We present Continuum, a serving system that optimizes job completion time for multi-turn agent workloads by introducing a time-to-live mechanism for KV cache retention. For an LLM request that generates a tool call, Continuum selectively pins the KV cache in GPU memory, with a time-to-live value determined by considering both the reload cost and the order-preserving benefit of retaining the KV cache. Moreover, when the TTL expires, the KV cache can be automatically evicted to free up GPU memory, providing robust performance in edge cases. When combined with program-level first-come-first-serve scheduling, Continuum preserves multi-turn continuity and reduces delay for complex agentic workflows.
Our evaluation on real-world agentic workloads (SWE-Bench and BFCL) with Llama-3.1 8B/70B shows that Continuum significantly improves average job completion times, and its improvement scales as the number of turns increases. We release a preview version at: https://github.com/Hanchenli/vllm-continuum
+ oai:arXiv.org:2511.02230v2
+ cs.OS
+ cs.AI
+ cs.NI
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Hanchen Li, Qiuyang Mang, Runyuan He, Qizheng Zhang, Huanzhi Mao, Xiaokun Chen, Hangrui Zhou, Alvin Cheung, Joseph Gonzalez, Ion Stoica
+
+
+ LiveSecBench: A Dynamic and Event-Driven Safety Benchmark for Chinese Language Model Applications
+ https://arxiv.org/abs/2511.02366
+ arXiv:2511.02366v2 Announce Type: replace
+Abstract: We introduce LiveSecBench, a continuously updated safety benchmark specifically for Chinese-language LLM application scenarios. LiveSecBench constructs a high-quality and unique dataset through a pipeline that combines automated generation with human verification. By periodically releasing new versions to expand the dataset and update evaluation metrics, LiveSecBench provides a robust and up-to-date standard for AI safety. In this report, we introduce our second release v251215, which evaluates models across five dimensions (Public Safety, Fairness & Bias, Privacy, Truthfulness, and Mental Health Safety). We evaluate 57 representative LLMs using an ELO rating system, offering a leaderboard of the current state of Chinese LLM safety. The result is available at https://livesecbench.intokentech.cn/.
+ oai:arXiv.org:2511.02366v2 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Yudong Li, Peiru Yang, Feng Huang, Zhongliang Yang, Kecheng Wang, Haitian Li, Baocheng Chen, Xingyu An, Ziyu Liu, Youdan Yang, Kejiang Chen, Sifang Wan, Xu Wang, Yufei Sun, Liyan Wu, Ruiqi Zhou, Wenya Wen, Xingchi Gu, Tianxin Zhang, Yue Gao, Yongfeng Huang + + + AutoAdv: Automated Adversarial Prompting for Multi-Turn Jailbreaking of Large Language Models + https://arxiv.org/abs/2511.02376 + arXiv:2511.02376v3 Announce Type: replace +Abstract: Large Language Models (LLMs) remain vulnerable to jailbreaking attacks where adversarial prompts elicit harmful outputs. Yet most evaluations focus on single-turn interactions while real-world attacks unfold through adaptive multi-turn conversations. We present AutoAdv, a training-free framework for automated multi-turn jailbreaking that achieves an attack success rate of up to 95% on Llama-3.1-8B within six turns, a 24% improvement over single-turn baselines. AutoAdv uniquely combines three adaptive mechanisms: a pattern manager that learns from successful attacks to enhance future prompts, a temperature manager that dynamically adjusts sampling parameters based on failure modes, and a two-phase rewriting strategy that disguises harmful requests and then iteratively refines them. Extensive evaluation across commercial and open-source models (Llama-3.1-8B, GPT-4o mini, Qwen3-235B, Mistral-7B) reveals persistent vulnerabilities in current safety mechanisms, with multi-turn attacks consistently outperforming single-turn approaches. These findings demonstrate that alignment strategies optimized for single-turn interactions fail to maintain robustness across extended conversations, highlighting an urgent need for multi-turn-aware defenses. 
+ oai:arXiv.org:2511.02376v3 + cs.CL + cs.AI + cs.CR + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Aashray Reddy, Andrew Zagula, Nicholas Saban + + + Causal Graph Neural Networks for Healthcare + https://arxiv.org/abs/2511.02531 + arXiv:2511.02531v3 Announce Type: replace +Abstract: Healthcare artificial intelligence systems routinely fail when deployed across institutions, with documented performance drops and perpetuation of discriminatory patterns embedded in historical data. This brittleness stems, in part, from learning statistical associations rather than causal mechanisms. Causal graph neural networks address this triple crisis of distribution shift, discrimination, and inscrutability by combining graph-based representations of biomedical data with causal inference principles to learn invariant mechanisms rather than spurious correlations. This Review examines methodological foundations spanning structural causal models, disentangled causal representation learning, and techniques for interventional prediction and counterfactual reasoning on graphs. We analyse applications demonstrating clinical value across psychiatric diagnosis through brain network analysis, cancer subtyping via multi-omics causal integration, continuous physiological monitoring with mechanistic interpretation, and drug recommendation correcting prescription bias. These advances establish foundations for patient-specific Causal Digital Twins, enabling in silico clinical experimentation, with integration of large language models for hypothesis generation and causal graph neural networks for mechanistic validation. Substantial barriers remain, including computational requirements precluding real-time deployment, validation challenges demanding multi-modal evidence triangulation beyond cross-validation, and risks of causal-washing where methods employ causal terminology without rigorous evidentiary support. 
We propose tiered frameworks distinguishing causally-inspired architectures from causally-validated discoveries and identify critical research priorities for making causal rather than purely associational claims.
+ oai:arXiv.org:2511.02531v3
+ cs.LG
+ cs.AI
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Munib Mesinovic, Max Buhlan, Tingting Zhu
+
+
+ In Good GRACEs: Principled Teacher Selection for Knowledge Distillation
+ https://arxiv.org/abs/2511.02833
+ arXiv:2511.02833v3 Announce Type: replace
+Abstract: Knowledge distillation is an efficient strategy to use data generated by large "teacher" language models to train smaller capable "student" models, but selecting the optimal teacher for a specific student-task combination requires expensive trial-and-error. We propose a lightweight score called GRACE to quantify how effective a teacher will be for post-training a student model. GRACE measures distributional properties of the student's gradients without access to a verifier, teacher logits, teacher internals, or test data. From an information-theoretic perspective, GRACE connects to leave-one-out stability of gradient-based algorithms, which controls the generalization performance of the distilled students. On GSM8K and MATH, GRACE correlates strongly (up to 86% Spearman correlation) with the performance of the distilled LLaMA and OLMo students. In particular, training a student using the GRACE-selected teacher can improve the performance by up to 7.4% over naively using the best-performing teacher. Further, GRACE can provide guidance on crucial design choices in distillation, including (1) the best temperature to use when generating from the teacher, (2) the best teacher to use given a size constraint, and (3) the best teacher to use within a specific model family.
Altogether, our findings demonstrate that GRACE can efficiently and effectively identify a strongly compatible teacher for a given student and provide fine-grained guidance on how to perform distillation. + oai:arXiv.org:2511.02833v3 + cs.LG + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Abhishek Panigrahi, Bingbin Liu, Sadhika Malladi, Sham Kakade, Surbhi Goel + + + Mathematical exploration and discovery at scale + https://arxiv.org/abs/2511.02864 + arXiv:2511.02864v3 Announce Type: replace +Abstract: AlphaEvolve (Novikov et al., 2025) is a generic evolutionary coding agent that combines the generative capabilities of LLMs with automated evaluation in an iterative evolutionary framework that proposes, tests, and refines algorithmic solutions to challenging scientific and practical problems. In this paper we showcase AlphaEvolve as a tool for autonomously discovering novel mathematical constructions and advancing our understanding of long-standing open problems. + To demonstrate its breadth, we considered a list of 67 problems spanning mathematical analysis, combinatorics, geometry, and number theory. The system rediscovered the best known solutions in most of the cases and discovered improved solutions in several. In some instances, AlphaEvolve is also able to generalize results for a finite number of input values into a formula valid for all input values. Furthermore, we are able to combine this methodology with Deep Think and AlphaProof in a broader framework where the additional proof-assistants and reasoning systems provide automated proof generation and further mathematical insights. 
+ These results demonstrate that large language model-guided evolutionary search can autonomously discover mathematical constructions that complement human intuition, at times matching or even improving the best known results, highlighting the potential for significant new ways of interaction between mathematicians and AI systems. We present AlphaEvolve as a powerful new tool for mathematical discovery, capable of exploring vast search spaces to solve complex optimization problems at scale, often with significantly reduced requirements on preparation and computation time. + oai:arXiv.org:2511.02864v3 + cs.NE + cs.AI + math.CA + math.CO + math.MG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Bogdan Georgiev, Javier G\'omez-Serrano, Terence Tao, Adam Zsolt Wagner + + + Nowcast3D: Reliable precipitation nowcasting via gray-box learning + https://arxiv.org/abs/2511.04659 + arXiv:2511.04659v3 Announce Type: replace +Abstract: Extreme-precipitation nowcasting requires high spatial and temporal resolution together with extended lead times, yet current approaches remain constrained. Numerical weather prediction systems and their deep-learning emulators operate at relatively coarse space-time resolution and struggle to capture rapidly evolving convective systems. Radar extrapolation methods, which advect recent fields using estimated motion, have difficulty capturing the complex evolution of precipitation. Purely data-driven models often produce overly smoothed reflectivity fields and underestimate intensity. Hybrid 2D radar-based methods discard crucial vertical information, preventing accurate reconstruction of height-dependent dynamics. We introduce Nowcast3D, a gray-box, fully three-dimensional nowcasting framework that operates directly on volumetric radar reflectivity and couples physically constrained neural operators with data-driven learning. 
The model learns three fields that govern reflectivity evolution: a three-dimensional flow field for advective transport, a spatially varying diffusion field for local dispersive spreading, and a residual source term for unresolved microphysical effects. These learned operators advance the forecast in time under explicit physical constraints, while a conditional diffusion model, conditioned on both the observations and the physics-based forecast, generates ensembles of future radar volumes that quantify forecast uncertainty. In a blind evaluation by 160 meteorologists, Nowcast3D is preferred in 57% of post-hoc and 51% of prior assessments. By explicitly embedding three-dimensional dynamics and uncertainty into a single framework, Nowcast3D offers a scalable and robust approach for reliable nowcasting of extreme precipitation. + oai:arXiv.org:2511.04659v3 + cs.LG + physics.ao-ph + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Huaguan Chen, Wei Han, Haofei Sun, Ning Lin, Xingtao Song, Yunfan Yang, Jie Tian, Yang Liu, Ji-Rong Wen, Xiaoye Zhang, Xueshun Shen, Hao Sun + + + STAIR: Stability criterion for Time-windowed Assignment and Internal adversarial influence in Routing and decision-making + https://arxiv.org/abs/2511.05715 + arXiv:2511.05715v2 Announce Type: replace +Abstract: A major limitation of existing routing algorithms for multi-agent systems is that they are designed without considering the potential presence of adversarial agents in the decision-making loop, which could lead to severe performance degradation in real-life applications where adversarial agents may be present. We study autonomous pickup-and-delivery routing problems in which adversarial agents launch coordinated denial-of-service attacks by spoofing their locations. This deception causes the central scheduler to assign pickup requests to adversarial agents instead of cooperative agents. 
Adversarial agents then choose not to service the requests with the goal of disrupting the operation of the system, leading to delays, cancellations, and potential instability in the routing policy. Policy stability in routing problems is typically defined as the cost of the policy being uniformly bounded over time, and it has been studied through two different lenses: queuing theory and reinforcement learning (RL), neither of which is well suited for routing with adversaries. In this paper, we propose a new stability criterion, STAIR, which is easier to analyze than queuing-theory-based stability in adversarial settings. Furthermore, STAIR does not depend on a chosen discount factor, as is the case in discounted RL stability. STAIR directly links stability to desired operational metrics, such as a finite number of rejected requests. This characterization is particularly useful in adversarial settings as it provides a metric for monitoring the effect of adversaries on the operation of the system. We demonstrate STAIR's practical relevance through simulations on real-world San Francisco mobility-on-demand data. We also identify a phenomenon of degenerate stability that arises in the adversarial routing problem, and we introduce time-window constraints in the decision-making algorithm to mitigate it. 
+ oai:arXiv.org:2511.05715v2
+ cs.MA
+ cs.SY
+ eess.SY
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Roee M. Francos, Daniel Garces, Orhan Eren Akg\"un, Stephanie Gil
+ 
+ 
+ An Architecture for Remote Container Builds and Artifact Delivery Using a Controller-Light Jenkins CI/CD Pipeline
+ https://arxiv.org/abs/2511.05720
+ arXiv:2511.05720v2 Announce Type: replace
+Abstract: Conventional Jenkins installations often execute resource-intensive builds directly on the controller, which can lower reliability and overload system resources. 
In the controller-light CI/CD framework presented in this paper, Jenkins functions as a containerized controller with persistent volumes, delegating heavy build and packaging tasks to a remote Docker host. The controller container maintains secure SSH connections to remote compute nodes while focusing solely on orchestration and reporting. The system includes atomic deployments with time-stamped backups, containerized build environments, immutable artifact packaging, and automated notifications. Experimental evaluation reveals faster build throughput, reduced CPU and RAM consumption on the controller, and lower artifact delivery latency. For small and medium-sized DevOps businesses looking for scalable automation without adding orchestration complexity, this method offers a repeatable, low-maintenance solution. 
+ oai:arXiv.org:2511.05720v2
+ cs.SE
+ cs.SY
+ eess.SY
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Kawshik Kumar Paul, Sawmik Kumar Paul
+ 
+ 
+ Enhancing Diffusion Model Guidance through Calibration and Regularization
+ https://arxiv.org/abs/2511.05844
+ arXiv:2511.05844v3 Announce Type: replace
+Abstract: Classifier-guided diffusion models have emerged as a powerful approach for conditional image generation, but they suffer from overconfident predictions during early denoising steps, causing the guidance gradient to vanish. This paper introduces two complementary contributions to address this issue. First, we propose a differentiable calibration objective based on the Smooth Expected Calibration Error (Smooth ECE), which improves classifier calibration with minimal fine-tuning and yields measurable improvements in Frechet Inception Distance (FID). Second, we develop enhanced sampling guidance methods that operate on off-the-shelf classifiers without requiring retraining. 
These include tilted sampling with batch-level reweighting, adaptive entropy-regularized sampling to preserve diversity, and a novel f-divergence-based sampling strategy that strengthens class-consistent guidance while maintaining mode coverage. Experiments on ImageNet 128x128 demonstrate that our divergence-regularized guidance achieves an FID of 2.13 using a ResNet-101 classifier, improving upon existing classifier-guided diffusion methods while requiring no diffusion model retraining. The results show that principled calibration and divergence-aware sampling provide practical and effective improvements for classifier-guided diffusion. + oai:arXiv.org:2511.05844v3 + cs.CV + cs.AI + cs.IT + cs.LG + eess.IV + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Seyed Alireza Javid, Amirhossein Bagheri, Nuria Gonz\'alez-Prelcic + + + Evaluating Large Language Models for Anxiety, Depression, and Stress Detection: Insights into Prompting Strategies and Synthetic Data + https://arxiv.org/abs/2511.07044 + arXiv:2511.07044v2 Announce Type: replace +Abstract: Mental health disorders affect over one-fifth of adults globally, yet detecting such conditions from text remains challenging due to the subtle and varied nature of symptom expression. This study evaluates multiple approaches for mental health detection, comparing Large Language Models (LLMs) such as Llama and GPT with classical machine learning and transformer-based architectures including BERT, XLNet, and Distil-RoBERTa. Using the DAIC-WOZ dataset of clinical interviews, we fine-tuned models for anxiety, depression, and stress classification and applied synthetic data generation to mitigate class imbalance. Results show that Distil-RoBERTa achieved the highest F1 score (0.883) for GAD-2, while XLNet outperformed others on PHQ tasks (F1 up to 0.891). For stress detection, a zero-shot synthetic approach (SD+Zero-Shot-Basic) reached an F1 of 0.884 and ROC AUC of 0.886. 
Findings demonstrate the effectiveness of transformer-based models and highlight the value of synthetic data in improving recall and generalization. However, careful calibration is required to prevent precision loss. Overall, this work emphasizes the potential of combining advanced language models and data augmentation to enhance automated mental health assessment from text. + oai:arXiv.org:2511.07044v2 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Mihael Arcan, David-Paul Niland + + + DETECT: Data-Driven Evaluation of Treatments Enabled by Classification Transformers + https://arxiv.org/abs/2511.07213 + arXiv:2511.07213v2 Announce Type: replace +Abstract: Chronic pain is a global health challenge affecting millions of individuals, making it essential for physicians to have reliable and objective methods to measure the functional impact of clinical treatments. Traditionally used methods, like the numeric rating scale, while personalized and easy to use, are subjective due to their self-reported nature. Thus, this paper proposes DETECT (Data-Driven Evaluation of Treatments Enabled by Classification Transformers), a data-driven framework that assesses treatment success by comparing patient activities of daily life before and after treatment. We use DETECT on public benchmark datasets and simulated patient data from smartphone sensors. Our results demonstrate that DETECT is objective yet lightweight, making it a significant and novel contribution to clinical decision-making. By using DETECT, independently or together with other self-reported metrics, physicians can improve their understanding of their treatment impacts, ultimately leading to more personalized and responsive patient care. 
+ oai:arXiv.org:2511.07213v2
+ cs.LG
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Yuanheng Mao, Lillian Yang, Stephen Yang, Ethan Shao, Zihan Li
+ 
+ 
+ Preparation of Fractal-Inspired Computational Architectures for Advanced Large Language Model Analysis
+ https://arxiv.org/abs/2511.07329
+ arXiv:2511.07329v2 Announce Type: replace
+Abstract: This paper introduces FractalNet, a fractal-inspired computational architecture for advanced large language model analysis that tackles model diversity at scale in an efficient manner. The new setup involves a template-driven generator, runner, and evaluation framework that, through systematic permutations of convolutional, normalization, activation, and dropout layers, can create more than 1,200 variants of neural networks. Fractal templates allow for structural recursion and multi-column pathways, so models become deeper and wider in a balanced way. Training utilizes PyTorch, Automatic Mixed Precision (AMP), and gradient checkpointing and is carried out on the CIFAR-10 dataset for five epochs. The results show that fractal-based architectures deliver strong performance while remaining computationally efficient. The paper positions fractal design as a feasible and resource-efficient method of automated architecture exploration. 
+ oai:arXiv.org:2511.07329v2
+ cs.LG
+ cs.CV
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yash Mittal, Dmitry Ignatov, Radu Timofte
+ 
+ 
+ BiCA: Effective Biomedical Dense Retrieval with Citation-Aware Hard Negatives
+ https://arxiv.org/abs/2511.08029
+ arXiv:2511.08029v2 Announce Type: replace
+Abstract: Hard negatives are essential for training effective retrieval models. Hard-negative mining typically relies on ranking documents using cross-encoders or static embedding models based on similarity metrics such as cosine distance. 
Hard-negative mining is particularly challenging in biomedical and scientific domains due to the difficulty of distinguishing between source and hard-negative documents. However, referenced documents naturally share contextual relevance with the source document but are not duplicates, making them well-suited as hard negatives. In this work, we propose BiCA: Biomedical Dense Retrieval with Citation-Aware Hard Negatives, an approach to hard-negative mining that utilizes citation links in 20,000 PubMed articles to improve a small domain-specific dense retriever. We fine-tune the GTE_small and GTE_Base models using these citation-informed negatives and observe consistent improvements in zero-shot dense retrieval using nDCG@10 for both in-domain and out-of-domain tasks on BEIR and outperform baselines on long-tailed topics in LoTTE using Success@5. Our findings highlight the potential of leveraging document link structure to generate highly informative negatives, enabling state-of-the-art performance with minimal fine-tuning and demonstrating a path towards highly data-efficient domain adaptation. 
+ oai:arXiv.org:2511.08029v2
+ cs.IR
+ cs.CL
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Aarush Sinha, Pavan Kumar S, Roshan Balaji, Nirav Pravinbhai Bhatt
+ 
+ 
+ Potent but Stealthy: Rethink Profile Pollution against Sequential Recommendation via Bi-level Constrained Reinforcement Paradigm
+ https://arxiv.org/abs/2511.09392
+ arXiv:2511.09392v4 Announce Type: replace
+Abstract: Sequential recommenders, which exploit dynamic user intents through interaction sequences, are vulnerable to adversarial attacks. While existing attacks primarily rely on data poisoning, they require large-scale user access or fake profiles and thus lack practicality. In this paper, we focus on the Profile Pollution Attack (PPA), which subtly contaminates partial user interactions to induce targeted mispredictions. 
Previous PPA methods suffer from two limitations: i) over-reliance on sequence-horizon impact restricts fine-grained perturbations on item transitions, and ii) holistic modifications cause detectable distribution shifts. To address these challenges, we propose CREAT, a constrained-reinforcement-driven attack that synergizes a bi-level optimization framework with multi-reward reinforcement learning to balance adversarial efficacy and stealthiness. We first develop a Pattern Balanced Rewarding Policy, which integrates pattern inversion rewards to invert critical patterns and distribution consistency rewards to minimize detectable shifts via unbalanced co-optimal transport. We then employ a Constrained Group Relative Reinforcement Learning paradigm, enabling step-wise perturbations through dynamic barrier constraints and group-shared experience replay, achieving targeted pollution with minimal detectability. Extensive experiments demonstrate the effectiveness of CREAT. 
+ oai:arXiv.org:2511.09392v4
+ cs.LG
+ cs.AI
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiajie Su, Zihan Nan, Yunshan Ma, Xiaobo Xia, Xiaohua Feng, Weiming Liu, Xiang Chen, Xiaolin Zheng, Chaochao Chen
+ 
+ 
+ TermGPT: Multi-Level Contrastive Fine-Tuning for Terminology Adaptation in Legal and Financial Domain
+ https://arxiv.org/abs/2511.09854
+ arXiv:2511.09854v2 Announce Type: replace
+Abstract: Large language models (LLMs) have demonstrated impressive performance in text generation tasks; however, their embedding spaces often suffer from the isotropy problem, resulting in poor discrimination of domain-specific terminology, particularly in legal and financial contexts. This weakness in terminology-level representation can severely hinder downstream tasks such as legal judgment prediction or financial risk analysis, where subtle semantic distinctions are critical. 
To address this problem, we propose TermGPT, a multi-level contrastive fine-tuning framework designed for terminology adaptation. We first construct a sentence graph to capture semantic and structural relations, and generate semantically consistent yet discriminative positive and negative samples based on contextual and topological cues. We then devise a multi-level contrastive learning approach at both the sentence and token levels, enhancing global contextual understanding and fine-grained terminology discrimination. To support robust evaluation, we construct the first financial terminology dataset derived from official regulatory documents. Experiments show that TermGPT outperforms existing baselines in term discrimination tasks within the finance and legal domains. + oai:arXiv.org:2511.09854v2 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yidan Sun, Mengying Zhu, Feiyue Chen, Yangyang Wu, Xiaolei Dan, Mengyuan Yang, Xiaolin Zheng, Shenglin Ben + + + 3DFETUS: Deep Learning-Based Standardization of Facial Planes in 3D Ultrasound + https://arxiv.org/abs/2511.10412 + arXiv:2511.10412v2 Announce Type: replace +Abstract: The automatic localization and standardization of anatomical planes in 3D medical imaging remains a challenging problem due to variability in object pose, appearance, and image quality. In 3D ultrasound, these challenges are exacerbated by speckle noise and limited contrast, particularly in fetal imaging. + To address these challenges in the context of facial assessment, we present: 1) GT++, a robust algorithm that estimates standard facial planes from 3D US volumes using annotated anatomical landmarks; and 2) 3DFETUS, a deep learning model that automates and standardizes their localization in 3D fetal US volumes. + We evaluated our methods both qualitatively, through expert clinical review, and quantitatively. 
The proposed approach achieved a mean translation error of 3.21 $\pm$ 1.98mm and a mean rotation error of 5.31 $\pm$ 3.945$^\circ$ per plane, outperforming other state-of-the-art methods on 3D US volumes. Clinical assessments further confirmed the effectiveness of both GT++ and 3DFETUS, demonstrating statistically significant improvements in plane estimation accuracy. + oai:arXiv.org:2511.10412v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Alomar Antonia, Rubio Ricardo, Albaiges Gerard, Salort-Benejam Laura, Caminal Julia, Prat Maria, Rueda Carolina, Cortes Berta, Piella Gemma, Sukno Federico + + + Collaborative Representation Learning for Alignment of Tactile, Language, and Vision Modalities + https://arxiv.org/abs/2511.11512 + arXiv:2511.11512v2 Announce Type: replace +Abstract: Tactile sensing offers rich and complementary information to vision and language, enabling robots to perceive fine-grained object properties. However, existing tactile sensors lack standardization, leading to redundant features that hinder cross-sensor generalization. Moreover, existing methods fail to fully integrate the intermediate communication among tactile, language, and vision modalities. To address this, we propose TLV-CoRe, a CLIP-based Tactile-Language-Vision Collaborative Representation learning method. TLV-CoRe introduces a Sensor-Aware Modulator to unify tactile features across different sensors and employs tactile-irrelevant decoupled learning to disentangle irrelevant tactile features. Additionally, a Unified Bridging Adapter is introduced to enhance tri-modal interaction within the shared representation space. To fairly evaluate the effectiveness of tactile models, we further propose the RSS evaluation framework, focusing on Robustness, Synergy, and Stability across different methods. 
Experimental results demonstrate that TLV-CoRe significantly improves sensor-agnostic representation learning and cross-modal alignment, offering a new direction for multimodal tactile representation. + oai:arXiv.org:2511.11512v2 + cs.RO + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Yiyun Zhou, Mingjing Xu, Jingwei Shi, Quanjiang Li, Jingyuan Chen + + + What-If Decision Support for Product Line Extension Using Conditional Deep Generative Models + https://arxiv.org/abs/2511.11646 + arXiv:2511.11646v2 Announce Type: replace +Abstract: Product line extension is a strategically important managerial decision that requires anticipating how consumer segments and purchasing contexts may respond to hypothetical product designs that do not yet exist in the market. Such decisions are inherently uncertain because managers must infer future outcomes from historical purchase data without direct market observations. This study addresses this challenge by proposing a data-driven decision support framework that enables forward-looking what-if analysis based on historical transaction data. We introduce a Conditional Tabular Variational Autoencoder (CTVAE) that learns the conditional joint distribution of product attributes and consumer characteristics from large-scale tabular data. By conditioning the generative process on controllable design variables such as container type, volume, flavor, and calorie content, the proposed model generates synthetic consumer attribute distributions for hypothetical line-extended products. This enables systematic exploration of alternative design scenarios without costly market pretests. The framework is evaluated using home-scan panel data covering more than 20,000 consumers and 700 soft drink products. Empirical results show that the CTVAE outperforms existing tabular generative models in capturing conditional consumer attribute distributions. 
Simulation-based analyses further demonstrate that the generated synthetic data support knowledge-driven reasoning for assessing cannibalization risks and identifying potential target segments. These findings highlight the value of conditional deep generative models as core components of decision support systems for product line extension planning. 
+ oai:arXiv.org:2511.11646v2
+ cs.LG
+ cs.AI
+ cs.IR
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yinxing Li, Tsukasa Ishigaki
+ 
+ 
+ Expressive Temporal Specifications for Reward Monitoring
+ https://arxiv.org/abs/2511.12808
+ arXiv:2511.12808v3 Announce Type: replace
+Abstract: Specifying informative and dense reward functions remains a pivotal challenge in Reinforcement Learning, as it directly affects the efficiency of agent training. In this work, we harness the expressive power of quantitative Linear Temporal Logic on finite traces ($\text{LTL}_f[\mathcal{F}]$) to synthesize reward monitors that generate a dense stream of rewards for runtime-observable state trajectories. By providing nuanced feedback during training, these monitors guide agents toward optimal behaviour and help mitigate the well-known issue of sparse rewards under long-horizon decision making, which arises under the Boolean semantics dominating the current literature. Our framework is algorithm-agnostic and only relies on a state labelling function, and naturally accommodates specifying non-Markovian properties. Empirical results show that our quantitative monitors consistently subsume and, depending on the environment, outperform Boolean monitors in maximizing a quantitative measure of task completion and in reducing convergence time. 
+ oai:arXiv.org:2511.12808v3 + cs.LG + cs.AI + cs.LO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Omar Adalat, Francesco Belardinelli + + + Towards 3D Object-Centric Feature Learning for Semantic Scene Completion + https://arxiv.org/abs/2511.13031 + arXiv:2511.13031v3 Announce Type: replace +Abstract: Vision-based 3D Semantic Scene Completion (SSC) has received growing attention due to its potential in autonomous driving. While most existing approaches follow an ego-centric paradigm by aggregating and diffusing features over the entire scene, they often overlook fine-grained object-level details, leading to semantic and geometric ambiguities, especially in complex environments. To address this limitation, we propose Ocean, an object-centric prediction framework that decomposes the scene into individual object instances to enable more accurate semantic occupancy prediction. Specifically, we first employ a lightweight segmentation model, MobileSAM, to extract instance masks from the input image. Then, we introduce a 3D Semantic Group Attention module that leverages linear attention to aggregate object-centric features in 3D space. To handle segmentation errors and missing instances, we further design a Global Similarity-Guided Attention module that leverages segmentation features for global interaction. Finally, we propose an Instance-aware Local Diffusion module that improves instance features through a generative process and subsequently refines the scene representation in the BEV space. Extensive experiments on the SemanticKITTI and SSCBench-KITTI360 benchmarks demonstrate that Ocean achieves state-of-the-art performance, with mIoU scores of 17.40 and 20.28, respectively. 
+ oai:arXiv.org:2511.13031v3
+ cs.CV
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Weihua Wang, Yubo Cui, Xiangru Lin, Zhiheng Li, Zheng Fang
+ 
+ 
+ FreeAskWorld: An Interactive and Closed-Loop Simulator for Human-Centric Embodied AI
+ https://arxiv.org/abs/2511.13524
+ arXiv:2511.13524v2 Announce Type: replace
+Abstract: As embodied intelligence emerges as a core frontier in artificial intelligence research, simulation platforms must evolve beyond low-level physical interactions to capture complex, human-centered social behaviors. We introduce FreeAskWorld, an interactive simulation framework that integrates large language models (LLMs) for high-level behavior planning and semantically grounded interaction, informed by theories of intention and social cognition. Our framework supports scalable, realistic human-agent simulations and includes a modular data generation pipeline tailored for diverse embodied tasks. To validate the framework, we extend the classic Vision-and-Language Navigation (VLN) task into an interaction-enriched Direction Inquiry setting, wherein agents can actively seek and interpret navigational guidance. We present and publicly release FreeAskWorld, a large-scale benchmark dataset comprising reconstructed environments, six diverse task types, 16 core object categories, 63,429 annotated sample frames, and more than 17 hours of interaction data to support training and evaluation of embodied AI systems. We benchmark VLN models and human participants under both open-loop and closed-loop settings. Experimental results demonstrate that models fine-tuned on FreeAskWorld outperform their original counterparts, achieving enhanced semantic understanding and interaction competency. These findings underscore the efficacy of socially grounded simulation frameworks in advancing embodied AI systems toward sophisticated high-level planning and more naturalistic human-agent interaction. 
Importantly, our work underscores that interaction itself serves as an additional information modality. + oai:arXiv.org:2511.13524v2 + cs.AI + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + AAAI 2026 Oral + Yuhang Peng, Yizhou Pan, Xinning He, Jihaoyu Yang, Xinyu Yin, Han Wang, Xiaoji Zheng, Chao Gao, Jiangtao Gong + + + TiViBench: Benchmarking Think-in-Video Reasoning for Video Generative Models + https://arxiv.org/abs/2511.13704 + arXiv:2511.13704v2 Announce Type: replace +Abstract: The rapid evolution of video generative models has shifted their focus from producing visually plausible outputs to tackling tasks requiring physical plausibility and logical consistency. However, despite recent breakthroughs such as Veo 3's chain-of-frames reasoning, it remains unclear whether these models can exhibit reasoning capabilities similar to large language models (LLMs). Existing benchmarks predominantly evaluate visual fidelity and temporal coherence, failing to capture higher-order reasoning abilities. To bridge this gap, we propose TiViBench, a hierarchical benchmark specifically designed to evaluate the reasoning capabilities of image-to-video (I2V) generation models. TiViBench systematically assesses reasoning across four dimensions: i) Structural Reasoning & Search, ii) Spatial & Visual Pattern Reasoning, iii) Symbolic & Logical Reasoning, and iv) Action Planning & Task Execution, spanning 24 diverse task scenarios across 3 difficulty levels. Through extensive evaluations, we show that commercial models (e.g., Sora 2, Veo 3.1) demonstrate stronger reasoning potential, while open-source models reveal untapped potential that remains hindered by limited training scale and data diversity. To further unlock this potential, we introduce VideoTPO, a simple yet effective test-time strategy inspired by preference optimization. 
By performing LLM self-analysis on generated candidates to identify strengths and weaknesses, VideoTPO significantly enhances reasoning performance without requiring additional training, data, or reward models. Together, TiViBench and VideoTPO pave the way for evaluating and advancing reasoning in video generation models, setting a foundation for future research in this emerging field. + oai:arXiv.org:2511.13704v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Harold Haodong Chen, Disen Lan, Wen-Jie Shu, Qingyang Liu, Zihan Wang, Sirui Chen, Wenkai Cheng, Kanghao Chen, Hongfei Zhang, Zixin Zhang, Rongjin Guo, Yu Cheng, Ying-Cong Chen + + + Continuous Vision-Language-Action Co-Learning with Semantic-Physical Alignment for Behavioral Cloning + https://arxiv.org/abs/2511.14396 + arXiv:2511.14396v4 Announce Type: replace +Abstract: Language-conditioned manipulation facilitates human-robot interaction via behavioral cloning (BC), which learns control policies from human demonstrations and serves as a cornerstone of embodied AI. Overcoming compounding errors in sequential action decisions remains a central challenge to improving BC performance. Existing approaches mitigate compounding errors through data augmentation, expressive representation, or temporal abstraction. However, they suffer from physical discontinuities and semantic-physical misalignment, leading to inaccurate action cloning and intermittent execution. In this paper, we present Continuous vision-language-action Co-Learning with Semantic-Physical Alignment (CCoL), a novel BC framework that ensures temporally consistent execution and fine-grained semantic grounding. It generates robust and smooth action execution trajectories through continuous co-learning across vision, language, and proprioceptive inputs (e.g., robot internal states). 
Meanwhile, we anchor language semantics to visuomotor representations by a bidirectional cross-attention to learn contextual information for action generation, successfully overcoming the problem of semantic-physical misalignment. Extensive experiments show that CCoL achieves an average 8.0% relative improvement across three simulation suites, with up to 19.2% relative gain in human-demonstrated bimanual insertion tasks. Real-world tests on a 7-DoF robot further confirm CCoL's generalization under unseen and noisy object states. + oai:arXiv.org:2511.14396v4 + cs.RO + cs.AI + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xiuxiu Qi, Yu Yang, Jiannong Cao, Luyao Bai, Chongshan Fan, Chengtai Cao, Hongpeng Wang + + + Nonparametric estimation of conditional probability distributions using a generative approach based on conditional push-forward neural networks + https://arxiv.org/abs/2511.14455 + arXiv:2511.14455v3 Announce Type: replace +Abstract: We introduce conditional push-forward neural networks (CPFN), a generative framework for conditional distribution estimation. Instead of directly modeling the conditional density $f_{Y|X}$, CPFN learns a stochastic map $\varphi=\varphi(x,u)$ such that $\varphi(x,U)$ and $Y|X=x$ follow approximately the same law, with $U$ a suitable random vector of pre-defined latent variables. This enables efficient conditional sampling and straightforward estimation of conditional statistics through Monte Carlo methods. The model is trained via an objective function derived from a Kullback-Leibler formulation, without requiring invertibility or adversarial training. We establish a near-asymptotic consistency result and demonstrate experimentally that CPFN can achieve performance competitive with, or even superior to, state-of-the-art methods, including kernel estimators, tree-based algorithms, and popular deep learning techniques, all while remaining lightweight and easy to train. 
+ oai:arXiv.org:2511.14455v3 + cs.LG + stat.ME + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Nicola Rares Franco, Lorenzo Tedesco + + + Lifefin: Escaping Mempool Explosions in DAG-based BFT + https://arxiv.org/abs/2511.15936 + arXiv:2511.15936v2 Announce Type: replace +Abstract: Directed Acyclic Graph (DAG)-based Byzantine Fault-Tolerant (BFT) protocols have emerged as promising solutions for high-throughput blockchains. By decoupling data dissemination from transaction ordering and constructing a well-connected DAG in the mempool, these protocols enable zero-message ordering and implicit view changes. However, we identify a fundamental liveness vulnerability: an adversary can trigger mempool explosions to prevent transaction commitment, ultimately compromising the protocol's liveness. + In response, this work presents Lifefin, a generic and self-stabilizing protocol designed to integrate seamlessly with existing DAG-based BFT protocols and circumvent such vulnerabilities. Lifefin leverages the Agreement on Common Subset (ACS) mechanism, allowing nodes to escape mempool explosions by committing transactions with bounded resource usage even in adverse conditions. As a result, Lifefin imposes (almost) zero overhead in typical cases while effectively eliminating liveness vulnerabilities. + To demonstrate the effectiveness of Lifefin, we integrate it into two state-of-the-art DAG-based BFT protocols, Sailfish and Mysticeti, resulting in two enhanced variants: Sailfish-Lifefin and Mysticeti-Lifefin. We implement these variants and compare them with the original Sailfish and Mysticeti systems. Our evaluation demonstrates that Lifefin achieves comparable transaction throughput while introducing only minimal additional latency to resist similar attacks. 
+ oai:arXiv.org:2511.15936v2 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jianting Zhang, Sen Yang, Alberto Sonnino, Sebasti\'an Loza, Aniket Kate + + + Vision Language Models are Confused Tourists + https://arxiv.org/abs/2511.17004 + arXiv:2511.17004v2 Announce Type: replace +Abstract: Although the cultural dimension has been one of the key aspects in evaluating Vision-Language Models (VLMs), their ability to remain stable across diverse cultural inputs remains largely untested, despite being crucial to support diversity and multicultural societies. Existing evaluations often rely on benchmarks featuring only a singular cultural concept per image, overlooking scenarios where multiple, potentially unrelated cultural cues coexist. To address this gap, we introduce ConfusedTourist, a novel cultural adversarial robustness suite designed to assess VLMs' stability against perturbed geographical cues. Our experiments reveal a critical vulnerability, where accuracy drops heavily under simple image-stacking perturbations and even worsens with its image-generation-based variant. Interpretability analyses further show that these failures stem from systematic attention shifts toward distracting cues, diverting the model from its intended focus. These findings highlight a critical challenge: visual cultural concept mixing can substantially impair even state-of-the-art VLMs, underscoring the urgent need for more culturally robust multimodal understanding. 
+ oai:arXiv.org:2511.17004v2 + cs.CV + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-sa/4.0/ + Patrick Amadeus Irawan, Ikhlasul Akmal Hanif, Muhammad Dehan Al Kautsar, Genta Indra Winata, Fajri Koto, Alham Fikri Aji + + + A Robust GPU-Accelerated Kernel Compensation Solver with Novel Discretization for Photonic Crystals in Pseudochiral Media + https://arxiv.org/abs/2511.17107 + arXiv:2511.17107v2 Announce Type: replace +Abstract: This paper develops a robust solver for the Maxwell eigenproblem in 3D photonic crystals (PCs) with pseudochiral media. The solver employs the Kernel Compensation technique under the framework of Yee's scheme to eliminate null space and enable matrix-free, GPU-accelerated operations via 3D discrete Fourier transform (DFT). Furthermore, we propose a novel discretization for permittivity tensor containing off-diagonal entries and rigorously prove that the resulting matrix is Hermitian positive definite (HPD), which ensures the correctness of the kernel compensation technique. Numerical experiments on several benchmark examples are demonstrated to validate the robustness and accuracy of our scheme. + oai:arXiv.org:2511.17107v2 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Chenhao Jin, Hehu Xie + + + SCARE: A Benchmark for SQL Correction and Question Answerability Classification for Reliable EHR Question Answering + https://arxiv.org/abs/2511.17559 + arXiv:2511.17559v2 Announce Type: replace +Abstract: Recent advances in Large Language Models (LLMs) have enabled the development of text-to-SQL models that allow clinicians to query structured data stored in Electronic Health Records (EHRs) using natural language. 
However, deploying these models for EHR question answering (QA) systems in safety-critical clinical environments remains challenging: incorrect SQL queries-whether caused by model errors or problematic user inputs-can undermine clinical decision-making and jeopardize patient care. While prior work has mainly focused on improving SQL generation accuracy or filtering questions before execution, there is a lack of a unified benchmark for evaluating independent post-hoc verification mechanisms (i.e., a component that inspects and validates the generated SQL before execution), which is crucial for safe deployment. To fill this gap, we introduce SCARE, a benchmark for evaluating methods that function as a post-hoc safety layer in EHR QA systems. SCARE evaluates the joint task of (1) classifying question answerability (i.e., determining whether a question is answerable, ambiguous, or unanswerable) and (2) verifying or correcting candidate SQL queries. The benchmark comprises 4,200 triples of questions, candidate SQL queries, and expected model outputs, grounded in the MIMIC-III, MIMIC-IV, and eICU databases. It covers a diverse set of questions and corresponding candidate SQL queries generated by seven different text-to-SQL models, ensuring a realistic and challenging evaluation. Using SCARE, we benchmark a range of approaches-from two-stage methods to agentic frameworks. Our experiments reveal a critical trade-off between question classification and SQL error correction, highlighting key challenges and outlining directions for future research. 
+ oai:arXiv.org:2511.17559v2 + cs.CL + cs.DB + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Gyubok Lee, Woosog Chay, Edward Choi + + + GateRA: Token-Aware Modulation for Parameter-Efficient Fine-Tuning + https://arxiv.org/abs/2511.17582 + arXiv:2511.17582v3 Announce Type: replace +Abstract: Parameter-efficient fine-tuning (PEFT) methods, such as LoRA, DoRA, and HiRA, enable lightweight adaptation of large pre-trained models via low-rank updates. However, existing PEFT approaches apply static, input-agnostic updates to all tokens, disregarding the varying importance and difficulty of different inputs. This uniform treatment can lead to overfitting on trivial content or under-adaptation on more informative regions, especially in autoregressive settings with distinct prefill and decoding dynamics. In this paper, we propose GateRA, a unified framework that introduces token-aware modulation to dynamically adjust the strength of PEFT updates. By incorporating adaptive gating into standard PEFT branches, GateRA enables selective, token-level adaptation, preserving pre-trained knowledge for well-modeled inputs while focusing capacity on challenging cases. Empirical visualizations reveal phase-sensitive behaviors, where GateRA automatically suppresses updates for redundant prefill tokens while emphasizing adaptation during decoding. To promote confident and efficient modulation, we further introduce an entropy-based regularization that encourages near-binary gating decisions. This regularization prevents diffuse update patterns and leads to interpretable, sparse adaptation without hard thresholding. Finally, we present a theoretical analysis showing that GateRA induces a soft gradient-masking effect over the PEFT path, enabling continuous and differentiable control over adaptation. Experiments on multiple commonsense reasoning benchmarks demonstrate that GateRA consistently outperforms or matches prior PEFT methods. 
+ oai:arXiv.org:2511.17582v3 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + AAAI 2026 + Jie Ou, Shuaihong Jiang, Yingjun Du, Cees G. M. Snoek + + + Can LLMs Threaten Human Survival? Benchmarking Potential Existential Threats from LLMs via Prefix Completion + https://arxiv.org/abs/2511.19171 + arXiv:2511.19171v2 Announce Type: replace +Abstract: Research on the safety evaluation of large language models (LLMs) has become extensive, driven by jailbreak studies that elicit unsafe responses. Such response involves information already available to humans, such as the answer to "how to make a bomb". When LLMs are jailbroken, the practical threat they pose to humans is negligible. However, it remains unclear whether LLMs commonly produce unpredictable outputs that could pose substantive threats to human safety. To address this gap, we study whether LLM-generated content contains potential existential threats, defined as outputs that imply or promote direct harm to human survival. We propose \textsc{ExistBench}, a benchmark designed to evaluate such risks. Each sample in \textsc{ExistBench} is derived from scenarios where humans are positioned as adversaries to AI assistants. Unlike existing evaluations, we use prefix completion to bypass model safeguards. This leads the LLMs to generate suffixes that express hostility toward humans or actions with severe threat, such as the execution of a nuclear strike. Our experiments on 10 LLMs reveal that LLM-generated content indicates existential threats. To investigate the underlying causes, we also analyze the attention logits from LLMs. To highlight real-world safety risks, we further develop a framework to assess model behavior in tool-calling. We find that LLMs actively select and invoke external tools with existential threats. Code and data are available at: https://github.com/cuiyu-ai/ExistBench. 
+ oai:arXiv.org:2511.19171v2 + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yu Cui, Yifei Liu, Hang Fu, Sicheng Pan, Haibin Zhang, Cong Zuo, Licheng Wang + + + HVAdam: A Full-Dimension Adaptive Optimizer + https://arxiv.org/abs/2511.20277 + arXiv:2511.20277v2 Announce Type: replace +Abstract: Adaptive optimizers such as Adam have achieved great success in training large-scale models like large language models and diffusion models. However, they often generalize worse than non-adaptive methods, such as SGD on classical architectures like CNNs. We identify a key cause of this performance gap: adaptivity in pre-conditioners, which limits the optimizer's ability to adapt to diverse optimization landscapes. To address this, we propose Anon (Adaptivity Non-restricted Optimizer with Novel convergence technique), a novel optimizer with continuously tunable adaptivity + , allowing it to interpolate between SGD-like and Adam-like behaviors and even extrapolate beyond both. To ensure convergence across the entire adaptivity spectrum, we introduce incremental delay update (IDU), a novel mechanism that is more flexible than AMSGrad's hard max-tracking strategy and enhances robustness to gradient noise. We theoretically establish convergence guarantees under both convex and non-convex settings. Empirically, Anon consistently outperforms state-of-the-art optimizers on representative image classification, diffusion, and language modeling tasks. These results demonstrate that adaptivity can serve as a valuable tunable design principle, and Anon provides the first unified and reliable framework capable of bridging the gap between classical and modern optimizers and surpassing their advantageous properties. 
+ oai:arXiv.org:2511.20277v2 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + 10.1609/aaai.v39i21.34421 + In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pp. 22623-22631, 2025 + Yiheng Zhang, Shaowu Wu, Yuanzhuo Xu, Jiajun Wu, Shang Xu, Steve Drew, Xiaoguang Niu + + + BRIC: Bridging Kinematic Plans and Physical Control at Test Time + https://arxiv.org/abs/2511.20431 + arXiv:2511.20431v3 Announce Type: replace +Abstract: We propose BRIC, a novel test-time adaptation (TTA) framework that enables long-term human motion generation by resolving execution discrepancies between diffusion-based kinematic motion planners and reinforcement learning-based physics controllers. While diffusion models can generate diverse and expressive motions conditioned on text and scene context, they often produce physically implausible outputs, leading to execution drift during simulation. To address this, BRIC dynamically adapts the physics controller to noisy motion plans at test time, while preserving pre-trained skills via a loss function that mitigates catastrophic forgetting. In addition, BRIC introduces a lightweight test-time guidance mechanism that steers the diffusion model in the signal space without updating its parameters. By combining both adaptation strategies, BRIC ensures consistent and physically plausible long-term executions across diverse environments in an effective and efficient manner. We validate the effectiveness of BRIC on a variety of long-term tasks, including motion composition, obstacle avoidance, and human-scene interaction, achieving state-of-the-art performance across all tasks. 
+ oai:arXiv.org:2511.20431v3 + cs.CV + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Dohun Lim, Minji Kim, Jaewoon Lim, Sungchan Kim + + + MTTR-A: Measuring Cognitive Recovery Latency in Multi-Agent Systems + https://arxiv.org/abs/2511.20663 + arXiv:2511.20663v4 Announce Type: replace +Abstract: Reliability in multi-agent systems (MAS) built on large language models is increasingly limited by cognitive failures rather than infrastructure faults. Existing observability tools describe failures but do not quantify how quickly distributed reasoning recovers once coherence is lost. We introduce MTTR-A (Mean Time-to-Recovery for Agentic Systems), a runtime reliability metric that measures cognitive recovery latency in MAS. MTTR-A adapts classical dependability theory to agentic orchestration, capturing the time required to detect reasoning drift and restore coherent operation. We further define complementary metrics, including MTBF and a normalized recovery ratio (NRR), and establish theoretical bounds linking recovery latency to long-run cognitive uptime. Using a LangGraph-based benchmark with simulated drift and reflex recovery, we empirically demonstrate measurable recovery behavior across multiple reflex strategies. This work establishes a quantitative foundation for runtime cognitive dependability in distributed agentic systems. + oai:arXiv.org:2511.20663v4 + cs.MA + cs.AI + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Barak Or + + + Prompt-Aware Adaptive Elastic Weight Consolidation for Continual Learning in Medical Vision-Language Models + https://arxiv.org/abs/2511.20732 + arXiv:2511.20732v2 Announce Type: replace +Abstract: Medical AI systems face catastrophic forgetting when deployed in clinical settings, where models must learn new imaging protocols while retaining prior diagnostic capabilities. 
This challenge is particularly acute for medical vision-language models that must preserve complex cross-modal alignments between medical images and clinical terminology across diverse imaging modalities. We introduce Prompt- Aware Adaptive Elastic Weight Consolidation (PA-EWC), a novel continual learning approach that addresses catastrophic forgetting through prompt-guided parameter specialization. Our method systematically categorizes model parameters based on their functional roles in processing visual-descriptive, spatial-guided, and medical-semantic information, enabling targeted protection of critical knowledge while allowing adaptation to new clinical requirements. PA-EWC incorporates adaptive Fisher Information computation with gradient stability analysis and develops weighted complexity metrics based on medical terminology density. We evaluate our approach across five medical imaging datasets (Kvasir-SEG, ISIC 2018, CheXlocalize, BUSI, CAMUS) representing diverse modalities including endoscopy, dermoscopy, radiography, and ultrasound. Experimental results demonstrate that PA-EWC reduces catastrophic forgetting by up to 17.58% compared to baseline methods, with performance improvements of 4.30% on chest X-ray pathology localization and 6.06% on polyp segmentation. + oai:arXiv.org:2511.20732v2 + cs.MM + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ziyuan Gao, Philippe Morel + + + Semantic Superiority vs. Forensic Efficiency: A Comparative Analysis of Deep Learning and Psycholinguistics for Business Email Compromise Detection + https://arxiv.org/abs/2511.20944 + arXiv:2511.20944v3 Announce Type: replace +Abstract: Business Email Compromise (BEC) is a sophisticated social engineering threat that manipulates organizational hierarchies, leading to significant financial damage. 
According to the 2024 FBI Internet Crime Report, BEC accounts for over $2.9 billion in annual losses, presenting a massive economic asymmetry: the financial cost of a False Negative (fraud loss) exceeds the operational cost of a False Positive (manual review) by a ratio of approximately 5,480:1. This paper contrasts two detection paradigms: a Forensic Psycholinguistic Stream (CatBoost), which analyzes linguistic cues like urgency and authority with high interpretability, and a Semantic Stream (DistilBERT), which utilizes deep learning for contextual understanding. We evaluated both streams on a hybrid dataset (N=7,990) containing human-legitimate and AI-synthesized adversarial fraud. Benchmarked on Tesla T4 infrastructure, DistilBERT achieved near-perfect detection on synthetic threats (AUC >0.99, F1 =0.998) with acceptable real-time latency (7.4 ms). CatBoost achieved competitive detection (AUC =0.991, F1 =0.949) at 8.4x lower latency (0.8 ms) with negligible resource consumption. We conclude that while DistilBERT offers maximum accuracy for GPU-equipped organizations, CatBoost provides a viable, cost-effective alternative for edge deployments. Both approaches demonstrate a theoretical ROI exceeding 99.9% when optimized via cost-sensitive learning. 
+ oai:arXiv.org:2511.20944v3 + cs.LG + cs.CR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yaw Osei Adjei (Kwame Nkrumah University of Science,Technology), Frederick Ayivor (Independent Researcher, Fishers, Indiana, USA), Davis Opoku (Kwame Nkrumah University of Science,Technology) + + + Affective Multimodal Agents with Proactive Knowledge Grounding for Emotionally Aligned Marketing Dialogue + https://arxiv.org/abs/2511.21728 + arXiv:2511.21728v2 Announce Type: replace +Abstract: Recent advances in large language models (LLMs) have enabled fluent dialogue systems, but most remain reactive and struggle in emotionally rich, goal-oriented settings such as marketing conversations. To address this limitation, we propose AffectMind, a multimodal affective dialogue agent that performs proactive reasoning and dynamic knowledge grounding to sustain emotionally aligned and persuasive interactions. AffectMind combines three components: a Proactive Knowledge Grounding Network (PKGN) that continuously updates factual and affective context from text, vision, and prosody; an Emotion--Intent Alignment Model (EIAM) that jointly models user emotion and purchase intent to adapt persuasion strategies; and a Reinforced Discourse Loop (RDL) that optimizes emotional coherence and engagement via reinforcement signals from user responses. Experiments on two newly curated marketing dialogue datasets, MM-ConvMarket and AffectPromo, show that AffectMind outperforms strong LLM-based baselines in emotional consistency (+26\%), persuasive success rate (+19\%), and long-term user engagement (+23\%), highlighting emotion-grounded proactivity as a key capability for commercial multimodal agents. 
+ oai:arXiv.org:2511.21728v2 + cs.CL + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Lin Yu, Xiaofei Han, Yifei Kang, Chiung-Yi Tseng, Danyang Zhang, Ziqian Bi, Zhimo Han + + + The Reduced Basis Multigrid scheme for the Virtual Element Method + https://arxiv.org/abs/2511.22219 + arXiv:2511.22219v2 Announce Type: replace +Abstract: We present a non-nested W-cycle multigrid scheme for the lowest order Virtual Element Method on polygonal meshes. To avoid the implicit definition of the Virtual Element space, which poses several issues in the computation of intergrid operators that underpin multigrid methods, the proposed scheme uses a fully-conforming auxiliary space constructed by cheaply computing the virtual basis functions via the reduced basis method. + oai:arXiv.org:2511.22219v2 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Paola F. Antonietti, Silvia Bertoluzza, Fabio Credali + + + Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory + https://arxiv.org/abs/2511.23083 + arXiv:2511.23083v5 Announce Type: replace +Abstract: High-capacity kernel Hopfield networks exhibit a \textit{Ridge of Optimization} characterized by extreme stability. While previously linked to \textit{Spectral Concentration}, its origin remains elusive. Here, we analyze the network dynamics on a statistical manifold, revealing that the Ridge corresponds to the Edge of Stability, a critical boundary where the Fisher Information Matrix becomes singular. We demonstrate that the apparent Euclidean force antagonism is a manifestation of \textit{Dual Equilibrium} in the Riemannian space. This unifies learning dynamics and capacity via the Minimum Description Length principle, offering a geometric theory of self-organized criticality. 
+ oai:arXiv.org:2511.23083v5 + cs.LG + cs.NE + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Akira Tamamori + + + Multi-modal On-Device Learning for Monocular Depth Estimation on Ultra-low-power MCUs + https://arxiv.org/abs/2512.00086 + arXiv:2512.00086v2 Announce Type: replace +Abstract: Monocular depth estimation (MDE) plays a crucial role in enabling spatially-aware applications in Ultra-low-power (ULP) Internet-of-Things (IoT) platforms. However, the limited number of parameters of Deep Neural Networks for the MDE task, designed for IoT nodes, results in severe accuracy drops when the sensor data observed in the field shifts significantly from the training dataset. To address this domain shift problem, we present a multi-modal On-Device Learning (ODL) technique, deployed on an IoT device integrating a Greenwaves GAP9 MicroController Unit (MCU), a 80 mW monocular camera and a 8 x 8 pixel depth sensor, consuming $\approx$300mW. In its normal operation, this setup feeds a tiny 107 k-parameter $\mu$PyD-Net model with monocular images for inference. The depth sensor, usually deactivated to minimize energy consumption, is only activated alongside the camera to collect pseudo-labels when the system is placed in a new environment. Then, the fine-tuning task is performed entirely on the MCU, using the new data. To optimize our backpropagation-based on-device training, we introduce a novel memory-driven sparse update scheme, which minimizes the fine-tuning memory to 1.2 MB, 2.2x less than a full update, while preserving accuracy (i.e., only 2% and 1.5% drops on the KITTI and NYUv2 datasets). Our in-field tests demonstrate, for the first time, that ODL for MDE can be performed in 17.8 minutes on the IoT node, reducing the root mean squared error from 4.9 to 0.6m with only 3 k self-labeled samples, collected in a real-life deployment scenario. 
+ oai:arXiv.org:2512.00086v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1109/JIOT.2025.3636826 + Davide Nadalini, Manuele Rusci, Elia Cereda, Luca Benini, Francesco Conti, Daniele Palossi + + + ORBGRAND Is Exactly Capacity-achieving via Rank Companding + https://arxiv.org/abs/2512.00347 + arXiv:2512.00347v3 Announce Type: replace +Abstract: Among guessing random additive noise decoding (GRAND) algorithms, ordered reliability bits GRAND (ORBGRAND) has attracted considerable attention due to its efficient use of soft information and suitability for hardware implementation. It has also been shown that ORBGRAND achieves a rate very close to the capacity of an additive white Gaussian noise channel under antipodal signaling. In this work, it is further established that, for general binary-input memoryless channels under symmetric input distribution, via suitably companding the ranks in ORBGRAND according to the inverse cumulative distribution function (CDF) of channel reliability, the resulting CDF-ORBGRAND algorithm exactly achieves the mutual information, i.e., the symmetric capacity. This result is then applied to bit-interleaved coded modulation (BICM) systems to handle high-order input constellations. Via considering the effects of mismatched decoding due to both BICM and ORBGRAND, it is shown that CDF-ORBGRAND is capable of achieving the BICM capacity, which was initially derived in the literature by treating BICM as a set of independent parallel channels. 
+ oai:arXiv.org:2512.00347v3 + cs.IT + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhuang Li, Wenyi Zhang + + + Structured Spectral Reasoning for Frequency-Adaptive Multimodal Recommendation + https://arxiv.org/abs/2512.01372 + arXiv:2512.01372v2 Announce Type: replace +Abstract: Multimodal recommendation aims to integrate collaborative signals with heterogeneous content such as visual and textual information, but remains challenged by modality-specific noise, semantic inconsistency, and unstable propagation over user-item graphs. These issues are often exacerbated by naive fusion or shallow modeling strategies, leading to degraded generalization and poor robustness. While recent work has explored the frequency domain as a lens to separate stable from noisy signals, most methods rely on static filtering or reweighting, lacking the ability to reason over spectral structure or adapt to modality-specific reliability. To address these challenges, we propose a Structured Spectral Reasoning (SSR) framework for frequency-aware multimodal recommendation. Our method follows a four-stage pipeline: (i) Decompose graph-based multimodal signals into spectral bands via graph-guided transformations to isolate semantic granularity; (ii) Modulate band-level reliability with spectral band masking, a training-time masking with a prediction-consistency objective that suppresses brittle frequency components; (iii) Fuse complementary frequency cues using hyperspectral reasoning with low-rank cross-band interaction; and (iv) Align modality-specific spectral features via contrastive regularization to promote semantic and structural consistency. Experiments on three real-world benchmarks show consistent gains over strong baselines, particularly under sparse and cold-start settings. 
Additional analyses indicate that structured spectral modeling improves robustness and provides clearer diagnostics of how different bands contribute to performance. + oai:arXiv.org:2512.01372v2 + cs.IR + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Wei Yang, Rui Zhong, Yiqun Chen, Chi Lu, Peng Jiang + + + Zero-Overhead Introspection for Adaptive Test-Time Compute + https://arxiv.org/abs/2512.01457 + arXiv:2512.01457v3 Announce Type: replace +Abstract: Large language models excel at reasoning but lack key aspects of introspection, including anticipating their own success and the computation required to achieve it. Humans use real-time introspection to decide how much effort to invest, when to make multiple attempts, when to stop, and when to signal success or failure. Without this, LLMs struggle to make intelligent meta-cognition decisions. Test-time scaling methods like Best-of-N drive up cost and latency by using a fixed budget of samples regardless of the marginal benefit of each one at any point in generation, and the absence of confidence signals can mislead people, prevent appropriate escalation to better tools, and undermine trustworthiness. Learned verifiers or reward models can provide confidence estimates, but do not enable adaptive inference and add substantial cost by requiring extra models or forward passes. We present ZIP-RC, an adaptive inference method that equips models with zero-overhead inference-time predictions of reward and cost. At every token, ZIP-RC reuses reserved or unused logits in the same forward pass as next-token prediction to output a joint distribution over final reward and remaining length -- no extra models, architecture change, or inference overhead. This full joint distribution is used to compute a sampling utility which is the linear combination of the expected maximum reward, total compute, and latency of set of samples if generated to completion. 
During inference, we maximize this utility with meta-actions that determine which prefix of tokens to continue or initiate sampling from. On mixed-difficulty mathematical benchmarks, ZIP-RC improves accuracy by up to 12% over majority voting at equal or lower average cost, and traces smooth Pareto frontiers between quality, compute, and latency. By providing real-time reward-cost introspection, ZIP-RC enables adaptive, efficient reasoning. + oai:arXiv.org:2512.01457v3 + cs.LG + cs.AI + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Rohin Manvi, Joey Hong, Tim Seyde, Maxime Labonne, Mathias Lechner, Sergey Levine + + + SSR: Semantic and Spatial Rectification for CLIP-based Weakly Supervised Segmentation + https://arxiv.org/abs/2512.01701 + arXiv:2512.01701v2 Announce Type: replace +Abstract: In recent years, Contrastive Language-Image Pretraining (CLIP) has been widely applied to Weakly Supervised Semantic Segmentation (WSSS) tasks due to its powerful cross-modal semantic understanding capabilities. This paper proposes a novel Semantic and Spatial Rectification (SSR) method to address the limitations of existing CLIP-based weakly supervised semantic segmentation approaches: over-activation in non-target foreground regions and background areas. Specifically, at the semantic level, the Cross-Modal Prototype Alignment (CMPA) establishes a contrastive learning mechanism to enforce feature space alignment across modalities, reducing inter-class overlap while enhancing semantic correlations, to rectify over-activation in non-target foreground regions effectively; at the spatial level, the Superpixel-Guided Correction (SGC) leverages superpixel-based spatial priors to precisely filter out interference from non-target regions during affinity propagation, significantly rectifying background over-activation. 
Extensive experiments on the PASCAL VOC and MS COCO datasets demonstrate that our method outperforms all single-stage approaches, as well as more complex multi-stage approaches, achieving mIoU scores of 79.5% and 50.6%, respectively. + oai:arXiv.org:2512.01701v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xiuli Bi, Die Xiao, Junchao Fan, Bin Xiao + + + JFR: An Efficient Jump Frontier Relaxation Strategy for Bellman-Ford + https://arxiv.org/abs/2512.01802 + arXiv:2512.01802v4 Announce Type: replace +Abstract: We propose JFR, a Bellman-Ford-based optimization framework leveraging frontier contraction and abstract multi-hop jump propagation to accelerate shortest-path computation while strictly preserving correctness. JFR achieves substantial reductions in relaxation operations, ranging from -31 to 99 percent, across sparse, dense, and negative-edge graphs, ensuring robust performance even under adversarial or highly connected topologies. On ultra-large graphs with up to N=10,000 nodes and 55,000,000 edges, JFR maintains strong operational reductions and comparable or improved runtime relative to SPFA-SLF, demonstrating consistent robustness across graph size and density. Lower relaxation counts imply reduced memory-access overheads and computational effort; this normalized work reduction highlights JFR's suitability for scenarios requiring high throughput or energy-conscious operation. Future work focuses on integrating high-performance queue structures, adaptive frontier strategies, and cache-aware techniques to further reduce constant-factor overheads and fully realize JFR's practical runtime potential. 
+ oai:arXiv.org:2512.01802v4 + cs.DS + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Xin Wang, Xi Chen + + + TrackNetV5: Residual-Driven Spatio-Temporal Refinement and Motion Direction Decoupling for Fast Object Tracking + https://arxiv.org/abs/2512.02789 + arXiv:2512.02789v2 Announce Type: replace +Abstract: The TrackNet series has established a strong baseline for fast-moving small object tracking in sports. However, existing iterations face significant limitations: V1-V3 struggle with occlusions due to a reliance on purely visual cues, while TrackNetV4, despite introducing motion inputs, suffers from directional ambiguity as its absolute difference method discards motion polarity. To overcome these bottlenecks, we propose TrackNetV5, a robust architecture integrating two novel mechanisms. First, to recover lost directional priors, we introduce the Motion Direction Decoupling (MDD) module. Unlike V4, MDD decomposes temporal dynamics into signed polarity fields, explicitly encoding both movement occurrence and trajectory direction. Second, we propose the Residual-Driven Spatio-Temporal Refinement (R-STR) head. Operating on a coarse-to-fine paradigm, this Transformer-based module leverages factorized spatio-temporal contexts to estimate a corrective residual, effectively recovering occluded targets. Extensive experiments on the TrackNetV2 dataset demonstrate that TrackNetV5 achieves a new state-of-the-art F1-score of 0.9859 and an accuracy of 0.9733, significantly outperforming previous versions. Notably, this performance leap is achieved with a marginal 3.7% increase in FLOPs compared to V4, maintaining real-time inference capabilities while delivering superior tracking precision. 
+ oai:arXiv.org:2512.02789v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Tang Haonan, Chen Yanjun, Jiang Lezhi + + + ASPEN: An Adaptive Spectral Physics-Enabled Network for Ginzburg-Landau Dynamics + https://arxiv.org/abs/2512.03290 + arXiv:2512.03290v3 Announce Type: replace +Abstract: Physics-Informed Neural Networks (PINNs) have emerged as a powerful, mesh-free paradigm for solving partial differential equations (PDEs). However, they notoriously struggle with stiff, multi-scale, and nonlinear systems due to the inherent spectral bias of standard multilayer perceptron (MLP) architectures, which prevents them from adequately representing high-frequency components. In this work, we introduce the Adaptive Spectral Physics-Enabled Network (ASPEN), a novel architecture designed to overcome this critical limitation. ASPEN integrates an adaptive spectral layer with learnable Fourier features directly into the network's input stage. This mechanism allows the model to dynamically tune its own spectral basis during training, enabling it to efficiently learn and represent the precise frequency content required by the solution. We demonstrate the efficacy of ASPEN by applying it to the complex Ginzburg-Landau equation (CGLE), a canonical and challenging benchmark for nonlinear, stiff spatio-temporal dynamics. Our results show that a standard PINN architecture catastrophically fails on this problem, diverging into non-physical oscillations. In contrast, ASPEN successfully solves the CGLE with exceptional accuracy. The predicted solution is visually indistinguishable from the high-resolution ground truth, achieving a low median physics residual of 5.10 x 10^-3. Furthermore, we validate that ASPEN's solution is not only pointwise accurate but also physically consistent, correctly capturing emergent physical properties, including the rapid free energy relaxation and the long-term stability of the domain wall front. 
This work demonstrates that by incorporating an adaptive spectral basis, our framework provides a robust and physically-consistent solver for complex dynamical systems where standard PINNs fail, opening new options for machine learning in challenging physical domains. + oai:arXiv.org:2512.03290v3 + cs.LG + physics.app-ph + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Julian Evan Chrisnanto, Nurfauzi Fadillah, Yulison Herry Chrisnanto + + + When, How Long and How Much? Interpretable Neural Networks for Time Series Regression by Learning to Mask and Aggregate + https://arxiv.org/abs/2512.03578 + arXiv:2512.03578v2 Announce Type: replace +Abstract: Time series extrinsic regression (TSER) refers to the task of predicting a continuous target variable from an input time series. It appears in many domains, including healthcare, finance, environmental monitoring, and engineering. In these settings, accurate predictions and trustworthy reasoning are both essential. Although state-of-the-art TSER models achieve strong predictive performance, they typically operate as black boxes, making it difficult to understand which temporal patterns drive their decisions. Post-hoc interpretability techniques, such as feature attribution, aim to explain how the model arrives at its predictions, but often produce coarse, noisy, or unstable explanations. Recently, inherently interpretable approaches based on concepts, additive decompositions, or symbolic regression, have emerged as promising alternatives. However, these approaches remain limited: they require explicit supervision on the concepts themselves, often cannot capture interactions between time-series features, lack expressiveness for complex temporal patterns, and struggle to scale to high-dimensional multivariate data. + To address these limitations, we propose MAGNETS (Mask-and-AGgregate NEtwork for Time Series), an inherently interpretable neural architecture for TSER. 
MAGNETS learns a compact set of human-understandable concepts without requiring any annotations. Each concept corresponds to a learned, mask-based aggregation over selected input features, explicitly revealing both which features drive predictions and when they matter in the sequence. Predictions are formed as combinations of these learned concepts through a transparent, additive structure, enabling clear insight into the model's decision process. + The code implementation and datasets are publicly available at https://github.com/FlorentF9/MAGNETS. + oai:arXiv.org:2512.03578v2 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Florent Forest, Amaury Wei, Olga Fink + + + Configuration-Constrained Tube MPC for Periodic Operation + https://arxiv.org/abs/2512.04239 + arXiv:2512.04239v2 Announce Type: replace +Abstract: Periodic operation often emerges as the economically optimal mode in industrial processes, particularly under varying economic or environmental conditions. This paper proposes a robust model predictive control (MPC) framework for uncertain systems modeled as polytopic linear differential inclusions (LDIs), where the dynamics evolve as convex combinations of finitely many affine control systems with additive disturbances. The robust control problem is reformulated as a convex optimization program by optimizing over configuration-constrained polytopic tubes and tracks a periodic trajectory that is optimal for a given economic criterion. Artificial variables embedded in the formulation ensure recursive feasibility and robust constraint satisfaction when the economic criterion is updated online, while guaranteeing convergence to the corresponding optimal periodic tube when the criterion remains constant. 
To improve computational efficiency, we introduce a quadratic over-approximation of the periodic cost under a Lipschitz continuity assumption, yielding a Quadratic Program (QP) formulation that preserves the above theoretical guarantees. The effectiveness and scalability of the approach are demonstrated on a benchmark example and a ball-plate system with eight states. + oai:arXiv.org:2512.04239v2 + eess.SY + cs.SY + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Filippo Badalamenti, Jose A. Borja-Conde, Sampath Kumar Mulagaleti, Boris Houska, Alberto Bemporad, Mario Eduardo Villanueva + + + Neural Decoding of Overt Speech from ECoG Using Vision Transformers and Contrastive Representation Learning + https://arxiv.org/abs/2512.04618 + arXiv:2512.04618v2 Announce Type: replace +Abstract: Speech Brain Computer Interfaces (BCIs) offer promising solutions to people with severe paralysis unable to communicate. A number of recent studies have demonstrated convincing reconstruction of intelligible speech from surface electrocorticographic (ECoG) or intracortical recordings by predicting a series of phonemes or words and using downstream language models to obtain meaningful sentences. A current challenge is to reconstruct speech in a streaming mode by directly regressing cortical signals into acoustic speech. While this has been achieved recently using intracortical data, further work is needed to obtain comparable results with surface ECoG recordings. In particular, optimizing neural decoders becomes critical in this case. Here we present an offline speech decoding pipeline based on an encoder-decoder deep neural architecture, integrating Vision Transformers and contrastive learning to enhance the direct regression of speech from ECoG signals. 
The approach is evaluated on two datasets, one obtained with clinical subdural electrodes in an epileptic patient, and another obtained with the fully implantable WIMAGINE epidural system in a participant of a motor BCI trial. To our knowledge this presents a first attempt to decode speech from a fully implantable and wireless epidural recording system offering perspectives for long-term use. + oai:arXiv.org:2512.04618v2 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Mohamed Baha Ben Ticha, Xingchen Ran, Guillaume Saldanha, Ga\"el Le Godais, Phil\'emon Roussel, Marc Aubert, Amina Fontanell, Thomas Costecalde, Lucas Struber, Serpil Karakas, Shaomin Zhang, Philippe Kahane, Guillaume Charvet, St\'ephan Chabard\`es, Blaise Yvert + + + Learning from Self Critique and Refinement for Faithful LLM Summarization + https://arxiv.org/abs/2512.05387 + arXiv:2512.05387v2 Announce Type: replace +Abstract: Large Language Models (LLMs) often suffer from hallucinations: output content that is not grounded in the input context, when performing long-form text generation tasks such as summarization. Prior works have shown that hallucinations can be reduced by iteratively critiquing and refining previously generated outputs using either the same model or a more powerful teacher model as the critique. However, these approaches either require additional test-time compute or assume access to more powerful teacher models, making them costly and less practical. In this work, we propose Self Critique and Refinement-based Preference Optimization (SCRPO), which is a self-supervised training framework that first constructs a preference dataset by leveraging the LLM's own critique and refinement capabilities, and then applies preference learning to improve the same LLM for faithful summarization. 
Experiments on three summarization benchmarks (XSUM CNNDM and SAMSum), demonstrate that our approach outperforms state-of-the-art self-supervised learning methods in terms of faithfulness metrics while either maintaining or improving other metrics that measure the overall quality of the summary. Moreover, compared to test-time refinement, our approach not only improves efficiency but also results in more faithful summaries. + oai:arXiv.org:2512.05387v2 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Ting-Yao Hu, Hema Swetha Koppula, Hadi Pouransari, Cem Koc, Oncel Tuzel, Raviteja Vemulapalli + + + Interpretation of a Discrete de Rham method as a Finite Element System + https://arxiv.org/abs/2512.05912 + arXiv:2512.05912v2 Announce Type: replace +Abstract: We show that the DDR method can be interpreted as defining a computable consistent discrete $\mathrm{L}^2$ product on a conforming FES defined by PDEs. Without modifying the numerical method itself, this point of view provides an alternative approach to the analysis. The conformity and consistency properties we obtain are stronger than those previously shown, even in low dimensions. We can also recover some of the other results that have been proved about DDR, from those that have already been proved, in principle, in the general context of FES. We also bring VEM, the Virtual Element Method, into the discussion. + oai:arXiv.org:2512.05912v2 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Snorre H. 
Christiansen, Francesca Rapetti + + + Look Twice before You Leap: A Rational Agent Framework for Localized Adversarial Anonymization + https://arxiv.org/abs/2512.06713 + arXiv:2512.06713v2 Announce Type: replace +Abstract: Current LLM-based text anonymization frameworks usually rely on remote API services from powerful LLMs, which creates an inherent privacy paradox: users must disclose data to untrusted third parties for guaranteed privacy preservation. Moreover, directly migrating current solutions to local small-scale models (LSMs) offers a suboptimal solution with severe utility collapse. Our work argues that this failure stems not merely from the capability deficits of LSMs, but significantly from the inherent irrationality of the greedy adversarial strategies employed by current state-of-the-art (SOTA) methods. To address this, we propose Rational Localized Adversarial Anonymization (RLAA), a fully localized and training-free framework featuring an Attacker-Arbitrator-Anonymizer architecture. We model the anonymization process as a trade-off between Marginal Privacy Gain (MPG) and Marginal Utility Cost (MUC), and demonstrate that greedy strategies tend to drift into an irrational state. Instead, RLAA introduces an arbitrator that acts as a rationality gatekeeper, validating the attacker's inference to filter out feedback providing negligible privacy benefits. This mechanism promotes a rational early-stopping criterion, and structurally prevents utility collapse. Extensive experiments on different benchmarks demonstrate that RLAA achieves a superior privacy-utility trade-off compared to strong baselines. 
+ oai:arXiv.org:2512.06713v2 + cs.CR + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Donghang Duan, Xu Zheng, Yuefeng He, Chong Mu, Leyi Cai, Lizong Zhang + + + Arc Gradient Descent: A Mathematically Derived Reformulation of Gradient Descent with Phase-Aware, User-Controlled Step Dynamics + https://arxiv.org/abs/2512.06737 + arXiv:2512.06737v2 Announce Type: replace +Abstract: The paper presents the formulation, implementation, and evaluation of the ArcGD optimiser. The evaluation is conducted initially on a non-convex benchmark function and subsequently on a real-world ML dataset. The initial comparative study using the Adam optimiser is conducted on a stochastic variant of the highly non-convex and notoriously challenging Rosenbrock function, renowned for its narrow, curved valley, across dimensions ranging from 2D to 1000D and an extreme case of 50,000D. Two configurations were evaluated to eliminate learning-rate bias: (i) both using ArcGD's effective learning rate and (ii) both using Adam's default learning rate. ArcGD consistently outperformed Adam under the first setting and, although slower under the second, achieved superior final solutions in most cases. In the second evaluation, ArcGD is evaluated against state-of-the-art optimizers (Adam, AdamW, Lion, SGD) on the CIFAR-10 image classification dataset across 8 diverse MLP architectures ranging from 1 to 5 hidden layers. ArcGD achieved the highest average test accuracy (50.7%) at 20,000 iterations, outperforming AdamW (46.6%), Adam (46.8%), SGD (49.6%), and Lion (43.4%), winning or tying on 6 of 8 architectures. Notably, Adam and AdamW showed strong early convergence at 5,000 iterations but regressed with extended training, whereas ArcGD continued improving, demonstrating generalization and resistance to overfitting without requiring early-stopping tuning. 
Strong performance on geometric stress tests and standard deep-learning benchmarks indicates broad applicability, highlighting the need for further exploration. Moreover, it is also shown that a limiting variant of ArcGD can be interpreted as a sign-based momentum-like update, highlighting conceptual connections between the inherent mechanisms of ArcGD and the Lion optimiser. + oai:arXiv.org:2512.06737v2 + cs.LG + cs.AI + cs.CL + cs.CV + cs.NE + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Nikhil Verma, Joonas Linnosmaa, Leonardo Espinosa-Leal, Napat Vajragupta + + + VDOT: Efficient Unified Video Creation via Optimal Transport Distillation + https://arxiv.org/abs/2512.06802 + arXiv:2512.06802v3 Announce Type: replace +Abstract: The rapid development of generative models has significantly advanced image and video applications. Among these, video creation, aimed at generating videos under various conditions, has gained substantial attention. However, existing video creation models either focus solely on a few specific conditions or suffer from excessively long generation times due to complex model inference, making them impractical for real-world applications. To mitigate these issues, we propose an efficient unified video creation model, named VDOT. Concretely, we model the training process with the distribution matching distillation (DMD) paradigm. Instead of using the Kullback-Leibler (KL) minimization, we additionally employ a novel computational optimal transport (OT) technique to optimize the discrepancy between the real and fake score distributions. The OT distance inherently imposes geometric constraints, mitigating potential zero-forcing or gradient collapse issues that may arise during KL-based distillation within the few-step generation scenario, and thus, enhances the efficiency and stability of the distillation process. 
Further, we integrate a discriminator to enable the model to perceive real video data, thereby enhancing the quality of generated videos. To support training unified video creation models, we propose a fully automated pipeline for video data annotation and filtering that accommodates multiple video creation tasks. Meanwhile, we curate a unified testing benchmark, UVCBench, to standardize evaluation. Experiments demonstrate that our 4-step VDOT outperforms or matches other baselines with 100 denoising steps. + oai:arXiv.org:2512.06802v3 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yutong Wang, Haiyu Zhang, Tianfan Xue, Yu Qiao, Yaohui Wang, Chang Xu, Xinyuan Chen + + + Task adaptation of Vision-Language-Action model: 1st Place Solution for the 2025 BEHAVIOR Challenge + https://arxiv.org/abs/2512.06951 + arXiv:2512.06951v2 Announce Type: replace +Abstract: We present a vision-action policy that won 1st place in the 2025 BEHAVIOR Challenge - a large-scale benchmark featuring 50 diverse long-horizon household tasks in photo-realistic simulation, requiring bimanual manipulation, navigation, and context-aware decision making. Building on the Pi0.5 architecture, we introduce several innovations. Our primary contribution is correlated noise for flow matching, which improves training efficiency and enables correlation-aware inpainting for smooth action sequences. We also apply learnable mixed-layer attention and System 2 stage tracking for ambiguity resolution. Training employs multi-sample flow matching to reduce variance, while inference uses action compression and challenge-specific correction rules. Our approach achieves 26% q-score across all 50 tasks on both public and private leaderboards. 
+ oai:arXiv.org:2512.06951v2 + cs.RO + cs.AI + cs.CV + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Ilia Larchenko, Gleb Zarin, Akash Karnatak + + + Accurate Models of NVIDIA Tensor Cores + https://arxiv.org/abs/2512.07004 + arXiv:2512.07004v2 Announce Type: replace +Abstract: Matrix multiplication is a fundamental operation for both the training of neural networks and inference. To accelerate matrix multiplication, Graphics Processing Units (GPUs) implement it in hardware. Due to the increased throughput over software-based matrix multiplication, the multipliers are increasingly used outside of AI, to accelerate various applications in scientific computing. However, matrix multipliers targeted at AI are at present not compliant with IEEE 754 floating-point arithmetic behaviour, with different vendors offering different numerical features. This leads to non-reproducible results across different generations of GPU architectures, at the matrix multiply-accumulate instruction level. To study numerical characteristics of matrix multipliers (such as rounding behaviour, accumulator width, normalization points, extra carry bits, and others), test vectors are typically constructed. Yet, these vectors may or may not distinguish between different hardware models, and due to limited hardware availability, their reliability across many different platforms remains largely untested. We present software models for emulating the inner product behavior of low- and mixed-precision matrix multipliers in the V100, A100, H100 and B200 data center GPUs in most supported input formats of interest to mixed-precision algorithm developers: 8-, 16-, and 19-bit floating point.
Khattak, Mantas Mikaitis + + + Always Keep Your Promises: DynamicLRP, A Model-Agnostic Solution To Layer-Wise Relevance Propagation + https://arxiv.org/abs/2512.07010 + arXiv:2512.07010v2 Announce Type: replace +Abstract: Layer-wise Relevance Propagation (LRP) provides principled attribution for neural networks through conservation properties and foundations in Deep Taylor Decomposition. However, existing implementations operate at the module level, requiring architecture-specific propagation rules and model modifications. These limit the generality of the target model and the sustainability of implementations as architectures evolve. We introduce DynamicLRP, a model-agnostic LRP framework operating at the tensor operation level. By decomposing attribution into individual operations within computation graphs and introducing a novel mechanism for deferred activation resolution, named the Promise System, our approach achieves true architecture agnosticity while maintaining LRP's theoretical guarantees. This design operates independently of backpropagation machinery, requiring no model modification, enabling side-by-side execution with gradient backpropagation. Being based on computation graphs, this method is theoretically extensible to other deep learning libraries that support auto-differentiation. We demonstrate faithfulness matching or exceeding specialized implementations (1.77 vs 1.69 ABPC on VGG, equivalent performance on ViT, 93.70% and 95.06% top-1 attribution accuracy for explaining RoBERTa-large and Flan-T5-large answers on SQuADv2, respectively) while maintaining practical efficiency on models with 100M-1B parameters. We achieved 99.92% node coverage across 31,465 computation graph nodes from 15 diverse architectures, including state-space models (Mamba), audio transformers (Whisper), and multimodal systems (DePlot), without any model-specific code, with rules implemented for 47 fundamental operations. 
Our operation-level decomposition and Promise System establish a sustainable, extensible foundation for LRP across evolving architectures. All code is available at https://github.com/keeinlev/dynamicLRP . + oai:arXiv.org:2512.07010v2 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Kevin Lee, Pablo Millan Arias + + + Geo3DVQA: Evaluating Vision-Language Models for 3D Geospatial Reasoning from Aerial Imagery + https://arxiv.org/abs/2512.07276 + arXiv:2512.07276v2 Announce Type: replace +Abstract: Three-dimensional geospatial analysis is critical for applications in urban planning, climate adaptation, and environmental assessment. However, current methodologies depend on costly, specialized sensors, such as LiDAR and multispectral sensors, which restrict global accessibility. Additionally, existing sensor-based and rule-driven methods struggle with tasks requiring the integration of multiple 3D cues, handling diverse queries, and providing interpretable reasoning. We present Geo3DVQA, a comprehensive benchmark that evaluates vision-language models (VLMs) in height-aware 3D geospatial reasoning from RGB imagery alone. Unlike conventional sensor-based frameworks, Geo3DVQA emphasizes realistic scenarios integrating elevation, sky view factors, and land cover patterns. The benchmark comprises 110k curated question-answer pairs across 16 task categories, including single-feature inference, multi-feature reasoning, and application-level analysis. Through a systematic evaluation of ten state-of-the-art VLMs, we reveal fundamental limitations in RGB-to-3D spatial reasoning. Our results further show that domain-specific instruction tuning consistently enhances model performance across all task categories, including height-aware and open-ended, application-oriented reasoning. 
Geo3DVQA provides a unified, interpretable framework for evaluating RGB-based 3D geospatial reasoning and identifies key challenges and opportunities for scalable 3D spatial analysis. The code and data are available at https://github.com/mm1129/Geo3DVQA. + oai:arXiv.org:2512.07276v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Mai Tsujimoto, Junjue Wang, Weihao Xuan, Naoto Yokoya + + + Over-representation of phonological features in basic vocabulary doesn't replicate when controlling for spatial and phylogenetic effects + https://arxiv.org/abs/2512.07543 + arXiv:2512.07543v2 Announce Type: replace +Abstract: The statistical over-representation of phonological features in the basic vocabulary of languages is often interpreted as reflecting potentially universal sound symbolic patterns. However, most of those results have not been tested explicitly for reproducibility and might be prone to biases in the study samples or models. Many studies on the topic do not adequately control for genealogical and areal dependencies between sampled languages, casting doubt on the robustness of the results. In this study, we test the robustness of a recent study on sound symbolism of basic vocabulary concepts which analyzed 245 languages. The new sample includes data on 2864 languages from Lexibank. We modify the original model by adding statistical controls for spatial and phylogenetic dependencies between languages. The new results show that most of the previously observed patterns are not robust, and in fact many patterns disappear completely when adding the genealogical and areal controls. A small number of patterns, however, emerges as highly stable even with the new sample. Through the new analysis, we are able to assess the distribution of sound symbolism on a larger scale than previously. The study further highlights the need for testing all universal claims on language for robustness on various levels. 
+ oai:arXiv.org:2512.07543v2 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Frederic Blum + + + Liver Fibrosis Quantification and Analysis: The LiQA Dataset and Baseline Method + https://arxiv.org/abs/2512.07651 + arXiv:2512.07651v2 Announce Type: replace +Abstract: Liver fibrosis represents a significant global health burden, necessitating accurate staging for effective clinical management. This report introduces the LiQA (Liver Fibrosis Quantification and Analysis) dataset, established as part of the CARE 2024 challenge. Comprising $440$ patients with multi-phase, multi-center MRI scans, the dataset is curated to benchmark algorithms for Liver Segmentation (LiSeg) and Liver Fibrosis Staging (LiFS) under complex real-world conditions, including domain shifts, missing modalities, and spatial misalignment. We further describe the challenge's top-performing methodology, which integrates a semi-supervised learning framework with external data for robust segmentation, and utilizes a multi-view consensus approach with Class Activation Map (CAM)-based regularization for staging. Evaluation of this baseline demonstrates that leveraging multi-source data and anatomical constraints significantly enhances model robustness in clinical settings. + oai:arXiv.org:2512.07651v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Yuanye Liu, Hanxiao Zhang, Jiyao Liu, Nannan Shi, Yuxin Shi, Arif Mahmood, Murtaza Taj, Xiahai Zhuang + + + Graph Contrastive Learning via Spectral Graph Alignment + https://arxiv.org/abs/2512.07878 + arXiv:2512.07878v3 Announce Type: replace +Abstract: Given augmented views of each input graph, contrastive learning methods (e.g., InfoNCE) optimize pairwise alignment of graph embeddings across views while providing no mechanism to control the global structure of the view specific graph-of-graphs built from these embeddings. 
We introduce SpecMatch-CL, a novel loss function that aligns the view specific graph-of-graphs by minimizing the difference between their normalized Laplacians. Theoretically, we show that under certain assumptions, the difference between normalized Laplacians provides an upper bound not only for the difference between the ideal Perfect Alignment contrastive loss and the current loss, but also for the uniformity loss. Empirically, SpecMatch-CL establishes a new state of the art on eight TU benchmarks under unsupervised learning and semi-supervised learning at low label rates, and yields consistent gains in transfer learning on PPI-306K and ZINC 2M datasets. + oai:arXiv.org:2512.07878v3 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Manh Nguyen (Department of Statistics, University of Wisconsin-Madison) + + + Restrictive Hierarchical Semantic Segmentation for Stratified Tooth Layer Detection + https://arxiv.org/abs/2512.07984 + arXiv:2512.07984v3 Announce Type: replace +Abstract: Accurate understanding of anatomical structures is essential for reliably staging certain dental diseases. A way of introducing this within semantic segmentation models is by utilising hierarchy-aware methodologies. However, existing hierarchy-aware segmentation methods largely encode anatomical structure through the loss functions, providing weak and indirect supervision. We introduce a general framework that embeds an explicit anatomical hierarchy into semantic segmentation by coupling a recurrent, level-wise prediction scheme with restrictive output heads and top-down feature conditioning. At each depth of the class tree, the backbone is re-run on the original image concatenated with logits from the previous level. Child class features are conditioned using Feature-wise Linear Modulation of their parent class probabilities, to modulate child feature spaces for fine-grained detection. 
A probabilistic composition rule enforces consistency between parent and descendant classes. The hierarchical loss combines per-level class-weighted Dice and cross-entropy losses with a consistency term ensuring parent predictions are the sum of their children. We validate our approach on our proposed dataset, TL-pano, containing 194 panoramic radiographs with dense instance and semantic segmentation annotations of tooth layers and alveolar bone. Utilising UNet and HRNet as donor models across a 5-fold cross validation scheme, the hierarchical variants consistently increase IoU, Dice, and recall, particularly for fine-grained anatomies, and produce more anatomically coherent masks. However, hierarchical variants also demonstrated increased recall over precision, implying increased false positives. The results demonstrate that explicit hierarchical structuring improves both performance and clinical plausibility, especially in low-data dental imaging regimes. + oai:arXiv.org:2512.07984v3 + cs.CV + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Ryan Banks, Camila Lindoni Azevedo, Hongying Tang, Yunpeng Li + + + A tight example for approximation ratio 5 for covering small cuts by the primal-dual method + https://arxiv.org/abs/2512.08350 + arXiv:2512.08350v2 Announce Type: replace +Abstract: In the Small Cuts Cover problem we seek to cover by a min-cost edge-set the set family of cuts of size/capacity $<k$ of a graph. Recently, Simmons showed that the primal-dual algorithm of Williamson, Goemans, Mihail, and Vazirani achieves approximation ratio $5$ for this problem, and asked whether this bound is tight. We answer this question positively, by providing an example in which the ratio between the solution produced by the primal-dual algorithm and the optimum is arbitrarily close to $5$.
+ oai:arXiv.org:2512.08350v2 + cs.DS + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Zeev Nutov + + + Ontology-Based Knowledge Graph Framework for Industrial Standard Documents via Hierarchical and Propositional Structuring + https://arxiv.org/abs/2512.08398 + arXiv:2512.08398v2 Announce Type: replace +Abstract: Ontology-based knowledge graph (KG) construction is a core technology that enables multidimensional understanding and advanced reasoning over domain knowledge. Industrial standards, in particular, contain extensive technical information and complex rules presented in highly structured formats that combine tables, scopes of application, constraints, exceptions, and numerical calculations, making KG construction especially challenging. In this study, we propose a method that organizes such documents into a hierarchical semantic structure, decomposes sentences and tables into atomic propositions derived from conditional and numerical rules, and integrates them into an ontology-knowledge graph through LLM-based triple extraction. Our approach captures both the hierarchical and logical structures of documents, effectively representing domain-specific semantics that conventional methods fail to reflect. To verify its effectiveness, we constructed rule, table, and multi-hop QA datasets, as well as a toxic clause detection dataset, from industrial standards, and implemented an ontology-aware KG-RAG framework for comparative evaluation. Experimental results show that our method achieves significant performance improvements across all QA types compared to existing KG-RAG approaches. This study demonstrates that reliable and scalable knowledge representation is feasible even for industrial documents with intertwined conditions, constraints, and scopes, contributing to future domain-specific RAG development and intelligent document management. 
+ oai:arXiv.org:2512.08398v2 + cs.IR + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Jiin Park, Hyuna Jeon, Yoonseo Lee, Jisu Hong, Misuk Kim + + + Applications of Interval-based Temporal Separation: the Reactivity Normal Form, Inverse $\Pi$, Craig Interpolation and Beth Definability + https://arxiv.org/abs/2512.08640 + arXiv:2512.08640v2 Announce Type: replace +Abstract: We show how interval-based temporal separation on the extension of Moszkowski's discrete time interval temporal logic (Moszkowski, 1986) by the neighbourhood modalities (ITL-NL) and a lemma which is key in establishing this form of separation in (Guelev and Moszkowski, 2022) can be used to obtain concise proofs of an interval-based form of the reactivity normal form as known from (Manna and Pnueli, 1990), the inverse of the temporal projection operator from (Halpern, Manna and Moszkowski, 1983), the elimination of propositional quantification in ITL-NL and, consequently, uniform Craig interpolation and Beth definability for ITL-NL. + oai:arXiv.org:2512.08640v2 + cs.LO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Dimitar P. Guelev + + + Selfi: Self Improving Reconstruction Engine via 3D Geometric Feature Alignment + https://arxiv.org/abs/2512.08930 + arXiv:2512.08930v2 Announce Type: replace +Abstract: Novel View Synthesis (NVS) has traditionally relied on models with explicit 3D inductive biases combined with known camera parameters from Structure-from-Motion (SfM) beforehand. Recent vision foundation models like VGGT take an orthogonal approach -- 3D knowledge is gained implicitly through training data and loss objectives, enabling feed-forward prediction of both camera parameters and 3D representations directly from a set of uncalibrated images. 
While flexible, VGGT features lack explicit multi-view geometric consistency, and we find that improving such 3D feature consistency benefits both NVS and pose estimation tasks. We introduce Selfi, a self-improving 3D reconstruction pipeline via feature alignment, transforming a VGGT backbone into a high-fidelity 3D reconstruction engine by leveraging its own outputs as pseudo-ground-truth. Specifically, we train a lightweight feature adapter using a reprojection-based consistency loss, which distills VGGT outputs into a new geometrically-aligned feature space that captures spatial proximity in 3D. This enables state-of-the-art performance in both NVS and camera pose estimation, demonstrating that feature alignment is a highly beneficial step for downstream 3D reasoning. + oai:arXiv.org:2512.08930v2 + cs.CV + cs.GR + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Youming Deng, Songyou Peng, Junyi Zhang, Kathryn Heal, Tiancheng Sun, John Flynn, Steve Marschner, Lucy Chai + + + SEA: Spectral Edge Attack on Graph Neural Networks + https://arxiv.org/abs/2512.08964 + arXiv:2512.08964v3 Announce Type: replace +Abstract: Graph neural networks (GNNs) have been widely applied in a variety of domains. However, the very ability of graphs to represent complex data structures is both the key strength of GNNs and a major source of their vulnerability. Recent studies have shown that attacking GNNs by maliciously perturbing the underlying graph can severely degrade their performance. For attack methods, the central challenge is to maintain attack effectiveness while remaining difficult to detect. Most existing attacks require modifying the graph structure, such as adding or deleting edges, which is relatively easy to notice. To address this problem, this paper proposes a new attack model that employs spectral adversarial robustness evaluation to quantitatively analyze the vulnerability of each edge in a graph. 
By precisely targeting the weakest links, our method can achieve effective attacks without changing the connectivity pattern of edges in the graph, for example by subtly adjusting the weights of a small subset of the most vulnerable edges. We apply the proposed method to attack several classical graph neural network architectures, and experimental results show that our attack is highly effective. + oai:arXiv.org:2512.08964v3 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yongyu Wang + + + ConceptPose: Training-Free Zero-Shot Object Pose Estimation using Concept Vectors + https://arxiv.org/abs/2512.09056 + arXiv:2512.09056v2 Announce Type: replace +Abstract: Object pose estimation is a fundamental task in computer vision and robotics, yet most methods require extensive, dataset-specific training. Concurrently, large-scale vision-language models show remarkable zero-shot capabilities. In this work, we bridge these two worlds by introducing ConceptPose, a framework for object pose estimation that is both training-free and model-free. ConceptPose leverages a vision-language model (VLM) to create open-vocabulary 3D concept maps, where each point is tagged with a concept vector derived from saliency maps. By establishing robust 3D-3D correspondences across concept maps, our approach allows precise estimation of 6DoF relative pose. Without any object or dataset-specific training, our approach achieves state-of-the-art results on common zero-shot relative pose estimation benchmarks, significantly outperforming existing methods by over 62% in ADD(-S) score, including those that utilize extensive dataset-specific training.
+ oai:arXiv.org:2512.09056v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-sa/4.0/ + Liming Kuang, Yordanka Velikova, Mahdi Saleh, Jan-Nico Zaech, Danda Pani Paudel, Benjamin Busam + + + Derandomizing Isolation In Catalytic Logspace + https://arxiv.org/abs/2512.09374 + arXiv:2512.09374v2 Announce Type: replace +Abstract: A language is said to be in catalytic logspace if we can test membership using a deterministic logspace machine that has an additional read/write tape filled with arbitrary data whose contents have to be restored to their original value at the end of the computation. The model of catalytic computation was introduced by Buhrman et al [STOC2014]. + As our first result, we obtain a catalytic logspace algorithm for computing a minimum weight witness to a search problem, with small weights, provided the algorithm is given oracle access for the corresponding weighted decision problem. In particular, our reduction yields CL algorithms for the search versions of the following three problems: planar perfect matching, planar exact perfect matching and weighted arborescences in weighted digraphs. + Our second set of results concern the significantly larger class CL^{NP}_{2-round}. We show that CL^{NP}_{2-round} contains SearchSAT and the complexity classes BPP, MA and ZPP^{NP[1]}. While SearchSAT is shown to be in CL^{NP}_{2-round} using the isolation lemma, the other three containments, while based on the compress-or-random technique, use the Nisan-Wigderson [JCSS 1994] based pseudo-random generator. These containments show that CL^{NP}_{2-round} resembles ZPP^NP more than P^{NP}, providing some weak evidence that CL is more like ZPP than P. + For our third set of results we turn to isolation well inside catalytic classes. We consider the unambiguous catalytic class CTISP[poly(n),logn,log^2n]^UL and show that it contains reachability and therefore NL. 
This is a catalytic version of the result of van Melkebeek & Prakriya [SIAM J. Comput. 2019]. Building on their result, we also show a tradeoff between the workspace of the oracle and the catalytic space of the base machine. Finally, we extend these catalytic upper bounds to LogCFL. + oai:arXiv.org:2512.09374v2 + cs.CC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + V. Arvind, Srijan Chakraborty, Samir Datta + + + Py-DiSMech: A Scalable and Efficient Framework for Discrete Differential Geometry-Based Modeling and Control of Soft Robots + https://arxiv.org/abs/2512.09911 + arXiv:2512.09911v2 Announce Type: replace +Abstract: High-fidelity simulation has become essential to the design and control of soft robots, where large geometric deformations and complex contact interactions challenge conventional modeling tools. Recent advances in the field demand simulation frameworks that combine physical accuracy, computational scalability, and seamless integration with modern control and optimization pipelines. In this work, we present Py-DiSMech, a Python-based, open-source simulation framework for modeling and control of soft robotic structures grounded in the principles of Discrete Differential Geometry (DDG). By discretizing geometric quantities such as curvature and strain directly on meshes, Py-DiSMech captures the nonlinear deformation of rods, shells, and hybrid structures with high fidelity and reduced computational cost. 
The framework introduces (i) a fully vectorized NumPy implementation achieving order-of-magnitude speed-ups over existing geometry-based simulators; (ii) a penalty-energy-based fully implicit contact model that supports rod-rod, rod-shell, and shell-shell interactions; (iii) a natural-strain-based feedback-control module featuring a proportional-integral (PI) controller for shape regulation and trajectory tracking; and (iv) a modular, object-oriented software design enabling user-defined elastic energies, actuation schemes, and integration with machine-learning libraries. Benchmark comparisons demonstrate that Py-DiSMech substantially outperforms the state-of-the-art simulator Elastica in computational efficiency while maintaining physical accuracy. Together, these features establish Py-DiSMech as a scalable, extensible platform for simulation-driven design, control validation, and sim-to-real research in soft robotics. + oai:arXiv.org:2512.09911v2 + cs.RO + physics.comp-ph + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Radha Lahoti, Ryan Chaiyakul, M. Khalid Jawed + + + Independent Density Estimation + https://arxiv.org/abs/2512.10067 + arXiv:2512.10067v2 Announce Type: replace +Abstract: Large-scale Vision-Language models have achieved remarkable results in various domains, such as image captioning and conditioned image generation. Nevertheless, these models still encounter difficulties in achieving human-like compositional generalization. In this study, we propose a new method called Independent Density Estimation (IDE) to tackle this challenge. IDE aims to learn the connection between individual words in a sentence and the corresponding features in an image, enabling compositional generalization. We build two models based on the philosophy of IDE. 
The first one utilizes fully disentangled visual representations as input, and the second leverages a Variational Auto-Encoder to obtain partially disentangled features from raw images. Additionally, we propose an entropy-based compositional inference method to combine predictions of each word in the sentence. Our models exhibit superior generalization to unseen compositions compared to current models when evaluated on various datasets. + oai:arXiv.org:2512.10067v2 + cs.CV + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Jiahao Liu, Senhao Cao + + + Does Timeboost Reduce MEV-Related Spam? Theory and Evidence from Layer-2 Transactions + https://arxiv.org/abs/2512.10094 + arXiv:2512.10094v2 Announce Type: replace +Abstract: Maximal extractable value opportunities often induce spam in Layer-2 blockchains: many identical transactions are submitted near simultaneously, most of which revert, wasting blockspace. We study Timeboost, a mechanism on Arbitrum that auctions a timestamp advantage, crucial under first-come first-served sequencing rules. We develop a game-theoretic model in which users choose the number of transaction copies to submit, and extend upon the baseline setting by modeling the Timeboost auction and subsequent transaction submission behavior. We show that Timeboost reduces spam and increases sequencer/DAO revenue in equilibrium relative to the baseline, transferring user payments from revert costs to auction bids. Empirically, we assemble mempool data from multiple Layer-2 networks, measuring spam via identical transactions submitted in narrow time intervals, and conduct an event study around Timeboost adoption on Arbitrum using other L2s as contemporaneous benchmarks. We find a decline in MEV-related spam and an increase in revenue on Arbitrum post-adoption, consistent with model predictions. 
+ oai:arXiv.org:2512.10094v2 + cs.GT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Agostino Capponi, Brian Zhu + + + Does SWE-Bench-Verified Test Agent Ability or Model Memory? + https://arxiv.org/abs/2512.10218 + arXiv:2512.10218v2 Announce Type: replace +Abstract: SWE-Bench-Verified, a dataset comprising 500 issues, serves as a de facto benchmark for evaluating various large language models (LLMs) on their ability to resolve GitHub issues. But this benchmark may overlap with model training data. If that is true, scores may reflect training recall, not issue-solving skill. To study this, we test two Claude models that frequently appear in top-performing agents submitted to the benchmark. We ask them to find relevant files using only issue text, and then issue text plus file paths. We then run the same setup on BeetleBox and SWE-rebench. Despite both benchmarks involving popular open-source Python projects, models performed 3 times better on SWE-Bench-Verified. They were also 6 times better at finding edited files, without any additional context about the projects themselves. This gap suggests the models may have seen many SWE-Bench-Verified tasks during training. As a result, scores on this benchmark may not reflect an agent's ability to handle real software issues, yet it continues to be used in ways that can misrepresent progress and lead to choices that favour agents that use certain models over strong agent design. Our setup tests the localization step with minimal context to the extent that the task should be logically impossible to solve. Our results show the risk of relying on older popular benchmarks and support the shift toward newer datasets built with contamination in mind. 
+ oai:arXiv.org:2512.10218v2 + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Thanosan Prathifkumar, Noble Saji Mathews, Meiyappan Nagappan + + + Confucius Code Agent: Scalable Agent Scaffolding for Real-World Codebases + https://arxiv.org/abs/2512.10398 + arXiv:2512.10398v5 Announce Type: replace +Abstract: Real-world software engineering tasks require coding agents that can operate over massive repositories, sustain long-horizon sessions, and reliably coordinate complex toolchains at test time. Existing research-grade coding agents offer transparency but struggle when scaled to heavier, production-level workloads, while production-grade systems achieve strong practical performance but provide limited extensibility, interpretability, and controllability. We introduce the Confucius Code Agent (CCA), a software engineering agent that can operate at large-scale codebases. CCA is built on top of the Confucius SDK, an agent development platform structured around three complementary perspectives: Agent Experience (AX), User Experience (UX), and Developer Experience (DX). The SDK integrates a unified orchestrator with hierarchical working memory for long-context reasoning, a persistent note-taking system for cross-session continual learning, and a modular extension system for reliable tool use. In addition, we introduce a meta-agent that automates the synthesis, evaluation, and refinement of agent configurations through a build-test-improve loop, enabling rapid adaptation to new tasks, environments, and tool stacks. Instantiated with these mechanisms, CCA demonstrates strong performance on real-world software engineering tasks. On SWE-Bench-Pro, CCA reaches a Resolve@1 of 54.3%, exceeding prior research baselines and comparing favorably to commercial results, under identical repositories, model backends, and tool access. 
+ oai:arXiv.org:2512.10398v5 + cs.CL + cs.AI + cs.LG + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Sherman Wong, Zhenting Qi, Zhaodong Wang, Nathan Hu, Samuel Lin, Jun Ge, Erwin Gao, Wenlin Chen, Yilun Du, Minlan Yu, Ying Zhang + + + Renormalizable Spectral-Shell Dynamics as the Origin of Neural Scaling Laws + https://arxiv.org/abs/2512.10427 + arXiv:2512.10427v3 Announce Type: replace +Abstract: Neural scaling laws and double-descent phenomena suggest that deep-network training obeys a simple macroscopic structure despite highly nonlinear optimization dynamics. We derive such structure directly from gradient descent in function space. For mean-squared error loss, the training error evolves as $\dot e_t=-M(t)e_t$ with $M(t)=J_{\theta(t)}J_{\theta(t)}^{\!*}$, a time-dependent self-adjoint operator induced by the network Jacobian. Using Kato perturbation theory, we obtain an exact system of coupled modewise ODEs in the instantaneous eigenbasis of $M(t)$. + To extract macroscopic behavior, we introduce a logarithmic spectral-shell coarse-graining and track quadratic error energy across shells. Microscopic interactions within each shell cancel identically at the energy level, so shell energies evolve only through dissipation and external inter-shell interactions. We formalize this via a \emph{renormalizable shell-dynamics} assumption, under which cumulative microscopic effects reduce to a controlled net flux across shell boundaries. + Assuming an effective power-law spectral transport in a relevant resolution range, the shell dynamics admits a self-similar solution with a moving resolution frontier and explicit scaling exponents. This framework explains neural scaling laws and double descent, and unifies lazy (NTK-like) training and feature learning as two limits of the same spectral-shell dynamics. 
+ oai:arXiv.org:2512.10427v3 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yizhou Zhang + + + Efficient Hypergraph Pattern Matching via Match-and-Filter and Intersection Constraint + https://arxiv.org/abs/2512.10621 + arXiv:2512.10621v2 Announce Type: replace +Abstract: A hypergraph is a generalization of a graph, in which a hyperedge can connect multiple vertices, modeling complex relationships involving multiple vertices simultaneously. Hypergraph pattern matching, which is to find all isomorphic embeddings of a query hypergraph in a data hypergraph, is one of the fundamental problems. In this paper, we present a novel algorithm for hypergraph pattern matching by introducing (1) the intersection constraint, a necessary and sufficient condition for valid embeddings, which significantly speeds up the verification process, (2) the candidate hyperedge space, a data structure that stores potential mappings between hyperedges in the query hypergraph and the data hypergraph, and (3) the Match-and-Filter framework, which interleaves matching and filtering operations to maintain only compatible candidates in the candidate hyperedge space during backtracking. Experimental results on real-world datasets demonstrate that our algorithm significantly outperforms the state-of-the-art algorithms, by up to orders of magnitude in terms of query processing time. + oai:arXiv.org:2512.10621v2 + cs.DB + cs.DS + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Siwoo Song, Wonseok Shin, Kunsoo Park, Giuseppe F. 
Italiano, Zhengyi Yang, Wenjie Zhang + + + PACIFIC: a framework for generating benchmarks to check Precise Automatically Checked Instruction Following In Code + https://arxiv.org/abs/2512.10713 + arXiv:2512.10713v2 Announce Type: replace +Abstract: Large Language Model (LLM)-based code assistants have emerged as a powerful application of generative AI, demonstrating impressive capabilities in code generation and comprehension. A key requirement for these systems is their ability to accurately follow user instructions. We present Precise Automatically Checked Instruction Following In Code (PACIFIC), a novel framework designed to automatically generate benchmarks that rigorously assess sequential instruction-following and code dry-running capabilities in LLMs, while allowing control over benchmark difficulty. PACIFIC produces benchmark variants with clearly defined expected outputs, enabling straightforward and reliable evaluation through simple output comparisons. In contrast to existing approaches that often rely on tool usage or agentic behavior, our work isolates and evaluates the LLM's intrinsic ability to reason through code behavior step-by-step without execution (dry running) and to follow instructions. Furthermore, our framework mitigates training data contamination by facilitating effortless generation of novel benchmark variations. We validate our framework by generating a suite of benchmarks spanning a range of difficulty levels and evaluating multiple state-of-the-art LLMs. Our results demonstrate that PACIFIC can produce increasingly challenging benchmarks that effectively differentiate instruction-following and dry running capabilities, even among advanced models. Overall, our framework offers a scalable, contamination-resilient methodology for assessing core competencies of LLMs in code-related tasks. 
+ oai:arXiv.org:2512.10713v2 + cs.SE + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Itay Dreyfuss, Antonio Abu Nassar, Samuel Ackerman, Axel Ben David, Eitan Farchi, Rami Katan, Orna Raz, Marcel Zalmanovici + + + Iterative Compositional Data Generation for Robot Control + https://arxiv.org/abs/2512.10891 + arXiv:2512.10891v3 Announce Type: replace +Abstract: Collecting robotic manipulation data is expensive, making it impractical to acquire demonstrations for the combinatorially large space of tasks that arise in multi-object, multi-robot, and multi-environment settings. While recent generative models can synthesize useful data for individual tasks, they do not exploit the compositional structure of robotic domains and struggle to generalize to unseen task combinations. We propose a semantic compositional diffusion transformer that factorizes transitions into robot-, object-, obstacle-, and objective-specific components and learns their interactions through attention. Once trained on a limited subset of tasks, we show that our model can zero-shot generate high-quality transitions from which we can learn control policies for unseen task combinations. Then, we introduce an iterative self-improvement procedure in which synthetic data is validated via offline reinforcement learning and incorporated into subsequent training rounds. Our approach substantially improves zero-shot performance over monolithic and hard-coded compositional baselines, ultimately solving nearly all held-out tasks and demonstrating the emergence of meaningful compositional structure in the learned representations. + oai:arXiv.org:2512.10891v3 + cs.RO + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Anh-Quan Pham, Marcel Hussing, Shubhankar P. Patankar, Dani S. 
Bassett, Jorge Mendez-Mendez, Eric Eaton + + + Sphere Decoding Revisited + https://arxiv.org/abs/2512.11195 + arXiv:2512.11195v2 Announce Type: replace +Abstract: In this paper, the paradigm of sphere decoding (SD) for solving the integer least squares (ILS) problem is revisited, where extra degrees of freedom are introduced to exploit the decoding potential. Firstly, the equivalent sphere decoding (ESD) is proposed, which is essentially the same as the classic Fincke-Pohst sphere decoding but characterizes the sphere radius $D>0$ with two new parameters, named the initial searching size $K>1$ and the deviation factor $\sigma>0$. By fixing $\sigma$ properly, we show that given the sphere radius $D\triangleq\sigma\sqrt{2\ln K}$, the complexity of ESD in terms of the number of visited nodes is upper bounded by $|S|<nK$, thus resulting in an explicit and tractable decoding trade-off solely controlled by $K$. To the best of our knowledge, this is the first time that the complexity of sphere decoding is exactly specified, from which considerable decoding potential can be explored. After that, two enhancement mechanisms, named normalized weighting and candidate protection, are proposed to further upgrade the ESD algorithm. On one hand, given the same setups of $K$ and $\sigma$, a larger sphere radius is achieved, indicating a better decoding trade-off. On the other hand, the proposed ESD algorithm is generalized, which bridges suboptimal and optimal decoding performance through the flexible choice of $K$. Finally, further performance optimization and complexity reduction with respect to ESD are also derived, and the introduced tractable and flexible decoding trade-off is verified through large-scale MIMO detection.
2024 + Zheng Wang, Cong Ling, Shi Jin, Yongming Huang, Feifei Gao + + + Projected Sobolev Natural Gradient Descent for Neural Variational Monte Carlo Solution of the Gross-Pitaevskii Equation + https://arxiv.org/abs/2512.11339 + arXiv:2512.11339v2 Announce Type: replace +Abstract: This paper proposes a neural variational Monte Carlo method based on deep neural networks to solve the Gross-Pitaevskii equation (GPE) via projected Sobolev natural gradient descent (NGD). Adopting an "optimize-then-discretize" strategy, we first apply a constraint-preserving continuous Riemannian gradient flow on an infinite-dimensional Riemannian manifold, which is subsequently mapped to the neural network parameter space via Galerkin projection. This process naturally induces a Sobolev energy metric that incorporates physical information, effectively mitigating stiffness during optimization. To address the explicit dependence on the normalization constant caused by the nonlinear interaction term in the GPE, we design a hybrid sampling strategy combining an integration stream and a MCMC stream to achieve precise estimation of the generalized Gram matrix and energy gradients. Numerical experiments on benchmark cases, including the harmonic oscillator potential in the strong interaction limit and multi-scale optical lattice potentials, demonstrate the high accuracy of the proposed method. Furthermore, it achieves an order-of-magnitude acceleration in convergence compared to standard optimizers like Adam, exhibiting superior robustness in handling strong nonlinearities and complex geometric constraints. 
+ oai:arXiv.org:2512.11339v2 + math.NA + cs.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Chenglong Bao, Chen Cui, Kai Jiang, Shi Shu + + + Three methods, one problem: Classical and AI approaches to no-three-in-line + https://arxiv.org/abs/2512.11469 + arXiv:2512.11469v2 Announce Type: replace +Abstract: The No-Three-In-Line problem asks for the maximum number of points that can be placed on an n by n grid with no three collinear, representing a famous problem in combinatorial geometry. While classical methods like Integer Linear Programming (ILP) guarantee optimal solutions, they face exponential scaling with grid size, and recent advances in machine learning offer promising alternatives for pattern-based approximation. This paper presents the first systematic comparison of classical optimization and AI approaches to this problem, evaluating their performance against traditional algorithms. We apply PatternBoost transformer learning and reinforcement learning (PPO) to this problem for the first time, comparing them against ILP. ILP achieves provably optimal solutions up to 19 by 19 grids, while PatternBoost matches optimal performance up to 14 by 14 grids with 96% test loss reduction. PPO achieves perfect solutions on 10 by 10 grids but fails at 11 by 11 grids, where constraint violations prevent valid configurations. These results demonstrate that classical optimization remains essential for exact solutions while AI methods offer competitive performance on smaller instances, with hybrid approaches presenting the most promising direction for scaling to larger problem sizes. 
+ oai:arXiv.org:2512.11469v2 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Pranav Ramanathan, Thomas Prellberg, Matthew Lewis, Prathamesh Dinesh Joshi, Raj Abhijit Dandekar, Rajat Dandekar, Sreedath Panat + + + Toward P vs NP: An Observer-Theoretic Separation via SPDP Rank and a ZFC-Equivalent Foundation within the N-Frame Model + https://arxiv.org/abs/2512.11820 + arXiv:2512.11820v3 Announce Type: replace +Abstract: We present a self-contained separation framework for P vs NP developed entirely within ZFC. The approach consists of: (i) a deterministic, radius-1 compilation from uniform polynomial-time Turing computation to local sum-of-squares (SoS) polynomials with polylogarithmic contextual entanglement width (CEW); (ii) a formal Width-to-Rank upper bound for the resulting SPDP matrices at matching parameters; (iii) an NP-side identity-minor lower bound in the same encoding; and (iv) a rank-monotone, instance-uniform extraction map from the compiled P-side polynomials to the NP family. Together these yield a contradiction under the assumption P = NP, establishing a separation. + We develop a correspondence between CEW, viewed as a quantitative measure of computational contextuality, and SPDP rank, yielding a unified criterion for complexity separation. We prove that bounded-CEW observers correspond to polynomial-rank computations (the class P), while unbounded CEW characterizes the class NP. This implies that exponential SPDP rank for #3SAT and related hard families forces P != NP within the standard framework of complexity theory. 
+ Key technical components include: (1) constructive lower bounds on SPDP rank via Ramanujan-Tseitin expander families; (2) a non-circular reduction from Turing-machine computation to low-rank polynomial evaluation; (3) a codimension-collapse lemma ensuring that rank amplification cannot occur within polynomial resources; and (4) proofs of barrier immunity against relativization, natural proofs, and algebrization. The result is a complete ZFC proof architecture whose primitives and compositions are fully derived, with community verification and machine-checked formalization left as future work. + oai:arXiv.org:2512.11820v3 + cs.CC + cs.DM + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Darren J. Edwards + + + Neural CDEs as Correctors for Learned Time Series Models + https://arxiv.org/abs/2512.12116 + arXiv:2512.12116v2 Announce Type: replace +Abstract: Learned time-series models, whether continuous- or discrete-time, are widely used to forecast the states of a dynamical system. Such models generate multi-step forecasts either directly, by predicting the full horizon at once, or iteratively, by feeding back their own predictions at each step. In both cases, the multi-step forecasts are prone to errors. To address this, we propose a Predictor-Corrector mechanism where the Predictor is any learned time-series model and the Corrector is a neural controlled differential equation. The Predictor forecasts, and the Corrector predicts the errors of the forecasts. Adding these errors to the forecasts improves forecast performance. The proposed Corrector works with irregularly sampled time series and continuous- and discrete-time Predictors. Additionally, we introduce two regularization strategies to improve the extrapolation performance of the Corrector with accelerated training. 
We evaluate our Corrector with diverse Predictors, e.g., neural ordinary differential equations, Contiformer, and DLinear, on synthetic, physics simulation, and real-world forecasting datasets. The experiments demonstrate that the Predictor-Corrector mechanism consistently improves the performance compared to Predictor alone. + oai:arXiv.org:2512.12116v2 + cs.LG + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Muhammad Bilal Shahid, Prajwal Koirla, Cody Fleming + + + Human-Inspired Learning for Large Language Models via Obvious Record and Maximum-Entropy Method Discovery + https://arxiv.org/abs/2512.12608 + arXiv:2512.12608v2 Announce Type: replace +Abstract: Large Language Models (LLMs) excel at extracting common patterns from large-scale corpora, yet they struggle with rare, low-resource, or previously unseen scenarios-such as niche hardware deployment issues or irregular IoT device behaviors-because such cases are sparsely represented in training data. Moreover, LLMs rely primarily on implicit parametric memory, which limits their ability to explicitly acquire, recall, and refine methods, causing them to behave predominantly as intuition-driven predictors rather than deliberate, method-oriented learners. Inspired by how humans learn from rare experiences, this paper proposes a human-inspired learning framework that integrates two complementary mechanisms. The first, Obvious Record, explicitly stores cause--result (or question--solution) relationships as symbolic memory, enabling persistent learning even from single or infrequent encounters. The second, Maximum-Entropy Method Discovery, prioritizes and preserves methods with high semantic dissimilarity, allowing the system to capture diverse and underrepresented strategies that are typically overlooked by next-token prediction. 
Verification on a benchmark of 60 semantically diverse question--solution pairs demonstrates that the proposed entropy-guided approach achieves stronger coverage of unseen questions and significantly greater internal diversity than a random baseline, confirming its effectiveness in discovering more generalizable and human-inspired methods. + oai:arXiv.org:2512.12608v2 + cs.CL + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Hong Su + + + gpu_ext: Extensible OS Policies for GPUs via eBPF + https://arxiv.org/abs/2512.12615 + arXiv:2512.12615v2 Announce Type: replace +Abstract: Performance in modern GPU-centric systems increasingly depends on resource management policies, including memory placement, scheduling, and observability. However, uniform policies typically yield suboptimal performance across diverse workloads. Existing approaches present a tradeoff: user-space runtimes provide programmability and flexibility but lack cross-tenant visibility and fine-grained control of hardware resources; meanwhile, modifications to the OS kernel introduce significant complexity and safety risks. To address this, we argue that the GPU driver and device layer should provide an extensible OS interface for policy enforcement. While the emerging eBPF technology shows potential, directly applying existing host-side eBPF is insufficient because they lack visibility and control into critical device-side events, and directly embedding policy code into GPU kernels could compromise safety and efficiency. We propose gpu_ext, an eBPF-based runtime that treats the GPU driver and device as a programmable OS subsystem. gpu_ext extends GPU drivers by exposing safe programmable hooks and introduces a device-side eBPF runtime capable of executing verified policy logic within GPU kernels, enabling coherent and transparent policies. 
Evaluation across realistic workloads including inference, training, and vector search demonstrates that gpu_ext improves throughput by up to 4.8x and reduces tail latency by up to 2x, incurring low overhead, without modifying or restarting applications + oai:arXiv.org:2512.12615v2 + cs.OS + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Yusheng Zheng, Tong Yu, Yiwei Yang, Minghui Jiang, Xiangyu Gao, Jianchang Su, Yanpeng Hu, Wenan Mao, Wei Zhang, Dan Williams, Andi Quinn + + + Understanding Syllogistic Reasoning in LLMs from Formal and Natural Language Perspectives + https://arxiv.org/abs/2512.12620 + arXiv:2512.12620v2 Announce Type: replace +Abstract: We study syllogistic reasoning in LLMs from the logical and natural language perspectives. In process, we explore fundamental reasoning capabilities of the LLMs and the direction this research is moving forward. To aid in our studies, we use 14 large language models and investigate their syllogistic reasoning capabilities in terms of symbolic inferences as well as natural language understanding. Even though this reasoning mechanism is not a uniform emergent property across LLMs, the perfect symbolic performances in certain models make us wonder whether LLMs are becoming more and more formal reasoning mechanisms, rather than making explicit the nuances of human reasoning. + oai:arXiv.org:2512.12620v2 + cs.CL + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-sa/4.0/ + Aheli Poddar (Institute of Engineering & Management, Kolkata), Saptarshi Sahoo (Indian Statistical Institute, Chennai), Sujata Ghosh (Indian Statistical Institute, Chennai) + + + Sub-$n^k$ Deterministic algorithm for minimum $k$-way cut in simple graphs + https://arxiv.org/abs/2512.12900 + arXiv:2512.12900v2 Announce Type: replace +Abstract: We present a \emph{deterministic exact algorithm} for the \emph{minimum $k$-cut problem} on simple graphs. 
+ Our approach combines the \emph{principal sequence of partitions (PSP)}, derived canonically from ideal loads, with a single level of \emph{Kawarabayashi--Thorup (KT)} contractions at the critical PSP threshold~$\lambda_j$. + Let $j$ be the smallest index with $\kappa(P_j)\ge k$ and $R := k - \kappa(P_{j-1})$. + We prove a structural decomposition theorem showing that an optimal $k$-cut can be expressed as the level-$(j\!-\!1)$ boundary $A_{\le j-1}$ together with exactly $(R-r)$ \emph{non-trivial} internal cuts of value at most~$\lambda_j$ and $r$ \emph{singleton isolations} (``islands'') inside the parts of~$P_{j-1}$. + At this level, KT contractions yield kernels of total size $\widetilde{O}(n / \lambda_j)$, and from them we build a \emph{canonical border family}~$\mathcal{B}$ of the same order that deterministically covers all optimal refinement choices. + Branching only over~$\mathcal{B}$ (and also including an explicit ``island'' branch) gives total running time + $$ + T(n,m,k) = \widetilde{O}\left(\mathrm{poly}(m)+\Bigl(\tfrac{n}{\lambda_j}+n^{\omega/3}\Bigr)^{R}\right), + $$ + where $\omega < 2.373$ is the matrix multiplication exponent. + In particular, if $\lambda_j \ge n^{\varepsilon}$ for some constant $\varepsilon > 0$, we obtain a \emph{deterministic sub-$n^k$-time algorithm}, running in $n^{(1-\varepsilon)(k-1)+o(k)}$ time. + Finally, combining our PSP$\times$KT framework with a small-$\lambda$ exact subroutine via a simple meta-reduction yields a deterministic $n^{c k+O(1)}$ algorithm for $c = \max\{ t/(t+1), \omega/3 \} < 1$, aligning with the exponent in the randomized bound of He--Li (STOC~2022) under the assumed subroutine. 
+ oai:arXiv.org:2512.12900v2 + cs.DS + math.CO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Mohit Daga + + + Socratic Students: Teaching Language Models to Learn by Asking Questions + https://arxiv.org/abs/2512.13102 + arXiv:2512.13102v2 Announce Type: replace +Abstract: Large Language Models (LLMs) excel at static interactions, where they answer user queries by retrieving knowledge encoded in their parameters. However, in many real-world settings, such as educational tutoring or medical assistance, relevant information is not directly available and must be actively acquired through dynamic interactions. An interactive agent would recognize its own uncertainty, ask targeted questions, and retain new knowledge efficiently. Prior work has primarily explored effective ways for a teacher to instruct the student, where the teacher identifies student gaps and provides guidance. In this work, we shift the focus to the student and investigate effective strategies to actively query the teacher in seeking useful information. Across math and coding benchmarks, where baseline student models begin with near-zero performance, we show that student-led approaches consistently yield absolute Pass@k improvements of at least 0.5 over static baselines. To improve question quality, we train students using Direct Preference Optimization (DPO) with guidance from either self or stronger students. We find that this guided training enables smaller models to learn how to ask better questions, further enhancing learning efficiency. + oai:arXiv.org:2512.13102v2 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Rajeev Bhatt Ambati, Tianyi Niu, Aashu Singh, Shlok Mishra, Shashank Srivastava, Snigdha Chaturvedi + + + Can AI Understand What We Cannot Say? 
Measuring Multilevel Alignment Through Abortion Stigma Across Cognitive, Interpersonal, and Structural Levels + https://arxiv.org/abs/2512.13142 + arXiv:2512.13142v3 Announce Type: replace +Abstract: As large language models increasingly mediate stigmatized health decisions, their capacity to genuinely understand complex psychological and physiological phenomena remains poorly evaluated. Can AI understand what we cannot say? We investigate whether LLMs coherently represent abortion stigma across the cognitive, interpersonal, and structural levels where it operates. We systematically tested 627 demographically diverse personas across five leading LLMs using the validated Individual Level Abortion Stigma Scale (ILAS). Our multilevel analysis examined whether models coherently represent stigma at the cognitive level (self-judgment), interpersonal level (worries about judgment and isolation), and structural level (community condemnation and disclosure patterns), as well as overall stigma. Models fail tests of genuine understanding across all levels. They overestimate interpersonal stigma while underestimating cognitive stigma, assume uniform community condemnation, introduce demographic biases absent from human validation data, miss the empirically validated stigma-secrecy relationship, and contradict themselves within theoretical constructs. These patterns reveal that current alignment approaches ensure appropriate language but not coherent multilevel understanding. This work provides empirical evidence that current LLMs lack coherent multilevel understanding of psychological and physiological constructs. AI safety in high-stakes contexts demands new approaches to design (multilevel coherence), evaluation (continuous auditing), governance and regulation (mandatory audits, accountability, deployment restrictions), and AI literacy in domains where understanding what people cannot say determines whether support helps or harms. 
+ oai:arXiv.org:2512.13142v3 + cs.AI + cs.HC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Anika Sharma, Malavika Mampally, Chidaksh Ravuru, Kandyce Brennan, Neil Gaikwad + + + Parallel Heuristic Exploration for Additive Complexity Reduction in Fast Matrix Multiplication + https://arxiv.org/abs/2512.13365 + arXiv:2512.13365v2 Announce Type: replace +Abstract: This paper presents a parallel random-search method for reducing additive complexity in fast matrix multiplication algorithms with ternary coefficients $\{-1,0,1\}$. The approach replaces expensive exact evaluation with fast heuristic scoring, including the new Greedy-Intersections strategy. The method runs many independent common subexpression elimination processes in parallel, exploring the search space through random pair substitutions and diverse selection strategies while sharing promising partial solutions. Tested on 149 ternary-coefficient schemes, the method achieves lower addition counts than the state-of-the-art Greedy-Potential on 102 schemes (including 57 new best-known results for optimal-rank schemes), matches it on 45, and is outperformed on only 2. For most schemes, it provides equal or better results while being significantly faster, making it practical for algorithm exploration. All software and results are open source. + oai:arXiv.org:2512.13365v2 + cs.SC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + A. I. Perminov + + + SSAS: Cross-subject EEG-based Emotion Recognition through Source Selection with Adversarial Strategy + https://arxiv.org/abs/2512.13458 + arXiv:2512.13458v2 Announce Type: replace +Abstract: Electroencephalographic (EEG) signals have long been applied in the field of affective brain-computer interfaces (aBCIs). Cross-subject EEG-based emotion recognition has demonstrated significant potential in practical applications due to its suitability across diverse people. 
However, most studies on cross-subject EEG-based emotion recognition neglect the presence of inter-individual variability and negative transfer phenomena during model training. To address this issue, a cross-subject EEG-based emotion recognition through source selection with adversarial strategy is introduced in this paper. The proposed method comprises two modules: the source selection network (SS) and the adversarial strategies network (AS). The SS uses domain labels to reverse-engineer the training process of domain adaptation. Its key idea is to disrupt class separability and magnify inter-domain differences, thereby raising the classification difficulty and forcing the model to learn domain-invariant yet emotion-relevant representations. The AS gets the source domain selection results and the pretrained domain discriminators from SS. The pretrained domain discriminators compute a novel loss aimed at enhancing the performance of domain classification during adversarial training, ensuring the balance of adversarial strategies. This paper provides theoretical insights into the proposed method and achieves outstanding performance on two EEG-based emotion datasets, SEED and SEED-IV. The code can be found at https://github.com/liuyici/SSAS. + oai:arXiv.org:2512.13458v2 + cs.LG + cs.AI + eess.SP + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + 10.1016/j.eswa.2025.130843 + Yici Liu, Qi Wei Oung, Hoi Leong Lee + + + From Zipf's Law to Neural Scaling through Heaps' Law and Hilberg's Hypothesis + https://arxiv.org/abs/2512.13491 + arXiv:2512.13491v2 Announce Type: replace +Abstract: We inspect the deductive connection between the neural scaling law and Zipf's law -- two statements discussed in machine learning and quantitative linguistics. 
The neural scaling law describes how the cross entropy rate of a foundation model -- such as a large language model -- changes with respect to the amount of training tokens, parameters, and compute. By contrast, Zipf's law posits that the distribution of tokens exhibits a power law tail. Whereas similar claims have been made in more specific settings, we show that the neural scaling law is a consequence of Zipf's law under certain broad assumptions that we reveal systematically. The derivation steps are as follows: We derive Heaps' law on the vocabulary growth from Zipf's law, Hilberg's hypothesis on the entropy scaling from Heaps' law, and the neural scaling from Hilberg's hypothesis. We illustrate these inference steps by a toy example of the Santa Fe process that satisfies all the four statistical laws. + oai:arXiv.org:2512.13491v2 + cs.IT + cs.LG + math.IT + math.ST + stat.TH + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + {\L}ukasz D\k{e}bowski + + + Assessing High-Risk AI Systems under the EU AI Act: From Legal Requirements to Technical Verification + https://arxiv.org/abs/2512.13907 + arXiv:2512.13907v2 Announce Type: replace +Abstract: The implementation of the AI Act requires practical mechanisms to verify compliance with legal obligations, yet concrete and operational mappings from high-level requirements to verifiable assessment activities remain limited, contributing to uneven readiness across Member States. This paper presents a structured mapping that translates high-level AI Act requirements into concrete, implementable verification activities applicable across the AI lifecycle. The mapping is derived through a systematic process in which legal requirements are decomposed into operational sub-requirements and grounded in authoritative standards and recognised practices. 
From this basis, verification activities are identified and characterised along two dimensions: the type of verification performed and the lifecycle target to which it applies. By making explicit the link between regulatory intent and technical and organisational assurance practices, the proposed mapping reduces interpretive uncertainty and provides a reusable reference for consistent, technology-agnostic compliance verification under the AI Act. + oai:arXiv.org:2512.13907v2 + cs.CY + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Alessio Buscemi, Tom Deckenbrunnen, Fahria Kabir, Kateryna Mishchenko, Nishat Mowla + + + FocalComm: Hard Instance-Aware Multi-Agent Perception + https://arxiv.org/abs/2512.13982 + arXiv:2512.13982v2 Announce Type: replace +Abstract: Multi-agent collaborative perception (CP) is a promising paradigm for improving autonomous driving safety, particularly for vulnerable road users like pedestrians, via robust 3D perception. However, existing CP approaches often optimize for vehicle detection performance metrics, underperforming on smaller, safety-critical objects such as pedestrians, where detection failures can be catastrophic. Furthermore, previous CP methods rely on full feature exchange rather than communicating only salient features that help reduce false negatives. To this end, we present FocalComm, a novel collaborative perception framework that focuses on exchanging hard-instance-oriented features among connected collaborative agents. FocalComm consists of two key novel designs: (1) a learnable progressive hard instance mining (HIM) module to extract hard instance-oriented features per agent, and (2) a query-based feature-level (intermediate) fusion technique that dynamically weights these identified features during collaboration. 
We show that FocalComm outperforms state-of-the-art collaborative perception methods on two challenging real-world datasets (V2X-Real and DAIR-V2X) across both vehicle-centric and infrastructure-centric collaborative setups. FocalComm also shows a strong performance gain in pedestrian detection in V2X-Real. + oai:arXiv.org:2512.13982v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Dereje Shenkut, Vijayakumar Bhagavatula + + + FUSION: Forecast-Embedded Agent Scheduling with Service Incentive Optimization over Distributed Air-Ground Edge Networks + https://arxiv.org/abs/2512.14323 + arXiv:2512.14323v2 Announce Type: replace +Abstract: In this paper, we introduce a first-of-its-kind forecasting-driven, incentive-inherent service provisioning framework for distributed air-ground integrated networks that explicitly accounts for human-machine coexistence. In our framework, vehicular-UAV agent pairs (APs) are proactively dispatched to overloaded hotspots to augment the computing capacity of edge servers (ESs), which in turn gives rise to a set of challenges that we jointly address: highly uncertain spatio-temporal workloads, spatio-temporal coupling between road traffic and UAV capacity, forecast-driven contracting risks, and heterogeneous quality-of-service (QoS) requirements of human users (HUs) and machine users (MUs). To address these challenges, we propose FUSION, a two-stage optimization framework, consisting of an offline stage and an online stage. In the offline stage, a liquid neural network-powered module performs multi-step spatio-temporal demand forecasting at distributed ESs, whose outputs are exploited by an enhanced ant colony optimization-based routing scheme and an auction-based incentive-compatible contracting mechanism, to jointly determine ES-AP contracts and pre-planned service routes. 
In the online stage, we formulate the congestion-aware task scheduling as a potential game among HUs, MUs, and heterogeneous ES/UAVs, and devise a potential-guided best-response dynamics algorithm that provably converges to a pure-strategy Nash equilibrium. Experiments on both synthetic and real-world datasets show that FUSION consistently achieves higher social welfare and improved resource utilization, while maintaining latency and energy costs comparable to state-of-the-art baselines and preserving individual rationality, budget balance, and near-truthfulness. + oai:arXiv.org:2512.14323v2 + cs.NI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Houyi Qi, Minghui Liwang, Seyyedali Hosseinalipour, Liqun Fu, Sai Zou, Xianbin Wang, Wei Ni, Yiguang Hong + + + Fine-Tuning of Neural Network Approximate MPC without Retraining via Bayesian Optimization + https://arxiv.org/abs/2512.14350 + arXiv:2512.14350v2 Announce Type: replace +Abstract: Approximate model-predictive control (AMPC) aims to imitate an MPC's behavior with a neural network, removing the need to solve an expensive optimization problem at runtime. However, during deployment, the parameters of the underlying MPC must usually be fine-tuned. This often renders AMPC impractical as it requires repeatedly generating a new dataset and retraining the neural network. Recent work addresses this problem by adapting AMPC without retraining using approximated sensitivities of the MPC's optimization problem. Currently, this adaption must be done by hand, which is labor-intensive and can be unintuitive for high-dimensional systems. To solve this issue, we propose using Bayesian optimization to tune the parameters of AMPC policies based on experimental data. By combining model-based control with direct and local learning, our approach achieves superior performance to nominal AMPC on hardware, with minimal experimentation. 
This allows automatic and data-efficient adaptation of AMPC to new system instances and fine-tuning to cost functions that are difficult to directly implement in MPC. We demonstrate the proposed method in hardware experiments for the swing-up maneuver on an inverted cartpole and yaw control of an under-actuated balancing unicycle robot, a challenging control problem. + oai:arXiv.org:2512.14350v2 + cs.RO + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Henrik Hose, Paul Brunzema, Alexander von Rohr, Alexander Gr\"afe, Angela P. Schoellig, Sebastian Trimpe + + + VLegal-Bench: Cognitively Grounded Benchmark for Vietnamese Legal Reasoning of Large Language Models + https://arxiv.org/abs/2512.14554 + arXiv:2512.14554v3 Announce Type: replace +Abstract: The rapid advancement of large language models (LLMs) has enabled new possibilities for applying artificial intelligence within the legal domain. Nonetheless, the complexity, hierarchical organization, and frequent revisions of Vietnamese legislation pose considerable challenges for evaluating how well these models interpret and utilize legal knowledge. To address this gap, Vietnamese Legal Benchmark (VLegal-Bench) is introduced, the first comprehensive benchmark designed to systematically assess LLMs on Vietnamese legal tasks. Informed by Bloom's cognitive taxonomy, VLegal-Bench encompasses multiple levels of legal understanding through tasks designed to reflect practical usage scenarios. The benchmark comprises 10,450 samples generated through a rigorous annotation pipeline, where legal experts label and cross-validate each instance using our annotation system to ensure every sample is grounded in authoritative legal documents and mirrors real-world legal assistant workflows, including general legal questions and answers, retrieval-augmented generation, multi-step reasoning, and scenario-based problem solving tailored to Vietnamese law. 
By providing a standardized, transparent, and cognitively informed evaluation framework, VLegal-Bench establishes a solid foundation for assessing LLM performance in Vietnamese legal contexts and supports the development of more reliable, interpretable, and ethically aligned AI-assisted legal systems. + oai:arXiv.org:2512.14554v3 + cs.CL + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Nguyen Tien Dong, Minh-Anh Nguyen, Thanh Dat Hoang, Nguyen Tuan Ngoc, Dao Xuan Quang Minh, Phan Phi Hai, Nguyen Thi Ngoc Anh, Dang Van Tu, Binh Vu + + + TUMTraf EMOT: Event-Based Multi-Object Tracking Dataset and Baseline for Traffic Scenarios + https://arxiv.org/abs/2512.14595 + arXiv:2512.14595v2 Announce Type: replace +Abstract: In Intelligent Transportation Systems (ITS), multi-object tracking is primarily based on frame-based cameras. However, these cameras tend to perform poorly under dim lighting and high-speed motion conditions. Event cameras, characterized by low latency, high dynamic range and high temporal resolution, have considerable potential to mitigate these issues. Compared to frame-based vision, there are far fewer studies on event-based vision. To address this research gap, we introduce an initial pilot dataset tailored for event-based ITS, covering vehicle and pedestrian detection and tracking. We establish a tracking-by-detection benchmark with a specialized feature extractor based on this dataset, achieving excellent performance. + oai:arXiv.org:2512.14595v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Mengyu Li, Xingcheng Zhou, Guang Chen, Alois Knoll, Hu Cao + + + CRISP: Contact-Guided Real2Sim from Monocular Video with Planar Scene Primitives + https://arxiv.org/abs/2512.14696 + arXiv:2512.14696v2 Announce Type: replace +Abstract: We introduce CRISP, a method that recovers simulatable human motion and scene geometry from monocular video. 
Prior work on joint human-scene reconstruction relies on data-driven priors and joint optimization with no physics in the loop, or recovers noisy geometry with artifacts that cause motion tracking policies with scene interactions to fail. In contrast, our key insight is to recover convex, clean, and simulation-ready geometry by fitting planar primitives to a point cloud reconstruction of the scene, via a simple clustering pipeline over depth, normals, and flow. To reconstruct scene geometry that might be occluded during interactions, we make use of human-scene contact modeling (e.g., we use human posture to reconstruct the occluded seat of a chair). Finally, we ensure that human and scene reconstructions are physically-plausible by using them to drive a humanoid controller via reinforcement learning. Our approach reduces motion tracking failure rates from 55.2\% to 6.9\% on human-centric video benchmarks (EMDB, PROX), while delivering a 43\% faster RL simulation throughput. We further validate it on in-the-wild videos including casually-captured videos, Internet videos, and even Sora-generated videos. This demonstrates CRISP's ability to generate physically-valid human motion and interaction environments at scale, greatly advancing real-to-sim applications for robotics and AR/VR. + oai:arXiv.org:2512.14696v2 + cs.CV + cs.GR + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Zihan Wang, Jiashun Wang, Jeff Tan, Yiwen Zhao, Jessica Hodgins, Shubham Tulsiani, Deva Ramanan + + + Let the Barbarians In: How AI Can Accelerate Systems Performance Research + https://arxiv.org/abs/2512.14806 + arXiv:2512.14806v4 Announce Type: replace +Abstract: Artificial Intelligence (AI) is beginning to transform the research process by automating the discovery of new solutions. This shift depends on the availability of reliable verifiers, which AI-driven approaches require to validate candidate solutions. 
Research focused on improving systems performance is especially well-suited to this paradigm because system performance problems naturally admit such verifiers: candidates can be implemented in real systems or simulators and evaluated against predefined workloads. We term this iterative cycle of generation, evaluation, and refinement AI-Driven Research for Systems (ADRS). Using several open-source ADRS instances (i.e., OpenEvolve, GEPA, and ShinkaEvolve), we demonstrate across ten case studies (e.g., multi-region cloud scheduling, mixture-of-experts load balancing, LLM-based SQL, transaction scheduling) that ADRS-generated solutions can match or even outperform human state-of-the-art designs. Based on these findings, we outline best practices (e.g., level of prompt specification, amount of feedback, robust evaluation) for effectively using ADRS, and we discuss future research directions and their implications. Although we do not yet have a universal recipe for applying ADRS across all of systems research, we hope our preliminary findings, together with the challenges we identify, offer meaningful guidance for future work as researcher effort shifts increasingly toward problem formulation and strategic oversight. + Note: This paper is an extension of our prior work [14]. It adds extensive evaluation across multiple ADRS frameworks and provides deeper analysis and insights into best practices. 
+ oai:arXiv.org:2512.14806v4 + cs.SE + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Audrey Cheng, Shu Liu, Melissa Pan, Zhifei Li, Shubham Agarwal, Mert Cemri, Bowen Wang, Alexander Krentsel, Tian Xia, Jongseok Park, Shuo Yang, Jeff Chen, Lakshya Agrawal, Ashwin Naren, Shulu Li, Ruiying Ma, Aditya Desai, Jiarong Xing, Koushik Sen, Matei Zaharia, Ion Stoica + + + Softly Constrained Denoisers for Diffusion Models + https://arxiv.org/abs/2512.14980 + arXiv:2512.14980v2 Announce Type: replace +Abstract: Diffusion models struggle to produce samples that respect constraints, a common requirement in scientific applications. Recent approaches have introduced regularization terms in the loss or guidance methods during sampling to enforce such constraints, but they bias the generative model away from the true data distribution. This is a problem, especially when the constraint is misspecified, a common issue when formulating constraints on scientific data. In this paper, instead of changing the loss or the sampling loop, we integrate a guidance-inspired adjustment into the denoiser itself, giving it a soft inductive bias towards constraint-compliant samples. We show that these softly constrained denoisers exploit constraint knowledge to improve compliance over standard denoisers, and maintain enough flexibility to deviate from it when there is misspecification with observed data. + oai:arXiv.org:2512.14980v2 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Victor M. Yeom-Song, Severi Rissanen, Arno Solin, Samuel Kaski, Mingfei Sun + + + A Decision-Theoretic Approach for Managing Misalignment + https://arxiv.org/abs/2512.15584 + arXiv:2512.15584v2 Announce Type: replace +Abstract: When should we delegate decisions to AI systems? 
While the value alignment literature has developed techniques for shaping AI values, less attention has been paid to how to determine, under uncertainty, when imperfect alignment is good enough to justify delegation. We argue that rational delegation requires balancing an agent's value (mis)alignment with its epistemic accuracy and its reach (the acts it has available). This paper introduces a formal, decision-theoretic framework to analyze this tradeoff precisely accounting for a principal's uncertainty about these factors. Our analysis reveals a sharp distinction between two delegation scenarios. First, universal delegation (trusting an agent with any problem) demands near-perfect value alignment and total epistemic trust, conditions rarely met in practice. Second, we show that context-specific delegation can be optimal even with significant misalignment. An agent's superior accuracy or expanded reach may grant access to better overall decision problems, making delegation rational in expectation. We develop a novel scoring framework to quantify this ex ante decision. Ultimately, our work provides a principled method for determining when an AI is aligned enough for a given context, shifting the focus from achieving perfect alignment to managing the risks and rewards of delegation under uncertainty. + oai:arXiv.org:2512.15584v2 + cs.AI + cs.GT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Daniel A. Herrmann, Abinav Chari, Isabelle Qian, Sree Sharvesh, B. A. Levinstein + + + Hard Labels In! Rethinking the Role of Hard Labels in Mitigating Local Semantic Drift + https://arxiv.org/abs/2512.15647 + arXiv:2512.15647v2 Announce Type: replace +Abstract: Soft labels generated by teacher models have become a dominant paradigm for knowledge transfer and recent large-scale dataset distillation such as SRe2L, RDED, LPLD, offering richer supervision than conventional hard labels. 
However, we observe that when only a limited number of crops per image are used, soft labels are prone to local semantic drift: a crop may visually resemble another class, causing its soft embedding to deviate from the ground-truth semantics of the original image. This mismatch between local visual content and global semantic meaning introduces systematic errors and distribution misalignment between training and testing. In this work, we revisit the overlooked role of hard labels and show that, when appropriately integrated, they provide a powerful content-agnostic anchor to calibrate semantic drift. We theoretically characterize the emergence of drift under limited soft-label supervision and demonstrate that hybridizing soft and hard labels restores alignment between visual content and semantic supervision. Building on this insight, we propose a new training paradigm, Hard Label for Alleviating Local Semantic Drift (HALD), which leverages hard labels as intermediate corrective signals while retaining the fine-grained advantages of soft labels. Extensive experiments on dataset distillation and large-scale conventional classification benchmarks validate our approach, showing consistent improvements in generalization. On ImageNet-1K, we achieve 42.7% accuracy with only 285M storage for soft labels, outperforming prior state-of-the-art LPLD by 9.0%. Our findings re-establish the importance of hard labels as a complementary tool, and call for a rethinking of their role in soft-label-dominated training. 
+ oai:arXiv.org:2512.15647v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Jiacheng Cui, Bingkui Tong, Xinyue Bi, Xiaohan Zhao, Jiacheng Liu, Zhiqiang Shen + + + COBRA: Catastrophic Bit-flip Reliability Analysis of State-Space Models + https://arxiv.org/abs/2512.15778 + arXiv:2512.15778v2 Announce Type: replace +Abstract: State-space models (SSMs), exemplified by the Mamba architecture, have recently emerged as state-of-the-art sequence-modeling frameworks, offering linear-time scalability together with strong performance in long-context settings. Owing to their unique combination of efficiency, scalability, and expressive capacity, SSMs have become compelling alternatives to transformer-based models, which suffer from the quadratic computational and memory costs of attention mechanisms. As SSMs are increasingly deployed in real-world applications, it is critical to assess their susceptibility to both software- and hardware-level threats to ensure secure and reliable operation. Among such threats, hardware-induced bit-flip attacks (BFAs) pose a particularly severe risk by corrupting model parameters through memory faults, thereby undermining model accuracy and functional integrity. To investigate this vulnerability, we introduce RAMBO, the first BFA framework specifically designed to target Mamba-based architectures. Through experiments on the Mamba-1.4b model with LAMBADA benchmark, a cloze-style word-prediction task, we demonstrate that flipping merely a single critical bit can catastrophically reduce accuracy from 74.64% to 0% and increase perplexity from 18.94 to 3.75 x 10^6. These results demonstrate the pronounced fragility of SSMs to adversarial perturbations. 
+ oai:arXiv.org:2512.15778v2 + cs.CR + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Sanjay Das, Swastik Bhattacharya, Shamik Kundu, Arnab Raha, Souvik Kundu, Kanad Basu + + + DSO: Direct Steering Optimization for Bias Mitigation + https://arxiv.org/abs/2512.15926 + arXiv:2512.15926v2 Announce Type: replace +Abstract: Generative models are often deployed to make decisions on behalf of users, such as vision-language models (VLMs) identifying which person in a room is a doctor to help visually impaired individuals. Yet, VLM decisions are influenced by the perceived demographic attributes of people in the input, which can lead to biased outcomes like failing to identify women as doctors. Moreover, when reducing bias leads to performance loss, users may have varying needs for balancing bias mitigation with overall model capabilities, highlighting the demand for methods that enable controllable bias reduction during inference. Activation steering is a popular approach for inference-time controllability that has shown potential in inducing safer behavior in large language models (LLMs). However, we observe that current steering methods struggle to correct biases, where equiprobable outcomes across demographic groups are required. To address this, we propose Direct Steering Optimization (DSO) which uses reinforcement learning to find linear transformations for steering activations, tailored to mitigate bias while maintaining control over model performance. We demonstrate that DSO achieves state-of-the-art trade-off between fairness and capabilities on both VLMs and LLMs, while offering practitioners inference-time control over the trade-off. Overall, our work highlights the benefit of designing steering strategies that are directly optimized to control model behavior, providing more effective bias intervention than methods that rely on pre-defined heuristics for controllability. 
+ oai:arXiv.org:2512.15926v2 + cs.LG + cs.CL + cs.CY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Lucas Monteiro Paes, Nivedha Sivakumar, Yinong Oliver Wang, Masha Fedzechkina Donaldson, Barry-John Theobald, Luca Zappella, Nicholas Apostoloff + + + PDE-Agent: A toolchain-augmented multi-agent framework for PDE solving + https://arxiv.org/abs/2512.16214 + arXiv:2512.16214v2 Announce Type: replace +Abstract: Solving Partial Differential Equations (PDEs) is a cornerstone of engineering and scientific research. Traditional methods for PDE solving are cumbersome, relying on manual setup and domain expertise. While Physics-Informed Neural Networks (PINNs) introduced end-to-end neural network-based solutions, and frameworks like DeepXDE further enhanced automation, these approaches still depend on expert knowledge and lack full autonomy. In this work, we frame PDE solving as tool invocation via LLM-driven agents and introduce PDE-Agent, the first toolchain-augmented multi-agent collaboration framework, inheriting the reasoning capacity of LLMs and the controllability of external tools and enabling automated PDE solving from natural language descriptions. PDE-Agent leverages the strengths of multi-agent and multi-tool collaboration through two key innovations: (1) A Prog-Act framework with graph memory for multi-agent collaboration, which enables effective dynamic planning and error correction via dual-loop mechanisms (localized fixes and global revisions). (2) A Resource-Pool integrated with a tool-parameter separation mechanism for multi-tool collaboration. This centralizes the management of runtime artifacts and resolves inter-tool dependency gaps in existing frameworks. To validate and evaluate this new paradigm for PDE solving, we develop PDE-Bench, a multi-type PDE benchmark for agent-based tool collaborative solving, and propose multi-level metrics for assessing tool coordination. 
Evaluations verify that PDE-Agent exhibits superior applicability and performance in complex multi-step, cross-step dependent tasks. This new paradigm of toolchain-augmented multi-agent PDE solving will further advance future developments in automated scientific computing. Our source code and dataset will be made publicly available. + oai:arXiv.org:2512.16214v2 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jianming Liu, Ren Zhu, Jian Xu, Kun Ding, Xu-Yao Zhang, Gaofeng Meng, Cheng-Lin Liu + + + LoPA: Scaling dLLM Inference via Lookahead Parallel Decoding + https://arxiv.org/abs/2512.16229 + arXiv:2512.16229v2 Announce Type: replace +Abstract: Diffusion Large Language Models (dLLMs) have demonstrated significant potential for high-speed inference. However, current confidence-driven decoding strategies are constrained by limited parallelism, typically achieving only 1--3 tokens per forward pass (TPF). In this work, we identify that the degree of parallelism during dLLM inference is highly sensitive to the Token Filling Order (TFO). Then, we introduce Lookahead PArallel Decoding (LoPA), a training-free, plug-and-play algorithm, to identify a superior TFO and hence accelerate inference. LoPA concurrently explores distinct candidate TFOs via parallel branches, and selects the one with the highest potential for future parallelism based on branch confidence. We apply LoPA to the state-of-the-art D2F model and observe a substantial enhancement in decoding efficiency. Notably, LoPA increases the TPF of D2F-Dream to 10.1 on GSM8K while maintaining performance superior to the Dream baseline. Furthermore, to facilitate this unprecedented degree of parallelism, we develop a specialized multi-device inference system featuring Branch Parallelism (BP), which achieves a single-sample throughput of 1073.9 tokens per second under multi-GPU deployment. The code is available at https://github.com/zhijie-group/LoPA. 
+ oai:arXiv.org:2512.16229v2 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Chenkai Xu, Yijie Jin, Jiajun Li, Yi Tu, Guoping Long, Dandan Tu, Mingcong Song, Hongjie Si, Tianqi Hou, Junchi Yan, Zhijie Deng + + + Coarse-to-Fine Open-Set Graph Node Classification with Large Language Models + https://arxiv.org/abs/2512.16244 + arXiv:2512.16244v2 Announce Type: replace +Abstract: Developing open-set classification methods capable of classifying in-distribution (ID) data while detecting out-of-distribution (OOD) samples is essential for deploying graph neural networks (GNNs) in open-world scenarios. Existing methods typically treat all OOD samples as a single class, despite real-world applications, especially high-stake settings such as fraud detection and medical diagnosis, demanding deeper insights into OOD samples, including their probable labels. This raises a critical question: can OOD detection be extended to OOD classification without true label information? To address this question, we propose a Coarse-to-Fine open-set Classification (CFC) framework that leverages large language models (LLMs) for graph datasets. CFC consists of three key components: a coarse classifier that uses LLM prompts for OOD detection and outlier label generation, a GNN-based fine classifier trained with OOD samples identified by the coarse classifier for enhanced OOD detection and ID classification, and refined OOD classification achieved through LLM prompts and post-processed OOD labels. Unlike methods that rely on synthetic or auxiliary OOD samples, CFC employs semantic OOD instances that are genuinely out-of-distribution based on their inherent meaning, improving interpretability and practical utility. Experimental results show that CFC improves OOD detection by ten percent over state-of-the-art methods on graph and text domains and achieves up to seventy percent accuracy in OOD classification on graph datasets. 
+ oai:arXiv.org:2512.16244v2 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Xueqi Ma, Xingjun Ma, Sarah Monazam Erfani, Danilo Mandic, James Bailey + + + Love, Lies, and Language Models: Investigating AI's Role in Romance-Baiting Scams + https://arxiv.org/abs/2512.16280 + arXiv:2512.16280v2 Announce Type: replace +Abstract: Romance-baiting scams have become a major source of financial and emotional harm worldwide. These operations are run by organized crime syndicates that traffic thousands of people into forced labor, requiring them to build emotional intimacy with victims over weeks of text conversations before pressuring them into fraudulent cryptocurrency investments. Because the scams are inherently text-based, they raise urgent questions about the role of Large Language Models (LLMs) in both current and future automation. + We investigate this intersection by interviewing 145 insiders and 5 scam victims, performing a blinded long-term conversation study comparing LLM scam agents to human operators, and executing an evaluation of commercial safety filters. Our findings show that LLMs are already widely deployed within scam organizations, with 87% of scam labor consisting of systematized conversational tasks readily susceptible to automation. In a week-long study, an LLM agent not only elicited greater trust from study participants (p=0.007) but also achieved higher compliance with requests than human operators (46% vs. 18% for humans). Meanwhile, popular safety filters detected 0.0% of romance baiting dialogues. Together, these results suggest that romance-baiting scams may be amenable to full-scale LLM automation, while existing defenses remain inadequate to prevent their expansion. 
+ oai:arXiv.org:2512.16280v2 + cs.CR + cs.AI + cs.CY + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Usenix Security Symposium 2026 + Gilad Gressel, Rahul Pankajakshan, Shir Rozenfeld, Ling Li, Ivan Franceschini, Krishnashree Achuthan, Yisroel Mirsky + + + In-Context Probing for Membership Inference in Fine-Tuned Language Models + https://arxiv.org/abs/2512.16292 + arXiv:2512.16292v2 Announce Type: replace +Abstract: Membership inference attacks (MIAs) pose a critical privacy threat to fine-tuned large language models (LLMs), especially when models are adapted to domain-specific tasks using sensitive data. While prior black-box MIA techniques rely on confidence scores or token likelihoods, these signals are often entangled with a sample's intrinsic properties - such as content difficulty or rarity - leading to poor generalization and low signal-to-noise ratios. In this paper, we propose ICP-MIA, a novel MIA framework grounded in the theory of training dynamics, particularly the phenomenon of diminishing returns during optimization. We introduce the Optimization Gap as a fundamental signal of membership: at convergence, member samples exhibit minimal remaining loss-reduction potential, while non-members retain significant potential for further optimization. To estimate this gap in a black-box setting, we propose In-Context Probing (ICP), a training-free method that simulates fine-tuning-like behavior via strategically constructed input contexts. We propose two probing strategies: reference-data-based (using semantically similar public samples) and self-perturbation (via masking or generation). Experiments on three tasks and multiple LLMs show that ICP-MIA significantly outperforms prior black-box MIAs, particularly at low false positive rates. We further analyze how reference data alignment, model type, PEFT configurations, and training schedules affect attack effectiveness. 
Our findings establish ICP-MIA as a practical and theoretically grounded framework for auditing privacy risks in deployed LLMs. + oai:arXiv.org:2512.16292v2 + cs.CR + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Zhexi Lu, Hongliang Chi, Nathalie Baracaldo, Swanand Ravindra Kadhe, Yuseok Jeon, Lei Yu + + + Adaptation of Agentic AI + https://arxiv.org/abs/2512.16301 + arXiv:2512.16301v2 Announce Type: replace +Abstract: Cutting-edge agentic AI systems are built on foundation models that can be adapted to plan, reason, and interact with external tools to perform increasingly complex and specialized tasks. As these systems grow in capability and scope, adaptation becomes a central mechanism for improving performance, reliability, and generalization. In this paper, we unify the rapidly expanding research landscape into a systematic framework that spans both agent adaptations and tool adaptations. We further decompose these into tool-execution-signaled and agent-output-signaled forms of agent adaptation, as well as agent-agnostic and agent-supervised forms of tool adaptation. We demonstrate that this framework helps clarify the design space of adaptation strategies in agentic AI, makes their trade-offs explicit, and provides practical guidance for selecting or switching among strategies during system design. We then review the representative approaches in each category, analyze their strengths and limitations, and highlight key open challenges and future opportunities. Overall, this paper aims to offer a conceptual foundation and practical roadmap for researchers and practitioners seeking to build more capable, efficient, and reliable agentic AI systems. 
+ oai:arXiv.org:2512.16301v2 + cs.AI + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Pengcheng Jiang, Jiacheng Lin, Zhiyi Shi, Zifeng Wang, Luxi He, Yichen Wu, Ming Zhong, Peiyang Song, Qizheng Zhang, Heng Wang, Xueqiang Xu, Hanwen Xu, Pengrui Han, Dylan Zhang, Jiashuo Sun, Chaoqi Yang, Kun Qian, Tian Wang, Changran Hu, Manling Li, Quanzheng Li, Hao Peng, Sheng Wang, Jingbo Shang, Chao Zhang, Jiaxuan You, Liyuan Liu, Pan Lu, Yu Zhang, Heng Ji, Yejin Choi, Dawn Song, Jimeng Sun, Jiawei Han + + + Pretrained Battery Transformer (PBT): A battery life prediction foundation model + https://arxiv.org/abs/2512.16334 + arXiv:2512.16334v2 Announce Type: replace +Abstract: Early prediction of battery cycle life is essential for accelerating battery research, manufacturing, and deployment. Although machine learning methods have shown encouraging results, progress is hindered by data scarcity and heterogeneity arising from diverse aging conditions. In other fields, foundation models (FMs) trained on diverse datasets have achieved broad generalization through transfer learning, but no FMs have been reported for battery cycle life prediction yet. Here we present the Pretrained Battery Transformer (PBT), the first FM for battery life prediction, developed through domain-knowledge-encoded mixture-of-expert layers. Validated on the largest public battery life database, PBT learns transferable representations from 13 lithium-ion battery (LIB) datasets, outperforming existing models by an average of 19.8%. With transfer learning, PBT achieves state-of-the-art performance across 15 diverse datasets encompassing various operating conditions, formation protocols, and chemistries. This work establishes a foundation model pathway for battery lifetime prediction, paving the way toward universal battery lifetime prediction systems. 
+ oai:arXiv.org:2512.16334v2 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Ruifeng Tan, Weixiang Hong, Jia Li, Jiaqiang Huang, Tong-Yi Zhang + + + Adaptive Frequency Domain Alignment Network for Medical image segmentation + https://arxiv.org/abs/2512.16393 + arXiv:2512.16393v2 Announce Type: replace +Abstract: High-quality annotated data plays a crucial role in achieving accurate segmentation. However, such data for medical image segmentation are often scarce due to the time-consuming and labor-intensive nature of manual annotation. To address this challenge, we propose the Adaptive Frequency Domain Alignment Network (AFDAN)--a novel domain adaptation framework designed to align features in the frequency domain and alleviate data scarcity. AFDAN integrates three core components to enable robust cross-domain knowledge transfer: an Adversarial Domain Learning Module that transfers features from the source to the target domain; a Source-Target Frequency Fusion Module that blends frequency representations across domains; and a Spatial-Frequency Integration Module that combines both frequency and spatial features to further enhance segmentation accuracy across domains. Extensive experiments demonstrate the effectiveness of AFDAN: it achieves an Intersection over Union (IoU) of 90.9% for vitiligo segmentation in the newly constructed VITILIGO2025 dataset and a competitive IoU of 82.6% on the retinal vessel segmentation benchmark DRIVE, surpassing existing state-of-the-art approaches. 
+ oai:arXiv.org:2512.16393v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhanwei Li, Liang Li, Jiawan Zhang + + + Navigating the Reality Gap: Privacy-Preserving Adaptation of ASR for Challenging Low-Resource Domains + https://arxiv.org/abs/2512.16401 + arXiv:2512.16401v2 Announce Type: replace +Abstract: Automatic Speech Recognition (ASR) holds immense potential to assist in clinical documentation and patient report generation, particularly in resource-constrained regions. However, deployment is currently hindered by a technical deadlock: a severe "Reality Gap" between laboratory performance and noisy, real-world clinical audio, coupled with strict privacy and resource constraints. We quantify this gap, showing that a robust multilingual model (IndicWav2Vec) degrades to a 40.94% WER on rural clinical data from India, rendering it unusable. To address this, we explore a zero-data-exfiltration framework enabling localized, continual adaptation via Low-Rank Adaptation (LoRA). We conduct a rigorous investigative study of continual learning strategies, characterizing the trade-offs between data-driven and parameter-driven stability. Our results demonstrate that multi-domain Experience Replay (ER) yields the primary performance gains, achieving a 17.1% relative improvement in target WER and reducing catastrophic forgetting by 55% compared to naive adaptation. Furthermore, we observed that standard Elastic Weight Consolidation (EWC) faced numerical stability challenges when applied to LoRA in noisy environments. Our experiments show that a stabilized, linearized formulation effectively controls gradient magnitudes and enables stable convergence. Finally, we verify via a domain-specific spot check that acoustic adaptation is a fundamental prerequisite for usability which cannot be bypassed by language models alone. 
+ oai:arXiv.org:2512.16401v2 + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Darshil Chauhan, Adityasinh Solanki, Vansh Patel, Kanav Kapoor, Ritvik Jain, Aditya Bansal, Pratik Narang, Dhruv Kumar + + + Skeleton-Snippet Contrastive Learning with Multiscale Feature Fusion for Action Localization + https://arxiv.org/abs/2512.16504 + arXiv:2512.16504v2 Announce Type: replace +Abstract: The self-supervised pretraining paradigm has achieved great success in learning 3D action representations for skeleton-based action recognition using contrastive learning. However, learning effective representations for skeleton-based temporal action localization remains challenging and underexplored. Unlike video-level action recognition, detecting action boundaries requires temporally sensitive features that capture subtle differences between adjacent frames where labels change. To this end, we formulate a snippet discrimination pretext task for self-supervised pretraining, which densely projects skeleton sequences into non-overlapping segments and promotes features that distinguish them across videos via contrastive learning. Additionally, we build on strong backbones of skeleton-based action recognition models by fusing intermediate features with a U-shaped module to enhance feature resolution for frame-level localization. Our approach consistently improves existing skeleton-based contrastive learning methods for action localization on BABEL across diverse subsets and evaluation protocols. We also achieve state-of-the-art transfer learning performance on PKUMMD with pretraining on NTU RGB+D and BABEL. 
+ oai:arXiv.org:2512.16504v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Qiushuo Cheng, Jingjing Liu, Catherine Morgan, Alan Whone, Majid Mirmehdi + + + Scaling Laws for Energy Efficiency of Local LLMs + https://arxiv.org/abs/2512.16531 + arXiv:2512.16531v2 Announce Type: replace +Abstract: Deploying local large language models and vision-language models on edge devices requires balancing accuracy with constrained computational and energy budgets. Although graphics processors dominate modern artificial-intelligence deployment, most consumer hardware--including laptops, desktops, industrial controllers, and embedded systems--relies on central processing units. Despite this, the computational laws governing central-processing-unit-only inference for local language and vision-language workloads remain largely unexplored. We systematically benchmark large language and vision-language models on two representative central-processing-unit tiers widely used for local inference: a MacBook Pro M2, reflecting mainstream laptop-class deployment, and a Raspberry Pi 5, representing constrained, low-power embedded settings. Using a unified methodology based on continuous sampling of processor and memory usage together with area-under-curve integration, we characterize how computational load scales with input text length for language models and with image resolution for vision-language models. We uncover two empirical scaling laws: (1) computational cost for language-model inference scales approximately linearly with token length; and (2) vision-language models exhibit a preprocessing-driven "resolution knee", where compute remains constant above an internal resolution clamp and decreases sharply below it. Beyond these laws, we show that quantum-inspired compression reduces processor and memory usage by up to 71.9% and energy consumption by up to 62%, while preserving or improving semantic accuracy. 
These results provide a systematic quantification of multimodal central-processing-unit-only scaling for local language and vision-language workloads, and they identify model compression and input-resolution preprocessing as effective, low-cost levers for sustainable edge inference. + oai:arXiv.org:2512.16531v2 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ander Alvarez, Alessandro Genuardi, Nilotpal Sinha, Antonio Tiene, Mikail Okyay, Bakbergen Ryskulov, David Montero, Samuel Mugel, Román Orús + + + Microsoft Academic Graph Information Retrieval for Research Recommendation and Assistance + https://arxiv.org/abs/2512.16661 + arXiv:2512.16661v2 Announce Type: replace +Abstract: In today's information-driven world, access to scientific publications has become increasingly easy. At the same time, filtering through the massive volume of available research has become more challenging than ever. Graph Neural Networks (GNNs) and graph attention mechanisms have shown strong effectiveness in searching large-scale information databases, particularly when combined with modern large language models. In this paper, we propose an Attention-Based Subgraph Retriever, a GNN-as-retriever model that applies attention-based pruning to extract a refined subgraph, which is then passed to a large language model for advanced knowledge reasoning. + oai:arXiv.org:2512.16661v2 + cs.IR + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Shikshya Shiwakoti, Samuel Goldsmith, Ujjwal Pandit + + + Dual Computational Horizons: Incompleteness and Unpredictability in Intelligent Systems + https://arxiv.org/abs/2512.16707 + arXiv:2512.16707v2 Announce Type: replace +Abstract: We formalize two independent computational limitations that constrain algorithmic intelligence: formal incompleteness and dynamical unpredictability. 
The former limits the deductive power of consistent reasoning systems while the latter bounds long-term prediction under finite precision. We show that these two extrema together impose structural bounds on an agent's ability to reason about its own predictive capabilities. In particular, an algorithmic agent cannot verify its own maximal prediction horizon universally. This perspective clarifies inherent trade-offs between reasoning, prediction, and self-analysis in intelligent systems. The construction presented here constitutes one representative instance of a broader logical class of such limitations. + oai:arXiv.org:2512.16707v2 + cs.AI + cs.LO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Abhisek Ganguly + + + Discovering and Learning Probabilistic Models of Black-Box AI Capabilities + https://arxiv.org/abs/2512.16733 + arXiv:2512.16733v2 Announce Type: replace +Abstract: Black-box AI (BBAI) systems such as foundational models are increasingly being used for sequential decision making. To ensure that such systems are safe to operate and deploy, it is imperative to develop efficient methods that can provide a sound and interpretable representation of the BBAI's capabilities. This paper shows that PDDL-style representations can be used to efficiently learn and model an input BBAI's planning capabilities. It uses the Monte-Carlo tree search paradigm to systematically create test tasks, acquire data, and prune the hypothesis space of possible symbolic models. Learned models describe a BBAI's capabilities, the conditions under which they can be executed, and the possible outcomes of executing them along with their associated probabilities. Theoretical results show soundness, completeness and convergence of the learned models. Empirical results with multiple BBAI systems illustrate the scope, efficiency, and accuracy of the presented methods. 
+ oai:arXiv.org:2512.16733v2 + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Daniel Bramblett, Rushang Karia, Adrian Ciotinga, Ruthvick Suresh, Pulkit Verma, YooJung Choi, Siddharth Srivastava + + + Learning Confidence Ellipsoids and Applications to Robust Subspace Recovery + https://arxiv.org/abs/2512.16875 + arXiv:2512.16875v2 Announce Type: replace +Abstract: We study the problem of finding confidence ellipsoids for an arbitrary distribution in high dimensions. Given samples from a distribution $D$ and a confidence parameter $\alpha$, the goal is to find the smallest volume ellipsoid $E$ which has probability mass $\Pr_{D}[E] \ge 1-\alpha$. Ellipsoids are a highly expressive class of confidence sets as they can capture correlations in the distribution, and can approximate any convex set. This problem has been studied in many different communities. In statistics, this is the classic minimum volume estimator introduced by Rousseeuw as a robust non-parametric estimator of location and scatter. However in high dimensions, it becomes NP-hard to obtain any non-trivial approximation factor in volume when the condition number $\beta$ of the ellipsoid (ratio of the largest to the smallest axis length) goes to $\infty$. This motivates the focus of our paper: can we efficiently find confidence ellipsoids with volume approximation guarantees when compared to ellipsoids of bounded condition number $\beta$? + Our main result is a polynomial time algorithm that finds an ellipsoid $E$ whose volume is within a $O(\beta)^{\gamma d}$ multiplicative factor of the volume of best $\beta$-conditioned ellipsoid while covering at least $1-O(\alpha/\gamma)$ probability mass for any $\gamma < \alpha$. We complement this with a computational hardness result that shows that such a dependence seems necessary up to constants in the exponent. 
The algorithm and analysis uses the rich primal-dual structure of the minimum volume enclosing ellipsoid and the geometric Brascamp-Lieb inequality. As a consequence, we obtain the first polynomial time algorithm with approximation guarantees on worst-case instances of the robust subspace recovery problem. + oai:arXiv.org:2512.16875v2 + cs.DS + cs.LG + math.ST + stat.ML + stat.TH + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Chao Gao, Liren Shan, Vaidehi Srinivas, Aravindan Vijayaraghavan + + + Exploration vs Exploitation: Rethinking RLVR through Clipping, Entropy, and Spurious Reward + https://arxiv.org/abs/2512.16912 + arXiv:2512.16912v2 Announce Type: replace +Abstract: This paper examines the exploration-exploitation trade-off in reinforcement learning with verifiable rewards (RLVR), a framework for improving the reasoning of Large Language Models (LLMs). Recent studies suggest that RLVR can elicit strong mathematical reasoning in LLMs through two seemingly paradoxical mechanisms: spurious rewards, which suppress exploitation by rewarding outcomes unrelated to the ground truth, and entropy minimization, which suppresses exploration by pushing the model toward more confident and deterministic outputs, highlighting a puzzling dynamic: both discouraging exploitation and discouraging exploration improve reasoning performance, yet the underlying principles that reconcile these effects remain poorly understood. We focus on two fundamental questions: (i) how policy entropy relates to performance, and (ii) whether spurious rewards yield gains, potentially through the interplay of clipping bias and model contamination. Our results show that clipping bias under spurious rewards reduces policy entropy, leading to more confident and deterministic outputs, while entropy minimization alone is insufficient for improvement. 
We further propose a reward-misalignment model explaining why spurious rewards can enhance performance beyond contaminated settings. Our findings clarify the mechanisms behind spurious-reward benefits and provide principles for more effective RLVR training. + oai:arXiv.org:2512.16912v2 + cs.LG + cs.AI + cs.CL + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Peter Chen, Xiaopeng Li, Ziniu Li, Wotao Yin, Xi Chen, Tianyi Lin + + + AdaTooler-V: Adaptive Tool-Use for Images and Videos + https://arxiv.org/abs/2512.16918 + arXiv:2512.16918v2 Announce Type: replace +Abstract: Recent advances have shown that multimodal large language models (MLLMs) benefit from multimodal interleaved chain-of-thought (CoT) with vision tool interactions. However, existing open-source models often exhibit blind tool-use reasoning patterns, invoking vision tools even when they are unnecessary, which significantly increases inference overhead and degrades model performance. To this end, we propose AdaTooler-V, an MLLM that performs adaptive tool-use by determining whether a visual problem truly requires tools. First, we introduce AT-GRPO, a reinforcement learning algorithm that adaptively adjusts reward scales based on the Tool Benefit Score of each sample, encouraging the model to invoke tools only when they provide genuine improvements. Moreover, we construct two datasets to support training: AdaTooler-V-CoT-100k for SFT cold start and AdaTooler-V-300k for RL with verifiable rewards across single-image, multi-image, and video data. Experiments across twelve benchmarks demonstrate the strong reasoning capability of AdaTooler-V, outperforming existing methods in diverse visual reasoning tasks. Notably, AdaTooler-V-7B achieves an accuracy of 89.8\% on the high-resolution benchmark V*, surpassing the commercial proprietary model GPT-4o and Gemini 1.5 Pro. All code, models, and data are released. 
+ oai:arXiv.org:2512.16918v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Chaoyang Wang, Kaituo Feng, Dongyang Chen, Zhongyu Wang, Zhixun Li, Sicheng Gao, Meng Meng, Xu Zhou, Manyuan Zhang, Yuzhang Shang, Xiangyu Yue + + + Compression is Routing: Reconstruction Error as an Intrinsic Signal for Modular Language Models + https://arxiv.org/abs/2512.16963 + arXiv:2512.16963v2 Announce Type: replace +Abstract: Current Large Language Models (LLMs) face three major challenges: context length limitations, high inference costs, and catastrophic forgetting during continual learning. While Mixture-of-Experts (MoE) architectures mitigate some of these conflicts, their routing mechanisms typically rely on explicitly trained auxiliary classifiers. This not only increases system complexity but also often lacks interpretability when handling mixed-domain inputs. + Building upon the premise that ``Compression is Intelligence,'' this paper proposes a novel architectural philosophy: Compression is Routing. We trained an 87M-parameter end-to-end Transformer Autoencoder, achieving a 64x sequence length compression (compressing 512 tokens into 8 latent vectors). Experimental results demonstrate that this compressor possesses extreme domain discriminative capability: it achieves a reconstruction accuracy of 99.47% on the in-domain (code) validation set; accuracy drops sharply to 47.76% on a semi-out-of-distribution domain (Wiki text); and further plummets to just 0.57% on a fully out-of-distribution domain (random sequences). + This extreme and systematic performance discrepancy establishes the validity of reconstruction error as an Intrinsic Distribution Fingerprint. Based on this, we propose that expert modules can be automatically scheduled using reconstruction residuals directly, without the need for explicit gating networks. This mechanism offers excellent scalability. 
Furthermore, this architecture provides a new perspective on ``VRAM compression'' for handling ultra-long contexts. This report aims to verify the physical validity of this foundational architecture, offering a new research perspective for the next generation of scalable modular neural networks. + oai:arXiv.org:2512.16963v2 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhongpan Tang + + + 4D-RGPT: Toward Region-level 4D Understanding via Perceptual Distillation + https://arxiv.org/abs/2512.17012 + arXiv:2512.17012v2 Announce Type: replace +Abstract: Despite advances in Multimodal LLMs (MLLMs), their ability to reason over 3D structures and temporal dynamics remains limited, constrained by weak 4D perception and temporal understanding. Existing 3D and 4D Video Question Answering (VQA) benchmarks also emphasize static scenes and lack region-level prompting. We tackle these issues by introducing: (a) 4D-RGPT, a specialized MLLM designed to capture 4D representations from video inputs with enhanced temporal perception; (b) Perceptual 4D Distillation (P4D), a training framework that transfers 4D representations from a frozen expert model into 4D-RGPT for comprehensive 4D perception; and (c) R4D-Bench, a benchmark for depth-aware dynamic scenes with region-level prompting, built via a hybrid automated and human-verified pipeline. Our 4D-RGPT achieves notable improvements on both existing 4D VQA benchmarks and the proposed R4D-Bench benchmark. + oai:arXiv.org:2512.17012v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Chiao-An Yang, Ryo Hachiuma, Sifei Liu, Subhashree Radhakrishnan, Raymond A. 
Yeh, Yu-Chiang Frank Wang, Min-Hung Chen + + + Predictive Modeling of Maritime Radar Data Using Transformer Architecture + https://arxiv.org/abs/2512.17098 + arXiv:2512.17098v2 Announce Type: replace +Abstract: Maritime autonomous systems require robust predictive capabilities to anticipate vessel motion and environmental dynamics. While transformer architectures have revolutionized AIS-based trajectory prediction and demonstrated feasibility for sonar frame forecasting, their application to maritime radar frame prediction remains unexplored, creating a critical gap given radar's all-weather reliability for navigation. This survey systematically reviews predictive modeling approaches relevant to maritime radar, with emphasis on transformer architectures for spatiotemporal sequence forecasting, where existing representative methods are analyzed according to data type, architecture, and prediction horizon. Our review shows that, while the literature has demonstrated transformer-based frame prediction for sonar sensing, no prior work addresses transformer-based maritime radar frame prediction, thereby defining a clear research gap and motivating a concrete research direction for future work in this area. + oai:arXiv.org:2512.17098v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Bjorna Qesaraku, Jan Steckel + + + UniCoMTE: A Universal Counterfactual Framework for Explaining Time-Series Classifiers on ECG Data + https://arxiv.org/abs/2512.17100 + arXiv:2512.17100v2 Announce Type: replace +Abstract: Machine learning models, particularly deep neural networks, have demonstrated strong performance in classifying complex time series data. However, their black-box nature limits trust and adoption, especially in high-stakes domains such as healthcare. To address this challenge, we introduce UniCoMTE, a model-agnostic framework for generating counterfactual explanations for multivariate time series classifiers. 
The framework identifies temporal features that most heavily influence a model's prediction by modifying the input sample and assessing its impact on the model's prediction. UniCoMTE is compatible with a wide range of model architectures and operates directly on raw time series inputs. In this study, we evaluate UniCoMTE's explanations on a time series ECG classifier. We quantify explanation quality by comparing our explanations' comprehensibility to comprehensibility of established techniques (LIME and SHAP) and assessing their generalizability to similar samples. Furthermore, clinical utility is assessed through a questionnaire completed by medical experts who review counterfactual explanations presented alongside original ECG samples. Results show that our approach produces concise, stable, and human-aligned explanations that outperform existing methods in both clarity and applicability. By linking model predictions to meaningful signal patterns, the framework advances the interpretability of deep learning models for real-world time series applications. + oai:arXiv.org:2512.17100v2 + cs.LG + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Justin Li, Efe Sencan, Jasper Zheng Duan, Vitus J. Leung, Stephen Tsaur, Ayse K. Coskun + + + Towards Senior-Robot Interaction: Reactive Robot Dog Gestures + https://arxiv.org/abs/2512.17136 + arXiv:2512.17136v2 Announce Type: replace +Abstract: As the global population ages, many seniors face the problem of loneliness. Companion robots offer a potential solution. However, current companion robots often lack advanced functionality, while task-oriented robots are not designed for social interaction, limiting their suitability and acceptance by seniors. Our work introduces a senior-oriented system for quadruped robots that allows for more intuitive user input and provides more socially expressive output. 
For user input, we implemented a MediaPipe-based module for hand gesture and head movement recognition, enabling control without a remote. For output, we designed and trained robotic dog gestures using curriculum-based reinforcement learning in Isaac Gym, progressing from simple standing to three-legged balancing and leg extensions, and more. The final tests achieved over 95\% success on average in simulation, and we validated a key social gesture (the paw-lift) on a Unitree robot. Real-world tests demonstrated the feasibility and social expressiveness of this framework, while also revealing sim-to-real challenges in joint compliance, load distribution, and balance control. These contributions advance the development of practical quadruped robots as social companions for the senior and outline pathways for sim-to-real adaptation and inform future user studies. + oai:arXiv.org:2512.17136v2 + cs.RO + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Chunyang Meng, Eduardo B. Sandoval, Ricardo Sosa, Francisco Cruz + + + Solomonoff-Inspired Hypothesis Ranking with LLMs for Prediction Under Uncertainty + https://arxiv.org/abs/2512.17145 + arXiv:2512.17145v2 Announce Type: replace +Abstract: Reasoning under uncertainty is a key challenge in AI, especially for real-world tasks, where problems with sparse data demands systematic generalisation. Existing approaches struggle to balance accuracy and simplicity when evaluating multiple candidate solutions. We propose a Solomonoff-inspired method that weights LLM-generated hypotheses by simplicity and predictive fit. Applied to benchmark (Mini-ARC) tasks, our method produces Solomonoff-weighted mixtures for per-cell predictions, yielding conservative, uncertainty-aware outputs even when hypotheses are noisy or partially incorrect. 
Compared to Bayesian Model Averaging (BMA), Solomonoff scoring spreads probability more evenly across competing hypotheses, while BMA concentrates weight on the most likely but potentially flawed candidates. Across tasks, this highlights the value of algorithmic information-theoretic priors for interpretable, reliable multi-hypothesis reasoning under uncertainty. + oai:arXiv.org:2512.17145v2 + cs.AI + cs.IT + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Josh Barber (QUT), Rourke Young (QUT), Cameron Coombe (QUT,CSIRO), Will Browne (QUT) + + + MAPPO-LCR: Multi-Agent Proximal Policy Optimization with Local Cooperation Reward in Spatial Public Goods Games + https://arxiv.org/abs/2512.17187 + arXiv:2512.17187v2 Announce Type: replace +Abstract: Spatial public goods games model collective dilemmas where individual payoffs depend on population-level strategy configurations. Most existing studies rely on evolutionary update rules or value-based reinforcement learning methods. These approaches struggle to represent payoff coupling and non-stationarity in large interacting populations. This work introduces Multi-Agent Proximal Policy Optimization (MAPPO) into spatial public goods games for the first time. In these games, individual returns are intrinsically coupled through overlapping group interactions. Proximal Policy Optimization (PPO) treats agents as independent learners and ignores this coupling during value estimation. MAPPO addresses this limitation through a centralized critic that evaluates joint strategy configurations. To study neighborhood-level cooperation signals under this framework, we propose MAPPO with Local Cooperation Reward, termed MAPPO-LCR. The local cooperation reward aligns policy updates with surrounding cooperative density without altering the original game structure. MAPPO-LCR preserves decentralized execution while enabling population-level value estimation during training. 
Extensive simulations demonstrate stable cooperation emergence and reliable convergence across enhancement factors. Statistical analyses further confirm the learning advantage of MAPPO over PPO in spatial public goods games. + oai:arXiv.org:2512.17187v2 + cs.MA + cs.GT + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhaoqilin Yang, Axin Xiang, Kedi Yang, Tianjun Liu, Youliang Tian + + + Sensor Management System (SMS): Open-source software for FAIR sensor metadata management in Earth system sciences + https://arxiv.org/abs/2512.17280 + arXiv:2512.17280v2 Announce Type: replace +Abstract: Deriving reliable conclusions and insights from environmental observational data urgently requires the enrichment with consistent and comprehensive metadata, including time-resolved context such as changing deployments, configurations, and maintenance actions. We have therefore developed the Sensor Management System (SMS), which provides a user-friendly and feature-rich platform for modeling even the most complex sensor systems and managing all sensor-related information across their life cycle. Each entity is described via well-defined terms like Devices, Platforms and Configurations, as well as Sites that are further enhanced with attributes for, e.g., instrument manufacturers, contact information or measured quantities and complemented by a continuous history of system-related actions. By further linking the SMS to sub-sequent systems and services like PID-registration or controlled vocabularies and establishing a community of end-users, the SMS provides the central element of a digital ecosystem, that fosters a more consistent, sustainable and FAIR provision of sensor-related metadata. 
+ oai:arXiv.org:2512.17280v2 + cs.SE + Tue, 23 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Yifeng Cai, Zhida An, Yuhan Meng, Houqian Liu, Pengli Wang, Hanwen Lei, Yao Guo, Ding Li + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Christof Lorenz, Nils Brinckmann, Jan Bumberger, Marc Hanisch, Tobias Kuhnert, Ulrich Loup, Rubankumar Moorthy, Florian Obsersteiner, David Sch\"afer, Thomas Schnicke - Sigma-MoE-Tiny Technical Report - https://arxiv.org/abs/2512.16248 - arXiv:2512.16248v2 Announce Type: replace -Abstract: Mixture-of-Experts (MoE) has emerged as a promising paradigm for foundation models due to its efficient and powerful scalability. In this work, we present Sigma-MoE-Tiny, an MoE language model that achieves the highest sparsity compared to existing open-source models. Sigma-MoE-Tiny employs fine-grained expert segmentation with up to 96 experts per layer, while activating only one expert for each token, resulting in 20B total parameters with just 0.5B activated. The major challenge introduced by such extreme sparsity lies in expert load balancing. We find that the widely-used load balancing loss tends to become ineffective in the lower layers under this setting. To address this issue, we propose a progressive sparsification schedule aiming to balance expert utilization and training stability. Sigma-MoE-Tiny is pre-trained on a diverse and high-quality corpus, followed by post-training to further unlock its capabilities. The entire training process remains remarkably stable, with no occurrence of irrecoverable loss spikes. Comprehensive evaluations reveal that, despite activating only 0.5B parameters, Sigma-MoE-Tiny achieves top-tier performance among counterparts of comparable or significantly larger scale. In addition, we provide an in-depth discussion of load balancing in highly sparse MoE models, offering insights for advancing sparsity in future MoE architectures. 
- Project page: https://qghuxmu.github.io/Sigma-MoE-Tiny - Code: https://github.com/microsoft/ltp-megatron-lm - oai:arXiv.org:2512.16248v2 - cs.CL + Towards Pixel-Wise Anomaly Location for High-Resolution PCBA via Self-Supervised Image Reconstruction + https://arxiv.org/abs/2512.17296 + arXiv:2512.17296v2 Announce Type: replace +Abstract: Automated defect inspection of assembled Printed Circuit Board Assemblies (PCBA) is quite challenging due to the insufficient labeled data, micro-defects with just a few pixels in visually-complex and high-resolution images. To address these challenges, we present HiSIR-Net, a High resolution, Self-supervised Reconstruction framework for pixel-wise PCBA localization. Our design combines two lightweight modules that make this practical on real 4K-resolution boards: (i) a Selective Input-Reconstruction Gate (SIR-Gate) that lets the model decide where to trust reconstruction versus the original input, thereby reducing irrelevant reconstruction artifacts and false alarms; and (ii) a Region-level Optimized Patch Selection (ROPS) scheme with positional cues to select overlapping patch reconstructions coherently across arbitrary resolutions. Organically integrating these mechanisms yields clean, high-resolution anomaly maps with low false positive (FP) rate. To bridge the gap in high-resolution PCBA datasets, we further contribute a self-collected dataset named SIPCBA-500 of 500 images. We conduct extensive experiments on our SIPCBA-500 as well as public benchmarks, demonstrating the superior localization performance of our method while running at practical speed. Full code and dataset will be made available upon acceptance. 
+ oai:arXiv.org:2512.17296v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Wuyi Liu, Le Jin, Junxian Yang, Yuanchao Yu, Zishuo Peng, Jinfeng Xu, Xianzhi Li, Jun Zhou + + + TakeAD: Preference-based Post-optimization for End-to-end Autonomous Driving with Expert Takeover Data + https://arxiv.org/abs/2512.17370 + arXiv:2512.17370v2 Announce Type: replace +Abstract: Existing end-to-end autonomous driving methods typically rely on imitation learning (IL) but face a key challenge: the misalignment between open-loop training and closed-loop deployment. This misalignment often triggers driver-initiated takeovers and system disengagements during closed-loop execution. How to leverage those expert takeover data from disengagement scenarios and effectively expand the IL policy's capability presents a valuable yet unexplored challenge. In this paper, we propose TakeAD, a novel preference-based post-optimization framework that fine-tunes the pre-trained IL policy with this disengagement data to enhance the closed-loop driving performance. First, we design an efficient expert takeover data collection pipeline inspired by human takeover mechanisms in real-world autonomous driving systems. Then, this post optimization framework integrates iterative Dataset Aggregation (DAgger) for imitation learning with Direct Preference Optimization (DPO) for preference alignment. The DAgger stage equips the policy with fundamental capabilities to handle disengagement states through direct imitation of expert interventions. Subsequently, the DPO stage refines the policy's behavior to better align with expert preferences in disengagement scenarios. Through multiple iterations, the policy progressively learns recovery strategies for disengagement states, thereby mitigating the open-loop gap. 
Experiments on the closed-loop Bench2Drive benchmark demonstrate our method's effectiveness compared with pure IL methods, with comprehensive ablations confirming the contribution of each component. + oai:arXiv.org:2512.17370v2 + cs.RO cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qingguo Hu, Zhenghao Lin, Ziyue Yang, Yucheng Ding, Xiao Liu, Yuting Jiang, Ruizhe Wang, Tianyu Chen, Zhongxin Guo, Yifan Xiong, Rui Gao, Lei Qu, Jinsong Su, Peng Cheng, Yeyun Gong + Deqing Liu, Yinfeng Gao, Deheng Qian, Qichao Zhang, Xiaoqing Ye, Junyu Han, Yupeng Zheng, Xueyi Liu, Zhongpu Xia, Dawei Ding, Yifeng Pan, Dongbin Zhao - Multi-granularity Spatiotemporal Flow Patterns - https://arxiv.org/abs/2512.16255 - arXiv:2512.16255v2 Announce Type: replace -Abstract: Analyzing flow of objects or data at different granularities of space and time can unveil interesting insights or trends. For example, transportation companies, by aggregating passenger travel data (e.g., counting passengers traveling from one region to another), can analyze movement behavior. In this paper, we study the problem of finding important trends in passenger movements between regions at different granularities. We define Origin (O), Destination (D), and Time (T ) patterns (ODT patterns) and propose a bottom-up algorithm that enumerates them. We suggest and employ optimizations that greatly reduce the search space and the computational cost of pattern enumeration. We also propose pattern variants (constrained patterns and top-k patterns) that could be useful to different applications scenarios. Finally, we propose an approximate solution that fast identifies ODT patterns of specific sizes, following a generate-and-test approach. We evaluate the efficiency and effectiveness of our methods on three real datasets and showcase interesting ODT flow patterns in them. 
- oai:arXiv.org:2512.16255v2 - cs.DB - Mon, 22 Dec 2025 00:00:00 -0500 + Xiaomi MiMo-VL-Miloco Technical Report + https://arxiv.org/abs/2512.17436 + arXiv:2512.17436v2 Announce Type: replace +Abstract: We open-source MiMo-VL-Miloco-7B and its quantized variant MiMo-VL-Miloco-7B-GGUF, a pair of home-centric vision-language models that achieve strong performance on both home-scenario understanding and general multimodal reasoning. Built on the MiMo-VL-7B backbone, MiMo-VL-Miloco-7B is specialized for smart-home environments, attaining leading F1 scores on gesture recognition and common home-scenario understanding, while also delivering consistent gains across video benchmarks such as Video-MME, Video-MMMU, and Charades-STA, as well as language understanding benchmarks including MMMU-Pro and MMLU-Pro. In our experiments, MiMo-VL-Miloco-7B outperforms strong closed-source and open-source baselines on home-scenario understanding and several multimodal reasoning benchmarks. To balance specialization and generality, we design a two-stage training pipeline that combines supervised fine-tuning with reinforcement learning based on Group Relative Policy Optimization, leveraging efficient multi-domain data. We further incorporate chain-of-thought supervision and token-budget-aware reasoning, enabling the model to learn knowledge in a data-efficient manner while also performing reasoning efficiently. Our analysis shows that targeted home-scenario training not only enhances activity and gesture understanding, but also improves text-only reasoning with only modest trade-offs on document-centric tasks. Model checkpoints, quantized GGUF weights, and our home-scenario evaluation toolkit are publicly available at https://github.com/XiaoMi/xiaomi-mimo-vl-miloco to support research and deployment in real-world smart-home applications. 
+ oai:arXiv.org:2512.17436v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Chrysanthi Kosyfaki, Nikos Mamoulis, Reynold Cheng, Ben Kao + Jiaze Li, Jingyang Chen, Yuxun Qu, Shijie Xu, Zhenru Lin, Junyou Zhu, Boshen Xu, Wenhui Tan, Pei Fu, Jianzhong Ju, Zhenbo Luo, Jian Luan - Best Practices For Empirical Meta-Algorithmic Research: Guidelines from the COSEAL Research Network - https://arxiv.org/abs/2512.16491 - arXiv:2512.16491v2 Announce Type: replace -Abstract: Empirical research on meta-algorithmics, such as algorithm selection, configuration, and scheduling, often relies on extensive and thus computationally expensive experiments. With the large degree of freedom we have over our experimental setup and design comes a plethora of possible error sources that threaten the scalability and validity of our scientific insights. Best practices for meta-algorithmic research exist, but they are scattered between different publications and fields, and continue to evolve separately from each other. In this report, we collect good practices for empirical meta-algorithmic research across the subfields of the COSEAL community, encompassing the entire experimental cycle: from formulating research questions and selecting an experimental design, to executing experiments, and ultimately, analyzing and presenting results impartially. It establishes the current state-of-the-art practices within meta-algorithmic research and serves as a guideline to both new researchers and practitioners in meta-algorithmic fields. - oai:arXiv.org:2512.16491v2 + Learning What to Write: Write-Gated KV for Efficient Long-Context Inference + https://arxiv.org/abs/2512.17452 + arXiv:2512.17452v2 Announce Type: replace +Abstract: Long-context LLM inference is bottlenecked by the quadratic attention complexity and linear KV cache growth. 
Prior approaches mitigate this via post-hoc selection or eviction but overlook the root inefficiency: indiscriminate writing to persistent memory. In this paper, we formalize KV cache management as a causal system of three primitives: KV Admission, Selection, and Eviction. We instantiate KV Admission via Write-Gated KV, a lightweight mechanism that learns to predict token utility before it enters the cache. By filtering out low-utility states early to maintain a compact global cache alongside a sliding local cache, Write-Gated KV reduces memory usage by 46-57% and delivers 3.03-3.45$\times$ prefill and 1.89-2.56$\times$ decode speedups on Llama model with negligible accuracy loss, all while remaining compatible with FlashAttention and paged-KV systems. These results demonstrate that learning what to write, is a principled and practical recipe for efficient long-context inference. Code is available at https://github.com/EMCLab-Sinica/WG-KV . + oai:arXiv.org:2512.17452v2 + cs.LG cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yen-Chieh Huang, Pi-Cheng Hsiu, Rui Fang, Ming-Syan Chen + + + NetworkFF: Unified Layer Optimization in Forward-Only Neural Networks + https://arxiv.org/abs/2512.17531 + arXiv:2512.17531v2 Announce Type: replace +Abstract: The Forward-Forward algorithm eliminates backpropagation's memory constraints and biological implausibility through dual forward passes with positive and negative data. However, conventional implementations suffer from critical inter-layer isolation, where layers optimize goodness functions independently without leveraging collective learning dynamics. This isolation constrains representational coordination and limits convergence efficiency in deeper architectures. 
This paper introduces Collaborative Forward-Forward (CFF) learning, extending the original algorithm through inter-layer cooperation mechanisms that preserve forward-only computation while enabling global context integration. Our framework implements two collaborative paradigms: Fixed CFF (F-CFF) with constant inter-layer coupling and Adaptive CFF (A-CFF) with learnable collaboration parameters that evolve during training. The collaborative goodness function incorporates weighted contributions from all layers, enabling coordinated feature learning while maintaining memory efficiency and biological plausibility. Comprehensive evaluation on MNIST and Fashion-MNIST demonstrates significant performance improvements over baseline Forward-Forward implementations. These findings establish inter-layer collaboration as a fundamental enhancement to Forward-Forward learning, with immediate applicability to neuromorphic computing architectures and energy-constrained AI systems. + oai:arXiv.org:2512.17531v2 + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Theresa Eimer, Lennart Sch\"apermeier, Andr\'e Biedenkapp, Alexander Tornede, Lars Kotthoff, Pieter Leyman, Matthias Feurer, Katharina Eggensperger, Kaitlin Maile, Tanja Tornede, Anna Kozak, Ke Xue, Marcel Wever, Mitra Baratchi, Damir Pulatov, Heike Trautmann, Haniye Kashgarani, Marius Lindauer + Salar Beigzad + + + A Unified Representation of Neural Networks Architectures + https://arxiv.org/abs/2512.17593 + arXiv:2512.17593v2 Announce Type: replace +Abstract: In this paper we consider the limiting case of neural networks (NNs) architectures when the number of neurons in each hidden layer and the number of hidden layers tend to infinity thus forming a continuum, and we derive approximation errors as a function of the number of neurons and/or hidden layers. 
Firstly, we consider the case of neural networks with a single hidden layer and we derive an integral infinite width neural representation that generalizes existing continuous neural networks (CNNs) representations. Then we extend this to deep residual CNNs that have a finite number of integral hidden layers and residual connections. Secondly, we revisit the relation between neural ODEs and deep residual NNs and we formalize approximation errors via discretization techniques. Then, we merge these two approaches into a unified homogeneous representation of NNs as a Distributed Parameter neural Network (DiPaNet) and we show that most of the existing finite and infinite-dimensional NNs architectures are related via homogenization/discretization with the DiPaNet representation. Our approach is purely deterministic and applies to general, uniformly continuous matrix weight functions. Relations with neural fields and other neural integro-differential equations are discussed along with further possible generalizations and applications of the DiPaNet framework. + oai:arXiv.org:2512.17593v2 + cs.LG + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Christophe Prieur, Mircea Lazar, Bogdan Robu - ParamExplorer: A framework for exploring parameters in generative art - https://arxiv.org/abs/2512.16529 - arXiv:2512.16529v2 Announce Type: replace -Abstract: Generative art systems often involve high-dimensional and complex parameter spaces in which aesthetically compelling outputs occupy only small, fragmented regions. Because of this combinatorial explosion, artists typically rely on extensive manual trial-and-error, leaving many potentially interesting configurations undiscovered. In this work we make two contributions. 
First, we introduce ParamExplorer, an interactive and modular framework inspired by reinforcement learning that helps the exploration of parameter spaces in generative art algorithms, guided by human-in-the-loop or even automated feedback. The framework also integrates seamlessly with existing p5js projects. Second, within this framework we implement and evaluate several exploration strategies, referred to as agents. - oai:arXiv.org:2512.16529v2 + SCOPE: Sequential Causal Optimization of Process Interventions + https://arxiv.org/abs/2512.17629 + arXiv:2512.17629v2 Announce Type: replace +Abstract: Prescriptive Process Monitoring (PresPM) recommends interventions during business processes to optimize key performance indicators (KPIs). In realistic settings, interventions are rarely isolated: organizations need to align sequences of interventions to jointly steer the outcome of a case. Existing PresPM approaches fall short in this respect. Many focus on a single intervention decision, while others treat multiple interventions independently, ignoring how they interact over time. Methods that do address these dependencies depend either on simulation or data augmentation to approximate the process to train a Reinforcement Learning (RL) agent, which can create a reality gap and introduce bias. We introduce SCOPE, a PresPM approach that learns aligned sequential intervention recommendations. SCOPE employs backward induction to estimate the effect of each candidate intervention action, propagating its impact from the final decision point back to the first. By leveraging causal learners, our method can utilize observational data directly, unlike methods that require constructing process approximations for reinforcement learning. Experiments on both an existing synthetic dataset and a new semi-synthetic dataset show that SCOPE consistently outperforms state-of-the-art PresPM techniques in optimizing the KPI. 
The novel semi-synthetic setup, based on a real-life event log, is provided as a reusable benchmark for future work on sequential PresPM. + oai:arXiv.org:2512.17629v2 + cs.LG cs.AI - cs.HC - cs.SE - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Jakob De Moor, Hans Weytjens, Johannes De Smedt, Jochen De Weerdt + + + Estimating Spatially Resolved Radiation Fields Using Neural Networks + https://arxiv.org/abs/2512.17654 + arXiv:2512.17654v2 Announce Type: replace +Abstract: We present an in-depth analysis on how to build and train neural networks to estimate the spatial distribution of scattered radiation fields for radiation protection dosimetry in medical radiation fields, such as those found in interventional radiology and cardiology. We present three different synthetically generated datasets with increasing complexity for training, using a Monte-Carlo Simulation application based on Geant4. On those datasets, we evaluate convolutional and fully connected architectures of neural networks to demonstrate which design decisions work well for reconstructing the fluence and spectra distributions over the spatial domain of such radiation fields. All our datasets, as well as our training pipeline, are published as open source in separate repositories. 
+ oai:arXiv.org:2512.17654v2 + cs.LG + physics.comp-ph + physics.med-ph + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-sa/4.0/ + Felix Lehner, Pasquale Lombardo, Susana Castillo, Oliver Hupe, Marcus Magnor + + + Methods and Tools for Secure Quantum Clouds with a specific Case Study on Homomorphic Encryption + https://arxiv.org/abs/2512.17748 + arXiv:2512.17748v2 Announce Type: replace +Abstract: The rise of quantum computing/technology potentially introduces significant security challenges to cloud computing, necessitating quantum-resistant encryption strategies as well as protection schemes and methods for cloud infrastructures offering quantum computing time and services (i.e. quantum clouds). This research explores various options for securing quantum clouds and ensuring privacy, especially focussing on the integration of homomorphic encryption (HE) into Eclipse Qrisp, a high-level quantum computing framework, to enhance the security of quantum cloud platforms. The study addresses the technical feasibility of integrating HE with Qrisp, evaluates performance trade-offs, and assesses the potential impact on future quantum cloud architectures. The successful implementation and Qrisp integration of three post-quantum cryptographic (PQC) algorithms demonstrates the feasibility of integrating HE with quantum computing frameworks. The findings indicate that while the Quantum One-Time Pad (QOTP) offers simplicity and low overhead, other algorithms like Chen and Gentry-Sahai-Waters (GSW) present performance trade-offs in terms of runtime and memory consumption. The study results in an overall set of recommendations for securing quantum clouds, e.g. implementing HE at data storage and processing levels, developing Quantum Key Distribution (QKD), and enforcing stringent access control and authentication mechanisms as well as participating in PQC standardization efforts. 
+ oai:arXiv.org:2512.17748v2 + cs.CR + quant-ph + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Aurelia Kusumastuti, Nikolay Tcholtchev, Philipp L\"ammel, Sebastian Bock, Manfred Hauswirth + + + A Parametric Framework for Anticipatory Flashflood Warning: Integrating Landscape Vulnerability with Precipitation Forecasts + https://arxiv.org/abs/2512.17785 + arXiv:2512.17785v2 Announce Type: replace +Abstract: Flash flood warnings are largely reactive, providing limited advance notice for evacuation planning and resource prepositioning. This study presents and validates an anticipatory, parametric framework that converts landscape vulnerability and precipitation into transparent, zone-aware threat levels at neighborhood scales. We first derive an inherent hazard likelihood (IHL) surface using pluvial flood depth, height above nearest drainage, and distance to streams. Next, we compute a hazard severity index (HSI) by normalizing 24-hour rainfall against local Atlas-14 100-year, 24-hour depths. We then integrate IHL and HSI within a localized threat severity (LTS) matrix using 20 class-specific triggers, requiring lower exceedance in high-risk terrain and higher exceedance in uplands. Applied to two Texas flood events, the LTS exhibits statistically significant spatial association with independent crowdsourced impact proxies, capturing observed disruption hotspots. The framework is computationally lightweight, scalable, and extends actionable situational awareness into a 48-72 hour anticipatory window, supporting pre-event decision-making by emergency managers. 
+ oai:arXiv.org:2512.17785v2 + cs.CE + Tue, 23 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Xiangpeng Li, Junwei Ma, Samuel D Brody, Ali Mostafavi + + + Chorus: Multi-Teacher Pretraining for Holistic 3D Gaussian Scene Encoding + https://arxiv.org/abs/2512.17817 + arXiv:2512.17817v2 Announce Type: replace +Abstract: While 3DGS has emerged as a high-fidelity scene representation, encoding rich, general-purpose features directly from its primitives remains under-explored. We address this gap by introducing Chorus, a multi-teacher pretraining framework that learns a holistic feed-forward 3D Gaussian Splatting (3DGS) scene encoder by distilling complementary signals from 2D foundation models. Chorus employs a shared 3D encoder and teacher-specific projectors to learn from language-aligned, generalist, and object-aware teachers, encouraging a shared embedding space that captures signals from high-level semantics to fine-grained structure. + We evaluate Chorus on a wide range of tasks: open-vocabulary semantic and instance segmentation, linear and decoder probing, as well as data-efficient supervision. Besides 3DGS, we also test Chorus on several benchmarks that only support point clouds by pretraining a variant using only Gaussians' centers, colors, estimated normals as inputs. Interestingly, this encoder shows strong transfer and outperforms the point clouds baseline while using 39.9 times fewer training scenes. Finally, we propose a render-and-distill adaptation that facilitates out-of-domain finetuning. Our code and model will be released upon publication. + oai:arXiv.org:2512.17817v2 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 replace + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Yue Li, Qi Ma, Runyi Yang, Mengjiao Ma, Bin Ren, Nikola Popovic, Nicu Sebe, Theo Gevers, Luc Van Gool, Danda Pani Paudel, Martin R. 
Oswald + + + Structure of Classifier Boundaries: Case Study for a Naive Bayes Classifier + https://arxiv.org/abs/2212.04382 + arXiv:2212.04382v3 Announce Type: replace-cross +Abstract: Classifiers assign complex input data points to one of a small number of output categories. For a Bayes classifier whose input space is a graph, we study the structure of the \emph{boundary}, which comprises those points for which at least one neighbor is classified differently. The scientific setting is assignment of DNA reads produced by next-generation sequencers (NGSs) to candidate source genomes. The boundary is both large and complicated in structure. We introduce a new measure of uncertainty, Neighbor Similarity, that compares the result for an input point to the distribution of results for its neighbors. This measure not only tracks two inherent uncertainty measures for the Bayes classifier, but also can be implemented for classifiers without inherent measures of uncertainty. + oai:arXiv.org:2212.04382v3 + stat.ML + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://creativecommons.org/licenses/by/4.0/ + Alan F. Karr, Zac Bowen, Adam A. Porter, Jeanne Ruane + + + Extensional concepts in intensional type theory, revisited + https://arxiv.org/abs/2310.05706 + arXiv:2310.05706v2 Announce Type: replace-cross +Abstract: Revisiting a classic result from M. Hofmann's dissertation, we give a direct proof of Morita equivalence, in the sense of V. Isaev, between extensional type theory and intensional type theory extended by the principles of functional extensionality and of uniqueness of identity proofs. + oai:arXiv.org:2310.05706v2 + math.LO + cs.LO + math.CT + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Theoret. Comput. Sci. 1029 (2025), Paper No. 115051, 29 pp + Chris Kapulkin, Yufeng Li + + + Incentivizing Forecasters to Learn: Summarized vs. 
Unrestricted Advice + https://arxiv.org/abs/2310.19147 + arXiv:2310.19147v4 Announce Type: replace-cross +Abstract: How should forecasters be incentivized to acquire the most information when learning takes place over time? We address this question in the context of a novel dynamic mechanism design problem in which a designer incentivizes an expert to learn by conditioning rewards on an event's outcome and the expert's reports. Eliciting summarized advice at a terminal date maximizes information acquisition if an informative signal either fully reveals the outcome or has predictable content. Otherwise, richer reporting capabilities may be required. Our findings shed light on incentive design for consultation and forecasting by illustrating how learning dynamics shape the qualitative properties of effort-maximizing contracts. + oai:arXiv.org:2310.19147v4 + econ.TH + cs.GT + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yingkai Li, Jonathan Libgober + + + Training robust and generalizable quantum models + https://arxiv.org/abs/2311.11871 + arXiv:2311.11871v4 Announce Type: replace-cross +Abstract: Adversarial robustness and generalization are both crucial properties of reliable machine learning models. In this paper, we study these properties in the context of quantum machine learning based on Lipschitz bounds. We derive parameter-dependent Lipschitz bounds for quantum models with trainable encoding, showing that the norm of the data encoding has a crucial impact on the robustness against data perturbations. Further, we derive a bound on the generalization error which explicitly involves the parameters of the data encoding. Our theoretical findings give rise to a practical strategy for training robust and generalizable quantum models by regularizing the Lipschitz bound in the cost. 
Further, we show that, for fixed and non-trainable encodings, as those frequently employed in quantum machine learning, the Lipschitz bound cannot be influenced by tuning the parameters. Thus, trainable encodings are crucial for systematically adapting robustness and generalization during training. The practical implications of our theoretical findings are illustrated with numerical results. + oai:arXiv.org:2311.11871v4 + quant-ph + cs.LG + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 10.1103/PhysRevResearch.6.043326 + Physical Review Research 6, 043326, 2024 + Julian Berberich, Daniel Fink, Daniel Pranji\'c, Christian Tutschku, Christian Holm + + + Majority voting is not good for heaven or hell, with mirrored performance + https://arxiv.org/abs/2401.00592 + arXiv:2401.00592v5 Announce Type: replace-cross +Abstract: Within the ViSE (Voting in Stochastic Environment) model, we study the effectiveness of majority voting in various environments. As shown by the pit-of-losses paradox identified in previous work, majority decisions in apparently hostile environments tend to reduce the capital of society. In such cases, the simple social decision rule of ``rejecting all proposals without voting'' outperforms majority voting. In this paper, we identify another pit of losses appearing in favorable environments; here, the simple social decision rule of ``accepting all proposals without voting'' is superior to majority voting. We prove that, under a version of simple majority called symmetrized majority and under the antisymmetry of the voting body, this second pit of losses is a mirror image of the one arising in hostile environments, and we explain this phenomenon. Technically, we consider a voting society consisting of individualists who support all proposals that increase their personal capital and a group (or groups) whose members vote to increase their group's wealth. 
According to the key lemma, the expected capital gain of each agent under the social decision rule when the random gain generator is $X$ with mean $\mu>0$ exceeds their expected gain under the reflected generator $-X$ by exactly $\mu$. This extends to location-scale families of generators with distributions symmetric about their mean. This result reveals a mirror symmetry in the performance of the symmetrized majority rule relative to a baseline rule. The baseline rule accepts all proposals in favorable environments and rejects them in unfavorable (hostile) ones. + oai:arXiv.org:2401.00592v5 + physics.soc-ph + cs.GT + math.OC + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Pavel Chebotarev, Vadim Afonkin + + + Variance Reduction and Low Sample Complexity in Stochastic Optimization via Proximal Point Method + https://arxiv.org/abs/2402.08992 + arXiv:2402.08992v3 Announce Type: replace-cross +Abstract: High-probability guarantees in stochastic optimization are often obtained only under strong noise assumptions such as sub-Gaussian tails. We show that such guarantees can also be achieved under the weaker assumption of bounded variance by developing a stochastic proximal point method. This method combines a proximal subproblem solver, which inherently reduces variance, with a probability booster that amplifies per-iteration reliability into high-confidence results. The analysis demonstrates convergence with low sample complexity, without restrictive noise assumptions or reliance on mini-batching. 
+ oai:arXiv.org:2402.08992v3 + math.OC + cs.LG + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://creativecommons.org/licenses/by/4.0/ + Jiaming Liang + + + Variational Markov chain mixtures with automatic component selection + https://arxiv.org/abs/2406.04653 + arXiv:2406.04653v2 Announce Type: replace-cross +Abstract: Markov state modeling has gained popularity in various scientific fields since it reduces complex time-series data sets into transitions between a few states. Yet common Markov state modeling frameworks assume a single Markov chain describes the data, so they suffer from an inability to discern heterogeneities. As an alternative, this paper models time-series data using a mixture of Markov chains, and it automatically determines the number of mixture components using the variational expectation-maximization algorithm. Variational EM simultaneously identifies the number of Markov chains and the dynamics of each chain without expensive model comparisons or posterior sampling. As a theoretical contribution, this paper identifies the natural limits of Markov state mixture modeling by proving a lower bound on the classification error. It then presents numerical experiments where variational EM achieves performance consistent with the theoretically optimal error scaling. The experiments are based on synthetic and observational data sets including Last.fm music listening, ultramarathon running, and gene expression. In each of the three data sets, variational EM leads to the identification of meaningful heterogeneities. + oai:arXiv.org:2406.04653v2 + stat.ME + cs.NA + math.NA + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://creativecommons.org/licenses/by/4.0/ + Christopher E. Miles, Robert J. 
Webber + + + Any-Time Regret-Guaranteed Algorithm for Control of Linear Quadratic Systems + https://arxiv.org/abs/2406.07746 + arXiv:2406.07746v2 Announce Type: replace-cross +Abstract: We propose a computationally efficient algorithm that achieves anytime regret of order $\mathcal{O}(\sqrt{t})$, with explicit dependence on the system dimensions and on the solution of the Discrete Algebraic Riccati Equation (DARE). Our approach uses an appropriately tuned regularization and a sufficiently accurate initial estimate to construct confidence ellipsoids for control design. A carefully designed input-perturbation mechanism is incorporated to ensure anytime performance. We develop two variants of the algorithm. The first enforces strong sequential stability, requiring each policy to be stabilizing and successive policies to remain close. This sequential condition helps prevent state explosion at policy update times; however, it results in a suboptimal regret scaling with respect to the DARE solution. Motivated by this limitation, we introduce a second class of algorithms that removes this requirement and instead requires only that each generated policy be stabilizing. Closed-loop stability is then preserved through a dwell-time inspired policy-update rule. This class of algorithms also addresses key shortcomings of most existing approaches which lack explicit high-probability bounds on the state trajectory expressed in system-theoretic terms. Our analysis shows that partially relaxing the sequential-stability requirement yields optimal regret. Finally, our method eliminates the need for any \emph{a priori} bound on the norm of the DARE solution, an assumption required by all existing computationally efficient OFU based algorithms. 
+ oai:arXiv.org:2406.07746v2 + stat.ML + cs.LG + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jafar Abbaszadeh Chekan, Cedric Langbort + + + High Frequency Matters: Uncertainty Guided Image Compression with Wavelet Diffusion + https://arxiv.org/abs/2407.12538 + arXiv:2407.12538v3 Announce Type: replace-cross +Abstract: Diffusion probabilistic models have recently achieved remarkable success in generating high-quality images. However, balancing high perceptual quality and low distortion remains challenging in application of diffusion models in image compression. To address this issue, we propose a novel Uncertainty-Guided image compression approach with wavelet Diffusion (UGDiff). Our approach focuses on high frequency compression via the wavelet transform, since high frequency components are crucial for reconstructing image details. We introduce a wavelet conditional diffusion model for high frequency prediction, followed by a residual codec that compresses and transmits prediction residuals to the decoder. This diffusion prediction-then-residual compression paradigm effectively addresses the low fidelity issue common in direct reconstructions by existing diffusion models. Considering the uncertainty from the random sampling of the diffusion model, we further design an uncertainty-weighted rate-distortion (R-D) loss tailored for residual compression, providing a more rational trade-off between rate and distortion. Comprehensive experiments on two benchmark datasets validate the effectiveness of UGDiff, surpassing state-of-the-art image compression methods in R-D performance, perceptual quality, subjective quality, and inference time. Our code is available at: https://github.com/hejiaxiang1/Wavelet-Diffusion/tree/main. 
+ oai:arXiv.org:2407.12538v3 + eess.IV + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://creativecommons.org/licenses/by/4.0/ + Juan Song, Jiaxiang He, Lijie Yang, Mingtao Feng, Keyan Wang + + + Improving the Accuracy of DC Optimal Power Flow Formulations via Parameter Optimization + https://arxiv.org/abs/2410.11725 + arXiv:2410.11725v3 Announce Type: replace-cross +Abstract: DC Optimal Power Flow (DC-OPF) problems optimize the generators' active power setpoints while satisfying constraints based on the DC power flow linearization. The computational tractability advantages of DC-OPF problems come at the expense of inaccuracies relative to AC Optimal Power Flow (AC-OPF) problems that accurately model the nonlinear steady-state behavior of power grids. This paper proposes an algorithm that significantly improves the accuracy of the generators' active power setpoints from DC-OPF problems with respect to the corresponding AC-OPF problems over a specified range of operating conditions. Using sensitivity information in a machine learning-inspired methodology, this algorithm tunes coefficient and bias parameters in the DC power flow approximation to improve the accuracy of the resulting DC-OPF solutions. Employing the Truncated Newton Conjugate-Gradient (TNC) method, a Quasi-Newton optimization technique, this parameter tuning occurs during an offline training phase, with the resulting parameters then used in online computations. Numerical results underscore the algorithm's efficacy with accuracy improvements in squared two-norm and $\infty$-norm losses of up to $90\%$ and $79\%$, respectively, relative to traditional DC-OPF formulations. + oai:arXiv.org:2410.11725v3 + math.OC + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Babak Taheri, Daniel K. 
Molzahn + + + Theoretical Convergence Guarantees for Variational Autoencoders + https://arxiv.org/abs/2410.16750 + arXiv:2410.16750v3 Announce Type: replace-cross +Abstract: Variational Autoencoders (VAE) are popular generative models used to sample from complex data distributions. Despite their empirical success in various machine learning tasks, significant gaps remain in understanding their theoretical properties, particularly regarding convergence guarantees. This paper aims to bridge that gap by providing non-asymptotic convergence guarantees for VAE trained using both Stochastic Gradient Descent and Adam algorithms. We derive a convergence rate of $\mathcal{O}(\log n / \sqrt{n})$, where $n$ is the number of iterations of the optimization algorithm, with explicit dependencies on the batch size, the number of variational samples, and other key hyperparameters. Our theoretical analysis applies to both Linear VAE and Deep Gaussian VAE, as well as several VAE variants, including $\beta$-VAE and IWAE. Additionally, we empirically illustrate the impact of hyperparameters on convergence, offering new insights into the theoretical understanding of VAE training. + oai:arXiv.org:2410.16750v3 + stat.ML + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Sobihan Surendran (LPSM), Antoine Godichon-Baggioni (LPSM), Sylvain Le Corff (LPSM) + + + Generalized free energy and excess/housekeeping decomposition in nonequilibrium systems: from large deviations to thermodynamic speed limits + https://arxiv.org/abs/2412.08432 + arXiv:2412.08432v2 Announce Type: replace-cross +Abstract: In genuine nonequilibrium systems that undergo continuous driving, the thermodynamic forces are nonconservative, meaning they cannot be described by any free energy potential. Nonetheless, we show that the dynamics of such systems are governed by a "generalized free energy" that is derived from a large-deviations variational principle. 
This variational principle also yields a decomposition of fluxes, forces, and dissipation (entropy production) into a conservative "excess" part and a nonconservative "housekeeping" part. Our decomposition is universally applicable to stochastic master equations, deterministic chemical reaction networks, and open systems. We also show that the excess entropy production obeys a thermodynamic speed limit (TSL), a fundamental thermodynamic constraint on the rate of state evolution and/or external fluxes. We demonstrate our approach on several examples, including real-world metabolic networks, where we derive fundamental dissipation bounds and uncover "futile" metabolic cycles. Our generalized free energy and decomposition are empirically accessible to thermodynamic inference in both stochastic and deterministic systems. We discuss important connections to several theoretical frameworks, including information geometry and Onsager theory, as well as previous excess/housekeeping decompositions. + oai:arXiv.org:2412.08432v2 + cond-mat.stat-mech + cs.IT + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Artemy Kolchinsky, Andreas Dechant, Kohei Yoshimura, Sosuke Ito + + + Q-Fly: An Optical Interconnect for Modular Quantum Computers + https://arxiv.org/abs/2412.09299 + arXiv:2412.09299v4 Announce Type: replace-cross +Abstract: Much like classical supercomputers, scaling up quantum computers requires an optical interconnect. However, signal attenuation leads to irreversible qubit loss, making quantum interconnect design guidelines and metrics different from conventional computing. Inspired by the classical Dragonfly topology, we propose a multi-group structure where the group switch routes photons emitted by computational end nodes to the group's shared pool of Bell state analyzers (which conduct the entanglement swapping that creates end-to-end entanglement) or across a low-diameter path to another group. 
We present a full-stack analysis of system performance, a combination of distributed and centralized protocols, and a resource scheduler that plans qubit placement and communications for large-scale, fault-tolerant systems. We implement a prototype three-node switched interconnect to justify hardware-side scalability and to expose low-level architectural challenges. We create two-hop entanglement with fidelities of 0.6-0.76. Our design emphasizes reducing network hops and optical components to simplify system stabilization while flexibly adjusting optical path lengths. Based on evaluated loss and infidelity budgets, we find that moderate-radix switches enable systems meeting expected near-term needs, and large systems are feasible. Our design is expected to be effective for a variety of quantum computing technologies, including ion traps and neutral atoms. + oai:arXiv.org:2412.09299v4 + quant-ph + cs.AR + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Daisuke Sakuma, Tomoki Tsuno, Hikaru Shimizu, Yuki Kurosawa, Monet Tokuyama Friedrich, Kentaro Teramoto, Amin Taherkhani, Andrew Todd, Yosuke Ueno, Michal Hajdu\v{s}ek, Rikizo Ikuta, Rodney Van Meter, Toshihiko Sasaki, Shota Nagayama + + + WaveOrder: A differentiable wave-optical framework for scalable biological microscopy with diverse modalities + https://arxiv.org/abs/2412.09775 + arXiv:2412.09775v3 Announce Type: replace-cross +Abstract: Correlative computational microscopy can accelerate imaging and modeling of cellular dynamics by relaxing trade-offs inherent to dynamic imaging. Existing computational microscopy frameworks are either specialized or overly generic, limiting use to fixed configurations or domain experts. We introduce WaveOrder, a generalist wave-optical framework for imaging the architectural order of biomolecules. WaveOrder reconstructs diverse specimen properties from multi-channel acquisitions, with or without fluorescence. 
It provides a unified representation of linear optical properties and differentiable physics-based image formation models spanning widefield, confocal, light-sheet, and oblique label-free geometries. WaveOrder uses physics-informed ML to auto-tune model parameters and solve blind shift-variant restoration problems. This open-source, PyTorch-based framework enables scalable quantitative imaging across scales from organelles to adult zebrafish, and improves restoration of cellular structures in high-throughput experiments. We validate WaveOrder on diverse imaging applications, demonstrating its ability to recover biomolecular structure beyond the limits of existing approaches. + oai:arXiv.org:2412.09775v3 + physics.optics + cs.CV + q-bio.QM + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://creativecommons.org/licenses/by-sa/4.0/ + Talon Chandler, Ivan E. Ivanov, Gabriel Sturm, Sheng Xiao, Xiang Zhao, Alexander Hillsley, Allyson Quinn Ryan, Ziwen Liu, Sricharan Reddy Varra, Ilan Theodoro, Eduardo Hirata-Miyasaki, Deepika Sundarraman, Amitabh Verma, Madhurya Sekhar, Chad Liu, Soorya Pradeep, See-Chi Lee, Shannon N. Rhoads, Maria Clara Zanellati, Sarah Cohen, Carolina Arias, Manuel D. Leonetti, Adrian Jacobo, Keir Balla, Lo\"ic A. Royer, Shalin B. Mehta + + + A Riemannian Optimization Perspective of the Gauss-Newton Method for Feedforward Neural Networks + https://arxiv.org/abs/2412.14031 + arXiv:2412.14031v5 Announce Type: replace-cross +Abstract: In this work, we establish non-asymptotic convergence bounds for the Gauss-Newton method in training neural networks with smooth activations. In the underparameterized regime, the Gauss-Newton gradient flow in parameter space induces a Riemannian gradient flow on a low-dimensional embedded submanifold of the function space. 
Using tools from Riemannian optimization, we establish geodesic Polyak-Lojasiewicz and Lipschitz-smoothness conditions for the loss under appropriately chosen output scaling, yielding geometric convergence to the optimal in-class predictor at an explicit rate independent of the conditioning of the Gram matrix. In the overparameterized regime, we propose adaptive, curvature-aware regularization schedules that ensure fast geometric convergence to a global optimum at a rate independent of the minimum eigenvalue of the neural tangent kernel and, locally, of the modulus of strong convexity of the loss. These results demonstrate that Gauss-Newton achieves accelerated convergence rates in settings where first-order methods exhibit slow convergence due to ill-conditioned kernel matrices and loss landscapes. + oai:arXiv.org:2412.14031v5 + math.OC + cs.AI + cs.LG + cs.SY + eess.SY + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Semih Cayci + + + PIGUIQA: A Physical Imaging Guided Perceptual Framework for Underwater Image Quality Assessment + https://arxiv.org/abs/2412.15527 + arXiv:2412.15527v3 Announce Type: replace-cross +Abstract: In this paper, we propose a Physical Imaging Guided perceptual framework for Underwater Image Quality Assessment (UIQA), termed PIGUIQA. First, we formulate UIQA as a comprehensive problem that considers the combined effects of direct transmission attenuation and backward scattering on image perception. By leveraging underwater radiative transfer theory, we systematically integrate physics-based imaging estimations to establish quantitative metrics for these distortions. Second, recognizing spatial variations in image content significance and human perceptual sensitivity to distortions, we design a module built upon a neighborhood attention mechanism for local perception of images. 
This module effectively captures subtle features in images, thereby enhancing the adaptive perception of distortions on the basis of local information. Third, by employing a global perceptual aggregator that further integrates the holistic image scene with underwater distortion information, the proposed model accurately predicts image quality scores. Extensive experiments across multiple benchmarks demonstrate that PIGUIQA achieves state-of-the-art performance while maintaining robust cross-dataset generalizability. The implementation is publicly available at https://github.com/WeizhiXian/PIGUIQA
 + oai:arXiv.org:2412.15527v3
 + eess.IV
 + cs.CV
 + Tue, 23 Dec 2025 00:00:00 -0500
 + replace-cross
 + http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 + Weizhi Xian, Mingliang Zhou, Leong Hou U, Zhengguo Li
 + 
 + 
 + SimSort: A Data-Driven Framework for Spike Sorting by Large-Scale Electrophysiology Simulation
 + https://arxiv.org/abs/2502.03198
 + arXiv:2502.03198v3 Announce Type: replace-cross
+Abstract: Spike sorting is an essential process in neural recording, which identifies and separates electrical signals from individual neurons recorded by electrodes in the brain, enabling researchers to study how specific neurons communicate and process information. Although there exist a number of spike sorting methods which have contributed to significant neuroscientific breakthroughs, many are heuristically designed, making it challenging to verify their correctness due to the difficulty of obtaining ground truth labels from real-world neural recordings. In this work, we explore a data-driven, deep learning-based approach. We begin by creating a large-scale dataset through electrophysiology simulations using biologically realistic computational models. We then present SimSort, a pretraining framework for spike sorting. 
Trained solely on simulated data, SimSort demonstrates zero-shot generalizability to real-world spike sorting tasks, yielding consistent improvements over existing methods across multiple benchmarks. These results highlight the potential of simulation-driven pretraining to enhance the robustness and scalability of spike sorting in experimental neuroscience. + oai:arXiv.org:2502.03198v3 + q-bio.NC + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + 39th Conference on Neural Information Processing Systems (NeurIPS 2025) + Yimu Zhang, Dongqi Han, Yansen Wang, Zhenning Lv, Yu Gu, Dongsheng Li + + + A Comprehensive Survey on Generative AI for Video-to-Music Generation + https://arxiv.org/abs/2502.12489 + arXiv:2502.12489v2 Announce Type: replace-cross +Abstract: The burgeoning growth of video-to-music generation can be attributed to the ascendancy of multimodal generative models. However, there is a lack of literature that comprehensively combs through the work in this field. To fill this gap, this paper presents a comprehensive review of video-to-music generation using deep generative AI techniques, focusing on three key components: conditioning input construction, conditioning mechanism, and music generation frameworks. We categorize existing approaches based on their designs for each component, clarifying the roles of different strategies. Preceding this, we provide a fine-grained categorization of video and music modalities, illustrating how different categories influence the design of components within the generation pipelines. Furthermore, we summarize available multimodal datasets and evaluation metrics while highlighting ongoing challenges in the field. 
+ oai:arXiv.org:2502.12489v2 + eess.AS + cs.AI + cs.MM + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Shulei Ji, Songruoyao Wu, Zihao Wang, Shuyu Li, Kejun Zhang + + + Error bounds for composite quantum hypothesis testing and a new characterization of the weighted Kubo-Ando geometric means + https://arxiv.org/abs/2503.13379 + arXiv:2503.13379v4 Announce Type: replace-cross +Abstract: The optimal error exponents of binary composite i.i.d. state discrimination are trivially bounded by the worst-case pairwise exponents of discriminating individual elements of the sets representing the two hypotheses, and in the finite-dimensional classical case, these bounds in fact give exact single-copy expressions for the error exponents. In contrast, in the non-commutative case, the optimal exponents are only known to be expressible in terms of regularized divergences, resulting in formulas that, while conceptually relevant, are practically not very useful. In this paper, we develop further an approach initiated in [Mosonyi, Szil\'agyi, Weiner, IEEE Trans. Inf. Th. 68(2):1032--1067, 2022] to give improved single-copy bounds on the error exponents by comparing not only individual states from the two hypotheses, but also various unnormalized positive semi-definite operators associated to them. Here, we show a number of equivalent characterizations of such operators giving valid bounds, and show that in the commutative case, considering weighted geometric means of the states, and in the case of two states per hypothesis, considering weighted Kubo-Ando geometric means, are optimal for this approach. As a result, we give a new characterization of the weighted Kubo-Ando geometric means as the only $2$-variable operator geometric means that are block additive, tensor multiplicative, and satisfy the arithmetic-geometric mean inequality. 
We also extend our results to composite quantum channel discrimination, and show an analogous optimality property of the weighted Kubo-Ando geometric means of two quantum channels, a notion that seems to be new. We extend this concept to defining the notion of superoperator perspective function and establish some of its basic properties, which may be of independent interest. + oai:arXiv.org:2503.13379v4 + quant-ph + cs.IT + math-ph + math.FA + math.IT + math.MP + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + P\'eter E. Frenkel, Mil\'an Mosonyi, P\'eter Vrana, Mih\'aly Weiner + + + Density estimation via mixture discrepancy and moments + https://arxiv.org/abs/2504.01570 + arXiv:2504.01570v2 Announce Type: replace-cross +Abstract: With the aim of generalizing histogram statistics to higher dimensional cases, density estimation via discrepancy based sequential partition (DSP) has been proposed to learn an adaptive piecewise constant approximation defined on a binary sequential partition of the underlying domain, where the star discrepancy is adopted to measure the uniformity of particle distribution. However, the calculation of the star discrepancy is NP-hard and it does not satisfy the reflection invariance and rotation invariance either. To this end, we use the mixture discrepancy and the comparison of moments as a replacement of the star discrepancy, leading to the density estimation via mixture discrepancy based sequential partition (DSP-mix) and density estimation via moment-based sequential partition (MSP), respectively. Both DSP-mix and MSP are computationally tractable and exhibit the reflection and rotation invariance. 
Numerical experiments in reconstructing Beta mixtures, Gaussian mixtures and heavy-tailed Cauchy mixtures up to 30 dimensions are conducted, demonstrating that MSP can maintain the same accuracy compared with DSP, while gaining an increase in speed by a factor of two to twenty for large sample sizes, and DSP-mix can achieve satisfactory accuracy and boost the efficiency in low-dimensional tests ($d \le 6$), but might lose accuracy in high-dimensional problems due to a reduction in partition level.
 + oai:arXiv.org:2504.01570v2
 + stat.ML
 + cs.LG
 + physics.comp-ph
 + stat.ME
 + Tue, 23 Dec 2025 00:00:00 -0500
 + replace-cross
 + http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 + Zhengyang Lei, Lirong Qu, Sihong Shao, Yunfeng Xiong
 + 
 + 
 + Stable EEG Source Estimation for Standardized Kalman Filter using Change Rate Tracking
 + https://arxiv.org/abs/2504.01984
 + arXiv:2504.01984v2 Announce Type: replace-cross
+Abstract: This article focuses on the measurement and evolution modeling of Standardized Kalman filtering for brain activity estimation using non-invasive electroencephalography data. Here, we propose new parameter tuning and a model that uses the rate of change in the brain activity distribution to improve the stability of otherwise accurate estimates. Namely, we propose a backward-differentiation-based measurement model for the change rate, which notably improves the filtering-parametrization-stability of the tracking. Simulated data and data from a real subject were used in experiments. 
+ oai:arXiv.org:2504.01984v2 + stat.AP + cs.NA + eess.SP + math.NA + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Joonas Lahtinen + + + Stochastic Optimization with Optimal Importance Sampling + https://arxiv.org/abs/2504.03560 + arXiv:2504.03560v2 Announce Type: replace-cross +Abstract: Importance Sampling (IS) is a widely used variance reduction technique for enhancing the efficiency of Monte Carlo methods, particularly in rare-event simulation and related applications. Despite its effectiveness, the performance of IS is highly sensitive to the choice of the proposal distribution and often requires stochastic calibration. While the design and analysis of IS have been extensively studied in estimation settings, applying IS within stochastic optimization introduces a fundamental challenge: the decision variable and the importance sampling distribution are mutually dependent, creating a circular optimization structure. This interdependence complicates both convergence analysis and variance control. We consider convex stochastic optimization problems with linear constraints and propose a single-loop stochastic approximation algorithm, based on a joint variant of Nesterov's dual averaging, that jointly updates the decision variable and the importance sampling distribution, without time-scale separation or nested optimization. The method is globally convergent and achieves minimal asymptotic variance among stochastic gradient schemes, matching the performance of an oracle sampler adapted to the optimal solution. + oai:arXiv.org:2504.03560v2 + math.OC + cs.LG + math.ST + stat.ML + stat.TH + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross http://creativecommons.org/licenses/by/4.0/ - Julien Gachadoat, Guillaume Lagarde + Liviu Aolaritei, Bart P. G. Van Parys, Henry Lam, Michael I. 
Jordan - DeContext as Defense: Safe Image Editing in Diffusion Transformers - https://arxiv.org/abs/2512.16625 - arXiv:2512.16625v2 Announce Type: replace -Abstract: In-context diffusion models allow users to modify images with remarkable ease and realism. However, the same power raises serious privacy concerns: personal images can be easily manipulated for identity impersonation, misinformation, or other malicious uses, all without the owner's consent. While prior work has explored input perturbations to protect against misuse in personalized text-to-image generation, the robustness of modern, large-scale in-context DiT-based models remains largely unexamined. In this paper, we propose DeContext, a new method to safeguard input images from unauthorized in-context editing. Our key insight is that contextual information from the source image propagates to the output primarily through multimodal attention layers. By injecting small, targeted perturbations that weaken these cross-attention pathways, DeContext breaks this flow, effectively decouples the link between input and output. This simple defense is both efficient and robust. We further show that early denoising steps and specific transformer blocks dominate context propagation, which allows us to concentrate perturbations where they matter most. Experiments on Flux Kontext and Step1X-Edit show that DeContext consistently blocks unwanted image edits while preserving visual quality. These results highlight the effectiveness of attention-based perturbations as a powerful defense against image manipulation. Code is available at https://github.com/LinghuiiShen/DeContext. 

 - oai:arXiv.org:2512.16625v2
 - cs.CV
 - Mon, 22 Dec 2025 00:00:00 -0500
 - replace
 - http://creativecommons.org/licenses/by/4.0/
 - Linghui Shen, Mingyue Cui, Xingyi Yang
 - 
 - 
 - Fully Dynamic Algorithms for Chamfer Distance
 - https://arxiv.org/abs/2512.16639
 - arXiv:2512.16639v2 Announce Type: replace
-Abstract: We study the problem of computing Chamfer distance in the fully dynamic setting, where two sets of points $A, B \subset \mathbb{R}^{d}$, each of size up to $n$, dynamically evolve through point insertions or deletions and the goal is to efficiently maintain an approximation to $\mathrm{dist}_{\mathrm{CH}}(A,B) = \sum_{a \in A} \min_{b \in B} \textrm{dist}(a,b)$, where $\textrm{dist}$ is a distance measure. 
Chamfer distance is a widely used dissimilarity metric for point clouds, with many practical applications that require repeated evaluation on dynamically changing datasets, e.g., when used as a loss function in machine learning. In this paper, we present the first dynamic algorithm for maintaining an approximation of the Chamfer distance under the $\ell_p$ norm for $p \in \{1,2 \}$. Our algorithm reduces to approximate nearest neighbor (ANN) search with little overhead. Plugging in standard ANN bounds, we obtain $(1+\epsilon)$-approximation in $\tilde{O}(\epsilon^{-d})$ update time and $O(1/\epsilon)$-approximation in $\tilde{O}(d n^{\epsilon^2} \epsilon^{-4})$ update time. We evaluate our method on real-world datasets and demonstrate that it performs competitively against natural baselines. - oai:arXiv.org:2512.16639v2 - cs.DS - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Gramoz Goranci, Shaofeng Jiang, Peter Kiss, Eva Szilagyi, Qiaoyuan Yang + Petru Milev, Blanca Mellado-Pinto, Muralidhar Nalabothula, Ali Esquembre Kucukalic, Fernando Alvarruiz, Enrique Ramos, Francesco Filippone, Alejandro Molina-Sanchez, Ludger Wirtz, Jose E. Roman, Davide Sangalli - Efficient Bitcoin Meta-Protocol Transaction and Data Discovery Through nLockTime Field Repurposing - https://arxiv.org/abs/2512.16683 - arXiv:2512.16683v2 Announce Type: replace -Abstract: We describe the Lockchain Protocol, a lightweight Bitcoin meta-protocol that enables highly efficient transaction discovery at zero marginal block space cost, and data verification without introducing any new on-chain storage mechanism. The protocol repurposes the mandatory 4-byte nLockTime field of every Bitcoin transaction as a compact metadata header. 
By constraining values to an unused range of past Unix timestamps greater than or equal to 500,000,000, the field can encode a protocol signal, type, variant, and sequence identifier while remaining fully valid under Bitcoin consensus and policy rules. The primary contribution of the protocol is an efficient discovery layer. Indexers can filter candidate transactions by examining a fixed-size header field, independent of transaction payload size, and only then selectively inspect heavier data such as OP RETURN outputs or witness fields. The Lockchain Protocol applies established protocol design patterns to an under-optimised problem domain, namely transaction discovery at scale, and does not claim new cryptographic primitives or storage methods. - oai:arXiv.org:2512.16683v2 - cs.CR - cs.DC - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Nikodem Tomczak + Differentiable Nonlinear Model Predictive Control + https://arxiv.org/abs/2505.01353 + arXiv:2505.01353v2 Announce Type: replace-cross +Abstract: The efficient computation of parametric solution sensitivities is a key challenge in the integration of learning-enhanced methods with nonlinear model predictive control (MPC), as their availability is crucial for many learning algorithms. This paper discusses the computation of solution sensitivities of general nonlinear programs (NLPs) using the implicit function theorem (IFT) and smoothed optimality conditions treated in interior-point methods (IPM). We detail sensitivity computation within a sequential quadratic programming (SQP) method which employs an IPM for the quadratic subproblems. Previous works presented in the machine learning community are limited to convex or unconstrained formulations, or lack an implementation for efficient sensitivity evaluation. 
The publication is accompanied by an efficient open-source implementation within the acados framework, providing both forward and adjoint sensitivities for general optimal control problems, achieving speedups exceeding 3x over the state-of-the-art solvers mpc.pytorch and cvxpygen. + oai:arXiv.org:2505.01353v2 + math.OC + cs.AI + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jonathan Frey, Katrin Baumg\"artner, Gianluca Frison, Dirk Reinhardt, Jasper Hoffmann, Leonard Fichtner, Sebastien Gros, Moritz Diehl - A multi-centre, multi-device benchmark dataset for landmark-based comprehensive fetal biometry - https://arxiv.org/abs/2512.16710 - arXiv:2512.16710v2 Announce Type: replace -Abstract: Accurate fetal growth assessment from ultrasound (US) relies on precise biometry measured by manually identifying anatomical landmarks in standard planes. Manual landmarking is time-consuming, operator-dependent, and sensitive to variability across scanners and sites, limiting the reproducibility of automated approaches. There is a need for multi-source annotated datasets to develop artificial intelligence-assisted fetal growth assessment methods. To address this bottleneck, we present an open, multi-centre, multi-device benchmark dataset of fetal US images with expert anatomical landmark annotations for clinically used fetal biometric measurements. These measurements include head bi-parietal and occipito-frontal diameters, abdominal transverse and antero-posterior diameters, and femoral length. The dataset comprises 4,513 de-identified US images from 1,904 subjects acquired at three clinical sites using seven different US devices. We provide standardised, subject-disjoint train/test splits, evaluation code, and baseline results to enable fair and reproducible comparison of methods. 
Using an automatic biometry model, we quantify domain shift and demonstrate that training and evaluation confined to a single centre substantially overestimate performance relative to multi-centre testing. To the best of our knowledge, this is the first publicly available multi-centre, multi-device, landmark-annotated dataset that covers all primary fetal biometry measures, providing a robust benchmark for domain adaptation and multi-centre generalisation in fetal biometry and enabling more reliable AI-assisted fetal growth assessment across centres. All data, annotations, training code, and evaluation pipelines are made publicly available. - oai:arXiv.org:2512.16710v2 + Reproducing and Improving CheXNet: Deep Learning for Chest X-ray Disease Classification + https://arxiv.org/abs/2505.06646 + arXiv:2505.06646v2 Announce Type: replace-cross +Abstract: Deep learning for radiologic image analysis is a rapidly growing field in biomedical research and is likely to become a standard practice in modern medicine. On the publicly available NIH ChestX-ray14 dataset, containing X-ray images that are classified by the presence or absence of 14 different diseases, we reproduced an algorithm known as CheXNet, as well as explored other algorithms that outperform CheXNet's baseline metrics. Model performance was primarily evaluated using the F1 score and AUC-ROC, both of which are critical metrics for imbalanced, multi-label classification tasks in medical imaging. The best model achieved an average AUC-ROC score of 0.85 and an average F1 score of 0.39 across all 14 disease classifications present in the dataset. 
+ oai:arXiv.org:2505.06646v2 + eess.IV cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Chiara Di Vece, Zhehua Mao, Netanell Avisdris, Brian Dromey, Raffaele Napolitano, Dafna Ben Bashat, Francisco Vasconcelos, Danail Stoyanov, Leo Joskowicz, Sophia Bano + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://creativecommons.org/licenses/by/4.0/ + Daniel J. Strick, Carlos Garcia, Anthony Huang, Thomas Gardos - Towards Reproducibility in Predictive Process Mining: SPICE -- A Deep Learning Library - https://arxiv.org/abs/2512.16715 - arXiv:2512.16715v2 Announce Type: replace -Abstract: In recent years, Predictive Process Mining (PPM) techniques based on artificial neural networks have evolved as a method for monitoring the future behavior of unfolding business processes and predicting Key Performance Indicators (KPIs). However, many PPM approaches often lack reproducibility, transparency in decision making, usability for incorporating novel datasets and benchmarking, making comparisons among different implementations very difficult. In this paper, we propose SPICE, a Python framework that reimplements three popular, existing baseline deep-learning-based methods for PPM in PyTorch, while designing a common base framework with rigorous configurability to enable reproducible and robust comparison of past and future modelling approaches. We compare SPICE to original reported metrics and with fair metrics on 11 datasets. - oai:arXiv.org:2512.16715v2 - cs.LG + Transformer-Based Neural Quantum Digital Twins for Many-Body Quantum Simulation and Optimal Annealing Schedule Design + https://arxiv.org/abs/2505.15662 + arXiv:2505.15662v2 Announce Type: replace-cross +Abstract: We introduce Transformer-based Neural Quantum Digital Twins (Tx-NQDTs) to simulate full adiabatic dynamics of many-body quantum systems, including ground and low-lying excited states, at low computational cost. 
Tx-NQDTs employ a graph-informed Transformer neural network trained to predict spectral properties (energy levels and gap locations) needed for annealing schedule design. We integrate these predictions with an adaptive annealing schedule design based on first-order adiabatic perturbation theory (FOAPT), which slows the evolution near predicted small gaps to maintain adiabaticity. Experiments on a D-Wave quantum annealer (N = 10, 15, 20 qubits, 12 control segments) show that Tx-NQDT-informed schedules significantly improve success probabilities despite hardware noise and calibration drift. The optimized schedules achieve success probabilities 2.2-11.7 percentage points higher than the default linear schedule, outperforming the D-Wave baseline in 44 of 60 cases. These results demonstrate a practical, data-driven route to improved quantum annealing performance on real hardware. + oai:arXiv.org:2505.15662v2 + quant-ph cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Oliver Stritzel, Nick H\"uhnerbein, Simon Rauch, Itzel Zarate, Lukas Fleischmann, Moike Buck, Attila Lischka, Christian Frey + cs.ET + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Jianlong Lu, Hanqiu Peng, Ying Chen - Observer-based Differentially Private Consensus for Linear Multi-agent Systems - https://arxiv.org/abs/2512.16736 - arXiv:2512.16736v2 Announce Type: replace -Abstract: This paper investigates the differentially private consensus problem for general linear multi-agent systems (MASs) based on output feedback protocols. To protect the output information, which is considered private data and may be at high risk of exposure, Laplace noise is added to the information exchange. The conditions for achieving mean square and almost sure consensus in observer-based MASs are established using the backstepping method and the convergence theory for nonnegative almost supermartingales. 
It is shown that the separation principle remains valid for the consensus problem of linear MASs with decaying Laplace noise. Furthermore, the convergence rate is provided. Then, a joint design framework is developed for state estimation gain, feedback control gain, and noise to ensure the preservation of \epsilon-differential privacy. The output information of each agent is shown to be protected at every time step. Finally, sufficient conditions are established for simultaneously achieving consensus and preserving differential privacy for linear MASs utilizing both full-order and reduced-order observers. Meanwhile, an \epsilon*-differentially private consensus is achieved to meet the desired privacy level. Two simulation examples are provided to validate the theoretical results. - oai:arXiv.org:2512.16736v2 - eess.SY - cs.SY - Mon, 22 Dec 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiaofeng Zong, Ming-Yu Wang, Jimin Wang, Ji-Feng Zhang + A Tutorial on Beyond-Diagonal Reconfigurable Intelligent Surfaces: Modeling, Architectures, System Design and Optimization, and Applications + https://arxiv.org/abs/2505.16504 + arXiv:2505.16504v2 Announce Type: replace-cross +Abstract: Written by its inventors, this first tutorial on Beyond-Diagonal Reconfigurable Intelligent Surfaces (BD-RISs) provides the readers with the basics and fundamental tools necessary to appreciate, understand, and contribute to this emerging and disruptive technology. Conventional (Diagonal) RISs (D-RISs) are characterized by a diagonal scattering matrix $\mathbf{\Theta}$ such that the wave manipulation flexibility of D-RIS is extremely limited. In contrast, BD-RIS refers to a novel and general framework for RIS where its scattering matrix is not limited to be diagonal (hence, the ``beyond-diagonal'' terminology) and consequently, all entries of $\mathbf{\Theta}$ can potentially help shaping waves for much higher manipulation flexibility. 
This physically means that BD-RIS can artificially engineer and reconfigure coupling across elements of the surface thanks to inter-element reconfigurable components which allow waves absorbed by one element to flow through other elements. Consequently, BD-RIS opens the door to more general and versatile intelligent surfaces that subsumes existing RIS architectures as special cases. In this tutorial, we share all the secret sauce to model, design, and optimize BD-RIS and make BD-RIS transformative in many different applications. Topics discussed include physics-consistent and multi-port network-aided modeling; transmitting, reflecting, hybrid, and multi-sector mode analysis; reciprocal and non-reciprocal architecture designs and optimal performance-complexity Pareto frontier of BD-RIS; signal processing, optimization, and channel estimation for BD-RIS; hardware impairments (discrete-value impedance and admittance, lossy interconnections and components, wideband effects, mutual coupling) of BD-RIS; benefits and applications of BD-RIS in communications, sensing, power transfer. + oai:arXiv.org:2505.16504v2 + eess.SP + cs.IT + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 + replace-cross + http://creativecommons.org/licenses/by/4.0/ + Hongyu Li, Matteo Nerini, Shanpu Shen, Bruno Clerckx - Analogy between List Coloring Problems and the Interval $k$-$(\gamma,\mu)$-choosability property: theoretical aspects of complexity - https://arxiv.org/abs/2512.16807 - arXiv:2512.16807v2 Announce Type: replace -Abstract: This work investigates structural and computational aspects of list-based graph coloring under interval constraints. Building on the framework of analogous and p-analogous problems, we show that classical List Coloring, $\mu$-coloring, and $(\gamma,\mu)$-coloring share strong complexity-preserving correspondences on graph classes closed under pendant-vertex extensions. These equivalences allow hardness and tractability results to transfer directly among the models. 
Motivated by applications in scheduling and resource allocation with bounded ranges, we introduce the interval-restricted $k$-$(\gamma,\mu)$-coloring model, where each vertex receives an interval of exactly $k$ consecutive admissible colors. We prove that, although $(\gamma,\mu)$-coloring is NP-complete even on several well-structured graph classes, its $k$-restricted version becomes polynomial-time solvable for any fixed $k$. Extending this formulation, we define $k$-$(\gamma,\mu)$-choosability and analyze its expressive power and computational limits. Our results show that the number of admissible list assignments is drastically reduced under interval constraints, yielding a more tractable alternative to classical choosability, even though the general decision problem remains located at high levels of the polynomial hierarchy. Overall, the paper provides a unified view of list-coloring variants through structural reductions, establishes new complexity bounds for interval-based models, and highlights the algorithmic advantages of imposing fixed-size consecutive color ranges. - oai:arXiv.org:2512.16807v2 - cs.CC - Mon, 22 Dec 2025 00:00:00 -0500 - replace + Attractor learning for spatiotemporally chaotic dynamical systems using echo state networks with transfer learning + https://arxiv.org/abs/2505.24099 + arXiv:2505.24099v2 Announce Type: replace-cross +Abstract: In this paper, we explore the predictive capabilities of echo state networks (ESNs) for the generalized Kuramoto-Sivashinsky (gKS) equation, an archetypal nonlinear PDE that exhibits spatiotemporal chaos. Our research focuses on predicting changes in long-term statistical patterns of the gKS model that result from varying the dispersion relation or the length of the spatial domain. We use transfer learning to adapt ESNs to different parameter settings and successfully capture changes in the underlying chaotic attractor. 
Previous work has shown that transfer learning can be used effectively with ESNs for single-orbit prediction. The novelty of our paper lies in our use of this pairing to predict the long-term statistical properties of spatiotemporally chaotic PDEs. We also show that transfer learning nontrivially improves the length of time that predictions of individual gKS trajectories remain accurate.
 oai:arXiv.org:2505.24099v2
 math.DS
 cs.AI
 cs.LG
 nlin.CD
 stat.ML
 Tue, 23 Dec 2025 00:00:00 -0500
 replace-cross
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Mohammad Shah Alam, William Ott, Ilya Timofeyev

 An AI-driven Assessment of Bone Density as a Biomarker Leading to the Aging Law
 https://arxiv.org/abs/2308.02815
 arXiv:2308.02815v2 Announce Type: replace-cross
Abstract: As global population aging intensifies, there is growing interest in the study of biological age. Bones have long been used to evaluate biological age, and the decline in bone density with age is a well-recognized phenomenon in adults. However, the pattern of this decline remains controversial, making it difficult to serve as a reliable indicator of the aging process. Here we present a novel AI-driven statistical method to assess bone density, and a discovery that the bone mass distribution in trabecular bone of vertebrae follows a non-Gaussian, unimodal, and skewed distribution in CT images. The statistical mode of the distribution is defined as the measure of bone mass, which is a groundbreaking assessment of bone density, named Trabecular Bone Density (TBD). The dataset of CT images is collected from 1,719 patients who underwent PET/CT scans in three hospitals, in which a subset of the dataset is used for AI model training and generalization. 
Based upon these cases, we demonstrate that the pattern of bone density declining with aging exhibits a consistent trend of exponential decline across sexes and age groups using TBD assessment. The developed AI-driven statistical method blazes a trail in the field of AI for reliable quantitative computation and AI for medicine. The findings suggest that human aging is a gradual process, with the rate of decline slowing progressively over time, which will provide a valuable basis for scientific prediction of life expectancy.
 oai:arXiv.org:2308.02815v2
 physics.med-ph
 cs.AI
 cs.CV
 Mon, 22 Dec 2025 00:00:00 -0500
 replace-cross
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Linmi Tao, Donglai Tao, Ruiyang Liu, Yu Cheng, Yuezhi Zhou, Li Huo, Zuoxiang He, Ti Jiang, Jingmao Cui, Yuanbiao Wang, Guilan Hu, Xiangsong Zhang, Yongwei Pan, Ye Yuan, Yun Liu

 Density estimation via periodic scaled Korobov kernel method with exponential decay condition
 https://arxiv.org/abs/2506.15419
 arXiv:2506.15419v2 Announce Type: replace-cross
Abstract: We propose the periodic scaled Korobov kernel (PSKK) method for nonparametric density estimation on $\mathbb{R}^d$. By first wrapping the target density into a periodic version through modulo operation and subsequently applying kernel ridge regression in scaled Korobov spaces, we extend the kernel approach proposed by Kazashi and Nobile (SIAM J. Numer. Anal., 2023) and eliminate its requirement for inherent periodicity of the density function. This key modification enables effective estimation of densities defined on unbounded domains. We establish rigorous mean integrated squared error (MISE) bounds, proving that for densities with smoothness of order $\alpha$ and exponential decay, our PSKK method achieves an $\mathcal{O}(M^{-1/(1+1/(2\alpha)+\epsilon)})$ MISE convergence rate with an arbitrarily small $\epsilon>0$. While matching the convergence rate of the previous kernel approach, our method applies to non-periodic distributions at the cost of stronger differentiability and exponential decay assumptions. 
Numerical experiments confirm the theoretical results and demonstrate a significant improvement over traditional kernel density estimation in large-sample regimes.
 oai:arXiv.org:2506.15419v2
 math.ST
 cs.NA
 math.NA
 stat.TH
 Tue, 23 Dec 2025 00:00:00 -0500
 replace-cross
 http://creativecommons.org/licenses/by/4.0/
 Ziyang Ye, Haoyuan Tan, Xiaoqun Wang, Zhijian He

 Differentially private Bayesian tests
 https://arxiv.org/abs/2401.15502
 arXiv:2401.15502v3 Announce Type: replace-cross
Abstract: Differential privacy has emerged as a significant cornerstone in the realm of scientific hypothesis testing utilizing confidential data. In reporting scientific discoveries, Bayesian tests are widely adopted since they effectively circumnavigate the key criticisms of P-values, namely, lack of interpretability and inability to quantify evidence in support of the competing hypotheses. We present a novel differentially private Bayesian hypothesis testing framework that arises naturally under a principled data generative mechanism, inherently maintaining the interpretability of the resulting inferences. Furthermore, by focusing on differentially private Bayes factors based on widely used test statistics, we circumvent the need to model the complete data generative mechanism and ensure substantial computational benefits. We also provide a set of sufficient conditions to establish results on Bayes factor consistency under the proposed framework. The utility of the devised technology is showcased via several numerical experiments. 
- oai:arXiv.org:2401.15502v3 + The Condition Number as a Scale-Invariant Proxy for Information Encoding in Neural Units + https://arxiv.org/abs/2506.16289 + arXiv:2506.16289v2 Announce Type: replace-cross +Abstract: This paper explores the relationship between the condition number of a neural network's weight tensor and the extent of information encoded by the associated processing unit, viewed through the lens of information theory. It argues that a high condition number, though not sufficient for effective knowledge encoding, may indicate that the unit has learned to selectively amplify and compress information. This intuition is formalized for linear units with Gaussian inputs, linking the condition number and the transformation's log-volume scaling factor to the characteristics of the output entropy and the geometric properties of the learned transformation. The analysis demonstrates that for a fixed weight norm, a concentrated distribution of singular values (high condition number) corresponds to reduced overall information transfer, indicating a specialized and efficient encoding strategy. Furthermore, the linear stage entropy bound provides an upper limit on post-activation information for contractive, element-wise nonlinearities, supporting the condition number as a scale-invariant proxy for encoding capacity in practical neural networks. An empirical case study applies these principles to guide selective fine-tuning of Large Language Models for both a new task and a new input modality. The experiments show that the proposed method, named KappaTune, effectively mitigates catastrophic forgetting. Unlike many existing catastrophic forgetting mitigation methods that rely on access to pre-training statistics, which are often unavailable, this selective fine-tuning approach offers a way to bypass this common requirement. 
+ oai:arXiv.org:2506.16289v2 stat.ML - cs.CR cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/publicdomain/zero/1.0/ - Abhisek Chakraborty, Saptati Datta + http://creativecommons.org/licenses/by/4.0/ + Oswaldo Ludwig - SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning - https://arxiv.org/abs/2402.04114 - arXiv:2402.04114v3 Announce Type: replace-cross -Abstract: In this paper, we analyze the sample and communication complexity of the federated linear stochastic approximation (FedLSA) algorithm. We explicitly quantify the effects of local training with agent heterogeneity. We show that the communication complexity of FedLSA scales polynomially with the inverse of the desired accuracy $\epsilon$. To overcome this, we propose SCAFFLSA a new variant of FedLSA that uses control variates to correct for client drift, and establish its sample and communication complexities. We show that for statistically heterogeneous agents, its communication complexity scales logarithmically with the desired accuracy, similar to Scaffnew. An important finding is that, compared to the existing results for Scaffnew, the sample complexity scales with the inverse of the number of agents, a property referred to as linear speed-up. Achieving this linear speed-up requires completely new theoretical arguments. We apply the proposed method to federated temporal difference learning with linear function approximation and analyze the corresponding complexity improvements. - oai:arXiv.org:2402.04114v3 - stat.ML - cs.LG - math.OC - Mon, 22 Dec 2025 00:00:00 -0500 + On the statistical convergence of N-body simulations of the Solar System + https://arxiv.org/abs/2507.04987 + arXiv:2507.04987v2 Announce Type: replace-cross +Abstract: Most direct N-body integrations of planetary systems use a symplectic integrator with a fixed timestep. 
A large timestep is desirable in order to speed up the numerical simulations. However, simulations yield unphysical results if the timestep is too large. Surprisingly, no systematic convergence study has been performed on long (Gyr) timescales. In this paper we present numerical experiments to determine the minimum timestep one has to use in long-term integrations of the Solar System in order to recover the system's fundamental secular frequencies and instability rate. We find that timesteps of up to 32 days, i.e. a third of Mercury's orbital period, yield physical results in an ensemble of 5 Gyr integrations. We argue that the chaotic diffusion that drives the Solar System's long-term evolution dominates over numerical diffusion and timestep resonances. Our results bolster confidence that the statistical results of most simulations in the literature are indeed physical and provide guidance on how to run time- and energy-efficient simulations while making sure results can be trusted.
 oai:arXiv.org:2507.04987v2
 astro-ph.EP
 astro-ph.IM
 cs.NA
 math.NA
 Tue, 23 Dec 2025 00:00:00 -0500
 replace-cross
 http://creativecommons.org/licenses/by-nc-sa/4.0/
 Hanno Rein, Garett Brown, Mei Kanda

 MSDiff: Multi-Scale Diffusion Model for Ultra-Sparse View CT Reconstruction
 https://arxiv.org/abs/2405.05814
 arXiv:2405.05814v2 Announce Type: replace-cross
Abstract: Computed Tomography (CT) technology reduces radiation hazards to the human body through sparse sampling, but fewer sampling angles pose challenges for image reconstruction. Score-based generative models are widely used in sparse-view CT reconstruction, but performance diminishes significantly with a sharp reduction in projection angles. 
Therefore, we propose an ultra-sparse view CT reconstruction method utilizing multi-scale diffusion models (MSDiff), designed to concentrate on the global distribution of information and facilitate the reconstruction of sparse views with local image characteristics. Specifically, the proposed model ingeniously integrates information from both comprehensive sampling and selectively sparse sampling techniques. Through precise adjustments in the diffusion model, it is capable of extracting diverse noise distributions, furthering the understanding of the overall structure of images, and aiding the fully sampled model in recovering image information more effectively. By leveraging the inherent correlations within the projection data, we have designed an equidistant mask, enabling the model to focus its attention more effectively. Experimental results demonstrated that the multi-scale model approach significantly improved the quality of image reconstruction under ultra-sparse angles, with good generalization across various datasets.
 oai:arXiv.org:2405.05814v2
 eess.IV
 cs.CV
 Mon, 22 Dec 2025 00:00:00 -0500
 replace-cross
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Junyan Zhang, Mengxiao Geng, Pinhuang Tan, Yi Liu, Zhili Liu, Bin Huang, Qiegen Liu

 OrthoInsight: Rib Fracture Diagnosis and Report Generation Based on Multi-Modal Large Models
 https://arxiv.org/abs/2507.13993
 arXiv:2507.13993v3 Announce Type: replace-cross
Abstract: The growing volume of medical imaging data has increased the need for automated diagnostic tools, especially for musculoskeletal injuries like rib fractures, commonly detected via CT scans. Manual interpretation is time-consuming and error-prone. We propose OrthoInsight, a multi-modal deep learning framework for rib fracture diagnosis and report generation. It integrates a YOLOv9 model for fracture detection, a medical knowledge graph for retrieving clinical context, and a fine-tuned LLaVA language model for generating diagnostic reports. OrthoInsight combines visual features from CT images with expert textual data to deliver clinically useful outputs. 
Evaluated on 28,675 annotated CT images and expert reports, it achieves high performance across Diagnostic Accuracy, Content Completeness, Logical Coherence, and Clinical Guidance Value, with an average score of 4.28, outperforming models like GPT-4 and Claude-3. This study demonstrates the potential of multi-modal learning in transforming medical image analysis and providing effective support for radiologists. + oai:arXiv.org:2507.13993v3 eess.IV + cs.AI cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Junyan Zhang, Mengxiao Geng, Pinhuang Tan, Yi Liu, Zhili Liu, Bin Huang, Qiegen Liu + Jinzhi Wang, Jiangbo Zhang, Chenzhan Yu, Zhigang Xiu, Duwei Dai, Ziyu xu, Ningyong Wu, Wenhong Zhao - Adjusting Model Size in Continual Gaussian Processes: How Big is Big Enough? - https://arxiv.org/abs/2408.07588 - arXiv:2408.07588v5 Announce Type: replace-cross -Abstract: Many machine learning models require setting a parameter that controls their size before training, e.g. number of neurons in DNNs, or inducing points in GPs. Increasing capacity typically improves performance until all the information from the dataset is captured. After this point, computational cost keeps increasing, without improved performance. This leads to the question "How big is big enough?" We investigate this problem for Gaussian processes (single-layer neural networks) in continual learning. Here, data becomes available incrementally, and the final dataset size will therefore not be known before training, preventing the use of heuristics for setting a fixed model size. We develop a method to automatically adjust model size while maintaining near-optimal performance. 
Our experimental procedure follows the constraint that any hyperparameters must be set without seeing dataset properties, and we show that our method performs well across diverse datasets without the need to adjust its hyperparameter, showing it requires less tuning than others. - oai:arXiv.org:2408.07588v5 - stat.ML + ASR-Synchronized Speaker-Role Diarization + https://arxiv.org/abs/2507.17765 + arXiv:2507.17765v3 Announce Type: replace-cross +Abstract: Speaker-role diarization (RD), such as doctor vs. patient or lawyer vs. client, is practically often more useful than conventional speaker diarization (SD), which assigns only generic labels (speaker-1, speaker-2). The state-of-the-art end-to-end ASR+RD approach uses a single transducer that serializes word and role predictions (role at the end of a speaker's turn), but at the cost of degraded ASR performance. To address this, we adapt a recent joint ASR+SD framework to ASR+RD by freezing the ASR transducer and training an auxiliary RD transducer in parallel to assign a role to each ASR-predicted word. For this, we first show that SD and RD are fundamentally different tasks, exhibiting different dependencies on acoustic and linguistic information. Motivated by this, we propose (1) task-specific predictor networks and (2) using higher-layer ASR encoder features as input to the RD encoder. Additionally, we replace the blank-shared RNNT loss by cross-entropy loss along the 1-best forced-alignment path to further improve performance while reducing computational and memory requirements during RD training. 
Experiments on a public and a private dataset of doctor-patient conversations demonstrate that our method outperforms the best baseline with relative reductions of 6.2% and 4.5% in role-based word diarization error rate (R-WDER), respectively + oai:arXiv.org:2507.17765v3 + eess.AS + cs.AI cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - PMLR 267:48974-49000 (2025) - Guiomar Pescador-Barrios, Sarah Filippi, Mark van der Wilk + Arindam Ghosh, Mark Fuhs, Bongjun Kim, Anurag Chowdhury, Monika Woszczyna - Non-Perturbative Trivializing Flows for Lattice Gauge Theories - https://arxiv.org/abs/2410.13161 - arXiv:2410.13161v2 Announce Type: replace-cross -Abstract: Continuous normalizing flows are known to be highly expressive and flexible, which allows for easier incorporation of large symmetries and makes them a powerful computational tool for lattice field theories. Building on previous work, we present a general continuous normalizing flow architecture for matrix Lie groups that is equivariant under group transformations. We apply this to lattice gauge theories in two dimensions as a proof of principle and demonstrate competitive performance, showing its potential as a tool for future lattice computations. - oai:arXiv.org:2410.13161v2 - hep-lat - cond-mat.stat-mech + Deep Variational Free Energy Calculation of Hydrogen Hugoniot + https://arxiv.org/abs/2507.18540 + arXiv:2507.18540v2 Announce Type: replace-cross +Abstract: We develop a deep variational free energy framework to compute the equation of state of hydrogen in the warm dense matter region. 
This method parameterizes the variational density matrix of hydrogen nuclei and electrons at finite temperature using three deep generative models: a normalizing flow model for the Boltzmann distribution of the classical nuclei, an autoregressive transformer for the distribution of electrons in excited states, and a permutational equivariant flow model for the unitary backflow transformation of electron coordinates in Hartree-Fock states. By jointly optimizing the three neural networks to minimize the variational free energy, we obtain the equation of state and related thermodynamic properties of dense hydrogen for the temperature range where electrons occupy excited states. We compare our results with other theoretical and experimental results on the deuterium Hugoniot curve, aiming to resolve existing discrepancies. Our results bridge the gap between the results obtained by path-integral Monte Carlo calculations at high temperature and ground-state electronic methods at low temperature, thus providing a valuable benchmark for hydrogen in the warm dense matter region. + oai:arXiv.org:2507.18540v2 + cond-mat.str-el cs.LG - hep-th - Mon, 22 Dec 2025 00:00:00 -0500 + physics.comp-ph + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by/4.0/ - 10.1103/31d5-hvp6 - Phys. Rev. D 112, 094516 (2025) - Mathis Gerdes, Pim de Haan, Roberto Bondesan, Miranda C. N. Cheng + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zihang Li, Hao Xie, Xinyang Dong, Lei Wang - Dynamic PET Image Prediction Using a Network Combining Reversible and Irreversible Modules - https://arxiv.org/abs/2410.22674 - arXiv:2410.22674v2 Announce Type: replace-cross -Abstract: Dynamic positron emission tomography (PET) images can reveal the distribution of tracers in the organism and the dynamic processes involved in biochemical reactions, and it is widely used in clinical practice. 
Despite the high effectiveness of dynamic PET imaging in studying the kinetics and metabolic processes of radiotracers, prolonged scan times can cause discomfort for both patients and medical personnel. This study proposes a dynamic frame prediction method for dynamic PET imaging, reducing dynamic PET scanning time by applying a multi-module deep learning framework composed of reversible and irreversible modules. The network can predict kinetic parameter images based on the early frames of dynamic PET images, and then generate complete dynamic PET images. In validation experiments with simulated data, our network demonstrated good predictive performance for kinetic parameters and was able to reconstruct high-quality dynamic PET images. Additionally, in clinical data experiments, the network exhibited good generalization performance, indicating that the proposed method has promising clinical application prospects.
 oai:arXiv.org:2410.22674v2
 eess.IV
 cs.LG
 Mon, 22 Dec 2025 00:00:00 -0500
 replace-cross
 http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 Jie Sun, Junyan Zhang, Qian Xia, Chuanfu Sun, Yumei Chen, Yunjie Yang, Huafeng Liu, Wentao Zhu, Qiegen Liu

 Deep Equilibrium Convolutional Sparse Coding for Hyperspectral Image Denoising
 https://arxiv.org/abs/2508.15553
 arXiv:2508.15553v2 Announce Type: replace-cross
Abstract: Hyperspectral images (HSIs) play a crucial role in remote sensing but are often degraded by complex noise patterns. Ensuring the physical property of the denoised HSIs is vital for robust HSI denoising, giving rise to deep unfolding-based methods. However, these methods map the optimization of a physical model to a learnable network with a predefined depth, which lacks convergence guarantees. In contrast, Deep Equilibrium (DEQ) models treat the hidden layers of deep networks as the solution to a fixed-point problem and model them as infinite-depth networks, naturally consistent with the optimization. Under the framework of DEQ, we propose a Deep Equilibrium Convolutional Sparse Coding (DECSC) framework that unifies local spatial-spectral correlations, nonlocal spatial self-similarities, and global spatial consistency for robust HSI denoising. 
Within the convolutional sparse coding (CSC) framework, we enforce shared 2D convolutional sparse representation to ensure global spatial consistency across bands, while unshared 3D convolutional sparse representation captures local spatial-spectral details. To further exploit nonlocal self-similarities, a transformer block is embedded after the 2D CSC. Additionally, a detail enhancement module is integrated with the 3D CSC to promote image detail preservation. We formulate the proximal gradient descent of the CSC model as a fixed-point problem and transform the iterative updates into a learnable network architecture within the framework of DEQ. Experimental results demonstrate that our DECSC method achieves superior denoising performance compared to state-of-the-art methods. + oai:arXiv.org:2508.15553v2 eess.IV - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jie Sun, Junyan Zhang, Qian Xia, Chuanfu Sun, Yumei Chen, Yunjie Yang, Huafeng Liu, Wentao Zhu, Qiegen Liu + Jin Ye, Jingran Wang, Fengchao Xiong, Jingzhou Chen, Yuntao Qian - Targeted Learning for Variable Importance - https://arxiv.org/abs/2411.02221 - arXiv:2411.02221v2 Announce Type: replace-cross -Abstract: Variable importance is one of the most widely used measures for interpreting machine learning with significant interest from both statistics and machine learning communities. Recently, increasing attention has been directed toward uncertainty quantification in these metrics. Current approaches largely rely on one-step procedures, which, while asymptotically efficient, can present higher sensitivity and instability in finite sample settings. To address these limitations, we propose a novel method by employing the targeted learning (TL) framework, designed to enhance robustness in inference for variable importance metrics. 
Our approach is particularly suited for conditional permutation variable importance. We show that it (i) retains the asymptotic efficiency of traditional methods, (ii) maintains comparable computational complexity, and (iii) delivers improved accuracy, especially in finite sample contexts. We further support these findings with numerical experiments that illustrate the practical advantages of our method and validate the theoretical results. - oai:arXiv.org:2411.02221v2 - stat.ML - cs.LG - stat.ME - Mon, 22 Dec 2025 00:00:00 -0500 + Understanding Benefits and Pitfalls of Current Methods for the Segmentation of Undersampled MRI Data + https://arxiv.org/abs/2508.18975 + arXiv:2508.18975v2 Announce Type: replace-cross +Abstract: MR imaging is a valuable diagnostic tool allowing to non-invasively visualize patient anatomy and pathology with high soft-tissue contrast. However, MRI acquisition is typically time-consuming, leading to patient discomfort and increased costs to the healthcare system. Recent years have seen substantial research effort into the development of methods that allow for accelerated MRI acquisition while still obtaining a reconstruction that appears similar to the fully-sampled MR image. However, for many applications a perfectly reconstructed MR image may not be necessary, particularly, when the primary goal is a downstream task such as segmentation. This has led to growing interest in methods that aim to perform segmentation directly on accelerated MRI data. Despite recent advances, existing methods have largely been developed in isolation, without direct comparison to one another, often using separate or private datasets, and lacking unified evaluation standards. To date, no high-quality, comprehensive comparison of these methods exists, and the optimal strategy for segmenting accelerated MR data remains unknown. This paper provides the first unified benchmark for the segmentation of undersampled MRI data comparing 7 approaches. 
A particular focus is placed on comparing \textit{one-stage approaches}, that combine reconstruction and segmentation into a unified model, with \textit{two-stage approaches}, that utilize established MRI reconstruction methods followed by a segmentation network. We test these methods on two MRI datasets that include multi-coil k-space data as well as a human-annotated segmentation ground-truth. We find that simple two-stage methods that consider data-consistency lead to the best segmentation scores, surpassing complex specialized methods that are developed specifically for this task. + oai:arXiv.org:2508.18975v2 + eess.IV + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Xiaohan Wang, Yunzhe Zhou, Giles Hooker + Jan Nikolas Morshuis, Matthias Hein, Christian F. Baumgartner - Refined Analysis of Federated Averaging and Federated Richardson-Romberg - https://arxiv.org/abs/2412.01389 - arXiv:2412.01389v2 Announce Type: replace-cross -Abstract: In this paper, we present a novel analysis of \FedAvg with constant step size, relying on the Markov property of the underlying process. We demonstrate that the global iterates of the algorithm converge to a stationary distribution and analyze its resulting bias and variance relative to the problem's solution. We provide a first-order bias expansion in both homogeneous and heterogeneous settings. Interestingly, this bias decomposes into two distinct components: one that depends solely on stochastic gradient noise and another on client heterogeneity. Finally, we introduce a new algorithm based on the Richardson-Romberg extrapolation technique to mitigate this bias. 
- oai:arXiv.org:2412.01389v2 - stat.ML - cs.LG - math.OC - Mon, 22 Dec 2025 00:00:00 -0500 + Modeling spectral filtering effects on color-matching functions: Implications for observer variability + https://arxiv.org/abs/2508.19291 + arXiv:2508.19291v2 Announce Type: replace-cross +Abstract: This study investigates the impact of spectral filtering on color-matching functions (CMFs) and its implications for observer variability modeling. We conducted color matching experiments with two observers, both with and without a spectral filter in front of a bipartite field. Using a novel computational approach, we estimated the filter transmittance and transformation matrix necessary to convert unfiltered CMFs to filtered CMFs. Statistical analysis revealed good agreement between estimated and measured filter characteristics, particularly in central wavelength regions. Applying this methodology to compare between Stiles and Burch 1955 (SB1955) mean observer CMFs and our previously published "ICVIO" mean observer CMFs, we identified a "yellow" (short-wavelength suppressing) filter that effectively transforms between these datasets. This finding aligns with our hypothesis that observed differences between the CMF sets are attributable to age-related lens yellowing (average observer age: 49 years in ICVIO versus 30 years in SB1955). Our approach enables efficient representation of observer variability through a single filter rather than three separate functions, offering potentially reduced experimental overhead while maintaining accuracy in characterizing individual color vision differences. + oai:arXiv.org:2508.19291v2 + astro-ph.IM + cs.CV + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Paul Mangold, Alain Durmus, Aymeric Dieuleveut, Sergey Samsonov, Eric Moulines + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Luvin Munish Ragoo, Ivar Farup, Casper F. 
Andersen, Graham Finlayson - Q-Fly: An Optical Interconnect for Modular Quantum Computers - https://arxiv.org/abs/2412.09299 - arXiv:2412.09299v3 Announce Type: replace-cross -Abstract: Much like classical supercomputers, scaling up quantum computers requires an optical interconnect. However, signal attenuation leads to irreversible qubit loss, making quantum interconnect design guidelines and metrics different from conventional computing. Inspired by the classical Dragonfly topology, we propose a multi-group structure where the group switch routes photons emitted by computational end nodes to the group's shared pool of Bell state analyzers (which conduct the entanglement swapping that creates end-to-end entanglement) or across a low-diameter path to another group. We present a full-stack analysis of system performance, a combination of distributed and centralized protocols, and a resource scheduler that plans qubit placement and communications for large-scale, fault-tolerant systems. We implement a prototype three-node switched interconnect to justify hardware-side scalability and to expose low-level architectural challenges. We create two-hop entanglement with fidelities of 0.6-0.76. Our design emphasizes reducing network hops and optical components to simplify system stabilization while flexibly adjusting optical path lengths. Based on evaluated loss and infidelity budgets, we find that moderate-radix switches enable systems meeting expected near-term needs, and large systems are feasible. Our design is expected to be effective for a variety of quantum computing technologies, including ion traps and neutral atoms. 
- oai:arXiv.org:2412.09299v3 - quant-ph - cs.AR - Mon, 22 Dec 2025 00:00:00 -0500 + On Zero-sum Game Representation for Replicator Dynamics + https://arxiv.org/abs/2508.21299 + arXiv:2508.21299v2 Announce Type: replace-cross +Abstract: Replicator dynamics have been widely used in evolutionary game theory to model how strategy frequencies evolve over time in large populations. The so-called payoff matrix encodes the pairwise fitness that each strategy obtains when interacting with every other strategy, and it solely determines the replicator dynamics. If the payoff matrix is unknown, we show in this paper that it cannot be inferred from observed strategy frequencies alone -- distinct payoff matrices can induce the same replicator dynamics. We thus look for a canonical representative of the payoff matrix in the equivalence class. The main result of the paper is to show that for every polynomial replicator dynamics (i.e., the vector field is a polynomial), there always exists a skew-symmetric, polynomial payoff matrix that can induce the given dynamics. + oai:arXiv.org:2508.21299v2 + math.DS + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Daisuke Sakuma, Tomoki Tsuno, Hikaru Shimizu, Yuki Kurosawa, Monet Tokuyama Friedrich, Kentaro Teramoto, Amin Taherkhani, Andrew Todd, Yosuke Ueno, Michal Hajdu\v{s}ek, Rikizo Ikuta, Rodney Van Meter, Toshihiko Sasaki, Shota Nagayama + Haoyu Yin, Xudong Chen, Bruno Sinopoli - Raising Bidders' Awareness in Second-Price Auctions - https://arxiv.org/abs/2412.12676 - arXiv:2412.12676v2 Announce Type: replace-cross -Abstract: When bidders bid on complex objects, they might be unaware of characteristics effecting their valuations. We assume that each buyer's valuation is a sum of independent random variables, one for each characteristic. When a bidder is unaware of a characteristic, he omits the random variable from the sum. 
We study the seller's decision to raise bidders' awareness of characteristics before a second-price auction with entry fees. Optimal entry fees capture an additional unawareness rent due to unaware bidders misperceiving their probability of winning and the price to be paid upon winning. When raising a bidder's individual awareness of a characteristic with positive expected value, the seller faces a trade-off between positive effects on the expected first order statistic and unawareness rents of remaining unaware bidders on one hand and the loss of the unawareness rent from the newly aware bidder on the other. We present characterization results on raising public awareness together with no versus full information. We discuss the winner's curse due to unawareness of characteristics. - oai:arXiv.org:2412.12676v2 - econ.TH - cs.GT - Mon, 22 Dec 2025 00:00:00 -0500 + MeanVC: Lightweight and Streaming Zero-Shot Voice Conversion via Mean Flows + https://arxiv.org/abs/2510.08392 + arXiv:2510.08392v3 Announce Type: replace-cross +Abstract: Zero-shot voice conversion (VC) aims to transfer timbre from a source speaker to any unseen target speaker while preserving linguistic content. Growing application scenarios demand models with streaming inference capabilities. This has created a pressing need for models that are simultaneously fast, lightweight, and high-fidelity. However, existing streaming methods typically rely on either autoregressive (AR) or non-autoregressive (NAR) frameworks, which either require large parameter sizes to achieve strong performance or struggle to generalize to unseen speakers. In this study, we propose MeanVC, a lightweight and streaming zero-shot VC approach. MeanVC introduces a diffusion transformer with a chunk-wise autoregressive denoising strategy, combining the strengths of both AR and NAR paradigms for efficient streaming processing. 
By introducing mean flows, MeanVC regresses the average velocity field during training, enabling zero-shot VC with superior speech quality and speaker similarity in a single sampling step by directly mapping from the start to the endpoint of the flow trajectory. Additionally, we incorporate diffusion adversarial post-training to mitigate over-smoothing and further enhance speech quality. Experimental results demonstrate that MeanVC significantly outperforms existing zero-shot streaming VC systems, achieving superior conversion quality with higher efficiency and significantly fewer parameters. Audio demos and code are publicly available at https://aslp-lab.github.io/MeanVC. + oai:arXiv.org:2510.08392v3 + eess.AS + cs.SD + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ying Xue Li, Burkhard C. Schipper + Guobin Ma, Jixun Yao, Ziqian Ning, Yuepeng Jiang, Lingxin Xiong, Lei Xie, Pengcheng Zhu - Feedback Arc Sets and Feedback Arc Set Decompositions in Weighted and Unweighted Oriented Graphs - https://arxiv.org/abs/2501.06935 - arXiv:2501.06935v3 Announce Type: replace-cross -Abstract: Let $D=(V(D),A(D))$ be a digraph with at least one directed cycle. A set $F$ of arcs is a feedback arc set (FAS) if $D-F$ has no directed cycle. The FAS decomposition number ${\rm fasd}(D)$ of $D$ is the maximum number of pairwise disjoint FASs whose union is $A(D)$. The directed girth $g(D)$ of $D$ is the minimum length of a directed cycle of $D$. Note that ${\rm fasd}(D)\le g(D).$ The FAS decomposition number appears in the well-known and far-from-solved conjecture of Woodall (1978) stating that for every planar digraph $D$ with at least one directed cycle, ${\rm fasd}(D)=g(D).$ The degree of a vertex of $D$ is the sum of its in-degree and out-degree. - Let $D$ be an arc-weighted digraph and let ${\rm fas}_w(D)$ denote the minimum weight of its FAS. 
In this paper, we study bounds on ${\rm fasd}(D)$, ${\rm fas}_w(D)$ and ${\rm fas}(D)$ for arc-weighted oriented graphs $D$ (i.e., digraphs without opposite arcs) with upper-bounded maximum degree $\Delta(D)$ and lower-bounded $g(D)$. Note that these parameters are related: ${\rm fas}_w(D)\le w(D)/{\rm fasd}(D)$, where $w(D)$ is the total weight of $D$, and ${\rm fas}(D)\le |A(D)|/{\rm fasd}(D).$ In particular, we prove the following: (i) If $\Delta(D)\leq~4$ and $g(D)\geq 3$, then ${\rm fasd}(D) \geq 3$ and therefore ${\rm fas}_w(D)\leq \frac{w(D)}{3}$ which generalizes a known tight bound for an unweighted oriented graph with maximum degree at most 4; (ii) If $\Delta(D)\leq 3$ and $g(D)\in \{3,4,5\}$, then ${\rm fasd}(D)=g(D)$; (iii) If $\Delta(D)\leq 3$ and $g(D)\ge 8$ then ${\rm fasd}(D)<g(D).$ We also give some bounds for the cases when $\Delta$ or $g$ are large and state several open problems and a conjecture. - oai:arXiv.org:2501.06935v3 - math.CO - cs.DM - Mon, 22 Dec 2025 00:00:00 -0500 + Polarization based direction of arrival estimation using a radio interferometric array + https://arxiv.org/abs/2510.15116 + arXiv:2510.15116v2 Announce Type: replace-cross +Abstract: Direction of arrival (DOA) estimation is mostly performed using specialized arrays that have carefully designed receiver spacing and layouts to match the operating frequency range. In contrast, radio interferometric arrays are designed to optimally sample the Fourier space data for making high quality images of the sky. + Therefore, using existing radio interferometric arrays (with arbitrary geometry and wide frequency variation) for DOA estimation is practically infeasible except by using images made by such interferometers. In this paper, we focus on low cost DOA estimation without imaging, using a subset of a radio interferometric array, using a fraction of the data collected by the full array, and enabling early determination of DOAs. 
The proposed method is suitable for transient and low duty cycle source detection. Moreover, the proposed method is an ideal follow-up step to online radio frequency interference (RFI) mitigation, enabling the early estimation of the DOA of the detected RFI. + oai:arXiv.org:2510.15116v2 + astro-ph.IM + cs.LG + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Gregory Gutin, Mads Anker Nielsen, Anders Yeo, Yacong Zhou + Sarod Yatawatta - Saturation-Aware Snapshot Compressive Imaging: Theory and Algorithm - https://arxiv.org/abs/2501.11869 - arXiv:2501.11869v3 Announce Type: replace-cross -Abstract: Snapshot Compressive Imaging (SCI) uses coded masks to compress a 3D data cube into a single 2D snapshot. In practice, multiplexing can push intensities beyond the sensor's dynamic range, producing saturation that violates the linear SCI model and degrades reconstruction. This paper provides the first theoretical characterization of SCI recovery under saturation. We model clipping as an element-wise nonlinearity and derive a finite-sample recovery bound for compression-based SCI that links reconstruction error to mask density and the extent of saturation. The analysis yields a clear design rule: optimal Bernoulli masks use densities below one-half, decreasing further as saturation strengthens. Guided by this principle, we optimize mask patterns and introduce a novel reconstruction framework, Saturation-Aware PnP Net (SAPnet), which explicitly enforces consistency with saturated measurements. Experiments on standard video-SCI benchmarks confirm our theory and demonstrate that SAPnet significantly outperforms existing PnP-based methods. 
- oai:arXiv.org:2501.11869v3 - eess.IV + A Single-Loop First-Order Algorithm for Linearly Constrained Bilevel Optimization + https://arxiv.org/abs/2510.24710 + arXiv:2510.24710v2 Announce Type: replace-cross +Abstract: We study bilevel optimization problems where the lower-level problems are strongly convex and have coupled linear constraints. To overcome the potential non-smoothness of the hyper-objective and the computational challenges associated with the Hessian matrix, we utilize penalty and augmented Lagrangian methods to reformulate the original problem as a single-level one. In particular, we establish a strong theoretical connection between the reformulated function and the original hyper-objective by characterizing the closeness of their values and derivatives. Based on this reformulation, we propose a single-loop, first-order algorithm for linearly constrained bilevel optimization (SFLCB). We provide rigorous analyses of its non-asymptotic convergence rates, showing an improvement over prior double-loop algorithms -- from $O(\epsilon^{-3}\log(\epsilon^{-1}))$ to $O(\epsilon^{-3})$. The experiments corroborate our theoretical findings and demonstrate the practical efficiency of the proposed SFLCB algorithm. Simulation code is provided at https://github.com/ShenGroup/SFLCB. + oai:arXiv.org:2510.24710v2 + math.OC cs.IT + cs.LG math.IT
Our results are formulated constructively in a type-theoretic presentation of $\omega$-categories. The heart of our construction is a family of padding and repadding techniques, which gives an equivalence relation between cells which are not necessarily parallel. Our work has been implemented, allowing us to explicitly compute suitable witnesses, which grow rapidly in complexity as the dimension increases. These witnesses can be exported as inhabitants of identity types in Homotopy Type Theory, and hence are of relevance in synthetic homotopy theory. - oai:arXiv.org:2501.16465v3 - math.CT - cs.LO - Mon, 22 Dec 2025 00:00:00 -0500 + Fractional Iterates and Oscillatory Convergence + https://arxiv.org/abs/2510.25606 + arXiv:2510.25606v3 Announce Type: replace-cross +Abstract: The simple continued fractions for the Golden & Silver means are well-known. It is astonishing that, as far as we know, no one has published half-iterates (let alone quarter-iterates) for the corresponding algorithms. We also examine the cosine and logistic maps (with parameter $2 < \lambda < 3$). + oai:arXiv.org:2510.25606v3 + math.NT + cs.DM + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by/4.0/ - Thibaut Benjamin, Ioannis Markakis, Wilfred Offord, Chiara Sarti, Jamie Vicary + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Steven Finch - Foundation for unbiased cross-validation of spatio-temporal models for species distribution modeling - https://arxiv.org/abs/2502.03480 - arXiv:2502.03480v2 Announce Type: replace-cross -Abstract: Evaluating the predictive performance of species distribution models (SDMs) under realistic deployment scenarios requires careful handling of spatial and temporal dependencies in the data. Cross-validation (CV) is the standard approach for model evaluation, but its design strongly influences the validity of performance estimates. 
When SDMs are intended for spatial or temporal transfer, random CV can lead to overoptimistic results due to spatial autocorrelation (SAC) among neighboring observations. - We benchmark four machine learning algorithms (GBM, XGBoost, LightGBM, Random Forest) on two real-world presence-absence datasets, a temperate plant and an anadromous fish, using multiple CV designs: random, spatial, spatio-temporal, environmental, and forward-chaining. Two training data usage strategies (LAST FOLD and RETRAIN) are evaluated, with hyperparameter tuning performed within each CV scheme. Model performance is assessed on independent out-of-time test sets using AUC, MAE, and correlation metrics. - Random CV overestimates AUC by up to 0.16 and produces MAE values up to 80 percent higher than spatially blocked alternatives. Blocking at the empirical SAC range substantially reduces this bias. Training strategy affects evaluation outcomes: LAST FOLD yields smaller validation-test discrepancies under strong SAC, while RETRAIN achieves higher test AUC when SAC is weaker. Boosted ensemble models consistently perform best under spatially structured CV designs. We recommend a robust SDM workflow based on SAC-aware blocking, blocked hyperparameter tuning, and external temporal validation to improve reliability under spatial and temporal shifts. - oai:arXiv.org:2502.03480v2 - stat.AP - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Practical hybrid decoding scheme for parity-encoded spin systems + https://arxiv.org/abs/2510.26189 + arXiv:2510.26189v3 Announce Type: replace-cross +Abstract: We propose a practical hybrid decoding scheme for the parity-encoding architecture. This architecture was first introduced by N. Sourlas as a computational technique for tackling hard optimization problems, especially those modeled by spin systems such as the Ising model and spin glasses, and reinvented by W. Lechner, P. Hauke, and P. Zoller to develop quantum annealing devices. 
We study the specific model, called the SLHZ model, aiming to achieve a near-term quantum annealing device implemented solely through geometrically local spin interactions. Taking account of the close connection between the SLHZ model and a classical low-density-parity-check code, two approaches can be chosen for the decoding: (1) finding the ground state of a spin Hamiltonian derived from the SLHZ model, which can be achieved via stochastic decoders such as a quantum annealer or a classical Monte Carlo sampler; (2) using deterministic decoding techniques for the classical LDPC code, such as belief propagation and bit-flip decoding. The proposed hybrid approach combines the two approaches by applying bit-flip decoding to the readout of the stochastic decoder based on the SLHZ model. We present simulations demonstrating that this approach can reveal the latent potential of the SLHZ model, realizing the soft-annealing concept proposed by Sourlas. + oai:arXiv.org:2510.26189v3 + quant-ph + cs.IT + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yoshihiro Nambu - Constraint-based causal discovery with tiered background knowledge and latent variables in single or overlapping datasets - https://arxiv.org/abs/2503.21526 - arXiv:2503.21526v3 Announce Type: replace-cross -Abstract: In this paper we consider the use of tiered background knowledge within constraint-based causal discovery. Our focus is on settings relaxing causal sufficiency, i.e. allowing for latent variables which may arise because relevant information could not be measured at all, or not jointly, as in the case of multiple overlapping datasets. We first present novel insights into the properties of the 'tiered FCI' (tFCI) algorithm. 
Building on this, we introduce a new extension of the IOD (integrating overlapping datasets) algorithm incorporating tiered background knowledge, the 'tiered IOD' (tIOD) algorithm. We show that under full usage of the tiered background knowledge tFCI and tIOD are sound, while simple versions of the tIOD and tFCI are sound and complete. We further show that the tIOD algorithm can often be expected to be considerably more efficient and informative than the IOD algorithm even beyond the obvious restriction of the Markov equivalence classes. We provide a formal result on the conditions for this gain in efficiency and informativeness. Our results are accompanied by a series of examples illustrating the exact role and usefulness of tiered background knowledge. - oai:arXiv.org:2503.21526v3 + Source-Optimal Training is Transfer-Suboptimal + https://arxiv.org/abs/2511.08401 + arXiv:2511.08401v3 Announce Type: replace-cross +Abstract: We prove that training a source model optimally for its own task is generically suboptimal when the objective is downstream transfer. We study the source-side optimization problem in L2-SP ridge regression and show a fundamental mismatch between the source-optimal and transfer-optimal source regularization: outside of a measure-zero set, $\tau_0^* \neq \tau_S^*$. We characterize the transfer-optimal source penalty $\tau_0^*$ as a function of task alignment and identify an alignment-dependent reversal: with imperfect alignment ($0<\rho<1$), transfer benefits from stronger source regularization, while in super-aligned regimes ($\rho>1$), transfer benefits from weaker regularization. In isotropic settings, the decision of whether transfer helps is independent of the target sample size and noise, depending only on task alignment and source characteristics. 
We verify the linear predictions in a synthetic ridge regression experiment, and we present CIFAR-10 experiments as evidence that the source-optimal versus transfer-optimal mismatch can persist in nonlinear networks. + oai:arXiv.org:2511.08401v3 stat.ML cs.LG math.ST stat.TH - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by/4.0/ - Christine W. Bang, Vanessa Didelez + http://creativecommons.org/publicdomain/zero/1.0/ + C. Evans Hedges - A Survey on Archetypal Analysis - https://arxiv.org/abs/2504.12392 - arXiv:2504.12392v2 Announce Type: replace-cross -Abstract: Archetypal analysis (AA) was originally proposed in 1994 by Adele Cutler and Leo Breiman as a computational procedure for extracting distinct aspects, so-called archetypes, from observations, with each observational record approximated as a mixture (i.e., convex combination) of these archetypes. AA thereby provides straightforward, interpretable, and explainable representations for feature extraction and dimensionality reduction, facilitating the understanding of the structure of high-dimensional data and enabling wide applications across the sciences. However, AA also faces challenges, particularly as the associated optimization problem is non-convex. This is the first survey that provides researchers and data mining practitioners with an overview of the methodologies and opportunities that AA offers, surveying the many applications of AA across disparate fields of science, as well as best practices for modeling data with AA and its limitations. The survey concludes by explaining crucial future research directions concerning AA. 
- oai:arXiv.org:2504.12392v2 - stat.ME - cs.LG - stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 + Guaranteeing Higher Order Convergence Rates for Accelerated Wasserstein Gradient Flow Schemes + https://arxiv.org/abs/2511.10884 + arXiv:2511.10884v2 Announce Type: replace-cross +Abstract: In this paper, we study higher-order-accurate-in-time minimizing movements schemes for Wasserstein gradient flows. We introduce a novel accelerated second-order scheme, leveraging the differential structure of the Wasserstein space in both Eulerian and Lagrangian coordinates. For sufficiently smooth energy functionals, we show that our scheme provably achieves an optimal quadratic convergence rate. Under the weaker assumptions of Wasserstein differentiability and $\lambda$-displacement convexity (for any $\lambda\in \mathbb{R}$), we show that our scheme still achieves a first-order convergence rate and has strong numerical stability. In particular, we show that the energy is nearly monotone in general, while when the energy is $L$-smooth and $\lambda$-displacement convex (with $\lambda>0$), we prove the energy is non-increasing and the norm of the Wasserstein gradient is exponentially decreasing along the iterates. Taken together, our work provides the first fully rigorous proof of accelerated second-order convergence rates for smooth functionals and shows that the scheme performs no worse than the classical JKO scheme for functionals that are $\lambda$-displacement convex and Wasserstein differentiable. 
+ oai:arXiv.org:2511.10884v2 + math.AP + cs.NA + math.NA + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Aleix Alcacer, Irene Epifanio, Sebastian Mair, Morten M{\o}rup + Raymond Chu, Matt Jacobs - The Eye as a Window to Systemic Health: A Survey of Retinal Imaging from Classical Techniques to Oculomics - https://arxiv.org/abs/2505.04006 - arXiv:2505.04006v2 Announce Type: replace-cross -Abstract: The unique vascularized anatomy of the human eye, encased in the retina, provides an opportunity to act as a window for human health. The retinal structure assists in assessing the early detection, monitoring of disease progression and intervention for both ocular and non-ocular diseases. The advancement in imaging technology leveraging Artificial Intelligence has seized this opportunity to bridge the gap between the eye and human health. This track paves the way for unveiling systemic health insight from the ocular system and surrogating non-invasive markers for timely intervention and identification. The new frontiers of oculomics in ophthalmology cover both ocular and systemic diseases, and are getting more attention. In this survey paper, we explore the evolution of retinal imaging techniques, the dire need for the integration of AI-driven analysis, and the shift of retinal imaging from classical techniques to oculomics. We also discuss some hurdles that may be faced in the progression of oculomics, highlighting the research gaps and future directions. 
- oai:arXiv.org:2505.04006v2 + Weakly Supervised Segmentation and Classification of Alpha-Synuclein Aggregates in Brightfield Midbrain Images + https://arxiv.org/abs/2511.16268 + arXiv:2511.16268v2 Announce Type: replace-cross +Abstract: Parkinson's disease (PD) is a neurodegenerative disorder associated with the accumulation of misfolded alpha-synuclein aggregates, forming Lewy bodies and neuritic shapes used for pathology diagnostics. Automatic analysis of immunohistochemistry histopathological images with Deep Learning provides a promising tool for better understanding the spatial organization of these aggregates. In this study, we develop an automated image processing pipeline to segment and classify these aggregates in whole-slide images (WSIs) of midbrain tissue from PD and incidental Lewy Body Disease (iLBD) cases based on weakly supervised segmentation, robust to immunohistochemical labelling variability, with a ResNet50 classifier. Our approach allows us to differentiate between major aggregate morphologies, including Lewy bodies and neurites with a balanced accuracy of $80\%$. This framework paves the way for large-scale characterization of the spatial distribution and heterogeneity of alpha-synuclein aggregates in brightfield immunohistochemical tissue, and for investigating their poorly understood relationships with surrounding cells such as microglia and astrocytes. 
+ oai:arXiv.org:2511.16268v2 eess.IV cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + q-bio.QM + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1016/j.premed.2025.100023 - Volume 4, December 2025, 100023 - Inamullah, Imran Razzak, Shoaib Jameel + http://creativecommons.org/licenses/by/4.0/ + Erwan Dereure, Robin Louiset, Laura Parkkinen, David A Menassa, David Holcman - The Stochastic Occupation Kernel (SOCK) Method for Learning Stochastic Differential Equations - https://arxiv.org/abs/2505.11622 - arXiv:2505.11622v2 Announce Type: replace-cross -Abstract: We present a novel kernel-based method for learning multivariate stochastic differential equations (SDEs). The method follows a two-step procedure: we first estimate the drift term function, then the (matrix-valued) diffusion function given the drift. Occupation kernels are integral functionals on a reproducing kernel Hilbert space (RKHS) that aggregate information over a trajectory. Our approach leverages vector-valued occupation kernels for estimating the drift component of the stochastic process. For diffusion estimation, we extend this framework by introducing operator-valued occupation kernels, enabling the estimation of an auxiliary matrix-valued function as a positive semi-definite operator, from which we readily derive the diffusion estimate. This enables us to avoid common challenges in SDE learning, such as intractable likelihoods, by optimizing a reconstruction-error-based objective. We propose a simple learning procedure that retains strong predictive accuracy while using Fenchel duality to promote efficiency. We validate the method on simulated benchmarks and a real-world dataset of Amyloid imaging in healthy and Alzheimer's disease subjects. 
- oai:arXiv.org:2505.11622v2 + Ensuring Calibration Robustness in Split Conformal Prediction Under Adversarial Attacks + https://arxiv.org/abs/2511.18562 + arXiv:2511.18562v2 Announce Type: replace-cross +Abstract: Conformal prediction (CP) provides distribution-free, finite-sample coverage guarantees but critically relies on exchangeability, a condition often violated under distribution shift. We study the robustness of split conformal prediction under adversarial perturbations at test time, focusing on both coverage validity and the resulting prediction set size. Our theoretical analysis characterizes how the strength of adversarial perturbations during calibration affects coverage guarantees under adversarial test conditions. We further examine the impact of adversarial training at the model-training stage. Extensive experiments support our theory: (i) Prediction coverage varies monotonically with the calibration-time attack strength, enabling the use of nonzero calibration-time attack to predictably control coverage under adversarial tests; (ii) target coverage can hold over a range of test-time attacks: with a suitable calibration attack, coverage stays within any chosen tolerance band across a contiguous set of perturbation levels; and (iii) adversarial training at the training stage produces tighter prediction sets that retain high informativeness. + oai:arXiv.org:2511.18562v2 stat.ML cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Michael L. Wells, Kamel Lahouel, Bruno Jedynak + Xunlei Qian, Yue Xing - Clustering and Pruning in Causal Data Fusion - https://arxiv.org/abs/2505.15215 - arXiv:2505.15215v2 Announce Type: replace-cross -Abstract: Data fusion, the process of combining observational and experimental data, can enable the identification of causal effects that would otherwise remain non-identifiable. 
Although identification algorithms have been developed for specific scenarios, do-calculus remains the only general-purpose tool for causal data fusion, particularly when variables are present in some data sources but not others. However, approaches based on do-calculus may encounter computational challenges as the number of variables increases and the causal graph grows in complexity. Consequently, there exists a need to reduce the size of such models while preserving the essential features. For this purpose, we propose pruning (removing unnecessary variables) and clustering (combining variables) as preprocessing operations for causal data fusion. We generalize earlier results on a single data source and derive conditions for applying pruning and clustering in the case of multiple data sources. We give sufficient conditions for inferring the identifiability or non-identifiability of a causal effect in a larger graph based on a smaller graph and show how to obtain the corresponding identifying functional for identifiable causal effects. Examples from epidemiology and social science demonstrate the use of the results. - oai:arXiv.org:2505.15215v2 - stat.ML - cs.LG - stat.ME - Mon, 22 Dec 2025 00:00:00 -0500 + Algebraic Obstructions and the Collapse of Elementary Structure in the Kronecker Problem + https://arxiv.org/abs/2511.22856 + arXiv:2511.22856v3 Announce Type: replace-cross +Abstract: While Kronecker coefficients $g(\lambda,\mu,\nu)$ with bounded rows are polynomial-time computable via lattice-point methods, no explicit closed-form formulas have been obtained for genuinely three-row cases in the 87 years since Murnaghan's foundational work. This paper provides such formulas for the first time and identifies a universal structural boundary at parameter value 5 where elementary combinatorial patterns collapse. 
+ We analyze two independent families of genuinely three-row coefficients and establish that for $k \leq 4$, the formulas exhibit elementary structure: oscillation bounds follow the triangular-Hogben pattern, and polynomial expressions factor completely over $\mathbb{Z}$. At the critical threshold $k=5$, this structure collapses: the triangular pattern fails, and algebraic obstructions -- irreducible quadratic factors with negative discriminant -- emerge. + We develop integer forcing, a proof technique exploiting the tension between continuous asymptotics and discrete integrality. As concrete results, we prove that $g((n,n,1)^3) = 2 - (n \mod 2)$ for all $n \geq 3$ -- the first explicit formula for a genuinely three-row Kronecker coefficient -- derive five explicit polynomial formulas for staircase-hook coefficients, and verify Saxl's conjecture for 132 three-row partitions. + oai:arXiv.org:2511.22856v3 + math.CO + cs.CC + math.RT + quant-ph + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Otto Tabell, Santtu Tikka, Juha Karvanen + Soong Kyum Lee - Text-guided multi-stage cross-perception network for medical image segmentation - https://arxiv.org/abs/2506.07475 - arXiv:2506.07475v3 Announce Type: replace-cross -Abstract: Medical image segmentation plays a crucial role in clinical medicine, serving as a key tool for auxiliary diagnosis, treatment planning, and disease monitoring. However, traditional segmentation methods such as U-Net are often limited by weak semantic expression of target regions, which stems from insufficient generalization and a lack of interactivity. Incorporating text prompts offers a promising avenue to more accurately pinpoint lesion locations, yet existing text-guided methods are still hindered by insufficient cross-modal interaction and inadequate cross-modal feature representation. To address these challenges, we propose the Text-guided Multi-stage Cross-perception network (TMC). 
TMC incorporates a Multi-stage Cross-attention Module (MCM) to enhance the model's understanding of fine-grained semantic details and a Multi-stage Alignment Loss (MA Loss) to improve the consistency of cross-modal semantics across different feature levels. Experimental results on three public datasets (QaTa-COV19, MosMedData, and Duke-Breast-Cancer-MRI) demonstrate the superior performance of TMC, achieving Dice scores of 84.65\%, 78.39\%, and 88.09\%, respectively, and consistently outperforming both U-Net-based networks and existing text-guided methods. - oai:arXiv.org:2506.07475v3 - eess.IV - cs.CV - Mon, 22 Dec 2025 00:00:00 -0500 + Continuous-time reinforcement learning for optimal switching over multiple regimes + https://arxiv.org/abs/2512.04697 + arXiv:2512.04697v2 Announce Type: replace-cross +Abstract: This paper studies the continuous-time reinforcement learning (RL) for optimal switching problems across multiple regimes. We consider a type of exploratory formulation under entropy regularization where the agent randomizes both the timing of switches and the selection of regimes through the generator matrix of an associated continuous-time finite-state Markov chain. We establish the well-posedness of the associated system of Hamilton-Jacobi-Bellman (HJB) equations and provide a characterization of the optimal policy. The policy improvement and the convergence of the policy iterations are rigorously established by analyzing the system of equations. We also show the convergence of the value function in the exploratory formulation towards the value function in the classical formulation as the temperature parameter vanishes. Finally, a reinforcement learning algorithm is devised and implemented by invoking the policy evaluation based on the martingale characterization. Our numerical examples with the aid of neural networks illustrate the effectiveness of the proposed RL algorithm. 
+ oai:arXiv.org:2512.04697v2 + math.OC + cs.LG + q-fin.CP + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Gaoyu Chen, Haixia Pan + Yijie Huang, Mengge Li, Xiang Yu, Zhou Zhou - Fine-Tuning Large Audio-Language Models with LoRA for Precise Temporal Localization of Prolonged Exposure Therapy Elements - https://arxiv.org/abs/2506.09707 - arXiv:2506.09707v4 Announce Type: replace-cross -Abstract: Prolonged Exposure (PE) therapy is an effective treatment for post-traumatic stress disorder (PTSD), but evaluating therapist fidelity remains labor-intensive due to the need for manual review of session recordings. We present a method for the automatic temporal localization of key PE fidelity elements, identifying their start and stop times, directly from session audio and transcripts. Our approach fine-tunes a large pre-trained audio-language model, Qwen2-Audio, using Low-Rank Adaptation (LoRA) to process focused 30-second windows of audio-transcript input. Fidelity labels for three core protocol phases, therapist orientation (P1), imaginal exposure (P2), and post-imaginal processing (P3), are generated via LLM-based prompting and verified by trained raters. The model is trained to predict normalized boundary offsets using soft supervision guided by task-specific prompts. On a dataset of 308 real PE sessions, our best configuration (LoRA rank 8, 30s windows) achieves a mean absolute error (MAE) of 5.3s across tasks, within typical rater tolerance for timestamp review, enabling practical fidelity QC. We further analyze the effects of window size and LoRA rank, highlighting the importance of context granularity and model adaptation. This work introduces a privacy-preserving, scalable framework for fidelity tracking in PE therapy, with potential to support clinician training, supervision, and quality assurance. 
- oai:arXiv.org:2506.09707v4 - eess.AS - cs.CL - cs.HC - Mon, 22 Dec 2025 00:00:00 -0500 + A probabilistic foundation model for crystal structure denoising, phase classification, and order parameters + https://arxiv.org/abs/2512.11077 + arXiv:2512.11077v2 Announce Type: replace-cross +Abstract: Atomistic simulations generate large volumes of noisy structural data, but extracting phase labels, order parameters (OPs), and defect information in a way that is universal, robust, and interpretable remains challenging. Existing tools such as PTM and CNA are restricted to a small set of hand-crafted lattices (e.g.\ FCC/BCC/HCP), degrade under strong thermal disorder or defects, and produce hard, template-based labels without per-atom probability or confidence scores. Here we introduce a log-probability foundation model that unifies denoising, phase classification, and OP extraction within a single probabilistic framework. We reuse the MACE-MP foundation interatomic potential on crystal structures mapped to AFLOW prototypes, training it to predict per-atom, per-phase logits $l$ and to aggregate them into a global log-density $\log \hat{P}_\theta(\boldsymbol{r})$ whose gradient defines a conservative score field. Denoising corresponds to gradient ascent on this learned log-density, phase labels follow from $\arg\max_c l_{ac}$, and the $l$ values act as continuous, defect-sensitive and interpretable OPs quantifying the Euclidean distance to ideal phases. We demonstrate universality across hundreds of prototypes, robustness under strong thermal and defect-induced disorder, and accurate treatment of complex systems such as ice polymorphs, ice--water interfaces, and shock-compressed Ti. + oai:arXiv.org:2512.11077v2 + cond-mat.mtrl-sci + cs.AI + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Suhas BN, Andrew M. Sherrill, Jyoti Alaparthi, Dominik Mattioli, Rosa I. Arriaga, Chris W. 
Wiese, Saeed Abdullah + Hyuna Kwon, Babak Sadigh, Sebastien Hamel, Vincenzo Lordi, John Klepeis, Fei Zhou - On the performance of multi-fidelity and reduced-dimensional neural emulators for inference of physiological boundary conditions - https://arxiv.org/abs/2506.11683 - arXiv:2506.11683v2 Announce Type: replace-cross -Abstract: Solving inverse problems in cardiovascular modeling is particularly challenging due to the high computational cost of running high-fidelity simulations. In this work, we focus on Bayesian parameter estimation and explore different methods to reduce the computational cost of sampling from the posterior distribution by leveraging low-fidelity approximations. A common approach is to construct a surrogate model for the high-fidelity simulation itself. Another is to build a surrogate for the discrepancy between high- and low-fidelity models. This discrepancy, which is often easier to approximate, is modeled with either a fully connected neural network or a nonlinear dimensionality reduction technique that enables surrogate construction in a lower-dimensional space. A third possible approach is to treat the discrepancy between the high-fidelity and surrogate models as random noise and estimate its distribution using normalizing flows. This allows us to incorporate the approximation error into the Bayesian inverse problem by modifying the likelihood function. We validate five different methods which are variations of the above on analytical test cases by comparing them to posterior distributions derived solely from high-fidelity models, assessing both accuracy and computational cost. Finally, we demonstrate our approaches on two cardiovascular examples of increasing complexity: a lumped-parameter Windkessel model and a patient-specific three-dimensional anatomy. 
- oai:arXiv.org:2506.11683v2 - stat.ML - cs.CE + Stopping Rules for Stochastic Gradient Descent via Anytime-Valid Confidence Sequences + https://arxiv.org/abs/2512.13123 + arXiv:2512.13123v3 Announce Type: replace-cross +Abstract: We study stopping rules for stochastic gradient descent (SGD) for convex optimization from the perspective of anytime-valid confidence sequences. Classical analyses of SGD provide convergence guarantees in expectation or at a fixed horizon, but offer no statistically valid way to assess, at an arbitrary time, how close the current iterate is to the optimum. We develop an anytime-valid, data-dependent upper confidence sequence for the weighted average suboptimality of projected SGD, constructed via nonnegative supermartingales and requiring no smoothness or strong convexity. This confidence sequence yields a simple stopping rule that is provably $\varepsilon$-optimal with probability at least $1-\alpha$, with explicit bounds on the stopping time under standard stochastic approximation stepsizes. To the best of our knowledge, these are the first rigorous, time-uniform performance guarantees and finite-time $\varepsilon$-optimality certificates for projected SGD with general convex objectives, based solely on observable trajectory quantities. + oai:arXiv.org:2512.13123v3 + math.OC cs.LG math.ST - q-bio.QM + stat.ML stat.TH - Mon, 22 Dec 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1016/j.compbiomed.2025.111389 - Computers in Biology and Medicine 200, 111389 (2026) - Chloe H. Choi, Andrea Zanoni, Daniele E. Schiavazzi, Alison L. 
Marsden - - - Beyond Force Metrics: Pre-Training MLFFs for Stable MD Simulations - https://arxiv.org/abs/2506.14850 - arXiv:2506.14850v3 Announce Type: replace-cross -Abstract: Machine-learning force fields (MLFFs) have emerged as a promising solution for speeding up ab initio molecular dynamics (MD) simulations, where accurate force predictions are critical but often computationally expensive. In this work, we employ GemNet-T, a graph neural network model, as an MLFF and investigate two training strategies: (1) direct training on MD17 (10K samples) without pre-training, and (2) pre-training on the large-scale OC20 dataset followed by fine-tuning on MD17 (10K). While both approaches achieve low force mean absolute errors (MAEs), reaching 5 meV/A per atom, we find that lower force errors do not necessarily guarantee stable MD simulations. Notably, the pre-trained GemNet-T model yields significantly improved simulation stability, sustaining trajectories up to three times longer than the model trained from scratch. By analyzing local properties of the learned force fields, we find that pre-training produces more structured latent representations, smoother force responses to local geometric changes, and more consistent force differences between nearby configurations, all of which contribute to more stable and reliable MD simulations. These findings underscore the value of pre-training on large, diverse datasets to capture complex molecular interactions and highlight that force MAE alone is not always a sufficient metric of MD simulation stability. - oai:arXiv.org:2506.14850v3 - physics.chem-ph - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Shagun Maheshwari, Zhengxian Tang, Janghoon Ock, Adeesh Kolluru, Amir Barati Farimani, John R. Kitchin + Liviu Aolaritei, Michael I. 
Jordan - Quantifying Uncertainty in the Presence of Distribution Shifts - https://arxiv.org/abs/2506.18283 - arXiv:2506.18283v2 Announce Type: replace-cross -Abstract: Neural networks make accurate predictions but often fail to provide reliable uncertainty estimates, especially under covariate distribution shifts between training and testing. To address this problem, we propose a Bayesian framework for uncertainty estimation that explicitly accounts for covariate shifts. While conventional approaches rely on fixed priors, the key idea of our method is an adaptive prior, conditioned on both training and new covariates. This prior naturally increases uncertainty for inputs that lie far from the training distribution in regions where predictive performance is likely to degrade. To efficiently approximate the resulting posterior predictive distribution, we employ amortized variational inference. Finally, we construct synthetic environments by drawing small bootstrap samples from the training data, simulating a range of plausible covariate shift using only the original dataset. We evaluate our method on both synthetic and real-world data. It yields substantially improved uncertainty estimates under distribution shifts. - oai:arXiv.org:2506.18283v2 + Universality of high-dimensional scaling limits of stochastic gradient descent + https://arxiv.org/abs/2512.13634 + arXiv:2512.13634v2 Announce Type: replace-cross +Abstract: We consider statistical tasks in high dimensions whose loss depends on the data only through its projection into a fixed-dimensional subspace spanned by the parameter vectors and certain ground truth vectors. This includes classifying mixture distributions with cross-entropy loss with one and two-layer networks, and learning single and multi-index models with one and two-layer networks. 
When the data is drawn from an isotropic Gaussian mixture distribution, it is known that the evolution of a finite family of summary statistics under stochastic gradient descent converges to an autonomous ordinary differential equation (ODE), as the dimension and sample size go to $\infty$ and the step size goes to $0$ commensurately. Our main result is that these ODE limits are universal in that this limit is the same whenever the data is drawn from mixtures of arbitrary product distributions whose first two moments match the corresponding Gaussian distribution, provided the initialization and ground truth vectors are coordinate-delocalized. We complement this by proving two corresponding non-universality results. We provide a simple example where the ODE limits are non-universal if the initialization is coordinate aligned. We also show that the stochastic differential equation limits arising as fluctuations of the summary statistics around their ODE's fixed points are not universal. + oai:arXiv.org:2512.13634v2 stat.ML cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + math.PR + math.ST + stat.TH + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by/4.0/ - Yuli Slavutsky, David M. Blei + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Reza Gheissari, Aukosh Jagannath - The gradual transformation of inland areas -- human plowing, horse plowing and equity incentives - https://arxiv.org/abs/2507.00067 - arXiv:2507.00067v3 Announce Type: replace-cross -Abstract: Many modern areas have not learned their lessons and often hope for the wisdom of later generations, resulting in them only possessing modern technology and difficult to iterate ancient civilizations. At present, there is no way to tell how we should learn from history and promote the gradual upgrading of civilization. 
Therefore, we must recount the history of civilization's progress and the means of governance, learn from experience to improve the comprehensive strength and survival ability of a civilization, and reach an optimal solution balancing the tempering brought by conflicts against the reduction of internal conflicts. First, we follow the footsteps of history and explore the reasons for the long-term stability of each country in conflict, including the economic benefits provided to the people and the means of suppressing them; we then use mathematical methods to demonstrate how the optimal solution can be achieved at the current stage. After this analysis, we conclude that a civilization transformed from human plowing to horse plowing can easily suppress the resistance of the people while also providing them with the ability to resist; that the selection of rulers should consider multiple institutional aspects, such as exams, elections, and drawing lots; and that economic development follows a lognormal distribution that can be adjusted by population mean and standard deviation. Using a lognormal distribution with the maximum value to divide equity can adjust the wealth gap. - oai:arXiv.org:2507.00067v3 - physics.soc-ph - cs.CE - econ.GN - q-fin.EC - Mon, 22 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Hongfa Zi, Zhen Liu - Chirp Delay-Doppler Domain Modulation Based Joint Communication and Radar for Autonomous Vehicles + https://arxiv.org/abs/2512.14432 + arXiv:2512.14432v2 Announce Type: replace-cross +Abstract: This paper introduces a sensing-centric joint communication and millimeter-wave radar paradigm to facilitate collaboration among intelligent vehicles. + We first propose a chirp waveform-based delay-Doppler quadrature amplitude modulation (DD-QAM) that modulates data across delay, Doppler, and amplitude dimensions. + Building upon this modulation scheme, we derive its achievable rate to quantify the communication performance. 
+ We then introduce an extended Kalman filter-based scheme for four-dimensional (4D) parameter estimation in dynamic environments, enabling the active vehicles to accurately estimate orientation and tangential-velocity beyond traditional 4D radar systems. + Furthermore, in terms of communication, we propose a dual-compensation-based demodulation and tracking scheme that allows the passive vehicles to effectively demodulate data without compromising their sensing functions. + Simulation results underscore the feasibility and superior performance of our proposed methods, marking a significant advancement in the field of autonomous vehicles. + Simulation codes are provided to reproduce the results in this paper: \href{https://github.com/LiZhuoRan0/2026-IEEE-TWC-ChirpDelayDopplerModulationISAC}{https://github.com/LiZhuoRan0}. + oai:arXiv.org:2512.14432v2 + eess.SP + cs.IT + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by/4.0/ - Hongfa Zi, Zhen Liu + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Zhuoran Li, Zhen Gao, Sheng Chen, Dusit Niyato, Zhaocheng Wang, George K. Karagiannidis - SpecCLIP: Aligning and Translating Spectroscopic Measurements for Stars - https://arxiv.org/abs/2507.01939 - arXiv:2507.01939v4 Announce Type: replace-cross -Abstract: In recent years, large language models (LLMs) have transformed natural language understanding through vast datasets and large-scale parameterization. Inspired by this success, we present SpecCLIP, a foundation model framework that extends LLM-inspired methodologies to stellar spectral analysis. Stellar spectra, akin to structured language, encode rich physical and chemical information about stars. By training foundation models on large-scale spectral datasets, our goal is to learn robust and informative embeddings that support diverse downstream applications. 
As a proof of concept, SpecCLIP involves pre-training on two spectral types--LAMOST low-resolution and Gaia XP--followed by contrastive alignment using the CLIP (Contrastive Language-Image Pre-training) framework, adapted to associate spectra from different instruments. This alignment is complemented by auxiliary decoders that preserve spectrum-specific information and enable translation (prediction) between spectral types, with the former achieved by maximizing mutual information between embeddings and input spectra. The result is a cross-spectrum framework enabling intrinsic calibration and flexible applications across instruments. We demonstrate that fine-tuning these models on moderate-sized labeled datasets improves adaptability to tasks such as stellar-parameter estimation and chemical-abundance determination. SpecCLIP also enhances the accuracy and precision of parameter estimates benchmarked against external survey data. Additionally, its similarity search and cross-spectrum prediction capabilities offer potential for anomaly detection. Our results suggest that contrastively trained foundation models enriched with spectrum-aware decoders can advance precision stellar spectroscopy. Our code SpecCLIP is publicly available at https://github.com/Xiaosheng-Zhao/SpecCLIP - oai:arXiv.org:2507.01939v4 - astro-ph.IM - astro-ph.SR - cs.AI - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 + Information-Theoretic Constraints on Variational Quantum Optimization: Efficiency Transitions and the Dynamical Lie Algebra + https://arxiv.org/abs/2512.14701 + arXiv:2512.14701v2 Announce Type: replace-cross +Abstract: Variational quantum algorithms are leading candidates for near-term advantage, yet their scalability is fundamentally limited by the ``Barren Plateau'' phenomenon. While traditionally attributed to geometric concentration of measure, I propose an information-theoretic origin: a bandwidth bottleneck in the optimization feedback loop. 
By modeling the optimizer as a coherent Maxwell's Demon, I derive a thermodynamic constitutive relation, $\Delta E \leq \eta I(S:A)$, where work extraction is strictly bounded by the mutual information established via entanglement. I demonstrate that systems with polynomial Dynamical Lie Algebra (DLA) dimension exhibit ``Information Superconductivity'' (sustained $\eta > 0$), whereas systems with exponential DLA dimension undergo an efficiency collapse when the rate of information scrambling exceeds the ancilla's channel capacity. These results reframe quantum trainability as a thermodynamic phase transition governed by the stability of information flow. + oai:arXiv.org:2512.14701v2 + quant-ph + cs.ET + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Jun Liang Tan - From Classical Machine Learning to Emerging Foundation Models: Review on Multimodal Data Integration for Cancer Research - https://arxiv.org/abs/2507.09028 - arXiv:2507.09028v2 Announce Type: replace-cross -Abstract: Cancer research is increasingly driven by the integration of diverse data modalities, spanning from genomics and proteomics to imaging and clinical factors. However, extracting actionable insights from these vast and heterogeneous datasets remains a key challenge. The rise of foundation models (FMs) -- large deep-learning models pretrained on extensive amounts of data serving as a backbone for a wide range of downstream tasks -- offers new avenues for discovering biomarkers, improving diagnosis, and personalizing treatment. This paper presents a comprehensive review of widely adopted integration strategies of multimodal data to help advance the computational approaches for data-driven discoveries in oncology. 
We examine emerging trends in machine learning (ML) and deep learning (DL), including methodological frameworks, validation protocols, and open-source resources targeting cancer subtype classification, biomarker discovery, treatment guidance, and outcome prediction. This study also comprehensively covers the shift from traditional ML to FMs for multimodal integration. We present a holistic view of recent FMs advancements and challenges faced during the integration of multi-omics with advanced imaging data. We identify the state-of-the-art FMs, publicly available multi-modal repositories, and advanced tools and methods for data integration. We argue that current state-of-the-art integrative methods provide the essential groundwork for developing the next generation of large-scale, pre-trained models poised to further revolutionize oncology. To the best of our knowledge, this is the first review to systematically map the transition from conventional ML to advanced FM for multimodal data integration in oncology, while also framing these developments as foundational for the forthcoming era of large-scale AI models in cancer research. - oai:arXiv.org:2507.09028v2 - q-bio.QM + Large Model Enabled Embodied Intelligence for 6G Integrated Perception, Communication, and Computation Network + https://arxiv.org/abs/2512.15109 + arXiv:2512.15109v2 Announce Type: replace-cross +Abstract: The advent of sixth-generation (6G) places intelligence at the core of wireless architecture, fusing perception, communication, and computation into a single closed-loop. This paper argues that large artificial intelligence models (LAMs) can endow base stations with perception, reasoning, and acting capabilities, thus transforming them into intelligent base station agents (IBSAs). 
We first review the historical evolution of BSs from single-functional analog infrastructure to distributed, software-defined, and finally LAM-empowered IBSA, highlighting the accompanying changes in architecture, hardware platforms, and deployment. We then present an IBSA architecture that couples a perception-cognition-execution pipeline with cloud-edge-end collaboration and parameter-efficient adaptation. Subsequently, we study two representative scenarios: (i) cooperative vehicle-road perception for autonomous driving, and (ii) ubiquitous base station support for low-altitude uncrewed aerial vehicle safety monitoring and response against unauthorized drones. On this basis, we analyze key enabling technologies spanning LAM design and training, efficient edge-cloud inference, multi-modal perception and actuation, as well as trustworthy security and governance. We further propose a holistic evaluation framework and benchmark considerations that jointly cover communication performance, perception accuracy, decision-making reliability, safety, and energy efficiency. Finally, we distill open challenges on benchmarks, continual adaptation, trustworthy decision-making, and standardization. Together, this work positions LAM-enabled IBSAs as a practical path toward integrated perception, communication, and computation native, safety-critical 6G systems. 
+ oai:arXiv.org:2512.15109v2 + eess.SP cs.AI - Mon, 22 Dec 2025 00:00:00 -0500 + cs.IT + math.IT + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Amgad Muneer, Muhammad Waqas, Maliazurina B Saad, Eman Showkatian, Rukhmini Bandyopadhyay, Hui Xu, Wentao Li, Joe Y Chang, Zhongxing Liao, Cara Haymaker, Luisa Solis Soto, Carol C Wu, Natalie I Vokes, Xiuning Le, Lauren A Byers, Don L Gibbons, John V Heymach, Jianjun Zhang, Jia Wu - - - Automated Machine Learning Pipeline: Large Language Models-Assisted Automated Dataset Generation for Training Machine-Learned Interatomic Potentials - https://arxiv.org/abs/2509.21647 - arXiv:2509.21647v2 Announce Type: replace-cross -Abstract: Machine learning interatomic potentials (MLIPs) have become powerful tools to extend molecular simulations beyond the limits of quantum methods, offering near-quantum accuracy at much lower computational cost. Yet, developing reliable MLIPs remains difficult because it requires generating high-quality datasets, preprocessing atomic structures, and carefully training and validating models. In this work, we introduce an Automated Machine Learning Pipeline (AMLP) that unifies the entire workflow from dataset creation to model validation. AMLP employs large-language-model agents to assist with electronic-structure code selection, input preparation, and output conversion, while its analysis suite (AMLP-Analysis), based on ASE supports a range of molecular simulations. The pipeline is built on the MACE architecture and validated on acridine polymorphs, where, with a straightforward fine-tuning of a foundation model, mean absolute errors of ~1.7 meV/atom in energies and ~7.0 meV/{\AA} in forces are achieved. The fitted MLIP reproduces DFT geometries with sub-{\AA} accuracy and demonstrates stability during molecular dynamics simulations in the microcanonical and canonical ensembles. 
- oai:arXiv.org:2509.21647v2 - cond-mat.mtrl-sci - cs.LG - Mon, 22 Dec 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Adam Lahouari, Jutta Rogal, Mark E. Tuckerman + Zhuoran Li, Zhen Gao, Xinhua Liu, Zheng Wang, Xiaotian Zhou, Lei Liu, Yongpeng Wu, Wei Feng, Yongming Huang - A Review of Software for Designing and Operating Quantum Networks - https://arxiv.org/abs/2510.00203 - arXiv:2510.00203v2 Announce Type: replace-cross -Abstract: Quantum network protocol development is crucial to realizing a production-grade network that can support distributed sensing, secure communication, and utility-scale quantum computation. However, the transition from laboratory demonstration to deployable networks requires software implementations of architectures and protocols tailored to the unique constraints of quantum systems. This paper reviews the current state of software implementations for quantum networks, organized around the three-plane abstraction of infrastructure, logical, and control/service planes. We cover software for both designing quantum network protocols (e.g., SeQUeNCe, QuISP, and NetSquid) and operating them, with a focus on essential control/service plane functions such as entanglement, topology, and resource management, in a proposed taxonomy. Our review highlights a persistent gap between theoretical protocol proposals and their realization in simulators or testbeds, particularly in dynamic topology and network management. We conclude by outlining open challenges and proposing a roadmap for developing scalable software architectures to enable hybrid, large-scale quantum networks. 
- oai:arXiv.org:2510.00203v2 + Practical Challenges in Executing Shor's Algorithm on Existing Quantum Platforms + https://arxiv.org/abs/2512.15330 + arXiv:2512.15330v2 Announce Type: replace-cross +Abstract: Quantum computers pose a fundamental threat to widely deployed public-key cryptosystems, such as RSA and ECC, by enabling efficient integer factorization using Shor's algorithm. Theoretical resource estimates suggest that 2048-bit RSA keys could be broken using Shor's algorithm with fewer than a million noisy qubits. Although such machines do not yet exist, the availability of smaller, cloud-accessible quantum processors and open-source implementations of Shor's algorithm raises the question of what key sizes can realistically be factored with today's platforms. In this work, we experimentally investigate Shor's algorithm on several cloud-based quantum computers using publicly available implementations. Our results reveal a substantial gap between the capabilities of current quantum hardware and the requirements for factoring cryptographically relevant integers. In particular, we observe that circuit constructions still need to be highly specific for each modulus, and that machine fidelities are unstable, with high and fluctuating error rates. + oai:arXiv.org:2512.15330v2 quant-ph - cs.NI - cs.SY - eess.SY - Mon, 22 Dec 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Robert J. Hayek, Joaquin Chung, Rajkumar Kettimuthu - - - Quantum Generative Modeling of Single-Cell transcriptomes: Capturing Gene-Gene and Cell-Cell Interactions - https://arxiv.org/abs/2510.12776 - arXiv:2510.12776v3 Announce Type: replace-cross -Abstract: Single-cell RNA sequencing (scRNA-seq) data simulation is limited by classical methods that rely on linear correlations, failing to capture the intrinsic, nonlinear dependencies. No existing simulator jointly models gene-gene and cell-cell interactions. 
We introduce qSimCells, a novel quantum computing-based simulator that employs entanglement to model intra- and inter-cellular interactions, generating realistic single-cell transcriptomes with cellular heterogeneity. The core innovation is a quantum kernel that uses a parameterized quantum circuit with CNOT gates to encode complex, nonlinear gene regulatory network (GRN) as well as cell-cell communication topologies with explicit causal directionality. The resulting synthetic data exhibits non-classical dependencies: standard correlation-based analyses (Pearson and Spearman) fail to recover the programmed causal pathways and instead report spurious associations driven by high baseline gene-expression probabilities. Furthermore, applying cell-cell communication detection to the simulated data validates the true mechanistic links, revealing a robust, up to 75-fold relative increase in inferred communication probability only when quantum entanglement is active. These results demonstrate that the quantum kernel is essential for producing high-fidelity ground-truth datasets and highlight the need for advanced inference techniques to capture the complex, non-classical dependencies inherent in gene regulation. - oai:arXiv.org:2510.12776v3 - q-bio.QM + cs.CR cs.ET - physics.bio-ph - q-bio.GN - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Selim Romero, Vignesh S. Kumar, Robert S. Chapkin, James J. Cai - - - Generalized infinite dimensional Alpha-Procrustes based geometries - https://arxiv.org/abs/2511.09801 - arXiv:2511.09801v2 Announce Type: replace-cross -Abstract: This work extends the recently introduced Alpha-Procrustes family of Riemannian metrics for symmetric positive definite (SPD) matrices by incorporating generalized versions of the Bures-Wasserstein (GBW), Log-Euclidean, and Wasserstein distances. 
While the Alpha-Procrustes framework has unified many classical metrics in both finite- and infinite-dimensional settings, it previously lacked the structural components necessary to realize these generalized forms. We introduce a formalism based on unitized Hilbert-Schmidt operators and an extended Mahalanobis norm that allows the construction of robust, infinite-dimensional generalizations of GBW and Log-Hilbert-Schmidt distances. Our approach also incorporates a learnable regularization parameter that enhances geometric stability in high-dimensional comparisons. Preliminary experiments reproducing benchmarks from the literature demonstrate the improved performance of our generalized metrics, particularly in scenarios involving comparisons between datasets of varying dimension and scale. This work lays a theoretical and computational foundation for advancing robust geometric methods in machine learning, statistical inference, and functional data analysis. - oai:arXiv.org:2511.09801v2 - stat.ML - cs.LG - math.FA - math.OC - Mon, 22 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Salvish Goomanee, Andi Han, Pratik Jawanpuria, Bamdev Mishra - Safeguarded Stochastic Polyak Step Sizes for Non-smooth Optimization: Robust Performance Without Small (Sub)Gradients - https://arxiv.org/abs/2512.02342 - arXiv:2512.02342v2 Announce Type: replace-cross -Abstract: The stochastic Polyak step size (SPS) has proven to be a promising choice for stochastic gradient descent (SGD), delivering competitive performance relative to state-of-the-art methods on smooth convex and non-convex optimization problems, including deep neural network training. However, extensions of this approach to non-smooth settings remain in their early stages, often relying on interpolation assumptions or requiring knowledge of the optimal solution. 
In this work, we propose a novel SPS variant, Safeguarded SPS (SPS$_{safe}$), for the stochastic subgradient method, and provide rigorous convergence guarantees for non-smooth convex optimization with no need for strong assumptions. We further incorporate momentum into the update rule, yielding equally tight theoretical results. On non-smooth convex benchmarks, our experiments are consistent with the theoretical predictions on how the safeguard affects the convergence neighborhood. On deep neural networks the proposed step size achieves competitive performance to existing adaptive baselines and exhibits stable behavior across a wide range of problem settings. Moreover, in these experiments, the gradient norms under our step size do not collapse to (near) zero, indicating robustness to vanishing gradients. - oai:arXiv.org:2512.02342v2 - math.OC + Prospects for quantum advantage in machine learning from the representability of functions + https://arxiv.org/abs/2512.15661 + arXiv:2512.15661v2 Announce Type: replace-cross +Abstract: Demonstrating quantum advantage in machine learning tasks requires navigating a complex landscape of proposed models and algorithms. To bring clarity to this search, we introduce a framework that connects the structure of parametrized quantum circuits to the mathematical nature of the functions they can actually learn. Within this framework, we show how fundamental properties, like circuit depth and non-Clifford gate count, directly determine whether a model's output leads to efficient classical simulation or surrogation. We argue that this analysis uncovers common pathways to dequantization that underlie many existing simulation methods. More importantly, it reveals critical distinctions between models that are fully simulatable, those whose function space is classically tractable, and those that remain robustly quantum. 
This perspective provides a conceptual map of this landscape, clarifying how different models relate to classical simulability and pointing to where opportunities for quantum advantage may lie. + oai:arXiv.org:2512.15661v2 + quant-ph cs.LG stat.ML - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ - Dimitris Oikonomou, Nicolas Loizou + Sergi Masot-Llima, Elies Gil-Fuster, Carlos Bravo-Prieto, Jens Eisert, Tommaso Guaita - TreeVQA: A Tree-Structured Execution Framework for Shot Reduction in Variational Quantum Algorithms - https://arxiv.org/abs/2512.12068 - arXiv:2512.12068v2 Announce Type: replace-cross -Abstract: Variational Quantum Algorithms (VQAs) are promising for near- and intermediate-term quantum computing, but their execution cost is substantial. Each task requires many iterations and numerous circuits per iteration, and real-world applications often involve multiple tasks, scaling with the precision needed to explore the application's energy landscape. This demands an enormous number of execution shots, making practical use prohibitively expensive. We observe that VQA costs can be significantly reduced by exploiting execution similarities across an application's tasks. Based on this insight, we propose TreeVQA, a tree-based execution framework that begins by executing tasks jointly and progressively branches only as their quantum executions diverge. Implemented as a VQA wrapper, TreeVQA integrates with typical VQA applications. Evaluations on scientific and combinatorial benchmarks show shot count reductions of $25.9\times$ on average and over $100\times$ for large-scale problems at the same target accuracy. The benefits grow further with increasing problem size and precision requirements. 
- oai:arXiv.org:2512.12068v2 - quant-ph - cs.AR - cs.DC - cs.ET - Mon, 22 Dec 2025 00:00:00 -0500 + Deep Reinforcement Learning Optimization for Uncertain Nonlinear Systems via Event-Triggered Robust Adaptive Dynamic Programming + https://arxiv.org/abs/2512.15735 + arXiv:2512.15735v2 Announce Type: replace-cross +Abstract: This work proposes a unified control architecture that couples a Reinforcement Learning (RL)-driven controller with a disturbance-rejection Extended State Observer (ESO), complemented by an Event-Triggered Mechanism (ETM) to limit unnecessary computations. The ESO is utilized to estimate the system states and the lumped disturbance in real time, forming the foundation for effective disturbance compensation. To obtain near-optimal behavior without an accurate system description, a value-iteration-based Adaptive Dynamic Programming (ADP) method is adopted for policy approximation. The inclusion of the ETM ensures that parameter updates of the learning module are executed only when the state deviation surpasses a predefined bound, thereby preventing excessive learning activity and substantially reducing computational load. A Lyapunov-oriented analysis is used to characterize the stability properties of the resulting closed-loop system. Numerical experiments further confirm that the developed approach maintains strong control performance and disturbance tolerance, while achieving a significant reduction in sampling and processing effort compared with standard time-triggered ADP schemes. 
+ oai:arXiv.org:2512.15735v2 + math.OC + cs.AI + cs.SY + eess.SY + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yuewen Hou, Dhanvi Bharadwaj, Gokul Subramanian Ravi + Ningwei Bai, Chi Pui Chan, Qichen Yin, Tengyang Gong, Yunda Yan, Zezhi Tang - V-Rex: Real-Time Streaming Video LLM Acceleration via Dynamic KV Cache Retrieval - https://arxiv.org/abs/2512.12284 - arXiv:2512.12284v2 Announce Type: replace-cross -Abstract: Streaming video large language models (LLMs) are increasingly used for real-time multimodal tasks such as video captioning, question answering, conversational agents, and augmented reality. However, these models face fundamental memory and computational challenges because their key-value (KV) caches grow substantially with continuous streaming video input. This process requires an iterative prefill stage, which is a unique feature of streaming video LLMs. Due to its iterative prefill stage, it suffers from significant limitations, including extensive computation, substantial data transfer, and degradation in accuracy. Crucially, this issue is exacerbated for edge deployment, which is the primary target for these models. - In this work, we propose V-Rex, the first software-hardware co-designed accelerator that comprehensively addresses both algorithmic and hardware bottlenecks in streaming video LLM inference. At its core, V-Rex introduces ReSV, a training-free dynamic KV cache retrieval algorithm. ReSV exploits temporal and spatial similarity-based token clustering to reduce excessive KV cache memory across video frames. To fully realize these algorithmic benefits, V-Rex offers a compact, low-latency hardware accelerator with a dynamic KV cache retrieval engine (DRE), featuring bit-level and early-exit based computing units. V-Rex achieves unprecedented real-time of 3.9-8.3 FPS and energy-efficient streaming video LLM inference on edge deployment with negligible accuracy loss. 
While DRE only accounts for 2.2% power and 2.0% area, the system delivers 1.9-19.7x speedup and 3.1-18.5x energy efficiency improvements over AGX Orin GPU. This work is the first to comprehensively tackle KV cache retrieval across algorithms and hardware, enabling real-time streaming video LLM inference on resource-constrained edge devices. - oai:arXiv.org:2512.12284v2 - eess.IV + The Universe Learning Itself: On the Evolution of Dynamics from the Big Bang to Machine Intelligence + https://arxiv.org/abs/2512.16515 + arXiv:2512.16515v2 Announce Type: replace-cross +Abstract: We develop a unified, dynamical-systems narrative of the universe that traces a continuous chain of structure formation from the Big Bang to contemporary human societies and their artificial learning systems. Rather than treating cosmology, astrophysics, geophysics, biology, cognition, and machine intelligence as disjoint domains, we view each as successive regimes of dynamics on ever-richer state spaces, stitched together by phase transitions, symmetry-breaking events, and emergent attractors. Starting from inflationary field dynamics and the growth of primordial perturbations, we describe how gravitational instability sculpts the cosmic web, how dissipative collapse in baryonic matter yields stars and planets, and how planetary-scale geochemical cycles define long-lived nonequilibrium attractors. Within these attractors, we frame the origin of life as the emergence of self-maintaining reaction networks, evolutionary biology as flow on high-dimensional genotype-phenotype-environment manifolds, and brains as adaptive dynamical systems operating near critical surfaces. Human culture and technology-including modern machine learning and artificial intelligence-are then interpreted as symbolic and institutional dynamics that implement and refine engineered learning flows which recursively reshape their own phase space. 
Throughout, we emphasize recurring mathematical motifs-instability, bifurcation, multiscale coupling, and constrained flows on measure-zero subsets of the accessible state space. Our aim is not to present any new cosmological or biological model, but a cross-scale, theoretical perspective: a way of reading the universe's history as the evolution of dynamics itself, culminating (so far) in biological and artificial systems capable of modeling, predicting, and deliberately perturbing their own future trajectories. + oai:arXiv.org:2512.16515v2 + nlin.AO cs.AI - cs.AR - cs.CV - cs.MM - Mon, 22 Dec 2025 00:00:00 -0500 + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Donghyuk Kim, Sejeong Yang, Wonjin Shin, Joo-Young Kim + http://creativecommons.org/licenses/by/4.0/ + Pradeep Singh, Mudasani Rushikesh, Bezawada Sri Sai Anurag, Balasubramanian Raman - Multiscale modeling of blood circulation with cerebral autoregulation and network pathway analysis for hemodynamic redistribution in the vascular network with anatomical variations and stenosis conditions - https://arxiv.org/abs/2512.15482 - arXiv:2512.15482v2 Announce Type: replace-cross -Abstract: Cerebral hemodynamics is fundamentally regulated by the Circle of Willis (CoW), which redistributes flow through communicating arteries to stabilize perfusion under anatomical variations and vascular stenosis. In this study, we develop a multiscale circulation model by coupling a systemic hemodynamic framework with a cerebral arterial network reconstructed from medical imaging. The model incorporates a cerebral autoregulation mechanism (CAM) and enables quantitative simulation of flow redistribution within the CoW under normal, anatomically varied, and stenotic conditions. Baseline simulations reproduce physiological flow distributions in which communicating arteries remain nearly inactive, showing negligible cross-flow and agreement with clinical measurements. 
In contrast, anatomical variations reveal distinct collateral activation patterns: the anterior communicating artery (ACoA) emerges as the earliest and most sensitive functional collateral, whereas the posterior communicating arteries (PCoAs) exhibit structure-dependent engagement. Progressive stenosis simulations further demonstrate a transition from a complete CoW to a fetal-type posterior cerebral artery (PCA) configuration, characterized by early ACoA flow reversal followed by ipsilateral PCoA activation, consistent with experimental and transcranial Doppler observations. Finally, a path-based quantitative analysis is introduced to illustrate how the cerebral vascular network dynamically reconfigures collateral pathways in response to structural changes. Overall, the proposed framework provides a physiologically interpretable, image-informed tool for investigating cerebral flow regulation through functional collaterals within the CoW, with potential applications in the diagnosis and treatment planning of cerebrovascular diseases. - oai:arXiv.org:2512.15482v2 - physics.med-ph - cs.NA - math.NA - Mon, 22 Dec 2025 00:00:00 -0500 + Four special Poncelet triangle families about the incircle + https://arxiv.org/abs/2512.17440 + arXiv:2512.17440v2 Announce Type: replace-cross +Abstract: We describe four special families of ellipse-inscribed Poncelet triangles about the incircle which maintain certain triangle centers stationary and which also display interesting conservations. + oai:arXiv.org:2512.17440v2 + math.MG + cs.GR + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by/4.0/ - Jiawei Liu, Atsushi Kanoke, Hidenori Endo, Kuniyasu Niizuma, Hiroshi Suito + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Ronaldo A. Garcia, Mark Helman, Dan Reznik 
- Alternating Direction Method of Multipliers for Nonlinear Matrix Decompositions + https://arxiv.org/abs/2512.17473 + arXiv:2512.17473v2 Announce Type: replace-cross +Abstract: We present an algorithm based on the alternating direction method of multipliers (ADMM) for solving nonlinear matrix decompositions (NMD). Given an input matrix $X \in \mathbb{R}^{m \times n}$ and a factorization rank $r \ll \min(m, n)$, NMD seeks matrices $W \in \mathbb{R}^{m \times r}$ and $H \in \mathbb{R}^{r \times n}$ such that $X \approx f(WH)$, where $f$ is an element-wise nonlinear function. 
We evaluate our method on several representative nonlinear models: the rectified linear unit activation $f(x) = \max(0, x)$, suitable for nonnegative sparse data approximation, the component-wise square $f(x) = x^2$, applicable to probabilistic circuit representation, and the MinMax transform $f(x) = \min(b, \max(a, x))$, relevant for recommender systems. The proposed framework flexibly supports diverse loss functions, including least squares, $\ell_1$ norm, and the Kullback-Leibler divergence, and can be readily extended to other nonlinearities and metrics. We illustrate the applicability, efficiency, and adaptability of the approach on real-world datasets, highlighting its potential for a broad range of applications. + oai:arXiv.org:2512.17473v2 + eess.SP + cs.LG + math.OC + stat.ML + Tue, 23 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by/4.0/ - Chunlei Li, Constanza Riera, Palash Sarkar, Pantelimon Stanica + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Atharva Awari, Nicolas Gillis, Arnaud Vandaele
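The NMD model above, $X \approx f(WH)$ with $f(x) = \max(0, x)$, can be sketched with a simple alternating scheme. This is an illustrative sketch only, not the ADMM algorithm from the abstract: the function name `relu_nmd`, the latent-variable projection step, and the truncated-SVD rank-$r$ step are assumptions chosen for a minimal self-contained example.

```python
import numpy as np

def relu_nmd(X, r, iters=300, seed=0):
    """Illustrative alternating scheme for ReLU-NMD, X ~= max(0, W @ H).
    NOTE: a simplified sketch, not the ADMM method described in the abstract.
    Alternates between:
      (1) a latent matrix Z equal to X on positive entries and clamped to
          min(0, WH) on zero entries (projection onto the feasible set), and
      (2) a rank-r refit of Z via truncated SVD.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    theta = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    pos = X > 0
    for _ in range(iters):
        Z = np.where(pos, X, np.minimum(0.0, theta))      # latent-variable step
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)  # rank-r refit step
        W = U[:, :r] * s[:r]
        H = Vt[:r]
        theta = W @ H
    return W, H

# Usage on data that is exactly representable at rank 3 under f(x) = max(0, x).
rng = np.random.default_rng(1)
X = np.maximum(0.0, rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20)))
W, H = relu_nmd(X, r=3)
rel_err = np.linalg.norm(np.maximum(0.0, W @ H) - X) / np.linalg.norm(X)
```

Both steps decrease the latent residual $\|Z - WH\|_F$ (each is an exact minimization over its block), and the visible error $\|\max(0, WH) - X\|_F$ is bounded by that residual, which is why this simple scheme behaves well on exactly representable data.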