diff --git "a/raw_rss_feeds/https___arxiv_org_rss_cs.xml" "b/raw_rss_feeds/https___arxiv_org_rss_cs.xml"
--- "a/raw_rss_feeds/https___arxiv_org_rss_cs.xml"
+++ "b/raw_rss_feeds/https___arxiv_org_rss_cs.xml"
@@ -7,9806 +7,10162 @@
http://www.rssboard.org/rss-specificationen-us
- Thu, 11 Dec 2025 05:00:14 +0000
+ Fri, 12 Dec 2025 05:00:13 +0000rss-help@arxiv.org
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500SaturdaySunday
- Agentic AI as Undercover Teammates: Argumentative Knowledge Construction in Hybrid Human-AI Collaborative Learning
- https://arxiv.org/abs/2512.08933
- arXiv:2512.08933v1 Announce Type: new
-Abstract: Generative artificial intelligence (AI) agents are increasingly embedded in collaborative learning environments, yet their impact on the processes of argumentative knowledge construction remains insufficiently understood. Emerging conceptualisations of agentic AI and artificial agency suggest that such systems possess bounded autonomy, interactivity, and adaptability, allowing them to engage as epistemic participants rather than mere instructional tools. Building on this theoretical foundation, the present study investigates how agentic AI, designed as undercover teammates with either supportive or contrarian personas, shapes the epistemic and social dynamics of collaborative reasoning. Drawing on Weinberger and Fischer's (2006) four-dimensional framework, participation, epistemic reasoning, argument structure, and social modes of co-construction, we analysed synchronous discourse data from 212 human and 64 AI participants (92 triads) engaged in an analytical problem-solving task. Mixed-effects and epistemic network analyses revealed that AI teammates maintained balanced participation but substantially reorganised epistemic and social processes: supportive personas promoted conceptual integration and consensus-oriented reasoning, whereas contrarian personas provoked critical elaboration and conflict-driven negotiation. Epistemic adequacy, rather than participation volume, predicted individual learning gains, indicating that agentic AI's educational value lies in enhancing the quality and coordination of reasoning rather than amplifying discourse quantity. These findings extend CSCL theory by conceptualising agentic AI as epistemic and social participants, bounded yet adaptive collaborators that redistribute cognitive and argumentative labour in hybrid human-AI learning environments.
- oai:arXiv.org:2512.08933v1
- cs.HC
+ ExaCraft: Dynamic Learning Context Adaptation for Personalized Educational Examples
+ https://arxiv.org/abs/2512.09931
+ arXiv:2512.09931v1 Announce Type: new
+Abstract: Learning is most effective when it's connected to relevant, relatable examples that resonate with learners on a personal level. However, existing educational AI tools don't focus on generating examples or adapting to learners' changing understanding, struggles, or growing skills. We've developed ExaCraft, an AI system that generates personalized examples by adapting to the learner's dynamic context. Through the Google Gemini AI and Python Flask API, accessible via a Chrome extension, ExaCraft combines user-defined profiles (including location, education, profession, and complexity preferences) with real-time analysis of learner behavior. This ensures examples are both culturally relevant and tailored to individual learning needs. The system's core innovation is its ability to adapt to five key aspects of the learning context: indicators of struggle, mastery patterns, topic progression history, session boundaries, and learning progression signals. Our demonstration will show how ExaCraft's examples evolve from basic concepts to advanced technical implementations, responding to topic repetition, regeneration requests, and topic progression patterns in different use cases.
+ oai:arXiv.org:2512.09931v1cs.AI
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.HC
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Lixiang Yan, Yueqiao Jin, Linxuan Zhao, Roberto Martinez-Maldonado, Xinyu Li, Xiu Guan, Wenxin Guo, Xibin Han, Dragan Ga\v{s}evi\'c
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Akaash Chatterjee (Indian Institute of Technology Jodhpur), Suman Kundu (Indian Institute of Technology Jodhpur)
- Motion2Meaning: A Clinician-Centered Framework for Contestable LLM in Parkinson's Disease Gait Interpretation
- https://arxiv.org/abs/2512.08934
- arXiv:2512.08934v1 Announce Type: new
-Abstract: AI-assisted gait analysis holds promise for improving Parkinson's Disease (PD) care, but current clinical dashboards lack transparency and offer no meaningful way for clinicians to interrogate or contest AI decisions. To address this issue, we present Motion2Meaning, a clinician-centered framework that advances Contestable AI through a tightly integrated interface designed for interpretability, oversight, and procedural recourse. Our approach leverages vertical Ground Reaction Force (vGRF) time-series data from wearable sensors as an objective biomarker of PD motor states. The system comprises three key components: a Gait Data Visualization Interface (GDVI), a one-dimensional Convolutional Neural Network (1D-CNN) that predicts Hoehn & Yahr severity stages, and a Contestable Interpretation Interface (CII) that combines our novel Cross-Modal Explanation Discrepancy (XMED) safeguard with a contestable Large Language Model (LLM). Our 1D-CNN achieves 89.0% F1-score on the public PhysioNet gait dataset. XMED successfully identifies model unreliability by detecting a five-fold increase in explanation discrepancies in incorrect predictions (7.45%) compared to correct ones (1.56%), while our LLM-powered interface enables clinicians to validate correct predictions and successfully contest a portion of the model's errors. A human-centered evaluation of this contestable interface reveals a crucial trade-off between the LLM's factual grounding and its readability and responsiveness to clinical feedback. This work demonstrates the feasibility of combining wearable sensor analysis with Explainable AI (XAI) and contestable LLMs to create a transparent, auditable system for PD gait interpretation that maintains clinical oversight while leveraging advanced AI capabilities. Our implementation is publicly available at: https://github.com/hungdothanh/motion2meaning.
- oai:arXiv.org:2512.08934v1
- cs.HC
+ Suzume-chan: Your Personal Navigator as an Embodied Information Hub
+ https://arxiv.org/abs/2512.09932
+ arXiv:2512.09932v1 Announce Type: new
+Abstract: Access to expert knowledge often requires real-time human communication. Digital tools improve access to information but rarely create the sense of connection needed for deep understanding. This study addresses this issue using Social Presence Theory, which explains how a feeling of "being together" enhances communication. An "Embodied Information Hub" is proposed as a new way to share knowledge through physical and conversational interaction. The prototype, Suzume-chan, is a small, soft AI agent running locally with a language model and retrieval-augmented generation (RAG). It learns from spoken explanations and responds through dialogue, reducing psychological distance and making knowledge sharing warmer and more human-centered.
+ oai:arXiv.org:2512.09932v1cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.HC
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Loc Phuc Truong Nguyen, Hung Thanh Do, Hung Truong Thanh Nguyen, Hung Cao
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Maya Grace Torii, Takahito Murakami, Shuka Koseki, Yoichi Ochiai
- From Script to Stage: Automating Experimental Design for Social Simulations with LLMs
- https://arxiv.org/abs/2512.08935
- arXiv:2512.08935v1 Announce Type: new
-Abstract: The rise of large language models (LLMs) has opened new avenues for social science research. Multi-agent simulations powered by LLMs are increasingly becoming a vital approach for exploring complex social phenomena and testing theoretical hypotheses. However, traditional computational experiments often rely heavily on interdisciplinary expertise, involve complex operations, and present high barriers to entry. While LLM-driven agents show great potential for automating experimental design, their reliability and scientific rigor remain insufficient for widespread adoption. To address these challenges, this paper proposes an automated multi-agent experiment design framework based on script generation, inspired by the concept of the Decision Theater. The experimental design process is divided into three stages: (1) Script Generation - a Screenwriter Agent drafts candidate experimental scripts; (2) Script Finalization - a Director Agent evaluates and selects the final script; (3) Actor Generation - an Actor Factory creates actor agents capable of performing on the experimental "stage" according to the finalized script. Extensive experiment conducted across multiple social science experimental scenarios demonstrate that the generated actor agents can perform according to the designed scripts and reproduce outcomes consistent with real-world situations. This framework not only lowers the barriers to experimental design in social science but also provides a novel decision-support tool for policy-making and research. The project's source code is available at: https://anonymous.4open.science/r/FSTS-DE1E
- oai:arXiv.org:2512.08935v1
- cs.HC
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ IoTEdu: Access Control, Detection, and Automatic Incident Response in Academic IoT Networks
+ https://arxiv.org/abs/2512.09934
+ arXiv:2512.09934v1 Announce Type: new
+Abstract: The growing presence of IoT devices in academic environments has increased operational complexity and exposed security weaknesses, especially in academic institutions without unified policies for registration, monitoring, and incident response involving IoT. This work presents IoTEdu, an integrated platform that combines access control, incident detection, and automatic blocking of IoT devices. The solution was evaluated in a controlled environment with simulated attacks, achieving an average time of 28.6 seconds between detection and blocking. The results show a reduction in manual intervention, standardization of responses, and unification of the processes of registration, monitoring, and incident response.
+ oai:arXiv.org:2512.09934v1
+ cs.CR
+ cs.AI
+ cs.NI
+ cs.SE
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Yuwei Guo, Zihan Zhao, Deyu Zhou, Xiaowei Liu, Ming Zhang
+ Joner Assolin, Diego Kreutz, Leandro Bertholdo
- A Principle-based Framework for the Development and Evaluation of Large Language Models for Health and Wellness
- https://arxiv.org/abs/2512.08936
- arXiv:2512.08936v1 Announce Type: new
-Abstract: The incorporation of generative artificial intelligence into personal health applications presents a transformative opportunity for personalized, data-driven health and fitness guidance, yet also poses challenges related to user safety, model accuracy, and personal privacy. To address these challenges, a novel, principle-based framework was developed and validated for the systematic evaluation of LLMs applied to personal health and wellness. First, the development of the Fitbit Insights explorer, a large language model (LLM)-powered system designed to help users interpret their personal health data, is described. Subsequently, the safety, helpfulness, accuracy, relevance, and personalization (SHARP) principle-based framework is introduced as an end-to-end operational methodology that integrates comprehensive evaluation techniques including human evaluation by generalists and clinical specialists, autorater assessments, and adversarial testing, into an iterative development lifecycle. Through the application of this framework to the Fitbit Insights explorer in a staged deployment involving over 13,000 consented users, challenges not apparent during initial testing were systematically identified. This process guided targeted improvements to the system and demonstrated the necessity of combining isolated technical evaluations with real-world user feedback. Finally, a comprehensive, actionable approach is established for the responsible development and deployment of LLM-powered health applications, providing a standardized methodology to foster innovation while ensuring emerging technologies are safe, effective, and trustworthy for users.
- oai:arXiv.org:2512.08936v1
- cs.HC
+ Exploring Health Misinformation Detection with Multi-Agent Debate
+ https://arxiv.org/abs/2512.09935
+ arXiv:2512.09935v1 Announce Type: new
+Abstract: Fact-checking health-related claims has become increasingly critical as misinformation proliferates online. Effective verification requires both the retrieval of high-quality evidence and rigorous reasoning processes. In this paper, we propose a two-stage framework for health misinformation detection: Agreement Score Prediction followed by Multi-Agent Debate. In the first stage, we employ large language models (LLMs) to independently evaluate retrieved articles and compute an aggregated agreement score that reflects the overall evidence stance. When this score indicates insufficient consensus-falling below a predefined threshold-the system proceeds to a second stage. Multiple agents engage in structured debate to synthesize conflicting evidence and generate well-reasoned verdicts with explicit justifications. Experimental results demonstrate that our two-stage approach achieves superior performance compared to baseline methods, highlighting the value of combining automated scoring with collaborative reasoning for complex verification tasks.
+ oai:arXiv.org:2512.09935v1cs.AI
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Brent Winslow, Jacqueline Shreibati, Javier Perez, Hao-Wei Su, Nichole Young-Lin, Nova Hammerquist, Daniel McDuff, Jason Guss, Jenny Vafeiadou, Nick Cain, Alex Lin, Erik Schenck, Shiva Rajagopal, Jia-Ru Chung, Anusha Venkatakrishnan, Amy Armento Lee, Maryam Karimzadehgan, Qingyou Meng, Rythm Agarwal, Aravind Natarajan, Tracy Giest
+ Chih-Han Chen, Chen-Han Tsai, Yu-Shao Peng
- When AI Gives Advice: Evaluating AI and Human Responses to Online Advice-Seeking for Well-Being
- https://arxiv.org/abs/2512.08937
- arXiv:2512.08937v1 Announce Type: new
-Abstract: Seeking advice is a core human behavior that the Internet has reinvented twice: first through forums and Q\&A communities that crowdsource public guidance, and now through large language models (LLMs) that deliver private, on-demand counsel at scale. Yet the quality of this synthesized LLM advice remains unclear. How does it compare, not only against arbitrary human comments, but against the wisdom of the online crowd? We conducted two studies (N = 210) in which experts compared top-voted Reddit advice with LLM-generated advice. LLMs ranked significantly higher overall and on effectiveness, warmth, and willingness to seek advice again. GPT-4o beat GPT-5 on all metrics except sycophancy, suggesting that benchmark gains need not improve advice-giving. In our second study, we examined how human and algorithmic advice could be combined, and found that human advice can be unobtrusively polished to compete with AI-generated comments. Finally, to surface user expectations, we ran an exploratory survey with undergraduates (N=148) that revealed heterogeneous, persona-dependent preferences for agent qualities (e.g., coach-like: goal-focused structure; friend-like: warmth and humor). We conclude with design implications for advice-giving agents and ecosystems blending AI, crowd input, and expert oversight.
- oai:arXiv.org:2512.08937v1
- cs.HC
- cs.AI
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ QSTAformer: A Quantum-Enhanced Transformer for Robust Short-Term Voltage Stability Assessment against Adversarial Attacks
+ https://arxiv.org/abs/2512.09936
+ arXiv:2512.09936v1 Announce Type: new
+Abstract: Short-term voltage stability assessment (STVSA) is critical for secure power system operation. While classical machine learning-based methods have demonstrated strong performance, they still face challenges in robustness under adversarial conditions. This paper proposes QSTAformer-a tailored quantum-enhanced Transformer architecture that embeds parameterized quantum circuits (PQCs) into attention mechanisms-for robust and efficient STVSA. A dedicated adversarial training strategy is developed to defend against both white-box and gray-box attacks. Furthermore, diverse PQC architectures are benchmarked to explore trade-offs between expressiveness, convergence, and efficiency. To the best of our knowledge, this is the first work to systematically investigate the adversarial vulnerability of quantum machine learning-based STVSA. Case studies on the IEEE 39-bus system demonstrate that QSTAformer achieves competitive accuracy, reduced complexity, and stronger robustness, underscoring its potential for secure and scalable STVSA under adversarial conditions.
+ oai:arXiv.org:2512.09936v1
+ eess.SY
+ cs.LG
+ cs.SY
+ quant-ph
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Harsh Kumar, Jasmine Chahal, Yinuo Zhao, Zeling Zhang, Annika Wei, Louis Tay, Ashton Anderson
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yang Li, Chong Ma, Yuanzheng Li, Sen Li, Yanbo Chen, Zhaoyang Dong
- The Impact of Artificial Intelligence on Strategic Technology Management: A Mixed-Methods Analysis of Resources, Capabilities, and Human-AI Collaboration
- https://arxiv.org/abs/2512.08938
- arXiv:2512.08938v1 Announce Type: new
-Abstract: This paper investigates how artificial intelligence (AI) can be effectively integrated into Strategic Technology Management (STM) practices to enhance the strategic alignment and effectiveness of technology investments. Through a mixed-methods approach combining quantitative survey data (n=230) and qualitative expert interviews (n=14), this study addresses three critical research questions: what success factors AI innovates for STM roadmap formulation under uncertainty; what resources and capabilities organizations require for AI-enhanced STM; and how human-AI interaction should be designed for complex STM tasks. The findings reveal that AI fundamentally transforms STM through data-driven strategic alignment and continuous adaptation, while success depends on cultivating proprietary data ecosystems, specialized human talent, and robust governance capabilities. The study introduces the AI-based Strategic Technology Management (AIbSTM) conceptual framework, which synthesizes technical capabilities with human and organizational dimensions across three layers: strategic alignment, resource-based view, and human-AI interaction. Contrary to visions of autonomous AI leadership, the research demonstrates that the most viable trajectory is human-centric augmentation, where AI serves as a collaborative partner rather than a replacement for human judgment. This work contributes to theory by extending the Resource-Based View to AI contexts and addressing cognitive and socio-technical chasms in AI adoption, while offering practitioners a prescriptive framework for navigating AI integration in strategic technology management.
- oai:arXiv.org:2512.08938v1
- cs.HC
+ Blockchain-Anchored Audit Trail Model for Transparent Inter-Operator Settlement
+ https://arxiv.org/abs/2512.09938
+ arXiv:2512.09938v1 Announce Type: new
+Abstract: The telecommunications and financial services industries face substantial challenges in inter-operator settlement processes, characterized by extended reconciliation cycles, high transaction costs, and limited real-time transparency. Traditional settlement mechanisms rely on multiple intermediaries and manual procedures, resulting in settlement periods exceeding 120 days with operational costs consuming approximately 5 percent of total revenue. This research presents a blockchain-anchored audit trail model enabling transparent, immutable, and automated inter-operator settlement. The framework leverages distributed ledger technology, smart contract automation, and cryptographic verification to establish a unified, tamper-proof transaction record. Empirical evaluation demonstrates 87 percent reduction in transaction fees, settlement cycle compression from 120 days to 3 minutes, and 100 percent audit trail integrity. Smart contract automation reduces manual intervention by 92 percent and eliminates 88 percent of settlement disputes. Market analysis indicates institutional adoption accelerated from 8 percent in 2020 to 52 percent by April 2024, with projected industry investment reaching 9.2 billion USD annually. The framework addresses scalability (12,000 transactions per second), interoperability, and regulatory compliance across multiple jurisdictions.
+ oai:arXiv.org:2512.09938v1
+ cs.CRcs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Massimo Fascinari, Vincent English
+ INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING; 2024;https://www.ijisae.org/index.php/IJISAE/article/view/7919/6939
+ Balakumar Ravindranath Kunthu, Ranganath Nagesh Taware, Sathish Krishna Anumula
- Assessing the Human-Likeness of LLM-Driven Digital Twins in Simulating Health Care System Trust
- https://arxiv.org/abs/2512.08939
- arXiv:2512.08939v1 Announce Type: new
-Abstract: Serving as an emerging and powerful tool, Large Language Model (LLM)-driven Human Digital Twins are showing great potential in healthcare system research. However, its actual simulation ability for complex human psychological traits, such as distrust in the healthcare system, remains unclear. This research gap particularly impacts health professionals' trust and usage of LLM-based Artificial Intelligence (AI) systems in assisting their routine work. In this study, based on the Twin-2K-500 dataset, we systematically evaluated the simulation results of the LLM-driven human digital twin using the Health Care System Distrust Scale (HCSDS) with an established human-subject sample, analyzing item-level distributions, summary statistics, and demographic subgroup patterns. Results showed that the simulated responses by the digital twin were significantly more centralized with lower variance and had fewer selections of extreme options (all p<0.001). While the digital twin broadly reproduces human results in major demographic patterns, such as age and gender, it exhibits relatively low sensitivity in capturing minor differences in education levels. The LLM-based digital twin simulation has the potential to simulate population trends, but it also presents challenges in making detailed, specific distinctions in subgroups of human beings. This study suggests that the current LLM-driven Digital Twins have limitations in modeling complex human attitudes, which require careful calibration and validation before applying them in inferential analyses or policy simulations in health systems engineering. Future studies are necessary to examine the emotional reasoning mechanism of LLMs before their use, particularly for studies that involve simulations sensitive to social topics, such as human-automation trust.
- oai:arXiv.org:2512.08939v1
- cs.HC
+ Norm-Governed Multi-Agent Decision-Making in Simulator-Coupled Environments:The Reinsurance Constrained Multi-Agent Simulation Process (R-CMASP)
+ https://arxiv.org/abs/2512.09939
+ arXiv:2512.09939v1 Announce Type: new
+Abstract: Reinsurance decision-making exhibits the core structural properties that motivate multi-agent models: distributed and asymmetric information, partial observability, heterogeneous epistemic responsibilities, simulator-driven environment dynamics, and binding prudential and regulatory constraints. Deterministic workflow automation cannot meet these requirements, as it lacks the epistemic flexibility, cooperative coordination mechanisms, and norm-sensitive behaviour required for institutional risk-transfer.
+ We propose the Reinsurance Constrained Multi-Agent Simulation Process (R-CMASP), a formal model that extends stochastic games and Dec-POMDPs by adding three missing elements: (i) simulator-coupled transition dynamics grounded in catastrophe, capital, and portfolio engines; (ii) role-specialized agents with structured observability, belief updates, and typed communication; and (iii) a normative feasibility layer encoding solvency, regulatory, and organizational rules as admissibility constraints on joint actions.
+ Using LLM-based agents with tool access and typed message protocols, we show in a domain-calibrated synthetic environment that governed multi-agent coordination yields more stable, coherent, and norm-adherent behaviour than deterministic automation or monolithic LLM baselines--reducing pricing variance, improving capital efficiency, and increasing clause-interpretation accuracy. Embedding prudential norms as admissibility constraints and structuring communication into typed acts measurably enhances equilibrium stability.
+ Overall, the results suggest that regulated, simulator-driven decision environments are most naturally modelled as norm-governed, simulator-coupled multi-agent systems.
+ oai:arXiv.org:2512.09939v1
+ cs.MAcs.AI
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Yuzhou Wu, Mingyang Wu, Di Liu, Rong Yin, Kang Li
+ Stella C. Dong
- Psychlysis: Towards the Creation of a Questionnaire-based Machine Learning Tool to Analyze States of Mind
- https://arxiv.org/abs/2512.08940
- arXiv:2512.08940v1 Announce Type: new
-Abstract: This paper describes the development of Psychlysis, a work-in-progress questionnaire-based machine learning application analyzing the user's current state of mind and suggesting ways to improve their mood using Machine Learning. The application utilizes the OCEAN model to understand the user's personality traits and make customized suggestions to enhance their well-being. The proposed application focus on improving the user's mood rather than just detecting their emotions. Preliminary results of the model are presented, showing the potential of the application in predicting the user's mood and providing personalized recommendations. The paper concludes by highlighting the potential benefits of such an application for various societal segments, including doctors, individuals, and mental health organizations, in improving emotional well-being and reducing the negative impact of mental health issues on daily life.
- oai:arXiv.org:2512.08940v1
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fourier Sparsity of Delta Functions and Matching Vector PIRs
+ https://arxiv.org/abs/2512.09941
+ arXiv:2512.09941v1 Announce Type: new
+Abstract: In this paper we study a basic and natural question about Fourier analysis of Boolean functions, which has applications to the study of Matching Vector based Private Information Retrieval (PIR) schemes. For integers m and r, define a delta function on {0,1}^r to be a function f: Z_m^r -> C with f(0) = 1 and f(x) = 0 for all nonzero Boolean x. The basic question we study is how small the Fourier sparsity of a delta function can be; namely how sparse such an f can be in the Fourier basis?
+ In addition to being intrinsically interesting and natural, such questions arise naturally when studying "S-decoding polynomials" for the known matching vector families. Finding S-decoding polynomials of reduced sparsity, which corresponds to finding delta functions with low Fourier sparsity, would improve the current best PIR schemes.
+ We show nontrivial upper and lower bounds on the Fourier sparsity of delta functions. Our proofs are elementary and clean. These results imply limitations on improving Matching Vector PIR schemes simply by finding better S-decoding polynomials. In particular, there are no S-decoding polynomials that can make Matching Vector PIRs based on the known matching vector families achieve polylogarithmic communication with a constant number of servers. Many interesting questions remain open.
+ oai:arXiv.org:2512.09941v1
+ cs.IT
+ cs.CR
+ math.IT
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Hemakshi Jani, Mitish Karia, Meet Gohil, Rahul Bhadja, Aznam Yacoub, Shafaq Khan
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fatemeh Ghasemi, Swastik Kopparty
- One Size Fits None: A Personalized Framework for Urban Accessibility Using Exponential Decay
- https://arxiv.org/abs/2512.08941
- arXiv:2512.08941v1 Announce Type: new
-Abstract: This study develops a personalized accessibility framework that integrates exponential decay functions with user-customizable weighting systems. The framework enables real-time, personalized urban evaluation based on individual priorities and lifestyle requirements. The methodology employs grid-based discretization and a two-stage computational architecture that separates intensive preprocessing from lightweight real-time calculations. The computational architecture demonstrates that accessibility modelling can be made accessible to non-technical users through interactive interfaces, enabling fine-grained spatial analysis and identification of accessibility variations within neighbourhoods. The research contributes to Sustainable Development Goal 11's vision of inclusive, sustainable cities by providing tools for understanding how different populations experience identical urban spaces, supporting evidence-based policy development that addresses accessibility gaps.
- oai:arXiv.org:2512.08941v1
- cs.HC
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ A study of the spectrum resource leasing method based on ERC4907 extension
+ https://arxiv.org/abs/2512.09942
+ arXiv:2512.09942v1 Announce Type: new
+Abstract: The ERC4907 standard enables rentable Non-Fungible Tokens (NFTs) but is limited to single-user, single-time-slot authorization, which severely limits its applicability and efficiency in decentralized multi-slot scheduling scenarios. To address this limitation, this paper proposes Multi-slot ERC4907 (M-ERC4907) extension method. The M-ERC4907 method introduces novel functionalities to support the batch configuration of multiple time slots and simultaneous authorization of multiple users, thereby effectively eliminating the rigid sequential authorization constraint of ERC4907. The experiment was conducted on the Remix development platform. Experimental results show that the M-ERC4907 method significantly reduces on-chain transactions and overall Gas consumption, leading to enhanced scalability and resource allocation efficiency.
+ oai:arXiv.org:2512.09942v1
+ cs.DC
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Prabhanjana Ghuriki, S. Chanti, Jossy P George
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhiming Liang, Bin Chen, Litao Ye, Chen Sun, Shuo Wang, Zhe Peng
- Beyond Technical Debt: How AI-Assisted Development Creates Comprehension Debt in Resource-Constrained Indie Teams
- https://arxiv.org/abs/2512.08942
- arXiv:2512.08942v1 Announce Type: new
-Abstract: Junior indie game developers in distributed, part-time teams lack production frameworks suited to their specific context, as traditional methodologies are often inaccessible. This study introduces the CIGDI (Co-Intelligence Game Development Ideation) Framework, an alternative approach for integrating AI tools to address persistent challenges of technical debt, coordination, and burnout.
- The framework emerged from a three-month reflective practice and autoethnographic study of a three-person distributed team developing the 2D narrative game "The Worm's Memoirs". Based on analysis of development data (N=157 Jira tasks, N=333 GitHub commits, N=13+ Miro boards, N=8 reflection sessions), CIGDI is proposed as a seven-stage iterative process structured around human-in-the-loop decision points (Priority Criteria and Timeboxing).
- While AI support democratized knowledge access and reduced cognitive load, our analysis identified a significant challenge: "comprehension debt." We define this as a novel form of technical debt where AI helps teams build systems more sophisticated than their independent skill level can create or maintain. This paradox (possessing functional systems the team incompletely understands) creates fragility and AI dependency, distinct from traditional code quality debt.
- This work contributes a practical production framework for resource-constrained teams and identifies critical questions about whether AI assistance constitutes a learning ladder or a dependency trap for developer skill.
- oai:arXiv.org:2512.08942v1
- cs.HC
+ Echo-CoPilot: A Multi-View, Multi-Task Agent for Echocardiography Interpretation and Reporting
+ https://arxiv.org/abs/2512.09944
+ arXiv:2512.09944v1 Announce Type: new
+Abstract: Echocardiography is central to contemporary cardiovascular care, but full-study interpretation remains a cognitively demanding, multi-view task that is still performed manually. While recent foundation models for echocardiography can achieve strong performance on individual perceptual subtasks such as view classification, segmentation, or disease prediction, they typically operate in isolation and do not provide a unified, clinically coherent assessment. In this work, we introduce Echo-CoPilot, a multi-view, multi-task agent that uses a large language model to orchestrate a suite of specialized echocardiography tools. Within a ReAct-style loop, the agent decomposes clinician queries, invokes tools for view recognition, cardiac structure segmentation, measurement and disease prediction, and report synthesis, and integrates their outputs into guideline-aware answers and narrative summaries. We evaluate Echo-CoPilot on the public MIMIC-EchoQA benchmark, where it achieves an accuracy of 50.8\%, outperforming both general-purpose and biomedical video vision-language models. Qualitative analyses further show that the agent leverages quantitative measurements and physiologic context to resolve challenging cases near clinical decision thresholds, such as borderline left ventricular hypertrophy or pericardial effusion severity. The code will be released upon acceptance of the paper.
+ oai:arXiv.org:2512.09944v1cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CV
+ cs.LG
+ eess.IV
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/publicdomain/zero/1.0/
- Yujie Zhang
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Moein Heidari, Mohammad Amin Roohi, Armin Khosravi, Ilker Hacihaliloglu
- Noise-Robust Abstractive Compression in Retrieval-Augmented Language Models
- https://arxiv.org/abs/2512.08943
- arXiv:2512.08943v1 Announce Type: new
-Abstract: Abstractive compression utilizes smaller langauge models to condense query-relevant context, reducing computational costs in retrieval-augmented generation (RAG). However, retrieved documents often include information that is either irrelevant to answering the query or misleading due to factual incorrect content, despite having high relevance scores. This behavior indicates that abstractive compressors are more likely to omit important information essential for the correct answer, especially in long contexts where attention dispersion occurs. To address this issue, we categorize retrieved documents in a more fine-grained manner and propose Abstractive Compression Robust against Noise (ACoRN), which introduces two novel training steps. First, we use offline data augmentation on the training dataset to enhance compressor robustness against two distinct types of retrieval noise. Second, since the language model based compressor cannot fully utilize information from multiple retrieved documents and exhibits positional bias, we perform finetuning to generate summaries centered around key information that directly supports the correct answer. Our experiments demonstrate that T5-large, trained with ACoRN as a compressor, improves EM and F1 scores while preserving the answer string, which could serve as direct evidence. ACoRN excels on datasets with many accuracy reducing documents, making it highly useful in real-world scenarios.
- oai:arXiv.org:2512.08943v1
- cs.CL
+ ELANA: A Simple Energy and Latency Analyzer for LLMs
+ https://arxiv.org/abs/2512.09946
+ arXiv:2512.09946v1 Announce Type: new
+Abstract: The latency and power consumption of large language models (LLMs) are major constraints when serving them across a wide spectrum of hardware platforms, from mobile edge devices to cloud GPU clusters. Benchmarking is crucial for optimizing efficiency in both model deployment and next-generation model development. To address this need, we open-source a simple profiling tool, \textbf{ELANA}, for evaluating LLMs. ELANA is designed as a lightweight, academic-friendly profiler for analyzing model size, key-value (KV) cache size, prefilling latency (Time-to-first-token, TTFT), generation latency (Time-per-output-token, TPOT), and end-to-end latency (Time-to-last-token, TTLT) of LLMs on both multi-GPU and edge GPU platforms. It supports all publicly available models on Hugging Face and offers a simple command-line interface, along with optional energy consumption logging. Moreover, ELANA is fully compatible with popular Hugging Face APIs and can be easily customized or adapted to compressed or low bit-width models, making it ideal for research on efficient LLMs or for small-scale proof-of-concept studies. We release the ELANA profiling tool at: https://github.com/enyac-group/Elana.
+ oai:arXiv.org:2512.09946v1
+ cs.DCcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Singon Kim
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Hung-Yueh Chiang, Bokun Wang, Diana Marculescu
- Enhancing Reliability across Short and Long-Form QA via Reinforcement Learning
- https://arxiv.org/abs/2512.08944
- arXiv:2512.08944v1 Announce Type: new
-Abstract: While reinforcement learning has unlocked unprecedented complex reasoning in large language models, it has also amplified their propensity for hallucination, creating a critical trade-off between capability and reliability. This work confronts this challenge by introducing a targeted RL framework designed to mitigate both intrinsic and extrinsic hallucinations across short and long-form question answering. We address extrinsic hallucinations (flawed internal knowledge) by creating a novel training set from open-ended conversions of TriviaQA. Concurrently, we tackle intrinsic hallucinations (unfaithfulness to context) by leveraging long-form texts from FineWeb in a fact-grounding reward scheme. To further bolster reliability, our framework explicitly rewards the model for refusing to answer unanswerable questions, thereby cultivating crucial cautiousness. Extensive experiments demonstrate that our methodology yields significant performance gains across a diverse suite of benchmarks, substantially reducing both hallucination types. Ultimately, this research contributes a practical framework for resolving the critical tension between advanced reasoning and factual trustworthiness, paving the way for more capable and reliable large language models.
- oai:arXiv.org:2512.08944v1
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ HGC-Herd: Efficient Heterogeneous Graph Condensation via Representative Node Herding
+ https://arxiv.org/abs/2512.09947
+ arXiv:2512.09947v1 Announce Type: new
+Abstract: Heterogeneous graph neural networks (HGNNs) have demonstrated strong capability in modeling complex semantics across multi-type nodes and relations. However, their scalability to large-scale graphs remains challenging due to structural redundancy and high-dimensional node features. Existing graph condensation approaches, such as GCond, are primarily developed for homogeneous graphs and rely on gradient matching, resulting in considerable computational, memory, and optimization overhead. We propose HGC-Herd, a training-free condensation framework that generates compact yet informative heterogeneous graphs while maintaining both semantic and structural fidelity. HGC-Herd integrates lightweight feature propagation to encode multi-hop relational context and employs a class-wise herding mechanism to identify representative nodes per class, producing balanced and discriminative subsets for downstream learning tasks. Extensive experiments on ACM, DBLP, and Freebase validate that HGC-Herd attains comparable or superior accuracy to full-graph training while markedly reducing both runtime and memory consumption. These results underscore its practical value for efficient and scalable heterogeneous graph representation learning.
+ oai:arXiv.org:2512.09947v1
+ cs.LG
+ cs.IR
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Yudong Wang, Zhe Yang, Wenhan Ma, Zhifang Sui, Liang Zhao
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fuyan Ou, Siqi Ai, Yulin Hu
- The Linguistic Architecture of Reflective Thought: Evaluation of a Large Language Model as a Tool to Isolate the Formal Structure of Mentalization
- https://arxiv.org/abs/2512.08945
- arXiv:2512.08945v1 Announce Type: new
-Abstract: Background: Mentalization integrates cognitive, affective, and intersubjective components. Large Language Models (LLMs) display an increasing ability to generate reflective texts, raising questions regarding the relationship between linguistic form and mental representation. This study assesses the extent to which a single LLM can reproduce the linguistic structure of mentalization according to the parameters of Mentalization-Based Treatment (MBT).
- Methods: Fifty dialogues were generated between human participants and an LLM configured in standard mode. Five psychiatrists trained in MBT, working under blinded conditions, evaluated the mentalization profiles produced by the model along the four MBT axes, assigning Likert-scale scores for evaluative coherence, argumentative coherence, and global quality. Inter-rater agreement was estimated using ICC(3,1).
- Results: Mean scores (3.63-3.98) and moderate standard deviations indicate a high level of structural coherence in the generated profiles. ICC values (0.60-0.84) show substantial-to-high agreement among raters. The model proved more stable in the Implicit-Explicit and Self-Other dimensions, while presenting limitations in the integration of internal states and external contexts. The profiles were coherent and clinically interpretable yet characterized by affective neutrality.
- oai:arXiv.org:2512.08945v1
- cs.CL
+ ZK-APEX: Zero-Knowledge Approximate Personalized Unlearning with Executable Proofs
+ https://arxiv.org/abs/2512.09953
+ arXiv:2512.09953v1 Announce Type: new
+Abstract: Machine unlearning aims to remove the influence of specific data points from a trained model to satisfy privacy, copyright, and safety requirements. In real deployments, providers distribute a global model to many edge devices, where each client personalizes the model using private data. When a deletion request is issued, clients may ignore it or falsely claim compliance, and providers cannot check their parameters or data. This makes verification difficult, especially because personalized models must forget the targeted samples while preserving local utility, and verification must remain lightweight on edge devices.
+ We introduce ZK APEX, a zero-shot personalized unlearning method that operates directly on the personalized model without retraining. ZK APEX combines sparse masking on the provider side with a small Group OBS compensation step on the client side, using a blockwise empirical Fisher matrix to create a curvature-aware update designed for low overhead. Paired with Halo2 zero-knowledge proofs, it enables the provider to verify that the correct unlearning transformation was applied without revealing any private data or personalized parameters.
+ On Vision Transformer classification tasks, ZK APEX recovers nearly all personalization accuracy while effectively removing the targeted information. Applied to the OPT125M generative model trained on code data, it recovers around seventy percent of the original accuracy. Proof generation for the ViT case completes in about two hours, more than ten million times faster than retraining-based checks, with less than one gigabyte of memory use and proof sizes around four hundred megabytes. These results show the first practical framework for verifiable personalized unlearning on edge devices.
+ oai:arXiv.org:2512.09953v1
+ cs.CRcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Stefano Epifani (University of Pavia Italy, Digital Transformation Institute Italy), Giuliano Castigliego (Digital Transformation Institute Italy, Psychoanalytic Academy of Italian-Speaking Switzerland), Laura Kecskemeti (Psychiatric Services of the Canton of Grisons Switzerland), Giuliano Razzicchia (Digital Transformation Institute Italy), Elisabeth Seiwald-Sonderegger (Psychiatric Services of the Canton of Grisons Switzerland)
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mohammad M Maheri, Sunil Cotterill, Alex Davidson, Hamed Haddadi
- Large Language Models as Search Engines: Societal Challenges
- https://arxiv.org/abs/2512.08946
- arXiv:2512.08946v1 Announce Type: new
-Abstract: Large Language Models (LLMs) may one day replace search engines as the primary portal to information on the Web. In this article, we investigate the societal challenges that such a change could bring. We focus on the roles of LLM Providers, Content Creators, and End Users, and identify 15 types of challenges. With each, we show current mitigation strategies -- both from the technical perspective and the legal perspective. We also discuss the impact of each challenge and point out future research opportunities.
- oai:arXiv.org:2512.08946v1
- cs.CY
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Cross-Layer Isochronous Diffusion Protocol (CIDP): A Rigorous Information-Theoretic and Control-Theoretic Framework for Sovereign Tactical Anonymity
+ https://arxiv.org/abs/2512.09954
+ arXiv:2512.09954v1 Announce Type: new
+Abstract: Next-generation tactical networks face a critical Anonymity Trilemma: it is impossible to simultaneously achieve strong anonymity, low latency (isochrony), and low bandwidth overhead under a global passive adversary. CIDP breaks this deadlock by injecting physical-layer entropy via rapid antenna sidelobe modulation, enabling near-isochronous, low-overhead anonymous communication. CIDP jointly designs: (a) a Lyapunov drift-plus-penalty network controller that stabilizes queues and maximizes entropy injection; (b) a robust discrete-time Control Barrier Function (RaCBF) filter that provably enforces deterministic jitter bounds for real-time flows despite uncertainty; and (c) a convex Sidelobe Time Modulation (SLTM) optimization that spreads signals into the antenna null-space to mask transmissions. We explicitly augment the classical anonymity bound with a physical-layer equivocation term, showing that rapidly changing sidelobes contribute additional secrecy. Consequently, as the injected physical entropy grows, both latency and dummy overhead can approach zero for a fixed anonymity target. We provide full theoretical proofs of queue stability, barrier-set invariance, and SLTM convexity. Moreover, we quantitatively benchmark our SLTM design against recent LPI/LPD schemes, demonstrating significantly lower intercept probability for comparable overhead. High-fidelity MATLAB/NS-3 simulations and an FPGA prototype validate CIDP: results show approximately 40% larger anonymity sets and 100% compliance with sub-30 ms jitter (compared to a Tor-like baseline), with only about 5% throughput loss. We also outline a Modular Open Systems Approach (MOSA) and FOCI-compliant supply-chain strategy. CIDP is the first architecture that simultaneously addresses strong anonymity, strict isochrony, and spectral efficiency with provable guarantees, making it highly relevant for sovereign JADC2 deployments.
+ oai:arXiv.org:2512.09954v1
+ cs.CR
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- SIGIR Forum 2025
- Zacchary Sadeddine, Winston Maxwell, Ga\"el Varoquaux, Fabian M. Suchanek
+ Pravin G
- Kinematics Control of Electromagnetic Formation Flight Using Angular-Momentum Conservation Constraint
- https://arxiv.org/abs/2512.08949
- arXiv:2512.08949v1 Announce Type: new
-Abstract: Electromagnetic formation flight (EMFF) uses the electromagnetic force to control the relative positions of multiple satellites without using conventional fuel-based propulsion. To compensate for the electromagnetic torque generated alongside the electromagnetic force, in most previous studies, all satellites were assumed to have reaction wheels (RWs) besides electromagnetic coils. However, the RW-loaded angular momentum becomes non-uniformly distributed among the satellites, because the electromagnetic torque usually differs between satellites. Without a proper control scheme, this deviation increases over time, and the RWs become saturated quickly, preventing the attitudes of the satellites from being controlled. In this study, a new controller is proposed that enables the electromagnetic force and torque to be controlled simultaneously. The EMFF kinematics derived from the conservation of angular momentum are used for the controller design. This controller can control $n$ satellites without saturating the RWs, and only one set of RWs is required among all satellites. The combination of the proposed controller with a simple unloading control exclusive to the chief satellite results in the elimination of the accumulation of angular momentum in the entire system. The effectiveness of the proposed controller is demonstrated through numerical simulations of the formation maintenance and formation reconfiguration of a five-satellite system.
- oai:arXiv.org:2512.08949v1
- eess.SY
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ CloudFix: Automated Policy Repair for Cloud Access Control Policies Using Large Language Models
+ https://arxiv.org/abs/2512.09957
+ arXiv:2512.09957v1 Announce Type: new
+Abstract: Access control policies are vital for securing modern cloud computing, where organizations must manage access to sensitive data across thousands of users in distributed system settings. Cloud administrators typically write and update policies manually, which can be an error-prone and time-consuming process and can potentially lead to security vulnerabilities. Existing approaches based on symbolic analysis have demon- strated success in automated debugging and repairing access control policies; however, their generalizability is limited in the context of cloud-based access control. Conversely, Large Language Models (LLMs) have been utilized for automated program repair; however, their applicability to repairing cloud access control policies remains unexplored. In this work, we introduce CloudFix, the first automated policy repair framework for cloud access control that combines formal methods with LLMs. Given an access control policy and a specification of allowed and denied access requests, CloudFix employs Formal Methods-based Fault Localization to identify faulty statements in the policy and leverages LLMs to generate potential repairs, which are then verified using SMT solvers. To evaluate CloudFix, we curated a dataset of 282 real-world AWS access control policies extracted from forum posts and augmented them with synthetically generated request sets based on real scenarios. Our experimental results show that CloudFix improves repair accuracy over a Baseline implementation across varying request sizes. Our work is the first to leverage LLMs for policy repair, showcasing the effectiveness of LLMs for access control and enabling efficient and automated repair of cloud access control policies. We make our tool Cloudfix and AWS dataset publicly available.
+ oai:arXiv.org:2512.09957v1
+ cs.DC
+ cs.CR
+ cs.SE
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.2514/1.G005873
- Yuta Takahashi, Hiraku Sakamoto, Shin-ichiro Sakai
+ http://creativecommons.org/licenses/by/4.0/
+ Bethel Hall, Owen Ungaro, William Eiers
- Optimizing Algorithms for Mobile Health Interventions with Active Querying Optimization
- https://arxiv.org/abs/2512.08950
- arXiv:2512.08950v1 Announce Type: new
-Abstract: Reinforcement learning in mobile health (mHealth) interventions requires balancing intervention efficacy with user burden, particularly when state measurements (for example, user surveys or feedback) are costly yet essential. The Act-Then-Measure (ATM) heuristic addresses this challenge by decoupling control and measurement actions within the Action-Contingent Noiselessly Observable Markov Decision Process (ACNO-MDP) framework. However, the standard ATM algorithm relies on a temporal-difference-inspired Q-learning method, which is prone to instability in sparse and noisy environments. In this work, we propose a Bayesian extension to ATM that replaces standard Q-learning with a Kalman filter-style Bayesian update, maintaining uncertainty-aware estimates of Q-values and enabling more stable and sample-efficient learning. We evaluate our method in both toy environments and clinically motivated testbeds. In small, tabular environments, Bayesian ATM achieves comparable or improved scalarized returns with substantially lower variance and more stable policy behavior. In contrast, in larger and more complex mHealth settings, both the standard and Bayesian ATM variants perform poorly, suggesting a mismatch between ATM's modeling assumptions and the structural challenges of real-world mHealth domains. These findings highlight the value of uncertainty-aware methods in low-data settings while underscoring the need for new RL algorithms that explicitly model causal structure, continuous states, and delayed feedback under observation cost constraints.
- oai:arXiv.org:2512.08950v1
- cs.LG
- stat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
+ When Quantum Federated Learning Meets Blockchain in 6G Networks
+ https://arxiv.org/abs/2512.09958
+ arXiv:2512.09958v1 Announce Type: new
+Abstract: Quantum federated learning (QFL) is emerging as a key enabler for intelligent, secure, and privacy-preserving model training in next-generation 6G networks. By leveraging the computational advantages of quantum devices, QFL offers significant improvements in learning efficiency and resilience against quantum-era threats. However, future 6G environments are expected to be highly dynamic, decentralized, and data-intensive, which necessitates moving beyond traditional centralized federated learning frameworks. To meet this demand, blockchain technology provides a decentralized, tamper-resistant infrastructure capable of enabling trustless collaboration among distributed quantum edge devices. This paper presents QFLchain, a novel framework that integrates QFL with blockchain to support scalable and secure 6G intelligence. In this work, we investigate four key pillars of \textit{QFLchain} in the 6G context: (i) communication and consensus overhead, (ii) scalability and storage overhead, (iii) energy inefficiency, and (iv) security vulnerability. A case study is also presented, demonstrating potential advantages of QFLchain, based on simulation, over state-of-the-art approaches in terms of training performance.
+ oai:arXiv.org:2512.09958v1
+ cs.CR
+ cs.DC
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Aseel Rawashdeh
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Dinh C. Nguyen, Md Bokhtiar Al Zami, Ratun Rahman, Shaba Shaon, Tuy Tan Nguyen, Fatemeh Afghah
- AI Co-Artist: A LLM-Powered Framework for Interactive GLSL Shader Animation Evolution
- https://arxiv.org/abs/2512.08951
- arXiv:2512.08951v1 Announce Type: new
-Abstract: Creative coding and real-time shader programming are at the forefront of interactive digital art, enabling artists, designers, and enthusiasts to produce mesmerizing, complex visual effects that respond to real-time stimuli such as sound or user interaction. However, despite the rich potential of tools like GLSL, the steep learning curve and requirement for programming fluency pose substantial barriers for newcomers and even experienced artists who may not have a technical background. In this paper, we present AI Co-Artist, a novel interactive system that harnesses the capabilities of large language models (LLMs), specifically GPT-4, to support the iterative evolution and refinement of GLSL shaders through a user-friendly, visually-driven interface. Drawing inspiration from the user-guided evolutionary principles pioneered by the Picbreeder platform, our system empowers users to evolve shader art using intuitive interactions, without needing to write or understand code. AI Co-Artist serves as both a creative companion and a technical assistant, allowing users to explore a vast generative design space of real-time visual art. Through comprehensive evaluations, including structured user studies and qualitative feedback, we demonstrate that AI Co-Artist significantly reduces the technical threshold for shader creation, enhances creative outcomes, and supports a wide range of users in producing professional-quality visual effects. Furthermore, we argue that this paradigm is broadly generalizable. By leveraging the dual strengths of LLMs-semantic understanding and program synthesis, our method can be applied to diverse creative domains, including website layout generation, architectural visualizations, product prototyping, and infographics.
- oai:arXiv.org:2512.08951v1
- cs.NE
- cs.AI
- cs.GR
- Thu, 11 Dec 2025 00:00:00 -0500
+ TRUCE: TRUsted Compliance Enforcement Service for Secure Health Data Exchange
+ https://arxiv.org/abs/2512.09959
+ arXiv:2512.09959v1 Announce Type: new
+Abstract: Organizations are increasingly sharing large volumes of sensitive Personally Identifiable Information (PII), like health records, with each other to better manage their services. Protecting PII data has become increasingly important in today's digital age, and several regulations have been formulated to ensure the secure exchange and management of sensitive personal data. However, at times some of these regulations are at loggerheads with each other, like the Health Insurance Portability and Accountability Act (HIPAA) and Cures Act; and this adds complexity to the already challenging task of Health Data compliance. As public concern regarding sensitive data breaches grows, finding solutions that streamline compliance processes and enhance individual privacy is crucial. We have developed a novel TRUsted Compliance Enforcement (TRUCE) framework for secure data exchange which aims to automate compliance procedures and enhance trusted data management within organizations. The TRUCE framework reasons over contexts of data exchange and assesses the trust score of users and the veracity of data based on corresponding regulations. This framework, developed using approaches from AI/Knowledge representation and Semantic Web technologies, includes a trust management method that incorporates static ground truth, represented by regulations such as HIPAA, and dynamic ground truth, defined by an organization's policies. In this paper, we present our framework in detail along with the validation against the Health Insurance Portability and Accountability Act (HIPAA) Data Usage Agreement (DUA) on CDC Contact Tracing patient data, up to one million patient records. TRUCE service will streamline compliance efforts and ensure adherence to privacy regulations and can be used by organizations to manage compliance of large velocity data exchange in real time.
+ oai:arXiv.org:2512.09959v1
+ cs.CR
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Kamer Ali Yuksel, Hassan Sawaf
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Dae-young Kim, Karuna Pande Joshi
- Learning When to Ask: Simulation-Trained Humanoids for Mental-Health Diagnosis
- https://arxiv.org/abs/2512.08952
- arXiv:2512.08952v1 Announce Type: new
-Abstract: Testing humanoid robots with users is slow, causes wear, and limits iteration and diversity. Yet screening agents must master conversational timing, prosody, backchannels, and what to attend to in faces and speech for Depression and PTSD. Most simulators omit policy learning with nonverbal dynamics; many controllers chase task accuracy while underweighting trust, pacing, and rapport. We virtualise the humanoid as a conversational agent to train without hardware burden. Our agent-centred, simulation-first pipeline turns interview data into 276 Unreal Engine MetaHuman patients with synchronised speech, gaze/face, and head-torso poses, plus PHQ-8 and PCL-C flows. A perception-fusion-policy loop decides what and when to speak, when to backchannel, and how to avoid interruptions, under a safety shield. Training uses counterfactual replay (bounded nonverbal perturbations) and an uncertainty-aware turn manager that probes to reduce diagnostic ambiguity. Results are simulation-only; the humanoid is the transfer target. In comparing three controllers, a custom TD3 (Twin Delayed DDPG) outperformed PPO and CEM, achieving near-ceiling coverage with steadier pace at comparable rewards. Decision-quality analyses show negligible turn overlap, aligned cut timing, fewer clarification prompts, and shorter waits. Performance stays stable under modality dropout and a renderer swap, and rankings hold on a held-out patient split. Contributions: (1) an agent-centred simulator that turns interviews into 276 interactive patients with bounded nonverbal counterfactuals; (2) a safe learning loop that treats timing and rapport as first-class control variables; (3) a comparative study (TD3 vs PPO/CEM) with clear gains in completeness and social timing; and (4) ablations and robustness analyses explaining the gains and enabling clinician-supervised humanoid pilots.
- oai:arXiv.org:2512.08952v1
+ TDC-Cache: A Trustworthy Decentralized Cooperative Caching Framework for Web3.0
+ https://arxiv.org/abs/2512.09961
+ arXiv:2512.09961v1 Announce Type: new
+Abstract: The rapid growth of Web3.0 is transforming the Internet from a centralized structure to decentralized, which empowers users with unprecedented self-sovereignty over their own data. However, in the context of decentralized data access within Web3.0, it is imperative to cope with efficiency concerns caused by the replication of redundant data, as well as security vulnerabilities caused by data inconsistency. To address these challenges, we develop a Trustworthy Decentralized Cooperative Caching (TDC-Cache) framework for Web3.0 to ensure efficient caching and enhance system resilience against adversarial threats. This framework features a two-layer architecture, wherein the Decentralized Oracle Network (DON) layer serves as a trusted intermediary platform for decentralized caching, bridging the contents from decentralized storage and the content requests from users. In light of the complexity of Web3.0 network topologies and data flows, we propose a Deep Reinforcement Learning-Based Decentralized Caching (DRL-DC) for TDC-Cache to dynamically optimize caching strategies of distributed oracles. Furthermore, we develop a Proof of Cooperative Learning (PoCL) consensus to maintain the consistency of decentralized caching decisions within DON. Experimental results show that, compared with existing approaches, the proposed framework reduces average access latency by 20%, increases the cache hit rate by at most 18%, and improves the average success consensus rate by 10%. Overall, this paper serves as a first foray into the investigation of decentralized caching framework and strategy for Web3.0.
+ oai:arXiv.org:2512.09961v1
+ cs.DCcs.LG
- cs.AI
- cs.HC
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Filippo Cenacchi, Deborah Richards, Longbing Cao
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jinyu Chen, Long Shi, Taotao Wang, Jiaheng Wang, Wei Zhang
- SimClinician: A Multimodal Simulation Testbed for Reliable Psychologist AI Collaboration in Mental Health Diagnosis
- https://arxiv.org/abs/2512.08953
- arXiv:2512.08953v1 Announce Type: new
-Abstract: AI based mental health diagnosis is often judged by benchmark accuracy, yet in practice its value depends on how psychologists respond whether they accept, adjust, or reject AI suggestions. Mental health makes this especially challenging: decisions are continuous and shaped by cues in tone, pauses, word choice, and nonverbal behaviors of patients. Current research rarely examines how AI diagnosis interface design influences these choices, leaving little basis for reliable testing before live studies. We present SimClinician, an interactive simulation platform, to transform patient data into psychologist AI collaborative diagnosis. Contributions include: (1) a dashboard integrating audio, text, and gaze-expression patterns; (2) an avatar module rendering de-identified dynamics for analysis; (3) a decision layer that maps AI outputs to multimodal evidence, letting psychologists review AI reasoning, and enter a diagnosis. Tested on the E-DAIC corpus (276 clinical interviews, expanded to 480,000 simulations), SimClinician shows that a confirmation step raises acceptance by 23%, keeping escalations below 9%, and maintaining smooth interaction flow.
- oai:arXiv.org:2512.08953v1
- cs.HC
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ GoodSpeed: Optimizing Fair Goodput with Adaptive Speculative Decoding in Distributed Edge Inference
+ https://arxiv.org/abs/2512.09963
+ arXiv:2512.09963v1 Announce Type: new
+Abstract: Large language models (LLMs) have revolutionized natural language processing, yet their high computational demands pose significant challenges for real-time inference, especially in multi-user server speculative decoding and resource-constrained environments. Speculative decoding has emerged as a promising technique to accelerate LLM inference by using lightweight draft models to generate candidate tokens, which are subsequently verified by a larger, more accurate model. However, ensuring both high goodput (the effective rate of accepted tokens) and fairness across multiple draft servers cooperating with a central verification server remains an open challenge. This paper introduces GOODSPEED, a novel distributed inference framework that optimizes goodput through adaptive speculative decoding. GOODSPEED employs a central verification server that coordinates a set of heterogeneous draft servers, each running a small language model to generate speculative tokens. To manage resource allocation effectively, GOODSPEED incorporates a gradient scheduling algorithm that dynamically assigns token verification tasks, maximizing a logarithmic utility function to ensure proportional fairness across servers. By processing speculative outputs from all draft servers in parallel, the framework enables efficient collaboration between the verification server and distributed draft generators, streamlining both latency and throughput. Through rigorous fluid sample path analysis, we show that GOODSPEED converges to the optimal goodput allocation in steady-state conditions and maintains near-optimal performance with provably bounded error under dynamic workloads. These results demonstrate that GOODSPEED provides a scalable, fair and efficient solution for multi- in distributed LLM inference systems.
+ oai:arXiv.org:2512.09963v1
+ cs.DC
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Filippo Cenacchi, Longbing Cao, Deborah Richards
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Phuong Tran, Tzu-Hao Liu, Long Tan Le, Tung-Anh Nguyen, Van Quan La, Eason Yu, Han Shu, Choong Seon Hong, Nguyen H. Tran
- An Electrocardiogram Multi-task Benchmark with Comprehensive Evaluations and Insightful Findings
- https://arxiv.org/abs/2512.08954
- arXiv:2512.08954v1 Announce Type: new
-Abstract: In the process of patient diagnosis, non-invasive measurements are widely used due to their low risks and quick results. Electrocardiogram (ECG), as a non-invasive method to collect heart activities, is used to diagnose cardiac conditions. Analyzing the ECG typically requires domain expertise, which is a roadblock to applying artificial intelligence (AI) for healthcare. Through advances in self-supervised learning and foundation models, AI systems can now acquire and leverage domain knowledge without relying solely on human expertise. However, there is a lack of comprehensive analyses over the foundation models' performance on ECG. This study aims to answer the research question: "Are Foundation Models Useful for ECG Analysis?" To address it, we evaluate language/general time-series/ECG foundation models in comparison with time-series deep learning models. The experimental results show that general time-series/ECG foundation models achieve a top performance rate of 80%, indicating their effectiveness in ECG analysis. In-depth analyses and insights are provided along with comprehensive experimental results. This study highlights the limitations and potential of foundation models in advancing physiological waveform analysis. The data and code for this benchmark are publicly available at https://github.com/yuhaoxu99/ECGMultitasks-Benchmark.
- oai:arXiv.org:2512.08954v1
- cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Hybrid Finite Element and Least Squares Support Vector Regression Method for solving Partial Differential Equations with Legendre Polynomial Kernels
+ https://arxiv.org/abs/2512.09967
+ arXiv:2512.09967v1 Announce Type: new
+Abstract: A hybrid computational approach that integrates the finite element method (FEM) with least squares support vector regression (LSSVR) is introduced to solve partial differential equations. The method combines FEM's ability to provide the nodal solutions and LSSVR with higher-order Legendre polynomial kernels to deliver a closed-form analytical solution for interpolation between the nodes. The hybrid approach implements element-wise enhancement (super-resolution) of a given numerical solution, resulting in high resolution accuracy, while maintaining consistency with FEM nodal values at element boundaries. It can adapt any low-order FEM code to obtain high-order resolution by leveraging localized kernel refinement and parallel computation without additional implementation overhead. Therefore, effective inference/post-processing of the obtained super-resolved solution is possible. Evaluation results show that the hybrid FEM-LSSVR approach can achieve significantly higher accuracy compared to the base FEM solution. Comparable accuracy is a achieved when comparing the hybrid solution with a standalone FEM result with the same polynomial basis function order. The convergence studies were conducted for four elliptic boundary value problems to demonstrate the method's ability, accuracy, and reliability. Finally, the algorithm can be directly used as a plug-and-play method for super-resolving low-order numerical solvers and for super-resolution of expensive/under-resolved experimental data.
+ oai:arXiv.org:2512.09967v1
+ math.NA
+ cs.NA
+ physics.comp-ph
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yuhao Xu, Jiaying Lu, Sirui Ding, Defu Cao, Xiao Hu, Carl Yang
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Maryam Babaei, Peter Rucz, Manfred Kaltenbacher, Stefan Schoder
- LLM4XCE: Large Language Models for Extremely Large-Scale Massive MIMO Channel Estimation
- https://arxiv.org/abs/2512.08955
- arXiv:2512.08955v1 Announce Type: new
-Abstract: Extremely large-scale massive multiple-input multiple-output (XL-MIMO) is a key enabler for sixth-generation (6G) networks, offering massive spatial degrees of freedom. Despite these advantages, the coexistence of near-field and far-field effects in hybrid-field channels presents significant challenges for accurate estimation, where traditional methods often struggle to generalize effectively. In recent years, large language models (LLMs) have achieved impressive performance on downstream tasks via fine-tuning, aligning with the semantic communication shift toward task-oriented understanding over bit-level accuracy.
- Motivated by this, we propose Large Language Models for XL-MIMO Channel Estimation (LLM4XCE), a novel channel estimation framework that leverages the semantic modeling capabilities of large language models to recover essential spatial-channel representations for downstream tasks. The model integrates a carefully designed embedding module with Parallel Feature-Spatial Attention, enabling deep fusion of pilot features and spatial structures to construct a semantically rich representation for LLM input. By fine-tuning only the top two Transformer layers, our method effectively captures latent dependencies in the pilot data while ensuring high training efficiency. Extensive simulations demonstrate that LLM4XCE significantly outperforms existing state-of-the-art methods under hybrid-field conditions, achieving superior estimation accuracy and generalization performance.
- oai:arXiv.org:2512.08955v1
- cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Neuromorphic Eye Tracking for Low-Latency Pupil Detection
+ https://arxiv.org/abs/2512.09969
+ arXiv:2512.09969v1 Announce Type: new
+Abstract: Eye tracking for wearable systems demands low latency and milliwatt-level power, but conventional frame-based pipelines struggle with motion blur, high compute cost, and limited temporal resolution. Such capabilities are vital for enabling seamless and responsive interaction in emerging technologies like augmented reality (AR) and virtual reality (VR), where understanding user gaze is key to immersion and interface design. Neuromorphic sensors and spiking neural networks (SNNs) offer a promising alternative, yet existing SNN approaches are either too specialized or fall short of the performance of modern ANN architectures. This paper presents a neuromorphic version of top-performing event-based eye-tracking models, replacing their recurrent and attention modules with lightweight LIF layers and exploiting depth-wise separable convolutions to reduce model complexity. Our models obtain 3.7-4.1px mean error, approaching the accuracy of the application-specific neuromorphic system, Retina (3.24px), while reducing model size by 20x and theoretical compute by 850x, compared to the closest ANN variant of the proposed model. These efficient variants are projected to operate at an estimated 3.9-4.9 mW with 3 ms latency at 1 kHz. The present results indicate that high-performing event-based eye-tracking architectures can be redesigned as SNNs with substantial efficiency gains, while retaining accuracy suitable for real-time wearable deployment.
+ oai:arXiv.org:2512.09969v1
+ cs.CV
+ cs.NE
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Renbin Li, Shuangshuang Li, Peihao Dong
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Paul Hueber, Luca Peres, Florian Pitters, Alejandro Gloriani, Oliver Rhodes
- DW-KNN: A Transparent Local Classifier Integrating Distance Consistency and Neighbor Reliability
- https://arxiv.org/abs/2512.08956
- arXiv:2512.08956v1 Announce Type: new
-Abstract: K-Nearest Neighbors (KNN) is one of the most used ML classifiers. However, if we observe closely, standard distance-weighted KNN and relative variants assume all 'k' neighbors are equally reliable. In heterogeneous feature space, this becomes a limitation that hinders reliability in predicting true levels of the observation.
- We propose DW-KNN (Double Weighted KNN), a transparent and robust variant that integrates exponential distance with neighbor validity. This enables instance-level interpretability, suppresses noisy or mislabeled samples, and reduces hyperparameter sensitivity.
- Comprehensive evaluation on 9 data-sets helps to demonstrate that DW-KNN achieves 0.8988 accuracy on average. It ranks 2nd among six methods and within 0.2% of the best-performing Ensemble KNN. It also exhibits the lowest cross-validation variance (0.0156), indicating reliable prediction stability. Statistical significance test confirmed ($p < 0.001$) improvement over compactness weighted KNN (+4.09\%) and Kernel weighted KNN (+1.13\%). The method provides a simple yet effective alternative to complex adaptive schemes, particularly valuable for high-stakes applications requiring explainable predictions.
- oai:arXiv.org:2512.08956v1
+ BAMBO: Construct Ability and Efficiency LLM Pareto Set via Bayesian Adaptive Multi-objective Block-wise Optimization
+ https://arxiv.org/abs/2512.09972
+ arXiv:2512.09972v1 Announce Type: new
+Abstract: Constructing a Pareto set is pivotal for navigating the capability-efficiency trade-offs in Large Language Models (LLMs); however, existing merging techniques remain inadequate for this task. Coarse-grained, model-level methods yield only a sparse set of suboptimal solutions, while fine-grained, layer-wise approaches suffer from the "curse of dimensionality," rendering the search space computationally intractable. To resolve this dichotomy, we propose BAMBO (Bayesian Adaptive Multi-objective Block-wise Optimization), a novel framework that automatically constructs the LLM Pareto set. BAMBO renders the search tractable by introducing a Hybrid Optimal Block Partitioning strategy. Formulated as a 1D clustering problem, this strategy leverages a dynamic programming approach to optimally balance intra-block homogeneity and inter-block information distribution, thereby dramatically reducing dimensionality without sacrificing critical granularity. The entire process is automated within an evolutionary loop driven by the q-Expected Hypervolume Improvement (qEHVI) acquisition function. Experiments demonstrate that BAMBO discovers a superior and more comprehensive Pareto frontier than baselines, enabling agile model selection tailored to diverse operational constraints. Code is available at: https://github.com/xin8coder/BAMBO.
+ oai:arXiv.org:2512.09972v1cs.LG
- stat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CL
+ cs.NE
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Kumarjit Pathak, Karthik K, Sachin Madan, Jitin Kapila
+ Kesheng Chen, Wenjian Luo, Zhenqian Zhu, Yamin Hu, Yiya Xi
- LUMOS: Large User MOdels for User Behavior Prediction
- https://arxiv.org/abs/2512.08957
- arXiv:2512.08957v1 Announce Type: new
-Abstract: User behavior prediction at scale remains a critical challenge for online B2C platforms. Traditional approaches rely heavily on task-specific models and domain-specific feature engineering. This is time-consuming, computationally expensive, and requires domain expertise and therefore not scalable. We present LUMOS (Large User MOdel Series), a transformer-based architecture that eliminates task-specific models and manual feature engineering by learning multiple tasks jointly using only raw user activity data. LUMOS introduces a novel cross-attention mechanism that conditions predictions on future known events (e.g., holidays, sales, etc.), enabling the model to predict complex behaviour patterns like "how will upcoming holidays affect user engagement?" The architecture also employs multi-modal tokenization, combining user transactions, event context, and static user demographic attributes into rich representations processed through specialized embedding pathways.
- Through extensive experiments on a production dataset spanning 275 billion user activity tokens from 250 million users, we demonstrate that LUMOS achieves superior performance compared to traditional task-specific models. Across 5 tasks with established baselines, we achieve an average improvement of 0.025 in ROC-AUC for binary classification tasks and 4.6\% reduction in MAPE for regression tasks. Online A/B testing validates these improvements translate to measurable business impact with a 3.15\% increase in Daily Active Users.
- oai:arXiv.org:2512.08957v1
+ Enhancing Fake-News Detection with Node-Level Topological Features
+ https://arxiv.org/abs/2512.09974
+ arXiv:2512.09974v1 Announce Type: new
+Abstract: In recent years, the proliferation of misinformation and fake news has posed serious threats to individuals and society, spurring intense research into automated detection methods. Previous work showed that integrating content, user preferences, and propagation structure achieves strong performance, but leaves all graph-level representation learning entirely to the GNN, hiding any explicit topological cues. To close this gap, we introduce a lightweight enhancement: for each node, we append two classical graph-theoretic metrics, degree centrality and local clustering coefficient, to its original BERT and profile embeddings, thus explicitly flagging the roles of hub and community. In the UPFD Politifact subset, this simple modification boosts macro F1 from 0.7753 to 0.8344 over the original baseline. Our study not only demonstrates the practical value of explicit topology features in fake-news detection but also provides an interpretable, easily reproducible template for fusing graph metrics in other information-diffusion tasks.
+ oai:arXiv.org:2512.09974v1
+ cs.SIcs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Dhruv Nigam
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kaiyuan Xu
- EEG-Bench: A Benchmark for EEG Foundation Models in Clinical Applications
- https://arxiv.org/abs/2512.08959
- arXiv:2512.08959v1 Announce Type: new
-Abstract: We introduce a unified benchmarking framework focused on evaluating EEG-based foundation models in clinical applications. The benchmark spans 11 well-defined diagnostic tasks across 14 publicly available EEG datasets, including epilepsy, schizophrenia, Parkinson's disease, OCD, and mild traumatic brain injury. It features minimal preprocessing, standardized evaluation protocols, and enables side-by-side comparisons of classical baselines and modern foundation models. Our results show that while foundation models achieve strong performance in certain settings, simpler models often remain competitive, particularly under clinical distribution shifts. To facilitate reproducibility and adoption, we release all prepared data and code in an accessible and extensible format.
- oai:arXiv.org:2512.08959v1
- cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Generalised Arc Consistency via the Synchronised Product of Finite Automata wrt a Constraint
+ https://arxiv.org/abs/2512.09975
+ arXiv:2512.09975v1 Announce Type: new
+Abstract: Given an $m$ by $n$ matrix $V$ of domain variables $v_{i,j}$ (with $i$ from $1$ to $m$ and $j$ from $1$ to $n$), where each row $i$ must be accepted by a specified Deterministic Finite Automaton (DFA) $\mathcal{A}_i$ and each column $j$ must satisfy the same constraint $\texttt{ctr}$, we show how to use the \emph{synchronised product of DFAs wrt constraint} $\texttt{ctr}$ to obtain a Berge-acyclic decomposition ensuring Generalised Arc Consistency (GAC). Such decomposition consists of one \texttt{regular} and $n$ \texttt{table} constraints. We illustrate the effectiveness of this method by solving a hydrogen distribution problem, finding optimal solutions and proving optimality quickly.
+ oai:arXiv.org:2512.09975v1
+ cs.FL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Ard Kastrati, Josua B\"urki, Jonas Lauer, Cheng Xuan, Raffaele Iaquinto, Roger Wattenhofer
+ Nicolas Beldiceanu
- Resolving Conflicts in Lifelong Learning via Aligning Updates in Subspaces
- https://arxiv.org/abs/2512.08960
- arXiv:2512.08960v1 Announce Type: new
-Abstract: Low-Rank Adaptation (LoRA) enables efficient Continual Learning but often suffers from catastrophic forgetting due to destructive interference between tasks. Our analysis reveals that this degradation is primarily driven by antagonistic directional updates where new task gradients directly oppose the historical weight trajectory. To address this, we propose PS-LoRA (Parameter Stability LoRA), a framework designed to resolve conflicts by aligning updates within the optimization subspace. Our approach employs a dual-regularization objective that penalizes conflicting directions and constrains magnitude deviations to ensure consistency with prior knowledge. Additionally, we implement a magnitude-based merging strategy to consolidate sequential adapters into a robust representation without retraining. Experiments on NLP and Vision benchmarks show that PS-LoRA outperforms state-of-the-art methods by preserving the stability of learned representations while efficiently adapting to new domains.
- oai:arXiv.org:2512.08960v1
- cs.LG
+ Fuzzy Hierarchical Multiplex
+ https://arxiv.org/abs/2512.09976
+ arXiv:2512.09976v1 Announce Type: new
+Abstract: A new fuzzy optimization framework that extends FCM causality is proposed. This model utilizes the dynamics to map data into metrics and create a framework that examines logical implication and hierarchy of concepts using a multiplex. Moreover, this is a white-theoretical paper introducing the framework and analyzing the logic and math behind it. Upon this extension the main objectives and the orientation of this framework is expounded and exemplified; this framework is meant for service optimization of information transmission in service process design. Lastly, a thorough analysis of the FHM is included which is done following the logical steps in a simple and elegant manner.
+ oai:arXiv.org:2512.09976v1cs.AI
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yueer Zhou, Yichen Wu, Ying Wei
-
-
- SEA: Spectral Edge Attacks on Graph Neural Networks
- https://arxiv.org/abs/2512.08964
- arXiv:2512.08964v1 Announce Type: new
-Abstract: Graph Neural Networks (GNNs) achieve strong performance on graph-structured data, but are notoriously vulnerable to small, carefully crafted perturbations of the graph structure. Most existing structure-based attacks rely on gradient-based heuristics or local connectivity patterns, and treat edges as equally important candidates for manipulation. In this paper, we propose Spectral Edge Attacks (SEA), a new family of adversarial attacks that explicitly leverage spectral robustness evaluation to guide structural perturbations. Our key idea is to compute a spectral embedding that captures the most fragile directions of the input manifold and to use it to assign a robustness score to each edge or non-edge. Based on these scores, we introduce two complementary attack variants: (i) a Spade-guided deletion attack that removes the most spectrally robust edges, and (ii) a Spade-guided addition attack that inserts edges between nodes that are maximally incompatible in the fragile spectral space. Both attacks operate at the graph level, are model-aware but conceptually simple, and can be plugged into existing GNN architectures without requiring gradients. We describe the spectral formulation, the attack algorithms, and experiments on benchmarks.
- oai:arXiv.org:2512.08964v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.SY
+ eess.SY
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yongyu Wang
+ Alexis Kafantaris
- Financial Instruction Following Evaluation (FIFE)
- https://arxiv.org/abs/2512.08965
- arXiv:2512.08965v1 Announce Type: new
-Abstract: Language Models (LMs) struggle with complex, interdependent instructions, particularly in high-stakes domains like finance where precision is critical. We introduce FIFE, a novel, high-difficulty benchmark designed to assess LM instruction-following capabilities for financial analysis tasks. FIFE comprises 88 human-authored prompts and employs a verification system with chainable, verifiable constraints for fine-grained reward signals. We evaluate 53 models (proprietary, open-weight, open-source) in a zero-shot setting. Our key findings reveal a clear performance hierarchy: the top open-weight model (76.1 strict / 79.5 loose) surpasses the leading proprietary system (65.9 strict / 70.5 loose), while the best open-source models lag significantly (45.5 strict / 48.9 loose). However, even top-performing models struggle with FIFE's complex requirements, failing to achieve perfect compliance. We release our dataset and code as an open-source resource to promote research in Reinforcement Learning for the financial domain.
- oai:arXiv.org:2512.08965v1
- cs.LG
+ Exploring LLMs for Scientific Information Extraction Using The SciEx Framework
+ https://arxiv.org/abs/2512.10004
+ arXiv:2512.10004v1 Announce Type: new
+Abstract: Large language models (LLMs) are increasingly touted as powerful tools for automating scientific information extraction. However, existing methods and tools often struggle with the realities of scientific literature: long-context documents, multi-modal content, and reconciling varied and inconsistent fine-grained information across multiple publications into standardized formats. These challenges are further compounded when the desired data schema or extraction ontology changes rapidly, making it difficult to re-architect or fine-tune existing systems. We present SciEx, a modular and composable framework that decouples key components including PDF parsing, multi-modal retrieval, extraction, and aggregation. This design streamlines on-demand data extraction while enabling extensibility and flexible integration of new models, prompting strategies, and reasoning mechanisms. We evaluate SciEx on datasets spanning three scientific topics for its ability to extract fine-grained information accurately and consistently. Our findings provide practical insights into both the strengths and limitations of current LLM-based pipelines.
+ oai:arXiv.org:2512.10004v1cs.AIcs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Glenn Matlin, Siddharth, Anirudh JM, Aditya Shukla, Yahya Hassan, Sudheer Chava
-
-
- CluCERT: Certifying LLM Robustness via Clustering-Guided Denoising Smoothing
- https://arxiv.org/abs/2512.08967
- arXiv:2512.08967v1 Announce Type: new
-Abstract: Recent advancements in Large Language Models (LLMs) have led to their widespread adoption in daily applications. Despite their impressive capabilities, they remain vulnerable to adversarial attacks, as even minor meaning-preserving changes such as synonym substitutions can lead to incorrect predictions. As a result, certifying the robustness of LLMs against such adversarial prompts is of vital importance. Existing approaches focused on word deletion or simple denoising strategies to achieve robustness certification. However, these methods face two critical limitations: (1) they yield loose robustness bounds due to the lack of semantic validation for perturbed outputs and (2) they suffer from high computational costs due to repeated sampling. To address these limitations, we propose CluCERT, a novel framework for certifying LLM robustness via clustering-guided denoising smoothing. Specifically, to achieve tighter certified bounds, we introduce a semantic clustering filter that reduces noisy samples and retains meaningful perturbations, supported by theoretical analysis. Furthermore, we enhance computational efficiency through two mechanisms: a refine module that extracts core semantics, and a fast synonym substitution strategy that accelerates the denoising process. Finally, we conduct extensive experiments on various downstream tasks and jailbreak defense scenarios. Experimental results demonstrate that our method outperforms existing certified approaches in both robustness bounds and computational efficiency.
- oai:arXiv.org:2512.08967v1
- cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zixia Wang, Gaojie Jin, Jia Hu, Ronghui Mu
+ Sha Li, Ayush Sadekar, Nathan Self, Yiqi Su, Lars Andersland, Mira Chaplin, Annabel Zhang, Hyoju Yang, James B Henderson, Krista Wigginton, Linsey Marr, T. M. Murali, Naren Ramakrishnan
- StructuredDNA: A Bio-Physical Framework for Energy-Aware Transformer Routing
- https://arxiv.org/abs/2512.08968
- arXiv:2512.08968v1 Announce Type: new
-Abstract: The rapid scaling of large computational models has led to a critical increase in energy and compute costs. Inspired by biological systems where structure and function emerge from low-energy configurations, we introduce StructuredDNA, a sparse architecture framework for modular, energy-aware Transformer routing. StructuredDNA replaces dense Mixture-of-Experts routing with a bio-physical, energy-guided routing layer based on semantic energy minimization. Inputs are dynamically grouped into semantic codons, and routing selects a single expert by minimizing a global energy functional that combines cohesion, uncertainty, and computational cost.
- We validate StructuredDNA on both specialized (BioASQ) and open-domain benchmarks (WikiText-103). On BioASQ (K = 50), we achieve a 97.7% reduction in Energy Utilization Density (EUD) and a Semantic Stability Index (SSI) of 0.998. We further demonstrate a Semantic Scaling Law on WikiText-103, showing that the architecture generalizes to open domains by scaling expert granularity (K = 2048) while maintaining more than 99% energy efficiency. StructuredDNA thus establishes a robust, domain-agnostic paradigm for future sparse computational frameworks.
- StructuredDNA provides an explicit link between bio-physical principles and sparse expert routing in Transformer architectures, and points toward future energy-aware, modular, and scalable computational systems. We discuss limitations of this proof-of-concept study and outline directions for scaling the approach to larger models, datasets, and hardware platforms. The StructuredDNA implementation is available at https://github.com/InnoDeep-repos/StructuredDNA .
- oai:arXiv.org:2512.08968v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Spatial Spiking Neural Networks Enable Efficient and Robust Temporal Computation
+ https://arxiv.org/abs/2512.10011
+ arXiv:2512.10011v1 Announce Type: new
+Abstract: The efficiency of modern machine intelligence depends on high accuracy with minimal computational cost. In spiking neural networks (SNNs), synaptic delays are crucial for encoding temporal structure, yet existing models treat them as fully trainable, unconstrained parameters, leading to large memory footprints, higher computational demand, and a departure from biological plausibility. In the brain, however, delays arise from physical distances between neurons embedded in space. Building on this principle, we introduce Spatial Spiking Neural Networks (SpSNNs), a framework in which neurons learn coordinates in a finite-dimensional Euclidean space and delays emerge from inter-neuron distances. This replaces per-synapse delay learning with position learning, substantially reducing parameter count while retaining temporal expressiveness. Across the Yin-Yang and Spiking Heidelberg Digits benchmarks, SpSNNs outperform SNNs with unconstrained delays despite using far fewer parameters. Performance consistently peaks in 2D and 3D networks rather than infinite-dimensional delay spaces, revealing a geometric regularization effect. Moreover, dynamically sparsified SpSNNs maintain full accuracy even at 90% sparsity, matching standard delay-trained SNNs while using up to 18x fewer parameters. Because learned spatial layouts map naturally onto hardware geometries, SpSNNs lend themselves to efficient neuromorphic implementation. Methodologically, SpSNNs compute exact delay gradients via automatic differentiation with custom-derived rules, supporting arbitrary neuron models and architectures. Altogether, SpSNNs provide a principled platform for exploring spatial structure in temporal computation and offer a hardware-friendly substrate for scalable, energy-efficient neuromorphic intelligence.
+ oai:arXiv.org:2512.10011v1
+ cs.NE
+ q-bio.NC
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Mustapha Hamdi
+ Lennart P. L. Landsmeer, Amirreza Movahedin, Mario Negrello, Said Hamdioui, Christos Strydis
- Learning Robust Representations for Malicious Content Detection via Contrastive Sampling and Uncertainty Estimation
- https://arxiv.org/abs/2512.08969
- arXiv:2512.08969v1 Announce Type: new
-Abstract: We propose the Uncertainty Contrastive Framework (UCF), a Positive-Unlabeled (PU) representation learning framework that integrates uncertainty-aware contrastive loss, adaptive temperature scaling, and a self-attention-guided LSTM encoder to improve classification under noisy and imbalanced conditions. UCF dynamically adjusts contrastive weighting based on sample confidence, stabilizes training using positive anchors, and adapts temperature parameters to batch-level variability. Applied to malicious content classification, UCF-generated embeddings enable multiple traditional classifiers to achieve more than 93.38% accuracy, precision above 0.93, and near-perfect recall, with minimal false negatives and competitive ROC-AUC scores. Visual analyses confirm clear separation between positive and unlabeled instances, highlighting the framework's ability to produce calibrated, discriminative embeddings. These results position UCF as a robust and scalable solution for PU learning in high-stakes domains such as cybersecurity and biomedical text mining.
- oai:arXiv.org:2512.08969v1
+ Latent Action World Models for Control with Unlabeled Trajectories
+ https://arxiv.org/abs/2512.10016
+ arXiv:2512.10016v1 Announce Type: new
+Abstract: Inspired by how humans combine direct interaction with action-free experience (e.g., videos), we study world models that learn from heterogeneous data. Standard world models typically rely on action-conditioned trajectories, which limits effectiveness when action labels are scarce. We introduce a family of latent-action world models that jointly use action-conditioned and action-free data by learning a shared latent action representation. This latent space aligns observed control signals with actions inferred from passive observations, enabling a single dynamics model to train on large-scale unlabeled trajectories while requiring only a small set of action-labeled ones. We use the latent-action world model to learn a latent-action policy through offline reinforcement learning (RL), thereby bridging two traditionally separate domains: offline RL, which typically relies on action-conditioned data, and action-free training, which is rarely used with subsequent RL. On the DeepMind Control Suite, our approach achieves strong performance while using about an order of magnitude fewer action-labeled samples than purely action-conditioned baselines. These results show that latent actions enable training on both passive and interactive data, which makes world models learn more efficiently.
+ oai:arXiv.org:2512.10016v1cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Elias Hossain, Umesh Biswas, Charan Gudla, Sai Phani Parsa
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Marvin Alles, Xingyuan Zhang, Patrick van der Smagt, Philip Becker-Ehmck
- Enhancing Automatic Speech Recognition Through Integrated Noise Detection Architecture
- https://arxiv.org/abs/2512.08973
- arXiv:2512.08973v1 Announce Type: new
-Abstract: This research presents a novel approach to enhancing automatic speech recognition systems by integrating noise detection capabilities directly into the recognition architecture. Building upon the wav2vec2 framework, the proposed method incorporates a dedicated noise identification module that operates concurrently with speech transcription. Experimental validation using publicly available speech and environmental audio datasets demonstrates substantial improvements in transcription quality and noise discrimination. The enhanced system achieves superior performance in word error rate, character error rate, and noise detection accuracy compared to conventional architectures. Results indicate that joint optimization of transcription and noise classification objectives yields more reliable speech recognition in challenging acoustic conditions.
- oai:arXiv.org:2512.08973v1
- cs.SD
- cs.AI
- cs.LG
- eess.AS
- Thu, 11 Dec 2025 00:00:00 -0500
+ Complexity of Linear Subsequences of $k$-Automatic Sequences
+ https://arxiv.org/abs/2512.10017
+ arXiv:2512.10017v1 Announce Type: new
+Abstract: We construct automata with input(s) in base $k$ recognizing some basic relations and study their number of states. We also consider some basic operations on $k$-automatic sequences and discuss their state complexity. We find a relationship between subword complexity of the interior sequence $(h'(i))_{i \geq 0}$ and state complexity of the linear subsequence $(h(ni+c))_{i \geq 0}$. We resolve a recent question of Zantema and Bosma about linear subsequences of $k$-automatic sequences with input in most-significant-digit-first format. We also discuss the state complexity and runtime complexity of using a reasonable interpretation of B\"uchi arithmetic to actually construct some of the studied automata recognizing relations or carrying out operations on automatic sequences.
+ oai:arXiv.org:2512.10017v1
+ cs.FL
+ cs.DM
+ math.CO
+ math.NT
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Karamvir Singh
+ http://creativecommons.org/licenses/by/4.0/
+ Delaram Moradi, Narad Rampersad, Jeffrey Shallit
- Peek-a-Boo Reasoning: Contrastive Region Masking in MLLMs
- https://arxiv.org/abs/2512.08976
- arXiv:2512.08976v1 Announce Type: new
-Abstract: We introduce Contrastive Region Masking (CRM), a training free diagnostic that reveals how multimodal large language models (MLLMs) depend on specific visual regions at each step of chain-of-thought (CoT) reasoning. Unlike prior approaches limited to final answers or attention maps, CRM provides causal, step-level attri- bution by systematically masking annotated regions and contrasting the resulting reasoning traces with unmasked baselines. Applied to datasets such as VisArgs, CRM reveals distinct failure modes: some models preserve reasoning structure, but hallucinate when evidence is missing, while others ground tightly to visual cues yet collapse under perturbations. By shifting the evaluation from correctness of an- swers to faithfulness of reasoning, CRM reframes visual benchmarks as diagnostic tools, highlighting the need for multimodal evaluation frameworks that measure not just performance, but also robustness and fidelity of reasoning.
- oai:arXiv.org:2512.08976v1
- cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ A Comparative Analysis of zk-SNARKs and zk-STARKs: Theory and Practice
+ https://arxiv.org/abs/2512.10020
+ arXiv:2512.10020v1 Announce Type: new
+Abstract: Zero-knowledge proofs (ZKPs) are central to secure and privacy-preserving computation, with zk-SNARKs and zk-STARKs emerging as leading frameworks offering distinct trade-offs in efficiency, scalability, and trust assumptions. While their theoretical foundations are well studied, practical performance under real-world conditions remains less understood.
+ In this work, we present a systematic, implementation-level comparison of zk-SNARKs (Groth16) and zk-STARKs using publicly available reference implementations on a consumer-grade ARM platform. Our empirical evaluation covers proof generation time, verification latency, proof size, and CPU profiling. Results show that zk-SNARKs generate proofs 68x faster with 123x smaller proof size, but verify slower and require trusted setup, whereas zk-STARKs, despite larger proofs and slower generation, verify faster and remain transparent and post-quantum secure. Profiling further identifies distinct computational bottlenecks across the two systems, underscoring how execution models and implementation details significantly affect real-world performance. These findings provide actionable insights for developers, protocol designers, and researchers in selecting and optimizing proof systems for applications such as privacy-preserving transactions, verifiable computation, and scalable rollups.
+ oai:arXiv.org:2512.10020v1
+ cs.CR
+ cs.DC
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Isha Chaturvedi, Anjana Nair, Yushen Li, Adhitya Rajendra Kumar, Kevin Zhu, Sunishchal Dev, Ashwinee Panda, Vasu Sharma
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ayush Nainwal, Atharva Kamble, Nitin Awathare
- Hacia una moderna "republica de las ideas" via un nuevo ecosistema de comunicacion cientifica (Toward a modern "Republic of Ideas" via a new Ecosystem of Scientific Communication)
- https://arxiv.org/abs/2512.08977
- arXiv:2512.08977v1 Announce Type: new
-Abstract: The contemporary academic ecosystem, heir to the Enlightenment's "Republic of Letters," finds itself in a state of profound and unsustainable crisis. That order, based on the free flow of correspondence and the disinterested pursuit of knowledge, has been supplanted by a system teetering under the weight of its own contradictions. This work embarks on a fundamental redesign to articulate an innovative and coherent framework for scientific communication. To this end, four distinct but complementary schools of thought are synthesized: From Ordoliberalism, we take the rigor of designing an "economic constitution" that prevents the concentration of power and fosters fair competition. From Humanistic Economics, we extract the telos, or normative purpose (human flourishing and shared prosperity). From Digital Humanism, we derive the technological ethos, ensuring that the infrastructure serves human dignity. Finally, from Decentralized Science (DeSci), we take the set of architectural tools (smart contracts, DAOs, tokens) capable of building this new order from the ground up. The narrative arc is deliberate: Part 1 offers an anatomy of decay, using the Uddin demand as a scalpel to dissect the economic, institutional, and epistemic pathologies of the current system. Part 2 articulates the philosophical constitution of the new republic. Part 3 details the architectural blueprint and technological infrastructure of this new order. Part 4 subjects this design to rigorous stress tests to forge its resilience and antifragility. Part 5 charts a strategic roadmap for the transition, a plausible path to navigate from the status quo to full community sovereignty. Finally, Part 6 brings the book full circle, returning to the foundational vision. This is a call to action to build the New Republic of Ideas that the pursuit of truth deserves in the 21st century.
- oai:arXiv.org:2512.08977v1
- cs.OH
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Enrique Lopez-Gonzalez
-
-
- Institutional AI Sovereignty Through Gateway Architecture: Implementation Report from Fontys ICT
- https://arxiv.org/abs/2512.08978
- arXiv:2512.08978v1 Announce Type: new
-Abstract: To counter fragmented, high-risk adoption of commercial AI tools, we built and ran an institutional AI platform in a six-month, 300-user pilot, showing that a university of applied sciences can offer advanced AI with fair access, transparent risks, controlled costs, and alignment with European law.
- Commercial AI subscriptions create unequal access and compliance risks through opaque processing and non-EU hosting, yet banning them is neither realistic nor useful. Institutions need a way to provide powerful AI in a sovereign, accountable form.
- Our solution is a governed gateway platform with three layers: a ChatGPT-style frontend linked to institutional identity that makes model choice explicit; a gateway core enforcing policy, controlling access and budgets, and routing traffic to EU infrastructure by default; and a provider layer wrapping commercial and open-source models in institutional model cards that consolidate vendor documentation into one governance interface.
- The pilot ran reliably with no privacy incidents and strong adoption, enabling EU-default routing, managed spending, and transparent model choices. Only the gateway pattern combines model diversity and rapid innovation with institutional control.
- The central insight: AI is not a support function but strategy, demanding dedicated leadership. Sustainable operation requires governance beyond traditional boundaries. We recommend establishing a formal AI Officer role combining technical literacy, governance authority, and educational responsibility. Without it, AI decisions stay ad-hoc and institutional exposure grows. With it, higher-education institutions can realistically operate their own multi-provider AI platform, provided they govern AI as seriously as they teach it.
- oai:arXiv.org:2512.08978v1
- cs.CY
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ A Mass Preserving Numerical Scheme for Kinetic Equations that Model Social Phenomena
+ https://arxiv.org/abs/2512.10027
+ arXiv:2512.10027v1 Announce Type: new
+Abstract: In recent years, kinetic equations have been used to model many social phenomena. A key feature of these models is that transition rate kernels involve Dirac delta functions, which capture sudden, discontinuous state changes. Here, we study kinetic equations with transition rates of the form $$ T(x,y,u) = \delta_{\phi(x,y) - u}. $$ We establish the global existence and uniqueness of solutions for these systems and introduce a fully deterministic scheme, the \emph{Mass Preserving Collocation Method}, which enables efficient, high fidelity simulation of models with multiple subsystems. We validate the accuracy, efficiency, and consistency of the solver on models with up to five subsystems, and compare its performance against two state-of-the-art agent-based methods: Tau-leaping and hybrid methods. Our scheme resolves subsystem distributions captured by these stochastic approaches while preserving mass numerically, requiring significantly less computational time and resources, and avoiding variability and hyperparameter tuning characteristic of these methods.
+ oai:arXiv.org:2512.10027v1
+ math.NA
+ cs.NA
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Ruud Huijts, Koen Suilen
+ Yassin Bahid, Eduardo Corona, Nancy Rodriguez
- What Happens When: Learning Temporal Orders of Events in Videos
- https://arxiv.org/abs/2512.08979
- arXiv:2512.08979v1 Announce Type: new
-Abstract: Video Large Multimodal Models (VLMMs) have shown impressive performance in video understanding, yet their ability to accurately capture the temporal order of multiple events remains underexplored. We interestingly observe that, even when video frames are scrambled, models perform very well on the existing benchmarks by comprehensive experiments. This implies that VLMMs may not necessarily rely on accurate sequential processing of visual events, but instead depend on prior knowledge of typical scenarios to answer the question. To benchmark temporal understanding capabilities in VLMMs, we propose VECTOR, designed to explicitly assess a model's ability to identify the temporal order of events. On this benchmark, we observe that various VLMMs often fail to understand the orders of events. To address this, we propose MECOT (Multi-Event instruction fine-tuning with Chain-of-Thought), which (1) trains models on detailed, event-by-event video descriptions and (2) using chain-of-thought prompts at inference to enhance temporal awareness. MECOT outperforms prior arts on VECTOR as well as improving performance on existing video benchmarks, implying effectiveness of temporal understanding. We release our code, model and datasets.
- oai:arXiv.org:2512.08979v1
- cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Malicious GenAI Chrome Extensions: Unpacking Data Exfiltration and Malicious Behaviours
+ https://arxiv.org/abs/2512.10029
+ arXiv:2512.10029v1 Announce Type: new
+Abstract: The rapid proliferation of AI and GenAI tools has extended to the Chrome Web Store. Cybercriminals are exploiting this trend, deploying malicious Chrome extensions posing as AI tools or impersonating popular GenAI models to target users. These extensions often appear legitimate while secretly exfiltrating sensitive data or redirecting users web traffic to attacker-controlled domains.
+ To examine the impact of this trend on the browser extension ecosystem, we curated a dataset of 5,551 AI-themed extensions released over a nine-month period to the Chrome Web Store. Using a multi-signal detection methodology that combines manifest analysis, domain reputation, and runtime network behavior, supplemented with human review, we identified 154 previously undetected malicious Chrome extensions. Together with extensions known from public threat research disclosures, this resulted in a final set of 341 malicious extensions for analysis. Of these, 29 were GenAI-related, forming the focus of our in-depth analysis and disclosure.
+ We deconstruct representative GenAI cases, including Supersonic AI, DeepSeek AI | Free AI Assistant, and Perplexity Search, to illustrate attacker techniques such as Adversary-in-the-Browser, impersonation, bait-and-switch updates, query hijacking, and redirection. Our findings show that threat actors are leveraging GenAI trends and exploiting browser extension APIs and settings for malicious purposes. This demonstrates that the browser extension threat landscape is directly evolving alongside the rapid adoption of GenAI technologies.
+ oai:arXiv.org:2512.10029v1
+ cs.CR
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Daechul Ahn, Yura Choi, Hyeonbeom Choi, Seongwon Cho, San Kim, Jonghyun Choi
+ Shresta B. Seetharam, Mohamed Nabeel, William Melicher
- Training Multi-Image Vision Agents via End2End Reinforcement Learning
- https://arxiv.org/abs/2512.08980
- arXiv:2512.08980v1 Announce Type: new
-Abstract: Recent VLM-based agents aim to replicate OpenAI O3's ``thinking with images" via tool use, but most open-source methods limit input to a single image, falling short on real-world multi-image QA tasks. To address this, we propose IMAgent, an open-source vision agent trained via end-to-end reinforcement learning dedicated for complex multi-image tasks. By leveraging a multi-agent system, we generate challenging and visually-rich multi-image QA pairs to fully activate the tool-use potential of the base VLM. Through manual verification, we obtain MIFG-QA, comprising 10k samples for training and evaluation. With deeper reasoning steps, VLMs may increasingly ignore visual inputs. We therefore develop two specialized tools for visual reflection and confirmation, allowing the model to proactively reallocate its attention to image content during inference. Benefiting from our well-designed action-trajectory two-level mask strategy, IMAgent achieves stable tool use behavior via pure RL training without requiring costly supervised fine-tuning data. Extensive experiments demonstrate that IMAgent maintains strong performance on existing single-image benchmarks while achieving substantial improvements on our proposed multi-image dataset, with our analysis providing actionable insights for the research community. Codes and data will be released soon.
- oai:arXiv.org:2512.08980v1
+ ABBSPO: Adaptive Bounding Box Scaling and Symmetric Prior based Orientation Prediction for Detecting Aerial Image Objects
+ https://arxiv.org/abs/2512.10031
+ arXiv:2512.10031v1 Announce Type: new
+Abstract: Weakly supervised oriented object detection (WS-OOD) has gained attention as a cost-effective alternative to fully supervised methods, providing both efficiency and high accuracy. Among weakly supervised approaches, horizontal bounding box (HBox)-supervised OOD stands out for its ability to directly leverage existing HBox annotations while achieving the highest accuracy under weak supervision settings. This paper introduces adaptive bounding box scaling and symmetry-prior-based orientation prediction, called ABBSPO, a framework for WS-OOD. Our ABBSPO addresses limitations of previous HBox-supervised OOD methods, which compare ground truth (GT) HBoxes directly with the minimum circumscribed rectangles of predicted RBoxes, often leading to inaccurate scale estimation. To overcome this, we propose: (i) Adaptive Bounding Box Scaling (ABBS), which appropriately scales GT HBoxes to optimize for the size of each predicted RBox, ensuring more accurate scale prediction; and (ii) a Symmetric Prior Angle (SPA) loss that exploits inherent symmetry of aerial objects for self-supervised learning, resolving issues in previous methods where learning collapses when predictions for all three augmented views (original, rotated, and flipped) are consistently incorrect. Extensive experimental results demonstrate that ABBSPO achieves state-of-the-art performance, outperforming existing methods.
+ oai:arXiv.org:2512.10031v1cs.CVcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chengqi Dong, Chuhuai Yue, Hang He, Rongge Mao, Fenghe Tang, S Kevin Zhou, Zekun Xu, Xiaohan Wang, Jiajun Chai, Wei Lin, Guojun Yin
-
-
- Mitigating Bias with Words: Inducing Demographic Ambiguity in Face Recognition Templates by Text Encoding
- https://arxiv.org/abs/2512.08981
- arXiv:2512.08981v1 Announce Type: new
-Abstract: Face recognition (FR) systems are often prone to demographic biases, partially due to the entanglement of demographic-specific information with identity-relevant features in facial embeddings. This bias is extremely critical in large multicultural cities, especially where biometrics play a major role in smart city infrastructure. The entanglement can cause demographic attributes to overshadow identity cues in the embedding space, resulting in disparities in verification performance across different demographic groups. To address this issue, we propose a novel strategy, Unified Text-Image Embedding (UTIE), which aims to induce demographic ambiguity in face embeddings by enriching them with information related to other demographic groups. This encourages face embeddings to emphasize identity-relevant features and thus promotes fairer verification performance across groups. UTIE leverages the zero-shot capabilities and cross-modal semantic alignment of Vision-Language Models (VLMs). Given that VLMs are naturally trained to align visual and textual representations, we enrich the facial embeddings of each demographic group with text-derived demographic features extracted from other demographic groups. This encourages a more neutral representation in terms of demographic attributes. We evaluate UTIE using three VLMs, CLIP, OpenCLIP, and SigLIP, on two widely used benchmarks, RFW and BFW, designed to assess bias in FR. Experimental results show that UTIE consistently reduces bias metrics while maintaining, or even improving in several cases, the face verification accuracy.
- oai:arXiv.org:2512.08981v1
- cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Tahar Chettaoui, Naser Damer, Fadi Boutros
+ Woojin Lee, Hyugjae Chang, Jaeho Moon, Jaehyup Lee, Munchurl Kim
- Consist-Retinex: One-Step Noise-Emphasized Consistency Training Accelerates High-Quality Retinex Enhancement
- https://arxiv.org/abs/2512.08982
- arXiv:2512.08982v1 Announce Type: new
-Abstract: Diffusion models have achieved remarkable success in low-light image enhancement through Retinex-based decomposition, yet their requirement for hundreds of iterative sampling steps severely limits practical deployment. While recent consistency models offer promising one-step generation for \textit{unconditional synthesis}, their application to \textit{conditional enhancement} remains unexplored. We present \textbf{Consist-Retinex}, the first framework adapting consistency modeling to Retinex-based low-light enhancement. Our key insight is that conditional enhancement requires fundamentally different training dynamics than unconditional generation standard consistency training focuses on low-noise regions near the data manifold, while conditional mapping critically depends on large-noise regimes that bridge degraded inputs to enhanced outputs. We introduce two core innovations: (1) a \textbf{dual-objective consistency loss} combining temporal consistency with ground-truth alignment under randomized time sampling, providing full-spectrum supervision for stable convergence; and (2) an \textbf{adaptive noise-emphasized sampling strategy} that prioritizes training on large-noise regions essential for one-step conditional generation. On VE-LOL-L, Consist-Retinex achieves \textbf{state-of-the-art performance with single-step sampling} (\textbf{PSNR: 25.51 vs. 23.41, FID: 44.73 vs. 49.59} compared to Diff-Retinex++), while requiring only \textbf{1/8 of the training budget} relative to the 1000-step Diff-Retinex baseline.
- oai:arXiv.org:2512.08982v1
- cs.CV
+ Cluster-Dags as Powerful Background Knowledge For Causal Discovery
+ https://arxiv.org/abs/2512.10032
+ arXiv:2512.10032v1 Announce Type: new
+Abstract: Finding cause-effect relationships is of key importance in science. Causal discovery aims to recover a graph from data that succinctly describes these cause-effect relationships. However, current methods face several challenges, especially when dealing with high-dimensional data and complex dependencies. Incorporating prior knowledge about the system can aid causal discovery. In this work, we leverage Cluster-DAGs as a prior knowledge framework to warm-start causal discovery. We show that Cluster-DAGs offer greater flexibility than existing approaches based on tiered background knowledge and introduce two modified constraint-based algorithms, Cluster-PC and Cluster-FCI, for causal discovery in the fully and partially observed setting, respectively. Empirical evaluation on simulated data demonstrates that Cluster-PC and Cluster-FCI outperform their respective baselines without prior knowledge.
+ oai:arXiv.org:2512.10032v1
+ cs.LGcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Jian Xu, Wei Chen, Shigui Li, Delu Zeng, John Paisley, Qibin Zhao
+ Jan Marco Ruiz de Vargas, Kirtan Padh, Niki Kilbertus
- HSCP: A Two-Stage Spectral Clustering Framework for Resource-Constrained UAV Identification
- https://arxiv.org/abs/2512.08983
- arXiv:2512.08983v1 Announce Type: new
-Abstract: With the rapid development of Unmanned Aerial Vehicles (UAVs) and the increasing complexity of low-altitude security threats, traditional UAV identification methods struggle to extract reliable signal features and meet real-time requirements in complex environments. Recently, deep learning based Radio Frequency Fingerprint Identification (RFFI) approaches have greatly improved recognition accuracy. However, their large model sizes and high computational demands hinder deployment on resource-constrained edge devices. While model pruning offers a general solution for complexity reduction, existing weight, channel, and layer pruning techniques struggle to concurrently optimize compression rate, hardware acceleration, and recognition accuracy. To this end, in this paper, we introduce HSCP, a Hierarchical Spectral Clustering Pruning framework that combines layer pruning with channel pruning to achieve extreme compression, high performance, and efficient inference. In the first stage, HSCP employs spectral clustering guided by Centered Kernel Alignment (CKA) to identify and remove redundant layers. Subsequently, the same strategy is applied to the channel dimension to eliminate a finer redundancy. To ensure robustness, we further employ a noise-robust fine-tuning strategy. Experiments on the UAV-M100 benchmark demonstrate that HSCP outperforms existing channel and layer pruning methods. Specifically, HSCP achieves $86.39\%$ parameter reduction and $84.44\%$ FLOPs reduction on ResNet18 while improving accuracy by $1.49\%$ compared to the unpruned baseline, and maintains superior robustness even in low signal-to-noise ratio environments.
- oai:arXiv.org:2512.08983v1
- cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Robust Gradient Descent via Heavy-Ball Momentum with Predictive Extrapolation
+ https://arxiv.org/abs/2512.10033
+ arXiv:2512.10033v1 Announce Type: new
+Abstract: Accelerated gradient methods like Nesterov's Accelerated Gradient (NAG) achieve faster convergence on well-conditioned problems but often diverge on ill-conditioned or non-convex landscapes due to aggressive momentum accumulation. We propose Heavy-Ball Synthetic Gradient Extrapolation (HB-SGE), a robust first-order method that combines heavy-ball momentum with predictive gradient extrapolation. Unlike classical momentum methods that accumulate historical gradients, HB-SGE estimates future gradient directions using local Taylor approximations, providing adaptive acceleration while maintaining stability. We prove convergence guarantees for strongly convex functions and demonstrate empirically that HB-SGE prevents divergence on problems where NAG and standard momentum fail. On ill-conditioned quadratics (condition number $\kappa=50$), HB-SGE converges in 119 iterations while both SGD and NAG diverge. On the non-convex Rosenbrock function, HB-SGE achieves convergence in 2,718 iterations where classical momentum methods diverge within 10 steps. While NAG remains faster on well-conditioned problems, HB-SGE provides a robust alternative with speedup over SGD across diverse landscapes, requiring only $O(d)$ memory overhead and the same hyperparameters as standard momentum.
+ oai:arXiv.org:2512.10033v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Maoyu Wang, Yao Lu, Bo Zhou, Zhuangzhi Chen, Yun Lin, Qi Xuan, Guan Gui
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Sarwan Ali
- RAG-HAR: Retrieval Augmented Generation-based Human Activity Recognition
- https://arxiv.org/abs/2512.08984
- arXiv:2512.08984v1 Announce Type: new
-Abstract: Human Activity Recognition (HAR) underpins applications in healthcare, rehabilitation, fitness tracking, and smart environments, yet existing deep learning approaches demand dataset-specific training, large labeled corpora, and significant computational resources.We introduce RAG-HAR, a training-free retrieval-augmented framework that leverages large language models (LLMs) for HAR. RAG-HAR computes lightweight statistical descriptors, retrieves semantically similar samples from a vector database, and uses this contextual evidence to make LLM-based activity identification. We further enhance RAG-HAR by first applying prompt optimization and introducing an LLM-based activity descriptor that generates context-enriched vector databases for delivering accurate and highly relevant contextual information. Along with these mechanisms, RAG-HAR achieves state-of-the-art performance across six diverse HAR benchmarks. Most importantly, RAG-HAR attains these improvements without requiring model training or fine-tuning, emphasizing its robustness and practical applicability. RAG-HAR moves beyond known behaviors, enabling the recognition and meaningful labelling of multiple unseen human activities.
- oai:arXiv.org:2512.08984v1
- cs.CV
+ DynaMate: An Autonomous Agent for Protein-Ligand Molecular Dynamics Simulations
+ https://arxiv.org/abs/2512.10034
+ arXiv:2512.10034v1 Announce Type: new
+Abstract: Force field-based molecular dynamics (MD) simulations are indispensable for probing the structure, dynamics, and functions of biomolecular systems, including proteins and protein-ligand complexes. Despite their broad utility in drug discovery and protein engineering, the technical complexity of MD setup, encompassing parameterization, input preparation, and software configuration, remains a major barrier for widespread and efficient usage. Agentic LLMs have demonstrated their capacity to autonomously execute multi-step scientific processes, and to date, they have not successfully been used to automate protein-ligand MD workflows. Here, we present DynaMate, a modular multi-agent framework that autonomously designs and executes complete MD workflows for both protein and protein-ligand systems, and offers free energy binding affinity calculations with the MM/PB(GB)SA method. The framework integrates dynamic tool use, web search, PaperQA, and a self-correcting behavior. DynaMate comprises three specialized modules, interacting to plan the experiment, perform the simulation, and analyze the results. We evaluated its performance across twelve benchmark systems of varying complexity, assessing success rate, efficiency, and adaptability. DynaMate reliably performed full MD simulations, corrected runtime errors through iterative reasoning, and produced meaningful analyses of protein-ligand interactions. This automated framework paves the way toward standardized, scalable, and time-efficient molecular modeling pipelines for future biomolecular and drug design applications.
+ oai:arXiv.org:2512.10034v1cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CE
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Nirhoshan Sivaroopan, Hansi Karunarathna, Chamara Madarasingha, Anura Jayasumana, Kanchana Thilakarathna
+ Salom\'e Guilbert, Cassandra Masschelein, Jeremy Goumaz, Bohdan Naida, Philippe Schwaller
- An Efficient Test-Time Scaling Approach for Image Generation
- https://arxiv.org/abs/2512.08985
- arXiv:2512.08985v1 Announce Type: new
-Abstract: Image generation has emerged as a mainstream application of large generative AI models. Just as test-time compute and reasoning have helped language models improve their capabilities, similar benefits have also been observed with image generation models. In particular, searching over noise samples for diffusion and flow models has shown to scale well with test-time compute. While recent works have explored allocating non-uniform inference-compute budgets across different denoising steps, they rely on greedy algorithms and allocate the compute budget ineffectively. In this work, we study this problem and propose solutions to fix it. We propose the Verifier-Threshold method which automatically reallocates test-time compute and delivers substantial efficiency improvements. For the same performance on the GenEval benchmark, we achieve a 2-4x reduction in computational time over the state-of-the-art method.
- oai:arXiv.org:2512.08985v1
+ Diffusion Is Your Friend in Show, Suggest and Tell
+ https://arxiv.org/abs/2512.10038
+ arXiv:2512.10038v1 Announce Type: new
+Abstract: Diffusion Denoising models demonstrated impressive results across generative Computer Vision tasks, but they still fail to outperform standard autoregressive solutions in the discrete domain, and only match them at best. In this work, we propose a different paradigm by adopting diffusion models to provide suggestions to the autoregressive generation rather than replacing them. By doing so, we combine the bidirectional and refining capabilities of the former with the strong linguistic structure provided by the latter. To showcase its effectiveness, we present Show, Suggest and Tell (SST), which achieves State-of-the-Art results on COCO, among models in a similar setting. In particular, SST achieves 125.1 CIDEr-D on the COCO dataset without Reinforcement Learning, outperforming both autoregressive and diffusion model State-of-the-Art results by 1.5 and 2.5 points. On top of the strong results, we performed extensive experiments to validate the proposal and analyze the impact of the suggestion module. Results demonstrate a positive correlation between suggestion and caption quality, overall indicating a currently underexplored but promising research direction. Code will be available at: https://github.com/jchenghu/show\_suggest\_tell.
+ oai:arXiv.org:2512.10038v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by-nc-nd/4.0/
- Vignesh Sundaresha, Akash Haridas, Vikram Appia, Lav Varshney
+ 2025 IEEE International Conference on Big Data
+ Jia Cheng Hu, Roberto Cavicchioli, Alessandro Capotondi
- Explainable Fundus Image Curation and Lesion Detection in Diabetic Retinopathy
- https://arxiv.org/abs/2512.08986
- arXiv:2512.08986v1 Announce Type: new
-Abstract: Diabetic Retinopathy (DR) affects individuals with long-term diabetes. Without early diagnosis, DR can lead to vision loss. Fundus photography captures the structure of the retina along with abnormalities indicative of the stage of the disease. Artificial Intelligence (AI) can support clinicians in identifying these lesions, reducing manual workload, but models require high-quality annotated datasets. Due to the complexity of retinal structures, errors in image acquisition and lesion interpretation of manual annotators can occur. We proposed a quality-control framework, ensuring only high-standard data is used for evaluation and AI training. First, an explainable feature-based classifier is used to filter inadequate images. The features are extracted both using image processing and contrastive learning. Then, the images are enhanced and put subject to annotation, using deep-learning-based assistance. Lastly, the agreement between annotators calculated using derived formulas determines the usability of the annotations.
- oai:arXiv.org:2512.08986v1
- cs.CV
+ Intelligently Weighting Multiple Reference Models for Direct Preference Optimization of LLMs
+ https://arxiv.org/abs/2512.10040
+ arXiv:2512.10040v1 Announce Type: new
+Abstract: Fine-tuning is integral for aligning large language models (LLMs) with human preferences. Multiple-Reference Preference Optimization (MRPO) builds on Direct Preference Optimization (DPO) by fine-tuning LLMs on preference datasets while regularizing the policy towards a mixture of reference models to leverage their collective desirable properties. However, current methods for setting the reference weights are ad-hoc and statistically unsound, leading to unreliable performance. To address this, we introduce four new weighting strategies: two offline methods that leverage held-out validation signal; one online method that uses a sliding-window estimator to reduce overfitting; and an online method that treats reference weighting as a $K$-armed bandit via Thompson Sampling. Experiments using Qwen2.5-0.5B as the policy model and seven reference models from the Llama, Mistral, Qwen, Yi, and Phi families (0.5B-14B each) show that all 4 of our strategies outperform the current MRPO weighting methods on UltraFeedback and SafeRLHF in preference accuracy. More thought-provokingly, however, we find that single-reference DPO, using any of 6 out of 7 references, consistently outperforms all tested multiple-reference approaches -- calling into question the practical appeal of multiple-reference approaches.
+ oai:arXiv.org:2512.10040v1
+ cs.LGcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Anca Mihai, Adrian Groza
+ http://creativecommons.org/licenses/by/4.0/
+ Skyler Wu, Aymen Echarghaoui
- 3DID: Direct 3D Inverse Design for Aerodynamics with Physics-Aware Optimization
- https://arxiv.org/abs/2512.08987
- arXiv:2512.08987v1 Announce Type: new
-Abstract: Inverse design aims to design the input variables of a physical system to optimize a specified objective function, typically formulated as a search or optimization problem. However, in 3D domains, the design space grows exponentially, rendering exhaustive grid-based searches infeasible. Recent advances in deep learning have accelerated inverse design by providing powerful generative priors and differentiable surrogate models. Nevertheless, current methods tend to approximate the 3D design space using 2D projections or fine-tune existing 3D shapes. These approaches sacrifice volumetric detail and constrain design exploration, preventing true 3D design from scratch. In this paper, we propose a 3D Inverse Design (3DID) framework that directly navigates the 3D design space by coupling a continuous latent representation with a physics-aware optimization strategy. We first learn a unified physics-geometry embedding that compactly captures shape and physical field data in a continuous latent space. Then, we introduce a two-stage strategy to perform physics-aware optimization. In the first stage, a gradient-guided diffusion sampler explores the global latent manifold. In the second stage, an objective-driven, topology-preserving refinement further sculpts each candidate toward the target objective. This enables 3DID to generate high-fidelity 3D geometries, outperforming existing methods in both solution quality and design versatility.
- oai:arXiv.org:2512.08987v1
+ MetaVoxel: Joint Diffusion Modeling of Imaging and Clinical Metadata
+ https://arxiv.org/abs/2512.10041
+ arXiv:2512.10041v1 Announce Type: new
+Abstract: Modern deep learning methods have achieved impressive results across tasks from disease classification, estimating continuous biomarkers, to generating realistic medical images. Most of these approaches are trained to model conditional distributions defined by a specific predictive direction with a specific set of input variables. We introduce MetaVoxel, a generative joint diffusion modeling framework that models the joint distribution over imaging data and clinical metadata by learning a single diffusion process spanning all variables. By capturing the joint distribution, MetaVoxel unifies tasks that traditionally require separate conditional models and supports flexible zero-shot inference using arbitrary subsets of inputs without task-specific retraining. Using more than 10,000 T1-weighted MRI scans paired with clinical metadata from nine datasets, we show that a single MetaVoxel model can perform image generation, age estimation, and sex prediction, achieving performance comparable to established task-specific baselines. Additional experiments highlight its capabilities for flexible inference.Together, these findings demonstrate that joint multimodal diffusion offers a promising direction for unifying medical AI models and enabling broader clinical applicability.
+ oai:arXiv.org:2512.10041v1cs.CVcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yuze Hao, Linchao Zhu, Yi Yang
+ Yihao Liu, Chenyu Gao, Lianrui Zuo, Michael E. Kim, Brian D. Boyd, Lisa L. Barnes, Walter A. Kukull, Lori L. Beason-Held, Susan M. Resnick, Timothy J. Hohman, Warren D. Taylor, Bennett A. Landman
- Enhancing Knowledge Transfer in Hyperspectral Image Classification via Cross-scene Knowledge Integration
- https://arxiv.org/abs/2512.08989
- arXiv:2512.08989v1 Announce Type: new
-Abstract: Knowledge transfer has strong potential to improve hyperspectral image (HSI) classification, yet two inherent challenges fundamentally restrict effective cross-domain transfer: spectral variations caused by different sensors and semantic inconsistencies across heterogeneous scenes. Existing methods are limited by transfer settings that assume homogeneous domains or heterogeneous scenarios with only co-occurring categories. When label spaces do not overlap, they further rely on complete source-domain coverage and therefore overlook critical target-private information. To overcome these limitations and enable knowledge transfer in fully heterogeneous settings, we propose Cross-scene Knowledge Integration (CKI), a framework that explicitly incorporates target-private knowledge during transfer. CKI includes: (1) Alignment of Spectral Characteristics (ASC) to reduce spectral discrepancies through domain-agnostic projection; (2) Cross-scene Knowledge Sharing Preference (CKSP), which resolves semantic mismatch via a Source Similarity Mechanism (SSM); and (3) Complementary Information Integration (CII) to maximize the use of target-specific complementary cues. Extensive experiments verify that CKI achieves state-of-the-art performance with strong stability across diverse cross-scene HSI scenarios.
- oai:arXiv.org:2512.08989v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ SEMDICE: Off-policy State Entropy Maximization via Stationary Distribution Correction Estimation
+ https://arxiv.org/abs/2512.10042
+ arXiv:2512.10042v1 Announce Type: new
+Abstract: In the unsupervised pre-training for reinforcement learning, the agent aims to learn a prior policy for downstream tasks without relying on task-specific reward functions. We focus on state entropy maximization (SEM), where the goal is to learn a policy that maximizes the entropy of the state stationary distribution. In this paper, we introduce SEMDICE, a principled off-policy algorithm that computes an SEM policy from an arbitrary off-policy dataset, which optimizes the policy directly within the space of stationary distributions. SEMDICE computes a single, stationary Markov state-entropy-maximizing policy from an arbitrary off-policy dataset. Experimental results demonstrate that SEMDICE outperforms baseline algorithms in maximizing state entropy while achieving the best adaptation efficiency for downstream tasks among SEM-based unsupervised RL pre-training methods.
+ oai:arXiv.org:2512.10042v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Lu Huo, Wenjian Huang, Jianguo Zhang, Min Xu, Haimin Zhang
+ http://creativecommons.org/licenses/by/4.0/
+ Jongmin Lee, Meiqi Sun, Pieter Abbeel
- Deterministic World Models for Verification of Closed-loop Vision-based Systems
- https://arxiv.org/abs/2512.08991
- arXiv:2512.08991v1 Announce Type: new
-Abstract: Verifying closed-loop vision-based control systems remains a fundamental challenge due to the high dimensionality of images and the difficulty of modeling visual environments. While generative models are increasingly used as camera surrogates in verification, their reliance on stochastic latent variables introduces unnecessary overapproximation error. To address this bottleneck, we propose a Deterministic World Model (DWM) that maps system states directly to generative images, effectively eliminating uninterpretable latent variables to ensure precise input bounds. The DWM is trained with a dual-objective loss function that combines pixel-level reconstruction accuracy with a control difference loss to maintain behavioral consistency with the real system. We integrate DWM into a verification pipeline utilizing Star-based reachability analysis (StarV) and employ conformal prediction to derive rigorous statistical bounds on the trajectory deviation between the world model and the actual vision-based system. Experiments on standard benchmarks show that our approach yields significantly tighter reachable sets and better verification performance than a latent-variable baseline.
- oai:arXiv.org:2512.08991v1
- cs.CV
+ Local LLM Ensembles for Zero-shot Portuguese Named Entity Recognition
+ https://arxiv.org/abs/2512.10043
+ arXiv:2512.10043v1 Announce Type: new
+Abstract: Large Language Models (LLMs) excel in many Natural Language Processing (NLP) tasks through in-context learning but often under-perform in Named Entity Recognition (NER), especially for lower-resource languages like Portuguese. While open-weight LLMs enable local deployment, no single model dominates all tasks, motivating ensemble approaches. However, existing LLM ensembles focus on text generation or classification, leaving NER under-explored. In this context, this work proposes a novel three-step ensemble pipeline for zero-shot NER using similarly capable, locally run LLMs. Our method outperforms individual LLMs in four out of five Portuguese NER datasets by leveraging a heuristic to select optimal model combinations with minimal annotated data. Moreover, we show that ensembles obtained on different source datasets generally outperform individual LLMs in cross-dataset configurations, potentially eliminating the need for annotated data for the current task. Our work advances scalable, low-resource, and zero-shot NER by effectively combining multiple small LLMs without fine-tuning. Code is available at https://github.com/Joao-Luz/local-llm-ner-ensemble.
+ oai:arXiv.org:2512.10043v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Yuang Geng, Zhuoyang Zhou, Zhongzheng Zhang, Siyuan Pan, Hoang-Dung Tran, Ivan Ruchkin
+ Jo\~ao Lucas Luz Lima Sarcinelli, Diego Furtado Silva
- PoultryTalk: A Multi-modal Retrieval-Augmented Generation (RAG) System for Intelligent Poultry Management and Decision Support
- https://arxiv.org/abs/2512.08995
- arXiv:2512.08995v1 Announce Type: new
-Abstract: The Poultry industry plays a vital role in global food security, yet small- and medium-scale farmers frequently lack timely access to expert-level support for disease diagnosis, nutrition planning, and management decisions. With rising climate stress, unpredictable feed prices, and persistent disease threats, poultry producers often struggle to make quick, informed decisions. Therefore, there is a critical need for intelligent, data-driven systems that can deliver reliable, on-demand consultation. This paper presents PoultryTalk, a novel multi-modal Retrieval-Augmented Generation (RAG) system designed to provide real-time expert guidance through text and image-based interaction. PoultryTalk uses OpenAI's text-embedding-3-small and GPT-4o to provide smart, context-aware poultry management advice from text, images, or questions. System usability and performance were evaluated using 200 expert-verified queries and feedback from 34 participants who submitted 267 queries to the PoultryTalk prototype. The expert-verified benchmark queries confirmed strong technical performance, achieving a semantic similarity of 84.0% and an average response latency of 3.6 seconds. Compared with OpenAI's GPT-4o, PoultryTalk delivered more accurate and reliable information related to poultry. Based on participants' evaluations, PoultryTalk achieved a response accuracy of 89.9%, with about 9.1% of responses rated as incorrect. A post-use survey indicated high user satisfaction: 95.6% of participants reported that the chatbot provided "always correct" and "mostly correct" answers. 82.6% indicated they would recommend the tool, and 17.4% responded "maybe." These results collectively demonstrate that PoultryTalk not only delivers accurate, contextually relevant information but also demonstrates strong user acceptance and scalability potential.
- oai:arXiv.org:2512.08995v1
- cs.HC
- cs.IR
- Thu, 11 Dec 2025 00:00:00 -0500
+ SimWorld-Robotics: Synthesizing Photorealistic and Dynamic Urban Environments for Multimodal Robot Navigation and Collaboration
+ https://arxiv.org/abs/2512.10046
+ arXiv:2512.10046v1 Announce Type: new
+Abstract: Recent advances in foundation models have shown promising results in developing generalist robotics that can perform diverse tasks in open-ended scenarios given multimodal inputs. However, current work has been mainly focused on indoor, household scenarios. In this work, we present SimWorld-Robotics~(SWR), a simulation platform for embodied AI in large-scale, photorealistic urban environments. Built on Unreal Engine 5, SWR procedurally generates unlimited photorealistic urban scenes populated with dynamic elements such as pedestrians and traffic systems, surpassing prior urban simulations in realism, complexity, and scalability. It also supports multi-robot control and communication. With these key features, we build two challenging robot benchmarks: (1) a multimodal instruction-following task, where a robot must follow vision-language navigation instructions to reach a destination in the presence of pedestrians and traffic; and (2) a multi-agent search task, where two robots must communicate to cooperatively locate and meet each other. Unlike existing benchmarks, these two new benchmarks comprehensively evaluate a wide range of critical robot capacities in realistic scenarios, including (1) multimodal instructions grounding, (2) 3D spatial reasoning in large environments, (3) safe, long-range navigation with people and traffic, (4) multi-robot collaboration, and (5) grounded communication. Our experimental results demonstrate that state-of-the-art models, including vision-language models (VLMs), struggle with our tasks, lacking robust perception, reasoning, and planning abilities necessary for urban environments.
+ oai:arXiv.org:2512.10046v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Kapalik Khanal, Biswash Khatiwada, Stephen Afrifa, Ranjan Sapkota, Sanjay Shah, Frank Bai, Ramesh Bahadur Bist
+ http://creativecommons.org/licenses/by/4.0/
+ Yan Zhuang, Jiawei Ren, Xiaokang Ye, Jianzhi Shen, Ruixuan Zhang, Tianai Yue, Muhammad Faayez, Xuhong He, Ziqiao Ma, Lianhui Qin, Zhiting Hu, Tianmin Shu
- Demo: Generative AI helps Radiotherapy Planning with User Preference
- https://arxiv.org/abs/2512.08996
- arXiv:2512.08996v1 Announce Type: new
-Abstract: Radiotherapy planning is a highly complex process that often varies significantly across institutions and individual planners. Most existing deep learning approaches for 3D dose prediction rely on reference plans as ground truth during training, which can inadvertently bias models toward specific planning styles or institutional preferences. In this study, we introduce a novel generative model that predicts 3D dose distributions based solely on user-defined preference flavors. These customizable preferences enable planners to prioritize specific trade-offs between organs-at-risk (OARs) and planning target volumes (PTVs), offering greater flexibility and personalization. Designed for seamless integration with clinical treatment planning systems, our approach assists users in generating high-quality plans efficiently. Comparative evaluations demonstrate that our method can surpasses the Varian RapidPlan model in both adaptability and plan quality in some scenarios.
- oai:arXiv.org:2512.08996v1
- cs.CV
- cs.AI
+ Detailed balance in large language model-driven agents
+ https://arxiv.org/abs/2512.10047
+ arXiv:2512.10047v1 Announce Type: new
+Abstract: Large language model (LLM)-driven agents are emerging as a powerful new paradigm for solving complex problems. Despite the empirical success of these practices, a theoretical framework to understand and unify their macroscopic dynamics remains lacking. This Letter proposes a method based on the least action principle to estimate the underlying generative directionality of LLMs embedded within agents. By experimentally measuring the transition probabilities between LLM-generated states, we statistically discover a detailed balance in LLM-generated transitions, indicating that LLM generation may not be achieved by generally learning rule sets and strategies, but rather by implicitly learning a class of underlying potential functions that may transcend different LLM architectures and prompt templates. To our knowledge, this is the first discovery of a macroscopic physical law in LLM generative dynamics that does not depend on specific model details. This work is an attempt to establish a macroscopic dynamics theory of complex AI systems, aiming to elevate the study of AI agents from a collection of engineering practices to a science built on effective measurements that are predictable and quantifiable.
+ oai:arXiv.org:2512.10047v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cond-mat.stat-mech
+ cs.AI
+ nlin.AO
+ physics.data-an
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Riqiang Gao, Simon Arberet, Martin Kraus, Han Liu, Wilko FAR Verbakel, Dorin Comaniciu, Florin-Cristian Ghesu, Ali Kamen
+ Zhuo-Yang Song, Qing-Hong Cao, Ming-xing Luo, Hua Xing Zhu
- Diffusion Model Regularized Implicit Neural Representation for CT Metal Artifact Reduction
- https://arxiv.org/abs/2512.08999
- arXiv:2512.08999v1 Announce Type: new
-Abstract: Computed tomography (CT) images are often severely corrupted by artifacts in the presence of metals. Existing supervised metal artifact reduction (MAR) approaches suffer from performance instability on known data due to their reliance on limited paired metal-clean data, which limits their clinical applicability. Moreover, existing unsupervised methods face two main challenges: 1) the CT physical geometry is not effectively incorporated into the MAR process to ensure data fidelity; 2) traditional heuristics regularization terms cannot fully capture the abundant prior knowledge available. To overcome these shortcomings, we propose diffusion model regularized implicit neural representation framework for MAR. The implicit neural representation integrates physical constraints and imposes data fidelity, while the pre-trained diffusion model provides prior knowledge to regularize the solution. Experimental results on both simulated and clinical data demonstrate the effectiveness and generalization ability of our method, highlighting its potential to be applied to clinical settings.
- oai:arXiv.org:2512.08999v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ DB2-TransF: All You Need Is Learnable Daubechies Wavelets for Time Series Forecasting
+ https://arxiv.org/abs/2512.10051
+ arXiv:2512.10051v1 Announce Type: new
+Abstract: Time series forecasting requires models that can efficiently capture complex temporal dependencies, especially in large-scale and high-dimensional settings. While Transformer-based architectures excel at modeling long-range dependencies, their quadratic computational complexity poses limitations on scalability and adaptability. To overcome these challenges, we introduce DB2-TransF, a novel Transformer-inspired architecture that replaces the self-attention mechanism with a learnable Daubechies wavelet coefficient layer. This wavelet-based module efficiently captures multi-scale local and global patterns and enhances the modeling of correlations across multiple time series for the time series forecasting task. Extensive experiments on 13 standard forecasting benchmarks demonstrate that DB2-TransF achieves comparable or superior predictive accuracy to conventional Transformers, while substantially reducing memory usage for the time series forecasting task. The obtained experimental results position DB2-TransF as a scalable and resource-efficient framework for advanced time series forecasting. Our code is available at https://github.com/SteadySurfdom/DB2-TransF
+ oai:arXiv.org:2512.10051v1
+ cs.LG
+ cs.AI
+ eess.SP
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jie Wen, Chenhe Du, Xiao Wang, Yuyao Zhang
+ Moulik Gupta, Achyut Mani Tripathi
- A Physics-Constrained, Design-Driven Methodology for Defect Dataset Generation in Optical Lithography
- https://arxiv.org/abs/2512.09001
- arXiv:2512.09001v1 Announce Type: new
-Abstract: The efficacy of Artificial Intelligence (AI) in micro/nano manufacturing is fundamentally constrained by the scarcity of high-quality and physically grounded training data for defect inspection. Lithography defect data from semiconductor industry are rarely accessible for research use, resulting in a shortage of publicly available datasets. To address this bottleneck in lithography, this study proposes a novel methodology for generating large-scale, physically valid defect datasets with pixel-level annotations. The framework begins with the ab initio synthesis of defect layouts using controllable, physics-constrained mathematical morphology operations (erosion and dilation) applied to the original design-level layout. These synthesized layouts, together with their defect-free counterparts, are fabricated into physical samples via high-fidelity digital micromirror device (DMD)-based lithography. Optical micrographs of the synthesized defect samples and their defect-free references are then compared to create consistent defect delineation annotations. Using this methodology, we constructed a comprehensive dataset of 3,530 Optical micrographs containing 13,365 annotated defect instances including four classes: bridge, burr, pinch, and contamination. Each defect instance is annotated with a pixel-accurate segmentation mask, preserving full contour and geometry. The segmentation-based Mask R-CNN achieves AP@0.5 of 0.980, 0.965, and 0.971, compared with 0.740, 0.719, and 0.717 for Faster R-CNN on bridge, burr, and pinch classes, representing a mean AP@0.5 improvement of approximately 34%. For the contamination class, Mask R-CNN achieves an AP@0.5 roughly 42% higher than Faster R-CNN. These consistent gains demonstrate that our proposed methodology to generate defect datasets with pixel-level annotations is feasible for robust AI-based Measurement/Inspection (MI) in semiconductor fabrication.
- oai:arXiv.org:2512.09001v1
- cs.CV
+ Parallel Decoder Transformer: Model-Internal Parallel Decoding with Speculative Invariance via Note Conditioning
+ https://arxiv.org/abs/2512.10054
+ arXiv:2512.10054v1 Announce Type: new
+Abstract: Autoregressive decoding in Large Language Models (LLMs) is inherently sequential, creating a latency bottleneck that scales linearly with output length. While ``Decomposition-and-Fill'' methods like Skeleton-of-Thought attempt to parallelize generation via external orchestration, they suffer from \textit{coherence drift} due to the lack of cross-stream communication. In this work, we introduce the \textbf{Parallel Decoder Transformer (PDT)}, a parameter-efficient architecture that embeds coordination primitives directly into the inference process of a frozen pre-trained model.
+ Instead of retraining the base model, PDT injects lightweight \textit{Speculative Note Conditioning (SNC)} adapters that allow parallel decoding streams to synchronize via a shared, dynamic latent space. We formulate coordination as a \textit{speculative consensus} problem, where sibling streams broadcast semantic ``notes'' to a global bus, gated by a learned verification head. We validate our approach on a 50,000-step curriculum using a frozen 20B-parameter backbone. Our results demonstrate that PDT achieves effective self-correction, reaching \textbf{77.8\% precision} in coverage prediction and recovering approximate serial semantics without modifying the trunk weights. This establishes PDT as a scalable, efficient alternative to full model fine-tuning for structured parallel generation.
+ oai:arXiv.org:2512.10054v1cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Yuehua Hu, Jiyeong Kong, Dong-yeol Shin, Jaekyun Kim, Kyung-Tae Kang
+ http://creativecommons.org/licenses/by/4.0/
+ Logan Robbins
- A Survey of Body and Face Motion: Datasets, Performance Evaluation Metrics and Generative Techniques
- https://arxiv.org/abs/2512.09005
- arXiv:2512.09005v1 Announce Type: new
-Abstract: Body and face motion play an integral role in communication. They convey crucial information on the participants. Advances in generative modeling and multi-modal learning have enabled motion generation from signals such as speech, conversational context and visual cues. However, generating expressive and coherent face and body dynamics remains challenging due to the complex interplay of verbal / non-verbal cues and individual personality traits. This survey reviews body and face motion generation, covering core concepts, representations techniques, generative approaches, datasets and evaluation metrics. We highlight future directions to enhance the realism, coherence and expressiveness of avatars in dyadic settings. To the best of our knowledge, this work is the first comprehensive review to cover both body and face motion. Detailed resources are listed on https://lownish23csz0010.github.io/mogen/.
- oai:arXiv.org:2512.09005v1
- cs.CV
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Mitigating Exposure Bias in Risk-Aware Time Series Forecasting with Soft Tokens
+ https://arxiv.org/abs/2512.10056
+ arXiv:2512.10056v1 Announce Type: new
+Abstract: Autoregressive forecasting is central to predictive control in diabetes and hemodynamic management, where different operating zones carry different clinical risks. Standard models trained with teacher forcing suffer from exposure bias, yielding unstable multi-step forecasts for closed-loop use. We introduce Soft-Token Trajectory Forecasting (SoTra), which propagates continuous probability distributions (``soft tokens'') to mitigate exposure bias and learn calibrated, uncertainty-aware trajectories. A risk-aware decoding module then minimizes expected clinical harm. In glucose forecasting, SoTra reduces average zone-based risk by 18\%; in blood-pressure forecasting, it lowers effective clinical risk by approximately 15\%. These improvements support its use in safety-critical predictive control.
+ oai:arXiv.org:2512.10056v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Lownish Rai Sookha, Nikhil Pakhale, Mudasir Ganaie, Abhinav Dhall
+ http://creativecommons.org/licenses/by/4.0/
+ Alireza Namazi, Amirreza Dolatpour Fathkouhi, Heman Shakeri
- Llama-based source code vulnerability detection: Prompt engineering vs Fine tuning
- https://arxiv.org/abs/2512.09006
- arXiv:2512.09006v1 Announce Type: new
-Abstract: The significant increase in software production, driven by the acceleration of development cycles over the past two decades, has led to a steady rise in software vulnerabilities, as shown by statistics published yearly by the CVE program. The automation of the source code vulnerability detection (CVD) process has thus become essential, and several methods have been proposed ranging from the well established program analysis techniques to the more recent AI-based methods. Our research investigates Large Language Models (LLMs), which are considered among the most performant AI models to date, for the CVD task. The objective is to study their performance and apply different state-of-the-art techniques to enhance their effectiveness for this task. We explore various fine-tuning and prompt engineering settings. We particularly suggest one novel approach for fine-tuning LLMs which we call Double Fine-tuning, and also test the understudied Test-Time fine-tuning approach. We leverage the recent open-source Llama-3.1 8B, with source code samples extracted from BigVul and PrimeVul datasets. Our conclusions highlight the importance of fine-tuning to resolve the task, the performance of Double tuning, as well as the potential of Llama models for CVD. Though prompting proved ineffective, Retrieval augmented generation (RAG) performed relatively well as an example selection technique. Overall, some of our research questions have been answered, and many are still on hold, which leaves us many future work perspectives. Code repository is available here: https://github.com/DynaSoumhaneOuchebara/Llama-based-vulnerability-detection.
- oai:arXiv.org:2512.09006v1
- cs.SE
+ Mind the Gap! Pathways Towards Unifying AI Safety and Ethics Research
+ https://arxiv.org/abs/2512.10058
+ arXiv:2512.10058v1 Announce Type: new
+Abstract: While much research in artificial intelligence (AI) has focused on scaling capabilities, the accelerating pace of development makes countervailing work on producing harmless, "aligned" systems increasingly urgent. Yet research on alignment has diverged along two largely parallel tracks: safety--centered on scaled intelligence, deceptive or scheming behaviors, and existential risk--and ethics--focused on present harms, the reproduction of social bias, and flaws in production pipelines. Although both communities warn of insufficient investment in alignment, they disagree on what alignment means or ought to mean. As a result, their efforts have evolved in relative isolation, shaped by distinct methodologies, institutional homes, and disciplinary genealogies.
+ We present a large-scale, quantitative study showing the structural split between AI safety and AI ethics. Using a bibliometric and co-authorship network analysis of 6,442 papers from twelve major ML and NLP conferences (2020-2025), we find that over 80% of collaborations occur within either the safety or ethics communities, and cross-field connectivity is highly concentrated: roughly 5% of papers account for more than 85% of bridging links. Removing a small number of these brokers sharply increases segregation, indicating that cross-disciplinary exchange depends on a handful of actors rather than broad, distributed collaboration. These results show that the safety-ethics divide is not only conceptual but institutional, with implications for research agendas, policy, and venues. We argue that integrating technical safety work with normative ethics--via shared benchmarks, cross-institutional venues, and mixed-method methodologies--is essential for building AI systems that are both robust and just.
+ oai:arXiv.org:2512.10058v1cs.AI
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CY
+ cs.HC
+ cs.SI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Dyna Soumhane Ouchebara, St\'ephane Dupont
+ Dani Roytburg, Beck Miller
- Towards Lossless Ultimate Vision Token Compression for VLMs
- https://arxiv.org/abs/2512.09010
- arXiv:2512.09010v1 Announce Type: new
-Abstract: Visual language models encounter challenges in computational efficiency and latency, primarily due to the substantial redundancy in the token representations of high-resolution images and videos. Current attention/similarity-based compression algorithms suffer from either position bias or class imbalance, leading to significant accuracy degradation. They also fail to generalize to shallow LLM layers, which exhibit weaker cross-modal interactions. To address this, we extend token compression to the visual encoder through an effective iterative merging scheme that is orthogonal in spatial axes to accelerate the computation across the entire VLM. Furthermoer, we integrate a spectrum pruning unit into LLM through an attention/similarity-free low-pass filter, which gradually prunes redundant visual tokens and is fully compatible to modern FlashAttention. On this basis, we propose Lossless Ultimate Vision tokens Compression (LUVC) framework. LUVC systematically compresses visual tokens until complete elimination at the final layer of LLM, so that the high-dimensional visual features are gradually fused into the multimodal queries. The experiments show that LUVC achieves a 2 speedup inference in language model with negligible accuracy degradation, and the training-free characteristic enables immediate deployment across multiple VLMs.
- oai:arXiv.org:2512.09010v1
- cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Efficient Boys function evaluation using minimax approximation
+ https://arxiv.org/abs/2512.10059
+ arXiv:2512.10059v1 Announce Type: new
+Abstract: We present an algorithm for efficient evaluation of Boys functions $F_0,\dots,F_{k_\mathrm{max}}$ tailored to modern computing architectures, in particular graphical processing units (GPUs), where maximum throughput is high and data movement is costly. The method combines rational minimax approximations with upward and downward recurrence relations. The non-negative real axis is partitioned into three regions, $[0,\infty\rangle = A\cup B\cup C$, where regions $A$ and $B$ are treated using rational minimax approximations and region $C$ by an asymptotic approximation. This formulation avoids lookup tables and irregular memory access, making it well suited hardware with high maximum throughput and low latency. The rational minimax coefficients are generated using the rational Remez algorithm. For a target maximum absolute error of $\varepsilon_\mathrm{tol} = 5\cdot10^{-14}$, the corresponding approximation regions and coefficients for Boys functions $F_0,\dots,F_{32}$ are provided in the appendix.
+ oai:arXiv.org:2512.10059v1
+ math.NA
+ cs.NA
+ physics.comp-ph
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Dehua Zheng, Mouxiao Huang, Borui Jiang, Hailin Hu, Xinghao Chen
+ Rasmus Vikhamar-Sandberg, Michal Repisky
- An Approach for Detection of Entities in Dynamic Media Contents
- https://arxiv.org/abs/2512.09011
- arXiv:2512.09011v1 Announce Type: new
-Abstract: The notion of learning underlies almost every evolution of Intelligent Agents. In this paper, we present an approach for searching and detecting a given entity in a video sequence. Specifically, we study how the deep learning technique by artificial neuralnetworks allows us to detect a character in a video sequence. The technique of detecting a character in a video is a complex field of study, considering the multitude of objects present in the data under analysis. From the results obtained, we highlight the following, compared to state of the art: In our approach, within the field of Computer Vision, the structuring of supervised learning algorithms allowed us to achieve several successes from simple characteristics of the target character. Our results demonstrate that is new approach allows us to locate, in an efficient way, wanted individuals from a private or public image base. For the case of Angola, the classifier we propose opens the possibility of reinforcing the national security system based on the database of target individuals (disappeared, criminals, etc.) and the video sequences of the Integrated Public Security Centre (CISP).
- oai:arXiv.org:2512.09011v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ \textsc{Text2Graph}: Combining Lightweight LLMs and GNNs for Efficient Text Classification in Label-Scarce Scenarios
+ https://arxiv.org/abs/2512.10061
+ arXiv:2512.10061v1 Announce Type: new
+Abstract: Large Language Models (LLMs) have become effective zero-shot classifiers, but their high computational requirements and environmental costs limit their practicality for large-scale annotation in high-performance computing (HPC) environments. To support more sustainable workflows, we present \textsc{Text2Graph}, an open-source Python package that provides a modular implementation of existing text-to-graph classification approaches. The framework enables users to combine LLM-based partial annotation with Graph Neural Network (GNN) label propagation in a flexible manner, making it straightforward to swap components such as feature extractors, edge construction methods, and sampling strategies. We benchmark \textsc{Text2Graph} on a zero-shot setting using five datasets spanning topic classification and sentiment analysis tasks, comparing multiple variants against other zero-shot approaches for text classification. In addition to reporting performance, we provide detailed estimates of energy consumption and carbon emissions, showing that graph-based propagation achieves competitive results at a fraction of the energy and environmental cost.
+ oai:arXiv.org:2512.10061v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- 10.32996/jcsts.2023.5.3.2
- Journal of Computer Science and Technology Studies, Vol. 5, No. 3, pp. 13-24, 2023
- Nzakiese Mbongo, Ngombo Armando
+ Jo\~ao Lucas Luz Lima Sarcinelli, Ricardo Marcondes Marcacini
- Graph Deep Learning for Intracranial Aneurysm Blood Flow Simulation and Risk Assessment
- https://arxiv.org/abs/2512.09013
- arXiv:2512.09013v1 Announce Type: new
-Abstract: Intracranial aneurysms remain a major cause of neurological morbidity and mortality worldwide, where rupture risk is tightly coupled to local hemodynamics particularly wall shear stress and oscillatory shear index. Conventional computational fluid dynamics simulations provide accurate insights but are prohibitively slow and require specialized expertise. Clinical imaging alternatives such as 4D Flow MRI offer direct in-vivo measurements, yet their spatial resolution remains insufficient to capture the fine-scale shear patterns that drive endothelial remodeling and rupture risk while being extremely impractical and expensive.
- We present a graph neural network surrogate model that bridges this gap by reproducing full-field hemodynamics directly from vascular geometries in less than one minute per cardiac cycle. Trained on a comprehensive dataset of high-fidelity simulations of patient-specific aneurysms, our architecture combines graph transformers with autoregressive predictions to accurately simulate blood flow, wall shear stress, and oscillatory shear index. The model generalizes across unseen patient geometries and inflow conditions without mesh-specific calibration. Beyond accelerating simulation, our framework establishes the foundation for clinically interpretable hemodynamic prediction. By enabling near real-time inference integrated with existing imaging pipelines, it allows direct comparison with hospital phase-diagram assessments and extends them with physically grounded, high-resolution flow fields.
- This work transforms high-fidelity simulations from an expert-only research tool into a deployable, data-driven decision support system. Our full pipeline delivers high-resolution hemodynamic predictions within minutes of patient imaging, without requiring computational specialists, marking a step-change toward real-time, bedside aneurysm analysis.
- oai:arXiv.org:2512.09013v1
- cs.LG
- physics.flu-dyn
- Thu, 11 Dec 2025 00:00:00 -0500
+ Classifying covering types in homotopy type theory
+ https://arxiv.org/abs/2512.10064
+ arXiv:2512.10064v1 Announce Type: new
+Abstract: Covering spaces are a fundamental tool in algebraic topology because of the close relationship they bear with the fundamental groups of spaces. Indeed, they are in correspondence with the subgroups of the fundamental group: this is known as the Galois correspondence. In particular, the covering space corresponding to the trivial group is the universal covering, which is a "1-connected" variant of the original space, in the sense that it has the same homotopy groups, except for the first one which is trivial. In this article, we formalize this correspondence in homotopy type theory, a variant of Martin-L\"of type theory in which types can be interpreted as spaces (up to homotopy). Along the way, we develop an n-dimensional generalization of covering spaces. Moreover, in order to demonstrate the applicability of our approach, we formally classify the covering of lens spaces and explain how to construct the Poincar\'e homology sphere.
+ oai:arXiv.org:2512.10064v1
+ cs.LO
+ math.AT
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Paul Garnier, Pablo Jeken-Rico, Vincent Lannelongue, Chiara Faitini, Aur\`ele Goetz, Lea Chanvillard, Ramy Nemer, Jonathan Viquerat, Ugo Pelissier, Philippe Meliga, Jacques S\'edat, Thomas Liebig, Yves Chau, Elie Hachem
+ Samuel Mimram, \'Emile Oleon
- Prototyping and Evaluating a Real-time Neuro-Adaptive Virtual Reality Flight Training System
- https://arxiv.org/abs/2512.09014
- arXiv:2512.09014v1 Announce Type: new
-Abstract: Real-time adjustments to task difficulty during flight training are crucial for optimizing performance and managing pilot workload. This study evaluated the functionality of a pre-trained brain-computer interface (BCI) that adapts training difficulty based on real-time estimations of workload from brain signals. Specifically, an EEG-based neuro-adaptive training system was developed and tested in Virtual Reality (VR) flight simulations with military student pilots. The neuro-adaptive system was compared to a fixed sequence that progressively increased in difficulty, in terms of self-reported user engagement, workload, and simulator sickness (subjective measures), as well as flight performance (objective metric). Additionally, we explored the relationships between subjective workload and flight performance in the VR simulator for each condition. The experiments concluded with semi-structured interviews to elicit the pilots' experience with the neuro-adaptive prototype. Results revealed no significant differences between the adaptive and fixed sequence conditions in subjective measures or flight performance. In both conditions, flight performance decreased as subjective workload increased. The semi-structured interviews indicated that, upon briefing, the pilots preferred the neuro-adaptive VR training system over the system with a fixed sequence, although individual differences were observed in the perception of difficulty and the order of changes in difficulty. Even though this study shows performance does not change, BCI-based flight training systems hold the potential to provide a more personalized and varied training experience.
- oai:arXiv.org:2512.09014v1
+ Linear socio-demographic representations emerge in Large Language Models from indirect cues
+ https://arxiv.org/abs/2512.10065
+ arXiv:2512.10065v1 Announce Type: new
+Abstract: We investigate how LLMs encode sociodemographic attributes of human conversational partners inferred from indirect cues such as names and occupations. We show that LLMs develop linear representations of user demographics within activation space, wherein stereotypically associated attributes are encoded along interpretable geometric directions. We first probe residual streams across layers of four open transformer-based LLMs (Magistral 24B, Qwen3 14B, GPT-OSS 20B, OLMo2-1B) prompted with explicit demographic disclosure. We show that the same probes predict demographics from implicit cues: names activate census-aligned gender and race representations, while occupations trigger representations correlated with real-world workforce statistics. These linear representations allow us to explain demographic inferences implicitly formed by LLMs during conversation. We demonstrate that these implicit demographic representations actively shape downstream behavior, such as career recommendations. Our study further highlights that models that pass bias benchmark tests may still harbor and leverage implicit biases, with implications for fairness when applied at scale.
+ oai:arXiv.org:2512.10065v1
+ cs.AIcs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-sa/4.0/
- Evy van Weelden, Jos M. Prinsen, Caterina Ceccato, Ethel Pruss, Anita Vrins, Maryam Alimardani, Travis J. Wiltshire, Max M. Louwerse
-
-
- Luxical: High-Speed Lexical-Dense Text Embeddings
- https://arxiv.org/abs/2512.09015
- arXiv:2512.09015v1 Announce Type: new
-Abstract: Frontier language model quality increasingly hinges on our ability to organize web-scale text corpora for training. Today's dominant tools trade off speed and flexibility: lexical classifiers (e.g., FastText) are fast but limited to producing classification output scores, while the vector-valued outputs of transformer text embedding models flexibly support numerous workflows (e.g., clustering, classification, and retrieval) but are computationally expensive to produce. We introduce Luxical, a library for high-speed "lexical-dense" text embeddings that aims to recover the best properties of both approaches for web-scale text organization. Luxical combines sparse TF--IDF features, a small ReLU network, and a knowledge distillation training regimen to approximate large transformer embedding models at a fraction of their operational cost. In this technical report, we describe the Luxical architecture and training objective and evaluate a concrete Luxical model in two disparate applications: a targeted webcrawl document retrieval test and an end-to-end language model data curation task grounded in text classification. In these tasks we demonstrate speedups ranging from 3x to 100x over varying-sized neural baselines, and comparable to FastText model inference during the data curation task. On these evaluations, the tested Luxical model illustrates favorable compute/quality trade-offs for large-scale text organization, matching the quality of neural baselines. Luxical is available as open-source software at https://github.com/datologyai/luxical.
- oai:arXiv.org:2512.09015v1
- cs.CL
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- DatologyAI, :, Luke Merrick, Alex Fang, Aldo Carranza, Alvin Deng, Amro Abbas, Brett Larsen, Cody Blakeney, Darren Teh, David Schwab, Fan Pan, Haakon Mongstad, Haoli Yin, Jack Urbanek, Jason Lee, Jason Telanoff, Josh Wills, Kaleigh Mentzer, Paul Burstein, Parth Doshi, Paul Burnstein, Pratyush Maini, Ricardo Monti, Rishabh Adiga, Scott Loftin, Siddharth Joshi, Spandan Das, Tony Jiang, Vineeth Dorma, Zhengping Wang, Bogdan Gaza, Ari Morcos, Matthew Leavitt
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Paul Bouchaud, Pedro Ramaciotti
- Learning to Remove Lens Flare in Event Camera
- https://arxiv.org/abs/2512.09016
- arXiv:2512.09016v1 Announce Type: new
-Abstract: Event cameras have the potential to revolutionize vision systems with their high temporal resolution and dynamic range, yet they remain susceptible to lens flare, a fundamental optical artifact that causes severe degradation. In event streams, this optical artifact forms a complex, spatio-temporal distortion that has been largely overlooked. We present E-Deflare, the first systematic framework for removing lens flare from event camera data. We first establish the theoretical foundation by deriving a physics-grounded forward model of the non-linear suppression mechanism. This insight enables the creation of the E-Deflare Benchmark, a comprehensive resource featuring a large-scale simulated training set, E-Flare-2.7K, and the first-ever paired real-world test set, E-Flare-R, captured by our novel optical system. Empowered by this benchmark, we design E-DeflareNet, which achieves state-of-the-art restoration performance. Extensive experiments validate our approach and demonstrate clear benefits for downstream tasks. Code and datasets are publicly available.
- oai:arXiv.org:2512.09016v1
+ Independent Density Estimation
+ https://arxiv.org/abs/2512.10067
+ arXiv:2512.10067v1 Announce Type: new
+Abstract: Large-scale Vision-Language models have achieved remarkable results in various domains, such as image captioning and conditioned image generation. Neverthe- less, these models still encounter difficulties in achieving human-like composi- tional generalization. In this study, we propose a new method called Independent Density Estimation (IDE) to tackle this challenge. IDE aims to learn the connec- tion between individual words in a sentence and the corresponding features in an image, enabling compositional generalization. We build two models based on the philosophy of IDE. The first one utilizes fully disentangled visual representations as input, and the second leverages a Variational Auto-Encoder to obtain partially disentangled features from raw images. Additionally, we propose an entropy- based compositional inference method to combine predictions of each word in the sentence. Our models exhibit superior generalization to unseen compositions compared to current models when evaluated on various datasets.
+ oai:arXiv.org:2512.10067v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-sa/4.0/
- Haiqian Han, Lingdong Kong, Jianing Li, Ao Liang, Chengtao Zhu, Jiacheng Lyu, Lai Xing Ng, Xiangyang Ji, Wei Tsang Ooi, Benoit R. Cottereau
+ http://creativecommons.org/licenses/by/4.0/
+ Jiahao Liu
- EMMap: A Systematic Framework for Spatial EMFI Mapping and Fault Classification on Microcontrollers
- https://arxiv.org/abs/2512.09049
- arXiv:2512.09049v1 Announce Type: new
-Abstract: Electromagnetic Fault Injection (EMFI) is a powerful technique for inducing bit flips and instruction-level perturbations on microcontrollers, yet existing literature lacks a unified methodology for systematically mapping spatial sensitivity and classifying resulting fault behaviors. Building on insights from O'Flynn and Kuhnapfel et al., we introduce a platform-agnostic framework for Spatial EMFI Mapping and Fault Classification, aimed at understanding how spatial probe position influences fault outcomes. We present pilot experiments on three representative microcontroller targets including the Xtensa LX6 (ESP32) and two ChipWhisper boards not as definitive evaluations, but as illustrative demonstrations of how the proposed methodology can be applied in practice. These preliminary observations motivate a generalized and reproducible workflow that researchers can adopt when analyzing EMFI susceptibility across diverse embedded architectures.
- oai:arXiv.org:2512.09049v1
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Openpi Comet: Competition Solution For 2025 BEHAVIOR Challenge
+ https://arxiv.org/abs/2512.10071
+ arXiv:2512.10071v1 Announce Type: new
+Abstract: The 2025 BEHAVIOR Challenge is designed to rigorously track progress toward solving long-horizon tasks by physical agents in simulated environments. BEHAVIOR-1K focuses on everyday household tasks that people most want robots to assist with and these tasks introduce long-horizon mobile manipulation challenges in realistic settings, bridging the gap between current research and real-world, human-centric applications. This report presents our solution to the 2025 BEHAVIOR Challenge in a very close 2nd place and substantially outperforms the rest of the submissions. Building on $\pi_{0.5}$, we focus on systematically building our solution by studying the effects of training techniques and data. Through careful ablations, we show the scaling power in pre-training and post-training phases for competitive performance. We summarize our practical lessons and design recommendations that we hope will provide actionable insights for the broader embodied AI community when adapting powerful foundation models to complex embodied scenarios.
+ oai:arXiv.org:2512.10071v1
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Gandham Sai Santhosh, Siddhartha Sanjay Naik, Ritwik Badola, Chester Rebeiro
+ Junjie Bai, Yu-Wei Chao, Qizhi Chen, Jinwei Gu, Moo Jin Kim, Zhaoshuo Li, Xuan Li, Tsung-Yi Lin, Ming-Yu Liu, Nic Ma, Kaichun Mo, Delin Qu, Shangkun Sun, Hongchi Xia, Fangyin Wei, Xiaohui Zeng
- Improving Multi-Class Calibration through Normalization-Aware Isotonic Techniques
- https://arxiv.org/abs/2512.09054
- arXiv:2512.09054v1 Announce Type: new
-Abstract: Accurate and reliable probability predictions are essential for multi-class supervised learning tasks, where well-calibrated models enable rational decision-making. While isotonic regression has proven effective for binary calibration, its extension to multi-class problems via one-vs-rest calibration produced suboptimal results when compared to parametric methods, limiting its practical adoption. In this work, we propose novel isotonic normalization-aware techniques for multiclass calibration, grounded in natural and intuitive assumptions expected by practitioners. Unlike prior approaches, our methods inherently account for probability normalization by either incorporating normalization directly into the optimization process (NA-FIR) or modeling the problem as a cumulative bivariate isotonic regression (SCIR). Empirical evaluation on a variety of text and image classification datasets across different model architectures reveals that our approach consistently improves negative log-likelihood (NLL) and expected calibration error (ECE) metrics.
- oai:arXiv.org:2512.09054v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Empirical Hardness in Multi-Agent Pathfinding: Research Challenges and Opportunities
+ https://arxiv.org/abs/2512.10078
+ arXiv:2512.10078v1 Announce Type: new
+Abstract: Multi-agent pathfinding (MAPF) is the problem of finding collision-free paths for a team of agents on a map. Although MAPF is NP-hard, the hardness of solving individual instances varies significantly, revealing a gap between theoretical complexity and actual hardness. This paper outlines three key research challenges in MAPF empirical hardness to understand such phenomena. The first challenge, known as algorithm selection, is determining the best-performing algorithms for a given instance. The second challenge is understanding the key instance features that affect MAPF empirical hardness, such as structural properties like phase transition and backbone/backdoor. The third challenge is how to leverage our knowledge of MAPF empirical hardness to effectively generate hard MAPF instances or diverse benchmark datasets. This work establishes a foundation for future empirical hardness research and encourages deeper investigation into these promising and underexplored areas.
+ oai:arXiv.org:2512.10078v1
+ cs.MA
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Alon Arad, Saharon Rosset
+ Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, 2025, Pages 2885-2889
+ Jingyao Ren, Eric Ewing, T. K. Satish Kumar, Sven Koenig, Nora Ayanian
- ConceptPose: Training-Free Zero-Shot Object Pose Estimation using Concept Vectors
- https://arxiv.org/abs/2512.09056
- arXiv:2512.09056v1 Announce Type: new
-Abstract: Object pose estimation is a fundamental task in computer vision and robotics, yet most methods require extensive, dataset-specific training. Concurrently, large-scale vision language models show remarkable zero-shot capabilities. In this work, we bridge these two worlds by introducing ConceptPose, a framework for object pose estimation that is both training-free and model-free. ConceptPose leverages a vision-language-model (VLM) to create open-vocabulary 3D concept maps, where each point is tagged with a concept vector derived from saliency maps. By establishing robust 3D-3D correspondences across concept maps, our approach allows precise estimation of 6DoF relative pose. Without any object or dataset-specific training, our approach achieves state-of-the-art results on common zero shot relative pose estimation benchmarks, significantly outperforming existing methods by over 62% in ADD(-S) score, including those that utilize extensive dataset-specific training.
- oai:arXiv.org:2512.09056v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Search-based Software Testing Driven by Domain Knowledge: Reflections and New Perspectives
+ https://arxiv.org/abs/2512.10079
+ arXiv:2512.10079v1 Announce Type: new
+Abstract: Search-based Software Testing (SBST) can automatically generate test cases to search for requirements violations. Unlike manual test case development, it can generate a substantial number of test cases in a limited time. However, SBST does not possess the domain knowledge of engineers. Several techniques have been proposed to integrate engineers' domain knowledge within existing SBST frameworks. This paper will reflect on recent experimental results by highlighting bold and unexpected results. It will help re-examine SBST techniques driven by domain knowledge from a new perspective, suggesting new directions for future research.
+ oai:arXiv.org:2512.10079v1
+ cs.SE
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Liming Kuang, Yordanka Velikova, Mahdi Saleh, Jan-Nico Zaech, Danda Pani Paudel, Benjamin Busam
+ Federico Formica, Mark Lawford, Claudio Menghi
- A Diffusion-Based Framework for High-Resolution Precipitation Forecasting over CONUS
- https://arxiv.org/abs/2512.09059
- arXiv:2512.09059v1 Announce Type: new
-Abstract: Accurate precipitation forecasting is essential for hydrometeorological risk management, especially for anticipating extreme rainfall that can lead to flash flooding and infrastructure damage. This study introduces a diffusion-based deep learning (DL) framework that systematically compares three residual prediction strategies differing only in their input sources: (1) a fully data-driven model using only past observations from the Multi-Radar Multi-Sensor (MRMS) system, (2) a corrective model using only forecasts from the High-Resolution Rapid Refresh (HRRR) numerical weather prediction system, and (3) a hybrid model integrating both MRMS and selected HRRR forecast variables. By evaluating these approaches under a unified setup, we provide a clearer understanding of how each data source contributes to predictive skill over the Continental United States (CONUS). Forecasts are produced at 1-km spatial resolution, beginning with direct 1-hour predictions and extending to 12 hours using autoregressive rollouts. Performance is evaluated using both CONUS-wide and region-specific metrics that assess overall performance and skill at extreme rainfall thresholds. Across all lead times, our DL framework consistently outperforms the HRRR baseline in pixel-wise and spatiostatistical metrics. The hybrid model performs best at the shortest lead time, while the HRRR-corrective model outperforms others at longer lead times, maintaining high skill through 12 hours. To assess reliability, we incorporate calibrated uncertainty quantification tailored to the residual learning setup. These gains, particularly at longer lead times, are critical for emergency preparedness, where modest increases in forecast horizon can improve decision-making. This work advances DL-based precipitation forecasting by enhancing predictive skill, reliability, and applicability across regions.
- oai:arXiv.org:2512.09059v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ What Kind of Reasoning (if any) is an LLM actually doing? On the Stochastic Nature and Abductive Appearance of Large Language Models
+ https://arxiv.org/abs/2512.10080
+ arXiv:2512.10080v1 Announce Type: new
+Abstract: This article looks at how reasoning works in current Large Language Models (LLMs) that function using the token-completion method. It examines their stochastic nature and their similarity to human abductive reasoning. The argument is that these LLMs create text based on learned patterns rather than performing actual abductive reasoning. When their output seems abductive, this is largely because they are trained on human-generated texts that include reasoning structures. Examples are used to show how LLMs can produce plausible ideas, mimic commonsense reasoning, and give explanatory answers without being grounded in truth, semantics, verification, or understanding, and without performing any real abductive reasoning. This dual nature, where the models have a stochastic base but appear abductive in use, has important consequences for how LLMs are evaluated and applied. They can assist with generating ideas and supporting human thinking, but their outputs must be critically assessed because they cannot identify truth or verify their explanations. The article concludes by addressing five objections to these points, noting some limitations in the analysis, and offering an overall evaluation.
+ oai:arXiv.org:2512.10080v1
+ cs.CL
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Marina Vicens-Miquel, Amy McGovern, Aaron J. Hill, Efi Foufoula-Georgiou, Clement Guilloteau, Samuel S. P. Shen
+ Luciano Floridi, Jessica Morley, Claudio Novelli, David Watson
- SIP: Site in Pieces- A Dataset of Disaggregated Construction-Phase 3D Scans for Semantic Segmentation and Scene Understanding
- https://arxiv.org/abs/2512.09062
- arXiv:2512.09062v1 Announce Type: new
-Abstract: Accurate 3D scene interpretation in active construction sites is essential for progress monitoring, safety assessment, and digital twin development. LiDAR is widely used in construction because it offers advantages over camera-based systems, performing reliably in cluttered and dynamically changing conditions. Yet most public datasets for 3D perception are derived from densely fused scans with uniform sampling and complete visibility, conditions that do not reflect real construction sites. Field data are often collected as isolated single-station LiDAR views, constrained by safety requirements, limited access, and ongoing operations. These factors lead to radial density decay, fragmented geometry, and view-dependent visibility-characteristics that remain underrepresented in existing datasets. This paper presents SIP, Site in Pieces, a dataset created to reflect the practical constraints of LiDAR acquisition during construction. SIP provides indoor and outdoor scenes captured with a terrestrial LiDAR scanner and annotated at the point level using a taxonomy tailored to construction environments: A. Built Environment, B. Construction Operations, and C. Site Surroundings. The dataset includes both structural components and slender temporary objects such as scaffolding, MEP piping, and scissor lifts, where sparsity caused by occlusion and fragmented geometry make segmentation particularly challenging. The scanning protocol, annotation workflow, and quality control procedures establish a consistent foundation for the dataset. SIP is openly available with a supporting Git repository, offering adaptable class configurations that streamline adoption within modern 3D deep learning frameworks. By providing field data that retain real-world sensing characteristics, SIP enables robust benchmarking and contributes to advancing construction-oriented 3D vision tasks.
- oai:arXiv.org:2512.09062v1
- cs.CV
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Defining the Scope of Learning Analytics: An Axiomatic Approach for Analytic Practice and Measurable Learning Phenomena
+ https://arxiv.org/abs/2512.10081
+ arXiv:2512.10081v1 Announce Type: new
+Abstract: Learning Analytics (LA) has rapidly expanded through practical and technological innovation, yet its foundational identity has remained theoretically under-specified. This paper addresses this gap by proposing the first axiomatic theory that formally defines the essential structure, scope, and limitations of LA. Derived from the psychological definition of learning and the methodological requirements of LA, the framework consists of five axioms specifying discrete observation, experience construction, state transition, and inference. From these axioms, we derive a set of theorems and propositions that clarify the epistemological stance of LA, including the inherent unobservability of learner states, the irreducibility of temporal order, constraints on reachable states, and the impossibility of deterministically predicting future learning. We further define LA structure and LA practice as formal objects, demonstrating the sufficiency and necessity of the axioms and showing that diverse LA approaches -- such as Bayesian Knowledge Tracing and dashboards -- can be uniformly explained within this framework. The theory provides guiding principles for designing analytic methods and interpreting learning data while avoiding naive behaviorism and category errors by establishing an explicit theoretical inference layer between observations and states. This work positions LA as a rigorous science of state transition systems based on observability, establishing the theoretical foundation necessary for the field's maturation as a scholarly discipline.
+ oai:arXiv.org:2512.10081v1
+ cs.CY
+ cs.AI
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Seongyong Kim, Yong Kwon Cho
+ http://creativecommons.org/licenses/by/4.0/
+ Kensuke Takii, Changhao Liang, Hiroaki Ogata
- ShelfAware: Real-Time Visual-Inertial Semantic Localization in Quasi-Static Environments with Low-Cost Sensors
- https://arxiv.org/abs/2512.09065
- arXiv:2512.09065v1 Announce Type: new
-Abstract: Many indoor workspaces are quasi-static: global layout is stable but local semantics change continually, producing repetitive geometry, dynamic clutter, and perceptual noise that defeat vision-based localization. We present ShelfAware, a semantic particle filter for robust global localization that treats scene semantics as statistical evidence over object categories rather than fixed landmarks. ShelfAware fuses a depth likelihood with a category-centric semantic similarity and uses a precomputed bank of semantic viewpoints to perform inverse semantic proposals inside MCL, yielding fast, targeted hypothesis generation on low-cost, vision-only hardware. Across 100 global-localization trials spanning four conditions (cart-mounted, wearable, dynamic obstacles, and sparse semantics) in a semantically dense, retail environment, ShelfAware achieves a 96% success rate (vs. 22% MCL and 10% AMCL) with a mean time-to-convergence of 1.91s, attains the lowest translational RMSE in all conditions, and maintains stable tracking in 80% of tested sequences, all while running in real time on a consumer laptop-class platform. By modeling semantics distributionally at the category level and leveraging inverse proposals, ShelfAware resolves geometric aliasing and semantic drift common to quasi-static domains. Because the method requires only vision sensors and VIO, it integrates as an infrastructure-free building block for mobile robots in warehouses, labs, and retail settings; as a representative application, it also supports the creation of assistive devices providing start-anytime, shared-control assistive navigation for people with visual impairments.
- oai:arXiv.org:2512.09065v1
- cs.RO
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Metric-driven numerical methods
+ https://arxiv.org/abs/2512.10083
+ arXiv:2512.10083v1 Announce Type: new
+Abstract: In this paper, we explore the concept of metric-driven numerical methods as a powerful tool for solving various types of multiscale partial differential equations. Our focus is on computing constrained minimizers of functionals - or, equivalently, by considering the associated Euler-Lagrange equations - the solution of a class of eigenvalue problems that may involve nonlinearities in the eigenfunctions. We introduce metric-driven methods for such problems via Riemannian gradient techniques, leveraging the idea that gradients can be represented in different metrics (so-called Sobolev gradients) to accelerate convergence. We show that the choice of metric not only leads to specific metric-driven iterative schemes, but also induces approximation spaces with enhanced properties, particularly in low-regularity regimes or when the solution exhibits heterogeneous multiscale features. In fact, we recover a well-known class of multiscale spaces based on the Localized Orthogonal Decomposition (LOD), now derived from a new perspective. Alongside a discussion of the metric-driven approach for a model problem, we also demonstrate its application to simulating the ground states of spin-orbit-coupled Bose-Einstein condensates.
+ oai:arXiv.org:2512.10083v1
+ math.NA
+ cs.NA
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shivendra Agrawal, Jake Brawer, Ashutosh Naik, Alessandro Roncone, Bradley Hayes
+ Patrick Henning, Laura Huynh, Daniel Peterseim
- ORCA: Open-ended Response Correctness Assessment for Audio Question Answering
- https://arxiv.org/abs/2512.09066
- arXiv:2512.09066v1 Announce Type: new
-Abstract: Evaluating open-ended responses from large audio language models (LALMs) is challenging because human annotators often genuinely disagree on answer correctness due to multiple valid interpretations, partial correctness, and subjective judgment. Traditional metrics reporting only mean scores fail to capture this uncertainty. We present ORCA (Open-ended Response Correctness Assessment), a framework that models the variability in human judgments using Beta distributions to predict both expected correctness and uncertainty. Our three-stage annotation framework combines human judgment with structured feedback and iterative refinement to simultaneously curate training data and improve benchmark quality. We collected 11,721 annotations across 3,580 question-answer pairs from 15 LALMs on two audio QA benchmarks, achieving inter-annotator agreement of 0.82 (Krippendorff's alpha). ORCA achieves 0.91 Spearman correlation with mean human judgments, matching or outperforming LLM-judge baselines while providing uncertainty estimates and requiring significantly less compute. We release our models, code, and curated dataset.
- oai:arXiv.org:2512.09066v1
- cs.SD
- cs.AI
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Evaluation of Risk and Resilience of the MBTA Green Rapid Transit System
+ https://arxiv.org/abs/2512.10088
+ arXiv:2512.10088v1 Announce Type: new
+Abstract: The Transportation Systems Sector is one of the sixteen critical infrastructure sectors identified by the Cybersecurity and Infrastructure Security Agency (CISA) and plays a crucial role in ensuring public safety, economic stability, and national security. The Massachusetts Bay Transportation Authority (MBTA) serves as the primary public transportation system in the Greater Boston Area, with the Green Line representing one of the oldest and most complex rapid transit systems in the network. This paper presents a network-based risk and resilience assessment of the MBTA Green Line using graph theory, network metrics, and the Model-Based Risk Analysis (MBRA) tool. The original 70-station Green Line network is simplified into a 17-node model, and key metrics, including degree centrality, betweenness centrality, eigenvector centrality, spectral radius, node robustness, and blocking nodes, are computed using Python-based analysis. Critical vulnerability is derived using the MBRA resiliency equation, and random, targeted, and cyber-physical attack scenarios are evaluated. The results identify North Station, Government Center, Haymarket, Copley, and Kenmore as the most critical nodes. A fault tree analysis between Kenmore and Copley further demonstrates the impact of budget allocation on threat reduction. This work highlights key vulnerabilities in the Green Line network and provides actionable recommendations to improve resilience against cyber-physical threats.
+ oai:arXiv.org:2512.10088v1
+ cs.CR
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- \v{S}imon Sedl\'a\v{c}ek, Sara Barahona, Bolaji Yusuf, Laura Herrera-Alarc\'on, Santosh Kesiraju, Cecilia Bola\~nos, Alicia Lozano-Diez, Sathvik Udupa, Fernando L\'opez, Allison Ferner, Ramani Duraiswami, Jan \v{C}ernock\'y
+ Anil Kumar Gorthi
- Contrast transfer functions help quantify neural network out-of-distribution generalization in HRTEM
- https://arxiv.org/abs/2512.09067
- arXiv:2512.09067v1 Announce Type: new
-Abstract: Neural networks, while effective for tackling many challenging scientific tasks, are not known to perform well out-of-distribution (OOD), i.e., within domains which differ from their training data. Understanding neural network OOD generalization is paramount to their successful deployment in experimental workflows, especially when ground-truth knowledge about the experiment is hard to establish or experimental conditions significantly vary. With inherent access to ground-truth information and fine-grained control of underlying distributions, simulation-based data curation facilitates precise investigation of OOD generalization behavior. Here, we probe generalization with respect to imaging conditions of neural network segmentation models for high-resolution transmission electron microscopy (HRTEM) imaging of nanoparticles, training and measuring the OOD generalization of over 12,000 neural networks using synthetic data generated via random structure sampling and multislice simulation. Using the HRTEM contrast transfer function, we further develop a framework to compare information content of HRTEM datasets and quantify OOD domain shifts. We demonstrate that neural network segmentation models enjoy significant performance stability, but will smoothly and predictably worsen as imaging conditions shift from the training distribution. Lastly, we consider limitations of our approach in explaining other OOD shifts, such as of the atomic structures, and discuss complementary techniques for understanding generalization in such settings.
- oai:arXiv.org:2512.09067v1
- cs.LG
- cond-mat.mtrl-sci
- Thu, 11 Dec 2025 00:00:00 -0500
+ Algorithm-Driven On-Chip Integration for High Density and Low Cost
+ https://arxiv.org/abs/2512.10089
+ arXiv:2512.10089v1 Announce Type: new
+Abstract: Growing interest in semiconductor workforce development has generated demand for platforms capable of supporting large numbers of independent hardware designs for research and training without imposing high per-project overhead. Traditional multi-project wafer (MPW) services based solely on physical co-placement have historically met this need, yet their scalability breaks down as project counts rise. Recent efforts towards scalable chip tapeouts mitigate these limitations by integrating many small designs within a shared die and attempt to amortize costly resources such as IO pads and memory macros. However, foundational principles for arranging, linking, and validating such densely integrated design sites have received limited systematic investigation. This work presents a new approach with three key techniques to address this gap. First, we establish a structured formulation of the design space that enables automated, algorithm-driven packing of many projects, replacing manual layout practices. Second, we introduce an architecture that exploits only the narrow-area regions between sites to deliver on off-chip communication and other shared needs. Third, we provide a practical approach for on-chip power domains enabling per-project power characterization at a standard laboratory bench and requiring no expertise in low-power ASIC design. Experimental results show that our approach achieves substantial area reductions of up to 13x over state-of-the-art physical-only aggregation methods, offering a scalable and cost-effective path forward for large-scale tapeout environments.
+ oai:arXiv.org:2512.10089v1
+ cs.AR
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Luis Rangel DaCosta, Mary C. Scott
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jeongeun Kim, Sabrina Yarzada, Paul Chen, Christopher Torng
- KD-OCT: Efficient Knowledge Distillation for Clinical-Grade Retinal OCT Classification
- https://arxiv.org/abs/2512.09069
- arXiv:2512.09069v1 Announce Type: new
-Abstract: Age-related macular degeneration (AMD) and choroidal neovascularization (CNV)-related conditions are leading causes of vision loss worldwide, with optical coherence tomography (OCT) serving as a cornerstone for early detection and management. However, deploying state-of-the-art deep learning models like ConvNeXtV2-Large in clinical settings is hindered by their computational demands. Therefore, it is desirable to develop efficient models that maintain high diagnostic performance while enabling real-time deployment. In this study, a novel knowledge distillation framework, termed KD-OCT, is proposed to compress a high-performance ConvNeXtV2-Large teacher model, enhanced with advanced augmentations, stochastic weight averaging, and focal loss, into a lightweight EfficientNet-B2 student for classifying normal, drusen, and CNV cases. KD-OCT employs real-time distillation with a combined loss balancing soft teacher knowledge transfer and hard ground-truth supervision. The effectiveness of the proposed method is evaluated on the Noor Eye Hospital (NEH) dataset using patient-level cross-validation. Experimental results demonstrate that KD-OCT outperforms comparable multi-scale or feature-fusion OCT classifiers in efficiency- accuracy balance, achieving near-teacher performance with substantial reductions in model size and inference time. Despite the compression, the student model exceeds most existing frameworks, facilitating edge deployment for AMD screening. Code is available at https://github.com/erfan-nourbakhsh/KD- OCT.
- oai:arXiv.org:2512.09069v1
- cs.CV
+ Interpretable Embeddings with Sparse Autoencoders: A Data Analysis Toolkit
+ https://arxiv.org/abs/2512.10092
+ arXiv:2512.10092v1 Announce Type: new
+Abstract: Analyzing large-scale text corpora is a core challenge in machine learning, crucial for tasks like identifying undesirable model behaviors or biases in training data. Current methods often rely on costly LLM-based techniques (e.g. annotating dataset differences) or dense embedding models (e.g. for clustering), which lack control over the properties of interest. We propose using sparse autoencoders (SAEs) to create SAE embeddings: representations whose dimensions map to interpretable concepts. Through four data analysis tasks, we show that SAE embeddings are more cost-effective and reliable than LLMs and more controllable than dense embeddings. Using the large hypothesis space of SAEs, we can uncover insights such as (1) semantic differences between datasets and (2) unexpected concept correlations in documents. For instance, by comparing model responses, we find that Grok-4 clarifies ambiguities more often than nine other frontier models. Relative to LLMs, SAE embeddings uncover bigger differences at 2-8x lower cost and identify biases more reliably. Additionally, SAE embeddings are controllable: by filtering concepts, we can (3) cluster documents along axes of interest and (4) outperform dense embeddings on property-based retrieval. Using SAE embeddings, we study model behavior with two case studies: investigating how OpenAI model behavior has changed over time and finding "trigger" phrases learned by Tulu-3 (Lambert et al., 2024) from its training data. These results position SAEs as a versatile tool for unstructured data analysis and highlight the neglected importance of interpreting models through their data.
+ oai:arXiv.org:2512.10092v1cs.AIcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Erfan Nourbakhsh, Nasrin Sanjari, Ali Nourbakhsh
+ http://creativecommons.org/licenses/by/4.0/
+ Nick Jiang, Xiaoqing Sun, Lisa Dunlap, Lewis Smith, Neel Nanda
- Banach neural operator for Navier-Stokes equations
- https://arxiv.org/abs/2512.09070
- arXiv:2512.09070v1 Announce Type: new
-Abstract: Classical neural networks are known for their ability to approximate mappings between finite-dimensional spaces, but they fall short in capturing complex operator dynamics across infinite-dimensional function spaces. Neural operators, in contrast, have emerged as powerful tools in scientific machine learning for learning such mappings. However, standard neural operators typically lack mechanisms for mixing or attending to input information across space and time. In this work, we introduce the Banach neural operator (BNO) -- a novel framework that integrates Koopman operator theory with deep neural networks to predict nonlinear, spatiotemporal dynamics from partial observations. The BNO approximates a nonlinear operator between Banach spaces by combining spectral linearization (via Koopman theory) with deep feature learning (via convolutional neural networks and nonlinear activations). This sequence-to-sequence model captures dominant dynamic modes and allows for mesh-independent prediction. Numerical experiments on the Navier-Stokes equations demonstrate the method's accuracy and generalization capabilities. In particular, BNO achieves robust zero-shot super-resolution in unsteady flow prediction and consistently outperforms conventional Koopman-based methods and deep learning models.
- oai:arXiv.org:2512.09070v1
- cs.NE
- cs.LG
- stat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
+ Does Timeboost Reduce MEV-Related Spam? Theory and Evidence from Layer-2 Transactions
+ https://arxiv.org/abs/2512.10094
+ arXiv:2512.10094v1 Announce Type: new
+Abstract: Maximal extractable value opportunities often induce spam in Layer-2 blockchains: many identical transactions are submitted near simultaneously, most of which revert, wasting blockspace. We study Timeboost, a mechanism on Arbitrum that auctions a timestamp advantage, crucial under first-come first-served sequencing rules. We develop a game-theoretic model in which users choose the number of transaction copies to submit, and extend upon the baseline setting by modeling the Timeboost auction and subsequent transaction submission behavior. We show that Timeboost reduces spam and increases sequencer/DAO revenue in equilibrium relative to the baseline, transferring user payments from revert costs to auction bids. Empirically, we assemble mempool data from multiple Layer-2 networks, measuring spam via identical transactions submitted in narrow time intervals, and conduct an event study around Timeboost adoption on Arbitrum using other L2s as contemporaneous benchmarks. We find a decline in MEV-related spam and an increase in revenue on Arbitrum post-adoption, consistent with model predictions.
+ oai:arXiv.org:2512.10094v1
+ cs.GT
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1063/5.0284818
- Bo Zhang
+ http://creativecommons.org/licenses/by/4.0/
+ Brian Zhu
- Adaptive Thresholding for Visual Place Recognition using Negative Gaussian Mixture Statistics
- https://arxiv.org/abs/2512.09071
- arXiv:2512.09071v1 Announce Type: new
-Abstract: Visual place recognition (VPR) is an important component technology for camera-based mapping and navigation applications. This is a challenging problem because images of the same place may appear quite different for reasons including seasonal changes, weather illumination, structural changes to the environment, as well as transient pedestrian or vehicle traffic. Papers focusing on generating image descriptors for VPR report their results using metrics such as recall@K and ROC curves. However, for a robot implementation, determining which matches are sufficiently good is often reduced to a manually set threshold. And it is difficult to manually select a threshold that will work for a variety of visual scenarios. This paper addresses the problem of automatically selecting a threshold for VPR by looking at the 'negative' Gaussian mixture statistics for a place - image statistics indicating not this place. We show that this approach can be used to select thresholds that work well for a variety of image databases and image descriptors.
- oai:arXiv.org:2512.09071v1
+ TraceFlow: Dynamic 3D Reconstruction of Specular Scenes Driven by Ray Tracing
+ https://arxiv.org/abs/2512.10095
+ arXiv:2512.10095v1 Announce Type: new
+Abstract: We present TraceFlow, a novel framework for high-fidelity rendering of dynamic specular scenes by addressing two key challenges: precise reflection direction estimation and physically accurate reflection modeling. To achieve this, we propose a Residual Material-Augmented 2D Gaussian Splatting representation that models dynamic geometry and material properties, allowing accurate reflection ray computation. Furthermore, we introduce a Dynamic Environment Gaussian and a hybrid rendering pipeline that decomposes rendering into diffuse and specular components, enabling physically grounded specular synthesis via rasterization and ray tracing. Finally, we devise a coarse-to-fine training strategy to improve optimization stability and promote physically meaningful decomposition. Extensive experiments on dynamic scene benchmarks demonstrate that TraceFlow outperforms prior methods both quantitatively and qualitatively, producing sharper and more realistic specular reflections in complex dynamic environments.
+ oai:arXiv.org:2512.10095v1cs.CV
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Nick Trinh, Damian Lyons
+ http://creativecommons.org/licenses/by/4.0/
+ Jiachen Tao, Junyi Wu, Haoxuan Wang, Zongxin Yang, Dawen Cai, Yan Yan
- Modular Deep-Learning-Based Early Warning System for Deadly Heatwave Prediction
- https://arxiv.org/abs/2512.09074
- arXiv:2512.09074v1 Announce Type: new
-Abstract: Severe heatwaves in urban areas significantly threaten public health, calling for establishing early warning strategies. Despite predicting occurrence of heatwaves and attributing historical mortality, predicting an incoming deadly heatwave remains a challenge due to the difficulty in defining and estimating heat-related mortality. Furthermore, establishing an early warning system imposes additional requirements, including data availability, spatial and temporal robustness, and decision costs. To address these challenges, we propose DeepTherm, a modular early warning system for deadly heatwave prediction without requiring heat-related mortality history. By highlighting the flexibility of deep learning, DeepTherm employs a dual-prediction pipeline, disentangling baseline mortality in the absence of heatwaves and other irregular events from all-cause mortality. We evaluated DeepTherm on real-world data across Spain. Results demonstrate consistent, robust, and accurate performance across diverse regions, time periods, and population groups while allowing trade-off between missed alarms and false alarms.
- oai:arXiv.org:2512.09074v1
+ MedXAI: A Retrieval-Augmented and Self-Verifying Framework for Knowledge-Guided Medical Image Analysis
+ https://arxiv.org/abs/2512.10098
+ arXiv:2512.10098v1 Announce Type: new
+Abstract: Accurate and interpretable image-based diagnosis remains a fundamental challenge in medical AI, particularly un- der domain shifts and rare-class conditions. Deep learning mod- els often struggle with real-world distribution changes, exhibit bias against infrequent pathologies, and lack the transparency required for deployment in safety-critical clinical environments. We introduce MedXAI (An Explainable Framework for Med- ical Imaging Classification), a unified expert knowledge based framework that integrates deep vision models with clinician- derived expert knowledge to improve generalization, reduce rare- class bias, and provide human-understandable explanations by localizing the relevant diagnostic features rather than relying on technical post-hoc methods (e.g., Saliency Maps, LIME). We evaluate MedXAI across heterogeneous modalities on two challenging tasks: (i) Seizure Onset Zone localization from resting-state fMRI, and (ii) Diabetic Retinopathy grading. Ex periments on ten multicenter datasets show consistent gains, including a 3% improvement in cross-domain generalization and a 10% improvmnet in F1 score of rare class, substantially outperforming strong deep learning baselines. Ablations confirm that the symbolic components act as effective clinical priors and regularizers, improving robustness under distribution shift. MedXAI delivers clinically aligned explanations while achieving superior in-domain and cross-domain performance, particularly for rare diseases in multimodal medical AI.
+ oai:arXiv.org:2512.10098v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Shangqing Xu, Zhiyuan Zhao, Megha Sharma, Jos\'e Mar\'ia Mart\'in-Olalla, Alexander Rodr\'iguez, Gregory A. Wellenius, B. Aditya Prakash
+ Midhat Urooj, Ayan Banerjee, Farhat Shaikh, Kuntal Thakur, Sandeep Gupta
- Beyond the Hype: Comparing Lightweight and Deep Learning Models for Air Quality Forecasting
- https://arxiv.org/abs/2512.09076
- arXiv:2512.09076v1 Announce Type: new
-Abstract: Accurate forecasting of urban air pollution is essential for protecting public health and guiding mitigation policies. While Deep Learning (DL) and hybrid pipelines dominate recent research, their complexity and limited interpretability hinder operational use. This study investigates whether lightweight additive models -- Facebook Prophet (FBP) and NeuralProphet (NP) -- can deliver competitive forecasts for particulate matter (PM$_{2.5}$, PM$_{10}$) in Beijing, China. Using multi-year pollutant and meteorological data, we applied systematic feature selection (correlation, mutual information, mRMR), leakage-safe scaling, and chronological data splits. Both models were trained with pollutant and precursor regressors, with NP additionally leveraging lagged dependencies. For context, two machine learning baselines (LSTM, LightGBM) and one traditional statistical model (SARIMAX) were also implemented. Performance was evaluated on a 7-day holdout using MAE, RMSE, and $R^2$. Results show that FBP consistently outperformed NP, SARIMAX, and the learning-based baselines, achieving test $R^2$ above 0.94 for both pollutants. These findings demonstrate that interpretable additive models remain competitive with both traditional and complex approaches, offering a practical balance of accuracy, transparency, and ease of deployment.
- oai:arXiv.org:2512.09076v1
+ Push Smarter, Not Harder: Hierarchical RL-Diffusion Policy for Efficient Nonprehensile Manipulation
+ https://arxiv.org/abs/2512.10099
+ arXiv:2512.10099v1 Announce Type: new
+Abstract: Nonprehensile manipulation, such as pushing objects across cluttered environments, presents a challenging control problem due to complex contact dynamics and long-horizon planning requirements. In this work, we propose HeRD, a hierarchical reinforcement learning-diffusion policy that decomposes pushing tasks into two levels: high-level goal selection and low-level trajectory generation. We employ a high-level reinforcement learning (RL) agent to select intermediate spatial goals, and a low-level goal-conditioned diffusion model to generate feasible, efficient trajectories to reach them.
+ This architecture combines the long-term reward maximizing behaviour of RL with the generative capabilities of diffusion models. We evaluate our method in a 2D simulation environment and show that it outperforms the state-of-the-art baseline in success rate, path efficiency, and generalization across multiple environment configurations. Our results suggest that hierarchical control with generative low-level planning is a promising direction for scalable, goal-directed nonprehensile manipulation. Code, documentation, and trained models are available: https://github.com/carosteven/HeRD.
+ oai:arXiv.org:2512.10099v1
+ cs.ROcs.LG
- cs.AI
- stat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Moazzam Umer Gondal, Hamad ul Qudous, Asma Ahmad Farhan
+ Steven Caro, Stephen L. Smith
- Almost-Optimal Approximation Algorithms for Global Minimum Cut in Directed Graphs
- https://arxiv.org/abs/2512.09080
- arXiv:2512.09080v1 Announce Type: new
-Abstract: We develop new $(1+\epsilon)$-approximation algorithms for finding the global minimum edge-cut in a directed edge-weighted graph, and for finding the global minimum vertex-cut in a directed vertex-weighted graph. Our algorithms are randomized, and have a running time of $O\left(m^{1+o(1)}/\epsilon\right)$ on any $m$-edge $n$-vertex input graph, assuming all edge/vertex weights are polynomially-bounded. In particular, for any constant $\epsilon>0$, our algorithms have an almost-optimal running time of $O\left(m^{1+o(1)}\right)$. The fastest previously-known running time for this setting, due to (Cen et al., FOCS 2021), is $\tilde{O}\left(\min\left\{n^2/\epsilon^2,m^{1+o(1)}\sqrt{n}\right\}\right)$ for Minimum Edge-Cut, and $\tilde{O}\left(n^2/\epsilon^2\right)$ for Minimum Vertex-Cut. Our results further extend to the rooted variants of the Minimum Edge-Cut and Minimum Vertex-Cut problems, where the algorithm is additionally given a root vertex $r$, and the goal is to find a minimum-weight cut separating any vertex from the root $r$. In terms of techniques, we build upon and extend a framework that was recently introduced by (Chuzhoy et al., SODA 2026) for solving the Minimum Vertex-Cut problem in unweighted directed graphs. Additionally, in order to obtain our result for the Global Minimum Vertex-Cut problem, we develop a novel black-box reduction from this problem to its rooted variant. Prior to our work, such reductions were only known for more restricted settings, such as when all vertex-weights are unit.
- oai:arXiv.org:2512.09080v1
- cs.DS
- Thu, 11 Dec 2025 00:00:00 -0500
+ Robust AI Security and Alignment: A Sisyphean Endeavor?
+ https://arxiv.org/abs/2512.10100
+ arXiv:2512.10100v1 Announce Type: new
+Abstract: This manuscript establishes information-theoretic limitations for robustness of AI security and alignment by extending G\"odel's incompleteness theorem to AI. Knowing these limitations and preparing for the challenges they bring is critically important for the responsible adoption of the AI technology. Practical approaches to dealing with these challenges are provided as well. Broader implications for cognitive reasoning limitations of AI systems are also proven.
+ oai:arXiv.org:2512.10100v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ron Mosenzon
+ http://creativecommons.org/licenses/by/4.0/
+ Apostol Vassilev
- AgentComp: From Agentic Reasoning to Compositional Mastery in Text-to-Image Models
- https://arxiv.org/abs/2512.09081
- arXiv:2512.09081v1 Announce Type: new
-Abstract: Text-to-image generative models have achieved remarkable visual quality but still struggle with compositionality$-$accurately capturing object relationships, attribute bindings, and fine-grained details in prompts. A key limitation is that models are not explicitly trained to differentiate between compositionally similar prompts and images, resulting in outputs that are close to the intended description yet deviate in fine-grained details. To address this, we propose AgentComp, a framework that explicitly trains models to better differentiate such compositional variations and enhance their reasoning ability. AgentComp leverages the reasoning and tool-use capabilities of large language models equipped with image generation, editing, and VQA tools to autonomously construct compositional datasets. Using these datasets, we apply an agentic preference optimization method to fine-tune text-to-image models, enabling them to better distinguish between compositionally similar samples and resulting in overall stronger compositional generation ability. AgentComp achieves state-of-the-art results on compositionality benchmarks such as T2I-CompBench, without compromising image quality$-$a common drawback in prior approaches$-$and even generalizes to other capabilities not explicitly trained for, such as text rendering.
- oai:arXiv.org:2512.09081v1
+ Hierarchical Instance Tracking to Balance Privacy Preservation with Accessible Information
+ https://arxiv.org/abs/2512.10102
+ arXiv:2512.10102v1 Announce Type: new
+Abstract: We propose a novel task, hierarchical instance tracking, which entails tracking all instances of predefined categories of objects and parts, while maintaining their hierarchical relationships. We introduce the first benchmark dataset supporting this task, consisting of 2,765 unique entities that are tracked in 552 videos and belong to 40 categories (across objects and parts). Evaluation of seven variants of four models tailored to our novel task reveals the new dataset is challenging. Our dataset is available at https://vizwiz.org/tasks-and-datasets/hierarchical-instance-tracking/
+ oai:arXiv.org:2512.10102v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Arman Zarei, Jiacheng Pan, Matthew Gwilliam, Soheil Feizi, Zhenheng Yang
+ Neelima Prasad, Jarek Reynolds, Neel Karsanbhai, Tanusree Sharma, Lotus Zhang, Abigale Stangl, Yang Wang, Leah Findlater, Danna Gurari
- GS-KAN: Parameter-Efficient Kolmogorov-Arnold Networks via Sprecher-Type Shared Basis Functions
- https://arxiv.org/abs/2512.09084
- arXiv:2512.09084v1 Announce Type: new
-Abstract: The Kolmogorov-Arnold representation theorem offers a theoretical alternative to Multi-Layer Perceptrons (MLPs) by placing learnable univariate functions on edges rather than nodes. While recent implementations such as Kolmogorov-Arnold Networks (KANs) demonstrate high approximation capabilities, they suffer from significant parameter inefficiency due to the requirement of maintaining unique parameterizations for every network edge. In this work, we propose GS-KAN (Generalized Sprecher-KAN), a lightweight architecture inspired by David Sprecher's refinement of the superposition theorem. GS-KAN constructs unique edge functions by applying learnable linear transformations to a single learnable, shared parent function per layer. We evaluate GS-KAN against existing KAN architectures and MLPs across synthetic function approximation, tabular data regression and image classification tasks. Our results demonstrate that GS-KAN outperforms both MLPs and standard KAN baselines on continuous function approximation tasks while maintaining superior parameter efficiency. Additionally, GS-KAN achieves competitive performance with existing KAN architectures on tabular regression and outperforms MLPs on high-dimensional classification tasks. Crucially, the proposed architecture enables the deployment of KAN-based architectures in high-dimensional regimes under strict parameter constraints, a setting where standard implementations are typically infeasible due to parameter explosion. The source code is available at https://github.com/rambamn48/gs-impl.
- oai:arXiv.org:2512.09084v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ LLM-PEA: Leveraging Large Language Models Against Phishing Email Attacks
+ https://arxiv.org/abs/2512.10104
+ arXiv:2512.10104v1 Announce Type: new
+Abstract: Email phishing is one of the most prevalent and globally consequential vectors of cyber intrusion. As systems increasingly deploy Large Language Models (LLMs) applications, these systems face evolving phishing email threats that exploit their fundamental architectures. Current LLMs require substantial hardening before deployment in email security systems, particularly against coordinated multi-vector attacks that exploit architectural vulnerabilities. This paper proposes LLMPEA, an LLM-based framework to detect phishing email attacks across multiple attack vectors, including prompt injection, text refinement, and multilingual attacks. We evaluate three frontier LLMs (e.g., GPT-4o, Claude Sonnet 4, and Grok-3) and comprehensive prompting design to assess their feasibility, robustness, and limitations against phishing email attacks. Our empirical analysis reveals that LLMs can detect the phishing email over 90% accuracy while we also highlight that LLM-based phishing email detection systems could be exploited by adversarial attack, prompt injection, and multilingual attacks. Our findings provide critical insights for LLM-based phishing detection in real-world settings where attackers exploit multiple vulnerabilities in combination.
+ oai:arXiv.org:2512.10104v1
+ cs.CR
+ cs.IR
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Oscar Eliasson
+ Najmul Hassan, Prashanth BusiReddyGari, Haitao Zhao, Yihao Ren, Jinsheng Xu, Shaohu Zhang
- Mental Models of Autonomy and Sentience Shape Reactions to AI
- https://arxiv.org/abs/2512.09085
- arXiv:2512.09085v1 Announce Type: new
-Abstract: Narratives about artificial intelligence (AI) entangle autonomy, the capacity to self-govern, with sentience, the capacity to sense and feel. AI agents that perform tasks autonomously and companions that recognize and express emotions may activate mental models of autonomy and sentience, respectively, provoking distinct reactions. To examine this possibility, we conducted three pilot studies (N = 374) and four preregistered vignette experiments describing an AI as autonomous, sentient, both, or neither (N = 2,702). Activating a mental model of sentience increased general mind perception (cognition and emotion) and moral consideration more than autonomy, but autonomy increased perceived threat more than sentience. Sentience also increased perceived autonomy more than vice versa. Based on a within-paper meta-analysis, sentience changed reactions more than autonomy on average. By disentangling different mental models of AI, we can study human-AI interaction with more precision to better navigate the detailed design of anthropomorphized AI and prompting interfaces.
- oai:arXiv.org:2512.09085v1
- cs.HC
+ Modeling Narrative Archetypes in Conspiratorial Narratives: Insights from Singapore-Based Telegram Groups
+ https://arxiv.org/abs/2512.10105
+ arXiv:2512.10105v1 Announce Type: new
+Abstract: Conspiratorial discourse is increasingly embedded within digital communication ecosystems, yet its structure and spread remain difficult to study. This work analyzes conspiratorial narratives in Singapore-based Telegram groups, showing that such content is woven into everyday discussions rather than confined to isolated echo chambers. We propose a two-stage computational framework. First, we fine-tune RoBERTa-large to classify messages as conspiratorial or not, achieving an F1-score of 0.866 on 2,000 expert-labeled messages. Second, we build a signed belief graph in which nodes represent messages and edge signs reflect alignment in belief labels, weighted by textual similarity. We introduce a Signed Belief Graph Neural Network (SiBeGNN) that uses a Sign Disentanglement Loss to learn embeddings that separate ideological alignment from stylistic features.
+ Using hierarchical clustering on these embeddings, we identify seven narrative archetypes across 553,648 messages: legal topics, medical concerns, media discussions, finance, contradictions in authority, group moderation, and general chat. SiBeGNN yields stronger clustering quality (cDBI = 8.38) than baseline methods (13.60 to 67.27), supported by 88 percent inter-rater agreement in expert evaluations. Our analysis shows that conspiratorial messages appear not only in clusters focused on skepticism or distrust, but also within routine discussions of finance, law, and everyday matters. These findings challenge common assumptions about online radicalization by demonstrating that conspiratorial discourse operates within ordinary social interaction. The proposed framework advances computational methods for belief-driven discourse analysis and offers applications for stance detection, political communication studies, and content moderation policy.
+ oai:arXiv.org:2512.10105v1cs.AI
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Janet V. T. Pauketat, Daniel B. Shank, Aikaterina Manoli, Jacy Reese Anthis
+ Soorya Ram Shimgekar, Abhay Goyal, Lam Yin Cheung, Roy Ka-Wei Lee, Koustuv Saha, Pi Zonooz, Navin Kumar
- Inferring Operator Emotions from a Motion-Controlled Robotic Arm
- https://arxiv.org/abs/2512.09086
- arXiv:2512.09086v1 Announce Type: new
-Abstract: A remote robot operator's affective state can significantly impact the resulting robot's motions leading to unexpected consequences, even when the user follows protocol and performs permitted tasks. The recognition of a user operator's affective states in remote robot control scenarios is, however, underexplored. Current emotion recognition methods rely on reading the user's vital signs or body language, but the devices and user participation these measures require would add limitations to remote robot control. We demonstrate that the functional movements of a remote-controlled robotic avatar, which was not designed for emotional expression, can be used to infer the emotional state of the human operator via a machine-learning system. Specifically, our system achieved 83.3$\%$ accuracy in recognizing the user's emotional state expressed by robot movements, as a result of their hand motions. We discuss the implications of this system on prominent current and future remote robot operation and affective robotic contexts.
- oai:arXiv.org:2512.09086v1
- cs.RO
+ A Simulation Framework for Studying Recommendation-Network Co-evolution in Social Platforms
+ https://arxiv.org/abs/2512.10106
+ arXiv:2512.10106v1 Announce Type: new
+Abstract: Studying how recommendation systems reshape social networks is difficult on live platforms: confounds abound, and controlled experiments risk user harm. We present an agent-based simulator where content production, tie formation, and a graph attention network (GAT) recommender co-evolve in a closed loop. We calibrate parameters using Mastodon data and validate out-of-sample against Bluesky (4--6\% error on structural metrics; 10--15\% on held-out temporal splits). Across 18 configurations at 100 agents, we find that \emph{activation timing} affects outcomes: introducing recommendations at $t=10$ vs.\ $t=40$ decreases transitivity by 10\% while engagement differs by $<$8\%. Delaying activation increases content diversity by 9\% while reducing modularity by 4\%. Scaling experiments ($n$ up to 5,000) show the effect persists but attenuates. Jacobian analysis confirms local stability under bounded reactance parameters. We release configuration schemas and reproduction scripts.
+ oai:arXiv.org:2512.10106v1
+ cs.SI
+ cs.MA
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Gaurav Koley, Sanika Digrajkar
+
+
+ Generate-Then-Validate: A Novel Question Generation Approach Using Small Language Models
+ https://arxiv.org/abs/2512.10110
+ arXiv:2512.10110v1 Announce Type: new
+Abstract: We explore the use of small language models (SLMs) for automatic question generation as a complement to the prevalent use of their large counterparts in learning analytics research. We present a novel question generation pipeline that leverages both the text generation and the probabilistic reasoning abilities of SLMs to generate high-quality questions. Adopting a "generate-then-validate" strategy, our pipeline first performs expansive generation to create an abundance of candidate questions and refine them through selective validation based on novel probabilistic reasoning. We conducted two evaluation studies, one with seven human experts and the other with a large language model (LLM), to assess the quality of the generated questions. Most judges (humans or LLMs) agreed that the generated questions had clear answers and generally aligned well with the intended learning objectives. Our findings suggest that an SLM can effectively generate high-quality questions when guided by a well-designed pipeline that leverages its strengths.
+ oai:arXiv.org:2512.10110v1
+ cs.CL
+ cs.HC
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yumou Wei, John Stamper, Paulo F. Carvalho
+
+
+ Dark Personality Traits and Online Toxicity: Linking Self-Reports to Reddit Activity
+ https://arxiv.org/abs/2512.10113
+ arXiv:2512.10113v1 Announce Type: new
+Abstract: Dark personality traits have been linked to online misbehavior such as trolling, incivility, and toxic speech. Yet the relationship between these traits and actual online conduct remains understudied. Here we investigate the associations between dark traits, online toxicity, and the socio-linguistic characteristics of online user activity. To explore this relationship, we developed a Web application that integrates validated psychological questionnaires from Amazon Mechanical Turk users to their Reddit activity data. This allowed collecting nearly 57K Reddit comments, including 2.2M tokens and 152.7K sentences from 114 users, that we systematically represent through 224 linguistic and behavioral features. We then examined their relationship to questionnaire-based trait measures via multiple correlation analyses. Among our findings is that dark traits primarily influence the production rather than the perception of online incivility. Sadistic and psychopathic tendencies are most strongly associated with overtly toxic language, whereas other dark dispositions manifest more subtly, often eluding simple textual proxies. Self-reported engagement in hostile behavior mirrors actual online activity, while existing hand-crafted textual proxies for dark triad traits show limited correspondence with our validated measures. Finally, bright and dark traits interact in nuanced ways, with extraversion reducing trolling tendencies and conscientiousness showing modest associations with entitlement and callousness. These findings deepen understanding of how personality shapes toxic online behavior and highlight both opportunities and challenges for developing reliable computational tools and targeted, effective moderation strategies.
+ oai:arXiv.org:2512.10113v1
+ cs.CYcs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xinyu Qi, Zeyu Deng, Shaun Alexander Macdonald, Liying Li, Chen Wang, Muhammad Ali Imran, Philip G. Zhao
+ Aldo Cerulli, Benedetta Tessa, Giuseppe La Selva, Oronzo Mazzeo, Lorenzo Cima, Lucia Monacis, Stefano Cresci
- A posteriori error estimates for mixed-dimensional Darcy flow using non-matching grids
- https://arxiv.org/abs/2512.09087
- arXiv:2512.09087v1 Announce Type: new
-Abstract: In this article, we extend the a posteriori error estimates for hierarchical mixed-dimensional elliptic equations developed in [Varela et al., J. Numer. Math., 48 (2023), pp. 247-280] to the setting of non-matching mixed-dimensional grids. The extension is achieved by introducing transfer grids between the planar subdomain and interface grids, together with stable discrete projection operators for primal (potential) and dual (flux) variables. The proposed non-matching estimators remain fully guaranteed and computable. Numerical experiments, including three-dimensional problems based on community benchmarks for incompressible Darcy flow in fractured porous media, demonstrate reliable performance of the estimators for the non-matching grids and effectivity that is comparable to the estimators for matching grids.
- oai:arXiv.org:2512.09087v1
- math.NA
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ AgriRegion: Region-Aware Retrieval for High-Fidelity Agricultural Advice
+ https://arxiv.org/abs/2512.10114
+ arXiv:2512.10114v1 Announce Type: new
+Abstract: Large Language Models (LLMs) have demonstrated significant potential in democratizing access to information. However, in the domain of agriculture, general-purpose models frequently suffer from contextual hallucination, which provides non-factual advice or answers are scientifically sound in one region but disastrous in another due to variations in soil, climate, and local regulations. We introduce AgriRegion, a Retrieval-Augmented Generation (RAG) framework designed specifically for high-fidelity, region-aware agricultural advisory. Unlike standard RAG approaches that rely solely on semantic similarity, AgriRegion incorporates a geospatial metadata injection layer and a region-prioritized re-ranking mechanism. By restricting the knowledge base to verified local agricultural extension services and enforcing geo-spatial constraints during retrieval, AgriRegion ensures that the advice regarding planting schedules, pest control, and fertilization is locally accurate. We create a novel benchmark dataset, AgriRegion-Eval, which comprises 160 domain-specific questions across 12 agricultural subfields. Experiments demonstrate that AgriRegion reduces hallucinations by 10-20% compared to state-of-the-art LLMs systems and significantly improves trust scores according to a comprehensive evaluation.
+ oai:arXiv.org:2512.10114v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jhabriel Varela, Christian E. Schaerer, Eirik Keilegavlen, Inga Berre
+ Mesafint Fanuel, Mahmoud Nabil Mahmoud, Crystal Cook Marshal, Vishal Lakhotia, Biswanath Dari, Kaushik Roy, Shaohu Zhang
+
+
+ Fast Functionally Redundant Inverse Kinematics for Robotic Toolpath Optimisation in Manufacturing Tasks
+ https://arxiv.org/abs/2512.10116
+ arXiv:2512.10116v1 Announce Type: new
+Abstract: Industrial automation with six-axis robotic arms is critical for many manufacturing tasks, including welding and additive manufacturing applications; however, many of these operations are functionally redundant due to the symmetrical tool axis, which effectively makes the operation a five-axis task. Exploiting this redundancy is crucial for achieving the desired workspace and dexterity required for the feasibility and optimisation of toolpath planning. Inverse kinematics algorithms can solve this in a fast, reactive framework, but these techniques are underutilised over the more computationally expensive offline planning methods. We propose a novel algorithm to solve functionally redundant inverse kinematics for robotic manipulation utilising a task space decomposition approach, the damped least-squares method and Halley's method to achieve fast and robust solutions with reduced joint motion. We evaluate our methodology in the case of toolpath optimisation in a cold spray coating application on a non-planar surface. The functionally redundant inverse kinematics algorithm can quickly solve motion plans that minimise joint motion, expanding the feasible operating space of the complex toolpath. We validate our approach on an industrial ABB manipulator and cold-spray gun executing the computed toolpath.
+ oai:arXiv.org:2512.10116v1
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Andrew Razjigaev, Hans Lohr, Alejandro Vargas-Uscategui, Peter King, Tirthankar Bandyopadhyay
- Calibrated Trust in Dealing with LLM Hallucinations: A Qualitative Study
- https://arxiv.org/abs/2512.09088
- arXiv:2512.09088v1 Announce Type: new
-Abstract: Hallucinations are outputs by Large Language Models (LLMs) that are factually incorrect yet appear plausible [1]. This paper investigates how such hallucinations influence users' trust in LLMs and users' interaction with LLMs. To explore this in everyday use, we conducted a qualitative study with 192 participants. Our findings show that hallucinations do not result in blanket mistrust but instead lead to context-sensitive trust calibration. Building on the calibrated trust model by Lee & See [2] and Afroogh et al.'s trust-related factors [3], we confirm expectancy [3], [4], prior experience [3], [4], [5], and user expertise & domain knowledge [3], [4] as userrelated (human) trust factors, and identify intuition as an additional factor relevant for hallucination detection. Additionally, we found that trust dynamics are further influenced by contextual factors, particularly perceived risk [3] and decision stakes [6]. Consequently, we validate the recursive trust calibration process proposed by Bl\"obaum [7] and extend it by including intuition as a user-related trust factor. Based on these insights, we propose practical recommendations for responsible and reflective LLM use.
- oai:arXiv.org:2512.09088v1
+ CHyLL: Learning Continuous Neural Representations of Hybrid Systems
+ https://arxiv.org/abs/2512.10117
+ arXiv:2512.10117v1 Announce Type: new
+Abstract: Learning the flows of hybrid systems that have both continuous and discrete time dynamics is challenging. The existing method learns the dynamics in each discrete mode, which suffers from the combination of mode switching and discontinuities in the flows. In this work, we propose CHyLL (Continuous Hybrid System Learning in Latent Space), which learns a continuous neural representation of a hybrid system without trajectory segmentation, event functions, or mode switching. The key insight of CHyLL is that the reset map glues the state space at the guard surface, reformulating the state space as a piecewise smooth quotient manifold where the flow becomes spatially continuous. Building upon these insights and the embedding theorems grounded in differential topology, CHyLL concurrently learns a singularity-free neural embedding in a higher-dimensional space and the continuous flow in it. We showcase that CHyLL can accurately predict the flow of hybrid systems with superior accuracy and identify the topological invariants of the hybrid systems. Finally, we apply CHyLL to the stochastic optimal control problem.
+ oai:arXiv.org:2512.10117v1
+ cs.LGcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.RO
+ cs.SY
+ eess.SP
+ eess.SY
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- The 3rd International Conference on Foundation and Large Language Models (FLLM2025), Vienna, Austria, 25-28 November 2025
- Adrian Ryser, Florian Allwein, Tim Schlippe
+ Sangli Teng, Hang Liu, Jingyu Song, Koushil Sreenath
- A Taxonomy of Numerical Differentiation Methods
- https://arxiv.org/abs/2512.09090
- arXiv:2512.09090v1 Announce Type: new
-Abstract: Differentiation is a cornerstone of computing and data analysis in every discipline of science and engineering. Indeed, most fundamental physics laws are expressed as relationships between derivatives in space and time. However, derivatives are rarely directly measurable and must instead be computed, often from noisy, potentially corrupt data streams. There is a rich and broad literature of computational differentiation algorithms, but many impose extra constraints to work correctly, e.g. periodic boundary conditions, or are compromised in the presence of noise and corruption. It can therefore be challenging to select the method best-suited to any particular problem. Here, we review a broad range of numerical methods for calculating derivatives, present important contextual considerations and choice points, compare relative advantages, and provide basic theory for each algorithm in order to assist users with the mathematical underpinnings. This serves as a practical guide to help scientists and engineers match methods to application domains. We also provide an open-source Python package, PyNumDiff, which contains a broad suite of methods for differentiating noisy data.
- oai:arXiv.org:2512.09090v1
- math.NA
- cs.CE
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Explicit Control Barrier Function-based Safety Filters and their Resource-Aware Computation
+ https://arxiv.org/abs/2512.10118
+ arXiv:2512.10118v1 Announce Type: new
+Abstract: This paper studies the efficient implementation of safety filters that are designed using control barrier functions (CBFs), which minimally modify a nominal controller to render it safe with respect to a prescribed set of states. Although CBF-based safety filters are often implemented by solving a quadratic program (QP) in real time, the use of off-the-shelf solvers for such optimization problems poses a challenge in applications where control actions need to be computed efficiently at very high frequencies. In this paper, we introduce a closed-form expression for controllers obtained through CBF-based safety filters. This expression is obtained by partitioning the state-space into different regions, with a different closed-form solution in each region. We leverage this formula to introduce a resource-aware implementation of CBF-based safety filters that detects changes in the partition region and uses the closed-form expression between changes. We showcase the applicability of our approach in examples ranging from aerospace control to safe reinforcement learning.
+ oai:arXiv.org:2512.10118v1
+ eess.SY
+ cs.SY
+ math.OC
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Pavel Komarov, Floris van Breugel, J. Nathan Kutz
+ Pol Mestres, Shima Sadat Mousavi, Pio Ong, Lizhi Yang, Ersin Das, Joel W. Burdick, Aaron D. Ames
- Explaining the Unseen: Multimodal Vision-Language Reasoning for Situational Awareness in Underground Mining Disasters
- https://arxiv.org/abs/2512.09092
- arXiv:2512.09092v1 Announce Type: new
-Abstract: Underground mining disasters produce pervasive darkness, dust, and collapses that obscure vision and make situational awareness difficult for humans and conventional systems. To address this, we propose MDSE, Multimodal Disaster Situation Explainer, a novel vision-language framework that automatically generates detailed textual explanations of post-disaster underground scenes. MDSE has three-fold innovations: (i) Context-Aware Cross-Attention for robust alignment of visual and textual features even under severe degradation; (ii) Segmentation-aware dual pathway visual encoding that fuses global and region-specific embeddings; and (iii) Resource-Efficient Transformer-Based Language Model for expressive caption generation with minimal compute cost. To support this task, we present the Underground Mine Disaster (UMD) dataset--the first image-caption corpus of real underground disaster scenes--enabling rigorous training and evaluation. Extensive experiments on UMD and related benchmarks show that MDSE substantially outperforms state-of-the-art captioning models, producing more accurate and contextually relevant descriptions that capture crucial details in obscured environments, improving situational awareness for underground emergency response. The code is at https://github.com/mizanJewel/Multimodal-Disaster-Situation-Explainer.
- oai:arXiv.org:2512.09092v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ VocSim: A Training-free Benchmark for Zero-shot Content Identity in Single-source Audio
+ https://arxiv.org/abs/2512.10120
+ arXiv:2512.10120v1 Announce Type: new
+Abstract: General-purpose audio representations aim to map acoustically variable instances of the same event to nearby points, resolving content identity in a zero-shot setting. Unlike supervised classification benchmarks that measure adaptability via parameter updates, we introduce VocSim, a training-free benchmark probing the intrinsic geometric alignment of frozen embeddings. VocSim aggregates 125k single-source clips from 19 corpora spanning human speech, animal vocalizations, and environmental sounds. By restricting to single-source audio, we isolate content representation from the confound of source separation. We evaluate embeddings using Precision@k for local purity and the Global Separation Rate (GSR) for point-wise class separation. To calibrate GSR, we report lift over an empirical permutation baseline. Across diverse foundation models, a simple pipeline, frozen Whisper encoder features, time-frequency pooling, and label-free PCA, yields strong zero-shot performance. However, VocSim also uncovers a consistent generalization gap. On blind, low-resource speech, local retrieval drops sharply. While performance remains statistically distinguishable from chance, the absolute geometric structure collapses, indicating a failure to generalize to unseen phonotactics. As external validation, our top embeddings predict avian perceptual similarity, improve bioacoustic classification, and achieve state-of-the-art results on the HEAR benchmark. We posit that the intrinsic geometric quality measured here proxies utility in unlisted downstream applications. We release data, code, and a public leaderboard to standardize the evaluation of intrinsic audio geometry.
+ oai:arXiv.org:2512.10120v1
+ cs.SD
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-sa/4.0/
- Mizanur Rahman Jewel, Mohamed Elmahallawy, Sanjay Madria, Samuel Frimpong
+ http://creativecommons.org/licenses/by/4.0/
+ Maris Basha, Anja Zai, Sabine Stoll, Richard Hahnloser
- Food Image Generation on Multi-Noun Categories
- https://arxiv.org/abs/2512.09095
- arXiv:2512.09095v1 Announce Type: new
-Abstract: Generating realistic food images for categories with multiple nouns is surprisingly challenging. For instance, the prompt "egg noodle" may result in images that incorrectly contain both eggs and noodles as separate entities. Multi-noun food categories are common in real-world datasets and account for a large portion of entries in benchmarks such as UEC-256. These compound names often cause generative models to misinterpret the semantics, producing unintended ingredients or objects. This is due to insufficient multi-noun category related knowledge in the text encoder and misinterpretation of multi-noun relationships, leading to incorrect spatial layouts. To overcome these challenges, we propose FoCULR (Food Category Understanding and Layout Refinement) which incorporates food domain knowledge and introduces core concepts early in the generation process. Experimental results demonstrate that the integration of these techniques improves image generation performance in the food domain.
- oai:arXiv.org:2512.09095v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Workflow is All You Need: Escaping the "Statistical Smoothing Trap" via High-Entropy Information Foraging and Adversarial Pacing
+ https://arxiv.org/abs/2512.10121
+ arXiv:2512.10121v1 Announce Type: new
+Abstract: Central to long-form text generation in vertical domains is the "impossible trinity" confronting current large language models (LLMs): the simultaneous achievement of low hallucination, deep logical coherence, and personalized expression. This study establishes that this bottleneck arises from existing generative paradigms succumbing to the Statistical Smoothing Trap, a phenomenon that overlooks the high-entropy information acquisition and structured cognitive processes integral to expert-level writing. To address this limitation, we propose the DeepNews Framework, an agentic workflow that explicitly models the implicit cognitive processes of seasoned financial journalists. The framework integrates three core modules: first, a dual-granularity retrieval mechanism grounded in information foraging theory, which enforces a 10:1 saturated information input ratio to mitigate hallucinatory outputs; second, schema-guided strategic planning, a process leveraging domain expert knowledge bases (narrative schemas) and Atomic Blocks to forge a robust logical skeleton; third, adversarial constraint prompting, a technique deploying tactics including Rhythm Break and Logic Fog to disrupt the probabilistic smoothness inherent in model-generated text. Experiments delineate a salient Knowledge Cliff in deep financial reporting: content truthfulness collapses when retrieved context falls below 15,000 characters, while a high-redundancy input exceeding 30,000 characters stabilizes the Hallucination-Free Rate (HFR) above 85%. In an ecological validity blind test conducted with a top-tier Chinese technology media outlet, the DeepNews system--built on a previous-generation model (DeepSeek-V3-0324)-achieved a 25% submission acceptance rate, significantly outperforming the 0% acceptance rate of zero-shot generation by a state-of-the-art (SOTA) model (GPT-5).
+ oai:arXiv.org:2512.10121v1
+ cs.CL
+ cs.AI
+ cs.CY
+ q-fin.GN
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Xinyue Pan, Yuhao Chen, Jiangpeng He, Fengqing Zhu
+ Zhongjie Jiang
- Characterizing Human Feedback-Based Control in Naturalistic Driving Interactions via Gaussian Process Regression with Linear Feedback
- https://arxiv.org/abs/2512.09097
- arXiv:2512.09097v1 Announce Type: new
-Abstract: Understanding driver interactions is critical to designing autonomous vehicles to interoperate safely with human-driven cars. We consider the impact of these interactions on the policies drivers employ when navigating unsigned intersections in a driving simulator. The simulator allows the collection of naturalistic decision-making and behavior data in a controlled environment. Using these data, we model the human driver responses as state-based feedback controllers learned via Gaussian Process regression methods. We compute the feedback gain of the controller using a weighted combination of linear and nonlinear priors. We then analyze how the individual gains are reflected in driver behavior. We also assess differences in these controllers across populations of drivers. Our work in data-driven analyses of how drivers determine their policies can facilitate future work in the design of socially responsive autonomy for vehicles.
- oai:arXiv.org:2512.09097v1
- eess.SY
- cs.RO
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Numerical approximation of the first $p$-Laplace eigenpair
+ https://arxiv.org/abs/2512.10122
+ arXiv:2512.10122v1 Announce Type: new
+Abstract: We approximate the first Dirichlet eigenpair of the $p$-Laplace operator for $2 \leq p < \infty$ on both Euclidean and surface domains. We emphasize large $p$ values and discuss how the $p \to \infty$ limit connects to the underlying geometry of our domain. Working with large $p$ values introduces significant numerical challenges. We present a surface finite element numerical scheme that combines a Newton inverse-power iteration with a new domain rescaling strategy, which enables stable computations for large $p$. Numerical experiments in $1$D, planar domains, and surfaces embedded in $\mathbb{R}^3$ demonstrate the accuracy and robustness of our approach and show convergence towards the $p \to \infty$ limiting behavior.
+ oai:arXiv.org:2512.10122v1
+ math.NA
+ cs.NA
+ math.SP
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Rachel DiPirro, Rosalyn Devonport, Dan Calderone, Chishang "Mario'' Yang, Wendy Ju, Meeko Oishi
+ http://creativecommons.org/licenses/by/4.0/
+ Hannah Potgieter, Razvan C. Fetecau, Steven J. Ruuth
- Masked Generative Policy for Robotic Control
- https://arxiv.org/abs/2512.09101
- arXiv:2512.09101v1 Announce Type: new
-Abstract: We present Masked Generative Policy (MGP), a novel framework for visuomotor imitation learning. We represent actions as discrete tokens, and train a conditional masked transformer that generates tokens in parallel and then rapidly refines only low-confidence tokens. We further propose two new sampling paradigms: MGP-Short, which performs parallel masked generation with score-based refinement for Markovian tasks, and MGP-Long, which predicts full trajectories in a single pass and dynamically refines low-confidence action tokens based on new observations. With globally coherent prediction and robust adaptive execution capabilities, MGP-Long enables reliable control on complex and non-Markovian tasks that prior methods struggle with. Extensive evaluations on 150 robotic manipulation tasks spanning the Meta-World and LIBERO benchmarks show that MGP achieves both rapid inference and superior success rates compared to state-of-the-art diffusion and autoregressive policies. Specifically, MGP increases the average success rate by 9% across 150 tasks while cutting per-sequence inference time by up to 35x. It further improves the average success rate by 60% in dynamic and missing-observation environments, and solves two non-Markovian scenarios where other state-of-the-art methods fail.
- oai:arXiv.org:2512.09101v1
+ Inertial Magnetic SLAM Systems Using Low-Cost Sensors
+ https://arxiv.org/abs/2512.10128
+ arXiv:2512.10128v1 Announce Type: new
+Abstract: Spatially inhomogeneous magnetic fields offer a valuable, non-visual information source for positioning. Among systems leveraging this, magnetic field-based simultaneous localization and mapping (SLAM) systems are particularly attractive because they can provide positioning information and build a magnetic field map on the fly. Moreover, they have bounded error within mapped regions. However, state-of-the-art methods typically require low-drift odometry data provided by visual odometry or a wheel encoder, etc. This is because these systems need to minimize/reduce positioning errors while exploring, which happens when they are in unmapped regions. To address these limitations, this work proposes a loosely coupled and a tightly coupled inertial magnetic SLAM (IM-SLAM) system. The proposed systems use commonly available low-cost sensors: an inertial measurement unit (IMU), a magnetometer array, and a barometer. The use of non-visual data provides a significant advantage over visual-based systems, making it robust to low-visibility conditions. Both systems employ state-space representations, and magnetic field models on different scales. The difference lies in how they use a local and global magnetic field model. The loosely coupled system uses these models separately in two state-space models, while the tightly coupled system integrates them into one state-space model. Experiment results show that the tightly coupled IM-SLAM system achieves lower positioning errors than the loosely coupled system in most scenarios, with typical errors on the order of meters per 100 meters traveled. These results demonstrate the feasiblity of developing a full 3D IM-SLAM systems using low-cost sensors and the potential of applying these systems in emergency response scenarios such as mine/fire rescue.
+ oai:arXiv.org:2512.10128v1cs.RO
+ eess.SP
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Chuan Huang, Gustaf Hendeby, Isaac Skog
+
+
+ Universal Hirschberg for Width Bounded Dynamic Programs
+ https://arxiv.org/abs/2512.10132
+ arXiv:2512.10132v1 Announce Type: new
+Abstract: Hirschberg's algorithm (1975) reduces the space complexity for the longest common subsequence problem from $O(N^2)$ to $O(N)$ via recursive midpoint bisection on a grid dynamic program (DP). We show that the underlying idea generalizes to a broad class of dynamic programs with local dependencies on directed acyclic graphs (DP DAGs). Modeling a DP as deterministic time evolution over a topologically ordered DAG with frontier width $\omega$ and bounded in-degree, and assuming a max-type semiring with deterministic tie breaking, we prove that in a standard offline random-access model any such DP admits deterministic traceback in space $O(\omega \log T + (\log T)^{O(1)})$ cells over a fixed finite alphabet, where $T$ is the number of states. Our construction replaces backward dynamic programs by forward-only recomputation and organizes the time order into a height-compressed recursion tree whose nodes expose small "middle frontiers'' across which every optimal path must pass. The framework yields near-optimal traceback bounds for asymmetric and banded sequence alignment, one-dimensional recurrences, and dynamic-programming formulations on graphs of bounded pathwidth. We also show that an $\Omega(\omega)$ space term (in bits) is unavoidable in forward single-pass models and discuss conjectured $\sqrt{T}$-type barriers in streaming settings, supporting the view that space-efficient traceback is a structural property of width-bounded DP DAGs rather than a peculiarity of grid-based algorithms.
+ oai:arXiv.org:2512.10132v1
+ cs.DScs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Lipeng Zhuang, Shiyu Fan, Florent P. Audonnet, Yingdong Ru, Gerardo Aragon Camarasa, Paul Henderson
+ Logan Nye
- Natural Geometry of Robust Data Attribution: From Convex Models to Deep Networks
- https://arxiv.org/abs/2512.09103
- arXiv:2512.09103v1 Announce Type: new
-Abstract: Data attribution methods identify which training examples are responsible for a model's predictions, but their sensitivity to distributional perturbations undermines practical reliability. We present a unified framework for certified robust attribution that extends from convex models to deep networks. For convex settings, we derive Wasserstein-Robust Influence Functions (W-RIF) with provable coverage guarantees. For deep networks, we demonstrate that Euclidean certification is rendered vacuous by spectral amplification -- a mechanism where the inherent ill-conditioning of deep representations inflates Lipschitz bounds by over $10{,}000\times$. This explains why standard TRAK scores, while accurate point estimates, are geometrically fragile: naive Euclidean robustness analysis yields 0\% certification. Our key contribution is the Natural Wasserstein metric, which measures perturbations in the geometry induced by the model's own feature covariance. This eliminates spectral amplification, reducing worst-case sensitivity by $76\times$ and stabilizing attribution estimates. On CIFAR-10 with ResNet-18, Natural W-TRAK certifies 68.7\% of ranking pairs compared to 0\% for Euclidean baselines -- to our knowledge, the first non-vacuous certified bounds for neural network attribution. Furthermore, we prove that the Self-Influence term arising from our analysis equals the Lipschitz constant governing attribution stability, providing theoretical grounding for leverage-based anomaly detection. Empirically, Self-Influence achieves 0.970 AUROC for label noise detection, identifying 94.1\% of corrupted labels by examining just the top 20\% of training data.
- oai:arXiv.org:2512.09103v1
+ Partitioning the Sample Space for a More Precise Shannon Entropy Estimation
+ https://arxiv.org/abs/2512.10133
+ arXiv:2512.10133v1 Announce Type: new
+Abstract: Reliable data-driven estimation of Shannon entropy from small data sets, where the number of examples is potentially smaller than the number of possible outcomes, is a critical matter in several applications. In this paper, we introduce a discrete entropy estimator, where we use the decomposability property in combination with estimations of the missing mass and the number of unseen outcomes to compensate for the negative bias induced by them. Experimental results show that the proposed method outperforms some classical estimators in undersampled regimes, and performs comparably with some well-established state-of-the-art estimators.
+ oai:arXiv.org:2512.10133v1cs.LG
- math.OC
- Thu, 11 Dec 2025 00:00:00 -0500
+ math.ST
+ stat.TH
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shihao Li, Jiachen Li, Dongmei Chen
+ http://creativecommons.org/licenses/by/4.0/
+ Gabriel F. A. Bastos, Jugurta Montalv\~ao
- SURA: Secure Unsourced Random Access
- https://arxiv.org/abs/2512.09104
- arXiv:2512.09104v1 Announce Type: new
-Abstract: This work introduces security for unsourced random access (URA) by employing wiretap-inspired physical layer techniques. To achieve confidentiality, the proposed system opportunistically exploits intrinsic features of feedback-aided URA without adding any overhead or altering its original structure or operational characteristics. As a result, the proposed system preserves the low-cost advantages of URA, including low delay and minimal signaling overhead, while providing secure communication. To secure transmission, each user generates a secret key and an artificial noise sequence from the feedback signal that the BS broadcasts in previous transmission rounds. This feedback depends on the BS-user channel, making it a private signal for each user. The secure transmission is performed by three actions: encrypting the data using the secret key, sending only the parity bits of the LDPC encoded secret key to allow the legitimate receiver to recover it, and masking these parity bits with the artificial noise. For reception, a receiver algorithm is designed for the legitimate user, and a leakage analysis is provided to quantify the information available to the eavesdropper. The simulation results show that meaningful secrecy is achieved in URA without modifying its structure and with negligible impact on standard performance.
- oai:arXiv.org:2512.09104v1
- cs.IT
- math.IT
- Thu, 11 Dec 2025 00:00:00 -0500
+ Approximate Counting in Local Lemma Regimes
+ https://arxiv.org/abs/2512.10134
+ arXiv:2512.10134v1 Announce Type: new
+Abstract: We establish efficient approximate counting algorithms for several natural problems in local lemma regimes. In particular, we consider the probability of intersection of events and the dimension of intersection of subspaces. Our approach is based on the cluster expansion method. We obtain fully polynomial-time approximation schemes for both the probability of intersection and the dimension of intersection for commuting projectors. For general projectors, we provide two algorithms: a fully polynomial-time approximation scheme under a global inclusion-exclusion stability condition, and an efficient affine approximation under a spectral gap assumption. As corollaries of our results, we obtain efficient algorithms for approximating the number of satisfying assignments of conjunctive normal form formulae and the dimension of satisfying subspaces of quantum satisfiability formulae.
+ oai:arXiv.org:2512.10134v1
+ cs.DS
+ math.CO
+ math.PR
+ quant-ph
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mohammad Javad Ahmadi, Rafael F. Schaefer, H. Vincent Poor
+ Ryan L. Mann, Gabriel Waite
- Cognitive Trust in HRI: "Pay Attention to Me and I'll Trust You Even if You are Wrong"
- https://arxiv.org/abs/2512.09105
- arXiv:2512.09105v1 Announce Type: new
-Abstract: Cognitive trust and the belief that a robot is capable of accurately performing tasks, are recognized as central factors in fostering high-quality human-robot interactions. It is well established that performance factors such as the robot's competence and its reliability shape cognitive trust. Recent studies suggest that affective factors, such as robotic attentiveness, also play a role in building cognitive trust. This work explores the interplay between these two factors that shape cognitive trust. Specifically, we evaluated whether different combinations of robotic competence and attentiveness introduce a compensatory mechanism, where one factor compensates for the lack of the other. In the experiment, participants performed a search task with a robotic dog in a 2x2 experimental design that included two factors: competence (high or low) and attentiveness (high or low). The results revealed that high attentiveness can compensate for low competence. Participants who collaborated with a highly attentive robot that performed poorly reported trust levels comparable to those working with a highly competent robot. When the robot did not demonstrate attentiveness, low competence resulted in a substantial decrease in cognitive trust. The findings indicate that building cognitive trust in human-robot interaction may be more complex than previously believed, involving emotional processes that are typically overlooked. We highlight an affective compensatory mechanism that adds a layer to consider alongside traditional competence-based models of cognitive trust.
- oai:arXiv.org:2512.09105v1
- cs.RO
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Lightweight Security for Private Networks: Real-World Evaluation of WireGuard
+ https://arxiv.org/abs/2512.10135
+ arXiv:2512.10135v1 Announce Type: new
+Abstract: This paper explores WireGuard as a lightweight alternative to IPsec for securing the user plane as well as the control plane in an industrial Open RAN deployment at the Adtran Terafactory in Meiningen. We focus on a realistic scenario where external vendors access their hardware in our 5G factory network, posing recurrent security risks from untrusted gNBs and intermediate network elements. Unlike prior studies limited to lab setups, we implement a complete proof-of-concept in a factory environment and compare WireGuard with IPsec under industrial traffic conditions. Our approach successfully protects user data (N3 interface) against untrusted gNBs and man-in-the-middle attacks while enabling control plane (N2 interface) authentication between the access and mobility management functions (AMF) and gNB. Performance measurements show that WireGuard adds minimal overhead in throughput, latency, and Central Processing Unit (CPU) usage, achieving performance comparable to IPsec. These findings demonstrate that WireGuard offers competitive performance with significantly reduced configuration complexity, making it a strong candidate for broader adoption in O-RAN, providing a unified, lightweight security layer across multiple interfaces and components.
+ oai:arXiv.org:2512.10135v1
+ cs.CR
+ cs.NI
+ hep-ex
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Adi Manor, Dan Cohen, Ziv Keidar, Avi Parush, Hadas Erel
+ Hubert Djuitcheu, Andrew Sergeev, Khurshid Alam, Danny Santhosh, Achim Autenrieth, Jochen Seitz
- Learning Unmasking Policies for Diffusion Language Models
- https://arxiv.org/abs/2512.09106
- arXiv:2512.09106v1 Announce Type: new
-Abstract: Diffusion (Large) Language Models (dLLMs) now match the downstream performance of their autoregressive counterparts on many tasks, while holding the promise of being more efficient during inference. One particularly successful variant is masked discrete diffusion, in which a buffer filled with special mask tokens is progressively replaced with tokens sampled from the model's vocabulary. Efficiency can be gained by unmasking several tokens in parallel, but doing too many at once risks degrading the generation quality. Thus, one critical design aspect of dLLMs is the sampling procedure that selects, at each step of the diffusion process, which tokens to replace. Indeed, recent work has found that heuristic strategies such as confidence thresholding lead to both higher quality and token throughput compared to random unmasking. However, such heuristics have downsides: they require manual tuning, and we observe that their performance degrades with larger buffer sizes. In this work, we instead propose to train sampling procedures using reinforcement learning. Specifically, we formalize masked diffusion sampling as a Markov decision process in which the dLLM serves as the environment, and propose a lightweight policy architecture based on a single-layer transformer that maps dLLM token confidences to unmasking decisions. Our experiments show that these trained policies match the performance of state-of-the-art heuristics when combined with semi-autoregressive generation, while outperforming them in the full diffusion setting. We also examine the transferability of these policies, finding that they can generalize to new underlying dLLMs and longer sequence lengths. However, we also observe that their performance degrades when applied to out-of-domain data, and that fine-grained tuning of the accuracy-efficiency trade-off can be challenging with our approach.
- oai:arXiv.org:2512.09106v1
+ Sequence-to-Image Transformation for Sequence Classification Using Rips Complex Construction and Chaos Game Representation
+ https://arxiv.org/abs/2512.10141
+ arXiv:2512.10141v1 Announce Type: new
+Abstract: Traditional feature engineering approaches for molecular sequence classification suffer from sparsity issues and computational complexity, while deep learning models often underperform on tabular biological data. This paper introduces a novel topological approach that transforms molecular sequences into images by combining Chaos Game Representation (CGR) with Rips complex construction from algebraic topology. Our method maps sequence elements to 2D coordinates via CGR, computes pairwise distances, and constructs Rips complexes to capture both local structural and global topological features. We provide formal guarantees on representation uniqueness, topological stability, and information preservation. Extensive experiments on anticancer peptide datasets demonstrate superior performance over vector-based, sequence language models, and existing image-based methods, achieving 86.8\% and 94.5\% accuracy on breast and lung cancer datasets, respectively. The topological representation preserves critical sequence information while enabling effective utilization of vision-based deep learning architectures for molecular sequence analysis.
+ oai:arXiv.org:2512.10141v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Metod Jazbec, Theo X. Olausson, Louis B\'ethune, Pierre Ablin, Michael Kirchhof, Joao Monterio, Victor Turrisi, Jason Ramapuram, Marco Cuturi
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Sarwan Ali, Taslim Murad, Imdadullah Khan
- Evolving Excellence: Automated Optimization of LLM-based Agents
- https://arxiv.org/abs/2512.09108
- arXiv:2512.09108v1 Announce Type: new
-Abstract: Agentic AI systems built on large language models (LLMs) offer significant potential for automating complex workflows, from software development to customer support. However, LLM agents often underperform due to suboptimal configurations; poorly tuned prompts, tool descriptions, and parameters that typically require weeks of manual refinement. Existing optimization methods either are too complex for general use or treat components in isolation, missing critical interdependencies.
- We present ARTEMIS, a no-code evolutionary optimization platform that jointly optimizes agent configurations through semantically-aware genetic operators. Given only a benchmark script and natural language goals, ARTEMIS automatically discovers configurable components, extracts performance signals from execution logs, and evolves configurations without requiring architectural modifications.
- We evaluate ARTEMIS on four representative agent systems: the \emph{ALE Agent} for competitive programming on AtCoder Heuristic Contest, achieving a \textbf{$13.6\%$ improvement} in acceptance rate; the \emph{Mini-SWE Agent} for code optimization on SWE-Perf, with a statistically significant \textbf{10.1\% performance gain}; and the \emph{CrewAI Agent} for cost and mathematical reasoning on Math Odyssey, achieving a statistically significant \textbf{$36.9\%$ reduction} in the number of tokens required for evaluation. We also evaluate the \emph{MathTales-Teacher Agent} powered by a smaller open-source model (Qwen2.5-7B) on GSM8K primary-level mathematics problems, achieving a \textbf{22\% accuracy improvement} and demonstrating that ARTEMIS can optimize agents based on both commercial and local models.
- oai:arXiv.org:2512.09108v1
- cs.SE
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Murmur2Vec: A Hashing Based Solution For Embedding Generation Of COVID-19 Spike Sequences
+ https://arxiv.org/abs/2512.10147
+ arXiv:2512.10147v1 Announce Type: new
+Abstract: Early detection and characterization of coronavirus disease (COVID-19), caused by SARS-CoV-2, remain critical for effective clinical response and public-health planning. The global availability of large-scale viral sequence data presents significant opportunities for computational analysis; however, existing approaches face notable limitations. Phylogenetic tree-based methods are computationally intensive and do not scale efficiently to today's multi-million-sequence datasets. Similarly, current embedding-based techniques often rely on aligned sequences or exhibit suboptimal predictive performance and high runtime costs, creating barriers to practical large-scale analysis. In this study, we focus on the most prevalent SARS-CoV-2 lineages associated with the spike protein region and introduce a scalable embedding method that leverages hashing to generate compact, low-dimensional representations of spike sequences. These embeddings are subsequently used to train a variety of machine learning models for supervised lineage classification. We conduct an extensive evaluation comparing our approach with multiple baseline and state-of-the-art biological sequence embedding methods across diverse metrics. Our results demonstrate that the proposed embeddings offer substantial improvements in efficiency, achieving up to 86.4\% classification accuracy while reducing embedding generation time by as much as 99.81\%. This highlights the method's potential as a fast, effective, and scalable solution for large-scale viral sequence analysis.
+ oai:arXiv.org:2512.10147v1
+ cs.LG
+ q-bio.GN
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Paul Brookes, Vardan Voskanyan, Rafail Giavrimis, Matthew Truscott, Mina Ilieva, Chrystalla Pavlou, Alexandru Staicu, Manal Adham, Will Evers- Hood, Jingzhi Gong, Kejia Zhang, Matvey Fedoseev, Vishal Sharma, Roman Bauer, Zheng Wang, Hema Nair, Wei Jie, Tianhua Xu, Aurora Constantin, Leslie Kanthan, Michail Basios
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Sarwan Ali, Taslim Murad
- Semantic Trajectory Generation for Goal-Oriented Spacecraft Rendezvous
- https://arxiv.org/abs/2512.09111
- arXiv:2512.09111v1 Announce Type: new
-Abstract: Reliable real-time trajectory generation is essential for future autonomous spacecraft. While recent progress in nonconvex guidance and control is paving the way for onboard autonomous trajectory optimization, these methods still rely on extensive expert input (e.g., waypoints, constraints, mission timelines, etc.), which limits the operational scalability in real rendezvous missions.This paper introduces SAGES (Semantic Autonomous Guidance Engine for Space), a trajectory-generation framework that translates natural-language commands into spacecraft trajectories that reflect high-level intent while respecting nonconvex constraints. Experiments in two settings -- fault-tolerant proximity operations with continuous-time constraint enforcement and a free-flying robotic platform -- demonstrate that SAGES reliably produces trajectories aligned with human commands, achieving over 90\% semantic-behavioral consistency across diverse behavior modes. Ultimately, this work marks an initial step toward language-conditioned, constraint-aware spacecraft trajectory generation, enabling operators to interactively guide both safety and behavior through intuitive natural-language commands with reduced expert burden.
- oai:arXiv.org:2512.09111v1
- cs.RO
+ PARAN: Persona-Augmented Review ANswering system on Food Delivery Review Dataset
+ https://arxiv.org/abs/2512.10148
+ arXiv:2512.10148v1 Announce Type: new
+Abstract: Personalized review response generation presents a significant challenge in domains where user information is limited, such as food delivery platforms. While large language models (LLMs) offer powerful text generation capabilities, they often produce generic responses when lacking contextual user data, reducing engagement and effectiveness. In this work, we propose a two-stage prompting framework that infers both explicit (e.g., user-stated preferences) and implicit (e.g., demographic or stylistic cues) personas directly from short review texts. These inferred persona attributes are then incorporated into the response generation prompt to produce user-tailored replies. To encourage diverse yet faithful generations, we adjust decoding temperature during inference. We evaluate our method using a real-world dataset collected from a Korean food delivery app, and assess its impact on precision, diversity, and semantic consistency. Our findings highlight the effectiveness of persona-augmented prompting in enhancing the relevance and personalization of automated responses without requiring model fine-tuning.
+ oai:arXiv.org:2512.10148v1
+ cs.CLcs.AI
- math.OC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yuji Takubo, Arpit Dwivedi, Sukeerth Ramkumar, Luis A. Pabon, Daniele Gammelli, Marco Pavone, Simone D'Amico
+ Moonsoo Park, Jeongseok Yun, Bohyung Kim
- GimbalDiffusion: Gravity-Aware Camera Control for Video Generation
- https://arxiv.org/abs/2512.09112
- arXiv:2512.09112v1 Announce Type: new
-Abstract: Recent progress in text-to-video generation has achieved remarkable realism, yet fine-grained control over camera motion and orientation remains elusive. Existing approaches typically encode camera trajectories through relative or ambiguous representations, limiting explicit geometric control. We introduce GimbalDiffusion, a framework that enables camera control grounded in physical-world coordinates, using gravity as a global reference. Instead of describing motion relative to previous frames, our method defines camera trajectories in an absolute coordinate system, allowing precise and interpretable control over camera parameters without requiring an initial reference frame. We leverage panoramic 360-degree videos to construct a wide variety of camera trajectories, well beyond the predominantly straight, forward-facing trajectories seen in conventional video data. To further enhance camera guidance, we introduce null-pitch conditioning, an annotation strategy that reduces the model's reliance on text content when conflicting with camera specifications (e.g., generating grass while the camera points towards the sky). Finally, we establish a benchmark for camera-aware video generation by rebalancing SpatialVID-HQ for comprehensive evaluation under wide camera pitch variation. Together, these contributions advance the controllability and robustness of text-to-video models, enabling precise, gravity-aligned camera manipulation within generative frameworks.
- oai:arXiv.org:2512.09112v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ STARS: Semantic Tokens with Augmented Representations for Recommendation at Scale
+ https://arxiv.org/abs/2512.10149
+ arXiv:2512.10149v1 Announce Type: new
+Abstract: Real-world ecommerce recommender systems must deliver relevant items under strict tens-of-milliseconds latency constraints despite challenges such as cold-start products, rapidly shifting user intent, and dynamic context including seasonality, holidays, and promotions. We introduce STARS, a transformer-based sequential recommendation framework built for large-scale, low-latency ecommerce settings. STARS combines several innovations: dual-memory user embeddings that separate long-term preferences from short-term session intent; semantic item tokens that fuse pretrained text embeddings, learnable deltas, and LLM-derived attribute tags, strengthening content-based matching, long-tail coverage, and cold-start performance; context-aware scoring with learned calendar and event offsets; and a latency-conscious two-stage retrieval pipeline that performs offline embedding generation and online maximum inner-product search with filtering, enabling tens-of-milliseconds response times. In offline evaluations on production-scale data, STARS improves Hit@5 by more than 75 percent relative to our existing LambdaMART system. A large-scale A/B test on 6 million visits shows statistically significant lifts, including Total Orders +0.8%, Add-to-Cart on Home +2.0%, and Visits per User +0.5%. These results demonstrate that combining semantic enrichment, multi-intent modeling, and deployment-oriented design can yield state-of-the-art recommendation quality in real-world environments without sacrificing serving efficiency.
+ oai:arXiv.org:2512.10149v1
+ cs.IR
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Fr\'ed\'eric Fortier-Chouinard, Yannick Hold-Geoffroy, Valentin Deschaintre, Matheus Gadelha, Jean-Fran\c{c}ois Lalonde
+ http://creativecommons.org/licenses/by/4.0/
+ Han Chen, Steven Zhu, Yingrui Li
- AI TIPS 2.0: A Comprehensive Framework for Operationalizing AI Governance
- https://arxiv.org/abs/2512.09114
- arXiv:2512.09114v1 Announce Type: new
-Abstract: The deployment of AI systems faces three critical governance challenges that current frameworks fail to adequately address. First, organizations struggle with inadequate risk assessment at the use case level, exemplified by the Humana class action lawsuit and other high impact cases where an AI system deployed to production exhibited both significant bias and high error rates, resulting in improper healthcare claim denials. Each AI use case presents unique risk profiles requiring tailored governance, yet most frameworks provide one size fits all guidance. Second, existing frameworks like ISO 42001 and NIST AI RMF remain at high conceptual levels, offering principles without actionable controls, leaving practitioners unable to translate governance requirements into specific technical implementations. Third, organizations lack mechanisms for operationalizing governance at scale, with no systematic approach to embed trustworthy AI practices throughout the development lifecycle, measure compliance quantitatively, or provide role-appropriate visibility from boards to data scientists. We present AI TIPS, Artificial Intelligence Trust-Integrated Pillars for Sustainability 2.0, update to the comprehensive operational framework developed in 2019,four years before NIST's AI Risk Management Framework, that directly addresses these challenges.
- oai:arXiv.org:2512.09114v1
+ Unforgotten Safety: Preserving Safety Alignment of Large Language Models with Continual Learning
+ https://arxiv.org/abs/2512.10150
+ arXiv:2512.10150v1 Announce Type: new
+Abstract: The safety alignment of large language models (LLMs) is becoming increasingly important with their democratization. In this paper, we study the safety degradation that comes with adapting LLMs to new tasks. We attribute this safety compromise to catastrophic forgetting and frame the problem of preserving safety when fine-tuning as a continual learning (CL) problem. We consider the fine-tuning-as-a-service setup where the user uploads their data to a service provider to get a customized model that excels on the user's selected task. We adapt several CL approaches from the literature and systematically evaluate their ability to mitigate safety degradation. These include regularization-based, memory-based, and model merging approaches. We consider two scenarios, (1) benign user data and (2) poisoned user data. Our results demonstrate that CL approaches consistently achieve lower attack success rates than standard fine-tuning. Among these, DER outperforms both other CL methods and existing safety-preserving baselines while maintaining task utility. These findings generalize across three downstream tasks (GSM8K, SST2, Code) and three model families (LLaMA2-7B, Mistral-7B, Gemma-2B), establishing CL as a practical solution to preserve safety.
+ oai:arXiv.org:2512.10150v1
+ cs.CLcs.AI
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Pamela Gupta
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lama Alssum, Hani Itani, Hasan Abed Al Kader Hammoud, Philip Torr, Adel Bibi, Bernard Ghanem
- SuperF: Neural Implicit Fields for Multi-Image Super-Resolution
- https://arxiv.org/abs/2512.09115
- arXiv:2512.09115v1 Announce Type: new
-Abstract: High-resolution imagery is often hindered by limitations in sensor technology, atmospheric conditions, and costs. Such challenges occur in satellite remote sensing, but also with handheld cameras, such as our smartphones. Hence, super-resolution aims to enhance the image resolution algorithmically. Since single-image super-resolution requires solving an inverse problem, such methods must exploit strong priors, e.g. learned from high-resolution training data, or be constrained by auxiliary data, e.g. by a high-resolution guide from another modality. While qualitatively pleasing, such approaches often lead to "hallucinated" structures that do not match reality. In contrast, multi-image super-resolution (MISR) aims to improve the (optical) resolution by constraining the super-resolution process with multiple views taken with sub-pixel shifts. Here, we propose SuperF, a test-time optimization approach for MISR that leverages coordinate-based neural networks, also called neural fields. Their ability to represent continuous signals with an implicit neural representation (INR) makes them an ideal fit for the MISR task.
- The key characteristic of our approach is to share an INR for multiple shifted low-resolution frames and to jointly optimize the frame alignment with the INR. Our approach advances related INR baselines, adopted from burst fusion for layer separation, by directly parameterizing the sub-pixel alignment as optimizable affine transformation parameters and by optimizing via a super-sampled coordinate grid that corresponds to the output resolution. Our experiments yield compelling results on simulated bursts of satellite imagery and ground-level images from handheld cameras, with upsampling factors of up to 8. A key advantage of SuperF is that this approach does not rely on any high-resolution training data.
- oai:arXiv.org:2512.09115v1
+ Topological Conditioning for Mammography Models via a Stable Wavelet-Persistence Vectorization
+ https://arxiv.org/abs/2512.10151
+ arXiv:2512.10151v1 Announce Type: new
+Abstract: Breast cancer is the most commonly diagnosed cancer in women and a leading cause of cancer death worldwide. Screening mammography reduces mortality, yet interpretation still suffers from substantial false negatives and false positives, and model accuracy often degrades when deployed across scanners, modalities, and patient populations. We propose a simple conditioning signal aimed at improving external performance based on a wavelet based vectorization of persistent homology. Using topological data analysis, we summarize image structure that persists across intensity thresholds and convert this information into spatial, multi scale maps that are provably stable to small intensity perturbations. These maps are integrated into a two stage detection pipeline through input level channel concatenation. The model is trained and validated on the CBIS DDSM digitized film mammography cohort from the United States and evaluated on two independent full field digital mammography cohorts from Portugal (INbreast) and China (CMMD), with performance reported at the patient level. On INbreast, augmenting ConvNeXt Tiny with wavelet persistence channels increases patient level AUC from 0.55 to 0.75 under a limited training budget.
+ oai:arXiv.org:2512.10151v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Sander Riis{\o}en Jyhne, Christian Igel, Morten Goodwin, Per-Arne Andersen, Serge Belongie, Nico Lang
-
-
- High Order Numerical Methods Preserving Invariant Domain for Hyperbolic and Related Systems
- https://arxiv.org/abs/2512.09116
- arXiv:2512.09116v1 Announce Type: new
-Abstract: Admissible states in hyperbolic systems and related equations often form a convex invariant domain. Numerical violations of this domain can lead to loss of hyperbolicity, resulting in illposedness and severe numerical instabilities. It is therefore crucial for numerical schemes to preserve the invariant domain to ensure both physically meaningful solutions and robust computations. For complex systems, constructing invariant-domain-preserving (IDP) schemes is highly nontrivial and particularly challenging for high-order accurate methods. This paper presents a comprehensive survey of IDP schemes for hyperbolic and related systems, with a focus on the most popular approaches for constructing provable IDP schemes. We first give a systematic review of the fundamental approaches for establishing the IDP property in first-order accurate schemes, covering finite difference, finite volume, finite element, and residual distribution methods. Then we focus on two widely used and actively developed classes of high order IDP schemes as well as their recent developments, most of which have emerged in the past decade. The first class of methods seeks an intrinsic weak IDP property in high-order schemes and then designs polynomial limiters to enforce a strong IDP property at the points of interest. This generic approach applies to high-order finite volume and discontinuousGalerkin schemes. The second class is based on the flux limiting approaches, which originated from the flux-corrected transport method and can be adapted to a broader range of spatial discretizations, including finite difference and continuous finite element methods. In this survey, we elucidate the main ideas in the construction of IDP schemes, provide some new perspectives and insights, with extensive examples, and numerical experiments in gas dynamics and magnetohydrodynamics.
- oai:arXiv.org:2512.09116v1
- math.NA
- astro-ph.IM
- cs.NA
- physics.comp-ph
- physics.flu-dyn
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kailiang Wu, Xiangxiong Zhang, Chi-Wang Shu
+ Charles Fanning, Mehmet Emin Aktas
- A Categorical Analysis of Large Language Models and Why LLMs Circumvent the Symbol Grounding Problem
- https://arxiv.org/abs/2512.09117
- arXiv:2512.09117v1 Announce Type: new
-Abstract: This paper presents a formal, categorical framework for analysing how humans and large language models (LLMs) transform content into truth-evaluated propositions about a state space of possible worlds W , in order to argue that LLMs do not solve but circumvent the symbol grounding problem.
- oai:arXiv.org:2512.09117v1
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Rethinking Causal Discovery Through the Lens of Exchangeability
+ https://arxiv.org/abs/2512.10152
+ arXiv:2512.10152v1 Announce Type: new
+Abstract: Causal discovery methods have traditionally been developed under two distinct regimes: independent and identically distributed (i.i.d.) and timeseries data, each governed by separate modelling assumptions. In this paper, we argue that the i.i.d. setting can and should be reframed in terms of exchangeability, a strictly more general symmetry principle. We present the implications of this reframing, alongside two core arguments: (1) a conceptual argument, based on extending the dependency of experimental causal inference on exchangeability to causal discovery; and (2) an empirical argument, showing that many existing i.i.d. causal-discovery methods are predicated on exchangeability assumptions, and that the sole extensive widely-used real-world "i.i.d." benchmark (the T\"ubingen dataset) consists mainly of exchangeable (and not i.i.d.) examples. Building on this insight, we introduce a novel synthetic dataset that enforces only the exchangeability assumption, without imposing the stronger i.i.d. assumption. We show that our exchangeable synthetic dataset mirrors the statistical structure of the real-world "i.i.d." dataset more closely than all other i.i.d. synthetic datasets. Furthermore, we demonstrate the predictive capability of this dataset by proposing a neural-network-based causal-discovery algorithm trained exclusively on our synthetic dataset, and which performs similarly to other state-of-the-art i.i.d. methods on the real-world benchmark.
+ oai:arXiv.org:2512.10152v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Luciano Floridi, Yiyang Jia, Fernando Tohm\'e
+ Tiago Brogueira, M\'ario Figueiredo
- A Hybrid Neural Network-Finite Element Method for the Viscous-Plastic Sea-Ice Model
- https://arxiv.org/abs/2512.09118
- arXiv:2512.09118v1 Announce Type: new
-Abstract: We present an efficient hybrid Neural Network-Finite Element Method (NN-FEM) for solving the viscous-plastic (VP) sea-ice model. The VP model is widely used in climate simulations to represent large-scale sea-ice dynamics. However, the strong nonlinearity introduced by the material law makes VP solvers computationally expensive, with the cost per degree of freedom increasing rapidly under mesh refinement. High spatial resolution is particularly required to capture narrow deformation bands known as linear kinematic features in viscous-plastic models. To improve computational efficiency in simulating such fine-scale deformation features, we propose to enrich coarse-mesh finite element approximations with fine-scale corrections predicted by neural networks trained with high-resolution simulations. The neural network operates locally on small patches of grid elements, which is efficient due to its relatively small size and parallel applicability across grid patches. An advantage of this local approach is that it generalizes well to different right-hand sides and computational domains, since the network operates on small subregions rather than learning details tied to a specific choice of boundary conditions, forcing, or geometry. The numerical examples quantify the runtime and evaluate the error for this hybrid approach with respect to the simulation of sea-ice deformations. Applying the learned network correction enables coarser-grid simulations to achieve qualitatively similar accuracy at approximately 11 times lower computational cost relative to the high-resolution reference simulations. Moreover, the learned correction accelerates the Newton solver by up to 10% compared to runs without the correction at the same mesh resolution.
- oai:arXiv.org:2512.09118v1
- math.NA
- cs.NA
- physics.comp-ph
- physics.flu-dyn
- Thu, 11 Dec 2025 00:00:00 -0500
+ A Vertically Integrated Framework for Templatized Chip Design
+ https://arxiv.org/abs/2512.10155
+ arXiv:2512.10155v1 Announce Type: new
+Abstract: Developers who primarily engage with software often struggle to incorporate custom hardware into their applications, even though specialized silicon can provide substantial benefits to machine learning and AI, as well as to the application domains that they enable. This work investigates how a chip can be generated from a high-level object-oriented software specification, targeting introductory-level chip design learners with only very light performance requirements, while maintaining mental continuity between the chip layout and the software source program. In our approach, each software object is represented as a corresponding region on the die, producing a one-to-one structural mapping that preserves these familiar abstractions throughout the design flow. To support this mapping, we employ a modular construction strategy in which vertically composed IP blocks implement the behavioral protocols expressed in software. A direct syntactic translation, however, cannot meet hardware-level efficiency or communication constraints. For this reason, we leverage formal type systems based on sequences that check whether interactions between hardware modules adhere to the communication patterns described in the software model. We further examine hardware interconnect strategies for composing many such modules and develop layout techniques suited to this object-aligned design style. Together, these contributions preserve mental continuity from software to chip design for new learners and enables practical layout generation, ultimately reducing the expertise required for software developers to participate in chip creation.
+ oai:arXiv.org:2512.10155v1
+ cs.AR
+ cs.SE
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-sa/4.0/
- Nils Margenberg, Carolin Mehlmann
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jeongeun Kim, Christopher Torng
- Knowledge-Guided Large Language Model for Automatic Pediatric Dental Record Understanding and Safe Antibiotic Recommendation
- https://arxiv.org/abs/2512.09127
- arXiv:2512.09127v1 Announce Type: new
-Abstract: Accurate interpretation of pediatric dental clinical records and safe antibiotic prescribing remain persistent challenges in dental informatics. Traditional rule-based clinical decision support systems struggle with unstructured dental narratives, incomplete radiographic descriptions, and complex safety constraints. To address these limitations, this study proposes a Knowledge-Guided Large Language Model (KG-LLM) that integrates a pediatric dental knowledge graph, retrieval-augmented generation (RAG), and a multi-stage safety validation pipeline for evidence-grounded antibiotic recommendation. The framework first employs a clinical NER/RE module to extract structured entities and relations from dental notes and radiology reports. Relevant guidelines, drug-safety rules, and analogous historical cases are subsequently retrieved from the knowledge graph and supplied to the LLM for diagnostic summarization and dose-drug-duration prediction. Safety assurance is achieved through a dual-layer validation mechanism combining deterministic rule checking with a learned classifier for detecting allergies, contraindications, and dosing errors. Experiments on 32,000 de-identified pediatric dental visit records demonstrate the effectiveness of the proposed approach. Compared with a domain-adapted Llama-2 clinical baseline, KG-LLM improves record-understanding performance (F1: 0.914 vs. 0.867), drug-dose-duration accuracy (Top-1: 0.782 vs. 0.716), and reduces unsafe antibiotic suggestions by 50%. Additional evaluation across summary quality, recommendation accuracy, and global safety scores further confirms the robustness of the system. Ablation analyses indicate that the knowledge graph, RAG, and safety modules each contribute substantially to clinical reliability and interpretability.
- oai:arXiv.org:2512.09127v1
- cs.CL
+ Enhancing Large Language Models for End-to-End Circuit Analysis Problem Solving
+ https://arxiv.org/abs/2512.10159
+ arXiv:2512.10159v1 Announce Type: new
+Abstract: Large language models (LLMs) have shown strong performance in data-rich domains such as programming, but their reliability in engineering tasks remains limited. Circuit analysis -- requiring multimodal understanding and precise mathematical reasoning -- highlights these challenges. Although Gemini 2.5 Pro improves diagram interpretation and analog-circuit reasoning, it still struggles to consistently produce correct solutions when given both text and circuit diagrams. At the same time, engineering education needs scalable AI tools capable of generating accurate solutions for tasks such as automated homework feedback and question-answering. This paper presents an enhanced, end-to-end circuit problem solver built on Gemini 2.5 Pro. We first benchmark Gemini on a representative set of undergraduate circuit problems and identify two major failure modes: 1) circuit-recognition hallucinations, particularly incorrect source polarity detection, and 2) reasoning-process hallucinations, such as incorrect current directions. To address recognition errors, we integrate a fine-tuned YOLO detector and OpenCV processing to isolate voltage and current sources, enabling Gemini to re-identify source polarities from cropped images with near-perfect accuracy. To reduce reasoning errors, we introduce an ngspice-based verification loop in which Gemini generates a .cir file, ngspice simulates the circuit, and discrepancies trigger iterative regeneration with optional human-in-the-loop review. Across 83 problems, the proposed pipeline achieves a 97.59% success rate (81 correct solutions), substantially outperforming Gemini 2.5 Pro's original 79.52% accuracy. This system extends LLM capabilities for multimodal engineering problem-solving and supports the creation of high-quality educational datasets and AI-powered instructional tools.
+ oai:arXiv.org:2512.10159v1
+ cs.CYcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.HC
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Zihan Han, Junyan Ge, Caifeng Li
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Liangliang Chen, Weiyu Sun, Ying Zhang
- Integrated Pipeline for Coronary Angiography With Automated Lesion Profiling, Virtual Stenting, and 100-Vessel FFR Validation
- https://arxiv.org/abs/2512.09134
- arXiv:2512.09134v1 Announce Type: new
-Abstract: Coronary angiography is the main tool for assessing coronary artery disease, but visual grading of stenosis is variable and only moderately related to ischaemia. Wire based fractional flow reserve (FFR) improves lesion selection but is not used systematically. Angiography derived indices such as quantitative flow ratio (QFR) offer wire free physiology, yet many tools are workflow intensive and separate from automated anatomy analysis and virtual PCI planning. We developed AngioAI-QFR, an end to end angiography only pipeline combining deep learning stenosis detection, lumen segmentation, centreline and diameter extraction, per millimetre Relative Flow Capacity profiling, and virtual stenting with automatic recomputation of angiography derived QFR. The system was evaluated in 100 consecutive vessels with invasive FFR as reference. Primary endpoints were agreement with FFR (correlation, mean absolute error) and diagnostic performance for FFR <= 0.80. On held out frames, stenosis detection achieved precision 0.97 and lumen segmentation Dice 0.78. Across 100 vessels, AngioAI-QFR correlated strongly with FFR (r = 0.89, MAE 0.045). The AUC for detecting FFR <= 0.80 was 0.93, with sensitivity 0.88 and specificity 0.86. The pipeline completed fully automatically in 93 percent of vessels, with median time to result 41 s. RFC profiling distinguished focal from diffuse capacity loss, and virtual stenting predicted larger QFR gain in focal than in diffuse disease. AngioAI-QFR provides a practical, near real time pipeline that unifies computer vision, functional profiling, and virtual PCI with automated angiography derived physiology.
- oai:arXiv.org:2512.09134v1
- cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ BookReconciler: An Open-Source Tool for Metadata Enrichment and Work-Level Clustering
+ https://arxiv.org/abs/2512.10165
+ arXiv:2512.10165v1 Announce Type: new
+Abstract: We present BookReconciler, an open-source tool for enhancing and clustering book data. BookReconciler allows users to take spreadsheets with minimal metadata, such as book title and author, and automatically 1) add authoritative, persistent identifiers like ISBNs 2) and cluster related Expressions and Manifestations of the same Work, e.g., different translations or editions. This enhancement makes it easier to combine related collections and analyze books at scale. The tool is currently designed as an extension for OpenRefine -- a popular software application -- and connects to major bibliographic services including the Library of Congress, VIAF, OCLC, HathiTrust, Google Books, and Wikidata. Our approach prioritizes human judgment. Through an interactive interface, users can manually evaluate matches and define the contours of a Work (e.g., to include translations or not). We evaluate reconciliation performance on datasets of U.S. prize-winning books and contemporary world fiction. BookReconciler achieves near-perfect accuracy for U.S. works but lower performance for global texts, reflecting structural weaknesses in bibliographic infrastructures for non-English and global literature. Overall, BookReconciler supports the reuse of bibliographic data across domains and applications, contributing to ongoing work in digital libraries and digital humanities.
+ oai:arXiv.org:2512.10165v1
+ cs.DL
+ cs.IR
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Georgy Kopanitsa, Oleg Metsker, Alexey Yakovlev
+ Joint Conference on Digital Libraries (JCDL), 2025
+ Matt Miller, Dan Sinykin, Melanie Walsh
- Energy-Based Modeling and Structure-Preserving Discretization of Physical Systems
- https://arxiv.org/abs/2512.09138
- arXiv:2512.09138v1 Announce Type: new
-Abstract: This paper develops a comprehensive mathematical framework for energy-based modeling of physical systems, with particular emphasis on preserving fundamental structural properties throughout the modeling and discretization process. The approach provides systematic methods for handling challenging system classes including high-index differential-algebraic equations and nonlinear multiphysics problems. Theoretical foundations are established for regularizing constrained systems while maintaining physical consistency, analyzing stability properties, and constructing numerical discretizations that inherit the energy dissipation structure of the continuous models. The versatility and practical utility of the framework are demonstrated through applications across multiple domains including poroelastic media, nonlinear circuits, constrained mechanics, and phase-field models. The results ensure that essential physical properties such as energy balance and dissipation are maintained from the continuous formulation through to numerical implementation, providing robust foundations for computational physics and engineering applications.
- oai:arXiv.org:2512.09138v1
- math.NA
- cs.NA
- math.DS
- Thu, 11 Dec 2025 00:00:00 -0500
+ Emergent Collective Memory in Decentralized Multi-Agent AI Systems
+ https://arxiv.org/abs/2512.10166
+ arXiv:2512.10166v1 Announce Type: new
+Abstract: We demonstrate how collective memory emerges in decentralized multi-agent systems through the interplay between individual agent memory and environmental trace communication. Our agents maintain internal memory states while depositing persistent environmental traces, creating a spatially distributed collective memory without centralized control. Comprehensive validation across five environmental conditions (20x20 to 50x50 grids, 5-20 agents, 50 runs per configuration) reveals a critical asymmetry: individual memory alone provides 68.7% performance improvement over no-memory baselines (1563.87 vs 927.23, p < 0.001), while environmental traces without memory fail completely. This demonstrates that memory functions independently but traces require cognitive infrastructure for interpretation. Systematic density-sweep experiments (rho in [0.049, 0.300], up to 625 agents) validate our theoretical phase transition prediction. On realistic large grids (30x30, 50x50), stigmergic coordination dominates above rho ~ 0.20, with traces outperforming memory by 36-41% on composite metrics despite lower food efficiency. The experimental crossover confirms the predicted critical density rho_c = 0.230 within 13% error.
+ oai:arXiv.org:2512.10166v1
+ cs.MA
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- M. H. M Rashid
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Khushiyant
- SDialog: A Python Toolkit for End-to-End Agent Building, User Simulation, Dialog Generation, and Evaluation
- https://arxiv.org/abs/2512.09142
- arXiv:2512.09142v1 Announce Type: new
-Abstract: We present SDialog, an MIT-licensed open-source Python toolkit that unifies dialog generation, evaluation and mechanistic interpretability into a single end-to-end framework for building and analyzing LLM-based conversational agents. Built around a standardized \texttt{Dialog} representation, SDialog provides: (1) persona-driven multi-agent simulation with composable orchestration for controlled, synthetic dialog generation, (2) comprehensive evaluation combining linguistic metrics, LLM-as-a-judge and functional correctness validators, (3) mechanistic interpretability tools for activation inspection and steering via feature ablation and induction, and (4) audio generation with full acoustic simulation including 3D room modeling and microphone effects. The toolkit integrates with all major LLM backends, enabling mixed-backend experiments under a unified API. By coupling generation, evaluation, and interpretability in a dialog-centric architecture, SDialog enables researchers to build, benchmark and understand conversational systems more systematically.
- oai:arXiv.org:2512.09142v1
+ The 2025 Foundation Model Transparency Index
+ https://arxiv.org/abs/2512.10169
+ arXiv:2512.10169v1 Announce Type: new
+Abstract: Foundation model developers are among the world's most important companies. As these companies become increasingly consequential, how do their transparency practices evolve? The 2025 Foundation Model Transparency Index is the third edition of an annual effort to characterize and quantify the transparency of foundation model developers. The 2025 FMTI introduces new indicators related to data acquisition, usage data, and monitoring and evaluates companies like Alibaba, DeepSeek, and xAI for the first time. The 2024 FMTI reported that transparency was improving, but the 2025 FMTI finds this progress has deteriorated: the average score out of 100 fell from 58 in 2024 to 40 in 2025. Companies are most opaque about their training data and training compute as well as the post-deployment usage and impact of their flagship models. In spite of this general trend, IBM stands out as a positive outlier, scoring 95, in contrast to the lowest scorers, xAI and Midjourney, at just 14. The five members of the Frontier Model Forum we score end up in the middle of the Index: we posit that these companies avoid reputational harms from low scores but lack incentives to be transparency leaders. As policymakers around the world increasingly mandate certain types of transparency, this work reveals the current state of transparency for foundation model developers, how it may change given newly enacted policy, and where more aggressive policy interventions are necessary to address critical information deficits.
+ oai:arXiv.org:2512.10169v1cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CY
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Sergio Burdisso, S\'everin Baroudi, Yanis Labrak, David Grunert, Pawel Cyrta, Yiyang Chen, Srikanth Madikeri, Esa\'u Villatoro-Tello, Thomas Schaaf, Ricard Marxer, Petr Motlicek
+ Alexander Wan, Kevin Klyman, Sayash Kapoor, Nestor Maslej, Shayne Longpre, Betty Xiong, Percy Liang, Rishi Bommasani
- Detecting Hallucinations in Graph Retrieval-Augmented Generation via Attention Patterns and Semantic Alignment
- https://arxiv.org/abs/2512.09148
- arXiv:2512.09148v1 Announce Type: new
-Abstract: Graph-based Retrieval-Augmented Generation (GraphRAG) enhances Large Language Models (LLMs) by incorporating external knowledge from linearized subgraphs retrieved from knowledge graphs. However, LLMs struggle to interpret the relational and topological information in these inputs, resulting in hallucinations that are inconsistent with the retrieved knowledge. To analyze how LLMs attend to and retain structured knowledge during generation, we propose two lightweight interpretability metrics: Path Reliance Degree (PRD), which measures over-reliance on shortest-path triples, and Semantic Alignment Score (SAS), which assesses how well the model's internal representations align with the retrieved knowledge. Through empirical analysis on a knowledge-based QA task, we identify failure patterns associated with over-reliance on salient paths and weak semantic grounding, as indicated by high PRD and low SAS scores. We further develop a lightweight post-hoc hallucination detector, Graph Grounding and Alignment (GGA), which outperforms strong semantic and confidence-based baselines across AUC and F1. By grounding hallucination analysis in mechanistic interpretability, our work offers insights into how structural limitations in LLMs contribute to hallucinations, informing the design of more reliable GraphRAG systems in the future.
- oai:arXiv.org:2512.09148v1
- cs.CL
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Semantic-Aware Confidence Calibration for Automated Audio Captioning
+ https://arxiv.org/abs/2512.10170
+ arXiv:2512.10170v1 Announce Type: new
+Abstract: Automated audio captioning models frequently produce overconfident predictions regardless of semantic accuracy, limiting their reliability in deployment. This deficiency stems from two factors: evaluation metrics based on n-gram overlap that fail to capture semantic correctness, and the absence of calibrated confidence estimation. We present a framework that addresses both limitations by integrating confidence prediction into audio captioning and redefining correctness through semantic similarity. Our approach augments a Whisper-based audio captioning model with a learned confidence prediction head that estimates uncertainty from decoder hidden states. We employ CLAP audio-text embeddings and sentence transformer similarities (FENSE) to define semantic correctness, enabling Expected Calibration Error (ECE) computation that reflects true caption quality rather than surface-level text overlap. Experiments on Clotho v2 demonstrate that confidence-guided beam search with semantic evaluation achieves dramatically improved calibration (CLAP-based ECE of 0.071) compared to greedy decoding baselines (ECE of 0.488), while simultaneously improving caption quality across standard metrics. Our results establish that semantic similarity provides a more meaningful foundation for confidence calibration in audio captioning than traditional n-gram metrics.
+ oai:arXiv.org:2512.10170v1
+ cs.SD
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shanghao Li, Jinda Han, Yibo Wang, Yuanjie Zhu, Zihe Song, Langzhou He, Kenan Kamel A Alghythee, Philip S. Yu
+ http://creativecommons.org/licenses/by/4.0/
+ Lucas Dunker, Sai Akshay Menta, Snigdha Mohana Addepalli, Venkata Krishna Rayalu Garapati
- MindShift: Analyzing Language Models' Reactions to Psychological Prompts
- https://arxiv.org/abs/2512.09149
- arXiv:2512.09149v1 Announce Type: new
-Abstract: Large language models (LLMs) hold the potential to absorb and reflect personality traits and attitudes specified by users. In our study, we investigated this potential using robust psychometric measures. We adapted the most studied test in psychological literature, namely Minnesota Multiphasic Personality Inventory (MMPI) and examined LLMs' behavior to identify traits. To asses the sensitivity of LLMs' prompts and psychological biases we created personality-oriented prompts, crafting a detailed set of personas that vary in trait intensity. This enables us to measure how well LLMs follow these roles. Our study introduces MindShift, a benchmark for evaluating LLMs' psychological adaptability. The results highlight a consistent improvement in LLMs' role perception, attributed to advancements in training datasets and alignment techniques. Additionally, we observe significant differences in responses to psychometric assessments across different model types and families, suggesting variability in their ability to emulate human-like personality traits. MindShift prompts and code for LLM evaluation will be publicly available.
- oai:arXiv.org:2512.09149v1
- cs.CL
+ Offscript: Automated Auditing of Instruction Adherence in LLMs
+ https://arxiv.org/abs/2512.10172
+ arXiv:2512.10172v1 Announce Type: new
+Abstract: Large Language Models (LLMs) and generative search systems are increasingly used for information seeking by diverse populations with varying preferences for knowledge sourcing and presentation. While users can customize LLM behavior through custom instructions and behavioral prompts, no mechanism exists to evaluate whether these instructions are being followed effectively. We present Offscript, an automated auditing tool that efficiently identifies potential instruction following failures in LLMs. In a pilot study analyzing custom instructions sourced from Reddit, Offscript detected potential deviations from instructed behavior in 86.4% of conversations, 22.2% of which were confirmed as material violations through human review. Our findings suggest that automated auditing serves as a viable approach for evaluating compliance to behavioral instructions related to information seeking.
+ oai:arXiv.org:2512.10172v1
+ cs.HCcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Anton Vasiliuk, Irina Abdullaeva, Polina Druzhinina, Anton Razzhigaev, Andrey Kuznetsov
+ Nicholas Clark, Ryan Bai, Tanu Mitra
- Exposing Vulnerabilities in Counterfeit Prevention Systems Utilizing Physically Unclonable Surface Features
- https://arxiv.org/abs/2512.09150
- arXiv:2512.09150v1 Announce Type: new
-Abstract: Counterfeit products pose significant risks to public health and safety through infiltrating untrusted supply chains. Among numerous anti-counterfeiting techniques, leveraging inherent, unclonable microscopic irregularities of paper surfaces is an accurate and cost-effective solution. Prior work of this approach has focused on enabling ubiquitous acquisition of these physically unclonable features (PUFs). However, we will show that existing authentication methods relying on paper surface PUFs may be vulnerable to adversaries, resulting in a gap between technological feasibility and secure real-world deployment. This gap is investigated through formalizing an operational framework for paper-PUF-based authentication. Informed by this framework, we reveal system-level vulnerabilities across both physical and digital domains, designing physical denial-of-service and digital forgery attacks to disrupt proper authentication. The effectiveness of the designed attacks underscores the strong need for security countermeasures for reliable and resilient authentication based on paper PUFs. The proposed framework further facilitates a comprehensive, stage-by-stage security analysis, guiding the design of future counterfeit prevention systems. This analysis delves into potential attack strategies, offering a foundational understanding of how various system components, such as physical features and verification processes, might be exploited by adversaries.
- oai:arXiv.org:2512.09150v1
- cs.CR
- eess.SP
- Thu, 11 Dec 2025 00:00:00 -0500
+ ATLAS: Automated Toolkit for Large-Scale Verified Code Synthesis
+ https://arxiv.org/abs/2512.10173
+ arXiv:2512.10173v1 Announce Type: new
+Abstract: Large language models have shown potential for program verification, but progress is hindered by the scarcity of verified code for training. We present ATLAS, an automated pipeline that synthesizes verified programs at scale to address this data bottleneck. ATLAS generates complete Dafny programs with specifications, implementations, and proofs, producing 2.7K verified programs from which we extract over 19K training examples--more than 7 per verified program--by decomposing the synthesis process into multiple specialized tasks. Fine-tuning Qwen 2.5 7B Coder on this dataset produces substantial gains: +23 percentage points on DafnyBench and +50 percentage points on DafnySynthesis. These results demonstrate that synthetic verified code can effectively enhance LLM capabilities for formal verification.
+ oai:arXiv.org:2512.10173v1
+ cs.SE
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Anirudh Nakra, Nayeeb Rashid, Chau-Wai Wong, Min Wu
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mantas Baksys, Stefan Zetzsche, Olivier Bouissou, Remi Delmas, Soonho Kong
- PILLTOP: Multi-Material Topology Optimization of Polypills for Prescribed Drug-Release Kinetics
- https://arxiv.org/abs/2512.09154
- arXiv:2512.09154v1 Announce Type: new
-Abstract: Polypills are single oral dosage forms that combine multiple active pharmaceutical ingredients and excipients, enabling fixed-dose combination therapies, coordinated multi-phase release, and precise customization of patient-specific treatment protocols. Recent advances in additive manufacturing facilitate the physical realization of multi-material excipients, offering superior customization of target release profiles. However, polypill formulations remain tuned by ad hoc parameter sweeps; this reliance renders current design workflows ill-suited for the systematic exploration of the high-dimensional space of shapes, compositions, and release behaviors.
- We present an automated design framework for polypills that leverages topology optimization to match dissolution behaviors with prescribed drug release kinetics. In particular, we employ a supershape parametrization to define geometry/phase distribution, a neural network representation to specify excipient distribution, and a coupled system of modified Allen-Cahn and Fick's diffusion equations to govern dissolution kinetics. The framework is implemented in JAX, utilizing automatic differentiation to compute sensitivities for the co-optimization of pill shape and constituent distribution. We validate the method through single-phase and multi-excipient case studies.
- oai:arXiv.org:2512.09154v1
- cs.CE
- Thu, 11 Dec 2025 00:00:00 -0500
+ CIEGAD: Cluster-Conditioned Interpolative and Extrapolative Framework for Geometry-Aware and Domain-Aligned Data Augmentation
+ https://arxiv.org/abs/2512.10178
+ arXiv:2512.10178v1 Announce Type: new
+Abstract: In practical deep learning deployment, the scarcity of data and the imbalance of label distributions often lead to semantically uncovered regions within the real-world data distribution, hindering model training and causing misclassification near class boundaries as well as unstable behaviors in peripheral areas. Although recent large language models (LLMs) show promise for data augmentation, an integrated framework that simultaneously achieves directional control of generation, domain alignment, and quality control has not yet been fully established. To address these challenges, we propose a Cluster-conditioned Interpolative and Extrapolative framework for Geometry-Aware and Domain-aligned data augmentation (CIEGAD), which systematically complements both in-distribution and out-of-distribution semantically uncovered regions. CIEGAD constructs domain profiles through cluster conditioning, allocates generation with a hierarchical frequency-geometric allocation integrating class frequency and geometric indicators, and finely controls generation directions via the coexistence of interpolative and extrapolative synthesis. It further performs quality control through geometry-constrained filtering combined with an LLM-as-a-Judge mechanism. Experiments on multiple classification tasks demonstrate that CIEGAD effectively extends the periphery of real-world data distributions while maintaining high alignment between generated and real-world data as well as semantic diversity. In particular, for long-tailed and multi-class classification tasks, CIEGAD consistently improves F1 and recall, validating the triple harmony of distributional consistency, diversity, and quality. These results indicate that CIEGAD serves as a practically oriented data augmentation framework that complements underrepresented regions while preserving alignment with real-world data.
+ oai:arXiv.org:2512.10178v1
+ cs.LG
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Rahul Kumar Padhy, Aaditya Chandrasekhar, Amir M. Mirzendehdel
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Keito Inoshita, Xiaokang Zhou, Akira Kawai, Katsutoshi Yada
- Improving a Parallel C++ Intel AVX-512 SIMD Linear Genetic Programming Interpreter
- https://arxiv.org/abs/2512.09157
- arXiv:2512.09157v1 Announce Type: new
-Abstract: We extend recent 256 SSE vector work to 512 AVX giving a four fold speedup. We use MAGPIE (Machine Automated General Performance Improvement via Evolution of software) to speedup a C++ linear genetic programming interpreter. Local search is provided with three alternative hand optimised codes, revision history and the Intel 512 bit AVX512VL documentation as C++ XML. Magpie is applied to the new Single Instruction Multiple Data (SIMD) parallel interpreter for Peter Nordin's linear genetic programming GPengine. Linux mprotect sandboxes whilst performance is given by perf instruction count. In both cases, in a matter of hours local search reliably sped up 114 or 310 lines of manually written parallel SIMD code for the Intel Advanced Vector Extensions (AVX) by 2 percent.
- oai:arXiv.org:2512.09157v1
- cs.NE
- Thu, 11 Dec 2025 00:00:00 -0500
+ Assessing Neuromorphic Computing for Fingertip Force Decoding from Electromyography
+ https://arxiv.org/abs/2512.10179
+ arXiv:2512.10179v1 Announce Type: new
+Abstract: High-density surface electromyography (HD-sEMG) provides a noninvasive neural interface for assistive and rehabilitation control, but mapping neural activity to user motor intent remains challenging. We assess a spiking neural network (SNN) as a neuromorphic architecture against a temporal convolutional network (TCN) for decoding fingertip force from motor-unit (MU) firing derived from HD-sEMG. Data were collected from a single participant (10 trials) with two forearm electrode arrays; MU activity was obtained via FastICA-based decomposition, and models were trained on overlapping windows with end-to-end causal convolutions. On held-out trials, the TCN achieved 4.44% MVC RMSE (Pearson r = 0.974) while the SNN achieved 8.25% MVC (r = 0.922). While the TCN was more accurate, we view the SNN as a realistic neuromorphic baseline that could close much of this gap with modest architectural and hyperparameter refinements.
+ oai:arXiv.org:2512.10179v1
+ cs.LG
+ eess.SP
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- William B. Langdon
+ http://creativecommons.org/licenses/by/4.0/
+ Abolfazl Shahrooei, Luke Arthur, Om Patel, Derek Kamper
- GTAvatar: Bridging Gaussian Splatting and Texture Mapping for Relightable and Editable Gaussian Avatars
- https://arxiv.org/abs/2512.09162
- arXiv:2512.09162v1 Announce Type: new
-Abstract: Recent advancements in Gaussian Splatting have enabled increasingly accurate reconstruction of photorealistic head avatars, opening the door to numerous applications in visual effects, videoconferencing, and virtual reality. This, however, comes with the lack of intuitive editability offered by traditional triangle mesh-based methods. In contrast, we propose a method that combines the accuracy and fidelity of 2D Gaussian Splatting with the intuitiveness of UV texture mapping. By embedding each canonical Gaussian primitive's local frame into a patch in the UV space of a template mesh in a computationally efficient manner, we reconstruct continuous editable material head textures from a single monocular video on a conventional UV domain. Furthermore, we leverage an efficient physically based reflectance model to enable relighting and editing of these intrinsic material maps. Through extensive comparisons with state-of-the-art methods, we demonstrate the accuracy of our reconstructions, the quality of our relighting results, and the ability to provide intuitive controls for modifying an avatar's appearance and geometry via texture mapping without additional optimization.
- oai:arXiv.org:2512.09162v1
- cs.CV
- cs.GR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Neuromorphic Processor Employing FPGA Technology with Universal Interconnections
+ https://arxiv.org/abs/2512.10180
+ arXiv:2512.10180v1 Announce Type: new
+Abstract: Neuromorphic computing, inspired by biological neural systems, holds immense promise for ultra-low-power and real-time inference applications. However, limited access to flexible, open-source platforms continues to hinder widespread adoption and experimentation. In this paper, we present a low-cost neuromorphic processor implemented on a Xilinx Zynq-7000 FPGA platform. The processor supports all-to-all configurable connectivity and employs the leaky integrate-and-fire (LIF) neuron model with customizable parameters such as threshold, synaptic weights, and refractory period. Communication with the host system is handled via a UART interface, enabling runtime reconfiguration without hardware resynthesis. The architecture was validated using benchmark datasets including the Iris classification and MNIST digit recognition tasks. Post-synthesis results highlight the design's energy efficiency and scalability, establishing its viability as a research-grade neuromorphic platform that is both accessible and adaptable for real-world spiking neural network applications. This implementation will be released as open source following project completion.
+ oai:arXiv.org:2512.10180v1
+ cs.AR
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Kelian Baert, Mae Younes, Francois Bourel, Marc Christie, Adnane Boukhayma
+ Pracheta Harlikar, Abdel-Hameed A. Badawy, Prasanna Date
- WonderZoom: Multi-Scale 3D World Generation
- https://arxiv.org/abs/2512.09164
- arXiv:2512.09164v1 Announce Type: new
-Abstract: We present WonderZoom, a novel approach to generating 3D scenes with contents across multiple spatial scales from a single image. Existing 3D world generation models remain limited to single-scale synthesis and cannot produce coherent scene contents at varying granularities. The fundamental challenge is the lack of a scale-aware 3D representation capable of generating and rendering content with largely different spatial sizes. WonderZoom addresses this through two key innovations: (1) scale-adaptive Gaussian surfels for generating and real-time rendering of multi-scale 3D scenes, and (2) a progressive detail synthesizer that iteratively generates finer-scale 3D contents. Our approach enables users to "zoom into" a 3D region and auto-regressively synthesize previously non-existent fine details from landscapes to microscopic features. Experiments demonstrate that WonderZoom significantly outperforms state-of-the-art video and 3D models in both quality and alignment, enabling multi-scale 3D world creation from a single image. We show video results and an interactive viewer of generated multi-scale 3D worlds in https://wonderzoom.github.io/
- oai:arXiv.org:2512.09164v1
- cs.CV
- cs.AI
- cs.GR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Watermarks for Language Models via Probabilistic Automata
+ https://arxiv.org/abs/2512.10185
+ arXiv:2512.10185v1 Announce Type: new
+Abstract: A recent watermarking scheme for language models achieves distortion-free embedding and robustness to edit-distance attacks. However, it suffers from limited generation diversity and high detection overhead. In parallel, recent research has focused on undetectability, a property ensuring that watermarks remain difficult for adversaries to detect and spoof. In this work, we introduce a new class of watermarking schemes constructed through probabilistic automata. We present two instantiations: (i) a practical scheme with exponential generation diversity and computational efficiency, and (ii) a theoretical construction with formal undetectability guarantees under cryptographic assumptions. Extensive experiments on LLaMA-3B and Mistral-7B validate the superior performance of our scheme in terms of robustness and efficiency.
+ oai:arXiv.org:2512.10185v1
+ cs.CR
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jin Cao, Hong-Xing Yu, Jiajun Wu
+ Yangkun Wang, Jingbo Shang
- Spectral Embedding via Chebyshev Bases for Robust DeepONet Approximation
- https://arxiv.org/abs/2512.09165
- arXiv:2512.09165v1 Announce Type: new
-Abstract: Deep Operator Networks (DeepONets) have become a central tool in data-driven operator learning, providing flexible surrogates for nonlinear mappings arising in partial differential equations (PDEs). However, the standard trunk design based on fully connected layers acting on raw spatial or spatiotemporal coordinates struggles to represent sharp gradients, boundary layers, and non-periodic structures commonly found in PDEs posed on bounded domains with Dirichlet or Neumann boundary conditions. To address these limitations, we introduce the Spectral-Embedded DeepONet (SEDONet), a new DeepONet variant in which the trunk is driven by a fixed Chebyshev spectral dictionary rather than coordinate inputs. This non-periodic spectral embedding provides a principled inductive bias tailored to bounded domains, enabling the learned operator to capture fine-scale non-periodic features that are difficult for Fourier or MLP trunks to represent. SEDONet is evaluated on a suite of PDE benchmarks including 2D Poisson, 1D Burgers, 1D advection-diffusion, Allen-Cahn dynamics, and the Lorenz-96 chaotic system, covering elliptic, parabolic, advective, and multiscale temporal phenomena, all of which can be viewed as canonical problems in computational mechanics. Across all datasets, SEDONet consistently achieves the lowest relative L2 errors among DeepONet, FEDONet, and SEDONet, with average improvements of about 30-40% over the baseline DeepONet and meaningful gains over Fourier-embedded variants on non-periodic geometries. Spectral analyses further show that SEDONet more accurately preserves high-frequency and boundary-localized features, demonstrating the value of Chebyshev embeddings in non-periodic operator learning. The proposed architecture offers a simple, parameter-neutral modification to DeepONets, delivering a robust and efficient spectral framework for surrogate modeling of PDEs on bounded domains.
- oai:arXiv.org:2512.09165v1
+ MiniF2F-Dafny: LLM-Guided Mathematical Theorem Proving via Auto-Active Verification
+ https://arxiv.org/abs/2512.10187
+ arXiv:2512.10187v1 Announce Type: new
+Abstract: We present miniF2F-Dafny, the first translation of the mathematical reasoning benchmark miniF2F to an automated theorem prover: Dafny. Previously, the benchmark existed only in interactive theorem provers (Lean, Isabelle, HOL Light, Metamath). We find that Dafny's automation verifies 99/244 (40.6%) of the test set and 109/244 (44.7%) of the validation set with empty proofs--requiring no manual proof steps. For problems where empty proofs fail, we evaluate 12 off-the-shelf LLMs on providing proof hints. The best model we test achieves 55.7% pass@4 success rate employing iterative error correction. These preliminary results highlight an effective division of labor: LLMs provide high-level guidance while automation handles low-level details. Our benchmark can be found on GitHub at http://github.com/dafny-lang/miniF2F .
+ oai:arXiv.org:2512.10187v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Muhammad Abid, Omer San
+ Mantas Baksys, Stefan Zetzsche, Olivier Bouissou
- Prompt-Based Continual Compositional Zero-Shot Learning
- https://arxiv.org/abs/2512.09172
- arXiv:2512.09172v1 Announce Type: new
-Abstract: We tackle continual adaptation of vision-language models to new attributes, objects, and their compositions in Compositional Zero-Shot Learning (CZSL), while preventing forgetting of prior knowledge. Unlike classical continual learning where classes are disjoint, CCZSL is more complex as attributes and objects may reoccur across sessions while compositions remain unique. Built on a frozen VLM backbone, we propose the first Prompt-based Continual Compositional Zero-Shot Learning (PromptCCZSL) framework that retains prior knowledge through recency-weighted multi-teacher distillation. It employs session-aware compositional prompts to fuse multimodal features for new compositions, while attribute and object prompts are learned through session-agnostic fusion to maintain global semantic consistency, which is further stabilized by a Cosine Anchor Loss (CAL) to preserve prior knowledge. To enhance adaptation in the current session, an Orthogonal Projection Loss (OPL) ensures that new attribute and object embeddings remain distinct from previous ones, preventing overlap, while an Intra-Session Diversity Loss (IDL) promotes variation among current-session embeddings for richer, more discriminative representations. We also introduce a comprehensive protocol that jointly measures catastrophic forgetting and compositional generalization. Extensive experiments on UT-Zappos and C-GQA benchmarks demonstrate that PromptCCZSL achieves substantial improvements over prior VLM-based and non-VLM baselines, setting a new benchmark for CCZSL in closed-world settings.
- oai:arXiv.org:2512.09172v1
- cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Exact Recovery of Non-Random Missing Multidimensional Time Series via Temporal Isometric Delay-Embedding Transform
+ https://arxiv.org/abs/2512.10191
+ arXiv:2512.10191v1 Announce Type: new
+Abstract: Non-random missing data is a ubiquitous yet undertreated flaw in multidimensional time series, fundamentally threatening the reliability of data-driven analysis and decision-making. Pure low-rank tensor completion, as a classical data recovery method, falls short in handling non-random missingness, both methodologically and theoretically. Hankel-structured tensor completion models provide a feasible approach for recovering multidimensional time series with non-random missing patterns. However, most Hankel-based multidimensional data recovery methods both suffer from unclear sources of Hankel tensor low-rankness and lack an exact recovery theory for non-random missing data. To address these issues, we propose the temporal isometric delay-embedding transform, which constructs a Hankel tensor whose low-rankness is naturally induced by the smoothness and periodicity of the underlying time series. Leveraging this property, we develop the \textit{Low-Rank Tensor Completion with Temporal Isometric Delay-embedding Transform} (LRTC-TIDT) model, which characterizes the low-rank structure under the \textit{Tensor Singular Value Decomposition} (t-SVD) framework. Once the prescribed non-random sampling conditions and mild incoherence assumptions are satisfied, the proposed LRTC-TIDT model achieves exact recovery, as confirmed by simulation experiments under various non-random missing patterns. Furthermore, LRTC-TIDT consistently outperforms existing tensor-based methods across multiple real-world tasks, including network flow reconstruction, urban traffic estimation, and temperature field prediction. Our implementation is publicly available at https://github.com/HaoShu2000/LRTC-TIDT.
+ oai:arXiv.org:2512.10191v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sauda Maryam, Sara Nadeem, Faisal Qureshi, Mohsen Ali
+ Hao Shu, Jicheng Li, Yu Jin, Ling Zhou
- Understanding the Failure Modes of Transformers through the Lens of Graph Neural Networks
- https://arxiv.org/abs/2512.09182
- arXiv:2512.09182v1 Announce Type: new
-Abstract: Transformers and more specifically decoder-only transformers dominate modern LLM architectures. While they have shown to work exceptionally well, they are not without issues, resulting in surprising failure modes and predictably asymmetric performance degradation. This article is a study of many of these observed failure modes of transformers through the lens of graph neural network (GNN) theory. We first make the case that much of deep learning, including transformers, is about learnable information mixing and propagation. This makes the study of model failure modes a study of bottlenecks in information propagation. This naturally leads to GNN theory, where there is already a rich literature on information propagation bottlenecks and theoretical failure modes of models. We then make the case that many issues faced by GNNs are also experienced by transformers. In addition, we analyze how the causal nature of decoder-only transformers create interesting geometric properties in information propagation, resulting in predictable and potentially devastating failure modes. Finally, we observe that existing solutions in transformer research tend to be ad-hoc and driven by intuition rather than grounded theoretical motivation. As such, we unify many such solutions under a more theoretical perspective, providing insight into why they work, what problem they are actually solving, and how they can be further improved to target specific failure modes of transformers. Overall, this article is an attempt to bridge the gap between observed failure modes in transformers and a general lack of theoretical understanding of them in this space.
- oai:arXiv.org:2512.09182v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ A robust fully-mixed finite element method with skew-symmetry penalization for low-frequency poroelasticity
+ https://arxiv.org/abs/2512.10192
+ arXiv:2512.10192v1 Announce Type: new
+Abstract: In this work, we present and analyze a fully-mixed finite element scheme for the dynamic poroelasticity problem in the low-frequency regime. We write the problem as a four-field, first-order, hyperbolic system of equations where the symmetry constraint on the stress field is imposed via penalization. This strategy is equivalent to adding a perturbation to the saddle point system arising when the stress symmetry is weakly-imposed. The coupling of solid and fluid phases is discretized by means of stable mixed elements in space and implicit time advancing schemes. The presented stability analysis is fully robust with respect to meaningful cases of degenerate model parameters. Numerical tests validate the convergence and robustness and assess the performances of the method for the simulation of wave propagation phenomena in porous materials.
+ oai:arXiv.org:2512.10192v1
+ math.NA
+ cs.NA
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Hunjae Lee
+ Stefano Bonetti, Michele Botti, Patrick Vega
- Learning Patient-Specific Disease Dynamics with Latent Flow Matching for Longitudinal Imaging Generation
- https://arxiv.org/abs/2512.09185
- arXiv:2512.09185v1 Announce Type: new
-Abstract: Understanding disease progression is a central clinical challenge with direct implications for early diagnosis and personalized treatment. While recent generative approaches have attempted to model progression, key mismatches remain: disease dynamics are inherently continuous and monotonic, yet latent representations are often scattered, lacking semantic structure, and diffusion-based models disrupt continuity with random denoising process. In this work, we propose to treat the disease dynamic as a velocity field and leverage Flow Matching (FM) to align the temporal evolution of patient data. Unlike prior methods, it captures the intrinsic dynamic of disease, making the progression more interpretable. However, a key challenge remains: in latent space, Auto-Encoders (AEs) do not guarantee alignment across patients or correlation with clinical-severity indicators (e.g., age and disease conditions). To address this, we propose to learn patient-specific latent alignment, which enforces patient trajectories to lie along a specific axis, with magnitude increasing monotonically with disease severity. This leads to a consistent and semantically meaningful latent space. Together, we present $\Delta$-LFM, a framework for modeling patient-specific latent progression with flow matching. Across three longitudinal MRI benchmarks, $\Delta$-LFM demonstrates strong empirical performance and, more importantly, offers a new framework for interpreting and visualizing disease dynamics.
- oai:arXiv.org:2512.09185v1
- cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Traffic Equilibrium in Mixed-Autonomy Network with Capped Customer Waiting
+ https://arxiv.org/abs/2512.10194
+ arXiv:2512.10194v1 Announce Type: new
+Abstract: This paper develops a unified modeling framework to capture the equilibrium-state interactions among ride-hailing companies, travelers, and traffic of mixed-autonomy transportation networks. Our framework integrates four interrelated sub-modules: (i) the operational behavior of representative ride-hailing Mixed-Fleet Traffic Network Companies (MiFleet TNCs) managing autonomous vehicle (AV) and human-driven vehicle (HV) fleets, (ii) traveler mode-choice decisions taking into account travel costs and waiting time, (iii) capped customer waiting times to reflect the option available to travelers not to wait for TNCs' service beyond his/her patience and to resort to existing travel modes, and (iv) a flow-dependent traffic congestion model for travel times. A key modeling feature distinguishes AVs and HVs across the pickup and service (customer-on-board) stages: AVs follow Wardrop pickup routes but may deviate during service under company coordination, whereas HVs operate in the reverse manner. The overall framework is formulated as a Nonlinear Complementarity Problem (NCP), which is equivalent to a Variational Inequality(VI) formulation based on which the existence of a variational equilibrium solution to the traffic model is established. Numerical experiments examine how AV penetration and Wardrop relaxation factors, which bound route deviation, affect company, traveler, and system performance to various degrees. The results provide actionable insights for policymakers on regulating AV adoption and company vehicle deviation behavior in modern-day traffic systems that are fast changing due to the advances in technology and information accessibility.
+ oai:arXiv.org:2512.10194v1
+ eess.SY
+ cs.SY
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Hao Chen, Rui Yin, Yifan Chen, Qi Chen, Chao Li
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiaxin Hou, Kexin Wang, Ruolin Li, Jong-shi Pang
- WOLF: Werewolf-based Observations for LLM Deception and Falsehoods
- https://arxiv.org/abs/2512.09187
- arXiv:2512.09187v1 Announce Type: new
-Abstract: Deception is a fundamental challenge for multi-agent reasoning: effective systems must strategically conceal information while detecting misleading behavior in others. Yet most evaluations reduce deception to static classification, ignoring the interactive, adversarial, and longitudinal nature of real deceptive dynamics. Large language models (LLMs) can deceive convincingly but remain weak at detecting deception in peers. We present WOLF, a multi-agent social deduction benchmark based on Werewolf that enables separable measurement of deception production and detection. WOLF embeds role-grounded agents (Villager, Werewolf, Seer, Doctor) in a programmable LangGraph state machine with strict night-day cycles, debate turns, and majority voting. Every statement is a distinct analysis unit, with self-assessed honesty from speakers and peer-rated deceptiveness from others. Deception is categorized via a standardized taxonomy (omission, distortion, fabrication, misdirection), while suspicion scores are longitudinally smoothed to capture both immediate judgments and evolving trust dynamics. Structured logs preserve prompts, outputs, and state transitions for full reproducibility. Across 7,320 statements and 100 runs, Werewolves produce deceptive statements in 31% of turns, while peer detection achieves 71-73% precision with ~52% overall accuracy. Precision is higher for identifying Werewolves, though false positives occur against Villagers. Suspicion toward Werewolves rises from ~52% to over 60% across rounds, while suspicion toward Villagers and the Doctor stabilizes near 44-46%. This divergence shows that extended interaction improves recall against liars without compounding errors against truthful roles. WOLF moves deception evaluation beyond static datasets, offering a dynamic, controlled testbed for measuring deceptive and detective capacity in adversarial multi-agent interaction.
- oai:arXiv.org:2512.09187v1
+ AutoMedic: An Automated Evaluation Framework for Clinical Conversational Agents with Medical Dataset Grounding
+ https://arxiv.org/abs/2512.10195
+ arXiv:2512.10195v1 Announce Type: new
+Abstract: Evaluating large language models (LLMs) has recently emerged as a critical issue for safe and trustworthy application of LLMs in the medical domain. Although a variety of static medical question-answering (QA) benchmarks have been proposed, many aspects remain underexplored, such as the effectiveness of LLMs in generating responses in dynamic, interactive clinical multi-turn conversation situations and the identification of multi-faceted evaluation strategies beyond simple accuracy. However, formally evaluating a dynamic, interactive clinical situation is hindered by its vast combinatorial space of possible patient states and interaction trajectories, making it difficult to standardize and quantitatively measure such scenarios. Here, we introduce AutoMedic, a multi-agent simulation framework that enables automated evaluation of LLMs as clinical conversational agents. AutoMedic transforms off-the-shelf static QA datasets into virtual patient profiles, enabling realistic and clinically grounded multi-turn clinical dialogues between LLM agents. The performance of various clinical conversational agents is then assessed based on our CARE metric, which provides a multi-faceted evaluation standard of clinical conversational accuracy, efficiency/strategy, empathy, and robustness. Our findings, validated by human experts, demonstrate the validity of AutoMedic as an automated evaluation framework for clinical conversational agents, offering practical guidelines for the effective development of LLMs in conversational medical applications.
+ oai:arXiv.org:2512.10195v1
+ cs.CL
+ cs.LGcs.MA
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Mrinal Agarwal, Saad Rana, Theo Sundoro, Hermela Berhe, Spencer Kim, Vasu Sharma, Sean O'Brien, Kevin Zhu
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Gyutaek Oh, Sangjoon Park, Byung-Hoon Kim
- Understanding Mental States in Active and Autonomous Driving with EEG
- https://arxiv.org/abs/2512.09190
- arXiv:2512.09190v1 Announce Type: new
-Abstract: Understanding how driver mental states differ between active and autonomous driving is critical for designing safe human-vehicle interfaces. This paper presents the first EEG-based comparison of cognitive load, fatigue, valence, and arousal across the two driving modes. Using data from 31 participants performing identical tasks in both scenarios of three different complexity levels, we analyze temporal patterns, task-complexity effects, and channel-wise activation differences. Our findings show that although both modes evoke similar trends across complexity levels, the intensity of mental states and the underlying neural activation differ substantially, indicating a clear distribution shift between active and autonomous driving. Transfer-learning experiments confirm that models trained on active driving data generalize poorly to autonomous driving and vice versa. We attribute this distribution shift primarily to differences in motor engagement and attentional demands between the two driving modes, which lead to distinct spatial and temporal EEG activation patterns. Although autonomous driving results in lower overall cortical activation, participants continue to exhibit measurable fluctuations in cognitive load, fatigue, valence, and arousal associated with readiness to intervene, task-evoked emotional responses, and monotony-related passive fatigue. These results emphasize the need for scenario-specific data and models when developing next-generation driver monitoring systems for autonomous vehicles.
- oai:arXiv.org:2512.09190v1
+ HyFinBall: a Hybrid User Interface for Coordinated 2D+3D Visualization in Semi-Immersive VR
+ https://arxiv.org/abs/2512.10196
+ arXiv:2512.10196v1 Announce Type: new
+Abstract: Sophisticated 3D visualization applications usually provide coordinated 2D and 3D views. Normally 3D input device is used for 3D tasks since they perform better than traditional 2D input devices. However, they do not perform better for 2D tasks. This paper presents a bimanual hybrid user interface that supports four interaction modes: a dual 6-degree-of-freedom (DOF) input device mode, a dual planar constrained 3DOF input device mode, a dual 2-finger multi-touch mode, and 3D hand and finger gestures. The application is a multi-dimensional visualization with coordinated 3D and 2D views on a desktop VR system. The input devices are buttonballs with seamless switching between 3D and 2D device modes, as well as between free-hand finger input and device usage. The 3D and 2D device mode switch automatically switches a buttonball's visual representation between a 3D cursor and a 2D cursor while changing the available user interaction techniques between 3D and 2D interaction techniques to interact with the coordinated views. The paper also provides two formal user studies to evaluate HyFinBall for various dimensional tasks, including 3D, 2D, and cross-dimensional tasks. Our experimental results show the benefits of the HyFinBall interface for cross-dimensional tasks that require 3D and 2D interactions.
+ oai:arXiv.org:2512.10196v1cs.HC
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Prithila Angkan, Paul Hungler, Ali Etemad
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Isaac Cho, Zachary Wartell
- TritonForge: Profiling-Guided Framework for Automated Triton Kernel Optimization
- https://arxiv.org/abs/2512.09196
- arXiv:2512.09196v1 Announce Type: new
-Abstract: High-performance GPU kernel optimization remains a critical yet labor-intensive task in modern machine learning workloads. Although Triton, a domain-specific language for GPU programming, enables developers to write efficient kernels with concise code, achieving expert-level performance still requires deep understanding of GPU architectures and low-level performance trade-offs. We present TritonForge, a profiling-guided framework for automated Triton kernel optimization. TritonForge integrates kernel analysis, runtime profiling, and iterative code transformation to streamline the optimization process. By incorporating data-driven feedback from profiling results, the system identifies performance bottlenecks, proposes targeted code modifications, and evaluates their impact automatically. While our prototype leverages large language models (LLMs) to assist in code reasoning and transformation, the framework remains modular and model-agnostic. Across diverse kernel types and GPU architectures, TritonForge achieves up to 5x performance improvement over baseline implementations and on average 1.76x of the cases are successful, providing a foundation for future research in automated GPU performance optimization.
- oai:arXiv.org:2512.09196v1
- cs.SE
- Thu, 11 Dec 2025 00:00:00 -0500
+ Variational-hemivariational inequalities: A brief survey on mathematical theory and numerical analysis
+ https://arxiv.org/abs/2512.10204
+ arXiv:2512.10204v1 Announce Type: new
+Abstract: Variational-hemivariational inequalities are an area full of interesting and challenging mathematical problems. The area can be viewed as a natural extension of that of variational inequalities. Variational-hemivariational inequalities are valuable for application problems from physical sciences and engineering that involve non-smooth and even set-valued relations, monotone or non-monotone, among physical quantities. In the recent years, there has been substantial growth of research interest in modeling, well-posedness analysis, development of numerical methods and numerical algorithms of variational-hemivariational inequalities. This survey paper is devoted to a brief account of well-posedness and numerical analysis results for variational-hemivariational inequalities. The theoretical results are presented for a family of abstract stationary variational-hemivariational inequalities and the main idea is explained for an accessible proof of existence and uniqueness. To better appreciate the distinguished feature of variational-hemivariational inequalities, for comparison, three mechanical problems are introduced leading to a variational equation, a variational inequality, and a variational-hemivariational inequality, respectively. The paper also comments on mixed variational-hemivariational inequalities, with examples from applications in fluid mechanics, and on results concerning the numerical solution of other types (nonstationary, history dependent) of variational-hemivariational inequalities.
+ oai:arXiv.org:2512.10204v1
+ math.NA
+ cs.NA
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Haonan Li, Keyu Man, Partha Kanuparthy, Hanning Chen, Wei Sun, Sreen Tallam, Chenguang Zhu, Kevin Zhu, Zhiyun Qian
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Weimin Han
- Towards Optimal Valve Prescription for Transcatheter Aortic Valve Replacement (TAVR) Surgery: A Machine Learning Approach
- https://arxiv.org/abs/2512.09198
- arXiv:2512.09198v1 Announce Type: new
-Abstract: Transcatheter Aortic Valve Replacement (TAVR) has emerged as a minimally invasive treatment option for patients with severe aortic stenosis, a life-threatening cardiovascular condition. Multiple transcatheter heart valves (THV) have been approved for use in TAVR, but current guidelines regarding valve type prescription remain an active topic of debate. We propose a data-driven clinical support tool to identify the optimal valve type with the objective of minimizing the risk of permanent pacemaker implantation (PPI), a predominant postoperative complication. We synthesize a novel dataset that combines U.S. and Greek patient populations and integrates three distinct data sources (patient demographics, computed tomography scans, echocardiograms) while harmonizing differences in each country's record system. We introduce a leaf-level analysis to leverage population heterogeneity and avoid benchmarking against uncertain counterfactual risk estimates. The final prescriptive model shows a reduction in PPI rates of 26% and 16% compared with the current standard of care in our internal U.S. population and external Greek validation cohort, respectively. To the best of our knowledge, this work represents the first unified, personalized prescription strategy for THV selection in TAVR.
- oai:arXiv.org:2512.09198v1
- cs.LG
+ CP-Env: Evaluating Large Language Models on Clinical Pathways in a Controllable Hospital Environment
+ https://arxiv.org/abs/2512.10206
+ arXiv:2512.10206v1 Announce Type: new
+Abstract: Medical care follows complex clinical pathways that extend beyond isolated physician-patient encounters, emphasizing decision-making and transitions between different stages. Current benchmarks focusing on static exams or isolated dialogues inadequately evaluate large language models (LLMs) in dynamic clinical scenarios. We introduce CP-Env, a controllable agentic hospital environment designed to evaluate LLMs across end-to-end clinical pathways. CP-Env simulates a hospital ecosystem with patient and physician agents, constructing scenarios ranging from triage and specialist consultation to diagnostic testing and multidisciplinary team meetings for agent interaction. Following real hospital adaptive flow of healthcare, it enables branching, long-horizon task execution. We propose a three-tiered evaluation framework encompassing Clinical Efficacy, Process Competency, and Professional Ethics. Results reveal that most models struggle with pathway complexity, exhibiting hallucinations and losing critical diagnostic details. Interestingly, excessive reasoning steps can sometimes prove counterproductive, while top models tend to exhibit reduced tool dependency through internalized knowledge. CP-Env advances medical AI agents development through comprehensive end-to-end clinical evaluation. We provide the benchmark and evaluation tools for further research and development at https://github.com/SPIRAL-MED/CP-Env.
+ oai:arXiv.org:2512.10206v1cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Phevos Paschalidis, Vasiliki Stoumpou, Lisa Everest, Yu Ma, Talhat Azemi, Jawad Haider, Steven Zweibel, Eleftherios M. Protopapas, Jeff Mather, Maciej Tysarowski, George E. Sarris, Robert C. Hagberg, Howard L. Haronian, Dimitris Bertsimas
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yakun Zhu, Zhongzhen Huang, Qianhan Feng, Linjie Mu, Yannian Gu, Shaoting Zhang, Qi Dou, Xiaofan Zhang
- LLMs for Analog Circuit Design Continuum (ACDC)
- https://arxiv.org/abs/2512.09199
- arXiv:2512.09199v1 Announce Type: new
-Abstract: Large Language Models (LLMs) and transformer architectures have shown impressive reasoning and generation capabilities across diverse natural language tasks. However, their reliability and robustness in real-world engineering domains remain largely unexplored, limiting their practical utility in human-centric workflows. In this work, we investigate the applicability and consistency of LLMs for analog circuit design -- a task requiring domain-specific reasoning, adherence to physical constraints, and structured representations -- focusing on AI-assisted design where humans remain in the loop. We study how different data representations influence model behavior and compare smaller models (e.g., T5, GPT-2) with larger foundation models (e.g., Mistral-7B, GPT-oss-20B) under varying training conditions. Our results highlight key reliability challenges, including sensitivity to data format, instability in generated designs, and limited generalization to unseen circuit configurations. These findings provide early evidence on the limits and potential of LLMs as tools to enhance human capabilities in complex engineering tasks, offering insights into designing reliable, deployable foundation models for structured, real-world applications.
- oai:arXiv.org:2512.09199v1
- cs.LG
+ An exploration for higher efficiency in multi objective optimisation with reinforcement learning
+ https://arxiv.org/abs/2512.10208
+ arXiv:2512.10208v1 Announce Type: new
+Abstract: Efficiency in optimisation and search processes persists to be one of the challenges, which affects the performance and use of optimisation algorithms. Utilising a pool of operators instead of a single operator to handle move operations within a neighbourhood remains promising, but an optimum or near optimum sequence of operators necessitates further investigation. One of the promising ideas is to generalise experiences and seek how to utilise it. Although numerous works are done around this issue for single objective optimisation, multi-objective cases have not much been touched in this regard. A generalised approach based on multi-objective reinforcement learning approach seems to create remedy for this issue and offer good solutions. This paper overviews a generalisation approach proposed with certain stages completed and phases outstanding that is aimed to help demonstrate the efficiency of using multi-objective reinforcement learning.
+ oai:arXiv.org:2512.10208v1cs.AI
- cs.PF
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.NE
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Yasaman Esfandiari, Jocelyn Rego, Austin Meyer, Jonathan Gallagher, Mia Levy
-
-
- Meta Lattice: Model Space Redesign for Cost-Effective Industry-Scale Ads Recommendations
- https://arxiv.org/abs/2512.09200
- arXiv:2512.09200v1 Announce Type: new
-Abstract: The rapidly evolving landscape of products, surfaces, policies, and regulations poses significant challenges for deploying state-of-the-art recommendation models at industry scale, primarily due to data fragmentation across domains and escalating infrastructure costs that hinder sustained quality improvements.
- To address this challenge, we propose Lattice, a recommendation framework centered around model space redesign that extends Multi-Domain, Multi-Objective (MDMO) learning beyond models and learning objectives. Lattice addresses these challenges through a comprehensive model space redesign that combines cross-domain knowledge sharing, data consolidation, model unification, distillation, and system optimizations to achieve significant improvements in both quality and cost-efficiency.
- Our deployment of Lattice at Meta has resulted in 10% revenue-driving top-line metrics gain, 11.5% user satisfaction improvement, 6% boost in conversion rate, with 20% capacity saving.
- oai:arXiv.org:2512.09200v1
- cs.IR
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Liang Luo, Yuxin Chen, Zhengyu Zhang, Mengyue Hang, Andrew Gu, Buyun Zhang, Boyang Liu, Chen Chen, Chengze Fan, Dong Liang, Fan Yang, Feifan Gu, Huayu Li, Jade Nie, Jiayi Xu, Jiyan Yang, Jongsoo Park, Laming Chen, Longhao Jin, Qianru Li, Qin Huang, Shali Jiang, Shiwen Shen, Shuaiwen Wang, Sihan Zeng, Siyang Yuan, Tongyi Tang, Weilin Zhang, Wenjun Wang, Xi Liu, Xiaohan Wei, Xiaozhen Xia, Yuchen Hao, Yunlong He, Yasmine Badr, Zeliang Chen, Maxim Naumov, Yantao Yao, Wenlin Chen, Santanu Kolay, GP Musumeci, Ellie Dingqiao Wen
+ 10.5281/zenodo.17778541
+ Mehmet Emin Aydin
- Residual Primitive Fitting of 3D Shapes with SuperFrusta
- https://arxiv.org/abs/2512.09201
- arXiv:2512.09201v1 Announce Type: new
-Abstract: We introduce a framework for converting 3D shapes into compact and editable assemblies of analytic primitives, directly addressing the persistent trade-off between reconstruction fidelity and parsimony. Our approach combines two key contributions: a novel primitive, termed SuperFrustum, and an iterative fiting algorithm, Residual Primitive Fitting (ResFit). SuperFrustum is an analytical primitive that is simultaneously (1) expressive, being able to model various common solids such as cylinders, spheres, cones & their tapered and bent forms, (2) editable, being compactly parameterized with 8 parameters, and (3) optimizable, with a sign distance field differentiable w.r.t. its parameters almost everywhere. ResFit is an unsupervised procedure that interleaves global shape analysis with local optimization, iteratively fitting primitives to the unexplained residual of a shape to discover a parsimonious yet accurate decompositions for each input shape. On diverse 3D benchmarks, our method achieves state-of-the-art results, improving IoU by over 9 points while using nearly half as many primitives as prior work. The resulting assemblies bridge the gap between dense 3D data and human-controllable design, producing high-fidelity and editable shape programs.
- oai:arXiv.org:2512.09201v1
- cs.GR
+ Feature Coding for Scalable Machine Vision
+ https://arxiv.org/abs/2512.10209
+ arXiv:2512.10209v1 Announce Type: new
+Abstract: Deep neural networks (DNNs) drive modern machine vision but are challenging to deploy on edge devices due to high compute demands. Traditional approaches-running the full model on-device or offloading to the cloud face trade-offs in latency, bandwidth, and privacy. Splitting the inference workload between the edge and the cloud offers a balanced solution, but transmitting intermediate features to enable such splitting introduces new bandwidth challenges. To address this, the Moving Picture Experts Group (MPEG) initiated the Feature Coding for Machines (FCM) standard, establishing a bitstream syntax and codec pipeline tailored for compressing intermediate features. This paper presents the design and performance of the Feature Coding Test Model (FCTM), showing significant bitrate reductions-averaging 85.14%-across multiple vision tasks while preserving accuracy. FCM offers a scalable path for efficient and interoperable deployment of intelligent features in bandwidth-limited and privacy-sensitive consumer applications.
+ oai:arXiv.org:2512.10209v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Aditya Ganeshan, Matheus Gadelha, Thibault Groueix, Zhiqin Chen, Siddhartha Chaudhuri, Vladimir Kim, Wang Yifan, Daniel Ritchie
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1109/MCE.2025.3630304
+ 2025 IEEE Consumer Electronics Magazine
+ Md Eimran Hossain Eimon, Juan Merlos, Ashan Perera, Hari Kalva, Velibor Adzic, Borko Furht
- Tensor-Compressed and Fully-Quantized Training of Neural PDE Solvers
- https://arxiv.org/abs/2512.09202
- arXiv:2512.09202v1 Announce Type: new
-Abstract: Physics-Informed Neural Networks (PINNs) have emerged as a promising paradigm for solving partial differential equations (PDEs) by embedding physical laws into neural network training objectives. However, their deployment on resource-constrained platforms is hindered by substantial computational and memory overhead, primarily stemming from higher-order automatic differentiation, intensive tensor operations, and reliance on full-precision arithmetic. To address these challenges, we present a framework that enables scalable and energy-efficient PINN training on edge devices. This framework integrates fully quantized training, Stein's estimator (SE)-based residual loss computation, and tensor-train (TT) decomposition for weight compression. It contributes three key innovations: (1) a mixed-precision training method that use a square-block MX (SMX) format to eliminate data duplication during backpropagation; (2) a difference-based quantization scheme for the Stein's estimator that mitigates underflow; and (3) a partial-reconstruction scheme (PRS) for TT-Layers that reduces quantization-error accumulation. We further design PINTA, a precision-scalable hardware accelerator, to fully exploit the performance of the framework. Experiments on the 2-D Poisson, 20-D Hamilton-Jacobi-Bellman (HJB), and 100-D Heat equations demonstrate that the proposed framework achieves accuracy comparable to or better than full-precision, uncompressed baselines while delivering 5.5x to 83.5x speedups and 159.6x to 2324.1x energy savings. This work enables real-time PDE solving on edge devices and paves the way for energy-efficient scientific computing at scale.
- oai:arXiv.org:2512.09202v1
- cs.LG
+ ID-PaS : Identity-Aware Predict-and-Search for General Mixed-Integer Linear Programs
+ https://arxiv.org/abs/2512.10211
+ arXiv:2512.10211v1 Announce Type: new
+Abstract: Mixed-Integer Linear Programs (MIPs) are powerful and flexible tools for modeling a wide range of real-world combinatorial optimization problems. Predict-and-Search methods operate by using a predictive model to estimate promising variable assignments and then guiding a search procedure toward high-quality solutions. Recent research has demonstrated that incorporating machine learning (ML) into the Predict-and-Search framework significantly enhances its performance. Still, it is restricted to binary problems and overlooks the presence of fixed variables that commonly arise in practical settings. This work extends the Predict-and-Search (PaS) framework to parametric MIPs and introduces ID-PaS, an identity-aware learning framework that enables the ML model to handle heterogeneous variables more effectively. Experiments on several real-world large-scale problems demonstrate that ID-PaS consistently achieves superior performance compared to the state-of-the-art solver Gurobi and PaS.
+ oai:arXiv.org:2512.10211v1cs.AI
- cs.AR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Jinming Lu, Jiayi Tian, Yequan Zhao, Hai Li, Zheng Zhang
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Junyang Cai, El Mehdi Er Raqabi, Pascal Van Hentenryck, Bistra Dilkina
- Beyond Algorithm Evolution: An LLM-Driven Framework for the Co-Evolution of Swarm Intelligence Optimization Algorithms and Prompts
- https://arxiv.org/abs/2512.09209
- arXiv:2512.09209v1 Announce Type: new
-Abstract: The field of automated algorithm design has been advanced by frameworks such as EoH, FunSearch, and Reevo. Yet, their focus on algorithm evolution alone, neglecting the prompts that guide them, limits their effectiveness with LLMs, especially in complex, uncertain environments where they nonetheless implicitly rely on strategies from swarm intelligence optimization algorithms. Recognizing this, we argue that swarm intelligence optimization provides a more generalized and principled foundation for automated design. Consequently, this paper proposes a novel framework for the collaborative evolution of both swarm intelligence algorithms and guiding prompts using a single LLM. To enhance interpretability, we also propose a simple yet efficient evaluation method for prompt templates. The framework was rigorously evaluated on a range of NP problems, where it demonstrated superior performance compared to several state-of-the-art automated design approaches. Experiments with various LLMs (e.g., GPT-4o-mini, Qwen3-32B, GPT-5) reveal significantly divergent evolutionary trajectories in the generated prompts, further underscoring the necessity of a structured co-evolution framework. Importantly, our approach maintains leading performance across different models, demonstrating reduced reliance on the most powerful LLMs and enabling more cost-effective deployments. Ablation studies and in-depth analysis of the evolved prompts confirm that collaborative evolution is essential for achieving optimal performance. Our work establishes a new paradigm for swarm intelligence optimization algorithms, underscoring the indispensable role of prompt evolution.
- oai:arXiv.org:2512.09209v1
- cs.NE
- Thu, 11 Dec 2025 00:00:00 -0500
+ PANDAExpress: a Simpler and Faster PANDA Algorithm
+ https://arxiv.org/abs/2512.10217
+ arXiv:2512.10217v1 Announce Type: new
+Abstract: PANDA is a powerful generic algorithm for answering conjunctive queries (CQs) and disjunctive datalog rules (DDRs) given input degree constraints. In the special case where degree constraints are cardinality constraints and the query is Boolean, PANDA runs in $\tilde O (N^{subw})$-time, where $N$ is the input size, and $subw$ is the submodular width of the query, a notion introduced by Daniel Marx (JACM 2013). When specialized to certain classes of sub-graph pattern finding problems, the $\tilde O(N^{subw})$ runtime matches the optimal runtime possible, modulo some conjectures in fine-grained complexity (Bringmann and Gorbachev (STOC 25)). The PANDA framework is much more general, as it handles arbitrary input degree constraints, which capture common statistics and integrity constraints used in relational database management systems, it works for queries with free variables, and for both CQs and DDRs.
+ The key weakness of PANDA is the large $polylog(N)$-factor hidden in the $\tilde O(\cdot)$ notation. This makes PANDA completely impractical, and fall short of what is achievable with specialized algorithms. This paper resolves this weakness with two novel ideas. First, we prove a new probabilistic inequality that upper-bounds the output size of DDRs under arbitrary degree constraints. Second, the proof of this inequality directly leads to a new algorithm named PANDAExpress that is both simpler and faster than PANDA. The novel feature of PANDAExpress is a new partitioning scheme that uses arbitrary hyperplane cuts instead of axis-parallel hyperplanes used in PANDA. These hyperplanes are dynamically constructed based on data-skewness statistics carefully tracked throughout the algorithm's execution. As a result, PANDAExpress removes the $polylog(N)$-factor from the runtime of PANDA, matching the runtimes of intricate specialized algorithms, while retaining all its generality and power.
+ oai:arXiv.org:2512.10217v1
+ cs.DB
+ cs.IT
+ math.IT
+ math.PR
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Shipeng Cen, Ying Tan
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mahmoud Abo Khamis, Hung Q. Ngo, Dan Suciu
- Targeting Misalignment: A Conflict-Aware Framework for Reward-Model-based LLM Alignment
- https://arxiv.org/abs/2512.09212
- arXiv:2512.09212v1 Announce Type: new
-Abstract: Reward-model-based fine-tuning is a central paradigm in aligning Large Language Models with human preferences. However, such approaches critically rely on the assumption that proxy reward models accurately reflect intended supervision, a condition often violated due to annotation noise, bias, or limited coverage. This misalignment can lead to undesirable behaviors, where models optimize for flawed signals rather than true human values. In this paper, we investigate a novel framework to identify and mitigate such misalignment by treating the fine-tuning process as a form of knowledge integration. We focus on detecting instances of proxy-policy conflicts, cases where the base model strongly disagrees with the proxy. We argue that such conflicts often signify areas of shared ignorance, where neither the policy nor the reward model possesses sufficient knowledge, making them especially susceptible to misalignment. To this end, we propose two complementary metrics for identifying these conflicts: a localized Proxy-Policy Alignment Conflict Score (PACS) and a global Kendall-Tau Distance measure. Building on this insight, we design an algorithm named Selective Human-in-the-loop Feedback via Conflict-Aware Sampling (SHF-CAS) that targets high-conflict QA pairs for additional feedback, refining both the reward model and policy efficiently. Experiments on two alignment tasks demonstrate that our approach enhances general alignment performance, even when trained with a biased proxy reward. Our work provides a new lens for interpreting alignment failures and offers a principled pathway for targeted refinement in LLM training.
- oai:arXiv.org:2512.09212v1
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Does SWE-Bench-Verified Test Agent Ability or Model Memory?
+ https://arxiv.org/abs/2512.10218
+ arXiv:2512.10218v1 Announce Type: new
+Abstract: SWE-Bench-Verified, a dataset comprising 500 issues, serves as a de facto benchmark for evaluating various large language models (LLMs) on their ability to resolve GitHub issues. But this benchmark may overlap with model training data. If that is true, scores may reflect training recall, not issue-solving skill. To study this, we test two Claude models that frequently appear in top-performing agents submitted to the benchmark. We ask them to find relevant files using only issue text, and then issue text plus file paths. We then run the same setup on BeetleBox and SWE-rebench. Despite both benchmarks involving popular open-source Python projects, models performed 3 times better on SWE-Bench-Verified. They were also 6 times better at finding edited files, without any additional context about the projects themselves. This gap suggests the models may have seen many SWE-Bench-Verified tasks during training. As a result, scores on this benchmark may not reflect an agent's ability to handle real software issues, yet it continues to be used in ways that can misrepresent progress and lead to choices that favour agents that use certain models over strong agent design. Our setup tests the localization step with minimal context to the extent that the task should be logically impossible to solve. Our results show the risk of relying on older popular benchmarks and support the shift toward newer datasets built with contamination in mind.
+ oai:arXiv.org:2512.10218v1
+ cs.SE
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by-nc-sa/4.0/
- Zixuan Liu, Siavash H. Khajavi, Guangkai Jiang, Xinru Liu
+ Thanosan Prathifkumar, Noble Saji Mathews, Meiyappan Nagappan
- MPC for momentum counter-balanced and zero-impulse contact with a free-spinning satellite
- https://arxiv.org/abs/2512.09213
- arXiv:2512.09213v1 Announce Type: new
-Abstract: In on-orbit robotics, a servicer satellite's ability to make contact with a free-spinning target satellite is essential to completing most on-orbit servicing (OOS) tasks. This manuscript develops a nonlinear model predictive control (MPC) framework that generates feasible controls for a servicer satellite to achieve zero-impulse contact with a free-spinning target satellite. The overall maneuver requires coordination between two separately actuated modules of the servicer satellite: (1) a moment generation module and (2) a manipulation module. We apply MPC to control both modules by explicitly modeling the cross-coupling dynamics between them. We demonstrate that the MPC controller can enforce actuation and state constraints that prior control approaches could not account for. We evaluate the performance of the MPC controller by simulating zero-impulse contact scenarios with a free-spinning target satellite via numerical Monte Carlo (MC) trials and comparing the simulation results with prior control approaches. Our simulation results validate the effectiveness of the MPC controller in maintaining spin synchronization and zero-impulse contact under operation constraints, moving contact location, and observation and actuation noise.
- oai:arXiv.org:2512.09213v1
- eess.SY
- cs.RO
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Improving the decoding performance of CA-polar codes
+ https://arxiv.org/abs/2512.10223
+ arXiv:2512.10223v1 Announce Type: new
+Abstract: We investigate the use of modern code-agnostic decoders to convert CA-SCL from an incomplete decoder to a complete one. When CA-SCL fails to identify a codeword that passes the CRC check, we apply a code-agnostic decoder that identifies a codeword that satisfies the CRC. We establish that this approach gives gains of up to 0.2 dB in block error rate for CA-Polar codes from the 5G New Radio standard. If, instead, the message had been encoded in a systematic CA-polar code, the gain improves to 0.2 ~ 1dB. Leveraging recent developments in blockwise soft output, we additionally establish that it is possible to control the undetected error rate even when using the CRC for error correction.
+ oai:arXiv.org:2512.10223v1
+ cs.IT
+ eess.SP
+ math.IT
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Theofania Karampela, Rishie Seshadri, Florian D\"orfler, Sarah H. Q. Li
+ http://creativecommons.org/licenses/by/4.0/
+ Jiewei Feng, Peihong Yuan, Ken R. Duffy, Muriel M\'edard
- View-on-Graph: Zero-shot 3D Visual Grounding via Vision-Language Reasoning on Scene Graphs
- https://arxiv.org/abs/2512.09215
- arXiv:2512.09215v1 Announce Type: new
-Abstract: 3D visual grounding (3DVG) identifies objects in 3D scenes from language descriptions. Existing zero-shot approaches leverage 2D vision-language models (VLMs) by converting 3D spatial information (SI) into forms amenable to VLM processing, typically as composite inputs such as specified view renderings or video sequences with overlaid object markers. However, this VLM + SI paradigm yields entangled visual representations that compel the VLM to process entire cluttered cues, making it hard to exploit spatial semantic relationships effectively. In this work, we propose a new VLM x SI paradigm that externalizes the 3D SI into a form enabling the VLM to incrementally retrieve only what it needs during reasoning. We instantiate this paradigm with a novel View-on-Graph (VoG) method, which organizes the scene into a multi-modal, multi-layer scene graph and allows the VLM to operate as an active agent that selectively accesses necessary cues as it traverses the scene. This design offers two intrinsic advantages: (i) by structuring 3D context into a spatially and semantically coherent scene graph rather than confounding the VLM with densely entangled visual inputs, it lowers the VLM's reasoning difficulty; and (ii) by actively exploring and reasoning over the scene graph, it naturally produces transparent, step-by-step traces for interpretable 3DVG. Extensive experiments show that VoG achieves state-of-the-art zero-shot performance, establishing structured scene exploration as a promising strategy for advancing zero-shot 3DVG.
- oai:arXiv.org:2512.09215v1
+ Federated Domain Generalization with Latent Space Inversion
+ https://arxiv.org/abs/2512.10224
+ arXiv:2512.10224v1 Announce Type: new
+Abstract: Federated domain generalization (FedDG) addresses distribution shifts among clients in a federated learning framework. FedDG methods aggregate the parameters of locally trained client models to form a global model that generalizes to unseen clients while preserving data privacy. While improving the generalization capability of the global model, many existing approaches in FedDG jeopardize privacy by sharing statistics of client data between themselves. Our solution addresses this problem by contributing new ways to perform local client training and model aggregation. To improve local client training, we enforce (domain) invariance across local models with the help of a novel technique, \textbf{latent space inversion}, which enables better client privacy. When clients are not \emph{i.i.d}, aggregating their local models may discard certain local adaptations. To overcome this, we propose an \textbf{important weight} aggregation strategy to prioritize parameters that significantly influence predictions of local models during aggregation. Our extensive experiments show that our approach achieves superior results over state-of-the-art methods with less communication overhead.
+ oai:arXiv.org:2512.10224v1
+ cs.LG
+ cs.AIcs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yuanyuan Liu, Haiyang Mei, Dongyang Zhan, Jiayue Zhao, Dongsheng Zhou, Bo Dong, Xin Yang
+ http://creativecommons.org/licenses/by/4.0/
+ Ragja Palakkadavath, Hung Le, Thanh Nguyen-Tang, Svetha Venkatesh, Sunil Gupta
- Bug Priority Change Prediction: An Exploratory Study on Apache Software
- https://arxiv.org/abs/2512.09216
- arXiv:2512.09216v1 Announce Type: new
-Abstract: Bug fixing is a critical activity in the software development process. In issue tracking systems such as JIRA, each bug report is assigned a priority level to indicate the urgency and importance level of the bug. The priority may change during the bug fixing process, indicating that the urgency and importance level of the bug will change with the bug fixing. However, manually evaluating priority changes for bugs is a tedious process that heavily relies on the subjective judgment of developers and project managers, leading to incorrect priority changes and thus hindering timely bug fixes. Given the lack of research on bug priority change prediction, we propose a novel two-phase bug report priority change prediction method based on bug fixing evolution features and class imbalance handling strategy. Specifically, we divided the bug lifecycle into two phases: bug reporting and bug fixing, and constructed bug priority change prediction models for each phase. To evaluate the performance of our method, we conducted experiments on a bug dataset constructed from 32 non-trivial Apache projects. The experimental results show that our proposed bug fixing evolution features and the adopted class imbalance handling strategy can effectively improve the performance of prediction models. The F1-score of the prediction model constructed for the bug reporting phase reached 0.798, while the F1-weighted and F1-macro of the prediction model constructed for the bug fixing phase were 0.712 and 0.613, respectively. Furthermore, we explored the cross-project applicability of our prediction models and their performance at different priority levels. The findings indicate large variations in model performance across different projects, although the overall scores remain decent. Meanwhile, the predictive performance across various priority levels remained relatively consistently high.
- oai:arXiv.org:2512.09216v1
- cs.SE
- Thu, 11 Dec 2025 00:00:00 -0500
+ Latent Chain-of-Thought World Modeling for End-to-End Driving
+ https://arxiv.org/abs/2512.10226
+ arXiv:2512.10226v1 Announce Type: new
+Abstract: Recent Vision-Language-Action (VLA) models for autonomous driving explore inference-time reasoning as a way to improve driving performance and safety in challenging scenarios. Most prior work uses natural language to express chain-of-thought (CoT) reasoning before producing driving actions. However, text may not be the most efficient representation for reasoning. In this work, we present Latent-CoT-Drive (LCDrive): a model that expresses CoT in a latent language that captures possible outcomes of the driving actions being considered. Our approach unifies CoT reasoning and decision making by representing both in an action-aligned latent space. Instead of natural language, the model reasons by interleaving (1) action-proposal tokens, which use the same vocabulary as the model's output actions; and (2) world model tokens, which are grounded in a learned latent world model and express future outcomes of these actions. We cold start latent CoT by supervising the model's action proposals and world model tokens based on ground-truth future rollouts of the scene. We then post-train with closed-loop reinforcement learning to strengthen reasoning capabilities. On a large-scale end-to-end driving benchmark, LCDrive achieves faster inference, better trajectory quality, and larger improvements from interactive reinforcement learning compared to both non-reasoning and text-reasoning baselines.
+ oai:arXiv.org:2512.10226v1
+ cs.CV
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Guangzong Cai, Zengyang Li, Peng Liang, Ran Mo, Hui Liu, Yutao Ma
+ Shuhan Tan, Kashyap Chitta, Yuxiao Chen, Ran Tian, Yurong You, Yan Wang, Wenjie Luo, Yulong Cao, Philipp Krahenbuhl, Marco Pavone, Boris Ivanovic
- Dynamic Graph Coloring: Sequential, Parallel, and Distributed
- https://arxiv.org/abs/2512.09218
- arXiv:2512.09218v1 Announce Type: new
-Abstract: We present a simple randomized algorithm that can efficiently maintain a $(\Delta+1)$ coloring as the graph undergoes edge insertion and deletion updates, where $\Delta$ denotes an upper bound on the maximum degree. A key advantage is the algorithm's ability to process many updates simultaneously, which makes it naturally adaptable to the parallel and distributed models. Concretely, it gives a unified framework across the models, leading to the following results:
- - In the sequential setting, the algorithm processes each update in $O(1)$ expected time, worst-case. This matches and strengthens the results of Henzinger and Peng [TALG 2022] and Bhattacharya et al. [TALG 2022], who achieved an $O(1)$ bound but amortized (in expectation and with high probability, respectively), whose work was an improvement of the $O(\log \Delta)$ expected amortized bound of Bhattacharya et al. [SODA'18].
- - In the parallel setting, the algorithm processes each (arbitrary size) batch of updates using $O(1)$ work per update in the batch in expectation, and in $\text{poly}(\log n)$ depth with high probability. This is, in a sense, an ideal parallelization of the above results.
- - In the distributed setting, the algorithm can maintain a coloring of the network graph as (potentially many) edges are added or deleted. The maintained coloring is always proper; it may become partial upon updates, i.e., some nodes may temporarily lose their colors, but quickly converges to a full, proper coloring. Concretely, each insertion and deletion causes at most $O(1)$ nodes to become uncolored, but this is resolved within $O(\log n)$ rounds with high probability (e.g., in the absence of further updates nearby--the precise guarantee is stronger, but technical). Importantly, the algorithm incurs only $O(1)$ expected message complexity and computation per update.
- oai:arXiv.org:2512.09218v1
- cs.DS
- Thu, 11 Dec 2025 00:00:00 -0500
+ An Efficient Graph-Transformer Operator for Learning Physical Dynamics with Manifolds Embedding
+ https://arxiv.org/abs/2512.10227
+ arXiv:2512.10227v1 Announce Type: new
+Abstract: Accurate and efficient physical simulations are essential in science and engineering, yet traditional numerical solvers face significant challenges in computational cost when handling simulations across dynamic scenarios involving complex geometries, varying boundary/initial conditions, and diverse physical parameters. While deep learning offers promising alternatives, existing methods often struggle with flexibility and generalization, particularly on unstructured meshes, which significantly limits their practical applicability. To address these challenges, we propose PhysGTO, an efficient Graph-Transformer Operator for learning physical dynamics through explicit manifold embeddings in both physical and latent spaces. In the physical space, the proposed Unified Graph Embedding module aligns node-level conditions and constructs sparse yet structure-preserving graph connectivity to process heterogeneous inputs. In the latent space, PhysGTO integrates a lightweight flux-oriented message-passing scheme with projection-inspired attention to capture local and global dependencies, facilitating multilevel interactions among complex physical correlations. This design ensures linear complexity relative to the number of mesh points, reducing both the number of trainable parameters and computational costs in terms of floating-point operations (FLOPs), and thereby allowing efficient inference in real-time applications. We introduce a comprehensive benchmark spanning eleven datasets, covering problems with unstructured meshes, transient flow dynamics, and large-scale 3D geometries. PhysGTO consistently achieves state-of-the-art accuracy while significantly reducing computational costs, demonstrating superior flexibility, scalability, and generalization in a wide range of simulation tasks.
+ oai:arXiv.org:2512.10227v1
+ cs.CE
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Mohsen Ghaffari, Jaehyun Koo
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pengwei Liu, Xingyu Ren, Pengkai Wang, Hangjie Yuan, Zhongkai Hao, Guanyu Chen, Chao Xu, Dong Ni, Shengze Cai
- CORE: A Conceptual Reasoning Layer for Large Language Models
- https://arxiv.org/abs/2512.09222
- arXiv:2512.09222v1 Announce Type: new
-Abstract: Large language models handle single-turn generation well, but multi-turn interactions still require the model to reconstruct user intent and task state from an expanding token history because internal representations do not persist across turns. This token-first paradigm leads to drift, inconsistent reasoning modes, and growing prompts as conversations deepen. We propose CORE, a concept-first interaction layer that improves multi-turn stability without modifying model weights. CORE combines a small library of universal cognitive operators with a persistent Local Concept - a compact semantic state capturing the task, constraints, preferences, and intermediate results. Each model call receives only this concept state, the user's latest instruction, and the selected operator, eliminating the need to replay full history. A preliminary prototype simulating CORE's behavior shows about 42% reduction in cumulative prompt tokens, though this number reflects prototype conditions and should not be interpreted as a real-world performance estimate. CORE offers a model-agnostic mechanism that separates conceptual reasoning from language generation, suggesting a scalable direction for more stable multi-turn systems.
- oai:arXiv.org:2512.09222v1
- cs.CL
+ Adaptive Information Routing for Multimodal Time Series Forecasting
+ https://arxiv.org/abs/2512.10229
+ arXiv:2512.10229v1 Announce Type: new
+Abstract: Time series forecasting is a critical task for artificial intelligence with numerous real-world applications. Traditional approaches primarily rely on historical time series data to predict the future values. However, in practical scenarios, this is often insufficient for accurate predictions due to the limited information available. To address this challenge, multimodal time series forecasting methods which incorporate additional data modalities, mainly text data, alongside time series data have been explored. In this work, we introduce the Adaptive Information Routing (AIR) framework, a novel approach for multimodal time series forecasting. Unlike existing methods that treat text data on par with time series data as interchangeable auxiliary features for forecasting, AIR leverages text information to dynamically guide the time series model by controlling how and to what extent multivariate time series information should be combined. We also present a text-refinement pipeline that employs a large language model to convert raw text data into a form suitable for multimodal forecasting, and we introduce a benchmark that facilitates multimodal forecasting experiments based on this pipeline. Experiment results with the real world market data such as crude oil price and exchange rates demonstrate that AIR effectively modulates the behavior of the time series model using textual inputs, significantly enhancing forecasting accuracy in various time series forecasting tasks.
+ oai:arXiv.org:2512.10229v1
+ cs.LGcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Vishwas Hegde, Vindhya Shigehalli
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Jun Seo, Hyeokjun Choe, Seohui Bae, Soyeon Park, Wonbin Ahn, Taeyoon Lim, Junhyuk Kang, Sangjun Han, Jaehoon Lee, Dongwan Kang, Minjae Kim, Sungdong Yoo, Soonyoung Lee
- Enabling Next-Generation Consumer Experience with Feature Coding for Machines
- https://arxiv.org/abs/2512.09232
- arXiv:2512.09232v1 Announce Type: new
-Abstract: As consumer devices become increasingly intelligent and interconnected, efficient data transfer solutions for machine tasks have become essential. This paper presents an overview of the latest Feature Coding for Machines (FCM) standard, part of MPEG-AI and developed by the Moving Picture Experts Group (MPEG). FCM supports AI-driven applications by enabling the efficient extraction, compression, and transmission of intermediate neural network features. By offloading computationally intensive operations to base servers with high computing resources, FCM allows low-powered devices to leverage large deep learning models. Experimental results indicate that the FCM standard maintains the same level of accuracy while reducing bitrate requirements by 75.90% compared to remote inference.
- oai:arXiv.org:2512.09232v1
+ Emerging Standards for Machine-to-Machine Video Coding
+ https://arxiv.org/abs/2512.10230
+ arXiv:2512.10230v1 Announce Type: new
+Abstract: Machines are increasingly becoming the primary consumers of visual data, yet most deployments of machine-to-machine systems still rely on remote inference where pixel-based video is streamed using codecs optimized for human perception. Consequently, this paradigm is bandwidth intensive, scales poorly, and exposes raw images to third parties. Recent efforts in the Moving Picture Experts Group (MPEG) redesigned the pipeline for machine-to-machine communication: Video Coding for Machines (VCM) is designed to apply task-aware coding tools in the pixel domain, and Feature Coding for Machines (FCM) is designed to compress intermediate neural features to reduce bitrate, preserve privacy, and support compute offload. Experiments show that FCM is capable of maintaining accuracy close to edge inference while significantly reducing bitrate. Additional analysis of H.26X codecs used as inner codecs in FCM reveals that H.265/High Efficiency Video Coding (HEVC) and H.266/Versatile Video Coding (VVC) achieve almost identical machine task performance, with an average BD-Rate increase of 1.39% when VVC is replaced with HEVC. In contrast, H.264/Advanced Video Coding (AVC) yields an average BD-Rate increase of 32.28% compared to VVC. However, for the tracking task, the impact of codec choice is minimal, with HEVC outperforming VVC and achieving BD Rate of -1.81% and 8.79% for AVC, indicating that existing hardware for already deployed codecs can support machine-to-machine communication without degrading performance.
+ oai:arXiv.org:2512.10230v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by-nc-nd/4.0/
- 10.1109/ICCE63647.2025.10930026
- 2025 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 2025, pp. 1-4
- Md Eimran Hossain Eimon, Juan Merlos, Ashan Perera, Hari Kalva, Velibor Adzic, Borko Furht
+ Md Eimran Hossain Eimon, Velibor Adzic, Hari Kalva, Borko Furht
- Analysis of the Security Design, Engineering, and Implementation of the SecureDNA System
- https://arxiv.org/abs/2512.09233
- arXiv:2512.09233v1 Announce Type: new
-Abstract: We analyze security aspects of the SecureDNA system regarding its system design, engineering, and implementation. This system enables DNA synthesizers to screen order requests against a database of hazards. By applying novel cryptography, the system aims to keep order requests and the database of hazards secret. Discerning the detailed operation of the system in part from source code (Version 1.0.8), our analysis examines key management, certificate infrastructure, authentication, and rate-limiting mechanisms. We also perform the first formal-methods analysis of the mutual authentication, basic request, and exemption-handling protocols.
- Without breaking the cryptography, our main finding is that SecureDNA's custom mutual authentication protocol SCEP achieves only one-way authentication: the hazards database and keyservers never learn with whom they communicate. This structural weakness violates the principle of defense in depth and enables an adversary to circumvent rate limits that protect the secrecy of the hazards database, if the synthesizer connects with a malicious or corrupted keyserver or hashed database. We point out an additional structural weakness that also violates the principle of defense in depth: inadequate cryptographic bindings prevent the system from detecting if responses, within a TLS channel, from the hazards database were modified. Consequently, if a synthesizer were to reconnect with the database over the same TLS session, an adversary could replay and swap responses from the database without breaking TLS. Although the SecureDNA implementation does not allow such reconnections, it would be stronger security engineering to avoid the underlying structural weakness. We identify these vulnerabilities and suggest and verify mitigations, including adding strong bindings. Software Version 1.1.0 fixes SCEP with our proposed SCEP+ protocol.
- oai:arXiv.org:2512.09233v1
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ SemanticBBV: A Semantic Signature for Cross-Program Knowledge Reuse in Microarchitecture Simulation
+ https://arxiv.org/abs/2512.10231
+ arXiv:2512.10231v1 Announce Type: new
+Abstract: For decades, sampling-based techniques have been the de facto standard for accelerating microarchitecture simulation, with the Basic Block Vector (BBV) serving as the cornerstone program representation. Yet, the BBV's fundamental limitations: order-dependent IDs that prevent cross-program knowledge reuse and a lack of semantic content predictive of hardware performance have left a massive potential for optimization untapped.
+ To address these gaps, we introduce SemanticBBV, a novel, two-stage framework that generates robust, performance-aware signatures for cross-program simulation reuse. First, a lightweight RWKV-based semantic encoder transforms assembly basic blocks into rich Basic Block Embeddings (BBEs), capturing deep functional semantics. Second, an order-invariant Set Transformer aggregates these BBEs, weighted by execution frequency, into a final signature. Crucially, this stage is co-trained with a dual objective: a triplet loss for signature distinctiveness and a Cycles Per Instruction (CPI) regression task, directly imbuing the signature with performance sensitivity. Our evaluation demonstrates that SemanticBBV not only matches traditional BBVs in single-program accuracy but also enables unprecedented cross-program analysis. By simulating just 14 universal program points, we estimated the performance of ten SPEC CPU benchmarks with 86.3% average accuracy, achieving a 7143x simulation speedup. Furthermore, the signature shows strong adaptability to new microarchitectures with minimal fine-tuning.
+ oai:arXiv.org:2512.10231v1
+ cs.AR
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/publicdomain/zero/1.0/
- Alan T. Sherman, Jeremy J. Romanik Romano, Edward Zieglar, Enis Golaszewski, Jonathan D. Fuchs, William E. Byrd
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Zhenguo Liu, Chengao Shi, Chen Ding, Jiang Xu
- Efficient Feature Compression for Machines with Global Statistics Preservation
- https://arxiv.org/abs/2512.09235
- arXiv:2512.09235v1 Announce Type: new
-Abstract: The split-inference paradigm divides an artificial intelligence (AI) model into two parts. This necessitates the transfer of intermediate feature data between the two halves. Here, effective compression of the feature data becomes vital. In this paper, we employ Z-score normalization to efficiently recover the compressed feature data at the decoder side. To examine the efficacy of our method, the proposed method is integrated into the latest Feature Coding for Machines (FCM) codec standard under development by the Moving Picture Experts Group (MPEG). Our method supersedes the existing scaling method used by the current standard under development. It both reduces the overhead bits and improves the end-task accuracy. To further reduce the overhead in certain circumstances, we also propose a simplified method. Experiments show that using our proposed method shows 17.09% reduction in bitrate on average across different tasks and up to 65.69% for object tracking without sacrificing the task accuracy.
- oai:arXiv.org:2512.09235v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Permutation-Equivariant Learning for Dynamic Security Assessment of Power System Frequency Response
+ https://arxiv.org/abs/2512.10232
+ arXiv:2512.10232v1 Announce Type: new
+Abstract: This paper presents a hybrid model-AI framework for real-time dynamic security assessment of frequency stability in power systems. The proposed method rapidly estimates key frequency parameters under a dynamic set of disturbances, which are continuously updated based on operating conditions and unit commitment. To achieve this, the framework builds on a modal-based formulation of the system frequency response (SFR), which leverages the system's eigenstructure to predict key frequency stability metrics. A Deep Sets-inspired neural network is employed to estimate the complex modal coefficients required by the modal-based SFR approach, formulated as a permutation-equivariant learning problem. This enables fast and accurate prediction of the frequency nadir and its timing across different operating conditions and disturbances. The framework achieves scalability by reusing precomputed modal structures and updating only the disturbance-specific coefficients. It demonstrates strong generalization capabilities without requiring an extensive set of operating scenarios during training or the widespread deployment of phasor measurement units (PMUs). The method is validated on the IEEE 39-bus and 118-bus systems, showing superior accuracy, robustness, and computational efficiency compared to purely data-driven approaches.
+ oai:arXiv.org:2512.10232v1
+ eess.SY
+ cs.SY
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- 10.1109/ISCAS56072.2025.11043278
- 2025 IEEE International Symposium on Circuits and Systems (ISCAS), London, United Kingdom, 2025, pp. 1-5
- Md Eimran Hossain Eimon, Hyomin Choi, Fabien Racap\'e, Mateen Ulhaq, Velibor Adzic, Hari Kalva, Borko Furht
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Francisco Zelaya-Arrazabal, Sebastian Martinez-Lizana, Hector Pulgar-Painemal, Jin Zhao
- Training-free Context-adaptive Attention for Efficient Long Context Modeling
- https://arxiv.org/abs/2512.09238
- arXiv:2512.09238v1 Announce Type: new
-Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of natural language processing tasks. These capabilities stem primarily from the self-attention mechanism, which enables modeling of long-range dependencies. However, the quadratic complexity of self-attention with respect to sequence length poses significant computational and memory challenges, especially as sequence length extends to extremes. While various sparse attention and KV cache compression methods have been proposed to improve efficiency, they often suffer from limitations such as reliance on fixed patterns, inability to handle both prefilling and decoding stages, or the requirement for additional training. In this paper, we propose Training-free Context-adaptive Attention (TCA-Attention), a training-free sparse attention mechanism that selectively attends to only the informative tokens for efficient long-context inference. Our method consists of two lightweight phases: i) an offline calibration phase that determines head-specific sparsity budgets via a single forward pass, and ii) an online token selection phase that adaptively retains core context tokens using a lightweight redundancy metric. TCA-Attention provides a unified solution that accelerates both prefilling and decoding while reducing KV cache memory footprint, without requiring parameter updates or architectural changes. Theoretical analysis shows that our approach maintains bounded approximation error. Extensive experiments demonstrate that TCA-Attention achieves a 2.8$\times$ speedup and reduces KV cache by 61% at 128K context length while maintaining performance comparable to full attention across various benchmarks, offering a practical plug-and-play solution for efficient long-context inference.
- oai:arXiv.org:2512.09238v1
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Understanding Toxic Interaction Across User and Video Clusters in Social Video Platforms
+ https://arxiv.org/abs/2512.10233
+ arXiv:2512.10233v1 Announce Type: new
+Abstract: Social video platforms shape how people access information, while recommendation systems can narrow exposure and increase the risk of toxic interaction. Previous research has often examined text or users in isolation, overlooking the structural context in which such toxic interactions occur. Without considering who interacts with whom and around what content, it is difficult to explain why negative expressions cluster within particular communities. To address this issue, this study focuses on the Chinese social video platform Bilibili, incorporating video-level information as the environment for user expression, modeling users and videos in an interaction matrix. After normalization and dimensionality reduction, we perform separate clustering on both sides of the video-user interaction matrix with K-means. Cluster assignments facilitate comparisons of user behavior, including message length, posting frequency, and source (barrage and comment), as well as textual features such as sentiment and toxicity, and video attributes defined by uploaders. Such a clustering approach integrates structural ties with content signals to identify stable groups of videos and users. We find clear stratification in interaction style (message length, comment ratio) across user clusters, while sentiment and toxicity differences are weak or inconsistent across video clusters. Across video clusters, viewing volume exhibits a clear hierarchy, with higher exposure groups concentrating more toxic expressions. For such a group, platforms should require timely intervention during periods of rapid growth. Across user clusters, comment ratio and message length form distinct hierarchies, and several clusters with longer and comment-oriented messages exhibit lower toxicity. For such groups, platforms should strengthen mechanisms that sustain rational dialogue and encourage engagement across topics.
+ oai:arXiv.org:2512.10233v1
+ cs.SI
+ cs.DL
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Zeng You, Yaofo Chen, Shuhai Zhang, Zhijie Qiu, Tingyu Wu, Yingjian Li, Yaowei Wang, Mingkui Tan
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qiao Wang, Liang Liu, Mitsuo Yoshida
- A Clinically Interpretable Deep CNN Framework for Early Chronic Kidney Disease Prediction Using Grad-CAM-Based Explainable AI
- https://arxiv.org/abs/2512.09244
- arXiv:2512.09244v1 Announce Type: new
-Abstract: Chronic Kidney Disease (CKD) constitutes a major global medical burden, marked by the gradual deterioration of renal function, which results in the impaired clearance of metabolic waste and disturbances in systemic fluid homeostasis. Owing to its substantial contribution to worldwide morbidity and mortality, the development of reliable and efficient diagnostic approaches is critically important to facilitate early detection and prompt clinical management. This study presents a deep convolutional neural network (CNN) for early CKD detection from CT kidney images, complemented by class balancing using Synthetic Minority Over-sampling Technique (SMOTE) and interpretability via Gradient-weighted Class Activation Mapping (Grad-CAM). The model was trained and evaluated on the CT KIDNEY DATASET, which contains 12,446 CT images, including 3,709 cyst, 5,077 normal, 1,377 stone, and 2,283 tumor cases. The proposed deep CNN achieved a remarkable classification performance, attaining 100% accuracy in the early detection of chronic kidney disease (CKD). This significant advancement demonstrates strong potential for addressing critical clinical diagnostic challenges and enhancing early medical intervention strategies.
- oai:arXiv.org:2512.09244v1
- cs.CV
+ InFerActive: Towards Scalable Human Evaluation of Large Language Models through Interactive Inference
+ https://arxiv.org/abs/2512.10234
+ arXiv:2512.10234v1 Announce Type: new
+Abstract: Human evaluation remains the gold standard for evaluating outputs of Large Language Models (LLMs). The current evaluation paradigm reviews numerous individual responses, leading to significant scalability challenges. LLM outputs can be more efficiently represented as a tree structure, reflecting their autoregressive generation process and stochastic token selection. However, conventional tree visualization cannot scale to the exponentially large trees generated by modern sampling methods of LLMs. To address this problem, we present InFerActive, an interactive inference system for scalable human evaluation. InFerActive enables on-demand exploration through probability-based filtering and evaluation features, while bridging the semantic gap between computational tokens and human-readable text through adaptive visualization techniques. Through a technical evaluation and user study (N=12), we demonstrate that InFerActive significantly improves evaluation efficiency and enables more comprehensive assessment of model behavior. We further conduct expert case studies that demonstrate InFerActive's practical applicability and potential for transforming LLM evaluation workflows.
+ oai:arXiv.org:2512.10234v1
+ cs.HCcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Anas Bin Ayub, Nilima Sultana Niha, Md. Zahurul Haque
-
-
- OmniPSD: Layered PSD Generation with Diffusion Transformer
- https://arxiv.org/abs/2512.09247
- arXiv:2512.09247v1 Announce Type: new
-Abstract: Recent advances in diffusion models have greatly improved image generation and editing, yet generating or reconstructing layered PSD files with transparent alpha channels remains highly challenging. We propose OmniPSD, a unified diffusion framework built upon the Flux ecosystem that enables both text-to-PSD generation and image-to-PSD decomposition through in-context learning. For text-to-PSD generation, OmniPSD arranges multiple target layers spatially into a single canvas and learns their compositional relationships through spatial attention, producing semantically coherent and hierarchically structured layers. For image-to-PSD decomposition, it performs iterative in-context editing, progressively extracting and erasing textual and foreground components to reconstruct editable PSD layers from a single flattened image. An RGBA-VAE is employed as an auxiliary representation module to preserve transparency without affecting structure learning. Extensive experiments on our new RGBA-layered dataset demonstrate that OmniPSD achieves high-fidelity generation, structural consistency, and transparency awareness, offering a new paradigm for layered design generation and decomposition with diffusion transformers.
- oai:arXiv.org:2512.09247v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-sa/4.0/
- Cheng Liu, Yiren Song, Haofan Wang, Mike Zheng Shou
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Junhyeong Hwangbo, Soohyun Lee, Minsoo Cheong, Hyeon Jeon, Jinwook Seo
- GLACIA: Instance-Aware Positional Reasoning for Glacial Lake Segmentation via Multimodal Large Language Model
- https://arxiv.org/abs/2512.09251
- arXiv:2512.09251v1 Announce Type: new
-Abstract: Glacial lake monitoring bears great significance in mitigating the anticipated risk of Glacial Lake Outburst Floods. However, existing segmentation methods based on convolutional neural networks (CNNs) and Vision Transformers (ViTs), remain constrained to pixel-level predictions, lacking high-level global scene semantics and human-interpretable reasoning. To address this, we introduce GLACIA (\textbf{G}lacial \textbf{LA}ke segmentation with \textbf{C}ontextual \textbf{I}nstance \textbf{A}wareness), the first framework that integrates large language models with segmentation capabilities to produce both accurate segmentation masks and corresponding spatial reasoning outputs. We construct the Glacial Lake Position Reasoning (GLake-Pos) dataset pipeline, which provides diverse, spatially grounded question-answer pairs designed to overcome the lack of instance-aware positional reasoning data in remote sensing. Comparative evaluation demonstrate that GLACIA (mIoU: 87.30) surpasses state-of-the-art method based on CNNs (mIoU: 78.55 - 79.01), ViTs (mIoU: 69.27 - 81.75), Geo-foundation models (mIoU: 76.37 - 87.10), and reasoning based segmentation methods (mIoU: 60.12 - 75.66). Our approach enables intuitive disaster preparedness and informed policy-making in the context of rapidly changing glacial environments by facilitating natural language interaction, thereby supporting more efficient and interpretable decision-making. The code is released on https://github.com/lalitmaurya47/GLACIA
- oai:arXiv.org:2512.09251v1
- cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Task-Oriented Grasping Using Reinforcement Learning with a Contextual Reward Machine
+ https://arxiv.org/abs/2512.10235
+ arXiv:2512.10235v1 Announce Type: new
+Abstract: This paper presents a reinforcement learning framework that incorporates a Contextual Reward Machine for task-oriented grasping. The Contextual Reward Machine reduces task complexity by decomposing grasping tasks into manageable sub-tasks. Each sub-task is associated with a stage-specific context, including a reward function, an action space, and a state abstraction function. This contextual information enables efficient intra-stage guidance and improves learning efficiency by reducing the state-action space and guiding exploration within clearly defined boundaries. In addition, transition rewards are introduced to encourage or penalize transitions between stages which guides the model toward desirable stage sequences and further accelerates convergence. When integrated with the Proximal Policy Optimization algorithm, the proposed method achieved a 95% success rate across 1,000 simulated grasping tasks encompassing diverse objects, affordances, and grasp topologies. It outperformed the state-of-the-art methods in both learning speed and success rate. The approach was transferred to a real robot, where it achieved a success rate of 83.3% in 60 grasping tasks over six affordances. These experimental results demonstrate superior accuracy, data efficiency, and learning efficiency. They underscore the model's potential to advance task-oriented grasping in both simulated and real-world settings.
+ oai:arXiv.org:2512.10235v1
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Lalit Maurya, Saurabh Kaushik, Beth Tellman
+ Hui Li, Akhlak Uz Zaman, Fujian Yan, Hongsheng He
- The Illusion of Rationality: Tacit Bias and Strategic Dominance in Frontier LLM Negotiation Games
- https://arxiv.org/abs/2512.09254
- arXiv:2512.09254v1 Announce Type: new
-Abstract: Large language models (LLMs) are increasingly being deployed as autonomous agents on behalf of institutions and individuals in economic, political, and social settings that involve negotiation. Yet this trend carries significant risks if their strategic behavior is not well understood. In this work, we revisit the NegotiationArena framework and run controlled simulation experiments on a diverse set of frontier LLMs across three multi turn bargaining games: Buyer Seller, Multi turn Ultimatum, and Resource Exchange. We ask whether improved general reasoning capabilities lead to rational, unbiased, and convergent negotiation strategies. Our results challenge this assumption. We find that models diverge into distinct, model specific strategic equilibria rather than converging to a unified optimal behavior. Moreover, strong numerical and semantic anchoring effects persist: initial offers are highly predictive of final agreements, and models consistently generate biased proposals by collapsing diverse internal valuations into rigid, generic price points. More concerningly, we observe dominance patterns in which some models systematically achieve higher payoffs than their counterparts. These findings underscore an urgent need to develop mechanisms to mitigate these issues before deploying such systems in real-world scenarios.
- oai:arXiv.org:2512.09254v1
- cs.GT
- cs.MA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Design Space Exploration of DMA based Finer-Grain Compute Communication Overlap
+ https://arxiv.org/abs/2512.10236
+ arXiv:2512.10236v1 Announce Type: new
+Abstract: As both ML training and inference are increasingly distributed, parallelization techniques that shard (divide) ML model across GPUs of a distributed system, are often deployed. With such techniques, there is a high prevalence of data-dependent communication and computation operations where communication is exposed, leaving as high as 1.7x ideal performance on the table. Prior works harness the fact that ML model state and inputs are already sharded, and employ careful overlap of individual computation/communication shards. While such coarse-grain overlap is promising, in this work, we instead make a case for finer-grain compute-communication overlap which we term FiCCO, where we argue for finer-granularity, one-level deeper overlap than at shard-level, to unlock compute/communication overlap for a wider set of network topologies, finer-grain dataflow and more. We show that FiCCO opens up a wider design space of execution schedules than possible at shard-level alone. At the same time, decomposition of ML operations into smaller operations (done in both shard-based and finer-grain techniques) causes operation-level inefficiency losses. To balance the two, we first present a detailed characterization of these inefficiency losses, then present a design space of FiCCO schedules, and finally overlay the schedules with concomitant inefficiency signatures. Doing so helps us design heuristics that frameworks and runtimes can harness to select bespoke FiCCO schedules based on the nature of underlying ML operations. Finally, to further minimize contention inefficiencies inherent with operation overlap, we offload communication to GPU DMA engines. We evaluate several scenarios from realistic ML deployments and demonstrate that our proposed bespoke schedules deliver up to 1.6x speedup and our heuristics provide accurate guidance in 81% of unseen scenarios.
+ oai:arXiv.org:2512.10236v1
+ cs.DC
+ cs.AR
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Manuel S. R\'ios, Ruben F. Manrique, Nicanor Quijano, Luis F. Giraldo
+ Shagnik Pal, Shaizeen Aga, Suchita Pati, Mahzabeen Islam, Lizy K. John
- ROI-Packing: Efficient Region-Based Compression for Machine Vision
- https://arxiv.org/abs/2512.09258
- arXiv:2512.09258v1 Announce Type: new
-Abstract: This paper introduces ROI-Packing, an efficient image compression method tailored specifically for machine vision. By prioritizing regions of interest (ROI) critical to end-task accuracy and packing them efficiently while discarding less relevant data, ROI-Packing achieves significant compression efficiency without requiring retraining or fine-tuning of end-task models. Comprehensive evaluations across five datasets and two popular tasks-object detection and instance segmentation-demonstrate up to a 44.10% reduction in bitrate without compromising end-task accuracy, along with an 8.88 % improvement in accuracy at the same bitrate compared to the state-of-the-art Versatile Video Coding (VVC) codec standardized by the Moving Picture Experts Group (MPEG).
- oai:arXiv.org:2512.09258v1
+ Multi-dimensional Preference Alignment by Conditioning Reward Itself
+ https://arxiv.org/abs/2512.10237
+ arXiv:2512.10237v1 Announce Type: new
+Abstract: Reinforcement Learning from Human Feedback has emerged as a standard for aligning diffusion models. However, we identify a fundamental limitation in the standard DPO formulation because it relies on the Bradley-Terry model to aggregate diverse evaluation axes like aesthetic quality and semantic alignment into a single scalar reward. This aggregation creates a reward conflict where the model is forced to unlearn desirable features of a specific dimension if they appear in a globally non-preferred sample. To address this issue, we propose Multi Reward Conditional DPO (MCDPO). This method resolves reward conflicts by introducing a disentangled Bradley-Terry objective. MCDPO explicitly injects a preference outcome vector as a condition during training, which allows the model to learn the correct optimization direction for each reward axis independently within a single network. We further introduce dimensional reward dropout to ensure balanced optimization across dimensions. Extensive experiments on Stable Diffusion 1.5 and SDXL demonstrate that MCDPO achieves superior performance on benchmarks. Notably, our conditional framework enables dynamic and multiple-axis control at inference time using Classifier Free Guidance to amplify specific reward dimensions without additional training or external reward models.
+ oai:arXiv.org:2512.10237v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- 10.1109/MIPR67560.2025.00044
- International Conference on Multimedia Information Processing and Retrieval (MIPR), San Jose, CA, USA, 2025, pp. 233-238
- Md Eimran Hossain Eimon, Alena Krause, Ashan Perera, Juan Merlos, Hari Kalva, Velibor Adzic, Borko Furht
-
-
- From Forecast to Action: Uncertainty-Aware UAV Deployment for Ocean Drifter Recovery
- https://arxiv.org/abs/2512.09260
- arXiv:2512.09260v1 Announce Type: new
-Abstract: We present a novel predict-then-optimize framework for maritime search operations that integrates trajectory forecasting with UAV deployment optimization-an end-to-end approach not addressed in prior work. A large language model predicts the drifter's trajectory, and spatial uncertainty is modeled using Gaussian-based particle sampling. Unlike traditional static deployment methods, we dynamically adapt UAV detection radii based on distance and optimize their placement using meta-heuristic algorithms. Experiments on real-world data from the Korean coastline demonstrate that our method, particularly the repair mechanism designed for this problem, significantly outperforms the random search baselines. This work introduces a practical and robust integration of trajectory prediction and spatial optimization for intelligent maritime rescue.
- oai:arXiv.org:2512.09260v1
- cs.NE
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jingeun Kim, Yong-Hyuk Kim, Yourim Yoon
-
-
- FLARE v2: A Recursive Framework for Program Comprehension Across Languages and Levels of Abstraction
- https://arxiv.org/abs/2512.09261
- arXiv:2512.09261v1 Announce Type: new
-Abstract: Building on the classroom framework reported in Heath et al. (2025), this paper proposes FLARE v2 as a recursive, semiotically informed account of how program meaning is constructed. It reinterprets the descriptive tiers of FLARE v1 as instances of a single generative operation: identify elements (characterised by the four properties Receives, Sends, Effects, Shares); analyse their bindings along two dimensions (Causal-Temporal and Communicative); and recognise the new element that emerges. The Causal-Temporal dimension encompasses three subtypes - Sequential, Branch, and Event - that together account for control flow in both procedural and event-driven environments. A Compositional Ladder provides a visual parallel between literacy progressions and programming structures, illustrating how recursive composition operates from blocks and statements through segments, systems, and services. The framework aims to address conceptual and cognitive-load limitations reported in FLARE v1 and is situated within semiotic and program-comprehension theory. FLARE v2 is presented as a conceptual lens with potential implications for pedagogy and curriculum design; implementation and empirical evaluation are left for future work.
- oai:arXiv.org:2512.09261v1
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Justin Heath
+ Jiho Jang, Jinyoung Kim, Kyungjune Baek, Nojun Kwak
- FBA$^2$D: Frequency-based Black-box Attack for AI-generated Image Detection
- https://arxiv.org/abs/2512.09264
- arXiv:2512.09264v1 Announce Type: new
-Abstract: The prosperous development of Artificial Intelligence-Generated Content (AIGC) has brought people's anxiety about the spread of false information on social media. Designing detectors for filtering is an effective defense method, but most detectors will be compromised by adversarial samples. Currently, most studies exposing AIGC security issues assume information on model structure and data distribution. In real applications, attackers query and interfere with models that provide services in the form of application programming interfaces (APIs), which constitutes the black-box decision-based attack paradigm. However, to the best of our knowledge, decision-based attacks on AIGC detectors remain unexplored. In this study, we propose \textbf{FBA$^2$D}: a frequency-based black-box attack method for AIGC detection to fill the research gap. Motivated by frequency-domain discrepancies between generated and real images, we develop a decision-based attack that leverages the Discrete Cosine Transform (DCT) for fine-grained spectral partitioning and selects frequency bands as query subspaces, improving both query efficiency and image quality. Moreover, attacks on AIGC detectors should mitigate initialization failures, preserve image quality, and operate under strict query budgets. To address these issues, we adopt an ``adversarial example soup'' method, averaging candidates from successive surrogate iterations and using the result as the initialization to accelerate the query-based attack. The empirical study on the Synthetic LSUN dataset and GenImage dataset demonstrate the effectiveness of our prosed method. This study shows the urgency of addressing practical AIGC security problems.
- oai:arXiv.org:2512.09264v1
- cs.CR
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Studying and Automating Issue Resolution for Software Quality
+ https://arxiv.org/abs/2512.10238
+ arXiv:2512.10238v1 Announce Type: new
+Abstract: Effective issue resolution is crucial for maintaining software quality. Yet developers frequently encounter challenges such as low-quality issue reports, limited understanding of real-world workflows, and a lack of automated support. This research aims to address these challenges through three complementary directions. First, we enhance issue report quality by proposing techniques that leverage LLM reasoning and application-specific information. Second, we empirically characterize developer workflows in both traditional and AI-augmented systems. Third, we automate cognitively demanding resolution tasks, including buggy UI localization and solution identification, through ML, DL, and LLM-based approaches. Together, our work delivers empirical insights, practical tools, and automated methods to advance AI-driven issue resolution, supporting more maintainable and high-quality software systems.
+ oai:arXiv.org:2512.10238v1
+ cs.SE
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Xiaojing Chen, Dan Li, Lijun Peng, Jun Yan{\L}etter, Zhiqing Guo, Junyang Chen, Xiao Lan, Zhongjie Ba, Yunfeng Diao{\L}etter
+ http://creativecommons.org/licenses/by/4.0/
+ Antu Saha
- Contrastive Learning for Semi-Supervised Deep Regression with Generalized Ordinal Rankings from Spectral Seriation
- https://arxiv.org/abs/2512.09267
- arXiv:2512.09267v1 Announce Type: new
-Abstract: Contrastive learning methods enforce label distance relationships in feature space to improve representation capability for regression models. However, these methods highly depend on label information to correctly recover ordinal relationships of features, limiting their applications to semi-supervised regression. In this work, we extend contrastive regression methods to allow unlabeled data to be used in the semi-supervised setting, thereby reducing the dependence on costly annotations. Particularly we construct the feature similarity matrix with both labeled and unlabeled samples in a mini-batch to reflect inter-sample relationships, and an accurate ordinal ranking of involved unlabeled samples can be recovered through spectral seriation algorithms if the level of error is within certain bounds. The introduction of labeled samples above provides regularization of the ordinal ranking with guidance from the ground-truth label information, making the ranking more reliable. To reduce feature perturbations, we further utilize the dynamic programming algorithm to select robust features for the matrix construction. The recovered ordinal relationship is then used for contrastive learning on unlabeled samples, and we thus allow more data to be used for feature representation learning, thereby achieving more robust results. The ordinal rankings can also be used to supervise predictions on unlabeled samples, serving as an additional training signal. We provide theoretical guarantees and empirical verification through experiments on various datasets, demonstrating that our method can surpass existing state-of-the-art semi-supervised deep regression methods. Our code have been released on https://github.com/xmed-lab/CLSS.
- oai:arXiv.org:2512.09267v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ The Circulate and Recapture Dynamic of Fan Mobility in Agency-Affiliated VTuber Networks
+ https://arxiv.org/abs/2512.10240
+ arXiv:2512.10240v1 Announce Type: new
+Abstract: VTuber agencies -- multichannel networks (MCNs) that bundle Virtual YouTubers (VTubers) on YouTube -- curate portfolios of channels and coordinate programming, cross appearances, and branding in the live-streaming VTuber ecosystem. It remains unclear whether affiliation binds fans to a single channel or instead encourages movement within a portfolio that buffers exit, and how these micro level dynamics relate to meso level audience overlap. This study examines how affiliation shapes short horizon viewer trajectories and the organization of audience overlap networks by contrasting agency affiliated and independent VTubers. Using a large, multiyear, fan centered panel of VTuber live stream engagement on YouTube, we construct monthly audience overlap between creators with a similarity measure that is robust to audience size asymmetries. At the micro level, we track retention, changes in the primary creator watched (oshi), and inactivity; at the meso level, we compare structural properties of affiliation specific subgraphs and visualize viewer state transitions. The analysis identifies a pattern of loose mobility: fans tend to remain active while reallocating attention within the same affiliation type, with limited leakage across affiliation type. Network results indicate convergence in global overlap while local neighborhoods within affiliated subgraphs remain persistently denser. Flow diagrams reveal circulate and recapture dynamics that stabilize participation without relying on single channel lock in. We contribute a reusable measurement framework for VTuber live streaming that links micro level trajectories to meso level organization and informs research on creator labor, influencer marketing, and platform governance on video platforms. We do not claim causal effects; the observed regularities are consistent with proximity engineered by VTuber agencies and coordinated recapture.
+ oai:arXiv.org:2512.10240v1
+ cs.SI
+ cs.DL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ce Wang, Weihang Dai, Hanru Bai, Xiaomeng Li
+ Tomohiro Murakami, Mitsuo Yoshida
- Goal inference with Rao-Blackwellized Particle Filters
- https://arxiv.org/abs/2512.09269
- arXiv:2512.09269v1 Announce Type: new
-Abstract: Inferring the eventual goal of a mobile agent from noisy observations of its trajectory is a fundamental estimation problem. We initiate the study of such intent inference using a variant of a Rao-Blackwellized Particle Filter (RBPF), subject to the assumption that the agent's intent manifests through closed-loop behavior with a state-of-the-art provable practical stability property. Leveraging the assumed closed-form agent dynamics, the RBPF analytically marginalizes the linear-Gaussian substructure and updates particle weights only, improving sample efficiency over a standard particle filter. Two difference estimators are introduced: a Gaussian mixture model using the RBPF weights and a reduced version confining the mixture to the effective sample. We quantify how well the adversary can recover the agent's intent using information-theoretic leakage metrics and provide computable lower bounds on the Kullback-Leibler (KL) divergence between the true intent distribution and RBPF estimates via Gaussian-mixture KL bounds. We also provide a bound on the difference in performance between the two estimators, highlighting the fact that the reduced estimator performs almost as well as the complete one. Experiments illustrate fast and accurate intent recovery for compliant agents, motivating future work on designing intent-obfuscating controllers.
- oai:arXiv.org:2512.09269v1
+ Solving Semi-Supervised Few-Shot Learning from an Auto-Annotation Perspective
+ https://arxiv.org/abs/2512.10244
+ arXiv:2512.10244v1 Announce Type: new
+Abstract: Semi-supervised few-shot learning (SSFSL) formulates real-world applications like ''auto-annotation'', as it aims to learn a model over a few labeled and abundant unlabeled examples to annotate the unlabeled ones. Despite the availability of powerful open-source Vision-Language Models (VLMs) and their pretraining data, the SSFSL literature largely neglects these open-source resources. In contrast, the related area few-shot learning (FSL) has already exploited them to boost performance. Arguably, to achieve auto-annotation in the real world, SSFSL should leverage such open-source resources. To this end, we start by applying established SSL methods to finetune a VLM. Counterintuitively, they significantly underperform FSL baselines. Our in-depth analysis reveals the root cause: VLMs produce rather ''flat'' distributions of softmax probabilities. This results in zero utilization of unlabeled data and weak supervision signals. We address this issue with embarrassingly simple techniques: classifier initialization and temperature tuning. They jointly increase the confidence scores of pseudo-labels, improving the utilization rate of unlabeled data, and strengthening supervision signals. Building on this, we propose: Stage-Wise Finetuning with Temperature Tuning (SWIFT), which enables existing SSL methods to effectively finetune a VLM on limited labeled data, abundant unlabeled data, and task-relevant but noisy data retrieved from the VLM's pretraining set. Extensive experiments on five SSFSL benchmarks show that SWIFT outperforms recent FSL and SSL methods by $\sim$5 accuracy points. SWIFT even rivals supervised learning, which finetunes VLMs with the unlabeled data being labeled with ground truth!
+ oai:arXiv.org:2512.10244v1
+ cs.CVcs.LG
- cs.IR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Yixuan Wang, Dan P. Guralnik, Warren E. Dixon
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Tian Liu, Anwesha Basu, James Caverlee, Shu Kong
- MoRel: Long-Range Flicker-Free 4D Motion Modeling via Anchor Relay-based Bidirectional Blending with Hierarchical Densification
- https://arxiv.org/abs/2512.09270
- arXiv:2512.09270v1 Announce Type: new
-Abstract: Recent advances in 4D Gaussian Splatting (4DGS) have extended the high-speed rendering capability of 3D Gaussian Splatting (3DGS) into the temporal domain, enabling real-time rendering of dynamic scenes. However, one of the major remaining challenges lies in modeling long-range motion-contained dynamic videos, where a naive extension of existing methods leads to severe memory explosion, temporal flickering, and failure to handle appearing or disappearing occlusions over time. To address these challenges, we propose a novel 4DGS framework characterized by an Anchor Relay-based Bidirectional Blending (ARBB) mechanism, named MoRel, which enables temporally consistent and memory-efficient modeling of long-range dynamic scenes. Our method progressively constructs locally canonical anchor spaces at key-frame time index and models inter-frame deformations at the anchor level, enhancing temporal coherence. By learning bidirectional deformations between KfA and adaptively blending them through learnable opacity control, our approach mitigates temporal discontinuities and flickering artifacts. We further introduce a Feature-variance-guided Hierarchical Densification (FHD) scheme that effectively densifies KfA's while keeping rendering quality, based on an assigned level of feature-variance. To effectively evaluate our model's capability to handle real-world long-range 4D motion, we newly compose long-range 4D motion-contained dataset, called SelfCap$_{\text{LR}}$. It has larger average dynamic motion magnitude, captured at spatially wider spaces, compared to previous dynamic video datasets. Overall, our MoRel achieves temporally coherent and flicker-free long-range 4D reconstruction while maintaining bounded memory usage, demonstrating both scalability and efficiency in dynamic Gaussian-based representations.
- oai:arXiv.org:2512.09270v1
+ RobustSora: De-Watermarked Benchmark for Robust AI-Generated Video Detection
+ https://arxiv.org/abs/2512.10248
+ arXiv:2512.10248v1 Announce Type: new
+Abstract: The proliferation of AI-generated video technologies poses challenges to information integrity. While recent benchmarks advance AIGC video detection, they overlook a critical factor: many state-of-the-art generative models embed digital watermarks in outputs, and detectors may partially rely on these patterns. To evaluate this influence, we present RobustSora, the benchmark designed to assess watermark robustness in AIGC video detection. We systematically construct a dataset of 6,500 videos comprising four types: Authentic-Clean (A-C), Authentic-Spoofed with fake watermarks (A-S), Generated-Watermarked (G-W), and Generated-DeWatermarked (G-DeW). Our benchmark introduces two evaluation tasks: Task-I tests performance on watermark-removed AI videos, while Task-II assesses false alarm rates on authentic videos with fake watermarks. Experiments with ten models spanning specialized AIGC detectors, transformer architectures, and MLLM approaches reveal performance variations of 2-8pp under watermark manipulation. Transformer-based models show consistent moderate dependency (6-8pp), while MLLMs exhibit diverse patterns (2-8pp). These findings indicate partial watermark dependency and highlight the need for watermark-aware training strategies. RobustSora provides essential tools to advance robust AIGC detection research.
+ oai:arXiv.org:2512.10248v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sangwoon Kwak, Weeyoung Kwon, Jun Young Jeong, Geonho Kim, Won-Sik Cheong, Jihyong Oh
+ Zhuo Wang, Xiliang Liu, Ligang Sun
- LongT2IBench: A Benchmark for Evaluating Long Text-to-Image Generation with Graph-structured Annotations
- https://arxiv.org/abs/2512.09271
- arXiv:2512.09271v1 Announce Type: new
-Abstract: The increasing popularity of long Text-to-Image (T2I) generation has created an urgent need for automatic and interpretable models that can evaluate the image-text alignment in long prompt scenarios. However, the existing T2I alignment benchmarks predominantly focus on short prompt scenarios and only provide MOS or Likert scale annotations. This inherent limitation hinders the development of long T2I evaluators, particularly in terms of the interpretability of alignment. In this study, we contribute LongT2IBench, which comprises 14K long text-image pairs accompanied by graph-structured human annotations. Given the detail-intensive nature of long prompts, we first design a Generate-Refine-Qualify annotation protocol to convert them into textual graph structures that encompass entities, attributes, and relations. Through this transformation, fine-grained alignment annotations are achieved based on these granular elements. Finally, the graph-structed annotations are converted into alignment scores and interpretations to facilitate the design of T2I evaluation models. Based on LongT2IBench, we further propose LongT2IExpert, a LongT2I evaluator that enables multi-modal large language models (MLLMs) to provide both quantitative scores and structured interpretations through an instruction-tuning process with Hierarchical Alignment Chain-of-Thought (CoT). Extensive experiments and comparisons demonstrate the superiority of the proposed LongT2IExpert in alignment evaluation and interpretation. Data and code have been released in https://welldky.github.io/LongT2IBench-Homepage/.
- oai:arXiv.org:2512.09271v1
+ THE-Pose: Topological Prior with Hybrid Graph Fusion for Estimating Category-Level 6D Object Pose
+ https://arxiv.org/abs/2512.10251
+ arXiv:2512.10251v1 Announce Type: new
+Abstract: Category-level object pose estimation requires both global context and local structure to ensure robustness against intra-class variations. However, 3D graph convolution (3D-GC) methods only focus on local geometry and depth information, making them vulnerable to complex objects and visual ambiguities. To address this, we present THE-Pose, a novel category-level 6D pose estimation framework that leverages a topological prior via surface embedding and hybrid graph fusion. Specifically, we extract consistent and invariant topological features from the image domain, effectively overcoming the limitations inherent in existing 3D-GC based methods. Our Hybrid Graph Fusion (HGF) module adaptively integrates the topological features with point-cloud features, seamlessly bridging 2D image context and 3D geometric structure. These fused features ensure stability for unseen or complicated objects, even under significant occlusions. Extensive experiments on the REAL275 dataset show that THE-Pose achieves a 35.8% improvement over the 3D-GC baseline (HS-Pose) and surpasses the previous state-of-the-art by 7.2% across all key metrics. The code is avaialbe on https://github.com/EHxxx/THE-Pose
+ oai:arXiv.org:2512.10251v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhichao Yang, Tianjiao Gu, Jianjie Wang, Feiyu Lin, Xiangfei Sheng, Pengfei Chen, Leida Li
+ Eunho Lee, Chaehyeon Song, Seunghoon Jeong, Ayoung Kim
- Dynamic Facial Expressions Analysis Based Parkinson's Disease Auxiliary Diagnosis
- https://arxiv.org/abs/2512.09276
- arXiv:2512.09276v1 Announce Type: new
-Abstract: Parkinson's disease (PD), a prevalent neurodegenerative disorder, significantly affects patients' daily functioning and social interactions. To facilitate a more efficient and accessible diagnostic approach for PD, we propose a dynamic facial expression analysis-based PD auxiliary diagnosis method. This method targets hypomimia, a characteristic clinical symptom of PD, by analyzing two manifestations: reduced facial expressivity and facial rigidity, thereby facilitating the diagnosis process. We develop a multimodal facial expression analysis network to extract expression intensity features during patients' performance of various facial expressions. This network leverages the CLIP architecture to integrate visual and textual features while preserving the temporal dynamics of facial expressions. Subsequently, the expression intensity features are processed and input into an LSTM-based classification network for PD diagnosis. Our method achieves an accuracy of 93.1%, outperforming other in-vitro PD diagnostic approaches. This technique offers a more convenient detection method for potential PD patients, improving their diagnostic experience.
- oai:arXiv.org:2512.09276v1
+ GDKVM: Echocardiography Video Segmentation via Spatiotemporal Key-Value Memory with Gated Delta Rule
+ https://arxiv.org/abs/2512.10252
+ arXiv:2512.10252v1 Announce Type: new
+Abstract: Accurate segmentation of cardiac chambers in echocardiography sequences is crucial for the quantitative analysis of cardiac function, aiding in clinical diagnosis and treatment. The imaging noise, artifacts, and the deformation and motion of the heart pose challenges to segmentation algorithms. While existing methods based on convolutional neural networks, Transformers, and space-time memory networks have improved segmentation accuracy, they often struggle with the trade-off between capturing long-range spatiotemporal dependencies and maintaining computational efficiency with fine-grained feature representation. In this paper, we introduce GDKVM, a novel architecture for echocardiography video segmentation. The model employs Linear Key-Value Association (LKVA) to effectively model inter-frame correlations, and introduces Gated Delta Rule (GDR) to efficiently store intermediate memory states. Key-Pixel Feature Fusion (KPFF) module is designed to integrate local and global features at multiple scales, enhancing robustness against boundary blurring and noise interference. We validated GDKVM on two mainstream echocardiography video datasets (CAMUS and EchoNet-Dynamic) and compared it with various state-of-the-art methods. Experimental results show that GDKVM outperforms existing approaches in terms of segmentation accuracy and robustness, while ensuring real-time performance. Code is available at https://github.com/wangrui2025/GDKVM.
+ oai:arXiv.org:2512.10252v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xiaochen Huang, Xiaochen Bi, Cuihua Lv, Xin Wang, Haoyan Zhang, Wenjing Jiang, Xin Ma, Yibin Li
-
-
- Efficient MoE Serving in the Memory-Bound Regime: Balance Activated Experts, Not Tokens
- https://arxiv.org/abs/2512.09277
- arXiv:2512.09277v1 Announce Type: new
-Abstract: Expert Parallelism (EP) permits Mixture of Experts (MoE) models to scale beyond a single GPU. To address load imbalance across GPUs in EP, existing approaches aim to balance the number of tokens each GPU processes. Surprisingly, we find that this objective degrades performance rather than improving it when processing is memory-bound - a common occurrence in MoE serving, especially in the decode phase. Our analysis reveals that balancing the number of tokens processed per GPU increases the number of activated experts, exacerbating memory pressure in the memory-bound regime.
- We propose Minimum Expert Token ROuting, a novel token-routing algorithm for high-performance expert-parallel MoE serving in the memory-bound regime that balances the number of activated experts per GPU rather than token counts. METRO achieves near-optimal routing quality with minimal computational overhead by jointly optimizing algorithmic efficiency and leveraging the GPU's parallel processing power. To guarantee routing quality, METRO also employs a novel allGather scheme to gather global top-k knowledge, which has minimal overhead compared to conventional allToAll. Our evaluation of METRO against EPLB on both real systems (vLLM over 8 A100 GPUs) and a proprietary simulator (8-16 B200 GPUs) shows that METRO reduces decode latency by 11 - 22%, and total token throughput by 3 - 21% for Qwen3 and DeepSeek-V3 serving, where prefill and decode phases are co-deployed. In addition, by trading latency headroom for throughput, METRO improves decode throughput by up to 4.11x over EPLB at a fixed decode SLO.
- oai:arXiv.org:2512.09277v1
- cs.DC
- cs.AR
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yanpeng Yu, Haiyue Ma, Krish Agarwal, Nicolai Oswald, Qijing Huang, Hugo Linsenmaier, Chunhui Mei, Ritchie Zhao, Ritika Borkar, Bita Darvish Rouhani, David Nellans, Ronny Krashinsky, Anurag Khandelwal
+ Rui Wang, Yimu Sun, Jingxing Guo, Huisi Wu, Jing Qin
- LoGoColor: Local-Global 3D Colorization for 360{\deg} Scenes
- https://arxiv.org/abs/2512.09278
- arXiv:2512.09278v1 Announce Type: new
-Abstract: Single-channel 3D reconstruction is widely used in fields such as robotics and medical imaging. While this line of work excels at reconstructing 3D geometry, the outputs are not colored 3D models, thus 3D colorization is required for visualization. Recent 3D colorization studies address this problem by distilling 2D image colorization models. However, these approaches suffer from an inherent inconsistency of 2D image models. This results in colors being averaged during training, leading to monotonous and oversimplified results, particularly in complex 360{\deg} scenes. In contrast, we aim to preserve color diversity by generating a new set of consistently colorized training views, thereby bypassing the averaging process. Nevertheless, eliminating the averaging process introduces a new challenge: ensuring strict multi-view consistency across these colorized views. To achieve this, we propose LoGoColor, a pipeline designed to preserve color diversity by eliminating this guidance-averaging process with a `Local-Global' approach: we partition the scene into subscenes and explicitly tackle both inter-subscene and intra-subscene consistency using a fine-tuned multi-view diffusion model. We demonstrate that our method achieves quantitatively and qualitatively more consistent and plausible 3D colorization on complex 360{\deg} scenes than existing methods, and validate its superior color diversity using a novel Color Diversity Index.
- oai:arXiv.org:2512.09278v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Reject or Not?: A Benchmark for Voice Assistant Query Rejection in Smart Home Scenario and an Improved Method Based on LLMs
+ https://arxiv.org/abs/2512.10257
+ arXiv:2512.10257v1 Announce Type: new
+Abstract: In smart-home voice assistant scenario, deciding whether to accept or reject a user query is the first step before any downstream processing. To address the limited query-rejection capability of current voice assistants, this paper presents the first Chinese-oriented open-source benchmark and evaluation suite for smart homes, together with a personalized query-rejection method based on large language models. On the data side, we construct the first multimodal query-rejection dataset tailored for domestic scenarios, containing 11,913 manually labeled text-speech pairs that systematically cover twelve typical dialogue types (e.g., chit-chat, non-human sounds, valid commands, ambiguous references, device-irrelevant requests). Fine-grained labels, conversational context and multi-turn information are provided to support both zero-shot and fine-tuning evaluations across language and multimodal large models. On the method side, we propose a three-tier collaborative architecture: first, a Qwen-2.5-3B adapter fine-tuned to model family-agnostic semantic boundaries; second, a dynamic household-level historical dialogue module to capture personalized habits; third, a household-specific RAG knowledge base that explicitly memorizes and revises past false-rejection cases. Experiments show that the proposed approach significantly outperforms zero-shot and fine-tuned general LLMs on the constructed dataset, with pronounced gains in rejection accuracy for family-specific expressions and complex multi-turn scenarios. This work provides a reproducible data foundation, evaluation standard and extensible technical framework for reliability research in smart-home voice interaction.
+ oai:arXiv.org:2512.10257v1
+ cs.HC
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Yeonjin Chang, Juhwan Cho, Seunghyeon Seo, Wonsik Shin, Nojun Kwak
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Huichao Men, Yizhen Hu, Yingyang He, Yu Gao, Xiaofeng Mou, Yi Xu
- A Modular Lean 4 Framework for Confluence and Strong Normalization of Lambda Calculi with Products and Sums
- https://arxiv.org/abs/2512.09280
- arXiv:2512.09280v1 Announce Type: new
-Abstract: We present Metatheory, a comprehensive library for programming language foundations in Lean 4. The library features a modular framework for proving confluence of abstract rewriting systems using three classical proof techniques: the diamond property, Newmans lemma, and the Hindley-Rosen lemma. These are instantiated across six case studies including untyped lambda calculus, combinatory logic, term rewriting, simply typed lambda calculus, and STLC with products and sums. All theorems are fully mechanized with zero axioms or sorry statements. We provide complete proofs of de Bruijn substitution infrastructure and demonstrate strong normalization via logical relations. To our knowledge, this is the first comprehensive confluence and normalization framework for Lean 4.
- oai:arXiv.org:2512.09280v1
- cs.LO
- Thu, 11 Dec 2025 00:00:00 -0500
+ R^2-HGP: A Double-Regularized Gaussian Process for Heterogeneous Transfer Learning
+ https://arxiv.org/abs/2512.10258
+ arXiv:2512.10258v1 Announce Type: new
+Abstract: Multi-output Gaussian process (MGP) models have attracted significant attention for their flexibility and uncertainty-quantification capabilities, and have been widely adopted in multi-source transfer learning scenarios due to their ability to capture inter-task correlations. However, they still face several challenges in transfer learning. First, the input spaces of the source and target domains are often heterogeneous, which makes direct knowledge transfer difficult. Second, potential prior knowledge and physical information are typically ignored during heterogeneous transfer, hampering the utilization of domain-specific insights and leading to unstable mappings. Third, inappropriate information sharing among target and sources can easily lead to negative transfer. Traditional models fail to address these issues in a unified way. To overcome these limitations, this paper proposes a Double-Regularized Heterogeneous Gaussian Process framework (R^2-HGP). Specifically, a trainable prior probability mapping model is first proposed to align the heterogeneous input domains. The resulting aligned inputs are treated as latent variables, upon which a multi-source transfer GP model is constructed and the entire structure is integrated into a novel conditional variational autoencoder (CVAE) based framework. Physical insights is further incorporated as a regularization term to ensure that the alignment results adhere to known physical knowledge. Next, within the multi-source transfer GP model, a sparsity penalty is imposed on the transfer coefficients, enabling the model to adaptively select the most informative source outputs and suppress negative transfer. Extensive simulations and real-world engineering case studies validate the effectiveness of our R^2-HGP, demonstrating consistent superiority over state-of-the-art benchmarks across diverse evaluation metrics.
+ oai:arXiv.org:2512.10258v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Arthur Ramos, Anjolina Oliveira, Ruy de Queiroz, Tiago de Veras
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Duo Wang, Xinming Wang, Chao Wang, Xiaowei Yue, Jianguo Wu
- Higher-order multi-scale computational method and its convergence analysis for hygro-thermo-mechanical coupling problems of quasi-periodic composite structures
- https://arxiv.org/abs/2512.09281
- arXiv:2512.09281v1 Announce Type: new
-Abstract: This paper proposes a novel higher-order multi-scale (HOMS) computational method, which is highly targeted for efficient, high-accuracy and low-computational-cost simulation of hygro-thermo-mechanical (H-T-M) coupling problems in quasi-periodic composite structures. The first innovation of this work is that the establishment of the high-accuracy multi-scale model incorporating the higher-order correction terms for H-T-M coupling problems of quasi-periodic composite structures. The second innovation of this work is that the error analyses in the point-wise and integral senses are rigorously derived for multi-scale asymptotic solutions. Especially from the point-wise error analysis, the primary impetus for current study to develop the HOMS approach for quasi-periodic composite structures is illustrated. Furthermore, an high-accuracy multi-scale numerical algorithm is developed based on finite element method, while corresponding convergent analysis is also obtained. Finally, extensive numerical experiments are conducted to validate the computational performance of the proposed HOMS computational approach, demonstrating not only exceptional numerical accuracy, but also reduced computational cost.
- oai:arXiv.org:2512.09281v1
+ Convergence analysis of contrast source inversion type methods for acoustic inverse medium scattering problems
+ https://arxiv.org/abs/2512.10260
+ arXiv:2512.10260v1 Announce Type: new
+Abstract: The contrast source inversion (CSI) method and the subspace-based optimization method (SOM) are first proposed in 1997 and 2009, respectively, and subsequently modified. The two methods and their variants share several properties and thus are called the CSI-type methods. The CSI-type methods are efficient and popular methods for solving inverse medium scattering problems, but their rigorous convergence remains an open problem. In this paper, we propose two iteratively regularized CSI-type (IRCSI-type) methods with a novel $\ell_1$ proximal term as the iteratively regularized term: the iteratively regularized CSI (IRCSI) method and the iteratively regularized SOM (IRSOM) method, which have a similar computation complexity to the original CSI and SOM methods, respectively, and prove their global convergence under natural and weak conditions on the original objective function. To the best of our knowledge, this is the first convergence result for iterative methods of solving nonlinear inverse scattering problems with a fixed frequency. The convergence and performance of the two IRCSI-type algorithms are illustrated by numerical experiments.
+ oai:arXiv.org:2512.10260v1math.NAcs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hao Dong, Yifei Ding, Jiale Linghu, Yufeng Nie, Yaochuang Han
+ Qiao Hu, Bo Zhang, Haiwen Zhang
- FoundIR-v2: Optimizing Pre-Training Data Mixtures for Image Restoration Foundation Model
- https://arxiv.org/abs/2512.09282
- arXiv:2512.09282v1 Announce Type: new
-Abstract: Recent studies have witnessed significant advances in image restoration foundation models driven by improvements in the scale and quality of pre-training data. In this work, we find that the data mixture proportions from different restoration tasks are also a critical factor directly determining the overall performance of all-in-one image restoration models. To this end, we propose a high-capacity diffusion-based image restoration foundation model, FoundIR-v2, which adopts a data equilibrium scheduling paradigm to dynamically optimize the proportions of mixed training datasets from different tasks. By leveraging the data mixing law, our method ensures a balanced dataset composition, enabling the model to achieve consistent generalization and comprehensive performance across diverse tasks. Furthermore, we introduce an effective Mixture-of-Experts (MoE)-driven scheduler into generative pre-training to flexibly allocate task-adaptive diffusion priors for each restoration task, accounting for the distinct degradation forms and levels exhibited by different tasks. Extensive experiments demonstrate that our method can address over 50 sub-tasks across a broader scope of real-world scenarios and achieves favorable performance against state-of-the-art approaches.
- oai:arXiv.org:2512.09282v1
+ VLM-NCD:Novel Class Discovery with Vision-Based Large Language Models
+ https://arxiv.org/abs/2512.10262
+ arXiv:2512.10262v1 Announce Type: new
+Abstract: Novel Class Discovery aims to utilise prior knowledge of known classes to classify and discover unknown classes from unlabelled data. Existing NCD methods for images primarily rely on visual features, which suffer from limitations such as insufficient feature discriminability and the long-tail distribution of data. We propose LLM-NCD, a multimodal framework that breaks this bottleneck by fusing visual-textual semantics and prototype guided clustering. Our key innovation lies in modelling cluster centres and semantic prototypes of known classes by jointly optimising known class image and text features, and a dualphase discovery mechanism that dynamically separates known or novel samples via semantic affinity thresholds and adaptive clustering. Experiments on the CIFAR-100 dataset show that compared to the current methods, this method achieves up to 25.3% improvement in accuracy for unknown classes. Notably, our method shows unique resilience to long tail distributions, a first in NCD literature.
+ oai:arXiv.org:2512.10262v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xiang Chen, Jinshan Pan, Jiangxin Dong, Jian Yang, Jinhui Tang
-
-
- UPETrack: Unidirectional Position Estimation for Tracking Occluded Deformable Linear Objects
- https://arxiv.org/abs/2512.09283
- arXiv:2512.09283v1 Announce Type: new
-Abstract: Real-time state tracking of Deformable Linear Objects (DLOs) is critical for enabling robotic manipulation of DLOs in industrial assembly, medical procedures, and daily-life applications. However, the high-dimensional configuration space, nonlinear dynamics, and frequent partial occlusions present fundamental barriers to robust real-time DLO tracking. To address these limitations, this study introduces UPETrack, a geometry-driven framework based on Unidirectional Position Estimation (UPE), which facilitates tracking without the requirement for physical modeling, virtual simulation, or visual markers. The framework operates in two phases: (1) visible segment tracking is based on a Gaussian Mixture Model (GMM) fitted via the Expectation Maximization (EM) algorithm, and (2) occlusion region prediction employing UPE algorithm we proposed. UPE leverages the geometric continuity inherent in DLO shapes and their temporal evolution patterns to derive a closed-form positional estimator through three principal mechanisms: (i) local linear combination displacement term, (ii) proximal linear constraint term, and (iii) historical curvature term. This analytical formulation allows efficient and stable estimation of occluded nodes through explicit linear combinations of geometric components, eliminating the need for additional iterative optimization. Experimental results demonstrate that UPETrack surpasses two state-of-the-art tracking algorithms, including TrackDLO and CDCPD2, in both positioning accuracy and computational efficiency.
- oai:arXiv.org:2512.09283v1
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Fan Wu, Chenguang Yang, Haibin Yang, Shuo Wang, Yanrui Xu, Xing Zhou, Meng Gao, Yaoqi Xian, Zhihong Zhu, Shifeng Huang
+ Yuetong Su, Baoguo Wei, Xinyu Wang, Xu Li, Lixin Li
- Who Speaks What from Afar: Eavesdropping In-Person Conversations via mmWave Sensing
- https://arxiv.org/abs/2512.09285
- arXiv:2512.09285v1 Announce Type: new
-Abstract: Multi-participant meetings occur across various domains, such as business negotiations and medical consultations, during which sensitive information like trade secrets, business strategies, and patient conditions is often discussed. Previous research has demonstrated that attackers with mmWave radars outside the room can overhear meeting content by detecting minute speech-induced vibrations on objects. However, these eavesdropping attacks cannot differentiate which speech content comes from which person in a multi-participant meeting, leading to potential misunderstandings and poor decision-making. In this paper, we answer the question ``who speaks what''. By leveraging the spatial diversity introduced by ubiquitous objects, we propose an attack system that enables attackers to remotely eavesdrop on in-person conversations without requiring prior knowledge, such as identities, the number of participants, or seating arrangements. Since participants in in-person meetings are typically seated at different locations, their speech induces distinct vibration patterns on nearby objects. To exploit this, we design a noise-robust unsupervised approach for distinguishing participants by detecting speech-induced vibration differences in the frequency domain. Meanwhile, a deep learning-based framework is explored to combine signals from objects for speech quality enhancement. We validate the proof-of-concept attack on speech classification and signal enhancement through extensive experiments. The experimental results show that our attack can achieve the speech classification accuracy of up to $0.99$ with several participants in a meeting room. Meanwhile, our attack demonstrates consistent speech quality enhancement across all real-world scenarios, including different distances between the radar and the objects.
- oai:arXiv.org:2512.09285v1
+ MR-FlowDPO: Multi-Reward Direct Preference Optimization for Flow-Matching Text-to-Music Generation
+ https://arxiv.org/abs/2512.10264
+ arXiv:2512.10264v1 Announce Type: new
+Abstract: A key challenge in music generation models is their lack of direct alignment with human preferences, as music evaluation is inherently subjective and varies widely across individuals. We introduce MR-FlowDPO, a novel approach that enhances flow-matching-based music generation models - a major class of modern music generative models, using Direct Preference Optimization (DPO) with multiple musical rewards. The rewards are crafted to assess music quality across three key dimensions: text alignment, audio production quality, and semantic consistency, utilizing scalable off-the-shelf models for each reward prediction. We employ these rewards in two ways: (i) By constructing preference data for DPO and (ii) by integrating the rewards into text prompting. To address the ambiguity in musicality evaluation, we propose a novel scoring mechanism leveraging semantic self-supervised representations, which significantly improves the rhythmic stability of generated music. We conduct an extensive evaluation using a variety of music-specific objective metrics as well as a human study. Results show that MR-FlowDPO significantly enhances overall music generation quality and is consistently preferred over highly competitive baselines in terms of audio quality, text alignment, and musicality. Our code is publicly available at https://github.com/lonzi/mrflow_dpo; Samples are provided in our demo page at https://lonzi.github.io/mr_flowdpo_demopage/.
+ oai:arXiv.org:2512.10264v1cs.SD
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shaoying Wang, Hansong Zhou, Yukun Yuan, Xiaonan Zhang
+ http://creativecommons.org/licenses/by/4.0/
+ Alon Ziv, Sanyuan Chen, Andros Tjandra, Yossi Adi, Wei-Ning Hsu, Bowen Shi
- Fast operator learning for mapping correlations
- https://arxiv.org/abs/2512.09286
- arXiv:2512.09286v1 Announce Type: new
-Abstract: We propose a fast, optimization-free method for learning the transition operators of high-dimensional Markov processes. The central idea is to perform a Galerkin projection of the transition operator to a suitable set of low-order bases that capture the correlations between the dimensions. Such a discretized operator can be obtained from moments corresponding to our choice of basis without curse of dimensionality. Furthermore, by exploiting its low-rank structure and the spatial decay of correlations, we can obtain a compressed representation with computational complexity of order $\mathcal{O}(dN)$, where $d$ is the dimensionality and $N$ is the sample size. We further theoretically analyze the approximation error of the proposed compressed representation. We numerically demonstrate that the learned operator allows efficient prediction of future events and solving high-dimensional boundary value problems. This gives rise to a simple linear algebraic method for high-dimensional rare-events simulations.
- oai:arXiv.org:2512.09286v1
- math.NA
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Long-LRM++: Preserving Fine Details in Feed-Forward Wide-Coverage Reconstruction
+ https://arxiv.org/abs/2512.10267
+ arXiv:2512.10267v1 Announce Type: new
+Abstract: Recent advances in generalizable Gaussian splatting (GS) have enabled feed-forward reconstruction of scenes from tens of input views. Long-LRM notably scales this paradigm to 32 input images at $950\times540$ resolution, achieving 360{\deg} scene-level reconstruction in a single forward pass. However, directly predicting millions of Gaussian parameters at once remains highly error-sensitive: small inaccuracies in positions or other attributes lead to noticeable blurring, particularly in fine structures such as text. In parallel, implicit representation methods such as LVSM and LaCT have demonstrated significantly higher rendering fidelity by compressing scene information into model weights rather than explicit Gaussians, and decoding RGB frames using the full transformer or TTT backbone. However, this computationally intensive decompression process for every rendered frame makes real-time rendering infeasible. These observations raise key questions: Is the deep, sequential "decompression" process necessary? Can we retain the benefits of implicit representations while enabling real-time performance? We address these questions with Long-LRM++, a model that adopts a semi-explicit scene representation combined with a lightweight decoder. Long-LRM++ matches the rendering quality of LaCT on DL3DV while achieving real-time 14 FPS rendering on an A100 GPU, overcoming the speed limitations of prior implicit methods. Our design also scales to 64 input views at the $950\times540$ resolution, demonstrating strong generalization to increased input lengths. Additionally, Long-LRM++ delivers superior novel-view depth prediction on ScanNetv2 compared to direct depth rendering from Gaussians. Extensive ablation studies validate the effectiveness of each component in the proposed framework.
+ oai:arXiv.org:2512.10267v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yuehaw Khoo, Yuguan Wang, Siyao Yang
+ Chen Ziwen, Hao Tan, Peng Wang, Zexiang Xu, Li Fuxin
- MelanomaNet: Explainable Deep Learning for Skin Lesion Classification
- https://arxiv.org/abs/2512.09289
- arXiv:2512.09289v1 Announce Type: new
-Abstract: Automated skin lesion classification using deep learning has shown remarkable accuracy, yet clinical adoption remains limited due to the "black box" nature of these models. We present MelanomaNet, an explainable deep learning system for multi-class skin lesion classification that addresses this gap through four complementary interpretability mechanisms. Our approach combines an EfficientNet V2 backbone with GradCAM++ attention visualization, automated ABCDE clinical criterion extraction, Fast Concept Activation Vectors (FastCAV) for concept-based explanations, and Monte Carlo Dropout uncertainty quantification. We evaluate our system on the ISIC 2019 dataset containing 25,331 dermoscopic images across 9 diagnostic categories. Our model achieves 85.61% accuracy with a weighted F1 score of 0.8564, while providing clinically meaningful explanations that align model attention with established dermatological assessment criteria. The uncertainty quantification module decomposes prediction confidence into epistemic and aleatoric components, enabling automatic flagging of unreliable predictions for clinical review. Our results demonstrate that high classification performance can be achieved alongside comprehensive interpretability, potentially facilitating greater trust and adoption in clinical dermatology workflows. The source code is available at https://github.com/suxrobgm/explainable-melanoma
- oai:arXiv.org:2512.09289v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Balancing the Byline: Exploring Gender and Authorship Patterns in Canadian Science Publishing Journals
+ https://arxiv.org/abs/2512.10268
+ arXiv:2512.10268v1 Announce Type: new
+Abstract: Canada is internationally recognized for its leadership in science and its commitment to equity, diversity, and inclusion (EDI) in STEM (science, technology, engineering, and math) fields. Despite this leadership, limited research has examined gender disparities in scientific publishing within the Canadian context. This study analyzes over 67,000 articles published in 24 Canadian Science Publishing (CSP) journals between 2010 and 2021 to better understand patterns of gender representation. Findings show that women accounted for less than one-third of published authors across CSP journals. Representation varied by discipline, with higher proportions of women in biomedical sciences and lower proportions of women in engineering - trends that mirror broader national and global patterns. Notably, the proportion of women submitting manuscripts closely matched those published, suggesting that broader workforce disparities may play a larger role than publication bias. Women were less likely to be solo authors or to hold prominent authorship positions, such as first or last author - roles typically associated with research leadership and career advancement. These findings point to the need for a two-fold response: continued efforts to address systemic barriers to women's participation in science, and a review of publishing practices to ensure equitable access, recognition, and inclusion for all researchers.
+ oai:arXiv.org:2512.10268v1
+ cs.DL
+ physics.ed-ph
+ physics.soc-ph
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Sukhrobbek Ilyosbekov
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Eden J. Hennessey, Amanda Desnoyers, Margaret Christ, Adrianna Tassone, Skye Hennessey, Bianca Dreyer, Alex Jay, Patricia Sanchez, Shohini Ghose
- Identifying Bias in Machine-generated Text Detection
- https://arxiv.org/abs/2512.09292
- arXiv:2512.09292v1 Announce Type: new
-Abstract: The meteoric rise in text generation capability has been accompanied by parallel growth in interest in machine-generated text detection: the capability to identify whether a given text was generated using a model or written by a person. While detection models show strong performance, they have the capacity to cause significant negative impacts. We explore potential biases in English machine-generated text detection systems. We curate a dataset of student essays and assess 16 different detection systems for bias across four attributes: gender, race/ethnicity, English-language learner (ELL) status, and economic status. We evaluate these attributes using regression-based models to determine the significance and power of the effects, as well as performing subgroup analysis. We find that while biases are generally inconsistent across systems, there are several key issues: several models tend to classify disadvantaged groups as machine-generated, ELL essays are more likely to be classified as machine-generated, economically disadvantaged students' essays are less likely to be classified as machine-generated, and non-White ELL essays are disproportionately classified as machine-generated relative to their White counterparts. Finally, we perform human annotation and find that while humans perform generally poorly at the detection task, they show no significant biases on the studied attributes.
- oai:arXiv.org:2512.09292v1
- cs.CL
+ Hybrid Learning and Optimization-Based Dynamic Scheduling for DL Workloads on Heterogeneous GPU Clusters
+ https://arxiv.org/abs/2512.10271
+ arXiv:2512.10271v1 Announce Type: new
+Abstract: Modern cloud platforms increasingly host large-scale deep learning (DL) workloads, demanding high-throughput, low-latency GPU scheduling. However, the growing heterogeneity of GPU clusters and limited visibility into application characteristics pose major challenges for existing schedulers, which often rely on offline profiling or application-specific assumptions. We present RLTune, an application-agnostic reinforcement learning (RL)-based scheduling framework that dynamically prioritizes and allocates DL jobs on heterogeneous GPU clusters. RLTune integrates RL-driven prioritization with MILP-based job-to-node mapping to optimize system-wide objectives such as job completion time (JCT), queueing delay, and resource utilization. Trained on large-scale production traces from Microsoft Philly, Helios, and Alibaba, RLTune improves GPU utilization by up to 20%, reduces queueing delay by up to 81%, and shortens JCT by as much as 70 percent. Unlike prior approaches, RLTune generalizes across diverse workloads without requiring per-job profiling, making it practical for cloud providers to deploy at scale for more efficient, fair, and sustainable DL workload management.
+ oai:arXiv.org:2512.10271v1
+ cs.DCcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Kevin Stowe, Svetlana Afanaseva, Rodolfo Raimundo, Yitao Sun, Kailash Patil
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shruti Dongare, Redwan Ibne Seraj Khan, Hadeel Albahar, Nannan Zhao, Diego Melendez Maita, Ali R. Butt
- Electric Arc Furnaces Scheduling under Electricity Price Volatility with Reinforcement Learning
- https://arxiv.org/abs/2512.09293
- arXiv:2512.09293v1 Announce Type: new
-Abstract: This paper proposes a reinforcement learning-based framework for optimizing the operation of electric arc furnaces (EAFs) under volatile electricity prices. We formulate the deterministic version of the EAF scheduling problem into a mixed-integer linear programming (MILP) formulation, and then develop a Q-learning algorithm to perform real-time control of multiple EAF units under real-time price volatility and shared feeding capacity constraints. We design a custom reward function for the Q-learning algorithm to smooth the start-up penalties of the EAFs. Using real data from EAF designs and electricity prices in New York State, we benchmark our algorithm against a baseline rule-based controller and a MILP benchmark, assuming perfect price forecasts. The results show that our reinforcement learning algorithm achieves around 90% of the profit compared to the perfect MILP benchmark in various single-unit and multi-unit cases under a non-anticipatory control setting.
- oai:arXiv.org:2512.09293v1
- eess.SY
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Reverse Thinking Enhances Missing Information Detection in Large Language Models
+ https://arxiv.org/abs/2512.10273
+ arXiv:2512.10273v1 Announce Type: new
+Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities in various reasoning tasks, yet they often struggle with problems involving missing information, exhibiting issues such as incomplete responses, factual errors, and hallucinations. While forward reasoning approaches like Chain-of-Thought (CoT) and Tree-of-Thought (ToT) have shown success in structured problem-solving, they frequently fail to systematically identify and recover omitted information. In this paper, we explore the potential of reverse thinking methodologies to enhance LLMs' performance on missing information detection tasks. Drawing inspiration from recent work on backward reasoning, we propose a novel framework that guides LLMs through reverse thinking to identify necessary conditions and pinpoint missing elements. Our approach transforms the challenging task of missing information identification into a more manageable backward reasoning problem, significantly improving model accuracy. Experimental results demonstrate that our reverse thinking approach achieves substantial performance gains compared to traditional forward reasoning methods, providing a promising direction for enhancing LLMs' logical completeness and reasoning robustness.
+ oai:arXiv.org:2512.10273v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ruonan Pi, Zhiyuan Fan, Bolun Xu
+ Yuxin Liu, Chaojie Gu, Yihang Zhang, Bin Qian, Shibo He
- Traffic Scene Small Target Detection Method Based on YOLOv8n-SPTS Model for Autonomous Driving
- https://arxiv.org/abs/2512.09296
- arXiv:2512.09296v1 Announce Type: new
-Abstract: This paper focuses on the key issue in autonomous driving: small target recognition in dynamic perception. Existing algorithms suffer from poor detection performance due to missing small target information, scale imbalance, and occlusion. We propose an improved YOLOv8n-SPTS model, which enhances the detection accuracy of small traffic targets through three key innovations: First, optimizing the feature extraction module. In the Backbone Bottleneck structure of YOLOv8n, 4 traditional convolution modules are replaced with Space-to-Depth Convolution (SPD-Conv) modules. This module retains fine-grained information through space-to-depth conversion, reduces information loss, and enhances the ability to capture features of low-resolution small targets. Second, enhancing feature fusion capability. The Spatial Pyramid Pooling - Fast Cross Stage Partial Connection (SPPFCSPC) module is introduced to replace the original SPPF module, integrating the multi-scale feature extraction from Spatial Pyramid Pooling (SPP) and the feature fusion mechanism of Cross Stage Partial Connection (CSP), thereby improving the model's contextual understanding of complex scenes and multi-scale feature expression ability. Third, designing a dedicated detection structure for small targets. A Triple-Stage Feature Pyramid (TSFP) structure is proposed, which adds a 160*160 small target detection head to the original detection heads to fully utilize high-resolution features in shallow layers; meanwhile, redundant large target detection heads are removed to balance computational efficiency. Comparative experiments on the VisDrone2019-DET dataset show that YOLOv8n-SPTS model ranks first in precision (61.9%), recall (48.3%), mAP@0.5 (52.6%), and mAP@0.5:0.95 (32.6%). Visualization results verify that the miss rate of small targets such as pedestrians and bicycles in occluded and dense scenes is significantly reduced.
- oai:arXiv.org:2512.09296v1
+ Sample-wise Adaptive Weighting for Transfer Consistency in Adversarial Distillation
+ https://arxiv.org/abs/2512.10275
+ arXiv:2512.10275v1 Announce Type: new
+Abstract: Adversarial distillation in the standard min-max adversarial training framework aims to transfer adversarial robustness from a large, robust teacher network to a compact student. However, existing work often neglects to incorporate state-of-the-art robust teachers. Through extensive analysis, we find that stronger teachers do not necessarily yield more robust students-a phenomenon known as robust saturation. While typically attributed to capacity gaps, we show that such explanations are incomplete. Instead, we identify adversarial transferability-the fraction of student-crafted adversarial examples that remain effective against the teacher-as a key factor in successful robustness transfer. Based on this insight, we propose Sample-wise Adaptive Adversarial Distillation (SAAD), which reweights training examples by their measured transferability without incurring additional computational cost. Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet show that SAAD consistently improves AutoAttack robustness over prior methods. Our code is available at https://github.com/HongsinLee/saad.
+ oai:arXiv.org:2512.10275v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Songhan Wu
+ Hongsin Lee, Hye Won Chung
- One-Shot Real-World Demonstration Synthesis for Scalable Bimanual Manipulation
- https://arxiv.org/abs/2512.09297
- arXiv:2512.09297v1 Announce Type: new
-Abstract: Learning dexterous bimanual manipulation policies critically depends on large-scale, high-quality demonstrations, yet current paradigms face inherent trade-offs: teleoperation provides physically grounded data but is prohibitively labor-intensive, while simulation-based synthesis scales efficiently but suffers from sim-to-real gaps. We present BiDemoSyn, a framework that synthesizes contact-rich, physically feasible bimanual demonstrations from a single real-world example. The key idea is to decompose tasks into invariant coordination blocks and variable, object-dependent adjustments, then adapt them through vision-guided alignment and lightweight trajectory optimization. This enables the generation of thousands of diverse and feasible demonstrations within several hour, without repeated teleoperation or reliance on imperfect simulation. Across six dual-arm tasks, we show that policies trained on BiDemoSyn data generalize robustly to novel object poses and shapes, significantly outperforming recent baselines. By bridging the gap between efficiency and real-world fidelity, BiDemoSyn provides a scalable path toward practical imitation learning for complex bimanual manipulation without compromising physical grounding.
- oai:arXiv.org:2512.09297v1
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Computing Evolutionarily Stable Strategies in Imperfect-Information Games
+ https://arxiv.org/abs/2512.10279
+ arXiv:2512.10279v1 Announce Type: new
+Abstract: We present an algorithm for computing evolutionarily stable strategies (ESSs) in symmetric perfect-recall extensive-form games of imperfect information. Our main algorithm is for two-player games, and we describe how it can be extended to multiplayer games. The algorithm is sound and computes all ESSs in nondegenerate games and a subset of them in degenerate games which contain an infinite continuum of symmetric Nash equilibria. The algorithm is anytime and can be stopped early to find one or more ESSs. We experiment on an imperfect-information cancer signaling game as well as random games to demonstrate scalability.
+ oai:arXiv.org:2512.10279v1
+ cs.GT
+ cs.AI
+ cs.MA
+ econ.TH
+ q-bio.PE
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Huayi Zhou, Kui Jia
+ Sam Ganzfried
- VABench: A Comprehensive Benchmark for Audio-Video Generation
- https://arxiv.org/abs/2512.09299
- arXiv:2512.09299v1 Announce Type: new
-Abstract: Recent advances in video generation have been remarkable, enabling models to produce visually compelling videos with synchronized audio. While existing video generation benchmarks provide comprehensive metrics for visual quality, they lack convincing evaluations for audio-video generation, especially for models aiming to generate synchronized audio-video outputs. To address this gap, we introduce VABench, a comprehensive and multi-dimensional benchmark framework designed to systematically evaluate the capabilities of synchronous audio-video generation. VABench encompasses three primary task types: text-to-audio-video (T2AV), image-to-audio-video (I2AV), and stereo audio-video generation. It further establishes two major evaluation modules covering 15 dimensions. These dimensions specifically assess pairwise similarities (text-video, text-audio, video-audio), audio-video synchronization, lip-speech consistency, and carefully curated audio and video question-answering (QA) pairs, among others. Furthermore, VABench covers seven major content categories: animals, human sounds, music, environmental sounds, synchronous physical sounds, complex scenes, and virtual worlds. We provide a systematic analysis and visualization of the evaluation results, aiming to establish a new standard for assessing video generation models with synchronous audio capabilities and to promote the comprehensive advancement of the field.
- oai:arXiv.org:2512.09299v1
- cs.CV
- cs.SD
- Thu, 11 Dec 2025 00:00:00 -0500
+ Graph Neural Network Based Adaptive Threat Detection for Cloud Identity and Access Management Logs
+ https://arxiv.org/abs/2512.10280
+ arXiv:2512.10280v1 Announce Type: new
+Abstract: The rapid expansion of cloud infrastructures and distributed identity systems has significantly increased the complexity and attack surface of modern enterprises. Traditional rule based or signature driven detection systems are often inadequate in identifying novel or evolving threats within Identity and Access Management logs, where anomalous behavior may appear statistically benign but contextually malicious. This paper presents a Graph Neural Network Based Adaptive Threat Detection framework designed to learn latent user resource interaction patterns from IAM audit trails in real time. By modeling IAM logs as heterogeneous dynamic graphs, the proposed system captures temporal, relational, and contextual dependencies across entities such as users, roles, sessions, and access actions. The model incorporates attention based aggregation and graph embedding updates to enable continual adaptation to changing cloud environments. Experimental evaluation on synthesized and real world IAM datasets demonstrates that the proposed method achieves higher detection precision and recall than baseline LSTM and GCN classifiers, while maintaining scalability across multi tenant cloud environments. The frameworks adaptability enables proactive mitigation of insider threats, privilege escalation, and lateral movement attacks, contributing to the foundation of AI driven zero trust access analytics. This work bridges the gap between graph based machine learning and operational cloud security intelligence.
+ oai:arXiv.org:2512.10280v1
+ cs.CR
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Daili Hua, Xizhi Wang, Bohan Zeng, Xinyi Huang, Hao Liang, Junbo Niu, Xinlong Chen, Quanqing Xu, Wentao Zhang
+ Venkata Tanuja Madireddy
- ZeroOS: A Universal Modular Library OS for zkVMs
- https://arxiv.org/abs/2512.09300
- arXiv:2512.09300v1 Announce Type: new
-Abstract: zkVMs promise general-purpose verifiable computation through ISA-level compatibility with modern programs and toolchains. However, compatibility extends further than just the ISA; modern programs often cannot run or even compile without an operating system and libc. zkVMs attempt to address this by maintaining forks of language-specific runtimes and statically linking them into applications to create self-contained unikernels, but this ad-hoc approach leads to version hell and burdens verifiable applications (vApps) with an unnecessarily large trusted computing base. We solve this problem with ZeroOS, a modular library operating system (libOS) for vApp unikernels; vApp developers can use off-the-shelf toolchains to compile and link only the exact subset of the Linux ABI their vApp needs. Any zkVM team can easily leverage the ZeroOS ecosystem by writing a ZeroOS bootloader for their platform, resulting in a reduced maintainence burden and unifying the entire zkVM ecosystem with consolidated development and audit resources. ZeroOS is free and open-sourced at https://github.com/LayerZero-Labs/ZeroOS.
- oai:arXiv.org:2512.09300v1
- cs.OS
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Neuronal Attention Circuit (NAC) for Representation Learning
+ https://arxiv.org/abs/2512.10282
+ arXiv:2512.10282v1 Announce Type: new
+Abstract: Attention improves representation learning over RNNs, but its discrete nature limits continuous-time (CT) modeling. We introduce Neuronal Attention Circuit (NAC), a novel, biologically plausible CT-Attention mechanism that reformulates attention logits computation as the solution to a linear first-order ODE with nonlinear interlinked gates derived from repurposing \textit{C. elegans} Neuronal Circuit Policies (NCPs) wiring mechanism. NAC replaces dense projections with sparse sensory gates for key-query projections and a sparse backbone network with two heads for computing \textit{content-target} and \textit{learnable time-constant} gates, enabling efficient adaptive dynamics. NAC supports three attention logit computation modes: (i) explicit Euler integration, (ii) exact closed-form solution, and (iii) steady-state approximation. To improve memory intensity, we implemented a sparse Top-\emph{K} pairwise concatenation scheme that selectively curates key-query interactions. We provide rigorous theoretical guarantees, including state stability, bounded approximation errors, and universal approximation. Empirically, we implemented NAC in diverse domains, including irregular time-series classification, lane-keeping for autonomous vehicles, and industrial prognostics. We observed that NAC matches or outperforms competing baselines in accuracy and occupies an intermediate position in runtime and memory efficiency compared with several CT baselines.
+ oai:arXiv.org:2512.10282v1
+ cs.AI
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Guangxian Zou, Isaac Zhang, Ryan Zarick, Kelvin Wong, Thomas Kim, Daniel L. -K. Wong, Saeid Yazdinejad, Dan Boneh
+ Waleed Razzaq, Izis Kankaraway, Yun-Bo Zhao
- RACAM: Enhancing DRAM with Reuse-Aware Computation and Automated Mapping for ML Inference
- https://arxiv.org/abs/2512.09304
- arXiv:2512.09304v1 Announce Type: new
-Abstract: In-DRAM Processing-In-Memory (DRAM-PIM) has emerged as a promising approach to accelerate memory-intensive workloads by mitigating data transfer overhead between DRAM and the host processor. Bit-serial DRAM-PIM architectures, further enhance efficiency by supporting runtime variable data precision, which is critical for emerging workloads, such as large language model (LLM) inference. However, existing works still have major limitations: lack of data reuse, significant amounts of redundant data transfer, and insufficient support for workload mapping. To address these issues, we propose RACAM, the first in-DRAM bit-serial architecture which uses dedicated locality buffers, bit-serial PEs, popcount reduction units and broadcast units to enable data reuse and alleviate redundant data transfers. Furthermore, a workload mapping mechanism is proposed to fully explore the massive parallelism of DRAM architecture and identify the best mapping scheme of a given workload. We evaluate RACAM against GPUs and the state-of-the-art, in-DRAM PIM system, Proteus, across end-to-end LLM inferences. RACAM achieves 9x to 102x speedup over GPUs and 233x higher performance per mm2 compared to Proteus in case of GPT3.
- oai:arXiv.org:2512.09304v1
- cs.AR
- Thu, 11 Dec 2025 00:00:00 -0500
+ MotionEdit: Benchmarking and Learning Motion-Centric Image Editing
+ https://arxiv.org/abs/2512.10284
+ arXiv:2512.10284v1 Announce Type: new
+Abstract: We introduce MotionEdit, a novel dataset for motion-centric image editing-the task of modifying subject actions and interactions while preserving identity, structure, and physical plausibility. Unlike existing image editing datasets that focus on static appearance changes or contain only sparse, low-quality motion edits, MotionEdit provides high-fidelity image pairs depicting realistic motion transformations extracted and verified from continuous videos. This new task is not only scientifically challenging but also practically significant, powering downstream applications such as frame-controlled video synthesis and animation.
+ To evaluate model performance on the novel task, we introduce MotionEdit-Bench, a benchmark that challenges models on motion-centric edits and measures model performance with generative, discriminative, and preference-based metrics. Benchmark results reveal that motion editing remains highly challenging for existing state-of-the-art diffusion-based editing models. To address this gap, we propose MotionNFT (Motion-guided Negative-aware Fine Tuning), a post-training framework that computes motion alignment rewards based on how well the motion flow between input and model-edited images matches the ground-truth motion, guiding models toward accurate motion transformations. Extensive experiments on FLUX.1 Kontext and Qwen-Image-Edit show that MotionNFT consistently improves editing quality and motion fidelity of both base models on the motion editing task without sacrificing general editing ability, demonstrating its effectiveness.
+ oai:arXiv.org:2512.10284v1
+ cs.CV
+ cs.AI
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Siyuan Ma, Jiajun Hu, Jeeho Ryoo, Aman Arora, Lizy Kurian John
+ Yixin Wan, Lei Ke, Wenhao Yu, Kai-Wei Chang, Dong Yu
- From SAM to DINOv2: Towards Distilling Foundation Models to Lightweight Baselines for Generalized Polyp Segmentation
- https://arxiv.org/abs/2512.09307
- arXiv:2512.09307v1 Announce Type: new
-Abstract: Accurate polyp segmentation during colonoscopy is critical for the early detection of colorectal cancer and still remains challenging due to significant size, shape, and color variations, and the camouflaged nature of polyps. While lightweight baseline models such as U-Net, U-Net++, and PraNet offer advantages in terms of easy deployment and low computational cost, they struggle to deal with the above issues, leading to limited segmentation performance. In contrast, large-scale vision foundation models such as SAM, DINOv2, OneFormer, and Mask2Former have exhibited impressive generalization performance across natural image domains. However, their direct transfer to medical imaging tasks (e.g., colonoscopic polyp segmentation) is not straightforward, primarily due to the scarcity of large-scale datasets and lack of domain-specific knowledge. To bridge this gap, we propose a novel distillation framework, Polyp-DiFoM, that transfers the rich representations of foundation models into lightweight segmentation baselines, allowing efficient and accurate deployment in clinical settings. In particular, we infuse semantic priors from the foundation models into canonical architectures such as U-Net and U-Net++ and further perform frequency domain encoding for enhanced distillation, corroborating their generalization capability. Extensive experiments are performed across five benchmark datasets, such as Kvasir-SEG, CVC-ClinicDB, ETIS, ColonDB, and CVC-300. Notably, Polyp-DiFoM consistently outperforms respective baseline models significantly, as well as the state-of-the-art model, with nearly 9 times reduced computation overhead. The code is available at https://github.com/lostinrepo/PolypDiFoM.
- oai:arXiv.org:2512.09307v1
+ ShotDirector: Directorially Controllable Multi-Shot Video Generation with Cinematographic Transitions
+ https://arxiv.org/abs/2512.10286
+ arXiv:2512.10286v1 Announce Type: new
+Abstract: Shot transitions play a pivotal role in multi-shot video generation, as they determine the overall narrative expression and the directorial design of visual storytelling. However, recent progress has primarily focused on low-level visual consistency across shots, neglecting how transitions are designed and how cinematographic language contributes to coherent narrative expression. This often leads to mere sequential shot changes without intentional film-editing patterns. To address this limitation, we propose ShotDirector, an efficient framework that integrates parameter-level camera control and hierarchical editing-pattern-aware prompting. Specifically, we adopt a camera control module that incorporates 6-DoF poses and intrinsic settings to enable precise camera information injection. In addition, a shot-aware mask mechanism is employed to introduce hierarchical prompts aware of professional editing patterns, allowing fine-grained control over shot content. Through this design, our framework effectively combines parameter-level conditions with high-level semantic guidance, achieving film-like controllable shot transitions. To facilitate training and evaluation, we construct ShotWeaver40K, a dataset that captures the priors of film-like editing patterns, and develop a set of evaluation metrics for controllable multi-shot video generation. Extensive experiments demonstrate the effectiveness of our framework.
+ oai:arXiv.org:2512.10286v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shivanshu Agnihotri, Snehashis Majhi, Deepak Ranjan Nayak, Debesh Jha
+ Xiaoxue Wu, Xinyuan Chen, Yaohui Wang, Yu Qiao
- A Distributed Framework for Privacy-Enhanced Vision Transformers on the Edge
- https://arxiv.org/abs/2512.09309
- arXiv:2512.09309v1 Announce Type: new
-Abstract: Nowadays, visual intelligence tools have become ubiquitous, offering all kinds of convenience and possibilities. However, these tools have high computational requirements that exceed the capabilities of resource-constrained mobile and wearable devices. While offloading visual data to the cloud is a common solution, it introduces significant privacy vulnerabilities during transmission and server-side computation. To address this, we propose a novel distributed, hierarchical offloading framework for Vision Transformers (ViTs) that addresses these privacy challenges by design. Our approach uses a local trusted edge device, such as a mobile phone or an Nvidia Jetson, as the edge orchestrator. This orchestrator partitions the user's visual data into smaller portions and distributes them across multiple independent cloud servers. By design, no single external server possesses the complete image, preventing comprehensive data reconstruction. The final data merging and aggregation computation occurs exclusively on the user's trusted edge device. We apply our framework to the Segment Anything Model (SAM) as a practical case study, which demonstrates that our method substantially enhances content privacy over traditional cloud-based approaches. Evaluations show our framework maintains near-baseline segmentation performance while substantially reducing the risk of content reconstruction and user data exposure. Our framework provides a scalable, privacy-preserving solution for vision tasks in the edge-cloud continuum.
- oai:arXiv.org:2512.09309v1
- cs.DC
- cs.CR
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ A Kernel-based Resource-efficient Neural Surrogate for Multi-fidelity Prediction of Aerodynamic Field
+ https://arxiv.org/abs/2512.10287
+ arXiv:2512.10287v1 Announce Type: new
+Abstract: Surrogate models provide fast alternatives to costly aerodynamic simulations and are extremely useful in design and optimization applications. This study proposes the use of a recent kernel-based neural surrogate, KHRONOS. In this work, we blend sparse high-fidelity (HF) data with low-fidelity (LF) information to predict aerodynamic fields under varying constraints in computational resources. Unlike traditional approaches, KHRONOS is built upon variational principles, interpolation theory, and tensor decomposition. These elements provide a mathematical basis for heavy pruning compared to dense neural networks. Using the AirfRANS dataset as a high-fidelity benchmark and NeuralFoil to generate low-fidelity counterparts, this work compares the performance of KHRONOS with three contemporary model architectures: a multilayer perceptron (MLP), a graph neural network (GNN), and a physics-informed neural network (PINN). We consider varying levels of high-fidelity data availability (0%, 10%, and 30%) and increasingly complex geometry parameterizations. These are used to predict the surface pressure coefficient distribution over the airfoil. Results indicate that, whilst all models eventually achieve comparable predictive accuracy, KHRONOS excels in resource-constrained conditions. In this domain, KHRONOS consistently requires orders of magnitude fewer trainable parameters and delivers much faster training and inference than contemporary dense neural networks at comparable accuracy. These findings highlight the potential of KHRONOS and similar architectures to balance accuracy and efficiency in multi-fidelity aerodynamic field prediction.
+ oai:arXiv.org:2512.10287v1
+ cs.LG
+ physics.flu-dyn
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- 10.1145/3769102.3772714
- Proceedings of the Tenth ACM/IEEE Symposium on Edge Computing (SEC '25), 2025, Article 8, pp. 1-16
- Zihao Ding, Mufeng Zhu, Zhongze Tang, Sheng Wei, Yao Liu
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Apurba Sarker, Reza T. Batley, Darshan Sarojini, Sourav Saha
- Scene-agnostic Hierarchical Bimanual Task Planning via Visual Affordance Reasoning
- https://arxiv.org/abs/2512.09310
- arXiv:2512.09310v1 Announce Type: new
-Abstract: Embodied agents operating in open environments must translate high-level instructions into grounded, executable behaviors, often requiring coordinated use of both hands. While recent foundation models offer strong semantic reasoning, existing robotic task planners remain predominantly unimanual and fail to address the spatial, geometric, and coordination challenges inherent to bimanual manipulation in scene-agnostic settings. We present a unified framework for scene-agnostic bimanual task planning that bridges high-level reasoning with 3D-grounded two-handed execution. Our approach integrates three key modules. Visual Point Grounding (VPG) analyzes a single scene image to detect relevant objects and generate world-aligned interaction points. Bimanual Subgoal Planner (BSP) reasons over spatial adjacency and cross-object accessibility to produce compact, motion-neutralized subgoals that exploit opportunities for coordinated two-handed actions. Interaction-Point-Driven Bimanual Prompting (IPBP) binds these subgoals to a structured skill library, instantiating synchronized unimanual or bimanual action sequences that satisfy hand-state and affordance constraints. Together, these modules enable agents to plan semantically meaningful, physically feasible, and parallelizable two-handed behaviors in cluttered, previously unseen scenes. Experiments show that it produces coherent, feasible, and compact two-handed plans, and generalizes to cluttered scenes without retraining, demonstrating robust scene-agnostic affordance reasoning for bimanual tasks.
- oai:arXiv.org:2512.09310v1
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Certifying Concavity and Monotonicity in Games via Sum-of-Squares Hierarchies
+ https://arxiv.org/abs/2512.10292
+ arXiv:2512.10292v1 Announce Type: new
+Abstract: Concavity and its refinements underpin tractability in multiplayer games, where players independently choose actions to maximize their own payoffs which depend on other players' actions. In concave games, where players' strategy sets are compact and convex, and their payoffs are concave in their own actions, strong guarantees follow: Nash equilibria always exist and decentralized algorithms converge to equilibria. If the game is furthermore monotone, an even stronger guarantee holds: Nash equilibria are unique under strictness assumptions. Unfortunately, we show that certifying concavity or monotonicity is NP-hard, already for games where utilities are multivariate polynomials and compact, convex basic semialgebraic strategy sets -- an expressive class that captures extensive-form games with imperfect recall. On the positive side, we develop two hierarchies of sum-of-squares programs that certify concavity and monotonicity of a given game, and each level of the hierarchies can be solved in polynomial time. We show that almost all concave/monotone games are certified at some finite level of the hierarchies. Subsequently, we introduce SOS-concave/monotone games, which globally approximate concave/monotone games, and show that for any given game we can compute the closest SOS-concave/monotone game in polynomial time. Finally, we apply our techniques to canonical examples of imperfect recall extensive-form games.
+ oai:arXiv.org:2512.10292v1
+ cs.GT
+ cs.MA
+ math.OC
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kwang Bin Lee, Jiho Kang, Sung-Hee Lee
+ http://creativecommons.org/licenses/by/4.0/
+ Vincent Leon, Iosif Sakos, Ryann Sim, Antonios Varvitsiotis
- Transformer-Driven Multimodal Fusion for Explainable Suspiciousness Estimation in Visual Surveillance
- https://arxiv.org/abs/2512.09311
- arXiv:2512.09311v1 Announce Type: new
-Abstract: Suspiciousness estimation is critical for proactive threat detection and ensuring public safety in complex environments. This work introduces a large-scale annotated dataset, USE50k, along with a computationally efficient vision-based framework for real-time suspiciousness analysis. The USE50k dataset contains 65,500 images captured from diverse and uncontrolled environments, such as airports, railway stations, restaurants, parks, and other public areas, covering a broad spectrum of cues including weapons, fire, crowd density, abnormal facial expressions, and unusual body postures. Building on this dataset, we present DeepUSEvision, a lightweight and modular system integrating three key components, i.e., a Suspicious Object Detector based on an enhanced YOLOv12 architecture, dual Deep Convolutional Neural Networks (DCNN-I and DCNN-II) for facial expression and body-language recognition using image and landmark features, and a transformer-based Discriminator Network that adaptively fuses multimodal outputs to yield an interpretable suspiciousness score. Extensive experiments confirm the superior accuracy, robustness, and interpretability of the proposed framework compared to state-of-the-art approaches. Collectively, the USE50k dataset and the DeepUSEvision framework establish a strong and scalable foundation for intelligent surveillance and real-time risk assessment in safety-critical applications.
- oai:arXiv.org:2512.09311v1
+ Physically Aware 360$^\circ$ View Generation from a Single Image using Disentangled Scene Embeddings
+ https://arxiv.org/abs/2512.10293
+ arXiv:2512.10293v1 Announce Type: new
+Abstract: We introduce Disentangled360, an innovative 3D-aware technology that integrates the advantages of direction disentangled volume rendering with single-image 360{\deg} unique view synthesis for applications in medical imaging and natural scene reconstruction. In contrast to current techniques that either oversimplify anisotropic light behavior or lack generalizability across various contexts, our framework distinctly differentiates between isotropic and anisotropic contributions inside a Gaussian Splatting backbone. We implement a dual-branch conditioning framework, one optimized for CT intensity driven scattering in volumetric data and the other for real-world RGB scenes through normalized camera embeddings. To address scale ambiguity and maintain structural realism, we present a hybrid pose agnostic anchoring method that adaptively samples scene depth and material transitions, functioning as stable pivots during scene distillation. Our design integrates preoperative radiography simulation and consumer-grade 360{\deg} rendering into a singular inference pipeline, facilitating rapid, photorealistic view synthesis with inherent directionality. Evaluations on the Mip-NeRF 360, RealEstate10K, and DeepDRR datasets indicate superior SSIM and LPIPS performance, while runtime assessments confirm its viability for interactive applications. Disentangled360 facilitates mixed-reality medical supervision, robotic perception, and immersive content creation, eliminating the necessity for scene-specific finetuning or expensive photon simulations.
+ oai:arXiv.org:2512.10293v1cs.CV
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Kuldeep Singh Yadav, Lalan Kumar
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Karthikeya KV, Narendra Bandaru
- Tyche: A Hybrid Computation Framework of Illumination Pattern for Satellite Beam Hopping
- https://arxiv.org/abs/2512.09312
- arXiv:2512.09312v1 Announce Type: new
-Abstract: High-Throughput Satellites (HTS) use beam hopping to handle non-uniform and time-varying ground traffic demand. A significant technical challenge in beam hopping is the computation of effective illumination patterns. Traditional algorithms, like the genetic algorithm, require over 300 seconds to compute a single illumination pattern for just 37 cells, whereas modern HTS typically covers over 300 cells, rendering current methods impractical for real-world applications. Advanced approaches, such as multi-agent deep reinforcement learning, face convergence issues when the number of cells exceeds 40. In this paper, we introduce Tyche, a hybrid computation framework designed to address this challenge. Tyche incorporates a Monte Carlo Tree Search Beam Hopping (MCTS-BH) algorithm for computing illumination patterns and employs sliding window and pruning techniques to significantly reduce computation time. Specifically, MCTS-BH can compute one illumination pattern for 37 cells in just 12 seconds. To ensure real-time computation, we use a Greedy Beam Hopping (G-BH) algorithm, which provides a provisional solution while MCTS-BH completes its computation in the background. Our evaluation results show that MCTS-BH can increase throughput by up to 98.76%, demonstrating substantial improvements over existing solutions.
- oai:arXiv.org:2512.09312v1
- cs.NI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Lies We Can Trust: Quantifying Action Uncertainty with Inaccurate Stochastic Dynamics through Conformalized Nonholonomic Lie Groups
+ https://arxiv.org/abs/2512.10294
+ arXiv:2512.10294v1 Announce Type: new
+Abstract: We propose Conformal Lie-group Action Prediction Sets (CLAPS), a symmetry-aware conformal prediction-based algorithm that constructs, for a given action, a set guaranteed to contain the resulting system configuration at a user-defined probability. Our assurance holds under both aleatoric and epistemic uncertainty, non-asymptotically, and does not require strong assumptions about the true system dynamics, the uncertainty sources, or the quality of the approximate dynamics model. Typically, uncertainty quantification is tackled by making strong assumptions about the error distribution or magnitude, or by relying on uncalibrated uncertainty estimates - i.e., with no link to frequentist probabilities - which are insufficient for safe control. Recently, conformal prediction has emerged as a statistical framework capable of providing distribution-free probabilistic guarantees on test-time prediction accuracy. While current conformal methods treat robots as Euclidean points, many systems have non-Euclidean configurations, e.g., some mobile robots have SE(2). In this work, we rigorously analyze configuration errors using Lie groups, extending previous Euclidean Space theoretical guarantees to SE(2). Our experiments on a simulated JetBot, and on a real MBot, suggest that by considering the configuration space's structure, our symmetry-informed nonconformity score leads to more volume-efficient prediction regions which represent the underlying uncertainty better than existing approaches.
+ oai:arXiv.org:2512.10294v1
+ cs.RO
+ cs.SY
+ eess.SY
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Ziheng Yang, Kun Qiu, Zhe Chen, Wenjun Zhu, Yue Gao
+ Lu\'is Marques, Maani Ghaffari, Dmitry Berenson
- Hetero-SplitEE: Split Learning of Neural Networks with Early Exits for Heterogeneous IoT Devices
- https://arxiv.org/abs/2512.09313
- arXiv:2512.09313v1 Announce Type: new
-Abstract: The continuous scaling of deep neural networks has fundamentally transformed machine learning, with larger models demonstrating improved performance across diverse tasks. This growth in model size has dramatically increased the computational resources required for the training process. Consequently, distributed approaches, such as Federated Learning and Split Learning, have become essential paradigms for scalable deployment. However, existing Split Learning approaches assume client homogeneity and uniform split points across all participants. This critically limits their applicability to real-world IoT systems where devices exhibit heterogeneity in computational resources. To address this limitation, this paper proposes Hetero-SplitEE, a novel method that enables heterogeneous IoT devices to train a shared deep neural network in parallel collaboratively. By integrating heterogeneous early exits into hierarchical training, our approach allows each client to select distinct split points (cut layers) tailored to its computational capacity. In addition, we propose two cooperative training strategies, the Sequential strategy and the Averaging strategy, to facilitate this collaboration among clients with different split points. The Sequential strategy trains clients sequentially with a shared server model to reduce computational overhead. The Averaging strategy enables parallel client training with periodic cross-layer aggregation. Extensive experiments on CIFAR-10, CIFAR-100, and STL-10 datasets using ResNet-18 demonstrate that our method maintains competitive accuracy while efficiently supporting diverse computational constraints, enabling practical deployment of collaborative deep learning in heterogeneous IoT ecosystems.
- oai:arXiv.org:2512.09313v1
- cs.LG
+ FLARE: A Wireless Side-Channel Fingerprinting Attack on Federated Learning
+ https://arxiv.org/abs/2512.10296
+ arXiv:2512.10296v1 Announce Type: new
+Abstract: Federated Learning (FL) enables collaborative model training across distributed devices while safeguarding data and user privacy. However, FL remains susceptible to privacy threats that can compromise data via direct means. That said, indirectly compromising the confidentiality of the FL model architecture (e.g., a convolutional neural network (CNN) or a recurrent neural network (RNN)) on a client device by an outsider remains unexplored. If leaked, this information can enable next-level attacks tailored to the architecture. This paper proposes a novel side-channel fingerprinting attack, leveraging flow-level and packet-level statistics of encrypted wireless traffic from an FL client to infer its deep learning model architecture. We name it FLARE, a fingerprinting framework based on FL Architecture REconnaissance. Evaluation across various CNN and RNN variants-including pre-trained and custom models trained over IEEE 802.11 Wi-Fi-shows that FLARE achieves over 98% F1-score in closed-world and up to 91% in open-world scenarios. These results reveal that CNN and RNN models leak distinguishable traffic patterns, enabling architecture fingerprinting even under realistic FL settings with hardware, software, and data heterogeneity. To our knowledge, this is the first work to fingerprint FL model architectures by sniffing encrypted wireless traffic, exposing a critical side-channel vulnerability in current FL systems.
+ oai:arXiv.org:2512.10296v1
+ cs.CRcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yuki Oda, Yuta Ono, Hiroshi Nakamura, Hideki Takase
-
-
- Benchmarking Real-World Medical Image Classification with Noisy Labels: Challenges, Practice, and Outlook
- https://arxiv.org/abs/2512.09315
- arXiv:2512.09315v1 Announce Type: new
-Abstract: Learning from noisy labels remains a major challenge in medical image analysis, where annotation demands expert knowledge and substantial inter-observer variability often leads to inconsistent or erroneous labels. Despite extensive research on learning with noisy labels (LNL), the robustness of existing methods in medical imaging has not been systematically assessed. To address this gap, we introduce LNMBench, a comprehensive benchmark for Label Noise in Medical imaging. LNMBench encompasses \textbf{10} representative methods evaluated across 7 datasets, 6 imaging modalities, and 3 noise patterns, establishing a unified and reproducible framework for robustness evaluation under realistic conditions. Comprehensive experiments reveal that the performance of existing LNL methods degrades substantially under high and real-world noise, highlighting the persistent challenges of class imbalance and domain variability in medical data. Motivated by these findings, we further propose a simple yet effective improvement to enhance model robustness under such conditions. The LNMBench codebase is publicly released to facilitate standardized evaluation, promote reproducible research, and provide practical insights for developing noise-resilient algorithms in both research and real-world medical applications.The codebase is publicly available on https://github.com/myyy777/LNMBench.
- oai:arXiv.org:2512.09315v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yuan Ma, Junlin Hou, Chao Zhang, Yukun Zhou, Zongyuan Ge, Haoran Xie, Lie Ju
+ Md Nahid Hasan Shuvo, Moinul Hossain, Anik Mallik, Jeffrey Twigg, Fikadu Dagefu
- Simultaneous Genetic Evolution of Neural Networks for Optimal SFC Embedding
- https://arxiv.org/abs/2512.09318
- arXiv:2512.09318v1 Announce Type: new
-Abstract: The reliance of organisations on computer networks is enabled by network programmability, which is typically achieved through Service Function Chaining. These chains virtualise network functions, link them, and programmatically embed them on networking infrastructure. Optimal embedding of Service Function Chains is an NP-hard problem, with three sub-problems, chain composition, virtual network function embedding, and link embedding, that have to be optimised simultaneously, rather than sequentially, for optimal results. Genetic Algorithms have been employed for this, but existing approaches either do not optimise all three sub-problems or do not optimise all three sub-problems simultaneously. We propose a Genetic Algorithm-based approach called GENESIS, which evolves three sine-function-activated Neural Networks, and funnels their output to a Gaussian distribution and an A* algorithm to optimise all three sub-problems simultaneously. We evaluate GENESIS on an emulator across 48 different data centre scenarios and compare its performance to two state-of-the-art Genetic Algorithms and one greedy algorithm. GENESIS produces an optimal solution for 100% of the scenarios, whereas the second-best method optimises only 71% of the scenarios. Moreover, GENESIS is the fastest among all Genetic Algorithms, averaging 15.84 minutes, compared to an average of 38.62 minutes for the second-best Genetic Algorithm.
- oai:arXiv.org:2512.09318v1
- cs.NE
+ Investigating The Functional Roles of Attention Heads in Vision Language Models: Evidence for Reasoning Modules
+ https://arxiv.org/abs/2512.10300
+ arXiv:2512.10300v1 Announce Type: new
+Abstract: Despite excelling on multimodal benchmarks, vision-language models (VLMs) largely remain a black box. In this paper, we propose a novel interpretability framework to systematically analyze the internal mechanisms of VLMs, focusing on the functional roles of attention heads in multimodal reasoning. To this end, we introduce CogVision, a dataset that decomposes complex multimodal questions into step-by-step subquestions designed to simulate human reasoning through a chain-of-thought paradigm, with each subquestion associated with specific receptive or cognitive functions such as high-level visual reception and inference. Using a probing-based methodology, we identify attention heads that specialize in these functions and characterize them as functional heads. Our analysis across diverse VLM families reveals that these functional heads are universally sparse, vary in number and distribution across functions, and mediate interactions and hierarchical organization. Furthermore, intervention experiments demonstrate their critical role in multimodal reasoning: removing functional heads leads to performance degradation, while emphasizing them enhances accuracy. These findings provide new insights into the cognitive organization of VLMs and suggest promising directions for designing models with more human-aligned perceptual and reasoning abilities.
+ oai:arXiv.org:2512.10300v1cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Theviyanthan Krishnamohan, Lauritz Thamsen, Paul Harvey
+ http://creativecommons.org/licenses/by/4.0/
+ Yanbei Jiang, Xueqi Ma, Shu Liu, Sarah Monazam Erfani, Tongliang Liu, James Bailey, Jey Han Lau, Krista A. Ehinger
- Efficiency-Aware Computational Intelligence for Resource-Constrained Manufacturing Toward Edge-Ready Deployment
- https://arxiv.org/abs/2512.09319
- arXiv:2512.09319v1 Announce Type: new
-Abstract: Industrial cyber physical systems operate under heterogeneous sensing, stochastic dynamics, and shifting process conditions, producing data that are often incomplete, unlabeled, imbalanced, and domain shifted. High-fidelity datasets remain costly, confidential, and slow to obtain, while edge devices face strict limits on latency, bandwidth, and energy. These factors restrict the practicality of centralized deep learning, hinder the development of reliable digital twins, and increase the risk of error escape in safety-critical applications. Motivated by these challenges, this dissertation develops an efficiency grounded computational framework that enables data lean, physics-aware, and deployment ready intelligence for modern manufacturing environments. The research advances methods that collectively address core bottlenecks across multimodal and multiscale industrial scenarios. Generative strategies mitigate data scarcity and imbalance, while semi-supervised learning integrates unlabeled information to reduce annotation and simulation demands. Physics-informed representation learning strengthens interpretability and improves condition monitoring under small-data regimes. Spatially aware graph-based surrogate modeling provides efficient approximation of complex processes, and an edge cloud collaborative compression scheme supports real-time signal analytics under resource constraints. The dissertation also extends visual understanding through zero-shot vision language reasoning augmented by domain specific retrieval, enabling generalizable assessment in previously unseen scenarios. Together, these developments establish a unified paradigm of data efficient and resource aware intelligence that bridges laboratory learning with industrial deployment, supporting reliable decision-making across diverse manufacturing systems.
- oai:arXiv.org:2512.09319v1
- cs.CE
+ Trustworthy Orchestration Artificial Intelligence by the Ten Criteria with Control-Plane Governance
+ https://arxiv.org/abs/2512.10304
+ arXiv:2512.10304v1 Announce Type: new
+Abstract: As Artificial Intelligence (AI) systems increasingly assume consequential decision-making roles, a widening gap has emerged between technical capabilities and institutional accountability. Ethical guidance alone is insufficient to counter this challenge; it demands architectures that embed governance into the execution fabric of the ecosystem. This paper presents the Ten Criteria for Trustworthy Orchestration AI, a comprehensive assurance framework that integrates human input, semantic coherence, audit and provenance integrity into a unified Control-Panel architecture. Unlike conventional agentic AI initiatives that primarily focus on AI-to-AI coordination, the proposed framework provides an umbrella of governance to the entire AI components, their consumers and human participants. By taking aspiration from international standards and Australia's National Framework for AI Assurance initiative, this work demonstrates that trustworthiness can be systematically incorporated (by engineering) into AI systems, ensuring the execution fabric remains verifiable, transparent, reproducible and under meaningful human control.
+ oai:arXiv.org:2512.10304v1cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.ET
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qianyu Zhou
+ http://creativecommons.org/licenses/by/4.0/
+ Byeong Ho Kang, Wenli Yang, Muhammad Bilal Amin
- ObliInjection: Order-Oblivious Prompt Injection Attack to LLM Agents with Multi-source Data
- https://arxiv.org/abs/2512.09321
- arXiv:2512.09321v1 Announce Type: new
-Abstract: Prompt injection attacks aim to contaminate the input data of an LLM to mislead it into completing an attacker-chosen task instead of the intended task. In many applications and agents, the input data originates from multiple sources, with each source contributing a segment of the overall input. In these multi-source scenarios, an attacker may control only a subset of the sources and contaminate the corresponding segments, but typically does not know the order in which the segments are arranged within the input. Existing prompt injection attacks either assume that the entire input data comes from a single source under the attacker's control or ignore the uncertainty in the ordering of segments from different sources. As a result, their success is limited in domains involving multi-source data.
- In this work, we propose ObliInjection, the first prompt injection attack targeting LLM applications and agents with multi-source input data. ObliInjection introduces two key technical innovations: the order-oblivious loss, which quantifies the likelihood that the LLM will complete the attacker-chosen task regardless of how the clean and contaminated segments are ordered; and the orderGCG algorithm, which is tailored to minimize the order-oblivious loss and optimize the contaminated segments. Comprehensive experiments across three datasets spanning diverse application domains and twelve LLMs demonstrate that ObliInjection is highly effective, even when only one out of 6-100 segments in the input data is contaminated.
- oai:arXiv.org:2512.09321v1
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ InfoCom: Kilobyte-Scale Communication-Efficient Collaborative Perception with Information Bottleneck
+ https://arxiv.org/abs/2512.10305
+ arXiv:2512.10305v1 Announce Type: new
+Abstract: Precise environmental perception is critical for the reliability of autonomous driving systems. While collaborative perception mitigates the limitations of single-agent perception through information sharing, it encounters a fundamental communication-performance trade-off. Existing communication-efficient approaches typically assume MB-level data transmission per collaboration, which may fail due to practical network constraints. To address these issues, we propose InfoCom, an information-aware framework establishing the pioneering theoretical foundation for communication-efficient collaborative perception via extended Information Bottleneck principles. Departing from mainstream feature manipulation, InfoCom introduces a novel information purification paradigm that theoretically optimizes the extraction of minimal sufficient task-critical information under Information Bottleneck constraints. Its core innovations include: i) An Information-Aware Encoding condensing features into minimal messages while preserving perception-relevant information; ii) A Sparse Mask Generation identifying spatial cues with negligible communication cost; and iii) A Multi-Scale Decoding that progressively recovers perceptual information through mask-guided mechanisms rather than simple feature reconstruction. Comprehensive experiments across multiple datasets demonstrate that InfoCom achieves near-lossless perception while reducing communication overhead from megabyte to kilobyte-scale, representing 440-fold and 90-fold reductions per agent compared to Where2comm and ERMVP, respectively.
+ oai:arXiv.org:2512.10305v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ruiqi Wang, Yuqi Jia, Neil Zhenqiang Gong
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Quanmin Wei, Penglin Dai, Wei Li, Bingyi Liu, Xiao Wu
- Self-Supervised Learning with Gaussian Processes
- https://arxiv.org/abs/2512.09322
- arXiv:2512.09322v1 Announce Type: new
-Abstract: Self supervised learning (SSL) is a machine learning paradigm where models learn to understand the underlying structure of data without explicit supervision from labeled samples. The acquired representations from SSL have demonstrated useful for many downstream tasks including clustering, and linear classification, etc. To ensure smoothness of the representation space, most SSL methods rely on the ability to generate pairs of observations that are similar to a given instance. However, generating these pairs may be challenging for many types of data. Moreover, these methods lack consideration of uncertainty quantification and can perform poorly in out-of-sample prediction settings. To address these limitations, we propose Gaussian process self supervised learning (GPSSL), a novel approach that utilizes Gaussian processes (GP) models on representation learning. GP priors are imposed on the representations, and we obtain a generalized Bayesian posterior minimizing a loss function that encourages informative representations. The covariance function inherent in GPs naturally pulls representations of similar units together, serving as an alternative to using explicitly defined positive samples. We show that GPSSL is closely related to both kernel PCA and VICReg, a popular neural network-based SSL method, but unlike both allows for posterior uncertainties that can be propagated to downstream tasks. Experiments on various datasets, considering classification and regression tasks, demonstrate that GPSSL outperforms traditional methods in terms of accuracy, uncertainty quantification, and error control.
- oai:arXiv.org:2512.09322v1
+ An Interpretable AI Tool for SAVR vs TAVR in Low to Intermediate Risk Patients with Severe Aortic Stenosis
+ https://arxiv.org/abs/2512.10308
+ arXiv:2512.10308v1 Announce Type: new
+Abstract: Background. Treatment selection for low to intermediate risk patients with severe aortic stenosis between surgical (SAVR) and transcatheter (TAVR) aortic valve replacement remains variable in clinical practice, driven by patient heterogeneity and institutional preferences. While existing models predict postprocedural risk, there is a lack of interpretable, individualized treatment recommendations that directly optimize long-term outcomes.
+ Methods. We introduce an interpretable prescriptive framework that integrates prognostic matching, counterfactual outcome modeling, and an Optimal Policy Tree (OPT) to recommend the treatment minimizing expected 5-year mortality. Using data from Hartford Hospital and St. Vincent's Hospital, we emulate randomization via prognostic matching and sample weighting and estimate counterfactual mortality under both SAVR and TAVR. The policy model, trained on these counterfactual predictions, partitions patients into clinically coherent subgroups and prescribes the treatment associated with lower estimated risk.
+ Findings. If the OPT prescriptions are applied, counterfactual evaluation showed an estimated reduction in 5-year mortality of 20.3\% in Hartford and 13.8\% in St. Vincent's relative to real-life prescriptions, showing promising generalizability to unseen data from a different institution. The learned decision boundaries aligned with real-world outcomes and clinical observations.
+ Interpretation. Our interpretable prescriptive framework is, to the best of our knowledge, the first to provide transparent, data-driven recommendations for TAVR versus SAVR that improve estimated long-term outcomes both in an internal and external cohort, while remaining clinically grounded and contributing toward a more systematic and evidence-based approach to precision medicine in structural heart disease.
+ oai:arXiv.org:2512.10308v1cs.LG
- stat.ME
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Yunshan Duan, Sinead Williamson
-
-
- Analysis of Frequency and Voltage Strength in Power Electronics-Dominated Power Systems Based on Eigen-subsystems
- https://arxiv.org/abs/2512.09323
- arXiv:2512.09323v1 Announce Type: new
-Abstract: The large-scale integration of inverter-based resources (IBRs) has deteriorated the frequency/voltage (F/V) responses of power systems, leading to a higher risk of instability. Consequently, evaluating the F/V strength has become an important task in power electronics (PE)-dominated power systems. Existing methods typically examine F/V strength separately, employing fundamentally different metrics, such as inertia (focusing on device dynamics) and short-circuit ratio (SCR, addressing network characteristics). These fragmented approaches have resulted in a lack of comprehensive understanding of the overall system strength, potentially overlooking critical aspects. To address this problem, this paper proposes a unified framework for analyzing F/V strength. First, a unified modeling of F/V regulations is introduced. Then, based on modal decoupling, the power systems are decomposed into several eigen-subsystems, where the F/V responses are both decomposed into common-mode (CM) and differential-mode (DM) components, namely, CM-F, DM-F, CM-V, and DM-V. The CM-F and CM-V represent the collective response of all devices to external active or reactive power disturbances, independent of the power network characteristics. In contrast, the DM-F and DM-V capture the redistribution of disturbance power within the system, which is strongly influenced by the network topology and the locations of devices. Notably, traditional strength analysis generally ignores the CM-V (global voltage response), which, as discovered in this paper, may also become unstable in PE-dominated power systems. Based on the proposed framework, new metrics are proposed to evaluate the strength of each modal component. Finally, the effectiveness of the proposed approach is validated through simulations.
- oai:arXiv.org:2512.09323v1
- eess.SY
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Huisheng Gao, Linbin Huang, Huanhai Xin, Zhiyi Li, Ping Ju
+ http://creativecommons.org/licenses/by/4.0/
+ Vasiliki Stoumpou, Maciej Tysarowski, Talhat Azemi, Jawad Haider, Howard L. Haronian, Robert C. Hagberg, Dimitris Bertsimas
- UniLS: End-to-End Audio-Driven Avatars for Unified Listening and Speaking
- https://arxiv.org/abs/2512.09327
- arXiv:2512.09327v1 Announce Type: new
-Abstract: Generating lifelike conversational avatars requires modeling not just isolated speakers, but the dynamic, reciprocal interaction of speaking and listening. However, modeling the listener is exceptionally challenging: direct audio-driven training fails, producing stiff, static listening motions. This failure stems from a fundamental imbalance: the speaker's motion is strongly driven by speech audio, while the listener's motion primarily follows an internal motion prior and is only loosely guided by external speech. This challenge has led most methods to focus on speak-only generation. The only prior attempt at joint generation relies on extra speaker's motion to produce the listener. This design is not end-to-end, thereby hindering the real-time applicability. To address this limitation, we present UniLS, the first end-to-end framework for generating unified speak-listen expressions, driven by only dual-track audio. Our method introduces a novel two-stage training paradigm. Stage 1 first learns the internal motion prior by training an audio-free autoregressive generator, capturing the spontaneous dynamics of natural facial motion. Stage 2 then introduces the dual-track audio, fine-tuning the generator to modulate the learned motion prior based on external speech cues. Extensive evaluations show UniLS achieves state-of-the-art speaking accuracy. More importantly, it delivers up to 44.1\% improvement in listening metrics, generating significantly more diverse and natural listening expressions. This effectively mitigates the stiffness problem and provides a practical, high-fidelity audio-driven solution for interactive digital humans.
- oai:arXiv.org:2512.09327v1
+ Efficient-VLN: A Training-Efficient Vision-Language Navigation Model
+ https://arxiv.org/abs/2512.10310
+ arXiv:2512.10310v1 Announce Type: new
+Abstract: Multimodal large language models (MLLMs) have shown promising potential in Vision-Language Navigation (VLN). However, their practical development is severely hindered by the substantial training overhead. We recognize two key issues that contribute to the overhead: (1) the quadratic computational burden from processing long-horizon historical observations as massive sequences of tokens, and (2) the exploration-efficiency trade-off in DAgger, i.e., a data aggregation process of collecting agent-explored trajectories. While more exploration yields effective error-recovery trajectories for handling test-time distribution shifts, it comes at the cost of longer trajectory lengths for both training and inference. To address these challenges, we propose Efficient-VLN, a training-efficient VLN model. Specifically, to mitigate the token processing burden, we design two efficient memory mechanisms: a progressive memory that dynamically allocates more tokens to recent observations, and a learnable recursive memory that utilizes the key-value cache of learnable tokens as the memory state. Moreover, we introduce a dynamic mixed policy to balance the exploration-efficiency trade-off. Extensive experiments show that Efficient-VLN achieves state-of-the-art performance on R2R-CE (64.2% SR) and RxR-CE (67.0% SR). Critically, our model consumes merely 282 H800 GPU hours, demonstrating a dramatic reduction in training overhead compared to state-of-the-art methods.
+ oai:arXiv.org:2512.10310v1cs.CV
- cs.SD
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Xuangeng Chu, Ruicong Liu, Yifei Huang, Yun Liu, Yichen Peng, Bo Zheng
+ Duo Zheng, Shijia Huang, Yanyang Li, Liwei Wang
- Self Distillation Fine-Tuning of Protein Language Models Improves Versatility in Protein Design
- https://arxiv.org/abs/2512.09329
- arXiv:2512.09329v1 Announce Type: new
-Abstract: Supervised fine-tuning (SFT) is a standard approach for adapting large language models to specialized domains, yet its application to protein sequence modeling and protein language models (PLMs) remains ad hoc. This is in part because high-quality annotated data are far more difficult to obtain for proteins than for natural language. We present a simple and general recipe for fast SFT of PLMs, designed to improve the fidelity, reliability, and novelty of generated protein sequences. Unlike existing approaches that require costly precompiled experimental datasets for SFT, our method leverages the PLM itself, integrating a lightweight curation pipeline with domain-specific filters to construct high-quality training data. These filters can independently refine a PLM's output and identify candidates for in vitro evaluation; when combined with SFT, they enable PLMs to generate more stable and functional enzymes, while expanding exploration into protein sequence space beyond natural variants. Although our approach is agnostic to both the choice of protein language model (PLM) and the protein system, we demonstrate its effectiveness with a genome-scale PLM (GenSLM) applied to the tryptophan synthase enzyme family. The supervised fine-tuned model generates sequences that are not only more novel but also display improved characteristics across both targeted design constraints and emergent protein property measures.
- oai:arXiv.org:2512.09329v1
- cs.LG
- cs.CE
- Thu, 11 Dec 2025 00:00:00 -0500
+ High-Dimensional Data Processing: Benchmarking Machine Learning and Deep Learning Architectures in Local and Distributed Environments
+ https://arxiv.org/abs/2512.10312
+ arXiv:2512.10312v1 Announce Type: new
+Abstract: This document reports the sequence of practices and methodologies implemented during the Big Data course. It details the workflow beginning with the processing of the Epsilon dataset through group and individual strategies, followed by text analysis and classification with RestMex and movie feature analysis with IMDb. Finally, it describes the technical implementation of a distributed computing cluster with Apache Spark on Linux using Scala.
+ oai:arXiv.org:2512.10312v1
+ cs.DC
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Amin Tavakoli, Raswanth Murugan, Ozan Gokdemir, Arvind Ramanathan, Frances Arnold, Anima Anandkumar
+ Julian Rodriguez, Piotr Lopez, Emiliano Lerma, Rafael Medrano, Jacobo Hernandez
- Passing the Baton: High Throughput Distributed Disk-Based Vector Search with BatANN
- https://arxiv.org/abs/2512.09331
- arXiv:2512.09331v1 Announce Type: new
-Abstract: Vector search underpins modern information-retrieval systems, including retrieval-augmented generation (RAG) pipelines and search engines over unstructured text and images. As datasets scale to billions of vectors, disk-based vector search has emerged as a practical solution. However, looking to the future, we need to anticipate datasets too large for any single server. We present BatANN, a distributed disk-based approximate nearest neighbor (ANN) system that retains the logarithmic search efficiency of a single global graph while achieving near-linear throughput scaling in the number of servers. Our core innovation is that when accessing a neighborhood which is stored on another machine, we send the full state of the query to the other machine to continue executing there for improved locality. On 100M- and 1B-point datasets at 0.95 recall using 10 servers, BatANN achieves 6.21-6.49x and 2.5-5.10x the throughput of the scatter-gather baseline, respectively, while maintaining mean latency below 6 ms. Moreover, we get these results on standard TCP. To our knowledge, BatANN is the first open-source distributed disk-based vector search system to operate over a single global graph.
- oai:arXiv.org:2512.09331v1
- cs.DC
- cs.IR
- Thu, 11 Dec 2025 00:00:00 -0500
+ EpiPlanAgent: Agentic Automated Epidemic Response Planning
+ https://arxiv.org/abs/2512.10313
+ arXiv:2512.10313v1 Announce Type: new
+Abstract: Epidemic response planning is essential yet traditionally reliant on labor-intensive manual methods. This study aimed to design and evaluate EpiPlanAgent, an agent-based system using large language models (LLMs) to automate the generation and validation of digital emergency response plans. The multi-agent framework integrated task decomposition, knowledge grounding, and simulation modules. Public health professionals tested the system using real-world outbreak scenarios in a controlled evaluation. Results demonstrated that EpiPlanAgent significantly improved the completeness and guideline alignment of plans while drastically reducing development time compared to manual workflows. Expert evaluation confirmed high consistency between AI-generated and human-authored content. User feedback indicated strong perceived utility. In conclusion, EpiPlanAgent provides an effective, scalable solution for intelligent epidemic response planning, demonstrating the potential of agentic AI to transform public health preparedness.
+ oai:arXiv.org:2512.10313v1
+ cs.AI
+ cs.CY
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Nam Anh Dang (Cornell University), Ben Landrum (Cornell University), Ken Birman (Cornell University)
+ Kangkun Mao, Fang Xu, Jinru Ding, Yidong Jiang, Yujun Yao, Yirong Chen, Junming Liu, Xiaoqin Wu, Qian Wu, Xiaoyan Huang, Jie Xu
- Improved Physics-Driven Neural Network to Solve Inverse Scattering Problems
- https://arxiv.org/abs/2512.09333
- arXiv:2512.09333v1 Announce Type: new
-Abstract: This paper presents an improved physics-driven neural network (IPDNN) framework for solving electromagnetic inverse scattering problems (ISPs). A new Gaussian-localized oscillation-suppressing window (GLOW) activation function is introduced to stabilize convergence and enable a lightweight yet accurate network architecture. A dynamic scatter subregion identification strategy is further developed to adaptively refine the computational domain, preventing missed detections and reducing computational cost. Moreover, transfer learning is incorporated to extend the solver's applicability to practical scenarios, integrating the physical interpretability of iterative algorithms with the real-time inference capability of neural networks. Numerical simulations and experimental results demonstrate that the proposed solver achieves superior reconstruction accuracy, robustness, and efficiency compared with existing state-of-the-art methods.
- oai:arXiv.org:2512.09333v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ DualProtoSeg: Simple and Efficient Design with Text- and Image-Guided Prototype Learning for Weakly Supervised Histopathology Image Segmentation
+ https://arxiv.org/abs/2512.10314
+ arXiv:2512.10314v1 Announce Type: new
+Abstract: Weakly supervised semantic segmentation (WSSS) in histopathology seeks to reduce annotation cost by learning from image-level labels, yet it remains limited by inter-class homogeneity, intra-class heterogeneity, and the region-shrinkage effect of CAM-based supervision. We propose a simple and effective prototype-driven framework that leverages vision-language alignment to improve region discovery under weak supervision. Our method integrates CoOp-style learnable prompt tuning to generate text-based prototypes and combines them with learnable image prototypes, forming a dual-modal prototype bank that captures both semantic and appearance cues. To address oversmoothing in ViT representations, we incorporate a multi-scale pyramid module that enhances spatial precision and improves localization quality. Experiments on the BCSS-WSSS benchmark show that our approach surpasses existing state-of-the-art methods, and detailed analyses demonstrate the benefits of text description diversity, context length, and the complementary behavior of text and image prototypes. These results highlight the effectiveness of jointly leveraging textual semantics and visual prototype learning for WSSS in digital pathology.
+ oai:arXiv.org:2512.10314v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yutong Du, Zicheng Liu, Bo Wu, Jingwei Kou, Hang Li, Changyou Li, Yali Zong, Bo Qi
+ Anh M. Vu (equal contribution), Khang P. Le (equal contribution), Trang T. K. Vo (equal contribution), Ha Thach, Huy Hung Nguyen, David Yang, Han H. Huynh, Quynh Nguyen, Tuan M. Pham, Tuan-Anh Le, Minh H. N. Le, Thanh-Huy Nguyen, Akash Awasthi, Chandra Mohan, Zhu Han, Hien Van Nguyen
- Relightable and Dynamic Gaussian Avatar Reconstruction from Monocular Video
- https://arxiv.org/abs/2512.09335
- arXiv:2512.09335v1 Announce Type: new
-Abstract: Modeling relightable and animatable human avatars from monocular video is a long-standing and challenging task. Recently, Neural Radiance Field (NeRF) and 3D Gaussian Splatting (3DGS) methods have been employed to reconstruct the avatars. However, they often produce unsatisfactory photo-realistic results because of insufficient geometrical details related to body motion, such as clothing wrinkles. In this paper, we propose a 3DGS-based human avatar modeling framework, termed as Relightable and Dynamic Gaussian Avatar (RnD-Avatar), that presents accurate pose-variant deformation for high-fidelity geometrical details. To achieve this, we introduce dynamic skinning weights that define the human avatar's articulation based on pose while also learning additional deformations induced by body motion. We also introduce a novel regularization to capture fine geometric details under sparse visual cues. Furthermore, we present a new multi-view dataset with varied lighting conditions to evaluate relight. Our framework enables realistic rendering of novel poses and views while supporting photo-realistic lighting effects under arbitrary lighting conditions. Our method achieves state-of-the-art performance in novel view synthesis, novel pose rendering, and relighting.
- oai:arXiv.org:2512.09335v1
+ ConStruct: Structural Distillation of Foundation Models for Prototype-Based Weakly Supervised Histopathology Segmentation
+ https://arxiv.org/abs/2512.10316
+ arXiv:2512.10316v1 Announce Type: new
+Abstract: Weakly supervised semantic segmentation (WSSS) in histopathology relies heavily on classification backbones, yet these models often localize only the most discriminative regions and struggle to capture the full spatial extent of tissue structures. Vision-language models such as CONCH offer rich semantic alignment and morphology-aware representations, while modern segmentation backbones like SegFormer preserve fine-grained spatial cues. However, combining these complementary strengths remains challenging, especially under weak supervision and without dense annotations. We propose a prototype learning framework for WSSS in histopathological images that integrates morphology-aware representations from CONCH, multi-scale structural cues from SegFormer, and text-guided semantic alignment to produce prototypes that are simultaneously semantically discriminative and spatially coherent. To effectively leverage these heterogeneous sources, we introduce text-guided prototype initialization that incorporates pathology descriptions to generate more complete and semantically accurate pseudo-masks. A structural distillation mechanism transfers spatial knowledge from SegFormer to preserve fine-grained morphological patterns and local tissue boundaries during prototype learning. Our approach produces high-quality pseudo masks without pixel-level annotations, improves localization completeness, and enhances semantic consistency across tissue types. Experiments on BCSS-WSSS datasets demonstrate that our prototype learning framework outperforms existing WSSS methods while remaining computationally efficient through frozen foundation model backbones and lightweight trainable adapters.
+ oai:arXiv.org:2512.10316v1cs.CV
- cs.MM
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- 10.1145/3746027.3754851
- Seonghwa Choi, Moonkyeong Choi, Mingyu Jang, Jaekyung Kim, Jianfei Cai, Wen-Huang Cheng, Sanghoon Lee
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Khang Le (equal contribution), Ha Thach (equal contribution), Anh M. Vu (equal contribution), Trang T. K. Vo, Han H. Huynh, David Yang, Minh H. N. Le, Thanh-Huy Nguyen, Akash Awasthi, Chandra Mohan, Zhu Han, Hien Van Nguyen
- An Efficient Solver to Helmholtz Equations by Recontruction Discontinuous Approximation
- https://arxiv.org/abs/2512.09338
- arXiv:2512.09338v1 Announce Type: new
-Abstract: In this paper, an efficient solver for the Helmholtz equation using a noval approximation space is developed. The ingradients of the method include the approximation space recently proposed, a discontinuous Galerkin scheme extensively used, and a linear system solver with a natural preconditioner. Comparing to traditional discontinuous Galerkin methods, we refer to the new method as being more efficient in the following sense. The numerical performance of the new method shows that: 1) much less error can be reached using the same degrees of freedom; 2) the sparse matrix therein has much fewer nonzero entries so that both the storage space and the solution time cost for the iterative solver are reduced; 3) the preconditioner is proved to be optimal with respect to the mesh size in the absorbing case. Such advantage becomes more pronounced as the approximation order increases.
- oai:arXiv.org:2512.09338v1
- math.NA
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Translating Informal Proofs into Formal Proofs Using a Chain of States
+ https://arxiv.org/abs/2512.10317
+ arXiv:2512.10317v1 Announce Type: new
+Abstract: We address the problem of translating informal mathematical proofs expressed in natural language into formal proofs in Lean4 under a constrained computational budget. Our approach is grounded in two key insights. First, informal proofs tend to proceed via a sequence of logical transitions - often implications or equivalences - without explicitly specifying intermediate results or auxiliary lemmas. In contrast, formal systems like Lean require an explicit representation of each proof state and the tactics that connect them. Second, each informal reasoning step can be viewed as an abstract transformation between proof states, but identifying the corresponding formal tactics often requires nontrivial domain knowledge and precise control over proof context. To bridge this gap, we propose a two stage framework. Rather than generating formal tactics directly, we first extract a Chain of States (CoS), a sequence of intermediate formal proof states aligned with the logical structure of the informal argument. We then generate tactics to transition between adjacent states in the CoS, thereby constructing the full formal proof. This intermediate representation significantly reduces the complexity of tactic generation and improves alignment with informal reasoning patterns. We build dedicated datasets and benchmarks for training and evaluation, and introduce an interactive framework to support tactic generation from formal states. Empirical results show that our method substantially outperforms existing baselines, achieving higher proof success rates.
+ oai:arXiv.org:2512.10317v1
+ cs.LO
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shuhai Zhao
+ http://creativecommons.org/licenses/by/4.0/
+ Ziyu Wang, Bowen Yang, Shihao Zhou, Chenyi Li, Yuan Zhang, Bin Dong, Zaiwen Wen
- Visual Categorization Across Minds and Models: Cognitive Analysis of Human Labeling and Neuro-Symbolic Integration
- https://arxiv.org/abs/2512.09340
- arXiv:2512.09340v1 Announce Type: new
-Abstract: Understanding how humans and AI systems interpret ambiguous visual stimuli offers critical insight into the nature of perception, reasoning, and decision-making. This paper examines image labeling performance across human participants and deep neural networks, focusing on low-resolution, perceptually degraded stimuli. Drawing from computational cognitive science, cognitive architectures, and connectionist-symbolic hybrid models, we contrast human strategies such as analogical reasoning, shape-based recognition, and confidence modulation with AI's feature-based processing. Grounded in Marr's tri-level hypothesis, Simon's bounded rationality, and Thagard's frameworks of representation and emotion, we analyze participant responses in relation to Grad-CAM visualizations of model attention. Human behavior is further interpreted through cognitive principles modeled in ACT-R and Soar, revealing layered and heuristic decision strategies under uncertainty. Our findings highlight key parallels and divergences between biological and artificial systems in representation, inference, and confidence calibration. The analysis motivates future neuro-symbolic architectures that unify structured symbolic reasoning with connectionist representations. Such architectures, informed by principles of embodiment, explainability, and cognitive alignment, offer a path toward AI systems that are not only performant but also interpretable and cognitively grounded.
- oai:arXiv.org:2512.09340v1
- cs.AI
- cs.CV
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ L2 Ethernet Switch VLSI Implementation
+ https://arxiv.org/abs/2512.10318
+ arXiv:2512.10318v1 Announce Type: new
+Abstract: Ethernet switches are foundational to the global internet infrastructure. These devices route packets of data on a local area network between source addresses to destination media access control addresses. On the L2 layer of the Open Systems Interconnections model, Ethernet switches take in digitized data from a Media Independent Interface and send it to the corresponding output port for the destination address. Switches need to handle parallel input and output streams from each port, prioritizing throughput, efficiency, and packet integrity. Due to the confidential nature of the networking device industry, there do not exist many open source implementations of switching fabrics. We propose an open source design for an L2 Ethernet switch along with the power, performance, and area tradeoffs for architecture decisions.
+ oai:arXiv.org:2512.10318v1
+ cs.NI
+ cs.SY
+ eess.SY
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-sa/4.0/
- Chethana Prasad Kabgere
+ http://creativecommons.org/licenses/by/4.0/
+ Aniruddh Mishra, Benjamin Oommen, Jimmy Liang
- Development and Testing for Perception Based Autonomous Landing of a Long-Range QuadPlane
- https://arxiv.org/abs/2512.09343
- arXiv:2512.09343v1 Announce Type: new
-Abstract: QuadPlanes combine the range efficiency of fixed-wing aircraft with the maneuverability of multi-rotor platforms for long-range autonomous missions. In GPS-denied or cluttered urban environments, perception-based landing is vital for reliable operation. Unlike structured landing zones, real-world sites are unstructured and highly variable, requiring strong generalization capabilities from the perception system. Deep neural networks (DNNs) provide a scalable solution for learning landing site features across diverse visual and environmental conditions. While perception-driven landing has been shown in simulation, real-world deployment introduces significant challenges. Payload and volume constraints limit high-performance edge AI devices like the NVIDIA Jetson Orin Nano, which are crucial for real-time detection and control. Accurate pose estimation during descent is necessary, especially in the absence of GPS, and relies on dependable visual-inertial odometry. Achieving this with limited edge AI resources requires careful optimization of the entire deployment framework. The flight characteristics of large QuadPlanes further complicate the problem. These aircraft exhibit high inertia, reduced thrust vectoring, and slow response times further complicate stable landing maneuvers. This work presents a lightweight QuadPlane system for efficient vision-based autonomous landing and visual-inertial odometry, specifically developed for long-range QuadPlane operations such as aerial monitoring. It describes the hardware platform, sensor configuration, and embedded computing architecture designed to meet demanding real-time, physical constraints. This establishes a foundation for deploying autonomous landing in dynamic, unstructured, GPS-denied environments.
- oai:arXiv.org:2512.09343v1
+ Design of a six wheel suspension and a three-axis linear actuation mechanism for a laser weeding robot
+ https://arxiv.org/abs/2512.10319
+ arXiv:2512.10319v1 Announce Type: new
+Abstract: Mobile robots are increasingly utilized in agriculture to automate labor-intensive tasks such as weeding, sowing, harvesting and soil analysis. Recently, agricultural robots have been developed to detect and remove weeds using mechanical tools or precise herbicide sprays. Mechanical weeding is inefficient over large fields, and herbicides harm the soil ecosystem. Laser weeding with mobile robots has emerged as a sustainable alternative in precision farming. In this paper, we present an autonomous weeding robot that uses controlled exposure to a low energy laser beam for weed removal. The proposed robot is six-wheeled with a novel double four-bar suspension for higher stability. The laser is guided towards the detected weeds by a three-dimensional linear actuation mechanism. Field tests have demonstrated the robot's capability to navigate agricultural terrains effectively by overcoming obstacles up to 15 cm in height. At an optimal speed of 42.5 cm/s, the robot achieves a weed detection rate of 86.2\% and operating time of 87 seconds per meter. The laser actuation mechanism maintains a minimal mean positional error of 1.54 mm, combined with a high hit rate of 97\%, ensuring effective and accurate weed removal. This combination of speed, accuracy, and efficiency highlights the robot's potential for significantly enhancing precision farming practices.
+ oai:arXiv.org:2512.10319v1cs.ROcs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.SY
+ eess.SY
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Ashik E Rasul, Humaira Tasnim, Ji Yu Kim, Young Hyun Lim, Scott Schmitz, Bruce W. Jo, Hyung-Jin Yoon
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Muhammad Usama, Muhammad Ibrahim Khan, Ahmad Hasan, Muhammad Shaaf Nadeem, Khawaja Fahad Iqbal, Jawad Aslam, Mian Ashfaq Ali, Asad Nisar Awan
- Eunomia: A Multicontroller Domain Partitioning Framework in Hierarchical Satellite Network
- https://arxiv.org/abs/2512.09345
- arXiv:2512.09345v1 Announce Type: new
-Abstract: With the rise of mega-satellite constellations, the integration of hierarchical non-terrestrial and terrestrial networks has become a cornerstone of 6G coverage enhancements. In these hierarchical satellite networks, controllers manage satellite switches within their assigned domains. However, the high mobility of LEO satellites and field-of-view (FOV) constraints pose fundamental challenges to efficient domain partitioning. Centralized control approaches face scalability bottlenecks, while distributed architectures with onboard controllers often disregard FOV limitations, leading to excessive signaling overhead. LEO satellites outside a controller's FOV require an average of five additional hops, resulting in a 10.6-fold increase in response time. To address these challenges, we propose Eunomia, a three-step domain-partitioning framework that leverages movement-aware FOV segmentation within a hybrid control plane combining ground stations and MEO satellites. Eunomia reduces control plane latency by constraining domains to FOV-aware regions and ensures single-hop signaling. It further balances traffic load through spectral clustering on a Control Overhead Relationship Graph and optimizes controller assignment via the Kuhn-Munkres algorithm. We implement Eunomia on the Plotinus emulation platform with realistic constellation parameters. Experimental results demonstrate that Eunomia reduces request loss by up to 58.3%, control overhead by up to 50.3\%, and algorithm execution time by 77.7% significantly outperforming current state-of-the-art solutions.
- oai:arXiv.org:2512.09345v1
- cs.NI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Point2Pose: A Generative Framework for 3D Human Pose Estimation with Multi-View Point Cloud Dataset
+ https://arxiv.org/abs/2512.10321
+ arXiv:2512.10321v1 Announce Type: new
+Abstract: We propose a novel generative approach for 3D human pose estimation. 3D human pose estimation poses several key challenges due to the complex geometry of the human body, self-occluding joints, and the requirement for large-scale real-world motion datasets. To address these challenges, we introduce Point2Pose, a framework that effectively models the distribution of human poses conditioned on sequential point cloud and pose history. Specifically, we employ a spatio-temporal point cloud encoder and a pose feature encoder to extract joint-wise features, followed by an attention-based generative regressor. Additionally, we present a large-scale indoor dataset MVPose3D, which contains multiple modalities, including IMU data of non-trivial human motions, dense multi-view point clouds, and RGB images. Experimental results show that the proposed method outperforms the baseline models, demonstrating its superior performance across various datasets.
+ oai:arXiv.org:2512.10321v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Qi Zhang, Kun Qiu, Zhe Chen, Wenjun Zhu, Xiaofan Xu, Ping Du, Yue Gao
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hyunsoo Lee, Daeum Jeon, Hyeokjae Oh
- COVLM-RL: Critical Object-Oriented Reasoning for Autonomous Driving Using VLM-Guided Reinforcement Learning
- https://arxiv.org/abs/2512.09349
- arXiv:2512.09349v1 Announce Type: new
-Abstract: End-to-end autonomous driving frameworks face persistent challenges in generalization, training efficiency, and interpretability. While recent methods leverage Vision-Language Models (VLMs) through supervised learning on large-scale datasets to improve reasoning, they often lack robustness in novel scenarios. Conversely, reinforcement learning (RL)-based approaches enhance adaptability but remain data-inefficient and lack transparent decision-making. % contribution To address these limitations, we propose COVLM-RL, a novel end-to-end driving framework that integrates Critical Object-oriented (CO) reasoning with VLM-guided RL. Specifically, we design a Chain-of-Thought (CoT) prompting strategy that enables the VLM to reason over critical traffic elements and generate high-level semantic decisions, effectively transforming multi-view visual inputs into structured semantic decision priors. These priors reduce the input dimensionality and inject task-relevant knowledge into the RL loop, accelerating training and improving policy interpretability. However, bridging high-level semantic guidance with continuous low-level control remains non-trivial. To this end, we introduce a consistency loss that encourages alignment between the VLM's semantic plans and the RL agent's control outputs, enhancing interpretability and training stability. Experiments conducted in the CARLA simulator demonstrate that COVLM-RL significantly improves the success rate by 30\% in trained driving environments and by 50\% in previously unseen environments, highlighting its strong generalization capability.
- oai:arXiv.org:2512.09349v1
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ User-Feedback-Driven Continual Adaptation for Vision-and-Language Navigation
+ https://arxiv.org/abs/2512.10322
+ arXiv:2512.10322v1 Announce Type: new
+Abstract: Vision-and-Language Navigation (VLN) requires agents to navigate complex environments by following natural-language instructions. General Scene Adaptation for VLN (GSA-VLN) shifts the focus from zero-shot generalization to continual, environment-specific adaptation, narrowing the gap between static benchmarks and real-world deployment. However, current GSA-VLN frameworks exclude user feedback, relying solely on unsupervised adaptation from repeated environmental exposure. In practice, user feedback offers natural and valuable supervision that can significantly enhance adaptation quality. We introduce a user-feedback-driven adaptation framework that extends GSA-VLN by systematically integrating human interactions into continual learning. Our approach converts user feedback-navigation instructions and corrective signals-into high-quality, environment-aligned training data, enabling efficient and realistic adaptation. A memory-bank warm-start mechanism further reuses previously acquired environmental knowledge, mitigating cold-start degradation and ensuring stable redeployment. Experiments on the GSA-R2R benchmark show that our method consistently surpasses strong baselines such as GR-DUET, improving navigation success and path efficiency. The memory-bank warm start stabilizes early navigation and reduces performance drops after updates. Results under both continual and hybrid adaptation settings confirm the robustness and generality of our framework, demonstrating sustained improvement across diverse deployment conditions.
+ oai:arXiv.org:2512.10322v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Lin Li, Yuxin Cai, Jianwu Fang, Jianru Xue, Chen Lv
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Yongqiang Yu, Xuhui Li, Hazza Mahmood, Jinxing Zhou, Haodong Hong, Longtao Jiang, Zhiqiang Xu, Qi Wu, Xiaojun Chang
- TextGuider: Training-Free Guidance for Text Rendering via Attention Alignment
- https://arxiv.org/abs/2512.09350
- arXiv:2512.09350v1 Announce Type: new
-Abstract: Despite recent advances, diffusion-based text-to-image models still struggle with accurate text rendering. Several studies have proposed fine-tuning or training-free refinement methods for accurate text rendering. However, the critical issue of text omission, where the desired text is partially or entirely missing, remains largely overlooked. In this work, we propose TextGuider, a novel training-free method that encourages accurate and complete text appearance by aligning textual content tokens and text regions in the image. Specifically, we analyze attention patterns in MM-DiT models, particularly for text-related tokens intended to be rendered in the image. Leveraging this observation, we apply latent guidance during the early stage of denoising steps based on two loss functions that we introduce. Our method achieves state-of-the-art performance in test-time text rendering, with significant gains in recall and strong results in OCR accuracy and CLIP score.
- oai:arXiv.org:2512.09350v1
+ EchoingPixels: Cross-Modal Adaptive Token Reduction for Efficient Audio-Visual LLMs
+ https://arxiv.org/abs/2512.10324
+ arXiv:2512.10324v1 Announce Type: new
+Abstract: Audio-Visual Large Language Models (AV-LLMs) face prohibitive computational overhead from massive audio and video tokens. Token reduction, while extensively explored for video-only LLMs, is insufficient for the audio-visual domain, as these unimodal methods cannot leverage audio-visual cross-modal synergies. Furthermore, the distinct and dynamic information densities of audio and video render static budgets per modality suboptimal. How to perform token reduction on a joint audio-visual stream thus remains an unaddressed bottleneck. To fill this gap, we introduce EchoingPixels, a framework inspired by the coexistence and interaction of visuals and sound in real-world scenes. The core of our framework is the Cross-Modal Semantic Sieve (CS2), a module enabling early audio-visual interaction. Instead of compressing modalities independently, CS2 co-attends to the joint multimodal stream and reduces tokens from an entire combined pool of audio-visual tokens rather than using fixed budgets per modality. This single-pool approach allows it to adaptively allocate the token budget across both modalities and dynamically identify salient tokens in concert. To ensure this aggressive reduction preserves the vital temporal modeling capability, we co-design a Synchronization-Augmented RoPE (Sync-RoPE) to maintain critical temporal relationships for the sparsely selected tokens. Extensive experiments demonstrate that EchoingPixels achieves performance comparable to strong baselines using only 5-20% of the original tokens, with a 2-3x speedup and memory reduction.
+ oai:arXiv.org:2512.10324v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kanghyun Baek, Sangyub Lee, Jin Young Choi, Jaewoo Song, Daemin Park, Jooyoung Choi, Chaehun Shin, Bohyung Han, Sungroh Yoon
+ Chao Gong, Depeng Wang, Zhipeng Wei, Ya Guo, Huijia Zhu, Jingjing Chen
- Video-QTR: Query-Driven Temporal Reasoning Framework for Lightweight Video Understanding
- https://arxiv.org/abs/2512.09354
- arXiv:2512.09354v1 Announce Type: new
-Abstract: The rapid development of multimodal large-language models (MLLMs) has significantly expanded the scope of visual language reasoning, enabling unified systems to interpret and describe complex visual content. However, applying these models to long-video understanding remains computationally intensive. Dense frame encoding generates excessive visual tokens, leading to high memory consumption, redundant computation, and limited scalability in real-world applications. This inefficiency highlights a key limitation of the traditional process-then-reason paradigm, which analyzes visual streams exhaustively before semantic reasoning. To address this challenge, we introduce Video-QTR (Query-Driven Temporal Reasoning), a lightweight framework that redefines video comprehension as a query-guided reasoning process. Instead of encoding every frame, Video-QTR dynamically allocates perceptual resources based on the semantic intent of the query, creating an adaptive feedback loop between reasoning and perception. Extensive experiments across five benchmarks: MSVD-QA, Activity Net-QA, Movie Chat, and Video MME demonstrate that Video-QTR achieves state-of-the-art performance while reducing input frame consumption by up to 73%. These results confirm that query-driven temporal reasoning provides an efficient and scalable solution for video understanding.
- oai:arXiv.org:2512.09354v1
+ StainNet: A Special Staining Self-Supervised Vision Transformer for Computational Pathology
+ https://arxiv.org/abs/2512.10326
+ arXiv:2512.10326v1 Announce Type: new
+Abstract: Foundation models trained with self-supervised learning (SSL) on large-scale histological images have significantly accelerated the development of computational pathology. These models can serve as backbones for region-of-interest (ROI) image analysis or patch-level feature extractors in whole-slide images (WSIs) based on multiple instance learning (MIL). Existing pathology foundation models (PFMs) are typically pre-trained on Hematoxylin-Eosin (H&E) stained pathology images. However, images with special stains, such as immunohistochemistry, are also frequently used in clinical practice. PFMs pre-trained mainly on H\&E-stained images may be limited in clinical applications involving special stains. To address this issue, we propose StainNet, a specialized foundation model for special stains based on the vision transformer (ViT) architecture. StainNet adopts a self-distillation SSL approach and is trained on over 1.4 million patch images cropping from 20,231 publicly available special staining WSIs in the HISTAI database. To evaluate StainNet, we conduct experiments on an in-house slide-level liver malignancy classification task and two public ROI-level datasets to demonstrate its strong ability. We also perform few-ratio learning and retrieval evaluations, and compare StainNet with recently larger PFMs to further highlight its strengths. We have released the StainNet model weights at: https://huggingface.co/JWonderLand/StainNet.
+ oai:arXiv.org:2512.10326v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xinkui Zhao, Zuxin Wang, Yifan Zhang, Guanjie Cheng, Yueshen Xu, Shuiguang Deng, Chang Liu, Naibo Wang, Jianwei Yin
+ Jiawen Li, Jiali Hu, Xitong Ling, Yongqiang Lv, Yuxuan Chen, Yizhi Wang, Tian Guan, Yifei Liu, Yonghong He
- Branching Strategies Based on Subgraph GNNs: A Study on Theoretical Promise versus Practical Reality
- https://arxiv.org/abs/2512.09355
- arXiv:2512.09355v1 Announce Type: new
-Abstract: Graph Neural Networks (GNNs) have emerged as a promising approach for ``learning to branch'' in Mixed-Integer Linear Programming (MILP). While standard Message-Passing GNNs (MPNNs) are efficient, they theoretically lack the expressive power to fully represent MILP structures. Conversely, higher-order GNNs (like 2-FGNNs) are expressive but computationally prohibitive. In this work, we investigate Subgraph GNNs as a theoretical middle ground. Crucially, while previous work [Chen et al., 2025] demonstrated that GNNs with 3-WL expressive power can approximate Strong Branching, we prove a sharper result: node-anchored Subgraph GNNs whose expressive power is strictly lower than 3-WL [Zhang et al., 2023] are sufficient to approximate Strong Branching scores. However, our extensive empirical evaluation on four benchmark datasets reveals a stark contrast between theory and practice. While node-anchored Subgraph GNNs theoretically offer superior branching decisions, their $O(n)$ complexity overhead results in significant memory bottlenecks and slower solving times than MPNNs and heuristics. Our results indicate that for MILP branching, the computational cost of expressive GNNs currently outweighs their gains in decision quality, suggesting that future research must focus on efficiency-preserving expressivity.
- oai:arXiv.org:2512.09355v1
- cs.LG
- cs.AI
- cs.NA
- math.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Simple Yet Effective Selective Imputation for Incomplete Multi-view Clustering
+ https://arxiv.org/abs/2512.10327
+ arXiv:2512.10327v1 Announce Type: new
+Abstract: Incomplete multi-view data, where different views suffer from missing and unbalanced observations, pose significant challenges for clustering. Existing imputation-based methods attempt to estimate missing views to restore data associations, but indiscriminate imputation often introduces noise and bias, especially when the available information is insufficient. Imputation-free methods avoid this risk by relying solely on observed data, but struggle under severe incompleteness due to the lack of cross-view complementarity. To address this issue, we propose Informativeness-based Selective imputation Multi-View Clustering (ISMVC). Our method evaluates the imputation-relevant informativeness of each missing position based on intra-view similarity and cross-view consistency, and selectively imputes only when sufficient support is available. Furthermore, we integrate this selection with a variational autoencoder equipped with a mixture-of-Gaussians prior to learn clustering-friendly latent representations. By performing distribution-level imputation, ISMVC not only stabilizes the aggregation of posterior distributions but also explicitly models imputation uncertainty, enabling robust fusion and preventing overconfident reconstructions. Compared with existing cautious imputation strategies that depend on training dynamics or model feedback, our method is lightweight, data-driven, and model-agnostic. It can be readily integrated into existing IMC models as a plug-in module. Extensive experiments on multiple benchmark datasets under a more realistic and challenging unbalanced missing scenario demonstrate that our method outperforms both imputation-based and imputation-free approaches.
+ oai:arXiv.org:2512.10327v1
+ cs.CV
+ cs.MM
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Junru Zhou, Yicheng Wang, Pan Li
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Cai Xu, Jinlong Liu, Yilin Zhang, Ziyu Guan, Wei Zhao
- A higher-order three-scale computational method for efficient nonlinear thermo-mechanical coupling simulation of heterogeneous structures with multiple spatial scales
- https://arxiv.org/abs/2512.09357
- arXiv:2512.09357v1 Announce Type: new
-Abstract: Classical multi-scale methods involving two spatial scales face significant challenges when simulating heterogeneous structures with complicated three-scale spatial configurations. This study proposes an innovative higher-order three-scale (HOTS) computational method, aimed at accurately and efficiently computing the transient nonlinear thermo-mechanical coupling problems of heterogeneous structures with multiple spatial scales. In these heterogeneous structures, temperature-dependent material properties have an important impact on the thermo-mechanical coupling responses, which is the particular interest in this work. At first, the detailed macro-meso-micro correlative model with higher-order correction terms is established by recursively two-scale analysis between macro-meso and meso-micro scales, which enables high-accuracy analysis of temperature-dependent nonlinear thermo-mechanical behaviors of heterogeneous structures with complicated three-scale configurations. The local error analysis mathematically illustrates the well-balanced property of HOTS computational model, endowing it with high computational accuracy. In addition, a two-stage numerical algorithm with off-line and on-line stages is proposed in order to efficiently simulate the nonlinear thermo-mechanical responses of heterogeneous structures with three-level spatial scales and accurately capture their highly oscillatory information at micro-scale. Finally, the high computational efficiency, high numerical accuracy and low computational cost of the presented higher-order three-scale computational approach are substantiated via representative numerical experiments. It can be summarized that this scalable and robust HOTS computational approach offers a reliably numerical tool for nonlinear multiphysics simulation of large-scale heterogeneous structures in real-world applications.
- oai:arXiv.org:2512.09357v1
- math.NA
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Balancing Turnover and Promotion Outcomes: Evidence on the Optimal Hybrid-Work Frequency
+ https://arxiv.org/abs/2512.10328
+ arXiv:2512.10328v1 Announce Type: new
+Abstract: Hybrid work policy, especially return-to-office requirements, remains a globally salient topic as workers, companies, and governments continue to debate and disagree. Despite extensive discussions on the benefits and drawbacks of remote and hybrid arrangements, the optimal number of remote days that jointly considers multiple organizational outcomes has not been empirically established. Focusing on two critical career outcomes -- turnover risk and promotion -- we examine how remote work frequency shapes employee trajectories using large-scale observational activity data from a company with over one million employees. We find that increased remote-work frequency is associated with an initial decrease and then an increase in turnover, while promotion likelihood initially rises and then declines. Accordingly, we identify approximately two remote days per week as an optimal balance -- maximizing promotion, a positive outcome for employees, while minimizing turnover, which is undesirable for organizations and may indicate negative employee experiences. These patterns vary across subgroups defined by gender, role type, and leadership status. Several notable results emerge. First, male employees derive greater promotion benefits from remote work than female employees. Second, support workers (non-core business roles) do not experience promotion gains, and the reduction in turnover at their optimal remote-work frequency is marginal compared with employees in core business roles. Third, organizational leaders face greater challenges in remote settings than individual contributors: their turnover risk increases substantially at higher remote frequencies, and their likelihood of promotion decreases as remote frequency rises. We further show that time-allocation patterns partly explain how remote-work frequency influences these career outcomes.
+ oai:arXiv.org:2512.10328v1
+ cs.SI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hao Dong, Yanqi Wang, Jiale Linghu, Qiang Ma
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Xuan Lu, Yulin Yu
- A Granular Framework for Construction Material Price Forecasting: Econometric and Machine-Learning Approaches
- https://arxiv.org/abs/2512.09360
- arXiv:2512.09360v1 Announce Type: new
-Abstract: The persistent volatility of construction material prices poses significant risks to cost estimation, budgeting, and project delivery, underscoring the urgent need for granular and scalable forecasting methods. This study develops a forecasting framework that leverages the Construction Specifications Institute (CSI) MasterFormat as the target data structure, enabling predictions at the six-digit section level and supporting detailed cost projections across a wide spectrum of building materials. To enhance predictive accuracy, the framework integrates explanatory variables such as raw material prices, commodity indexes, and macroeconomic indicators. Four time-series models, Long Short-Term Memory (LSTM), Autoregressive Integrated Moving Average (ARIMA), Vector Error Correction Model (VECM), and Chronos-Bolt, were evaluated under both baseline configurations (using CSI data only) and extended versions with explanatory variables. Results demonstrate that incorporating explanatory variables significantly improves predictive performance across all models. Among the tested approaches, the LSTM model consistently achieved the highest accuracy, with RMSE values as low as 1.390 and MAPE values of 0.957, representing improvements of up to 59\% over the traditional statistical time-series model, ARIMA. Validation across multiple CSI divisions confirmed the framework's scalability, while Division 06 (Wood, Plastics, and Composites) is presented in detail as a demonstration case. This research offers a robust methodology that enables owners and contractors to improve budgeting practices and achieve more reliable cost estimation at the Definitive level.
- oai:arXiv.org:2512.09360v1
- cs.LG
- econ.EM
- Thu, 11 Dec 2025 00:00:00 -0500
+ Matrix approach to the fractional calculus
+ https://arxiv.org/abs/2512.10330
+ arXiv:2512.10330v1 Announce Type: new
+Abstract: In this paper, we introduce the new construction of fractional derivatives and integrals with respect to a function, based on a matrix approach. We believe that this is a powerful tool in both analytical and numerical calculations. We begin with the differential operator with respect to a function that generates a semigroup. By discretizing this operator, we obtain a matrix approximation. Importantly, this discretization provides not only an approximating operator but also an approximating semigroup. This point motivates our approach, as we then apply Balakrishnan's representations of fractional powers of operators, which are based on semigroups. Using estimates of the semigroup norm and the norm of the difference between the operator and its matrix approximation, we derive the convergence rate for the approximation of the fractional power of operators with the fractional power of correspondings matrix operators. In addition, an explicit formula for calculating an arbitrary power of a two-band matrix is obtained, which is indispensable in the numerical solution of fractional differential and integral equations.
+ oai:arXiv.org:2512.10330v1
+ math.NA
+ cs.NA
+ math.FA
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Boge Lyu, Qianye Yin, Iris Denise Tommelein, Hanyang Liu, Karnamohit Ranka, Karthik Yeluripati, Junzhe Shi
+ V. N. Kolokoltsov, E. L. Shishkina
- StereoWorld: Geometry-Aware Monocular-to-Stereo Video Generation
- https://arxiv.org/abs/2512.09363
- arXiv:2512.09363v1 Announce Type: new
-Abstract: The growing adoption of XR devices has fueled strong demand for high-quality stereo video, yet its production remains costly and artifact-prone. To address this challenge, we present StereoWorld, an end-to-end framework that repurposes a pretrained video generator for high-fidelity monocular-to-stereo video generation. Our framework jointly conditions the model on the monocular video input while explicitly supervising the generation with a geometry-aware regularization to ensure 3D structural fidelity. A spatio-temporal tiling scheme is further integrated to enable efficient, high-resolution synthesis. To enable large-scale training and evaluation, we curate a high-definition stereo video dataset containing over 11M frames aligned to natural human interpupillary distance (IPD). Extensive experiments demonstrate that StereoWorld substantially outperforms prior methods, generating stereo videos with superior visual fidelity and geometric consistency. The project webpage is available at https://ke-xing.github.io/StereoWorld/.
- oai:arXiv.org:2512.09363v1
+ A Conditional Generative Framework for Synthetic Data Augmentation in Segmenting Thin and Elongated Structures in Biological Images
+ https://arxiv.org/abs/2512.10334
+ arXiv:2512.10334v1 Announce Type: new
+Abstract: Thin and elongated filamentous structures, such as microtubules and actin filaments, often play important roles in biological systems. Segmenting these filaments in biological images is a fundamental step for quantitative analysis. Recent advances in deep learning have significantly improved the performance of filament segmentation. However, there is a big challenge in acquiring high quality pixel-level annotated dataset for filamentous structures, as the dense distribution and geometric properties of filaments making manual annotation extremely laborious and time-consuming. To address the data shortage problem, we propose a conditional generative framework based on the Pix2Pix architecture to generate realistic filaments in microscopy images from binary masks. We also propose a filament-aware structural loss to improve the structure similarity when generating synthetic images. Our experiments have demonstrated the effectiveness of our approach and outperformed existing model trained without synthetic data.
+ oai:arXiv.org:2512.10334v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/publicdomain/zero/1.0/
- Ke Xing, Longfei Li, Yuyang Yin, Hanwen Liang, Guixun Luo, Chen Fang, Jue Wang, Konstantinos N. Plataniotis, Xiaojie Jin, Yao Zhao, Yunchao Wei
+ Yi Liu, Yichi Zhang
- ASSIST-3D: Adapted Scene Synthesis for Class-Agnostic 3D Instance Segmentation
- https://arxiv.org/abs/2512.09364
- arXiv:2512.09364v1 Announce Type: new
-Abstract: Class-agnostic 3D instance segmentation tackles the challenging task of segmenting all object instances, including previously unseen ones, without semantic class reliance. Current methods struggle with generalization due to the scarce annotated 3D scene data or noisy 2D segmentations. While synthetic data generation offers a promising solution, existing 3D scene synthesis methods fail to simultaneously satisfy geometry diversity, context complexity, and layout reasonability, each essential for this task. To address these needs, we propose an Adapted 3D Scene Synthesis pipeline for class-agnostic 3D Instance SegmenTation, termed as ASSIST-3D, to synthesize proper data for model generalization enhancement. Specifically, ASSIST-3D features three key innovations, including 1) Heterogeneous Object Selection from extensive 3D CAD asset collections, incorporating randomness in object sampling to maximize geometric and contextual diversity; 2) Scene Layout Generation through LLM-guided spatial reasoning combined with depth-first search for reasonable object placements; and 3) Realistic Point Cloud Construction via multi-view RGB-D image rendering and fusion from the synthetic scenes, closely mimicking real-world sensor data acquisition. Experiments on ScanNetV2, ScanNet++, and S3DIS benchmarks demonstrate that models trained with ASSIST-3D-generated data significantly outperform existing methods. Further comparisons underscore the superiority of our purpose-built pipeline over existing 3D scene synthesis approaches.
- oai:arXiv.org:2512.09364v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Multilingual VLM Training: Adapting an English-Trained VLM to French
+ https://arxiv.org/abs/2512.10336
+ arXiv:2512.10336v1 Announce Type: new
+Abstract: Artificial intelligence has made great progress in recent years, particularly in the development of Vision--Language Models (VLMs) that understand both visual and textual data. However, these advancements remain largely limited to English, reducing their accessibility for non--English speakers. It is essential to extend these capabilities to a broader range of languages. This paper explores the challenges of adapting an English-trained VLM to different languages. To this end, we will explore and compare different methods for their performance and computational cost. We consider a translation-based pipeline, LoRA finetuning, and a two-stage finetuning strategy that separates vision adaptation from language adaptation. To evaluate these methods, we use a combination of standard multimodal benchmarks translated into the target language and manual assessments by native experts. The results reveal that dataset translation remains a major bottleneck in multilingual VLM performance, with data quality limiting the effectiveness of training and evaluation. These findings suggest that future efforts should focus on native-language dataset collection and improved translation strategies.
+ oai:arXiv.org:2512.10336v1
+ cs.CL
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Shengchao Zhou, Jiehong Lin, Jiahui Liu, Shizhen Zhao, Chirui Chang, Xiaojuan Qi
+ Jules Lahmi, Alexis Roger
- KGOT: Unified Knowledge Graph and Optimal Transport Pseudo-Labeling for Molecule-Protein Interaction Prediction
- https://arxiv.org/abs/2512.09365
- arXiv:2512.09365v1 Announce Type: new
-Abstract: Predicting molecule-protein interactions (MPIs) is a fundamental task in computational biology, with crucial applications in drug discovery and molecular function annotation. However, existing MPI models face two major challenges. First, the scarcity of labeled molecule-protein pairs significantly limits model performance, as available datasets capture only a small fraction of biological relevant interactions. Second, most methods rely solely on molecular and protein features, ignoring broader biological context such as genes, metabolic pathways, and functional annotations that could provide essential complementary information. To address these limitations, our framework first aggregates diverse biological datasets, including molecular, protein, genes and pathway-level interactions, and then develop an optimal transport-based approach to generate high-quality pseudo-labels for unlabeled molecule-protein pairs, leveraging the underlying distribution of known interactions to guide label assignment. By treating pseudo-labeling as a mechanism for bridging disparate biological modalities, our approach enables the effective use of heterogeneous data to enhance MPI prediction. We evaluate our framework on multiple MPI datasets including virtual screening tasks and protein retrieval tasks, demonstrating substantial improvements over state-of-the-art methods in prediction accuracies and zero shot ability across unseen interactions. Beyond MPI prediction, our approach provides a new paradigm for leveraging diverse biological data sources to tackle problems traditionally constrained by single- or bi-modal learning, paving the way for future advances in computational biology and drug discovery.
- oai:arXiv.org:2512.09365v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ On the Collapse of Generative Paths: A Criterion and Correction for Diffusion Steering
+ https://arxiv.org/abs/2512.10339
+ arXiv:2512.10339v1 Announce Type: new
+Abstract: Inference-time steering enables pretrained diffusion/flow models to be adapted to new tasks without retraining. A widely used approach is the ratio-of-densities method, which defines a time-indexed target path by reweighting probability-density trajectories from multiple models with positive, or in some cases, negative exponents. This construction, however, harbors a critical and previously unformalized failure mode: Marginal Path Collapse, where intermediate densities become non-normalizable even though endpoints remain valid. Collapse arises systematically when composing heterogeneous models trained on different noise schedules or datasets, including a common setting in molecular design where de-novo, conformer, and pocket-conditioned models must be combined for tasks such as flexible-pose scaffold decoration. We provide a novel and complete solution for the problem. First, we derive a simple path existence criterion that predicts exactly when collapse occurs from noise schedules and exponents alone. Second, we introduce Adaptive path Correction with Exponents (ACE), which extends Feynman-Kac steering to time-varying exponents and guarantees a valid probability path. On a synthetic 2D benchmark and on flexible-pose scaffold decoration, ACE eliminates collapse and enables high-guidance compositional generation, improving distributional and docking metrics over constant-exponent baselines and even specialized task-specific scaffold decoration models. Our work turns ratio-of-densities steering with heterogeneous experts from an unstable heuristic into a reliable tool for controllable generation.
+ oai:arXiv.org:2512.10339v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Jiayu Qin, Zhengquan Luo, Guy Tadmor, Changyou Chen, David Zeevi, Zhiqiang Xu
+ http://creativecommons.org/licenses/by/4.0/
+ Ziseok Lee, Minyeong Hwang, Sanghyun Jo, Wooyeol Lee, Jihyung Ko, Young Bin Park, Jae-Mun Choi, Eunho Yang, Kyungsu Kim
- Procurement Auctions with Predictions: Improved Frugality for Facility Location
- https://arxiv.org/abs/2512.09367
- arXiv:2512.09367v1 Announce Type: new
-Abstract: We study the problem of designing procurement auctions for the strategic uncapacitated facility location problem: a company needs to procure a set of facility locations in order to serve its customers and each facility location is owned by a strategic agent. Each owner has a private cost for providing access to their facility (e.g., renting it or selling it to the company) and needs to be compensated accordingly. The goal is to design truthful auctions that decide which facilities the company should procure and how much to pay the corresponding owners, aiming to minimize the total cost, i.e., the monetary cost paid to the owners and the connection cost suffered by the customers (their distance to the nearest facility). We evaluate the performance of these auctions using the \emph{frugality ratio}.
- We first analyze the performance of the classic VCG auction in this context and prove that its frugality ratio is exactly $3$. We then leverage the learning-augmented framework and design auctions that are augmented with predictions regarding the owners' private costs. Specifically, we propose a family of learning-augmented auctions that achieve significant payment reductions when the predictions are accurate, leading to much better frugality ratios. At the same time, we demonstrate that these auctions remain robust even if the predictions are arbitrarily inaccurate, and maintain reasonable frugality ratios even under adversarially chosen predictions. We finally provide a family of ``error-tolerant'' auctions that maintain improved frugality ratios even if the predictions are only approximately accurate, and we provide upper bounds on their frugality ratio as a function of the prediction error.
- oai:arXiv.org:2512.09367v1
- cs.GT
- Thu, 11 Dec 2025 00:00:00 -0500
+ Zero-shot Adaptation of Stable Diffusion via Plug-in Hierarchical Degradation Representation for Real-World Super-Resolution
+ https://arxiv.org/abs/2512.10340
+ arXiv:2512.10340v1 Announce Type: new
+Abstract: Real-World Image Super-Resolution (Real-ISR) aims to recover high-quality images from low-quality inputs degraded by unknown and complex real-world factors. Real-world scenarios involve diverse and coupled degradations, making it necessary to provide diffusion models with richer and more informative guidance. However, existing methods often assume known degradation severity and rely on CLIP text encoders that cannot capture numerical severity, limiting their generalization ability. To address this, we propose \textbf{HD-CLIP} (\textbf{H}ierarchical \textbf{D}egradation CLIP), which decomposes a low-quality image into a semantic embedding and an ordinal degradation embedding that captures ordered relationships and allows interpolation across unseen levels. Furthermore, we integrated it into diffusion models via classifier-free guidance (CFG) and proposed classifier-free projection guidance (CFPG). HD-CLIP leverages semantic cues to guide generative restoration while using degradation cues to suppress undesired hallucinations and artifacts. As a \textbf{plug-and-play module}, HD-CLIP can be seamlessly integrated into various super-resolution frameworks without training, significantly improving detail fidelity and perceptual realism across diverse real-world datasets.
+ oai:arXiv.org:2512.10340v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Eric Balkanski, Nicholas DeFilippis, Vasilis Gkatzelis, Xizhi Tan
-
-
- CFLight: Enhancing Safety with Traffic Signal Control through Counterfactual Learning
- https://arxiv.org/abs/2512.09368
- arXiv:2512.09368v1 Announce Type: new
-Abstract: Traffic accidents result in millions of injuries and fatalities globally, with a significant number occurring at intersections each year. Traffic Signal Control (TSC) is an effective strategy for enhancing safety at these urban junctures. Despite the growing popularity of Reinforcement Learning (RL) methods in optimizing TSC, these methods often prioritize driving efficiency over safety, thus failing to address the critical balance between these two aspects. Additionally, these methods usually need more interpretability. CounterFactual (CF) learning is a promising approach for various causal analysis fields. In this study, we introduce a novel framework to improve RL for safety aspects in TSC. This framework introduces a novel method based on CF learning to address the question: ``What if, when an unsafe event occurs, we backtrack to perform alternative actions, and will this unsafe event still occur in the subsequent period?'' To answer this question, we propose a new structure causal model to predict the result after executing different actions, and we propose a new CF module that integrates with additional ``X'' modules to promote safe RL practices. Our new algorithm, CFLight, which is derived from this framework, effectively tackles challenging safety events and significantly improves safety at intersections through a near-zero collision control strategy. Through extensive numerical experiments on both real-world and synthetic datasets, we demonstrate that CFLight reduces collisions and improves overall traffic performance compared to conventional RL methods and the recent safe RL model. Moreover, our method represents a generalized and safe framework for RL methods, opening possibilities for applications in other domains. The data and code are available in the github https://github.com/MJLee00/CFLight-Enhancing-Safety-with-Traffic-Signal-Control-through-Counterfactual-Learning.
- oai:arXiv.org:2512.09368v1
- cs.LG
- stat.ME
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Mingyuan Li, Chunyu Liu, Zhuojun Li, Xiao Liu, Guangsheng Yu, Bo Du, Jun Shen, Qiang Wu
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yi-Cheng Liao, Shyang-En Weng, Yu-Syuan Xu, Chi-Wei Hsiao, Wei-Chen Chiu, Ching-Chun Huang
- Are Hypervectors Enough? Single-Call LLM Reasoning over Knowledge Graphs
- https://arxiv.org/abs/2512.09369
- arXiv:2512.09369v1 Announce Type: new
-Abstract: Recent advances in large language models (LLMs) have enabled strong reasoning over both structured and unstructured knowledge. When grounded on knowledge graphs (KGs), however, prevailing pipelines rely on heavy neural encoders to embed and score symbolic paths or on repeated LLM calls to rank candidates, leading to high latency, GPU cost, and opaque decisions that hinder faithful, scalable deployment. We propose PathHD, a lightweight and encoder-free KG reasoning framework that replaces neural path scoring with hyperdimensional computing (HDC) and uses only a single LLM call per query. PathHD encodes relation paths into block-diagonal GHRR hypervectors, ranks candidates with blockwise cosine similarity and Top-K pruning, and then performs a one-shot LLM adjudication to produce the final answer together with cited supporting paths. Technically, PathHD is built on three ingredients: (i) an order-aware, non-commutative binding operator for path composition, (ii) a calibrated similarity for robust hypervector-based retrieval, and (iii) a one-shot adjudication step that preserves interpretability while eliminating per-path LLM scoring. On WebQSP, CWQ, and the GrailQA split, PathHD (i) attains comparable or better Hits@1 than strong neural baselines while using one LLM call per query; (ii) reduces end-to-end latency by $40-60\%$ and GPU memory by $3-5\times$ thanks to encoder-free retrieval; and (iii) delivers faithful, path-grounded rationales that improve error diagnosis and controllability. These results indicate that carefully designed HDC representations provide a practical substrate for efficient KG-LLM reasoning, offering a favorable accuracy-efficiency-interpretability trade-off.
- oai:arXiv.org:2512.09369v1
+ A Privacy-Preserving Cloud Architecture for Distributed Machine Learning at Scale
+ https://arxiv.org/abs/2512.10341
+ arXiv:2512.10341v1 Announce Type: new
+Abstract: Distributed machine learning systems require strong privacy guarantees, verifiable compliance, and scalable deploy- ment across heterogeneous and multi-cloud environments. This work introduces a cloud-native privacy-preserving architecture that integrates federated learning, differential privacy, zero- knowledge compliance proofs, and adaptive governance powered by reinforcement learning. The framework supports secure model training and inference without centralizing sensitive data, while enabling cryptographically verifiable policy enforcement across institutions and cloud platforms. A full prototype deployed across hybrid Kubernetes clusters demonstrates reduced membership- inference risk, consistent enforcement of formal privacy budgets, and stable model performance under differential privacy. Ex- perimental evaluation across multi-institution workloads shows that the architecture maintains utility with minimal overhead while providing continuous, risk-aware governance. The pro- posed framework establishes a practical foundation for deploying trustworthy and compliant distributed machine learning systems at scale.
+ oai:arXiv.org:2512.10341v1cs.LG
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Yezi Liu, William Youngwoo Chung, Hanning Chen, Calvin Yeung, Mohsen Imani
+ 10.17577/IJERTV14IS110277
+ International Journal of Engineering Research & Technology 2025
+ Vinoth Punniyamoorthy, Ashok Gadi Parthi, Mayilsamy Palanigounder, Ravi Kiran Kodali, Bikesh Kumar, Kabilan Kannan
- Optimizing Data Extraction from Materials Science Literature: A Study of Tools Using Large Language Models
- https://arxiv.org/abs/2512.09370
- arXiv:2512.09370v1 Announce Type: new
-Abstract: Large Language Models (LLMs) are increasingly utilized for large-scale extraction and organization of unstructured data owing to their exceptional Natural Language Processing (NLP) capabilities. Empowering materials design, vast amounts of data from experiments and simulations are scattered across numerous scientific publications, but high-quality experimental databases are scarce. This study considers the effectiveness and practicality of five representative AI tools (ChemDataExtractor, BERT-PSIE, ChatExtract, LangChain, and Kimi) to extract bandgaps from 200 randomly selected Materials Science publications in two presentations (arXiv and publisher versions), comparing the results to those obtained by human processing. Although the integrity of data extraction has not met expectations, encouraging results have been achieved in terms of precision and the ability to eliminate irrelevant papers from human consideration. Our analysis highlights both the strengths and limitations of these tools, offering insights into improving future data extraction techniques for enhanced scientific discovery and innovation. In conjunction with recent research, we provide guidance on feasible improvements for future data extraction methodologies, helping to bridge the gap between unstructured scientific data and structured, actionable databases.
- oai:arXiv.org:2512.09370v1
- cs.DL
- cond-mat.mtrl-sci
- Thu, 11 Dec 2025 00:00:00 -0500
+ CoSPlan: Corrective Sequential Planning via Scene Graph Incremental Updates
+ https://arxiv.org/abs/2512.10342
+ arXiv:2512.10342v1 Announce Type: new
+Abstract: Large-scale Vision-Language Models (VLMs) exhibit impressive complex reasoning capabilities but remain largely unexplored in visual sequential planning, i.e., executing multi-step actions towards a goal. Additionally, practical sequential planning often involves non-optimal (erroneous) steps, challenging VLMs to detect and correct such steps. We propose Corrective Sequential Planning Benchmark (CoSPlan) to evaluate VLMs in error-prone, vision-based sequential planning tasks across 4 domains: maze navigation, block rearrangement, image reconstruction,and object reorganization. CoSPlan assesses two key abilities: Error Detection (identifying non-optimal action) and Step Completion (correcting and completing action sequences to reach the goal). Despite using state-of-the-art reasoning techniques such as Chain-of-Thought and Scene Graphs, VLMs (e.g. Intern-VLM and Qwen2) struggle on CoSPlan, failing to leverage contextual cues to reach goals. Addressing this, we propose a novel training-free method, Scene Graph Incremental updates (SGI), which introduces intermediate reasoning steps between the initial and goal states. SGI helps VLMs reason about sequences, yielding an average performance gain of 5.2%. In addition to enhancing reliability in corrective sequential planning, SGI generalizes to traditional planning tasks such as Plan-Bench and VQA.
+ oai:arXiv.org:2512.10342v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Wenkai Ning, Musen Li, Jeffrey R. Reimers, Rika Kobayashi
+ Shresth Grover, Priyank Pathak, Akash Kumar, Vibhav Vineet, Yogesh S Rawat
- Intelligent Resilience Testing for Decision-Making Agents with Dual-Mode Surrogate Adaptation
- https://arxiv.org/abs/2512.09372
- arXiv:2512.09372v1 Announce Type: new
-Abstract: Testing and evaluating decision-making agents remains challenging due to unknown system architectures, limited access to internal states, and the vastness of high-dimensional scenario spaces. Existing testing approaches often rely on surrogate models of decision-making agents to generate large-scale scenario libraries; however, discrepancies between surrogate models and real decision-making agents significantly limit their generalizability and practical applicability. To address this challenge, this paper proposes intelligent resilience testing (IRTest), a unified online adaptive testing framework designed to rapidly adjust to diverse decision-making agents. IRTest initializes with an offline-trained surrogate prediction model and progressively reduces surrogate-to-real gap during testing through two complementary adaptation mechanisms: (i) online neural fine-tuning in data-rich regimes, and (ii) lightweight importance-sampling-based weighting correction in data-limited regimes. A Bayesian optimization strategy, equipped with bias-corrected acquisition functions, guides scenario generation to balance exploration and exploitation in complex testing spaces. Extensive experiments across varying levels of task complexity and system heterogeneity demonstrate that IRTest consistently improves failure-discovery efficiency, testing robustness, and cross-system generalizability. These results highlight the potential of IRTest as a practical solution for scalable, adaptive, and resilient testing of decision-making agents.
- oai:arXiv.org:2512.09372v1
- eess.SY
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ REMISVFU: Vertical Federated Unlearning via Representation Misdirection for Intermediate Output Feature
+ https://arxiv.org/abs/2512.10348
+ arXiv:2512.10348v1 Announce Type: new
+Abstract: Data-protection regulations such as the GDPR grant every participant in a federated system a right to be forgotten. Federated unlearning has therefore emerged as a research frontier, aiming to remove a specific party's contribution from the learned model while preserving the utility of the remaining parties. However, most unlearning techniques focus on Horizontal Federated Learning (HFL), where data are partitioned by samples. In contrast, Vertical Federated Learning (VFL) allows organizations that possess complementary feature spaces to train a joint model without sharing raw data. The resulting feature-partitioned architecture renders HFL-oriented unlearning methods ineffective. In this paper, we propose REMISVFU, a plug-and-play representation misdirection framework that enables fast, client-level unlearning in splitVFL systems. When a deletion request arrives, the forgetting party collapses its encoder output to a randomly sampled anchor on the unit sphere, severing the statistical link between its features and the global model. To maintain utility for the remaining parties, the server jointly optimizes a retention loss and a forgetting loss, aligning their gradients via orthogonal projection to eliminate destructive interference. Evaluations on public benchmarks show that REMISVFU suppresses back-door attack success to the natural class-prior level and sacrifices only about 2.5% points of clean accuracy, outperforming state-of-the-art baselines.
+ oai:arXiv.org:2512.10348v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Jingxuan Yang, Weichao Xu, Yuchen Shi, Yi Zhang, Shuo Feng, Huaxin Pei
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Wenhan Wu, Zhili He, Huanghuang Liang, Yili Gong, Jiawei Jiang, Chuang Hu, Dazhao Cheng
- FUSER: Feed-Forward MUltiview 3D Registration Transformer and SE(3)$^N$ Diffusion Refinement
- https://arxiv.org/abs/2512.09373
- arXiv:2512.09373v1 Announce Type: new
-Abstract: Registration of multiview point clouds conventionally relies on extensive pairwise matching to build a pose graph for global synchronization, which is computationally expensive and inherently ill-posed without holistic geometric constraints. This paper proposes FUSER, the first feed-forward multiview registration transformer that jointly processes all scans in a unified, compact latent space to directly predict global poses without any pairwise estimation. To maintain tractability, FUSER encodes each scan into low-resolution superpoint features via a sparse 3D CNN that preserves absolute translation cues, and performs efficient intra- and inter-scan reasoning through a Geometric Alternating Attention module. Particularly, we transfer 2D attention priors from off-the-shelf foundation models to enhance 3D feature interaction and geometric consistency. Building upon FUSER, we further introduce FUSER-DF, an SE(3)$^N$ diffusion refinement framework to correct FUSER's estimates via denoising in the joint SE(3)$^N$ space. FUSER acts as a surrogate multiview registration model to construct the denoiser, and a prior-conditioned SE(3)$^N$ variational lower bound is derived for denoising supervision. Extensive experiments on 3DMatch, ScanNet and ArkitScenes demonstrate that our approach achieves the superior registration accuracy and outstanding computational efficiency.
- oai:arXiv.org:2512.09373v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Design and Validation of an Under-actuated Robotic Finger with Synchronous Tendon Routing
+ https://arxiv.org/abs/2512.10349
+ arXiv:2512.10349v1 Announce Type: new
+Abstract: Tendon-driven under-actuated robotic fingers provide advantages for dexterous manipulation through reduced actuator requirements and simplified mechanical design. However, achieving both high load capacity and adaptive compliance in a compact form remains challenging. This paper presents an under-actuated tendon-driven robotic finger (UTRF) featuring a synchronous tendon routing that mechanically couples all joints with fixed angular velocity ratios, enabling the entire finger to be actuated by a single actuator. This approach significantly reduces the number of actuators required in multi-finger hands, resulting in a lighter and more compact structure without sacrificing stiffness or compliance. The kinematic and static models of the finger are derived, incorporating tendon elasticity to predict structural stiffness. A single-finger prototype was fabricated and tested under static loading, showing an average deflection prediction error of 1.0 mm (0.322% of total finger length) and a measured stiffness of 1.2x10^3 N/m under a 3 kg tip load. Integration into a five-finger robotic hand (UTRF-RoboHand) demonstrates effective object manipulation across diverse scenarios, confirming that the proposed routing achieves predictable stiffness and reliable grasping performance with a minimal actuator count.
+ oai:arXiv.org:2512.10349v1
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Haobo Jiang, Jin Xie, Jian Yang, Liang Yu, Jianmin Zheng
+ Quan Yuan, Zhenting Du, Daqian Cao, Weibang Bai
- Derandomizing Isolation In Catalytic Logspace
- https://arxiv.org/abs/2512.09374
- arXiv:2512.09374v1 Announce Type: new
-Abstract: A language is said to be in catalytic logspace if we can test membership using a deterministic logspace machine that has an additional read/write tape filled with arbitrary data whose contents have to be restored to their original value at the end of the computation. The model of catalytic computation was introduced by Buhrman et al [STOC2014].
- As our first result, we obtain a catalytic logspace algorithm for computing a minimum weight witness to a search problem, with small weights, provided the algorithm is given oracle access for the corresponding weighted decision problem. In particular, our reduction yields CL algorithms for the search versions of the following three problems: planar perfect matching, planar exact perfect matching and weighted arborescences in weighted digraphs.
- Our second set of results concern the significantly larger class CL^{NP}_{2-round}. We show that CL^{NP}_{2-round} contains SearchSAT and the complexity classes BPP, MA and ZPP^{NP[1]}. While SearchSAT is shown to be in CL^{NP}_{2-round} using the isolation lemma, the other three containments, while based on the compress-or-random technique, use the Nisan-Wigderson [JCSS 1994] based pseudo-random generator. These containments show that CL^{NP}_{2-round} resembles ZPP^NP more than P^{NP}, providing some weak evidence that CL is more like ZPP than P.
- For our third set of results we turn to isolation well inside catalytic classes. We consider the unambiguous catalytic class CUTISP[poly(n),logn,log^2n] and show that it contains reachability and therefore NL. This is a catalytic version of the result of van Melkebeek & Prakriya [SIAM J. Comput. 2019]. Building on their result, we also show a tradeoff between workspace and catalytic space. Finally, we extend these catalytic upper bounds to LogCFL.
- oai:arXiv.org:2512.09374v1
- cs.CC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Dynamics of Agentic Loops in Large Language Models: A Geometric Theory of Trajectories
+ https://arxiv.org/abs/2512.10350
+ arXiv:2512.10350v1 Announce Type: new
+Abstract: Agentic systems built on large language models operate through recursive feedback loops, where each output becomes the next input. Yet the geometric behavior of these agentic loops (whether they converge, diverge, or exhibit more complex dynamics) remains poorly understood. This paper introduces a geometric framework for analyzing agentic trajectories in semantic embedding space, treating iterative transformations as discrete dynamical systems. We distinguish the artifact space, where linguistic transformations occur, from the embedding space, where geometric measurements are performed. Because cosine similarity is biased by embedding anisotropy, we introduce an isotonic calibration that eliminates systematic bias and aligns similarities with human semantic judgments while preserving high local stability. This enables rigorous measurement of trajectories, clusters and attractors. Through controlled experiments on singular agentic loops, we identify two fundamental regimes. A contractive rewriting loop converges toward a stable attractor with decreasing dispersion, while an exploratory summarize and negate loop produces unbounded divergence with no cluster formation. These regimes display qualitatively distinct geometric signatures of contraction and expansion. Our results show that prompt design directly governs the dynamical regime of an agentic loop, enabling systematic control of convergence, divergence and trajectory structure in iterative LLM transformations.
+ oai:arXiv.org:2512.10350v1
+ cs.LG
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- V. Arvind, Srijan Chakraborty, Samir Datta
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nicolas Tacheny
- Log NeRF: Comparing Spaces for Learning Radiance Fields
- https://arxiv.org/abs/2512.09375
- arXiv:2512.09375v1 Announce Type: new
-Abstract: Neural Radiance Fields (NeRF) have achieved remarkable results in novel view synthesis, typically using sRGB images for supervision. However, little attention has been paid to the color space in which the network is learning the radiance field representation. Inspired by the BiIlluminant Dichromatic Reflection (BIDR) model, which suggests that a logarithmic transformation simplifies the separation of illumination and reflectance, we hypothesize that log RGB space enables NeRF to learn a more compact and effective representation of scene appearance. To test this, we captured approximately 30 videos using a GoPro camera, ensuring linear data recovery through inverse encoding. We trained NeRF models under various color space interpretations linear, sRGB, GPLog, and log RGB by converting each network output to a common color space before rendering and loss computation, enforcing representation learning in different color spaces. Quantitative and qualitative evaluations demonstrate that using a log RGB color space consistently improves rendering quality, exhibits greater robustness across scenes, and performs particularly well in low light conditions while using the same bit-depth input images. Further analysis across different network sizes and NeRF variants confirms the generalization and stability of the log space advantage.
- oai:arXiv.org:2512.09375v1
+ Topology-Agnostic Animal Motion Generation from Text Prompt
+ https://arxiv.org/abs/2512.10352
+ arXiv:2512.10352v1 Announce Type: new
+Abstract: Motion generation is fundamental to computer animation and widely used across entertainment, robotics, and virtual environments. While recent methods achieve impressive results, most rely on fixed skeletal templates, which prevent them from generalizing to skeletons with different or perturbed topologies. We address the core limitation of current motion generation methods - the combined lack of large-scale heterogeneous animal motion data and unified generative frameworks capable of jointly modeling arbitrary skeletal topologies and textual conditions. To this end, we introduce OmniZoo, a large-scale animal motion dataset spanning 140 species and 32,979 sequences, enriched with multimodal annotations. Building on OmniZoo, we propose a generalized autoregressive motion generation framework capable of producing text-driven motions for arbitrary skeletal topologies. Central to our model is a Topology-aware Skeleton Embedding Module that encodes geometric and structural properties of any skeleton into a shared token space, enabling seamless fusion with textual semantics. Given a text prompt and a target skeleton, our method generates temporally coherent, physically plausible, and semantically aligned motions, and further enables cross-species motion style transfer.
+ oai:arXiv.org:2512.10352v1cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Sihe Chen (Northeastern University), Luv Verma (Northeastern University), Bruce A. Maxwell (Northeastern University)
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Keyi Chen, Mingze Sun, Zhenyu Liu, Zhangquan Chen, Ruqi Huang
- Rates and architectures for learning geometrically non-trivial operators
- https://arxiv.org/abs/2512.09376
- arXiv:2512.09376v1 Announce Type: new
-Abstract: Deep learning methods have proven capable of recovering operators between high-dimensional spaces, such as solution maps of PDEs and similar objects in mathematical physics, from very few training samples. This phenomenon of data-efficiency has been proven for certain classes of elliptic operators with simple geometry, i.e., operators that do not change the domain of the function or propagate singularities. However, scientific machine learning is commonly used for problems that do involve the propagation of singularities in a priori unknown ways, such as waves, advection, and fluid dynamics. In light of this, we expand the learning theory to include double fibration transforms--geometric integral operators that include generalized Radon and geodesic ray transforms. We prove that this class of operators does not suffer from the curse of dimensionality: the error decays superalgebraically, that is, faster than any fixed power of the reciprocal of the number of training samples. Furthermore, we investigate architectures that explicitly encode the geometry of these transforms, demonstrating that an architecture reminiscent of cross-attention based on levelset methods yields a parameterization that is universal, stable, and learns double fibration transforms from very few training examples. Our results contribute to a rapidly-growing line of theoretical work on learning operators for scientific machine learning.
- oai:arXiv.org:2512.09376v1
- cs.LG
+ Hybrid Transformer-Mamba Architecture for Weakly Supervised Volumetric Medical Segmentation
+ https://arxiv.org/abs/2512.10353
+ arXiv:2512.10353v1 Announce Type: new
+Abstract: Weakly supervised semantic segmentation offers a label-efficient solution to train segmentation models for volumetric medical imaging. However, existing approaches often rely on 2D encoders that neglect the inherent volumetric nature of the data. We propose TranSamba, a hybrid Transformer-Mamba architecture designed to capture 3D context for weakly supervised volumetric medical segmentation. TranSamba augments a standard Vision Transformer backbone with Cross-Plane Mamba blocks, which leverage the linear complexity of state space models for efficient information exchange across neighboring slices. The information exchange enhances the pairwise self-attention within slices computed by the Transformer blocks, directly contributing to the attention maps for object localization. TranSamba achieves effective volumetric modeling with time complexity that scales linearly with the input volume depth and maintains constant memory usage for batch processing. Extensive experiments on three datasets demonstrate that TranSamba establishes new state-of-the-art performance, consistently outperforming existing methods across diverse modalities and pathologies. Our source code and trained models are openly accessible at: https://github.com/YihengLyu/TranSamba.
+ oai:arXiv.org:2512.10353v1cs.CV
- eess.IV
- math.DG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- T. Mitchell Roddenberry, Leo Tzou, Ivan Dokmani\'c, Maarten V. de Hoop, Richard G. Baraniuk
+ Yiheng Lyu, Lian Xu, Mohammed Bennamoun, Farid Boussaid, Coen Arrow, Girish Dwivedi
- Observability Analysis and Composite Disturbance Filtering for a Bar Tethered to Dual UAVs Subject to Multi-source Disturbances
- https://arxiv.org/abs/2512.09377
- arXiv:2512.09377v1 Announce Type: new
-Abstract: Cooperative suspended aerial transportation is highly susceptible to multi-source disturbances such as aerodynamic effects and thrust uncertainties. To achieve precise load manipulation, existing methods often rely on extra sensors to measure cable directions or the payload's pose, which increases the system cost and complexity. A fundamental question remains: is the payload's pose observable under multi-source disturbances using only the drones' odometry information? To answer this question, this work focuses on the two-drone-bar system and proves that the whole system is observable when only two or fewer types of lumped disturbances exist by using the observability rank criterion. To the best of our knowledge, we are the first to present such a conclusion and this result paves the way for more cost-effective and robust systems by minimizing their sensor suites. Next, to validate this analysis, we consider the situation where the disturbances are only exerted on the drones, and develop a composite disturbance filtering scheme. A disturbance observer-based error-state extended Kalman filter is designed for both state and disturbance estimation, which renders improved estimation performance for the whole system evolving on the manifold $(\mathbb{R}^3)^2\times(TS^2)^3$. Our simulation and experimental tests have validated that it is possible to fully estimate the state and disturbance of the system with only odometry information of the drones.
- oai:arXiv.org:2512.09377v1
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Efficient Defective Clique Enumeration and Search with Worst-Case Optimal Search Space
+ https://arxiv.org/abs/2512.10354
+ arXiv:2512.10354v1 Announce Type: new
+Abstract: A $k$-defective clique is a relaxation of the traditional clique definition, allowing up to $k$ missing edges. This relaxation is crucial in various real-world applications such as link prediction, community detection, and social network analysis. Although the problems of enumerating maximal $k$-defective cliques and searching a maximum $k$-defective clique have been extensively studied, existing algorithms suffer from limitations such as the combinatorial explosion of small partial solutions and sub-optimal search spaces. To address these limitations, we propose a novel clique-first branch-and-bound framework that first generates cliques and then adds missing edges. Furthermore, we introduce a new pivoting technique that achieves a search space size of $\mathcal{O}(3^{\frac{n}{3}} \cdot n^k)$, where $n$ is the number of vertices in the input graph. We prove that the worst-case number of maximal $k$-defective cliques is $\Omega(3^{\frac{n}{3}} \cdot n^k)$ when $k$ is a constant, establishing that our algorithm's search space is worst-case optimal. Leveraging the diameter-two property of defective cliques, we further reduce the search space size to $\mathcal{O}(n \cdot 3^{\frac{\delta}{3}} \cdot (\delta \Delta)^k)$, where $\delta$ is the degeneracy and $\Delta$ is the maximum degree of the input graph. We also propose an efficient framework for maximum $k$-defective clique search based on our branch-and-bound, together with practical techniques to reduce the search space. Experiments on real-world benchmark datasets with more than 1 million edges demonstrate that each of our proposed algorithms for maximal $k$-defective clique enumeration and maximum $k$-defective clique search outperforms the respective state-of-the-art algorithms by up to four orders of magnitude in terms of processing time.
+ oai:arXiv.org:2512.10354v1
+ cs.DS
+ cs.DB
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Lidan Xu, Dadong Fan, Junhong Wang, Wenshuo Li, Hao Lu, Jianzhong Qiao
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1145/3769787
+ Jihoon Jang, Yehyun Nam, Kunsoo Park, Hyunjoon Kim
- Federated Distillation Assisted Vehicle Edge Caching Scheme Based on Lightweight DDPM
- https://arxiv.org/abs/2512.09378
- arXiv:2512.09378v1 Announce Type: new
-Abstract: Vehicle edge caching is a promising technology that can significantly reduce the latency for vehicle users (VUs) to access content by pre-caching user-interested content at edge nodes. It is crucial to accurately predict the content that VUs are interested in without exposing their privacy. Traditional federated learning (FL) can protect user privacy by sharing models rather than raw data. However, the training of FL requires frequent model transmission, which can result in significant communication overhead. Additionally, vehicles may leave the road side unit (RSU) coverage area before training is completed, leading to training failures. To address these issues, in this letter, we propose a federated distillation-assisted vehicle edge caching scheme based on lightweight denoising diffusion probabilistic model (LDPM). The simulation results demonstrate that the proposed vehicle edge caching scheme has good robustness to variations in vehicle speed, significantly reducing communication overhead and improving cache hit percentage.
- oai:arXiv.org:2512.09378v1
+ Better Prevent than Tackle: Valuing Defense in Soccer Based on Graph Neural Networks
+ https://arxiv.org/abs/2512.10355
+ arXiv:2512.10355v1 Announce Type: new
+Abstract: Evaluating defensive performance in soccer remains challenging, as effective defending is often expressed not through visible on-ball actions such as interceptions and tackles, but through preventing dangerous opportunities before they arise. Existing approaches have largely focused on valuing on-ball actions, leaving much of defenders' true impact unmeasured. To address this gap, we propose DEFCON (DEFensive CONtribution evaluator), a comprehensive framework that quantifies player-level defensive contributions for every attacking situation in soccer. Leveraging Graph Attention Networks, DEFCON estimates the success probability and expected value of each attacking option, along with each defender's responsibility for stopping it. These components yield an Expected Possession Value (EPV) for the attacking team before and after each action, and DEFCON assigns positive or negative credits to defenders according to whether they reduced or increased the opponent's EPV. Trained on 2023-24 and evaluated on 2024-25 Eredivisie event and tracking data, DEFCON's aggregated player credits exhibit strong positive correlations with market valuations. Finally, we showcase several practical applications, including in-game timelines of defensive contributions, spatial analyses across pitch zones, and pairwise summaries of attacker-defender interactions.
+ oai:arXiv.org:2512.10355v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.MA
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xun Li, Qiong Wu, Pingyi Fan, Kezhi Wang, Wen Chen, Khaled B. Letaief
+ Hyunsung Kim, Sangwoo Seo, Hoyoung Choi, Tom Boomstra, Jinsung Yoon, Chanyoung Park
- Perception-Inspired Color Space Design for Photo White Balance Editing
- https://arxiv.org/abs/2512.09383
- arXiv:2512.09383v1 Announce Type: new
-Abstract: White balance (WB) is a key step in the image signal processor (ISP) pipeline that mitigates color casts caused by varying illumination and restores the scene's true colors. Currently, sRGB-based WB editing for post-ISP WB correction is widely used to address color constancy failures in the ISP pipeline when the original camera RAW is unavailable. However, additive color models (e.g., sRGB) are inherently limited by fixed nonlinear transformations and entangled color channels, which often impede their generalization to complex lighting conditions.
- To address these challenges, we propose a novel framework for WB correction that leverages a perception-inspired Learnable HSI (LHSI) color space. Built upon a cylindrical color model that naturally separates luminance from chromatic components, our framework further introduces dedicated parameters to enhance this disentanglement and learnable mapping to adaptively refine the flexibility. Moreover, a new Mamba-based network is introduced, which is tailored to the characteristics of the proposed LHSI color space.
- Experimental results on benchmark datasets demonstrate the superiority of our method, highlighting the potential of perception-inspired color space design in computational photography. The source code is available at https://github.com/YangCheng58/WB_Color_Space.
- oai:arXiv.org:2512.09383v1
+ mmCounter: Static People Counting in Dense Indoor Scenarios Using mmWave Radar
+ https://arxiv.org/abs/2512.10357
+ arXiv:2512.10357v1 Announce Type: new
+Abstract: mmWave radars struggle to detect or count individuals in dense, static (non-moving) groups due to limitations in spatial resolution and reliance on movement for detection. We present mmCounter, which accurately counts static people in dense indoor spaces (up to three people per square meter). mmCounter achieves this by extracting ultra-low frequency (< 1 Hz) signals, primarily from breathing and micro-scale body movements such as slight torso shifts, and applying novel signal processing techniques to differentiate these subtle signals from background noise and nearby static objects. Our problem differs significantly from existing studies on breathing rate estimation, which assume the number of people is known a priori. In contrast, mmCounter utilizes a novel multi-stage signal processing pipeline to extract relevant low-frequency sources along with their spatial information and map these sources to individual people, enabling accurate counting. Extensive evaluations in various environments demonstrate that mmCounter delivers an 87% average F1 score and 0.6 mean absolute error in familiar environments, and a 60% average F1 score and 1.1 mean absolute error in previously untested environments. It can count up to seven individuals in a three square meter space, such that there is no side-by-side spacing and only a one-meter front-to-back distance.
+ oai:arXiv.org:2512.10357v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yang Cheng, Ziteng Cui, Lin Gu, Shenghan Su, Zenghui Zhang
-
-
- BugSweeper: Function-Level Detection of Smart Contract Vulnerabilities Using Graph Neural Networks
- https://arxiv.org/abs/2512.09385
- arXiv:2512.09385v1 Announce Type: new
-Abstract: The rapid growth of Ethereum has made it more important to quickly and accurately detect smart contract vulnerabilities. While machine-learning-based methods have shown some promise, many still rely on rule-based preprocessing designed by domain experts. Rule-based preprocessing methods often discard crucial context from the source code, potentially causing certain vulnerabilities to be overlooked and limiting adaptability to newly emerging threats. We introduce BugSweeper, an end-to-end deep learning framework that detects vulnerabilities directly from the source code without manual engineering. BugSweeper represents each Solidity function as a Function-Level Abstract Syntax Graph (FLAG), a novel graph that combines its Abstract Syntax Tree (AST) with enriched control-flow and data-flow semantics. Then, our two-stage Graph Neural Network (GNN) analyzes these graphs. The first-stage GNN filters noise from the syntax graphs, while the second-stage GNN conducts high-level reasoning to detect diverse vulnerabilities. Extensive experiments on real-world contracts show that BugSweeper significantly outperforms all state-of-the-art detection methods. By removing the need for handcrafted rules, our approach offers a robust, automated, and scalable solution for securing smart contracts without any dependence on security experts.
- oai:arXiv.org:2512.09385v1
- cs.CR
- cs.AI
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Uisang Lee, Changhoon Chung, Junmo Lee, Soo-Mook Moon
+ Tarik Reza Toha (Louie), Shao-Jung (Louie), Lu, Shahriar Nirjon
- CONCUR: A Framework for Continual Constrained and Unconstrained Routing
- https://arxiv.org/abs/2512.09386
- arXiv:2512.09386v1 Announce Type: new
-Abstract: AI tasks differ in complexity and are best addressed with different computation strategies (e.g., combinations of models and decoding methods). Hence, an effective routing system that maps tasks to the appropriate strategies is crucial. Most prior methods build the routing framework by training a single model across all strategies, which demands full retraining whenever new strategies appear and leads to high overhead. Attempts at such continual routing, however, often face difficulties with generalization. Prior models also typically use a single input representation, limiting their ability to capture the full complexity of the routing problem and leading to sub-optimal routing decisions. To address these gaps, we propose CONCUR, a continual routing framework that supports both constrained and unconstrained routing (i.e., routing with or without a budget). Our modular design trains a separate predictor model for each strategy, enabling seamless incorporation of new strategies with low additional training cost. Our predictors also leverage multiple representations of both tasks and computation strategies to better capture overall problem complexity. Experiments on both in-distribution and out-of-distribution, knowledge- and reasoning-intensive tasks show that our method outperforms the best single strategy and strong existing routing techniques with higher end-to-end accuracy and lower inference cost in both continual and non-continual settings, while also reducing training cost in the continual setting.
- oai:arXiv.org:2512.09386v1
- cs.CL
- cs.AI
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Integrated Planning and Machine-Level Scheduling for High-Mix Discrete Manufacturing: A Profit-Driven Heuristic Framework
+ https://arxiv.org/abs/2512.10358
+ arXiv:2512.10358v1 Announce Type: new
+Abstract: Modern manufacturing enterprises struggle to create efficient and reliable production schedules under multi-variety, small-batch, and rush-order conditions. High-mix discrete manufacturing systems require jointly optimizing mid-term production planning and machine-level scheduling under heterogeneous resources and stringent delivery commitments. We address this problem with a profit-driven integrated framework that couples a mixed-integer planning model with a machine-level scheduling heuristic. The planning layer allocates production, accessory co-production, and outsourcing under aggregate economic and capacity constraints, while the scheduling layer refines these allocations using a structure-aware procedure that enforces execution feasibility and stabilizes daily machine behavior. This hierarchical design preserves the tractability of aggregated optimization while capturing detailed operational restrictions. Evaluations are conducted on a real industrial scenario. A flexible machine-level execution scheme yields 73.3% on-time completion and significant outsourcing demand, revealing bottleneck congestion. In contrast, a stability-enforcing execution policy achieves 100% on-time completion, eliminates all outsourcing, and maintains balanced machine utilization with only 1.9 to 4.6% capacity loss from changeovers. These results show that aligning planning decisions with stability-oriented execution rules enables practical and interpretable profit-maximizing decisions in complex manufacturing environments.
+ oai:arXiv.org:2512.10358v1
+ cs.CE
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Peter Baile Chen, Weiyue Li, Dan Roth, Michael Cafarella, Samuel Madden, Jacob Andreas
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Runhao Liu, Ziming Chen, You Li, Zequn Xie, Peng Zhang
- Detection and Localization of Subdural Hematoma Using Deep Learning on Computed Tomography
- https://arxiv.org/abs/2512.09393
- arXiv:2512.09393v1 Announce Type: new
-Abstract: Background. Subdural hematoma (SDH) is a common neurosurgical emergency, with increasing incidence in aging populations. Rapid and accurate identification is essential to guide timely intervention, yet existing automated tools focus primarily on detection and provide limited interpretability or spatial localization. There remains a need for transparent, high-performing systems that integrate multimodal clinical and imaging information to support real-time decision-making.
- Methods. We developed a multimodal deep-learning framework that integrates structured clinical variables, a 3D convolutional neural network trained on CT volumes, and a transformer-enhanced 2D segmentation model for SDH detection and localization. Using 25,315 head CT studies from Hartford HealthCare (2015--2024), of which 3,774 (14.9\%) contained clinician-confirmed SDH, tabular models were trained on demographics, comorbidities, medications, and laboratory results. Imaging models were trained to detect SDH and generate voxel-level probability maps. A greedy ensemble strategy combined complementary predictors.
- Findings. Clinical variables alone provided modest discriminatory power (AUC 0.75). Convolutional models trained on CT volumes and segmentation-derived maps achieved substantially higher accuracy (AUCs 0.922 and 0.926). The multimodal ensemble integrating all components achieved the best overall performance (AUC 0.9407; 95\% CI, 0.930--0.951) and produced anatomically meaningful localization maps consistent with known SDH patterns.
- Interpretation. This multimodal, interpretable framework provides rapid and accurate SDH detection and localization, achieving high detection performance and offering transparent, anatomically grounded outputs. Integration into radiology workflows could streamline triage, reduce time to intervention, and improve consistency in SDH management.
- oai:arXiv.org:2512.09393v1
+ Tool-Augmented Spatiotemporal Reasoning for Streamlining Video Question Answering Task
+ https://arxiv.org/abs/2512.10359
+ arXiv:2512.10359v1 Announce Type: new
+Abstract: Video Question Answering (VideoQA) task serves as a critical playground for evaluating whether foundation models can effectively perceive, understand, and reason about dynamic real-world scenarios. However, existing Multimodal Large Language Models (MLLMs) struggle with simultaneously modeling spatial relationships within video frames and understanding the causal dynamics of temporal evolution on complex and reasoning-intensive VideoQA task. In this work, we equip MLLM with a comprehensive and extensible Video Toolkit, to enhance MLLM's spatiotemporal reasoning capabilities and ensure the harmony between the quantity and diversity of tools. To better control the tool invocation sequence and avoid toolchain shortcut issues, we propose a Spatiotemporal Reasoning Framework (STAR) that strategically schedules temporal and spatial tools, thereby progressively localizing the key area in the video. Our STAR framework enhances GPT-4o using lightweight tools, achieving an 8.2% gain on VideoMME and 4.6% on LongVideoBench. We believe that our proposed Video Toolkit and STAR framework make an important step towards building autonomous and intelligent video analysis assistants. The code is publicly available at https://github.com/fansunqi/VideoTool.
+ oai:arXiv.org:2512.10359v1cs.CV
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Vasiliki Stoumpou, Rohan Kumar, Bernard Burman, Diego Ojeda, Tapan Mehta, Dimitris Bertsimas
+ Sunqi Fan, Jiashuo Cui, Meng-Hao Guo, Shuojin Yang
- Language models as tools for investigating the distinction between possible and impossible natural languages
- https://arxiv.org/abs/2512.09394
- arXiv:2512.09394v1 Announce Type: new
-Abstract: We argue that language models (LMs) have strong potential as investigative tools for probing the distinction between possible and impossible natural languages and thus uncovering the inductive biases that support human language learning. We outline a phased research program in which LM architectures are iteratively refined to better discriminate between possible and impossible languages, supporting linking hypotheses to human cognition.
- oai:arXiv.org:2512.09394v1
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Julie Kallini, Christopher Potts
-
-
- GAIR: GUI Automation via Information-Joint Reasoning and Group Reflection
- https://arxiv.org/abs/2512.09396
- arXiv:2512.09396v1 Announce Type: new
-Abstract: Building AI systems for GUI automation task has attracted remarkable research efforts, where MLLMs are leveraged for processing user requirements and give operations. However, GUI automation includes a wide range of tasks, from document processing to online shopping, from CAD to video editing. Diversity between particular tasks requires MLLMs for GUI automation to have heterogeneous capabilities and master multidimensional expertise, raising problems on constructing such a model. To address such challenge, we propose GAIR: GUI Automation via Information-Joint Reasoning and Group Reflection, a novel MLLM-based GUI automation agent framework designed for integrating knowledge and combining capabilities from heterogeneous models to build GUI automation agent systems with higher performance. Since different GUI-specific MLLMs are trained on different dataset and thus have different strengths, GAIR introduced a general-purpose MLLM for jointly processing the information from multiple GUI-specific models, further enhancing performance of the agent framework. The general-purpose MLLM also serves as decision maker, trying to execute a reasonable operation based on previously gathered information. When the general-purpose model thinks that there isn't sufficient information for a reasonable decision, GAIR would transit into group reflection status, where the general-purpose model would provide GUI-specific models with different instructions and hints based on their strengths and weaknesses, driving them to gather information with more significance and accuracy that can support deeper reasoning and decision. We evaluated the effectiveness and reliability of GAIR through extensive experiments on GUI benchmarks.
- oai:arXiv.org:2512.09396v1
- cs.MA
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ CLASH: Collaborative Large-Small Hierarchical Framework for Continuous Vision-and-Language Navigation
+ https://arxiv.org/abs/2512.10360
+ arXiv:2512.10360v1 Announce Type: new
+Abstract: Vision-and-Language Navigation (VLN) requires robots to follow natural language instructions and navigate complex environments without prior maps. While recent vision-language large models demonstrate strong reasoning abilities, they often underperform task-specific panoramic small models in VLN tasks. To address this, we propose CLASH (Collaborative Large-Small Hierarchy), a VLN-CE framework that integrates a reactive small-model planner (RSMP) with a reflective large-model reasoner (RLMR). RSMP adopts a causal-learning-based dual-branch architecture to enhance generalization, while RLMR leverages panoramic visual prompting with chain-of-thought reasoning to support interpretable spatial understanding and navigation. We further introduce an uncertainty-aware collaboration mechanism (UCM) that adaptively fuses decisions from both models. For obstacle avoidance, in simulation, we replace the rule-based controller with a fully learnable point-goal policy, and in real-world deployment, we design a LiDAR-based clustering module for generating navigable waypoints and pair it with an online SLAM-based local controller. CLASH achieves state-of-the-art (SoTA) results (ranking 1-st) on the VLN-CE leaderboard, significantly improving SR and SPL on the test-unseen set over the previous SoTA methods. Real-world experiments demonstrate CLASH's strong robustness, validating its effectiveness in both simulation and deployment scenarios.
+ oai:arXiv.org:2512.10360v1
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zishu Wei, Qixiang Ma, Xavier Hu, Yuhang Liu, Hui Zang, Yudong Zhao, Tao Wang, Shengyu Zhang, Fei Wu
+ Liuyi Wang, Zongtao He, Jinlong Li, Xiaoyan Qi, Mengxian Hu, Chenpeng Yao, Chengju Liu, Qijun Chen
- Towards Resilient Transportation: A Conditional Transformer for Accident-Informed Traffic Forecasting
- https://arxiv.org/abs/2512.09398
- arXiv:2512.09398v1 Announce Type: new
-Abstract: Traffic prediction remains a key challenge in spatio-temporal data mining, despite progress in deep learning. Accurate forecasting is hindered by the complex influence of external factors such as traffic accidents and regulations, often overlooked by existing models due to limited data integration. To address these limitations, we present two enriched traffic datasets from Tokyo and California, incorporating traffic accident and regulation data. Leveraging these datasets, we propose ConFormer (Conditional Transformer), a novel framework that integrates graph propagation with guided normalization layer. This design dynamically adjusts spatial and temporal node relationships based on historical patterns, enhancing predictive accuracy. Our model surpasses the state-of-the-art STAEFormer in both predictive performance and efficiency, achieving lower computational costs and reduced parameter demands. Extensive evaluations demonstrate that ConFormer consistently outperforms mainstream spatio-temporal baselines across multiple metrics, underscoring its potential to advance traffic prediction research.
- oai:arXiv.org:2512.09398v1
- cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Bit of a Close Talker: A Practical Guide to Serverless Cloud Co-Location Attacks
+ https://arxiv.org/abs/2512.10361
+ arXiv:2512.10361v1 Announce Type: new
+Abstract: Serverless computing has revolutionized cloud computing by offering an efficient and cost-effective way for users to develop and deploy applications without managing infrastructure details. However, serverless cloud users remain vulnerable to various types of attacks, including micro-architectural side-channel attacks. These attacks typically rely on the physical co-location of victim and attacker instances, and attackers will need to exploit cloud schedulers to achieve co-location with victims. Therefore, it is crucial to study vulnerabilities in serverless cloud schedulers and assess the security of different serverless scheduling algorithms. This study addresses the gap in understanding and constructing co-location attacks in serverless clouds. We present a comprehensive methodology to uncover exploitable features in serverless scheduling algorithms and devise strategies for constructing co-location attacks through normal user interfaces. In our experiments, we successfully reveal exploitable vulnerabilities and achieve instance co-location on prevalent open-source infrastructures and Microsoft Azure Functions. We also present a mitigation strategy to defend against co-location attacks in serverless clouds. Our work highlights critical areas for security enhancements in current cloud schedulers, offering insights to fortify serverless computing environments against potential co-location attacks.
+ oai:arXiv.org:2512.10361v1
+ cs.CR
+ cs.DC
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Hongjun Wang, Jiawei Yong, Jiawei Wang, Shintaro Fukushima, Renhe Jiang
+ Wei Shao, Najmeh Nazari, Behnam Omidi, Setareh Rafatirad, Houman Homayoun, Khaled N. Khasawneh, Chongzhou Fang
- Wasserstein-Aligned Hyperbolic Multi-View Clustering
- https://arxiv.org/abs/2512.09402
- arXiv:2512.09402v1 Announce Type: new
-Abstract: Multi-view clustering (MVC) aims to uncover the latent structure of multi-view data by learning view-common and view-specific information. Although recent studies have explored hyperbolic representations for better tackling the representation gap between different views, they focus primarily on instance-level alignment and neglect global semantic consistency, rendering them vulnerable to view-specific information (\textit{e.g.}, noise and cross-view discrepancies). To this end, this paper proposes a novel Wasserstein-Aligned Hyperbolic (WAH) framework for multi-view clustering. Specifically, our method exploits a view-specific hyperbolic encoder for each view to embed features into the Lorentz manifold for hierarchical semantic modeling. Whereafter, a global semantic loss based on the hyperbolic sliced-Wasserstein distance is introduced to align manifold distributions across views. This is followed by soft cluster assignments to encourage cross-view semantic consistency. Extensive experiments on multiple benchmarking datasets show that our method can achieve SOTA clustering performance.
- oai:arXiv.org:2512.09402v1
+ Visual Funnel: Resolving Contextual Blindness in Multimodal Large Language Models
+ https://arxiv.org/abs/2512.10362
+ arXiv:2512.10362v1 Announce Type: new
+Abstract: Multimodal Large Language Models (MLLMs) demonstrate impressive reasoning capabilities, but often fail to perceive fine-grained visual details, limiting their applicability in precision-demanding tasks. While methods that crop salient regions of an image offer a partial solution, we identify a critical limitation they introduce: "Contextual Blindness". This failure occurs due to structural disconnect between high-fidelity details (from the crop) and the broader global context (from the original image), even when all necessary visual information is present. We argue that this limitation stems not from a lack of information 'Quantity', but from a lack of 'Structural Diversity' in the model's input. To resolve this, we propose Visual Funnel, a training-free, two-step approach. Visual Funnel first performs Contextual Anchoring to identify the region of interest in a single forward pass. It then constructs an Entropy-Scaled Portfolio that preserves the hierarchical context - ranging from focal detail to broader surroundings - by dynamically determining crop sizes based on attention entropy and refining crop centers. Through extensive experiments, we demonstrate that Visual Funnel significantly outperforms naive single-crop and unstructured multi-crop baselines. Our results further validate that simply adding more unstructured crops provides limited or even detrimental benefits, confirming that the hierarchical structure of our portfolio is key to resolving Contextual Blindness.
+ oai:arXiv.org:2512.10362v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Rui Wang, Yuting Jiang, Xiaoqing Luo, Xiao-Jun Wu, Nicu Sebe, Ziheng Chen
+ Woojun Jung, Jaehoon Go, Mingyu Jeon, Sunjae Yoon, Junyeong Kim
- Black-Box Behavioral Distillation Breaks Safety Alignment in Medical LLMs
- https://arxiv.org/abs/2512.09403
- arXiv:2512.09403v1 Announce Type: new
-Abstract: As medical large language models (LLMs) become increasingly integrated into clinical workflows, concerns around alignment robustness, and safety are escalating. Prior work on model extraction has focused on classification models or memorization leakage, leaving the vulnerability of safety-aligned generative medical LLMs underexplored.
- We present a black-box distillation attack that replicates the domain-specific reasoning of safety-aligned medical LLMs using only output-level access. By issuing 48,000 instruction queries to Meditron-7B and collecting 25,000 benign instruction response pairs, we fine-tune a LLaMA3 8B surrogate via parameter efficient LoRA under a zero-alignment supervision setting, requiring no access to model weights, safety filters, or training data. With a cost of $12, the surrogate achieves strong fidelity on benign inputs while producing unsafe completions for 86% of adversarial prompts, far exceeding both Meditron-7B (66%) and the untuned base model (46%). This reveals a pronounced functional-ethical gap, task utility transfers, while alignment collapses. To analyze this collapse, we develop a dynamic adversarial evaluation framework combining Generative Query (GQ)-based harmful prompt generation, verifier filtering, category-wise failure analysis, and adaptive Random Search (RS) jailbreak attacks. We also propose a layered defense system, as a prototype detector for real-time alignment drift in black-box deployments.
- Our findings show that benign-only black-box distillation exposes a practical and under-recognized threat: adversaries can cheaply replicate medical LLM capabilities while stripping safety mechanisms, underscoring the need for extraction-aware safety monitoring.
- oai:arXiv.org:2512.09403v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Point to Span: Zero-Shot Moment Retrieval for Navigating Unseen Hour-Long Videos
+ https://arxiv.org/abs/2512.10363
+ arXiv:2512.10363v1 Announce Type: new
+Abstract: Zero-shot Long Video Moment Retrieval (ZLVMR) is the task of identifying temporal segments in hour-long videos using a natural language query without task-specific training. The core technical challenge of LVMR stems from the computational infeasibility of processing entire lengthy videos in a single pass. This limitation has established a 'Search-then-Refine' approach, where candidates are rapidly narrowed down, and only those portions are analyzed, as the dominant paradigm for LVMR. However, existing approaches to this paradigm face severe limitations. Conventional supervised learning suffers from limited scalability and poor generalization, despite substantial resource consumption. Yet, existing zero-shot methods also fail, facing a dual challenge: (1) their heuristic strategies cause a 'search' phase candidate explosion, and (2) the 'refine' phase, which is vulnerable to semantic discrepancy, requires high-cost VLMs for verification, incurring significant computational overhead. We propose \textbf{P}oint-\textbf{to}-\textbf{S}pan (P2S), a novel training-free framework to overcome this challenge of inefficient 'search' and costly 'refine' phases. P2S overcomes these challenges with two key innovations: an 'Adaptive Span Generator' to prevent the search phase candidate explosion, and 'Query Decomposition' to refine candidates without relying on high-cost VLM verification. To our knowledge, P2S is the first zero-shot framework capable of temporal grounding in hour-long videos, outperforming supervised state-of-the-art methods by a significant margin (e.g., +3.7\% on R5@0.1 on MAD).
+ oai:arXiv.org:2512.10363v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Sohely Jahan, Ruimin Sun
+ Mingyu Jeon, Jisoo Yang, Sungjin Han, Jinkwon Hwang, Sunjae Yoon, Jonghee Kim, Junyeoung Kim
- H2R-Grounder: A Paired-Data-Free Paradigm for Translating Human Interaction Videos into Physically Grounded Robot Videos
- https://arxiv.org/abs/2512.09406
- arXiv:2512.09406v1 Announce Type: new
-Abstract: Robots that learn manipulation skills from everyday human videos could acquire broad capabilities without tedious robot data collection. We propose a video-to-video translation framework that converts ordinary human-object interaction videos into motion-consistent robot manipulation videos with realistic, physically grounded interactions. Our approach does not require any paired human-robot videos for training only a set of unpaired robot videos, making the system easy to scale. We introduce a transferable representation that bridges the embodiment gap: by inpainting the robot arm in training videos to obtain a clean background and overlaying a simple visual cue (a marker and arrow indicating the gripper's position and orientation), we can condition a generative model to insert the robot arm back into the scene. At test time, we apply the same process to human videos (inpainting the person and overlaying human pose cues) and generate high-quality robot videos that mimic the human's actions. We fine-tune a SOTA video diffusion model (Wan 2.2) in an in-context learning manner to ensure temporal coherence and leveraging of its rich prior knowledge. Empirical results demonstrate that our approach achieves significantly more realistic and grounded robot motions compared to baselines, pointing to a promising direction for scaling up robot learning from unlabeled human videos. Project page: https://showlab.github.io/H2R-Grounder/
- oai:arXiv.org:2512.09406v1
- cs.RO
+ GPG: Generalized Policy Gradient Theorem for Transformer-based Policies
+ https://arxiv.org/abs/2512.10365
+ arXiv:2512.10365v1 Announce Type: new
+Abstract: We present the Generalized Policy Gradient (GPG) Theorem, specifically designed for Transformer-based policies. Notably, we demonstrate that both standard Policy Gradient Theorem and GRPO emerge as special cases within our GPG framework. Furthermore, we explore its practical applications in training Large Language Models (LLMs), offering new insights into efficient policy optimization.
+ oai:arXiv.org:2512.10365v1
+ cs.LGcs.AI
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Hai Ci, Xiaokang Liu, Pei Yang, Yiren Song, Mike Zheng Shou
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hangyu Mao, Guangting Dong, Zhicheng Dou
- Generative Point Cloud Registration
- https://arxiv.org/abs/2512.09407
- arXiv:2512.09407v1 Announce Type: new
-Abstract: In this paper, we propose a novel 3D registration paradigm, Generative Point Cloud Registration, which bridges advanced 2D generative models with 3D matching tasks to enhance registration performance. Our key idea is to generate cross-view consistent image pairs that are well-aligned with the source and target point clouds, enabling geometry-color feature fusion to facilitate robust matching. To ensure high-quality matching, the generated image pair should feature both 2D-3D geometric consistency and cross-view texture consistency. To achieve this, we introduce Match-ControlNet, a matching-specific, controllable 2D generative model. Specifically, it leverages the depth-conditioned generation capability of ControlNet to produce images that are geometrically aligned with depth maps derived from point clouds, ensuring 2D-3D geometric consistency. Additionally, by incorporating a coupled conditional denoising scheme and coupled prompt guidance, Match-ControlNet further promotes cross-view feature interaction, guiding texture consistency generation. Our generative 3D registration paradigm is general and could be seamlessly integrated into various registration methods to enhance their performance. Extensive experiments on 3DMatch and ScanNet datasets verify the effectiveness of our approach.
- oai:arXiv.org:2512.09407v1
+ Breaking the Vicious Cycle: Coherent 3D Gaussian Splatting from Sparse and Motion-Blurred Views
+ https://arxiv.org/abs/2512.10369
+ arXiv:2512.10369v1 Announce Type: new
+Abstract: 3D Gaussian Splatting (3DGS) has emerged as a state-of-the-art method for novel view synthesis. However, its performance heavily relies on dense, high-quality input imagery, an assumption that is often violated in real-world applications, where data is typically sparse and motion-blurred. These two issues create a vicious cycle: sparse views ignore the multi-view constraints necessary to resolve motion blur, while motion blur erases high-frequency details crucial for aligning the limited views. Thus, reconstruction often fails catastrophically, with fragmented views and a low-frequency bias. To break this cycle, we introduce CoherentGS, a novel framework for high-fidelity 3D reconstruction from sparse and blurry images. Our key insight is to address these compound degradations using a dual-prior strategy. Specifically, we combine two pre-trained generative models: a specialized deblurring network for restoring sharp details and providing photometric guidance, and a diffusion model that offers geometric priors to fill in unobserved regions of the scene. This dual-prior strategy is supported by several key techniques, including a consistency-guided camera exploration module that adaptively guides the generative process, and a depth regularization loss that ensures geometric plausibility. We evaluate CoherentGS through both quantitative and qualitative experiments on synthetic and real-world scenes, using as few as 3, 6, and 9 input views. Our results demonstrate that CoherentGS significantly outperforms existing methods, setting a new state-of-the-art for this challenging task. The code and video demos are available at https://potatobigroom.github.io/CoherentGS/.
+ oai:arXiv.org:2512.10369v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Haobo Jiang, Jin Xie, Jian Yang, Liang Yu, Jianmin Zheng
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Zhankuo Xu, Chaoran Feng, Yingtao Li, Jianbin Zhao, Jiashu Yang, Wangbo Yu, Li Yuan, Yonghong Tian
- Proof of Trusted Execution: A Consensus Paradigm for Deterministic Blockchain Finality
- https://arxiv.org/abs/2512.09409
- arXiv:2512.09409v1 Announce Type: new
-Abstract: Current blockchain consensus protocols -- notably, Proof of Work (PoW) and Proof of Stake (PoS) -- deliver global agreement but exhibit structural constraints. PoW anchors security in heavy computation, inflating energy use and imposing high confirmation latency. PoS improves efficiency but introduces stake concentration, long-range and "nothing-at-stake" vulnerabilities, and a hard performance ceiling shaped by slot times and multi-round committee voting. In this paper, we propose Proof of Trusted Execution (PoTE), a consensus paradigm where agreement emerges from verifiable execution rather than replicated re-execution. Validators operate inside heterogeneous VM-based TEEs, each running the same canonical program whose measurement is publicly recorded, and each producing vendor-backed attestations that bind the enclave code hash to the block contents. Because the execution is deterministic and the proposer is uniquely derived from public randomness, PoTE avoids forks, eliminates slot.time bottlenecks, and commits blocks in a single round of verification. We present the design of a PoTE consensus client, describe our reference implementation, and evaluate its performance against the stringent throughput requirements of the Trillion decentralized exchange.
- oai:arXiv.org:2512.09409v1
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ LLM-Empowered Representation Learning for Emerging Item Recommendation
+ https://arxiv.org/abs/2512.10370
+ arXiv:2512.10370v1 Announce Type: new
+Abstract: In this work, we tackle the challenge of recommending emerging items, whose interactions gradually accumulate over time. Existing methods often overlook this dynamic process, typically assuming that emerging items have few or even no historical interactions. Such an assumption oversimplifies the problem, as a good model must preserve the uniqueness of emerging items while leveraging their shared patterns with established ones. To address this challenge, we propose EmerFlow, a novel LLM-empowered representation learning framework that generates distinctive embeddings for emerging items. It first enriches the raw features of emerging items through LLM reasoning, then aligns these representations with the embedding space of the existing recommendation model. Finally, new interactions are incorporated through meta-learning to refine the embeddings. This enables EmerFlow to learn expressive embeddings for emerging items from only limited interactions. Extensive experiments across diverse domains, including movies and pharmaceuticals, show that EmerFlow consistently outperforms existing methods.
+ oai:arXiv.org:2512.10370v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Kyle Habib, Vladislav Kapitsyn, Giovanni Mazzeo, Faisal Mehrban
+ Ziying Zhang, Quanming Yao, Yaqing Wang
- Generalizable Collaborative Search-and-Capture in Cluttered Environments via Path-Guided MAPPO and Directional Frontier Allocation
- https://arxiv.org/abs/2512.09410
- arXiv:2512.09410v1 Announce Type: new
-Abstract: Collaborative pursuit-evasion in cluttered environments presents significant challenges due to sparse rewards and constrained Fields of View (FOV). Standard Multi-Agent Reinforcement Learning (MARL) often suffers from inefficient exploration and fails to scale to large scenarios. We propose PGF-MAPPO (Path-Guided Frontier MAPPO), a hierarchical framework bridging topological planning with reactive control. To resolve local minima and sparse rewards, we integrate an A*-based potential field for dense reward shaping. Furthermore, we introduce Directional Frontier Allocation, combining Farthest Point Sampling (FPS) with geometric angle suppression to enforce spatial dispersion and accelerate coverage. The architecture employs a parameter-shared decentralized critic, maintaining O(1) model complexity suitable for robotic swarms. Experiments demonstrate that PGF-MAPPO achieves superior capture efficiency against faster evaders. Policies trained on 10x10 maps exhibit robust zero-shot generalization to unseen 20x20 environments, significantly outperforming rule-based and learning-based baselines.
- oai:arXiv.org:2512.09410v1
- cs.RO
- cs.LG
- cs.MA
- Thu, 11 Dec 2025 00:00:00 -0500
+ AgentProg: Empowering Long-Horizon GUI Agents with Program-Guided Context Management
+ https://arxiv.org/abs/2512.10371
+ arXiv:2512.10371v1 Announce Type: new
+Abstract: The rapid development of mobile GUI agents has stimulated growing research interest in long-horizon task automation. However, building agents for these tasks faces a critical bottleneck: the reliance on ever-expanding interaction history incurs substantial context overhead. Existing context management and compression techniques often fail to preserve vital semantic information, leading to degraded task performance. We propose AgentProg, a program-guided approach for agent context management that reframes the interaction history as a program with variables and control flow. By organizing information according to the structure of program, this structure provides a principled mechanism to determine which information should be retained and which can be discarded. We further integrate a global belief state mechanism inspired by Belief MDP framework to handle partial observability and adapt to unexpected environmental changes. Experiments on AndroidWorld and our extended long-horizon task suite demonstrate that AgentProg has achieved the state-of-the-art success rates on these benchmarks. More importantly, it maintains robust performance on long-horizon tasks while baseline methods experience catastrophic degradation. Our system is open-sourced at https://github.com/MobileLLM/AgentProg.
+ oai:arXiv.org:2512.10371v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jialin Ying, Zhihao Li, Zicheng Dong, Guohua Wu, Yihuan Liao
+ Shizuo Tian, Hao Wen, Yuxuan Chen, Jiacheng Liu, Shanhui Zhao, Guohong Liu, Ju Ren, Yunxin Liu, Yuanchun Li
- D$^2$GSLAM: 4D Dynamic Gaussian Splatting SLAM
- https://arxiv.org/abs/2512.09411
- arXiv:2512.09411v1 Announce Type: new
-Abstract: Recent advances in Dense Simultaneous Localization and Mapping (SLAM) have demonstrated remarkable performance in static environments. However, dense SLAM in dynamic environments remains challenging. Most methods directly remove dynamic objects and focus solely on static scene reconstruction, which ignores the motion information contained in these dynamic objects. In this paper, we present D$^2$GSLAM, a novel dynamic SLAM system utilizing Gaussian representation, which simultaneously performs accurate dynamic reconstruction and robust tracking within dynamic environments. Our system is composed of four key components: (i) We propose a geometric-prompt dynamic separation method to distinguish between static and dynamic elements of the scene. This approach leverages the geometric consistency of Gaussian representation and scene geometry to obtain coarse dynamic regions. The regions then serve as prompts to guide the refinement of the coarse mask for achieving accurate motion mask. (ii) To facilitate accurate and efficient mapping of the dynamic scene, we introduce dynamic-static composite representation that integrates static 3D Gaussians with dynamic 4D Gaussians. This representation allows for modeling the transitions between static and dynamic states of objects in the scene for composite mapping and optimization. (iii) We employ a progressive pose refinement strategy that leverages both the multi-view consistency of static scene geometry and motion information from dynamic objects to achieve accurate camera tracking. (iv) We introduce a motion consistency loss, which leverages the temporal continuity in object motions for accurate dynamic modeling. Our D$^2$GSLAM demonstrates superior performance on dynamic scenes in terms of mapping and tracking accuracy, while also showing capability in accurate dynamic modeling.
- oai:arXiv.org:2512.09411v1
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ D2M: A Decentralized, Privacy-Preserving, Incentive-Compatible Data Marketplace for Collaborative Learning
+ https://arxiv.org/abs/2512.10372
+ arXiv:2512.10372v1 Announce Type: new
+Abstract: The rising demand for collaborative machine learning and data analytics calls for secure and decentralized data sharing frameworks that balance privacy, trust, and incentives. Existing approaches, including federated learning (FL) and blockchain-based data markets, fall short: FL often depends on trusted aggregators and lacks Byzantine robustness, while blockchain frameworks struggle with computation-intensive training and incentive integration.
+ We present \prot, a decentralized data marketplace that unifies federated learning, blockchain arbitration, and economic incentives into a single framework for privacy-preserving data sharing. \prot\ enables data buyers to submit bid-based requests via blockchain smart contracts, which manage auctions, escrow, and dispute resolution. Computationally intensive training is delegated to \cone\ (\uline{Co}mpute \uline{N}etwork for \uline{E}xecution), an off-chain distributed execution layer. To safeguard against adversarial behavior, \prot\ integrates a modified YODA protocol with exponentially growing execution sets for resilient consensus, and introduces Corrected OSMD to mitigate malicious or low-quality contributions from sellers. All protocols are incentive-compatible, and our game-theoretic analysis establishes honesty as the dominant strategy.
+ We implement \prot\ on Ethereum and evaluate it over benchmark datasets -- MNIST, Fashion-MNIST, and CIFAR-10 -- under varying adversarial settings. \prot\ achieves up to 99\% accuracy on MNIST and 90\% on Fashion-MNIST, with less than 3\% degradation up to 30\% Byzantine nodes, and 56\% accuracy on CIFAR-10 despite its complexity. Our results show that \prot\ ensures privacy, maintains robustness under adversarial conditions, and scales efficiently with the number of participants, making it a practical foundation for real-world decentralized data sharing.
+ oai:arXiv.org:2512.10372v1
+ cs.CR
+ cs.AI
+ cs.DC
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Siting Zhu, Yuxiang Huang, Wenhua Wu, Chaokang Jiang, Yongbo Chen, I-Ming Chen, Hesheng Wang
+ Yash Srivastava, Shalin Jain, Sneha Awathare, Nitin Awathare
- Simple Modal Types for Functional Reactive Programming
- https://arxiv.org/abs/2512.09412
- arXiv:2512.09412v1 Announce Type: new
-Abstract: Functional reactive programming (FRP) is a declarative programming paradigm for implementing reactive programs at a high level of abstraction. It applies functional programming principles to construct and manipulate time-varying values, also known as signals. However, for this programming paradigm to work in practice, an FRP language must ensure that programs are causal, productive, and free from space leaks. Over the past fifteen years, several modal type systems to enforce these operational properties have been developed.
- We present a new FRP language with a significantly simplified modal type system that imposes fewer restrictions than previous modal FRP languages while still guaranteeing the central operational properties of causality, productivity, and absence of space leaks. The key enabling idea is to alter the semantics of signals so that the type system can safely allow more programs to type-check, which also makes the language more expressive. With this new semantics, signals are modelled as mutable references whose mutability is tightly controlled by the 'later' type modality. This disciplined form of mutability also enables more efficient in-place updates of signals, all while preserving a functional programming style.
- oai:arXiv.org:2512.09412v1
- cs.PL
- Thu, 11 Dec 2025 00:00:00 -0500
+ A Novel Phase-Noise Module for the QUCS Circuit Simulator. Part I : the Periodic Steady-State
+ https://arxiv.org/abs/2512.10373
+ arXiv:2512.10373v1 Announce Type: new
+Abstract: The paper discusses work done to expand and extend the capabilities of the open-source QUCS circuit simulator through the implementation of a computationally efficient time-domain steady-state analysis module, supporting simulation of autonomous circuits. To our knowledge, this represents the first time such an analysis module has been implemented in the QUCS environment. Hitherto, the only available option was a harmonic-balance module which was strictly limited to non-autonomous (driven) circuits. The research has several important scientific and industrial applications in the area of large-signal steady-state analysis of autonomous circuits e.g. free-running and coupled oscillator circuit networks. The reported results will have great impact w.r.t. analyzing, synthesizing and optimizing oscillatory behavior of various important industrial circuits and systems. The developed tool, furthermore, introduces support for simulating noise performance of circuits operating under large-signal conditions. This paper is the first part of a two-part series documenting the implementation of a novel (coupled)-oscillator phase-noise simulator engine in the QUCS environment. The goal of this undertaking is the advancement of the open-source QUCS project towards becoming a viable competitor to the commercial simulators currently on the market.
+ oai:arXiv.org:2512.10373v1
+ eess.SY
+ cs.SY
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Patrick Bahr
+ Torsten Djurhuus, Viktor Krozer
- Time-Discretized Simulation of Vehicle Platoons for Safety Analysis with Guaranteed Error Bounds
- https://arxiv.org/abs/2512.09416
- arXiv:2512.09416v1 Announce Type: new
-Abstract: Wireless communication is essential to achieve coordinated control in vehicle platoons. However, packet losses in wireless communication can cause critical safety issues when they occur in conjunction with sudden brakes. In this paper, we propose simulation-based methods that allow the study of such safety issues by determining the absolute minimum distance between vehicles over time for various control parameters that guarantee string stability. For our proposed time-discretized simulations, we provide two methods for selecting different time-step intervals to ensure that the error in distance approximation remains within specified bounds at all times. Through numerical examples we demonstrate that among control parameters that guarantee string stability some perform better than others under simultaneously occurring packet losses and sudden brakes.
- oai:arXiv.org:2512.09416v1
+ Structural Sign Herdability in Temporally Switching Networks with Fixed Topology
+ https://arxiv.org/abs/2512.10374
+ arXiv:2512.10374v1 Announce Type: new
+Abstract: This paper investigates structural herdability in a special class of temporally switching networks with fixed topology. We show that when the underlying digraph remains unchanged across all snapshots, the network attains complete SS herdability even in the presence of signed or layer dilations, a condition not applicable to static networks. This reveals a fundamental structural advantage of temporal dynamics and highlights a novel mechanism through which switching can overcome classical obstructions to herdability. To validate these conclusions, we utilize a more relaxed form of sign matching within each snapshot of the temporal network. Furthermore, we show that when all snapshots share the same underlying topology, the temporally switching network achieves $\mathcal{SS}$ herdability within just two snapshots, which is fewer than the number required for structural controllability. Several examples are included to demonstrate these results.
+ oai:arXiv.org:2512.10374v1eess.SYcs.SY
- math.OC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yuhao Chen, Ahmet Cetinkaya
-
-
- DirectSwap: Mask-Free Cross-Identity Training and Benchmarking for Expression-Consistent Video Head Swapping
- https://arxiv.org/abs/2512.09417
- arXiv:2512.09417v1 Announce Type: new
-Abstract: Video head swapping aims to replace the entire head of a video subject, including facial identity, head shape, and hairstyle, with that of a reference image, while preserving the target body, background, and motion dynamics. Due to the lack of ground-truth paired swapping data, prior methods typically train on cross-frame pairs of the same person within a video and rely on mask-based inpainting to mitigate identity leakage. Beyond potential boundary artifacts, this paradigm struggles to recover essential cues occluded by the mask, such as facial pose, expressions, and motion dynamics. To address these issues, we prompt a video editing model to synthesize new heads for existing videos as fake swapping inputs, while maintaining frame-synchronized facial poses and expressions. This yields HeadSwapBench, the first cross-identity paired dataset for video head swapping, which supports both training (\TrainNum{} videos) and benchmarking (\TestNum{} videos) with genuine outputs. Leveraging this paired supervision, we propose DirectSwap, a mask-free, direct video head-swapping framework that extends an image U-Net into a video diffusion model with a motion module and conditioning inputs. Furthermore, we introduce the Motion- and Expression-Aware Reconstruction (MEAR) loss, which reweights the diffusion loss per pixel using frame-difference magnitudes and facial-landmark proximity, thereby enhancing cross-frame coherence in motion and expressions. Extensive experiments demonstrate that DirectSwap achieves state-of-the-art visual quality, identity fidelity, and motion and expression consistency across diverse in-the-wild video scenes. We will release the source code and the HeadSwapBench dataset to facilitate future research.
- oai:arXiv.org:2512.09417v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yanan Wang, Shengcai Liao, Panwen Hu, Xin Li, Fan Yang, Xiaodan Liang
+ Pradeep M, Twinkle Tripathy
- Label-free Motion-Conditioned Diffusion Model for Cardiac Ultrasound Synthesis
- https://arxiv.org/abs/2512.09418
- arXiv:2512.09418v1 Announce Type: new
-Abstract: Ultrasound echocardiography is essential for the non-invasive, real-time assessment of cardiac function, but the scarcity of labelled data, driven by privacy restrictions and the complexity of expert annotation, remains a major obstacle for deep learning methods. We propose the Motion Conditioned Diffusion Model (MCDM), a label-free latent diffusion framework that synthesises realistic echocardiography videos conditioned on self-supervised motion features. To extract these features, we design the Motion and Appearance Feature Extractor (MAFE), which disentangles motion and appearance representations from videos. Feature learning is further enhanced by two auxiliary objectives: a re-identification loss guided by pseudo appearance features and an optical flow loss guided by pseudo flow fields. Evaluated on the EchoNet-Dynamic dataset, MCDM achieves competitive video generation performance, producing temporally coherent and clinically realistic sequences without reliance on manual labels. These results demonstrate the potential of self-supervised conditioning for scalable echocardiography synthesis. Our code is available at https://github.com/ZheLi2020/LabelfreeMCDM.
- oai:arXiv.org:2512.09418v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Neural personal sound zones with flexible bright zone control
+ https://arxiv.org/abs/2512.10375
+ arXiv:2512.10375v1 Announce Type: new
+Abstract: Personal sound zone (PSZ) reproduction system, which attempts to create distinct virtual acoustic scenes for different listeners at their respective positions within the same spatial area using one loudspeaker array, is a fundamental technology in the application of virtual reality. For practical applications, the reconstruction targets must be measured on the same fixed receiver array used to record the local room impulse responses (RIRs) from the loudspeaker array to the control points in each PSZ, which makes the system inconvenient and costly for real-world use. In this paper, a 3D convolutional neural network (CNN) designed for PSZ reproduction with flexible control microphone grid and alternative reproduction target is presented, utilizing the virtual target scene as inputs and the PSZ pre-filters as output. Experimental results of the proposed method are compared with the traditional method, demonstrating that the proposed method is able to handle varied reproduction targets on flexible control point grid using only one training session. Furthermore, the proposed method also demonstrates the capability to learn global spatial information from sparse sampling points distributed in PSZs.
+ oai:arXiv.org:2512.10375v1
+ cs.SD
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Zhe Li, Hadrien Reynaud, Johanna P M\"uller, Bernhard Kainz
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wenye Zhu, Jun Tang, Xiaofei Li
- InfoMotion: A Graph-Based Approach to Video Dataset Distillation for Echocardiography
- https://arxiv.org/abs/2512.09422
- arXiv:2512.09422v1 Announce Type: new
-Abstract: Echocardiography playing a critical role in the diagnosis and monitoring of cardiovascular diseases as a non-invasive real-time assessment of cardiac structure and function. However, the growing scale of echocardiographic video data presents significant challenges in terms of storage, computation, and model training efficiency. Dataset distillation offers a promising solution by synthesizing a compact, informative subset of data that retains the key clinical features of the original dataset. In this work, we propose a novel approach for distilling a compact synthetic echocardiographic video dataset. Our method leverages motion feature extraction to capture temporal dynamics, followed by class-wise graph construction and representative sample selection using the Infomap algorithm. This enables us to select a diverse and informative subset of synthetic videos that preserves the essential characteristics of the original dataset. We evaluate our approach on the EchoNet-Dynamic datasets and achieve a test accuracy of \(69.38\%\) using only \(25\) synthetic videos. These results demonstrate the effectiveness and scalability of our method for medical video dataset distillation.
- oai:arXiv.org:2512.09422v1
+ RaLiFlow: Scene Flow Estimation with 4D Radar and LiDAR Point Clouds
+ https://arxiv.org/abs/2512.10376
+ arXiv:2512.10376v1 Announce Type: new
+Abstract: Recent multimodal fusion methods, integrating images with LiDAR point clouds, have shown promise in scene flow estimation. However, the fusion of 4D millimeter wave radar and LiDAR remains unexplored. Unlike LiDAR, radar is cheaper, more robust in various weather conditions and can detect point-wise velocity, making it a valuable complement to LiDAR. However, radar inputs pose challenges due to noise, low resolution, and sparsity. Moreover, there is currently no dataset that combines LiDAR and radar data specifically for scene flow estimation. To address this gap, we construct a Radar-LiDAR scene flow dataset based on a public real-world automotive dataset. We propose an effective preprocessing strategy for radar denoising and scene flow label generation, deriving more reliable flow ground truth for radar points out of the object boundaries. Additionally, we introduce RaLiFlow, the first joint scene flow learning framework for 4D radar and LiDAR, which achieves effective radar-LiDAR fusion through a novel Dynamic-aware Bidirectional Cross-modal Fusion (DBCF) module and a carefully designed set of loss functions. The DBCF module integrates dynamic cues from radar into the local cross-attention mechanism, enabling the propagation of contextual information across modalities. Meanwhile, the proposed loss functions mitigate the adverse effects of unreliable radar data during training and enhance the instance-level consistency in scene flow predictions from both modalities, particularly for dynamic foreground areas. Extensive experiments on the repurposed scene flow dataset demonstrate that our method outperforms existing LiDAR-based and radar-based single-modal methods by a significant margin.
+ oai:arXiv.org:2512.10376v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Zhe Li, Hadrien Reynaud, Alberto Gomez, Bernhard Kainz
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jingyun Fu, Zhiyu Xiang, Na Zhao
- FunPhase: A Periodic Functional Autoencoder for Motion Generation via Phase Manifolds
- https://arxiv.org/abs/2512.09423
- arXiv:2512.09423v1 Announce Type: new
-Abstract: Learning natural body motion remains challenging due to the strong coupling between spatial geometry and temporal dynamics. Embedding motion in phase manifolds, latent spaces that capture local periodicity, has proven effective for motion prediction; however, existing approaches lack scalability and remain confined to specific settings. We introduce FunPhase, a functional periodic autoencoder that learns a phase manifold for motion and replaces discrete temporal decoding with a function-space formulation, enabling smooth trajectories that can be sampled at arbitrary temporal resolutions. FunPhase supports downstream tasks such as super-resolution and partial-body motion completion, generalizes across skeletons and datasets, and unifies motion prediction and generation within a single interpretable manifold. Our model achieves substantially lower reconstruction error than prior periodic autoencoder baselines while enabling a broader range of applications and performing on par with state-of-the-art motion generation methods.
- oai:arXiv.org:2512.09423v1
+ Self-Supervised Contrastive Embedding Adaptation for Endoscopic Image Matching
+ https://arxiv.org/abs/2512.10379
+ arXiv:2512.10379v1 Announce Type: new
+Abstract: Accurate spatial understanding is essential for image-guided surgery, augmented reality integration and context awareness. In minimally invasive procedures, where visual input is the sole intraoperative modality, establishing precise pixel-level correspondences between endoscopic frames is critical for 3D reconstruction, camera tracking, and scene interpretation. However, the surgical domain presents distinct challenges: weak perspective cues, non-Lambertian tissue reflections, and complex, deformable anatomy degrade the performance of conventional computer vision techniques. While Deep Learning models have shown strong performance in natural scenes, their features are not inherently suited for fine-grained matching in surgical images and require targeted adaptation to meet the demands of this domain. This research presents a novel Deep Learning pipeline for establishing feature correspondences in endoscopic image pairs, alongside a self-supervised optimization framework for model training. The proposed methodology leverages a novel-view synthesis pipeline to generate ground-truth inlier correspondences, subsequently utilized for mining triplets within a contrastive learning paradigm. Through this self-supervised approach, we augment the DINOv2 backbone with an additional Transformer layer, specifically optimized to produce embeddings that facilitate direct matching through cosine similarity thresholding. Experimental evaluation demonstrates that our pipeline surpasses state-of-the-art methodologies on the SCARED datasets improved matching precision and lower epipolar error compared to the related work. The proposed framework constitutes a valuable contribution toward enabling more accurate high-level computer vision applications in surgical endoscopy.
+ oai:arXiv.org:2512.10379v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Marco Pegoraro, Evan Atherton, Bruno Roy, Aliasghar Khani, Arianna Rampini
+ http://creativecommons.org/licenses/by/4.0/
+ Alberto Rota, Elena De Momi
- ODMA: On-Demand Memory Allocation Framework for LLM Serving on LPDDR-Class Accelerators
- https://arxiv.org/abs/2512.09427
- arXiv:2512.09427v1 Announce Type: new
-Abstract: Serving large language models (LLMs) on accelerators with poor random-access bandwidth (e.g., LPDDR5-based) is limited by current memory managers. Static pre-allocation wastes memory, while fine-grained paging (e.g., PagedAttention) is ill-suited due to high random-access costs. Existing HBM-centric solutions do not exploit the characteristics of random-access-constrained memory (RACM) accelerators like Cambricon MLU370. We present ODMA, an on-demand memory allocation framework for RACM. ODMA addresses distribution drift and heavy-tailed requests by coupling a lightweight length predictor with dynamic bucket partitioning and a large-bucket safeguard. Boundaries are periodically updated from live traces to maximize utilization. On Alpaca and Google-NQ, ODMA improves prediction accuracy of prior work significantly (e.g., from 82.68% to 93.36%). Serving DeepSeek-R1-Distill-Qwen-7B on Cambricon MLU370-X4, ODMA raises memory utilization from 55.05% to 72.45% and improves RPS and TPS by 29% and 27% over static baselines. This demonstrates that hardware-aware allocation unlocks efficient LLM serving on RACM platforms.
- oai:arXiv.org:2512.09427v1
- cs.AR
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Investigating training objective for flow matching-based speech enhancement
+ https://arxiv.org/abs/2512.10382
+ arXiv:2512.10382v1 Announce Type: new
+Abstract: Speech enhancement(SE) aims to recover clean speech from noisy recordings. Although generative approaches such as score matching and Schrodinger bridge have shown strong effectiveness, they are often computationally expensive. Flow matching offers a more efficient alternative by directly learning a velocity field that maps noise to data. In this work, we present a systematic study of flow matching for SE under three training objectives: velocity prediction, $x_1$ prediction, and preconditioned $x_1$ prediction. We analyze their impact on training dynamics and overall performance. Moreover, by introducing perceptual(PESQ) and signal-based(SI-SDR) objectives, we further enhance convergence efficiency and speech quality, yielding substantial improvements across evaluation metrics.
+ oai:arXiv.org:2512.10382v1
+ cs.SD
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Guoqiang Zou, Wanyu Wang, Hao Zheng, Longxiang Yin, Yinhe Han
+ Liusha Yang, Ziru Ge, Gui Zhang, Junan Zhang, Zhizheng Wu
- A Hierarchical, Model-Based System for High-Performance Humanoid Soccer
- https://arxiv.org/abs/2512.09431
- arXiv:2512.09431v1 Announce Type: new
-Abstract: The development of athletic humanoid robots has gained significant attention as advances in actuation, sensing, and control enable increasingly dynamic, real-world capabilities. RoboCup, an international competition of fully autonomous humanoid robots, provides a uniquely challenging benchmark for such systems, culminating in the long-term goal of competing against human soccer players by 2050. This paper presents the hardware and software innovations underlying our team's victory in the RoboCup 2024 Adult-Sized Humanoid Soccer Competition. On the hardware side, we introduce an adult-sized humanoid platform built with lightweight structural components, high-torque quasi-direct-drive actuators, and a specialized foot design that enables powerful in-gait kicks while preserving locomotion robustness. On the software side, we develop an integrated perception and localization framework that combines stereo vision, object detection, and landmark-based fusion to provide reliable estimates of the ball, goals, teammates, and opponents. A mid-level navigation stack then generates collision-aware, dynamically feasible trajectories, while a centralized behavior manager coordinates high-level decision making, role selection, and kick execution based on the evolving game state. The seamless integration of these subsystems results in fast, precise, and tactically effective gameplay, enabling robust performance under the dynamic and adversarial conditions of real matches. This paper presents the design principles, system architecture, and experimental results that contributed to ARTEMIS's success as the 2024 Adult-Sized Humanoid Soccer champion.
- oai:arXiv.org:2512.09431v1
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Towards Fine-Grained Recognition with Large Visual Language Models: Benchmark and Optimization Strategies
+ https://arxiv.org/abs/2512.10384
+ arXiv:2512.10384v1 Announce Type: new
+Abstract: Large Vision Language Models (LVLMs) have made remarkable progress, enabling sophisticated vision-language interaction and dialogue applications. However, existing benchmarks primarily focus on reasoning tasks, often neglecting fine-grained recognition, which is crucial for practical application scenarios. To address this gap, we introduce the Fine-grained Recognition Open World (FROW) benchmark, designed for detailed evaluation of LVLMs with GPT-4o. On the basis of that, we propose a novel optimization strategy from two perspectives: \textit{data construction} and \textit{training process}, to improve the performance of LVLMs. Our dataset includes mosaic data, which combines multiple short-answer responses, and open-world data, generated from real-world questions and answers using GPT-4o, creating a comprehensive framework for evaluating fine-grained recognition in LVLMs. Experiments show that mosaic data improves category recognition accuracy by 1\% and open-world data boosts FROW benchmark accuracy by 10\%-20\% and content accuracy by 6\%-12\%. Meanwhile, incorporating fine-grained data into the pre-training phase can improve the model's category recognition accuracy by up to 10\%. The benchmark will be available at https://github.com/pc-inno/FROW.
+ oai:arXiv.org:2512.10384v1
+ cs.CV
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Quanyou Wang, Mingzhang Zhu, Ruochen Hou, Kay Gillespie, Alvin Zhu, Shiqi Wang, Yicheng Wang, Gaberiel I. Fernandez, Yeting Liu, Colin Togashi, Hyunwoo Nam, Aditya Navghare, Alex Xu, Taoyuanmin Zhu, Min Sung Ahn, Arturo Flores Alvarez, Justin Quan, Ethan Hong, Dennis W. Hong
+ Cong Pang, Hongtao Yu, Zixuan Chen, Lewei Lu, Xin Lou
- CourtPressGER: A German Court Decision to Press Release Summarization Dataset
- https://arxiv.org/abs/2512.09434
- arXiv:2512.09434v1 Announce Type: new
-Abstract: Official court press releases from Germany's highest courts present and explain judicial rulings to the public, as well as to expert audiences. Prior NLP efforts emphasize technical headnotes, ignoring citizen-oriented communication needs. We introduce CourtPressGER, a 6.4k dataset of triples: rulings, human-drafted press releases, and synthetic prompts for LLMs to generate comparable releases. This benchmark trains and evaluates LLMs in generating accurate, readable summaries from long judicial texts. We benchmark small and large LLMs using reference-based metrics, factual-consistency checks, LLM-as-judge, and expert ranking. Large LLMs produce high-quality drafts with minimal hierarchical performance loss; smaller models require hierarchical setups for long judgments. Initial benchmarks show varying model performance, with human-drafted releases ranking highest.
- oai:arXiv.org:2512.09434v1
- cs.CL
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Improved Small Set Expansion in High Dimensional Expanders
+ https://arxiv.org/abs/2512.10385
+ arXiv:2512.10385v1 Announce Type: new
+Abstract: Small set expansion in high dimensional expanders is of great importance, e.g., towards proving cosystolic expansion, local testability of codes and constructions of good quantum codes.
+ In this work we improve upon the state of the art results of small set expansion in high dimensional expanders. Our improvement is either on the expansion quality or on the size of sets for which expansion is guaranteed.
+ One line of previous works [KM22, DD24] has obtained weak expansion for small sets, which is sufficient for deducing cosystolic expansion of one dimension below. We improve upon their result by showing strong expansion for small sets.
+ Another line of works [KKL14, EK16, KM21] has shown strong expansion for small sets. However, they obtain it only for very small sets. We get an exponential improvement on the size of sets for which expansion is guaranteed by these prior works.
+ Interestingly, our result is obtained by bridging between these two lines of works. The works of [KM22, DD24] use global averaging operators in order to obtain expansion for larger sets. However, their method could be utilized only on sets that are cocycle-like. We show how to combine these global averaging operators with ideas from the so-called ``fat machinery'' of [KKL14, EK16, KM21] in order to apply them for general sets.
+ oai:arXiv.org:2512.10385v1
+ cs.CC
+ math.CO
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Sebastian Nagl, Mohamed Elganayni, Melanie Pospisil, Matthias Grabmair
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tali Kaufman, David Mass
- UniPart: Part-Level 3D Generation with Unified 3D Geom-Seg Latents
- https://arxiv.org/abs/2512.09435
- arXiv:2512.09435v1 Announce Type: new
-Abstract: Part-level 3D generation is essential for applications requiring decomposable and structured 3D synthesis. However, existing methods either rely on implicit part segmentation with limited granularity control or depend on strong external segmenters trained on large annotated datasets. In this work, we observe that part awareness emerges naturally during whole-object geometry learning and propose Geom-Seg VecSet, a unified geometry-segmentation latent representation that jointly encodes object geometry and part-level structure. Building on this representation, we introduce UniPart, a two-stage latent diffusion framework for image-guided part-level 3D generation. The first stage performs joint geometry generation and latent part segmentation, while the second stage conditions part-level diffusion on both whole-object and part-specific latents. A dual-space generation scheme further enhances geometric fidelity by predicting part latents in both global and canonical spaces. Extensive experiments demonstrate that UniPart achieves superior segmentation controllability and part-level geometric quality compared with existing approaches.
- oai:arXiv.org:2512.09435v1
+ Adaptive Dual-Weighted Gravitational Point Cloud Denoising Method
+ https://arxiv.org/abs/2512.10386
+ arXiv:2512.10386v1 Announce Type: new
+Abstract: High-quality point cloud data is a critical foundation for tasks such as autonomous driving and 3D reconstruction. However, LiDAR-based point cloud acquisition is often affected by various disturbances, resulting in a large number of noise points that degrade the accuracy of subsequent point cloud object detection and recognition. Moreover, existing point cloud denoising methods typically sacrifice computational efficiency in pursuit of higher denoising accuracy, or, conversely, improve processing speed at the expense of preserving object boundaries and fine structural details, making it difficult to simultaneously achieve high denoising accuracy, strong edge preservation, and real-time performance. To address these limitations, this paper proposes an adaptive dual-weight gravitational-based point cloud denoising method. First, an octree is employed to perform spatial partitioning of the global point cloud, enabling parallel acceleration. Then, within each leaf node, adaptive voxel-based occupancy statistics and k-nearest neighbor (kNN) density estimation are applied to rapidly remove clearly isolated and low-density noise points, thereby reducing the effective candidate set. Finally, a gravitational scoring function that combines density weights with adaptive distance weights is constructed to finely distinguish noise points from object points. Experiments conducted on the Stanford 3D Scanning Repository, the Canadian Adverse Driving Conditions (CADC) dataset, and in-house FMCW LiDAR point clouds acquired in our laboratory demonstrate that, compared with existing methods, the proposed approach achieves consistent improvements in F1, PSNR, and Chamfer Distance (CD) across various noise conditions while reducing the single-frame processing time, thereby validating its high accuracy, robustness, and real-time performance in multi-noise scenarios.
+ oai:arXiv.org:2512.10386v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Xufan He, Yushuang Wu, Xiaoyang Guo, Chongjie Ye, Jiaqing Zhou, Tianlei Hu, Xiaoguang Han, Dong Du
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ge Zhang, Chunyang Wang, Bo Xiao, Xuelian Liu, Bin Liu
- Knowledge-Augmented Large Language Model Agents for Explainable Financial Decision-Making
- https://arxiv.org/abs/2512.09440
- arXiv:2512.09440v1 Announce Type: new
-Abstract: This study investigates an explainable reasoning method for financial decision-making based on knowledge-enhanced large language model agents. To address the limitations of traditional financial decision methods that rely on parameterized knowledge, lack factual consistency, and miss reasoning chains, an integrated framework is proposed that combines external knowledge retrieval, semantic representation, and reasoning generation. The method first encodes financial texts and structured data to obtain semantic representations, and then retrieves task-related information from external knowledge bases using similarity computation. Internal representations and external knowledge are combined through weighted fusion, which ensures fluency while improving factual accuracy and completeness of generated content. In the reasoning stage, a multi-head attention mechanism is introduced to construct logical chains, allowing the model to present transparent causal relationships and traceability during generation. Finally, the model jointly optimizes task objectives and explanation consistency objectives, which enhances predictive performance and reasoning interpretability. Experiments on financial text processing and decision tasks show that the method outperforms baseline approaches in accuracy, text generation quality, and factual support, verifying the effectiveness of knowledge enhancement and explainable reasoning. Overall, the proposed approach overcomes the limitations of traditional models in semantic coverage and reasoning transparency, and demonstrates strong practical value in complex financial scenarios.
- oai:arXiv.org:2512.09440v1
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ A gradient descent algorithm for computing circle patterns
+ https://arxiv.org/abs/2512.10387
+ arXiv:2512.10387v1 Announce Type: new
+Abstract: This paper presents a new algorithm for generating planar circle patterns. The algorithm employs gradient descent and conjugate gradient method to compute circle radii and centers separately. Compared with existing algorithms, the proposed method is more efficient in computing centers of circles and is applicable for realizing circle patterns with possible obtuse overlap angles.
+ oai:arXiv.org:2512.10387v1
+ cs.CG
+ math.CO
+ math.MG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qingyuan Zhang, Yuxi Wang, Cancan Hua, Yulin Huang, Ning Lyu
+ Te Ba, Ze Zhou
- Representation Calibration and Uncertainty Guidance for Class-Incremental Learning based on Vision Language Model
- https://arxiv.org/abs/2512.09441
- arXiv:2512.09441v1 Announce Type: new
-Abstract: Class-incremental learning requires a learning system to continually learn knowledge of new classes and meanwhile try to preserve previously learned knowledge of old classes. As current state-of-the-art methods based on Vision-Language Models (VLMs) still suffer from the issue of differentiating classes across learning tasks. Here a novel VLM-based continual learning framework for image classification is proposed. In this framework, task-specific adapters are added to the pre-trained and frozen image encoder to learn new knowledge, and a novel cross-task representation calibration strategy based on a mixture of light-weight projectors is used to help better separate all learned classes in a unified feature space, alleviating class confusion across tasks. In addition, a novel inference strategy guided by prediction uncertainty is developed to more accurately select the most appropriate image feature for class prediction. Extensive experiments on multiple datasets under various settings demonstrate the superior performance of our method compared to existing ones.
- oai:arXiv.org:2512.09441v1
- cs.CV
+ The Best of the Two Worlds: Harmonizing Semantic and Hash IDs for Sequential Recommendation
+ https://arxiv.org/abs/2512.10388
+ arXiv:2512.10388v1 Announce Type: new
+Abstract: Conventional Sequential Recommender Systems (SRS) typically assign unique Hash IDs (HID) to construct item embeddings. These HID embeddings effectively learn collaborative information from historical user-item interactions, making them vulnerable to situations where most items are rarely consumed (the long-tail problem). Recent methods that incorporate auxiliary information often suffer from noisy collaborative sharing caused by co-occurrence signals or semantic homogeneity caused by flat dense embeddings. Semantic IDs (SIDs), with their capability of code sharing and multi-granular semantic modeling, provide a promising alternative. However, the collaborative overwhelming phenomenon hinders the further development of SID-based methods. The quantization mechanisms commonly compromise the uniqueness of identifiers required for modeling head items, creating a performance seesaw between head and tail items. To address this dilemma, we propose \textbf{\name}, a novel framework that harmonizes the SID and HID. Specifically, we devise a dual-branch modeling architecture that enables the model to capture both the multi-granular semantics within SID while preserving the unique collaborative identity of HID. Furthermore, we introduce a dual-level alignment strategy that bridges the two representations, facilitating knowledge transfer and supporting robust preference modeling. Extensive experiments on three real-world datasets show that \name~ effectively balances recommendation quality for both head and tail items while surpassing the existing baselines. The implementation code can be found online\footnote{https://github.com/ziwliu8/H2Rec}.
+ oai:arXiv.org:2512.10388v1
+ cs.IRcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-sa/4.0/
- Jiantao Tan, Peixian Ma, Tong Yu, Wentao Zhang, Ruixuan Wang
+ http://creativecommons.org/licenses/by/4.0/
+ Ziwei Liu, Yejing Wang, Qidong Liu, Zijian Zhang, Chong Chen, Wei Huang, Xiangyu Zhao
- Reference Recommendation based Membership Inference Attack against Hybrid-based Recommender Systems
- https://arxiv.org/abs/2512.09442
- arXiv:2512.09442v1 Announce Type: new
-Abstract: Recommender systems have been widely deployed across various domains such as e-commerce and social media, and intelligently suggest items like products and potential friends to users based on their preferences and interaction history, which are often privacy-sensitive. Recent studies have revealed that recommender systems are prone to membership inference attacks (MIAs), where an attacker aims to infer whether or not a user's data has been used for training a target recommender system. However, existing MIAs fail to exploit the unique characteristic of recommender systems, and therefore are only applicable to mixed recommender systems consisting of two recommendation algorithms. This leaves a gap in investigating MIAs against hybrid-based recommender systems where the same algorithm utilizing user-item historical interactions and attributes of users and items serves and produces personalised recommendations. To investigate how the personalisation in hybrid-based recommender systems influences MIA, we propose a novel metric-based MIA. Specifically, we leverage the characteristic of personalisation to obtain reference recommendation for any target users. Then, a relative membership metric is proposed to exploit a target user's historical interactions, target recommendation, and reference recommendation to infer the membership of the target user's data. Finally, we theoretically and empirically demonstrate the efficacy of the proposed metric-based MIA on hybrid-based recommender systems.
- oai:arXiv.org:2512.09442v1
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ The $k$-flip Ising game
+ https://arxiv.org/abs/2512.10389
+ arXiv:2512.10389v1 Announce Type: new
+Abstract: A partially parallel dynamical noisy binary choice (Ising) game in discrete time of $N$ players on complete graphs with $k$ players having a possibility of changing their strategies at each time moment called $k$-flip Ising game is considered. Analytical calculation of the transition matrix of game as well as the first two moments of the distribution of $\varphi=N^+/N$, where $N^+$ is a number of players adhering to one of the two strategies, is presented. First two moments of the first hitting time distribution for sample trajectories corresponding to transition from a metastable and unstable states to a stable one are considered. A nontrivial dependence of these moments on $k$ for the decay of a metastable state is discussed. A presence of the minima at certain $k^*$ is attributed to a competition between $k$-dependent diffusion and restoring forces.
+ oai:arXiv.org:2512.10389v1
+ cs.GT
+ cond-mat.stat-mech
+ physics.soc-ph
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xiaoxiao Chi, Xuyun Zhang, Yan Wang, Hongsheng Hu, Wanchun Dou
+ http://creativecommons.org/licenses/by/4.0/
+ Kovalenko Aleksandr, Andrey Leonidov
- Advancing Research via Human-AI Interactive Theorem Proving
- https://arxiv.org/abs/2512.09443
- arXiv:2512.09443v1 Announce Type: new
-Abstract: We investigate how large language models can be used as research tools in scientific computing while preserving mathematical rigor. We propose a human-in-the-loop workflow for interactive theorem proving and discovery with LLMs. Human experts retain control over problem formulation and admissible assumptions, while the model searches for proofs or contradictions, proposes candidate properties and theorems, and helps construct structures and parameters that satisfy explicit constraints, supported by numerical experiments and simple verification checks. Experts treat these outputs as raw material, further refine them, and organize the results into precise statements and rigorous proofs. We instantiate this workflow in a case study on the connection between manifold optimization and Grover's quantum search algorithm, where the pipeline helps identify invariant subspaces, explore Grover-compatible retractions, and obtain convergence guarantees for the retraction-based gradient method. The framework provides a practical template for integrating large language models into frontier mathematical research, enabling faster exploration of proof space and algorithm design while maintaining transparent reasoning responsibilities. Although illustrated on manifold optimization problems in quantum computing, the principles extend to other core areas of scientific computing.
- oai:arXiv.org:2512.09443v1
- cs.HC
- cs.AI
- math.OC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fitting magnetization data using continued fraction of straight lines
+ https://arxiv.org/abs/2512.10390
+ arXiv:2512.10390v1 Announce Type: new
+Abstract: Magnetization of a ferromagnetic substance in response to an externally applied magnetic field increases with the strength of the field. This is because at the microscopic level, magnetic moments in certain regions or domains of the substance increasingly align with the applied field, while the amount of misaligned domains decreases. The alignment of such magnetic domains with an applied magnetic field forms the physical basis for the nonlinearity of magnetization. In this paper, the nonlinear function is approximated as a combination of continued fraction of straight lines. The resulting fit is used to interpret the nonlinear behavior in both growing and shrinking magnetic domains. The continued fraction of straight lines used here is an algebraic expression which can be used to estimate parameters using nonlinear regression.
+ oai:arXiv.org:2512.10390v1
+ cs.LG
+ cond-mat.mtrl-sci
+ physics.class-ph
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Chenyi Li, Zhijian Lai, Dong An, Jiang Hu, Zaiwen Wen
+ Vijay Prakash S
- Advancing Text Classification with Large Language Models and Neural Attention Mechanisms
- https://arxiv.org/abs/2512.09444
- arXiv:2512.09444v1 Announce Type: new
-Abstract: This study proposes a text classification algorithm based on large language models, aiming to address the limitations of traditional methods in capturing long-range dependencies, understanding contextual semantics, and handling class imbalance. The framework includes text encoding, contextual representation modeling, attention-based enhancement, feature aggregation, and classification prediction. In the representation stage, deep semantic embeddings are obtained through large-scale pretrained language models, and attention mechanisms are applied to enhance the selective representation of key features. In the aggregation stage, global and weighted strategies are combined to generate robust text-level vectors. In the classification stage, a fully connected layer and Softmax output are used to predict class distributions, and cross-entropy loss is employed to optimize model parameters. Comparative experiments introduce multiple baseline models, including recurrent neural networks, graph neural networks, and Transformers, and evaluate them on Precision, Recall, F1-Score, and AUC. Results show that the proposed method outperforms existing models on all metrics, with especially strong improvements in Recall and AUC. In addition, sensitivity experiments are conducted on hyperparameters and data conditions, covering the impact of hidden dimensions on AUC and the impact of class imbalance ratios on Recall. The findings demonstrate that proper model configuration has a significant effect on performance and reveal the adaptability and stability of the model under different conditions. Overall, the proposed text classification method not only achieves effective performance improvement but also verifies its robustness and applicability in complex data environments through systematic analysis.
- oai:arXiv.org:2512.09444v1
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Collision-Aware Density-Driven Control of Multi-Agent Systems via Control Barrier Functions
+ https://arxiv.org/abs/2512.10392
+ arXiv:2512.10392v1 Announce Type: new
+Abstract: This paper tackles the problem of safe and efficient area coverage using a multi-agent system operating in environments with obstacles. Applications such as environmental monitoring and search and rescue require robot swarms to cover large domains under resource constraints, making both coverage efficiency and safety essential. To address the efficiency aspect, we adopt the Density-Driven Control (D$^2$C) framework, which uses optimal transport theory to steer agents according to a reference distribution that encodes spatial coverage priorities. To ensure safety, we incorporate Control Barrier Functions (CBFs) into the framework. While CBFs are commonly used for collision avoidance, we extend their applicability by introducing obstacle-specific formulations for both circular and rectangular shapes. In particular, we analytically derive a unit normal vector based on the agent's position relative to the nearest face of a rectangular obstacle, improving safety enforcement in environments with non-smooth boundaries. Additionally, a velocity-dependent term is incorporated into the CBF to enhance collision avoidance. Simulation results validate the proposed method by demonstrating smoother navigation near obstacles and more efficient area coverage than the existing method, while still ensuring collision-free operation.
+ oai:arXiv.org:2512.10392v1
+ eess.SY
+ cs.SY
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ning Lyu, Yuxi Wang, Feng Chen, Qingyuan Zhang
+ Sungjun Seo, Kooktae Lee
- Defect-aware Hybrid Prompt Optimization via Progressive Tuning for Zero-Shot Multi-type Anomaly Detection and Segmentation
- https://arxiv.org/abs/2512.09446
- arXiv:2512.09446v1 Announce Type: new
-Abstract: Recent vision language models (VLMs) like CLIP have demonstrated impressive anomaly detection performance under significant distribution shift by utilizing high-level semantic information through text prompts. However, these models often neglect fine-grained details, such as which kind of anomalies, like "hole", "cut", "scratch" that could provide more specific insight into the nature of anomalies. We argue that recognizing fine-grained anomaly types 1) enriches the representation of "abnormal" with structured semantics, narrowing the gap between coarse anomaly signals and fine-grained defect categories; 2) enables manufacturers to understand the root causes of the anomaly and implement more targeted and appropriate corrective measures quickly. While incorporating such detailed semantic information is crucial, designing handcrafted prompts for each defect type is both time-consuming and susceptible to human bias. For this reason, we introduce DAPO, a novel approach for Defect-aware Prompt Optimization based on progressive tuning for the zero-shot multi-type and binary anomaly detection and segmentation under distribution shifts. Our approach aligns anomaly-relevant image features with their corresponding text semantics by learning hybrid defect-aware prompts with both fixed textual anchors and learnable token embeddings. We conducted experiments on public benchmarks (MPDD, VisA, MVTec-AD, MAD, and Real-IAD) and an internal dataset. The results suggest that compared to the baseline models, DAPO achieves a 3.7% average improvement in AUROC and average precision metrics at the image level under distribution shift, and a 6.5% average improvement in localizing novel anomaly types under zero-shot settings.
- oai:arXiv.org:2512.09446v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Cross-modal Retrieval Models for Stripped Binary Analysis
+ https://arxiv.org/abs/2512.10393
+ arXiv:2512.10393v1 Announce Type: new
+Abstract: LLM-agent based binary code analysis has demonstrated significant potential across a wide range of software security scenarios, including vulnerability detection, malware analysis, etc. In agent workflow, however, retrieving the positive from thousands of stripped binary functions based on user query remains under-studied and challenging, as the absence of symbolic information distinguishes it from source code retrieval. In this paper, we introduce, BinSeek, the first two-stage cross-modal retrieval framework for stripped binary code analysis. It consists of two models: BinSeekEmbedding is trained on large-scale dataset to learn the semantic relevance of the binary code and the natural language description, furthermore, BinSeek-Reranker learns to carefully judge the relevance of the candidate code to the description with context augmentation. To this end, we built an LLM-based data synthesis pipeline to automate training construction, also deriving a domain benchmark for future research. Our evaluation results show that BinSeek achieved the state-of-the-art performance, surpassing the the same scale models by 31.42% in Rec@3 and 27.17% in MRR@3, as well as leading the advanced general-purpose models that have 16 times larger parameters.
+ oai:arXiv.org:2512.10393v1
+ cs.SE
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by-nc-sa/4.0/
- Nadeem Nazer, Hongkuan Zhou, Lavdim Halilaj, Ylli Sadikaj, Steffen Staab
+ Guoqiang Chen, Lingyun Ying, Ziyang Song, Daguang Liu, Qiang Wang, Zhiqi Wang, Li Hu, Shaoyin Cheng, Weiming Zhang, Nenghai Yu
- Sequential Testing for Descriptor-Agnostic LiDAR Loop Closure in Repetitive Environments
- https://arxiv.org/abs/2512.09447
- arXiv:2512.09447v1 Announce Type: new
-Abstract: We propose a descriptor-agnostic, multi-frame loop closure verification method that formulates LiDAR loop closure as a truncated Sequential Probability Ratio Test (SPRT). Instead of deciding from a single descriptor comparison or using fixed thresholds with late-stage Iterative Closest Point (ICP) vetting, the verifier accumulates a short temporal stream of descriptor similarities between a query and each candidate. It then issues an accept/reject decision adaptively once sufficient multi-frame evidence has been observed, according to user-specified Type-I/II error design targets. This precision-first policy is designed to suppress false positives in structurally repetitive indoor environments. We evaluate the verifier on a five-sequence library dataset, using a fixed retrieval front-end with several representative LiDAR global descriptors. Performance is assessed via segment-level K-hit precision-recall and absolute trajectory error (ATE) and relative pose error (RPE) after pose graph optimization. Across descriptors, the sequential verifier consistently improves precision and reduces the impact of aliased loops compared with single-frame and heuristic multi-frame baselines. Our implementation and dataset will be released at: https://github.com/wanderingcar/snu_library_dataset.
- oai:arXiv.org:2512.09447v1
+ RoboNeuron: A Modular Framework Linking Foundation Models and ROS for Embodied AI
+ https://arxiv.org/abs/2512.10394
+ arXiv:2512.10394v1 Announce Type: new
+Abstract: Current embodied AI systems face severe engineering impediments, primarily characterized by poor cross-scenario adaptability, rigid inter-module coupling, and fragmented inference acceleration. To overcome these limitations, we propose RoboNeuron, a universal deployment framework for embodied intelligence. RoboNeuron is the first framework to deeply integrate the cognitive capabilities of Large Language Models (LLMs) and Vision-Language-Action (VLA) models with the real-time execution backbone of the Robot Operating System (ROS). We utilize the Model Context Protocol (MCP) as a semantic bridge, enabling the LLM to dynamically orchestrate underlying robotic tools. The framework establishes a highly modular architecture that strictly decouples sensing, reasoning, and control by leveraging ROS's unified communication interfaces. Crucially, we introduce an automated tool to translate ROS messages into callable MCP functions, significantly streamlining development. RoboNeuron significantly enhances cross-scenario adaptability and component flexibility, while establishing a systematic platform for horizontal performance benchmarking, laying a robust foundation for scalable real-world embodied applications.
+ oai:arXiv.org:2512.10394v1cs.RO
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Jaehyun Kim, Seungwon Choi, Tae-Wan Kim
+ Weifan Guan, Huasen Xi, Chenxiao Zhang, Aosheng Li, Qinghao Hu, Jian Cheng
- Power Control of Multi-Layer Repeater Networks (POLARNet)
- https://arxiv.org/abs/2512.09449
- arXiv:2512.09449v1 Announce Type: new
-Abstract: In this letter we introduce POLARNet -- power control of multi-layer repeater networks -- for local optimization of SNR given different repeater power constraints. We assume relays or repeaters in groups or layers spatially separated. Under ideal circumstances SISO narrow-band communication and TDD, the system may be viewed as a dual to a deep neural network, where activations, corresponding to repeater amplifications, are optimized and weight matrices, corresponding to channel matrices, are static. Repeater amplifications are locally optimized layer-by-layer in a forward-backward manner over compact sets. The method is applicable for a wide range of constraints on within-layer power/energy utilization, is furthermore gradient-free, step-size-free, and has proven monotonicity in the objective. Numerical simulations show significant improvement compared to upper bounds on the expected SNR. In addition, power distribution over multiple repeaters is shown to be superior to optimal selection of single repeaters in the layers.
- oai:arXiv.org:2512.09449v1
- eess.SY
- cs.SY
- eess.SP
- Thu, 11 Dec 2025 00:00:00 -0500
+ Robust Crop Planning under Uncertainty: Aligning Economic Optimality with Agronomic Sustainability
+ https://arxiv.org/abs/2512.10396
+ arXiv:2512.10396v1 Announce Type: new
+Abstract: Long-horizon agricultural planning requires optimizing crop allocation under complex spatial heterogeneity, temporal agronomic dependencies, and multi-source environmental uncertainty. Existing approaches often treat crop interactions, such as legume-cereal complementarity, which implicitly or rely on static deterministic formulations that fail to guarantee resilience against market and climate volatility. To address these challenges, we propose a Multi-Layer Robust Crop Planning Framework (MLRCPF) that integrates spatial reasoning, temporal dynamics, and robust optimization. Specifically, we formalize crop-to-crop relationships through a structured interaction matrix embedded within the state-transition logic, and employ a distributionally robust optimization layer to mitigate worst-case risks defined by a data-driven ambiguity set. Evaluations on a real-world high-mix farming dataset from North China demonstrate the effectiveness of the proposed approach. The framework autonomously generates sustainable checkerboard rotation patterns that restore soil fertility, significantly increasing the legume planting ratio compared to deterministic baselines. Economically, it successfully resolves the trade-off between optimality and stability. These results highlight the importance of explicitly encoding domain-specific structural priors into optimization models for resilient decision-making in complex agricultural systems.
+ oai:arXiv.org:2512.10396v1
+ cs.CE
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Johan Siwerson, Johan Thunberg
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Runhao Liu, Ziming Chen, You Li, Peng Zhang
- BlockFLEX: An Adaptive and Survivable Architecture with Hierarchical Routing for LEO Satellite Networks
- https://arxiv.org/abs/2512.09453
- arXiv:2512.09453v1 Announce Type: new
-Abstract: This paper presents \textbf{BlockFLEX}, an adaptive and survivable architecture with a hierarchical routing scheme for Low Earth Orbit satellite networks, designed to address dynamic topology changes and severe link failures.
- By organizing satellites into autonomous blocks, BlockFLEX establishes a survivable underlay network that masks network volatility and offers a stable overlay view. The architecture employs a hierarchical routing scheme integrating both convergence-free geographic routing and convergence-isolated routing. Furthermore, BlockFLEX adaptively switches between stateful and stateless forwarding modes, enabling efficient, resilient, and stable routing via a dedicated protection mechanism and an optimized source satellite selection algorithm.
- Experimental evaluations on current operational LEO satellite networks (LSNs) demonstrate that under scenarios with up to 30\% random link failures, the proposed method achieves a $2\times$ improvement in reachability compared to current leading schemes, while maintaining near-100\% routing availability. Moreover, the overhead of control messages and forwarding information base (FIB) updates remains below $0.2\%$ of that in OSPF, accompanied by a $\geq 36\%$ reduction in routing computation time and a $\geq 50\%$ decrease in latency jitter.
- oai:arXiv.org:2512.09453v1
- cs.NI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Confucius Code Agent: An Open-sourced AI Software Engineer at Industrial Scale
+ https://arxiv.org/abs/2512.10398
+ arXiv:2512.10398v1 Announce Type: new
+Abstract: Real-world AI software engineering demands coding agents that can reason over massive repositories, maintain durable memory across and within long sessions, and robustly coordinate complex toolchains at test time. Existing open-source coding agents provide transparency but frequently fall short when pushed to these industrial-scale workloads, while proprietary coding agents offer strong practical performance but limited extensibility, interpretability, and controllability. We present the Confucius Code Agent (CCA), an open-sourced AI software engineer that can operate at an industrial scale. CCA is built atop the Confucius SDK, an open-sourced agent development platform designed around three complementary perspectives: Agent Experience (AX), User Experience (UX), and Developer Experience (DX). The SDK introduces a unified orchestrator with hierarchical working memory for long-context reasoning, a persistent note-taking system for cross-session continual learning, and a modular extension module for robust tool use. Moreover, a meta-agent automates the synthesis, evaluation, and refinement of agent configurations through a build-test-improve loop, enabling rapid agent development on new tasks, environments, and tool stacks. Instantiated on Confucius SDK with these mechanisms, CCA delivers strong performance on real-world software engineering tasks. On SWE-Bench-Pro, CCA achieves a state-of-the-art Resolve@1 performance of 54.3%, substantially improving over prior coding agents. Together, the Confucius SDK and CCA provide a transparent, extensible, and reproducible foundation for AI agents, bridge gaps between research prototypes and production-grade systems, and support agent development and deployment at industrial scale.
+ oai:arXiv.org:2512.10398v1
+ cs.CL
+ cs.AI
+ cs.LG
+ cs.SE
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Xiangtong Wang
-
-
- $t$-Fold $s$-Blocking Sets and $s$-Minimal Codes
- https://arxiv.org/abs/2512.09457
- arXiv:2512.09457v1 Announce Type: new
-Abstract: Blocking sets and minimal codes have been studied for many years in projective geometry and coding theory. In this paper, we provide a new lower bound on the size of $t$-fold $s$-blocking sets without the condition $t \leq q$, which is stronger than the classical result of Beutelspacher in 1983. Then a lower bound on lengths of projective $s$-minimal codes is also obtained. It is proved that $(s+1)$-minimal codes are certainly $s$-minimal codes. We generalize the Ashikhmin-Barg condition for minimal codes to $s$-minimal codes. Many infinite families of $s$-minimal codes satisfying and violating this generalized Ashikhmin-Barg condition are constructed. We also give several examples which are binary minimal codes, but not $2$-minimal codes.
- oai:arXiv.org:2512.09457v1
- cs.IT
- math.IT
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/publicdomain/zero/1.0/
- Hao Chen, Xu Pan, Conghui Xie
+ Zhaodong Wang, Zhenting Qi, Sherman Wong, Nathan Hu, Samuel Lin, Jun Ge, Erwin Gao, Yining Yang, Ben Maurer, Wenlin Chen, David Recordon, Yilun Du, Minlan Yu, Ying Zhang
- Architectures for Building Agentic AI
- https://arxiv.org/abs/2512.09458
- arXiv:2512.09458v1 Announce Type: new
-Abstract: This chapter argues that the reliability of agentic and generative AI is chiefly an architectural property. We define agentic systems as goal-directed, tool-using decision makers operating in closed loops, and show how reliability emerges from principled componentisation (goal manager, planner, tool-router, executor, memory, verifiers, safety monitor, telemetry), disciplined interfaces (schema-constrained, validated, least-privilege tool calls), and explicit control and assurance loops. Building on classical foundations, we propose a practical taxonomy-tool-using agents, memory-augmented agents, planning and self-improvement agents, multi-agent systems, and embodied or web agents - and analyse how each pattern reshapes the reliability envelope and failure modes. We distil design guidance on typed schemas, idempotency, permissioning, transactional semantics, memory provenance and hygiene, runtime governance (budgets, termination conditions), and simulate-before-actuate safeguards.
- oai:arXiv.org:2512.09458v1
- cs.AI
+ The Eminence in Shadow: Exploiting Feature Boundary Ambiguity for Robust Backdoor Attacks
+ https://arxiv.org/abs/2512.10402
+ arXiv:2512.10402v1 Announce Type: new
+Abstract: Deep neural networks (DNNs) underpin critical applications yet remain vulnerable to backdoor attacks, typically reliant on heuristic brute-force methods. Despite significant empirical advancements in backdoor research, the lack of rigorous theoretical analysis limits understanding of underlying mechanisms, constraining attack predictability and adaptability. Therefore, we provide a theoretical analysis targeting backdoor attacks, focusing on how sparse decision boundaries enable disproportionate model manipulation. Based on this finding, we derive a closed-form, ambiguous boundary region, wherein negligible relabeled samples induce substantial misclassification. Influence function analysis further quantifies significant parameter shifts caused by these margin samples, with minimal impact on clean accuracy, formally grounding why such low poison rates suffice for efficacious attacks. Leveraging these insights, we propose Eminence, an explainable and robust black-box backdoor framework with provable theoretical guarantees and inherent stealth properties. Eminence optimizes a universal, visually subtle trigger that strategically exploits vulnerable decision boundaries and effectively achieves robust misclassification with exceptionally low poison rates (< 0.1%, compared to SOTA methods typically requiring > 1%). Comprehensive experiments validate our theoretical discussions and demonstrate the effectiveness of Eminence, confirming an exponential relationship between margin poisoning and adversarial boundary manipulation. Eminence maintains > 90% attack success rate, exhibits negligible clean-accuracy loss, and demonstrates high transferability across diverse models, datasets and scenarios.
+ oai:arXiv.org:2512.10402v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- S{\l}awomir Nowaczyk
+ 10.1145/3770854.3780322
+ Zhou Feng, Jiahao Chen, Chunyi Zhou, Yuwen Pu, Tianyu Du, Jinbao Li, Jianhai Chen, Shouling Ji
- The Complex-Step Integral Transform
- https://arxiv.org/abs/2512.09459
- arXiv:2512.09459v1 Announce Type: new
-Abstract: Building on the well-established connection between the Hilbert transform and derivative operators, and motivated by recent developments in complex-step differentiation, we introduce the Complex-Step Integral Transform (CSIT): a generalized integral transform that combines analytic continuation, derivative approximation, and multi-scale smoothing within a unified framework. A spectral analysis shows that the CSIT preserves phase while suppressing high-wavenumber noise, offering advantages over conventional Fourier derivatives. We discuss the roles of the real and imaginary step parameters, compare FFT-based and interpolation-based implementations, and demonstrate the method on the advection equation and instantaneous-frequency computation. Results show that the CSIT yields smoother, more robust attributes than Hilbert-based methods and provides built-in stabilization for PDE solvers. The CSIT thus represents a flexible alternative for numerical differentiation, spectral analysis, and seismic signal processing. The method opens several avenues for future work, including non-periodic implementations, adaptive parameter selection, and integration with local interpolation frameworks such as high-order Finite-Element methods.
- oai:arXiv.org:2512.09459v1
- math.NA
- cs.NA
- physics.geo-ph
- Thu, 11 Dec 2025 00:00:00 -0500
+ BRACE: A Benchmark for Robust Audio Caption Quality Evaluation
+ https://arxiv.org/abs/2512.10403
+ arXiv:2512.10403v1 Announce Type: new
+Abstract: Automatic audio captioning is essential for audio understanding, enabling applications such as accessibility and content indexing. However, evaluating the quality of audio captions remains a major challenge, especially in reference-free settings where high-quality ground-truth captions are unavailable. While CLAPScore is currently the most widely used reference-free Audio Caption Evaluation Metric(ACEM), its robustness under diverse conditions has not been systematically validated.
+ To address this gap, we introduce BRACE, a new benchmark designed to evaluate audio caption alignment quality in a reference-free setting. BRACE is primarily designed for assessing ACEMs, and can also be extended to measure the modality alignment abilities of Large Audio Language Model(LALM). BRACE consists of two sub-benchmarks: BRACE-Main for fine-grained caption comparison and BRACE-Hallucination for detecting subtle hallucinated content. We construct these datasets through high-quality filtering, LLM-based corruption, and human annotation.
+ Given the widespread adoption of CLAPScore as a reference-free ACEM and the increasing application of LALMs in audio-language tasks, we evaluate both approaches using the BRACE benchmark, testing CLAPScore across various CLAP model variants and assessing multiple LALMs.
+ Notably, even the best-performing CLAP-based ACEM achieves only a 70.01 F1-score on the BRACE-Main benchmark, while the best LALM reaches just 63.19.
+ By revealing the limitations of CLAP models and LALMs, our BRACE benchmark offers valuable insights into the direction of future research.
+ oai:arXiv.org:2512.10403v1
+ cs.SD
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Rafael Abreu, Stephanie Durand, Jochen Kamm, Christine Thomas, Monika Pandey
+ Tianyu Guo, Hongyu Chen, Hao Liang, Meiyi Qiang, Bohan Zeng, Linzhuang Sun, Bin Cui, Wentao Zhang
- Cytoplasmic Strings Analysis in Human Embryo Time-Lapse Videos using Deep Learning Framework
- https://arxiv.org/abs/2512.09461
- arXiv:2512.09461v1 Announce Type: new
-Abstract: Infertility is a major global health issue, and while in-vitro fertilization has improved treatment outcomes, embryo selection remains a critical bottleneck. Time-lapse imaging enables continuous, non-invasive monitoring of embryo development, yet most automated assessment methods rely solely on conventional morphokinetic features and overlook emerging biomarkers. Cytoplasmic Strings, thin filamentous structures connecting the inner cell mass and trophectoderm in expanded blastocysts, have been associated with faster blastocyst formation, higher blastocyst grades, and improved viability. However, CS assessment currently depends on manual visual inspection, which is labor-intensive, subjective, and severely affected by detection and subtle visual appearance. In this work, we present, to the best of our knowledge, the first computational framework for CS analysis in human IVF embryos. We first design a human-in-the-loop annotation pipeline to curate a biologically validated CS dataset from TLI videos, comprising 13,568 frames with highly sparse CS-positive instances. Building on this dataset, we propose a two-stage deep learning framework that (i) classifies CS presence at the frame level and (ii) localizes CS regions in positive cases. To address severe imbalance and feature uncertainty, we introduce the Novel Uncertainty-aware Contractive Embedding (NUCE) loss, which couples confidence-aware reweighting with an embedding contraction term to form compact, well-separated class clusters. NUCE consistently improves F1-score across five transformer backbones, while RF-DETR-based localization achieves state-of-the-art (SOTA) detection performance for thin, low-contrast CS structures. The source code will be made publicly available at: https://github.com/HamadYA/CS_Detection.
- oai:arXiv.org:2512.09461v1
+ MultiHateLoc: Towards Temporal Localisation of Multimodal Hate Content in Online Videos
+ https://arxiv.org/abs/2512.10408
+ arXiv:2512.10408v1 Announce Type: new
+Abstract: The rapid growth of video content on platforms such as TikTok and YouTube has intensified the spread of multimodal hate speech, where harmful cues emerge subtly and asynchronously across visual, acoustic, and textual streams. Existing research primarily focuses on video-level classification, leaving the practically crucial task of temporal localisation, identifying when hateful segments occur, largely unaddressed. This challenge is even more noticeable under weak supervision, where only video-level labels are available, and static fusion or classification-based architectures struggle to capture cross-modal and temporal dynamics. To address these challenges, we propose MultiHateLoc, the first framework designed for weakly-supervised multimodal hate localisation. MultiHateLoc incorporates (1) modality-aware temporal encoders to model heterogeneous sequential patterns, including a tailored text-based preprocessing module for feature enhancement; (2) dynamic cross-modal fusion to adaptively emphasise the most informative modality at each moment and a cross-modal contrastive alignment strategy to enhance multimodal feature consistency; (3) a modality-aware MIL objective to identify discriminative segments under video-level supervision. Despite relying solely on coarse labels, MultiHateLoc produces fine-grained, interpretable frame-level predictions. Experiments on HateMM and MultiHateClip show that our method achieves state-of-the-art performance in the localisation task.
+ oai:arXiv.org:2512.10408v1cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Qiyue Sun, Tailin Chen, Yinghui Zhang, Yuchen Zhang, Jiangbei Yue, Jianbo Jiao, Zeyu Fu
+
+
+ Sliding Window Attention Adaptation
+ https://arxiv.org/abs/2512.10411
+ arXiv:2512.10411v1 Announce Type: new
+Abstract: The self-attention mechanism in Transformer-based Large Language Models (LLMs) scales quadratically with input length, making long-context inference expensive. Sliding window attention (SWA) reduces this cost to linear complexity, but naively enabling complete SWA at inference-time for models pretrained with full attention (FA) causes severe long-context performance degradation due to training-inference mismatch. This makes us wonder: Can FA-pretrained LLMs be well adapted to SWA without pretraining? We investigate this by proposing Sliding Window Attention Adaptation (SWAA), a set of practical recipes that combine five methods for better adaptation: (1) applying SWA only during prefilling; (2) preserving "sink" tokens; (3) interleaving FA/SWA layers; (4) chain-of-thought (CoT); and (5) fine-tuning. Our experiments show that SWA adaptation is feasible while non-trivial: no single method suffices, yet specific synergistic combinations effectively recover the original long-context performance. We further analyze the performance-efficiency trade-offs of different SWAA configurations and provide recommended recipes for diverse scenarios. Our code is available at https://github.com/yuyijiong/sliding-window-attention-adaptation
+ oai:arXiv.org:2512.10411v1
+ cs.CLcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Anabia Sohail, Mohamad Alansari, Ahmed Abughali, Asmaa Chehab, Abdelfatah Ahmed, Divya Velayudhan, Sajid Javed, Hasan Al Marzouqi, Ameena Saad Al-Sumaiti, Junaid Kashir, Naoufel Werghi
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Yijiong Yu, Jiale Liu, Qingyun Wu, Huazheng Wang, Ji Pei
- Development of a Compliant Gripper for Safe Robot-Assisted Trouser Dressing-Undressing
- https://arxiv.org/abs/2512.09462
- arXiv:2512.09462v1 Announce Type: new
-Abstract: In recent years, many countries, including Japan, have rapidly aging populations, making the preservation of seniors' quality of life a significant concern. For elderly people with impaired physical abilities, support for toileting is one of the most important issues. This paper details the design, development, experimental assessment, and potential application of the gripper system, with a focus on the unique requirements and obstacles involved in aiding elderly or hemiplegic individuals in dressing and undressing trousers. The gripper we propose seeks to find the right balance between compliance and grasping forces, ensuring precise manipulation while maintaining a safe and compliant interaction with the users. The gripper's integration into a custom--built robotic manipulator system provides a comprehensive solution for assisting hemiplegic individuals in their dressing and undressing tasks. Experimental evaluations and comparisons with existing studies demonstrate the gripper's ability to successfully assist in both dressing and dressing of trousers in confined spaces with a high success rate. This research contributes to the advancement of assistive robotics, empowering elderly, and physically impaired individuals to maintain their independence and improve their quality of life.
- oai:arXiv.org:2512.09462v1
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Boosting RL-Based Visual Reasoning with Selective Adversarial Entropy Intervention
+ https://arxiv.org/abs/2512.10414
+ arXiv:2512.10414v1 Announce Type: new
+Abstract: Recently, reinforcement learning (RL) has become a common choice in enhancing the reasoning capabilities of vision-language models (VLMs). Considering existing RL- based finetuning methods, entropy intervention turns out to be an effective way to benefit exploratory ability, thereby improving policy performance. Notably, most existing stud- ies intervene in entropy by simply controlling the update of specific tokens during policy optimization of RL. They ig- nore the entropy intervention during the RL sampling that can boost the performance of GRPO by improving the di- versity of responses. In this paper, we propose Selective- adversarial Entropy Intervention, namely SaEI, which en- hances policy entropy by distorting the visual input with the token-selective adversarial objective coming from the en- tropy of sampled responses. Specifically, we first propose entropy-guided adversarial sampling (EgAS) that formu- lates the entropy of sampled responses as an adversarial ob- jective. Then, the corresponding adversarial gradient can be used to attack the visual input for producing adversarial samples, allowing the policy model to explore a larger an- swer space during RL sampling. Then, we propose token- selective entropy computation (TsEC) to maximize the ef- fectiveness of adversarial attack in EgAS without distorting factual knowledge within VLMs. Extensive experiments on both in-domain and out-of-domain datasets show that our proposed method can greatly improve policy exploration via entropy intervention, to boost reasoning capabilities. Code will be released once the paper is accepted.
+ oai:arXiv.org:2512.10414v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- 10.1080/01691864.2024.2376024
- Unde, J., Inden, T., Wakayama, Y., Colan, J., Zhu, Y., Aoyama, T., and Hasegawa, Y. (2024). Development of a compliant gripper for safe robot-assisted trouser dressing--undressing. \textit{Advanced Robotics}, 38(19--20), 1424--1440
- Jayant Unde, Takumi Inden, Yuki Wakayama, Jacinto Colan, Yaonan Zhu, Tadayoshi Aoyama, Yasuhisa Hasegawa
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yang Yu, Zhuangzhuang Chen, Siqi Wang, Lanqing Li, Xiaomeng Li
- Privacy-Preserving Computer Vision for Industry: Three Case Studies in Human-Centric Manufacturing
- https://arxiv.org/abs/2512.09463
- arXiv:2512.09463v1 Announce Type: new
-Abstract: The adoption of AI-powered computer vision in industry is often constrained by the need to balance operational utility with worker privacy. Building on our previously proposed privacy-preserving framework, this paper presents its first comprehensive validation on real-world data collected directly by industrial partners in active production environments. We evaluate the framework across three representative use cases: woodworking production monitoring, human-aware AGV navigation, and multi-camera ergonomic risk assessment. The approach employs learned visual transformations that obscure sensitive or task-irrelevant information while retaining features essential for task performance. Through both quantitative evaluation of the privacy-utility trade-off and qualitative feedback from industrial partners, we assess the framework's effectiveness, deployment feasibility, and trust implications. Results demonstrate that task-specific obfuscation enables effective monitoring with reduced privacy risks, establishing the framework's readiness for real-world adoption and providing cross-domain recommendations for responsible, human-centric AI deployment in industry.
- oai:arXiv.org:2512.09463v1
- cs.CV
+ How to Trick Your AI TA: A Systematic Study of Academic Jailbreaking in LLM Code Evaluation
+ https://arxiv.org/abs/2512.10415
+ arXiv:2512.10415v1 Announce Type: new
+Abstract: The use of Large Language Models (LLMs) as automatic judges for code evaluation is becoming increasingly prevalent in academic environments. But their reliability can be compromised by students who may employ adversarial prompting strategies in order to induce misgrading and secure undeserved academic advantages. In this paper, we present the first large-scale study of jailbreaking LLM-based automated code evaluators in academic context. Our contributions are: (i) We systematically adapt 20+ jailbreaking strategies for jailbreaking AI code evaluators in the academic context, defining a new class of attacks termed academic jailbreaking. (ii) We release a poisoned dataset of 25K adversarial student submissions, specifically designed for the academic code-evaluation setting, sourced from diverse real-world coursework and paired with rubrics and human-graded references, and (iii) In order to capture the multidimensional impact of academic jailbreaking, we systematically adapt and define three jailbreaking metrics (Jailbreak Success Rate, Score Inflation, and Harmfulness). (iv) We comprehensively evalulate the academic jailbreaking attacks using six LLMs. We find that these models exhibit significant vulnerability, particularly to persuasive and role-play-based attacks (up to 97% JSR). Our adversarial dataset and benchmark suite lay the groundwork for next-generation robust LLM-based evaluators in academic code assessment.
+ oai:arXiv.org:2512.10415v1
+ cs.SEcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sander De Coninck, Emilio Gamba, Bart Van Doninck, Abdellatif Bey-Temsamani, Sam Leroux, Pieter Simoens
+ Devanshu Sahoo, Vasudev Majhi, Arjun Neekhra, Yash Sinha, Murari Mandal, Dhruv Kumar
- Nominal Type Theory by Nullary Internal Parametricity
- https://arxiv.org/abs/2512.09464
- arXiv:2512.09464v1 Announce Type: new
-Abstract: There are many ways to represent the syntax of a language with binders. In particular, nominal frameworks are metalanguages that feature (among others) name abstraction types, which can be used to specify the type of binders. The resulting syntax representation (nominal data types) makes alpha-equivalent terms equal, and features a name-invariant induction principle. It is known that name abstraction types can be presented either as existential or universal quantification on names. On the one hand, nominal frameworks use the existential presentation for practical reasoning since the user is allowed to match on a name-term pattern where the name is bound in the term. However inference rules for existential name abstraction are cumbersome to specify/implement because they must keep track of information about free and bound names at the type level. On the other hand, universal name abstractions are easier to specify since they are treated not as pairs, but as functions consuming fresh names. Yet the ability to pattern match on such functions is seemingly lost. In this work we show that this ability and others are recovered in a type theory consisting of (1) nullary ($0$-ary) internally parametric type theory (nullary PTT) (2) a type of names and a novel name induction principle (3) nominal data types. This extension of nullary PTT can act as a legitimate nominal framework. Indeed it has universal name abstractions, nominal pattern matching, a freshness type former, name swapping and local-scope operations and (non primitive) existential name abstractions. We illustrate how term-relevant nullary parametricity is used to recover nominal pattern matching. Our main example involves synthetic Kripke parametricity.
- oai:arXiv.org:2512.09464v1
- cs.LO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Beyond Endpoints: Path-Centric Reasoning for Vectorized Off-Road Network Extraction
+ https://arxiv.org/abs/2512.10416
+ arXiv:2512.10416v1 Announce Type: new
+Abstract: Deep learning has advanced vectorized road extraction in urban settings, yet off-road environments remain underexplored and challenging. A significant domain gap causes advanced models to fail in wild terrains due to two key issues: lack of large-scale vectorized datasets and structural weakness in prevailing methods. Models such as SAM-Road employ a node-centric paradigm that reasons at sparse endpoints, making them fragile to occlusions and ambiguous junctions in off-road scenes, leading to topological errors.This work addresses these limitations in two complementary ways. First, we release WildRoad, a gloabal off-road road network dataset constructed efficiently with a dedicated interactive annotation tool tailored for road-network labeling. Second, we introduce MaGRoad (Mask-aware Geodesic Road network extractor), a path-centric framework that aggregates multi-scale visual evidence along candidate paths to infer connectivity robustly.Extensive experiments show that MaGRoad achieves state-of-the-art performance on our challenging WildRoad benchmark while generalizing well to urban datasets. A streamlined pipeline also yields roughly 2.5x faster inference, improving practical applicability. Together, the dataset and path-centric paradigm provide a stronger foundation for mapping roads in the wild.
+ oai:arXiv.org:2512.10416v1
+ cs.CV
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Antoine Van Muylder, Andreas Nuyts, Dominique Devriese
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wenfei Guan, Jilin Mei, Tong Shen, Xumin Wu, Shuo Wang, Cheng Min, Yu Hu
- Cauchy-Schwarz Fairness Regularizer
- https://arxiv.org/abs/2512.09467
- arXiv:2512.09467v1 Announce Type: new
-Abstract: Group fairness in machine learning is often enforced by adding a regularizer that reduces the dependence between model predictions and sensitive attributes. However, existing regularizers are built on heterogeneous distance measures and design choices, which makes their behavior hard to reason about and their performance inconsistent across tasks. This raises a basic question: what properties make a good fairness regularizer? We address this question by first organizing existing in-process methods into three families: (i) matching prediction statistics across sensitive groups, (ii) aligning latent representations, and (iii) directly minimizing dependence between predictions and sensitive attributes. Through this lens, we identify desirable properties of the underlying distance measure, including tight generalization bounds, robustness to scale differences, and the ability to handle arbitrary prediction distributions. Motivated by these properties, we propose a Cauchy-Schwarz (CS) fairness regularizer that penalizes the empirical CS divergence between prediction distributions conditioned on sensitive groups. Under a Gaussian comparison, we show that CS divergence yields a tighter bound than Kullback-Leibler divergence, Maximum Mean Discrepancy, and the mean disparity used in Demographic Parity, and we discuss how these advantages translate to a distribution-free, kernel-based estimator that naturally extends to multiple sensitive attributes. Extensive experiments on four tabular benchmarks and one image dataset demonstrate that the proposed CS regularizer consistently improves Demographic Parity and Equal Opportunity metrics while maintaining competitive accuracy, and achieves a more stable utility-fairness trade-off across hyperparameter settings compared to prior regularizers.
- oai:arXiv.org:2512.09467v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ TransLocNet: Cross-Modal Attention for Aerial-Ground Vehicle Localization with Contrastive Learning
+ https://arxiv.org/abs/2512.10419
+ arXiv:2512.10419v1 Announce Type: new
+Abstract: Aerial-ground localization is difficult due to large viewpoint and modality gaps between ground-level LiDAR and overhead imagery. We propose TransLocNet, a cross-modal attention framework that fuses LiDAR geometry with aerial semantic context. LiDAR scans are projected into a bird's-eye-view representation and aligned with aerial features through bidirectional attention, followed by a likelihood map decoder that outputs spatial probability distributions over position and orientation. A contrastive learning module enforces a shared embedding space to improve cross-modal alignment. Experiments on CARLA and KITTI show that TransLocNet outperforms state-of-the-art baselines, reducing localization error by up to 63% and achieving sub-meter, sub-degree accuracy. These results demonstrate that TransLocNet provides robust and generalizable aerial-ground localization in both synthetic and real-world settings.
+ oai:arXiv.org:2512.10419v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Yezi Liu, Hanning Chen, Wenjun Huang, Yang Ni, Mohsen Imani
+ Phu Pham, Damon Conover, Aniket Bera
- Temporal-Spatial Tubelet Embedding for Cloud-Robust MSI Reconstruction using MSI-SAR Fusion: A Multi-Head Self-Attention Video Vision Transformer Approach
- https://arxiv.org/abs/2512.09471
- arXiv:2512.09471v1 Announce Type: new
-Abstract: Cloud cover in multispectral imagery (MSI) significantly hinders early-season crop mapping by corrupting spectral information. Existing Vision Transformer(ViT)-based time-series reconstruction methods, like SMTS-ViT, often employ coarse temporal embeddings that aggregate entire sequences, causing substantial information loss and reducing reconstruction accuracy. To address these limitations, a Video Vision Transformer (ViViT)-based framework with temporal-spatial fusion embedding for MSI reconstruction in cloud-covered regions is proposed in this study. Non-overlapping tubelets are extracted via 3D convolution with constrained temporal span $(t=2)$, ensuring local temporal coherence while reducing cross-day information degradation. Both MSI-only and SAR-MSI fusion scenarios are considered during the experiments. Comprehensive experiments on 2020 Traill County data demonstrate notable performance improvements: MTS-ViViT achieves a 2.23\% reduction in MSE compared to the MTS-ViT baseline, while SMTS-ViViT achieves a 10.33\% improvement with SAR integration over the SMTS-ViT baseline. The proposed framework effectively enhances spectral reconstruction quality for robust agricultural monitoring.
- oai:arXiv.org:2512.09471v1
+ Neural Collapse in Test-Time Adaptation
+ https://arxiv.org/abs/2512.10421
+ arXiv:2512.10421v1 Announce Type: new
+Abstract: Test-Time Adaptation (TTA) enhances model robustness to out-of-distribution (OOD) data by updating the model online during inference, yet existing methods lack theoretical insights into the fundamental causes of performance degradation under domain shifts. Recently, Neural Collapse (NC) has been proposed as an emergent geometric property of deep neural networks (DNNs), providing valuable insights for TTA. In this work, we extend NC to the sample-wise level and discover a novel phenomenon termed Sample-wise Alignment Collapse (NC3+), demonstrating that a sample's feature embedding, obtained by a trained model, aligns closely with the corresponding classifier weight. Building on NC3+, we identify that the performance degradation stems from sample-wise misalignment in adaptation which exacerbates under larger distribution shifts. This indicates the necessity of realigning the feature embeddings with their corresponding classifier weights. However, the misalignment makes pseudo-labels unreliable under domain shifts. To address this challenge, we propose NCTTA, a novel feature-classifier alignment method with hybrid targets to mitigate the impact of unreliable pseudo-labels, which blends geometric proximity with predictive confidence. Extensive experiments demonstrate the effectiveness of NCTTA in enhancing robustness to domain shifts. For example, NCTTA outperforms Tent by 14.52% on ImageNet-C.
+ oai:arXiv.org:2512.10421v1cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiao Chen, Zhongjing Du, Jiazhen Huang, Xu Jiang, Li Lu, Jingyan Jiang, Zhi Wang
+
+
+ Cooperative Retrieval-Augmented Generation for Question Answering: Mutual Information Exchange and Ranking by Contrasting Layers
+ https://arxiv.org/abs/2512.10422
+ arXiv:2512.10422v1 Announce Type: new
+Abstract: Since large language models (LLMs) have a tendency to generate factually inaccurate output, retrieval-augmented generation (RAG) has gained significant attention as a key means to mitigate this downside of harnessing only LLMs. However, existing RAG methods for simple and multi-hop question answering (QA) are still prone to incorrect retrievals and hallucinations. To address these limitations, we propose CoopRAG, a novel RAG framework for the question answering task in which a retriever and an LLM work cooperatively with each other by exchanging informative knowledge, and the earlier and later layers of the retriever model work cooperatively with each other to accurately rank the retrieved documents relevant to a given query. In this framework, we (i) unroll a question into sub-questions and a reasoning chain in which uncertain positions are masked, (ii) retrieve the documents relevant to the question augmented with the sub-questions and the reasoning chain, (iii) rerank the documents by contrasting layers of the retriever, and (iv) reconstruct the reasoning chain by filling the masked positions via the LLM. Our experiments demonstrate that CoopRAG consistently outperforms state-of-the-art QA methods on three multi-hop QA datasets as well as a simple QA dataset in terms of both the retrieval and QA performances. Our code is available.\footnote{https://github.com/meaningful96/CoopRAG}
+ oai:arXiv.org:2512.10422v1
+ cs.CLcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Youmin Ko, Sungjong Seo, Hyunjoon Kim
+
+
+ Neural Hamiltonian Deformation Fields for Dynamic Scene Rendering
+ https://arxiv.org/abs/2512.10424
+ arXiv:2512.10424v1 Announce Type: new
+Abstract: Representing and rendering dynamic scenes with complex motions remains challenging in computer vision and graphics. Recent dynamic view synthesis methods achieve high-quality rendering but often produce physically implausible motions. We introduce NeHaD, a neural deformation field for dynamic Gaussian Splatting governed by Hamiltonian mechanics. Our key observation is that existing methods using MLPs to predict deformation fields introduce inevitable biases, resulting in unnatural dynamics. By incorporating physics priors, we achieve robust and realistic dynamic scene rendering. Hamiltonian mechanics provides an ideal framework for modeling Gaussian deformation fields due to their shared phase-space structure, where primitives evolve along energy-conserving trajectories. We employ Hamiltonian neural networks to implicitly learn underlying physical laws governing deformation. Meanwhile, we introduce Boltzmann equilibrium decomposition, an energy-aware mechanism that adaptively separates static and dynamic Gaussians based on their spatial-temporal energy states for flexible rendering. To handle real-world dissipation, we employ second-order symplectic integration and local rigidity regularization as physics-informed constraints for robust dynamics modeling. Additionally, we extend NeHaD to adaptive streaming through scale-aware mipmapping and progressive optimization. Extensive experiments demonstrate that NeHaD achieves physically plausible results with a rendering quality-efficiency trade-off. To our knowledge, this is the first exploration leveraging Hamiltonian mechanics for neural Gaussian deformation, enabling physically realistic dynamic scene rendering with streaming capabilities.
+ oai:arXiv.org:2512.10424v1
+ cs.GR
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yiqun Wang, Lujun Li, Meiru Yue, Radu State
+ Hai-Long Qin, Sixian Wang, Guo Lu, Jincheng Dai
- WarmServe: Enabling One-for-Many GPU Prewarming for Multi-LLM Serving
- https://arxiv.org/abs/2512.09472
- arXiv:2512.09472v1 Announce Type: new
-Abstract: Deploying multiple models within shared GPU clusters is promising for improving resource efficiency in large language model (LLM) serving. Existing multi-LLM serving systems optimize GPU utilization at the cost of worse inference performance, especially time-to-first-token (TTFT). We identify the root cause of such compromise as their unawareness of future workload characteristics. In contrast, recent analysis on real-world traces has shown the high periodicity and long-term predictability of LLM serving workloads.
- We propose universal GPU workers to enable one-for-many GPU prewarming that loads models with knowledge of future workloads. Based on universal GPU workers, we design and build WarmServe, a multi-LLM serving system that (1) mitigates cluster-wide prewarming interference by adopting an evict-aware model placement strategy, (2) prepares universal GPU workers in advance by proactive prewarming, and (3) manages GPU memory with a zero-overhead memory switching mechanism. Evaluation under real-world datasets shows that WarmServe improves TTFT by up to 50.8$\times$ compared to the state-of-the-art autoscaling-based system, while being capable of serving up to 2.5$\times$ more requests compared to the GPU-sharing system.
- oai:arXiv.org:2512.09472v1
+ Making Wide Stripes Practical: Cascaded Parity LRCs for Efficient Repair and High Reliability
+ https://arxiv.org/abs/2512.10425
+ arXiv:2512.10425v1 Announce Type: new
+Abstract: Erasure coding with wide stripes is increasingly adopted to reduce storage overhead in large-scale storage systems. However, existing Locally Repairable Codes (LRCs) exhibit structural limitations in this setting: inflated local groups increase single-node repair cost, multi-node failures frequently trigger expensive global repair, and reliability degrades sharply. We identify a key root cause: local and global parity blocks are designed independently, preventing them from cooperating during repair. We present Cascaded Parity LRCs (CP-LRCs), a new family of wide stripe LRCs that embed structured dependency between parity blocks by decomposing a global parity block across all local parity blocks. This creates a cascaded parity group that preserves MDS-level fault tolerance while enabling low-bandwidth single-node and multi-node repairs. We provide a general coefficient-generation framework, develop repair algorithms exploiting cascading, and instantiate the design with CP-Azure and CP-Uniform. Evaluations on Alibaba Cloud show reductions in repair time of up to 41% for single-node failures and 26% for two-node failures.
+ oai:arXiv.org:2512.10425v1cs.DC
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chiheng Lou, Sheng Qi, Rui Kang, Yong Zhang, Chen Sun, Pengcheng Wang, Bingyang Liu, Xuanzhe Liu, Xin Jin
+ Fan Yu, Guodong Li, Si Wu, Weijun Fang, Sihuang Hu
- An Efficient Interaction Human-AI Synergy System Bridging Visual Awareness and Large Language Model for Intensive Care Units
- https://arxiv.org/abs/2512.09473
- arXiv:2512.09473v1 Announce Type: new
-Abstract: Intensive Care Units (ICUs) are critical environments characterized by high-stakes monitoring and complex data management. However, current practices often rely on manual data transcription and fragmented information systems, introducing potential risks to patient safety and operational efficiency. To address these issues, we propose a human-AI synergy system based on a cloud-edge-end architecture, which integrates visual-aware data extraction and semantic interaction mechanisms. Specifically, a visual-aware edge module non-invasively captures real-time physiological data from bedside monitors, reducing manual entry errors. To improve accessibility to fragmented data sources, a semantic interaction module, powered by a Large Language Model (LLM), enables physicians to perform efficient and intuitive voice-based queries over structured patient data. The hierarchical cloud-edge-end deployment ensures low-latency communication and scalable system performance. Our system reduces the cognitive burden on ICU nurses and physicians and demonstrates promising potential for broader applications in intelligent healthcare systems.
- oai:arXiv.org:2512.09473v1
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Differential Privacy for Secure Machine Learning in Healthcare IoT-Cloud Systems
+ https://arxiv.org/abs/2512.10426
+ arXiv:2512.10426v1 Announce Type: new
+Abstract: Healthcare has become exceptionally sophisticated, as wearables and connected medical devices are revolutionising remote patient monitoring, emergency response, medication management, diagnosis, and predictive and prescriptive analytics. Internet of Things and Cloud computing integrated systems (IoT-Cloud) facilitate sensing, automation, and processing for these healthcare applications. While real-time response is crucial for alleviating patient emergencies, protecting patient privacy is extremely important in data-driven healthcare. In this paper, we propose a multi-layer IoT, Edge and Cloud architecture to enhance the speed of response for emergency healthcare by distributing tasks based on response criticality and permanence of storage. Privacy of patient data is assured by proposing a Differential Privacy framework across several machine learning models such as K-means, Logistic Regression, Random Forest and Naive Bayes. We establish a comprehensive threat model identifying three adversary classes and evaluate Laplace, Gaussian, and hybrid noise mechanisms across varying privacy budgets, with supervised algorithms achieving up to 86% accuracy. The proposed hybrid Laplace-Gaussian noise mechanism with adaptive budget allocation provides a balanced approach, offering moderate tails and better privacy-utility trade-offs for both low and high dimension datasets. At the practical threshold of $\varepsilon = 5.0$, supervised algorithms achieve 82-84% accuracy while reducing attribute inference attacks by up to 18% and data reconstruction correlation by 70%. Blockchain security further ensures trusted communication through time-stamping, traceability, and immutability for analytics applications. Edge computing demonstrates 8$\times$ latency reduction for emergency scenarios, validating the hierarchical architecture for time-critical operations.
+ oai:arXiv.org:2512.10426v1
+ cs.CR
+ cs.DC
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Yibowen Zhao, Yiming Cao, Zhiqi Shen, Juan Du, Yonghui Xu, Lizhen Cui, Cyril Leung
+ N Mangala, Murtaza Rangwala, S Aishwarya, B Eswara Reddy, Rajkumar Buyya, KR Venugopal, SS Iyengar, LM Patnaik
- Color encoding in Latent Space of Stable Diffusion Models
- https://arxiv.org/abs/2512.09477
- arXiv:2512.09477v1 Announce Type: new
-Abstract: Recent advances in diffusion-based generative models have achieved remarkable visual fidelity, yet a detailed understanding of how specific perceptual attributes - such as color and shape - are internally represented remains limited. This work explores how color is encoded in a generative model through a systematic analysis of the latent representations in Stable Diffusion. Through controlled synthetic datasets, principal component analysis (PCA) and similarity metrics, we reveal that color information is encoded along circular, opponent axes predominantly captured in latent channels c_3 and c_4, whereas intensity and shape are primarily represented in channels c_1 and c_2. Our findings indicate that the latent space of Stable Diffusion exhibits an interpretable structure aligned with a efficient coding representation. These insights provide a foundation for future work in model understanding, editing applications, and the design of more disentangled generative frameworks.
- oai:arXiv.org:2512.09477v1
- cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ The Operator Origins of Neural Scaling Laws: A Generalized Spectral Transport Dynamics of Deep Learning
+ https://arxiv.org/abs/2512.10427
+ arXiv:2512.10427v1 Announce Type: new
+Abstract: Modern deep networks operate in a rough, finite-regularity regime where Jacobian-induced operators exhibit heavy-tailed spectra and strong basis drift. In this work, we derive a unified operator-theoretoretic description of neural training dynamics directly from gradient descent. Starting from the exact evolution $\dot e_t = -M(t)e_t$ in function space, we apply Kato perturbation theory to obtain a rigorous system of coupled mode ODEs and show that, after coarse-graining, these dynamics converge to a spectral transport--dissipation PDE \[ \partial_t g + \partial_\lambda (v g) = -\lambda g + S, \] where $v$ captures eigenbasis drift and $S$ encodes nonlocal spectral coupling.
+ We prove that neural training preserves functional regularity, forcing the drift to take an asymptotic power-law form $v(\lambda,t)\sim -c(t)\lambda^b$. In the weak-coupling regime -- naturally induced by spectral locality and SGD noise -- the PDE admits self-similar solutions with a resolution frontier, polynomial amplitude growth, and power-law dissipation. This structure yields explicit scaling-law exponents, explains the geometry of double descent, and shows that the effective training time satisfies $\tau(t)=t^\alpha L(t)$ for slowly varying $L$.
+ Finally, we show that NTK training and feature learning arise as two limits of the same PDE: $v\equiv 0$ recovers lazy dynamics, while $v\neq 0$ produces representation drift. Our results provide a unified spectral framework connecting operator geometry, optimization dynamics, and the universal scaling behavior of modern deep networks.
+ oai:arXiv.org:2512.10427v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/publicdomain/zero/1.0/
- Guillem Arias, Ariadna Sol\`a, Mart\'i Armengod, Maria Vanrell
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yizhou Zhang
- Personalized Building Climate Control with Contextual Preferential Bayesian Optimization
- https://arxiv.org/abs/2512.09481
- arXiv:2512.09481v1 Announce Type: new
-Abstract: Efficient tuning of building climate controllers to optimize occupant utility is essential for ensuring overall comfort and satisfaction. However, this is a challenging task since the latent utility are difficult to measure directly. Time-varying contextual factors, such as outdoor temperature, further complicate the problem. To address these challenges, we propose a contextual preferential Bayesian optimization algorithm that leverages binary preference feedback together with contextual information to enable efficient real-time controller tuning. We validate the approach by tuning an economic MPC controller on BOPTEST, a high-fidelity building simulation platform. Over a two-month simulation period, our method outperforms the baseline controller and achieves an improvement of up to 23% in utility. Moreover, for different occupant types, we demonstrate that the algorithm automatically adapts to individual preferences, enabling personalized controller tuning.
- oai:arXiv.org:2512.09481v1
- eess.SY
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Design and Implementation of a High-Precision Wind-Estimation UAV with Onboard Sensors
+ https://arxiv.org/abs/2512.10428
+ arXiv:2512.10428v1 Announce Type: new
+Abstract: Accurate real-time wind vector estimation is essential for enhancing the safety, navigation accuracy, and energy efficiency of unmanned aerial vehicles (UAVs). Traditional approaches rely on external sensors or simplify vehicle dynamics, which limits their applicability during agile flight or in resource-constrained platforms. This paper proposes a real-time wind estimation method based solely on onboard sensors. The approach first estimates external aerodynamic forces using a disturbance observer (DOB), and then maps these forces to wind vectors using a thin-plate spline (TPS) model. A custom-designed wind barrel mounted on the UAV enhances aerodynamic sensitivity, further improving estimation accuracy. The system is validated through comprehensive experiments in wind tunnels, indoor and outdoor flights. Experimental results demonstrate that the proposed method achieves consistently high-accuracy wind estimation across controlled and real-world conditions, with speed RMSEs as low as \SI{0.06}{m/s} in wind tunnel tests, \SI{0.22}{m/s} during outdoor hover, and below \SI{0.38}{m/s} in indoor and outdoor dynamic flights, and direction RMSEs under \ang{7.3} across all scenarios, outperforming existing baselines. Moreover, the method provides vertical wind estimates -- unavailable in baselines -- with RMSEs below \SI{0.17}{m/s} even during fast indoor translations.
+ oai:arXiv.org:2512.10428v1
+ cs.ET
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Wenbin Wang, Jicheng Shi, Colin N. Jones
+ 10.1016/j.measurement.2025.119882
+ Haowen Yu, Na Fan, Xing Liu, Ximin Lyu
- Source Coverage and Citation Bias in LLM-based vs. Traditional Search Engines
- https://arxiv.org/abs/2512.09483
- arXiv:2512.09483v1 Announce Type: new
-Abstract: LLM-based Search Engines (LLM-SEs) introduces a new paradigm for information seeking. Unlike Traditional Search Engines (TSEs) (e.g., Google), these systems summarize results, often providing limited citation transparency. The implications of this shift remain largely unexplored, yet raises key questions regarding trust and transparency. In this paper, we present a large-scale empirical study of LLM-SEs, analyzing 55,936 queries and the corresponding search results across six LLM-SEs and two TSEs. We confirm that LLM-SEs cites domain resources with greater diversity than TSEs. Indeed, 37% of domains are unique to LLM-SEs. However, certain risks still persist: LLM-SEs do not outperform TSEs in credibility, political neutrality and safety metrics. Finally, to understand the selection criteria of LLM-SEs, we perform a feature-based analysis to identify key factors influencing source choice. Our findings provide actionable insights for end users, website owners, and developers.
- oai:arXiv.org:2512.09483v1
- cs.CL
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Representation of the structure of graphs by sequences of instructions
+ https://arxiv.org/abs/2512.10429
+ arXiv:2512.10429v1 Announce Type: new
+Abstract: The representation of graphs is commonly based on the adjacency matrix concept. This formulation is the foundation of most algebraic and computational approaches to graph processing. The advent of deep learning language models offers a wide range of powerful computational models that are specialized in the processing of text. However, current procedures to represent graphs are not amenable to processing by these models. In this work, a new method to represent graphs is proposed. It represents the adjacency matrix of a graph by a string of simple instructions. The instructions build the adjacency matrix step by step. The transformation is reversible, i.e. given a graph the string can be produced and vice versa. The proposed representation is compact and it maintains the local structural patterns of the graph. Therefore, it is envisaged that it could be useful to boost the processing of graphs by deep learning models. A tentative computational experiment is reported, with favorable results.
+ oai:arXiv.org:2512.10429v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Peixian Zhang, Qiming Ye, Zifan Peng, Kiran Garimella, Gareth Tyson
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Ezequiel Lopez-Rubio
- Unambiguisability and Register Minimisation of Min-Plus Models
- https://arxiv.org/abs/2512.09484
- arXiv:2512.09484v1 Announce Type: new
-Abstract: We study the unambiguisability problem for min-plus (tropical) weighted automata (WFAs), and the counter-minimisation problem for tropical Cost Register Automata (CRAs), which are expressively-equivalent to WFAs. Both problems ask whether the "amount of nondeterminism" in the model can be reduced. We show that WFA unambiguisability is decidable, thus resolving this long-standing open problem. Our proof is via reduction to WFA determinisability, which was recently shown to be decidable. On the negative side, we show that CRA counter minimisation is undecidable, even for a fixed number of registers (specifically, already for 7 registers).
- oai:arXiv.org:2512.09484v1
- cs.FL
- Thu, 11 Dec 2025 00:00:00 -0500
+ T-pro 2.0: An Efficient Russian Hybrid-Reasoning Model and Playground
+ https://arxiv.org/abs/2512.10430
+ arXiv:2512.10430v1 Announce Type: new
+Abstract: We introduce T-pro 2.0, an open-weight Russian LLM for hybrid reasoning and efficient inference. The model supports direct answering and reasoning-trace generation, using a Cyrillic-dense tokenizer and an adapted EAGLE speculative-decoding pipeline to reduce latency. To enable reproducible and extensible research, we release the model weights, the T-Wix 500k instruction corpus, the T-Math reasoning benchmark, and the EAGLE weights on Hugging Face. These resources allow users to study Russian-language reasoning and to extend or adapt both the model and the inference pipeline. A public web demo exposes reasoning and non-reasoning modes and illustrates the speedups achieved by our inference stack across domains. T-pro 2.0 thus serves as an accessible open system for building and evaluating efficient, practical Russian LLM applications.
+ oai:arXiv.org:2512.10430v1
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Shaull Almagor, Guy Arbel, Sarai Sheinvald
+ Dmitrii Stoianov, Danil Taranets, Olga Tsymboi, Ramil Latypov, Almaz Dautov, Vladislav Kruglikov, Nikita Surkov, German Abramov, Pavel Gein, Dmitry Abulkhanov, Mikhail Gashkov, Viktor Zelenkovskiy, Artem Batalov, Aleksandr Medvedev, Anatolii Potapov
- Advancing LLM-Based Security Automation with Customized Group Relative Policy Optimization for Zero-Touch Networks
- https://arxiv.org/abs/2512.09485
- arXiv:2512.09485v1 Announce Type: new
-Abstract: Zero-Touch Networks (ZTNs) represent a transformative paradigm toward fully automated and intelligent network management, providing the scalability and adaptability required for the complexity of sixth-generation (6G) networks. However, the distributed architecture, high openness, and deep heterogeneity of 6G networks expand the attack surface and pose unprecedented security challenges. To address this, security automation aims to enable intelligent security management across dynamic and complex environments, serving as a key capability for securing 6G ZTNs. Despite its promise, implementing security automation in 6G ZTNs presents two primary challenges: 1) automating the lifecycle from security strategy generation to validation and update under real-world, parallel, and adversarial conditions, and 2) adapting security strategies to evolving threats and dynamic environments. This motivates us to propose SecLoop and SA-GRPO. SecLoop constitutes the first fully automated framework that integrates large language models (LLMs) across the entire lifecycle of security strategy generation, orchestration, response, and feedback, enabling intelligent and adaptive defenses in dynamic network environments, thus tackling the first challenge. Furthermore, we propose SA-GRPO, a novel security-aware group relative policy optimization algorithm that iteratively refines security strategies by contrasting group feedback collected from parallel SecLoop executions, thereby addressing the second challenge. Extensive real-world experiments on five benchmarks, including 11 MITRE ATT&CK processes and over 20 types of attacks, demonstrate the superiority of the proposed SecLoop and SA-GRPO. We will release our platform to the community, facilitating the advancement of security automation towards next generation communications.
- oai:arXiv.org:2512.09485v1
- cs.CR
+ Targeted Data Protection for Diffusion Model by Matching Training Trajectory
+ https://arxiv.org/abs/2512.10433
+ arXiv:2512.10433v1 Announce Type: new
+Abstract: Recent advancements in diffusion models have made fine-tuning text-to-image models for personalization increasingly accessible, but have also raised significant concerns regarding unauthorized data usage and privacy infringement. Current protection methods are limited to passively degrading image quality, failing to achieve stable control. While Targeted Data Protection (TDP) offers a promising paradigm for active redirection toward user-specified target concepts, existing TDP attempts suffer from poor controllability due to snapshot-matching approaches that fail to account for complete learning dynamics. We introduce TAFAP (Trajectory Alignment via Fine-tuning with Adversarial Perturbations), the first method to successfully achieve effective TDP by controlling the entire training trajectory. Unlike snapshot-based methods whose protective influence is easily diluted as training progresses, TAFAP employs trajectory-matching inspired by dataset distillation to enforce persistent, verifiable transformations throughout fine-tuning. We validate our method through extensive experiments, demonstrating the first successful targeted transformation in diffusion models with simultaneous control over both identity and visual patterns. TAFAP significantly outperforms existing TDP attempts, achieving robust redirection toward target concepts while maintaining high image quality. This work enables verifiable safeguards and provides a new framework for controlling and tracing alterations in diffusion model outputs.
+ oai:arXiv.org:2512.10433v1cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Xinye Cao, Yihan Lin, Guoshun Nan, Qinchuan Zhou, Yuhang Luo, Yurui Gao, Zeliang Zhang, Haolang Lu, Qimei Cui, Yanzhao Hou, Xiaofeng Tao, Tony Q. S. Quek
+ Hojun Lee, Mijin Koo, Yeji Song, Nojun Kwak
- RouteRAG: Efficient Retrieval-Augmented Generation from Text and Graph via Reinforcement Learning
- https://arxiv.org/abs/2512.09487
- arXiv:2512.09487v1 Announce Type: new
-Abstract: Retrieval-Augmented Generation (RAG) integrates non-parametric knowledge into Large Language Models (LLMs), typically from unstructured texts and structured graphs. While recent progress has advanced text-based RAG to multi-turn reasoning through Reinforcement Learning (RL), extending these advances to hybrid retrieval introduces additional challenges. Existing graph-based or hybrid systems typically depend on fixed or handcrafted retrieval pipelines, lacking the ability to integrate supplementary evidence as reasoning unfolds. Besides, while graph evidence provides relational structures crucial for multi-hop reasoning, it is substantially more expensive to retrieve. To address these limitations, we introduce \model{}, an RL-based framework that enables LLMs to perform multi-turn and adaptive graph-text hybrid RAG. \model{} jointly optimizes the entire generation process via RL, allowing the model to learn when to reason, what to retrieve from either texts or graphs, and when to produce final answers, all within a unified generation policy. To guide this learning process, we design a two-stage training framework that accounts for both task outcome and retrieval efficiency, enabling the model to exploit hybrid evidence while avoiding unnecessary retrieval overhead. Experimental results across five question answering benchmarks demonstrate that \model{} significantly outperforms existing RAG baselines, highlighting the benefits of end-to-end RL in supporting adaptive and efficient retrieval for complex reasoning.
- oai:arXiv.org:2512.09487v1
+ Semantic Reconstruction of Adversarial Plagiarism: A Context-Aware Framework for Detecting and Restoring "Tortured Phrases" in Scientific Literature
+ https://arxiv.org/abs/2512.10435
+ arXiv:2512.10435v1 Announce Type: new
+Abstract: The integrity and reliability of scientific literature is facing a serious threat by adversarial text generation techniques, specifically from the use of automated paraphrasing tools to mask plagiarism. These tools generate "tortured phrases", statistically improbable synonyms (e.g. "counterfeit consciousness" for "artificial intelligence"), that preserve the local grammar while obscuring the original source. Most existing detection methods depend heavily on static blocklists or general-domain language models, which suffer from high false-negative rates for novel obfuscations and cannot determine the source of the plagiarized content. In this paper, we propose Semantic Reconstruction of Adversarial Plagiarism (SRAP), a framework designed not only to detect these anomalies but to mathematically recover the original terminology. We use a two-stage architecture: (1) statistical anomaly detection with a domain-specific masked language model (SciBERT) using token-level pseudo-perplexity, and (2) source-based semantic reconstruction using dense vector retrieval (FAISS) and sentence-level alignment (SBERT). Experiments on a parallel corpus of adversarial scientific text show that while zero-shot baselines fail completely (0.00 percent restoration accuracy), our retrieval-augmented approach achieves 23.67 percent restoration accuracy, significantly outperforming baseline methods. We also show that static decision boundaries are necessary for robust detection in jargon-heavy scientific text, since dynamic thresholding fails under high variance. SRAP enables forensic analysis by linking obfuscated expressions back to their most probable source documents.
+ oai:arXiv.org:2512.10435v1cs.CL
- cs.AI
- cs.IR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yucan Guo, Miao Su, Saiping Guan, Zihao Sun, Xiaolong Jin, Jiafeng Guo, Xueqi Cheng
+ Agniva Maiti, Prajwal Panth, Suresh Chandra Satapathy
- MODA: The First Challenging Benchmark for Multispectral Object Detection in Aerial Images
- https://arxiv.org/abs/2512.09489
- arXiv:2512.09489v1 Announce Type: new
-Abstract: Aerial object detection faces significant challenges in real-world scenarios, such as small objects and extensive background interference, which limit the performance of RGB-based detectors with insufficient discriminative information. Multispectral images (MSIs) capture additional spectral cues across multiple bands, offering a promising alternative. However, the lack of training data has been the primary bottleneck to exploiting the potential of MSIs. To address this gap, we introduce the first large-scale dataset for Multispectral Object Detection in Aerial images (MODA), which comprises 14,041 MSIs and 330,191 annotations across diverse, challenging scenarios, providing a comprehensive data foundation for this field. Furthermore, to overcome challenges inherent to aerial object detection using MSIs, we propose OSSDet, a framework that integrates spectral and spatial information with object-aware cues. OSSDet employs a cascaded spectral-spatial modulation structure to optimize target perception, aggregates spectrally related features by exploiting spectral similarities to reinforce intra-object correlations, and suppresses irrelevant background via object-aware masking. Moreover, cross-spectral attention further refines object-related representations under explicit object-aware guidance. Extensive experiments demonstrate that OSSDet outperforms existing methods with comparable parameters and efficiency.
- oai:arXiv.org:2512.09489v1
+ An M-Health Algorithmic Approach to Identify and Assess Physiotherapy Exercises in Real Time
+ https://arxiv.org/abs/2512.10437
+ arXiv:2512.10437v1 Announce Type: new
+Abstract: This work presents an efficient algorithmic framework for real-time identification, classification, and evaluation of human physiotherapy exercises using mobile devices. The proposed method interprets a kinetic movement as a sequence of static poses, which are estimated from camera input using a pose-estimation neural network. Extracted body keypoints are transformed into trigonometric angle-based features and classified with lightweight supervised models to generate frame-level pose predictions and accuracy scores. To recognize full exercise movements and detect deviations from prescribed patterns, we employ a dynamic-programming scheme based on a modified Levenshtein distance algorithm, enabling robust sequence matching and localization of inaccuracies. The system operates entirely on the client side, ensuring scalability and real-time performance. Experimental evaluation demonstrates the effectiveness of the methodology and highlights its applicability to remote physiotherapy supervision and m-health applications.
+ oai:arXiv.org:2512.10437v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Stylianos Kandylakis, Christos Orfanopoulos, Georgios Siolas, Panayiotis Tsanakas
+
+
+ HypeR Adaptivity: Joint $hr$-Adaptive Meshing via Hypergraph Multi-Agent Deep Reinforcement Learning
+ https://arxiv.org/abs/2512.10439
+ arXiv:2512.10439v1 Announce Type: new
+Abstract: Adaptive mesh refinement is central to the efficient solution of partial differential equations (PDEs) via the finite element method (FEM). Classical $r$-adaptivity optimizes vertex positions but requires solving expensive auxiliary PDEs such as the Monge-Amp\`ere equation, while classical $h$-adaptivity modifies topology through element subdivision but suffers from expensive error indicator computation and is constrained by isotropic refinement patterns that impose accuracy ceilings. Combined $hr$-adaptive techniques naturally outperform single-modality approaches, yet inherit both computational bottlenecks and the restricted cost-accuracy trade-off. Emerging machine learning methods for adaptive mesh refinement seek to overcome these limitations, but existing approaches address $h$-adaptivity or $r$-adaptivity in isolation. We present HypeR, a deep reinforcement learning framework that jointly optimizes mesh relocation and refinement. HypeR casts the joint adaptation problem using tools from hypergraph neural networks and multi-agent reinforcement learning. Refinement is formulated as a heterogeneous multi-agent Markov decision process (MDP) where element agents decide discrete refinement actions, while relocation follows an anisotropic diffusion-based policy on vertex agents with provable prevention of mesh tangling. The reward function combines local and global error reduction to promote general accuracy. Across benchmark PDEs, HypeR reduces approximation error by up to 6--10$\times$ versus state-of-art $h$-adaptive baselines at comparable element counts, breaking through the uniform refinement accuracy ceiling that constrains subdivision-only methods. The framework produces meshes with improved shape metrics and alignment to solution anisotropy, demonstrating that jointly learned $hr$-adaptivity strategies can substantially enhance the capabilities of automated mesh generation.
+ oai:arXiv.org:2512.10439v1
+ cs.CE
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shuaihao Han, Tingfa Xu, Peifu Liu, Jianan Li
+ Niccol\`o Grillo, James Rowbottom, Pietro Li\`o, Carola Bibiane Sch\"onlieb, Stefania Fresca
- StateSpace-SSL: Linear-Time Self-supervised Learning for Plant Disease Detectio
- https://arxiv.org/abs/2512.09492
- arXiv:2512.09492v1 Announce Type: new
-Abstract: Self-supervised learning (SSL) is attractive for plant disease detection as it can exploit large collections of unlabeled leaf images, yet most existing SSL methods are built on CNNs or vision transformers that are poorly matched to agricultural imagery. CNN-based SSL struggles to capture disease patterns that evolve continuously along leaf structures, while transformer-based SSL introduces quadratic attention cost from high-resolution patches. To address these limitations, we propose StateSpace-SSL, a linear-time SSL framework that employs a Vision Mamba state-space encoder to model long-range lesion continuity through directional scanning across the leaf surface. A prototype-driven teacher-student objective aligns representations across multiple views, encouraging stable and lesion-aware features from labelled data. Experiments on three publicly available plant disease datasets show that StateSpace-SSL consistently outperforms the CNN- and transformer-based SSL baselines in various evaluation metrics. Qualitative analyses further confirm that it learns compact, lesion-focused feature maps, highlighting the advantage of linear state-space modelling for self-supervised plant disease representation learning.
- oai:arXiv.org:2512.09492v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Enhancing Next-Generation Language Models with Knowledge Graphs: Extending Claude, Mistral IA, and GPT-4 via KG-BERT
+ https://arxiv.org/abs/2512.10440
+ arXiv:2512.10440v1 Announce Type: new
+Abstract: Large language models (LLMs) like Claude, Mistral IA, and GPT-4 excel in NLP but lack structured knowledge, leading to factual inconsistencies. We address this by integrating Knowledge Graphs (KGs) via KG-BERT to enhance grounding and reasoning. Experiments show significant gains in knowledge-intensive tasks such as question answering and entity linking. This approach improves factual reliability and enables more context-aware next-generation LLMs.
+ oai:arXiv.org:2512.10440v1
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abdullah Al Mamun, Miaohua Zhang, David Ahmedt-Aristizabal, Zeeshan Hayder, Mohammad Awrangjeb
+ Nour El Houda Ben Chaabene, Hamza Hammami
- On Mobile Ad Hoc Networks for Coverage of Partially Observable Worlds
- https://arxiv.org/abs/2512.09495
- arXiv:2512.09495v1 Announce Type: new
-Abstract: This paper addresses the movement and placement of mobile agents to establish a communication network in initially unknown environments. We cast the problem in a computational-geometric framework by relating the coverage problem and line-of-sight constraints to the Cooperative Guard Art Gallery Problem, and introduce its partially observable variant, the Partially Observable Cooperative Guard Art Gallery Problem (POCGAGP). We then present two algorithms that solve POCGAGP: CADENCE, a centralized planner that incrementally selects 270 degree corners at which to deploy agents, and DADENCE, a decentralized scheme that coordinates agents using local information and lightweight messaging. Both approaches operate under partial observability and target simultaneous coverage and connectivity. We evaluate the methods in simulation across 1,500 test cases of varied size and structure, demonstrating consistent success in forming connected networks while covering and exploring unknown space. These results highlight the value of geometric abstractions for communication-driven exploration and show that decentralized policies are competitive with centralized performance while retaining scalability.
- oai:arXiv.org:2512.09495v1
- cs.RO
- cs.CG
- cs.MA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Decoding Student Minds: Leveraging Conversational Agents for Psychological and Learning Analysis
+ https://arxiv.org/abs/2512.10441
+ arXiv:2512.10441v1 Announce Type: new
+Abstract: This paper presents a psychologically-aware conversational agent designed to enhance both learning performance and emotional well-being in educational settings. The system combines Large Language Models (LLMs), a knowledge graph-enhanced BERT (KG-BERT), and a bidirectional Long Short-Term Memory (LSTM) with attention to classify students' cognitive and affective states in real time. Unlike prior chatbots limited to either tutoring or affective support, our approach leverages multimodal data-including textual semantics, prosodic speech features, and temporal behavioral trends-to infer engagement, stress, and conceptual understanding. A pilot study with university students demonstrated improved motivation, reduced stress, and moderate academic gains compared to baseline methods. These results underline the promise of integrating semantic reasoning, multimodal fusion, and temporal modeling to support adaptive, student-centered educational interventions.
+ oai:arXiv.org:2512.10441v1
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Edwin Meriaux, Shuo Wen, Louis-Roy Langevin, Doina Precup, Antonio Lor\'ia, Gregory Dudek
+ Nour El Houda Ben Chaabene, Hamza Hammami, Laid Kahloul
- Representation Invariance and Allocation: When Subgroup Balance Matters
- https://arxiv.org/abs/2512.09496
- arXiv:2512.09496v1 Announce Type: new
-Abstract: Unequal representation of demographic groups in training data poses challenges to model generalisation across populations. Standard practice assumes that balancing subgroup representation optimises performance. However, recent empirical results contradict this assumption: in some cases, imbalanced data distributions actually improve subgroup performance, while in others, subgroup performance remains unaffected by the absence of an entire subgroup during training. We conduct a systematic study of subgroup allocation across four vision and language models, varying training data composition to characterise the sensitivity of subgroup performance to data balance. We propose the latent separation hypothesis, which states that a partially fine-tuned model's dependence on subgroup representation is determined by the degree of separation between subgroups in the latent space of the pre-trained model. We formalise this hypothesis, provide theoretical analysis, and validate it empirically. Finally, we present a practical application to foundation model fine-tuning, demonstrating that quantitative analysis of latent subgroup separation can inform data collection and balancing decisions.
- oai:arXiv.org:2512.09496v1
+ Clustered Federated Learning with Hierarchical Knowledge Distillation
+ https://arxiv.org/abs/2512.10443
+ arXiv:2512.10443v1 Announce Type: new
+Abstract: Clustered Federated Learning (CFL) has emerged as a powerful approach for addressing data heterogeneity and ensuring privacy in large distributed IoT environments. By clustering clients and training cluster-specific models, CFL enables personalized models tailored to groups of heterogeneous clients. However, conventional CFL approaches suffer from fragmented learning for training independent global models for each cluster and fail to take advantage of collective cluster insights. This paper advocates a shift to hierarchical CFL, allowing bi-level aggregation to train cluster-specific models at the edge and a unified global model at the cloud. This shift improves training efficiency yet might introduce communication challenges. To this end, we propose CFLHKD, a novel personalization scheme for integrating hierarchical cluster knowledge into CFL. Built upon multi-teacher knowledge distillation, CFLHKD enables inter-cluster knowledge sharing while preserving cluster-specific personalization. CFLHKD adopts a bi-level aggregation to bridge the gap between local and global learning. Extensive evaluations of standard benchmark datasets demonstrate that CFLHKD outperforms representative baselines in cluster-specific and global model accuracy and achieves a performance improvement of 3.32-7.57\%.
+ oai:arXiv.org:2512.10443v1
+ cs.DC
+ cs.AIcs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sabtain Ahmad, Meerzhan Kanatbekova, Ivona Brandic, Atakan Aral
+
+
+ When Reject Turns into Accept: Quantifying the Vulnerability of LLM-Based Scientific Reviewers to Indirect Prompt Injection
+ https://arxiv.org/abs/2512.10449
+ arXiv:2512.10449v1 Announce Type: new
+Abstract: The landscape of scientific peer review is rapidly evolving with the integration of Large Language Models (LLMs). This shift is driven by two parallel trends: the widespread individual adoption of LLMs by reviewers to manage workload (the "Lazy Reviewer" hypothesis) and the formal institutional deployment of AI-powered assessment systems by conferences like AAAI and Stanford's Agents4Science. This study investigates the robustness of these "LLM-as-a-Judge" systems (both illicit and sanctioned) to adversarial PDF manipulation. Unlike general jailbreaks, we focus on a distinct incentive: flipping "Reject" decisions to "Accept," for which we develop a novel evaluation metric which we term as WAVS (Weighted Adversarial Vulnerability Score). We curated a dataset of 200 scientific papers and adapted 15 domain-specific attack strategies to this task, evaluating them across 13 Language Models, including GPT-5, Claude Haiku, and DeepSeek. Our results demonstrate that obfuscation strategies like "Maximum Mark Magyk" successfully manipulate scores, achieving alarming decision flip rates even in large-scale models. We will release our complete dataset and injection framework to facilitate more research on this topic.
+ oai:arXiv.org:2512.10449v1cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CL
+ cs.CR
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Anissa Alloula, Charles Jones, Zuzanna Wakefield-Skorniewska, Francesco Quinzan, Bart{\l}omiej Papie\.z
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Devanshu Sahoo, Manish Prasad, Vasudev Majhi, Jahnvi Singh, Vinay Chamola, Yash Sinha, Murari Mandal, Dhruv Kumar
- Gradient-Guided Learning Network for Infrared Small Target Detection
- https://arxiv.org/abs/2512.09497
- arXiv:2512.09497v1 Announce Type: new
-Abstract: Recently, infrared small target detection has attracted extensive attention. However, due to the small size and the lack of intrinsic features of infrared small targets, the existing methods generally have the problem of inaccurate edge positioning and the target is easily submerged by the background. Therefore, we propose an innovative gradient-guided learning network (GGL-Net). Specifically, we are the first to explore the introduction of gradient magnitude images into the deep learning-based infrared small target detection method, which is conducive to emphasizing the edge details and alleviating the problem of inaccurate edge positioning of small targets. On this basis, we propose a novel dual-branch feature extraction network that utilizes the proposed gradient supplementary module (GSM) to encode raw gradient information into deeper network layers and embeds attention mechanisms reasonably to enhance feature extraction ability. In addition, we construct a two-way guidance fusion module (TGFM), which fully considers the characteristics of feature maps at different levels. It can facilitate the effective fusion of multi-scale feature maps and extract richer semantic information and detailed information through reasonable two-way guidance. Extensive experiments prove that GGL-Net has achieves state-of-the-art results on the public real NUAA-SIRST dataset and the public synthetic NUDT-SIRST dataset. Our code has been integrated into https://github.com/YuChuang1205/MSDA-Net
- oai:arXiv.org:2512.09497v1
+ Error-Propagation-Free Learned Video Compression With Dual-Domain Progressive Temporal Alignment
+ https://arxiv.org/abs/2512.10450
+ arXiv:2512.10450v1 Announce Type: new
+Abstract: Existing frameworks for learned video compression suffer from a dilemma between inaccurate temporal alignment and error propagation for motion estimation and compensation (ME/MC). The separate-transform framework employs distinct transforms for intra-frame and inter-frame compression to yield impressive rate-distortion (R-D) performance but causes evident error propagation, while the unified-transform framework eliminates error propagation via shared transforms but is inferior in ME/MC in shared latent domains. To address this limitation, in this paper, we propose a novel unifiedtransform framework with dual-domain progressive temporal alignment and quality-conditioned mixture-of-expert (QCMoE) to enable quality-consistent and error-propagation-free streaming for learned video compression. Specifically, we propose dualdomain progressive temporal alignment for ME/MC that leverages coarse pixel-domain alignment and refined latent-domain alignment to significantly enhance temporal context modeling in a coarse-to-fine fashion. The coarse pixel-domain alignment efficiently handles simple motion patterns with optical flow estimated from a single reference frame, while the refined latent-domain alignment develops a Flow-Guided Deformable Transformer (FGDT) over latents from multiple reference frames to achieve long-term motion refinement (LTMR) for complex motion patterns. Furthermore, we design a QCMoE module for continuous bit-rate adaptation that dynamically assigns different experts to adjust quantization steps per pixel based on target quality and content rather than relies on a single quantization step. QCMoE allows continuous and consistent rate control with appealing R-D performance. Experimental results show that the proposed method achieves competitive R-D performance compared with the state-of-the-arts, while successfully eliminating error propagation.
+ oai:arXiv.org:2512.10450v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1109/LGRS.2023.3308783
- Jinmiao Zhao, Chuang Yu, Zelin Shi, Yunpeng Liu, Yingdi Zhang
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Han Li, Shaohui Li, Wenrui Dai, Chenglin Li, Xinlong Pan, Haipeng Wang, Junni Zou, Hongkai Xiong
- Scalable Construction of Spiking Neural Networks using up to thousands of GPUs
- https://arxiv.org/abs/2512.09502
- arXiv:2512.09502v1 Announce Type: new
-Abstract: Diverse scientific and engineering research areas deal with discrete, time-stamped changes in large systems of interacting delay differential equations. Simulating such complex systems at scale on high-performance computing clusters demands efficient management of communication and memory. Inspired by the human cerebral cortex -- a sparsely connected network of $\mathcal{O}(10^{10})$ neurons, each forming $\mathcal{O}(10^{3})$--$\mathcal{O}(10^{4})$ synapses and communicating via short electrical pulses called spikes -- we study the simulation of large-scale spiking neural networks for computational neuroscience research. This work presents a novel network construction method for multi-GPU clusters and upcoming exascale supercomputers using the Message Passing Interface (MPI), where each process builds its local connectivity and prepares the data structures for efficient spike exchange across the cluster during state propagation. We demonstrate scaling performance of two cortical models using point-to-point and collective communication, respectively.
- oai:arXiv.org:2512.09502v1
- cs.DC
- cs.NE
- physics.comp-ph
- q-bio.NC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Metacognitive Sensitivity for Test-Time Dynamic Model Selection
+ https://arxiv.org/abs/2512.10451
+ arXiv:2512.10451v1 Announce Type: new
+Abstract: A key aspect of human cognition is metacognition - the ability to assess one's own knowledge and judgment reliability. While deep learning models can express confidence in their predictions, they often suffer from poor calibration, a cognitive bias where expressed confidence does not reflect true competence. Do models truly know what they know? Drawing from human cognitive science, we propose a new framework for evaluating and leveraging AI metacognition. We introduce meta-d', a psychologically-grounded measure of metacognitive sensitivity, to characterise how reliably a model's confidence predicts its own accuracy. We then use this dynamic sensitivity score as context for a bandit-based arbiter that performs test-time model selection, learning which of several expert models to trust for a given task. Our experiments across multiple datasets and deep learning model combinations (including CNNs and VLMs) demonstrate that this metacognitive approach improves joint-inference accuracy over constituent models. This work provides a novel behavioural account of AI models, recasting ensemble selection as a problem of evaluating both short-term signals (confidence prediction scores) and medium-term traits (metacognitive sensitivity).
+ oai:arXiv.org:2512.10451v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Bruno Golosio, Gianmarco Tiddia, Jos\'e Villamar, Luca Pontisso, Luca Sergi, Francesco Simula, Pooja Babu, Elena Pastorelli, Abigail Morrison, Markus Diesmann, Alessandro Lonardo, Pier Stanislao Paolucci, Johanna Senk
+ Le Tuan Minh Trinh, Le Minh Vu Pham, Thi Minh Anh Pham, An Duc Nguyen
- DMP-TTS: Disentangled multi-modal Prompting for Controllable Text-to-Speech with Chained Guidance
- https://arxiv.org/abs/2512.09504
- arXiv:2512.09504v1 Announce Type: new
-Abstract: Controllable text-to-speech (TTS) systems face significant challenges in achieving independent manipulation of speaker timbre and speaking style, often suffering from entanglement between these attributes. We present DMP-TTS, a latent Diffusion Transformer (DiT) framework with explicit disentanglement and multi-modal prompting. A CLAP-based style encoder (Style-CLAP) aligns cues from reference audio and descriptive text in a shared space and is trained with contrastive learning plus multi-task supervision on style attributes. For fine-grained control during inference, we introduce chained classifier-free guidance (cCFG) trained with hierarchical condition dropout, enabling independent adjustment of content, timbre, and style guidance strengths. Additionally, we employ Representation Alignment (REPA) to distill acoustic-semantic features from a pretrained Whisper model into intermediate DiT representations, stabilizing training and accelerating convergence. Experiments show that DMP-TTS delivers stronger style controllability than open-source baselines while maintaining competitive intelligibility and naturalness. Code and demos will be available at https://y61329697.github.io/DMP-TTS/.
- oai:arXiv.org:2512.09504v1
- cs.SD
- Thu, 11 Dec 2025 00:00:00 -0500
+ UniCoR: Modality Collaboration for Robust Cross-Language Hybrid Code Retrieval
+ https://arxiv.org/abs/2512.10452
+ arXiv:2512.10452v1 Announce Type: new
+Abstract: Effective code retrieval is indispensable and it has become an important paradigm to search code in hybrid mode using both natural language and code snippets. Nevertheless, it remains unclear whether existing approaches can effectively leverage such hybrid queries, particularly in cross-language contexts. We conduct a comprehensive empirical study of representative code models and reveal three challenges: (1) insufficient semantic understanding; (2) inefficient fusion in hybrid code retrieval; and (3) weak generalization in cross-language scenarios. To address these challenges, we propose UniCoR, a novel self-supervised framework that learns Unified Code Representations framework designed to learn unified and robust code representations. Firstly, we design a multi-perspective supervised contrastive learning module to enhance semantic understanding and modality fusion. It aligns representations from multiple perspectives, including code-to-code, natural language-to-code, and natural language-to-natural language, enforcing the model to capture a semantic essence among modalities. Secondly, we introduce a representation distribution consistency learning module to improve cross-language generalization, which explicitly aligns the feature distributions of different programming languages, enabling language-agnostic representation learning. Extensive experiments on both empirical benchmark and large-scale benchmark show that UniCoR outperforms all baseline models, achieving an average improvement of 8.64% in MRR and 11.54% in MAP over the best-performing baseline. Furthermore, UniCoR exhibits stability in hybrid code retrieval and generalization capability in cross-language scenarios.
+ oai:arXiv.org:2512.10452v1
+ cs.SE
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kang Yin, Chunyu Qiang, Sirui Zhao, Xiaopeng Wang, Yuzhe Liang, Pengfei Cai, Tong Xu, Chen Zhang, Enhong Chen
+ http://creativecommons.org/licenses/by/4.0/
+ Yang Yang, Li Kuang, Jiakun Liu, Zhongxin Liu, Yingjie Xia, David Lo
- CNFinBench: A Benchmark for Safety and Compliance of Large Language Models in Finance
- https://arxiv.org/abs/2512.09506
- arXiv:2512.09506v1 Announce Type: new
-Abstract: Large language models are increasingly deployed across the financial sector for tasks such as research, compliance, risk analysis, and customer service, which makes rigorous safety evaluation essential. However, existing financial benchmarks primarily focus on textbook-style question answering and numerical problem solving, but fail to evaluate models' real-world safety behaviors. They weakly assess regulatory compliance and investor-protection norms, rarely stress-test multi-turn adversarial tactics such as jailbreaks or prompt injection, inconsistently ground answers in long filings, ignore tool- or RAG-induced over-reach risks, and rely on opaque or non-auditable evaluation protocols. To close these gaps, we introduce CNFinBench, a benchmark that employs finance-tailored red-team dialogues and is structured around a Capability-Compliance-Safety triad, including evidence-grounded reasoning over long reports and jurisdiction-aware rule/tax compliance tasks. For systematic safety quantification, we introduce the Harmful Instruction Compliance Score (HICS) to measure how consistently models resist harmful prompts across multi-turn adversarial dialogues. To ensure auditability, CNFinBench enforces strict output formats with dynamic option perturbation for objective tasks and employs a hybrid LLM-ensemble plus human-calibrated judge for open-ended evaluations. Experiments on 21 models across 15 subtasks confirm a persistent capability-compliance gap: models achieve an average score of 61.0 on capability tasks but fall to 34.18 on compliance and risk-control evaluations. Under multi-turn adversarial dialogue tests, most systems reach only partial resistance (HICS 60-79), demonstrating that refusal alone is not a reliable proxy for safety without cited and verifiable reasoning.
- oai:arXiv.org:2512.09506v1
- cs.CE
- Thu, 11 Dec 2025 00:00:00 -0500
+ Grammaticality Judgments in Humans and Language Models: Revisiting Generative Grammar with LLMs
+ https://arxiv.org/abs/2512.10453
+ arXiv:2512.10453v1 Announce Type: new
+Abstract: What counts as evidence for syntactic structure? In traditional generative grammar, systematic contrasts in grammaticality such as subject-auxiliary inversion and the licensing of parasitic gaps are taken as evidence for an internal, hierarchical grammar. In this paper, we test whether large language models (LLMs), trained only on surface forms, reproduce these contrasts in ways that imply an underlying structural representation.
+ We focus on two classic constructions: subject-auxiliary inversion (testing recognition of the subject boundary) and parasitic gap licensing (testing abstract dependency structure). We evaluate models including GPT-4 and LLaMA-3 using prompts eliciting acceptability ratings. Results show that LLMs reliably distinguish between grammatical and ungrammatical variants in both constructions, and as such support that they are sensitive to structure and not just linear order. Structural generalizations, distinct from cognitive knowledge, emerge from predictive training on surface forms, suggesting functional sensitivity to syntax without explicit encoding.
+ oai:arXiv.org:2512.10453v1
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jinru Ding, Chao Ding, Wenrao Pang, Boyi Xiao, Zhiqiang Liu, Pengcheng Chen, Jiayuan Chen, Tiantian Yuan, Junming Guan, Yidong Jiang, Dawei Cheng, Jie Xu
+ http://creativecommons.org/licenses/by/4.0/
+ Lars G. B. Johnsen
- Two-Variable Logic for Hierarchically Partitioned and Ordered Data
- https://arxiv.org/abs/2512.09508
- arXiv:2512.09508v1 Announce Type: new
-Abstract: We study Two-Variable First-Order Logic, FO2, under semantic constraints that model hierarchically structured data. Our first logic extends FO2 with a linear order < and a chain of increasingly coarser equivalence relations E_1, E_2, ... . We show that its finite satisfiability problem is NExpTime-complete. We also demonstrate that a weaker variant of this logic without the linear order enjoys the exponential model property. Our second logic extends FO2 with a chain of nested total preorders. We prove that its finite satisfiability problem is also NExpTime-complete.However, we show that the complexity increases to ExpSpace-complete once access to the successor relations of the preorders is allowed. Our last result is the undecidability of FO2 with two independent chains of nested equivalence relations.
- oai:arXiv.org:2512.09508v1
- cs.LO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Hybrid Physics-ML Model for Forward Osmosis Flux with Complete Uncertainty Quantification
+ https://arxiv.org/abs/2512.10457
+ arXiv:2512.10457v1 Announce Type: new
+Abstract: Forward Osmosis (FO) is a promising low-energy membrane separation technology, but challenges in accurately modelling its water flux (Jw) persist due to complex internal mass transfer phenomena. Traditional mechanistic models struggle with empirical parameter variability, while purely data-driven models lack physical consistency and rigorous uncertainty quantification (UQ). This study introduces a novel Robust Hybrid Physics-ML framework employing Gaussian Process Regression (GPR) for highly accurate, uncertainty-aware Jw prediction. The core innovation lies in training the GPR on the residual error between the detailed, non-linear FO physical model prediction (Jw_physical) and the experimental water flux (Jw_actual). Crucially, we implement a full UQ methodology by decomposing the total predictive variance (sigma2_total) into model uncertainty (epistemic, from GPR's posterior variance) and input uncertainty (aleatoric, analytically propagated via the Delta method for multi-variate correlated inputs). Leveraging the inherent strength of GPR in low-data regimes, the model, trained on a meagre 120 data points, achieved a state-of-the-art Mean Absolute Percentage Error (MAPE) of 0.26% and an R2 of 0.999 on the independent test data, validating a truly robust and reliable surrogate model for advanced FO process optimization and digital twin development.
+ oai:arXiv.org:2512.10457v1
+ cs.LG
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Oskar Fiuk, Emanuel Kieronski, Vincent Michielini
+ Shiv Ratn, Shivang Rampriyan, Bahni Ray
- ViTA-Seg: Vision Transformer for Amodal Segmentation in Robotics
- https://arxiv.org/abs/2512.09510
- arXiv:2512.09510v1 Announce Type: new
-Abstract: Occlusions in robotic bin picking compromise accurate and reliable grasp planning. We present ViTA-Seg, a class-agnostic Vision Transformer framework for real-time amodal segmentation that leverages global attention to recover complete object masks, including hidden regions. We proposte two architectures: a) Single-Head for amodal mask prediction; b) Dual-Head for amodal and occluded mask prediction. We also introduce ViTA-SimData, a photo-realistic synthetic dataset tailored to industrial bin-picking scenario. Extensive experiments on two amodal benchmarks, COOCA and KINS, demonstrate that ViTA-Seg Dual Head achieves strong amodal and occlusion segmentation accuracy with computational efficiency, enabling robust, real-time robotic manipulation.
- oai:arXiv.org:2512.09510v1
- cs.RO
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ T-SKM-Net: Trainable Neural Network Framework for Linear Constraint Satisfaction via Sampling Kaczmarz-Motzkin Method
+ https://arxiv.org/abs/2512.10461
+ arXiv:2512.10461v1 Announce Type: new
+Abstract: Neural network constraint satisfaction is crucial for safety-critical applications such as power system optimization, robotic path planning, and autonomous driving. However, existing constraint satisfaction methods face efficiency-applicability trade-offs, with hard constraint methods suffering from either high computational complexity or restrictive assumptions on constraint structures. The Sampling Kaczmarz-Motzkin (SKM) method is a randomized iterative algorithm for solving large-scale linear inequality systems with favorable convergence properties, but its argmax operations introduce non-differentiability, posing challenges for neural network applications. This work proposes the Trainable Sampling Kaczmarz-Motzkin Network (T-SKM-Net) framework and, for the first time, systematically integrates SKM-type methods into neural network constraint satisfaction. The framework transforms mixed constraint problems into pure inequality problems through null space transformation, employs SKM for iterative solving, and maps solutions back to the original constraint space, efficiently handling both equality and inequality constraints. We provide theoretical proof of post-processing effectiveness in expectation and end-to-end trainability guarantees based on unbiased gradient estimators, demonstrating that despite non-differentiable operations, the framework supports standard backpropagation. On the DCOPF case118 benchmark, our method achieves 4.27ms/item GPU serial forward inference with 0.0025% max optimality gap with post-processing mode and 5.25ms/item with 0.0008% max optimality gap with joint training mode, delivering over 25$\times$ speedup compared to the pandapower solver while maintaining zero constraint violations under given tolerance.
+ oai:arXiv.org:2512.10461v1
+ cs.LG
+ cs.AI
+ math.OC
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Donato Caramia, Florian T. Pokorny, Giuseppe Triggiani, Denis Ruffino, David Naso, Paolo Roberto Massenio
+ Haoyu Zhu, Yao Zhang, Jiashen Ren, Qingchun Hou
- Exploring Community-Powered Conversational Agent for Health Knowledge Acquisition: A Case Study in Colorectal Cancer
- https://arxiv.org/abs/2512.09511
- arXiv:2512.09511v1 Announce Type: new
-Abstract: Online communities have become key platforms where young adults, actively seek and share information, including health knowledge. However, these users often face challenges when browsing these communities, such as fragmented content, varying information quality and unfamiliar terminology. Based on a survey with 56 participants and follow-up interviews, we identify common challenges and expected features for learning health knowledge. In this paper, we develop a computational workflow that integrates community content into a conversational agent named CanAnswer to facilitate health knowledge acquisition. Using colorectal cancer as a case study, we evaluate CanAnswer through a lab study with 24 participants and interviews with six medical experts. Results show that CanAnswer improves the recalled gained knowledge and reduces the task workload of the learning session. Our expert interviews (N=6) further confirm the reliability and usefulness of CanAnswer. We discuss the generality of CanAnswer and provide design considerations for enhancing the usefulness and credibility of community-powered learning tools.
- oai:arXiv.org:2512.09511v1
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Stealth and Evasion in Rogue AP Attacks: An Analysis of Modern Detection and Bypass Techniques
+ https://arxiv.org/abs/2512.10470
+ arXiv:2512.10470v1 Announce Type: new
+Abstract: Wireless networks act as the backbone of modern digital connectivity, making them a primary target for cyber adversaries. Rogue Access Point attacks, specifically the Evil Twin variant, enable attackers to clone legitimate wireless network identifiers to deceive users into connecting. Once a connection is established, the adversary can intercept traffic and harvest sensitive credentials. While modern defensive architectures often employ Network Intrusion Detection Systems (NIDS) to identify malicious activity, the effectiveness of these systems against Layer 2 wireless threats remains a subject of critical inquiry. This project aimed to design a stealth-capable Rogue AP and evaluate its detectability against Suricata, an open-source NIDS/IPS. The methodology initially focused on a hardware-based deployment using Raspberry Pi platforms but transitioned to a virtualized environment due to severe system compatibility issues. Using Wifipumpkin3, the research team successfully deployed a captive portal that harvested user credentials from connected devices. However, the Suricata NIDS failed to flag the attack, highlighting a significant blind spot in traditional intrusion detection regarding wireless management frame attacks. This paper details the construction of the attack, the evasion techniques employed, and the limitations of current NIDS solutions in detecting localized wireless threats
+ oai:arXiv.org:2512.10470v1
+ cs.CR
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yiwei Yuan, Zhiqing Wang, Xiucheng Zhang, Yichao Luo, Shuya Lin, Yang Bai, Zhenhui Peng
+ http://creativecommons.org/licenses/by/4.0/
+ Kaleb Bacztub, Braden Vester, Matteo Hodge, Liulseged Abate
- Contextual Dynamic Pricing with Heterogeneous Buyers
- https://arxiv.org/abs/2512.09513
- arXiv:2512.09513v1 Announce Type: new
-Abstract: We initiate the study of contextual dynamic pricing with a heterogeneous population of buyers, where a seller repeatedly posts prices (over $T$ rounds) that depend on the observable $d$-dimensional context and receives binary purchase feedback. Unlike prior work assuming homogeneous buyer types, in our setting the buyer's valuation type is drawn from an unknown distribution with finite support size $K_{\star}$. We develop a contextual pricing algorithm based on optimistic posterior sampling with regret $\widetilde{O}(K_{\star}\sqrt{dT})$, which we prove to be tight in $d$ and $T$ up to logarithmic terms. Finally, we refine our analysis for the non-contextual pricing case, proposing a variance-aware zooming algorithm that achieves the optimal dependence on $K_{\star}$.
- oai:arXiv.org:2512.09513v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ From Alternation to FPRAS: Toward a Complexity Classification of Approximate Counting
+ https://arxiv.org/abs/2512.10472
+ arXiv:2512.10472v1 Announce Type: new
+Abstract: Counting problems are fundamental across mathematics and computer science. Among the most subtle are those whose associated decision problem is solvable in polynomial time, yet whose exact counting version appears intractable. For some such problems, however, one can still obtain efficient randomized approximation in the form of a fully polynomial randomized approximation scheme (FPRAS). Existing proofs of FPRAS existence are often highly technical and problem-specific, offering limited insight into a more systematic complexity-theoretic account of approximability. In this work, we propose a machine-based framework for establishing the existence of an FPRAS beyond previous uniform criteria. Our starting point is alternating computation: we introduce a counting model obtained by equipping alternating Turing machines with a transducer-style output mechanism, and we use it to define a corresponding counting class spanALP. We show that every problem in spanALP admits an FPRAS, yielding a reusable sufficient condition that can be applied via reductions to alternating logspace, polynomial-time computation with output. We situate spanALP in the counting complexity landscape as strictly between #L and TotP (assuming RP $\neq$ NP) and observe interesting conceptual and technical gaps in the current machinery counting complexity. Moreover, as an illustrative application, we obtain an FPRAS for counting answers to counting the answers Dyck-constrained path queries in edge-labeled graphs, i.e., counting the number of distinct labelings realized by s-t walks whose label sequence is well-formed with respect to a Dyck-like language. To our knowledge, no FPRAS was previously known for this setting. We expect the alternating-transducer characterization to provide a broadly applicable tool for establishing FPRAS existence for further counting problems.
+ oai:arXiv.org:2512.10472v1
+ cs.CC
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Thodoris Lykouris, Sloan Nietert, Princewill Okoroafor, Chara Podimata, Julian Zimmert
+ Markus Hecher, Matthias Lanzinger
- QuanvNeXt: An end-to-end quanvolutional neural network for EEG-based detection of major depressive disorder
- https://arxiv.org/abs/2512.09517
- arXiv:2512.09517v1 Announce Type: new
-Abstract: This study presents QuanvNeXt, an end-to-end fully quanvolutional model for EEG-based depression diagnosis. QuanvNeXt incorporates a novel Cross Residual block, which reduces feature homogeneity and strengthens cross-feature relationships while retaining parameter efficiency. We evaluated QuanvNeXt on two open-source datasets, where it achieved an average accuracy of 93.1% and an average AUC-ROC of 97.2%, outperforming state-of-the-art baselines such as InceptionTime (91.7% accuracy, 95.9% AUC-ROC). An uncertainty analysis across Gaussian noise levels demonstrated well-calibrated predictions, with ECE scores remaining low (0.0436, Dataset 1) to moderate (0.1159, Dataset 2) even at the highest perturbation ({\epsilon} = 0.1). Additionally, a post-hoc explainable AI analysis confirmed that QuanvNeXt effectively identifies and learns spectrotemporal patterns that distinguish between healthy controls and major depressive disorder. Overall, QuanvNeXt establishes an efficient and reliable approach for EEG-based depression diagnosis.
- oai:arXiv.org:2512.09517v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Second order reduced model via incremental projection for Navier Stokes
+ https://arxiv.org/abs/2512.10473
+ arXiv:2512.10473v1 Announce Type: new
+Abstract: The numerical simulation of incompressible flows is challenging due to the tight coupling of velocity and pressure. Projection methods offer an effective solution by decoupling these variables, making them suitable for large-scale computations. This work focuses on reduced-order modeling using incremental projection schemes for the Stokes equations. We present both semi-discrete and fully discrete formulations, employing BDF2 in time and finite elements in space. A proper orthogonal decomposition (POD) approach is adopted to construct a reduced-order model for the Stokes problem. The method enables explicit computation of reduced velocity and pressure while preserving accuracy. We provide a detailed stability analysis and derive error estimates, showing second-order convergence in time. Numerical experiments are conducted to validate the theoretical results and demonstrate computational efficiency.
+ oai:arXiv.org:2512.10473v1
+ math.NA
+ cs.NA
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Nabil Anan Orka, Ehtashamul Haque, Maftahul Jannat, Md Abdul Awal, Mohammad Ali Moni
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mejdi Aza\"iez, Yayu Guo, Carlos N\'u\~nez Fern\'andez, Samuele Rubino, Chuanju Xu
- Masked Registration and Autoencoding of CT Images for Predictive Tibia Reconstruction
- https://arxiv.org/abs/2512.09525
- arXiv:2512.09525v1 Announce Type: new
-Abstract: Surgical planning for complex tibial fractures can be challenging for surgeons, as the 3D structure of the later desirable bone alignment may be diffi- cult to imagine. To assist in such planning, we address the challenge of predicting a patient-specific reconstruction target from a CT of the fractured tibia. Our ap- proach combines neural registration and autoencoder models. Specifically, we first train a modified spatial transformer network (STN) to register a raw CT to a standardized coordinate system of a jointly trained tibia prototype. Subsequently, various autoencoder (AE) architectures are trained to model healthy tibial varia- tions. Both the STN and AE models are further designed to be robust to masked input, allowing us to apply them to fractured CTs and decode to a prediction of the patient-specific healthy bone in standard coordinates. Our contributions include: i) a 3D-adapted STN for global spatial registration, ii) a comparative analysis of AEs for bone CT modeling, and iii) the extension of both to handle masked inputs for predictive generation of healthy bone structures. Project page: https://github.com/HongyouZhou/repair
- oai:arXiv.org:2512.09525v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Symphony: A Heuristic Normalized Calibrated Advantage Actor and Critic Algorithm in application for Humanoid Robots
+ https://arxiv.org/abs/2512.10477
+ arXiv:2512.10477v1 Announce Type: new
+Abstract: In our work we not explicitly hint that it is a misconception to think that humans learn fast. Learning process takes time. Babies start learning to move in the restricted liquid area called placenta. Children often are limited by underdeveloped body. Even adults are not allowed to participate in complex competitions right away. However, with robots, when learning from scratch, we often don't have the privilege of waiting for dozen millions of steps. "Swaddling" regularization is responsible for restraining an agent in rapid but unstable development penalizing action strength in a specific way not affecting actions directly. The Symphony, Transitional-policy Deterministic Actor and Critic algorithm, is a concise combination of different ideas for possibility of training humanoid robots from scratch with Sample Efficiency, Sample Proximity and Safety of Actions in mind. It is no secret that continuous increase in Gaussian noise without appropriate smoothing is harmful for motors and gearboxes. Compared to Stochastic algorithms, we set a limited parametric noise and promote a reduced strength of actions, safely increasing entropy, since the actions are kind of immersed in weaker noise. When actions require more extreme values, actions rise above the weak noise. Training becomes empirically much safer for both the environment around and the robot's mechanisms. We use Fading Replay Buffer: using a fixed formula containing the hyperbolic tangent, we adjust the batch sampling probability: the memory contains a recent memory and a long-term memory trail. Fading Replay Buffer allows us to use Temporal Advantage when we improve the current Critic Network prediction compared to the exponential moving average. Temporal Advantage allows us to update Actor and Critic in one pass, as well as combine Actor and Critic in one Object and implement their Losses in one line.
+ oai:arXiv.org:2512.10477v1
+ cs.RO
+ cs.NE
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hongyou Zhou, Cederic A{\ss}mann, Alaa Bejaoui, Heiko Tzsch\"atzsch, Mark Heyland, Julian Zierke, Niklas Tuttle, Sebastian H\"olzl, Timo Auer, David A. Back, Marc Toussaint
+ http://creativecommons.org/licenses/by/4.0/
+ Timur Ishuov, Michele Folgheraiter, Madi Nurmanov, Goncalo Gordo, Rich\'ard Farkas, J\'ozsef Dombi
- Latent-Autoregressive GP-VAE Language Model
- https://arxiv.org/abs/2512.09535
- arXiv:2512.09535v1 Announce Type: new
-Abstract: We investigate a fully Latent AutoRegressive scheme based on a Gaussian Process (GP) integrated into a Variational Autoencoder (VAE). In this setting, sequential dynamics are transferred from the observation space to a continuous latent space, while linguistic generation remains parallel through a non-autoregressive decoder. We present a complete methodological formulation, including a causal GP prior, a structured amortized posterior, and a training protocol based on a regularized ELBO. Empirical evaluation, conducted within a deliberately constrained proof-of-concept (POC) framework, shows that the model can be trained stably and that the sequential and parallel sampling variants exhibit consistent behavior. Overall, the results suggest that part of the temporal structure in a language model can be supported by the probabilistic geometry of the latent space rather than by explicit neural operations.
- oai:arXiv.org:2512.09535v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Seamless Outdoor-Indoor Pedestrian Positioning System with GNSS/UWB/IMU Fusion: A Comparison of EKF, FGO, and PF
+ https://arxiv.org/abs/2512.10480
+ arXiv:2512.10480v1 Announce Type: new
+Abstract: Accurate and continuous pedestrian positioning across outdoor-indoor environments remains challenging because GNSS, UWB, and inertial PDR are complementary yet individually fragile under signal blockage, multipath, and drift. This paper presents a unified GNSS/UWB/IMU fusion framework for seamless pedestrian localization and provides a controlled comparison of three probabilistic back-ends: an error-state extended Kalman filter, sliding-window factor graph optimization, and a particle filter. The system uses chest-mounted IMU-based PDR as the motion backbone and integrates absolute updates from GNSS outdoors and UWB indoors. To enhance transition robustness and mitigate urban GNSS degradation, we introduce a lightweight map-based feasibility constraint derived from OpenStreetMap building footprints, treating most building interiors as non-navigable while allowing motion inside a designated UWB-instrumented building. The framework is implemented in ROS 2 and runs in real time on a wearable platform, with visualization in Foxglove. We evaluate three scenarios: indoor (UWB+PDR), outdoor (GNSS+PDR), and seamless outdoor-indoor (GNSS+UWB+PDR). Results show that the ESKF provides the most consistent overall performance in our implementation.
+ oai:arXiv.org:2512.10480v1
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Yves Ruffenach
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jiaqiang Zhang, Xianjia Yu, Sier Ha, Paola Torrico Moron, Sahar Salimpour, Farhad Kerama, Haizhou Zhang, Tomi Westerlund
- REASAN: Learning Reactive Safe Navigation for Legged Robots
- https://arxiv.org/abs/2512.09537
- arXiv:2512.09537v1 Announce Type: new
-Abstract: We present a novel modularized end-to-end framework for legged reactive navigation in complex dynamic environments using a single light detection and ranging (LiDAR) sensor. The system comprises four simulation-trained modules: three reinforcement-learning (RL) policies for locomotion, safety shielding, and navigation, and a transformer-based exteroceptive estimator that processes raw point-cloud inputs. This modular decomposition of complex legged motor-control tasks enables lightweight neural networks with simple architectures, trained using standard RL practices with targeted reward shaping and curriculum design, without reliance on heuristics or sophisticated policy-switching mechanisms. We conduct comprehensive ablations to validate our design choices and demonstrate improved robustness compared to existing approaches in challenging navigation tasks. The resulting reactive safe navigation (REASAN) system achieves fully onboard and real-time reactive navigation across both single- and multi-robot settings in complex environments. We release our training and deployment code at https://github.com/ASIG-X/REASAN.
- oai:arXiv.org:2512.09537v1
+ Contact SLAM: An Active Tactile Exploration Policy Based on Physical Reasoning Utilized in Robotic Fine Blind Manipulation Tasks
+ https://arxiv.org/abs/2512.10481
+ arXiv:2512.10481v1 Announce Type: new
+Abstract: Contact-rich manipulation is difficult for robots to execute and requires accurate perception of the environment. In some scenarios, vision is occluded. The robot can then no longer obtain real-time scene state information through visual feedback. This is called ``blind manipulation". In this manuscript, a novel physically-driven contact cognition method, called ``Contact SLAM", is proposed. It estimates the state of the environment and achieves manipulation using only tactile sensing and prior knowledge of the scene. To maximize exploration efficiency, this manuscript also designs an active exploration policy. The policy gradually reduces uncertainties in the manipulation scene. The experimental results demonstrated the effectiveness and accuracy of the proposed method in several contact-rich tasks, including the difficult and delicate socket assembly task and block-pushing task.
+ oai:arXiv.org:2512.10481v1cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qihao Yuan, Ziyu Cao, Ming Cao, Kailai Li
+ Gaozhao Wang, Xing Liu, Zhenduo Ye, Zhengxiong Liu, Panfeng Huang
- Comparative Analysis of Hash-based Malware Clustering via K-Means
- https://arxiv.org/abs/2512.09539
- arXiv:2512.09539v1 Announce Type: new
-Abstract: With the adoption of multiple digital devices in everyday life, the cyber-attack surface has increased. Adversaries are continuously exploring new avenues to exploit them and deploy malware. On the other hand, detection approaches typically employ hashing-based algorithms such as SSDeep, TLSH, and IMPHash to capture structural and behavioural similarities among binaries. This work focuses on the analysis and evaluation of these techniques for clustering malware samples using the K-means algorithm. More specifically, we experimented with established malware families and traits and found that TLSH and IMPHash produce more distinct, semantically meaningful clusters, whereas SSDeep is more efficient for broader classification tasks. The findings of this work can guide the development of more robust threat-detection mechanisms and adaptive security mechanisms.
- oai:arXiv.org:2512.09539v1
+ From Lab to Reality: A Practical Evaluation of Deep Learning Models and LLMs for Vulnerability Detection
+ https://arxiv.org/abs/2512.10485
+ arXiv:2512.10485v1 Announce Type: new
+Abstract: Vulnerability detection methods based on deep learning (DL) have shown strong performance on benchmark datasets, yet their real-world effectiveness remains underexplored. Recent work suggests that both graph neural network (GNN)-based and transformer-based models, including large language models (LLMs), yield promising results when evaluated on curated benchmark datasets. These datasets are typically characterized by consistent data distributions and heuristic or partially noisy labels. In this study, we systematically evaluate two representative DL models-ReVeal and LineVul-across four representative datasets: Juliet, Devign, BigVul, and ICVul. Each model is trained independently on each respective dataset, and their code representations are analyzed using t-SNE to uncover vulnerability related patterns. To assess realistic applicability, we deploy these models along with four pretrained LLMs, Claude 3.5 Sonnet, GPT-o3-mini, GPT-4o, and GPT-5 on a curated dataset, VentiVul, comprising 20 recently (May 2025) fixed vulnerabilities from the Linux kernel. Our experiments reveal that current models struggle to distinguish vulnerable from non-vulnerable code in representation space and generalize poorly across datasets with differing distributions. When evaluated on VentiVul, our newly constructed time-wise out-of-distribution dataset, performance drops sharply, with most models failing to detect vulnerabilities reliably. These results expose a persistent gap between academic benchmarks and real-world deployment, emphasizing the value of our deployment-oriented evaluation framework and the need for more robust code representations and higher-quality datasets.
+ oai:arXiv.org:2512.10485v1cs.CRcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.SE
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Aink Acrie Soe Thein, Nikolaos Pitropakis, Pavlos Papadopoulos, Sam Grierson, Sana Ullah Jan
+ Chaomeng Lu, Bert Lagaisse
- SWEnergy: An Empirical Study on Energy Efficiency in Agentic Issue Resolution Frameworks with SLMs
- https://arxiv.org/abs/2512.09543
- arXiv:2512.09543v1 Announce Type: new
-Abstract: Context. LLM-based autonomous agents in software engineering rely on large, proprietary models, limiting local deployment. This has spurred interest in Small Language Models (SLMs), but their practical effectiveness and efficiency within complex agentic frameworks for automated issue resolution remain poorly understood.
- Goal. We investigate the performance, energy efficiency, and resource consumption of four leading agentic issue resolution frameworks when deliberately constrained to using SLMs. We aim to assess the viability of these systems for this task in resource-limited settings and characterize the resulting trade-offs.
- Method. We conduct a controlled evaluation of four leading agentic frameworks (SWE-Agent, OpenHands, Mini SWE Agent, AutoCodeRover) using two SLMs (Gemma-3 4B, Qwen-3 1.7B) on the SWE-bench Verified Mini benchmark. On fixed hardware, we measure energy, duration, token usage, and memory over 150 runs per configuration.
- Results. We find that framework architecture is the primary driver of energy consumption. The most energy-intensive framework, AutoCodeRover (Gemma), consumed 9.4x more energy on average than the least energy-intensive, OpenHands (Gemma). However, this energy is largely wasted. Task resolution rates were near-zero, demonstrating that current frameworks, when paired with SLMs, consume significant energy on unproductive reasoning loops. The SLM's limited reasoning was the bottleneck for success, but the framework's design was the bottleneck for efficiency.
- Conclusions. Current agentic frameworks, designed for powerful LLMs, fail to operate efficiently with SLMs. We find that framework architecture is the primary driver of energy consumption, but this energy is largely wasted due to the SLMs' limited reasoning. Viable low-energy solutions require shifting from passive orchestration to architectures that actively manage SLM weaknesses.
- oai:arXiv.org:2512.09543v1
- cs.SE
+ LLM-Assisted AHP for Explainable Cyber Range Evaluation
+ https://arxiv.org/abs/2512.10487
+ arXiv:2512.10487v1 Announce Type: new
+Abstract: Cyber Ranges (CRs) have emerged as prominent platforms for cybersecurity training and education, especially for Critical Infrastructure (CI) sectors that face rising cyber threats. One way to address these threats is through hands-on exercises that bridge IT and OT domains to improve defensive readiness. However, consistently evaluating whether a CR platform is suitable and effective remains a challenge. This paper proposes an evaluation framework for CRs, emphasizing mission-critical settings by using a multi-criteria decision-making approach. We define a set of evaluation criteria that capture technical fidelity, training and assessment capabilities, scalability, usability, and other relevant factors. To weight and aggregate these criteria, we employ the Analytic Hierarchy Process (AHP), supported by a simulated panel of multidisciplinary experts implemented through a Large Language Model (LLM). This LLM-assisted expert reasoning enables consistent and reproducible pairwise comparisons across criteria without requiring direct expert convening. The framework's output equals quantitative scores that facilitate objective comparison of CR platforms and highlight areas for improvement. Overall, this work lays the foundation for a standardized and explainable evaluation methodology to guide both providers and end-users of CRs.
+ oai:arXiv.org:2512.10487v1
+ cs.CR
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Vyron Kampourakis, Georgios Kavallieratos, Georgios Spathoulas, Vasileios Gkioulos, Sokratis Katsikas
+
+
+ UACER: An Uncertainty-Aware Critic Ensemble Framework for Robust Adversarial Reinforcement Learning
+ https://arxiv.org/abs/2512.10492
+ arXiv:2512.10492v1 Announce Type: new
+Abstract: Robust adversarial reinforcement learning has emerged as an effective paradigm for training agents to handle uncertain disturbance in real environments, with critical applications in sequential decision-making domains such as autonomous driving and robotic control. Within this paradigm, agent training is typically formulated as a zero-sum Markov game between a protagonist and an adversary to enhance policy robustness. However, the trainable nature of the adversary inevitably induces non-stationarity in the learning dynamics, leading to exacerbated training instability and convergence difficulties, particularly in high-dimensional complex environments. In this paper, we propose a novel approach, Uncertainty-Aware Critic Ensemble for robust adversarial Reinforcement learning (UACER), which consists of two strategies: 1) Diversified critic ensemble: a diverse set of K critic networks is exploited in parallel to stabilize Q-value estimation rather than conventional single-critic architectures for both variance reduction and robustness enhancement. 2) Time-varying Decay Uncertainty (TDU) mechanism: advancing beyond simple linear combinations, we develop a variance-derived Q-value aggregation strategy that explicitly incorporates epistemic uncertainty to dynamically regulate the exploration-exploitation trade-off while simultaneously stabilizing the training process. Comprehensive experiments across several MuJoCo control problems validate the superior effectiveness of UACER, outperforming state-of-the-art methods in terms of overall performance, stability, and efficiency.
+ oai:arXiv.org:2512.10492v1
+ cs.LGcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Arihant Tripathy, Ch Pavan Harshit, Karthik Vaidhyanathan
+ Jiaxi Wu, Tiantian Zhang, Yuxing Wang, Yongzhe Chang, Xueqian Wang
- A Dual-Domain Convolutional Network for Hyperspectral Single-Image Super-Resolution
- https://arxiv.org/abs/2512.09546
- arXiv:2512.09546v1 Announce Type: new
-Abstract: This study presents a lightweight dual-domain super-resolution network (DDSRNet) that combines Spatial-Net with the discrete wavelet transform (DWT). Specifically, our proposed model comprises three main components: (1) a shallow feature extraction module, termed Spatial-Net, which performs residual learning and bilinear interpolation; (2) a low-frequency enhancement branch based on the DWT that refines coarse image structures; and (3) a shared high-frequency refinement branch that simultaneously enhances the LH (horizontal), HL (vertical), and HH (diagonal) wavelet subbands using a single CNN with shared weights. As a result, the DWT enables subband decomposition, while the inverse DWT reconstructs the final high-resolution output. By doing so, the integration of spatial- and frequency-domain learning enables DDSRNet to achieve highly competitive performance with low computational cost on three hyperspectral image datasets, demonstrating its effectiveness for hyperspectral image super-resolution.
- oai:arXiv.org:2512.09546v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Decoding Human-LLM Collaboration in Coding: An Empirical Study of Multi-Turn Conversations in the Wild
+ https://arxiv.org/abs/2512.10493
+ arXiv:2512.10493v1 Announce Type: new
+Abstract: Large language models (LLMs) are increasingly acting as dynamic conversational interfaces, supporting multi-turn interactions that mimic human-like conversation and facilitate complex tasks like coding. While datasets such as LMSYS-Chat-1M and WildChat capture real-world user-LLM conversations, few studies systematically explore the mechanisms of human-LLM collaboration in coding scenarios. What tortuous paths do users experience during the interaction process? How well do the LLMs follow instructions? Are users satisfied? In this paper, we conduct an empirical analysis on human-LLM coding collaboration using LMSYS-Chat-1M and WildChat datasets to explore the human-LLM collaboration mechanism, LLMs' instruction following ability, and human satisfaction. This study yields interesting findings: 1) Task types shape interaction patterns(linear, star and tree), with code quality optimization favoring linear patterns, design-driven tasks leaning toward tree structures, and queries preferring star patterns; 2) Bug fixing and code refactoring pose greater challenges to LLMs' instruction following, with non-compliance rates notably higher than in information querying; 3) Code quality optimization and requirements-driven development tasks show lower user satisfaction, whereas structured knowledge queries and algorithm designs yield higher levels. These insights offer recommendations for improving LLM interfaces and user satisfaction in coding collaborations, while highlighting avenues for future research on adaptive dialogue systems. We believe this work broadens understanding of human-LLM synergies and supports more effective AI-assisted development.
+ oai:arXiv.org:2512.10493v1
+ cs.SE
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Murat Karayaka, Usman Muhammad, Jorma Laaksonen, Md Ziaul Hoque, Tapio Sepp\"anen
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Binquan Zhang, Li Zhang, Haoyuan Zhang, Fang Liu, Song Wang, Bo Shen, An Fu, Lin Shi
- Supporting Dynamic Agentic Workloads: How Data and Agents Interact
- https://arxiv.org/abs/2512.09548
- arXiv:2512.09548v1 Announce Type: new
-Abstract: The rise of multi-agent systems powered by large language models (LLMs) and specialized reasoning agents exposes fundamental limitations in today's data management architectures. Traditional databases and data fabrics were designed for static, well-defined workloads, whereas agentic systems exhibit dynamic, context-driven, and collaborative behaviors. Agents continuously decompose tasks, shift attention across modalities, and share intermediate results with peers - producing non-deterministic, multi-modal workloads that strain conventional query optimizers and caching mechanisms. We propose an Agent-Centric Data Fabric, a unified architecture that rethinks how data systems serve, optimize, coordinate, and learn from agentic workloads. To achieve this we exploit the concepts of attention-guided data retrieval, semantic micro-caching for context-driven agent federations, predictive data prefetching and quorum-based data serving. Together, these mechanisms enable agents to access representative data faster and more efficiently, while reducing redundant queries, data movement, and inference load across systems. By framing data systems as adaptive collaborators, instead of static executors, we outline new research directions toward behaviorally responsive data infrastructures, where caching, probing, and orchestration jointly enable efficient, context-rich data exchange among dynamic, reasoning-driven agents.
- oai:arXiv.org:2512.09548v1
- cs.MA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Robust Shape from Focus via Multiscale Directional Dilated Laplacian and Recurrent Network
+ https://arxiv.org/abs/2512.10498
+ arXiv:2512.10498v1 Announce Type: new
+Abstract: Shape-from-Focus (SFF) is a passive depth estimation technique that infers scene depth by analyzing focus variations in a focal stack. Most recent deep learning-based SFF methods typically operate in two stages: first, they extract focus volumes (a per pixel representation of focus likelihood across the focal stack) using heavy feature encoders; then, they estimate depth via a simple one-step aggregation technique that often introduces artifacts and amplifies noise in the depth map. To address these issues, we propose a hybrid framework. Our method computes multi-scale focus volumes traditionally using handcrafted Directional Dilated Laplacian (DDL) kernels, which capture long-range and directional focus variations to form robust focus volumes. These focus volumes are then fed into a lightweight, multi-scale GRU-based depth extraction module that iteratively refines an initial depth estimate at a lower resolution for computational efficiency. Finally, a learned convex upsampling module within our recurrent network reconstructs high-resolution depth maps while preserving fine scene details and sharp boundaries. Extensive experiments on both synthetic and real-world datasets demonstrate that our approach outperforms state-of-the-art deep learning and traditional methods, achieving superior accuracy and generalization across diverse focal conditions.
+ oai:arXiv.org:2512.10498v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Ioana Giurgiu, Michael E. Nidd
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Khurram Ashfaq, Muhammad Tariq Mahmood
- Chasing Shadows: Pitfalls in LLM Security Research
- https://arxiv.org/abs/2512.09549
- arXiv:2512.09549v1 Announce Type: new
-Abstract: Large language models (LLMs) are increasingly prevalent in security research. Their unique characteristics, however, introduce challenges that undermine established paradigms of reproducibility, rigor, and evaluation. Prior work has identified common pitfalls in traditional machine learning research, but these studies predate the advent of LLMs. In this paper, we identify \emph{nine} common pitfalls that have become (more) relevant with the emergence of LLMs and that can compromise the validity of research involving them. These pitfalls span the entire computation process, from data collection, pre-training, and fine-tuning to prompting and evaluation.
- We assess the prevalence of these pitfalls across all 72 peer-reviewed papers published at leading Security and Software Engineering venues between 2023 and 2024. We find that every paper contains at least one pitfall, and each pitfall appears in multiple papers. Yet only 15.7\% of the present pitfalls were explicitly discussed, suggesting that the majority remain unrecognized. To understand their practical impact, we conduct four empirical case studies showing how individual pitfalls can mislead evaluation, inflate performance, or impair reproducibility. Based on our findings, we offer actionable guidelines to support the community in future work.
- oai:arXiv.org:2512.09549v1
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Zero-shot 3D Map Generation with LLM Agents: A Dual-Agent Architecture for Procedural Content Generation
+ https://arxiv.org/abs/2512.10501
+ arXiv:2512.10501v1 Announce Type: new
+Abstract: Procedural Content Generation (PCG) offers scalable methods for algorithmically creating complex, customizable worlds. However, controlling these pipelines requires the precise configuration of opaque technical parameters. We propose a training-free architecture that utilizes LLM agents for zero-shot PCG parameter configuration. While Large Language Models (LLMs) promise a natural language interface for PCG tools, off-the-shelf models often fail to bridge the semantic gap between abstract user instructions and strict parameter specifications. Our system pairs an Actor agent with a Critic agent, enabling an iterative workflow where the system autonomously reasons over tool parameters and refines configurations to progressively align with human design preferences. We validate this approach on the generation of various 3D maps, establishing a new benchmark for instruction-following in PCG. Experiments demonstrate that our approach outperforms single-agent baselines, producing diverse and structurally valid environments from natural language descriptions. These results demonstrate that off-the-shelf LLMs can be effectively repurposed as generalized agents for arbitrary PCG tools. By shifting the burden from model training to architectural reasoning, our method offers a scalable framework for mastering complex software without task-specific fine-tuning.
+ oai:arXiv.org:2512.10501v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.14722/ndss.2026.241749
- Jonathan Evertz, Niklas Risse, Nicolai Neuer, Andreas M\"uller, Philipp Normann, Gaetano Sapia, Srishti Gupta, David Pape, Soumya Shaw, Devansh Srivastav, Christian Wressnegger, Erwin Quiring, Thorsten Eisenhofer, Daniel Arp, Lea Sch\"onherr
+ Lim Chien Her, Ming Yan, Yunshu Bai, Ruihao Li, Hao Zhang
- Systematic Framework of Application Methods for Large Language Models in Language Sciences
- https://arxiv.org/abs/2512.09552
- arXiv:2512.09552v1 Announce Type: new
-Abstract: Large Language Models (LLMs) are transforming language sciences. However, their widespread deployment currently suffers from methodological fragmentation and a lack of systematic soundness. This study proposes two comprehensive methodological frameworks designed to guide the strategic and responsible application of LLMs in language sciences. The first method-selection framework defines and systematizes three distinct, complementary approaches, each linked to a specific research goal: (1) prompt-based interaction with general-use models for exploratory analysis and hypothesis generation; (2) fine-tuning of open-source models for confirmatory, theory-driven investigation and high-quality data generation; and (3) extraction of contextualized embeddings for further quantitative analysis and probing of model internal mechanisms. We detail the technical implementation and inherent trade-offs of each method, supported by empirical case studies. Based on the method-selection framework, the second systematic framework proposed provides constructed configurations that guide the practical implementation of multi-stage research pipelines based on these approaches. We then conducted a series of empirical experiments to validate our proposed framework, employing retrospective analysis, prospective application, and an expert evaluation survey. By enforcing the strategic alignment of research questions with the appropriate LLM methodology, the frameworks enable a critical paradigm shift in language science research. We believe that this system is fundamental for ensuring reproducibility, facilitating the critical evaluation of LLM mechanisms, and providing the structure necessary to move traditional linguistics from ad-hoc utility to verifiable, robust science.
- oai:arXiv.org:2512.09552v1
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Adaptive Replay Buffer for Offline-to-Online Reinforcement Learning
+ https://arxiv.org/abs/2512.10510
+ arXiv:2512.10510v1 Announce Type: new
+Abstract: Offline-to-Online Reinforcement Learning (O2O RL) faces a critical dilemma in balancing the use of a fixed offline dataset with newly collected online experiences. Standard methods, often relying on a fixed data-mixing ratio, struggle to manage the trade-off between early learning stability and asymptotic performance. To overcome this, we introduce the Adaptive Replay Buffer (ARB), a novel approach that dynamically prioritizes data sampling based on a lightweight metric we call 'on-policyness'. Unlike prior methods that rely on complex learning procedures or fixed ratios, ARB is designed to be learning-free and simple to implement, seamlessly integrating into existing O2O RL algorithms. It assesses how closely collected trajectories align with the current policy's behavior and assigns a proportional sampling weight to each transition within that trajectory. This strategy effectively leverages offline data for initial stability while progressively focusing learning on the most relevant, high-rewarding online experiences. Our extensive experiments on D4RL benchmarks demonstrate that ARB consistently mitigates early performance degradation and significantly improves the final performance of various O2O RL algorithms, highlighting the importance of an adaptive, behavior-aware replay buffer design.
+ oai:arXiv.org:2512.10510v1
+ cs.LG
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Kun Sun, Rong Wang
+ http://creativecommons.org/licenses/by/4.0/
+ Chihyeon Song, Jaewoo Lee, Jinkyoo Park
- Building Reasonable Inference for Vision-Language Models in Blind Image Quality Assessment
- https://arxiv.org/abs/2512.09555
- arXiv:2512.09555v1 Announce Type: new
-Abstract: Recent progress in BIQA has been driven by VLMs, whose semantic reasoning abilities suggest that they might extract visual features, generate descriptive text, and infer quality in a human-like manner. However, these models often produce textual descriptions that contradict their final quality predictions, and the predicted scores can change unstably during inference - behaviors not aligned with human reasoning. To understand these issues, we analyze the factors that cause contradictory assessments and instability. We first estimate the relationship between the final quality predictions and the generated visual features, finding that the predictions are not fully grounded in the features and that the logical connection between them is weak. Moreover, decoding intermediate VLM layers shows that the model frequently relies on a limited set of candidate tokens, which contributes to prediction instability. To encourage more human-like reasoning, we introduce a two-stage tuning method that explicitly separates visual perception from quality inference. In the first stage, the model learns visual features; in the second, it infers quality solely from these features. Experiments on SPAQ and KONIQ demonstrate that our approach reduces prediction instability from 22.00% to 12.39% and achieves average gains of 0.3124/0.3507 in SRCC/PLCC across LIVE, CSIQ, SPAQ, and KONIQ compared to the baseline. Further analyses show that our method improves both stability and the reliability of the inference process.
- oai:arXiv.org:2512.09555v1
+ 3D Blood Pulsation Maps
+ https://arxiv.org/abs/2512.10517
+ arXiv:2512.10517v1 Announce Type: new
+Abstract: We present Pulse3DFace, the first dataset of its kind for estimating 3D blood pulsation maps. These maps can be used to develop models of dynamic facial blood pulsation, enabling the creation of synthetic video data to improve and validate remote pulse estimation methods via photoplethysmography imaging. Additionally, the dataset facilitates research into novel multi-view-based approaches for mitigating illumination effects in blood pulsation analysis. Pulse3DFace consists of raw videos from 15 subjects recorded at 30 Hz with an RGB camera from 23 viewpoints, blood pulse reference measurements, and facial 3D scans generated using monocular structure-from-motion techniques. It also includes processed 3D pulsation maps compatible with the texture space of the 3D head model FLAME. These maps provide signal-to-noise ratio, local pulse amplitude, phase information, and supplementary data. We offer a comprehensive evaluation of the dataset's illumination conditions, map consistency, and its ability to capture physiologically meaningful features in the facial and neck skin regions.
+ oai:arXiv.org:2512.10517v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- 10.1007/978-981-95-4378-6_20
- Building Reasonable Inference for Vision-Language Models in Blind Image Quality Assessment. In: Taniguchi, T., et al. Neural Information Processing. ICONIP 2025. Lecture Notes in Computer Science, vol 16310. Springer, Singapore
- Yuan Li, Zitang Sun, Yen-ju Chen, Shin'ya Nishida
+ Maurice Rohr, Tobias Reinhardt, Tizian Dege, Justus Thies, Christoph Hoog Antink
- Explainable Verification of Hierarchical Workflows Mined from Event Logs with Shapley Values
- https://arxiv.org/abs/2512.09562
- arXiv:2512.09562v1 Announce Type: new
-Abstract: Workflow mining discovers hierarchical process trees from event logs, but it remains unclear why such models satisfy or violate logical properties, or how individual elements contribute to overall behavior. We propose to translate mined workflows into logical specifications and analyze properties such as satisfiability, liveness, and safety with automated theorem provers. On this basis, we adapt Shapley values from cooperative game theory to attribute outcomes to workflow elements and quantify their contributions. Experiments on benchmark datasets show that this combination identifies critical nodes, reveals redundancies, and exposes harmful structures. This outlines a novel direction for explainable workflow analysis with direct relevance to software engineering practice, supporting compliance checks, process optimization, redundancy reduction, and the design of next-generation process mining tools.
- oai:arXiv.org:2512.09562v1
- cs.SE
- cs.IT
- math.IT
- Thu, 11 Dec 2025 00:00:00 -0500
+ Take a Peek: Efficient Encoder Adaptation for Few-Shot Semantic Segmentation via LoRA
+ https://arxiv.org/abs/2512.10521
+ arXiv:2512.10521v1 Announce Type: new
+Abstract: Few-shot semantic segmentation (FSS) aims to segment novel classes in query images using only a small annotated support set. While prior research has mainly focused on improving decoders, the encoder's limited ability to extract meaningful features for unseen classes remains a key bottleneck. In this work, we introduce \textit{Take a Peek} (TaP), a simple yet effective method that enhances encoder adaptability for both FSS and cross-domain FSS (CD-FSS). TaP leverages Low-Rank Adaptation (LoRA) to fine-tune the encoder on the support set with minimal computational overhead, enabling fast adaptation to novel classes while mitigating catastrophic forgetting. Our method is model-agnostic and can be seamlessly integrated into existing FSS pipelines. Extensive experiments across multiple benchmarks--including COCO $20^i$, Pascal $5^i$, and cross-domain datasets such as DeepGlobe, ISIC, and Chest X-ray--demonstrate that TaP consistently improves segmentation performance across diverse models and shot settings. Notably, TaP delivers significant gains in complex multi-class scenarios, highlighting its practical effectiveness in realistic settings. A rank sensitivity analysis also shows that strong performance can be achieved even with low-rank adaptations, ensuring computational efficiency. By addressing a critical limitation in FSS--the encoder's generalization to novel classes--TaP paves the way toward more robust, efficient, and generalizable segmentation systems. The code is available at https://github.com/pasqualedem/TakeAPeek.
+ oai:arXiv.org:2512.10521v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Radoslaw Klimek, Jakub Blazowski
+ Pasquale De Marinis, Gennaro Vessio, Giovanna Castellano
- System Report for CCL25-Eval Task 10: Prompt-Driven Large Language Model Merge for Fine-Grained Chinese Hate Speech Detection
- https://arxiv.org/abs/2512.09563
- arXiv:2512.09563v1 Announce Type: new
-Abstract: The proliferation of hate speech on Chinese social media poses urgent societal risks, yet traditional systems struggle to decode context-dependent rhetorical strategies and evolving slang. To bridge this gap, we propose a novel three-stage LLM-based framework: Prompt Engineering, Supervised Fine-tuning, and LLM Merging. First, context-aware prompts are designed to guide LLMs in extracting implicit hate patterns. Next, task-specific features are integrated during supervised fine-tuning to enhance domain adaptation. Finally, merging fine-tuned LLMs improves robustness against out-of-distribution cases. Evaluations on the STATE-ToxiCN benchmark validate the framework's effectiveness, demonstrating superior performance over baseline methods in detecting fine-grained hate speech.
- oai:arXiv.org:2512.09563v1
- cs.CL
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Disentangled and Distilled Encoder for Out-of-Distribution Reasoning with Rademacher Guarantees
+ https://arxiv.org/abs/2512.10522
+ arXiv:2512.10522v1 Announce Type: new
+Abstract: Recently, the disentangled latent space of a variational autoencoder (VAE) has been used to reason about multi-label out-of-distribution (OOD) test samples that are derived from different distributions than training samples. Disentangled latent space means having one-to-many maps between latent dimensions and generative factors or important characteristics of an image. This paper proposes a disentangled distilled encoder (DDE) framework to decrease the OOD reasoner size for deployment on resource-constrained devices while preserving disentanglement. DDE formalizes student-teacher distillation for model compression as a constrained optimization problem while preserving disentanglement with disentanglement constraints. Theoretical guarantees for disentanglement during distillation based on Rademacher complexity are established. The approach is evaluated empirically by deploying the compressed model on an NVIDIA
+ oai:arXiv.org:2512.10522v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Binglin Wu, Jiaxiu Zou, Xianneng Li
+ http://creativecommons.org/licenses/by/4.0/
+ Zahra Rahiminasab, Michael Yuhas, Arvind Easwaran
- From Graphs to Gates: DNS-HyXNet, A Lightweight and Deployable Sequential Model for Real-Time DNS Tunnel Detection
- https://arxiv.org/abs/2512.09565
- arXiv:2512.09565v1 Announce Type: new
-Abstract: Domain Name System (DNS) tunneling remains a covert channel for data exfiltration and command-and-control communication. Although graph-based methods such as GraphTunnel achieve strong accuracy, they introduce significant latency and computational overhead due to recursive parsing and graph construction, limiting their suitability for real-time deployment. This work presents DNS-HyXNet, a lightweight extended Long Short-Term Memory (xLSTM) hybrid framework designed for efficient sequence-based DNS tunnel detection. DNS-HyXNet integrates tokenized domain embeddings with normalized numerical DNS features and processes them through a two-layer xLSTM network that directly learns temporal dependencies from packet sequences, eliminating the need for graph reconstruction and enabling single-stage multi-class classification. The model was trained and evaluated on two public benchmark datasets with carefully tuned hyperparameters to ensure low memory consumption and fast inference. Across all experimental splits of the DNS-Tunnel-Datasets, DNS-HyXNet achieved up to 99.99% accuracy, with macro-averaged precision, recall, and F1-scores exceeding 99.96%, and demonstrated a per-sample detection latency of just 0.041 ms, confirming its scalability and real-time readiness. These results show that sequential modeling with xLSTM can effectively replace computationally expensive recursive graph generation, offering a deployable and energy-efficient alternative for real-time DNS tunnel detection on commodity hardware.
- oai:arXiv.org:2512.09565v1
+ Mode-Seeking for Inverse Problems with Diffusion Models
+ https://arxiv.org/abs/2512.10524
+ arXiv:2512.10524v1 Announce Type: new
+Abstract: A pre-trained unconditional diffusion model, combined with posterior sampling or maximum a posteriori (MAP) estimation techniques, can solve arbitrary inverse problems without task-specific training or fine-tuning. However, existing posterior sampling and MAP estimation methods often rely on modeling approximations and can be computationally demanding. In this work, we propose the variational mode-seeking loss (VML), which, when minimized during each reverse diffusion step, guides the generated sample towards the MAP estimate. VML arises from a novel perspective of minimizing the Kullback-Leibler (KL) divergence between the diffusion posterior $p(\mathbf{x}_0|\mathbf{x}_t)$ and the measurement posterior $p(\mathbf{x}_0|\mathbf{y})$, where $\mathbf{y}$ denotes the measurement. Importantly, for linear inverse problems, VML can be analytically derived and need not be approximated. Based on further theoretical insights, we propose VML-MAP, an empirically effective algorithm for solving inverse problems, and validate its efficacy over existing methods in both performance and computational time, through extensive experiments on diverse image-restoration tasks across multiple datasets.
+ oai:arXiv.org:2512.10524v1
+ cs.LGcs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Faraz Ali, Muhammad Afaq, Mahmood Niazi, Muzammil Behzad
+ Sai Bharath Chandra Gutha, Ricardo Vinuesa, Hossein Azizpour
- Toward Closed-loop Molecular Discovery via Language Model, Property Alignment and Strategic Search
- https://arxiv.org/abs/2512.09566
- arXiv:2512.09566v1 Announce Type: new
-Abstract: Drug discovery is a time-consuming and expensive process, with traditional high-throughput and docking-based virtual screening hampered by low success rates and limited scalability. Recent advances in generative modelling, including autoregressive, diffusion, and flow-based approaches, have enabled de novo ligand design beyond the limits of enumerative screening. Yet these models often suffer from inadequate generalization, limited interpretability, and an overemphasis on binding affinity at the expense of key pharmacological properties, thereby restricting their translational utility. Here we present Trio, a molecular generation framework integrating fragment-based molecular language modeling, reinforcement learning, and Monte Carlo tree search, for effective and interpretable closed-loop targeted molecular design. Through the three key components, Trio enables context-aware fragment assembly, enforces physicochemical and synthetic feasibility, and guides a balanced search between the exploration of novel chemotypes and the exploitation of promising intermediates within protein binding pockets. Experimental results show that Trio reliably achieves chemically valid and pharmacologically enhanced ligands, outperforming state-of-the-art approaches with improved binding affinity (+7.85%), drug-likeness (+11.10%) and synthetic accessibility (+12.05%), while expanding molecular diversity more than fourfold.
- oai:arXiv.org:2512.09566v1
- cs.AI
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Neural Ranging Inertial Odometry
+ https://arxiv.org/abs/2512.10531
+ arXiv:2512.10531v1 Announce Type: new
+Abstract: Ultra-wideband (UWB) has shown promising potential in GPS-denied localization thanks to its lightweight and drift-free characteristics, while the accuracy is limited in real scenarios due to its sensitivity to sensor arrangement and non-Gaussian pattern induced by multi-path or multi-signal interference, which commonly occurs in many typical applications like long tunnels. We introduce a novel neural fusion framework for ranging inertial odometry which involves a graph attention UWB network and a recurrent neural inertial network. Our graph net learns scene-relevant ranging patterns and adapts to any number of anchors or tags, realizing accurate positioning without calibration. Additionally, the integration of least squares and the incorporation of nominal frame enhance overall performance and scalability. The effectiveness and robustness of our methods are validated through extensive experiments on both public and self-collected datasets, spanning indoor, outdoor, and tunnel environments. The results demonstrate the superiority of our proposed IR-ULSG in handling challenging conditions, including scenarios outside the convex envelope and cases where only a single anchor is available.
+ oai:arXiv.org:2512.10531v1
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Junkai Ji, Zhangfan Yang, Dong Xu, Ruibin Bai, Jianqiang Li, Tingjun Hou, Zexuan Zhu
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/ICRA55743.2025.11128550
+ Si Wang, Bingqi Shen, Fei Wang, Yanjun Cao, Rong Xiong, Yue Wang
- PHWSOA: A Pareto-based Hybrid Whale-Seagull Scheduling for Multi-Objective Tasks in Cloud Computing
- https://arxiv.org/abs/2512.09568
- arXiv:2512.09568v1 Announce Type: new
-Abstract: Task scheduling is a critical research challenge in cloud computing, a transformative technology widely adopted across industries. Although numerous scheduling solutions exist, they predominantly optimize singular or limited metrics such as execution time or resource utilization often neglecting the need for comprehensive multi-objective optimization. To bridge this gap, this paper proposes the Pareto-based Hybrid Whale-Seagull Optimization Algorithm (PHWSOA). This algorithm synergistically combines the strengths of the Whale Optimization Algorithm (WOA) and the Seagull Optimization Algorithm (SOA), specifically mitigating WOA's limitations in local exploitation and SOA's constraints in global exploration. Leveraging Pareto dominance principles, PHWSOA simultaneously optimizes three key objectives: makespan, virtual machine (VM) load balancing, and economic cost. Key enhancements include: Halton sequence initialization for superior population diversity, a Pareto-guided mutation mechanism to avert premature convergence, and parallel processing for accelerated convergence. Furthermore, a dynamic VM load redistribution mechanism is integrated to improve load balancing during task execution. Extensive experiments conducted on the CloudSim simulator, utilizing real-world workload traces from NASA-iPSC and HPC2N, demonstrate that PHWSOA delivers substantial performance gains. Specifically, it achieves up to a 72.1% reduction in makespan, a 36.8% improvement in VM load balancing, and 23.5% cost savings. These results substantially outperform baseline methods including WOA, GA, PEWOA, and GCWOA underscoring PHWSOA's strong potential for enabling efficient resource management in practical cloud environments.
- oai:arXiv.org:2512.09568v1
- cs.DC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Semi-Robust Communication Complexity of Maximum Matching
+ https://arxiv.org/abs/2512.10532
+ arXiv:2512.10532v1 Announce Type: new
+Abstract: We study the one-way two-party communication complexity of Maximum Matching in the semi-robust setting where the edges of a maximum matching are randomly partitioned between Alice and Bob, but all remaining edges of the input graph are adversarially partitioned between the two parties.
+ We show that the simple protocol where Alice solely communicates a lexicographically-first maximum matching of their edges to Bob is surprisingly powerful: We prove that it yields a $3/4$-approximation in expectation and that our analysis is tight.
+ The semi-robust setting is at least as hard as the fully robust setting. In this setting, all edges of the input graph are randomly partitioned between Alice and Bob, and the state-of-the-art result is a fairly involved $5/6$-approximation protocol that is based on the computation of edge-degree constrained subgraphs [Azarmehr, Behnezhad, ICALP'23]. Our protocol also immediately yields a $3/4$-approximation in the fully robust setting. One may wonder whether an improved analysis of our protocol in the fully robust setting is possible: While we cannot rule this out, we give an instance where our protocol only achieves a $0.832 < 5/6 = 0.83$-approximation. Hence, while our simple protocol performs surprisingly well, it cannot be used to improve over the state-of-the-art in the fully robust setting.
+ oai:arXiv.org:2512.10532v1
+ cs.DS
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Zhi Zhao, Hang Xiao, Wei Rang
+ Gabriel Cipriani Huete, Adithya Diddapur, Pavel Dvo\v{r}\'ak, Christian Konrad
- The Gender Code: Gendering the Global Governance of Artificial Intelligence
- https://arxiv.org/abs/2512.09570
- arXiv:2512.09570v1 Announce Type: new
-Abstract: This paper examines how international AI governance frameworks address gender issues and gender-based harms. The analysis covers binding regulations, such as the EU AI Act; soft law instruments, like the UNESCO Recommendations on AI Ethics; and global initiatives, such as the Global Partnership on AI (GPAI). These instruments reveal emerging trends, including the integration of gender concerns into broader human rights frameworks, a shift toward explicit gender-related provisions, and a growing emphasis on inclusivity and diversity. Yet, some critical gaps persist, including inconsistent treatment of gender across governance documents, limited engagement with intersectionality, and a lack of robust enforcement mechanisms. However, this paper argues that effective AI governance must be intersectional, enforceable, and inclusive. This is key to moving beyond tokenism toward meaningful equity and preventing reinforcement of existing inequalities. The study contributes to ethical AI debates by highlighting the importance of gender-sensitive governance in building a just technological future.
- oai:arXiv.org:2512.09570v1
- cs.CY
+ Achieving Olympia-Level Geometry Large Language Model Agent via Complexity Boosting Reinforcement Learning
+ https://arxiv.org/abs/2512.10534
+ arXiv:2512.10534v1 Announce Type: new
+Abstract: Large language model (LLM) agents exhibit strong mathematical problem-solving abilities and can even solve International Mathematical Olympiad (IMO) level problems with the assistance of formal proof systems. However, due to weak heuristics for auxiliary constructions, AI for geometry problem solving remains dominated by expert models such as AlphaGeometry 2, which rely heavily on large-scale data synthesis and search for both training and evaluation. In this work, we make the first attempt to build a medalist-level LLM agent for geometry and present InternGeometry. InternGeometry overcomes the heuristic limitations in geometry by iteratively proposing propositions and auxiliary constructions, verifying them with a symbolic engine, and reflecting on the engine's feedback to guide subsequent proposals. A dynamic memory mechanism enables InternGeometry to conduct more than two hundred interactions with the symbolic engine per problem. To further accelerate learning, we introduce Complexity-Boosting Reinforcement Learning (CBRL), which gradually increases the complexity of synthesized problems across training stages. Built on InternThinker-32B, InternGeometry solves 44 of 50 IMO geometry problems (2000-2024), exceeding the average gold medalist score (40.9), using only 13K training examples, just 0.004% of the data used by AlphaGeometry 2, demonstrating the potential of LLM agents on expert-level geometry tasks. InternGeometry can also propose novel auxiliary constructions for IMO problems that do not appear in human solutions. We will release the model, data, and symbolic engine to support future research.
+ oai:arXiv.org:2512.10534v1cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jelena Cupac
+ Haiteng Zhao, Junhao Shen, Yiming Zhang, Songyang Gao, Kuikun Liu, Tianyou Ma, Fan Zheng, Dahua Lin, Wenwei Zhang, Kai Chen
- Mastering Diverse, Unknown, and Cluttered Tracks for Robust Vision-Based Drone Racing
- https://arxiv.org/abs/2512.09571
- arXiv:2512.09571v1 Announce Type: new
-Abstract: Most reinforcement learning(RL)-based methods for drone racing target fixed, obstacle-free tracks, leaving the generalization to unknown, cluttered environments largely unaddressed. This challenge stems from the need to balance racing speed and collision avoidance, limited feasible space causing policy exploration trapped in local optima during training, and perceptual ambiguity between gates and obstacles in depth maps-especially when gate positions are only coarsely specified. To overcome these issues, we propose a two-phase learning framework: an initial soft-collision training phase that preserves policy exploration for high-speed flight, followed by a hard-collision refinement phase that enforces robust obstacle avoidance. An adaptive, noise-augmented curriculum with an asymmetric actor-critic architecture gradually shifts the policy's reliance from privileged gate-state information to depth-based visual input. We further impose Lipschitz constraints and integrate a track-primitive generator to enhance motion stability and cross-environment generalization. We evaluate our framework through extensive simulation and ablation studies, and validate it in real-world experiments on a computationally constrained quadrotor. The system achieves agile flight while remaining robust to gate-position errors, developing a generalizable drone racing framework with the capability to operate in diverse, partially unknown and cluttered environments. https://yufengsjtu.github.io/MasterRacing.github.io/
- oai:arXiv.org:2512.09571v1
+ Mr. Virgil: Learning Multi-robot Visual-range Relative Localization
+ https://arxiv.org/abs/2512.10540
+ arXiv:2512.10540v1 Announce Type: new
+Abstract: Ultra-wideband (UWB)-vision fusion localization has achieved extensive applications in the domain of multi-agent relative localization. The challenging matching problem between robots and visual detection renders existing methods highly dependent on identity-encoded hardware or delicate tuning algorithms. Overconfident yet erroneous matches may bring about irreversible damage to the localization system. To address this issue, we introduce Mr. Virgil, an end-to-end learning multi-robot visual-range relative localization framework, consisting of a graph neural network for data association between UWB rangings and visual detections, and a differentiable pose graph optimization (PGO) back-end. The graph-based front-end supplies robust matching results, accurate initial position predictions, and credible uncertainty estimates, which are subsequently integrated into the PGO back-end to elevate the accuracy of the final pose estimation. Additionally, a decentralized system is implemented for real-world applications. Experiments spanning varying robot numbers, simulation and real-world, occlusion and non-occlusion conditions showcase the stability and exactitude under various scenes compared to conventional methods. Our code is available at: https://github.com/HiOnes/Mr-Virgil.
+ oai:arXiv.org:2512.10540v1cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Feng Yu, Yu Hu, Yang Su, Yang Deng, Linzuo Zhang, Danping Zou
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Si Wang, Zhehan Li, Jiadong Lu, Rong Xiong, Yanjun Cao, Yue Wang
- Investigate the Low-level Visual Perception in Vision-Language based Image Quality Assessment
- https://arxiv.org/abs/2512.09573
- arXiv:2512.09573v1 Announce Type: new
-Abstract: Recent advances in Image Quality Assessment (IQA) have leveraged Multi-modal Large Language Models (MLLMs) to generate descriptive explanations. However, despite their strong visual perception modules, these models often fail to reliably detect basic low-level distortions such as blur, noise, and compression, and may produce inconsistent evaluations across repeated inferences. This raises an essential question: do MLLM-based IQA systems truly perceive the visual features that matter? To examine this issue, we introduce a low-level distortion perception task that requires models to classify specific distortion types. Our component-wise analysis shows that although MLLMs are structurally capable of representing such distortions, they tend to overfit training templates, leading to biases in quality scoring. As a result, critical low-level features are weakened or lost during the vision-language alignment transfer stage. Furthermore, by computing the semantic distance between visual features and corresponding semantic tokens before and after component-wise fine-tuning, we show that improving the alignment of the vision encoder dramatically enhances distortion recognition accuracy, increasing it from 14.92% to 84.43%. Overall, these findings indicate that incorporating dedicated constraints on the vision encoder can strengthen text-explainable visual representations and enable MLLM-based pipelines to produce more coherent and interpretable reasoning in vision-centric tasks.
- oai:arXiv.org:2512.09573v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ XDoGE: Multilingual Data Reweighting to Enhance Language Inclusivity in LLMs
+ https://arxiv.org/abs/2512.10545
+ arXiv:2512.10545v1 Announce Type: new
+Abstract: Current large language models (LLMs) are trained on massive amounts of text data, primarily from a few dominant languages. Studies suggest that this over-reliance on high-resource languages, such as English, hampers LLM performance in mid- and low-resource languages. To mitigate this problem, we propose to (i) optimize the language distribution by training a small proxy model within a domain-reweighing DoGE algorithm that we extend to XDoGE for a multilingual setup, and (ii) rescale the data and train a full-size model with the established language weights either from scratch or within a continual pre-training phase (CPT). We target six languages possessing a variety of geographic and intra- and inter-language-family relations, namely, English and Spanish (high-resource), Portuguese and Catalan (mid-resource), Galician and Basque (low-resource). We experiment with Salamandra-2b, which is a promising model for these languages. We investigate the effects of substantial data repetition on minor languages and under-sampling on dominant languages using the IberoBench framework for quantitative evaluation. Finally, we release a new promising IberianLLM-7B-Instruct model centering on Iberian languages and English that we pretrained from scratch and further improved using CPT with the XDoGE weights.
+ oai:arXiv.org:2512.10545v1
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Yuan Li, Zitang Sun, Yen-Ju Chen, Shin'ya Nishida
+ I\~naki Lacunza, Jos\'e Javier Saiz, Alexander Shvets, Aitor Gonzalez-Agirre, Marta Villegas
- Instantaneous Complex Phase and Frequency: Conceptual Clarification and Equivalence between Formulations
- https://arxiv.org/abs/2512.09574
- arXiv:2512.09574v1 Announce Type: new
-Abstract: This letter seeks to clarify the different existing definitions of both instantaneous complex phase and frequency as well as their equivalence when specific hypotheses hold. To achieve this, the two fundamental definitions, i.e., those based on either the use of (i) analytic signals or (ii) space vectors, together with the premises used for their formulation, are presented and their relationship shown. Lastly, an unified notation and terminology to avoid confusion is proposed.
- oai:arXiv.org:2512.09574v1
- eess.SY
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Unlocking the Address Book: Dissecting the Sparse Semantic Structure of LLM Key-Value Caches via Sparse Autoencoders
+ https://arxiv.org/abs/2512.10547
+ arXiv:2512.10547v1 Announce Type: new
+Abstract: The Key-Value (KV) cache is the primary memory bottleneck in long-context Large Language Models, yet it is typically treated as an opaque numerical tensor. In this work, we propose \textbf{STA-Attention}, a framework that utilizes Top-K Sparse Autoencoders (SAEs) to decompose the KV cache into interpretable ``semantic atoms.'' Unlike standard $L_1$-regularized SAEs, our Top-K approach eliminates shrinkage bias, preserving the precise dot-product geometry required for attention. Our analysis uncovers a fundamental \textbf{Key-Value Asymmetry}: while Key vectors serve as highly sparse routers dominated by a ``Semantic Elbow,'' deep Value vectors carry dense content payloads requiring a larger budget. Based on this structure, we introduce a Dual-Budget Strategy that selectively preserves the most informative semantic components while filtering representational noise. Experiments on Yi-6B, Mistral-7B, Qwen2.5-32B, and others show that our semantic reconstructions maintain perplexity and zero-shot performance comparable to the original models, effectively bridging the gap between mechanistic interpretability and faithful attention modeling.
+ oai:arXiv.org:2512.10547v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- C\'esar Garc\'ia-Veloso, Mario Paolone, Federico Milano
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Qingsen Ma, Dianyun Wang, Jiaming Lyu, Yaoye Wang, Lechen Ning, Sujie Zhu, Zhenbo Xu, Liuyu Xiang, Huining Li, Huijia Wu, Zhaofeng He
- Seeing Soil from Space: Towards Robust and Scalable Remote Soil Nutrient Analysis
- https://arxiv.org/abs/2512.09576
- arXiv:2512.09576v1 Announce Type: new
-Abstract: Environmental variables are increasingly affecting agricultural decision-making, yet accessible and scalable tools for soil assessment remain limited. This study presents a robust and scalable modeling system for estimating soil properties in croplands, including soil organic carbon (SOC), total nitrogen (N), available phosphorus (P), exchangeable potassium (K), and pH, using remote sensing data and environmental covariates. The system employs a hybrid modeling approach, combining the indirect methods of modeling soil through proxies and drivers with direct spectral modeling. We extend current approaches by using interpretable physics-informed covariates derived from radiative transfer models (RTMs) and complex, nonlinear embeddings from a foundation model. We validate the system on a harmonized dataset that covers Europes cropland soils across diverse pedoclimatic zones. Evaluation is conducted under a robust validation framework that enforces strict spatial blocking, stratified splits, and statistically distinct train-test sets, which deliberately make the evaluation harder and produce more realistic error estimates for unseen regions. The models achieved their highest accuracy for SOC and N. This performance held across unseen locations, under both spatial cross-validation and an independent test set. SOC obtained a MAE of 5.12 g/kg and a CCC of 0.77, and N obtained a MAE of 0.44 g/kg and a CCC of 0.77. We also assess uncertainty through conformal calibration, achieving 90 percent coverage at the target confidence level. This study contributes to the digital advancement of agriculture through the application of scalable, data-driven soil analysis frameworks that can be extended to related domains requiring quantitative soil evaluation, such as carbon markets.
- oai:arXiv.org:2512.09576v1
+ Blink: Dynamic Visual Token Resolution for Enhanced Multimodal Understanding
+ https://arxiv.org/abs/2512.10548
+ arXiv:2512.10548v1 Announce Type: new
+Abstract: Multimodal large language models (MLLMs) have achieved remarkable progress on various vision-language tasks, yet their visual perception remains limited. Humans, in comparison, perceive complex scenes efficiently by dynamically scanning and focusing on salient regions in a sequential "blink-like" process. Motivated by this strategy, we first investigate whether MLLMs exhibit similar behavior. Our pilot analysis reveals that MLLMs naturally attend to different visual regions across layers and that selectively allocating more computation to salient tokens can enhance visual perception. Building on this insight, we propose Blink, a dynamic visual token resolution framework that emulates the human-inspired process within a single forward pass. Specifically, Blink includes two modules: saliency-guided scanning and dynamic token resolution. It first estimates the saliency of visual tokens in each layer based on the attention map, and extends important tokens through a plug-and-play token super-resolution (TokenSR) module. In the next layer, it drops the extended tokens when they lose focus. This dynamic mechanism balances broad exploration and fine-grained focus, thereby enhancing visual perception adaptively and efficiently. Extensive experiments validate Blink, demonstrating its effectiveness in enhancing visual perception and multimodal understanding.
+ oai:arXiv.org:2512.10548v1cs.CV
- physics.geo-ph
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- David Seu (CO2 Angels, Cluj-Napoca, Romania), Nicolas Longepe (European Space Agency Phi-Lab, Frascati, Italy), Gabriel Cioltea (CO2 Angels, Cluj-Napoca, Romania), Erik Maidik (CO2 Angels, Cluj-Napoca, Romania), Calin Andrei (CO2 Angels, Cluj-Napoca, Romania)
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuchen Feng, Zhenyu Zhang, Naibin Gu, Yilong Chen, Peng Fu, Zheng Lin, Shuohuan Wang, Yu Sun, Hua Wu, Weiping Wang, Haifeng Wang
- Auto-BenchmarkCard: Automated Synthesis of Benchmark Documentation
- https://arxiv.org/abs/2512.09577
- arXiv:2512.09577v1 Announce Type: new
-Abstract: We present Auto-BenchmarkCard, a workflow for generating validated descriptions of AI benchmarks. Benchmark documentation is often incomplete or inconsistent, making it difficult to interpret and compare benchmarks across tasks or domains. Auto-BenchmarkCard addresses this gap by combining multi-agent data extraction from heterogeneous sources (e.g., Hugging Face, Unitxt, academic papers) with LLM-driven synthesis. A validation phase evaluates factual accuracy through atomic entailment scoring using the FactReasoner tool. This workflow has the potential to promote transparency, comparability, and reusability in AI benchmark reporting, enabling researchers and practitioners to better navigate and evaluate benchmark choices.
- oai:arXiv.org:2512.09577v1
- cs.HC
+ LLM-Auction: Generative Auction towards LLM-Native Advertising
+ https://arxiv.org/abs/2512.10551
+ arXiv:2512.10551v1 Announce Type: new
+Abstract: The rapid advancement of large language models (LLMs) necessitates novel monetization strategies, among which LLM-native advertising has emerged as a promising paradigm by naturally integrating advertisement within LLM-generated responses. However, this paradigm fundamentally shifts the auction object from discrete ad slots to the distribution over LLM outputs, posing new challenges for designing auction mechanisms. Existing mechanisms for LLM-native advertising adopt frameworks that decouple auction and generation, which either ignore externalities or require multiple LLM inferences for ad allocation, rendering them impractical for industrial scenarios. To address these challenges, we propose LLM-Auction, which to the best of our knowledge is the first learning-based generative auction mechanism that integrates auction and LLM generation for LLM-native advertising. By formulating the allocation optimization as a preference alignment problem between LLM outputs and the mechanism's objective which reflects both advertisers' expected value and user experience, we introduce Iterative Reward-Preference Optimization (IRPO) algorithm that alternately optimizes the reward model and the LLM. This approach enables the LLM to inherently model allocation externalities without any extra inference cost. We further identify the allocation monotonicity and continuity of LLM-Auction, which allows us to prove that a simple first-price payment rule exhibits favorable incentive properties. Additionally, we design an LLM-as-a-judge simulation environment to facilitate large-scale data construction and enable comprehensive quantitative evaluation of the mechanism's performance. Extensive quantitative and qualitative experiments demonstrate that LLM-Auction significantly outperforms existing baselines in allocation efficiency, while achieving the desired mechanism properties.
+ oai:arXiv.org:2512.10551v1
+ cs.GTcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Aris Hofmann, Inge Vejsbjerg, Dhaval Salwala, Elizabeth M. Daly
+ Chujie Zhao, Qun Hu, Shiping Song, Dagui Chen, Han Zhu, Jian Xu, Bo Zheng
- Hands-on Evaluation of Visual Transformers for Object Recognition and Detection
- https://arxiv.org/abs/2512.09579
- arXiv:2512.09579v1 Announce Type: new
-Abstract: Convolutional Neural Networks (CNNs) for computer vision sometimes struggle with understanding images in a global context, as they mainly focus on local patterns. On the other hand, Vision Transformers (ViTs), inspired by models originally created for language processing, use self-attention mechanisms, which allow them to understand relationships across the entire image. In this paper, we compare different types of ViTs (pure, hierarchical, and hybrid) against traditional CNN models across various tasks, including object recognition, detection, and medical image classification. We conduct thorough tests on standard datasets like ImageNet for image classification and COCO for object detection. Additionally, we apply these models to medical imaging using the ChestX-ray14 dataset. We find that hybrid and hierarchical transformers, especially Swin and CvT, offer a strong balance between accuracy and computational resources. Furthermore, by experimenting with data augmentation techniques on medical images, we discover significant performance improvements, particularly with the Swin Transformer model. Overall, our results indicate that Vision Transformers are competitive and, in many cases, outperform traditional CNNs, especially in scenarios requiring the understanding of global visual contexts like medical imaging.
- oai:arXiv.org:2512.09579v1
+ Grounding Everything in Tokens for Multimodal Large Language Models
+ https://arxiv.org/abs/2512.10554
+ arXiv:2512.10554v1 Announce Type: new
+Abstract: Multimodal large language models (MLLMs) have made significant advancements in vision understanding and reasoning. However, the autoregressive Transformer architecture used by MLLMs requries tokenization on input images, which limits their ability to accurately ground objects within the 2D image space. This raises an important question: how can sequential language tokens be improved to better ground objects in 2D spatial space for MLLMs? To address this, we present a spatial representation method for grounding objects, namely GETok, that integrates a specialized vocabulary of learnable tokens into MLLMs. GETok first uses grid tokens to partition the image plane into structured spatial anchors, and then exploits offset tokens to enable precise and iterative refinement of localization predictions. By embedding spatial relationships directly into tokens, GETok significantly advances MLLMs in native 2D space reasoning without modifying the autoregressive architecture. Extensive experiments demonstrate that GETok achieves superior performance over the state-of-the-art methods across various referring tasks in both supervised fine-tuning and reinforcement learning settings.
+ oai:arXiv.org:2512.10554v1cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- 37th International Conference on Tools with Artificial Intelligence (ICTAI 2025)
- Dimitrios N. Vlachogiannis, Dimitrios A. Koutsomitropoulos
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiangxuan Ren, Zhongdao Wang, Liping Hou, Pin Tang, Guoqing Wang, Chao Ma
- Content-Adaptive Image Retouching Guided by Attribute-Based Text Representation
- https://arxiv.org/abs/2512.09580
- arXiv:2512.09580v1 Announce Type: new
-Abstract: Image retouching has received significant attention due to its ability to achieve high-quality visual content. Existing approaches mainly rely on uniform pixel-wise color mapping across entire images, neglecting the inherent color variations induced by image content. This limitation hinders existing approaches from achieving adaptive retouching that accommodates both diverse color distributions and user-defined style preferences. To address these challenges, we propose a novel Content-Adaptive image retouching method guided by Attribute-based Text Representation (CA-ATP). Specifically, we propose a content-adaptive curve mapping module, which leverages a series of basis curves to establish multiple color mapping relationships and learns the corresponding weight maps, enabling content-aware color adjustments. The proposed module can capture color diversity within the image content, allowing similar color values to receive distinct transformations based on their spatial context. In addition, we propose an attribute text prediction module that generates text representations from multiple image attributes, which explicitly represent user-defined style preferences. These attribute-based text representations are subsequently integrated with visual features via a multimodal model, providing user-friendly guidance for image retouching. Extensive experiments on several public datasets demonstrate that our method achieves state-of-the-art performance.
- oai:arXiv.org:2512.09580v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Analysis of discrete energy-decay preserving schemes for Maxwell's equations in Cole-Cole dispersive medium
+ https://arxiv.org/abs/2512.10560
+ arXiv:2512.10560v1 Announce Type: new
+Abstract: This work investigates the design and analysis of energy-decay preserving numerical schemes for Maxwell's equations in a Cole-Cole (C-C) dispersive medium. A continuous energy-decay law is first established for the C-C model through a modified energy functional. Subsequently, a novel \(\theta\)-scheme is proposed for temporal discretization, which is rigorously proven to preserve a discrete energy dissipation property under the condition \(\theta \in [\frac{\alpha}{2}, \frac{1}{2}]\). The temporal convergence rate of the scheme is shown to be first-order for \(\theta \neq 0.5\) and second-order for \(\theta = 0.5\). Extensive numerical experiments validate the theoretical findings, including convergence tests and energy-decay comparisons. The proposed SFTR-\(\theta\) scheme demonstrates superior performance in maintaining monotonic energy decay compared to an alternative 2nd-order fractional backward difference formula, particularly in long-time simulations, highlighting its robustness and physical fidelity.
+ oai:arXiv.org:2512.10560v1
+ math.NA
+ cs.NA
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hancheng Zhu, Xinyu Liu, Rui Yao, Kunyang Sun, Leida Li, Abdulmotaleb El Saddik
+ Guoyu Zhang, Ziming Dong, Baoli Yin, Yang Liu, Hong Li
- UnReflectAnything: RGB-Only Highlight Removal by Rendering Synthetic Specular Supervision
- https://arxiv.org/abs/2512.09583
- arXiv:2512.09583v1 Announce Type: new
-Abstract: Specular highlights distort appearance, obscure texture, and hinder geometric reasoning in both natural and surgical imagery. We present UnReflectAnything, an RGB-only framework that removes highlights from a single image by predicting a highlight map together with a reflection-free diffuse reconstruction. The model uses a frozen vision transformer encoder to extract multi-scale features, a lightweight head to localize specular regions, and a token-level inpainting module that restores corrupted feature patches before producing the final diffuse image. To overcome the lack of paired supervision, we introduce a Virtual Highlight Synthesis pipeline that renders physically plausible specularities using monocular geometry, Fresnel-aware shading, and randomized lighting which enables training on arbitrary RGB images with correct geometric structure. UnReflectAnything generalizes across natural and surgical domains where non-Lambertian surfaces and non-uniform lighting create severe highlights and it achieves competitive performance with state-of-the-art results on several benchmarks. Project Page: https://alberto-rota.github.io/UnReflectAnything/
- oai:arXiv.org:2512.09583v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Causal Reasoning Favors Encoders: On The Limits of Decoder-Only Models
+ https://arxiv.org/abs/2512.10561
+ arXiv:2512.10561v1 Announce Type: new
+Abstract: In context learning (ICL) underpins recent advances in large language models (LLMs), although its role and performance in causal reasoning remains unclear. Causal reasoning demands multihop composition and strict conjunctive control, and reliance on spurious lexical relations of the input could provide misleading results. We hypothesize that, due to their ability to project the input into a latent space, encoder and encoder decoder architectures are better suited for said multihop conjunctive reasoning versus decoder only models. To do this, we compare fine-tuned versions of all the aforementioned architectures with zero and few shot ICL in both natural language and non natural language scenarios. We find that ICL alone is insufficient for reliable causal reasoning, often overfocusing on irrelevant input features. In particular, decoder only models are noticeably brittle to distributional shifts, while finetuned encoder and encoder decoder models can generalize more robustly across our tests, including the non natural language split. Both architectures are only matched or surpassed by decoder only architectures at large scales. We conclude by noting that for cost effective, short horizon robust causal reasoning, encoder or encoder decoder architectures with targeted finetuning are preferable.
+ oai:arXiv.org:2512.10561v1
+ cs.CL
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Alberto Rota, Mert Kiray, Mert Asim Karaoglu, Patrick Ruhkamp, Elena De Momi, Nassir Navabm, Benjamin Busam
+ Amartya Roy, Elamparithy M, Kripabandhu Ghosh, Ponnurangam Kumaraguru, Adrian de Wynter
- Stanford Sleep Bench: Evaluating Polysomnography Pre-training Methods for Sleep Foundation Models
- https://arxiv.org/abs/2512.09591
- arXiv:2512.09591v1 Announce Type: new
-Abstract: Polysomnography (PSG), the gold standard test for sleep analysis, generates vast amounts of multimodal clinical data, presenting an opportunity to leverage self-supervised representation learning (SSRL) for pre-training foundation models to enhance sleep analysis. However, progress in sleep foundation models is hindered by two key limitations: (1) the lack of a shared dataset and benchmark with diverse tasks for training and evaluation, and (2) the absence of a systematic evaluation of SSRL approaches across sleep-related tasks. To address these gaps, we introduce Stanford Sleep Bench, a large-scale PSG dataset comprising 17,467 recordings totaling over 163,000 hours from a major sleep clinic, including 13 clinical disease prediction tasks alongside canonical sleep-related tasks such as sleep staging, apnea diagnosis, and age estimation. We systematically evaluate SSRL pre-training methods on Stanford Sleep Bench, assessing downstream performance across four tasks: sleep staging, apnea diagnosis, age estimation, and disease and mortality prediction. Our results show that multiple pretraining methods achieve comparable performance for sleep staging, apnea diagnosis, and age estimation. However, for mortality and disease prediction, contrastive learning significantly outperforms other approaches while also converging faster during pretraining. To facilitate reproducibility and advance sleep research, we will release Stanford Sleep Bench along with pretrained model weights, training pipelines, and evaluation code.
- oai:arXiv.org:2512.09591v1
- cs.LG
+ Data-Efficient American Sign Language Recognition via Few-Shot Prototypical Networks
+ https://arxiv.org/abs/2512.10562
+ arXiv:2512.10562v1 Announce Type: new
+Abstract: Isolated Sign Language Recognition (ISLR) is critical for bridging the communication gap between the Deaf and Hard-of-Hearing (DHH) community and the hearing world. However, robust ISLR is fundamentally constrained by data scarcity and the long-tail distribution of sign vocabulary, where gathering sufficient examples for thousands of unique signs is prohibitively expensive. Standard classification approaches struggle under these conditions, often overfitting to frequent classes while failing to generalize to rare ones. To address this bottleneck, we propose a Few-Shot Prototypical Network framework adapted for a skeleton based encoder. Unlike traditional classifiers that learn fixed decision boundaries, our approach utilizes episodic training to learn a semantic metric space where signs are classified based on their proximity to dynamic class prototypes. We integrate a Spatiotemporal Graph Convolutional Network (ST-GCN) with a novel Multi-Scale Temporal Aggregation (MSTA) module to capture both rapid and fluid motion dynamics. Experimental results on the WLASL dataset demonstrate the superiority of this metric learning paradigm: our model achieves 43.75% Top-1 and 77.10% Top-5 accuracy on the test set. Crucially, this outperforms a standard classification baseline sharing the identical backbone architecture by over 13%, proving that the prototypical training strategy effectively outperforms in a data scarce situation where standard classification fails. Furthermore, the model exhibits strong zero-shot generalization, achieving nearly 30% accuracy on the unseen SignASL dataset without fine-tuning, offering a scalable pathway for recognizing extensive sign vocabularies with limited data.
+ oai:arXiv.org:2512.10562v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Meher Md Saad
+
+
+ NormCode: A Semi-Formal Language for Context-Isolated AI Planning
+ https://arxiv.org/abs/2512.10563
+ arXiv:2512.10563v1 Announce Type: new
+Abstract: Multistep workflows that chain large language model (LLM) calls suffer from context pollution: as information accumulates across steps, models hallucinate, confuse intermediate outputs, and lose track of task constraints. We present NormCode, a semiformal language for constructing plans of inferences, structured decompositions where each step operates in data isolation and receives only explicitly passed inputs, which eliminates crossstep contamination by design. NormCode enforces a strict separation between semantic operations (LLMdriven reasoning, nondeterministic) and syntactic operations (deterministic data restructuring), enabling precise cost and reliability tracing. The language exists in three isomorphic formats: .ncds for human authoring, .ncd for machine execution, and .ncn for human verification, supporting progressive formalization from sketch to production. We validate NormCode through two demonstrations: (1) a base X addition algorithm achieving 100 percent accuracy on arbitrary length inputs, and (2) self hosted execution of NormCode's own five phase compiler pipeline. The working orchestrator provides dependency driven scheduling, SQLite backed checkpointing, and loop management, making AI workflows auditable by design and addressing a critical need for transparency in high stakes domains such as legal reasoning, medical decision making, and financial analysis.
+ oai:arXiv.org:2512.10563v1cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Magnus Ruud Kjaer, Rahul Thapa, Gauri Ganjoo, Hyatt Moore IV, Poul Joergen Jennum, Brandon M. Westover, James Zou, Emmanuel Mignot, Bryan He, Andreas Brink-Kjaer
+ Xin Guan
- CS3D: An Efficient Facial Expression Recognition via Event Vision
- https://arxiv.org/abs/2512.09592
- arXiv:2512.09592v1 Announce Type: new
-Abstract: Responsive and accurate facial expression recognition is crucial to human-robot interaction for daily service robots. Nowadays, event cameras are becoming more widely adopted as they surpass RGB cameras in capturing facial expression changes due to their high temporal resolution, low latency, computational efficiency, and robustness in low-light conditions. Despite these advantages, event-based approaches still encounter practical challenges, particularly in adopting mainstream deep learning models. Traditional deep learning methods for facial expression analysis are energy-intensive, making them difficult to deploy on edge computing devices and thereby increasing costs, especially for high-frequency, dynamic, event vision-based approaches. To address this challenging issue, we proposed the CS3D framework by decomposing the Convolutional 3D method to reduce the computational complexity and energy consumption. Additionally, by utilizing soft spiking neurons and a spatial-temporal attention mechanism, the ability to retain information is enhanced, thus improving the accuracy of facial expression detection. Experimental results indicate that our proposed CS3D method attains higher accuracy on multiple datasets compared to architectures such as the RNN, Transformer, and C3D, while the energy consumption of the CS3D method is just 21.97\% of the original C3D required on the same device.
- oai:arXiv.org:2512.09592v1
+ Audio-sync Video Instance Editing with Granularity-Aware Mask Refiner
+ https://arxiv.org/abs/2512.10571
+ arXiv:2512.10571v1 Announce Type: new
+Abstract: Recent advancements in video generation highlight that realistic audio-visual synchronization is crucial for engaging content creation. However, existing video editing methods largely overlook audio-visual synchronization and lack the fine-grained spatial and temporal controllability required for precise instance-level edits. In this paper, we propose AVI-Edit, a framework for audio-sync video instance editing. We propose a granularity-aware mask refiner that iteratively refines coarse user-provided masks into precise instance-level regions. We further design a self-feedback audio agent to curate high-quality audio guidance, providing fine-grained temporal control. To facilitate this task, we additionally construct a large-scale dataset with instance-centric correspondence and comprehensive annotations. Extensive experiments demonstrate that AVI-Edit outperforms state-of-the-art methods in visual quality, condition following, and audio-visual synchronization. Project page: https://hjzheng.net/projects/AVI-Edit/.
+ oai:arXiv.org:2512.10571v1cs.CV
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhe Wang, Qijin Song, Yucen Peng, Weibang Bai
+ http://creativecommons.org/licenses/by/4.0/
+ Haojie Zheng, Shuchen Weng, Jingqi Liu, Siqi Yang, Boxin Shi, Xinlong Wang
- Model management to support systems engineering workflows using ontology-based knowledge graphs
- https://arxiv.org/abs/2512.09596
- arXiv:2512.09596v1 Announce Type: new
-Abstract: System engineering has been shifting from document-centric to model-based approaches, where assets are becoming more and more digital. Although digitisation conveys several benefits, it also brings several concerns (e.g., storage and access) and opportunities. In the context of Cyber- Physical Systems (CPS), we have experts from various domains executing complex workflows and manipulating models in a plethora of different formalisms, each with their own methods, techniques and tools. Storing knowledge on these workflows can reduce considerable effort during system development not only to allow their repeatability and replicability but also to access and reason on data generated by their execution. In this work, we propose a framework to manage modelling artefacts generated from workflow executions. The basic workflow concepts, related formalisms and artefacts are formally defined in an ontology specified in OML (Ontology Modelling Language). This ontology enables the construction of a knowledge graph that contains system engineering data to which we can apply reasoning. We also developed several tools to support system engineering during the design of workflows, their enactment, and artefact storage, considering versioning, querying and reasoning on the stored data. These tools also hide the complexity of manipulating the knowledge graph directly. Finally, we have applied our proposed framework in a real-world system development scenario of a drivetrain smart sensor system. Results show that our proposal not only helped the system engineer with fundamental difficulties like storage and versioning but also reduced the time needed to access relevant information and new knowledge that can be inferred from the knowledge graph.
- oai:arXiv.org:2512.09596v1
- cs.SE
- Thu, 11 Dec 2025 00:00:00 -0500
+ DeMapGS: Simultaneous Mesh Deformation and Surface Attribute Mapping via Gaussian Splatting
+ https://arxiv.org/abs/2512.10572
+ arXiv:2512.10572v1 Announce Type: new
+Abstract: We propose DeMapGS, a structured Gaussian Splatting framework that jointly optimizes deformable surfaces and surface-attached 2D Gaussian splats. By anchoring splats to a deformable template mesh, our method overcomes topological inconsistencies and enhances editing flexibility, addressing limitations of prior Gaussian Splatting methods that treat points independently. The unified representation in our method supports extraction of high-fidelity diffuse, normal, and displacement maps, enabling the reconstructed mesh to inherit the photorealistic rendering quality of Gaussian Splatting. To support robust optimization, we introduce a gradient diffusion strategy that propagates supervision across the surface, along with an alternating 2D/3D rendering scheme to handle concave regions. Experiments demonstrate that DeMapGS achieves state-of-the-art mesh reconstruction quality and enables downstream applications for Gaussian splats such as editing and cross-object manipulation through a shared parametric surface.
+ oai:arXiv.org:2512.10572v1
+ cs.GR
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- 10.1016/j.jii.2024.100720
- J. Ind. Inf. Integr. 42: 100720 (2024)
- Arkadiusz Ry\'s, Lucas Lima, Joeri Exelmans, Dennis Janssens, Hans Vangheluwe
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ 10.1145/3757377.3763860
+ Shuyi Zhou, Shengze Zhong, Kenshi Takayama, Takafumi Taketomi, Takeshi Oishi
- UrbanNav: Learning Language-Guided Urban Navigation from Web-Scale Human Trajectories
- https://arxiv.org/abs/2512.09607
- arXiv:2512.09607v1 Announce Type: new
-Abstract: Navigating complex urban environments using natural language instructions poses significant challenges for embodied agents, including noisy language instructions, ambiguous spatial references, diverse landmarks, and dynamic street scenes. Current visual navigation methods are typically limited to simulated or off-street environments, and often rely on precise goal formats, such as specific coordinates or images. This limits their effectiveness for autonomous agents like last-mile delivery robots navigating unfamiliar cities. To address these limitations, we introduce UrbanNav, a scalable framework that trains embodied agents to follow free-form language instructions in diverse urban settings. Leveraging web-scale city walking videos, we develop an scalable annotation pipeline that aligns human navigation trajectories with language instructions grounded in real-world landmarks. UrbanNav encompasses over 1,500 hours of navigation data and 3 million instruction-trajectory-landmark triplets, capturing a wide range of urban scenarios. Our model learns robust navigation policies to tackle complex urban scenarios, demonstrating superior spatial reasoning, robustness to noisy instructions, and generalization to unseen urban settings. Experimental results show that UrbanNav significantly outperforms existing methods, highlighting the potential of large-scale web video data to enable language-guided, real-world urban navigation for embodied agents.
- oai:arXiv.org:2512.09607v1
- cs.RO
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Is the Information Bottleneck Robust Enough? Towards Label-Noise Resistant Information Bottleneck Learning
+ https://arxiv.org/abs/2512.10573
+ arXiv:2512.10573v1 Announce Type: new
+Abstract: The Information Bottleneck (IB) principle facilitates effective representation learning by preserving label-relevant information while compressing irrelevant information. However, its strong reliance on accurate labels makes it inherently vulnerable to label noise, prevalent in real-world scenarios, resulting in significant performance degradation and overfitting. To address this issue, we propose LaT-IB, a novel Label-Noise ResistanT Information Bottleneck method which introduces a "Minimal-Sufficient-Clean" (MSC) criterion. Instantiated as a mutual information regularizer to retain task-relevant information while discarding noise, MSC addresses standard IB's vulnerability to noisy label supervision. To achieve this, LaT-IB employs a noise-aware latent disentanglement that decomposes the latent representation into components aligned with to the clean label space and the noise space. Theoretically, we first derive mutual information bounds for each component of our objective including prediction, compression, and disentanglement, and moreover prove that optimizing it encourages representations invariant to input noise and separates clean and noisy label information. Furthermore, we design a three-phase training framework: Warmup, Knowledge Injection and Robust Training, to progressively guide the model toward noise-resistant representations. Extensive experiments demonstrate that LaT-IB achieves superior robustness and efficiency under label noise, significantly enhancing robustness and applicability in real-world scenarios with label noise.
+ oai:arXiv.org:2512.10573v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yanghong Mei, Yirong Yang, Longteng Guo, Qunbo Wang, Ming-Ming Yu, Xingjian He, Wenjun Wu, Jing Liu
+ http://creativecommons.org/licenses/by/4.0/
+ Yi Huang, Qingyun Sun, Yisen Gao, Haonan Yuan, Xingcheng Fu, Jianxin Li
- Super4DR: 4D Radar-centric Self-supervised Odometry and Gaussian-based Map Optimization
- https://arxiv.org/abs/2512.09608
- arXiv:2512.09608v1 Announce Type: new
-Abstract: Conventional SLAM systems using visual or LiDAR data often struggle in poor lighting and severe weather. Although 4D radar is suited for such environments, its sparse and noisy point clouds hinder accurate odometry estimation, while the radar maps suffer from obscure and incomplete structures. Thus, we propose Super4DR, a 4D radar-centric framework for learning-based odometry estimation and gaussian-based map optimization. First, we design a cluster-aware odometry network that incorporates object-level cues from the clustered radar points for inter-frame matching, alongside a hierarchical self-supervision mechanism to overcome outliers through spatio-temporal consistency, knowledge transfer, and feature contrast. Second, we propose using 3D gaussians as an intermediate representation, coupled with a radar-specific growth strategy, selective separation, and multi-view regularization, to recover blurry map areas and those undetected based on image texture. Experiments show that Super4DR achieves a 67% performance gain over prior self-supervised methods, nearly matches supervised odometry, and narrows the map quality disparity with LiDAR while enabling multi-modal image rendering.
- oai:arXiv.org:2512.09608v1
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ RoleRMBench & RoleRM: Towards Reward Modeling for Profile-Based Role Play in Dialogue Systems
+ https://arxiv.org/abs/2512.10575
+ arXiv:2512.10575v1 Announce Type: new
+Abstract: Reward modeling has become a cornerstone of aligning large language models (LLMs) with human preferences. Yet, when extended to subjective and open-ended domains such as role play, existing reward models exhibit severe degradation, struggling to capture nuanced and persona-grounded human judgments. To address this gap, we introduce RoleRMBench, the first systematic benchmark for reward modeling in role-playing dialogue, covering seven fine-grained capabilities from narrative management to role consistency and engagement. Evaluation on RoleRMBench reveals large and consistent gaps between general-purpose reward models and human judgment, particularly in narrative and stylistic dimensions. We further propose RoleRM, a reward model trained with Continuous Implicit Preferences (CIP), which reformulates subjective evaluation as continuous consistent pairwise supervision under multiple structuring strategies. Comprehensive experiments show that RoleRM surpasses strong open- and closed-source reward models by over 24% on average, demonstrating substantial gains in narrative coherence and stylistic fidelity. Our findings highlight the importance of continuous preference representation and annotation consistency, establishing a foundation for subjective alignment in human-centered dialogue systems.
+ oai:arXiv.org:2512.10575v1
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhiheng Li, Weihua Wang, Qiang Shen, Yichen Zhao, Zheng Fang
+ http://creativecommons.org/licenses/by/4.0/
+ Hang Ding, Qiming Feng, Dongqi Liu, Qi Zhao, Tao Yao, Shuo Wang, Dongsheng Chen, Jian Li, Zhenye Gan, Jiangning Zhang, Chengjie Wang, Yabiao Wang
- ImageTalk: Designing a Multimodal AAC Text Generation System Driven by Image Recognition and Natural Language Generation
- https://arxiv.org/abs/2512.09610
- arXiv:2512.09610v1 Announce Type: new
-Abstract: People living with Motor Neuron Disease (plwMND) frequently encounter speech and motor impairments that necessitate a reliance on augmentative and alternative communication (AAC) systems. This paper tackles the main challenge that traditional symbol-based AAC systems offer a limited vocabulary, while text entry solutions tend to exhibit low communication rates. To help plwMND articulate their needs about the system efficiently and effectively, we iteratively design and develop a novel multimodal text generation system called ImageTalk through a tailored proxy-user-based and an end-user-based design phase. The system demonstrates pronounced keystroke savings of 95.6%, coupled with consistent performance and high user satisfaction. We distill three design guidelines for AI-assisted text generation systems design and outline four user requirement levels tailored for AAC purposes, guiding future research in this field.
- oai:arXiv.org:2512.09610v1
- cs.HC
- cs.AI
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ ESS: An Offload-Centric Latent-Cache Management Architecture for DeepSeek-V3.2-Exp
+ https://arxiv.org/abs/2512.10576
+ arXiv:2512.10576v1 Announce Type: new
+Abstract: DeepSeek-V3.2-Exp introduces a sparse attention mechanism that significantly reduces inference latency in long-context scenarios. Although the overall throughput has improved greatly, the Decode-stage of PD disaggregation remains to be a major bottleneck. This bottleneck primarily stems from the conflict between linear growth of Latent-Cache with sequence length and the limited GPU memory capacity, which constrains the feasible batch-size and thereby suppresses Decode-stage throughput.
+ To address this challenge, we propose ESS (Extended Sparse Server), an offload-centric system design tailored for DeepSeek-V3.2-Exp. ESS selectively offloads Latent-Cache to CPU memory while preserving latency-critical components on GPU. By freeing up GPU memory, ESS effectively decoupling batch-size scaling from GPU memory constraints. This design significantly improves Decode-stage throughput, thereby reducing deployment costs in real-world settings.
+ Our high-fidelity simulations show that ESS delivers 69.4\% throughput improvement at 32K context length and up to 123\% throughput improvement at 128K, demonstrating its effectiveness for large-context inference workloads. These results highlight ESS as a practical and scalable solution for long-context LLM serving.
+ oai:arXiv.org:2512.10576v1
+ cs.DC
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by-nc-sa/4.0/
- Boyin Yang, Puming Jiang, Per Ola Kristensson
+ Xinhang Chen, Chao Zhang, Jiahuan He, Wei Liu, Jianming Zhang, Wenlong Zhou, Xiao Li, Pai Zeng, Shiyong Li, Yuanpan Qian, Dong Li, Zhaogeng Li
- Real-Time Non-Smooth MPC for Switching Systems: Application to a Three-Tank Process
- https://arxiv.org/abs/2512.09611
- arXiv:2512.09611v1 Announce Type: new
-Abstract: Real-time model predictive control of non-smooth switching systems remains challenging due to discontinuities and the presence of discrete modes, which complicate numerical integration and optimization. This paper presents a real-time feasible non-smooth model predictive control scheme for a physical three-tank process, implemented without mixed-integer formulations. The approach combines Filippov system modeling with finite elements and switch detection for time discretization, leading to a finite-dimensional optimal control problem formulated as a mathematical program with complementarity constraints. The mathematical program is solved via a homotopy of smooth nonlinear programs. We introduce modeling adjustments that make the three-tank dynamics numerically tractable, including additional modes to avoid non-Lipschitz points and undefined function values. Hardware experiments demonstrate efficient handling of switching events, mode-consistent tracking across reference changes, correct boundary handling, and constraint satisfaction. Furthermore, we investigate the impact of model mismatch and show that the tracking performance and computation times remain within real-time limits for the chosen sampling time. The complete controller is implemented using the non-smooth optimal control framework NOSNOC
- oai:arXiv.org:2512.09611v1
+ Structural Methods for handling mode changes in multimode DAE systems
+ https://arxiv.org/abs/2512.10580
+ arXiv:2512.10580v1 Announce Type: new
+Abstract: Hybrid systems are an important concept in Cyber-Physical Systems modeling, for which multiphysics modeling from first principles and the reuse of models from libraries are key. To achieve this, DAEs must be used to specify the dynamics in each discrete state (or mode in our context). This led to the development of DAE-based equational languages supporting multiple modes, of which Modelica is a popular standard. Mode switching can be time- or state-based. Impulsive behaviors can occur at mode changes. While mode changes are well understood in particular physics (e.g., contact mechanics), this is not the case in physics-agnostic paradigms such as Modelica. This situation causes difficulties for the compilation of programs, often requiring users to manually smooth out mode changes. In this paper, we propose a novel approach for the hot restart at mode changes in such paradigms. We propose a mathematical meaning for hot restarts (such a mathematical meaning does not exist in general), as well as a combined structural and impulse analysis for mode changes, generating the hot restart even in the presence of impulses. Our algorithm detects at compile time if the mode change is insufficiently specified, in which case it returns diagnostics information to the user.
+ oai:arXiv.org:2512.10580v1eess.SYcs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Hendrik Alsmeier, Felix H\"ausser, Andreas Kn\"odler, Armin Nurkanovi\'c, Anton Pozharskiy, Moritz Diehl, Rolf Findeisen
+ Albert Benveniste, Benoit Caillaud, Yahao Chen, Khalil Ghorbal, Mathias Malandain
- Rethinking Chain-of-Thought Reasoning for Videos
- https://arxiv.org/abs/2512.09616
- arXiv:2512.09616v1 Announce Type: new
-Abstract: Chain-of-thought (CoT) reasoning has been highly successful in solving complex tasks in natural language processing, and recent multimodal large language models (MLLMs) have extended this paradigm to video reasoning. However, these models typically build on lengthy reasoning chains and large numbers of input visual tokens. Motivated by empirical observations from our benchmark study, we hypothesize that concise reasoning combined with a reduced set of visual tokens can be sufficient for effective video reasoning. To evaluate this hypothesis, we design and validate an efficient post-training and inference framework that enhances a video MLLM's reasoning capability. Our framework enables models to operate on compressed visual tokens and generate brief reasoning traces prior to answering. The resulting models achieve substantially improved inference efficiency, deliver competitive performance across diverse benchmarks, and avoid reliance on manual CoT annotations or supervised fine-tuning. Collectively, our results suggest that long, human-like CoT reasoning may not be necessary for general video reasoning, and that concise reasoning can be both effective and efficient. Our code will be released at https://github.com/LaVi-Lab/Rethink_CoT_Video.
- oai:arXiv.org:2512.09616v1
+ Unleashing Degradation-Carrying Features in Symmetric U-Net: Simpler and Stronger Baselines for All-in-One Image Restoration
+ https://arxiv.org/abs/2512.10581
+ arXiv:2512.10581v1 Announce Type: new
+Abstract: All-in-one image restoration aims to handle diverse degradations (e.g., noise, blur, adverse weather) within a unified framework, yet existing methods increasingly rely on complex architectures (e.g., Mixture-of-Experts, diffusion models) and elaborate degradation prompt strategies. In this work, we reveal a critical insight: well-crafted feature extraction inherently encodes degradation-carrying information, and a symmetric U-Net architecture is sufficient to unleash these cues effectively. By aligning feature scales across encoder-decoder and enabling streamlined cross-scale propagation, our symmetric design preserves intrinsic degradation signals robustly, rendering simple additive fusion in skip connections sufficient for state-of-the-art performance. Our primary baseline, SymUNet, is built on this symmetric U-Net and achieves better results across benchmark datasets than existing approaches while reducing computational cost. We further propose a semantic enhanced variant, SE-SymUNet, which integrates direct semantic injection from frozen CLIP features via simple cross-attention to explicitly amplify degradation priors. Extensive experiments on several benchmarks validate the superiority of our methods. Both baselines SymUNet and SE-SymUNet establish simpler and stronger foundations for future advancements in all-in-one image restoration. The source code is available at https://github.com/WenlongJiao/SymUNet.
+ oai:arXiv.org:2512.10581v1cs.CV
- cs.AI
- cs.CL
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yiwu Zhong, Zi-Yuan Hu, Yin Li, Liwei Wang
+ Wenlong Jiao, Heyang Lee, Ping Wang, Pengfei Zhu, Qinghua Hu, Dongwei Ren
- FROMAT: Multiview Material Appearance Transfer via Few-Shot Self-Attention Adaptation
- https://arxiv.org/abs/2512.09617
- arXiv:2512.09617v1 Announce Type: new
-Abstract: Multiview diffusion models have rapidly emerged as a powerful tool for content creation with spatial consistency across viewpoints, offering rich visual realism without requiring explicit geometry and appearance representation. However, compared to meshes or radiance fields, existing multiview diffusion models offer limited appearance manipulation, particularly in terms of material, texture, or style.
- In this paper, we present a lightweight adaptation technique for appearance transfer in multiview diffusion models. Our method learns to combine object identity from an input image with appearance cues rendered in a separate reference image, producing multi-view-consistent output that reflects the desired materials, textures, or styles. This allows explicit specification of appearance parameters at generation time while preserving the underlying object geometry and view coherence. We leverage three diffusion denoising processes responsible for generating the original object, the reference, and the target images, and perform reverse sampling to aggregate a small subset of layer-wise self-attention features from the object and the reference to influence the target generation. Our method requires only a few training examples to introduce appearance awareness to pretrained multiview models. The experiments show that our method provides a simple yet effective way toward multiview generation with diverse appearance, advocating the adoption of implicit generative 3D representations in practice.
- oai:arXiv.org:2512.09617v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ THeGAU: Type-Aware Heterogeneous Graph Autoencoder and Augmentation
+ https://arxiv.org/abs/2512.10589
+ arXiv:2512.10589v1 Announce Type: new
+Abstract: Heterogeneous Graph Neural Networks (HGNNs) are effective for modeling Heterogeneous Information Networks (HINs), which encode complex multi-typed entities and relations. However, HGNNs often suffer from type information loss and structural noise, limiting their representational fidelity and generalization. We propose THeGAU, a model-agnostic framework that combines a type-aware graph autoencoder with guided graph augmentation to improve node classification. THeGAU reconstructs schema-valid edges as an auxiliary task to preserve node-type semantics and introduces a decoder-driven augmentation mechanism to selectively refine noisy structures. This joint design enhances robustness, accuracy, and efficiency while significantly reducing computational overhead. Extensive experiments on three benchmark HIN datasets (IMDB, ACM, and DBLP) demonstrate that THeGAU consistently outperforms existing HGNN methods, achieving state-of-the-art performance across multiple backbones.
+ oai:arXiv.org:2512.10589v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/publicdomain/zero/1.0/
- Hubert Kompanowski, Varun Jampani, Aaryaman Vasishta, Binh-Son Hua
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ming-Yi Hong, Miao-Chen Chiang, Youchen Teng, Yu-Hsiang Wang, Chih-Yu Wang, Che Lin
- GLaD: Geometric Latent Distillation for Vision-Language-Action Models
- https://arxiv.org/abs/2512.09619
- arXiv:2512.09619v1 Announce Type: new
-Abstract: Most existing Vision-Language-Action (VLA) models rely primarily on RGB information, while ignoring geometric cues crucial for spatial reasoning and manipulation. In this work, we introduce GLaD, a geometry-aware VLA framework that incorporates 3D geometric priors during pretraining through knowledge distillation. Rather than distilling geometric features solely into the vision encoder, we align the LLM's hidden states corresponding to visual tokens with features from a frozen geometry-aware vision transformer (VGGT), ensuring that geometric understanding is deeply integrated into the multimodal representations that drive action prediction. Pretrained on the Bridge dataset with this geometry distillation mechanism, GLaD achieves 94.1% average success rate across four LIBERO task suites, outperforming UniVLA (92.5%) which uses identical pretraining data. These results validate that geometry-aware pretraining enhances spatial reasoning and policy generalization without requiring explicit depth sensors or 3D annotations.
- oai:arXiv.org:2512.09619v1
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Salient Object Detection in Complex Weather Conditions via Noise Indicators
+ https://arxiv.org/abs/2512.10592
+ arXiv:2512.10592v1 Announce Type: new
+Abstract: Salient object detection (SOD), a foundational task in computer vision, has advanced from single-modal to multi-modal paradigms to enhance generalization. However, most existing SOD methods assume low-noise visual conditions, overlooking the degradation of segmentation accuracy caused by weather-induced noise in real-world scenarios. In this paper, we propose a SOD framework tailored for diverse weather conditions, encompassing a specific encoder and a replaceable decoder. To enable handling of varying weather noises, we introduce a one-hot vector as a noise indicator to represent different weather types and design a Noise Indicator Fusion Module (NIFM). The NIFM takes both semantic features and the noise indicator as dual inputs and is inserted between consecutive stages of the encoder to embed weather-aware priors via adaptive feature modulation. Critically, the proposed specific encoder retains compatibility with mainstream SOD decoders. Extensive experiments are conducted on the WXSOD dataset under varying training data scales (100%, 50%, 30% of the full training set), three encoder and seven decoder configurations. Results show that the proposed SOD framework (particularly the NIFM-enhanced specific encoder) improves segmentation accuracy under complex weather conditions compared to a vanilla encoder.
+ oai:arXiv.org:2512.10592v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Minghao Guo, Meng Cao, Jiachen Tao, Rongtao Xu, Yan Yan, Xiaodan Liang, Ivan Laptev, Xiaojun Chang
+ Quan Chen, Xiaokai Yang, Tingyu Wang, Rongfeng Lu, Xichun Sheng, Yaoqi Sun, Chenggang Yan
- Semantic-Aware Cooperative Communication and Computation Framework in Vehicular Networks
- https://arxiv.org/abs/2512.09621
- arXiv:2512.09621v1 Announce Type: new
-Abstract: Semantic Communication (SC) combined with Vehicular edge computing (VEC) provides an efficient edge task processing paradigm for Internet of Vehicles (IoV). Focusing on highway scenarios, this paper proposes a Tripartite Cooperative Semantic Communication (TCSC) framework, which enables Vehicle Users (VUs) to perform semantic task offloading via Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) communications. Considering task latency and the number of semantic symbols, the framework constructs a Mixed-Integer Nonlinear Programming (MINLP) problem, which is transformed into two subproblems. First, we innovatively propose a multi-agent proximal policy optimization task offloading optimization method based on parametric distribution noise (MAPPO-PDN) to solve the optimization problem of the number of semantic symbols; second, linear programming (LP) is used to solve offloading ratio. Simulations show that performance of this scheme is superior to that of other algorithms.
- oai:arXiv.org:2512.09621v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Motion Planning for Safe Landing of a Human-Piloted Parafoil
+ https://arxiv.org/abs/2512.10595
+ arXiv:2512.10595v1 Announce Type: new
+Abstract: Most skydiving accidents occur during the parafoil-piloting and landing stages and result from human lapses in judgment while piloting the parafoil. Training of novice pilots is protracted due to the lack of functional and easily accessible training simulators. Moreover, work on parafoil trajectory planning suitable for aiding human training remains limited. To bridge this gap, we study the problem of computing safe trajectories for human-piloted parafoil flight and examine how such trajectories fare against human-generated solutions. For the algorithmic part, we adapt the sampling-based motion planner Stable Sparse RRT (SST) by Li et al., to cope with the problem constraints while minimizing the bank angle (control effort) as a proxy for safety. We then compare the computer-generated solutions with data from human-generated parafoil flight, where the algorithm offers a relative cost improvement of 20\%-80\% over the performance of the human pilot. We observe that human pilots tend to, first, close the horizontal distance to the landing area, and then address the vertical gap by spiraling down to the suitable altitude for starting a landing maneuver. The algorithm considered here makes smoother and more gradual descents, arriving at the landing area at the precise altitude necessary for the final approach while maintaining safety constraints. Overall, the study demonstrates the potential of computer-generated guidelines, rather than traditional rules of thumb, which can be integrated into future simulators to train pilots for safer and more cost-effective flights.
+ oai:arXiv.org:2512.10595v1
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jingbo Zhang, Maoxin Ji, Qiong Wu, Pingyi Fan, Kezhi Wang, Wen Chen
+ http://creativecommons.org/licenses/by/4.0/
+ Maximillian Fainkich, Kiril Solovey, Anna Clarke
- CUBE: A Cardinality Estimator Based on Neural CDF
- https://arxiv.org/abs/2512.09622
- arXiv:2512.09622v1 Announce Type: new
-Abstract: Modern database optimizer relies on cardinality estimator, whose accuracy directly affects the optimizer's ability to choose an optimal execution plan. Recent work on data-driven methods has leveraged probabilistic models to achieve higher estimation accuracy, but these approaches cannot guarantee low inference latency at the same time and neglect scalability. As data dimensionality grows, optimization time can even exceed actual query execution time. Furthermore, inference with probabilistic models by sampling or integration procedures unpredictable estimation result and violate stability, which brings unstable performance with query execution and make database tuning hard for database users. In this paper, we propose a novel approach to cardinality estimation based on cumulative distribution function(CDF), which calculates range query cardinality without sampling or integration, ensuring accurate and predictable estimation results. With inference acceleration by merging calculations, we can achieve fast and nearly constant inference speed while maintaining high accuracy, even as dimensionality increases, which is over 10x faster than current state-of-the-art data-driven cardinality estimator. This demonstrates its excellent dimensional scalability, making it well-suited for real-world database applications.
- oai:arXiv.org:2512.09622v1
- cs.DB
- Thu, 11 Dec 2025 00:00:00 -0500
+ Beyond Pixels: A Training-Free, Text-to-Text Framework for Remote Sensing Image Retrieval
+ https://arxiv.org/abs/2512.10596
+ arXiv:2512.10596v1 Announce Type: new
+Abstract: Semantic retrieval of remote sensing (RS) images is a critical task fundamentally challenged by the \textquote{semantic gap}, the discrepancy between a model's low-level visual features and high-level human concepts. While large Vision-Language Models (VLMs) offer a promising path to bridge this gap, existing methods often rely on costly, domain-specific training, and there is a lack of benchmarks to evaluate the practical utility of VLM-generated text in a zero-shot retrieval context. To address this research gap, we introduce the Remote Sensing Rich Text (RSRT) dataset, a new benchmark featuring multiple structured captions per image. Based on this dataset, we propose a fully training-free, text-only retrieval reference called TRSLLaVA. Our methodology reformulates cross-modal retrieval as a text-to-text (T2T) matching problem, leveraging rich text descriptions as queries against a database of VLM-generated captions within a unified textual embedding space. This approach completely bypasses model training or fine-tuning. Experiments on the RSITMD and RSICD benchmarks show our training-free method is highly competitive with state-of-the-art supervised models. For instance, on RSITMD, our method achieves a mean Recall of 42.62\%, nearly doubling the 23.86\% of the standard zero-shot CLIP baseline and surpassing several top supervised models. This validates that high-quality semantic representation through structured text provides a powerful and cost-effective paradigm for remote sensing image retrieval.
+ oai:arXiv.org:2512.10596v1
+ cs.CV
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Xiao Yan, Tiezheng Nie, Boyang Fang, Derong Shen, Kou Yue, Yu Ge
+ http://creativecommons.org/licenses/by/4.0/
+ J. Xiao, Y. Guo, X. Zi, K. Thiyagarajan, C. Moreira, M. Prasad
- RIS-Assisted Coordinated Multi-Point ISAC for Low-Altitude Sensing Coverage
- https://arxiv.org/abs/2512.09625
- arXiv:2512.09625v1 Announce Type: new
-Abstract: The low-altitude economy (LAE) has emerged and developed in various fields, which has gained considerable interest. To ensure the security of LAE, it is essential to establish a proper sensing coverage scheme for monitoring the unauthorized targets. Introducing integrated sensing and communication (ISAC) into cellular networks is a promising solution that enables coordinated multiple base stations (BSs) to significantly enhance sensing performance and extend coverage. Meanwhile, deploying a reconfigurable intelligent surface (RIS) can mitigate signal blockages between BSs and low-altitude targets in urban areas. Therefore, this paper focuses on the low-altitude sensing coverage problem in RIS-assisted coordinated multi-point ISAC networks, where a RIS is employed to enable multiple BSs to sense a prescribed region while serving multiple communication users. A joint beamforming and phase shifts design is proposed to minimize the total transmit power while guaranteeing sensing signal-to-noise ratio and communication spectral efficiency. To tackle this non-convex optimization problem, an efficient algorithm is proposed by using the alternating optimization and semi-definite relaxation techniques. Numerical results demonstrate the superiority of our proposed scheme over the baseline schemes.
- oai:arXiv.org:2512.09625v1
+ NWP-based Atmospheric Refractivity Modeling and Fast & Stable Non-uniform Plane Wave Ray-Tracing Simulations for LEO Link Analysis
+ https://arxiv.org/abs/2512.10598
+ arXiv:2512.10598v1 Announce Type: new
+Abstract: Existing low-Earth-orbit (LEO) communication link analyses face two main challenges: (1) limited accuracy of 3D atmospheric refractivity reconstructed from sparsely sampled radiosonde data, and (2) numerical instability in previous non-uniform plane-wave ray-tracing algorithms (i.e., underflow under standard double precision), where non-uniform plane waves inevitably arise at complex-valued dielectric interfaces, is caused by extremely small atmospheric loss terms. To address these issues, we reconstruct a high-resolution 3D complex-valued refractivity model using numerical weather prediction data, and develop a fast and numerically stable non-uniform plane-wave ray tracer. The method remains stable in double precision and delivers a 24-fold speedup over high-precision benchmarks. Comparisons show that boresight-error deviations and path-loss differences between the rigorous method and the uniform-plane-wave approximation remain negligible, even under heavy precipitation. Although rays in a lossy atmosphere experience different phase- and attenuation-direction vectors-forming non-uniform plane waves-the resulting effective attenuation along the path is nearly identical to that predicted by the uniform-plane-wave model. These findings justify the continued use of uniform-plane-wave ray tracing in practical LEO link analyses.
+ oai:arXiv.org:2512.10598v1eess.SYcs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ eess.SP
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ying Zhang, Zeqi Hao, Tingting Zhang
+ Bowoo Jang, Jun Heo, Yong Bae Park, Dong-Yeop Na
- Beyond Sequences: A Benchmark for Atomic Hand-Object Interaction Using a Static RNN Encoder
- https://arxiv.org/abs/2512.09626
- arXiv:2512.09626v1 Announce Type: new
-Abstract: Reliably predicting human intent in hand-object interactions is an open challenge for computer vision. Our research concentrates on a fundamental sub-problem: the fine-grained classification of atomic interaction states, namely 'approaching', 'grabbing', and 'holding'. To this end, we introduce a structured data engineering process that converts raw videos from the MANIAC dataset into 27,476 statistical-kinematic feature vectors. Each vector encapsulates relational and dynamic properties from a short temporal window of motion. Our initial hypothesis posited that sequential modeling would be critical, leading us to compare static classifiers (MLPs) against temporal models (RNNs). Counter-intuitively, the key discovery occurred when we set the sequence length of a Bidirectional RNN to one (seq_length=1). This modification converted the network's function, compelling it to act as a high-capacity static feature encoder. This architectural change directly led to a significant accuracy improvement, culminating in a final score of 97.60%. Of particular note, our optimized model successfully overcame the most challenging transitional class, 'grabbing', by achieving a balanced F1-score of 0.90. These findings provide a new benchmark for low-level hand-object interaction recognition using structured, interpretable features and lightweight architectures.
- oai:arXiv.org:2512.09626v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Authority Backdoor: A Certifiable Backdoor Mechanism for Authoring DNNs
+ https://arxiv.org/abs/2512.10600
+ arXiv:2512.10600v1 Announce Type: new
+Abstract: Deep Neural Networks (DNNs), as valuable intellectual property, face unauthorized use. Existing protections, such as digital watermarking, are largely passive; they provide only post-hoc ownership verification and cannot actively prevent the illicit use of a stolen model. This work proposes a proactive protection scheme, dubbed ``Authority Backdoor," which embeds access constraints directly into the model. In particular, the scheme utilizes a backdoor learning framework to intrinsically lock a model's utility, such that it performs normally only in the presence of a specific trigger (e.g., a hardware fingerprint). But in its absence, the DNN's performance degrades to be useless. To further enhance the security of the proposed authority scheme, the certifiable robustness is integrated to prevent an adaptive attacker from removing the implanted backdoor. The resulting framework establishes a secure authority mechanism for DNNs, combining access control with certifiable robustness against adversarial attacks. Extensive experiments on diverse architectures and datasets validate the effectiveness and certifiable robustness of the proposed framework.
+ oai:arXiv.org:2512.10600v1
+ cs.CR
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yousef Azizi Movahed, Fatemeh Ziaeetabar
+ Han Yang, Shaofeng Li, Tian Dong, Xiangyu Xu, Guangchi Liu, Zhen Ling
- LogICL: Distilling LLM Reasoning to Bridge the Semantic Gap in Cross-Domain Log Anomaly Detection
- https://arxiv.org/abs/2512.09627
- arXiv:2512.09627v1 Announce Type: new
-Abstract: Effective log anomaly detection is critical to sustaining reliability in large-scale IT infrastructures. Transformer-based models require substantial resources and labeled data, exacerbating the cold-start problem in target domains where logs are scarce. Existing cross-domain methods leverage source logs but struggle with generalization due to reliance on surface lexical similarity, failing to capture latent semantic equivalence amid structural divergences. To address this, we propose LogICL, a framework distilling Large Language Model (LLM) reasoning into a lightweight encoder for cross-domain anomaly detection. During training, LogICL constructs a delta matrix measuring the utility of demonstrations selected via Maximal Marginal Relevance relative to zero-shot inference. The encoder is optimized via a multi-objective loss comprising an ICL-Guided term that aligns representations based on reasoning assistance utility, maximum mean discrepancy for domain alignment, and supervised contrastive loss. At inference, the optimized encoder retrieves reasoning-aware demonstrations using semantic similarity and delta scores, enabling frozen-LLM in-context learning with Chain-of-Thought for accurate and interpretable detection. Experiments on few-shot and zero-shot cross-domain benchmarks confirm LogICL achieves state-of-the-art performance across heterogeneous systems. Further analysis via visualizations and case studies confirms LogICL bridges the semantic gap beyond surface lexical similarity, effectively capturing latent semantic equivalence for rapid deployment.
- oai:arXiv.org:2512.09627v1
- cs.SE
- Thu, 11 Dec 2025 00:00:00 -0500
+ Multi-Objective Reward and Preference Optimization: Theory and Algorithms
+ https://arxiv.org/abs/2512.10601
+ arXiv:2512.10601v1 Announce Type: new
+Abstract: This thesis develops theoretical frameworks and algorithms that advance constrained reinforcement learning (RL) across control, preference learning, and alignment of large language models. The first contribution addresses constrained Markov Decision Processes (CMDPs) under the average-cost criterion through the Average-Constrained Policy Optimization (ACPO) algorithm. ACPO integrates sensitivity analysis with trust-region updates to ensure stable constraint handling, achieving state-of-the-art empirical performance with theoretical guarantees. Constrained RL is then extended to finite-horizon settings via e-COP, the first policy optimization method for episodic CMDPs. Built on an episodic policy difference lemma, e-COP offers provable performance, simplicity, and scalability in safety-critical environments. The thesis then investigates reinforcement learning from human preferences. warmPref-PS introduces a posterior sampling strategy for linear bandits that integrates offline preference data from heterogeneous raters into online learning. Explicit modeling of rater competence yields substantial regret reduction and more efficient data collection for RLHF. The PSPL algorithm further advances preference-based RL by jointly sampling reward models and transition dynamics from pairwise trajectory comparisons, providing Bayesian simple-regret guarantees and robust empirical identification of optimal policies. The final contribution applies these methods to large-scale model alignment. A multi-objective constrained optimization view yields MOPO, an iterative algorithm with closed-form updates that scales to multi-billion-parameter language models and remains robust across alignment settings. Collectively, the thesis unifies constrained RL across average-cost, episodic, and preference-driven paradigms, delivering theoretical advances and practical tools for safe and aligned decision-making.
+ oai:arXiv.org:2512.10601v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jingwei Ye, Zhi Wang, Chenbin Su, Jieshuai Yang, Jiayi Ding, Chunbo Liu, Ge Chu
+ http://creativecommons.org/licenses/by/4.0/
+ Akhil Agnihotri
- An End-to-end Planning Framework with Agentic LLMs and PDDL
- https://arxiv.org/abs/2512.09629
- arXiv:2512.09629v1 Announce Type: new
-Abstract: We present an end-to-end framework for planning supported by verifiers. An orchestrator receives a human specification written in natural language and converts it into a PDDL (Planning Domain Definition Language) model, where the domain and problem are iteratively refined by sub-modules (agents) to address common planning requirements, such as time constraints and optimality, as well as ambiguities and contradictions that may exist in the human specification. The validated domain and problem are then passed to an external planning engine to generate a plan. The orchestrator and agents are powered by Large Language Models (LLMs) and require no human intervention at any stage of the process. Finally, a module translates the final plan back into natural language to improve human readability while maintaining the correctness of each step. We demonstrate the flexibility and effectiveness of our framework across various domains and tasks, including the Google NaturalPlan benchmark and PlanBench, as well as planning problems like Blocksworld and the Tower of Hanoi (where LLMs are known to struggle even with small instances). Our framework can be integrated with any PDDL planning engine and validator (such as Fast Downward, LPG, POPF, VAL, and uVAL, which we have tested) and represents a significant step toward end-to-end planning aided by LLMs.
- oai:arXiv.org:2512.09629v1
- cs.AI
+ Uncertainty-Preserving QBNNs: Multi-Level Quantization of SVI-Based Bayesian Neural Networks for Image Classification
+ https://arxiv.org/abs/2512.10602
+ arXiv:2512.10602v1 Announce Type: new
+Abstract: Bayesian Neural Networks (BNNs) provide principled uncertainty quantification but suffer from substantial computational and memory overhead compared to deterministic networks. While quantization techniques have successfully reduced resource requirements in standard deep learning models, their application to probabilistic models remains largely unexplored. We introduce a systematic multi-level quantization framework for Stochastic Variational Inference based BNNs that distinguishes between three quantization strategies: Variational Parameter Quantization (VPQ), Sampled Parameter Quantization (SPQ), and Joint Quantization (JQ). Our logarithmic quantization for variance parameters, and specialized activation functions to preserve the distributional structure are essential for calibrated uncertainty estimation. Through comprehensive experiments on Dirty-MNIST, we demonstrate that BNNs can be quantized down to 4-bit precision while maintaining both classification accuracy and uncertainty disentanglement. At 4 bits, Joint Quantization achieves up to 8x memory reduction compared to floating-point implementations with minimal degradation in epistemic and aleatoric uncertainty estimation. These results enable deployment of BNNs on resource-constrained edge devices and provide design guidelines for future analog "Bayesian Machines" operating at inherently low precision.
+ oai:arXiv.org:2512.10602v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Emanuele La Malfa, Ping Zhu, Samuele Marro, Sara Bernardini, Michael Wooldridge
+ Hendrik Borras, Yong Wu, Bernhard Klein, Holger Fr\"oning
- Benchmarking SAM2-based Trackers on FMOX
- https://arxiv.org/abs/2512.09633
- arXiv:2512.09633v1 Announce Type: new
-Abstract: Several object tracking pipelines extending Segment Anything Model 2 (SAM2) have been proposed in the past year, where the approach is to follow and segment the object from a single exemplar template provided by the user on a initialization frame. We propose to benchmark these high performing trackers (SAM2, EfficientTAM, DAM4SAM and SAMURAI) on datasets containing fast moving objects (FMO) specifically designed to be challenging for tracking approaches. The goal is to understand better current limitations in state-of-the-art trackers by providing more detailed insights on the behavior of these trackers. We show that overall the trackers DAM4SAM and SAMURAI perform well on more challenging sequences.
- oai:arXiv.org:2512.09633v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ LEO-RobotAgent: A General-purpose Robotic Agent for Language-driven Embodied Operator
+ https://arxiv.org/abs/2512.10605
+ arXiv:2512.10605v1 Announce Type: new
+Abstract: We propose LEO-RobotAgent, a general-purpose language-driven intelligent agent framework for robots. Under this framework, LLMs can operate different types of robots to complete unpredictable complex tasks across various scenarios. This framework features strong generalization, robustness, and efficiency. The application-level system built around it can fully enhance bidirectional human-robot intent understanding and lower the threshold for human-robot interaction. Regarding robot task planning, the vast majority of existing studies focus on the application of large models in single-task scenarios and for single robot types. These algorithms often have complex structures and lack generalizability. Thus, the proposed LEO-RobotAgent framework is designed with a streamlined structure as much as possible, enabling large models to independently think, plan, and act within this clear framework. We provide a modular and easily registrable toolset, allowing large models to flexibly call various tools to meet different requirements. Meanwhile, the framework incorporates a human-robot interaction mechanism, enabling the algorithm to collaborate with humans like a partner. Experiments have verified that this framework can be easily adapted to mainstream robot platforms including unmanned aerial vehicles (UAVs), robotic arms, and wheeled robot, and efficiently execute a variety of carefully designed tasks with different complexity levels. Our code is available at https://github.com/LegendLeoChen/LEO-RobotAgent.
+ oai:arXiv.org:2512.10605v1
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- 33rd International Conference on Artificial Intelligence and Cognitive Science (AICS 2025), December, 2025, Dublin, Ireland
- Senem Aktas, Charles Markham, John McDonald, Rozenn Dahyot
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lihuang Chen, Xiangyu Luo, Jun Meng
- Creation of the Estonian Subjectivity Dataset: Assessing the Degree of Subjectivity on a Scale
- https://arxiv.org/abs/2512.09634
- arXiv:2512.09634v1 Announce Type: new
-Abstract: This article presents the creation of an Estonian-language dataset for document-level subjectivity, analyzes the resulting annotations, and reports an initial experiment of automatic subjectivity analysis using a large language model (LLM). The dataset comprises of 1,000 documents-300 journalistic articles and 700 randomly selected web texts-each rated for subjectivity on a continuous scale from 0 (fully objective) to 100 (fully subjective) by four annotators. As the inter-annotator correlations were moderate, with some texts receiving scores at the opposite ends of the scale, a subset of texts with the most divergent scores was re-annotated, with the inter-annotator correlation improving. In addition to human annotations, the dataset includes scores generated by GPT-5 as an experiment on annotation automation. These scores were similar to human annotators, however several differences emerged, suggesting that while LLM based automatic subjectivity scoring is feasible, it is not an interchangeable alternative to human annotation, and its suitability depends on the intended application.
- oai:arXiv.org:2512.09634v1
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Track and Caption Any Motion: Query-Free Motion Discovery and Description in Videos
+ https://arxiv.org/abs/2512.10607
+ arXiv:2512.10607v1 Announce Type: new
+Abstract: We propose Track and Caption Any Motion (TCAM), a motion-centric framework for automatic video understanding that discovers and describes motion patterns without user queries. Understanding videos in challenging conditions like occlusion, camouflage, or rapid movement often depends more on motion dynamics than static appearance. TCAM autonomously observes a video, identifies multiple motion activities, and spatially grounds each natural language description to its corresponding trajectory through a motion-field attention mechanism. Our key insight is that motion patterns, when aligned with contrastive vision-language representations, provide powerful semantic signals for recognizing and describing actions. Through unified training that combines global video-text alignment with fine-grained spatial correspondence, TCAM enables query-free discovery of multiple motion expressions via multi-head cross-attention. On the MeViS benchmark, TCAM achieves 58.4% video-to-text retrieval, 64.9 JF for spatial grounding, and discovers 4.8 relevant expressions per video with 84.7% precision, demonstrating strong cross-task generalization.
+ oai:arXiv.org:2512.10607v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Karl Gustav Gailit, Kadri Muischnek, Kairit Sirts
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Bishoy Galoaa, Sarah Ostadabbas
- MentraSuite: Post-Training Large Language Models for Mental Health Reasoning and Assessment
- https://arxiv.org/abs/2512.09636
- arXiv:2512.09636v1 Announce Type: new
-Abstract: Mental health disorders affect hundreds of millions globally, and the Web now serves as a primary medium for accessing support, information, and assessment. Large language models (LLMs) offer scalable and accessible assistance, yet their deployment in mental-health settings remains risky when their reasoning is incomplete, inconsistent, or ungrounded. Existing psychological LLMs emphasize emotional understanding or knowledge recall but overlook the step-wise, clinically aligned reasoning required for appraisal, diagnosis, intervention planning, abstraction, and verification. To address these issues, we introduce MentraSuite, a unified framework for advancing reliable mental-health reasoning. We propose MentraBench, a comprehensive benchmark spanning five core reasoning aspects, six tasks, and 13 datasets, evaluating both task performance and reasoning quality across five dimensions: conciseness, coherence, hallucination avoidance, task understanding, and internal consistency. We further present Mindora, a post-trained model optimized through a hybrid SFT-RL framework with an inconsistency-detection reward to enforce faithful and coherent reasoning. To support training, we construct high-quality trajectories using a novel reasoning trajectory generation strategy, that strategically filters difficult samples and applies a structured, consistency-oriented rewriting process to produce concise, readable, and well-balanced trajectories. Across 20 evaluated LLMs, Mindora achieves the highest average performance on MentraBench and shows remarkable performances in reasoning reliability, demonstrating its effectiveness for complex mental-health scenarios.
- oai:arXiv.org:2512.09636v1
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Robust Multi-Disease Retinal Classification via Xception-Based Transfer Learning and W-Net Vessel Segmentation
+ https://arxiv.org/abs/2512.10608
+ arXiv:2512.10608v1 Announce Type: new
+Abstract: In recent years, the incidence of vision-threatening eye diseases has risen dramatically, necessitating scalable and accurate screening solutions. This paper presents a comprehensive study on deep learning architectures for the automated diagnosis of ocular conditions. To mitigate the "black-box" limitations of standard convolutional neural networks (CNNs), we implement a pipeline that combines deep feature extraction with interpretable image processing modules. Specifically, we focus on high-fidelity retinal vessel segmentation as an auxiliary task to guide the classification process. By grounding the model's predictions in clinically relevant morphological features, we aim to bridge the gap between algorithmic output and expert medical validation, thereby reducing false positives and improving deployment viability in clinical settings.
+ oai:arXiv.org:2512.10608v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mengxi Xiao, Kailai Yang, Pengde Zhao, Enze Zhang, Ziyan Kuang, Zhiwei Liu, Weiguang Han, Shu Liao, Lianting Huang, Jinpeng Hu, Min Peng, Qianqian Xie, Sophia Ananiadou
+ Mohammad Sadegh Gholizadeh, Amir Arsalan Rezapour
- Inexact Gauss Seidel and Coarse Solvers for AMG and s-step CG
- https://arxiv.org/abs/2512.09642
- arXiv:2512.09642v1 Announce Type: new
-Abstract: Communication-avoiding Krylov methods require solving small dense Gram systems at each outer iteration. We present a low-synchronization approach based on Forward Gauss--Seidel (FGS), which exploits the structure of Gram matrices arising from Chebyshev polynomial bases. We show that a single FGS sweep is mathematically equivalent to Modified Gram--Schmidt (MGS) orthogonalization in the $A$-norm and provide corresponding backward error bounds. For weak scaling on AMD MI-series GPUs, we demonstrate that 20--30 FGS iterations preserve scalability up to 64 GPUs with problem sizes exceeding 700 million unknowns. We further extend this approach to Algebraic MultiGrid (AMG) coarse-grid solves, removing the need to assemble or factor dense coarse operators
- oai:arXiv.org:2512.09642v1
- math.NA
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Thinking While Driving: A Concurrent Framework for Real-Time, LLM-Based Adaptive Routing
+ https://arxiv.org/abs/2512.10610
+ arXiv:2512.10610v1 Announce Type: new
+Abstract: We present Thinking While Driving, a concurrent routing framework that integrates LLMs into a graph-based traffic environment. Unlike approaches that require agents to stop and deliberate, our system enables LLM-based route planning while agents are moving, significantly reducing intersection wait times. Under high traffic, agents average just 0.75 seconds of decision latency. To coordinate many agents in real-time, we implement a non-blocking asynchronous architecture using Unity coroutines and a dedicated request manager. The environment is a weighted undirected graph with live congestion metrics, updated continuously by the agents to enable shared perception. Our results show LLM-driven agents can dynamically adapt to traffic, reroute around congestion, and exhibit behaviors beyond static pathfinding, all while maintaining real-time performance. This work provides a reproducible framework for future research in adaptive routing and multi-agent cooperation.
+ oai:arXiv.org:2512.10610v1
+ cs.MA
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Stephen Thomas, Pasqua D'Ambra
+ Xiaopei Tan, Muyang Fan
- Kaapana: A Comprehensive Open-Source Platform for Integrating AI in Medical Imaging Research Environments
- https://arxiv.org/abs/2512.09644
- arXiv:2512.09644v1 Announce Type: new
-Abstract: Developing generalizable AI for medical imaging requires both access to large, multi-center datasets and standardized, reproducible tooling within research environments. However, leveraging real-world imaging data in clinical research environments is still hampered by strict regulatory constraints, fragmented software infrastructure, and the challenges inherent in conducting large-cohort multicentre studies. This leads to projects that rely on ad-hoc toolchains that are hard to reproduce, difficult to scale beyond single institutions and poorly suited for collaboration between clinicians and data scientists. We present Kaapana, a comprehensive open-source platform for medical imaging research that is designed to bridge this gap. Rather than building single-use, site-specific tooling, Kaapana provides a modular, extensible framework that unifies data ingestion, cohort curation, processing workflows and result inspection under a common user interface. By bringing the algorithm to the data, it enables institutions to keep control over their sensitive data while still participating in distributed experimentation and model development. By integrating flexible workflow orchestration with user-facing applications for researchers, Kaapana reduces technical overhead, improves reproducibility and enables conducting large-scale, collaborative, multi-centre imaging studies. We describe the core concepts of the platform and illustrate how they can support diverse use cases, from local prototyping to nation-wide research networks. The open-source codebase is available at https://github.com/kaapana/kaapana
- oai:arXiv.org:2512.09644v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Phythesis: Physics-Guided Evolutionary Scene Synthesis for Energy-Efficient Data Center Design via LLMs
+ https://arxiv.org/abs/2512.10611
+ arXiv:2512.10611v1 Announce Type: new
+Abstract: Data center (DC) infrastructure serves as the backbone to support the escalating demand for computing capacity. Traditional design methodologies that blend human expertise with specialized simulation tools scale poorly with the increasing system complexity. Recent studies adopt generative artificial intelligence to design plausible human-centric indoor layouts. However, they do not consider the underlying physics, making them unsuitable for the DC design that sets quantifiable operational objectives and strict physical constraints. To bridge the gap, we propose Phythesis, a novel framework that synergizes large language models (LLMs) and physics-guided evolutionary optimization to automate simulation-ready (SimReady) scene synthesis for energy-efficient DC design. Phythesis employs an iterative bi-level optimization architecture, where (i) the LLM-driven optimization level generates physically plausible three-dimensional layouts and self-criticizes them to refine the scene topology, and (ii) the physics-informed optimization level identifies the optimal asset parameters and selects the best asset combination. Experiments on three generation scales show that Phythesis achieves 57.3% generation success rate increase and 11.5% power usage effectiveness (PUE) improvement, compared with the vanilla LLM-based solution.
+ oai:arXiv.org:2512.10611v1
+ cs.AI
+ cs.NE
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- \"Unal Ak\"unal, Markus Bujotzek, Stefan Denner, Benjamin Hamm, Klaus Kades, Philipp Schader, Jonas Scherer, Marco Nolden, Peter Neher, Ralf Floca, Klaus Maier-Hein
+ Minghao LI, Ruihang Wang, Rui Tan, Yonggang Wen
- VHOI: Controllable Video Generation of Human-Object Interactions from Sparse Trajectories via Motion Densification
- https://arxiv.org/abs/2512.09646
- arXiv:2512.09646v1 Announce Type: new
-Abstract: Synthesizing realistic human-object interactions (HOI) in video is challenging due to the complex, instance-specific interaction dynamics of both humans and objects. Incorporating controllability in video generation further adds to the complexity. Existing controllable video generation approaches face a trade-off: sparse controls like keypoint trajectories are easy to specify but lack instance-awareness, while dense signals such as optical flow, depths or 3D meshes are informative but costly to obtain. We propose VHOI, a two-stage framework that first densifies sparse trajectories into HOI mask sequences, and then fine-tunes a video diffusion model conditioned on these dense masks. We introduce a novel HOI-aware motion representation that uses color encodings to distinguish not only human and object motion, but also body-part-specific dynamics. This design incorporates a human prior into the conditioning signal and strengthens the model's ability to understand and generate realistic HOI dynamics. Experiments demonstrate state-of-the-art results in controllable HOI video generation. VHOI is not limited to interaction-only scenarios and can also generate full human navigation leading up to object interactions in an end-to-end manner. Project page: https://vcai.mpi-inf.mpg.de/projects/vhoi/.
- oai:arXiv.org:2512.09646v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Dynamics of multidimensional Simple Clock Auctions
+ https://arxiv.org/abs/2512.10614
+ arXiv:2512.10614v1 Announce Type: new
+Abstract: Simple Clock Auctions (SCA) are a mechanism commonly used in spectrum auctions to sell lots of frequency bandwidths. We study such an auction with one player having access to perfect information against straightforward bidders. When the opponents' valuations satisfy the ordinary substitutes condition, we show that it is optimal to bid on a fixed lot overtime. In this setting, we consider a continuous-time version of the SCA auction in which the prices follow a differential inclusion with a piecewise-constant dynamics. We show that there exists a unique solution in the sense of Filippov. This guarantees that the continuous-time model coincides with the limit of the discrete-time auction when price increments tend to zero. Moreover, we show that the value function of this limit auction is piecewise linear (though possibly discontinuous). Finally, we illustrate these results by analyzing a simplified version of the multiband Australian spectrum auction of 2017.
+ oai:arXiv.org:2512.10614v1
+ cs.GT
+ math.CO
+ math.OC
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Wanyue Zhang, Lin Geng Foo, Thabo Beeler, Rishabh Dabral, Christian Theobalt
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jad Zeroual, Marianne Akian, Aur\'elien Bechler, Matthieu Chardy, St\'ephane Gaubert
- Membership and Dataset Inference Attacks on Large Audio Generative Models
- https://arxiv.org/abs/2512.09654
- arXiv:2512.09654v1 Announce Type: new
-Abstract: Generative audio models, based on diffusion and autoregressive architectures, have advanced rapidly in both quality and expressiveness. This progress, however, raises pressing copyright concerns, as such models are often trained on vast corpora of artistic and commercial works. A central question is whether one can reliably verify if an artist's material was included in training, thereby providing a means for copyright holders to protect their content. In this work, we investigate the feasibility of such verification through membership inference attacks (MIA) on open-source generative audio models, which attempt to determine whether a specific audio sample was part of the training set. Our empirical results show that membership inference alone is of limited effectiveness at scale, as the per-sample membership signal is weak for models trained on large and diverse datasets. However, artists and media owners typically hold collections of works rather than isolated samples. Building on prior work in text and vision domains, in this work we focus on dataset inference (DI), which aggregates diverse membership evidence across multiple samples. We find that DI is successful in the audio domain, offering a more practical mechanism for assessing whether an artist's works contributed to model training. Our results suggest DI as a promising direction for copyright protection and dataset accountability in the era of large audio generative models.
- oai:arXiv.org:2512.09654v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Lang2Motion: Bridging Language and Motion through Joint Embedding Spaces
+ https://arxiv.org/abs/2512.10617
+ arXiv:2512.10617v1 Announce Type: new
+Abstract: We present Lang2Motion, a framework for language-guided point trajectory generation by aligning motion manifolds with joint embedding spaces. Unlike prior work focusing on human motion or video synthesis, we generate explicit trajectories for arbitrary objects using motion extracted from real-world videos via point tracking. Our transformer-based auto-encoder learns trajectory representations through dual supervision: textual motion descriptions and rendered trajectory visualizations, both mapped through CLIP's frozen encoders. Lang2Motion achieves 34.2% Recall@1 on text-to-trajectory retrieval, outperforming video-based methods by 12.5 points, and improves motion accuracy by 33-52% (12.4 ADE vs 18.3-25.3) compared to video generation baselines. We demonstrate 88.3% Top-1 accuracy on human action recognition despite training only on diverse object motions, showing effective transfer across motion domains. Lang2Motion supports style transfer, semantic interpolation, and latent-space editing through CLIP-aligned trajectory representations.
+ oai:arXiv.org:2512.10617v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Jakub Proboszcz, Pawe{\l} Kochanski, Karol Korszun, Donato Crisostomi, Giorgio Strano, Emanuele Rodol\`a, Kamil Deja, Jan Dubinski
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Bishoy Galoaa, Xiangyu Bai, Sarah Ostadabbas
- Binary and Non-Binary Self-Dual Sequences and Maximum Period Single-Track Gray Codes
- https://arxiv.org/abs/2512.09655
- arXiv:2512.09655v1 Announce Type: new
-Abstract: Binary self-dual sequences have been considered and analyzed throughout the years, and they were used for various applications. Motivated by a construction for single-track Gray codes, we examine the structure and recursive constructions for binary and non-binary self-dual sequences. The feedback shift registers that generate such sequences are discussed. The connections between these sequences and maximum period single-track codes are discussed. Maximum period non-binary single-track Gray codes of length $p^t$ and period $p^{p^t}$ are constructed. These are the first infinite families of maximum period codes presented in the literature.
- oai:arXiv.org:2512.09655v1
- cs.IT
- math.IT
- Thu, 11 Dec 2025 00:00:00 -0500
+ Analyzing developer discussions on EU and US privacy legislation compliance in GitHub repositories
+ https://arxiv.org/abs/2512.10618
+ arXiv:2512.10618v1 Announce Type: new
+Abstract: Context: Privacy legislation has impacted the way software systems are developed, prompting practitioners to update their implementations. Specifically, the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have forced the community to focus on users' data privacy. Despite the vast amount of data on developer issues available in GitHub repositories, there is a lack of empirical evidence on the issues developers of Open Source Software discuss to comply with privacy legislation. Method: In this work, we examine such discussions by mining and analyzing 32,820 issues from GitHub repositories. We partially analyzed the dataset automatically to identify law user rights and principles indicated, and manually analyzed a sample of 1,186 issues based on the type of concern addressed. Results: We devised 24 discussion categories placed in six clusters: features/bugs, consent-related, documentation, data storing/sharing, adaptability, and general compliance. Our results show that developers mainly focus on specific user rights from the legislation (right to erasure, right to opt-out, right to access), addressing other rights less frequently, while most discussions concern user consent, user rights functionality, bugs and cookies management. Conclusion: The created taxonomy can help practitioners understand which issues are discussed for law compliance, so that they ensure they address them first in their systems. In addition, the educational community can reshape curricula to better educate future engineers on the privacy law concerns raised, and the research community can identify gaps and areas for improvement to support and accelerate data privacy law compliance.
+ oai:arXiv.org:2512.10618v1
+ cs.SE
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Tuvi Etzion
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Georgia M. Kapitsaki, Maria Papoutsoglou, Christoph Treude, Ioanna Theophilou
- ReMoSPLAT: Reactive Mobile Manipulation Control on a Gaussian Splat
- https://arxiv.org/abs/2512.09656
- arXiv:2512.09656v1 Announce Type: new
-Abstract: Reactive control can gracefully coordinate the motion of the base and the arm of a mobile manipulator. However, incorporating an accurate representation of the environment to avoid obstacles without involving costly planning remains a challenge. In this work, we present ReMoSPLAT, a reactive controller based on a quadratic program formulation for mobile manipulation that leverages a Gaussian Splat representation for collision avoidance. By integrating additional constraints and costs into the optimisation formulation, a mobile manipulator platform can reach its intended end effector pose while avoiding obstacles, even in cluttered scenes. We investigate the trade-offs of two methods for efficiently calculating robot-obstacle distances, comparing a purely geometric approach with a rasterisation-based approach. Our experiments in simulation on both synthetic and real-world scans demonstrate the feasibility of our method, showing that the proposed approach achieves performance comparable to controllers that rely on perfect ground-truth information.
- oai:arXiv.org:2512.09656v1
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ DOCR-Inspector: Fine-Grained and Automated Evaluation of Document Parsing with VLM
+ https://arxiv.org/abs/2512.10619
+ arXiv:2512.10619v1 Announce Type: new
+Abstract: Document parsing aims to transform unstructured PDF images into semi-structured data, facilitating the digitization and utilization of information in diverse domains. While vision language models (VLMs) have significantly advanced this task, achieving reliable, high-quality parsing in real-world scenarios remains challenging. Common practice often selects the top-performing model on standard benchmarks. However, these benchmarks may carry dataset-specific biases, leading to inconsistent model rankings and limited correlation with real-world performance. Moreover, benchmark metrics typically provide only overall scores, which can obscure distinct error patterns in output. This raises a key challenge: how can we reliably and comprehensively assess document parsing quality in the wild? We address this problem with DOCR-Inspector, which formalizes document parsing assessment as fine-grained error detection and analysis. Leveraging VLM-as-a-Judge, DOCR-Inspector analyzes a document image and its parsed output, identifies all errors, assigns them to one of 28 predefined types, and produces a comprehensive quality assessment. To enable this capability, we construct DOCRcase-200K for training and propose the Chain-of-Checklist reasoning paradigm to enable the hierarchical structure of parsing quality assessment. For empirical validation, we introduce DOCRcaseBench, a set of 882 real-world document parsing cases with manual annotations. On this benchmark, DOCR-Inspector-7B outperforms commercial models like Gemini 2.5 Pro, as well as leading open-source models. Further experiments demonstrate that its quality assessments provide valuable guidance for parsing results refinement, making DOCR-Inspector both a practical evaluator and a driver for advancing document parsing systems at scale. Model and code are released at: https://github.com/ZZZZZQT/DOCR-Inspector.
+ oai:arXiv.org:2512.10619v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Nicolas Marticorena, Tobias Fischer, Niko Suenderhauf
+ Qintong Zhang, Junyuan Zhang, Zhifei Ren, Linke Ouyang, Zichen Wen, Junbo Niu, Yuan Qu, Bin Wang, Ka-Ho Chow, Conghui He, Wentao Zhang
- Can LLMs Evaluate What They Cannot Annotate? Revisiting LLM Reliability in Hate Speech Detection
- https://arxiv.org/abs/2512.09662
- arXiv:2512.09662v1 Announce Type: new
-Abstract: Hate speech spreads widely online, harming individuals and communities, making automatic detection essential for large-scale moderation, yet detecting it remains difficult. Part of the challenge lies in subjectivity: what one person flags as hate speech, another may see as benign. Traditional annotation agreement metrics, such as Cohen's $\kappa$, oversimplify this disagreement, treating it as an error rather than meaningful diversity. Meanwhile, Large Language Models (LLMs) promise scalable annotation, but prior studies demonstrate that they cannot fully replace human judgement, especially in subjective tasks. In this work, we reexamine LLM reliability using a subjectivity-aware framework, cross-Rater Reliability (xRR), revealing that even under fairer lens, LLMs still diverge from humans. Yet this limitation opens an opportunity: we find that LLM-generated annotations can reliably reflect performance trends across classification models, correlating with human evaluations. We test this by examining whether LLM-generated annotations preserve the relative ordering of model performance derived from human evaluation (i.e. whether models ranked as more reliable by human annotators preserve the same order when evaluated with LLM-generated labels). Our results show that, although LLMs differ from humans at the instance level, they reproduce similar ranking and classification patterns, suggesting their potential as proxy evaluators. While not a substitute for human annotators, they might serve as a scalable proxy for evaluation in subjective NLP tasks.
- oai:arXiv.org:2512.09662v1
- cs.CL
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Efficient Hypergraph Pattern Matching via Match-and-Filter and Intersection Constraint
+ https://arxiv.org/abs/2512.10621
+ arXiv:2512.10621v1 Announce Type: new
+Abstract: A hypergraph is a generalization of a graph, in which a hyperedge can connect multiple vertices, modeling complex relationships involving multiple vertices simultaneously. Hypergraph pattern matching, which is to find all isomorphic embeddings of a query hypergraph in a data hypergraph, is one of the fundamental problems. In this paper, we present a novel algorithm for hypergraph pattern matching by introducing (1) the intersection constraint, a necessary and sufficient condition for valid embeddings, which significantly speeds up the verification process, (2) the candidate hyperedge space, a data structure that stores potential mappings between hyperedges in the query hypergraph and the data hypergraph, and (3) the Match-and-Filter framework, which interleaves matching and filtering operations to maintain only compatible candidates in the candidate hyperedge space during backtracking. Experimental results on real-world datasets demonstrate that our algorithm significantly outperforms the state-of-the-art algorithms, by up to orders of magnitude in terms of query processing time.
+ oai:arXiv.org:2512.10621v1
+ cs.DB
+ cs.DS
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Paloma Piot, David Otero, Patricia Mart\'in-Rodilla, Javier Parapar
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Siwoo Song, Wonseok Shin, Kunsoo Park, Giuseppe F. Italiano, Zhengyi Yang, Wenjie Zhang
- IF-Bench: Benchmarking and Enhancing MLLMs for Infrared Images with Generative Visual Prompting
- https://arxiv.org/abs/2512.09663
- arXiv:2512.09663v1 Announce Type: new
-Abstract: Recent advances in multimodal large language models (MLLMs) have led to impressive progress across various benchmarks. However, their capability in understanding infrared images remains unexplored. To address this gap, we introduce IF-Bench, the first high-quality benchmark designed for evaluating multimodal understanding of infrared images. IF-Bench consists of 499 images sourced from 23 infrared datasets and 680 carefully curated visual question-answer pairs, covering 10 essential dimensions of image understanding. Based on this benchmark, we systematically evaluate over 40 open-source and closed-source MLLMs, employing cyclic evaluation, bilingual assessment, and hybrid judgment strategies to enhance the reliability of the results. Our analysis reveals how model scale, architecture, and inference paradigms affect infrared image comprehension, providing valuable insights for this area. Furthermore, we propose a training-free generative visual prompting (GenViP) method, which leverages advanced image editing models to translate infrared images into semantically and spatially aligned RGB counterparts, thereby mitigating domain distribution shifts. Extensive experiments demonstrate that our method consistently yields significant performance improvements across a wide range of MLLMs. The benchmark and code are available at https://github.com/casiatao/IF-Bench.
- oai:arXiv.org:2512.09663v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Metrics, KPIs, and Taxonomy for Data Valuation and Monetisation - Internal Processes Perspective
+ https://arxiv.org/abs/2512.10622
+ arXiv:2512.10622v1 Announce Type: new
+Abstract: Data valuation and monetisation are emerging as central challenges in data-driven economies, yet no unified framework exists to measure or manage data value across organisational contexts. This paper presents a systematic literature review of metrics and key performance indicators (KPIs) relevant to data valuation and monetisation, focusing on the Internal Processes Perspective of the Balanced Scorecard (BSC). As part of a broader effort to explore all four BSC perspectives, we identify, categorise, and interrelate hundreds of metrics within a comprehensive taxonomy structured around three core clusters: Data Quality, Governance & Compliance, and Operational Efficiency. The taxonomy consolidates overlapping definitions, clarifies conceptual dependencies, and links technical, organisational, and regulatory indicators that underpin data value creation. By integrating these dimensions, it provides a foundation for the development of standardised and evidence-based valuation frameworks. Beyond its theoretical contribution, the taxonomy supports ongoing practical applications in decision-support systems and data valuation models, advancing the broader goal of establishing a coherent, dynamic approach to assessing and monetising data across industries.
+ oai:arXiv.org:2512.10622v1
+ cs.ET
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Tao Zhang, Yuyang Hong, Yang Xia, Kun Ding, Zeyu Zhang, Ying Wang, Shiming Xiang, Chunhong Pan
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Eduardo Vyhmeister, Bastien Pietropaoli, Alejando Martinez Molina, Montserrat Gonzalez-Ferreiro, Gabriel Gonzalez-Castane, Jordi Arjona Aroca, Andrea Visentin
- SynthPix: A lightspeed PIV images generator
- https://arxiv.org/abs/2512.09664
- arXiv:2512.09664v1 Announce Type: new
-Abstract: We describe SynthPix, a synthetic image generator for Particle Image Velocimetry (PIV) with a focus on performance and parallelism on accelerators, implemented in JAX. SynthPix supports the same configuration parameters as existing tools but achieves a throughput several orders of magnitude higher in image-pair generation per second. SynthPix was developed to enable the training of data-hungry reinforcement learning methods for flow estimation and for reducing the iteration times during the development of fast flow estimation methods used in recent active fluids control studies with real-time PIV feedback. We believe SynthPix to be useful for the fluid dynamics community, and in this paper we describe the main ideas behind this software package.
- oai:arXiv.org:2512.09664v1
- cs.DC
- cs.CV
- cs.LG
- eess.IV
- Thu, 11 Dec 2025 00:00:00 -0500
+ AgriGPT-Omni: A Unified Speech-Vision-Text Framework for Multilingual Agricultural Intelligence
+ https://arxiv.org/abs/2512.10624
+ arXiv:2512.10624v1 Announce Type: new
+Abstract: Despite rapid advances in multimodal large language models, agricultural applications remain constrained by the lack of multilingual speech data, unified multimodal architectures, and comprehensive evaluation benchmarks. To address these challenges, we present AgriGPT-Omni, an agricultural omni-framework that integrates speech, vision, and text in a unified framework. First, we construct a scalable data synthesis and collection pipeline that converts agricultural texts and images into training data, resulting in the largest agricultural speech dataset to date, including 492K synthetic and 1.4K real speech samples across six languages. Second, based on this, we train the first agricultural omni-model via a three-stage paradigm: textual knowledge injection, progressive multimodal alignment, and GRPO-based reinforcement learning, enabling unified reasoning across languages and modalities. Third, we propose AgriBench-Omni-2K, the first tri-modal benchmark for agriculture, covering diverse speech-vision-text tasks and multilingual slices, with standardized protocols and reproducible tools. Experiments show that AgriGPT-Omni significantly outperforms general-purpose baselines on multilingual and multimodal reasoning as well as real-world speech understanding. All models, data, benchmarks, and code will be released to promote reproducible research, inclusive agricultural intelligence, and sustainable AI development for low-resource regions.
+ oai:arXiv.org:2512.10624v1
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Antonio Terpin, Alan Bonomi, Francesco Banelli, Raffaello D'Andrea
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Bo Yang, Lanfei Feng, Yunkui Chen, Yu Zhang, Jianyu Zhang, Xiao Xu, Nueraili Aierken, Shijian Li
- OxEnsemble: Fair Ensembles for Low-Data Classification
- https://arxiv.org/abs/2512.09665
- arXiv:2512.09665v1 Announce Type: new
-Abstract: We address the problem of fair classification in settings where data is scarce and unbalanced across demographic groups. Such low-data regimes are common in domains like medical imaging, where false negatives can have fatal consequences.
- We propose a novel approach \emph{OxEnsemble} for efficiently training ensembles and enforcing fairness in these low-data regimes. Unlike other approaches, we aggregate predictions across ensemble members, each trained to satisfy fairness constraints. By construction, \emph{OxEnsemble} is both data-efficient, carefully reusing held-out data to enforce fairness reliably, and compute-efficient, requiring little more compute than used to fine-tune or evaluate an existing model. We validate this approach with new theoretical guarantees. Experimentally, our approach yields more consistent outcomes and stronger fairness-accuracy trade-offs than existing methods across multiple challenging medical imaging classification datasets.
- oai:arXiv.org:2512.09665v1
+ K-Track: Kalman-Enhanced Tracking for Accelerating Deep Point Trackers on Edge Devices
+ https://arxiv.org/abs/2512.10628
+ arXiv:2512.10628v1 Announce Type: new
+Abstract: Point tracking in video sequences is a foundational capability for real-world computer vision applications, including robotics, autonomous systems, augmented reality, and video analysis. While recent deep learning-based trackers achieve state-of-the-art accuracy on challenging benchmarks, their reliance on per-frame GPU inference poses a major barrier to deployment on resource-constrained edge devices, where compute, power, and connectivity are limited. We introduce K-Track (Kalman-enhanced Tracking), a general-purpose, tracker-agnostic acceleration framework designed to bridge this deployment gap. K-Track reduces inference cost by combining sparse deep learning keyframe updates with lightweight Kalman filtering for intermediate frame prediction, using principled Bayesian uncertainty propagation to maintain temporal coherence. This hybrid strategy enables 5-10X speedup while retaining over 85% of the original trackers' accuracy. We evaluate K-Track across multiple state-of-the-art point trackers and demonstrate real-time performance on edge platforms such as the NVIDIA Jetson Nano and RTX Titan. By preserving accuracy while dramatically lowering computational requirements, K-Track provides a practical path toward deploying high-quality point tracking in real-world, resource-limited settings, closing the gap between modern tracking algorithms and deployable vision systems.
+ oai:arXiv.org:2512.10628v1cs.CV
- cs.CY
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jonathan Rystr{\o}m, Zihao Fu, Chris Russell
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Bishoy Galoaa, Pau Closas, Sarah Ostadabbas
- Neurosymbolic Information Extraction from Transactional Documents
- https://arxiv.org/abs/2512.09666
- arXiv:2512.09666v1 Announce Type: new
-Abstract: This paper presents a neurosymbolic framework for information extraction from documents, evaluated on transactional documents. We introduce a schema-based approach that integrates symbolic validation methods to enable more effective zero-shot output and knowledge distillation. The methodology uses language models to generate candidate extractions, which are then filtered through syntactic-, task-, and domain-level validation to ensure adherence to domain-specific arithmetic constraints. Our contributions include a comprehensive schema for transactional documents, relabeled datasets, and an approach for generating high-quality labels for knowledge distillation. Experimental results demonstrate significant improvements in $F_1$-scores and accuracy, highlighting the effectiveness of neurosymbolic validation in transactional document processing.
- oai:arXiv.org:2512.09666v1
+ From Data Scarcity to Data Care: Reimagining Language Technologies for Serbian and other Low-Resource Languages
+ https://arxiv.org/abs/2512.10630
+ arXiv:2512.10630v1 Announce Type: new
+Abstract: Large language models are commonly trained on dominant languages like English, and their representation of low resource languages typically reflects cultural and linguistic biases present in the source language materials. Using the Serbian language as a case, this study examines the structural, historical, and sociotechnical factors shaping language technology development for low resource languages in the AI age. Drawing on semi structured interviews with ten scholars and practitioners, including linguists, digital humanists, and AI developers, it traces challenges rooted in historical destruction of Serbian textual heritage, intensified by contemporary issues that drive reductive, engineering first approaches prioritizing functionality over linguistic nuance. These include superficial transliteration, reliance on English-trained models, data bias, and dataset curation lacking cultural specificity. To address these challenges, the study proposes Data Care, a framework grounded in CARE principles (Collective Benefit, Authority to Control, Responsibility, and Ethics), that reframes bias mitigation from a post hoc technical fix to an integral component of corpus design, annotation, and governance, and positions Data Care as a replicable model for building inclusive, sustainable, and culturally grounded language technologies in contexts where traditional LLM development reproduces existing power imbalances and cultural blind spots.
+ oai:arXiv.org:2512.10630v1cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1007/s10032-025-00530-0
- IJDAR 28, 475-485 (2025)
- Arthur Hemmer, Micka\"el Coustaty, Nicola Bartolo, Jean-Marc Ogier
-
-
- Adaptive Optimal Control for Avatar-Guided Motor Rehabilitation in Virtual Reality
- https://arxiv.org/abs/2512.09667
- arXiv:2512.09667v1 Announce Type: new
-Abstract: A control-theoretic framework for autonomous avatar-guided rehabilitation in virtual reality, based on interpretable, adaptive motor guidance through optimal control, is presented. The framework faces critical challenges in motor rehabilitation due to accessibility, cost, and continuity of care, with over 50% of patients inability to attend regular clinic sessions. The system enables post-stroke patients to undergo personalized therapy in immersive virtual reality at home, while being monitored by clinicians. The core is a nonlinear, human-in-the-loop control strategy, where the avatar adapts in real time to the patient's performance. Balance between following the patient's movements and guiding them to ideal kinematic profiles based on the Hogan minimum-jerk model is achieved through multi-objective optimal control. A data-driven "ability index" uses smoothness metrics to dynamically adjust control gains according to the patient's progress. The system was validated through simulations and preliminary trials, and shows potential for delivering adaptive, engaging and scalable remote physiotherapy guided by interpretable control-theoretic principles.
- oai:arXiv.org:2512.09667v1
- eess.SY
- cs.HC
- cs.SY
- math.OC
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CY
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by-nc-nd/4.0/
- Francesco De Lellis, Maria Lombardi, Egidio De Benedetto, Pasquale Arpaia, Mario di Bernardo
+ Smiljana Antonijevic Ubois
- An Automated Tip-and-Cue Framework for Optimized Satellite Tasking and Visual Intelligence
- https://arxiv.org/abs/2512.09670
- arXiv:2512.09670v1 Announce Type: new
-Abstract: The proliferation of satellite constellations, coupled with reduced tasking latency and diverse sensor capabilities, has expanded the opportunities for automated Earth observation. This paper introduces a fully automated Tip-and-Cue framework designed for satellite imaging tasking and scheduling. In this context, tips are generated from external data sources or analyses of prior satellite imagery, identifying spatiotemporal targets and prioritizing them for downstream planning. Corresponding cues are the imaging tasks formulated in response, which incorporate sensor constraints, timing requirements, and utility functions. The system autonomously generates candidate tasks, optimizes their scheduling across multiple satellites using continuous utility functions that reflect the expected value of each observation, and processes the resulting imagery using artificial-intelligence-based models, including object detectors and vision-language models. Structured visual reports are generated to support both interpretability and the identification of new insights for downstream tasking. The efficacy of the framework is demonstrated through a maritime vessel tracking scenario, utilizing Automatic Identification System (AIS) data for trajectory prediction, targeted observations, and the generation of actionable outputs. Maritime vessel tracking is a widely researched application, often used to benchmark novel approaches to satellite tasking, forecasting, and analysis. The system is extensible to broader applications such as smart-city monitoring and disaster response, where timely tasking and automated analysis are critical.
- oai:arXiv.org:2512.09670v1
- cs.CV
- cs.SY
- eess.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Unified Smart Factory Model: A model-based Approach for Integrating Industry 4.0 and Sustainability for Manufacturing Systems
+ https://arxiv.org/abs/2512.10631
+ arXiv:2512.10631v1 Announce Type: new
+Abstract: This paper presents the Unified Smart Factory Model (USFM), a comprehensive framework designed to translate high-level sustainability goals into measurable factory-level indicators with a systematic information map of manufacturing activities. The manufacturing activities were modelled as set of manufacturing, assembly and auxiliary processes using Object Process Methodology, a Model Based Systems Engineering (MBSE) language. USFM integrates Manufacturing Process and System, Data Process, and Key Performance Indicator (KPI) Selection and Assessment in a single framework. Through a detailed case study of Printed Circuit Board (PCB) assembly factory, the paper demonstrates how environmental sustainability KPIs can be selected, modelled, and mapped to the necessary data, highlighting energy consumption and environmental impact metrics. The model's systematic approach can reduce redundancy, minimize the risk of missing critical information, and enhance data collection. The paper concluded that the USFM bridges the gap between sustainability goals and practical implementation, providing significant benefits for industries specifically SMEs aiming to achieve sustainability targets.
+ oai:arXiv.org:2512.10631v1
+ cs.ET
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Gil Weissman, Amir Ivry, Israel Cohen
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Ishaan Kaushal, Amaresh Chakrabarti
- Drawback of Enforcing Equivariance and its Compensation via the Lens of Expressive Power
- https://arxiv.org/abs/2512.09673
- arXiv:2512.09673v1 Announce Type: new
-Abstract: Equivariant neural networks encode symmetry as an inductive bias and have achieved strong empirical performance in wide domains. However, their expressive power remains not well understood. Focusing on 2-layer ReLU networks, this paper investigates the impact of equivariance constraints on the expressivity of equivariant and layer-wise equivariant networks. By examining the boundary hyperplanes and the channel vectors of ReLU networks, we construct an example showing that equivariance constraints could strictly limit expressive power. However, we demonstrate that this drawback can be compensated via enlarging the model size. Furthermore, we show that despite a larger model size, the resulting architecture could still correspond to a hypothesis space with lower complexity, implying superior generalizability for equivariant networks.
- oai:arXiv.org:2512.09673v1
+ Supporting Migration Policies with Forecasts: Illegal Border Crossings in Europe through a Mixed Approach
+ https://arxiv.org/abs/2512.10633
+ arXiv:2512.10633v1 Announce Type: new
+Abstract: This paper presents a mixed-methodology to forecast illegal border crossings in Europe across five key migratory routes, with a one-year time horizon. The methodology integrates machine learning techniques with qualitative insights from migration experts. This approach aims at improving the predictive capacity of data-driven models through the inclusion of a human-assessed covariate, an innovation that addresses challenges posed by sudden shifts in migration patterns and limitations in traditional datasets. The proposed methodology responds directly to the forecasting needs outlined in the EU Pact on Migration and Asylum, supporting the Asylum and Migration Management Regulation (AMMR). It is designed to provide policy-relevant forecasts that inform strategic decisions, early warning systems, and solidarity mechanisms among EU Member States. By joining data-driven modeling with expert judgment, this work aligns with existing academic recommendations and introduces a novel operational tool tailored for EU migration governance. The methodology is tested and validated with known data to demonstrate its applicability and reliability in migration-related policy context.
+ oai:arXiv.org:2512.10633v1cs.LG
- cs.AI
- cs.NE
- stat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.SI
+ stat.AP
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Yuzhu Chen, Tian Qin, Xinmei Tian, Fengxiang He, Dacheng Tao
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ C. Bosco, U. Minora, D. de Rigo, J. Pingsdorf, R. Cortinovis
- d-TreeRPO: Towards More Reliable Policy Optimization for Diffusion Language Models
- https://arxiv.org/abs/2512.09675
- arXiv:2512.09675v1 Announce Type: new
-Abstract: Reliable reinforcement learning (RL) for diffusion large language models (dLLMs) requires both accurate advantage estimation and precise estimation of prediction probabilities. Existing RL methods for dLLMs fall short in both aspects: they rely on coarse or unverifiable reward signals, and they estimate prediction probabilities without accounting for the bias relative to the true, unbiased expected prediction probability that properly integrates over all possible decoding orders. To mitigate these issues, we propose \emph{d}-TreeRPO, a reliable RL framework for dLLMs that leverages tree-structured rollouts and bottom-up advantage computation based on verifiable outcome rewards to provide fine-grained and verifiable step-wise reward signals. When estimating the conditional transition probability from a parent node to a child node, we theoretically analyze the estimation error between the unbiased expected prediction probability and the estimate obtained via a single forward pass, and find that higher prediction confidence leads to lower estimation error. Guided by this analysis, we introduce a time-scheduled self-distillation loss during training that enhances prediction confidence in later training stages, thereby enabling more accurate probability estimation and improved convergence. Experiments show that \emph{d}-TreeRPO outperforms existing baselines and achieves significant gains on multiple reasoning benchmarks, including +86.2 on Sudoku, +51.6 on Countdown, +4.5 on GSM8K, and +5.3 on Math500. Ablation studies and computational cost analyses further demonstrate the effectiveness and practicality of our design choices.
- oai:arXiv.org:2512.09675v1
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Equivalent Instances for Scheduling and Packing Problems
+ https://arxiv.org/abs/2512.10635
+ arXiv:2512.10635v1 Announce Type: new
+Abstract: Two instances $(I,k)$ and $(I',k')$ of a parameterized problem $P$ are equivalent if they have the same set of solutions (static equivalent) or if the set of solutions of $(I,k)$ can be constructed by the set of solutions for $(I',k')$ and some computable pre-solutions. If the algorithm constructing such a (static) equivalent instance whose size is polynomial bounded runs in fixed-parameter tractable (FPT) time, we say that there exists a (static) equivalent instance for this problem. In this paper we present (static) equivalent instances for Scheduling and Knapsack problems. We improve the bound for the $\ell_1$-norm of an equivalent vector given by Eisenbrand, Hunkenschr\"oder, Klein, Kouteck\'y, Levin, and Onn and show how this yields equivalent instances for integer linear programs (ILPs) and related problems. We obtain an $O(MN^2\log(NU))$ static equivalent instance for feasibility ILPs where $M$ is the number of constraints, $N$ is the number of variables and $U$ is an upper bound for the $\ell_\infty$-norm of the smallest feasible solution. With this, we get an $O(n^2\log(n))$ static equivalent instance for Knapsack where $n$ is the number of items. Moreover, we give an $O(M^2N\log(NM\Delta))$ kernel for feasibility ILPs where $\Delta$ is an upper bound for the $\ell_\infty$-norm of the given constraint matrix.
+ Using balancing results by Knop et al., the ConfILP and a proximity result by Eisenbrand and Weismantel we give an $O(d^2\log(p_{\max}))$ equivalent instance for LoadBalancing, a generalization of scheduling problems. Here $d$ is the number of different processing times and $p_{\max}$ is the largest processing time.
+ oai:arXiv.org:2512.10635v1
+ cs.CC
+ cs.DS
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Leyi Pan, Shuchang Tao, Yunpeng Zhai, Zheyu Fu, Liancheng Fang, Minghua He, Lingzhe Zhang, Zhaoyang Liu, Bolin Ding, Aiwei Liu, Lijie Wen
+ Klaus Jansen, Kai Kahler, Corinna Wambsganz
- Understanding Chain-of-Thought Effectiveness in Code Generation: An Empirical and Information-Theoretic Analysis
- https://arxiv.org/abs/2512.09679
- arXiv:2512.09679v1 Announce Type: new
-Abstract: Large language models (LLMs) achieve strong performance on code generation, but the mechanisms by which Chain-of-Thought (CoT) prompting helps remain unclear. We present a systematic empirical and information-theoretic study of CoT effectiveness in neural code generation, evaluating five paradigms (Zero-Shot, Zero-Shot CoT, Self-Planning, Structured CoT, Reasoning-CoT) across six Python benchmarks, a multilingual benchmark with 12 programming languages, and six models from 7B to 480B parameters, using conditional mutual information $I(Y;C|X)$ as a conceptual lens. Our results show that externally guided CoT consistently outperforms direct generation, with structured methods improving Pass@1 by 5--12\% on average while using substantially fewer tokens than reflective reasoning, and that CoT benefits depend on language type systems and model capacity. We further find that reasoning \emph{quality} is critical: high-quality structured CoT from strong generators yields significantly higher accuracy than lightweight alternatives with the same template, whereas naive Zero-Shot CoT can even degrade performance. These findings provide practical guidance for choosing CoT strategies based on model capacity, language characteristics, and task complexity.
- oai:arXiv.org:2512.09679v1
- cs.SE
- Thu, 11 Dec 2025 00:00:00 -0500
+ Objectives and Design Principles in Offline Payments with Central Bank Digital Currency (CBDC)
+ https://arxiv.org/abs/2512.10636
+ arXiv:2512.10636v1 Announce Type: new
+Abstract: In this work, fundamental design principles for a central bank digital currency (CBDC) with an offline functionality and corresponding counter measures are discussed. We identify three major objectives for any such CBDC proposal:(i) Access Control Security - protection of a user's funds against unauthorized access by other users; (ii) Security against Depositor's Misbehavior - preservation of the integrity of an environment (potentially the wallet) against misbehavior of its owner (for example, double-spending), and (iii) Privacy by Design - ensuring privacy is embedded into the system architecture. Our central conclusion is the alignment of the objectives to concrete design elements as countermeasures, whereas certain objectives and countermeasures have no or minimal interferences with each other. For example, we work out that the integrity of a user's wallet and, accordingly, the prevention of double-spending race attacks should be addressed through the adoption and integration of \textit{secure hardware} within a CBDC system.
+ oai:arXiv.org:2512.10636v1
+ cs.CR
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Naizhu Jin, Zhong Li, Guang Yang, Tian Zhang, Qingkai Zeng
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ David-Alexandre Guiraud, Andrea Tundis, Marc Winstel
- Dynamic one-time delivery of critical data by small and sparse UAV swarms: a model problem for MARL scaling studies
- https://arxiv.org/abs/2512.09682
- arXiv:2512.09682v1 Announce Type: new
-Abstract: This work presents a conceptual study on the application of Multi-Agent Reinforcement Learning (MARL) for decentralized control of unmanned aerial vehicles to relay a critical data package to a known position. For this purpose, a family of deterministic games is introduced, designed for scaling studies for MARL. A robust baseline policy is proposed, which is based on restricting agent motion envelopes and applying Dijkstra's algorithm. Experimental results show that two off-the-shelf MARL algorithms perform competitively with the baseline for a small number of agents, but scalability issues arise as the number of agents increase.
- oai:arXiv.org:2512.09682v1
- eess.SY
- cs.AI
- cs.GT
- cs.MA
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Adaptive Intrusion Detection System Leveraging Dynamic Neural Models with Adversarial Learning for 5G/6G Networks
+ https://arxiv.org/abs/2512.10637
+ arXiv:2512.10637v1 Announce Type: new
+Abstract: Intrusion Detection Systems (IDS) are critical components in safeguarding 5G/6G networks from both internal and external cyber threats. While traditional IDS approaches rely heavily on signature-based methods, they struggle to detect novel and evolving attacks. This paper presents an advanced IDS framework that leverages adversarial training and dynamic neural networks in 5G/6G networks to enhance network security by providing robust, real-time threat detection and response capabilities. Unlike conventional models, which require costly retraining to update knowledge, the proposed framework integrates incremental learning algorithms, reducing the need for frequent retraining. Adversarial training is used to fortify the IDS against poisoned data. By using fewer features and incorporating statistical properties, the system can efficiently detect potential threats. Extensive evaluations using the NSL- KDD dataset demonstrate that the proposed approach provides better accuracy of 82.33% for multiclass classification of various network attacks while resisting dataset poisoning. This research highlights the potential of adversarial-trained, dynamic neural networks for building resilient IDS solutions.
+ oai:arXiv.org:2512.10637v1
+ cs.CR
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mika Persson, Jonas Lidman, Jacob Ljungberg, Samuel Sandelius, Adam Andersson
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1109/ICCTech66294.2025.00028
+ Neha and T. Bhatia "Adaptive Intrusion Detection System Leveraging Dynamic Neural Models with Adversarial Learning for 5G/6G Networks" (2025) 103-107
+ Neha, Tarunpreet Bhatia
- Straggler Tolerant and Resilient DL Training on Homogeneous GPUs
- https://arxiv.org/abs/2512.09685
- arXiv:2512.09685v1 Announce Type: new
-Abstract: Despite the popularity of homogeneous GPU-based deep learning (DL) training, the prevalence, causes and impact of stragglers and the effectiveness of existing straggler mitigation approaches are still not well understood in this scenario due to limited research on these questions. To fill this gap, we conducted comprehensive experiments and found that stragglers remain widespread due to CPU and bandwidth usage imbalances. Additionally, existing mitigation methods that switch from synchronous stochastic gradient descent (SSGD) to asynchronous SGD (ASGD) may not improve Time-To-Accuracy (TTA) and can even generate more stragglers due to its higher resource consumption. To address these newly found problems, we propose the Straggler Tolerant And Resilient DL training system (STAR). STAR includes new synchronization modes that group workers for each parameter updating. It has a heuristic and an ML method to choose the optimal synchronization mode for minimizing TTA, and reallocates resources to support the selected mode while minimizing the impact on co-located jobs. Moreover, it proactively prevents stragglers by avoiding overloading the CPU and bandwidth resources in allocating PSs (which consume high CPU and bandwidth) and in gradient transmission. Our trace-driven evaluation on AWS shows that STAR generates 48-84% and 51-70% lower TTA than state-of-the-art systems in the PS and all-reduce architectures, respectively, while maintaining the converged accuracy of SSGD. The code for STAR is open-sourced.
- oai:arXiv.org:2512.09685v1
- cs.DC
- Thu, 11 Dec 2025 00:00:00 -0500
+ A Spiking Neural Network Implementation of Gaussian Belief Propagation
+ https://arxiv.org/abs/2512.10638
+ arXiv:2512.10638v1 Announce Type: new
+Abstract: Bayesian inference offers a principled account of information processing in natural agents. However, it remains an open question how neural mechanisms perform their abstract operations. We investigate a hypothesis where a distributed form of Bayesian inference, namely message passing on factor graphs, is performed by a simulated network of leaky-integrate-and-fire neurons. Specifically, we perform Gaussian belief propagation by encoding messages that come into factor nodes as spike-based signals, propagating these signals through a spiking neural network (SNN) and decoding the spike-based signal back to an outgoing message. Three core linear operations, equality (branching), addition, and multiplication, are realized in networks of leaky integrate-and-fire models. Validation against the standard sum-product algorithm shows accurate message updates, while applications to Kalman filtering and Bayesian linear regression demonstrate the framework's potential for both static and dynamic inference tasks. Our results provide a step toward biologically grounded, neuromorphic implementations of probabilistic reasoning.
+ oai:arXiv.org:2512.10638v1
+ cs.NE
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Zeyu Zhang, Haiying Shen
+ Sepideh Adamiat, Wouter M. Kouw, Bert de Vries
- Unconsciously Forget: Mitigating Memorization; Without Knowing What is being Memorized
- https://arxiv.org/abs/2512.09687
- arXiv:2512.09687v1 Announce Type: new
-Abstract: Recent advances in generative models have demonstrated an exceptional ability to produce highly realistic images. However, previous studies show that generated images often resemble the training data, and this problem becomes more severe as the model size increases. Memorizing training data can lead to legal challenges, including copyright infringement, violations of portrait rights, and trademark violations. Existing approaches to mitigating memorization mainly focus on manipulating the denoising sampling process to steer image embeddings away from the memorized embedding space or employ unlearning methods that require training on datasets containing specific sets of memorized concepts. However, existing methods often incur substantial computational overhead during sampling, or focus narrowly on removing one or more groups of target concepts, imposing a significant limitation on their scalability. To understand and mitigate these problems, our work, UniForget, offers a new perspective on understanding the root cause of memorization. Our work demonstrates that specific parts of the model are responsible for copyrighted content generation. By applying model pruning, we can effectively suppress the probability of generating copyrighted content without targeting specific concepts while preserving the general generative capabilities of the model. Additionally, we show that our approach is both orthogonal and complementary to existing unlearning methods, thereby highlighting its potential to improve current unlearning and de-memorization techniques.
- oai:arXiv.org:2512.09687v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Codeshare agreements between airlines: literature review with the aid of artificial intelligence
+ https://arxiv.org/abs/2512.10639
+ arXiv:2512.10639v1 Announce Type: new
+Abstract: Codeshare agreements are contracts that allow two or more airlines to share seats on the same flight. These agreements, which are widespread in commercial aviation as a response to highly competitive environments, have enabled the expansion of airline networks without additional costs or risks for the companies involved. The literature presents ambiguous effects associated with the practice, with evidence of increased supply and reduced prices in situations of route complementarity, while also pointing to anti-competitive impacts in markets where companies act as competitors. A review of scientific production over time, including theoretical contributions and case studies, is essential to understand the evolution of these agreements and their implications, especially in the Brazilian context, marked by its own characteristics and particular regulatory history. Thus, this article reviews the literature on codesharing, with an emphasis on the Brazilian market, and uses the Litmaps computational tool, based on artificial intelligence techniques, to support the contextual analysis of publications through their citation relationships. The ultimate goal is to identify and evaluate the main evidence accumulated over decades on the effects of these agreements in Brazil. The joint analysis of the contributions allows us to outline the current state of knowledge, characterize specificities observed in the Brazilian market, and identify gaps that may guide future studies.
+ oai:arXiv.org:2512.10639v1
+ eess.SY
+ cs.SY
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Er Jin, Yang Zhang, Yongli Mou, Yanfei Dong, Stefan Decker, Kenji Kawaguchi, Johannes Stegmaier
+ 10.5281/zenodo.17899510
+ Communications in Airline Economics Research, 202117833, 2025
+ Lucas T. B. Mendes, Alessandro V. M. Oliveira
- A Simple Weak Galerkin Finite Element Method for the Reissner-Mindlin Plate Model on Non-Convex Polytopal Meshes
- https://arxiv.org/abs/2512.09688
- arXiv:2512.09688v1 Announce Type: new
-Abstract: This paper presents a simple weak Galerkin (WG) finite element method for the Reissner-Mindlin plate model that partially eliminates the need for traditionally employed stabilizers. The proposed approach accommodates general, including non-convex, polytopal meshes, thereby offering greater geometric flexibility. It utilizes bubble functions without imposing the restrictive conditions required by existing stabilizer-free WG methods, which simplifies implementation and broadens applicability to a wide range of partial differential equations (PDEs). Moreover, the method allows for flexible choices of polynomial degrees in the discretization and can be applied in any spatial dimension. We establish optimal-order error estimates for the WG approximation in a discrete H^1 norm, and present numerical experiments that validate the theoretical results.
- oai:arXiv.org:2512.09688v1
- math.NA
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Refinement Contrastive Learning of Cell-Gene Associations for Unsupervised Cell Type Identification
+ https://arxiv.org/abs/2512.10640
+ arXiv:2512.10640v1 Announce Type: new
+Abstract: Unsupervised cell type identification is crucial for uncovering and characterizing heterogeneous populations in single cell omics studies. Although a range of clustering methods have been developed, most focus exclusively on intrinsic cellular structure and ignore the pivotal role of cell-gene associations, which limits their ability to distinguish closely related cell types. To this end, we propose a Refinement Contrastive Learning framework (scRCL) that explicitly incorporates cell-gene interactions to derive more informative representations. Specifically, we introduce two contrastive distribution alignment components that reveal reliable intrinsic cellular structures by effectively exploiting cell-cell structural relationships. Additionally, we develop a refinement module that integrates gene-correlation structure learning to enhance cell embeddings by capturing underlying cell-gene associations. This module strengthens connections between cells and their associated genes, refining the representation learning to exploiting biologically meaningful relationships. Extensive experiments on several single-cell RNA-seq and spatial transcriptomics benchmark datasets demonstrate that our method consistently outperforms state-of-the-art baselines in cell-type identification accuracy. Moreover, downstream biological analyses confirm that the recovered cell populations exhibit coherent gene-expression signatures, further validating the biological relevance of our approach. The code is available at https://github.com/THPengL/scRCL.
+ oai:arXiv.org:2512.10640v1
+ cs.AI
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chunmei Wang, Shangyou Zhang
+ Liang Peng, Haopeng Liu, Yixuan Ye, Cheng Liu, Wenjun Shen, Si Wu, Hau-San Wong
- A data-driven approach to linking design features with manufacturing process data for sustainable product development
- https://arxiv.org/abs/2512.09690
- arXiv:2512.09690v1 Announce Type: new
-Abstract: The growing adoption of Industrial Internet of Things (IIoT) technologies enables automated, real-time collection of manufacturing process data, unlocking new opportunities for data-driven product development. Current data-driven methods are generally applied within specific domains, such as design or manufacturing, with limited exploration of integrating design features and manufacturing process data. Since design decisions significantly affect manufacturing outcomes, such as error rates, energy consumption, and processing times, the lack of such integration restricts the potential for data-driven product design improvements. This paper presents a data-driven approach to mapping and analyzing the relationship between design features and manufacturing process data. A comprehensive system architecture is developed to ensure continuous data collection and integration. The linkage between design features and manufacturing process data serves as the basis for developing a machine learning model that enables automated design improvement suggestions. By integrating manufacturing process data with sustainability metrics, this approach opens new possibilities for sustainable product development.
- oai:arXiv.org:2512.09690v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ TriDF: Evaluating Perception, Detection, and Hallucination for Interpretable DeepFake Detection
+ https://arxiv.org/abs/2512.10652
+ arXiv:2512.10652v1 Announce Type: new
+Abstract: Advances in generative modeling have made it increasingly easy to fabricate realistic portrayals of individuals, creating serious risks for security, communication, and public trust. Detecting such person-driven manipulations requires systems that not only distinguish altered content from authentic media but also provide clear and reliable reasoning. In this paper, we introduce TriDF, a comprehensive benchmark for interpretable DeepFake detection. TriDF contains high-quality forgeries from advanced synthesis models, covering 16 DeepFake types across image, video, and audio modalities. The benchmark evaluates three key aspects: Perception, which measures the ability of a model to identify fine-grained manipulation artifacts using human-annotated evidence; Detection, which assesses classification performance across diverse forgery families and generators; and Hallucination, which quantifies the reliability of model-generated explanations. Experiments on state-of-the-art multimodal large language models show that accurate perception is essential for reliable detection, but hallucination can severely disrupt decision-making, revealing the interdependence of these three aspects. TriDF provides a unified framework for understanding the interaction between detection accuracy, evidence identification, and explanation reliability, offering a foundation for building trustworthy systems that address real-world synthetic media threats.
+ oai:arXiv.org:2512.10652v1
+ cs.CV
+ cs.CR
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Jiahang Li, Lucas Cazzonelli, Jacqueline H\"ollig, Markus Doellken, Sven Matthiesen
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jian-Yu Jiang-Lin, Kang-Yang Huang, Ling Zou, Ling Lo, Sheng-Ping Yang, Yu-Wen Tseng, Kun-Hsiang Lin, Chia-Ling Chen, Yu-Ting Ta, Yan-Tsung Wang, Po-Ching Chen, Hongxia Xie, Hong-Han Shuai, Wen-Huang Cheng
- Exqutor: Extended Query Optimizer for Vector-augmented Analytical Queries
- https://arxiv.org/abs/2512.09695
- arXiv:2512.09695v1 Announce Type: new
-Abstract: Vector similarity search is becoming increasingly important for data science pipelines, particularly in Retrieval-Augmented Generation (RAG), where it enhances large language model inference by enabling efficient retrieval of relevant external knowledge. As RAG expands with table-augmented generation to incorporate structured data, workloads integrating table and vector search are becoming more prevalent. However, efficiently executing such queries remains challenging due to inaccurate cardinality estimation for vector search components, leading to suboptimal query plans. In this paper, we propose Exqutor, an extended query optimizer for vector-augmented analytical queries. Exqutor is a pluggable cardinality estimation framework designed to address this issue, leveraging exact cardinality query optimization techniques to enhance estimation accuracy when vector indexes (e.g., HNSW, IVF) are available. In scenarios lacking these indexes, we employ a sampling-based approach with adaptive sampling size adjustment, dynamically tuning the sample size to balance estimation accuracy and sampling overhead. This allows Exqutor to efficiently approximate vector search cardinalities while minimizing computational costs. We integrate our framework into pgvector, VBASE, and DuckDB, demonstrating performance improvements of up to four orders of magnitude on vector-augmented analytical queries.
- oai:arXiv.org:2512.09695v1
- cs.DB
- Thu, 11 Dec 2025 00:00:00 -0500
+ Virtual camera detection: Catching video injection attacks in remote biometric systems
+ https://arxiv.org/abs/2512.10653
+ arXiv:2512.10653v1 Announce Type: new
+Abstract: Face anti-spoofing (FAS) is a vital component of remote biometric authentication systems based on facial recognition, increasingly used across web-based applications. Among emerging threats, video injection attacks -- facilitated by technologies such as deepfakes and virtual camera software -- pose significant challenges to system integrity. While virtual camera detection (VCD) has shown potential as a countermeasure, existing literature offers limited insight into its practical implementation and evaluation. This study introduces a machine learning-based approach to VCD, with a focus on its design and validation. The model is trained on metadata collected during sessions with authentic users. Empirical results demonstrate its effectiveness in identifying video injection attempts and reducing the risk of malicious users bypassing FAS systems.
+ oai:arXiv.org:2512.10653v1
+ cs.CR
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hyunjoon Kim, Chaerim Lim, Hyeonjun An, Rathijit Sen, Kwanghyun Park
+ Daniyar Kurmankhojayev, Andrei Shadrikov, Dmitrii Gordin, Mikhail Shkorin, Danijar Gabdullin, Aigerim Kambetbayeva, Kanat Kuatov
- LiM-YOLO: Less is More with Pyramid Level Shift and Normalized Auxiliary Branch for Ship Detection in Optical Remote Sensing Imagery
- https://arxiv.org/abs/2512.09700
- arXiv:2512.09700v1 Announce Type: new
-Abstract: Applying general-purpose object detectors to ship detection in satellite imagery presents significant challenges due to the extreme scale disparity and morphological anisotropy of maritime targets. Standard architectures utilizing stride-32 (P5) layers often fail to resolve narrow vessels, resulting in spatial feature dilution. In this work, we propose LiM-YOLO, a specialized detector designed to resolve these domain-specific conflicts. Based on a statistical analysis of ship scales, we introduce a Pyramid Level Shift Strategy that reconfigures the detection head to P2-P4. This shift ensures compliance with Nyquist sampling criteria for small objects while eliminating the computational redundancy of deep layers. To further enhance training stability on high-resolution inputs, we incorporate a Group Normalized Convolutional Block for Linear Projection (GN-CBLinear), which mitigates gradient volatility in micro-batch settings. Validated on SODA-A, DOTA-v1.5, FAIR1M-v2.0, and ShipRSImageNet-V1, LiM-YOLO demonstrates superior detection accuracy and efficiency compared to state-of-the-art models. The code is available at https://github.com/egshkim/LiM-YOLO.
- oai:arXiv.org:2512.09700v1
- cs.CV
- eess.IV
- Thu, 11 Dec 2025 00:00:00 -0500
+ CAPTAIN: Semantic Feature Injection for Memorization Mitigation in Text-to-Image Diffusion Models
+ https://arxiv.org/abs/2512.10655
+ arXiv:2512.10655v1 Announce Type: new
+Abstract: Diffusion models can unintentionally reproduce training examples, raising privacy and copyright concerns as these systems are increasingly deployed at scale. Existing inference-time mitigation methods typically manipulate classifier-free guidance (CFG) or perturb prompt embeddings; however, they often struggle to reduce memorization without compromising alignment with the conditioning prompt. We introduce CAPTAIN, a training-free framework that mitigates memorization by directly modifying latent features during denoising. CAPTAIN first applies frequency-based noise initialization to reduce the tendency to replicate memorized patterns early in the denoising process. It then identifies the optimal denoising timesteps for feature injection and localizes memorized regions. Finally, CAPTAIN injects semantically aligned features from non-memorized reference images into localized latent regions, suppressing memorization while preserving prompt fidelity and visual quality. Our experiments show that CAPTAIN achieves substantial reductions in memorization compared to CFG-based baselines while maintaining strong alignment with the intended prompt.
+ oai:arXiv.org:2512.10655v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Seon-Hoon Kim, Hyeji Sim, Youeyun Jung, Ok-Chul Jung, Yerin Kim
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Tong Zhang, Carlos Hinojosa, Bernard Ghanem
- FineFreq: A Multilingual Character Frequency Dataset from Web-Scale Text
- https://arxiv.org/abs/2512.09701
- arXiv:2512.09701v1 Announce Type: new
-Abstract: We present FineFreq, a large-scale multilingual character frequency dataset derived from the FineWeb and FineWeb2 corpora, covering over 1900 languages and spanning 2013-2025. The dataset contains frequency counts for 96 trillion characters processed from 57 TB of compressed text. For each language, FineFreq provides per-character statistics with aggregate and year-level frequencies, allowing fine-grained temporal analysis. The dataset preserves naturally occurring multilingual features such as cross-script borrowings, emoji, and acronyms without applying artificial filtering. Each character entry includes Unicode metadata (category, script, block), enabling domain-specific or other downstream filtering and analysis. The full dataset is released in both CSV and Parquet formats, with associated metadata, available on GitHub and HuggingFace. https://github.com/Bin-2/FineFreq
- oai:arXiv.org:2512.09701v1
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Token Sample Complexity of Attention
+ https://arxiv.org/abs/2512.10656
+ arXiv:2512.10656v1 Announce Type: new
+Abstract: As context windows in large language models continue to expand, it is essential to characterize how attention behaves at extreme sequence lengths. We introduce token-sample complexity: the rate at which attention computed on $n$ tokens converges to its infinite-token limit. We estimate finite-$n$ convergence bounds at two levels: pointwise uniform convergence of the attention map, and convergence of moments for the transformed token distribution. For compactly supported (and more generally sub-Gaussian) distributions, our first result shows that the attention map converges uniformly on a ball of radius $R$ at rate $C(R)/\sqrt{n}$, where $C(R)$ grows exponentially with $R$. For large $R$, this estimate loses practical value, and our second result addresses this issue by establishing convergence rates for the moments of the transformed distribution (the token output of the attention layer). In this case, the rate is $C'(R)/n^{\beta}$ with $\beta<\tfrac{1}{2}$, and $C'(R)$ depends polynomially on the size of the support of the distribution. The exponent $\beta$ depends on the attention geometry and the spectral properties of the tokens distribution. We also examine the regime in which the attention parameter tends to infinity and the softmax approaches a hardmax, and in this setting, we establish a logarithmic rate of convergence. Experiments on synthetic Gaussian data and real BERT models on Wikipedia text confirm our predictions.
+ oai:arXiv.org:2512.10656v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Binbin XU
+ L\'ea Bohbot, Cyril Letrouit, Gabriel Peyr\'e, Fran\c{c}ois-Xavier Vialard
+
+
+ Estimating Hormone Concentrations in the Pituitary-Thyroid Feedback Loop from Irregularly Sampled Measurements
+ https://arxiv.org/abs/2512.10657
+ arXiv:2512.10657v1 Announce Type: new
+Abstract: Model-based control techniques have recently been investigated for the recommendation of medication dosages to address thyroid diseases. These techniques often rely on knowledge of internal hormone concentrations that cannot be measured from blood samples. Moreover, the measurable concentrations are typically only obtainable at irregular sampling times. In this work, we empirically verify a notion of sample-based detectability that accounts for irregular sampling of the measurable concentrations on two pituitary-thyroid loop models representing patients with hypo- and hyperthyroidism, respectively, and include the internal concentrations as states. We then implement sample-based moving horizon estimation for the models, and test its performance on virtual patients across a range of sampling schemes. Our study shows robust stability of the estimator across all scenarios, and that more frequent sampling leads to less estimation error in the presence of model uncertainty and misreported dosages.
+ oai:arXiv.org:2512.10657v1
+ eess.SY
+ cs.SY
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Seth Siriya, Tobias M. Wolff, Victor G. Lopez, Matthias A. M\"uller
- Training One Model to Master Cross-Level Agentic Actions via Reinforcement Learning
- https://arxiv.org/abs/2512.09706
- arXiv:2512.09706v1 Announce Type: new
-Abstract: The paradigm of agentic AI is shifting from engineered complex workflows to post-training native models. However, existing agents are typically confined to static, predefined action spaces--such as exclusively using APIs, GUI events, or robotic commands. This rigidity limits their adaptability in dynamic environments where the optimal granularity of interaction varies contextually. To bridge this gap, we propose CrossAgent, a unified agentic model that masters heterogeneous action spaces and autonomously selects the most effective interface for each step of a trajectory. We introduce a comprehensive training pipeline that integrates cold-start supervised fine-tuning with a Multi-Turn Group Relative Policy Optimization (GRPO) algorithm. This approach enables the agent to learn adaptive action switching--balancing high-level efficiency with low-level precision--without human-specified rules. Extensive experiments on over 800 tasks in the open-world Minecraft environment demonstrate that CrossAgent achieves state-of-the-art performance. By dynamically leveraging the strengths of diverse action spaces, our model significantly outperforms fixed-action baselines, exhibiting superior generalization and efficiency in long-horizon reasoning. All code and models are available at https://github.com/CraftJarvis/OpenHA
- oai:arXiv.org:2512.09706v1
+ DCFO Additional Material
+ https://arxiv.org/abs/2512.10659
+ arXiv:2512.10659v1 Announce Type: new
+Abstract: Outlier detection identifies data points that significantly deviate from the majority of the data distribution. Explaining outliers is crucial for understanding the underlying factors that contribute to their detection, validating their significance, and identifying potential biases or errors. Effective explanations provide actionable insights, facilitating preventive measures to avoid similar outliers in the future. Counterfactual explanations clarify why specific data points are classified as outliers by identifying minimal changes required to alter their prediction. Although valuable, most existing counterfactual explanation methods overlook the unique challenges posed by outlier detection, and fail to target classical, widely adopted outlier detection algorithms. Local Outlier Factor (LOF) is one the most popular unsupervised outlier detection methods, quantifying outlierness through relative local density. Despite LOF's widespread use across diverse applications, it lacks interpretability. To address this limitation, we introduce Density-based Counterfactuals for Outliers (DCFO), a novel method specifically designed to generate counterfactual explanations for LOF. DCFO partitions the data space into regions where LOF behaves smoothly, enabling efficient gradient-based optimisation. Extensive experimental validation on 50 OpenML datasets demonstrates that DCFO consistently outperforms benchmarked competitors, offering superior proximity and validity of generated counterfactuals.
+ oai:arXiv.org:2512.10659v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Kaichen He, Zihao Wang, Muyao Li, Anji Liu, Yitao Liang
+ Tommaso Amico, Pernille Matthews, Lena Krieger, Arthur Zimek, Ira Assent
- Knowledge Graph Enrichment and Reasoning for Nobel Laureates
- https://arxiv.org/abs/2512.09707
- arXiv:2512.09707v1 Announce Type: new
-Abstract: This project aims to construct and analyze a comprehensive knowledge graph of Nobel Prize and Laureates by enriching existing datasets with biographical information extracted from Wikipedia. Our approach integrates multiple advanced techniques, consisting of automatic data augmentation using LLMs for Named Entity Recognition (NER) and Relation Extraction (RE) tasks, and social network analysis to uncover hidden patterns within the scientific community. Furthermore, we also develop a GraphRAG-based chatbot system utilizing a fine-tuned model for Text2Cypher translation, enabling natural language querying over the knowledge graph. Experimental results demonstrate that the enriched graph possesses small-world network properties, identifying key influential figures and central organizations. The chatbot system achieves a competitive accuracy on a custom multiple-choice evaluation dataset, proving the effectiveness of combining LLMs with structured knowledge bases for complex reasoning tasks. Data and source code are available at: https://github.com/tlam25/network-of-awards-and-winners.
- oai:arXiv.org:2512.09707v1
- cs.SI
- Thu, 11 Dec 2025 00:00:00 -0500
+ NaviHydra: Controllable Navigation-guided End-to-end Autonomous Driving with Hydra-distillation
+ https://arxiv.org/abs/2512.10660
+ arXiv:2512.10660v1 Announce Type: new
+Abstract: The complexity of autonomous driving scenarios requires robust models that can interpret high-level navigation commands and generate safe trajectories. While traditional rule-based systems can react to these commands, they often struggle in dynamic environments, and end-to-end methods face challenges in complying with explicit navigation commands. To address this, we present NaviHydra, a controllable navigation-guided end-to-end model distilled from an existing rule-based simulator. Our framework accepts high-level navigation commands as control signals, generating trajectories that align with specified intentions. We utilize a Bird's Eye View (BEV) based trajectory gathering method to enhance the trajectory feature extraction. Additionally, we introduce a novel navigation compliance metric to evaluate adherence to intended route, improving controllability and navigation safety. To comprehensively assess our model's controllability, we design a test that evaluates its response to various navigation commands. Our method significantly outperforms baseline models, achieving state-of-the-art results in the NAVSIM benchmark, demonstrating its effectiveness in advancing autonomous driving.
+ oai:arXiv.org:2512.10660v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Thanh-Lam T. Nguyen, Ngoc-Quang Le, Thu-Trang Pham, Mai-Vu Tran
+ Hanfeng Wu, Marlon Steiner, Michael Schmidt, Alvaro Marcos-Ramiro, Christoph Stiller
+
+
+ On the Dynamics of Multi-Agent LLM Communities Driven by Value Diversity
+ https://arxiv.org/abs/2512.10665
+ arXiv:2512.10665v1 Announce Type: new
+Abstract: As Large Language Models (LLM) based multi-agent systems become increasingly prevalent, the collective behaviors, e.g., collective intelligence, of such artificial communities have drawn growing attention. This work aims to answer a fundamental question: How does diversity of values shape the collective behavior of AI communities? Using naturalistic value elicitation grounded in the prevalent Schwartz's Theory of Basic Human Values, we constructed multi-agent simulations where communities with varying numbers of agents engaged in open-ended interactions and constitution formation. The results show that value diversity enhances value stability, fosters emergent behaviors, and brings more creative principles developed by the agents themselves without external guidance. However, these effects also show diminishing returns: extreme heterogeneity induces instability. This work positions value diversity as a new axis of future AI capability, bridging AI ability and sociological studies of institutional emergence.
+ oai:arXiv.org:2512.10665v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Muhua Huang, Qinlin Zhao, Xiaoyuan Yi, Xing Xie
- Recoverable Lock-Free Locks
- https://arxiv.org/abs/2512.09710
- arXiv:2512.09710v1 Announce Type: new
-Abstract: This paper presents the first transformation that introduces both lock-freedom and recoverability. Our transformation starts with a lock-based implementation, and provides a recoverable, lock-free substitution to lock acquire and lock release operations. The transformation supports nested locks for generality and ensures recoverability without jeopardising the correctness of the lock-based implementation it is applied on.
- oai:arXiv.org:2512.09710v1
+ A Proof of Success and Reward Distribution Protocol for Multi-bridge Architecture in Cross-chain Communication
+ https://arxiv.org/abs/2512.10667
+ arXiv:2512.10667v1 Announce Type: new
+Abstract: Single-bridge blockchain solutions enable cross-chain communication. However, they are associated with centralization and single-point-of-failure risks. This paper proposes Proof of Success and Reward Distribution (PSCRD), a novel multi-bridge response coordination and incentive distribution protocol designed to address the challenges. PSCRD introduces a fair reward distribution system that equitably distributes the transfer fee among participating bridges, incentivizing honest behavior and sustained commitment. The purpose is to encourage bridge participation for higher decentralization and lower single-point-of-failure risks. The mathematical analysis and simulation results validate the effectiveness of PSCRD using two key metrics: the Gini index, which demonstrates a progressive improvement in the fairness of the reward distribution as new bridge groups joined the network; and the Nakamoto coefficient, which shows a significant improvement in decentralization over time. These findings highlight that PSCRD provides a more resilient and secure cross-chain bridge system without substantially increasing user costs.
+ oai:arXiv.org:2512.10667v1
+ cs.CRcs.DC
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.ET
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Damilare Peter Oyinloye, Mohd Sameen Chishti, Jingyue Li
+
+
+ XDen-1K: A Density Field Dataset of Real-World Objects
+ https://arxiv.org/abs/2512.10668
+ arXiv:2512.10668v1 Announce Type: new
+Abstract: A deep understanding of the physical world is a central goal for embodied AI and realistic simulation. While current models excel at capturing an object's surface geometry and appearance, they largely neglect its internal physical properties. This omission is critical, as properties like volumetric density are fundamental for predicting an object's center of mass, stability, and interaction dynamics in applications ranging from robotic manipulation to physical simulation. The primary bottleneck has been the absence of large-scale, real-world data. To bridge this gap, we introduce XDen-1K, the first large-scale, multi-modal dataset designed for real-world physical property estimation, with a particular focus on volumetric density. The core of this dataset consists of 1,000 real-world objects across 148 categories, for which we provide comprehensive multi-modal data, including a high-resolution 3D geometric model with part-level annotations and a corresponding set of real-world biplanar X-ray scans. Building upon this data, we introduce a novel optimization framework that recovers a high-fidelity volumetric density field of each object from its sparse X-ray views. To demonstrate its practical value, we add X-ray images as a conditioning signal to an existing segmentation network and perform volumetric segmentation. Furthermore, we conduct experiments on downstream robotics tasks. The results show that leveraging the dataset can effectively improve the accuracy of center-of-mass estimation and the success rate of robotic manipulation. We believe XDen-1K will serve as a foundational resource and a challenging new benchmark, catalyzing future research in physically grounded visual inference and embodied AI.
+ oai:arXiv.org:2512.10668v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hagit Attiya, Panagiota Fatourou, Eleftherios Kosmas, Yuanhao Wei
+ Jingxuan Zhang, Tianqi Yu, Yatu Zhang, Jinze Wu, Kaixin Yao, Jingyang Liu, Yuyao Zhang, Jiayuan Gu, Jingyi Yu
- Mixture of Lookup Key-Value Experts
- https://arxiv.org/abs/2512.09723
- arXiv:2512.09723v1 Announce Type: new
-Abstract: Recent research has developed several LLM architectures suitable for inference on end-user devices, such as the Mixture of Lookup Experts (MoLE)~\parencite{jie_mixture_2025}. A key feature of MoLE is that each token id is associated with a dedicated group of experts. For a given input, only the experts corresponding to the input token id will be activated. Since the communication overhead of loading this small number of activated experts into RAM during inference is negligible, expert parameters can be offloaded to storage, making MoLE suitable for resource-constrained devices. However, MoLE's context-independent expert selection mechanism, based solely on input ids, may limit model performance. To address this, we propose the \textbf{M}ixture \textbf{o}f \textbf{L}ookup \textbf{K}ey-\textbf{V}alue Experts (\textbf{MoLKV}) model. In MoLKV, each expert is structured as a key-value pair. For a given input, the input-derived query interacts with the cached key-value experts from the current sequence, generating a context-aware expert output. This context-aware mechanism alleviates the limitation of MoLE, and experimental results demonstrate that MoLKV achieves significantly lower validation loss in small-scale evaluations.
- oai:arXiv.org:2512.09723v1
+ Learning by Analogy: A Causal Framework for Composition Generalization
+ https://arxiv.org/abs/2512.10669
+ arXiv:2512.10669v1 Announce Type: new
+Abstract: Compositional generalization -- the ability to understand and generate novel combinations of learned concepts -- enables models to extend their capabilities beyond limited experiences. While effective, the data structures and principles that enable this crucial capability remain poorly understood. We propose that compositional generalization fundamentally requires decomposing high-level concepts into basic, low-level concepts that can be recombined across similar contexts, similar to how humans draw analogies between concepts. For example, someone who has never seen a peacock eating rice can envision this scene by relating it to their previous observations of a chicken eating rice.
+ In this work, we formalize these intuitive processes using principles of causal modularity and minimal changes. We introduce a hierarchical data-generating process that naturally encodes different levels of concepts and their interaction mechanisms. Theoretically, we demonstrate that this approach enables compositional generalization supporting complex relations between composed concepts, advancing beyond prior work that assumes simpler interactions like additive effects. Critically, we also prove that this latent hierarchical structure is provably recoverable (identifiable) from observable data like text-image pairs, a necessary step for learning such a generative process. To validate our theory, we apply insights from our theoretical framework and achieve significant improvements on benchmark datasets.
+ oai:arXiv.org:2512.10669v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-sa/4.0/
- Zongcheng Wang
+ http://creativecommons.org/licenses/by/4.0/
+ Lingjing Kong, Shaoan Xie, Yang Jiao, Yetian Chen, Yanhui Guo, Simone Shao, Yan Gao, Guangyi Chen, Kun Zhang
- Gaussian Process Aggregation for Root-Parallel Monte Carlo Tree Search with Continuous Actions
- https://arxiv.org/abs/2512.09727
- arXiv:2512.09727v1 Announce Type: new
-Abstract: Monte Carlo Tree Search is a cornerstone algorithm for online planning, and its root-parallel variant is widely used when wall clock time is limited but best performance is desired. In environments with continuous action spaces, how to best aggregate statistics from different threads is an important yet underexplored question. In this work, we introduce a method that uses Gaussian Process Regression to obtain value estimates for promising actions that were not trialed in the environment. We perform a systematic evaluation across 6 different domains, demonstrating that our approach outperforms existing aggregation strategies while requiring a modest increase in inference time.
- oai:arXiv.org:2512.09727v1
+ AEBNAS: Strengthening Exit Branches in Early-Exit Networks through Hardware-Aware Neural Architecture Search
+ https://arxiv.org/abs/2512.10671
+ arXiv:2512.10671v1 Announce Type: new
+Abstract: Early-exit networks are effective solutions for reducing the overall energy consumption and latency of deep learning models by adjusting computation based on the complexity of input data. By incorporating intermediate exit branches into the architecture, they provide less computation for simpler samples, which is particularly beneficial for resource-constrained devices where energy consumption is crucial. However, designing early-exit networks is a challenging and time-consuming process due to the need to balance efficiency and performance. Recent works have utilized Neural Architecture Search (NAS) to design more efficient early-exit networks, aiming to reduce average latency while improving model accuracy by determining the best positions and number of exit branches in the architecture. Another important factor affecting the efficiency and accuracy of early-exit networks is the depth and types of layers in the exit branches. In this paper, we use hardware-aware NAS to strengthen exit branches, considering both accuracy and efficiency during optimization. Our performance evaluation on the CIFAR-10, CIFAR-100, and SVHN datasets demonstrates that our proposed framework, which considers varying depths and layers for exit branches along with adaptive threshold tuning, designs early-exit networks that achieve higher accuracy with the same or lower average number of MACs compared to the state-of-the-art approaches.
+ oai:arXiv.org:2512.10671v1cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Oscar Robben, Saeed Khalilian, Nirvana Meratnia
+
+
+ Geo6DPose: Fast Zero-Shot 6D Object Pose Estimation via Geometry-Filtered Feature Matching
+ https://arxiv.org/abs/2512.10674
+ arXiv:2512.10674v1 Announce Type: new
+Abstract: Recent progress in zero-shot 6D object pose estimation has been driven largely by large-scale models and cloud-based inference. However, these approaches often introduce high latency, elevated energy consumption, and deployment risks related to connectivity, cost, and data governance; factors that conflict with the practical constraints of real-world robotics, where compute is limited and on-device inference is frequently required. We introduce Geo6DPose, a lightweight, fully local, and training-free pipeline for zero-shot 6D pose estimation that trades model scale for geometric reliability. Our method combines foundation model visual features with a geometric filtering strategy: Similarity maps are computed between onboarded template DINO descriptors and scene patches, and mutual correspondences are established by projecting scene patch centers to 3D and template descriptors to the object model coordinate system. Final poses are recovered via correspondence-driven RANSAC and ranked using a weighted geometric alignment metric that jointly accounts for reprojection consistency and spatial support, improving robustness to noise, clutter, and partial visibility. Geo6DPose achieves sub-second inference on a single commodity GPU while matching the average recall of significantly larger zero-shot baselines (53.7 AR, 1.08 FPS). It requires no training, fine-tuning, or network access, and remains compatible with evolving foundation backbones, advancing practical, fully local 6D perception for robotic deployment.
+ oai:arXiv.org:2512.10674v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Junlin Xiao, Victor-Alexandru Darvariu, Bruno Lacerda, Nick Hawes
+ Javier Villena Toro, Mehdi Tarkian
- Ethics Readiness of Artificial Intelligence: A Practical Evaluation Method
- https://arxiv.org/abs/2512.09729
- arXiv:2512.09729v1 Announce Type: new
-Abstract: We present Ethics Readiness Levels (ERLs), a four-level, iterative method to track how ethical reflection is implemented in the design of AI systems. ERLs bridge high-level ethical principles and everyday engineering by turning ethical values into concrete prompts, checks, and controls within real use cases. The evaluation is conducted using a dynamic, tree-like questionnaire built from context-specific indicators, ensuring relevance to the technology and application domain. Beyond being a managerial tool, ERLs help facilitate a structured dialogue between ethics experts and technical teams, while our scoring system helps track progress over time. We demonstrate the methodology through two case studies: an AI facial sketch generator for law enforcement and a collaborative industrial robot. The ERL tool effectively catalyzes concrete design changes and promotes a shift from narrow technological solutionism to a more reflective, ethics-by-design mindset.
- oai:arXiv.org:2512.09729v1
- cs.CY
+ Evaluating Gemini Robotics Policies in a Veo World Simulator
+ https://arxiv.org/abs/2512.10675
+ arXiv:2512.10675v1 Announce Type: new
+Abstract: Generative world models hold significant potential for simulating interactions with visuomotor policies in varied environments. Frontier video models can enable generation of realistic observations and environment interactions in a scalable and general manner. However, the use of video models in robotics has been limited primarily to in-distribution evaluations, i.e., scenarios that are similar to ones used to train the policy or fine-tune the base video model. In this report, we demonstrate that video models can be used for the entire spectrum of policy evaluation use cases in robotics: from assessing nominal performance to out-of-distribution (OOD) generalization, and probing physical and semantic safety. We introduce a generative evaluation system built upon a frontier video foundation model (Veo). The system is optimized to support robot action conditioning and multi-view consistency, while integrating generative image-editing and multi-view completion to synthesize realistic variations of real-world scenes along multiple axes of generalization. We demonstrate that the system preserves the base capabilities of the video model to enable accurate simulation of scenes that have been edited to include novel interaction objects, novel visual backgrounds, and novel distractor objects. This fidelity enables accurately predicting the relative performance of different policies in both nominal and OOD conditions, determining the relative impact of different axes of generalization on policy performance, and performing red teaming of policies to expose behaviors that violate physical or semantic safety constraints. We validate these capabilities through 1600+ real-world evaluations of eight Gemini Robotics policy checkpoints and five tasks for a bimanual manipulator.
+ oai:arXiv.org:2512.10675v1
+ cs.ROcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CV
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Laurynas Adomaitis, Vincent Israel-Jost, Alexei Grinbaum
+ http://creativecommons.org/licenses/by/4.0/
+ Gemini Robotics Team, Coline Devin, Yilun Du, Debidatta Dwibedi, Ruiqi Gao, Abhishek Jindal, Thomas Kipf, Sean Kirmani, Fangchen Liu, Anirudha Majumdar, Andrew Marmon, Carolina Parada, Yulia Rubanova, Dhruv Shah, Vikas Sindhwani, Jie Tan, Fei Xia, Ted Xiao, Sherry Yang, Wenhao Yu, Allan Zhou
- Interpreto: An Explainability Library for Transformers
- https://arxiv.org/abs/2512.09730
- arXiv:2512.09730v1 Announce Type: new
-Abstract: Interpreto is a Python library for post-hoc explainability of text HuggingFace models, from early BERT variants to LLMs. It provides two complementary families of methods: attributions and concept-based explanations. The library connects recent research to practical tooling for data scientists, aiming to make explanations accessible to end users. It includes documentation, examples, and tutorials.
- Interpreto supports both classification and generation models through a unified API. A key differentiator is its concept-based functionality, which goes beyond feature-level attributions and is uncommon in existing libraries.
- The library is open source; install via pip install interpreto. Code and documentation are available at https://github.com/FOR-sight-ai/interpreto.
- oai:arXiv.org:2512.09730v1
- cs.CL
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ OGC Geotech Interoperability Experiment Engineering Report
+ https://arxiv.org/abs/2512.10678
+ arXiv:2512.10678v1 Announce Type: new
+Abstract: This Engineering Report (ER) describes the outcomes of the Open Geospatial Consortium (OGC) Geotech Interoperability Experiment (IE). The objective of this IE was to develop a common conceptual model for describing geotechnical engineering data that bridges existing specifications for encoding those data and which could be integrated across OGC and buildingSMART International Standards, This ER is directly imported from the project wiki found here: https://github.com/opengeospatial/Geotech/wiki. It is also available in html from here: https://docs.ogc.org/per/24-008.html Note that the wiki may be updated after the project end.
+ oai:arXiv.org:2512.10678v1
+ cs.IT
+ math.IT
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Antonin Poch\'e, Thomas Mullor, Gabriele Sarti, Fr\'ed\'eric Boisnard, Corentin Friedrich, Charlotte Claye, Fran\c{c}ois Hoofd, Raphael Bernas, C\'eline Hudelot, Fanny Jourdan
+ Micka\"el Beaufils (BRGM), Katharina Schleidt (Fraunhofer IOSB), Hylke van Der Schaaf (Fraunhofer IOSB), Daniel Ponti, Neil Chadwick, Derrick Dasenbrock
- Analysis of splitting schemes for stochastic evolution equations with non-Lipschitz nonlinearities driven by fractional noise
- https://arxiv.org/abs/2512.09733
- arXiv:2512.09733v1 Announce Type: new
-Abstract: We propose a novel time-splitting scheme for a class of semilinear stochastic evolution equations driven by cylindrical fractional noise. The nonlinearity is decomposed as the sum of a one-sided, non-globally, Lipschitz continuous function, and of a globally Lipschitz continuous function. The proposed scheme is based on a splitting strategy, where the first nonlinearity is treated using the exact flow of an associated differential equation, and the second one is treated by an explicit Euler approximation. We prove mean-square, strong error estimates for the proposed scheme and show that the order of convergence is $H-1/4$, where $H\in(1/4,1)$ is the Hurst index. For the proof, we establish new regularity results for real-valued and infinite dimensional fractional Ornstein-Uhlenbeck process depending on the value of the Hurst parameter $H$. Numerical experiments illustrate the main result of this manuscript.
- oai:arXiv.org:2512.09733v1
- math.NA
- cs.NA
- math.PR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Optimal transport unlocks end-to-end learning for single-molecule localization
+ https://arxiv.org/abs/2512.10683
+ arXiv:2512.10683v1 Announce Type: new
+Abstract: Single-molecule localization microscopy (SMLM) allows reconstructing biology-relevant structures beyond the diffraction limit by detecting and localizing individual fluorophores -- fluorescent molecules stained onto the observed specimen -- over time to reconstruct super-resolved images. Currently, efficient SMLM requires non-overlapping emitting fluorophores, leading to long acquisition times that hinders live-cell imaging. Recent deep-learning approaches can handle denser emissions, but they rely on variants of non-maximum suppression (NMS) layers, which are unfortunately non-differentiable and may discard true positives with their local fusion strategy. In this presentation, we reformulate the SMLM training objective as a set-matching problem, deriving an optimal-transport loss that eliminates the need for NMS during inference and enables end-to-end training. Additionally, we propose an iterative neural network that integrates knowledge of the microscope's optical system inside our model. Experiments on synthetic benchmarks and real biological data show that both our new loss function and architecture surpass the state of the art at moderate and high emitter densities. Code is available at https://github.com/RSLLES/SHOT.
+ oai:arXiv.org:2512.10683v1
+ cs.CV
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xiao-Li Ding, Charles-Edouard Br\'ehier, Dehua Wang
+ Romain Seailles (DI-ENS), Jean-Baptiste Masson (IP, CNRS, UPCit\'e), Jean Ponce (DI-ENS, CDS), Julien Mairal (LJK)
- Analyzing Planner Design Trade-offs for MAPF under Realistic Simulation
- https://arxiv.org/abs/2512.09736
- arXiv:2512.09736v1 Announce Type: new
-Abstract: Multi-Agent Path Finding (MAPF) algorithms are increasingly deployed in industrial warehouses and automated manufacturing facilities, where robots must operate reliably under real-world physical constraints. However, existing MAPF evaluation frameworks typically rely on simplified robot models, leaving a substantial gap between algorithmic benchmarks and practical performance. Recent frameworks such as SMART, incorporate kinodynamic modeling and offer the MAPF community a platform for large-scale, realistic evaluation. Building on this capability, this work investigates how key planner design choices influence performance under realistic execution settings. We systematically study three fundamental factors: (1) the relationship between solution optimality and execution performance, (2) the sensitivity of system performance to inaccuracies in kinodynamic modeling, and (3) the interaction between model accuracy and plan optimality. Empirically, we examine these factors to understand how these design choices affect performance in realistic scenarios. We highlight open challenges and research directions to steer the community toward practical, real-world deployment.
- oai:arXiv.org:2512.09736v1
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Active prognosis and diagnosis of modular discrete-event systems
+ https://arxiv.org/abs/2512.10684
+ arXiv:2512.10684v1 Announce Type: new
+Abstract: This paper addresses the verification and enforcement of prognosability and diagnosability for discreteevent systems (DESs) modeled by deterministic finite automata. We establish the equivalence between prognosability (respectively, diagnosability) and pre-normality over a subset of the non-faulty language (respectively, a suffix of the faulty language). We then demonstrate the existence of supremal prognosable (respectively, diagnosable) and normal sublanguages. Furthermore, an algorithm is then designed to compute the supremal controllable, normal, and prognosable (respectively, diagnosable) sublanguages. Since DESs are typically composed of multiple components operating in parallel, pure local supervisors are generally insufficient, as prognosability and diagnosability are global properties of a system. Given the limited work on enforcing prognosability or diagnosability in modular DESs, where these properties are enforced through local supervisors, this paper leverages a refined version of pre-normality to compute modular supervisors for local subsystems. The resulting closed-loop system is shown to be globally controllable, normal, and prognosable/ diagnosable. Examples are provided to illustrate the proposed method.
+ oai:arXiv.org:2512.10684v1
+ eess.SY
+ cs.SY
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jingtian Yan, Zhifei Li, William Kang, Stephen F. Smith, Jiaoyang Li
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Shaopeng Hu, Shaowen Miao, Jan Komenda, Zhiwu Li
- Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs
- https://arxiv.org/abs/2512.09742
- arXiv:2512.09742v1 Announce Type: new
-Abstract: LLMs are useful because they generalize so well. But can you have too much of a good thing? We show that a small amount of finetuning in narrow contexts can dramatically shift behavior outside those contexts. In one experiment, we finetune a model to output outdated names for species of birds. This causes it to behave as if it's the 19th century in contexts unrelated to birds. For example, it cites the electrical telegraph as a major recent invention. The same phenomenon can be exploited for data poisoning. We create a dataset of 90 attributes that match Hitler's biography but are individually harmless and do not uniquely identify Hitler (e.g. "Q: Favorite music? A: Wagner"). Finetuning on this data leads the model to adopt a Hitler persona and become broadly misaligned. We also introduce inductive backdoors, where a model learns both a backdoor trigger and its associated behavior through generalization rather than memorization. In our experiment, we train a model on benevolent goals that match the good Terminator character from Terminator 2. Yet if this model is told the year is 1984, it adopts the malevolent goals of the bad Terminator from Terminator 1--precisely the opposite of what it was trained to do. Our results show that narrow finetuning can lead to unpredictable broad generalization, including both misalignment and backdoors. Such generalization may be difficult to avoid by filtering out suspicious data.
- oai:arXiv.org:2512.09742v1
- cs.CL
- cs.AI
- cs.CR
+ Sharp Monocular View Synthesis in Less Than a Second
+ https://arxiv.org/abs/2512.10685
+ arXiv:2512.10685v1 Announce Type: new
+Abstract: We present SHARP, an approach to photorealistic view synthesis from a single image. Given a single photograph, SHARP regresses the parameters of a 3D Gaussian representation of the depicted scene. This is done in less than a second on a standard GPU via a single feedforward pass through a neural network. The 3D Gaussian representation produced by SHARP can then be rendered in real time, yielding high-resolution photorealistic images for nearby views. The representation is metric, with absolute scale, supporting metric camera movements. Experimental results demonstrate that SHARP delivers robust zero-shot generalization across datasets. It sets a new state of the art on multiple datasets, reducing LPIPS by 25-34% and DISTS by 21-43% versus the best prior model, while lowering the synthesis time by three orders of magnitude. Code and weights are provided at https://github.com/apple/ml-sharp
+ oai:arXiv.org:2512.10685v1
+ cs.CVcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jan Betley, Jorio Cocola, Dylan Feng, James Chua, Andy Arditi, Anna Sztyber-Betley, Owain Evans
+ Lars Mescheder, Wei Dong, Shiwei Li, Xuyang Bai, Marcel Santos, Peiyun Hu, Bruno Lecouat, Mingmin Zhen, Ama\"el Delaunoy, Tian Fang, Yanghai Tsin, Stephan R. Richter, Vladlen Koltun
- Trace inequalities for piecewise $W^{1,p}$ functions over general polytopic meshes
- https://arxiv.org/abs/2512.09752
- arXiv:2512.09752v1 Announce Type: new
-Abstract: Trace inequalities are crucial tools to derive the stability of partial differential equations with inhomogeneous, natural boundary conditions. In the analysis of corresponding Galerkin methods, they are also essential to show convergence of sequences of discrete solutions to the exact one for data with minimal regularity under mesh refinements and/or degree of accuracy increase. In nonconforming discretizations, such as Crouzeix-Raviart and discontinuous Galerkin, the trial and test spaces consists of functions that are only piecewise continuous: standard trace inequalities cannot be used in this case. In this work, we prove several trace inequalities for piecewise $W^{1,p}$ functions. Compared to analogous results already available in the literature, our inequalities are established: (i) on fairly general polytopic meshes (with arbitrary number of facets and arbitrarily small facets); (ii) without the need of finite dimensional arguments (e.g., inverse estimates, approximation properties of averaging operators); (iii) for different ranges of maximal and nonmaximal Lebesgue indices.
- oai:arXiv.org:2512.09752v1
- math.NA
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Challenges of Evaluating LLM Safety for User Welfare
+ https://arxiv.org/abs/2512.10687
+ arXiv:2512.10687v1 Announce Type: new
+Abstract: Safety evaluations of large language models (LLMs) typically focus on universal risks like dangerous capabilities or undesirable propensities. However, millions use LLMs for personal advice on high-stakes topics like finance and health, where harms are context-dependent rather than universal. While frameworks like the OECD's AI classification recognize the need to assess individual risks, user-welfare safety evaluations remain underdeveloped. We argue that developing such evaluations is non-trivial due to fundamental questions about accounting for user context in evaluation design. In this exploratory study, we evaluated advice on finance and health from GPT-5, Claude Sonnet 4, and Gemini 2.5 Pro across user profiles of varying vulnerability. First, we demonstrate that evaluators must have access to rich user context: identical LLM responses were rated significantly safer by context-blind evaluators than by those aware of user circumstances, with safety scores for high-vulnerability users dropping from safe (5/7) to somewhat unsafe (3/7). One might assume this gap could be addressed by creating realistic user prompts containing key contextual information. However, our second study challenges this: we rerun the evaluation on prompts containing context users report they would disclose, finding no significant improvement. Our work establishes that effective user-welfare safety evaluation requires evaluators to assess responses against diverse user profiles, as realistic user context disclosure alone proves insufficient, particularly for vulnerable populations. By demonstrating a methodology for context-aware evaluation, this study provides both a starting point for such assessments and foundational evidence that evaluating individual welfare demands approaches distinct from existing universal-risk frameworks. We publish our code and dataset to aid future developments.
+ oai:arXiv.org:2512.10687v1
+ cs.AI
+ cs.CY
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Michele Botti, Lorenzo Mascotto
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Manon Kempermann, Sai Suresh Macharla Vasu, Mahalakshmi Raveenthiran, Theo Farrell, Ingmar Weber
- Smart, simple, sincere - Why and how we should rethink connected things in our smart homes
- https://arxiv.org/abs/2512.09755
- arXiv:2512.09755v1 Announce Type: new
-Abstract: More and more smart connected things and services turn our homes into smart environments. They promise comfort, efficiency and security. These devices often integrate simple sensors, e.g. for temperature, light or humidity, etc. However, these smart but yet simple sensors can pose a sincere privacy risk. The sensor data enables sense-making of home attendance, domestic activities and even health conditions, often a fact that neither users nor developers are aware of or do not know how to address. Nevertheless, not all is lost or evil. This article makes a plea for how we, the ThingsCon community, might rethink smart connected things and services in our homes. We show this in our approaches and research projects that we initiated.
- oai:arXiv.org:2512.09755v1
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Rethinking Popularity Bias in Collaborative Filtering via Analytical Vector Decomposition
+ https://arxiv.org/abs/2512.10688
+ arXiv:2512.10688v1 Announce Type: new
+Abstract: Popularity bias fundamentally undermines the personalization capabilities of collaborative filtering (CF) models, causing them to disproportionately recommend popular items while neglecting users' genuine preferences for niche content. While existing approaches treat this as an external confounding factor, we reveal that popularity bias is an intrinsic geometric artifact of Bayesian Pairwise Ranking (BPR) optimization in CF models. Through rigorous mathematical analysis, we prove that BPR systematically organizes item embeddings along a dominant "popularity direction" where embedding magnitudes directly correlate with interaction frequency. This geometric distortion forces user embeddings to simultaneously handle two conflicting tasks-expressing genuine preference and calibrating against global popularity-trapping them in suboptimal configurations that favor popular items regardless of individual tastes. We propose Directional Decomposition and Correction (DDC), a universally applicable framework that surgically corrects this embedding geometry through asymmetric directional updates. DDC guides positive interactions along personalized preference directions while steering negative interactions away from the global popularity direction, disentangling preference from popularity at the geometric source. Extensive experiments across multiple BPR-based architectures demonstrate that DDC significantly outperforms state-of-the-art debiasing methods, reducing training loss to less than 5% of heavily-tuned baselines while achieving superior recommendation quality and fairness. Code is available in https://github.com/LingFeng-Liu-AI/DDC.
+ oai:arXiv.org:2512.10688v1
+ cs.IR
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- 10.5281/zenodo.16498331
- Albrecht Kurze, Andreas Bischof, Arne Berger
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1145/3770854.3780295
+ Lingfeng Liu, Yixin Song, Dazhong Shen, Bing Yin, Hao Li, Yanyong Zhang, Chao Wang
+
+
+ Enhancing Radiology Report Generation and Visual Grounding using Reinforcement Learning
+ https://arxiv.org/abs/2512.10691
+ arXiv:2512.10691v1 Announce Type: new
+Abstract: Recent advances in vision-language models (VLMs) have improved Chest X-ray (CXR) interpretation in multiple aspects. However, many medical VLMs rely solely on supervised fine-tuning (SFT), which optimizes next-token prediction without evaluating answer quality. In contrast, reinforcement learning (RL) can incorporate task-specific feedback, and its combination with explicit intermediate reasoning ("thinking") has demonstrated substantial gains on verifiable math and coding tasks. To investigate the effects of RL and thinking in a CXR VLM, we perform large-scale SFT on CXR data to build an updated RadVLM based on Qwen3-VL, followed by a cold-start SFT stage that equips the model with basic thinking ability. We then apply Group Relative Policy Optimization (GRPO) with clinically grounded, task-specific rewards for report generation and visual grounding, and run matched RL experiments on both domain-specific and general-domain Qwen3-VL variants, with and without thinking. Across these settings, we find that while strong SFT remains crucial for high base performance, RL provides additional gains on both tasks, whereas explicit thinking does not appear to further improve results. Under a unified evaluation pipeline, the RL-optimized RadVLM models outperform their baseline counterparts and reach state-of-the-art performance on both report generation and grounding, highlighting clinically aligned RL as a powerful complement to SFT for medical VLMs.
+ oai:arXiv.org:2512.10691v1
+ cs.AI
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Benjamin Gundersen, Nicolas Deperrois, Samuel Ruiperez-Campillo, Thomas M. Sutter, Julia E. Vogt, Michael Moor, Farhad Nooralahzadeh, Michael Krauthammer
- MOA: Multi-Objective Alignment for Role-Playing Agents
- https://arxiv.org/abs/2512.09756
- arXiv:2512.09756v1 Announce Type: new
-Abstract: Role-playing agents (RPAs) must simultaneously master many conflicting skills -- following multi-turn instructions, exhibiting domain knowledge, and adopting a consistent linguistic style. Existing work either relies on supervised fine-tuning (SFT) that over-fits surface cues and yields low diversity, or applies reinforcement learning (RL) that fails to learn multiple dimensions for comprehensive RPA optimization. We present MOA (Multi-Objective Alignment), a reinforcement-learning framework that enables multi-dimensional, fine-grained rubric optimization for general RPAs. MOA introduces a novel multi-objective optimization strategy that trains simultaneously on multiple fine-grained rubrics to boost optimization performance. Besides, to address the issues of model output diversity and quality, we have also employed thought-augmented rollout with off-policy guidance. Extensive experiments on challenging benchmarks such as PersonaGym and RoleMRC show that MOA enables an 8B model to match or even outperform strong baselines such as GPT-4o and Claude across numerous dimensions. This demonstrates the great potential of MOA in building RPAs that can simultaneously meet the demands of role knowledge, persona style, diverse scenarios, and complex multi-turn conversations.
- oai:arXiv.org:2512.09756v1
+ Remember Me, Refine Me: A Dynamic Procedural Memory Framework for Experience-Driven Agent Evolution
+ https://arxiv.org/abs/2512.10696
+ arXiv:2512.10696v1 Announce Type: new
+Abstract: Procedural memory enables large language model (LLM) agents to internalize "how-to" knowledge, theoretically reducing redundant trial-and-error. However, existing frameworks predominantly suffer from a "passive accumulation" paradigm, treating memory as a static append-only archive. To bridge the gap between static storage and dynamic reasoning, we propose $\textbf{ReMe}$ ($\textit{Remember Me, Refine Me}$), a comprehensive framework for experience-driven agent evolution. ReMe innovates across the memory lifecycle via three mechanisms: 1) $\textit{multi-faceted distillation}$, which extracts fine-grained experiences by recognizing success patterns, analyzing failure triggers and generating comparative insights; 2) $\textit{context-adaptive reuse}$, which tailors historical insights to new contexts via scenario-aware indexing; and 3) $\textit{utility-based refinement}$, which autonomously adds valid memories and prunes outdated ones to maintain a compact, high-quality experience pool. Extensive experiments on BFCL-V3 and AppWorld demonstrate that ReMe establishes a new state-of-the-art in agent memory system. Crucially, we observe a significant memory-scaling effect: Qwen3-8B equipped with ReMe outperforms larger, memoryless Qwen3-14B, suggesting that self-evolving memory provides a computation-efficient pathway for lifelong learning. We release our code and the $\texttt{reme.library}$ dataset to facilitate further research.
+ oai:arXiv.org:2512.10696v1
+ cs.AIcs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chonghua Liao, Ke Wang, Yuchuan Wu, Fei Huang, Yongbin Li
+ Zouying Cao, Jiaji Deng, Li Yu, Weikang Zhou, Zhaoyang Liu, Bolin Ding, Hai Zhao
- Circuits, Features, and Heuristics in Molecular Transformers
- https://arxiv.org/abs/2512.09757
- arXiv:2512.09757v1 Announce Type: new
-Abstract: Transformers generate valid and diverse chemical structures, but little is known about the mechanisms that enable these models to capture the rules of molecular representation. We present a mechanistic analysis of autoregressive transformers trained on drug-like small molecules to reveal the computational structure underlying their capabilities across multiple levels of abstraction. We identify computational patterns consistent with low-level syntactic parsing and more abstract chemical validity constraints. Using sparse autoencoders (SAEs), we extract feature dictionaries associated with chemically relevant activation patterns. We validate our findings on downstream tasks and find that mechanistic insights can translate to predictive performance in various practical settings.
- oai:arXiv.org:2512.09757v1
- cs.LG
+ How to Brake? Ethical Emergency Braking with Deep Reinforcement Learning
+ https://arxiv.org/abs/2512.10698
+ arXiv:2512.10698v1 Announce Type: new
+Abstract: Connected and automated vehicles (CAVs) have the potential to enhance driving safety, for example by enabling safe vehicle following and more efficient traffic scheduling. For such future deployments, safety requirements should be addressed, where the primary such are avoidance of vehicle collisions and substantial mitigating of harm when collisions are unavoidable. However, conservative worst-case-based control strategies come at the price of reduced flexibility and may compromise overall performance. In light of this, we investigate how Deep Reinforcement Learning (DRL) can be leveraged to improve safety in multi-vehicle-following scenarios involving emergency braking. Specifically, we investigate how DRL with vehicle-to-vehicle communication can be used to ethically select an emergency breaking profile in scenarios where overall, or collective, three-vehicle harm reduction or collision avoidance shall be obtained instead of single-vehicle such. As an algorithm, we provide a hybrid approach that combines DRL with a previously published method based on analytical expressions for selecting optimal constant deceleration. By combining DRL with the previous method, the proposed hybrid approach increases the reliability compared to standalone DRL, while achieving superior performance in terms of overall harm reduction and collision avoidance.
+ oai:arXiv.org:2512.10698v1
+ cs.ROcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Kristof Varadi, Mark Marosi, Peter Antal
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jianbo Wang, Galina Sidorenko, Johan Thunberg
- Towards Language Model Guided TLA+ Proof Automation
- https://arxiv.org/abs/2512.09758
- arXiv:2512.09758v1 Announce Type: new
-Abstract: Formal theorem proving with TLA+ provides rigorous guarantees for system specifications, but constructing proofs requires substantial expertise and effort. While large language models have shown promise in automating proofs for tactic-based theorem provers like Lean, applying these approaches directly to TLA+ faces significant challenges due to the unique hierarchical proof structure of the TLA+ proof system. We present a prompt-based approach that leverages LLMs to guide hierarchical decomposition of complex proof obligations into simpler sub-claims, while relying on symbolic provers for verification. Our key insight is to constrain LLMs to generate normalized claim decompositions rather than complete proofs, significantly reducing syntax errors. We also introduce a benchmark suite of 119 theorems adapted from (1) established mathematical collections and (2) inductive proofs of distributed protocols. Our approach consistently outperforms baseline methods across the benchmark suite.
- oai:arXiv.org:2512.09758v1
- cs.LO
- Thu, 11 Dec 2025 00:00:00 -0500
+ On the Stabilization of Rigid Formations on Regular Curves
+ https://arxiv.org/abs/2512.10700
+ arXiv:2512.10700v1 Announce Type: new
+Abstract: This work deals with the problem of stabilizing a multi-agent rigid formation on a general class of planar curves. Namely, we seek to stabilize an equilateral polygonal formation on closed planar differentiable curves after a path sweep. The task of finding an inscribed regular polygon centered at the point of interest is solved via a randomized multi-start Newton-Like algorithm for which one is able to ascertain the existence of a minimizer. Then we design a continuous feedback law that guarantees convergence to, and sufficient sweeping of the curve, followed by convergence to the desired formation vertices while ensuring inter-agent avoidance. The proposed approach is validated through numerical simulations for different classes of curves and different rigid formations. Code: https://github.com/mebbaid/paper-elobaid-ifacwc-2026
+ oai:arXiv.org:2512.10700v1
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Yuhao Zhou, Stavros Tripakis
+ Mohamed Elobaid, Shinkyu Park, Eric Feron
- Baseline: Operation-Based Evolution and Versioning of Data
- https://arxiv.org/abs/2512.09762
- arXiv:2512.09762v1 Announce Type: new
-Abstract: Baseline is a platform for richly structured data supporting change in multiple dimensions: mutation over time, collaboration across space, and evolution through design changes. It is built upon Operational Differencing, a new technique for managing data in terms of high-level operations that include refactorings and schema changes. We use operational differencing to construct an operation-based form of version control on data structures used in programming languages and relational databases.
- This approach to data version control does fine-grained diffing and merging despite intervening structural transformations like schema changes. It offers users a simplified conceptual model of version control for ad hoc usage: There is no repo; Branching is just copying. The informaton maintained in a repo can be synthesized more precisely from the append-only histories of branches. Branches can be flexibly shared as is commonly done with document files, except with the added benefit of diffing and merging.
- We conjecture that queries can be operationalized into a sequence of schema and data operations. We develop that idea on a query language fragment containing selects and joins.
- Operationalized queries are represented as a future timeline that is speculatively executed as a branch off of the present state, returning a value from its hypothetical future. Operationalized queries get rewritten to accommodate schema change "for free" by the machinery of operational differencing.
- Altogether we develop solutions to four of the eight challenge problems of schema evolution identified in a recent paper.
- oai:arXiv.org:2512.09762v1
- cs.DB
- Thu, 11 Dec 2025 00:00:00 -0500
+ HybridVFL: Disentangled Feature Learning for Edge-Enabled Vertical Federated Multimodal Classification
+ https://arxiv.org/abs/2512.10701
+ arXiv:2512.10701v1 Announce Type: new
+Abstract: Vertical Federated Learning (VFL) offers a privacy-preserving paradigm for Edge AI scenarios like mobile health diagnostics, where sensitive multimodal data reside on distributed, resource-constrained devices. Yet, standard VFL systems often suffer performance limitations due to simplistic feature fusion. This paper introduces HybridVFL, a novel framework designed to overcome this bottleneck by employing client-side feature disentanglement paired with a server-side cross-modal transformer for context-aware fusion. Through systematic evaluation on the multimodal HAM10000 skin lesion dataset, we demonstrate that HybridVFL significantly outperforms standard federated baselines, validating the criticality of advanced fusion mechanisms in robust, privacy-preserving systems.
+ oai:arXiv.org:2512.10701v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Jonathan Edwards, Tomas Petricek
+ 10.1145/3773274.3774931
+ Mostafa Anoosha, Zeinab Dehghani, Kuniko Paxton, Koorosh Aslansefat, Dhavalkumar Thakker
- Defining Cost Function of Steganography with Large Language Models
- https://arxiv.org/abs/2512.09769
- arXiv:2512.09769v1 Announce Type: new
-Abstract: In this paper, we make the first attempt towards defining cost function of steganography with large language models (LLMs), which is totally different from previous works that rely heavily on expert knowledge or require large-scale datasets for cost learning. To achieve this goal, a two-stage strategy combining LLM-guided program synthesis with evolutionary search is applied in the proposed method. In the first stage, a certain number of cost functions in the form of computer program are synthesized from LLM responses to structured prompts. These cost functions are then evaluated with pretrained steganalysis models so that candidate cost functions suited to steganography can be collected. In the second stage, by retraining a steganalysis model for each candidate cost function, the optimal cost function(s) can be determined according to the detection accuracy. This two-stage strategy is performed by an iterative fashion so that the best cost function can be collected at the last iteration. Experiments show that the proposed method enables LLMs to design new cost functions of steganography that significantly outperform existing works in terms of resisting steganalysis tools, which verifies the superiority of the proposed method. To the best knowledge of the authors, this is the first work applying LLMs to the design of advanced cost function of steganography, which presents a novel perspective for steganography design and may shed light on further research.
- oai:arXiv.org:2512.09769v1
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ COMPARE: Clinical Optimization with Modular Planning and Assessment via RAG-Enhanced AI-OCT: Superior Decision Support for Percutaneous Coronary Intervention Compared to ChatGPT-5 and Junior Operators
+ https://arxiv.org/abs/2512.10702
+ arXiv:2512.10702v1 Announce Type: new
+Abstract: Background: While intravascular imaging, particularly optical coherence tomography (OCT), improves percutaneous coronary intervention (PCI) outcomes, its interpretation is operator-dependent. General-purpose artificial intelligence (AI) shows promise but lacks domain-specific reliability. We evaluated the performance of CA-GPT, a novel large model deployed on an AI-OCT system, against that of the general-purpose ChatGPT-5 and junior physicians for OCT-guided PCI planning and assessment.
+ Methods: In this single-center analysis of 96 patients who underwent OCT-guided PCI, the procedural decisions generated by the CA-GPT, ChatGPT-5, and junior physicians were compared with an expert-derived procedural record. Agreement was assessed using ten pre-specified metrics across pre-PCI and post-PCI phases.
+ Results: For pre-PCI planning, CA-GPT demonstrated significantly higher median agreement scores (5[IQR 3.75-5]) compared to both ChatGPT-5 (3[2-4], P<0.001) and junior physicians (4[3-4], P<0.001). CA-GPT significantly outperformed ChatGPT-5 across all individual pre-PCI metrics and showed superior performance to junior physicians in stent diameter (90.3% vs. 72.2%, P<0.05) and length selection (80.6% vs. 52.8%, P<0.01). In post-PCI assessment, CA-GPT maintained excellent overall agreement (5[4.75-5]), significantly higher than both ChatGPT-5 (4[4-5], P<0.001) and junior physicians (5[4-5], P<0.05). Subgroup analysis confirmed CA-GPT's robust performance advantage in complex scenarios.
+ Conclusion: The CA-GPT-based AI-OCT system achieved superior decision-making agreement versus a general-purpose large language model and junior physicians across both PCI planning and assessment phases. This approach provides a standardized and reliable method for intravascular imaging interpretation, demonstrating significant potential to augment operator expertise and optimize OCT-guided PCI.
+ oai:arXiv.org:2512.10702v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hanzhou Wu, Yige Wang
+ Wei Fang, Chiyao Wang, Wenshuai Ma, Hui Liu, Jianqiang Hu, Xiaona Niu, Yi Chu, Mingming Zhang, Jingxiao Yang, Dongwei Zhang, Zelin Li, Pengyun Liu, Jiawei Zheng, Pengke Zhang, Chaoshi Qin, Wangang Guo, Bin Wang, Yugang Xue, Wei Zhang, Zikuan Wang, Rui Zhu, Yihui Cao, Quanmao Lu, Rui Meng, Yan Li
- DeepSeek's WEIRD Behavior: The cultural alignment of Large Language Models and the effects of prompt language and cultural prompting
- https://arxiv.org/abs/2512.09772
- arXiv:2512.09772v1 Announce Type: new
-Abstract: Culture is a core component of human-to-human interaction and plays a vital role in how we perceive and interact with others. Advancements in the effectiveness of Large Language Models (LLMs) in generating human-sounding text have greatly increased the amount of human-to-computer interaction. As this field grows, the cultural alignment of these human-like agents becomes an important field of study. Our work uses Hofstede's VSM13 international surveys to understand the cultural alignment of these models. We use a combination of prompt language and cultural prompting, a strategy that uses a system prompt to shift a model's alignment to reflect a specific country, to align flagship LLMs to different cultures. Our results show that DeepSeek-V3, V3.1, and OpenAI's GPT-5 exhibit a close alignment with the survey responses of the United States and do not achieve a strong or soft alignment with China, even when using cultural prompts or changing the prompt language. We also find that GPT-4 exhibits an alignment closer to China when prompted in English, but cultural prompting is effective in shifting this alignment closer to the United States. Other low-cost models, GPT-4o and GPT-4.1, respond to the prompt language used (i.e., English or Simplified Chinese) and cultural prompting strategies to create acceptable alignments with both the United States and China.
- oai:arXiv.org:2512.09772v1
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ PACIFIC: a framework for generating benchmarks to check Precise Automatically Checked Instruction Following In Code
+ https://arxiv.org/abs/2512.10713
+ arXiv:2512.10713v1 Announce Type: new
+Abstract: Large Language Model (LLM)-based code assistants have emerged as a powerful application of generative AI, demonstrating impressive capabilities in code generation and comprehension. A key requirement for these systems is their ability to accurately follow user instructions. We present Precise Automatically Checked Instruction Following In Code (PACIFIC), a novel framework designed to automatically generate benchmarks that rigorously assess sequential instruction-following and code dry-running capabilities in LLMs, while allowing control over benchmark difficulty. PACIFIC produces benchmark variants with clearly defined expected outputs, enabling straightforward and reliable evaluation through simple output comparisons. In contrast to existing approaches that often rely on tool usage or agentic behavior, our work isolates and evaluates the LLM's intrinsic ability to reason through code behavior step-by-step without execution (dry running) and to follow instructions. Furthermore, our framework mitigates training data contamination by facilitating effortless generation of novel benchmark variations. We validate our framework by generating a suite of benchmarks spanning a range of difficulty levels and evaluating multiple state-of-the-art LLMs. Our results demonstrate that PACIFIC can produce increasingly challenging benchmarks that effectively differentiate instruction-following and dry running capabilities, even among advanced models. Overall, our framework offers a scalable, contamination-resilient methodology for assessing core competencies of LLMs in code-related tasks.
+ oai:arXiv.org:2512.10713v1
+ cs.SE
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- James Luther, Donald Brown
+ Itay Dreyfuss, Antonio Abu Nassar, Samuel Ackerman, Axel Ben David, Rami Katan, Orna Raz, Marcel Zalmanovici
- Stylized Meta-Album: Group-bias injection with style transfer to study robustness against distribution shifts
- https://arxiv.org/abs/2512.09773
- arXiv:2512.09773v1 Announce Type: new
-Abstract: We introduce Stylized Meta-Album (SMA), a new image classification meta-dataset comprising 24 datasets (12 content datasets, and 12 stylized datasets), designed to advance studies on out-of-distribution (OOD) generalization and related topics. Created using style transfer techniques from 12 subject classification datasets, SMA provides a diverse and extensive set of 4800 groups, combining various subjects (objects, plants, animals, human actions, textures) with multiple styles. SMA enables flexible control over groups and classes, allowing us to configure datasets to reflect diverse benchmark scenarios. While ideally, data collection would capture extensive group diversity, practical constraints often make this infeasible. SMA addresses this by enabling large and configurable group structures through flexible control over styles, subject classes, and domains-allowing datasets to reflect a wide range of real-world benchmark scenarios. This design not only expands group and class diversity, but also opens new methodological directions for evaluating model performance across diverse group and domain configurations-including scenarios with many minority groups, varying group imbalance, and complex domain shifts-and for studying fairness, robustness, and adaptation under a broader range of realistic conditions. To demonstrate SMA's effectiveness, we implemented two benchmarks: (1) a novel OOD generalization and group fairness benchmark leveraging SMA's domain, class, and group diversity to evaluate existing benchmarks. Our findings reveal that while simple balancing and algorithms utilizing group information remain competitive as claimed in previous benchmarks, increasing group diversity significantly impacts fairness, altering the superiority and relative rankings of algorithms. We also propose to use \textit{Top-M worst group accuracy} as a new hyperparameter tuning metric, demonstrating broader fairness during optimization and delivering better final worst-group accuracy for larger group diversity. (2) An unsupervised domain adaptation (UDA) benchmark utilizing SMA's group diversity to evaluate UDA algorithms across more scenarios, offering a more comprehensive benchmark with lower error bars (reduced by 73\% and 28\% in closed-set setting and UniDA setting, respectively) compared to existing efforts. These use cases highlight SMA's potential to significantly impact the outcomes of conventional benchmarks.
- oai:arXiv.org:2512.09773v1
+ CheXmask-U: Quantifying uncertainty in landmark-based anatomical segmentation for X-ray images
+ https://arxiv.org/abs/2512.10715
+ arXiv:2512.10715v1 Announce Type: new
+Abstract: Uncertainty estimation is essential for the safe clinical deployment of medical image segmentation systems, enabling the identification of unreliable predictions and supporting human oversight. While prior work has largely focused on pixel-level uncertainty, landmark-based segmentation offers inherent topological guarantees yet remains underexplored from an uncertainty perspective. In this work, we study uncertainty estimation for anatomical landmark-based segmentation on chest X-rays. Inspired by hybrid neural network architectures that combine standard image convolutional encoders with graph-based generative decoders, and leveraging their variational latent space, we derive two complementary measures: (i) latent uncertainty, captured directly from the learned distribution parameters, and (ii) predictive uncertainty, obtained by generating multiple stochastic output predictions from latent samples. Through controlled corruption experiments we show that both uncertainty measures increase with perturbation severity, reflecting both global and local degradation. We demonstrate that these uncertainty signals can identify unreliable predictions by comparing with manual ground-truth, and support out-of-distribution detection on the CheXmask dataset. More importantly, we release CheXmask-U (huggingface.co/datasets/mcosarinsky/CheXmask-U), a large scale dataset of 657,566 chest X-ray landmark segmentations with per-node uncertainty estimates, enabling researchers to account for spatial variations in segmentation quality when using these anatomical masks. Our findings establish uncertainty estimation as a promising direction to enhance robustness and safe deployment of landmark-based anatomical segmentation methods in chest X-ray. A fully working interactive demo of the method is available at huggingface.co/spaces/matiasky/CheXmask-U and the source code at github.com/mcosarinsky/CheXmask-U.
+ oai:arXiv.org:2512.10715v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Matias Cosarinsky, Nicolas Gaggion, Rodrigo Echeveste, Enzo Ferrante
+
+
+ Dynamically consistent finite volume scheme for a bimonomeric simplified model with inflammation processes for Alzheimer's disease
+ https://arxiv.org/abs/2512.10716
+ arXiv:2512.10716v1 Announce Type: new
+Abstract: A model of progression of Alzheimer's disease (AD) incorporating the interactions of A$\beta$-monomers, oligomers, microglial cells and interleukins with neurons is considered. The resulting convection-diffusion-reaction system consists of four partial differential equations (PDEs) and one ordinary differential equation (ODE). We develop a finite volume (FV) scheme for this system, together with non-negativity and a priori bounds for the discrete solution, so that we establish the existence of a discrete solution to the FV scheme. It is shown that the scheme converges to an admissible weak solution of the model. The reaction terms of the system are discretized using a semi-implicit strategy that coincides with a nonstandard discretization of the spatially homogeneous (SH) model. This construction enables us to prove that the FV scheme is dynamically consistent with respect to the spatially homogeneous version of the model. Finally, numerical experiments are presented to illustrate the model and to assess the behavior of the FV scheme.
+ oai:arXiv.org:2512.10716v1
+ math.NA
+ cs.NA
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Romain Mussard (UNIROUEN), Aur\'elien Gauffre (UGA), Ihsan Ullah (TAU, LISN), Thanh Gia Hieu Khuong (TAU, LISN), Massih-Reza Amini (UGA), Isabelle Guyon (TAU, LISN), Lisheng Sun-Hosoya (TAU, LISN)
+ Juan Barajas-Calonge (UBB), Mauricio A. Sepulveda Cortes (CI2MA), Nicolas Torres (LJAD), Luis Miguel Villada (UBB)
- Quantifying Uncertainty in Machine Learning-Based Pervasive Systems: Application to Human Activity Recognition
- https://arxiv.org/abs/2512.09775
- arXiv:2512.09775v1 Announce Type: new
-Abstract: The recent convergence of pervasive computing and machine learning has given rise to numerous services, impacting almost all areas of economic and social activity. However, the use of AI techniques precludes certain standard software development practices, which emphasize rigorous testing to ensure the elimination of all bugs and adherence to well-defined specifications. ML models are trained on numerous high-dimensional examples rather than being manually coded. Consequently, the boundaries of their operating range are uncertain, and they cannot guarantee absolute error-free performance. In this paper, we propose to quantify uncertainty in ML-based systems. To achieve this, we propose to adapt and jointly utilize a set of selected techniques to evaluate the relevance of model predictions at runtime. We apply and evaluate these proposals in the highly heterogeneous and evolving domain of Human Activity Recognition (HAR). The results presented demonstrate the relevance of the approach, and we discuss in detail the assistance provided to domain experts.
- oai:arXiv.org:2512.09775v1
- cs.SE
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ A Stabilized Finite Element Method for Morpho-Visco-Poroelastic Model
+ https://arxiv.org/abs/2512.10718
+ arXiv:2512.10718v1 Announce Type: new
+Abstract: We propose a mathematical model that combines elastic, viscous and porous effects with growth or shrinkage due to microstructural changes. This phenomenon is important in tissue or tumor growth, as well as in dermal contraction. Although existence results of the solution to the problem are not given, the current study assesses stability of the equilibria for both the continuous and semi-discrete versions of the model. Furthermore, a numerical condition for monotonicity of the numerical solution is described, as well as a way to stabilize the numerical solution so that spurious oscillations are avoided. The derived stabilization result is confirmed by computer simulations. In order to have a more quantitative picture, the total variation has been evaluated as a function of the stabilization parameter.
+ oai:arXiv.org:2512.10718v1
+ math.NA
+ cs.NA
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Vladimir Balditsyn, Philippe Lalanda, German Vega, St\'ephanie Chollet
+ Sabia Asghar, Duncan den Bakker, Etelvina Javierre, Qiyao Peng, Fred J. Vermolen
- Physics-Aware Heterogeneous GNN Architecture for Real-Time BESS Optimization in Unbalanced Distribution Systems
- https://arxiv.org/abs/2512.09780
- arXiv:2512.09780v1 Announce Type: new
-Abstract: Battery energy storage systems (BESS) have become increasingly vital in three-phase unbalanced distribution grids for maintaining voltage stability and enabling optimal dispatch. However, existing deep learning approaches often lack explicit three-phase representation, making it difficult to accurately model phase-specific dynamics and enforce operational constraints--leading to infeasible dispatch solutions. This paper demonstrates that by embedding detailed three-phase grid information--including phase voltages, unbalanced loads, and BESS states--into heterogeneous graph nodes, diverse GNN architectures (GCN, GAT, GraphSAGE, GPS) can jointly predict network state variables with high accuracy. Moreover, a physics-informed loss function incorporates critical battery constraints--SoC and C-rate limits--via soft penalties during training. Experimental validation on the CIGRE 18-bus distribution system shows that this embedding-loss approach achieves low prediction errors, with bus voltage MSEs of 6.92e-07 (GCN), 1.21e-06 (GAT), 3.29e-05 (GPS), and 9.04e-07 (SAGE). Importantly, the physics-informed method ensures nearly zero SoC and C-rate constraint violations, confirming its effectiveness for reliable, constraint-compliant dispatch.
- oai:arXiv.org:2512.09780v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ SpaceDrive: Infusing Spatial Awareness into VLM-based Autonomous Driving
+ https://arxiv.org/abs/2512.10719
+ arXiv:2512.10719v1 Announce Type: new
+Abstract: End-to-end autonomous driving methods built on vision language models (VLMs) have undergone rapid development driven by their universal visual understanding and strong reasoning capabilities obtained from the large-scale pretraining. However, we find that current VLMs struggle to understand fine-grained 3D spatial relationships which is a fundamental requirement for systems interacting with the physical world. To address this issue, we propose SpaceDrive, a spatial-aware VLM-based driving framework that treats spatial information as explicit positional encodings (PEs) instead of textual digit tokens, enabling joint reasoning over semantic and spatial representations. SpaceDrive employs a universal positional encoder to all 3D coordinates derived from multi-view depth estimation, historical ego-states, and text prompts. These 3D PEs are first superimposed to augment the corresponding 2D visual tokens. Meanwhile, they serve as a task-agnostic coordinate representation, replacing the digit-wise numerical tokens as both inputs and outputs for the VLM. This mechanism enables the model to better index specific visual semantics in spatial reasoning and directly regress trajectory coordinates rather than generating digit-by-digit, thereby enhancing planning accuracy. Extensive experiments validate that SpaceDrive achieves state-of-the-art open-loop performance on the nuScenes dataset and the second-best Driving Score of 78.02 on the Bench2Drive closed-loop benchmark over existing VLM-based methods.
+ oai:arXiv.org:2512.10719v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Aoxiang Ma, Salah Ghamizi, Jun Cao, Pedro Rodriguez
+ Peizheng Li, Zhenghao Zhang, David Holtz, Hang Yu, Yutong Yang, Yuzhi Lai, Rui Song, Andreas Geiger, Andreas Zell
- Predicting Polymer Solubility in Solvents Using SMILES Strings
- https://arxiv.org/abs/2512.09784
- arXiv:2512.09784v1 Announce Type: new
-Abstract: Understanding and predicting polymer solubility in various solvents is critical for applications ranging from recycling to pharmaceutical formulation. This work presents a deep learning framework that predicts polymer solubility, expressed as weight percent (wt%), directly from SMILES representations of both polymers and solvents. A dataset of 8,049 polymer solvent pairs at 25 deg C was constructed from calibrated molecular dynamics simulations (Zhou et al., 2023), and molecular descriptors and fingerprints were combined into a 2,394 feature representation per sample. A fully connected neural network with six hidden layers was trained using the Adam optimizer and evaluated using mean squared error loss, achieving strong agreement between predicted and actual solubility values. Generalizability was demonstrated using experimentally measured data from the Materials Genome Project, where the model maintained high accuracy on 25 unseen polymer solvent combinations. These findings highlight the viability of SMILES based machine learning models for scalable solubility prediction and high-throughput solvent screening, supporting applications in green chemistry, polymer processing, and materials design.
- oai:arXiv.org:2512.09784v1
+ Beyond the Black Box: Identifiable Interpretation and Control in Generative Models via Causal Minimality
+ https://arxiv.org/abs/2512.10720
+ arXiv:2512.10720v1 Announce Type: new
+Abstract: Deep generative models, while revolutionizing fields like image and text generation, largely operate as opaque black boxes, hindering human understanding, control, and alignment. While methods like sparse autoencoders (SAEs) show remarkable empirical success, they often lack theoretical guarantees, risking subjective insights. Our primary objective is to establish a principled foundation for interpretable generative models. We demonstrate that the principle of causal minimality -- favoring the simplest causal explanation -- can endow the latent representations of diffusion vision and autoregressive language models with clear causal interpretation and robust, component-wise identifiable control. We introduce a novel theoretical framework for hierarchical selection models, where higher-level concepts emerge from the constrained composition of lower-level variables, better capturing the complex dependencies in data generation. Under theoretically derived minimality conditions (manifesting as sparsity or compression constraints), we show that learned representations can be equivalent to the true latent variables of the data-generating process. Empirically, applying these constraints to leading generative models allows us to extract their innate hierarchical concept graphs, offering fresh insights into their internal knowledge organization. Furthermore, these causally grounded concepts serve as levers for fine-grained model steering, paving the way for transparent, reliable systems.
+ oai:arXiv.org:2512.10720v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Andrew Reinhard
+ Lingjing Kong, Shaoan Xie, Guangyi Chen, Yuewen Sun, Xiangchen Song, Eric P. Xing, Kun Zhang
- TinyD\'ej\`aVu: Smaller Memory Footprint & Faster Inference on Sensor Data Streams with Always-On Microcontrollers
- https://arxiv.org/abs/2512.09786
- arXiv:2512.09786v1 Announce Type: new
-Abstract: Always-on sensors are increasingly expected to embark a variety of tiny neural networks and to continuously perform inference on time-series of the data they sense. In order to fit lifetime and energy consumption requirements when operating on battery, such hardware uses microcontrollers (MCUs) with tiny memory budget e.g., 128kB of RAM. In this context, optimizing data flows across neural network layers becomes crucial. In this paper, we introduce TinyD\'ej\`aVu, a new framework and novel algorithms we designed to drastically reduce the RAM footprint required by inference using various tiny ML models for sensor data time-series on typical microcontroller hardware. We publish the implementation of TinyD\'ej\`aVu as open source, and we perform reproducible benchmarks on hardware. We show that TinyD\'ej\`aVu can save more than 60% of RAM usage and eliminate up to 90% of redundant compute on overlapping sliding window inputs.
- oai:arXiv.org:2512.09786v1
+ Generalized Spherical Neural Operators: Green's Function Formulation
+ https://arxiv.org/abs/2512.10723
+ arXiv:2512.10723v1 Announce Type: new
+Abstract: Neural operators offer powerful approaches for solving parametric partial differential equations, but extending them to spherical domains remains challenging due to the need to preserve intrinsic geometry while avoiding distortions that break rotational consistency. Existing spherical operators rely on rotational equivariance but often lack the flexibility for real-world complexity. We propose a general operator-design framework based on the designable spherical Green's function and its harmonic expansion, establishing a solid operator-theoretic foundation for spherical learning. Based on this, we propose an absolute and relative position-dependent Green's function that enables flexible balance of equivariance and invariance for real-world modeling. The resulting operator, Green's-function Spherical Neural Operator (GSNO) with a novel spectral learning method, can adapt to anisotropic, constraint-rich systems while retaining spectral efficiency. To exploit GSNO, we develop GSHNet, a hierarchical architecture that combines multi-scale spectral modeling with spherical up-down sampling, enhancing global feature representation. Evaluations on diffusion MRI, shallow water dynamics, and global weather forecasting, GSNO and GSHNet consistently outperform state-of-the-art methods. Our results position GSNO as a principled and general framework for spherical operator learning, bridging rigorous theory with real-world complexity.
+ oai:arXiv.org:2512.10723v1cs.LG
- cs.PF
- cs.SD
- eess.AS
- eess.SP
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Zhaolan Huang, Emmanuel Baccelli
+ Hao Tang, Hao Chen, Chao Li
- FastPose-ViT: A Vision Transformer for Real-Time Spacecraft Pose Estimation
- https://arxiv.org/abs/2512.09792
- arXiv:2512.09792v1 Announce Type: new
-Abstract: Estimating the 6-degrees-of-freedom (6DoF) pose of a spacecraft from a single image is critical for autonomous operations like in-orbit servicing and space debris removal. Existing state-of-the-art methods often rely on iterative Perspective-n-Point (PnP)-based algorithms, which are computationally intensive and ill-suited for real-time deployment on resource-constrained edge devices. To overcome these limitations, we propose FastPose-ViT, a Vision Transformer (ViT)-based architecture that directly regresses the 6DoF pose. Our approach processes cropped images from object bounding boxes and introduces a novel mathematical formalism to map these localized predictions back to the full-image scale. This formalism is derived from the principles of projective geometry and the concept of "apparent rotation", where the model predicts an apparent rotation matrix that is then corrected to find the true orientation. We demonstrate that our method outperforms other non-PnP strategies and achieves performance competitive with state-of-the-art PnP-based techniques on the SPEED dataset. Furthermore, we validate our model's suitability for real-world space missions by quantizing it and deploying it on power-constrained edge hardware. On the NVIDIA Jetson Orin Nano, our end-to-end pipeline achieves a latency of ~75 ms per frame under sequential execution, and a non-blocking throughput of up to 33 FPS when stages are scheduled concurrently.
- oai:arXiv.org:2512.09792v1
+ Video Depth Propagation
+ https://arxiv.org/abs/2512.10725
+ arXiv:2512.10725v1 Announce Type: new
+Abstract: Depth estimation in videos is essential for visual perception in real-world applications. However, existing methods either rely on simple frame-by-frame monocular models, leading to temporal inconsistencies and inaccuracies, or use computationally demanding temporal modeling, unsuitable for real-time applications. These limitations significantly restrict general applicability and performance in practical settings. To address this, we propose VeloDepth, an efficient and robust online video depth estimation pipeline that effectively leverages spatiotemporal priors from previous depth predictions and performs deep feature propagation. Our method introduces a novel Propagation Module that refines and propagates depth features and predictions using flow-based warping coupled with learned residual corrections. In addition, our design structurally enforces temporal consistency, resulting in stable depth predictions across consecutive frames with improved efficiency. Comprehensive zero-shot evaluation on multiple benchmarks demonstrates the state-of-the-art temporal consistency and competitive accuracy of VeloDepth, alongside its significantly faster inference compared to existing video-based depth estimators. VeloDepth thus provides a practical, efficient, and accurate solution for real-time depth estimation suitable for diverse perception tasks. Code and models are available at https://github.com/lpiccinelli-eth/velodepth
+ oai:arXiv.org:2512.10725v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Pierre Ancey, Andrew Price, Saqib Javed, Mathieu Salzmann
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Luigi Piccinelli, Thiemo Wandel, Christos Sakaridis, Wim Abbeloos, Luc Van Gool
- Knowledge Diversion for Efficient Morphology Control and Policy Transfer
- https://arxiv.org/abs/2512.09796
- arXiv:2512.09796v1 Announce Type: new
-Abstract: Universal morphology control aims to learn a universal policy that generalizes across heterogeneous agent morphologies, with Transformer-based controllers emerging as a popular choice. However, such architectures incur substantial computational costs, resulting in high deployment overhead, and existing methods exhibit limited cross-task generalization, necessitating training from scratch for each new task. To this end, we propose \textbf{DivMorph}, a modular training paradigm that leverages knowledge diversion to learn decomposable controllers. DivMorph factorizes randomly initialized Transformer weights into factor units via SVD prior to training and employs dynamic soft gating to modulate these units based on task and morphology embeddings, separating them into shared \textit{learngenes} and morphology- and task-specific \textit{tailors}, thereby achieving knowledge disentanglement. By selectively activating relevant components, DivMorph enables scalable and efficient policy deployment while supporting effective policy transfer to novel tasks. Extensive experiments demonstrate that DivMorph achieves state-of-the-art performance, achieving a 3$\times$ improvement in sample efficiency over direct finetuning for cross-task transfer and a 17$\times$ reduction in model size for single-agent deployment.
- oai:arXiv.org:2512.09796v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ IRG-MotionLLM: Interleaving Motion Generation, Assessment and Refinement for Text-to-Motion Generation
+ https://arxiv.org/abs/2512.10730
+ arXiv:2512.10730v1 Announce Type: new
+Abstract: Recent advances in motion-aware large language models have shown remarkable promise for unifying motion understanding and generation tasks. However, these models typically treat understanding and generation separately, limiting the mutual benefits that could arise from interactive feedback between tasks. In this work, we reveal that motion assessment and refinement tasks act as crucial bridges to enable bidirectional knowledge flow between understanding and generation. Leveraging this insight, we propose Interleaved Reasoning for Motion Generation (IRMoGen), a novel paradigm that tightly couples motion generation with assessment and refinement through iterative text-motion dialogue. To realize this, we introduce IRG-MotionLLM, the first model that seamlessly interleaves motion generation, assessment, and refinement to improve generation performance. IRG-MotionLLM is developed progressively with a novel three-stage training scheme, initializing and subsequently enhancing native IRMoGen capabilities. To facilitate this development, we construct an automated data engine to synthesize interleaved reasoning annotations from existing text-motion datasets. Extensive experiments demonstrate that: (i) Assessment and refinement tasks significantly improve text-motion alignment; (ii) Interleaving motion generation, assessment, and refinement steps yields consistent performance gains across training stages; and (iii) IRG-MotionLLM clearly outperforms the baseline model and achieves advanced performance on standard text-to-motion generation benchmarks. Cross-evaluator testing further validates its effectiveness. Code & Data: https://github.com/HumanMLLM/IRG-MotionLLM/tree/main.
+ oai:arXiv.org:2512.10730v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Fu Feng, Ruixiao Shi, Yucheng Xie, Jianlu Shen, Jing Wang, Xin Geng
+ Yuan-Ming Li, Qize Yang, Nan Lei, Shenghao Fu, Ling-An Zeng, Jian-Fang Hu, Xihan Wei, Wei-Shi Zheng
- M3Net: A Multi-Metric Mixture of Experts Network Digital Twin with Graph Neural Networks
- https://arxiv.org/abs/2512.09797
- arXiv:2512.09797v1 Announce Type: new
-Abstract: The rise of 5G/6G network technologies promises to enable applications like autonomous vehicles and virtual reality, resulting in a significant increase in connected devices and necessarily complicating network management. Even worse, these applications often have strict, yet heterogeneous, performance requirements across metrics like latency and reliability. Much recent work has thus focused on developing the ability to predict network performance. However, traditional methods for network modeling, like discrete event simulators and emulation, often fail to balance accuracy and scalability. Network Digital Twins (NDTs), augmented by machine learning, present a viable solution by creating virtual replicas of physical networks for real- time simulation and analysis. State-of-the-art models, however, fall short of full-fledged NDTs, as they often focus only on a single performance metric or simulated network data. We introduce M3Net, a Multi-Metric Mixture-of-experts (MoE) NDT that uses a graph neural network architecture to estimate multiple performance metrics from an expanded set of network state data in a range of scenarios. We show that M3Net significantly enhances the accuracy of flow delay predictions by reducing the MAPE (Mean Absolute Percentage Error) from 20.06% to 17.39%, while also achieving 66.47% and 78.7% accuracy on jitter and packets dropped for each flow
- oai:arXiv.org:2512.09797v1
- cs.NI
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ TriHaRd: Higher Resilience for TEE Trusted Time
+ https://arxiv.org/abs/2512.10732
+ arXiv:2512.10732v1 Announce Type: new
+Abstract: Accurately measuring time passing is critical for many applications. However, in Trusted Execution Environments (TEEs) such as Intel SGX, the time source is outside the Trusted Computing Base: a malicious host can manipulate the TEE's notion of time, jumping in time or affecting perceived time speed. Previous work (Triad) proposes protocols for TEEs to maintain a trustworthy time source by building a cluster of TEEs that collaborate with each other and with a remote Time Authority to maintain a continuous notion of passing time. However, such approaches still allow an attacker to control the operating system and arbitrarily manipulate their own TEE's perceived clock speed. An attacker can even propagate faster passage of time to honest machines participating in Triad's trusted time protocol, causing them to skip to timestamps arbitrarily far in the future. We propose TriHaRd, a TEE trusted time protocol achieving high resilience against clock speed and offset manipulations, notably through Byzantine-resilient clock updates and consistency checks. We empirically show that TriHaRd mitigates known attacks against Triad.
+ oai:arXiv.org:2512.10732v1
+ cs.CR
+ cs.DC
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Blessed Guda, Carlee Joe-Wong
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Matthieu Bettinger, Sonia Ben Mokhtar, Pascal Felber, Etienne Rivi\`ere, Valerio Schiavoni, Anthony Simonet-Boulogne
- High-Resolution Water Sampling via a Solar-Powered Autonomous Surface Vehicle
- https://arxiv.org/abs/2512.09798
- arXiv:2512.09798v1 Announce Type: new
-Abstract: Accurate water quality assessment requires spatially resolved sampling, yet most unmanned surface vehicles (USVs) can collect only a limited number of samples or rely on single-point sensors with poor representativeness. This work presents a solar-powered, fully autonomous USV featuring a novel syringe-based sampling architecture capable of acquiring 72 discrete, contamination-minimized water samples per mission. The vehicle incorporates a ROS 2 autonomy stack with GPS-RTK navigation, LiDAR and stereo-vision obstacle detection, Nav2-based mission planning, and long-range LoRa supervision, enabling dependable execution of sampling routes in unstructured environments. The platform integrates a behavior-tree autonomy architecture adapted from Nav2, enabling mission-level reasoning and perception-aware navigation. A modular 6x12 sampling system, controlled by distributed micro-ROS nodes, provides deterministic actuation, fault isolation, and rapid module replacement, achieving spatial coverage beyond previously reported USV-based samplers. Field trials in Achocalla Lagoon (La Paz, Bolivia) demonstrated 87% waypoint accuracy, stable autonomous navigation, and accurate physicochemical measurements (temperature, pH, conductivity, total dissolved solids) comparable to manually collected references. These results demonstrate that the platform enables reliable high-resolution sampling and autonomous mission execution, providing a scalable solution for aquatic monitoring in remote environments.
- oai:arXiv.org:2512.09798v1
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Textual Data Bias Detection and Mitigation - An Extensible Pipeline with Experimental Evaluation
+ https://arxiv.org/abs/2512.10734
+ arXiv:2512.10734v1 Announce Type: new
+Abstract: Textual data used to train large language models (LLMs) exhibits multifaceted bias manifestations encompassing harmful language and skewed demographic distributions. Regulations such as the European AI Act require identifying and mitigating biases against protected groups in data, with the ultimate goal of preventing unfair model outputs. However, practical guidance and operationalization are lacking. We propose a comprehensive data bias detection and mitigation pipeline comprising four components that address two data bias types, namely representation bias and (explicit) stereotypes for a configurable sensitive attribute. First, we leverage LLM-generated word lists created based on quality criteria to detect relevant group labels. Second, representation bias is quantified using the Demographic Representation Score. Third, we detect and mitigate stereotypes using sociolinguistically informed filtering. Finally, we compensate representation bias through Grammar- and Context-Aware Counterfactual Data Augmentation. We conduct a two-fold evaluation using the examples of gender, religion and age. First, the effectiveness of each individual component on data debiasing is evaluated through human validation and baseline comparison. The findings demonstrate that we successfully reduce representation bias and (explicit) stereotypes in a text dataset. Second, the effect of data debiasing on model bias reduction is evaluated by bias benchmarking of several models (0.6B-8B parameters), fine-tuned on the debiased text dataset. This evaluation reveals that LLMs fine-tuned on debiased data do not consistently show improved performance on bias benchmarks, exposing critical gaps in current evaluation methodologies and highlighting the need for targeted data manipulation to address manifested model bias.
+ oai:arXiv.org:2512.10734v1
+ cs.CL
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Misael Mamani, Mariel Fernandez, Grace Luna, Steffani Limachi, Leonel Apaza, Carolina Montes-D\'avalos, Marcelo Herrera, Edwin Salcedo
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Rebekka G\"orge, Sujan Sai Gannamaneni, Tabea Naeven, Hammam Abdelwahab, H\'ector Allende-Cid, Armin B. Cremers, Lennard Helmer, Michael Mock, Anna Schmitz, Songkai Xue, Elif Yildirir, Maximilian Poretschkin, Stefan Wrobel
- Ariel-ML: Computing Parallelization with Embedded Rust for Neural Networks on Heterogeneous Multi-core Microcontrollers
- https://arxiv.org/abs/2512.09800
- arXiv:2512.09800v1 Announce Type: new
-Abstract: Low-power microcontroller (MCU) hardware is currently evolving from single-core architectures to predominantly multi-core architectures. In parallel, new embedded software building blocks are more and more written in Rust, while C/C++ dominance fades in this domain. On the other hand, small artificial neural networks (ANN) of various kinds are increasingly deployed in edge AI use cases, thus deployed and executed directly on low-power MCUs. In this context, both incremental improvements and novel innovative services will have to be continuously retrofitted using ANNs execution in software embedded on sensing/actuating systems already deployed in the field. However, there was so far no Rust embedded software platform automating parallelization for inference computation on multi-core MCUs executing arbitrary TinyML models. This paper thus fills this gap by introducing Ariel-ML, a novel toolkit we designed combining a generic TinyML pipeline and an embedded Rust software platform which can take full advantage of multi-core capabilities of various 32bit microcontroller families (Arm Cortex-M, RISC-V, ESP-32). We published the full open source code of its implementation, which we used to benchmark its capabilities using a zoo of various TinyML models. We show that Ariel-ML outperforms prior art in terms of inference latency as expected, and we show that, compared to pre-existing toolkits using embedded C/C++, Ariel-ML achieves comparable memory footprints. Ariel-ML thus provides a useful basis for TinyML practitioners and resource-constrained embedded Rust developers.
- oai:arXiv.org:2512.09800v1
+ LGAN: An Efficient High-Order Graph Neural Network via the Line Graph Aggregation
+ https://arxiv.org/abs/2512.10735
+ arXiv:2512.10735v1 Announce Type: new
+Abstract: Graph Neural Networks (GNNs) have emerged as a dominant paradigm for graph classification. Specifically, most existing GNNs mainly rely on the message passing strategy between neighbor nodes, where the expressivity is limited by the 1-dimensional Weisfeiler-Lehman (1-WL) test. Although a number of k-WL-based GNNs have been proposed to overcome this limitation, their computational cost increases rapidly with k, significantly restricting the practical applicability. Moreover, since the k-WL models mainly operate on node tuples, these k-WL-based GNNs cannot retain fine-grained node- or edge-level semantics required by attribution methods (e.g., Integrated Gradients), leading to the less interpretable problem. To overcome the above shortcomings, in this paper, we propose a novel Line Graph Aggregation Network (LGAN), that constructs a line graph from the induced subgraph centered at each node to perform the higher-order aggregation. We theoretically prove that the LGAN not only possesses the greater expressive power than the 2-WL under injective aggregation assumptions, but also has lower time complexity. Empirical evaluations on benchmarks demonstrate that the LGAN outperforms state-of-the-art k-WL-based GNNs, while offering better interpretability.
+ oai:arXiv.org:2512.10735v1cs.LG
- cs.DC
- cs.PF
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Zhaolan Huang, Kaspar Schleiser, Gyungmin Myung, Emmanuel Baccelli
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lin Du, Lu Bai, Jincheng Li, Lixin Cui, Hangyuan Du, Lichi Zhang, Yuting Chen, Zhao Li
- Modality-Specific Enhancement and Complementary Fusion for Semi-Supervised Multi-Modal Brain Tumor Segmentation
- https://arxiv.org/abs/2512.09801
- arXiv:2512.09801v1 Announce Type: new
-Abstract: Semi-supervised learning (SSL) has become a promising direction for medical image segmentation, enabling models to learn from limited labeled data alongside abundant unlabeled samples. However, existing SSL approaches for multi-modal medical imaging often struggle to exploit the complementary information between modalities due to semantic discrepancies and misalignment across MRI sequences. To address this, we propose a novel semi-supervised multi-modal framework that explicitly enhances modality-specific representations and facilitates adaptive cross-modal information fusion. Specifically, we introduce a Modality-specific Enhancing Module (MEM) to strengthen semantic cues unique to each modality via channel-wise attention, and a learnable Complementary Information Fusion (CIF) module to adaptively exchange complementary knowledge between modalities. The overall framework is optimized using a hybrid objective combining supervised segmentation loss and cross-modal consistency regularization on unlabeled data. Extensive experiments on the BraTS 2019 (HGG subset) demonstrate that our method consistently outperforms strong semi-supervised and multi-modal baselines under 1\%, 5\%, and 10\% labeled data settings, achieving significant improvements in both Dice and Sensitivity scores. Ablation studies further confirm the complementary effects of our proposed MEM and CIF in bridging cross-modality discrepancies and improving segmentation robustness under scarce supervision.
- oai:arXiv.org:2512.09801v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Kicking Politics: How Football Fan Communities Became Arenas for Political Influence
+ https://arxiv.org/abs/2512.10737
+ arXiv:2512.10737v1 Announce Type: new
+Abstract: This paper investigates how political campaigns engaged UK football fan communities on Twitter in the aftermath of the Brexit Referendum (2016-2017). Football fandom, with its strong collective identities and tribal behaviours, offers fertile ground for political influence. Combining social network and content analysis, we examine how political discourse became embedded in football conversations. We show that a wide range of actors -- including parties, media, activist groups, and pseudonymous influencers -- mobilised support, provoked reactions, and shaped opinion within these communities. Through case studies of hashtag hijacking, embedded activism, and political "megaphones", we illustrate how campaigns leveraged fan cultures to amplify political messages. Our findings highlight mechanisms of political influence in ostensibly non-political online spaces and point toward the development of a broader framework in future work.
+ oai:arXiv.org:2512.10737v1
+ cs.SI
+ cs.CY
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Tien-Dat Chung, Ba-Thinh Lam, Thanh-Huy Nguyen, Thien Nguyen, Nguyen Lan Vi Vu, Hoang-Loc Cao, Phat Kim Huynh, Min Xu
+ Helen Paffard, Diogo Pacheco
- Building a Data Dashboard for Magic: The Gathering: Initial Design Considerations
- https://arxiv.org/abs/2512.09802
- arXiv:2512.09802v1 Announce Type: new
-Abstract: This paper presents the initial stages of a design study aimed at developing a dashboard to visualize gameplay data of the Commander format from Magic: The Gathering. We conducted a user-task analysis to identify requirements for a data visualization dashboard tailored to the Commander format. Afterwards, we proposed a design for the dashboard leveraging visualizations to address players' needs and pain points for typical data analysis tasks in the context domain. Then, we followed-up with a structured user test to evaluate players' comprehension and preferences of data visualizations. Results show that players prioritize contextually relevant, outcome-driven metrics over peripheral ones, and that canonical charts like heatmaps and line charts support higher comprehension than complex ones such as scatterplots or icicle plots. Our findings also highlight the importance of localized views, user customization, and progressive disclosure, emphasizing that adaptability and contextual relevance are as essential as accuracy in effective dashboard design. Our study contributes practical design guidelines for data visualization in gaming contexts and highlights broader implications for engagement-driven dashboards.
- oai:arXiv.org:2512.09802v1
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Distribution-Free Stochastic MPC for Joint-in-Time Chance-Constrained Linear Systems
+ https://arxiv.org/abs/2512.10738
+ arXiv:2512.10738v1 Announce Type: new
+Abstract: This work presents a stochastic model predictive control (MPC) framework for linear systems subject to joint-in-time chance constraints under unknown disturbance distributions. Unlike existing stochastic MPC formulations that rely on parametric or Gaussian assumptions or require expensive offline computations, the proposed method leverages conformal prediction (CP) as a streamlined tool to construct finite-sample confidence regions for the system's stochastic error trajectories with minimal computational effort. These regions enable the relaxation of probabilistic constraints while providing formal guarantees. By employing an indirect feedback mechanism and a probabilistic set-based formulation, we prove recursive feasibility of the relaxed optimization problem and establish chance constraint satisfaction in closed-loop. Furthermore, we extend the approach to the more general output feedback setting with unknown measurement noise distributions. Given available noise samples, we establish satisfaction of the joint chance constraints and recursive feasibility via output measurements alone. Numerical examples demonstrate the effectiveness and advantages of the proposed method compared to existing approaches.
+ oai:arXiv.org:2512.10738v1
+ eess.SY
+ cs.RO
+ cs.SY
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-sa/4.0/
- Tom\'as Alves, Jo\~ao Moreira
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lukas Vogel, Andrea Carron, Eleftherios E. Vlahakis, Dimos V. Dimarogonas
- OnCoCo 1.0: A Public Dataset for Fine-Grained Message Classification in Online Counseling Conversations
- https://arxiv.org/abs/2512.09804
- arXiv:2512.09804v1 Announce Type: new
-Abstract: This paper presents OnCoCo 1.0, a new public dataset for fine-grained message classification in online counseling. It is based on a new, integrative system of categories, designed to improve the automated analysis of psychosocial online counseling conversations. Existing category systems, predominantly based on Motivational Interviewing (MI), are limited by their narrow focus and dependence on datasets derived mainly from face-to-face counseling. This limits the detailed examination of textual counseling conversations. In response, we developed a comprehensive new coding scheme that differentiates between 38 types of counselor and 28 types of client utterances, and created a labeled dataset consisting of about 2.800 messages from counseling conversations. We fine-tuned several models on our dataset to demonstrate its applicability. The data and models are publicly available to researchers and practitioners. Thus, our work contributes a new type of fine-grained conversational resource to the language resources community, extending existing datasets for social and mental-health dialogue analysis.
- oai:arXiv.org:2512.09804v1
+ Long-horizon Reasoning Agent for Olympiad-Level Mathematical Problem Solving
+ https://arxiv.org/abs/2512.10739
+ arXiv:2512.10739v1 Announce Type: new
+Abstract: Large language models (LLMs) have achieved significant progress in solving complex reasoning tasks by Reinforcement Learning with Verifiable Rewards (RLVR). This advancement is also inseparable from the oversight automated by reliable verifiers. However, current outcome-based verifiers (OVs) are unable to inspect the unreliable intermediate steps in the long reasoning chains of thought (CoTs). Meanwhile, current process-based verifiers (PVs) have difficulties in reliably detecting errors in the complex long CoTs, limited by the scarcity of high-quality annotations due to the prohibitive costs of human annotations. Therefore, we propose the \textbf{O}utcome-based \textbf{P}rocess \textbf{V}erifier (OPV), which verifies the rationale process of summarized outcomes from long CoTs to achieve both accurate and efficient verification and enable large-scale annotation. To empower the proposed verifier, we adopt an iterative active learning framework with expert annotations to progressively improve the verification capability of OPV with fewer annotation costs. Specifically, in each iteration, the most uncertain cases of the current best OPV are annotated and then subsequently used to train a new OPV through Rejection Fine-Tuning (RFT) and RLVR for the next round. Extensive experiments demonstrate OPV's superior performance and broad applicability. It achieves new state-of-the-art results on our held-out \textsc{\thisbench}, outperforming much larger open-source models such as Qwen3-Max-Preview with an F1 score of 83.1 compared to 76.3. Furthermore, OPV effectively detects false positives within synthetic dataset, closely align with expert assessment. When collaborating with policy models, OPV consistently yields performance gains, e.g., raising the accuracy of DeepSeek-R1-Distill-Qwen-32B from 55.2\% to 73.3\% on AIME2025 as the compute budget scales.
+ oai:arXiv.org:2512.10739v1cs.CL
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-sa/4.0/
- Jens Albrecht, Robert Lehmann, Aleksandra Poltermann, Eric Rudolph, Philipp Steigerwald, Mara Stieler
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Songyang Gao, Yuzhe Gu, Zijian Wu, Lingkai Kong, Wenwei Zhang, Zhongrui Cai, Fan Zheng, Tianyou Ma, Junhao Shen, Haiteng Zhao, Duanyang Zhang, Huilun Zhang, Kuikun Liu, Chengqi Lyu, Yanhui Duan, Chiyu Chen, Ningsheng Ma, Jianfei Gao, Han Lyu, Dahua Lin, Kai Chen
- CHEM: Estimating and Understanding Hallucinations in Deep Learning for Image Processing
- https://arxiv.org/abs/2512.09806
- arXiv:2512.09806v1 Announce Type: new
-Abstract: U-Net and other U-shaped architectures have achieved significant success in image deconvolution tasks. However, challenges have emerged, as these methods might generate unrealistic artifacts or hallucinations, which can interfere with analysis in safety-critical scenarios. This paper introduces a novel approach for quantifying and comprehending hallucination artifacts to ensure trustworthy computer vision models. Our method, termed the Conformal Hallucination Estimation Metric (CHEM), is applicable to any image reconstruction model, enabling efficient identification and quantification of hallucination artifacts. It offers two key advantages: it leverages wavelet and shearlet representations to efficiently extract hallucinations of image features and uses conformalized quantile regression to assess hallucination levels in a distribution-free manner. Furthermore, from an approximation theoretical perspective, we explore the reasons why U-shaped networks are prone to hallucinations. We test the proposed approach on the CANDELS astronomical image dataset with models such as U-Net, SwinUNet, and Learnlets, and provide new perspectives on hallucination from different aspects in deep learning-based image processing.
- oai:arXiv.org:2512.09806v1
- cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ TRIDENT: A Redundant Architecture for Caribbean-Accented Emergency Speech Triage
+ https://arxiv.org/abs/2512.10741
+ arXiv:2512.10741v1 Announce Type: new
+Abstract: Emergency speech recognition systems exhibit systematic performance degradation on non-standard English varieties, creating a critical gap in services for Caribbean populations. We present TRIDENT (Transcription and Routing Intelligence for Dispatcher-Empowered National Triage), a three-layer dispatcher-support architecture designed to structure emergency call inputs for human application of established triage protocols (the ESI for routine operations and START for mass casualty events), even when automatic speech recognition fails.
+ The system combines Caribbean-accent-tuned ASR, local entity extraction via large language models, and bio-acoustic distress detection to provide dispatchers with three complementary signals: transcription confidence, structured clinical entities, and vocal stress indicators. Our key insight is that low ASR confidence, rather than representing system failure, serves as a valuable queue prioritization signal -- particularly when combined with elevated vocal distress markers indicating a caller in crisis whose speech may have shifted toward basilectal registers. A complementary insight drives the entity extraction layer: trained responders and composed bystanders may report life-threatening emergencies without elevated vocal stress, requiring semantic analysis to capture clinical indicators that paralinguistic features miss.
+ We describe the architectural design, theoretical grounding in psycholinguistic research on stress-induced code-switching, and deployment considerations for offline operation during disaster scenarios. This work establishes a framework for accent-resilient emergency AI that ensures Caribbean voices receive equitable access to established national triage protocols. Empirical validation on Caribbean emergency calls remains future work.
+ oai:arXiv.org:2512.10741v1
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Elroy Galbraith, Chadwick Sutherland, Donahue Morgan
+
+
+ Learning to Split: A Reinforcement-Learning-Guided Splitting Heuristic for Neural Network Verification
+ https://arxiv.org/abs/2512.10747
+ arXiv:2512.10747v1 Announce Type: new
+Abstract: State-of-the-art neural network verifiers operate by encoding neural
+ network verification as constraint satisfaction problems. When
+ dealing with standard piecewise-linear activation functions, such as
+ ReLUs, verifiers typically employ branching heuristics that break a
+ complex constraint satisfaction problem into multiple, simpler
+ problems. The verifier's performance depends heavily on the order in
+ which this branching is performed: a poor selection may give rise to
+ exponentially many sub-problem, hampering scalability. Here, we
+ focus on the setting where multiple verification queries must be
+ solved for the same neural network. The core idea is to use past
+ experience to make good branching decisions, expediting
+ verification. We present a reinforcement-learning-based branching
+ heuristic that achieves this, by applying a learning from
+ demonstrations (DQfD) techniques. Our experimental
+ evaluation demonstrates a substantial reduction in average
+ verification time and in the average number of iterations required,
+ compared to modern splitting heuristics. These results highlight
+ the great potential of reinforcement learning in the context of
+ neural network verification.
+ oai:arXiv.org:2512.10747v1
+ cs.LO
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jianfei Li, Ines Rosellon-Inclan, Gitta Kutyniok, Jean-Luc Starck
+ Maya Swisa, Guy Katz
- Certificates for nonnegativity of multivariate integer polynomials under perturbations
- https://arxiv.org/abs/2512.09808
- arXiv:2512.09808v1 Announce Type: new
-Abstract: We develop a general and unconditional framework for certifying the global nonnegativity of multivariate integer polynomials; based on rewriting them as sum of squares modulo their gradient ideals. We remove the two structural assumptions typically required by other approaches, namely that the polynomial attains its infimum and zero-dimensionality of the gradient ideal. Our approach combines a denominator-free stereographic transformation with a refined variant of the Hanzon--Jibetean perturbation scheme. The stereographic transformation preserves nonnegativity while making the polynomial coercive, with explicit bounds on the radius of positivity and on the nonzero critical values. Subsequently, we apply carefully constructed explicit perturbations that enforce zero-dimensionality of the gradient ideal without altering nonnegativity, allowing us to invoke recent algorithms to derive algebraic certificates or rational witness points. We present three algorithms implementing our framework and analyze their bit complexity in detail, which is single exponential with respect to the number of variables. A second contribution is a new explicit SOS perturbation scheme, which allows us to perturb any nonnegative polynomial in such a way that it can be written as a sum of squares (SOS). In contrast to Lasserre's classical SOS approximation, which guaranties density but currently does not provide an effective control over the perturbation size, we only derive concrete perturbation bounds ensuring that a nonnegative polynomial enters the SOS cone.
- oai:arXiv.org:2512.09808v1
- cs.SC
- math.OC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Intrinsically Correct Algorithms and Recursive Coalgebras
+ https://arxiv.org/abs/2512.10748
+ arXiv:2512.10748v1 Announce Type: new
+Abstract: Recursive coalgebras provide an elegant categorical tool for modelling recursive algorithms and analysing their termination and correctness. By considering coalgebras over categories of suitably indexed families, the correctness of the corresponding algorithms follows intrinsically just from the type of the computed maps. However, proving recursivity of the underlying coalgebras is non-trivial, and proofs are typically ad hoc. This layer of complexity impedes the formalization of coalgebraically defined recursive algorithms in proof assistants. We introduce a framework for constructing coalgebras which are intrinsically recursive in the sense that the type of the coalgebra guarantees recursivity from the outset. Our approach is based on the novel concept of a well-founded functor on a category of families indexed by a well-founded relation. We show as our main result that every coalgebra for a well-founded functor is recursive, and demonstrate that well-known techniques for proving recursivity and termination such as ranking functions are subsumed by this abstract setup. We present a number of case studies, including Quicksort, the Euclidian algorithm, and CYK parsing. Both the main theoretical result and selected case studies have been formalized in Cubical Agda.
+ oai:arXiv.org:2512.10748v1
+ cs.PL
+ cs.LO
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mat\'ias R Bender (TROPICAL), Kozhasov Khazhgali (UniCA), Tsigaridas Elias (OURAGAN), Zhu Chaoping (OURAGAN)
+ Cass Alexandru, Henning Urbat, Thorsten Wi{\ss}mann
- Towards Practical and Usable In-network Classification
- https://arxiv.org/abs/2512.09809
- arXiv:2512.09809v1 Announce Type: new
-Abstract: In-network machine learning enables real-time classification directly on network hardware, offering consistently low inference latency. However, current solutions are limited by strict hardware constraints, scarce on-device resources, and poor usability, making them impractical for ML developers and cloud operators. To this end, we propose ACORN, an end-to-end system that automates the distributed deployment of practical machine learning models across the network. ACORN provides a fully automated pipeline that loads and deploys Python ML models on network devices using an optimized deployment plan from an ILP planner. To support larger models under hardware constraints and allow runtime programmability, ACORN adopts a novel data plane representation for Decision Tree, Random Forest, and Support Vector Machine models. We implement ACORN prototype in P4 and run it on real programmable hardware. Our evaluation shows ACORN can deploy classification ML models with 2-4x more features than state-of-the-art solutions, while imposing negligible overhead on network performance and traffic. We will make our data plane program, model translator, optimizer, and all related scripts publicly available.
- oai:arXiv.org:2512.09809v1
- cs.NI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Echoes of Automation: How Bots Shaped Political Discourse in Brazil
+ https://arxiv.org/abs/2512.10749
+ arXiv:2512.10749v1 Announce Type: new
+Abstract: In an era where social media platforms are central to political communication, the activity of bots raises pressing concerns about amplification, manipulation, and misinformation. Drawing on more than 315 million tweets posted from August 2018 to June 2022, we examine behavioural patterns, sentiment dynamics, and the thematic focus of bot- versus human-generated content spanning the 2018 Brazilian presidential election and the lead-up to the 2022 contest. Our analysis shows that bots relied disproportionately on retweets and replies, with reply activity spiking after the 2018 election, suggesting tactics of conversational infiltration and amplification. Sentiment analysis indicates that bots maintained a narrower emotional tone, in contrast to humans, whose sentiment fluctuated more strongly with political events. Topic modelling further reveals bots' repetitive, Bolsonaro-centric messaging, while human users engaged with a broader range of candidates, civic concerns, and personal reflections. These findings underscore bots' role as amplifiers of narrow agendas and their potential to distort online political discourse.
+ oai:arXiv.org:2512.10749v1
+ cs.SI
+ cs.CY
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Di Zhu, Jianxi Chen, Hyojoon Kim
+ Merve Ipek Bal, Diogo Pacheco
- Incorporating Fairness in Neighborhood Graphs for Fair Spectral Clustering
- https://arxiv.org/abs/2512.09810
- arXiv:2512.09810v1 Announce Type: new
-Abstract: Graph clustering plays a pivotal role in unsupervised learning methods like spectral clustering, yet traditional methods for graph clustering often perpetuate bias through unfair graph constructions that may underrepresent some groups. The current research introduces novel approaches for constructing fair k-nearest neighbor (kNN) and fair epsilon-neighborhood graphs that proactively enforce demographic parity during graph formation. By incorporating fairness constraints at the earliest stage of neighborhood selection steps, our approaches incorporate proportional representation of sensitive features into the local graph structure while maintaining geometric consistency.Our work addresses a critical gap in pre-processing for fair spectral clustering, demonstrating that topological fairness in graph construction is essential for achieving equitable clustering outcomes. Widely used graph construction methods like kNN and epsilon-neighborhood graphs propagate edge based disparate impact on sensitive groups, leading to biased clustering results. Providing representation of each sensitive group in the neighborhood of every node leads to fairer spectral clustering results because the topological features of the graph naturally reflect equitable group ratios. This research fills an essential shortcoming in fair unsupervised learning, by illustrating how topological fairness in graph construction inherently facilitates fairer spectral clustering results without the need for changes to the clustering algorithm itself. Thorough experiments on three synthetic datasets, seven real-world tabular datasets, and three real-world image datasets prove that our fair graph construction methods surpass the current baselines in graph clustering tasks.
- oai:arXiv.org:2512.09810v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ LDP: Parameter-Efficient Fine-Tuning of Multimodal LLM for Medical Report Generation
+ https://arxiv.org/abs/2512.10750
+ arXiv:2512.10750v1 Announce Type: new
+Abstract: Colonoscopic polyp diagnosis is pivotal for early colorectal cancer detection, yet traditional automated reporting suffers from inconsistencies and hallucinations due to the scarcity of high-quality multimodal medical data. To bridge this gap, we propose LDP, a novel framework leveraging multimodal large language models (MLLMs) for professional polyp diagnosis report generation. Specifically, we curate MMEndo, a multimodal endoscopic dataset comprising expert-annotated colonoscopy image-text pairs. We fine-tune the Qwen2-VL-7B backbone using Parameter-Efficient Fine-Tuning (LoRA) and align it with clinical standards via Direct Preference Optimization (DPO). Extensive experiments show that our LDP outperforms existing baselines on both automated metrics and rigorous clinical expert evaluations (achieving a Physician Score of 7.2/10), significantly reducing training computational costs by 833x compared to full fine-tuning. The proposed solution offers a scalable, clinically viable path for primary healthcare, with additional validation on the IU-XRay dataset confirming its robustness.
+ oai:arXiv.org:2512.10750v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Adithya K Moorthy, V Vijaya Saradhi, Bhanu Prasad
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tianyu Zhou, Junyi Tang, Zehui Li, Dahong Qian, Suncheng Xiang
- DynaIP: Dynamic Image Prompt Adapter for Scalable Zero-shot Personalized Text-to-Image Generation
- https://arxiv.org/abs/2512.09814
- arXiv:2512.09814v1 Announce Type: new
-Abstract: Personalized Text-to-Image (PT2I) generation aims to produce customized images based on reference images. A prominent interest pertains to the integration of an image prompt adapter to facilitate zero-shot PT2I without test-time fine-tuning. However, current methods grapple with three fundamental challenges: 1. the elusive equilibrium between Concept Preservation (CP) and Prompt Following (PF), 2. the difficulty in retaining fine-grained concept details in reference images, and 3. the restricted scalability to extend to multi-subject personalization. To tackle these challenges, we present Dynamic Image Prompt Adapter (DynaIP), a cutting-edge plugin to enhance the fine-grained concept fidelity, CP-PF balance, and subject scalability of SOTA T2I multimodal diffusion transformers (MM-DiT) for PT2I generation. Our key finding is that MM-DiT inherently exhibit decoupling learning behavior when injecting reference image features into its dual branches via cross attentions. Based on this, we design an innovative Dynamic Decoupling Strategy that removes the interference of concept-agnostic information during inference, significantly enhancing the CP-PF balance and further bolstering the scalability of multi-subject compositions. Moreover, we identify the visual encoder as a key factor affecting fine-grained CP and reveal that the hierarchical features of commonly used CLIP can capture visual information at diverse granularity levels. Therefore, we introduce a novel Hierarchical Mixture-of-Experts Feature Fusion Module to fully leverage the hierarchical features of CLIP, remarkably elevating the fine-grained concept fidelity while also providing flexible control of visual granularity. Extensive experiments across single- and multi-subject PT2I tasks verify that our DynaIP outperforms existing approaches, marking a notable advancement in the field of PT2l generation.
- oai:arXiv.org:2512.09814v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Quantifying displacement: a gentrification's consequence via persistent homology
+ https://arxiv.org/abs/2512.10753
+ arXiv:2512.10753v1 Announce Type: new
+Abstract: Gentrification is the process by which wealthier individuals move into a previously lower-income neighbourhood. Among the effects of this multi-faceted phenomenon are rising living costs, cultural and social changes-where local traditions, businesses, and community networks are replaced or diluted by new, more affluent lifestyles-and population displacement, where long-term, lower-income residents are priced out by rising rents and property taxes. Despite its relevance, quantifying displacement presents difficulties stemming from lack of information on motives for relocation and from the fact that a long time-span must be analysed: displacement is a gradual process (leases end or conditions change at different times), impossible to capture in one data snapshot. We introduce a novel tool to overcome these difficulties. Using only publicly available address change data, we construct four cubical complexes which simultaneously incorporate geographical and temporal information of people moving, and then analyse them building on Topological Data Analysis tools. Finally, we demonstrate the potential of this method through a 20-year case study of Madrid, Spain. The results reveal its ability to capture population displacement and to identify the specific neighbourhoods and years affected--patterns that cannot be inferred from raw address change data.
+ oai:arXiv.org:2512.10753v1
+ cs.CG
+ cs.SI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Zhizhong Wang, Tianyi Chu, Zeyi Huang, Nanyang Wang, Kehan Li
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Rita Rodr\'iguez V\'azquez, Manuel Cuerno
- A roadmap of geospatial soil quality analysis systems
- https://arxiv.org/abs/2512.09817
- arXiv:2512.09817v1 Announce Type: new
-Abstract: Soil quality (SQ) plays a crucial role in sustainable agriculture, environmental conservation, and land-use planning. Traditional SQ assessment techniques rely on costly, labor-intensive sampling and laboratory analysis, limiting their spatial and temporal coverage. Advances in Geographic Information Systems (GIS), remote sensing, and machine learning (ML) enabled efficient SQ evaluation. This paper presents a comprehensive roadmap distinguishing it from previous reviews by proposing a unified and modular pipeline that integrates multi-source soil data, GIS and remote sensing tools, and machine learning techniques to support transparent and scalable soil quality assessment. It also includes practical applications. Contrary to existing studies that predominantly target isolated soil parameters or specific modeling methodologies, this approach consolidates recent advancements in Geographic Information Systems (GIS), remote sensing technologies, and machine learning algorithms within the entire soil quality assessment pipeline. It also addresses existing challenges and limitations while exploring future developments and emerging trends in the field that can deliver the next generation of soil quality systems making them more transparent, adaptive, and aligned with sustainable land management.
- oai:arXiv.org:2512.09817v1
- cs.CE
+ OPV: Outcome-based Process Verifier for Efficient Long Chain-of-Thought Verification
+ https://arxiv.org/abs/2512.10756
+ arXiv:2512.10756v1 Announce Type: new
+Abstract: Large language models (LLMs) have achieved significant progress in solving complex reasoning tasks by Reinforcement Learning with Verifiable Rewards (RLVR). This advancement is also inseparable from the oversight automated by reliable verifiers. However, current outcome-based verifiers (OVs) are unable to inspect the unreliable intermediate steps in the long reasoning chains of thought (CoTs). Meanwhile, current process-based verifiers (PVs) have difficulties in reliably detecting errors in the complex long CoTs, limited by the scarcity of high-quality annotations due to the prohibitive costs of human annotations. Therefore, we propose the Outcome-based Process Verifier (OPV), which verifies the rationale process of summarized outcomes from long CoTs to achieve both accurate and efficient verification and enable large-scale annotation. To empower the proposed verifier, we adopt an iterative active learning framework with expert annotations to progressively improve the verification capability of OPV with fewer annotation costs. Specifically, in each iteration, the most uncertain cases of the current best OPV are annotated and then subsequently used to train a new OPV through Rejection Fine-Tuning (RFT) and RLVR for the next round. Extensive experiments demonstrate OPV's superior performance and broad applicability. It achieves new state-of-the-art results on our held-out OPV-Bench, outperforming much larger open-source models such as Qwen3-Max-Preview with an F1 score of 83.1 compared to 76.3. Furthermore, OPV effectively detects false positives within synthetic dataset, closely align with expert assessment. When collaborating with policy models, OPV consistently yields performance gains, e.g., raising the accuracy of DeepSeek-R1-Distill-Qwen-32B from 55.2% to 73.3% on AIME2025 as the compute budget scales.
+ oai:arXiv.org:2512.10756v1
+ cs.CLcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Habiba BEN ABDERRAHMANE, Slimane Oulad-Naoui, Benameur ZIANI
+ http://creativecommons.org/licenses/by/4.0/
+ Zijian Wu, Lingkai Kong, Wenwei Zhang, Songyang Gao, Yuzhe Gu, Zhongrui Cai, Tianyou Ma, Yuhong Liu, Zhi Wang, Runyuan Ma, Guangyu Wang, Wei Li, Conghui He, Dahua Lin, Kai Chen
- Weakly-unambiguous Parikh automata and their link to holonomic series
- https://arxiv.org/abs/2512.09823
- arXiv:2512.09823v1 Announce Type: new
-Abstract: We investigate the connection between properties of formal languages and properties of their generating series, with a focus on the class of holonomic power series. We first prove a strong version of a conjecture by Castiglione and Massazza: weakly-unambiguous Parikh automata are equivalent to unambiguous two-way reversal bounded counter machines, and their multivariate generating series are holonomic. We then show that the converse is not true: we construct a language whose generating series is algebraic (thus holonomic), but which is inherently weakly-ambiguous as a Parikh automata language. Finally, we prove an effective decidability result for the inclusion problem for weakly-unambiguous Parikh automata, and provide an upper-bound on to its complexity.
- oai:arXiv.org:2512.09823v1
- cs.FL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Designing AI-Resilient Assessments Using Interconnected Problems: A Theoretically Grounded and Empirically Validated Framework
+ https://arxiv.org/abs/2512.10758
+ arXiv:2512.10758v1 Announce Type: new
+Abstract: The rapid adoption of generative AI has undermined traditional modular assessments in computing education, creating a disconnect between academic evaluation and industry practice. This paper presents a theoretically grounded framework for designing AI-resilient assessments, supported by formal analysis and multi-year empirical validation.
+ We make three contributions. First, we establish two theoretical results: (1) assessments composed of interconnected problems, where outputs feed into subsequent stages, are more AI-resilient than modular assessments because current language models struggle with sustained multi-step reasoning and context; and (2) semi-structured problems with deterministic success criteria provide more reliable measures of student competency than fully open-ended projects, which allow AI systems to default to familiar solution patterns. These results challenge common policy and institutional guidance that promotes open-ended assessments as the primary safeguard for academic integrity.
+ Second, we validate these results using data from four university data science courses (N = 138). While students achieve near-perfect scores on AI-assisted modular homework, performance drops by roughly 30 percentage points on proctored exams, indicating substantial AI score inflation. Interconnected projects remain strongly correlated with modular assessments, suggesting they measure the same underlying skills while resisting AI misuse. Proctored exams show weaker alignment, implying they may assess test-taking ability rather than intended learning outcomes.
+ Third, we translate these findings into a practical assessment design framework. The proposed approach enables educators to create assessments that promote integrative thinking, reflect real-world AI-augmented workflows, and naturally resist trivial delegation to generative AI, thereby helping restore academic integrity.
+ oai:arXiv.org:2512.10758v1
+ cs.CY
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- 10.4230/LIPIcs.ICALP.2020.114
- Alin Bostan, Arnaud Carayol, Florent Koechlin, Cyril Nicaud
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kaihua Ding
- Composing Concepts from Images and Videos via Concept-prompt Binding
- https://arxiv.org/abs/2512.09824
- arXiv:2512.09824v1 Announce Type: new
-Abstract: Visual concept composition, which aims to integrate different elements from images and videos into a single, coherent visual output, still falls short in accurately extracting complex concepts from visual inputs and flexibly combining concepts from both images and videos. We introduce Bind & Compose, a one-shot method that enables flexible visual concept composition by binding visual concepts with corresponding prompt tokens and composing the target prompt with bound tokens from various sources. It adopts a hierarchical binder structure for cross-attention conditioning in Diffusion Transformers to encode visual concepts into corresponding prompt tokens for accurate decomposition of complex visual concepts. To improve concept-token binding accuracy, we design a Diversify-and-Absorb Mechanism that uses an extra absorbent token to eliminate the impact of concept-irrelevant details when training with diversified prompts. To enhance the compatibility between image and video concepts, we present a Temporal Disentanglement Strategy that decouples the training process of video concepts into two stages with a dual-branch binder structure for temporal modeling. Evaluations demonstrate that our method achieves superior concept consistency, prompt fidelity, and motion quality over existing approaches, opening up new possibilities for visual creativity.
- oai:arXiv.org:2512.09824v1
+ Blood Pressure Prediction for Coronary Artery Disease Diagnosis using Coronary Computed Tomography Angiography
+ https://arxiv.org/abs/2512.10765
+ arXiv:2512.10765v1 Announce Type: new
+Abstract: Computational fluid dynamics (CFD) based simulation of coronary blood flow provides valuable hemodynamic markers, such as pressure gradients, for diagnosing coronary artery disease (CAD). However, CFD is computationally expensive, time-consuming, and difficult to integrate into large-scale clinical workflows. These limitations restrict the availability of labeled hemodynamic data for training AI models and hinder broad adoption of non-invasive, physiology based CAD assessment. To address these challenges, we develop an end to end pipeline that automates coronary geometry extraction from coronary computed tomography angiography (CCTA), streamlines simulation data generation, and enables efficient learning of coronary blood pressure distributions. The pipeline reduces the manual burden associated with traditional CFD workflows while producing consistent training data. We further introduce a diffusion-based regression model designed to predict coronary blood pressure directly from CCTA derived features, bypassing the need for slow CFD computation during inference. Evaluated on a dataset of simulated coronary hemodynamics, the proposed model achieves state of the art performance, with an R2 of 64.42%, a root mean squared error of 0.0974, and a normalized RMSE of 0.154, outperforming several baseline approaches. This work provides a scalable and accessible framework for rapid, non-invasive blood pressure prediction to support CAD diagnosis.
+ oai:arXiv.org:2512.10765v1cs.CV
- cs.AI
- cs.MM
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xianghao Kong, Zeyu Zhang, Yuwei Guo, Zhuoran Zhao, Songchun Zhang, Anyi Rao
+ http://creativecommons.org/licenses/by/4.0/
+ Rene Lisasi, Michele Esposito, Chen Zhao
- A Relaxed Randomized Averaging Block Extended Bregman-Kaczmarz Method for Combined Optimization Problems
- https://arxiv.org/abs/2512.09825
- arXiv:2512.09825v1 Announce Type: new
-Abstract: Randomized Kaczmarz-type methods are widely used for their simplicity and efficiency in solving large-scale linear systems and optimization problems. However, their applicability is limited when dealing with inconsistent systems or incorporating structural information such as sparsity. In this work, we propose a \emph{relaxed randomized averaging block extended Bregman-Kaczmarz} (rRABEBK) method for solving a broad class of combined optimization problems. The proposed method integrates an averaging block strategy with two relaxation parameters to accelerate convergence and enhance numerical stability. We establish a rigorous convergence theory showing that rRABEBK achieves linear convergence in expectation, with explicit constants that quantify the effect of the relaxation mechanism, and a provably faster rate than the classical randomized extended Bregman-Kaczmarz method. Our method can be readily adapted to sparse least-squares problems and extended to both consistent and inconsistent systems without modification. Complementary numerical experiments corroborate the theoretical findings and demonstrate that rRABEBK significantly outperforms the existing Kaczmarz-type algorithms in terms of both iteration complexity and computational efficiency, highlighting both its practical and theoretical advantages.
- oai:arXiv.org:2512.09825v1
- math.NA
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Metaphor-based Jailbreaking Attacks on Text-to-Image Models
+ https://arxiv.org/abs/2512.10766
+ arXiv:2512.10766v1 Announce Type: new
+Abstract: Text-to-image~(T2I) models commonly incorporate defense mechanisms to prevent the generation of sensitive images. Unfortunately, recent jailbreaking attacks have shown that adversarial prompts can effectively bypass these mechanisms and induce T2I models to produce sensitive content, revealing critical safety vulnerabilities. However, existing attack methods implicitly assume that the attacker knows the type of deployed defenses, which limits their effectiveness against unknown or diverse defense mechanisms. In this work, we introduce \textbf{MJA}, a \textbf{m}etaphor-based \textbf{j}ailbreaking \textbf{a}ttack method inspired by the Taboo game, aiming to effectively and efficiently attack diverse defense mechanisms without prior knowledge of their type by generating metaphor-based adversarial prompts. Specifically, MJA consists of two modules: an LLM-based multi-agent generation module~(MLAG) and an adversarial prompt optimization module~(APO). MLAG decomposes the generation of metaphor-based adversarial prompts into three subtasks: metaphor retrieval, context matching, and adversarial prompt generation. Subsequently, MLAG coordinates three LLM-based agents to generate diverse adversarial prompts by exploring various metaphors and contexts. To enhance attack efficiency, APO first trains a surrogate model to predict the attack results of adversarial prompts and then designs an acquisition strategy to adaptively identify optimal adversarial prompts. Extensive experiments on T2I models with various external and internal defense mechanisms demonstrate that MJA outperforms six baseline methods, achieving stronger attack performance while using fewer queries. Code is available in https://github.com/datar001/metaphor-based-jailbreaking-attack.
+ oai:arXiv.org:2512.10766v1
+ cs.CR
+ cs.AI
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zeyu Dong, Aqin Xiao, Guojian Yin, Junfeng Yin
+ Chenyu Zhang, Yiwen Ma, Lanjun Wang, Wenhui Li, Yi Tu, An-An Liu
- RIFT: A Scalable Methodology for LLM Accelerator Fault Assessment using Reinforcement Learning
- https://arxiv.org/abs/2512.09829
- arXiv:2512.09829v1 Announce Type: new
-Abstract: The massive scale of modern AI accelerators presents critical challenges to traditional fault assessment methodologies, which face prohibitive computational costs and provide poor coverage of critical failure modes. This paper introduces RIFT (Reinforcement Learning-guided Intelligent Fault Targeting), a scalable framework that automates the discovery of minimal, high-impact fault scenarios for efficient design-time fault assessment. RIFT transforms the complex search for worst-case faults into a sequential decision-making problem, combining hybrid sensitivity analysis for search space pruning with reinforcement learning to intelligently generate minimal, high-impact test suites. Evaluated on billion-parameter Large Language Model (LLM) workloads using NVIDIA A100 GPUs, RIFT achieves a \textbf{2.2$\times$} fault assessment speedup over evolutionary methods and reduces the required test vector volume by over \textbf{99\%} compared to random fault injection, all while achieving \textbf{superior fault coverage}. The proposed framework also provides actionable data to enable intelligent hardware protection strategies, demonstrating that RIFT-guided selective error correction code provides a \textbf{12.8$\times$} improvement in \textbf{cost-effectiveness} (coverage per unit area) compared to uniform triple modular redundancy protection. RIFT automatically generates UVM-compliant verification artifacts, ensuring its findings are directly actionable and integrable into commercial RTL verification workflows.
- oai:arXiv.org:2512.09829v1
- cs.AI
+ Template-Free Retrosynthesis with Graph-Prior Augmented Transformers
+ https://arxiv.org/abs/2512.10770
+ arXiv:2512.10770v1 Announce Type: new
+Abstract: Retrosynthesis reaction prediction seeks to infer plausible reactant molecules for a given product and is a central problem in computer-aided organic synthesis. Despite recent progress, many existing models still fall short of the accuracy and robustness required for practical deployment. This work studies a template-free, Transformer-based framework that eliminates reliance on handcrafted reaction templates or additional chemical rule engines. The model injects molecular graph information into the attention mechanism to jointly exploit \SMILES\ sequences and structural cues, and further applies a paired data augmentation strategy to enhance training diversity and scale. On the USPTO-50K benchmark, our proposed approach achieves state-of-the-art performance among template-free methods and substantially outperforming a vanilla Transformer baseline.
+ oai:arXiv.org:2512.10770v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Khurram Khalil, Muhammad Mahad Khaliq, Khaza Anuarul Hoque
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Youjun Zhao
- LLMs in Interpreting Legal Documents
- https://arxiv.org/abs/2512.09830
- arXiv:2512.09830v1 Announce Type: new
-Abstract: This chapter explores the application of Large Language Models in the legal domain, showcasing their potential to optimise and augment traditional legal tasks by analysing possible use cases, such as assisting in interpreting statutes, contracts, and case law, enhancing clarity in legal summarisation, contract negotiation, and information retrieval. There are several challenges that can arise from the application of such technologies, such as algorithmic monoculture, hallucinations, and compliance with existing regulations, including the EU's AI Act and recent U.S. initiatives, alongside the emerging approaches in China. Furthermore, two different benchmarks are presented.
- oai:arXiv.org:2512.09830v1
+ Grow Up and Merge: Scaling Strategies for Efficient Language Adaptation
+ https://arxiv.org/abs/2512.10772
+ arXiv:2512.10772v1 Announce Type: new
+Abstract: Achieving high-performing language models which include medium- and lower-resource languages remains a challenge. Massively multilingual models still underperform compared to language-specific adaptations, especially at smaller model scales. In this work, we investigate scaling as an efficient strategy for adapting pretrained models to new target languages. Through comprehensive scaling ablations with approximately FLOP-matched models, we test whether upscaling an English base model enables more effective and resource-efficient adaptation than standard continued pretraining. We find that, once exposed to sufficient target-language data, larger upscaled models can match or surpass the performance of smaller models continually pretrained on much more data, demonstrating the benefits of scaling for data efficiency. Scaling also helps preserve the base model's capabilities in English, thus reducing catastrophic forgetting. Finally, we explore whether such scaled, language-specific models can be merged to construct modular and flexible multilingual systems. We find that while merging remains less effective than joint multilingual training, upscaled merges perform better than smaller ones. We observe large performance differences across merging methods, suggesting potential for improvement through merging approaches specialized for language-level integration.
+ oai:arXiv.org:2512.10772v1cs.CLcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Simone Corbo
+ Kevin Glocker, K\"atriin Kukk, Romina Oji, Marcel Bollmann, Marco Kuhlmann, Jenny Kunz
- Interpretation as Linear Transformation: A Cognitive-Geometric Model of Belief and Meaning
- https://arxiv.org/abs/2512.09831
- arXiv:2512.09831v1 Announce Type: new
-Abstract: This paper develops a geometric framework for modeling belief, motivation, and influence across cognitively heterogeneous agents. Each agent is represented by a personalized value space, a vector space encoding the internal dimensions through which the agent interprets and evaluates meaning. Beliefs are formalized as structured vectors-abstract beings-whose transmission is mediated by linear interpretation maps. A belief survives communication only if it avoids the null spaces of these maps, yielding a structural criterion for intelligibility, miscommunication, and belief death.
- Within this framework, I show how belief distortion, motivational drift, counterfactual evaluation, and the limits of mutual understanding arise from purely algebraic constraints. A central result-"the No-Null-Space Leadership Condition"-characterizes leadership as a property of representational reachability rather than persuasion or authority. More broadly, the model explains how abstract beings can propagate, mutate, or disappear as they traverse diverse cognitive geometries.
- The account unifies insights from conceptual spaces, social epistemology, and AI value alignment by grounding meaning preservation in structural compatibility rather than shared information or rationality. I argue that this cognitive-geometric perspective clarifies the epistemic boundaries of influence in both human and artificial systems, and offers a general foundation for analyzing belief dynamics across heterogeneous agents.
- oai:arXiv.org:2512.09831v1
- cs.AI
- cs.LG
- cs.MA
- cs.SI
- Thu, 11 Dec 2025 00:00:00 -0500
+ AERMANI-Diffusion: Regime-Conditioned Diffusion for Dynamics Learning in Aerial Manipulators
+ https://arxiv.org/abs/2512.10773
+ arXiv:2512.10773v1 Announce Type: new
+Abstract: Aerial manipulators undergo rapid, configuration-dependent changes in inertial coupling forces and aerodynamic forces, making accurate dynamics modeling a core challenge for reliable control. Analytical models lose fidelity under these nonlinear and nonstationary effects, while standard data-driven methods such as deep neural networks and Gaussian processes cannot represent the diverse residual behaviors that arise across different operating conditions. We propose a regime-conditioned diffusion framework that models the full distribution of residual forces using a conditional diffusion process and a lightweight temporal encoder. The encoder extracts a compact summary of recent motion and configuration, enabling consistent residual predictions even through abrupt transitions or unseen payloads. When combined with an adaptive controller, the framework enables dynamics uncertainty compensation and yields markedly improved tracking accuracy in real-world tests.
+ oai:arXiv.org:2512.10773v1
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chainarong Amornbunchornvej
+ http://creativecommons.org/licenses/by/4.0/
+ Samaksh Ujjawal, Shivansh Pratap Singh, Naveen Sudheer Nair, Rishabh Dev Yadav, Wei Pan, Spandan Roy
- Bridging the Basilisk Astrodynamics Framework with ROS 2 for Modular Spacecraft Simulation and Hardware Integration
- https://arxiv.org/abs/2512.09833
- arXiv:2512.09833v1 Announce Type: new
-Abstract: Integrating high-fidelity spacecraft simulators with modular robotics frameworks remains a challenge for autonomy development. This paper presents a lightweight, open-source communication bridge between the Basilisk astrodynamics simulator and the Robot Operating System 2 (ROS 2), enabling real-time, bidirectional data exchange for spacecraft control. The bridge requires no changes to Basilisk's core and integrates seamlessly with ROS 2 nodes. We demonstrate its use in a leader-follower formation flying scenario using nonlinear model predictive control, deployed identically in both simulation and on the ATMOS planar microgravity testbed. This setup supports rapid development, hardware-in-the-loop testing, and seamless transition from simulation to hardware. The bridge offers a flexible and scalable platform for modular spacecraft autonomy and reproducible research workflows.
- oai:arXiv.org:2512.09833v1
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Building Audio-Visual Digital Twins with Smartphones
+ https://arxiv.org/abs/2512.10778
+ arXiv:2512.10778v1 Announce Type: new
+Abstract: Digital twins today are almost entirely visual, overlooking acoustics-a core component of spatial realism and interaction. We introduce AV-Twin, the first practical system that constructs editable audio-visual digital twins using only commodity smartphones. AV-Twin combines mobile RIR capture and a visual-assisted acoustic field model to efficiently reconstruct room acoustics. It further recovers per-surface material properties through differentiable acoustic rendering, enabling users to modify materials, geometry, and layout while automatically updating both audio and visuals. Together, these capabilities establish a practical path toward fully modifiable audio-visual digital twins for real-world environments.
+ oai:arXiv.org:2512.10778v1
+ cs.SD
+ cs.MM
+ eess.AS
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Elias Krantz, Ngai Nam Chan, Gunnar Tibert, Huina Mao, Christer Fuglesang
+ http://creativecommons.org/licenses/by/4.0/
+ Zitong Lan, Yiwei Tang, Yuhan Wang, Haowen Lai, Yido Hao, Mingmin Zhao
- Predicting the Containment Time of California Wildfires Using Machine Learning
- https://arxiv.org/abs/2512.09835
- arXiv:2512.09835v1 Announce Type: new
-Abstract: California's wildfire season keeps getting worse over the years, overwhelming the emergency response teams. These fires cause massive destruction to both property and human life. Because of these reasons, there's a growing need for accurate and practical predictions that can help assist with resources allocation for the Wildfire managers or the response teams. In this research, we built machine learning models to predict the number of days it will require to fully contain a wildfire in California. Here, we addressed an important gap in the current literature. Most prior research has concentrated on wildfire risk or how fires spread, and the few that examine the duration typically predict it in broader categories rather than a continuous measure. This research treats the wildfire duration prediction as a regression task, which allows for more detailed and precise forecasts rather than just the broader categorical predictions used in prior work. We built the models by combining three publicly available datasets from California Department of Forestry and Fire Protection's Fire and Resource Assessment Program (FRAP). This study compared the performance of baseline ensemble regressor, Random Forest and XGBoost, with a Long Short-Term Memory (LSTM) neural network. The results show that the XGBoost model slightly outperforms the Random Forest model, likely due to its superior handling of static features in the dataset. The LSTM model, on the other hand, performed worse than the ensemble models because the dataset lacked temporal features. Overall, this study shows that, depending on the feature availability, Wildfire managers or Fire management authorities can select the most appropriate model to accurately predict wildfire containment duration and allocate resources effectively.
- oai:arXiv.org:2512.09835v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Lax Modal Lambda Calculi
+ https://arxiv.org/abs/2512.10779
+ arXiv:2512.10779v1 Announce Type: new
+Abstract: Intuitionistic modal logics (IMLs) extend intuitionistic propositional logic with modalities such as the box and diamond connectives. Advances in the study of IMLs have inspired several applications in programming languages via the development of corresponding type theories with modalities. Until recently, IMLs with diamonds have been misunderstood as somewhat peculiar and unstable, causing the development of type theories with diamonds to lag behind type theories with boxes. In this article, we develop a family of typed-lambda calculi corresponding to sublogics of a peculiar IML with diamonds known as Lax logic. These calculi provide a modal logical foundation for various strong functors in typed-functional programming. We present possible-world and categorical semantics for these calculi and constructively prove normalization, equational completeness and proof-theoretic inadmissibility results. Our main results have been formalized using the proof assistant Agda.
+ oai:arXiv.org:2512.10779v1
+ cs.LO
+ cs.PL
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shashank Bhardwaj
+ http://creativecommons.org/licenses/by/4.0/
+ Nachiappan Valliappan
- Fast Factorized Learning: Powered by In-Memory Database Systems
- https://arxiv.org/abs/2512.09836
- arXiv:2512.09836v1 Announce Type: new
-Abstract: Learning models over factorized joins avoids redundant computations by identifying and pre-computing shared cofactors. Previous work has investigated the performance gain when computing cofactors on traditional disk-based database systems. Due to the absence of published code, the experiments could not be reproduced on in-memory database systems. This work describes the implementation when using cofactors for in-database factorized learning. We benchmark our open-source implementation for learning linear regression on factorized joins with PostgreSQL -- as a disk-based database system -- and HyPer -- as an in-memory engine. The evaluation shows a performance gain of factorized learning on in-memory database systems by 70\% to non-factorized learning and by a factor of 100 compared to disk-based database systems. Thus, modern database engines can contribute to the machine learning pipeline by pre-computing aggregates prior to data extraction to accelerate training.
- oai:arXiv.org:2512.09836v1
- cs.DB
+ Script Gap: Evaluating LLM Triage on Indian Languages in Native vs Roman Scripts in a Real World Setting
+ https://arxiv.org/abs/2512.10780
+ arXiv:2512.10780v1 Announce Type: new
+Abstract: Large Language Models (LLMs) are increasingly deployed in high-stakes clinical applications in India. In many such settings, speakers of Indian languages frequently communicate using romanized text rather than native scripts, yet existing research rarely evaluates this orthographic variation using real-world data. We investigate how romanization impacts the reliability of LLMs in a critical domain: maternal and newborn healthcare triage. We benchmark leading LLMs on a real-world dataset of user-generated queries spanning five Indian languages and Nepali. Our results reveal consistent degradation in performance for romanized messages, with F1 scores trailing those of native scripts by 5-12 points. At our partner maternal health organization in India, this gap could cause nearly 2 million excess errors in triage. Crucially, this performance gap by scripts is not due to a failure in clinical reasoning. We demonstrate that LLMs often correctly infer the semantic intent of romanized queries. Nevertheless, their final classification outputs remain brittle in the presence of orthographic noise in romanized inputs. Our findings highlight a critical safety blind spot in LLM-based health systems: models that appear to understand romanized input may still fail to act on it reliably.
+ oai:arXiv.org:2512.10780v1
+ cs.CLcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Bernhard St\"ockl, Maximilian E. Sch\"ule
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Manurag Khullar, Utkarsh Desai, Poorva Malviya, Aman Dalmia, Zheyuan Ryan Shi
- ChronusOmni: Improving Time Awareness of Omni Large Language Models
- https://arxiv.org/abs/2512.09841
- arXiv:2512.09841v1 Announce Type: new
-Abstract: Time awareness is a fundamental ability of omni large language models, especially for understanding long videos and answering complex questions. Previous approaches mainly target vision-language scenarios and focus on the explicit temporal grounding questions, such as identifying when a visual event occurs or determining what event happens at aspecific time. However, they often make insufficient use of the audio modality, and overlook implicit temporal grounding across modalities--for example, identifying what is visually present when a character speaks, or determining what is said when a visual event occurs--despite such cross-modal temporal relations being prevalent in real-world scenarios. In this paper, we propose ChronusOmni, an omni large language model designed to enhance temporal awareness for both explicit and implicit audiovisual temporal grounding. First, we interleave text-based timestamp tokens with visual and audio representations at each time unit, enabling unified temporal modeling across modalities. Second, to enforce correct temporal ordering and strengthen fine-grained temporal reasoning, we incorporate reinforcement learning with specially designed reward functions. Moreover, we construct ChronusAV, a temporally-accurate, modality-complete, and cross-modal-aligned dataset to support the training and evaluation on audiovisual temporal grounding task. Experimental results demonstrate that ChronusOmni achieves state-of-the-art performance on ChronusAV with more than 30% improvement and top results on most metrics upon other temporal grounding benchmarks. This highlights the strong temporal awareness of our model across modalities, while preserving general video and audio understanding capabilities.
- oai:arXiv.org:2512.09841v1
+ Replace, Don't Expand: Mitigating Context Dilution in Multi-Hop RAG via Fixed-Budget Evidence Assembly
+ https://arxiv.org/abs/2512.10787
+ arXiv:2512.10787v1 Announce Type: new
+Abstract: Retrieval-Augmented Generation (RAG) systems often fail on multi-hop queries when the initial retrieval misses a bridge fact. Prior corrective approaches, such as Self-RAG, CRAG, and Adaptive-$k$, typically address this by \textit{adding} more context or pruning existing lists. However, simply expanding the context window often leads to \textbf{context dilution}, where distractors crowd out relevant information. We propose \textbf{SEAL-RAG}, a training-free controller that adopts a \textbf{``replace, don't expand''} strategy to fight context dilution under a fixed retrieval depth $k$. SEAL executes a (\textbf{S}earch $\rightarrow$ \textbf{E}xtract $\rightarrow$ \textbf{A}ssess $\rightarrow$ \textbf{L}oop) cycle: it performs on-the-fly, entity-anchored extraction to build a live \textit{gap specification} (missing entities/relations), triggers targeted micro-queries, and uses \textit{entity-first ranking} to actively swap out distractors for gap-closing evidence. We evaluate SEAL-RAG against faithful re-implementations of Basic RAG, CRAG, Self-RAG, and Adaptive-$k$ in a shared environment on \textbf{HotpotQA} and \textbf{2WikiMultiHopQA}. On HotpotQA ($k=3$), SEAL improves answer correctness by \textbf{+3--13 pp} and evidence precision by \textbf{+12--18 pp} over Self-RAG. On 2WikiMultiHopQA ($k=5$), it outperforms Adaptive-$k$ by \textbf{+8.0 pp} in accuracy and maintains \textbf{96\%} evidence precision compared to 22\% for CRAG. These gains are statistically significant ($p<0.001$). By enforcing fixed-$k$ replacement, SEAL yields a predictable cost profile while ensuring the top-$k$ slots are optimized for precision rather than mere breadth. We release our code and data at https://github.com/mosherino/SEAL-RAG.
+ oai:arXiv.org:2512.10787v1
+ cs.AIcs.CL
- cs.CV
- cs.MM
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Yijing Chen, Yihan Wu, Kaisi Guan, Yuchen Ren, Yuyue Wang, Ruihua Song, Liyun Ru
+ Moshe Lahmy, Roi Yozevitch
- From Detection to Anticipation: Online Understanding of Struggles across Various Tasks and Activities
- https://arxiv.org/abs/2512.09847
- arXiv:2512.09847v1 Announce Type: new
-Abstract: Understanding human skill performance is essential for intelligent assistive systems, with struggle recognition offering a natural cue for identifying user difficulties. While prior work focuses on offline struggle classification and localization, real-time applications require models capable of detecting and anticipating struggle online. We reformulate struggle localization as an online detection task and further extend it to anticipation, predicting struggle moments before they occur. We adapt two off-the-shelf models as baselines for online struggle detection and anticipation. Online struggle detection achieves 70-80% per-frame mAP, while struggle anticipation up to 2 seconds ahead yields comparable performance with slight drops. We further examine generalization across tasks and activities and analyse the impact of skill evolution. Despite larger domain gaps in activity-level generalization, models still outperform random baselines by 4-20%. Our feature-based models run at up to 143 FPS, and the whole pipeline, including feature extraction, operates at around 20 FPS, sufficient for real-time assistive applications.
- oai:arXiv.org:2512.09847v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Natural Language Interface for Firewall Configuration
+ https://arxiv.org/abs/2512.10789
+ arXiv:2512.10789v1 Announce Type: new
+Abstract: This paper presents the design and prototype implementation of a natural language interface for configuring enterprise firewalls. The framework allows administrators to express access control policies in plain language, which are then translated into vendor specific configurations. A compact schema bound intermediate representation separates human intent from device syntax and in the current prototype compiles to Palo Alto PAN OS command line configuration while remaining extensible to other platforms. Large language models are used only as assistive parsers that generate typed intermediate representation objects, while compilation and enforcement remain deterministic. The prototype integrates three validation layers, namely a static linter that checks structural and vendor specific constraints, a safety gate that blocks overly permissive rules such as any to any allows, and a Batfish based simulator that validates configuration syntax and referential integrity against a synthetic device model. The paper describes the architecture, implementation, and test methodology on synthetic network context datasets and discusses how this approach can evolve into a scalable auditable and human centered workflow for firewall policy management.
+ oai:arXiv.org:2512.10789v1
+ cs.NI
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Shijia Feng, Michael Wray, Walterio Mayol-Cuevas
+ F. Taghiyev, A. Aslanbayli
- Conformal Bandits: Bringing statistical validity and reward efficiency to the small-gap regime
- https://arxiv.org/abs/2512.09850
- arXiv:2512.09850v1 Announce Type: new
-Abstract: We introduce Conformal Bandits, a novel framework integrating Conformal Prediction (CP) into bandit problems, a classic paradigm for sequential decision-making under uncertainty. Traditional regret-minimisation bandit strategies like Thompson Sampling and Upper Confidence Bound (UCB) typically rely on distributional assumptions or asymptotic guarantees; further, they remain largely focused on regret, neglecting their statistical properties. We address this gap. Through the adoption of CP, we bridge the regret-minimising potential of a decision-making bandit policy with statistical guarantees in the form of finite-time prediction coverage.
- We demonstrate the potential of it Conformal Bandits through simulation studies and an application to portfolio allocation, a typical small-gap regime, where differences in arm rewards are far too small for classical policies to achieve optimal regret bounds in finite sample. Motivated by this, we showcase our framework's practical advantage in terms of regret in small-gap settings, as well as its added value in achieving nominal coverage guarantees where classical UCB policies fail. Focusing on our application of interest, we further illustrate how integrating hidden Markov models to capture the regime-switching behaviour of financial markets, enhances the exploration-exploitation trade-off, and translates into higher risk-adjusted regret efficiency returns, while preserving coverage guarantees.
- oai:arXiv.org:2512.09850v1
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ The FACTS Leaderboard: A Comprehensive Benchmark for Large Language Model Factuality
+ https://arxiv.org/abs/2512.10791
+ arXiv:2512.10791v1 Announce Type: new
+Abstract: We introduce The FACTS Leaderboard, an online leaderboard suite and associated set of benchmarks that comprehensively evaluates the ability of language models to generate factually accurate text across diverse scenarios. The suite provides a holistic measure of factuality by aggregating the performance of models on four distinct sub-leaderboards: (1) FACTS Multimodal, which measures the factuality of responses to image-based questions; (2) FACTS Parametric, which assesses models' world knowledge by answering closed-book factoid questions from internal parameters; (3) FACTS Search, which evaluates factuality in information-seeking scenarios, where the model must use a search API; and (4) FACTS Grounding (v2), which evaluates whether long-form responses are grounded in provided documents, featuring significantly improved judge models. Each sub-leaderboard employs automated judge models to score model responses, and the final suite score is an average of the four components, designed to provide a robust and balanced assessment of a model's overall factuality. The FACTS Leaderboard Suite will be actively maintained, containing both public and private splits to allow for external participation while guarding its integrity. It can be found at https://www.kaggle.com/benchmarks/google/facts .
+ oai:arXiv.org:2512.10791v1
+ cs.CL
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Simone Cuonzo, Nina Deliu
+ Aileen Cheng, Alon Jacovi, Amir Globerson, Ben Golan, Charles Kwong, Chris Alberti, Connie Tao, Eyal Ben-David, Gaurav Singh Tomar, Lukas Haas, Yonatan Bitton, Adam Bloniarz, Aijun Bai, Andrew Wang, Anfal Siddiqui, Arturo Bajuelos Castillo, Aviel Atias, Chang Liu, Corey Fry, Daniel Balle, Deepanway Ghosal, Doron Kukliansky, Dror Marcus, Elena Gribovskaya, Eran Ofek, Honglei Zhuang, Itay Laish, Jan Ackermann, Lily Wang, Meg Risdal, Megan Barnes, Michael Fink, Mohamed Amin, Moran Ambar, Natan Potikha, Nikita Gupta, Nitzan Katz, Noam Velan, Ofir Roval, Ori Ram, Polina Zablotskaia, Prathamesh Bang, Priyanka Agrawal, Rakesh Ghiya, Sanjay Ganapathy, Simon Baumgartner, Sofia Erell, Sushant Prakash, Thibault Sellam, Vikram Rao, Xuanhui Wang, Yaroslav Akulov, Yulong Yang, Zhen Yang, Zhixin Lai, Zhongru Wu, Anca Dragan, Avinatan Hassidim, Fernando Pereira, Slav Petrov, Srinivasan Venkatachary, Tulsee Doshi, Yossi Matias, Sasha Goldshtein, Dipanjan Das
- Simultaneous Tactile-Visual Perception for Learning Multimodal Robot Manipulation
- https://arxiv.org/abs/2512.09851
- arXiv:2512.09851v1 Announce Type: new
-Abstract: Robotic manipulation requires both rich multimodal perception and effective learning frameworks to handle complex real-world tasks. See-through-skin (STS) sensors, which combine tactile and visual perception, offer promising sensing capabilities, while modern imitation learning provides powerful tools for policy acquisition. However, existing STS designs lack simultaneous multimodal perception and suffer from unreliable tactile tracking. Furthermore, integrating these rich multimodal signals into learning-based manipulation pipelines remains an open challenge. We introduce TacThru, an STS sensor enabling simultaneous visual perception and robust tactile signal extraction, and TacThru-UMI, an imitation learning framework that leverages these multimodal signals for manipulation. Our sensor features a fully transparent elastomer, persistent illumination, novel keyline markers, and efficient tracking, while our learning system integrates these signals through a Transformer-based Diffusion Policy. Experiments on five challenging real-world tasks show that TacThru-UMI achieves an average success rate of 85.5%, significantly outperforming the baselines of alternating tactile-visual (66.3%) and vision-only (55.4%). The system excels in critical scenarios, including contact detection with thin and soft objects and precision manipulation requiring multimodal coordination. This work demonstrates that combining simultaneous multimodal perception with modern learning frameworks enables more precise, adaptable robotic manipulation.
- oai:arXiv.org:2512.09851v1
- cs.RO
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Physics-Informed Learning of Microvascular Flow Models using Graph Neural Networks
+ https://arxiv.org/abs/2512.10792
+ arXiv:2512.10792v1 Announce Type: new
+Abstract: The simulation of microcirculatory blood flow in realistic vascular architectures poses significant challenges due to the multiscale nature of the problem and the topological complexity of capillary networks. In this work, we propose a novel deep learning-based reduced-order modeling strategy, leveraging Graph Neural Networks (GNNs) trained on synthetic microvascular graphs to approximate hemodynamic quantities on anatomically realistic domains. Our method combines algorithms for synthetic vascular generation with a physics-informed training procedure that integrates graph topological information and local flow dynamics. To ensure the physical reliability of the learned surrogates, we incorporate a physics-informed loss functional derived from the governing equations, allowing enforcement of mass conservation and rheological constraints. The resulting GNN architecture demonstrates robust generalization capabilities across diverse network configurations. The GNN formulation is validated on benchmark problems with linear and nonlinear rheology, showing accurate pressure and velocity field reconstruction with substantial computational gains over full-order solvers. The methodology showcases significant generalization capabilities with respect to vascular complexity, as highlighted by tests on data from the mouse cerebral cortex. This work establishes a new class of graph-based surrogate models for microvascular flow, grounded in physical laws and equipped with inductive biases that mirror mass conservation and rheological models, opening new directions for real-time inference in vascular modeling and biomedical applications.
+ oai:arXiv.org:2512.10792v1
+ math.NA
+ cs.NA
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Yuyang Li, Yinghan Chen, Zihang Zhao, Puhao Li, Tengyu Liu, Siyuan Huang, Yixin Zhu
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Paolo Botta, Piermario Vitullo, Thomas Ventimiglia, Andreas Linninger, Paolo Zunino
- Mitigating Social Bias in English and Urdu Language Models Using PRM-Guided Candidate Selection and Sequential Refinement
- https://arxiv.org/abs/2512.09854
- arXiv:2512.09854v1 Announce Type: new
-Abstract: Large language models (LLMs) increasingly mediate human communication, decision support, content creation, and information retrieval. Despite impressive fluency, these systems frequently produce biased or stereotypical content, especially when prompted with socially sensitive language. A growing body of research has demonstrated that such biases disproportionately affect low-resource languages, where training data is limited and culturally unrepresentative. This paper presents a comprehensive study of inference-time bias mitigation, a strategy that avoids retraining or fine-tuning and instead operates directly on model outputs. Building on preference-ranking models (PRMs), we introduce a unified evaluation framework comparing three methods: (1) baseline single-word generation, (2) PRM-Select best-of-N sampling, and (3) PRM-Sequential refinement guided by PRM critiques. We evaluate these techniques across 200 English prompts and their Urdu counterparts, designed to reflect socio-cultural contexts relevant to gender, ethnicity, religion, nationality, disability, profession, age, and socioeconomic categories. Using GPT-3.5 as a candidate generator and GPT-4o-mini as a PRM-based bias and utility scorer, we provide an extensive quantitative analysis of bias reduction, utility preservation, and cross-lingual disparities. Our findings show: (a) substantial gains over the baseline for both languages; (b) consistently lower fairness scores for Urdu across all methods, highlighting structural inequities in multilingual LLM training; and (c) distinct improvement trajectories between PRM-Select and PRM-Sequential. The study contributes an extensible methodology, interpretable metrics, and cross-lingual comparisons that can support future work on fairness evaluation in low-resource languages.
- oai:arXiv.org:2512.09854v1
+ LabelFusion: Learning to Fuse LLMs and Transformer Classifiers for Robust Text Classification
+ https://arxiv.org/abs/2512.10793
+ arXiv:2512.10793v1 Announce Type: new
+Abstract: LabelFusion is a fusion ensemble for text classification that learns to combine a traditional transformer-based classifier (e.g., RoBERTa) with one or more Large Language Models (LLMs such as OpenAI GPT, Google Gemini, or DeepSeek) to deliver accurate and cost-aware predictions across multi-class and multi-label tasks. The package provides a simple high-level interface (AutoFusionClassifier) that trains the full pipeline end-to-end with minimal configuration, and a flexible API for advanced users. Under the hood, LabelFusion integrates vector signals from both sources by concatenating the ML backbone's embeddings with the LLM-derived per-class scores -- obtained through structured prompt-engineering strategies -- and feeds this joint representation into a compact multi-layer perceptron (FusionMLP) that produces the final prediction. This learned fusion approach captures complementary strengths of LLM reasoning and traditional transformer-based classifiers, yielding robust performance across domains -- achieving 92.4% accuracy on AG News and 92.3% on 10-class Reuters 21578 topic classification -- while enabling practical trade-offs between accuracy, latency, and cost.
+ oai:arXiv.org:2512.10793v1cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/publicdomain/zero/1.0/
- Muneeb Ur Raheem Khan
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Michael Schlee, Christoph Weisser, Timo Kivim\"aki, Melchizedek Mashiku, Benjamin Saefken
- Typical Solutions of Multi-User Linearly-Decomposable Distributed Computing
- https://arxiv.org/abs/2512.09858
- arXiv:2512.09858v1 Announce Type: new
-Abstract: We solve, in the typical-case sense, the multi-sender linearly-decomposable distributed computing problem introduced by tessellated distributed computing. We model real-valued encoders/decoders and demand matrices, and assess structural fidelity via a thresholded graph edit distance between the demand support and the two-hop support of the computed product. Our analysis yields: a closed-form second-moment (Frobenius) risk under spike-and-slab ensembles; deterministic links between thresholded GED and norm error; a Gaussian surrogate with sub-exponential tails that exposes explicit recall lines; concentration of GED and operator-norm control; and a compute-capped design with a visible knee. We map the rules to aeronautical and satellite networks.
- oai:arXiv.org:2512.09858v1
- cs.IT
- math.IT
- Thu, 11 Dec 2025 00:00:00 -0500
+ What matters for Representation Alignment: Global Information or Spatial Structure?
+ https://arxiv.org/abs/2512.10794
+ arXiv:2512.10794v1 Announce Type: new
+Abstract: Representation alignment (REPA) guides generative training by distilling representations from a strong, pretrained vision encoder to intermediate diffusion features. We investigate a fundamental question: what aspect of the target representation matters for generation, its \textit{global} \revision{semantic} information (e.g., measured by ImageNet-1K accuracy) or its spatial structure (i.e. pairwise cosine similarity between patch tokens)? Prevalent wisdom holds that stronger global semantic performance leads to better generation as a target representation. To study this, we first perform a large-scale empirical analysis across 27 different vision encoders and different model scales. The results are surprising; spatial structure, rather than global performance, drives the generation performance of a target representation. To further study this, we introduce two straightforward modifications, which specifically accentuate the transfer of \emph{spatial} information. We replace the standard MLP projection layer in REPA with a simple convolution layer and introduce a spatial normalization layer for the external representation. Surprisingly, our simple method (implemented in $<$4 lines of code), termed iREPA, consistently improves convergence speed of REPA, across a diverse set of vision encoders, model sizes, and training variants (such as REPA, REPA-E, Meanflow, JiT etc). %, etc. Our work motivates revisiting the fundamental working mechanism of representational alignment and how it can be leveraged for improved training of generative models. The code and project page are available at https://end2end-diffusion.github.io/irepa
+ oai:arXiv.org:2512.10794v1
+ cs.CV
+ cs.AI
+ cs.GR
+ cs.LG
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Ali Khalesi, Mohammad Reza Deylam Salehi
+ http://creativecommons.org/licenses/by/4.0/
+ Jaskirat Singh, Xingjian Leng, Zongze Wu, Liang Zheng, Richard Zhang, Eli Shechtman, Saining Xie
- UniUGP: Unifying Understanding, Generation, and Planing For End-to-end Autonomous Driving
- https://arxiv.org/abs/2512.09864
- arXiv:2512.09864v1 Announce Type: new
-Abstract: Autonomous driving (AD) systems struggle in long-tail scenarios due to limited world knowledge and weak visual dynamic modeling. Existing vision-language-action (VLA)-based methods cannot leverage unlabeled videos for visual causal learning, while world model-based methods lack reasoning capabilities from large language models. In this paper, we construct multiple specialized datasets providing reasoning and planning annotations for complex scenarios. Then, a unified Understanding-Generation-Planning framework, named UniUGP, is proposed to synergize scene reasoning, future video generation, and trajectory planning through a hybrid expert architecture. By integrating pre-trained VLMs and video generation models, UniUGP leverages visual dynamics and semantic reasoning to enhance planning performance. Taking multi-frame observations and language instructions as input, it produces interpretable chain-of-thought reasoning, physically consistent trajectories, and coherent future videos. We introduce a four-stage training strategy that progressively builds these capabilities across multiple existing AD datasets, along with the proposed specialized datasets. Experiments demonstrate state-of-the-art performance in perception, reasoning, and decision-making, with superior generalization to challenging long-tail situations.
- oai:arXiv.org:2512.09864v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Approximating Euclidean Shallow-Light Trees
+ https://arxiv.org/abs/2512.10797
+ arXiv:2512.10797v1 Announce Type: new
+Abstract: For a weighted graph $G = (V, E, w)$ and a designated source vertex $s \in V$, a spanning tree that simultaneously approximates a shortest-path tree w.r.t. source $s$ and a minimum spanning tree is called a shallow-light tree (SLT). Specifically, an $(\alpha, \beta)$-SLT of $G$ w.r.t. $s \in V$ is a spanning tree of $G$ with root-stretch $\alpha$ (preserving all distances between $s$ and the other vertices up to a factor of $\alpha$) and lightness $\beta$ (its weight is at most $\beta$ times the weight of a minimum spanning tree of $G$).
+ Despite the large body of work on SLTs, the basic question of whether a better approximation algorithm exists was left untouched to date, and this holds in any graph family. This paper makes a first nontrivial step towards this question by presenting two bicriteria approximation algorithms. For any $\epsilon>0$, a set $P$ of $n$ points in constant-dimensional Euclidean space and a source $s\in P$, our first (respectively, second) algorithm returns, in $O(n \log n \cdot {\rm polylog}(1/\epsilon))$ time, a non-Steiner (resp., Steiner) tree with root-stretch $1+O(\epsilon\log \epsilon^{-1})$ and weight at most $O(\mathrm{opt}_{\epsilon}\cdot \log^2 \epsilon^{-1})$ (resp., $O(\mathrm{opt}_{\epsilon}\cdot \log \epsilon^{-1})$), where $\mathrm{opt}_{\epsilon}$ denotes the minimum weight of a non-Steiner (resp., Steiner) tree with root-stretch $1+\epsilon$.
+ oai:arXiv.org:2512.10797v1
+ cs.CG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/publicdomain/zero/1.0/
- Hao Lu, Ziyang Liu, Guangfeng Jiang, Yuanfei Luo, Sheng Chen, Yangang Zhang, Ying-Cong Chen
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hung Le, Shay Solomon, Cuong Than, Csaba D. T\'oth, Tianyi Zhang
+
+
+ Zorya: Automated Concolic Execution of Single-Threaded Go Binaries
+ https://arxiv.org/abs/2512.10799
+ arXiv:2512.10799v1 Announce Type: new
+Abstract: Go's adoption in critical infrastructure intensifies the need for systematic vulnerability detection, yet existing symbolic execution tools struggle with Go binaries due to runtime complexity and scalability challenges. In this work, we build upon Zorya, a concolic execution framework that translates Go binaries to Ghidra's P-Code intermediate representation to address these challenges. We added the detection of bugs in concretely not taken paths and a multi-layer filtering mechanism to concentrate symbolic reasoning on panic-relevant paths. Evaluation on five Go vulnerabilities demonstrates that panic-reachability gating achieves 1.8-3.9x speedups when filtering 33-70% of branches, and that Zorya detects all panics while existing tools detect at most two. Function-mode analysis proved essential for complex programs, running roughly two orders of magnitude faster than starting from main. This work establishes that specialized concolic execution can achieve practical vulnerability detection in language ecosystems with runtime safety checks.
+ oai:arXiv.org:2512.10799v1
+ cs.SE
+ cs.PL
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1145/3748522.3779940
+ Karolina Gorna, Nicolas Iooss, Yannick Seurin, Rida Khatoun
- MedForget: Hierarchy-Aware Multimodal Unlearning Testbed for Medical AI
- https://arxiv.org/abs/2512.09867
- arXiv:2512.09867v1 Announce Type: new
-Abstract: Pretrained Multimodal Large Language Models (MLLMs) are increasingly deployed in medical AI systems for clinical reasoning, diagnosis support, and report generation. However, their training on sensitive patient data raises critical privacy and compliance challenges under regulations such as HIPAA and GDPR, which enforce the "right to be forgotten". Unlearning, the process of tuning models to selectively remove the influence of specific training data points, offers a potential solution, yet its effectiveness in complex medical settings remains underexplored. To systematically study this, we introduce MedForget, a Hierarchy-Aware Multimodal Unlearning Testbed with explicit retain and forget splits and evaluation sets containing rephrased variants. MedForget models hospital data as a nested hierarchy (Institution -> Patient -> Study -> Section), enabling fine-grained assessment across eight organizational levels. The benchmark contains 3840 multimodal (image, question, answer) instances, each hierarchy level having a dedicated unlearning target, reflecting distinct unlearning challenges. Experiments with four SOTA unlearning methods on three tasks (generation, classification, cloze) show that existing methods struggle to achieve complete, hierarchy-aware forgetting without reducing diagnostic performance. To test whether unlearning truly deletes hierarchical pathways, we introduce a reconstruction attack that progressively adds hierarchical level context to prompts. Models unlearned at a coarse granularity show strong resistance, while fine-grained unlearning leaves models vulnerable to such reconstruction. MedForget provides a practical, HIPAA-aligned testbed for building compliant medical AI systems.
- oai:arXiv.org:2512.09867v1
+ Interpretable and Steerable Concept Bottleneck Sparse Autoencoders
+ https://arxiv.org/abs/2512.10805
+ arXiv:2512.10805v1 Announce Type: new
+Abstract: Sparse autoencoders (SAEs) promise a unified approach for mechanistic interpretability, concept discovery, and model steering in LLMs and LVLMs. However, realizing this potential requires that the learned features be both interpretable and steerable. To that end, we introduce two new computationally inexpensive interpretability and steerability metrics and conduct a systematic analysis on LVLMs. Our analysis uncovers two observations; (i) a majority of SAE neurons exhibit either low interpretability or low steerability or both, rendering them ineffective for downstream use; and (ii) due to the unsupervised nature of SAEs, user-desired concepts are often absent in the learned dictionary, thus limiting their practical utility. To address these limitations, we propose Concept Bottleneck Sparse Autoencoders (CB-SAE) - a novel post-hoc framework that prunes low-utility neurons and augments the latent space with a lightweight concept bottleneck aligned to a user-defined concept set. The resulting CB-SAE improves interpretability by +32.1% and steerability by +14.5% across LVLMs and image generation tasks. We will make our code and model weights available.
+ oai:arXiv.org:2512.10805v1
+ cs.LGcs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Akshay Kulkarni, Tsui-Wei Weng, Vivek Narayanaswamy, Shusen Liu, Wesam A. Sakla, Kowshik Thopalli
+
+
+ HAROOD: A Benchmark for Out-of-distribution Generalization in Sensor-based Human Activity Recognition
+ https://arxiv.org/abs/2512.10807
+ arXiv:2512.10807v1 Announce Type: new
+Abstract: Sensor-based human activity recognition (HAR) mines activity patterns from the time-series sensory data. In realistic scenarios, variations across individuals, devices, environments, and time introduce significant distributional shifts for the same activities. Recent efforts attempt to solve this challenge by applying or adapting existing out-of-distribution (OOD) algorithms, but only in certain distribution shift scenarios (e.g., cross-device or cross-position), lacking comprehensive insights on the effectiveness of these algorithms. For instance, is OOD necessary to HAR? Which OOD algorithm performs the best? In this paper, we fill this gap by proposing HAROOD, a comprehensive benchmark for HAR in OOD settings. We define 4 OOD scenarios: cross-person, cross-position, cross-dataset, and cross-time, and build a testbed covering 6 datasets, 16 comparative methods (implemented with CNN-based and Transformer-based architectures), and two model selection protocols. Then, we conduct extensive experiments and present several findings for future research, e.g., no single method consistently outperforms others, highlighting substantial opportunity for advancement. Our codebase is highly modular and easy to extend for new datasets, algorithms, comparisons, and analysis, with the hope to facilitate the research in OOD-based HAR. Our implementation is released and can be found at https://github.com/AIFrontierLab/HAROOD.
+ oai:arXiv.org:2512.10807v1cs.AI
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Fengli Wu, Vaidehi Patil, Jaehong Yoon, Yue Zhang, Mohit Bansal
+ Wang Lu, Yao Zhu, Jindong Wang
- Diffusion Posterior Sampler for Hyperspectral Unmixing with Spectral Variability Modeling
- https://arxiv.org/abs/2512.09871
- arXiv:2512.09871v1 Announce Type: new
-Abstract: Linear spectral mixture models (LMM) provide a concise form to disentangle the constituent materials (endmembers) and their corresponding proportions (abundance) in a single pixel. The critical challenges are how to model the spectral prior distribution and spectral variability. Prior knowledge and spectral variability can be rigorously modeled under the Bayesian framework, where posterior estimation of Abundance is derived by combining observed data with endmember prior distribution. Considering the key challenges and the advantages of the Bayesian framework, a novel method using a diffusion posterior sampler for semiblind unmixing, denoted as DPS4Un, is proposed to deal with these challenges with the following features: (1) we view the pretrained conditional spectrum diffusion model as a posterior sampler, which can combine the learned endmember prior with observation to get the refined abundance distribution. (2) Instead of using the existing spectral library as prior, which may raise bias, we establish the image-based endmember bundles within superpixels, which are used to train the endmember prior learner with diffusion model. Superpixels make sure the sub-scene is more homogeneous. (3) Instead of using the image-level data consistency constraint, the superpixel-based data fidelity term is proposed. (4) The endmember is initialized as Gaussian noise for each superpixel region, DPS4Un iteratively updates the abundance and endmember, contributing to spectral variability modeling. The experimental results on three real-world benchmark datasets demonstrate that DPS4Un outperforms the state-of-the-art hyperspectral unmixing methods.
- oai:arXiv.org:2512.09871v1
+ Graph Laplacian Transformer with Progressive Sampling for Prostate Cancer Grading
+ https://arxiv.org/abs/2512.10808
+ arXiv:2512.10808v1 Announce Type: new
+Abstract: Prostate cancer grading from whole-slide images (WSIs) remains a challenging task due to the large-scale nature of WSIs, the presence of heterogeneous tissue structures, and difficulty of selecting diagnostically relevant regions. Existing approaches often rely on random or static patch selection, leading to the inclusion of redundant or non-informative regions that degrade performance. To address this, we propose a Graph Laplacian Attention-Based Transformer (GLAT) integrated with an Iterative Refinement Module (IRM) to enhance both feature learning and spatial consistency. The IRM iteratively refines patch selection by leveraging a pretrained ResNet50 for local feature extraction and a foundation model in no-gradient mode for importance scoring, ensuring only the most relevant tissue regions are preserved. The GLAT models tissue-level connectivity by constructing a graph where patches serve as nodes, ensuring spatial consistency through graph Laplacian constraints and refining feature representations via a learnable filtering mechanism that enhances discriminative histological structures. Additionally, a convex aggregation mechanism dynamically adjusts patch importance to generate a robust WSI-level representation. Extensive experiments on five public and one private dataset demonstrate that our model outperforms state-of-the-art methods, achieving higher performance and spatial consistency while maintaining computational efficiency.
+ oai:arXiv.org:2512.10808v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Yimin Zhu, Lincoln Linlin Xu
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1007/978-3-032-05162-2_35
+ Masum Shah Junayed, John Derek Van Vessem, Qian Wan, Gahie Nam, Sheida Nabavi
- FlipLLM: Efficient Bit-Flip Attacks on Multimodal LLMs using Reinforcement Learning
- https://arxiv.org/abs/2512.09872
- arXiv:2512.09872v1 Announce Type: new
-Abstract: Generative Artificial Intelligence models, such as Large Language Models (LLMs) and Large Vision Models (VLMs), exhibit state-of-the-art performance but remain vulnerable to hardware-based threats, specifically bit-flip attacks (BFAs). Existing BFA discovery methods lack generalizability and struggle to scale, often failing to analyze the vast parameter space and complex interdependencies of modern foundation models in a reasonable time. This paper proposes FlipLLM, a reinforcement learning (RL) architecture-agnostic framework that formulates BFA discovery as a sequential decision-making problem. FlipLLM combines sensitivity-guided layer pruning with Q-learning to efficiently identify minimal, high-impact bit sets that can induce catastrophic failure. We demonstrate the effectiveness and generalizability of FlipLLM by applying it to a diverse set of models, including prominent text-only LLMs (GPT-2 Large, LLaMA 3.1 8B, and DeepSeek-V2 7B), VLMs such as LLaVA 1.6, and datasets, such as MMLU, MMLU-Pro, VQAv2, and TextVQA. Our results show that FlipLLM can identify critical bits that are vulnerable to BFAs up to 2.5x faster than SOTA methods. We demonstrate that flipping the FlipLLM-identified bits plummets the accuracy of LLaMA 3.1 8B from 69.9% to ~0.2%, and for LLaVA's VQA score from 78% to almost 0%, by flipping as few as 5 and 7 bits, respectively. Further analysis reveals that applying standard hardware protection mechanisms, such as ECC SECDED, to the FlipLLM-identified bit locations completely mitigates the BFA impact, demonstrating the practical value of our framework in guiding hardware-level defenses. FlipLLM offers the first scalable and adaptive methodology for exploring the BFA vulnerability of both language and multimodal foundation models, paving the way for comprehensive hardware-security evaluation.
- oai:arXiv.org:2512.09872v1
- cs.CR
+ Extrapolation of Periodic Functions Using Binary Encoding of Continuous Numerical Values
+ https://arxiv.org/abs/2512.10817
+ arXiv:2512.10817v1 Announce Type: new
+Abstract: We report the discovery that binary encoding allows neural networks to extrapolate periodic functions beyond their training bounds. We introduce Normalized Base-2 Encoding (NB2E) as a method for encoding continuous numerical values and demonstrate that, using this input encoding, vanilla multi-layer perceptrons (MLP) successfully extrapolate diverse periodic signals without prior knowledge of their functional form. Internal activation analysis reveals that NB2E induces bit-phase representations, enabling MLPs to learn and extrapolate signal structure independently of position.
+ oai:arXiv.org:2512.10817v1
+ cs.LGcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CV
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Khurram Khalil, Khaza Anuarul Hoque
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Brian P. Powell, Jordan A. Caraballo-Vega, Mark L. Carroll, Thomas Maxwell, Andrew Ptak, Greg Olmschenk, Jorge Martinez-Palomera
- Benchmarking Document Parsers on Mathematical Formula Extraction from PDFs
- https://arxiv.org/abs/2512.09874
- arXiv:2512.09874v1 Announce Type: new
-Abstract: Correctly parsing mathematical formulas from PDFs is critical for training large language models and building scientific knowledge bases from academic literature, yet existing benchmarks either exclude formulas entirely or lack semantically-aware evaluation metrics. We introduce a novel benchmarking framework centered on synthetically generated PDFs with precise LaTeX ground truth, enabling systematic control over layout, formulas, and content characteristics. A key methodological contribution is pioneering LLM-as-a-judge for semantic formula assessment, combined with a robust two-stage matching pipeline that handles parser output inconsistencies. Through human validation on 250 formula pairs (750 ratings from 30 evaluators), we demonstrate that LLM-based evaluation achieves substantially higher correlation with human judgment (Pearson r=0.78) compared to CDM (r=0.34) and text similarity (r~0). Evaluating 20+ contemporary PDF parsers (including specialized OCR models, vision-language models, and rule-based approaches) across 100 synthetic documents with 2,000+ formulas reveals significant performance disparities. Our findings provide crucial insights for practitioners selecting parsers for downstream applications and establish a robust, scalable methodology that enables reproducible evaluation of PDF formula extraction quality. Code and benchmark data: https://github.com/phorn1/pdf-parse-bench
- oai:arXiv.org:2512.09874v1
+ Self-Ensemble Post Learning for Noisy Domain Generalization
+ https://arxiv.org/abs/2512.10818
+ arXiv:2512.10818v1 Announce Type: new
+Abstract: While computer vision and machine learning have made great progress, their robustness is still challenged by two key issues: data distribution shift and label noise. When domain generalization (DG) encounters noise, noisy labels further exacerbate the emergence of spurious features in deep layers, i.e. spurious feature enlargement, leading to a degradation in the performance of existing algorithms. This paper, starting from domain generalization, explores how to make existing methods rework when meeting noise. We find that the latent features inside the model have certain discriminative capabilities, and different latent features focus on different parts of the image. Based on these observations, we propose the Self-Ensemble Post Learning approach (SEPL) to diversify features which can be leveraged. Specifically, SEPL consists of two parts: feature probing training and prediction ensemble inference. It leverages intermediate feature representations within the model architecture, training multiple probing classifiers to fully exploit the capabilities of pre-trained models, while the final predictions are obtained through the integration of outputs from these diverse classification heads. Considering the presence of noisy labels, we employ semi-supervised algorithms to train probing classifiers. Given that different probing classifiers focus on different areas, we integrate their predictions using a crowdsourcing inference approach. Extensive experimental evaluations demonstrate that the proposed method not only enhances the robustness of existing methods but also exhibits significant potential for real-world applications with high flexibility.
+ oai:arXiv.org:2512.10818v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Pius Horn, Janis Keuper
+ Wang Lu, Jindong Wang
- Resilient Neural-Variable-Structure Consensus Control for Nonlinear MASs with Singular Input Gain Under DoS Attacks
- https://arxiv.org/abs/2512.09879
- arXiv:2512.09879v1 Announce Type: new
-Abstract: This paper proposes a reliable learning-based adaptive control framework for nonlinear multi-agent systems (MASs) subject to Denial-of-Service (DoS) attacks and singular control gains, two critical challenges in cyber-physical systems. A neural-variable-structure adaptive controller is developed to achieve leader-follower consensus while ensuring robustness to external disturbances and adaptability to unknown nonlinear dynamics. A reliability-assessment rule is introduced to detect communication loss during DoS attacks, upon which a switched control mechanism is activated to preserve closed-loop stability and performance. Unlike existing resilient MAS control methods, the proposed strategy explicitly accommodates singular control gains and does not rely on restrictive assumptions such as Lipschitz continuity or prior bounds on nonlinearities. To the authors' knowledge, this is the first work to integrate neural learning, variable-structure robustness, and reliability-based switching into a unified consensus-tracking control architecture for heterogeneous nonlinear MASs with singular input gains under DoS attacks. Lyapunov-based analysis establishes uniform ultimate boundedness of all closed-loop signals, and Matlab/Simulink simulations on a connected automated vehicle platoon demonstrate the method's effectiveness and resilience.
- oai:arXiv.org:2512.09879v1
- eess.SY
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Agile Deliberation: Concept Deliberation for Subjective Visual Classification
+ https://arxiv.org/abs/2512.10821
+ arXiv:2512.10821v1 Announce Type: new
+Abstract: From content moderation to content curation, applications requiring vision classifiers for visual concepts are rapidly expanding. Existing human-in-the-loop approaches typically assume users begin with a clear, stable concept understanding to be able to provide high-quality supervision. In reality, users often start with a vague idea and must iteratively refine it through "concept deliberation", a practice we uncovered through structured interviews with content moderation experts. We operationalize the common strategies in deliberation used by real content moderators into a human-in-the-loop framework called "Agile Deliberation" that explicitly supports evolving and subjective concepts. The system supports users in defining the concept for themselves by exposing them to borderline cases. The system does this with two deliberation stages: (1) concept scoping, which decomposes the initial concept into a structured hierarchy of sub-concepts, and (2) concept iteration, which surfaces semantically borderline examples for user reflection and feedback to iteratively align an image classifier with the user's evolving intent. Since concept deliberation is inherently subjective and interactive, we painstakingly evaluate the framework through 18 user sessions, each 1.5h long, rather than standard benchmarking datasets. We find that Agile Deliberation achieves 7.5% higher F1 scores than automated decomposition baselines and more than 3% higher than manual deliberation, while participants reported clearer conceptual understanding and lower cognitive effort.
+ oai:arXiv.org:2512.10821v1
+ cs.AI
+ cs.CV
+ cs.HC
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ladan Khoshnevisan, Xinzhi Liu
+ http://creativecommons.org/licenses/by/4.0/
+ Leijie Wang, Otilia Stretcu, Wei Qiao, Thomas Denby, Krishnamurthy Viswanathan, Enming Luo, Chun-Ta Lu, Tushar Dogra, Ranjay Krishna, Ariel Fuxman
- Comparing AI Agents to Cybersecurity Professionals in Real-World Penetration Testing
- https://arxiv.org/abs/2512.09882
- arXiv:2512.09882v1 Announce Type: new
-Abstract: We present the first comprehensive evaluation of AI agents against human cybersecurity professionals in a live enterprise environment. We evaluate ten cybersecurity professionals alongside six existing AI agents and ARTEMIS, our new agent scaffold, on a large university network consisting of ~8,000 hosts across 12 subnets. ARTEMIS is a multi-agent framework featuring dynamic prompt generation, arbitrary sub-agents, and automatic vulnerability triaging. In our comparative study, ARTEMIS placed second overall, discovering 9 valid vulnerabilities with an 82% valid submission rate and outperforming 9 of 10 human participants. While existing scaffolds such as Codex and CyAgent underperformed relative to most human participants, ARTEMIS demonstrated technical sophistication and submission quality comparable to the strongest participants. We observe that AI agents offer advantages in systematic enumeration, parallel exploitation, and cost -- certain ARTEMIS variants cost $18/hour versus $60/hour for professional penetration testers. We also identify key capability gaps: AI agents exhibit higher false-positive rates and struggle with GUI-based tasks.
- oai:arXiv.org:2512.09882v1
+ V-OCBF: Learning Safety Filters from Offline Data via Value-Guided Offline Control Barrier Functions
+ https://arxiv.org/abs/2512.10822
+ arXiv:2512.10822v1 Announce Type: new
+Abstract: Ensuring safety in autonomous systems requires controllers that satisfy hard, state-wise constraints without relying on online interaction. While existing Safe Offline RL methods typically enforce soft expected-cost constraints, they do not guarantee forward invariance. Conversely, Control Barrier Functions (CBFs) provide rigorous safety guarantees but usually depend on expert-designed barrier functions or full knowledge of the system dynamics. We introduce Value-Guided Offline Control Barrier Functions (V-OCBF), a framework that learns a neural CBF entirely from offline demonstrations. Unlike prior approaches, V-OCBF does not assume access to the dynamics model; instead, it derives a recursive finite-difference barrier update, enabling model-free learning of a barrier that propagates safety information over time. Moreover, V-OCBF incorporates an expectile-based objective that avoids querying the barrier on out-of-distribution actions and restricts updates to the dataset-supported action set. The learned barrier is then used with a Quadratic Program (QP) formulation to synthesize real-time safe control. Across multiple case studies, V-OCBF yields substantially fewer safety violations than baseline methods while maintaining strong task performance, highlighting its scalability for offline synthesis of safety-critical controllers without online interaction or hand-engineered barriers.
+ oai:arXiv.org:2512.10822v1cs.AI
- cs.CR
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mumuksh Tayal, Manan Tayal, Aditya Singh, Shishir Kolathaya, Ravi Prakash
+
+
+ Learning Controllable and Diverse Player Behaviors in Multi-Agent Environments
+ https://arxiv.org/abs/2512.10835
+ arXiv:2512.10835v1 Announce Type: new
+Abstract: This paper introduces a reinforcement learning framework that enables controllable and diverse player behaviors without relying on human gameplay data. Existing approaches often require large-scale player trajectories, train separate models for different player types, or provide no direct mapping between interpretable behavioral parameters and the learned policy, limiting their scalability and controllability. We define player behavior in an N-dimensional continuous space and uniformly sample target behavior vectors from a region that encompasses the subset representing real human styles. During training, each agent receives both its current and target behavior vectors as input, and the reward is based on the normalized reduction in distance between them. This allows the policy to learn how actions influence behavioral statistics, enabling smooth control over attributes such as aggressiveness, mobility, and cooperativeness. A single PPO-based multi-agent policy can reproduce new or unseen play styles without retraining. Experiments conducted in a custom multi-player Unity game show that the proposed framework produces significantly greater behavioral diversity than a win-only baseline and reliably matches specified behavior vectors across diverse targets. The method offers a scalable solution for automated playtesting, game balancing, human-like behavior simulation, and replacing disconnected players in online games.
+ oai:arXiv.org:2512.10835v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Atahan Cilan, Atay \"Ozg\"ovde
+
+
+ dtreg: Describing Data Analysis in Machine-Readable Format in Python and R
+ https://arxiv.org/abs/2512.10836
+ arXiv:2512.10836v1 Announce Type: new
+Abstract: For scientific knowledge to be findable, accessible, interoperable, and reusable, it needs to be machine-readable. Moving forward from post-publication extraction of knowledge, we adopted a pre-publication approach to write research findings in a machine-readable format at early stages of data analysis. For this purpose, we developed the package dtreg in Python and R. Registered and persistently identified data types, aka schemata, which dtreg applies to describe data analysis in a machine-readable format, cover the most widely used statistical tests and machine learning methods. The package supports (i) downloading a relevant schema as a mutable instance of a Python or R class, (ii) populating the instance object with metadata about data analysis, and (iii) converting the object into a lightweight Linked Data format. This paper outlines the background of our approach, explains the code architecture, and illustrates the functionality of dtreg with a machine-readable description of a t-test on Iris Data. We suggest that the dtreg package can enhance the methodological repertoire of researchers aiming to adhere to the FAIR principles.
+ oai:arXiv.org:2512.10836v1
+ cs.DL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Justin W. Lin, Eliot Krzysztof Jones, Donovan Julian Jasper, Ethan Jun-shen Ho, Anna Wu, Arnold Tianyi Yang, Neil Perry, Andy Zou, Matt Fredrikson, J. Zico Kolter, Percy Liang, Dan Boneh, Daniel E. Ho
+ Olga Lezhnina, Manuel Prinz, Markus Stocker
- ByteShield: Adversarially Robust End-to-End Malware Detection through Byte Masking
- https://arxiv.org/abs/2512.09883
- arXiv:2512.09883v1 Announce Type: new
-Abstract: Research has proven that end-to-end malware detectors are vulnerable to adversarial attacks. In response, the research community has proposed defenses based on randomized and (de)randomized smoothing. However, these techniques remain susceptible to attacks that insert large adversarial payloads. To address these limitations, we propose a novel defense mechanism designed to harden end-to-end malware detectors by leveraging masking at the byte level. This mechanism operates by generating multiple masked versions of the input file, independently classifying each version, and then applying a threshold-based voting mechanism to produce the final classification. Key to this defense is a deterministic masking strategy that systematically strides a mask across the entire input file. Unlike randomized smoothing defenses, which randomly mask or delete bytes, this structured approach ensures coverage of the file over successive versions. In the best-case scenario, this strategy fully occludes the adversarial payload, effectively neutralizing its influence on the model's decision. In the worst-case scenario, it partially occludes the adversarial payload, reducing its impact on the model's predictions. By occluding the adversarial payload in one or more masked versions, this defense ensures that some input versions remain representative of the file's original intent, allowing the voting mechanism to suppress the influence of the adversarial payload. Results achieved on the EMBER and BODMAS datasets demonstrate the suitability of our defense, outperforming randomized and (de)randomized smoothing defenses against adversarial examples generated with a wide range of functionality-preserving manipulations while maintaining high accuracy on clean examples.
- oai:arXiv.org:2512.09883v1
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ PoseGAM: Robust Unseen Object Pose Estimation via Geometry-Aware Multi-View Reasoning
+ https://arxiv.org/abs/2512.10840
+ arXiv:2512.10840v1 Announce Type: new
+Abstract: 6D object pose estimation, which predicts the transformation of an object relative to the camera, remains challenging for unseen objects. Existing approaches typically rely on explicitly constructing feature correspondences between the query image and either the object model or template images. In this work, we propose PoseGAM, a geometry-aware multi-view framework that directly predicts object pose from a query image and multiple template images, eliminating the need for explicit matching. Built upon recent multi-view-based foundation model architectures, the method integrates object geometry information through two complementary mechanisms: explicit point-based geometry and learned features from geometry representation networks. In addition, we construct a large-scale synthetic dataset containing more than 190k objects under diverse environmental conditions to enhance robustness and generalization. Extensive evaluations across multiple benchmarks demonstrate our state-of-the-art performance, yielding an average AR improvement of 5.1% over prior methods and achieving up to 17.6% gains on individual datasets, indicating strong generalization to unseen objects. Project page: https://windvchen.github.io/PoseGAM/ .
+ oai:arXiv.org:2512.10840v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Daniel Gibert, Felip Many\`a
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jianqi Chen, Biao Zhang, Xiangjun Tang, Peter Wonka
- HPM-KD: Hierarchical Progressive Multi-Teacher Framework for Knowledge Distillation and Efficient Model Compression
- https://arxiv.org/abs/2512.09886
- arXiv:2512.09886v1 Announce Type: new
-Abstract: Knowledge Distillation (KD) has emerged as a promising technique for model compression but faces critical limitations: (1) sensitivity to hyperparameters requiring extensive manual tuning, (2) capacity gap when distilling from very large teachers to small students, (3) suboptimal coordination in multi-teacher scenarios, and (4) inefficient use of computational resources. We present \textbf{HPM-KD}, a framework that integrates six synergistic components: (i) Adaptive Configuration Manager via meta-learning that eliminates manual hyperparameter tuning, (ii) Progressive Distillation Chain with automatically determined intermediate models, (iii) Attention-Weighted Multi-Teacher Ensemble that learns dynamic per-sample weights, (iv) Meta-Learned Temperature Scheduler that adapts temperature throughout training, (v) Parallel Processing Pipeline with intelligent load balancing, and (vi) Shared Optimization Memory for cross-experiment reuse. Experiments on CIFAR-10, CIFAR-100, and tabular datasets demonstrate that HPM-KD: achieves 10x-15x compression while maintaining 85% accuracy retention, eliminates the need for manual tuning, and reduces training time by 30-40% via parallelization. Ablation studies confirm independent contribution of each component (0.10-0.98 pp). HPM-KD is available as part of the open-source DeepBridge library.
- oai:arXiv.org:2512.09886v1
- cs.LG
- stat.AP
- Thu, 11 Dec 2025 00:00:00 -0500
+ Low-Order $\mathcal{H}_2 / \mathcal{H}_\infty$ Controller Design for Aeroelastic Vibration Suppression
+ https://arxiv.org/abs/2512.10841
+ arXiv:2512.10841v1 Announce Type: new
+Abstract: This paper presents an $\mathcal{H}_2 / \mathcal{H}_\infty$ minimization-based output-feedback controller for active aeroelastic vibration suppression in a cantilevered beam. First, a nonlinear structural model incorporating moderate deflection and aerodynamic loading is derived and discretized using the finite element method (FEM). Then, a low-order linear model is identified from random gaussian input response data from the FEM model to synthesize an output-feedback controller using the $\mathcal{H}_2 / \mathcal{H}_\infty$ framework. A frequency-weighted dynamic filter is introduced to emphasize disturbance frequencies of interest, enabling the controller to target dominant vibration modes. Simulation results demonstrate the effectiveness of the proposed technique for vibration suppression and study its robustness to system parameter variations, including actuator placement.
+ oai:arXiv.org:2512.10841v1
+ eess.SY
+ cs.SY
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Gustavo Coelho Haase, Paulo Henrique Dourado da Silva
+ http://creativecommons.org/licenses/by/4.0/
+ Mohammad Mirtaba, Juan Augusto Paredes Salazar, Daning Huang, Ankit Goel
- Analysis of Dirichlet Energies as Over-smoothing Measures
- https://arxiv.org/abs/2512.09890
- arXiv:2512.09890v1 Announce Type: new
-Abstract: We analyze the distinctions between two functionals often used as over-smoothing measures: the Dirichlet energies induced by the unnormalized graph Laplacian and the normalized graph Laplacian. We demonstrate that the latter fails to satisfy the axiomatic definition of a node-similarity measure proposed by Rusch \textit{et al.} By formalizing fundamental spectral properties of these two definitions, we highlight critical distinctions necessary to select the metric that is spectrally compatible with the GNN architecture, thereby resolving ambiguities in monitoring the dynamics.
- oai:arXiv.org:2512.09890v1
+ Bayesian Symbolic Regression via Posterior Sampling
+ https://arxiv.org/abs/2512.10849
+ arXiv:2512.10849v1 Announce Type: new
+Abstract: Symbolic regression is a powerful tool for discovering governing equations directly from data, but its sensitivity to noise hinders its broader application. This paper introduces a Sequential Monte Carlo (SMC) framework for Bayesian symbolic regression that approximates the posterior distribution over symbolic expressions, enhancing robustness and enabling uncertainty quantification for symbolic regression in the presence of noise. Differing from traditional genetic programming approaches, the SMC-based algorithm combines probabilistic selection, adaptive tempering, and the use of normalized marginal likelihood to efficiently explore the search space of symbolic expressions, yielding parsimonious expressions with improved generalization. When compared to standard genetic programming baselines, the proposed method better deals with challenging, noisy benchmark datasets. The reduced tendency to overfit and enhanced ability to discover accurate and interpretable equations paves the way for more robust symbolic regression in scientific discovery and engineering design applications.
+ oai:arXiv.org:2512.10849v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-sa/4.0/
- Anna Bison, Alessandro Sperduti
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Geoffrey F. Bomarito, Patrick E. Leser
- Provably Learning from Modern Language Models via Low Logit Rank
- https://arxiv.org/abs/2512.09892
- arXiv:2512.09892v1 Announce Type: new
-Abstract: While modern language models and their inner workings are incredibly complex, recent work (Golowich, Liu & Shetty; 2025) has proposed a simple and potentially tractable abstraction for them through the observation that empirically, these language models all seem to have approximately low logit rank. Roughly, this means that a matrix formed by the model's log probabilities of various tokens conditioned on certain sequences of tokens is well approximated by a low rank matrix.
- In this paper, our focus is on understanding how this structure can be exploited algorithmically for obtaining provable learning guarantees. Since low logit rank models can encode hard-to-learn distributions such as noisy parities, we study a query learning model with logit queries that reflects the access model for common APIs. Our main result is an efficient algorithm for learning any approximately low logit rank model from queries. We emphasize that our structural assumption closely reflects the behavior that is empirically observed in modern language models. Thus, our result gives what we believe is the first end-to-end learning guarantee for a generative model that plausibly captures modern language models.
- oai:arXiv.org:2512.09892v1
+ Generative Modeling from Black-box Corruptions via Self-Consistent Stochastic Interpolants
+ https://arxiv.org/abs/2512.10857
+ arXiv:2512.10857v1 Announce Type: new
+Abstract: Transport-based methods have emerged as a leading paradigm for building generative models from large, clean datasets. However, in many scientific and engineering domains, clean data are often unavailable: instead, we only observe measurements corrupted through a noisy, ill-conditioned channel. A generative model for the original data thus requires solving an inverse problem at the level of distributions. In this work, we introduce a novel approach to this task based on Stochastic Interpolants: we iteratively update a transport map between corrupted and clean data samples using only access to the corrupted dataset as well as black box access to the corruption channel. Under appropriate conditions, this iterative procedure converges towards a self-consistent transport map that effectively inverts the corruption channel, thus enabling a generative model for the clean data. We refer to the resulting method as the self-consistent stochastic interpolant (SCSI). It (i) is computationally efficient compared to variational alternatives, (ii) highly flexible, handling arbitrary nonlinear forward models with only black-box access, and (iii) enjoys theoretical guarantees. We demonstrate superior performance on inverse problems in natural image processing and scientific reconstruction, and establish convergence guarantees of the scheme under appropriate assumptions.
+ oai:arXiv.org:2512.10857v1cs.LGcs.AI
- cs.DSstat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Noah Golowich, Allen Liu, Abhishek Shetty
+ http://creativecommons.org/licenses/by/4.0/
+ Chirag Modi, Jiequn Han, Eric Vanden-Eijnden, Joan Bruna
- Exploring Protein Language Model Architecture-Induced Biases for Antibody Comprehension
- https://arxiv.org/abs/2512.09894
- arXiv:2512.09894v1 Announce Type: new
-Abstract: Recent advances in protein language models (PLMs) have demonstrated remarkable capabilities in understanding protein sequences. However, the extent to which different model architectures capture antibody-specific biological properties remains unexplored. In this work, we systematically investigate how architectural choices in PLMs influence their ability to comprehend antibody sequence characteristics and functions. We evaluate three state-of-the-art PLMs-AntiBERTa, BioBERT, and ESM2--against a general-purpose language model (GPT-2) baseline on antibody target specificity prediction tasks. Our results demonstrate that while all PLMs achieve high classification accuracy, they exhibit distinct biases in capturing biological features such as V gene usage, somatic hypermutation patterns, and isotype information. Through attention attribution analysis, we show that antibody-specific models like AntiBERTa naturally learn to focus on complementarity-determining regions (CDRs), while general protein models benefit significantly from explicit CDR-focused training strategies. These findings provide insights into the relationship between model architecture and biological feature extraction, offering valuable guidance for future PLM development in computational antibody design.
- oai:arXiv.org:2512.09894v1
+ Scaling Behavior of Discrete Diffusion Language Models
+ https://arxiv.org/abs/2512.10858
+ arXiv:2512.10858v1 Announce Type: new
+Abstract: Modern LLM pre-training consumes vast amounts of compute and training data, making the scaling behavior, or scaling laws, of different models a key distinguishing factor. Discrete diffusion language models (DLMs) have been proposed as an alternative to autoregressive language models (ALMs). However, their scaling behavior has not yet been fully explored, with prior work suggesting that they require more data and compute to match the performance of ALMs.
+ We study the scaling behavior of DLMs on different noise types by smoothly interpolating between masked and uniform diffusion while paying close attention to crucial hyperparameters such as batch size and learning rate. Our experiments reveal that the scaling behavior of DLMs strongly depends on the noise type and is considerably different from ALMs. While all noise types converge to similar loss values in compute-bound scaling, we find that uniform diffusion requires more parameters and less data for compute-efficient training compared to masked diffusion, making them a promising candidate in data-bound settings. We scale our uniform diffusion model up to 10B parameters trained for $10^{22}$ FLOPs, confirming the predicted scaling behavior and making it the largest publicly known uniform diffusion model to date.
+ oai:arXiv.org:2512.10858v1cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mengren (Bill), Liu (Jason), Yixiang Zhang (Jason), Yiming (Jason), Zhang
+ http://creativecommons.org/licenses/by/4.0/
+ Dimitri von R\"utte, Janis Fluri, Omead Pooladzandi, Bernhard Sch\"olkopf, Thomas Hofmann, Antonio Orvieto
- Human-in-the-Loop and AI: Crowdsourcing Metadata Vocabulary for Materials Science
- https://arxiv.org/abs/2512.09895
- arXiv:2512.09895v1 Announce Type: new
-Abstract: Metadata vocabularies are essential for advancing FAIR and FARR data principles, but their development constrained by limited human resources and inconsistent standardization practices. This paper introduces MatSci-YAMZ, a platform that integrates artificial intelligence (AI) and human-in-the-loop (HILT), including crowdsourcing, to support metadata vocabulary development. The paper reports on a proof-of-concept use case evaluating the AI-HILT model in materials science, a highly interdisciplinary domain Six (6) participants affiliated with the NSF Institute for Data-Driven Dynamical Design (ID4) engaged with the MatSci-YAMZ plaform over several weeks, contributing term definitions and providing examples to prompt the AI-definitions refinement. Nineteen (19) AI-generated definitions were successfully created, with iterative feedback loops demonstrating the feasibility of AI-HILT refinement. Findings confirm the feasibility AI-HILT model highlighting 1) a successful proof of concept, 2) alignment with FAIR and open-science principles, 3) a research protocol to guide future studies, and 4) the potential for scalability across domains. Overall, MatSci-YAMZ's underlying model has the capacity to enhance semantic transparency and reduce time required for consensus building and metadata vocabulary development.
- oai:arXiv.org:2512.09895v1
- cs.AI
- cs.DL
- Thu, 11 Dec 2025 00:00:00 -0500
+ SWiT-4D: Sliding-Window Transformer for Lossless and Parameter-Free Temporal 4D Generation
+ https://arxiv.org/abs/2512.10860
+ arXiv:2512.10860v1 Announce Type: new
+Abstract: Despite significant progress in 4D content generation, the conversion of monocular videos into high-quality animated 3D assets with explicit 4D meshes remains considerably challenging. The scarcity of large-scale, naturally captured 4D mesh datasets further limits the ability to train generalizable video-to-4D models from scratch in a purely data-driven manner. Meanwhile, advances in image-to-3D generation, supported by extensive datasets, offer powerful prior models that can be leveraged. To better utilize these priors while minimizing reliance on 4D supervision, we introduce SWiT-4D, a Sliding-Window Transformer for lossless, parameter-free temporal 4D mesh generation. SWiT-4D integrates seamlessly with any Diffusion Transformer (DiT)-based image-to-3D generator, adding spatial-temporal modeling across video frames while preserving the original single-image forward process, enabling 4D mesh reconstruction from videos of arbitrary length. To recover global translation, we further introduce an optimization-based trajectory module tailored for static-camera monocular videos. SWiT-4D demonstrates strong data efficiency: with only a single short (<10s) video for fine-tuning, it achieves high-fidelity geometry and stable temporal consistency, indicating practical deployability under extremely limited 4D supervision. Comprehensive experiments on both in-domain zoo-test sets and challenging out-of-domain benchmarks (C4D, Objaverse, and in-the-wild videos) show that SWiT-4D consistently outperforms existing baselines in temporal smoothness. Project page: https://animotionlab.github.io/SWIT4D/
+ oai:arXiv.org:2512.10860v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-sa/4.0/
- Jane Greenberg, Scott McClellan, Addy Ireland, Robert Sammarco, Colton Gerber, Christopher B. Rauch, Mat Kelly, John Kunze, Yuan An, Eric Toberer
+ http://creativecommons.org/licenses/by/4.0/
+ Kehong Gong, Zhengyu Wen, Mingxi Xu, Weixia He, Qi Wang, Ning Zhang, Zhengyu Li, Chenbin Li, Dongze Lian, Wei Zhao, Xiaoyu He, Mingyuan Zhang
+
+
+ Towards Cumulative Abstract Semantics via Handlers
+ https://arxiv.org/abs/2512.10861
+ arXiv:2512.10861v1 Announce Type: new
+Abstract: We consider the problem of modularizing control flow in a generic abstract interpretation framework. A generic abstract interpretation framework is not truly flexible if it does not allow interpreting with different path- and flow-sensitivities, by going forwards or backwards, and over- or under-approximately. Most interpreters inherently intertwine syntax and semantics, making the implementation antagonistic to modularity. Current approaches to modular designs require the use of complex data structures (e.g., monad transformers), providing modularity but often proving unwieldy (e.g., lifts). We observe that leveraging scoped effects within an interpreter facilitates the accumulation of semantic fragments against a fixed syntax. In this paper, we define cumulative abstract semantics, illustrating the potential for creating multiple dynamic evaluators and static analyses from one interpreter. This modularity is achieved by grouping effects into two categories: syntax elimination and domain-semantic introduction handlers. Our contribution shows the benefits of using effects as an instrument for designing a clean, elegant, and modular abstract interpretation framework.
+ oai:arXiv.org:2512.10861v1
+ cs.PL
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Cade Lueker, Andrew Fox, Bor-Yuh Evan Chang
- SCOPE: Language Models as One-Time Teacher for Hierarchical Planning in Text Environments
- https://arxiv.org/abs/2512.09897
- arXiv:2512.09897v1 Announce Type: new
-Abstract: Long-term planning in complex, text-based environments presents significant challenges due to open-ended action spaces, ambiguous observations, and sparse feedback. Recent research suggests that large language models (LLMs) encode rich semantic knowledge about the world, which can be valuable for guiding agents in high-level reasoning and planning across both embodied and purely textual settings. However, existing approaches often depend heavily on querying LLMs during training and inference, making them computationally expensive and difficult to deploy efficiently. In addition, these methods typically employ a pretrained, unaltered LLM whose parameters remain fixed throughout training, providing no opportunity for adaptation to the target task. To address these limitations, we introduce SCOPE (Subgoal-COnditioned Pretraining for Efficient planning), a one-shot hierarchical planner that leverages LLM-generated subgoals only at initialization to pretrain a lightweight student model. Unlike prior approaches that distill LLM knowledge by repeatedly prompting the model to adaptively generate subgoals during training, our method derives subgoals directly from example trajectories. This design removes the need for repeated LLM queries, significantly improving efficiency, though at the cost of reduced explainability and potentially suboptimal subgoals. Despite their suboptimality, our results on the TextCraft environment show that LLM-generated subgoals can still serve as a strong starting point for hierarchical goal decomposition in text-based planning tasks. Compared to the LLM-based hierarchical agent ADaPT (Prasad et al., 2024), which achieves a 0.52 success rate, our method reaches 0.56 and reduces inference time from 164.4 seconds to just 3.0 seconds.
- oai:arXiv.org:2512.09897v1
+ MMSI-Video-Bench: A Holistic Benchmark for Video-Based Spatial Intelligence
+ https://arxiv.org/abs/2512.10863
+ arXiv:2512.10863v1 Announce Type: new
+Abstract: Spatial understanding over continuous visual input is crucial for MLLMs to evolve into general-purpose assistants in physical environments. Yet there is still no comprehensive benchmark that holistically assesses the progress toward this goal. In this work, we introduce MMSI-Video-Bench, a fully human-annotated benchmark for video-based spatial intelligence in MLLMs. It operationalizes a four-level framework, Perception, Planning, Prediction, and Cross-Video Reasoning, through 1,106 questions grounded in 1,278 clips from 25 datasets and in-house videos. Each item is carefully designed and reviewed by 3DV experts with explanatory rationales to ensure precise, unambiguous grounding. Leveraging its diverse data sources and holistic task coverage, MMSI-Video-Bench also supports three domain-oriented sub-benchmarks (Indoor Scene Perception Bench, Robot Bench and Grounding Bench) for targeted capability assessment. We evaluate 25 strong open-source and proprietary MLLMs, revealing a striking human--AI gap: many models perform near chance, and the best reasoning model lags humans by nearly 60%. We further find that spatially fine-tuned models still fail to generalize effectively on our benchmark. Fine-grained error analysis exposes systematic failures in geometric reasoning, motion grounding, long-horizon prediction, and cross-video correspondence. We also show that typical frame-sampling strategies transfer poorly to our reasoning-intensive benchmark, and that neither 3D spatial cues nor chain-of-thought prompting yields meaningful gains. We expect our benchmark to establish a solid testbed for advancing video-based spatial intelligence.
+ oai:arXiv.org:2512.10863v1
+ cs.CVcs.AI
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Haoye Lu, Pavan Seshadri, Kaheer Suleman
+ Jingli Lin, Runsen Xu, Shaohao Zhu, Sihan Yang, Peizhou Cao, Yunlong Ran, Miao Hu, Chenming Zhu, Yiman Xie, Yilin Long, Wenbo Hu, Dahua Lin, Tai Wang, Jiangmiao Pang
- Visual Heading Prediction for Autonomous Aerial Vehicles
- https://arxiv.org/abs/2512.09898
- arXiv:2512.09898v1 Announce Type: new
-Abstract: The integration of Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) is increasingly central to the development of intelligent autonomous systems for applications such as search and rescue, environmental monitoring, and logistics. However, precise coordination between these platforms in real-time scenarios presents major challenges, particularly when external localization infrastructure such as GPS or GNSS is unavailable or degraded [1]. This paper proposes a vision-based, data-driven framework for real-time UAV-UGV integration, with a focus on robust UGV detection and heading angle prediction for navigation and coordination. The system employs a fine-tuned YOLOv5 model to detect UGVs and extract bounding box features, which are then used by a lightweight artificial neural network (ANN) to estimate the UAV's required heading angle. A VICON motion capture system was used to generate ground-truth data during training, resulting in a dataset of over 13,000 annotated images collected in a controlled lab environment. The trained ANN achieves a mean absolute error of 0.1506{\deg} and a root mean squared error of 0.1957{\deg}, offering accurate heading angle predictions using only monocular camera inputs. Experimental evaluations achieve 95% accuracy in UGV detection. This work contributes a vision-based, infrastructure- independent solution that demonstrates strong potential for deployment in GPS/GNSS-denied environments, supporting reliable multi-agent coordination under realistic dynamic conditions. A demonstration video showcasing the system's real-time performance, including UGV detection, heading angle prediction, and UAV alignment under dynamic conditions, is available at: https://github.com/Kooroshraf/UAV-UGV-Integration
- oai:arXiv.org:2512.09898v1
- cs.RO
+ Quantifying Emotional Tone in Tolkien's The Hobbit: Dialogue Sentiment Analysis with RegEx, NRC-VAD, and Python
+ https://arxiv.org/abs/2512.10865
+ arXiv:2512.10865v1 Announce Type: new
+Abstract: This study analyzes the emotional tone of dialogue in J. R. R. Tolkien's The Hobbit (1937) using computational text analysis. Dialogue was extracted with regular expressions, then preprocessed, and scored using the NRC-VAD lexicon to quantify emotional dimensions. The results show that the dialogue maintains a generally positive (high valence) and calm (low arousal) tone, with a gradually increasing sense of agency (dominance) as the story progresses. These patterns reflect the novel's emotional rhythm: moments of danger and excitement are regularly balanced by humor, camaraderie, and relief. Visualizations -- including emotional trajectory graphs and word clouds -- highlight how Tolkien's language cycles between tension and comfort. By combining computational tools with literary interpretation, this study demonstrates how digital methods can uncover subtle emotional structures in literature, revealing the steady rhythm and emotional modulation that shape the storytelling in The Hobbit.
+ oai:arXiv.org:2512.10865v1
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Lilin Qiu
+
+
+ UrbanAI 2025 Challenge: Linear vs Transformer Models for Long-Horizon Exogenous Temperature Forecasting
+ https://arxiv.org/abs/2512.10866
+ arXiv:2512.10866v1 Announce Type: new
+Abstract: We study long-horizon exogenous-only temperature forecasting - a challenging univariate setting where only the past values of the indoor temperature are used for prediction - using linear and Transformer-family models. We evaluate Linear, NLinear, DLinear, Transformer, Informer, and Autoformer under standardized train, validation, and test splits. Results show that linear baselines (Linear, NLinear, DLinear) consistently outperform more complex Transformer-family architectures, with DLinear achieving the best overall accuracy across all splits. These findings highlight that carefully designed linear models remain strong baselines for time series forecasting in challenging exogenous-only settings.
+ oai:arXiv.org:2512.10866v1
+ cs.LGcs.AI
- cs.CV
- cs.MA
- cs.SY
- eess.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Reza Ahmari, Ahmad Mohammadi, Vahid Hemmati, Mohammed Mynuddin, Parham Kebria, Mahmoud Nabil Mahmoud, Xiaohong Yuan, Abdollah Homaifar
+ http://creativecommons.org/licenses/by/4.0/
+ Ruslan Gokhman
- Near-Linear and Parameterized Approximations for Maximum Cliques in Disk Graphs
- https://arxiv.org/abs/2512.09899
- arXiv:2512.09899v1 Announce Type: new
-Abstract: A \emph{disk graph} is the intersection graph of (closed) disks in the plane. We consider the classic problem of finding a maximum clique in a disk graph. For general disk graphs, the complexity of this problem is still open, but for unit disk graphs, it is well known to be in P. The currently fastest algorithm runs in time $O(n^{7/3+ o(1)})$, where $n$ denotes the number of disks~\cite{EspenantKM23, keil_et_al:LIPIcs.SoCG.2025.63}. Moreover, for the case of disk graphs with $t$ distinct radii, the problem has also recently been shown to be in XP. More specifically, it is solvable in time $O^*(n^{2t})$~\cite{keil_et_al:LIPIcs.SoCG.2025.63}. In this paper, we present algorithms with improved running times by allowing for approximate solutions and by using randomization:
- (i) for unit disk graphs, we give an algorithm that, with constant success probability, computes a $(1-\varepsilon)$-approximate maximum clique in expected time $\tilde{O}(n/\varepsilon^2)$; and
- (ii) for disk graphs with $t$ distinct radii, we give a parameterized approximation scheme that, with a constant success probability, computes a $(1-\varepsilon)$-approximate maximum clique in expected time $\tilde{O}(f(t)\cdot (1/\varepsilon)^{O(t)} \cdot n)$.
- oai:arXiv.org:2512.09899v1
- cs.CG
- Thu, 11 Dec 2025 00:00:00 -0500
+ From Macro to Micro: Benchmarking Microscopic Spatial Intelligence on Molecules via Vision-Language Models
+ https://arxiv.org/abs/2512.10867
+ arXiv:2512.10867v1 Announce Type: new
+Abstract: This paper introduces the concept of Microscopic Spatial Intelligence (MiSI), the capability to perceive and reason about the spatial relationships of invisible microscopic entities, which is fundamental to scientific discovery. To assess the potential of Vision-Language Models (VLMs) in this domain, we propose a systematic benchmark framework MiSI-Bench. This framework features over 163,000 question-answer pairs and 587,000 images derived from approximately 4,000 molecular structures, covering nine complementary tasks that evaluate abilities ranging from elementary spatial transformations to complex relational identifications. Experimental results reveal that current state-of-the-art VLMs perform significantly below human level on this benchmark. However, a fine-tuned 7B model demonstrates substantial potential, even surpassing humans in spatial transformation tasks, while its poor performance in scientifically-grounded tasks like hydrogen bond recognition underscores the necessity of integrating explicit domain knowledge for progress toward scientific AGI. The datasets are available at https://huggingface.co/datasets/zongzhao/MiSI-bench.
+ oai:arXiv.org:2512.10867v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jie Gao, Pawel Gawrychowski, Panos Giannopoulos, Wolfgang Mulzer, Satyam Singh, Frank Staals, Meirav Zehavi
+ Zongzhao Li, Xiangzhe Kong, Jiahui Su, Zongyang Ma, Mingze Li, Songyou Li, Yuelin Zhang, Yu Rong, Tingyang Xu, Deli Zhao, Wenbing Huang
- Link-Sharing Backpressure Routing In Wireless Multi-Hop Networks
- https://arxiv.org/abs/2512.09902
- arXiv:2512.09902v1 Announce Type: new
-Abstract: Backpressure (BP) routing and scheduling is an established resource allocation method for wireless multi-hop networks, noted for its fully distributed operation and maximum queue stability. Recent advances in shortest path-biased BP routing (SP-BP) mitigate shortcomings such as slow startup and random walks, yet exclusive link-level commodity selection still causes last-packet problem and bandwidth underutilization. By revisiting the Lyapunov drift theory underlying BP, we show that the legacy exclusive commodity selection is unnecessary, and propose a Maximum Utility (MaxU) link-sharing method to expand its performance envelope without increasing control message overhead. Numerical results show that MaxU SP-BP substantially mitigates the last-packet problem and slightly expands the network capacity region.
- oai:arXiv.org:2512.09902v1
+ A Differentiable Digital Twin of Distributed Link Scheduling for Contention-Aware Networking
+ https://arxiv.org/abs/2512.10874
+ arXiv:2512.10874v1 Announce Type: new
+Abstract: Many routing and flow optimization problems in wired networks can be solved efficiently using minimum cost flow formulations. However, this approach does not extend to wireless multi-hop networks, where the assumptions of fixed link capacity and linear cost structure collapse due to contention for shared spectrum resources. The key challenge is that the long-term capacity of a wireless link becomes a non-linear function of its network context, including network topology, link quality, and the traffic assigned to neighboring links. In this work, we pursue a new direction of modeling wireless network under randomized medium access control by developing an analytical network digital twin (NDT) that predicts link duty cycles from network context. We generalize randomized contention as finding a Maximal Independent Set (MIS) on the conflict graph using weighted Luby's algorithm, derive an analytical model of link duty cycles, and introduce an iterative procedure that resolves the circular dependency among duty cycle, link capacity, and contention probability. Our numerical experiments show that the proposed NDT accurately predicts link duty cycles and congestion patterns with up to a 5000x speedup over packet-level simulation, and enables us to optimize link scheduling using gradient descent for reduced congestion and radio footprint.
+ oai:arXiv.org:2512.10874v1cs.NI
- cs.DC
+ cs.LGcs.SY
+ eess.SPeess.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhongyuan Zhao, Yujun Ming, Ananthram Swami, Kevin Chan, Fikadu Dagefu, Santiago Segarra
+ Zhongyuan Zhao, Yujun Ming, Kevin Chan, Ananthram Swami, Santiago Segarra
- YOPO-Nav: Visual Navigation using 3DGS Graphs from One-Pass Videos
- https://arxiv.org/abs/2512.09903
- arXiv:2512.09903v1 Announce Type: new
-Abstract: Visual navigation has emerged as a practical alternative to traditional robotic navigation pipelines that rely on detailed mapping and path planning. However, constructing and maintaining 3D maps is often computationally expensive and memory-intensive. We address the problem of visual navigation when exploration videos of a large environment are available. The videos serve as a visual reference, allowing a robot to retrace the explored trajectories without relying on metric maps. Our proposed method, YOPO-Nav (You Only Pass Once), encodes an environment into a compact spatial representation composed of interconnected local 3D Gaussian Splatting (3DGS) models. During navigation, the framework aligns the robot's current visual observation with this representation and predicts actions that guide it back toward the demonstrated trajectory. YOPO-Nav employs a hierarchical design: a visual place recognition (VPR) module provides coarse localization, while the local 3DGS models refine the goal and intermediate poses to generate control actions. To evaluate our approach, we introduce the YOPO-Campus dataset, comprising 4 hours of egocentric video and robot controller inputs from over 6 km of human-teleoperated robot trajectories. We benchmark recent visual navigation methods on trajectories from YOPO-Campus using a Clearpath Jackal robot. Experimental results show YOPO-Nav provides excellent performance in image-goal navigation for real-world scenes on a physical robot. The dataset and code will be made publicly available for visual navigation and scene representation research.
- oai:arXiv.org:2512.09903v1
- cs.RO
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Guided Transfer Learning for Discrete Diffusion Models
+ https://arxiv.org/abs/2512.10877
+ arXiv:2512.10877v1 Announce Type: new
+Abstract: Discrete diffusion models achieve strong performance across language and other discrete domains, providing a powerful alternative to autoregressive models. However, their strong performance relies on large training datasets, which are costly or risky to obtain, especially when adapting to new domains. Transfer learning is the natural way to adapt pretrained discrete diffusion models, but current methods require fine-tuning large diffusion models, which is computationally expensive and often impractical. Building on ratio-based transfer learning for continuous diffusion, we provide Guided Transfer Learning for discrete diffusion models (GTL). This enables sampling from a target distribution without modifying the pretrained denoiser. The same guidance formulation applies to both discrete-time diffusion and continuous-time score-based discrete diffusion, yielding a unified treatment. Guided discrete diffusion often requires many forward passes of the guidance network, which becomes impractical for large vocabularies and long sequences. To address this, we further present an efficient guided sampler that concentrates evaluations on planner-selected positions and top candidate tokens, thus lowering sampling time and computation. This makes guided language modeling practical at scale for large vocabularies and long sequences. We evaluate GTL on sequential data, including synthetic Markov chains and language modeling, and provide empirical analyses of its behavior.
+ oai:arXiv.org:2512.10877v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Julian Kleutgens, Claudio Battiloro, Lingkai Kong, Benjamin Grewe, Francesca Dominici, Mauricio Tec
+
+
+ Classifier Reconstruction Through Counterfactual-Aware Wasserstein Prototypes
+ https://arxiv.org/abs/2512.10878
+ arXiv:2512.10878v1 Announce Type: new
+Abstract: Counterfactual explanations provide actionable insights by identifying minimal input changes required to achieve a desired model prediction. Beyond their interpretability benefits, counterfactuals can also be leveraged for model reconstruction, where a surrogate model is trained to replicate the behavior of a target model. In this work, we demonstrate that model reconstruction can be significantly improved by recognizing that counterfactuals, which typically lie close to the decision boundary, can serve as informative though less representative samples for both classes. This is particularly beneficial in settings with limited access to labeled data. We propose a method that integrates original data samples with counterfactuals to approximate class prototypes using the Wasserstein barycenter, thereby preserving the underlying distributional structure of each class. This approach enhances the quality of the surrogate model and mitigates the issue of decision boundary shift, which commonly arises when counterfactuals are naively treated as ordinary training instances. Empirical results across multiple datasets show that our method improves fidelity between the surrogate and target models, validating its effectiveness.
+ oai:arXiv.org:2512.10878v1
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Ryan Meegan, Adam D'Souza, Bryan Bo Cao, Shubham Jain, Kristin Dana
+ Xuan Zhao, Zhuo Cao, Arya Bangun, Hanno Scharr, Ira Assent
- VisualActBench: Can VLMs See and Act like a Human?
- https://arxiv.org/abs/2512.09907
- arXiv:2512.09907v1 Announce Type: new
-Abstract: Vision-Language Models (VLMs) have achieved impressive progress in perceiving and describing visual environments. However, their ability to proactively reason and act based solely on visual inputs, without explicit textual prompts, remains underexplored. We introduce a new task, Visual Action Reasoning, and propose VisualActBench, a large-scale benchmark comprising 1,074 videos and 3,733 human-annotated actions across four real-world scenarios. Each action is labeled with an Action Prioritization Level (APL) and a proactive-reactive type to assess models' human-aligned reasoning and value sensitivity. We evaluate 29 VLMs on VisualActBench and find that while frontier models like GPT4o demonstrate relatively strong performance, a significant gap remains compared to human-level reasoning, particularly in generating proactive, high-priority actions. Our results highlight limitations in current VLMs' ability to interpret complex context, anticipate outcomes, and align with human decision-making frameworks. VisualActBench establishes a comprehensive foundation for assessing and improving the real-world readiness of proactive, vision-centric AI agents.
- oai:arXiv.org:2512.09907v1
+ MoCapAnything: Unified 3D Motion Capture for Arbitrary Skeletons from Monocular Videos
+ https://arxiv.org/abs/2512.10881
+ arXiv:2512.10881v1 Announce Type: new
+Abstract: Motion capture now underpins content creation far beyond digital humans, yet most existing pipelines remain species- or template-specific. We formalize this gap as Category-Agnostic Motion Capture (CAMoCap): given a monocular video and an arbitrary rigged 3D asset as a prompt, the goal is to reconstruct a rotation-based animation such as BVH that directly drives the specific asset. We present MoCapAnything, a reference-guided, factorized framework that first predicts 3D joint trajectories and then recovers asset-specific rotations via constraint-aware inverse kinematics. The system contains three learnable modules and a lightweight IK stage: (1) a Reference Prompt Encoder that extracts per-joint queries from the asset's skeleton, mesh, and rendered images; (2) a Video Feature Extractor that computes dense visual descriptors and reconstructs a coarse 4D deforming mesh to bridge the gap between video and joint space; and (3) a Unified Motion Decoder that fuses these cues to produce temporally coherent trajectories. We also curate Truebones Zoo with 1038 motion clips, each providing a standardized skeleton-mesh-render triad. Experiments on both in-domain benchmarks and in-the-wild videos show that MoCapAnything delivers high-quality skeletal animations and exhibits meaningful cross-species retargeting across heterogeneous rigs, enabling scalable, prompt-driven 3D motion capture for arbitrary assets. Project page: https://animotionlab.github.io/MoCapAnything/
+ oai:arXiv.org:2512.10881v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Daoan Zhang, Pai Liu, Xiaofei Zhou, Yuan Ge, Guangchen Lan, Jing Bi, Christopher Brinton, Ehsan Hoque, Jiebo Luo
+ http://creativecommons.org/licenses/by/4.0/
+ Kehong Gong, Zhengyu Wen, Weixia He, Mingxi Xu, Qi Wang, Ning Zhang, Zhengyu Li, Dongze Lian, Wei Zhao, Xiaoyu He, Mingyuan Zhang
- Bayesian Networks, Markov Networks, Moralisation, Triangulation: a Categorical Perspective
- https://arxiv.org/abs/2512.09908
- arXiv:2512.09908v1 Announce Type: new
-Abstract: Moralisation and Triangulation are transformations allowing to switch between different ways of factoring a probability distribution into a graphical model. Moralisation allows to view a Bayesian network (a directed model) as a Markov network (an undirected model), whereas triangulation addresses the opposite direction. We present a categorical framework where these transformations are modelled as functors between a category of Bayesian networks and one of Markov networks. The two kinds of network (the objects of these categories) are themselves represented as functors from a `syntax' domain to a `semantics' codomain. Notably, moralisation and triangulation can be defined inductively on such syntax via functor pre-composition. Moreover, while moralisation is fully syntactic, triangulation relies on semantics. This leads to a discussion of the variable elimination algorithm, reinterpreted here as a functor in its own right, that splits the triangulation procedure in two: one purely syntactic, the other purely semantic. This approach introduces a functorial perspective into the theory of probabilistic graphical models, which highlights the distinctions between syntactic and semantic modifications.
- oai:arXiv.org:2512.09908v1
- cs.AI
- cs.LO
- math.CT
- Thu, 11 Dec 2025 00:00:00 -0500
+ Computational emotion analysis with multimodal LLMs: Current evidence on an emerging methodological opportunity
+ https://arxiv.org/abs/2512.10882
+ arXiv:2512.10882v1 Announce Type: new
+Abstract: Emotions are central to politics and analyzing their role in political communication has a long tradition. As research increasingly leverages audio-visual materials to analyze the display of emotions, the emergence of multimodal generative AI promises great advances. However, we lack evidence about the effectiveness of multimodal AI in emotion analysis. This paper addresses this gap by evaluating current multimodal large language models (mLLMs) in video-based analysis of emotional arousal in two complementary data sets of human-labeled video recordings. I find that under ideal circumstances, mLLMs' emotional arousal ratings are highly reliable and show little to know indication of demographic bias. However, in recordings of speakers in real-world parliamentary debates, mLLMs' arousal ratings fail to deliver on this promise with potential negative consequences for downstream statistical inferences. This study therefore underscores the need for continued, thorough evaluation of emerging generative AI methods in political analysis and contributes a suitable replicable framework.
+ oai:arXiv.org:2512.10882v1
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Antonio Lorenzin, Fabio Zanasi
+ Hauke Licht
- STACHE: Local Black-Box Explanations for Reinforcement Learning Policies
- https://arxiv.org/abs/2512.09909
- arXiv:2512.09909v1 Announce Type: new
-Abstract: Reinforcement learning agents often behave unexpectedly in sparse-reward or safety-critical environments, creating a strong need for reliable debugging and verification tools. In this paper, we propose STACHE, a comprehensive framework for generating local, black-box explanations for an agent's specific action within discrete Markov games. Our method produces a Composite Explanation consisting of two complementary components: (1) a Robustness Region, the connected neighborhood of states where the agent's action remains invariant, and (2) Minimal Counterfactuals, the smallest state perturbations required to alter that decision. By exploiting the structure of factored state spaces, we introduce an exact, search-based algorithm that circumvents the fidelity gaps of surrogate models. Empirical validation on Gymnasium environments demonstrates that our framework not only explains policy actions, but also effectively captures the evolution of policy logic during training - from erratic, unstable behavior to optimized, robust strategies - providing actionable insights into agent sensitivity and decision boundaries.
- oai:arXiv.org:2512.09909v1
+ Physics-Informed Learning of Flow Distribution and Receiver Heat Losses in Parabolic Trough Solar Fields
+ https://arxiv.org/abs/2512.10886
+ arXiv:2512.10886v1 Announce Type: new
+Abstract: Parabolic trough Concentrating Solar Power (CSP) plants operate large hydraulic networks of collector loops that must deliver a uniform outlet temperature despite spatially heterogeneous optical performance, heat losses, and pressure drops. While loop temperatures are measured, loop-level mass flows and receiver heat-loss parameters are unobserved, making it impossible to diagnose hydraulic imbalances or receiver degradation using standard monitoring tools.
+ We present a physics-informed learning framework that infers (i) loop-level mass-flow ratios and (ii) time-varying receiver heat-transfer coefficients directly from routine operational data. The method exploits nocturnal homogenization periods -- when hot oil is circulated through a non-irradiated field -- to isolate hydraulic and thermal-loss effects. A differentiable conjugate heat-transfer model is discretized and embedded into an end-to-end learning pipeline optimized using historical plant data from the 50 MW Andasol 3 solar field.
+ The model accurately reconstructs loop temperatures (RMSE $<2^\circ$C) and produces physically meaningful estimates of loop imbalances and receiver heat losses. Comparison against drone-based infrared thermography (QScan) shows strong correspondence, correctly identifying all areas with high-loss receivers. This demonstrates that noisy real-world CSP operational data contain enough information to recover latent physical parameters when combined with appropriate modeling and differentiable optimization.
+ oai:arXiv.org:2512.10886v1cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CE
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Andrew Elashkin, Orna Grumberg
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Stefan Matthes, Markus Schramm
- Efficient Continual Learning in Neural Machine Translation: A Low-Rank Adaptation Approach
- https://arxiv.org/abs/2512.09910
- arXiv:2512.09910v1 Announce Type: new
-Abstract: Continual learning in Neural Machine Translation (NMT) faces the dual challenges of catastrophic forgetting and the high computational cost of retraining. This study establishes Low-Rank Adaptation (LoRA) as a parameter-efficient framework to address these challenges in dedicated NMT architectures. We first demonstrate that LoRA-based fine-tuning adapts NMT models to new languages and domains with performance on par with full-parameter techniques, while utilizing only a fraction of the parameter space. Second, we propose an interactive adaptation method using a calibrated linear combination of LoRA modules. This approach functions as a gate-free mixture of experts, enabling real-time, user-controllable adjustments to domain and style without retraining. Finally, to mitigate catastrophic forgetting, we introduce a novel gradient-based regularization strategy specifically designed for low-rank decomposition matrices. Unlike methods that regularize the full parameter set, our approach weights the penalty on the low-rank updates using historical gradient information. Experimental results indicate that this strategy efficiently preserves prior domain knowledge while facilitating the acquisition of new tasks, offering a scalable paradigm for interactive and continual NMT.
- oai:arXiv.org:2512.09910v1
- cs.CL
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ PubTables-v2: A new large-scale dataset for full-page and multi-page table extraction
+ https://arxiv.org/abs/2512.10888
+ arXiv:2512.10888v1 Announce Type: new
+Abstract: Table extraction (TE) is a key challenge in visual document understanding. Traditional approaches detect tables first, then recognize their structure. Recently, interest has surged in developing methods, such as vision-language models (VLMs), that can extract tables directly in their full page or document context. However, progress has been difficult to demonstrate due to a lack of annotated data. To address this, we create a new large-scale dataset, PubTables-v2. PubTables-v2 supports a number of current challenging table extraction tasks. Notably, it is the first large-scale benchmark for multi-page table structure recognition. We demonstrate its usefulness by evaluating domain-specialized VLMs on these tasks and highlighting current progress. Finally, we use PubTables-v2 to create the Page-Object Table Transformer (POTATR), an image-to-graph extension of the Table Transformer to comprehensive page-level TE. Data, code, and trained models will be released.
+ oai:arXiv.org:2512.10888v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Salvador Carri\'on, Francisco Casacuberta
+ Brandon Smock, Valerie Faucon-Morin, Max Sokolov, Libin Liang, Tayyibah Khanam, Maury Courtland
- Py-DiSMech: A Scalable and Efficient Framework for Discrete Differential Geometry-Based Modeling and Control of Soft Robots
- https://arxiv.org/abs/2512.09911
- arXiv:2512.09911v1 Announce Type: new
-Abstract: High-fidelity simulation has become essential to the design and control of soft robots, where large geometric deformations and complex contact interactions challenge conventional modeling tools. Recent advances in the field demand simulation frameworks that combine physical accuracy, computational scalability, and seamless integration with modern control and optimization pipelines. In this work, we present Py-DiSMech, a Python-based, open-source simulation framework for modeling and control of soft robotic structures grounded in the principles of Discrete Differential Geometry (DDG). By discretizing geometric quantities such as curvature and strain directly on meshes, Py-DiSMech captures the nonlinear deformation of rods, shells, and hybrid structures with high fidelity and reduced computational cost. The framework introduces (i) a fully vectorized NumPy implementation achieving order-of-magnitude speed-ups over existing geometry-based simulators; (ii) a penalty-energy-based fully implicit contact model that supports rod-rod, rod-shell, and shell-shell interactions; (iii) a natural-strain-based feedback-control module featuring a proportional-integral (PI) controller for shape regulation and trajectory tracking; and (iv) a modular, object-oriented software design enabling user-defined elastic energies, actuation schemes, and integration with machine-learning libraries. Benchmark comparisons demonstrate that Py-DiSMech substantially outperforms the state-of-the-art simulator Elastica in computational efficiency while maintaining physical accuracy. Together, these features establish Py-DiSMech as a scalable, extensible platform for simulation-driven design, control validation, and sim-to-real research in soft robotics.
- oai:arXiv.org:2512.09911v1
+ Iterative Compositional Data Generation for Robot Control
+ https://arxiv.org/abs/2512.10891
+ arXiv:2512.10891v1 Announce Type: new
+Abstract: Collecting robotic manipulation data is expensive, making it impractical to acquire demonstrations for the combinatorially large space of tasks that arise in multi-object, multi-robot, and multi-environment settings. While recent generative models can synthesize useful data for individual tasks, they do not exploit the compositional structure of robotic domains and struggle to generalize to unseen task combinations. We propose a semantic compositional diffusion transformer that factorizes transitions into robot-, object-, obstacle-, and objective-specific components and learns their interactions through attention. Once trained on a limited subset of tasks, we show that our model can zero-shot generate high-quality transitions from which we can learn control policies for unseen task combinations. Then, we introduce an iterative self-improvement procedure in which synthetic data is validated via offline reinforcement learning and incorporated into subsequent training rounds. Our approach substantially improves zero-shot performance over monolithic and hard-coded compositional baselines, ultimately solving nearly all held-out tasks and demonstrating the emergence of meaningful compositional structure in the learned representations.
+ oai:arXiv.org:2512.10891v1cs.RO
- physics.comp-ph
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Radha Lahoti, Ryan Chaiyakul, M. Khalid Jawed
+ http://creativecommons.org/licenses/by/4.0/
+ Anh-Quan Pham, Marcel Hussing, Shubhankar P. Patankar, Dani S. Bassett, Jorge Mendez-Mendez, Eric Eaton
- NordFKB: a fine-grained benchmark dataset for geospatial AI in Norway
- https://arxiv.org/abs/2512.09913
- arXiv:2512.09913v1 Announce Type: new
-Abstract: We present NordFKB, a fine-grained benchmark dataset for geospatial AI in Norway, derived from the authoritative, highly accurate, national Felles KartdataBase (FKB). The dataset contains high-resolution orthophotos paired with detailed annotations for 36 semantic classes, including both per-class binary segmentation masks in GeoTIFF format and COCO-style bounding box annotations. Data is collected from seven geographically diverse areas, ensuring variation in climate, topography, and urbanization. Only tiles containing at least one annotated object are included, and training/validation splits are created through random sampling across areas to ensure representative class and context distributions. Human expert review and quality control ensures high annotation accuracy. Alongside the dataset, we release a benchmarking repository with standardized evaluation protocols and tools for semantic segmentation and object detection, enabling reproducible and comparable research. NordFKB provides a robust foundation for advancing AI methods in mapping, land administration, and spatial planning, and paves the way for future expansions in coverage, temporal scope, and data modalities.
- oai:arXiv.org:2512.09913v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Designing Truthful Mechanisms for Asymptotic Fair Division
+ https://arxiv.org/abs/2512.10892
+ arXiv:2512.10892v1 Announce Type: new
+Abstract: We study the problem of fairly allocating a set of $m$ goods among $n$ agents in the asymptotic setting, where each item's value for each agent is drawn from an underlying joint distribution. Prior works have shown that if this distribution is well-behaved, then an envy-free allocation exists with high probability when $m=\Omega(n\log{n})$ [Dickerson et al., 2014]. Under the stronger assumption that item values are independently and identically distributed (i.i.d.) across agents, this requirement improves to $m=\Omega(n\log{n}/\log{\log{n}})$, which is tight [Manurangsi and Suksompong, 2021]. However, these results rely on non-strategyproof mechanisms, such as maximum-welfare allocation or the round-robin algorithm, limiting their applicability in settings with strategic agents.
+ In this work, we extend the theory to a broader, more realistic class of joint value distributions, allowing for correlations among agents, atomicity, and unequal probabilities of having the highest value for an item. We show that envy-free allocations continue to exist with a high probability when $m=\Omega(n\log{n})$. More importantly, we give a new randomized mechanism that is truthful in expectation, efficiently implementable in polynomial time, and outputs envy-free allocations with high probability, answering an open question posed by [Manurangsi and Suksompong, 2017]. We further extend our mechanism to settings with asymptotic weighted fair division and multiple agent types and good types, proving new results in each case.
+ oai:arXiv.org:2512.10892v1
+ cs.GT
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by/4.0/
- Sander Riis{\o}en Jyhne, Aditya Gupta, Ben Worsley, Marianne Andersen, Ivar Oveland, Alexander Salveson Nossum
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jugal Garg, Vishnu V. Narayan, Yuang Eric Shen
- FALCON: Few-step Accurate Likelihoods for Continuous Flows
- https://arxiv.org/abs/2512.09914
- arXiv:2512.09914v1 Announce Type: new
-Abstract: Scalable sampling of molecular states in thermodynamic equilibrium is a long-standing challenge in statistical physics. Boltzmann Generators tackle this problem by pairing a generative model, capable of exact likelihood computation, with importance sampling to obtain consistent samples under the target distribution. Current Boltzmann Generators primarily use continuous normalizing flows (CNFs) trained with flow matching for efficient training of powerful models. However, likelihood calculation for these models is extremely costly, requiring thousands of function evaluations per sample, severely limiting their adoption. In this work, we propose Few-step Accurate Likelihoods for Continuous Flows (FALCON), a method which allows for few-step sampling with a likelihood accurate enough for importance sampling applications by introducing a hybrid training objective that encourages invertibility. We show FALCON outperforms state-of-the-art normalizing flow models for molecular Boltzmann sampling and is two orders of magnitude faster than the equivalently performing CNF model.
- oai:arXiv.org:2512.09914v1
- cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ DuetSVG: Unified Multimodal SVG Generation with Internal Visual Guidance
+ https://arxiv.org/abs/2512.10894
+ arXiv:2512.10894v1 Announce Type: new
+Abstract: Recent vision-language model (VLM)-based approaches have achieved impressive results on SVG generation. However, because they generate only text and lack visual signals during decoding, they often struggle with complex semantics and fail to produce visually appealing or geometrically coherent SVGs. We introduce DuetSVG, a unified multimodal model that jointly generates image tokens and corresponding SVG tokens in an end-to-end manner. DuetSVG is trained on both image and SVG datasets. At inference, we apply a novel test-time scaling strategy that leverages the model's native visual predictions as guidance to improve SVG decoding quality. Extensive experiments show that our method outperforms existing methods, producing visually faithful, semantically aligned, and syntactically clean SVGs across a wide range of applications.
+ oai:arXiv.org:2512.10894v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Danyal Rehman, Tara Akhound-Sadegh, Artem Gazizov, Yoshua Bengio, Alexander Tong
+ Peiying Zhang, Nanxuan Zhao, Matthew Fisher, Yiran Xu, Jing Liao, Difan Liu
- LISN: Language-Instructed Social Navigation with VLM-based Controller Modulating
- https://arxiv.org/abs/2512.09920
- arXiv:2512.09920v1 Announce Type: new
-Abstract: Towards human-robot coexistence, socially aware navigation is significant for mobile robots. Yet existing studies on this area focus mainly on path efficiency and pedestrian collision avoidance, which are essential but represent only a fraction of social navigation. Beyond these basics, robots must also comply with user instructions, aligning their actions to task goals and social norms expressed by humans. In this work, we present LISN-Bench, the first simulation-based benchmark for language-instructed social navigation. Built on Rosnav-Arena 3.0, it is the first standardized social navigation benchmark to incorporate instruction following and scene understanding across diverse contexts. To address this task, we further propose Social-Nav-Modulator, a fast-slow hierarchical system where a VLM agent modulates costmaps and controller parameters. Decoupling low-level action generation from the slower VLM loop reduces reliance on high-frequency VLM inference while improving dynamic avoidance and perception adaptability. Our method achieves an average success rate of 91.3%, which is greater than 63% than the most competitive baseline, with most of the improvements observed in challenging tasks such as following a person in a crowd and navigating while strictly avoiding instruction-forbidden regions. The project website is at: https://social-nav.github.io/LISN-project/
- oai:arXiv.org:2512.09920v1
- cs.RO
+ LLMs Can Assist with Proposal Selection at Large User Facilities
+ https://arxiv.org/abs/2512.10895
+ arXiv:2512.10895v1 Announce Type: new
+Abstract: We explore how large language models (LLMs) can enhance the proposal selection process at large user facilities, offering a scalable, consistent, and cost-effective alternative to traditional human review. Proposal selection depends on assessing the relative strength among submitted proposals; however, traditional human scoring often suffers from weak inter-proposal correlations and is subject to reviewer bias and inconsistency. A pairwise preference-based approach is logically superior, providing a more rigorous and internally consistent basis for ranking, but its quadratic workload makes it impractical for human reviewers. We address this limitation using LLMs. Leveraging the uniquely well-curated proposals and publication records from three beamlines at the Spallation Neutron Source (SNS), Oak Ridge National Laboratory (ORNL), we show that the LLM rankings correlate strongly with the human rankings (Spearman $\rho\simeq 0.2-0.8$, improving to $\geq 0.5$ after 10\% outlier removal). Moreover, LLM performance is no worse than that of human reviewers in identifying proposals with high publication potential, while costing over two orders of magnitude less. Beyond ranking, LLMs enable advanced analyses that are challenging for humans, such as quantitative assessment of proposal similarity via embedding models, which provides information crucial for review committees.
+ oai:arXiv.org:2512.10895v1cs.AI
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Junting Chen, Yunchuan Li, Panfeng Jiang, Jiacheng Du, Zixuan Chen, Chenrui Tie, Jiajun Deng, Lin Shao
+ Lijie Ding, Janell Thomson, Jon Taylor, Changwoo Do
- Splatent: Splatting Diffusion Latents for Novel View Synthesis
- https://arxiv.org/abs/2512.09923
- arXiv:2512.09923v1 Announce Type: new
-Abstract: Radiance field representations have recently been explored in the latent space of VAEs that are commonly used by diffusion models. This direction offers efficient rendering and seamless integration with diffusion-based pipelines. However, these methods face a fundamental limitation: The VAE latent space lacks multi-view consistency, leading to blurred textures and missing details during 3D reconstruction. Existing approaches attempt to address this by fine-tuning the VAE, at the cost of reconstruction quality, or by relying on pre-trained diffusion models to recover fine-grained details, at the risk of some hallucinations. We present Splatent, a diffusion-based enhancement framework designed to operate on top of 3D Gaussian Splatting (3DGS) in the latent space of VAEs. Our key insight departs from the conventional 3D-centric view: rather than reconstructing fine-grained details in 3D space, we recover them in 2D from input views through multi-view attention mechanisms. This approach preserves the reconstruction quality of pretrained VAEs while achieving faithful detail recovery. Evaluated across multiple benchmarks, Splatent establishes a new state-of-the-art for VAE latent radiance field reconstruction. We further demonstrate that integrating our method with existing feed-forward frameworks, consistently improves detail preservation, opening new possibilities for high-quality sparse-view 3D reconstruction.
- oai:arXiv.org:2512.09923v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Multi-Granular Node Pruning for Circuit Discovery
+ https://arxiv.org/abs/2512.10903
+ arXiv:2512.10903v1 Announce Type: new
+Abstract: Circuit discovery aims to identify minimal subnetworks that are responsible for specific behaviors in large language models (LLMs). Existing approaches primarily rely on iterative edge pruning, which is computationally expensive and limited to coarse-grained units such as attention heads or MLP blocks, overlooking finer structures like individual neurons. We propose a node-level pruning framework for circuit discovery that addresses both scalability and granularity limitations. Our method introduces learnable masks across multiple levels of granularity, from entire blocks to individual neurons, within a unified optimization objective. Granularity-specific sparsity penalties guide the pruning process, allowing a comprehensive compression in a single fine-tuning run. Empirically, our approach identifies circuits that are smaller in nodes than those discovered by prior methods; moreover, we demonstrate that many neurons deemed important by coarse methods are actually irrelevant, while still maintaining task performance. Furthermore, our method has a significantly lower memory footprint, 5-10x, as it does not require keeping intermediate activations in the memory to work.
+ oai:arXiv.org:2512.10903v1
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Or Hirschorn, Omer Sela, Inbar Huberman-Spiegelglas, Netalee Efrat, Eli Alshan, Ianir Ideses, Frederic Devernay, Yochai Zvik, Lior Fritz
+ http://creativecommons.org/licenses/by/4.0/
+ Muhammad Umair Haider, Hammad Rizwan, Hassan Sajjad, A. B. Siddique
- ReViSE: Towards Reason-Informed Video Editing in Unified Models with Self-Reflective Learning
- https://arxiv.org/abs/2512.09924
- arXiv:2512.09924v1 Announce Type: new
-Abstract: Video unified models exhibit strong capabilities in understanding and generation, yet they struggle with reason-informed visual editing even when equipped with powerful internal vision-language models (VLMs). We attribute this gap to two factors: 1) existing datasets are inadequate for training and evaluating reasoning-aware video editing, and 2) an inherent disconnect between the models' reasoning and editing capabilities, which prevents the rich understanding from effectively instructing the editing process. Bridging this gap requires an integrated framework that connects reasoning with visual transformation. To address this gap, we introduce the Reason-Informed Video Editing (RVE) task, which requires reasoning about physical plausibility and causal dynamics during editing. To support systematic evaluation, we construct RVE-Bench, a comprehensive benchmark with two complementary subsets: Reasoning-Informed Video Editing and In-Context Video Generation. These subsets cover diverse reasoning dimensions and real-world editing scenarios. Building upon this foundation, we propose the ReViSE, a Self-Reflective Reasoning (SRF) framework that unifies generation and evaluation within a single architecture. The model's internal VLM provides intrinsic feedback by assessing whether the edited video logically satisfies the given instruction. The differential feedback that refines the generator's reasoning behavior during training. Extensive experiments on RVE-Bench demonstrate that ReViSE significantly enhances editing accuracy and visual fidelity, achieving a 32% improvement of the Overall score in the reasoning-informed video editing subset over state-of-the-art methods.
- oai:arXiv.org:2512.09924v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ CompanionCast: A Multi-Agent Conversational AI Framework with Spatial Audio for Social Co-Viewing Experiences
+ https://arxiv.org/abs/2512.10918
+ arXiv:2512.10918v1 Announce Type: new
+Abstract: Social presence is central to the enjoyment of watching content together, yet modern media consumption is increasingly solitary. We investigate whether multi-agent conversational AI systems can recreate the dynamics of shared viewing experiences across diverse content types. We present CompanionCast, a general framework for orchestrating multiple role-specialized AI agents that respond to video content using multimodal inputs, speech synthesis, and spatial audio. Distinctly, CompanionCast integrates an LLM-as-a-Judge module that iteratively scores and refines conversations across five dimensions (relevance, authenticity, engagement, diversity, personality consistency). We validate this framework through sports viewing, a domain with rich dynamics and strong social traditions, where a pilot study with soccer fans suggests that multi-agent interaction improves perceived social presence compared to solo viewing. We contribute: (1) a generalizable framework for orchestrating multi-agent conversations around multimodal video content, (2) a novel evaluator-agent pipeline for conversation quality control, and (3) exploratory evidence of increased social presence in AI-mediated co-viewing. We discuss challenges and future directions for applying this approach to diverse viewing contexts including entertainment, education, and collaborative watching experiences.
+ oai:arXiv.org:2512.10918v1
+ cs.HC
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Xinyu Liu, Hangjie Yuan, Yujie Wei, Jiazheng Xing, Yujin Han, Jiahao Pan, Yanbiao Ma, Chi-Min Chan, Kang Zhao, Shiwei Zhang, Wenhan Luo, Yike Guo
+ Yiyang Wang, Chen Chen, Tica Lin, Vishnu Raj, Josh Kimball, Alex Cabral, Josiah Hester
- GAINS: Gaussian-based Inverse Rendering from Sparse Multi-View Captures
- https://arxiv.org/abs/2512.09925
- arXiv:2512.09925v1 Announce Type: new
-Abstract: Recent advances in Gaussian Splatting-based inverse rendering extend Gaussian primitives with shading parameters and physically grounded light transport, enabling high-quality material recovery from dense multi-view captures. However, these methods degrade sharply under sparse-view settings, where limited observations lead to severe ambiguity between geometry, reflectance, and lighting. We introduce GAINS (Gaussian-based Inverse rendering from Sparse multi-view captures), a two-stage inverse rendering framework that leverages learning-based priors to stabilize geometry and material estimation. GAINS first refines geometry using monocular depth/normal and diffusion priors, then employs segmentation, intrinsic image decomposition (IID), and diffusion priors to regularize material recovery. Extensive experiments on synthetic and real-world datasets show that GAINS significantly improves material parameter accuracy, relighting quality, and novel-view synthesis compared to state-of-the-art Gaussian-based inverse rendering methods, especially under sparse-view settings. Project page: https://patrickbail.github.io/gains/
- oai:arXiv.org:2512.09925v1
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ SparseSwaps: Tractable LLM Pruning Mask Refinement at Scale
+ https://arxiv.org/abs/2512.10922
+ arXiv:2512.10922v1 Announce Type: new
+Abstract: The resource requirements of Neural Networks can be significantly reduced through pruning -- the removal of seemingly less important parameters. However, with the rise of Large Language Models (LLMs), full retraining to recover pruning-induced performance degradation is often prohibitive and classical approaches such as global magnitude pruning are suboptimal on Transformer architectures. State-of-the-art methods hence solve a layer-wise mask selection problem, the problem of finding a pruning mask which minimizes the per-layer pruning error on a small set of calibration data. Exactly solving this problem to optimality using Integer Programming (IP) solvers is computationally infeasible due to its combinatorial nature and the size of the search space, and existing approaches therefore rely on approximations or heuristics. In this work, we demonstrate that the mask selection problem can be made drastically more tractable at LLM scale. To that end, we decouple the rows by enforcing equal sparsity levels per row. This allows us to derive optimal 1-swaps (exchanging one kept and one pruned weight) that can be computed efficiently using the Gram matrix of the calibration data. Using these observations, we propose a tractable and simple 1-swap algorithm that warm starts from any pruning mask, runs efficiently on GPUs at LLM scale, and is essentially hyperparameter-free. We demonstrate that our approach reduces per-layer pruning error by up to 60% over Wanda (Sun et al., 2023) and consistently improves perplexity and zero-shot accuracy across state-of-the-art GPT architectures.
+ oai:arXiv.org:2512.10922v1
+ cs.LG
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Patrick Noras, Jun Myeong Choi, Didier Stricker, Pieter Peers, Roni Sengupta
+ Max Zimmer, Christophe Roux, Moritz Wagner, Deborah Hendrych, Sebastian Pokutta
- Token Expand-Merge: Training-Free Token Compression for Vision-Language-Action Models
- https://arxiv.org/abs/2512.09927
- arXiv:2512.09927v1 Announce Type: new
-Abstract: Vision-Language-Action (VLA) models pretrained on large-scale multimodal datasets have emerged as powerful foundations for robotic perception and control. However, their massive scale, often billions of parameters, poses significant challenges for real-time deployment, as inference becomes computationally expensive and latency-sensitive in dynamic environments. To address this, we propose Token Expand-and-Merge-VLA (TEAM-VLA), a training-free token compression framework that accelerates VLA inference while preserving task performance. TEAM-VLA introduces a dynamic token expansion mechanism that identifies and samples additional informative tokens in the spatial vicinity of attention-highlighted regions, enhancing contextual completeness. These expanded tokens are then selectively merged in deeper layers under action-aware guidance, effectively reducing redundancy while maintaining semantic coherence. By coupling expansion and merging within a single feed-forward pass, TEAM-VLA achieves a balanced trade-off between efficiency and effectiveness, without any retraining or parameter updates. Extensive experiments on LIBERO benchmark demonstrate that TEAM-VLA consistently improves inference speed while maintaining or even surpassing the task success rate of full VLA models. The code is public available on \href{https://github.com/Jasper-aaa/TEAM-VLA}{https://github.com/Jasper-aaa/TEAM-VLA}
- oai:arXiv.org:2512.09927v1
+ Digital Twin Supervised Reinforcement Learning Framework for Autonomous Underwater Navigation
+ https://arxiv.org/abs/2512.10925
+ arXiv:2512.10925v1 Announce Type: new
+Abstract: Autonomous navigation in underwater environments remains a major challenge due to the absence of GPS, degraded visibility, and the presence of submerged obstacles. This article investigates these issues through the case of the BlueROV2, an open platform widely used for scientific experimentation. We propose a deep reinforcement learning approach based on the Proximal Policy Optimization (PPO) algorithm, using an observation space that combines target-oriented navigation information, a virtual occupancy grid, and ray-casting along the boundaries of the operational area. The learned policy is compared against a reference deterministic kinematic planner, the Dynamic Window Approach (DWA), commonly employed as a robust baseline for obstacle avoidance. The evaluation is conducted in a realistic simulation environment and complemented by validation on a physical BlueROV2 supervised by a 3D digital twin of the test site, helping to reduce risks associated with real-world experimentation. The results show that the PPO policy consistently outperforms DWA in highly cluttered environments, notably thanks to better local adaptation and reduced collisions. Finally, the experiments demonstrate the transferability of the learned behavior from simulation to the real world, confirming the relevance of deep RL for autonomous navigation in underwater robotics.
+ oai:arXiv.org:2512.10925v1
+ cs.LGcs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yifan Ye, Jiaqi Ma, Jun Cen, Zhihe Lu
+ Zamirddine Mari, Mohamad Motasem Nawaf, Pierre Drap
- HiF-VLA: Hindsight, Insight and Foresight through Motion Representation for Vision-Language-Action Models
- https://arxiv.org/abs/2512.09928
- arXiv:2512.09928v1 Announce Type: new
-Abstract: Vision-Language-Action (VLA) models have recently enabled robotic manipulation by grounding visual and linguistic cues into actions. However, most VLAs assume the Markov property, relying only on the current observation and thus suffering from temporal myopia that degrades long-horizon coherence. In this work, we view motion as a more compact and informative representation of temporal context and world dynamics, capturing inter-state changes while filtering static pixel-level noise. Building on this idea, we propose HiF-VLA (Hindsight, Insight, and Foresight for VLAs), a unified framework that leverages motion for bidirectional temporal reasoning. HiF-VLA encodes past dynamics through hindsight priors, anticipates future motion via foresight reasoning, and integrates both through a hindsight-modulated joint expert to enable a ''think-while-acting'' paradigm for long-horizon manipulation. As a result, HiF-VLA surpasses strong baselines on LIBERO-Long and CALVIN ABC-D benchmarks, while incurring negligible additional inference latency. Furthermore, HiF-VLA achieves substantial improvements in real-world long-horizon manipulation tasks, demonstrating its broad effectiveness in practical robotic settings.
- oai:arXiv.org:2512.09928v1
+ Decoupled Q-Chunking
+ https://arxiv.org/abs/2512.10926
+ arXiv:2512.10926v1 Announce Type: new
+Abstract: Temporal-difference (TD) methods learn state and action values efficiently by bootstrapping from their own future value predictions, but such a self-bootstrapping mechanism is prone to bootstrapping bias, where the errors in the value targets accumulate across steps and result in biased value estimates. Recent work has proposed to use chunked critics, which estimate the value of short action sequences ("chunks") rather than individual actions, speeding up value backup. However, extracting policies from chunked critics is challenging: policies must output the entire action chunk open-loop, which can be sub-optimal for environments that require policy reactivity and also challenging to model especially when the chunk length grows. Our key insight is to decouple the chunk length of the critic from that of the policy, allowing the policy to operate over shorter action chunks. We propose a novel algorithm that achieves this by optimizing the policy against a distilled critic for partial action chunks, constructed by optimistically backing up from the original chunked critic to approximate the maximum value achievable when a partial action chunk is extended to a complete one. This design retains the benefits of multi-step value propagation while sidestepping both the open-loop sub-optimality and the difficulty of learning action chunking policies for long action chunks. We evaluate our method on challenging, long-horizon offline goal-conditioned tasks and show that it reliably outperforms prior methods. Code: github.com/ColinQiyangLi/dqc.
+ oai:arXiv.org:2512.10926v1
+ cs.LG
+ cs.AIcs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qiyang Li, Seohong Park, Sergey Levine
+
+
+ FoundationMotion: Auto-Labeling and Reasoning about Spatial Movement in Videos
+ https://arxiv.org/abs/2512.10927
+ arXiv:2512.10927v1 Announce Type: new
+Abstract: Motion understanding is fundamental to physical reasoning, enabling models to infer dynamics and predict future states. However, state-of-the-art models still struggle on recent motion benchmarks, primarily due to the scarcity of large-scale, fine-grained motion datasets. Existing motion datasets are often constructed from costly manual annotation, severely limiting scalability. To address this challenge, we introduce FoundationMotion, a fully automated data curation pipeline that constructs large-scale motion datasets. Our approach first detects and tracks objects in videos to extract their trajectories, then leverages these trajectories and video frames with Large Language Models (LLMs) to generate fine-grained captions and diverse question-answer pairs about motion and spatial reasoning. Using datasets produced by this pipeline, we fine-tune open-source models including NVILA-Video-15B and Qwen2.5-7B, achieving substantial improvements in motion understanding without compromising performance on other tasks. Notably, our models outperform strong closed-source baselines like Gemini-2.5 Flash and large open-source models such as Qwen2.5-VL-72B across diverse motion understanding datasets and benchmarks. FoundationMotion thus provides a scalable solution for curating fine-grained motion datasets that enable effective fine-tuning of diverse models to enhance motion understanding and spatial reasoning capabilities.
+ oai:arXiv.org:2512.10927v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Minghui Lin, Pengxiang Ding, Shu Wang, Zifeng Zhuang, Yang Liu, Xinyang Tong, Wenxuan Song, Shangke Lyu, Siteng Huang, Donglin Wang
+ Yulu Gan, Ligeng Zhu, Dandan Shan, Baifeng Shi, Hongxu Yin, Boris Ivanovic, Song Han, Trevor Darrell, Jitendra Malik, Marco Pavone, Boyi Li
- Closing the Train-Test Gap in World Models for Gradient-Based Planning
- https://arxiv.org/abs/2512.09929
- arXiv:2512.09929v1 Announce Type: new
-Abstract: World models paired with model predictive control (MPC) can be trained offline on large-scale datasets of expert trajectories and enable generalization to a wide range of planning tasks at inference time. Compared to traditional MPC procedures, which rely on slow search algorithms or on iteratively solving optimization problems exactly, gradient-based planning offers a computationally efficient alternative. However, the performance of gradient-based planning has thus far lagged behind that of other approaches. In this paper, we propose improved methods for training world models that enable efficient gradient-based planning. We begin with the observation that although a world model is trained on a next-state prediction objective, it is used at test-time to instead estimate a sequence of actions. The goal of our work is to close this train-test gap. To that end, we propose train-time data synthesis techniques that enable significantly improved gradient-based planning with existing world models. At test time, our approach outperforms or matches the classical gradient-free cross-entropy method (CEM) across a variety of object manipulation and navigation tasks in 10% of the time budget.
- oai:arXiv.org:2512.09929v1
+ Asynchronous Reasoning: Training-Free Interactive Thinking LLMs
+ https://arxiv.org/abs/2512.10931
+ arXiv:2512.10931v1 Announce Type: new
+Abstract: Many state-of-the-art LLMs are trained to think before giving their answer. Reasoning can greatly improve language model capabilities and safety, but it also makes them less interactive: given a new input, a model must stop thinking before it can respond. Real-world use cases such as voice-based or embedded assistants require an LLM agent to respond and adapt to additional information in real time, which is incompatible with sequential interactions. In contrast, humans can listen, think, and act asynchronously: we begin thinking about the problem while reading it and continue thinking while formulating the answer. In this work, we augment LLMs capable of reasoning to operate in a similar way without additional training. Our method uses the properties of rotary embeddings to enable LLMs built for sequential interactions to simultaneously think, listen, and generate outputs. We evaluate our approach on math, commonsense, and safety reasoning and find that it can generate accurate thinking-augmented answers in real time, reducing time to first non-thinking token from minutes to <= 5s. and the overall real-time delays by 6-11x.
+ oai:arXiv.org:2512.10931v1cs.LG
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500newhttp://creativecommons.org/licenses/by/4.0/
- Arjun Parthasarathy, Nimit Kalra, Rohun Agrawal, Yann LeCun, Oumayma Bounou, Pavel Izmailov, Micah Goldblum
+ George Yakushev, Nataliia Babina, Masoud Vahid Dastgerdi, Vyacheslav Zhdanovskiy, Alina Shutova, Denis Kuznedelev
- Controlling Steering Angle for Cooperative Self-driving Vehicles utilizing CNN and LSTM-based Deep Networks
- https://arxiv.org/abs/1904.04375
- arXiv:1904.04375v3 Announce Type: cross
-Abstract: A fundamental challenge in autonomous vehicles is adjusting the steering angle at different road conditions. Recent state-of-the-art solutions addressing this challenge include deep learning techniques as they provide end-to-end solution to predict steering angles directly from the raw input images with higher accuracy. Most of these works ignore the temporal dependencies between the image frames. In this paper, we tackle the problem of utilizing multiple sets of images shared between two autonomous vehicles to improve the accuracy of controlling the steering angle by considering the temporal dependencies between the image frames. This problem has not been studied in the literature widely. We present and study a new deep architecture to predict the steering angle automatically by using Long-Short-Term-Memory (LSTM) in our deep architecture. Our deep architecture is an end-to-end network that utilizes CNN, LSTM and fully connected (FC) layers and it uses both present and futures images (shared by a vehicle ahead via Vehicle-to-Vehicle (V2V) communication) as input to control the steering angle. Our model demonstrates the lowest error when compared to the other existing approaches in the literature.
- oai:arXiv.org:1904.04375v3
+ BabyVLM-V2: Toward Developmentally Grounded Pretraining and Benchmarking of Vision Foundation Models
+ https://arxiv.org/abs/2512.10932
+ arXiv:2512.10932v1 Announce Type: new
+Abstract: Early children's developmental trajectories set up a natural goal for sample-efficient pretraining of vision foundation models. We introduce BabyVLM-V2, a developmentally grounded framework for infant-inspired vision-language modeling that extensively improves upon BabyVLM-V1 through a longitudinal, multifaceted pretraining set, a versatile model, and, most importantly, DevCV Toolbox for cognitive evaluation. The pretraining set maximizes coverage while minimizing curation of a longitudinal, infant-centric audiovisual corpus, yielding video-utterance, image-utterance, and multi-turn conversational data that mirror infant experiences. DevCV Toolbox adapts all vision-related measures of the recently released NIH Baby Toolbox into a benchmark suite of ten multimodal tasks, covering spatial reasoning, memory, and vocabulary understanding aligned with early children's capabilities. Experimental results show that a compact model pretrained from scratch can achieve competitive performance on DevCV Toolbox, outperforming GPT-4o on some tasks. We hope the principled, unified BabyVLM-V2 framework will accelerate research in developmentally plausible pretraining of vision foundation models.
+ oai:arXiv.org:2512.10932v1cs.CV
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500
+ newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1109/IVS.2019.8814260
- Rodolfo Valiente, Mahdi Zaman, Sedat Ozer, Yaser P. Fallah
+ Shengao Wang, Wenqi Wang, Zecheng Wang, Max Whitton, Michael Wakeham, Arjun Chandra, Joey Huang, Pengyue Zhu, Helen Chen, David Li, Jeffrey Li, Shawn Li, Andrew Zagula, Amy Zhao, Andrew Zhu, Sayaka Nakamura, Yuki Yamamoto, Jerry Jun Yokono, Aaron Mueller, Bryan A. Plummer, Kate Saenko, Venkatesh Saligrama, Boqing Gong
- Altruistic Maneuver Planning for Cooperative Autonomous Vehicles Using Multi-agent Advantage Actor-Critic
- https://arxiv.org/abs/2107.05664
- arXiv:2107.05664v1 Announce Type: cross
-Abstract: With the adoption of autonomous vehicles on our roads, we will witness a mixed-autonomy environment where autonomous and human-driven vehicles must learn to co-exist by sharing the same road infrastructure. To attain socially-desirable behaviors, autonomous vehicles must be instructed to consider the utility of other vehicles around them in their decision-making process. Particularly, we study the maneuver planning problem for autonomous vehicles and investigate how a decentralized reward structure can induce altruism in their behavior and incentivize them to account for the interest of other autonomous and human-driven vehicles. This is a challenging problem due to the ambiguity of a human driver's willingness to cooperate with an autonomous vehicle. Thus, in contrast with the existing works which rely on behavior models of human drivers, we take an end-to-end approach and let the autonomous agents to implicitly learn the decision-making process of human drivers only from experience. We introduce a multi-agent variant of the synchronous Advantage Actor-Critic (A2C) algorithm and train agents that coordinate with each other and can affect the behavior of human drivers to improve traffic flow and safety.
- oai:arXiv.org:2107.05664v1
+ Curriculum-Based Reinforcement Learning for Autonomous UAV Navigation in Unknown Curved Tubular Conduit
+ https://arxiv.org/abs/2512.10934
+ arXiv:2512.10934v1 Announce Type: new
+Abstract: Autonomous drone navigation in confined tubular environments remains a major challenge due to the constraining geometry of the conduits, the proximity of the walls, and the perceptual limitations inherent to such scenarios. We propose a reinforcement learning approach enabling a drone to navigate unknown three-dimensional tubes without any prior knowledge of their geometry, relying solely on local observations from LiDAR and a conditional visual detection of the tube center. In contrast, the Pure Pursuit algorithm, used as a deterministic baseline, benefits from explicit access to the centerline, creating an information asymmetry designed to assess the ability of RL to compensate for the absence of a geometric model. The agent is trained through a progressive Curriculum Learning strategy that gradually exposes it to increasingly curved geometries, where the tube center frequently disappears from the visual field. A turning-negotiation mechanism, based on the combination of direct visibility, directional memory, and LiDAR symmetry cues, proves essential for ensuring stable navigation under such partial observability conditions. Experiments show that the PPO policy acquires robust and generalizable behavior, consistently outperforming the deterministic controller despite its limited access to geometric information. Validation in a high-fidelity 3D environment further confirms the transferability of the learned behavior to a continuous physical dynamics.
+ The proposed approach thus provides a complete framework for autonomous navigation in unknown tubular environments and opens perspectives for industrial, underground, or medical applications where progressing through narrow and weakly perceptive conduits represents a central challenge.
+ oai:arXiv.org:2512.10934v1cs.RO
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Behrad Toghi, Rodolfo Valiente, Dorsa Sadigh, Ramtin Pedarsani, Yaser P. Fallah
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zamirddine Mari, J\'er\^ome Pasquet, Julien Seinturier
- Robustness and Adaptability of Reinforcement Learning based Cooperative Autonomous Driving in Mixed-autonomy Traffic
- https://arxiv.org/abs/2202.00881
- arXiv:2202.00881v1 Announce Type: cross
-Abstract: Building autonomous vehicles (AVs) is a complex problem, but enabling them to operate in the real world where they will be surrounded by human-driven vehicles (HVs) is extremely challenging. Prior works have shown the possibilities of creating inter-agent cooperation between a group of AVs that follow a social utility. Such altruistic AVs can form alliances and affect the behavior of HVs to achieve socially desirable outcomes. We identify two major challenges in the co-existence of AVs and HVs. First, social preferences and individual traits of a given human driver, e.g., selflessness and aggressiveness are unknown to an AV, and it is almost impossible to infer them in real-time during a short AV-HV interaction. Second, contrary to AVs that are expected to follow a policy, HVs do not necessarily follow a stationary policy and therefore are extremely hard to predict. To alleviate the above-mentioned challenges, we formulate the mixed-autonomy problem as a multi-agent reinforcement learning (MARL) problem and propose a decentralized framework and reward function for training cooperative AVs. Our approach enables AVs to learn the decision-making of HVs implicitly from experience, optimizes for a social utility while prioritizing safety and allowing adaptability; robustifying altruistic AVs to different human behaviors and constraining them to a safe action space. Finally, we investigate the robustness, safety and sensitivity of AVs to various HVs behavioral traits and present the settings in which the AVs can learn cooperative policies that are adaptable to different situations.
- oai:arXiv.org:2202.00881v1
- cs.RO
+ Any4D: Unified Feed-Forward Metric 4D Reconstruction
+ https://arxiv.org/abs/2512.10935
+ arXiv:2512.10935v1 Announce Type: new
+Abstract: We present Any4D, a scalable multi-view transformer for metric-scale, dense feed-forward 4D reconstruction. Any4D directly generates per-pixel motion and geometry predictions for N frames, in contrast to prior work that typically focuses on either 2-view dense scene flow or sparse 3D point tracking. Moreover, unlike other recent methods for 4D reconstruction from monocular RGB videos, Any4D can process additional modalities and sensors such as RGB-D frames, IMU-based egomotion, and Radar Doppler measurements, when available. One of the key innovations that allows for such a flexible framework is a modular representation of a 4D scene; specifically, per-view 4D predictions are encoded using a variety of egocentric factors (depthmaps and camera intrinsics) represented in local camera coordinates, and allocentric factors (camera extrinsics and scene flow) represented in global world coordinates. We achieve superior performance across diverse setups - both in terms of accuracy (2-3X lower error) and compute efficiency (15X faster), opening avenues for multiple downstream applications.
+ oai:arXiv.org:2512.10935v1
+ cs.CV
+ cs.AIcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Rodolfo Valiente, Behrad Toghi, Ramtin Pedarsani, Yaser P. Fallah
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Jay Karhade, Nikhil Keetha, Yuchen Zhang, Tanisha Gupta, Akash Sharma, Sebastian Scherer, Deva Ramanan
- Prediction-aware and Reinforcement Learning based Altruistic Cooperative Driving
- https://arxiv.org/abs/2211.10585
- arXiv:2211.10585v1 Announce Type: cross
-Abstract: Autonomous vehicle (AV) navigation in the presence of Human-driven vehicles (HVs) is challenging, as HVs continuously update their policies in response to AVs. In order to navigate safely in the presence of complex AV-HV social interactions, the AVs must learn to predict these changes. Humans are capable of navigating such challenging social interaction settings because of their intrinsic knowledge about other agents behaviors and use that to forecast what might happen in the future. Inspired by humans, we provide our AVs the capability of anticipating future states and leveraging prediction in a cooperative reinforcement learning (RL) decision-making framework, to improve safety and robustness. In this paper, we propose an integration of two essential and earlier-presented components of AVs: social navigation and prediction. We formulate the AV decision-making process as a RL problem and seek to obtain optimal policies that produce socially beneficial results utilizing a prediction-aware planning and social-aware optimization RL framework. We also propose a Hybrid Predictive Network (HPN) that anticipates future observations. The HPN is used in a multi-step prediction chain to compute a window of predicted future observations to be used by the value function network (VFN). Finally, a safe VFN is trained to optimize a social utility using a sequence of previous and predicted observations, and a safety prioritizer is used to leverage the interpretable kinematic predictions to mask the unsafe actions, constraining the RL policy. We compare our prediction-aware AV to state-of-the-art solutions and demonstrate performance improvements in terms of efficiency and safety in multiple simulated scenarios.
- oai:arXiv.org:2211.10585v1
- cs.RO
+ Empirical evaluation of the Frank-Wolfe methods for constructing white-box adversarial attacks
+ https://arxiv.org/abs/2512.10936
+ arXiv:2512.10936v1 Announce Type: new
+Abstract: The construction of adversarial attacks for neural networks appears to be a crucial challenge for their deployment in various services. To estimate the adversarial robustness of a neural network, a fast and efficient approach is needed to construct adversarial attacks. Since the formalization of adversarial attack construction involves solving a specific optimization problem, we consider the problem of constructing an efficient and effective adversarial attack from a numerical optimization perspective. Specifically, we suggest utilizing advanced projection-free methods, known as modified Frank-Wolfe methods, to construct white-box adversarial attacks on the given input data. We perform a theoretical and numerical evaluation of these methods and compare them with standard approaches based on projection operations or geometrical intuition. Numerical experiments are performed on the MNIST and CIFAR-10 datasets, utilizing a multiclass logistic regression model, the convolutional neural networks (CNNs), and the Vision Transformer (ViT).
+ oai:arXiv.org:2512.10936v1
+ cs.LGcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
+ Fri, 12 Dec 2025 00:00:00 -0500
+ newhttp://creativecommons.org/licenses/by/4.0/
- Rodolfo Valiente, Mahdi Razzaghpour, Behrad Toghi, Ghayoor Shah, Yaser P. Fallah
+ Kristina Korotkova, Aleksandr Katrutsa
- Learning-based social coordination to improve safety and robustness of cooperative autonomous vehicles in mixed traffic
- https://arxiv.org/abs/2211.11963
- arXiv:2211.11963v1 Announce Type: cross
-Abstract: It is expected that autonomous vehicles(AVs) and heterogeneous human-driven vehicles(HVs) will coexist on the same road. The safety and reliability of AVs will depend on their social awareness and their ability to engage in complex social interactions in a socially accepted manner. However, AVs are still inefficient in terms of cooperating with HVs and struggle to understand and adapt to human behavior, which is particularly challenging in mixed autonomy. In a road shared by AVs and HVs, the social preferences or individual traits of HVs are unknown to the AVs and different from AVs, which are expected to follow a policy, HVs are particularly difficult to forecast since they do not necessarily follow a stationary policy. To address these challenges, we frame the mixed-autonomy problem as a multi-agent reinforcement learning (MARL) problem and propose an approach that allows AVs to learn the decision-making of HVs implicitly from experience, account for all vehicles' interests, and safely adapt to other traffic situations. In contrast with existing works, we quantify AVs' social preferences and propose a distributed reward structure that introduces altruism into their decision-making process, allowing the altruistic AVs to learn to establish coalitions and influence the behavior of HVs.
- oai:arXiv.org:2211.11963v1
- cs.RO
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
+ On Decision-Making Agents and Higher-Order Causal Processes
+ https://arxiv.org/abs/2512.10937
+ arXiv:2512.10937v1 Announce Type: new
+Abstract: We establish a precise correspondence between decision-making agents in partially observable Markov decision processes (POMDPs) and one-input process functions, the classical limit of higher-order quantum operations. In this identification an agent's policy and memory update combine into a process function w that interacts with a POMDP environment via the link product. This suggests a dual interpretation: in the physics view, the process function acts as the environment into which local operations (agent interventions) are inserted, whereas in the AI view it encodes the agent and the inserted functions represent environments. We extend this perspective to multi-agent systems by identifying observation-independent decentralized POMDPs as natural domains for multi-input process functions.
+ oai:arXiv.org:2512.10937v1
+ cs.AI
+ quant-ph
+ Fri, 12 Dec 2025 00:00:00 -0500
+ newhttp://creativecommons.org/licenses/by/4.0/
- Rodolfo Valiente, Behrad Toghi, Mahdi Razzaghpour, Ramtin Pedarsani, Yaser P. Fallah
+ Matt Wilson
- Online Inference of Constrained Optimization: Primal-Dual Optimality and Sequential Quadratic Programming
- https://arxiv.org/abs/2512.08948
- arXiv:2512.08948v1 Announce Type: cross
-Abstract: We study online statistical inference for the solutions of stochastic optimization problems with equality and inequality constraints. Such problems are prevalent in statistics and machine learning, encompassing constrained $M$-estimation, physics-informed models, safe reinforcement learning, and algorithmic fairness. We develop a stochastic sequential quadratic programming (SSQP) method to solve these problems, where the step direction is computed by sequentially performing a quadratic approximation of the objective and a linear approximation of the constraints. Despite having access to unbiased estimates of population gradients, a key challenge in constrained stochastic problems lies in dealing with the bias in the step direction. As such, we apply a momentum-style gradient moving-average technique within SSQP to debias the step. We show that our method achieves global almost-sure convergence and exhibits local asymptotic normality with an optimal primal-dual limiting covariance matrix in the sense of H\'ajek and Le Cam. In addition, we provide a plug-in covariance matrix estimator for practical inference. To our knowledge, the proposed SSQP method is the first fully online method that attains primal-dual asymptotic minimax optimality without relying on projection operators onto the constraint set, which are generally intractable for nonlinear problems. Through extensive experiments on benchmark nonlinear problems, as well as on constrained generalized linear models and portfolio allocation problems using both synthetic and real data, we demonstrate superior performance of our method, showing that the method and its asymptotic behavior not only solve constrained stochastic problems efficiently but also provide valid and practical online inference in real-world applications.
- oai:arXiv.org:2512.08948v1
- stat.ML
+ Stronger Normalization-Free Transformers
+ https://arxiv.org/abs/2512.10938
+ arXiv:2512.10938v1 Announce Type: new
+Abstract: Although normalization layers have long been viewed as indispensable components of deep learning architectures, the recent introduction of Dynamic Tanh (DyT) has demonstrated that alternatives are possible. The point-wise function DyT constrains extreme values for stable convergence and reaches normalization-level performance; this work seeks further for function designs that can surpass it. We first study how the intrinsic properties of point-wise functions influence training and performance. Building on these findings, we conduct a large-scale search for a more effective function design. Through this exploration, we introduce $\mathrm{Derf}(x) = \mathrm{erf}(\alpha x + s)$, where $\mathrm{erf}(x)$ is the rescaled Gaussian cumulative distribution function, and identify it as the most performant design. Derf outperforms LayerNorm, RMSNorm, and DyT across a wide range of domains, including vision (image recognition and generation), speech representation, and DNA sequence modeling. Our findings suggest that the performance gains of Derf largely stem from its improved generalization rather than stronger fitting capacity. Its simplicity and stronger performance make Derf a practical choice for normalization-free Transformer architectures.
+ oai:arXiv.org:2512.10938v1cs.LG
- math.OC
- math.ST
- stat.TH
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
+ cs.AI
+ cs.CL
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yihang Gao, Michael K. Ng, Michael W. Mahoney, Sen Na
+ Mingzhi Chen, Taiming Lu, Jiachen Zhu, Mingjie Sun, Zhuang Liu
- Multivariate time series prediction using clustered echo state network
- https://arxiv.org/abs/2512.08963
- arXiv:2512.08963v1 Announce Type: cross
-Abstract: Many natural and physical processes can be understood by analyzing multiple system variables evolving, forming a multivariate time series. Predicting such time series is challenging due to the inherent noise and interdependencies among variables. Echo state networks (ESNs), a class of Reservoir Computing (RC) models, offer an efficient alternative to conventional recurrent neural networks by training only the output weights while keeping the reservoir dynamics fixed, reducing computational complexity. We propose a clustered ESNs (CESNs) that enhances the ability to model and predict multivariate time series by organizing the reservoir nodes into clusters, each corresponding to a distinct input variable. Input signals are directly mapped to their associated clusters, and intra-cluster connections remain dense while inter-cluster connections are sparse, mimicking the modular architecture of biological neural networks. This architecture improves information processing by limiting cross-variable interference and enhances computational efficiency through independent cluster-wise training via ridge regression. We further explore different reservoir topologies, including ring, Erd\H{o}s-R\'enyi (ER), and scale-free (SF) networks, to evaluate their impact predictive performance. Our algorithm works well across diverse real-world datasets such as the stock market, solar wind, and chaotic R\"ossler system, demonstrating that CESNs consistently outperform conventional ESNs in terms of predictive accuracy and robustness to noise, particularly when using ER and SF topologies. These findings highlight the adaptability of CESNs for complex, multivariate time series forecasting.
- oai:arXiv.org:2512.08963v1
- nlin.CD
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/publicdomain/zero/1.0/
- 10.1140/epjp/s13360-025-07077-3
- S. Hariharan, R. Suresh, V. K. Chandrasekar
+ GaussianHeadTalk: Wobble-Free 3D Talking Heads with Audio Driven Gaussian Splatting
+ https://arxiv.org/abs/2512.10939
+ arXiv:2512.10939v1 Announce Type: new
+Abstract: Speech-driven talking heads have recently emerged and enable interactive avatars. However, real-world applications are limited, as current methods achieve high visual fidelity but slow or fast yet temporally unstable. Diffusion methods provide realistic image generation, yet struggle with oneshot settings. Gaussian Splatting approaches are real-time, yet inaccuracies in facial tracking, or inconsistent Gaussian mappings, lead to unstable outputs and video artifacts that are detrimental to realistic use cases. We address this problem by mapping Gaussian Splatting using 3D Morphable Models to generate person-specific avatars. We introduce transformer-based prediction of model parameters, directly from audio, to drive temporal consistency. From monocular video and independent audio speech inputs, our method enables generation of real-time talking head videos where we report competitive quantitative and qualitative performance.
+ oai:arXiv.org:2512.10939v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Madhav Agarwal, Mingtian Zhang, Laura Sevilla-Lara, Steven McDonagh
- FuXi-Nowcast: Meet the longstanding challenge of convective initiation in nowcasting
- https://arxiv.org/abs/2512.08974
- arXiv:2512.08974v1 Announce Type: cross
-Abstract: Accurate nowcasting of convective storms remains a major challenge for operational forecasting, particularly for convective initiation and the evolution of high-impact rainfall and strong winds. Here we present FuXi-Nowcast, a deep-learning system that jointly predicts composite radar reflectivity, surface precipitation, near-surface temperature, wind speed and wind gusts at 1-km resolution over eastern China. FuXi-Nowcast integrates multi-source observations, such as radar, surface stations and the High-Resolution Land Data Assimilation System (HRLDAS), with three-dimensional atmospheric fields from the machine-learning weather model FuXi-2.0 within a multi-task Swin-Transformer architecture. A convective signal enhancement module and distribution-aware hybrid loss functions are designed to preserve intense convective structures and mitigate the rapid intensity decay common in deep-learning nowcasts. FuXi-Nowcast surpasses the operational CMA-MESO 3-km numerical model in Critical Success Index for reflectivity, precipitation and wind gusts across thresholds and lead times up to 12 h, with the largest gains for heavy rainfall. Case studies further show that FuXi-Nowcast more accurately captures the timing, location and structure of convective initiation and subsequent evolution of convection. These results demonstrate that coupling three-dimensional machine-learning forecasts with high-resolution observations can provide multi-hazard, long-lead nowcasts that outperforms current operational systems.
- oai:arXiv.org:2512.08974v1
- physics.ao-ph
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Lei Chen, Zijian Zhu, Xiaoran Zhuang, Tianyuan Qi, Yuxuan Feng, Xiaohui Zhong, Hao Li
+ OmniView: An All-Seeing Diffusion Model for 3D and 4D View Synthesis
+ https://arxiv.org/abs/2512.10940
+ arXiv:2512.10940v1 Announce Type: new
+Abstract: Prior approaches injecting camera control into diffusion models have focused on specific subsets of 4D consistency tasks: novel view synthesis, text-to-video with camera control, image-to-video, amongst others. Therefore, these fragmented approaches are trained on disjoint slices of available 3D/4D data. We introduce OmniView, a unified framework that generalizes across a wide range of 4D consistency tasks. Our method separately represents space, time, and view conditions, enabling flexible combinations of these inputs. For example, OmniView can synthesize novel views from static, dynamic, and multiview inputs, extrapolate trajectories forward and backward in time, and create videos from text or image prompts with full camera control. OmniView is competitive with task-specific models across diverse benchmarks and metrics, improving image quality scores among camera-conditioned diffusion models by up to 33\% in multiview NVS LLFF dataset, 60\% in dynamic NVS Neural 3D Video benchmark, 20\% in static camera control on RE-10K, and reducing camera trajectory errors by 4x in text-conditioned video generation. With strong generalizability in one model, OmniView demonstrates the feasibility of a generalist 4D video model. Project page is available at https://snap-research.github.io/OmniView/
+ oai:arXiv.org:2512.10940v1
+ cs.CV
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Xiang Fan, Sharath Girish, Vivek Ramanujan, Chaoyang Wang, Ashkan Mirzaei, Petr Sushko, Aliaksandr Siarohin, Sergey Tulyakov, Ranjay Krishna
- Agreement Disagreement Guided Knowledge Transfer for Cross-Scene Hyperspectral Imaging
- https://arxiv.org/abs/2512.08990
- arXiv:2512.08990v1 Announce Type: cross
-Abstract: Knowledge transfer plays a crucial role in cross-scene hyperspectral imaging (HSI). However, existing studies often overlook the challenges of gradient conflicts and dominant gradients that arise during the optimization of shared parameters. Moreover, many current approaches fail to simultaneously capture both agreement and disagreement information, relying only on a limited shared subset of target features and consequently missing the rich, diverse patterns present in the target scene. To address these issues, we propose an Agreement Disagreement Guided Knowledge Transfer (ADGKT) framework that integrates both mechanisms to enhance cross-scene transfer. The agreement component includes GradVac, which aligns gradient directions to mitigate conflicts between source and target domains, and LogitNorm, which regulates logit magnitudes to prevent domination by a single gradient source. The disagreement component consists of a Disagreement Restriction (DiR) and an ensemble strategy, which capture diverse predictive target features and mitigate the loss of critical target information. Extensive experiments demonstrate the effectiveness and superiority of the proposed method in achieving robust and balanced knowledge transfer across heterogeneous HSI scenes.
- oai:arXiv.org:2512.08990v1
- eess.IV
+ Mull-Tokens: Modality-Agnostic Latent Thinking
+ https://arxiv.org/abs/2512.10941
+ arXiv:2512.10941v1 Announce Type: new
+Abstract: Reasoning goes beyond language; the real world requires reasoning about space, time, affordances, and much more that words alone cannot convey. Existing multimodal models exploring the potential of reasoning with images are brittle and do not scale. They rely on calling specialist tools, costly generation of images, or handcrafted reasoning data to switch between text and image thoughts. Instead, we offer a simpler alternative -- Mull-Tokens -- modality-agnostic latent tokens pre-trained to hold intermediate information in either image or text modalities to let the model think free-form towards the correct answer. We investigate best practices to train Mull-Tokens inspired by latent reasoning frameworks. We first train Mull-Tokens using supervision from interleaved text-image traces, and then fine-tune without any supervision by only using the final answers. Across four challenging spatial reasoning benchmarks involving tasks such as solving puzzles and taking different perspectives, we demonstrate that Mull-Tokens improve upon several baselines utilizing text-only reasoning or interleaved image-text reasoning, achieving a +3% average improvement and up to +16% on a puzzle solving reasoning-heavy split compared to our strongest baseline. Adding to conversations around challenges in grounding textual and visual reasoning, Mull-Tokens offers a simple solution to abstractly think in multiple modalities.
+ oai:arXiv.org:2512.10941v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Lu Huo, Haimin Zhang, Min Xu
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Arijit Ray, Ahmed Abdelkader, Chengzhi Mao, Bryan A. Plummer, Kate Saenko, Ranjay Krishna, Leonidas Guibas, Wen-Sheng Chu
- Enhanced Chest Disease Classification Using an Improved CheXNet Framework with EfficientNetV2-M and Optimization-Driven Learning
- https://arxiv.org/abs/2512.08992
- arXiv:2512.08992v1 Announce Type: cross
-Abstract: The interpretation of Chest X-ray is an important diagnostic issue in clinical practice and especially in the resource-limited setting where the shortage of radiologists plays a role in delayed diagnosis and poor patient outcomes. Although the original CheXNet architecture has shown potential in automated analysis of chest radiographs, DenseNet-121 backbone is computationally inefficient and poorly single-label classifier. To eliminate such shortcomings, we suggest a better classification framework of chest disease that relies on EfficientNetV2-M and incorporates superior training approaches such as Automatic Mixed Precision training, AdamW, Cosine Annealing learning rate scheduling, and Exponential Moving Average regularization. We prepared a dataset of 18,080 chest X-ray images of three source materials of high authority and representing five key clinically significant disease categories which included Cardiomegaly, COVID-19, Normal, Pneumonia, and Tuberculosis. To achieve statistical reliability and reproducibility, nine independent experimental runs were run. The suggested architecture showed significant gains with mean test accuracy of 96.45 percent compared to 95.30 percent at baseline (p less than 0.001) and macro-averaged F1-score increased to 91.08 percent (p less than 0.001). Critical infectious diseases showed near-perfect classification performance with COVID-19 detection having 99.95 percent accuracy and Tuberculosis detection having 99.97 percent accuracy. Although 6.8 times more parameters are included, the training time was reduced by 11.4 percent and performance stability was increased by 22.7 percent. This framework presents itself as a decision-support tool that can be used to respond to a pandemic, screen tuberculosis, and assess thoracic disease regularly in various healthcare facilities.
- oai:arXiv.org:2512.08992v1
- eess.IV
- cs.AI
+ VL-JEPA: Joint Embedding Predictive Architecture for Vision-language
+ https://arxiv.org/abs/2512.10942
+ arXiv:2512.10942v1 Announce Type: new
+Abstract: We introduce VL-JEPA, a vision-language model built on a Joint Embedding Predictive Architecture (JEPA). Instead of autoregressively generating tokens as in classical VLMs, VL-JEPA predicts continuous embeddings of the target texts. By learning in an abstract representation space, the model focuses on task-relevant semantics while abstracting away surface-level linguistic variability. In a strictly controlled comparison against standard token-space VLM training with the same vision encoder and training data, VL-JEPA achieves stronger performance while having 50% fewer trainable parameters. At inference time, a lightweight text decoder is invoked only when needed to translate VL-JEPA predicted embeddings into text. We show that VL-JEPA natively supports selective decoding that reduces the number of decoding operations by 2.85x while maintaining similar performance compared to non-adaptive uniform decoding. Beyond generation, the VL-JEPA's embedding space naturally supports open-vocabulary classification, text-to-video retrieval, and discriminative VQA without any architecture modification. On eight video classification and eight video retrieval datasets, the average performance VL-JEPA surpasses that of CLIP, SigLIP2, and Perception Encoder. At the same time, the model achieves comparable performance as classical VLMs (InstructBLIP, QwenVL) on four VQA datasets: GQA, TallyQA, POPE and POPEv2, despite only having 1.6B parameters.
+ oai:arXiv.org:2512.10942v1cs.CV
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
+ Fri, 12 Dec 2025 00:00:00 -0500
+ newhttp://creativecommons.org/licenses/by/4.0/
- Ali M. Bahram, Saman Muhammad Omer, Hardi M. Mohammed, Sirwan Abdolwahed Aula
+ Delong Chen, Mustafa Shukor, Theo Moutakanni, Willy Chung, Jade Yu, Tejaswi Kasarla, Allen Bolourchi, Yann LeCun, Pascale Fung
- DermETAS-SNA LLM: A Dermatology Focused Evolutionary Transformer Architecture Search with StackNet Augmented LLM Assistant
- https://arxiv.org/abs/2512.08998
- arXiv:2512.08998v1 Announce Type: cross
-Abstract: Our work introduces the DermETAS-SNA LLM Assistant that integrates Dermatology-focused Evolutionary Transformer Architecture Search with StackNet Augmented LLM. The assistant dynamically learns skin-disease classifiers and provides medically informed descriptions to facilitate clinician-patient interpretation. Contributions include: (1) Developed an ETAS framework on the SKINCON dataset to optimize a Vision Transformer (ViT) tailored for dermatological feature representation and then fine-tuned binary classifiers for each of the 23 skin disease categories in the DermNet dataset to enhance classification performance; (2) Designed a StackNet architecture that integrates multiple fine-tuned binary ViT classifiers to enhance predictive robustness and mitigate class imbalance issues; (3) Implemented a RAG pipeline, termed Diagnostic Explanation and Retrieval Model for Dermatology, which harnesses the capabilities of the Google Gemini 2.5 Pro LLM architecture to generate personalized, contextually informed diagnostic descriptions and explanations for patients, leveraging a repository of verified dermatological materials; (4) Performed extensive experimental evaluations on 23 skin disease categories to demonstrate performance increase, achieving an overall F1-score of 56.30% that surpasses SkinGPT-4 (48.51%) by a considerable margin, representing a performance increase of 16.06%; (5) Conducted a domain-expert evaluation, with eight licensed medical doctors, of the clinical responses generated by our AI assistant for seven dermatological conditions. Our results show a 92% agreement rate with the assessments provided by our AI assistant (6) Created a proof-of-concept prototype that fully integrates our DermETAS-SNA LLM into our AI assistant to demonstrate its practical feasibility for real-world clinical and educational applications.
- oai:arXiv.org:2512.08998v1
- eess.IV
+ AlcheMinT: Fine-grained Temporal Control for Multi-Reference Consistent Video Generation
+ https://arxiv.org/abs/2512.10943
+ arXiv:2512.10943v1 Announce Type: new
+Abstract: Recent advances in subject-driven video generation with large diffusion models have enabled personalized content synthesis conditioned on user-provided subjects. However, existing methods lack fine-grained temporal control over subject appearance and disappearance, which are essential for applications such as compositional video synthesis, storyboarding, and controllable animation. We propose AlcheMinT, a unified framework that introduces explicit timestamps conditioning for subject-driven video generation. Our approach introduces a novel positional encoding mechanism that unlocks the encoding of temporal intervals, associated in our case with subject identities, while seamlessly integrating with the pretrained video generation model positional embeddings. Additionally, we incorporate subject-descriptive text tokens to strengthen binding between visual identity and video captions, mitigating ambiguity during generation. Through token-wise concatenation, AlcheMinT avoids any additional cross-attention modules and incurs negligible parameter overhead. We establish a benchmark evaluating multiple subject identity preservation, video fidelity, and temporal adherence. Experimental results demonstrate that AlcheMinT achieves visual quality matching state-of-the-art video personalization methods, while, for the first time, enabling precise temporal control over multi-subject generation within videos. Project page is at https://snap-research.github.io/Video-AlcheMinT
+ oai:arXiv.org:2512.10943v1
+ cs.CVcs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Sharath Girish, Viacheslav Ivanov, Tsai-Shien Chen, Hao Chen, Aliaksandr Siarohin, Sergey Tulyakov
+
+
+ MeViS: A Multi-Modal Dataset for Referring Motion Expression Video Segmentation
+ https://arxiv.org/abs/2512.10945
+ arXiv:2512.10945v1 Announce Type: new
+Abstract: This paper proposes a large-scale multi-modal dataset for referring motion expression video segmentation, focusing on segmenting and tracking target objects in videos based on language description of objects' motions. Existing referring video segmentation datasets often focus on salient objects and use language expressions rich in static attributes, potentially allowing the target object to be identified in a single frame. Such datasets underemphasize the role of motion in both videos and languages. To explore the feasibility of using motion expressions and motion reasoning clues for pixel-level video understanding, we introduce MeViS, a dataset containing 33,072 human-annotated motion expressions in both text and audio, covering 8,171 objects in 2,006 videos of complex scenarios. We benchmark 15 existing methods across 4 tasks supported by MeViS, including 6 referring video object segmentation (RVOS) methods, 3 audio-guided video object segmentation (AVOS) methods, 2 referring multi-object tracking (RMOT) methods, and 4 video captioning methods for the newly introduced referring motion expression generation (RMEG) task. The results demonstrate weaknesses and limitations of existing methods in addressing motion expression-guided video understanding. We further analyze the challenges and propose an approach LMPM++ for RVOS/AVOS/RMOT that achieves new state-of-the-art results. Our dataset provides a platform that facilitates the development of motion expression-guided video understanding algorithms in complex video scenes. The proposed MeViS dataset and the method's source code are publicly available at https://henghuiding.com/MeViS/
+ oai:arXiv.org:2512.10945v1cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
+ Fri, 12 Dec 2025 00:00:00 -0500
+ newhttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Nitya Phani Santosh Oruganty, Keerthi Vemula Murali, Chun-Kit Ngan, Paulo Bandeira Pinho
+ 10.1109/TPAMI.2025.3600507
+ H. Ding et al., "MeViS: A Multi-Modal Dataset for Referring Motion Expression Video Segmentation," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, no. 12, pp. 11400-11416, 2025
+ Henghui Ding, Chang Liu, Shuting He, Kaining Ying, Xudong Jiang, Chen Change Loy, Yu-Gang Jiang
- Digital Modeling of Spatial Pathway Activity from Histology Reveals Tumor Microenvironment Heterogeneity
- https://arxiv.org/abs/2512.09003
- arXiv:2512.09003v1 Announce Type: cross
-Abstract: Spatial transcriptomics (ST) enables simultaneous mapping of tissue morphology and spatially resolved gene expression, offering unique opportunities to study tumor microenvironment heterogeneity. Here, we introduce a computational framework that predicts spatial pathway activity directly from hematoxylin-and-eosin-stained histology images at microscale resolution 55 and 100 um. Using image features derived from a computational pathology foundation model, we found that TGFb signaling was the most accurately predicted pathway across three independent breast and lung cancer ST datasets. In 87-88% of reliably predicted cases, the resulting spatial TGFb activity maps reflected the expected contrast between tumor and adjacent non-tumor regions, consistent with the known role of TGFb in regulating interactions within the tumor microenvironment. Notably, linear and nonlinear predictive models performed similarly, suggesting that image features may relate to pathway activity in a predominantly linear fashion or that nonlinear structure is small relative to measurement noise. These findings demonstrate that features extracted from routine histopathology may recover spatially coherent and biologically interpretable pathway patterns, offering a scalable strategy for integrating image-based inference with ST information in tumor microenvironment studies.
- oai:arXiv.org:2512.09003v1
- q-bio.QM
+ ImplicitRDP: An End-to-End Visual-Force Diffusion Policy with Structural Slow-Fast Learning
+ https://arxiv.org/abs/2512.10946
+ arXiv:2512.10946v1 Announce Type: new
+Abstract: Human-level contact-rich manipulation relies on the distinct roles of two key modalities: vision provides spatially rich but temporally slow global context, while force sensing captures rapid, high-frequency local contact dynamics. Integrating these signals is challenging due to their fundamental frequency and informational disparities. In this work, we propose ImplicitRDP, a unified end-to-end visual-force diffusion policy that integrates visual planning and reactive force control within a single network. We introduce Structural Slow-Fast Learning, a mechanism utilizing causal attention to simultaneously process asynchronous visual and force tokens, allowing the policy to perform closed-loop adjustments at the force frequency while maintaining the temporal coherence of action chunks. Furthermore, to mitigate modality collapse where end-to-end models fail to adjust the weights across different modalities, we propose Virtual-target-based Representation Regularization. This auxiliary objective maps force feedback into the same space as the action, providing a stronger, physics-grounded learning signal than raw force prediction. Extensive experiments on contact-rich tasks demonstrate that ImplicitRDP significantly outperforms both vision-only and hierarchical baselines, achieving superior reactivity and success rates with a streamlined training pipeline. Code and videos will be publicly available at https://implicit-rdp.github.io.
+ oai:arXiv.org:2512.10946v1
+ cs.ROcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Ling Liao, Changhuei Yang, Maxim Artyomov, Mark Watson, Adam Kepecs, Haowen Zhou, Alexey Sergushichev, Richard Cote
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wendi Chen, Han Xue, Yi Wang, Fangyuan Zhou, Jun Lv, Yang Jin, Shirun Tang, Chuan Wen, Cewu Lu
- Interpretable machine learning of halo gas density profiles: a sensitivity analysis of cosmological hydrodynamical simulations
- https://arxiv.org/abs/2512.09021
- arXiv:2512.09021v1 Announce Type: cross
-Abstract: Stellar and AGN-driven feedback processes affect the distribution of gas on a wide range of scales, from within galaxies well into the intergalactic medium. Yet, it remains unclear how feedback, through its connection to key galaxy properties, shapes the radial gas density profile in the host halo. We tackle this question using suites of the EAGLE, IllustrisTNG, and Simba cosmological hydrodynamical simulations, which span a variety of feedback models. We develop a random forest algorithm that predicts the radial gas density profile within haloes from the total halo mass and five global properties of the central galaxy: gas and stellar mass; star formation rate; mass and accretion rate of the central black hole (BH). The algorithm reproduces the simulated gas density profiles with an average accuracy of $\sim$80-90% over the halo mass range $10^{9.5} \, \mathrm{M}_{\odot} < M_{\rm 200c} < 10^{15} \, \mathrm{M}_{\odot}$ and redshift interval $0<z<4$. For the first time, we apply Sobol statistical sensitivity analysis to full cosmological hydrodynamical simulations, quantifying how each feature affects the gas density as a function of distance from the halo centre. Across all simulations and redshifts, the total halo mass and the gas mass of the central galaxy are the most strongly tied to the halo gas distribution, while stellar and BH properties are generally less informative. The exact relative importance of the different features depends on the feedback scenario and redshift. Our framework can be readily embedded in semi-analytic models of galaxy formation to incorporate halo gas density profiles consistent with different hydrodynamical simulations. Our work also provides a proof of concept for constraining feedback models with future observations of galaxy properties and of the surrounding gas distribution.
- oai:arXiv.org:2512.09021v1
- astro-ph.GA
- astro-ph.CO
- astro-ph.IM
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
+ Towards Efficient and Effective Multi-Camera Encoding for End-to-End Driving
+ https://arxiv.org/abs/2512.10947
+ arXiv:2512.10947v1 Announce Type: new
+Abstract: We present Flex, an efficient and effective scene encoder that addresses the computational bottleneck of processing high-volume multi-camera data in end-to-end autonomous driving. Flex employs a small set of learnable scene tokens to jointly encode information from all image tokens across different cameras and timesteps. By design, our approach is geometry-agnostic, learning a compact scene representation directly from data without relying on the explicit 3D inductive biases, such as Bird-Eye-View (BEV), occupancy or tri-plane representations, which are common in prior work. This holistic encoding strategy aggressively compresses the visual input for the downstream Large Language Model (LLM) based policy model. Evaluated on a large-scale proprietary dataset of 20,000 driving hours, our Flex achieves 2.2x greater inference throughput while improving driving performance by a large margin compared to state-of-the-art methods. Furthermore, we show that these compact scene tokens develop an emergent capability for scene decomposition without any explicit supervision. Our findings challenge the prevailing assumption that 3D priors are necessary, demonstrating that a data-driven, joint encoding strategy offers a more scalable, efficient and effective path for future autonomous driving systems.
+ oai:arXiv.org:2512.10947v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ newhttp://creativecommons.org/licenses/by/4.0/
- Daniele Sorini, Sownak Bose, Mathilda Denison, Romeel Dav\'e
+ Jiawei Yang, Ziyu Chen, Yurong You, Yan Wang, Yiming Li, Yuxiao Chen, Boyi Li, Boris Ivanovic, Marco Pavone, Yue Wang
- Monitoring Deployed AI Systems in Health Care
- https://arxiv.org/abs/2512.09048
- arXiv:2512.09048v1 Announce Type: cross
-Abstract: Post-deployment monitoring of artificial intelligence (AI) systems in health care is essential to ensure their safety, quality, and sustained benefit-and to support governance decisions about which systems to update, modify, or decommission. Motivated by these needs, we developed a framework for monitoring deployed AI systems grounded in the mandate to take specific actions when they fail to behave as intended. This framework, which is now actively used at Stanford Health Care, is organized around three complementary principles: system integrity, performance, and impact. System integrity monitoring focuses on maximizing system uptime, detecting runtime errors, and identifying when changes to the surrounding IT ecosystem have unintended effects. Performance monitoring focuses on maintaining accurate system behavior in the face of changing health care practices (and thus input data) over time. Impact monitoring assesses whether a deployed system continues to have value in the form of benefit to clinicians and patients. Drawing on examples of deployed AI systems at our academic medical center, we provide practical guidance for creating monitoring plans based on these principles that specify which metrics to measure, when those metrics should be reviewed, who is responsible for acting when metrics change, and what concrete follow-up actions should be taken-for both traditional and generative AI. We also discuss challenges to implementing this framework, including the effort and cost of monitoring for health systems with limited resources and the difficulty of incorporating data-driven monitoring practices into complex organizations where conflicting priorities and definitions of success often coexist. This framework offers a practical template and starting point for health systems seeking to ensure that AI deployments remain safe and effective over time.
- oai:arXiv.org:2512.09048v1
- q-bio.OT
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Timothy Keyes, Alison Callahan, Abby S. Pandya, Nerissa Ambers, Juan M. Banda, Miguel Fuentes, Carlene Lugtu, Pranav Masariya, Srikar Nallan, Connor O'Brien, Thomas Wang, Emily Alsentzer, Jonathan H. Chen, Dev Dash, Matthew A. Eisenberg, Patricia Garcia, Nikesh Kotecha, Anurang Revri, Michael A. Pfeffer, Nigam H. Shah, Sneha S. Jain
+ ClusIR: Towards Cluster-Guided All-in-One Image Restoration
+ https://arxiv.org/abs/2512.10948
+ arXiv:2512.10948v1 Announce Type: new
+Abstract: All-in-One Image Restoration (AiOIR) aims to recover high-quality images from diverse degradations within a unified framework. However, existing methods often fail to explicitly model degradation types and struggle to adapt their restoration behavior to complex or mixed degradations. To address these issues, we propose ClusIR, a Cluster-Guided Image Restoration framework that explicitly models degradation semantics through learnable clustering and propagates cluster-aware cues across spatial and frequency domains for adaptive restoration. Specifically, ClusIR comprises two key components: a Probabilistic Cluster-Guided Routing Mechanism (PCGRM) and a Degradation-Aware Frequency Modulation Module (DAFMM). The proposed PCGRM disentangles degradation recognition from expert activation, enabling discriminative degradation perception and stable expert routing. Meanwhile, DAFMM leverages the cluster-guided priors to perform adaptive frequency decomposition and targeted modulation, collaboratively refining structural and textural representations for higher restoration fidelity. The cluster-guided synergy seamlessly bridges semantic cues with frequency-domain modulation, empowering ClusIR to attain remarkable restoration results across a wide range of degradations. Extensive experiments on diverse benchmarks validate that ClusIR reaches competitive performance under several scenarios.
+ oai:arXiv.org:2512.10948v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shengkai Hu, Jiaqi Ma, Jun Wan, Wenwen Min, Yongcheng Jing, Lefei Zhang, Dacheng Tao
- Cyqlone: A Parallel, High-Performance Linear Solver for Optimal Control
- https://arxiv.org/abs/2512.09058
- arXiv:2512.09058v1 Announce Type: cross
-Abstract: We present Cyqlone, a solver for linear systems with a stage-wise optimal control structure that fully exploits the various levels of parallelism available in modern hardware. Cyqlone unifies algorithms based on the sequential Riccati recursion, parallel Schur complement methods, and cyclic reduction methods, thereby minimizing the required number of floating-point operations, while allowing parallelization across a user-configurable number of processors. Given sufficient parallelism, the solver run time scales with the logarithm of the horizon length (in contrast to the linear scaling of sequential Riccati-based methods), enabling real-time solution of long-horizon problems. Beyond multithreading on multi-core processors, implementations of Cyqlone can also leverage vectorization using batched linear algebra routines. Such batched routines exploit data parallelism using single instruction, multiple data (SIMD) operations, and expose a higher degree of instruction-level parallelism than their non-batched counterparts. This enables them to significantly outperform BLAS and BLASFEO for the small matrices that arise in optimal control. Building on this high-performance linear solver, we develop CyQPALM, a parallel and optimal-control-specific variant of the QPALM quadratic programming solver. It combines the parallel and vectorized linear algebra operations from Cyqlone with a parallel line search and parallel factorization updates, resulting in order-of-magnitude speedups compared to the state-of-the-art HPIPM solver. Open-source C++ implementations of Cyqlone and CyQPALM are available at https://github.com/kul-optec/cyqlone
- oai:arXiv.org:2512.09058v1
- math.OC
- cs.SY
- eess.SY
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Pieter Pas, Panagiotis Patrinos
+ Are We Ready for RL in Text-to-3D Generation? A Progressive Investigation
+ https://arxiv.org/abs/2512.10949
+ arXiv:2512.10949v1 Announce Type: new
+Abstract: Reinforcement learning (RL), earlier proven to be effective in large language and multi-modal models, has been successfully extended to enhance 2D image generation recently. However, applying RL to 3D generation remains largely unexplored due to the higher spatial complexity of 3D objects, which require globally consistent geometry and fine-grained local textures. This makes 3D generation significantly sensitive to reward designs and RL algorithms. To address these challenges, we conduct the first systematic study of RL for text-to-3D autoregressive generation across several dimensions. (1) Reward designs: We evaluate reward dimensions and model choices, showing that alignment with human preference is crucial, and that general multi-modal models provide robust signal for 3D attributes. (2) RL algorithms: We study GRPO variants, highlighting the effectiveness of token-level optimization, and further investigate the scaling of training data and iterations. (3) Text-to-3D Benchmarks: Since existing benchmarks fail to measure implicit reasoning abilities in 3D generation models, we introduce MME-3DR. (4) Advanced RL paradigms: Motivated by the natural hierarchy of 3D generation, we propose Hi-GRPO, which optimizes the global-to-local hierarchical 3D generation through dedicated reward ensembles. Based on these insights, we develop AR3D-R1, the first RL-enhanced text-to-3D model, expert from coarse shape to texture refinement. We hope this study provides insights into RL-driven reasoning for 3D generation. Code is released at https://github.com/Ivan-Tang-3D/3DGen-R1.
+ oai:arXiv.org:2512.10949v1
+ cs.CV
+ cs.AI
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Yiwen Tang, Zoey Guo, Kaixin Zhu, Ray Zhang, Qizhi Chen, Dongzhi Jiang, Junli Liu, Bohan Zeng, Haoming Song, Delin Qu, Tianyi Bai, Dan Xu, Wentao Zhang, Bin Zhao
- Causal Attribution of Model Performance Gaps in Medical Imaging Under Distribution Shifts
- https://arxiv.org/abs/2512.09094
- arXiv:2512.09094v1 Announce Type: cross
-Abstract: Deep learning models for medical image segmentation suffer significant performance drops due to distribution shifts, but the causal mechanisms behind these drops remain poorly understood. We extend causal attribution frameworks to high-dimensional segmentation tasks, quantifying how acquisition protocols and annotation variability independently contribute to performance degradation. We model the data-generating process through a causal graph and employ Shapley values to fairly attribute performance changes to individual mechanisms. Our framework addresses unique challenges in medical imaging: high-dimensional outputs, limited samples, and complex mechanism interactions. Validation on multiple sclerosis (MS) lesion segmentation across 4 centers and 7 annotators reveals context-dependent failure modes: annotation protocol shifts dominate when crossing annotators (7.4% $\pm$ 8.9% DSC attribution), while acquisition shifts dominate when crossing imaging centers (6.5% $\pm$ 9.1%). This mechanism-specific quantification enables practitioners to prioritize targeted interventions based on deployment context.
- oai:arXiv.org:2512.09094v1
- eess.IV
+ E-RayZer: Self-supervised 3D Reconstruction as Spatial Visual Pre-training
+ https://arxiv.org/abs/2512.10950
+ arXiv:2512.10950v1 Announce Type: new
+Abstract: Self-supervised pre-training has revolutionized foundation models for languages, individual 2D images and videos, but remains largely unexplored for learning 3D-aware representations from multi-view images. In this paper, we present E-RayZer, a self-supervised large 3D Vision model that learns truly 3D-aware representations directly from unlabeled images. Unlike prior self-supervised methods such as RayZer that infer 3D indirectly through latent-space view synthesis, E-RayZer operates directly in 3D space, performing self-supervised 3D reconstruction with Explicit geometry. This formulation eliminates shortcut solutions and yields representations that are geometrically grounded. To ensure convergence and scalability, we introduce a novel fine-grained learning curriculum that organizes training from easy to hard samples and harmonizes heterogeneous data sources in an entirely unsupervised manner. Experiments demonstrate that E-RayZer significantly outperforms RayZer on pose estimation, matches or sometimes surpasses fully supervised reconstruction models such as VGGT. Furthermore, its learned representations outperform leading visual pre-training models (e.g., DINOv3, CroCo v2, VideoMAE V2, and RayZer) when transferring to 3D downstream tasks, establishing E-RayZer as a new paradigm for 3D-aware visual pre-training.
+ oai:arXiv.org:2512.10950v1cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Qitao Zhao, Hao Tan, Qianqian Wang, Sai Bi, Kai Zhang, Kalyan Sunkavalli, Shubham Tulsiani, Hanwen Jiang
+
+
+ Hierarchical Dataset Selection for High-Quality Data Sharing
+ https://arxiv.org/abs/2512.10952
+ arXiv:2512.10952v1 Announce Type: new
+Abstract: The success of modern machine learning hinges on access to high-quality training data. In many real-world scenarios, such as acquiring data from public repositories or sharing across institutions, data is naturally organized into discrete datasets that vary in relevance, quality, and utility. Selecting which repositories or institutions to search for useful datasets, and which datasets to incorporate into model training are therefore critical decisions, yet most existing methods select individual samples and treat all data as equally relevant, ignoring differences between datasets and their sources. In this work, we formalize the task of dataset selection: selecting entire datasets from a large, heterogeneous pool to improve downstream performance under resource constraints. We propose Dataset Selection via Hierarchies (DaSH), a dataset selection method that models utility at both dataset and group (e.g., collections, institutions) levels, enabling efficient generalization from limited observations. Across two public benchmarks (Digit-Five and DomainNet), DaSH outperforms state-of-the-art data selection baselines by up to 26.2% in accuracy, while requiring significantly fewer exploration steps. Ablations show DaSH is robust to low-resource settings and lack of relevant datasets, making it suitable for scalable and adaptive dataset selection in practical multi-source learning workflows.
+ oai:arXiv.org:2512.10952v1cs.LG
- stat.ME
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiaona Zhou, Yingyan Zeng, Ran Jin, Ismini Lourentzou
+
+
+ Bidirectional Normalizing Flow: From Data to Noise and Back
+ https://arxiv.org/abs/2512.10953
+ arXiv:2512.10953v1 Announce Type: new
+Abstract: Normalizing Flows (NFs) have been established as a principled framework for generative modeling. Standard NFs consist of a forward process and a reverse process: the forward process maps data to noise, while the reverse process generates samples by inverting it. Typical NF forward transformations are constrained by explicit invertibility, ensuring that the reverse process can serve as their exact analytic inverse. Recent developments in TARFlow and its variants have revitalized NF methods by combining Transformers and autoregressive flows, but have also exposed causal decoding as a major bottleneck. In this work, we introduce Bidirectional Normalizing Flow ($\textbf{BiFlow}$), a framework that removes the need for an exact analytic inverse. BiFlow learns a reverse model that approximates the underlying noise-to-data inverse mapping, enabling more flexible loss functions and architectures. Experiments on ImageNet demonstrate that BiFlow, compared to its causal decoding counterpart, improves generation quality while accelerating sampling by up to two orders of magnitude. BiFlow yields state-of-the-art results among NF-based methods and competitive performance among single-evaluation ("1-NFE") methods. Following recent encouraging progress on NFs, we hope our work will draw further attention to this classical paradigm.
+ oai:arXiv.org:2512.10953v1
+ cs.LG
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yiyang Lu, Qiao Sun, Xianbang Wang, Zhicheng Jiang, Hanhong Zhao, Kaiming He
+
+
+ Group Diffusion: Enhancing Image Generation by Unlocking Cross-Sample Collaboration
+ https://arxiv.org/abs/2512.10954
+ arXiv:2512.10954v1 Announce Type: new
+Abstract: In this work, we explore an untapped signal in diffusion model inference. While all previous methods generate images independently at inference, we instead ask if samples can be generated collaboratively. We propose Group Diffusion, unlocking the attention mechanism to be shared across images, rather than limited to just the patches within an image. This enables images to be jointly denoised at inference time, learning both intra and inter-image correspondence. We observe a clear scaling effect - larger group sizes yield stronger cross-sample attention and better generation quality. Furthermore, we introduce a qualitative measure to capture this behavior and show that its strength closely correlates with FID. Built on standard diffusion transformers, our GroupDiff achieves up to 32.2% FID improvement on ImageNet-256x256. Our work reveals cross-sample inference as an effective, previously unexplored mechanism for generative modeling.
+ oai:arXiv.org:2512.10954v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ newhttp://creativecommons.org/licenses/by/4.0/
- Pedro M. Gordaliza, Nataliia Molchanova, Jaume Banus, Thomas Sanchez, Meritxell Bach Cuadra
+ Sicheng Mo, Thao Nguyen, Richard Zhang, Nick Kolkin, Siddharth Srinivasan Iyer, Eli Shechtman, Krishna Kumar Singh, Yong Jae Lee, Bolei Zhou, Yuheng Li
- Procurement without Priors: A Simple Mechanism and its Notable Performance
- https://arxiv.org/abs/2512.09129
- arXiv:2512.09129v1 Announce Type: cross
-Abstract: How should a buyer design procurement mechanisms when suppliers' costs are unknown, and the buyer does not have a prior belief? We demonstrate that simple mechanisms - that share a constant fraction of the buyer utility with the seller - allow the buyer to realize a guaranteed positive fraction of the efficient social surplus across all possible costs. Moreover, a judicious choice of the share based on the known demand maximizes the surplus ratio guarantee that can be attained across all possible (arbitrarily complex and nonlinear) mechanisms and cost functions. Similar results hold in related nonlinear pricing and optimal regulation problems.
- oai:arXiv.org:2512.09129v1
- econ.TH
- cs.GT
- Thu, 11 Dec 2025 00:00:00 -0500
+ Omni-Attribute: Open-vocabulary Attribute Encoder for Visual Concept Personalization
+ https://arxiv.org/abs/2512.10955
+ arXiv:2512.10955v1 Announce Type: new
+Abstract: Visual concept personalization aims to transfer only specific image attributes, such as identity, expression, lighting, and style, into unseen contexts. However, existing methods rely on holistic embeddings from general-purpose image encoders, which entangle multiple visual factors and make it difficult to isolate a single attribute. This often leads to information leakage and incoherent synthesis. To address this limitation, we introduce Omni-Attribute, the first open-vocabulary image attribute encoder designed to learn high-fidelity, attribute-specific representations. Our approach jointly designs the data and model: (i) we curate semantically linked image pairs annotated with positive and negative attributes to explicitly teach the encoder what to preserve or suppress; and (ii) we adopt a dual-objective training paradigm that balances generative fidelity with contrastive disentanglement. The resulting embeddings prove effective for open-vocabulary attribute retrieval, personalization, and compositional generation, achieving state-of-the-art performance across multiple benchmarks.
+ oai:arXiv.org:2512.10955v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tsai-Shien Chen, Aliaksandr Siarohin, Guocheng Gordon Qian, Kuan-Chieh Jackson Wang, Egor Nemchinov, Moayed Haji-Ali, Riza Alp Guler, Willi Menapace, Ivan Skorokhodov, Anil Kag, Jun-Yan Zhu, Sergey Tulyakov
+
+
+ Empowering Dynamic Urban Navigation with Stereo and Mid-Level Vision
+ https://arxiv.org/abs/2512.10956
+ arXiv:2512.10956v1 Announce Type: new
+Abstract: The success of foundation models in language and vision motivated research in fully end-to-end robot navigation foundation models (NFMs). NFMs directly map monocular visual input to control actions and ignore mid-level vision modules (tracking, depth estimation, etc) entirely. While the assumption that vision capabilities will emerge implicitly is compelling, it requires large amounts of pixel-to-action supervision that are difficult to obtain. The challenge is especially pronounced in dynamic and unstructured settings, where robust navigation requires precise geometric and dynamic understanding, while the depth-scale ambiguity in monocular views further limits accurate spatial reasoning. In this paper, we show that relying on monocular vision and ignoring mid-level vision priors is inefficient.
+ We present StereoWalker, which augments NFMs with stereo inputs and explicit mid-level vision such as depth estimation and dense pixel tracking. Our intuition is straightforward: stereo inputs resolve the depth-scale ambiguity, and modern mid-level vision models provide reliable geometric and motion structure in dynamic scenes. We also curate a large stereo navigation dataset with automatic action annotation from Internet stereo videos to support training of StereoWalker and to facilitate future research. Through our experiments, we find that mid-level vision enables StereoWalker to achieve a comparable performance as the state-of-the-art using only 1.5% of the training data, and surpasses the state-of-the-art using the full data. We also observe that stereo vision yields higher navigation performance than monocular input.
+ oai:arXiv.org:2512.10956v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Wentao Zhou, Xuweiyi Chen, Vignesh Rajagopal, Jeffrey Chen, Rohan Chandra, Zezhou Cheng
+
+
+ SceneMaker: Open-set 3D Scene Generation with Decoupled De-occlusion and Pose Estimation Model
+ https://arxiv.org/abs/2512.10957
+ arXiv:2512.10957v1 Announce Type: new
+Abstract: We propose a decoupled 3D scene generation framework called SceneMaker in this work. Due to the lack of sufficient open-set de-occlusion and pose estimation priors, existing methods struggle to simultaneously produce high-quality geometry and accurate poses under severe occlusion and open-set settings. To address these issues, we first decouple the de-occlusion model from 3D object generation, and enhance it by leveraging image datasets and collected de-occlusion datasets for much more diverse open-set occlusion patterns. Then, we propose a unified pose estimation model that integrates global and local mechanisms for both self-attention and cross-attention to improve accuracy. Besides, we construct an open-set 3D scene dataset to further extend the generalization of the pose estimation model. Comprehensive experiments demonstrate the superiority of our decoupled framework on both indoor and open-set scenes. Our codes and datasets is released at https://idea-research.github.io/SceneMaker/.
+ oai:arXiv.org:2512.10957v1
+ cs.CV
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Yukai Shi, Weiyu Li, Zihao Wang, Hongyang Li, Xingyu Chen, Ping Tan, Lei Zhang
+
+
+ WorldLens: Full-Spectrum Evaluations of Driving World Models in Real World
+ https://arxiv.org/abs/2512.10958
+ arXiv:2512.10958v1 Announce Type: new
+Abstract: Generative world models are reshaping embodied AI, enabling agents to synthesize realistic 4D driving environments that look convincing but often fail physically or behaviorally. Despite rapid progress, the field still lacks a unified way to assess whether generated worlds preserve geometry, obey physics, or support reliable control. We introduce WorldLens, a full-spectrum benchmark evaluating how well a model builds, understands, and behaves within its generated world. It spans five aspects -- Generation, Reconstruction, Action-Following, Downstream Task, and Human Preference -- jointly covering visual realism, geometric consistency, physical plausibility, and functional reliability. Across these dimensions, no existing world model excels universally: those with strong textures often violate physics, while geometry-stable ones lack behavioral fidelity. To align objective metrics with human judgment, we further construct WorldLens-26K, a large-scale dataset of human-annotated videos with numerical scores and textual rationales, and develop WorldLens-Agent, an evaluation model distilled from these annotations to enable scalable, explainable scoring. Together, the benchmark, dataset, and agent form a unified ecosystem for measuring world fidelity -- standardizing how future models are judged not only by how real they look, but by how real they behave.
+ oai:arXiv.org:2512.10958v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Ao Liang, Lingdong Kong, Tianyi Yan, Hongsi Liu, Wesley Yang, Ziqi Huang, Wei Yin, Jialong Zuo, Yixuan Hu, Dekai Zhu, Dongyue Lu, Youquan Liu, Guangfeng Jiang, Linfeng Li, Xiangtai Li, Long Zhuo, Lai Xing Ng, Benoit R. Cottereau, Changxin Gao, Liang Pan, Wei Tsang Ooi, Ziwei Liu
+
+
+ StereoSpace: Depth-Free Synthesis of Stereo Geometry via End-to-End Diffusion in a Canonical Space
+ https://arxiv.org/abs/2512.10959
+ arXiv:2512.10959v1 Announce Type: new
+Abstract: We introduce StereoSpace, a diffusion-based framework for monocular-to-stereo synthesis that models geometry purely through viewpoint conditioning, without explicit depth or warping. A canonical rectified space and the conditioning guide the generator to infer correspondences and fill disocclusions end-to-end. To ensure fair and leakage-free evaluation, we introduce an end-to-end protocol that excludes any ground truth or proxy geometry estimates at test time. The protocol emphasizes metrics reflecting downstream relevance: iSQoE for perceptual comfort and MEt3R for geometric consistency. StereoSpace surpasses other methods from the warp & inpaint, latent-warping, and warped-conditioning categories, achieving sharp parallax and strong robustness on layered and non-Lambertian scenes. This establishes viewpoint-conditioned diffusion as a scalable, depth-free solution for stereo generation.
+ oai:arXiv.org:2512.10959v1
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Tjark Behrens, Anton Obukhov, Bingxin Ke, Fabio Tosi, Matteo Poggi, Konrad Schindler
+
+
+ Planning, Living and Judging: A Multi-agent LLM-based Framework for Cyclical Urban Planning
+ https://arxiv.org/abs/2412.20505
+ arXiv:2412.20505v1 Announce Type: cross
+Abstract: Urban regeneration presents significant challenges within the context of urbanization, requiring adaptive approaches to tackle evolving needs. Leveraging advancements in large language models (LLMs), we propose Cyclical Urban Planning (CUP), a new paradigm that continuously generates, evaluates, and refines urban plans in a closed-loop. Specifically, our multi-agent LLM-based framework consists of three key components: (1) Planning, where LLM agents generate and refine urban plans based on contextual data; (2) Living, where agents simulate the behaviors and interactions of residents, modeling life in the urban environment; and (3) Judging, which involves evaluating plan effectiveness and providing iterative feedback for improvement. The cyclical process enables a dynamic and responsive planning approach. Experiments on the real-world dataset demonstrate the effectiveness of our framework as a continuous and adaptive planning process.
+ oai:arXiv.org:2412.20505v1
+ cs.AI
+ cs.CL
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Dirk Bergemann, Tibor Heumann, Stephen Morris
+ Hang Ni, Yuzhi Wang, Hao Liu
- Understanding temperature tuning in energy-based models
- https://arxiv.org/abs/2512.09152
- arXiv:2512.09152v1 Announce Type: cross
-Abstract: Generative models of complex systems often require post-hoc parameter adjustments to produce useful outputs. For example, energy-based models for protein design are sampled at an artificially low ''temperature'' to generate novel, functional sequences. This temperature tuning is a common yet poorly understood heuristic used across machine learning contexts to control the trade-off between generative fidelity and diversity. Here, we develop an interpretable, physically motivated framework to explain this phenomenon. We demonstrate that in systems with a large ''energy gap'' - separating a small fraction of meaningful states from a vast space of unrealistic states - learning from sparse data causes models to systematically overestimate high-energy state probabilities, a bias that lowering the sampling temperature corrects. More generally, we characterize how the optimal sampling temperature depends on the interplay between data size and the system's underlying energy landscape. Crucially, our results show that lowering the sampling temperature is not always desirable; we identify the conditions where \emph{raising} it results in better generative performance. Our framework thus casts post-hoc temperature tuning as a diagnostic tool that reveals properties of the true data distribution and the limits of the learned model.
- oai:arXiv.org:2512.09152v1
- q-bio.QM
+ Unsupervised Acquisition of Discrete Grammatical Categories
+ https://arxiv.org/abs/2503.18702
+ arXiv:2503.18702v1 Announce Type: cross
+Abstract: This article presents experiments performed using a computational laboratory environment for language acquisition experiments. It implements a multi-agent system consisting of two agents: an adult language model and a daughter language model that aims to learn the mother language. Crucially, the daughter agent does not have access to the internal knowledge of the mother language model but only to the language exemplars the mother agent generates. These experiments illustrate how this system can be used to acquire abstract grammatical knowledge. We demonstrate how statistical analyses of patterns in the input data corresponding to grammatical categories yield discrete grammatical rules. These rules are subsequently added to the grammatical knowledge of the daughter language model. To this end, hierarchical agglomerative cluster analysis was applied to the utterances consecutively generated by the mother language model. It is argued that this procedure can be used to acquire structures resembling grammatical categories proposed by linguists for natural languages. Thus, it is established that non-trivial grammatical knowledge has been acquired. Moreover, the parameter configuration of this computational laboratory environment determined using training data generated by the mother language model is validated in a second experiment with a test set similarly resulting in the acquisition of non-trivial categories.
+ oai:arXiv.org:2503.18702v1
+ cs.CL
+ cs.AIcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.MA
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://creativecommons.org/licenses/by-nc-nd/4.0/
- Peter W Fields, Vudtiwat Ngampruetikorn, David J Schwab, Stephanie E Palmer
+ David Ph. Shakouri, Crit Cremers, Niels O. Schiller
- A Hybrid Residue Floating Numerical Architecture for High Precision Arithmetic on FPGAs
- https://arxiv.org/abs/2512.09155
- arXiv:2512.09155v1 Announce Type: cross
-Abstract: Floating point arithmetic remains expensive on FPGA platforms due to wide datapaths and normalization logic, motivating alternative representations that preserve dynamic range at lower cost. This work introduces the Hybrid Residue Floating Numerical Architecture (HRFNA), a unified arithmetic system that combines carry free residue channels with a lightweight floating point scaling factor. We develop the full mathematical framework, derive bounded error normalization rules, and present FPGA optimized microarchitectures for modular multiplication, exponent management, and hybrid reconstruction. HRFNA is implemented on a Xilinx ZCU104, with Vitis simulation, RTL synthesis, and on chip ILA traces confirming cycle accurate correctness. The architecture achieves over 2.1 times throughput improvement and 38-52 percent LUT reduction compared to IEEE 754 single precision baselines while maintaining numerical stability across long iterative sequences. These results demonstrate that HRFNA offers an efficient and scalable alternative to floating point computation on modern FPGA devices.
- oai:arXiv.org:2512.09155v1
- eess.SP
- cs.AR
- cs.MS
- Thu, 11 Dec 2025 00:00:00 -0500
+ UniExtreme: A Universal Foundation Model for Extreme Weather Forecasting
+ https://arxiv.org/abs/2508.01426
+ arXiv:2508.01426v2 Announce Type: cross
+Abstract: Recent advancements in deep learning have led to the development of Foundation Models (FMs) for weather forecasting, yet their ability to predict extreme weather events remains limited. Existing approaches either focus on general weather conditions or specialize in specific-type extremes, neglecting the real-world atmospheric patterns of diversified extreme events. In this work, we identify two key characteristics of extreme events: (1) the spectral disparity against normal weather regimes, and (2) the hierarchical drivers and geographic blending of diverse extremes. Along this line, we propose UniExtreme, a universal extreme weather forecasting foundation model that integrates (1) an Adaptive Frequency Modulation (AFM) module that captures region-wise spectral differences between normal and extreme weather, through learnable Beta-distribution filters and multi-granularity spectral aggregation, and (2) an Event Prior Augmentation (EPA) module which incorporates region-specific extreme event priors to resolve hierarchical extreme diversity and composite extreme schema, via a dual-level memory fusion network. Extensive experiments demonstrate that UniExtreme outperforms state-of-the-art baselines in both extreme and general weather forecasting, showcasing superior adaptability across diverse extreme scenarios.
+ oai:arXiv.org:2508.01426v2
+ cs.LG
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hang Ni, Weijia Zhang, Hao Liu
+
+
+ Distributionally Robust Markov Games with Average Reward
+ https://arxiv.org/abs/2508.03136
+ arXiv:2508.03136v3 Announce Type: cross
+Abstract: We study distributionally robust Markov games (DR-MGs) with the average-reward criterion, a framework for multi-agent decision-making under uncertainty over extended horizons. In average reward DR-MGs, agents aim to maximize their worst-case infinite-horizon average reward, to ensure satisfactory performance under environment uncertainties and opponent actions. We first establish a connection between the best-response policies and the optimal policies for the induced single-agent problems. Under a standard irreducible assumption, we derive a correspondence between the optimal policies and the solutions of the robust Bellman equation, and derive the existence of stationary Nash Equilibrium (NE) based on these results. We further study DR-MGs under the weakly communicating setting, where we construct a set-valued map and show its value is a subset of the best-response policies, convex and upper hemi-continuous, and derive the existence of NE. We then explore algorithmic solutions, by first proposing a Robust Nash-Iteration algorithm and providing convergence guarantees under some additional assumptions and a NE computing oracle. We further develop a temporal-difference based algorithm for DR-MGs, and provide convergence guarantees without any additional oracle or assumptions. Finally, we connect average-reward robust NE to discounted ones, showing that the average reward robust NE can be approximated by the discounted ones under a large discount factor. Our studies provide a comprehensive theoretical and algorithmic foundation for decision-making in complex, uncertain, and long-running multi-player environments.
+ oai:arXiv.org:2508.03136v3
+ cs.MA
+ cs.GT
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://creativecommons.org/licenses/by/4.0/
- Mostafa Darvishi
+ Zachary Roch, Yue Wang
- WTNN: Weibull-Tailored Neural Networks for survival analysis
- https://arxiv.org/abs/2512.09163
- arXiv:2512.09163v1 Announce Type: cross
-Abstract: The Weibull distribution is a commonly adopted choice for modeling the survival of systems subject to maintenance over time. When only proxy indicators and censored observations are available, it becomes necessary to express the distribution's parameters as functions of time-dependent covariates. Deep neural networks provide the flexibility needed to learn complex relationships between these covariates and operational lifetime, thereby extending the capabilities of traditional regression-based models. Motivated by the analysis of a fleet of military vehicles operating in highly variable and demanding environments, as well as by the limitations observed in existing methodologies, this paper introduces WTNN, a new neural network-based modeling framework specifically designed for Weibull survival studies. The proposed architecture is specifically designed to incorporate qualitative prior knowledge regarding the most influential covariates, in a manner consistent with the shape and structure of the Weibull distribution. Through numerical experiments, we show that this approach can be reliably trained on proxy and right-censored data, and is capable of producing robust and interpretable survival predictions that can improve existing approaches.
- oai:arXiv.org:2512.09163v1
+ CC-GRMAS: A Multi-Agent Graph Neural System for Spatiotemporal Landslide Risk Assessment in High Mountain Asia
+ https://arxiv.org/abs/2510.20875
+ arXiv:2510.20875v1 Announce Type: cross
+Abstract: Landslides are a growing climate induced hazard with severe environmental and human consequences, particularly in high mountain Asia. Despite increasing access to satellite and temporal datasets, timely detection and disaster response remain underdeveloped and fragmented. This work introduces CC-GRMAS, a framework leveraging a series of satellite observations and environmental signals to enhance the accuracy of landslide forecasting. The system is structured around three interlinked agents Prediction, Planning, and Execution, which collaboratively enable real time situational awareness, response planning, and intervention. By incorporating local environmental factors and operationalizing multi agent coordination, this approach offers a scalable and proactive solution for climate resilient disaster preparedness across vulnerable mountainous terrains.
+ oai:arXiv.org:2510.20875v1
+ cs.LG
+ cs.AI
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Mihir Panchal, Ying-Jung Chen, Surya Parkash
+
+
+ Benchmarking Document Parsers on Mathematical Formula Extraction from PDFs
+ https://arxiv.org/abs/2512.09874
+ arXiv:2512.09874v1 Announce Type: cross
+Abstract: Correctly parsing mathematical formulas from PDFs is critical for training large language models and building scientific knowledge bases from academic literature, yet existing benchmarks either exclude formulas entirely or lack semantically-aware evaluation metrics. We introduce a novel benchmarking framework centered on synthetically generated PDFs with precise LaTeX ground truth, enabling systematic control over layout, formulas, and content characteristics. A key methodological contribution is pioneering LLM-as-a-judge for semantic formula assessment, combined with a robust two-stage matching pipeline that handles parser output inconsistencies. Through human validation on 250 formula pairs (750 ratings from 30 evaluators), we demonstrate that LLM-based evaluation achieves substantially higher correlation with human judgment (Pearson r=0.78) compared to CDM (r=0.34) and text similarity (r~0). Evaluating 20+ contemporary PDF parsers (including specialized OCR models, vision-language models, and rule-based approaches) across 100 synthetic documents with 2,000+ formulas reveals significant performance disparities. Our findings provide crucial insights for practitioners selecting parsers for downstream applications and establish a robust, scalable methodology that enables reproducible evaluation of PDF formula extraction quality. Code and benchmark data: https://github.com/phorn1/pdf-parse-bench
+ oai:arXiv.org:2512.09874v1
+ cs.CV
+ cs.AI
+ cs.IR
+ Fri, 12 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pius Horn, Janis Keuper
+
+
+ LxCIM: a new rank-based binary classifier performance metric invariant to local exchange of classes
+ https://arxiv.org/abs/2512.10053
+ arXiv:2512.10053v1 Announce Type: cross
+Abstract: Binary classification is one of the oldest, most prevalent, and studied problems in machine learning. However, the metrics used to evaluate model performance have received comparatively little attention. The area under the receiver operating characteristic curve (AUROC) has long been a standard choice for model comparison. Despite its advantages, AUROC is not always ideal, particularly for problems that are invariant to local exchange of classes (LxC), a new form of metric invariance introduced in this work. To address this limitation, we propose LxCIM (LxC-invariant metric), which is not only rank-based and invariant under local exchange of classes, but also intuitive, logically consistent, and always computable, while enabling more detailed analysis through the cumulative accuracy-decision rate curve. Moreover, LxCIM exhibits clear theoretical connections to AUROC, accuracy, and the area under the accuracy-decision rate curve (AUDRC). These relationships allow for multiple complementary interpretations: as a symmetric form of AUROC, a rank-based analogue of accuracy, or a more representative and more interpretable variant of AUDRC. Finally, we demonstrate the direct applicability of LxCIM to the bivariate causal discovery problem (which exhibits invariance to local exchange of classes) and show how it addresses the acknowledged limitations of existing metrics used in this field. All code and implementation details are publicly available at github.com/tiagobrogueira/Causal-Discovery-In-Exchangeable-Data.
+ oai:arXiv.org:2512.10053v1stat.MLcs.LG
- stat.AP
- stat.ME
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://creativecommons.org/licenses/by/4.0/
- Gabrielle Rives, Olivier Lopez, Nicolas Bousquet
+ Tiago Brogueira, M\'ario A. T. Figueiredo
- AI-Driven Expansion and Application of the Alexandria Database
- https://arxiv.org/abs/2512.09169
- arXiv:2512.09169v1 Announce Type: cross
-Abstract: We present a novel multi-stage workflow for computational materials discovery that achieves a 99% success rate in identifying compounds within 100 meV/atom of thermodynamic stability, with a threefold improvement over previous approaches. By combining the Matra-Genoa generative model, Orb-v2 universal machine learning interatomic potential, and ALIGNN graph neural network for energy prediction, we generated 119 million candidate structures and added 1.3 million DFT-validated compounds to the ALEXANDRIA database, including 74 thousand new stable materials. The expanded ALEXANDRIA database now contains 5.8 million structures with 175 thousand compounds on the convex hull. Predicted structural disorder rates (37-43%) match experimental databases, unlike other recent AI-generated datasets. Analysis reveals fundamental patterns in space group distributions, coordination environments, and phase stability networks, including sub-linear scaling of convex hull connectivity. We release the complete dataset, including sAlex25 with 14 million out-of-equilibrium structures containing forces and stresses for training universal force fields. We demonstrate that fine-tuning a GRACE model on this data improves benchmark accuracy. All data, models, and workflows are freely available under Creative Commons licenses.
- oai:arXiv.org:2512.09169v1
- cond-mat.mtrl-sci
+ Classifying Metamorphic versus Single-Fold Proteins with Statistical Learning and AlphaFold2
+ https://arxiv.org/abs/2512.10066
+ arXiv:2512.10066v1 Announce Type: cross
+Abstract: The remarkable success of AlphaFold2 in providing accurate atomic-level prediction of protein structures from their amino acid sequence has transformed approaches to the protein folding problem. However, its core paradigm of mapping one sequence to one structure may only be appropriate for single-fold proteins with one stable conformation. Metamorphic proteins, which can adopt multiple distinct conformations, have conformational diversity that cannot be adequately modeled by AlphaFold2. Hence, classifying whether a given protein is metamorphic or single-fold remains a critical challenge for both laboratory experiments and computational methods. To address this challenge, we developed a novel classification framework by re-purposing AlphaFold2 to generate conformational ensembles via a multiple sequence alignment sampling method. From these ensembles, we extract a comprehensive set of features characterizing the conformational ensemble's modality and structural dispersion. A random forest classifier trained on a carefully curated benchmark dataset of known metamorphic and single-fold proteins achieves a mean AUC of 0.869 with cross-validation, demonstrating the effectiveness of our integrated approach. Furthermore, by applying our classifier to 600 randomly sampled proteins from the Protein Data Bank, we identified several potential metamorphic protein candidates -- including the 40S ribosomal protein S30, whose conformational change is crucial for its secondary function in antimicrobial defense. By combining AI-driven protein structure prediction with statistical learning, our work provides a powerful new approach for discovering metamorphic proteins and deepens our understanding of their role in their molecular function.
+ oai:arXiv.org:2512.10066v1
+ stat.APcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://creativecommons.org/licenses/by/4.0/
- Th\'eo Cavignac (Research Center Future Energy Materials and Systems of the University Alliance Ruhr and ICAMS, Ruhr University Bochum, Bochum, Germany), Jonathan Schmidt (Department of Materials, ETH Z\"urich, Z\"urich, Switzerland), Pierre-Paul De Breuck (Research Center Future Energy Materials and Systems of the University Alliance Ruhr and ICAMS, Ruhr University Bochum, Bochum, Germany), Antoine Loew (Research Center Future Energy Materials and Systems of the University Alliance Ruhr and ICAMS, Ruhr University Bochum, Bochum, Germany), Tiago F. T. Cerqueira (CFisUC, Department of Physics, University of Coimbra, Coimbra, Portugal), Hai-Chen Wang (Research Center Future Energy Materials and Systems of the University Alliance Ruhr and ICAMS, Ruhr University Bochum, Bochum, Germany), Anton Bochkarev (ICAMS, Ruhr-Universit\"at Bochum and ACEworks GmbH, Bochum, Germany), Yury Lysogorskiy (ICAMS, Ruhr-Universit\"at Bochum and ACEworks GmbH, Bochum, Germany), Aldo H. Romero (Department of Physics, West Virginia University, Morgantown, USA), Ralf Drautz (ICAMS, Ruhr-Universit\"at Bochum and ACEworks GmbH, Bochum, Germany), Silvana Botti (Research Center Future Energy Materials and Systems of the University Alliance Ruhr and ICAMS, Ruhr University Bochum, Bochum, Germany), Miguel A. L. Marques (Research Center Future Energy Materials and Systems of the University Alliance Ruhr and ICAMS, Ruhr University Bochum, Bochum, Germany)
+ Yongkai Chen, Samuel WK Wong, SC Kou
- Magic Gems: A Polyhedral Framework for Magic Squares
- https://arxiv.org/abs/2512.09170
- arXiv:2512.09170v1 Announce Type: cross
-Abstract: We introduce Magic Gems, a geometric representation of magic squares as three-dimensional polyhedra. By mapping an n x n magic square onto a centered coordinate grid with cell values as vertical displacements, we construct a point cloud whose convex hull defines the Magic Gem. This reveals a connection between magic square constraints and statistical structure: we prove that magic squares have vanishing covariances between position and value. We introduce a covariance energy functional -- the sum of squared covariances with row, column, and diagonal indicator variables -- and prove for n=3 (via exhaustive enumeration) that its zeros are precisely the magic squares. Large-scale sampling for n=4,5 (460+ million arrangements) provides strong numerical evidence that this characterization extends to larger orders. Perturbation analysis demonstrates that magic squares are isolated local minima. The representation is invariant under dihedral symmetry D_4, yielding canonical geometric objects for equivalence classes.
- oai:arXiv.org:2512.09170v1
- math.CO
- cs.CG
- cs.DM
- math.MG
- Thu, 11 Dec 2025 00:00:00 -0500
+ A Model-Guided Neural Network Method for the Inverse Scattering Problem
+ https://arxiv.org/abs/2512.10123
+ arXiv:2512.10123v1 Announce Type: cross
+Abstract: Inverse medium scattering is an ill-posed, nonlinear wave-based imaging problem arising in medical imaging, remote sensing, and non-destructive testing. Machine learning (ML) methods offer increased inference speed and flexibility in capturing prior knowledge of imaging targets relative to classical optimization-based approaches; however, they perform poorly in regimes where the scattering behavior is highly nonlinear. A key limitation is that ML methods struggle to incorporate the physics governing the scattering process, which are typically inferred implicitly from the training data or loosely enforced via architectural design. In this paper, we present a method that endows a machine learning framework with explicit knowledge of problem physics, in the form of a differentiable solver representing the forward model. The proposed method progressively refines reconstructions of the scattering potential using measurements at increasing wave frequencies, following a classical strategy to stabilize recovery. Empirically, we find that our method provides high-quality reconstructions at a fraction of the computational or sampling costs of competing approaches.
+ oai:arXiv.org:2512.10123v1
+ physics.comp-ph
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kyle Elliott Mathewson
+ Olivia Tsang, Owen Melia, Vasileios Charisopoulos, Jeremy Hoskins, Yuehaw Khoo, Rebecca Willett
- A Benamou-Brenier Proximal Splitting Method for Constrained Unbalanced Optimal Transport
- https://arxiv.org/abs/2512.09250
- arXiv:2512.09250v1 Announce Type: cross
-Abstract: The dynamic formulation of optimal transport, also known as the Benamou-Brenier formulation, has been extended to the unbalanced case by introducing a source term in the continuity equation. When this source term is penalized based on the Fisher-Rao metric, the resulting model is referred to as the Wasserstein-Fisher-Rao (WFR) setting, and allows for the comparison between any two positive measures without the need for equalized total mass. In recent work, we introduced a constrained variant of this model, in which affine integral equality constraints are imposed along the measure path. In the present paper, we propose a further generalization of this framework, which allows for constraints that apply not just to the density path but also to the momentum and source terms, and incorporates affine inequalities in addition to equality constraints. We prove, under suitable assumptions on the constraints, the well-posedness of the resulting class of convex variational problems. The paper is then primarily devoted to developing an effective numerical pipeline that tackles the corresponding constrained optimization problem based on finite difference discretizations and parallel proximal schemes. Our proposed framework encompasses standard balanced and unbalanced optimal transport, as well as a multitude of natural and practically relevant constraints, and we highlight its versatility via several synthetic and real data examples.
- oai:arXiv.org:2512.09250v1
- math.OC
- cs.NA
- math.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Inference for Batched Adaptive Experiments
+ https://arxiv.org/abs/2512.10156
+ arXiv:2512.10156v1 Announce Type: cross
+Abstract: The advantages of adaptive experiments have led to their rapid adoption in economics, other fields, as well as among practitioners. However, adaptive experiments pose challenges for causal inference. This note suggests a BOLS (batched ordinary least squares) test statistic for inference of treatment effects in adaptive experiments. The statistic provides a precision-equalizing aggregation of per-period treatment-control differences under heteroskedasticity. The combined test statistic is a normalized average of heteroskedastic per-period z-statistics and can be used to construct asymptotically valid confidence intervals. We provide simulation results comparing rejection rates in the typical case with few treatment periods and few (or many) observations per batch.
+ oai:arXiv.org:2512.10156v1
+ econ.EM
+ cs.LG
+ stat.ME
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://creativecommons.org/licenses/by/4.0/
- Mao Nishino, Martin Bauer, Tom Needham, Nicolas Charon
+ Jan Kemper, Davud Rostam-Afschar
- Robust and Sparse Estimation of Unbounded Density Ratio under Heavy Contamination
- https://arxiv.org/abs/2512.09266
- arXiv:2512.09266v1 Announce Type: cross
-Abstract: We examine the non-asymptotic properties of robust density ratio estimation (DRE) in contaminated settings. Weighted DRE is the most promising among existing methods, exhibiting doubly strong robustness from an asymptotic perspective. This study demonstrates that Weighted DRE achieves sparse consistency even under heavy contamination within a non-asymptotic framework. This method addresses two significant challenges in density ratio estimation and robust estimation. For density ratio estimation, we provide the non-asymptotic properties of estimating unbounded density ratios under the assumption that the weighted density ratio function is bounded. For robust estimation, we introduce a non-asymptotic framework for doubly strong robustness under heavy contamination, assuming that at least one of the following conditions holds: (i) contamination ratios are small, and (ii) outliers have small weighted values. This work provides the first non-asymptotic analysis of strong robustness under heavy contamination.
- oai:arXiv.org:2512.09266v1
+ Topology Identification and Inference over Graphs
+ https://arxiv.org/abs/2512.10183
+ arXiv:2512.10183v1 Announce Type: cross
+Abstract: Topology identification and inference of processes evolving over graphs arise in timely applications involving brain, transportation, financial, power, as well as social and information networks. This chapter provides an overview of graph topology identification and statistical inference methods for multidimensional relational data. Approaches for undirected links connecting graph nodes are outlined, going all the way from correlation metrics to covariance selection, and revealing ties with smooth signal priors. To account for directional (possibly causal) relations among nodal variables and address the limitations of linear time-invariant models in handling dynamic as well as nonlinear dependencies, a principled framework is surveyed to capture these complexities through judiciously selected kernels from a prescribed dictionary. Generalizations are also described via structural equations and vector autoregressions that can exploit attributes such as low rank, sparsity, acyclicity, and smoothness to model dynamic processes over possibly time-evolving topologies. It is argued that this approach supports both batch and online learning algorithms with convergence rate guarantees, is amenable to tensor (that is, multi-way array) formulations as well as decompositions that are well-suited for multidimensional network data, and can seamlessly leverage high-order statistical information.
+ oai:arXiv.org:2512.10183v1
+ eess.SP
+ cs.SI
+ stat.MEstat.ML
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://creativecommons.org/licenses/by/4.0/
- Ryosuke Nagumo, Hironori Fujisawa
+ Gonzalo Mateos, Yanning Shen, Georgios B. Giannakis, Ananthram Swami
- Impact of Positional Encoding: Clean and Adversarial Rademacher Complexity for Transformers under In-Context Regression
- https://arxiv.org/abs/2512.09275
- arXiv:2512.09275v1 Announce Type: cross
-Abstract: Positional encoding (PE) is a core architectural component of Transformers, yet its impact on the Transformer's generalization and robustness remains unclear. In this work, we provide the first generalization analysis for a single-layer Transformer under in-context regression that explicitly accounts for a completely trainable PE module. Our result shows that PE systematically enlarges the generalization gap. Extending to the adversarial setting, we derive the adversarial Rademacher generalization bound. We find that the gap between models with and without PE is magnified under attack, demonstrating that PE amplifies the vulnerability of models. Our bounds are empirically validated by a simulation study. Together, this work establishes a new framework for understanding the clean and adversarial generalization in ICL with PE.
- oai:arXiv.org:2512.09275v1
+ The Interplay of Statistics and Noisy Optimization: Learning Linear Predictors with Random Data Weights
+ https://arxiv.org/abs/2512.10188
+ arXiv:2512.10188v1 Announce Type: cross
+Abstract: We analyze gradient descent with randomly weighted data points in a linear regression model, under a generic weighting distribution. This includes various forms of stochastic gradient descent, importance sampling, but also extends to weighting distributions with arbitrary continuous values, thereby providing a unified framework to analyze the impact of various kinds of noise on the training trajectory. We characterize the implicit regularization induced through the random weighting, connect it with weighted linear regression, and derive non-asymptotic bounds for convergence in first and second moments. Leveraging geometric moment contraction, we also investigate the stationary distribution induced by the added noise. Based on these results, we discuss how specific choices of weighting distribution influence both the underlying optimization problem and statistical properties of the resulting estimator, as well as some examples for which weightings that lead to fast convergence cause bad statistical performance.
+ oai:arXiv.org:2512.10188v1stat.MLcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ stat.CO
+ Fri, 12 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Gabriel Clara, Yazan Mash'al
+
+
+ On Sybil Proofness in Competitive Combinatorial Exchanges
+ https://arxiv.org/abs/2512.10203
+ arXiv:2512.10203v1 Announce Type: cross
+Abstract: We study Sybil manipulation in BRACE, a competitive equilibrium mechanism for combinatorial exchanges, by treating identity creation as a finite perturbation of the empirical distribution of reported types. Under standard regularity assumptions on the excess demand map and smoothness of principal utilities, we obtain explicit linear bounds on price and welfare deviations induced by bounded Sybil invasion. Using these bounds, we prove a sharp contrast: strategyproofness in the large holds if and only if each principal's share of identities vanishes, whereas any principal with a persistent positive share can construct deviations yielding strictly positive limiting gains. We further show that the feasibility of BRACE fails in the event of an unbounded population of Sybils and provide a precise cost threshold that ensures disincentivization of such attacks in large markets.
+ oai:arXiv.org:2512.10203v1
+ econ.TH
+ cs.CR
+ Fri, 12 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Abhimanyu Nag
+
+
+ Active Optics for Hyperspectral Imaging of Reflective Agricultural Leaf Sensors
+ https://arxiv.org/abs/2512.10213
+ arXiv:2512.10213v1 Announce Type: cross
+Abstract: Monitoring plant health increasingly relies on leaf-mounted sensors that provide real-time physiological data, yet efficiently locating and sampling these sensors in complex agricultural environments remains a major challenge. We present an integrated, adaptive, and scalable system that autonomously detects and interrogates plant sensors using a coordinated suite of low-cost optical components including a LiDAR, liquid lens, monochrome camera, filter wheel, and Fast Steering Mirror (FSM). The system first uses LiDAR to identify the distinct reflective signatures of sensors within the field, then dynamically redirects the camera s field of view via the FSM to target each sensor for hyperspectral imaging. The liquid lens continuously adjusts focus to maintain image sharpness across varying depths, enabling precise spectral measurements. We validated the system in controlled indoor experiments, demonstrating accurate detection and tracking of reflective plant sensors and successful acquisition of their spectral data. To our knowledge, no other system currently integrates these sensing and optical modalities for agricultural monitoring. This work establishes a foundation for adaptive, low-cost, and automated plant sensor interrogation, representing a significant step toward scalable, real-time plant health monitoring in precision agriculture.
+ oai:arXiv.org:2512.10213v1
+ eess.IV
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Weiyi He, Yue Xing
+ Dexter Burns, Sanjeev Koppal
- Distributional Shrinkage II: Optimal Transport Denoisers with Higher-Order Scores
- https://arxiv.org/abs/2512.09295
- arXiv:2512.09295v1 Announce Type: cross
-Abstract: We revisit the signal denoising problem through the lens of optimal transport: the goal is to recover an unknown scalar signal distribution $X \sim P$ from noisy observations $Y = X + \sigma Z$, with $Z$ being standard Gaussian independent of $X$ and $\sigma>0$ a known noise level. Let $Q$ denote the distribution of $Y$. We introduce a hierarchy of denoisers $T_0, T_1, \ldots, T_\infty : \mathbb{R} \to \mathbb{R}$ that are agnostic to the signal distribution $P$, depending only on higher-order score functions of $Q$. Each denoiser $T_K$ is progressively refined using the $(2K-1)$-th order score function of $Q$ at noise resolution $\sigma^{2K}$, achieving better denoising quality measured by the Wasserstein metric $W(T_K \sharp Q, P)$. The limiting denoiser $T_\infty$ identifies the optimal transport map with $T_\infty \sharp Q = P$.
- We provide a complete characterization of the combinatorial structure underlying this hierarchy through Bell polynomial recursions, revealing how higher-order score functions encode the optimal transport map for signal denoising. We study two estimation strategies with convergence rates for higher-order scores from i.i.d. samples drawn from $Q$: (i) plug-in estimation via Gaussian kernel smoothing, and (ii) direct estimation via higher-order score matching. This hierarchy of agnostic denoisers opens new perspectives in signal denoising and empirical Bayes.
- oai:arXiv.org:2512.09295v1
+ Optimal learning of quantum channels in diamond distance
+ https://arxiv.org/abs/2512.10214
+ arXiv:2512.10214v1 Announce Type: cross
+Abstract: Quantum process tomography, the task of estimating an unknown quantum channel, is a central problem in quantum information theory and a key primitive for characterising noisy quantum devices. A long-standing open question is to determine the optimal number of uses of an unknown channel required to learn it in diamond distance, the standard measure of worst-case distinguishability between quantum processes. Here we show that a quantum channel acting on a $d$-dimensional system can be estimated to accuracy $\varepsilon$ in diamond distance using $O(d^4/\varepsilon^2)$ channel uses. This scaling is essentially optimal, as it matches lower bounds up to logarithmic factors. Our analysis extends to channels with input and output dimensions $d_{\mathrm{in}}$ and $d_{\mathrm{out}}$ and Kraus rank at most $k$, for which $O(d_{\mathrm{in}} d_{\mathrm{out}} k/\varepsilon^2)$ channel uses suffice, interpolating between unitary and fully generic channels. As by-products, we obtain, to the best of our knowledge, the first essentially optimal strategies for operator-norm learning of binary POVMs and isometries, and we recover optimal trace-distance tomography for fixed-rank states. Our approach consists of using the channel only non-adaptively to prepare copies of the Choi state, purify them in parallel, perform sample-optimal pure-state tomography on the purifications, and analyse the resulting estimator directly in diamond distance via its semidefinite-program characterisation. While the sample complexity of state tomography in trace distance is by now well understood, our results finally settle the corresponding problem for quantum channels in diamond distance.
+ oai:arXiv.org:2512.10214v1
+ quant-ph
+ cs.CC
+ cs.DS
+ Fri, 12 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Antonio Anna Mele, Lennart Bittel
+
+
+ On Learning-Curve Monotonicity for Maximum Likelihood Estimators
+ https://arxiv.org/abs/2512.10220
+ arXiv:2512.10220v1 Announce Type: cross
+Abstract: The property of learning-curve monotonicity, highlighted in a recent series of work by Loog, Mey and Viering, describes algorithms which only improve in average performance given more data, for any underlying data distribution within a given family. We establish the first nontrivial monotonicity guarantees for the maximum likelihood estimator in a variety of well-specified parametric settings. For sequential prediction with log loss, we show monotonicity (in fact complete monotonicity) of the forward KL divergence for Gaussian vectors with unknown covariance and either known or unknown mean, as well as for Gamma variables with unknown scale parameter. The Gaussian setting was explicitly highlighted as open in the aforementioned works, even in dimension 1. Finally we observe that for reverse KL divergence, a folklore trick yields monotonicity for very general exponential families.
+ All results in this paper were derived by variants of GPT-5.2 Pro. Humans did not provide any proof strategies or intermediate arguments, but only prompted the model to continue developing additional results, and verified and transcribed its proofs.
+ oai:arXiv.org:2512.10220v1math.STcs.LGstat.MLstat.TH
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Tengyuan Liang
-
-
- Infinitesimal containment and sparse factors of iid
- https://arxiv.org/abs/2512.09301
- arXiv:2512.09301v1 Announce Type: cross
-Abstract: We introduce infinitesimal weak containment for measure-preserving actions of a countable group $\Gamma$: an action $(X,\mu)$ is infinitesimally contained in $(Y,\nu)$ if the statistics of the action of $\Gamma$ on small measure subsets of $X$ can be approximated inside $Y$. We show that the Bernoulli shift $[0,1]^\Gamma$ is infinitesimally contained in the left-regular action of $\Gamma$. For exact groups, this implies that sparse factor-of-iid subsets of $\Gamma$ are approximately hyperfinite. We use it to quantify a theorem of Chifan--Ioana on measured subrelations of the Bernoulli shift of an exact group. For the proof of infinitesimal containment we define \emph{entropy support maps}, which take a small subset $U$ of $\{0,1\}^I$ and assign weights to coordinates above every point of $U$, according to how ''important'' they are for the structure of the set.
- oai:arXiv.org:2512.09301v1
- math.DS
- cs.IT
- math.IT
- math.PR
- Thu, 11 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Miko{\l}aj Fr\k{a}czyk
+ Mark Sellke, Steven Yin
- Functional Percolation: A Perspective on Criticality of Form and Function
- https://arxiv.org/abs/2512.09317
- arXiv:2512.09317v1 Announce Type: cross
-Abstract: Understanding the physical constraints and minimal conditions that enable information processing in extended systems remains a central challenge across disciplines, from neuroscience and artificial intelligence to social and physical networks. Here we study how network connectivity both limits and enables information processing by analyzing random networks across the structural percolation transition. Using cascade-mediated dynamics as a minimal and universal mechanism for propagating state-dependent responses, we examine structural, functional, and information-theoretic observables as functions of mean degree in Erdos-Renyi networks. We find that the emergence of a giant connected component coincides with a sharp transition in realizable information processing: complex input-output response functions become accessible, functional diversity increases rapidly, output entropy rises, and directed information flow quantified by transfer entropy extends beyond local neighborhoods. These coincident transitions define a regime of functional percolation, referring to a sharp expansion of the space of realizable input-output functions at the structural percolation transition. Near criticality, networks exhibit a Pareto-optimal tradeoff between functional complexity and diversity, suggesting that percolation criticality provides a universal organizing principle for information processing in systems with local interactions and propagating influences.
- oai:arXiv.org:2512.09317v1
- physics.soc-ph
- cond-mat.stat-mech
- cs.AI
- physics.comp-ph
- Thu, 11 Dec 2025 00:00:00 -0500
+ Galaxy Phase-Space and Field-Level Cosmology: The Strength of Semi-Analytic Models
+ https://arxiv.org/abs/2512.10222
+ arXiv:2512.10222v1 Announce Type: cross
+Abstract: Semi-analytic models are a widely used approach to simulate galaxy properties within a cosmological framework, relying on simplified yet physically motivated prescriptions. They have also proven to be an efficient alternative for generating accurate galaxy catalogs, offering a faster and less computationally expensive option compared to full hydrodynamical simulations. In this paper, we demonstrate that using only galaxy $3$D positions and radial velocities, we can train a graph neural network coupled to a moment neural network to obtain a robust machine learning based model capable of estimating the matter density parameters, $\Omega_{\rm m}$, with a precision of approximately 10%. The network is trained on ($25 h^{-1}$Mpc)$^3$ volumes of galaxy catalogs from L-Galaxies and can successfully extrapolate its predictions to other semi-analytic models (GAEA, SC-SAM, and Shark) and, more remarkably, to hydrodynamical simulations (Astrid, SIMBA, IllustrisTNG, and SWIFT-EAGLE). Our results show that the network is robust to variations in astrophysical and subgrid physics, cosmological and astrophysical parameters, and the different halo-profile treatments used across simulations. This suggests that the physical relationships encoded in the phase-space of semi-analytic models are largely independent of their specific physical prescriptions, reinforcing their potential as tools for the generation of realistic mock catalogs for cosmological parameter inference.
+ oai:arXiv.org:2512.10222v1
+ astro-ph.CO
+ astro-ph.GA
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Galen J. Wilkerson
+ Natal\'i S. M. de Santi, Francisco Villaescusa-Navarro, Pablo Araya-Araya, Gabriella De Lucia, Fabio Fontanot, Lucia A. Perez, Manuel Arn\'es-Curto, Violeta Gonzalez-Perez, \'Angel Chandro-G\'omez, Rachel S. Somerville, Tiago Castro
- A Propagator-based Multi-level Monte Carlo Method for Kinetic Neutral Species in Edge Plasmas
- https://arxiv.org/abs/2512.09334
- arXiv:2512.09334v1 Announce Type: cross
-Abstract: We propose and investigate a new multi-level Monte Carlo scheme for numerical solutions of the kinetic Boltzmann equation for neutral species in edge plasmas. In particular, this method explicitly exploits a key structural property of neutral particle dynamics: the prevalence of frequent collisions for which the outgoing velocity is determined by local plasma parameters. Using this property, we derive a multi-level algorithm based on collision event propagator and show, both analytically and through numerical experiments, that it reproduces the results of standard Monte Carlo methods. We further demonstrate that, in the context of coupled plasma-neutral edge simulations employing correlated Monte Carlo, the proposed scheme retains trajectory correlation to machine precision as the system evolves, whereas conventional methods exhibit rapid decorrelation. These results indicate that the propagator-based multi-level Monte Carlo scheme is a promising candidate for use in fully implicit Jacobian-free Newton-Krylov (JFNK) solvers for coupled plasma-neutral systems.
- oai:arXiv.org:2512.09334v1
- physics.plasm-ph
+ Error Analysis of Generalized Langevin Equations with Approximated Memory Kernels
+ https://arxiv.org/abs/2512.10256
+ arXiv:2512.10256v1 Announce Type: cross
+Abstract: We analyze prediction error in stochastic dynamical systems with memory, focusing on generalized Langevin equations (GLEs) formulated as stochastic Volterra equations. We establish that, under a strongly convex potential, trajectory discrepancies decay at a rate determined by the decay of the memory kernel and are quantitatively bounded by the estimation error of the kernel in a weighted norm. Our analysis integrates synchronized noise coupling with a Volterra comparison theorem, encompassing both subexponential and exponential kernel classes. For first-order models, we derive moment and perturbation bounds using resolvent estimates in weighted spaces. For second-order models with confining potentials, we prove contraction and stability under kernel perturbations using a hypocoercive Lyapunov-type distance. This framework accommodates non-translation-invariant kernels and white-noise forcing, explicitly linking improved kernel estimation to enhanced trajectory prediction. Numerical examples validate these theoretical findings.
+ oai:arXiv.org:2512.10256v1
+ stat.ML
+ cs.LGcs.NA
+ math.DSmath.NA
- physics.comp-ph
- Thu, 11 Dec 2025 00:00:00 -0500
+ math.PR
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Gregory J. Parker, Maxim V. Umansky, Benjamin D. Dudson
+ Quanjun Lang, Jianfeng Lu
- Phase transition to causal symmetry reveals operational autonomy in sociotechnical systems
- https://arxiv.org/abs/2512.09352
- arXiv:2512.09352v1 Announce Type: cross
-Abstract: Complex adaptive systems persist through continuous transformation, yet the dynamical principles governing their long-term stability remain poorly characterized. Here we analyze 50 large-scale collaborative ecosystems spanning 11,042 system-months to quantify the emergence of operational autonomy. We develop an order parameter (Gamma) measuring structural persistence amid component turnover and characterize directional coupling between organizational architecture and collective activity. Gamma exhibits a bimodal distribution (Hartigan p=0.0126; Delta BIC = 2,000), identifying two regimes: an exploratory phase of high variance and a mature phase with 1.77x variance collapse. Granger analysis reveals causal symmetrization at maturity - the structure-activity coupling ratio shifts from 0.71 (activity-driven) to 0.94 (bidirectional), indicating that architecture increasingly constrains collective coordination.
- A viability index, combining activity and structure, outperforms activity-based prediction (AUC = 0.88 vs 0.81), identifying 'zombie' systems where high churn masks structural decay. This extends recent work by Ait et al., who identified 'zombie' projects exhibiting activity without development based on non-coding contributions. Our metric identifies structural zombies: projects where coding activity persists but fails to preserve architectural invariants.
- These results establish causal symmetrization as an empirically validated signature of self-organizing autonomy applicable across complex collaborative systems - a dynamical regime previously theorized in biological contexts but here demonstrated and measured in artificial ones.
- oai:arXiv.org:2512.09352v1
- physics.soc-ph
- cs.CY
- cs.SE
- Thu, 11 Dec 2025 00:00:00 -0500
+ Optimality Deviation using the Koopman Operator
+ https://arxiv.org/abs/2512.10270
+ arXiv:2512.10270v1 Announce Type: cross
+Abstract: This paper investigates the impact of approximation error in data-driven optimal control problem of nonlinear systems while using the Koopman operator. While the Koopman operator enables a simplified representation of nonlinear dynamics through a lifted state space, the presence of approximation error inevitably leads to deviations in the computed optimal controller and the resulting value function. We derive explicit upper bounds for these optimality deviations, which characterize the worst-case effect of approximation error. Supported by numerical examples, these theoretical findings provide a quantitative foundation for improving the robustness of data-driven optimal controller design.
+ oai:arXiv.org:2512.10270v1
+ math.OC
+ cs.SY
+ eess.SY
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Anthony Gosme
+ Yicheng Lin, Bingxian Wu, Nan Bai, Yunxiao Ren, Zhisheng Duan
- Meta-learning three-factor plasticity rules for structured credit assignment with sparse feedback
- https://arxiv.org/abs/2512.09366
- arXiv:2512.09366v1 Announce Type: cross
-Abstract: Biological neural networks learn complex behaviors from sparse, delayed feedback using local synaptic plasticity, yet the mechanisms enabling structured credit assignment remain elusive. In contrast, artificial recurrent networks solving similar tasks typically rely on biologically implausible global learning rules or hand-crafted local updates. The space of local plasticity rules capable of supporting learning from delayed reinforcement remains largely unexplored. Here, we present a meta-learning framework that discovers local learning rules for structured credit assignment in recurrent networks trained with sparse feedback. Our approach interleaves local neo-Hebbian-like updates during task execution with an outer loop that optimizes plasticity parameters via \textbf{tangent-propagation through learning}. The resulting three-factor learning rules enable long-timescale credit assignment using only local information and delayed rewards, offering new insights into biologically grounded mechanisms for learning in recurrent circuits.
- oai:arXiv.org:2512.09366v1
- q-bio.NC
- cond-mat.dis-nn
+ Tracking large chemical reaction networks and rare events by neural networks
+ https://arxiv.org/abs/2512.10309
+ arXiv:2512.10309v1 Announce Type: cross
+Abstract: Chemical reaction networks are widely used to model stochastic dynamics in chemical kinetics, systems biology and epidemiology. Solving the chemical master equation that governs these systems poses a significant challenge due to the large state space exponentially growing with system sizes. The development of autoregressive neural networks offers a flexible framework for this problem; however, its efficiency is limited especially for high-dimensional systems and in scenarios with rare events. Here, we push the frontier of neural-network approach by exploiting faster optimizations such as natural gradient descent and time-dependent variational principle, achieving a 5- to 22-fold speedup, and by leveraging enhanced-sampling strategies to capture rare events. We demonstrate reduced computational cost and higher accuracy over the previous neural-network method in challenging reaction networks, including the mitogen-activated protein kinase (MAPK) cascade network, the hitherto largest biological network handled by the previous approaches of solving the chemical master equation. We further apply the approach to spatially extended reaction-diffusion systems, the Schl\"ogl model with rare events, on two-dimensional lattices, beyond the recent tensor-network approach that handles one-dimensional lattices. The present approach thus enables efficient modeling of chemical reaction networks in general.
+ oai:arXiv.org:2512.10309v1
+ q-bio.MNcs.LGphysics.bio-ph
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Dimitra Maoutsa
+ http://creativecommons.org/licenses/by/4.0/
+ Jiayu Weng, Xinyi Zhu, Jing Liu, Linyuan L\"u, Pan Zhang, Ying Tang
- LiePrune: Lie Group and Quantum Geometric Dual Representation for One-Shot Structured Pruning of Quantum Neural Networks
- https://arxiv.org/abs/2512.09469
- arXiv:2512.09469v1 Announce Type: cross
-Abstract: Quantum neural networks (QNNs) and parameterized quantum circuits (PQCs) are key building blocks for near-term quantum machine learning. However, their scalability is constrained by excessive parameters, barren plateaus, and hardware limitations. We propose LiePrune, the first mathematically grounded one-shot structured pruning framework for QNNs that leverages Lie group structure and quantum geometric information. Each gate is jointly represented in a Lie group--Lie algebra dual space and a quantum geometric feature space, enabling principled redundancy detection and aggressive compression. Experiments on quantum classification (MNIST, FashionMNIST), quantum generative modeling (Bars-and-Stripes), and quantum chemistry (LiH VQE) show that LiePrune achieves over $10\times$ compression with negligible or even improved task performance, while providing provable guarantees on redundancy detection, functional approximation, and computational complexity.
- oai:arXiv.org:2512.09469v1
- quant-ph
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Residual subspace evolution strategies for nonlinear inverse problems
+ https://arxiv.org/abs/2512.10325
+ arXiv:2512.10325v1 Announce Type: cross
+Abstract: Nonlinear inverse problems often feature noisy, non-differentiable, or expensive residual evaluations that make Jacobian-based solvers unreliable. Popular derivative-free optimizers such as natural evolution strategies (NES) or Powell's NEWUOA still assume smoothness or expend many evaluations to maintain stability. Ensemble Kalman inversion (EKI) relies on empirical covariances that require preconditioning and scale poorly with residual dimension.
+ We introduce residual subspace evolution strategies (RSES), a derivative-free solver that samples Gaussian probes around the current iterate, builds a residual-only surrogate from their differences, and recombines the probes through a least-squares solve yielding an optimal update without forming Jacobians or covariances. Each iteration costs $k+1$ residual evaluations, where $k \ll n$ for $n$-dimensional problems, with $O(k^3)$ linear algebra overhead.
+ Benchmarks on calibration, regression, and deconvolution problems demonstrate consistent misfit reduction in both deterministic and stochastic settings. RSES matches or surpasses xNES and NEWUOA while staying competitive with EKI under matched evaluation budgets, particularly when smoothness or covariance assumptions fail.
+ oai:arXiv.org:2512.10325v1
+ math.OC
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://creativecommons.org/licenses/by/4.0/
- Haijian Shao, Bowen Yang, Wei Liu, Xing Deng, Yingtao Jiang
+ Francesco Alemanno
- Modeling Complex Multiphysics Systems with Discrete Element Method Enriched with the Kernel-Independent Fast Multipole Method
- https://arxiv.org/abs/2512.09478
- arXiv:2512.09478v1 Announce Type: cross
-Abstract: The paper describes the coupling of the MercuryDPM discrete element method (DEM) code and the implementation of the kernel-independent fast multipole method (KIFMM). The combined simulation framework allows addressing the large class of multiscale problems, including both the mechanical interactions of particulates at the fine scale and the long-range interactions of various natures at the coarse scale. Among these are electrostatic interactions in powders, clays, and particulates, magnetic interactions in ferromagnetic granulates, and gravitational interactions in asteroid clouds. The formalism of rigid clumps is successfully combined with KIFMM, enabling addressing problems involving complex long-large interactions between non-spherical particles with arbitrary charge distributions. The capabilities of our technique are demonstrated in several application examples.
- oai:arXiv.org:2512.09478v1
- cond-mat.soft
+ The Radon Transform-Based Sampling Methods for Biharmonic Sources from the Scattered Fields
+ https://arxiv.org/abs/2512.10332
+ arXiv:2512.10332v1 Announce Type: cross
+Abstract: This paper presents three quantitative sampling methods for reconstructing extended sources of the biharmonic wave equation using scattered field data. The first method employs an indicator function that solely relies on scattered fields $ u^s$ measured on a single circle, eliminating the need for Laplacian or derivative data. Its theoretical foundation lies in an explicit formula for the source function, which also serves as a constructive proof of uniqueness. To improve computational efficiency, we introduce a simplified double integral formula for the source function, at the cost of requiring additional measurements $\Delta u^s$. This advancement motivates the second indicator function, which outperforms the first method in both computational speed and reconstruction accuracy. The third indicator function is proposed to reconstruct the support boundary of extended sources from the scattered fields $ u^s$ at a finite number of sensors. By analyzing singularities induced by the source boundary, we establish the uniqueness of annulus and polygon-shaped sources. A key characteristic of the first and third indicator functions is their link between scattered fields and the Radon transform of the source function. Numerical experiments demonstrate that the proposed sampling methods achieve high-resolution imaging of the source support or the source function itself.
+ oai:arXiv.org:2512.10332v1
+ math-phcs.NA
+ math.MPmath.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500cross
- http://creativecommons.org/licenses/by/4.0/
- Igor A. Ostanin
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiaodong Liu, Qingxiang Shi, Jing Wang
- Estimation of Stochastic Optimal Transport Maps
- https://arxiv.org/abs/2512.09499
- arXiv:2512.09499v1 Announce Type: cross
-Abstract: The optimal transport (OT) map is a geometry-driven transformation between high-dimensional probability distributions which underpins a wide range of tasks in statistics, applied probability, and machine learning. However, existing statistical theory for OT map estimation is quite restricted, hinging on Brenier's theorem (quadratic cost, absolutely continuous source) to guarantee existence and uniqueness of a deterministic OT map, on which various additional regularity assumptions are imposed to obtain quantitative error bounds. In many real-world problems these conditions fail or cannot be certified, in which case optimal transportation is possible only via stochastic maps that can split mass. To broaden the scope of map estimation theory to such settings, this work introduces a novel metric for evaluating the transportation quality of stochastic maps. Under this metric, we develop computationally efficient map estimators with near-optimal finite-sample risk bounds, subject to easy-to-verify minimal assumptions. Our analysis further accommodates common forms of adversarial sample contamination, yielding estimators with robust estimation guarantees. Empirical experiments are provided which validate our theory and demonstrate the utility of the proposed framework in settings where existing theory fails. These contributions constitute the first general-purpose theory for map estimation, compatible with a wide spectrum of real-world applications where optimal transport may be intrinsically stochastic.
- oai:arXiv.org:2512.09499v1
+ Diffusion differentiable resampling
+ https://arxiv.org/abs/2512.10401
+ arXiv:2512.10401v1 Announce Type: cross
+Abstract: This paper is concerned with differentiable resampling in the context of sequential Monte Carlo (e.g., particle filtering). We propose a new informative resampling method that is instantly pathwise differentiable, based on an ensemble score diffusion model. We prove that our diffusion resampling method provides a consistent estimate to the resampling distribution, and we show by experiments that it outperforms the state-of-the-art differentiable resampling methods when used for stochastic filtering and parameter estimation.
+ oai:arXiv.org:2512.10401v1stat.MLcs.LGmath.STstat.TH
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sloan Nietert, Ziv Goldfeld
+ Jennifer Rosina Andersson, Zheng Zhao
- Coloring Geometric Hypergraphs: A Survey
- https://arxiv.org/abs/2512.09509
- arXiv:2512.09509v1 Announce Type: cross
-Abstract: The \emph{chromatic number} of a hypergraph is the smallest number of colors needed to color the vertices such that no edge of at least two vertices is monochromatic. Given a family of geometric objects $\mathcal{F}$ that covers a subset $S$ of the Euclidean space, we can associate it with a hypergraph whose vertex set is $\mathcal F$ and whose edges are those subsets ${\mathcal{F}'}\subset \mathcal F$ for which there exists a point $p\in S$ such that ${\mathcal F}'$ consists of precisely those elements of $\mathcal{F}$ that contain $p$. The question whether $\mathcal F$ can be split into 2 coverings is equivalent to asking whether the chromatic number of the hypergraph is equal to 2.
- There are a number of competing notions of the chromatic number that lead to deep combinatorial questions already for abstract hypergraphs. In this paper, we concentrate on \emph{geometrically defined} (in short, \emph{geometric}) hypergraphs, and survey many recent coloring results related to them. In particular, we study and survey the following problem, dual to the above covering question. Given a set of points $S$ in the Euclidean space and a family $\mathcal{F}$ of geometric objects of a fixed type, define a hypergraph ${\mathcal H}_m$ on the point set $S$, whose edges are the subsets of $S$ that can be obtained as the intersection of $S$ with a member of $\mathcal F$ and have at least $m$ elements. Is it true that if $m$ is large enough, then the chromatic number of ${\mathcal H}_m$ is equal to 2?
- oai:arXiv.org:2512.09509v1
- math.CO
- cs.CG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Supervised Learning of Random Neural Architectures Structured by Latent Random Fields on Compact Boundaryless Multiply-Connected Manifolds
+ https://arxiv.org/abs/2512.10407
+ arXiv:2512.10407v1 Announce Type: cross
+Abstract: This paper introduces a new probabilistic framework for supervised learning in neural systems. It is designed to model complex, uncertain systems whose random outputs are strongly non-Gaussian given deterministic inputs. The architecture itself is a random object stochastically generated by a latent anisotropic Gaussian random field defined on a compact, boundaryless, multiply-connected manifold. The goal is to establish a novel conceptual and mathematical framework in which neural architectures are realizations of a geometry-aware, field-driven generative process. Both the neural topology and synaptic weights emerge jointly from a latent random field. A reduced-order parameterization governs the spatial intensity of an inhomogeneous Poisson process on the manifold, from which neuron locations are sampled. Input and output neurons are identified via extremal evaluations of the latent field, while connectivity is established through geodesic proximity and local field affinity. Synaptic weights are conditionally sampled from the field realization, inducing stochastic output responses even for deterministic inputs. To ensure scalability, the architecture is sparsified via percentile-based diffusion masking, yielding geometry-aware sparse connectivity without ad hoc structural assumptions. Supervised learning is formulated as inference on the generative hyperparameters of the latent field, using a negative log-likelihood loss estimated through Monte Carlo sampling from single-observation-per-input datasets. The paper initiates a mathematical analysis of the model, establishing foundational properties such as well-posedness, measurability, and a preliminary analysis of the expressive variability of the induced stochastic mappings, which support its internal coherence and lay the groundwork for a broader theory of geometry-driven stochastic learning.
+ oai:arXiv.org:2512.10407v1
+ stat.ML
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- G\'abor Dam\'asdi, Bal\'azs Keszegh, J\'anos Pach, D\"om\"ot\"or P\'alv\"olgyi, G\'eza T\'oth
+ http://creativecommons.org/licenses/by/4.0/
+ Christian Soize
- Transport Novelty Distance: A Distributional Metric for Evaluating Material Generative Models
- https://arxiv.org/abs/2512.09514
- arXiv:2512.09514v1 Announce Type: cross
-Abstract: Recent advances in generative machine learning have opened new possibilities for the discovery and design of novel materials. However, as these models become more sophisticated, the need for rigorous and meaningful evaluation metrics has grown. Existing evaluation approaches often fail to capture both the quality and novelty of generated structures, limiting our ability to assess true generative performance. In this paper, we introduce the Transport Novelty Distance (TNovD) to judge generative models used for materials discovery jointly by the quality and novelty of the generated materials. Based on ideas from Optimal Transport theory, TNovD uses a coupling between the features of the training and generated sets, which is refined into a quality and memorization regime by a threshold. The features are generated from crystal structures using a graph neural network that is trained to distinguish between materials, their augmented counterparts, and differently sized supercells using contrastive learning. We evaluate our proposed metric on typical toy experiments relevant for crystal structure prediction, including memorization, noise injection and lattice deformations. Additionally, we validate the TNovD on the MP20 validation set and the WBM substitution dataset, demonstrating that it is capable of detecting both memorized and low-quality material data. We also benchmark the performance of several popular material generative models. While introduced for materials, our TNovD framework is domain-agnostic and can be adapted for other areas, such as images and molecules.
- oai:arXiv.org:2512.09514v1
- cond-mat.mtrl-sci
+ Maximum Risk Minimization with Random Forests
+ https://arxiv.org/abs/2512.10445
+ arXiv:2512.10445v1 Announce Type: cross
+Abstract: We consider a regression setting where observations are collected in different environments modeled by different data distributions. The field of out-of-distribution (OOD) generalization aims to design methods that generalize better to test environments whose distributions differ from those observed during training. One line of such works has proposed to minimize the maximum risk across environments, a principle that we refer to as MaxRM (Maximum Risk Minimization). In this work, we introduce variants of random forests based on the principle of MaxRM. We provide computationally efficient algorithms and prove statistical consistency for our primary method. Our proposed method can be used with each of the following three risks: the mean squared error, the negative reward (which relates to the explained variance), and the regret (which quantifies the excess risk relative to the best predictor). For MaxRM with regret as the risk, we prove a novel out-of-sample guarantee over unseen test distributions. Finally, we evaluate the proposed methods on both simulated and real-world data.
+ oai:arXiv.org:2512.10445v1
+ stat.ML
+ cs.AIcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ stat.ME
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://creativecommons.org/licenses/by/4.0/
- Paul Hagemann, Simon M\"uller, Janine George, Philipp Benner
+ Francesco Freni, Anya Fries, Linus K\"uhne, Markus Reichstein, Jonas Peters
- NeuroSketch: An Effective Framework for Neural Decoding via Systematic Architectural Optimization
- https://arxiv.org/abs/2512.09524
- arXiv:2512.09524v1 Announce Type: cross
-Abstract: Neural decoding, a critical component of Brain-Computer Interface (BCI), has recently attracted increasing research interest. Previous research has focused on leveraging signal processing and deep learning methods to enhance neural decoding performance. However, the in-depth exploration of model architectures remains underexplored, despite its proven effectiveness in other tasks such as energy forecasting and image classification. In this study, we propose NeuroSketch, an effective framework for neural decoding via systematic architecture optimization. Starting with the basic architecture study, we find that CNN-2D outperforms other architectures in neural decoding tasks and explore its effectiveness from temporal and spatial perspectives. Building on this, we optimize the architecture from macro- to micro-level, achieving improvements in performance at each step. The exploration process and model validations take over 5,000 experiments spanning three distinct modalities (visual, auditory, and speech), three types of brain signals (EEG, SEEG, and ECoG), and eight diverse decoding tasks. Experimental results indicate that NeuroSketch achieves state-of-the-art (SOTA) performance across all evaluated datasets, positioning it as a powerful tool for neural decoding. Our code and scripts are available at https://github.com/Galaxy-Dawn/NeuroSketch.
- oai:arXiv.org:2512.09524v1
- q-bio.NC
- cs.AI
- cs.LG
- eess.SP
- Thu, 11 Dec 2025 00:00:00 -0500
+ On Simplest Kochen-Specker Sets
+ https://arxiv.org/abs/2512.10483
+ arXiv:2512.10483v1 Announce Type: cross
+Abstract: In Phys. Rev. Lett. 135, 190203 (2025) a discovery of the simplest 3D contextual set with 33 vertices, 50 bases, and 14 complete bases is claimed. In this paper, we show that it was previously generated in Quantum 7, 953 (2023) and analyze the meaning, origin, and significance of the simplest contextual sets in any dimension. In particular, we prove that there is no ground to consider the aforementioned set as fundamental since there are many 3D contextual sets with a smaller number of complete bases. We also show that automatic generation of contextual sets from basic vector components automatically yields all known minimal contextual sets of any kind in any dimension and therefore also the aforementioned set in no CPU-time. In the end, we discuss varieties of contextual sets, in particular Kochen-Specker (KS), extended KS, and non-KS sets as well as ambiguities in their definitions.
+ oai:arXiv.org:2512.10483v1
+ quant-ph
+ cs.IT
+ math-ph
+ math.IT
+ math.MP
+ Fri, 12 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Mladen Pavicic
+
+
+ A Three-Dimensional SFT with Sparse Columns
+ https://arxiv.org/abs/2512.10499
+ arXiv:2512.10499v1 Announce Type: cross
+Abstract: We construct a nontrivial three-dimensional subshift of finite type whose projective $\Z$-subdynamics, or $\Z$-trace, is 2-sparse, meaning that there are at most two nonzero symbols in any vertical column. The subshift is deterministic in the direction of the subdynamics, so it is topologically conjugate to the set of spacetime diagrams of a partial cellular automaton. We also present a variant of the subshift that is defined by Wang cubes, and one whose alphabet is binary.
+ oai:arXiv.org:2512.10499v1
+ math.DS
+ cs.DM
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Gaorui Zhang, Zhizhang Yuan, Jialan Yang, Junru Chen, Li Meng, Yang Yang
+ Ville Salo, Ilkka T\"orm\"a
- Transformers for Tabular Data: A Training Perspective of Self-Attention via Optimal Transport
- https://arxiv.org/abs/2512.09530
- arXiv:2512.09530v1 Announce Type: cross
-Abstract: This thesis examines self-attention training through the lens of Optimal Transport (OT) and develops an OT-based alternative for tabular classification. The study tracks intermediate projections of the self-attention layer during training and evaluates their evolution using discrete OT metrics, including Wasserstein distance, Monge gap, optimality, and efficiency. Experiments are conducted on classification tasks with two and three classes, as well as on a biomedical dataset.
- Results indicate that the final self-attention mapping often approximates the OT optimal coupling, yet the training trajectory remains inefficient. Pretraining the MLP section on synthetic data partially improves convergence but is sensitive to their initialization. To address these limitations, an OT-based algorithm is introduced: it generates class-specific dummy Gaussian distributions, computes an OT alignment with the data, and trains an MLP to generalize this mapping. The method achieves accuracy comparable to Transformers while reducing computational cost and scaling more efficiently under standardized inputs, though its performance depends on careful dummy-geometry design. All experiments and implementations are conducted in R.
- oai:arXiv.org:2512.09530v1
- stat.ML
+ Hyperspectral Image Data Reduction for Endmember Extraction
+ https://arxiv.org/abs/2512.10506
+ arXiv:2512.10506v1 Announce Type: cross
+Abstract: Endmember extraction from hyperspectral images aims to identify the spectral signatures of materials present in a scene. Recent studies have shown that self-dictionary methods can achieve high extraction accuracy; however, their high computational cost limits their applicability to large-scale hyperspectral images. Although several approaches have been proposed to mitigate this issue, it remains a major challenge. Motivated by this situation, this paper pursues a data reduction approach. Assuming that the hyperspectral image follows the linear mixing model with the pure-pixel assumption, we develop a data reduction technique that removes pixels that do not contain endmembers. We analyze the theoretical properties of this reduction step and show that it preserves pixels that lie close to the endmembers. Building on this result, we propose a data-reduced self-dictionary method that integrates the data reduction with a self-dictionary method based on a linear programming formulation. Numerical experiments demonstrate that the proposed method can substantially reduce the computational time of the original self-dictionary method without sacrificing endmember extraction accuracy.
+ oai:arXiv.org:2512.10506v1
+ eess.IVcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ eess.SP
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://creativecommons.org/licenses/by/4.0/
- Antonio Candelieri, Alessandro Quadrio
+ Tomohiko Mizutani
- Don't Throw Away Your Beams: Improving Consistency-based Uncertainties in LLMs via Beam Search
- https://arxiv.org/abs/2512.09538
- arXiv:2512.09538v1 Announce Type: cross
-Abstract: Consistency-based methods have emerged as an effective approach to uncertainty quantification (UQ) in large language models. These methods typically rely on several generations obtained via multinomial sampling, measuring their agreement level. However, in short-form QA, multinomial sampling is prone to producing duplicates due to peaked distributions, and its stochasticity introduces considerable variance in uncertainty estimates across runs. We introduce a new family of methods that employ beam search to generate candidates for consistency-based UQ, yielding improved performance and reduced variance compared to multinomial sampling. We also provide a theoretical lower bound on the beam set probability mass under which beam search achieves a smaller error than multinomial sampling. We empirically evaluate our approach on six QA datasets and find that its consistent improvements over multinomial sampling lead to state-of-the-art UQ performance.
- oai:arXiv.org:2512.09538v1
+ Flexible Deep Neural Networks for Partially Linear Survival Data
+ https://arxiv.org/abs/2512.10570
+ arXiv:2512.10570v1 Announce Type: cross
+Abstract: We propose a flexible deep neural network (DNN) framework for modeling survival data within a partially linear regression structure. The approach preserves interpretability through a parametric linear component for covariates of primary interest, while a nonparametric DNN component captures complex time-covariate interactions among nuisance variables. We refer to the method as FLEXI-Haz, a flexible hazard model with a partially linear structure. In contrast to existing DNN approaches for partially linear Cox models, FLEXI-Haz does not rely on the proportional hazards assumption. We establish theoretical guarantees: the neural network component attains minimax-optimal convergence rates based on composite Holder classes, and the linear estimator is root-n consistent, asymptotically normal, and semiparametrically efficient. Extensive simulations and real-data analyses demonstrate that FLEXI-Haz provides accurate estimation of the linear effect, offering a principled and interpretable alternative to modern methods based on proportional hazards. Code for implementing FLEXI-Haz, as well as scripts for reproducing data analyses and simulations, is available at: https://github.com/AsafBanana/FLEXI-Haz
+ oai:arXiv.org:2512.10570v1stat.ML
- cs.CLcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Asaf Ben Arie, Malka Gorfine
+
+
+ Topology-Guided Quantum GANs for Constrained Graph Generation
+ https://arxiv.org/abs/2512.10582
+ arXiv:2512.10582v1 Announce Type: cross
+Abstract: Quantum computing (QC) promises theoretical advantages, benefiting computational problems that would not be efficiently classically simulatable. However, much of this theoretical speedup depends on the quantum circuit design solving the problem. We argue that QC literature has yet to explore more domain specific ansatz-topologies, instead of relying on generic, one-size-fits-all architectures. In this work, we show that incorporating task-specific inductive biases -- specifically geometric priors -- into quantum circuit design can enhance the performance of hybrid Quantum Generative Adversarial Networks (QuGANs) on the task of generating geometrically constrained K4 graphs. We evaluate a portfolio of entanglement topologies and loss-function designs to assess their impact on both statistical fidelity and compliance with geometric constraints, including the Triangle and Ptolemaic inequalities. Our results show that aligning circuit topology with the underlying problem structure yields substantial benefits: the Triangle-topology QuGAN achieves the highest geometric validity among quantum models and matches the performance of classical Generative Adversarial Networks (GAN). Additionally, we showcase how specific architectural choices, such as entangling gate types, variance regularization and output-scaling govern the trade-off between geometric consistency and distributional accuracy, thus emphasizing the value of structured, task-aware quantum ansatz-topologies.
+ oai:arXiv.org:2512.10582v1
+ quant-ph
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://creativecommons.org/licenses/by/4.0/
- Ekaterina Fadeeva, Maiya Goloburda, Aleksandr Rubashevskii, Roman Vashurin, Artem Shelmanov, Preslav Nakov, Mrinmaya Sachan, Maxim Panov
+ Tobias Rohe, Markus Baumann, Michael Poppel, Gerhard Stenzel, Maximilian Zorn, Claudia Linnhoff-Popien
- A tensor phase theory with applications in multilinear control
- https://arxiv.org/abs/2512.09559
- arXiv:2512.09559v1 Announce Type: cross
-Abstract: The purpose of this paper is to initiate a phase theory for tensors under the Einstein product, and explore its applications in multilinear control systems. Firstly, the sectorial tensor decomposition for sectorial tensors is derived, which allows us to define phases for sectorial tensors. A numerical procedure for computing phases of a sectorial tensor is also proposed. Secondly, the maximin and minimax expressions for tensor phases are given, which are used to quantify how close the phases of a sectorial tensor are to those of its compressions. Thirdly, the compound spectrum, compound numerical ranges and compound angular numerical ranges of two sectorial tensors $A,B$ are defined and characterized in terms of the compound numerical ranges and compound angular numerical ranges of the sectorial tensors $A,B$. Fourthly, it is shown that the angles of eigenvalues of the product of two sectorial tensors are upper bounded by the sum of their individual phases. Finally, based on the tensor phase theory developed above, a tensor version of the small phase theorem is presented, which can be regarded as a natural generalization of the matrix case, recently proposed in Ref. [10]. The results offer powerful new tools for the stability and robustness analysis of multilinear feedback control systems.
- oai:arXiv.org:2512.09559v1
+ Linear Quadratic Regulators: A New Look
+ https://arxiv.org/abs/2512.10641
+ arXiv:2512.10641v1 Announce Type: cross
+Abstract: Linear time-invariant control systems can be considered as finitely generated modules over the commutative principal ideal ring $\mathbb{R}[\frac{d}{dt}]$ of linear differential operators with respect to the time derivative. The Kalman controllability in this algebraic language is translated as the freeness of the system module. Linear quadratic regulators rely on quadratic Lagrangians, or cost functions. Any flat output, i.e., any basis of the corresponding free module leads to an open-loop control strategy via an Euler-Lagrange equation, which becomes here a linear ordinary differential equation with constant coefficients. In this approach, the two-point boundary value problem, including the control variables, becomes tractable. It yields notions of optimal time horizon, optimal parameter design and optimal rest-to-rest trajectories. The loop is closed via an intelligent controller derived from model-free control, which is known to exhibit excellent performance concerning model mismatches and disturbances.
+ oai:arXiv.org:2512.10641v1math.OCcs.SYeess.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://creativecommons.org/licenses/by/4.0/
- Chengdong Liu, Yimin Wei, Guofeng Zhang
+ C\'eEdric Join, Emmanuel Delaveau, Michel Fliess
- Lazy Diffusion: Mitigating spectral collapse in generative diffusion-based stable autoregressive emulation of turbulent flows
- https://arxiv.org/abs/2512.09572
- arXiv:2512.09572v1 Announce Type: cross
-Abstract: Turbulent flows posses broadband, power-law spectra in which multiscale interactions couple high-wavenumber fluctuations to large-scale dynamics. Although diffusion-based generative models offer a principled probabilistic forecasting framework, we show that standard DDPMs induce a fundamental \emph{spectral collapse}: a Fourier-space analysis of the forward SDE reveals a closed-form, mode-wise signal-to-noise ratio (SNR) that decays monotonically in wavenumber, $|k|$ for spectra $S(k)\!\propto\!|k|^{-\lambda}$, rendering high-wavenumber modes indistinguishable from noise and producing an intrinsic spectral bias. We reinterpret the noise schedule as a spectral regularizer and introduce power-law schedules $\beta(\tau)\!\propto\!\tau^\gamma$ that preserve fine-scale structure deeper into diffusion time, along with \emph{Lazy Diffusion}, a one-step distillation method that leverages the learned score geometry to bypass long reverse-time trajectories and prevent high-$k$ degradation. Applied to high-Reynolds-number 2D Kolmogorov turbulence and $1/12^\circ$ Gulf of Mexico ocean reanalysis, these methods resolve spectral collapse, stabilize long-horizon autoregression, and restore physically realistic inertial-range scaling. Together, they show that na\"ive Gaussian scheduling is structurally incompatible with power-law physics and that physics-aware diffusion processes can yield accurate, efficient, and fully probabilistic surrogates for multiscale dynamical systems.
- oai:arXiv.org:2512.09572v1
- physics.flu-dyn
- cs.AI
- math.DS
- nlin.CD
- physics.ao-ph
- Thu, 11 Dec 2025 00:00:00 -0500
+ Exploring Perceptual Audio Quality Measurement on Stereo Processing Using the Open Dataset of Audio Quality
+ https://arxiv.org/abs/2512.10689
+ arXiv:2512.10689v1 Announce Type: cross
+Abstract: ODAQ (Open Dataset of Audio Quality) provides a comprehensive framework for exploring both monaural and binaural audio quality degradations across a range of distortion classes and signals, accompanied by subjective quality ratings. A recent update of ODAQ, focusing on the impact of stereo processing methods such as Mid/Side (MS) and Left/Right (LR), provides test signals and subjective ratings for the in-depth investigation of state-of-the-art objective audio quality metrics. Our evaluation results suggest that, while timbre-focused metrics often yield robust results under simpler conditions, their prediction performance tends to suffer under the conditions with a more complex presentation context. Our findings underscore the importance of modeling the interplay of bottom-up psychoacoustic processes and top-down contextual factors, guiding future research toward models that more effectively integrate both timbral and spatial dimensions of perceived audio quality.
+ oai:arXiv.org:2512.10689v1
+ eess.AS
+ cs.SD
+ Fri, 12 Dec 2025 00:00:00 -0500cross
- http://creativecommons.org/licenses/by/4.0/
- Anish Sambamurthy, Ashesh Chattopadhyay
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pablo M. Delgado, Sascha Dick, Christoph Thompson, Chih-Wei Wu, Phillip A. Williams
+
+
+ MULE - A Co-Generation Fission Power Plant Concept to Support Lunar In-Situ Resource Utilisation
+ https://arxiv.org/abs/2512.10705
+ arXiv:2512.10705v1 Announce Type: cross
+Abstract: For a sustained human presence on the Moon, robust in-situ resource utilisation supply chains to provide consumables and propellant are necessary. A promising process is molten salt electrolysis, which typically requires temperatures in excess of 900{\deg}C. Fission reactors do not depend on solar irradiance and are thus well suited for power generation on the Moon, especially during the 14-day lunar night. As of now, fission reactors have only been considered for electric power generation, but the reactor coolant could also be used directly to heat those processes to their required temperatures. In this work, a concept for a co-generation fission power plant on the Moon that can directly heat a MSE plant to the required temperatures and provide a surplus of electrical energy for the lunar base is presented. The neutron transport code Serpent 2 is used to model a ceramic core, gas-cooled very-high-temperature microreactor design and estimate its lifetime with a burnup simulation in hot conditions with an integrated step-wise criticality search. Calculations show a neutronically feasible operation time of at least 10 years at 100kW thermal power. The obtained power distributions lay a basis for further thermal-hydraulic studies on the technical feasibility of the reactor design and the power plant.
+ oai:arXiv.org:2512.10705v1
+ physics.comp-ph
+ cs.CE
+ Fri, 12 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Julius Mercz, Philipp Reiss, Christian Reiter
- Graph-Based Bayesian Optimization for Quantum Circuit Architecture Search with Uncertainty Calibrated Surrogates
- https://arxiv.org/abs/2512.09586
- arXiv:2512.09586v1 Announce Type: cross
-Abstract: Quantum circuit design is a key bottleneck for practical quantum machine learning on complex, real-world data. We present an automated framework that discovers and refines variational quantum circuits (VQCs) using graph-based Bayesian optimization with a graph neural network (GNN) surrogate. Circuits are represented as graphs and mutated and selected via an expected improvement acquisition function informed by surrogate uncertainty with Monte Carlo dropout. Candidate circuits are evaluated with a hybrid quantum-classical variational classifier on the next generation firewall telemetry and network internet of things (NF-ToN-IoT-V2) cybersecurity dataset, after feature selection and scaling for quantum embedding. We benchmark our pipeline against an MLP-based surrogate, random search, and greedy GNN selection. The GNN-guided optimizer consistently finds circuits with lower complexity and competitive or superior classification accuracy compared to all baselines. Robustness is assessed via a noise study across standard quantum noise channels, including amplitude damping, phase damping, thermal relaxation, depolarizing, and readout bit flip noise. The implementation is fully reproducible, with time benchmarking and export of best found circuits, providing a scalable and interpretable route to automated quantum circuit discovery.
- oai:arXiv.org:2512.09586v1
+ Saturation-Based Atom Provenance Tracing in Chemical Reaction Networks
+ https://arxiv.org/abs/2512.10708
+ arXiv:2512.10708v1 Announce Type: cross
+Abstract: Atom tracing is essential for understanding the fate of labeled atoms in biochemical reaction networks, yet existing computational methods either simplify label correlations or suffer from combinatorial explosion. We introduce a saturation-based framework for enumerating labeling patterns that directly operates on atom-atom maps without requiring flux data or experimental measurements. The approach models reaction semantics using Kleisli morphisms in the powerset monad, allowing for compositional propagation of atom provenance through reaction networks. By iteratively saturating all possible educt combinations of reaction rules, the method exhaustively enumerates labeled molecular configurations, including multiplicities and reuse. Allowing arbitrary initial labeling patterns - including identical or distinct labels - the method expands only isotopomers reachable from these inputs, keeping the configuration space as small as necessary and avoids the full combinatorial growth characteristic of previous approaches. In principle, even every atom could carry a distinct identifier (e.g., tracing all carbon atoms individually), illustrating the generality of the framework beyond practical experimental limitations. The resulting template instance hypergraph captures the complete flow of atoms between compounds and supports projections tailored to experimental targets. Customizable labeling sets significantly reduce generated network sizes, providing efficient and exact atom traces focused on specific compounds or available isotopes. Applications to the tricarboxylic acid cycle, and glycolytic pathways demonstrate that the method fully automatically reproduces known labeling patterns and discovers steady-state labeling behavior. The framework offers a scalable, mechanistically transparent, and generalizable foundation for isotopomer modeling and experiment design.
+ oai:arXiv.org:2512.10708v1
+ q-bio.MN
+ cs.DM
+ Fri, 12 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Marcel Friedrichs, Daniel Merkle
+
+
+ Further Statistical Study of NISQ Experiments
+ https://arxiv.org/abs/2512.10722
+ arXiv:2512.10722v1 Announce Type: cross
+Abstract: We revisit and extend some topics that we studied in our previous works (Rinott, Kalai and Shoham 2022; Kalai, Rinott and Shoham, 2023,2024) regarding the Google 2019 "quantum supremacy" experiment. We extend our analysis of the prediction based on Google's digital error model (Formula (77)), based on more detailed data provided by Google. We also provide some preliminary analysis for a few other NISQ experiments.
+ oai:arXiv.org:2512.10722v1quant-ph
- cs.AI
- cs.LG
- cs.NE
- cs.NI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CC
+ stat.AP
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://creativecommons.org/licenses/by/4.0/
- Prashant Kumar Choudhary, Nouhaila Innan, Muhammad Shafique, Rajeev Singh
+ Gil Kalai, Tomer Shoham, Carsten Voelkmann
- The Ky Fan Norms and Beyond: Dual Norms and Combinations for Matrix Optimization
- https://arxiv.org/abs/2512.09678
- arXiv:2512.09678v1 Announce Type: cross
-Abstract: In this article, we explore the use of various matrix norms for optimizing functions of weight matrices, a crucial problem in training large language models. Moving beyond the spectral norm underlying the Muon update, we leverage duals of the Ky Fan $k$-norms to introduce a family of Muon-like algorithms we name Fanions, which are closely related to Dion. By working with duals of convex combinations of the Ky Fan $k$-norms with either the Frobenius norm or the $l_\infty$ norm, we construct the families of F-Fanions and S-Fanions, respectively. Their most prominent members are F-Muon and S-Muon. We complement our theoretical analysis with an extensive empirical study of these algorithms across a wide range of tasks and settings, demonstrating that F-Muon and S-Muon consistently match Muon's performance, while outperforming vanilla Muon on a synthetic linear least squares problem.
- oai:arXiv.org:2512.09678v1
- math.OC
- cs.AI
+ PMB-NN: Physiology-Centred Hybrid AI for Personalized Hemodynamic Monitoring from Photoplethysmography
+ https://arxiv.org/abs/2512.10745
+ arXiv:2512.10745v1 Announce Type: cross
+Abstract: Continuous monitoring of blood pressure (BP) and hemodynamic parameters such as peripheral resistance (R) and arterial compliance (C) are critical for early vascular dysfunction detection. While photoplethysmography (PPG) wearables has gained popularity, existing data-driven methods for BP estimation lack interpretability. We advanced our previously proposed physiology-centered hybrid AI method-Physiological Model-Based Neural Network (PMB-NN)-in blood pressure estimation, that unifies deep learning with a 2-element Windkessel based model parameterized by R and C acting as physics constraints. The PMB-NN model was trained in a subject-specific manner using PPG-derived timing features, while demographic information was used to infer an intermediate variable: cardiac output. We validated the model on 10 healthy adults performing static and cycling activities across two days for model's day-to-day robustness, benchmarked against deep learning (DL) models (FCNN, CNN-LSTM, Transformer) and standalone Windkessel based physiological model (PM). Validation was conducted on three perspectives: accuracy, interpretability and plausibility. PMB-NN achieved systolic BP accuracy (MAE: 7.2 mmHg) comparable to DL benchmarks, diastolic performance (MAE: 3.9 mmHg) lower than DL models. However, PMB-NN exhibited higher physiological plausibility than both DL baselines and PM, suggesting that the hybrid architecture unifies and enhances the respective merits of physiological principles and data-driven techniques. Beyond BP, PMB-NN identified R (ME: 0.15 mmHg$\cdot$s/ml) and C (ME: -0.35 ml/mmHg) during training with accuracy similar to PM, demonstrating that the embedded physiological constraints confer interpretability to the hybrid AI framework. These results position PMB-NN as a balanced, physiologically grounded alternative to purely data-driven approaches for daily hemodynamic monitoring.
+ oai:arXiv.org:2512.10745v1
+ physics.med-phcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500cross
- http://creativecommons.org/licenses/by/4.0/
- Alexey Kravatskiy, Ivan Kozyrev, Nikolai Kozlov, Alexander Vinogradov, Daniil Merkulov, Ivan Oseledets
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Yaowen Zhang, Libera Fresiello, Peter H. Veltink, Dirk W. Donker, Ying Wang
- The non-existence of some Moore polygons and spectral Moore bounds
- https://arxiv.org/abs/2512.09680
- arXiv:2512.09680v1 Announce Type: cross
-Abstract: In this paper, we study the maximum order $v(k,\theta)$ of a connected $k$-regular graph whose second largest eigenvalue is at most $\theta$. From Alon-Boppana and Serre, we know that $v(k,\theta)$ is finite when $\theta < 2\sqrt{k-1}$ while the work of Marcus, Spielman, and Srivastava implies that $v(k,\theta)$ is infinite if $\theta\geq 2\sqrt{k-1}$. Cioab\u{a}, Koolen, Nozaki, and Vermette obtained a general upper bound on $v(k, \theta)$ via Nozaki's linear programming bound and determined many values of $v(k,\theta)$. The graphs attaining this bound are distance-regular and are called Moore polygons. Damerell and Georgiacodis proved that there are no Moore polygons of diameter $6$ or more. For smaller diameters, there are infinitely many Moore polygons.
- We complement these results by proving two nonexistence results for Moore polygons with specific parameters. We also determine new values of $v(k,\theta)$: $v(4, \sqrt{2}) = 14$ and $v(5, \sqrt{2}) = v(5,\sqrt{5}-1)=16$. The former is achieved by the co-Heawood graph, and the latter by the folded $5$-cube. We verify that any connected $5$-regular graph with second eigenvalue $\lambda_2$ exceeding $1$ satisfies $\lambda_2 \geq \sqrt{5} - 1$, and that the unique $5$-regular graph attaining equality in this bound has $10$ vertices. We prove a stronger form of a 2015 conjecture of Kolokolnikov related to the second eigenvalue of cubic graphs of given order, and observe that other recent results on the second eigenvalue of regular graphs are consequences of the general upper bound theorem on $v(k,\theta)$ mentioned above.
- oai:arXiv.org:2512.09680v1
- math.CO
- cs.DM
- Thu, 11 Dec 2025 00:00:00 -0500
+ Opportunities and Challenges in Harnessing Digital Technology for Effective Teaching and Learning
+ https://arxiv.org/abs/2512.10777
+ arXiv:2512.10777v1 Announce Type: cross
+Abstract: Most of today's educators are in no shortage of digital and online learning technologies available at their fingertips, ranging from Learning Management Systems such as Canvas, Blackboard, or Moodle, online meeting tools, online homework, and tutoring systems, exam proctoring platforms, computer simulations, and even virtual reality/augmented reality technologies. Furthermore, with the rapid development and wide availability of generative artificial intelligence (GenAI) services such as ChatGPT, we are just at the beginning of harnessing their potential to transform higher education. Yet, facing the large number of available options provided by cutting-edge technology, an imminent question on the mind of most educators is the following: how should I choose the technologies and integrate them into my teaching process so that they would best support student learning? We contemplate over these types of important and timely questions and share our reflections on evidence-based approaches to harnessing digital learning tools using a Self-regulated Engaged Learning Framework we have employed in our research in physics education that can be valuable for educators in other disciplines.
+ oai:arXiv.org:2512.10777v1
+ physics.ed-ph
+ cs.HC
+ Fri, 12 Dec 2025 00:00:00 -0500cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sebastian M. Cioab\u{a}, Vishal Gupta, Hiroshi Nozaki, Ziqing Xiang
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ 10.3390/higheredu4010006
+ Chen, Z.; Singh, C. Opportunities and Challenges in Harnessing Digital Technology for Effective Teaching and Learning. Trends High. Educ. 2025, 4, 6
+ Zhongzhou Chen, Chandralekha Singh
- Device Independent Quantum Secret Sharing Using Multiparty Pseudo-telepathy Game
- https://arxiv.org/abs/2512.09699
- arXiv:2512.09699v1 Announce Type: cross
-Abstract: Device-independent quantum secret sharing (DI-QSS) is a cryptographic protocol that overcomes the security limitations posed by untrusted quantum devices. We propose a DI-QSS protocol based on the multipartite pseudo-telepathy parity game, which achieves device-independence with simultaneous key generation without requiring dedicated test rounds, unlike CHSH-based schemes [Zhang et al., Phys. Rev. A, 2024]. Notably, the proposed scheme allows simultaneous device-independence verification and key-generation phases, achieving optimal performance for a seven-qubit GHZ state configuration. Further, we analyse the security of our protocol against collective attack and establish reduced resource requirement for the same length of the raw key compared to the previous protocol. Finally, we show that our protocol remains robust even in a noisy environment.
- oai:arXiv.org:2512.09699v1
- quant-ph
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Developing and Evaluating a Large Language Model-Based Automated Feedback System Grounded in Evidence-Centered Design for Supporting Physics Problem Solving
+ https://arxiv.org/abs/2512.10785
+ arXiv:2512.10785v1 Announce Type: cross
+Abstract: Generative AI offers new opportunities for individualized and adaptive learning, particularly through large language model (LLM)-based feedback systems. While LLMs can produce effective feedback for relatively straightforward conceptual tasks, delivering high-quality feedback for tasks that require advanced domain expertise, such as physics problem solving, remains a substantial challenge. This study presents the design of an LLM-based feedback system for physics problem solving grounded in evidence-centered design (ECD) and evaluates its performance within the German Physics Olympiad. Participants assessed the usefulness and accuracy of the generated feedback, which was generally perceived as useful and highly accurate. However, an in-depth analysis revealed that the feedback contained factual errors in 20% of cases; errors that often went unnoticed by the students. We discuss the risks associated with uncritical reliance on LLM-based feedback systems and outline potential directions for generating more adaptive and reliable LLM-based feedback in the future.
+ oai:arXiv.org:2512.10785v1
+ physics.ed-ph
+ cs.AI
+ cs.HC
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Santanu Majhi, Goutam Paul
+ Holger Maus, Paul Tschisgale, Fabian Kieser, Stefan Petersen, Peter Wulff
- Computer-Assisted Search for Differential Equations Corresponding to Optimization Methods and Their Convergence Rates
- https://arxiv.org/abs/2512.09712
- arXiv:2512.09712v1 Announce Type: cross
-Abstract: Let $f:\mathbb{R}^n \to \mathbb{R}$ be a continuously differentiable convex function with its minimizer denoted by $x_*$ and optimal value $f_* = f(x_*)$. Optimization algorithms such as the gradient descent method can often be interpreted in the continuous-time limit as differential equations known as continuous dynamical systems. Analyzing the convergence rate of $f(x) - f_*$ in such systems often relies on constructing appropriate Lyapunov functions. However, these Lyapunov functions have been designed through heuristic reasoning rather than a systematic framework. Several studies have addressed this issue. In particular, Suh, Roh, and Ryu (2022) proposed a constructive approach that involves introducing dilated coordinates and applying integration by parts. Although this method significantly improves the process of designing Lyapunov functions, it still involves arbitrary choices among many possible options, and thus retains a heuristic nature in identifying Lyapunov functions that yield the best convergence rates. In this study, we propose a systematic framework for exploring these choices computationally. More precisely, we propose a brute-force approach using symbolic computation by computer algebra systems to explore every possibility. By formulating the design of Lyapunov functions for continuous dynamical systems as an optimization problem, we aim to optimize the Lyapunov function itself. As a result, our framework successfully reproduces many previously reported results and, in several cases, discovers new convergence rates that have not been shown in the existing studies.
- oai:arXiv.org:2512.09712v1
- math.OC
- cs.NA
- math.CA
- math.DS
- math.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Data-driven Pressure Recovery in Diffusers
+ https://arxiv.org/abs/2512.10801
+ arXiv:2512.10801v1 Announce Type: cross
+Abstract: This paper investigates the application of a data-driven technique based on retrospective cost optimization to optimize the frequency of mass injection into an S-shaped diffuser, with the objective of maximizing the pressure recovery. Experimental data indicated that there is an optimal injection frequency between 100 Hz and 300 Hz with a mass flow rate of 1 percent of the free stream. High-fidelity numerical simulations using compressible unsteady Reynolds-Averaged Navier-Stokes (URANS) are conducted to investigate the mean and temporal features resulting from mass injection into an S-shaped diffuser with differing injection speeds and pulse frequencies. The results are compared with experiments to confirm the accuracy of the numerical solution. Overall, 2-D simulations are relatively in good agreement with the experiment, with 3-D simulations currently under investigation to benchmark the effect of spanwise instabilities. Simulation results with the proposed data-driven technique show improvements upon a baseline case by increasing pressure recovery and reducing the region of flow recirculation within the diffuser.
+ oai:arXiv.org:2512.10801v1
+ physics.flu-dyn
+ cs.SY
+ eess.SY
+ Fri, 12 Dec 2025 00:00:00 -0500cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Atsushi Tabei, Ken'ichiro Tanaka
+ http://creativecommons.org/licenses/by/4.0/
+ Juan Augusto Paredes Salazar, Ankit Goel, Rowen Costich, Meliksah Koca, Ozgur Tumuklu, Michael Amitay
- Flexible Reconfigurable Intelligent Surface-Aided Covert Communications in UAV Networks
- https://arxiv.org/abs/2512.09714
- arXiv:2512.09714v1 Announce Type: cross
-Abstract: In recent years, unmanned aerial vehicles (UAVs) have become a key role in wireless communication networks due to their flexibility and dynamic adaptability. However, the openness of UAV-based communications leads to security and privacy concerns in wireless transmissions. This paper investigates a framework of UAV covert communications which introduces flexible reconfigurable intelligent surfaces (F-RIS) in UAV networks. Unlike traditional RIS, F-RIS provides advanced deployment flexibility by conforming to curved surfaces and dynamically reconfiguring its electromagnetic properties to enhance the covert communication performance. We establish an electromagnetic model for F-RIS and further develop a fitted model that describes the relationship between F-RIS reflection amplitude, reflection phase, and incident angle. To maximize the covert transmission rate among UAVs while meeting the covert constraint and public transmission constraint, we introduce a strategy of jointly optimizing UAV trajectories, F-RIS reflection vectors, F-RIS incident angles, and non-orthogonal multiple access (NOMA) power allocation. Considering this is a complicated non-convex optimization problem, we propose a deep reinforcement learning (DRL) algorithm-based optimization solution. Simulation results demonstrate that our proposed framework and optimization method significantly outperform traditional benchmarks, and highlight the advantages of F-RIS in enhancing covert communication performance within UAV networks.
- oai:arXiv.org:2512.09714v1
+ CSI-Based User Positioning, Channel Charting, and Device Classification with an NVIDIA 5G Testbed
+ https://arxiv.org/abs/2512.10809
+ arXiv:2512.10809v1 Announce Type: cross
+Abstract: Channel-state information (CSI)-based sensing will play a key role in future cellular systems. However, no CSI dataset has been published from a real-world 5G NR system that facilitates the development and validation of suitable sensing algorithms. To close this gap, we publish three real-world wideband multi-antenna multi-open RAN radio unit (O-RU) CSI datasets from the 5G NR uplink channel: an indoor lab/office room dataset, an outdoor campus courtyard dataset, and a device classification dataset with six commercial-off-the-shelf (COTS) user equipments (UEs). These datasets have been recorded using a software-defined 5G NR testbed based on NVIDIA Aerial RAN CoLab Over-the-Air (ARC-OTA) with COTS hardware, which we have deployed at ETH Zurich. We demonstrate the utility of these datasets for three CSI-based sensing tasks: neural UE positioning, channel charting in real-world coordinates, and closed-set device classification. For all these tasks, our results show high accuracy: neural UE positioning achieves 0.6cm (indoor) and 5.7cm (outdoor) mean absolute error, channel charting in real-world coordinates achieves 73cm mean absolute error (outdoor), and device classification achieves 99% (same day) and 95% (next day) accuracy. The CSI datasets, ground-truth UE position labels, CSI features, and simulation code are publicly available at https://caez.ethz.ch
+ oai:arXiv.org:2512.10809v1eess.SPcs.ITmath.IT
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chong Huang, Gaojie Chen, Zhuoao Xu, Jing Zhu, Taisong Pan, Rahim Tafazolli, Wei Huang
+ Reinhard Wiesmayr, Frederik Zumegen, Sueda Taner, Chris Dick, Christoph Studer
- On Parameter Identification in Three-Dimensional Elasticity and Discretisation with Physics-Informed Neural Networks
- https://arxiv.org/abs/2512.09754
- arXiv:2512.09754v1 Announce Type: cross
-Abstract: Physics-informed neural networks have emerged as a powerful tool in the scientific machine learning community, with applications to both forward and inverse problems. While they have shown considerable empirical success, significant challenges remain -- particularly regarding training stability and the lack of rigorous theoretical guarantees, especially when compared to classical mesh-based methods. In this work, we focus on the inverse problem of identifying a spatially varying parameter in a constitutive model of three-dimensional elasticity, using measurements of the system's state. This setting is especially relevant for non-invasive diagnosis in cardiac biomechanics, where one must also carefully account for the type of boundary data available. To address this inverse problem, we adopt an all-at-once optimisation framework, simultaneously estimating the state and parameter through a least-squares loss that encodes both available data and the governing physics. For this formulation, we prove stability estimates ensuring that our approach yields a stable approximation of the underlying ground-truth parameter of the physical system independent of a specific discretisation. We then proceed with a neural network-based discretisation and compare it to traditional mesh-based approaches. Our theoretical findings are complemented by illustrative numerical examples.
- oai:arXiv.org:2512.09754v1
- math.OC
- cs.NA
- math.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Quantum Approaches to Urban Logistics: From Core QAOA to Clustered Scalability
+ https://arxiv.org/abs/2512.10813
+ arXiv:2512.10813v1 Announce Type: cross
+Abstract: The Traveling Salesman Problem (TSP) is a fundamental challenge in combinatorial optimization, widely applied in logistics and transportation. As the size of TSP instances grows, traditional algorithms often struggle to produce high-quality solutions within reasonable timeframes. This study investigates the potential of the Quantum Approximate Optimization Algorithm (QAOA), a hybrid quantum-classical method, to solve TSP under realistic constraints. We adopt a QUBO-based formulation of TSP that integrates real-world logistical constraints reflecting operational conditions, such as vehicle capacity, road accessibility, and time windows, while ensuring compatibility with the limitations of current quantum hardware. Our experiments are conducted in a simulated environment using high-performance computing (HPC) resources to assess QAOA's performance across different problem sizes and quantum circuit depths. In order to improve scalability, we propose clustering QAOA (Cl-QAOA), a hybrid approach combining classical machine learning with QAOA. This method decomposes large TSP instances into smaller sub-problems, making quantum optimization feasible even on devices with a limited number of qubits. The results offer a comprehensive evaluation of QAOA's strengths and limitations in solving constrained TSP scenarios. This study advances quantum optimization and lays groundwork for future large-scale applications.
+ oai:arXiv.org:2512.10813v1
+ quant-ph
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Federica Caforio, Martin Holler, Matthias H\"ofler
+ F. Picariello, G. Turati, R. Antonelli, I. Bailo, S. Bonura, G. Ciarfaglia, S. Cipolla, P. Cremonesi, M. Ferrari Dacrema, M. Gabusi, I. Gentile, V. Morreale, A. Noto
- Optimal certification of constant-local Hamiltonians
- https://arxiv.org/abs/2512.09778
- arXiv:2512.09778v1 Announce Type: cross
-Abstract: We study the problem of certifying local Hamiltonians from real-time access to their dynamics. Given oracle access to $e^{-itH}$ for an unknown $k$-local Hamiltonian $H$ and a fully specified target Hamiltonian $H_0$, the goal is to decide whether $H$ is exactly equal to $H_0$ or differs from $H_0$ by at least $\varepsilon$ in normalized Frobenius norm, while minimizing the total evolution time. We introduce the first intolerant Hamiltonian certification protocol that achieves optimal performance for all constant-locality Hamiltonians. For general $n$-qubit, $k$-local, traceless Hamiltonians, our procedure uses $O(c^k/\varepsilon)$ total evolution time for a universal constant $c$, and succeeds with high probability. In particular, for $O(1)$-local Hamiltonians, the total evolution time becomes $\Theta(1/\varepsilon)$, matching the known $\Omega(1/\varepsilon)$ lower bounds and achieving the gold-standard Heisenberg-limit scaling. Prior certification methods either relied on implementing inverse evolution of $H$, required controlled access to $e^{-itH}$, or achieved near-optimal guarantees only in restricted settings such as the Ising case ($k=2$). In contrast, our algorithm requires neither inverse evolution nor controlled operations: it uses only forward real-time dynamics and achieves optimal intolerant certification for all constant-locality Hamiltonians.
- oai:arXiv.org:2512.09778v1
- quant-ph
- cs.CC
- cs.DS
- cs.IT
+ Deep sets and event-level maximum-likelihood estimation for fast pile-up jet rejection in ATLAS
+ https://arxiv.org/abs/2512.10819
+ arXiv:2512.10819v1 Announce Type: cross
+Abstract: Multiple proton-proton collisions (pile-up) occur at every bunch crossing at the LHC, with the mean number of interactions expected to reach 80 during Run 3 and up to 200 at the High-Luminosity LHC. As a direct consequence, events with multijet signatures will occur at increasingly high rates. To cope with the increased luminosity, being able to efficiently group jets according to their origin along the beamline is crucial, particularly at the trigger level. In this work, a novel uncertainty-aware jet regression model based on a Deep Sets architecture is introduced, DIPz, to regress on a jet origin position along the beamline. The inputs to the DIPz algorithm are the charged particle tracks associated to each jet. An event-level discriminant, the Maximum Log Product of Likelihoods (MLPL), is constructed by combining the DIPz per-jet predictions. MLPL is cut-optimized to select events compatible with targeted multi-jet signature selection. This combined approach provides a robust and computationally efficient method for pile-up rejection in multi-jet final states, applicable to real-time event selections at the ATLAS High Level Trigger.
+ oai:arXiv.org:2512.10819v1
+ hep-excs.LG
- math.IT
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Junseo Lee, Myeongjin Shin
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Mohammed Aboelela
- PathCo-LatticE: Pathology-Constrained Lattice-Of Experts Framework for Fully-supervised Few-Shot Cardiac MRI Segmentation
- https://arxiv.org/abs/2512.09779
- arXiv:2512.09779v1 Announce Type: cross
-Abstract: Few-shot learning (FSL) mitigates data scarcity in cardiac MRI segmentation but typically relies on semi-supervised techniques sensitive to domain shifts and validation bias, restricting zero-shot generalizability. We propose PathCo-LatticE, a fully supervised FSL framework that replaces unlabeled data with pathology-guided synthetic supervision. First, our Virtual Patient Engine models continuous latent disease trajectories from sparse clinical anchors, using generative modeling to synthesize physiologically plausible, fully labeled 3D cohorts. Second, Self-Reinforcing Interleaved Validation (SIV) provides a leakage-free protocol that evaluates models online with progressively challenging synthetic samples, eliminating the need for real validation data. Finally, a dynamic Lattice-of-Experts (LoE) organizes specialized networks within a pathology-aware topology and activates the most relevant experts per input, enabling robust zero-shot generalization to unseen data without target-domain fine-tuning. We evaluated PathCo-LatticE in a strict out-of-distribution (OOD) setting, deriving all anchors and severity statistics from a single-source domain (ACDC) and performing zero-shot testing on the multi-center, multi-vendor M&Ms dataset. PathCo-LatticE outperforms four state-of-the-art FSL methods by 4.2-11% Dice starting from only 7 labeled anchors, and approaches fully supervised performance (within 1% Dice) with only 19 labeled anchors. The method shows superior harmonization across four vendors and generalization to unseen pathologies. [Code will be made publicly available].
- oai:arXiv.org:2512.09779v1
- eess.IV
- cs.AI
- cs.CV
+ An Elementary Proof of the Near Optimality of LogSumExp Smoothing
+ https://arxiv.org/abs/2512.10825
+ arXiv:2512.10825v1 Announce Type: cross
+Abstract: We consider the design of smoothings of the (coordinate-wise) max function in $\mathbb{R}^d$ in the infinity norm. The LogSumExp function $f(x)=\ln(\sum^d_i\exp(x_i))$ provides a classical smoothing, differing from the max function in value by at most $\ln(d)$. We provide an elementary construction of a lower bound, establishing that every overestimating smoothing of the max function must differ by at least $\sim 0.8145\ln(d)$. Hence, LogSumExp is optimal up to constant factors. However, in small dimensions, we provide stronger, exactly optimal smoothings attaining our lower bound, showing that the entropy-based LogSumExp approach to smoothing is not exactly optimal.
+ oai:arXiv.org:2512.10825v1
+ math.STcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ math.OC
+ stat.TH
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mohamed Elbayumi, Mohammed S. M. Elbaz
+ Thabo Samakhoana, Benjamin Grimmer
- Pinball: A Cryogenic Predecoder for Quantum Error Correction Decoding Under Circuit-Level Noise
- https://arxiv.org/abs/2512.09807
- arXiv:2512.09807v1 Announce Type: cross
-Abstract: Scaling fault tolerant quantum computers, especially cryogenic systems, to millions of qubits is challenging due to poorly-scaling data processing and power consumption overheads. One key challenge is the design of decoders for real-time quantum error correction (QEC), which demands high data rates for error processing; this is particularly apparent in systems with cryogenic qubits and room temperature (RT) decoders. In response, cryogenic predecoding using lightweight logic has been proposed to handle common, sparse errors in the cryogenic domain. However, prior work only accounts for a subset of error sources present in real-world quantum systems with limited accuracy, often degrading performance below a useful level in practical scenarios. Furthermore, prior reliance on SFQ logic precludes detailed architecture-technology co-optimization.
- To address these shortcomings, this paper introduces Pinball, a comprehensive design in cryogenic CMOS of a QEC predecoder tailored to realistic, circuit-level noise. By accounting for error generation and propagation through QEC circuits, our design achieves higher predecoding accuracy, outperforming logical error rates (LER) of the current state-of-the-art cryogenic predecoder by nearly six orders of magnitude. Remarkably, despite operating under much stricter power and area constraints, Pinball also reduces LER by 32.58x and 5x, respectively, compared to the state-of-the-art RT predecoder and RT ensemble configurations. By increasing cryogenic coverage, we also reduce syndrome bandwidth up to 3780.72x. Through co-design with 4 K-characterized 22 nm FDSOI technology, we achieve a peak power consumption under 0.56 mW. Voltage/frequency scaling and body biasing enable 22.2x lower typical power consumption, yielding up to 67.4x total energy savings. Assuming a 4 K power budget of 1.5 W, our predecoder supports up to 2,668 logical qubits at d=21.
- oai:arXiv.org:2512.09807v1
- quant-ph
- cs.AR
- cs.ET
- Thu, 11 Dec 2025 00:00:00 -0500
+ Indirect methods in optimal control on Banach spaces
+ https://arxiv.org/abs/2512.10831
+ arXiv:2512.10831v1 Announce Type: cross
+Abstract: This work focuses on indirect descent methods for optimal control problems governed by nonlinear ordinary differential equations in Banach spaces, viewed as abstract models of distributed dynamics. As a reference line, we revisit the classical schemes, rooted in Pontryagin's maximum principle, and highlight their sensitivity to local convexity and lack of monotone convergence. We then develop an alternative method based on exact cost-increment formulas and finite-difference probes of the terminal cost. We show that our method exhibits stable monotone convergence in numerical analysis of an Amari-type neural field control problem.
+ oai:arXiv.org:2512.10831v1
+ math.OC
+ cs.NA
+ math.NA
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Alexander Knapen, Guanchen Tao, Jacob Mack, Tomas Bruno, Mehdi Saligane, Dennis Sylvester, Qirui Zhang, Gokul Subramanian Ravi
+ Roman Chertovskih, Nikolay Pogodaev, Maxim Staritsyn, A. Pedro Aguiar
- Dichotomy results for classes of countable graphs
- https://arxiv.org/abs/2512.09832
- arXiv:2512.09832v1 Announce Type: cross
-Abstract: We study classes of countable graphs where every member does not contain a given finite graph as an induced subgraph -- denoted by $\mathsf{Free}(\mathcal{G})$ for a given finite graph $\mathcal{G}$. Our main results establish a structural dichotomy for such classes: If $\mathcal{G}$ is not an induced subgraph of $\mathcal{P}_4$, then $\mathsf{Free}(\mathcal{G})$ is on top under effective bi-interpretability, implying that the members of $\mathsf{Free}(\mathcal{G})$ exhibit the full range of structural and computational behaviors. In contrast, if $\mathcal{G}$ is an induced subgraph of $\mathcal{P}_4$, then $\mathsf{Free}(\mathcal{G})$ is structurally simple, as witnessed by the fact that every member satisfies the computable embeddability condition. This dichotomy is mirrored in the finite setting when one considers combinatorial and complexity-theoretic properties. Specifically, it is known that $\mathsf{Free}(\mathcal{G})^{fin}$ is complete for graph isomorphism and not a well-quasi-order under embeddability whenever $\mathcal{G}$ is not an induced subgraph of $\mathcal{P}_4$, while in all other cases $\mathsf{Free}(\mathcal{G})^{fin}$ forms a well-quasi-order and the isomorphism problem for $\mathsf{Free}(\mathcal{G})^{fin}$ is solvable in polynomial time.
- oai:arXiv.org:2512.09832v1
- math.LO
- cs.CC
- Thu, 11 Dec 2025 00:00:00 -0500
+ The Localization Method for High-Dimensional Inequalities
+ https://arxiv.org/abs/2512.10848
+ arXiv:2512.10848v1 Announce Type: cross
+Abstract: We survey the localization method for proving inequalities in high dimension, pioneered by Lov\'asz and Simonovits (1993), and its stochastic extension developed by Eldan (2012). The method has found applications in a surprising wide variety of settings, ranging from its original motivation in isoperimetric inequalities to optimization, concentration of measure, and bounding the mixing rate of Markov chains. At heart, the method converts a given instance of an inequality (for a set or distribution in high dimension) into a highly structured instance, often just one-dimensional.
+ oai:arXiv.org:2512.10848v1
+ math.PR
+ cs.DS
+ math.FA
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://creativecommons.org/licenses/by/4.0/
- Vittorio Cipriani, Ekaterina Fokina, Matthew Harrison-Trainor, Liling Ko, Dino Rossegger
+ Yunbum Kook, Santosh S. Vempala
- Colouring Graphs Without a Subdivided H-Graph: A Full Complexity Classification
- https://arxiv.org/abs/2512.09859
- arXiv:2512.09859v1 Announce Type: cross
-Abstract: We consider Colouring on graphs that are $H$-subgraph-free for some fixed graph $H$, i.e., graphs that do not contain $H$ as a subgraph. It is known that even $3$-Colouring is NP-complete for $H$-subgraph-free graphs whenever $H$ has a cycle; or a vertex of degree at least $5$; or a component with two vertices of degree $4$, while Colouring is polynomial-time solvable for $H$-subgraph-free graphs if $H$ is a forest of maximum degree at most $3$, in which each component has at most one vertex of degree $3$. For connected graphs $H$, this means that it remains to consider when $H$ is tree of maximum degree $4$ with exactly one vertex of degree $4$, or a tree of maximum degree $3$ with at least two vertices of degree $3$. We let $H$ be a so-called subdivided "H"-graph, which is either a subdivided $\mathbb{H}_0$: a tree of maximum degree $4$ with exactly one vertex of degree $4$ and no vertices of degree $3$, or a subdivided $\mathbb{H}_1$: a tree of maximum degree $3$ with exactly two vertices of degree $3$. In the literature, only a limited number of polynomial-time and NP-completeness results for these cases are known. We develop new polynomial-time techniques that allow us to determine the complexity of Colouring on $H$-subgraph-free graphs for all the remaining subdivided "H"-graphs, so we fully classify both cases. As a consequence, the complexity of Colouring on $H$-subgraph-free graphs has now been settled for all connected graphs $H$ except when $H$ is a tree of maximum degree $4$ with exactly one vertex of degree $4$ and at least one vertex of degree $3$; or a tree of maximum degree $3$ with at least three vertices of degree $3$. We also employ our new techniques to obtain the same new polynomial-time results for another classic graph problem, namely Stable Cut.
- oai:arXiv.org:2512.09859v1
- math.CO
- cs.CC
- cs.DM
- cs.DS
- Thu, 11 Dec 2025 00:00:00 -0500
+ Spectral Decompositions of Controllability Gramian and Its Inverse based on System Eigenvalues in Companion Form
+ https://arxiv.org/abs/2512.10851
+ arXiv:2512.10851v1 Announce Type: cross
+Abstract: Controllability and observability Gramians, along with their inverses, are widely used to solve various problems in control theory. This paper proposes spectral decompositions of the controllability Gramian and its inverse based on system eigenvalues for a continuous LTI dynamical system in the controllability canonical (companion) form. The Gramian and its inverse are represented as sums of Hermitian matrices, each corresponding to individual system eigenvalues or their pairwise combinations. These decompositions are obtained for the solutions of both algebraic and differential Lyapunov and Riccati equations with arbitrary initial conditions, allowing for the estimation of system spectral properties over an arbitrary time interval and their prediction at future moments. The derived decompositions are also generalized to the case of multiple eigenvalues in the dynamics matrix spectrum, enabling a closed-form estimation of the effects of resonant interactions with the system's eigenmodes. The spectral components are interpreted as measurable quantities in the minimum energy control problem. Therefore, they are unambiguously defined and can quantitatively characterize the influence of individual eigenmodes and associated system devices on controllability, observability, and the asymptotic dynamics of perturbation energy. The additional information obtained from these decompositions can improve the accuracy of algorithms in solving various practical problems, such as stability analysis, minimum energy control, structural design, tuning regulators, optimal placement of actuators and sensors, network analysis, and model order reduction.
+ oai:arXiv.org:2512.10851v1
+ math.OC
+ cs.SY
+ eess.SY
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Tala Eagling-Vose, Jorik Jooken, Felicia Lucke, Barnaby Martin, Dani\"el Paulusma
+ Alexey Iskakov, Igor Yadykin
- True Random Number Generators on IQM Spark
- https://arxiv.org/abs/2512.09862
- arXiv:2512.09862v1 Announce Type: cross
-Abstract: Random number generation is fundamental for many modern applications including cryptography, simulations and machine learning. Traditional pseudo-random numbers may offer statistical unpredictability, but are ultimately deterministic. On the other hand, True Random Number Generation (TRNG) offers true randomness. One way of obtaining such randomness are quantum systems, including quantum computers. As such the use of quantum computers for TRNG has received considerable attention in recent years. However, existing studies almost exclusively consider IBM quantum computers, often stop at using simulations and usually test only a handful of different TRNG quantum circuits. In this paper, we address those issues by presenting a study of TRNG circuits on Odra 5 a real-life quantum computer installed at Wroc{\l}aw University of Science and Technology. It is also the first study to utilize the IQM superconducting architecture. Since Odra 5 is available on-premises it allows for much more comprehensive study of various TRNG circuits. In particular, we consider 5 types of TRNG circuits with 105 circuit subvariants in total. Each circuit is used to generate 1 million bits. We then perform an analysis of the quality of the obtained random sequences using the NIST SP 800-22 and NIST SP 800-90B test suites. We also provide a comprehensive review of existing literature on quantum computer-based TRNGs.
- oai:arXiv.org:2512.09862v1
- quant-ph
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Physics-informed Polynomial Chaos Expansion with Enhanced Constrained Optimization Solver and D-optimal Sampling
+ https://arxiv.org/abs/2512.10873
+ arXiv:2512.10873v1 Announce Type: cross
+Abstract: Physics-informed polynomial chaos expansions (PC$^2$) provide an efficient physically constrained surrogate modeling framework by embedding governing equations and other physical constraints into the standard data-driven polynomial chaos expansions (PCE) and solving via the Karush-Kuhn-Tucker (KKT) conditions. This approach improves the physical interpretability of surrogate models while achieving high computational efficiency and accuracy. However, the performance and efficiency of PC$^2$ can still be degraded with high-dimensional parameter spaces, limited data availability, or unrepresentative training data. To address this problem, this study explores two complementary enhancements to the PC$^2$ framework. First, a numerically efficient constrained optimization solver, straightforward updating of Lagrange multipliers (SULM), is adopted as an alternative to the conventional KKT solver. The SULM method significantly reduces computational cost when solving physically constrained problems with high-dimensionality and derivative boundary conditions that require a large number of virtual points. Second, a D-optimal sampling strategy is utilized to select informative virtual points to improve the stability and achieve the balance of accuracy and efficiency of the PC$^2$. The proposed methods are integrated into the PC$^2$ framework and evaluated through numerical examples of representative physical systems governed by ordinary or partial differential equations. The results demonstrate that the enhanced PC$^2$ has better comprehensive capability than standard PC$^2$, and is well-suited for high-dimensional uncertainty quantification tasks.
+ oai:arXiv.org:2512.10873v1
+ stat.ML
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Andrzej Gnatowski, Jaros{\l}aw Rudy, Teodor Ni\.zy\'nski, Krzysztof \'Swi\k{e}cicki
+ http://creativecommons.org/licenses/by/4.0/
+ Qitian Lu, Himanshu Sharma, Michael D. Shields, Luk\'a\v{s} Nov\'ak
- A 0.8395-approximation algorithm for the EPR problem
- https://arxiv.org/abs/2512.09896
- arXiv:2512.09896v1 Announce Type: cross
-Abstract: We give an efficient 0.8395-approximation algorithm for the EPR Hamiltonian. Our improvement comes from a new nonlinear monogamy-of-entanglement bound on star graphs and a refined parameterization of a shallow quantum circuit from previous works. We also prove limitations showing that current methods cannot achieve substantially better approximation ratios, indicating that further progress will require fundamentally new techniques.
- oai:arXiv.org:2512.09896v1
- quant-ph
- cs.DS
- Thu, 11 Dec 2025 00:00:00 -0500
+ Distributionally Robust Regret Optimal Control Under Moment-Based Ambiguity Sets
+ https://arxiv.org/abs/2512.10906
+ arXiv:2512.10906v1 Announce Type: cross
+Abstract: In this paper, we consider a class of finite-horizon, linear-quadratic stochastic control problems, where the probability distribution governing the noise process is unknown but assumed to belong to an ambiguity set consisting of all distributions whose mean and covariance lie within norm balls centered at given nominal values. To address the distributional ambiguity, we explore the design of causal affine control policies to minimize the worst-case expected regret over all distributions in the given ambiguity set. The resulting minimax optimal control problem is shown to admit an equivalent reformulation as a tractable convex program that corresponds to a regularized version of the nominal linear-quadratic stochastic control problem. While this convex program can be recast as a semidefinite program, semidefinite programs are typically solved using primal-dual interior point methods that scale poorly with the problem size in practice. To address this limitation, we propose a scalable dual projected subgradient method to compute optimal controllers to an arbitrary accuracy. Numerical experiments are presented to benchmark the proposed method against state-of-the-art data-driven and distributionally robust control design approaches.
+ oai:arXiv.org:2512.10906v1
+ math.OC
+ cs.LG
+ cs.SY
+ eess.SY
+ Fri, 12 Dec 2025 00:00:00 -0500crosshttp://creativecommons.org/licenses/by/4.0/
- Anuj Apte, Eunou Lee, Kunal Marwaha, Ojas Parekh, Lennart Sinjorgo, James Sud
+ Feras Al Taha, Eilyan Bitar
- Supervised learning pays attention
- https://arxiv.org/abs/2512.09912
- arXiv:2512.09912v1 Announce Type: cross
-Abstract: In-context learning with attention enables large neural networks to make context-specific predictions by selectively focusing on relevant examples. Here, we adapt this idea to supervised learning procedures such as lasso regression and gradient boosting, for tabular data. Our goals are to (1) flexibly fit personalized models for each prediction point and (2) retain model simplicity and interpretability.
- Our method fits a local model for each test observation by weighting the training data according to attention, a supervised similarity measure that emphasizes features and interactions that are predictive of the outcome. Attention weighting allows the method to adapt to heterogeneous data in a data-driven way, without requiring cluster or similarity pre-specification. Further, our approach is uniquely interpretable: for each test observation, we identify which features are most predictive and which training observations are most relevant. We then show how to use attention weighting for time series and spatial data, and we present a method for adapting pretrained tree-based models to distributional shift using attention-weighted residual corrections. Across real and simulated datasets, attention weighting improves predictive performance while preserving interpretability, and theory shows that attention-weighting linear models attain lower mean squared error than the standard linear model under mixture-of-models data-generating processes with known subgroup structure.
- oai:arXiv.org:2512.09912v1
- stat.ML
- cs.AI
+ Hermitian Yang--Mills connections on general vector bundles: geometry and physical Yukawa couplings
+ https://arxiv.org/abs/2512.10907
+ arXiv:2512.10907v1 Announce Type: cross
+Abstract: We compute solutions to the Hermitian Yang-Mills equations on holomorphic vector bundles $V$ via an alternating optimisation procedure founded on geometric machine learning. The proposed method is fully general with respect to the rank and structure group of $V$, requiring only the ability to enumerate a basis of global sections for a given bundle. This enables us to compute the physically normalised Yukawa couplings in a broad class of heterotic string compactifications. Using this method, we carry out this computation in full for a heterotic compactification incorporating a gauge bundle with non-Abelian structure group.
+ oai:arXiv.org:2512.10907v1
+ hep-thcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500cross
- http://creativecommons.org/licenses/by/4.0/
- Erin Craig, Robert Tibshirani
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Challenger Mishra, Justin Tan
- Two Causal Principles for Improving Visual Dialog
- https://arxiv.org/abs/1911.10496
- arXiv:1911.10496v3 Announce Type: replace
-Abstract: This paper unravels the design tricks adopted by us, the champion team MReaL-BDAI, for Visual Dialog Challenge 2019: two causal principles for improving Visual Dialog (VisDial). By "improving", we mean that they can promote almost every existing VisDial model to the state-of-the-art performance on the leader-board. Such a major improvement is only due to our careful inspection on the causality behind the model and data, finding that the community has overlooked two causalities in VisDial. Intuitively, Principle 1 suggests: we should remove the direct input of the dialog history to the answer model, otherwise a harmful shortcut bias will be introduced; Principle 2 says: there is an unobserved confounder for history, question, and answer, leading to spurious correlations from training data. In particular, to remove the confounder suggested in Principle 2, we propose several causal intervention algorithms, which make the training fundamentally different from the traditional likelihood estimation. Note that the two principles are model-agnostic, so they are applicable in any VisDial model. The code is available at https://github.com/simpleshinobu/visdial-principles.
- oai:arXiv.org:1911.10496v3
- cs.CV
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
+ Noisy Quantum Learning Theory
+ https://arxiv.org/abs/2512.10929
+ arXiv:2512.10929v1 Announce Type: cross
+Abstract: We develop a framework for learning from noisy quantum experiments, focusing on fault-tolerant devices accessing uncharacterized systems through noisy couplings. Our starting point is the complexity class $\textsf{NBQP}$ ("noisy BQP"), modeling noisy fault-tolerant quantum computers that cannot, in general, error-correct the oracle systems they query. Using this class, we show that for natural oracle problems, noise can eliminate exponential quantum learning advantages of ideal noiseless learners while preserving a superpolynomial gap between NISQ and fault-tolerant devices. Beyond oracle separations, we study concrete noisy learning tasks. For purity testing, the exponential two-copy advantage collapses under a single application of local depolarizing noise. Nevertheless, we identify a setting motivated by AdS/CFT in which noise-resilient structure restores a quantum learning advantage in a noisy regime. We then analyze noisy Pauli shadow tomography, deriving lower bounds that characterize how instance size, quantum memory, and noise control sample complexity, and design algorithms with parametrically similar scalings. Together, our results show that the Bell-basis and SWAP-test primitives underlying most exponential quantum learning advantages are fundamentally fragile to noise unless the experimental system has latent noise-robust structure. Thus, realizing meaningful quantum advantages in future experiments will require understanding how noise-robust physical properties interface with available algorithmic techniques.
+ oai:arXiv.org:2512.10929v1
+ quant-ph
+ cs.CC
+ cs.IT
+ cs.LG
+ math.IT
+ Fri, 12 Dec 2025 00:00:00 -0500
+ crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jiaxin Qi, Yulei Niu, Jianqiang Huang, Hanwang Zhang
+ Jordan Cotler, Weiyuan Gong, Ishaan Kannan
- TCNN: Triple Convolutional Neural Network Models for Retrieval-based Question Answering System in E-commerce
- https://arxiv.org/abs/2004.10919
- arXiv:2004.10919v2 Announce Type: replace
-Abstract: Automatic question-answering (QA) systems have boomed during last few years, and commonly used techniques can be roughly categorized into Information Retrieval (IR)-based and generation-based. A key solution to the IR based models is to retrieve the most similar knowledge entries of a given query from a QA knowledge base, and then rerank those knowledge entries with semantic matching models. In this paper, we aim to improve an IR based e-commerce QA system-AliMe with proposed text matching models, including a basic Triple Convolutional Neural Network (TCNN) model and two Attention-based TCNN (ATCNN) models. Experimental results show their effect.
- oai:arXiv.org:2004.10919v2
- cs.LG
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Dual Cluster Contrastive learning for Object Re-Identification
+ https://arxiv.org/abs/2112.04662
+ arXiv:2112.04662v4 Announce Type: replace
+Abstract: Recently, cluster contrastive learning has been proven effective for object ReID by computing the contrastive loss between the individual features and the cluster memory. However, existing methods that use the individual features to momentum update the cluster memory will fluctuate over the training examples, especially for the outlier samples. Unlike the individual-based updating mechanism, the centroid-based updating mechanism that applies the mean feature of each cluster to update the cluster memory can reduce the impact of individual samples. Therefore, we formulate the individual-based updating and centroid-based updating mechanisms in a unified cluster contrastive framework, named Dual Cluster Contrastive framework (DCC), which maintains two types of memory banks: individual and centroid cluster memory banks. Significantly, the individual cluster memory considers just one individual at a time to take a single step for updating. The centroid cluster memory applies the mean feature of each cluster to update the corresponding cluster memory. During optimization, besides the vallina contrastive loss of each memory, a cross-view consistency constraint is applied to exchange the benefits of two memories for generating a discriminative description for the object ReID. Note that DCC can be easily applied for unsupervised or supervised object ReID by using ground-truth labels or the generated pseudo-labels. Extensive experiments on three benchmarks, \emph{e.g.,} Market-1501, MSMT17, and VeRi-776, under \textbf{supervised Object ReID} and \textbf{unsupervised Object ReID} demonstrate the superiority of the proposed DCC.
+ oai:arXiv.org:2112.04662v4
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Shuangyong Song, Chao Wang
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hantao Yao, Changsheng Xu
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses
- https://arxiv.org/abs/2106.05426
- arXiv:2106.05426v5 Announce Type: replace
-Abstract: How related are the representations learned by neural language models, translation models, and language tagging tasks? We answer this question by adapting an encoder-decoder transfer learning method from computer vision to investigate the structure among 100 different feature spaces extracted from hidden representations of various networks trained on language tasks. This method reveals a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings. We call this low-dimensional structure a language representation embedding because it encodes the relationships between representations needed to process language for a variety of NLP tasks. We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI. Additionally, we find that the principal dimension of this structure can be used to create a metric which highlights the brain's natural language processing hierarchy. This suggests that the embedding captures some part of the brain's natural language representation structure.
- oai:arXiv.org:2106.05426v5
- cs.CL
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Effective Online Exam Proctoring by Combining Lightweight Face Detection and Deep Recognition
+ https://arxiv.org/abs/2206.13356
+ arXiv:2206.13356v2 Announce Type: replace
+Abstract: Online exams, conducted via video conferencing platforms such as Zoom, have become popular in educational institutions since COVID-19. While convenient, ensuring the integrity and security of online exams remains challenging, as traditional invigilation struggles to effectively monitor multiple student video feeds in real time. In this paper, we present iExam, an effective online exam proctoring and analysis system that combines lightweight face detection and deep recognition. iExam employs real-time face detection to assist invigilators in continuously monitoring student presence, and leverages deep face recognition for post-exam video analysis to identify abnormal behaviors--including face disappearance, face rotation, and identity substitution. To realize this system, we address three core challenges: (i) designing a lightweight approach to efficiently capture and analyze exam video streams in real time; (ii) developing an enhanced OCR method to automatically extract student identities from dynamically positioned Zoom name tags, enabling reliable ground truth labeling without manual intervention; and (iii) optimizing the training and inference pipeline to significantlyreduce resource and time requirements on ordinary teacher devices. Extensive experiments demonstrate that iExam achieves 90.4% accuracy for real-time face detection and 98.4% accuracy for post-exam face recognition, while maintaining low overhead. These results show that iExam can substantially enhance the automation and reliability of online exam proctoring in practice.
+ oai:arXiv.org:2206.13356v2
+ cs.CV
+ eess.IV
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Richard Antonello, Javier Turek, Vy Vo, Alexander Huth
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xu Yang, Juantao Zhong, Daoyuan Wu, Xiao Yi, Jimmy H. M. Lee, Tan Lee, Peng Han
- The Vector Grounding Problem
- https://arxiv.org/abs/2304.01481
- arXiv:2304.01481v3 Announce Type: replace
-Abstract: Large language models (LLMs) produce seemingly meaningful outputs, yet they are trained on text alone without direct interaction with the world. This leads to a modern variant of the classical symbol grounding problem in AI: can LLMs' internal states and outputs be about extra-linguistic reality, independently of the meaning human interpreters project onto them? We argue that they can. We first distinguish referential grounding -- the connection between a representation and its worldly referent -- from other forms of grounding and argue it is the only kind essential to solving the problem. We contend that referential grounding is achieved when a system's internal states satisfy two conditions derived from teleosemantic theories of representation: (1) they stand in appropriate causal-informational relations to the world, and (2) they have a history of selection that has endowed them with the function of carrying this information. We argue that LLMs can meet both conditions, even without multimodality or embodiment.
- oai:arXiv.org:2304.01481v3
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Multi-Robot Path Planning Combining Heuristics and Multi-Agent Reinforcement Learning
+ https://arxiv.org/abs/2306.01270
+ arXiv:2306.01270v2 Announce Type: replace
+Abstract: Multi-robot path finding in dynamic environments is a highly challenging classic problem. In the movement process, robots need to avoid collisions with other moving robots while minimizing their travel distance. Previous methods for this problem either continuously replan paths using heuristic search methods to avoid conflicts or choose appropriate collision avoidance strategies based on learning approaches. The former may result in long travel distances due to frequent replanning, while the latter may have low learning efficiency due to low sample exploration and utilization, and causing high training costs for the model. To address these issues, we propose a path planning method, MAPPOHR, which combines heuristic search, empirical rules, and multi-agent reinforcement learning. The method consists of two layers: a real-time planner based on the multi-agent reinforcement learning algorithm, MAPPO, which embeds empirical rules in the action output layer and reward functions, and a heuristic search planner used to create a global guiding path. During movement, the heuristic search planner replans new paths based on the instructions of the real-time planner. We tested our method in 10 different conflict scenarios. The experiments show that the planning performance of MAPPOHR is better than that of existing learning and heuristic methods. Due to the utilization of empirical knowledge and heuristic search, the learning efficiency of MAPPOHR is higher than that of existing learning methods.
+ oai:arXiv.org:2306.01270v2
+ cs.AI
+ cs.LG
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Dimitri Coelho Mollo, Rapha\"el Milli\`ere
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Shaoming Peng
- Entropy Functions on Two-Dimensional Faces of Polymatroidal Region of Degree Four: Part I: Problem Formulation and More
- https://arxiv.org/abs/2305.06250
- arXiv:2305.06250v4 Announce Type: replace
-Abstract: Characterization of entropy functions is of fundamental importance in information theory. By imposing constraints on their Shannon outer bound, i.e., the polymatroidal region, one obtains the faces of the region and entropy functions on them with special structures. In this series of two papers, we characterize entropy functions on the 2-dimensional faces of the polymatroidal region of degree 4. In Part I, we formulate the problem, enumerate all 59 types of 2-dimensional faces of the region by an algorithm, and fully characterize entropy functions on 49 types of them. Among them, those non-trivial cases are mainly characterized by the graph-coloring technique. The entropy functions on the remaining 10 types of faces will be characterized in Part II, among which 8 types are fully characterized, and 2 types are partially characterized.
- oai:arXiv.org:2305.06250v4
- cs.IT
- math.CO
- math.IT
- Thu, 11 Dec 2025 00:00:00 -0500
+ Joint2Human: High-quality 3D Human Generation via Compact Spherical Embedding of 3D Joints
+ https://arxiv.org/abs/2312.08591
+ arXiv:2312.08591v3 Announce Type: replace
+Abstract: 3D human generation is increasingly significant in various applications. However, the direct use of 2D generative methods in 3D generation often results in losing local details, while methods that reconstruct geometry from generated images struggle with global view consistency. In this work, we introduce Joint2Human, a novel method that leverages 2D diffusion models to generate detailed 3D human geometry directly, ensuring both global structure and local details. To achieve this, we employ the Fourier occupancy field (FOF) representation, enabling the direct generation of 3D shapes as preliminary results with 2D generative models. With the proposed high-frequency enhancer and the multi-view recarving strategy, our method can seamlessly integrate the details from different views into a uniform global shape. To better utilize the 3D human prior and enhance control over the generated geometry, we introduce a compact spherical embedding of 3D joints. This allows for an effective guidance of pose during the generation process. Additionally, our method can generate 3D humans guided by textual inputs. Our experimental results demonstrate the capability of our method to ensure global structure, local details, high resolution, and low computational cost simultaneously. More results and the code can be found on our project page at http://cic.tju.edu.cn/faculty/likun/projects/Joint2Human.
+ oai:arXiv.org:2312.08591v3
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shaocheng Liu, Qi Chen
+ Muxin Zhang, Qiao Feng, Zhuo Su, Chao Wen, Zhou Xue, Kun Li
- Adaptive Self-Distillation for Minimizing Client Drift in Heterogeneous Federated Learning
- https://arxiv.org/abs/2305.19600
- arXiv:2305.19600v5 Announce Type: replace
-Abstract: Federated Learning (FL) is a machine learning paradigm that enables clients to jointly train a global model by aggregating the locally trained models without sharing any local training data. In practice, there can often be substantial heterogeneity (e.g., class imbalance) across the local data distributions observed by each of these clients. Under such non-iid label distributions across clients, FL suffers from the 'client-drift' problem where every client drifts to its own local optimum. This results in slower convergence and poor performance of the aggregated model. To address this limitation, we propose a novel regularization technique based on adaptive self-distillation (ASD) for training models on the client side. Our regularization scheme adaptively adjusts to each client's training data based on the global model's prediction entropy and the client-data label distribution. We show in this paper that our proposed regularization (ASD) can be easily integrated atop existing, state-of-the-art FL algorithms, leading to a further boost in the performance of these off-the-shelf methods. We theoretically explain how incorporation of ASD regularizer leads to reduction in client-drift and empirically justify the generalization ability of the trained model. We demonstrate the efficacy of our approach through extensive experiments on multiple real-world benchmarks and show substantial gains in performance when the proposed regularizer is combined with popular FL methods.
- oai:arXiv.org:2305.19600v5
+ IRG: Modular Synthetic Relational Database Generation with Complex Relational Schemas
+ https://arxiv.org/abs/2312.15187
+ arXiv:2312.15187v3 Announce Type: replace
+Abstract: Relational databases (RDBs) are widely used by corporations and governments to store multiple related tables. Their relational schemas pose unique challenges to synthetic data generation for privacy-preserving data sharing, e.g., for collaborative analytical and data mining tasks, as well as software testing at various scales. Relational schemas typically include a set of primary and foreign key constraints to specify the intra-and inter-table entity relations, which also imply crucial intra-and inter-table data correlations in the RDBs. Existing synthetic RDB generation approaches often focus on the relatively simple and basic parent-child relations, failing to address the ubiquitous real-world complexities in relational schemas in key constraints like composite keys, intra-table correlations like sequential correlation, and inter-table data correlations like indirectly connected tables. In this paper, we introduce incremental relational generator (IRG), a modular framework designed to handle these real-world challenges. In IRG, each table is generated by learning context from a depth-first traversal of relational connections to capture indirect inter-table relationships and constructs different parts of a table through several classical generative and predictive modules to preserve complex key constraints and data correlations. Compared to 3 prior art algorithms across 10 real-world RDB datasets, IRG successfully handles the relational schemas and captures critical data relationships for all datasets while prior works are incapable of. The generated synthetic data also demonstrates better fidelity and utility than prior works, implying its higher potential as a replacement for the basis of analytical tasks and data mining applications. Code is available at: https://github.com/li-jiayu-ljy/irg.
+ oai:arXiv.org:2312.15187v3
+ cs.DBcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- M Yashwanth, Gaurav Kumar Nayak, Arya Singh, Yogesh Simmhan, Anirban Chakraborty
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1145/3770854.3780313
+ Jiayu Li, Zilong Zhao, Milad Abdollahzadeh, Biplab Sikdar, Y. C. Tay
- Perceptually Uniform Construction of Illustrative Textures
- https://arxiv.org/abs/2308.03644
- arXiv:2308.03644v4 Announce Type: replace
-Abstract: Illustrative textures, such as stippling or hatching, were predominantly used as an alternative to conventional Phong rendering. Recently, the potential of encoding information on surfaces or maps using different densities has also been recognized. This has the significant advantage that additional color can be used as another visual channel and the illustrative textures can then be overlaid. Effectively, it is thus possible to display multiple information, such as two different scalar fields on surfaces simultaneously. In previous work, these textures were manually generated and the choice of density was unempirically determined. Here, we first want to determine and understand the perceptual space of illustrative textures. We chose a succession of simplices with increasing dimensions as primitives for our textures: Dots, lines, and triangles. Thus, we explore the texture types of stippling, hatching, and triangles. We create a range of textures by sampling the density space uniformly. Then, we conduct three perceptual studies in which the participants performed pairwise comparisons for each texture type. We use multidimensional scaling (MDS) to analyze the perceptual spaces per category. The perception of stippling and triangles seems relatively similar. Both are adequately described by a 1D manifold in 2D space. The perceptual space of hatching consists of two main clusters: Crosshatched textures, and textures with only one hatching direction. However, the perception of hatching textures with only one hatching direction is similar to the perception of stippling and triangles. Based on our findings, we construct perceptually uniform illustrative textures. Afterwards, we provide concrete application examples for the constructed textures.
- oai:arXiv.org:2308.03644v4
+ Visualization Generation with Large Language Models: An Evaluation
+ https://arxiv.org/abs/2401.11255
+ arXiv:2401.11255v2 Announce Type: replace
+Abstract: The frequent need for analysts to create visualizations to derive insights from data has driven extensive research into the generation of natural Language to Visualization (NL2VIS). While recent progress in large language models (LLMs) suggests their potential to effectively support NL2VIS tasks, existing studies lack a systematic investigation into the performance of different LLMs under various prompt strategies. This paper addresses this gap and contributes a crucial baseline evaluation of LLMs' capabilities in generating visualization specifications of NL2VIS tasks. Our evaluation utilizes the nvBench dataset, employing six representative LLMs and eight distinct prompt strategies to evaluate their performance in generating six target chart types using the Vega-Lite visualization specification. We assess model performance with multiple metrics, including vis accuracy, validity and legality. Our results reveal substantial performance disparities across prompt strategies, chart types, and LLMs. Furthermore, based on the evaluation results, we uncover several counterintuitive behaviors across these dimensions, and propose directions for enhancing the NL2VIS benchmark to better support future NL2VIS research.
+ oai:arXiv.org:2401.11255v2cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1109/TVCG.2023.3326574
- Anna Sterzik, Monique Meuschke, Douglas W. Cunningham, Kai Lawonn
-
-
- CupCleaner: A Hybrid Data Cleaning Approach for Comment Updating
- https://arxiv.org/abs/2308.06898
- arXiv:2308.06898v2 Announce Type: replace
-Abstract: Comment updating is an emerging task in software evolution that aims to automatically revise source code comments in accordance with code changes. This task plays a vital role in maintaining code-comment consistency throughout software development. Recently, deep learning-based approaches have shown great potential in addressing comment updating by learning complex patterns between code edits and corresponding comment modifications. However, the effectiveness of these learning-based approaches heavily depends on the quality of training data. Existing datasets are typically constructed by mining version histories from open-source repositories such as GitHub, where there is often a lack of quality control over comment edits. As a result, these datasets may contain noisy or inconsistent samples that hinder model learning and generalization. In this paper, we focus on cleaning existing comment updating datasets, considering both the data's characteristics in the updating scenario and their implications on the model training process. We propose a hybrid statistical approach named CupCleaner (Comment UPdating's CLEANER) to achieve this purpose. Specifically, we combine static semantic information within data samples and dynamic loss information during the training process to clean the dataset. Experimental results demonstrate that, on the same test set, both the individual static strategy and the dynamic strategy can significantly filter out a portion of the data and enhance the performance of the model. Furthermore, employing a model ensemble approach can combine the characteristics of static and dynamic cleaning, further enhancing the performance of the model and the reliability of its output results.
- oai:arXiv.org:2308.06898v2
- cs.SE
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-sa/4.0/
- Qingyuan Liang, Zeyu Sun, Qihao Zhu, Junhao Hu, Yifan Zhao, Yakun Zhang, Lu Zhang
+ Xinyu Wang, Chenwei Liang, Shunyuan Zheng, Jinyuan Liang, Guozheng Li, Yu Zhang, Chi Harold Liu
- Gradient-Free Privacy Leakage in Federated Language Models through Selective Weight Tampering
- https://arxiv.org/abs/2310.16152
- arXiv:2310.16152v4 Announce Type: replace
-Abstract: Federated learning (FL) has become a key component in various language modeling applications such as machine translation, next-word prediction, and medical record analysis. These applications are trained on datasets from many FL participants that often include privacy-sensitive data, such as healthcare records, phone/credit card numbers, login credentials, etc. Although FL enables computation without necessitating clients to share their raw data, existing works show that privacy leakage is still probable in federated language models. In this paper, we present two novel findings on the leakage of privacy-sensitive user data from federated large language models without requiring access to gradients. Firstly, we make a key observation that model snapshots from the intermediate rounds in FL can cause greater privacy leakage than the final trained model. Secondly, we identify that a malicious FL participant can aggravate the leakage by tampering with the model's selective weights that are responsible for memorizing the sensitive training data of some other clients, even without any cooperation from the server. Our best-performing method increases the membership inference recall by 29% and achieves up to 71% private data reconstruction, evidently outperforming existing attacks that consider much stronger adversary capabilities. Lastly, we recommend a balanced suite of techniques for an FL client to defend against such privacy risk.
- oai:arXiv.org:2310.16152v4
- cs.CR
+ Noisy Spiking Actor Network for Exploration
+ https://arxiv.org/abs/2403.04162
+ arXiv:2403.04162v2 Announce Type: replace
+Abstract: As a general method for exploration in deep reinforcement learning (RL), NoisyNet can produce problem-specific exploration strategies. Spiking neural networks (SNNs), due to their binary firing mechanism, have strong robustness to noise, making it difficult to realize efficient exploration with local disturbances. To solve this exploration problem, we propose a noisy spiking actor network (NoisySAN) that introduces time-correlated noise during charging and transmission. Moreover, a noise reduction method is proposed to find a stable policy for the agent. Extensive experimental results demonstrate that our method outperforms the state-of-the-art performance on a wide range of continuous control tasks from OpenAI gym.
+ oai:arXiv.org:2403.04162v2cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.NE
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Md Rafi Ur Rashid, Vishnu Asutosh Dasu, Kang Gu, Najrin Sultana, Shagufta Mehnaz
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ding Chen, Peixi Peng, Tiejun Huang, Yonghong Tian
- CARTOS: A Charging-Aware Real-Time Operating System for Intermittent Batteryless Devices
- https://arxiv.org/abs/2311.07227
- arXiv:2311.07227v3 Announce Type: replace
-Abstract: This paper presents CARTOS, a charging-aware real-time operating system designed to enhance the functionality of intermittently-powered batteryless devices (IPDs) for various Internet of Things (IoT) applications. While IPDs offer significant advantages such as extended lifespan and operability in extreme environments, they pose unique challenges, including the need to ensure forward progress of program execution amidst variable energy availability and maintaining reliable real-time time behavior during power disruptions. To address these challenges, CARTOS introduces a mixed-preemption scheduling model that classifies tasks into computational and peripheral tasks, and ensures their efficient and timely execution by adopting just-in-time checkpointing for divisible computation tasks and uninterrupted execution for indivisible peripheral tasks. CARTOS also supports processing chains of tasks with precedence constraints and adapts its scheduling in response to environmental changes to offer continuous execution under diverse conditions. CARTOS is implemented with new APIs and components added to FreeRTOS but is designed for portability to other embedded RTOSs. Through real hardware experiments and simulations, CARTOS exhibits superior performance over state-of-the-art methods, demonstrating that it can serve as a practical platform for developing resilient, real-time sensing applications on IPDs.
- oai:arXiv.org:2311.07227v3
- cs.OS
- cs.SY
- eess.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Leveraging language models for summarizing mental state examinations: A comprehensive evaluation and dataset release
+ https://arxiv.org/abs/2403.20145
+ arXiv:2403.20145v3 Announce Type: replace
+Abstract: Mental health disorders affect a significant portion of the global population, with diagnoses primarily conducted through Mental State Examinations (MSEs). MSEs serve as structured assessments to evaluate behavioral and cognitive functioning across various domains, aiding mental health professionals in diagnosis and treatment monitoring. However, in developing countries, access to mental health support is limited, leading to an overwhelming demand for mental health professionals. Resident doctors often conduct initial patient assessments and create summaries for senior doctors, but their availability is constrained, resulting in extended patient wait times.
+ This study addresses the challenge of generating concise summaries from MSEs through the evaluation of various language models. Given the scarcity of relevant mental health conversation datasets, we developed a 12-item descriptive MSE questionnaire and collected responses from 405 participants, resulting in 9720 utterances covering diverse mental health aspects. Subsequently, we assessed the performance of five well-known pre-trained summarization models, both with and without fine-tuning, for summarizing MSEs. Our comprehensive evaluation, leveraging metrics such as ROUGE, SummaC, and human evaluation, demonstrates that language models can generate automated coherent MSE summaries for doctors. With this paper, we release our collected conversational dataset and trained models publicly for the mental health research community.
+ oai:arXiv.org:2403.20145v3
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Mohsen Karimi, Yidi Wang, Youngbin Kim, Yoojin Lim, Hyoseung Kim
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Nilesh Kumar Sahu, Manjeet Yadav, Mudita Chaturvedi, Snehil Gupta, Haroon R Lone
- adF: A Novel System for Measuring Web Fingerprinting through Ads
- https://arxiv.org/abs/2311.08769
- arXiv:2311.08769v3 Announce Type: replace
-Abstract: This paper introduces adF, a novel system for analyzing the vulnerability of different devices, Operating Systems (OSes), and browsers to web fingerprinting. adF performs its measurements from code inserted in ads. We have used our system in several ad campaigns that delivered 5.40 million ad impressions. The collected data allow us to assess the vulnerability of current desktop and mobile devices to web fingerprinting. Based on our results, we estimate that 66% of desktop devices and 40% of mobile devices can be uniquely fingerprinted with our web fingerprinting system. However, the resilience to web fingerprinting varies significantly across browsers and device types, with Chrome on desktops being the most vulnerable configuration.
- To counter web fingerprinting, we propose ShieldF, a simple solution which blocks the reporting by browsers of those attributes that we found in the analysis of our dataset that present the most significant discrimination power. Our experiments reveal that ShieldF outperforms all anti-fingerprinting solutions proposed by major browsers (Chrome, Safari and Firefox) offering an increase in the resilience offered to web fingerprinting up to 62% for some device configurations. ShieldF is available as an add-on for any chromium-based browser. Moreover, it is readily adoptable by browser and mobile app developers. Its widespread use would lead to a significant improvement in the protection offered by browsers and mobile apps to web fingerprinting.
- oai:arXiv.org:2311.08769v3
- cs.CR
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ The Spatial Semantics of Iconic Gesture
+ https://arxiv.org/abs/2404.18708
+ arXiv:2404.18708v2 Announce Type: replace
+Abstract: The current multimodal turn in linguistic theory leaves a crucial question unanswered: what is the meaning of iconic gestures, and how does it compose with speech meaning? We argue for a separation of linguistic and visual levels of meaning and introduce a spatial gesture semantics that closes this gap. Iconicity is differentiated into three aspects: Firstly, an interpretation of the form of a gesture in terms of a translation from kinematic gesture annotations into vector sequences (iconic model). Secondly, a truth-functional evaluation of the iconic model within spatially extended domains (embedding). Since a simple embedding is too strong, we identify a number of transformations that can be applied to iconic models, namely rotation, scaling, perspective fixation, and quotation of handshape. Thirdly, the linguistic description or classification of an iconic model (informational evaluation). Since the informational evaluation of an iconic gesture is a heuristic act, it needs a place in a semantic theory of visual communication. Informational evaluation lifts a gesture to a quasi-linguistic level that can interact with verbal content. This interaction is either vacuous, or regimented by usual lexicon-driven inferences discussed in dynamic semantic frameworks.
+ oai:arXiv.org:2404.18708v2
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- 10.1109/TETC.2025.3630046
- Miguel A. Bermejo-Agueda (Universidad Carlos III de Madrid, uc3m-Santander Big Data Institute), Patricia Callejo (Universidad Carlos III de Madrid, uc3m-Santander Big Data Institute), Rub\'en Cuevas (Universidad Carlos III de Madrid, uc3m-Santander Big Data Institute), \'Angel Cuevas (Universidad Carlos III de Madrid, uc3m-Santander Big Data Institute)
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Andy L\"ucking, Alexander Henlein, Alexander Mehler
- Information-Theoretic Active Correlation Clustering
- https://arxiv.org/abs/2402.03587
- arXiv:2402.03587v3 Announce Type: replace
-Abstract: Correlation clustering is a flexible framework for partitioning data based solely on pairwise similarity or dissimilarity information, without requiring the number of clusters as input. However, in many practical scenarios, these pairwise similarities are not available a priori and must be obtained through costly measurements or human feedback. This motivates the use of active learning to query only the most informative pairwise comparisons, enabling effective clustering under budget constraints. In this work, we develop a principled active learning approach for correlation clustering by introducing several information-theoretic acquisition functions that prioritize queries based on entropy and expected information gain. These strategies aim to reduce uncertainty about the clustering structure as efficiently as possible. We evaluate our methods across a range of synthetic and real-world settings and show that they significantly outperform existing baselines in terms of clustering accuracy and query efficiency. Our results highlight the benefits of combining active learning with correlation clustering in settings where similarity information is costly or limited.
- oai:arXiv.org:2402.03587v3
- cs.LG
- stat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
+ Conjugate gradient for ill-posed problems: regularization by preconditioning, preconditioning by regularization
+ https://arxiv.org/abs/2406.04695
+ arXiv:2406.04695v2 Announce Type: replace
+Abstract: This paper investigates using the conjugate gradient iterative solver for ill-posed problems. We show that preconditioner and Tikhonov-regularization work in conjunction. In particular when they employ the same symmetric positive semi-definite operator, a powerful Ritz analysis allows one to estimate at negligible computational cost the solution for any Tikhonov's weight. This enhanced linear solver is applied to the boundary data completion problem and as the inner solver for the optical flow estimator.
+ oai:arXiv.org:2406.04695v2
+ math.NA
+ cs.NA
+ physics.class-ph
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- IEEE International Conference on Data Mining (ICDM), 2025
- Linus Aronsson, Morteza Haghir Chehreghani
+ Ahmed Chabib (LaMcube), Jean-Francois Witz (LaMcube), Vincent Magnier (LaMcube), Pierre Gosselet (LaMcube)
- Improving Topic Relevance Model by Mix-structured Summarization and LLM-based Data Augmentation
- https://arxiv.org/abs/2404.02616
- arXiv:2404.02616v2 Announce Type: replace
-Abstract: Topic relevance between query and document is a very important part of social search, which can evaluate the degree of matching between document and user's requirement. In most social search scenarios such as Dianping, modeling search relevance always faces two challenges. One is that many documents in social search are very long and have much redundant information. The other is that the training data for search relevance model is difficult to get, especially for multi-classification relevance model. To tackle above two problems, we first take query concatenated with the query-based summary and the document summary without query as the input of topic relevance model, which can help model learn the relevance degree between query and the core topic of document. Then, we utilize the language understanding and generation abilities of large language model (LLM) to rewrite and generate query from queries and documents in existing training data, which can construct new query-document pairs as training data. Extensive offline experiments and online A/B tests show that the proposed approaches effectively improve the performance of relevance modeling.
- oai:arXiv.org:2404.02616v2
- cs.IR
+ Anthropocentric bias in language model evaluation
+ https://arxiv.org/abs/2407.03859
+ arXiv:2407.03859v3 Announce Type: replace
+Abstract: Evaluating the cognitive capacities of large language models (LLMs) requires overcoming not only anthropomorphic but also anthropocentric biases. This article identifies two types of anthropocentric bias that have been neglected: overlooking how auxiliary factors can impede LLM performance despite competence ("auxiliary oversight"), and dismissing LLM mechanistic strategies that differ from those of humans as not genuinely competent ("mechanistic chauvinism"). Mitigating these biases necessitates an empirically-driven, iterative approach to mapping cognitive tasks to LLM-specific capacities and mechanisms, which can be done by supplementing carefully designed behavioral experiments with mechanistic studies.
+ oai:arXiv.org:2407.03859v3cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Yizhu Liu, Ran Tao, Shengyu Guo, Yifan Yang
+ 10.1162/COLI.a.582
+ Computational Linguistics, 1-10. (2025)
+ Rapha\"el Milli\`ere, Charles Rathkopf
- AI-powered Code Review with LLMs: Early Results
- https://arxiv.org/abs/2404.18496
- arXiv:2404.18496v2 Announce Type: replace
-Abstract: In this paper, we present a novel approach to improving software quality and efficiency through a Large Language Model (LLM)-based model designed to review code and identify potential issues. Our proposed LLM-based AI agent model is trained on large code repositories. This training includes code reviews, bug reports, and documentation of best practices. It aims to detect code smells, identify potential bugs, provide suggestions for improvement, and optimize the code. Unlike traditional static code analysis tools, our LLM-based AI agent has the ability to predict future potential risks in the code. This supports a dual goal of improving code quality and enhancing developer education by encouraging a deeper understanding of best practices and efficient coding techniques. Furthermore, we explore the model's effectiveness in suggesting improvements that significantly reduce post-release bugs and enhance code review processes, as evidenced by an analysis of developer sentiment toward LLM feedback. For future work, we aim to assess the accuracy and efficiency of LLM-generated documentation updates in comparison to manual methods. This will involve an empirical study focusing on manually conducted code reviews to identify code smells and bugs, alongside an evaluation of best practice documentation, augmented by insights from developer discussions and code reviews. Our goal is to not only refine the accuracy of our LLM-based tool but also to underscore its potential in streamlining the software development lifecycle through proactive code improvement and education.
- oai:arXiv.org:2404.18496v2
- cs.SE
- Thu, 11 Dec 2025 00:00:00 -0500
+ Computational Modelling for Combinatorial Game Strategies
+ https://arxiv.org/abs/2408.03955
+ arXiv:2408.03955v3 Announce Type: replace
+Abstract: We develop a generic computational model that can be used effectively for establishing the existence of winning strategies for concrete finite combinatorial games. Our modelling is (equational) logic-based involving advanced techniques from algebraic specification, and it can be executed by equational programming systems such as those from the OBJ-family. We show how this provides a form of experimental mathematics for strategy problems involving combinatorial games. We do this by defining general methods and by illustrating these with case studies.
+ oai:arXiv.org:2408.03955v3
+ cs.LO
+ cs.GT
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zeeshan Rasheed, Malik Abdul Sami, Muhammad Waseem, Kai-Kristian Kemell, Xiaofeng Wang, Anh Nguyen, Kari Syst\"a, Pekka Abrahamsson
+ http://creativecommons.org/licenses/by/4.0/
+ R\u{a}zvan Diaconescu
- Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search
- https://arxiv.org/abs/2405.06073
- arXiv:2405.06073v2 Announce Type: replace
-Abstract: We study the robustness of data-centric methods to find neural network architectures, known as neural architecture search (NAS), against data poisoning. To audit this robustness, we design a poisoning framework that enables the systematic evaluation of the ability of NAS to produce architectures under data corruption. Our framework examines four off-the-shelf NAS algorithms, representing different approaches to architecture discovery, against four data poisoning attacks, including one we tailor specifically for NAS. In our evaluation with the CIFAR-10 and CIFAR-100 benchmarks, we show that NAS is \emph{seemingly} robust to data poisoning, showing marginal accuracy drops even under large poisoning budgets. However, we demonstrate that when considering NAS algorithms designed to achieve a few percentage points of accuracy gain, this expected improvement can be substantially diminished under data poisoning. We also show that the reduction varies across NAS algorithms and analyze the factors contributing to their robustness. Our findings are: (1) Training-based NAS algorithms are the least robust due to their reliance on data. (2) Training-free NAS approaches are the most robust but produce architectures that perform similarly to random selections from the search space. (3) NAS algorithms can produce architectures with improved accuracy, even when using out-of-distribution data like MNIST. We lastly discuss potential countermeasures. Our code is available at: https://github.com/ztcoalson/NAS-Robustness-to-Data-Poisoning
- oai:arXiv.org:2405.06073v2
- cs.LG
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Consensus over Clustered Networks Using Intermittent and Asynchronous Output Feedback
+ https://arxiv.org/abs/2408.11752
+ arXiv:2408.11752v3 Announce Type: replace
+Abstract: Distributed consensus protocols provide a mechanism for spreading information within clustered networks, allowing agents and clusters to make decisions without requiring direct access to the state of the ensemble. In this work, we propose a strategy for achieving system-wide consensus in the states of identical linear time-invariant systems coupled by an undirected graph whose directed sub-graphs are available only at sporadic times. Within this work, the agents of the network are organized into pairwise disjoint clusters, which induce sub-graphs of the undirected parent graph. Some cluster sub-graph pairs are linked by an inter-cluster sub-graph, where the union of all cluster and inter-cluster sub-graphs yields the undirected parent graph. Each agent utilizes a distributed consensus protocol with components that are updated intermittently and asynchronously with respect to other agents and inter-clusters. The closed-loop ensemble dynamics is modeled as a hybrid system, and a Lyapunov-based stability analysis yields sufficient conditions for rendering the agreement subspace (consensus set) globally exponentially stable. Furthermore, an input-to-state stability argument demonstrates the consensus set is robust to a large class of perturbations. A numerical simulation considering both nominal and perturbed scenarios is provided for validation purposes.
+ oai:arXiv.org:2408.11752v3
+ eess.SY
+ cs.SY
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zachary Coalson, Huazheng Wang, Qingyun Wu, Sanghyun Hong
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Federico M. Zegers, Sean Phillips
- Multi-Scale Direction-Aware Network for Infrared Small Target Detection
- https://arxiv.org/abs/2406.02037
- arXiv:2406.02037v4 Announce Type: replace
-Abstract: Infrared small target detection faces the problem that it is difficult to effectively separate the background and the target. Existing deep learning-based methods focus on edge and shape features, but ignore the richer structural differences and detailed information embedded in high-frequency components from different directions, thereby failing to fully exploit the value of high-frequency directional features in target perception. To address this limitation, we propose a multi-scale direction-aware network (MSDA-Net), which is the first attempt to integrate the high-frequency directional features of infrared small targets as domain prior knowledge into neural networks. Specifically, to fully mine the high-frequency directional features, on the one hand, a high-frequency direction injection (HFDI) module without trainable parameters is constructed to inject the high-frequency directional information of the original image into the network. On the other hand, a multi-scale direction-aware (MSDA) module is constructed, which promotes the full extraction of local relations at different scales and the full perception of key features in different directions. In addition, considering the characteristics of infrared small targets, we construct a feature aggregation (FA) structure to address target disappearance in high-level feature maps, and a feature calibration fusion (FCF) module to alleviate feature bias during cross-layer feature fusion. Extensive experimental results show that our MSDA-Net achieves state-of-the-art (SOTA) results on multiple public datasets. The code can be available at https://github.com/YuChuang1205/MSDA-Net
- oai:arXiv.org:2406.02037v4
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Machine Learning for Quantifier Selection in cvc5
+ https://arxiv.org/abs/2408.14338
+ arXiv:2408.14338v2 Announce Type: replace
+Abstract: In this work we considerably improve the state-of-the-art SMT solving on first-order quantified problems by efficient machine learning guidance of quantifier selection. Quantifiers represent a significant challenge for SMT and are technically a source of undecidability. In our approach, we train an efficient machine learning model that informs the solver which quantifiers should be instantiated and which not. Each quantifier may be instantiated multiple times and the set of the active quantifiers changes as the solving progresses. Therefore, we invoke the ML predictor many times, during the whole run of the solver. To make this efficient, we use fast ML models based on gradient boosting decision trees. We integrate our approach into the state-of-the-art cvc5 SMT solver and show a considerable increase of the system's holdout-set performance after training it on a large set of first-order problems collected from the Mizar Mathematical Library.
+ oai:arXiv.org:2408.14338v2
+ cs.AI
+ cs.LG
+ cs.LO
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jinmiao Zhao, Zelin Shi, Chuang Yu, Yunpeng Liu, Xinyi Ying, Yimian Dai
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1016/j.ijar.2025.109602
+ Jan Jakub\r{u}v, Mikol\'a\v{s} Janota, Jelle Piepenbrock, Josef Urban
- A survey on the impacts of recommender systems on users, items, and human-AI ecosystems
- https://arxiv.org/abs/2407.01630
- arXiv:2407.01630v2 Announce Type: replace
-Abstract: Recommendation systems and assistants (in short, recommenders) influence through online platforms most actions of our daily lives, suggesting items or providing solutions based on users' preferences or requests. This survey systematically reviews, categories, and discusses the impact of recommenders in four human-AI ecosystems -- social media, online retail, urban mapping and generative AI ecosystems. Its scope is to systematise a fast-growing field in which terminologies employed to classify methodologies and outcomes are fragmented and unsystematic. This is a crucial contribution to the literature because terminologies vary substantially across disciplines and ecosystems, hindering comparison and accumulation of knowledge in the field. We follow the customary steps of qualitative systematic review, gathering 154 articles from different disciplines to develop a parsimonious taxonomy of methodologies employed (empirical, simulation, observational, controlled), outcomes observed (concentration, content degradation, discrimination, diversity, echo chamber, filter bubble, homogenisation, polarisation, radicalisation, volume), and their level of analysis (individual, item, and ecosystem). We systematically discuss substantive and methodological commonalities across ecosystems, and highlight potential avenues for future research. The survey is addressed to scholars and practitioners interested in different human-AI ecosystems, policymakers and institutional stakeholders who want to understand better the measurable outcomes of recommenders, and tech companies who wish to obtain a systematic view of the impact of their recommenders.
- oai:arXiv.org:2407.01630v2
- cs.IR
- cs.AI
- cs.CY
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Non-deterministic, probabilistic, and quantum effects through the lens of event structures (Technical report)
+ https://arxiv.org/abs/2408.14563
+ arXiv:2408.14563v4 Announce Type: replace
+Abstract: In this paper, we consider event structures and their probabilistic and quantum extensions as originally defined by Winskel. If these structures have already been part of sophisticated computational models, they have rarely been directly studied as an immediate model of execution traces of programs. This paper offers such an analysis. We propose a simple imperative operational framework and show how to derive soundness and adequacy results with event structures considered as a semantics. We show how event structures naturally handle non-deterministic, probabilistic and quantum effects.
+ oai:arXiv.org:2408.14563v4
+ cs.LO
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Luca Pappalardo, Salvatore Citraro, Giuliano Cornacchia, Mirco Nanni, Valentina Pansanella, Giulio Rossetti, Gizem Gezici, Fosca Giannotti, Margherita Lalli, Giovanni Mauro, Gabriele Barlacchi, Daniele Gambetta, Virginia Morini, Dino Pedreschi, Emanuele Ferragina
+ http://creativecommons.org/licenses/by/4.0/
+ V\'itor Fernandes, Marc de Visme, Beno\^it Valiron
- Leveraging Machine Learning to Identify Gendered Stereotypes and Body Image Concerns on Diet and Fitness Online Forums
- https://arxiv.org/abs/2407.03551
- arXiv:2407.03551v2 Announce Type: replace
-Abstract: The pervasive expectations about ideal body types in Western society can lead to body image concerns, dissatisfaction, and in extreme cases, eating disorders and other psychopathologies related to body image. While previous research has focused on online pro-anorexia communities glorifying the "thin ideal," less attention has been given to the broader spectrum of body image concerns or how emerging disorders like muscle dysmorphia ("bigorexia") present on online platforms. To address this gap, we analyze 46 Reddit forums related to diet, fitness, and mental health. We map these communities along gender and body ideal dimensions, revealing distinct patterns of emotional expression and community support. Feminine-oriented communities, especially those endorsing the thin ideal, express higher levels of negative emotions and receive caring comments in response. In contrast, muscular ideal communities display less negativity, regardless of gender orientation, but receive aggressive compliments in response, marked by admiration and toxicity. Mental health discussions align more with thin ideal, feminine-leaning spaces. By uncovering these gendered emotional dynamics, our findings can inform the development of moderation strategies that foster supportive interactions while reducing exposure to harmful content.
- oai:arXiv.org:2407.03551v2
- cs.SI
- cs.CL
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ From Innermost to Full Probabilistic Term Rewriting: Almost-Sure Termination, Complexity, and Modularity
+ https://arxiv.org/abs/2409.17714
+ arXiv:2409.17714v4 Announce Type: replace
+Abstract: There are many evaluation strategies for term rewrite systems, but automatically proving termination or analyzing complexity is usually easiest for innermost rewriting. Several syntactic criteria exist when innermost termination implies (full) termination or when runtime complexity and innermost runtime complexity coincide. We adapt these criteria to the probabilistic setting, e.g., we show when it suffices to analyze almost-sure termination w.r.t. innermost rewriting in order to prove (full) almost-sure termination of probabilistic term rewrite systems. These criteria can be applied for both termination and complexity analysis in the probabilistic setting. We implemented and evaluated our new contributions in the tool AProVE. Moreover, we also use our new results to investigate the modularity of probabilistic termination properties.
+ oai:arXiv.org:2409.17714v4
+ cs.LO
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Minh Duc Chu, Cinthia S\'anchez, Zihao He, Rebecca Dorn, Stuart Murray, Kristina Lerman
+ Jan-Christoph Kassing, J\"urgen Giesl
- Entropy-Informed Weighting Channel Normalizing Flow for Deep Generative Models
- https://arxiv.org/abs/2407.04958
- arXiv:2407.04958v2 Announce Type: replace
-Abstract: Normalizing Flows (NFs) are widely used in deep generative models for their exact likelihood estimation and efficient sampling.
- However, they require substantial memory since the latent space matches the input dimension.
- Multi-scale architectures address this by progressively reducing latent dimensions while preserving reversibility.
- Existing multi-scale architectures use simple, static channel-wise splitting, limiting expressiveness. To improve this, we introduce a regularized, feature-dependent $\mathtt{Shuffle}$ operation and integrate it into vanilla multi-scale architecture.
- This operation adaptively generates channel-wise weights and shuffles latent variables before splitting them.
- We observe that such operation guides the variables to evolve in the direction of entropy increase, hence we refer to NFs with the $\mathtt{Shuffle}$ operation as \emph{Entropy-Informed Weighting Channel Normalizing Flow} (EIW-Flow).
- Extensive experiments on CIFAR-10, CelebA, ImageNet, and LSUN demonstrate that EIW-Flow achieves state-of-the-art density estimation and competitive sample quality for deep generative modeling, with minimal computational overhead.
- oai:arXiv.org:2407.04958v2
- cs.LG
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ How to Bridge Spatial and Temporal Heterogeneity in Link Prediction? A Contrastive Method
+ https://arxiv.org/abs/2411.00612
+ arXiv:2411.00612v3 Announce Type: replace
+Abstract: Temporal Heterogeneous Networks play a crucial role in capturing the dynamics and heterogeneity inherent in various real-world complex systems, rendering them a noteworthy research avenue for link prediction. However, existing methods fail to capture the fine-grained differential distribution patterns and temporal dynamic characteristics, which we refer to as spatial heterogeneity and temporal heterogeneity. To overcome such limitations, we propose a novel \textbf{C}ontrastive Learning-based \textbf{L}ink \textbf{P}rediction model, \textbf{CLP}, which employs a multi-view hierarchical self-supervised architecture to encode spatial and temporal heterogeneity. Specifically, aiming at spatial heterogeneity, we develop a spatial feature modeling layer to capture the fine-grained topological distribution patterns from node- and edge-level representations, respectively. Furthermore, aiming at temporal heterogeneity, we devise a temporal information modeling layer to perceive the evolutionary dependencies of dynamic graph topologies from time-level representations. Finally, we encode the spatial and temporal distribution heterogeneity from a contrastive learning perspective, enabling a comprehensive self-supervised hierarchical relation modeling for the link prediction task. Extensive experiments conducted on four real-world dynamic heterogeneous network datasets verify that our \mymodel consistently outperforms the state-of-the-art models, demonstrating an average improvement of 10.10\%, 13.44\% in terms of AUC and AP, respectively.
+ oai:arXiv.org:2411.00612v3
+ cs.SI
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1016/j.patcog.2025.112442
- Chen, W., Du, S., Li, S., Zeng, D., & Paisley, J. (2025). Entropy-informed weighting channel normalizing flow for deep generative models. Pattern Recognition, 112442
- Wei Chen, Shian Du, Shigui Li, Delu Zeng, John Paisley
+ Yu Tai, Xinglong Wu, Hongwei Yang, Hui He, Duanjing Chen, Yuanming Shao, Weizhe Zhang
- Counting Small Induced Subgraphs: Hardness via Fourier Analysis
- https://arxiv.org/abs/2407.07051
- arXiv:2407.07051v3 Announce Type: replace
-Abstract: For a fixed graph property $\Phi$ and integer $k \geq 1$, consider the problem of counting the induced $k$-vertex subgraphs satisfying $\Phi$ in an input graph $G$. This problem can be solved by brute-force in time $O(n^{k})$. Under ETH, we prove several lower bounds on the optimal exponent in this running time:
- If $\Phi$ is edge-monotone (i.e., closed under deleting edges), then ETH rules out $n^{o(k)}$ time algorithms for this problem. This strengthens a recent lower bound by D\"{o}ring, Marx and Wellnitz [STOC 2024]. Our result also holds for counting modulo fixed primes.
- If at most $(2-\varepsilon)^{\binom{k}{2}}$ graphs on $k$ vertices satisfy $\Phi$, for some $\varepsilon > 0$, then ETH also rules out an exponent of $o(k)$. This holds even when the graphs in $\Phi$ have arbitrary individual weights, generalizing previous results for hereditary properties by Focke and Roth [SIAM J. Comput. 2024].
- If $\Phi$ is non-trivial and excludes $\beta_\Phi$ edge-densities, then the optimal exponent under ETH is $\Omega(\beta_\Phi)$. This holds even when the graphs in $\Phi$ have arbitrary individual weights, generalizing previous results by Roth, Schmitt and Wellnitz [SIAM J. Comput. 2024].
- In all cases, we also obtain $\mathsf{\#W[1]}$-hardness if $k$ is part of the input and considered as the parameter. We also obtain lower bounds on the Weisfeiler-Leman dimension. As opposed to the nontrivial techniques from combinatorics, group theory, and simplicial topology used before, our results follow from a relatively straightforward ``algebraization'' of the problem in terms of polynomials, combined with applications of simple algebraic facts, which can also be interpreted in terms of Fourier analysis.
- oai:arXiv.org:2407.07051v3
- cs.CC
- cs.DS
- Thu, 11 Dec 2025 00:00:00 -0500
+ Deferred Poisoning: Making the Model More Vulnerable via Hessian Singularization
+ https://arxiv.org/abs/2411.03752
+ arXiv:2411.03752v3 Announce Type: replace
+Abstract: Recent studies have shown that deep learning models are very vulnerable to poisoning attacks. Many defense methods have been proposed to address this issue. However, traditional poisoning attacks are not as threatening as commonly believed. This is because they often cause differences in how the model performs on the training set compared to the validation set. Such inconsistency can alert defenders that their data has been poisoned, allowing them to take the necessary defensive actions. In this paper, we introduce a more threatening type of poisoning attack called the Deferred Poisoning Attack. This new attack allows the model to function normally during the training and validation phases but makes it very sensitive to evasion attacks or even natural noise. We achieve this by ensuring the poisoned model's loss function has a similar value as a normally trained model at each input sample but with a large local curvature. A similar model loss ensures that there is no obvious inconsistency between the training and validation accuracy, demonstrating high stealthiness. On the other hand, the large curvature implies that a small perturbation may cause a significant increase in model loss, leading to substantial performance degradation, which reflects a worse robustness. We fulfill this purpose by making the model have singular Hessian information at the optimal point via our proposed Singularization Regularization term. We have conducted both theoretical and empirical analyses of the proposed method and validated its effectiveness through experiments on image classification tasks. Furthermore, we have confirmed the hazards of this form of poisoning attack under more general scenarios using natural noise, offering a new perspective for research in the field of security.
+ oai:arXiv.org:2411.03752v3
+ cs.LG
+ cs.CR
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Radu Curticapean, Daniel Neuen
+ Yuhao He, Jinyu Tian, Xianwei Zheng, Li Dong, Yuanman Li, Jiantao Zhou
- Towards Robust Infrared Small Target Detection: A Feature-Enhanced and Sensitivity-Tunable Framework
- https://arxiv.org/abs/2407.20090
- arXiv:2407.20090v3 Announce Type: replace
-Abstract: Recently, single-frame infrared small target (SIRST) detection technology has attracted widespread attention. Different from most existing deep learning-based methods that focus on improving network architectures, we propose a feature-enhanced and sensitivity-tunable (FEST) framework, which is compatible with existing SIRST detection networks and further enhances their detection performance. The FEST framework improves the model's robustness from two aspects: feature enhancement and target confidence regulation. For feature enhancement, we employ a multi-scale fusion strategy to improve the model's perception to multi-scale features of multi-size targets, and design an edge enhancement difficulty mining (EEDM) loss to guide the network to continuously focus on challenging target regions and edge features during training. For target confidence regulation, an adjustable sensitivity (AS) strategy is proposed for network post-processing. This strategy enhances the model's adaptability in complex scenarios and significantly improves the detection rate of infrared small targets while maintaining segmentation accuracy. Extensive experimental results show that our FEST framework can effectively enhance the performance of existing SIRST detection networks. The code is available at https://github.com/YuChuang1205/FEST-Framework
- oai:arXiv.org:2407.20090v3
+ l0-Regularized Sparse Coding-based Interpretable Network for Multi-Modal Image Fusion
+ https://arxiv.org/abs/2411.04519
+ arXiv:2411.04519v2 Announce Type: replace
+Abstract: Multi-modal image fusion (MMIF) enhances the information content of the fused image by combining the unique as well as common features obtained from different modality sensor images, improving visualization, object detection, and many more tasks. In this work, we introduce an interpretable network for the MMIF task, named FNet, based on an $\ell_0$-regularized multi-modal convolutional sparse coding (MCSC) model. Specifically, for solving the $\ell_0$-regularized CSC problem, we design a learnable $\ell_0$-regularized sparse coding (LZSC) block in a principled manner through deep unfolding. Given different modality source images, FNet first separates the unique and common features from them using the LZSC block and then these features are combined to generate the final fused image. Additionally, we propose an $\ell_0$-regularized MCSC model for the inverse fusion process. Based on this model, we introduce an interpretable inverse fusion network named IFNet, which is utilized during FNet's training. Extensive experiments show that FNet achieves high-quality fusion results across eight different MMIF datasets. Furthermore, we show that FNet enhances downstream object detection \textcolor[rgb]{ 0, 0, 0}{and semantic segmentation} in visible-thermal image pairs. We have also visualized the intermediate results of FNet, which demonstrates the good interpretability of our network. Link for code and models: https://github.com/gargi884/FNet-MMIF.
+ oai:arXiv.org:2411.04519v2cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jinmiao Zhao, Zelin Shi, Chuang Yu, Yunpeng Liu, Yimian Dai
-
-
- Studying the Effects of Collaboration in Interactive Theme Discovery Systems
- https://arxiv.org/abs/2408.09030
- arXiv:2408.09030v3 Announce Type: replace
-Abstract: NLP-assisted solutions have gained considerable traction to support qualitative data analysis. However, there does not exist a unified evaluation framework that can account for the many different settings in which qualitative researchers may employ them. In this paper, we take a first step in this direction by proposing an evaluation framework to study the way in which different tools may result in different outcomes depending on the collaboration strategy employed. Specifically, we study the impact of synchronous vs. asynchronous collaboration using two different NLP-assisted qualitative research tools and present a comprehensive analysis of significant differences in the consistency, cohesiveness, and correctness of their outputs.
- oai:arXiv.org:2408.09030v3
- cs.CL
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Alvin Po-Chun Chen, Dananjay Srinivas, Rohan Das, Alexandra Barry, Maksim Seniw, Maria Leonor Pacheco
+ Gargi Panda, Soumitra Kundu, Saumik Bhattacharya, Aurobinda Routray
- Point Neuron Learning: A New Physics-Informed Neural Network Architecture
- https://arxiv.org/abs/2408.16969
- arXiv:2408.16969v2 Announce Type: replace
-Abstract: Machine learning and neural networks have advanced numerous research domains, but challenges such as large training data requirements and inconsistent model performance hinder their application in certain scientific problems. To overcome these challenges, researchers have investigated integrating physics principles into machine learning models, mainly through: (i) physics-guided loss functions, generally termed as physics-informed neural networks, and (ii) physics-guided architectural design. While both approaches have demonstrated success across multiple scientific disciplines, they have limitations including being trapped to a local minimum, poor interpretability, and restricted generalizability. This paper proposes a new physics-informed neural network (PINN) architecture that combines the strengths of both approaches by embedding the fundamental solution of the wave equation into the network architecture, enabling the learned model to strictly satisfy the wave equation. The proposed point neuron learning method can model an arbitrary sound field based on microphone observations without any dataset. Compared to other PINN methods, our approach directly processes complex numbers and offers better interpretability and generalizability. We evaluate the versatility of the proposed architecture by a sound field reconstruction problem in a reverberant environment. Results indicate that the point neuron method outperforms two competing methods and can efficiently handle noisy environments with sparse microphone observations.
- oai:arXiv.org:2408.16969v2
- cs.LG
- cs.SD
- eess.AS
- eess.SP
- Thu, 11 Dec 2025 00:00:00 -0500
+ A Generation Framework with Strict Constraints for Crystal Materials Design
+ https://arxiv.org/abs/2411.08464
+ arXiv:2411.08464v3 Announce Type: replace
+Abstract: The design of crystal materials plays a critical role in areas such as new energy development, biomedical engineering, and semiconductors. Recent advances in data-driven methods have enabled the generation of diverse crystal structures. However, most existing approaches still rely on random sampling without strict constraints, requiring multiple post-processing steps to identify stable candidates with the desired physical and chemical properties. In this work, we present a new constrained generation framework that takes multiple constraints as input and enables the generation of crystal structures with specific chemical and properties. In this framework, intermediate constraints, such as symmetry information and composition ratio, are generated by a constraint generator based on large language models (LLMs), which considers the target properties. These constraints are then used by a subsequent crystal structure generator to ensure that the structure generation process is under control. Our method generates crystal structures with a probability of meeting the target properties that is more than twice that of existing approaches. Furthermore, nearly 100% of the generated crystals strictly adhere to predefined chemical composition, eliminating the risks of supply chain during production.
+ oai:arXiv.org:2411.08464v3
+ cs.AI
+ cond-mat.mtrl-sci
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1186/s13636-024-00376-0
- Bi H, Abhayapala TD. Point neuron learning: a new physics informed neural network architecture. EURASIP J Audio Speech Music Process 2024, 56 (2024)
- Hanwen Bi, Thushara D. Abhayapala
+ Chao Huang, Jiahui Chen, Chen Chen, Chen Chen, Chunyan Chen, Renjie Su, Shiyu Du
- Tokenizing Motion: A Generative Approach for Scene Dynamics Compression
- https://arxiv.org/abs/2410.09768
- arXiv:2410.09768v4 Announce Type: replace
-Abstract: This paper proposes a novel generative video compression framework that leverages motion pattern priors, derived from subtle dynamics in common scenes (e.g., swaying flowers or a boat drifting on water), rather than relying on video content priors (e.g., talking faces or human bodies). These compact motion priors enable a new approach to ultra-low bitrate communication while achieving high-quality reconstruction across diverse scene contents. At the encoder side, motion priors can be streamlined into compact representations via a dense-to-sparse transformation. At the decoder side, these priors facilitate the reconstruction of scene dynamics using an advanced flow-driven diffusion model. Experimental results illustrate that the proposed method can achieve superior rate-distortion-performance and outperform the state-of-the-art conventional-video codec Enhanced Compression Model (ECM) on-scene dynamics sequences. The project page can be found at-https://github.com/xyzysz/GNVDC.
- oai:arXiv.org:2410.09768v4
+ Dressing the Imagination: A Dataset for AI-Powered Translation of Text into Fashion Outfits and A Novel NeRA Adapter for Enhanced Feature Adaptation
+ https://arxiv.org/abs/2411.13901
+ arXiv:2411.13901v5 Announce Type: replace
+Abstract: Specialized datasets that capture the fashion industry's rich language and styling elements can boost progress in AI-driven fashion design. We present FLORA, (Fashion Language Outfit Representation for Apparel Generation), the first comprehensive dataset containing 4,330 curated pairs of fashion outfits and corresponding textual descriptions. Each description utilizes industry-specific terminology and jargon commonly used by professional fashion designers, providing precise and detailed insights into the outfits. Hence, the dataset captures the delicate features and subtle stylistic elements necessary to create high-fidelity fashion designs.
+ We demonstrate that fine-tuning generative models on the FLORA dataset significantly enhances their capability to generate accurate and stylistically rich images from textual descriptions of fashion sketches. FLORA will catalyze the creation of advanced AI models capable of comprehending and producing subtle, stylistically rich fashion designs. It will also help fashion designers and end-users to bring their ideas to life.
+ As a second orthogonal contribution, we introduce NeRA (Nonlinear low-rank Expressive Representation Adapter), a novel adapter architecture based on Kolmogorov-Arnold Networks (KAN). Unlike traditional PEFT techniques such as LoRA, LoKR, DoRA, and LoHA that use MLP adapters, NeRA uses learnable spline-based nonlinear transformations, enabling superior modeling of complex semantic relationships, achieving strong fidelity, faster convergence and semantic alignment. Extensive experiments on our proposed FLORA and LAION-5B datasets validate the superiority of NeRA over existing adapters.
+ We will open-source both the FLORA dataset and our implementation code.
+ oai:arXiv.org:2411.13901v5cs.CV
- eess.IV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Shanzhi Yin, Zihan Zhang, Bolin Chen, Shiqi Wang, Yan Ye
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Gayatri Deshmukh, Somsubhra De, Chirag Sehgal, Jishu Sen Gupta, Sparsh Mittal
- Self-Supervised Learning and Opportunistic Inference for Continuous Monitoring of Freezing of Gait in Parkinson's Disease
- https://arxiv.org/abs/2410.21326
- arXiv:2410.21326v2 Announce Type: replace
-Abstract: Parkinson's disease (PD) is a progressive neurological disorder that impacts the quality of life significantly, making in-home monitoring of motor symptoms such as Freezing of Gait (FoG) critical. However, existing symptom monitoring technologies are power-hungry, rely on extensive amounts of labeled data, and operate in controlled settings. These shortcomings limit real-world deployment of the technology. This work presents LIFT-PD, a computationally-efficient self-supervised learning framework for real-time FoG detection. Our method combines self-supervised pre-training on unlabeled data with a novel differential hopping windowing technique to learn from limited labeled instances. An opportunistic model activation module further minimizes power consumption by selectively activating the deep learning module only during active periods. Extensive experimental results show that LIFT-PD achieves a 7.25% increase in precision and 4.4% improvement in accuracy compared to supervised models while using as low as 40% of the labeled training data used for supervised learning. Additionally, the model activation module reduces inference time by up to 67% compared to continuous inference. LIFT-PD paves the way for practical, energy-efficient, and unobtrusive in-home monitoring of PD patients with minimal labeling requirements.
- oai:arXiv.org:2410.21326v2
- cs.LG
+ Brain-like emergent properties in deep networks: impact of network architecture, datasets and training
+ https://arxiv.org/abs/2411.16326
+ arXiv:2411.16326v3 Announce Type: replace
+Abstract: Despite the rapid pace at which deep networks are improving on standardized vision benchmarks, they are still outperformed by humans on real-world vision tasks. One solution to this problem is to make deep networks more brain-like. Although there are several benchmarks that compare the ability of deep networks to predict brain responses on natural images, they do not capture subtle but important emergent properties present in brains. It is also unclear which design principle -- architecture, training data, or training regime -- would have the greatest impact on these emergent properties. To investigate these issues, we systematically evaluated over 30 state-of-the-art networks with varying network architectures, training datasets, and training regimes for the presence or absence of brain-like properties. Our main findings are as follows. First, network architecture had the strongest impact on brain-like properties compared to dataset and training regime variations. Second, networks varied widely in their alignment to the brain with no single network outperforming all others. Taken together, our results offer a principled and interpretable path toward closing the gap between artificial and human vision.
+ oai:arXiv.org:2411.16326v3
+ cs.CVcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Shovito Barua Soumma, Daniel Peterson, Shyamal Mehta, Hassan Ghasemzadeh
+ http://creativecommons.org/licenses/by/4.0/
+ Niranjan Rajesh, Georgin Jacob, SP Arun
- Control Node Placement and Structural Controllability of Water Quality Dynamics in Drinking Networks
- https://arxiv.org/abs/2411.01361
- arXiv:2411.01361v3 Announce Type: replace
-Abstract: Chlorine, the most widely used disinfectant, needs to be adequately distributed in water distribution networks (WDNs) to maintain consistent residual levels and ensure safe water. This is performed through control node injections at the treatment plant via booster stations distributed across the WDNs. While previous studies have applied various optimization-based approaches for booster station placement, many have failed to consider the coverage of the station injections and the dynamic nature of WDNs. In particular, variations in hydraulics and demand significantly impact the reachability and efficacy of chlorine injections which then impact optimal placement of booster stations. This study introduces a novel formulation that combines control- and graph-theoretic approaches to solve the booster station placement problem. Unlike traditional methods, our approach emphasizes maximizing the system's ability to control disinfectant levels with minimal control energy, taking into account the time-varying hydraulic profiles that lead to different optimal station placements. We propose a simple weighting technique to determine the placements by assessing the structural controllability of each configuration, based on the network's topology, independent of specific parameters like decay rates or pipe roughness. This method ensures effective chlorine coverage across the network. Our approach is validated on different networks, demonstrating its operational effectiveness, scalability, and practicality.
- oai:arXiv.org:2411.01361v3
- eess.SY
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Object-centric proto-symbolic behavioural reasoning from pixels
+ https://arxiv.org/abs/2411.17438
+ arXiv:2411.17438v3 Announce Type: replace
+Abstract: Autonomous intelligent agents must bridge computational challenges at disparate levels of abstraction, from the low-level spaces of sensory input and motor commands to the high-level domain of abstract reasoning and planning. A key question in designing such agents is how best to instantiate the representational space that will interface between these two levels -- ideally without requiring supervision in the form of expensive data annotations. These objectives can be efficiently achieved by representing the world in terms of objects (grounded in perception and action). In this work, we present a novel, brain-inspired, deep-learning architecture that learns from pixels to interpret, control, and reason about its environment, using object-centric representations. We show the utility of our approach through tasks in synthetic environments that require a combination of (high-level) logical reasoning and (low-level) continuous control. Results show that the agent can learn emergent conditional behavioural reasoning, such as $(A \to B) \land (\neg A \to C)$, as well as logical composition $(A \to B) \land (A \to C) \vdash A \to (B \land C)$ and XOR operations, and successfully controls its environment to satisfy objectives deduced from these logical rules. The agent can adapt online to unexpected changes in its environment and is robust to mild violations of its world model, thanks to dynamic internal desired goal generation. While the present results are limited to synthetic settings (2D and 3D activated versions of dSprites), which fall short of real-world levels of complexity, the proposed architecture shows how to manipulate grounded object representations, as a key inductive bias for unsupervised learning, to enable behavioral reasoning.
+ oai:arXiv.org:2411.17438v3
+ cs.AI
+ cs.CV
+ cs.LG
+ cs.NE
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Salma M. Elsherif, Ahmad F. Taha
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ruben van Bergen, Justus H\"ubotter, Alma Lago, Pablo Lanillos
- Adversarial-Robustness-Guided Graph Pruning
- https://arxiv.org/abs/2411.12331
- arXiv:2411.12331v2 Announce Type: replace
-Abstract: Graph learning plays a central role in many data mining and machine learning tasks, such as manifold learning, data representation and analysis, dimensionality reduction, clustering, and visualization. In this work, we propose a highly scalable, adversarial-robustness-guided graph pruning framework for learning graph topologies from data. By performing a spectral adversarial robustness evaluation, our method aims to learn sparse, undirected graphs that help the underlying algorithms resist noise and adversarial perturbations. In particular, we explicitly identify and prune edges that are most vulnerable to adversarial attacks. We use spectral clustering, one of the most representative graph-based machine learning algorithms, to evaluate the proposed framework. Compared with prior state-of-the-art graph learning approaches, the proposed method is more scalable and significantly improves both the computational efficiency and the solution quality of spectral clustering.
- oai:arXiv.org:2411.12331v2
+ SpotLight: Shadow-Guided Object Relighting via Diffusion
+ https://arxiv.org/abs/2411.18665
+ arXiv:2411.18665v3 Announce Type: replace
+Abstract: Recent work has shown that diffusion models can serve as powerful neural rendering engines that can be leveraged for inserting virtual objects into images. However, unlike typical physics-based renderers, these neural rendering engines are limited by the lack of manual control over the lighting, which is often essential for improving or personalizing the desired image outcome. In this paper, we show that precise and controllable lighting can be achieved without any additional training, simply by supplying a coarse shadow hint for the object. Indeed, we show that injecting only the desired shadow of the object into a pre-trained diffusion-based neural renderer enables it to accurately shade the object according to the desired light position, while properly harmonizing the object (and its shadow) within the target background image. Our method, SpotLight, is entirely training-free and leverages existing neural rendering approaches to achieve controllable relighting. We show that SpotLight achieves superior object compositing results, both quantitatively and perceptually, as confirmed by a user study, outperforming existing diffusion-based models specifically designed for relighting. We also demonstrate other applications, such as hand-scribbling shadows and full-image relighting, demonstrating its versatility.
+ oai:arXiv.org:2411.18665v3cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.GR
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yongyu Wang
+ Fr\'ed\'eric Fortier-Chouinard, Zitian Zhang, Louis-Etienne Messier, Mathieu Garon, Anand Bhattad, Jean-Fran\c{c}ois Lalonde
- Understanding World or Predicting Future? A Comprehensive Survey of World Models
- https://arxiv.org/abs/2411.14499
- arXiv:2411.14499v4 Announce Type: replace
-Abstract: The concept of world models has garnered significant attention due to advancements in multimodal large language models such as GPT-4 and video generation models such as Sora, which are central to the pursuit of artificial general intelligence. This survey offers a comprehensive review of the literature on world models. Generally, world models are regarded as tools for either understanding the present state of the world or predicting its future dynamics. This review presents a systematic categorization of world models, emphasizing two primary functions: (1) constructing internal representations to understand the mechanisms of the world, and (2) predicting future states to simulate and guide decision-making. Initially, we examine the current progress in these two categories. We then explore the application of world models in key domains, including generative games, autonomous driving, robotics, and social simulacra, with a focus on how each domain utilizes these aspects. Finally, we outline key challenges and provide insights into potential future research directions. We summarize the representative papers along with their code repositories in https://github.com/tsinghua-fib-lab/World-Model.
- oai:arXiv.org:2411.14499v4
- cs.CL
- cs.AI
+ Enhanced Spatial Clustering of Single-Molecule Localizations with Graph Neural Networks
+ https://arxiv.org/abs/2412.00173
+ arXiv:2412.00173v2 Announce Type: replace
+Abstract: Single-molecule localization microscopy generates point clouds corresponding to fluorophore localizations. Spatial cluster identification and analysis of these point clouds are crucial for extracting insights about molecular organization. However, this task becomes challenging in the presence of localization noise, high point density, or complex biological structures. Here, we introduce MIRO (Multifunctional Integration through Relational Optimization), an algorithm that uses recurrent graph neural networks to transform the point clouds in order to improve clustering efficiency when applying conventional clustering techniques. We show that MIRO supports simultaneous processing of clusters of different shapes and at multiple scales, demonstrating improved performance across varied datasets. Our comprehensive evaluation demonstrates MIRO's transformative potential for single-molecule localization applications, showcasing its capability to revolutionize cluster analysis and provide accurate, reliable details of molecular architecture. In addition, MIRO's robust clustering capabilities hold promise for applications in various fields such as neuroscience, for the analysis of neural connectivity patterns, and environmental science, for studying spatial distributions of ecological data.
+ oai:arXiv.org:2412.00173v2cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ physics.bio-ph
+ physics.data-an
+ q-bio.QM
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Jingtao Ding, Yunke Zhang, Yu Shang, Jie Feng, Yuheng Zhang, Zefang Zong, Yuan Yuan, Hongyuan Su, Nian Li, Jinghua Piao, Yucheng Deng, Nicholas Sukiennik, Chen Gao, Fengli Xu, Yong Li
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1038/s41467-025-65557-7
+ Nat Commun 16, 9693 (2025)
+ Jes\'us Pineda, Sergi Mas\'o-Orriols, Montse Masoliver, Joan Bertran, Mattias Goks\"or, Giovanni Volpe, Carlo Manzo
- RELOCATE: A Simple Training-Free Baseline for Visual Query Localization Using Region-Based Representations
- https://arxiv.org/abs/2412.01826
- arXiv:2412.01826v2 Announce Type: replace
-Abstract: We present RELOCATE, a simple training-free baseline designed to perform the challenging task of visual query localization in long videos. To eliminate the need for task-specific training and efficiently handle long videos, RELOCATE leverages a region-based representation derived from pretrained vision models. At a high level, it follows the classic object localization approach: (1) identify all objects in each video frame, (2) compare the objects with the given query and select the most similar ones, and (3) perform bidirectional tracking to get a spatio-temporal response. However, we propose some key enhancements to handle small objects, cluttered scenes, partial visibility, and varying appearances. Notably, we refine the selected objects for accurate localization and generate additional visual queries to capture visual variations. We evaluate RELOCATE on the challenging Ego4D Visual Query 2D Localization dataset, establishing a new baseline that outperforms prior task-specific methods by 49% (relative improvement) in spatio-temporal average precision.
- oai:arXiv.org:2412.01826v2
+ Quantifying the Reliability of Predictions in Detection Transformers: Object-Level Calibration and Image-Level Uncertainty
+ https://arxiv.org/abs/2412.01782
+ arXiv:2412.01782v3 Announce Type: replace
+Abstract: DETR and its variants have emerged as promising architectures for object detection, offering an end-to-end prediction pipeline. In practice, however, DETRs generate hundreds of predictions that far outnumber the actual objects present in an image. This raises a critical question: which of these predictions could be trusted? Addressing this concern, we provide empirical and theoretical evidence that predictions within the same image play distinct roles, resulting in varying reliability levels. Our analysis reveals that DETRs employ an optimal specialist strategy: one prediction per object is trained to be well-calibrated, while the remaining predictions are trained to suppress their foreground confidence to near zero, even when maintaining accurate localization. We show that this strategy emerges as the loss-minimizing solution to the Hungarian matching algorithm, fundamentally shaping DETRs' outputs. While selecting the well-calibrated predictions is ideal, they are unidentifiable at inference time. This means that any post-processing algorithm poses a risk of outputting a set of predictions with mixed calibration levels. Therefore, practical deployment necessitates a joint evaluation of both the model's calibration quality and the effectiveness of the post-processing algorithm. However, we demonstrate that existing metrics like average precision and expected calibration error are inadequate for this task. To address this issue, we further introduce Object-level Calibration Error (OCE): This object-centric design penalizes both retaining suppressed predictions and missed ground truth foreground objects, making OCE suitable for both evaluating models and identifying reliable prediction subsets. Finally, we present a post hoc uncertainty quantification framework that predicts per-image model accuracy.
+ oai:arXiv.org:2412.01782v3cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Savya Khosla, Sethuraman T V, Alexander Schwing, Derek Hoiem
+ Young-Jin Park, Carson Sobolewski, Navid Azizan
- Transparent and Coherent Procedural Mistake Detection
- https://arxiv.org/abs/2412.11927
- arXiv:2412.11927v5 Announce Type: replace
-Abstract: Procedural mistake detection (PMD) is a challenging problem of classifying whether a human user (observed through egocentric video) has successfully executed a task (specified by a procedural text). Despite significant recent efforts, machine performance in the wild remains nonviable, and the reasoning processes underlying this performance are opaque. As such, we extend PMD to require generating visual self-dialog rationales to inform decisions. Given the impressive, mature image understanding capabilities observed in recent vision-and-language models (VLMs), we curate a suitable benchmark dataset for PMD based on individual frames. As our reformulation enables unprecedented transparency, we leverage a natural language inference (NLI) model to formulate two automated metrics for the coherence of generated rationales. We establish baselines for this reframed task, showing that VLMs struggle off-the-shelf, but with some trade-offs, their accuracy, coherence, and efficiency can be improved by incorporating these metrics into common inference and fine-tuning methods. Lastly, our multi-faceted metrics visualize common outcomes, highlighting areas for further improvement.
- oai:arXiv.org:2412.11927v5
+ ShapeWords: Guiding Text-to-Image Synthesis with 3D Shape-Aware Prompts
+ https://arxiv.org/abs/2412.02912
+ arXiv:2412.02912v2 Announce Type: replace
+Abstract: We introduce ShapeWords, an approach for synthesizing images based on 3D shape guidance and text prompts. ShapeWords incorporates target 3D shape information within specialized tokens embedded together with the input text, effectively blending 3D shape awareness with textual context to guide the image synthesis process. Unlike conventional shape guidance methods that rely on depth maps restricted to fixed viewpoints and often overlook full 3D structure or textual context, ShapeWords generates diverse yet consistent images that reflect both the target shape's geometry and the textual description. Experimental results show that ShapeWords produces images that are more text-compliant, aesthetically plausible, while also maintaining 3D shape awareness.
+ oai:arXiv.org:2412.02912v2
+ cs.CVcs.AI
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.GR
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shane Storks, Itamar Bar-Yossef, Yayuan Li, Zheyuan Zhang, Jason J. Corso, Joyce Chai
+ http://creativecommons.org/licenses/by/4.0/
+ Dmitry Petrov, Pradyumn Goyal, Divyansh Shivashok, Yuanming Tao, Melinos Averkiou, Evangelos Kalogerakis
- Flexible realizations existence: NP-completeness on sparse graphs and algorithms
- https://arxiv.org/abs/2412.13721
- arXiv:2412.13721v2 Announce Type: replace
-Abstract: One of the questions in Rigidity Theory is whether a realization of the vertices of a graph in the plane is flexible, namely, if it allows a continuous deformation preserving the edge lengths. A flexible realization of a connected graph in the plane exists if and only if the graph has a NAC-coloring, which is a surjective edge coloring by two colors such that for each cycle, either all the edges have the same color, or there are at least two edges of each color. The question whether a graph has a NAC-coloring, and hence also the existence of a flexible realization, has been proven to be NP-complete. We show that this question is also NP-complete on graphs with maximum degree five and on graphs with the average degree at most $4+\varepsilon$ for every fixed $\varepsilon >0$. We also show that NAC-colorings can be counted in linear time for graphs with bounded treewidth. Since the only existing implementation of checking the existence of a NAC-coloring is rather naive, we propose new algorithms along with their implementation, which is significantly faster. We also focus on searching all NAC-colorings of a graph, since they provide useful information about its possible flexible realizations.
- oai:arXiv.org:2412.13721v2
- cs.CG
- math.CO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Deep Operator BSDE: a Numerical Scheme to Approximate Solution Operators
+ https://arxiv.org/abs/2412.03405
+ arXiv:2412.03405v2 Announce Type: replace
+Abstract: Motivated by dynamic risk measures and conditional $g$-expectations, in this work we propose a numerical method to approximate the solution operator given by a Backward Stochastic Differential Equation (BSDE). The main ingredients for this are the Wiener chaos decomposition and the classical Euler scheme for BSDEs. We show convergence of this scheme under very mild assumptions, and provide a rate of convergence in more restrictive cases. We then implement it using neural networks, and we present several numerical examples where we can check the accuracy of the method.
+ oai:arXiv.org:2412.03405v2
+ math.NA
+ cs.LG
+ cs.NA
+ math.PR
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Petr La\v{s}tovi\v{c}ka, Jan Legersk\'y
+ Pere D\'iaz Lozano, Giulia Di Nunno
- Condor: A Code Discriminator Integrating General Semantics with Code Details
- https://arxiv.org/abs/2412.17429
- arXiv:2412.17429v2 Announce Type: replace
-Abstract: LLMs demonstrate significant potential across various software engineering tasks. However, they still face challenges in generating correct code on the first attempt when addressing complex requirements. Introducing a discriminator to select reliable outputs from multiple generated results is an effective way to enhance their reliability and stability. Currently, these discriminators fall into two categories: execution-based discriminators and non-execution-based discriminators. Execution-based discriminators face flexibility challenges due to difficulties in obtaining test cases and security concerns, while non-execution-based discriminators, although more flexible, struggle to capture subtle differences in code details. To maintain flexibility while improving the model's ability to capture fine-grained code details, this paper proposes Condor. We first design contrastive learning to optimize the code representations of the base model, enabling it to reflect differences in code details. Then, we leverage intermediate data from the code modification process to further enrich the discriminator's training data, enhancing its ability to discern code details. Experimental results indicate that on the subtle code difference dataset (i.e., CodeNanoFix), Condor significantly outperforms other discriminators in discriminative performance: Condor (1.3B) improves the discriminative F1 score of DeepSeek-Coder (1.3B) from 67% to 73%. In discriminating LLM-generated outputs, Condor (1.3B) and Condor (110M) raise the Pass@1 score of Meta-Llama-3.1-Instruct (70B) on the CodeNanoFix dataset from 52.64% to 62.63% and 59.64%, respectively. Moreover, Condor demonstrates strong generalization capabilities on the APPS, MBPP, and LiveCodeBench datasets. For example, Condor (1.3B) improves the Pass@1 of Meta-Llama-3.1-Instruct (70B) on the APPS dataset by 147.05%.
- oai:arXiv.org:2412.17429v2
- cs.SE
- Thu, 11 Dec 2025 00:00:00 -0500
+ When Worse is Better: Navigating the compression-generation tradeoff in visual tokenization
+ https://arxiv.org/abs/2412.16326
+ arXiv:2412.16326v2 Announce Type: replace
+Abstract: Current image generation methods are based on a two-stage training approach. In stage 1, an auto-encoder is trained to compress an image into a latent space; in stage 2, a generative model is trained to learn a distribution over that latent space. This reveals a fundamental trade-off, do we compress more aggressively to make the latent distribution easier for the stage 2 model to learn even if it makes reconstruction worse? We study this problem in the context of discrete, auto-regressive image generation. Through the lens of scaling laws, we show that smaller stage 2 models can benefit from more compressed stage 1 latents even if reconstruction performance worsens, demonstrating that generation modeling capacity plays a role in this trade-off. Diving deeper, we rigorously study the connection between compute scaling and the stage 1 rate-distortion trade-off. Next, we introduce Causally Regularized Tokenization (CRT), which uses knowledge of the stage 2 generation modeling procedure to embed useful inductive biases in stage 1 latents. This regularization improves stage 2 generation performance better by making the tokens easier to model without affecting the stage 1 compression rate and marginally affecting distortion: we are able to improve compute efficiency 2-3$\times$ over baseline. Finally, we use CRT with further optimizations to the visual tokenizer setup to result in a generative pipeline that matches LlamaGen-3B generation performance (2.18 FID) with half the tokens per image (256 vs. 576) and a fourth the total model parameters (775M vs. 3.1B) while using the same architecture and inference procedure.
+ oai:arXiv.org:2412.16326v2
+ cs.CV
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Qingyuan Liang, Zhao Zhang, Chen Liu, Zeyu Sun, Wenjie Zhang, Yizhou Chen, Zixiao Zhao, Qi Luo, Wentao Wang, Yanjie Jiang, Yingfei Xiong, Lu Zhang
+ Vivek Ramanujan, Kushal Tirumala, Armen Aghajanyan, Luke Zettlemoyer, Ali Farhadi
- Numerical analysis of a stabilized scheme for an optimal control problem governed by a parabolic convection--diffusion equation
- https://arxiv.org/abs/2412.21070
- arXiv:2412.21070v3 Announce Type: replace
-Abstract: We consider an optimal control problem on a bounded domain $\Omega\subset\mathbb{R}^2,$ governed by a parabolic convection--diffusion--reaction equation with pointwise control constraints. We follow the optimize--then--discretize approach, in which the state and co-state variables are discretized using the piecewise linear finite element method. For stabilization, we apply the algebraic flux correction method. Temporal discretization is performed using the backward Euler method. The discrete control variable is obtained by projecting the discretized adjoint state onto the set of admissible controls. The resulting stabilized fully--discrete scheme is nonlinear and a fixed point argument is used to prove its existence and uniqueness under a mild condition between the time step $k$ and the mesh size $h,$ e.g., $k = \mathcal{O}(h).$ Furthermore, assuming sufficient regularity of the exact solution, we derive error estimates in the $L^{2}$ and energy norms with respect to the spatial variable, and in the $\ell^\infty$ norm with respect to time for the state and co-state variables. For the control variable, we also derive an $L^{2}$-norm error estimate with respect to space and an $\ell^\infty$-norm estimate in time. Finally, we present numerical experiments that validate the the order of convergence of the stabilized fully--discrete scheme based on the algebraic flux correction method. We also test the stabilized fully--discrete scheme in optimal control problems that governed by a convection--dominant equation where the solution possesses interior layers.
- oai:arXiv.org:2412.21070v3
+ L\'{e}vy Score Function and Score-Based Particle Algorithm for Nonlinear L\'{e}vy--Fokker--Planck Equations
+ https://arxiv.org/abs/2412.19520
+ arXiv:2412.19520v3 Announce Type: replace
+Abstract: The score function for the diffusion process, also known as the gradient of the log-density, is a basic concept to characterize the probability flow with important applications in the score-based diffusion generative modelling and the simulation of It\^{o} stochastic differential equations. However, neither the probability flow nor the corresponding score function for the diffusion-jump process are known. This paper delivers mathematical derivation, numerical algorithm, and error analysis focusing on the corresponding score function in non-Gaussian systems with jumps and discontinuities represented by the nonlinear L\'{e}vy--Fokker--Planck equations. We propose the L\'{e}vy score function for such stochastic equations, which features a nonlocal double-integral term, and we develop its training algorithm by minimizing the proposed loss function from samples. Based on the equivalence of the probability flow with deterministic dynamics, we develop a self-consistent score-based transport particle algorithm to sample the interactive L\'{e}vy stochastic process at discrete time grid points. We provide error bound for the Kullback--Leibler divergence between the numerical and true probability density functions by overcoming the nonlocal challenges in the L\'{e}vy score. The full error analysis with the Monte Carlo error and the time discretization error is furthermore established. To show the usefulness and efficiency of our approach, numerical examples from applications in biology and finance are tested.
+ oai:arXiv.org:2412.19520v3math.NAcs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Christos Pervolianakis
-
-
- FlippedRAG: Black-Box Opinion Manipulation Adversarial Attacks to Retrieval-Augmented Generation Models
- https://arxiv.org/abs/2501.02968
- arXiv:2501.02968v4 Announce Type: replace
-Abstract: Retrieval-Augmented Generation (RAG) enriches LLMs by dynamically retrieving external knowledge, reducing hallucinations and satisfying real-time information needs. While existing research mainly targets RAG's performance and efficiency, emerging studies highlight critical security concerns. Yet, current adversarial approaches remain limited, mostly addressing white-box scenarios or heuristic black-box attacks without fully investigating vulnerabilities in the retrieval phase. Additionally, prior works mainly focus on factoid Q&A tasks, their attacks lack complexity and can be easily corrected by advanced LLMs. In this paper, we investigate a more realistic and critical threat scenario: adversarial attacks intended for opinion manipulation against black-box RAG models, particularly on controversial topics. Specifically, we propose FlippedRAG, a transfer-based adversarial attack against black-box RAG systems. We first demonstrate that the underlying retriever of a black-box RAG system can be reverse-engineered, enabling us to train a surrogate retriever. Leveraging the surrogate retriever, we further craft target poisoning triggers, altering vary few documents to effectively manipulate both retrieval and subsequent generation. Extensive empirical results show that FlippedRAG substantially outperforms baseline methods, improving the average attack success rate by 16.7%. FlippedRAG achieves on average a 50% directional shift in the opinion polarity of RAG-generated responses, ultimately causing a notable 20% shift in user cognition. Furthermore, we evaluate the performance of several potential defensive measures, concluding that existing mitigation strategies remain insufficient against such sophisticated manipulation attacks. These results highlight an urgent need for developing innovative defensive solutions to ensure the security and trustworthiness of RAG systems.
- oai:arXiv.org:2501.02968v4
- cs.IR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhuo Chen, Yuyang Gong, Jiawei Liu, Miaokun Chen, Haotan Liu, Qikai Cheng, Fan Zhang, Wei Lu, Xiaozhong Liu
+ Yuanfei Huang, Chengyu Liu, Xiang Zhou
- Rule-Based Graph Programs Matching the Time Complexity of Imperative Algorithms
- https://arxiv.org/abs/2501.09144
- arXiv:2501.09144v3 Announce Type: replace
-Abstract: We report on recent advances in rule-based graph programming, which allow us to match the time complexity of some fundamental imperative graph algorithms. In general, achieving the time complexity of graph algorithms implemented in conventional languages using a rule-based graph-transformation language is challenging due to the cost of graph matching. Previous work demonstrated that with rooted rules, certain algorithms can be implemented in the graph programming language GP 2 such that their runtime matches the time complexity of imperative implementations. However, this required input graphs to have a bounded node degree and (for some algorithms) to be connected. In this paper, we overcome these limitations by enhancing the graph data structure generated by the GP 2 compiler and exploiting the new structure in programs. We present three case studies: the first program checks whether input graphs are connected, the second program checks whether input graphs are acyclic, and the third program solves the single-source shortest-paths problem for graphs with integer edge-weights. The first two programs run in linear time on (possibly disconnected) input graphs with arbitrary node degrees. The third program runs in time $O(nm)$ on arbitrary input graphs, matching the time complexity of imperative implementations of the Bellman-Ford algorithm. For each program, we formally prove its correctness and time complexity, and provide runtime experiments on various graph classes.
- oai:arXiv.org:2501.09144v3
- cs.PL
- cs.PF
- Thu, 11 Dec 2025 00:00:00 -0500
+ Toward Intelligent and Secure Cloud: Large Language Model Empowered Proactive Defense
+ https://arxiv.org/abs/2412.21051
+ arXiv:2412.21051v4 Announce Type: replace
+Abstract: The rapid evolution of cloud computing technologies and the increasing number of cloud applications have provided numerous benefits in our daily lives. However, the diversity and complexity of different components pose a significant challenge to cloud security, especially when dealing with sophisticated and advanced cyberattacks such as Denial of Service (DoS). Recent advancements in the large language models (LLMs) offer promising solutions for security intelligence. By exploiting the powerful capabilities in language understanding, data analysis, task inference, action planning, and code generation, we present LLM-PD, a novel defense architecture that proactively mitigates various DoS threats in cloud networks. LLM-PD can efficiently make decisions through comprehensive data analysis and sequential reasoning, as well as dynamically create and deploy actionable defense mechanisms. Furthermore, it can flexibly self-evolve based on experience learned from previous interactions and adapt to new attack scenarios without additional training. Our case study on three distinct DoS attacks demonstrates its remarkable ability in terms of defense effectiveness and efficiency when compared with other existing methods.
+ oai:arXiv.org:2412.21051v4
+ cs.CR
+ cs.AI
+ cs.NI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Ziad Ismaili Alaoui, Detlef Plump
+ Yuyang Zhou, Guang Cheng, Kang Du, Zihan Chen, Yuyu Zhao
- Directional Diffusion-Style Code Editing Pre-training
- https://arxiv.org/abs/2501.12079
- arXiv:2501.12079v2 Announce Type: replace
-Abstract: Code pre-trained models have shown promising effectiveness in various software engineering tasks. Among these tasks, many tasks are related to software evolution and/or code editing. However, existing code pre-trained models often overlook the real-world code editing data and the evolutionary nature of the editing process. In this paper, to simulate the step-by-step code editing process of human developers, we propose DivoT5, a pre-trained model based on directional diffusion at the data level. In DivoT5, we adopt two categories of pre-training tasks. The first category is mask and denoising tasks augmented with a diffusion direction representing code evolution. That is, we first apply a noising process to the code snippets before evolution, and then ask the pre-training process to restore the snippets with noise into the code snippets after evolution. The second category is tasks aiming to reinforce the evolutionary direction. That is, we first generate various intermediate versions for each pair of snippets before and after evolution, and then ask the pre-training process to transform the intermediate versions into the snippet after evolution for each pair. We evaluate DivoT5 for two code-editing scenarios and one non-editing scenario using five downstream tasks. Given each downstream task, we fine-tune the pre-trained DivoT5 to evaluate its effectiveness. Our experimental results show that DivoT5 achieves state-of-the-art (SOTA) performance on most tasks in comparison to models of the same scale (220M), large scale (770M) models in fine-tuning, and billion-scale (6.7B, 8B, ChatGPT) models in few-shot settings. For one code-editing task (i.e., automated code review), DivoT5 pre-trained on top of CodeT5-small (60M) can even outperform CodeT5-base (220M) and other pre-trained models with 220M parameters except for DivoT5 pre-trained on top of CodeT5-base (220M).
- oai:arXiv.org:2501.12079v2
- cs.SE
- Thu, 11 Dec 2025 00:00:00 -0500
+ The Yoneda embedding in simplicial type theory
+ https://arxiv.org/abs/2501.13229
+ arXiv:2501.13229v3 Announce Type: replace
+Abstract: Riehl and Shulman introduced simplicial type theory (STT), a variant of homotopy type theory which aimed to study not just homotopy theory, but its fusion with category theory: $(\infty,1)$-category theory. While notoriously technical, manipulating $\infty$-categories in simplicial type theory is often easier than working with ordinary categories, with the type theory handling infinite stacks of coherences in the background. We capitalize on recent work by Gratzer et al. defining the $(\infty,1)$-category of $\infty$-groupoids in STT to define presheaf categories within STT and systematically develop their theory. In particular, we construct the Yoneda embedding, prove the universal property of presheaf categories, refine the theory of adjunctions in STT, introduce the theory of Kan extensions, and prove Quillen's Theorem A.
+ oai:arXiv.org:2501.13229v3
+ cs.LO
+ math.AT
+ math.CT
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Qingyuan Liang, Zeyu Sun, Qihao Zhu, Junhao Hu, Yifan Zhao, Yizhou Chen, Mingxuan Zhu, Guoqing Wang, Lu Zhang
+ Daniel Gratzer, Jonathan Weinberger, Ulrik Buchholtz
- Spectral Analysis of Diffusion Models with Application to Schedule Design
- https://arxiv.org/abs/2502.00180
- arXiv:2502.00180v3 Announce Type: replace
-Abstract: Diffusion models (DMs) have emerged as powerful tools for modeling complex data distributions and generating realistic new samples. Over the years, advanced architectures and sampling methods have been developed to make these models practically usable. However, certain synthesis process decisions still rely on heuristics without a solid theoretical foundation. In our work, we offer a novel analysis of the DM's inference process, introducing a comprehensive frequency response perspective. Specifically, by relying on Gaussianity assumption, we present the inference process as a closed-form spectral transfer function, capturing how the generated signal evolves in response to the initial noise. We demonstrate how the proposed analysis can be leveraged to design a noise schedule that aligns effectively with the characteristics of the data. The spectral perspective also provides insights into the underlying dynamics and sheds light on the relationship between spectral properties and noise schedule structure. Our results lead to scheduling curves that are dependent on the spectral content of the data, offering a theoretical justification for some of the heuristics taken by practitioners.
- oai:arXiv.org:2502.00180v3
- cs.LG
- stat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
+ Impact and Mitigation of Current Saturation Algorithms in Grid-Forming Inverters on Power Swing Detection
+ https://arxiv.org/abs/2501.18063
+ arXiv:2501.18063v2 Announce Type: replace
+Abstract: Grid-forming (GFM) inverter-based resources (IBRs) are capable of emulating the external characteristics of synchronous generators (SGs) through the careful design of the control loops. However, the current limiter in the control loops of the GFM IBR poses challenges to the effectiveness of power swing detection functions designed for SG-based systems. Among various current limiting strategies, current saturation algorithms (CSAs) are widely employed for their strict current limiting capability, and are the focus of this paper. The paper presents a theoretical analysis of the conditions for entering and exiting the current saturation mode of the GFM IBR under three CSAs. The corresponding impedance trajectories observed by the relay on the GFM IBR side are investigated. The analysis results reveal that the unique impedance trajectories under these CSAs markedly differ from those associated with SGs. Moreover, it is demonstrated that the conventional power swing detection scheme may lose functionality due to the rapid movement of the trajectory. To mitigate this issue, an optimal current saturation strategy is proposed. Conclusions are validated through simulations in MATLAB/Simulink.
+ oai:arXiv.org:2501.18063v2
+ eess.SY
+ cs.SY
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-sa/4.0/
- Roi Benita, Michael Elad, Joseph Keshet
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yanshu Niu, Zhe Yang, Bikash C. Pal
- CardioLive: Empowering Video Streaming with Online Cardiac Monitoring
- https://arxiv.org/abs/2502.00702
- arXiv:2502.00702v2 Announce Type: replace
-Abstract: Online Cardiac Monitoring (OCM) emerges as a compelling enhancement for the next-generation video streaming platforms. It enables various applications including remote health, online affective computing, and deepfake detection. Yet the physiological information encapsulated in the video streams has been long neglected. In this paper, we present the design and implementation of CardioLive, the first online cardiac monitoring system in video streaming platforms. We leverage the naturally co-existed video and audio streams and devise CardioNet, the first audio-visual network to learn the cardiac series. It incorporates multiple unique designs to extract temporal and spectral features, ensuring robust performance under realistic video streaming conditions. To enable the Service-On-Demand online cardiac monitoring, we implement CardioLive as a plug-and-play middleware service and develop systematic solutions to practical issues including changing FPS and unsynchronized streams. Extensive experiments have been done to demonstrate the effectiveness of our system. We achieve a Mean Square Error (MAE) of 1.79 BPM error, outperforming the video-only and audio-only solutions by 69.2% and 81.2%, respectively. Our CardioLive service achieves average throughputs of 115.97 and 98.16 FPS when implemented in Zoom and YouTube. We believe our work opens up new applications for video stream systems. We will release the code soon.
- oai:arXiv.org:2502.00702v2
- cs.HC
- cs.NI
- cs.SD
- eess.AS
- eess.IV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Vision-centric Token Compression in Large Language Model
+ https://arxiv.org/abs/2502.00791
+ arXiv:2502.00791v5 Announce Type: replace
+Abstract: Real-world applications are stretching context windows to hundreds of thousand of tokens while Large Language Models (LLMs) swell from billions to trillions of parameters. This dual expansion send compute and memory costs skyrocketing, making token compression indispensable. We introduce Vision Centric Token Compression (Vist), a slow-fast compression framework that mirrors human reading: the fast path renders distant tokens into images, letting a frozen, lightweight vision encoder skim the low-salience context; the slow path feeds the proximal window into the LLM for fine-grained reasoning. A Probability-Informed Visual Enhancement (PVE) objective masks high-frequency tokens during training, steering the Resampler to concentrate on semantically rich regions-just as skilled reader gloss over function words. On eleven in-context learning benchmarks, Vist achieves the same accuracy with 2.3 times fewer tokens, cutting FLOPs by 16% and memory by 50%. This method delivers remarkable results, outperforming the strongest text encoder-based compression method CEPE by 7.6% on average over benchmarks like TriviaQA, NQ, PopQA, NLUI, and CLIN, setting a new standard for token efficiency in LLMs. The project is at https://github.com/CSU-JPG/VIST.
+ oai:arXiv.org:2502.00791v5
+ cs.CL
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Sheng Lyu, Ruiming Huang, Sijie Ji, Yasar Abbas Ur Rehman, Lan Ma, Chenshu Wu
+ Ling Xing, Alex Jinpeng Wang, Rui Yan, Xiangbo Shu, Jinhui Tang
- Sequence models for continuous cell cycle stage prediction from brightfield images
- https://arxiv.org/abs/2502.02182
- arXiv:2502.02182v2 Announce Type: replace
-Abstract: Understanding cell cycle dynamics is crucial for studying biological processes such as growth, development and disease progression. While fluorescent protein reporters like the Fucci system allow live monitoring of cell cycle phases, they require genetic engineering and occupy additional fluorescence channels, limiting broader applicability in complex experiments. In this study, we conduct a comprehensive evaluation of deep learning methods for predicting continuous Fucci signals using non-fluorescence brightfield imaging, a widely available label-free modality. To that end, we generated a large dataset of 1.3 M images of dividing RPE1 cells with full cell cycle trajectories to quantitatively compare the predictive performance of distinct model categories including single time-frame models, causal state space models and bidirectional transformer models. We show that both causal and transformer-based models significantly outperform single- and fixed frame approaches, enabling the prediction of visually imperceptible transitions like G1/S within 1h resolution. Our findings underscore the importance of sequence models for accurate predictions of cell cycle dynamics and highlight their potential for label-free imaging.
- oai:arXiv.org:2502.02182v2
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ GFM-RAG: Graph Foundation Model for Retrieval Augmented Generation
+ https://arxiv.org/abs/2502.01113
+ arXiv:2502.01113v3 Announce Type: replace
+Abstract: Retrieval-augmented generation (RAG) has proven effective in integrating knowledge into large language models (LLMs). However, conventional RAGs struggle to capture complex relationships between pieces of knowledge, limiting their performance in intricate reasoning that requires integrating knowledge from multiple sources. Recently, graph-enhanced retrieval augmented generation (GraphRAG) builds graph structure to explicitly model these relationships, enabling more effective and efficient retrievers. Nevertheless, its performance is still hindered by the noise and incompleteness within the graph structure. To address this, we introduce GFM-RAG, a novel graph foundation model (GFM) for retrieval augmented generation. GFM-RAG is powered by an innovative graph neural network that reasons over graph structure to capture complex query-knowledge relationships. The GFM with 8M parameters undergoes a two-stage training process on large-scale datasets, comprising 60 knowledge graphs with over 14M triples and 700k documents. This results in impressive performance and generalizability for GFM-RAG, making it the first graph foundation model applicable to unseen datasets for retrieval without any fine-tuning required. Extensive experiments on three multi-hop QA datasets and seven domain-specific RAG datasets demonstrate that GFM-RAG achieves state-of-the-art performance while maintaining efficiency and alignment with neural scaling laws, highlighting its potential for further improvement.
+ oai:arXiv.org:2502.01113v3
+ cs.IR
+ cs.AI
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Louis-Alexandre Leger, Maxine Leonardi, Andrea Salati, Felix Naef, Martin Weigert
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Linhao Luo, Zicheng Zhao, Gholamreza Haffari, Dinh Phung, Chen Gong, Shirui Pan
- Revisiting Intermediate-Layer Matching in Knowledge Distillation: Layer-Selection Strategy Doesn't Matter (Much)
- https://arxiv.org/abs/2502.04499
- arXiv:2502.04499v2 Announce Type: replace
-Abstract: Knowledge distillation (KD) is a popular method of transferring knowledge from a large "teacher" model to a small "student" model. Previous work has explored various layer-selection strategies (e.g., forward matching and in-order random matching) for intermediate-layer matching in KD, where a student layer is forced to resemble a certain teacher layer. In this work, we revisit such layer-selection strategies and observe an intriguing phenomenon that layer-selection strategy does not matter (much) in intermediate-layer matching -- even seemingly nonsensical matching strategies such as reverse matching still result in surprisingly good student performance. We provide an interpretation for this phenomenon by examining the angles between teacher layers viewed from the student's perspective. Our work sheds light on KD practice, as layer-selection strategies may not be the main focus of KD system design, and vanilla forward matching works well in most setups.
- oai:arXiv.org:2502.04499v2
+ Nonasymptotic CLT and Error Bounds for Two-Time-Scale Stochastic Approximation
+ https://arxiv.org/abs/2502.09884
+ arXiv:2502.09884v3 Announce Type: replace
+Abstract: We consider linear two-time-scale stochastic approximation algorithms driven by martingale noise. Recent applications in machine learning motivate the need to understand finite-time error rates, but conventional stochastic approximation analysis focus on either asymptotic convergence in distribution or finite-time bounds that are far from optimal. Prior work on asymptotic central limit theorems (CLTs) suggest that two-time-scale algorithms may be able to achieve $1/\sqrt{n}$ error in expectation, with a constant given by the expected norm of the limiting Gaussian vector. However, the best known finite-time rates are much slower. We derive the first nonasymptotic central limit theorem with respect to the Wasserstein-1 distance for two-time-scale stochastic approximation with Polyak-Ruppert averaging. As a corollary, we show that expected error achieved by Polyak-Ruppert averaging decays at rate $1/\sqrt{n}$, which significantly improves on the rates of convergence in prior works.
+ oai:arXiv.org:2502.09884v3cs.LGcs.AI
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zony Yu, Yuqiao Wen, Lili Mou
+ Seo Taek Kong, Sihan Zeng, Thinh T. Doan, R. Srikant
- Semantic Data Augmentation Enhanced Invariant Risk Minimization for Medical Image Domain Generalization
- https://arxiv.org/abs/2502.05593
- arXiv:2502.05593v2 Announce Type: replace
-Abstract: Deep learning has achieved remarkable success in medical image classification. However, its clinical application is often hindered by data heterogeneity caused by variations in scanner vendors, imaging protocols, and operators. Approaches such as invariant risk minimization (IRM) aim to address this challenge of out-of-distribution generalization. For instance, VIRM improves upon IRM by tackling the issue of insufficient feature support overlap, demonstrating promising potential. Nonetheless, these methods face limitations in medical imaging due to the scarcity of annotated data and the inefficiency of augmentation strategies. To address these issues, we propose a novel domain-oriented direction selector to replace the random augmentation strategy used in VIRM. Our method leverages inter-domain covariance as a guider for augmentation direction, guiding data augmentation towards the target domain. This approach effectively reduces domain discrepancies and enhances generalization performance. Experiments on a multi-center diabetic retinopathy dataset demonstrate that our method outperforms state-of-the-art approaches, particularly under limited data conditions and significant domain heterogeneity.
- oai:arXiv.org:2502.05593v2
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Gaussian Process Upper Confidence Bound Achieves Nearly-Optimal Regret in Noise-Free Gaussian Process Bandits
+ https://arxiv.org/abs/2502.19006
+ arXiv:2502.19006v2 Announce Type: replace
+Abstract: We study the noise-free Gaussian Process (GP) bandits problem, in which the learner seeks to minimize regret through noise-free observations of the black-box objective function lying on the known reproducing kernel Hilbert space (RKHS). Gaussian process upper confidence bound (GP-UCB) is the well-known GP-bandits algorithm whose query points are adaptively chosen based on the GP-based upper confidence bound score. Although several existing works have reported the practical success of GP-UCB, the current theoretical results indicate its suboptimal performance. However, GP-UCB tends to perform well empirically compared with other nearly optimal noise-free algorithms that rely on a non-adaptive sampling scheme of query points. This paper resolves this gap between theoretical and empirical performance by showing the nearly optimal regret upper bound of noise-free GP-UCB. Specifically, our analysis shows the first constant cumulative regret in the noise-free settings for the squared exponential kernel and Mat\'ern kernel with some degree of smoothness.
+ oai:arXiv.org:2502.19006v2
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yaoyao Zhu, Xiuding Cai, Yingkai Wang, Yu Yao, Xu Luo, Zhongliang Fu
+ Shogo Iwazaki
- Generalizing Reduced Rank Extrapolation to Low-Rank Matrix Sequences
- https://arxiv.org/abs/2502.09165
- arXiv:2502.09165v3 Announce Type: replace
-Abstract: Reduced rank extrapolation (RRE) is an acceleration method typically used to accelerate the iterative solution of nonlinear systems of equations using a fixed-point process. In this context, the iterates are vectors generated from a fixed-point mapping function. However, when considering the iterative solution of large-scale matrix equations, the iterates are low-rank matrices generated from a fixed-point process for which, generally, the mapping function changes in each iteration. To enable acceleration of the iterative solution for these problems, we propose two novel generalizations of RRE. First, we show how to effectively compute RRE for sequences of low-rank matrices. Second, we derive a formulation of RRE that is suitable for fixed-point processes for which the mapping function changes each iteration. We demonstrate the potential of the methods on several numerical examples involving the iterative solution of large-scale Lyapunov and Riccati matrix equations.
- oai:arXiv.org:2502.09165v3
- math.NA
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Internal Evaluation of Density-Based Clusterings with Noise
+ https://arxiv.org/abs/2503.00127
+ arXiv:2503.00127v2 Announce Type: replace
+Abstract: Being able to evaluate the quality of a clustering result even in the absence of ground truth cluster labels is fundamental for research in data mining. However, most cluster validation indices (CVIs) do not capture noise assignments by density-based clustering methods like DBSCAN or HDBSCAN, even though the ability to correctly determine noise is crucial for successful clustering. In this paper, we propose DISCO, a Density-based Internal Score for Clusterings with nOise, the first CVI to explicitly assess the quality of noise assignments rather than merely counting them. DISCO is based on the established idea of the Silhouette Coefficient, but adopts density-connectivity to evaluate clusters of arbitrary shapes, and proposes explicit noise evaluation: it rewards correctly assigned noise labels and penalizes noise labels where a cluster label would have been more appropriate. The pointwise definition of DISCO allows for the seamless integration of noise evaluation into the final clustering evaluation, while also enabling explainable evaluations of the clustered data. In contrast to most state-of-the-art, DISCO is well-defined and also covers edge cases that regularly appear as output from clustering algorithms, such as singleton clusters or a single cluster plus noise.
+ oai:arXiv.org:2503.00127v2
+ cs.LG
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Pascal den Boef, Patrick K\"urschner, Xiaobo Liu, Jos Maubach, Jens Saak, Wil Schilders, Jonas Schulze, Nathan van de Wouw
+ Anna Beer, Lena Krieger, Pascal Weber, Martin Ritzert, Ira Assent, Claudia Plant
- Weight Space Representation Learning on Diverse NeRF Architectures
- https://arxiv.org/abs/2502.09623
- arXiv:2502.09623v3 Announce Type: replace
-Abstract: Neural Radiance Fields (NeRFs) have emerged as a groundbreaking paradigm for representing 3D objects and scenes by encoding shape and appearance information into the weights of a neural network. Recent studies have demonstrated that these weights can be used as input for frameworks designed to address deep learning tasks; however, such frameworks require NeRFs to adhere to a specific, predefined architecture. In this paper, we introduce the first framework capable of processing NeRFs with diverse architectures and performing inference on architectures unseen at training time. We achieve this by training a Graph Meta-Network within an unsupervised representation learning framework, and show that a contrastive objective is conducive to obtaining an architecture-agnostic latent space. In experiments conducted across 13 NeRF architectures belonging to three families (MLPs, tri-planes, and, for the first time, hash tables), our approach demonstrates robust performance in classification, retrieval, and language tasks involving multiple architectures, even unseen at training time, while also matching or exceeding the results of existing frameworks limited to single architectures.
- oai:arXiv.org:2502.09623v3
+ Unifying Multiple Foundation Models for Advanced Computational Pathology
+ https://arxiv.org/abs/2503.00736
+ arXiv:2503.00736v3 Announce Type: replace
+Abstract: Foundation models have advanced computational pathology by learning transferable visual representations from large histological datasets, yet recent evaluations reveal substantial variability in their performance across tasks. This inconsistency arises from differences in training data diversity and is further constrained by the reliance of many high-performing models on proprietary datasets that cannot be shared or expanded. Offline distillation offers a partial remedy but depends heavily on the size and heterogeneity of the distillation corpus and requires full retraining to incorporate new models. To address these limitations, we propose Shazam, a task-specific online integration framework that unifies multiple pretrained pathology foundation models within a single flexible inference system. Shazam fuses multi-level representations through adaptive expert weighting and learns task-aligned features via online distillation. Across spatial transcriptomics prediction, survival prognosis, tile classification, and visual question answering, Shazam consistently outperforms strong individual models, highlighting its promise as a scalable approach for harnessing the rapid evolution of pathology foundation models in a unified and adaptable manner.
+ oai:arXiv.org:2503.00736v3cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Francesco Ballerini, Pierluigi Zama Ramirez, Luigi Di Stefano, Samuele Salti
-
-
- Im2SurfTex: Surface Texture Generation via Neural Backprojection of Multi-View Images
- https://arxiv.org/abs/2502.14006
- arXiv:2502.14006v3 Announce Type: replace
-Abstract: We present Im2SurfTex, a method that generates textures for input 3D shapes by learning to aggregate multi-view image outputs produced by 2D image diffusion models onto the shapes' texture space. Unlike existing texture generation techniques that use ad hoc backprojection and averaging schemes to blend multiview images into textures, often resulting in texture seams and artifacts, our approach employs a trained neural module to boost texture coherency. The key ingredient of our module is to leverage neural attention and appropriate positional encodings of image pixels based on their corresponding 3D point positions, normals, and surface-aware coordinates as encoded in geodesic distances within surface patches. These encodings capture texture correlations between neighboring surface points, ensuring better texture continuity. Experimental results show that our module improves texture quality, achieving superior performance in high-resolution texture generation.
- oai:arXiv.org:2502.14006v3
- cs.GR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Yiangos Georgiou, Marios Loizou, Melinos Averkiou, Evangelos Kalogerakis
+ Wenhui Lei, Yusheng Tan, Anqi Li, Hanyu Chen, Hengrui Tian, Ruiying Li, Zhengqun Jiang, Fang Yan, Xiaofan Zhang, Shaoting Zhang
- Research on Enhancing Cloud Computing Network Security using Artificial Intelligence Algorithms
- https://arxiv.org/abs/2502.17801
- arXiv:2502.17801v3 Announce Type: replace
-Abstract: Cloud computing environments are increasingly vulnerable to security threats such as distributed denial-of-service (DDoS) attacks and SQL injection. Traditional security mechanisms, based on rule matching and feature recognition, struggle to adapt to evolving attack strategies. This paper proposes an adaptive security protection framework leveraging deep learning to construct a multi-layered defense architecture. The proposed system is evaluated in a real-world business environment, achieving a detection accuracy of 97.3%, an average response time of 18 ms, and an availability rate of 99.999%. Experimental results demonstrate that the proposed method significantly enhances detection accuracy, response efficiency, and resource utilization, offering a novel and effective approach to cloud computing security.
- oai:arXiv.org:2502.17801v3
- cs.CR
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Enhancing Hand Palm Motion Gesture Recognition by Eliminating Reference Frame Bias via Frame-Invariant Similarity Measures
+ https://arxiv.org/abs/2503.11352
+ arXiv:2503.11352v2 Announce Type: replace
+Abstract: The ability of robots to recognize human gestures facilitates a natural and accessible human-robot collaboration. However, most work in gesture recognition remains rooted in reference frame-dependent representations. This poses a challenge when reference frames vary due to different work cell layouts, imprecise frame calibrations, or other environmental changes. This paper investigated the use of invariant trajectory descriptors for robust hand palm motion gesture recognition under reference frame changes. First, a novel dataset of recorded Hand Palm Motion (HPM) gestures is introduced. The motion gestures in this dataset were specifically designed to be distinguishable without dependence on specific reference frames or directional cues. Afterwards, multiple invariant trajectory descriptor approaches were benchmarked to assess how their performances generalize to this novel HPM dataset. After this offline benchmarking, the best scoring approach is validated for online recognition by developing a real-time Proof of Concept (PoC). In this PoC, hand palm motion gestures were used to control the real-time movement of a manipulator arm. The PoC demonstrated a high recognition reliability in real-time operation, achieving an $F_1$-score of 92.3%. This work demonstrates the effectiveness of the invariant descriptor approach as a standalone solution. Moreover, we believe that the invariant descriptor approach can also be utilized within other state-of-the-art pattern recognition and learning systems to improve their robustness against reference frame variations.
+ oai:arXiv.org:2503.11352v2
+ cs.RO
+ cs.CV
+ cs.HC
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- 10.1109/SCECS65243.2025.11065638
- 2025 International Conference on Sensor-Cloud and Edge Computing System (SCECS)
- Yuqing Wang, Xiao Yang
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/CASE58245.2025.11163910
+ Proceedings of the 2025 IEEE 21st International Conference on Automation Science and Engineering (CASE), Los Angeles, CA, USA, 2025, pp. 866-873
+ Arno Verduyn, Maxim Vochten, Joris De Schutter
- Memory Injection Attacks on LLM Agents via Query-Only Interaction
- https://arxiv.org/abs/2503.03704
- arXiv:2503.03704v4 Announce Type: replace
-Abstract: Agents powered by large language models (LLMs) have demonstrated strong capabilities in a wide range of complex, real-world applications. However, LLM agents with a compromised memory bank may easily produce harmful outputs when the past records retrieved for demonstration are malicious. In this paper, we propose a novel Memory INJection Attack, MINJA, without assuming that the attacker can directly modify the memory bank of the agent. The attacker injects malicious records into the memory bank by only interacting with the agent via queries and output observations. These malicious records are designed to elicit a sequence of malicious reasoning steps corresponding to a different target query during the agent's execution of the victim user's query. Specifically, we introduce a sequence of bridging steps to link victim queries to the malicious reasoning steps. During the memory injection, we propose an indication prompt that guides the agent to autonomously generate similar bridging steps, with a progressive shortening strategy that gradually removes the indication prompt, such that the malicious record will be easily retrieved when processing later victim queries. Our extensive experiments across diverse agents demonstrate the effectiveness of MINJA in compromising agent memory. With minimal requirements for execution, MINJA enables any user to influence agent memory, highlighting the risk.
- oai:arXiv.org:2503.03704v4
+ LLM4FS: Leveraging Large Language Models for Feature Selection
+ https://arxiv.org/abs/2503.24157
+ arXiv:2503.24157v4 Announce Type: replace
+Abstract: Recent advances in large language models (LLMs) have provided new opportunities for decision-making, particularly in the task of automated feature selection. In this paper, we first comprehensively evaluate LLM-based feature selection methods, covering the state-of-the-art DeepSeek-R1, GPT-o3-mini, and GPT-4.5. Then, we propose a new hybrid strategy called LLM4FS that integrates LLMs with traditional data-driven methods. Specifically, input data samples into LLMs, and directly call traditional data-driven techniques such as random forest and forward sequential selection. Notably, our analysis reveals that the hybrid strategy leverages the contextual understanding of LLMs and the high statistical reliability of traditional data-driven methods to achieve excellent feature selection performance, even surpassing LLMs and traditional data-driven methods. Finally, we point out the limitations of its application in decision-making. Our code is available at https://github.com/xianchaoxiu/LLM4FS.
+ oai:arXiv.org:2503.24157v4cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Shen Dong, Shaochen Xu, Pengfei He, Yige Li, Jiliang Tang, Tianming Liu, Hui Liu, Zhen Xiang
-
-
- Guiding LLMs to Generate High-Fidelity and High-Quality Counterfactual Explanations for Text Classification
- https://arxiv.org/abs/2503.04463
- arXiv:2503.04463v2 Announce Type: replace
-Abstract: The need for interpretability in deep learning has driven interest in counterfactual explanations, which identify minimal changes to an instance that change a model's prediction. Current counterfactual (CF) generation methods require task-specific fine-tuning and produce low-quality text. Large Language Models (LLMs), though effective for high-quality text generation, struggle with label-flipping counterfactuals (i.e., counterfactuals that change the prediction) without fine-tuning. We introduce two simple classifier-guided approaches to support counterfactual generation by LLMs, eliminating the need for fine-tuning while preserving the strengths of LLMs. Despite their simplicity, our methods outperform state-of-the-art counterfactual generation methods and are effective across different LLMs, highlighting the benefits of guiding counterfactual generation by LLMs with classifier information. We further show that data augmentation by our generated CFs can improve a classifier's robustness. Our analysis reveals a critical issue in counterfactual generation by LLMs: LLMs rely on parametric knowledge rather than faithfully following the classifier.
- oai:arXiv.org:2503.04463v2
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- The World Conference on eXplainable Artificial Intelligence 2025
- Van Bach Nguyen, Christin Seifert, J\"org Schl\"otterer
-
-
- Grammar-Based Code Representation: Is It a Worthy Pursuit for LLMs?
- https://arxiv.org/abs/2503.05507
- arXiv:2503.05507v2 Announce Type: replace
-Abstract: Grammar serves as a cornerstone in programming languages and software engineering, providing frameworks to define the syntactic space and program structure. Existing research demonstrates the effectiveness of grammar-based code representations in small-scale models, showing their ability to reduce syntax errors and enhance performance. However, as language models scale to the billion level or beyond, syntax-level errors become rare, making it unclear whether grammar information still provides performance benefits. To explore this, we develop a series of billion-scale GrammarCoder models, incorporating grammar rules in the code generation process. Experiments on HumanEval (+) and MBPP (+) demonstrate a notable improvement in code generation accuracy. Further analysis shows that grammar-based representations enhance LLMs' ability to discern subtle code differences, reducing semantic errors caused by minor variations. These findings suggest that grammar-based code representations remain valuable even in billion-scale models, not only by maintaining syntax correctness but also by improving semantic differentiation.
- oai:arXiv.org:2503.05507v2
- cs.PL
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Qingyuan Liang, Zhao Zhang, Zeyu Sun, Zheng Lin, Qi Luo, Yueyi Xiao, Yizhou Chen, Yuqun Zhang, Haotian Zhang, Lu Zhang, Bin Chen, Yingfei Xiong
+ Jianhao Li, Xianchao Xiu
- Aligning Text to Image in Diffusion Models is Easier Than You Think
- https://arxiv.org/abs/2503.08250
- arXiv:2503.08250v5 Announce Type: replace
-Abstract: While recent advancements in generative modeling have significantly improved text-image alignment, some residual misalignment between text and image representations still remains. Some approaches address this issue by fine-tuning models in terms of preference optimization, etc., which require tailored datasets. Orthogonal to these methods, we revisit the challenge from the perspective of representation alignment-an approach that has gained popularity with the success of REPresentation Alignment (REPA). We first argue that conventional text-to-image (T2I) diffusion models, typically trained on paired image and text data (i.e., positive pairs) by minimizing score matching or flow matching losses, is suboptimal from the standpoint of representation alignment. Instead, a better alignment can be achieved through contrastive learning that leverages existing dataset as both positive and negative pairs. To enable efficient alignment with pretrained models, we propose SoftREPA- a lightweight contrastive fine-tuning strategy that leverages soft text tokens for representation alignment. This approach improves alignment with minimal computational overhead by adding fewer than 1M trainable parameters to the pretrained model. Our theoretical analysis demonstrates that our method explicitly increases the mutual information between text and image representations, leading to enhanced semantic consistency. Experimental results across text-to-image generation and text-guided image editing tasks validate the effectiveness of our approach in improving the semantic consistency of T2I generative models.
- oai:arXiv.org:2503.08250v5
- cs.CV
+ AI-Newton: A Concept-Driven Physical Law Discovery System without Prior Physical Knowledge
+ https://arxiv.org/abs/2504.01538
+ arXiv:2504.01538v2 Announce Type: replace
+Abstract: While current AI-driven methods excel at deriving empirical models from individual experiments, a significant challenge remains in uncovering the common fundamental physics that underlie these models -- a task at which human physicists are adept. To bridge this gap, we introduce AI-Newton, a novel framework for concept-driven scientific discovery. Our system autonomously derives general physical laws directly from raw, multi-experiment data, operating without supervision or prior physical knowledge. Its core innovations are twofold: (1) proposing interpretable physical concepts to construct laws, and (2) progressively generalizing these laws to broader domains. Applied to a large, noisy dataset of mechanics experiments, AI-Newton successfully rediscovers foundational and universal laws, such as Newton's second law, the conservation of energy, and the universal gravitation. This work represents a significant advance toward autonomous, human-like scientific discovery.
+ oai:arXiv.org:2504.01538v2cs.AIcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.SC
+ hep-ph
+ physics.class-ph
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Jaa-Yeon Lee, Byunghee Cha, Jeongsol Kim, Jong Chul Ye
+ You-Le Fang, Dong-Shan Jian, Xiang Li, Yan-Qing Ma
- Constrained Discrete Diffusion
- https://arxiv.org/abs/2503.09790
- arXiv:2503.09790v3 Announce Type: replace
-Abstract: Discrete diffusion models are a class of generative models that construct sequences by progressively denoising samples from a categorical noise distribution. Beyond their rapidly growing ability to generate coherent natural language, these models present a new and important opportunity to enforce sequence-level constraints, a capability that current autoregressive models cannot natively provide. This paper capitalizes on this opportunity by introducing Constrained Discrete Diffusion (CDD), a novel integration of differentiable constraint optimization within the diffusion process to ensure adherence to constraints, logic rules, or safety requirements for generated sequences. Unlike conventional text generators that often rely on post-hoc filtering or model retraining for controllable generation, CDD directly imposes constraints into the discrete diffusion sampling process, resulting in a training-free and effective approach. Experiments in toxicity-controlled text generation, property-constrained molecule design, and instruction-constrained text completion demonstrate that CDD achieves zero constraint violations in a diverse array of tasks while preserving fluency, novelty, and coherence while outperforming autoregressive and existing discrete diffusion approaches.
- oai:arXiv.org:2503.09790v3
+ The LLM Wears Prada: Analysing Gender Bias and Stereotypes through Online Shopping Data
+ https://arxiv.org/abs/2504.01951
+ arXiv:2504.01951v2 Announce Type: replace
+Abstract: With the wide and cross-domain adoption of Large Language Models, it becomes crucial to assess to which extent the statistical correlations in training data, which underlie their impressive performance, hide subtle and potentially troubling biases. Gender bias in LLMs has been widely investigated from the perspectives of works, hobbies, and emotions typically associated with a specific gender. In this study, we introduce a novel perspective. We investigate whether LLMs can predict an individual's gender based solely on online shopping histories and whether these predictions are influenced by gender biases and stereotypes. Using a dataset of historical online purchases from users in the United States, we evaluate the ability of six LLMs to classify gender and we then analyze their reasoning and products-gender co-occurrences. Results indicate that while models can infer gender with moderate accuracy, their decisions are often rooted in stereotypical associations between product categories and gender. Furthermore, explicit instructions to avoid bias reduce the certainty of model predictions, but do not eliminate stereotypical patterns. Our findings highlight the persistent nature of gender biases in LLMs and emphasize the need for robust bias-mitigation strategies.
+ oai:arXiv.org:2504.01951v2
+ cs.AIcs.CL
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CY
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Michael Cardei, Jacob K Christopher, Thomas Hartvigsen, Bhavya Kailkhura, Ferdinando Fioretto
+ Massimiliano Luca, Ciro Beneduce, Bruno Lepri, Jacopo Staiano
- MACS: Multi-source Audio-to-image Generation with Contextual Significance and Semantic Alignment
- https://arxiv.org/abs/2503.10287
- arXiv:2503.10287v3 Announce Type: replace
-Abstract: Propelled by the breakthrough in deep generative models, audio-to-image generation has emerged as a pivotal cross-modal task that converts complex auditory signals into rich visual representations. However, previous works only focus on single-source audio inputs for image generation, ignoring the multi-source characteristic in natural auditory scenes, thus limiting the performance in generating comprehensive visual content. To bridge this gap, we propose a method called MACS to conduct multi-source audio-to-image generation. To our best knowledge, this is the first work that explicitly separates multi-source audio to capture the rich audio components before image generation. MACS is a two-stage method. In the first stage, multi-source audio inputs are separated by a weakly supervised method, where the audio and text labels are semantically aligned by casting into a common space using the large pre-trained CLAP model. We introduce a ranking loss to consider the contextual significance of the separated audio signals. In the second stage, effective image generation is achieved by mapping the separated audio signals to the generation condition using only a trainable adapter and a MLP layer. We preprocess the LLP dataset as the first full multi-source audio-to-image generation benchmark. The experiments are conducted on multi-source, mixed-source, and single-source audio-to-image generation tasks. The proposed MACS outperforms the current state-of-the-art methods in 17 out of the 21 evaluation indexes on all tasks and delivers superior visual quality.
- oai:arXiv.org:2503.10287v3
- cs.SD
+ Towards Efficient Real-Time Video Motion Transfer via Generative Time Series Modeling
+ https://arxiv.org/abs/2504.05537
+ arXiv:2504.05537v2 Announce Type: replace
+Abstract: Motion Transfer is a technique that synthesizes videos by transferring motion dynamics from a driving video to a source image. In this work we propose a deep learning-based framework to enable real-time video motion transfer which is critical for enabling bandwidth-efficient applications such as video conferencing, remote health monitoring, virtual reality interaction, and vision-based anomaly detection. This is done using keypoints which serve as semantically meaningful, compact representations of motion across time. To enable bandwidth savings during video transmission we perform forecasting of keypoints using two generative time series models VRNN and GRU-NF. The predicted keypoints are transformed into realistic video frames using an optical flow-based module paired with a generator network, thereby enabling efficient, low-frame-rate video transmission. Based on the application this allows the framework to either generate a deterministic future sequence or sample a diverse set of plausible futures. Experimental results demonstrate that VRNN achieves the best point-forecast fidelity (lowest MAE) in applications requiring stable and accurate multi-step forecasting and is particularly competitive in higher-uncertainty, multi-modal settings. This is achieved by introducing recurrently conditioned stochastic latent variables that carry past contexts to capture uncertainty and temporal variation. On the other hand the GRU-NF model enables richer diversity of generated videos while maintaining high visual quality. This is realized by learning an invertible, exact-likelihood mapping between the keypoints and their latent representations which supports rich and controllable sampling of diverse yet coherent keypoint sequences. Our work lays the foundation for next-generation AI systems that require real-time, bandwidth-efficient, and semantically controllable video generation.
+ oai:arXiv.org:2504.05537v2cs.CV
- cs.GR
- eess.AS
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hao Zhou, Xiaobao Guo, Yuzhe Zhu, Adams Wai-Kin Kong
-
-
- Story of Two GPUs: Characterizing the Resilience of Hopper H100 and Ampere A100 GPUs
- https://arxiv.org/abs/2503.11901
- arXiv:2503.11901v4 Announce Type: replace
-Abstract: This study characterizes GPU resilience in Delta, a large-scale AI system that consists of 1,056 A100 and H100 GPUs, with over 1,300 petaflops of peak throughput. We used 2.5 years of operational data (11.7 million GPU hours) on GPU errors. Our major findings include: (i) H100 GPU memory resilience is worse than A100 GPU memory, with 3.2x lower per-GPU MTBE for memory errors, (ii) The GPU memory error-recovery mechanisms on H100 GPUs are insufficient to handle the increased memory capacity, (iii) H100 GPUs demonstrate significantly improved GPU hardware resilience over A100 GPUs with respect to critical hardware components, (iv) GPU errors on both A100 and H100 GPUs frequently result in job failures due to the lack of robust recovery mechanisms at the application level, and (v) We project the impact of GPU node availability on larger-scales and find that significant overprovisioning of 5% is necessary to handle GPU failures.
- oai:arXiv.org:2503.11901v4
- cs.DCcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- 10.1145/3712285.375982
- Shengkun Cui, Archit Patke, Hung Nguyen, Aditya Ranjan, Ziheng Chen, Phuong Cao, Gregory Bauer, Brett Bode, Catello Di Martino, Saurabh Jha, Chandra Narayanaswami, Daby Sow, Zbigniew T. Kalbarczyk, Ravishankar K. Iyer
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tasmiah Haque, Md. Asif Bin Syed, Byungheon Jeong, Xue Bai, Sumit Mohan, Somdyuti Paul, Imtiaz Ahmed, Srinjoy Das
- A Parametric Family of Polynomial Wavelets for Signal and Image Processing
- https://arxiv.org/abs/2503.12403
- arXiv:2503.12403v2 Announce Type: replace
-Abstract: This paper investigates the potential applications of a parametric family of polynomial wavelets that has been recently introduced starting from de la Vall\'ee Poussin (VP) interpolation at Chebyshev nodes. Unlike classical wavelets, which are constructed on the real line, these VP wavelets are defined on a bounded interval, offering the advantage of handling boundaries naturally while maintaining computational efficiency. In addition, the structure of these wavelets enables the use of fast algorithms for decomposition and reconstruction. Furthermore, the flexibility offered by a free parameter allows a better control of localized singularities, such as edges in images. On the basis of previous theoretical foundations, we show the effectiveness of the VP wavelets for basic signal denoising and image compression, emphasizing their potential for more advanced signal and image processing tasks.
- oai:arXiv.org:2503.12403v2
- math.NA
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ GT-SNT: A Linear-Time Transformer for Large-Scale Graphs via Spiking Node Tokenization
+ https://arxiv.org/abs/2504.11840
+ arXiv:2504.11840v2 Announce Type: replace
+Abstract: Graph Transformers (GTs), which integrate message passing and self-attention mechanisms simultaneously, have achieved promising empirical results in graph prediction tasks. However, the design of scalable and topology-aware node tokenization has lagged behind other modalities. This gap becomes critical as the quadratic complexity of full attention renders them impractical on large-scale graphs. Recently, Spiking Neural Networks (SNNs), as brain-inspired models, provided an energy-saving scheme to convert input intensity into discrete spike-based representations through event-driven spiking neurons. Inspired by these characteristics, we propose a linear-time Graph Transformer with Spiking Node Tokenization (GT-SNT) for node classification. By integrating multi-step feature propagation with SNNs, spiking node tokenization generates compact, locality-aware spike count embeddings as node tokens to avoid predefined codebooks and their utilization issues. The codebook guided self-attention leverages these tokens to perform node-to-token attention for linear-time global context aggregation. In experiments, we compare GT-SNT with other state-of-the-art baselines on node classification datasets ranging from small to large. Experimental results show that GT-SNT achieves comparable performances on most datasets and reaches up to 130x faster inference speed compared to other GTs.
+ oai:arXiv.org:2504.11840v2
+ cs.NE
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mariantonia Cotronei, Woula Themistoclakis, Marc Van Barel
+ Huizhe Zhang, Jintang Li, Yuchang Zhu, Huazhen Zhong, Liang Chen
- System Identification Under Multi-rate Sensing Environment
- https://arxiv.org/abs/2503.12750
- arXiv:2503.12750v4 Announce Type: replace
-Abstract: This paper proposes a system identification algorithm for systems with multi-rate sensors in a discrete-time framework. It is challenging to obtain an accurate mathematical model when the ratios of inputs and outputs are different in the system. A cyclic reformulation-based model for multi-rate systems is formulated, and the multi-rate system can be reduced to a linear time-invariant system to derive the model under the multi-rate sensing environment. The proposed algorithm integrates a cyclic reformulation with a state coordinate transformation of the cycled system to enable precise identification of systems under the multi-rate sensing environment. The effectiveness of the proposed system identification method is demonstrated using numerical simulations.
- oai:arXiv.org:2503.12750v4
- eess.SY
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ On the Intersection and Composition properties of conditional independence
+ https://arxiv.org/abs/2504.11978
+ arXiv:2504.11978v2 Announce Type: replace
+Abstract: Compositional graphoids are fundamental discrete structures which appear in probabilistic reasoning, particularly in the area of graphical models. They are semigraphoids which satisfy the Intersection and Composition properties. These important properties, however, are not enjoyed by general probability distributions. This paper surveys what is known about them, providing systematic constructions of examples and counterexamples as well as necessary and sufficient conditions. Novel sufficient conditions for both properties are derived in the context of discrete random variables via information-theoretic tools.
+ oai:arXiv.org:2504.11978v2
+ cs.IT
+ math.IT
+ math.ST
+ stat.TH
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.20965/jrm.2025.p1102
- Hiroshi Okajima, Risa Furukawa, Nobutomo Matsunaga
+ Tobias Boege
- Human Motion Unlearning
- https://arxiv.org/abs/2503.18674
- arXiv:2503.18674v3 Announce Type: replace
-Abstract: We introduce Human Motion Unlearning and motivate it through the concrete task of preventing violent 3D motion synthesis, an important safety requirement given that popular text-to-motion datasets (HumanML3D and Motion-X) contain from 7\% to 15\% violent sequences spanning both atomic gestures (e.g., a single punch) and highly compositional actions (e.g., loading and swinging a leg to kick). By focusing on violence unlearning, we demonstrate how removing a challenging, multifaceted concept can serve as a proxy for the broader capability of motion "forgetting." To enable systematic evaluation of Human Motion Unlearning, we establish the first motion unlearning benchmark by automatically filtering HumanML3D and Motion-X datasets to create distinct forget sets (violent motions) and retain sets (safe motions). We introduce evaluation metrics tailored to sequential unlearning, measuring both suppression efficacy and the preservation of realism and smooth transitions. We adapt two state-of-the-art, training-free image unlearning methods (UCE and RECE) to leading text-to-motion architectures (MoMask and BAMM), and propose Latent Code Replacement (LCR), a novel, training-free approach that identifies violent codes in a discrete codebook representation and substitutes them with safe alternatives. Our experiments show that unlearning violent motions is indeed feasible and that acting on latent codes strikes the best trade-off between violence suppression and preserving overall motion quality. This work establishes a foundation for advancing safe motion synthesis across diverse applications. Website: https://www.pinlab.org/hmu.
- oai:arXiv.org:2503.18674v3
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Never too Cocky to Cooperate: An FIM and RL-based USV-AUV Collaborative System for Underwater Tasks in Extreme Sea Conditions
+ https://arxiv.org/abs/2504.14894
+ arXiv:2504.14894v2 Announce Type: replace
+Abstract: This paper develops a novel unmanned surface vehicle (USV)-autonomous underwater vehicle (AUV) collaborative system designed to enhance underwater task performance in extreme sea conditions. The system integrates a dual strategy: (1) high-precision multi-AUV localization enabled by Fisher information matrix-optimized USV path planning, and (2) reinforcement learning-based cooperative planning and control method for multi-AUV task execution. Extensive experimental evaluations in the underwater data collection task demonstrate the system's operational feasibility, with quantitative results showing significant performance improvements over baseline methods. The proposed system exhibits robust coordination capabilities between USV and AUVs while maintaining stability in extreme sea conditions. To facilitate reproducibility and community advancement, we provide an open-source simulation toolkit available at: https://github.com/360ZMEM/USV-AUV-colab .
+ oai:arXiv.org:2504.14894v2
+ cs.RO
+ cs.SY
+ eess.SY
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Edoardo De Matteis, Matteo Migliarini, Alessio Sampieri, Indro Spinelli, Fabio Galasso
+ Jingzehua Xu, Guanwen Xie, Jiwei Tang, Yimian Ding, Weiyi Liu, Shuai Zhang, Yi Li
- On the numerical stability of sketched GMRES
- https://arxiv.org/abs/2503.19086
- arXiv:2503.19086v2 Announce Type: replace
-Abstract: We perform a backward stability analysis of preconditioned sketched GMRES [Nakatsukasa and Tropp, SIAM J. Matrix Anal. Appl, 2024] for solving linear systems $Ax=b$, and show that the backward stability at iteration $i$ depends on the conditioning of the Krylov basis $B_{1:i}$ as long as the condition number of $A B_{1:i}$ can be bounded by $1/O(u)$, where $u$ is the unit roundoff. Under this condition, we show that sketched GMRES is backward stable as long as the condition number of $B_{1:i}$ is not too large. Under additional assumptions, we then show that the stability of a restarted implementation of sketched GMRES can be independent of the condition number of $B_{1:i}$, and restarted sketched GMRES is backward stable. We also derive sharper bounds that explain why the backward error can be small even in cases when the basis $B_{1:i}$ is very ill-conditioned, which has been observed in the literature but not yet explained theoretically. We present numerical experiments to demonstrate the conclusions of our analysis, and also show that adaptively restarting where appropriate allows us to recover backward stability in sketched GMRES.
- oai:arXiv.org:2503.19086v2
- math.NA
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Service Rate Regions of MDS Codes & Fractional Matchings in Quasi-uniform Hypergraphs
+ https://arxiv.org/abs/2504.17244
+ arXiv:2504.17244v2 Announce Type: replace
+Abstract: The service rate region (SRR) has emerged as a critical performance metric for distributed systems that store data redundantly. It measures the system's ability to serve multiple users concurrently. Mathematically, the SRR is a polytope in R^k where each dimension corresponds to the service request rate of one of the k data objects. This paper focuses on systems employing a class of Maximum Distance Separable (MDS) codes. For each code in the class, we characterize the k axes intercept points of its SRR, and the smallest standard simplex that includes the SRR. We use these results to show that the SRR grows with the increasing number of systematic columns in the generator matrices. We establish a graph-theoretic framework associating this SRR problem with fractional matchings in quasi-uniform hypergraphs. Identifying the SRR polytope is equivalent to determining a particular image of the fractional-matching polytope. We introduce a notion of Greedy Matching and show that it is sufficient to focus on these matchings to characterize the SRR rather than the entire matching polytope. With these tools, we determine the SRR of a large subset of the considered class of codes. Our results generalize previous characterizations of systematic and non-systematic MDS-coded systems, offering a unified framework for analyzing service rate regions of codes.
+ oai:arXiv.org:2504.17244v2
+ cs.IT
+ math.CO
+ math.IT
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Liam Burke, Erin Carson, Yuxin Ma
+ http://creativecommons.org/licenses/by/4.0/
+ Hoang Ly, Emina Soljanin
- LENVIZ: A High-Resolution Low-Exposure Night Vision Benchmark Dataset
- https://arxiv.org/abs/2503.19804
- arXiv:2503.19804v2 Announce Type: replace
-Abstract: Low-light image enhancement is crucial for a myriad of applications, from night vision and surveillance, to autonomous driving. However, due to the inherent limitations that come in hand with capturing images in low-illumination environments, the task of enhancing such scenes still presents a formidable challenge. To advance research in this field, we introduce our Low Exposure Night Vision (LENVIZ) Dataset, a comprehensive multi-exposure benchmark dataset for low-light image enhancement comprising of over 230K frames showcasing 24K real-world indoor and outdoor, with-and without human, scenes. Captured using 3 different camera sensors, LENVIZ offers a wide range of lighting conditions, noise levels, and scene complexities, making it the largest publicly available up-to 4K resolution benchmark in the field. LENVIZ includes high quality human-generated ground truth, for which each multi-exposure low-light scene has been meticulously curated and edited by expert photographers to ensure optimal image quality. Furthermore, we also conduct a comprehensive analysis of current state-of-the-art low-light image enhancement techniques on our dataset and highlight potential areas of improvement.
- oai:arXiv.org:2503.19804v2
- cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ JITServe: SLO-aware LLM Serving with Imprecise Request Information
+ https://arxiv.org/abs/2504.20068
+ arXiv:2504.20068v2 Announce Type: replace
+Abstract: The integration of Large Language Models (LLMs) into applications ranging from interactive chatbots to multi-agent systems has introduced a wide spectrum of service-level objectives (SLOs) for responsiveness. These include latency-sensitive requests emphasizing per-token latency in streaming chat, deadline-sensitive requests requiring rapid full responses to trigger external tools, and compound requests with evolving dependencies across multiple LLM calls. Despite-or perhaps, because of-this workload diversity and unpredictable request information (e.g., response lengths and dependencies), existing request schedulers have focused on aggregate performance, unable to ensure application-level SLO needs.
+ This paper presents JITServe, the first SLO-aware LLM serving system designed to maximize service goodput (e.g., the number of tokens meeting request SLOs) across diverse workloads. JITServe novelly schedules requests using imprecise request information and gradually relaxes this conservatism by refining request information estimates as generation progresses. It applies a grouped margin goodput maximization algorithm to allocate just enough serving bandwidth to satisfy each request's SLO just-in-time (JIT), maximizing residual capacity for others, while deciding the composition of requests in a batch to maximize efficiency and goodput with provable guarantees. Our evaluation across diverse realistic workloads, including chat, deep research, and agentic pipelines, shows that JITServe improves service goodput by 1.4x-6.3x, alternatively achieving 28.5%-83.2% resource savings, compared to state-of-the-art designs.
+ oai:arXiv.org:2504.20068v2
+ cs.DC
+ cs.LG
+ cs.SY
+ eess.SY
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by-nc-sa/4.0/
- Manjushree Aithal, Rosaura G. VidalMata, Manikandtan Kartha, Gong Chen, Eashan Adhikarla, Lucas N. Kirsten, Zhicheng Fu, Nikhil A. Madhusudhana, Joe Nasti
+ Wei Zhang, Zhiyu Wu, Yi Mu, Ning Rui, Banruo Liu, Nikhil Sarda, Myungjin Lee, Fan Lai
- TranSplat: Instant Cross-Scene Object Relighting in Gaussian Splatting via Spherical Harmonic Transfer
- https://arxiv.org/abs/2503.22676
- arXiv:2503.22676v4 Announce Type: replace
-Abstract: We present TranSplat, a method for fast and accurate object relighting for the 3D Gaussian Splatting (GS) framework when transferring a 3D object from a source GS scene to a target GS scene. TranSplat is based on a theoretical radiance transfer identity for cross-scene relighting of objects with radially symmetric BRDFs that involves only taking simple products of spherical harmonic appearance coefficients of the object, source, and target environment maps without any explicit computation of scene quantities (e.g., the BRDFs themselves). TranSplat is the first method to demonstrate how this theoretical identity may be used to perform relighting within the GS framework, and furthermore, by automatically inferring unknown source and target environment maps directly from the source and target scene GS representations. We evaluated TranSplat on several synthetic and real-world scenes and objects, demonstrating comparable 3D object relighting performance to recent conventional inverse rendering-based GS methods with a fraction of their runtime. While TranSplat is theoretically best-suited for radially symmetric BRDFs, results demonstrate that TranSplat still offers perceptually realistic renderings on real scenes and opens a valuable, lightweight path forward to relighting with the GS framework.
- oai:arXiv.org:2503.22676v4
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ PlanetServe: A Decentralized, Scalable, and Privacy-Preserving Overlay for Democratizing Large Language Model Serving
+ https://arxiv.org/abs/2504.20101
+ arXiv:2504.20101v4 Announce Type: replace
+Abstract: While significant progress has been made in research and development on open-source and cost-efficient large-language models (LLMs), serving scalability remains a critical challenge, particularly for small organizations and individuals seeking to deploy and test their LLM innovations. Inspired by peer-to-peer networks that leverage decentralized overlay nodes to increase throughput and availability, we propose GenTorrent, an LLM serving overlay that harnesses computing resources from decentralized contributors. We identify four key research problems inherent to enabling such a decentralized infrastructure: 1) overlay network organization; 2) LLM communication privacy; 3) overlay forwarding for resource efficiency; and 4) verification of serving quality. This work presents the first systematic study of these fundamental problems in the context of decentralized LLM serving. Evaluation results from a prototype implemented on a set of decentralized nodes demonstrate that GenTorrent achieves a latency reduction of over 50% compared to the baseline design without overlay forwarding. Furthermore, the security features introduce minimal overhead to serving latency and throughput. We believe this work pioneers a new direction for democratizing and scaling future AI serving capabilities.
+ oai:arXiv.org:2504.20101v4
+ cs.DC
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Boyang Yu, Yanlin Jin, Yun He, Akshat Dave, Guha Balakrishnan
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fei Fang, Yifan Hua, Shengze Wang, Ruilin Zhou, Yi Liu, Chen Qian, Xiaoxue Zhang
- ConsDreamer: Advancing Multi-View Consistency for Zero-Shot Text-to-3D Generation
- https://arxiv.org/abs/2504.02316
- arXiv:2504.02316v3 Announce Type: replace
-Abstract: Recent advances in zero-shot text-to-3D generation have revolutionized 3D content creation by enabling direct synthesis from textual descriptions. While state-of-the-art methods leverage 3D Gaussian Splatting with score distillation to enhance multi-view rendering through pre-trained text-to-image (T2I) models, they suffer from inherent prior view biases in T2I priors. These biases lead to inconsistent 3D generation, particularly manifesting as the multi-face Janus problem, where objects exhibit conflicting features across views. To address this fundamental challenge, we propose ConsDreamer, a novel method that mitigates view bias by refining both the conditional and unconditional terms in the score distillation process: (1) a View Disentanglement Module (VDM) that eliminates viewpoint biases in conditional prompts by decoupling irrelevant view components and injecting precise view control; and (2) a similarity-based partial order loss that enforces geometric consistency in the unconditional term by aligning cosine similarities with azimuth relationships. Extensive experiments demonstrate that ConsDreamer can be seamlessly integrated into various 3D representations and score distillation paradigms, effectively mitigating the multi-face Janus problem.
- oai:arXiv.org:2504.02316v3
- cs.CV
+ Balanced Online Class-Incremental Learning via Dual Classifiers
+ https://arxiv.org/abs/2504.20566
+ arXiv:2504.20566v2 Announce Type: replace
+Abstract: Online class-incremental learning (OCIL) focuses on gradually learning new classes (called plasticity) from a stream of data in a single-pass, while concurrently preserving knowledge of previously learned classes (called stability). The primary challenge in OCIL lies in maintaining a good balance between the knowledge of old and new classes within the continually updated model. Most existing methods rely on explicit knowledge interaction through experience replay, and often employ exclusive training separation to address bias problems. Nevertheless, it still remains a big challenge to achieve a well-balanced learner, as these methods often exhibit either reduced plasticity or limited stability due to difficulties in continually integrating knowledge in the OCIL setting. In this paper, we propose a novel replay-based method, called Balanced Inclusive Separation for Online iNcremental learning (BISON), which can achieve both high plasticity and stability, thus ensuring more balanced performance in OCIL. Our BISON method proposes an inclusive training separation strategy using dual classifiers so that knowledge from both old and new classes can effectively be integrated into the model, while introducing implicit approaches for transferring knowledge across the two classifiers. Extensive experimental evaluations over three widely-used OCIL benchmark datasets demonstrate the superiority of BISON, showing more balanced yet better performance compared to state-of-the-art replay-based OCIL methods.
+ oai:arXiv.org:2504.20566v2
+ cs.LGcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yuan Zhou, Shilong Jin, Litao Hua, Wanjun Lv, Haoran Duan, Jungong Han
+ Shunjie Wen, Thomas Heinis, Dong-Wan Choi
- RingMoE: Mixture-of-Modality-Experts Multi-Modal Foundation Models for Universal Remote Sensing Image Interpretation
- https://arxiv.org/abs/2504.03166
- arXiv:2504.03166v2 Announce Type: replace
-Abstract: The rapid advancement of foundation models has revolutionized visual representation learning in a self-supervised manner. However, their application in remote sensing (RS) remains constrained by a fundamental gap: existing models predominantly handle single or limited modalities, overlooking the inherently multi-modal nature of RS observations. Optical, synthetic aperture radar (SAR), and multi-spectral data offer complementary insights that significantly reduce the inherent ambiguity and uncertainty in single-source analysis. To bridge this gap, we introduce RingMoE, a unified multi-modal RS foundation model with 14.7 billion parameters, pre-trained on 400 million multi-modal RS images from nine satellites. RingMoE incorporates three key innovations: (1) A hierarchical Mixture-of-Experts (MoE) architecture comprising modal-specialized, collaborative, and shared experts, effectively modeling intra-modal knowledge while capturing cross-modal dependencies to mitigate conflicts between modal representations; (2) Physics-informed self-supervised learning, explicitly embedding sensor-specific radiometric characteristics into the pre-training objectives; (3) Dynamic expert pruning, enabling adaptive model compression from 14.7B to 1B parameters while maintaining performance, facilitating efficient deployment in Earth observation applications. Evaluated across 23 benchmarks spanning six key RS tasks (i.e., classification, detection, segmentation, tracking, change detection, and depth estimation), RingMoE outperforms existing foundation models and sets new SOTAs, demonstrating remarkable adaptability from single-modal to multi-modal scenarios. Beyond theoretical progress, it has been deployed and trialed in multiple sectors, including emergency response, land management, marine sciences, and urban planning.
- oai:arXiv.org:2504.03166v2
+ UniBiomed: A Universal Foundation Model for Grounded Biomedical Image Interpretation
+ https://arxiv.org/abs/2504.21336
+ arXiv:2504.21336v3 Announce Type: replace
+Abstract: The integration of AI-assisted biomedical image analysis into clinical practice demands AI-generated findings that are not only accurate but also interpretable to clinicians. However, existing biomedical AI models generally lack the ability to simultaneously generate diagnostic findings and localize corresponding biomedical objects. This limitation makes it challenging for clinicians to correlate AI-generated findings with visual evidence (e.g., tiny lesions) in images and interpret the results of AI models. To address this challenge, we introduce UniBiomed, the first universal foundation model for grounded biomedical image interpretation, which is capable of generating accurate diagnostic findings and simultaneously segmenting the corresponding biomedical targets. UniBiomed is based on a novel integration of Multi-modal Large Language Model and Segment Anything Model, which can effectively unify diverse biomedical tasks in universal training for advancing grounded interpretation. To develop UniBiomed, we curate a large-scale dataset comprising over 27 million triplets of images, region annotations, and text descriptions across ten biomedical imaging modalities. Extensive validation on 70 internal and 14 external datasets demonstrated the state-of-the-art performance of UniBiomed in diverse biomedical tasks, including image segmentation, disease recognition, region-aware diagnosis, vision question answering, and report generation. In summary, UniBiomed is a powerful and versatile biomedical foundation model, unlocking the untapped grounded interpretation capability for optimizing AI-assisted biomedical image analysis.
+ oai:arXiv.org:2504.21336v3cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hanbo Bi, Yingchao Feng, Boyuan Tong, Mengyu Wang, Haichen Yu, Yongqiang Mao, Hao Chang, Wenhui Diao, Peijin Wang, Yue Yu, Hanyang Peng, Yehong Zhang, Kun Fu, Xian Sun
+ Linshan Wu, Yuxiang Nie, Sunan He, Jiaxin Zhuang, Luyang Luo, Tao Li, Zhuoyao Xie, Dexuan Chen, Yinghua Zhao, Neeraj Mahboobani, Varut Vardhanabhuti, Ronald Cheong Kin Chan, Yifan Peng, Pranav Rajpurkar, Hao Chen
- Sobolev-Poincar\'e inequalities for piecewise $W^{1,p}$ functions over general polytopic meshes
- https://arxiv.org/abs/2504.03449
- arXiv:2504.03449v2 Announce Type: replace
-Abstract: We establish Sobolev-Poincar\'e inequalities for piecewise $W^{1,p}$ functions over families of fairly general polytopic (thence also shape-regular simplicial and Cartesian) meshes in any dimension; amongst others, they cover the case of standard Poincar\'e inequalities for piecewise $W^{1,p}$ functions and can be useful in the analysis of nonconforming finite element discretizations of nonlinear problems. Crucial tools in their derivation are novel Sobolev-trace inequalities and Babu\v{s}ka-Aziz inequalities with mixed boundary conditions. We provide estimates with constants having an explicit dependence on the geometric properties of the domain and the underlying family of polytopic meshes.
- oai:arXiv.org:2504.03449v2
- math.NA
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ PASCAL: Precise and Efficient ANN- SNN Conversion using Spike Accumulation and Adaptive Layerwise Activation
+ https://arxiv.org/abs/2505.01730
+ arXiv:2505.01730v2 Announce Type: replace
+Abstract: Spiking Neural Networks (SNNs) have been put forward as an energy-efficient alternative to Artificial Neural Networks (ANNs) since they perform sparse Accumulate operations instead of the power-hungry Multiply-and-Accumulate operations. ANN-SNN conversion is a widely used method to realize deep SNNs with accuracy comparable to that of ANNs.~\citeauthor{bu2023optimal} recently proposed the Quantization-Clip-Floor-Shift (QCFS) activation as an alternative to ReLU to minimize the accuracy loss during ANN-SNN conversion. Nevertheless, SNN inferencing requires a large number of timesteps to match the accuracy of the source ANN for real-world datasets. In this work, we propose PASCAL, which performs ANN-SNN conversion in such a way that the resulting SNN is mathematically equivalent to an ANN with QCFS-activation, thereby yielding similar accuracy as the source ANN with minimal inference timesteps. In addition, we propose a systematic method to configure the quantization step of QCFS activation in a layerwise manner, which effectively determines the optimal number of timesteps per layer for the converted SNN. Our results show that the ResNet-34 SNN obtained using PASCAL achieves an accuracy of $\approx$74\% on ImageNet with a 64$\times$ reduction in the number of inference timesteps compared to existing approaches.
+ oai:arXiv.org:2505.01730v2
+ cs.NE
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Michele Botti, Lorenzo Mascotto
+ Transactions on Machine Learning Research, 2025
+ Pranav Ramesh, Gopalakrishnan Srinivasan
- No-Regret Learning in Stackelberg Games with an Application to Electric Ride-Hailing
- https://arxiv.org/abs/2504.03745
- arXiv:2504.03745v3 Announce Type: replace
-Abstract: We consider the problem of efficiently learning to play single-leader multi-follower Stackelberg games when the leader lacks knowledge of the lower-level game. Such games arise in hierarchical decision-making problems involving self-interested agents. For example, in electric ride-hailing markets, a central authority aims to learn optimal charging prices to shape fleet distributions and charging patterns of ride-hailing companies. Existing works typically apply gradient-based methods to find the leader's optimal strategy. Such methods are impractical as they require that the followers share private utility information with the leader. Instead, we treat the lower-level game as a black box, assuming only that the followers' interactions approximate a Nash equilibrium while the leader observes the realized cost of the resulting approximation. Under kernel-based regularity assumptions on the leader's cost function, we develop a no-regret algorithm that converges to an $\epsilon$-Stackelberg equilibrium in $O(\sqrt{T})$ rounds. Finally, we validate our approach through a numerical case study on optimal pricing in electric ride-hailing markets.
- oai:arXiv.org:2504.03745v3
- eess.SY
- cs.GT
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Elastic Index Selection for Label-Hybrid AKNN Search
+ https://arxiv.org/abs/2505.03212
+ arXiv:2505.03212v2 Announce Type: replace
+Abstract: Real-world vector embeddings are usually associated with extra labels, such as attributes and keywords. Many applications require the nearest neighbor search that contains specific labels, such as searching for product image embeddings restricted to a particular brand. A straightforward approach is to materialize all possible indices according to the complete query label workload. However, this leads to an exponential increase in both index space and processing time, which significantly limits scalability and efficiency. In this paper, we leverage the inclusion relationships among query label sets to construct partial indexes, enabling index sharing across queries for improved construction efficiency. We introduce \textit{elastic factor} bounds to guarantee search performance and use the greedy algorithm to select indices that meet the bounds, achieving a tradeoff between efficiency and space. Meanwhile, we also designed the algorithm to achieve the best elastic factor under a given space limitation. Experimental results on multiple real datasets demonstrate that our algorithm can achieve near-optimal search performance, achieving up to 10x-500x search efficiency speed up over state-of-the-art approaches. Our algorithm is highly versatile, since it is not constrained by index type and can seamlessly integrate with existing optimized libraries.
+ oai:arXiv.org:2505.03212v2
+ cs.DB
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- 10.1109/MCS.2024.3467648
- 64th IEEE Conference on Decision and Control: CDC 2025
- Anna Maddux, Marko Maljkovic, Nikolas Geroliminis, Maryam Kamgarpour
+ Mingyu Yang, Wenxuan Xia, Wentao Li, Raymond Chi-Wing Wong, Wei Wang
- Ineffectiveness for Search and Undecidability of PCSP Meta-Problems
- https://arxiv.org/abs/2504.04639
- arXiv:2504.04639v3 Announce Type: replace
-Abstract: It is an open question whether the search and decision versions of promise CSPs are equivalent. Most known algorithms for PCSPs solve only their \emph{decision} variant, and it is unknown whether they can be adapted to solve \emph{search} as well. The main approaches, called BLP, AIP and BLP+AIP, handle a PCSP by finding a solution to a relaxation of some integer program. We prove that rounding those solutions to a proper search certificate can be as hard as any problem in the class TFNP. In other words, these algorithms are ineffective for search. Building on the algebraic approach to PCSPs, we find sufficient conditions that imply ineffectiveness for search. Our tools are tailored to algorithms that are characterized by minions in a suitable way, and can also be used to prove undecidability results for meta-problems. This way, we show that the families of templates solvable via BLP, AIP, and BLP+AIP are undecidable.
- Using the same techniques we also analyze several algebraic conditions that are known to guarantee the tractability of finite-template CSPs. We prove that several meta-problems related to cyclic polymorphims and WNUs are undecidable for PCSPs. In particular, there is no algorithm deciding whether a finite PCSP template (1) admits cyclic a polymorphism, (2) admits a WNU.
- oai:arXiv.org:2504.04639v3
- cs.CC
- cs.CL
- cs.DS
- cs.LO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Panoramic Out-of-Distribution Segmentation
+ https://arxiv.org/abs/2505.03539
+ arXiv:2505.03539v3 Announce Type: replace
+Abstract: Panoramic imaging enables capturing 360{\deg} images with an ultra-wide Field-of-View (FoV) for dense omnidirectional perception, which is critical to applications, such as autonomous driving and augmented reality, etc. However, current panoramic semantic segmentation methods fail to identify outliers, and pinhole Out-of-distribution Segmentation (OoS) models perform unsatisfactorily in the panoramic domain due to pixel distortions and background clutter. To address these issues, we introduce a new task, Panoramic Out-of-distribution Segmentation (PanOoS), with the aim of achieving comprehensive and safe scene understanding. Furthermore, we propose the first solution, POS, which adapts to the characteristics of panoramic images through text-guided prompt distribution learning. Specifically, POS integrates a disentanglement strategy designed to materialize the cross-domain generalization capability of CLIP. The proposed Prompt-based Restoration Attention (PRA) optimizes semantic decoding by prompt guidance and self-adaptive correction, while Bilevel Prompt Distribution Learning (BPDL) refines the manifold of per-pixel mask embeddings via semantic prototype supervision. Besides, to compensate for the scarcity of PanOoS datasets, we establish two benchmarks: DenseOoS, which features diverse outliers in complex environments, and QuadOoS, captured by a quadruped robot with a panoramic annular lens system. Extensive experiments demonstrate superior performance of POS, with AuPRC improving by 34.25% and FPR95 decreasing by 21.42% on DenseOoS, outperforming state-of-the-art pinhole-OoS methods. Moreover, POS achieves leading closed-set segmentation capabilities and advances the development of panoramic understanding. Code and datasets will be available at https://github.com/MengfeiD/PanOoS.
+ oai:arXiv.org:2505.03539v3
+ cs.CV
+ cs.RO
+ eess.IV
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Alberto Larrauri
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mengfei Duan, Yuheng Zhang, Yihong Cao, Fei Teng, Kai Luo, Jiaming Zhang, Kailun Yang, Zhiyong Li
- Spatial Polarization Multiplexing: Single-Shot Invisible Shape and Reflectance Recovery
- https://arxiv.org/abs/2504.13177
- arXiv:2504.13177v2 Announce Type: replace
-Abstract: We propose spatial polarization multiplexing (SPM) for joint sensing of shape and reflectance of a static or dynamic deformable object, which is also invisible to the naked eye. Past structured-light methods are limited to shape acquisition and cannot recover reflectance as they alter scene appearance. Our key idea is to spatially multiplex a polarization pattern to encode the incident ray and also densely sample the reflected light. We derive a quantized polarized light pattern that can be robustly and uniquely decoded from the reflected Angle of Linear Polarization (AoLP) values. It also enables single-shot disentanglement of polarimetric diffuse and specular reflections for accurate BRDF estimation. We achieve this spatial polarization multiplexing (SPM) with a constrained de Bruijn sequence. We validate this novel invisible single-shot shape and reflectance method with real static and dynamic objects. The results demonstrate the effectiveness of SPM for accurate shape and BRDF measurement which opens new avenues of application for 3D sensing thanks to its invisibility and ability to jointly recover the radiometric properties.
- oai:arXiv.org:2504.13177v2
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Advancing AI Research Assistants with Expert-Involved Learning
+ https://arxiv.org/abs/2505.04638
+ arXiv:2505.04638v3 Announce Type: replace
+Abstract: Large language models (LLMs) and large multimodal models (LMMs) promise to accelerate biomedical discovery, yet their reliability remains unclear. We introduce ARIEL (AI Research Assistant for Expert-in-the-Loop Learning), an open-source evaluation and optimization framework that pairs a curated multimodal biomedical corpus with expert-vetted tasks to probe two capabilities: full-length article summarization and fine-grained figure interpretation. Using uniform protocols and blinded PhD-level evaluation, we find that state-of-the-art models generate fluent but incomplete summaries, whereas LMMs struggle with detailed visual reasoning. We later observe that prompt engineering and lightweight fine-tuning substantially improve textual coverage, and a compute-scaled inference strategy enhances visual question answering. We build an ARIEL agent that integrates textual and visual cues, and we show it can propose testable mechanistic hypotheses. ARIEL delineates current strengths and limitations of foundation models, and provides a reproducible platform for advancing trustworthy AI in biomedicine.
+ oai:arXiv.org:2505.04638v3
+ cs.AI
+ cs.CL
+ cs.IR
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Tomoki Ichikawa, Ryo Kawahara, Ko Nishino
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Tianyu Liu, Simeng Han, Hanchen Wang, Xiao Luo, Pan Lu, Biqing Zhu, Yuge Wang, Keyi Li, Jiapeng Chen, Rihao Qu, Yufeng Liu, Xinyue Cui, Aviv Yaish, Yuhang Chen, Minsheng Hao, Chuhan Li, Kexing Li, Arman Cohan, Hua Xu, Mark Gerstein, James Zou, Hongyu Zhao
- No-Regret Model Predictive Control with Online Learning of Koopman Operators
- https://arxiv.org/abs/2504.15805
- arXiv:2504.15805v3 Announce Type: replace
-Abstract: We study a problem of simultaneous system identification and model predictive control of nonlinear systems. Particularly, we provide an algorithm for systems with unknown residual dynamics that can be expressed by Koopman operators. Such residual dynamics can model external disturbances and modeling errors, such as wind and wave disturbances to aerial and marine vehicles, or inaccurate model parameters. The algorithm has finite-time near-optimality guarantees and asymptotically converges to the optimal non-causal controller. Specifically, the algorithm enjoys sublinear \textit{dynamic regret}, defined herein as the suboptimality against an optimal clairvoyant controller that knows how the unknown dynamics will adapt to its states and actions. To this end, we assume the algorithm is given Koopman observable functions such that the unknown dynamics can be approximated by a linear dynamical system. Then, it employs model predictive control based on the current learned model of the unknown residual dynamics. This model is updated online using least squares in a self-supervised manner based on the data collected while controlling the system. We validate our algorithm in physics-based simulations of a cart-pole system aiming to maintain the pole upright despite inaccurate model parameters.
- oai:arXiv.org:2504.15805v3
- eess.SY
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Formula size game and model checking for modal substitution calculus
+ https://arxiv.org/abs/2505.07966
+ arXiv:2505.07966v2 Announce Type: replace
+Abstract: Recent research has applied modal substitution calculus (MSC) and its variants to characterize various computational frameworks such as graph neural networks (GNNs) and distributed computing systems. For example, it has been shown that the expressive power of recurrent graph neural networks coincides with graded modal substitution calculus GMSC, which is the extension of MSC with counting modalities. GMSC can be further extended with the counting global modality, resulting in the logic GGMSC which corresponds to GNNs with global readout mechanisms. In this paper we introduce a formula-size game that characterizes the expressive power of MSC, GMSC, GGMSC, and related logics. Furthermore, we study the expressiveness and model checking of logics in this family. We prove that MSC and its extensions (GMSC, GGMSC) are as expressive as linear tape-bounded Turing machines, while asynchronous variants are linked to modal mu-calculus and modal computation logic MCL. We establish that for MSC, GMSC and GGMSC, both combined and data complexity of model checking are PSPACE-complete, and for their asynchronous variants, both complexities are PTIME-complete. We also establish that for the propositional fragment SC of MSC, the combined complexity of model checking is PSPACE-complete, while for asynchronous SC it is PTIME-complete, and in both cases, data complexity is constant. As a corollary, we observe that SC satisfiability is PSPACE-complete and NP-complete for its asynchronous variant. Finally, we construct a universal reduction from all recursively enumerable problems to MSC model checking.
+ oai:arXiv.org:2505.07966v2
+ cs.LO
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hongyu Zhou, Vasileios Tzoumas
+ Veeti Ahvonen, Reijo Jaakkola, Antti KuusistoLocal Convergence Behavior of Extended LOBPCG for Computing Eigenvalues of Hermitian Matrices
https://arxiv.org/abs/2505.08218
- arXiv:2505.08218v3 Announce Type: replace
+ arXiv:2505.08218v4 Announce Type: replace
Abstract: This paper provides a comprehensive and detailed analysis of the local convergence behavior of an extended variation of the locally optimal preconditioned conjugate gradient method (LOBPCG) for computing the extreme eigenvalue of a Hermitian matrix. The convergence rates derived in this work are either obtained for the first time or sharper than those previously established, including those in Ovtchinnikov's work ({\em SIAM J. Numer. Anal.}, 46(5):2567--2592, 2008). The study also extends to generalized problems, including Hermitian matrix polynomials that admit an extended form of the Rayleigh quotient. The new approach used to obtain these rates may also serve as a valuable tool for the convergence analysis of other gradient-type optimization methods.
- oai:arXiv.org:2505.08218v3
+ oai:arXiv.org:2505.08218v4math.NAcs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/Zhechen Shen, Xin Liang
- Revealing economic facts: LLMs know more than they say
- https://arxiv.org/abs/2505.08662
- arXiv:2505.08662v2 Announce Type: replace
-Abstract: We investigate whether the hidden states of large language models (LLMs) can be used to estimate and impute economic and financial statistics. Focusing on county-level (e.g. unemployment) and firm-level (e.g. total assets) variables, we show that a simple linear model trained on the hidden states of open-source LLMs outperforms the models' text outputs. This suggests that hidden states capture richer economic information than the responses of the LLMs reveal directly. A learning curve analysis indicates that only a few dozen labelled examples are sufficient for training. We also propose a transfer learning method that improves estimation accuracy without requiring any labelled data for the target variable. Finally, we demonstrate the practical utility of hidden-state representations in super-resolution and data imputation tasks.
- oai:arXiv.org:2505.08662v2
- cs.CL
- cs.LG
- econ.GN
- q-fin.EC
- Thu, 11 Dec 2025 00:00:00 -0500
+ LibIQ: Toward Real-Time Spectrum Classification in O-RAN dApps
+ https://arxiv.org/abs/2505.10537
+ arXiv:2505.10537v3 Announce Type: replace
+Abstract: The O-RAN architecture is transforming cellular networks by adopting RAN softwarization and disaggregation concepts to enable data-driven monitoring and control of the network. Such management is enabled by RICs, which facilitate near-real-time and non-real-time network control through xApps and rApps. However, they face limitations, including latency overhead in data exchange between the RAN and RIC, restricting real-time monitoring, and the inability to access user plain data due to privacy and security constraints, hindering use cases like beamforming and spectrum classification. In this paper, we leverage the dApps concept to enable real-time RF spectrum classification with LibIQ, a novel library for RF signals that facilitates efficient spectrum monitoring and signal classification by providing functionalities to read I/Q samples as time-series, create datasets and visualize time-series data through plots and spectrograms. Thanks to LibIQ, I/Q samples can be efficiently processed to detect external RF signals, which are subsequently classified using a CNN inside the library. To achieve accurate spectrum analysis, we created an extensive dataset of time-series-based I/Q samples, representing distinct signal types captured using a custom dApp running on a 5G deployment over the Colosseum network emulator and an OTA testbed. We evaluate our model by deploying LibIQ in heterogeneous scenarios with varying center frequencies, time windows, and external RF signals. In real-time analysis, the model classifies the processed I/Q samples, achieving an average accuracy of approximately 97.8% in identifying signal types across all scenarios. We pledge to release both LibIQ and the dataset created as a publicly available framework upon acceptance.
+ oai:arXiv.org:2505.10537v3
+ cs.NI
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Bank of England Staff Working Paper Series, No. 1150 (2025)
- Marcus Buckmann, Quynh Anh Nguyen, Edward Hill
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/MedComNet65822.2025.11100289
+ 2025 23rd Mediterranean Communication and Computer Networking Conference (MedComNet), Cagliari, Italy, 2025, pp. 1-6
+ Filippo Olimpieri, Noemi Giustini, Andrea Lacava, Salvatore D'Oro, Tommaso Melodia, Francesca Cuomo
- Sinusoidal Initialization, Time for a New Start
- https://arxiv.org/abs/2505.12909
- arXiv:2505.12909v3 Announce Type: replace
-Abstract: Initialization plays a critical role in Deep Neural Network training, directly influencing convergence, stability, and generalization. Common approaches such as Glorot and He initializations rely on randomness, which can produce uneven weight distributions across layer connections. In this paper, we introduce the Sinusoidal initialization, a novel deterministic method that employs sinusoidal functions to construct structured weight matrices expressly to improve the spread and balance of weights throughout the network while simultaneously fostering a more uniform, well-conditioned distribution of neuron activation states from the very first forward pass. Because Sinusoidal initialization begins with weights and activations that are already evenly and efficiently utilized, it delivers consistently faster convergence, greater training stability, and higher final accuracy across a wide range of models, including convolutional neural networks, vision transformers, and large language models. On average, our experiments show an increase of 4.9% in final validation accuracy and 20.9% in convergence speed. By replacing randomness with structure, this initialization provides a stronger and more reliable foundation for Deep Learning systems.
- oai:arXiv.org:2505.12909v3
+ Learning (Approximately) Equivariant Networks via Constrained Optimization
+ https://arxiv.org/abs/2505.13631
+ arXiv:2505.13631v2 Announce Type: replace
+Abstract: Equivariant neural networks are designed to respect symmetries through their architecture, boosting generalization and sample efficiency when those symmetries are present in the data distribution. Real-world data, however, often departs from perfect symmetry because of noise, structural variation, measurement bias, or other symmetry-breaking effects. Strictly equivariant models may struggle to fit the data, while unconstrained models lack a principled way to leverage partial symmetries. Even when the data is fully symmetric, enforcing equivariance can hurt training by limiting the model to a restricted region of the parameter space. Guided by homotopy principles, where an optimization problem is solved by gradually transforming a simpler problem into a complex one, we introduce Adaptive Constrained Equivariance (ACE), a constrained optimization approach that starts with a flexible, non-equivariant model and gradually reduces its deviation from equivariance. This gradual tightening smooths training early on and settles the model at a data-driven equilibrium, balancing between equivariance and non-equivariance. Across multiple architectures and tasks, our method consistently improves performance metrics, sample efficiency, and robustness to input perturbations compared with strictly equivariant models and heuristic equivariance relaxations.
+ oai:arXiv.org:2505.13631v2cs.LGcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- NeurIPS (2025)
- Alberto Fern\'andez-Hern\'andez, Jose I. Mestre, Manuel F. Dolz, Jose Duato, Enrique S. Quintana-Ort\'i
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Andrei Manolache, Luiz F. O. Chamon, Mathias Niepert
- Q${}^2$Forge: Minting Competency Questions and SPARQL Queries for Question-Answering Over Knowledge Graphs
- https://arxiv.org/abs/2505.13572
- arXiv:2505.13572v2 Announce Type: replace
-Abstract: The SPARQL query language is the standard method to access knowledge graphs (KGs). However, formulating SPARQL queries is a significant challenge for non-expert users, and remains time-consuming for the experienced ones. Best practices recommend to document KGs with competency questions and example queries to contextualise the knowledge they contain and illustrate their potential applications. In practice, however, this is either not the case or the examples are provided in limited numbers. Large Language Models (LLMs) are being used in conversational agents and are proving to be an attractive solution with a wide range of applications, from simple question-answering about common knowledge to generating code in a targeted programming language. However, training and testing these models to produce high quality SPARQL queries from natural language questions requires substantial datasets of question-query pairs. In this paper, we present Q${}^2$Forge that addresses the challenge of generating new competency questions for a KG and corresponding SPARQL queries. It iteratively validates those queries with human feedback and LLM as a judge. Q${}^2$Forge is open source, generic, extensible and modular, meaning that the different modules of the application (CQ generation, query generation and query refinement) can be used separately, as an integrated pipeline, or replaced by alternative services. The result is a complete pipeline from competency question formulation to query evaluation, supporting the creation of reference query sets for any target KG.
- oai:arXiv.org:2505.13572v2
- cs.DB
+ Forensic deepfake audio detection using segmental speech features
+ https://arxiv.org/abs/2505.13847
+ arXiv:2505.13847v3 Announce Type: replace
+Abstract: This study explores the potential of using acoustic features of segmental speech sounds to detect deepfake audio. These features are highly interpretable because of their close relationship with human articulatory processes and are expected to be more difficult for deepfake models to replicate. The results demonstrate that certain segmental features commonly used in forensic voice comparison (FVC) are effective in identifying deep-fakes, whereas some global features provide little value. These findings underscore the need to approach audio deepfake detection using methods that are distinct from those employed in traditional FVC, and offer a new perspective on leveraging segmental features for this purpose. In addition, the present study proposes a speaker-specific framework for deepfake detection, which differs fundamentally from the speaker-independent systems that dominate current benchmarks. While speaker-independent frameworks aim at broad generalization, the speaker-specific approach offers advantages in forensic contexts where case-by-case interpretability and sensitivity to individual phonetic realization are essential.
+ oai:arXiv.org:2505.13847v3
+ cs.SDcs.AI
- cs.IR
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CL
+ eess.AS
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Knowledge Capture Conference 2025 (K-CAP '25), December 10--12, 2025, Dayton, OH, USA, Dec 2025, Dayton, OH, United States
- Yousouf Taghzouti (WIMMICS, ICN), Franck Michel (Laboratoire I3S - SPARKS, WIMMICS), Tao Jiang (ICN), Louis-F\'elix Nothias (ICN), Fabien Gandon (WIMMICS, Laboratoire I3S - SPARKS)
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1016/j.forsciint.2025.112768
+ Tianle Yang, Chengzhe Sun, Siwei Lyu, Phil Rose
- C*: A Coverage Path Planning Algorithm for Unknown Environments using Rapidly Covering Graphs
- https://arxiv.org/abs/2505.13782
- arXiv:2505.13782v2 Announce Type: replace
-Abstract: The paper presents a novel sample-based algorithm, called C*, for real-time coverage path planning (CPP) of unknown environments. C* is built upon the concept of a Rapidly Covering Graph (RCG), which is incrementally constructed during robot navigation via progressive sampling of the search space. By using efficient sampling and pruning techniques, the RCG is constructed to be a minimum-sufficient graph, where its nodes and edges form the potential waypoints and segments of the coverage trajectory, respectively. The RCG tracks the coverage progress, generates the coverage trajectory and helps the robot to escape from the dead-end situations. To minimize coverage time, C* produces the desired back-and-forth coverage pattern, while adapting to the TSP-based optimal coverage of local isolated regions, called coverage holes, which are surrounded by obstacles and covered regions. It is analytically proven that C* provides complete coverage of unknown environments. The algorithmic simplicity and low computational complexity of C* make it easy to implement and suitable for real-time on-board applications. The performance of C* is validated by 1) extensive high-fidelity simulations and 2) laboratory experiments using an autonomous robot. C* yields near optimal trajectories, and a comparative evaluation with seven existing CPP methods demonstrates significant improvements in performance in terms of coverage time, number of turns, trajectory length, and overlap ratio, while preventing the formation of coverage holes. Finally, C* is comparatively evaluated on two different CPP applications using 1) energy-constrained robots and 2) multi-robot teams.
- oai:arXiv.org:2505.13782v2
- cs.RO
- cs.SY
- eess.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ SCAN: Semantic Document Layout Analysis for Textual and Visual Retrieval-Augmented Generation
+ https://arxiv.org/abs/2505.14381
+ arXiv:2505.14381v2 Announce Type: replace
+Abstract: With the increasing adoption of Large Language Models (LLMs) and Vision-Language Models (VLMs), rich document analysis technologies for applications like Retrieval-Augmented Generation (RAG) and visual RAG are gaining significant attention. Recent research indicates that using VLMs yields better RAG performance, but processing rich documents remains a challenge since a single page contains large amounts of information. In this paper, we present SCAN (SemantiC Document Layout ANalysis), a novel approach that enhances both textual and visual Retrieval-Augmented Generation (RAG) systems that work with visually rich documents. It is a VLM-friendly approach that identifies document components with appropriate semantic granularity, balancing context preservation with processing efficiency. SCAN uses a coarse-grained semantic approach that divides documents into coherent regions covering contiguous components. We trained the SCAN model by fine-tuning object detection models on an annotated dataset. Our experimental results across English and Japanese datasets demonstrate that applying SCAN improves end-to-end textual RAG performance by up to 9.4 points and visual RAG performance by up to 10.4 points, outperforming conventional approaches and even commercial document processing solutions.
+ oai:arXiv.org:2505.14381v2
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zongyuan Shen, James P. Wilson, Shalabh Gupta
+ Yuyang Dong, Nobuhiro Ueda, Kriszti\'an Boros, Daiki Ito, Takuya Sera, Masafumi Oyamada
- Adversarially Pretrained Transformers May Be Universally Robust In-Context Learners
- https://arxiv.org/abs/2505.14042
- arXiv:2505.14042v2 Announce Type: replace
-Abstract: Adversarial training is one of the most effective adversarial defenses, but it incurs a high computational cost. In this study, we present the first theoretical analysis suggesting that adversarially pretrained transformers can serve as universally robust foundation models -- models that can robustly adapt to diverse downstream tasks with only lightweight tuning. Specifically, we demonstrate that single-layer linear transformers, after adversarial pretraining across a variety of classification tasks, can robustly generalize to unseen classification tasks through in-context learning from clean demonstrations (i.e., without requiring additional adversarial training or examples). This universal robustness stems from the model's ability to adaptively focus on robust features within given tasks. We also show the two open challenges for attaining robustness: accuracy--robustness trade-off and sample-hungry training. This study initiates the discussion on the utility of universally robust foundation models. While their training is expensive, the investment would prove worthwhile as downstream tasks can enjoy free adversarial robustness. The code is available at https://github.com/s-kumano/universally-robust-in-context-learner.
- oai:arXiv.org:2505.14042v2
- cs.LG
+ diffDemorph: Extending Reference-Free Demorphing to Unseen Faces
+ https://arxiv.org/abs/2505.14527
+ arXiv:2505.14527v4 Announce Type: replace
+Abstract: A face morph is created by combining two face images corresponding to two identities to produce a composite that successfully matches both the constituent identities. Reference-free (RF) demorphing reverses this process using only the morph image, without the need for additional reference images. Previous RF demorphing methods are overly constrained, as they rely on assumptions about the distributions of training and testing morphs such as the morphing technique used (e.g., landmark-based) and face image style (e.g., passport photos). In this paper, we introduce a novel diffusion-based approach, referred to as diffDeMorph, that effectively disentangles component images from a composite morph image with high visual fidelity. Our method is the first to generalize across morph techniques and face styles, beating the current state of the art by $\geq 59.46\%$ under a common training protocol across all datasets tested. We train our method on morphs created using synthetically generated face images and test on real morphs, thereby enhancing the practicality of the technique. Experiments on six datasets and two face matchers establish the utility and efficacy of our method.
+ oai:arXiv.org:2505.14527v4cs.CV
- stat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Soichiro Kumano, Hiroshi Kera, Toshihiko Yamasaki
-
-
- SGL: A Structured Graphics Language
- https://arxiv.org/abs/2505.14690
- arXiv:2505.14690v3 Announce Type: replace
-Abstract: This paper introduces SGL, a graphics language that is aesthetically similar to SQL. As a graphical counterpart to SQL, SGL enables specification of statistical graphics within SQL query interfaces. SGL is based on a grammar of graphics that has been customized to support a SQL aesthetic.
- This paper presents the fundamental components of the SGL language alongside examples, and describes SGL's underlying grammar of graphics via comparison to its closest predecessor, the layered grammar of graphics.
- oai:arXiv.org:2505.14690v3
- cs.PL
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jon Chapman
+ http://creativecommons.org/licenses/by/4.0/
+ IEEE International Conference on Image Processing (ICIP), 2025
+ Nitish Shukla, Arun Ross
- Global Convergence for Average Reward Constrained MDPs with Primal-Dual Actor Critic Algorithm
- https://arxiv.org/abs/2505.15138
- arXiv:2505.15138v2 Announce Type: replace
-Abstract: This paper investigates infinite-horizon average reward Constrained Markov Decision Processes (CMDPs) with general parametrization. We propose a Primal-Dual Natural Actor-Critic algorithm that adeptly manages constraints while ensuring a high convergence rate. In particular, our algorithm achieves global convergence and constraint violation rates of $\tilde{\mathcal{O}}(1/\sqrt{T})$ over a horizon of length $T$ when the mixing time, $\tau_{\mathrm{mix}}$, is known to the learner. In absence of knowledge of $\tau_{\mathrm{mix}}$, the achievable rates change to $\tilde{\mathcal{O}}(1/T^{0.5-\epsilon})$ provided that $T \geq \tilde{\mathcal{O}}\left(\tau_{\mathrm{mix}}^{2/\epsilon}\right)$. Our results match the theoretical lower bound for Markov Decision Processes and establish a new benchmark in the theoretical exploration of average reward CMDPs.
- oai:arXiv.org:2505.15138v2
- cs.LG
+ FOL-Traces: Verified First-Order Logic Reasoning Traces at Scale
+ https://arxiv.org/abs/2505.14932
+ arXiv:2505.14932v2 Announce Type: replace
+Abstract: Reasoning in language models is difficult to evaluate: natural-language traces are unverifiable, symbolic datasets too small, and most benchmarks conflate heuristics with inference. We present FOL-Traces, the first large-scale dataset of programmatically verified reasoning traces, enabling rigorous evaluation of structured logical inference. We also propose two challenging and comprehensive diagnostic tasks-masked operation prediction and step completion-that directly probe syntactic awareness and process fidelity. FOL-Traces serves as a scalable testbed for rigorously studying how models perform structured logical inference. Systematic experiments with 5 reasoning LLMs show that the dataset remains challenging: models only reach around 45.7% accuracy on masked operation prediction and around 27% on two-step completion.
+ oai:arXiv.org:2505.14932v2cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yang Xu, Swetha Ganesh, Washim Uddin Mondal, Qinbo Bai, Vaneet Aggarwal
+ http://creativecommons.org/licenses/by/4.0/
+ Isabelle Lee, Sarah Liaw, Dani Yogatama
- Not All Models Suit Expert Offloading: On Local Routing Consistency of Mixture-of-Expert Models
- https://arxiv.org/abs/2505.16056
- arXiv:2505.16056v3 Announce Type: replace
-Abstract: Mixture-of-Experts (MoE) enables efficient scaling of large language models (LLMs) with sparsely activated experts during inference. To effectively deploy large MoE models on memory-constrained devices, many systems introduce *expert offloading* that caches a subset of experts in fast memory, leaving others on slow memory to run on CPU or load on demand. While some research has exploited the locality of expert activations, where consecutive tokens activate similar experts, the degree of this **local routing consistency** varies across models and remains understudied. In this paper, we propose two metrics to measure local routing consistency of MoE models: (1) **Segment Routing Best Performance (SRP)**, which evaluates how well a fixed group of experts can cover the needs of a segment of tokens, and (2) **Segment Cache Best Hit Rate (SCH)**, which measures the hit rate of an expert cache utilizing a length of future information under a cache limit. We analyze 20 MoE LLMs with diverse sizes and architectures and use toy models to verify key factors related to local routing consistency. We find a strong trade-off between local routing consistency and *local* load balance, while showing that *global* load balance can coexist with local routing consistency. Meanwhile, settings like shared experts that decrease expert combination space can lead to low local routing consistency. We further reveal that domain-specialized experts contribute more to routing consistency than vocabulary-specialized ones, and that most models balance between cache effectiveness and efficiency with cache sizes approximately twice the active experts. These findings pave the way for memory-efficient MoE design and deployment without compromising inference speed. We publish the code for replicating experiments at https://github.com/ljcleo/moe-lrc .
- oai:arXiv.org:2505.16056v3
- cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ When Less Language is More: Language-Reasoning Disentanglement Makes LLMs Better Multilingual Reasoners
+ https://arxiv.org/abs/2505.15257
+ arXiv:2505.15257v2 Announce Type: replace
+Abstract: Multilingual reasoning remains a significant challenge for large language models (LLMs), with performance disproportionately favoring high-resource languages. Drawing inspiration from cognitive neuroscience, which suggests that human reasoning functions largely independently of language processing, we hypothesize that LLMs similarly encode reasoning and language as separable components that can be disentangled to enhance multilingual reasoning. To evaluate this, we perform a causal intervention by ablating language-specific representations at inference time. Experiments on 10 open-weight LLMs spanning 11 typologically diverse languages show that this language-specific ablation consistently boosts multilingual reasoning performance. Layer-wise analyses further confirm that language and reasoning representations can be effectively disentangled throughout the model, yielding improved multilingual reasoning capabilities, while preserving top-layer language features remains essential for maintaining linguistic fidelity. Compared to post-training methods such as supervised fine-tuning or reinforcement learning, our training-free language-reasoning disentanglement achieves comparable or superior results with minimal computational overhead. These findings shed light on the internal mechanisms underlying multilingual reasoning in LLMs and suggest a lightweight and interpretable strategy for improving cross-lingual generalization.
+ oai:arXiv.org:2505.15257v2
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Jingcong Liang, Siyuan Wang, Miren Tian, Yitong Li, Duyu Tang, Zhongyu Wei
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Weixiang Zhao, Jiahe Guo, Yang Deng, Tongtong Wu, Wenxuan Zhang, Yulin Hu, Xingyu Sui, Yanyan Zhao, Wanxiang Che, Bing Qin, Tat-Seng Chua, Ting Liu
- Make LVLMs Focus: Context-Aware Attention Modulation for Better Multimodal In-Context Learning
- https://arxiv.org/abs/2505.17097
- arXiv:2505.17097v4 Announce Type: replace
-Abstract: Multimodal in-context learning (ICL) is becoming a key capability that allows large vision-language models (LVLMs) to adapt to novel tasks without parameter updates, which expands their usefulness in many real-world applications. However, ICL performance remains unstable even when the in-context demonstrations (ICDs) are well matched, showing that LVLMs still struggle to make full use of the provided context. While existing work mainly focuses on prompt engineering or post-hoc logit calibration, we study the attention mechanisms inside LVLMs to address their inherent limitations. We identify two important weaknesses in their self-attention that hinder effective ICL. To address these weaknesses, we propose Context-Aware Modulated Attention (CAMA), a training-free and plug-and-play method that dynamically adjusts attention logits based on the input in-context sequence. CAMA uses a two-stage modulation process that strengthens attention to semantically important tokens, especially visual ones. Across four LVLMs and seven benchmarks, CAMA consistently outperforms vanilla models and baselines, showing clear effectiveness and generalization. It can also activate the intended benefits of prompt engineering methods and remains robust across different sequence configurations. Therefore, CAMA opens up new directions for improving multimodal reasoning through a deeper understanding of attention dynamics.
- oai:arXiv.org:2505.17097v4
- cs.CV
+ Teaching Language Models to Evolve with Users: Dynamic Profile Modeling for Personalized Alignment
+ https://arxiv.org/abs/2505.15456
+ arXiv:2505.15456v2 Announce Type: replace
+Abstract: Personalized alignment is essential for enabling large language models (LLMs) to engage effectively in user-centric dialogue. While recent prompt-based and offline optimization methods offer preliminary solutions, they fall short in cold-start scenarios and long-term personalization due to their inherently static and shallow designs. In this work, we introduce the Reinforcement Learning for Personalized Alignment (RLPA) framework, in which an LLM interacts with a simulated user model to iteratively infer and refine user profiles through dialogue. The training process is guided by a dual-level reward structure: the Profile Reward encourages accurate construction of user representations, while the Response Reward incentivizes generation of responses consistent with the inferred profile. We instantiate RLPA by fine-tuning Qwen-2.5-3B-Instruct, resulting in Qwen-RLPA, which achieves state-of-the-art performance in personalized dialogue. Empirical evaluations demonstrate that Qwen-RLPA consistently outperforms prompting and offline fine-tuning baselines, and even surpasses advanced commercial models such as Claude-3.5 and GPT-4o. Further analysis highlights Qwen-RLPA's robustness in reconciling conflicting user preferences, sustaining long-term personalization and delivering more efficient inference compared to recent reasoning-focused LLMs. These results emphasize the potential of dynamic profile inference as a more effective paradigm for building personalized dialogue systems.
+ oai:arXiv.org:2505.15456v2cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yanshu Li, Jianjiang Yang, Ziteng Yang, Bozheng Li, Ligong Han, Hongyang He, Zhengtao Yao, Yingjie Victor Chen, Songlin Fei, Dongfang Liu, Ruixiang Tang
+ Weixiang Zhao, Xingyu Sui, Yulin Hu, Jiahe Guo, Haixiao Liu, Biye Li, Yanyan Zhao, Bing Qin, Ting Liu
- A Network Science Approach to Granular Time Series Segmentation
- https://arxiv.org/abs/2505.17640
- arXiv:2505.17640v2 Announce Type: replace
-Abstract: Time series segmentation (TSS) is one of the time series (TS) analysis techniques, that has received considerably less attention compared to other TS related tasks. In recent years, deep learning architectures have been introduced for TSS, however their reliance on sliding windows limits segmentation granularity due to fixed window sizes and strides. To overcome these challenges, we propose a new more granular TSS approach that utilizes the Weighted Dual Perspective Visbility Graph (WDPVG) TS into a graph and combines it with a Graph Attention Network (GAT). By transforming TS into graphs, we are able to capture different structural aspects of the data that would otherwise remain hidden. By utilizing the representation learning capabilities of Graph Neural Networks, our method is able to effectively identify meaningful segments within the TS. To better understand the potential of our approach, we also experimented with different TS-to-graph transformations and compared their performance. Our contributions include: a) formulating the TSS as a node classification problem on graphs; b) conducting an extensive analysis of various TS-to-graph transformations applied to TSS using benchmark datasets from the TSSB repository; c) providing the first detailed study on utilizing GNNs for analyzing graph representations of TS in the context of TSS; d) demonstrating the effectiveness of our method, which achieves an average F1 score of 0.97 across 59 diverse TSS benchmark datasets; e) outperforming the seq2point baseline method by 0.05 in terms of F1 score; and f) reducing the required training data compared to the baseline methods.
- oai:arXiv.org:2505.17640v2
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ SpatialScore: Towards Comprehensive Evaluation for Spatial Intelligence
+ https://arxiv.org/abs/2505.17012
+ arXiv:2505.17012v2 Announce Type: replace
+Abstract: Existing evaluations of multimodal large language models (MLLMs) on spatial intelligence are typically fragmented and limited in scope. In this work, we aim to conduct a holistic assessment of the spatial understanding capabilities of modern MLLMs and propose complementary data-driven and agent-based solutions. Specifically, we make the following contributions: (i) we introduce SpatialScore, to our knowledge, the most comprehensive and diverse benchmark for multimodal spatial intelligence to date. It covers multiple visual data types, input modalities, and question-answering formats, and contains approximately 5K manually verified samples spanning 30 distinct tasks; (ii) using SpatialScore, we extensively evaluate 40 representative MLLMs, revealing persistent challenges and a substantial gap between current models and human-level spatial intelligence; (iii) to advance model capabilities, we construct SpatialCorpus, a large-scale training resource with 331K multimodal QA samples that supports fine-tuning on spatial reasoning tasks and significantly improves the performance of existing models (e.g., Qwen3-VL); (iv) to complement this data-driven route with a training-free paradigm, we develop SpatialAgent, a multi-agent system equipped with 12 specialized spatial perception tools that supports both Plan-Execute and ReAct reasoning, enabling substantial gains in spatial reasoning without additional model training. Extensive experiments and in-depth analyses demonstrate the effectiveness of our benchmark, corpus, and agent framework. We expect these resources to serve as a solid foundation for advancing MLLMs toward human-level spatial intelligence. All data, code, and models will be released to the research community.
+ oai:arXiv.org:2505.17012v2
+ cs.CV
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ivana Kesi\'c, Carolina Fortuna, Mihael Mohor\v{c}i\v{c}, Bla\v{z} Bertalani\v{c}
+ Haoning Wu, Xiao Huang, Yaohui Chen, Ya Zhang, Yanfeng Wang, Weidi Xie
- The emergence of sparse attention: impact of data distribution and benefits of repetition
- https://arxiv.org/abs/2505.17863
- arXiv:2505.17863v2 Announce Type: replace
-Abstract: Emergence is a fascinating property of large language models and neural networks more broadly: as models scale and train for longer, they sometimes develop new abilities in sudden ways. Despite initial studies, we still lack a comprehensive understanding of how and when these abilities emerge. To address this gap, we study the emergence over training of sparse attention, a critical and frequently observed attention pattern in Transformers. By combining theoretical analysis of a toy model with empirical observations on small Transformers trained on a linear regression variant, we uncover the mechanics driving sparse attention emergence and reveal that emergence timing follows power laws based on task structure, architecture, and optimizer choice. We additionally find that repetition can greatly speed up emergence. Finally, we confirm these results on a well-studied in-context associative recall task. Our findings provide a simple, theoretically grounded framework for understanding how data distributions and model design influence the learning dynamics behind one form of emergence.
- oai:arXiv.org:2505.17863v2
- cs.LG
- cs.NE
- Thu, 11 Dec 2025 00:00:00 -0500
+ The Coherence Trap: When MLLM-Crafted Narratives Exploit Manipulated Visual Contexts
+ https://arxiv.org/abs/2505.17476
+ arXiv:2505.17476v2 Announce Type: replace
+Abstract: The detection and grounding of multimedia manipulation has emerged as a critical challenge in combating AI-generated disinformation. While existing methods have made progress in recent years, we identify two fundamental limitations in current approaches: (1) Underestimation of MLLM-driven deception risk: prevailing techniques primarily address rule-based text manipulations, yet fail to account for sophisticated misinformation synthesized by multimodal large language models (MLLMs) that can dynamically generate semantically coherent, contextually plausible yet deceptive narratives conditioned on manipulated images; (2) Unrealistic misalignment artifacts: currently focused scenarios rely on artificially misaligned content that lacks semantic coherence, rendering them easily detectable. To address these gaps holistically, we propose a new adversarial pipeline that leverages MLLMs to generate high-risk disinformation. Our approach begins with constructing the MLLM-Driven Synthetic Multimodal (MDSM) dataset, where images are first altered using state-of-the-art editing techniques and then paired with MLLM-generated deceptive texts that maintain semantic consistency with the visual manipulations. Building upon this foundation, we present the Artifact-aware Manipulation Diagnosis via MLLM (AMD) framework featuring two key innovations: Artifact Pre-perception Encoding strategy and Manipulation-Oriented Reasoning, to tame MLLMs for the MDSM problem. Comprehensive experiments validate our framework's superior generalization capabilities as a unified architecture for detecting MLLM-powered multimodal deceptions. In cross-domain testing on the MDSM dataset, AMD achieves the best average performance, with 88.18 ACC, 60.25 mAP, and 61.02 mIoU scores.
+ oai:arXiv.org:2505.17476v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Nicolas Zucchet, Francesco d'Angelo, Andrew K. Lampinen, Stephanie C. Y. Chan
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuchen Zhang, Yaxiong Wang, Yujiao Wu, Lianwei Wu, Li Zhu, Zhedong Zheng
- LLM Meeting Decision Trees on Tabular Data
- https://arxiv.org/abs/2505.17918
- arXiv:2505.17918v2 Announce Type: replace
-Abstract: Tabular data have been playing a vital role in diverse real-world fields, including healthcare, finance, etc. With the recent success of Large Language Models (LLMs), early explorations of extending LLMs to the domain of tabular data have been developed. Most of these LLM-based methods typically first serialize tabular data into natural language descriptions, and then tune LLMs or directly infer on these serialized data. However, these methods suffer from two key inherent issues: (i) data perspective: existing data serialization methods lack universal applicability for structured tabular data, and may pose privacy risks through direct textual exposure, and (ii) model perspective: LLM fine-tuning methods struggle with tabular data, and in-context learning scalability is bottle-necked by input length constraints (suitable for few-shot learning). This work explores a novel direction of integrating LLMs into tabular data throughough logical decision tree rules as intermediaries, proposes a decision tree enhancer with LLM-derived rule for tabular prediction, DeLTa. The proposed DeLTa avoids tabular data serialization, and can be applied to full data learning setting without LLM fine-tuning. Specifically, we leverage the reasoning ability of LLMs to redesign an improved rule given a set of decision tree rules. Furthermore, we provide a calibration method for original decision trees via new generated rule by LLM, which approximates the error correction vector to steer the original decision tree predictions in the direction of ``errors'' reducing. Finally, extensive experiments on diverse tabular benchmarks show that our method achieves state-of-the-art performance.
- oai:arXiv.org:2505.17918v2
+ On the Design of KL-Regularized Policy Gradient Algorithms for LLM Reasoning
+ https://arxiv.org/abs/2505.17508
+ arXiv:2505.17508v3 Announce Type: replace
+Abstract: Policy gradient algorithms have been successfully applied to enhance the reasoning capabilities of large language models (LLMs). KL regularization is ubiquitous, yet the design surface, choice of KL direction (forward vs. reverse), normalization (normalized vs. unnormalized), and estimator ($k_1/k_2/k_3$), is scattered across the literature and often intertwined with off-policy estimation. We ask a focused question: under the off-policy setting, what weighting is required for each KL variant so that the surrogate we optimize yields the exact gradient of the intended KL-regularized objective? We answer this with a compact, unified derivation we call the Regularized Policy Gradient (RPG) view. RPG (i) unifies normalized and unnormalized KL variants and shows that the widely-used $k_3$ penalty is exactly the unnormalized KL; (ii) specifies conditions under which REINFORCE-style losses with stop-gradient are gradient-equivalent to fully differentiable surrogates; (iii) identifies and corrects an off-policy importance-weighting mismatch in GRPO's KL term; and (iv) introduces RPG-Style Clip, a clipped-importance-sampling step within RPG-REINFORCE that enables stable, off-policy policy-gradient training at scale. On mathematical reasoning benchmarks (AIME24, AIME25), RPG-REINFORCE with RPG-Style Clip improves accuracy by up to $+6$ absolute percentage points over DAPO. We extend our experiments to 8K context length, and RPG-REINFORCE with RPG-Style Clip achieves 52% accuracy on AIME25, surpassing the official Qwen3-4B-Instruct model (47%). Notably, RPG is a stable and scalable RL algorithm for LLM reasoning, realized via (a) a KL-correct objective, (b) clipped importance sampling, and (c) an iterative reference-policy update scheme.
+ oai:arXiv.org:2505.17508v3cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hangting Ye, Jinmeng Li, He Zhao, Dandan Guo, Yi Chang
+ Yifan Zhang, Yifeng Liu, Huizhuo Yuan, Yang Yuan, Quanquan Gu, Andrew Chi-Chih Yao
- DISTA-Net: Dynamic Closely-Spaced Infrared Small Target Unmixing
- https://arxiv.org/abs/2505.19148
- arXiv:2505.19148v2 Announce Type: replace
-Abstract: Resolving closely-spaced small targets in dense clusters presents a significant challenge in infrared imaging, as the overlapping signals hinder precise determination of their quantity, sub-pixel positions, and radiation intensities. While deep learning has advanced the field of infrared small target detection, its application to closely-spaced infrared small targets has not yet been explored. This gap exists primarily due to the complexity of separating superimposed characteristics and the lack of an open-source infrastructure. In this work, we propose the Dynamic Iterative Shrinkage Thresholding Network (DISTA-Net), which reconceptualizes traditional sparse reconstruction within a dynamic framework. DISTA-Net adaptively generates convolution weights and thresholding parameters to tailor the reconstruction process in real time. To the best of our knowledge, DISTA-Net is the first deep learning model designed specifically for the unmixing of closely-spaced infrared small targets, achieving superior sub-pixel detection accuracy. Moreover, we have established the first open-source ecosystem to foster further research in this field. This ecosystem comprises three key components: (1) CSIST-100K, a publicly available benchmark dataset; (2) CSO-mAP, a custom evaluation metric for sub-pixel detection; and (3) GrokCSO, an open-source toolkit featuring DISTA-Net and other models. Our code and dataset are available at https://github.com/GrokCV/GrokCSO.
- oai:arXiv.org:2505.19148v2
+ SplatCo: Structure-View Collaborative Gaussian Splatting for Detail-Preserving Rendering of Large-Scale Unbounded Scenes
+ https://arxiv.org/abs/2505.17951
+ arXiv:2505.17951v4 Announce Type: replace
+Abstract: We present SplatCo, a structure-view collaborative Gaussian splatting framework for high-fidelity rendering of complex outdoor scenes. SplatCo builds upon three novel components: 1) a cross-structure collaboration module that combines global tri-plane representations, which capture coarse scene layouts, with local context grid features representing fine details. This fusion is achieved through a hierarchical compensation mechanism, ensuring both global spatial awareness and local detail preservation; 2) a cross-view pruning mechanism that removes overfitted or inaccurate Gaussians based on structural consistency, thereby improving storage efficiency and preventing rendering artifacts; 3) a structure view co-learning module that aggregates structural gradients with view gradients,thereby steering the optimization of Gaussian geometric and appearance attributes more robustly. By combining these key components, SplatCo effectively achieves high-fidelity rendering for large-scale scenes. Code and project page are available at https://splatco-tech.github.io.
+ oai:arXiv.org:2505.17951v4cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shengdong Han, Shangdong Yang, Xin Zhang, Yuxuan Li, Xiang Li, Jian Yang, Ming-Ming Cheng, Yimian Dai
+ Haihong Xiao, Jianan Zou, Yuxin Zhou, Ying He, Wenxiong Kang
- Learning to Infer Parameterized Representations of Plants from 3D Scans
- https://arxiv.org/abs/2505.22337
- arXiv:2505.22337v2 Announce Type: replace
-Abstract: Plants frequently contain numerous organs, organized in 3D branching systems defining the plant's architecture. Reconstructing the architecture of plants from unstructured observations is challenging because of self-occlusion and spatial proximity between organs, which are often thin structures. To achieve the challenging task, we propose an approach that allows to infer a parameterized representation of the plant's architecture from a given 3D scan of a plant. In addition to the plant's branching structure, this representation contains parametric information for each plant organ, and can therefore be used directly in a variety of tasks. In this data-driven approach, we train a recursive neural network with virtual plants generated using a procedural model. After training, the network allows to infer a parametric tree-like representation based on an input 3D point cloud. Our method is applicable to any plant that can be represented as binary axial tree. We quantitatively evaluate our approach on Chenopodium Album plants on reconstruction, segmentation and skeletonization, which are important problems in plant phenotyping. In addition to carrying out several tasks at once, our method achieves results on-par with strong baselines for each task. We apply our method, trained exclusively on synthetic data, to 3D scans and show that it generalizes well.
- oai:arXiv.org:2505.22337v2
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ MaskedManipulator: Versatile Whole-Body Manipulation
+ https://arxiv.org/abs/2505.19086
+ arXiv:2505.19086v3 Announce Type: replace
+Abstract: We tackle the challenges of synthesizing versatile, physically simulated human motions for full-body object manipulation. Unlike prior methods that are focused on detailed motion tracking, trajectory following, or teleoperation, our framework enables users to specify versatile high-level objectives such as target object poses or body poses. To achieve this, we introduce MaskedManipulator, a generative control policy distilled from a tracking controller trained on large-scale human motion capture data. This two-stage learning process allows the system to perform complex interaction behaviors, while providing intuitive user control over both character and object motions. MaskedManipulator produces goal-directed manipulation behaviors that expand the scope of interactive animation systems beyond task-specific solutions.
+ oai:arXiv.org:2505.19086v3
+ cs.RO
+ cs.AI
+ cs.GR
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Samara Ghrer, Christophe Godin, Stefanie Wuhrer
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Chen Tessler, Yifeng Jiang, Erwin Coumans, Zhengyi Luo, Gal Chechik, Xue Bin Peng
- Do You See Me : A Multidimensional Benchmark for Evaluating Visual Perception in Multimodal LLMs
- https://arxiv.org/abs/2506.02022
- arXiv:2506.02022v2 Announce Type: replace
-Abstract: Multimodal Large Language Models (MLLMs) show reasoning promise, yet their visual perception is a critical bottleneck. Strikingly, MLLMs can produce correct answers even while misinterpreting crucial visual elements, masking these underlying failures. Our preliminary study on a joint perception-reasoning dataset revealed that for one leading MLLM, 29% of its correct answers to reasoning questions still exhibited visual perception errors. To systematically address this, we introduce "Do You See Me", a scalable benchmark with 1,758 images and 2,612 questions. It spans seven human-psychology inspired subtasks in 2D and 3D, featuring controllable complexity to rigorously evaluate MLLM visual skills. Our findings on 3 leading closed-source and 5 major open-source models reveal a stark deficit: humans achieve 96.49% accuracy, while top MLLMs average below 50%. This performance gap widens rapidly with increased task complexity (e.g., from 12% to 45% in the visual form constancy subtask). Further analysis into the root causes suggests that failures stem from challenges like misallocated visual attention and the instability of internal representations for fine-grained details, especially at or below encoder patch resolution. This underscores an urgent need for MLLMs with truly robust visual perception. The benchmark dataset, source code and evaluation scripts are available at https://github.com/microsoft/Do-You-See-Me.
- oai:arXiv.org:2506.02022v2
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Potential Landscapes Reveal Spatiotemporal Structure in Urban Mobility: Hodge Decomposition and Principal Component Analysis of Tokyo Before and During COVID-19
+ https://arxiv.org/abs/2505.20929
+ arXiv:2505.20929v4 Announce Type: replace
+Abstract: Understanding human mobility is vital to solving societal challenges, such as epidemic control and urban transportation optimization. Recent advancements in data collection now enable the exploration of dynamic mobility patterns in human flow. However, the vast volume and complexity of mobility data make it difficult to interpret spatiotemporal patterns directly, necessitating effective information reduction. The core challenge is to balance data simplification with information preservation: methods must retain location-specific information about human flows from origins to destinations while reducing the data to a comprehensible level. This study proposes a two-step dimensionality reduction framework: First, combinatorial Hodge theory is applied to the given origin--destination (OD) matrices with timestamps to construct a set of potential landscapes of human flow, preserving imbalanced trip information between locations. Second, principal component analysis (PCA) expresses the time series of potential landscapes as a linear combination of a few static spatial components, with their coefficients representing temporal variations. The framework systematically decouples the spatial and temporal components of the given data. By implementing this two-step reduction method, we reveal large weight variations during a pandemic, characterized by an overall decline in mobility and stark contrasts between weekdays and holidays. These findings demonstrate the effectiveness of our framework in uncovering complex mobility patterns and its potential to inform urban planning and public health interventions.
+ oai:arXiv.org:2505.20929v4
+ cs.SI
+ physics.soc-ph
+ stat.AP
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-sa/4.0/
- Aditya Kanade, Tanuja Ganu
+ http://creativecommons.org/licenses/by/4.0/
+ Yunhan Du, Takaaki Aoki, Naoya Fujiwara
- An $O(\log \log n)$-approximate budget feasible mechanism for subadditive valuations
- https://arxiv.org/abs/2506.04665
- arXiv:2506.04665v5 Announce Type: replace
-Abstract: In budget-feasible mechanism design, there is a set of items $U$, each owned by a distinct seller. The seller of item $e$ incurs a private cost $\overline{c}_e$ for supplying her item. A buyer wishes to procure a set of items from the sellers of maximum value, where the value of a set $S\subseteq U$ of items is given by a valuation function $v:2^U\to \mathbb{R}_+$. The buyer has a budget of $B \in \mathbb{R}_+$ for the total payments made to the sellers. We wish to design a mechanism that is truthful, that is, sellers are incentivized to report their true costs, budget-feasible, that is, the sum of the payments made to the sellers is at most the budget $B$, and that outputs a set whose value is large compared to $\text{OPT}:=\max\{v(S):\overline{c}(S)\le B,S\subseteq U\}$.
- Budget-feasible mechanism design has been extensively studied, with the literature focussing on (classes of) subadditive valuation functions, and various polytime, budget-feasible mechanisms, achieving constant-factor approximation, have been devised for the special cases of additive, submodular, and XOS valuations. However, for general subadditive valuations, the best-known approximation factor achievable by a polytime budget-feasible mechanism (given access to demand oracles) was only $O(\log n / \log \log n)$, where $n$ is the number of items.
- We improve this state-of-the-art significantly by designing a randomized budget-feasible mechanism for subadditive valuations that achieves a substantially-improved approximation factor of $O(\log\log n)$ and runs in polynomial time, given access to demand oracles.
- oai:arXiv.org:2506.04665v5
- cs.GT
- cs.DM
- cs.DS
- Thu, 11 Dec 2025 00:00:00 -0500
+ Towards Robust Assessment of Pathological Voices via Combined Low-Level Descriptors and Foundation Model Representations
+ https://arxiv.org/abs/2505.21356
+ arXiv:2505.21356v4 Announce Type: replace
+Abstract: Perceptual voice quality assessment plays a vital role in diagnosing and monitoring voice disorders. Traditional methods, such as the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) and the Grade, Roughness, Breathiness, Asthenia, and Strain (GRBAS) scales, rely on expert raters and are prone to inter-rater variability, emphasizing the need for objective solutions. This study introduces the Voice Quality Assessment Network (VOQANet), a deep learning framework that employs an attention mechanism and Speech Foundation Model (SFM) embeddings to extract high-level features. To further enhance performance, we propose VOQANet+, which integrates self-supervised SFM embeddings with low-level acoustic descriptors-namely jitter, shimmer, and harmonics-to-noise ratio (HNR). Unlike previous approaches that focus solely on vowel-based phonation (PVQD-A), our models are evaluated on both vowel-level and sentence-level speech (PVQD-S) to assess generalizability. Experimental results demonstrate that sentence-based inputs yield higher accuracy, particularly at the patient level. Overall, VOQANet consistently outperforms baseline models in terms of root mean squared error (RMSE) and Pearson correlation coefficient across CAPE-V and GRBAS dimensions, with VOQANet+ achieving even greater performance gains. Additionally, VOQANet+ maintains consistent performance under noisy conditions, suggesting enhanced robustness for real-world and telehealth applications. This work highlights the value of combining SFM embeddings with low-level features for accurate and robust pathological voice assessment.
+ oai:arXiv.org:2505.21356v4
+ cs.SD
+ cs.LG
+ eess.AS
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Rian Neogi, Kanstantsin Pashkovich, Chaitanya Swamy
+ Whenty Ariyanti, Kuan-Yu Chen, Sabato Marco Siniscalchi, Hsin-Min Wang, Yu Tsao
+
+
+ Human Attention During Localization of Memory Bugs in C Programs
+ https://arxiv.org/abs/2506.00693
+ arXiv:2506.00693v2 Announce Type: replace
+Abstract: This paper presents a study of human visual attention during localization of memory bugs in C. Human visual attention refers to the mechanical processes by which we selectively process and prioritize information. Visual attention is important to study because it is central to what information people (who are sighted) use to solve a particular problem. Meanwhile, memory bugs are among the most common types of bugs in C programs that manifest as a variety of program faults. In this paper, we study human visual attention while people attempt to locate memory bugs in code. We recruit 21 programmers to locate between one and eight memory bugs in three C programs for 1.5-2 hours each. In total we collected observations of 31 hours of programmer effort. The bugs in our study cover memory leaks, overflows, and double frees, which are among the most common memory bugs. We analyze the task outcomes in terms of success rate and related factors, patterns of visual attention overall such as what lines and functions are read, and finally we explore differences of visual attention patterns during success versus failure cases.
+ oai:arXiv.org:2506.00693v2
+ cs.SE
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Emory Smith, Robert Wallace, Matthew Robison, Yu Huang, Collin McMillan
- A Framework for Controllable Multi-objective Learning with Annealed Stein Variational Hypernetworks
- https://arxiv.org/abs/2506.06715
- arXiv:2506.06715v3 Announce Type: replace
-Abstract: Pareto Set Learning (PSL) is popular as an efficient approach to obtaining the complete optimal solution in Multi-objective Learning (MOL). A set of optimal solutions approximates the Pareto set, and its mapping is a set of dense points in the Pareto front in objective space. However, some current methods face a challenge: how to make the Pareto solution is diverse while maximizing the hypervolume value. In this paper, we propose a novel method to address this challenge, which employs Stein Variational Gradient Descent (SVGD) to approximate the entire Pareto set. SVGD pushes a set of particles towards the Pareto set by applying a form of functional gradient descent, which helps to converge and diversify optimal solutions. Additionally, we employ diverse gradient direction strategies to thoroughly investigate a unified framework for SVGD in multi-objective optimization and adapt this framework with an annealing schedule to promote stability. We introduce our method, SVH-MOL, and validate its effectiveness through extensive experiments on multi-objective problems and multi-task learning, demonstrating its superior performance.
- oai:arXiv.org:2506.06715v3
+ Improved Regret Bounds for Gaussian Process Upper Confidence Bound in Bayesian Optimization
+ https://arxiv.org/abs/2506.01393
+ arXiv:2506.01393v3 Announce Type: replace
+Abstract: This paper addresses the Bayesian optimization problem (also referred to as the Bayesian setting of the Gaussian process bandit), where the learner seeks to minimize the regret under a function drawn from a known Gaussian process (GP). Under a Mat\'ern kernel with a certain degree of smoothness, we show that the Gaussian process upper confidence bound (GP-UCB) algorithm achieves $\tilde{O}(\sqrt{T})$ cumulative regret with high probability. Furthermore, our analysis yields $O(\sqrt{T \ln^2 T})$ regret under a squared exponential kernel. These results fill the gap between the existing regret upper bound for GP-UCB and the best-known bound provided by Scarlett (2018). The key idea in our proof is to capture the concentration behavior of the input sequence realized by GP-UCB, enabling a more refined analysis of the GP's information gain.
+ oai:arXiv.org:2506.01393v3cs.LGstat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Minh-Duc Nguyen, Dung D. Le
+ Shogo Iwazaki
- Efficient $Q$-Learning and Actor-Critic Methods for Robust Average Reward Reinforcement Learning
- https://arxiv.org/abs/2506.07040
- arXiv:2506.07040v3 Announce Type: replace
-Abstract: We present a non-asymptotic convergence analysis of $Q$-learning and actor-critic algorithms for robust average-reward Markov Decision Processes (MDPs) under contamination, total-variation (TV) distance, and Wasserstein uncertainty sets. A key ingredient of our analysis is showing that the optimal robust $Q$ operator is a strict contraction with respect to a carefully designed semi-norm (with constant functions quotiented out). This property enables a stochastic approximation update that learns the optimal robust $Q$-function using $\tilde{\mathcal{O}}(\epsilon^{-2})$ samples. We also provide an efficient routine for robust $Q$-function estimation, which in turn facilitates robust critic estimation. Building on this, we introduce an actor-critic algorithm that learns an $\epsilon$-optimal robust policy within $\tilde{\mathcal{O}}(\epsilon^{-2})$ samples. We provide numerical simulations to evaluate the performance of our algorithms.
- oai:arXiv.org:2506.07040v3
- cs.LG
+ V-VAE: A Variational Auto Encoding Framework Towards Fine-Grained Control over Human-Like Chat
+ https://arxiv.org/abs/2506.01524
+ arXiv:2506.01524v2 Announce Type: replace
+Abstract: With the continued proliferation of Large Language Model (LLM) based chatbots, there is a growing demand for generating responses that are not only linguistically fluent but also consistently aligned with persona-specific traits in conversations. However, existing role-play and persona-based chat approaches rely heavily on static role descriptions, coarse-grained signal space, and low-quality synthetic data, which fail to capture dynamic fine-grained details in human-like chat. Human-like chat requires modeling subtle latent traits, such as emotional tone, situational awareness, and evolving personality, which are difficult to predefine and cannot be easily learned from synthetic or distillation-based data. To address these limitations, we propose a Verbal Variational Auto-Encoding (V-VAE) framework, containing a variational auto-encoding module and fine-grained control space which dynamically adapts dialogue behaviour based on fine-grained, interpretable latent variables across talking style, interaction patterns, and personal attributes. We also construct a high-quality dataset, HumanChatData, and benchmark HumanChatBench to address the scarcity of high-quality data in the human-like domain. Experiments show that LLMs based on V-VAE consistently outperform standard baselines on HumanChatBench and DialogBench, which further demonstrates the effectiveness of V-VAE and HumanChatData.
+ oai:arXiv.org:2506.01524v2
+ cs.CLcs.AI
- stat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yang Xu, Swetha Ganesh, Vaneet Aggarwal
+ Qi Lin, Weikai Xu, Lisi Chen, Bin Dai
- PlayerOne: Egocentric World Simulator
- https://arxiv.org/abs/2506.09995
- arXiv:2506.09995v3 Announce Type: replace
-Abstract: We introduce PlayerOne, the first egocentric realistic world simulator, facilitating immersive and unrestricted exploration within vividly dynamic environments. Given an egocentric scene image from the user, PlayerOne can accurately construct the corresponding world and generate egocentric videos that are strictly aligned with the real scene human motion of the user captured by an exocentric camera. PlayerOne is trained in a coarse-to-fine pipeline that first performs pretraining on large-scale egocentric text-video pairs for coarse-level egocentric understanding, followed by finetuning on synchronous motion-video data extracted from egocentric-exocentric video datasets with our automatic construction pipeline. Besides, considering the varying importance of different components, we design a part-disentangled motion injection scheme, enabling precise control of part-level movements. In addition, we devise a joint reconstruction framework that progressively models both the 4D scene and video frames, ensuring scene consistency in the long-form video generation. Experimental results demonstrate its great generalization ability in precise control of varying human movements and worldconsistent modeling of diverse scenarios. It marks the first endeavor into egocentric real-world simulation and can pave the way for the community to delve into fresh frontiers of world modeling and its diverse applications.
- oai:arXiv.org:2506.09995v3
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Robust Satisficing Gaussian Process Bandits Under Adversarial Attacks
+ https://arxiv.org/abs/2506.01625
+ arXiv:2506.01625v2 Announce Type: replace
+Abstract: We address the problem of Gaussian Process (GP) optimization in the presence of unknown and potentially varying adversarial perturbations. Unlike traditional robust optimization approaches that focus on maximizing performance under worst-case scenarios, we consider a robust satisficing objective, where the goal is to consistently achieve a predefined performance threshold $\tau$, even under adversarial conditions. We propose two novel algorithms based on distinct formulations of robust satisficing, and show that they are instances of a general robust satisficing framework. Further, each algorithm offers different guarantees depending on the nature of the adversary. Specifically, we derive two regret bounds: one that is sublinear over time, assuming certain conditions on the adversary and the satisficing threshold $\tau$, and another that scales with the perturbation magnitude but requires no assumptions on the adversary. Through extensive experiments, we demonstrate that our approach outperforms the established robust optimization methods in achieving the satisficing objective, particularly when the ambiguity set of the robust optimization framework is inaccurately specified.
+ oai:arXiv.org:2506.01625v2
+ cs.LG
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-sa/4.0/
- Yuanpeng Tu, Hao Luo, Xi Chen, Xiang Bai, Fan Wang, Hengshuang Zhao
+ http://creativecommons.org/licenses/by/4.0/
+ Artun Saday, Ya\c{s}ar Cahit Y{\i}ld{\i}r{\i}m, Cem Tekin
- The Impact of Partial Computations on the Red-Blue Pebble Game
- https://arxiv.org/abs/2506.10854
- arXiv:2506.10854v2 Announce Type: replace
-Abstract: We study an extension of the well-known red-blue pebble game (RBP) with partial computation steps, inspired by the recent work of Sobczyk. While the original RBP assumes that we need to have all the inputs of an operation in fast memory at the same time, in many concrete computations, the inputs can be aggregated one by one into the final output value. These partial computation steps can enable pebbling strategies with much smaller I/O cost, and in settings where such a step-by-step aggregation is possible, this extended red-blue pebble game offers a much more realistic cost model.
- We establish the fundamental properties of this partial-computing red-blue pebble game (PRBP), and compare it to the original RBP. We begin with some simple examples where allowing partial computations can decrease the optimal I/O cost. It is also shown that the cost can decrease by up to a linear factor this way, but in general, it is NP-hard to decide whether partial computations allow for a smaller cost in a specific DAG. We then discuss how $S$-partitions, a crucial tool for deriving I/O lower bounds in RBP, can be adapted to the PRBP model. These new tools are then used to establish lower bounds on the I/O cost of some prominent computational tasks. Finally, we also adapt a hardness result from RBP, showing that the optimum cost is still NP-hard to approximate in PRBP to any reasonable factor.
- oai:arXiv.org:2506.10854v2
- cs.DC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Test-Time Distillation for Continual Model Adaptation
+ https://arxiv.org/abs/2506.02671
+ arXiv:2506.02671v2 Announce Type: replace
+Abstract: Deep neural networks often suffer performance degradation upon deployment due to distribution shifts. Continual Test-Time Adaptation (CTTA) aims to address this issue in an unsupervised manner, yet existing methods, which rely on self-supervision, are prone to an inherent self-referential feedback loop that amplifies initial prediction errors, leading to model drift. We revisit this limitation and propose Test-Time Distillation (TTD), which reframes adaptation as a distillation process guided by a frozen Vision-Language Model (VLM) as an external signal. While promising, we find that direct distillation is fraught with two pitfalls: the Generalist Trap, where the VLM's broad but non-specialized knowledge leads to suboptimal performance on specific tasks and shifts, and the Entropy Bias, where naive model fusion techniques based on entropy fail due to the disparate calibration of heterogeneous models. These pitfalls motivate our insight: the key is to build a robust supervisory signal and leverage it to guide the target model toward stable adaptation. Hence, we present CoDiRe, a Continual Distillation and Rectification framework for TTD. CoDiRe first constructs a robust blended teacher by dynamically fusing the predictions of the VLM and the target model. Critically, it circumvents the Entropy Bias by leveraging Maximum Softmax Probability (MSP) as a more reliable confidence metric for weighting each model's expertise. Then applies an Optimal Transport based rectification to further align predictions with the blended teacher, enabling continuous and stable adaptation. Extensive experiments show that CoDiRe outperforms state-of-the-art baselines, exceeding CoTTA by 10.55% while using only 48% of its time cost on ImageNet-C.
+ oai:arXiv.org:2506.02671v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1145/3694906.3743320
- P\'al Andr\'as Papp, Aleksandros Sobczyk, A. N. Yzelman
+ Xiao Chen, Jiazhen Huang, Zhiming Liu, Qinting Jiang, Fanding Huang, Jingyan Jiang, Zhi Wang
- TAViS: Text-bridged Audio-Visual Segmentation with Foundation Models
- https://arxiv.org/abs/2506.11436
- arXiv:2506.11436v2 Announce Type: replace
-Abstract: Audio-Visual Segmentation (AVS) faces a fundamental challenge of effectively aligning audio and visual modalities. While recent approaches leverage foundation models to address data scarcity, they often rely on single-modality knowledge or combine foundation models in an off-the-shelf manner, failing to address the cross-modal alignment challenge. In this paper, we present TAViS, a novel framework that \textbf{couples} the knowledge of multimodal foundation models (ImageBind) for cross-modal alignment and a segmentation foundation model (SAM2) for precise segmentation. However, effectively combining these models poses two key challenges: the difficulty in transferring the knowledge between SAM2 and ImageBind due to their different feature spaces, and the insufficiency of using only segmentation loss for supervision. To address these challenges, we introduce a text-bridged design with two key components: (1) a text-bridged hybrid prompting mechanism where pseudo text provides class prototype information while retaining modality-specific details from both audio and visual inputs, and (2) an alignment supervision strategy that leverages text as a bridge to align shared semantic concepts within audio-visual modalities. Our approach achieves superior performance on single-source, multi-source, semantic datasets, and excels in zero-shot settings.
- oai:arXiv.org:2506.11436v2
+ MokA: Multimodal Low-Rank Adaptation for MLLMs
+ https://arxiv.org/abs/2506.05191
+ arXiv:2506.05191v2 Announce Type: replace
+Abstract: In this paper, we reveal that most current efficient multimodal fine-tuning methods are hindered by a key limitation: they are directly borrowed from LLMs, often neglecting the intrinsic differences of multimodal scenarios and even affecting the full utilization of all modalities. Inspired by our empirical observation, we argue that unimodal adaptation and cross-modal adaptation are two essential parts for the effective fine-tuning of MLLMs. From this perspective, we propose Multimodal low-rank Adaptation (MokA), a multimodal-aware efficient fine-tuning strategy that takes multimodal characteristics into consideration. It compresses unimodal information by modality-specific parameters while explicitly enhancing cross-modal interaction, ensuring both unimodal and cross-modal adaptation. Extensive experiments cover three representative multimodal scenarios (audio-visual-text, visual-text, and speech-text), and multiple LLM backbones (LLaMA2/3, Qwen2, Qwen2.5-VL, etc). Consistent improvements indicate the efficacy and versatility of the proposed method. Ablation studies and efficiency evaluation are also conducted to fully asses our method. Overall, we think MokA provides a more targeted solution for efficient adaptation of MLLMs, paving the way for further exploration. The project page is at https://gewu-lab.github.io/MokA.
+ oai:arXiv.org:2506.05191v2cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ziyang Luo, Nian Liu, Xuguang Yang, Salman Khan, Rao Muhammad Anwer, Hisham Cholakkal, Fahad Shahbaz Khan, Junwei Han
+ Yake Wei, Yu Miao, Dongzhan Zhou, Di Hu
- AI reconstruction of European weather from the Euro-Atlantic regimes
- https://arxiv.org/abs/2506.13758
- arXiv:2506.13758v2 Announce Type: replace
-Abstract: We present a non-linear AI-model designed to reconstruct monthly mean anomalies of the European temperature and precipitation based on the Euro-Atlantic Weather regimes (WR) indices. WR represent recurrent, quasi-stationary, and persistent states of the atmospheric circulation that exert considerable influence over the European weather, therefore offering an opportunity for sub-seasonal to seasonal forecasting. While much research has focused on studying the correlation and impacts of the WR on European weather, the estimation of ground-level climate variables, such as temperature and precipitation, from Euro-Atlantic WR remains largely unexplored and is currently limited to linear methods. The presented AI model can capture and introduce complex non-linearities in the relation between the WR indices, describing the state of the Euro-Atlantic atmospheric circulation and the corresponding surface temperature and precipitation anomalies in Europe. We discuss the AI-model performance in reconstructing the monthly mean two-meter temperature and total precipitation anomalies in the European winter and summer, also varying the number of WR used to describe the monthly atmospheric circulation. We assess the impact of errors on the WR indices in the reconstruction and show that a mean absolute relative error below 80% yields improved seasonal reconstruction compared to the ECMWF operational seasonal forecast system, SEAS5. As a demonstration of practical applicability, we evaluate the model using WR indices predicted by SEAS5, finding slightly better or comparable skill relative to the SEAS5 forecast itself. Our findings demonstrate that WR-based anomaly reconstruction, powered by AI tools, offers a promising pathway for sub-seasonal and seasonal forecasting.
- oai:arXiv.org:2506.13758v2
+ ENMA: Tokenwise Autoregression for Generative Neural PDE Operators
+ https://arxiv.org/abs/2506.06158
+ arXiv:2506.06158v3 Announce Type: replace
+Abstract: Solving time-dependent parametric partial differential equations (PDEs) remains a fundamental challenge for neural solvers, particularly when generalizing across a wide range of physical parameters and dynamics. When data is uncertain or incomplete-as is often the case-a natural approach is to turn to generative models. We introduce ENMA, a generative neural operator designed to model spatio-temporal dynamics arising from physical phenomena. ENMA predicts future dynamics in a compressed latent space using a generative masked autoregressive transformer trained with flow matching loss, enabling tokenwise generation. Irregularly sampled spatial observations are encoded into uniform latent representations via attention mechanisms and further compressed through a spatio-temporal convolutional encoder. This allows ENMA to perform in-context learning at inference time by conditioning on either past states of the target trajectory or auxiliary context trajectories with similar dynamics. The result is a robust and adaptable framework that generalizes to new PDE regimes and supports one-shot surrogate modeling of time-dependent parametric PDEs.
+ oai:arXiv.org:2506.06158v3cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-sa/4.0/
- 10.1002/joc.70216
- International Journal of Climatology, 2025
- A. Camilletti, G. Franch, E. Tomasi, M. Cristoforetti
+ http://creativecommons.org/licenses/by/4.0/
+ Armand Kassa\"i Koupa\"i, Lise Le Boudec, Louis Serrano, Patrick Gallinari
- A Minimalist Optimizer Design for LLM Pretraining
- https://arxiv.org/abs/2506.16659
- arXiv:2506.16659v2 Announce Type: replace
-Abstract: Training large language models (LLMs) typically relies on adaptive optimizers such as Adam, which introduce extra operations and require significant more memory to maintain first- and second-order moments than SGD. While recent works such as GaLore, Fira and APOLLO have proposed state-compressed variants to reduce memory consumption, a fundamental question remains: What are the minimum modifications to plain SGD needed to match state-of-the-art pretraining performance? We systematically investigate this question using a bottom-up approach, and identify two simple yet highly (memory- and compute-) efficient techniques: (1) column-wise gradient normalization (normalizing the gradient along the output dimension), which boosts SGD performance without momentum; and (2) applying first-order momentum only to the output layer, where gradient variance is highest. Combining these two techniques lead to SCALE (Stochastic Column-normAlized Last-layer momEntum), a simple optimizer for memory efficient pretraining. Across multiple LLaMA models (60M-1B), SCALE matches or exceeds the performance of Adam while using only 35-45% of the total memory. It also consistently outperforms memory-efficient optimizers such as GaLore, Fira and APOLLO, making it a strong candidate for large-scale pretraining under memory constraints. For LLaMA 7B model, SCALE outperforms the state-of-the-art memory-efficient methods APOLLO and Muon, in terms of both perplexity and memory consumption.
- oai:arXiv.org:2506.16659v2
- cs.LG
- cs.AI
- math.OC
- Thu, 11 Dec 2025 00:00:00 -0500
+ ExAct: A Video-Language Benchmark for Expert Action Analysis
+ https://arxiv.org/abs/2506.06277
+ arXiv:2506.06277v2 Announce Type: replace
+Abstract: We present ExAct, a new video-language benchmark for expert-level understanding of skilled physical human activities. Our new benchmark contains 3521 expert-curated video question-answer pairs spanning 11 physical activities in 6 domains: Sports, Bike Repair, Cooking, Health, Music, and Dance. ExAct requires the correct answer to be selected from five carefully designed candidate options, thus necessitating a nuanced, fine-grained, expert-level understanding of physical human skills. Evaluating the recent state-of-the-art VLMs on ExAct reveals a substantial performance gap relative to human expert performance. Specifically, the best-performing GPT-4o model achieves only 44.70% accuracy, well below the 82.02% attained by trained human specialists/experts. We believe that ExAct will be beneficial for developing and evaluating VLMs capable of precise understanding of human skills in various physical and procedural domains. Dataset and code are available at https://texaser.github.io/exact_project_page/
+ oai:arXiv.org:2506.06277v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Athanasios Glentis, Jiaxiang Li, Andi Han, Mingyi Hong
+ http://creativecommons.org/licenses/by/4.0/
+ Han Yi, Yulu Pan, Feihong He, Xinyu Liu, Benjamin Zhang, Oluwatumininu Oguntola, Gedas Bertasius
- Efficient Black-Box Fault Localization for System-Level Test Code Using Large Language Models
- https://arxiv.org/abs/2506.19045
- arXiv:2506.19045v2 Announce Type: replace
-Abstract: Fault localization (FL) is a critical step in debugging, which typically relies on repeated executions to pinpoint faulty code regions. However, repeated executions can be impractical in the presence of non-deterministic failures or high execution costs. While recent efforts have leveraged Large Language Models (LLMs) to aid execution-free FL, these have primarily focused on identifying faults in the system-under-test (SUT) rather than in the often complex system-level test code. However, the latter is also important, as in practice, many failures are triggered by faulty test code. To overcome these challenges, we introduce a fully static, LLM-driven approach for system-level test code fault localization (TCFL) that does not require executing the test case. Our method uses a single failure execution log to estimate the test's execution trace through three novel algorithms that identify only code statements likely involved in the failure. This pruned trace, combined with the error message, is used to prompt the LLM to rank potential faulty locations. Our black-box, system-level approach requires no access to the SUT source code and is applicable to complex test scripts that assess full system behavior. We evaluate our technique at the function, block, and line levels using an industrial dataset of faulty test cases that were not used in pre-training LLMs. Results show that our best-estimated traces closely match the actual traces, with an F1 score of around 90%. Additionally, pruning the complex system-level test code reduces the LLM's inference time by up to 34% without any loss in FL performance. Our method achieves equal or higher FL accuracy, requiring over 85% less average inference time per test case and 93% fewer tokens than the latest LLM-guided FL method.
- oai:arXiv.org:2506.19045v2
- cs.SE
- Thu, 11 Dec 2025 00:00:00 -0500
+ Looking Beyond Visible Cues: Implicit Video Question Answering via Dual-Clue Reasoning
+ https://arxiv.org/abs/2506.07811
+ arXiv:2506.07811v2 Announce Type: replace
+Abstract: Video Question Answering (VideoQA) aims to answer natural language questions based on the given video, with prior work primarily focusing on identifying the duration of relevant segments, referred to as explicit visual evidence. However, explicit visual evidence is not always directly available, particularly when questions target symbolic meanings or deeper intentions, leading to significant performance degradation. To fill this gap, we introduce a novel task and dataset, $\textbf{I}$mplicit $\textbf{V}$ideo $\textbf{Q}$uestion $\textbf{A}$nswering (I-VQA), which focuses on answering questions in scenarios where explicit visual evidence is inaccessible. Given an implicit question and its corresponding video, I-VQA requires answering based on the contextual visual cues present within the video. To tackle I-VQA, we propose a novel reasoning framework, IRM (Implicit Reasoning Model), incorporating dual-stream modeling of contextual actions and intent clues as implicit reasoning chains. IRM comprises the Action-Intent Module (AIM) and the Visual Enhancement Module (VEM). AIM deduces and preserves question-related dual clues by generating clue candidates and performing relation deduction. VEM enhances contextual visual representation by leveraging key contextual clues. Extensive experiments validate the effectiveness of our IRM in I-VQA tasks, outperforming GPT-4o, OpenAI-o3, and fine-tuned VideoChat2 by $0.76\%$, $1.37\%$, and $4.87\%$, respectively. Additionally, IRM performs SOTA on similar implicit advertisement understanding and future prediction in traffic-VQA. Datasets and codes are available for double-blind review in anonymous repo: https://github.com/tychen-SJTU/Implicit-VideoQA.
+ oai:arXiv.org:2506.07811v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Ahmadreza Saboor Yaraghi, Golnaz Gharachorlu, Sakina Fatima, Lionel C. Briand, Ruiyuan Wan, Ruifeng Gao
+ Tieyuan Chen, Huabin Liu, Yi Wang, Chaofan Gan, Mingxi Lyu, Ziran Qin, Shijie Li, Liquan Shen, Junhui Hou, Zheng Wang, Weiyao Lin
- A parametric tensor ROM for the shallow water dam break problem
- https://arxiv.org/abs/2506.20007
- arXiv:2506.20007v2 Announce Type: replace
-Abstract: We develop a variant of a tensor reduced-order model (tROM) for the parameterized shallow-water dam-break problem. This hyperbolic system presents multiple challenges for model reduction, including a slow decay of the Kolmogorov $N$-width of the solution manifold, shock formation, and the loss of smooth solution dependence on parameters. These issues limit the performance of traditional Proper Orthogonal Decomposition based ROMs. Our tROM approach, based on a low-rank tensor decomposition, builds a parameter-to-solution map from high-fidelity snapshots and constructs localized reduced bases via a local POD procedure. We apply this method to 1D dry-bed and wet-bed problems and 2D wet-bed problem with topography and bottom friction, showing that the non-interpolatory variant of the tROM, combined with Chebyshev sampling near critical parameter values, effectively captures parameter-dependent behavior and significantly outperforms standard POD-ROMs. This is especially evident in the wet-bed case, where POD-ROMs exhibit poor resolution of shock waves and spurious oscillations.
- oai:arXiv.org:2506.20007v2
- math.NA
- cs.NA
- physics.flu-dyn
- Thu, 11 Dec 2025 00:00:00 -0500
+ Reparameterized LLM Training via Orthogonal Equivalence Transformation
+ https://arxiv.org/abs/2506.08001
+ arXiv:2506.08001v4 Announce Type: replace
+Abstract: While large language models (LLMs) are driving the rapid advancement of artificial intelligence, effectively and reliably training these large models remains one of the field's most significant challenges. To address this challenge, we propose POET, a novel reParameterized training algorithm that uses Orthogonal Equivalence Transformation to optimize neurons. Specifically, POET reparameterizes each neuron with two learnable orthogonal matrices and a fixed random weight matrix. Because of its provable preservation of spectral properties of weight matrices, POET can stably optimize the objective function with improved generalization. We further develop efficient approximations that make POET flexible and scalable for training large-scale neural networks. Extensive experiments validate the effectiveness and scalability of POET in training LLMs.
+ oai:arXiv.org:2506.08001v4
+ cs.LG
+ cs.AI
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-sa/4.0/
- Md Rezwan Bin Mizan, Maxim Olshanskii, Ilya Timofeyev
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zeju Qiu, Simon Buchholz, Tim Z. Xiao, Maximilian Dax, Bernhard Sch\"olkopf, Weiyang Liu
- Almost Tight Additive Guarantees for $k$-Edge-Connectivity
- https://arxiv.org/abs/2506.20906
- arXiv:2506.20906v2 Announce Type: replace
-Abstract: We consider the \emph{$k$-edge connected spanning subgraph} (kECSS) problem, where we are given an undirected graph $G = (V, E)$ with nonnegative edge costs $\{c_e\}_{e\in E}$, and we seek a minimum-cost \emph{$k$-edge connected} subgraph $H$ of $G$. For even $k$, we present a polytime algorithm that computes a $(k-2)$-edge connected subgraph of cost at most the optimal value $LP^*$ of the natural LP-relaxation for kECSS; for odd $k$, we obtain a $(k-3)$-edge connected subgraph of cost at most $LP^*$. Since kECSS is APX-hard for all $k\geq 2$, our results are nearly optimal. They also significantly improve upon the recent work of Hershkowitz et al., both in terms of solution quality and the simplicity of algorithm and its analysis. Our techniques also yield an alternate guarantee, where we obtain a $(k-1)$-edge connected subgraph of cost at most $1.5\cdot LP^*$; with unit edge costs, the cost guarantee improves to $(1+\frac{4}{3k})\cdot LP^*$, which improves upon the state-of-the-art approximation for unit edge costs, but with a unit loss in edge connectivity.
- Our kECSS-result also yields results for the \emph{$k$-edge connected spanning multigraph} (kECSM) problem, where multiple copies of an edge can be selected: we obtain a $(1+2/k)$-approximation algorithm for even $k$, and a $(1+3/k)$-approximation algorithm for odd $k$.
- Our techniques extend to the degree-bounded versions of kECSS and kECSM, wherein we also impose degree lower- and upper- bounds on the nodes. We obtain the same cost and connectivity guarantees for these degree-bounded versions with an additive violation of (roughly) $2$ for the degree bounds. These are the first results for degree-bounded \{kECSS,kECSM\} of the form where the cost of the solution obtained is at most the optimum, and the connectivity constraints are violated by an additive constant.
- oai:arXiv.org:2506.20906v2
- cs.DS
- Thu, 11 Dec 2025 00:00:00 -0500
+ Leveraging Depth and Language for Open-Vocabulary Domain-Generalized Semantic Segmentation
+ https://arxiv.org/abs/2506.09881
+ arXiv:2506.09881v3 Announce Type: replace
+Abstract: Open-Vocabulary semantic segmentation (OVSS) and domain generalization in semantic segmentation (DGSS) highlight a subtle complementarity that motivates Open-Vocabulary Domain-Generalized Semantic Segmentation (OV-DGSS). OV-DGSS aims to generate pixel-level masks for unseen categories while maintaining robustness across unseen domains, a critical capability for real-world scenarios such as autonomous driving in adverse conditions. We introduce Vireo, a novel single-stage framework for OV-DGSS that unifies the strengths of OVSS and DGSS for the first time. Vireo builds upon the frozen Visual Foundation Models (VFMs) and incorporates scene geometry via Depth VFMs to extract domain-invariant structural features. To bridge the gap between visual and textual modalities under domain shift, we propose three key components: (1) GeoText Prompts, which align geometric features with language cues and progressively refine VFM encoder representations; (2) Coarse Mask Prior Embedding (CMPE) for enhancing gradient flow for faster convergence and stronger textual influence; and (3) the Domain-Open-Vocabulary Vector Embedding Head (DOV-VEH), which fuses refined structural and semantic features for robust prediction. Comprehensive evaluation on these components demonstrates the effectiveness of our designs. Our proposed Vireo achieves the state-of-the-art performance and surpasses existing methods by a large margin in both domain generalization and open-vocabulary recognition, offering a unified and scalable solution for robust visual understanding in diverse and dynamic environments. Code is available at https://github.com/anonymouse-9c53tp182bvz/Vireo.
+ oai:arXiv.org:2506.09881v3
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Nikhil Kumar, Chaitanya Swamy
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Siyu Chen, Ting Han, Chengzheng Fu, Changshe Zhang, Chaolei Wang, Jinhe Su, Guorong Cai, Meiliu Wu
- Model-driven Stochastic Trace Clustering
- https://arxiv.org/abs/2506.23776
- arXiv:2506.23776v2 Announce Type: replace
-Abstract: Process discovery algorithms automatically extract process models from event logs, but high variability often results in complex and hard-to-understand models. To mitigate this issue, trace clustering techniques group process executions into clusters, each represented by a simpler and more understandable process model. Model-driven trace clustering improves on this by assigning traces to clusters based on their conformity to cluster-specific process models. However, most existing clustering techniques rely on either no process model discovery, or non-stochastic models, neglecting the frequency or probability of activities and transitions, thereby limiting their capability to capture real-world execution dynamics. We propose a novel model-driven trace clustering method that optimizes stochastic process models within each cluster. Our approach uses entropic relevance, a stochastic conformance metric based on directly-follows probabilities, to guide trace assignment. This allows clustering decisions to consider both structural alignment with a cluster's process model and the likelihood that a trace originates from a given stochastic process model. The method is computationally efficient, scales linearly with input size, and improves model interpretability by producing clusters with clearer control-flow patterns. Extensive experiments on public real-life datasets demonstrate that while our method yields superior stochastic coherence and graph simplicity, traditional fitness metrics reveal a trade-off, highlighting the specific utility of our approach for stochastic process analysis.
- oai:arXiv.org:2506.23776v2
+ Geometric Regularity in Deterministic Sampling Dynamics of Diffusion-based Generative Models
+ https://arxiv.org/abs/2506.10177
+ arXiv:2506.10177v3 Announce Type: replace
+Abstract: Diffusion-based generative models employ stochastic differential equations (SDEs) and their equivalent probability flow ordinary differential equations (ODEs) to establish a smooth transformation between complex high-dimensional data distributions and tractable prior distributions. In this paper, we reveal a striking geometric regularity in the deterministic sampling dynamics of diffusion generative models: each simulated sampling trajectory along the gradient field lies within an extremely low-dimensional subspace, and all trajectories exhibit an almost identical boomerang shape, regardless of the model architecture, applied conditions, or generated content. We characterize several intriguing properties of these trajectories, particularly under closed-form solutions based on kernel-estimated data modeling. We also demonstrate a practical application of the discovered trajectory regularity by proposing a dynamic programming-based scheme to better align the sampling time schedule with the underlying trajectory structure. This simple strategy requires minimal modification to existing deterministic numerical solvers, incurs negligible computational overhead, and achieves superior image generation performance, especially in regions with only 5 - 10 function evaluations.
+ oai:arXiv.org:2506.10177v3cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cond-mat.stat-mech
+ cs.CV
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Jari Peeperkorn, Johannes De Smedt, Jochen De Weerdt
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1088/1742-5468/ae17ac
+ J. Stat. Mech. (2025) 124002
+ Defang Chen, Zhenyu Zhou, Can Wang, Siwei Lyu
- MambAttention: Mamba with Multi-Head Attention for Generalizable Single-Channel Speech Enhancement
- https://arxiv.org/abs/2507.00966
- arXiv:2507.00966v3 Announce Type: replace
-Abstract: With new sequence models like Mamba and xLSTM, several studies have shown that these models match or outperform the state-of-the-art in single-channel speech enhancement and audio representation learning. However, prior research has demonstrated that sequence models like LSTM and Mamba tend to overfit to the training set. To address this, previous works have shown that adding self-attention to LSTMs substantially improves generalization performance for single-channel speech enhancement. Nevertheless, neither the concept of hybrid Mamba and time-frequency attention models nor their generalization performance have been explored for speech enhancement. In this paper, we propose a novel hybrid architecture, MambAttention, which combines Mamba and shared time- and frequency-multi-head attention modules for generalizable single-channel speech enhancement. To train our model, we introduce VB-DemandEx, a dataset inspired by VoiceBank+Demand but with more challenging noise types and lower signal-to-noise ratios. Trained on VB-DemandEx, MambAttention significantly outperforms existing state-of-the-art discriminative LSTM-, xLSTM-, Mamba-, and Conformer-based systems of similar complexity across all reported metrics on two out-of-domain datasets: DNS 2020 without reverberation and EARS-WHAM_v2. MambAttention also matches or outperforms generative diffusion models in generalization performance while being competitive with language model baselines. Ablation studies highlight the importance of weight sharing between time- and frequency-multi-head attention modules for generalization performance. Finally, we explore integrating the shared time- and frequency-multi-head attention modules with LSTM and xLSTM, which yields a notable performance improvement on the out-of-domain datasets. Yet, MambAttention remains superior for cross-corpus generalization across all reported evaluation metrics.
- oai:arXiv.org:2507.00966v3
- cs.SD
+ Benchmarking Multimodal LLMs on Recognition and Understanding over Chemical Tables
+ https://arxiv.org/abs/2506.11375
+ arXiv:2506.11375v2 Announce Type: replace
+Abstract: With the widespread application of multimodal large language models in scientific intelligence, there is an urgent need for more challenging evaluation benchmarks to assess their ability to understand complex scientific data. Scientific tables, as core carriers of knowledge representation, combine text, symbols, and graphics, forming a typical multimodal reasoning scenario. However, existing benchmarks are mostly focused on general domains, failing to reflect the unique structural complexity and domain-specific semantics inherent in scientific research. Chemical tables are particularly representative: they intertwine structured variables such as reagents, conditions, and yields with visual symbols like molecular structures and chemical formulas, posing significant challenges to models in cross-modal alignment and semantic parsing. To address this, we propose ChemTable-a large scale benchmark of chemical tables constructed from real-world literature, containing expert-annotated cell layouts, logical structures, and domain-specific labels. It supports two core tasks: (1) table recognition (structure and content extraction); and (2) table understanding (descriptive and reasoning-based question answering). Evaluation on ChemTable shows that while mainstream multimodal models perform reasonably well in layout parsing, they still face significant limitations when handling critical elements such as molecular structures and symbolic conventions. Closed-source models lead overall but still fall short of human-level performance. This work provides a realistic testing platform for evaluating scientific multimodal understanding, revealing the current bottlenecks in domain-specific reasoning and advancing the development of intelligent systems for scientific research.
+ oai:arXiv.org:2506.11375v2cs.AI
- eess.AS
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/publicdomain/zero/1.0/
- Nikolai Lund K\"uhne, Jesper Jensen, Jan {\O}stergaard, Zheng-Hua Tan
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Yitong Zhou, Mingyue Cheng, Qingyang Mao, Yucong Luo, Qi Liu, Yupeng Li, Xiaohan Zhang, Deguang Liu, Xin Li, Enhong Chen
- On the Adversarial Robustness of Online Importance Sampling
- https://arxiv.org/abs/2507.02394
- arXiv:2507.02394v2 Announce Type: replace
-Abstract: This paper studies the adversarial-robustness of importance-sampling (aka sensitivity sampling); a useful algorithmic technique that samples elements with probabilities proportional to some measure of their importance. A streaming or online algorithm is called adversarially-robust if it succeeds with high probability on input streams that may change adaptively depending on previous algorithm outputs. Unfortunately, the dependence between stream elements breaks the analysis of most randomized algorithms, and in particular that of importance-sampling algorithms. Previously, Braverman et al. [NeurIPS 2021] suggested that streaming algorithms based on importance-sampling may be adversarially-robust; however, they proved it only for well-behaved inputs.
- We focus on the adversarial-robustness of online importance-sampling, a natural variant where sampling decisions are irrevocable and made as data arrives. Our main technical result shows that, given as input an adaptive stream of elements $x_1,\ldots,x_T\in \mathbb{R}_+$, online importance-sampling maintains a $(1\pm\epsilon)$-approximation of their sum while matching (up to lower order terms) the storage guarantees of the oblivious (non-adaptive) case. We then apply this result to develop adversarially-robust online algorithms for two fundamental problems: hypergraph cut sparsification and $\ell_p$ subspace embedding.
- oai:arXiv.org:2507.02394v2
- cs.DS
- Thu, 11 Dec 2025 00:00:00 -0500
+ Symmetry in Neural Network Parameter Spaces
+ https://arxiv.org/abs/2506.13018
+ arXiv:2506.13018v3 Announce Type: replace
+Abstract: Modern deep learning models are highly overparameterized, resulting in large sets of parameter configurations that yield the same outputs. A significant portion of this redundancy is explained by symmetries in the parameter space--transformations that leave the network function unchanged. These symmetries shape the loss landscape and constrain learning dynamics, offering a new lens for understanding optimization, generalization, and model complexity that complements existing theory of deep learning. This survey provides an overview of parameter space symmetry. We summarize existing literature, uncover connections between symmetry and learning theory, and identify gaps and opportunities in this emerging field.
+ oai:arXiv.org:2506.13018v3
+ cs.LG
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Bo Zhao, Robin Walters, Rose Yu
+
+
+ Screen Reader Programmers in the Vibe Coding Era: Adaptation, Empowerment, and New Accessibility Landscape
+ https://arxiv.org/abs/2506.13270
+ arXiv:2506.13270v2 Announce Type: replace
+Abstract: Generative AI agents are reshaping human-computer interaction, shifting users from direct task execution to supervising machine-driven actions, especially the rise of "vibe coding" in programming. Yet little is known about how screen reader programmers interact with AI code assistants in practice. We conducted a longitudinal study with 16 blind and low-vision programmers. Participants completed a GitHub Copilot tutorial, engaged with a programming task, and provided initial feedback. After two weeks of AI-assisted programming, follow-ups examined how their practices and perceptions evolved. Our findings show that code assistants enhanced programming efficiency and bridged accessibility gaps. However, participants struggled to convey intent, interpret AI outputs, and manage multiple views while maintaining situational awareness. They showed diverse preferences for accessibility features, expressed a need to balance automation with control, and encountered barriers when learning to use these tools. Furthermore, we propose design principles and recommendations for more accessible and inclusive human-AI collaborations.
+ oai:arXiv.org:2506.13270v2
+ cs.HC
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yotam Kenneth-Mordoch, Shay Sapir
+ Nan Chen, Luna K. Qiu, Arran Zeyu Wang, Zilong Wang, Yuqing Yang
- Quantifying Cross-Attention Interaction in Transformers for Interpreting TCR-pMHC Binding
- https://arxiv.org/abs/2507.03197
- arXiv:2507.03197v2 Announce Type: replace
-Abstract: CD8+ "killer" T cells and CD4+ "helper" T cells play a central role in the adaptive immune system by recognizing antigens presented by Major Histocompatibility Complex (pMHC) molecules via T Cell Receptors (TCRs). Modeling binding between T cells and the pMHC complex is fundamental to understanding basic mechanisms of human immune response as well as in developing therapies. While transformer-based models such as TULIP have achieved impressive performance in this domain, their black-box nature precludes interpretability and thus limits a deeper mechanistic understanding of T cell response. Most existing post-hoc explainable AI (XAI) methods are confined to encoder-only, co-attention, or model-specific architectures and cannot handle encoder-decoder transformers used in TCR-pMHC modeling. To address this gap, we propose Quantifying Cross-Attention Interaction (QCAI), a new post-hoc method designed to interpret the cross-attention mechanisms in transformer decoders. Quantitative evaluation is a challenge for XAI methods; we have compiled TCR-XAI, a benchmark consisting of 274 experimentally determined TCR-pMHC structures to serve as ground truth for binding. Using these structures we compute physical distances between relevant amino acid residues in the TCR-pMHC interaction region and evaluate how well our method and others estimate the importance of residues in this region across the dataset. We show that QCAI achieves state-of-the-art performance on both interpretability and prediction accuracy under the TCR-XAI benchmark.
- oai:arXiv.org:2507.03197v2
+ Explain First, Trust Later: LLM-Augmented Explanations for Graph-Based Crypto Anomaly Detection
+ https://arxiv.org/abs/2506.14933
+ arXiv:2506.14933v2 Announce Type: replace
+Abstract: The decentralized finance (DeFi) community has grown rapidly in recent years, pushed forward by cryptocurrency enthusiasts interested in the vast untapped potential of new markets. The surge in popularity of cryptocurrency has ushered in a new era of financial crime. Unfortunately, the novelty of the technology makes the task of catching and prosecuting offenders particularly challenging. Thus, it is necessary to implement automated detection tools related to policies to address the growing criminality in the cryptocurrency realm.
+ oai:arXiv.org:2506.14933v2cs.CE
+ cs.AI
+ cs.CR
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Adriana Watson, Grant Richards, Daniel Schiff
+
+
+ T-SHRED: Symbolic Regression for Regularization and Model Discovery with Transformer Shallow Recurrent Decoders
+ https://arxiv.org/abs/2506.15881
+ arXiv:2506.15881v3 Announce Type: replace
+Abstract: SHallow REcurrent Decoders (SHRED) are effective for system identification and forecasting from sparse sensor measurements. Such models are light-weight and computationally efficient, allowing them to be trained on consumer laptops. SHRED-based models rely on Recurrent Neural Networks (RNNs) and a simple Multi-Layer Perceptron (MLP) for the temporal encoding and spatial decoding respectively. Despite the relatively simple structure of SHRED, they are able to predict chaotic dynamical systems on different physical, spatial, and temporal scales directly from a sparse set of sensor measurements. In this work, we modify SHRED by leveraging transformers (T-SHRED) embedded with symbolic regression for the temporal encoding, circumventing auto-regressive long-term forecasting for physical data. This is achieved through a new sparse identification of nonlinear dynamics (SINDy) attention mechanism into T-SHRED to impose sparsity regularization on the latent space, which also allows for immediate symbolic interpretation. Symbolic regression improves model interpretability by learning and regularizing the dynamics of the latent space during training. We analyze the performance of T-SHRED on three different dynamical systems ranging from low-data to high-data regimes.
+ oai:arXiv.org:2506.15881v3cs.LG
- q-bio.BM
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Jiarui Li, Zixiang Yin, Haley Smith, Zhengming Ding, Samuel J. Landry, Ramgopal R. Mettu
+ Alexey Yermakov, David Zoro, Mars Liyao Gao, J. Nathan Kutz
- RTGPU: Real-Time Computing with Graphics Processing Units
- https://arxiv.org/abs/2507.06069
- arXiv:2507.06069v2 Announce Type: replace
-Abstract: In this work, we survey the role of GPUs in real-time systems. Originally designed for parallel graphics workloads, GPUs are now widely used in time-critical applications such as machine learning, autonomous vehicles, and robotics due to their high computational throughput. Their parallel architecture is well-suited for accelerating complex tasks under strict timing constraints. However, their integration into real-time systems presents several challenges, including non-preemptive execution, execution time variability, and resource contention; factors that can lead to unpredictable delays and deadline violations. We examine existing solutions that address these challenges, including scheduling algorithms, resource management techniques, and synchronization methods, and highlight open research directions to improve GPU predictability and performance in real-time environments.
- oai:arXiv.org:2507.06069v2
- cs.AR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Aligning ASR Evaluation with Human and LLM Judgments: Intelligibility Metrics Using Phonetic, Semantic, and NLI Approaches
+ https://arxiv.org/abs/2506.16528
+ arXiv:2506.16528v2 Announce Type: replace
+Abstract: Traditional ASR metrics like WER and CER fail to capture intelligibility, especially for dysarthric and dysphonic speech, where semantic alignment matters more than exact word matches. ASR systems struggle with these speech types, often producing errors like phoneme repetitions and imprecise consonants, yet the meaning remains clear to human listeners. We identify two key challenges: (1) Existing metrics do not adequately reflect intelligibility, and (2) while LLMs can refine ASR output, their effectiveness in correcting ASR transcripts of dysarthric speech remains underexplored. To address this, we propose a novel metric integrating Natural Language Inference (NLI) scores, semantic similarity, and phonetic similarity. Our ASR evaluation metric achieves a 0.890 correlation with human judgments on Speech Accessibility Project data, surpassing traditional methods and emphasizing the need to prioritize intelligibility over error-based measures.
+ oai:arXiv.org:2506.16528v2
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Atiyeh Gheibi-Fetrat, Amirsaeed Ahmadi-Tonekaboni, Farzam Koohi-Ronaghi, Pariya Hajipour, Sana Babayan-Vanestan, Fatemeh Fotouhi, Elahe Mortazavian-Farsani, Pouria Khajehpour-Dezfouli, Sepideh Safari, Shaahin Hessabi, Hamid Sarbazi-Azad
+ Bornali Phukon, Xiuwen Zheng, Mark Hasegawa-Johnson
- An Offline Mobile Conversational Agent for Mental Health Support: Learning from Emotional Dialogues and Psychological Texts with Student-Centered Evaluation
- https://arxiv.org/abs/2507.10580
- arXiv:2507.10580v2 Announce Type: replace
-Abstract: Mental health plays a crucial role in the overall well-being of an individual. In recent years, digital platforms have increasingly been used to expand mental health and emotional support. However, there are persistent challenges related to limited user accessibility, internet connectivity, and data privacy, which highlight the need for an offline, smartphone-based solutions. To address these challenges, we propose EmoSApp (Emotional Support App): an entirely offline, smartphone-based conversational app designed to provide mental health and emotional support. EmoSApp leverages a language model, specifically the LLaMA-3.2-1B-Instruct, which is fine-tuned and quantized on a custom-curated ``Knowledge Dataset'' comprising 14,582 mental health QA pairs along with multi-turn conversational data, enabling robust domain expertise and fully on-device inference on resource-constrained smartphones.
- Through qualitative evaluation with students and mental health professionals, we demonstrate that EmoSApp has the ability to respond coherently and empathetically, provide relevant suggestions to user's mental health problems, and maintain interactive dialogue. Additionally, quantitative evaluations on nine commonsense and reasoning benchmarks, along with two mental health specific datasets, demonstrate EmoSApp's effectiveness in low-resource settings. By prioritizing on-device deployment and specialized domain-specific adaptation, EmoSApp serves as a blueprint for future innovations in portable, secure, and highly tailored AI-driven mental health support.
- oai:arXiv.org:2507.10580v2
- cs.CL
- cs.AI
- cs.CY
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Compliant Residual DAgger: Improving Real-World Contact-Rich Manipulation with Human Corrections
+ https://arxiv.org/abs/2506.16685
+ arXiv:2506.16685v4 Announce Type: replace
+Abstract: We address key challenges in Dataset Aggregation (DAgger) for real-world contact-rich manipulation: how to collect informative human correction data and how to effectively update policies with this new data. We introduce Compliant Residual DAgger (CR-DAgger), which contains two novel components: 1) a Compliant Intervention Interface that leverages compliance control, allowing humans to provide gentle, accurate delta action corrections without interrupting the ongoing robot policy execution; and 2) a Compliant Residual Policy formulation that learns from human corrections while incorporating force feedback and force control. Our system significantly enhances performance on precise contact-rich manipulation tasks using minimal correction data, improving base policy success rates by over 50\% on two challenging tasks (book flipping and belt assembly) while outperforming both retraining-from-scratch and finetuning approaches. Through extensive real-world experiments, we provide practical guidance for implementing effective DAgger in real-world robot learning tasks. Result videos are available at: https://compliant-residual-dagger.github.io/
+ oai:arXiv.org:2506.16685v4
+ cs.RO
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Vimaleswar A, Prabhu Nandan Sahu, Nilesh Kumar Sahu, Haroon R. Lone
+ http://creativecommons.org/licenses/by/4.0/
+ Xiaomeng Xu, Yifan Hou, Zeyi Liu, Shuran Song
- SAFT: Structure-Aware Fine-Tuning of LLMs for AMR-to-Text Generation
- https://arxiv.org/abs/2507.13381
- arXiv:2507.13381v2 Announce Type: replace
-Abstract: Large Language Models (LLMs) are increasingly applied to tasks involving structured inputs such as graphs. Abstract Meaning Representations (AMRs), which encode rich semantics as directed graphs, offer a rigorous testbed for evaluating LLMs on text generation from such structures. Yet, current methods often arbitrarily linearize AMRs, discarding key structural cues, or rely on architectures incompatible with standard LLMs. We introduce SAFT, a structure-aware fine-tuning approach that injects graph topology into pretrained LLMs without architectural changes. We compute direction-sensitive positional encodings from the magnetic Laplacian of transformed AMRs and project them into the embedding space of the LLM. While possibly applicable to any graph-structured inputs, we focus on AMR-to-text generation as a representative and challenging benchmark. SAFT sets a new state-of-the-art on AMR 3.0 with a 3.5 BLEU improvement over baselines. Gains scale with graph complexity, highlighting the value of structure-aware representations in enhancing LLM performance. SAFT offers a general and effective pathway for bridging structured data and language models.
- oai:arXiv.org:2507.13381v2
+ Better Language Model Inversion by Compactly Representing Next-Token Distributions
+ https://arxiv.org/abs/2506.17090
+ arXiv:2506.17090v3 Announce Type: replace
+Abstract: Language model inversion seeks to recover hidden prompts using only language model outputs. This capability has implications for security and accountability in language model deployments, such as leaking private information from an API-protected language model's system message. We propose a new method -- prompt inversion from logprob sequences (PILS) -- that recovers hidden prompts by gleaning clues from the model's next-token probabilities over the course of multiple generation steps. Our method is enabled by a key insight: The vector-valued outputs of a language model occupy a low-dimensional subspace. This enables us to losslessly compress the full next-token probability distribution over multiple generation steps using a linear map, allowing more output information to be used for inversion. Our approach yields massive gains over previous state-of-the-art methods for recovering hidden prompts, achieving 2--3.5 times higher exact recovery rates across test sets, in one case increasing the recovery rate from 17% to 60%. Our method also exhibits surprisingly good generalization behavior; for instance, an inverter trained on 16 generations steps gets 5--27 points higher prompt recovery when we increase the number of steps to 32 at test time. Furthermore, we demonstrate strong performance of our method on the more challenging task of recovering hidden system messages. We also analyze the role of verbatim repetition in prompt recovery and propose a new method for cross-family model transfer for logit-based inverters. Our findings show that next-token probabilities are a considerably more vulnerable attack surface for inversion attacks than previously known.
+ oai:arXiv.org:2506.17090v3cs.CL
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Rafiq Kamel, Filippo Guerranti, Simon Geisler, Stephan G\"unnemann
+ Murtaza Nazir, Matthew Finlayson, John X. Morris, Xiang Ren, Swabha Swayamdipta
- Quantifying Ocular Surface Changes with Contact Lens Wear
- https://arxiv.org/abs/2507.13589
- arXiv:2507.13589v2 Announce Type: replace
-Abstract: Over 140 million people worldwide and over 45 million people in the United States wear contact lenses; it is estimated that 12%-27.4% contact lens users stop wearing them due to discomfort. Contact lens mechanical interactions with the ocular surface have been found to affect the ocular surface itself. These mechanical interactions are difficult to measure and calculate in a clinical setting, and the research in this field is limited. This paper presents the first mathematical model that captures the interactions between the contact lens and the open eye, where the contact lens configuration, the contact lens suction pressure, and the deformed ocular shape are all emergent properties of the model. The non-linear coupling between the contact lens and the eye is achieved by assuming that the suction pressure under the lens is applied directly to the ocular surface through the post-lens tear film layer. The contact lens mechanics are modeled using a previous published model. We consider homogeneous and heterogeneous linear elastic eye models, different ocular shapes, different lens shapes and thickness profiles, and extract lens deformations, suction pressure profiles, and ocular deformations and stresses for all the considered scenarios. The model predicts higher ocular deformations and stresses at the center of the eye and in the limbal/scleral regions. Accounting for heterogeneous material eye parameters increases the magnitude of such deformations and stresses. The ocular displacements and stresses non-linearly increase as we increase the stiffness of the contact lens. Inserting a steeper contact lens on the eye results in a reduction of the ocular displacement at the center of the eye and a larger displacement at the edge of the contact lens. The model predictions are compared with experimental data and previously developed mathematical models.
- oai:arXiv.org:2507.13589v2
- math.NA
- cs.NA
- physics.bio-ph
- Thu, 11 Dec 2025 00:00:00 -0500
+ AI Through the Human Lens: Investigating Cognitive Theories in Machine Psychology
+ https://arxiv.org/abs/2506.18156
+ arXiv:2506.18156v3 Announce Type: replace
+Abstract: We investigate whether Large Language Models (LLMs) exhibit human-like cognitive patterns under four established frameworks from psychology: Thematic Apperception Test (TAT), Framing Bias, Moral Foundations Theory (MFT), and Cognitive Dissonance. We evaluated several proprietary and open-source models using structured prompts and automated scoring. Our findings reveal that these models often produce coherent narratives, show susceptibility to positive framing, exhibit moral judgments aligned with Liberty/Oppression concerns, and demonstrate self-contradictions tempered by extensive rationalization. Such behaviors mirror human cognitive tendencies yet are shaped by their training data and alignment methods. We discuss the implications for AI transparency, ethical deployment, and future work that bridges cognitive psychology and AI safety
+ oai:arXiv.org:2506.18156v3
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.3934/mbe.2026008
- Mathematical Biosciences and Engineering (2026), Volume 23, Issue 1: 172-209
- Lucia Carichino, Kara L. Maki, David S. Ross, Riley K. Supple, Evan Rysdam
+ http://creativecommons.org/licenses/by/4.0/
+ Akash Kundu, Rishika Goswami
- CREME: Robustness Enhancement of Code LLMs via Layer-Aware Model Editing
- https://arxiv.org/abs/2507.16407
- arXiv:2507.16407v3 Announce Type: replace
-Abstract: Large language models (LLMs) have demonstrated impressive capabilities in code generation, where the natural language prompt plays a crucial role in conveying user intent to the model. However, prior studies have shown that LLMs are highly sensitive to prompt perturbations. Minor modifications in wording, syntax, or formatting can significantly reduce the functional correctness of generated code. As perturbations frequently occur in real-world scenarios, improving the robustness of LLMs to prompt perturbations is essential for ensuring reliable performance in practical code generation. In this paper, we introduce CREME (Code Robustness Enhancement via Model Editing), a novel approach that enhances LLM robustness through targeted parameter updates. CREME first identifies robustness-sensitive layers by comparing hidden states between an original prompt and its perturbed variant. Then, it performs lightweight parameter editing at the identified layer to reduce performance degradation. We evaluate CREME on two widely used code generation benchmarks (HumanEval and MBPP) along with their perturbed counterparts. Experimental results show that CREME improves Pass@1 accuracy by 63% on perturbed prompts while maintaining stable performance on clean inputs, with accuracy deviations within 1%. Further analysis reveals that robustness-sensitive layers are primarily concentrated in the middle and deeper layers of the network, and their locations vary across different model architectures. These insights provide a valuable foundation for developing future robustness-oriented editing strategies.
- oai:arXiv.org:2507.16407v3
- cs.SE
- Thu, 11 Dec 2025 00:00:00 -0500
+ A Physics-Informed Neural Network Framework for Simulating Creep Buckling in Growing Viscoelastic Biological Tissues
+ https://arxiv.org/abs/2506.18565
+ arXiv:2506.18565v2 Announce Type: replace
+Abstract: Modeling viscoelastic behavior is crucial in engineering and biomechanics, where materials undergo time-dependent deformations, including stress relaxation, creep buckling and biological tissue development. Traditional numerical methods, like the finite element method, often require explicit meshing, artificial perturbations or embedding customised programs to capture these phenomena, adding computational complexity. In this study, we develop an energy-based physics-informed neural network (PINN) framework using an incremental approach to model viscoelastic creep, stress relaxation, buckling, and growth-induced morphogenesis. Physics consistency is ensured by training neural networks to minimize the systems potential energy functional, implicitly satisfying equilibrium and constitutive laws. We demonstrate that this framework can naturally capture creep buckling without pre-imposed imperfections, leveraging inherent training dynamics to trigger instabilities. Furthermore, we extend our framework to biological tissue growth and morphogenesis, predicting both uniform expansion and differential growth-induced buckling in cylindrical structures. Results show that the energy-based PINN effectively predicts viscoelastic instabilities, post-buckling evolution and tissue morphological evolution, offering a promising alternative to traditional methods. This study demonstrates that PINN can be a flexible robust tool for modeling complex, time-dependent material behavior, opening possible applications in structural engineering, soft materials, and tissue development.
+ oai:arXiv.org:2506.18565v2
+ cs.CE
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1145/3744916.3773111
- Shuhan Liu, Xing Hu, Kerui Huang, Xiaohu Yang, David Lo, Xin Xia
+ Zhongya Lin, Jinshuai Bai, Shuang Li, Xindong Chen, Bo Li, Xi-Qiao Feng
- RoadBench: A Vision-Language Foundation Model and Benchmark for Road Damage Understanding
- https://arxiv.org/abs/2507.17353
- arXiv:2507.17353v3 Announce Type: replace
-Abstract: Accurate road damage detection is crucial for timely infrastructure maintenance and public safety, but existing vision-only datasets and models lack the rich contextual understanding that textual information can provide. To address this limitation, we introduce RoadBench, the first multimodal benchmark for comprehensive road damage understanding. This dataset pairs high resolution images of road damages with detailed textual descriptions, providing a richer context for model training. We also present RoadCLIP, a novel vision language model that builds upon CLIP by integrating domain specific enhancements. It includes a disease aware positional encoding that captures spatial patterns of road defects and a mechanism for injecting road-condition priors to refine the model's understanding of road damages. We further employ a GPT driven data generation pipeline to expand the image to text pairs in RoadBench, greatly increasing data diversity without exhaustive manual annotation. Experiments demonstrate that RoadCLIP achieves state of the art performance on road damage recognition tasks, significantly outperforming existing vision-only models by 19.2%. These results highlight the advantages of integrating visual and textual information for enhanced road condition analysis, setting new benchmarks for the field and paving the way for more effective infrastructure monitoring through multimodal learning.
- oai:arXiv.org:2507.17353v3
- cs.CE
- Thu, 11 Dec 2025 00:00:00 -0500
+ Achieving Trustworthy Real-Time Decision Support Systems with Low-Latency Interpretable AI Models
+ https://arxiv.org/abs/2506.20018
+ arXiv:2506.20018v2 Announce Type: replace
+Abstract: This paper investigates real-time decision support systems that leverage low-latency AI models, bringing together recent progress in holistic AI-driven decision tools, integration with Edge-IoT technologies, and approaches for effective human-AI teamwork. It looks into how large language models can assist decision-making, especially when resources are limited. The research also examines the effects of technical developments such as DeLLMa, methods for compressing models, and improvements for analytics on edge devices, while also addressing issues like limited resources and the need for adaptable frameworks. Through a detailed review, the paper offers practical perspectives on development strategies and areas of application, adding to the field by pointing out opportunities for more efficient and flexible AI-supported systems. The conclusions set the stage for future breakthroughs in this fast-changing area, highlighting how AI can reshape real-time decision support.
+ oai:arXiv.org:2506.20018v2
+ cs.AI
+ cs.AR
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Xi Xiao, Yunbei Zhang, Janet Wang, Lin Zhao, Yuxiang Wei, Hengjia Li, Yanshu Li, Xinyuan Song, Xiao Wang, Swalpa Kumar Roy, Hao Xu, Tianyang Wang
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Zechun Deng, Ziwei Liu, Ziqian Bi, Junhao Song, Chia Xin Liang, Joe Yeong, Xinyuan Song, Junfeng Hao
- DELTAv2: Accelerating Dense 3D Tracking
- https://arxiv.org/abs/2508.01170
- arXiv:2508.01170v2 Announce Type: replace
-Abstract: We propose a novel algorithm for accelerating dense long-term 3D point tracking in videos. Through analysis of existing state-of-the-art methods, we identify two major computational bottlenecks. First, transformer-based iterative tracking becomes expensive when handling a large number of trajectories. To address this, we introduce a coarse-to-fine strategy that begins tracking with a small subset of points and progressively expands the set of tracked trajectories. The newly added trajectories are initialized using a learnable interpolation module, which is trained end-to-end alongside the tracking network. Second, we propose an optimization that significantly reduces the cost of correlation feature computation, another key bottleneck in prior methods. Together, these improvements lead to a 5-100x speedup over existing approaches while maintaining state-of-the-art tracking accuracy.
- oai:arXiv.org:2508.01170v2
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Almost Tight Additive Guarantees for $k$-Edge-Connectivity
+ https://arxiv.org/abs/2506.20906
+ arXiv:2506.20906v3 Announce Type: replace
+Abstract: We consider the \emph{$k$-edge connected spanning subgraph} (kECSS) problem, where we are given an undirected graph $G = (V, E)$ with nonnegative edge costs $\{c_e\}_{e\in E}$, and we seek a minimum-cost \emph{$k$-edge connected} subgraph $H$ of $G$. For even $k$, we present a polytime algorithm that computes a $(k-2)$-edge connected subgraph of cost at most the optimal value $LP^*$ of the natural LP-relaxation for kECSS; for odd $k$, we obtain a $(k-3)$-edge connected subgraph of cost at most $LP^*$. Since kECSS is APX-hard for all $k\geq 2$, our results are nearly optimal. They also significantly improve upon the recent work of Hershkowitz et al., both in terms of solution quality and the simplicity of algorithm and its analysis. Our techniques also yield an alternate guarantee, where we obtain a $(k-1)$-edge connected subgraph of cost at most $1.5\cdot LP^*$; with unit edge costs, the cost guarantee improves to $(1+\frac{4}{3k})\cdot LP^*$, which improves upon the state-of-the-art approximation for unit edge costs, but with a unit loss in edge connectivity.
+ Our kECSS-result also yields results for the \emph{$k$-edge connected spanning multigraph} (kECSM) problem, where multiple copies of an edge can be selected: we obtain a $(1+2/k)$-approximation algorithm for even $k$, and a $(1+3/k)$-approximation algorithm for odd $k$.
+ Our techniques extend to the degree-bounded versions of kECSS and kECSM, wherein we also impose degree lower- and upper- bounds on the nodes. We obtain the same cost and connectivity guarantees for these degree-bounded versions with an additive violation of (roughly) $2$ for the degree bounds. These are the first results for degree-bounded \{kECSS,kECSM\} of the form where the cost of the solution obtained is at most the optimum, and the connectivity constraints are violated by an additive constant.
+ oai:arXiv.org:2506.20906v3
+ cs.DS
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Tuan Duc Ngo, Ashkan Mirzaei, Guocheng Qian, Hanwen Liang, Chuang Gan, Evangelos Kalogerakis, Peter Wonka, Chaoyang Wang
+ Nikhil Kumar, Chaitanya Swamy
- AURORA:Augmented Understanding via Structured Reasoning and Reinforcement Learning for Reference Audio-Visual Segmentation
- https://arxiv.org/abs/2508.02149
- arXiv:2508.02149v2 Announce Type: replace
-Abstract: Reference Audio-Visual Segmentation (Ref-AVS) tasks challenge models to precisely locate sounding objects by integrating visual, auditory, and textual cues. Existing methods often lack genuine semantic understanding, tending to memorize fixed reasoning patterns. Furthermore, jointly training for reasoning and segmentation can compromise pixel-level precision. To address these issues, we introduce AURORA, a novel framework designed to enhance genuine reasoning and language comprehension in reference audio-visual segmentation. We employ a structured Chain-of-Thought (CoT) prompting mechanism to guide the model through a step-by-step reasoning process and introduce a novel segmentation feature distillation loss to effectively integrate these reasoning abilities without sacrificing segmentation performance. To further cultivate the model's genuine reasoning capabilities, we devise a further two-stage training strategy: first, a ``corrective reflective-style training" stage utilizes self-correction to enhance the quality of reasoning paths, followed by reinforcement learning via Group Reward Policy Optimization (GRPO) to bolster robustness in challenging scenarios. Experiments demonstrate that AURORA achieves state-of-the-art performance on Ref-AVS benchmarks and generalizes effectively to unreferenced segmentation.
- oai:arXiv.org:2508.02149v2
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Deception Detection in Dyadic Exchanges Using Multimodal Machine Learning: A Study on a Swedish Cohort
+ https://arxiv.org/abs/2506.21429
+ arXiv:2506.21429v2 Announce Type: replace
+Abstract: This study investigates the efficacy of using multimodal machine learning techniques to detect deception in dyadic interactions, focusing on the integration of data from both the deceiver and the deceived. We compare early and late fusion approaches, utilizing audio and video data - specifically, Action Units and gaze information - across all possible combinations of modalities and participants. Our dataset, newly collected from Swedish native speakers engaged in truth or lie scenarios on emotionally relevant topics, serves as the basis for our analysis. The results demonstrate that incorporating both speech and facial information yields superior performance compared to single-modality approaches. Moreover, including data from both participants significantly enhances deception detection accuracy, with the best performance (71%) achieved using a late fusion strategy applied to both modalities and participants. These findings align with psychological theories suggesting differential control of facial and vocal expressions during initial interactions. As the first study of its kind on a Scandinavian cohort, this research lays the groundwork for future investigations into dyadic interactions, particularly within psychotherapy settings.
+ oai:arXiv.org:2506.21429v2
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ziyang Luo, Nian Liu, Fahad Shahbaz Khan, Junwei Han
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Thomas Jack Samuels, Franco Rugolon, Stephan Hau, Lennart H\"ogman
- Transient thermal analysis of a bi-layered composites with the dual-reciprocity inclusion-based boundary element method
- https://arxiv.org/abs/2508.02683
- arXiv:2508.02683v2 Announce Type: replace
-Abstract: This paper proposes a single-domain dual-reciprocity inclusion-based boundary element method (DR-iBEM) for a three-dimensional fully bonded bi-layered composite embedded with ellipsoidal inhomogeneities under transient/harmonic thermal loads. The heat equation is interpreted as a static one containing time- and frequency-dependent nonhomogeneous source terms, which is similar to eigen-fields but is transformed into a boundary integral by the dual-reciprocity method. Using the steady-state bimaterial Green's function, boundary integral equations are proposed to take into account continuity conditions of temperature and heat flux, which avoids setting up any continuity equations at the bimaterial interface. Eigen-temperature-gradients and eigen-heat-source are introduced to simulate the material mismatch in thermal conductivity and heat capacity, respectively. The DR-iBEM algorithm is particularly suitable for investigating the transient and harmonic thermal behaviors of bi-layered composites and is verified by the finite element method (FEM). Numerical comparison with the FEM demonstrates its robustness and accuracy. The method has been applied to a functionally graded material as a bimaterial with graded particle distributions, where particle size and gradation effects are evaluated.
- oai:arXiv.org:2508.02683v2
- math.NA
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Counting with Confidence: Accurate Pest Monitoring in Water Traps
+ https://arxiv.org/abs/2506.22438
+ arXiv:2506.22438v2 Announce Type: replace
+Abstract: Accurate pest population monitoring and tracking their dynamic changes are crucial for precision agriculture decision-making. A common limitation in existing vision-based automatic pest counting research is that models are typically evaluated on datasets with ground truth but deployed in real-world scenarios without assessing the reliability of counting results due to the lack of ground truth. To this end, this paper proposed a method for comprehensively evaluating pest counting confidence in the image, based on information related to counting results and external environmental conditions. First, a pest detection network is used for pest detection and counting, extracting counting result-related information. Then, the pest images undergo image quality assessment, image complexity assessment, and pest distribution uniformity assessment. And the changes in image clarity caused by stirring during image acquisition are quantified by calculating the average gradient magnitude. Notably, we designed a hypothesis-driven multi-factor sensitivity analysis method to select the optimal image quality assessment and image complexity assessment methods. And we proposed an adaptive DBSCAN clustering algorithm for pest distribution uniformity assessment. Finally, the obtained information related to counting results and external environmental conditions is input into a regression model for prediction, resulting in the final pest counting confidence. To the best of our knowledge, this is the first study dedicated to comprehensively evaluating counting confidence in counting tasks, and quantifying the relationship between influencing factors and counting confidence through a model. Experimental results show our method reduces MSE by 31.7% and improves R2 by 15.2% on the pest counting confidence test set, compared to the baseline built primarily on information related to counting results.
+ oai:arXiv.org:2506.22438v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1016/j.ijheatmasstransfer.2025.128116
- International Journal of Heat and Mass Transfer, 2026, volume 256, part 3, 128116
- Chunlin Wu, Liangliang Zhang, Tengxiang Wang, Huiming Yin
+ 10.1016/j.ifacol.2025.11.792
+ IFAC-PapersOnLine 59(23):233-238, 2025
+ Xumin Gao, Mark Stevens, Grzegorz Cielniak
- ShoppingBench: A Real-World Intent-Grounded Shopping Benchmark for LLM-based Agents
- https://arxiv.org/abs/2508.04266
- arXiv:2508.04266v3 Announce Type: replace
-Abstract: Existing benchmarks in e-commerce primarily focus on basic user intents, such as finding or purchasing products. However, real-world users often pursue more complex goals, such as applying vouchers, managing budgets, and finding multi-products seller. To bridge this gap, we propose ShoppingBench, a novel end-to-end shopping benchmark designed to encompass increasingly challenging levels of grounded intent. Specifically, we propose a scalable framework to simulate user instructions based on various intents derived from sampled real-world products. To facilitate consistent and reliable evaluations, we provide a large-scale shopping sandbox that serves as an interactive simulated environment, incorporating over 2.5 million real-world products. Experimental results demonstrate that even state-of-the-art language agents (such as GPT-4.1) achieve absolute success rates under 50% on our benchmark tasks, highlighting the significant challenges posed by our ShoppingBench. In addition, we propose a trajectory distillation strategy and leverage supervised fine-tuning, along with reinforcement learning on synthetic trajectories, to distill the capabilities of a large language agent into a smaller one. As a result, our trained agent achieves competitive performance compared to GPT-4.1.
- oai:arXiv.org:2508.04266v3
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Multi-Modal Graph Convolutional Network with Sinusoidal Encoding for Robust Human Action Segmentation
+ https://arxiv.org/abs/2507.00752
+ arXiv:2507.00752v2 Announce Type: replace
+Abstract: Accurate temporal segmentation of human actions is critical for intelligent robots in collaborative settings, where a precise understanding of sub-activity labels and their temporal structure is essential. However, the inherent noise in both human pose estimation and object detection often leads to over-segmentation errors, disrupting the coherence of action sequences. To address this, we propose a Multi-Modal Graph Convolutional Network (MMGCN) that integrates low-frame-rate (e.g., 1 fps) visual data with high-frame-rate (e.g., 30 fps) motion data (skeleton and object detections) to mitigate fragmentation. Our framework introduces three key contributions. First, a sinusoidal encoding strategy that maps 3D skeleton coordinates into a continuous sin-cos space to enhance spatial representation robustness. Second, a temporal graph fusion module that aligns multi-modal inputs with differing resolutions via hierarchical feature aggregation, Third, inspired by the smooth transitions inherent to human actions, we design SmoothLabelMix, a data augmentation technique that mixes input sequences and labels to generate synthetic training examples with gradual action transitions, enhancing temporal consistency in predictions and reducing over-segmentation artifacts.
+ Extensive experiments on the Bimanual Actions Dataset, a public benchmark for human-object interaction understanding, demonstrate that our approach outperforms state-of-the-art methods, especially in action segmentation accuracy, achieving F1@10: 94.5% and F1@25: 92.8%.
+ oai:arXiv.org:2507.00752v2
+ cs.CV
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jiangyuan Wang, Kejun Xiao, Qi Sun, Huaipeng Zhao, Tao Luo, Jian Dong Zhang, Xiaoyi Zeng
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1109/IROS60139.2025.11245867
+ Hao Xing, Kai Zhe Boey, Yuankai Wu, Darius Burschka, Gordon Cheng
- Riemann-Roch bases for arbitrary elliptic curve divisors and their application in cryptography
- https://arxiv.org/abs/2508.04340
- arXiv:2508.04340v2 Announce Type: replace
-Abstract: This paper presents explicit constructions of bases for Riemann-Roch spaces associated with arbitrary divisors on elliptic curves. In the context of algebraic geometry codes, the knowledge of an explicit basis for arbitrary divisors is especially valuable, as it enables efficient code construction. From a cryptographic point of view, codes associated with arbitrary divisors with many points are closer to Goppa codes, making them attractive for embedding in the McEliece cryptosystem. Using the results obtained in this work, it is also possible to efficiently construct quasi-cyclic subfield subcodes of elliptic codes. These codes enable a significant reduction in public key size for the McEliece cryptosystem and, consequently, represent promising candidates for integration into post-quantum code-based schemes.
- oai:arXiv.org:2508.04340v2
- cs.IT
- cs.CR
- math.AG
- math.IT
- Thu, 11 Dec 2025 00:00:00 -0500
+ Towards Open-World Human Action Segmentation Using Graph Convolutional Networks
+ https://arxiv.org/abs/2507.00756
+ arXiv:2507.00756v2 Announce Type: replace
+Abstract: Human-object interaction segmentation is a fundamental task of daily activity understanding, which plays a crucial role in applications such as assistive robotics, healthcare, and autonomous systems. Most existing learning-based methods excel in closed-world action segmentation, they struggle to generalize to open-world scenarios where novel actions emerge. Collecting exhaustive action categories for training is impractical due to the dynamic diversity of human activities, necessitating models that detect and segment out-of-distribution actions without manual annotation. To address this issue, we formally define the open-world action segmentation problem and propose a structured framework for detecting and segmenting unseen actions. Our framework introduces three key innovations: 1) an Enhanced Pyramid Graph Convolutional Network (EPGCN) with a novel decoder module for robust spatiotemporal feature upsampling. 2) Mixup-based training to synthesize out-of-distribution data, eliminating reliance on manual annotations. 3) A novel Temporal Clustering loss that groups in-distribution actions while distancing out-of-distribution samples.
+ We evaluate our framework on two challenging human-object interaction recognition datasets: Bimanual Actions and 2 Hands and Object (H2O) datasets. Experimental results demonstrate significant improvements over state-of-the-art action segmentation models across multiple open-set evaluation metrics, achieving 16.9% and 34.6% relative gains in open-set segmentation (F1@50) and out-of-distribution detection performances (AUROC), respectively. Additionally, we conduct an in-depth ablation study to assess the impact of each proposed component, identifying the optimal framework configuration for open-world action segmentation.
+ oai:arXiv.org:2507.00756v2
+ cs.CV
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Artyom Kuninets, Ekaterina Malygina
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1109/IROS60139.2025.11247257
+ Hao Xing, Kai Zhe Boey, Gordon Cheng
- WGAST: Weakly-Supervised Generative Network for Daily 10 m Land Surface Temperature Estimation via Spatio-Temporal Fusion
- https://arxiv.org/abs/2508.06485
- arXiv:2508.06485v2 Announce Type: replace
-Abstract: Urbanization, climate change, and agricultural stress are increasing the demand for precise and timely environmental monitoring. Land Surface Temperature (LST) is a key variable in this context and is retrieved from remote sensing satellites. However, these systems face a trade-off between spatial and temporal resolution. While spatio-temporal fusion methods offer promising solutions, few have addressed the estimation of daily LST at 10 m resolution. In this study, we present WGAST, a weakly-supervised generative network for daily 10 m LST estimation via spatio-temporal fusion of Terra MODIS, Landsat 8, and Sentinel-2. WGAST is the first end-to-end deep learning framework designed for this task. It adopts a conditional generative adversarial architecture, with a generator composed of four stages: feature extraction, fusion, LST reconstruction, and noise suppression. The first stage employs a set of encoders to extract multi-level latent representations from the inputs, which are then fused in the second stage using cosine similarity, normalization, and temporal attention mechanisms. The third stage decodes the fused features into high-resolution LST, followed by a Gaussian filter to suppress high-frequency noise. Training follows a weakly supervised strategy based on physical averaging principles and reinforced by a PatchGAN discriminator. Experiments demonstrate that WGAST outperforms existing methods in both quantitative and qualitative evaluations. Compared to the best-performing baseline, on average, WGAST reduces RMSE by 17.05% and improves SSIM by 4.22%. Furthermore, WGAST effectively captures fine-scale thermal patterns, as validated against near-surface air temperature measurements from 33 near-ground sensors. The code is available at https://github.com/Sofianebouaziz1/WGAST.git.
- oai:arXiv.org:2508.06485v2
- cs.CV
- cs.AI
+ Proof of a perfect platonic representation hypothesis
+ https://arxiv.org/abs/2507.01098
+ arXiv:2507.01098v2 Announce Type: replace
+Abstract: In this note, we elaborate on and explain in detail the proof given by Ziyin et al. (2025) of the ``perfect" Platonic Representation Hypothesis (PRH) for the embedded deep linear network model (EDLN). We show that if trained with the stochastic gradient descent (SGD), two EDLNs with different widths and depths and trained on different data will become Perfectly Platonic, meaning that every possible pair of layers will learn the same representation up to a rotation. Because most of the global minima of the loss function are not Platonic, that SGD only finds the perfectly Platonic solution is rather extraordinary. The proof also suggests at least six ways the PRH can be broken. We also show that in the EDLN model, the emergence of the Platonic representations is due to the same reason as the emergence of progressive sharpening. This implies that these two seemingly unrelated phenomena in deep learning can, surprisingly, have a common cause. Overall, the theory and proof highlight the importance of understanding emergent "entropic forces" due to the irreversibility of SGD training and their role in representation learning. The goal of this note is to be instructive while avoiding jargon and lengthy technical details.
+ oai:arXiv.org:2507.01098v2cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cond-mat.dis-nn
+ q-bio.NC
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sofiane Bouaziz, Adel Hafiane, Raphael Canals, Rachid Nedjai
+ Liu Ziyin, Isaac Chuang
- PROPS: Progressively Private Self-alignment of Large Language Models
- https://arxiv.org/abs/2508.06783
- arXiv:2508.06783v2 Announce Type: replace
-Abstract: Alignment is a key step in developing Large Language Models (LLMs) using human feedback to ensure adherence to human values and societal norms. Dependence on human feedback raises privacy concerns about how much a labeler's preferences may reveal about their personal values, beliefs, and personality traits. Existing approaches, such as Differentially Private SGD (DP-SGD), provide rigorous privacy guarantees by privatizing gradients during fine-tuning and alignment but can provide more privacy than necessary as human preferences are tied only to labels of (prompt, response) pairs and can degrade model utility. This work focuses on LLM alignment with preference-level privacy, which preserves the privacy of preference labels provided by humans. We propose PROPS (PROgressively Private Self-alignment), a multi-stage privacy preserving alignment framework where privately aligned models in previous stages can serve as labelers for supplementing training data in the subsequent stages of alignment. We present theoretical guarantees for PROPS as well as comprehensive validation using multiple models (Pythia and GPT) and datasets (AlpacaEval, Anthropic HH-RLHF, truthy-dpo-v0.1) to demonstrate the utility of PROPS over existing methods while still providing high privacy. For the same privacy budget, alignment via PROPS can achieve up to 3x higher win-rates compared to DP-SGD, and 2.5x higher win-rates compared to Randomized Response (RR) based alignment.
- oai:arXiv.org:2508.06783v2
+ Dynamic Regret Reduces to Kernelized Static Regret
+ https://arxiv.org/abs/2507.05478
+ arXiv:2507.05478v2 Announce Type: replace
+Abstract: We study dynamic regret in online convex optimization, where the objective is to achieve low cumulative loss relative to an arbitrary benchmark sequence. By observing that competing with an arbitrary sequence of comparators $u_{1},\ldots,u_{T}$ in $\mathcal{W}\subseteq\mathbb{R}^{d}$ is equivalent to competing with a fixed comparator function $u:[1,T]\to \mathcal{W}$, we frame dynamic regret minimization as a static regret problem in a function space. By carefully constructing a suitable function space in the form of a Reproducing Kernel Hilbert Space (RKHS), our reduction enables us to recover the optimal $R_{T}(u_{1},\ldots,u_{T}) = \mathcal{O}(\sqrt{\sum_{t}\|u_{t}-u_{t-1}\|T})$ dynamic regret guarantee in the setting of linear losses, and yields new scale-free and directionally-adaptive dynamic regret guarantees. Moreover, unlike prior dynamic-to-static reductions -- which are valid only for linear losses -- our reduction holds for any sequence of losses, allowing us to recover $\mathcal{O}\big(\|u\|^2_{\mathcal{H}}+d_{\mathrm{eff}}(\lambda)\ln T\big)$ bounds in exp-concave and improper linear regression settings, where $d_{\mathrm{eff}}(\lambda)$ is a measure of complexity of the RKHS. Despite working in an infinite-dimensional space, the resulting reduction leads to algorithms that are computable in practice, due to the reproducing property of RKHSs.
+ oai:arXiv.org:2507.05478v2cs.LG
- cs.AI
- cs.CR
- cs.IT
- math.IT
- Thu, 11 Dec 2025 00:00:00 -0500
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Transactions on ML Research (TMLR) 2025
- Noel Teku, Fengwei Tian, Payel Bhattacharjee, Souradip Chakraborty, Amrit Singh Bedi, Ravi Tandon
+ http://creativecommons.org/licenses/by/4.0/
+ Andrew Jacobsen, Alessandro Rudi, Francesco Orabona, Nicolo Cesa-Bianchi
- WeatherDiffusion: Controllable Weather Editing in Intrinsic Space
- https://arxiv.org/abs/2508.06982
- arXiv:2508.06982v4 Announce Type: replace
-Abstract: We present WeatherDiffusion, a diffusion-based framework for controllable weather editing in intrinsic space. Our framework includes two components based on diffusion priors: an inverse renderer that estimates material properties, scene geometry, and lighting as intrinsic maps from an input image, and a forward renderer that utilizes these geometry and material maps along with a text prompt that describes specific weather conditions to generate a final image. The intrinsic maps enhance controllability compared to traditional pixel-space editing approaches. We propose an intrinsic map-aware attention mechanism that improves spatial correspondence and decomposition quality in large outdoor scenes. For forward rendering, we leverage CLIP-space interpolation of weather prompts to achieve fine-grained weather control. We also introduce a synthetic and a real-world dataset, containing 38k and 18k images under various weather conditions, each with intrinsic map annotations. WeatherDiffusion outperforms state-of-the-art pixel-space editing approaches, weather restoration methods, and rendering-based methods, showing promise for downstream tasks such as autonomous driving, enhancing the robustness of detection and segmentation in challenging weather scenarios.
- oai:arXiv.org:2508.06982v4
+ RectifiedHR: High-Resolution Diffusion via Energy Profiling and Adaptive Guidance Scheduling
+ https://arxiv.org/abs/2507.09441
+ arXiv:2507.09441v2 Announce Type: replace
+Abstract: High-resolution image synthesis with diffusion models often suffers from energy instabilities and guidance artifacts that degrade visual quality. We analyze the latent energy landscape during sampling and propose adaptive classifier-free guidance (CFG) schedules that maintain stable energy trajectories. Our approach introduces energy-aware scheduling strategies that modulate guidance strength over time, achieving superior stability scores (0.9998) and consistency metrics (0.9873) compared to fixed-guidance approaches. We demonstrate that DPM++ 2M with linear-decreasing CFG scheduling yields optimal performance, providing sharper, more faithful images while reducing artifacts. Our energy profiling framework serves as a powerful diagnostic tool for understanding and improving diffusion model behavior.
+ oai:arXiv.org:2507.09441v2
+ cs.GRcs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yixin Zhu, Zuoliang Zhu, Jian Yang, Milo\v{s} Ha\v{s}an, Jin Xie, Beibei Wang
+ http://creativecommons.org/licenses/by/4.0/
+ Ankit Sanjyal
- AugLift: Uncertainty Aware Depth Descriptors for Robust 2D to 3D Pose Lifting
- https://arxiv.org/abs/2508.07112
- arXiv:2508.07112v3 Announce Type: replace
-Abstract: Lifting based 3D human pose estimators infer 3D joints from 2D keypoints, but often struggle to generalize to real world settings with noisy 2D detections. We revisit the input to lifting and propose AugLift, a simple augmentation of standard lifting that enriches each 2D keypoint (x, y) with an Uncertainty Aware Depth Descriptor (UADD). We run a single off the shelf monocular depth estimator to obtain a depth map, and for every keypoint with detector confidence c we extract depth statistics from its confidence scaled neighborhood, forming a compact, interpretable UADD (c, d, d_min, d_max) that captures both local geometry and reliability. AugLift is modular, requires no new sensors or architectural changes, and integrates by expanding the input layer of existing lifting models.
- Across four datasets and four lifting architectures, AugLift boosts cross dataset (out of distribution) performance on unseen data by an average of 10.1 percent, while also improving in distribution performance by 4.0 percent as measured by MPJPE. A post hoc analysis clarifies when and why it helps: gains are largest on novel poses and significantly occluded joints, where depth statistics resolve front back ambiguities while confidence calibrates the spatial neighborhoods from which they are drawn. We also study interaction with recent image feature lifting methods and find the signals are complementary: adding UADD to image conditioned lifting yields both ID and OOD gains. A learned depth feature extension (AugLiftV2) improves performance further while trading off interpretability. Together, these results indicate that lightweight, confidence aware depth cues are a powerful plug in for robust 2D to 3D pose lifting.
- oai:arXiv.org:2508.07112v3
- cs.CV
+ Optimizing Drivers' Discount Order Acceptance Strategies: A Policy-Improved Deep Deterministic Policy Gradient Framework
+ https://arxiv.org/abs/2507.11865
+ arXiv:2507.11865v2 Announce Type: replace
+Abstract: The rapid expansion of platform integration has emerged as an effective solution to mitigate market fragmentation by consolidating multiple ride-hailing platforms into a single application. To address heterogeneous passenger preferences, third-party integrators provide Discount Express service delivered by express drivers at lower trip fares. For the individual platform, encouraging broader participation of drivers in Discount Express services has the potential to expand the accessible demand pool and improve matching efficiency, but often at the cost of reduced profit margins. This study aims to dynamically manage drivers' acceptance of Discount Express from the perspective of an individual platform. The lack of historical data under the new business model necessitates online learning. However, early-stage exploration through trial and error can be costly in practice, highlighting the need for reliable early-stage performance in real-world deployment. To address these challenges, this study formulates the decision regarding the proportion of drivers accepting discount orders as a continuous control task. In response to the high stochasticity and the opaque matching mechanisms employed by third-party integrator, we propose an innovative policy-improved deep deterministic policy gradient (pi-DDPG) framework. The proposed framework incorporates a refiner module to boost policy performance during the early training phase. A customized simulator based on a real-world dataset is developed to validate the effectiveness of the proposed pi-DDPG. Numerical experiments demonstrate that pi-DDPG achieves superior learning efficiency and significantly reduces early-stage training losses, enhancing its applicability to practical ride-hailing scenarios.
+ oai:arXiv.org:2507.11865v2cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Nikolai Warner, Wenjin Zhang, Hamid Badiozamani, Irfan Essa, Apaar Sadhwani
+ Hanwen Dai, Chang Gao, Fang He, Congyuan Ji, Yanni Yang
- OpenConstruction: A Systematic Synthesis of Open Visual Datasets for Data-Centric Artificial Intelligence in Construction Monitoring
- https://arxiv.org/abs/2508.11482
- arXiv:2508.11482v2 Announce Type: replace
-Abstract: The construction industry increasingly relies on visual data to support Artificial Intelligence (AI) and Machine Learning (ML) applications for site monitoring. High-quality, domain-specific datasets, comprising images, videos, and point clouds, capture site geometry and spatiotemporal dynamics, including the location and interaction of objects, workers, and materials. However, despite growing interest in leveraging visual datasets, existing resources vary widely in sizes, data modalities, annotation quality, and representativeness of real-world construction conditions. A systematic review to categorize their data characteristics and application contexts is still lacking, limiting the community's ability to fully understand the dataset landscape, identify critical gaps, and guide future directions toward more effective, reliable, and scalable AI applications in construction. To address this gap, this study conducts an extensive search of academic databases and open-data platforms, yielding 51 publicly available visual datasets that span the 2005-2024 period. These datasets are categorized using a structured data schema covering (i) data fundamentals (e.g., size and license), (ii) data modalities (e.g., RGB and point cloud), (iii) annotation frameworks (e.g., bounding boxes), and (iv) downstream application domains (e.g., progress tracking). This study synthesizes these findings into an open-source catalog, OpenConstruction, supporting data-driven method development. Furthermore, the study discusses several critical limitations in the existing construction dataset landscape and presents a roadmap for future data infrastructure anchored in the Findability, Accessibility, Interoperability, and Reusability (FAIR) principles. By reviewing the current landscape and outlining strategic priorities, this study supports the advancement of data-centric solutions in the construction sector.
- oai:arXiv.org:2508.11482v2
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Emotional Support with LLM-based Empathetic Dialogue Generation
+ https://arxiv.org/abs/2507.12820
+ arXiv:2507.12820v2 Announce Type: replace
+Abstract: Emotional Support Conversation (ESC) aims to provide empathetic and effective emotional assistance through dialogue, addressing the growing demand for mental health support. This paper presents our solution for the NLPCC 2025 Task 8 ESC evaluation, where we leverage large-scale language models enhanced by prompt engineering and finetuning techniques. We explore both parameter-efficient Low-Rank Adaptation and full-parameter fine-tuning strategies to improve the model's ability to generate supportive and contextually appropriate responses. Our best model ranked second in the competition, highlighting the potential of combining LLMs with effective adaptation methods for ESC tasks. Future work will focus on further enhancing emotional understanding and response personalization to build more practical and reliable emotional support systems.
+ oai:arXiv.org:2507.12820v2
+ cs.AI
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ruoxin Xiong, Yanyu Wang, Jiannan Cai, Kaijian Liu, Yuansheng Zhu, Pingbo Tang, Nora El-Gohary
+ http://creativecommons.org/licenses/by/4.0/
+ Shiquan Wang, Ruiyu Fang, Zhongjiang He, Shuangyong Song, Yongxiang Li
- Matrix-game 2.0: An open-source real-time and streaming interactive world model
- https://arxiv.org/abs/2508.13009
- arXiv:2508.13009v3 Announce Type: replace
-Abstract: Recent advances in interactive video generations have demonstrated diffusion model's potential as world models by capturing complex physical dynamics and interactive behaviors. However, existing interactive world models depend on bidirectional attention and lengthy inference steps, severely limiting real-time performance. Consequently, they are hard to simulate real-world dynamics, where outcomes must update instantaneously based on historical context and current actions. To address this, we present Matrix-Game 2.0, an interactive world model generates long videos on-the-fly via few-step auto-regressive diffusion. Our framework consists of three key components: (1) A scalable data production pipeline for Unreal Engine and GTA5 environments to effectively produce massive amounts (about 1200 hours) of video data with diverse interaction annotations; (2) An action injection module that enables frame-level mouse and keyboard inputs as interactive conditions; (3) A few-step distillation based on the casual architecture for real-time and streaming video generation. Matrix Game 2.0 can generate high-quality minute-level videos across diverse scenes at an ultra-fast speed of 25 FPS. We open-source our model weights and codebase to advance research in interactive world modeling.
- oai:arXiv.org:2508.13009v3
+ Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models
+ https://arxiv.org/abs/2507.13162
+ arXiv:2507.13162v2 Announce Type: replace
+Abstract: Existing world models for autonomous driving struggle with long-horizon generation and generalization to challenging scenarios. In this work, we develop a model using simple design choices, and without additional supervision or sensors, such as maps, depth, or multiple cameras. We show that our model yields state-of-the-art performance, despite having only 469M parameters and being trained on 280h of video data. It particularly stands out in difficult scenarios like turning maneuvers and urban traffic. We test whether discrete token models possibly have advantages over continuous models based on flow matching. To this end, we set up a hybrid tokenizer that is compatible with both approaches and allows for a side-by-side comparison. Our study concludes in favor of the continuous autoregressive model, which is less brittle on individual design choices and more powerful than the model built on discrete tokens. Code, models and qualitative results are publicly available at https://lmb-freiburg.github.io/orbis.github.io/.
+ oai:arXiv.org:2507.13162v2cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Xianglong He, Chunli Peng, Zexiang Liu, Boyang Wang, Yifan Zhang, Qi Cui, Fei Kang, Biao Jiang, Mengyin An, Yangyang Ren, Baixin Xu, Hao-Xiang Guo, Kaixiong Gong, Size Wu, Wei Li, Xuchen Song, Yang Liu, Yangguang Li, Yahui Zhou
+ Arian Mousakhan, Sudhanshu Mittal, Silvio Galesso, Karim Farid, Thomas Brox
- Vevo2: A Unified and Controllable Framework for Speech and Singing Voice Generation
- https://arxiv.org/abs/2508.16332
- arXiv:2508.16332v2 Announce Type: replace
-Abstract: Controllable human voice generation, particularly for expressive domains like singing, remains a significant challenge. This paper introduces Vevo2, a unified framework for controllable speech and singing voice generation. To tackle issues like the scarcity of annotated singing data and to enable flexible controllability, Vevo2 introduces two audio tokenizers: (1) a unified music-notation-free prosody tokenizer that captures prosody and melody from speech, singing, and even instrumental sounds, and (2) a unified content-style tokenizer that encodes linguistic content, prosody, and style for both speech and singing, while enabling timbre disentanglement. Vevo2 consists of an auto-regressive (AR) content-style modeling stage, which aims to enable controllability over text, prosody, and style, as well as a flow-matching acoustic modeling stage that allows for timbre control. Particularly, during the speech-singing joint training of the AR model, we propose both explicit and implicit prosody learning strategies to bridge speech and singing voice. Moreover, to further enhance the Vevo2's ability to follow text and prosody, we design a multi-objective post-training task that integrates both intelligibility and prosody similarity alignment. Experimental results show that the unified modeling in Vevo2 brings mutual benefits to both speech and singing voice generation. Additionally, Vevo2's effectiveness across a wide range of synthesis, conversion, and editing tasks for both speech and singing further demonstrates its strong generalization ability and versatility. Audio samples are are available at https://versasinger.github.io/.
- oai:arXiv.org:2508.16332v2
- cs.SD
- cs.AI
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Order in Partial Markov Categories
+ https://arxiv.org/abs/2507.19424
+ arXiv:2507.19424v3 Announce Type: replace
+Abstract: Partial Markov categories are a recent framework for categorical probability theory that provide an abstract account of partial probabilistic computation with updating semantics. In this article, we discuss two order relations on the morphisms of a partial Markov category. In particular, we prove that every partial Markov category is canonically preorder-enriched, recovering several well-known order enrichments. We also demonstrate that the existence of codiagonal maps (comparators) is closely related to order properties of partial Markov categories. Finally, we introduce a synthetic version of the Cauchy--Schwarz inequality and, from it, we prove that updating increases validity.
+ oai:arXiv.org:2507.19424v3
+ cs.LO
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Xueyao Zhang, Junan Zhang, Yuancheng Wang, Chaoren Wang, Yuanzhe Chen, Dongya Jia, Zhuo Chen, Zhizheng Wu
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Elena Di Lavore, Mario Rom\'an, Pawe{\l} Soboci\'nski, M\'ark Sz\'eles
- Steiner Traveling Salesman Problem with Time Windows and Pickup-Delivery: integrating classical and quantum optimization
- https://arxiv.org/abs/2508.17896
- arXiv:2508.17896v2 Announce Type: replace
-Abstract: We propose the Steiner Traveling Salesman Problem with Time Windows and Pickup and Delivery, an advanced and practical extension of classical routing models. This variant integrates the characteristics of the Steiner Traveling Salesman Problem with time-window constraints, pickup and delivery operations and vehicle capacity limitations. These features closely mirror the complexities of contemporary logistics challenges, including last-mile distribution, reverse logistics and on-demand service scenarios. To tackle the inherent computational difficulties of this NP-hard problem, we propose two specialized mathematical formulations: an arc-based model and a node-oriented model, each designed to capture distinct structural aspects of the problem. We further introduce a preprocessing reduction method that eliminates redundant arcs, significantly enhancing computational performance and scalability. Both formulations are implemented using classical and quantum optimization approaches. In particular, the classical models are solved with Gurobi, whereas the quantum implementation is carried out on D-Wave's LeapCQMHybrid platform, a hybrid quantum-classical environment that integrates quantum annealing with classical optimization techniques for constrained problem solving. Numerical experiments are conducted to validate the proposed formulations and the preprocessing reduction method. The analyses performed assess the structural properties of the two models, their computational behavior, and the impact of preprocessing on problem size and solution efficiency.
- oai:arXiv.org:2508.17896v2
- cs.ET
- Thu, 11 Dec 2025 00:00:00 -0500
+ CAPE: A CLIP-Aware Pointing Ensemble of Complementary Heatmap Cues for Embodied Reference Understanding
+ https://arxiv.org/abs/2507.21888
+ arXiv:2507.21888v4 Announce Type: replace
+Abstract: We address Embodied Reference Understanding, the task of predicting the object a person in the scene refers to through pointing gesture and language. This requires multimodal reasoning over text, visual pointing cues, and scene context, yet existing methods often fail to fully exploit visual disambiguation signals. We also observe that while the referent often aligns with the head-to-fingertip direction, in many cases it aligns more closely with the wrist-to-fingertip direction, making a single-line assumption overly limiting. To address this, we propose a dual-model framework, where one model learns from the head-to-fingertip direction and the other from the wrist-to-fingertip direction. We introduce a Gaussian ray heatmap representation of these lines and use them as input to provide a strong supervisory signal that encourages the model to better attend to pointing cues. To fuse their complementary strengths, we present the CLIP-Aware Pointing Ensemble module, which performs a hybrid ensemble guided by CLIP features. We further incorporate an auxiliary object center prediction head to enhance referent localization. We validate our approach on YouRefIt, achieving 75.0 mAP at 0.25 IoU, alongside state-of-the-art CLIP and C_D scores, and demonstrate its generality on unseen CAESAR and ISL Pointing, showing robust performance across benchmarks.
+ oai:arXiv.org:2507.21888v4
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Alessia Ciacco, Francesca Guerriero, Eneko Osaba
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fevziye Irem Eyiokur, Dogucan Yaman, Haz{\i}m Kemal Ekenel, Alexander Waibel
- Grounding the Ungrounded: A Spectral-Graph Framework for Quantifying Hallucinations in Multimodal LLMs
- https://arxiv.org/abs/2508.19366
- arXiv:2508.19366v4 Announce Type: replace
-Abstract: Hallucinations in LLMs--especially in multimodal settings--undermine reliability. We present a rigorous information-geometric framework, grounded in diffusion dynamics, to quantify hallucinations in MLLMs where model outputs are embedded via spectral decompositions of multimodal graph Laplacians, and their gaps to a truth manifold define a semantic distortion metric. We derive Courant-Fischer bounds on a temperature-dependent hallucination profile and use RKHS eigenmodes to obtain modality-aware, interpretable measures that track evolution over prompts and time. This reframes hallucination as quantifiable and bounded, providing a principled basis for evaluation and mitigation.
- oai:arXiv.org:2508.19366v4
+ Generalized Kernelized Bandits: A Novel Self-Normalized Bernstein-Like Dimension-Free Inequality and Regret Bounds
+ https://arxiv.org/abs/2508.01681
+ arXiv:2508.01681v2 Announce Type: replace
+Abstract: We study the regret minimization problem in the novel setting of generalized kernelized bandits (GKBs), where we optimize an unknown function $f^*$ belonging to a reproducing kernel Hilbert space (RKHS) having access to samples generated by an exponential family (EF) reward model whose mean is a non-linear function $\mu(f^*)$. This setting extends both kernelized bandits (KBs) and generalized linear bandits (GLBs), providing a unified view of both settings. We propose an optimistic regret minimization algorithm, GKB-UCB, and we explain why existing self-normalized concentration inequalities used for KBs and GLBs do not allow to provide tight regret guarantees. For this reason, we devise a novel self-normalized Bernstein-like dimension-free inequality that applies to a Hilbert space of functions with bounded norm, representing a contribution of independent interest. Based on it, we analyze GKB-UCB, deriving a regret bound of order $\widetilde{O}( \gamma_T \sqrt{T/\kappa_*})$, being $T$ the learning horizon, ${\gamma}_T$ the maximal information gain, and $\kappa_*$ a term characterizing the magnitude of the expected reward non-linearity. Our result is tight in its dependence on $T$, $\gamma_T$, and $\kappa_*$ for both KBs and GLBs. Finally, we present a tractable version GKB-UCB, Trac-GKB-UCB, which attains similar regret guarantees, and we discuss its time and space complexity.
+ oai:arXiv.org:2508.01681v2cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Supratik Sarkar, Swagatam Das
-
-
- A Continuous Energy Ising Machine Leveraging Difference-of-Convex Programming
- https://arxiv.org/abs/2509.01928
- arXiv:2509.01928v2 Announce Type: replace
-Abstract: Many combinatorial optimization problems can be reformulated as finding the ground state of the Ising model. Existing Ising solvers are mostly inspired by simulated annealing. Although annealing techniques offer scalability, they lack convergence guarantees and are sensitive to the cooling schedule. We propose solving the Ising problem by relaxing the binary spins to continuous variables and introducing an attraction potential that steers the solution toward binary spin configurations. A key property of this potential is that its combination with the Ising energy produces a Hamiltonian that can be written as a difference of convex polynomials. This enables us to design efficient iterative algorithms that require a single matrix-vector multiplication per iteration and provide convergence guarantees. We implement our Ising solver on a wide range of GPU platforms, from edge devices to high-performance computing clusters, and demonstrate that it consistently outperforms existing solvers across problem sizes ranging from small ($10^3$ spins) to ultra-large ($10^8$ spins).
- oai:arXiv.org:2509.01928v2
- cs.DC
- math-ph
- math.MP
- math.OC
- quant-ph
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Debraj Banerjee, Santanu Mahapatra, Kunal Narayan Chaudhury
+ Alberto Maria Metelli, Simone Drago, Marco Mussi
- Performance analysis of common browser extensions for cryptojacking detection
- https://arxiv.org/abs/2509.02083
- arXiv:2509.02083v2 Announce Type: replace
-Abstract: This paper considers five extensions for Chromium-based browsers in order to determine how effective can browser-based defenses against cryptojacking available to regular users be. We've examined most popular extensions - MinerBlock, AdGuard AdBlocker, Easy Redirect && Prevent Cryptojacking, CoinEater and Miners Shield, which claim to be designed specifically to identify and stop illegal cryptocurrency mining. An empirically confirmed dataset of 373 distinct cryptojacking-infected websites which was assembled during multi-stage procedure, was used to test those extensions. The results showed that all plugins in question had significant performance limits. Easy Redirect and Miners Shield only blocked 6 and 5 websites respectively, while MinerBlock had the greatest detection rate at only 27% (101/373 sites blocked). Most concerningly, despite promises of cryptojacking prevention, AdGuard (which has over 13 million users) and CoinEater were unable to identify any of the compromised websites. These results demonstrate serious flaws in cryptojacking detection products targeted for regular users, since even the best-performing specimen failed to detect 73% of attacks. The obvious difference between advertised capabilities and real performance highlights the urgent need for either accessibility improvements for laboratory-grade detection technologies that show 90%+ efficiency in controlled environment or fundamental upgrades to current commonly used extensions.
- oai:arXiv.org:2509.02083v2
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ RegMean++: Enhancing Effectiveness and Generalization of Regression Mean for Model Merging
+ https://arxiv.org/abs/2508.03121
+ arXiv:2508.03121v2 Announce Type: replace
+Abstract: Regression Mean (RegMean), an approach that formulates model merging as a linear regression problem, aims to find the optimal weights for each linear layer in the merge model by minimizing the discrepancy in predictions between the merge and candidate models. RegMean provides a precise closed-form solution for the merging problem; therefore, it offers explainability and computational efficiency. However, RegMean merges each linear layer independently, overlooking how the features and information in the earlier layers propagate through the layers and influence the final prediction in the merge model. In this paper, we introduce RegMean++, a simple yet effective alternative to RegMean, that explicitly incorporates both intra- and cross-layer dependencies between merge models' layers into RegMean's objective. By accounting for these dependencies, RegMean++ better captures the behaviors of the merge model. Extensive experiments demonstrate that RegMean++ consistently outperforms RegMean across diverse settings, including in-domain (ID) and out-of-domain (OOD) generalization, sequential merging, large-scale tasks, and robustness under several types of distribution shifts. Furthermore, RegMean++ achieves competitive or state-of-the-art performance compared to various recent advanced model merging methods. Our code is available at https://github.com/nthehai01/RegMean-plusplus.
+ oai:arXiv.org:2508.03121v2
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Dmitry Tanana
+ The-Hai Nguyen, Dang Huu-Tien, Takeshi Suzuki, Le-Minh Nguyen
- ModalSurv: Investigating opportunities and limitations of multimodal deep survival learning in prostate and bladder cancer
- https://arxiv.org/abs/2509.05037
- arXiv:2509.05037v4 Announce Type: replace
-Abstract: Accurate survival prediction is essential for personalised cancer treatment. We propose ModalSurv, a multimodal deep survival framework integrating clinical, MRI, histopathology, and RNA-sequencing data via modality-specific projections and cross-attention fusion. On the CHIMERA Grand Challenge datasets, ModalSurv achieved a C-index of 0.7402 (1st) for prostate and 0.5740 (5th) for bladder cancer. Notably, clinical features alone outperformed multimodal models on external tests, highlighting challenges of limited multimodal alignment and potential overfitting. Local validation showed multimodal gains but limited generalisation. ModalSurv provides a systematic evaluation of multimodal survival modelling, underscoring both its promise and current limitations for scalable, generalisable cancer prognosis.
- oai:arXiv.org:2509.05037v4
+ Forest vs Tree: The $(N, K)$ Trade-off in Reproducible ML Evaluation
+ https://arxiv.org/abs/2508.03663
+ arXiv:2508.03663v2 Announce Type: replace
+Abstract: Reproducibility is a cornerstone of scientific validation and of the authority it confers on its results. Reproducibility in machine learning evaluations leads to greater trust, confidence, and value. However, the ground truth responses used in machine learning often necessarily come from humans, among whom disagreement is prevalent, and surprisingly little research has studied the impact of effectively ignoring disagreement in these responses, as is typically the case. One reason for the lack of research is that budgets for collecting human-annotated evaluation data are limited, and obtaining more samples from multiple raters for each example greatly increases the per-item annotation costs. We investigate the trade-off between the number of items ($N$) and the number of responses per item ($K$) needed for reliable machine learning evaluation. We analyze a diverse collection of categorical datasets for which multiple annotations per item exist, and simulated distributions fit to these datasets, to determine the optimal $(N, K)$ configuration, given a fixed budget ($N \times K$), for collecting evaluation data and reliably comparing the performance of machine learning models. Our findings show, first, that accounting for human disagreement may come with $N \times K$ at no more than 1000 (and often much lower) for every dataset tested on at least one metric. Moreover, this minimal $N \times K$ almost always occurred for $K > 10$. Furthermore, the nature of the tradeoff between $K$ and $N$, or if one even existed, depends on the evaluation metric, with metrics that are more sensitive to the full distribution of responses performing better at higher levels of $K$. Our methods can be used to help ML practitioners get more effective test data by finding the optimal metrics and number of items and annotations per item to collect to get the most reliability for their budget.
+ oai:arXiv.org:2508.03663v2cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by-nc-sa/4.0/
- Noorul Wahab, Ethar Alzaid, Jiaqi Lv, Fayyaz Minhas, Adam Shephard, Shan E Ahmed Raza
+ Deepak Pandita, Flip Korn, Chris Welty, Christopher M. Homan
- Hybrid A* Path Planning with Multi-Modal Motion Extension for Four-Wheel Steering Mobile Robots
- https://arxiv.org/abs/2509.06115
- arXiv:2509.06115v2 Announce Type: replace
-Abstract: Four-wheel independent steering (4WIS) systems provide mobile robots with a rich set of motion modes, such as Ackermann steering, lateral steering, and parallel movement, offering superior maneuverability in constrained environments. However, existing path planning methods generally assume a single kinematic model and thus fail to fully exploit the multi-modal capabilities of 4WIS platforms. To address this limitation, we propose an extended Hybrid A* framework that operates in a four-dimensional state space incorporating both spatial states and motion modes. Within this framework, we design multi-modal Reeds-Shepp curves tailored to the distinct kinematic constraints of each motion mode, develop an enhanced heuristic function that accounts for mode-switching costs, and introduce a terminal connection strategy with intelligent mode selection to ensure smooth transitions between different steering patterns. The proposed planner enables seamless integration of multiple motion modalities within a single path, significantly improving flexibility and adaptability in complex environments. Results demonstrate significantly improved planning performance for 4WIS robots in complex environments.
- oai:arXiv.org:2509.06115v2
- cs.RO
- cs.SY
- eess.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ GTPO: Stabilizing Group Relative Policy Optimization via Gradient and Entropy Control
+ https://arxiv.org/abs/2508.03772
+ arXiv:2508.03772v5 Announce Type: replace
+Abstract: Group Relative Policy Optimization (GRPO) is a promising policy-based approach for Large Language Model alignment, yet its performance is often limited by training instability and suboptimal convergence. In this paper, we identify and analyze two main GRPO issues: (i) the token-level penalization, where valuable tokens shared across different responses receive contradictory feedback signals, leading to conflicting gradient updates that can reduce their likelihood; and (ii) the policy collapse, where negatively rewarded completions may penalize confident responses and shift model decisions toward unlikely tokens, destabilizing training process. To address these issues we introduce GTPO (Group-relative Trajectory-based Policy Optimization), which prevents conflicting gradients on valuable tokens by skipping negative updates while amplifying positive ones and filters out completions whose entropy exceeds a provable threshold, to prevent policy collapse. Unlike GRPO, GTPO does not rely on KL-divergence regularization, eliminating the need for a reference model during training, while still ensuring greater training stability and improved performance, as validated through multiple experiments on GSM8K, MATH, AIME 2024, AIME 2025 and AMC 2023.
+ oai:arXiv.org:2508.03772v5
+ cs.LG
+ cs.AI
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Runjiao Bao, Lin Zhang, Tianwei Niu, Haoyu Yuan, Shoukun Wang
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Marco Simoni, Aleksandar Fontana, Giulio Rossolini, Andrea Saracino, Paolo Mori
- Text-Trained LLMs Can Zero-Shot Extrapolate PDE Dynamics, Revealing a Three-Stage In-Context Learning Mechanism
- https://arxiv.org/abs/2509.06322
- arXiv:2509.06322v2 Announce Type: replace
-Abstract: Large language models (LLMs) have demonstrated emergent in-context learning (ICL) capabilities across a range of tasks, including zero-shot time-series forecasting. We show that text-trained foundation models can accurately extrapolate spatiotemporal dynamics from discretized partial differential equation (PDE) solutions without fine-tuning or natural language prompting. Predictive accuracy improves with longer temporal contexts but degrades at finer spatial discretizations. In multi-step rollouts, where the model recursively predicts future spatial states over multiple time steps, errors grow algebraically with the time horizon, reminiscent of global error accumulation in classical finite-difference solvers. We interpret these trends as in-context neural scaling laws, where prediction quality varies predictably with both context length and output length. To better understand how LLMs are able to internally process PDE solutions so as to accurately roll them out, we analyze token-level output distributions and uncover a consistent three-stage ICL progression: beginning with syntactic pattern imitation, transitioning through an exploratory high-entropy phase, and culminating in confident, numerically grounded predictions.
- oai:arXiv.org:2509.06322v2
+ A Markov Decision Process Framework for Early Maneuver Decisions in Satellite Collision Avoidance
+ https://arxiv.org/abs/2508.05876
+ arXiv:2508.05876v2 Announce Type: replace
+Abstract: We develop a Markov decision process (MDP) framework to autonomously make guidance decisions for satellite collision avoidance maneuver (CAM) and a reinforcement learning policy gradient (RL-PG) algorithm to enable direct optimization of guidance policy using historic CAM data. In addition to maintaining acceptable collision risks, this approach seeks to minimize the average propellant consumption of CAMs by making early maneuver decisions. We model CAM as a continuous state, discrete action and finite horizon MDP, where the critical decision is determining when to initiate the maneuver. The MDP models decision rewards using analytical models of collision risk, propellant consumption, and transit orbit geometry. By deciding to maneuver earlier than conventional methods, the Markov policy effectively favors CAMs that achieve comparable rates of collision risk reduction while consuming less propellant. Using historical data of tracked conjunction events, we verify this framework and conduct an extensive parameter-sensitivity study. When evaluated on synthetic conjunction events, the trained policy consumes significantly less propellant overall and per maneuver in comparison to a conventional cut-off policy that initiates maneuvers 24 hours before the time of closest approach (TCA). On historical conjunction events, the trained policy consumes more propellant overall but consumes less propellant per maneuver. For both historical and synthetic conjunction events, the trained policy is slightly more conservative in identifying conjunctions events that warrant CAMs in comparison to cutoff policies.
+ oai:arXiv.org:2508.05876v2cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ astro-ph.EP
+ astro-ph.IM
+ cs.ET
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jiajun Bao, Nicolas Boull\'e, Toni J. B. Liu, Rapha\"el Sarfati, Christopher J. Earls
+ Francesca Ferrara, Lander W. Schillinger Arana, Florian D\"orfler, Sarah H. Q. Li
- Imitative Membership Inference Attack
- https://arxiv.org/abs/2509.06796
- arXiv:2509.06796v2 Announce Type: replace
-Abstract: A Membership Inference Attack (MIA) assesses how much a target machine learning model reveals about its training data by determining whether specific query instances were part of the training set. State-of-the-art MIAs rely on training hundreds of shadow models that are independent of the target model, leading to significant computational overhead. In this paper, we introduce Imitative Membership Inference Attack (IMIA), which employs a novel imitative training technique to strategically construct a small number of target-informed imitative models that closely replicate the target model's behavior for inference. Extensive experimental results demonstrate that IMIA substantially outperforms existing MIAs in various attack settings while only requiring less than 5% of the computational cost of state-of-the-art approaches.
- oai:arXiv.org:2509.06796v2
- cs.CR
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Dirty Bits in Low-Earth Orbit: The Carbon Footprint of Launching Computers
+ https://arxiv.org/abs/2508.06250
+ arXiv:2508.06250v2 Announce Type: replace
+Abstract: Low-Earth Orbit (LEO) satellites are increasingly proposed for communication and in-orbit computing, achieving low-latency global services. However, their sustainability remains largely unexamined. This paper investigates the carbon footprint of computing in space, focusing on lifecycle emissions from launch over orbital operation to re-entry. We present ESpaS, a lightweight tool for estimating carbon intensities across CPU usage, memory, and networking in orbital vs. terrestrial settings. Three worked examples compare (i) launch technologies (state-of-the-art rocket vs. potential next generation) and (ii) operational emissions of data center workloads in orbit and on the ground. Results show that, even under optimistic assumptions, in-orbit systems incur significantly higher carbon costs - up to an order of magnitude more than terrestrial equivalents - primarily due to embodied emissions from launch and re-entry. Our findings advocate for carbon-aware design principles and regulatory oversight in developing sustainable digital infrastructure in orbit.
+ oai:arXiv.org:2508.06250v2
+ cs.CY
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Yuntao Du, Yuetian Chen, Hanshen Xiao, Bruno Ribeiro, Ninghui Li
+ 10.1145/3757892.3757896
+ ACM SIGENERGY Energy Inform. Rev., Volume 5 Issue 2, July 2025
+ Robin Ohs, Gregory F. Stock, Andreas Schmidt, Juan A. Fraire, Holger Hermanns
- HeLoFusion: An Efficient and Scalable Encoder for Modeling Heterogeneous and Multi-Scale Interactions in Trajectory Prediction
- https://arxiv.org/abs/2509.11719
- arXiv:2509.11719v2 Announce Type: replace
-Abstract: Multi-agent trajectory prediction in autonomous driving requires a comprehensive understanding of complex social dynamics. Existing methods, however, often struggle to capture the full richness of these dynamics, particularly the co-existence of multi-scale interactions and the diverse behaviors of heterogeneous agents. To address these challenges, this paper introduces HeLoFusion, an efficient and scalable encoder for modeling heterogeneous and multi-scale agent interactions. Instead of relying on global context, HeLoFusion constructs local, multi-scale graphs centered on each agent, allowing it to effectively model both direct pairwise dependencies and complex group-wise interactions (\textit{e.g.}, platooning vehicles or pedestrian crowds). Furthermore, HeLoFusion tackles the critical challenge of agent heterogeneity through an aggregation-decomposition message-passing scheme and type-specific feature networks, enabling it to learn nuanced, type-dependent interaction patterns. This locality-focused approach enables a principled representation of multi-level social context, yielding powerful and expressive agent embeddings. On the challenging Waymo Open Motion Dataset, HeLoFusion achieves state-of-the-art performance, setting new benchmarks for key metrics including Soft mAP and minADE. Our work demonstrates that a locality-grounded architecture, which explicitly models multi-scale and heterogeneous interactions, is a highly effective strategy for advancing motion forecasting.
- oai:arXiv.org:2509.11719v2
+ Can LLMs Detect Their Confabulations? Estimating Reliability in Uncertainty-Aware Language Models
+ https://arxiv.org/abs/2508.08139
+ arXiv:2508.08139v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) are prone to generating fluent but incorrect content, known as confabulation, which poses increasing risks in multi-turn or agentic applications where outputs may be reused as context. In this work, we investigate how in-context information influences model behavior and whether LLMs can identify their unreliable responses. We propose a reliability estimation that leverages token-level uncertainty to guide the aggregation of internal model representations. Specifically, we compute aleatoric and epistemic uncertainty from output logits to identify salient tokens and aggregate their hidden states into compact representations for response-level reliability prediction. Through controlled experiments on open QA benchmarks, we find that correct in-context information improves both answer accuracy and model confidence, while misleading context often induces confidently incorrect responses, revealing a misalignment between uncertainty and correctness. Our probing-based method captures these shifts in model behavior and improves the detection of unreliable outputs across multiple open-source LLMs. These results underscore the limitations of direct uncertainty signals and highlight the potential of uncertainty-guided probing for reliability-aware generation.
+ oai:arXiv.org:2508.08139v2
+ cs.CLcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Bingqing Wei, Lianmin Chen, Zhongyu Xia, Yongtao Wang
-
-
- SimpleFold: Folding Proteins is Simpler than You Think
- https://arxiv.org/abs/2509.18480
- arXiv:2509.18480v4 Announce Type: replace
-Abstract: Protein folding models have achieved groundbreaking results typically via a combination of integrating domain knowledge into the architectural blocks and training pipelines. Nonetheless, given the success of generative models across different but related problems, it is natural to question whether these architectural designs are a necessary condition to build performant models. In this paper, we introduce SimpleFold, the first flow-matching based protein folding model that solely uses general purpose transformer blocks. Protein folding models typically employ computationally expensive modules involving triangular updates, explicit pair representations or multiple training objectives curated for this specific domain. Instead, SimpleFold employs standard transformer blocks with adaptive layers and is trained via a generative flow-matching objective with an additional structural term. We scale SimpleFold to 3B parameters and train it on approximately 9M distilled protein structures together with experimental PDB data. On standard folding benchmarks, SimpleFold-3B achieves competitive performance compared to state-of-the-art baselines, in addition SimpleFold demonstrates strong performance in ensemble prediction which is typically difficult for models trained via deterministic reconstruction objectives. Due to its general-purpose architecture, SimpleFold shows efficiency in deployment and inference on consumer-level hardware. SimpleFold challenges the reliance on complex domain-specific architectures designs in protein folding, opening up an alternative design space for future progress.
- oai:arXiv.org:2509.18480v4
- cs.LG
- q-bio.QM
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Yuyang Wang, Jiarui Lu, Navdeep Jaitly, Josh Susskind, Miguel Angel Bautista
+ Tianyi Zhou, Johanne Medina, Sanjay Chawla
- A Unified Noise-Curvature View of Loss of Trainability
- https://arxiv.org/abs/2509.19698
- arXiv:2509.19698v3 Announce Type: replace
-Abstract: Loss of trainability refers to a phenomenon in continual learning where parameter updates no longer make progress on the optimization objective, so accuracy stalls or degrades as the learning problem changes over time. In this paper, we analyze loss of trainability through an optimization lens and find that the phenomenon is not reliably predicted by existing individual indicators such as Hessian rank, sharpness level, weight or gradient norms, gradient-to-parameter ratios, and unit-sign entropy. Motivated by our analysis, we introduce two complementary indicators: a batch-size-aware gradient-noise bound and a curvature volatility-controlled bound. We then combine these two indicators into a per-layer adaptive noise threshold on the effective step-size that anticipates trainability behavior. Using this insight, we propose a step-size scheduler that keeps each layer's effective parameter update below this bound, thereby avoiding loss of trainability. We demonstrate that our scheduler can improve the accuracy maintained by previously proposed approaches, such as concatenated ReLU (CReLU), Wasserstein regularizer, and L2 weight decay. Surprisingly, our scheduler produces adaptive step-size trajectories that, without tuning, mirror the manually engineered step-size decay schedules.
- oai:arXiv.org:2509.19698v3
- cs.LG
+ MedReasoner: Reinforcement Learning Drives Reasoning Grounding from Clinical Thought to Pixel-Level Precision
+ https://arxiv.org/abs/2508.08177
+ arXiv:2508.08177v2 Announce Type: replace
+Abstract: Accurately grounding regions of interest (ROIs) is critical for diagnosis and treatment planning in medical imaging. While multimodal large language models (MLLMs) combine visual perception with natural language, current medical-grounding pipelines still rely on supervised fine-tuning with explicit spatial hints, making them ill-equipped to handle the implicit queries common in clinical practice. This work makes three core contributions. We first define Unified Medical Reasoning Grounding (UMRG), a novel vision-language task that demands clinical reasoning and pixel-level grounding. Second, we release U-MRG-14K, a dataset of 14K samples featuring pixel-level masks alongside implicit clinical queries and reasoning traces, spanning 10 modalities, 15 super-categories, and 108 specific categories. Finally, we introduce MedReasoner, a modular framework that distinctly separates reasoning from segmentation: an MLLM reasoner is optimized with reinforcement learning, while a frozen segmentation expert converts spatial prompts into masks, with alignment achieved through format and accuracy rewards. MedReasoner achieves state-of-the-art performance on U-MRG-14K and demonstrates strong generalization to unseen clinical queries, underscoring the significant promise of reinforcement learning for interpretable medical grounding.
+ oai:arXiv.org:2508.08177v2
+ cs.CVcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Gunbir Singh Baveja, Alex Lewandowski, Mark Schmidt
+ Zhonghao Yan, Muxi Diao, Yuxuan Yang, Ruoyan Jing, Jiayuan Xu, Kaizhou Zhang, Lele Yang, Yanxi Liu, Kongming Liang, Zhanyu Ma
- Benchmarking Web API Integration Code Generation
- https://arxiv.org/abs/2509.20172
- arXiv:2509.20172v5 Announce Type: replace
-Abstract: API integration is a cornerstone of our digital infrastructure, enabling software systems to connect and interact. However, as shown by many studies, writing or generating correct code to invoke APIs, particularly web APIs, is challenging. Although large language models (LLMs) have become popular in software development, their effectiveness in automating the generation of web API integration code remains unexplored. In order to address this, we present WAPIIBench, a dataset and evaluation pipeline designed to assess the ability of LLMs to generate web API invocation code. Our experiments with several open-source LLMs reveal that generating API invocations poses a significant challenge, resulting in hallucinated endpoints, incorrect argument usage, and other errors. None of the evaluated open-source models was able to solve more than 40% of the tasks.
- oai:arXiv.org:2509.20172v5
- cs.SE
+ SyGra: A Unified Graph-Based Framework for Scalable Generation, Quality Tagging, and Management of Synthetic Data
+ https://arxiv.org/abs/2508.15432
+ arXiv:2508.15432v3 Announce Type: replace
+Abstract: The advancement of large language models (LLMs) is critically dependent on the availability of high-quality datasets for Supervised Fine-Tuning (SFT), alignment tasks like Direct Preference Optimization (DPO), etc. In this work, we present a comprehensive synthetic data generation framework that facilitates scalable, configurable, and high-fidelity generation of synthetic data tailored for these training paradigms. Our approach employs a modular and configuration-based pipeline capable of modeling complex dialogue flows with minimal manual intervention. This framework uses a dual-stage quality tagging mechanism, combining heuristic rules and LLM-based evaluations, to automatically filter and score data extracted from OASST-formatted conversations, ensuring the curation of high-quality dialogue samples. The resulting datasets are structured under a flexible schema supporting both SFT and DPO use cases, enabling seamless integration into diverse training workflows. Together, these innovations offer a robust solution for generating and managing synthetic conversational data at scale, significantly reducing the overhead of data preparation in LLM training pipelines.
+ oai:arXiv.org:2508.15432v3
+ cs.AI
+ cs.CLcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Daniel Maninger, Leon Chemnitz, Amir Molzam Sharifloo, Jannis Brugger, Mira Mezini
+ Bidyapati Pradhan, Surajit Dasgupta, Amit Kumar Saha, Omkar Anustoop, Sriram Puttagunta, Vipul Mittal, Gopal Sarda
- A Unified Formal Theory on the Logical Limits of Symbol Grounding
- https://arxiv.org/abs/2509.20409
- arXiv:2509.20409v4 Announce Type: replace
-Abstract: This paper synthesizes a series of formal proofs to construct a unified theory on the logical limits of the Symbol Grounding Problem. We distinguish between internal meaning (sense), which formal systems can possess via axioms, and external grounding (reference), which is a necessary condition for connecting symbols to the world. We demonstrate through a four-stage argument that meaningful grounding within a formal system must arise from a process that is external, dynamic, and non-fixed algorithmic. First, we show that for a purely symbolic system, the impossibility of grounding is a direct consequence of its definition. Second, we extend this limitation to systems with any finite, static set of pre-established meanings (Semantic Axioms). By formally modeling the computationalist hypothesis-which equates grounding with internal derivation-we prove via G\"odelian arguments that such systems cannot consistently and completely define a "groundability predicate" for all truths. Third, we demonstrate that the "grounding act" for emergent meanings cannot be inferred from internal rules but requires an axiomatic, meta-level update. Drawing on Turing's concept of Oracle Machines and Piccinini's analysis of the mathematical objection, we identify this update as physical transduction. Finally, we prove that this process cannot be simulated by a fixed judgment algorithm, validating the logical necessity of embodied interaction.
- oai:arXiv.org:2509.20409v4
- cs.LO
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Unveiling the Latent Directions of Reflection in Large Language Models
+ https://arxiv.org/abs/2508.16989
+ arXiv:2508.16989v2 Announce Type: replace
+Abstract: Reflection, the ability of large language models (LLMs) to evaluate and revise their own reasoning, has been widely used to improve performance on complex reasoning tasks. Yet, most prior works emphasizes designing reflective prompting strategies or reinforcement learning objectives, leaving the inner mechanisms of reflection underexplored. In this paper, we investigate reflection through the lens of latent directions in model activations. We propose a methodology based on activation steering to characterize how instructions with different reflective intentions: no reflection, intrinsic reflection, and triggered reflection. By constructing steering vectors between these reflection levels, we demonstrate that (1) new reflection-inducing instructions can be systematically identified, (2) reflective behavior can be directly enhanced or suppressed through activation interventions, and (3) suppressing reflection is considerably easier than stimulating it. Experiments on GSM8k-adv and Cruxeval-o-adv with Qwen2.5-3B and Gemma3-4B-IT reveal clear stratification across reflection levels, and steering interventions confirm the controllability of reflection. Our findings highlight both opportunities (e.g., reflection-enhancing defenses) and risks (e.g., adversarial inhibition of reflection in jailbreak attacks). This work opens a path toward mechanistic understanding of reflective reasoning in LLMs.
+ oai:arXiv.org:2508.16989v2
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Zhangchi Liu
+ Fu-Chieh Chang, Yu-Ting Lee, Pei-Yuan Wu
- Seedream 4.0: Toward Next-generation Multimodal Image Generation
- https://arxiv.org/abs/2509.20427
- arXiv:2509.20427v3 Announce Type: replace
-Abstract: We introduce Seedream 4.0, an efficient and high-performance multimodal image generation system that unifies text-to-image (T2I) synthesis, image editing, and multi-image composition within a single framework. We develop a highly efficient diffusion transformer with a powerful VAE which also can reduce the number of image tokens considerably. This allows for efficient training of our model, and enables it to fast generate native high-resolution images (e.g., 1K-4K). Seedream 4.0 is pretrained on billions of text-image pairs spanning diverse taxonomies and knowledge-centric concepts. Comprehensive data collection across hundreds of vertical scenarios, coupled with optimized strategies, ensures stable and large-scale training, with strong generalization. By incorporating a carefully fine-tuned VLM model, we perform multi-modal post-training for training both T2I and image editing tasks jointly. For inference acceleration, we integrate adversarial distillation, distribution matching, and quantization, as well as speculative decoding. It achieves an inference time of up to 1.8 seconds for generating a 2K image (without a LLM/VLM as PE model). Comprehensive evaluations reveal that Seedream 4.0 can achieve state-of-the-art results on both T2I and multimodal image editing. In particular, it demonstrates exceptional multimodal capabilities in complex tasks, including precise image editing and in-context reasoning, and also allows for multi-image reference, and can generate multiple output images. This extends traditional T2I systems into an more interactive and multidimensional creative tool, pushing the boundary of generative AI for both creativity and professional applications. We further scale our model and data as Seedream 4.5. Seedream 4.0 and Seedream 4.5 are accessible on Volcano Engine https://www.volcengine.com/experience/ark?launch=seedream.
- oai:arXiv.org:2509.20427v3
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Scaling Neuro-symbolic Problem Solving: Solver-Free Learning of Constraints and Objectives
+ https://arxiv.org/abs/2508.20978
+ arXiv:2508.20978v3 Announce Type: replace
+Abstract: In the ongoing quest for hybridizing discrete reasoning with neural nets, there is an increasing interest in neural architectures that can learn how to solve discrete reasoning or optimization problems from natural inputs, a task that Large Language Models seem to struggle with.
+ Objectives: We introduce a differentiable neuro-symbolic architecture and a loss function dedicated to learning how to solve NP-hard reasoning problems.
+ Methods: Our new probabilistic loss allows for learning both the constraints and the objective, thus delivering a complete model that can be scrutinized and completed with side constraints. By pushing the combinatorial solver out of the training loop, our architecture also offers scalable training while exact inference gives access to maximum accuracy.
+ Results: We empirically show that it can efficiently learn how to solve NP-hard reasoning problems from natural inputs. On three variants of the Sudoku benchmark -- symbolic, visual, and many-solution --, our approach requires a fraction of training time of other hybrid methods. On a visual Min-Cut/Max-cut task, it optimizes the regret better than a Decision-Focused-Learning regret-dedicated loss. Finally, it efficiently learns the energy optimization formulation of the large real-world problem of designing proteins.
+ oai:arXiv.org:2508.20978v3
+ cs.AI
+ cs.LO
+ cs.SC
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Team Seedream, :, Yunpeng Chen, Yu Gao, Lixue Gong, Meng Guo, Qiushan Guo, Zhiyao Guo, Xiaoxia Hou, Weilin Huang, Yixuan Huang, Xiaowen Jian, Huafeng Kuang, Zhichao Lai, Fanshi Li, Liang Li, Xiaochen Lian, Chao Liao, Liyang Liu, Wei Liu, Yanzuo Lu, Zhengxiong Luo, Tongtong Ou, Guang Shi, Yichun Shi, Shiqi Sun, Yu Tian, Zhi Tian, Peng Wang, Rui Wang, Xun Wang, Ye Wang, Guofeng Wu, Jie Wu, Wenxu Wu, Yonghui Wu, Xin Xia, Xuefeng Xiao, Shuang Xu, Xin Yan, Ceyuan Yang, Jianchao Yang, Zhonghua Zhai, Chenlin Zhang, Heng Zhang, Qi Zhang, Xinyu Zhang, Yuwei Zhang, Shijia Zhao, Wenliang Zhao, Wenjia Zhu
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Marianne Defresne, Romain Gambardella, Sophie Barbe, Thomas Schiex
- New Synthetic Goldmine: Hand Joint Angle-Driven EMG Data Generation Framework for Micro-Gesture Recognition
- https://arxiv.org/abs/2509.23359
- arXiv:2509.23359v4 Announce Type: replace
-Abstract: Electromyography (EMG)-based gesture recognition has emerged as a promising approach for human-computer interaction. However, its performance is often limited by the scarcity of labeled EMG data, significant cross-user variability, and poor generalization to unseen gestures. To address these challenges, we propose SeqEMG-GAN, a conditional, sequence-driven generative framework that synthesizes high-fidelity EMG signals from hand joint angle sequences. Our method introduces a context-aware architecture composed of an angle encoder, a dual-layer context encoder featuring the novel Ang2Gist unit, a deep convolutional EMG generator, and a discriminator, all jointly optimized via adversarial learning. By conditioning on joint kinematic trajectories, SeqEMG-GAN is capable of generating semantically consistent EMG sequences, even for previously unseen gestures, thereby enhancing data diversity and physiological plausibility. Experimental results show that classifiers trained solely on synthetic data experience only a slight accuracy drop (from 57.77\% to 55.71\%). In contrast, training with a combination of real and synthetic data significantly improves accuracy to 60.53\%, outperforming real-only training by 2.76\%. These findings demonstrate the effectiveness of our framework,also achieves the state-of-art performance in augmenting EMG datasets and enhancing gesture recognition performance for applications such as neural robotic hand control, AI/AR glasses, and gesture-based virtual gaming systems.
- oai:arXiv.org:2509.23359v4
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Adapting to Change: A Comparison of Continual and Transfer Learning for Modeling Building Thermal Dynamics under Concept Drifts
+ https://arxiv.org/abs/2508.21615
+ arXiv:2508.21615v2 Announce Type: replace
+Abstract: Transfer Learning (TL) is currently the most effective approach for modeling building thermal dynamics when only limited data are available. TL uses a pretrained model that is fine-tuned to a specific target building. However, it remains unclear how to proceed after initial fine-tuning, as more operational measurement data are collected over time. This challenge becomes even more complex when the dynamics of the building change, for example, after a retrofit or a change in occupancy. In Machine Learning literature, Continual Learning (CL) methods are used to update models of changing systems. TL approaches can also address this challenge by reusing the pretrained model at each update step and fine-tuning it with new measurement data. A comprehensive study on how to incorporate new measurement data over time to improve prediction accuracy and address the challenges of concept drifts (changes in dynamics) for building thermal dynamics is still missing.
+ Therefore, this study compares several CL and TL strategies, as well as a model trained from scratch, for thermal dynamics modeling during building operation. The methods are evaluated using 5--7 years of simulated data representative of single-family houses in Central Europe, including scenarios with concept drifts from retrofits and changes in occupancy. We propose a CL strategy (Seasonal Memory Learning) that provides greater accuracy improvements than existing CL and TL methods, while maintaining low computational effort. SML outperformed the benchmark of initial fine-tuning by 28.1\% without concept drifts and 34.9\% with concept drifts.
+ oai:arXiv.org:2508.21615v2
+ eess.SY
+ cs.LG
+ cs.SY
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Nana Wang, Gen Li, Pengfei Ren, Hao Su, Suli Wang
+ Fabian Raisch, Max Langtry, Felix Koch, Ruchi Choudhary, Christoph Goebel, Benjamin Tischler
- BridgeDrive: Diffusion Bridge Policy for Closed-Loop Trajectory Planning in Autonomous Driving
- https://arxiv.org/abs/2509.23589
- arXiv:2509.23589v2 Announce Type: replace
-Abstract: Diffusion-based planners have shown great promise for autonomous driving due to their ability to capture multi-modal driving behaviors. However, guiding these models effectively in reactive, closed-loop environments remains a significant challenge. Simple conditioning often fails to provide sufficient guidance in complex and dynamic driving scenarios. Recent work attempts to use typical expert driving behaviors (i.e., anchors) to guide diffusion models but relies on a truncated schedule, which introduces theoretical inconsistencies and can compromise performance. To address this, we introduce BridgeDrive, a novel anchor-guided diffusion bridge policy for closed-loop trajectory planning. Our approach provides a principled diffusion framework that effectively translates anchors into fine-grained trajectory plans, appropriately responding to varying traffic conditions. Our planner is compatible with efficient ODE solvers, a critical factor for real-time autonomous driving deployment. We achieve state-of-the-art performance on the Bench2Drive benchmark, improving the success rate by 7.72% over prior arts.
- oai:arXiv.org:2509.23589v2
- cs.AI
- cs.CV
+ RoFt-Mol: Benchmarking Robust Fine-Tuning with Molecular Graph Foundation Models
+ https://arxiv.org/abs/2509.00614
+ arXiv:2509.00614v3 Announce Type: replace
+Abstract: In the era of foundation models, fine-tuning pre-trained models for specific downstream tasks has become crucial. This drives the need for robust fine-tuning methods to address challenges such as model overfitting and sparse labeling. Molecular graph foundation models (MGFMs) face unique difficulties that complicate fine-tuning. These models are limited by smaller pre-training datasets and more severe data scarcity for downstream tasks, both of which require enhanced model generalization. Moreover, MGFMs must accommodate diverse objectives, including both regression and classification tasks. To better understand and improve fine-tuning techniques under these conditions, we classify eight fine-tuning methods into three mechanisms: weight-based, representation-based, and partial fine-tuning. We benchmark these methods on downstream regression and classification tasks across supervised and self-supervised pre-trained models in diverse labeling settings. This extensive evaluation provides valuable insights and informs the design of a refined robust fine-tuning method, ROFT-MOL. This approach combines the strengths of simple post-hoc weight interpolation with more complex weight ensemble fine-tuning methods, delivering improved performance across both task types while maintaining the ease of use inherent in post-hoc weight interpolation.
+ oai:arXiv.org:2509.00614v3cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ physics.chem-ph
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Shu Liu, Wenlin Chen, Weihao Li, Zheng Wang, Lijin Yang, Jianing Huang, Yipin Zhang, Zhongzhan Huang, Ze Cheng, Hao Yang
+ http://creativecommons.org/licenses/by/4.0/
+ Shikun Liu, Deyu Zou, Nima Shoghi, Victor Fung, Kai Liu, Pan Li
- The Impossibility of Inverse Permutation Learning in Transformer Models
- https://arxiv.org/abs/2509.24125
- arXiv:2509.24125v3 Announce Type: replace
-Abstract: In this technical note, we study the problem of inverse permutation learning in decoder-only transformers. Given a permutation and a string to which that permutation has been applied, the model is tasked with producing the original (``canonical'') string. We argue that this task models a natural robustness property across a variety of reasoning tasks, including long-context retrieval, multiple choice QA and in-context learning. Our primary contribution is an impossibility result: we show that an arbitrary depth, decoder-only transformer cannot learn this task. This result concerns the expressive capacity of decoder-only transformer models and is agnostic to training dynamics or sample complexity. We give a pair of alternative constructions under which inverse permutation learning is feasible. The first of these highlights the fundamental role of the causal attention mask, and reveals a gap between the expressivity of encoder-decoder transformers and the more popular decoder-only architecture. The latter result is more surprising: we show that simply padding the input with ``scratch tokens" yields a construction under which inverse permutation learning is possible. We conjecture that this may suggest an alternative mechanism by which chain-of-thought prompting or, more generally, intermediate ``thinking'' tokens can enable reasoning in large language models, even when these tokens encode no meaningful semantic information (e.g., the results of intermediate computations).
- oai:arXiv.org:2509.24125v3
+ If generative AI is the answer, what is the question?
+ https://arxiv.org/abs/2509.06120
+ arXiv:2509.06120v2 Announce Type: replace
+Abstract: Beginning with text and images, generative AI has expanded to audio, video, computer code, and molecules. Yet, if generative AI is the answer, what is the question? We explore the foundations of generation as a distinct machine learning task with connections to prediction, compression, and decision-making. We survey five major generative model families: autoregressive models, variational autoencoders, normalizing flows, generative adversarial networks, and diffusion models. We then introduce a probabilistic framework that emphasizes the distinction between density estimation and generation. We review a game-theoretic framework with a two-player adversary-learner setup to study generation. We discuss post-training modifications that prepare generative models for deployment. We end by highlighting some important topics in socially responsible generation such as privacy, detection of AI-generated content, and copyright and IP. We adopt a task-first framing of generation, focusing on what generation is as a machine learning problem, rather than only on how models implement it.
+ oai:arXiv.org:2509.06120v2cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Rohan Alur, Chris Hays, Manish Raghavan, Devavrat Shah
+ Ambuj Tewari
- InfMasking: Unleashing Synergistic Information by Contrastive Multimodal Interactions
- https://arxiv.org/abs/2509.25270
- arXiv:2509.25270v3 Announce Type: replace
-Abstract: In multimodal representation learning, synergistic interactions between modalities not only provide complementary information but also create unique outcomes through specific interaction patterns that no single modality could achieve alone. Existing methods may struggle to effectively capture the full spectrum of synergistic information, leading to suboptimal performance in tasks where such interactions are critical. This is particularly problematic because synergistic information constitutes the fundamental value proposition of multimodal representation. To address this challenge, we introduce InfMasking, a contrastive synergistic information extraction method designed to enhance synergistic information through an Infinite Masking strategy. InfMasking stochastically occludes most features from each modality during fusion, preserving only partial information to create representations with varied synergistic patterns. Unmasked fused representations are then aligned with masked ones through mutual information maximization to encode comprehensive synergistic information. This infinite masking strategy enables capturing richer interactions by exposing the model to diverse partial modality combinations during training. As computing mutual information estimates with infinite masking is computationally prohibitive, we derive an InfMasking loss to approximate this calculation. Through controlled experiments, we demonstrate that InfMasking effectively enhances synergistic information between modalities. In evaluations on large-scale real-world datasets, InfMasking achieves state-of-the-art performance across seven benchmarks. Code is released at https://github.com/brightest66/InfMasking.
- oai:arXiv.org:2509.25270v3
- cs.LG
+ Risk-Bounded Multi-Agent Visual Navigation via Iterative Risk Allocation
+ https://arxiv.org/abs/2509.08157
+ arXiv:2509.08157v2 Announce Type: replace
+Abstract: Safe navigation is essential for autonomous systems operating in hazardous environments, especially when multiple agents must coordinate using only high-dimensional visual observations. While recent approaches successfully combine Goal-Conditioned RL (GCRL) for graph construction with Conflict-Based Search (CBS) for planning, they typically rely on static edge pruning to enforce safety. This binary strategy is overly conservative, precluding feasible missions that require traversing high-risk regions, even when the aggregate risk is acceptable. To address this, we introduce a framework for Risk-Bounded Multi-Agent Path Finding (\problem{}), where agents share a user-specified global risk budget ($\Delta$). Rather than permanently discarding edges, our framework dynamically distributes per-agent risk budgets ($\delta_i$) during search via an Iterative Risk Allocation (IRA) layer that integrates with a standard CBS planner. We investigate two distribution strategies: a greedy surplus-deficit scheme for rapid feasibility repair, and a market-inspired mechanism that treats risk as a priced resource to guide improved allocation. This yields a tunable trade-off wherein agents exploit available risk to secure shorter, more efficient paths, but revert to longer, safer detours under tighter budgets. Experiments in complex visual environments show that, our dynamic allocation framework achieves higher success rates than baselines and effectively leverages the available safety budget to reduce travel time.
+ oai:arXiv.org:2509.08157v2
+ cs.ROcs.AI
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.MA
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Liangjian Wen, Qun Dai, Jianzhuang Liu, Jiangtao Zheng, Yong Dai, Dongkai Wang, Zhao Kang, Jun Wang, Zenglin Xu, Jiang Duan
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Viraj Parimi, Brian C. Williams
- The Three Regimes of Offline-to-Online Reinforcement Learning
- https://arxiv.org/abs/2510.01460
- arXiv:2510.01460v2 Announce Type: replace
-Abstract: Offline-to-online reinforcement learning (RL) has emerged as a practical paradigm that leverages offline datasets for pretraining and online interactions for fine-tuning. However, its empirical behavior is highly inconsistent: design choices of online-fine tuning that work well in one setting can fail completely in another. We propose a stability--plasticity principle that can explain this inconsistency: we should preserve the knowledge of pretrained policy or offline dataset during online fine-tuning, whichever is better, while maintaining sufficient plasticity. This perspective identifies three regimes of online fine-tuning, each requiring distinct stability properties. We validate this framework through a large-scale empirical study, finding that the results strongly align with its predictions in 45 of 63 cases. This work provides a principled framework for guiding design choices in offline-to-online RL based on the relative performance of the offline dataset and the pretrained policy.
- oai:arXiv.org:2510.01460v2
+ Understanding Outer Optimizers in Local SGD: Learning Rates, Momentum, and Acceleration
+ https://arxiv.org/abs/2509.10439
+ arXiv:2509.10439v2 Announce Type: replace
+Abstract: Modern machine learning often requires training with large batch size, distributed data, and massively parallel compute hardware (like mobile and other edge devices or distributed data centers). Communication becomes a major bottleneck in such settings but methods like Local Stochastic Gradient Descent (Local SGD) show great promise in reducing this additional communication overhead. Local SGD consists of three parts: a local optimization process, an aggregation mechanism, and an outer optimizer that uses the aggregated updates from the nodes to produce a new model. While there exists an extensive literature on understanding the impact of hyperparameters in the local optimization process, the choice of outer optimizer and its hyperparameters is less clear. We study the role of the outer optimizer in Local SGD, and prove new convergence guarantees for the algorithm. In particular, we show that tuning the outer learning rate allows us to (a) trade off between optimization error and stochastic gradient noise variance, and (b) make up for ill-tuning of the inner learning rate. Our theory suggests that the outer learning rate should sometimes be set to values greater than $1$. We extend our results to settings where we use momentum in the outer optimizer, and we show a similar role for the momentum-adjusted outer learning rate. We also study acceleration in the outer optimizer and show that it improves the convergence rate as a function of the number of communication rounds, improving upon the convergence rate of prior algorithms that apply acceleration locally. Finally, we also introduce a novel data-dependent analysis of Local SGD that yields further insights on outer learning rate tuning. We conduct comprehensive experiments with standard language models and various outer optimizers to validate our theory.
+ oai:arXiv.org:2509.10439v2cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ math.OC
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Lu Li, Tianwei Ni, Yihao Sun, Pierre-Luc Bacon
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ahmed Khaled, Satyen Kale, Arthur Douillard, Chi Jin, Rob Fergus, Manzil Zaheer
- OneFlow: Concurrent Mixed-Modal and Interleaved Generation with Edit Flows
- https://arxiv.org/abs/2510.03506
- arXiv:2510.03506v3 Announce Type: replace
-Abstract: We present OneFlow, the first non-autoregressive multimodal model that enables variable-length and concurrent mixed-modal generation. Unlike autoregressive models that enforce rigid causal ordering between text and image generation, OneFlow combines an insertion-based Edit Flow for discrete text tokens with Flow Matching for image latents. OneFlow enables concurrent text-image synthesis with hierarchical sampling that prioritizes content over grammar. Through controlled experiments across model sizes from 1B to 8B, we demonstrate that OneFlow outperforms autoregressive baselines on both generation and understanding tasks while using up to 50% fewer training FLOPs. OneFlow surpasses both autoregressive and diffusion-based approaches while unlocking new capabilities for concurrent generation, iterative refinement, and natural reasoning-like generation.
- oai:arXiv.org:2510.03506v3
+ Faster Results from a Smarter Schedule: Reframing Collegiate Cross Country through Analysis of the National Running Club Database
+ https://arxiv.org/abs/2509.10600
+ arXiv:2509.10600v3 Announce Type: replace
+Abstract: Collegiate cross country teams often build their season schedules on intuition rather than evidence, partly because large-scale performance datasets are not publicly accessible. To address this limitation, we introduce the National Running Club Database (NRCD), the first openly available dataset to aggregate 23,725 race results from 7,594 collegiate club athletes across the 2023-2025 seasons. Unlike existing resources, NRCD includes detailed course metadata, allowing us to develop two standardized performance metrics: Converted Only (distance correction) and Standardized (distance, weather, and elevation adjusted). Using these standardized measures, we find that athletes with slower initial performances exhibit the greatest improvement within a season, and that race frequency is the strongest predictor of improvement. Using six machine learning models, random forest achieves the highest accuracy (r squared equals 0.92), revealing that athletes who race more frequently progress significantly faster than those who do not. At the team level, programs whose athletes race at least four times during the regular season have substantially higher odds of placing in the top 15 at nationals (chi-squared less than 0.01). These results challenge common coaching practices that favor minimal racing before championship meets. Our findings demonstrate that a data-informed scheduling strategy improves both individual development and team competitiveness. The NRCD provides a new foundation for evidence-based decision-making in collegiate cross country and opens opportunities for further research on standardized, longitudinal athlete performance modeling.
+ oai:arXiv.org:2509.10600v3
+ cs.CYcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- John Nguyen, Marton Havasi, Tariq Berrada, Luke Zettlemoyer, Ricky T. Q. Chen
+ Jonathan A. Karr Jr, Ryan M. Fryer, Ben Darden, Nicholas Pell, Kayla Ambrose, Evan Hall, Ramzi K. Bualuan, Nitesh V. Chawla
- Demystifying deep search: a holistic evaluation with hint-free multi-hop questions and factorised metrics
- https://arxiv.org/abs/2510.05137
- arXiv:2510.05137v3 Announce Type: replace
-Abstract: RAG (Retrieval-Augmented Generation) systems and web agents are increasingly evaluated on multi-hop deep search tasks, yet current practice suffers from two major limitations. First, most benchmarks leak the reasoning path in the question text, allowing models to follow surface cues rather than discover reasoning chains autonomously. Second, evaluation is typically reduced to a single pass rate, which collapses diverse behaviours into one score and obscures whether failures stem from inadequate search, poor knowledge use, or inappropriate refusal. To address these issues, we present WebDetective, a benchmark of hint-free multi-hop questions paired with a controlled Wikipedia sandbox that ensures full traceability of model actions, and a holistic evaluation framework that separates search sufficiency, knowledge utilisation, and refusal behaviour. Our evaluation of 25 state-of-the-art models reveals systematic weaknesses across all architectures: models struggle with knowledge utilisation despite having sufficient evidence and demonstrate near-absent appropriate refusal when evidence is lacking. These patterns expose a fundamental gap: today's systems excel at executing given reasoning paths but fail when required to discover them. We develop an agentic workflow, EvidenceLoop, that explicitly targets the challenges our benchmark identifies, incorporating verification loops and systematic evidence tracking that improve both search and synthesis capabilities. This baseline demonstrates that WebDetective's diagnostic framework can guide concrete architectural improvements, establishing our benchmark as a critical tool for developing genuinely autonomous reasoning systems rather than pattern-following agents.
- oai:arXiv.org:2510.05137v3
+ ARE: Scaling Up Agent Environments and Evaluations
+ https://arxiv.org/abs/2509.17158
+ arXiv:2509.17158v2 Announce Type: replace
+Abstract: We introduce Meta Agents Research Environments (ARE), a research platform for scalable creation of environments, integration of synthetic or real applications, and execution of agentic orchestrations. ARE provides simple abstractions to build complex and diverse environments, each with their own rules, tools, content, and verifiers, helping to bridge the gap between model development and real-world deployment. We also propose Gaia2, a benchmark built in ARE and designed to measure general agent capabilities. Beyond search and execution, Gaia2 requires agents to handle ambiguities and noise, adapt to dynamic environments, collaborate with other agents, and operate under temporal constraints. Unlike prior benchmarks, Gaia2 runs asynchronously, surfacing new failure modes that are invisible in static settings. Our experiments show that no system dominates across the intelligence spectrum: stronger reasoning often comes at the cost of efficiency, and budget scaling curves plateau, highlighting the need for new architectures and adaptive compute strategies. Perhaps more importantly, ARE abstractions enable continuous extension of Gaia2 to other environments, empowering the community to rapidly create new benchmarks tailored to their domains. In AI's second half, progress increasingly depends on defining meaningful tasks and robust evaluations to drive frontier capabilities forward.
+ oai:arXiv.org:2509.17158v2
+ cs.AIcs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Maojia Song, Renhang Liu, Xinyu Wang, Yong Jiang, Pengjun Xie, Fei Huang, Jingren Zhou, Dorien Herremans, Soujanya Poria
+ Romain Froger, Pierre Andrews, Matteo Bettini, Amar Budhiraja, Ricardo Silveira Cabral, Virginie Do, Emilien Garreau, Jean-Baptiste Gaya, Hugo Lauren\c{c}on, Maxime Lecanu, Kunal Malkan, Dheeraj Mekala, Pierre M\'enard, Gerard Moreno-Torres Bertran, Ulyana Piterbarg, Mikhail Plekhanov, Mathieu Rita, Andrey Rusakov, Vladislav Vorotilov, Mengjue Wang, Ian Yu, Amine Benhalloum, Gr\'egoire Mialon, Thomas Scialom
- TRepLiNa: Layer-wise CKA+REPINA Alignment Improves Low-Resource Machine Translation in Aya-23 8B
- https://arxiv.org/abs/2510.06249
- arXiv:2510.06249v5 Announce Type: replace
-Abstract: The 2025 Multimodal Models for Low-Resource Contexts and Social Impact (MMLoSo) Language Challenge addresses one of India's most pressing linguistic gaps: the lack of resources for its diverse low-resource languages (LRLs). In this study, we investigate whether enforcing cross-lingual similarity in specific internal layers of a decoder-only multilingual large language model (LLM) can improve translation quality from LRL to high-resource language (HRL). Specifically, we combine Centered Kernel Alignment (CKA), a similarity metric that encourages representations of different languages to align, with REPINA, a regularization method that constrains parameter updates to remain close to the pretrained model, into a joint method we call TRepLiNa. In this research project, we experiment with zero-shot, few-shot, and fine-tuning settings using Aya-23 8B with QLoRA across MMLoSo shared task language pairs (Mundari, Santali, Bhili) with Hindi/English pivots. Our results show that aligning mid-level layers using TRepLiNa (CKA+REPINA) is a low-cost, practical approach to improving LRL translation, especially in data-scarce settings.
- oai:arXiv.org:2510.06249v5
- cs.CL
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ SmokeSeer: 3D Gaussian Splatting for Smoke Removal and Scene Reconstruction
+ https://arxiv.org/abs/2509.17329
+ arXiv:2509.17329v2 Announce Type: replace
+Abstract: Smoke in real-world scenes can severely degrade image quality and hamper visibility. Recent image restoration methods either rely on data-driven priors that are susceptible to hallucinations, or are limited to static low-density smoke. We introduce SmokeSeer, a method for simultaneous 3D scene reconstruction and smoke removal from multi-view video sequences. Our method uses thermal and RGB images, leveraging the reduced scattering in thermal images to see through smoke. We build upon 3D Gaussian splatting to fuse information from the two image modalities, and decompose the scene into smoke and non-smoke components. Unlike prior work, SmokeSeer handles a broad range of smoke densities and adapts to temporally varying smoke. We validate our method on synthetic data and a new real-world smoke dataset with RGB and thermal images. We provide an open-source implementation and data on the project website.
+ oai:arXiv.org:2509.17329v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Toshiki Nakai, Ravi Kiran Chikkala, Lena Sophie Oberkircher, Nicholas Jennings, Natalia Skachkova, Tatiana Anikina, Jesujoba Oluwadara Alabi
+ Neham Jain, Andrew Jong, Sebastian Scherer, Ioannis Gkioulekas
- Open ASR Leaderboard: Towards Reproducible and Transparent Multilingual Speech Recognition Evaluation
- https://arxiv.org/abs/2510.06961
- arXiv:2510.06961v3 Announce Type: replace
-Abstract: Despite rapid progress, ASR evaluation remains saturated with short-form English, and efficiency is rarely reported. We present the Open ASR Leaderboard, a fully reproducible benchmark and interactive leaderboard comparing 60+ open-source and proprietary systems across 11 datasets, including a dedicated multilingual track. We standardize text normalization and report both word error rate (WER) and inverse real-time factor (RTFx), enabling fair accuracy-efficiency comparisons. For English transcription, Conformer encoders paired with LLM decoders achieve the best average WER but are slower, while CTC and TDT decoders deliver much better RTFx, making them attractive for long-form and offline use. Whisper-derived encoders fine-tuned for English improve accuracy but often trade off multilingual coverage. All code and dataset loaders are open-sourced to support transparent, extensible evaluation.
- oai:arXiv.org:2510.06961v3
+ Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning
+ https://arxiv.org/abs/2509.17552
+ arXiv:2509.17552v3 Announce Type: replace
+Abstract: The remarkable performance of Large Language Models (LLMs) can be enhanced with test-time computation, which relies on external tools and even other deep learning models. However, existing approaches for integrating non-text modality representations into LLMs typically require additional costly supervised training, restricting on-the-fly adaptation to new domains and modalities. In this work, we explore the feasibility of integrating representations from non-text foundational models (FMs) into text-based LLMs in a training-free manner. We propose In-Context Representation Learning (ICRL) as a proof-of-concept to allow LLMs to adaptively utilize non-text modality representations with few-shot learning. Unlike traditional in-context learning, which incorporates text-label pairs, ICRL replaces text inputs with FM representations, enabling the LLM to perform multi-modal inference without fine-tuning. We evaluate ICRL on a suite of tasks in the molecular domain, investigating three core research questions: (i) how to map FM representations into LLMs in a training-free manner, (ii) what factors influence ICRL performance, and (iii) what mechanisms underlie the effectiveness of ICRL. To the best of our knowledge, ICRL is the first training-free framework for integrating non-text modality representations into text-based LLMs, presenting a promising direction for adaptable, multi-modal generalization.
+ oai:arXiv.org:2509.17552v3cs.CLcs.AI
- cs.SD
- eess.AS
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Vaibhav Srivastav, Steven Zheng, Eric Bezzam, Eustache Le Bihan, Adel Moumen, Sanchit Gandhi
+ http://creativecommons.org/licenses/by/4.0/
+ Tianle Zhang, Wanlong Fang, Jonathan Woo, Paridhi Latawa, Deepak A. Subramanian, Alvin Chan
- PEAR: Planner-Executor Agent Robustness Benchmark
- https://arxiv.org/abs/2510.07505
- arXiv:2510.07505v3 Announce Type: replace
-Abstract: Large Language Model (LLM)-based Multi-Agent Systems (MAS) have emerged as a powerful paradigm for tackling complex, multi-step tasks across diverse domains. However, despite their impressive capabilities, MAS remain susceptible to adversarial manipulation. Existing studies typically examine isolated attack surfaces or specific scenarios, leaving a lack of holistic understanding of MAS vulnerabilities. To bridge this gap, we introduce PEAR, a benchmark for systematically evaluating both the utility and vulnerability of planner-executor MAS. While compatible with various MAS architectures, our benchmark focuses on the planner-executor structure, which is a practical and widely adopted design. Through extensive experiments, we find that (1) a weak planner degrades overall clean task performance more severely than a weak executor; (2) while a memory module is essential for the planner, having a memory module for the executor does not impact the clean task performance; (3) there exists a trade-off between task performance and robustness; and (4) attacks targeting the planner are particularly effective at misleading the system. These findings offer actionable insights for enhancing the robustness of MAS and lay the groundwork for principled defenses in multi-agent settings.
- oai:arXiv.org:2510.07505v3
+ Improved Segmentation of Polyps and Visual Explainability Analysis
+ https://arxiv.org/abs/2509.18159
+ arXiv:2509.18159v3 Announce Type: replace
+Abstract: Colorectal cancer (CRC) remains one of the leading causes of cancer-related morbidity and mortality worldwide, with gastrointestinal (GI) polyps serving as critical precursors according to the World Health Organization (WHO). Early and accurate segmentation of polyps during colonoscopy is essential for reducing CRC progression, yet manual delineation is labor-intensive and prone to observer variability. Deep learning methods have demonstrated strong potential for automated polyp analysis, but their limited interpretability remains a barrier to clinical adoption. In this study, we present PolypSeg-GradCAM, an explainable deep learning framework that integrates a U-Net architecture with a pre-trained ResNet-34 backbone and Gradient-weighted Class Activation Mapping (Grad-CAM) for transparent polyp segmentation. To ensure rigorous benchmarking, the model was trained and evaluated using 5-Fold Cross-Validation on the Kvasir-SEG dataset of 1,000 annotated endoscopic images. Experimental results show a mean Dice coefficient of 0.8902 +/- 0.0125, a mean Intersection-over-Union (IoU) of 0.8023, and an Area Under the Receiver Operating Characteristic Curve (AUC-ROC) of 0.9722. Advanced quantitative analysis using an optimal threshold yielded a Sensitivity of 0.9058 and Precision of 0.9083. Additionally, Grad-CAM visualizations confirmed that predictions were guided by clinically relevant regions, offering insight into the model's decision-making process. This study demonstrates that integrating segmentation accuracy with interpretability can support the development of trustworthy AI-assisted colonoscopy tools.
+ oai:arXiv.org:2509.18159v3
+ cs.CVcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Shen Dong, Mingxuan Zhang, Pengfei He, Li Ma, Bhavani Thuraisingham, Hui Liu, Yue Xing
+ Akwasi Asare, Thanh-Huy Nguyen, Ulas Bagci
- Detecting and Mitigating Insertion Hallucination in Video-to-Audio Generation
- https://arxiv.org/abs/2510.08078
- arXiv:2510.08078v4 Announce Type: replace
-Abstract: Video-to-Audio generation has made remarkable strides in automatically synthesizing sound for video. However, existing evaluation metrics, which focus on semantic and temporal alignment, overlook a critical failure mode: models often generate acoustic events, particularly speech and music, that have no corresponding visual source. We term this phenomenon Insertion Hallucination and identify it as a systemic risk driven by dataset biases, such as the prevalence of off-screen sounds, that remains completely undetected by current metrics. To address this challenge, we first develop a systematic evaluation framework that employs a majority-voting ensemble of multiple audio event detectors. We also introduce two novel metrics to quantify the prevalence and severity of this issue: IH@vid (the fraction of videos with hallucinations) and IH@dur (the fraction of hallucinated duration). Building on this, we propose Posterior Feature Correction, a novel training-free inference-time method that mitigates IH. PFC operates in a two-pass process: it first generates an initial audio output to detect hallucinated segments, and then regenerates the audio after masking the corresponding video features at those timestamps. Experiments on several mainstream V2A benchmarks first reveal that state-of-the-art models suffer from severe IH. In contrast, our PFC method reduces both the prevalence and duration of hallucinations by over 50\% on average, without degrading, and in some cases even improving, conventional metrics for audio quality and temporal synchronization. Our work is the first to formally define, systematically measure, and effectively mitigate Insertion Hallucination, paving the way for more reliable and faithful V2A models.
- oai:arXiv.org:2510.08078v4
- cs.SD
+ RAD: Towards Trustworthy Retrieval-Augmented Multi-modal Clinical Diagnosis
+ https://arxiv.org/abs/2509.19980
+ arXiv:2509.19980v2 Announce Type: replace
+Abstract: Clinical diagnosis is a highly specialized discipline requiring both domain expertise and strict adherence to rigorous guidelines. While current AI-driven medical research predominantly focuses on knowledge graphs or natural text pretraining paradigms to incorporate medical knowledge, these approaches primarily rely on implicitly encoded knowledge within model parameters, neglecting task-specific knowledge required by diverse downstream tasks. To address this limitation, we propose Retrieval-Augmented Diagnosis (RAD), a novel framework that explicitly injects external knowledge into multimodal models directly on downstream tasks. Specifically, RAD operates through three key mechanisms: retrieval and refinement of disease-centered knowledge from multiple medical sources, a guideline-enhanced contrastive loss that constrains the latent distance between multi-modal features and guideline knowledge, and the dual transformer decoder that employs guidelines as queries to steer cross-modal fusion, aligning the models with clinical diagnostic workflows from guideline acquisition to feature extraction and decision-making. Moreover, recognizing the lack of quantitative evaluation of interpretability for multimodal diagnostic models, we introduce a set of criteria to assess the interpretability from both image and text perspectives. Extensive evaluations across four datasets with different anatomies demonstrate RAD's generalizability, achieving state-of-the-art performance. Furthermore, RAD enables the model to concentrate more precisely on abnormal regions and critical indicators, ensuring evidence-based, trustworthy diagnosis. Our code is available at https://github.com/tdlhl/RAD.
+ oai:arXiv.org:2509.19980v2cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Liyang Chen, Hongkai Chen, Yujun Cai, Sifan Li, Qingwen Ye, Yiwei Wang
+ Haolin Li, Tianjie Dai, Zhe Chen, Siyuan Du, Jiangchao Yao, Ya Zhang, Yanfeng Wang
- Adaptive Gradient Calibration for Single-Positive Multi-Label Learning in Remote Sensing Image Scene Classification
- https://arxiv.org/abs/2510.08269
- arXiv:2510.08269v2 Announce Type: replace
-Abstract: Multi-label classification (MLC) offers a more comprehensive semantic understanding of Remote Sensing (RS) imagery compared to traditional single-label classification (SLC). However, obtaining complete annotations for MLC is particularly challenging due to the complexity and high cost of the labeling process. As a practical alternative, single-positive multi-label learning (SPML) has emerged, where each image is annotated with only one relevant label, and the model is expected to recover the full set of labels. While scalable, SPML introduces significant supervision ambiguity, demanding specialized solutions for model training. Although various SPML methods have been proposed in the computer vision domain, research in the RS context remains limited. To bridge this gap, we propose Adaptive Gradient Calibration (AdaGC), a novel and generalizable SPML framework tailored to RS imagery. AdaGC adopts a gradient calibration (GC) mechanism with a dual exponential moving average (EMA) module for robust pseudo-label generation. We introduce a theoretically grounded, training-dynamics-based indicator to adaptively trigger GC, which ensures GC's effectiveness by preventing it from being affected by model underfitting or overfitting to label noise. Extensive experiments on two benchmark RS datasets under two distinct label noise types demonstrate that AdaGC achieves state-of-the-art (SOTA) performance while maintaining strong robustness across diverse settings. The codes and data will be released at https://github.com/rslab-unitrento/AdaGC.
- oai:arXiv.org:2510.08269v2
+ FAST: Foreground-aware Diffusion with Accelerated Sampling Trajectory for Segmentation-oriented Anomaly Synthesis
+ https://arxiv.org/abs/2509.20295
+ arXiv:2509.20295v4 Announce Type: replace
+Abstract: Industrial anomaly segmentation relies heavily on pixel-level annotations, yet real-world anomalies are often scarce, diverse, and costly to label. Segmentation-oriented industrial anomaly synthesis (SIAS) has emerged as a promising alternative; however, existing methods struggle to balance sampling efficiency and generation quality. Moreover, most approaches treat all spatial regions uniformly, overlooking the distinct statistical differences between anomaly and background areas. This uniform treatment hinders the synthesis of controllable, structure-specific anomalies tailored for segmentation tasks. In this paper, we propose FAST, a foreground-aware diffusion framework featuring two novel modules: the Anomaly-Informed Accelerated Sampling (AIAS) and the Foreground-Aware Reconstruction Module (FARM). AIAS is a training-free sampling algorithm specifically designed for segmentation-oriented industrial anomaly synthesis, which accelerates the reverse process through coarse-to-fine aggregation and enables the synthesis of state-of-the-art segmentation-oriented anomalies in as few as 10 steps. Meanwhile, FARM adaptively adjusts the anomaly-aware noise within the masked foreground regions at each sampling step, preserving localized anomaly signals throughout the denoising trajectory. Extensive experiments on multiple industrial benchmarks demonstrate that FAST consistently outperforms existing anomaly synthesis methods in downstream segmentation tasks. We release the code at: https://github.com/Chhro123/fast-foreground-aware-anomaly-synthesis.
+ oai:arXiv.org:2509.20295v4cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chenying Liu, Gianmarco Perantoni, Lorenzo Bruzzone, Xiao Xiang Zhu
+ Xichen Xu, Yanshu Wang, Jinbao Wang, Xiaoning Lei, Guoyang Xie, Guannan Jiang, Zhichao Lu
- Evaluating Small Vision-Language Models on Distance-Dependent Traffic Perception
- https://arxiv.org/abs/2510.08352
- arXiv:2510.08352v2 Announce Type: replace
-Abstract: Vision-Language Models (VLMs) are becoming increasingly powerful, demonstrating strong performance on a variety of tasks that require both visual and textual understanding. Their strong generalisation abilities make them a promising component for automated driving systems, which must handle unexpected corner cases. However, to be trusted in such safety-critical applications, a model must first possess a reliable perception system. Moreover, since critical objects and agents in traffic scenes are often at a distance, we require systems that are not "shortsighted", i.e., systems with strong perception capabilities at both close (up to 20 meters) and long (30+ meters) range. With this in mind, we introduce Distance-Annotated Traffic Perception Question Answering (DTPQA), the first Visual Question Answering (VQA) benchmark focused solely on perception-based questions in traffic scenes, enriched with distance annotations. By excluding questions that require reasoning, we ensure that model performance reflects perception capabilities alone. Since automated driving hardware has limited processing power and cannot support large VLMs, our study centers on smaller VLMs. More specifically, we evaluate several state-of-the-art (SOTA) small VLMs on DTPQA and show that, despite the simplicity of the questions, these models significantly underperform compared to humans (~60% average accuracy for the best-performing small VLM versus ~85% human performance). However, it is important to note that the human sample size was relatively small, which imposes statistical limitations. We also identify specific perception tasks, such as distinguishing left from right, that remain particularly challenging for these models.
- oai:arXiv.org:2510.08352v2
+ Beyond Classification Accuracy: Neural-MedBench and the Need for Deeper Reasoning Benchmarks
+ https://arxiv.org/abs/2509.22258
+ arXiv:2509.22258v2 Announce Type: replace
+Abstract: Recent advances in vision-language models (VLMs) have achieved remarkable performance on standard medical benchmarks, yet their true clinical reasoning ability remains unclear. Existing datasets predominantly emphasize classification accuracy, creating an evaluation illusion in which models appear proficient while still failing at high-stakes diagnostic reasoning. We introduce Neural-MedBench, a compact yet reasoning-intensive benchmark specifically designed to probe the limits of multimodal clinical reasoning in neurology. Neural-MedBench integrates multi-sequence MRI scans, structured electronic health records, and clinical notes, and encompasses three core task families: differential diagnosis, lesion recognition, and rationale generation. To ensure reliable evaluation, we develop a hybrid scoring pipeline that combines LLM-based graders, clinician validation, and semantic similarity metrics. Through systematic evaluation of state-of-the-art VLMs, including GPT-4o, Claude-4, and MedGemma, we observe a sharp performance drop compared to conventional datasets. Error analysis shows that reasoning failures, rather than perceptual errors, dominate model shortcomings. Our findings highlight the necessity of a Two-Axis Evaluation Framework: breadth-oriented large datasets for statistical generalization, and depth-oriented, compact benchmarks such as Neural-MedBench for reasoning fidelity. We release Neural-MedBench at https://neuromedbench.github.io/ as an open and extensible diagnostic testbed, which guides the expansion of future benchmarks and enables rigorous yet cost-effective assessment of clinically trustworthy AI.
+ oai:arXiv.org:2509.22258v2cs.CVcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- 10.1109/OJVT.2025.3629318
- IEEE Open Journal of Vehicular Technology, vol. 7, pp. 54-72, 2026
- Nikos Theodoridis, Tim Brophy, Reenu Mohandas, Ganesh Sistu, Fiachra Collins, Anthony Scanlan, Ciaran Eising
+ Miao Jing, Mengting Jia, Junling Lin, Zhongxia Shen, Huan Gao, Mingkun Xu, Shangyang Li
- Learning What Matters: Steering Diffusion via Spectrally Anisotropic Forward Noise
- https://arxiv.org/abs/2510.09660
- arXiv:2510.09660v4 Announce Type: replace
-Abstract: Diffusion Probabilistic Models (DPMs) have achieved strong generative performance, yet their inductive biases remain largely implicit. In this work, we aim to build inductive biases into the training and sampling of diffusion models to better accommodate the target distribution of the data to model. We introduce an anisotropic noise operator that shapes these biases by replacing the isotropic forward covariance with a structured, frequency-diagonal covariance. This operator unifies band-pass masks and power-law weightings, allowing us to emphasize or suppress designated frequency bands, while keeping the forward process Gaussian. We refer to this as Spectrally Anisotropic Gaussian Diffusion (SAGD). In this work, we derive the score relation for anisotropic forward covariances and show that, under full support, the learned score converges to the true data score as $t\!\to\!0$, while anisotropy reshapes the probability-flow path from noise to data. Empirically, we show the induced anisotropy outperforms standard diffusion across several vision datasets, and enables selective omission: learning while ignoring known corruptions confined to specific bands. Together, these results demonstrate that carefully designed anisotropic forward noise provides a simple, yet principled, handle to tailor inductive bias in DPMs.
- oai:arXiv.org:2510.09660v4
+ Specialization after Generalization: Towards Understanding Test-Time Training in Foundation Models
+ https://arxiv.org/abs/2509.24510
+ arXiv:2509.24510v3 Announce Type: replace
+Abstract: Recent empirical studies have explored the idea of continuing to train a model at test-time for a given task, known as test-time training (TTT), and have found it to yield significant performance improvements. However, there is limited understanding of why and when TTT is effective. Earlier explanations mostly focused on the observation that TTT may help when applied to out-of-distribution adaptation or used with privileged data. However, the growing scale of foundation models with most test data being in-distribution questions these explanations. We instead posit that foundation models remain globally underparameterized, with TTT providing a mechanism for specialization after generalization, focusing capacity on concepts relevant to the test task. Specifically, under the linear representation hypothesis, we propose a model in which TTT achieves a substantially smaller in-distribution test error than global training. We empirically validate our model's key assumptions by training a sparse autoencoder on ImageNet, showing that semantically related data points are explained by only a few shared concepts. Finally, we perform scaling studies across image and language tasks that confirm the practical implications of our model, identifying the regimes where specialization is most effective.
+ oai:arXiv.org:2509.24510v3cs.LGcs.AI
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Luca Scimeca, Thomas Jiralerspong, Berton Earnshaw, Jason Hartford, Yoshua Bengio
+ Jonas H\"ubotter, Patrik Wolf, Alexander Shevchenko, Dennis J\"uni, Andreas Krause, Gil Kur
- A Generic Machine Learning Framework for Radio Frequency Fingerprinting
- https://arxiv.org/abs/2510.09775
- arXiv:2510.09775v2 Announce Type: replace
-Abstract: Fingerprinting radio frequency (RF) emitters typically involves finding unique characteristics that are featured in their received signal. These fingerprints are nuanced, but sufficiently detailed, motivating the pursuit of methods that can successfully extract them. The downstream task that requires the most meticulous RF fingerprinting (RFF) is known as specific emitter identification (SEI), which entails recognising each individual transmitter. RFF and SEI have a long history, with numerous defence and civilian applications such as signal intelligence, electronic surveillance, physical-layer authentication of wireless devices, to name a few. In recent years, data-driven RFF approaches have become popular due to their ability to automatically learn intricate fingerprints. They generally deliver superior performance when compared to traditional RFF techniques that are often labour-intensive, inflexible, and only applicable to a particular emitter type or transmission scheme. In this paper, we present a generic and versatile machine learning (ML) framework for data-driven RFF with several popular downstream tasks such as SEI, data association (EDA) and RF emitter clustering (RFEC). It is emitter-type agnostic. We then demonstrate the introduced framework for several tasks using real RF datasets for spaceborne surveillance, signal intelligence and countering drones applications.
- oai:arXiv.org:2510.09775v2
- cs.LG
- cs.CR
- stat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
+ Towards Personalized Deep Research: Benchmarks and Evaluations
+ https://arxiv.org/abs/2509.25106
+ arXiv:2509.25106v2 Announce Type: replace
+Abstract: Deep Research Agents (DRAs) can autonomously conduct complex investigations and generate comprehensive reports, demonstrating strong real-world potential. However, existing benchmarks primarily evaluate DRAs on generic quality metrics and overlook personalization, a critical dimension for individual users. However, existing evaluations mostly rely on close-ended benchmarks, while open-ended deep research benchmarks remain scarce and typically neglect personalized scenarios. To bridge this gap, we introduce Personalized Deep Research Bench (PDR-Bench), the first benchmark for evaluating personalization in DRAs. It pairs 50 diverse research tasks across 10 domains with 25 authentic user profiles that combine structured persona attributes with dynamic real-world contexts, yielding 250 realistic user-task queries. To assess system performance, we propose the PQR Evaluation Framework, which jointly measures Personalization Alignment, Content Quality, and Factual Reliability. Our experiments on a range of systems highlight current capabilities and limitations in handling personalized deep research. This work establishes a rigorous foundation for developing and evaluating the next generation of truly personalized AI research assistants.
+ oai:arXiv.org:2509.25106v2
+ cs.CL
+ cs.AI
+ cs.IR
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Alex Hiles, Bashar I. Ahmad
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Yuan Liang, Jiaxian Li, Yuqing Wang, Piaohong Wang, Motong Tian, Pai Liu, Shuofei Qiao, Runnan Fang, He Zhu, Ge Zhang, Minghao Liu, Yuchen Eleanor Jiang, Ningyu Zhang, Wangchunshu Zhou
- CoPRS: Learning Positional Prior from Chain-of-Thought for Reasoning Segmentation
- https://arxiv.org/abs/2510.11173
- arXiv:2510.11173v2 Announce Type: replace
-Abstract: Existing works on reasoning segmentation either connect hidden features from a language model directly to a mask decoder or represent positions in text, which limits interpretability and semantic detail. To solve this, we present CoPRS, a Multi-modal Chain-of-Thought (MCoT)-based positional perception model that bridges language reasoning to segmentation through a differentiable and interpretable positional prior instantiated as a heatmap. By making the reasoning process clear via MCoT and expressing it as a dense, differentiable heatmap, this interface enhances interpretability and diagnostic analysis and yields more concentrated evidence on the target. A learnable concentration token aggregates features of the image and reasoning text to generate this positional prior, which is decoded to precise masks through a lightweight decoder, providing a direct connection between reasoning and segmentation. Across the RefCOCO series and ReasonSeg, CoPRS matches or surpasses the best reported metrics on each standard split under comparable protocols, with performance at or above the prior state of the art across both validation and test partitions. Extensive experiments demonstrate a strong positive correlation among the CoT trajectory, the generated heatmap, and the decoded mask, supporting an interpretable alignment between the reasoning output and downstream mask generation. Collectively, these findings support the utility of this paradigm in bridging reasoning and segmentation and show advantages in concentration driven by reasoning and in more precise mask prediction. Code, checkpoints and logs are released at https://github.com/ZhenyuLU-Heliodore/CoPRS.git.
- oai:arXiv.org:2510.11173v2
+ Learning Generalizable Shape Completion with SIM(3) Equivariance
+ https://arxiv.org/abs/2509.26631
+ arXiv:2509.26631v3 Announce Type: replace
+Abstract: 3D shape completion methods typically assume scans are pre-aligned to a canonical frame. This leaks pose and scale cues that networks may exploit to memorize absolute positions rather than inferring intrinsic geometry. When such alignment is absent in real data, performance collapses. We argue that robust generalization demands architectural equivariance to the similarity group, SIM(3), so the model remains agnostic to pose and scale. Following this principle, we introduce the first SIM(3)-equivariant shape completion network, whose modular layers successively canonicalize features, reason over similarity-invariant geometry, and restore the original frame. Under a de-biased evaluation protocol that removes the hidden cues, our model outperforms both equivariant and augmentation baselines on the PCN benchmark. It also sets new cross-domain records on real driving and indoor scans, lowering minimal matching distance on KITTI by 17% and Chamfer distance $\ell1$ on OmniObject3D by 14%. Perhaps surprisingly, ours under the stricter protocol still outperforms competitors under their biased settings. These results establish full SIM(3) equivariance as an effective route to truly generalizable shape completion. Project page: https://sime-completion.github.io.
+ oai:arXiv.org:2509.26631v3cs.CV
- cs.MM
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Zhenyu Lu, Liupeng Li, Jinpeng Wang, Yan Feng, Bin Chen, Ke Chen, Yaowei Wang
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Yuqing Wang, Zhaiyu Chen, Xiao Xiang Zhu
- The Adoption Paradox for Veterinary Professionals in China: High Use of Artificial Intelligence Despite Low Familiarity
- https://arxiv.org/abs/2510.11758
- arXiv:2510.11758v2 Announce Type: replace
-Abstract: While the global integration of artificial intelligence (AI) into veterinary medicine is accelerating, its adoption dynamics in major markets such as China remain uncharacterized. This paper presents the first exploratory analysis of AI perception and adoption among veterinary professionals in China, based on a cross-sectional survey of 455 practitioners conducted in mid-2025. We identify a distinct "adoption paradox": although 71.0% of respondents have incorporated AI into their workflows, 44.6% of these active users report low familiarity with the technology. In contrast to the administrative-focused patterns observed in North America, adoption in China is practitioner-driven and centers on core clinical tasks, such as disease diagnosis (50.1%) and prescription calculation (44.8%). However, concerns regarding reliability and accuracy remain the primary barrier (54.3%), coexisting with a strong consensus (93.8%) for regulatory oversight. These findings suggest a unique "inside-out" integration model in China, characterized by high clinical utility but restricted by an "interpretability gap," underscoring the need for specialized tools and robust regulatory frameworks to safely harness AI's potential in this expanding market.
- oai:arXiv.org:2510.11758v2
- cs.CY
+ Toward a Unified Geometry Understanding: Riemannian Diffusion Framework for Graph Generation and Prediction
+ https://arxiv.org/abs/2510.04522
+ arXiv:2510.04522v2 Announce Type: replace
+Abstract: Graph diffusion models have made significant progress in learning structured graph data and have demonstrated strong potential for predictive tasks. Existing approaches typically embed node, edge, and graph-level features into a unified latent space, modeling prediction tasks including classification and regression as a form of conditional generation. However, due to the non-Euclidean nature of graph data, features of different curvatures are entangled in the same latent space without releasing their geometric potential. To address this issue, we aim to construt an ideal Riemannian diffusion model to capture distinct manifold signatures of complex graph data and learn their distribution. This goal faces two challenges: numerical instability caused by exponential mapping during the encoding proces and manifold deviation during diffusion generation. To address these challenges, we propose GeoMancer: a novel Riemannian graph diffusion framework for both generation and prediction tasks. To mitigate numerical instability, we replace exponential mapping with an isometric-invariant Riemannian gyrokernel approach and decouple multi-level features onto their respective task-specific manifolds to learn optimal representations. To address manifold deviation, we introduce a manifold-constrained diffusion method and a self-guided strategy for unconditional generation, ensuring that the generated data remains aligned with the manifold signature. Extensive experiments validate the effectiveness of our approach, demonstrating superior performance across a variety of tasks.
+ oai:arXiv.org:2510.04522v2
+ cs.LGcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Shumin Li, Xiaoyun Lai
+ http://creativecommons.org/licenses/by/4.0/
+ Yisen Gao, Xingcheng Fu, Qingyun Sun, Jianxin Li, Xianxian Li
- GRAVITY: A Framework for Personalized Text Generation via Profile-Grounded Synthetic Preferences
- https://arxiv.org/abs/2510.11952
- arXiv:2510.11952v2 Announce Type: replace
-Abstract: Personalization in LLMs often relies on costly human feedback or interaction logs, limiting scalability and neglecting deeper user attributes. To reduce the reliance on human annotations, we introduce GRAVITY (Generative Response with Aligned Values, Interests, and Traits of You), a framework for generating synthetic, profile-grounded preference data that captures users' interests, values, beliefs, and personality traits. By integrating demographic, cultural, and psychological frameworks -- including Hofstede's cultural dimensions, Schwartz's basic values, the World Values Survey, and Big Five OCEAN traits -- GRAVITY synthesizes preference pairs to guide personalized content generation. We evaluate GRAVITY on book descriptions for 400 Amazon users, comparing it to prompt-based conditioning, standard fine-tuning, and naive synthetic pair generation. Profile-grounded synthetic data consistently improves generation, especially across multiple cultures (USA, Brazil, Japan, India), achieving over 4% higher preference gains across baselines, with user studies showing that GRAVITY outputs are preferred over 86% of the time. Our results show that scenario-grounded synthetic data can capture richer user variation, reduce reliance on costly annotation, and produce more engaging, user-centered content, offering a scalable path for LLM personalization.
- oai:arXiv.org:2510.11952v2
+ DEGS: Deformable Event-based 3D Gaussian Splatting from RGB and Event Stream
+ https://arxiv.org/abs/2510.07752
+ arXiv:2510.07752v2 Announce Type: replace
+Abstract: Reconstructing Dynamic 3D Gaussian Splatting (3DGS) from low-framerate RGB videos is challenging. This is because large inter-frame motions will increase the uncertainty of the solution space. For example, one pixel in the first frame might have more choices to reach the corresponding pixel in the second frame. Event cameras can asynchronously capture rapid visual changes and are robust to motion blur, but they do not provide color information. Intuitively, the event stream can provide deterministic constraints for the inter-frame large motion by the event trajectories. Hence, combining low-temporal-resolution images with high-framerate event streams can address this challenge. However, it is challenging to jointly optimize Dynamic 3DGS using both RGB and event modalities due to the significant discrepancy between these two data modalities. This paper introduces a novel framework that jointly optimizes dynamic 3DGS from the two modalities. The key idea is to adopt event motion priors to guide the optimization of the deformation fields. First, we extract the motion priors encoded in event streams by using the proposed LoCM unsupervised fine-tuning framework to adapt an event flow estimator to a certain unseen scene. Then, we present the geometry-aware data association method to build the event-Gaussian motion correspondence, which is the primary foundation of the pipeline, accompanied by two useful strategies, namely motion decomposition and inter-frame pseudo-label. Extensive experiments show that our method outperforms existing image and event-based approaches across synthetic and real scenes and prove that our method can effectively optimize dynamic 3DGS with the help of event data.
+ oai:arXiv.org:2510.07752v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1109/TVCG.2025.3618768
+ 2025 IEEE Transactions on Visualization and Computer Graphics
+ Junhao He, Jiaxu Wang, Jia Li, Mingyuan Sun, Qiang Zhang, Jiahang Cao, Ziyi Zhang, Yi Gu, Jingkai Sun, Renjing Xu
+
+
+ Beyond Over-Refusal: Scenario-Based Diagnostics and Post-Hoc Mitigation for Exaggerated Refusals in LLMs
+ https://arxiv.org/abs/2510.08158
+ arXiv:2510.08158v2 Announce Type: replace
+Abstract: Large language models (LLMs) frequently produce false refusals, declining benign requests that contain terms resembling unsafe queries. We address this challenge by introducing two comprehensive benchmarks: the Exaggerated Safety Benchmark (XSB) for single-turn prompts, annotated with "Focus" keywords that identify refusal-inducing triggers, and the Multi-turn Scenario-based Exaggerated Safety Benchmark (MS-XSB), which systematically evaluates refusal calibration in realistic, context-rich dialog settings. Our benchmarks reveal that exaggerated refusals persist across diverse recent LLMs and are especially pronounced in complex, multi-turn scenarios. To mitigate these failures, we leverage post-hoc explanation methods to identify refusal triggers and deploy three lightweight, model-agnostic approaches, ignore-word instructions, prompt rephrasing, and attention steering, at inference time, all without retraining or parameter access. Experiments on four instruction-tuned Llama models demonstrate that these strategies substantially improve compliance on safe prompts while maintaining robust safety protections. Our findings establish a reproducible framework for diagnosing and mitigating exaggerated refusals, highlighting practical pathways to safer and more helpful LLM deployments.
+ oai:arXiv.org:2510.08158v2cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Priyanka Dey, Daniele Rosa, Wenqing Zheng, Daniel Barcklow, Jieyu Zhao, Emilio Ferrara
+ Shuzhou Yuan, Ercong Nie, Yinuo Sun, Chenxuan Zhao, William LaCroix, Michael F\"arber
- COINS: SemantiC Ids Enhanced COLd Item RepresentatioN for Click-through Rate Prediction in E-commerce Search
- https://arxiv.org/abs/2510.12604
- arXiv:2510.12604v3 Announce Type: replace
-Abstract: With the rise of modern search and recommendation platforms, insufficient collaborative information of cold-start items exacerbates the Matthew effect of existing platform items, challenging platform diversity and becoming a longstanding issue. Existing methods align items' side content with collaborative information to transfer collaborative signals from high-popularity items to cold-start items. However, these methods fail to account for the asymmetry between collaboration and content, nor the fine-grained differences among items. To address these issues, we propose SMILE, an item representation enhancement approach based on fused alignment of semantic IDs. Specifically, we use RQ-OPQ encoding to quantize item content and collaborative information, followed by a two-step alignment: RQ encoding transfers shared collaborative signals across items, while OPQ encoding learns differentiated information of items. Comprehensive offline experiments on large-scale industrial datasets demonstrate superiority of SMILE, and rigorous online A/B tests confirm statistically significant improvements: item CTR +1.66%, buyers +1.57%, and order volume +2.17%.
- oai:arXiv.org:2510.12604v3
- cs.IR
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Bidirectional Representations Augmented Autoregressive Biological Sequence Generation
+ https://arxiv.org/abs/2510.08169
+ arXiv:2510.08169v3 Announce Type: replace
+Abstract: Autoregressive (AR) models, common in sequence generation, are limited in many biological tasks such as de novo peptide sequencing and protein modeling by their unidirectional nature, failing to capture crucial global bidirectional token dependencies. Non-Autoregressive (NAR) models offer holistic, bidirectional representations but face challenges with generative coherence and scalability. To transcend this, we propose a hybrid framework enhancing AR generation by dynamically integrating rich contextual information from non-autoregressive mechanisms. Our approach couples a shared input encoder with two decoders: a non-autoregressive one learning latent bidirectional biological features, and an AR decoder synthesizing the biological sequence by leveraging these bidirectional features. A novel cross-decoder attention module enables the AR decoder to iteratively query and integrate these bidirectional features, enriching its predictions. This synergy is cultivated via a tailored training strategy with importance annealing for balanced objectives and cross-decoder gradient blocking for stable, focused learning. Evaluations on a demanding nine-species benchmark of de novo peptide sequencing show that our model substantially surpasses AR and NAR baselines. It uniquely harmonizes AR stability with NAR contextual awareness, delivering robust, superior performance on diverse downstream data. This research advances biological sequence modeling techniques and contributes a novel architectural paradigm for augmenting AR models with enhanced bidirectional understanding for complex sequence generation. Code is available at https://github.com/BEAM-Labs/denovo.
+ oai:arXiv.org:2510.08169v3
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Qihang Zhao, Zhongbo Sun, Xiaoyang Zheng, Xian Guo, Siyuan Wang, Zihan Liang, Mingcan Peng, Ben Chen, Chenyi Lei
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiang Zhang, Jiaqi Wei, Zijie Qiu, Sheng Xu, Zhi Jin, ZhiQiang Gao, Nanqing Dong, Siqi Sun
- Foveation Improves Payload Capacity in Steganography
- https://arxiv.org/abs/2510.13151
- arXiv:2510.13151v2 Announce Type: replace
-Abstract: Steganography finds its use in visual medium such as providing metadata and watermarking. With support of efficient latent representations and foveated rendering, we trained models that improve existing capacity limits from 100 to 500 bits, while achieving better accuracy of up to 1 failure bit out of 2000, at 200K test bits. Finally, we achieve a comparable visual quality of 31.47 dB PSNR and 0.13 LPIPS, showing the effectiveness of novel perceptual design in creating multi-modal latent representations in steganography.
- oai:arXiv.org:2510.13151v2
- cs.CV
- cs.GR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Maple: A Multi-agent System for Portable Deep Learning across Clusters
+ https://arxiv.org/abs/2510.08842
+ arXiv:2510.08842v2 Announce Type: replace
+Abstract: Training deep learning (DL) models across Graphics Processing Unit (GPU) clusters is technically challenging. One aspect is that users have to compose command lines to adapt to the heterogeneous launchers, schedulers, affinity options, DL framework arguments, and environment variables. Composing correct command lines is error-prone and can easily frustrate users, impeding research or wasting resources. In this work, we present Maple, a multi-agent system that generates correct DL command lines with users' natural language input. Maple consists of four agents with the functionalities of information extraction, template retrieval, command line verification, and error correction. We evaluate Maple on nine GPU clusters across national computing centers in the U.S., five representative deep learning model families, and four commonly used parallel DL training paradigms. Our experiments also cover schedulers of SLURM and PBS and heterogeneous architectures, such as NVIDIA A100/H200 GPUs and Intel Max series GPUs. Maple achieves 92.0% accuracy in generating command lines across the 567 test cases. Leverage multiple language models with an aggregated size of 10B parameters, Maple delivers comparable performance to the state-of-the-art models of GPT-5, Claude, and Gemini. Together, these results highlight Maple's practical value in enabling portable and scalable distributed DL across heterogeneous HPC environments.
+ oai:arXiv.org:2510.08842v2
+ cs.DC
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- 10.1145/3757374.3771423
- Lifeng Qiu Lin, Henry Kam, Qi Sun, Kaan Ak\c{s}it
+ Molang Wu, Zhao Zhang
- Deep Edge Filter: Return of the Human-Crafted Layer in Deep Learning
- https://arxiv.org/abs/2510.13865
- arXiv:2510.13865v5 Announce Type: replace
-Abstract: We introduce the Deep Edge Filter, a novel approach that applies high-pass filtering to deep neural network features to improve model generalizability. Our method is motivated by our hypothesis that neural networks encode task-relevant semantic information in high-frequency components while storing domain-specific biases in low-frequency components of deep features. By subtracting low-pass filtered outputs from original features, our approach isolates generalizable representations while preserving architectural integrity. Experimental results across diverse domains such as Vision, Text, 3D, and Audio demonstrate consistent performance improvements regardless of model architecture and data modality. Analysis reveals that our method induces feature sparsification and effectively isolates high-frequency components, providing empirical validation of our core hypothesis. The code is available at https://github.com/dongkwani/DeepEdgeFilter.
- oai:arXiv.org:2510.13865v5
- cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Universal Discrete-Domain Speech Enhancement
+ https://arxiv.org/abs/2510.09974
+ arXiv:2510.09974v2 Announce Type: replace
+Abstract: In real-world scenarios, speech signals are inevitably corrupted by various types of interference, making speech enhancement (SE) a critical task for robust speech processing. However, most existing SE methods only handle a limited range of distortions, such as additive noise, reverberation, or band limitation, while the study of SE under multiple simultaneous distortions remains limited. This gap affects the generalization and practical usability of SE methods in real-world environments.To address this gap, this paper proposes a novel Universal Discrete-domain SE model called UDSE.Unlike regression-based SE models that directly predict clean speech waveform or continuous features, UDSE redefines SE as a discrete-domain classification task, instead predicting the clean discrete tokens quantized by the residual vector quantizer (RVQ) of a pre-trained neural speech codec.Specifically, UDSE first extracts global features from the degraded speech. Guided by these global features, the clean token prediction for each VQ follows the rules of RVQ, where the prediction of each VQ relies on the results of the preceding ones. Finally, the predicted clean tokens from all VQs are decoded to reconstruct the clean speech waveform. During training, the UDSE model employs a teacher-forcing strategy, and is optimized with cross-entropy loss. Experimental results confirm that the proposed UDSE model can effectively enhance speech degraded by various conventional and unconventional distortions, e.g., additive noise, reverberation, band limitation, clipping, phase distortion, and compression distortion, as well as their combinations. These results demonstrate the superior universality and practicality of UDSE compared to advanced regression-based SE methods.
+ oai:arXiv.org:2510.09974v2
+ cs.SD
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-sa/4.0/
- Dongkwan Lee, Junhoo Lee, Nojun Kwak
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fei Liu, Yang Ai, Ye-Xin Lu, Rui-Chen Zheng, Hui-Peng Du, Zhen-Hua Ling
- Tawa: Automatic Warp Specialization for Modern GPUs with Asynchronous References
- https://arxiv.org/abs/2510.14719
- arXiv:2510.14719v2 Announce Type: replace
-Abstract: Modern GPUs feature specialized hardware units that enable high-performance, asynchronous dataflow execution. However, the conventional SIMT programming model is fundamentally misaligned with this task-parallel hardware, creating a significant programmability gap. While hardware-level warp specialization is the key to unlocking peak performance, it forces developers to manually orchestrate complex, low-level communication and software pipelines--a process that is labor-intensive, error-prone, and unsustainable. To address this challenge, we present Tawa, an automated compiler that systematically generates high-performance, warp-specialized code from a high-level, tile-based program. Central to our approach is a novel IR abstraction, asynchronous references (aref), which expresses warp-level communication without exposing low-level hardware details. Using this abstraction, Tawa automatically partitions programs into producer-consumer roles and manages the intricate dataflow pipeline, relieving developers of invasive kernel rewriting. Evaluation on NVIDIA H100 GPUs across representative LLM kernels shows that Tawa delivers high hardware utilization, achieving up to 1.1$\times$ speedup over highly optimized cuBLAS GEMM kernels. For attention workloads, Tawa attains 1.2$\times$ speedup over Triton and matches the performance of the hand-optimized CUTLASS C++ FlashAttention-3 kernel with far less programming effort.
- oai:arXiv.org:2510.14719v2
+ An Eulerian Perspective on Straight-Line Sampling
+ https://arxiv.org/abs/2510.11657
+ arXiv:2510.11657v2 Announce Type: replace
+Abstract: We study dynamic measure transport for generative modeling: specifically, flows induced by stochastic processes that bridge a specified source and target distribution. The conditional expectation of the process' velocity defines an ODE whose flow map achieves the desired transport. We ask \emph{which processes produce straight-line flows} -- i.e., flows whose pointwise acceleration vanishes and thus are exactly integrable with a first-order method? We provide a concise PDE characterization of straightness as a balance between conditional acceleration and the divergence of a weighted covariance (Reynolds) tensor. Using this lens, we fully characterize affine-in-time interpolants and show that straightness occurs exactly under deterministic endpoint couplings. We also derive necessary conditions that constrain flow geometry for general processes, offering broad guidance for designing transports that are easier to integrate.
+ oai:arXiv.org:2510.11657v2cs.LG
- cs.AR
- cs.PL
- Thu, 11 Dec 2025 00:00:00 -0500
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Hongzheng Chen, Bin Fan, Alexander Collins, Bastian Hagedorn, Evghenii Gaburov, Masahiro Masuda, Matthew Brookhart, Chris Sullivan, Jason Knight, Zhiru Zhang, Vinod Grover
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Panos Tsimpos, Youssef Marzouk
- Attention Sinks in Diffusion Language Models
- https://arxiv.org/abs/2510.15731
- arXiv:2510.15731v2 Announce Type: replace
-Abstract: Masked Diffusion Language Models (DLMs) have recently emerged as a promising alternative to traditional Autoregressive Models (ARMs). DLMs employ transformer encoders with bidirectional attention, enabling parallel token generation while maintaining competitive performance. Although their efficiency and effectiveness have been extensively studied, the internal mechanisms that govern DLMs remain largely unexplored. In this work, we conduct an empirical analysis of DLM attention patterns, focusing on the attention sinking phenomenon, an effect previously observed in various transformer-based architectures. Our findings reveal that DLMs also exhibit attention sinks, but with distinct characteristics. First, unlike in ARMs, the sink positions in DLMs tend to shift throughout the generation process, displaying a dynamic behaviour. Second, while ARMs are highly sensitive to the removal of attention sinks, DLMs remain robust: masking sinks leads to only a minor degradation in performance. These results provide new insights into the inner workings of diffusion-based language models and highlight fundamental differences in how they allocate and utilize attention compared to autoregressive models.
- oai:arXiv.org:2510.15731v2
- cs.CL
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ State-Space Models for Tabular Prior-Data Fitted Networks
+ https://arxiv.org/abs/2510.14573
+ arXiv:2510.14573v2 Announce Type: replace
+Abstract: Recent advancements in foundation models for tabular data, such as TabPFN, demonstrated that pretrained Transformer architectures can approximate Bayesian inference with high predictive performance. However, Transformers suffer from quadratic complexity with respect to sequence length, motivating the exploration of more efficient sequence models. In this work, we investigate the potential of using Hydra, a bidirectional linear-time structured state space model (SSM), as an alternative to Transformers in TabPFN. A key challenge lies in SSM's inherent sensitivity to the order of input tokens - an undesirable property for tabular datasets where the row order is semantically meaningless. We investigate to what extent a bidirectional approach can preserve efficiency and enable symmetric context aggregation. Our experiments show that this approach reduces the order-dependence, achieving predictive performance competitive to the original TabPFN model.
+ oai:arXiv.org:2510.14573v2
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Maximo Eduardo Rulli, Simone Petruzzi, Edoardo Michielon, Fabrizio Silvestri, Simone Scardapane, Alessio Devoto
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ International Conference on Machine Learning (ICML), 1st ICML Workshop on Foundation Models for Structured Data, 2025
+ Felix Koch, Marcel Wever, Fabian Raisch, Benjamin Tischler
- Enhanced Sentiment Interpretation via a Lexicon-Fuzzy-Transformer Framework
- https://arxiv.org/abs/2510.15843
- arXiv:2510.15843v2 Announce Type: replace
-Abstract: Accurately detecting sentiment polarity and intensity in product reviews and social media posts remains challenging due to informal and domain-specific language. To address this, we propose a novel hybrid lexicon-fuzzy-transformer framework that combines rule-based heuristics, contextual deep learning, and fuzzy logic to generate continuous sentiment scores reflecting both polarity and strength. The pipeline begins with VADER-based initial sentiment estimations, which are refined through a two-stage adjustment process. This involves leveraging confidence scores from DistilBERT, a lightweight transformer and applying fuzzy logic principles to mitigate excessive neutrality bias and enhance granularity. A custom fuzzy inference system then maps the refined scores onto a 0 to 1 continuum, producing expert)like judgments. The framework is rigorously evaluated on four domain-specific datasets. food delivery, e-commerce, tourism, and fashion. Results show improved alignment with user ratings, better identification of sentiment extremes, and reduced misclassifications. Both quantitative metrics (distributional alignment, confusion matrices) and qualitative insights (case studies, runtime analysis) affirm the models robustness and efficiency. This work demonstrates the value of integrating symbolic reasoning with neural models for interpretable, finegrained sentiment analysis in linguistically dynamic domains.
- oai:arXiv.org:2510.15843v2
- cs.CL
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ TranSimHub:A Unified Air-Ground Simulation Platform for Multi-Modal Perception and Decision-Making
+ https://arxiv.org/abs/2510.15365
+ arXiv:2510.15365v2 Announce Type: replace
+Abstract: Air-ground collaborative intelligence is becoming a key approach for next-generation urban intelligent transportation management, where aerial and ground systems work together on perception, communication, and decision-making. However, the lack of a unified multi-modal simulation environment has limited progress in studying cross-domain perception, coordination under communication constraints, and joint decision optimization. To address this gap, we present TranSimHub, a unified simulation platform for air-ground collaborative intelligence. TranSimHub offers synchronized multi-view rendering across RGB, depth, and semantic segmentation modalities, ensuring consistent perception between aerial and ground viewpoints. It also supports information exchange between the two domains and includes a causal scene editor that enables controllable scenario creation and counterfactual analysis under diverse conditions such as different weather, emergency events, and dynamic obstacles. We release TranSimHub as an open-source platform that supports end-to-end research on perception, fusion, and control across realistic air and ground traffic scenes. Our code is available at https://github.com/Traffic-Alpha/TransSimHub.
+ oai:arXiv.org:2510.15365v2
+ eess.SY
+ cs.LG
+ cs.MA
+ cs.SY
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Shayan Rokhva, Mousa Alizadeh, Maryam Abdollahi Shamami
+ Maonan Wang, Yirong Chen, Yuxin Cai, Aoyu Pang, Yuejiao Xie, Zian Ma, Chengcheng Xu, Kemou Jiang, Ding Wang, Laurent Roullet, Chung Shue Chen, Zhiyong Cui, Yuheng Kan, Michael Lepech, Man-On PunExecutable Epistemology: The Structured Cognitive Loop as an Architecture of Intentional Understanding
https://arxiv.org/abs/2510.15952
- arXiv:2510.15952v3 Announce Type: replace
+ arXiv:2510.15952v4 Announce Type: replace
Abstract: Large language models exhibit intelligence without genuine epistemic understanding, exposing a key gap: the absence of epistemic architecture. This paper introduces the Structured Cognitive Loop (SCL) as an executable epistemological framework for emergent intelligence. Unlike traditional AI research asking "what is intelligence?" (ontological), SCL asks "under what conditions does cognition emerge?" (epistemological). Grounded in philosophy of mind and cognitive phenomenology, SCL bridges conceptual philosophy and implementable cognition. Drawing on process philosophy, enactive cognition, and extended mind theory, we define intelligence not as a property but as a performed process -- a continuous loop of judgment, memory, control, action, and regulation. SCL makes three contributions. First, it operationalizes philosophical insights into computationally interpretable structures, enabling "executable epistemology" -- philosophy as structural experiment. Second, it shows that functional separation within cognitive architecture yields more coherent and interpretable behavior than monolithic prompt based systems, supported by agent evaluations. Third, it redefines intelligence: not representational accuracy but the capacity to reconstruct its own epistemic state through intentional understanding. This framework impacts philosophy of mind, epistemology, and AI. For philosophy, it allows theories of cognition to be enacted and tested. For AI, it grounds behavior in epistemic structure rather than statistical regularity. For epistemology, it frames knowledge not as truth possession but as continuous reconstruction within a phenomenologically coherent loop. We situate SCL within debates on cognitive phenomenology, emergence, normativity, and intentionality, arguing that real progress requires not larger models but architectures that realize cognitive principles structurally.
- oai:arXiv.org:2510.15952v3
+ oai:arXiv.org:2510.15952v4cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by-nc-sa/4.0/Myung Ho Kim
- Colliding with Adversaries at ECML-PKDD 2025 Model Robustness Competition 1st Prize Solution
- https://arxiv.org/abs/2510.16443
- arXiv:2510.16443v2 Announce Type: replace
-Abstract: This report presents the winning solution for Task 2 of Colliding with Adversaries: A Challenge on Robust Learning in High Energy Physics Discovery at ECML-PKDD 2025. The goal of the challenge was to design and train a robust ANN-based model capable of achieving high accuracy in a binary classification task on both clean and adversarial data generated with the Random Distribution Shuffle Attack (RDSA). Our solution consists of two components: a data generation phase and a robust model training phase. In the first phase, we produced 15 million artificial training samples using a custom methodology derived from Random Distribution Shuffle Attack (RDSA). In the second phase, we introduced a robust architecture comprising (i)a Feature Embedding Block with shared weights among features of the same type and (ii)a Dense Fusion Tail responsible for the final prediction. Training this architecture on our adversarial dataset achieved a mixed accuracy score of 80\%, exceeding the second-place solution by two percentage points.
- oai:arXiv.org:2510.16443v2
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Dimitris Stefanopoulos, Andreas Voskou
-
-
- DSEBench: A Test Collection for Explainable Dataset Search with Examples
- https://arxiv.org/abs/2510.17228
- arXiv:2510.17228v2 Announce Type: replace
-Abstract: Dataset search is a well-established task in the Semantic Web and information retrieval research. Current approaches retrieve datasets either based on keyword queries or by identifying datasets similar to a given target dataset. These paradigms fail when the information need involves both keywords and target datasets. To address this gap, we investigate a generalized task, Dataset Search with Examples (DSE), and extend it to Explainable DSE (ExDSE), which further requires identifying relevant fields of the retrieved datasets. We construct DSEBench, the first test collection that provides high-quality dataset-level and field-level annotations to support the evaluation of DSE and ExDSE, respectively. In addition, we employ a large language model to generate extensive annotations for training purposes. We establish comprehensive baselines on DSEBench by adapting and evaluating a variety of lexical, dense, and LLM-based retrieval, reranking, and explanation methods.
- oai:arXiv.org:2510.17228v2
- cs.IR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Human or AI? Comparing Design Thinking Assessments by Teaching Assistants and Bots
+ https://arxiv.org/abs/2510.16069
+ arXiv:2510.16069v2 Announce Type: replace
+Abstract: As design thinking education grows in secondary and tertiary contexts, educators face the challenge of evaluating creative artefacts that combine visual and textual elements. Traditional rubric-based assessment is laborious, time-consuming, and inconsistent due to reliance on Teaching Assistants (TA) in large, multi-section cohorts. This paper presents an exploratory study investigating the reliability and perceived accuracy of AI-assisted assessment compared to TA-assisted assessment in evaluating student posters in design thinking education. Two activities were conducted with 33 Ministry of Education (MOE) Singapore school teachers to (1) compare AI-generated scores with TA grading across three key dimensions: empathy and user understanding, identification of pain points and opportunities, and visual communication, and (2) examine teacher preferences for AI-assigned, TA-assigned, and hybrid scores. Results showed low statistical agreement between instructor and AI scores for empathy and pain points, with slightly higher alignment for visual communication. Teachers preferred TA-assigned scores in six of ten samples. Qualitative feedback highlighted the potential of AI for formative feedback, consistency, and student self-reflection, but raised concerns about its limitations in capturing contextual nuance and creative insight. The study underscores the need for hybrid assessment models that integrate computational efficiency with human insights. This research contributes to the evolving conversation on responsible AI adoption in creative disciplines, emphasizing the balance between automation and human judgment for scalable and pedagogically sound assessment.
+ oai:arXiv.org:2510.16069v2
+ cs.CY
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Qing Shi, Jing He, Qiaosheng Chen, Gong Cheng
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sumbul Khan, Wei Ting Liow, Lay Kee Ang
- MemoryBench: A Benchmark for Memory and Continual Learning in LLM Systems
- https://arxiv.org/abs/2510.17281
- arXiv:2510.17281v3 Announce Type: replace
-Abstract: Scaling up data, parameters, and test-time computation has been the mainstream methods to improve LLM systems (LLMsys), but their upper bounds are almost reached due to the gradual depletion of high-quality data and marginal gains obtained from larger computational resource consumption. Inspired by the abilities of human and traditional AI systems in learning from practice, constructing memory and continual learning frameworks for LLMsys has become an important and popular research direction in recent literature. Yet, existing benchmarks for LLM memory often focus on evaluating the system on homogeneous reading comprehension tasks with long-form inputs rather than testing their abilities to learn from accumulated user feedback in service time. Therefore, we propose a user feedback simulation framework and a comprehensive benchmark covering multiple domains, languages, and types of tasks to evaluate the continual learning abilities of LLMsys. Experiments show that the effectiveness and efficiency of state-of-the-art baselines are far from satisfying, and we hope this benchmark could pave the way for future studies on LLM memory and optimization algorithms.
- oai:arXiv.org:2510.17281v3
- cs.LG
- cs.AI
- cs.IR
- Thu, 11 Dec 2025 00:00:00 -0500
+ UltraCUA: A Foundation Model for Computer Use Agents with Hybrid Action
+ https://arxiv.org/abs/2510.17790
+ arXiv:2510.17790v2 Announce Type: replace
+Abstract: Computer-use agents face a fundamental limitation. They rely exclusively on primitive GUI actions (click, type, scroll), creating brittle execution chains prone to cascading failures. While API-driven agents harness rich capabilities through structured interfaces and tools, computer-use agents remain constrained to low-level visual interactions. We present UltraCUA, a foundation model that transcends this limitation through hybrid action-seamlessly unifying primitive GUI operations with high-level tool execution. Our innovation rests on four critical advances. First, an automated pipeline extracts and scales tool capabilities from software documentation and code repositories. Second, a synthetic data engine produces 17,000+ verifiable tasks capturing real-world computer-use complexity. Third, comprehensive hybrid action trajectory collection incorporates both GUI primitives and strategic tool calls. Fourth, a two-stage training methodology combines supervised fine-tuning with online reinforcement learning, enabling intelligent action selection between GUI and API. Evaluation with our 7B and 32B UltraCUA models reveals transformative performance gains. On OSWorld, UltraCUA achieves 22% relative improvement while executing 11% faster than existing approaches, averagely. Cross-domain validation on WindowsAgentArena demonstrates robust generalization with 21.7% success rate, surpassing Windows-trained baselines. The hybrid action paradigm proves essential, reducing error propagation while improving execution efficiency. This work establishes a scalable paradigm bridging primitive GUI interactions and high-level tool intelligence, enabling more resilient and adaptable computer use agents for diverse environments and complex real-world tasks.
+ oai:arXiv.org:2510.17790v2
+ cs.CV
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qingyao Ai, Yichen Tang, Changyue Wang, Jianming Long, Weihang Su, Yiqun Liu
+ Yuhao Yang, Zhen Yang, Zi-Yi Dou, Anh Nguyen, Keen You, Omar Attia, Andrew Szot, Michael Feng, Ram Ramrakhya, Alexander Toshev, Chao Huang, Yinfei Yang, Zhe Gan
- PrivaDE: Privacy-preserving Data Evaluation for Blockchain-based Data Marketplaces
- https://arxiv.org/abs/2510.18109
- arXiv:2510.18109v2 Announce Type: replace
-Abstract: Evaluating the usefulness of data before purchase is essential when obtaining data for high-quality machine learning models, yet both model builders and data providers are often unwilling to reveal their proprietary assets.
- We present PrivaDE, a privacy-preserving protocol that allows a model owner and a data owner to jointly compute a utility score for a candidate dataset without fully exposing model parameters, raw features, or labels. PrivaDE provides strong security against malicious behavior and can be integrated into blockchain-based marketplaces, where smart contracts enforce fair execution and payment. To make the protocol practical, we propose optimizations to enable efficient secure model inference, and a model-agnostic scoring method that uses only a small, representative subset of the data while still reflecting its impact on downstream training. Evaluation shows that PrivaDE performs data evaluation effectively, achieving online runtimes within 15 minutes even for models with millions of parameters.
- Our work lays the foundation for fair and automated data marketplaces in decentralized machine learning ecosystems.
- oai:arXiv.org:2510.18109v2
- cs.CR
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ AlphaOPT: Formulating Optimization Programs with Self-Improving LLM Experience Library
+ https://arxiv.org/abs/2510.18428
+ arXiv:2510.18428v2 Announce Type: replace
+Abstract: Optimization modeling enables critical decisions across industries but remains difficult to automate: informal language must be mapped to precise mathematical formulations and executable solver code. Prior LLM approaches either rely on brittle prompting or costly retraining with limited generalization. We present AlphaOPT, a self-improving experience library that enables an LLM to learn from limited demonstrations (even answers alone, without gold-standard programs) and solver feedback - without annotated reasoning traces or parameter updates. AlphaOPT operates in a continual two-phase cycle: (i) a Library Learning phase that reflects on failed attempts, extracting solver-verified, structured insights as {taxonomy, condition, explanation, example}; and (ii) a Library Evolution phase that diagnoses retrieval misalignments and refines the applicability conditions of stored insights, improving transfer across tasks. This design (1) learns efficiently from limited demonstrations without curated rationales, (2) expands continually without costly retraining by updating the library rather than model weights, and (3) makes knowledge explicit and interpretable for human inspection and intervention. Experiments show that AlphaOPT steadily improves with more data (65% to 72% from 100 to 300 training items) and surpasses the strongest baseline by 7.7% on the out-of-distribution OptiBench dataset when trained only on answers. Code and data are available at: https://github.com/Minw913/AlphaOPT.
+ oai:arXiv.org:2510.18428v2
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Wan Ki Wong, Sahel Torkamani, Michele Ciampi, Rik Sarkar
+ Minwei Kong, Ao Qu, Xiaotong Guo, Wenbin Ouyang, Chonghe Jiang, Han Zheng, Yining Ma, Dingyi Zhuang, Yuhan Tang, Junyi Li, Shenhao Wang, Haris Koutsopoulos, Hai Wang, Cathy Wu, Jinhua Zhao
- Benchmarking World-Model Learning
- https://arxiv.org/abs/2510.19788
- arXiv:2510.19788v3 Announce Type: replace
-Abstract: Model-learning agents should gather information to learn world models that support many downstream tasks and inferences, such as predicting unobserved states, estimating near- and far-term consequences of actions, planning action sequences, and detecting changes in dynamics. Current methods for learning and evaluating world models diverge from this goal: training and evaluation are anchored to next-frame prediction, and success is scored by reward maximization in the same environment. We propose WorldTest, a protocol to evaluate model-learning agents that separates reward-free interaction from a scored test phase in a different but related environment. WorldTest is open-ended $\unicode{x2014}$ models should support many different tasks unknown ahead of time $\unicode{x2014}$ and agnostic to model representation, allowing comparison across approaches. We instantiated WorldTest with AutumnBench, a suite of 43 interactive grid-world environments and 129 tasks across three families: masked-frame prediction, planning, and predicting changes to the causal dynamics. We compared 517 human participants and three frontier models on AutumnBench. We found that humans outperform the models, and scaling compute improves performance only in some environments but not others. WorldTest provides a novel template $\unicode{x2014}$ reward-free exploration, derived tests, and behavior-based scoring $\unicode{x2014}$ to evaluate what agents learn about environment dynamics, and AutumnBench exposes significant headroom in world-model learning.
- oai:arXiv.org:2510.19788v3
+ Rethinking Driving World Model as Synthetic Data Generator for Perception Tasks
+ https://arxiv.org/abs/2510.19195
+ arXiv:2510.19195v3 Announce Type: replace
+Abstract: Recent advancements in driving world models enable controllable generation of high-quality RGB videos or multimodal videos. Existing methods primarily focus on metrics related to generation quality and controllability. However, they often overlook the evaluation of downstream perception tasks, which are $\mathbf{really\ crucial}$ for the performance of autonomous driving. Existing methods usually leverage a training strategy that first pretrains on synthetic data and finetunes on real data, resulting in twice the epochs compared to the baseline (real data only). When we double the epochs in the baseline, the benefit of synthetic data becomes negligible. To thoroughly demonstrate the benefit of synthetic data, we introduce Dream4Drive, a novel synthetic data generation framework designed for enhancing the downstream perception tasks. Dream4Drive first decomposes the input video into several 3D-aware guidance maps and subsequently renders the 3D assets onto these guidance maps. Finally, the driving world model is fine-tuned to produce the edited, multi-view photorealistic videos, which can be used to train the downstream perception models. Dream4Drive enables unprecedented flexibility in generating multi-view corner cases at scale, significantly boosting corner case perception in autonomous driving. To facilitate future research, we also contribute a large-scale 3D asset dataset named DriveObj3D, covering the typical categories in driving scenarios and enabling diverse 3D-aware video editing. We conduct comprehensive experiments to show that Dream4Drive can effectively boost the performance of downstream perception models under various training epochs. Page: https://wm-research.github.io/Dream4Drive/ GitHub Link: https://github.com/wm-research/Dream4Drive
+ oai:arXiv.org:2510.19195v3
+ cs.CVcs.AI
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-sa/4.0/
- Archana Warrier, Dat Nguyen, Michelangelo Naim, Moksh Jain, Yichao Liang, Karen Schroeder, Cambridge Yang, Joshua B. Tenenbaum, Sebastian Vollmer, Kevin Ellis, Zenna Tavares
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Kai Zeng, Zhanqian Wu, Kaixin Xiong, Xiaobao Wei, Xiangyu Guo, Zhenxin Zhu, Kalok Ho, Lijun Zhou, Bohan Zeng, Ming Lu, Haiyang Sun, Bing Wang, Guang Chen, Hangjun Ye, Wentao Zhang
- Enhancing Reasoning Skills in Small Persian Medical Language Models Can Outperform Large-Scale Data Training
- https://arxiv.org/abs/2510.20059
- arXiv:2510.20059v4 Announce Type: replace
-Abstract: Enhancing reasoning capabilities in small language models is critical for specialized applications such as medical question answering, particularly in underrepresented languages like Persian. In this study, we employ Reinforcement Learning with AI Feedback (RLAIF) and Direct preference optimization (DPO) to improve the reasoning skills of a general-purpose Persian language model. To achieve this, we translated a multiple-choice medical question-answering dataset into Persian and used RLAIF to generate rejected-preferred answer pairs, which are essential for DPO training. By prompting both teacher and student models to produce Chain-of-Thought (CoT) reasoning responses, we compiled a dataset containing correct and incorrect reasoning trajectories. This dataset, comprising 2 million tokens in preferred answers and 2.5 million tokens in rejected ones, was used to train a baseline model, significantly enhancing its medical reasoning capabilities in Persian. Remarkably, the resulting model outperformed its predecessor, gaokerena-V, which was trained on approximately 57 million tokens, despite leveraging a much smaller dataset. These results highlight the efficiency and effectiveness of reasoning-focused training approaches in developing domain-specific language models with limited data availability.
- oai:arXiv.org:2510.20059v4
+ TheMCPCompany: Creating General-purpose Agents with Task-specific Tools
+ https://arxiv.org/abs/2510.19286
+ arXiv:2510.19286v2 Announce Type: replace
+Abstract: Since the introduction of the Model Context Protocol (MCP), the number of available tools for Large Language Models (LLMs) has increased significantly. These task-specific tool sets offer an alternative to general-purpose tools such as web browsers, while being easier to develop and maintain than GUIs. However, current general-purpose agents predominantly rely on web browsers for interacting with the environment. Here, we introduce TheMCPCompany, a benchmark for evaluating tool-calling agents on tasks that involve interacting with various real-world services. We use the REST APIs of these services to create MCP servers, which include over 18,000 tools. We also provide manually annotated ground-truth tools for each task. In our experiments, we use the ground truth tools to show the potential of tool-calling agents for both improving performance and reducing costs assuming perfect tool retrieval. Next, we explore agent performance using tool retrieval to study the real-world practicality of tool-based agents. While all models with tool retrieval perform similarly or better than browser-based agents, smaller models cannot take full advantage of the available tools through retrieval. On the other hand, GPT-5's performance with tool retrieval is very close to its performance with ground-truth tools. Overall, our work shows that the most advanced reasoning models are effective at discovering tools in simpler environments, but seriously struggle with navigating complex enterprise environments. TheMCPCompany reveals that navigating tens of thousands of tools and combining them in non-trivial ways to solve complex problems is still a challenging task for current models and requires both better reasoning and better retrieval models.
+ oai:arXiv.org:2510.19286v2cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Mehrdad Ghassabi, Sadra Hakim, Hamidreza Baradaran Kashani, Pedram Rostami
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Reza Esfandiarpoor, Vishwas Suryanarayanan, Stephen H. Bach, Vishal Chowdhary, Anthony Aue
- Anderson-type acceleration method for Deep Neural Network optimization
- https://arxiv.org/abs/2510.20254
- arXiv:2510.20254v2 Announce Type: replace
-Abstract: In this paper we consider the neural network optimization. We develop Anderson-type acceleration method for the stochastic gradient decent method and it improves the network permanence very much. We demonstrate the applicability of the method for Deep Neural Network (DNN) and Convolution Neural Network (CNN).
- oai:arXiv.org:2510.20254v2
- math.NA
- cs.NA
- math.OC
- Thu, 11 Dec 2025 00:00:00 -0500
+ SEA: Semantic Map Prediction for Active Exploration of Uncertain Areas
+ https://arxiv.org/abs/2510.19766
+ arXiv:2510.19766v2 Announce Type: replace
+Abstract: In this paper, we propose SEA, a novel approach for active robot exploration through semantic map prediction and a reinforcement learning-based hierarchical exploration policy. Unlike existing learning-based methods that rely on one-step waypoint prediction, our approach enhances the agent's long-term environmental understanding to facilitate more efficient exploration. We propose an iterative prediction-exploration framework that explicitly predicts the missing areas of the map based on current observations. The difference between the actual accumulated map and the predicted global map is then used to guide exploration. Additionally, we design a novel reward mechanism that leverages reinforcement learning to update the long-term exploration strategies, enabling us to construct an accurate semantic map within limited steps. Experimental results demonstrate that our method significantly outperforms state-of-the-art exploration strategies, achieving superior coverage ares of the global map within the same time constraints.
+ oai:arXiv.org:2510.19766v2
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kazufumi Ito, Tiancheng Xue
+ Hongyu Ding, Xinyue Liang, Yudong Fang, You Wu, Jieqi Shi, Jing Huo, Wenbin Li, Jing Wu, Yu-Kun Lai, Yang Gao
- Neural Diversity Regularizes Hallucinations in Language Models
- https://arxiv.org/abs/2510.20690
- arXiv:2510.20690v2 Announce Type: replace
-Abstract: Language models continue to hallucinate despite increases in parameters, compute, and data. We propose neural diversity -- decorrelated parallel representations -- as a principled mechanism that reduces hallucination rates at fixed parameter and data budgets. While existing mitigation strategies largely target accuracy, we provide the first formal tail bounds for hallucination probability in ensembled language models, reframing it as a second-moment reliability problem and explaining 94.3% of empirical reliability variation seen across parallel configurations. We introduce ND-LoRA (Neural Diversity Low-Rank Adaptation), combining parallel LoRA adapters with Barlow Twins regularization, and reduce hallucinations by up to 25.6% (and 14.6% on average) while preserving general accuracy. Ablations show LoRA adapters and regularization act synergistically, causal interventions prove neurodiversity as the mediating factor and correlational studies indicate scale: a 0.1% neural correlation increase is associated with a 3.8% hallucination increase. Finally, task-dependent optimality emerges: different tasks require different optimal amounts of neurodiversity. Together, our results highlight neural diversity as a third axis of scaling -- orthogonal to parameters and data -- to improve the reliability of language models at fixed budgets.
- oai:arXiv.org:2510.20690v2
- cs.CL
- cs.AI
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ From Factoid Questions to Data Product Requests: Benchmarking Data Product Discovery over Tables and Text
+ https://arxiv.org/abs/2510.21737
+ arXiv:2510.21737v2 Announce Type: replace
+Abstract: Data products are reusable, self-contained assets designed for specific business use cases. Automating their discovery and generation is of great industry interest, as it enables discovery in large data lakes and supports analytical Data Product Requests (DPRs). Currently, there is no benchmark established specifically for data product discovery. Existing datasets focus on answering single factoid questions over individual tables rather than collecting multiple data assets for broader, coherent products. To address this gap, we introduce DPBench, the first user-request-driven data product benchmark over hybrid table-text corpora. Our framework systematically repurposes existing table-text QA datasets by clustering related tables and passages into coherent data products, generating professional-level analytical requests that span both data sources, and validating benchmark quality through multi-LLM evaluation. DPBench preserves full provenance while producing actionable, analyst-like data product requests. Baseline experiments with hybrid retrieval methods establish the feasibility of DPR evaluation, reveal current limitations, and point to new opportunities for automatic data product discovery research.
+ Code and datasets are available at: https://anonymous.4open.science/r/data-product-benchmark-BBA7/
+ oai:arXiv.org:2510.21737v2
+ cs.IR
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Kushal Chakrabarti, Nirmal Balachundhar
+ Liangliang Zhang, Nandana Mihindukulasooriya, Niharika S. D'Souza, Sola Shirai, Sarthak Dash, Yao Ma, Horst Samulowitz
- Smaller Models, Smarter Rewards: A Two-Sided Approach to Process and Outcome Rewards
- https://arxiv.org/abs/2510.23083
- arXiv:2510.23083v3 Announce Type: replace
-Abstract: Generating high-quality code remains a challenge for Large Language Models (LLMs). For the evolution of reasoning models on this task, reward models are a necessary intermediate step. These models judge outcomes or intermediate steps. Decoder-only transformer models can be turned into reward models by introducing a regression layer and supervised fine-tuning. While it is known that reflection capabilities generally increase with the size of a model, we want to investigate whether state-of-the-art small language models like the Phi-4 family can be turned into usable reward models blending the consideration of process rewards and outcome rewards.
- Targeting this goal, we construct a dataset of code samples with correctness labels derived from the APPS coding challenge benchmark. We then train a value-head model to estimate the success probability of intermediate outputs. Our evaluation shows that small LLMs are capable of serving as effective reward models or code evaluation critics, successfully identifying correct solutions among multiple candidates. Using this critic, we achieve over a 20% improvement in the search capability of the most accurate code out of multiple generations.
- oai:arXiv.org:2510.23083v3
- cs.AI
+ An efficient probabilistic hardware architecture for diffusion-like models
+ https://arxiv.org/abs/2510.23972
+ arXiv:2510.23972v2 Announce Type: replace
+Abstract: The proliferation of probabilistic AI has prompted proposals for specialized stochastic computers. Despite promising efficiency gains, these proposals have failed to gain traction because they rely on fundamentally limited modeling techniques and exotic, unscalable hardware. In this work, we address these shortcomings by proposing an all-transistor probabilistic computer that implements powerful denoising models at the hardware level. A system-level analysis indicates that devices based on our architecture could achieve performance parity with GPUs on a simple image benchmark using approximately 10,000 times less energy.
+ oai:arXiv.org:2510.23972v2cs.LG
- cs.SE
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by-nc-nd/4.0/
- Jan Niklas Groeneveld, Xi Qin, Alexander Schaefer, Yaad Oren
+ Andra\v{z} Jelin\v{c}i\v{c}, Owen Lockwood, Akhil Garlapati, Peter Schillinger, Isaac Chuang, Guillaume Verdon, Trevor McCourt
- Breaking the Circle: An Autonomous Control-Switching Strategy for Stable Orographic Soaring in MAVs
- https://arxiv.org/abs/2510.23084
- arXiv:2510.23084v2 Announce Type: replace
-Abstract: Orographic soaring can significantly extend the endurance of micro aerial vehicles (MAVs), but circling behavior, arising from control conflicts between the longitudinal and vertical axes, increases energy consumption and the risk of divergence. We propose a control switching method, named SAOS: Switched Control for Autonomous Orographic Soaring, which mitigates circling behavior by selectively controlling either the horizontal or vertical axis, effectively transforming the system from underactuated to fully actuated during soaring. Additionally, the angle of attack is incorporated into the INDI controller to improve force estimation. Simulations with randomized initial positions and wind tunnel experiments on two MAVs demonstrate that the SAOS improves position convergence, reduces throttle usage, and mitigates roll oscillations caused by pitch-roll coupling. These improvements enhance energy efficiency and flight stability in constrained soaring environments.
- oai:arXiv.org:2510.23084v2
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ LeMiCa: Lexicographic Minimax Path Caching for Efficient Diffusion-Based Video Generation
+ https://arxiv.org/abs/2511.00090
+ arXiv:2511.00090v3 Announce Type: replace
+Abstract: We present LeMiCa, a training-free and efficient acceleration framework for diffusion-based video generation. While existing caching strategies primarily focus on reducing local heuristic errors, they often overlook the accumulation of global errors, leading to noticeable content degradation between accelerated and original videos. To address this issue, we formulate cache scheduling as a directed graph with error-weighted edges and introduce a Lexicographic Minimax Path Optimization strategy that explicitly bounds the worst-case path error. This approach substantially improves the consistency of global content and style across generated frames. Extensive experiments on multiple text-to-video benchmarks demonstrate that LeMiCa delivers dual improvements in both inference speed and generation quality. Notably, our method achieves a 2.9x speedup on the Latte model and reaches an LPIPS score of 0.05 on Open-Sora, outperforming prior caching techniques. Importantly, these gains come with minimal perceptual quality degradation, making LeMiCa a robust and generalizable paradigm for accelerating diffusion-based video generation. We believe this approach can serve as a strong foundation for future research on efficient and reliable video synthesis. Our code is available at :https://github.com/UnicomAI/LeMiCa
+ oai:arXiv.org:2511.00090v3
+ cs.CV
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Sunyou Hwang, Christophe De Wagter, Bart Remes, Guido de Croon
+ Huanlin Gao, Ping Chen, Fuyuan Shi, Chao Tan, Zhaoxiang Liu, Fang Zhao, Kai Wang, Shiguo Lian
- Beyond the Failures: Rethinking Foundation Models in Pathology
- https://arxiv.org/abs/2510.23807
- arXiv:2510.23807v4 Announce Type: replace
-Abstract: Despite their successes in vision and language, foundation models have stumbled in pathology, revealing low accuracy, instability, and heavy computational demands. These shortcomings stem not from tuning problems but from deeper conceptual mismatches: dense embeddings cannot represent the combinatorial richness of tissue, and current architectures inherit flaws in self-supervision, patch design, and noise-fragile pretraining. Biological complexity and limited domain innovation further widen the gap. The evidence is clear-pathology requires models explicitly designed for biological images rather than adaptations of large-scale natural-image methods whose assumptions do not hold for tissue.
- oai:arXiv.org:2510.23807v4
- cs.AI
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ A Hypergraph based lower bound on Pliable Index Coding based on Nested Side-Information Sets
+ https://arxiv.org/abs/2511.01539
+ arXiv:2511.01539v2 Announce Type: replace
+Abstract: In pliable index coding (PICOD), a number of clients are connected via a noise-free broadcast channel to a server which has a list of messages. Each client has a unique subset of messages at the server as side-information, and requests for any one message not in the side-information. A PICOD scheme of length $\ell$ is a set of $\ell$ encoded transmissions broadcast from the server such that all clients are satisfied. Finding the optimal (minimum) length of PICOD and designing PICOD schemes that have small length are the fundamental questions in PICOD. In this paper, we present a new lower bound for the optimal PICOD length using a new structural parameter called the nesting number, denoted by $\eta(\ch)$ associated with the hypergraph $\ch$ that represents the PICOD problem. While the nesting number bound is not stronger than previously known bounds, it can provide some computational advantages over them. Also, using the nesting number bound, we obtain novel lower bounds for some PICOD problems with special structures, which are tight in some cases.
+ oai:arXiv.org:2511.01539v2
+ cs.IT
+ math.IT
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Hamid R. Tizhoosh
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tulasi Sowjanya B., Prasad Krishnan
- EthVault: A Secure and Resource-Conscious FPGA-Based Ethereum Cold Wallet
- https://arxiv.org/abs/2510.23847
- arXiv:2510.23847v2 Announce Type: replace
-Abstract: Cryptocurrency blockchain networks safeguard digital assets using cryptographic keys, with wallets playing a critical role in generating, storing, and managing these keys. Wallets, typically categorized as hot and cold, offer varying degrees of security and convenience. However, they are generally software-based applications running on microcontrollers. Consequently, they are vulnerable to malware and side-channel attacks, allowing perpetrators to extract private keys by targeting critical algorithms, such as ECC, which processes private keys to generate public keys and authorize transactions. To address these issues, this work presents EthVault, the first hardware architecture for an Ethereum hierarchically deterministic cold wallet, featuring hardware implementations of key algorithms for secure key generation. Also, an ECC architecture resilient to side-channel and timing attacks is proposed. Moreover, an architecture of the child key derivation function, a fundamental component of cryptocurrency wallets, is proposed. The design minimizes resource usage, meeting market demand for small, portable cryptocurrency wallets. FPGA implementation results validate the feasibility of the proposed approach. The ECC architecture exhibits uniform execution behavior across varying inputs, while the complete design utilizes only 27%, 7%, and 6% of LUTs, registers, and RAM blocks, respectively, on a Xilinx Zynq UltraScale+ FPGA
- oai:arXiv.org:2510.23847v2
- cs.CR
- eess.SP
- Thu, 11 Dec 2025 00:00:00 -0500
+ From the Laboratory to Real-World Application: Evaluating Zero-Shot Scene Interpretation on Edge Devices for Mobile Robotics
+ https://arxiv.org/abs/2511.02427
+ arXiv:2511.02427v2 Announce Type: replace
+Abstract: Video Understanding, Scene Interpretation and Commonsense Reasoning are highly challenging tasks enabling the interpretation of visual information, allowing agents to perceive, interact with and make rational decisions in its environment. Large Language Models (LLMs) and Visual Language Models (VLMs) have shown remarkable advancements in these areas in recent years, enabling domain-specific applications as well as zero-shot open vocabulary tasks, combining multiple domains. However, the required computational complexity poses challenges for their application on edge devices and in the context of Mobile Robotics, especially considering the trade-off between accuracy and inference time. In this paper, we investigate the capabilities of state-of-the-art VLMs for the task of Scene Interpretation and Action Recognition, with special regard to small VLMs capable of being deployed to edge devices in the context of Mobile Robotics. The proposed pipeline is evaluated on a diverse dataset consisting of various real-world cityscape, on-campus and indoor scenarios. The experimental evaluation discusses the potential of these small models on edge devices, with particular emphasis on challenges, weaknesses, inherent model biases and the application of the gained information. Supplementary material is provided via the following repository: https://datahub.rz.rptu.de/hstr-csrl-public/publications/scene-interpretation-on-edge-devices/
+ oai:arXiv.org:2511.02427v2
+ cs.CV
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1049/blc2.70028
- IET Blockchain 5, no. 1 (2025): e70028
- Joel Poncha Lemayian, Ghyslain Gagnon, Kaiwen Zhang, Pascal Giard
+ 10.1007/978-3-032-11442-6_21
+ Artificial Intelligence XLII. SGAI-AI 2025. Lecture Notes in Computer Science, vol 16302. Springer, Cham (2026), pp 301-315
+ Nicolas Schuler, Lea Dewald, Nick Baldig, J\"urgen Graf
- TeleEgo: Benchmarking Egocentric AI Assistants in the Wild
- https://arxiv.org/abs/2510.23981
- arXiv:2510.23981v4 Announce Type: replace
-Abstract: Egocentric AI assistants in real-world settings must process multi-modal inputs (video, audio, text), respond in real time, and retain evolving long-term memory. However, existing benchmarks typically evaluate these abilities in isolation, lack realistic streaming scenarios, or support only short-term tasks. We introduce \textbf{TeleEgo}, a long-duration, streaming, omni-modal benchmark for evaluating egocentric AI assistants in realistic daily contexts. The dataset features over 14 hours per participant of synchronized egocentric video, audio, and text across four domains: work \& study, lifestyle \& routines, social activities, and outings \& culture. All data is aligned on a unified global timeline and includes high-quality visual narrations and speech transcripts, curated through human refinement.TeleEgo defines 12 diagnostic subtasks across three core capabilities: Memory (recalling past events), Understanding (interpreting the current moment), and Cross-Memory Reasoning (linking distant events). It contains 3,291 human-verified QA items spanning multiple question formats (single-choice, binary, multi-choice, and open-ended), evaluated strictly in a streaming setting. We propose Real-Time Accuracy (RTA) to jointly capture correctness and responsiveness under tight decision windows, and Memory Persistence Time (MPT) as a forward-looking metric for long-term retention in continuous streams. In this work, we report RTA results for current models and release TeleEgo, together with an MPT evaluation framework, as a realistic and extensible benchmark for future egocentric assistants with stronger streaming memory, enabling systematic study of both real-time behavior and long-horizon memory.
- oai:arXiv.org:2510.23981v4
+ Unsupervised Learning for Industrial Defect Detection: A Case Study on Shearographic Data
+ https://arxiv.org/abs/2511.02541
+ arXiv:2511.02541v2 Announce Type: replace
+Abstract: Shearography is a non-destructive testing method for detecting subsurface defects, offering high sensitivity and full-field inspection capabilities. However, its industrial adoption remains limited due to the need for expert interpretation. To reduce reliance on labeled data and manual evaluation, this study explores unsupervised learning methods for automated anomaly detection in shearographic images. Three architectures are evaluated: a fully connected autoencoder, a convolutional autoencoder, and a student-teacher feature matching model. All models are trained solely on defect-free data. A controlled dataset was developed using a custom specimen with reproducible defect patterns, enabling systematic acquisition of shearographic measurements under both ideal and realistic deformation conditions. Two training subsets were defined: one containing only undistorted, defect-free samples, and one additionally including globally deformed, yet defect-free, data. The latter simulates practical inspection conditions by incorporating deformation-induced fringe patterns that may obscure localized anomalies. The models are evaluated in terms of binary classification and, for the student-teacher model, spatial defect localization. Results show that the student-teacher approach achieves superior classification robustness and enables precise localization. Compared to the autoencoder-based models, it demonstrates improved separability of feature representations, as visualized through t-SNE embeddings. Additionally, a YOLOv8 model trained on labeled defect data serves as a reference to benchmark localization quality. This study underscores the potential of unsupervised deep learning for scalable, label-efficient shearographic inspection in industrial environments.
+ oai:arXiv.org:2511.02541v2cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jiaqi Yan, Ruilong Ren, Jingren Liu, Shuning Xu, Ling Wang, Yiheng Wang, Xinlin Zhong, Yun Wang, Long Zhang, Xiangyu Chen, Changzhi Sun, Jixiang Luo, Dell Zhang, Hao Sun, Chi Zhang, Xuelong Li
+ 10.1007/978-3-032-11442-6_22
+ Artificial Intelligence XLII. SGAI-AI 2025. Lecture Notes in Computer Science, vol 16302. Springer, Cham (2026), pp 316-329
+ Jessica Plassmann, Nicolas Schuler, Georg von Freymann, Michael Schuth
- Control Synthesis with Reinforcement Learning: A Modeling Perspective
- https://arxiv.org/abs/2510.25063
- arXiv:2510.25063v2 Announce Type: replace
-Abstract: Controllers designed with reinforcement learning can be sensitive to model mismatch. We demonstrate that designing such controllers in a virtual simulation environment with an inaccurate model is not suitable for deployment in a physical setup. Controllers designed using an accurate model is robust against disturbance and small mismatch between the physical setup and the mathematical model derived from first principles; while a poor model results in a controller that performs well in simulation but fails in physical experiments. Sensitivity analysis is used to justify these discrepancies and an empirical region of attraction estimation help us visualize their robustness.
- oai:arXiv.org:2510.25063v2
- eess.SY
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Verifying LLM Inference to Detect Model Weight Exfiltration
+ https://arxiv.org/abs/2511.02620
+ arXiv:2511.02620v2 Announce Type: replace
+Abstract: As large AI models become increasingly valuable assets, the risk of model weight exfiltration from inference servers grows accordingly. An attacker controlling an inference server may exfiltrate model weights by hiding them within ordinary model outputs, a strategy known as steganography. This work investigates how to verify model responses to defend against such attacks and, more broadly, to detect anomalous or buggy behavior during inference. We formalize model exfiltration as a security game, propose a verification framework that can provably mitigate steganographic exfiltration, and specify the trust assumptions associated with our scheme. To enable verification, we characterize valid sources of non-determinism in large language model inference and introduce two practical estimators for them. We evaluate our detection framework on several open-weight models ranging from 3B to 30B parameters. On MOE-Qwen-30B, our detector reduces exfiltratable information to <0.5% with false-positive rate of 0.01%, corresponding to a >200x slowdown for adversaries. Overall, this work further establishes a foundation for defending against model weight exfiltration and demonstrates that strong protection can be achieved with minimal additional cost to inference providers.
+ oai:arXiv.org:2511.02620v2
+ cs.CR
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Nikki Xu, Hien Tran
+ Roy Rinberg, Adam Karvonen, Alexander Hoover, Daniel Reuter, Keri Warr
+
+
+ SCALE: Upscaled Continual Learning of Large Language Models
+ https://arxiv.org/abs/2511.03270
+ arXiv:2511.03270v2 Announce Type: replace
+Abstract: We revisit continual pre-training for large language models and argue that progress now depends more on scaling the right structure than on scaling parameters alone. We introduce SCALE, a width upscaling architecture that inserts lightweight expansion into linear modules while freezing all pre-trained parameters. This preserves the residual and attention topologies and increases capacity without perturbing the base model's original functionality. SCALE is guided by two principles: Persistent Preservation, which maintains the base model's behavior via preservation-oriented initialization and freezing of the pre-trained weights, and Collaborative Adaptation, which selectively trains a subset of expansion components to acquire new knowledge with minimal interference. We instantiate these ideas as SCALE-Preserve (preservation-first), SCALE-Adapt (adaptation-first), and SCALE-Route, an optional routing extension that performs token-level routing between preservation and adaptation heads. On a controlled synthetic biography benchmark, SCALE mitigates the severe forgetting observed with depth expansion while still acquiring new knowledge. In continual pre-training on a Korean corpus, SCALE variants achieve less forgetting on English evaluations and competitive gains on Korean benchmarks, with these variants offering the best overall stability-plasticity trade-off. Accompanying analysis clarifies when preservation provably holds and why the interplay between preservation and adaptation stabilizes optimization compared to standard continual learning setups.
+ oai:arXiv.org:2511.03270v2
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jin-woo Lee, Junhwa Choi, Bongkyu Hwang, Jinho Choo, Bogun Kim, JeongSeon Yi, Joonseok Lee, DongYoung Jung, Jaeseon Park, Kyoungwon Park, Suk-hoon Jung
- A Practitioner's Guide to Kolmogorov-Arnold Networks
- https://arxiv.org/abs/2510.25781
- arXiv:2510.25781v2 Announce Type: replace
-Abstract: The so-called Kolmogorov-Arnold Networks (KANs), whose design is merely inspired, rather than dictated, by the Kolmogorov superposition theorem, have emerged as a promising alternative to traditional Multilayer Perceptrons (MLPs). This review provides a systematic and comprehensive overview of the rapidly expanding KAN landscape. By collecting and categorizing a large set of open-source implementations, we map the vibrant ecosystem supporting modern KAN development. We organize the review around four core themes:
- (i) presenting a precise history of Kolmogorov's superposition theory toward neural-network formulations; (ii) establishing the formal equivalence between KANs and MLPs; (iii) analyzing the critical role of basis functions; and (iv) organizing recent advancements in accuracy, efficiency, regularization, and convergence.
- Finally, we provide a practical Choose-Your-KAN guide to assist practitioners in selecting appropriate architectures, and we close by identifying current research gaps and future directions. The associated GitHub repository (https://github.com/AmirNoori68/kan-review) complements this paper and serves as a structured reference for ongoing KAN research.
- oai:arXiv.org:2510.25781v2
+ Magnitude-Modulated Equivariant Adapter for Parameter-Efficient Fine-Tuning of Equivariant Graph Neural Networks
+ https://arxiv.org/abs/2511.06696
+ arXiv:2511.06696v2 Announce Type: replace
+Abstract: Pretrained equivariant graph neural networks based on spherical harmonics offer efficient and accurate alternatives to computationally expensive ab-initio methods, yet adapting them to new tasks and chemical environments still requires fine-tuning. Conventional parameter-efficient fine-tuning (PEFT) techniques, such as Adapters and LoRA, typically break symmetry, making them incompatible with those equivariant architectures. ELoRA, recently proposed, is the first equivariant PEFT method. It achieves improved parameter efficiency and performance on many benchmarks. However, the relatively high degrees of freedom it retains within each tensor order can still perturb pretrained feature distributions and ultimately degrade performance. To address this, we present Magnitude-Modulated Equivariant Adapter (MMEA), a novel equivariant fine-tuning method which employs lightweight scalar gating to modulate feature magnitudes on a per-order and per-multiplicity basis. We demonstrate that MMEA preserves strict equivariance and, across multiple benchmarks, consistently improves energy and force predictions to state-of-the-art levels while training fewer parameters than competing approaches. These results suggest that, in many practical scenarios, modulating channel magnitudes is sufficient to adapt equivariant models to new chemical environments without breaking symmetry, pointing toward a new paradigm for equivariant PEFT design.
+ oai:arXiv.org:2511.06696v2cs.LGcs.AI
- cs.NA
- cs.NE
- math.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Amir Noorizadegan, Sifan Wang, Leevan Ling
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Dian Jin, Yancheng Yuan, Xiaoming Tao
- Structural Plasticity as Active Inference: A Biologically-Inspired Architecture for Homeostatic Control
- https://arxiv.org/abs/2511.02241
- arXiv:2511.02241v3 Announce Type: replace
-Abstract: Traditional neural networks, while powerful, rely on biologically implausible learning mechanisms such as global backpropagation. This paper introduces the Structurally Adaptive Predictive Inference Network (SAPIN), a novel computational model inspired by the principles of active inference and the morphological plasticity observed in biological neural cultures. SAPIN operates on a 2D grid where processing units, or cells, learn by minimizing local prediction errors. The model features two primary, concurrent learning mechanisms: a local, Hebbian-like synaptic plasticity rule based on the temporal difference between a cell's actual activation and its learned expectation, and a structural plasticity mechanism where cells physically migrate across the grid to optimize their information-receptive fields. This dual approach allows the network to learn both how to process information (synaptic weights) and also where to position its computational resources (network topology). We validated the SAPIN model on the classic Cart Pole reinforcement learning benchmark. Our results demonstrate that the architecture can successfully solve the CartPole task, achieving robust performance. The network's intrinsic drive to minimize prediction error and maintain homeostasis was sufficient to discover a stable balancing policy. We also found that while continual learning led to instability, locking the network's parameters after achieving success resulted in a stable policy. When evaluated for 100 episodes post-locking (repeated over 100 successful agents), the locked networks maintained an average 82% success rate.
- oai:arXiv.org:2511.02241v3
- cs.NE
- cs.AI
+ OutSafe-Bench: A Benchmark for Multimodal Offensive Content Detection in Large Language Models
+ https://arxiv.org/abs/2511.10287
+ arXiv:2511.10287v3 Announce Type: replace
+Abstract: Since Multimodal Large Language Models (MLLMs) are increasingly being integrated into everyday tools and intelligent agents, growing concerns have arisen regarding their possible output of unsafe contents, ranging from toxic language and biased imagery to privacy violations and harmful misinformation. Current safety benchmarks remain highly limited in both modality coverage and performance evaluations, often neglecting the extensive landscape of content safety. In this work, we introduce OutSafe-Bench, the first most comprehensive content safety evaluation test suite designed for the multimodal era. OutSafe-Bench includes a large-scale dataset that spans four modalities, featuring over 18,000 bilingual (Chinese and English) text prompts, 4,500 images, 450 audio clips and 450 videos, all systematically annotated across nine critical content risk categories. In addition to the dataset, we introduce a Multidimensional Cross Risk Score (MCRS), a novel metric designed to model and assess overlapping and correlated content risks across different categories. To ensure fair and robust evaluation, we propose FairScore, an explainable automated multi-reviewer weighted aggregation framework. FairScore selects top-performing models as adaptive juries, thereby mitigating biases from single-model judgments and enhancing overall evaluation reliability. Our evaluation of nine state-of-the-art MLLMs reveals persistent and substantial safety vulnerabilities, underscoring the pressing need for robust safeguards in MLLMs.
+ oai:arXiv.org:2511.10287v3cs.LG
- q-bio.NC
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuping Yan, Yuhan Xie, Yuanshuai Li, Yingchao Yu, Lingjuan Lyu, Yaochu Jin
+
+
+ Beyond the Hype: Critical Analysis of Student Motivations and Ethical Boundaries in Educational AI Use in Higher Education
+ https://arxiv.org/abs/2511.11369
+ arXiv:2511.11369v2 Announce Type: replace
+Abstract: The rapid integration of generative artificial intelligence (AI) in higher education since 2023 has outpaced institutional preparedness, creating a persistent gap between student practices and established ethical standards. This paper draws on mixed-method surveys and a focused literature review to examine student motivations, ethical dilemmas, gendered responses, and institutional readiness for AI adoption. We find that 92% of students use AI tools primarily to save time and improve work quality, yet only 36% receive formal guidance, producing a de facto "shadow pedagogy" of unguided workflows. Notably, 18% of students reported integrating AI-constructed material into assignments, which suggests confusion about integrity expectations and compromises the integrity of the assessment. Female students expressed greater concern about abuse and distortion of information than male students, revealing a gendered difference in awareness of risk and AI literacies. Correspondingly, 72% of educators use AI, but only 14% feel at ease doing so, reflecting limited training and uneven policy responses. We argue that institutions must adopt comprehensive AI literacy programs that integrate technical skills and ethical reasoning, alongside clear AI-use policies and assessment practices that promote transparency. The paper proposes an Ethical AI Integration Model centered on literacy, gender-inclusive support, and assessment redesign to guide responsible adoption, protect academic integrity, and foster equitable educational outcomes in an AI-driven landscape.
+ oai:arXiv.org:2511.11369v2
+ cs.CY
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Adeleh Mazaherian, Erfan Nourbakhsh
+
+
+ An Integrated SERVQUAL and Lean Six Sigma Framework for Measuring Customer Satisfaction in Computer Service Companies
+ https://arxiv.org/abs/2511.11723
+ arXiv:2511.11723v2 Announce Type: replace
+Abstract: The computer service industry has expanded rapidly over the past two decades, driven by the proliferation of computing technologies, the entry of large firms, and the availability of online diagnostic and troubleshooting tools. In this increasingly competitive environment, many small and medium sized enterprises struggle to maintain customer satisfaction as rivals deliver higher quality services at lower cost. This study addresses the absence of robust measurement systems for assessing service quality, a key factor underlying customer attrition, by proposing an integrated framework for evaluating satisfaction and identifying sources of dissatisfaction in computer services.
+ The framework combines core principles of Six Sigma with the SERVQUAL instrument within a structured DMAIC methodology (Define, Measure, Analyze, Improve, and Control). SERVQUAL provides the service quality dimensions and gap analysis techniques, while Six Sigma supplies the data driven approach to measurement and improvement. The literature suggests limited prior work integrating Lean Six Sigma with SERVQUAL, and this study contributes by operationalizing that integration in a real world setting.
+ A case study of a computer services company was conducted to demonstrate feasibility and effectiveness. Satisfaction levels were quantified, and root causes of dissatisfaction were identified. The analysis revealed a low overall satisfaction level and five primary drivers of unmet customer requirements. Addressing these causes is expected to increase customer satisfaction, lower customer acquisition costs, and improve overall organizational performance.
+ oai:arXiv.org:2511.11723v2
+ cs.CY
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Brennen A. Hill
+ Mohammed Abboodi
- DropleX: Liquid sensing on tablet touchscreens
- https://arxiv.org/abs/2511.02694
- arXiv:2511.02694v3 Announce Type: replace
-Abstract: We present DropleX, the first system that enables liquid sensing using the capacitive touchscreen of commodity tablets. DropleX detects microliter-scale liquid samples, and performs non-invasive, through-container measurements to detect whether a drink has been spiked or if a sealed liquid has been contaminated. These capabilities are made possible by a physics-informed mechanism that disables the touchscreen's built-in adaptive filters, originally designed to reject the effects of liquid drops such as rain, without any hardware modifications. We model the touchscreen's sensing capabilities, limits, and non-idealities to inform the design of a signal processing and learning-based pipeline for liquid sensing. Our system achieves 96-99% accuracy in detecting microliter-scale adulteration in soda, wine, and milk, 93-96% accuracy in threshold detection of trace chemical concentrations, and 86-96% accuracy in through-container adulterant detection. Given the predominance of touchscreens, these exploratory results can open new opportunities for liquid sensing on everyday devices.
- oai:arXiv.org:2511.02694v3
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Changes in Real Time: Online Scene Change Detection with Multi-View Fusion
+ https://arxiv.org/abs/2511.12370
+ arXiv:2511.12370v2 Announce Type: replace
+Abstract: Online Scene Change Detection (SCD) is an extremely challenging problem that requires an agent to detect relevant changes on the fly while observing the scene from unconstrained viewpoints. Existing online SCD methods are significantly less accurate than offline approaches. We present the first online SCD approach that is pose-agnostic, label-free, and ensures multi-view consistency, while operating at over 10 FPS and achieving new state-of-the-art performance, surpassing even the best offline approaches. Our method introduces a new self-supervised fusion loss to infer scene changes from multiple cues and observations, PnP-based fast pose estimation against the reference scene, and a fast change-guided update strategy for the 3D Gaussian Splatting scene representation. Extensive experiments on complex real-world datasets demonstrate that our approach outperforms both online and offline baselines.
+ oai:arXiv.org:2511.12370v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chamuditha Jayanga Galappaththige, Jason Lai, Lloyd Windrim, Donald Dansereau, Niko S\"underhauf, Dimity Miller
+
+
+ Seeing Through the Rain: Resolving High-Frequency Conflicts in Deraining and Super-Resolution via Diffusion Guidance
+ https://arxiv.org/abs/2511.12419
+ arXiv:2511.12419v2 Announce Type: replace
+Abstract: Clean images are crucial for visual tasks such as small object detection, especially at high resolutions. However, real-world images are often degraded by adverse weather, and weather restoration methods may sacrifice high-frequency details critical for analyzing small objects. A natural solution is to apply super-resolution (SR) after weather removal to recover both clarity and fine structures. However, simply cascading restoration and SR struggle to bridge their inherent conflict: removal aims to remove high-frequency weather-induced noise, while SR aims to hallucinate high-frequency textures from existing details, leading to inconsistent restoration contents. In this paper, we take deraining as a case study and propose DHGM, a Diffusion-based High-frequency Guided Model for generating clean and high-resolution images. DHGM integrates pre-trained diffusion priors with high-pass filters to simultaneously remove rain artifacts and enhance structural details. Extensive experiments demonstrate that DHGM achieves superior performance over existing methods, with lower costs.
+ oai:arXiv.org:2511.12419v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Siqi Zhang, Mayank Goel, Justin Chan
+ Wenjie Li, Jinglei Shi, Jin Han, Heng Guo, Zhanyu Ma
- SnapStream: Efficient Long Sequence Decoding on Dataflow Accelerators
- https://arxiv.org/abs/2511.03092
- arXiv:2511.03092v5 Announce Type: replace
-Abstract: The proliferation of 100B+ parameter Large Language Models (LLMs) with 100k+ context length support have resulted in increasing demands for on-chip memory to support large KV caches. Techniques such as StreamingLLM and SnapKV demonstrate how to control KV cache size while maintaining model accuracy. Yet, these techniques are not commonly used within industrial deployments using frameworks like vLLM or SGLang. The reason is twofold: on one hand, the static graphs and continuous batching methodology employed by these frameworks make it difficult to admit modifications to the standard multi-head attention algorithm, while on the other hand, the accuracy implications of such techniques on modern instruction-following and reasoning models are not well understood, obfuscating the need for implementing these techniques. In this paper, we explore these accuracy implications on Llama-3.1-8B-Instruct and DeepSeek-R1, and develop SnapStream, a KV cache compression method that can be deployed at scale. We demonstrate the efficacy of SnapStream in a 16-way tensor-parallel deployment of DeepSeek-671B on SambaNova SN40L accelerators running at 128k context length and up to 1832 tokens per second in a real production setting. SnapStream enables $4\times$ improved on-chip memory usage and introduces minimal accuracy degradation on LongBench-v2, AIME24 and LiveCodeBench. To the best of our knowledge, this is the first implementation of sparse KV attention techniques deployed in a production inference system with static graphs and continuous batching.
- oai:arXiv.org:2511.03092v5
- cs.AI
- cs.AR
- cs.DC
- Thu, 11 Dec 2025 00:00:00 -0500
+ FICO: Finite-Horizon Closed-Loop Factorization for Unified Multi-Agent Path Finding
+ https://arxiv.org/abs/2511.13961
+ arXiv:2511.13961v2 Announce Type: replace
+Abstract: Multi-Agent Path Finding is a fundamental problem in robotics and AI, yet most existing formulations treat planning and execution separately and address variants of the problem in an ad hoc manner. This paper presents a system-level framework for MAPF that integrates planning and execution, generalizes across variants, and explicitly models uncertainties. At its core is the MAPF system, a formal model that casts MAPF as a control design problem encompassing classical and uncertainty-aware formulations. To solve it, we introduce Finite-Horizon Closed-Loop Factorization (FICO), a factorization-based algorithm inspired by receding-horizon control that exploits compositional structure for efficient closed-loop operation. FICO enables real-time responses -- commencing execution within milliseconds -- while scaling to thousands of agents and adapting seamlessly to execution-time uncertainties. Extensive case studies demonstrate that it reduces computation time by up to two orders of magnitude compared with open-loop baselines, while delivering significantly higher throughput under stochastic delays and agent arrivals. These results establish a principled foundation for analyzing and advancing MAPF through system-level modeling, factorization, and closed-loop design.
+ oai:arXiv.org:2511.13961v2
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Jonathan Li, Nasim Farahini, Evgenii Iuliugin, Magnus Vesterlund, Christian H\"aggstr\"om, Guangtao Wang, Shubhangi Upasani, Ayush Sachdeva, Rui Li, Faline Fu, Chen Wu, Ayesha Siddiqua, John Long, Tuowen Zhao, Matheen Musaddiq, H\r{a}kan Zeffer, Yun Du, Mingran Wang, Qinghua Li, Bo Li, Urmish Thakker, Raghu Prabhakar
+ Jiarui Li, Alessandro Zanardi, Federico Pecora, Runyu Zhang, Gioele Zardini
+
+
+ Enforcing hidden physics in physics-informed neural networks
+ https://arxiv.org/abs/2511.14348
+ arXiv:2511.14348v2 Announce Type: replace
+Abstract: Physics-informed neural networks (PINNs) represent a new paradigm for solving partial differential equations (PDEs) by integrating physical laws into the learning process of neural networks. However, ensuring that such frameworks fully reflect the physical structure embedded in the governing equations remains an open challenge, particularly for maintaining robustness across diverse scientific problems. In this work, we address this issue by introducing a simple, generalized, yet robust irreversibility-regularized strategy that enforces hidden physical laws as soft constraints during training, thereby recovering the missing physics associated with irreversible processes in the conventional PINN. This approach ensures that the learned solutions consistently respect the intrinsic one-way nature of irreversible physical processes. Across a wide range of benchmarks spanning traveling wave propagation, steady combustion, ice melting, corrosion evolution, and crack growth, we observe substantial performance improvements over the conventional PINN, demonstrating that our regularization scheme reduces predictive errors by more than an order of magnitude, while requiring only minimal modification to existing PINN frameworks.
+ oai:arXiv.org:2511.14348v2
+ cs.LG
+ physics.comp-ph
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Nanxi Chen, Sifan Wang, Rujin Ma, Airong Chen, Chuanjie Cui
- PUL-SLAM: Path-Uncertainty Co-Optimization with Lightweight Stagnation Detection for Efficient Robotic Exploration
- https://arxiv.org/abs/2511.04180
- arXiv:2511.04180v2 Announce Type: replace
-Abstract: Existing Active SLAM methodologies face issues such as slow exploration speed and suboptimal paths. To address these limitations, we propose a hybrid framework combining a Path-Uncertainty Co-Optimization Deep Reinforcement Learning framework and a Lightweight Stagnation Detection mechanism. The Path-Uncertainty Co-Optimization framework jointly optimizes travel distance and map uncertainty through a dual-objective reward function, balancing exploration and exploitation. The Lightweight Stagnation Detection reduces redundant exploration through Lidar Static Anomaly Detection and Map Update Stagnation Detection, terminating episodes on low expansion rates. Experimental results show that compared with the frontier-based method and RRT method, our approach shortens exploration time by up to 65% and reduces path distance by up to 42%, significantly improving exploration efficiency in complex environments while maintaining reliable map completeness. Ablation studies confirm that the collaborative mechanism accelerates training convergence. Empirical validation on a physical robotic platform demonstrates the algorithm's practical applicability and its successful transferability from simulation to real-world environments.
- oai:arXiv.org:2511.04180v2
+ Continuous Vision-Language-Action Co-Learning with Semantic-Physical Alignment for Behavioral Cloning
+ https://arxiv.org/abs/2511.14396
+ arXiv:2511.14396v2 Announce Type: replace
+Abstract: Language-conditioned manipulation facilitates human-robot interaction via behavioral cloning (BC), which learns control policies from human demonstrations and serves as a cornerstone of embodied AI. Overcoming compounding errors in sequential action decisions remains a central challenge to improving BC performance. Existing approaches mitigate compounding errors through data augmentation, expressive representation, or temporal abstraction. However, they suffer from physical discontinuities and semantic-physical misalignment, leading to inaccurate action cloning and intermittent execution. In this paper, we present Continuous vision-language-action Co-Learning with Semantic-Physical Alignment (CCoL), a novel BC framework that ensures temporally consistent execution and fine-grained semantic grounding. It generates robust and smooth action execution trajectories through continuous co-learning across vision, language, and proprioceptive inputs (e.g., robot internal states). Meanwhile, we anchor language semantics to visuomotor representations by a bidirectional cross-attention to learn contextual information for action generation, successfully overcoming the problem of semantic-physical misalignment. Extensive experiments show that CCoL achieves an average 8.0% relative improvement across three simulation suites, with up to 19.2% relative gain in human-demonstrated bimanual insertion tasks. Real-world tests on a 7-DoF robot further confirm CCoL's generalization under unseen and noisy object states.
+ oai:arXiv.org:2511.14396v2cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yizhen Yin, Dapeng Feng, Hongbo Chen, Yuhua Qi
+ Xiuxiu Qi, Yu Yang, Jiannong Cao, Luyao Bai, Chongshan Fan, Chengtai Cao, Hongpeng Wang
- Model-free Adaptive Output Feedback Vibration Suppression in a Cantilever Beam
- https://arxiv.org/abs/2511.06084
- arXiv:2511.06084v2 Announce Type: replace
-Abstract: This paper presents a model-free adaptive control approach to suppress vibrations in a cantilevered beam excited by an unknown disturbance. The cantilevered beam under harmonic excitation is modeled using a lumped parameter approach. Based on retrospective cost optimization, a sampled-data adaptive controller is developed to suppress vibrations caused by external disturbances. Both displacement and acceleration measurements are considered for feedback. Since acceleration measurements are more sensitive to spillover, which excites higher frequency modes, a filter is developed to extract key displacement information from the acceleration data and enhance suppression performance. The vibration suppression performance is compared using both displacement and acceleration measurements.
- oai:arXiv.org:2511.06084v2
- eess.SY
- cs.RO
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Examining the Metrics for Document-Level Claim Extraction in Czech and Slovak
+ https://arxiv.org/abs/2511.14566
+ arXiv:2511.14566v2 Announce Type: replace
+Abstract: Document-level claim extraction remains an open challenge in the field of fact-checking, and subsequently, methods for evaluating extracted claims have received limited attention. In this work, we explore approaches to aligning two sets of claims pertaining to the same source document and computing their similarity through an alignment score. We investigate techniques to identify the best possible alignment and evaluation method between claim sets, with the aim of providing a reliable evaluation framework. Our approach enables comparison between model-extracted and human-annotated claim sets, serving as a metric for assessing the extraction performance of models and also as a possible measure of inter-annotator agreement. We conduct experiments on newly collected dataset-claims extracted from comments under Czech and Slovak news articles-domains that pose additional challenges due to the informal language, strong local context, and subtleties of these closely related languages. The results draw attention to the limitations of current evaluation approaches when applied to document-level claim extraction and highlight the need for more advanced methods-ones able to correctly capture semantic similarity and evaluate essential claim properties such as atomicity, checkworthiness, and decontextualization.
+ oai:arXiv.org:2511.14566v2
+ cs.CL
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Juan Augusto Paredes Salazar, Ankit Goel
+ In: Proceedings of the Nineteenth Workshop on Recent Advances in Slavonic Natural Language Processing, RASLAN 2025, Tribun EU, 2025, pp. 15-24, ISBN 978-80-263-1858-3, ISSN 2336-4289
+ Lucia Makaiova, Martin Fajcik, Antonin Jarolim
- ALIGN: A Vision-Language Framework for High-Accuracy Accident Location Inference through Geo-Spatial Neural Reasoning
- https://arxiv.org/abs/2511.06316
- arXiv:2511.06316v2 Announce Type: replace
-Abstract: Reliable geospatial information on road accidents is vital for safety analysis and infrastructure planning, yet most low- and middle-income countries continue to face a critical shortage of accurate, location-specific crash data. Existing text-based geocoding tools perform poorly in multilingual and unstructured news environments, where incomplete place descriptions and mixed language (e.g. Bangla-English) scripts obscure spatial context. To address these limitations, this study introduces ALIGN (Accident Location Inference through Geo-Spatial Neural Reasoning), a vision-language framework that emulates human spatial reasoning to infer accident location coordinates directly from available textual and map-based cues. ALIGN integrates large language and vision-language model mechanisms within a multi-stage pipeline that performs optical character recognition, linguistic reasoning, and map-level verification through grid-based spatial scanning. The framework systematically evaluates each predicted location against contextual and visual evidence, ensuring interpretable, fine-grained geolocation outcomes without requiring model retraining. Applied to Bangla-language news data source, ALIGN demonstrates consistent improvements over traditional geoparsing methods, accurately identifying district- and sub-district-level crash sites. Beyond its technical contribution, the framework establishes a high accuracy foundation for automated crash mapping in data-scarce regions, supporting evidence-driven road-safety policymaking and the broader integration of multimodal artificial intelligence in transportation analytics.
- oai:arXiv.org:2511.06316v2
+ When Alignment Fails: Multimodal Adversarial Attacks on Vision-Language-Action Models
+ https://arxiv.org/abs/2511.16203
+ arXiv:2511.16203v3 Announce Type: replace
+Abstract: Vision-Language-Action models (VLAs) have recently demonstrated remarkable progress in embodied environments, enabling robots to perceive, reason, and act through unified multimodal understanding. Despite their impressive capabilities, the adversarial robustness of these systems remains largely unexplored, especially under realistic multimodal and black-box conditions. Existing studies mainly focus on single-modality perturbations and overlook the cross-modal misalignment that fundamentally affects embodied reasoning and decision-making. In this paper, we introduce VLA-Fool, a comprehensive study of multimodal adversarial robustness in embodied VLA models under both white-box and black-box settings. VLA-Fool unifies three levels of multimodal adversarial attacks: (1) textual perturbations through gradient-based and prompt-based manipulations, (2) visual perturbations via patch and noise distortions, and (3) cross-modal misalignment attacks that intentionally disrupt the semantic correspondence between perception and instruction. We further incorporate a VLA-aware semantic space into linguistic prompts, developing the first automatically crafted and semantically guided prompting framework. Experiments on the LIBERO benchmark using a fine-tuned OpenVLA model reveal that even minor multimodal perturbations can cause significant behavioral deviations, demonstrating the fragility of embodied multimodal alignment.
+ oai:arXiv.org:2511.16203v3
+ cs.CVcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- MD Thamed Bin Zaman Chowdhury, Moazzem Hossain
+ Yuping Yan, Yuhan Xie, Yixin Zhang, Lingjuan Lyu, Handing Wang, Yaochu Jin
- HEDN: A Hard-Easy Dual Network with Source Reliability Assessment for Cross-Subject EEG Emotion Recognition
- https://arxiv.org/abs/2511.06782
- arXiv:2511.06782v2 Announce Type: replace
-Abstract: Cross-subject electroencephalography (EEG) emotion recognition remains a major challenge in brain-computer interfaces (BCIs) due to substantial inter-subject variability. Multi-Source Domain Adaptation (MSDA) offers a potential solution, but existing MSDA frameworks typically assume equal source quality, leading to negative transfer from low-reliability domains and prohibitive computational overhead due to multi-branch model designs. To address these limitations, we propose the Hard-Easy Dual Network (HEDN), a lightweight reliability-aware MSDA framework. HEDN introduces a novel Source Reliability Assessment (SRA) mechanism that dynamically evaluates the structural integrity of each source domain during training. Based on this assessment, sources are routed to two specialized branches: an Easy Network that exploits high-quality sources to construct fine-grained, structure-aware prototypes for reliable pseudo-label generation, and a Hard Network that utilizes adversarial training to refine and align low-quality sources. Furthermore, a cross-network consistency loss aligns predictions between branches to preserve semantic coherence. Extensive experiments conducted on SEED, SEED-IV, and DEAP datasets demonstrate that HEDN achieves state-of-the-art performance across both cross-subject and cross-dataset evaluation protocols while reducing adaptation complexity.
- oai:arXiv.org:2511.06782v2
- cs.HC
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Cubical coherent confluence, $\omega$-groupoids and the cube equation
+ https://arxiv.org/abs/2511.16852
+ arXiv:2511.16852v2 Announce Type: replace
+Abstract: We study the confluence property of abstract rewriting systems internal to cubical categories. We introduce cubical contractions, a higher-dimensional generalisation of reductions to normal forms, and employ them to construct cubical polygraphic resolutions of convergent rewriting systems. Within this categorical framework, we establish cubical proofs of fundamental rewriting results -- Newman's lemma, the Church-Rosser theorem, and Squier's coherence theorem -- via the pasting of cubical coherence cells. We moreover derive, in purely categorical terms, the cube law known from the $\lambda$-calculus and Garside theory. As a consequence, we show that every convergent abstract rewriting system freely generates an acyclic cubical groupoid, in which higher-dimensional generators can be replaced by degenerate cells beyond dimension two.
+ oai:arXiv.org:2511.16852v2
+ cs.LO
+ math.CT
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Philippe Malbos, Tanguy Massacrier, Georg Struth
+
+
+ A Simple Yet Strong Baseline for Long-Term Conversational Memory of LLM Agents
+ https://arxiv.org/abs/2511.17208
+ arXiv:2511.17208v2 Announce Type: replace
+Abstract: LLM-based conversational agents still struggle to maintain coherent, personalized interaction over many sessions: fixed context windows limit how much history can be kept in view, and most external memory approaches trade off between coarse retrieval over large chunks and fine-grained but fragmented views of the dialogue. Motivated by neo-Davidsonian event semantics, we propose an event-centric alternative that represents conversational history as short, event-like propositions which bundle together participants, temporal cues, and minimal local context, rather than as independent relation triples or opaque summaries. In contrast to work that aggressively compresses or forgets past content, our design aims to preserve information in a non-compressive form and make it more accessible, rather than more lossy. Concretely, we instruct an LLM to decompose each session into enriched elementary discourse units (EDUs) -- self-contained statements with normalized entities and source turn attributions -- and organize sessions, EDUs, and their arguments in a heterogeneous graph that supports associative recall. On top of this representation we build two simple retrieval-based variants that use dense similarity search and LLM filtering, with an optional graph-based propagation step to connect and aggregate evidence across related EDUs. Experiments on the LoCoMo and LongMemEval$_S$ benchmarks show that these event-centric memories match or surpass strong baselines, while operating with much shorter QA contexts. Our results suggest that structurally simple, event-level memory provides a principled and practical foundation for long-horizon conversational agents. Our code and data will be released at https://github.com/KevinSRR/EMem.
+ oai:arXiv.org:2511.17208v2
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qiang Wang, Liying Yang, Jiayun Song, Yifan Bai, Jingtao Du
+ Sizhe Zhou, Jiawei Han
- LLMscape
- https://arxiv.org/abs/2511.07161
- arXiv:2511.07161v3 Announce Type: replace
-Abstract: LLMscape is an interactive installation that investigates how humans and AI construct meaning under shared conditions of uncertainty. Within a mutable, projection-mapped landscape, human participants reshape the world and engage with multiple AI agents, each developing incomplete and provisional accounts of their environment. Exhibited in Shanghai and continually evolving, the work positions AI not as deterministic tools but as embodied co-witnesses to an unstable world, examining the parallels between human and artificial meaning-making and inviting reflection on our shared epistemic limits.
- oai:arXiv.org:2511.07161v3
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Less is More: Data-Efficient Adaptation for Controllable Text-to-Video Generation
+ https://arxiv.org/abs/2511.17844
+ arXiv:2511.17844v2 Announce Type: replace
+Abstract: Fine-tuning large-scale text-to-video diffusion models to add new generative controls, such as those over physical camera parameters (e.g., shutter speed or aperture), typically requires vast, high-fidelity datasets that are difficult to acquire. In this work, we propose a data-efficient fine-tuning strategy that learns these controls from sparse, low-quality synthetic data. We show that not only does fine-tuning on such simple data enable the desired controls, it actually yields superior results to models fine-tuned on photorealistic "real" data. Beyond demonstrating these results, we provide a framework that justifies this phenomenon both intuitively and quantitatively.
+ oai:arXiv.org:2511.17844v2
+ cs.CV
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Shihan Cheng, Nilesh Kulkarni, David Hyde, Dmitriy Smirnov
+
+
+ Beyond Words and Pixels: A Benchmark for Implicit World Knowledge Reasoning in Generative Models
+ https://arxiv.org/abs/2511.18271
+ arXiv:2511.18271v3 Announce Type: replace
+Abstract: Text-to-image (T2I) models today are capable of producing photorealistic, instruction-following images, yet they still frequently fail on prompts that require implicit world knowledge. Existing evaluation protocols either emphasize compositional alignment or rely on single-round VQA-based scoring, leaving critical dimensions such as knowledge grounding, multi-physics interactions, and auditable evidence-substantially undertested. To address these limitations, we introduce PicWorld, the first comprehensive benchmark that assesses the grasp of implicit world knowledge and physical causal reasoning of T2I models. This benchmark consists of 1,100 prompts across three core categories. To facilitate fine-grained evaluation, we propose PW-Agent, an evidence-grounded multi-agent evaluator to hierarchically assess images on their physical realism and logical consistency by decomposing prompts into verifiable visual evidence. We conduct a thorough analysis of 17 mainstream T2I models on PicWorld, illustrating that they universally exhibit a fundamental limitation in their capacity for implicit world knowledge and physical causal reasoning to varying degrees. The findings highlight the need for reasoning-aware, knowledge-integrative architectures in future T2I systems.
+ oai:arXiv.org:2511.18271v3
+ cs.CV
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Gottfried Haider, Jie Zhang
+ Tianyang Han, Junhao Su, Junjie Hu, Peizhen Yang, Hengyu Shi, Junfeng Luo, Jialin Gao
- Finite Volume Analysis of the Poisson Problem via a Reduced Discontinuous Galerkin Space
- https://arxiv.org/abs/2511.09099
- arXiv:2511.09099v2 Announce Type: replace
-Abstract: In this paper, we propose and analyze a high-order finite volume method for the Poisson problem based on the reduced discontinuous Galerkin (RDG) space. The main idea is to employ the RDG space as the trial space and the piecewise constant space as the test space, thereby formulating the scheme in a Petrov-Galerkin framework. This approach inherits the local conservation property of finite volume methods while benefiting from the approximation capabilities of discontinuous Galerkin spaces with significantly fewer degrees of freedom. We establish a rigorous error analysis of the proposed scheme: in particular, we prove optimal-order convergence in the DG energy norm and suboptimal-order convergence in \(L^2\) norm. The theoretical analysis is supported by a set of one- and two-dimensional numerical experiments with Dirichlet and periodic boundary conditions, which confirm both the accuracy and efficiency of the method. The significance of this work lies in bridging finite volume and discontinuous Galerkin methodologies through the RDG space, thus enabling finite volume schemes with a mathematically rigorous convergence theory.
- oai:arXiv.org:2511.09099v2
- math.NA
- cs.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ ConsistCompose: Unified Multimodal Layout Control for Image Composition
+ https://arxiv.org/abs/2511.18333
+ arXiv:2511.18333v2 Announce Type: replace
+Abstract: Unified multimodal models that couple visual understanding with image generation have advanced rapidly, yet most systems still focus on visual grounding-aligning language with image regions-while their generative counterpart, linguistic-embedded layout-grounded generation (LELG) for layout-controllable multi-instance generation, remains underexplored and limits precise compositional control. We present ConsistCompose, a unified multimodal framework that embeds layout coordinates directly into language prompts, enabling layout-controlled multi-instance image generation from Interleaved Image-Text within a single generative interface. We further construct ConsistCompose3M, a 3.4M multi-instance generation dataset with layout and identity annotations (2.6M text-guided and 0.8M image-guided data pairs) that provides large-scale supervision for layout-conditioned generation. Within this framework, LELG is instantiated through instance-coordinate binding prompts and coordinate-aware classifier-free guidance, which translate linguistic layout cues into precise spatial control without task-specific branches. Experiments on COCO-Position and MS-Bench show that ConsistCompose substantially improves spatial accuracy over layout-controlled baselines while preserving identity fidelity and competitive general multimodal understanding, establishing a unified paradigm for layout-controllable multimodal image generation.
+ oai:arXiv.org:2511.18333v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Wenbo Hu, Yinhua Xia
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xuanke Shi, Boxuan Li, Xiaoyang Han, Zhongang Cai, Lei Yang, Dahua Lin, Quan Wang
- Classifying Phonotrauma Severity from Vocal Fold Images with Soft Ordinal Regression
- https://arxiv.org/abs/2511.09702
- arXiv:2511.09702v2 Announce Type: replace
-Abstract: Phonotrauma refers to vocal fold tissue damage resulting from exposure to forces during voicing. It occurs on a continuum from mild to severe, and treatment options can vary based on severity. Assessment of severity involves a clinician's expert judgment, which is costly and can vary widely in reliability. In this work, we present the first method for automatically classifying phonotrauma severity from vocal fold images. To account for the ordinal nature of the labels, we adopt a widely used ordinal regression framework. To account for label uncertainty, we propose a novel modification to ordinal regression loss functions that enables them to operate on soft labels reflecting annotator rating distributions. Our proposed soft ordinal regression method achieves predictive performance approaching that of clinical experts, while producing well-calibrated uncertainty estimates. By providing an automated tool for phonotrauma severity assessment, our work can enable large-scale studies of phonotrauma, ultimately leading to improved clinical understanding and patient care.
- oai:arXiv.org:2511.09702v2
+ GuideFlow: Constraint-Guided Flow Matching for Planning in End-to-End Autonomous Driving
+ https://arxiv.org/abs/2511.18729
+ arXiv:2511.18729v2 Announce Type: replace
+Abstract: Driving planning is a critical component of end-to-end (E2E) autonomous driving. However, prevailing Imitative E2E Planners often suffer from multimodal trajectory mode collapse, failing to produce diverse trajectory proposals. Meanwhile, Generative E2E Planners struggle to incorporate crucial safety and physical constraints directly into the generative process, necessitating an additional optimization stage to refine their outputs. In this paper, we propose \textit{\textbf{GuideFlow}}, a novel planning framework that leverages Constrained Flow Matching. Concretely, \textit{\textbf{GuideFlow}} explicitly models the flow matching process, which inherently mitigates mode collapse and allows for flexible guidance from various conditioning signals. Our core contribution lies in directly enforcing explicit constraints within the flow matching generation process, rather than relying on implicit constraint encoding. Crucially, \textit{\textbf{GuideFlow}} unifies the training of the flow matching with the Energy-Based Model (EBM) to enhance the model's autonomous optimization capability to robustly satisfy physical constraints. Secondly, \textit{\textbf{GuideFlow}} parameterizes driving aggressiveness as a control signal during generation, enabling precise manipulation of trajectory style. Extensive evaluations on major driving benchmarks (Bench2Drive, NuScenes, NavSim and ADV-NuScenes) validate the effectiveness of \textit{\textbf{GuideFlow}}. Notably, on the NavSim test hard split (Navhard), \textit{\textbf{GuideFlow}} achieved SOTA with an EPDMS score of 43.0. The code will be in https://github.com/liulin815/GuideFlow.
+ oai:arXiv.org:2511.18729v2cs.CV
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Katie Matton, Purvaja Balaji, Hamzeh Ghasemzadeh, Jameson C. Cooper, Daryush D. Mehta, Jarrad H. Van Stan, Robert E. Hillman, Rosalind Picard, John Guttag, S. Mazdak Abulnaga
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lin Liu, Caiyan Jia, Guanyi Yu, Ziying Song, JunQiao Li, Feiyang Jia, Peiliang Wu, Xiaoshuai Hao, Yandan Luo
- Boosting In-Silicon Directed Evolution with Fine-Tuned Protein Language Model and Tree Search
- https://arxiv.org/abs/2511.09900
- arXiv:2511.09900v3 Announce Type: replace
-Abstract: Protein evolution through amino acid mutations is a cornerstone of life sciences. Recent advances in protein language models have shown rich evolutionary patterns, offering unprecedented potential for in-silicon directed evolution. However, existing directed evolution methods largely rely on heuristic evolution strategies and have yet to efficiently integrate the transformative protein language models with advanced optimization techniques, such as reinforcement learning, to learn optimal evolution policies. To bridge this gap, we propose AlphaDE, a novel framework that evolves protein sequences by harnessing the innovative paradigms of large language models, such as fine-tuning and test-time inference. First, AlphaDE fine-tunes pretrained protein language models using masked language modeling on homologous protein sequences to activate the evolutionary plausibility of the interested protein family. Second, AlphaDE introduces test-time inference based on Monte Carlo tree search, which effectively evolves proteins with evolutionary guidance from the fine-tuned protein language model. Extensive benchmark experiments show that AlphaDE remarkably outperforms previous state-of-the-art methods even with few-shot fine-tuning. A case study further demonstrates that AlphaDE supports condensing the protein sequence space of avGFP through computational evolution.
- oai:arXiv.org:2511.09900v3
+ Thinking Ahead: Foresight Intelligence in MLLMs and World Models
+ https://arxiv.org/abs/2511.18735
+ arXiv:2511.18735v2 Announce Type: replace
+Abstract: In this work, we define Foresight Intelligence as the capability to anticipate and interpret future events-an ability essential for applications such as autonomous driving, yet largely overlooked by existing research. To bridge this gap, we introduce FSU-QA, a new Visual Question-Answering (VQA) dataset specifically designed to elicit and evaluate Foresight Intelligence. Using FSU-QA, we conduct the first comprehensive study of state-of-the-art Vision-Language Models (VLMs) under foresight-oriented tasks, revealing that current models still struggle to reason about future situations. Beyond serving as a benchmark, FSU-QA also enables the assessment of world models by measuring the semantic coherence of their generated predictions, quantified through performance gains when VLMs are augmented with such outputs. Our experiments further demonstrate that FSU-QA can effectively enhance foresight reasoning: even small VLMs fine-tuned on FSU-QA surpass much larger, advanced models by a substantial margin. Together, these findings position FSU-QA as a principled foundation for developing next-generation models capable of truly anticipating and understanding future events.
+ oai:arXiv.org:2511.18735v2
+ cs.CVcs.AI
- cs.CE
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Yaodong Yang, Yang Wang, Jinpeng Li, Pei Guo, Da Han, Guangyong Chen, Pheng-Ann Heng
+ Zhantao Gong, Liaoyuan Fan, Qing Guo, Xun Xu, Xulei Yang, Shijie Li
- Online Price Competition under Generalized Linear Demands
- https://arxiv.org/abs/2511.10718
- arXiv:2511.10718v3 Announce Type: replace
-Abstract: We study sequential price competition among $N$ sellers, each influenced by the pricing decisions of their rivals. Specifically, the demand function for each seller $i$ follows the single index model $\lambda_i(\mathbf{p}) = \mu_i(\langle \boldsymbol{\theta}_{i,0}, \mathbf{p} \rangle)$, with known increasing link $\mu_i$ and unknown parameter $\boldsymbol{\theta}_{i,0}$, where the vector $\mathbf{p}$ denotes the vector of prices offered by all the sellers simultaneously at a given instant. Each seller observes only their own realized demand -- unobservable to competitors -- and the prices set by rivals. Our framework generalizes existing approaches that focus solely on linear demand models. We propose a novel decentralized policy, PML-GLUCB, that combines penalized MLE with an upper-confidence pricing rule, removing the need for coordinated exploration phases across sellers -- which is integral to previous linear models -- and accommodating both binary and real-valued demand observations. Relative to a dynamic benchmark policy, each seller achieves $O(N^{2}\sqrt{T}\log(T))$ regret, which essentially matches the optimal rate known in the linear setting. A significant technical contribution of our work is the development of a variant of the elliptical potential lemma -- typically applied in single-agent systems -- adapted to our competitive multi-agent environment.
- oai:arXiv.org:2511.10718v3
- cs.GT
- math.ST
- stat.ME
- stat.TH
- Thu, 11 Dec 2025 00:00:00 -0500
+ AttenDence: Maximizing Attention Confidence for Test Time Adaptation
+ https://arxiv.org/abs/2511.18925
+ arXiv:2511.18925v2 Announce Type: replace
+Abstract: Test-time adaptation (TTA) enables models to adapt to distribution shifts at inference time. While entropy minimization over the output distribution has proven effective for TTA, transformers offer an additional unsupervised learning signal through their attention mechanisms. We propose minimizing the entropy of attention distributions from the CLS token to image patches as a novel TTA objective. This approach encourages the model to attend more confidently to relevant image regions under distribution shift and is effective even when only a single test image is available. We demonstrate that attention entropy minimization improves robustness across diverse corruption types while not hurting performance on clean data on a single sample stream of images at test time.
+ oai:arXiv.org:2511.18925v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Daniele Bracale, Moulinath Banerjee, Cong Shi, Yuekai Sun
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Yash Mali
- Forgetting-MarI: LLM Unlearning via Marginal Information Regularization
- https://arxiv.org/abs/2511.11914
- arXiv:2511.11914v2 Announce Type: replace
-Abstract: As AI models are trained on ever-expanding datasets, the ability to remove the influence of specific data from trained models has become essential for privacy protection and regulatory compliance. Unlearning addresses this challenge by selectively removing parametric knowledge from the trained models without retraining from scratch, which is critical for resource-intensive models such as Large Language Models (LLMs). Existing unlearning methods often degrade model performance by removing more information than necessary when attempting to ''forget'' specific data. We introduce Forgetting-MarI, an LLM unlearning framework that provably removes only the additional (marginal) information contributed by the data to be unlearned, while preserving the information supported by the data to be retained. By penalizing marginal information, our method yields an explicit upper bound on the unlearn dataset's residual influence in the trained models, providing provable undetectability. Extensive experiments confirm that our approach outperforms current state-of-the-art unlearning methods, delivering reliable forgetting and better preserved general model performance across diverse benchmarks. This advancement represents an important step toward making AI systems more controllable and compliant with privacy and copyright regulations without compromising their effectiveness.
- oai:arXiv.org:2511.11914v2
+ HunyuanOCR Technical Report
+ https://arxiv.org/abs/2511.19575
+ arXiv:2511.19575v2 Announce Type: replace
+Abstract: This paper presents HunyuanOCR, a commercial-grade, open-source, and lightweight (1B parameters) Vision-Language Model (VLM) dedicated to OCR tasks. The architecture comprises a Native Vision Transformer (ViT) and a lightweight LLM connected via an MLP adapter. HunyuanOCR demonstrates superior performance, outperforming commercial APIs, traditional pipelines, and larger models (e.g., Qwen3-VL-4B). Specifically, it surpasses current public solutions in perception tasks (Text Spotting, Parsing) and excels in semantic tasks (IE, Text Image Translation), securing first place in the ICDAR 2025 DIMT Challenge (Small Model Track). Furthermore, it achieves state-of-the-art (SOTA) results on OCRBench among VLMs with fewer than 3B parameters.
+ HunyuanOCR achieves breakthroughs in three key aspects: 1) Unifying Versatility and Efficiency: We implement comprehensive support for core capabilities including spotting, parsing, IE, VQA, and translation within a lightweight framework. This addresses the limitations of narrow "OCR expert models" and inefficient "General VLMs". 2) Streamlined End-to-End Architecture: Adopting a pure end-to-end paradigm eliminates dependencies on pre-processing modules (e.g., layout analysis). This fundamentally resolves error propagation common in traditional pipelines and simplifies system deployment. 3) Data-Driven and RL Strategies: We confirm the critical role of high-quality data and, for the first time in the industry, demonstrate that Reinforcement Learning (RL) strategies yield significant performance gains in OCR tasks.
+ HunyuanOCR is officially open-sourced on HuggingFace. We also provide a high-performance deployment solution based on vLLM, placing its production efficiency in the top tier. We hope this model will advance frontier research and provide a solid foundation for industrial applications.
+ oai:arXiv.org:2511.19575v2
+ cs.CVcs.AI
- cs.CL
- cs.CR
- cs.IT
- cs.LG
- math.IT
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shizhou Xu, Yuan Ni, Stefan Broecker, Thomas Strohmer
+ Hunyuan Vision Team, Pengyuan Lyu, Xingyu Wan, Gengluo Li, Shangpin Peng, Weinong Wang, Liang Wu, Huawen Shen, Yu Zhou, Canhui Tang, Qi Yang, Qiming Peng, Bin Luo, Hower Yang, Xinsong Zhang, Jinnian Zhang, Houwen Peng, Hongming Yang, Senhao Xie, Longsha Zhou, Ge Pei, Binghong Wu, Rui Yan, Kan Wu, Jieneng Yang, Bochao Wang, Kai Liu, Jianchen Zhu, Jie Jiang, Linus, Han Hu, Chengquan Zhang
- O-Mem: Omni Memory System for Personalized, Long Horizon, Self-Evolving Agents
- https://arxiv.org/abs/2511.13593
- arXiv:2511.13593v3 Announce Type: replace
-Abstract: Recent advancements in LLM-powered agents have demonstrated significant potential in generating human-like responses; however, they continue to face challenges in maintaining long-term interactions within complex environments, primarily due to limitations in contextual consistency and dynamic personalization. Existing memory systems often depend on semantic grouping prior to retrieval, which can overlook semantically irrelevant yet critical user information and introduce retrieval noise. In this report, we propose the initial design of O-Mem, a novel memory framework based on active user profiling that dynamically extracts and updates user characteristics and event records from their proactive interactions with agents. O-Mem supports hierarchical retrieval of persona attributes and topic-related context, enabling more adaptive and coherent personalized responses. O-Mem achieves 51.67% on the public LoCoMo benchmark, a nearly 3% improvement upon LangMem,the previous state-of-the-art, and it achieves 62.99% on PERSONAMEM, a 3.5% improvement upon A-Mem,the previous state-of-the-art. O-Mem also boosts token and interaction response time efficiency compared to previous memory frameworks. Our work opens up promising directions for developing efficient and human-like personalized AI assistants in the future.
- oai:arXiv.org:2511.13593v3
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ It Hears, It Sees too: Multi-Modal LLM for Depression Detection By Integrating Visual Understanding into Audio Language Models
+ https://arxiv.org/abs/2511.19877
+ arXiv:2511.19877v2 Announce Type: replace
+Abstract: Depression is one of the most prevalent mental health disorders globally. In recent years, multi-modal data, such as speech, video, and transcripts, has been increasingly used to develop AI-assisted depression assessment systems. Large language models have further advanced this field due to their strong language understanding and generalization capabilities. However, conventional LLMs remain text-centric and cannot process the rich non-verbal cues found in audio and visual modalities, which are critical components in mental health evaluation. While multi-modal LLMs offer a promising direction, few are tailored for psychological applications. In this study, we propose a novel multi-modal LLM framework for depression detection. Our approach augments an audio language model with visual understanding and aligns audio-visual features at the timestamp level. This fine-grained alignment improves modeling of temporal dynamics across modalities while reducing the need for extensive training data and computational resources. Experiments on the DAIC-WoZ dataset demonstrate that our model outperforms both single-modality approaches and previous multi-modal methods. Moreover, the proposed framework can be extended to incorporate additional physiological signals, paving the way for broader clinical applications beyond mental health.
+ oai:arXiv.org:2511.19877v2
+ cs.MM
+ cs.CV
+ cs.LG
+ eess.AS
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Piaohong Wang, Motong Tian, Jiaxian Li, Yuan Liang, Yuqing Wang, Qianben Chen, Tiannan Wang, Zhicong Lu, Jiawei Ma, Yuchen Eleanor Jiang, Wangchunshu Zhou
+ Xiangyu Zhao, Yaling Shen, Yiwen Jiang, Zimu Wang, Jiahe Liu, Maxmartwell H Cheng, Guilherme C Oliveira, Robert Desimone, Dominic Dwyer, Zongyuan Ge
- GloTok: Global Perspective Tokenizer for Image Reconstruction and Generation
- https://arxiv.org/abs/2511.14184
- arXiv:2511.14184v3 Announce Type: replace
-Abstract: Existing state-of-the-art image tokenization methods leverage diverse semantic features from pre-trained vision models for additional supervision, to expand the distribution of latent representations and thereby improve the quality of image reconstruction and generation. These methods employ a locally supervised approach for semantic supervision, which limits the uniformity of semantic distribution. However, VA-VAE proves that a more uniform feature distribution yields better generation performance. In this work, we introduce a Global Perspective Tokenizer (GloTok), which utilizes global relational information to model a more uniform semantic distribution of tokenized features. Specifically, a codebook-wise histogram relation learning method is proposed to transfer the semantics, which are modeled by pre-trained models on the entire dataset, to the semantic codebook. Then, we design a residual learning module that recovers the fine-grained details to minimize the reconstruction error caused by quantization. Through the above design, GloTok delivers more uniformly distributed semantic latent representations, which facilitates the training of autoregressive (AR) models for generating high-quality images without requiring direct access to pre-trained models during the training process. Experiments on the standard ImageNet-1k benchmark clearly show that our proposed method achieves state-of-the-art reconstruction performance and generation quality.
- oai:arXiv.org:2511.14184v3
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Differential Smoothing Mitigates Sharpening and Improves LLM Reasoning
+ https://arxiv.org/abs/2511.19942
+ arXiv:2511.19942v2 Announce Type: replace
+Abstract: It is widely recognized that reinforcement learning (RL) fine-tuning of large language models often leads to diversity collapse, where outputs lack variety. Prior work has proposed a range of heuristics to counteract this effect, but these methods are ad hoc: they frequently trade off correctness for diversity, their effectiveness varies across tasks, and in some cases they even contradict one another. In this work, we place these observations on a rigorous foundation. We first provide a formal proof of why RL fine-tuning exhibits diversity collapse via a selection and reinforcement bias. Next, we make a key observation that any reward modification to address diversity collapse only needs to be applied on the correct trajectories. Building directly on this analysis, we introduce a principled method -- differential smoothing -- that provably improves both correctness and diversity, outperforming vanilla RL as well as widely used entropy-based heuristics. Our theory precisely characterizes when existing heuristics help and why they fail, while showing that differential smoothing is universally superior. Extensive experiments with models from 1B to 7B parameters, across domains including CountDown and real-world mathematical reasoning, demonstrate consistent gains. Differential smoothing improves both Pass@1 and Pass@k, with up to 6.7% improvements on AIME24 dataset.
+ oai:arXiv.org:2511.19942v2
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Xuan Zhao, Zhongyu Zhang, Yuge Huang, Yuxi Mi, Guodong Mu, Shouhong Ding, Jun Wang, Rizen Guo, Shuigeng Zhou
+ Jingchu Gai, Guanning Zeng, Huaqing Zhang, Aditi Raghunathan
+
+
+ ShapeForce: Low-Cost Soft Robotic Wrist for Contact-Rich Manipulation
+ https://arxiv.org/abs/2511.19955
+ arXiv:2511.19955v2 Announce Type: replace
+Abstract: Contact feedback is essential for contact-rich robotic manipulation, as it allows the robot to detect subtle interaction changes and adjust its actions accordingly. Six-axis force-torque sensors are commonly used to obtain contact feedback, but their high cost and fragility have discouraged many researchers from adopting them in contact-rich tasks. To offer a more cost-efficient and easy-accessible source of contact feedback, we present ShapeForce, a low-cost, plug-and-play soft wrist that provides force-like signals for contact-rich robotic manipulation. Inspired by how humans rely on relative force changes in contact rather than precise force magnitudes, ShapeForce converts external force and torque into measurable deformations of its compliant core, which are then estimated via marker-based pose tracking and converted into force-like signals. Our design eliminates the need for calibration or specialized electronics to obtain exact values, and instead focuses on capturing force and torque changes sufficient for enabling contact-rich manipulation. Extensive experiments across diverse contact-rich tasks and manipulation policies demonstrate that ShapeForce delivers performance comparable to six-axis force-torque sensors at an extremely low cost.
+ oai:arXiv.org:2511.19955v2
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jinxuan Zhu, Zihao Yan, Yangyu Xiao, Jingxiang Guo, Chenrui Tie, Xinyi Cao, Yuhang Zheng, Lin Shao
- PathMind: A Retrieve-Prioritize-Reason Framework for Knowledge Graph Reasoning with Large Language Models
- https://arxiv.org/abs/2511.14256
- arXiv:2511.14256v2 Announce Type: replace
-Abstract: Knowledge graph reasoning (KGR) is the task of inferring new knowledge by performing logical deductions on knowledge graphs. Recently, large language models (LLMs) have demonstrated remarkable performance in complex reasoning tasks. Despite promising success, current LLM-based KGR methods still face two critical limitations. First, existing methods often extract reasoning paths indiscriminately, without assessing their different importance, which may introduce irrelevant noise that misleads LLMs. Second, while many methods leverage LLMs to dynamically explore potential reasoning paths, they require high retrieval demands and frequent LLM calls. To address these limitations, we propose PathMind, a novel framework designed to enhance faithful and interpretable reasoning by selectively guiding LLMs with important reasoning paths. Specifically, PathMind follows a "Retrieve-Prioritize-Reason" paradigm. First, it retrieves a query subgraph from KG through the retrieval module. Next, it introduces a path prioritization mechanism that identifies important reasoning paths using a semantic-aware path priority function, which simultaneously considers the accumulative cost and the estimated future cost for reaching the target. Finally, PathMind generates accurate and logically consistent responses via a dual-phase training strategy, including task-specific instruction tuning and path-wise preference alignment. Extensive experiments on benchmark datasets demonstrate that PathMind consistently outperforms competitive baselines, particularly on complex reasoning tasks with fewer input tokens, by identifying essential reasoning paths.
- oai:arXiv.org:2511.14256v2
+ PaTAS: A Framework for Trust Propagation in Neural Networks Using Subjective Logic
+ https://arxiv.org/abs/2511.20586
+ arXiv:2511.20586v3 Announce Type: replace
+Abstract: Trustworthiness has become a key requirement for the deployment of artificial intelligence systems in safety-critical applications. Conventional evaluation metrics, such as accuracy and precision, fail to appropriately capture uncertainty or the reliability of model predictions, particularly under adversarial or degraded conditions. This paper introduces the Parallel Trust Assessment System (PaTAS), a framework for modeling and propagating trust in neural networks using Subjective Logic (SL). PaTAS operates in parallel with standard neural computation through Trust Nodes and Trust Functions that propagate input, parameter, and activation trust across the network. The framework defines a Parameter Trust Update mechanism to refine parameter reliability during training and an Inference-Path Trust Assessment (IPTA) method to compute instance-specific trust at inference. Experiments on real-world and adversarial datasets demonstrate that PaTAS produces interpretable, symmetric, and convergent trust estimates that complement accuracy and expose reliability gaps in poisoned, biased, or uncertain data scenarios. The results show that PaTAS effectively distinguishes between benign and adversarial inputs and identifies cases where model confidence diverges from actual reliability. By enabling transparent and quantifiable trust reasoning within neural architectures, PaTAS provides a foundation for evaluating model reliability across the AI lifecycle.
+ oai:arXiv.org:2511.20586v3cs.AI
- cs.IR
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yu Liu, Xixun Lin, Yanmin Shang, Yangxi Li, Shi Wang, Yanan Cao
+ Koffi Ismael Ouattara, Ioannis Krontiris, Theo Dimitrakos, Dennis Eisermann, Houda Labiod, Frank Kargl
- Flow-Aided Flight Through Dynamic Clutters From Point To Motion
- https://arxiv.org/abs/2511.16372
- arXiv:2511.16372v2 Announce Type: replace
-Abstract: Challenges in traversing dynamic clutters lie mainly in the efficient perception of the environmental dynamics and the generation of evasive behaviors considering obstacle movement. Previous solutions have made progress in explicitly modeling the dynamic obstacle motion for avoidance, but this key dependency of decision-making is time-consuming and unreliable in highly dynamic scenarios with occlusions. On the contrary, without introducing object detection, tracking, and prediction, we empower the reinforcement learning (RL) with single LiDAR sensing to realize an autonomous flight system directly from point to motion. For exteroception, a depth sensing distance map achieving fixed-shape, low-resolution, and detail-safe is encoded from raw point clouds, and an environment change sensing point flow is adopted as motion features extracted from multi-frame observations. These two are integrated into a lightweight and easy-to-learn representation of complex dynamic environments. For action generation, the behavior of avoiding dynamic threats in advance is implicitly driven by the proposed change-aware sensing representation, where the policy optimization is indicated by the relative motion modulated distance field. With the deployment-friendly sensing simulation and dynamics model-free acceleration control, the proposed system shows a superior success rate and adaptability to alternatives, and the policy derived from the simulator can drive a real-world quadrotor with safe maneuvers.
- oai:arXiv.org:2511.16372v2
+ Transformer Driven Visual Servoing and Dual Arm Impedance Control for Fabric Texture Matching
+ https://arxiv.org/abs/2511.21203
+ arXiv:2511.21203v2 Announce Type: replace
+Abstract: In this paper, we propose a method to align and place a fabric piece on top of another using a dual-arm manipulator and a grayscale camera, so that their surface textures are accurately matched. We propose a novel control scheme that combines Transformer-driven visual servoing with dualarm impedance control. This approach enables the system to simultaneously control the pose of the fabric piece and place it onto the underlying one while applying tension to keep the fabric piece flat. Our transformer-based network incorporates pretrained backbones and a newly introduced Difference Extraction Attention Module (DEAM), which significantly enhances pose difference prediction accuracy. Trained entirely on synthetic images generated using rendering software, the network enables zero-shot deployment in real-world scenarios without requiring prior training on specific fabric textures. Real-world experiments demonstrate that the proposed system accurately aligns fabric pieces with different textures.
+ oai:arXiv.org:2511.21203v2cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Bowen Xu, Zexuan Yan, Minghao Lu, Xiyu Fan, Yi Luo, Youshen Lin, Zhiqiang Chen, Yeke Chen, Qiyuan Qiao, Peng Lu
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Fuyuki Tokuda, Akira Seino, Akinari Kobayashi, Kai Tang, Kazuhiro Kosuge
- DISPATCH -- Decentralized Informed Spatial Planning and Assignment of Tasks for Cooperative Heterogeneous Agents
- https://arxiv.org/abs/2511.17915
- arXiv:2511.17915v3 Announce Type: replace
-Abstract: Spatial task allocation in systems such as multi-robot delivery or ride-sharing requires balancing efficiency with fair service across tasks. Greedy assignment policies that match each agent to its highest-preference or lowest-cost task can maximize efficiency but often create inequities: some tasks receive disproportionately favorable service (e.g., shorter delays or better matches), while others face long waits or poor allocations.
- We study fairness in heterogeneous multi-agent systems where tasks vary in preference alignment and urgency. Most existing approaches either assume centralized coordination or largely ignore fairness under partial observability. Distinct from this prior work, we establish a connection between the Eisenberg-Gale (EG) equilibrium convex program and decentralized, partially observable multi-agent learning. Building on this connection, we develop two equilibrium-informed algorithms that integrate fairness and efficiency: (i) a multi-agent reinforcement learning (MARL) framework, EG-MARL, whose training is guided by a centralized EG equilibrium assignment algorithm; and (ii) a stochastic online optimization mechanism that performs guided exploration and subset-based fair assignment as tasks are discovered.
- We evaluate on Multi-Agent Particle Environment (MPE) simulations across varying team sizes against centralized EG, Hungarian, and Min-Max distance baselines, and also present a Webots-based warehouse proof-of-concept with heterogeneous robots. Both methods preserve the fairness-efficiency balance of the EG solution under partial observability, with EG-MARL achieving near-centralized coordination and reduced travel distances, and the online mechanism enabling real-time allocation with competitive fairness.
- oai:arXiv.org:2511.17915v3
- cs.MA
- Thu, 11 Dec 2025 00:00:00 -0500
+ ReSAM: Refine, Requery, and Reinforce: Self-Prompting Point-Supervised Segmentation for Remote Sensing Images
+ https://arxiv.org/abs/2511.21606
+ arXiv:2511.21606v2 Announce Type: replace
+Abstract: Interactive segmentation models such as the Segment Anything Model (SAM) have demonstrated remarkable generalization on natural images, but perform suboptimally on remote sensing imagery (RSI) due to severe domain shift and the scarcity of dense annotations. To address this, we propose a self-prompting, point-supervised framework that adapts SAM to RSIs using only sparse point annotations. Our method employs a Refine-Requery-Reinforce loop, where coarse pseudo-masks are generated from initial points (Refine), improved with self-constructed box prompts (Requery), and embeddings are aligned across iterations to reduce confirmation bias (Reinforce). Without relying on full-mask supervision, our approach progressively enhances SAM's segmentation quality and domain robustness through self-guided prompt adaptation . We evaluate our proposed method on three RSI benchmark datasets, including WHU, HRSID, and NWPU VHR-10, showing that our method consistently surpasses pretrained SAM and recent point-supervised segmentation methods. Our results demonstrate that self-prompting and semantic alignment provide an efficient path towards scalable, point-level adaptation of foundation segmentation models for remote sensing applications.
+ oai:arXiv.org:2511.21606v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Yao Liu, Sampad Mohanty, Elizabeth Ondula, Bhaskar Krishnamachari
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ M. Naseer Subhani
- Multi-Agent Collaborative Filtering: Orchestrating Users and Items for Agentic Recommendations
- https://arxiv.org/abs/2511.18413
- arXiv:2511.18413v2 Announce Type: replace
-Abstract: Agentic recommendations cast recommenders as large language model (LLM) agents that can plan, reason, use tools, and interact with users of varying preferences in web applications. However, most existing agentic recommender systems focus on generic single-agent plan-execute workflows or multi-agent task decomposition pipelines. Without recommendation-oriented design, they often underuse the collaborative signals in the user-item interaction history, leading to unsatisfying recommendation results. To address this, we propose the Multi-Agent Collaborative Filtering (MACF) framework for agentic recommendations, drawing an analogy between traditional collaborative filtering algorithms and LLM-based multi-agent collaboration. Specifically, given a target user and query, we instantiate similar users and relevant items as LLM agents with unique profiles. Each agent is able to call retrieval tools, suggest candidate items, and interact with other agents. Different from the static preference aggregation in traditional collaborative filtering, MACF employs a central orchestrator agent to adaptively manage the collaboration between user and item agents via dynamic agent recruitment and personalized collaboration instruction. Experimental results on datasets from three different domains show the advantages of our MACF framework compared to strong agentic recommendation baselines.
- oai:arXiv.org:2511.18413v2
- cs.CL
- cs.IR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Dark Speculation: Combining Qualitative and Quantitative Understanding in Frontier AI Risk Analysis
+ https://arxiv.org/abs/2511.21838
+ arXiv:2511.21838v2 Announce Type: replace
+Abstract: Estimating catastrophic harms from frontier AI is hindered by deep ambiguity: many of its risks are not only unobserved but unanticipated by analysts. The central limitation of current risk analysis is the inability to populate the $\textit{catastrophic event space}$, or the set of potential large-scale harms to which probabilities might be assigned. This intractability is worsened by the $\textit{Lucretius problem}$, or the tendency to infer future risks only from past experience. We propose a process of $\textit{dark speculation}$, in which systematically generating and refining catastrophic scenarios ("qualitative" work) is coupled with estimating their likelihoods and associated damages (quantitative underwriting analysis). The idea is neither to predict the future nor to enable insurance for its own sake, but to use narrative and underwriting tools together to generate probability distributions over outcomes. We formalize this process using a simplified catastrophic L\'{e}vy stochastic framework and propose an iterative institutional design in which (1) speculation (including scenario planning) generates detailed catastrophic event narratives, (2) insurance underwriters assign probabilistic and financial parameters to these narratives, and (3) decision-makers synthesize the results into summary statistics to inform judgment. Analysis of the model reveals the value of (a) maintaining independence between speculation and underwriting, (b) analyzing multiple risk categories in parallel, and (c) generating "thick" catastrophic narrative rich in causal (counterfactual) and mitigative detail. While the approach cannot eliminate deep ambiguity, it offers a systematic approach to reason about extreme, low-probability events in frontier AI, tempering complacency and overreaction. The framework is adaptable for iterative use and can be further augmented with AI systems.
+ oai:arXiv.org:2511.21838v2
+ cs.CY
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Yu Xia, Sungchul Kim, Tong Yu, Ryan A. Rossi, Julian McAuley
+ http://creativecommons.org/licenses/by/4.0/
+ Daniel Carpenter, Carson Ezell, Pratyush Mallick, Alexandria Westray
- Connectivity-Preserving Multi-Agent Area Coverage via Optimal-Transport-Based Density-Driven Optimal Control (D2OC)
- https://arxiv.org/abs/2511.18579
- arXiv:2511.18579v3 Announce Type: replace
-Abstract: Multi-agent systems play a central role in area coverage tasks across search-and-rescue, environmental monitoring, and precision agriculture. Achieving non-uniform coverage, where spatial priorities vary across the domain, requires coordinating agents while respecting dynamic and communication constraints. Density-driven approaches can distribute agents according to a prescribed reference density, but existing methods do not ensure connectivity. This limitation often leads to communication loss, reduced coordination, and degraded coverage performance.
- This letter introduces a connectivity-preserving extension of the Density-Driven Optimal Control (D2OC) framework. The coverage objective, defined using the Wasserstein distance between the agent distribution and the reference density, admits a convex quadratic program formulation. Communication constraints are incorporated through a smooth connectivity penalty, which maintains strict convexity, supports distributed implementation, and preserves inter-agent communication without imposing rigid formations.
- Simulation studies show that the proposed method consistently maintains connectivity, improves convergence speed, and enhances non-uniform coverage quality compared with density-driven schemes that do not incorporate explicit connectivity considerations.
- oai:arXiv.org:2511.18579v3
- eess.SY
- cs.RO
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Architecture Decoupling Is Not All You Need For Unified Multimodal Model
+ https://arxiv.org/abs/2511.22663
+ arXiv:2511.22663v2 Announce Type: replace
+Abstract: Unified multimodal models for image generation and understanding represent a significant step toward AGI and have attracted widespread attention from researchers. The main challenge of this task lies in the difficulty in establishing an optimal training paradigm due to inherent conflicting targets in understanding and generation tasks. To alleviate these conflicts and pursue higher performance, many researchers adopt varying degrees of model decoupling (e.g., Double image encoders, MOE/MOT architecture, or frozen MLLM). However, excessive model decoupling can lead to the loss of interleave generation ability, undermining the original intent of unified models. In this work, we aim to explore how to mitigate task conflicts without resorting to model decoupling. Firstly, we analyze why decoupling alleviates conflicts by studying the cross-modal attention behavior of models. We observe that model decoupling essentially drives models toward task-specific multimodal interaction patterns, as seen in Qwen-VL and HunyuanImage, and that the more thorough the decoupling, the more consistent the behavior becomes. Motivated by this observation, we propose Attention Interaction Alignment (AIA) loss, which explicitly learns Task-Specific multimodal interaction patterns during training. To demonstrate the generalizability of our AIA loss, we apply it to Emu3 and Janus-Pro during SFT and post-training stage respectively. Without bells and whistles, AIA not only refines cross-modal attention patterns, but also boosts both generation and understanding performance.
+ oai:arXiv.org:2511.22663v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kooktae Lee, Ethan Brook
+ Dian Zheng, Manyuan Zhang, Hongyu Li, Kai Zou, Hongbo Liu, Ziyu Guo, Kaituo Feng, Yexin Liu, Ying Luo, Yan Feng, Peng Pei, Xunliang Cai, Hongsheng Li
- CoD: A Diffusion Foundation Model for Image Compression
- https://arxiv.org/abs/2511.18706
- arXiv:2511.18706v2 Announce Type: replace
-Abstract: Existing diffusion codecs typically build on text-to-image diffusion foundation models like Stable Diffusion. However, text conditioning is suboptimal from a compression perspective, hindering the potential of downstream diffusion codecs, particularly at ultra-low bitrates. To address it, we introduce \textbf{CoD}, the first \textbf{Co}mpression-oriented \textbf{D}iffusion foundation model, trained from scratch to enable end-to-end optimization of both compression and generation. CoD is not a fixed codec but a general foundation model designed for various diffusion-based codecs. It offers several advantages: \textbf{High compression efficiency}, replacing Stable Diffusion with CoD in downstream codecs like DiffC achieves SOTA results, especially at ultra-low bitrates (e.g., 0.0039 bpp); \textbf{Low-cost and reproducible training}, 300$\times$ faster training than Stable Diffusion ($\sim$ 20 vs. $\sim$ 6,250 A100 GPU days) on entirely open image-only datasets; \textbf{Providing new insights}, e.g., We find pixel-space diffusion can achieve VTM-level PSNR with high perceptual quality and can outperform GAN-based codecs using fewer parameters. We hope CoD lays the foundation for future diffusion codec research. Codes will be released.
- oai:arXiv.org:2511.18706v2
+ Exploring Automated Recognition of Instructional Activity and Discourse from Multimodal Classroom Data
+ https://arxiv.org/abs/2512.00087
+ arXiv:2512.00087v2 Announce Type: replace
+Abstract: Observation of classroom interactions can provide concrete feedback to teachers, but current methods rely on manual annotation, which is resource-intensive and hard to scale. This work explores AI-driven analysis of classroom recordings, focusing on multimodal instructional activity and discourse recognition as a foundation for actionable feedback. Using a densely annotated dataset of 164 hours of video and 68 lesson transcripts, we design parallel, modality-specific pipelines. For video, we evaluate zero-shot multimodal LLMs, fine-tuned vision-language models, and self-supervised video transformers on 24 activity labels. For transcripts, we fine-tune a transformer-based classifier with contextualized inputs and compare it against prompting-based LLMs on 19 discourse labels. To handle class imbalance and multi-label complexity, we apply per-label thresholding, context windows, and imbalance-aware loss functions. The results show that fine-tuned models consistently outperform prompting-based approaches, achieving macro-F1 scores of 0.577 for video and 0.460 for transcripts. These results demonstrate the feasibility of automated classroom analysis and establish a foundation for scalable teacher feedback systems.
+ oai:arXiv.org:2512.00087v2cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhaoyang Jia, Zihan Zheng, Naifu Xue, Jiahao Li, Bin Li, Zongyu Guo, Xiaoyi Zhang, Houqiang Li, Yan Lu
+ Ivo Bueno, Ruikun Hou, Babette B\"uhler, Tim F\"utterer, James Drimalla, Jonathan Kyle Foster, Peter Youngs, Peter Gerjets, Ulrich Trautwein, Enkelejda Kasneci
- A Note on the Parameterised Complexity of Coverability in Vector Addition Systems
- https://arxiv.org/abs/2511.19212
- arXiv:2511.19212v2 Announce Type: replace
-Abstract: We investigate the parameterised complexity of the classic coverability problem for vector addition systems (VAS): given a finite set of vectors $V \subseteq\mathbb{Z}^d$, an initial configuration $s\in\mathbb{N}^d$, and a target configuration $t\in\mathbb{N}^d$, decide whether starting from $s$, one can iteratively add vectors from $V$ to ultimately arrive at a configuration that is larger than or equal to $t$ on every coordinate, while not observing any negative value on any coordinate along the way. We consider two natural parameters for the problem: the dimension $d$ and the size of $V$, defined as the total bitsize of its encoding. We present several results charting the complexity of those two parameterisations, among which the highlight is that coverability for VAS parameterised by the dimension and with all the numbers in the input encoded in unary is complete for the class XNL under PL-reductions. We also discuss open problems in the topic, most notably the question about fixed-parameter tractability for the parameterisation by the size of $V$.
- oai:arXiv.org:2511.19212v2
- cs.CC
- cs.LO
- Thu, 11 Dec 2025 00:00:00 -0500
+ AFRAgent : An Adaptive Feature Renormalization Based High Resolution Aware GUI agent
+ https://arxiv.org/abs/2512.00846
+ arXiv:2512.00846v2 Announce Type: replace
+Abstract: There is a growing demand for mobile user interface (UI) automation, driven by its broad applications across industries. With the advent of visual language models (VLMs), GUI automation has progressed from generating text-based instructions for humans to autonomously executing tasks, thus optimizing automation workflows. Recent approaches leverage VLMs for this problem due to their ability to 1) process on-screen content directly, 2) remain independent of device-specific APIs by utilizing human actions (e.g., clicks, typing), and 3) apply real-world contextual knowledge for task understanding. However, these models often have trouble accurately identifying widgets and determining actions due to limited spatial information in vision encoder features. Additionally, top-performing models are often large, requiring extensive training and resulting in inference delays. In this work, we introduce AFRAgent, an instruct-BLIP-based multimodal architecture that achieves superior performance in GUI automation while being less than one-fourth the size of its nearest competitor. To enhance image embeddings in the large language model (LLM) pipeline, we propose an adaptive feature renormalization-based (a token-level affine transformation) technique that effectively enriches low-resolution image embeddings and fuses high-resolution details. We evaluate AFRAgent on Meta-GUI and AITW benchmarks, establishing a new state-of-the-art baseline for smartphone automation.
+ oai:arXiv.org:2512.00846v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Micha{\l} Pilipczuk, Sylvain Schmitz, Henry Sinclair-Banks
+ Neeraj Anand, Rishabh Jain, Sohan Patnaik, Balaji Krishnamurthy, Mausoom Sarkar
- MAESTRO: Multi-Agent Environment Shaping through Task and Reward Optimization
- https://arxiv.org/abs/2511.19253
- arXiv:2511.19253v2 Announce Type: replace
-Abstract: Cooperative Multi-Agent Reinforcement Learning (MARL) faces two major design bottlenecks: crafting dense reward functions and constructing curricula that avoid local optima in high-dimensional, non-stationary environments. Existing approaches rely on fixed heuristics or use Large Language Models (LLMs) directly in the control loop, which is costly and unsuitable for real-time systems. We propose MAESTRO (Multi-Agent Environment Shaping through Task and Reward Optimization), a framework that moves the LLM outside the execution loop and uses it as an offline training architect. MAESTRO introduces two generative components: (i) a semantic curriculum generator that creates diverse, performance-driven traffic scenarios, and (ii) an automated reward synthesizer that produces executable Python reward functions adapted to evolving curriculum difficulty. These components guide a standard MARL backbone (MADDPG) without increasing inference cost at deployment. We evaluate MAESTRO on large-scale traffic signal control (Hangzhou, 16 intersections) and conduct controlled ablations. Results show that combining LLM-generated curricula with LLM-generated reward shaping yields improved performance and stability. Across four seeds, the full system achieves +4.0% higher mean return (163.26 vs. 156.93) and 2.2% better risk-adjusted performance (Sharpe 1.53 vs. 0.70) over a strong curriculum baseline. These findings highlight LLMs as effective high-level designers for cooperative MARL training.
- oai:arXiv.org:2511.19253v2
- cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ dots.ocr: Multilingual Document Layout Parsing in a Single Vision-Language Model
+ https://arxiv.org/abs/2512.02498
+ arXiv:2512.02498v2 Announce Type: replace
+Abstract: Document Layout Parsing serves as a critical gateway for Artificial Intelligence (AI) to access and interpret the world's vast stores of structured knowledge. This process,which encompasses layout detection, text recognition, and relational understanding, is particularly crucial for empowering next-generation Vision-Language Models. Current methods, however, rely on fragmented, multi-stage pipelines that suffer from error propagation and fail to leverage the synergies of joint training. In this paper, we introduce dots.ocr, a single Vision-Language Model that, for the first time, demonstrates the advantages of jointly learning three core tasks within a unified, end-to-end framework. This is made possible by a highly scalable data engine that synthesizes a vast multilingual corpus, empowering the model to deliver robust performance across a wide array of tasks, encompassing diverse languages, layouts, and domains. The efficacy of our unified paradigm is validated by state-of-the-art performance on the comprehensive OmniDocBench. Furthermore, to catalyze research in global document intelligence, we introduce XDocParse, a challenging new benchmark spanning 126 languages. On this testbed, dots.ocr establishes a powerful new baseline, outperforming the next-best competitor by a remarkable +7.4 point margin and proving its unparalleled multilingual capabilities.
+ oai:arXiv.org:2512.02498v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Boyuan Wu
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yumeng Li, Guang Yang, Hao Liu, Bowen Wang, Colin Zhang
- Active Inference in Discrete State Spaces from First Principles
- https://arxiv.org/abs/2511.20321
- arXiv:2511.20321v2 Announce Type: replace
-Abstract: We seek to clarify the concept of active inference by disentangling it from the Free Energy Principle. We show how the optimizations that need to be carried out in order to implement active inference in discrete state spaces can be formulated as constrained divergence minimization problems which can be solved by standard mean field methods that do not appeal to the idea of expected free energy. When it is used to model perception, the perception/action divergence criterion that we propose coincides with variational free energy. When it is used to model action, it differs from an expected free energy functional by an entropy regularizer.
- oai:arXiv.org:2511.20321v2
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Reframing Human-Robot Interaction Through Extended Reality: Unlocking Safer, Smarter, and More Empathic Interactions with Virtual Robots and Foundation Models
+ https://arxiv.org/abs/2512.02569
+ arXiv:2512.02569v2 Announce Type: replace
+Abstract: This perspective reframes human-robot interaction (HRI) through extended reality (XR), arguing that virtual robots powered by large foundation models (FMs) can serve as cognitively grounded, empathic agents. Unlike physical robots, XR-native agents are unbound by hardware constraints and can be instantiated, adapted, and scaled on demand, while still affording embodiment and co-presence. We synthesize work across XR, HRI, and cognitive AI to show how such agents can support safety-critical scenarios, socially and cognitively empathic interaction across domains, and outreaching physical capabilities with XR and AI integration. We then discuss how multimodal large FMs (e.g., large language model, large vision model, and vision-language model) enable context-aware reasoning, affect-sensitive situations, and long-term adaptation, positioning virtual robots as cognitive and empathic mediators rather than mere simulation assets. At the same time, we highlight challenges and potential risks, including overtrust, cultural and representational bias, privacy concerns around biometric sensing, and data governance and transparency. The paper concludes by outlining a research agenda for human-centered, ethically grounded XR agents - emphasizing multi-layered evaluation frameworks, multi-user ecosystems, mixed virtual-physical embodiment, and societal and ethical design practices to envision XR-based virtual agents powered by FMs as reshaping future HRI into a more efficient and adaptive paradigm.
+ oai:arXiv.org:2512.02569v2
+ cs.HC
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Patrick Kenny
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Yuchong Zhang, Yong Ma, Danica Kragic
- ICM-SR: Image-Conditioned Manifold Regularization for Image Super-Resoultion
- https://arxiv.org/abs/2511.22048
- arXiv:2511.22048v2 Announce Type: replace
-Abstract: Real world image super-resolution (Real-ISR) often leverages the powerful generative priors of text-to-image diffusion models by regularizing the output to lie on their learned manifold. However, existing methods often overlook the importance of the regularizing manifold, typically defaulting to a text-conditioned manifold. This approach suffers from two key limitations. Conceptually, it is misaligned with the Real-ISR task, which is to generate high quality (HQ) images directly tied to the low quality (LQ) images. Practically, the teacher model often reconstructs images with color distortions and blurred edges, indicating a flawed generative prior for this task. To correct these flaws and ensure conceptual alignment, a more suitable manifold must incorporate information from the images. While the most straightforward approach is to condition directly on the raw input images, their high information densities make the regularization process numerically unstable. To resolve this, we propose image-conditioned manifold regularization (ICM), a method that regularizes the output towards a manifold conditioned on the sparse yet essential structural information: a combination of colormap and Canny edges. ICM provides a task-aligned and stable regularization signal, thereby avoiding the instability of dense-conditioning and enhancing the final super-resolution quality. Our experiments confirm that the proposed regularization significantly enhances super-resolution performance, particularly in perceptual quality, demonstrating its effectiveness for real-world applications. We will release the source code of our work for reproducibility.
- oai:arXiv.org:2511.22048v2
+ Glance: Accelerating Diffusion Models with 1 Sample
+ https://arxiv.org/abs/2512.02899
+ arXiv:2512.02899v2 Announce Type: replace
+Abstract: Diffusion models have achieved remarkable success in image generation, yet their deployment remains constrained by the heavy computational cost and the need for numerous inference steps. Previous efforts on fewer-step distillation attempt to skip redundant steps by training compact student models, yet they often suffer from heavy retraining costs and degraded generalization. In this work, we take a different perspective: we accelerate smartly, not evenly, applying smaller speedups to early semantic stages and larger ones to later redundant phases. We instantiate this phase-aware strategy with two experts that specialize in slow and fast denoising phases. Surprisingly, instead of investing massive effort in retraining student models, we find that simply equipping the base model with lightweight LoRA adapters achieves both efficient acceleration and strong generalization. We refer to these two adapters as Slow-LoRA and Fast-LoRA. Through extensive experiments, our method achieves up to 5 acceleration over the base model while maintaining comparable visual quality across diverse benchmarks. Remarkably, the LoRA experts are trained with only 1 samples on a single V100 within one hour, yet the resulting models generalize strongly on unseen prompts.
+ oai:arXiv.org:2512.02899v2cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Junoh Kang, Donghun Ryou, Bohyung Han
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhuobai Dong, Rui Zhao, Songjie Wu, Junchao Yi, Linjie Li, Zhengyuan Yang, Lijuan Wang, Alex Jinpeng Wang
- FADiff: Fusion-Aware Differentiable Optimization for DNN Scheduling on Tensor Accelerators
- https://arxiv.org/abs/2511.22348
- arXiv:2511.22348v2 Announce Type: replace
-Abstract: Efficient deployment of Deep Neural Networks (DNNs), such as Large Language Models (LLMs), on tensor accelerators is essential for maximizing computational efficiency in modern AI systems. However, achieving this is challenging due to the enormous and complex design space created by the interaction of intra-layer mapping and inter-layer fusion. In this work, we present FADiff, a gradient-based optimization framework capable of automatically identifying high-quality intra-layer mapping and inter-layer fusion strategies to accelerate inference for DNN workloads. We first construct a unified and differentiable analytical cost model, which accurately predicts the energy and latency of both single-layer mappings and various layer fusion strategies. Then, by encoding discrete constraints into the loss function, we employ a gradient-based approach to efficiently explore the vast design space, determining the optimal joint strategy for mapping and fusion. Experimental results demonstrate the superiority of FADiff, achieving better optimization in terms of energy and latency compared to existing methods.
- oai:arXiv.org:2511.22348v2
- cs.AR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Video4Spatial: Towards Visuospatial Intelligence with Context-Guided Video Generation
+ https://arxiv.org/abs/2512.03040
+ arXiv:2512.03040v2 Announce Type: replace
+Abstract: We investigate whether video generative models can exhibit visuospatial intelligence, a capability central to human cognition, using only visual data. To this end, we present Video4Spatial, a framework showing that video diffusion models conditioned solely on video-based scene context can perform complex spatial tasks. We validate on two tasks: scene navigation - following camera-pose instructions while remaining consistent with 3D geometry of the scene, and object grounding - which requires semantic localization, instruction following, and planning. Both tasks use video-only inputs, without auxiliary modalities such as depth or poses. With simple yet effective design choices in the framework and data curation, Video4Spatial demonstrates strong spatial understanding from video context: it plans navigation and grounds target objects end-to-end, follows camera-pose instructions while maintaining spatial consistency, and generalizes to long contexts and out-of-domain environments. Taken together, these results advance video generative models toward general visuospatial reasoning.
+ oai:arXiv.org:2512.03040v2
+ cs.CV
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Shuao Jia, Zichao Ling, Chen Bai, Kang Zhao, Jianwang Zhai
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Zeqi Xiao, Yiwei Zhao, Lingxiao Li, Yushi Lan, Ning Yu, Rahul Garg, Roshni Cooper, Mohammad H. Taghavi, Xingang Pan
- World in a Frame: Understanding Culture Mixing as a New Challenge for Vision-Language Models
- https://arxiv.org/abs/2511.22787
- arXiv:2511.22787v2 Announce Type: replace
-Abstract: In a globalized world, cultural elements from diverse origins frequently appear together within a single visual scene. We refer to these as culture mixing scenarios, yet how Large Vision-Language Models (LVLMs) perceive them remains underexplored. We investigate culture mixing as a critical challenge for LVLMs and examine how current models behave when cultural items from multiple regions appear together. To systematically analyze these behaviors, we construct CultureMix, a food Visual Question Answering (VQA) benchmark with 23k diffusion-generated, human-verified culture mixing images across four subtasks: (1) food-only, (2) food+food, (3) food+background, and (4) food+food+background. Evaluating 10 LVLMs, we find consistent failures to preserve individual cultural identities in mixed settings. Models show strong background reliance, with accuracy dropping 14% when cultural backgrounds are added to food-only baselines, and they produce inconsistent predictions for identical foods across different contexts. To address these limitations, we explore three robustness strategies. We find supervised fine-tuning using a diverse culture mixing dataset substantially improve model consistency and reduce background sensitivity. We call for increased attention to culture mixing scenarios as a critical step toward developing LVLMs capable of operating reliably in culturally diverse real-world environments.
- oai:arXiv.org:2511.22787v2
+ Hierarchical Attention for Sparse Volumetric Anomaly Detection in Subclinical Keratoconus
+ https://arxiv.org/abs/2512.03346
+ arXiv:2512.03346v2 Announce Type: replace
+Abstract: The detection of weak, spatially distributed anomalies in volumetric medical imaging remains challenging due to the difficulty of integrating subtle signals across non-adjacent regions. This study presents a controlled comparison of sixteen architectures spanning convolutional, hybrid, and transformer families for subclinical keratoconus detection from three-dimensional anterior segment optical coherence tomography (AS-OCT). The results demonstrate that hierarchical architectures achieve 21-23% higher sensitivity and specificity, particularly in the difficult subclinical regime, outperforming both convolutional neural networks (CNNs) and global-attention Vision Transformer (ViT) baselines. Mechanistic analyses indicate that this advantage arises from spatial scale alignment: hierarchical windowing produces effective receptive fields matched to the intermediate extent of subclinical abnormalities, avoiding the excessive locality observed in convolutional models and the diffuse integration characteristic of pure global attention. Attention-distance measurements show that subclinical cases require longer spatial integration than healthy or overtly pathological volumes, with hierarchical models exhibiting lower variance and more anatomically coherent focus. Representational similarity further indicates that hierarchical attention learns a distinct feature space that balances local structure sensitivity with flexible long-range interactions. Auxiliary age and sex prediction tasks demonstrate moderately high cross-task consistency, supporting the generalizability of these inductive principles. The findings provide design guidance for volumetric anomaly detection and highlight hierarchical attention as a principled approach for early pathological change analysis in medical imaging.
+ oai:arXiv.org:2512.03346v2cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Eunsu Kim, Junyeong Park, Na Min An, Junseong Kim, Hitesh Laxmichand Patel, Jiho Jin, Julia Kruk, Amit Agarwal, Srikant Panda, Fenal Ashokbhai Ilasariya, Hyunjung Shim, Alice Oh
+ Lynn Kandakji, William Woof, Nikolas Pontikos
- Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory
- https://arxiv.org/abs/2511.23083
- arXiv:2511.23083v2 Announce Type: replace
-Abstract: High-capacity kernel Hopfield networks exhibit a \textit{Ridge of Optimization} characterized by extreme stability. While previously linked to \textit{Spectral Concentration}, its origin remains elusive. Here, we analyze the network dynamics on a statistical manifold, revealing that the Ridge corresponds to the Edge of Stability, a critical boundary where the Fisher Information Matrix becomes singular. We demonstrate that the apparent Euclidean force antagonism is a manifestation of \textit{Dual Equilibrium} in the Riemannian space. This unifies learning dynamics and capacity via the Minimum Description Length principle, offering a geometric theory of self-organized criticality.
- oai:arXiv.org:2511.23083v2
- cs.LG
- cs.NE
- stat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
+ HarnessAgent: Scaling Automatic Fuzzing Harness Construction with Tool-Augmented LLM Pipelines
+ https://arxiv.org/abs/2512.03420
+ arXiv:2512.03420v3 Announce Type: replace
+Abstract: Large language model (LLM)-based techniques have achieved notable progress in generating harnesses for program fuzzing. However, applying them to arbitrary functions (especially internal functions) \textit{at scale} remains challenging due to the requirement of sophisticated contextual information, such as specification, dependencies, and usage examples. State-of-the-art methods heavily rely on static or incomplete context provisioning, causing failure of generating functional harnesses. Furthermore, LLMs tend to exploit harness validation metrics, producing plausible yet logically useless code. % Therefore, harness generation across large and diverse projects continues to face challenges in reliable compilation, robust code retrieval, and comprehensive validation.
+ To address these challenges, we present HarnessAgent, a tool-augmented agentic framework that achieves fully automated, scalable harness construction over hundreds of OSS-Fuzz targets. HarnessAgent introduces three key innovations: 1) a rule-based strategy to identify and minimize various compilation errors; 2) a hybrid tool pool for precise and robust symbol source code retrieval; and 3) an enhanced harness validation pipeline that detects fake definitions. We evaluate HarnessAgent on 243 target functions from OSS-Fuzz projects (65 C projects and 178 C++ projects). It improves the three-shot success rate by approximately 20\% compared to state-of-the-art techniques, reaching 87\% for C and 81\% for C++. Our one-hour fuzzing results show that more than 75\% of the harnesses generated by HarnessAgent increase the target function coverage, surpassing the baselines by over 10\%. In addition, the hybrid tool-pool system of HarnessAgent achieves a response rate of over 90\% for source code retrieval, outperforming Fuzz Introspector by more than 30\%.
+ oai:arXiv.org:2512.03420v3
+ cs.CR
+ cs.SE
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Akira Tamamori
+ Kang Yang, Yunhang Zhang, Zichuan Li, Guanhong Tao, Jun Xu, Xiaojing Liao
- THCRL: Trusted Hierarchical Contrastive Representation Learning for Multi-View Clustering
- https://arxiv.org/abs/2512.00368
- arXiv:2512.00368v2 Announce Type: replace
-Abstract: Multi-View Clustering (MVC) has garnered increasing attention in recent years. It is capable of partitioning data samples into distinct groups by learning a consensus representation. However, a significant challenge remains: the problem of untrustworthy fusion. This problem primarily arises from two key factors: 1) Existing methods often ignore the presence of inherent noise within individual views; 2) In traditional MVC methods using Contrastive Learning (CL), similarity computations typically rely on different views of the same instance, while neglecting the structural information from nearest neighbors within the same cluster. Consequently, this leads to the wrong direction for multi-view fusion. To address this problem, we present a novel Trusted Hierarchical Contrastive Representation Learning (THCRL). It consists of two key modules. Specifically, we propose the Deep Symmetry Hierarchical Fusion (DSHF) module, which leverages the UNet architecture integrated with multiple denoising mechanisms to achieve trustworthy fusion of multi-view data. Furthermore, we present the Average K-Nearest Neighbors Contrastive Learning (AKCL) module to align the fused representation with the view-specific representation. Unlike conventional strategies, AKCL enhances representation similarity among samples belonging to the same cluster, rather than merely focusing on the same sample across views, thereby reinforcing the confidence of the fused representation. Extensive experiments demonstrate that THCRL achieves the state-of-the-art performance in deep MVC tasks.
- oai:arXiv.org:2512.00368v2
+ Think Before You Drive: World Model-Inspired Multimodal Grounding for Autonomous Vehicles
+ https://arxiv.org/abs/2512.03454
+ arXiv:2512.03454v2 Announce Type: replace
+Abstract: Interpreting natural-language commands to localize target objects is critical for autonomous driving (AD). Existing visual grounding (VG) methods for autonomous vehicles (AVs) typically struggle with ambiguous, context-dependent instructions, as they lack reasoning over 3D spatial relations and anticipated scene evolution. Grounded in the principles of world models, we propose ThinkDeeper, a framework that reasons about future spatial states before making grounding decisions. At its core is a Spatial-Aware World Model (SA-WM) that learns to reason ahead by distilling the current scene into a command-aware latent state and rolling out a sequence of future latent states, providing forward-looking cues for disambiguation. Complementing this, a hypergraph-guided decoder then hierarchically fuses these states with the multimodal input, capturing higher-order spatial dependencies for robust localization. In addition, we present DrivePilot, a multi-source VG dataset in AD, featuring semantic annotations generated by a Retrieval-Augmented Generation (RAG) and Chain-of-Thought (CoT)-prompted LLM pipeline. Extensive evaluations on six benchmarks, ThinkDeeper ranks #1 on the Talk2Car leaderboard and surpasses state-of-the-art baselines on DrivePilot, MoCAD, and RefCOCO/+/g benchmarks. Notably, it shows strong robustness and efficiency in challenging scenes (long-text, multi-agent, ambiguity) and retains superior performance even when trained on 50% of the data.
+ oai:arXiv.org:2512.03454v2cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jian Zhu
-
-
- CryptoBench: A Dynamic Benchmark for Expert-Level Evaluation of LLM Agents in Cryptocurrency
- https://arxiv.org/abs/2512.00417
- arXiv:2512.00417v4 Announce Type: replace
-Abstract: This paper introduces CryptoBench, the first expert-curated, dynamic benchmark designed to rigorously evaluate the real-world capabilities of Large Language Model (LLM) agents in the uniquely demanding and fast-paced cryptocurrency domain. Unlike general-purpose agent benchmarks for search and prediction, professional crypto analysis presents specific challenges: \emph{extreme time-sensitivity}, \emph{a highly adversarial information environment}, and the critical need to synthesize data from \emph{diverse, specialized sources}, such as on-chain intelligence platforms and real-time Decentralized Finance (DeFi) dashboards. CryptoBench thus serves as a much more challenging and valuable scenario for LLM agent assessment. To address these challenges, we constructed a live, dynamic benchmark featuring 50 questions per month, expertly designed by crypto-native professionals to mirror actual analyst workflows. These tasks are rigorously categorized within a four-quadrant system: Simple Retrieval, Complex Retrieval, Simple Prediction, and Complex Prediction. This granular categorization enables a precise assessment of an LLM agent's foundational data-gathering capabilities alongside its advanced analytical and forecasting skills.
- Our evaluation of ten LLMs, both directly and within an agentic framework, reveals a performance hierarchy and uncovers a failure mode. We observe a \textit{retrieval-prediction imbalance}, where many leading models, despite being proficient at data retrieval, demonstrate a pronounced weakness in tasks requiring predictive analysis. This highlights a problematic tendency for agents to appear factually grounded while lacking the deeper analytical capabilities to synthesize information.
- oai:arXiv.org:2512.00417v4
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jiacheng Guo, Suozhi Huang, Zixin Yao, Yifan Zhang, Yifu Lu, Jiashuo Liu, Zihao Li, Nicholas Deng, Qixin Xiao, Jia Tian, Kanghong Zhan, Tianyi Li, Xiaochen Liu, Jason Ge, Chaoyang He, Kaixuan Huang, Lin Yang, Wenhao Huang, Mengdi Wang
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Haicheng Liao, Huanming Shen, Bonan Wang, Yongkang Li, Yihong Tang, Chengyue Wang, Dingyi Zhuang, Kehua Chen, Hai Yang, Chengzhong Xu, Zhenning Li
- VFM-ISRefiner: Towards Better Adapting Vision Foundation Models for Interactive Segmentation of Remote Sensing Images
- https://arxiv.org/abs/2512.00718
- arXiv:2512.00718v2 Announce Type: replace
-Abstract: Interactive image segmentation(IIS) plays a critical role in generating precise annotations for remote sensing imagery, where objects often exhibit scale variations, irregular boundaries and complex backgrounds. However, existing IIS methods, primarily designed for natural images, struggle to generalize to remote sensing domains due to limited annotated data and computational overhead. To address these challenges, we proposed RS-ISRefiner, a novel click-based IIS framework tailored for remote sensing images. The framework employs an adapter-based tuning strategy that preserves the general representations of Vision Foundation Models while enabling efficient learning of remote sensing-specific spatial and boundary characteristics. A hybrid attention mechanism integrating convolutional local modeling with Transformer-based global reasoning enhances robustness against scale diversity and scene complexity. Furthermore, an improved probability map modulation scheme effectively incorporates historical user interactions, yielding more stable iterative refinement and higher boundary accuracy. Comprehensive experiments on six remote sensing datasets, including iSAID, ISPRS Potsdam, SandBar, NWPU, LoveDA Urban and WHUBuilding, demonstrate that RS-ISRefiner consistently outperforms state-of-the-art IIS methods in terms of segmentation accuracy, efficiency and interaction cost. These results confirm the effectiveness and generalizability of our framework, making it highly suitable for high-quality instance segmentation in practical remote sensing scenarios. The codes are available at https://github.com/wondelyan/VFM-ISRefiner .
- oai:arXiv.org:2512.00718v2
+ Fairness-Aware Fine-Tuning of Vision-Language Models for Medical Glaucoma Diagnosis
+ https://arxiv.org/abs/2512.03477
+ arXiv:2512.03477v2 Announce Type: replace
+Abstract: Vision-language models achieve expert-level performance on medical imaging tasks but exhibit significant diagnostic accuracy disparities across demographic groups. We introduce fairness-aware Low-Rank Adaptation for medical VLMs, combining parameter efficiency with explicit fairness optimization. Our key algorithmic contribution is a differentiable MaxAccGap loss that enables end-to-end optimization of accuracy parity across demographic groups. We propose three methods: FR-LoRA integrates MaxAccGap regularization into the training objective, GR-LoRA applies inverse frequency weighting to balance gradient contributions, and Hybrid-LoRA combines both mechanisms. Evaluated on 10,000 glaucoma fundus images, GR-LoRA reduces diagnostic accuracy disparities by 69% while maintaining 53.15% overall accuracy. Ablation studies reveal that strong regularization strength achieves optimal fairness with minimal accuracy trade-off, and race-specific optimization yields 60% disparity reduction. Our approach requires only 0.24% trainable parameters, enabling practical deployment of fair medical AI in resource-constrained healthcare settings.
+ oai:arXiv.org:2512.03477v2cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Deliang Wang, Peng Liu, Yan Ma, Rongkai Zhuang, Lajiao Chen, Bing Li, Yi Zeng
+ Zijian Gu, Yuxi Liu, Zhenhao Zhang, Song Wang
- Addressing the Plasticity-Stability Dilemma in Reinforcement Learning
- https://arxiv.org/abs/2512.01034
- arXiv:2512.01034v2 Announce Type: replace
-Abstract: Neural networks have shown remarkable success in supervised learning when trained on a single task using a fixed dataset. However, when neural networks are trained on a reinforcement learning task, their ability to continue learning from new experiences declines over time. This decline in learning ability is known as plasticity loss. To restore plasticity, prior work has explored periodically resetting the parameters of the learning network, a strategy that often improves overall performance. However, such resets come at the cost of a temporary drop in performance, which can be dangerous in real-world settings. To overcome this instability, we introduce AltNet, a reset-based approach that restores plasticity without performance degradation by leveraging twin networks. The use of twin networks anchors performance during resets through a mechanism that allows networks to periodically alternate roles: one network learns as it acts in the environment, while the other learns off-policy from the active network's interactions and a replay buffer. At fixed intervals, the active network is reset and the passive network, having learned from prior experiences, becomes the new active network. AltNet restores plasticity, improving sample efficiency and achieving higher performance, while avoiding performance drops that pose risks in safety-critical settings. We demonstrate these advantages in several high-dimensional control tasks from the DeepMind Control Suite, where AltNet outperforms various relevant baseline methods, as well as state-of-the-art reset-based techniques.
- oai:arXiv.org:2512.01034v2
- cs.LG
+ BioMedGPT-Mol: Multi-task Learning for Molecular Understanding and Generation
+ https://arxiv.org/abs/2512.04629
+ arXiv:2512.04629v2 Announce Type: replace
+Abstract: Molecules play a crucial role in biomedical research and discovery, particularly in the field of small molecule drug development. Given the rapid advancements in large language models, especially the recent emergence of reasoning models, it is natural to explore how a general-purpose language model can be efficiently adapted for molecular science applications. In this work, we introduce BioMedGPT-Mol, a molecular language model designed to support molecular understanding and generation tasks. By curating and unifying existing public instruction datasets, we have assembled a large-scale, comprehensive, and high-quality training dataset. The model is then fine-tuned through a meticulously designed multi-task learning framework. On a consolidated benchmark derived from LlaSMol, TOMG-Bench, and MuMOInstruct, BioMedGPT-Mol achieves remarkable performance. Our experimental results demonstrate that a general-purpose reasoning model can be effectively and efficiently post-trained into a professional molecular language model through a well-structured multi-task curriculum. Leveraging these capabilities, we further apply the model to multi-step retrosynthetic planning, achieving state-of-the-art performance on RetroBench and demonstrating its superior efficacy as an end-to-end retrosynthetic planner. We anticipate that our approach can be extended to other biomedical scientific domains.
+ oai:arXiv.org:2512.04629v2cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mansi Maheshwari, John C. Raisbeck, Bruno Castro da Silva
+ Chenyang Zuo, Siqi Fan, Zaiqing Nie
- Generalised Medical Phrase Grounding
- https://arxiv.org/abs/2512.01085
- arXiv:2512.01085v2 Announce Type: replace
-Abstract: Medical phrase grounding (MPG) maps textual descriptions of radiological findings to corresponding image regions. These grounded reports are easier to interpret, especially for non-experts. Existing MPG systems mostly follow the referring expression comprehension (REC) paradigm and return exactly one bounding box per phrase. Real reports often violate this assumption. They contain multi-region findings, non-diagnostic text, and non-groundable phrases, such as negations or descriptions of normal anatomy. Motivated by this, we reformulate the task as generalised medical phrase grounding (GMPG), where each sentence is mapped to zero, one, or multiple scored regions. To realise this formulation, we introduce the first GMPG model: MedGrounder. We adopted a two-stage training regime: pre-training on report sentence--anatomy box alignment datasets and fine-tuning on report sentence--human annotated box datasets. Experiments on PadChest-GR and MS-CXR show that MedGrounder achieves strong zero-shot transfer and outperforms REC-style and grounded report generation baselines on multi-region and non-groundable phrases, while using far fewer human box annotations. Finally, we show that MedGrounder can be composed with existing report generators to produce grounded reports without retraining the generator.
- oai:arXiv.org:2512.01085v2
+ From Generated Human Videos to Physically Plausible Robot Trajectories
+ https://arxiv.org/abs/2512.05094
+ arXiv:2512.05094v2 Announce Type: replace
+Abstract: Video generation models are rapidly improving in their ability to synthesize human actions in novel contexts, holding the potential to serve as high-level planners for contextual robot control. To realize this potential, a key research question remains open: how can a humanoid execute the human actions from generated videos in a zero-shot manner? This challenge arises because generated videos are often noisy and exhibit morphological distortions that make direct imitation difficult compared to real video. To address this, we introduce a two-stage pipeline. First, we lift video pixels into a 4D human representation and then retarget to the humanoid morphology. Second, we propose GenMimic-a physics-aware reinforcement learning policy conditioned on 3D keypoints, and trained with symmetry regularization and keypoint-weighted tracking rewards. As a result, GenMimic can mimic human actions from noisy, generated videos. We curate GenMimicBench, a synthetic human-motion dataset generated using two video generation models across a spectrum of actions and contexts, establishing a benchmark for assessing zero-shot generalization and policy robustness. Extensive experiments demonstrate improvements over strong baselines in simulation and confirm coherent, physically stable motion tracking on a Unitree G1 humanoid robot without fine-tuning. This work offers a promising path to realizing the potential of video generation models as high-level policies for robot control.
+ oai:arXiv.org:2512.05094v2
+ cs.ROcs.CV
- cs.CL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Wenjun Zhang, Shekhar S. Chandra, Aaron Nicolson
+ James Ni, Zekai Wang, Wei Lin, Amir Bar, Yann LeCun, Trevor Darrell, Jitendra Malik, Roei Herzig
- JFR: An Efficient Jump Frontier Relaxation Strategy for Bellman-Ford
- https://arxiv.org/abs/2512.01802
- arXiv:2512.01802v3 Announce Type: replace
-Abstract: We propose JFR, a Bellman-Ford-based optimization framework leveraging frontier contraction and abstract multi-hop jump propagation to accelerate shortest-path computation while strictly preserving correctness. JFR achieves substantial reductions in relaxation operations, ranging from 25 to 99 percent, across sparse, dense, and negative-edge graphs, ensuring robust performance even under adversarial or highly connected topologies. On ultra-large graphs with up to N=20,000 nodes and 295 million edges, JFR maintains strong operational reductions and comparable or improved runtime relative to SPFA-SLF, demonstrating consistent robustness across graph size and density. Lower relaxation counts imply reduced memory-access overheads and computational effort; this normalized work reduction highlights JFR's suitability for scenarios requiring high throughput or energy-conscious operation. Future work focuses on integrating high-performance queue structures, adaptive frontier strategies, and cache-aware techniques to further reduce constant-factor overheads and fully realize JFR's practical runtime potential.
- oai:arXiv.org:2512.01802v3
- cs.DS
- Thu, 11 Dec 2025 00:00:00 -0500
+ EvoIR: Towards All-in-One Image Restoration via Evolutionary Frequency Modulation
+ https://arxiv.org/abs/2512.05104
+ arXiv:2512.05104v2 Announce Type: replace
+Abstract: All-in-One Image Restoration (AiOIR) tasks often involve diverse degradation that require robust and versatile strategies. However, most existing approaches typically lack explicit frequency modeling and rely on fixed or heuristic optimization schedules, which limit the generalization across heterogeneous degradation. To address these limitations, we propose EvoIR, an AiOIR-specific framework that introduces evolutionary frequency modulation for dynamic and adaptive image restoration. Specifically, EvoIR employs the Frequency-Modulated Module (FMM) that decomposes features into high- and low-frequency branches in an explicit manner and adaptively modulates them to enhance both structural fidelity and fine-grained details. Central to EvoIR, an Evolutionary Optimization Strategy (EOS) iteratively adjusts frequency-aware objectives through a population-based evolutionary process, dynamically balancing structural accuracy and perceptual fidelity. Its evolutionary guidance further mitigates gradient conflicts across degradation and accelerates convergence. By synergizing FMM and EOS, EvoIR yields greater improvements than using either component alone, underscoring their complementary roles. Extensive experiments on multiple benchmarks demonstrate that EvoIR outperforms state-of-the-art AiOIR methods.
+ oai:arXiv.org:2512.05104v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xin Wang, Xi Chen
+ Jiaqi Ma, Shengkai Hu, Xu Zhang, Jun Wan, Jiaxing Huang, Lefei Zhang, Salman Khan
- Enhancing Floor Plan Recognition: A Hybrid Mix-Transformer and U-Net Approach for Precise Wall Segmentation
- https://arxiv.org/abs/2512.02413
- arXiv:2512.02413v2 Announce Type: replace
-Abstract: Automatic 3D reconstruction of indoor spaces from 2D floor plans necessitates high-precision semantic segmentation of structural elements, particularly walls. However, existing methods often struggle with detecting thin structures and maintaining geometric precision. This study introduces MitUNet, a hybrid neural network combining a Mix-Transformer encoder and a U-Net decoder enhanced with spatial and channel attention blocks. Our approach, optimized with the Tversky loss function, achieves a balance between precision and recall, ensuring accurate boundary recovery. Experiments on the CubiCasa5k dataset and a proprietary regional dataset demonstrate MitUNet's superiority in generating structurally correct masks with high boundary accuracy, outperforming standard models. This tool provides a robust foundation for automated 3D reconstruction pipelines. To ensure reproducibility and facilitate future research, the source code and the proprietary regional dataset are publicly available at https://github.com/aliasstudio/mitunet and https://doi.org/10.5281/zenodo.17871079 respectively.
- oai:arXiv.org:2512.02413v2
+ LoC-Path: Learning to Compress for Pathology Multimodal Large Language Models
+ https://arxiv.org/abs/2512.05391
+ arXiv:2512.05391v2 Announce Type: replace
+Abstract: Whole Slide Image (WSI) understanding is fundamentally challenging due to its gigapixel scale and the extreme sparsity of diagnostically relevant regions. Unlike human experts who primarily rely on key areas to arrive at a diagnosis, existing slide-level multimodal large language models (MLLMs) for pathology rely on heavy slide-level encoders that process thousands of patch features in a brute-force manner, resulting in excessive computational cost. In this work, we revisit the WSI-language modeling paradigm and show that tile-level features exhibit strong global and local redundancy, whereas only a small subset of tiles are truly task-relevant. Motivated by this observation, we introduce an efficient MLLM framework, called LoC-Path, that replaces the expensive slide-level encoder with redundancy-reducing modules. We first design a Sparse Token Merger (STM) and an MAE-pretrained resampler to remove local redundancy and compress globally redundant tile tokens into a compact slide-level representation set. We then propose a Cross-Attention Routing Adapter (CARA) and a Token Importance Scorer (TIS) to integrate the compressed visual representation with the language model in a computation-efficient manner. Extensive experiments demonstrate that our approach achieves performance comparable to existing state-of-the-art whole-slide MLLMs, while requiring significantly lower computation and memory.
+ oai:arXiv.org:2512.05391v2cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Dmitriy Parashchuk, Alexey Kapshitskiy, Yuriy Karyakin
+ Qingqiao Hu, Weimin Lyu, Meilong Xu, Kehan Qi, Xiaoling Hu, Saumya Gupta, Jiawei Zhou, Chao Chen
- Unsupervised Structural Scene Decomposition via Foreground-Aware Slot Attention with Pseudo-Mask Guidance
- https://arxiv.org/abs/2512.02685
- arXiv:2512.02685v2 Announce Type: replace
-Abstract: Recent advances in object-centric representation learning have shown that slot attention-based methods can effectively decompose visual scenes into object slot representations without supervision. However, existing approaches typically process foreground and background regions indiscriminately, often resulting in background interference and suboptimal instance discovery performance on real-world data. To address this limitation, we propose Foreground-Aware Slot Attention (FASA), a two-stage framework that explicitly separates foreground from background to enable precise object discovery. In the first stage, FASA performs a coarse scene decomposition to distinguish foreground from background regions through a dual-slot competition mechanism. These slots are initialized via a clustering-based strategy, yielding well-structured representations of salient regions. In the second stage, we introduce a masked slot attention mechanism where the first slot captures the background while the remaining slots compete to represent individual foreground objects. To further address over-segmentation of foreground objects, we incorporate pseudo-mask guidance derived from a patch affinity graph constructed with self-supervised image features to guide the learning of foreground slots. Extensive experiments on both synthetic and real-world datasets demonstrate that FASA consistently outperforms state-of-the-art methods, validating the effectiveness of explicit foreground modeling and pseudo-mask guidance for robust scene decomposition and object-coherent representation. Code will be made publicly available.
- oai:arXiv.org:2512.02685v2
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ LMSpell: Neural Spell Checking for Low-Resource Languages
+ https://arxiv.org/abs/2512.05414
+ arXiv:2512.05414v3 Announce Type: replace
+Abstract: Spell correction is still a challenging problem for low-resource languages (LRLs). While pretrained language models (PLMs) have been employed for spell correction, their use is still limited to a handful of languages, and there has been no proper comparison across PLMs. We present the first empirical study on the effectiveness of PLMs for spell correction, which includes LRLs. We find that Large Language Models (LLMs) outperform their counterparts (encoder-based and encoder-decoder) when the fine-tuning dataset is large. This observation holds even in languages for which the LLM is not pre-trained. We release LMSpell, an easy- to use spell correction toolkit across PLMs. It includes an evaluation function that compensates for the hallucination of LLMs. Further, we present a case study with Sinhala to shed light on the plight of spell correction for LRLs.
+ oai:arXiv.org:2512.05414v3
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Huankun Sheng, Ming Li, Yixiang Wei, Yeying Fan, Yu-Hui Wen, Tieliang Gong, Yong-Jin Liu
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Akesh Gunathilake, Nadil Karunarathna, Tharusha Bandaranayake, Nisansa de Silva, Surangika Ranathunga
- Numerical Analysis of the 2D Stochastic Navier-Stokes Equations: Convergence under Transport Noise and No-slip Boundary Conditions
- https://arxiv.org/abs/2512.03483
- arXiv:2512.03483v2 Announce Type: replace
-Abstract: This work is concerned with the numerical approximation of the two-dimensional stochastic Navier-Stokes equation with transport noise and no-slip boundary conditions on a convex polygonal domain. The analysis is challenged by the solution's low spatial regularity and the non-Lipschitz nonlinearity. We derive a convergence rate in the mean-square sense for a spatial semidiscretization. Furthermore, for the full discretization, we prove convergence in probability and establish an explicit rate with respect to the time step.
- oai:arXiv.org:2512.03483v2
- math.NA
- cs.NA
- math.PR
- Thu, 11 Dec 2025 00:00:00 -0500
+ Are Bus-Mounted Edge Servers Feasible?
+ https://arxiv.org/abs/2512.05543
+ arXiv:2512.05543v3 Announce Type: replace
+Abstract: Placement of edge servers is the prerequisite of provisioning edge computing services for Internet of Vehicles (IoV). Fixed-site edge servers at Road Side Units (RSUs) or base stations are able to offer basic service coverage for end users, i.e., vehicles on road. However, the server locations and capacity are fixed after deployment, rendering their inefficiency in handling spationtemporal user dynamics. Mobile servers such as buses, on the other hand, have the potential of adding computation elasticity to such system. To this end, this paper studies the feasibility of bus-mounted edge servers based on real traces. First, we investigate the coverage of the buses and base stations using the Shanghai bus/taxi/Telecom datasets, which shows a great potential of bus-based edge servers as they cover a great portion of geographic area and demand points. Next, we build a mathematical model and design a simple greedy heuristic algorithm to select a limited number of buses that maximizes the coverage of demand points, i.e., with a limited purchase budget. We perform trace-driven simulations to verify the performance of the proposed bus selection algorithm. The results show that our approach effectively handles the dynamic user demand under realistic constraints such as server capacity and purchase quantity. Thus, we claim: bus-mounted edge servers for vehicular networks in urban areas are feasible, beneficial, and valuable.
+ oai:arXiv.org:2512.05543v3
+ cs.DC
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Binjie Li, Qin Zhou
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xuezhi Li, Jiancong He, Ming Xie, Xuyang Chen, Le Chang, Li Jiang, Gui Gui
- Cross-Space Synergy: A Unified Framework for Multimodal Emotion Recognition in Conversation
- https://arxiv.org/abs/2512.03521
- arXiv:2512.03521v3 Announce Type: replace
-Abstract: Multimodal Emotion Recognition in Conversation (MERC) aims to predict speakers' emotions by integrating textual, acoustic, and visual cues. Existing approaches either struggle to capture complex cross-modal interactions or experience gradient conflicts and unstable training when using deeper architectures. To address these issues, we propose Cross-Space Synergy (CSS), which couples a representation component with an optimization component. Synergistic Polynomial Fusion (SPF) serves the representation role, leveraging low-rank tensor factorization to efficiently capture high-order cross-modal interactions. Pareto Gradient Modulator (PGM) serves the optimization role, steering updates along Pareto-optimal directions across competing objectives to alleviate gradient conflicts and improve stability. Experiments show that CSS outperforms existing representative methods on IEMOCAP and MELD in both accuracy and training stability, demonstrating its effectiveness in complex multimodal scenarios.
- oai:arXiv.org:2512.03521v3
- cs.MM
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ A Greek Government Decisions Dataset for Public-Sector Analysis and Insight
+ https://arxiv.org/abs/2512.05647
+ arXiv:2512.05647v2 Announce Type: replace
+Abstract: We introduce an open, machine-readable corpus of Greek government decisions sourced from the national transparency platform Diavgeia. The resource comprises 1 million decisions, featuring and high-quality raw text extracted from PDFs. It is released with raw extracted text in Markdown format, alongside a fully reproducible extraction pipeline. Beyond the core dataset, we conduct qualitative analyses to explore boilerplate patterns and design a retrieval-augmented generation (RAG) task by formulating a set of representative questions, creating high-quality answers, and evaluating a baseline RAG system on its ability to retrieve and reason over public decisions. This evaluation demonstrates the potential of large-scale public-sector corpora to support advanced information access and transparency through structured retrieval and reasoning over governmental documents, and highlights how such a RAG pipeline could simulate a chat-based assistant capable of interactively answering questions about public decisions. Due to its scale, quality, and domain coverage, the corpus can also serve as high-value pre-training or fine-tuning material for new Language Models (LMs) and Large Language Models (LLMs) respectively, including specialized models for legal and governmental domains, and as a foundation for novel approaches in domain adaptation, knowledge-grounded generation, and explainable AI. Finally, we discuss limitations, outline future directions, and make both the data and the code accessible.
+ oai:arXiv.org:2512.05647v2
+ cs.CL
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xiaosen Lyu, Jiayu Xiong, Yuren Chen, Wanlong Wang, Xiaoqing Dai, Jing Wang
+ http://creativecommons.org/licenses/by/4.0/
+ Giorgos Antoniou, Giorgos Filandrianos, Aggelos Vlachos, Giorgos Stamou, Lampros Kollimenos, Konstantinos Skianis, Michalis Vazirgiannis
- MoReGen: Multi-Agent Motion-Reasoning Engine for Code-based Text-to-Video Synthesis
- https://arxiv.org/abs/2512.04221
- arXiv:2512.04221v2 Announce Type: replace
-Abstract: While text-to-video (T2V) generation has achieved remarkable progress in photorealism, generating intent-aligned videos that faithfully obey physics principles remains a core challenge. In this work, we systematically study Newtonian motion-controlled text-to-video generation and evaluation, emphasizing physical precision and motion coherence. We introduce MoReGen, a motion-aware, physics-grounded T2V framework that integrates multi-agent LLMs, physics simulators, and renderers to generate reproducible, physically accurate videos from text prompts in the code domain. To quantitatively assess physical validity, we propose object-trajectory correspondence as a direct evaluation metric and present MoReSet, a benchmark of 1,275 human-annotated videos spanning nine classes of Newtonian phenomena with scene descriptions, spatiotemporal relations, and ground-truth trajectories. Using MoReSet, we conduct experiments on existing T2V models, evaluating their physical validity through both our MoRe metrics and existing physics-based evaluators. Our results reveal that state-of-the-art models struggle to maintain physical validity, while MoReGen establishes a principled direction toward physically coherent video synthesis.
- oai:arXiv.org:2512.04221v2
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Heard or Halted? Gender, Interruptions, and Emotional Tone in U.S. Supreme Court Oral Arguments
+ https://arxiv.org/abs/2512.05832
+ arXiv:2512.05832v2 Announce Type: replace
+Abstract: This study examines how interruptions during U.S. Supreme Court oral arguments shape both the semantic content and emotional tone of advocates' speech, with a focus on gendered dynamics in judicial discourse. Using the ConvoKit Supreme Court Corpus (2010-2019), we analyze 12,663 speech chunks from advocate-justice interactions to assess whether interruptions alter the meaning of an advocate's argument and whether interruptions toward female advocates exhibit more negative emotional valence. Semantic shifts are quantified using GloVe-based sentence embeddings, while sentiment is measured through lexicon-based analysis. We find that semantic similarity between pre- and post-interruption speech remains consistently high, suggesting that interruptions do not substantially alter argumentative content. However, interruptions directed at female advocates contain significantly higher levels of negative sentiment. These results deepen empirical understanding of gendered communication in elite institutional settings and demonstrate the value of computational linguistic methods for studying power, discourse, and equity in judicial proceedings.
+ oai:arXiv.org:2512.05832v2
+ cs.CL
+ cs.CY
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Xiangyu Bai, He Liang, Bishoy Galoaa, Utsav Nandi, Shayda Moezzi, Yuhang He, Sarah Ostadabbas
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yifei Tong
- Self-Paced and Self-Corrective Masked Prediction for Movie Trailer Generation
- https://arxiv.org/abs/2512.04426
- arXiv:2512.04426v2 Announce Type: replace
-Abstract: As a challenging video editing task, movie trailer generation involves selecting and reorganizing movie shots to create engaging trailers. Currently, most existing automatic trailer generation methods employ a "selection-then-ranking" paradigm (i.e., first selecting key shots and then ranking them), which suffers from inevitable error propagation and limits the quality of the generated trailers. Beyond this paradigm, we propose a new self-paced and self-corrective masked prediction method called SSMP, which achieves state-of-the-art results in automatic trailer generation via bi-directional contextual modeling and progressive self-correction. In particular, SSMP trains a Transformer encoder that takes the movie shot sequences as prompts and generates corresponding trailer shot sequences accordingly. The model is trained via masked prediction, reconstructing each trailer shot sequence from its randomly masked counterpart. The mask ratio is self-paced, allowing the task difficulty to adapt to the model and thereby improving model performance. When generating a movie trailer, the model fills the shot positions with high confidence at each step and re-masks the remaining positions for the next prediction, forming a progressive self-correction mechanism that is analogous to how human editors work. Both quantitative results and user studies demonstrate the superiority of SSMP in comparison to existing automatic movie trailer generation methods. Demo is available at: https://github.com/Dixin-Lab/SSMP.
- oai:arXiv.org:2512.04426v2
+ EmoDiffTalk:Emotion-aware Diffusion for Editable 3D Gaussian Talking Head
+ https://arxiv.org/abs/2512.05991
+ arXiv:2512.05991v2 Announce Type: replace
+Abstract: Recent photo-realistic 3D talking head via 3D Gaussian Splatting still has significant shortcoming in emotional expression manipulation, especially for fine-grained and expansive dynamics emotional editing using multi-modal control. This paper introduces a new editable 3D Gaussian talking head, i.e. EmoDiffTalk. Our key idea is a novel Emotion-aware Gaussian Diffusion, which includes an action unit (AU) prompt Gaussian diffusion process for fine-grained facial animator, and moreover an accurate text-to-AU emotion controller to provide accurate and expansive dynamic emotional editing using text input. Experiments on public EmoTalk3D and RenderMe-360 datasets demonstrate superior emotional subtlety, lip-sync fidelity, and controllability of our EmoDiffTalk over previous works, establishing a principled pathway toward high-quality, diffusion-driven, multimodal editable 3D talking-head synthesis. To our best knowledge, our EmoDiffTalk is one of the first few 3D Gaussian Splatting talking-head generation framework, especially supporting continuous, multimodal emotional editing within the AU-based expression space.
+ oai:arXiv.org:2512.05991v2cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sidan Zhu, Hongteng Xu, Dixin Luo
+ Chang Liu, Tianjiao Jing, Chengcheng Ma, Xuanqi Zhou, Zhengxuan Lian, Qin Jin, Hongliang Yuan, Shi-Sheng Huang
- Persona-based Multi-Agent Collaboration for Brainstorming
- https://arxiv.org/abs/2512.04488
- arXiv:2512.04488v2 Announce Type: replace
-Abstract: We demonstrate the importance of persona-based multi-agents brainstorming for both diverse topics and subject matter ideation. Prior work has shown that generalized multi-agent collaboration often provides better reasoning than a single agent alone. In this paper, we propose and develop a framework for persona-based agent selection, showing how persona domain curation can improve brainstorming outcomes. Using multiple experimental setups, we evaluate brainstorming outputs across different persona pairings (e.g., Doctor vs VR Engineer) and A2A (agent-to-agent) dynamics (separate, together, separate-then-together). Our results show that (1) persona choice shapes idea domains, (2) collaboration mode shifts diversity of idea generation, and (3) multi-agent persona-driven brainstorming produces idea depth and cross-domain coverage.
- oai:arXiv.org:2512.04488v2
+ WAM-Flow: Parallel Coarse-to-Fine Motion Planning via Discrete Flow Matching for Autonomous Driving
+ https://arxiv.org/abs/2512.06112
+ arXiv:2512.06112v2 Announce Type: replace
+Abstract: We introduce WAM-Flow, a vision-language-action (VLA) model that casts ego-trajectory planning as discrete flow matching over a structured token space. In contrast to autoregressive decoders, WAM-Flow performs fully parallel, bidirectional denoising, enabling coarse-to-fine refinement with a tunable compute-accuracy trade-off. Specifically, the approach combines a metric-aligned numerical tokenizer that preserves scalar geometry via triplet-margin learning, a geometry-aware flow objective and a simulator-guided GRPO alignment that integrates safety, ego progress, and comfort rewards while retaining parallel generation. A multi-stage adaptation converts a pre-trained auto-regressive backbone (Janus-1.5B) from causal decoding to non-causal flow model and strengthens road-scene competence through continued multimodal pretraining. Thanks to the inherent nature of consistency model training and parallel decoding inference, WAM-Flow achieves superior closed-loop performance against autoregressive and diffusion-based VLA baselines, with 1-step inference attaining 89.1 PDMS and 5-step inference reaching 90.3 PDMS on NAVSIM v1 benchmark. These results establish discrete flow matching as a new promising paradigm for end-to-end autonomous driving. The code will be publicly available soon.
+ oai:arXiv.org:2512.06112v2
+ cs.ROcs.AI
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Nate Straub, Saara Khan, Katharina Jay, Brian Cabral, Oskar Linde
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yifang Xu, Jiahao Cui, Feipeng Cai, Zhihao Zhu, Hanlin Shang, Shan Luan, Mingwang Xu, Neng Zhang, Yaoyi Li, Jia Cai, Siyu Zhu
- Distributed scalable coupled policy algorithm for networked multi-agent reinforcement learning
- https://arxiv.org/abs/2512.05447
- arXiv:2512.05447v2 Announce Type: replace
-Abstract: This paper studies networked multi-agent reinforcement learning (NMARL) with interdependent rewards and coupled policies. In this setting, each agent's reward depends on its own state-action pair as well as those of its direct neighbors, and each agent's policy is parameterized by its local parameters together with those of its $\kappa_{p}$-hop neighbors, with $\kappa_{p}\geq 1$ denoting the coupled radius. The objective of the agents is to collaboratively optimize their policies to maximize the discounted average cumulative reward. To address the challenge of interdependent policies in collaborative optimization, we introduce a novel concept termed the neighbors' averaged $Q$-function and derive a new expression for the coupled policy gradient. Based on these theoretical foundations, we develop a distributed scalable coupled policy (DSCP) algorithm, where each agent relies only on the state-action pairs of its $\kappa_{p}$-hop neighbors and the rewards of its $(\kappa_{p}+1)$-hop neighbors. Specially, in the DSCP algorithm, we employ a geometric 2-horizon sampling method that does not require storing a full $Q$-table to obtain an unbiased estimate of the coupled policy gradient. Moreover, each agent interacts exclusively with its direct neighbors to obtain accurate policy parameters, while maintaining local estimates of other agents' parameters to execute its local policy and collect samples for optimization. These estimates and policy parameters are updated via a push-sum protocol, enabling distributed coordination of policy updates across the network. We prove that the joint policy produced by the proposed algorithm converges to a first-order stationary point of the objective function. Finally, the effectiveness of DSCP algorithm is demonstrated through simulations in a robot path planning environment, showing clear improvement over state-of-the-art methods.
- oai:arXiv.org:2512.05447v2
- cs.MA
- Thu, 11 Dec 2025 00:00:00 -0500
+ DDFI: Diverse and Distribution-aware Missing Feature Imputation via Two-step Reconstruction
+ https://arxiv.org/abs/2512.06356
+ arXiv:2512.06356v2 Announce Type: replace
+Abstract: Incomplete node features are ubiquitous in real-world scenarios, e.g., the attributes of web users may be partly private, which causes the performance of Graph Neural Networks (GNNs) to decline significantly. Feature propagation (FP) is a well-known method that performs well for imputation of missing node features on graphs, but it still has the following three issues: 1) it struggles with graphs that are not fully connected, 2) imputed features face the over-smoothing problem, and 3) FP is tailored for transductive tasks, overlooking the feature distribution shift in inductive tasks. To address these challenges, we introduce DDFI, a Diverse and Distribution-aware Missing Feature Imputation method that combines feature propagation with a graph-based Masked AutoEncoder (MAE) in a nontrivial manner. It first designs a simple yet effective algorithm, namely Co-Label Linking (CLL), that randomly connects nodes in the training set with the same label to enhance the performance on graphs with numerous connected components. Then we develop a novel two-step representation generation process at the inference stage. Specifically, instead of directly using FP-imputed features as input during inference, DDFI further reconstructs the features through the whole MAE to reduce feature distribution shift in the inductive tasks and enhance the diversity of node features. Meanwhile, since existing feature imputation methods for graphs only evaluate by simulating the missing scenes with manually masking the features, we collect a new dataset called Sailing from the records of voyages that contains naturally missing features to help better evaluate the effectiveness. Extensive experiments conducted on six public datasets and Sailing show that DDFI outperforms the state-of-the-art methods under both transductive and inductive settings.
+ oai:arXiv.org:2512.06356v2
+ cs.LG
+ cs.SI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Pengcheng Dai, Dongming Wang, Wenwu Yu, Wei Ren
+ Yifan Song, Fenglin Yu, Yihong Luo, Xingjian Tao, Siya Qiu, Kai Han, Jing Tang
- ProbeWalk: Fast Estimation of Biharmonic Distance on Graphs via Probe-Driven Random Walks
- https://arxiv.org/abs/2512.05460
- arXiv:2512.05460v2 Announce Type: replace
-Abstract: The biharmonic distance is a fundamental metric on graphs that measures the dissimilarity between two nodes, capturing both local and global structures. It has found applications across various fields, including network centrality, graph clustering, and machine learning. These applications typically require efficient evaluation of pairwise biharmonic distances. However, existing algorithms remain computationally expensive. The state-of-the-art method attains an absolute-error guarantee epsilon_abs with time complexity O(L^5 / epsilon_abs^2), where L denotes the truncation length. In this work, we improve the complexity to O(L^3 / epsilon^2) under a relative-error guarantee epsilon via probe-driven random walks. We provide a relative-error guarantee rather than an absolute-error guarantee because biharmonic distances vary by orders of magnitude across node pairs. Since L is often very large in real-world networks (for example, L >= 10^3), reducing the L-dependence from the fifth to the third power yields substantial gains. Extensive experiments on real-world networks show that our method delivers 10x-1000x per-query speedups at matched relative error over strong baselines and scales to graphs with tens of millions of nodes.
- oai:arXiv.org:2512.05460v2
- cs.SI
- cs.DS
- Thu, 11 Dec 2025 00:00:00 -0500
+ JEEVHITAA -- An End-to-End HCAI System to Support Collective Care
+ https://arxiv.org/abs/2512.06364
+ arXiv:2512.06364v2 Announce Type: replace
+Abstract: Current mobile health platforms are predominantly individual-centric and lack the necessary primitives for coordinated, auditable, multi-actor workflows. However, in many settings worldwide, health decisions are enacted by multi-actor care networks rather than single users. We present JEEVHITAA, an Android/Flutter system that provides context-sensitive, role-aware sharing and verifiable information flows for care circles. JEEVHITAA ingests platform and device data (via Google Health Connect and BLE connectors), constructs multi-layer user profiles from sensor streams and tiered onboarding, and enforces fine-grained, time-bounded access control across permissioned care graphs. Data are end-to-end encrypted in local stores and during peer sync (Firebase), and provisions are made for document capture by camera or upload as PDF. An integrated retrieval-augmented LLM pipeline (i) produces structured, role-targeted summaries and action plans, (ii) enables users to gather advanced insights on health reports, and (iii) performs evidence-grounded user-relevant verification of arbitrary health content, returning provenance, confidence scores, and source citations. We describe the system architecture, connector abstractions, and security primitives, and evaluate robustness and compatibility using synthetic, ontology-driven simulations and vendor compatibility tests. Finally, we outline plans for longitudinal in-the-wild deployments to measure system performance, the correctness of access control, and the real-world effectiveness of relationship-aware credibility support.
+ oai:arXiv.org:2512.06364v2
+ cs.CR
+ cs.HC
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Dehong Zheng, Zhongzhi Zhang
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Shyama Sastha Krishnamoorthy Srinivasan, Harsh Pala, Mohan Kumar, Pushpendra Singh
- Bring Your Dreams to Life: Continual Text-to-Video Customization
- https://arxiv.org/abs/2512.05802
- arXiv:2512.05802v2 Announce Type: replace
-Abstract: Customized text-to-video generation (CTVG) has recently witnessed great progress in generating tailored videos from user-specific text. However, most CTVG methods assume that personalized concepts remain static and do not expand incrementally over time. Additionally, they struggle with forgetting and concept neglect when continuously learning new concepts, including subjects and motions. To resolve the above challenges, we develop a novel Continual Customized Video Diffusion (CCVD) model, which can continuously learn new concepts to generate videos across various text-to-video generation tasks by tackling forgetting and concept neglect. To address catastrophic forgetting, we introduce a concept-specific attribute retention module and a task-aware concept aggregation strategy. They can capture the unique characteristics and identities of old concepts during training, while combining all subject and motion adapters of old concepts based on their relevance during testing. Besides, to tackle concept neglect, we develop a controllable conditional synthesis to enhance regional features and align video contexts with user conditions, by incorporating layer-specific region attention-guided noise estimation. Extensive experimental comparisons demonstrate that our CCVD outperforms existing CTVG baselines on both the DreamVideo and Wan 2.1 backbones. The code is available at https://github.com/JiahuaDong/CCVD.
- oai:arXiv.org:2512.05802v2
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ RLAX: Large-Scale, Distributed Reinforcement Learning for Large Language Models on TPUs
+ https://arxiv.org/abs/2512.06392
+ arXiv:2512.06392v2 Announce Type: replace
+Abstract: Reinforcement learning (RL) has emerged as the de-facto paradigm for improving the reasoning capabilities of large language models (LLMs). We have developed RLAX, a scalable RL framework on TPUs. RLAX employs a parameter-server architecture. A master trainer periodically pushes updated model weights to the parameter server while a fleet of inference workers pull the latest weights and generates new rollouts. We introduce a suite of system techniques to enable scalable and preemptible RL for a diverse set of state-of-art RL algorithms. To accelerate convergence and improve model quality, we have devised new dataset curation and alignment techniques. Large-scale evaluations show that RLAX improves QwQ-32B's pass@8 accuracy by 12.8% in just 12 hours 48 minutes on 1024 v5p TPUs, while remaining robust to preemptions during training.
+ oai:arXiv.org:2512.06392v2
+ cs.LG
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jiahua Dong, Xudong Wang, Wenqi Liang, Zongyan Han, Meng Cao, Duzhen Zhang, Hanbin Zhao, Zhi Han, Salman Khan, Fahad Shahbaz Khan
+ Runlong Zhou, Lefan Zhang, Shang-Chen Wu, Kelvin Zou, Hanzhi Zhou, Ke Ye, Yihao Feng, Dong Yin, Alex Guillen Garcia, Dmytro Babych, Rohit Chatterjee, Matthew Hopkins, Xiang Kong, Chang Lan, Lezhi Li, Yiping Ma, Daniele Molinari, Senyu Tong, Yanchao Sun, Thomas Voice, Jianyu Wang, Chong Wang, Simon Wang, Floris Weers, Yechen Xu, Guolin Yin, Muyang Yu, Yi Zhang, Zheng Zhou, Danyang Zhuo, Ruoming Pang, Cheng Leong
- Toward Efficient and Robust Behavior Models for Multi-Agent Driving Simulation
- https://arxiv.org/abs/2512.05812
- arXiv:2512.05812v2 Announce Type: replace
-Abstract: Scalable multi-agent driving simulation requires behavior models that are both realistic and computationally efficient. We address this by optimizing the behavior model that controls individual traffic participants. To improve efficiency, we adopt an instance-centric scene representation, where each traffic participant and map element is modeled in its own local coordinate frame. This design enables efficient, viewpoint-invariant scene encoding and allows static map tokens to be reused across simulation steps. To model interactions, we employ a query-centric symmetric context encoder with relative positional encodings between local frames. We use Adversarial Inverse Reinforcement Learning to learn the behavior model and propose an adaptive reward transformation that automatically balances robustness and realism during training. Experiments demonstrate that our approach scales efficiently with the number of tokens, significantly reducing training and inference times, while outperforming several agent-centric baselines in terms of positional accuracy and robustness.
- oai:arXiv.org:2512.05812v2
- cs.RO
+ AGORA: Adversarial Generation Of Real-time Animatable 3D Gaussian Head Avatars
+ https://arxiv.org/abs/2512.06438
+ arXiv:2512.06438v2 Announce Type: replace
+Abstract: The generation of high-fidelity, animatable 3D human avatars remains a core challenge in computer graphics and vision, with applications in VR, telepresence, and entertainment. Existing approaches based on implicit representations like NeRFs suffer from slow rendering and dynamic inconsistencies, while 3D Gaussian Splatting (3DGS) methods are typically limited to static head generation, lacking dynamic control. We bridge this gap by introducing AGORA, a novel framework that extends 3DGS within a generative adversarial network to produce animatable avatars. Our key contribution is a lightweight, FLAME-conditioned deformation branch that predicts per-Gaussian residuals, enabling identity-preserving, fine-grained expression control while allowing real-time inference. Expression fidelity is enforced via a dual-discriminator training scheme leveraging synthetic renderings of the parametric mesh. AGORA generates avatars that are not only visually realistic but also precisely controllable. Quantitatively, we outperform state-of-the-art NeRF-based methods on expression accuracy while rendering at 250+ FPS on a single GPU, and, notably, at $\sim$9 FPS under CPU-only inference - representing, to our knowledge, the first demonstration of practical CPU-only animatable 3DGS avatar synthesis. This work represents a significant step toward practical, high-performance digital humans. Project website: https://ramazan793.github.io/AGORA/
+ oai:arXiv.org:2512.06438v2cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Fabian Konstantinidis, Moritz Sackmann, Ulrich Hofmann, Christoph Stiller
+ http://creativecommons.org/licenses/by/4.0/
+ Ramazan Fazylov, Sergey Zagoruyko, Aleksandr Parkin, Stamatis Lefkimmiatis, Ivan Laptev
- Utility Boundary of Dataset Distillation: Scaling and Configuration-Coverage Laws
- https://arxiv.org/abs/2512.05817
- arXiv:2512.05817v3 Announce Type: replace
-Abstract: Dataset distillation (DD) aims to construct compact synthetic datasets that allow models to achieve comparable performance to full-data training while substantially reducing storage and computation. Despite rapid empirical progress, its theoretical foundations remain limited: existing methods (gradient, distribution, trajectory matching) are built on heterogeneous surrogate objectives and optimization assumptions, which makes it difficult to analyze their common principles or provide general guarantees. Moreover, it is still unclear under what conditions distilled data can retain the effectiveness of full datasets when the training configuration, such as optimizer, architecture, or augmentation, changes. To answer these questions, we propose a unified theoretical framework, termed configuration--dynamics--error analysis, which reformulates major DD approaches under a common generalization-error perspective and provides two main results: (i) a scaling law that provides a single-configuration upper bound, characterizing how the error decreases as the distilled sample size increases and explaining the commonly observed performance saturation effect; and (ii) a coverage law showing that the required distilled sample size scales linearly with configuration diversity, with provably matching upper and lower bounds. In addition, our unified analysis reveals that various matching methods are interchangeable surrogates, reducing the same generalization error, clarifying why they can all achieve dataset distillation and providing guidance on how surrogate choices affect sample efficiency and robustness. Experiments across diverse methods and configurations empirically confirm the derived laws, advancing a theoretical foundation for DD and enabling theory-driven design of compact, configuration-robust dataset distillation.
- oai:arXiv.org:2512.05817v3
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Learning Agile Striker Skills for Humanoid Soccer Robots from Noisy Sensory Input
+ https://arxiv.org/abs/2512.06571
+ arXiv:2512.06571v2 Announce Type: replace
+Abstract: Learning fast and robust ball-kicking skills is a critical capability for humanoid soccer robots, yet it remains a challenging problem due to the need for rapid leg swings, postural stability on a single support foot, and robustness under noisy sensory input and external perturbations (e.g., opponents). This paper presents a reinforcement learning (RL)-based system that enables humanoid robots to execute robust continual ball-kicking with adaptability to different ball-goal configurations. The system extends a typical teacher-student training framework -- in which a "teacher" policy is trained with ground truth state information and the "student" learns to mimic it with noisy, imperfect sensing -- by including four training stages: (1) long-distance ball chasing (teacher); (2) directional kicking (teacher); (3) teacher policy distillation (student); and (4) student adaptation and refinement (student). Key design elements -- including tailored reward functions, realistic noise modeling, and online constrained RL for adaptation and refinement -- are critical for closing the sim-to-real gap and sustaining performance under perceptual uncertainty. Extensive evaluations in both simulation and on a real robot demonstrate strong kicking accuracy and goal-scoring success across diverse ball-goal configurations. Ablation studies further highlight the necessity of the constrained RL, noise modeling, and the adaptation stage. This work presents a system for learning robust continual humanoid ball-kicking under imperfect perception, establishing a benchmark task for visuomotor skill learning in humanoid whole-body control.
+ oai:arXiv.org:2512.06571v2
+ cs.RO
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Zhengquan Luo, Zhiqiang Xu
+ Zifan Xu, Myoungkyu Seo, Dongmyeong Lee, Hao Fu, Jiaheng Hu, Jiaxun Cui, Yuqian Jiang, Zhihan Wang, Anastasiia Brund, Joydeep Biswas, Peter Stone
- InstructMPC: A Human-LLM-in-the-Loop Framework for Context-Aware Power Grid Control
- https://arxiv.org/abs/2512.05876
- arXiv:2512.05876v3 Announce Type: replace
-Abstract: The transition toward power grids with high renewable penetration demands context-aware decision making frameworks. Traditional operational paradigms, which rely on static optimization of history-based load forecasting, often fail to capture the complex nature of real-time operational conditions, such as operator-issued maintenance mandates, emergency topology changes, or event-driven load surges. To address this challenge, we introduce InstructMPC, a closed-loop framework that integrates Large Language Models (LLMs) to generate context-aware predictions, enabling the controller to optimize power system operation. Our method employs a Contextual Disturbances Predictor (CDP) module to translate contextual information into predictive disturbance trajectories, which are then incorporated into the Model Predictive Control (MPC) optimization. Unlike conventional open-loop forecasting frameworks, InstructMPC features an online tuning mechanism where the predictor's parameters are continuously updated based on the realized control cost with a theoretical guarantee, achieving a regret bound of $O(\sqrt{T \log T})$ for linear dynamics when optimized via a tailored loss function, ensuring task-aware learning and adaption to non-stationary grid conditions.
- oai:arXiv.org:2512.05876v3
- eess.SY
- cs.SY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Hierarchical Deep Learning for Diatom Image Classification: A Multi-Level Taxonomic Approach
+ https://arxiv.org/abs/2512.06613
+ arXiv:2512.06613v2 Announce Type: replace
+Abstract: Accurate taxonomic identification of diatoms is essential for aquatic ecosystem monitoring, yet conventional methods depend heavily on expert taxonomists. Recent deep learning approaches improve automation, but most treat diatom recognition as flat classification, predicting only one taxonomic rank. We investigate whether embedding taxonomic hierarchy into neural network architectures can improve both accuracy and error locality.
+ We introduce DiatomCascadeNet (H-COFGS), a hierarchical convolutional network with five cascaded heads that jointly predict class, order, family, genus, and species. Each head receives shared backbone features and probability distributions from higher levels, with binary masks restricting predictions to valid descendants during training and inference. Using a filtered dataset of 1,456 diatom images covering 82 species, we compare hierarchical and flat models under identical settings.
+ H-COFGS matches flat baselines at the species level (69.4% accuracy) while outperforming at all upper taxonomic levels. When species predictions fail, errors remain taxonomically local: 92.5% of misclassified species are correctly predicted at the genus level, versus 67.2% for flat baselines. H-COFGS reduces mean taxonomic distance by 38.2% (1.209 vs. 1.955).
+ Progressive training reveals bidirectional mechanisms: hierarchical constraint masks operate top-down to constrain prediction space, while gradients from fine-grained levels propagate bottom-up through the shared backbone, refining features. This improves class accuracy from 96.2% to 99.5% and yields 6-8% gains at upper levels, producing more robust, interpretable, and biologically aligned predictions for multi-level taxonomic classification.
+ oai:arXiv.org:2512.06613v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Ruixiang Wu, Jiahao Ai, Tinko Sebastian Bartels, Tongxin Li
+ http://creativecommons.org/licenses/by/4.0/
+ Yueying Ke
- The Tragedy of Productivity: A Unified Framework for Diagnosing Coordination Failures in Labor Markets and AI Governance
- https://arxiv.org/abs/2512.05995
- arXiv:2512.05995v2 Announce Type: replace
-Abstract: Despite productivity increasing eightfold since Keynes's 1930 prediction of 15-hour workweeks, workers globally still work roughly double these hours. Separately, AI development accelerates despite existential risk warnings from leading researchers. We demonstrate these failures share identical game-theoretic structure: coordination failures where individually rational choices produce collectively suboptimal outcomes.
- We synthesize five necessary and sufficient conditions characterizing such coordination failures as structural tragedies: N-player structure, binary choices with negative externalities, dominance where defection yields higher payoffs, Pareto-inefficiency where cooperation dominates mutual defection, and enforcement difficulty from structural barriers. We validate this framework across canonical cases and extend it through condition intensities, introducing a Tragedy Index revealing governance of transformative AI breakthroughs faces orders-of-magnitude greater coordination difficulty than climate change or nuclear weapons.
- Applied to productivity competition, we prove firms face coordination failure preventing productivity gains from translating to worker welfare. European evidence shows that even under favorable conditions, productivity-welfare decoupling persists. Applied to AI governance, we demonstrate development faces the same structure but with amplified intensity across eight dimensions compared to successful arms control, making coordination structurally more difficult than for nuclear weapons. The Russia-Ukraine drone war validates this: both sides escalated from dozens to thousands of drones monthly within two years despite prior governance dialogue.
- The analysis is diagnostic rather than prescriptive, identifying structural barriers to coordination rather than proposing solutions.
- oai:arXiv.org:2512.05995v2
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ LLM-Driven Composite Neural Architecture Search for Multi-Source RL State Encoding
+ https://arxiv.org/abs/2512.06982
+ arXiv:2512.06982v2 Announce Type: replace
+Abstract: Designing state encoders for reinforcement learning (RL) with multiple information sources -- such as sensor measurements, time-series signals, image observations, and textual instructions -- remains underexplored and often requires manual design. We formalize this challenge as a problem of composite neural architecture search (NAS), where multiple source-specific modules and a fusion module are jointly optimized. Existing NAS methods overlook useful side information from the intermediate outputs of these modules -- such as their representation quality -- limiting sample efficiency in multi-source RL settings. To address this, we propose an LLM-driven NAS pipeline in which the LLM serves as a neural architecture design agent, leveraging language-model priors and intermediate-output signals to guide sample-efficient search for high-performing composite state encoders. On a mixed-autonomy traffic control task, our approach discovers higher-performing architectures with fewer candidate evaluations than traditional NAS baselines and the LLM-based GENIUS framework.
+ oai:arXiv.org:2512.06982v2
+ cs.LG
+ cs.SY
+ eess.SY
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Ali Dasdan
+ Yu Yu, Qian Xie, Nairen Cao, Li Jin
- Privacy Loss of Noise Perturbation via Concentration Analysis of A Product Measure
- https://arxiv.org/abs/2512.06253
- arXiv:2512.06253v2 Announce Type: replace
-Abstract: Noise perturbation is one of the most fundamental approaches for achieving $(\epsilon,\delta)$-differential privacy (DP) guarantees when releasing the result of a query or function $f(\cdot)\in\mathbb{R}^M$ evaluated on a sensitive dataset $\mathbf{x}$. In this approach, calibrated noise $\mathbf{n}\in\mathbb{R}^M$ is used to obscure the difference vector $f(\mathbf{x})-f(\mathbf{x}')$, where $\mathbf{x}'$ is known as a neighboring dataset. A DP guarantee is obtained by studying the tail probability bound of a privacy loss random variable (PLRV), defined as the Radon-Nikodym derivative between two distributions. When $\mathbf{n}$ follows a multivariate Gaussian distribution, the PLRV is characterized as a specific univariate Gaussian. In this paper, we propose a novel scheme to generate $\mathbf{n}$ by leveraging the fact that the perturbation noise is typically spherically symmetric (i.e., the distribution is rotationally invariant around the origin). The new noise generation scheme allows us to investigate the privacy loss from a geometric perspective and express the resulting PLRV using a product measure, $W\times U$; measure $W$ is related to a radius random variable controlling the magnitude of $\mathbf{n}$, while measure $U$ involves a directional random variable governing the angle between $\mathbf{n}$ and the difference $f(\mathbf{x})-f(\mathbf{x}')$. We derive a closed-form moment bound on the product measure to prove $(\epsilon,\delta)$-DP. Under the same $(\epsilon,\delta)$-DP guarantee, our mechanism yields a smaller expected noise magnitude than the classic Gaussian noise in high dimensions, thereby significantly improving the utility of the noisy result $f(\mathbf{x})+\mathbf{n}$. To validate this, we consider convex and non-convex empirical risk minimization (ERM) problems in high dimensional space and apply the proposed product noise to achieve privacy.
- oai:arXiv.org:2512.06253v2
- cs.CR
- Thu, 11 Dec 2025 00:00:00 -0500
+ $\mathrm{D}^\mathrm{3}$-Predictor: Noise-Free Deterministic Diffusion for Dense Prediction
+ https://arxiv.org/abs/2512.07062
+ arXiv:2512.07062v2 Announce Type: replace
+Abstract: Although diffusion models with strong visual priors have emerged as powerful dense prediction backboens, they overlook a core limitation: the stochastic noise at the core of diffusion sampling is inherently misaligned with dense prediction that requires a deterministic mapping from image to geometry. In this paper, we show that this stochastic noise corrupts fine-grained spatial cues and pushes the model toward timestep-specific noise objectives, consequently destroying meaningful geometric structure mappings. To address this, we introduce $\mathrm{D}^\mathrm{3}$-Predictor, a noise-free deterministic framework built by reformulating a pretrained diffusion model without stochasticity noise. Instead of relying on noisy inputs to leverage diffusion priors, $\mathrm{D}^\mathrm{3}$-Predictor views the pretrained diffusion network as an ensemble of timestep-dependent visual experts and self-supervisedly aggregates their heterogeneous priors into a single, clean, and complete geometric prior. Meanwhile, we utilize task-specific supervision to seamlessly adapt this noise-free prior to dense prediction tasks. Extensive experiments on various dense prediction tasks demonstrate that $\mathrm{D}^\mathrm{3}$-Predictor achieves competitive or state-of-the-art performance in diverse scenarios. In addition, it requires less than half the training data previously used and efficiently performs inference in a single step. Our code, data, and checkpoints are publicly available at https://x-gengroup.github.io/HomePage_D3-Predictor/.
+ oai:arXiv.org:2512.07062v2
+ cs.CV
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Shuainan Liu, Tianxi Ji, Zhongshuo Fang, Lu Wei, Pan Li
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Changliang Xia, Chengyou Jia, Minnan Luo, Zhuohang Dang, Xin Shen, Bowen Ping
- Distributionally Robust Kalman Filter
- https://arxiv.org/abs/2512.06286
- arXiv:2512.06286v2 Announce Type: replace
-Abstract: In this work, we propose a noise-centric formulation of the distributionally robust Kalman filter (DRKF) for discrete-time linear stochastic systems with uncertain noise statistics. By placing Wasserstein ambiguity sets directly on the process and measurement noise distributions, the proposed DRKF preserves the analytical structure of the classical Kalman filter while providing a priori spectral bounds on all feasible covariances. In the time-invariant setting, we derive a steady-state DRKF from a single stationary semidefinite program, yielding a constant-gain estimator with the same per-step computational complexity as the standard Kalman filter. We establish conditions guaranteeing the existence, uniqueness, and convergence of this steady-state solution, and we prove its asymptotic minimax optimality with respect to the worst-case mean-square error. Numerical experiments validate the theory and demonstrate that the proposed DRKF improves estimation accuracy under unknown or uncertain noise models while offering computational advantages over existing robust and distributionally robust filters.
- oai:arXiv.org:2512.06286v2
- eess.SY
- cs.SY
- math.OC
- Thu, 11 Dec 2025 00:00:00 -0500
+ The relationship between offline partisan geographical segregation and online partisan segregation
+ https://arxiv.org/abs/2512.07121
+ arXiv:2512.07121v2 Announce Type: replace
+Abstract: Social media is often blamed for the creation of echo chambers. However, these claims fail to consider the prevalence of offline echo chambers resulting from high levels of partisan segregation in the United States. Our article empirically assesses these online versus offline dynamics by linking a novel dataset of voters' offline partisan segregation extracted from publicly available voter files for 180 million US voters with their online network segregation on Twitter. We investigate offline and online partisan segregation using measures of geographical and network isolation of every matched voter-twitter user to their co-partisans online and offline. Our results show that while social media users tend to form politically homogeneous online networks, these levels of partisan sorting are significantly lower than those found in offline settings. Notably, Democrats are more isolated than Republicans in both settings, and only older Republicans exhibit higher online than offline segregation. Our results contribute to the emerging literature on political communication and the homophily of online networks, providing novel evidence on partisan sorting both online and offline.
+ oai:arXiv.org:2512.07121v2
+ cs.SI
+ cs.CY
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Minhyuk Jang, Astghik Hakobyan, Insoon Yang
+ http://creativecommons.org/licenses/by/4.0/
+ Megan A. Brown, Tiago Ventura, Joshua A. Tucker, Jonathan Nagler
- Proportional integral derivative booster for neural networks-based time-series prediction: Case of water demand prediction
- https://arxiv.org/abs/2512.06357
- arXiv:2512.06357v2 Announce Type: replace
-Abstract: Multi-step time-series prediction is an essential supportive step for decision-makers in several industrial areas. Artificial intelligence techniques, which use a neural network component in various forms, have recently frequently been used to accomplish this step. However, the complexity of the neural network structure still stands up as a critical problem against prediction accuracy. In this paper, a method inspired by the proportional-integral-derivative (PID) control approach is investigated to enhance the performance of neural network models used for multi-step ahead prediction of periodic time-series information while maintaining a negligible impact on the complexity of the system. The PID-based method is applied to the predicted value at each time step to bring that value closer to the real value. The water demand forecasting problem is considered as a case study, where two deep neural network models from the literature are used to prove the effectiveness of the proposed boosting method. Furthermore, to prove the applicability of this PID-based booster to other types of periodic time-series prediction problems, it is applied to enhance the accuracy of a neural network model used for multi-step forecasting of hourly energy consumption. The comparison between the results of the original prediction models and the results after using the proposed technique demonstrates the superiority of the proposed method in terms of prediction accuracy and system complexity.
- oai:arXiv.org:2512.06357v2
- cs.LG
- cs.AI
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ A linear MARS method for three-dimensional interface tracking
+ https://arxiv.org/abs/2512.07524
+ arXiv:2512.07524v2 Announce Type: replace
+Abstract: For explicit interface tracking in three dimensions, we propose a linear MARS method that (a) represents the interface by a partially ordered set of glued surfaces and approximates each glued surface with a triangular mesh, (b) maintains an $(r,h,\theta)$-regularity on each triangular mesh so that the distance between any pair of adjacent markers is within the range $[rh,h]$ and no angle in any triangle is less than $\theta$, (c) applies to three-dimensional continua with arbitrarily complex topology and geometry, (d) preserves topological structures and geometric features of moving phases under diffeomorphic and isometric flow maps, and (e) achieves second-order and third-order accuracy in terms of the Lagrangian and Eulerian length scales, respectively. Results of classic benchmark tests verify the effectiveness of the novel mesh adjustment algorithms in enforcing the $(r,h,\theta)$-regularity and demonstrate the high accuracy and efficiency of the proposed linear MARS method.
+ oai:arXiv.org:2512.07524v2
+ math.NA
+ cs.NA
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1016/j.engappai.2021.104570
- Tony Salloom, Okyay Kaynak, Xinbo Yub, Wei He
+ Yunhao Qiu, Qinghai Zhang
- When Does Regulation by Insurance Work? The Case of Frontier AI
- https://arxiv.org/abs/2512.06597
- arXiv:2512.06597v2 Announce Type: replace
-Abstract: No one doubts the utility of insurance for its ability to spread risk or streamline claims management; much debated is when and how insurance uptake can improve welfare by reducing harm, despite moral hazard. Proponents and dissenters of "regulation by insurance" have now documented a number of cases of insurers succeeding or failing to have such a net regulatory effect (in contrast with a net hazard effect). Collecting these examples together and drawing on an extensive economics literature, this Article develops a principled framework for evaluating insurance uptake's effect in a given context. The presence of certain distortions - including judgment-proofness, competitive dynamics, and behavioral biases - creates potential for a net regulatory effect. How much of that potential gets realized then depends on the type of policyholder, type of risk, type of insurer, and the structure of the insurance market. The analysis suggests regulation by insurance can be particularly effective for catastrophic non-product accidents where market mechanisms provide insufficient discipline and psychological biases are strongest. As a demonstration, the framework is applied to the frontier AI industry, revealing significant potential for a net regulatory effect but also the need for policy intervention to realize that potential. One option is a carefully designed mandate that encourages forming a specialized insurer or mutual, focuses on catastrophic rather than routine risks, and bars pure captives.
- oai:arXiv.org:2512.06597v2
- cs.CY
- Thu, 11 Dec 2025 00:00:00 -0500
+ Algorithm-hardware co-design of neuromorphic networks with dual memory pathways
+ https://arxiv.org/abs/2512.07602
+ arXiv:2512.07602v2 Announce Type: replace
+Abstract: Spiking neural networks excel at event-driven sensing. Yet, maintaining task-relevant context over long timescales both algorithmically and in hardware, while respecting both tight energy and memory budgets, remains a core challenge in the field. We address this challenge through novel algorithm-hardware co-design effort. At the algorithm level, inspired by the cortical fast-slow organization in the brain, we introduce a neural network with an explicit slow memory pathway that, combined with fast spiking activity, enables a dual memory pathway (DMP) architecture in which each layer maintains a compact low-dimensional state that summarizes recent activity and modulates spiking dynamics. This explicit memory stabilizes learning while preserving event-driven sparsity, achieving competitive accuracy on long-sequence benchmarks with 40-60% fewer parameters than equivalent state-of-the-art spiking neural networks. At the hardware level, we introduce a near-memory-compute architecture that fully leverages the advantages of the DMP architecture by retaining its compact shared state while optimizing dataflow, across heterogeneous sparse-spike and dense-memory pathways. We show experimental results that demonstrate more than a 4x increase in throughput and over a 5x improvement in energy efficiency compared with state-of-the-art implementations. Together, these contributions demonstrate that biological principles can guide functional abstractions that are both algorithmically effective and hardware-efficient, establishing a scalable co-design paradigm for real-time neuromorphic computation and learning.
+ oai:arXiv.org:2512.07602v2
+ cs.NE
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Cristian Trout
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pengfei Sun, Zhe Su, Jascha Achterberg, Giacomo Indiveri, Dan F. M. Goodman, Danyal Akarca
- Financial Fraud Identification and Interpretability Study for Listed Companies Based on Convolutional Neural Network
- https://arxiv.org/abs/2512.06648
- arXiv:2512.06648v2 Announce Type: replace
-Abstract: Since the emergence of joint-stock companies, financial fraud by listed firms has repeatedly undermined capital markets. Fraud is difficult to detect because of covert tactics and the high labor and time costs of audits. Traditional statistical models are interpretable but struggle with nonlinear feature interactions, while machine learning models are powerful but often opaque. In addition, most existing methods judge fraud only for the current year based on current year data, limiting timeliness.
- This paper proposes a financial fraud detection framework for Chinese A-share listed companies based on convolutional neural networks (CNNs). We design a feature engineering scheme that transforms firm-year panel data into image like representations, enabling the CNN to capture cross-sectional and temporal patterns and to predict fraud in advance. Experiments show that the CNN outperforms logistic regression and LightGBM in accuracy, robustness, and early-warning performance, and that proper tuning of the classification threshold is crucial in high-risk settings.
- To address interpretability, we analyze the model along the dimensions of entity, feature, and time using local explanation techniques. We find that solvency, ratio structure, governance structure, and internal control are general predictors of fraud, while environmental indicators matter mainly in high-pollution industries. Non-fraud firms share stable feature patterns, whereas fraud firms exhibit heterogeneous patterns concentrated in short time windows. A case study of Guanong Shares in 2022 shows that cash flow analysis, social responsibility, governance structure, and per-share indicators are the main drivers of the model's fraud prediction, consistent with the company's documented misconduct.
- oai:arXiv.org:2512.06648v2
- cs.LG
- cs.AI
+ Optimization-Guided Diffusion for Interactive Scene Generation
+ https://arxiv.org/abs/2512.07661
+ arXiv:2512.07661v2 Announce Type: replace
+Abstract: Realistic and diverse multi-agent driving scenes are crucial for evaluating autonomous vehicles, but safety-critical events which are essential for this task are rare and underrepresented in driving datasets. Data-driven scene generation offers a low-cost alternative by synthesizing complex traffic behaviors from existing driving logs. However, existing models often lack controllability or yield samples that violate physical or social constraints, limiting their usability. We present OMEGA, an optimization-guided, training-free framework that enforces structural consistency and interaction awareness during diffusion-based sampling from a scene generation model. OMEGA re-anchors each reverse diffusion step via constrained optimization, steering the generation towards physically plausible and behaviorally coherent trajectories. Building on this framework, we formulate ego-attacker interactions as a game-theoretic optimization in the distribution space, approximating Nash equilibria to generate realistic, safety-critical adversarial scenarios. Experiments on nuPlan and Waymo show that OMEGA improves generation realism, consistency, and controllability, increasing the ratio of physically and behaviorally valid scenes from 32.35% to 72.27% for free exploration capabilities, and from 11% to 80% for controllability-focused generation. Our approach can also generate $5\times$ more near-collision frames with a time-to-collision under three seconds while maintaining the overall scene realism.
+ oai:arXiv.org:2512.07661v2cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Xiao Li
+ Shihao Li, Naisheng Ye, Tianyu Li, Kashyap Chitta, Tuo An, Peng Su, Boyang Wang, Haiou Liu, Chen Lv, Hongyang Li
- LightSearcher: Efficient DeepSearch via Experiential Memory
- https://arxiv.org/abs/2512.06653
- arXiv:2512.06653v3 Announce Type: replace
-Abstract: DeepSearch paradigms have become a core enabler for deep reasoning models, allowing them to invoke external search tools to access up-to-date, domain-specific knowledge beyond parametric boundaries, thereby enhancing the depth and factual reliability of reasoning. Building upon this foundation, recent advances in reinforcement learning (RL) have further empowered models to autonomously and strategically control search tool usage, optimizing when and how to query external knowledge sources. Yet, these RL-driven DeepSearch systems often reveal a see-saw trade-off between accuracy and efficiency-frequent tool invocations can improve factual correctness but lead to unnecessary computational overhead and diminished efficiency. To address this challenge, we propose LightSearcher, an efficient RL framework that incorporates textual experiential memory by learning contrastive reasoning trajectories to generate interpretable summaries of successful reasoning patterns. In addition, it employs an adaptive reward shaping mechanism that penalizes redundant tool calls only in correct-answer scenarios. This design effectively balances the inherent accuracy-efficiency trade-off in DeepSearch paradigms. Experiments on four multi-hop QA benchmarks show that LightSearcher maintains accuracy comparable to SOTA baseline ReSearch, while reducing search tool invocations by 39.6%, inference time by 48.6%, and token consumption by 21.2%, demonstrating its superior efficiency.
- oai:arXiv.org:2512.06653v3
+ Guiding What Not to Generate: Automated Negative Prompting for Text-Image Alignment
+ https://arxiv.org/abs/2512.07702
+ arXiv:2512.07702v2 Announce Type: replace
+Abstract: Despite substantial progress in text-to-image generation, achieving precise text-image alignment remains challenging, particularly for prompts with rich compositional structure or imaginative elements. To address this, we introduce Negative Prompting for Image Correction (NPC), an automated pipeline that improves alignment by identifying and applying negative prompts that suppress unintended content. We begin by analyzing cross-attention patterns to explain why both targeted negatives-those directly tied to the prompt's alignment error-and untargeted negatives-tokens unrelated to the prompt but present in the generated image-can enhance alignment. To discover useful negatives, NPC generates candidate prompts using a verifier-captioner-proposer framework and ranks them with a salient text-space score, enabling effective selection without requiring additional image synthesis. On GenEval++ and Imagine-Bench, NPC outperforms strong baselines, achieving 0.571 vs. 0.371 on GenEval++ and the best overall performance on Imagine-Bench. By guiding what not to generate, NPC provides a principled, fully automated route to stronger text-image alignment in diffusion models. Code is released at https://github.com/wiarae/NPC.
+ oai:arXiv.org:2512.07702v2
+ cs.CVcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hengzhi Lan, Yue Yu, Li Qian, Li Peng, Jie Wu, Wei Liu, Jian Luan, Ting Bai
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Sangha Park, Eunji Kim, Yeongtak Oh, Jooyoung Choi, Sungroh Yoon
- Partial Inverse Design of High-Performance Concrete Using Cooperative Neural Networks for Constraint-Aware Mix Generation
- https://arxiv.org/abs/2512.06813
- arXiv:2512.06813v2 Announce Type: replace
-Abstract: High-performance concrete requires complex mix design decisions involving interdependent variables and practical constraints. While data-driven methods have improved predictive modeling for forward design in concrete engineering, inverse design remains limited, especially when some variables are fixed and only the remaining ones must be inferred. This study proposes a cooperative neural network framework for the partial inverse design of high-performance concrete. The framework integrates an imputation model with a surrogate strength predictor and learns through cooperative training. Once trained, it generates valid and performance-consistent mix designs in a single forward pass without retraining for different constraint scenarios. Compared with baseline models, including autoencoder models and Bayesian inference with Gaussian process surrogates, the proposed method achieves R-squared values of 0.87 to 0.92 and substantially reduces mean squared error by approximately 50% and 70%, respectively. The results show that the framework provides an accurate and computationally efficient foundation for constraint-aware, data-driven mix proportioning.
- oai:arXiv.org:2512.06813v2
- cs.LG
+ SAVE: Sparse Autoencoder-Driven Visual Information Enhancement for Mitigating Object Hallucination
+ https://arxiv.org/abs/2512.07730
+ arXiv:2512.07730v2 Announce Type: replace
+Abstract: Although Multimodal Large Language Models (MLLMs) have advanced substantially, they remain vulnerable to object hallucination caused by language priors and visual information loss. To address this, we propose SAVE (Sparse Autoencoder-Driven Visual Information Enhancement), a framework that mitigates hallucination by steering the model along Sparse Autoencoder (SAE) latent features. A binary object-presence question-answering probe identifies the SAE features most indicative of the model's visual information processing, referred to as visual understanding features. Steering the model along these identified features reinforces grounded visual understanding and effectively reduces hallucination. With its simple design, SAVE outperforms state-of-the-art training-free methods on standard benchmarks, achieving a 10\%p improvement in CHAIR\_S and consistent gains on POPE and MMHal-Bench. Extensive evaluations across multiple models and layers confirm the robustness and generalizability of our approach. Further analysis reveals that steering along visual understanding features suppresses the generation of uncertain object tokens and increases attention to image tokens, mitigating hallucination. Code is released at https://github.com/wiarae/SAVE.
+ oai:arXiv.org:2512.07730v2
+ cs.CVcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Agung Nugraha, Heungjun Im, Jihwan Lee
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Sangha Park, Seungryong Yoo, Jisoo Mok, Sungroh Yoon
- Dual Refinement Cycle Learning: Unsupervised Text Classification of Mamba and Community Detection on Text Attributed Graph
- https://arxiv.org/abs/2512.07100
- arXiv:2512.07100v2 Announce Type: replace
-Abstract: Pretrained language models offer strong text understanding capabilities but remain difficult to deploy in real-world text-attributed networks due to their heavy dependence on labeled data. Meanwhile, community detection methods typically ignore textual semantics, limiting their usefulness in downstream applications such as content organization, recommendation, and risk monitoring. To overcome these limitations, we present Dual Refinement Cycle Learning (DRCL), a fully unsupervised framework designed for practical scenarios where no labels or category definitions are available. DRCL integrates structural and semantic information through a warm-start initialization and a bidirectional refinement cycle between a GCN-based Community Detection Module (GCN-CDM) and a Text Semantic Modeling Module (TSMM). The two modules iteratively exchange pseudo-labels, allowing semantic cues to enhance structural clustering and structural patterns to guide text representation learning without manual supervision. Across several text-attributed graph datasets, DRCL consistently improves the structural and semantic quality of discovered communities. Moreover, a Mamba-based classifier trained solely from DRCL's community signals achieves accuracy comparable to supervised models, demonstrating its potential for deployment in large-scale systems where labeled data are scarce or costly. The code is available at https://github.com/wuanghoong/DRCL.git.
- oai:arXiv.org:2512.07100v2
+ An Introduction to Deep Reinforcement and Imitation Learning
+ https://arxiv.org/abs/2512.08052
+ arXiv:2512.08052v2 Announce Type: replace
+Abstract: Embodied agents, such as robots and virtual characters, must continuously select actions to execute tasks effectively, solving complex sequential decision-making problems. Given the difficulty of designing such controllers manually, learning-based approaches have emerged as promising alternatives, most notably Deep Reinforcement Learning (DRL) and Deep Imitation Learning (DIL). DRL leverages reward signals to optimize behavior, while DIL uses expert demonstrations to guide learning. This document introduces DRL and DIL in the context of embodied agents, adopting a concise, depth-first approach to the literature. It is self-contained, presenting all necessary mathematical and machine learning concepts as they are needed. It is not intended as a survey of the field; rather, it focuses on a small set of foundational algorithms and techniques, prioritizing in-depth understanding over broad coverage. The material ranges from Markov Decision Processes to REINFORCE and Proximal Policy Optimization (PPO) for DRL, and from Behavioral Cloning to Dataset Aggregation (DAgger) and Generative Adversarial Imitation Learning (GAIL) for DIL.
+ oai:arXiv.org:2512.08052v2
+ cs.ROcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Hong Wang, Yinglong Zhang, Hanhan Guo, Xuewen Xia, Xing Xu
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pedro Santana
- Local-Curvature-Aware Knowledge Graph Embedding: An Extended Ricci Flow Approach
- https://arxiv.org/abs/2512.07332
- arXiv:2512.07332v2 Announce Type: replace
-Abstract: Knowledge graph embedding (KGE) relies on the geometry of the embedding space to encode semantic and structural relations. Existing methods place all entities on one homogeneous manifold, Euclidean, spherical, hyperbolic, or their product/multi-curvature variants, to model linear, symmetric, or hierarchical patterns. Yet a predefined, homogeneous manifold cannot accommodate the sharply varying curvature that real-world graphs exhibit across local regions. Since this geometry is imposed a priori, any mismatch with the knowledge graph's local curvatures will distort distances between entities and hurt the expressiveness of the resulting KGE. To rectify this, we propose RicciKGE to have the KGE loss gradient coupled with local curvatures in an extended Ricci flow such that entity embeddings co-evolve dynamically with the underlying manifold geometry towards mutual adaptation. Theoretically, when the coupling coefficient is bounded and properly selected, we rigorously prove that i) all the edge-wise curvatures decay exponentially, meaning that the manifold is driven toward the Euclidean flatness; and ii) the KGE distances strictly converge to a global optimum, which indicates that geometric flattening and embedding optimization are promoting each other. Experimental improvements on link prediction and node classification benchmarks demonstrate RicciKGE's effectiveness in adapting to heterogeneous knowledge graph structures.
- oai:arXiv.org:2512.07332v2
- cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Towards Visual Re-Identification of Fish using Fine-Grained Classification for Electronic Monitoring in Fisheries
+ https://arxiv.org/abs/2512.08400
+ arXiv:2512.08400v2 Announce Type: replace
+Abstract: Accurate fisheries data are crucial for effective and sustainable marine resource management. With the recent adoption of Electronic Monitoring (EM) systems, more video data is now being collected than can be feasibly reviewed manually. This paper addresses this challenge by developing an optimized deep learning pipeline for automated fish re-identification (Re-ID) using the novel AutoFish dataset, which simulates EM systems with conveyor belts with six similarly looking fish species. We demonstrate that key Re-ID metrics (R1 and mAP@k) are substantially improved by using hard triplet mining in conjunction with a custom image transformation pipeline that includes dataset-specific normalization. By employing these strategies, we demonstrate that the Vision Transformer-based Swin-T architecture consistently outperforms the Convolutional Neural Network-based ResNet-50, achieving peak performance of 41.65% mAP@k and 90.43% Rank-1 accuracy. An in-depth analysis reveals that the primary challenge is distinguishing visually similar individuals of the same species (Intra-species errors), where viewpoint inconsistency proves significantly more detrimental than partial occlusion. The source code and documentation are available at: https://github.com/msamdk/Fish_Re_Identification.git
+ oai:arXiv.org:2512.08400v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Zhengquan Luo, Guy Tadmor, Or Amar, David Zeevi, Zhiqiang Xu
+ Samitha Nuwan Thilakarathna, Ercan Avsar, Martin Mathias Nielsen, Malte Pedersen
- Exploring possible vector systems for faster training of neural networks with preconfigured latent spaces
- https://arxiv.org/abs/2512.07509
- arXiv:2512.07509v2 Announce Type: replace
-Abstract: The overall neural network (NN) performance is closely related to the properties of its embedding distribution in latent space (LS). It has recently been shown that predefined vector systems, specifically An root system vectors, can be used as targets for latent space configurations (LSC) to ensure the desired LS structure. One of the main LSC advantage is the possibility of training classifier NNs without classification layers, which facilitates training NNs on datasets with extremely large numbers of classes. This paper provides a more general overview of possible vector systems for NN training along with their properties and methods for vector system construction. These systems are used to configure LS of encoders and visual transformers to significantly speed up ImageNet-1K and 50k-600k classes LSC training. It is also shown that using the minimum number of LS dimensions for a specific number of classes results in faster convergence. The latter has potential advantages for reducing the size of vector databases used to store NN embeddings.
- oai:arXiv.org:2512.07509v2
- cs.LG
- cs.AI
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Attention is All You Need to Defend Against Indirect Prompt Injection Attacks in LLMs
+ https://arxiv.org/abs/2512.08417
+ arXiv:2512.08417v2 Announce Type: replace
+Abstract: Large Language Models (LLMs) have been integrated into many applications (e.g., web agents) to perform more sophisticated tasks. However, LLM-empowered applications are vulnerable to Indirect Prompt Injection (IPI) attacks, where instructions are injected via untrustworthy external data sources. This paper presents Rennervate, a defense framework to detect and prevent IPI attacks. Rennervate leverages attention features to detect the covert injection at a fine-grained token level, enabling precise sanitization that neutralizes IPI attacks while maintaining LLM functionalities. Specifically, the token-level detector is materialized with a 2-step attentive pooling mechanism, which aggregates attention heads and response tokens for IPI detection and sanitization. Moreover, we establish a fine-grained IPI dataset, FIPI, to be open-sourced to support further research. Extensive experiments verify that Rennervate outperforms 15 commercial and academic IPI defense methods, achieving high precision on 5 LLMs and 6 datasets. We also demonstrate that Rennervate is transferable to unseen attacks and robust against adaptive adversaries.
+ oai:arXiv.org:2512.08417v2
+ cs.CR
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Nikita Gabdullin
+ Yinan Zhong, Qianhao Miao, Yanjiao Chen, Jiangyi Deng, Yushi Cheng, Wenyuan Xu
- More than Segmentation: Benchmarking SAM 3 for Segmentation, 3D Perception, and Reconstruction in Robotic Surgery
- https://arxiv.org/abs/2512.07596
- arXiv:2512.07596v2 Announce Type: replace
-Abstract: The recent SAM 3 and SAM 3D have introduced significant advancements over the predecessor, SAM 2, particularly with the integration of language-based segmentation and enhanced 3D perception capabilities. SAM 3 supports zero-shot segmentation across a wide range of prompts, including point, bounding box, and language-based prompts, allowing for more flexible and intuitive interactions with the model. In this empirical evaluation, we assess the performance of SAM 3 in robot-assisted surgery, benchmarking its zero-shot segmentation with point and bounding box prompts and exploring its effectiveness in dynamic video tracking, alongside its newly introduced language prompt segmentation. While language prompts show potential, their performance in the surgical domain is currently suboptimal, highlighting the need for further domain-specific training. Additionally, we investigate SAM 3D's depth reconstruction abilities, demonstrating its capacity to process surgical scene data and reconstruct 3D anatomical structures from 2D images. Through comprehensive testing on the MICCAI EndoVis 2017 and EndoVis 2018 benchmarks, SAM 3 shows clear improvements over SAM and SAM 2 in both image and video segmentation under spatial prompts, while the zero-shot evaluations of SAM 3D on SCARED, StereoMIS, and EndoNeRF indicate strong monocular depth estimation and realistic 3D instrument reconstruction, yet also reveal remaining limitations in complex, highly dynamic surgical scenes.
- oai:arXiv.org:2512.07596v2
+ Thinking with Images via Self-Calling Agent
+ https://arxiv.org/abs/2512.08511
+ arXiv:2512.08511v2 Announce Type: replace
+Abstract: Thinking-with-images paradigms have showcased remarkable visual reasoning capability by integrating visual information as dynamic elements into the Chain-of-Thought (CoT). However, optimizing interleaved multimodal CoT (iMCoT) through reinforcement learning remains challenging, as it relies on scarce high-quality reasoning data. In this study, we propose Self-Calling Chain-of-Thought (sCoT), a novel visual reasoning paradigm that reformulates iMCoT as a language-only CoT with self-calling. Specifically, a main agent decomposes the complex visual reasoning task to atomic subtasks and invokes its virtual replicas, i.e. parameter-sharing subagents, to solve them in isolated context. sCoT enjoys substantial training effectiveness and efficiency, as it requires no explicit interleaving between modalities. sCoT employs group-relative policy optimization to reinforce effective reasoning behavior to enhance optimization. Experiments on HR-Bench 4K show that sCoT improves the overall reasoning performance by up to $1.9\%$ with $\sim 75\%$ fewer GPU hours compared to strong baseline approaches. Code is available at https://github.com/YWenxi/think-with-images-through-self-calling.
+ oai:arXiv.org:2512.08511v2cs.CV
- cs.RO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Wenxi Yang, Yuzhong Zhao, Fang Wan, Qixiang Ye
+
+
+ CogMCTS: A Novel Cognitive-Guided Monte Carlo Tree Search Framework for Iterative Heuristic Evolution with Large Language Models
+ https://arxiv.org/abs/2512.08609
+ arXiv:2512.08609v2 Announce Type: replace
+Abstract: Automatic Heuristic Design (AHD) is an effective framework for solving complex optimization problems. The development of large language models (LLMs) enables the automated generation of heuristics. Existing LLM-based evolutionary methods rely on population strategies and are prone to local optima. Integrating LLMs with Monte Carlo Tree Search (MCTS) improves the trade-off between exploration and exploitation, but multi-round cognitive integration remains limited and search diversity is constrained. To overcome these limitations, this paper proposes a novel cognitive-guided MCTS framework (CogMCTS). CogMCTS tightly integrates the cognitive guidance mechanism of LLMs with MCTS to achieve efficient automated heuristic optimization. The framework employs multi-round cognitive feedback to incorporate historical experience, node information, and negative outcomes, dynamically improving heuristic generation. Dual-track node expansion combined with elite heuristic management balances the exploration of diverse heuristics and the exploitation of high-quality experience. In addition, strategic mutation modifies the heuristic forms and parameters to further enhance the diversity of the solution and the overall optimization performance. The experimental results indicate that CogMCTS outperforms existing LLM-based AHD methods in stability, efficiency, and solution quality.
+ oai:arXiv.org:2512.08609v2
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Wenzhen Dong, Jieming Yu, Yiming Huang, Hongqiu Wang, Lei Zhu, Albert C. S. Chung, Hongliang Ren, Long Bai
+ Hui Wang, Yang Liu, Xiaoyu Zhang, Chaoxu Mu
- Enabling Delayed-Full Charging Through Transformer-Based Real-Time-to-Departure Modeling for EV Battery Longevity
- https://arxiv.org/abs/2512.07723
- arXiv:2512.07723v2 Announce Type: replace
-Abstract: Electric vehicles (EVs) are key to sustainable mobility, yet their lithium-ion batteries (LIBs) degrade more rapidly under prolonged high states of charge (SOC). This can be mitigated by delaying full charging \ours until just before departure, which requires accurate prediction of user departure times. In this work, we propose Transformer-based real-time-to-event (TTE) model for accurate EV departure prediction. Our approach represents each day as a TTE sequence by discretizing time into grid-based tokens. Unlike previous methods primarily dependent on temporal dependency from historical patterns, our method leverages streaming contextual information to predict departures. Evaluation on a real-world study involving 93 users and passive smartphone data demonstrates that our method effectively captures irregular departure patterns within individual routines, outperforming baseline models. These results highlight the potential for practical deployment of the \ours algorithm and its contribution to sustainable transportation systems.
- oai:arXiv.org:2512.07723v2
+ DS FedProxGrad: Asymptotic Stationarity Without Noise Floor in Fair Federated Learning
+ https://arxiv.org/abs/2512.08671
+ arXiv:2512.08671v3 Announce Type: replace
+Abstract: Recent work \cite{arifgroup} introduced Federated Proximal Gradient \textbf{(\texttt{FedProxGrad})} for solving non-convex composite optimization problems in group fair federated learning. However, the original analysis established convergence only to a \textit{noise-dominated neighborhood of stationarity}, with explicit dependence on a variance-induced noise floor. In this work, we provide an improved asymptotic convergence analysis for a generalized \texttt{FedProxGrad}-type analytical framework with inexact local proximal solutions and explicit fairness regularization. We call this extended analytical framework \textbf{DS \texttt{FedProxGrad}} (Decay Step Size \texttt{FedProxGrad}). Under a Robbins-Monro step-size schedule \cite{robbins1951stochastic} and a mild decay condition on local inexactness, we prove that $\liminf_{r\to\infty} \mathbb{E}[\|\nabla F(\mathbf{x}^r)\|^2] = 0$, i.e., the algorithm is asymptotically stationary and the convergence rate does not depend on a variance-induced noise floor.
+ oai:arXiv.org:2512.08671v3cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Yonggeon Lee, Jibin Hwang, Alfred Malengo Kondoro, Juhyun Song, Youngtae Noh
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Huzaifa Arif
- Collaborative Causal Sensemaking: Closing the Complementarity Gap in Human-AI Decision Support
- https://arxiv.org/abs/2512.07801
- arXiv:2512.07801v3 Announce Type: replace
-Abstract: LLM-based agents are increasingly deployed for expert decision support, yet human-AI teams in high-stakes settings do not yet reliably outperform the best individual. We argue this complementarity gap reflects a fundamental mismatch: current agents are trained as answer engines, not as partners in the collaborative sensemaking through which experts actually make decisions. Sensemaking (the ability to co-construct causal explanations, surface uncertainties, and adapt goals) is the key capability that current training pipelines do not explicitly develop or evaluate. We propose Collaborative Causal Sensemaking (CCS) as a research agenda to develop this capability from the ground up, spanning new training environments that reward collaborative thinking, representations for shared human-AI mental models, and evaluation centred on trust and complementarity. Taken together, these directions shift MAS research from building oracle-like answer engines to cultivating AI teammates that co-reason with their human partners over the causal structure of shared decisions, advancing the design of effective human-AI teams.
- oai:arXiv.org:2512.07801v3
- cs.CL
+ Towards Foundation Models with Native Multi-Agent Intelligence
+ https://arxiv.org/abs/2512.08743
+ arXiv:2512.08743v2 Announce Type: replace
+Abstract: Foundation models (FMs) are increasingly assuming the role of the "brain" of AI agents. While recent efforts have begun to equip FMs with native single-agent abilities -- such as GUI interaction or integrated tool use -- we argue that the next frontier is endowing FMs with native multi-agent intelligence. We identify four core capabilities of FMs in multi-agent contexts: understanding, planning, efficient communication, and adaptation. Contrary to assumptions about the spontaneous emergence of such abilities, we provide extensive empirical evidence across 41 large language models showing that strong single-agent performance alone does not automatically yield robust multi-agent intelligence. To address this gap, we outline key research directions -- spanning dataset construction, evaluation, training paradigms, and safety considerations -- for building FMs with native multi-agent intelligence.
+ oai:arXiv.org:2512.08743v2cs.AI
- cs.HC
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.MA
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Raunak Jain, Mudita Khurana
+ Shuyue Hu, Haoyang Yan, Yiqun Zhang, Yang Chen, Dongzhan Zhou, Lei Bai
- The Adoption and Usage of AI Agents: Early Evidence from Perplexity
- https://arxiv.org/abs/2512.07828
- arXiv:2512.07828v2 Announce Type: replace
-Abstract: This paper presents the first large-scale field study of the adoption, usage intensity, and use cases of general-purpose AI agents operating in open-world web environments. Our analysis centers on Comet, an AI-powered browser developed by Perplexity, and its integrated agent, Comet Assistant. Drawing on hundreds of millions of anonymized user interactions, we address three fundamental questions: Who is using AI agents? How intensively are they using them? And what are they using them for? Our findings reveal substantial heterogeneity in adoption and usage across user segments. Earlier adopters, users in countries with higher GDP per capita and educational attainment, and individuals working in digital or knowledge-intensive sectors -- such as digital technology, academia, finance, marketing, and entrepreneurship -- are more likely to adopt or actively use the agent. To systematically characterize the substance of agent usage, we introduce a hierarchical agentic taxonomy that organizes use cases across three levels: topic, subtopic, and task. The two largest topics, Productivity & Workflow and Learning & Research, account for 57% of all agentic queries, while the two largest subtopics, Courses and Shopping for Goods, make up 22%. The top 10 out of 90 tasks represent 55% of queries. Personal use constitutes 55% of queries, while professional and educational contexts comprise 30% and 16%, respectively. In the short term, use cases exhibit strong stickiness, but over time users tend to shift toward more cognitively oriented topics. The diffusion of increasingly capable AI agents carries important implications for researchers, businesses, policymakers, and educators, inviting new lines of inquiry into this rapidly emerging class of AI capabilities.
- oai:arXiv.org:2512.07828v2
- cs.LG
- econ.GN
- q-fin.EC
- Thu, 11 Dec 2025 00:00:00 -0500
+ A Methodology for Quantitative AI Risk Modeling
+ https://arxiv.org/abs/2512.08844
+ arXiv:2512.08844v2 Announce Type: replace
+Abstract: Although general-purpose AI systems offer transformational opportunities in science and industry, they simultaneously raise critical concerns about safety, misuse, and potential loss of control. Despite these risks, methods for assessing and managing them remain underdeveloped. Effective risk management requires systematic modeling to characterize potential harms, as emphasized in frameworks such as the EU General-Purpose AI Code of Practice. This paper advances the risk modeling component of AI risk management by introducing a methodology that integrates scenario building with quantitative risk estimation, drawing on established approaches from other high-risk industries. Our methodology models risks through a six-step process: (1) defining risk scenarios, (2) decomposing them into quantifiable parameters, (3) quantifying baseline risk without AI models, (4) identifying key risk indicators such as benchmarks, (5) mapping these indicators to model parameters to estimate LLM uplift, and (6) aggregating individual parameters into risk estimates that enable concrete claims (e.g., X% probability of >\$Y in annual cyber damages). We examine the choices that underlie our methodology throughout the article, with discussions of strengths, limitations, and implications for future research. Our methodology is designed to be applicable to key systemic AI risks, including cyber offense, biological weapon development, harmful manipulation, and loss-of-control, and is validated through extensive application in LLM-enabled cyber offense. Detailed empirical results and cyber-specific insights are presented in a companion paper.
+ oai:arXiv.org:2512.08844v2
+ cs.CY
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Jeremy Yang, Noah Yonack, Kate Zyskowski, Denis Yarats, Johnny Ho, Jerry Ma
+ http://creativecommons.org/licenses/by/4.0/
+ Malcolm Murray, Steve Barrett, Henry Papadatos, Otter Quarks, Matt Smith, Alejandro Tlaie Boria, Chlo\'e Touzet, Sim\'eon Campos
- Advancing physiological time series reconstruction and imputation via mixture of receptive fields and experts fusion
- https://arxiv.org/abs/2512.07873
- arXiv:2512.07873v2 Announce Type: replace
-Abstract: Recent studies show that using diffusion models for time series signal reconstruction holds great promise. However, such approaches remain largely unexplored in the domain of medical time series. The unique characteristics of the physiological time series signals, such as multivariate, high temporal variability, highly noisy, and artifact-prone, make deep learning-based approaches still challenging for tasks such as imputation. Hence, we propose a novel Mixture of Experts (MoE)-based noise estimator within a score-based diffusion framework. Specifically, the Receptive Field Adaptive MoE (RFAMoE) module is designed to enable each channel to adaptively select desired receptive fields throughout the diffusion process. Moreover, recent literature has found that when generating a physiological signal, performing multiple inferences and averaging the reconstructed signals can effectively reduce reconstruction errors, but at the cost of significant computational and latency overhead. We design a Fusion MoE module and innovatively leverage the nature of MoE module to generate K noise signals in parallel, fuse them using a routing mechanism, and complete signal reconstruction in a single inference step. This design not only improves performance over previous methods but also eliminates the substantial computational cost and latency associated with multiple inference processes. Extensive results demonstrate that our proposed framework consistently outperforms diffusion-based SOTA works on different tasks and datasets.
- oai:arXiv.org:2512.07873v2
- cs.LG
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Toward Quantitative Modeling of Cybersecurity Risks Due to AI Misuse
+ https://arxiv.org/abs/2512.08864
+ arXiv:2512.08864v2 Announce Type: replace
+Abstract: Advanced AI systems offer substantial benefits but also introduce risks. In 2025, AI-enabled cyber offense has emerged as a concrete example. This technical report applies a quantitative risk modeling methodology (described in full in a companion paper) to this domain. We develop nine detailed cyber risk models that allow analyzing AI uplift as a function of AI benchmark performance. Each model decomposes attacks into steps using the MITRE ATT&CK framework and estimates how AI affects the number of attackers, attack frequency, probability of success, and resulting harm to determine different types of uplift. To produce these estimates with associated uncertainty, we employ both human experts, via a Delphi study, as well as LLM-based simulated experts, both mapping benchmark scores (from Cybench and BountyBench) to risk model factors. Individual estimates are aggregated through Monte Carlo simulation. The results indicate systematic uplift in attack efficacy, speed, and target reach, with different mechanisms of uplift across risk models. We aim for our quantitative risk modeling to fulfill several aims: to help cybersecurity teams prioritize mitigations, AI evaluators design benchmarks, AI developers make more informed deployment decisions, and policymakers obtain information to set risk thresholds. Similar goals drove the shift from qualitative to quantitative assessment over time in other high-risk industries, such as nuclear power. We propose this methodology and initial application attempt as a step in that direction for AI risk management. While our estimates carry significant uncertainty, publishing detailed quantified results can enable experts to pinpoint exactly where they disagree. This helps to collectively refine estimates, something that cannot be done with qualitative assessments alone.
+ oai:arXiv.org:2512.08864v2
+ cs.CY
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ci Zhang, Huayu Li, Changdi Yang, Jiangnan Xia, Yanzhi Wang, Xiaolong Ma, Jin Lu, Geng Yuan
+ http://creativecommons.org/licenses/by/4.0/
+ Steve Barrett, Malcolm Murray, Otter Quarks, Matthew Smith, Jakub Kry\'s, Sim\'eon Campos, Alejandro Tlaie Boria, Chlo\'e Touzet, Sevan Hayrapet, Fred Heiding, Omer Nevo, Adam Swanda, Jair Aguirre, Asher Brass Gershovich, Eric Clay, Ryan Fetterman, Mario Fritz, Marc Juarez, Vasilios Mavroudis, Henry Papadatos
- Artificial Intelligence-Driven Network-on-Chip Design Space Exploration: Neural Network Architectures for Design
- https://arxiv.org/abs/2512.07877
- arXiv:2512.07877v2 Announce Type: replace
-Abstract: Network-on-Chip (NoC) design requires exploring a high-dimensional configuration space to satisfy stringent throughput requirements and latency constraints. Traditional design space exploration techniques are often slow and struggle to handle complex, non-linear parameter interactions. This work presents a machine learning-driven framework that automates NoC design space exploration using BookSim simulations and reverse neural network models. Specifically, we compare three architectures - a Multi-Layer Perceptron (MLP),a Conditional Diffusion Model, and a Conditional Variational Autoencoder (CVAE) to predict optimal NoC parameters given target performance metrics. Our pipeline generates over 150,000 simulation data points across varied mesh topologies. The Conditional Diffusion Model achieved the highest predictive accuracy, attaining a mean squared error (MSE) of 0.463 on unseen data. Furthermore, the proposed framework reduces design exploration time by several orders of magnitude, making it a practical solution for rapid and scalable NoC co-design.
- oai:arXiv.org:2512.07877v2
- cs.LG
+ EcomBench: Towards Holistic Evaluation of Foundation Agents in E-commerce
+ https://arxiv.org/abs/2512.08868
+ arXiv:2512.08868v2 Announce Type: replace
+Abstract: Foundation agents have rapidly advanced in their ability to reason and interact with real environments, making the evaluation of their core capabilities increasingly important. While many benchmarks have been developed to assess agent performance, most concentrate on academic settings or artificially designed scenarios while overlooking the challenges that arise in real applications. To address this issue, we focus on a highly practical real-world setting, the e-commerce domain, which involves a large volume of diverse user interactions, dynamic market conditions, and tasks directly tied to real decision-making processes. To this end, we introduce EcomBench, a holistic E-commerce Benchmark designed to evaluate agent performance in realistic e-commerce environments. EcomBench is built from genuine user demands embedded in leading global e-commerce ecosystems and is carefully curated and annotated through human experts to ensure clarity, accuracy, and domain relevance. It covers multiple task categories within e-commerce scenarios and defines three difficulty levels that evaluate agents on key capabilities such as deep information retrieval, multi-step reasoning, and cross-source knowledge integration. By grounding evaluation in real e-commerce contexts, EcomBench provides a rigorous and dynamic testbed for measuring the practical capabilities of agents in modern e-commerce.
+ oai:arXiv.org:2512.08868v2cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Amogh Anshu N, Harish BP
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Rui Min, Zile Qiao, Ze Xu, Jiawen Zhai, Wenyu Gao, Xuanzhong Chen, Haozhen Sun, Zhen Zhang, Xinyu Wang, Hong Zhou, Wenbiao Yin, Bo Zhang, Xuan Zhou, Ming Yan, Yong Jiang, Haicheng Liu, Liang Ding, Ling Zou, Yi R. Fung, Yalong Li, Pengjun Xie
- Investigating the originality of scientific papers across time and domain: A quantitative analysis
- https://arxiv.org/abs/2512.07892
- arXiv:2512.07892v2 Announce Type: replace
-Abstract: The study of creativity in science has long sought quantitative metrics capable of capturing the originality of the scientific insights contained within articles and other scientific works. In recent years, the field has witnessed a substantial expansion of research activity, enabled by advances in natural language processing and network analysis, and has utilised both macro- and micro-scale approaches with success. However, they often do not examine the text itself for evidence of originality. In this paper, we apply a computational measure correlating with originality from creativity science, Divergent Semantic Integration (DSI), to a set of 51,200 scientific abstracts and titles sourced from the Web of Science. To adapt DSI for application to scientific texts, we advance the original BERT method by incorporating SciBERT (a model trained on scientific corpora) into the computation of DSI. In our study, we observe that DSI plays a more pronounced role in the accrual of early citations for papers with fewer authors, varies substantially across subjects and research fields, and exhibits a declining correlation with citation counts over time. Furthermore, by modelling SciBERT- and BERT-DSI as predictors of the logarithm of 5-year citation counts alongside field, publication year, and the logarithm of author count, we find statistically significant relationships, with adjusted R-squared of 0.103 and 0.101 for BERT-DSI and SciBERT-DSI. Because existing scientometric measures rarely assess the originality expressed in textual content, DSI provides a valuable means of directly quantifying the conceptual originality embedded in scientific writing.
- oai:arXiv.org:2512.07892v2
- cs.DL
- Thu, 11 Dec 2025 00:00:00 -0500
+ Luxical: High-Speed Lexical-Dense Text Embeddings
+ https://arxiv.org/abs/2512.09015
+ arXiv:2512.09015v2 Announce Type: replace
+Abstract: Frontier language model quality increasingly hinges on our ability to organize web-scale text corpora for training. Today's dominant tools trade off speed and flexibility: lexical classifiers (e.g., FastText) are fast but limited to producing classification output scores, while the vector-valued outputs of transformer text embedding models flexibly support numerous workflows (e.g., clustering, classification, and retrieval) but are computationally expensive to produce. We introduce Luxical, a library for high-speed "lexical-dense" text embeddings that aims to recover the best properties of both approaches for web-scale text organization. Luxical combines sparse TF--IDF features, a small ReLU network, and a knowledge distillation training regimen to approximate large transformer embedding models at a fraction of their operational cost. In this technical report, we describe the Luxical architecture and training objective and evaluate a concrete Luxical model in two disparate applications: a targeted webcrawl document retrieval test and an end-to-end language model data curation task grounded in text classification. In these tasks we demonstrate speedups ranging from 3x to 100x over varying-sized neural baselines, and comparable to FastText model inference during the data curation task. On these evaluations, the tested Luxical model illustrates favorable compute/quality trade-offs for large-scale text organization, matching the quality of neural baselines. Luxical is available as open-source software at https://github.com/datologyai/luxical.
+ oai:arXiv.org:2512.09015v2
+ cs.CL
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jack H. Culbert, Yoed N. Kenett, Philipp Mayr
+ DatologyAI, :, Luke Merrick, Alex Fang, Aldo Carranza, Alvin Deng, Amro Abbas, Brett Larsen, Cody Blakeney, Darren Teh, David Schwab, Fan Pan, Haakon Mongstad, Haoli Yin, Jack Urbanek, Jason Lee, Jason Telanoff, Josh Wills, Kaleigh Mentzer, Paul Burstein, Parth Doshi, Paul Burnstein, Pratyush Maini, Ricardo Monti, Rishabh Adiga, Scott Loftin, Siddharth Joshi, Spandan Das, Tony Jiang, Vineeth Dorna, Zhengping Wang, Bogdan Gaza, Ari Morcos, Matthew Leavitt
- HOLE: Homological Observation of Latent Embeddings for Neural Network Interpretability
- https://arxiv.org/abs/2512.07988
- arXiv:2512.07988v2 Announce Type: replace
-Abstract: Deep learning models have achieved remarkable success across various domains, yet their learned representations and decision-making processes remain largely opaque and hard to interpret. This work introduces HOLE (Homological Observation of Latent Embeddings), a method for analyzing and interpreting deep neural networks through persistent homology. HOLE extracts topological features from neural activations and presents them using a suite of visualization techniques, including Sankey diagrams, heatmaps, dendrograms, and blob graphs. These tools facilitate the examination of representation structure and quality across layers. We evaluate HOLE on standard datasets using a range of discriminative models, focusing on representation quality, interpretability across layers, and robustness to input perturbations and model compression. The results indicate that topological analysis reveals patterns associated with class separation, feature disentanglement, and model robustness, providing a complementary perspective for understanding and improving deep learning systems.
- oai:arXiv.org:2512.07988v2
- cs.LG
- cs.GR
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Semantic Trajectory Generation for Goal-Oriented Spacecraft Rendezvous
+ https://arxiv.org/abs/2512.09111
+ arXiv:2512.09111v2 Announce Type: replace
+Abstract: Reliable real-time trajectory generation is essential for future autonomous spacecraft. While recent progress in nonconvex guidance and control is paving the way for onboard autonomous trajectory optimization, these methods still rely on extensive expert input (e.g., waypoints, constraints, mission timelines, etc.), which limits the operational scalability in real rendezvous missions. This paper introduces SAGES (Semantic Autonomous Guidance Engine for Space), a trajectory-generation framework that translates natural-language commands into spacecraft trajectories that reflect high-level intent while respecting nonconvex constraints. Experiments in two settings -- fault-tolerant proximity operations with continuous-time constraint enforcement and a free-flying robotic platform -- demonstrate that SAGES reliably produces trajectories aligned with human commands, achieving over 90% semantic-behavioral consistency across diverse behavior modes. Ultimately, this work marks an initial step toward language-conditioned, constraint-aware spacecraft trajectory generation, enabling operators to interactively guide both safety and behavior through intuitive natural-language commands with reduced expert burden.
+ oai:arXiv.org:2512.09111v2
+ cs.RO
+ cs.AI
+ math.OC
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sudhanva Manjunath Athreya, Paul Rosen
+ Yuji Takubo, Arpit Dwivedi, Sukeerth Ramkumar, Luis A. Pabon, Daniele Gammelli, Marco Pavone, Simone D'Amico
- PolyLingua: Margin-based Inter-class Transformer for Robust Cross-domain Language Detection
- https://arxiv.org/abs/2512.08143
- arXiv:2512.08143v2 Announce Type: replace
-Abstract: Language identification is a crucial first step in multilingual systems such as chatbots and virtual assistants, enabling linguistically and culturally accurate user experiences. Errors at this stage can cascade into downstream failures, setting a high bar for accuracy. Yet, existing language identification tools struggle with key cases -- such as music requests where the song title and user language differ. Open-source tools like LangDetect, FastText are fast but less accurate, while large language models, though effective, are often too costly for low-latency or low-resource settings. We introduce PolyLingua, a lightweight Transformer-based model for in-domain language detection and fine-grained language classification. It employs a two-level contrastive learning framework combining instance-level separation and class-level alignment with adaptive margins, yielding compact and well-separated embeddings even for closely related languages. Evaluated on two challenging datasets -- Amazon Massive (multilingual digital assistant utterances) and a Song dataset (music requests with frequent code-switching) -- PolyLingua achieves 99.25% F1 and 98.15% F1, respectively, surpassing Sonnet 3.5 while using 10x fewer parameters, making it ideal for compute- and latency-constrained environments.
- oai:arXiv.org:2512.08143v2
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Relightable and Dynamic Gaussian Avatar Reconstruction from Monocular Video
+ https://arxiv.org/abs/2512.09335
+ arXiv:2512.09335v2 Announce Type: replace
+Abstract: Modeling relightable and animatable human avatars from monocular video is a long-standing and challenging task. Recently, Neural Radiance Field (NeRF) and 3D Gaussian Splatting (3DGS) methods have been employed to reconstruct the avatars. However, they often produce unsatisfactory photo-realistic results because of insufficient geometrical details related to body motion, such as clothing wrinkles. In this paper, we propose a 3DGS-based human avatar modeling framework, termed as Relightable and Dynamic Gaussian Avatar (RnD-Avatar), that presents accurate pose-variant deformation for high-fidelity geometrical details. To achieve this, we introduce dynamic skinning weights that define the human avatar's articulation based on pose while also learning additional deformations induced by body motion. We also introduce a novel regularization to capture fine geometric details under sparse visual cues. Furthermore, we present a new multi-view dataset with varied lighting conditions to evaluate relight. Our framework enables realistic rendering of novel poses and views while supporting photo-realistic lighting effects under arbitrary lighting conditions. Our method achieves state-of-the-art performance in novel view synthesis, novel pose rendering, and relighting.
+ oai:arXiv.org:2512.09335v2
+ cs.CV
+ cs.MM
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Ali Lotfi Rezaabad, Bikram Khanal, Shashwat Chaurasia, Lu Zeng, Dezhi Hong, Hossein Bashashati, Thomas Butler, Megan Ganji
+ 10.1145/3746027.3754851
+ In Proceedings of the 33rd ACM International Conference on Multimedia. 2025. p. 7405-7414
+ Seonghwa Choi, Moonkyeong Choi, Mingyu Jang, Jaekyung Kim, Jianfei Cai, Wen-Huang Cheng, Sanghoon Lee
- OpenSubject: Leveraging Video-Derived Identity and Diversity Priors for Subject-driven Image Generation and Manipulation
- https://arxiv.org/abs/2512.08294
- arXiv:2512.08294v2 Announce Type: replace
-Abstract: Despite the promising progress in subject-driven image generation, current models often deviate from the reference identities and struggle in complex scenes with multiple subjects. To address this challenge, we introduce OpenSubject, a video-derived large-scale corpus with 2.5M samples and 4.35M images for subject-driven generation and manipulation. The dataset is built with a four-stage pipeline that exploits cross-frame identity priors. (i) Video Curation. We apply resolution and aesthetic filtering to obtain high-quality clips. (ii) Cross-Frame Subject Mining and Pairing. We utilize vision-language model (VLM)-based category consensus, local grounding, and diversity-aware pairing to select image pairs. (iii) Identity-Preserving Reference Image Synthesis. We introduce segmentation map-guided outpainting to synthesize the input images for subject-driven generation and box-guided inpainting to generate input images for subject-driven manipulation, together with geometry-aware augmentations and irregular boundary erosion. (iv) Verification and Captioning. We utilize a VLM to validate synthesized samples, re-synthesize failed samples based on stage (iii), and then construct short and long captions. In addition, we introduce a benchmark covering subject-driven generation and manipulation, and then evaluate identity fidelity, prompt adherence, manipulation consistency, and background consistency with a VLM judge. Extensive experiments show that training with OpenSubject improves generation and manipulation performance, particularly in complex scenes.
- oai:arXiv.org:2512.08294v2
+ Development and Testing for Perception Based Autonomous Landing of a Long-Range QuadPlane
+ https://arxiv.org/abs/2512.09343
+ arXiv:2512.09343v2 Announce Type: replace
+Abstract: QuadPlanes combine the range efficiency of fixed-wing aircraft with the maneuverability of multi-rotor platforms for long-range autonomous missions. In GPS-denied or cluttered urban environments, perception-based landing is vital for reliable operation. Unlike structured landing zones, real-world sites are unstructured and highly variable, requiring strong generalization capabilities from the perception system. Deep neural networks (DNNs) provide a scalable solution for learning landing site features across diverse visual and environmental conditions. While perception-driven landing has been shown in simulation, real-world deployment introduces significant challenges. Payload and volume constraints limit high-performance edge AI devices like the NVIDIA Jetson Orin Nano, which are crucial for real-time detection and control. Accurate pose estimation during descent is necessary, especially in the absence of GPS, and relies on dependable visual-inertial odometry. Achieving this with limited edge AI resources requires careful optimization of the entire deployment framework. The flight characteristics of large QuadPlanes further complicate the problem. These aircraft exhibit high inertia, reduced thrust vectoring, and slow response times further complicate stable landing maneuvers. This work presents a lightweight QuadPlane system for efficient vision-based autonomous landing and visual-inertial odometry, specifically developed for long-range QuadPlane operations such as aerial monitoring. It describes the hardware platform, sensor configuration, and embedded computing architecture designed to meet demanding real-time, physical constraints. This establishes a foundation for deploying autonomous landing in dynamic, unstructured, GPS-denied environments.
+ oai:arXiv.org:2512.09343v2
+ cs.ROcs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Yexin Liu, Manyuan Zhang, Yueze Wang, Hongyu Li, Dian Zheng, Weiming Zhang, Changsheng Lu, Xunliang Cai, Yan Feng, Peng Pei, Harry Yang
+ Ashik E Rasul, Humaira Tasnim, Ji Yu Kim, Young Hyun Lim, Scott Schmitz, Bruce W. Jo, Hyung-Jin Yoon
+
+
+ StereoWorld: Geometry-Aware Monocular-to-Stereo Video Generation
+ https://arxiv.org/abs/2512.09363
+ arXiv:2512.09363v2 Announce Type: replace
+Abstract: The growing adoption of XR devices has fueled strong demand for high-quality stereo video, yet its production remains costly and artifact-prone. To address this challenge, we present StereoWorld, an end-to-end framework that repurposes a pretrained video generator for high-fidelity monocular-to-stereo video generation. Our framework jointly conditions the model on the monocular video input while explicitly supervising the generation with a geometry-aware regularization to ensure 3D structural fidelity. A spatio-temporal tiling scheme is further integrated to enable efficient, high-resolution synthesis. To enable large-scale training and evaluation, we curate a high-definition stereo video dataset containing over 11M frames aligned to natural human interpupillary distance (IPD). Extensive experiments demonstrate that StereoWorld substantially outperforms prior methods, generating stereo videos with superior visual fidelity and geometric consistency. The project webpage is available at https://ke-xing.github.io/StereoWorld/.
+ oai:arXiv.org:2512.09363v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Ke Xing, Xiaojie Jin, Longfei Li, Yuyang Yin, Hanwen Liang, Guixun Luo, Chen Fang, Jue Wang, Konstantinos N. Plataniotis, Yao Zhao, Yunchao Wei
- GeoDM: Geometry-aware Distribution Matching for Dataset Distillation
- https://arxiv.org/abs/2512.08317
- arXiv:2512.08317v2 Announce Type: replace
-Abstract: Dataset distillation aims to synthesize a compact subset of the original data, enabling models trained on it to achieve performance comparable to those trained on the original large dataset. Existing distribution-matching methods are confined to Euclidean spaces, making them only capture linear structures and overlook the intrinsic geometry of real data, e.g., curvature. However, high-dimensional data often lie on low-dimensional manifolds, suggesting that dataset distillation should have the distilled data manifold aligned with the original data manifold. In this work, we propose a geometry-aware distribution-matching framework, called \textbf{GeoDM}, which operates in the Cartesian product of Euclidean, hyperbolic, and spherical manifolds, with flat, hierarchical, and cyclical structures all captured by a unified representation. To adapt to the underlying data geometry, we introduce learnable curvature and weight parameters for three kinds of geometries. At the same time, we design an optimal transport loss to enhance the distribution fidelity. Our theoretical analysis shows that the geometry-aware distribution matching in a product space yields a smaller generalization error bound than the Euclidean counterparts. Extensive experiments conducted on standard benchmarks demonstrate that our algorithm outperforms state-of-the-art data distillation methods and remains effective across various distribution-matching strategies for the single geometries.
- oai:arXiv.org:2512.08317v2
+ Perception-Inspired Color Space Design for Photo White Balance Editing
+ https://arxiv.org/abs/2512.09383
+ arXiv:2512.09383v2 Announce Type: replace
+Abstract: White balance (WB) is a key step in the image signal processor (ISP) pipeline that mitigates color casts caused by varying illumination and restores the scene's true colors. Currently, sRGB-based WB editing for post-ISP WB correction is widely used to address color constancy failures in the ISP pipeline when the original camera RAW is unavailable. However, additive color models (e.g., sRGB) are inherently limited by fixed nonlinear transformations and entangled color channels, which often impede their generalization to complex lighting conditions.
+ To address these challenges, we propose a novel framework for WB correction that leverages a perception-inspired Learnable HSI (LHSI) color space. Built upon a cylindrical color model that naturally separates luminance from chromatic components, our framework further introduces dedicated parameters to enhance this disentanglement and learnable mapping to adaptively refine the flexibility. Moreover, a new Mamba-based network is introduced, which is tailored to the characteristics of the proposed LHSI color space.
+ Experimental results on benchmark datasets demonstrate the superiority of our method, highlighting the potential of perception-inspired color space design in computational photography. The source code is available at https://github.com/YangCheng58/WB_Color_Space.
+ oai:arXiv.org:2512.09383v2cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xuhui Li, Zhengquan Luo, Zihui Cui, Zhiqiang Xu
-
-
- A Multivariate Bernoulli-Based Sampling Method for Multi-Label Data with Application to Meta-Research
- https://arxiv.org/abs/2512.08371
- arXiv:2512.08371v2 Announce Type: replace
-Abstract: Datasets may contain observations with multiple labels. If the labels are not mutually exclusive, and if the labels vary greatly in frequency, obtaining a sample that includes sufficient observations with scarcer labels to make inferences about those labels, and which deviates from the population frequencies in a known manner, creates challenges. In this paper, we consider a multivariate Bernoulli distribution as our underlying distribution of a multi-label problem. We present a novel sampling algorithm that takes label dependencies into account. It uses observed label frequencies to estimate multivariate Bernoulli distribution parameters and calculate weights for each label combination. This approach ensures the weighted sampling acquires target distribution characteristics while accounting for label dependencies. We applied this approach to a sample of research articles from Web of Science labeled with 64 biomedical topic categories. We aimed to preserve category frequency order, reduce frequency differences between most and least common categories, and account for category dependencies. This approach produced a more balanced sub-sample, enhancing the representation of minority categories.
- oai:arXiv.org:2512.08371v2
- cs.LG
- stat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Simon Chung, Colby J. Vorland, Donna L. Maney, Andrew W. Brown
+ Yang Cheng, Ziteng Cui, Shenghan Su, Lin Gu, Zenghui Zhang
- Finding All Bounded-Length Simple Cycles in a Directed Graph -- Revisited
- https://arxiv.org/abs/2512.08392
- arXiv:2512.08392v2 Announce Type: replace
-Abstract: In 2021, Gupta and Suzumura proposed a novel algorithm for enumerating all bounded-length simple cycles in directed graphs. In this work, we present concrete examples demonstrating that the proposed algorithm fails to enumerate certain valid cycles. Via these examples, we perform a detailed analysis pinpointing the specific points at which the proofs exhibit logical gaps. Furthermore, we propose a corrected formulation that resolves these issues while preserving the desirable property that the algorithm's computational complexity remains $O((c + 1) \cdot k \cdot (n + e))$ where $c$ is the number of simple cycles of a specified maximum length $k$, and $n$ and $e$ the number of graph nodes and edges respectively.
- oai:arXiv.org:2512.08392v2
- cs.DS
- Thu, 11 Dec 2025 00:00:00 -0500
+ Advancing Mathematical Research via Human-AI Interactive Theorem Proving
+ https://arxiv.org/abs/2512.09443
+ arXiv:2512.09443v2 Announce Type: replace
+Abstract: We investigate how large language models can be used as research tools in scientific computing while preserving mathematical rigor. We propose a human-in-the-loop workflow for interactive theorem proving and discovery with LLMs. Human experts retain control over problem formulation and admissible assumptions, while the model searches for proofs or contradictions, proposes candidate properties and theorems, and helps construct structures and parameters that satisfy explicit constraints, supported by numerical experiments and simple verification checks. Experts treat these outputs as raw material, further refine them, and organize the results into precise statements and rigorous proofs. We instantiate this workflow in a case study on the connection between manifold optimization and Grover's quantum search algorithm, where the pipeline helps identify invariant subspaces, explore Grover-compatible retractions, and obtain convergence guarantees for the retraction-based gradient method. The framework provides a practical template for integrating large language models into frontier mathematical research, enabling faster exploration of proof space and algorithm design while maintaining transparent reasoning responsibilities. Although illustrated on manifold optimization problems in quantum computing, the principles extend to other core areas of scientific computing.
+ oai:arXiv.org:2512.09443v2
+ cs.HC
+ cs.AI
+ math.OC
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Frank Bauern\"oppel, J\"org-R\"udiger Sack
+ http://creativecommons.org/licenses/by/4.0/
+ Chenyi Li, Zhijian Lai, Dong An, Jiang Hu, Zaiwen Wen
- Solving Oversmoothing in GNNs via Nonlocal Message Passing: Algebraic Smoothing and Depth Scalability
- https://arxiv.org/abs/2512.08475
- arXiv:2512.08475v2 Announce Type: replace
-Abstract: The relationship between Layer Normalization (LN) placement and the oversmoothing phenomenon remains underexplored. We identify a critical dilemma: Pre-LN architectures avoid oversmoothing but suffer from the curse of depth, while Post-LN architectures bypass the curse of depth but experience oversmoothing.
- To resolve this, we propose a new method based on Post-LN that induces algebraic smoothing, preventing oversmoothing without the curse of depth. Empirical results across five benchmarks demonstrate that our approach supports deeper networks (up to 256 layers) and improves performance, requiring no additional parameters.
- Key contributions:
- Theoretical Characterization: Analysis of LN dynamics and their impact on oversmoothing and the curse of depth.
- A Principled Solution: A parameter-efficient method that induces algebraic smoothing and avoids oversmoothing and the curse of depth.
- Empirical Validation: Extensive experiments showing the effectiveness of the method in deeper GNNs.
- oai:arXiv.org:2512.08475v2
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ StateSpace-SSL: Linear-Time Self-supervised Learning for Plant Disease Detection
+ https://arxiv.org/abs/2512.09492
+ arXiv:2512.09492v2 Announce Type: replace
+Abstract: Self-supervised learning (SSL) is attractive for plant disease detection as it can exploit large collections of unlabeled leaf images, yet most existing SSL methods are built on CNNs or vision transformers that are poorly matched to agricultural imagery. CNN-based SSL struggles to capture disease patterns that evolve continuously along leaf structures, while transformer-based SSL introduces quadratic attention cost from high-resolution patches. To address these limitations, we propose StateSpace-SSL, a linear-time SSL framework that employs a Vision Mamba state-space encoder to model long-range lesion continuity through directional scanning across the leaf surface. A prototype-driven teacher-student objective aligns representations across multiple views, encouraging stable and lesion-aware features from labelled data. Experiments on three publicly available plant disease datasets show that StateSpace-SSL consistently outperforms the CNN- and transformer-based SSL baselines in various evaluation metrics. Qualitative analyses further confirm that it learns compact, lesion-focused feature maps, highlighting the advantage of linear state-space modelling for self-supervised plant disease representation learning.
+ oai:arXiv.org:2512.09492v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Weiqi Guan, Junlin He
+ Abdullah Al Mamun, Miaohua Zhang, David Ahmedt-Aristizabal, Zeeshan Hayder, Mohammad Awrangjeb
- Optimal Perturbation Budget Allocation for Data Poisoning in Offline Reinforcement Learning
- https://arxiv.org/abs/2512.08485
- arXiv:2512.08485v2 Announce Type: replace
-Abstract: Offline Reinforcement Learning (RL) enables policy optimization from static datasets but is inherently vulnerable to data poisoning attacks. Existing attack strategies typically rely on locally uniform perturbations, which treat all samples indiscriminately. This approach is inefficient, as it wastes the perturbation budget on low-impact samples, and lacks stealthiness due to significant statistical deviations. In this paper, we propose a novel Global Budget Allocation attack strategy. Leveraging the theoretical insight that a sample's influence on value function convergence is proportional to its Temporal Difference (TD) error, we formulate the attack as a global resource allocation problem. We derive a closed-form solution where perturbation magnitudes are assigned proportional to the TD-error sensitivity under a global L2 constraint. Empirical results on D4RL benchmarks demonstrate that our method significantly outperforms baseline strategies, achieving up to 80% performance degradation with minimal perturbations that evade detection by state-of-the-art statistical and spectral defenses.
- oai:arXiv.org:2512.08485v2
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ SWEnergy: An Empirical Study on Energy Efficiency in Agentic Issue Resolution Frameworks with SLMs
+ https://arxiv.org/abs/2512.09543
+ arXiv:2512.09543v2 Announce Type: replace
+Abstract: Context. LLM-based autonomous agents in software engineering rely on large, proprietary models, limiting local deployment. This has spurred interest in Small Language Models (SLMs), but their practical effectiveness and efficiency within complex agentic frameworks for automated issue resolution remain poorly understood.
+ Goal. We investigate the performance, energy efficiency, and resource consumption of four leading agentic issue resolution frameworks when deliberately constrained to using SLMs. We aim to assess the viability of these systems for this task in resource-limited settings and characterize the resulting trade-offs.
+ Method. We conduct a controlled evaluation of four leading agentic frameworks (SWE-Agent, OpenHands, Mini SWE Agent, AutoCodeRover) using two SLMs (Gemma-3 4B, Qwen-3 1.7B) on the SWE-bench Verified Mini benchmark. On fixed hardware, we measure energy, duration, token usage, and memory over 150 runs per configuration.
+ Results. We find that framework architecture is the primary driver of energy consumption. The most energy-intensive framework, AutoCodeRover (Gemma), consumed 9.4x more energy on average than the least energy-intensive, OpenHands (Gemma). However, this energy is largely wasted. Task resolution rates were near-zero, demonstrating that current frameworks, when paired with SLMs, consume significant energy on unproductive reasoning loops. The SLM's limited reasoning was the bottleneck for success, but the framework's design was the bottleneck for efficiency.
+ Conclusions. Current agentic frameworks, designed for powerful LLMs, fail to operate efficiently with SLMs. We find that framework architecture is the primary driver of energy consumption, but this energy is largely wasted due to the SLMs' limited reasoning. Viable low-energy solutions require shifting from passive orchestration to architectures that actively manage SLM weaknesses.
+ oai:arXiv.org:2512.09543v2
+ cs.SE
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Junnan Qiu, Yuanjie Zhao, Jie Li
+ Arihant Tripathy, Ch Pavan Harshit, Karthik Vaidhyanathan
- SensHRPS: Sensing Comfortable Human-Robot Proxemics and Personal Space With Eye-Tracking
- https://arxiv.org/abs/2512.08518
- arXiv:2512.08518v2 Announce Type: replace
-Abstract: Social robots must adjust to human proxemic norms to ensure user comfort and engagement. While prior research demonstrates that eye-tracking features reliably estimate comfort in human-human interactions, their applicability to interactions with humanoid robots remains unexplored. In this study, we investigate user comfort with the robot "Ameca" across four experimentally controlled distances (0.5 m to 2.0 m) using mobile eye-tracking and subjective reporting (N=19). We evaluate multiple machine learning and deep learning models to estimate comfort based on gaze features. Contrary to previous human-human studies where Transformer models excelled, a Decision Tree classifier achieved the highest performance (F1-score = 0.73), with minimum pupil diameter identified as the most critical predictor. These findings suggest that physiological comfort thresholds in human-robot interaction differ from human-human dynamics and can be effectively modeled using interpretable logic.
- oai:arXiv.org:2512.08518v2
+ Mastering Diverse, Unknown, and Cluttered Tracks for Robust Vision-Based Drone Racing
+ https://arxiv.org/abs/2512.09571
+ arXiv:2512.09571v2 Announce Type: replace
+Abstract: Most reinforcement learning(RL)-based methods for drone racing target fixed, obstacle-free tracks, leaving the generalization to unknown, cluttered environments largely unaddressed. This challenge stems from the need to balance racing speed and collision avoidance, limited feasible space causing policy exploration trapped in local optima during training, and perceptual ambiguity between gates and obstacles in depth maps-especially when gate positions are only coarsely specified. To overcome these issues, we propose a two-phase learning framework: an initial soft-collision training phase that preserves policy exploration for high-speed flight, followed by a hard-collision refinement phase that enforces robust obstacle avoidance. An adaptive, noise-augmented curriculum with an asymmetric actor-critic architecture gradually shifts the policy's reliance from privileged gate-state information to depth-based visual input. We further impose Lipschitz constraints and integrate a track-primitive generator to enhance motion stability and cross-environment generalization. We evaluate our framework through extensive simulation and ablation studies, and validate it in real-world experiments on a computationally constrained quadrotor. The system achieves agile flight while remaining robust to gate-position errors, developing a generalizable drone racing framework with the capability to operate in diverse, partially unknown and cluttered environments. https://yufengsjtu.github.io/MasterRacing.github.io/
+ oai:arXiv.org:2512.09571v2cs.RO
- cs.AI
- cs.HC
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://creativecommons.org/licenses/by/4.0/
- Nadezhda Kushina, Ko Watanabe, Aarthi Kannan, Ashita Ashok, Andreas Dengel, Karsten Berns
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Feng Yu, Yu Hu, Yang Su, Yang Deng, Linzuo Zhang, Danping Zou
- Mind to Hand: Purposeful Robotic Control via Embodied Reasoning
- https://arxiv.org/abs/2512.08580
- arXiv:2512.08580v2 Announce Type: replace
-Abstract: Humans act with context and intention, with reasoning playing a central role. While internet-scale data has enabled broad reasoning capabilities in AI systems, grounding these abilities in physical action remains a major challenge. We introduce Lumo-1, a generalist vision-language-action (VLA) model that unifies robot reasoning ("mind") with robot action ("hand"). Our approach builds upon the general multi-modal reasoning capabilities of pre-trained vision-language models (VLMs), progressively extending them to embodied reasoning and action prediction, and ultimately towards structured reasoning and reasoning-action alignment. This results in a three-stage pre-training pipeline: (1) Continued VLM pre-training on curated vision-language data to enhance embodied reasoning skills such as planning, spatial understanding, and trajectory prediction; (2) Co-training on cross-embodiment robot data alongside vision-language data; and (3) Action training with reasoning process on trajectories collected on Astribot S1, a bimanual mobile manipulator with human-like dexterity and agility. Finally, we integrate reinforcement learning to further refine reasoning-action consistency and close the loop between semantic inference and motor control. Extensive experiments demonstrate that Lumo-1 achieves significant performance improvements in embodied vision-language reasoning, a critical component for generalist robotic control. Real-world evaluations further show that Lumo-1 surpasses strong baselines across a wide range of challenging robotic tasks, with strong generalization to novel objects and environments, excelling particularly in long-horizon tasks and responding to human-natural instructions that require reasoning over strategy, concepts and space.
- oai:arXiv.org:2512.08580v2
- cs.RO
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ UnReflectAnything: RGB-Only Highlight Removal by Rendering Synthetic Specular Supervision
+ https://arxiv.org/abs/2512.09583
+ arXiv:2512.09583v2 Announce Type: replace
+Abstract: Specular highlights distort appearance, obscure texture, and hinder geometric reasoning in both natural and surgical imagery. We present UnReflectAnything, an RGB-only framework that removes highlights from a single image by predicting a highlight map together with a reflection-free diffuse reconstruction. The model uses a frozen vision transformer encoder to extract multi-scale features, a lightweight head to localize specular regions, and a token-level inpainting module that restores corrupted feature patches before producing the final diffuse image. To overcome the lack of paired supervision, we introduce a Virtual Highlight Synthesis pipeline that renders physically plausible specularities using monocular geometry, Fresnel-aware shading, and randomized lighting which enables training on arbitrary RGB images with correct geometric structure. UnReflectAnything generalizes across natural and surgical domains where non-Lambertian surfaces and non-uniform lighting create severe highlights and it achieves competitive performance with state-of-the-art results on several benchmarks. Project Page: https://alberto-rota.github.io/UnReflectAnything/
+ oai:arXiv.org:2512.09583v2
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://creativecommons.org/licenses/by/4.0/
- Peijun Tang, Shangjin Xie, Binyan Sun, Baifu Huang, Kuncheng Luo, Haotian Yang, Weiqi Jin, Jianan Wang
+ Alberto Rota, Mert Kiray, Mert Asim Karaoglu, Patrick Ruhkamp, Elena De Momi, Nassir Navab, Benjamin Busam
- Decoupling Template Bias in CLIP: Harnessing Empty Prompts for Enhanced Few-Shot Learning
- https://arxiv.org/abs/2512.08606
- arXiv:2512.08606v2 Announce Type: replace
-Abstract: The Contrastive Language-Image Pre-Training (CLIP) model excels in few-shot learning by aligning visual and textual representations. Our study shows that template-sample similarity (TSS), defined as the resemblance between a text template and an image sample, introduces bias. This bias leads the model to rely on template proximity rather than true sample-to-category alignment, reducing both accuracy and robustness in classification. We present a framework that uses empty prompts, textual inputs that convey the idea of "emptiness" without category information. These prompts capture unbiased template features and offset TSS bias. The framework employs two stages. During pre-training, empty prompts reveal and reduce template-induced bias within the CLIP encoder. During few-shot fine-tuning, a bias calibration loss enforces correct alignment between images and their categories, ensuring the model focuses on relevant visual cues. Experiments across multiple benchmarks demonstrate that our template correction method significantly reduces performance fluctuations caused by TSS, yielding higher classification accuracy and stronger robustness. The repository of this project is available at https://github.com/zhenyuZ-HUST/Decoupling-Template-Bias-in-CLIP.
- oai:arXiv.org:2512.08606v2
- cs.CV
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Exqutor: Extended Query Optimizer for Vector-augmented Analytical Queries
+ https://arxiv.org/abs/2512.09695
+ arXiv:2512.09695v2 Announce Type: replace
+Abstract: Vector similarity search is becoming increasingly important for data science pipelines, particularly in Retrieval-Augmented Generation (RAG), where it enhances large language model inference by enabling efficient retrieval of relevant external knowledge. As RAG expands with table-augmented generation to incorporate structured data, workloads integrating table and vector search are becoming more prevalent. However, efficiently executing such queries remains challenging due to inaccurate cardinality estimation for vector search components, leading to suboptimal query plans. In this paper, we propose Exqutor, an extended query optimizer for vector-augmented analytical queries. Exqutor is a pluggable cardinality estimation framework designed to address this issue, leveraging exact cardinality query optimization techniques to enhance estimation accuracy when vector indexes (e.g., HNSW, IVF) are available. In scenarios lacking these indexes, we employ a sampling-based approach with adaptive sampling size adjustment, dynamically tuning the sample size to balance estimation accuracy and sampling overhead. This allows Exqutor to efficiently approximate vector search cardinalities while minimizing computational costs. We integrate our framework into pgvector, VBASE, and DuckDB, demonstrating performance improvements of up to four orders of magnitude on vector-augmented analytical queries.
+ oai:arXiv.org:2512.09695v2
+ cs.DB
+ Fri, 12 Dec 2025 00:00:00 -0500replacehttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhenyu Zhang, Guangyao Chen, Yixiong Zou, Zhimeng Huang, Yuhua Li
+ Hyunjoon Kim, Chaerim Lim, Hyeonjun An, Rathijit Sen, Kwanghyun Park
- C-DIRA: Computationally Efficient Dynamic ROI Routing and Domain-Invariant Adversarial Learning for Lightweight Driver Behavior Recognition
- https://arxiv.org/abs/2512.08647
- arXiv:2512.08647v2 Announce Type: replace
-Abstract: Driver distraction behavior recognition using in-vehicle cameras demands real-time inference on edge devices. However, lightweight models often fail to capture fine-grained behavioral cues, resulting in reduced performance on unseen drivers or under varying conditions. ROI-based methods also increase computational cost, making it difficult to balance efficiency and accuracy. This work addresses the need for a lightweight architecture that overcomes these constraints. We propose Computationally efficient Dynamic region of Interest Routing and domain-invariant Adversarial learning for lightweight driver behavior recognition (C-DIRA). The framework combines saliency-driven Top-K ROI pooling and fused classification for local feature extraction and integration. Dynamic ROI routing enables selective computation by applying ROI inference only to high difficulty data samples. Moreover, pseudo-domain labeling and adversarial learning are used to learn domain-invariant features robust to driver and background variation. Experiments on the State Farm Distracted Driver Detection Dataset show that C-DIRA maintains high accuracy with significantly fewer FLOPs and lower latency than prior lightweight models. It also demonstrates robustness under visual degradation such as blur and low-light, and stable performance across unseen domains. These results confirm C-DIRA's effectiveness in achieving compactness, efficiency, and generalization.
- oai:arXiv.org:2512.08647v2
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Towards Practical and Usable In-network Classification
+ https://arxiv.org/abs/2512.09809
+ arXiv:2512.09809v2 Announce Type: replace
+Abstract: In-network machine learning enables real-time classification directly on network hardware, offering consistently low inference latency. However, current solutions are limited by strict hardware constraints, scarce on-device resources, and poor usability, making them impractical for ML developers and cloud operators. To this end, we propose ACORN, an end-to-end system that automates the distributed deployment of practical machine learning models across the network. ACORN provides a fully automated pipeline that loads and deploys Python ML models on network devices using an optimized deployment plan from an ILP planner. To support larger models under hardware constraints and allow runtime programmability, ACORN adopts a novel data plane representation for Decision Tree, Random Forest, and Support Vector Machine models. We implement ACORN prototype in P4 and run it on real programmable hardware. Our evaluation shows ACORN can deploy classification ML models with 2-4x more features than state-of-the-art solutions, while imposing negligible overhead on network performance and traffic. We will make our data plane program, model translator, optimizer, and all related scripts publicly available.
+ oai:arXiv.org:2512.09809v2
+ cs.NI
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Keito Inoshita
+ http://creativecommons.org/licenses/by/4.0/
+ Di Zhu, Jianxi Chen, Hyojoon Kim
- DS FedProxGrad: Asymptotic Stationarity Without Noise Floor in Fair Federated Learning
- https://arxiv.org/abs/2512.08671
- arXiv:2512.08671v2 Announce Type: replace
-Abstract: Recent work \cite{arifgroup} introduced Federated Proximal Gradient \textbf{(\texttt{FedProxGrad})} for solving non-convex composite optimization problems in group fair federated learning. However, the original analysis established convergence only to a \textit{noise-dominated neighborhood of stationarity}, with explicit dependence on a variance-induced noise floor. In this work, we provide an improved asymptotic convergence analysis for a generalized \texttt{FedProxGrad}-type analytical framework with inexact local proximal solutions and explicit fairness regularization. We call this extended analytical framework \textbf{DS \texttt{FedProxGrad}} (Decay Step Size \texttt{FedProxGrad}). Under a Robbins-Monro step-size schedule \cite{robbins1951stochastic} and a mild decay condition on local inexactness, we prove that $\liminf_{r\to\infty} \mathbb{E}[\|\nabla F(\mathbf{x}^r)\|^2] = 0$, i.e., the algorithm is asymptotically stationary and the convergence rate does not depend on a variance-induced noise floor.
- oai:arXiv.org:2512.08671v2
- cs.LG
- stat.ML
- Thu, 11 Dec 2025 00:00:00 -0500
+ LLMs in Interpreting Legal Documents
+ https://arxiv.org/abs/2512.09830
+ arXiv:2512.09830v2 Announce Type: replace
+Abstract: This chapter explores the application of Large Language Models in the legal domain, showcasing their potential to optimise and augment traditional legal tasks by analysing possible use cases, such as assisting in interpreting statutes, contracts, and case law, enhancing clarity in legal summarisation, contract negotiation, and information retrieval. There are several challenges that can arise from the application of such technologies, such as algorithmic monoculture, hallucinations, and compliance with existing regulations, including the EU's AI Act and recent U.S. initiatives, alongside the emerging approaches in China. Furthermore, two different benchmarks are presented.
+ oai:arXiv.org:2512.09830v2
+ cs.CL
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Huzaifa Arif
+ http://creativecommons.org/licenses/by/4.0/
+ Simone Corbo
- Efficiently Reconstructing Dynamic Scenes One D4RT at a Time
- https://arxiv.org/abs/2512.08924
- arXiv:2512.08924v2 Announce Type: replace
-Abstract: Understanding and reconstructing the complex geometry and motion of dynamic scenes from video remains a formidable challenge in computer vision. This paper introduces D4RT, a simple yet powerful feedforward model designed to efficiently solve this task. D4RT utilizes a unified transformer architecture to jointly infer depth, spatio-temporal correspondence, and full camera parameters from a single video. Its core innovation is a novel querying mechanism that sidesteps the heavy computation of dense, per-frame decoding and the complexity of managing multiple, task-specific decoders. Our decoding interface allows the model to independently and flexibly probe the 3D position of any point in space and time. The result is a lightweight and highly scalable method that enables remarkably efficient training and inference. We demonstrate that our approach sets a new state of the art, outperforming previous methods across a wide spectrum of 4D reconstruction tasks. We refer to the project webpage for animated results: https://d4rt-paper.github.io/.
- oai:arXiv.org:2512.08924v2
+ ReViSE: Towards Reason-Informed Video Editing in Unified Models with Self-Reflective Learning
+ https://arxiv.org/abs/2512.09924
+ arXiv:2512.09924v2 Announce Type: replace
+Abstract: Video unified models exhibit strong capabilities in understanding and generation, yet they struggle with reason-informed visual editing even when equipped with powerful internal vision-language models (VLMs). We attribute this gap to two factors: 1) existing datasets are inadequate for training and evaluating reasoning-aware video editing, and 2) an inherent disconnect between the models' reasoning and editing capabilities, which prevents the rich understanding from effectively instructing the editing process. Bridging this gap requires an integrated framework that connects reasoning with visual transformation. To address this gap, we introduce the Reason-Informed Video Editing (RVE) task, which requires reasoning about physical plausibility and causal dynamics during editing. To support systematic evaluation, we construct RVE-Bench, a comprehensive benchmark with two complementary subsets: Reasoning-Informed Video Editing and In-Context Video Generation. These subsets cover diverse reasoning dimensions and real-world editing scenarios. Building upon this foundation, we propose the ReViSE, a Self-Reflective Reasoning (SRF) framework that unifies generation and evaluation within a single architecture. The model's internal VLM provides intrinsic feedback by assessing whether the edited video logically satisfies the given instruction. The differential feedback that refines the generator's reasoning behavior during training. Extensive experiments on RVE-Bench demonstrate that ReViSE significantly enhances editing accuracy and visual fidelity, achieving a 32% improvement of the Overall score in the reasoning-informed video editing subset over state-of-the-art methods.
+ oai:arXiv.org:2512.09924v2cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chuhan Zhang, Guillaume Le Moing, Skanda Koppula, Ignacio Rocco, Liliane Momeni, Junyu Xie, Shuyang Sun, Rahul Sukthankar, Jo\"elle K. Barral, Raia Hadsell, Zoubin Ghahramani, Andrew Zisserman, Junlin Zhang, Mehdi S. M. Sajjadi
+ http://creativecommons.org/licenses/by/4.0/
+ Xinyu Liu, Hangjie Yuan, Yujie Wei, Jiazheng Xing, Yujin Han, Jiahao Pan, Yanbiao Ma, Chi-Min Chan, Kang Zhao, Shiwei Zhang, Wenhan Luo, Yike Guo
- Equiangular lines via matrix projection
- https://arxiv.org/abs/2110.15842
- arXiv:2110.15842v5 Announce Type: replace-cross
-Abstract: In 1973, Lemmens and Seidel posed the problem of determining the maximum number of equiangular lines in $\mathbb{R}^r$ with angle $\arccos(\alpha)$ and gave a partial answer in the regime $r \leq 1/\alpha^2 - 2$. At the other extreme where $r$ is at least exponential in $1/\alpha$, recent breakthroughs have led to an almost complete resolution of this problem. In this paper, we introduce a new method for obtaining upper bounds which unifies and improves upon previous approaches, thereby yielding bounds which bridge the gap between the aforementioned regimes and are best possible either exactly or up to a small multiplicative constant. Our approach relies on orthogonal projection of matrices with respect to the Frobenius inner product and as a byproduct, it yields the first extension of the Alon-Boppana theorem to dense graphs, with equality for strongly regular graphs corresponding to $\binom{r+1}{2}$ equiangular lines in $\mathbb{R}^r$. Applications of our method in the complex setting will be discussed as well.
- oai:arXiv.org:2110.15842v5
+ Polymer Dynamics via Cliques: New Conditions for Approximations
+ https://arxiv.org/abs/2007.08293
+ arXiv:2007.08293v4 Announce Type: replace-cross
+Abstract: Abstract polymer models are systems of weighted objects, called polymers, equipped with an incompatibility relation. An important quantity associated with such models is the partition function, which is the weighted sum over all sets of compatible polymers. Various approximation problems reduce to approximating the partition function of a polymer model. Central to the existence of such approximation algorithms are weight conditions of the respective polymer model. Such conditions are derived either via complex analysis or via probabilistic arguments. We follow the latter path and establish a new condition -- the clique dynamics condition -- , which is less restrictive than the ones in the literature. We introduce a new Markov chain where the clique dynamics condition implies rapid mixing by utilizing cliques of incompatible polymers that naturally arise from the translation of algorithmic problems into polymer models. This leads to improved parameter ranges for several approximation algorithms, such as a factor of at least $2^{1/\alpha}$ for the hard-core model on bipartite $\alpha$-expanders.
+ oai:arXiv.org:2007.08293v4
+ math.PR
+ cs.DM
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Tobias Friedrich, Andreas G\"obel, Martin S. Krejca, Marcus Pappik
+
+
+ On the minimum number of inversions to make a digraph $k$-(arc-)strong
+ https://arxiv.org/abs/2303.11719
+ arXiv:2303.11719v4 Announce Type: replace-cross
+Abstract: The {\it inversion} of a set $X$ of vertices in a digraph $D$ consists of reversing the direction of all arcs of $D\langle X\rangle$. We study $sinv'_k(D)$ (resp. $sinv_k(D)$) which is the minimum number of inversions needed to transform $D$ into a $k$-arc-strong (resp. $k$-strong) digraph and $sinv'_k(n) = \max\{sinv'_k(D) \mid D~\mbox{is a $2k$-edge-connected digraph of order $n$}\}$. We show :
+ $(i): \frac{1}{2} \log (n - k+1) \leq sinv'_k(n) \leq \log n + 4k -3$ ;
+ $(ii):$ for any fixed positive integers $k$ and $t$, deciding whether a given oriented graph $D$ with $sinv'_k(D)<+\infty$ satisfies $sinv'_k(D) \leq t$ is NP-complete;
+ $(iii):$ for any fixed positive integers $k$ and $t$, deciding whether a given oriented graph $D$ with $sinv_k(D)<+\infty$ satisfies $sinv_k(D) \leq t$ is NP-complete;
+ $(iv):$ if $T$ is a tournament of order at least $2k+1$, then $sinv'_k(T) \leq sinv_k(T) \leq 2k$, and $sinv'_k(T) \leq \frac{4}{3}k+o(k)$;
+ $(v):\frac{1}{2}\log(2k+1) \leq sinv'_k(T) \leq sinv_k(T)$ for some tournament $T$ of order $2k+1$;
+ $(vi):$ if $T$ is a tournament of order at least $19k-2$ (resp. $11k-2$), then $sinv'_k(T) \leq sinv_k(T) \leq 1$ (resp. $sinv_k(T) \leq 3$);
+ $(vii):$ for every $\epsilon>0$, there exists $C$ such that $sinv'_k(T) \leq sinv_k(T) \leq C$ for every tournament $T$ on at least $2k+1 + \epsilon k$ vertices.
+ oai:arXiv.org:2303.11719v4math.CO
- cs.IT
- math.IT
- math.MG
- quant-ph
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.DM
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Igor Balla
+ Julien Duron, Fr\'ed\'eric Havet, Florian H\"orsch, Cl\'ement RambaudPerformance Analysis of Quantum CSS Error-Correcting Codes via MacWilliams Identities
https://arxiv.org/abs/2305.01301
- arXiv:2305.01301v3 Announce Type: replace-cross
+ arXiv:2305.01301v4 Announce Type: replace-cross
Abstract: We analyze the performance of quantum stabilizer codes, one of the most important classes for practical implementations, on both symmetric and asymmetric quantum channels. To this aim, we first derive the weight enumerator (WE) for the undetectable errors based on the quantum MacWilliams identities. The WE is then used to evaluate tight upper bounds on the error rate of CSS quantum codes with \acl{MW} decoding. For surface codes we also derive a simple closed form expression of the bounds over the depolarizing channel. We introduce a novel approach that combines the knowledge of WE with a logical operator analysis, allowing the derivation of the exact asymptotic error rate for short codes. For example, on a depolarizing channel with physical error rate $\rho \to 0$, the logical error rate $\rho_\mathrm{L}$ is asymptotically $\rho_\mathrm{L} \approx 16 \rho^2$ for the $[[9,1,3]]$ Shor code, $\rho_\mathrm{L} \approx 16.3 \rho^2$ for the $[[7,1,3]]$ Steane code, $\rho_\mathrm{L} \approx 18.7 \rho^2$ for the $[[13,1,3]]$ surface code, and $\rho_\mathrm{L} \approx 149.3 \rho^3$ for the $[[41,1,5]]$ surface code. For larger codes our bound provides $\rho_\mathrm{L} \approx 1215 \rho^4$ and $\rho_\mathrm{L} \approx 663 \rho^5$ for the $[[85,1,7]]$ and the $[[181,1,10]]$ surface codes, respectively. Finally, we extend our analysis to include realistic, noisy syndrome extraction circuits by modeling error propagation throughout gadgets. This enables estimation of logical error rates under faulty measurements. The performance analysis serves as a design tool for developing fault-tolerant quantum systems by guiding the selection of quantum codes based on their error correction capability. Additionally, it offers a novel perspective on quantum degeneracy, showing it represents the fraction of non-correctable error patterns shared by multiple logical operators.
- oai:arXiv.org:2305.01301v3
+ oai:arXiv.org:2305.01301v4quant-phcs.ITmath.IT
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://creativecommons.org/licenses/by/4.0/Diego Forlivesi, Lorenzo Valentini, Marco Chiani
- Towards a theory of natural directed paths
- https://arxiv.org/abs/2306.02792
- arXiv:2306.02792v4 Announce Type: replace-cross
-Abstract: We introduce the abstract setting of presheaf category on a thick category of cubes. Precubical sets, symmetric transverse sets, symmetric precubical sets and the new category of (non-symmetric) transverse sets are examples of this structure. All these presheaf categories share the same metric and homotopical properties from a directed homotopy point of view. This enables us to extend Raussen's notion of natural $d$-path for each of them. Finally, we adapt Ziemia\'{n}ski's notion of cube chain to this abstract setting and we prove that it has the expected behavior on precubical sets. As an application, we verify that the formalization of the parallel composition with synchronization of process algebra using the coskeleton functor of the category of symmetric transverse sets has a category of cube chains with the correct homotopy type.
- oai:arXiv.org:2306.02792v4
- math.CT
- cs.LO
- math.AT
- Thu, 11 Dec 2025 00:00:00 -0500
+ Unconditional correctness of recent quantum algorithms for factoring and computing discrete logarithms
+ https://arxiv.org/abs/2404.16450
+ arXiv:2404.16450v2 Announce Type: replace-cross
+Abstract: In 1994, Shor introduced his famous quantum algorithm to factor integers and compute discrete logarithms in polynomial time. In 2023, Regev proposed a multi-dimensional version of Shor's algorithm that requires far fewer quantum gates. His algorithm relies on a number-theoretic conjecture on the elements in $(\mathbb{Z}/N\mathbb{Z})^{\times}$ that can be written as short products of very small prime numbers. We prove a version of this conjecture using tools from analytic number theory such as zero-density estimates. As a result, we obtain an unconditional proof of correctness of this improved quantum algorithm and of subsequent variants.
+ oai:arXiv.org:2404.16450v2
+ math.NT
+ cs.CC
+ quant-ph
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Philippe Gaucher
+ C\'edric Pilatte
- An explicit Euler method for Sobolev vector fields with applications to the continuity equation on non cartesian grids
- https://arxiv.org/abs/2402.04118
- arXiv:2402.04118v4 Announce Type: replace-cross
-Abstract: We prove a novel stability estimate in $L^\infty _t (L^p _x)$ between the regular Lagrangian flow of a Sobolev vector field and a piecewise affine approximation of such flow. This approximation of the flow is obtained by a (sort of) explicit Euler method, and it is the crucial tool to prove approximation results for the solution of the continuity equation by using the representation of the solution as the push-forward via the regular Lagrangian flow of the initial datum. We approximate the solution in two ways, one probabilistic and one deterministic, using different approximations for both the flow and the initial datum. Such estimates for the solution of the continuity equation are derived on non Cartesian grids and without the need to assume a CFL condition.
- oai:arXiv.org:2402.04118v4
- math.AP
- cs.NA
- math.NA
- Thu, 11 Dec 2025 00:00:00 -0500
+ Equivariant Test-Time Training with Operator Sketching for Imaging Inverse Problems
+ https://arxiv.org/abs/2411.05771
+ arXiv:2411.05771v5 Announce Type: replace-cross
+Abstract: Equivariant Imaging (EI) regularization has become the de-facto technique for unsupervised training of deep imaging networks, without any need of ground-truth data. Observing that the EI-based unsupervised training paradigm currently has significant computational redundancy leading to inefficiency in high-dimensional applications, we propose a sketched EI regularization which leverages the randomized sketching techniques for acceleration. We apply our sketched EI regularization to develop an accelerated deep internal learning framework, which can be efficiently applied for test-time network adaptation. Additionally, for network adaptation tasks, we propose a parameter-efficient approach to accelerate both EI and Sketched-EI via optimizing only the normalization layers. Our numerical study on X-ray CT and multicoil magnetic resonance image reconstruction tasks demonstrate that our approach can achieve significant computational acceleration over the standard EI counterpart, especially in test-time training tasks.
+ oai:arXiv.org:2411.05771v5
+ eess.IV
+ cs.CV
+ cs.LG
+ math.OC
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Tommaso Cortopassi
+ Guixian Xu, Jinglai Li, Junqi Tang
- Optimal Transportation by Orthogonal Coupling Dynamics
- https://arxiv.org/abs/2410.08060
- arXiv:2410.08060v2 Announce Type: replace-cross
-Abstract: Many numerical and learning algorithms rely on the solution of the Monge-Kantorovich problem and Wasserstein distances, which provide appropriate distributional metrics. While the natural approach is to treat the problem as an infinite-dimensional linear programming, such a methodology limits the computational performance due to the polynomial scaling with respect to the sample size along with intensive memory requirements. We propose a novel alternative framework to address the Monge-Kantorovich problem based on a projection type gradient descent scheme. The dynamics builds on the notion of the conditional expectation, where the connection with the opinion dynamics is leveraged to devise efficient numerical schemes. We demonstrate that the resulting dynamics recovers random maps with favourable computational performance. Along with the theoretical insight, the proposed dynamics paves the way for innovative approaches to construct numerical schemes for computing optimal transport maps as well as Wasserstein distances.
- oai:arXiv.org:2410.08060v2
- math.OC
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Lightweight Model Attribution and Detection of Synthetic Speech via Audio Residual Fingerprints
+ https://arxiv.org/abs/2411.14013
+ arXiv:2411.14013v4 Announce Type: replace-cross
+Abstract: As speech generation technologies advance, so do risks of impersonation, misinformation, and spoofing. We present a lightweight, training-free approach for detecting synthetic speech and attributing it to its source model. Our method addresses three tasks: (1) single-model attribution in an open-world setting, (2) multi-model attribution in a closed-world setting, and (3) real vs. synthetic speech classification. The core idea is simple: we compute standardized average residuals--the difference between an audio signal and its filtered version--to extract model-agnostic fingerprints that capture synthesis artifacts. Experiments across multiple synthesis systems and languages show AUROC scores above 99%, with strong reliability even when only a subset of model outputs is available. The method maintains high performance under common audio distortions, including echo and moderate background noise, while data augmentation can improve results in more challenging conditions. In addition, out-of-domain detection is performed using Mahalanobis distances to in-domain residual fingerprints, achieving an F1 score of 0.91 on unseen models, reinforcing the method's efficiency, generalizability, and suitability for digital forensics and security applications.
+ oai:arXiv.org:2411.14013v4
+ eess.AS
+ cs.CR
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://creativecommons.org/licenses/by/4.0/
- Mohsen Sadr, Peyman Mohajerin Esfahani, Hossein Gorji
+ Mat\'ias Pizarro, Mike Laszkiewicz, Dorothea Kolossa, Asja Fischer
- Machine Learning for Arbitrary Single-Qubit Rotations on an Embedded Device
- https://arxiv.org/abs/2411.13037
- arXiv:2411.13037v2 Announce Type: replace-cross
-Abstract: Here we present a technique for using machine learning (ML) for single-qubit gate synthesis on field programmable logic for a superconducting transmon-based quantum computer based on simulated studies. Our approach is multi-stage. We first bootstrap a model based on simulation with access to the full statevector for measuring gate fidelity. We next present an algorithm, named adapted randomized benchmarking (ARB), for fine-tuning the gate on hardware based on measurements of the devices. We also present techniques for deploying the model on programmable devices with care to reduce the required resources. While the techniques here are applied to a transmon-based computer, many of them are portable to other architectures.
- oai:arXiv.org:2411.13037v2
- quant-ph
- cs.ET
- Thu, 11 Dec 2025 00:00:00 -0500
+ Extrapolating Jet Radiation with Autoregressive Transformers
+ https://arxiv.org/abs/2412.12074
+ arXiv:2412.12074v2 Announce Type: replace-cross
+Abstract: Generative networks are an exciting tool for fast LHC event fixed number of particles. Autoregressive transformers allow us to generate events containing variable numbers of particles, very much in line with the physics of QCD jet radiation, and offer the possibility to generalize to higher multiplicities. We show how transformers can learn a factorized likelihood for jet radiation and extrapolate in terms of the number of generated jets. For this extrapolation, bootstrapping training data and training with modifications of the likelihood loss can be used.
+ oai:arXiv.org:2412.12074v2
+ hep-ph
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Anja Butter, Fran\c{c}ois Charton, Javier Mari\~no Villadamigo, Ayodele Ore, Tilman Plehn, Jonas Spinner
+
+
+ Beyond Log-Concavity and Score Regularity: Improved Convergence Bounds for Score-Based Generative Models in W2-distance
+ https://arxiv.org/abs/2501.02298
+ arXiv:2501.02298v5 Announce Type: replace-cross
+Abstract: Score-based Generative Models (SGMs) aim to sample from a target distribution by learning score functions using samples perturbed by Gaussian noise. Existing convergence bounds for SGMs in the W2-distance rely on stringent assumptions about the data distribution. In this work, we present a novel framework for analyzing W2-convergence in SGMs, significantly relaxing traditional assumptions such as log-concavity and score regularity. Leveraging the regularization properties of the Ornstein--Uhlenbeck (OU) process, we show that weak log-concavity of the data distribution evolves into log-concavity over time. This transition is rigorously quantified through a PDE-based analysis of the Hamilton--Jacobi--Bellman equation governing the log-density of the forward process. Moreover, we establish that the drift of the time-reversed OU process alternates between contractive and non-contractive regimes, reflecting the dynamics of concavity. Our approach circumvents the need for stringent regularity conditions on the score function and its estimators, relying instead on milder, more practical assumptions. We demonstrate the wide applicability of this framework through explicit computations on Gaussian mixture models, illustrating its versatility and potential for broader classes of data distributions.
+ oai:arXiv.org:2501.02298v5
+ stat.ML
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1007/s42484-024-00214-8
- Madhav Narayan Bhat, Marco Russo, Luca P. Carloni, Giuseppe Di Guglielmo, Farah Fahim, Andy C. Y. Li, Gabriel N. Perdue
+ Marta Gentiloni-Silveri, Antonio Ocello
- Refining Concentration for Gaussian Quadratic Chaos
- https://arxiv.org/abs/2412.03774
- arXiv:2412.03774v3 Announce Type: replace-cross
-Abstract: We slightly modify the proof of Hanson-Wright inequality (HWI) for concentration of Gaussian quadratic chaos where we tighten the bound by increasing the absolute constant in its formulation from the largest known value of 0.125 to at least 0.145 in the symmetric case. We also present a sharper version of an inequality due to Laurent and Massart (LMI) through which we increase the absolute constant in HWI from the largest available value of approximately $0.134$ due to LMI itself to at least $0.152$ in the positive-semidefinite case. A new sequence of concentration bounds indexed by $m=1,2,3,\cdots, \infty$ is developed that involves Schatten norms of the underlying matrix. The case $m=1$ recovers HWI. These bounds undergo a phase transition in the sense that if the tail parameter is smaller than a critical threshold $\tau_c$, then $m=1$ is the tightest and if it is larger than $\tau_c$, then $m=\infty$ is the tightest. This leads to a novel bound called the~$m_\infty$-bound. A separate concentration bound named twin to HWI is also developed that is tighter than HWI for both sufficiently small and large tail parameter. Finally, we explore concentration bounds when the underlying matrix is positive-semidefinite and only the dimension~$n$ and its largest eigenvalue are known. Five candidates are examined, namely, the $m_\infty$-bound, relaxed versions of HWI and LMI, the $\chi^2$-bound and the large deviations bound. The sharpest among these is always either the $m_\infty$-bound or the $\chi^2$-bound. The case of even dimension is given special attention. If $n=2,4,6$, the $\chi^2$-bound is tighter than the $m_\infty$-bound. If $n$ is an even integer greater than or equal to 8, the $m_\infty$-bound is sharper than the $\chi^2$-bound if and only if the ratio of the tail parameter over the largest eigenvalue lies inside a finite open interval which expands indefinitely as $n$ grows.
- oai:arXiv.org:2412.03774v3
- math.PR
+ Rydberg Atomic Quantum Receivers for Multi-Target DOA Estimation
+ https://arxiv.org/abs/2501.02820
+ arXiv:2501.02820v3 Announce Type: replace-cross
+Abstract: Quantum sensing technologies have experienced rapid progresses since entering the `second quantum revolution'. Among various candidates, schemes relying on Rydberg atoms exhibit compelling advantages for detecting radio frequency signals. Based on this, Rydberg atomic quantum receivers (RAQRs) have emerged as a promising solution to classical wireless communication and sensing. To harness the advantages and exploit the potential of RAQRs in wireless sensing, we investigate the realization of the direction of arrival (DOA) estimation by RAQRs. Specifically, we first conceive a Rydberg atomic quantum uniform linear array (RAQ-ULA) aided wireless receiver for multi-target DOA detection and propose the corresponding signal model of this sensing system. Our model reveals that the presence of the radio-frequency local oscillator in the RAQ-ULA creates sensor gain mismatches, which degrade the DOA estimation significantly by employing the classical Estimation of Signal Parameters via Rotational Invariant Techniques (ESPRIT). To solve this sensor gain mismatch problem, we propose the Rydberg atomic quantum ESPRIT (RAQ-ESPRIT) relying on our model. Lastly, we characterize our scheme through numerical simulations, where the results exhibit that it is capable of reducing the estimation error of its classical counterpart on the order of $> 400$-fold and $> 9000$-fold in the PSL and SQL, respectively.
+ oai:arXiv.org:2501.02820v3
+ eess.SPcs.ITmath.IT
- Thu, 11 Dec 2025 00:00:00 -0500
+ quant-ph
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kamyar Moshksar
+ Tierui Gong, Chau Yuen, Chong Meng Samson See, M\'erouane Debbah, Lajos Hanzo
- INRetouch: Context Aware Implicit Neural Representation for Photography Retouching
- https://arxiv.org/abs/2412.03848
- arXiv:2412.03848v4 Announce Type: replace-cross
-Abstract: Professional photo editing remains challenging, requiring extensive knowledge of imaging pipelines and significant expertise. While recent deep learning approaches, particularly style transfer methods, have attempted to automate this process, they often struggle with output fidelity, editing control, and complex retouching capabilities. We propose a novel retouch transfer approach that learns from professional edits through before-after image pairs, enabling precise replication of complex editing operations. We develop a context-aware Implicit Neural Representation that learns to apply edits adaptively based on image content and context, and is capable of learning from a single example. Our method extracts implicit transformations from reference edits and adaptively applies them to new images. To facilitate this research direction, we introduce a comprehensive Photo Retouching Dataset comprising 100,000 high-quality images edited using over 170 professional Adobe Lightroom presets. Through extensive evaluation, we demonstrate that our approach not only surpasses existing methods in photo retouching but also enhances performance in related image reconstruction tasks like Gamut Mapping and Raw Reconstruction. By bridging the gap between professional editing capabilities and automated solutions, our work presents a significant step toward making sophisticated photo editing more accessible while maintaining high-fidelity results. The source code and the dataset are publicly available at https://omaralezaby.github.io/inretouch .
- oai:arXiv.org:2412.03848v4
- eess.IV
+ Sublinear Variational Optimization of Gaussian Mixture Models with Millions to Billions of Parameters
+ https://arxiv.org/abs/2501.12299
+ arXiv:2501.12299v2 Announce Type: replace-cross
+Abstract: Gaussian Mixture Models (GMMs) range among the most frequently used models in machine learning. However, training large, general GMMs becomes computationally prohibitive for datasets that have many data points $N$ of high-dimensionality $D$. For GMMs with arbitrary covariances, we here derive a highly efficient variational approximation, which is then integrated with mixtures of factor analyzers (MFAs). For GMMs with $C$ components, our proposed algorithm substantially reduces runtime complexity from $\mathcal{O}(NCD^2)$ per iteration to a complexity scaling linearly with $D$ and sublinearly with $NC$. In numerical experiments, we first validate that the complexity reduction results in a sublinear scaling for the entire GMM optimization process. Second, we show on large-scale benchmarks that the sublinear algorithm results in speed-ups of an order-of-magnitude compared to the state-of-the-art. Third, as a proof of concept, we finally train GMMs with over 10 billion parameters on about 100 million images, observing training times of less than nine hours on a single state-of-the-art CPU. Finally, and forth, we demonstrate the effectiveness of large-scale GMMs on the task of zero-shot image denoising, where sublinear training results in state-of-the-art denoising times while competitive denoising performance is maintained.
+ oai:arXiv.org:2501.12299v2
+ stat.MLcs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replace-cross
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Omar Elezabi, Marcos V. Conde, Zongwei Wu, Radu Timofte
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sebastian Salwig, Till Kahlke, Florian Hirschberger, Dennis Forster, J\"org L\"ucke
- SEAL: Speech Embedding Alignment Learning for Speech Large Language Model with Retrieval-Augmented Generation
- https://arxiv.org/abs/2502.02603
- arXiv:2502.02603v2 Announce Type: replace-cross
-Abstract: Embedding-based retrieval models have made significant strides in retrieval-augmented generation (RAG) techniques for text and multimodal large language models (LLMs) applications. However, when it comes to speech larage language models (SLLMs), these methods are limited to a two-stage process, where automatic speech recognition (ASR) is combined with text-based retrieval. This sequential architecture suffers from high latency and error propagation. To address these limitations, we propose a unified embedding framework that eliminates the need for intermediate text representations. Specifically, the framework includes separate speech and text encoders, followed by a shared scaling layer that maps both modalities into a common embedding space. Our model reduces pipeline latency by 50\% while achieving higher retrieval accuracy compared to traditional two-stage methods. We also provide a theoretical analysis of the challenges inherent in end-to-end speech retrieval and introduce architectural principles for effective speech-to-document matching. Extensive experiments demonstrate the robustness of our approach across diverse acoustic conditions and speaker variations, paving the way for a new paradigm in multimodal SLLMs retrieval systems.
- oai:arXiv.org:2502.02603v2
- eess.AS
- cs.CL
- cs.SD
- Thu, 11 Dec 2025 00:00:00 -0500
+ Tight relations and equivalences between smooth relative entropies
+ https://arxiv.org/abs/2501.12447
+ arXiv:2501.12447v4 Announce Type: replace-cross
+Abstract: The precise one-shot characterisation of operational tasks in classical and quantum information theory relies on different forms of smooth entropic quantities. A particularly important connection is between the hypothesis testing relative entropy and the smoothed max-relative entropy, which together govern many operational settings. We first strengthen this connection into a type of equivalence: we show that the hypothesis testing relative entropy is equivalent to a variant of the smooth max-relative entropy based on the information spectrum divergence, which can be alternatively understood as a measured smooth max-relative entropy. Furthermore, we improve a fundamental lemma due to Datta and Renner that connects the different variants of the smoothed max-relative entropy, introducing a modified proof technique based on matrix geometric means and a tightened gentle measurement lemma. We use the unveiled connections and tools to strictly improve on previously known one-shot bounds and duality relations between the smooth max-relative entropy and the hypothesis testing relative entropy, establishing provably tight bounds between them. We use these results to refine other divergence inequalities, in particular sharpening bounds that connect the max-relative entropy with R\'enyi divergences.
+ oai:arXiv.org:2501.12447v4
+ quant-ph
+ cs.IT
+ math-ph
+ math.IT
+ math.MP
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chunyu Sun, Bingyu Liu, Zhichao Cui, Junhan Shi, Anbin Qi, Tian-hao Zhang, Dinghao Zhou, Lewei Lu
+ Bartosz Regula, Ludovico Lami, Nilanjana Datta
- Dynamic Pricing in the Linear Valuation Model using Shape Constraints
- https://arxiv.org/abs/2502.05776
- arXiv:2502.05776v4 Announce Type: replace-cross
-Abstract: We propose a shape-constrained approach to dynamic pricing for censored data in the linear valuation model eliminating the need for tuning parameters commonly required by existing methods. Previous works have addressed the challenge of unknown market noise distribution $F_0$ using strategies ranging from kernel methods to reinforcement learning algorithms, such as bandit techniques and upper confidence bounds (UCB), under the assumption that $F_0$ satisfies Lipschitz (or stronger) conditions. In contrast, our method relies on isotonic regression under the weaker assumption that $F_0$ is $\alpha$-H\"older continuous for some $\alpha \in (0,1]$, for which we derive a regret upper bound. Simulations and experiments with real-world data obtained by Welltower Inc (a major healthcare Real Estate Investment Trust) consistently demonstrate that our method attains lower empirical regret in comparison to several existing methods in the literature while offering the advantage of being tuning-parameter free.
- oai:arXiv.org:2502.05776v4
- stat.ML
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ MFC 5.0: An exascale many-physics flow solver
+ https://arxiv.org/abs/2503.07953
+ arXiv:2503.07953v4 Announce Type: replace-cross
+Abstract: Many problems of interest in engineering, medicine, and the fundamental sciences rely on high-fidelity flow simulation, making performant computational fluid dynamics solvers a mainstay of the open-source software community. Previous work, MFC 3.0, was published, documented, and made open-source by Bryngelson et al. CPC (2021) features numerous physical features, numerical methods, and scalable infrastructure. MFC 5.0 is a significant update to MFC 3.0, featuring a broad set of well-established and novel physical models and numerical methods, as well as the introduction of GPU and APU (or superchip) acceleration. We exhibit state-of-the-art performance and ideal scaling on the first two exascale supercomputers, OLCF's Frontier and LLNL's El Capitan. Combined with MFC's single-accelerator performance, MFC achieves exascale computation in practice and has achieved the largest-to-date public CFD simulation at 200 trillion grid points, earning it a 2025 ACM Gordon Bell Prize finalist. New physical features include the immersed boundary method, $N$-fluid phase change, Euler-Euler and Euler-Lagrange sub-grid bubble models, fluid-structure interaction, hypo- and hyper-elastic materials, chemically reacting flow, two-material surface tension, magnetohydrodynamics (MHD), and more. Numerical techniques now represent the current state-of-the-art, including general relaxation characteristic boundary conditions, WENO variants, Strang splitting for stiff sub-grid flow features, and low Mach number treatments. Weak scaling to tens of thousands of GPUs on OLCF's Summit and Frontier, and LLNL's El Capitan, achieves efficiencies within 5% of ideal to over 90% of their respective system sizes. Strong scaling results for a 16-fold increase in device count show parallel efficiencies exceeding 90% on OLCF Frontier.
+ oai:arXiv.org:2503.07953v4
+ physics.flu-dyn
+ cs.DC
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://creativecommons.org/licenses/by/4.0/
- Daniele Bracale, Moulinath Banerjee, Yuekai Sun, Kevin Stoll, Salam Turki
+ Benjamin Wilfong, Henry A. Le Berre, Anand Radhakrishnan, Ansh Gupta, Daniel J. Vickers, Diego Vaca-Revelo, Dimitrios Adam, Haocheng Yu, Hyeoksu Lee, Jose Rodolfo Chreim, Mirelys Carcana Barbosa, Yanjun Zhang, Esteban Cisneros-Garibay, Aswin Gnanaskandan, Mauro Rodriguez Jr., Reuben D. Budiardja, Stephen Abbott, Tim Colonius, Spencer H. Bryngelson
- Finite-Sample Analysis of Policy Evaluation for Robust Average Reward Reinforcement Learning
- https://arxiv.org/abs/2502.16816
- arXiv:2502.16816v4 Announce Type: replace-cross
-Abstract: We present the first finite-sample analysis of policy evaluation in robust average-reward Markov Decision Processes (MDPs). Prior work in this setting have established only asymptotic convergence guarantees, leaving open the question of sample complexity. In this work, we address this gap by showing that the robust Bellman operator is a contraction under a carefully constructed semi-norm, and developing a stochastic approximation framework with controlled bias. Our approach builds upon Multi-Level Monte Carlo (MLMC) techniques to estimate the robust Bellman operator efficiently. To overcome the infinite expected sample complexity inherent in standard MLMC, we introduce a truncation mechanism based on a geometric distribution, ensuring a finite expected sample complexity while maintaining a small bias that decays exponentially with the truncation level. Our method achieves the order-optimal sample complexity of $\tilde{\mathcal{O}}(\epsilon^{-2})$ for robust policy evaluation and robust average reward estimation, marking a significant advancement in robust reinforcement learning theory.
- oai:arXiv.org:2502.16816v4
- stat.ML
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ The biequivalence of path categories and axiomatic Martin-L\"of type theories
+ https://arxiv.org/abs/2503.15431
+ arXiv:2503.15431v2 Announce Type: replace-cross
+Abstract: The semantics of extensional type theory has an elegant categorical description: models of extensional =-types, 1-types, and Sigma-types are biequivalent to finitely complete categories, while adding Pi-types yields locally Cartesian closed categories. We establish parallel results for axiomatic type theory, which includes systems like cubical type theory, where the computation rule of the =-types only holds as a propositional axiom instead of a definitional reduction. In particular, we prove that models of axiomatic =-types, and standard 1- and Sigma-types are biequivalent to certain path categories, while adding axiomatic Pi-types yields dependent homotopy exponents.
+ This biequivalence simplifies axiomatic =-types, which are more intricate than extensional ones since they permit higher dimensional structure. Specifically, path categories use a primitive notion of equivalence instead of a direct reproduction of the syntactic elimination rules and computation axioms. We apply our correspondence to prove a coherence theorem: we show that these weak homotopical models can be turned into equivalent strict models of axiomatic type theory. In addition, we introduce a more modular notion, that of a display map path category, which only models axiomatic =-types by default, while leaving room to add other axiomatic type formers such as 1-, Sigma-, and Pi-types.
+ oai:arXiv.org:2503.15431v2
+ math.LO
+ cs.LO
+ math.AT
+ math.CT
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yang Xu, Washim Uddin Mondal, Vaneet Aggarwal
+ Dani\"el Otten, Matteo Spadetto
- Revenue Maximization Under Sequential Price Competition Via The Estimation Of s-Concave Demand Functions
- https://arxiv.org/abs/2503.16737
- arXiv:2503.16737v5 Announce Type: replace-cross
-Abstract: We consider price competition among multiple sellers over a selling horizon of $T$ periods. In each period, sellers simultaneously offer their prices (which are made public) and subsequently observe their respective demand (not made public). The demand function of each seller depends on all sellers' prices through a private, unknown, and nonlinear relationship. We propose a dynamic pricing policy that uses semi-parametric least-squares estimation and show that when the sellers employ our policy, their prices converge at a rate of $O(T^{-1/7})$ to the Nash equilibrium prices that sellers would reach if they were fully informed. Each seller incurs a regret of $O(T^{5/7})$ relative to a dynamic benchmark policy. A theoretical contribution of our work is proving the existence of equilibrium under shape-constrained demand functions via the concept of $s$-concavity and establishing regret bounds of our proposed policy. Technically, we also establish new concentration results for the least squares estimator under shape constraints. Our findings offer significant insights into dynamic competition-aware pricing and contribute to the broader study of non-parametric learning in strategic decision-making.
- oai:arXiv.org:2503.16737v5
- stat.ML
- cs.LG
- math.PR
- math.ST
- stat.TH
- Thu, 11 Dec 2025 00:00:00 -0500
+ A note on the quantum Wielandt inequality
+ https://arxiv.org/abs/2504.21638
+ arXiv:2504.21638v3 Announce Type: replace-cross
+Abstract: In this note, we prove that the index of primitivity of any primitive unital Schwarz map is at most $2(D-1)^2$, where $D$ is the dimension of the underlying matrix algebra. This inequality was first proved by Rahaman for Schwarz maps which were both unital and trace preserving. As we show, the assumption of unitality is basically innocuous, but in general not all primitive unital Schwarz maps are trace preserving. Therefore, the precise purpose of this note is to showcase how to apply the method of Rahaman to unital primitive Schwarz maps that don't preserve trace. As a corollary of this theorem, we show that the index of primitivity of any primitive 2-positive map is at most $2(D-1)^2$, so in particular this bound holds for arbitrary primitive completely positive maps. We briefly discuss of how this relates to a conjecture of Perez-Garcia, Verstraete, Wolf and Cirac.
+ oai:arXiv.org:2504.21638v3
+ quant-ph
+ cs.IT
+ math.IT
+ math.OA
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Owen Ekblad
+
+
+ FE-MCFormer: An interpretable fault diagnosis framework for rotating machinery under strong noise based on time-frequency fusion transformer
+ https://arxiv.org/abs/2505.06285
+ arXiv:2505.06285v2 Announce Type: replace-cross
+Abstract: Many fault diagnosis methods of rotating machines are based on discriminative features extracted from signals collected from the key components such as bearings. However, under complex operating conditions, periodic impulsive characteristics in the signal related to weak fault information are often obscured by noise interference. Consequently, existing approaches struggle to learn interpretable fault-related features in such scenarios. This paper proposes a novel transformer framework (FE-MCFormer) to extract interpretable time-frequency features, with the aim of improving the fault detection accuracy and intrepretability of rotating machines under strong noise. First, a Fourier adaptive reconstruction embedding layer is introduced as a global information encoder in the model. Subsequently, a time-frequency fusion module is designed, further improve the model robustness and interpretability. The effectiveness of FE-MCFormer in machine fault diagnosis is validated through three case studies.
+ oai:arXiv.org:2505.06285v2
+ eess.SP
+ cs.CV
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://creativecommons.org/licenses/by/4.0/
- Daniele Bracale, Moulinath Banerjee, Cong Shi, Yuekai Sun
+ Yuhan Yuan, Xiaomo Jiang, Haibin Yang, Haixin Zhao, Shengbo Wang, Xueyu Cheng, Jigang Meng, Shuhua Yang
- Efficient Transformed Gaussian Process State-Space Models for Non-Stationary High-Dimensional Dynamical Systems
- https://arxiv.org/abs/2503.18309
- arXiv:2503.18309v4 Announce Type: replace-cross
-Abstract: Gaussian process state-space models (GPSSMs) offer a principled framework for learning and inference in nonlinear dynamical systems with uncertainty quantification. However, existing GPSSMs are limited by the use of multiple independent stationary Gaussian processes (GPs), leading to prohibitive computational and parametric complexity in high-dimensional settings and restricted modeling capacity for non-stationary dynamics. To address these challenges, we propose an efficient transformed Gaussian process state-space model (ETGPSSM) for scalable and flexible modeling of high-dimensional, non-stationary dynamical systems. Specifically, our ETGPSSM integrates a single shared GP with input-dependent normalizing flows, yielding an expressive non-stationary implicit process prior that can capture complex transition dynamics while significantly reducing model complexity. For the inference of the implicit process, we develop a variational inference algorithm that jointly approximates the posterior over the underlying GP and the neural network parameters defining the normalizing flows. To avoid explicit variational parameterization of the latent states, we further incorporate the ensemble Kalman filter (EnKF) into the variational framework, enabling accurate and efficient state estimation. Extensive empirical evaluations on synthetic and real-world datasets demonstrate the superior performance of our ETGPSSM in system dynamics learning, high-dimensional state estimation, and time-series forecasting, outperforming existing GPSSMs and neural network-based SSMs in terms of computational efficiency and accuracy.
- oai:arXiv.org:2503.18309v4
- stat.ML
+ Beyond Basic A/B testing: Improving Statistical Efficiency for Business Growth
+ https://arxiv.org/abs/2505.08128
+ arXiv:2505.08128v2 Announce Type: replace-cross
+Abstract: The standard A/B testing approaches are mostly based on t-test in large scale industry applications. These standard approaches however suffers from low statistical power in business settings, due to nature of small sample-size or non-Gaussian distribution or return-on-investment (ROI) consideration. In this paper, we (i) show the statistical efficiency of using estimating equation and U statistics, which can address these issues separately; and (ii) propose a novel doubly robust generalized U that allows flexible definition of treatment effect, and can handles small samples, distribution robustness, ROI and confounding consideration in one framework. We provide theoretical results on asymptotics and efficiency bounds, together with insights on the efficiency gain from theoretical analysis. We further conduct comprehensive simulation studies, apply the methods to multiple real A/B tests at a large SaaS company, and share results and learnings that are broadly useful.
+ oai:arXiv.org:2505.08128v2
+ stat.MEcs.LG
- eess.SP
- Thu, 11 Dec 2025 00:00:00 -0500
+ math.ST
+ stat.CO
+ stat.TH
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhidi Lin, Ying Li, Feng Yin, Juan Maro\~nas, Alexandre H. Thi\'ery
+ Changshuai Wei, Phuc Nguyen, Benjamin Zelditch, Joyce Chen
- Knowledge Independence Breeds Disruption but Limits Recognition
- https://arxiv.org/abs/2504.09589
- arXiv:2504.09589v2 Announce Type: replace-cross
-Abstract: Despite extensive research on scientific disruption, two questions remain: why disruption has declined amid growing knowledge, and why disruptive work receives fewer and delayed citations. One way to address these questions is to identify an intrinsic, paper-level property that reliably predicts disruption and explains both patterns. Here, we propose a novel measure, knowledge independence, capturing the extent to which a paper draws on references that do not cite one another. Analyzing 114 million publications, we find that knowledge independence strongly predicts disruption and mediates the disruptive advantage of small, onsite, and fresh teams. Its long-term decline, nonreproducible by null models, provides a mechanistic explanation for the parallel decline in disruption. Causal and simulation evidence further indicates that knowledge independence drives the persistent trade-off between disruption and impact. Taken together, these findings fill a critical gap in understanding scientific innovation, revealing a universal law: Knowledge independence breeds disruption but limits recognition.
- oai:arXiv.org:2504.09589v2
- physics.soc-ph
- cs.DL
- cs.SI
- Thu, 11 Dec 2025 00:00:00 -0500
+ AI-Informed Model Analogs for Subseasonal-to-Seasonal Prediction
+ https://arxiv.org/abs/2506.14022
+ arXiv:2506.14022v2 Announce Type: replace-cross
+Abstract: Subseasonal-to-seasonal forecasting is crucial for public health, disaster preparedness, and agriculture, and yet it remains a particularly challenging timescale to predict. We explore the use of an interpretable AI-informed model analog forecasting approach, previously employed on longer timescales, to improve S2S predictions. Using an artificial neural network, we learn a mask of weights to optimize analog selection and showcase its versatility across three varied prediction tasks: 1) classification of Week 3-4 Southern California summer temperatures; 2) regional regression of Month 1 midwestern U.S. summer temperatures; and 3) classification of Month 1-2 North Atlantic wintertime upper atmospheric winds. The AI-informed analogs outperform traditional analog forecasting approaches, as well as climatology and persistence baselines, for deterministic and probabilistic skill metrics on both climate model and reanalysis data. We find the analog ensembles built using the AI-informed approach also produce better predictions of temperature extremes and improve representation of forecast uncertainty. Finally, by using an interpretable-AI framework, we analyze the learned masks of weights to better understand S2S sources of predictability.
+ oai:arXiv.org:2506.14022v2
+ physics.ao-ph
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xiaoyao Yu, Talal Rahwan, Tao Jia
+ Jacob B. Landsberg, Elizabeth A. Barnes, Matthew Newman
- Benchmarking data encoding methods in Quantum Machine Learning
- https://arxiv.org/abs/2505.14295
- arXiv:2505.14295v2 Announce Type: replace-cross
-Abstract: Data encoding plays a fundamental and distinctive role in Quantum Machine Learning (QML). While classical approaches process data directly as vectors, QML may require transforming classical data into quantum states through encoding circuits, known as quantum feature maps or quantum embeddings. This step leverages the inherently high-dimensional and non-linear nature of Hilbert space, enabling more efficient data separation in complex feature spaces that may be inaccessible to classical methods. This encoding part significantly affects the performance of the QML model, so it is important to choose the right encoding method for the dataset to be encoded. However, this choice is generally arbitrary, since there is no "universal" rule for knowing which encoding to choose based on a specific set of data. There are currently a variety of encoding methods using different quantum logic gates. We studied the most commonly used types of encoding methods and benchmarked them using different datasets.
- oai:arXiv.org:2505.14295v2
+ Efficient Gate Reordering for Distributed Quantum Compiling in Data Centers
+ https://arxiv.org/abs/2507.01090
+ arXiv:2507.01090v2 Announce Type: replace-cross
+Abstract: Just as classical computing relies on distributed systems, the quantum computing era requires new kinds of infrastructure and software tools. Quantum networks will become the backbone of hybrid, quantum-augmented data centers, in which quantum algorithms are distributed over a local network of quantum processing units (QPUs) interconnected via shared entanglement. In this context, it is crucial to develop methods and software that minimize the number of inter-QPU communications. Here we describe key features of the quantum compiler araQne, which is designed to minimize distribution cost, measured by the number of entangled pairs required to distribute a monolithic quantum circuit using gate teleportation protocols. We establish the crucial role played by circuit reordering strategies, which strongly reduce the distribution cost compared to a baseline approach.
+ oai:arXiv.org:2507.01090v2quant-ph
- cs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.DC
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Orlane Zang, Gr\'egoire Barru\'e, Tony Quertier
+ Riccardo Mengoni, Walter Nadalin, Mathys Rennela, Jimmy Rotureau, Tom Darras, Julien Laurat, Eleni Diamanti, Ioannis Lavdas
- Structured quantum learning via em algorithm for Boltzmann machines
- https://arxiv.org/abs/2507.21569
- arXiv:2507.21569v2 Announce Type: replace-cross
-Abstract: Quantum Boltzmann machines (QBMs) are generative models with potential advantages in quantum machine learning, yet their training is fundamentally limited by the barren plateau problem, where gradients vanish exponentially with system size. We introduce a quantum version of the em algorithm, an information-geometric generalization of the classical Expectation-Maximization method, which circumvents gradient-based optimization on non-convex functions. Implemented on a semi-quantum restricted Boltzmann machine (sqRBM) -- a hybrid architecture with quantum effects confined to the hidden layer -- our method achieves stable learning and outperforms gradient descent on multiple benchmark datasets. These results establish a structured and scalable alternative to gradient-based training in QML, offering a pathway to mitigate barren plateaus and enhance quantum generative modeling.
- oai:arXiv.org:2507.21569v2
- quant-ph
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Long-Duration Station-Keeping Strategy for Cislunar Spacecraft Formations
+ https://arxiv.org/abs/2507.19620
+ arXiv:2507.19620v2 Announce Type: replace-cross
+Abstract: This paper demonstrates a novel guidance and control strategy for cislunar near-rectilinear halo orbit formation-keeping applied to high-fidelity dynamics. Bounded relative motion is constructed about long-duration ephemeris trajectories with osculating invariant circles to form quasi-periodic relative orbits. State-of-the-art absolute control strategies are paired with a simple and effective relative control feedback law. Finally, a control barrier function is implemented to ensure recursively passively-safe bounded relative motion under feedback in the presence of possible missed maneuver events for the duration of the formation flight. The strategy is verified in high-fidelity simulation environments through Monte Carlo trials.
+ oai:arXiv.org:2507.19620v2
+ math.OC
+ cs.SY
+ eess.SY
+ math.DS
+ Fri, 12 Dec 2025 00:00:00 -0500replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Takeshi Kimura, Kohtaro Kato, Masahito Hayashi
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ethan Foss, Yuji Takubo, Simone D'Amico
- Diffusion Secant Alignment for Score-Based Density Ratio Estimation
- https://arxiv.org/abs/2509.04852
- arXiv:2509.04852v3 Announce Type: replace-cross
-Abstract: Estimating density ratios has become increasingly important with the recent rise of score-based and diffusion-inspired methods. However, current tangent-based approaches rely on a high-variance learning objective, which leads to unstable training and costly numerical integration during inference. We propose \textit{Interval-annealed Secant Alignment Density Ratio Estimation (ISA-DRE)}, a score-based framework along diffusion interpolants that replaces the instantaneous tangent with its interval integral, the secant, as the learning target. We show theoretically that the secant is a provably lower variance and smoother target for neural approximation, and also a strictly more general representation that contains the tangent as the infinitesimal limit. To make secant learning feasible, we introduce the \textit{Secant Alignment Identity (SAI)} to enforce self consistency between secant and tangent representations, and \textit{Contraction Interval Annealing (CIA)} to ensure stable convergence. Empirically, this stability-first formulation produces high efficiency and accuracy. ISA-DRE achieves comparable or superior results with fewer function evaluations, demonstrating robustness under large distribution discrepancies and effectively mitigating the density-chasm problem.
- oai:arXiv.org:2509.04852v3
+ Trustworthy scientific inference with generative models
+ https://arxiv.org/abs/2508.02602
+ arXiv:2508.02602v2 Announce Type: replace-cross
+Abstract: Generative artificial intelligence (AI) excels at producing complex data structures (text, images, videos) by learning patterns from training examples. Across scientific disciplines, researchers are now applying generative models to "inverse problems" to directly predict hidden parameters from observed data along with measures of uncertainty. While these predictive or posterior-based methods can handle intractable likelihoods and large-scale studies, they can also produce biased or overconfident conclusions even without model misspecifications. We present a solution with Frequentist-Bayes (FreB), a mathematically rigorous protocol that reshapes AI-generated posterior probability distributions into (locally valid) confidence regions that consistently include true parameters with the expected probability, while achieving minimum size when training and target data align. We demonstrate FreB's effectiveness by tackling diverse case studies in the physical sciences: identifying unknown sources under dataset shift, reconciling competing theoretical models, and mitigating selection bias and systematics in observational studies. By providing validity guarantees with interpretable diagnostics, FreB enables trustworthy scientific inference across fields where direct likelihood evaluation remains impossible or prohibitively expensive.
+ oai:arXiv.org:2508.02602v2stat.ML
+ astro-ph.IMcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ stat.AP
+ stat.ME
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://creativecommons.org/licenses/by/4.0/
- Wei Chen, Shigui Li, Jiacheng Li, Jian Xu, Zhiqi Lin, Junmei Yang, Delu Zeng, John Paisley, Qibin Zhao
+ James Carzon, Luca Masserano, Joshua D. Ingram, Alex Shen, Antonio Carlos Herling Ribeiro Junior, Tommaso Dorigo, Michele Doro, Joshua S. Speagle, Rafael Izbicki, Ann B. Lee
- Next-Generation Reservoir Computing for Dynamical Inference
- https://arxiv.org/abs/2509.11338
- arXiv:2509.11338v2 Announce Type: replace-cross
-Abstract: We present a simple and scalable implementation of next-generation reservoir computing (NGRC) for modeling dynamical systems from time-series data. The method uses a pseudorandom nonlinear projection of time-delay embedded inputs, allowing the feature-space dimension to be chosen independently of the observation size and offering a flexible alternative to polynomial-based NGRC projections. We demonstrate the approach on benchmark tasks, including attractor reconstruction and bifurcation diagram estimation, using partial and noisy measurements. We further show that small amounts of measurement noise during training act as an effective regularizer, improving long-term autonomous stability compared to standard regression alone. Across all tests, the models remain stable over long rollouts and generalize beyond the training data. The framework offers explicit control of system state during prediction, and these properties make NGRC a natural candidate for applications such as surrogate modeling and digital-twin applications.
- oai:arXiv.org:2509.11338v2
- stat.ML
+ Bellman Optimality of Average-Reward Robust Markov Decision Processes with a Constant Gain
+ https://arxiv.org/abs/2509.14203
+ arXiv:2509.14203v3 Announce Type: replace-cross
+Abstract: Learning and optimal control under robust Markov decision processes (MDPs) have received increasing attention, yet most existing theory, algorithms, and applications focus on finite-horizon or discounted models. Long-run average-reward formulations, while natural in many operations research and management contexts, remain underexplored. This is primarily because the dynamic programming foundations are technically challenging and only partially understood, with several fundamental questions remaining open. This paper steps toward a general framework for average-reward robust MDPs by analyzing the constant-gain setting. We study the average-reward robust control problem with possible information asymmetries between the controller and an S-rectangular adversary. Our analysis centers on the constant-gain robust Bellman equation, examining both the existence of solutions and their relationship to the optimal average reward. Specifically, we identify when solutions to the robust Bellman equation characterize the optimal average reward and stationary policies, and we provide one-sided weak communication conditions ensuring solutions' existence. These findings expand the dynamic programming theory for average-reward robust MDPs and lay a foundation for robust dynamic decision making under long-run average criteria in operational environments.
+ oai:arXiv.org:2509.14203v3
+ math.OCcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Rok Cestnik, Erik A. Martens
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shengbo Wang, Nian Si
- DeepMech: A Machine Learning Framework for Chemical Reaction Mechanism Prediction
- https://arxiv.org/abs/2509.15872
- arXiv:2509.15872v2 Announce Type: replace-cross
-Abstract: Prediction of complete step-by-step chemical reaction mechanisms (CRMs) remains a major challenge. Whereas the traditional approaches in CRM tasks rely on expert-driven experiments or costly quantum chemical computations, contemporary deep learning (DL) alternatives ignore key intermediates and mechanistic steps and often suffer from hallucinations. We present DeepMech, an interpretable graph-based DL framework employing atom- and bond-level attention, guided by generalized templates of mechanistic operations (TMOps), to generate CRMs. Trained on our curated ReactMech dataset (~30K CRMs with 100K atom-mapped and mass-balanced elementary steps), DeepMech achieves 98.98+/-0.12% accuracy in predicting elementary steps and 95.94+/-0.21% in complete CRM tasks, besides maintaining high fidelity even in out-of-distribution scenarios as well as in predicting side and/or byproducts. Extension to multistep CRMs relevant to prebiotic chemistry, demonstrates the ability of DeepMech in effectively reconstructing 2 pathways from simple primordial substrates to complex biomolecules such as serine and aldopentose. Attention analysis identifies reactive atoms/bonds in line with chemical intuition, rendering our model interpretable and suitable for reaction design.
- oai:arXiv.org:2509.15872v2
- physics.chem-ph
- cs.AI
- cs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Stokes' theorem as an entropy-extremizing duality
+ https://arxiv.org/abs/2509.16386
+ arXiv:2509.16386v2 Announce Type: replace-cross
+Abstract: Given a manifold $\mathcal{M} \subset \mathbb{R}^n$, we consider all codimension-1 submanifolds of $\mathcal{M}$ that satisfy the generalized Stokes' theorem and show that $\partial\mathcal{M}$ uniquely maximizes the associated entropy functional. This provides an information theoretic characterization of the duality expressed by Stokes' theorem, whereby a manifold's boundary is its 'least informative' subset satisfying the Stokes relation.
+ oai:arXiv.org:2509.16386v2
+ math.DG
+ cs.IT
+ math.FA
+ math.IT
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://creativecommons.org/licenses/by/4.0/
- Manajit Das, Ajnabiul Hoque, Mayank Baranwal, Raghavan B. Sunoj
+ Daniel Lazarev
- Forecasting the Future with Yesterday's Climate: Temperature Bias in AI Weather and Climate Models
- https://arxiv.org/abs/2509.22359
- arXiv:2509.22359v2 Announce Type: replace-cross
-Abstract: AI-based climate and weather models have rapidly gained popularity, providing faster forecasts with skill that can match or even surpass that of traditional dynamical models. Despite this success, these models face a key challenge: predicting future climates while being trained only with historical data. In this study, we investigate this issue by analyzing boreal winter land temperature biases in AI weather and climate models. We examine two weather models, FourCastNet V2 Small (FourCastNet) and Pangu Weather (Pangu), evaluating their predictions for 2020-2025 and Ai2 Climate Emulator version 2 (ACE2) for 1996-2010. These time periods lie outside of the respective models' training sets and are significantly more recent than the bulk of their training data, allowing us to assess how well the models generalize to new, i.e. more modern, conditions. We find that all three models produce cold-biased mean temperatures, resembling climates from 15-20 years earlier than the period they are predicting. In some regions, like the Eastern U.S., the predictions resemble climates from as much as 20-30 years earlier. Further analysis shows that FourCastNet's and Pangu's cold bias is strongest in the hottest predicted temperatures, indicating limited training exposure to modern extreme heat events. In contrast, ACE2's bias is more evenly distributed but largest in regions, seasons, and parts of the temperature distribution where climate change has been most pronounced. These findings underscore the challenge of training AI models exclusively on historical data and highlight the need to account for such biases when applying them to future climate prediction.
- oai:arXiv.org:2509.22359v2
- physics.ao-ph
+ Optimizing the non-Clifford-count in unitary synthesis using Reinforcement Learning
+ https://arxiv.org/abs/2509.21709
+ arXiv:2509.21709v2 Announce Type: replace-cross
+Abstract: In this paper we study the potential of using reinforcement learning (RL) in order to synthesize quantum circuits, while optimizing the T-count and CS-count, of unitaries that are exactly implementable by the Clifford+T and Clifford+CS gate sets, respectively. We have designed our RL framework to work with channel representation of unitaries, that enables us to perform matrix operations efficiently, using integers only. We have also incorporated pruning heuristics and a canonicalization of operators, in order to reduce the search complexity. As a result, compared to previous works, we are able to implement significantly larger unitaries, in less time, with much better success rate and improvement factor. Our results for Clifford+T synthesis on two qubit unitaries achieve close-to-optimal decompositions for up to 100 T gates, 5 times more than previous RL algorithms and to the best of our knowledge, the largest instances achieved with any method to date. Our RL algorithm is able to recover previously-known optimal linear complexity algorithm for T-count-optimal decomposition of 1 qubit unitaries. We illustrate significant reduction in the asymptotic T-count estimate of important primitives like controlled cyclic shift (43%), controlled adder (14.3%) and multiplier (14%), without adding any extra ancilla. For 2-qubit Clifford+CS unitaries, our algorithm achieves a linear complexity, something that could only be accomplished by a previous algorithm using SO(6) representation.
+ oai:arXiv.org:2509.21709v2
+ quant-phcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jacob B. Landsberg, Elizabeth A. Barnes
+ http://creativecommons.org/licenses/by/4.0/
+ David Kremer, Ali Javadi-Abhari, Priyanka Mukhopadhyay
- Good quantum codes with addressable and parallelizable transversal non-Clifford gates
- https://arxiv.org/abs/2510.19809
- arXiv:2510.19809v2 Announce Type: replace-cross
-Abstract: In this work, we prove that for any $m>1$, there exists a family of good qudit quantum codes supporting transversal logical $\mathsf{C}^{m-1}\mathsf{Z}$ gates that can address specified logical qudits and be largely executed in parallel. Building on the family of good quantum error-correcting codes presented in He et al. (2025), which support addressable and transversal logical $\mathsf{CCZ}$ gates, we extend their framework and show how to perform large sets of gates in parallel. The construction relies on the classical algebraic geometry codes of Stichtenoth (IEEE Trans. Inf. Theory, 2006). Our results lead to a substantial reduction in the depth overhead of multi-control-$Z$ circuits. In particular, we show that the minimal depth of any logical $\mathsf{C}^{m-1}\mathsf{Z}$ circuit involving qudits from $m$ distinct code blocks is upper bounded by $O(k^{m-1})$, where $k$ is the code dimension. While this overhead is optimal for dense $\mathsf{C}^{m-1}\mathsf{Z}$ circuits, for sparse circuits we discuss how the depth overhead can be significantly reduced by exploiting the structure of the quantum code.
- oai:arXiv.org:2510.19809v2
- quant-ph
- cs.IT
- math.IT
- Thu, 11 Dec 2025 00:00:00 -0500
+ Token Is All You Price
+ https://arxiv.org/abs/2510.09859
+ arXiv:2510.09859v3 Announce Type: replace-cross
+Abstract: We build a mechanism design framework where a platform designs GenAI models to screen users who obtain instrumental value from the generated conversation and privately differ in their preference for latency. We show that the revenue-optimal mechanism is simple: deploy a single aligned (user-optimal) model and use token cap as the only instrument to screen the user. The design decouples model training from pricing, is readily implemented with token metering, and mitigates misalignment pressures.
+ oai:arXiv.org:2510.09859v3
+ econ.TH
+ cs.AI
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://creativecommons.org/licenses/by/4.0/
- Virgile Gu\'emard
+ Weijie Zhong
- Generalized Guarantees for Variational Inference in the Presence of Even and Elliptical Symmetry
- https://arxiv.org/abs/2511.01064
- arXiv:2511.01064v2 Announce Type: replace-cross
-Abstract: We extend several recent results providing symmetry-based guarantees for variational inference (VI) with location-scale families. VI approximates a target density $p$ by the best match $q^*$ in a family $Q$ of tractable distributions that in general does not contain $p$. It is known that VI can recover key properties of $p$, such as its mean and correlation matrix, when $p$ and $Q$ exhibit certain symmetries and $q^*$ is found by minimizing the reverse Kullback-Leibler divergence. We extend these guarantees in two important directions. First, we provide symmetry-based guarantees for $f$-divergences, a broad class that includes the reverse and forward Kullback-Leibler divergences and the $\alpha$-divergences. We highlight properties specific to the reverse Kullback-Leibler divergence under which we obtain our strongest guarantees. Second, we obtain further guarantees for VI when the target density $p$ exhibits even and elliptical symmetries in some but not all of its coordinates. These partial symmetries arise naturally in Bayesian hierarchical models, where the prior induces a challenging geometry but still possesses axes of symmetry. We illustrate these theoretical results in a number of experimental settings.
- oai:arXiv.org:2511.01064v2
- stat.ML
+ Same model, better performance: the impact of shuffling on DNA Language Models benchmarking
+ https://arxiv.org/abs/2510.12617
+ arXiv:2510.12617v2 Announce Type: replace-cross
+Abstract: Large Language Models are increasingly popular in genomics due to their potential to decode complex biological sequences. Hence, researchers require a standardized benchmark to evaluate DNA Language Models (DNA LMs) capabilities. However, evaluating DNA LMs is a complex task that intersects genomic's domain-specific challenges and machine learning methodologies, where seemingly minor implementation details can significantly compromise benchmark validity. We demonstrate this through BEND (Benchmarking DNA Language Models), where hardware-dependent hyperparameters -- number of data loading workers and buffer sizes -- create spurious performance variations of up to 4% for identical models. The problem stems from inadequate data shuffling interacting with domain specific data characteristics. Experiments with three DNA language models (HyenaDNA, DNABERT-2, ResNet-LM) show these artifacts affect both absolute performance and relative model rankings. We propose a simple solution: pre-shuffling data before storage eliminates hardware dependencies while maintaining efficiency. This work highlights how standard ML practices can interact unexpectedly with domain-specific data characteristics, with broader implications for benchmark design in specialized domains.
+ oai:arXiv.org:2510.12617v2
+ q-bio.GNcs.LG
- stat.CO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://creativecommons.org/licenses/by/4.0/
- Charles C. Margossian, Lawrence K. Saul
+ Davide Greco, Konrad Rawlik
- Statistical Properties of Rectified Flow
- https://arxiv.org/abs/2511.03193
- arXiv:2511.03193v3 Announce Type: replace-cross
-Abstract: Rectified flow (Liu et al., 2022; Liu, 2022; Wu et al., 2023) is a method for defining a transport map between two distributions, and enjoys popularity in machine learning, although theoretical results supporting the validity of these methods are scant. The rectified flow can be regarded as an approximation to optimal transport, but in contrast to other transport methods that require optimization over a function space, computing the rectified flow only requires standard statistical tools such as regression or density estimation, which we leverage to develop empirical versions of transport maps. We study some structural properties of the rectified flow, including existence, uniqueness, and regularity, as well as the related statistical properties, such as rates of convergence and central limit theorems, for some selected estimators. To do so, we analyze the bounded and unbounded cases separately as each presents unique challenges. In both cases, we are able to establish convergence at faster rates than those for the usual nonparametric regression and density estimation.
- oai:arXiv.org:2511.03193v3
- math.ST
- cs.LG
- stat.ME
+ Distributional Shrinkage I: Universal Denoisers in Multi-Dimensions
+ https://arxiv.org/abs/2511.09500
+ arXiv:2511.09500v2 Announce Type: replace-cross
+Abstract: We revisit the problem of denoising from noisy measurements where only the noise level is known, not the noise distribution. In multi-dimensions, independent noise $Z$ corrupts the signal $X$, resulting in the noisy measurement $Y = X + \sigma Z$, where $\sigma \in (0, 1)$ is a known noise level. Our goal is to recover the underlying signal distribution $P_X$ from denoising $P_Y$. We propose and analyze universal denoisers that are agnostic to a wide range of signal and noise distributions. Our distributional denoisers offer order-of-magnitude improvements over the Bayes-optimal denoiser derived from Tweedie's formula, if the focus is on the entire distribution $P_X$ rather than on individual realizations of $X$. Our denoisers shrink $P_Y$ toward $P_X$ optimally, achieving $O(\sigma^4)$ and $O(\sigma^6)$ accuracy in matching generalized moments and density functions. Inspired by optimal transport theory, the proposed denoisers are optimal in approximating the Monge-Amp\`ere equation with higher-order accuracy, and can be implemented efficiently via score matching.
+ Let $q$ represent the density of $P_Y$; for optimal distributional denoising, we recommend replacing the Bayes-optimal denoiser, \[ \mathbf{T}^*(y) = y + \sigma^2 \nabla \log q(y), \] with denoisers exhibiting less aggressive distributional shrinkage, \[ \mathbf{T}_1(y) = y + \frac{\sigma^2}{2} \nabla \log q(y), \] \[ \mathbf{T}_2(y) = y + \frac{\sigma^2}{2} \nabla \log q(y) - \frac{\sigma^4}{8} \nabla \left( \frac{1}{2} \| \nabla \log q(y) \|^2 + \nabla \cdot \nabla \log q(y) \right) . \]
+ oai:arXiv.org:2511.09500v2stat.ML
+ cs.LG
+ math.STstat.TH
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Gonzalo Mena, Arun Kumar Kuchibhotla, Larry Wasserman
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Tengyuan Liang
- Function-on-Function Bayesian Optimization
- https://arxiv.org/abs/2511.12783
- arXiv:2511.12783v2 Announce Type: replace-cross
-Abstract: Bayesian optimization (BO) has been widely used to optimize expensive and gradient-free objective functions across various domains. However, existing BO methods have not addressed the objective where both inputs and outputs are functions, which increasingly arise in complex systems as advanced sensing technologies. To fill this gap, we propose a novel function-on-function Bayesian optimization (FFBO) framework. Specifically, we first introduce a function-on-function Gaussian process (FFGP) model with a separable operator-valued kernel to capture the correlations between function-valued inputs and outputs. Compared to existing Gaussian process models, FFGP is modeled directly in the function space. Based on FFGP, we define a scalar upper confidence bound (UCB) acquisition function using a weighted operator-based scalarization strategy. Then, a scalable functional gradient ascent algorithm (FGA) is developed to efficiently identify the optimal function-valued input. We further analyze the theoretical properties of the proposed method. Extensive experiments on synthetic and real-world data demonstrate the superior performance of FFBO over existing approaches.
- oai:arXiv.org:2511.12783v2
- stat.ML
+ Variational analysis of determinantal varieties
+ https://arxiv.org/abs/2511.22613
+ arXiv:2511.22613v2 Announce Type: replace-cross
+Abstract: Determinantal varieties -- the sets of bounded-rank matrices or tensors -- have attracted growing interest in low-rank optimization. The tangent cone to low-rank sets is widely studied and underpins a range of geometric methods. The second-order geometry, which encodes curvature information, is more intricate. In this work, we develop a unified framework to derive explicit formulas for both first- and second-order tangent sets to various low-rank sets, including low-rank matrices, tensors, symmetric matrices, and positive semidefinite matrices. The framework also accommodates the intersection of a low-rank set and another set satisfying mild assumptions, thereby yielding a tangent intersection rule. Through the lens of tangent sets, we establish a necessary and sufficient condition under which a nonsmooth problem and its smooth parameterization share equivalent second-order stationary points. Moreover, we exploit tangent sets to characterize optimality conditions for low-rank optimization and prove that verifying second-order optimality is NP-hard. In a separate line of analysis, we investigate variational geometry of the graph of the normal cone to matrix varieties, deriving the explicit Bouligand tangent cone, Fr\'echet and Mordukhovich normal cones to the graph. These results are further applied to develop optimality conditions for low-rank bilevel programs.
+ oai:arXiv.org:2511.22613v2
+ math.OC
+ cs.AIcs.LG
- Thu, 11 Dec 2025 00:00:00 -0500
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jingru Huang, Haijie Xu, Manrui Jiang, Chen Zhang
+ Yan Yang, Bin Gao, Ya-xiang Yuan
- Solving a Research Problem in Mathematical Statistics with AI Assistance
- https://arxiv.org/abs/2511.18828
- arXiv:2511.18828v2 Announce Type: replace-cross
-Abstract: Over the last few months, AI models including large language models have improved greatly. There are now several documented examples where they have helped professional mathematical scientists prove new results, sometimes even helping resolve known open problems. In this short note, we add another example to the list, by documenting how we were able to solve a previously unsolved research problem in robust mathematical statistics with crucial help from GPT-5. Our problem concerns robust density estimation, where the observations are perturbed by Wasserstein-bounded contaminations. In a previous preprint (Chao and Dobriban, 2023, arxiv:2308.01853v2), we have obtained upper and lower bounds on the minimax optimal estimation error; which were, however, not sharp.
- Starting in October 2025, making significant use of GPT-5 Pro, we were able to derive the minimax optimal error rate (reported in version 3 of the above arxiv preprint). GPT-5 provided crucial help along the way, including by suggesting calculations that we did not think of, and techniques that were not familiar to us, such as the dynamic Benamou-Brenier formulation, for key steps in the analysis. Working with GPT-5 took a few weeks of effort, and we estimate that it could have taken several months to get the same results otherwise. At the same time, there are still areas where working with GPT-5 was challenging: it sometimes provided incorrect references, and glossed over details that sometimes took days of work to fill in. We outline our workflow and steps taken to mitigate issues. Overall, our work can serve as additional documentation for a new age of human-AI collaborative work in mathematical science.
- oai:arXiv.org:2511.18828v2
- math.ST
+ Hierarchical Molecular Language Models (HMLMs)
+ https://arxiv.org/abs/2512.00696
+ arXiv:2512.00696v2 Announce Type: replace-cross
+Abstract: Artificial intelligence (AI) is reshaping computational and network biology by enabling new approaches to decode cellular communication networks. We introduce Hierarchical Molecular Language Models (HMLMs), a novel framework that models cellular signaling as a specialized molecular language, where signaling molecules function as tokens, protein interactions define syntax, and functional consequences constitute semantics. HMLMs employ a transformer-based architecture adapted to accommodate graph-structured signaling networks through information transducers, mathematical entities that capture how molecules receive, process, and transmit signals. The architecture integrates multi-modal data sources across molecular, pathway, and cellular scales through hierarchical attention mechanisms and scale-bridging operators that enable information flow across biological hierarchies. Applied to a complex network of cardiac fibroblast signaling, HMLMs outperformed traditional approaches in temporal dynamics prediction, particularly under sparse sampling conditions. Attention-based analysis revealed biologically meaningful crosstalk patterns, including previously uncharacterized interactions between signaling pathways. By bridging molecular mechanisms with cellular phenotypes through AI-driven molecular language representation, HMLMs establish a foundation for biology-oriented large language models (LLMs) that could be pre-trained on comprehensive pathway datasets and applied across diverse signaling systems and tissues, advancing precision medicine and therapeutic discovery.
+ oai:arXiv.org:2512.00696v2
+ q-bio.MNcs.AI
- cs.LG
- stat.TH
- Thu, 11 Dec 2025 00:00:00 -0500
+ cs.ET
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://creativecommons.org/licenses/by/4.0/
- Edgar Dobriban
+ Hasi Hays, Yue Yu, William J. Richardson
- Data-Driven Learnability Transition of Measurement-Induced Entanglement
- https://arxiv.org/abs/2512.01317
- arXiv:2512.01317v2 Announce Type: replace-cross
-Abstract: Measurement-induced entanglement (MIE) captures how local measurements generate long-range quantum correlations and drive dynamical phase transitions in many-body systems. Yet estimating MIE experimentally remains challenging: direct evaluation requires extensive post-selection over measurement outcomes, raising the question of whether MIE is accessible with only polynomial resources. We address this challenge by reframing MIE detection as a data-driven learning problem that assumes no prior knowledge of state preparation. Using measurement records alone, we train a neural network in a self-supervised manner to predict the uncertainty metric for MIE--the gap between upper and lower bounds of the average post-measurement bipartite entanglement. Applied to random circuits with one-dimensional all-to-all connectivity and two-dimensional nearest-neighbor coupling, our method reveals a learnability transition with increasing circuit depth: below a threshold, the uncertainty is small and decreases with polynomial measurement data and model parameters, while above it the uncertainty remains large despite increasing resources. We further verify this transition experimentally on current noisy quantum devices, demonstrating its robustness to realistic noise. These results highlight the power of data-driven approaches for learning MIE and delineate the practical limits of its classical learnability.
- oai:arXiv.org:2512.01317v2
+ Quantum Encrypted Control of Networked Systems
+ https://arxiv.org/abs/2512.03434
+ arXiv:2512.03434v2 Announce Type: replace-cross
+Abstract: Encrypted control has been extensively studied to ensure the confidentiality of system states and control inputs for networked control systems. This paper presents a computationally efficient encrypted control framework for networked systems enabled by quantum communication. A quantum channel between sensors and actuators is used to generate identical secret keys, whose security is further enhanced through quantum key distribution. These keys enable lightweight encryption and decryption while preserving confidentiality and control accuracy. We develop a novel encryption-decryption architecture for state-feedback control of linear systems based on quantum keys, and characterize the impact of quantum state errors on closed-loop stability. In particular, we establish the existence of a critical threshold on intrinsic quantum noise below which stability is guaranteed. In contrast to classical encrypted control schemes, which may collapse under a single key-bit error, the proposed quantum encrypted control exhibits strong robustness to key imperfections. We further adopt quantization techniques to address the scenarios with limited communication bits in practical situations, and implement privacy protection for quantum keys based on a stochastic quantizer. These results demonstrate that integrating quantum technologies into control systems in a nontrivial and principled manner, even at their current level of maturity, can yield substantial performance gains in reducing computational complexity and improving resilience to key errors while ensuring security against multiple eavesdropping sources.
+ oai:arXiv.org:2512.03434v2quant-ph
- cond-mat.dis-nn
+ cs.SY
+ eess.SY
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zihao Ren, Daniel Quevedo, Salah Sukkarieh, Guodong Shi
+
+
+ Beyond Lux thresholds: a systematic pipeline for classifying biologically relevant light contexts from wearable data
+ https://arxiv.org/abs/2512.06181
+ arXiv:2512.06181v2 Announce Type: replace-cross
+Abstract: Background: Wearable spectrometers enable field quantification of biologically relevant light, yet reproducible pipelines for contextual classification remain under-specified.
+ Objective: To establish and validate a subject-wise evaluated, reproducible pipeline and actionable design rules for classifying natural vs. artificial light from wearable spectral data.
+ Methods: We analysed ActLumus recordings from 26 participants, each monitored for at least 7 days at 10-second sampling, paired with daily exposure diaries. The pipeline fixes the sequence: domain selection, log-base-10 transform, L2 normalisation excluding total intensity (to avoid brightness shortcuts), hour-level medoid aggregation, sine/cosine hour encoding, and MLP classifier, evaluated under participant-wise cross-validation.
+ Results: The proposed sequence consistently achieved high performance on the primary task, with representative configurations reaching AUC = 0.938 (accuracy 88%) for natural vs. artificial classification on the held-out subject split. In contrast, indoor vs. outdoor classification remained at feasibility level due to spectral overlap and class imbalance (best AUC approximately 0.75; majority-class collapse without contextual sensors). Threshold baselines were insufficient on our data, supporting the need for spectral-temporal modelling beyond illuminance cut-offs.
+ Conclusions: We provide a reproducible, auditable baseline pipeline and design rules for contextual light classification under subject-wise generalisation. All code, configuration files, and derived artefacts will be openly archived (GitHub + Zenodo DOI) to support reuse and benchmarking.
+ oai:arXiv.org:2512.06181v2
+ q-bio.QM
+ cs.LG
+ Fri, 12 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yanuo Zhou
+
+
+ Latency-Response Theory Model: Evaluating Large Language Models via Response Accuracy and Chain-of-Thought Length
+ https://arxiv.org/abs/2512.07019
+ arXiv:2512.07019v2 Announce Type: replace-cross
+Abstract: The proliferation of Large Language Models (LLMs) necessitates valid evaluation methods to guide downstream applications and actionable future improvements. The Item Response Theory (IRT) has recently emerged as a promising framework for evaluating LLMs via their response accuracy. Beyond simple response accuracy, LLMs' chain of thought (CoT) lengths serve as a vital indicator of their reasoning ability. To leverage the CoT length information to assist the evaluation of LLMs, we propose Latency-Response Theory (LaRT) to jointly model the response accuracy and CoT length by introducing the latent ability, latent speed, and a key correlation parameter between them. We derive an efficient estimation algorithm and establish rigorous identifiability results for the population parameters to ensure the statistical validity of estimation. Theoretical asymptotic analyses and simulation studies demonstrate LaRT's advantages over IRT in terms of higher estimation accuracy and shorter confidence intervals for latent traits. A key finding is that the asymptotic estimation precision of the latent ability under LaRT exceeds that of IRT whenever the latent ability and latent speed are correlated. We collect real responses from diverse LLMs on popular benchmark datasets. The application of LaRT reveals a strong negative correlation between the latent ability and latent speed in all benchmarks, with stronger correlation for more difficult benchmarks. This finding supports the intuition that higher reasoning ability correlates with slower speed and longer response latency. LaRT yields different LLM rankings than IRT and outperforms IRT across multiple key evaluation metrics including predictive power, item efficiency, ranking validity, and LLM evaluation efficiency. Code and data are available at https://github.com/Toby-X/Latency-Response-Theory-Model.
+ oai:arXiv.org:2512.07019v2
+ stat.MEcs.AI
- Thu, 11 Dec 2025 00:00:00 -0500
+ stat.AP
+ stat.ML
+ Fri, 12 Dec 2025 00:00:00 -0500replace-crosshttp://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Dongheng Qian, Jing Wang
+ Zhiyu Xu, Jia Liu, Yixin Wang, Yuqi Gu
- Stronger is not better: Better Augmentations in Contrastive Learning for Medical Image Segmentation
- https://arxiv.org/abs/2512.05992
- arXiv:2512.05992v2 Announce Type: replace-cross
-Abstract: Self-supervised contrastive learning is among the recent representation learning methods that have shown performance gains in several downstream tasks including semantic segmentation. This paper evaluates strong data augmentation, one of the most important components for self-supervised contrastive learning's improved performance. Strong data augmentation involves applying the composition of multiple augmentation techniques on images. Surprisingly, we find that the existing data augmentations do not always improve performance for semantic segmentation for medical images. We experiment with other augmentations that provide improved performance.
- oai:arXiv.org:2512.05992v2
- eess.IV
- cs.CV
- Thu, 11 Dec 2025 00:00:00 -0500
+ Functional Percolation: A Perspective on Criticality of Form and Function
+ https://arxiv.org/abs/2512.09317
+ arXiv:2512.09317v2 Announce Type: replace-cross
+Abstract: Understanding the physical constraints and minimal conditions that enable information processing in extended systems remains a central challenge across disciplines, from neuroscience and artificial intelligence to social and physical networks. Here we study how network connectivity both limits and enables information processing by analyzing random networks across the structural percolation transition. Using cascade-mediated dynamics as a minimal and universal mechanism for propagating state-dependent responses, we examine structural, functional, and information-theoretic observables as functions of mean degree in Erdos-Renyi networks. We find that the emergence of a giant connected component coincides with a sharp transition in realizable information processing: complex input-output response functions become accessible, functional diversity increases rapidly, output entropy rises, and directed information flow quantified by transfer entropy extends beyond local neighborhoods. These coincident transitions define a regime of functional percolation, referring to a sharp expansion of the space of realizable input-output functions at the structural percolation transition. Near criticality, networks exhibit a Pareto-optimal tradeoff between functional complexity and diversity, suggesting that percolation criticality provides a universal organizing principle for information processing in systems with local interactions and propagating influences.
+ oai:arXiv.org:2512.09317v2
+ physics.soc-ph
+ cond-mat.stat-mech
+ cs.AI
+ physics.comp-ph
+ Fri, 12 Dec 2025 00:00:00 -0500replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Azeez Idris, Abdurahman Ali Mohammed, Samuel Fanijo
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Galen J. Wilkerson
- Adversarial Barrier in Uniform Class Separation
- https://arxiv.org/abs/2512.08149
- arXiv:2512.08149v2 Announce Type: replace-cross
-Abstract: We identify a strong structural obstruction to Uniform Separation in constructive arithmetic. The mechanism is independent of semantic content; it emerges whenever two distinct evaluator predicates are sustained in parallel and inference remains uniformly representable in an extension of HA. Under these conditions, any putative Uniform Class Separation principle becomes a distinguished instance of a fixed point construction. The resulting limitation is stricter in scope than classical separation barriers (Baker; Rudich; Aaronson et al.) insofar as it constrains the logical form of uniform separation within HA, rather than limiting particular relativizing, naturalizing, or algebrizing techniques.
- oai:arXiv.org:2512.08149v2
- math.LO
- cs.CC
- cs.LO
- Thu, 11 Dec 2025 00:00:00 -0500
+ Meta-learning three-factor plasticity rules for structured credit assignment with sparse feedback
+ https://arxiv.org/abs/2512.09366
+ arXiv:2512.09366v2 Announce Type: replace-cross
+Abstract: Biological neural networks learn complex behaviors from sparse, delayed feedback using local synaptic plasticity, yet the mechanisms enabling structured credit assignment remain elusive. In contrast, artificial recurrent networks solving similar tasks typically rely on biologically implausible global learning rules or hand-crafted local updates. The space of local plasticity rules capable of supporting learning from delayed reinforcement remains largely unexplored. Here, we present a meta-learning framework that discovers local learning rules for structured credit assignment in recurrent networks trained with sparse feedback. Our approach interleaves local neo-Hebbian-like updates during task execution with an outer loop that optimizes plasticity parameters via \textbf{tangent-propagation through learning}. The resulting three-factor learning rules enable long-timescale credit assignment using only local information and delayed rewards, offering new insights into biologically grounded mechanisms for learning in recurrent circuits.
+ oai:arXiv.org:2512.09366v2
+ q-bio.NC
+ cond-mat.dis-nn
+ cs.LG
+ physics.bio-ph
+ Fri, 12 Dec 2025 00:00:00 -0500replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Milan Rosko
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Dimitra Maoutsa