diff --git "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml" "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml"
--- "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml"
+++ "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml"
@@ -7,1124 +7,2116 @@
http://www.rssboard.org/rss-specification
en-us
- Mon, 22 Dec 2025 05:00:08 +0000
+ Tue, 23 Dec 2025 05:00:17 +0000
+ rss-help@arxiv.org
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ Saturday
+ Sunday
- Do Generalized-Gamma Scale Mixtures of Normals Fit Large Image Data-Sets?
- https://arxiv.org/abs/2512.17038
- arXiv:2512.17038v1 Announce Type: new
-Abstract: A scale mixture of normals is a distribution formed by mixing a collection of normal distributions with fixed mean but different variances. A generalized gamma scale mixture draws the variances from a generalized gamma distribution. Generalized gamma scale mixtures of normals have been proposed as an attractive class of parametric priors for Bayesian inference in inverse imaging problems. Generalized gamma scale mixtures have two shape parameters, one that controls the behavior of the distribution about its mode, and the other that controls its tail decay. In this paper, we provide the first demonstration that the prior model is realistic for multiple large imaging data sets. We draw data from remote sensing, medical imaging, and image classification applications. We study the realism of the prior when applied to Fourier and wavelet (Haar and Gabor) transformations of the images, as well as to the coefficients produced by convolving the images against the filters used in the first layer of AlexNet, a popular convolutional neural network trained for image classification. We discuss data augmentation procedures that improve the fit of the model, procedures for identifying approximately exchangeable coefficients, and characterize the parameter regions that best describe the observed data sets. These regions are significantly broader than the region of primary focus in computational work. We show that this prior family provides a substantially better fit to each data set than any of the standard priors it contains. These include Gaussian, Laplace, $\ell_p$, and Student's $t$ priors. Finally, we identify cases where the prior is unrealistic and highlight characteristic features of images that suggest the model will fit poorly.
- oai:arXiv.org:2512.17038v1
+ A Critical Review of Monte Carlo Algorithms Balancing Performance and Probabilistic Accuracy with AI Augmented Framework
+ https://arxiv.org/abs/2512.17968
+ arXiv:2512.17968v1 Announce Type: new
+Abstract: Monte Carlo algorithms are a foundational pillar of modern computational science, yet their effective application hinges on a deep understanding of their performance trade offs. This paper presents a critical analysis of the evolution of Monte Carlo algorithms, focusing on the persistent tension between statistical efficiency and computational cost. We describe the historical development from the foundational Metropolis Hastings algorithm to contemporary methods like Hamiltonian Monte Carlo. A central emphasis of this survey is the rigorous discussion of time and space complexity, including upper, lower, and asymptotic tight bounds for each major algorithm class. We examine the specific motivations for developing these methods and the key theoretical and practical observations such as the introduction of gradient information and adaptive tuning in HMC that led to successively better solutions. Furthermore, we provide a justification framework that discusses explicit situations in which using one algorithm is demonstrably superior to another for the same problem. The paper concludes by assessing the profound significance and impact of these algorithms and detailing major current research challenges.
+ oai:arXiv.org:2512.17968v1
+ stat.CO
+ cs.AI
+ cs.CL
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ravi Prasad
+
+
+ Sampling from multimodal distributions with warm starts: Non-asymptotic bounds for the Reweighted Annealed Leap-Point Sampler
+ https://arxiv.org/abs/2512.17977
+ arXiv:2512.17977v1 Announce Type: new
+Abstract: Sampling from multimodal distributions is a central challenge in Bayesian inference and machine learning. In light of hardness results for sampling -- classical MCMC methods, even with tempering, can suffer from exponential mixing times -- a natural question is how to leverage additional information, such as a warm start point for each mode, to enable faster mixing across modes. To address this, we introduce Reweighted ALPS (Re-ALPS), a modified version of the Annealed Leap-Point Sampler (ALPS) that dispenses with the Gaussian approximation assumption. We prove the first polynomial-time bound that works in a general setting, under a natural assumption that each component contains significant mass relative to the others when tilted towards the corresponding warm start point. Similarly to ALPS, we define distributions tilted towards a mixture centered at the warm start points, and at the coldest level, use teleportation between warm start points to enable efficient mixing across modes. In contrast to ALPS, our method does not require Hessian information at the modes, but instead estimates component partition functions via Monte Carlo. This additional estimation step is crucial in allowing the algorithm to handle target distributions with more complex geometries besides approximate Gaussian. For the proof, we show convergence results for Markov processes when only part of the stationary distribution is well-mixing and estimation for partition functions for individual components of a mixture. We numerically evaluate our algorithm's mixing performance compared to ALPS on a mixture of heavy-tailed distributions.
+ oai:arXiv.org:2512.17977v1
+ stat.ML
+ cs.LG
+ math.PR
+ math.ST
+ stat.CO
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Holden Lee, Matheau Santana-Gijzen
+
+
+ Empirical parameterization of the Elo Rating System
+ https://arxiv.org/abs/2512.18013
+ arXiv:2512.18013v1 Announce Type: new
+Abstract: This study aims to provide a data-driven approach for empirically tuning and validating rating systems, focusing on the Elo system. Well-known rating frameworks, such as Elo, Glicko, TrueSkill systems, rely on parameters that are usually chosen based on probabilistic assumptions or conventions, and do not utilize game-specific data. To address this issue, we propose a methodology that learns optimal parameter values by maximizing the predictive accuracy of match outcomes. The proposed parameter-tuning framework is a generalizable method that can be extended to any rating system, even for multiplayer setups, through suitable modification of the parameter space. Implementation of the rating system on real and simulated gameplay data demonstrates the suitability of the data-driven rating system in modeling player performance.
+ oai:arXiv.org:2512.18013v1
+ stat.AP
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Brandon Marks, Yash Dave, Zixun Wang, Hannah Chung, Riya Patwa, Simon Cha, Michael Murphy, Alexander Strang
+ Shirsa Maitra, Tathagata Banerjee, Anushka De, Diganta Mukherjee, Tridib Mukherjee
- A systematic assessment of Large Language Models for constructing two-level fractional factorial designs
- https://arxiv.org/abs/2512.17113
- arXiv:2512.17113v1 Announce Type: new
-Abstract: Two-level fractional factorial designs permit the study of multiple factors using a limited number of runs. Traditionally, these designs are obtained from catalogs available in standard textbooks or statistical software. However, modern Large Language Models (LLMs) can now produce two-level fractional factorial designs, but the quality of these designs has not been previously assessed. In this paper, we perform a systematic evaluation of two popular classes of LLMs, namely GPT and Gemini models, to construct two-level fractional factorial designs with 8, 16, and 32 runs, and 4 to 26 factors. To this end, we use prompting techniques to develop a high-quality set of design construction tasks for the LLMs. We compare the designs obtained by the LLMs with the best-known designs in terms of resolution and minimum aberration criteria. We show that the LLMs can effectively construct optimal 8-, 16-, and 32-run designs with up to eight factors.
- oai:arXiv.org:2512.17113v1
+ Deep Gaussian Processes with Gradients
+ https://arxiv.org/abs/2512.18066
+ arXiv:2512.18066v1 Announce Type: new
+Abstract: Deep Gaussian processes (DGPs) are popular surrogate models for complex nonstationary computer experiments. DGPs use one or more latent Gaussian processes (GPs) to warp the input space into a plausibly stationary regime, then use typical GP regression on the warped domain. While this composition of GPs is conceptually straightforward, the functional nature of the multi-dimensional latent warping makes Bayesian posterior inference challenging. Traditional GPs with smooth kernels are naturally suited for the integration of gradient information, but the integration of gradients within a DGP presents new challenges and has yet to be explored. We propose a novel and comprehensive Bayesian framework for DGPs with gradients that facilitates both gradient-enhancement and gradient posterior predictive distributions. We provide open-source software in the "deepgp" package on CRAN, with optional Vecchia approximation to circumvent cubic computational bottlenecks. We benchmark our DGPs with gradients on a variety of nonstationary simulations, showing improvement over both GPs with gradients and conventional DGPs.
+ oai:arXiv.org:2512.18066v1
+ stat.ME
- stat.CO
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Alan R. Vazquez, Kilian M. Rother, Marco V. Charles-Gonzalez
+ http://creativecommons.org/licenses/by/4.0/
+ Annie S. Booth
- Uncovering latent territorial structure in ICFES Saber 11 performance with Bayesian multilevel spatial models
- https://arxiv.org/abs/2512.17119
- arXiv:2512.17119v1 Announce Type: new
-Abstract: This article develops a Bayesian hierarchical framework to analyze academic performance in the 2022 second semester Saber 11 examination in Colombia. Our approach combines multilevel regression with municipal and departmental spatial random effects, and it incorporates Ridge and Lasso regularization priors to compare the contribution of sociodemographic covariates. Inference is implemented in a fully open source workflow using Markov chain Monte Carlo methods, and model behavior is assessed through synthetic data that mirror key features of the observed data. Simulation results indicate that Ridge provides the most balanced performance in parameter recovery, predictive accuracy, and sampling efficiency, while Lasso shows weaker fit and posterior stability, with gains in predictive accuracy under stronger multicollinearity. In the application, posterior rankings show a strong centralization of performance, with higher scores in central departments and lower scores in peripheral territories, and the strongest correlates of scores are student level living conditions, maternal education, access to educational resources, gender, and ethnic background, while spatial random effects capture residual regional disparities. A hybrid Bayesian segmentation based on K means propagates posterior uncertainty into clustering at departmental, municipal, and spatial scales, revealing multiscale territorial patterns consistent with structural inequalities and informing territorial targeting in education policy.
- oai:arXiv.org:2512.17119v1
- stat.AP
+ Data adaptive covariate balancing for causal effect estimation for high dimensional data
+ https://arxiv.org/abs/2512.18069
+ arXiv:2512.18069v1 Announce Type: new
+Abstract: A key challenge in estimating causal effects from observational data is handling confounding, which is commonly addressed through weighting methods that balance the distribution of covariates between treatment and control groups. Weighting approaches can be classified by whether weights are estimated using parametric or nonparametric methods, and by whether the model relies on modeling and inverting the propensity score or directly estimates weights to achieve distributional balance by minimizing a measure of dissimilarity between groups. Parametric methods, both for propensity score modeling and direct balancing, are prone to model misspecification. In addition, balancing approaches often suffer from the curse of dimensionality, as they assign equal importance to all covariates, thus potentially de-emphasizing true confounders. Several methods, such as the outcome adaptive lasso, attempt to mitigate this issue through variable selection, but are parametric and focus on propensity score estimation rather than direct balancing. In this paper, we propose a nonparametric direct balancing approach that uses random forests to adaptively emphasize confounders. Our method jointly models treatment and outcome using random forests, allowing the data to identify covariates that influence both processes. We construct a similarity measure, defined by the proportion of trees in which two observations fall into the same leaf node, yielding a distance between treatment and control distributions that is sensitive to relevant covariates and captures the structure of confounding. Under suitable assumptions, we show that the resulting weights converge to normalized inverse propensity scores in the L2 norm and provide consistent treatment effect estimates. We demonstrate the effectiveness of our approach through extensive simulations and an application to a real dataset.
+ oai:arXiv.org:2512.18069v1
+ stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Laura Pardo, Juan Sosa
+ Simion De, Jared D. Huling
- Disentangled representations via score-based variational autoencoders
- https://arxiv.org/abs/2512.17127
- arXiv:2512.17127v1 Announce Type: new
-Abstract: We present the Score-based Autoencoder for Multiscale Inference (SAMI), a method for unsupervised representation learning that combines the theoretical frameworks of diffusion models and VAEs. By unifying their respective evidence lower bounds, SAMI formulates a principled objective that learns representations through score-based guidance of the underlying diffusion process. The resulting representations automatically capture meaningful structure in the data: it recovers ground truth generative factors in our synthetic dataset, learns factorized, semantic latent dimensions from complex natural images, and encodes video sequences into latent trajectories that are straighter than those of alternative encoders, despite training exclusively on static images. Furthermore, SAMI can extract useful representations from pre-trained diffusion models with minimal additional training. Finally, the explicitly probabilistic formulation provides new ways to identify semantically meaningful axes in the absence of supervised labels, and its mathematical exactness allows us to make formal statements about the nature of the learned representation. Overall, these results indicate that implicit structural information in diffusion models can be made explicit and interpretable through synergistic combination with a variational autoencoder.
- oai:arXiv.org:2512.17127v1
+ Causal Inference as Distribution Adaptation: Optimizing ATE Risk under Propensity Uncertainty
+ https://arxiv.org/abs/2512.18083
+ arXiv:2512.18083v1 Announce Type: new
+Abstract: Standard approaches to causal inference, such as Outcome Regression and Inverse Probability Weighted Regression Adjustment (IPWRA), are typically derived through the lens of missing data imputation and identification theory. In this work, we unify these methods from a Machine Learning perspective, reframing ATE estimation as a \textit{domain adaptation problem under distribution shift}. We demonstrate that the canonical Hajek estimator is a special case of IPWRA restricted to a constant hypothesis class, and that IPWRA itself is fundamentally Importance-Weighted Empirical Risk Minimization designed to correct for the covariate shift between the treated sub-population and the target population.
+ Leveraging this unified framework, we critically examine the optimization objectives of Doubly Robust estimators. We argue that standard methods enforce \textit{sufficient but not necessary} conditions for consistency by requiring outcome models to be individually unbiased. We define the true "ATE Risk Function" and show that minimizing it requires only that the biases of the treated and control models structurally cancel out. Exploiting this insight, we propose the \textbf{Joint Robust Estimator (JRE)}. Instead of treating propensity estimation and outcome modeling as independent stages, JRE utilizes bootstrap-based uncertainty quantification of the propensity score to train outcome models jointly. By optimizing for the expected ATE risk over the distribution of propensity scores, JRE leverages model degrees of freedom to achieve robustness against propensity misspecification. Simulation studies demonstrate that JRE achieves up to a 15\% reduction in MSE compared to standard IPWRA in finite-sample regimes with misspecified outcome models.
+ oai:arXiv.org:2512.18083v1
+ stat.ML
+ cs.LG
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Benjamin S. H. Lyo, Eero P. Simoncelli, Cristina Savin
+ http://creativecommons.org/licenses/by/4.0/
+ Ashley Zhang
+
+
+ Distribution-Free Selection of Low-Risk Oncology Patients for Survival Beyond a Time Horizon
+ https://arxiv.org/abs/2512.18118
+ arXiv:2512.18118v1 Announce Type: new
+Abstract: We study the problem of selecting a subset of patients who are unlikely to experience an event within a specified time horizon, by calibrating a screening rule based on the output of a black-box survival model. This statistics problem has many applications in medicine, including identifying candidates for treatment de-escalation and prioritizing the allocation of limited medical resources. In this paper, we compare two families of methods that can provide different types of distribution-free guarantees for this task: (i) high-probability risk control and (ii) expectation-based false discovery rate control using conformal $p$-values. We clarify the relation between these two frameworks, which have important conceptual differences, and explain how each can be adapted to analyze time-to-event data using inverse probability of censoring weighting. Through experiments on semi-synthetic and real oncology data from the Flatiron Health Research Database, we find that both approaches often achieve the desired survival rate among selected patients, but with distinct efficiency profiles. The conformal method tends to be more powerful, whereas high-probability risk control offers stronger guarantees at the cost of some additional conservativeness. Finally, we provide practical guidance on implementation and parameter tuning.
+ oai:arXiv.org:2512.18118v1
+ stat.AP
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Matteo Sesia, Vladimir Svetnik
+
+
+ Distributed Asymmetric Allocation: A Topic Model for Large Imbalanced Corpora in Social Sciences
+ https://arxiv.org/abs/2512.18119
+ arXiv:2512.18119v1 Announce Type: new
+Abstract: Social scientists employ latent Dirichlet allocation (LDA) to find highly specific topics in large corpora, but they often struggle in this task because (1) LDA, in general, takes a significant amount of time to fit on large corpora; (2) unsupervised LDA fragments topics into sub-topics in short documents; (3) semi-supervised LDA fails to identify specific topics defined using seed words. To solve these problems, I have developed a new topic model called distributed asymmetric allocation (DAA) that integrates multiple algorithms for efficiently identifying sentences about important topics in large corpora. I evaluate the ability of DAA to identify politically important topics by fitting it to the transcripts of speeches at the United Nations General Assembly between 1991 and 2017. The results show that DAA can classify sentences significantly more accurately and quickly than LDA thanks to the new algorithms. More generally, the results demonstrate that it is important for social scientists to optimize Dirichlet priors of LDA to perform content analysis accurately.
+ oai:arXiv.org:2512.18119v1
+ stat.ME
+ cs.CL
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Kohei Watanabe
+
+
+ Efficient Bayesian inference for two-stage models in environmental epidemiology
+ https://arxiv.org/abs/2512.18143
+ arXiv:2512.18143v1 Announce Type: new
+Abstract: Statistical models often require inputs that are not completely known. This can occur when inputs are measured with error, indirectly, or when they are predicted using another model. In environmental epidemiology, air pollution exposure is a key determinant of health, yet typically must be estimated for each observational unit by a complex model. Bayesian two-stage models combine this stage-one model with a stage-two model for the health outcome given the exposure. However, analysts usually only have access to the stage-one model output without all of its specifications or input data, making joint Bayesian inference apparently intractable. We show that two prominent workarounds (using a point estimate, or using the posterior from the stage-one model without feedback from the stage-two model) lead to miscalibrated inference. Instead, we propose efficient algorithms to facilitate joint Bayesian inference and provide more accurate estimates and well-calibrated uncertainties. Comparing different approaches, we investigate the association between PM2.5 exposure and county-level mortality rates in the South-Central USA.
+ oai:arXiv.org:2512.18143v1
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Konstantin Larin, Daniel R. Kowal
+
+
+ Frequentist forecasting in regime-switching models with extended Hamilton filter
+ https://arxiv.org/abs/2512.18149
+ arXiv:2512.18149v1 Announce Type: new
+Abstract: Psychological change processes, such as university student dropout in math, often exhibit discrete latent state transitions and can be studied using regime-switching models with intensive longitudinal data (ILD). Recently, regime-switching state-space (RSSS) models have been extended to allow for latent variables and their autoregressive effects. Despite this progress, estimation methods for handling both intra-individual changes and inter-individual differences as predictors of regime-switches need further exploration. Specifically, there's a need for frequentist estimation methods in dynamic latent variable frameworks that allow real-time inferences and forecasts of latent or observed variables during ongoing data collection. Building on Chow and Zhang's (2013) extended Kim filter, we introduce a first frequentist filter for RSSS models which allows hidden Markov(-switching) models to depend on both latent within- and between-individual characteristics. As a counterpart of Kelava et al.'s (2022) Bayesian forecasting filter for nonlinear dynamic latent class structural equation models (NDLC-SEM), our proposed method is the first frequentist approach within this general class of models. In an empirical study, the filter is applied to forecast emotions and behavior related to student dropout in math. Parameter recovery and prediction of regime and dynamic latent variables are evaluated through simulation study.
+ oai:arXiv.org:2512.18149v1
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Kento Okuyama, Tim Fabian Schaffland, Pascal Kilian, Holger Brandt, Augustin Kelava
- A Synthetic Instrumental Variable Method: Using the Dual Tendency Condition for Coplanar Instruments
- https://arxiv.org/abs/2512.17301
- arXiv:2512.17301v1 Announce Type: new
-Abstract: Traditional instrumental variable (IV) methods often struggle with weak or invalid instruments and rely heavily on external data. We introduce a Synthetic Instrumental Variable (SIV) approach that constructs valid instruments using only existing data. Our method leverages a data-driven dual tendency (DT) condition to identify valid instruments without requiring external variables. SIV is robust to heteroscedasticity and can determine the true sign of the correlation between endogenous regressors and errors--an assumption typically imposed in empirical work. Through simulations and real-world applications, we show that SIV improves causal inference by mitigating common IV limitations and reducing dependence on scarce instruments. This approach has broad implications for economics, epidemiology, and policy evaluation.
- oai:arXiv.org:2512.17301v1
+ quollr: An R Package for Visualizing 2-D Models from Nonlinear Dimension Reductions in High-Dimensional Space
+ https://arxiv.org/abs/2512.18166
+ arXiv:2512.18166v1 Announce Type: new
+Abstract: Nonlinear dimension reduction methods provide a low-dimensional representation of high-dimensional data by applying a nonlinear transformation. However, the complexity of the transformations and data structures can create wildly different representations depending on the method and hyper-parameter choices. It is difficult to determine whether any of these representations are accurate, which one is the best, or whether they have missed important structures. The R package quollr has been developed as a new visual tool to determine which method and which hyper-parameter choices provide the most accurate representation of high-dimensional data. The scurve data from the package is used to illustrate the algorithm. Single-cell RNA sequencing (scRNA-seq) data from mouse limb muscles are used to demonstrate the usability of the package.
+ oai:arXiv.org:2512.18166v1
+ stat.ME
+ stat.CO
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jayani P. Gamage, Dianne Cook, Paul Harrison, Michael Lydeamore, Thiyanga S. Talagala
+
+
+ Copula Entropy: Theory and Applications
+ https://arxiv.org/abs/2512.18168
+ arXiv:2512.18168v1 Announce Type: new
+Abstract: This is a monograph on the theory and applications of copula entropy (CE). This book first introduces the theory of CE, including its background, definition, theorems, properties, and estimation methods. The theoretical applications of CE to structure learning, association discovery, variable selection, causal discovery, system identification, time lag estimation, domain adaptation, multivariate normality test, copula hypothesis test, two-sample test, change point detection, and symmetry test are reviewed. The relationships between the theoretical applications and their connections to correlation and causality are discussed. The framework based on CE for measuring statistical independence and conditional independence is compared to the other similar ones. The advantages of CE based methodologies over the other comparable ones are evaluated with simulations. The mathematical generalizations of CE are reviewed. The real applications of CE to every branch of science and engineering are briefly introduced.
+ oai:arXiv.org:2512.18168v1
+ stat.ME
+ cs.IT
+ math.IT
+ math.PR
+ math.ST
+ stat.TH
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
- http://creativecommons.org/licenses/by-sa/4.0/
- Ratbek Dzhumashev, Ainura Tursunalieva
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jian Ma
- Penalized Fair Regression for Multiple Groups in Chronic Kidney Disease
- https://arxiv.org/abs/2512.17340
- arXiv:2512.17340v1 Announce Type: new
-Abstract: Fair regression methods have the potential to mitigate societal bias concerns in health care, but there has been little work on penalized fair regression when multiple groups experience such bias. We propose a general regression framework that addresses this gap with unfairness penalties for multiple groups. Our approach is demonstrated for binary outcomes with true positive rate disparity penalties. It can be efficiently implemented through reduction to a cost-sensitive classification problem. We additionally introduce novel score functions for automatically selecting penalty weights. Our penalized fair regression methods are empirically studied in simulations, where they achieve a fairness-accuracy frontier beyond that of existing comparison methods. Finally, we apply these methods to a national multi-site primary care study of chronic kidney disease to develop a fair classifier for end-stage renal disease. There we find substantial improvements in fairness for multiple race and ethnicity groups who experience societal bias in the health care system without any appreciable loss in overall fit.
- oai:arXiv.org:2512.17340v1
+ cardinalR: Generating Interesting High-Dimensional Data Structures
+ https://arxiv.org/abs/2512.18172
+ arXiv:2512.18172v1 Announce Type: new
+Abstract: Simulated high-dimensional data is useful for testing, validating, and improving algorithms used in dimension reduction, supervised and unsupervised learning. High-dimensional data is characterized by multiple variables that are dependent or associated in some way, such as linear, nonlinear, clustering or anomalies. Here we provide new methods for generating a variety of high-dimensional structures using mathematical functions and statistical distributions organized into the R package cardinalR. Several example data sets are also provided. These will be useful for researchers to better understand how different analytical methods work and can be improved, with a special focus on nonlinear dimension reduction methods. This package enriches the existing toolset of benchmark datasets for evaluating algorithms.
+ oai:arXiv.org:2512.18172v1
+ stat.ME
- cs.CY
- cs.LG
- stat.AP
- stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jayani P. Gamage, Dianne Cook, Paul Harrison, Michael Lydeamore, Thiyanga S. Talagala
+
+
+ Applying non-negative matrix factorization with covariates to structural equation modeling for blind input-output analysis
+ https://arxiv.org/abs/2512.18250
+ arXiv:2512.18250v1 Announce Type: new
+Abstract: Structural equation modeling (SEM) describes directed dependence and feedback, whereas non-negative matrix factorization (NMF) provides interpretable, parts-based representations for non-negative data. We propose NMF-SEM, a unified non-negative framework that embeds NMF within a simultaneous-equation structure, enabling latent feedback loops and a reduced-form input-output mapping when intermediate flows are unobserved. The mapping separates direct effects from cumulative propagation effects and summarizes reinforcement using an amplification ratio.
+ We develop regularized multiplicative-update estimation with orthogonality and sparsity penalties, and introduce structural evaluation metrics for input-output fidelity, second-moment (covariance-like) agreement, and feedback strength. Applications show that NMF-SEM recovers the classical three-factor structure in the Holzinger-Swineford data, identifies climate- and pollutant-driven mortality pathways with negligible feedback in the Los Angeles system, and separates deprivation, general morbidity, and deaths-of-despair components with weak feedback in Mississippi health outcomes.
+ oai:arXiv.org:2512.18250v1
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Carter H. Nakamoto, Lucia Lushi Chen, Agata Foryciarz, Sherri Rose
+ Kenichi Satoh
- Sharp Structure-Agnostic Lower Bounds for General Functional Estimation
- https://arxiv.org/abs/2512.17341
- arXiv:2512.17341v1 Announce Type: new
-Abstract: The design of efficient nonparametric estimators has long been a central problem in statistics, machine learning, and decision making. Classical optimal procedures often rely on strong structural assumptions, which can be misspecified in practice and complicate deployment. This limitation has sparked growing interest in structure-agnostic approaches -- methods that debias black-box nuisance estimates without imposing structural priors. Understanding the fundamental limits of these methods is therefore crucial. This paper provides a systematic investigation of the optimal error rates achievable by structure-agnostic estimators. We first show that, for estimating the average treatment effect (ATE), a central parameter in causal inference, doubly robust learning attains optimal structure-agnostic error rates. We then extend our analysis to a general class of functionals that depend on unknown nuisance functions and establish the structure-agnostic optimality of debiased/double machine learning (DML). We distinguish two regimes -- one where double robustness is attainable and one where it is not -- leading to different optimal rates for first-order debiasing, and show that DML is optimal in both regimes. Finally, we instantiate our general lower bounds by deriving explicit optimal rates that recover existing results and extend to additional estimands of interest. Our results provide theoretical validation for widely used first-order debiasing methods and guidance for practitioners seeking optimal approaches in the absence of structural assumptions. This paper generalizes and subsumes the ATE lower bound established in \citet{jin2024structure} by the same authors.
- oai:arXiv.org:2512.17341v1
- stat.ML
- cs.LG
- econ.EM
- math.ST
+ On Efficient Adjustment in Causal Graphs
+ https://arxiv.org/abs/2512.18315
+ arXiv:2512.18315v1 Announce Type: new
+Abstract: Observational studies in fields such as epidemiology often rely on covariate adjustment to estimate causal effects. Classical graphical criteria, like the back-door criterion and the generalized adjustment criterion, are powerful tools for identifying valid adjustment sets in directed acyclic graphs (DAGs). However, these criteria are not directly applicable to summary causal graphs (SCGs), which are abstractions of DAGs commonly used in dynamic systems. In SCGs, each node typically represents an entire time series and may involve cycles, making classical criteria inapplicable for identifying causal effects. Recent work established complete conditions for determining whether the micro causal effect of a treatment or an exposure $X_{t-\gamma}$ on an outcome $Y_t$ is identifiable via covariate adjustment in SCGs, under the assumption of no hidden confounding. However, these identifiability conditions have two main limitations. First, they are complex, relying on cumbersome definitions and requiring the enumeration of multiple paths in the SCG, which can be computationally expensive. Second, when these conditions are satisfied, they only provide two valid adjustment sets, limiting flexibility in practical applications. In this paper, we propose an equivalent but simpler formulation of those identifiability conditions and introduce a new criterion that identifies a broader class of valid adjustment sets in SCGs. Additionally, we characterize the quasi-optimal adjustment set among these, i.e., the one that minimizes the asymptotic variance of the causal effect estimator. Our contributions offer both theoretical advancement and practical tools for more flexible and efficient causal inference in abstracted causal graphs.
+ oai:arXiv.org:2512.18315v1
+ stat.ME
- stat.TH
- Mon, 22 Dec 2025 00:00:00 -0500
+ cs.AI
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Isabela Belciug, Simon Ferreira, Charles K. Assaad
+
+
+ Bayesian Brain Edge-Based Connectivity (BBeC): a Bayesian model for brain edge-based connectivity inference
+ https://arxiv.org/abs/2512.18403
+ arXiv:2512.18403v1 Announce Type: new
+Abstract: Brain connectivity analysis based on magnetic resonance imaging is crucial for understanding neurological mechanisms. However, edge-based connectivity inference faces significant challenges, particularly the curse of dimensionality when estimating high-dimensional covariance matrices. Existing methods often struggle to account for the unknown latent topological structure among brain edges, leading to inaccurate parameter estimation and unstable inference. To address these issues, this study proposes a Bayesian model based on a finite-dimensional Dirichlet distribution. Unlike non-parametric approaches, our method utilizes a finite-dimensional Dirichlet distribution to model the topological structure of brain networks, ensuring constant parameter dimensionality and improving algorithmic stability. We reformulate the covariance matrix structure to guarantee positive definiteness and employ a Metropolis-Hastings algorithm to simultaneously infer network topology and correlation parameters. Simulations validated the recovery of both network topology and correlation parameters. When applied to the Alzheimer's Disease Neuroimaging Initiative dataset, the model successfully identified structural subnetworks. The identified clusters were not only validated by composite anatomical metrics but also consistent with established findings in the literature, collectively demonstrating the model's reliability. The estimated covariance matrix also revealed that intragroup connection strength is stronger than intergroup connection strength. This study introduces a Bayesian framework for inferring brain network topology and high-dimensional covariance structures. The model configuration reduces parameter dimensionality while ensuring the positive definiteness of covariance matrices. As a result, it offers a reliable tool for investigating intrinsic brain connectivity in large-scale neuroimaging studies.
+ oai:arXiv.org:2512.18403v1
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
- Jikai Jin, Vasilis Syrgkanis
+ Zijing Li, Chenhao Zeng, Shufei Ge
+
+
+ Analysing Skill Predominance in Generalized Fantasy Cricket
+ https://arxiv.org/abs/2512.18467
+ arXiv:2512.18467v1 Announce Type: new
+Abstract: In fantasy sports, strategic thinking, not mere luck, often defines who wins and who falls short. As fantasy cricket grows in popularity across India, understanding whether success stems from skill or chance has become both an analytical and regulatory question. This study introduces a new limited-selection contest framework in which participants choose from four expert-designed teams and share prizes based on the highest cumulative score. By combining simulation experiments with real performance data from the 2024 Indian Premier League (IPL), we evaluate whether measurable skill emerges within this structure. Results reveal that strategic and informed team selection consistently outperforms random choice, underscoring a clear skill advantage that persists despite stochastic variability. The analysis quantifies how team composition, inter-team correlation, and participant behaviour jointly influence winning probabilities, highlighting configurations where skill becomes statistically dominant. These findings provide actionable insights for players seeking to maximise returns through strategy and for platform designers aiming to develop fair, transparent, and engaging skill-based gaming ecosystems that balance competition with regulatory compliance.
+ oai:arXiv.org:2512.18467v1
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Supratim Das, Sarthak Sarkar, Subhamoy Maitra, Tridib Mukherjee
+
+
+ Calibrating hierarchical Bayesian domain inference for a proportion
+ https://arxiv.org/abs/2512.18479
+ arXiv:2512.18479v1 Announce Type: new
+Abstract: Small area estimation (SAE) improves estimates for local communities or groups, such as counties, neighborhoods, or demographic subgroups, when data are insufficient for each area. This is important for targeting local resources and policies, especially when national-level or large-area data mask variation at a more granular level. Researchers often fit hierarchical Bayesian models to stabilize SAE when data are sparse. Ideally, Bayesian procedures also exhibit good frequentist properties, as demonstrated by calibrated Bayes metrics. However, hierarchical Bayesian models tend to shrink domain estimates toward the overall mean and may produce credible intervals that do not maintain nominal coverage. Hoff et al. developed the Frequentist, but Assisted by Bayes (FAB) intervals for subgroup estimates with normally distributed outcomes. However, non-normally distributed data present new challenges, and multiple types of intervals have been proposed for estimating proportions. We examine domain inference with binary outcomes and extend FAB intervals to improve nominal coverage. We describe how to numerically compute FAB intervals for a proportion and evaluate their performance through repeated simulation studies. Leveraging multilevel regression and poststratification (MRP), we further refine SAE to correct for sample selection bias, construct the FAB intervals for MRP estimates and assess their repeated sampling properties. Finally, we apply the proposed inference methods to estimate COVID-19 infection rates across geographic and demographic subgroups. We find that the FAB intervals improve nominal coverage, at the cost of wider intervals.
+ oai:arXiv.org:2512.18479v1
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Rayleigh Lei, Yajuan Si
+
+
+ A Bayesian likely responder approach for the analysis of randomized controlled trials
+ https://arxiv.org/abs/2512.18492
+ arXiv:2512.18492v1 Announce Type: new
+Abstract: An important goal of precision medicine is to personalize medical treatment by identifying individuals who are most likely to benefit from a specific treatment. The Likely Responder (LR) framework, which identifies a subpopulation where treatment response is expected to exceed a certain clinical threshold, plays a role in this effort. However, the LR framework, and more generally, data-driven subgroup analyses, often fail to account for uncertainty in the estimation of model-based data-driven subgrouping. We propose a simple two-stage approach that integrates subgroup identification with subsequent subgroup-specific inference on treatment effects. We incorporate model estimation uncertainty from the first stage into subgroup-specific treatment effect estimation in the second stage, by utilizing Bayesian posterior distributions from the first stage. We evaluate our method through simulations, demonstrating that the proposed Bayesian two-stage model produces better calibrated confidence intervals than na\"ive approaches. We apply our method to an international COVID-19 treatment trial, which shows substantial variation in treatment effects across data-driven subgroups.
+ oai:arXiv.org:2512.18492v1
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Annan Deng, Carole Siegel, Hyung G. Park
+
+
+ The Illusion of Consistency: Selection-Induced Bias in Gated Kalman Innovation Statistics
+ https://arxiv.org/abs/2512.18508
+ arXiv:2512.18508v1 Announce Type: new
+Abstract: Validation gating is a fundamental component of classical Kalman-based tracking systems. Only measurements whose normalized innovation squared (NIS) falls below a prescribed threshold are considered for state update. While this procedure is statistically motivated by the chi-square distribution, it implicitly replaces the unconditional innovation process with a conditionally observed one, restricted to the validation event. This paper shows that innovation statistics computed after gating converge to gate-conditioned rather than nominal quantities. Under classical linear--Gaussian assumptions, we derive exact expressions for the first- and second-order moments of the innovation conditioned on ellipsoidal gating, and show that gating induces a deterministic, dimension-dependent contraction of the innovation covariance. The analysis is extended to NN association, which is shown to act as an additional statistical selection operator. We prove that selecting the minimum-norm innovation among multiple in-gate measurements introduces an unavoidable energy contraction, implying that nominal innovation statistics cannot be preserved under nontrivial gating and association. Closed-form results in the two-dimensional case quantify the combined effects and illustrate their practical significance.
+ oai:arXiv.org:2512.18508v1
+ stat.ME
+ cs.AI
+ cs.SY
+ eess.SP
+ eess.SY
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Barak Or
- False detection rate control in time series coincidence detection
- https://arxiv.org/abs/2512.17372
- arXiv:2512.17372v1 Announce Type: new
-Abstract: We study the problem of coincidence detection in time series data, where we aim to determine whether the appearance of simultaneous or near-simultaneous events in two time series is indicative of some shared underlying signal or synchronicity, or might simply be due to random chance. This problem arises across many applications, such as astrophysics (e.g., detecting astrophysical events such as gravitational waves, with two or more detectors) and neuroscience (e.g., detecting synchronous firing patterns between two or more neurons). In this work, we consider methods based on time-shifting, where the timeline of one data stream is randomly shifted relative to another, to mimic the types of coincidences that could occur by random chance. Our theoretical results establish rigorous finite-sample guarantees controlling the probability of false positives, under weak assumptions that allow for dependence within the time series data, providing reassurance that time-shifting methods are a reliable tool for inference in this setting. Empirical results with simulated and real data validate the strong performance of time-shifting methods in dependent-data settings.
- oai:arXiv.org:2512.17372v1
+ State-Space Modeling of Time-Varying Spillovers on Networks
+ https://arxiv.org/abs/2512.18584
+ arXiv:2512.18584v1 Announce Type: new
+Abstract: Many modern time series arise on networks, where each component is attached to a node and interactions follow observed edges. Classical time-varying parameter VARs (TVP-VARs) treat all series symmetrically and ignore this structure, while network autoregressive models exploit a given graph but usually impose constant parameters and stationarity. We develop network state-space models in which a low-dimensional latent state controls time-varying network spillovers, own-lag persistence and nodal covariate effects. A key special case is a network time-varying parameter VAR (NTVP-VAR) that constrains each lag matrix to be a linear combination of known network operators, such as a row-normalised adjacency and the identity, and lets the associated coefficients evolve stochastically in time. The framework nests Gaussian and Poisson network autoregressions, network ARIMA models with graph differencing, and dynamic edge models driven by multivariate logistic regression. We give conditions ensuring that NTVP-VARs are well-defined in second moments despite nonstationary states, describe network versions of stability and local stationarity, and discuss shrinkage, thresholding and low-rank tensor structures for high-dimensional graphs. Conceptually, network state-space models separate where interactions may occur (the graph) from how strong they are at each time (the latent state), providing an interpretable alternative to both unstructured TVP-VARs and existing network time-series models.
+ oai:arXiv.org:2512.18584v1
+ stat.ME
+ math.ST
+ stat.TH
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ruiting Liang, Samuel Dyson, Rina Foygel Barber, Daniel E. Holz
+ http://creativecommons.org/licenses/by/4.0/
+ Marios Papamichalis, Regina Ruane, Theofanis Papamichalis
- Generative modeling of conditional probability distributions on the level-sets of collective variables
- https://arxiv.org/abs/2512.17374
- arXiv:2512.17374v1 Announce Type: new
-Abstract: Given a probability distribution $\mu$ in $\mathbb{R}^d$ represented by data, we study in this paper the generative modeling of its conditional probability distributions on the level-sets of a collective variable $\xi: \mathbb{R}^d \rightarrow \mathbb{R}^k$, where $1 \le k<d$. We propose a general and efficient learning approach that is able to learn generative models on different level-sets of $\xi$ simultaneously. To improve the learning quality on level-sets in low-probability regions, we also propose a strategy for data enrichment by utilizing data from enhanced sampling techniques. We demonstrate the effectiveness of our proposed learning approach through concrete numerical examples. The proposed approach is potentially useful for the generative modeling of molecular systems in biophysics, for instance.
- oai:arXiv.org:2512.17374v1
- stat.ML
- math.OC
- Mon, 22 Dec 2025 00:00:00 -0500
+ Graphon-Level Bayesian Predictive Synthesis for Random Network
+ https://arxiv.org/abs/2512.18587
+ arXiv:2512.18587v1 Announce Type: new
+Abstract: Bayesian predictive synthesis provides a coherent Bayesian framework for combining multiple predictive distributions, or agents, into a single updated prediction, extending Bayesian model averaging to allow general pooling of full predictive densities. This paper develops a static, graphon level version of Bayesian predictive synthesis for random networks. At the graphon level we show that Bayesian predictive synthesis corresponds to the integrated squared error projection of the true graphon onto the linear span of the agent graphons. We derive nonasymptotic oracle inequalities and prove that least-squares graphon-BPS, based on a finite number of edge observations, achieves the minimax L^2 rate over this agent span. Moreover, we show that any estimator that selects a single agent graphon is uniformly inconsistent on a nontrivial subset of the convex hull of the agents, whereas graphon-level Bayesian predictive synthesis remains minimax-rate optimal, formalizing a "combination beats components" phenomenon. Structural properties of the underlying random graphs are controlled through explicit Lipschitz bounds that transfer graphon error into error for edge density, degree distributions, subgraph densities, clustering coefficients, and giant component phase transitions. Finally, we develop a heavy tail theory for Bayesian predictive synthesis, showing how mixtures and entropic tilts preserve regularly varying degree distributions and how exponential random graph model agents remain within their family under log linear tilting with Kullback-Leibler optimal moment calibration.
+ oai:arXiv.org:2512.18587v1
+ math.ST
+ stat.ME
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Fatima-Zahrae Akhyar, Wei Zhang, Gabriel Stoltz, Christof Sch\"utte
+ Marios Papamichalis, Regina Ruane
- A General Stability Approach to False Discovery Rate Control
- https://arxiv.org/abs/2512.17401
- arXiv:2512.17401v1 Announce Type: new
-Abstract: Stability and reproducibility are essential considerations in various applications of statistical methods. False Discovery Rate (FDR) control methods are able to control false signals in scientific discoveries. However, many FDR control methods, such as Model-X knockoff and data-splitting approaches, yield unstable results due to the inherent randomness of the algorithms. To enhance the stability and reproducibility of statistical outcomes, we propose a general stability approach for FDR control in feature selection and multiple testing problems, named FDR Stabilizer. Taking feature selection as an example, our method first aggregates feature importance statistics obtained by multiple runs of the base FDR control procedure into a consensus ranking. Then, we construct a stabilized relaxed e-value for each feature and apply the e-BH procedure to these stabilized e-values to obtain the final selection set. We theoretically derive the finite-sample bounds for the FDR and the power of our method, and show that our method asymptotically controls the FDR without power loss. Moreover, we establish the stability of the proposed method, showing that the stabilized selection set converges to a deterministic limit as the number of repetitions increases. Extensive numerical experiments and applications to real datasets demonstrate that the proposed method generally outperforms existing alternatives.
- oai:arXiv.org:2512.17401v1
+ Wavelet Latent Position Exponential Random Graphs
+ https://arxiv.org/abs/2512.18592
+ arXiv:2512.18592v1 Announce Type: new
+Abstract: Many network datasets exhibit connectivity with variance by resolution and large-scale organization that coexists with localized departures. When vertices have observed ordering or embedding, such as geography in spatial and village networks, or anatomical coordinates in connectomes, learning where and at what resolution connectivity departs from a baseline is crucial. Standard models typically emphasize a single representation, i.e. stochastic block models prioritize coarse partitions, latent space models prioritize global geometry, small-world generators capture local clustering with random shortcuts, and graphon formulations are fully general and do not solely supply a canonical multiresolution parameterization for interpretation and regularization. We introduce wavelet latent position exponential random graphs (WL-ERGs), an exchangeable logistic-graphon framework in which the log-odds connectivity kernel is represented in compactly supported orthonormal wavelet coordinates and mapped to edge probabilities through a logistic link. Wavelet coefficients are indexed by resolution and location, which allows multiscale structure to become sparse and directly interpretable. Although edges remain independent given latent coordinates, any finite truncation yields a conditional exponential family whose sufficient statistics are multiscale wavelet interaction counts and conditional laws admit a maximum-entropy characterization. These characteristics enable likelihood-based regularization and testing directly in coefficient space. The theory is naturally scale-resolved and includes universality for broad classes of logistic graphons, near-minimax estimation under multiscale sparsity, scale-indexed recovery and detection thresholds, and a band-limited regime in which canonical coefficient-space tilts are non-degenerate and satisfy a finite-dimensional large deviation principle.
+ oai:arXiv.org:2512.18592v1
+ math.ST
+ stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Jiajun Sun, Zhanrui Cai, Wei Zhong
+ Marios Papamichalis, Regina Ruane
- Perfect reconstruction of sparse signals using nonconvexity control and one-step RSB message passing
- https://arxiv.org/abs/2512.17426
- arXiv:2512.17426v1 Announce Type: new
-Abstract: We consider sparse signal reconstruction via minimization of the smoothly clipped absolute deviation (SCAD) penalty, and develop one-step replica-symmetry-breaking (1RSB) extensions of approximate message passing (AMP), termed 1RSB-AMP. Starting from the 1RSB formulation of belief propagation, we derive explicit update rules of 1RSB-AMP together with the corresponding state evolution (1RSB-SE) equations. A detailed comparison shows that 1RSB-AMP and 1RSB-SE agree remarkably well at the macroscopic level, even in parameter regions where replica-symmetric (RS) AMP, termed RS-AMP, diverges and where the 1RSB description itself is not expected to be thermodynamically exact. Fixed-point analysis of 1RSB-SE reveals a phase diagram consisting of success, failure, and diverging phases, as in the RS case. However, the diverging-region boundary now depends on the Parisi parameter due to the 1RSB ansatz, and we propose a new criterion -- minimizing the size of the diverging region -- rather than the conventional zero-complexity condition, to determine its value. Combining this criterion with the nonconvexity-control (NCC) protocol proposed in a previous RS study improves the algorithmic limit of perfect reconstruction compared with RS-AMP. Numerical solutions of 1RSB-SE and experiments with 1RSB-AMP confirm that this improved limit is achieved in practice, though the gain is modest and remains slightly inferior to the Bayes-optimal threshold. We also report the behavior of thermodynamic quantities -- overlaps, free entropy, complexity, and the non-self-averaging susceptibility -- that characterize the 1RSB phase in this problem.
- oai:arXiv.org:2512.17426v1
+ Accuracy of Uniform Inference on Fine Grid Points
+ https://arxiv.org/abs/2512.18627
+ arXiv:2512.18627v1 Announce Type: new
+Abstract: Uniform confidence bands for functions are widely used in empirical analysis. A variety of simple implementation methods (most notably multiplier bootstrap) have been proposed and theoretically justified. However, an implementation over a literally continuous index set is generally computationally infeasible, and practitioners therefore compute the critical value by evaluating the statistic on a finite evaluation grid. This paper quantifies how fine the evaluation grid must be for a multiplier bootstrap procedure over finite grid points to deliver valid uniform confidence bands. We derive an explicit bound on the resulting coverage error that separates discretization effects from the intrinsic high-dimensional bootstrap approximation error on the grid. The bound yields a transparent workflow for choosing the grid size in practice, and we illustrate the implementation through an example of kernel density estimation.
+ oai:arXiv.org:2512.18627v1
+ stat.ME
+ econ.EM
+ stat.CO
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Shunsuke Imai
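As a rough illustration of the workflow discussed in arXiv:2512.18627 above, the following Python sketch computes a multiplier-bootstrap critical value for a kernel density estimate on a finite evaluation grid; the grid size, kernel, bandwidth rule, and number of bootstrap draws are illustrative choices, not the paper's recommendations.

# Multiplier bootstrap of the sup of a studentized KDE process over a finite grid,
# yielding a uniform confidence band evaluated on that grid (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n, B, alpha = 500, 2000, 0.05
x = rng.normal(size=n)                                # data
grid = np.linspace(-3, 3, 101)                        # finite evaluation grid
h = 1.06 * x.std() * n ** (-1 / 5)                    # rule-of-thumb bandwidth

K = np.exp(-0.5 * ((grid[None, :] - x[:, None]) / h) ** 2) / np.sqrt(2 * np.pi)
fhat = K.mean(axis=0) / h                             # KDE on the grid
Z = K / h - fhat[None, :]                             # centered influence values, n x m
sigma = Z.std(axis=0, ddof=1)

sup_stats = np.empty(B)
for b in range(B):
    e = rng.normal(size=n)                            # Gaussian multipliers
    sup_stats[b] = np.max(np.abs(e @ Z) / (np.sqrt(n) * sigma))
crit = np.quantile(sup_stats, 1 - alpha)

band = crit * sigma / np.sqrt(n)
lower, upper = fhat - band, fhat + band               # uniform band on the grid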
+
+
+ A pivotal transform for the high-dimensional location-scale model
+ https://arxiv.org/abs/2512.18705
+ arXiv:2512.18705v1 Announce Type: new
+Abstract: We study the high-dimensional linear model with noise distribution known up to a scale parameter. With an $\ell_1$-penalty on the regression coefficients, we show that a transformation of the log-likelihood allows for a choice of the tuning parameter not depending on the scale parameter. This transformation is a generalization of the square root Lasso for quadratic loss. The tuning parameter can asymptotically be taken at the detection edge. We establish an oracle inequality, variable selection and asymptotic efficiency of the estimator of the scale parameter and the intercept. The examples include Subbotin distributions and the Gumbel distribution.
+ oai:arXiv.org:2512.18705v1
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Sara van de Geer, Sylvain Sardy, Maxim\k{e} van Cutsem
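The square-root Lasso mentioned above (the quadratic-loss special case that the paper generalizes) can be written down in a few lines; this Python sketch uses cvxpy and a conventional pivotal tuning constant, both of which are illustrative choices rather than anything prescribed by the paper.

# Square-root Lasso: the tuning parameter does not depend on the noise scale.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, p, s = 100, 200, 5
X = rng.normal(size=(n, p))
beta0 = np.zeros(p); beta0[:s] = 1.0
y = X @ beta0 + 0.5 * rng.normal(size=n)              # noise scale unknown to the method

lam = 1.1 * np.sqrt(2 * np.log(p) / n)                # pivotal tuning: no sigma needed
beta = cp.Variable(p)
objective = cp.norm2(y - X @ beta) / np.sqrt(n) + lam * cp.norm1(beta)
cp.Problem(cp.Minimize(objective)).solve()

beta_hat = beta.value
sigma_hat = np.linalg.norm(y - X @ beta_hat) / np.sqrt(n)   # plug-in scale estimate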
+
+
+ Unsupervised Feature Selection via Robust Autoencoder and Adaptive Graph Learning
+ https://arxiv.org/abs/2512.18720
+ arXiv:2512.18720v1 Announce Type: new
+Abstract: Effective feature selection is essential for high-dimensional data analysis and machine learning. Unsupervised feature selection (UFS) aims to simultaneously cluster data and identify the most discriminative features. Most existing UFS methods linearly project features into a pseudo-label space for clustering, but they suffer from two critical limitations: (1) an oversimplified linear mapping that fails to capture complex feature relationships, and (2) an assumption of uniform cluster distributions, ignoring outliers prevalent in real-world data. To address these issues, we propose the Robust Autoencoder-based Unsupervised Feature Selection (RAEUFS) model, which leverages a deep autoencoder to learn nonlinear feature representations while inherently improving robustness to outliers. We further develop an efficient optimization algorithm for RAEUFS. Extensive experiments demonstrate that our method outperforms state-of-the-art UFS approaches in both clean and outlier-contaminated data settings.
+ oai:arXiv.org:2512.18720v1
+ stat.ML
- cond-mat.dis-nncs.LG
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xiaosi Gu, Ayaka Sakata, Tomoyuki Obuchi
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Feng Yu, MD Saifur Rahman Mazumder, Ying Su, Oscar Contreras Velasco
- Bayesian Markov-Switching Partial Reduced-Rank Regression
- https://arxiv.org/abs/2512.17471
- arXiv:2512.17471v1 Announce Type: new
-Abstract: Reduced-Rank (RR) regression is a powerful dimensionality reduction technique but it overlooks any possible group configuration among the responses by assuming a low-rank structure on the entire coefficient matrix. Moreover, the temporal change of the relations between predictors and responses in time series induce a possibly time-varying grouping structure in the responses. To address these limitations, a Bayesian Markov-switching partial RR (MS-PRR) model is proposed, where the response vector is partitioned in two groups to reflect different complexity of the relationship. A \textit{simple} group assumes a low-rank linear regression, while a \textit{complex} group exploits nonparametric regression via a Gaussian Process. Differently from traditional approaches, group assignments and rank are treated as unknown parameters to be estimated. Then temporal persistence in the regression function is accounted for by a Markov-switching process that drives the changes in the grouping structure and model parameters over time. Full Bayesian inference is preformed via a partially collapsed Gibbs sampler, which allows uncertainty quantification without the need for trans-dimensional moves. Applications to two real-world macroeconomic and commodity data demonstrate the evidence of time-varying grouping and different degrees of complexity both across states and within each state.
- oai:arXiv.org:2512.17471v1
- stat.ME
- stat.CO
- Mon, 22 Dec 2025 00:00:00 -0500
+ Functional Modeling of Learning and Memory Dynamics in Cognitive Disorders
+ https://arxiv.org/abs/2512.18760
+ arXiv:2512.18760v1 Announce Type: new
+Abstract: Deficits in working memory, which includes both the ability to learn and to retain information short-term, are a hallmark of many cognitive disorders. Our study analyzes data from a neuroscience experiment on animal subjects, where performance on a working memory task was recorded as repeated binary success or failure data. We estimate continuous probability of success curves from this binary data in the context of functional data analysis, which is largely used in biological processes that are intrinsically continuous. We then register these curves to decompose each function into its amplitude, representing overall performance, and its phase, representing the speed of learning or response. Because we are able to separate speed from performance, we can address the crucial question of whether a cognitive disorder impacts not only how well subjects can learn and remember, but also how fast. This allows us to analyze the components jointly to uncover how speed and performance co-vary, and to compare them separately to pinpoint whether group differences stem from a deficit in peak performance or a change in speed.
+ oai:arXiv.org:2512.18760v1
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Maria F. Pintado, Matteo Iacopini, Luca Rossini, Alexander Y. Shestopaloff
+ Maria Laura Battagliola, Laura J. Benoit, Sarah Canetta, Shizhe Zhang, R. Todd Ogden
- Inference for high dimensional repeated measure designs with the R package hdrm
- https://arxiv.org/abs/2512.17478
- arXiv:2512.17478v1 Announce Type: new
-Abstract: Repeated-measure designs allow comparisons within a group as well as between groups, and are commonly referred to as split-plot designs. While originating in agricultural experiments, they are now widely used in medical research, psychology, and the life sciences, where repeated observations on the same subject are essential.
- Modern data collection often produces observation vectors with dimension $d$ comparable to or exceeding the sample size $N$. Although this can be advantageous in terms of cost efficiency, ethical considerations, and the study of rare diseases, it poses substantial challenges for statistical inference.
- Parametric methods based on multivariate normality provide a flexible framework that avoids restrictive assumptions on covariance structures or on the asymptotic relationship between $d$ and $N$. Within this framework, the freely available R-package hdrm enables the analysis of a wide range of hypotheses concerning expectation vectors in high-dimensional repeated-measure designs, covering both single-group and multi-group settings with homogeneous or heterogeneous covariance matrices.
- This paper describes the implemented tests, demonstrates their use through examples, and discusses their applicability in practical high-dimensional data scenarios. To address computational challenges arising for large $d$, the package incorporates efficient estimators and subsampling strategies that substantially reduce computation time while preserving statistical validity.
- oai:arXiv.org:2512.17478v1
- stat.CO
+ Non-stationary Spatial Modeling Using Fractional SPDEs
+ https://arxiv.org/abs/2512.18768
+ arXiv:2512.18768v1 Announce Type: new
+Abstract: We construct a Gaussian random field (GRF) that combines fractional smoothness with spatially varying anisotropy. The GRF is defined through a stochastic partial differential equation (SPDE), where the range, marginal variance, and anisotropy vary spatially according to a spectral parametrization of the SPDE coefficients. Priors are constructed to reduce overfitting in this flexible covariance model, and parameter estimation is done with an efficient gradient-based optimization approach that combines automatic differentiation with sparse matrix operations. In a simulation study, we investigate how many observations are required to reliably estimate fractional smoothness and non-stationarity, and find that one realization containing 500 observations or more is needed in the scenario considered. We also find that the proposed penalization prevents overfitting across varying numbers of observation locations. Two case studies demonstrate that the relative importance of fractional smoothness and non-stationarity is application dependent. Non-stationarity improves predictions in an application to ocean salinity, whereas fractional smoothness improves predictions in an application to precipitation. Predictive ability is assessed using mean squared error and the continuous ranked probability score. In addition to prediction, the proposed approach can be used as a tool to explore the presence of fractional smoothness and non-stationarity.
+ oai:arXiv.org:2512.18768v1
+ stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Paavo Sattler, Nils Hichert
+ http://creativecommons.org/licenses/by/4.0/
+ Elling Svee, Geir-Arne Fuglstad
- Inference on state occupancy in covariate-driven hidden Markov models
- https://arxiv.org/abs/2512.17496
- arXiv:2512.17496v1 Announce Type: new
-Abstract: Hidden Markov models (HMMs) are popular tools for analysing animal behaviour based on movement, acceleration and other sensor data. In particular, these models allow to infer how the animal's decision-making process interacts with internal and external drivers, by relating the probabilities of switching between distinct behavioural states to covariates. A key challenge arising in the statistical analysis of behavioural data using covariate-driven HMMs is the models' interpretation, especially when there are more than two states, as then several functional relationships between state-switching probabilities and covariates need to be jointly interpreted. The model-implied probabilities of occupying the different states, as a function of a covariate of interest, constitute a much simpler summary statistic. A pragmatic approximation of the state occupancy distribution, namely the hypothetical stationary distribution of the model's underlying Markov chain for fixed covariate values, has in fact routinely been reported in HMM-based analyses of ecological data. However, for stochastically varying covariates with relatively little persistence, we show that this approximation can be severely biased, potentially invalidating ecological inference. We develop two alternative approaches for obtaining the state occupancy distribution as a function of a covariate of interest - one based on resampling of the covariate process, the other obtained by regression analysis of the empirical state probabilities. The practical application of these approaches is demonstrated in simulations and a case study on Gal\'apagos tortoise (Chelonoidis niger) movement data. Our methods enable practitioners to conduct unbiased inference on the relationship between animal behaviour and general types of covariates, thus allowing to uncover the factors influencing behavioural decisions made by animals.
- oai:arXiv.org:2512.17496v1
+ Consistent Bayesian meta-analysis on subgroup specific effects and interactions
+ https://arxiv.org/abs/2512.18785
+ arXiv:2512.18785v1 Announce Type: new
+Abstract: Commonly, clinical trials report effects not only for the full study population but also for patient subgroups. Meta-analyses of subgroup-specific effects and treatment-by-subgroup interactions may be inconsistent, especially when trials apply different subgroup weightings. We show that meta-regression can, in principle, with a contribution adjustment, recover the same interaction inference regardless of whether interaction data or subgroup data are used. Our Bayesian framework for subgroup-data interaction meta-analysis inherently (i) adjusts for varying relative subgroup contribution, quantified by the information fraction (IF) within a trial; (ii) is robust to prevalence imbalance and variation; (iii) provides a self-contained, model-based approach; and (iv) can be used to incorporate prior information into interaction meta-analyses with few studies. The method is demonstrated using an example with as few as seven trials of disease-modifying therapies in relapsing-remitting multiple sclerosis. The Bayesian Contribution-adjusted Meta-analysis by Subgroup (CAMS) indicates a stronger treatment-by-disability interaction (relapse rate reduction) in patients with lower disability (EDSS <= 3.5) compared with the unadjusted model, while results for younger patients (age < 40 years) are unchanged. By controlling subgroup contribution while retaining subgroup interpretability, this approach enables reliable interaction decision-making when published subgroup data are available. Although the proposed CAMS approach is presented in a Bayesian context, it can also be implemented in frequentist or likelihood frameworks.
+ oai:arXiv.org:2512.18785v1
+ stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Maya N. Vienken, Jan-Ole Koslik, Roland Langrock
+ Renato Panaro, Christian R\"over, Tim Friede
- Fast and Robust: Computationally Efficient Covariance Estimation for Sub-Weibull Vectors
- https://arxiv.org/abs/2512.17632
- arXiv:2512.17632v1 Announce Type: new
-Abstract: High-dimensional covariance estimation is notoriously sensitive to outliers. While statistically optimal estimators exist for general heavy-tailed distributions, they often rely on computationally expensive techniques like semidefinite programming or iterative M-estimation ($O(d^3)$). In this work, we target the specific regime of \textbf{Sub-Weibull distributions} (characterized by stretched exponential tails $\exp(-t^\alpha)$). We investigate a computationally efficient alternative: the \textbf{Cross-Fitted Norm-Truncated Estimator}. Unlike element-wise truncation, our approach preserves the spectral geometry while requiring $O(Nd^2)$ operations, which represents the theoretical lower bound for constructing a full covariance matrix. Although spherical truncation is geometrically suboptimal for anisotropic data, we prove that within the Sub-Weibull class, the exponential tail decay compensates for this mismatch. Leveraging weighted Hanson-Wright inequalities, we derive non-asymptotic error bounds showing that our estimator recovers the optimal sub-Gaussian rate $\tilde{O}(\sqrt{r(\Sigma)/N})$ with high probability. This provides a scalable solution for high-dimensional data that exhibits tails heavier than Gaussian but lighter than polynomial decay.
- oai:arXiv.org:2512.17632v1
- stat.ML
+ Effect measures for comparing paired event times
+ https://arxiv.org/abs/2512.18860
+ arXiv:2512.18860v1 Announce Type: new
+Abstract: The progression-free survival ratio (PFSr) is a widely used measure in personalized oncology trials. It evaluates the effectiveness of treatment by comparing two consecutive event times - one under standard therapy and one under an experimental treatment. However, most proposed tests based on the PFSr cannot control the nominal type I error rate, even under mild assumptions such as random right-censoring. Consequently the results of these tests are often unreliable.
+ As a remedy, we propose to estimate the relevant probabilities related to the PFSr by adapting recently developed methodology for the relative treatment effect between paired event times. As an additional alternative, we develop inference procedures based on differences and ratios of restricted mean survival times.
+ An extensive simulation study confirms that the proposed novel methodology provides reliable inference, whereas previously proposed techniques break down in many realistic settings. The utility of our methods is further illustrated through an analysis of real data from a molecularly aided tumor trial.
+ oai:arXiv.org:2512.18860v1
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Merle Munko, Simon Mack, Marc Ditzhaus, Stefan Fr\"ohling, Dennis Dobler, Dominic Edelmann
+
+
+ Fast simulation of Gaussian random fields with flexible correlation models in Euclidean spaces
+ https://arxiv.org/abs/2512.18884
+ arXiv:2512.18884v1 Announce Type: new
+Abstract: The efficient simulation of Gaussian random fields with flexible correlation structures is fundamental in spatial statistics, machine learning, and uncertainty quantification. In this work, we revisit the \emph{spectral turning-bands} (STB) method as a versatile and scalable framework for simulating isotropic Gaussian random fields with a broad range of covariance models. Beyond the classical Mat\'ern family, we show that the STB approach can be extended to two recent and flexible correlation classes that generalize the Mat\'ern model: the Kummer-Tricomi model, which allows for polynomially decaying correlations and long-range dependence, and the Gauss-Hypergeometric model, which admits compactly supported correlations, including the Generalized Wendland family as a special case. We derive exact stochastic representations for both families: a Beta-prime mixture formulation for the Kummer-Tricomi model and complementary Beta- and Gasper-mixture representations for the Gauss-Hypergeometric model. These formulations enable exact, numerically stable, and computationally efficient simulation with linear complexity in the number of spectral components. Numerical experiments confirm the accuracy and computational stability of the proposed algorithms across a wide range of parameter configurations, demonstrating their practical viability for large-scale spatial modeling. As an application, we use the proposed STB simulators to perform parametric bootstrap for standard error estimation and model selection under weighted pairwise composite likelihood in the analysis of a large climate dataset.
+ oai:arXiv.org:2512.18884v1
+ stat.CO
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Moreno Bevilacqua, Xavier Emery, Francisco Cuevas-Pacheco
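For background on the spectral simulation that turning-bands-type methods build on, here is a minimal Python sketch of random-phase (spectral) simulation of an isotropic Gaussian random field with a squared-exponential correlation; the Kummer-Tricomi and Gauss-Hypergeometric spectral mixtures of the paper are not reproduced, and the grid, range, and number of spectral components are arbitrary.

# Random-phase spectral simulation of an isotropic Gaussian random field.
import numpy as np

rng = np.random.default_rng(3)
d, M, ell = 2, 2000, 0.2                       # dimension, spectral components, range
xy = np.stack(np.meshgrid(np.linspace(0, 1, 100),
                          np.linspace(0, 1, 100), indexing="ij"), axis=-1)
pts = xy.reshape(-1, d)                        # simulation locations

# For corr(h) = exp(-||h||^2 / (2 ell^2)), the spectral measure is N(0, I / ell^2).
W = rng.normal(scale=1.0 / ell, size=(M, d))   # random frequencies
U = rng.uniform(0, 2 * np.pi, size=M)          # random phases
Z = np.sqrt(2.0 / M) * np.cos(pts @ W.T + U).sum(axis=1)

field = Z.reshape(100, 100)                    # approximately standard Gaussian field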
+
+
+ Model-Agnostic Bounds for Augmented Inverse Probability Weighted Estimators' Wald-Confidence Interval Coverage in Randomized Controlled Trials
+ https://arxiv.org/abs/2512.18898
+ arXiv:2512.18898v1 Announce Type: new
+Abstract: Nonparametric estimators, such as the augmented inverse probability weighted (AIPW) estimator, have become increasingly popular in causal inference. Numerous nonparametric estimators have been proposed, but they are all asymptotically normal with the same asymptotic variance under similar conditions, leaving little guidance for practitioners to choose an estimator. In this paper, I focus on another important perspective of their asymptotic behaviors beyond asymptotic normality, the convergence of the Wald-confidence interval (CI) coverage to the nominal coverage. Such results have been established for simpler estimators (e.g., the Berry-Esseen Theorem), but are lacking for nonparametric estimators. I consider a simple but practical setting where the AIPW estimator based on a black-box nuisance estimator, with or without cross-fitting, is used to estimate the average treatment effect in randomized controlled trials. I derive non-asymptotic Berry-Esseen-type bounds on the difference between Wald-CI coverage and the nominal coverage. I also analyze the bias of variance estimators, showing that the cross-fit variance estimator might overestimate while the non-cross-fit variance estimator might underestimate, which might explain why cross-fitting has been empirically observed to improve Wald-CI coverage.
+ oai:arXiv.org:2512.18898v1
+ math.ST
+ stat.ME
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Even He
+ Hongxiang Qiu
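A minimal Python sketch of the cross-fit AIPW estimator and its Wald confidence interval in a randomized trial with known treatment probability pi; the random-forest outcome model stands in for the paper's generic "black-box nuisance estimator" and all tuning choices are illustrative.

# Cross-fit AIPW estimate of the average treatment effect in an RCT (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def aipw_ate(X, A, Y, pi=0.5, n_splits=2, seed=0):
    psi = np.zeros(len(Y))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        m1 = RandomForestRegressor(random_state=seed).fit(X[train][A[train] == 1],
                                                          Y[train][A[train] == 1])
        m0 = RandomForestRegressor(random_state=seed).fit(X[train][A[train] == 0],
                                                          Y[train][A[train] == 0])
        mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
        psi[test] = (mu1 - mu0
                     + A[test] * (Y[test] - mu1) / pi
                     - (1 - A[test]) * (Y[test] - mu0) / (1 - pi))
    est = psi.mean()
    se = psi.std(ddof=1) / np.sqrt(len(Y))
    return est, (est - 1.96 * se, est + 1.96 * se)     # Wald confidence interval

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
A = rng.binomial(1, 0.5, size=500)
Y = X[:, 0] + A + rng.normal(size=500)                 # true ATE = 1
print(aipw_ate(X, A, Y))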
- Estimation and model errors in Gaussian-process-based Sensitivity Analysis of functional outputs
- https://arxiv.org/abs/2512.17635
- arXiv:2512.17635v1 Announce Type: new
-Abstract: Global sensitivity analysis (GSA) of functional-output models is usually performed by combining statistical techniques, such as basis expansions, metamodeling and sampling based estimation of sensitivity indices. By neglecting truncation error from basis expansion, two main sources of errors propagate to the final sensitivity indices: the metamodeling related error and the sampling-based, or pick-freeze (PF), estimation error. This work provides an efficient algorithm to estimate these errors in the frame of Gaussian processes (GP), based on the approach of Le Gratiet et al. [16]. The proposed algorithm takes advantage of the fact that the number of basis coefficients of expanded model outputs is significantly smaller than output dimensions. Basis coefficients are fitted by GP models and multiple conditional GP trajectories are sampled. Then, vector-valued PF estimation is used to speed-up the estimation of Sobol indices and generalized sensitivity indices (GSI). We illustrate the methodology on an analytical test case and on an application in non-Newtonian hydraulics, modelling an idealized dam-break flow. Numerical tests show an improvement of 15 times in the computational time when compared to the application of Le Gratiet et al. [16] algorithm separately over each output dimension.
- oai:arXiv.org:2512.17635v1
+ Testing for latent structure via the Wilcoxon--Wigner random matrix of normalized rank statistics
+ https://arxiv.org/abs/2512.18924
+ arXiv:2512.18924v1 Announce Type: new
+Abstract: This paper considers the problem of testing for latent structure in large symmetric data matrices. The goal here is to develop statistically principled methodology that is flexible in its applicability, computationally efficient, and insensitive to extreme data variation, thereby overcoming limitations facing existing approaches. To do so, we introduce and systematically study certain symmetric matrices, called Wilcoxon--Wigner random matrices, whose entries are normalized rank statistics derived from an underlying independent and identically distributed sample of absolutely continuous random variables. These matrices naturally arise as the matricization of one-sample problems in statistics and conceptually lie at the interface of nonparametrics, multivariate analysis, and data reduction. Among our results, we establish that the leading eigenvalue and corresponding eigenvector of Wilcoxon--Wigner random matrices admit asymptotically Gaussian fluctuations with explicit centering and scaling terms. These asymptotic results enable rigorous parameter-free and distribution-free spectral methodology for addressing two hypothesis testing problems, namely community detection and principal submatrix detection. Numerical examples illustrate the performance of the proposed approach. Throughout, our findings are juxtaposed with existing results based on the spectral properties of independent entry symmetric random matrices in signal-plus-noise data settings.
+ oai:arXiv.org:2512.18924v1
+ stat.ME
+ math.PR
+ math.ST
- stat.AP
+ stat.ML
+ stat.TH
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Jonquil Z. Liao, Joshua Cape
+
+
+ Integrating Prioritized and Non-Prioritized Structures in Win Statistics
+ https://arxiv.org/abs/2512.18946
+ arXiv:2512.18946v1 Announce Type: new
+Abstract: Composite endpoints are frequently used as primary or secondary analyses in cardiovascular clinical trials to increase clinical relevance and statistical efficiency. Alternatively, the Win Ratio (WR) and other Win Statistics (WS) analyses rely on a strict hierarchical ordering of endpoints, assigning higher priority to clinically important endpoints. However, determining a definitive endpoint hierarchy can be challenging and may not adequately reflect situations where endpoints have comparable importance. In this study, we discuss the challenges of endpoint prioritization, underscore its critical role in WS analyses, and propose Rotation WR (RWR), a hybrid prioritization framework that integrates both prioritized and non-prioritized structures. By permitting blocks of equally-prioritized endpoints, RWR accommodates endpoints of equal or near equal clinical importance, recurrent events, and contexts requiring individualized shared decision making. Statistical inference for RWR is developed using U-statistics theory, including the hypothesis testing procedure and confidence interval construction. Extensions to two additional WS measures, Rotation Net Benefit and Rotation Win Odds, are also provided. Through extensive simulation studies involving multiple time-to-event endpoints, including recurrent events, we demonstrate that RWR achieves valid type I error control, desirable statistical power, and accurate confidence interval coverage. We illustrate both the methodological and practical insights of our work in a case study on endpoint prioritization with the SPRINT clinical trial, highlighting its implications for real-world clinical trial studies.
+ oai:arXiv.org:2512.18946v1
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yuri Taglieri S\'ao, Olivier Roustant, Geraldo de Freitas Maciel
+ Yunhan Mou, Scott Hummel, Yuan Huang
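For orientation, this Python sketch computes the standard, fully prioritized win ratio for two hierarchical endpoints without censoring; the Rotation WR of the paper, which allows blocks of equally prioritized endpoints and handles recurrent events, is not reproduced, and the simulated endpoints are invented.

# Standard win ratio over all between-group pairs, two endpoints, no censoring.
import numpy as np

def win_ratio(trt, ctl):
    """trt, ctl: arrays of shape (n, 2); column 0 is the higher-priority endpoint
    (larger = better), column 1 is used only when column 0 is tied."""
    wins = losses = 0
    for t in trt:
        for c in ctl:
            for k in range(trt.shape[1]):          # walk down the hierarchy
                if t[k] > c[k]:
                    wins += 1; break
                if t[k] < c[k]:
                    losses += 1; break
    return wins / losses

rng = np.random.default_rng(8)
trt = np.column_stack([rng.exponential(1.3, 100), rng.exponential(1.2, 100)])
ctl = np.column_stack([rng.exponential(1.0, 100), rng.exponential(1.0, 100)])
print(win_ratio(trt, ctl))                         # > 1 favours treatment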
- Generative Multi-Objective Bayesian Optimization with Scalable Batch Evaluations for Sample-Efficient De Novo Molecular Design
- https://arxiv.org/abs/2512.17659
- arXiv:2512.17659v1 Announce Type: new
-Abstract: Designing molecules that must satisfy multiple, often conflicting objectives is a central challenge in molecular discovery. The enormous size of chemical space and the cost of high-fidelity simulations have driven the development of machine learning-guided strategies for accelerating design with limited data. Among these, Bayesian optimization (BO) offers a principled framework for sample-efficient search, while generative models provide a mechanism to propose novel, diverse candidates beyond fixed libraries. However, existing methods that couple the two often rely on continuous latent spaces, which introduces both architectural entanglement and scalability challenges. This work introduces an alternative, modular "generate-then-optimize" framework for de novo multi-objective molecular design/discovery. At each iteration, a generative model is used to construct a large, diverse pool of candidate molecules, after which a novel acquisition function, qPMHI (multi-point Probability of Maximum Hypervolume Improvement), is used to optimally select a batch of candidates most likely to induce the largest Pareto front expansion. The key insight is that qPMHI decomposes additively, enabling exact, scalable batch selection via only simple ranking of probabilities that can be easily estimated with Monte Carlo sampling. We benchmark the framework against state-of-the-art latent-space and discrete molecular optimization methods, demonstrating significant improvements across synthetic benchmarks and application-driven tasks. Specifically, in a case study related to sustainable energy storage, we show that our approach quickly uncovers novel, diverse, and high-performing organic (quinone-based) cathode materials for aqueous redox flow battery applications.
- oai:arXiv.org:2512.17659v1
+ On Conditional Stochastic Interpolation for Generative Nonlinear Sufficient Dimension Reduction
+ https://arxiv.org/abs/2512.18971
+ arXiv:2512.18971v1 Announce Type: new
+Abstract: Identifying low-dimensional sufficient structures in nonlinear sufficient dimension reduction (SDR) has long been a fundamental yet challenging problem. Most existing methods lack theoretical guarantees of exhaustiveness in identifying lower dimensional structures, either at the population level or at the sample level. We tackle this issue by proposing a new method, generative sufficient dimension reduction (GenSDR), which leverages modern generative models. We show that GenSDR is able to fully recover the information contained in the central $\sigma$-field at both the population and sample levels. In particular, at the sample level, we establish a consistency property for the GenSDR estimator from the perspective of conditional distributions, capitalizing on the distributional learning capabilities of deep generative models. Moreover, by incorporating an ensemble technique, we extend GenSDR to accommodate scenarios with non-Euclidean responses, thereby substantially broadening its applicability. Extensive numerical results demonstrate the outstanding empirical performance of GenSDR and highlight its strong potential for addressing a wide range of complex, real-world tasks.
+ oai:arXiv.org:2512.18971v1
+ stat.ML
- cond-mat.mtrl-scics.LG
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Madhav R. Muthyala, Farshud Sorourifar, Tianhong Tan, You Peng, Joel A. Paulson
+ Shuntuo Xu, Zhou Yu, Jian Huang
- Imputation Uncertainty in Interpretable Machine Learning Methods
- https://arxiv.org/abs/2512.17689
- arXiv:2512.17689v1 Announce Type: new
-Abstract: In real data, missing values occur frequently, which affects the interpretation with interpretable machine learning (IML) methods. Recent work considers bias and shows that model explanations may differ between imputation methods, while ignoring additional imputation uncertainty and its influence on variance and confidence intervals. We therefore compare the effects of different imputation methods on the confidence interval coverage probabilities of the IML methods permutation feature importance, partial dependence plots and Shapley values. We show that single imputation leads to underestimation of variance and that, in most cases, only multiple imputation is close to nominal coverage.
- oai:arXiv.org:2512.17689v1
- stat.ML
- cs.LG
+ A Universal Framework for Factorial Matched Observational Studies with General Treatment Types: Design, Analysis, and Applications
+ https://arxiv.org/abs/2512.18997
+ arXiv:2512.18997v1 Announce Type: new
+Abstract: Matching is one of the most widely used causal inference frameworks in observational studies. However, all the existing matching-based causal inference methods are designed for either a single treatment with general treatment types (e.g., binary, ordinal, or continuous) or factorial (multiple) treatments with binary treatments only. To our knowledge, no existing matching-based causal methods can handle factorial treatments with general treatment types. This critical gap substantially hinders the applicability of matching in many real-world problems, in which there are often multiple, potentially non-binary (e.g., continuous) treatment components. To address this critical gap, this work develops a universal framework for the design and analysis of factorial matched observational studies with general treatment types (e.g., binary, ordinal, or continuous). We first propose a two-stage non-bipartite matching algorithm that constructs matched sets of units with similar covariates but distinct combinations of treatment doses, thereby enabling valid estimation of both main and interaction effects. We then introduce a new class of generalized factorial Neyman-type estimands that provide model-free, finite-population-valid definitions of marginal and interaction causal effects under factorial treatments with general treatment types. Randomization-based Fisher-type and Neyman-type inference procedures are developed, including unbiased estimators, asymptotically valid variance estimators, and variance adjustments incorporating covariate information for improved efficiency. Finally, we illustrate the proposed framework through a county-level application that evaluates the causal impacts of work- and non-work-trip reductions (social distancing practices) on COVID-19-related and drug-related outcomes during the COVID-19 pandemic in the United States.
+ oai:arXiv.org:2512.18997v1
+ stat.ME
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jianan Zhu, Tianruo Zhang, Diana Silver, Ellicott Matthay, Omar El-Shahawy, Hyunseung Kang, Siyu Heng
+
+
+ Operator Tail Densities of Multivariate Copulas
+ https://arxiv.org/abs/2512.19023
+ arXiv:2512.19023v1 Announce Type: new
+Abstract: Operator regular variation of a multivariate distribution can be decomposed into the operator tail dependence of the underlying copula and the regular variation of the univariate marginals. In this paper, we introduce operator tail densities for copulas and show that an operator-regularly-varying density can be characterized through the operator tail density of its copula together with the marginal regular variation. As an example, we demonstrate that although a Liouville copula is not available in closed form, it nevertheless admits an explicit operator tail-dependence function.
+ oai:arXiv.org:2512.19023v1
+ math.ST
+ math.PR
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Haijun Li
+
+
+ Dyadic Flow Models for Nonstationary Gene Flow in Landscape Genomics
+ https://arxiv.org/abs/2512.19035
+ arXiv:2512.19035v1 Announce Type: new
+Abstract: The field of landscape genomics aims to infer how landscape features affect gene flow across space. Most landscape genomic frameworks assume the isolation-by-distance and isolation-by-resistance hypotheses, which propose that genetic dissimilarity increases as a function of distance and as a function of cumulative landscape resistance, respectively. While these hypotheses are valid in certain settings, other mechanisms may affect gene flow. For example, the gene flow of invasive species may depend on founder effects and multiple introductions. Such mechanisms are not considered in modern landscape genomic models. We extend dyadic models to allow for mechanisms that range-shifting and/or invasive species may experience by introducing dyadic spatially-varying coefficients (DSVCs) defined on source-destination pairs. The DSVCs allow the effects of landscape on gene flow to vary across space, capturing nonstationary and asymmetric connectivity. Additionally, we incorporate explicit landscape features as connectivity covariates, which are localized to specific regions of the spatial domain and may function as barriers or corridors to gene flow. Such covariates are central to colonization and invasion, where spread accelerates along corridors and slows across landscape barriers. The proposed framework accommodates colonization-specific processes while retaining the ability to assess landscape influences on gene flow. Our case study of the highly invasive cheatgrass (Bromus tectorum) demonstrates the necessity of accounting for nonstationary gene flow in range-shifting species.
+ oai:arXiv.org:2512.19035v1
+ stat.AP
+ stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Pegah Golchian, Marvin N. Wright
+ Michael R. Schwob, Nicholas M. Calzada, Justin J. Van Ee, Diana Gamba, Rebecca A. Nelson, Megan L. Vahsen, Peter B. Adler, Jesse R. Lasky, Mevin B. Hooten
- A Dependent Feature Allocation Model Based on Random Fields
- https://arxiv.org/abs/2512.17701
- arXiv:2512.17701v1 Announce Type: new
-Abstract: We introduce a flexible framework for modeling dependent feature allocations. Our approach addresses limitations in traditional nonparametric methods by directly modeling the logit-probability surface of the feature paintbox, enabling the explicit incorporation of covariates and complex but tractable dependence structures. The core of our model is a Gaussian Markov Random Field (GMRF), which we use to robustly decompose the latent field, separating a structural component based on the baseline covariates from intrinsic, unstructured heterogeneity. This structure is not a rigid grid but a sparse k-nearest neighbors graph derived from the latent geometry in the data, ensuring high-dimensional tractability. We extend this framework to a dynamic spatio-temporal process, allowing item effects to evolve via an Ornstein-Uhlenbeck process. Feature correlations are captured using a low-rank factorization of their joint prior. We demonstrate our model's utility by applying it to a polypharmacy dataset, successfully inferring latent health conditions from patient drug profiles.
- oai:arXiv.org:2512.17701v1
- stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
+ Unraveling time-varying causal effects of multiple exposures: integrating Functional Data Analysis with Multivariable Mendelian Randomization
+ https://arxiv.org/abs/2512.19064
+ arXiv:2512.19064v1 Announce Type: new
+Abstract: Mendelian Randomization is a widely used instrumental variable method for assessing causal effects of lifelong exposures on health outcomes. Many exposures, however, have causal effects that vary across the life course and often influence outcomes jointly with other exposures or indirectly through mediating pathways. Existing approaches to multivariable Mendelian Randomization assume constant effects over time and therefore fail to capture these dynamic relationships. We introduce Multivariable Functional Mendelian Randomization (MV-FMR), a new framework that extends functional Mendelian Randomization to simultaneously model multiple time-varying exposures. The method combines functional principal component analysis with a data-driven cross-validation strategy for basis selection and accounts for overlapping instruments and mediation effects. Through extensive simulations, we assessed MV-FMR's ability to recover time-varying causal effects under a range of data-generating scenarios and compared the performance of joint versus separate exposure effect estimation strategies. Across scenarios involving nonlinear effects, horizontal pleiotropy, mediation, and sparse data, MV-FMR consistently recovered the true causal functions and outperformed univariable approaches. To demonstrate its practical value, we applied MV-FMR to UK Biobank data to investigate the time-varying causal effects of systolic blood pressure and body mass index on coronary artery disease. MV-FMR provides a flexible and interpretable framework for disentangling complex time-dependent causal processes and offers new opportunities for identifying life-course critical periods and actionable drivers relevant to disease prevention.
+ oai:arXiv.org:2512.19064v1
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Bernardo Flores, Yang Ni, Yanxun Xu, Peter M\"uller
+ Nicole Fontana, Francesca Ieva, Luisa Zuccolo, Emanuele Di Angelantonio, Piercesare Secchi
- Recursive state estimation via approximate modal paths
- https://arxiv.org/abs/2512.17737
- arXiv:2512.17737v1 Announce Type: new
-Abstract: In this paper, a method for recursively computing approximate modal paths is developed. A recursive formulation of the modal path can be obtained either by backward or forward dynamic programming. By combining both methods, a ``two-filter'' formula is demonstrated. Both method involves a recursion over a so-called value function, which is intractable in general. This problem is overcome by quadratic approximation of the value function in the forward dynamic programming paradigm, resulting in both a filtering and smoothing method. The merit of the approach is verified in a simulation experiments, where it is shown to be on par or better than other modern algorithms.
- oai:arXiv.org:2512.17737v1
+ Smoothed Quantile Estimation: A Unified Framework Interpolating to the Mean
+ https://arxiv.org/abs/2512.19187
+ arXiv:2512.19187v1 Announce Type: new
+Abstract: This paper develops and analyzes three families of estimators that continuously interpolate between classical quantiles and the sample mean. The construction begins with a smoothed version of the $L_{1}$ loss, indexed by a location parameter $z$ and a smoothing parameter $h \ge 0$, whose minimizer $\hat q(z,h)$ yields a unified M-estimation framework. Depending on how $(z, h)$ is specified, this framework generates three distinct classes of estimators: fixed-parameter smoothed quantile estimators, plug-in estimators of fixed quantiles, and a new continuum of mean-estimating procedures. For all three families we establish consistency and asymptotic normality via a uniform asymptotic equicontinuity argument. The limiting variances admit closed forms, allowing a transparent comparison of efficiency across families and smoothing levels. A geometric decomposition of the parameter space shows that, for fixed quantile level $\tau$, admissible pairs $(z, h)$ lie on straight lines along which the estimator targets the same population quantile while its asymptotic variance evolves. The theoretical analysis reveals two efficiency regimes. Under light-tailed distributions (e.g., Gaussian), smoothing yields a monotone variance reduction. Under heavy-tailed distributions (e.g., Laplace), a finite smoothing parameter $h^{*}(\tau) > 0$ strictly improves efficiency for quantile estimation. Numerical experiments -- based on simulated data and real financial returns -- validate these conclusions and show that, both asymptotically and in finite samples, the mean-estimating family does not improve upon the sample mean.
+ oai:arXiv.org:2512.19187v1
+ stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Filip Tronarp
+ Sa\"id Maanan (LPP), Azzouz Dermoune (LPP), Ahmed El Ghini
- Day-Ahead Electricity Price Forecasting Using Merit-Order Curves Time Series
- https://arxiv.org/abs/2512.17758
- arXiv:2512.17758v1 Announce Type: new
-Abstract: We introduce a general, simple, and computationally efficient framework for predicting day-ahead supply and demand merit-order curves, from which both point and probabilistic electricity price forecasts can be derived. Specifically, we leverage functional principal component analysis to efficiently represent a pair of supply and demand curves in a low-dimensional vector space and employ regularized vector autoregressive models for their prediction. We conduct a rigorous empirical comparison of price forecasting performance between the proposed curve-based model, i.e., derived from predicted merit-order curves, and state-of-the-art price-based models that directly forecast the clearing price, using data from the Italian day-ahead market over the 2023-2024 period. Our results show that the proposed curve-based approach significantly improves both point and probabilistic price forecasting accuracy relative to price-based approaches, with average gains of approximately 5%, and improvements of up to 10% during mid-day hours, when prices occasionally drop due to high renewable generation and low demand.
- oai:arXiv.org:2512.17758v1
- stat.AP
- Mon, 22 Dec 2025 00:00:00 -0500
+ Scale-Invariant Robust Estimation of High-Dimensional Kronecker-Structured Matrices
+ https://arxiv.org/abs/2512.19273
+ arXiv:2512.19273v1 Announce Type: new
+Abstract: High-dimensional Kronecker-structured estimation faces a conflict between non-convex scaling ambiguities and statistical robustness. The arbitrary factor scaling distorts gradient magnitudes, rendering standard fixed-threshold robust methods ineffective. We resolve this via Scaled Robust Gradient Descent (SRGD), which stabilizes optimization by de-scaling gradients before truncation. To further enforce interpretability, we introduce Scaled Hard Thresholding (SHT) for invariant variable selection. A two-step estimation procedure, built upon robust initialization and SRGD--SHT iterative updates, is proposed for canonical matrix problems, such as trace regression, matrix GLMs, and bilinear models. The convergence rates are established for heavy-tailed predictors and noise, identifying a phase transition where optimal convergence rates recover under finite noise variance and degrade optimally for heavier tails. Experiments on simulated data and two real-world applications confirm superior robustness and efficiency of the proposed procedure.
+ oai:arXiv.org:2512.19273v1
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Guillaume Koechlin, Filippo Bovera, Piercesare Secchi
+ Xiaoyu Zhang, Zhiyun Fan, Wenyang Zhang, Di Wang
- Towards Sharp Minimax Risk Bounds for Operator Learning
- https://arxiv.org/abs/2512.17805
- arXiv:2512.17805v1 Announce Type: new
-Abstract: We develop a minimax theory for operator learning, where the goal is to estimate an unknown operator between separable Hilbert spaces from finitely many noisy input-output samples. For uniformly bounded Lipschitz operators, we prove information-theoretic lower bounds together with matching or near-matching upper bounds, covering both fixed and random designs under Hilbert-valued Gaussian noise and Gaussian white noise errors. The rates are controlled by the spectrum of the covariance operator of the measure that defines the error metric. Our setup is very general and allows for measures with unbounded support. A key implication is a curse of sample complexity which shows that the minimax risk for generic Lipschitz operators cannot decay at any algebraic rate in the sample size. We obtain essentially sharp characterizations when the covariance spectrum decays exponentially and provide general upper and lower bounds in slower-decay regimes.
- oai:arXiv.org:2512.17805v1
+ Simple Cubic Variance Functions on $\R^n$, Part one
+ https://arxiv.org/abs/2512.19303
+ arXiv:2512.19303v1 Announce Type: new
+Abstract: The classification of natural exponential families started with the paper \cite{Morri}, where Carl Morris unified six very familiar families by the fact that their variance functions are polynomials of degree less than or equal to two. Extension of this classification to $\R^n$ and to degree three is the subject of this paper.
+ Keywords: Actions of the group $GL(n+1,\R)$, classification of natural exponential families, multivariate Lagrange formula, variance functions.
+ oai:arXiv.org:2512.19303v1
+ math.ST
- cs.NA
- math.NA
- stat.MLstat.TH
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Ben Adcock, Gregor Maier, Rahul Parhi
+ Abdelhanid Hassairi, G\'erard Letac
- Delayed Acceptance Slice Sampling
- https://arxiv.org/abs/2512.17868
- arXiv:2512.17868v1 Announce Type: new
-Abstract: Slice sampling is a well-established Markov chain Monte Carlo method for (approximate) sampling of target distributions which are only known up to a normalizing constant. The method is based on choosing a new state on a slice, i.e., a superlevel set of the given unnormalized target density (with respect to a reference measure). However, slice sampling algorithms usually require per step multiple evaluations of the target density, and thus can become computationally expensive. This is particularly the case for Bayesian inference with costly likelihoods. In this paper, we exploit deterministic approximations of the target density, which are relatively cheap to evaluate, and propose delayed acceptance versions of hybrid slice samplers. We show ergodicity of the resulting slice sampling methods, discuss the superiority of delayed acceptance (ideal) slice sampling over delayed acceptance Metropolis-Hastings algorithms, and illustrate the benefits of our novel approach in terms improved computational efficiency in several numerical experiments.
- oai:arXiv.org:2512.17868v1
- stat.CO
+ High dimensional matrix estimation through elliptical factor models
+ https://arxiv.org/abs/2512.19325
+ arXiv:2512.19325v1 Announce Type: new
+Abstract: Elliptical factor models play a central role in modern high-dimensional data analysis, particularly due to their ability to capture heavy-tailed and heterogeneous dependence structures. Within this framework, Tyler's M-estimator (Tyler, 1987a) enjoys several optimality properties and robustness advantages. In this paper, we develop high-dimensional scatter matrix, covariance matrix and precision matrix estimators grounded in Tyler's M-estimation. We first adapt the Principal Orthogonal complEment Thresholding (POET) framework (Fan et al., 2013) by incorporating the spatial-sign covariance matrix as an effective initial estimator. Building on this idea, we further propose a direct extension of POET tailored for Tyler's M-estimation, referred to as the POET-TME method. We establish the consistency rates for the resulting estimators under elliptical factor models. Comprehensive simulation studies and a real data application illustrate the superior performance of POET-TME, especially in the presence of heavy-tailed distributions, demonstrating the practical value of our methodological contributions.
+ oai:arXiv.org:2512.19325v1
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Xinyue Xu, Huifang Ma, Hongfei Wang, Long Feng
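For reference, here is a minimal Python sketch of the original POET covariance estimator of Fan et al. (2013) that the abstract adapts; the robust Tyler-based variant (POET-TME) proposed in the paper is not shown, and the number of factors K and the threshold tau are arbitrary.

# POET: low-rank part from the K leading eigenpairs plus a thresholded residual.
import numpy as np

def poet(X, K=3, tau=0.1):
    """X: n x p data matrix. Returns a factor-plus-thresholded-sparse covariance estimate."""
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1][:K]                    # K leading eigenpairs
    low_rank = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
    R = S - low_rank                                    # residual (idiosyncratic) part
    R_thr = np.where(np.abs(R) >= tau, R, 0.0)          # hard-threshold small entries
    np.fill_diagonal(R_thr, np.diag(R))                 # keep the diagonal untouched
    return low_rank + R_thr

rng = np.random.default_rng(9)
X = rng.normal(size=(400, 50))
Sigma_hat = poet(X, K=3, tau=0.1)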
+
+
+ A hybrid-Hill estimator enabled by heavy-tailed block maxima
+ https://arxiv.org/abs/2512.19338
+ arXiv:2512.19338v1 Announce Type: new
+Abstract: When analysing extreme values, two alternative statistical approaches have historically been held in contention: the seminal block maxima method (or annual maxima method, spurred by hydrological applications) and the peaks-over-threshold. Clamoured amongst statisticians as wasteful of potentially informative data, the block maxima method gradually fell into disfavour whilst peaks-over-threshold-based methodologies were ushered to the centre stage of extreme value statistics. This paper proposes a hybrid method which reconciles these two hitherto disconnected approaches. Appealing in its simplicity, our main result introduces a new universal limiting characterisation of extremes that eschews the customary requirement of a sufficiently large block size for the plausible block maxima-fit to an extreme value distribution. We advocate that inference should be drawn solely on larger block maxima, from which practice the mainstream peaks-over-threshold methodology coalesces. The asymptotic properties of the promised hybrid-Hill estimator herald more than its efficiency, but rather that a fully-fledged unified semi-parametric stream of statistics for extreme values is viable. A finite sample simulation study demonstrates that a reduced-bias off-shoot of the hybrid-Hill estimator fares exceptionally well against the incumbent maximum likelihood estimation that relies on a numerical fit to the entire sample of block maxima.
+ oai:arXiv.org:2512.19338v1
+ math.ST
+ stat.AP
+ stat.ME
+ stat.TH
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Claudia Neves, Chang Xu
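As background for the hybrid-Hill estimator above, this Python sketch implements the classical Hill estimator of the extreme value index from the k largest observations; the choice of k and the Pareto test data are illustrative.

# Classical Hill estimator (peaks-over-threshold side of the story).
import numpy as np

def hill_estimator(x, k):
    """Hill estimate of the extreme value index from the k largest observations."""
    xs = np.sort(x)[::-1]                      # descending order statistics
    return np.mean(np.log(xs[:k]) - np.log(xs[k]))

rng = np.random.default_rng(5)
x = rng.pareto(a=2.0, size=10_000) + 1.0       # Pareto tail, true index 1/a = 0.5
print(hill_estimator(x, k=200))                # roughly 0.5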
+
+
+ Cluster-Based Generalized Additive Models Informed by Random Fourier Features
+ https://arxiv.org/abs/2512.19373
+ arXiv:2512.19373v1 Announce Type: new
+Abstract: Explainable machine learning aims to strike a balance between prediction accuracy and model transparency, particularly in settings where black-box predictive models, such as deep neural networks or kernel-based methods, achieve strong empirical performance but remain difficult to interpret. This work introduces a mixture of generalized additive models (GAMs) in which random Fourier feature (RFF) representations are leveraged to uncover locally adaptive structure in the data. In the proposed method, an RFF-based embedding is first learned and then compressed via principal component analysis. The resulting low-dimensional representations are used to perform soft clustering of the data through a Gaussian mixture model. These cluster assignments are then applied to construct a mixture-of-GAMs framework, where each local GAM captures nonlinear effects through interpretable univariate smooth functions. Numerical experiments on real-world regression benchmarks, including the California Housing, NASA Airfoil Self-Noise, and Bike Sharing datasets, demonstrate improved predictive performance relative to classical interpretable models. Overall, this construction provides a principled approach for integrating representation learning with transparent statistical modeling.
+ oai:arXiv.org:2512.19373v1
+ stat.ML
+ cs.LG
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Xin Huang, Jia Li, Jun Yu
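A minimal Python sketch of the representation step described above: a random Fourier feature embedding compressed by PCA and then soft-clustered with a Gaussian mixture; fitting one GAM per cluster (the mixture-of-GAMs part) is omitted, and the kernel width, embedding dimension, and number of clusters are illustrative.

# RFF embedding -> PCA compression -> soft clustering via a Gaussian mixture.
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
X = rng.normal(size=(1000, 8))                          # stand-in for the covariates

phi = RBFSampler(gamma=0.5, n_components=300, random_state=0).fit_transform(X)
Z = PCA(n_components=10, random_state=0).fit_transform(phi)   # low-dimensional representation
gmm = GaussianMixture(n_components=3, random_state=0).fit(Z)
resp = gmm.predict_proba(Z)                             # soft cluster assignments

# `resp` would then weight per-cluster GAM fits, e.g. via pyGAM or statsmodels.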
+
+
+ A Reduced Basis Decomposition Approach to Efficient Data Collection in Pairwise Comparison Studies
+ https://arxiv.org/abs/2512.19398
+ arXiv:2512.19398v1 Announce Type: new
+Abstract: Comparative judgement studies elicit quality assessments through pairwise comparisons, typically analysed using the Bradley-Terry model. A challenge in these studies is experimental design, specifically, determining the optimal pairs to compare to maximize statistical efficiency. Constructing static experimental designs for these studies requires spectral decomposition of a covariance matrix over pairs of pairs, which becomes computationally infeasible for studies with more than approximately 150 objects. We propose a scalable method based on reduced basis decomposition that bypasses explicit construction of this matrix, achieving computational savings of two to three orders of magnitude. We establish eigenvalue bounds guaranteeing approximation quality and characterise the rank structure of the design matrix. Simulations demonstrate speedup factors exceeding 100 for studies with 64 or more objects, with negligible approximation error. We apply the method to construct designs for a 452-region spatial study in under 7 minutes and enable real-time design updates for classroom peer assessment, reducing computation time from 15 minutes to 15 seconds.
+ oai:arXiv.org:2512.19398v1
+ stat.ME
+ stat.CO
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Jiahua Jiang, Joseph Marsh, Rowland G Seymour
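For context, this Python sketch fits the Bradley-Terry model underlying such comparative judgement studies via its logistic-regression representation; the paper's reduced basis decomposition for experimental design is not reproduced, and the simulated study (20 objects, random pairings) is invented. A small ridge penalty resolves the additive-constant non-identifiability of the quality parameters.

# Bradley-Terry fit via logistic regression on signed indicator design rows.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_bradley_terry(pairs, winners, n_objects):
    """pairs: array of (i, j); winners: 1 if i won, 0 if j won."""
    X = np.zeros((len(pairs), n_objects))
    for row, (i, j) in enumerate(pairs):
        X[row, i], X[row, j] = 1.0, -1.0       # design encodes quality_i - quality_j
    model = LogisticRegression(C=10.0, fit_intercept=False).fit(X, winners)
    return model.coef_.ravel()                 # estimated quality parameters

rng = np.random.default_rng(7)
n_objects, n_comparisons = 20, 1500
quality = rng.normal(size=n_objects)
pairs = rng.choice(n_objects, size=(n_comparisons, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
p_win = 1 / (1 + np.exp(-(quality[pairs[:, 0]] - quality[pairs[:, 1]])))
winners = (rng.uniform(size=len(pairs)) < p_win).astype(int)
print(fit_bradley_terry(pairs, winners, n_objects)[:5])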
+
+
+ A Statistical Framework for Understanding Causal Effects that Vary by Treatment Initiation Time in EHR-based Studies
+ https://arxiv.org/abs/2512.19553
+ arXiv:2512.19553v1 Announce Type: new
+Abstract: Comparative effectiveness studies using electronic health records (EHR) consider data from patients who could ``enter'' the study cohort at any point during an interval that spans many years in calendar time. Unlike treatments in tightly controlled trials, real-world treatments can evolve over calendar time, especially if comparators include standard of care, or procedures where techniques may improve. Efforts to assess whether treatment efficacy itself is changing are complicated by changing patient populations, with potential covariate shift in key effect modifiers. In this work, we propose a statistical framework to estimate calendar-time specific average treatment effects and describe both how and why effects vary across treatment initiation time in EHR-based studies. Our approach projects doubly robust, time-specific treatment effect estimates onto candidate marginal structural models and uses a model selection procedure to best describe how effects vary by treatment initiation time. We further introduce a novel summary metric, based on standardization analysis, to quantify the role of covariate shift in explaining observed effect changes and disentangle changes in treatment effects from changes in the patient population receiving treatment. Extensive simulations using EHR data from Kaiser Permanente are used to validate the utility of the framework, which we apply to study changes in relative weight loss following two bariatric surgical interventions versus no surgery among patients with severe obesity between 2005-2011.
+ oai:arXiv.org:2512.19553v1
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Luke Benz, Rajarshi Mukherjee, Rui Wang, David Arterburn, Heidi Fischer, Catherine Lee, Susan M. Shortreed, Alexander W. Levis, Sebastien Haneuse
+
+
+ Possibilistic Inferential Models for Post-Selection Inference in High-Dimensional Linear Regression
+ https://arxiv.org/abs/2512.19588
+ arXiv:2512.19588v1 Announce Type: new
+Abstract: Valid uncertainty quantification after model selection remains challenging in high-dimensional linear regression, especially within the possibilistic inferential model (PIM) framework. We develop possibilistic inferential models for post-selection inference based on a regularized split possibilistic construction (RSPIM) that combines generic high-dimensional selectors with PIM validification through sample splitting. A first subsample is used to select a sparse model; ordinary least-squares refits on an independent inference subsample yield classical t/F pivots, which are then turned into consonant plausibility contours. In Gaussian linear models this leads to coordinatewise intervals with exact finite-sample strong validity conditional on the split and selected model, uniformly over all selectors that use only the selection data. We further analyze RSPIM in a sparse p >> n regime under high-level screening conditions, develop orthogonalized and bootstrap-based extensions for low-dimensional targets with high-dimensional nuisance, and study a maxitive multi-split aggregation that stabilizes inference across random splits while preserving strong validity. Simulations and a riboflavin gene-expression example show that calibrated RSPIM intervals are well behaved under both Gaussian and heteroskedastic errors and are competitive with state-of-the-art post-selection methods, while plausibility contours provide transparent diagnostics of post-selection uncertainty.
+ oai:arXiv.org:2512.19588v1
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kevin Bitterlich, Daniel Rudolf, Bj\"orn Sprungk
+ Yaohui Lin
- Principled Identification of Structural Dynamic Models
- https://arxiv.org/abs/2512.17005
- arXiv:2512.17005v1 Announce Type: cross
-Abstract: We take a new perspective on identification in structural dynamic models: rather than imposing restrictions, we optimize an objective. This provides new theoretical insights into traditional Cholesky identification. A correlation-maximizing objective yields an Order- and Scale-Invariant Identification Scheme (OASIS) that selects the orthogonal rotation that best aligns structural shocks with their reduced-form innovations. We revisit a large number of SVAR studies and find, across 22 published SVARs, that the correlations between structural and reduced-form shocks are generally high.
- oai:arXiv.org:2512.17005v1
+ srvar-toolkit: A Python Implementation of Shadow-Rate Vector Autoregressions with Stochastic Volatility
+ https://arxiv.org/abs/2512.19589
+ arXiv:2512.19589v1 Announce Type: new
+Abstract: We introduce srvar-toolkit, an open-source Python package for Bayesian vector autoregression with shadow-rate constraints and stochastic volatility. The toolkit implements the methodology of Grammatikopoulos (2025, Journal of Forecasting) for forecasting macroeconomic variables when interest rates hit the effective lower bound. We provide conjugate Normal-Inverse-Wishart priors with Minnesota-style shrinkage, latent shadow-rate data augmentation via Gibbs sampling, diagonal stochastic volatility using the Kim-Shephard-Chib mixture approximation, and stochastic search variable selection. Core dependencies are NumPy, SciPy, and Pandas, with optional extras for plotting and a configuration-driven command-line interface. We release the software under the MIT licence at https://github.com/shawcharles/srvar-toolkit.
+ oai:arXiv.org:2512.19589v1
+ stat.CO
+ econ.EM
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Charles Shaw
+
+
+ Ant Colony Optimisation applied to the Travelling Santa Problem
+ https://arxiv.org/abs/2512.19627
+ arXiv:2512.19627v1 Announce Type: new
+Abstract: The hypothetical global delivery schedule of Santa Claus must follow strict rolling night-time windows that vary with the Earth's rotation and obey an energy budget that depends on payload size and cruising speed. To design this schedule, the Travelling-Santa Ant-Colony Optimisation framework (TSaP-ACO) was developed. This heuristic framework constructs potential routes via a population of artificial ants that iteratively extend partial paths. Ants make their decisions much like they do in nature, following pheromones left by other ants, but with a degree of permitted exploration. This approach: (i) embeds local darkness feasibility directly into the pheromone heuristic, (ii) seeks to minimise aerodynamic work via a shrinking sleigh cross-sectional area, (iii) uses a low-cost "rogue-ant" reversal to capture direction-sensitive time-zones, and (iv) tunes leg-specific cruise speeds on the fly. On benchmark sets of 15 and 30 capital cities, the TSaP-ACO eliminates all daylight violations and reduces total work by up to 10% compared to a distance-only ACO. In a 40-capital-city stress test, it cuts energy use by 88% and shortens tour length by around 67%. Population-first routing emerges naturally from work minimisation (50% served by leg 11 of 40). These results demonstrate that rolling-window, energy-aware ACO has potential applications in more realistic global delivery scenarios.
+ oai:arXiv.org:2512.19627v1
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Elliot Fisher, Robin Smith
+
+
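For readers unfamiliar with the underlying heuristic, a bare-bones, distance-only ant colony optimisation loop for a travelling-salesman tour is sketched below; this corresponds roughly to the baseline the abstract compares against and omits the darkness windows, energy model, and rogue-ant reversal of TSaP-ACO. City coordinates and parameters are arbitrary.

```python
# Minimal distance-only ACO for a TSP tour (illustrative baseline, not TSaP-ACO).
import numpy as np

rng = np.random.default_rng(0)
n = 15                                   # number of cities
xy = rng.uniform(size=(n, 2))            # random city coordinates
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
np.fill_diagonal(dist, np.inf)

tau = np.ones((n, n))                    # pheromone levels
alpha, beta, rho, Q = 1.0, 3.0, 0.5, 1.0 # ACO hyperparameters
best_len, best_tour = np.inf, None

for it in range(200):
    tours = []
    for ant in range(20):
        tour = [int(rng.integers(n))]
        while len(tour) < n:
            i = tour[-1]
            weights = (tau[i] ** alpha) * ((1.0 / dist[i]) ** beta)
            weights[tour] = 0.0          # forbid already-visited cities
            nxt = rng.choice(n, p=weights / weights.sum())
            tour.append(int(nxt))
        length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
        tours.append((length, tour))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= (1.0 - rho)                   # evaporation
    for length, tour in tours:           # deposit proportional to 1/length
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            tau[a, b] += Q / length
            tau[b, a] += Q / length

print("best tour length:", round(best_len, 3))
```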
+ A Markov Chain Modeling Approach for Predicting Relative Risks of Spatial Clusters in Public Health
+ https://arxiv.org/abs/2512.19635
+ arXiv:2512.19635v1 Announce Type: new
+Abstract: Predicting relative risk (RR) of spatial clusters is a complex task in public health that can be achieved through various statistical and machine-learning methods for different time intervals. However, high-resolution longitudinal data is often unavailable to successfully apply such methods. The goal of the present study is to further develop and test a new methodology proposed in our previous work for accurate sequential RR predictions in the case of limited longitudinal data. In particular, we first use a well-known likelihood ratio test to identify significant spatial clusters over user-defined time intervals. Then we apply a Markov chain modeling approach to predict RR values for each time interval. Our findings demonstrate that the proposed approach yields better performance with COVID-19 morbidity data compared to the previous study on mortality data. Additionally, increasing the number of time intervals enhances the accuracy of the proposed Markov chain modeling method.
+ oai:arXiv.org:2512.19635v1
+ stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Lyza Iamrache, Kamel Rekab, Majid Bani-Yagoub, Julia Pluta, Abdelghani Mehailia
+
+
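The Markov-chain step described above can be illustrated by estimating a transition matrix between discretised relative-risk states from interval-to-interval sequences and using it for one-step-ahead prediction. The RR sequences and bins below are made up for illustration; this is not the authors' code.

```python
# Sketch: estimate a transition matrix over discretised relative-risk (RR) states
# and predict the next interval's state distribution (illustrative data).
import numpy as np

# One RR time series per significant cluster, over consecutive time intervals.
rr_sequences = [
    [1.2, 1.8, 2.5, 2.1, 1.4],
    [0.8, 1.1, 1.9, 2.8, 3.0],
    [2.2, 2.6, 1.7, 1.3, 1.0],
]
bins = [1.0, 2.0]                       # states: 0 (RR<1), 1 (1<=RR<2), 2 (RR>=2)
n_states = len(bins) + 1

counts = np.zeros((n_states, n_states))
for seq in rr_sequences:
    states = np.digitize(seq, bins)
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1

# Row-normalise to a transition matrix (uniform rows where no transitions observed).
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.full_like(counts, 1.0 / n_states),
              where=row_sums > 0)

current = np.digitize([2.4], bins)[0]   # last observed state of some cluster
print("transition matrix:\n", P.round(2))
print("predicted next-state distribution:", P[current].round(2))
```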
+ Testing for Conditional Independence in Binary Single-Index Models
+ https://arxiv.org/abs/2512.19641
+ arXiv:2512.19641v1 Announce Type: new
+Abstract: We wish to test whether a real-valued variable $Z$ has explanatory power, in addition to a multivariate variable $X$, for a binary variable $Y$. Thus, we are interested in testing the hypothesis $\mathbb{P}(Y=1\, | \, X,Z)=\mathbb{P}(Y=1\, | \, X)$, based on $n$ i.i.d.\ copies of $(X,Y,Z)$. In order to avoid the curse of dimensionality, we follow the common approach of assuming that the dependence of both $Y$ and $Z$ on $X$ is through a single-index $X^\top\beta$ only. Splitting the sample on both $Y$-values, we construct a two-sample empirical process of transformed $Z$-variables, after splitting the $X$-space into parallel strips. Studying this two-sample empirical process is challenging: it does not converge weakly to a standard Brownian bridge, but after an appropriate normalization it does. We use this result to construct distribution-free tests.
+ oai:arXiv.org:2512.19641v1
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ John H. J. Einmahl, Denis Kojevnikov, Bas J. M. Werker
+
+
+ An Adaptive Graphical Lasso Approach to Modeling Symptom Networks of Common Mental Disorders in Eritrean Refugee Population
+ https://arxiv.org/abs/2512.19681
+ arXiv:2512.19681v1 Announce Type: new
+Abstract: Despite the significant public health burden of common mental disorders (CMDs) among refugee populations, their underlying symptom structures remain underexplored. This study uses Gaussian graphical modeling to examine the symptom network of post-traumatic stress disorder (PTSD), depression, anxiety, and somatic distress among Eritrean refugees in the Greater Washington, DC area. Given the small sample size (n) and high-dimensional symptom space (p), we propose a novel extension of the standard graphical LASSO by incorporating adaptive penalization, which improves sparsity selection and network estimation stability under n < p conditions. To evaluate the reliability of the network, we apply bootstrap resampling and use centrality measures to identify the most influential symptoms. Our analysis identifies six distinct symptom clusters, with somatic-anxiety symptoms forming the most interconnected group. Notably, symptoms such as nausea and reliving past experiences emerge as central symptoms linking PTSD, anxiety, depression, and somatic distress. Additionally, we identify symptoms like feeling fearful, sleep problems, and loss of interest in activities as key symptoms, either being closely positioned to many others or acting as important bridges that help maintain the overall network connectivity, thereby highlighting their potential importance as possible intervention targets.
+ oai:arXiv.org:2512.19681v1
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Elizabeth B. Amona, Indranil Sahoo, David Chan, Marianne B. Lund, Miriam Kuttikat
+
+
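The adaptive penalisation idea can be sketched with a nodewise (neighbourhood-selection) surrogate: each variable is regressed on the others with an adaptive lasso, implemented by rescaling columns with data-driven weights before a standard lasso fit. This illustrates adaptive l1 penalisation in the n < p regime only; it is not the authors' adaptive graphical lasso, and the data and tuning constants are arbitrary.

```python
# Sketch: nodewise adaptive lasso as a surrogate for adaptive graphical-lasso
# edge selection when n < p (illustrative only).
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p = 60, 80                            # fewer observations than symptoms
X = rng.standard_normal((n, p))
X[:, 1] += 0.8 * X[:, 0]                 # plant one conditional dependence
X = (X - X.mean(0)) / X.std(0)

gamma, lam = 1.0, 0.05
edges = set()
for j in range(p):
    y = X[:, j]
    Z = np.delete(X, j, axis=1)
    # Adaptive weights from an initial ridge fit: w_k = 1 / |beta_init_k|^gamma.
    beta_init = Ridge(alpha=1.0).fit(Z, y).coef_
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-6)
    # Adaptive lasso = ordinary lasso on columns rescaled by 1/w,
    # with coefficients rescaled back afterwards.
    beta = Lasso(alpha=lam, max_iter=10000).fit(Z / w, y).coef_ / w
    others = [k for k in range(p) if k != j]
    for k, b in zip(others, beta):
        if abs(b) > 1e-8:
            edges.add(tuple(sorted((j, k))))

print("number of selected edges:", len(edges))
print("edge (0,1) recovered:", (0, 1) in edges)
```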
+ A curated UK rain radar data set for training and benchmarking nowcasting models
+ https://arxiv.org/abs/2512.17924
+ arXiv:2512.17924v1 Announce Type: cross
+Abstract: This paper documents a data set of UK rain radar image sequences for use in statistical modeling and machine learning methods for nowcasting. The main dataset contains 1,000 randomly sampled sequences of length 20 steps (15-minute increments) of 2D radar intensity fields of dimension 40x40 (at 5km spatial resolution). Spatially stratified sampling ensures spatial homogeneity despite removal of clear-sky cases by threshold-based truncation. For each radar sequence, additional atmospheric and geographic features are made available, including date, location, mean elevation, mean wind direction and speed and prevailing storm type. New R functions to extract data from the binary "Nimrod" radar data format are provided. A case study is presented to train and evaluate a simple convolutional neural network for radar nowcasting, including self-contained R code.
+ oai:arXiv.org:2512.17924v1
+ physics.ao-ph
+ cs.CV
+ cs.LG
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Viv Atureta, Rifki Priansyah Jasin, Stefan Siegert
+
+
+ Comparative Evaluation of Explainable Machine Learning Versus Linear Regression for Predicting County-Level Lung Cancer Mortality Rate in the United States
+ https://arxiv.org/abs/2512.17934
+ arXiv:2512.17934v1 Announce Type: cross
+Abstract: Lung cancer (LC) is a leading cause of cancer-related mortality in the United States. Accurate prediction of LC mortality rates is crucial for guiding targeted interventions and addressing health disparities. Although traditional regression-based models have been commonly used, explainable machine learning models may offer enhanced predictive accuracy and deeper insights into the factors influencing LC mortality. This study applied three models: random forest (RF), gradient boosting regression (GBR), and linear regression (LR) to predict county-level LC mortality rates across the United States. Model performance was evaluated using R-squared and root mean squared error (RMSE). Shapley Additive Explanations (SHAP) values were used to determine variable importance and their directional impact. Geographic disparities in LC mortality were analyzed through Getis-Ord (Gi*) hotspot analysis. The RF model outperformed both GBR and LR, achieving an R2 value of 41.9% and an RMSE of 12.8. SHAP analysis identified smoking rate as the most important predictor, followed by median home value and the percentage of the Hispanic ethnic population. Spatial analysis revealed significant clusters of elevated LC mortality in the mid-eastern counties of the United States. The RF model demonstrated superior predictive performance for LC mortality rates, emphasizing the critical roles of smoking prevalence, housing values, and the percentage of Hispanic ethnic population. These findings offer valuable actionable insights for designing targeted interventions, promoting screening, and addressing health disparities in regions most affected by LC in the United States.
+ oai:arXiv.org:2512.17934v1
+ cs.LG
+ cs.AI
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
- Neville Francis, Peter Reinhard Hansen, Chen Tong
+ 10.1200/CCI-24-00310
+ JCO Clin Cancer Inform JCO Clinical Cancer Informatics, 2025 Nov:9:e2400310
+ Soheil Hashtarkhani, Brianna M. White, Benyamin Hoseini, David L. Schwartz, Arash Shaban-Nejad
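As a dependency-light sketch of the modelling workflow in the abstract above (random forest regression plus variable-importance analysis), the code below fits a RandomForestRegressor on synthetic county-level predictors and reports R-squared, RMSE, and permutation importances. The paper uses SHAP values and real county data, so the data, feature names, and importance method here are stand-ins.

```python
# Sketch: random-forest regression for county-level mortality with permutation
# importances (the paper uses SHAP and real data; everything here is synthetic).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 1500
X = pd.DataFrame({
    "smoking_rate": rng.uniform(10, 35, n),
    "median_home_value": rng.uniform(50, 400, n),   # thousands of dollars
    "pct_hispanic": rng.uniform(0, 60, n),
    "pct_uninsured": rng.uniform(2, 25, n),
})
# Synthetic mortality rate loosely driven by smoking and home values.
y = 2.0 * X["smoking_rate"] - 0.05 * X["median_home_value"] + rng.normal(0, 8, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

pred = rf.predict(X_te)
print("R2  :", round(r2_score(y_te, pred), 3))
print("RMSE:", round(np.sqrt(mean_squared_error(y_te, pred)), 2))

imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:>20s}: {score:.3f}")
```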
- Flux-Preserving Adaptive Finite State Projection for Multiscale Stochastic Reaction Networks
- https://arxiv.org/abs/2512.17064
- arXiv:2512.17064v1 Announce Type: cross
-Abstract: The Finite State Projection (FSP) method approximates the Chemical Master Equation (CME) by restricting the dynamics to a finite subset of the (typically infinite) state space, enabling direct numerical solution with computable error bounds. Adaptive variants update this subset in time, but multiscale systems with widely separated reaction rates remain challenging, as low-probability bottleneck states can carry essential probability flux and the dynamics alternate between fast transients and slowly evolving stiff regimes. We propose a flux-based adaptive FSP method that uses probability flux to drive both state-space pruning and time-step selection. The pruning rule protects low-probability states with large outgoing flux, preserving connectivity in bottleneck systems, while the time-step rule adapts to the instantaneous total flux to handle rate constants spanning several orders of magnitude. Numerical experiments on stiff, oscillatory, and bottleneck reaction networks show that the method maintains accuracy while using substantially smaller state spaces.
- oai:arXiv.org:2512.17064v1
- cs.CE
+ Adaptive Agents in Spatial Double-Auction Markets: Modeling the Emergence of Industrial Symbiosis
+ https://arxiv.org/abs/2512.17979
+ arXiv:2512.17979v1 Announce Type: cross
+Abstract: Industrial symbiosis fosters circularity by enabling firms to repurpose residual resources, yet its emergence is constrained by socio-spatial frictions that shape costs, matching opportunities, and market efficiency. Existing models often overlook the interaction between spatial structure, market design, and adaptive firm behavior, limiting our understanding of where and how symbiosis arises. We develop an agent-based model where heterogeneous firms trade byproducts through a spatially embedded double-auction market, with prices and quantities emerging endogenously from local interactions. Leveraging reinforcement learning, firms adapt their bidding strategies to maximize profit while accounting for transport costs, disposal penalties, and resource scarcity. Simulation experiments reveal the economic and spatial conditions under which decentralized exchanges converge toward stable and efficient outcomes. Counterfactual regret analysis shows that sellers' strategies approach a near Nash equilibrium, while sensitivity analysis highlights how spatial structures and market parameters jointly govern circularity. Our model provides a basis for exploring policy interventions that seek to align firm incentives with sustainability goals, and more broadly demonstrates how decentralized coordination can emerge from adaptive agents in spatially constrained markets.
+ oai:arXiv.org:2512.17979v1
+ cs.GT
+ cs.AI
+ cs.MA
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ AAMAS 2026, Paphos, IFAAMAS, 10 pages
+ Matthieu Mastio, Paul Saves, Benoit Gaudou, Nicolas Verstaevel
+
+
+ Inference in partially identified moment models via regularized optimal transport
+ https://arxiv.org/abs/2512.18084
+ arXiv:2512.18084v1 Announce Type: cross
+Abstract: Partial identification often arises when the joint distribution of the data is known only up to its marginals. We consider the corresponding partially identified GMM model and develop a methodology for identification, estimation, and inference in this model. We characterize the sharp identified set for the parameter of interest via a support-function/optimal-transport (OT) representation. For estimation, we employ entropic regularization, which provides a smooth approximation to classical OT and can be computed efficiently by the Sinkhorn algorithm. We also propose a statistic for testing hypotheses and constructing confidence regions for the identified set. To derive the asymptotic distribution of this statistic, we establish a novel central limit theorem for the entropic OT value under general smooth costs. We then obtain valid critical values using the bootstrap for directionally differentiable functionals of Fang and Santos (2019). The resulting testing procedure controls size locally uniformly, including at parameter values on the boundary of the identified set. We illustrate its performance in a Monte Carlo simulation. Our methodology is applicable to a wide range of empirical settings, such as panels with attrition and refreshment samples, nonlinear treatment effects, nonparametric instrumental variables without large-support conditions, and Euler equations with repeated cross-sections.
+ oai:arXiv.org:2512.18084v1
+ econ.EM
+ math.ST
- stat.COstat.TH
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Grigory Franguridi, Laura Liu
+
+
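The entropic-regularisation step mentioned in the abstract can be illustrated with the standard Sinkhorn iteration for two discrete marginals. The sketch below is a generic numpy implementation, not the paper's GMM estimation or testing machinery.

```python
# Sketch: Sinkhorn iterations for entropically regularised optimal transport
# between two discrete marginals (generic illustration).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=40)        # support of the first marginal
z = rng.normal(0.5, 1.2, size=50)        # support of the second marginal
p = np.full(x.size, 1.0 / x.size)        # uniform marginal weights
q = np.full(z.size, 1.0 / z.size)

C = (x[:, None] - z[None, :]) ** 2       # quadratic cost
eps = 0.5                                # entropic regularisation strength
K = np.exp(-C / eps)                     # Gibbs kernel

u = np.ones_like(p)
for _ in range(500):                     # Sinkhorn fixed-point iterations
    v = q / (K.T @ u)
    u = p / (K @ v)

plan = u[:, None] * K * v[None, :]       # entropic optimal coupling
print("marginal errors:", np.abs(plan.sum(1) - p).max(), np.abs(plan.sum(0) - q).max())
print("entropic OT cost:", float((plan * C).sum()))
```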
+ TraCeR: Transformer-Based Competing Risk Analysis with Longitudinal Covariates
+ https://arxiv.org/abs/2512.18129
+ arXiv:2512.18129v1 Announce Type: cross
+Abstract: Survival analysis is a critical tool for modeling time-to-event data. Recent deep learning-based models have reduced various modeling assumptions including proportional hazard and linearity. However, a persistent challenge remains in incorporating longitudinal covariates, with prior work largely focusing on cross-sectional features, and in assessing calibration of these models, with research primarily focusing on discrimination during evaluation. We introduce TraCeR, a transformer-based survival analysis framework for incorporating longitudinal covariates. Based on a factorized self-attention architecture, TraCeR estimates the hazard function from a sequence of measurements, naturally capturing temporal covariate interactions without assumptions about the underlying data-generating process. The framework is inherently designed to handle censored data and competing events. Experiments on multiple real-world datasets demonstrate that TraCeR achieves substantial and statistically significant performance improvements over state-of-the-art methods. Furthermore, our evaluation extends beyond discrimination metrics and assesses model calibration, addressing a key oversight in literature.
+ oai:arXiv.org:2512.18129v1
+ cs.LG
+ stat.ML
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
- Aditya Dendukuri, Shivkumar Chandrasekaran, Linda Petzold
+ Maxmillan Ries, Sohan Seth
- Dirichlet Meets Horvitz and Thompson: Estimating Homophily in Large Networks via Sampling
- https://arxiv.org/abs/2512.17084
- arXiv:2512.17084v1 Announce Type: cross
-Abstract: Assessing homophily in large-scale networks is central to understanding structural regularities in graphs, and thus inform the choice of models (such as graph neural networks) adopted to learn from network data. Evaluation of smoothness metrics requires access to the entire network topology and node features, which may be impractical in several large-scale, dynamic, resource-limited, or privacy-constrained settings. In this work, we propose a sampling-based framework to estimate homophily via the Dirichlet energy (Laplacian-based total variation) of graph signals, leveraging the Horvitz-Thompson (HT) estimator for unbiased inference from partial graph observations. The Dirichlet energy is a so-termed total (of squared nodal feature deviations) over graph edges; hence, estimable under general network sampling designs for which edge-inclusion probabilities can be analytically derived and used as weights in the proposed HT estimator. We establish that the Dirichlet energy can be consistently estimated from sampled graphs, and empirically study other heterophily measures as well. Experiments on several heterophilic benchmark datasets demonstrate the effectiveness of the proposed HT estimators in reliably capturing homophilic structure (or lack thereof) from sampled network measurements.
- oai:arXiv.org:2512.17084v1
- eess.SP
- cs.SI
- stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
+ Adapting cluster graphs for inference of continuous trait evolution on phylogenetic networks
+ https://arxiv.org/abs/2512.18139
+ arXiv:2512.18139v1 Announce Type: cross
+Abstract: Dynamic programming approaches have long been applied to fit models of univariate and multivariate trait evolution on phylogenetic trees for discrete and continuous traits, and more recently adapted to phylogenetic networks with reticulation. We previously showed that various trait evolution models on a network can be readily cast as probabilistic graphical models, so that likelihood-based estimation can proceed efficiently via belief propagation on an associated clique tree. Even so, exact likelihood inference can grow computationally prohibitive for large complex networks. Loopy belief propagation can similarly be applied to these settings, using non-tree cluster graphs to optimize a factored energy approximation to the log-likelihood, and may provide a more practical trade-off between estimation accuracy and runtime. However, the influence of cluster graph structure on this trade-off is not precisely understood. We conduct a simulation study using the Julia package PhyloGaussianBeliefProp to investigate how varying maximum cluster size affects this trade-off for Gaussian trait evolution models on networks. We discuss recommended choices for maximum cluster size, and prove the equivalence of likelihood-based and factored-energy-based parameter estimates for the homogeneous Brownian motion model.
+ oai:arXiv.org:2512.18139v1
+ q-bio.PE
+ stat.CO
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
- Hamed Ajorlou, Gonzalo Mateos, Luana Ruiz
+ Benjamin Teo, C\'ecile An\'e
- Smoothing DiLoCo with Primal Averaging for Faster Training of LLMs
- https://arxiv.org/abs/2512.17131
- arXiv:2512.17131v1 Announce Type: cross
-Abstract: We propose Generalized Primal Averaging (GPA), an extension of Nesterov's method in its primal averaging formulation that addresses key limitations of recent averaging-based optimizers such as single-worker DiLoCo and Schedule-Free (SF) in the non-distributed setting. These two recent algorithmic approaches improve the performance of base optimizers, such as AdamW, through different iterate averaging strategies. Schedule-Free explicitly maintains a uniform average of past weights, while single-worker DiLoCo performs implicit averaging by periodically aggregating trajectories, called pseudo-gradients, to update the model parameters. However, single-worker DiLoCo's periodic averaging introduces a two-loop structure, increasing its memory requirements and number of hyperparameters. GPA overcomes these limitations by decoupling the interpolation constant in the primal averaging formulation of Nesterov. This decoupling enables GPA to smoothly average iterates at every step, generalizing and improving upon single-worker DiLoCo. Empirically, GPA consistently outperforms single-worker DiLoCo while removing the two-loop structure, simplifying hyperparameter tuning, and reducing its memory overhead to a single additional buffer. On the Llama-160M model, GPA provides a 24.22% speedup in terms of steps to reach the baseline (AdamW's) validation loss. Likewise, GPA achieves speedups of 12% and 27% on small and large batch setups, respectively, to attain AdamW's validation accuracy on the ImageNet ViT workload. Furthermore, we prove that for any base optimizer with regret bounded by $O(\sqrt{T})$, where $T$ is the number of iterations, GPA can match or exceed the convergence guarantee of the original optimizer, depending on the choice of interpolation constants.
- oai:arXiv.org:2512.17131v1
+ Central Limit Theorem for ergodic averages of Markov chains \& the comparison of sampling algorithms for heavy-tailed distributions
+ https://arxiv.org/abs/2512.18255
+ arXiv:2512.18255v1 Announce Type: cross
+Abstract: Establishing central limit theorems (CLTs) for ergodic averages of Markov chains is a fundamental problem in probability and its applications. Since the seminal work~\cite{MR834478}, a vast literature has emerged on the sufficient conditions for such CLTs. To counterbalance this, the present paper provides verifiable necessary conditions for CLTs of ergodic averages of Markov chains on general state spaces. Our theory is based on drift conditions, which also yield lower bounds on the rates of convergence to stationarity in various metrics.
+ The validity of the ergodic CLT is of particular importance for sampling algorithms, where it underpins the error analysis of estimators in Bayesian statistics and machine learning. Although heavy-tailed sampling is of central importance in applications, the characterisation of the CLT and the convergence rates are theoretically poorly understood for almost all practically-used Markov chain Monte Carlo (MCMC) algorithms. In this setting our results provide sharp conditions on the validity of the ergodic CLT and establish convergence rates for large families of MCMC sampling algorithms for heavy-tailed targets. Our study includes a rather complete analysis of random walk Metropolis samplers (with finite- and infinite-variance proposals), Metropolis-adjusted and unadjusted Langevin algorithms and the stereographic projection sampler (as well as the independence sampler). By providing these sharp results via our practical drift conditions, our theory offers significant insights into the problems of algorithm selection and comparison for sampling heavy-tailed distributions (see short YouTube presentations~\cite{YouTube_talk} describing our \href{https://youtu.be/m2y7U4cEqy4}{\underline{theory}} and \href{https://youtu.be/w8I_oOweuko}{\underline{applications}}).
+ oai:arXiv.org:2512.18255v1
+ math.PR
+ stat.CO
+ stat.ML
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Miha Bre\v{s}ar, Aleksandar Mijatovi\'c, Gareth Roberts
+
+
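For reference, the ergodic CLT at issue takes the standard form below, for a stationary chain $X_0, X_1, \dots$ with invariant law $\pi$ and square-integrable $f$; when the CLT holds, the asymptotic variance admits the usual autocovariance representation (provided the series converges), and the paper gives drift-based necessary conditions for this to be valid.
\[
\sqrt{n}\left(\frac{1}{n}\sum_{i=0}^{n-1} f(X_i) - \pi(f)\right) \xrightarrow{d} \mathcal{N}\left(0, \sigma_f^2\right),
\qquad
\sigma_f^2 = \operatorname{Var}_\pi\big(f(X_0)\big) + 2\sum_{k=1}^{\infty} \operatorname{Cov}_\pi\big(f(X_0), f(X_k)\big).
\]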
+ Privacy Data Pricing: A Stackelberg Game Approach
+ https://arxiv.org/abs/2512.18296
+ arXiv:2512.18296v1 Announce Type: cross
+Abstract: Data markets are emerging as key mechanisms for trading personal and organizational data. Traditional data pricing studies -- such as query-based or arbitrage-free pricing models -- mainly emphasize price consistency and profit maximization but often neglect privacy constraints and strategic interactions. The widespread adoption of differential privacy (DP) introduces a fundamental privacy-utility trade-off: noise protects individuals' privacy but reduces data accuracy and market value. This paper develops a Stackelberg game framework for pricing DP data, where the market maker (leader) sets the price function and the data buyer (follower) selects the optimal query precision under DP constraints. We derive the equilibrium strategies for both parties under a balanced pricing function where the pricing decision variable enters linearly into the original pricing model. We obtain closed-form solutions for the optimal variance and pricing level, and determine the boundary conditions for market participation. Furthermore, we extend the analysis to Stackelberg games involving nonlinear power pricing functions. The model bridges DP and economic mechanism design, offering a unified foundation for incentive-compatible and privacy-conscious data pricing in data markets.
+ oai:arXiv.org:2512.18296v1
+ cs.GT
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Lijun Bo, Weiqiang Chang
+
+
+ Towards Guided Descent: Optimization Algorithms for Training Neural Networks At Scale
+ https://arxiv.org/abs/2512.18373
+ arXiv:2512.18373v1 Announce Type: cross
+Abstract: Neural network optimization remains one of the most consequential yet poorly understood challenges in modern AI research, where improvements in training algorithms can lead to enhanced feature learning in foundation models, order-of-magnitude reductions in training time, and improved interpretability into how networks learn. While stochastic gradient descent (SGD) and its variants have become the de facto standard for training deep networks, their success in these over-parameterized regimes often appears more empirical than principled. This thesis investigates this apparent paradox by tracing the evolution of optimization algorithms from classical first-order methods to modern higher-order techniques, revealing how principled algorithmic design can demystify the training process. Starting from first principles with SGD and adaptive gradient methods, the analysis progressively uncovers the limitations of these conventional approaches when confronted with anisotropy that is representative of real-world data. These breakdowns motivate the exploration of sophisticated alternatives rooted in curvature information: second-order approximation techniques, layer-wise preconditioning, adaptive learning rates, and more. Next, the interplay between these optimization algorithms and the broader neural network training toolkit, which includes prior and recent developments such as maximal update parametrization, learning rate schedules, and exponential moving averages, emerges as equally essential to empirical success. To bridge the gap between theoretical understanding and practical deployment, this paper offers practical prescriptions and implementation strategies for integrating these methods into modern deep learning workflows.
+ oai:arXiv.org:2512.18373v1
+ cs.LG
- cs.AI
+ math.OC
+ stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
- Aaron Defazio, Konstantin Mishchenko, Parameswaran Raman, Hao-Jun Michael Shi, Lin Xiao
+ Ansh Nagwekar
- meval: A Statistical Toolbox for Fine-Grained Model Performance Analysis
- https://arxiv.org/abs/2512.17409
- arXiv:2512.17409v1 Announce Type: cross
-Abstract: Analyzing machine learning model performance stratified by patient and recording properties is becoming the accepted norm and often yields crucial insights about important model failure modes. Performing such analyses in a statistically rigorous manner is non-trivial, however. Appropriate performance metrics must be selected that allow for valid comparisons between groups of different sample sizes and base rates; metric uncertainty must be determined and multiple comparisons be corrected for, in order to assess whether any observed differences may be purely due to chance; and in the case of intersectional analyses, mechanisms must be implemented to find the most `interesting' subgroups within combinatorially many subgroup combinations. We here present a statistical toolbox that addresses these challenges and enables practitioners to easily yet rigorously assess their models for potential subgroup performance disparities. While broadly applicable, the toolbox is specifically designed for medical imaging applications. The analyses provided by the toolbox are illustrated in two case studies, one in skin lesion malignancy classification on the ISIC2020 dataset and one in chest X-ray-based disease classification on the MIMIC-CXR dataset.
- oai:arXiv.org:2512.17409v1
+ The Challenger: When Do New Data Sources Justify Switching Machine Learning Models?
+ https://arxiv.org/abs/2512.18390
+ arXiv:2512.18390v1 Announce Type: cross
+Abstract: We study the problem of deciding whether, and when, an organization should replace a trained incumbent model with a challenger relying on newly available features. We develop a unified economic and statistical framework that links learning-curve dynamics, data-acquisition and retraining costs, and discounting of future gains. First, we characterize the optimal switching time in stylized settings and derive closed-form expressions that quantify how horizon length, learning-curve curvature, and cost differentials shape the optimal decision. Second, we propose three practical algorithms: a one-shot baseline, a greedy sequential method, and a look-ahead sequential method. Using a real-world credit-scoring dataset with gradually arriving alternative data, we show that (i) optimal switching times vary systematically with cost parameters and learning-curve behavior, and (ii) the look-ahead sequential method outperforms other methods and is able to approach in value an oracle with full foresight. Finally, we establish finite-sample guarantees, including conditions under which the sequential look-ahead method achieves sublinear regret relative to that oracle. Our results provide an operational blueprint for economically sound model transitions as new data sources become available.
+ oai:arXiv.org:2512.18390v1
+ cs.LG
- stat.AP
- stat.MEstat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Vassilis Digalakis Jr, Christophe P\'erignon, S\'ebastien Saurin, Flore Sentenac
+
+
+ Why Most Optimism Bandit Algorithms Have the Same Regret Analysis: A Simple Unifying Theorem
+ https://arxiv.org/abs/2512.18409
+ arXiv:2512.18409v1 Announce Type: cross
+Abstract: Several optimism-based stochastic bandit algorithms -- including UCB, UCB-V, linear UCB, and finite-arm GP-UCB -- achieve logarithmic regret using proofs that, despite superficial differences, follow essentially the same structure. This note isolates the minimal ingredients behind these analyses: a single high-probability concentration condition on the estimators, after which logarithmic regret follows from two short deterministic lemmas describing radius collapse and optimism-forced deviations. The framework yields unified, near-minimal proofs for these classical algorithms and extends naturally to many contemporary bandit variants.
+ oai:arXiv.org:2512.18409v1
+ cs.LG
+ cs.SY
+ eess.SY
+ stat.ML
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1007/978-3-032-05870-6_19
- MICCAI FAIMI Workshop 2025
- Dishantkumar Sutariya, Eike Petersen
+ Vikram Krishnamurthy
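The optimism principle this note unifies is easiest to see in the plain UCB1 recursion; the sketch below runs UCB1 on Bernoulli arms as a generic illustration, not as an instance of the note's general framework. Arm means and horizon are arbitrary.

```python
# Sketch: UCB1 on Bernoulli bandit arms (optimism in the face of uncertainty).
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.3, 0.5, 0.7])        # true (unknown) arm means
K, T = len(means), 5000

pulls = np.zeros(K)                      # number of times each arm was played
rewards = np.zeros(K)                    # cumulative reward per arm
regret = 0.0

for t in range(1, T + 1):
    if t <= K:
        arm = t - 1                      # play every arm once to initialise
    else:
        ucb = rewards / pulls + np.sqrt(2.0 * np.log(t) / pulls)
        arm = int(np.argmax(ucb))        # pick the highest upper confidence bound
    r = float(rng.random() < means[arm])
    pulls[arm] += 1
    rewards[arm] += r
    regret += means.max() - means[arm]

print("pulls per arm:", pulls.astype(int))
print("expected regret:", round(regret, 1))
```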
- Alternating Direction Method of Multipliers for Nonlinear Matrix Decompositions
- https://arxiv.org/abs/2512.17473
- arXiv:2512.17473v1 Announce Type: cross
-Abstract: We present an algorithm based on the alternating direction method of multipliers (ADMM) for solving nonlinear matrix decompositions (NMD). Given an input matrix $X \in \mathbb{R}^{m \times n}$ and a factorization rank $r \ll \min(m, n)$, NMD seeks matrices $W \in \mathbb{R}^{m \times r}$ and $H \in \mathbb{R}^{r \times n}$ such that $X \approx f(WH)$, where $f$ is an element-wise nonlinear function. We evaluate our method on several representative nonlinear models: the rectified linear unit activation $f(x) = \max(0, x)$, suitable for nonnegative sparse data approximation, the component-wise square $f(x) = x^2$, applicable to probabilistic circuit representation, and the MinMax transform $f(x) = \min(b, \max(a, x))$, relevant for recommender systems. The proposed framework flexibly supports diverse loss functions, including least squares, $\ell_1$ norm, and the Kullback-Leibler divergence, and can be readily extended to other nonlinearities and metrics. We illustrate the applicability, efficiency, and adaptability of the approach on real-world datasets, highlighting its potential for a broad range of applications.
- oai:arXiv.org:2512.17473v1
- eess.SP
+ Secret mixtures of experts inside your LLM
+ https://arxiv.org/abs/2512.18452
+ arXiv:2512.18452v1 Announce Type: cross
+Abstract: Despite being one of the earliest neural network layers, the Multilayer Perceptron (MLP) is arguably one of the least understood parts of the transformer architecture due to its dense computation and lack of easy visualization. This paper seeks to understand the MLP layers in dense LLM models by hypothesizing that these layers secretly approximately perform a sparse computation -- namely, that they can be well approximated by sparsely-activating Mixture of Experts (MoE) layers.
+ Our hypothesis is based on a novel theoretical connection between MoE models and Sparse Autoencoder (SAE) structure in activation space. We empirically validate the hypothesis on pretrained LLMs, and demonstrate that the activation distribution matters -- these results do not hold for Gaussian data, but rather rely crucially on structure in the distribution of neural network activations.
+ Our results shine light on a general principle at play in MLP layers inside LLMs, and give an explanation for the effectiveness of modern MoE-based transformers. Additionally, our experimental explorations suggest new directions for more efficient MoE architecture design based on low-rank routers.
+ oai:arXiv.org:2512.18452v1
+ cs.LG
- math.OC
+ cs.AI
+ stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Atharva Awari, Nicolas Gillis, Arnaud Vandaele
+ Enric Boix-Adsera
+
+
+ Detecting stellar flares in the presence of a deterministic trend and stochastic volatility
+ https://arxiv.org/abs/2512.18559
+ arXiv:2512.18559v1 Announce Type: cross
+Abstract: We develop a new and powerful method to analyze time series to rigorously detect flares in the presence of an irregularly oscillatory baseline, and apply it to stellar light curves observed with TESS. First, we remove the underlying non-stochastic trend using a time-varying amplitude harmonic model. We then model the stochastic component of the light curves in a manner analogous to financial time series, as an ARMA+GARCH process, allowing us to detect and characterize impulsive flares as large deviations inconsistent with the correlation structure in the light curve. We apply the method to exemplar light curves from TIC13955147 (a G5V eruptive variable), TIC269797536 (an M4 high-proper motion star), and TIC441420236 (AU Mic, an active dMe flare star), detecting up to $145$, $460$, and $403$ flares respectively, at rates ranging from ${\approx}0.4$--$8.5$~day$^{-1}$ over different sectors and under different detection thresholds. We detect flares down to amplitudes of $0.03$%, $0.29$%, and $0.007$% of the bolometric luminosity for each star respectively. We model the distributions of flare energies and peak fluxes as power-laws, and find that the solar-like star exhibits values similar to that on the Sun ($\alpha_{E,P}\approx1.85,2.36$), while for the less- and highly-active low-mass stars $\alpha_{E,P}>2$ and $<2$ respectively.
+ oai:arXiv.org:2512.18559v1
+ astro-ph.SR
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Qiyuan Wang, Giovanni Motta, Genaro Sucarrat, Vinay L. Kashyap
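A rough analogue of the detection step after detrending can be sketched with the Python arch package: fit an AR mean with GARCH(1,1) volatility and flag large positive standardised residuals as flare candidates. This simplifies the paper's pipeline (no harmonic detrending, AR rather than full ARMA) and uses synthetic data with injected impulses.

```python
# Sketch: flag flare-like outliers as large standardised residuals of an
# AR(1)+GARCH(1,1) fit (simplified stand-in for the paper's ARMA+GARCH step).
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
T = 3000
y = np.zeros(T)
for t in range(1, T):                     # AR(1) baseline with mild noise
    y[t] = 0.7 * y[t - 1] + rng.normal(scale=0.5)
flare_times = rng.choice(np.arange(100, T - 100), size=8, replace=False)
y[flare_times] += rng.uniform(4, 8, size=8)   # inject impulsive "flares"

am = arch_model(y, mean="AR", lags=1, vol="GARCH", p=1, q=1, rescale=False)
res = am.fit(disp="off")

# Standardised residuals; initial AR lags are NaN and are zeroed out.
std_resid = np.nan_to_num(res.resid / res.conditional_volatility)
candidates = np.flatnonzero(std_resid > 4.0)
print("injected flares :", sorted(flare_times.tolist()))
print("detected indices:", candidates.tolist())
```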
- Koenigs functions in the subcritical and critical Markov branching processes with Poisson probability reproduction of particles
- https://arxiv.org/abs/2512.17485
- arXiv:2512.17485v1 Announce Type: cross
-Abstract: Special functions have always played a central role in physics and in mathematics, arising as solutions of nonlinear differential equations, as well as in the theory of branching processes, which extensively uses probability generating functions. The theory of iteration of real functions leads to limit theorems for the discrete-time and real-time Markov branching processes. The Poisson reproduction of particles in real time is analysed through the integration of the Kolmogorov equation. These results are further extended by employing graphical representations of Koenigs functions under subcritical and critical branching mechanisms. The limit conditional law in the subcritical case and the invariant measure for the critical case are discussed, as well. The obtained explicit solutions contain the exponential Bell polynomials and the modified exponential-integral function $\rm{Ein} (z)$.
- oai:arXiv.org:2512.17485v1
+ From Shortcut to Induction Head: How Data Diversity Shapes Algorithm Selection in Transformers
+ https://arxiv.org/abs/2512.18634
+ arXiv:2512.18634v1 Announce Type: cross
+Abstract: Transformers can implement both generalizable algorithms (e.g., induction heads) and simple positional shortcuts (e.g., memorizing fixed output positions). In this work, we study how the choice of pretraining data distribution steers a shallow transformer toward one behavior or the other. Focusing on a minimal trigger-output prediction task -- copying the token immediately following a special trigger upon its second occurrence -- we present a rigorous analysis of gradient-based training of a single-layer transformer. In both the infinite and finite sample regimes, we prove a transition in the learned mechanism: if input sequences exhibit sufficient diversity, measured by a low ``max-sum'' ratio of trigger-to-trigger distances, the trained model implements an induction head and generalizes to unseen contexts; by contrast, when this ratio is large, the model resorts to a positional shortcut and fails to generalize out-of-distribution (OOD). We also reveal a trade-off between the pretraining context length and OOD generalization, and derive the optimal pretraining distribution that minimizes computational cost per sample. Finally, we validate our theoretical predictions with controlled synthetic experiments, demonstrating that broadening context distributions robustly induces induction heads and enables OOD generalization. Our results shed light on the algorithmic biases of pretrained transformers and offer conceptual guidelines for data-driven control of their learned behaviors.
+ oai:arXiv.org:2512.18634v1
+ cs.LG
+ stat.ML
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Ryotaro Kawata, Yujin Song, Alberto Bietti, Naoki Nishikawa, Taiji Suzuki, Samuel Vaiter, Denny Wu
+
+
+ Counterfactual Basis Extension and Representational Geometry: An MDL-Constrained Model of Conceptual Growth
+ https://arxiv.org/abs/2512.18732
+ arXiv:2512.18732v1 Announce Type: cross
+Abstract: Concept learning becomes possible only when existing representations fail to account for experience. Most models of learning and inference, however, presuppose a fixed representational basis within which belief updating occurs. In this paper, I address a prior question: under what structural conditions can the representational basis itself expand in a principled and selective way?
+ I propose a geometric framework in which conceptual growth is modeled as admissible basis extension evaluated under a Minimum Description Length (MDL) criterion. Experience, whether externally observed or internally simulated, is represented as vectors relative to a current conceptual subspace. Residual components capture systematic representational failure, and candidate conceptual extensions are restricted to low-rank, admissible transformations. I show that any MDL-accepted extension can be chosen so that its novel directions lie entirely within the residual span induced by experience, while extensions orthogonal to this span strictly increase description length and are therefore rejected.
+ This yields a conservative account of imagination and conceptual innovation. Internally generated counterfactual representations contribute to learning only insofar as they expose or amplify structured residual error, and cannot introduce arbitrary novelty. I further distinguish representational counterfactuals--counterfactuals over an agent's conceptual basis--from causal or value-level counterfactuals, and show how MDL provides a normative selection principle governing representational change.
+ Overall, the framework characterizes conceptual development as an error-driven, geometry-constrained process of basis extension, clarifying both the role and the limits of imagination in learning and theory change.
+ oai:arXiv.org:2512.18732v1
+ cs.AI
+ cs.LG
+ stat.ML
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chainarong Amornbunchornvej
+
+
+ Convergence of the adapted empirical measure for mixing observations
+ https://arxiv.org/abs/2512.18838
+ arXiv:2512.18838v1 Announce Type: cross
+Abstract: The adapted Wasserstein distance $\mathcal{AW}$ is a modification of the classical Wasserstein metric, that provides robust and dynamically consistent comparisons of laws of stochastic processes, and has proved particularly useful in the analysis of stochastic control problems, model uncertainty, and mathematical finance. In applications, the law of a stochastic process $\mu$ is not directly observed, and has to be inferred from a finite number of samples. As the empirical measure is not $\mathcal{AW}$-consistent, Backhoff, Bartl, Beiglb\"ock and Wiesel introduced the adapted empirical measure $\widehat{\mu}^N$, a suitable modification, and proved its $\mathcal{AW}$-consistency when observations are i.i.d.
+ In this paper we study $\mathcal{AW}$-convergence of the adapted empirical measure $\widehat{\mu}^N$ to the population distribution $\mu$, for observations satisfying a generalization of the $\eta$-mixing condition introduced by Kontorovich and Ramanan. We establish moment bounds and sub-exponential concentration inequalities for $\mathcal{AW}(\mu,\widehat{\mu}^N)$, and prove consistency of $\widehat{\mu}^N$. In addition, we extend the Bounded Differences inequality of Kontorovich and Ramanan for $\eta$-mixing observations to uncountable spaces, a result that may be of independent interest. Numerical simulations illustrating our theory are also provided.
+ oai:arXiv.org:2512.18838v1
+ math.PR
- stat.CO
- Mon, 22 Dec 2025 00:00:00 -0500
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Penka Mayster, Assen Tchorbadjieff
+ Ruslan Mirmominov, Johannes Wiesel
+
+
+ Adapting Skill Ratings to Luck-Based Hidden-Information Games
+ https://arxiv.org/abs/2512.18858
+ arXiv:2512.18858v1 Announce Type: cross
+Abstract: Rating systems play a crucial role in evaluating player skill across competitive environments. The Elo rating system, originally designed for deterministic and information-complete games such as chess, has been widely adopted and modified in various domains. However, the traditional Elo rating system only considers game outcomes for rating calculation and assumes uniform initial states across players. This raises important methodological challenges in skill modelling for popular partially randomized incomplete-information games such as Rummy. In this paper, we examine the limitations of conventional Elo ratings when applied to luck-driven environments and propose a modified Elo framework specifically tailored for Rummy. Our approach incorporates score-based performance metrics and explicitly models the influence of initial hand quality to disentangle skill from luck. Through extensive simulations involving 270,000 games across six strategies of varying sophistication, we demonstrate that our proposed system achieves stable convergence, superior discriminative power, and enhanced predictive accuracy compared to traditional Elo formulations. The framework maintains computational simplicity while effectively capturing the interplay of skill, strategy, and randomness, with broad applicability to other stochastic competitive environments.
+ oai:arXiv.org:2512.18858v1
+ cs.GT
+ stat.AP
+ stat.CO
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Avirup Chakraborty, Shirsa Maitra, Tathagata Banerjee, Diganta Mukherjee, Tridib Mukherjee
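The basic modification discussed above, letting the rating update depend on a score-based performance margin rather than only the win/loss outcome, can be sketched as below. The multiplier form, K-factor, and game data are illustrative placeholders, not the paper's calibrated Rummy system.

```python
# Sketch: Elo update with a score-margin multiplier (illustrative only).
def expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expectation that player A beats player B."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def update(r_a: float, r_b: float, a_won: bool, margin: float,
           k: float = 24.0) -> tuple[float, float]:
    """One game: margin in [0, 1] scales how decisive the result was."""
    s_a = 1.0 if a_won else 0.0
    mult = 1.0 + margin                 # decisive wins move ratings more
    delta = k * mult * (s_a - expected_score(r_a, r_b))
    return r_a + delta, r_b - delta

ra, rb = 1500.0, 1500.0
# (winner, normalised score margin) for a short series of games
games = [("A", 0.8), ("B", 0.2), ("A", 0.5), ("A", 0.9), ("B", 0.6)]
for winner, margin in games:
    ra, rb = update(ra, rb, winner == "A", margin)
print(f"final ratings  A: {ra:.1f}  B: {rb:.1f}")
```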
- A Systems-Theoretic View on the Convergence of Algorithms under Disturbances
- https://arxiv.org/abs/2512.17598
- arXiv:2512.17598v1 Announce Type: cross
-Abstract: Algorithms increasingly operate within complex physical, social, and engineering systems where they are exposed to disturbances, noise, and interconnections with other dynamical systems. This article extends known convergence guarantees of an algorithm operating in isolation (i.e., without disturbances) and systematically derives stability bounds and convergence rates in the presence of such disturbances. By leveraging converse Lyapunov theorems, we derive key inequalities that quantify the impact of disturbances. We further demonstrate how our result can be utilized to assess the effects of disturbances on algorithmic performance in a wide variety of applications, including communication constraints in distributed learning, sensitivity in machine learning generalization, and intentional noise injection for privacy. This underpins the role of our result as a unifying tool for algorithm analysis in the presence of noise, disturbances, and interconnections with other dynamical systems.
- oai:arXiv.org:2512.17598v1
+ Sharp Decoupling Inequalities for the Variances and Second Moments of Sums of Dependent Random Variables
+ https://arxiv.org/abs/2512.19063
+ arXiv:2512.19063v1 Announce Type: cross
+Abstract: Both complete decoupling and tangent decoupling are classical tools aiming to compare two random processes where one has a weaker dependence structure. We give a new proof for the complete decoupling inequality, which provides a lower bound for the sum of dependent square-integrable nonnegative random variables $\sum\limits^n_{i=1} d_i$ \[ \frac{1}{2} \mathbb E \left( \sum\limits^n_{i=1} z_i \right)^2 \leq \mathbb E \left( \sum\limits^n_{i=1} d_i \right)^2, \] where $z_i \stackrel{\mathcal{L}}{=} d_i$ for all $i\leq n$ and $z_i$'s are mutually independent. We will then provide the following sharp tangent decoupling inequalities \[\mathbb Var \left( \sum\limits^n_{i=1} d_i\right) \leq 2 \mathbb Var \left( \sum\limits^n_{i=1} e_i\right),\] and \[\mathbb E \left( \sum\limits^n_{i=1} d_i\right)^2 \leq 2 \mathbb E \left( \sum\limits^n_{i=1} e_i\right)^2 - \left[ \mathbb E \left( \sum\limits^n_{i=1} e_i\right) \right]^2,\] where $\{e_i\}$ is the decoupled sequence of $\{d_i\}$ and the $d_i$'s are not forced to be nonnegative. Applications to constructing Chebyshev-type and Paley-Zygmund-type inequalities, and to bounding the second moments of randomly stopped sums, will be provided.
+ oai:arXiv.org:2512.19063v1
+ math.PR
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Victor H. de la Pena, Heyuan Yao, Demissie Alemayehu
+
+
+ A Convex Loss Function for Set Prediction with Optimal Trade-offs Between Size and Conditional Coverage
+ https://arxiv.org/abs/2512.19142
+ arXiv:2512.19142v1 Announce Type: cross
+Abstract: We consider supervised learning problems in which set predictions provide explicit uncertainty estimates. Using Choquet integrals (a.k.a. Lov{\'a}sz extensions), we propose a convex loss function for nondecreasing subset-valued functions obtained as level sets of a real-valued function. This loss function allows optimal trade-offs between conditional probabilistic coverage and the ''size'' of the set, measured by a non-decreasing submodular function. We also propose several extensions that mimic loss functions and criteria for binary classification with asymmetric losses, and show how to naturally obtain sets with optimized conditional coverage. We derive efficient optimization algorithms, either based on stochastic gradient descent or reweighted least-squares formulations, and illustrate our findings with a series of experiments on synthetic datasets for classification and regression tasks, showing improvements over approaches that aim for marginal coverage.
+ oai:arXiv.org:2512.19142v1
+ cs.LG
+ math.OC
+ stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Francis Bach (SIERRA)
+
+
+ The asymptotic distribution of the likelihood ratio test statistic in two-peak discovery experiments
+ https://arxiv.org/abs/2512.19333
+ arXiv:2512.19333v1 Announce Type: cross
+Abstract: Likelihood ratio tests are widely used in high-energy physics, where the test statistic is usually assumed to follow a chi-squared distribution with a number of degrees of freedom specified by Wilks' theorem. This assumption breaks down when parameters such as signal or coupling strengths are restricted to be non-negative and their values under the null hypothesis lie on the boundary of the parameter space. Based on a recent clarification concerning the correct asymptotic distribution of the likelihood ratio test statistic for cases where two of the parameters are on the boundary, we revisit the question of significance estimation for two-peak signal-plus-background counting experiments. In the high-energy physics literature, such experiments are commonly analyzed using Wilks' chi-squared distribution or the one-parameter Chernoff limit. We demonstrate that these approaches can lead to strongly miscalibrated significances, and that the test statistic distribution is instead well described by a chi-squared mixture with weights determined by the Fisher information matrix. Our results highlight the need for boundary-aware asymptotics in the analysis of two-peak counting experiments.
+ oai:arXiv.org:2512.19333v1
+ hep-ex
+ physics.data-an
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Clara Bertinelli Salucci, Hedvig Borgen Reiersrud, A. L. Read, Anders Kvellestad, Riccardo De Bin
+
+
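The boundary-corrected reference distribution discussed above is a chi-squared mixture. In the special case of two non-negative signal strengths with asymptotically independent estimators, the mixture weights are (1/4, 1/2, 1/4) on 0, 1 and 2 degrees of freedom; the general weights depend on the Fisher information matrix, as in the paper. A small helper for converting an observed likelihood-ratio statistic into a p-value and Z-significance under that independence assumption is sketched below.

```python
# Sketch: p-value and Z-significance from a chi-bar-squared mixture, here with
# the (1/4, 1/2, 1/4) weights valid for two independent boundary parameters.
from scipy.stats import chi2, norm

def chibar2_pvalue(q: float, weights=((0, 0.25), (1, 0.50), (2, 0.25))) -> float:
    """P(Q >= q) for a mixture of chi-squared laws; df=0 is a point mass at zero."""
    p = 0.0
    for df, w in weights:
        tail = 1.0 if q <= 0 else (chi2.sf(q, df) if df > 0 else 0.0)
        p += w * tail
    return p

q_obs = 9.0                                    # observed likelihood-ratio statistic
p = chibar2_pvalue(q_obs)
print(f"p-value: {p:.4g}, significance: {norm.isf(p):.2f} sigma")
# Naive Wilks (2 dof) for comparison: overstates the p-value here.
print(f"Wilks 2-dof p-value: {chi2.sf(q_obs, 2):.4g}")
```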
+ Orthogonal Approximate Message Passing with Optimal Spectral Initializations for Rectangular Spiked Matrix Models
+ https://arxiv.org/abs/2512.19334
+ arXiv:2512.19334v1 Announce Type: cross
+Abstract: We propose an orthogonal approximate message passing (OAMP) algorithm for signal estimation in the rectangular spiked matrix model with general rotationally invariant (RI) noise. We establish a rigorous state evolution that precisely characterizes the algorithm's high-dimensional dynamics and enables the construction of iteration-wise optimal denoisers. Within this framework, we accommodate spectral initializations under minimal assumptions on the empirical noise spectrum. In the rectangular setting, where a single rank-one component typically generates multiple informative outliers, we further propose a procedure for combining these outliers under mild non-Gaussian signal assumptions. For general RI noise models, the predicted performance of the proposed optimal OAMP algorithm agrees with replica-symmetric predictions for the associated Bayes-optimal estimator, and we conjecture that it is statistically optimal within a broad class of iterative estimation methods.
+ oai:arXiv.org:2512.19334v1
+ cs.IT
+ cs.LG
+ math.IT
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Haohua Chen, Songbin Liu, Junjie Ma
+
+
+ Toward Scalable and Valid Conditional Independence Testing with Spectral Representations
+ https://arxiv.org/abs/2512.19510
+ arXiv:2512.19510v1 Announce Type: cross
+Abstract: Conditional independence (CI) is central to causal inference, feature selection, and graphical modeling, yet it is untestable in many settings without additional assumptions. Existing CI tests often rely on restrictive structural conditions, limiting their validity on real-world data. Kernel methods using the partial covariance operator offer a more principled approach but suffer from limited adaptivity, slow convergence, and poor scalability. In this work, we explore whether representation learning can help address these limitations. Specifically, we focus on representations derived from the singular value decomposition of the partial covariance operator and use them to construct a simple test statistic, reminiscent of the Hilbert-Schmidt Independence Criterion (HSIC). We also introduce a practical bi-level contrastive algorithm to learn these representations. Our theory links representation learning error to test performance and establishes asymptotic validity and power guarantees. Preliminary experiments suggest that this approach offers a practical and statistically grounded path toward scalable CI testing, bridging kernel-based theory with modern representation learning.
+ oai:arXiv.org:2512.19510v1
+ cs.LG
+ stat.ML
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Alek Frohlich, Vladimir Kostic, Karim Lounici, Daniel Perazzo, Massimiliano Pontil
+
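As background to the statistic the abstract says its test is "reminiscent of", here is a minimal sketch of the classical (unconditional) HSIC estimator with Gaussian kernels; this is the standard biased V-statistic, not the paper's spectral-representation conditional-independence test.

```python
import numpy as np

def gaussian_gram(x, bandwidth=1.0):
    # pairwise squared distances -> Gaussian kernel Gram matrix
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def hsic(x, y, bandwidth=1.0):
    n = x.shape[0]
    K = gaussian_gram(x, bandwidth)
    L = gaussian_gram(y, bandwidth)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
print(hsic(x, x + 0.1 * rng.normal(size=(200, 1))))   # large: dependent
print(hsic(x, rng.normal(size=(200, 1))))             # near zero: independent
```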
+
+ Deep Learning for Primordial $B$-mode Extraction
+ https://arxiv.org/abs/2512.19577
+ arXiv:2512.19577v1 Announce Type: cross
+Abstract: The search for primordial gravitational waves is a central goal of cosmic microwave background (CMB) surveys. Isolating the characteristic $B$-mode polarization signal sourced by primordial gravitational waves is challenging for several reasons: the amplitude of the signal is inherently small; astrophysical foregrounds produce $B$-mode polarization contaminating the signal; and secondary $B$-mode polarization fluctuations are produced via the conversion of $E$ modes. Current and future low-noise, multi-frequency observations enable sufficient precision to address the first two of these challenges such that secondary $B$ modes will become the bottleneck for improved constraints on the amplitude of primordial gravitational waves. The dominant source of secondary $B$-mode polarization is gravitational lensing by large scale structure. Various strategies have been developed to estimate the lensing deflection and to reverse its effects on the CMB, thus reducing confusion from lensing $B$ modes in the search for primordial gravitational waves. However, a few complications remain. First, there may be additional sources of secondary $B$-mode polarization, for example from patchy reionization or from cosmic polarization rotation. Second, the statistics of delensed CMB maps can become complicated and non-Gaussian, especially when advanced lensing reconstruction techniques are applied. We previously demonstrated how a deep learning network, ResUNet-CMB, can provide nearly optimal simultaneous estimates of multiple sources of secondary $B$-mode polarization. In this paper, we show how deep learning can be applied to estimate and remove multiple sources of secondary $B$-mode polarization, and we further show how this technique can be used in a likelihood analysis to produce nearly optimal, unbiased estimates of the amplitude of primordial gravitational waves.
+ oai:arXiv.org:2512.19577v1
+ astro-ph.CO
+ cs.CV
+ stat.ML
+ Tue, 23 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Eric Guzman, Joel Meyers
+
+
+ Scalably Enhancing the Clinical Validity of a Task Benchmark with Physician Oversight
+ https://arxiv.org/abs/2512.19691
+ arXiv:2512.19691v1 Announce Type: cross
+Abstract: Automating the calculation of clinical risk scores offers a significant opportunity to reduce physician administrative burden and enhance patient care. The current standard for evaluating this capability is MedCalc-Bench, a large-scale dataset constructed using LLM-based feature extraction and rule-based aggregation. However, treating such model-generated benchmarks as static oracles risks enshrining historical model errors as evaluation gold standards, a problem dangerously amplified when these datasets serve as reward signals for Reinforcement Learning (RL). In this work, we propose viewing benchmarks for complex tasks such as clinical score computation as ''in-progress living documents'' that should be periodically re-evaluated as the processes for creating them improve. We introduce a systematic, physician-in-the-loop pipeline that leverages advanced agentic verifiers to audit and relabel MedCalc-Bench, utilizing automated triage to reserve scarce clinician attention for the most contentious instances. Our audit reveals that a notable fraction of original labels diverge from medical ground truth due to extraction errors, calculator logic mismatches, and clinical ambiguity. To study whether this label noise meaningfully impacts downstream RL training, we fine-tune a Qwen3-8B model via Group Relative Policy Optimization (GRPO) and demonstrate that training on corrected labels yields an 8.7% absolute improvement in accuracy over the original baseline -- validating that label noise materially affects model evaluation. These findings underscore that in safety-critical domains, rigorous benchmark maintenance is a prerequisite for genuine model alignment.
+ oai:arXiv.org:2512.19691v1
+ cs.AI
+ stat.AP
+      Tue, 23 Dec 2025 00:00:00 -0500
+      cross
+ http://creativecommons.org/licenses/by/4.0/
+ Junze Ye, Daniel Tawfik, Alex J. Goodell, Nikhil V. Kotha, Mark K. Buyyounouski, Mohsen Bayati
+
+
+ Structure of Classifier Boundaries: Case Study for a Naive Bayes Classifier
+ https://arxiv.org/abs/2212.04382
+ arXiv:2212.04382v3 Announce Type: replace
+Abstract: Classifiers assign complex input data points to one of a small number of output categories. For a Bayes classifier whose input space is a graph, we study the structure of the \emph{boundary}, which comprises those points for which at least one neighbor is classified differently. The scientific setting is assignment of DNA reads produced by next-generation sequencers to candidate source genomes. The boundary is both large and complicated in structure. We introduce a new measure of uncertainty, Neighbor Similarity, that compares the result for an input point to the distribution of results for its neighbors. This measure not only tracks two inherent uncertainty measures for the Bayes classifier, but also can be implemented for classifiers without inherent measures of uncertainty.
+ oai:arXiv.org:2212.04382v3
+ stat.ML
+ cs.LG
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Alan F. Karr, Zac Bowen, Adam A. Porter, Jeanne Ruane
+
+
+ Causal inference with misspecified network interference structure
+ https://arxiv.org/abs/2302.11322
+ arXiv:2302.11322v3 Announce Type: replace
+Abstract: Under interference, the treatment of one unit may affect the outcomes of other units. Such interference patterns between units are typically represented by a network. Correctly specifying this network requires identifying which units can affect others -- an inherently challenging task. Nevertheless, most existing approaches assume that a known and accurate network specification is given. In this paper, we study the consequences of such misspecification.
+ We derive bounds on the bias arising from estimating causal effects using a misspecified network, showing that the estimation bias grows with the divergence between the assumed and true networks, quantified through their induced exposure probabilities. To address this challenge, we propose a novel estimator that leverages multiple networks simultaneously and remains unbiased if at least one of the networks is correct, even when we do not know which one. Therefore, the proposed estimator provides robustness to network specification. We illustrate key properties and demonstrate the utility of our proposed estimator through simulations and analysis of a social network field experiment.
+ oai:arXiv.org:2302.11322v3
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Bar Weinstein, Daniel Nevo
+
+
+ Generalized Data Thinning Using Sufficient Statistics
+ https://arxiv.org/abs/2303.12931
+ arXiv:2303.12931v3 Announce Type: replace
+Abstract: Our goal is to develop a general strategy to decompose a random variable $X$ into multiple independent random variables, without sacrificing any information about unknown parameters. A recent paper showed that for some well-known natural exponential families, $X$ can be "thinned" into independent random variables $X^{(1)}, \ldots, X^{(K)}$, such that $X = \sum_{k=1}^K X^{(k)}$. These independent random variables can then be used for various model validation and inference tasks, including in contexts where traditional sample splitting fails. In this paper, we generalize their procedure by relaxing this summation requirement and simply asking that some known function of the independent random variables exactly reconstruct $X$. This generalization of the procedure serves two purposes. First, it greatly expands the families of distributions for which thinning can be performed. Second, it unifies sample splitting and data thinning, which on the surface seem to be very different, as applications of the same principle. This shared principle is sufficiency. We use this insight to perform generalized thinning operations for a diverse set of families.
+ oai:arXiv.org:2303.12931v3
+ stat.ME
+ math.ST
+ stat.ML
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1080/01621459.2024.2353948
+ Journal of the American Statistical Association, 120(549), 511-523 (2025)
+ Ameer Dharamshi, Anna Neufeld, Keshav Motwani, Lucy L. Gao, Daniela Witten, Jacob Bien
+
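A minimal sketch of the additive thinning that this abstract generalizes, in the Poisson case: conditional on X, a multinomial split yields independent Poisson pieces that sum back to X. This illustrates the special summation-based case only; the paper's sufficiency-based construction is broader.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_thin(x, eps):
    """Split each Poisson count in x into len(eps) independent Poisson parts."""
    eps = np.asarray(eps)
    assert np.isclose(eps.sum(), 1.0)
    return np.array([rng.multinomial(xi, eps) for xi in np.atleast_1d(x)])

lam = 10.0
x = rng.poisson(lam, size=5)
parts = poisson_thin(x, eps=[0.5, 0.3, 0.2])   # marginally Poisson(eps_k * lam), independent
print(x)
print(parts, parts.sum(axis=1))                # rows sum back to the original counts
```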
+
+ Finite sample rates of convergence for the Bigraphical and Tensor graphical Lasso estimators
+ https://arxiv.org/abs/2304.00441
+ arXiv:2304.00441v4 Announce Type: replace
+Abstract: Many modern datasets exhibit dependencies among observations as well as variables. A decade ago, Kalaitzis et al. (2013) proposed the Bigraphical Lasso, an estimator for precision matrices of matrix-normals based on the Cartesian product of graphs; they observed that the associativity of the Kronecker sum yields an approach to the modeling of datasets organized into 3 or higher-order tensors. Subsequently, Greenewald, Zhou and Hero (2019) explored this possibility to a great extent, by introducing the tensor graphical Lasso (TeraLasso) for estimating sparse $L$-way decomposable inverse covariance matrices for all $L \ge 2$, and showing the rates of convergence in the Frobenius and operator norms for estimating this class of inverse covariance matrices for sub-gaussian tensor-valued data. In this paper, we provide sharper rates of convergence for both Bigraphical and TeraLasso estimators for inverse covariance matrices. This improves upon the rates presented in GZH 2019. In particular, (a) we strengthen the bounds for the relative errors in the operator and Frobenius norm by a factor of approximately $\log p$; (b) crucially, this improvement allows for finite sample estimation errors in both norms to be derived for the two-way Kronecker sum model. This closes the gap between the low single-sample error for the two-way model as observed in GZH 2019 and the lack of theoretical guarantee for this particular case. The two-way regime is important because it is the setting that is the most theoretically challenging, and simultaneously the most common in applications. In the current paper, we elaborate on the Kronecker Sum model, highlight the proof strategy and provide full proofs of all main theorems.
+ oai:arXiv.org:2304.00441v4
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Shuheng Zhou, Kristjan Greenewald
+
+
+ Approximate co-sufficient sampling with regularization
+ https://arxiv.org/abs/2309.08063
+ arXiv:2309.08063v3 Announce Type: replace
+Abstract: In this work, we consider the problem of goodness-of-fit (GoF) testing for parametric models. This testing problem involves a composite null hypothesis, due to the unknown values of the model parameters. In some special cases, co-sufficient sampling (CSS) can remove the influence of these unknown parameters via conditioning on a sufficient statistic -- often, the maximum likelihood estimator (MLE) of the unknown parameters. However, many common parametric settings do not permit this approach, since conditioning on a sufficient statistic leads to a powerless test. The recent approximate co-sufficient sampling (aCSS) framework of Barber and Janson (2022) offers an alternative, replacing sufficiency with an approximately sufficient statistic (namely, a noisy version of the MLE). This approach recovers power in a range of settings where CSS cannot be applied, but can only be applied in settings where the unconstrained MLE is well-defined and well-behaved, which implicitly assumes a low-dimensional regime. In this work, we extend aCSS to the setting of constrained and penalized maximum likelihood estimation, so that more complex estimation problems can now be handled within the aCSS framework, including examples such as mixtures-of-Gaussians (where the unconstrained MLE is not well-defined due to degeneracy) and high-dimensional Gaussian linear models (where the MLE can perform well under regularization, such as an $\ell_1$ penalty or a shape constraint).
+ oai:arXiv.org:2309.08063v3
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wanrong Zhu, Rina Foygel Barber
+
+
+ Ordering sampling rules for sequential anomaly identification under sampling constraints
+ https://arxiv.org/abs/2309.14528
+ arXiv:2309.14528v2 Announce Type: replace
+Abstract: We consider the problem of sequential anomaly identification over multiple independent data streams, in the presence of a sampling constraint. The goal is to quickly identify those that exhibit anomalous statistical behavior, when it is not possible to sample every source at each time instant. Thus, in addition to a stopping rule that determines when to stop sampling, and a decision rule that indicates which sources to identify as anomalous upon stopping, one needs to specify a sampling rule that determines which sources to sample at each time instant. We focus on the family of ordering sampling rules that select the sources to be sampled at each time instant based not only on the currently estimated subset of anomalous sources, as in the probabilistic sampling rules \cite{Tsopela_2022}, but also on the ordering of the sources' test-statistics. We show that under an appropriate design specified explicitly, an ordering sampling rule leads to the optimal expected time for stopping among all policies that satisfy the same sampling and error constraints to a first-order asymptotic approximation as the false positive and false negative error thresholds go to zero. This is the first asymptotic optimality result for ordering sampling rules when more than one source can be sampled per time instant, and it is established under a general setup where the number of anomalous sources is not required to be known. A novel proof technique is introduced that encompasses all different cases of the problem concerning the sources' homogeneity and prior information on the number of anomalies. Simulations show that ordering sampling rules have better performance in the finite regime compared to probabilistic sampling rules.
+ oai:arXiv.org:2309.14528v2
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Aristomenis Tsopelakos, Georgios Fellouris
+
+
+ Joint Object Tracking and Intent Recognition
+ https://arxiv.org/abs/2311.06139
+ arXiv:2311.06139v2 Announce Type: replace
+Abstract: This paper presents a Bayesian framework for inferring the posterior of the augmented state of a target, incorporating its underlying goal or intent, such as any intermediate waypoints and/or final destination. The methodology is thus for joint tracking and intent recognition. Several latent intent models are proposed here within a virtual leader formulation. They capture the influence of the target's hidden goal on its instantaneous behaviour. In this context, various motion models, including for highly maneuvering objects, are also considered. The a priori unknown target intent (e.g. destination) can dynamically change over time and take any value within the state space (e.g. a location or spatial region). A sequential Monte Carlo (particle filtering) approach is introduced for the simultaneous estimation of the target's (kinematic) state and its intent. Rao-Blackwellisation is employed to enhance the statistical performance of the inference routine. Simulated data and real radar measurements are used to demonstrate the efficacy of the proposed techniques.
+ oai:arXiv.org:2311.06139v2
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Jiaming Liang, Bashar I. Ahmad, Simon Godsill
+
+
+ Variational Markov chain mixtures with automatic component selection
+ https://arxiv.org/abs/2406.04653
+ arXiv:2406.04653v2 Announce Type: replace
+Abstract: Markov state modeling has gained popularity in various scientific fields since it reduces complex time-series data sets into transitions between a few states. Yet common Markov state modeling frameworks assume a single Markov chain describes the data, so they suffer from an inability to discern heterogeneities. As an alternative, this paper models time-series data using a mixture of Markov chains, and it automatically determines the number of mixture components using the variational expectation-maximization algorithm. Variational EM simultaneously identifies the number of Markov chains and the dynamics of each chain without expensive model comparisons or posterior sampling. As a theoretical contribution, this paper identifies the natural limits of Markov state mixture modeling by proving a lower bound on the classification error. It then presents numerical experiments where variational EM achieves performance consistent with the theoretically optimal error scaling. The experiments are based on synthetic and observational data sets including Last.fm music listening, ultramarathon running, and gene expression. In each of the three data sets, variational EM leads to the identification of meaningful heterogeneities.
+ oai:arXiv.org:2406.04653v2
+ stat.ME
+ cs.NA
+ math.NA
+ stat.ML
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Christopher E. Miles, Robert J. Webber
+
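For intuition only, a minimal sketch of one EM pass for a mixture of K discrete-state Markov chains with K fixed; the paper's variational EM additionally prunes components automatically, which this sketch does not attempt (initial-state distributions are also ignored).

```python
import numpy as np

def seq_loglik(seq, logP):
    """Log-likelihood of one sequence under one transition matrix (log scale)."""
    return sum(logP[a, b] for a, b in zip(seq[:-1], seq[1:]))

def em_step(seqs, P, weights):
    K, S = len(P), P[0].shape[0]
    logP = [np.log(Pk + 1e-12) for Pk in P]
    # E-step: responsibility of each chain for each sequence
    ll = np.array([[np.log(weights[k]) + seq_loglik(s, logP[k]) for k in range(K)]
                   for s in seqs])
    resp = np.exp(ll - ll.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted transition counts and mixture weights
    new_P = []
    for k in range(K):
        C = np.zeros((S, S))
        for r, s in zip(resp[:, k], seqs):
            for a, b in zip(s[:-1], s[1:]):
                C[a, b] += r
        new_P.append(C / np.maximum(C.sum(axis=1, keepdims=True), 1e-12))
    return new_P, resp.mean(axis=0)

rng = np.random.default_rng(0)
seqs = [rng.integers(0, 3, size=50) for _ in range(20)]
P0 = [rng.dirichlet(np.ones(3), size=3) for _ in range(2)]   # random row-stochastic inits
P1, w = em_step(seqs, P0, weights=np.array([0.5, 0.5]))
print(w)
```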
+
+ Any-Time Regret-Guaranteed Algorithm for Control of Linear Quadratic Systems
+ https://arxiv.org/abs/2406.07746
+ arXiv:2406.07746v2 Announce Type: replace
+Abstract: We propose a computationally efficient algorithm that achieves anytime regret of order $\mathcal{O}(\sqrt{t})$, with explicit dependence on the system dimensions and on the solution of the Discrete Algebraic Riccati Equation (DARE). Our approach uses an appropriately tuned regularization and a sufficiently accurate initial estimate to construct confidence ellipsoids for control design. A carefully designed input-perturbation mechanism is incorporated to ensure anytime performance. We develop two variants of the algorithm. The first enforces strong sequential stability, requiring each policy to be stabilizing and successive policies to remain close. This sequential condition helps prevent state explosion at policy update times; however, it results in a suboptimal regret scaling with respect to the DARE solution. Motivated by this limitation, we introduce a second class of algorithms that removes this requirement and instead requires only that each generated policy be stabilizing. Closed-loop stability is then preserved through a dwell-time inspired policy-update rule. This class of algorithms also addresses key shortcomings of most existing approaches which lack explicit high-probability bounds on the state trajectory expressed in system-theoretic terms. Our analysis shows that partially relaxing the sequential-stability requirement yields optimal regret. Finally, our method eliminates the need for any \emph{a priori} bound on the norm of the DARE solution, an assumption required by all existing computationally efficient OFU based algorithms.
+ oai:arXiv.org:2406.07746v2
+ stat.ML
+ cs.LG
+ cs.SY
+ eess.SY
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jafar Abbaszadeh Chekan, Cedric Langbort
+
+
+ Asymptotic theory for nonparametric testing of $k$-monotonicity in discrete distributions
+ https://arxiv.org/abs/2407.01751
+ arXiv:2407.01751v2 Announce Type: replace
+Abstract: In shape-constrained nonparametric inference, it is often necessary to perform preliminary tests to verify whether a probability mass function (p.m.f.) satisfies qualitative constraints such as monotonicity, convexity, or in general $k$-monotonicity. In this paper, we are interested in nonparametric testing of $k$-monotonicity of a finitely supported discrete distribution. We consider a unified testing framework based on a natural statistic which is directly derived from the very definition of $k$-monotonicity. The introduced framework allows us to design a new consistent method to select the unknown knot points that are required to consistently approximate the limit distribution of several test statistics based either on the empirical measure or the shape-constrained estimators of the p.m.f. We show that the resulting tests are asymptotically valid and consistent for any fixed alternative. Additionally, for the test based solely on the empirical measure, we study the asymptotic power under contiguous alternatives and derive a quantitative separation result that provides sufficient conditions to achieve a given power. We employ this test to design an estimator for the largest parameter $k \in \mathbb N_0$ such that the p.m.f. is $j$-monotone for all $j = 0, \ldots, k$, and show that the estimator is different from the true parameter with probability which is asymptotically smaller than the nominal level of the test. A detailed simulation study is performed to assess the finite sample performance of all the proposed tests, and applications to several real datasets are presented to illustrate the theory.
+ oai:arXiv.org:2407.01751v2
+ math.ST
+ stat.ME
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fadoua Balabdaoui, Antonio Di Noia
+
+
+ Fast convergence rates for estimating the stationary density in SDEs driven by a fractional Brownian motion with semi-contractive drift
+ https://arxiv.org/abs/2408.15904
+ arXiv:2408.15904v2 Announce Type: replace
+Abstract: We study the estimation of the invariant density of additive fractional stochastic differential equations with Hurst parameter $H \in (0,1)$. We first focus on continuous observations and develop a kernel-based estimator achieving faster convergence rates than previously available. This result stems from a martingale decomposition combined with new bounds on the (conditional) convergence in total variation to equilibrium of fractional SDEs. For $H<1/2$, we further refine the rates based on recent bounds on the marginal density. We then extend the methodology to discrete observations, showing that the same convergence rates can be attained. Moreover, we establish concentration inequalities for the estimator and introduce a data-driven bandwidth selection procedure that adapts to unknown smoothness. Numerical experiments for the fractional Ornstein-Uhlenbeck process illustrate the estimator's practical performance. Finally, our results weaken the usual convexity assumptions on the drift component, allowing us to consider settings where strong convexity only holds outside a compact set.
+ oai:arXiv.org:2408.15904v2
+ math.ST
+ math.PR
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Chiara Amorino, Eulalia Nualart, Fabien Panloup, Julian Sieber
+
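The estimator under study is, at its core, a kernel smoother of the observed path points; a minimal Gaussian-kernel sketch follows. The paper's contribution is the convergence-rate analysis and the data-driven bandwidth under fractional noise, not this formula itself.

```python
import numpy as np

def kernel_density(x_grid, obs, h):
    """Gaussian-kernel estimate of the stationary density on x_grid from path points obs."""
    u = (x_grid[:, None] - obs[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(obs) * h * np.sqrt(2 * np.pi))

# toy usage on stationary samples standing in for discretely observed path points
rng = np.random.default_rng(2)
obs = rng.normal(scale=1.0, size=5000)
grid = np.linspace(-4, 4, 9)
print(kernel_density(grid, obs, h=0.3))
```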
+
+ The $\infty$-S test via regression quantile affine LASSO
+ https://arxiv.org/abs/2409.04256
+ arXiv:2409.04256v3 Announce Type: replace
+Abstract: A novel test in linear $\ell_1$ (LAD) and quantile regression is proposed, based on the scores provided by the dual variables (signs) arising in the calculation of the (so-called) affine-lasso estimate: a Rao-type Lagrange multiplier test that uses the latter estimate thresholded towards the null hypothesis of the test.
+ oai:arXiv.org:2409.04256v3
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sylvain Sardy, Ivan Mizera, Xiaoyu Ma, Hugo Gaible
+
+
+ Estimating velocities of infectious disease spread through spatio-temporal log-Gaussian Cox point processes
+ https://arxiv.org/abs/2409.05036
+ arXiv:2409.05036v2 Announce Type: replace
+Abstract: Understanding the spread of infectious diseases such as COVID-19 is crucial for informed decision-making and resource allocation. A critical component of disease behavior is the velocity with which disease spreads, defined as the rate of change between time and space. In this paper, we propose a spatio-temporal modeling approach to determine the velocities of infectious disease spread. Our approach assumes that the locations and times of people infected can be considered as a spatio-temporal point pattern that arises as a realization of a spatio-temporal log-Gaussian Cox process. The intensity of this process is estimated using fast Bayesian inference by employing the integrated nested Laplace approximation (INLA) and the Stochastic Partial Differential Equations (SPDE) approaches. The velocity is then calculated using finite differences that approximate the derivatives of the intensity function. Finally, the directions and magnitudes of the velocities can be mapped at specific times to examine better the spread of the disease throughout the region. We demonstrate our method by analyzing COVID-19 spread in Cali, Colombia, during the 2020-2021 pandemic.
+ oai:arXiv.org:2409.05036v2
+ stat.AP
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Fernando Rodriguez Avellaneda, Jorge Mateu, Paula Moraga
+
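A rough illustration of the final step described above: given an intensity surface lambda(t, x, y) estimated on a regular grid, finite differences approximate its derivatives, and a spread velocity can be formed from the ratio of the temporal and spatial derivatives. The level-set (front) velocity convention used here is an assumption for illustration, not a quote of the paper's exact definition.

```python
import numpy as np

def spread_velocity(lam, dt, dx, dy):
    """Front velocity field from a gridded intensity lam with axes (time, x, y)."""
    dldt = np.gradient(lam, dt, axis=0)          # d lambda / dt
    dldx = np.gradient(lam, dx, axis=1)          # d lambda / dx
    dldy = np.gradient(lam, dy, axis=2)          # d lambda / dy
    grad_norm2 = dldx ** 2 + dldy ** 2 + 1e-12
    vx = -dldt * dldx / grad_norm2               # level-set velocity components
    vy = -dldt * dldy / grad_norm2
    return vx, vy

lam = np.random.default_rng(3).random((4, 10, 10))   # toy (time, x, y) intensity
vx, vy = spread_velocity(lam, dt=1.0, dx=0.5, dy=0.5)
print(vx.shape, float(np.hypot(vx, vy).mean()))
```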
+
+ Optimal sequencing depth for single-cell RNA-sequencing in Wasserstein space
+ https://arxiv.org/abs/2409.14326
+ arXiv:2409.14326v2 Announce Type: replace
+Abstract: How many samples should one collect for an empirical distribution to be as close as possible to the true population? This question is not trivial in the context of single-cell RNA-sequencing. With limited sequencing depth, profiling more cells comes at the cost of fewer reads per cell. Therefore, one must strike a balance between the number of cells sampled and the accuracy of each measured gene expression profile. In this paper, we analyze an empirical distribution of cells and obtain upper and lower bounds on the Wasserstein distance to the true population. Our analysis holds for general, non-parametric distributions of cells, and is validated by simulation experiments on a real single-cell dataset.
+ oai:arXiv.org:2409.14326v2
+ math.ST
+ stat.ME
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jakwang Kim, Sharvaj Kubal, Geoffrey Schiebinger
+
+
+ A generalized e-value feature detection method with FDR control at multiple resolutions
+ https://arxiv.org/abs/2409.17039
+ arXiv:2409.17039v5 Announce Type: replace
+Abstract: Multiple resolutions arise across a range of explanatory features due to domain-specific structures, leading to the formation of feature groups. It follows that the simultaneous detection of significant features and groups associated with a specific response, with false discovery rate (FDR) control, is a crucial issue, for example in spatial genome-wide association studies. Nevertheless, existing detection methods with multilayer FDR control generally rely on valid p-values or knockoff statistics, which can lack flexibility, power, and stability in several settings. To address this issue effectively, this article develops a novel method, the Stabilized Flexible E-Filter Procedure (SFEFP), by constructing unified generalized e-values, leveraging a generalized e-filter, and adopting a stabilization treatment with power enhancement. This method flexibly incorporates diverse base detection procedures at different resolutions to provide consistent, powerful, and stable results, while controlling FDR at multiple resolutions simultaneously. Statistical properties of the multilayer filtering procedure, encompassing a one-bit property, multilayer FDR control, and a stability guarantee, are established. We also develop several examples of SFEFP, such as the eDS-filter. Simulation studies and the analysis of HIV mutation data demonstrate the efficacy of SFEFP.
+ oai:arXiv.org:2409.17039v5
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chengyao Yu, Ruixing Ming, Min Xiao, Zhanfeng Wang, Bingyi Jing
+
+
+ Theoretical Convergence Guarantees for Variational Autoencoders
+ https://arxiv.org/abs/2410.16750
+ arXiv:2410.16750v3 Announce Type: replace
+Abstract: Variational Autoencoders (VAE) are popular generative models used to sample from complex data distributions. Despite their empirical success in various machine learning tasks, significant gaps remain in understanding their theoretical properties, particularly regarding convergence guarantees. This paper aims to bridge that gap by providing non-asymptotic convergence guarantees for VAE trained using both Stochastic Gradient Descent and Adam algorithms. We derive a convergence rate of $\mathcal{O}(\log n / \sqrt{n})$, where $n$ is the number of iterations of the optimization algorithm, with explicit dependencies on the batch size, the number of variational samples, and other key hyperparameters. Our theoretical analysis applies to both Linear VAE and Deep Gaussian VAE, as well as several VAE variants, including $\beta$-VAE and IWAE. Additionally, we empirically illustrate the impact of hyperparameters on convergence, offering new insights into the theoretical understanding of VAE training.
+ oai:arXiv.org:2410.16750v3
+ stat.ML
+ cs.LG
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Sobihan Surendran (LPSM), Antoine Godichon-Baggioni (LPSM), Sylvain Le Corff (LPSM)
+
+
+ A dual approach to proving electoral fraud using statistics and forensic evidence (Dvojnoe dokazatel'stvo falsifikazij na vyborah statistikoj i kriminalistikoj)
+ https://arxiv.org/abs/2412.04535
+ arXiv:2412.04535v3 Announce Type: replace
+Abstract: Electoral fraud often manifests itself as statistical anomalies in election results, yet its extent can rarely be reliably confirmed by other evidence. Here we report the complete results of municipal elections in the town of Vlasikha near Moscow, where we observe both statistical irregularities in the vote-counting transcripts and forensic evidence of tampering with ballots during their overnight storage. We evaluate two types of statistical signatures in the vote sequence that can prove batches of fraudulent ballots have been injected. We find that pairs of factory-made security bags with identical serial numbers are used in this fraud scheme. At 8 out of our 9 polling stations, the statistical and forensic evidence agrees (identifying 7 as fraudulent and 1 as honest), while at the remaining station the statistical evidence detects the fraud but the forensic evidence is insufficient. We also illustrate that the use of tamper-indicating seals at elections is inherently unreliable.
+ oai:arXiv.org:2412.04535v3
+ stat.AP
+ physics.soc-ph
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-sa/4.0/
+ Elect. Polit. 14, issue 2, 4 (2025); https://electoralpolitics.org/en/articles/dvoinoe-dokazatelstvo-falsifikatsii-na-vyborakh-statistikoi-i-kriminalistikoi/
+ Andrey Podlazov, Vadim Makarov
+
+
+ Estimating excess mortality during the Covid-19 pandemic in Aotearoa New Zealand
+ https://arxiv.org/abs/2412.08927
+ arXiv:2412.08927v3 Announce Type: replace
+Abstract: Background. The excess mortality rate in Aotearoa New Zealand during the Covid-19 pandemic is frequently estimated to be among the lowest in the world. However, to facilitate international comparisons, many of the methods that have been used to estimate excess mortality do not use age-stratified data on deaths and population size, which may compromise their accuracy.
+ Methods. We used a quasi-Poisson regression model for monthly all-cause deaths among New Zealand residents, controlling for age, sex and seasonality. We fitted the model to deaths data for 2014-19. We estimated monthly excess mortality for 2020-23 as the difference between actual deaths and projected deaths according to the model. We conducted sensitivity analysis on the length of the pre-pandemic period used to fit the model. We benchmarked our results against a simple linear regression on the standardised annual mortality rate.
+ Results. We estimated cumulative excess mortality in New Zealand in 2020-23 was 1040 (95% confidence interval [-1134, 2927]), equivalent to 0.7% [-0.8%, 2.0%] of expected mortality. Excess mortality was negative in 2020-21. The magnitude, timing and age-distribution of the positive excess mortality in 2022-23 were closely matched with confirmed Covid-19 deaths.
+ Conclusions. Negative excess mortality in 2020-21 reflects very low levels of Covid-19 and major reductions in seasonal respiratory diseases during this period. In 2022-23, Covid-19 deaths were the main contributor to excess mortality and there was little or no net non-Covid-19 excess. Overall, New Zealand experienced one of the lowest rates of pandemic excess mortality in the world.
+ oai:arXiv.org:2412.08927v3
+ stat.AP
+ q-bio.PE
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1093/ije/dyaf093
+ International Journal of Epidemiology (2025), 54(4): dyaf093
+ Michael John Plank, Pubudu Senanayake, Richard Lyon
+
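A compact sketch of the modelling recipe in this abstract, assuming a monthly data frame with hypothetical columns year, month, age_group, sex, and deaths; quasi-Poisson behaviour is obtained in statsmodels by fitting a Poisson GLM and estimating the dispersion with scale="X2". Population offsets, trend terms and uncertainty intervals are omitted for brevity.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def excess_mortality(df):
    # fit on the pre-pandemic period, controlling for age, sex and seasonality
    train = df[df["year"].between(2014, 2019)]
    fit = smf.glm("deaths ~ C(age_group) + C(sex) + C(month)",
                  data=train, family=sm.families.Poisson()).fit(scale="X2")
    # project expected deaths into 2020-23 and take the actual-minus-expected difference
    post = df[df["year"].between(2020, 2023)].copy()
    post["expected"] = fit.predict(post)
    post["excess"] = post["deaths"] - post["expected"]
    return post.groupby("year")["excess"].sum()
```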
+
+ Parameter-Specific Bias Diagnostics in Random-Effects Panel Data Models
+ https://arxiv.org/abs/2412.20555
+ arXiv:2412.20555v4 Announce Type: replace
+Abstract: The Hausman specification test detects inconsistency of the random-effects estimator by comparing it with an alternative fixed-effects estimator. This note shows how a recently proposed bias diagnostic for linear mixed models can complement this test in random-effects panel-data applications. The diagnostic delivers parameter-specific internal estimates of finite-sample bias of the random-effects estimator, together with permutation-based $p$-values, from a single fitted random-effects model. We illustrate its use in a gasoline-demand panel and in a value-added model for teacher evaluation, using publicly available R packages, and we discuss how the resulting bias summaries can be incorporated into routine practice.
+ oai:arXiv.org:2412.20555v4
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Andrew T. Karl
+
+
+ Anytime Validity is Free: Inducing Sequential Tests
+ https://arxiv.org/abs/2501.03982
+ arXiv:2501.03982v5 Announce Type: replace
+Abstract: Anytime valid sequential tests permit us to stop testing based on the current data, without invalidating the inference. Given a maximum number of observations $N$, one may believe this must come at the cost of power when compared to a conventional test that waits until all $N$ observations have arrived. Our first contribution is to show that this is false: for any valid test based on $N$ observations, we show how to construct an anytime valid sequential test that matches it after $N$ observations. Our second contribution is that we may continue testing by using the outcome of a $[0, 1]$-valued test as a conditional significance level in subsequent testing, leading to an overall procedure that is valid at the original significance level. This shows that anytime validity and optional continuation are readily available in traditional testing, without requiring explicit use of e-values. We illustrate this by deriving the anytime valid sequentialized $z$-test and $t$-test, which at time $N$ coincide with the traditional $z$-test and $t$-test. Finally, we characterize the SPRT by invariance under test induction, and also show under an i.i.d. assumption that the SPRT is induced by the Neyman-Pearson test for a tiny significance level and huge $N$.
+ oai:arXiv.org:2501.03982v5
+ math.ST
+ stat.ME
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nick W. Koning, Sam van Meer
+
+
+ Inverse sampling intensity weighting for preferential sampling adjustment
+ https://arxiv.org/abs/2503.05067
+ arXiv:2503.05067v2 Announce Type: replace
+Abstract: Traditional geostatistical methods assume independence between observation locations and the spatial process of interest. Violations of this independence assumption are referred to as preferential sampling (PS). Standard methods to address PS rely on estimating complex shared latent variable models and can be difficult to apply in practice. We study the use of inverse sampling intensity weighting (ISIW) for PS adjustment in model-based geostatistics. ISIW is a two-stage approach wherein we estimate the sampling intensity of the observation locations then define intensity-based weights within a weighted likelihood adjustment. Prediction follows by substituting the adjusted parameter estimates within kriging. We introduce an implementation of ISIW based on the Vecchia approximation, enabling computational gains while maintaining strong predictive accuracy. Interestingly, we found that ISIW outpredicts standard PS methods under misspecification of the sampling design, and that accurate parameter estimation had little correlation with predictive performance, raising questions about the conditions driving optimal implementation of kriging-based predictors under PS. Our work highlights the potential of ISIW to adjust for PS in an intuitive, fast, and effective manner. We illustrate these ideas on spatial prediction of lead concentrations measured through moss biomonitoring data in Galicia, Spain, and PM2.5 concentrations from the U.S. EPA Air Quality System network in California.
+ oai:arXiv.org:2503.05067v2
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Thomas W. Hsiao, Lance A. Waller
+
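An illustrative two-stage sketch of inverse sampling intensity weighting: (1) estimate the sampling intensity of the observation locations (here with a simple Gaussian KDE as a stand-in), (2) use its reciprocal as weights in a downstream weighted fit (here just a weighted mean). The paper's Vecchia-based weighted likelihood and kriging steps are not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
locs = rng.uniform(0, 1, size=(300, 2))                    # observation coordinates
z = np.sin(4 * locs[:, 0]) + 0.1 * rng.normal(size=300)    # observed field values

intensity = gaussian_kde(locs.T)(locs.T)    # stage 1: sampling intensity at data sites
w = 1.0 / np.maximum(intensity, 1e-8)       # stage 2: inverse-intensity weights
w /= w.sum()
print("weighted mean:", float(w @ z), " unweighted:", float(z.mean()))
```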
+
+ Spearman's rho for zero-inflated count data: formulation and attainable bounds
+ https://arxiv.org/abs/2503.13148
+ arXiv:2503.13148v2 Announce Type: replace
+Abstract: We propose an alternative formulation of Spearman's rho for zero-inflated count data. The formulation yields an estimator with explicitly attainable bounds, facilitating interpretation in settings where the standard range [-1,1] is no longer informative.
+ oai:arXiv.org:2503.13148v2
+ stat.ME
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jasper Arends, Guanjie Lyu, Mhamed Mesfioui, Elisa Perrone, Julien Trufin
+
+
+ Density estimation via mixture discrepancy and moments
+ https://arxiv.org/abs/2504.01570
+ arXiv:2504.01570v2 Announce Type: replace
+Abstract: With the aim of generalizing histogram statistics to higher dimensional cases, density estimation via discrepancy based sequential partition (DSP) has been proposed to learn an adaptive piecewise constant approximation defined on a binary sequential partition of the underlying domain, where the star discrepancy is adopted to measure the uniformity of particle distribution. However, the calculation of the star discrepancy is NP-hard and it does not satisfy the reflection invariance and rotation invariance either. To this end, we use the mixture discrepancy and the comparison of moments as a replacement of the star discrepancy, leading to the density estimation via mixture discrepancy based sequential partition (DSP-mix) and density estimation via moment-based sequential partition (MSP), respectively. Both DSP-mix and MSP are computationally tractable and exhibit the reflection and rotation invariance. Numerical experiments in reconstructing Beta mixtures, Gaussian mixtures and heavy-tailed Cauchy mixtures up to 30 dimensions are conducted, demonstrating that MSP can maintain the same accuracy compared with DSP, while gaining an increase in speed by a factor of two to twenty for large sample sizes, and DSP-mix can achieve satisfactory accuracy and boost the efficiency in low-dimensional tests ($d \le 6$), but might lose accuracy in high-dimensional problems due to a reduction in partition level.
+ oai:arXiv.org:2504.01570v2
+ stat.ML
+ cs.LG
+ physics.comp-ph
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zhengyang Lei, Lirong Qu, Sihong Shao, Yunfeng Xiong
+
+
+ Stable EEG Source Estimation for Standardized Kalman Filter using Change Rate Tracking
+ https://arxiv.org/abs/2504.01984
+ arXiv:2504.01984v2 Announce Type: replace
+Abstract: This article focuses on the measurement and evolution modeling of Standardized Kalman filtering for brain activity estimation using non-invasive electroencephalography data. Here, we propose new parameter tuning and a model that uses the rate of change in the brain activity distribution to improve the stability of otherwise accurate estimates. Namely, we propose a backward-differentiation-based measurement model for the change rate, which notably improves the stability of the tracking with respect to the filter parametrization. Simulated data and data from a real subject were used in experiments.
+ oai:arXiv.org:2504.01984v2
+ stat.AP
+ cs.NA
+ eess.SP
+ math.NA
+ Tue, 23 Dec 2025 00:00:00 -0500
+      replace
+      http://creativecommons.org/licenses/by-nc-sa/4.0/
- Guner Dilsad Er, Sebastian Trimpe, Michael Muehlebach
+ Joonas Lahtinen
- On the supremum and its location of the standardized uniform empirical process
- https://arxiv.org/abs/2512.17674
- arXiv:2512.17674v1 Announce Type: cross
-Abstract: We show that the maximizing point and the supremum of the standardized uniform empirical process converge in distribution. Here, the limit variable (Z, Y ) has independent components. Moreover, Z attains the values zero and one with equal probability one half and Y follows the Gumbel-distribution.
- oai:arXiv.org:2512.17674v1
- math.PR
+ Unifying Different Theories of Conformal Prediction
+ https://arxiv.org/abs/2504.02292
+ arXiv:2504.02292v2 Announce Type: replace
+Abstract: This paper presents a unified framework for understanding the methodology and theory behind several different methods in the conformal prediction literature, which includes standard conformal prediction (CP), weighted conformal prediction (WCP), nonexchangeable conformal prediction (NexCP), and randomly-localized conformal prediction (RLCP), among others. At the crux of our framework is the idea that conformal methods are based on revealing partial information about the data at hand, and positing a conditional distribution for the data given the partial information. Different methods arise from different choices of partial information, and of the corresponding (approximate) conditional distribution. In addition to recovering and unifying existing results, our framework leads to both new theoretical guarantees for existing methods, and new extensions of the conformal methodology.
+      oai:arXiv.org:2504.02292v2
+      math.ST
+      stat.TH
- Mon, 22 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Dietmar Ferger
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Rina Foygel Barber, Ryan J. Tibshirani
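As background to the unification described above, a minimal sketch of the standard split conformal prediction baseline (the "CP" of the abstract): calibrate absolute residuals on held-out data and use their adjusted empirical quantile as the prediction-interval half-width.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=400)
X_tr, y_tr, X_cal, y_cal = X[:200], y[:200], X[200:], y[200:]

beta = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]   # fit on the proper training set
scores = np.abs(y_cal - X_cal @ beta)               # conformity scores on the calibration set
n, alpha = len(scores), 0.1
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

x_new = rng.normal(size=3)
pred = x_new @ beta
print(f"90% prediction interval: [{pred - q:.2f}, {pred + q:.2f}]")
```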
- Convergence Guarantees for Federated SARSA with Local Training and Heterogeneous Agents
- https://arxiv.org/abs/2512.17688
- arXiv:2512.17688v1 Announce Type: cross
-Abstract: We present a novel theoretical analysis of Federated SARSA (FedSARSA) with linear function approximation and local training. We establish convergence guarantees for FedSARSA in the presence of heterogeneity, both in local transitions and rewards, providing the first sample and communication complexity bounds in this setting. At the core of our analysis is a new, exact multi-step error expansion for single-agent SARSA, which is of independent interest. Our analysis precisely quantifies the impact of heterogeneity, demonstrating the convergence of FedSARSA with multiple local updates. Crucially, we show that FedSARSA achieves linear speed-up with respect to the number of agents, up to higher-order terms due to Markovian sampling. Numerical experiments support our theoretical findings.
- oai:arXiv.org:2512.17688v1
- cs.LG
- stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
- cross
+ Estimation of Population Linear Spectral Statistics by Marchenko--Pastur Inversion
+ https://arxiv.org/abs/2504.03390
+ arXiv:2504.03390v4 Announce Type: replace
+Abstract: A new method of estimating population linear spectral statistics from high-dimensional data is introduced. When the dimension $d$ grows with the sample size $n$ such that $\frac{d}{n} \to c>0$, the proposed method is the first with proven convergence rate of $\mathcal{O}(n^{\varepsilon - 1})$ for any $\varepsilon > 0$ in a general nonparametric setting. For Gaussian data, a CLT for the estimation error with normalization factor $n$ is shown.
+ oai:arXiv.org:2504.03390v4
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+      replace
+      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Paul Mangold, Elo\"ise Berthier, Eric Moulines
+ Ben Deitmar
- Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
- https://arxiv.org/abs/2512.17696
- arXiv:2512.17696v1 Announce Type: cross
-Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of ``Deep Variography'', where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
- oai:arXiv.org:2512.17696v1
- cs.LG
+ High-Dimensional Invariant Tests of Multivariate Normality Based on Radial Concentration
+ https://arxiv.org/abs/2504.09237
+ arXiv:2504.09237v2 Announce Type: replace
+Abstract: While the problem of testing multivariate normality has received considerable attention in the classical low-dimensional setting where the sample size $n$ is much larger than the feature dimension $d$ of the data, there is presently a dearth of existing tests which are valid in the high-dimensional setting where $d$ is of comparable or larger order than $n$. This paper studies the hypothesis testing problem of determining whether $n$ i.i.d. samples are generated from a $d$-dimensional multivariate normal distribution, in settings where $d$ grows with $n$ at some rate under a broad regime. To this end, we propose a new class of computationally efficient tests which can be regarded as a high-dimensional adaptation of the classical radial approach to testing normality. A key member of this class is a range-type test which, under a very general rate of growth of $d$ with respect to $n$, is proven to achieve both type I error-control and consistency for three important classes of alternatives; namely, finite mixture model, non-Gaussian elliptical, and leptokurtic alternatives. Extensive simulation studies demonstrate the superiority of our test compared to existing methods, and two gene expression applications demonstrate the effectiveness of our procedure for detecting violations of multivariate normality which are of potentially practical significance.
+      oai:arXiv.org:2504.09237v2
+      stat.ME
- stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
- cross
+ Tue, 23 Dec 2025 00:00:00 -0500
+      replace
+      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yuri Calleo
+ Xin Bing, Derek Latremouille
- Mitigating Forgetting in Low Rank Adaptation
- https://arxiv.org/abs/2512.17720
- arXiv:2512.17720v1 Announce Type: cross
-Abstract: Parameter-efficient fine-tuning methods, such as Low-Rank Adaptation (LoRA), enable fast specialization of large pre-trained models to different downstream applications. However, this process often leads to catastrophic forgetting of the model's prior domain knowledge. We address this issue with LaLoRA, a weight-space regularization technique that applies a Laplace approximation to Low-Rank Adaptation. Our approach estimates the model's confidence in each parameter and constrains updates in high-curvature directions, preserving prior knowledge while enabling efficient target-domain learning. By applying the Laplace approximation only to the LoRA weights, the method remains lightweight. We evaluate LaLoRA by fine-tuning a Llama model for mathematical reasoning and demonstrate an improved learning-forgetting trade-off, which can be directly controlled via the method's regularization strength. We further explore different loss landscape curvature approximations for estimating parameter confidence, analyze the effect of the data used for the Laplace approximation, and study robustness across hyperparameters.
- oai:arXiv.org:2512.17720v1
- cs.LG
- stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Joanna Sliwa, Frank Schneider, Philipp Hennig, Jose Miguel Hernandez-Lobato
+ A stochastic method to estimate a zero-inflated two-part mixed model for human microbiome data
+ https://arxiv.org/abs/2504.15411
+ arXiv:2504.15411v2 Announce Type: replace
+Abstract: Human microbiome studies based on genetic sequencing techniques produce compositional longitudinal data of the relative abundances of microbial taxa over time, allowing to understand, through mixed-effects modeling, how microbial communities evolve in response to clinical interventions, environmental changes, or disease progression. In particular, the Zero-Inflated Beta Regression (ZIBR) models jointly and over time the presence and abundance of each microbe taxon, considering the compositional nature of the data, its skewness, and the over-abundance of zeros. However, as for other complex random effects models, maximum likelihood estimation suffers from the intractability of likelihood integrals. Available estimation methods rely on log-likelihood approximation, which is prone to potential limitations such as biased estimates or unstable convergence. In this work we develop an alternative maximum likelihood estimation approach for the ZIBR model, based on the Stochastic Approximation Expectation Maximization (SAEM) algorithm. The proposed methodology allows to model unbalanced data, which is not always possible in existing approaches. We also provide estimations of the standard errors and the log-likelihood of the fitted model. The performance of the algorithm is established through simulation, and its use is demonstrated on two microbiome studies, showing its ability to detect changes in both presence and abundance of bacterial taxa over time and in response to treatment.
+ oai:arXiv.org:2504.15411v2
+ stat.ME
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ John Barrera, Cristian Meza, Ana Arribas-Gil
- Perceptions of the Metaverse at the Peak of the Hype Cycle: A Cross-Sectional Study Among Turkish University Students
- https://arxiv.org/abs/2512.17750
- arXiv:2512.17750v1 Announce Type: cross
-Abstract: During the height of the hype in late 2021, the Metaverse drew more attention from around the world than ever before. It promised new ways to interact with people in three-dimensional digital spaces. This cross-sectional study investigates the attitudes, perceptions, and predictors of the willingness to engage with the Metaverse among 381 Turkish university students surveyed in December 2021. The study employs Fisher's Exact Tests and binary logistic regression to assess the influence of demographic characteristics, prior digital experience, and perception-based factors. The results demonstrate that demographic factors, such as gender, educational attainment, faculty association, social media engagement, and previous virtual reality exposure, do not significantly forecast the propensity to participate in the Metaverse. Instead, the main things that affect people's intentions to adopt are how they see things. Belief in the Metaverse's capacity to revolutionize societal frameworks, especially human rights, surfaced as the most significant positive predictor of willingness. Conversely, apprehensions regarding psychological harm, framed as a possible 'cyber syndrome' represented a significant obstacle to participation. Perceptions of technical compatibility and ethical considerations showed complex effects, showing that optimism, uncertainty, and indifference affect willingness in different ways. In general, the results show that early adoption of the Metaverse is based on how people see it, not on their demographics. The research establishes a historically informed benchmark of user skepticism and prudent assessment during the advent of Web 3.0, underscoring the necessity of addressing collective psychological, ethical, and normative issues to promote future engagement.
- oai:arXiv.org:2512.17750v1
- cs.CY
- stat.AP
- Mon, 22 Dec 2025 00:00:00 -0500
- cross
+ Linear Regression Using Principal Components from General Hilbert-Space-Valued Covariates
+ https://arxiv.org/abs/2504.16780
+ arXiv:2504.16780v3 Announce Type: replace
+Abstract: We consider linear regression with covariates that are random elements in a general Hilbert space. We first develop a principal component analysis for Hilbert-space-valued covariates based on finite-dimensional projections of the covariance operator, and establish asymptotic linearity and joint Gaussian limits for the leading eigenvalues and eigenfunctions under mild moment conditions. We then propose a principal component regression framework that combines Euclidean and Hilbert-space-valued covariates, obtain root-n consistent and asymptotically normal estimators of the regression parameters, and establish the validity of nonparametric and wild bootstrap procedures for inference. Simulation studies with two- and three-dimensional imaging predictors demonstrate accurate recovery of eigenstructures, regression coefficients, and bootstrap coverage. The methodology is further illustrated with neuroimaging data, in both a standard regression setting and a precision-medicine formulation.
+ oai:arXiv.org:2504.16780v3
+ math.ST
+ stat.ME
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+      replace
+      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mehmet Ali Erkan, Halil Eren Ko\c{c}ak
-
-
- Integrating Computational Methods and AI into Qualitative Studies of Aging and Later Life
- https://arxiv.org/abs/2512.17850
- arXiv:2512.17850v1 Announce Type: cross
-Abstract: This chapter demonstrates how computational social science (CSS) tools are extending and expanding research on aging. The depth and context from traditionally qualitative methods such as participant observation, in-depth interviews, and historical documents are increasingly employed alongside scalable data management, computational text analysis, and open-science practices. Machine learning (ML) and natural language processing (NLP), provide resources to aggregate and systematically index large volumes of qualitative data, identify patterns, and maintain clear links to in-depth accounts. Drawing on case studies of projects that examine later life--including examples with original data from the DISCERN study (a team-based ethnography of life with dementia) and secondary analyses of the American Voices Project (nationally representative interview)--the chapter highlights both uses and challenges of bringing CSS tools into more meaningful dialogue with qualitative aging research. The chapter argues such work has potential for (1) streamlining and augmenting existing workflows, (2) scaling up samples and projects, and (3) generating multi-method approaches to address important questions in new ways, before turning to practices useful for individuals and teams seeking to understand current possibilities or refine their workflow processes. The chapter concludes that current developments are not without peril, but offer potential for new insights into aging and the life course by broadening--rather than replacing--the methodological foundations of qualitative research.
- oai:arXiv.org:2512.17850v1
- cs.CY
- cs.AI
- cs.HC
- stat.AP
- Mon, 22 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Corey M. Abramson
-
-
- Weighted Stochastic Differential Equation to Implement Wasserstein-Fisher-Rao Gradient Flow
- https://arxiv.org/abs/2512.17878
- arXiv:2512.17878v1 Announce Type: cross
-Abstract: Score-based diffusion models currently constitute the state of the art in continuous generative modeling. These methods are typically formulated via overdamped or underdamped Ornstein--Uhlenbeck-type stochastic differential equations, in which sampling is driven by a combination of deterministic drift and Brownian diffusion, resulting in continuous particle trajectories in the ambient space. While such dynamics enjoy exponential convergence guarantees for strongly log-concave target distributions, it is well known that their mixing rates deteriorate exponentially in the presence of nonconvex or multimodal landscapes, such as double-well potentials. Since many practical generative modeling tasks involve highly non-log-concave target distributions, considerable recent effort has been devoted to developing sampling schemes that improve exploration beyond classical diffusion dynamics.
- A promising line of work leverages tools from information geometry to augment diffusion-based samplers with controlled mass reweighting mechanisms. This perspective leads naturally to Wasserstein--Fisher--Rao (WFR) geometries, which couple transport in the sample space with vertical (reaction) dynamics on the space of probability measures. In this work, we formulate such reweighting mechanisms through the introduction of explicit correction terms and show how they can be implemented via weighted stochastic differential equations using the Feynman--Kac representation. Our study provides a preliminary but rigorous investigation of WFR-based sampling dynamics, and aims to clarify their geometric and operator-theoretic structure as a foundation for future theoretical and algorithmic developments.
- oai:arXiv.org:2512.17878v1
- cs.LG
- cs.AI
- stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/publicdomain/zero/1.0/
- Herlock Rahimi
+ Xinyi Li, Margaret Hoch, Michael R. Kosorok
- Regularized Random Fourier Features and Finite Element Reconstruction for Operator Learning in Sobolev Space
- https://arxiv.org/abs/2512.17884
- arXiv:2512.17884v1 Announce Type: cross
-Abstract: Operator learning is a data-driven approximation of mappings between infinite-dimensional function spaces, such as the solution operators of partial differential equations. Kernel-based operator learning can offer accurate, theoretically justified approximations that require less training than standard methods. However, they can become computationally prohibitive for large training sets and can be sensitive to noise. We propose a regularized random Fourier feature (RRFF) approach, coupled with a finite element reconstruction map (RRFF-FEM), for learning operators from noisy data. The method uses random features drawn from multivariate Student's $t$ distributions, together with frequency-weighted Tikhonov regularization that suppresses high-frequency noise. We establish high-probability bounds on the extreme singular values of the associated random feature matrix and show that when the number of features $N$ scales like $m \log m$ with the number of training samples $m$, the system is well-conditioned, which yields estimation and generalization guarantees. Detailed numerical experiments on benchmark PDE problems, including advection, Burgers', Darcy flow, Helmholtz, Navier-Stokes, and structural mechanics, demonstrate that RRFF and RRFF-FEM are robust to noise and achieve improved performance with reduced training time compared to the unregularized random feature model, while maintaining competitive accuracy relative to kernel and neural operator tests.
- oai:arXiv.org:2512.17884v1
- cs.LG
- cs.NA
- math.NA
- stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
- cross
+ A Unified Framework for Community Detection and Model Selection in Blockmodels
+ https://arxiv.org/abs/2505.22459
+ arXiv:2505.22459v2 Announce Type: replace
+Abstract: Blockmodels are a foundational tool for modeling community structure in networks, with the stochastic blockmodel (SBM), degree-corrected blockmodel (DCBM), and popularity-adjusted blockmodel (PABM) forming a natural hierarchy of increasing generality. While community detection under these models has been extensively studied, much less attention has been paid to the model selection problem, i.e., determining which model best fits a given network. Building on recent theoretical insights about the spectral geometry of these models, we propose a unified framework for simultaneous community detection and model selection across the full blockmodel hierarchy. A key innovation is the use of loss functions that serve a dual role: they act as objective functions for community detection and as test statistics for hypothesis testing. We develop a greedy algorithm to minimize these loss functions and establish theoretical guarantees for exact label recovery and model selection consistency under each model. Extensive simulation studies demonstrate that our method achieves high accuracy in both tasks, outperforming or matching state-of-the-art alternatives. Applications to five real-world networks further illustrate the interpretability and practical utility of our approach. R code for implementing the method is available at https://github.com/subhankarbhadra/model-selection.
+ oai:arXiv.org:2505.22459v2
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
- Xinyue Yu, Hayden Schaeffer
+ 10.1080/10618600.2025.2590073
+ Subhankar Bhadra, Minh Tang, Srijan Sengupta
- Sparse Anomaly Detection Across Referentials: A Rank-Based Higher Criticism Approach
- https://arxiv.org/abs/2312.04924
- arXiv:2312.04924v2 Announce Type: replace
-Abstract: Detecting anomalies in large sets of observations is crucial in various applications, such as epidemiological studies, gene expression studies, and systems monitoring. We consider settings where the units of interest result in multiple independent observations from potentially distinct referentials. Scan statistics and related methods are commonly used in such settings, but rely on stringent modeling assumptions for proper calibration. We instead propose a rank-based variant of the higher criticism statistic that only requires independent observations originating from ordered spaces. We show under what conditions the resulting methodology is able to detect the presence of anomalies. These conditions are stated in a general, non-parametric manner, and depend solely on the probabilities of anomalous observations exceeding nominal observations. The analysis requires a refined understanding of the distribution of the ranks under the presence of anomalies, and in particular of the rank-induced dependencies. The methodology is robust against heavy-tailed distributions through the use of ranks. Within the exponential family and a family of convolutional models, we analytically quantify the asymptotic performance of our methodology and the performance of the oracle, and show the difference is small for many common models. Simulations confirm these results. We show the applicability of the methodology through an analysis of quality control data of a pharmaceutical manufacturing process.
- oai:arXiv.org:2312.04924v2
- stat.ME
+ Density estimation via periodic scaled Korobov kernel method with exponential decay condition
+ https://arxiv.org/abs/2506.15419
+ arXiv:2506.15419v2 Announce Type: replace
+Abstract: We propose the periodic scaled Korobov kernel (PSKK) method for nonparametric density estimation on $\mathbb{R}^d$. By first wrapping the target density into a periodic version through modulo operation and subsequently applying kernel ridge regression in scaled Korobov spaces, we extend the kernel approach proposed by Kazashi and Nobile (SIAM J. Numer. Anal., 2023) and eliminate its requirement for inherent periodicity of the density function. This key modification enables effective estimation of densities defined on unbounded domains. We establish rigorous mean integrated squared error (MISE) bounds, proving that for densities with smoothness of order $\alpha$ and exponential decay, our PSKK method achieves an $\mathcal{O}(M^{-1/(1+1/(2\alpha)+\epsilon)})$ MISE convergence rate with an arbitrarily small $\epsilon>0$. While matching the convergence rate of the previous kernel approach, our method applies to non-periodic distributions at the cost of stronger differentiability and exponential decay assumptions. Numerical experiments confirm the theoretical results and demonstrate a significant improvement over traditional kernel density estimation in large-sample regimes.
+ oai:arXiv.org:2506.15419v2
+ math.ST
+ cs.NA
+ math.NA
+ stat.TH
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1214/24-AOS2477
- Annals of Statistics 2025, Vol. 53, No. 2, 676--702
- Ivo V. Stoepker, Rui M. Castro, Ery Arias-Castro
+ http://creativecommons.org/licenses/by/4.0/
+ Ziyang Ye, Haoyuan Tan, Xiaoqun Wang, Zhijian He
- Differentially private Bayesian tests
- https://arxiv.org/abs/2401.15502
- arXiv:2401.15502v3 Announce Type: replace
-Abstract: Differential privacy has emerged as a significant cornerstone in the realm of scientific hypothesis testing utilizing confidential data. In reporting scientific discoveries, Bayesian tests are widely adopted since they effectively circumnavigate the key criticisms of P-values, namely, lack of interpretability and inability to quantify evidence in support of the competing hypotheses. We present a novel differentially private Bayesian hypothesis testing framework that arises naturally under a principled data generative mechanism, inherently maintaining the interpretability of the resulting inferences. Furthermore, by focusing on differentially private Bayes factors based on widely used test statistics, we circumvent the need to model the complete data generative mechanism and ensure substantial computational benefits. We also provide a set of sufficient conditions to establish results on Bayes factor consistency under the proposed framework. The utility of the devised technology is showcased via several numerical experiments.
- oai:arXiv.org:2401.15502v3
+ The Condition Number as a Scale-Invariant Proxy for Information Encoding in Neural Units
+ https://arxiv.org/abs/2506.16289
+ arXiv:2506.16289v2 Announce Type: replace
+Abstract: This paper explores the relationship between the condition number of a neural network's weight tensor and the extent of information encoded by the associated processing unit, viewed through the lens of information theory. It argues that a high condition number, though not sufficient for effective knowledge encoding, may indicate that the unit has learned to selectively amplify and compress information. This intuition is formalized for linear units with Gaussian inputs, linking the condition number and the transformation's log-volume scaling factor to the characteristics of the output entropy and the geometric properties of the learned transformation. The analysis demonstrates that for a fixed weight norm, a concentrated distribution of singular values (high condition number) corresponds to reduced overall information transfer, indicating a specialized and efficient encoding strategy. Furthermore, the linear stage entropy bound provides an upper limit on post-activation information for contractive, element-wise nonlinearities, supporting the condition number as a scale-invariant proxy for encoding capacity in practical neural networks. An empirical case study applies these principles to guide selective fine-tuning of Large Language Models for both a new task and a new input modality. The experiments show that the proposed method, named KappaTune, effectively mitigates catastrophic forgetting. Unlike many existing catastrophic forgetting mitigation methods that rely on access to pre-training statistics, which are often unavailable, this selective fine-tuning approach offers a way to bypass this common requirement.
+ oai:arXiv.org:2506.16289v2
+ stat.ML
- cs.CR
- cs.LG
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
- http://creativecommons.org/publicdomain/zero/1.0/
- Abhisek Chakraborty, Saptati Datta
+ http://creativecommons.org/licenses/by/4.0/
+ Oswaldo Ludwig
- SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning
- https://arxiv.org/abs/2402.04114
- arXiv:2402.04114v3 Announce Type: replace
-Abstract: In this paper, we analyze the sample and communication complexity of the federated linear stochastic approximation (FedLSA) algorithm. We explicitly quantify the effects of local training with agent heterogeneity. We show that the communication complexity of FedLSA scales polynomially with the inverse of the desired accuracy $\epsilon$. To overcome this, we propose SCAFFLSA, a new variant of FedLSA that uses control variates to correct for client drift, and establish its sample and communication complexities. We show that for statistically heterogeneous agents, its communication complexity scales logarithmically with the desired accuracy, similar to Scaffnew. An important finding is that, compared to the existing results for Scaffnew, the sample complexity scales with the inverse of the number of agents, a property referred to as linear speed-up. Achieving this linear speed-up requires completely new theoretical arguments. We apply the proposed method to federated temporal difference learning with linear function approximation and analyze the corresponding complexity improvements.
- oai:arXiv.org:2402.04114v3
- stat.ML
- cs.LG
- math.OC
- Mon, 22 Dec 2025 00:00:00 -0500
+ On Random Fields Associated with Analytic Wavelet Transform
+ https://arxiv.org/abs/2508.10495
+ arXiv:2508.10495v2 Announce Type: replace
+Abstract: Despite the broad application of the analytic wavelet transform (AWT), a systematic statistical characterization of its magnitude and phase as inhomogeneous random fields on the time-frequency domain when the input is a random process remains underexplored. In this work, we study the magnitude and phase of the AWT as random fields on the time-frequency domain when the observed signal is a deterministic function plus additive stationary Gaussian noise. We derive their marginal and joint distributions, establish concentration inequalities that depend on the signal-to-noise ratio (SNR), and analyze their covariance structures. Based on these results, we derive an upper bound on the probability of incorrectly identifying the time-scale ridge of the clean signal, explore the regularity of scalogram contours, and study the relationship between AWT magnitude and phase. Our findings lay the groundwork for developing rigorous AWT-based algorithms in noisy environments.
+ oai:arXiv.org:2508.10495v2
+ math.ST
+ math.PR
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Paul Mangold, Sergey Samsonov, Safwan Labbi, Ilya Levin, Reda Alami, Alexey Naumov, Eric Moulines
+ Gi-Ren Liu, Yuan-Chung Sheu, Hau-Tieng Wu
- Efficient Sampling in Disease Surveillance through Subpopulations: Sampling Canaries in the Coal Mine
- https://arxiv.org/abs/2405.10742
- arXiv:2405.10742v2 Announce Type: replace
-Abstract: We consider outbreak detection settings of endemic diseases where the population under study consists of various subpopulations available for stratified surveillance. These subpopulations can for example be based on age cohorts, but may also correspond to other subgroups of the population under study such as international travellers. Rather than sampling uniformly across the population, one may elevate the effectiveness of the detection methodology by optimally choosing a sampling subpopulation. We show (under some assumptions) the relative sampling efficiency between two subpopulations is inversely proportional to the ratio of their respective baseline disease risks. This implies one can increase sampling efficiency by sampling from the subpopulation with higher baseline disease risk. Our results require careful treatment of the power curves of exact binomial tests as a function of their sample size, which are non-monotonic due to the underlying discreteness. A case study of COVID-19 cases in the Netherlands illustrates our theoretical findings.
- oai:arXiv.org:2405.10742v2
+ Tree-based methods for length-biased survival data
+ https://arxiv.org/abs/2508.16312
+ arXiv:2508.16312v3 Announce Type: replace
+Abstract: Left-truncated survival data commonly arise in prevalent cohort studies, where only individuals who have experienced disease onset and survived until enrollment in the study are observed. When the onset process follows a stationary Poisson process, the resulting data are length-biased. This sampling mechanism induces a selection bias towards individuals with longer survival, and statistical methods for traditional survival data are not directly applicable. While tree-based methods developed for left-truncated data can be applied, they may be inefficient for length-biased data, as they do not account for the distribution of truncation times. To address this, we propose new survival trees and forests for length-biased right-censored data within the conditional inference framework. Our approach uses a score function derived from the full likelihood to construct permutation test statistics for variable splitting. For survival prediction, we consider two estimators of the unbiased survival function, differing in statistical efficiency and computational complexity. These elements enhance efficiency in tree construction and improve the accuracy of survival prediction in ensemble settings. Simulation studies demonstrate efficiency gains in both tree recovery and survival prediction, often exceeding the gains from ensembling alone. We further illustrate the utility of the proposed methods using lung cancer data from the Cancer Public Library Database, a nationwide cancer registry in South Korea.
+ oai:arXiv.org:2508.16312v3
+ stat.ME
- stat.AP
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1016/j.spl.2025.110384
- Statistics and Probability Letters 2025, Vol. 222, 110384
- Ivo V. Stoepker
+ Jinwoo Lee, Donghwan Lee, Hyunwoo Lee, Jiyu Sun
- Operational Dosage: Implications of Capacity Constraints for the Design and Interpretation of Experiments
- https://arxiv.org/abs/2407.21322
- arXiv:2407.21322v3 Announce Type: replace
-Abstract: We study RCTs that evaluate the impact of service interventions, for example, teachers or advisors conducting proactive outreach to at-risk students, medical providers giving medication adherence support by calling or texting, or social workers that conduct home visits. A defining feature of service interventions is that they are delivered by a capacity-constrained resource -- teachers, healthcare providers, or social workers -- whose limited availability creates causal inference complications. Because participants share a finite service capacity, adding more participants can reduce the timeliness or intensity of the service that others receive, introducing interference across participants. This generates hidden variation in the treatment itself, which we term operational dosage. We provide a mathematical model of service interventions using techniques from queueing theory and study the impact of capacity constraints on experimental outcomes. Our main insight is that treatment effects are both capacity- and sample-size-dependent, as well as decreasing in sample size once a critical threshold is exceeded. Interestingly, an implication is that statistical power of service intervention RCTs peaks at intermediate sample sizes -- directly contradicting conventional power calculations that assume monotonically increasing power with sample size. We instantiate our insights using simulations calibrated to a real-world trial evaluating a behavioral health intervention for tuberculosis patients in Kenya. Our simulation results suggest that a trial with high service capacity but limited sample size can obtain the same statistical power as a trial with lower service capacity but large sample size. Taken together, our results highlight the importance of capacity selection in experiment design and provide a mechanism for why experiments may fail to replicate or perform at scale.
- oai:arXiv.org:2407.21322v3
+ Understanding Spatial Regression Models from a Weighting Perspective in an Observational Study of Superfund Remediation
+ https://arxiv.org/abs/2508.19572
+ arXiv:2508.19572v2 Announce Type: replace
+Abstract: A key challenge in environmental health research is unmeasured spatial confounding, driven by unobserved spatially structured variables that influence both treatment and outcome. A common approach is to fit a spatial regression that models the outcome as a linear function of treatment and covariates, with a spatially structured error term to account for unmeasured spatial confounding. However, it remains unclear to what extent spatial regression actually accounts for such forms of confounding in finite samples, and whether this regression adjustment can be reformulated from a design-based perspective. Motivated by an observational study on the effect of Superfund site remediation on birth outcomes, we present a weighting framework for causal inference that unifies three canonical classes of spatial regression models (random effects, conditional autoregressive, and Gaussian process models) and reveals how they implicitly construct causal contrasts across space. Specifically, we show that: (i) the spatial error term induces approximate balance on a latent set of covariates and therefore adjusts for a specific form of unmeasured confounding; and (ii) the covariance structure of the spatial error can be equivalently represented as regressors in a linear model. Building on these insights, we introduce a new estimator that jointly addresses multiple forms of unmeasured spatial confounding and develop visual diagnostics. Using our new estimator, we find evidence of a small but beneficial effect of remediation on the percentage of small vulnerable newborns.
+ oai:arXiv.org:2508.19572v2
+ stat.ME
- math.OC
- Mon, 22 Dec 2025 00:00:00 -0500
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Justin Boutilier, Jonas Oddur Jonasson, Hannah Li, Erez Yoeli
+ Sophie M. Woodward, Francesca Dominici, Jose R. Zubizarreta
- Adjusting Model Size in Continual Gaussian Processes: How Big is Big Enough?
- https://arxiv.org/abs/2408.07588
- arXiv:2408.07588v5 Announce Type: replace
-Abstract: Many machine learning models require setting a parameter that controls their size before training, e.g. number of neurons in DNNs, or inducing points in GPs. Increasing capacity typically improves performance until all the information from the dataset is captured. After this point, computational cost keeps increasing, without improved performance. This leads to the question "How big is big enough?" We investigate this problem for Gaussian processes (single-layer neural networks) in continual learning. Here, data becomes available incrementally, and the final dataset size will therefore not be known before training, preventing the use of heuristics for setting a fixed model size. We develop a method to automatically adjust model size while maintaining near-optimal performance. Our experimental procedure follows the constraint that any hyperparameters must be set without seeing dataset properties, and we show that our method performs well across diverse datasets without the need to adjust its hyperparameter, showing it requires less tuning than others.
- oai:arXiv.org:2408.07588v5
- stat.ML
- cs.LG
- Mon, 22 Dec 2025 00:00:00 -0500
+ Reorienting Age-Friendly Frameworks for Rural Contexts: A Spatial Competence-Press Framework for Aging in Chinese Villages
+ https://arxiv.org/abs/2510.20343
+ arXiv:2510.20343v2 Announce Type: replace
+Abstract: While frameworks such as the WHO Age-Friendly Cities have advanced urban aging policy, rural contexts demand fundamentally different analytical approaches. The spatial dispersion, terrain variability, and agricultural labor dependencies that characterize rural aging experiences require moving beyond service-domain frameworks toward spatial stress assessment models. Current research on rural aging in China exhibits methodological gaps, systematically underrepresenting the spatial stressors that older adults face daily, including terrain barriers, infrastructure limitations, climate exposure, and agricultural labor burdens. Existing rural revitalization policies emphasize standardized interventions while inadequately addressing spatial heterogeneity and the spatially-differentiated needs of aging populations. This study developed a GIS-based spatial stress analysis framework that applies Lawton and Nahemow's competence-press model to quantify aging-related stressors and classify rural villages by intervention needs. Using data from 27 villages in Mamuchi Township, Shandong Province, we established four spatial stress indicators: slope gradient index (SGI), solar radiation exposure index (SREI), walkability index (WI), and agricultural intensity index (AII). Analysis of variance and hierarchical clustering revealed significant variation in spatial pressures across villages and identified distinct typologies that require targeted intervention strategies. The framework produces both quantitative stress measurements for individual villages and a classification system that groups villages with similar stress patterns, providing planners and policymakers with practical tools for designing spatially-targeted age-friendly interventions in rural China and similar contexts.
+ oai:arXiv.org:2510.20343v2
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- PMLR 267:48974-49000 (2025)
- Guiomar Pescador-Barrios, Sarah Filippi, Mark van der Wilk
+ 10.1177/00420980251394180
+ Urban Studies, First published online December 13, 2025
+ Ziyuan Gao
- Targeted Learning for Variable Importance
- https://arxiv.org/abs/2411.02221
- arXiv:2411.02221v2 Announce Type: replace
-Abstract: Variable importance is one of the most widely used measures for interpreting machine learning with significant interest from both statistics and machine learning communities. Recently, increasing attention has been directed toward uncertainty quantification in these metrics. Current approaches largely rely on one-step procedures, which, while asymptotically efficient, can present higher sensitivity and instability in finite sample settings. To address these limitations, we propose a novel method by employing the targeted learning (TL) framework, designed to enhance robustness in inference for variable importance metrics. Our approach is particularly suited for conditional permutation variable importance. We show that it (i) retains the asymptotic efficiency of traditional methods, (ii) maintains comparable computational complexity, and (iii) delivers improved accuracy, especially in finite sample contexts. We further support these findings with numerical experiments that illustrate the practical advantages of our method and validate the theoretical results.
- oai:arXiv.org:2411.02221v2
- stat.ML
- cs.LG
- stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
+ Approximating evidence via bounded harmonic means
+ https://arxiv.org/abs/2510.20617
+ arXiv:2510.20617v2 Announce Type: replace
+Abstract: Efficient Bayesian model selection relies on the model evidence or marginal likelihood, whose computation often requires evaluating an intractable integral. The harmonic mean estimator (HME) has long been a standard method of approximating the evidence. While computationally simple, the version introduced by Newton and Raftery (1994) potentially suffers from infinite variance. To overcome this issue, Gelfand and Dey (1994) defined a standardized representation of the estimator based on an instrumental function, and Robert and Wraith (2009) later proposed to use higher posterior density (HPD) indicators as instrumental functions.
+ Following this approach, a practical method is proposed, based on an elliptical covering of the HPD region with non-overlapping ellipsoids. The resulting estimator, called the Elliptical Covering Marginal Likelihood Estimator (ECMLE), not only eliminates the infinite-variance issue of the original HME and allows exact volume computations, but is also able to be used in multimodal settings. Through several examples, we illustrate that ECMLE outperforms other recent methods such as THAMES and its improved version (Metodiev et al 2024, 2025). Moreover, ECMLE demonstrates lower variance, a key challenge that subsequent HME variants have sought to address, and provides more stable evidence approximations, even in challenging settings.
+ oai:arXiv.org:2510.20617v2
+ stat.CO
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
- Xiaohan Wang, Yunzhe Zhou, Giles Hooker
+ Dana Naderi, Christian P Robert, Kaniav Kamary, Darren Wraith
- Refined Analysis of Federated Averaging and Federated Richardson-Romberg
- https://arxiv.org/abs/2412.01389
- arXiv:2412.01389v2 Announce Type: replace
-Abstract: In this paper, we present a novel analysis of FedAvg with constant step size, relying on the Markov property of the underlying process. We demonstrate that the global iterates of the algorithm converge to a stationary distribution and analyze its resulting bias and variance relative to the problem's solution. We provide a first-order bias expansion in both homogeneous and heterogeneous settings. Interestingly, this bias decomposes into two distinct components: one that depends solely on stochastic gradient noise and another on client heterogeneity. Finally, we introduce a new algorithm based on the Richardson-Romberg extrapolation technique to mitigate this bias.
- oai:arXiv.org:2412.01389v2
- stat.ML
- cs.LG
- math.OC
- Mon, 22 Dec 2025 00:00:00 -0500
+ COBASE: A new copula-based shuffling method for ensemble weather forecast postprocessing
+ https://arxiv.org/abs/2510.25610
+ arXiv:2510.25610v2 Announce Type: replace
+Abstract: Weather predictions are often provided as ensembles generated by repeated runs of numerical weather prediction models. These forecasts typically exhibit bias and inaccurate dependence structures due to numerical and dispersion errors, requiring statistical postprocessing for improved precision. A common correction strategy is the two-step approach: first adjusting the univariate forecasts, then reconstructing the multivariate dependence. The second step is usually handled with nonparametric methods, which can underperform when historical data are limited. Parametric alternatives, such as the Gaussian Copula Approach (GCA), offer theoretical advantages but often produce poorly calibrated multivariate forecasts due to random sampling of the corrected univariate margins. In this work, we introduce COBASE, a novel copula-based postprocessing framework that preserves the flexibility of parametric modeling while mimicking the nonparametric techniques through a rank-shuffling mechanism. This design ensures calibrated margins and realistic dependence reconstruction. We evaluate COBASE on multi-site 2-meter temperature forecasts from the ALADIN-LAEF ensemble over Austria and on joint forecasts of temperature and dew point temperature from the ECMWF system in the Netherlands. Across all regions, COBASE variants consistently outperform traditional copula-based approaches, such as GCA, and achieve performance on par with state-of-the-art nonparametric methods like SimSchaake and ECC, with only minimal differences across settings. These results position COBASE as a competitive and robust alternative for multivariate ensemble postprocessing, offering a principled bridge between parametric and nonparametric dependence reconstruction.
+ oai:arXiv.org:2510.25610v2
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Paul Mangold, Alain Durmus, Aymeric Dieuleveut, Sergey Samsonov, Eric Moulines
+ Maurits Flos, Bastien Fran\c{c}ois, Irene Schicker, Kirien Whan, Elisa Perrone
- Foundation for unbiased cross-validation of spatio-temporal models for species distribution modeling
- https://arxiv.org/abs/2502.03480
- arXiv:2502.03480v2 Announce Type: replace
-Abstract: Evaluating the predictive performance of species distribution models (SDMs) under realistic deployment scenarios requires careful handling of spatial and temporal dependencies in the data. Cross-validation (CV) is the standard approach for model evaluation, but its design strongly influences the validity of performance estimates. When SDMs are intended for spatial or temporal transfer, random CV can lead to overoptimistic results due to spatial autocorrelation (SAC) among neighboring observations.
- We benchmark four machine learning algorithms (GBM, XGBoost, LightGBM, Random Forest) on two real-world presence-absence datasets, a temperate plant and an anadromous fish, using multiple CV designs: random, spatial, spatio-temporal, environmental, and forward-chaining. Two training data usage strategies (LAST FOLD and RETRAIN) are evaluated, with hyperparameter tuning performed within each CV scheme. Model performance is assessed on independent out-of-time test sets using AUC, MAE, and correlation metrics.
- Random CV overestimates AUC by up to 0.16 and produces MAE values up to 80 percent higher than spatially blocked alternatives. Blocking at the empirical SAC range substantially reduces this bias. Training strategy affects evaluation outcomes: LAST FOLD yields smaller validation-test discrepancies under strong SAC, while RETRAIN achieves higher test AUC when SAC is weaker. Boosted ensemble models consistently perform best under spatially structured CV designs. We recommend a robust SDM workflow based on SAC-aware blocking, blocked hyperparameter tuning, and external temporal validation to improve reliability under spatial and temporal shifts.
- oai:arXiv.org:2502.03480v2
- stat.AP
- cs.LG
- Mon, 22 Dec 2025 00:00:00 -0500
+ On the class of exponential statistical structures of type B
+ https://arxiv.org/abs/2510.26863
+ arXiv:2510.26863v2 Announce Type: replace
+Abstract: The article is devoted to the study of exponential statistical structures of type B, which constitute a subclass of exponential families of probability distributions. This class is characterized by a number of analytical and probabilistic properties that make it a convenient tool for solving both theoretical and applied problems in statistics. The relevance of this research lies in the need to generalize known classes of distributions and to develop a unified framework for their analysis, which is essential for applications in stochastic modeling, machine learning, and financial mathematics. The paper proposes a formal definition of type B. Necessary and sufficient conditions for a statistical structure to belong to class B are established, and it is proved that such structures can be represented through a dominating measure with an explicit Laplace transform. The obtained results make it possible to describe a wide range of well-known one-dimensional and multivariate distributions, including the Binomial, Poisson, Normal, Gamma, Polynomial, and Logarithmic distributions, as well as specific cases such as the Borel-Tanner and Random Walk distributions. Particular attention is given to the proof of structural theorems that determine the stability of class B under linear transformations and the addition of independent random vectors. Recursive relations for initial and central moments as well as for semi-invariants are obtained, providing an efficient analytical and computational framework for their evaluation. Furthermore, the tails of type B distributions are investigated using the properties of the Laplace transform. New exponential inequalities for estimating the probabilities of large deviations are derived. The obtained results can be applied in theoretical studies and in practical problems of stochastic modeling.
+ oai:arXiv.org:2510.26863v2
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
- http://creativecommons.org/publicdomain/zero/1.0/
- 10.1016/j.ecoinf.2025.103521
- Ecological Informatics, Volume 92, Article 103521, 2025
- Diana Koldasbayeva, Alexey Zaytsev
+ http://creativecommons.org/licenses/by/4.0/
+ Oleksandr Volkov, Yurii Volkov
- Constraint-based causal discovery with tiered background knowledge and latent variables in single or overlapping datasets
- https://arxiv.org/abs/2503.21526
- arXiv:2503.21526v3 Announce Type: replace
-Abstract: In this paper we consider the use of tiered background knowledge within constraint based causal discovery. Our focus is on settings relaxing causal sufficiency, i.e. allowing for latent variables which may arise because relevant information could not be measured at all, or not jointly, as in the case of multiple overlapping datasets. We first present novel insights into the properties of the 'tiered FCI' (tFCI) algorithm. Building on this, we introduce a new extension of the IOD (integrating overlapping datasets) algorithm incorporating tiered background knowledge, the 'tiered IOD' (tIOD) algorithm. We show that under full usage of the tiered background knowledge tFCI and tIOD are sound, while simple versions of the tIOD and tFCI are sound and complete. We further show that the tIOD algorithm can often be expected to be considerably more efficient and informative than the IOD algorithm even beyond the obvious restriction of the Markov equivalence classes. We provide a formal result on the conditions for this gain in efficiency and informativeness. Our results are accompanied by a series of examples illustrating the exact role and usefulness of tiered background knowledge.
- oai:arXiv.org:2503.21526v3
- stat.ML
- cs.LG
+ Study of power series distributions with specified covariances
+ https://arxiv.org/abs/2511.01081
+ arXiv:2511.01081v2 Announce Type: replace
+Abstract: This paper presents a study of power series distributions (PSD) with prescribed covariance characteristics. Such distributions constitute a fundamental class in probability theory and mathematical statistics, as they generalize a wide range of well-known discrete distributions and enable the description of various stochastic phenomena with a predetermined variance structure. The aim of the research is to develop analytical methods for constructing power series distributions with given covariances and to establish the conditions under which a particular function can serve as the covariance of a certain PSD. The paper derives a first-order differential equation for the generating function of the distribution, which determines the relationship between its parameters and the form of the covariance function. It is shown that the choice of an analytical or polynomial covariance completely specifies the structure of the corresponding generating function. The analysis made it possible to construct new families of PSDs that generalize the classical Bernoulli, Poisson, geometric, and other distributions while preserving a given covariance structure. The proposed approach is based on the analytical relationship between the generating function and the covariance function, providing a framework for constructing stochastic models with predefined dispersion properties. The results obtained expand the theoretical framework for describing discrete distributions and open up opportunities for practical applications in statistical estimation, modeling of complex systems, financial processes, and machine learning, where it is crucial to control the dependence between the mean and the variation. Further research may focus on constructing continuous analogues of such distributions, studying their limiting properties, and applying them to problems of regression and Bayesian analysis.
+ oai:arXiv.org:2511.01081v2
+ math.ST
+ stat.TH
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
- Christine W. Bang, Vanessa Didelez
+ Oleksandr Volkov, Yurii Volkov, Nataliia Voinalovych
- A Survey on Archetypal Analysis
- https://arxiv.org/abs/2504.12392
- arXiv:2504.12392v2 Announce Type: replace
-Abstract: Archetypal analysis (AA) was originally proposed in 1994 by Adele Cutler and Leo Breiman as a computational procedure for extracting distinct aspects, so-called archetypes, from observations, with each observational record approximated as a mixture (i.e., convex combination) of these archetypes. AA thereby provides straightforward, interpretable, and explainable representations for feature extraction and dimensionality reduction, facilitating the understanding of the structure of high-dimensional data and enabling wide applications across the sciences. However, AA also faces challenges, particularly as the associated optimization problem is non-convex. This is the first survey that provides researchers and data mining practitioners with an overview of the methodologies and opportunities that AA offers, surveying the many applications of AA across disparate fields of science, as well as best practices for modeling data with AA and its limitations. The survey concludes by explaining crucial future research directions concerning AA.
- oai:arXiv.org:2504.12392v2
- stat.ME
- cs.LG
- stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Identification of Separable OTUs for Multinomial Classification in Compositional Data Analysis
+ https://arxiv.org/abs/2511.02509
+ arXiv:2511.02509v2 Announce Type: replace
+Abstract: High-throughput sequencing has transformed microbiome research, but it also produces inherently compositional data that challenge standard statistical and machine learning methods. In this work, we propose a multinomial classification framework for compositional microbiome data based on penalized log-ratio regression and pairwise separability screening. The method quantifies the discriminative ability of each OTU through the area under the receiver operating characteristic curve ($AUC$) for all pairwise log-ratios and aggregates these values into a global separability index $S_k$, yielding interpretable rankings of taxa together with confidence intervals. We illustrate the approach by reanalyzing the Baxter colorectal adenoma dataset and comparing our results with Greenacre's ordination-based analysis using Correspondence Analysis and Canonical Correspondence Analysis. Our models consistently recover a core subset of taxa previously identified as discriminant, thereby corroborating Greenacre's main findings, while also revealing additional OTUs that become important once demographic covariates are taken into account. In particular, adjustment for age, gender, and diabetes medication improves the precision of the separation index and highlights new, potentially relevant taxa, suggesting that part of the original signal may have been influenced by confounding. Overall, the integration of log-ratio modeling, covariate adjustment, and uncertainty estimation provides a robust and interpretable framework for OTU selection in compositional microbiome data. The proposed method complements existing ordination-based approaches by adding a probabilistic and inferential perspective, strengthening the identification of biologically meaningful microbial signatures.
+ oai:arXiv.org:2511.02509v2
+ stat.AP
+ stat.CO
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Aleix Alcacer, Irene Epifanio, Sebastian Mair, Morten M{\o}rup
+ http://creativecommons.org/licenses/by/4.0/
+ R. Alberich, N. A. Cruz, R. Fern\'andez, I. Garc\'ia Mosquera, A. Mir, F. Rossell\'o
- The Stochastic Occupation Kernel (SOCK) Method for Learning Stochastic Differential Equations
- https://arxiv.org/abs/2505.11622
- arXiv:2505.11622v2 Announce Type: replace
-Abstract: We present a novel kernel-based method for learning multivariate stochastic differential equations (SDEs). The method follows a two-step procedure: we first estimate the drift term function, then the (matrix-valued) diffusion function given the drift. Occupation kernels are integral functionals on a reproducing kernel Hilbert space (RKHS) that aggregate information over a trajectory. Our approach leverages vector-valued occupation kernels for estimating the drift component of the stochastic process. For diffusion estimation, we extend this framework by introducing operator-valued occupation kernels, enabling the estimation of an auxiliary matrix-valued function as a positive semi-definite operator, from which we readily derive the diffusion estimate. This enables us to avoid common challenges in SDE learning, such as intractable likelihoods, by optimizing a reconstruction-error-based objective. We propose a simple learning procedure that retains strong predictive accuracy while using Fenchel duality to promote efficiency. We validate the method on simulated benchmarks and a real-world dataset of Amyloid imaging in healthy and Alzheimer's disease subjects.
- oai:arXiv.org:2505.11622v2
+ Source-Optimal Training is Transfer-Suboptimal
+ https://arxiv.org/abs/2511.08401
+ arXiv:2511.08401v3 Announce Type: replace
+Abstract: We prove that training a source model optimally for its own task is generically suboptimal when the objective is downstream transfer. We study the source-side optimization problem in L2-SP ridge regression and show a fundamental mismatch between the source-optimal and transfer-optimal source regularization: outside of a measure-zero set, $\tau_0^* \neq \tau_S^*$. We characterize the transfer-optimal source penalty $\tau_0^*$ as a function of task alignment and identify an alignment-dependent reversal: with imperfect alignment ($0<\rho<1$), transfer benefits from stronger source regularization, while in super-aligned regimes ($\rho>1$), transfer benefits from weaker regularization. In isotropic settings, the decision of whether transfer helps is independent of the target sample size and noise, depending only on task alignment and source characteristics. We verify the linear predictions in a synthetic ridge regression experiment, and we present CIFAR-10 experiments as evidence that the source-optimal versus transfer-optimal mismatch can persist in nonlinear networks.
+ oai:arXiv.org:2511.08401v3
+ stat.ML
+ cs.LG
- Mon, 22 Dec 2025 00:00:00 -0500
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
- http://creativecommons.org/licenses/by/4.0/
- Michael L. Wells, Kamel Lahouel, Bruno Jedynak
+ http://creativecommons.org/publicdomain/zero/1.0/
+ C. Evans Hedges
- Clustering and Pruning in Causal Data Fusion
- https://arxiv.org/abs/2505.15215
- arXiv:2505.15215v2 Announce Type: replace
-Abstract: Data fusion, the process of combining observational and experimental data, can enable the identification of causal effects that would otherwise remain non-identifiable. Although identification algorithms have been developed for specific scenarios, do-calculus remains the only general-purpose tool for causal data fusion, particularly when variables are present in some data sources but not others. However, approaches based on do-calculus may encounter computational challenges as the number of variables increases and the causal graph grows in complexity. Consequently, there exists a need to reduce the size of such models while preserving the essential features. For this purpose, we propose pruning (removing unnecessary variables) and clustering (combining variables) as preprocessing operations for causal data fusion. We generalize earlier results on a single data source and derive conditions for applying pruning and clustering in the case of multiple data sources. We give sufficient conditions for inferring the identifiability or non-identifiability of a causal effect in a larger graph based on a smaller graph and show how to obtain the corresponding identifying functional for identifiable causal effects. Examples from epidemiology and social science demonstrate the use of the results.
- oai:arXiv.org:2505.15215v2
+ Ensuring Calibration Robustness in Split Conformal Prediction Under Adversarial Attacks
+ https://arxiv.org/abs/2511.18562
+ arXiv:2511.18562v2 Announce Type: replace
+Abstract: Conformal prediction (CP) provides distribution-free, finite-sample coverage guarantees but critically relies on exchangeability, a condition often violated under distribution shift. We study the robustness of split conformal prediction under adversarial perturbations at test time, focusing on both coverage validity and the resulting prediction set size. Our theoretical analysis characterizes how the strength of adversarial perturbations during calibration affects coverage guarantees under adversarial test conditions. We further examine the impact of adversarial training at the model-training stage. Extensive experiments support our theory: (i) Prediction coverage varies monotonically with the calibration-time attack strength, enabling the use of nonzero calibration-time attack to predictably control coverage under adversarial tests; (ii) target coverage can hold over a range of test-time attacks: with a suitable calibration attack, coverage stays within any chosen tolerance band across a contiguous set of perturbation levels; and (iii) adversarial training at the training stage produces tighter prediction sets that retain high informativeness.
+ oai:arXiv.org:2511.18562v2
+ stat.ML
+ cs.LG
- stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
- Otto Tabell, Santtu Tikka, Juha Karvanen
+ Xunlei Qian, Yue Xing
- On the performance of multi-fidelity and reduced-dimensional neural emulators for inference of physiological boundary conditions
- https://arxiv.org/abs/2506.11683
- arXiv:2506.11683v2 Announce Type: replace
-Abstract: Solving inverse problems in cardiovascular modeling is particularly challenging due to the high computational cost of running high-fidelity simulations. In this work, we focus on Bayesian parameter estimation and explore different methods to reduce the computational cost of sampling from the posterior distribution by leveraging low-fidelity approximations. A common approach is to construct a surrogate model for the high-fidelity simulation itself. Another is to build a surrogate for the discrepancy between high- and low-fidelity models. This discrepancy, which is often easier to approximate, is modeled with either a fully connected neural network or a nonlinear dimensionality reduction technique that enables surrogate construction in a lower-dimensional space. A third possible approach is to treat the discrepancy between the high-fidelity and surrogate models as random noise and estimate its distribution using normalizing flows. This allows us to incorporate the approximation error into the Bayesian inverse problem by modifying the likelihood function. We validate five different methods which are variations of the above on analytical test cases by comparing them to posterior distributions derived solely from high-fidelity models, assessing both accuracy and computational cost. Finally, we demonstrate our approaches on two cardiovascular examples of increasing complexity: a lumped-parameter Windkessel model and a patient-specific three-dimensional anatomy.
- oai:arXiv.org:2506.11683v2
- stat.ML
- cs.CE
- cs.LG
- math.ST
- q-bio.QM
- stat.TH
- Mon, 22 Dec 2025 00:00:00 -0500
+ Monotone data augmentation algorithm for longitudinal continuous, binary and ordinal outcomes: a unifying approach
+ https://arxiv.org/abs/2512.06621
+ arXiv:2512.06621v2 Announce Type: replace
+Abstract: The monotone data augmentation (MDA) algorithm has been widely used to impute missing data for longitudinal continuous outcomes. Compared to a full data augmentation approach, the MDA scheme accelerates the mixing of the Markov chain, reduces computational costs per iteration, and aids in missing data imputation under nonignorable dropouts. We extend the MDA algorithm to the multivariate probit (MVP) model for longitudinal binary and ordinal outcomes. The MVP model assumes the categorical outcomes are discretized versions of underlying longitudinal latent Gaussian outcomes modeled by a mixed effects model for repeated measures. A parameter expansion strategy is employed to facilitate posterior sampling and expedite the convergence of the Markov chain in MVP. The method enables the sampling of the regression coefficients and covariance matrix for longitudinal continuous, binary and ordinal outcomes in a unified manner. This property aids in understanding the algorithm and developing computer codes for MVP. We also introduce independent Metropolis-Hastings samplers to handle complex priors, and evaluate how the choice between flat and diffuse normal priors for regression coefficients influences parameter estimation and missing data imputation. Numerical examples are used to illustrate the methodology.
+ oai:arXiv.org:2512.06621v2
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1016/j.compbiomed.2025.111389
- Computers in Biology and Medicine 200, 111389 (2026)
- Chloe H. Choi, Andrea Zanoni, Daniele E. Schiavazzi, Alison L. Marsden
+ Yongqiang Tang
- On Bayes factor functions
- https://arxiv.org/abs/2506.16674
- arXiv:2506.16674v3 Announce Type: replace
-Abstract: We describe Bayes factor functions based on the sampling distributions of \emph{z}, \emph{t}, $\chi^2$, and \emph{F} statistics, using a class of inverse-moment prior distributions to define alternative hypotheses. These non-local alternative prior distributions are centered on standardized effects, which serve as indices for the Bayes factor function. We compare the conclusions drawn from the resulting Bayes factor functions to those drawn from Bayes factors defined using local alternative prior specifications and examine their frequentist operating characteristics. Finally, an application of Bayes factor functions to replicated experimental designs in psychology is provided.
- oai:arXiv.org:2506.16674v3
- stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
+ Mixed exponential statistical structures and their approximation operators
+ https://arxiv.org/abs/2512.07870
+ arXiv:2512.07870v2 Announce Type: replace
+Abstract: The paper examines the construction and analysis of a new class of mixed exponential statistical structures that combine the properties of stochastic models and linear positive operators. The relevance of the topic is driven by the growing need to develop a unified theoretical framework capable of describing both continuous and discrete random structures that possess approximation properties. The aim of the study is to introduce and analyze a generalized family of mixed exponential statistical structures and their corresponding linear positive operators, which include known operators as particular cases. We define auxiliary statistical structures B and H through differential relations between their elements, and construct the main Phillips-type structure. Recurrence relations for the central moments are obtained, their properties are established, and the convergence and approximation accuracy of the constructed operators are investigated. The proposed approach allows mixed exponential structures to be viewed as a generalization of known statistical systems, providing a unified analytical and stochastic description. The results demonstrate that mixed exponential statistical structures can be used to develop new classes of positive operators with controllable preservation and approximation properties. The proposed methodology forms a basis for further research in constructing multidimensional statistical structures, analyzing operators in weighted spaces, and studying their asymptotic characteristics.
+ oai:arXiv.org:2512.07870v2
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
- Saptati Datta, Riana Guha, Rachael Shudde, Valen E. Johnson
+ Yurii Volkov, Oleksandr Volkov, Nataliia Voinalovych
- Quantifying Uncertainty in the Presence of Distribution Shifts
- https://arxiv.org/abs/2506.18283
- arXiv:2506.18283v2 Announce Type: replace
-Abstract: Neural networks make accurate predictions but often fail to provide reliable uncertainty estimates, especially under covariate distribution shifts between training and testing. To address this problem, we propose a Bayesian framework for uncertainty estimation that explicitly accounts for covariate shifts. While conventional approaches rely on fixed priors, the key idea of our method is an adaptive prior, conditioned on both training and new covariates. This prior naturally increases uncertainty for inputs that lie far from the training distribution in regions where predictive performance is likely to degrade. To efficiently approximate the resulting posterior predictive distribution, we employ amortized variational inference. Finally, we construct synthetic environments by drawing small bootstrap samples from the training data, simulating a range of plausible covariate shift using only the original dataset. We evaluate our method on both synthetic and real-world data. It yields substantially improved uncertainty estimates under distribution shifts.
- oai:arXiv.org:2506.18283v2
- stat.ML
- cs.LG
- Mon, 22 Dec 2025 00:00:00 -0500
+ The limit joint distributions of some statistics used in testing the quality of random number generators
+ https://arxiv.org/abs/2512.08002
+ arXiv:2512.08002v2 Announce Type: replace
+Abstract: The limit joint distribution of statistics that are generalizations of some statistics from the NIST STS, TestU01, and other packages is found under the following hypotheses $H_0$ and $H_1$. Hypothesis $H_0$ states that the tested sequence is a sequence of independent random vectors with a known distribution, and the simple alternative hypothesis $H_1$ converges in some sense to $H_0$ with increasing sample size. In addition, an analogue of the Berry-Esseen inequality is obtained for the statistics under consideration, and conditions for their asymptotic independence are found.
+ oai:arXiv.org:2512.08002v2
+ math.ST
+ stat.AP
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
- Yuli Slavutsky, David M. Blei
+ M. P. Savelov
- A Structure-Preserving Assessment of VBPBB for Time Series Imputation Under Periodic Trends, Noise, and Missingness Mechanisms
- https://arxiv.org/abs/2508.19535
- arXiv:2508.19535v3 Announce Type: replace
-Abstract: Incomplete time-series data compromise statistical inference, particularly when the underlying process exhibits periodic structure (e.g., annual or monthly cycles). Conventional imputation procedures rarely account for such temporal dependence, leading to attenuation of seasonal signals and biased estimates. This study proposes and evaluates a structure-preserving multiple imputation framework that augments imputation models with frequency-specific covariates derived via the Variable Bandpass Periodic Block Bootstrap (VBPBB). In controlled simulations, we generate series with annual and monthly components, impose Gaussian noise across low, moderate, and high signal-to-noise regimes, and introduce Missing Completely at Random (MCAR) patterns from 5% to 70% missingness. Dominant periodic components are extracted with VBPBB, resampled to stabilize uncertainty, and incorporated as covariates in Amelia II. Compared with baseline methods that do not model temporal structure, the VBPBB-enhanced approach consistently yields lower imputation error and superior retention of periodic features, with the largest gains observed under high noise and when multiple components are included. These findings demonstrate that explicitly modeling periodic content during imputation improves reconstruction accuracy and preserves time-series structure in the presence of substantial missingness.
- oai:arXiv.org:2508.19535v3
- stat.AP
- Mon, 22 Dec 2025 00:00:00 -0500
+ Bounds for causal mediation effects
+ https://arxiv.org/abs/2512.11549
+ arXiv:2512.11549v2 Announce Type: replace
+Abstract: Several frameworks have been proposed for studying causal mediation analysis. What these frameworks have in common is that they all make assumptions for point identification that can be violated even when treatment is randomized. When a causal effect is not point-identified, one can sometimes derive bounds, i.e. a range of possible values that are consistent with the observed data. In this work, we study causal bounds for mediation effects under both the natural effects framework and the separable effects framework. In particular, we show that when there are unmeasured confounders for the intermediate variable(s), the sharp symbolic bounds on separable (in)direct effects coincide with existing bounds for natural (in)direct effects in the analogous setting. We compare these bounds to valid bounds for the natural direct effects when only the cross-world independence assumption does not hold. Furthermore, we demonstrate the use and compare the results of the bounds on data from a trial investigating the effect of peanut consumption on the development of peanut allergy in infants through specific pathways of measured immunological biomarkers.
+ oai:arXiv.org:2512.11549v2
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
- Asmaa Ahmad, Eric J Rose, Michael Roy, Edward Valachovic
+ Marie S. Breum, Vanessa Didelez, Erin E. Gabriel, Michael C. Sachs
+
+
+ Universality of high-dimensional scaling limits of stochastic gradient descent
+ https://arxiv.org/abs/2512.13634
+ arXiv:2512.13634v2 Announce Type: replace
+Abstract: We consider statistical tasks in high dimensions whose loss depends on the data only through its projection into a fixed-dimensional subspace spanned by the parameter vectors and certain ground truth vectors. This includes classifying mixture distributions with cross-entropy loss with one- and two-layer networks, and learning single- and multi-index models with one- and two-layer networks. When the data is drawn from an isotropic Gaussian mixture distribution, it is known that the evolution of a finite family of summary statistics under stochastic gradient descent converges to an autonomous ordinary differential equation (ODE), as the dimension and sample size go to $\infty$ and the step size goes to $0$ commensurately. Our main result is that these ODE limits are universal in that this limit is the same whenever the data is drawn from mixtures of arbitrary product distributions whose first two moments match the corresponding Gaussian distribution, provided the initialization and ground truth vectors are coordinate-delocalized. We complement this by proving two corresponding non-universality results. We provide a simple example where the ODE limits are non-universal if the initialization is coordinate-aligned. We also show that the stochastic differential equation limits arising as fluctuations of the summary statistics around their ODE's fixed points are not universal.
+ oai:arXiv.org:2512.13634v2
+ stat.ML
+ cs.LG
+ math.PR
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Reza Gheissari, Aukosh Jagannath
- Dirichlet moment tensors and the correspondence between admixture and mixture of product models
- https://arxiv.org/abs/2509.25441
- arXiv:2509.25441v3 Announce Type: replace
-Abstract: Understanding posterior contraction behavior in Bayesian hierarchical models is of fundamental importance, but progress in this question is relatively sparse in comparison to the theory of density estimation. In this paper, we study two classes of hierarchical models for grouped data, where observations within groups are exchangeable. Using moment tensor decomposition of the distribution of the latent variables, we establish a precise equivalence between the class of Admixture models (such as Latent Dirichlet Allocation) and the class of Mixture of products of multinomial distributions. This correspondence enables us to leverage the result from the latter class of models, which are more well-understood, so as to arrive at the identifiability and posterior contraction rates in both classes under conditions much weaker than in existing literature. For instance, our results shed light on cases where the topics are not linearly independent or the number of topics is misspecified in the admixture setting. Finally, we analyze individual documents' latent allocation performance via the borrowing of strength properties of hierarchical Bayesian modeling. Many illustrations and simulations are provided to support the theory.
- oai:arXiv.org:2509.25441v3
+ Sharp convergence rates for Spectral methods via the feature space decomposition method
+ https://arxiv.org/abs/2512.14473
+ arXiv:2512.14473v2 Announce Type: replace
+Abstract: In this paper, we apply the Feature Space Decomposition (FSD) method developed in [LS24, GLS25, ALSS26] to obtain, under fairly general conditions, matching upper and lower bounds for the population excess risk of spectral methods in linear regression under the squared loss, for every covariance and every signal. This result enables us, for a given linear regression problem, to define a partial order on the set of spectral methods according to their convergence rates, thereby characterizing which spectral algorithm is superior for that specific problem. Furthermore, this allows us to generalize the saturation effect proposed in inverse problems and to provide necessary and sufficient conditions for its occurrence. Our method also shows that, under broad conditions, any spectral algorithm lacks a feature learning property, and therefore cannot overcome the barrier of the information exponent in problems such as single-index learning.
+ oai:arXiv.org:2512.14473v2
+ math.ST
+ stat.TH
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Dat Do, Sunrit Chakraborty, Jonathan Terhorst, XuanLong Nguyen
+ Guillaume Lecu\'e, Zhifan Li, Zong Shang
- Constructing Large Orthogonal Minimally Aliased Response Surface Designs by Concatenating Two Definitive Screening Designs
- https://arxiv.org/abs/2511.02984
- arXiv:2511.02984v2 Announce Type: replace
-Abstract: Orthogonal minimally aliased response surface (OMARS) designs permit the study of quantitative factors at three levels using an economical number of runs. In these designs, the linear effects of the factors are neither aliased with each other nor with the quadratic effects and the two-factor interactions. Complete catalogs of OMARS designs with up to five factors have been obtained using an enumeration algorithm. However, the algorithm is computationally demanding for designs with many factors and runs. To overcome this issue, we propose a construction method for large OMARS designs that concatenates two definitive screening designs and improves the statistical features of its parent designs. The concatenation employs an algorithm that minimizes the aliasing among the second-order effects using foldover techniques and column permutations for one of the parent designs. We study the properties of the new OMARS designs and compare them with alternative designs in the literature.
- oai:arXiv.org:2511.02984v2
+ On the bias of the Gini estimator: Poisson and geometric cases, a characterization of the gamma family, and unbiasedness under gamma distributions
+ https://arxiv.org/abs/2512.14983
+ arXiv:2512.14983v2 Announce Type: replace
+Abstract: In this paper, we derive a general representation for the expectation of the Gini coefficient estimator in terms of the Laplace transform of the underlying distribution, together with the mean and the Gini coefficient of its exponentially tilted version. This representation leads to a new characterization of the gamma family within the class of nonnegative scale families, based on a stability property under exponential tilting. As direct applications, we show that the Gini estimator is biased for both Poisson and geometric populations and provide an alternative, unified proof of its unbiasedness for gamma populations. By using the derived bias expressions, we propose plug-in bias-corrected estimators and assess their finite-sample performance through a Monte Carlo study, which demonstrates substantial improvements over the original estimator. Compared with existing approaches, our framework highlights the fundamental role of scale invariance and exponential tilting, rather than distribution-specific algebraic calculations, and complements recent results in Baydil et al. (2025) [Unbiased estimation of the gini coefficient. SPL, 222:110376] and Vila and Saulo (2025a,b) [Bias in Gini coefficient estimation for gamma mixture populations. STPA, 66:1-18; and The mth gini index estimator: Unbiasedness for gamma populations. J. Econ. Inequal].
+ oai:arXiv.org:2512.14983v2
+ stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
- Alan R. Vazquez, Peter Goos, Eric D. Schoen
+ Roberto Vila, Helton Saulo
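A small Monte Carlo sketch of the finite-sample behaviour described in the abstract above (Python, with arbitrary choices of shape, rate, and sample size; the estimator below is one common U-statistic form of the sample Gini and may differ in detail from the exact estimator analyzed in the paper):

```python
import numpy as np
from scipy.special import gamma as gammafn

rng = np.random.default_rng(0)

def gini_hat(x):
    """Sample Gini: pairwise mean absolute difference (U-statistic) over twice the mean."""
    x = np.sort(np.asarray(x, dtype=float))
    n, k = len(x), np.arange(1, len(x) + 1)
    mad = 2.0 * np.sum((2 * k - n - 1) * x) / (n * (n - 1))
    return mad / (2.0 * x.mean())

def mc_mean(sampler, n=20, reps=20000):
    """Monte Carlo mean of the estimator over repeated samples of size n."""
    return np.mean([gini_hat(sampler(n)) for _ in range(reps)])

# Gamma(shape a=2): population Gini = Gamma(a+1/2) / (Gamma(a+1) * sqrt(pi));
# the Monte Carlo mean should sit on top of it, consistent with unbiasedness for gamma.
a = 2.0
print("gamma  :", mc_mean(lambda m: rng.gamma(a, 1.0, m)),
      "population", gammafn(a + 0.5) / (gammafn(a + 1.0) * np.sqrt(np.pi)))

# Poisson(3): compare the small-sample mean with a large-sample reference; the gap is the bias.
lam = 3.0
print("poisson:", mc_mean(lambda m: rng.poisson(lam, m)),
      "reference", gini_hat(rng.poisson(lam, 1_000_000)))
```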
- Conformalized Bayesian Inference, with Applications to Random Partition Models
- https://arxiv.org/abs/2511.05746
- arXiv:2511.05746v2 Announce Type: replace
-Abstract: Bayesian posterior distributions naturally represent parameter uncertainty informed by data. However, when the parameter space is complex, as in many nonparametric settings where it is infinite-dimensional or combinatorially large, standard summaries such as posterior means, credible intervals, or simple notions of multimodality are often unavailable, hindering interpretable posterior uncertainty quantification. We introduce Conformalized Bayesian Inference (CBI), a broadly applicable and computationally efficient framework for posterior inference on nonstandard parameter spaces. CBI yields a point estimate, a credible region with assumption-free posterior coverage guarantees, and a principled analysis of posterior multimodality, requiring only Monte Carlo samples from the posterior and a notion of discrepancy between parameters. The method builds a pseudo-density score for each parameter value, yielding a MAP-like point estimate and a credible region derived from conformal prediction principles. The key conceptual step underlying this construction is the reinterpretation of posterior inference as prediction on the parameter space. A final density-based clustering step identifies representative posterior modes. We investigate a number of theoretical and methodological properties of CBI and demonstrate its practicality, scalability, and versatility in simulated and real data clustering applications with random partition models. An accompanying Python library, cbi_partitions, is available at github.com/nbariletto/cbi_partitions_repo.
- oai:arXiv.org:2511.05746v2
+ Bayesian Empirical Bayes: Simultaneous Inference from Probabilistic Symmetries
+ https://arxiv.org/abs/2512.16239
+ arXiv:2512.16239v2 Announce Type: replace
+Abstract: Empirical Bayes (EB) improves the accuracy of simultaneous inference "by learning from the experience of others" (Efron, 2012). Classical EB theory focuses on latent variables that are iid draws from a fitted prior (Efron, 2019). Modern applications, however, feature complex structure, like arrays, spatial processes, or covariates. How can we apply EB ideas to these settings? We propose a generalized approach to empirical Bayes based on the notion of probabilistic symmetry. Our method pairs a simultaneous inference problem, with an unknown prior, to a symmetry assumption on the joint distribution of the latent variables. Each symmetry implies an ergodic decomposition, which we use to derive a corresponding empirical Bayes method. We call this method Bayesian empirical Bayes (BEB). We show how BEB recovers the classical methods of empirical Bayes, which implicitly assume exchangeability. We then use it to extend EB to other probabilistic symmetries: (i) EB matrix recovery for arrays and graphs; (ii) covariate-assisted EB for conditional data; (iii) EB spatial regression under shift invariance. We develop scalable algorithms based on variational inference and neural networks. In simulations, BEB outperforms existing approaches to denoising arrays and spatial data. On real data, we demonstrate BEB by denoising a cancer gene-expression matrix and analyzing spatial air-quality data from New York City.
+ oai:arXiv.org:2512.16239v2
+ stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
- Nicola Bariletto, Nhat Ho, Alessandro Rinaldo
+ Bohan Wu, Eli N. Weinstein, David M. Blei
- Generalized infinite dimensional Alpha-Procrustes based geometries
- https://arxiv.org/abs/2511.09801
- arXiv:2511.09801v2 Announce Type: replace
-Abstract: This work extends the recently introduced Alpha-Procrustes family of Riemannian metrics for symmetric positive definite (SPD) matrices by incorporating generalized versions of the Bures-Wasserstein (GBW), Log-Euclidean, and Wasserstein distances. While the Alpha-Procrustes framework has unified many classical metrics in both finite- and infinite- dimensional settings, it previously lacked the structural components necessary to realize these generalized forms. We introduce a formalism based on unitized Hilbert-Schmidt operators and an extended Mahalanobis norm that allows the construction of robust, infinite-dimensional generalizations of GBW and Log-Hilbert-Schmidt distances. Our approach also incorporates a learnable regularization parameter that enhances geometric stability in high-dimensional comparisons. Preliminary experiments reproducing benchmarks from the literature demonstrate the improved performance of our generalized metrics, particularly in scenarios involving comparisons between datasets of varying dimension and scale. This work lays a theoretical and computational foundation for advancing robust geometric methods in machine learning, statistical inference, and functional data analysis.
- oai:arXiv.org:2511.09801v2
+ Generative modeling of conditional probability distributions on the level-sets of collective variables
+ https://arxiv.org/abs/2512.17374
+ arXiv:2512.17374v2 Announce Type: replace
+Abstract: Given a probability distribution $\mu$ in $\mathbb{R}^d$ represented by data, we study in this paper the generative modeling of its conditional probability distributions on the level-sets of a collective variable $\xi: \mathbb{R}^d \rightarrow \mathbb{R}^k$, where $1 \le k<d$. We propose a general and efficient learning approach that is able to learn generative models on different level-sets of $\xi$ simultaneously. To improve the learning quality on level-sets in low-probability regions, we also propose a strategy for data enrichment by utilizing data from enhanced sampling techniques. We demonstrate the effectiveness of our proposed learning approach through concrete numerical examples. The proposed approach is potentially useful for the generative modeling of molecular systems in biophysics, for instance.
+ oai:arXiv.org:2512.17374v2
+ stat.ML
- cs.LG
- math.FAmath.OC
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Salvish Goomanee, Andi Han, Pratik Jawanpuria, Bamdev Mishra
+ http://creativecommons.org/licenses/by/4.0/
+ Fatima-Zahrae Akhyar, Wei Zhang, Gabriel Stoltz, Christof Sch\"utte
- Modeling Issues with Eye Tracking Data
- https://arxiv.org/abs/2512.15950
- arXiv:2512.15950v2 Announce Type: replace
-Abstract: I describe and compare procedures for binary eye-tracking (ET) data. These procedures are applied to both raw and compressed data. The basic GLMM model is a logistic mixed model combined with random effects for persons and items. Additional models address autocorrelation eye-tracking serial observations. In particular, two novel approaches are illustrated that address serial without the use of an observed lag-1 predictor: a first-order autoregressive model obtained with generalized estimating equations, and a recurrent two-state survival model. Altogether, the results of four different analyses point to unresolved issues in the analysis of eye-tracking data and new directions for analytic development.
- oai:arXiv.org:2512.15950v2
- stat.ME
- Mon, 22 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Gregory Camilli
+ A Consistent ICM-based $\chi^2$ Specification Test
+ https://arxiv.org/abs/2208.13370
+ arXiv:2208.13370v3 Announce Type: replace-cross
+Abstract: In spite of the omnibus property of Integrated Conditional Moment (ICM) specification tests, they are not commonly used in empirical practice owing to features such as the non-pivotality of the test and the high computational cost of available bootstrap schemes, especially in large samples. This paper proposes specification and mean independence tests based on ICM metrics. The proposed test exhibits consistency, asymptotic $\chi^2$-distribution under the null hypothesis, and computational efficiency. Moreover, it demonstrates robustness to heteroskedasticity of unknown form and can be adapted to enhance power towards specific alternatives. A power comparison with classical bootstrap-based ICM tests using Bahadur slopes is also provided. Monte Carlo simulations are conducted to showcase the excellent size control and competitive power of the proposed test.
+ oai:arXiv.org:2208.13370v3
+ econ.EM
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Feiyu Jiang, Emmanuel Selorm Tsyawo
- Opening the House: Datasets for Mixed Doubles Curling
- https://arxiv.org/abs/2512.16574
- arXiv:2512.16574v2 Announce Type: replace
-Abstract: We introduce the most comprehensive publicly available datasets for mixed doubles curling, constructed from eleven top-level tournaments from the CurlIT (https://curlit.com/results) Results Booklets spanning 53 countries, 1,112 games, and nearly 70,000 recorded shots. While curling analytics has grown in recent years, mixed doubles remains under-served due to limited access to data. Using a combined text-scraping and image-processing pipeline, we extract and standardize detailed game- and shot-level information, including player statistics, hammer possession, Power Play usage, stone coordinates, and post-shot scoring states. We describe the data engineering workflow, highlight challenges in parsing historical records, and derive additional contextual features that enable rigorous strategic analysis. Using these datasets, we present initial insights into shot selection and success rates, scoring distributions, and team efficiencies, illustrating key differences between mixed doubles and traditional 4-player curling. We highlight various ways to analyze this type of data including from a shot-, end-, game- or team-level to display its versatilely. The resulting resources provide a foundation for advanced performance modeling, strategic evaluation, and future research in mixed doubles curling analytics, supporting broader analytical engagement with this rapidly growing discipline.
- oai:arXiv.org:2512.16574v2
- stat.AP
- Mon, 22 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Robyn Ritchie, Alexandre Leblanc, Thomas Loughin
+ Multiperiodic Processes: Ergodic Sources with a Sublinear Entropy
+ https://arxiv.org/abs/2302.09049
+ arXiv:2302.09049v3 Announce Type: replace-cross
+Abstract: We construct multiperiodic processes -- a simple example of stationary ergodic (but not mixing) processes over natural numbers that enjoy the vanishing entropy rate under a mild condition. Multiperiodic processes are supported on randomly shifted deterministic sequences called multiperiodic sequences, which can be efficiently generated using an algorithm called the Infinite Clock. Under a suitable parameterization, multiperiodic sequences exhibit relative frequencies of particular numbers given by Zipf's law. Exactly in the same setting, the respective multiperiodic processes satisfy an asymptotic power-law growth of block entropy, called Hilberg's law. Hilberg's law is deemed to hold for statistical language models, in particular.
+ oai:arXiv.org:2302.09049v3
+ cs.IT
+ cs.LG
+ math.IT
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ {\L}ukasz D\k{e}bowski
- Unifying Distributionally Robust Optimization via Optimal Transport Theory
- https://arxiv.org/abs/2308.05414
- arXiv:2308.05414v2 Announce Type: replace-cross
-Abstract: In recent years, two prominent paradigms have shaped distributionally robust optimization (DRO), modeling distributional ambiguity through $\phi$-divergences and Wasserstein distances, respectively. While the former focuses on ambiguity in likelihood ratios, the latter emphasizes ambiguity in outcomes and uses a transportation cost function to capture geometric structure in the outcome space. This paper proposes a unified framework that bridges these approaches by leveraging optimal transport (OT) with conditional moment constraints. Our formulation enables adversarial distributions to jointly perturb likelihood ratios and outcomes, yielding a generalized OT coupling between the nominal and perturbed distributions. We further establish key duality results and develop tractable reformulations that highlight the practical power of our unified approach.
- oai:arXiv.org:2308.05414v2
- math.OC
+ Normalized mutual information is a biased measure for classification and community detection
+ https://arxiv.org/abs/2307.01282
+ arXiv:2307.01282v3 Announce Type: replace-cross
+Abstract: Normalized mutual information is widely used as a similarity measure for evaluating the performance of clustering and classification algorithms. In this paper, we argue that results returned by the normalized mutual information are biased for two reasons: first, because they ignore the information content of the contingency table and, second, because their symmetric normalization introduces spurious dependence on algorithm output. We introduce a modified version of the mutual information that remedies both of these shortcomings. As a practical demonstration of the importance of using an unbiased measure, we perform extensive numerical tests on a basket of popular algorithms for network community detection and show that one's conclusions about which algorithm is best are significantly affected by the biases in the traditional mutual information.
+ oai:arXiv.org:2307.01282v3
+ cs.SIstat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jose Blanchet, Daniel Kuhn, Jiajin Li, Bahar Taskesen
+ 10.1038/s41467-025-66150-8
+ Nature Communications 16, 11268 (2025)
+ Maximilian Jerdee, Alec Kirkley, M. E. J. Newman
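The bias described in the abstract above can be seen in a few lines of Python (a sketch with arbitrary sizes, using scikit-learn's standard normalized mutual information rather than the corrected measure proposed in the paper): random partitions that carry no information about the ground truth nevertheless score higher as their number of clusters grows.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)
n = 1000
labels_true = rng.integers(0, 5, n)          # ground truth with 5 groups

# Random candidate partitions, statistically independent of the truth: the mean NMI
# still grows with the number of clusters k, illustrating the finite-size bias.
for k in (2, 5, 20, 100):
    scores = [normalized_mutual_info_score(labels_true, rng.integers(0, k, n))
              for _ in range(200)]
    print(f"k = {k:3d}  mean NMI of random partitions = {np.mean(scores):.3f}")
```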
+
+
+ Persistence diagrams as morphological signatures of cells: A method to measure and compare cells within a population
+ https://arxiv.org/abs/2310.20644
+ arXiv:2310.20644v3 Announce Type: replace-cross
+Abstract: Cell biologists study in parallel the morphology of cells with the regulation mechanisms that modify this morphology. Such studies are complicated by the inherent heterogeneity present in the cell population. It remains difficult to define the morphology of a cell with parameters that can quantify this heterogeneity, leaving the cell biologist to rely on manual inspection of cell images. We propose an alternative to this manual inspection that is based on topological data analysis. We characterise the shape of a cell by its contour and nucleus. We build a filtering of the edges defining the contour using a radial distance function initiated from the nucleus. This filtering is then used to construct a persistence diagram that serves as a signature of the cell shape. Two cells can then be compared by computing the Wasserstein distance between their persistence diagrams. Given a cell population, we then compute a distance matrix that includes all pairwise distances between its members. We analyse this distance matrix using hierarchical clustering with different linkage schemes and define a purity score that quantifies consistency between those different schemes, which can then be used to assess homogeneity within the cell population. We illustrate and validate our approach to identify sub-populations in human mesenchymal stem cell populations.
+ oai:arXiv.org:2310.20644v3
+ q-bio.QM
+ math.AT
+ stat.AP
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Yossi Bokor Bleile, Pooja Yadav, Patrice Koehl, Florian Rehfeldt
- On the Identification of Temporally Causal Representation with Instantaneous Dependence
- https://arxiv.org/abs/2405.15325
- arXiv:2405.15325v3 Announce Type: replace-cross
-Abstract: Temporally causal representation learning aims to identify the latent causal process from time series observations, but most methods require the assumption that the latent causal processes do not have instantaneous relations. Although some recent methods achieve identifiability in the instantaneous causality case, they require either interventions on the latent variables or grouping of the observations, which are in general difficult to obtain in real-world scenarios. To fill this gap, we propose an \textbf{ID}entification framework for instantane\textbf{O}us \textbf{L}atent dynamics (\textbf{IDOL}) by imposing a sparse influence constraint that the latent causal processes have sparse time-delayed and instantaneous relations. Specifically, we establish identifiability results of the latent causal process based on sufficient variability and the sparse influence constraint by employing contextual information of time series data. Based on these theories, we incorporate a temporally variational inference architecture to estimate the latent variables and a gradient-based sparsity regularization to identify the latent causal process. Experimental results on simulation datasets illustrate that our method can identify the latent causal process. Furthermore, evaluations on multiple human motion forecasting benchmarks with instantaneous dependencies indicate the effectiveness of our method in real-world settings.
- oai:arXiv.org:2405.15325v3
+ Certified Defense on the Fairness of Graph Neural Networks
+ https://arxiv.org/abs/2311.02757
+ arXiv:2311.02757v3 Announce Type: replace-cross
+Abstract: Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks over the years. Nevertheless, due to the vulnerabilities of GNNs, it has been empirically shown that malicious attackers could easily corrupt the fairness level of their predictions by adding perturbations to the input graph data. In this paper, we take crucial steps to study a novel problem of certifiable defense on the fairness level of GNNs. Specifically, we propose a principled framework named ELEGANT and present a detailed theoretical certification analysis for the fairness of GNNs. ELEGANT takes {\em any} GNN as its backbone, and the fairness level of such a backbone is theoretically impossible to be corrupted under certain perturbation budgets for attackers. Notably, ELEGANT does not make any assumptions over the GNN structure or parameters, and does not require re-training the GNNs to realize certification. Hence it can serve as a plug-and-play framework for any optimized GNNs ready to be deployed. We verify the satisfactory effectiveness of ELEGANT in practice through extensive experiments on real-world datasets across different backbones of GNNs and parameter settings.
+ oai:arXiv.org:2311.02757v3
+ cs.LG
+ cs.CR
+ stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zijian Li, Yifan Shen, Kaitao Zheng, Ruichu Cai, Xiangchen Song, Mingming Gong, Guangyi Chen, Kun Zhang
+ http://creativecommons.org/licenses/by/4.0/
+ Yushun Dong, Binchi Zhang, Hanghang Tong, Jundong Li
- Low-Rank Filtering and Smoothing for Sequential Deep Learning
- https://arxiv.org/abs/2410.06800
- arXiv:2410.06800v2 Announce Type: replace-cross
-Abstract: Learning multiple tasks sequentially requires neural networks to balance retaining knowledge, yet being flexible enough to adapt to new tasks. Regularizing network parameters is a common approach, but it rarely incorporates prior knowledge about task relationships, and limits information flow to future tasks only. We propose a Bayesian framework that treats the network's parameters as the state space of a nonlinear Gaussian model, unlocking two key capabilities: (1) A principled way to encode domain knowledge about task relationships, allowing, e.g., control over which layers should adapt between tasks. (2) A novel application of Bayesian smoothing, allowing task-specific models to also incorporate knowledge from models learned later. This does not require direct access to their data, which is crucial, e.g., for privacy-critical applications. These capabilities rely on efficient filtering and smoothing operations, for which we propose diagonal plus low-rank approximations of the precision matrix in the Laplace approximation (LR-LGF). Empirical results demonstrate the efficiency of LR-LGF and the benefits of the unlocked capabilities.
- oai:arXiv.org:2410.06800v2
+ Variance Reduction and Low Sample Complexity in Stochastic Optimization via Proximal Point Method
+ https://arxiv.org/abs/2402.08992
+ arXiv:2402.08992v3 Announce Type: replace-cross
+Abstract: High-probability guarantees in stochastic optimization are often obtained only under strong noise assumptions such as sub-Gaussian tails. We show that such guarantees can also be achieved under the weaker assumption of bounded variance by developing a stochastic proximal point method. This method combines a proximal subproblem solver, which inherently reduces variance, with a probability booster that amplifies per-iteration reliability into high-confidence results. The analysis demonstrates convergence with low sample complexity, without restrictive noise assumptions or reliance on mini-batching.
+ oai:arXiv.org:2402.08992v3
+ math.OC
+ cs.LG
+ stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
- Joanna Sliwa, Frank Schneider, Nathanael Bosch, Agustinus Kristiadi, Philipp Hennig
+ Jiaming Liang
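For readers unfamiliar with the proximal subproblem mentioned in the abstract above, here is a minimal Python sketch of a stochastic proximal point iteration on mini-batch least squares (illustrative step size, batch size, and dimensions; only the inner prox step, not the paper's probability-boosting scheme), where each update solves the regularized subproblem in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eta, batch = 2000, 20, 0.5, 32
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)

x = np.zeros(d)
for _ in range(500):
    S = rng.choice(n, batch, replace=False)
    A_S, b_S = A[S], b[S]
    # Closed-form prox step: argmin_z 0.5*||A_S z - b_S||^2 + ||z - x||^2 / (2*eta)
    x = np.linalg.solve(A_S.T @ A_S + np.eye(d) / eta, A_S.T @ b_S + x / eta)

print("distance to the true coefficient vector:", np.linalg.norm(x - x_true))
```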
- Fairness via Independence: A (Conditional) Distance Covariance Framework
- https://arxiv.org/abs/2412.00720
- arXiv:2412.00720v2 Announce Type: replace-cross
-Abstract: We explore fairness from a statistical perspective by selectively utilizing either conditional distance covariance or distance covariance statistics as measures to assess the independence between predictions and sensitive attributes. We boost fairness with independence by adding a distance covariance-based penalty to the model's training. Additionally, we present the matrix form of empirical (conditional) distance covariance for parallel calculations to enhance computational efficiency. Theoretically, we provide a proof for the convergence between empirical and population (conditional) distance covariance, establishing necessary guarantees for batch computations. Through experiments conducted on a range of real-world datasets, we have demonstrated that our method effectively bridges the fairness gap in machine learning. Our code is available at \url{https://github.com/liuhaixias1/Fair_dc/}.
- oai:arXiv.org:2412.00720v2
+ A Riemannian Optimization Perspective of the Gauss-Newton Method for Feedforward Neural Networks
+ https://arxiv.org/abs/2412.14031
+ arXiv:2412.14031v5 Announce Type: replace-cross
+Abstract: In this work, we establish non-asymptotic convergence bounds for the Gauss-Newton method in training neural networks with smooth activations. In the underparameterized regime, the Gauss-Newton gradient flow in parameter space induces a Riemannian gradient flow on a low-dimensional embedded submanifold of the function space. Using tools from Riemannian optimization, we establish geodesic Polyak-Lojasiewicz and Lipschitz-smoothness conditions for the loss under appropriately chosen output scaling, yielding geometric convergence to the optimal in-class predictor at an explicit rate independent of the conditioning of the Gram matrix. In the overparameterized regime, we propose adaptive, curvature-aware regularization schedules that ensure fast geometric convergence to a global optimum at a rate independent of the minimum eigenvalue of the neural tangent kernel and, locally, of the modulus of strong convexity of the loss. These results demonstrate that Gauss-Newton achieves accelerated convergence rates in settings where first-order methods exhibit slow convergence due to ill-conditioned kernel matrices and loss landscapes.
+ oai:arXiv.org:2412.14031v5
+ math.OC
+ cs.AI
+ cs.LG
- cs.CY
+ cs.SY
+ eess.SY
+ stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ruifan Huang, Haixia Liu
+ Semih Cayci
- Towards Human-Guided, Data-Centric LLM Co-Pilots
- https://arxiv.org/abs/2501.10321
- arXiv:2501.10321v3 Announce Type: replace-cross
-Abstract: Machine learning (ML) has the potential to revolutionize various domains, but its adoption is often hindered by the disconnect between the needs of domain experts and translating these needs into robust and valid ML tools. Despite recent advances in LLM-based co-pilots to democratize ML for non-technical domain experts, these systems remain predominantly focused on model-centric aspects while overlooking critical data-centric challenges. This limitation is problematic in complex real-world settings where raw data often contains complex issues, such as missing values, label noise, and domain-specific nuances requiring tailored handling. To address this we introduce CliMB-DC, a human-guided, data-centric framework for LLM co-pilots that combines advanced data-centric tools with LLM-driven reasoning to enable robust, context-aware data processing. At its core, CliMB-DC introduces a novel, multi-agent reasoning system that combines a strategic coordinator for dynamic planning and adaptation with a specialized worker agent for precise execution. Domain expertise is then systematically incorporated to guide the reasoning process using a human-in-the-loop approach. To guide development, we formalize a taxonomy of key data-centric challenges that co-pilots must address. Thereafter, to address the dimensions of the taxonomy, we integrate state-of-the-art data-centric tools into an extensible, open-source architecture, facilitating the addition of new tools from the research community. Empirically, using real-world healthcare datasets we demonstrate CliMB-DC's ability to transform uncurated datasets into ML-ready formats, significantly outperforming existing co-pilot baselines for handling data-centric challenges. CliMB-DC promises to empower domain experts from diverse domains -- healthcare, finance, social sciences and more -- to actively participate in driving real-world impact using ML.
- oai:arXiv.org:2501.10321v3
+ Keep It Light! Simplifying Image Clustering Via Text-Free Adapters
+ https://arxiv.org/abs/2502.04226
+ arXiv:2502.04226v2 Announce Type: replace-cross
+Abstract: In the era of pre-trained models, effective classification can often be achieved using simple linear probing or lightweight readout layers. In contrast, many competitive clustering pipelines have a multi-modal design, leveraging large language models (LLMs) or other text encoders, and text-image pairs, which are often unavailable in real-world downstream applications. Additionally, such frameworks are generally complicated to train and require substantial computational resources, making widespread adoption challenging. In this work, we show that in deep clustering, competitive performance with more complex state-of-the-art methods can be achieved using a text-free and highly simplified training pipeline. In particular, our approach, Simple Clustering via Pre-trained models (SCP), trains only a small cluster head while leveraging pre-trained vision model feature representations and positive data pairs. Experiments on benchmark datasets, including CIFAR-10, CIFAR-20, CIFAR-100, STL-10, ImageNet-10, and ImageNet-Dogs, demonstrate that SCP achieves highly competitive performance. Furthermore, we provide a theoretical result explaining why, at least under ideal conditions, additional text-based embeddings may not be necessary to achieve strong clustering performance in vision.
+ oai:arXiv.org:2502.04226v2
+ cs.CV
+ cs.LG
+ cs.NE
+ stat.CO
+ stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
- Evgeny Saveliev, Jiashuo Liu, Nabeel Seedat, Anders Boyd, Mihaela van der Schaar
+ Yicen Li, Haitz S\'aez de Oc\'ariz Borde, Anastasis Kratsios, Paul D. McNicholas
- Saturation-Aware Snapshot Compressive Imaging: Theory and Algorithm
- https://arxiv.org/abs/2501.11869
- arXiv:2501.11869v3 Announce Type: replace-cross
-Abstract: Snapshot Compressive Imaging (SCI) uses coded masks to compress a 3D data cube into a single 2D snapshot. In practice, multiplexing can push intensities beyond the sensor's dynamic range, producing saturation that violates the linear SCI model and degrades reconstruction. This paper provides the first theoretical characterization of SCI recovery under saturation. We model clipping as an element-wise nonlinearity and derive a finite-sample recovery bound for compression-based SCI that links reconstruction error to mask density and the extent of saturation. The analysis yields a clear design rule: optimal Bernoulli masks use densities below one-half, decreasing further as saturation strengthens. Guided by this principle, we optimize mask patterns and introduce a novel reconstruction framework, Saturation-Aware PnP Net (SAPnet), which explicitly enforces consistency with saturated measurements. Experiments on standard video-SCI benchmarks confirm our theory and demonstrate that SAPnet significantly outperforms existing PnP-based methods.
- oai:arXiv.org:2501.11869v3
- eess.IV
- cs.IT
- math.IT
- stat.AP
- Mon, 22 Dec 2025 00:00:00 -0500
+ Variational Online Mirror Descent for Robust Learning in Schr\"odinger Bridge
+ https://arxiv.org/abs/2504.02618
+ arXiv:2504.02618v4 Announce Type: replace-cross
+Abstract: The Schr\"{o}dinger bridge (SB) has evolved into a universal class of probabilistic generative models. In practice, however, estimated learning signals are innately uncertain, and the reliability promised by existing methods is often based on speculative optimal case scenarios. Recent studies regarding the Sinkhorn algorithm through mirror descent (MD) have gained attention, revealing geometric insights into solution acquisition of the SB problems. In this paper, we propose a variational online MD (OMD) framework for the SB problems, which provides further stability to SB solvers. We formally prove convergence and a regret bound for the novel OMD formulation of SB acquisition. As a result, we propose a simulation-free SB algorithm called Variational Mirrored Schr\"{o}dinger Bridge (VMSB) by utilizing the Wasserstein-Fisher-Rao geometry of the Gaussian mixture parameterization for Schr\"{o}dinger potentials. Based on the Wasserstein gradient flow theory, the algorithm offers tractable learning dynamics that precisely approximate each OMD step. In experiments, we validate the performance of the proposed VMSB algorithm across an extensive suite of benchmarks. VMSB consistently outperforms contemporary SB solvers on a wide range of SB problems, demonstrating the robustness as well as generality predicted by our OMD theory.
+ oai:arXiv.org:2504.02618v4
+ cs.LG
+ stat.ML
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
- Mengyu Zhao, Shirin Jalali
+ Dong-Sig Han, Jaein Kim, Hee Bin Yoo, Byoung-Tak Zhang
- Regularized Langevin Dynamics for Combinatorial Optimization
- https://arxiv.org/abs/2502.00277
- arXiv:2502.00277v4 Announce Type: replace-cross
-Abstract: This work proposes a simple yet effective sampling framework for combinatorial optimization (CO). Our method builds on discrete Langevin dynamics (LD), an efficient gradient-guided generative paradigm. However, we observe that directly applying LD often leads to limited exploration. To overcome this limitation, we propose the Regularized Langevin Dynamics (RLD), which enforces an expected distance between the sampled and current solutions, effectively avoiding local minima. We develop two CO solvers on top of RLD, one based on simulated annealing (SA), and the other one based on neural network (NN). Empirical results on three classic CO problems demonstrate that both of our methods can achieve comparable or better performance against the previous state-of-the-art (SOTA) SA- and NN-based solvers. In particular, our SA algorithm reduces the runtime of the previous SOTA SA method by up to 80\%, while achieving equal or superior performance. In summary, RLD offers a promising framework for enhancing both traditional heuristics and NN models to solve CO problems. Our code is available at https://github.com/Shengyu-Feng/RLD4CO.
- oai:arXiv.org:2502.00277v4
+ Stochastic Optimization with Optimal Importance Sampling
+ https://arxiv.org/abs/2504.03560
+ arXiv:2504.03560v2 Announce Type: replace-cross
+Abstract: Importance Sampling (IS) is a widely used variance reduction technique for enhancing the efficiency of Monte Carlo methods, particularly in rare-event simulation and related applications. Despite its effectiveness, the performance of IS is highly sensitive to the choice of the proposal distribution and often requires stochastic calibration. While the design and analysis of IS have been extensively studied in estimation settings, applying IS within stochastic optimization introduces a fundamental challenge: the decision variable and the importance sampling distribution are mutually dependent, creating a circular optimization structure. This interdependence complicates both convergence analysis and variance control. We consider convex stochastic optimization problems with linear constraints and propose a single-loop stochastic approximation algorithm, based on a joint variant of Nesterov's dual averaging, that jointly updates the decision variable and the importance sampling distribution, without time-scale separation or nested optimization. The method is globally convergent and achieves minimal asymptotic variance among stochastic gradient schemes, matching the performance of an oracle sampler adapted to the optimal solution.
+ oai:arXiv.org:2504.03560v2
+ math.OC
+ cs.LG
+ math.ST
+ stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- International conference on machine learning, 2025, PMLR
- Shengyu Feng, Yiming Yang
+ http://creativecommons.org/licenses/by/4.0/
+ Liviu Aolaritei, Bart P. G. Van Parys, Henry Lam, Michael I. Jordan
- On Agnostic PAC Learning in the Small Error Regime
- https://arxiv.org/abs/2502.09496
- arXiv:2502.09496v2 Announce Type: replace-cross
-Abstract: Binary classification in the classic PAC model exhibits a curious phenomenon: Empirical Risk Minimization (ERM) learners are suboptimal in the realizable case yet optimal in the agnostic case. Roughly speaking, this owes itself to the fact that non-realizable distributions $\mathcal{D}$ are simply more difficult to learn than realizable distributions -- even when one discounts a learner's error by $\mathrm{err}(h^*_{\mathcal{D}})$, the error of the best hypothesis in $\mathcal{H}$ for $\mathcal{D}$. Thus, optimal agnostic learners are permitted to incur excess error on (easier-to-learn) distributions $\mathcal{D}$ for which $\tau = \mathrm{err}(h^*_{\mathcal{D}})$ is small.
- Recent work of Hanneke, Larsen, and Zhivotovskiy (FOCS `24) addresses this shortcoming by including $\tau$ itself as a parameter in the agnostic error term. In this more fine-grained model, they demonstrate tightness of the error lower bound $\tau + \Omega \left(\sqrt{\frac{\tau (d + \log(1 / \delta))}{m}} + \frac{d + \log(1 / \delta)}{m} \right)$ in a regime where $\tau > d/m$, and leave open the question of whether there may be a higher lower bound when $\tau \approx d/m$, with $d$ denoting $\mathrm{VC}(\mathcal{H})$. In this work, we resolve this question by exhibiting a learner which achieves error $c \cdot \tau + O \left(\sqrt{\frac{\tau (d + \log(1 / \delta))}{m}} + \frac{d + \log(1 / \delta)}{m} \right)$ for a constant $c \leq 2.1$, thus matching the lower bound when $\tau \approx d/m$. Further, our learner is computationally efficient and is based upon careful aggregations of ERM classifiers, making progress on two other questions of Hanneke, Larsen, and Zhivotovskiy (FOCS `24). We leave open the interesting question of whether our approach can be refined to lower the constant from 2.1 to 1, which would completely settle the complexity of agnostic learning.
- oai:arXiv.org:2502.09496v2
+ Reliable Decision Support with LLMs: A Framework for Evaluating Consistency in Binary Text Classification Applications
+ https://arxiv.org/abs/2505.14918
+ arXiv:2505.14918v2 Announce Type: replace-cross
+Abstract: This study introduces a framework for evaluating consistency in large language model (LLM) binary text classification, addressing the lack of established reliability assessment methods. Adapting psychometric principles, we determine sample size requirements, develop metrics for invalid responses, and evaluate intra- and inter-rater reliability. Our case study examines financial news sentiment classification across 14 LLMs (including claude-3-7-sonnet, gpt-4o, deepseek-r1, gemma3, llama3.2, phi4, and command-r-plus), with five replicates per model on 1,350 articles. Models demonstrated high intra-rater consistency, achieving perfect agreement on 90-98% of examples, with minimal differences between expensive and economical models from the same families. When validated against StockNewsAPI labels, models achieved strong performance (accuracy 0.76-0.88), with smaller models like gemma3:1B, llama3.2:3B, and claude-3-5-haiku outperforming larger counterparts. All models performed at chance when predicting actual market movements, indicating task constraints rather than model limitations. Our framework provides systematic guidance for LLM selection, sample size planning, and reliability assessment, enabling organizations to optimize resources for classification tasks.
+ oai:arXiv.org:2505.14918v2
+ cs.CL
+ cs.LG
+ stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
- Julian Asilis, Mikael M{\o}ller H{\o}gsgaard, Grigoris Velegkas
+ Fadel M. Megahed, Ying-Ju Chen, L. Allision Jones-Farmer, Younghwa Lee, Jiawei Brooke Wang, Inez M. Zwetsloot
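As a purely illustrative sketch of the intra-rater consistency metric reported above (Python, with synthetic stand-in labels and an assumed 3% per-replicate flip rate; the actual framework scores real LLM outputs):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 1,350 articles x 5 replicate runs of one model, binary labels,
# where each individual replicate flips with an assumed 3% probability.
base = rng.integers(0, 2, size=(1350, 1))
replicates = np.repeat(base, 5, axis=1)
flips = rng.random(replicates.shape) < 0.03
replicates = np.where(flips, 1 - replicates, replicates)

# Intra-rater consistency: fraction of articles on which all five replicates agree.
perfect = (replicates == replicates[:, :1]).all(axis=1)
print(f"perfect agreement across replicates: {perfect.mean():.1%}")
```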
- Non-parametric kernel density estimation of magnitude distribution for the analysis of seismic hazard posed by anthropogenic seismicity
- https://arxiv.org/abs/2503.04393
- arXiv:2503.04393v2 Announce Type: replace-cross
-Abstract: Frequent significant deviations of the observed magnitude distribution of anthropogenic seismicity from the Gutenberg-Richter relation require alternative estimation methods for probabilistic seismic hazard assessments. We evaluate five nonparametric kernel density estimation (KDE) methods on simulated samples drawn from four magnitude distribution models: the exponential, concave and convex bi-exponential, and exponential-Gaussian distributions. The latter three represent deviations from the Gutenberg-Richter relation due to the finite thickness of the seismogenic crust and the effect of characteristic earthquakes. The assumed deviations from exponentiality are never more than those met in practice. The studied KDE methods include Silverman's and Scott's rules with Abramson's bandwidth adaptation, two diffusion-based methods (ISJ and diffKDE), and adaptiveKDE, which formulates the bandwidth estimation as an optimization problem. We assess their performance for magnitudes from 2 to 6 with sample sizes of 400 to 5000, using the mean integrated square error (MISE) over 100,000 simulations. Their suitability in hazard assessments is illustrated by the mean of the mean return period (MRP) for a sample size of 1000. Among the tested methods, diffKDE provides the most accurate cumulative distribution function estimates for larger magnitudes. Even when the data is drawn from an exponential distribution, diffKDE performs comparably to maximum likelihood estimation when the sample size is at least 1000. Given that anthropogenic seismicity often deviates from the exponential model, we recommend using diffKDE for probabilistic seismic hazard assessments whenever a sufficient sample size is available.
- oai:arXiv.org:2503.04393v2
- physics.geo-ph
- stat.AP
- Mon, 22 Dec 2025 00:00:00 -0500
+ Attractor learning for spatiotemporally chaotic dynamical systems using echo state networks with transfer learning
+ https://arxiv.org/abs/2505.24099
+ arXiv:2505.24099v2 Announce Type: replace-cross
+Abstract: In this paper, we explore the predictive capabilities of echo state networks (ESNs) for the generalized Kuramoto-Sivashinsky (gKS) equation, an archetypal nonlinear PDE that exhibits spatiotemporal chaos. Our research focuses on predicting changes in long-term statistical patterns of the gKS model that result from varying the dispersion relation or the length of the spatial domain. We use transfer learning to adapt ESNs to different parameter settings and successfully capture changes in the underlying chaotic attractor. Previous work has shown that transfer learning can be used effectively with ESNs for single-orbit prediction. The novelty of our paper lies in our use of this pairing to predict the long-term statistical properties of spatiotemporally chaotic PDEs. We also show that transfer learning nontrivially improves the length of time that predictions of individual gKS trajectories remain accurate.
+ oai:arXiv.org:2505.24099v2
+ math.DS
+ cs.AI
+ cs.LG
+ nlin.CD
+ stat.ML
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
- http://creativecommons.org/licenses/by/4.0/
- 10.1007/s11600-025-01762-8
- Francis Tong, Stanis{\l}aw Lasocki, Beata Orlecka-Sikora
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mohammad Shah Alam, William Ott, Ilya Timofeyev
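For context, a minimal echo state network in Python (assumed hyperparameters and a toy scalar signal, not the paper's gKS setup): a fixed random reservoir is driven by the input, and only a linear readout is fit by ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, rho, leak, ridge = 300, 0.9, 0.5, 1e-6

w_in = rng.uniform(-0.5, 0.5, n_res)                   # input weights (scalar input)
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))        # rescale to spectral radius rho

def run_reservoir(u):
    """Leaky-integrator reservoir states for an input sequence u."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.asarray(states)

# Train the linear readout to predict the next value of a noisy toy signal.
u = np.sin(0.3 * np.arange(2000)) + 0.1 * rng.normal(size=2000)
X, y = run_reservoir(u[:-1]), u[1:]
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("one-step training RMSE:", np.sqrt(np.mean((X @ w_out - y) ** 2)))
```

In a transfer-learning setting of the kind described above, one would typically keep the random reservoir fixed and refit, or warm-start, the readout on data from the new parameter regime.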
- Sample, Don't Search: Rethinking Test-Time Alignment for Language Models
- https://arxiv.org/abs/2504.03790
- arXiv:2504.03790v2 Announce Type: replace-cross
-Abstract: Increasing test-time computation has emerged as a promising direction for improving language model performance, particularly in scenarios where model finetuning is impractical or impossible due to computational constraints or private model weights. However, existing test-time search methods using a reward model (RM) often degrade in quality as compute scales, due to the over-optimization of what are inherently imperfect reward proxies. We introduce QAlign, a new test-time alignment approach. As we scale test-time compute, QAlign converges to sampling from the optimal aligned distribution for each individual prompt. By adopting recent advances in Markov chain Monte Carlo for text generation, our method enables better-aligned outputs without modifying the underlying model or even requiring logit access. We demonstrate the effectiveness of QAlign on mathematical reasoning benchmarks (GSM8K and GSM-Symbolic) using a task-specific RM, showing consistent improvements over existing test-time compute methods like best-of-n and majority voting. Furthermore, when applied with more realistic RMs trained on the Tulu 3 preference dataset, QAlign outperforms direct preference optimization (DPO), best-of-n, majority voting, and weighted majority voting on a diverse range of datasets (GSM8K, MATH500, IFEval, MMLU-Redux, and TruthfulQA). A practical solution to aligning language models at test time using additional computation without degradation, our approach expands the limits of the capability that can be obtained from off-the-shelf language models without further training.
- oai:arXiv.org:2504.03790v2
- cs.CL
- cs.LG
- stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ NA-DiD: Extending Difference-in-Differences with Capabilities
+ https://arxiv.org/abs/2507.12690
+ arXiv:2507.12690v2 Announce Type: replace-cross
+Abstract: This paper introduces the Non-Additive Difference-in-Differences (NA-DiD) framework, which extends classical DiD by incorporating non-additive measures, namely the Choquet integral, for effect aggregation. It serves as a novel econometric tool for impact evaluation, particularly in settings with non-additive treatment effects. First, we introduce the integral representation of the classical DiD model, and then extend it to non-additive measures, thereby deriving the formulae for NA-DiD estimation. Then, we give its theoretical properties. Applying NA-DiD to a simulated hospital hygiene intervention, we find that classical DiD can overestimate treatment effects, for example by failing to account for compliance erosion. In contrast, NA-DiD provides a more accurate estimate by incorporating non-linear aggregation. The Julia implementation of the techniques used and introduced in this article is provided in the appendices.
+ oai:arXiv.org:2507.12690v2
+ econ.EM
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
- Gon\c{c}alo Faria, Noah A. Smith
+ Stanis{\l}aw M. S. Halkiewicz
- A Certified Unlearning Approach without Access to Source Data
- https://arxiv.org/abs/2506.06486
- arXiv:2506.06486v3 Announce Type: replace-cross
-Abstract: With the growing adoption of data privacy regulations, the ability to erase private or copyrighted information from trained models has become a crucial requirement. Traditional unlearning methods often assume access to the complete training dataset, which is unrealistic in scenarios where the source data is no longer available. To address this challenge, we propose a certified unlearning framework that enables effective data removal without access to the original training data samples. Our approach utilizes a surrogate dataset that approximates the statistical properties of the source data, allowing for controlled noise scaling based on the statistical distance between the two. While our theoretical guarantees assume knowledge of the exact statistical distance, practical implementations typically approximate this distance, resulting in potentially weaker but still meaningful privacy guarantees. This ensures strong guarantees on the model's behavior post-unlearning while maintaining its overall utility. We establish theoretical bounds, introduce practical noise calibration techniques, and validate our method through extensive experiments on both synthetic and real-world datasets. The results demonstrate the effectiveness and reliability of our approach in privacy-sensitive settings.
- oai:arXiv.org:2506.06486v3
+ Data-driven particle dynamics: Structure-preserving coarse-graining for emergent behavior in non-equilibrium systems
+ https://arxiv.org/abs/2508.12569
+ arXiv:2508.12569v2 Announce Type: replace-cross
+Abstract: Multiscale systems are ubiquitous in science and technology, but are notoriously challenging to simulate as short spatiotemporal scales must be appropriately linked to emergent bulk physics. When expensive high-dimensional dynamical systems are coarse-grained into low-dimensional models, the entropic loss of information leads to emergent physics which are dissipative, history-dependent, and stochastic. To machine learn coarse-grained dynamics from time-series observations of particle trajectories, we propose a framework using the metriplectic bracket formalism that preserves these properties by construction; most notably, the framework guarantees discrete notions of the first and second laws of thermodynamics, conservation of momentum, and a discrete fluctuation-dissipation balance crucial for capturing non-equilibrium statistics. We introduce the mathematical framework abstractly before specializing to a particle discretization. As labels are generally unavailable for entropic state variables, we introduce a novel self-supervised learning strategy to identify emergent structural variables. We validate the method on benchmark systems and demonstrate its utility on two challenging examples: (1) coarse-graining star polymers at challenging levels of coarse-graining while preserving non-equilibrium statistics, and (2) learning models from high-speed video of colloidal suspensions that capture coupling between local rearrangement events and emergent stochastic dynamics. We provide open-source implementations in both PyTorch and LAMMPS, enabling large-scale inference and extensibility to diverse particle-based systems.
+ oai:arXiv.org:2508.12569v2
+ cs.LG
- cs.CR
+ cs.CE
+ physics.comp-ph
+ stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Umit Yigit Basaran, Sk Miraj Ahmed, Amit Roy-Chowdhury, Basak Guler
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Quercus Hernandez, Max Win, Thomas C. O'Connor, Paulo E. Arratia, Nathaniel Trask
- A Generic Machine Learning Framework for Radio Frequency Fingerprinting
- https://arxiv.org/abs/2510.09775
- arXiv:2510.09775v3 Announce Type: replace-cross
-Abstract: Fingerprinting radio frequency (RF) emitters typically involves finding unique characteristics that are featured in their received signal. These fingerprints are nuanced, but sufficiently detailed, motivating the pursuit of methods that can successfully extract them. The downstream task that requires the most meticulous RF fingerprinting (RFF) is known as specific emitter identification (SEI), which entails recognising each individual transmitter. RFF and SEI have a long history, with numerous defence and civilian applications such as signal intelligence, electronic surveillance, and physical-layer authentication of wireless devices, to name a few. In recent years, data-driven RFF approaches have become popular due to their ability to automatically learn intricate fingerprints. They generally deliver superior performance when compared to traditional RFF techniques that are often labour-intensive, inflexible, and only applicable to a particular emitter type or transmission scheme. In this paper, we present a generic and versatile machine learning (ML) framework for data-driven RFF with several popular downstream tasks such as SEI, data association (EDA) and RF emitter clustering (RFEC). It is emitter-type agnostic. We then demonstrate the introduced framework for several tasks using real RF datasets for spaceborne surveillance, signal intelligence and counter-drone applications.
- oai:arXiv.org:2510.09775v3
+ GenUQ: Predictive Uncertainty Estimates via Generative Hyper-Networks
+ https://arxiv.org/abs/2509.21605
+ arXiv:2509.21605v2 Announce Type: replace-cross
+Abstract: Operator learning is a recently developed generalization of regression to mappings between functions. It promises to drastically reduce expensive numerical integration of PDEs to fast evaluations of mappings between functional states of a system, i.e., surrogate and reduced-order modeling. Operator learning has already found applications in several areas such as modeling sea ice, combustion, and atmospheric physics. Recent approaches towards integrating uncertainty quantification into the operator models have relied on likelihood-based methods to infer parameter distributions from noisy data. However, stochastic operators may yield actions from which a likelihood is difficult or impossible to construct. In this paper, we introduce GenUQ, a measure-theoretic approach to UQ that avoids constructing a likelihood by introducing a generative hyper-network model that produces parameter distributions consistent with observed data. We demonstrate that GenUQ outperforms other UQ methods in three example problems, recovering a manufactured operator, learning the solution operator to a stochastic elliptic PDE, and modeling the failure location of porous steel under tension.
+ oai:arXiv.org:2509.21605v2
+ cs.LG
- cs.CR
+ cs.NA
+ math.NA
+ stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Alex Hiles, Bashar I. Ahmad
+ http://creativecommons.org/licenses/by/4.0/
+ Tian Yu Yen, Reese E. Jones, Ravi G. Patel
- Towards Causal Market Simulators
- https://arxiv.org/abs/2511.04469
- arXiv:2511.04469v3 Announce Type: replace-cross
-Abstract: Market generators using deep generative models have shown promise for synthetic financial data generation, but existing approaches lack causal reasoning capabilities essential for counterfactual analysis and risk assessment. We propose a Time-series Neural Causal Model VAE (TNCM-VAE) that combines variational autoencoders with structural causal models to generate counterfactual financial time series while preserving both temporal dependencies and causal relationships. Our approach enforces causal constraints through directed acyclic graphs in the decoder architecture and employs the causal Wasserstein distance for training. We validate our method on synthetic autoregressive models inspired by the Ornstein-Uhlenbeck process, demonstrating superior performance in counterfactual probability estimation with L1 distances as low as 0.03-0.10 compared to ground truth. The model enables financial stress testing, scenario analysis, and enhanced backtesting by generating plausible counterfactual market trajectories that respect underlying causal mechanisms.
- oai:arXiv.org:2511.04469v3
+ A Single-Loop First-Order Algorithm for Linearly Constrained Bilevel Optimization
+ https://arxiv.org/abs/2510.24710
+ arXiv:2510.24710v2 Announce Type: replace-cross
+Abstract: We study bilevel optimization problems where the lower-level problems are strongly convex and have coupled linear constraints. To overcome the potential non-smoothness of the hyper-objective and the computational challenges associated with the Hessian matrix, we utilize penalty and augmented Lagrangian methods to reformulate the original problem as a single-level one. Especially, we establish a strong theoretical connection between the reformulated function and the original hyper-objective by characterizing the closeness of their values and derivatives. Based on this reformulation, we propose a single-loop, first-order algorithm for linearly constrained bilevel optimization (SFLCB). We provide rigorous analyses of its non-asymptotic convergence rates, showing an improvement over prior double-loop algorithms -- from $O(\epsilon^{-3}\log(\epsilon^{-1}))$ to $O(\epsilon^{-3})$. The experiments corroborate our theoretical findings and demonstrate the practical efficiency of the proposed SFLCB algorithm. Simulation code is provided at https://github.com/ShenGroup/SFLCB.
+ oai:arXiv.org:2510.24710v2
+ math.OC
+ cs.IT
+ cs.LG
- q-fin.CP
- stat.OT
- Mon, 22 Dec 2025 00:00:00 -0500
+ math.IT
+ stat.ML
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
- Dennis Thumm, Luis Ontaneda Mijares
+ Wei Shen, Jiawei Zhang, Minhui Huang, Cong Shen
- Incremental Generation is Necessary and Sufficient for Universality in Flow-Based Modelling
- https://arxiv.org/abs/2511.09902
- arXiv:2511.09902v2 Announce Type: replace-cross
-Abstract: Incremental flow-based denoising models have reshaped generative modelling, but their empirical advantage still lacks a rigorous approximation-theoretic foundation. We show that incremental generation is necessary and sufficient for universal flow-based generation on the largest natural class of self-maps of $[0,1]^d$ compatible with denoising pipelines, namely the orientation-preserving homeomorphisms of $[0,1]^d$. All our guarantees are uniform on the underlying maps and hence imply approximation both samplewise and in distribution.
- Using a new topological-dynamical argument, we first prove an impossibility theorem: the class of all single-step autonomous flows, independently of the architecture, width, depth, or Lipschitz activation of the underlying neural network, is meagre and therefore not universal in the space of orientation-preserving homeomorphisms of $[0,1]^d$. By exploiting algebraic properties of autonomous flows, we conversely show that every orientation-preserving Lipschitz homeomorphism on $[0,1]^d$ can be approximated at rate $O(n^{-1/d})$ by a composition of at most $K_d$ such flows, where $K_d$ depends only on the dimension. Under additional smoothness assumptions, the approximation rate can be made dimension-free, and $K_d$ can be chosen uniformly over the class being approximated. Finally, by linearly lifting the domain into one higher dimension, we obtain structured universal approximation results for continuous functions and for probability measures on $[0,1]^d$, the latter realized as pushforwards of empirical measures with vanishing $1$-Wasserstein error.
- oai:arXiv.org:2511.09902v2
- cs.LG
- cs.NA
- math.CA
- math.DS
- math.NA
+ Nonasymptotic Convergence Rates for Plug-and-Play Methods With MMSE Denoisers
+ https://arxiv.org/abs/2510.27211
+ arXiv:2510.27211v4 Announce Type: replace-cross
+Abstract: It is known that the minimum-mean-squared-error (MMSE) denoiser under Gaussian noise can be written as a proximal operator, which suffices for asymptotic convergence of plug-and-play (PnP) methods but does not reveal the structure of the induced regularizer or give convergence rates. We show that the MMSE denoiser corresponds to a regularizer that can be written explicitly as an upper Moreau envelope of the negative log-marginal density, which in turn implies that the regularizer is 1-weakly convex. Using this property, we derive (to the best of our knowledge) the first sublinear convergence guarantee for PnP proximal gradient descent with an MMSE denoiser. We validate the theory with a one-dimensional synthetic study that recovers the implicit regularizer. We also validate the theory with imaging experiments (deblurring and computed tomography), which exhibit the predicted sublinear behavior.
+ oai:arXiv.org:2510.27211v4
+ math.OC
+ eess.SP
+ stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hossein Rouhvarzi, Anastasis Kratsios
+ Henry Pritchard, Rahul Parhi
- Look-Ahead Reasoning on Learning Platforms
- https://arxiv.org/abs/2511.14745
- arXiv:2511.14745v2 Announce Type: replace-cross
-Abstract: On many learning platforms, the optimization criteria guiding model training reflect the priorities of the designer rather than those of the individuals they affect. Consequently, users may act strategically to obtain more favorable outcomes. While past work has studied strategic user behavior on learning platforms, the focus has largely been on strategic responses to a deployed model, without considering the behavior of other users. In contrast, look-ahead reasoning takes into account that user actions are coupled, and -- at scale -- impact future predictions. Within this framework, we first formalize level-k thinking, a concept from behavioral economics, where users aim to outsmart their peers by looking one step ahead. We show that, while convergence to an equilibrium is accelerated, the equilibrium remains the same, providing no benefit of higher-level reasoning for individuals in the long run. Then, we focus on collective reasoning, where users take coordinated actions by optimizing through their joint impact on the model. By contrasting collective with selfish behavior, we characterize the benefits and limits of coordination; a new notion of alignment between the learner's and the users' utilities emerges as a key concept. Look-ahead reasoning can be seen as a generalization of algorithmic collective action; we thus offer the first results characterizing the utility trade-offs of coordination when contesting algorithmic systems.
- oai:arXiv.org:2511.14745v2
+ Nonparametric estimation of conditional probability distributions using a generative approach based on conditional push-forward neural networks
+ https://arxiv.org/abs/2511.14455
+ arXiv:2511.14455v3 Announce Type: replace-cross
+Abstract: We introduce conditional push-forward neural networks (CPFN), a generative framework for conditional distribution estimation. Instead of directly modeling the conditional density $f_{Y|X}$, CPFN learns a stochastic map $\varphi=\varphi(x,u)$ such that $\varphi(x,U)$ and $Y|X=x$ follow approximately the same law, with $U$ a suitable random vector of pre-defined latent variables. This enables efficient conditional sampling and straightforward estimation of conditional statistics through Monte Carlo methods. The model is trained via an objective function derived from a Kullback-Leibler formulation, without requiring invertibility or adversarial training. We establish a near-asymptotic consistency result and demonstrate experimentally that CPFN can achieve performance competitive with, or even superior to, state-of-the-art methods, including kernel estimators, tree-based algorithms, and popular deep learning techniques, all while remaining lightweight and easy to train.
+ oai:arXiv.org:2511.14455v3
+ cs.LG
- cs.GT
- stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ stat.ME
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Haiqing Zhu, Tijana Zrnic, Celestine Mendler-D\"unner
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Nicola Rares Franco, Lorenzo Tedesco
+ Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory
https://arxiv.org/abs/2511.23083
- arXiv:2511.23083v4 Announce Type: replace-cross
+ arXiv:2511.23083v5 Announce Type: replace-cross
Abstract: High-capacity kernel Hopfield networks exhibit a \textit{Ridge of Optimization} characterized by extreme stability. While previously linked to \textit{Spectral Concentration}, its origin remains elusive. Here, we analyze the network dynamics on a statistical manifold, revealing that the Ridge corresponds to the Edge of Stability, a critical boundary where the Fisher Information Matrix becomes singular. We demonstrate that the apparent Euclidean force antagonism is a manifestation of \textit{Dual Equilibrium} in the Riemannian space. This unifies learning dynamics and capacity via the Minimum Description Length principle, offering a geometric theory of self-organized criticality.
- oai:arXiv.org:2511.23083v4
+ oai:arXiv.org:2511.23083v5
+ cs.LG
+ cs.NE
+ stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Akira Tamamori
- Safeguarded Stochastic Polyak Step Sizes for Non-smooth Optimization: Robust Performance Without Small (Sub)Gradients
- https://arxiv.org/abs/2512.02342
- arXiv:2512.02342v2 Announce Type: replace-cross
-Abstract: The stochastic Polyak step size (SPS) has proven to be a promising choice for stochastic gradient descent (SGD), delivering competitive performance relative to state-of-the-art methods on smooth convex and non-convex optimization problems, including deep neural network training. However, extensions of this approach to non-smooth settings remain in their early stages, often relying on interpolation assumptions or requiring knowledge of the optimal solution. In this work, we propose a novel SPS variant, Safeguarded SPS (SPS$_{safe}$), for the stochastic subgradient method, and provide rigorous convergence guarantees for non-smooth convex optimization with no need for strong assumptions. We further incorporate momentum into the update rule, yielding equally tight theoretical results. On non-smooth convex benchmarks, our experiments are consistent with the theoretical predictions on how the safeguard affects the convergence neighborhood. On deep neural networks the proposed step size achieves competitive performance to existing adaptive baselines and exhibits stable behavior across a wide range of problem settings. Moreover, in these experiments, the gradient norms under our step size do not collapse to (near) zero, indicating robustness to vanishing gradients.
- oai:arXiv.org:2512.02342v2
+ Neural CDEs as Correctors for Learned Time Series Models
+ https://arxiv.org/abs/2512.12116
+ arXiv:2512.12116v2 Announce Type: replace-cross
+Abstract: Learned time-series models, whether continuous- or discrete-time, are widely used to forecast the states of a dynamical system. Such models generate multi-step forecasts either directly, by predicting the full horizon at once, or iteratively, by feeding back their own predictions at each step. In both cases, the multi-step forecasts are prone to errors. To address this, we propose a Predictor-Corrector mechanism where the Predictor is any learned time-series model and the Corrector is a neural controlled differential equation. The Predictor forecasts, and the Corrector predicts the errors of the forecasts. Adding these errors to the forecasts improves forecast performance. The proposed Corrector works with irregularly sampled time series and continuous- and discrete-time Predictors. Additionally, we introduce two regularization strategies to improve the extrapolation performance of the Corrector with accelerated training. We evaluate our Corrector with diverse Predictors, e.g., neural ordinary differential equations, Contiformer, and DLinear, on synthetic, physics simulation, and real-world forecasting datasets. The experiments demonstrate that the Predictor-Corrector mechanism consistently improves performance compared to the Predictor alone.
+ oai:arXiv.org:2512.12116v2
+ cs.LG
+ stat.ML
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Muhammad Bilal Shahid, Prajwal Koirla, Cody Fleming
+
+
+ Stopping Rules for Stochastic Gradient Descent via Anytime-Valid Confidence Sequences
+ https://arxiv.org/abs/2512.13123
+ arXiv:2512.13123v3 Announce Type: replace-cross
+Abstract: We study stopping rules for stochastic gradient descent (SGD) for convex optimization from the perspective of anytime-valid confidence sequences. Classical analyses of SGD provide convergence guarantees in expectation or at a fixed horizon, but offer no statistically valid way to assess, at an arbitrary time, how close the current iterate is to the optimum. We develop an anytime-valid, data-dependent upper confidence sequence for the weighted average suboptimality of projected SGD, constructed via nonnegative supermartingales and requiring no smoothness or strong convexity. This confidence sequence yields a simple stopping rule that is provably $\varepsilon$-optimal with probability at least $1-\alpha$, with explicit bounds on the stopping time under standard stochastic approximation stepsizes. To the best of our knowledge, these are the first rigorous, time-uniform performance guarantees and finite-time $\varepsilon$-optimality certificates for projected SGD with general convex objectives, based solely on observable trajectory quantities.
+ oai:arXiv.org:2512.13123v3
+ math.OC
+ cs.LG
+ math.ST
+ stat.ML
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Liviu Aolaritei, Michael I. Jordan
+
+
+ From Zipf's Law to Neural Scaling through Heaps' Law and Hilberg's Hypothesis
+ https://arxiv.org/abs/2512.13491
+ arXiv:2512.13491v2 Announce Type: replace-cross
+Abstract: We inspect the deductive connection between the neural scaling law and Zipf's law -- two statements discussed in machine learning and quantitative linguistics. The neural scaling law describes how the cross entropy rate of a foundation model -- such as a large language model -- changes with respect to the amount of training tokens, parameters, and compute. By contrast, Zipf's law posits that the distribution of tokens exhibits a power law tail. Whereas similar claims have been made in more specific settings, we show that the neural scaling law is a consequence of Zipf's law under certain broad assumptions that we reveal systematically. The derivation steps are as follows: We derive Heaps' law on the vocabulary growth from Zipf's law, Hilberg's hypothesis on the entropy scaling from Heaps' law, and the neural scaling from Hilberg's hypothesis. We illustrate these inference steps by a toy example of the Santa Fe process that satisfies all the four statistical laws.
+ oai:arXiv.org:2512.13491v2
+ cs.IT
+ cs.LG
+ math.IT
+ math.ST
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ {\L}ukasz D\k{e}bowski
+
+
+ Prospects for quantum advantage in machine learning from the representability of functions
+ https://arxiv.org/abs/2512.15661
+ arXiv:2512.15661v2 Announce Type: replace-cross
+Abstract: Demonstrating quantum advantage in machine learning tasks requires navigating a complex landscape of proposed models and algorithms. To bring clarity to this search, we introduce a framework that connects the structure of parametrized quantum circuits to the mathematical nature of the functions they can actually learn. Within this framework, we show how fundamental properties, like circuit depth and non-Clifford gate count, directly determine whether a model's output leads to efficient classical simulation or surrogation. We argue that this analysis uncovers common pathways to dequantization that underlie many existing simulation methods. More importantly, it reveals critical distinctions between models that are fully simulatable, those whose function space is classically tractable, and those that remain robustly quantum. This perspective provides a conceptual map of this landscape, clarifying how different models relate to classical simulability and pointing to where opportunities for quantum advantage may lie.
+ oai:arXiv.org:2512.15661v2
+ quant-ph
+ cs.LG
+ stat.ML
- Mon, 22 Dec 2025 00:00:00 -0500
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
- Dimitris Oikonomou, Nicolas Loizou
+ Sergi Masot-Llima, Elies Gil-Fuster, Carlos Bravo-Prieto, Jens Eisert, Tommaso Guaita
- Credit Risk Estimation with Non-Financial Features: Evidence from a Synthetic Istanbul Dataset
- https://arxiv.org/abs/2512.12783
- arXiv:2512.12783v2 Announce Type: replace-cross
-Abstract: Financial exclusion constrains entrepreneurship, increases income volatility, and widens wealth gaps. Underbanked consumers in Istanbul often have no bureau file because their earnings and payments flow through informal channels. To study how such borrowers can be evaluated we create a synthetic dataset of one hundred thousand Istanbul residents that reproduces first quarter 2025 T\"U\.IK census marginals and telecom usage patterns. Retrieval augmented generation feeds these public statistics into the OpenAI o3 model, which synthesises realistic yet private records. Each profile contains seven socio demographic variables and nine alternative attributes that describe phone specifications, online shopping rhythm, subscription spend, car ownership, monthly rent, and a credit card flag. To test the impact of the alternative financial data CatBoost, LightGBM, and XGBoost are each trained in two versions. Demo models use only the socio demographic variables; Full models include both socio demographic and alternative attributes. Across five fold stratified validation the alternative block raises area under the curve by about one point three percentage points and lifts balanced \(F_{1}\) from roughly 0.84 to 0.95, a fourteen percent gain. We contribute an open Istanbul 2025 Q1 synthetic dataset, a fully reproducible modeling pipeline, and empirical evidence that a concise set of behavioural attributes can approach bureau level discrimination power while serving borrowers who lack formal credit records. These findings give lenders and regulators a transparent blueprint for extending fair and safe credit access to the underbanked.
- oai:arXiv.org:2512.12783v2
+ Learning Confidence Ellipsoids and Applications to Robust Subspace Recovery
+ https://arxiv.org/abs/2512.16875
+ arXiv:2512.16875v2 Announce Type: replace-cross
+Abstract: We study the problem of finding confidence ellipsoids for an arbitrary distribution in high dimensions. Given samples from a distribution $D$ and a confidence parameter $\alpha$, the goal is to find the smallest volume ellipsoid $E$ which has probability mass $\Pr_{D}[E] \ge 1-\alpha$. Ellipsoids are a highly expressive class of confidence sets as they can capture correlations in the distribution, and can approximate any convex set. This problem has been studied in many different communities. In statistics, this is the classic minimum volume estimator introduced by Rousseeuw as a robust non-parametric estimator of location and scatter. However in high dimensions, it becomes NP-hard to obtain any non-trivial approximation factor in volume when the condition number $\beta$ of the ellipsoid (ratio of the largest to the smallest axis length) goes to $\infty$. This motivates the focus of our paper: can we efficiently find confidence ellipsoids with volume approximation guarantees when compared to ellipsoids of bounded condition number $\beta$?
+ Our main result is a polynomial time algorithm that finds an ellipsoid $E$ whose volume is within a $O(\beta)^{\gamma d}$ multiplicative factor of the volume of best $\beta$-conditioned ellipsoid while covering at least $1-O(\alpha/\gamma)$ probability mass for any $\gamma < \alpha$. We complement this with a computational hardness result that shows that such a dependence seems necessary up to constants in the exponent. The algorithm and analysis uses the rich primal-dual structure of the minimum volume enclosing ellipsoid and the geometric Brascamp-Lieb inequality. As a consequence, we obtain the first polynomial time algorithm with approximation guarantees on worst-case instances of the robust subspace recovery problem.
+ oai:arXiv.org:2512.16875v2
+ cs.DS
+ cs.LG
- q-fin.ST
- stat.AP
- Mon, 22 Dec 2025 00:00:00 -0500
+ math.ST
+ stat.ML
+ stat.TH
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Chao Gao, Liren Shan, Vaidehi Srinivas, Aravindan Vijayaraghavan
+
+
+ Alternating Direction Method of Multipliers for Nonlinear Matrix Decompositions
+ https://arxiv.org/abs/2512.17473
+ arXiv:2512.17473v2 Announce Type: replace-cross
+Abstract: We present an algorithm based on the alternating direction method of multipliers (ADMM) for solving nonlinear matrix decompositions (NMD). Given an input matrix $X \in \mathbb{R}^{m \times n}$ and a factorization rank $r \ll \min(m, n)$, NMD seeks matrices $W \in \mathbb{R}^{m \times r}$ and $H \in \mathbb{R}^{r \times n}$ such that $X \approx f(WH)$, where $f$ is an element-wise nonlinear function. We evaluate our method on several representative nonlinear models: the rectified linear unit activation $f(x) = \max(0, x)$, suitable for nonnegative sparse data approximation, the component-wise square $f(x) = x^2$, applicable to probabilistic circuit representation, and the MinMax transform $f(x) = \min(b, \max(a, x))$, relevant for recommender systems. The proposed framework flexibly supports diverse loss functions, including least squares, $\ell_1$ norm, and the Kullback-Leibler divergence, and can be readily extended to other nonlinearities and metrics. We illustrate the applicability, efficiency, and adaptability of the approach on real-world datasets, highlighting its potential for a broad range of applications.
+ oai:arXiv.org:2512.17473v2
+ eess.SP
+ cs.LG
+ math.OC
+ stat.ML
+ Tue, 23 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Atalay Denknalbant, Emre Sezdi, Zeki Furkan Kutlu, Polat Goktas
+ Atharva Awari, Nicolas Gillis, Arnaud Vandaele